recoll is a local search engine based on Xapian:
http://www.lesbonscomptes.com/recoll/
By itself recoll does not offer web or API access;
this can be achieved using recoll-webui:
https://framagit.org/medoc92/recollwebui.git
This engine uses a custom 'files' result template.
Set `base_url` to the location where recoll-webui can be reached.
Set `dl_prefix` to a location where the file hierarchy as indexed by recoll can be reached.
Set `search_dir` to the part of the indexed file hierarchy to be searched; use an empty string to search the entire search domain.
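For illustration, the three options map onto an engine entry like the following,
shown here as a Python dict that mirrors the settings.yml keys; the URLs are
placeholders, not defaults::

    recoll_engine = {
        'name': 'recoll',
        'engine': 'recoll',
        # where recoll-webui can be reached
        'base_url': 'https://recoll.example.org/',
        # where the file hierarchy indexed by recoll is served
        'dl_prefix': 'https://files.example.org',
        # '' searches the entire indexed hierarchy
        'search_dir': '',
    }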
This change is backward compatible with existing configurations.
If a settings.yml is loaded from a user-defined location (SEARX_SETTINGS_PATH or /etc/searx/settings.yml),
then these settings can rely on the default settings.yml with this option:
use_default_settings: True
DeviantArt's request and response formats have changed.
- fixed title
- fixed time_range_dict to 'popular-*-***'
- use image from <noscript> if it exists
- drop obsolete "http to https, remove domain sharding"
- use query URL https://www.deviantart.com/search/deviations?page=5&q=foo
- add searx/engines/deviantart.py to pylint check (test.pylint)
Error pattern::

    DEBUG:searx:result: invalid title: {'url': 'https://www.deviantart.com/ ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Use::

    from searx.engines.duckduckgo import _fetch_supported_languages, supported_languages_url  # NOQA

so it is possible to easily remove all unused imports using autoflake::

    autoflake --in-place --recursive --remove-all-unused-imports searx tests
* URL / : the index page displays the selected or the default category.
* URL / : when the q parameter is set in the URL, the redirect includes the URL query.
* URL /search : an empty query doesn't raise an exception.
This makes it easier to separately handle search and index requests
from a web server or from a reverse proxy.
If a request to index contains a query, a permanent redirect HTTP response
is returned. This should give some level of backwards compatibility
for users that have set a searx instance in their browser's search bar.
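A minimal sketch of that redirect, assuming Flask (which searx uses) and
illustrative handler names, not the literal searx code::

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route('/')
    def index():
        # a query on the index page gets a permanent redirect to /search,
        # keeping the full query string for backward compatibility
        if request.args.get('q'):
            return redirect('/search?' + request.query_string.decode(), code=301)
        return 'index page'

    @app.route('/search')
    def search():
        return 'results for %s' % request.args.get('q', '')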
Xpath engine and results template changed to account for the fact that
archive.org doesn't cache .onions, though some onion engines might have
their own cache.
Disabled by default. Can be enabled by setting the SOCKS proxies to
wherever Tor is listening and setting using_tor_proxy to True.
Requires Tor and updated packages.
To avoid manually adding the timeout on each engine, you can set
extra_proxy_timeout to account for Tor's (or whatever proxy is used) extra
time.
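A sketch of both settings in action, assuming the requests library with SOCKS
support (requests[socks]) and Tor's default SOCKS port 9050; names are
illustrative::

    import requests

    # socks5h:// lets the proxy resolve hostnames, which Tor requires
    proxies = {
        'http': 'socks5h://127.0.0.1:9050',
        'https': 'socks5h://127.0.0.1:9050',
    }

    EXTRA_PROXY_TIMEOUT = 10.0  # seconds added on top of each engine timeout

    def get_through_tor(url, engine_timeout):
        return requests.get(url, proxies=proxies,
                            timeout=engine_timeout + EXTRA_PROXY_TIMEOUT)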
- remove paging support: a "vqd" parameter is required between requests, and this parameter is unique to each request
- update the URL (no redirect), use the POST method
- language support: works if there is no more than one request per minute, otherwise it is ignored!
* Fix "?q=test&engines=wikipedia": no longer raises an exception
* Fix "?q=test&engines=wikipedia&categories=images": now the engines from images category are included.
* Fix parse_timeout: make sure a value is always returned
* Various typing fixes (searx.webadapter, searx.search.SearchQuery)
When the user adds searx as a search engine, the browser loads the /opensearch.xml URL without cookies.
Without the query parameters, the user preferences (method and autocomplete) are ignored.
In addition, opensearch.xml is modified to support automatic updates,
see https://developer.mozilla.org/en-US/docs/Web/OpenSearch
Always initialize the engines, except on the first run of werkzeug with the reload feature.
The reload feature is activated when:
* searx_debug is True (SEARX_DEBUG environment variable or settings.yml)
* FLASK_APP=searx/webapp.py FLASK_ENV=development flask run (see https://flask.palletsprojects.com/en/1.1.x/cli/ )
Fix SEARX_DEBUG=0 make docs
docs/admin/engines.rst : engines are initialized
See https://github.com/searx/searx/issues/2204#issuecomment-701373438
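One way to detect that first run is werkzeug's WERKZEUG_RUN_MAIN environment
variable, which is set only in the reloaded child process; a sketch of the
check, not necessarily the exact searx code::

    import os

    searx_debug = True  # stands in for the SEARX_DEBUG / settings.yml flag

    def should_initialize_engines():
        # with the reloader active, werkzeug runs the app twice; only the
        # reloaded child process has WERKZEUG_RUN_MAIN set to 'true'
        return not searx_debug or os.environ.get('WERKZEUG_RUN_MAIN') == 'true'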
Since October 1, 2020, Google has changed the 'class' attribute of the HTML
result page.
Fix the XPath expressions and ignore <div class="g" ../> sections which do not
match the title's XPath expression.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
requests 2.24.0 uses the ssl module, except if it doesn't support SNI; in that case searx falls back to pyOpenSSL.
searx logs a critical message and exits if the ssl module doesn't support SNI and pyOpenSSL is not installed.
searx logs a critical message and exits if the ssl version is older than 1.0.2.
In requirements.txt, pyOpenSSL is still required to install searx, as a fallback.
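A sketch of those startup checks (illustrative, not the exact searx code)::

    import logging
    import ssl
    import sys

    logger = logging.getLogger('searx')

    if ssl.OPENSSL_VERSION_INFO < (1, 0, 2):
        logger.critical('ssl version %s is too old, version >= 1.0.2 is required',
                        ssl.OPENSSL_VERSION)
        sys.exit(1)

    if not ssl.HAS_SNI:
        try:
            import OpenSSL  # noqa: F401  # the pyOpenSSL fallback used by requests
        except ImportError:
            logger.critical('ssl does not support SNI and pyOpenSSL is not installed')
            sys.exit(1)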
It was previously a dict with two or three keys: name, category, from_bang.
Make clear that this is an engine reference (see tests/unit/test_search.py for examples).
All variables using this class are renamed accordingly.
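Sketched as a class, such a reference looks roughly like this; the real
definition lives in searx, this is only an illustration of the replaced dict
form::

    class EngineRef:
        """Reference to an engine within a search query (sketch)."""

        def __init__(self, name, category, from_bang=False):
            self.name = name            # engine name, e.g. 'wikipedia'
            self.category = category    # category the engine was selected from
            self.from_bang = from_bang  # True when selected via a !bang

    # replaces the previous {'name': ..., 'category': ..., 'from_bang': ...} dict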
* Log each call to get_locale: display the URL, the locale and the source (browser, preferences, form).
* Rename _get_browser_language to _get_browser_or_settings_language to match the actual code.
AJAX requests send the X-Requested-With HTTP header,
so searx.webapp.autocompleter returns the results with the expected data format.
Related to #2127. Closes #2203.
A new "base" engine called command is introduced. It is the foundation for all command line engines for now.
You can use this engine to create your own command line engine; a sketch of the underlying idea follows the engine list below.
Add some engines (commented out to make sure no one enables anything accidentally):
* git grep: This engine lets you grep in the searx repo.
* locate: If locate is installed and initialized, you can search on the FS.
* find: You can find files with a specific name from where you started searx.
* pattern search in files: This engine utilizes the command fgrep.
* regex search in files: This engine runs `grep` to find a file based on its contents.
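A minimal sketch of what such an engine does under the hood, with illustrative
names and a hypothetical {{QUERY}} placeholder; not the actual searx
implementation::

    import subprocess

    def run_command_engine(cmd_template, query, timeout=5):
        # substitute the query as a single argv element; no shell is involved,
        # so shell metacharacters in the query stay harmless
        argv = [query if tok == '{{QUERY}}' else tok for tok in cmd_template]
        proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
        return proc.stdout.splitlines()

    # usage, mirroring the 'git grep' example above:
    # results = run_command_engine(['git', 'grep', '-n', '{{QUERY}}'], 'flask')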
and some other exceptions:
* KeyboardInterrupt
* SystemExit
* RuntimeError
* SystemError
* ImportError: an engine with an unmet dependency will stop everything.
Sending queries through POST, while better for privacy, breaks functionality
with certain extensions (e.g. Firefox containers). Since Firefox does
not send cookies when requesting `/opensearch.xml`, users cannot easily
switch to GET on the client side unless they make a custom search
engine. This commit allows admins to modify the default method on their
side so they can set it to GET if needed.
Sending query params over GET seems to be the only way to be able to
enable autocomplete in the browser. This commit adds the necessary URL
formatting to opensearch.xml. In order to identify queries coming from
the URL bar (rather than an AJAX request), which requires a different
JSON format and MIME type, the request headers are checked for
"X-Requested-With: XMLHttpRequest" which is added by jQuery request.
- enable HTTPS for sci-hub.tw by default
- make sci-hub the default DOI resolver, as it has the largest collection of scientific articles
- replace doai.io with dissem.in, as the former redirects to this new domain
Co-authored-by: Aurora of Earth <auroraofearth@ya.ru>
* Made first attempt at the bangs redirects plugin.
* It redirects. But in a messy way via javascript.
* First version with custom plugin
* Added a help page and an operator to see all the bangs available.
* Changed to .format because of support
* Changed to .format because of support
* Removed : in params
* Fixed path to json file and changed bang operator
* Changed bang operator back to &
* Refactored getting search query. Also changed bang operator to ! and is now working.
* Removed prints
* Removed temporary bangs_redirect.js file. Updated plugin documentation
* Added unit test for the bangs plugin
* Fixed a unit test and added 2 more for bangs plugin
* Changed back to default settings.yml
* Added myself to AUTHORS.rst
* Refactored working of custom plugin.
* Refactored _get_bangs_data from list to dict to improve search speed.
* Decoupled bangs plugin from webserver with redirect_url
* Refactored bangs unit tests
* Fixed unit test bangs. Removed double parsing in bangs.py
* Removed a dumb print statement
* Refactored bangs plugin to core engine.
* Removed bangs plugin.
* Refactored external bangs unit tests from plugin to core.
* Removed custom_results/bangs documentation from plugins.rst
* Added newline in settings.yml so the PR stays clean.
* Changed searx/plugins/__init__.py back to the old file
* Removed newline search.py
* Refactored get_external_bang_operator from utils to external_bang.py
* Removed unnecessary import from test_plugins.py
* Removed _parseExternalBang and _isExternalBang from query.py
* Removed get_external_bang_operator since it was not necessary
* Simplified external_bang.py
* Simplified external_bang.py
* Moved external_bangs unit tests to test_webapp.py. Fixed return in search with external_bang
* Refactored query parsing to unicode to support python2
* Refactored query parsing to unicode to support python2
* Refactored bangs plugin to core engine.
* Refactored search parameter to search_query in external_bang.py
Previously only image/jpeg data URLs were not proxied.
This commit doesn't proxify any data URL whose MIME type starts with "image/".
This is a quick fix for PR #1985: the google_images engine can return some data URLs.
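A sketch of the idea (illustrative, not the exact searx code)::

    from urllib.parse import quote

    def image_proxify(url):
        # pass data URLs with any image/* MIME type through unchanged
        # (previously only 'data:image/jpeg;base64,...' was passed through)
        if url.startswith('data:image/'):
            return url
        # everything else is rewritten to go through the image proxy endpoint
        return '/image_proxy?url=' + quote(url, safe='')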
A new option is added to engines to hide error messages from users. It
is called `display_error_messages` and by default it is set to `True`.
If it is set to `False`, error messages do not show up in the UI.
Keep in mind that engines are still suspended if needed regardless of
this setting.
Closes #1828
The gigablast API has changed and seems to have some quirks; this is the first
revision. More work (hacks) is needed.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Since there are zero results, we can remove it:
$ make engines.languages
fetch languages ..
...
fetched 0 languages from engine gigablast
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Inline styles are blocked by default with Content Security Policy (CSP). Move
the rest of inline styles to CSS and correct the HTML template of the oscar
preference page.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
A *brand* of searx is a fork which might have its own design and some special
functions which might be reasonable in a special context.
In this sense, the fork might have its own documentation but not its own issue
tracker. The *upstream* of a brand is always https://github.com/asciimoo from
where the brand-fork pulls the master branch regularly. A fork which has its
own issue tracker is a spin-off and out of the scope of the searx project
itself. The conclusion is:
- hard code ISSUE_URL (in the Makefile)
- always refer to DOCS_URL
- links in the about page refer to the *upstream* (searx project)
except DOCS_URL
- "fork me on github" ribbons refer to the *upstream*
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
We have some variables in the build environment which are also needed in the
grunt process when building themes. These variables are relevant if one
creates a fork with its own branding. We treat these variables under the term
'brands'.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
We have some variables in the build environment which are also needed in the
templating process. These variables are relevant if one creates a fork with
its own branding. We treat these variables under the term 'brands'.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
dateutil.parser.parse() does not know the Spanish date format, which
leads to a ValueError. Fixes #1870
Traceback (most recent call last):
File "/usr/local/searx/searx/search.py", line 160, in search_one_http_request_safe
search_results = search_one_http_request(engine, query, request_params)
File "/usr/local/searx/searx/search.py", line 97, in search_one_http_request
return engine.response(response)
File "/usr/local/searx/searx/engines/startpage.py", line 102, in response
published_date = parser.parse(date_string, dayfirst=True)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 1358, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 649, in parse
raise ValueError("Unknown string format:", timestr)
ValueError: ('Unknown string format:', '24 Ene 2013')
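One way to handle such dates is to map the Spanish month abbreviations to
English before parsing; a sketch of the idea, not necessarily the fix applied
in the engine::

    from dateutil import parser

    # only the abbreviations that differ from English need mapping
    MONTHS_ES = {'Ene': 'Jan', 'Abr': 'Apr', 'Ago': 'Aug', 'Dic': 'Dec'}

    def parse_spanish_date(date_string):
        for es, en in MONTHS_ES.items():
            date_string = date_string.replace(es, en)
        return parser.parse(date_string, dayfirst=True)

    # parse_spanish_date('24 Ene 2013') -> datetime.datetime(2013, 1, 24, 0, 0)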
When selecting languages other than 'en', bing-video did not handle the language
correctly and gave very bad results. Since the User-Agent is normally rotated in
searx, the behavior of a !biv search was unpredictable and paging was broken.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The bing_news bug (discussed in #1838) was caused by wrong language tags, which
were fixed in e0c99d9d; no need to change the bing_news search string.
closes: https://github.com/asciimoo/searx/issues/1838
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
To get meaningful diffs, the json file has to be sorted. Before applying any
further content patch, the json file needs an initial sort (without changing any
content).
Sorted by::
import sys, json
with open('engines_languages.json') as f:
j = json.load(f)
with open('engines_languages.json', 'w') as f:
json.dump(j, f, indent=2, sort_keys=True)
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
When results are fetched from any programming-related documentation site
(like git-scm.com, docs.python.org etc.), the content in the info box is shown as
raw HTML code.
This change addresses the issue by using the "safe" filter of the template
engine (Jinja2; Django's equivalent filter is documented at the link below). See
- https://docs.djangoproject.com/en/3.0/ref/templates/builtins/#safe
- the searx issue tracker (issue #1649), for more information.
Resolves: #1649
On low-width devices (mobile, tablet etc.), the info box is shown at the
bottom of the page.
This change addresses the issue by rearranging the column grids for low-width
devices and moving the sidebar to the top of the page. See
- https://getbootstrap.com/docs/3.3/css/#grid-column-ordering
- and the searx issue tracker (issue #1777), for more information.
Effect: along with the info box, the suggestion and link boxes also move to the
top of the page.
Resolves: #1777
Infinite scroll adds a `hr` tag to split up the sections loaded by it.
The vim bindings `j` and `k`, which jump to the next and previous result
respectively, search for a **direct** sibling with the class `result`.
With the `hr` between results a direct sibling cannot be found. To fix
this we remove the restriction of it having to be a direct sibling.
Adding a CR in some files and not in others is a good starting point for a
DOS+Unix line-ending mess we have all already seen in many projects.
The patch fixes all files matching (even those coming from grunt's build)::

    find ./searx -exec file {} \; | grep CR

BTW: the same applies to mixing TAB and SPACE indentation styles in one and the
same file; where sources are touched by this patch, that is fixed as well.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Fix this error during the travis build::

    /home/travis/build/asciimoo/searx/searx/engines/duckduckgo_definitions.py:21:44: E225 missing whitespace around operator
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Add image format and source information to display - needs changes to engines to actually display something.
Displays result.source (website from which the image was taken) and result.img_format (image type and size).
Result is styled with result-format and result-source classes. See PR #1566 for an example of an engine which has the necessary changes.
Strip <span class="highlight">...</span> in the oscar image template.
This PR fixes the result count from bing which was throwing a (hidden) error, and adds a validation to avoid reading more results than available.
For example:
if there are 100 results for some search and we try to get results 120 to 130, Bing will send back results 0 to 10 and no error. If we compare the result count with the first parameter of the request, we can avoid these "invalid" results.
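The validation can be sketched like this (illustrative names, not the engine's
actual variables)::

    def valid_page(requested_first_index, total_results, results):
        # Bing answers out-of-range requests with the first page and no error;
        # drop the page when the requested offset lies beyond the result count
        if requested_first_index > total_results:
            return []
        return results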
The new URL parameter "timeout_limit" sets the timeout limit in seconds.
Example: "timeout_limit=1.5" means the timeout limit is 1.5 seconds.
In addition, the query can start with <[number] to set the timeout limit.
For a number between 0 and 99, the unit is the second:
Example: "<30 searx" means the timeout limit is 30 seconds.
For a number above 100, the unit is the millisecond:
Example: "<850 searx" means the timeout is 850 milliseconds.
In addition, there is a new optional setting: outgoing.max_request_timeout.
If it is not set, the user timeout can't go above the searx configuration (as before: the max timeout of the selected engines for a query).
If the value is set, the user can set a timeout between 0 and max_request_timeout using
<[number] or the timeout_limit query parameter.
Related to #1077
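A sketch of the <[number] parsing rule (illustrative, not the literal
implementation)::

    def parse_timeout_prefix(raw_value):
        """Interpret the number after '<': 0-99 means seconds, >= 100 milliseconds."""
        value = float(raw_value)
        if value < 100:
            return value        # '<30 searx'  -> 30 seconds
        return value / 1000.0   # '<850 searx' -> 0.85 seconds

    # any clamping against outgoing.max_request_timeout (when set) happens afterwards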
Updated version of PR #1413 from @isj-privacore.
Characters that were not ASCII were incorrectly decoded.
Add a helper function: searx.utils.ecma_unescape (a Python implementation of the JavaScript unescape function).
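A Python sketch of ECMA-262 unescape; the real helper lives in searx.utils::

    import re

    RE_U4 = re.compile(r'%u([0-9a-fA-F]{4})')
    RE_X2 = re.compile(r'%([0-9a-fA-F]{2})')

    def ecma_unescape(string):
        # '%u5409' -> '吉' (the 4-digit form first, so %uXXXX isn't eaten by %XX)
        string = RE_U4.sub(lambda m: chr(int(m.group(1), 16)), string)
        # '%20' -> ' ', '%F3' -> 'ó'
        string = RE_X2.sub(lambda m: chr(int(m.group(1), 16)), string)
        return string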
* Search URL is https://www.wikidata.org/w/index.php?{query}&ns0=1 (with ns0=1 at the end to avoid an HTTP redirection)
* url_detail: remove the disabletidy=1 deprecated parameter
* Add eval_xpath function: compile each XPath expression once, for all calls (see the sketch after this list).
* Add get_id_cache: retrieve all HTML elements carrying an id in one pass, avoiding the slow-to-process dynamic XPath '//div[@id="{propertyid}"]'.replace('{propertyid}').
* Create an etree.HTMLParser() instead of using the global one (see #1575)
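A sketch of the compile-once idea behind eval_xpath and of the id cache
(illustrative)::

    from functools import lru_cache

    from lxml import etree

    @lru_cache(maxsize=None)
    def get_xpath(xpath_str):
        # compile each XPath expression only once
        return etree.XPath(xpath_str)

    def eval_xpath(element, xpath_str):
        return get_xpath(xpath_str)(element)

    def get_id_cache(dom):
        # collect every element with an id in a single pass instead of running
        # a dynamic '//div[@id="..."]' XPath per property
        return {e.get('id'): e for e in dom.xpath('//*[@id]')}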
Fetch complete JSON data block, use legend to extract images.
Unquote urlencoded strings.
Add image description as 'content'.
Add 'img_format' and 'source' data (needs PR #1567 to enable this data to be displayed).
Show images which lack ownerid instead of discarding them.