Copyright © 2001-2014 Andrew Aksyonoff
Copyright © 2008-2014 Sphinx Technologies Inc, http://sphinxsearch.com
Sphinx is a full-text search engine, publicly distributed under GPL version 2. Commercial licensing (eg. for embedded use) is available upon request.
Technically, Sphinx is a standalone software package that provides fast and relevant full-text search functionality to client applications. It was specially designed to integrate well with SQL databases storing the data, and to be easily accessed by scripting languages. However, Sphinx does not depend on nor require any specific database to function.
Applications can access the Sphinx search daemon (searchd) using any of three different access methods: a) via Sphinx's own implementation of the MySQL network protocol (using a small SQL subset called SphinxQL; this is the recommended way), b) via the native search API (SphinxAPI), or c) via MySQL server with a pluggable storage engine (SphinxSE).
Official native SphinxAPI implementations for PHP, Perl, Python, Ruby and Java are included within the distribution package. The API is very lightweight, so porting it to a new language is known to take a few hours or days. Third-party API ports and plugins exist for Perl, C#, Haskell, Ruby-on-Rails, and possibly other languages and frameworks.
Starting from version 1.10-beta, Sphinx supports two different indexing backends: "disk" index backend, and "realtime" (RT) index backend. Disk indexes support online full-text index rebuilds, but online updates can only be done on non-text (attribute) data. RT indexes additionally allow for online full-text index updates. Previous versions only supported disk indexes.
Data can be loaded into disk indexes using a so-called data source. Built-in sources can fetch data directly from MySQL, PostgreSQL, MSSQL, ODBC-compliant databases (Oracle, etc) or a custom XML format. Adding new data source drivers (eg. to natively support other DBMSes) is designed to be as easy as possible. RT indexes, as of 1.10-beta, can only be populated using SphinxQL.
As for the name, Sphinx is an acronym which is officially decoded as SQL Phrase Index. Yes, I know about CMU's Sphinx project.
Key Sphinx features are:
high indexing and searching performance;
advanced indexing and querying tools (flexible and feature-rich text tokenizer, querying language, several different ranking modes, etc);
advanced result set post-processing (SELECT with expressions, WHERE, ORDER BY, GROUP BY etc over text search results);
proven scalability up to billions of documents, terabytes of data, and thousands of queries per second;
easy integration with SQL and XML data sources, and SphinxAPI, SphinxQL, or SphinxSE search interfaces;
easy scaling with distributed searches.
To expand a bit, Sphinx:
has high indexing speed (up to 10-15 MB/sec per core on an internal benchmark);
has high search speed (up to 150-250 queries/sec per core against 1,000,000 documents, 1.2 GB of data on an internal benchmark);
has high scalability (biggest known cluster indexes over 3,000,000,000 documents, and busiest one peaks over 50,000,000 queries/day);
provides good relevance ranking through combination of phrase proximity ranking and statistical (BM25) ranking;
provides distributed searching capabilities;
provides document excerpts (snippets) generation;
provides searching from within application with SphinxAPI or SphinxQL interfaces, and from within MySQL with pluggable SphinxSE storage engine;
supports boolean, phrase, word proximity and other types of queries;
supports multiple full-text fields per document (up to 32 by default);
supports multiple additional attributes per document (ie. groups, timestamps, etc);
supports stopwords;
supports morphological word forms dictionaries;
supports tokenizing exceptions;
supports both single-byte encodings and UTF-8;
supports stemming (stemmers for English, Russian, Czech and Arabic are built-in; and stemmers for French, Spanish, Portuguese, Italian, Romanian, German, Dutch, Swedish, Norwegian, Danish, Finnish, Hungarian, are available by building third party libstemmer library);
supports MySQL natively (all types of tables, including MyISAM, InnoDB, NDB, Archive, etc are supported);
supports PostgreSQL natively;
supports ODBC compliant databases (MS SQL, Oracle, etc) natively;
...has 50+ other features not listed here, refer to API and configuration manual!
Sphinx is available through its official Web site at http://sphinxsearch.com/.
Currently, the Sphinx distribution tarball includes the following software:
indexer
: a utility which creates fulltext indexes;
search
: a simple command-line (CLI) test utility which searches through fulltext indexes;
searchd
: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;
sphinxapi
: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby).
spelldump
: a simple command-line tool to extract the items from an ispell
or MySpell
(as bundled with OpenOffice) format dictionary to help customize your index, for use with wordforms.
indextool
: a utility to dump miscellaneous debug information about the index, added in version 0.9.9-rc2.
wordbreaker
: a utility to break down compound words into separate words, added in version 2.1.1-beta.
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See COPYING file for details.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Non-GPL licensing (for OEM/ISV embedded use) can also be arranged, please contact us to discuss commercial licensing possibilities.
Sphinx initial author (and a benevolent dictator ever since):
Andrew Aksyonoff, http://shodan.ru
Past and present employees of Sphinx Technologies Inc who should be noted for their work on Sphinx (in alphabetical order):
Adam Rice
Adrian Nuta
Alexander Klimenko
Alexey Dvoichenkov
Alexey Vinogradov
Anton Tsitlionok
Eugene Kosov
Gloria Vinogradova
Ilya Kuznetsov
Kirill Shmatov
Rich Kelm
Stanislav Klinov
Steven Barker
Vladimir Fedorkov
Yuri Schapov
People who contributed to Sphinx and their contributions (in no particular order):
Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source
Len Kranendonk, Perl API
Dmytro Shteflyuk, Ruby API
Many other people have contributed ideas, bug reports, fixes, etc. Thank you!
Sphinx development was started back in 2001, because I didn't manage to find an acceptable search solution (for a database driven Web site) which would meet my requirements. Actually, each and every important aspect was a problem:
search quality (ie. good relevance)
statistical ranking methods performed rather badly, especially on large collections of small documents (forums, blogs, etc)
search speed
especially if searching for phrases which contain stopwords, as in "to be or not to be"
moderate disk and CPU requirements when indexing
important in a shared hosting environment, not to mention the indexing speed.
Despite the time passed and the numerous improvements made in other solutions, there's still no solution which I personally would be eager to migrate to.
Considering that, and the positive feedback received from Sphinx users over the years, the obvious decision is to continue developing Sphinx (and, eventually, to take over the world).
Sphinx can be compiled either from source or installed using prebuilt packages. Most modern UNIX systems with a C++ compiler should be able to compile and run Sphinx without any modifications.
Systems on which Sphinx is currently known to run successfully include:
Linux 2.4.x, 2.6.x, 3.x (many various distributions)
Windows 2000, XP, 7, 8
FreeBSD 4.x, 5.x, 6.x, 7.x, 8.x
NetBSD 1.6, 3.0
Solaris 9, 11
Mac OS X
CPU architectures known to work include i386 (aka x86), amd64 (aka x86_64), SPARC64, and ARM.
Chances are good that Sphinx should work on other Unix platforms and/or CPU architectures just as well. Please report any other platforms that worked for you!
All platforms are production quality. There are no principal functional limitations on any platform.
On UNIX, you will need the following tools to build and install Sphinx:
a working C++ compiler. GNU gcc and clang are known to work.
a good make program. GNU make is known to work.
On Windows, you will need Microsoft Visual C/C++ Studio .NET 2005 or above. Other compilers/environments will probably work as well, but for the time being, you will have to build the makefile (or other environment-specific project files) manually.
Extract everything from the distribution tarball (haven't you already?)
and go to the sphinx
subdirectory. (We are using
version 2.0.1-beta here for the sake of example only; be sure to change this
to a specific version you're using.)
$ tar xzvf sphinx-2.0.1-beta.tar.gz
$ cd sphinx
Run the configuration program:
$ ./configure
There are a number of options to configure. The complete listing may
be obtained by using the --help
switch. The most important ones are:
--prefix
, which specifies where to install Sphinx; such as --prefix=/usr/local/sphinx
(all of the examples use this prefix)
--with-mysql
, which specifies where to look for MySQL include and library files, if auto-detection fails;
--with-static-mysql
, which builds Sphinx with statically linked MySQL support;
--with-pgsql
, which specifies where to look for PostgreSQL include and library files;
--with-static-pgsql
, which builds Sphinx with statically linked PostgreSQL support.
Build the binaries:
$ make
Install the binaries in the directory of your choice
(defaults to /usr/local/bin/
on *nix systems,
but can be overridden with configure --prefix
):
$ make install
If configure
fails to locate MySQL headers and/or libraries,
try checking for and installing mysql-devel
package. On some systems,
it is not installed by default.
If make
fails with a message which looks like
/bin/sh: g++: command not found
make[1]: *** [libsphinx_a-sphinx.o] Error 127
try checking for and installing gcc-c++
package.
If you are getting compile-time errors which look like
sphinx.cpp:67: error: invalid application of `sizeof' to incomplete type `Private::SizeError<false>'
this means that some compile-time type size check failed. The most probable reason is that the off_t type is less than 64-bit on your system. As a quick hack, you can edit sphinx.h and replace off_t with DWORD in the typedef for SphOffset_t, but note that this will prohibit you from using full-text indexes larger than 2 GB. Even if the hack helps, please report such issues, providing the exact error message and compiler/OS details, so I can properly fix them in future releases.
If you keep getting any other error, or the suggestions above do not seem to help you, please don't hesitate to contact me.
There are two ways of getting Sphinx for Ubuntu: regular deb packages and the Launchpad PPA repository.
Deb packages:
Sphinx requires a few libraries to be installed on Debian/Ubuntu. Use apt-get to download and install these dependencies:
$ sudo apt-get install mysql-client unixodbc libpq5
Now you can install Sphinx:
$ sudo dpkg -i sphinxsearch_2.0.10-release-0ubuntu11~precise_amd64.deb
PPA repository (Ubuntu only).
Installing Sphinx is much easier from the Sphinxsearch PPA repository, because you will get all dependencies and can also update Sphinx to the latest version with the same command.
First, add Sphinxsearch repository and update the list of packages:
$ sudo add-apt-repository ppa:builds/sphinxsearch-stable
$ sudo apt-get update
Install/update sphinxsearch package:
$ sudo apt-get install sphinxsearch
The Sphinx searchd
daemon can be started/stopped using the service command:
$ sudo service sphinxsearch start
Currently we distribute Sphinx RPMs and SRPMs on our website for both 5.x and 6.x versions of Red Hat Enterprise Linux, but they can be installed on CentOS as well.
Before installation make sure you have these packages installed:
$ yum install postgresql-libs unixODBC
Download RedHat RPM from Sphinx website and install it:
$ rpm -Uhv sphinx-2.0.10-1.rhel6.x86_64.rpm
After preparing the configuration file (see Quick tour), you can start the searchd daemon:
$ service searchd start
Installing Sphinx on a Windows server is often easier than installing on a Linux environment; unless you are preparing code patches, you can use the pre-compiled binary files from the Downloads area on the website.
Extract everything from the .zip file you have downloaded -
sphinx-2.0.10-release-win32.zip
,
or sphinx-2.0.10-release-win32-pgsql.zip
if you need PostgreSQL support as well.
(We are using version 2.0.10-release here for the sake of example only;
be sure to change this to a specific version you're using.)
You can use Windows Explorer in Windows XP and up to extract the files,
or a freeware package like 7Zip to open the archive.
For the remainder of this guide, we will assume that the folders are unzipped into C:\Sphinx
,
such that searchd.exe
can be found in C:\Sphinx\bin\searchd.exe
. If you decide
to use any different location for the folders or configuration file, please change it accordingly.
Edit the contents of sphinx.conf.in - specifically entries relating to @CONFDIR@ - to paths suitable for your system.
Install the searchd
system as a Windows service:
C:\Sphinx\bin> C:\Sphinx\bin\searchd --install --config C:\Sphinx\sphinx.conf.in --servicename SphinxSearch
The searchd
service will now be listed in the Services panel
within the Management Console, available from Administrative Tools. It will not have been
started, as you will need to configure it and build your indexes with indexer
before starting the service. A guide to do this can be found under
Quick tour.
During the next steps of the install (which involve running indexer pretty much as you would on Linux) you may find that you get an error relating to libmysql.dll not being found. If you have MySQL installed, you should find a copy of this library in your Windows directory, or sometimes in Windows\System32, or failing that in the MySQL core directories. If you do receive an error please copy libmysql.dll into the bin directory.
All the example commands below assume that you installed Sphinx
in /usr/local/sphinx
, so searchd
can
be found in /usr/local/sphinx/bin/searchd
.
To use Sphinx, you will need to:
Create a configuration file.
Default configuration file name is sphinx.conf
.
All Sphinx programs look for this file in the current working directory
by default.
Sample configuration file, sphinx.conf.dist
, which has
all the options documented, is created by configure
.
Copy and edit that sample file to make your own configuration: (assuming Sphinx is installed into /usr/local/sphinx/
)
$ cd /usr/local/sphinx/etc
$ cp sphinx.conf.dist sphinx.conf
$ vi sphinx.conf
The sample configuration file is set up to index the documents
table from MySQL database test
; there's an example.sql
sample data file to populate that table with a few documents for testing purposes:
$ mysql -u test < /usr/local/sphinx/etc/example.sql
Run the indexer to create a full-text index from your data:
$ cd /usr/local/sphinx/etc
$ /usr/local/sphinx/bin/indexer --all
Query your newly created index!
Connect to server:
$ mysql -h0 -P9306
SELECT * FROM test1 WHERE MATCH('my document');
INSERT INTO rt VALUES (1, 'this is', 'a sample text', 11);
INSERT INTO rt VALUES (2, 'some more', 'text here', 22);
SELECT gid/11 FROM rt WHERE MATCH('text') GROUP BY gid;
SELECT * FROM rt ORDER BY gid DESC;
SHOW TABLES;
SELECT *, WEIGHT() FROM test1 WHERE MATCH('"document one"/1'); SHOW META;
SET profiling=1; SELECT * FROM test1 WHERE id IN (1,2,4); SHOW PROFILE;
SELECT id, id%3 idd FROM test1 WHERE MATCH('this is | nothing') GROUP BY idd; SHOW PROFILE;
SELECT id FROM test1 WHERE MATCH('is this a good plan?'); SHOW PLAN;
SELECT COUNT(*) FROM test1;
CALL KEYWORDS ('one two three', 'test1');
CALL KEYWORDS ('one two three', 'test1', 1);
To query the index from command line, use search
utility:
$ cd /usr/local/sphinx/etc
$ /usr/local/sphinx/bin/search test
To query the index from your PHP scripts, you need to:
Run the search daemon which your script will talk to:
$ cd /usr/local/sphinx/etc
$ /usr/local/sphinx/bin/searchd
Run the attached PHP API test script (to ensure that the daemon was successfully started and is ready to serve the queries):
$ cd sphinx/api
$ php test.php test
Include the API (it's located in api/sphinxapi.php
)
into your own scripts and use it.
Happy searching!
The data to be indexed can generally come from very different sources: SQL databases, plain text files, HTML files, mailboxes, and so on. From Sphinx's point of view, the data it indexes is a set of structured documents, each of which has the same set of fields and attributes. This is similar to SQL, where each row would correspond to a document, and each column to either a field or an attribute.
Depending on what source Sphinx should get the data from, different code is required to fetch the data and prepare it for indexing. This code is called a data source driver (or simply driver or data source for brevity).
At the time of this writing, there are built-in drivers for
MySQL, PostgreSQL, MS SQL (on Windows), and ODBC. There is also
a generic driver called xmlpipe, which runs a specified command
and reads the data from its stdout
.
See Section 3.9, “xmlpipe data source” for the format description.
There can be as many sources per index as necessary. They will be sequentially processed in the very same order which was specified in the index definition. All the documents coming from those sources will be merged as if they were coming from a single source.
Full-text fields (or just fields for brevity) are the textual document contents that get indexed by Sphinx, and can be (quickly) searched for keywords.
Fields are named, and you can limit your searches to a single field (eg. search through "title" only) or a subset of fields (eg. to "title" and "abstract" only). Sphinx index format generally supports up to 256 fields. However, up to version 2.0.1-beta indexes were forcibly limited by 32 fields, because of certain complications in the matching engine. Full support for up to 256 fields was added in version 2.0.2-beta.
Note that the original contents of the fields are not stored in the Sphinx index. The text that you send to Sphinx gets processed, and a full-text index (a special data structure that enables quick searches for a keyword) gets built from that text. But the original text contents are then simply discarded. Sphinx assumes that you store those contents elsewhere anyway.
Moreover, it is impossible to fully reconstruct the original text, because the specific whitespace, capitalization, punctuation, etc will all be lost during indexing. It is theoretically possible to partially reconstruct a given document from the Sphinx full-text index, but that would be a slow process (especially if the CRC dictionary is used, which does not even store the original keywords and works with their hashes instead).
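As a toy illustration of the CRC dictionary point (plain Python, not Sphinx code): when only hashes of case-folded keywords are stored, query terms can still be matched by hashing them the same way, but the original strings cannot be read back out of the index.

```python
import zlib

def crc_term(term):
    # Case-fold first, then hash; only the hash would be stored.
    return zlib.crc32(term.lower().encode("utf-8"))

stored = {crc_term(t) for t in "this is my document body".split()}

# A query term is matched by hashing it the same way...
print(crc_term("Document") in stored)   # True
# ...but the stored hashes reveal nothing about the original keywords.
print(crc_term("missing") in stored)    # False
```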
Attributes are additional values associated with each document that can be used to perform additional filtering and sorting during search.
It is often desired to additionally process full-text search results based not only on matching document ID and its rank, but on a number of other per-document values as well. For instance, one might need to sort news search results by date and then relevance, or search through products within specified price range, or limit blog search to posts made by selected users, or group results by month. To do that efficiently, Sphinx allows to attach a number of additional attributes to each document, and store their values in the full-text index. It's then possible to use stored values to filter, sort, or group full-text matches.
Attributes, unlike the fields, are not full-text indexed. They are stored in the index, but it is not possible to search them as full-text, and attempting to do so results in an error.
For example, if column is an attribute, it is impossible to use the extended matching mode expression
@column 1
to match documents where column is 1; this is true even though numeric digits are normally indexed.
Attributes can be used for filtering, though, to restrict returned rows, as well as sorting or result grouping; it is entirely possible to sort results purely based on attributes, and ignore the search relevance tools. Additionally, attributes are returned from the search daemon, while the indexed text is not.
A good example for attributes would be a forum posts table. Assume that only title and content fields need to be full-text searchable - but that sometimes it is also required to limit search to a certain author or a sub-forum (ie. search only those rows that have some specific values of author_id or forum_id columns in the SQL table); or to sort matches by post_date column; or to group matching posts by month of the post_date and calculate per-group match counts.
This can be achieved by specifying all the mentioned columns (excluding title and content, which are full-text fields) as attributes, indexing them, and then using API calls to set up filtering, sorting, and grouping. Here is an example.
...
sql_query = SELECT id, title, content, \
    author_id, forum_id, post_date FROM my_forum_posts
sql_attr_uint = author_id
sql_attr_uint = forum_id
sql_attr_timestamp = post_date
...
// only search posts by author whose ID is 123
$cl->SetFilter ( "author_id", array ( 123 ) );
// only search posts in sub-forums 1, 3 and 7
$cl->SetFilter ( "forum_id", array ( 1,3,7 ) );
// sort found posts by posting date in descending order
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );
Attributes are named. Attribute names are case insensitive. Attributes are not full-text indexed; they are stored in the index as is. Several attribute types are supported; see the sql_attr_* directives in the configuration reference for the complete list.
The complete set of per-document attribute values is sometimes referred to as docinfo. Docinfos can either be
stored separately from the main full-text index data ("extern" storage, in .spa
file), or
attached to each occurrence of document ID in full-text index data ("inline" storage, in .spd
file).
When using extern storage, a copy of .spa
file
(with all the attribute values for all the documents) is kept in RAM by
searchd
at all times. This is for performance reasons;
random disk I/O would be too slow. On the contrary, inline storage does not
require any additional RAM at all, but that comes at the cost of greatly
inflating the index size: remember that it copies all
attribute values every time the document ID
is mentioned, and that is exactly as many times as there are
different keywords in the document. Inline may be the only viable
option if you have only a few attributes and need to work with big
datasets in limited RAM. However, in most cases extern storage
makes both indexing and searching much more efficient.
Search-time memory requirements for extern storage are
(1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with
2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM.
This is PER DAEMON, not per query. searchd
will allocate 160 MB on startup, read the data and keep it shared between queries.
The children will NOT allocate any additional
copies of this data.
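That formula is easy to sanity-check with plain arithmetic (a Python one-liner illustrating the manual's formula, nothing more):

```python
def extern_docinfo_ram(number_of_attrs, number_of_docs):
    # The formula above: (1+number_of_attrs)*number_of_docs*4 bytes
    return (1 + number_of_attrs) * number_of_docs * 4

# 10 million docs with 2 group attributes and 1 timestamp:
bytes_needed = extern_docinfo_ram(2 + 1, 10_000_000)
print(bytes_needed)  # 160000000, i.e. 160 MB
```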
MVAs, or multi-valued attributes, are an important special type of per-document attributes in Sphinx. MVAs let you attach sets of numeric values to every document. That is useful to implement article tags, product categories, etc. Filtering and group-by (but not sorting) on MVA attributes is supported.
As of version 2.0.2-beta, MVA values can either be unsigned 32-bit integers (UNSIGNED INTEGER) or signed 64-bit integers (BIGINT). Up to version 2.0.1-beta, only the unsigned 32-bit values were supported.
The set size is not limited; you can have an arbitrary number of values
attached to each document as long as RAM permits (.spm
file
that contains the MVA values will be precached in RAM by searchd
).
The source data can be taken either from a separate query, or from a document field;
see source type in sql_attr_multi.
In the first case the query will have to return pairs of document ID and MVA values,
in the second one the field will be parsed for integer values.
There are absolutely no requirements as to incoming data order; the values will be
automatically grouped by document ID (and internally sorted within the same ID)
during indexing anyway.
When filtering, a document will match the filter on an MVA attribute if any of the values satisfy the filtering condition. (Therefore, documents that pass through exclude filters will not contain any of the forbidden values.) When grouping by an MVA attribute, a document will contribute to as many groups as there are different MVA values associated with that document. For instance, if the collection contains exactly 1 document having a 'tag' MVA with values 5, 7, and 11, grouping on 'tag' will produce 3 groups with '@count' equal to 1 and '@groupby' key values of 5, 7, and 11 respectively. Also note that grouping by MVA might lead to duplicate documents in the result set: because each document can participate in many groups, it can be chosen as the best one in more than one group, leading to duplicate IDs. The PHP API historically uses an ordered hash on the document ID for the resulting rows; so you'll also need to use SetArrayResult() in order to employ group-by on MVA with the PHP API.
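The grouping rule can be modeled in a few lines (a Python illustration, not Sphinx internals): one document carrying 'tag' values 5, 7 and 11 contributes to three groups, each with a count of 1.

```python
from collections import defaultdict

# doc_id -> list of 'tag' MVA values
docs = {1: [5, 7, 11]}

groups = defaultdict(set)          # @groupby key -> matching doc IDs
for doc_id, tags in docs.items():
    for tag in tags:
        groups[tag].add(doc_id)    # one doc can land in many groups

result = {tag: len(ids) for tag, ids in sorted(groups.items())}
print(result)  # {5: 1, 7: 1, 11: 1}
```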
To be able to answer full-text search queries fast, Sphinx needs to build a special data structure optimized for such queries from your text data. This structure is called index; and the process of building index from text is called indexing.
Different index types are well suited for different tasks. For example, a disk-based tree-based index would be easy to update (ie. insert new documents to existing index), but rather slow to search. Sphinx architecture allows internally for different index types, or backends, to be implemented comparatively easily.
Starting with 1.10-beta, Sphinx provides 2 different backends: a disk index backend, and a RT (realtime) index backend.
Disk indexes are designed to provide maximum indexing and searching speed, while keeping the RAM footprint as low as possible. That comes at a cost of text index updates. You can not update an existing document or incrementally add a new document to a disk index. You only can batch rebuild the entire disk index from scratch. (Note that you still can update document's attributes on the fly, even with the disk indexes.)
This "rebuild only" limitation might look as a big constraint at a first glance. But in reality, it can very frequently be worked around rather easily by setting up muiltiple disk indexes, searching through them all, and only rebuilding the one with a fraction of the most recently changed data. See Section 3.11, “Live index updates” for details.
RT indexes enable you to implement dynamic updates and incremental additions to the full text index. RT stands for Real Time and they are indeed "soft realtime" in terms of writes, meaning that most index changes become available for searching as quickly as 1 millisecond or less, but could occasionally stall for seconds. (Searches will still work even during that occasional writing stall.) Refer to Chapter 4, Real-time indexes for details.
Last but not least, Sphinx supports so-called distributed indexes. Compared to disk and RT indexes, those are not a real physical backend, but rather just lists of either local or remote indexes that can be searched transparently to the application, with Sphinx doing all the chores of sending search requests to remote machines in the cluster, aggregating the result sets, retrying the failed requests, and even doing some load balancing. See Section 5.8, “Distributed searching” for a discussion of distributed indexes.
There can be as many indexes per configuration file as necessary.
indexer
utility can reindex either all of them
(if --all
option is specified), or a certain explicitly
specified subset. searchd
utility will serve all
the specified indexes, and the clients can specify what indexes to
search at run time.
There are a few different restrictions imposed on the source data which is going to be indexed by Sphinx, of which the single most important one is:
ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).
If this requirement is not met, different bad things can happen. For instance, Sphinx can crash with an internal assertion while indexing; or produce strange results when searching due to conflicting IDs. Also, a 1000-pound gorilla might eventually come out of your display and start throwing barrels at you. You've been warned.
When indexing, Sphinx fetches documents from the specified sources, splits the text into words, and does case folding so that "Abc", "ABC" and "abc" would be treated as the same word (or, to be pedantic, term).
To do that properly, Sphinx needs to know
what encoding the source text is in;
what characters are letters and what are not;
what letters should be folded to what letters.
This should be configured on a per-index basis using
charset_type
and
charset_table
options.
charset_type
specifies whether the document encoding is single-byte (SBCS) or UTF-8.
charset_table
specifies the table that maps letter characters to their case
folded versions. The characters that are not in the table are considered
to be non-letters and will be treated as word separators when indexing
or searching through this index.
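A rough model of what charset_table does (a Python illustration; the mapping below is a made-up toy table, not a real charset_table): characters present in the table are folded to their target form, and everything unmapped acts as a word separator.

```python
# Toy folding table: upper-case ASCII letters fold to lower-case,
# lower-case letters map to themselves; anything else is a separator.
fold = {c: c.lower() for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}
fold.update({c: c for c in "abcdefghijklmnopqrstuvwxyz"})

def tokenize(text):
    folded = "".join(fold.get(c, " ") for c in text)
    return folded.split()

print(tokenize("Abc, ABC and abc!"))  # ['abc', 'abc', 'and', 'abc']
```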
Default tables currently include English and Russian characters. Please do submit your tables for other languages!
As of version 2.1.1-beta, you can also specify text pattern replacement rules. For example, given the rules
regexp_filter = \b(\d+)\" => \1 inch
regexp_filter = (BLUE|RED) => COLOR
the text 'RED TUBE 5" LONG' would be indexed as 'COLOR TUBE 5 INCH LONG', and 'PLANK 2" x 4"' as 'PLANK 2 INCH x 4 INCH'. Rules are applied in the given order. Text in queries is also replaced; a search for "BLUE TUBE" would actually become a search for "COLOR TUBE". Note that Sphinx must be built with the --with-re2 option to use this feature.
With all the SQL drivers, indexing generally works as follows.
connection to the database is established;
pre-query (see Section 11.1.14, “sql_query_pre”) is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
main query (see Section 11.1.15, “sql_query”) is executed and the rows it returns are indexed;
post-query (see Section 11.1.34, “sql_query_post”) is executed to perform any necessary cleanup;
connection to the database is closed;
indexer does the sorting phase (to be pedantic, index-type specific post-processing);
connection to the database is established again;
post-index query (see Section 11.1.35, “sql_query_post_index”) is executed to perform any necessary final cleanup;
connection to the database is closed again.
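The sequence above can be sketched in Python (an illustration with hypothetical helper callables; the real indexer is written in C++):

```python
def index_sql_source(connect, pre_queries, main_query, post_query,
                     post_index_query, index_rows, sort_phase):
    # Steps 1-5: connect, run pre-queries (e.g. SET NAMES utf8),
    # index the rows of the main query, run the post-query, disconnect.
    db = connect()
    for q in pre_queries:
        db.run(q)
    index_rows(db.run(main_query))
    db.run(post_query)
    db.close()

    # Step 6: index-type specific post-processing (the sorting phase);
    # it can be lengthy, which is why the connection was dropped above.
    sort_phase()

    # Steps 7-9: reconnect, run the post-index query, disconnect again.
    db = connect()
    db.run(post_index_query)
    db.close()
```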
Most options, such as database user/host/password, are straightforward. However, there are a few subtle things, which are discussed in more detail here.
Main query, which needs to fetch all the documents, can impose a read lock on the whole table and stall the concurrent queries (eg. INSERTs to MyISAM table), waste a lot of memory for result set, etc. To avoid this, Sphinx supports so-called ranged queries. With ranged queries, Sphinx first fetches min and max document IDs from the table, and then substitutes different ID intervals into main query text and runs the modified query to fetch another chunk of documents. Here's an example.
Example 3.1. Ranged query usage example
# in sphinx.conf
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
sql_range_step = 1000
sql_query = SELECT * FROM documents WHERE id>=$start AND id<=$end
If the table contains document IDs from 1 to, say, 2345, then sql_query would be run three times:
with $start replaced with 1 and $end replaced with 1000;
with $start replaced with 1001 and $end replaced with 2000;
with $start replaced with 2001 and $end replaced with 2345.
Obviously, that's not much of a difference for a 2,000-row table, but when it comes to indexing a 10-million-row MyISAM table, ranged queries might be of some help.
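The interval arithmetic that indexer performs can be sketched in a few lines. This is a hedged illustration of the $start/$end substitution logic, not Sphinx's actual code:

```python
def ranged_intervals(min_id, max_id, step):
    """Yield the (start, end) ID intervals that indexer substitutes
    for $start/$end in sql_query when sql_range_step is set."""
    start = min_id
    while start <= max_id:
        end = min(start + step - 1, max_id)
        yield (start, end)
        start = end + 1

# For IDs 1..2345 with sql_range_step = 1000, sql_query runs three times:
print(list(ranged_intervals(1, 2345, 1000)))
# → [(1, 1000), (1001, 2000), (2001, 2345)]
```

Note how the last interval is clamped to the actual maximum ID rather than extended to a full step.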
sql_query_post vs. sql_query_post_index
The difference between the post-query and the post-index query is that the post-query is run immediately once Sphinx has received all the documents, but further indexing may still fail for some other reason. By the time the post-index query gets executed, in contrast, it is guaranteed that indexing was successful. The database connection is dropped and re-established because the sorting phase can be very lengthy and the connection would otherwise simply time out.
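As a hedged illustration of the distinction (the temporary table and the status-tracking query are hypothetical, not part of Sphinx; $maxid is the macro available in the post-index query), a source might use the two hooks like this:

```
source src1
{
    # ...
    # runs as soon as all rows have been received;
    # indexing may still fail after this point
    sql_query_post = DROP TABLE my_tmp_documents

    # runs only once the index is guaranteed to have been built successfully
    sql_query_post_index = REPLACE INTO sph_status SELECT 1, $maxid
}
```

Cleanup that must not depend on indexing success belongs in sql_query_post; bookkeeping that should only record a successful build belongs in sql_query_post_index.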
The xmlpipe data source was designed to enable users to plug data into Sphinx without having to implement new data source drivers themselves. It is limited to 2 fixed fields and 2 fixed attributes, and is now deprecated in favor of Section 3.10, “xmlpipe2 data source”. For new streams, use xmlpipe2.
To use xmlpipe, configure the data source in your configuration file as follows:
source example_xmlpipe_source
{
    type            = xmlpipe
    xmlpipe_command = perl /www/mysite.com/bin/sphinxpipe.pl
}
The indexer will run the command specified in xmlpipe_command, and then read, parse, and index the data it prints to stdout. More formally, it opens a pipe to the given command and then reads from that pipe.
indexer will expect one or more documents in a custom XML format. Here's an example document stream, consisting of two documents:
Example 3.2. XMLpipe document stream
<document>
<id>123</id>
<group>45</group>
<timestamp>1132223498</timestamp>
<title>test title</title>
<body>
this is my document body
</body>
</document>

<document>
<id>124</id>
<group>46</group>
<timestamp>1132223498</timestamp>
<title>another test</title>
<body>
this is another document
</body>
</document>
The legacy xmlpipe driver uses a built-in parser which is pretty fast but really strict and does not fully support XML. It requires that all the fields be present, formatted exactly as in this example, and occur in exactly the same order. The only optional field is timestamp; it defaults to 1.
xmlpipe2 lets you pass arbitrary full-text and attribute data to Sphinx in yet another custom XML format. It also allows you to specify the schema (ie. the set of fields and attributes) either in the XML stream itself, or in the source settings.
When indexing an xmlpipe2 source, indexer runs the given command, opens a pipe to its stdout, and expects a well-formed XML stream. Here's some sample stream data:
Example 3.3. xmlpipe2 document stream
<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>

<sphinx:schema>
<sphinx:field name="subject"/>
<sphinx:field name="content"/>
<sphinx:attr name="published" type="timestamp"/>
<sphinx:attr name="author_id" type="int" bits="16" default="1"/>
</sphinx:schema>

<sphinx:document id="1234">
<content>this is the main content <![CDATA[[and this <cdata> entry
must be handled properly by xml parser lib]]></content>
<published>1012325463</published>
<subject>note how field/attr tags can be
in <b class="red">randomized</b> order</subject>
<misc>some undeclared element</misc>
</sphinx:document>

<sphinx:document id="1235">
<subject>another subject</subject>
<content>here comes another document, and i am given to understand,
that in-document field order must not matter, sir</content>
<published>1012325467</published>
</sphinx:document>

<!-- ... even more sphinx:document entries here ... -->

<sphinx:killlist>
<id>1234</id>
<id>4567</id>
</sphinx:killlist>

</sphinx:docset>
Arbitrary fields and attributes are allowed. They also can occur in the stream in arbitrary order within each document; the order is ignored. There is a restriction on maximum field length; fields longer than 2 MB will be truncated to 2 MB (this limit can be changed in the source).
The schema, ie. complete fields and attributes list, must be declared
before any document could be parsed. This can be done either in the
configuration file using xmlpipe_field
and xmlpipe_attr_XXX
settings, or right in the stream using <sphinx:schema> element.
<sphinx:schema> is optional. It is only allowed to occur as the very
first sub-element in <sphinx:docset>. If there is no in-stream
schema definition, settings from the configuration file will be used.
Otherwise, stream settings take precedence.
Unknown tags (which were declared neither as fields nor as attributes) will be ignored with a warning. In the example above, <misc> will be ignored. All embedded tags and their attributes (such as <b> in <subject> in the example above) will be silently ignored.
Support for incoming stream encodings depends on whether iconv
is installed on the system. xmlpipe2 is parsed using libexpat
parser that understands US-ASCII, ISO-8859-1, UTF-8 and a few UTF-16 variants
natively. Sphinx configure
script will also check
for libiconv
presence, and utilize it to handle
other encodings. libexpat
also enforces the
requirement to use UTF-8 charset on Sphinx side, because the
parsed data it returns is always in UTF-8.
XML elements (tags) recognized by xmlpipe2 (and their attributes where applicable) are:
sphinx:docset
Mandatory top-level element; denotes and contains the xmlpipe2 document set.
sphinx:schema
Optional element; must either occur as the very first child of sphinx:docset, or never occur at all. Declares the document schema. Contains field and attribute declarations. If present, overrides per-source settings from the configuration file.
sphinx:field
Optional element, child of sphinx:schema. Declares a full-text field. Known attributes are:
"name", specifies the XML element name that will be treated as a full-text field in the subsequent documents.
"attr", specifies whether to also index this field as a string or word count attribute. Possible values are "string" and "wordcount". Introduced in version 1.10-beta.
sphinx:attr
Optional element, child of sphinx:schema. Declares an attribute. Known attributes are:
"name", specifies the element name that should be treated as an attribute in the subsequent documents.
"type", specifies the attribute type. Possible values are "int", "bigint", "timestamp", "str2ordinal", "bool", "float", "multi" and "json".
"bits", specifies the bit size for "int" attribute type. Valid values are 1 to 32.
"default", specifies the default value for this attribute that should be used if the attribute's element is not present in the document.
sphinx:document
Mandatory element, must be a child of sphinx:docset. Contains arbitrary other elements with field and attribute values to be indexed, as declared either using sphinx:field and sphinx:attr elements or in the configuration file. The only known attribute is "id", which must contain the unique integer document ID.
sphinx:killlist
Optional element, child of sphinx:docset. Contains a number of "id" elements whose contents are document IDs to be put into a kill-list for this index.
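A producer for this format can be a very small script. Below is a hedged Python sketch (the schema, field names, and document layout are illustrative only, not required by Sphinx) that builds an xmlpipe2 stream; printed to stdout, its output would be suitable for use behind an xmlpipe_command:

```python
from xml.sax.saxutils import escape

def xmlpipe2_stream(docs):
    """Build an xmlpipe2 stream with an in-stream schema for a list of
    docs, each a dict with id, subject, content, and published keys."""
    out = [
        '<?xml version="1.0" encoding="utf-8"?>',
        '<sphinx:docset>',
        '<sphinx:schema>',
        '<sphinx:field name="subject"/>',
        '<sphinx:field name="content"/>',
        '<sphinx:attr name="published" type="timestamp"/>',
        '</sphinx:schema>',
    ]
    for doc in docs:
        out.append('<sphinx:document id="%d">' % doc["id"])
        for key in ("subject", "content", "published"):
            # escape() protects reserved XML characters in the values
            out.append('<%s>%s</%s>' % (key, escape(str(doc[key])), key))
        out.append('</sphinx:document>')
    out.append('</sphinx:docset>')
    return "\n".join(out)

print(xmlpipe2_stream([
    {"id": 1, "subject": "hello", "content": "a & b", "published": 1012325463},
]))
```

The key points the sketch demonstrates are the in-stream <sphinx:schema> declaration and the requirement that reserved XML characters in field values be escaped.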
There are two major approaches to maintaining the full-text index contents up to date. Note, however, that both these approaches deal with the task of full-text data updates, and not attribute updates. Instant attribute updates are supported since version 0.9.8. Refer to UpdateAttributes() API call description for details.
First, you can use disk-based indexes, partition them manually, and only rebuild the smaller partitions (so-called "deltas") frequently. By minimizing the rebuild size, you can reduce the average indexing lag to something as low as 30-60 seconds. This approach was the only one available in versions 0.9.x. On huge collections it actually might be the most efficient one. Refer to Section 3.12, “Delta index updates” for details.
Second, versions 1.x (starting with 1.10-beta) add support for so-called real-time indexes (RT indexes for short) that allow on-the-fly updates of the full-text data. Updates on an RT index can appear in the search results within 1-2 milliseconds, ie. 0.001-0.002 seconds. However, RT indexes are less efficient for bulk indexing huge amounts of data. Refer to Chapter 4, Real-time indexes for details.
There's a frequent situation when the total dataset is too big to be reindexed from scratch often, but the amount of new records is rather small. Example: a forum with a 1,000,000 archived posts, but only 1,000 new posts per day.
In this case, "live" (almost real time) index updates could be implemented using the so-called "main+delta" scheme.
The idea is to set up two sources and two indexes, with one "main" index for the data which only changes rarely (if ever), and one "delta" for the new documents. In the example above, 1,000,000 archived posts would go to the main index, and newly inserted 1,000 posts/day would go to the delta index. Delta index could then be reindexed very frequently, and the documents can be made available to search in a matter of minutes.
Specifying which documents should go to which index, and reindexing the main index, can also be made fully automatic. One option is to make a counter table which tracks the ID that splits the documents, and to update it whenever the main index is reindexed.
Example 3.4. Fully automated live updates
# in MySQL
CREATE TABLE sph_counter
(
    counter_id INTEGER PRIMARY KEY NOT NULL,
    max_doc_id INTEGER NOT NULL
);

# in sphinx.conf
source main
{
    # ...
    sql_query_pre = SET NAMES utf8
    sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
    sql_query = SELECT id, title, body FROM documents \
        WHERE id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

source delta : main
{
    sql_query_pre = SET NAMES utf8
    sql_query = SELECT id, title, body FROM documents \
        WHERE id>( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

index main
{
    source = main
    path = /path/to/main
    # ... all the other settings
}

# note how all other settings are copied from main,
# but source and path are overridden (they MUST be)
index delta : main
{
    source = delta
    path = /path/to/delta
}
Note how we override sql_query_pre in the delta source. The override has to be explicit: otherwise the REPLACE query would be run when indexing the delta source too, effectively nullifying it. However, issuing the directive in the inherited source for the first time removes all inherited values, so the encoding setup is lost as well. Hence sql_query_pre in the delta can not just be empty, and we need to issue the encoding setup query explicitly once again.
Merging two existing indexes can be more efficient than indexing the data from scratch, and is desirable in some cases (such as merging 'main' and 'delta' indexes instead of simply reindexing 'main' in the 'main+delta' partitioning scheme). So indexer has an option to do that.
Merging the indexes is normally faster than reindexing but still not instant on huge indexes. Basically, it will need to read the contents of both indexes once and write the result once. Merging a 100 GB and a 1 GB index, for example, will result in 202 GB of IO (but that's still likely less than indexing from scratch requires).
The basic command syntax is as follows:
indexer --merge DSTINDEX SRCINDEX [--rotate]
Only the DSTINDEX index will be affected: the contents of SRCINDEX will be merged into it.
The --rotate switch is required if DSTINDEX is already being served by searchd.
The initially devised usage pattern is to merge a smaller update from SRCINDEX into DSTINDEX.
Thus, when merging the attributes, values from SRCINDEX will win if duplicate document IDs are encountered.
Note, however, that the "old" keywords will not be automatically removed in such cases.
For example, if there's a keyword "old" associated with document 123 in DSTINDEX, and a keyword "new" associated
with it in SRCINDEX, document 123 will be found by both keywords after the merge.
You can supply an explicit condition to remove documents from DSTINDEX to mitigate that; the relevant switch is --merge-dst-range:
indexer --merge main delta --merge-dst-range deleted 0 0
This switch lets you apply filters to the destination index along with merging. There can be several filters; all of their conditions must be met in order to include the document in the resulting merged index. In the example above, the filter passes only those records where 'deleted' is 0, eliminating all records that were flagged as deleted (for instance, using the UpdateAttributes() call).
Real-time indexes (or RT indexes for brevity) are a new backend that lets you insert, update, or delete documents (rows) on the fly. RT indexes were added in version 1.10-beta. While querying of RT indexes is possible using any of the SphinxAPI, SphinxQL, or SphinxSE, updating them is only possible via SphinxQL at the moment. Full SphinxQL reference is available in Chapter 7, SphinxQL reference.
RT indexes should be declared in sphinx.conf, just like every other index type. Notable differences from regular, disk-based indexes are that a) data sources are not required and are ignored, and b) you should explicitly enumerate all the text fields, not just the attributes. Here's an example:
Example 4.1. RT index declaration
index rt
{
    type = rt
    path = /usr/local/sphinx/data/rt
    rt_field = title
    rt_field = content
    rt_attr_uint = gid
}
As of 2.0.1-beta and above, RT indexes are production quality, despite a few missing features.
An RT index can be accessed using the MySQL protocol. INSERT, REPLACE, DELETE, and SELECT statements against RT indexes are supported. For instance, here is an example session with the sample index above:
$ mysql -h 127.0.0.1 -P 9306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 1.10-dev (r2153)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> INSERT INTO rt VALUES ( 1, 'first record', 'test one', 123 );
Query OK, 1 row affected (0.05 sec)

mysql> INSERT INTO rt VALUES ( 2, 'second record', 'test two', 234 );
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM rt;
+------+--------+------+
| id   | weight | gid  |
+------+--------+------+
|    1 |      1 |  123 |
|    2 |      1 |  234 |
+------+--------+------+
2 rows in set (0.02 sec)

mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+------+
| id   | weight | gid  |
+------+--------+------+
|    1 |   1643 |  123 |
|    2 |   1643 |  234 |
+------+--------+------+
2 rows in set (0.01 sec)

mysql> SELECT * FROM rt WHERE MATCH('@title test');
Empty set (0.00 sec)
Both partial and batch INSERT syntaxes are supported, ie. you can specify a subset of columns, and insert several rows at a time. Deletions are also possible using DELETE statement; the only currently supported syntax is DELETE FROM <index> WHERE id=<id>. REPLACE is also supported, enabling you to implement updates.
mysql> INSERT INTO rt ( id, title ) VALUES ( 3, 'third row' ), ( 4, 'fourth entry' );
Query OK, 2 rows affected (0.01 sec)

mysql> SELECT * FROM rt;
+------+--------+------+
| id   | weight | gid  |
+------+--------+------+
|    1 |      1 |  123 |
|    2 |      1 |  234 |
|    3 |      1 |    0 |
|    4 |      1 |    0 |
+------+--------+------+
4 rows in set (0.00 sec)

mysql> DELETE FROM rt WHERE id=2;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+------+
| id   | weight | gid  |
+------+--------+------+
|    1 |   1500 |  123 |
+------+--------+------+
1 row in set (0.00 sec)

mysql> INSERT INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
ERROR 1064 (42000): duplicate id '1'

mysql> REPLACE INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM rt WHERE MATCH('steroids');
+------+--------+------+
| id   | weight | gid  |
+------+--------+------+
|    1 |   1500 |  123 |
+------+--------+------+
1 row in set (0.01 sec)
Data stored in RT index should survive clean shutdown. When binary logging is enabled, it should also survive crash and/or dirty shutdown, and recover on subsequent startup.
RT indexes are currently a production quality feature, but there are still a few known usage quirks. Those quirks are listed in this section.
Prefix indexing is supported with dict = keywords starting with 2.0.2-beta. Infix indexing is experimental in trunk.
Disk chunks optimization routine is not implemented yet.
On initial index creation, attributes are reordered by type, in the following order: uint, bigint, float, timestamp, string. So when using INSERT without an explicit column names list, specify all uint column values first, then bigint, etc.
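To illustrate this reordering, suppose a hypothetical RT index declares attributes in this order: a float, then a uint, then a bigint (the names and values below are illustrative only). Internally they are reordered by type, so a column-less INSERT must follow the internal order:

```
# declaration order in sphinx.conf:
#   rt_attr_float  = score
#   rt_attr_uint   = gid
#   rt_attr_bigint = big_id
# internal storage order: gid (uint), big_id (bigint), score (float)

INSERT INTO rt2 VALUES ( 10, 'some title', 'some content', 123, 1000000000000, 0.5 );
```

Listing the column names explicitly in the INSERT avoids depending on this ordering entirely.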
The default conservative RAM chunk limit (rt_mem_limit) of 32M can lead to poor performance on bigger indexes; you should raise it to 256..1024M if you're planning to index gigabytes.
High DELETE/REPLACE rate can lead to kill-list fragmentation and impact searching performance.
No transaction size limits are currently imposed; too many concurrent INSERT/REPLACE transactions might therefore consume a lot of RAM.
In case of a damaged binlog, recovery will stop on the first damaged transaction, even though it's technically possible to keep looking further for subsequent undamaged transactions, and recover those. This mid-file damage case (due to flaky HDD/CDD/tape?) is supposed to be extremely rare, though.
Multiple INSERTs grouped in a single transaction perform better than equivalent single-row transactions and are recommended for batch loading of data.
RT index is internally chunked. It keeps a so-called RAM chunk that stores all the most recent changes. RAM chunk memory usage is rather strictly limited with per-index rt_mem_limit directive. Once RAM chunk grows over this limit, a new disk chunk is created from its data, and RAM chunk is reset. Thus, while most changes on the RT index will be performed in RAM only and complete instantly (in milliseconds), those changes that overflow the RAM chunk will stall for the duration of disk chunk creation (a few seconds).
Since version 2.1.1-beta, Sphinx uses double-buffering to avoid INSERT stalls. When data is being dumped to disk, the second buffer is used, so further INSERTs won't be delayed. The second buffer is defined to be 10% the size of the standard buffer, rt_mem_limit, but future versions of Sphinx may allow configuring this further.
Disk chunks are, in fact, just regular disk-based indexes. But they're a part of an RT index and automatically managed by it, so you need not configure nor manage them manually. Because a new disk chunk is created every time the RT chunk overflows the limit, and because the in-memory chunk format is close to the on-disk format, the disk chunks will each be approximately rt_mem_limit bytes in size. Generally, it is better to set the limit higher, to minimize both the frequency of flushes and the index fragmentation (number of disk chunks). For instance, on a dedicated search server that handles a big RT index, it may be advisable to set rt_mem_limit to 1-2 GB. A global limit on all indexes is also planned, but not yet implemented as of 1.10-beta.
Disk chunk full-text index data can not actually be modified, so full-text field changes (ie. row deletions and updates) suppress a previous row version from a disk chunk using a kill-list, but do not physically purge the data. Therefore, on workloads with a high full-text update ratio the index might eventually get polluted by these previous row versions, and searching performance would degrade. Physical index purging that would improve the performance is planned, but not yet implemented as of 1.10-beta.
Data in RAM chunk gets saved to disk on clean daemon shutdown, and then loaded back on startup. However, on daemon or server crash, updates from RAM chunk might be lost. To prevent that, binary logging of transactions can be used; see Section 4.4, “Binary logging” for details.
Full-text changes in RT index are transactional. They are stored in a per-thread accumulator until COMMIT, then applied at once. Bigger batches per single COMMIT should result in faster indexing.
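For instance, a batch load over SphinxQL might group several rows per COMMIT like this (index layout and values are illustrative, matching the sample rt index above):

```
mysql> BEGIN;
mysql> INSERT INTO rt VALUES ( 10, 'batch row one', 'test batch', 111 );
mysql> INSERT INTO rt VALUES ( 11, 'batch row two', 'test batch', 222 );
mysql> COMMIT;
```

Both INSERTs accumulate in the per-thread accumulator and become searchable together only when COMMIT is issued.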
Binary logs are essentially a recovery mechanism. With binary logs enabled, searchd writes every given transaction to the binlog file, and uses that for recovery after an unclean shutdown. On clean shutdown, RAM chunks are saved to disk, and then all the binlog files are unlinked.
During normal operation, a new binlog file will be opened every time the binlog_max_log_size limit is reached. Older, already closed binlog files are kept until all of the transactions stored in them (from all indexes) are flushed as a disk chunk. Setting the limit to 0 pretty much prevents the binlog from being unlinked at all while searchd is running; however, it will still be unlinked on clean shutdown. (This is the default case as of 2.0.3-release; binlog_max_log_size defaults to 0.)
There are 3 different binlog flushing strategies, controlled by the binlog_flush directive, which takes the values 0, 1, or 2. 0 means to flush the log to OS and sync it to disk every second; 1 means to flush and sync every transaction; and 2 (the default) means to flush every transaction but sync every second. Sync is relatively slow because it has to perform physical disk writes, so mode 1 is the safest (every committed transaction is guaranteed to be written to disk) but the slowest. Flushing the log to OS prevents data loss on searchd crashes, but not on system crashes.
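A corresponding sphinx.conf fragment might look as follows (the binlog path is illustrative):

```
searchd
{
    # ...
    binlog_path  = /var/data/binlog
    # 0 = flush + sync every second
    # 1 = flush + sync every transaction (safest, slowest)
    # 2 = flush every transaction, sync every second (default)
    binlog_flush = 2
}
```

Mode 2 is usually a reasonable tradeoff: at most the last second of committed transactions is at risk on a system crash, without paying for a physical disk sync on every COMMIT.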
On recovery after an unclean shutdown, binlogs are replayed and all logged transactions since the last good on-disk state are restored. Transactions are checksummed, so in case of binlog file corruption garbage data will not be replayed; such a broken transaction will be detected and, currently, will stop replay. Transactions also start with a magic marker and are timestamped, so in case of binlog damage in the middle of the file it's technically possible to skip broken transactions and keep replaying from the next good one, and/or to replay transactions only up to a given timestamp (point-in-time recovery), but none of that is implemented yet as of 1.10-beta.
One unwanted side effect of binlogs is that actively updating a small RT index that fully fits into the RAM chunk will lead to an ever-growing binlog that can never be unlinked until clean shutdown. Binlogs are essentially append-only deltas against the last known good saved state on disk, and unless the RAM chunk gets saved, they can not be unlinked. An ever-growing binlog is not very good for disk use and crash recovery time. Starting with 2.0.1-beta you can configure searchd to perform a periodic RAM chunk flush to fix that problem, using the rt_flush_period directive. With periodic flushes enabled, searchd will keep a separate thread checking whether RT index RAM chunks need to be written back to disk. Once that happens, the respective binlogs can be (and are) safely unlinked.

Note that rt_flush_period only controls the frequency at which the checks happen. There are no guarantees that a particular RAM chunk will get saved. For instance, it does not make sense to regularly re-save a huge RAM chunk that only gets a few rows' worth of updates. The search daemon determines whether to actually perform the flush using a few heuristics.
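A minimal sketch of enabling periodic flushes (the period value is illustrative; rt_flush_period takes seconds):

```
searchd
{
    # ...
    # check roughly every 15 minutes (900 seconds) whether RAM chunks
    # should be saved to disk so that old binlogs can be unlinked
    rt_flush_period = 900
}
```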
So-called matching modes are a legacy feature that used to provide (very) limited query syntax and ranking support. Currently, they are deprecated in favor of the full-text query language and so-called rankers. Starting with version 0.9.9-release, it is thus strongly recommended to use SPH_MATCH_EXTENDED and proper query syntax rather than any other legacy mode. All those other modes are actually internally converted to extended syntax anyway. SphinxAPI still defaults to SPH_MATCH_ALL, but that is for compatibility reasons only.
There are the following matching modes available:
SPH_MATCH_ALL, matches all query words (default mode);
SPH_MATCH_ANY, matches any of the query words;
SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;
SPH_MATCH_BOOLEAN, matches query as a boolean expression (see Section 5.2, “Boolean query syntax”);
SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language (see Section 5.3, “Extended query syntax”);
SPH_MATCH_EXTENDED2, an alias for SPH_MATCH_EXTENDED;
SPH_MATCH_FULLSCAN, matches the query, forcibly using the "full scan" mode as described below. Note that in this mode any query terms will be ignored: filters, filter ranges, and grouping will still be applied, but no text matching will be performed.
SPH_MATCH_EXTENDED2 was used during 0.9.8 and 0.9.9 development cycle, when the internal matching engine was being rewritten (for the sake of additional functionality and better performance). By 0.9.9-release, the older version was removed, and SPH_MATCH_EXTENDED and SPH_MATCH_EXTENDED2 are now just aliases.
The SPH_MATCH_FULLSCAN mode will be automatically activated in place of the specified matching mode when the following conditions are met:
The query string is empty (ie. its length is zero).
docinfo storage is set to extern.
In full scan mode, all the indexed documents will be considered as matching.
Such queries will still apply filters, sorting, and group by, but will not perform any full-text searching.
This can be useful to unify full-text and non-full-text searching code, or to offload SQL server
(there are cases when Sphinx scans will perform better than analogous MySQL queries).
An example of using the full scan mode might be to find posts in a forum. By selecting the forum's user ID via SetFilter() but not actually providing any search text, Sphinx will match every document (i.e. every post) where SetFilter() would match: in this case, every post from that user. By default the results will be ordered by relevancy, followed by Sphinx document ID in ascending order (earliest first).
Boolean queries allow the following special operators to be used:
explicit operator AND:
hello & world
operator OR:
hello | world
operator NOT:
hello -world
hello !world
grouping:
( hello world )
Here's an example query which uses all these operators:
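For instance (an illustrative query combining AND, OR, NOT, and grouping):

```
( cat -dog ) | ( cat -mouse )
```

This matches documents that contain 'cat' but not 'dog', or contain 'cat' but not 'mouse'.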
There always is an implicit AND operator, so the "hello world" query actually means "hello & world".
OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".
Since version 2.1.1-beta, queries may be automatically optimized if OPTION boolean_simplify=1 is specified. Some transformations performed by this optimization include:
Excess brackets: ((A | B) | C) becomes ( A | B | C ); ((A B) C) becomes ( A B C )
Excess AND NOT: ((A !N1) !N2) becomes (A !(N1 | N2))
Common NOT: ((A !N) | (B !N)) becomes ((A|B) !N)
Common Compound NOT: ((A !(N AA)) | (B !(N BB))) becomes (((A|B) !N) | (A !AA) | (B !BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B
Common subterm: ((A (N | AA)) | (B (N | BB))) becomes (((A|B) N) | (A AA) | (B BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B
Common keywords: (A | "A B"~N) becomes A; ("A B" | "A B C") becomes "A B"; ("A B"~N | "A B C"~N) becomes ("A B"~N)
Common phrase: ("X A B" | "Y A B") becomes (("X|Y") "A B")
Common AND NOT: ((A !X) | (A !Y) | (A !Z)) becomes (A !(X Y Z))
Common OR NOT: ((A !(N | N1)) | (B !(N | N2))) becomes (( (A !N1) | (B !N2) ) !N)
Note that optimizing queries consumes CPU time, so for simple queries (or for hand-optimized queries) you'll do better with the default boolean_simplify=0 value. Simplifications are often beneficial for complex queries, or for algorithmically generated queries.
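Over SphinxQL, the option is passed per query; a hedged sketch (the index name is hypothetical):

```
SELECT * FROM myindex WHERE MATCH('((hello | world) | test)')
OPTION boolean_simplify=1;
```

Here the excess-brackets transformation would reduce the match expression to the flat (hello | world | test) before evaluation.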
Queries like "-dog", which implicitly include all documents from the collection, can not be evaluated. This is both for technical and performance reasons. Technically, Sphinx does not always keep a list of all IDs. Performance-wise, when the collection is huge (ie. 10-100M documents), evaluating such queries could take very long.
The following special operators and modifiers can be used when using the extended matching mode:
operator OR:
hello | world
operator NOT:
hello -world
hello !world
field search operator:
@title hello @body world
field position limit modifier (introduced in version 0.9.9-rc1):
@body[50] hello
multiple-field search operator:
@(title,body) hello world
ignore field search operator (will ignore any matches of 'hello world' from field 'title'):
@!title hello world
ignore multiple-field search operator (if we have fields title, subject and body then @!(title) is equivalent to @(subject,body)):
@!(title,body) hello world
all-field search operator:
@* hello
phrase search operator:
"hello world"
proximity search operator:
"hello world"~10
quorum matching operator:
"the world is a wonderful place"/3
strict order operator (aka operator "before"):
aaa << bbb << ccc
exact form modifier (introduced in version 0.9.9-rc1):
raining =cats and =dogs
field-start and field-end modifier (introduced in version 0.9.9-rc2):
^hello world$
NEAR, generalized proximity operator (introduced in version 2.0.1-beta):
hello NEAR/3 world NEAR/4 "my test"
SENTENCE operator (introduced in version 2.0.1-beta):
all SENTENCE words SENTENCE "in one sentence"
PARAGRAPH operator (introduced in version 2.0.1-beta):
"Bill Gates" PARAGRAPH "Steve Jobs"
ZONE limit operator:
ZONE:(h3,h4)
only in these titles
ZONESPAN limit operator:
ZONESPAN:(h2)
only in a (single) title
Here's an example query that uses some of these operators:
Example 5.2. Extended matching mode: query example
"hello world" @title "example program"~5 @body python -(php|perl) @* code
The full meaning of this search is:
Find the words 'hello' and 'world' adjacently in any field in a document;
Additionally, the same document must also contain the words 'example' and 'program' in the title field, with up to, but not including, 5 words between them (e.g. "example PHP program" would match, but "example script to introduce outside data into the correct context for your program" would not, because the two terms have 5 or more words between them);
Additionally, the same document must contain the word 'python' in the body field, but not contain either 'php' or 'perl';
Additionally, the same document must contain the word 'code' in any field.
There always is an implicit AND operator, so "hello world" means that both "hello" and "world" must be present in the matching document.
OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".
Field limit operator limits subsequent searching to a given field. Normally, query will fail with an error message if given field name does not exist in the searched index. However, that can be suppressed by specifying "@@relaxed" option at the very beginning of the query:
@@relaxed @nosuchfield my query
This can be helpful when searching through heterogeneous indexes with different schemas.
Field position limit, introduced in version 0.9.9-rc1, additionally restricts the search to the first N positions within a given field (or fields). For example, "@body[50] hello" will not match documents where the keyword 'hello' occurs at position 51 or later in the body.
Proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. For instance, the "cat dog mouse"~5 query means that there must be a span of less than 8 words which contains all 3 words; ie. a "CAT aaa bbb ccc DOG eee fff MOUSE" document will not match this query, because that span is exactly 8 words long.
Quorum matching operator introduces a kind of fuzzy matching. It will only match those documents that pass a given threshold of given words. The example above ("the world is a wonderful place"/3) will match all documents that have at least 3 of the 6 specified words. Operator is limited to 255 keywords. Instead of an absolute number, you can also specify a number between 0.0 and 1.0 (standing for 0% and 100%), and Sphinx will match only documents with at least the specified percentage of given words. The same example above could also have been written "the world is a wonderful place"/0.5 and it would match documents with at least 50% of the 6 words.
Strict order operator (aka operator "before"), introduced in version 0.9.9-rc2, will match the document only if its argument keywords occur in the document exactly in the query order. For instance, "black << cat" query (without quotes) will match the document "black and white cat" but not the "that cat was black" document. Order operator has the lowest priority. It can be applied both to individual keywords and to more complex expressions, ie. this is a valid query:
(bag of words) << "exact phrase" << red|green|blue
Exact form keyword modifier, introduced in version 0.9.9-rc1, will match the document only if the keyword occurred in exactly the specified form. The default behaviour is to match the document if the stemmed keyword matches. For instance, "runs" query will match both the document that contains "runs" and the document that contains "running", because both forms stem to just "run" - while "=runs" query will only match the first document. Exact form operator requires index_exact_words option to be enabled. This is a modifier that affects the keyword and thus can be used within operators such as phrase, proximity, and quorum operators.
Field-start and field-end keyword modifiers, introduced in version 0.9.9-rc2, will make the keyword match only if it occurred at the very start or the very end of a fulltext field, respectively. For instance, the query "^hello world$" (with quotes and thus combining phrase operator and start/end modifiers) will only match documents that contain at least one field that has exactly these two keywords.
Starting with 0.9.9-rc1, arbitrarily nested brackets and negations are allowed. However, the query must be possible to compute without involving an implicit list of all documents:
// correct query
aaa -(bbb -(ccc ddd))

// queries that are non-computable
-aaa
aaa | -bbb
NEAR operator, added in 2.0.1-beta, is a generalized version of a proximity operator. The syntax is NEAR/N; it is case-sensitive, and no spaces are allowed between the NEAR keyword, the slash sign, and the distance value.
The original proximity operator only worked on sets of keywords. NEAR is more generic and can accept arbitrary subexpressions as its two arguments, matching the document when both subexpressions are found within N words of each other, no matter in which order. NEAR is left associative and has the same (lowest) precedence as BEFORE.
You should also note how a (one NEAR/7 two NEAR/7 three) query using NEAR is not really equivalent to a ("one two three"~7) one using the keyword proximity operator. The difference here is that the proximity operator allows for up to 6 non-matching words between all the 3 matching words, but the version with NEAR is less restrictive: it would allow for up to 6 words between 'one' and 'two' and then for up to 6 more between that two-word match and the 'three' keyword.
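The distinction can be modeled with keyword positions. The sketch below is a simplification (made-up helper names, and NEAR chaining reduced to pairwise checks), but it reproduces the example above:

```python
from itertools import product

def min_span(positions):
    """Smallest window, in words, covering one occurrence of each keyword.
    `positions` holds one list of 1-based positions per keyword."""
    return min(max(c) - min(c) + 1 for c in product(*positions))

def proximity_match(positions, n):
    # "a b c"~n: the covering span must be shorter than n + keyword count
    return min_span(positions) < n + len(positions)

def near_match(pos_a, pos_b, n):
    # a NEAR/n b: some occurrence of each side within n words of the other
    return any(abs(a - b) <= n for a in pos_a for b in pos_b)

# 'one' at position 1, 'two' at 8, 'three' at 15
one, two, three = [1], [8], [15]
print(proximity_match([one, two, three], 7))                  # False: 15-word span
print(near_match(one, two, 7) and near_match(two, three, 7))  # True
```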
SENTENCE and PARAGRAPH operators, added in 2.0.1-beta, match the document when both their arguments are within the same sentence or the same paragraph of text, respectively. The arguments can be either keywords, or phrases, or instances of the same operator. Here are a few examples:
one SENTENCE two
one SENTENCE "two three"
one SENTENCE "two three" SENTENCE four
The order of the arguments within the sentence or paragraph does not matter. These operators only work on indexes built with index_sp (sentence and paragraph indexing feature) enabled, and revert to a mere AND otherwise. Refer to the index_sp directive documentation for the notes on what's considered a sentence and a paragraph.
ZONE limit operator, added in 2.0.1-beta, is quite similar to the field limit operator, but restricts matching to a given in-field zone or a list of zones. Note that the subsequent subexpressions are not required to match in a single contiguous span of a given zone, and may match in multiple spans. For instance, the (ZONE:th hello world) query will match this example document:
<th>Table 1. Local awareness of Hello Kitty brand.</th>
.. some table data goes here ..
<th>Table 2. World-wide brand awareness.</th>
ZONE operator affects the query until the next field or ZONE limit operator, or the closing parenthesis. It only works on the indexes built with zones support (see Section 11.2.9, “index_zones”) and will be ignored otherwise.
ZONESPAN limit operator, added in 2.1.1-beta, is similar to the ZONE operator, but requires the match to occur in a single contiguous span. In the example above, (ZONESPAN:th hello world) would not match the document, since "hello" and "world" do not occur within the same span.
Ranking (aka weighting) of the search results can be defined as a process of computing a so-called relevance (aka weight) for every given matched document with regards to a given query that matched it. So relevance is in the end just a number attached to every document that estimates how relevant the document is to the query. Search results can then be sorted based on this number and/or some additional parameters, so that the most sought after results would come up higher on the results page.
There is no single standard one-size-fits-all way to rank any document in any scenario. Moreover, there can never be such a way, because relevance is subjective. What seems relevant to you might not seem relevant to me. Hence, in the general case it's not just hard to compute, it's theoretically impossible.
So ranking in Sphinx is configurable. It has a notion of a so-called ranker. A ranker can formally be defined as a function that takes a document and a query as its input and produces a relevance value as output. In layman's terms, a ranker controls exactly how (that is, using which specific algorithm) Sphinx will assign weights to the documents.
Previously, this ranking function was rigidly bound to the matching mode. So in the legacy matching modes (that is, SPH_MATCH_ALL, SPH_MATCH_ANY, SPH_MATCH_PHRASE, and SPH_MATCH_BOOLEAN) you can not choose the ranker. You can only do that in the SPH_MATCH_EXTENDED mode. (Which is the only mode in SphinxQL and the suggested mode in SphinxAPI anyway.) To choose a non-default ranker you can either use SetRankingMode() with SphinxAPI, or the OPTION ranker clause in a SELECT statement when using SphinxQL.
As a sidenote, legacy matching modes are internally implemented via the unified syntax anyway. When you use one of those modes, Sphinx just internally adjusts the query and sets the associated ranker, then executes the query using the very same unified code path.
Sphinx ships with a number of built-in rankers suited for different purposes. A number of them use two factors, phrase proximity (aka LCS) and BM25. Phrase proximity works on the keyword positions, while BM25 works on the keyword frequencies. Basically, the better the degree of the phrase match between the document body and the query, the higher the phrase proximity (it maxes out when the document contains the entire query as a verbatim quote). And BM25 is higher when the document contains more rare words. We'll save the detailed discussion for later.
Currently implemented rankers are:
SPH_RANK_PROXIMITY_BM25, the default ranking mode that uses and combines both phrase proximity and BM25 ranking.
SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to most other full-text engines). This mode is faster but may result in worse quality on queries which contain more than 1 keyword.
SPH_RANK_NONE, no ranking mode. This mode is obviously the fastest. A weight of 1 is assigned to all matches. This is sometimes called boolean searching that just matches the documents but does not rank them.
SPH_RANK_WORDCOUNT, ranking by the keyword occurrences count. This ranker computes the per-field keyword occurrence counts, then multiplies them by field weights, and sums the resulting values.
SPH_RANK_PROXIMITY, added in version 0.9.9-rc1, returns raw phrase proximity value as a result. This mode is internally used to emulate SPH_MATCH_ALL queries.
SPH_RANK_MATCHANY, added in version 0.9.9-rc1, returns rank as it was computed in SPH_MATCH_ANY mode earlier, and is internally used to emulate SPH_MATCH_ANY queries.
SPH_RANK_FIELDMASK, added in version 0.9.9-rc2, returns a 32-bit mask with the N-th bit corresponding to the N-th fulltext field, numbering from 0. The bit will only be set when the respective field has any keyword occurrences satisfying the query.
SPH_RANK_SPH04, added in version 1.10-beta, is generally based on the default SPH_RANK_PROXIMITY_BM25 ranker, but additionally boosts the matches when they occur in the very beginning or the very end of a text field. Thus, if a field equals the exact query, SPH04 should rank it higher than a field that contains the exact query but is not equal to it. (For instance, when the query is "Hyde Park", a document entitled "Hyde Park" should be ranked higher than a one entitled "Hyde Park, London" or "The Hyde Park Cafe".)
SPH_RANK_EXPR, added in version 2.0.2-beta, lets you specify the ranking formula at run time. It exposes a number of internal text factors and lets you define how the final weight should be computed from those factors. You can find more details about its syntax and a reference of available factors in a subsection below.
You should specify the SPH_RANK_ prefix and use capital letters only when using the SetRankingMode() call from the SphinxAPI. The API ports expose these as global constants. Using SphinxQL syntax, the prefix should be omitted and the ranker name is case insensitive. Example:
// SphinxAPI
$client->SetRankingMode ( SPH_RANK_SPH04 );

// SphinxQL
mysql_query ( "SELECT ... OPTION ranker=sph04" );
Legacy matching modes automatically select a ranker as follows:
SPH_MATCH_ALL uses SPH_RANK_PROXIMITY ranker;
SPH_MATCH_ANY uses SPH_RANK_MATCHANY ranker;
SPH_MATCH_PHRASE uses SPH_RANK_PROXIMITY ranker;
SPH_MATCH_BOOLEAN uses SPH_RANK_NONE ranker.
Expression ranker, added in version 2.0.2-beta, lets you change the ranking formula on the fly, on a per-query basis. For a quick kickoff, this is how you emulate PROXIMITY_BM25 ranker using the expression based one:
SELECT *, WEIGHT() FROM myindex WHERE MATCH('hello world') OPTION ranker=expr('sum(lcs*user_weight)*1000+bm25')
The output of this query must not change if you omit the OPTION clause, because the default ranker (PROXIMITY_BM25) behaves exactly as specified in the ranker formula above. But the expression ranker is somewhat more flexible than just that and provides access to many more factors.
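For illustration, here is how that default formula combines the factors, as a toy Python model. The factor values below are made up; in Sphinx they come from the ranking engine:

```python
def proximity_bm25(matched_fields, bm25):
    """Emulates sum(lcs*user_weight)*1000+bm25 over per-field factors."""
    return sum(f['lcs'] * f['user_weight'] for f in matched_fields) * 1000 + bm25

# one matched field with lcs=2 and the default field weight of 1
print(proximity_bm25([{'lcs': 2, 'user_weight': 1}], 123))  # 2123
```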
The ranking formula is an arbitrary arithmetic expression that can use constants, document attributes, built-in functions and operators (described in Section 5.5, “Expressions, functions, and operators”), and also a few ranking-specific things that are only accessible in a ranking formula. Namely, those are field aggregation functions, field-level, and document-level ranking factors.
A document-level factor is a numeric value computed by the ranking engine for every matched document with regards to the current query. (So it differs from a plain document attribute in that attributes do not depend on the full-text query, while factors might.) Those factors can be used anywhere in the ranking expression. Currently implemented document-level factors are:
bm25 (integer), a document-level BM25 estimate (computed without keyword occurrence filtering).
max_lcs (integer), a query-level maximum possible value that the sum(lcs*user_weight) expression can ever take. This can be useful for weight boost scaling. For instance, the MATCHANY ranker formula uses this to guarantee that a full phrase match in any field ranks higher than any combination of partial matches in all fields.
field_mask (integer), a document-level 32-bit mask of matched fields.
query_word_count (integer), the number of unique keywords in a query, adjusted for the number of excluded keywords. For instance, both (one one one one) and (one !two) queries should assign a value of 1 to this factor, because there is just one unique non-excluded keyword.
doc_word_count (integer), the number of unique keywords matched in the entire document.
A field-level factor is a numeric value computed by the ranking engine for every matched in-document text field with regards to the current query. As more than one field can be matched by a query, but the final weight needs to be a single integer value, these values need to be folded into a single one. To achieve that, field-level factors can only be used within a field aggregation function; they can not be used anywhere else in the expression. For example, you can not use (lcs+bm25) as your ranking expression, as lcs takes multiple values (one in every matched field). You should use (sum(lcs)+bm25) instead; that expression sums lcs over all matching fields, and then adds bm25 to that per-field sum.
Currently implemented field-level factors are:
lcs (integer), the length of a maximum verbatim match between the document and the query, counted in words. LCS stands for Longest Common Subsequence (or Subset). It takes a minimum value of 1 when only stray keywords were matched in a field, and a maximum value of the query keyword count when the entire query was matched in a field verbatim (in the exact query keyword order). For example, if the query is 'hello world' and the field contains these two words quoted from the query (that is, adjacent to each other, and exactly in the query order), lcs will be 2. For example, if the query is 'hello world program' and the field contains 'hello world', lcs will be 2. Note that any subset of the query keywords works, not just a subset of adjacent keywords. For example, if the query is 'hello world program' and the field contains 'hello (test program)', lcs will be 2 just as well, because both 'hello' and 'program' matched in the same respective positions as they were in the query. Finally, if the query is 'hello world program' and the field contains 'hello world program', lcs will be 3. (Hopefully that is unsurprising at this point.)
user_weight (integer), the user-specified per-field weight (refer to SetFieldWeights() in SphinxAPI and OPTION field_weights in SphinxQL respectively). The weights default to 1 if not specified explicitly.
hit_count (integer), the number of keyword occurrences that matched in the field. Note that a single keyword may occur multiple times. For example, if 'hello' occurs 3 times in a field and 'world' occurs 5 times, hit_count will be 8.
word_count (integer), the number of unique keywords matched in the field. For example, if 'hello' and 'world' occur anywhere in a field, word_count will be 2, regardless of how many times both keywords occur.
tf_idf (float), the sum of TF*IDF over all the keywords matched in the field. IDF is the Inverse Document Frequency, a floating point value between 0 and 1 that describes how frequent the keyword is (basically, 0 for a keyword that occurs in every document indexed, and 1 for a unique keyword that occurs in just a single document). TF is the Term Frequency, the number of matched keyword occurrences in the field. As a side note, tf_idf is actually computed by summing IDF over all matched occurrences. That's by construction equivalent to summing TF*IDF over all matched keywords.
min_hit_pos (integer), the position of the first matched keyword occurrence, counted in words. Indexing begins from position 1.
min_best_span_pos (integer), the position of the first maximum LCS occurrences span. For example, assume that our query was 'hello world program' and the 'hello world' subphrase was matched twice in the field, in positions 13 and 21. Assume that 'hello' and 'world' additionally occurred elsewhere in the field, but never next to each other and thus never as a subphrase match. In that case, min_best_span_pos will be 13. Note how for single keyword queries min_best_span_pos will always equal min_hit_pos.
exact_hit (boolean), whether the query was an exact match of the entire current field. Used in the SPH04 ranker.
min_idf, max_idf, and sum_idf were added in version 2.1.1-beta. These factors respectively represent the min(idf), max(idf) and sum(idf) over all the keywords that were matched.
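To make the definitions above concrete, here is a toy Python computation of a few of these field-level factors for a single field. The helper name is made up, and the IDF values are supplied directly rather than computed from a document collection:

```python
def field_factors(field_words, query_words, idf):
    """Compute hit_count, word_count, min_hit_pos, tf_idf and the IDF
    aggregates for one field; `idf` maps keywords to precomputed IDFs."""
    matched = [(pos + 1, w) for pos, w in enumerate(field_words)
               if w in query_words]
    uniq = {w for _, w in matched}
    return {
        'hit_count': len(matched),
        'word_count': len(uniq),
        'min_hit_pos': min((p for p, _ in matched), default=0),
        # tf_idf sums IDF over every matched occurrence
        'tf_idf': sum(idf[w] for _, w in matched),
        'min_idf': min((idf[w] for w in uniq), default=0.0),
        'max_idf': max((idf[w] for w in uniq), default=0.0),
        'sum_idf': sum(idf[w] for w in uniq),
    }

f = field_factors("hello brave new world hello".split(),
                  {"hello", "world"}, {"hello": 0.5, "world": 0.9})
print(f['hit_count'], f['word_count'], f['min_hit_pos'])  # 3 2 1
```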
A field aggregation function is a single argument function that takes an expression with field-level factors, iterates it over all the matched fields, and computes the final results. Currently implemented field aggregation functions are:
sum, sums the argument expression over all matched fields. For instance, sum(1) should return the number of matched fields.
Most of the other rankers can actually be emulated with the expression based ranker. You just need to pass a proper expression. Such emulation is, of course, going to be slower than using the built-in, compiled ranker but still might be of interest if you want to fine-tune your ranking formula starting with one of the existing ones. Also, the formulas define the nitty gritty ranker details in a nicely readable fashion.
SPH_RANK_PROXIMITY_BM25 = sum(lcs*user_weight)*1000+bm25
SPH_RANK_BM25 = bm25
SPH_RANK_NONE = 1
SPH_RANK_WORDCOUNT = sum(hit_count*user_weight)
SPH_RANK_PROXIMITY = sum(lcs*user_weight)
SPH_RANK_MATCHANY = sum((word_count+(lcs-1)*max_lcs)*user_weight)
SPH_RANK_FIELDMASK = field_mask
SPH_RANK_SPH04 = sum((4*lcs+2*(min_hit_pos==1)+exact_hit)*user_weight)*1000+bm25
Sphinx lets you use arbitrary arithmetic expressions both via SphinxQL and SphinxAPI, involving attribute values, internal attributes (document ID and relevance weight), arithmetic operations, a number of built-in functions, and user-defined functions. This section documents the supported operators and functions. Here's the complete reference list for quick access.
The standard arithmetic operators. Arithmetic calculations involving these can be performed in three different modes: (a) using single-precision, 32-bit IEEE 754 floating point values (the default), (b) using signed 32-bit integers, (c) using 64-bit signed integers. The expression parser will automatically switch to integer mode if there are no operations that result in a floating point value. Otherwise, it will use the default floating point mode. For instance, a+b will be computed using 32-bit integers if both arguments are 32-bit integers; or using 64-bit integers if both arguments are integers but one of them is 64-bit; or in floats otherwise. However, a/b or sqrt(a) will always be computed in floats, because these operations return a result of non-integer type. To avoid the former, you can use either the IDIV(a,b) or the a DIV b form. Also, a*b will not be automatically promoted to 64-bit when the arguments are 32-bit. To enforce 64-bit results, you can use BIGINT(). (But note that if there are non-integer operations, BIGINT() will simply be ignored.)
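The promotion rules just described can be summarized with a toy type-inference function. This is a model of the documented behavior, not the actual parser:

```python
def result_type(op, a_type, b_type):
    """Infer the evaluation mode: 'int32', 'int64' or 'float'."""
    if op in ('/', 'sqrt'):            # non-integer operations force floats
        return 'float'
    if 'float' in (a_type, b_type):
        return 'float'
    if 'int64' in (a_type, b_type):    # widen to 64-bit if either side is
        return 'int64'
    return 'int32'                     # note: a*b is NOT auto-promoted

print(result_type('+', 'int32', 'int64'))  # int64
print(result_type('/', 'int32', 'int32'))  # float
```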
Comparison operators (eg. = or <=) return 1.0 when the condition is true and 0.0 otherwise.
For instance, (a=b)+3
will evaluate to 4 when attribute 'a' is equal to attribute 'b', and to 3 when 'a' is not.
Unlike MySQL, the equality comparisons (ie. = and <> operators) introduce a small equality threshold (1e-6 by default).
If the difference between compared values is within the threshold, they will be considered equal.
Boolean operators (AND, OR, NOT) were introduced in 0.9.9-rc2 and behave as usual. They are left-associative and have the least priority compared to other operators. NOT has more priority than AND and OR but nevertheless less than any other operator. AND and OR have the same priority so brackets use is recommended to avoid confusion in complex expressions.
These operators perform bitwise AND and OR respectively. The operands must be of an integer type. Introduced in version 1.10-beta.
Returns the absolute value of the argument.
BITDOT(mask, w0, w1, ...) returns the sum of products of each bit of a mask multiplied by its weight:
bit0*w0 + bit1*w1 + ...
Returns the smallest integer value greater than or equal to the argument.
CONTAINS(polygon, x, y) checks whether the (x,y) point is within the given polygon, and returns 1 if true, or 0 if false. The polygon has to be specified using either the POLY2D() function or the GEOPOLY2D() function. The former function is intended for "small" polygons, meaning less than 500 km (300 miles) a side, and it doesn't take into account the Earth's curvature for speed. For larger distances, you should use GEOPOLY2D, which tessellates the given polygon in smaller parts, accounting for the Earth's curvature. These functions were added in version 2.1.1-beta.
Returns the cosine of the argument.
Returns the exponent of the argument (e=2.718... to the power of the argument).
Returns the N-th Fibonacci number, where N is the integer argument. That is, arguments of 0 and up will generate the values 0, 1, 1, 2, 3, 5, 8, 13 and so on. Note that the computations are done using 32-bit integer math and thus the 48th and subsequent numbers will be returned modulo 2^32.
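The 32-bit wraparound is easy to model in Python (a sketch of the described behavior):

```python
def fibonacci(n):
    """FIBONACCI(n) with 32-bit wraparound, as described above."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) & 0xFFFFFFFF
    return a

print([fibonacci(i) for i in range(8)])      # [0, 1, 1, 2, 3, 5, 8, 13]
print(fibonacci(48) == 4807526976 % 2**32)   # True: wrapped past 2^32
```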
Returns the largest integer value less than or equal to the argument.
GEOPOLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the CONTAINS() function. This function takes into account the Earth's curvature by tessellating the polygon into smaller ones, and should be used for larger areas; see also the POLY2D() function.
Returns the result of an integer division of the first argument by the second argument. Both arguments must be of an integer type.
Returns the natural logarithm of the argument (with the base of e=2.718...).
Returns the common logarithm of the argument (with the base of 10).
Returns the binary logarithm of the argument (with the base of 2).
Returns the bigger of two arguments.
Returns the smaller of two arguments.
POLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the CONTAINS() function. This polygon assumes a flat Earth, so it should not be too large; see the GEOPOLY2D() function.
Returns the first argument raised to the power of the second argument.
Returns the sine of the argument.
Returns the square root of the argument.
Returns the integer day of month (in 1..31 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.
Returns the integer month (in 1..12 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.
Returns the current timestamp as an INTEGER. Introduced in version 0.9.9-rc1.
Returns the integer year (in 1969..2038 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.
Returns the integer year and month code (in 196912..203801 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.
Returns the integer year, month, and date code (in 19691231..20380119 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.
Forcibly promotes the integer argument to 64-bit type, and does nothing on a floating point argument. It's intended to help enforce evaluation of certain expressions (such as a*b) in 64-bit mode even though all the arguments are 32-bit. Introduced in version 0.9.9-rc1.
Forcibly reinterprets its 32-bit unsigned integer argument as signed, and also expands it to 64-bit type (because 32-bit type is unsigned). It's easily illustrated by the following example: 1-2 normally evaluates to 4294967295, but SINT(1-2) evaluates to -1. Introduced in version 1.10-beta.
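The reinterpretation can be sketched as follows (Python ints standing in for the 64-bit result type):

```python
def sint(x):
    """SINT(): reinterpret the low 32 bits as a signed 32-bit value."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

# 1-2 in unsigned 32-bit arithmetic is 4294967295; SINT() recovers -1
print(sint((1 - 2) & 0xFFFFFFFF))  # -1
```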
IF() behavior is slightly different from that of its MySQL counterpart. It takes 3 arguments, checks whether the 1st argument is equal to 0.0, and returns the 2nd argument if it is not zero, or the 3rd one when it is. Note that unlike comparison operators, IF() does not use a threshold! Therefore, it's safe to use comparison results as its 1st argument, but arithmetic operators might produce unexpected results. For instance, the following two calls will produce different results even though they are logically equivalent:
IF ( sqrt(3)*sqrt(3)-3<>0, a, b )
IF ( sqrt(3)*sqrt(3)-3, a, b )
In the first case, the comparison operator <> will return 0.0 (false) because of the threshold, and IF() will always return 'b' as a result. In the second one, the same sqrt(3)*sqrt(3)-3 expression will be compared with zero without the threshold by the IF() function itself. But its value will be slightly different from zero because of limited floating point calculation precision. Because of that, the comparison with 0.0 done by IF() will not pass, and the second variant will return 'a' as a result.
IN(expr,val1,val2,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns 1 if the 1st argument (expr) is equal to any of the other arguments (val1..valN), or 0 otherwise. Currently, all the checked values (but not the expression itself!) are required to be constant. (It's technically possible to implement arbitrary expressions too, and that might be done in the future.) Constants are pre-sorted and then binary search is used, so IN() even against a big arbitrary list of constants will be very quick. Starting with 0.9.9-rc2, the first argument can also be an MVA attribute. In that case, IN() will return 1 if any of the MVA values is equal to any of the other arguments. Starting with 2.0.1-beta, IN() also supports the IN(expr,@uservar) syntax to check whether the value belongs to the list in the given global user variable.
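The pre-sort-then-binary-search approach can be sketched with Python's bisect module (the helper name is made up):

```python
import bisect

def make_in(*vals):
    """Build an IN() checker: sort the constants once, then use
    binary search for each evaluation."""
    sorted_vals = sorted(vals)
    def check(expr):
        i = bisect.bisect_left(sorted_vals, expr)
        return 1 if i < len(sorted_vals) and sorted_vals[i] == expr else 0
    return check

in_check = make_in(7, 3, 42, 19)
print(in_check(42), in_check(5))  # 1 0
```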
INTERVAL(expr,point1,point2,point3,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns the index of the argument that is less than the first argument: it returns 0 if expr<point1, 1 if point1<=expr<point2, and so on. It is required that point1<point2<...<pointN for this function to work correctly.
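Since the points must be sorted ascending, INTERVAL() is effectively a binary search for the bracket the expression falls in; a Python sketch:

```python
import bisect

def interval(expr, *points):
    """INTERVAL(expr, p1, p2, ...): 0 if expr < p1, 1 if p1 <= expr < p2,
    and so on; requires p1 < p2 < ... < pN."""
    return bisect.bisect_right(points, expr)

print(interval(3, 1, 5, 7))   # 1
print(interval(0, 1, 5, 7))   # 0
print(interval(9, 1, 5, 7))   # 3
```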
Returns the CRC32 value of a string argument. Introduced in version 2.0.1-beta.
GEODIST(lat1,long1,lat2,long2) function, introduced in version 0.9.9-rc2, computes geosphere distance between two given points specified by their coordinates. Note that both latitudes and longitudes must be in radians and the result will be in meters. You can use an arbitrary expression as any of the four coordinates. An optimized path will be selected when one pair of the arguments refers directly to a pair of attributes and the other one is constant.
PACKEDFACTORS(), introduced in version 2.1.1-beta, can be used in queries, either to just see all the weighting factors calculated when doing the matching, or to provide a binary attribute that can be used to write a custom ranking UDF. This function works only if expression ranker is specified and the query is not a full scan, otherwise it will return an error.
It can be used to implement custom ranking functions in UDFs, as in
SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r FROM my_index WHERE match('hello') ORDER BY r DESC OPTION ranker=expr('1');
where CUSTOM_RANK() is a function implemented in a UDF. It should declare a SPH_UDF_FACTORS structure (defined in sphinxudf.h), initialize this structure, unpack the factors into it before usage, and deinitialize it afterwards, as follows:
SPH_UDF_FACTORS factors;
sphinx_factors_init(&factors);
sphinx_factors_unpack((DWORD*)args->arg_values[0], &factors);
// ... can use the contents of factors variable here ...
sphinx_factors_deinit(&factors);
PACKEDFACTORS() data is available at all query stages, not just when doing the initial matching and ranking pass. That enables another particularly interesting application of PACKEDFACTORS(), namely re-ranking.
In the example just above, we used an expression-based ranker with a dummy expression, and sorted the result set by the value computed by our UDF. In other words, we used the UDF to rank all our results. Assume now, for the sake of an example, that our UDF is extremely expensive to compute and has a throughput of just 10,000 calls per second. Assume that our query matches 1,000,000 documents. To maintain reasonable performance, we would then want to use a (much) simpler expression to do most of our ranking, and then apply the expensive UDF to only a few top results, say, top-100 results. Or, in other words, build top-100 results using a simpler ranking function and then re-rank those with a complex one. We can do that just as well with subselects:
SELECT * FROM (
    SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r
    FROM my_index WHERE match('hello')
    OPTION ranker=expr('sum(lcs)*1000+bm25')
    ORDER BY WEIGHT() DESC
    LIMIT 100
) ORDER BY r DESC LIMIT 10
In this example, expression-based ranker will be called for every matched document to compute WEIGHT(). So it will get called 1,000,000 times. But the UDF computation can be postponed until the outer sort. And it also will be done for just the top-100 matches by WEIGHT(), according to the inner limit. So the UDF will only get called 100 times. And then the final top-10 matches by UDF value will be selected and returned to the application.
For reference, in the distributed case PACKEDFACTORS() data gets sent from the agents to master in a binary format, too. This makes it technically feasible to implement additional re-ranking pass (or passes) on the master node, if needed.
If used with SphinxQL but not called from any UDFs, the result of PACKEDFACTORS() is simply formatted as plain text, which can be used to manually assess the ranking factors. Note that this feature is not currently supported by the Sphinx API.
LENGTH(attr_mva) function, introduced in version 2.1.2-stable, returns the number of elements in an MVA set. It works with both 32-bit and 64-bit MVA attributes.
There are the following result sorting modes available:
SPH_SORT_RELEVANCE mode, that sorts by relevance in descending order (best matches first);
SPH_SORT_ATTR_DESC mode, that sorts by an attribute in descending order (bigger attribute values first);
SPH_SORT_ATTR_ASC mode, that sorts by an attribute in ascending order (smaller attribute values first);
SPH_SORT_TIME_SEGMENTS mode, that sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;
SPH_SORT_EXTENDED mode, that sorts by SQL-like combination of columns in ASC/DESC order;
SPH_SORT_EXPR mode, that sorts by an arithmetic expression.
SPH_SORT_RELEVANCE ignores any additional parameters and always sorts matches by relevance rank. All other modes require an additional sorting clause, with the syntax depending on specific mode. SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and SPH_SORT_TIME_SEGMENTS modes require simply an attribute name. SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended sorting mode, SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC", and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.
In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called time segments, and then sorted by time segment first, and by relevance second.
The segments are calculated according to the current timestamp at the time when the search is performed, so the results would change over time. The segments are as follows:
last hour,
last day,
last week,
last month,
last 3 months,
everything else.
These segments are hardcoded, but it is trivial to change them if necessary.
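A toy model of the segment computation follows. The boundary lengths here are approximations (e.g. a month is taken as 30 days), chosen for illustration rather than read from Sphinx source:

```python
def time_segment(ts, now):
    """Map a timestamp to its segment index; lower indexes sort first."""
    age = now - ts
    bounds = [3600,           # last hour
              86400,          # last day
              7 * 86400,      # last week
              30 * 86400,     # last month
              90 * 86400]     # last 3 months
    for seg, limit in enumerate(bounds):
        if age < limit:
            return seg
    return len(bounds)        # everything else

now = 1_000_000_000
print(time_segment(now - 60, now), time_segment(now - 2 * 86400, now))  # 0 2
```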
This mode was added to support searching through blogs, news headlines, etc. When using time segments, recent records would be ranked higher because of segment, but within the same segment, more relevant records would be ranked higher - unlike sorting by just the timestamp attribute, which would not take relevance into account at all.
In SPH_SORT_EXTENDED mode, you can specify an SQL-like sort expression with up to 5 attributes (including internal attributes), eg:
@relevance DESC, price ASC, @id DESC
Both internal attributes (that are computed by the engine on the fly) and user attributes that were configured for this index are allowed. Internal attribute names must start with the magic @-symbol; user attribute names can be used as is. In the example above, @relevance and @id are internal attributes and price is user-specified.
Known internal attributes are:
@id (match ID)
@weight (match weight)
@rank (match weight)
@relevance (match weight)
@random (return results in random order)
@rank and @relevance are just additional aliases to @weight.
Expression sorting mode lets you sort the matches by an arbitrary arithmetic expression, involving attribute values, internal attributes (@id and @weight), arithmetic operations, and a number of built-in functions. Here's an example:
$cl->SetSortMode ( SPH_SORT_EXPR, "@weight + ( user_karma + ln(pageviews) )*0.1" );
The operators and functions supported in the expressions are discussed in a separate section, Section 5.5, “Expressions, functions, and operators”.
Sometimes it could be useful to group (or in other terms, cluster) search results and/or count per-group match counts - for instance, to draw a nice graph of how many matching blog posts there were per month; or to group Web search results by site; or to group matching forum posts by author; etc.
In theory, this could be performed by doing only the full-text search in Sphinx and then using found IDs to group on SQL server side. However, in practice doing this with a big result set (10K-10M matches) would typically kill performance.
To avoid that, Sphinx offers so-called grouping mode. It is enabled with SetGroupBy() API call. When grouping, all matches are assigned to different groups based on group-by value. This value is computed from specified attribute using one of the following built-in functions:
SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;
SPH_GROUPBY_WEEK, extracts the year and the first day of the matching week (counting in days from year start) in YYYYNNN format from timestamp;
SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;
SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;
SPH_GROUPBY_ATTR, uses attribute value itself for grouping.
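The date-based group keys can be illustrated with PHP's date() formats (illustrative only; the actual keys are computed inside Sphinx):

```php
<?php
// Illustrate the YYYYMMDD / YYYYMM / YYYY keys that SPH_GROUPBY_DAY,
// SPH_GROUPBY_MONTH and SPH_GROUPBY_YEAR would derive from a timestamp.
$ts = mktime ( 12, 0, 0, 6, 29, 2007 ); // noon, Jun 29 2007, local time

echo date ( 'Ymd', $ts ), "\n"; // SPH_GROUPBY_DAY   -> 20070629
echo date ( 'Ym',  $ts ), "\n"; // SPH_GROUPBY_MONTH -> 200706
echo date ( 'Y',   $ts ), "\n"; // SPH_GROUPBY_YEAR  -> 2007
```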
The final search result set then contains one best match per group. Grouping function value and per-group match count are returned along as "virtual" attributes named @group and @count respectively.
The result set is sorted by the group-by sorting clause, with a syntax similar to the SPH_SORT_EXTENDED sorting clause syntax. In addition to @id and @weight, the group-by sorting clause may also include:
@group (groupby function value),
@count (amount of matches in group).
The default mode is to sort by the group-by value in descending order, ie. by "@group desc".
On completion, the total_found result parameter would contain the total amount of matching groups over the whole index.
WARNING: grouping is done in fixed memory and thus its results are only approximate; there might be more groups reported in total_found than actually present. @count might also be underestimated. To reduce the inaccuracy, one should raise max_matches. If max_matches allows storing all found groups, results will be 100% correct.
For example, if sorting by relevance and grouping by a "published" attribute with the SPH_GROUPBY_DAY function, then the result set will contain one most relevant match per each day when there were any matches published, with the day number and per-day match count attached, sorted by day number in descending order (ie. recent days first).
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported through SetSelect() API call when using GROUP BY.
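Expressed in SphinxQL terms, a grouped query with an aggregate might look like this (the index and attribute names here are made up for illustration):

```sql
SELECT *, AVG(price) AS avgprice
FROM products
WHERE MATCH('ipod nano')
GROUP BY vendor_id
ORDER BY @count DESC;
```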
To scale well, Sphinx has distributed searching capabilities. Distributed searching is useful to improve query latency (ie. search time) and throughput (ie. max queries/sec) in multi-server, multi-CPU or multi-core environments. This is essential for applications which need to search through huge amounts of data (ie. billions of records and terabytes of text).
The key idea is to horizontally partition (HP) searched data across search nodes and then process it in parallel.
Partitioning is done manually. You should set up several instances of Sphinx programs (indexer and searchd) on different servers; make the instances index (and search) different parts of the data; configure a special distributed index on some of the searchd instances; and query this index.
This index only contains references to other local and remote indexes - so it could not be directly reindexed, and you should reindex those indexes which it references instead.
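A minimal distributed index declaration in sphinx.conf might look like this (index names, host names, and ports are placeholders):

```
index dist1
{
    type  = distributed
    local = chunk1
    agent = box2:9312:chunk2
    agent = box3:9312:chunk3
}
```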
When searchd receives a query against a distributed index, it does the following:
connects to the configured remote agents;
issues the query;
sequentially searches the configured local indexes (while the remote agents are searching);
retrieves the remote agents' search results;
merges all the results together, removing the duplicates;
sends the merged results to the client.
From the application's point of view, there are no differences between searching through a regular index or a distributed index at all. That is, distributed indexes are fully transparent to the application, and actually there's no way to tell whether the index you queried was distributed or local. (Even though as of 0.9.9 Sphinx does not allow combining searches through distributed indexes with anything else, this constraint will be lifted in the future.)
Any searchd instance could serve both as a master (which aggregates the results) and a slave (which only does local searching) at the same time. This has a number of uses:
every machine in a cluster could serve as a master which searches the whole cluster, and search requests could be balanced between masters to achieve a kind of HA (high availability) in case any of the nodes fails;
if running within a single multi-CPU or multi-core machine, there would be only 1 searchd instance querying itself as an agent and thus utilizing all CPUs/cores.
It is scheduled to implement better HA support, which would allow specifying which agents mirror each other, doing health checks, keeping track of alive agents, load-balancing requests, etc.
In version 2.0.1-beta and above, two query log formats are supported. Previous versions only supported a custom plain text format, which is still the default one. While that format might be more convenient for manual monitoring and review, it only logs search queries but not the other types of requests, does not always contain the complete search query data, and is harder (and sometimes impossible) to replay for benchmarking purposes. The new sphinxql format alleviates that. It aims to be complete and automatable, even though at the cost of brevity and readability.
By default, searchd logs all successfully executed search queries into a query log file. Here's an example:
[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test
This log format is as follows:
[query-date] query-time [match-mode/filters-count/sort-mode total-matches (offset,limit) @groupby-attr] [index-name] query
Match mode can take one of the following values:
"all" for SPH_MATCH_ALL mode;
"any" for SPH_MATCH_ANY mode;
"phr" for SPH_MATCH_PHRASE mode;
"bool" for SPH_MATCH_BOOLEAN mode;
"ext" for SPH_MATCH_EXTENDED mode;
"ext2" for SPH_MATCH_EXTENDED2 mode;
"scan" if the full scan mode was used, either by being specified with SPH_MATCH_FULLSCAN, or if the query was empty (as documented under Matching Modes)
Sort mode can take one of the following values:
"rel" for SPH_SORT_RELEVANCE mode;
"attr-" for SPH_SORT_ATTR_DESC mode;
"attr+" for SPH_SORT_ATTR_ASC mode;
"tsegs" for SPH_SORT_TIME_SEGMENTS mode;
"ext" for SPH_SORT_EXTENDED mode.
Additionally, if searchd was started with --iostats, each log entry will include a block of I/O statistics after the list of searched index(es).
A query log entry might take the form of:
[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] [ios=6 kb=111.1 ms=0.5] test
This additional block describes the I/O operations performed for the search: the number of file I/O operations carried out, the amount of data in kilobytes read from the index files, and the time spent on I/O operations (although there is a background processing component, the bulk of this time is the I/O operation time).
This is a new log format introduced in 2.0.1-beta, with the goal of logging everything and then some, in a format that is easy to automate (for instance, to replay automatically). The new format can either be enabled via the query_log_format directive in the configuration file, or switched back and forth on the fly with the SET GLOBAL query_log_format=... statement via SphinxQL. In the new format, the example from the previous section would look as follows. (Wrapped below for readability, but with just one query per line in the actual log.)
/* Fri Jun 29 21:17:58.609 2007 conn 2 wall 0.004 found 35254 */
SELECT * FROM lj WHERE MATCH('test') OPTION ranker=proximity;
/* Fri Jun 29 21:20:34.555 2007 conn 3 wall 0.024 found 19886 */
SELECT * FROM lj WHERE MATCH('test') GROUP BY channel_id OPTION ranker=proximity;
Note that all requests would be logged in this format, including those sent via SphinxAPI and SphinxSE, not just those sent via SphinxQL. Also note that this kind of logging works only with plain log files and will not work if you use 'syslog' for logging.
The features of SphinxQL log format compared to the default text one are as follows.
All request types should be logged. (This is still work in progress.)
Full statement data will be logged where possible.
Errors and warnings are logged.
The log should be automatically replayable via SphinxQL.
Additional performance counters (currently, per-agent distributed query times) are logged.
Every request (including both SphinxAPI and SphinxQL) must result in exactly one log line. All request types, including INSERT, CALL SNIPPETS, etc., will eventually get logged, though as of the time of this writing, that is a work in progress. Every log line must be a valid SphinxQL statement that reconstructs the full request, except if the logged request is too big and needs shortening for performance reasons. Additional messages, counters, etc. can be logged in the comments section after the request.
Starting with version 0.9.9-rc2, Sphinx searchd daemon supports MySQL binary network protocol and can be accessed with regular MySQL API. For instance, 'mysql' CLI client program works well. Here's an example of querying Sphinx using MySQL client:
$ mysql -P 9306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 0.9.9-dev (r1734)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SELECT * FROM test1 WHERE MATCH('test')
    -> ORDER BY group_id ASC OPTION ranker=bm25;
+------+--------+----------+------------+
| id   | weight | group_id | date_added |
+------+--------+----------+------------+
|    4 |   1442 |        2 | 1231721236 |
|    2 |   2421 |      123 | 1231721236 |
|    1 |   2421 |      456 | 1231721236 |
+------+--------+----------+------------+
3 rows in set (0.00 sec)
Note that mysqld was not even running on the test machine. Everything was handled by searchd itself.
The new access method is supported in addition to native APIs which all still work perfectly well. In fact, both access methods can be used at the same time. Also, native API is still the default access method. MySQL protocol support needs to be additionally configured. This is a matter of 1-line config change, adding a new listener with mysql41 specified as a protocol:
listen = localhost:9306:mysql41
Just supporting the protocol and not the SQL syntax would be useless, so Sphinx now also supports a subset of SQL that we dubbed SphinxQL. It supports the standard querying of all the index types with SELECT, modifying RT indexes with INSERT, REPLACE, and DELETE, and much more. Full SphinxQL reference is available in Chapter 7, SphinxQL reference.
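For instance, a minimal SphinxQL session against an RT index might look like this (assuming an RT index named rt with title and content full-text fields):

```sql
INSERT INTO rt (id, title, content) VALUES (1, 'hello', 'hello world');
SELECT * FROM rt WHERE MATCH('hello');
DELETE FROM rt WHERE id = 1;
```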
Multi-queries, or query batches, let you send multiple queries to Sphinx in one go (more formally, one network request).
Two API methods that implement the multi-query mechanism are AddQuery() and RunQueries(). You can also run multiple queries with SphinxQL, see Section 7.33, “Multi-statement queries”. (In fact, a regular Query() call is internally implemented as a single AddQuery() call immediately followed by a RunQueries() call.) AddQuery() captures the current state of all the query settings set by previous API calls, and memorizes the query. RunQueries() actually sends all the memorized queries, and returns multiple result sets. There are no restrictions on the queries at all, except just a sanity check on the number of queries in a single batch (see Section 11.4.25, “max_batch_queries”).
Why use multi-queries? Generally, it all boils down to performance.
First, by sending requests to searchd in a batch instead of one by one, you always save a bit by doing fewer network roundtrips. Second, and somewhat more importantly, sending queries in a batch enables searchd to perform certain internal optimizations. As new types of optimizations are added over time, it generally makes sense to pack all the queries into batches where possible, so that simply upgrading Sphinx to a new version would automatically enable new optimizations. In the case when there aren't any possible batch optimizations to apply, queries will be processed one by one internally.
Why (or rather when) not use multi-queries? Multi-queries require all the queries in a batch to be independent, and sometimes they aren't. That is, sometimes query B is based on query A results, and so can only be set up after executing query A. For instance, you might want to display results from a secondary index if and only if there were no results found in a primary index. Or maybe just specify an offset into the 2nd result set based on the amount of matches in the 1st result set. In that case, you will have to use separate queries (or separate batches).
As of 0.9.10, there are two major optimizations to be aware of: common query optimization (available since 0.9.8); and common subtree optimization (available since 0.9.10).
Common query optimization means that searchd will identify all those queries in a batch where only the sorting and group-by settings differ, and only perform searching once. For instance, if a batch consists of 3 queries, all of them are for "ipod nano", but the 1st query requests the top-10 results sorted by price, the 2nd query groups by vendor ID and requests the top-5 vendors sorted by rating, and the 3rd query requests the max price, the full-text search for "ipod nano" will only be performed once, and its results will be reused to build 3 different result sets.
So-called faceted searching is a particularly important case that benefits from this optimization. Indeed, faceted searching can be implemented by running a number of queries, one to retrieve search results themselves, and a few others with the same full-text query but different group-by settings to retrieve all the required groups of results (top-3 authors, top-5 vendors, etc). And as long as the full-text query and filtering settings stay the same, common query optimization will trigger, and greatly improve performance.
Common subtree optimization is even more interesting. It lets searchd exploit similarities between batched full-text queries. It identifies common full-text query parts (subtrees) in all queries, and caches them between queries. For instance, look at the following query batch:
barack obama president
barack obama john mccain
barack obama speech
There's a common two-word part ("barack obama") that can be computed only once, then cached and shared across the queries. And common subtree optimization does just that. Per-query cache size is strictly controlled by subtree_docs_cache and subtree_hits_cache directives (so that caching all sixteen gazillion documents that match "i am" does not exhaust the RAM and instantly kill your server).
Here's a code sample (in PHP) that fires the same query using 3 different sorting modes:
require ( "sphinxapi.php" );

$cl = new SphinxClient ();
$cl->SetMatchMode ( SPH_MATCH_EXTENDED );

$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "the", "lj" );
$cl->SetSortMode ( SPH_SORT_EXTENDED, "published desc" );
$cl->AddQuery ( "the", "lj" );
$cl->SetSortMode ( SPH_SORT_EXTENDED, "published asc" );
$cl->AddQuery ( "the", "lj" );

$res = $cl->RunQueries();
How can you tell whether the queries in the batch were actually optimized? If they were, the respective query log will have a "multiplier" field that specifies how many queries were processed together:
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/rel 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the
Note the "x3" field. It means that this query was optimized and processed in a sub-batch of 3 queries. For reference, this is how the regular log would look if the queries were not batched:
[Sun Jul 12 15:18:17.062 2009] 0.059 sec [ext/0/rel 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.156 2009] 0.091 sec [ext/0/ext 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.250 2009] 0.092 sec [ext/0/ext 747541 (0,20)] [lj] the
Note how the per-query time in the multi-query case was improved by a factor of 1.5x to 2.3x, depending on the particular sorting mode. In fact, for both common query and common subtree optimizations, there were reports of 3x and even bigger improvements, and that's from production instances, not just synthetic tests.
Introduced to Sphinx in version 2.0.1-beta to supplement string sorting, collations essentially affect the string attribute comparisons. They specify both the character set encoding and the strategy that Sphinx uses to compare strings when doing ORDER BY or GROUP BY with a string attribute involved.
String attributes are stored as is when indexing, and no character set or language information is attached to them. That's okay as long as Sphinx only needs to store and return the strings to the calling application verbatim. But when you ask Sphinx to sort by a string value, that request immediately becomes quite ambiguous.
First, single-byte (ASCII, or ISO-8859-1, or Windows-1251) strings need to be processed differently than the UTF-8 ones that may encode every character with a variable number of bytes. So we need to know the character set type in order to interpret the raw bytes as meaningful characters properly.
Second, we additionally need to know the language-specific string sorting rules. For instance, when sorting according to US rules in the en_US locale, the accented character 'ï' (small letter i with diaeresis) should be placed somewhere after 'z'. However, when sorting with French rules and the fr_FR locale in mind, it should be placed between 'i' and 'j'. And some other set of rules might choose to ignore accents altogether, allowing 'ï' and 'i' to be mixed arbitrarily.
Third, but not least, we might need case-sensitive sorting in some scenarios and case-insensitive sorting in some others.
Collations combine all of the above: the character set, the language rules, and the case sensitivity. Sphinx currently provides the following four collations.
libc_ci
libc_cs
utf8_general_ci
binary
The first two collations rely on several standard C library (libc) calls and can thus support any locale that is installed on your system. They provide case-insensitive (_ci) and case-sensitive (_cs) comparisons respectively. By default they will use the C locale, effectively resorting to bytewise comparisons. To change that, you need to specify a different available locale using the collation_libc_locale directive. The list of locales available on your system can usually be obtained with the locale command:
$ locale -a
C
en_AG
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_NG
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZW.utf8
es_ES
fr_FR
POSIX
ru_RU.utf8
ru_UA.utf8
The specific list of the system locales may vary. Consult your OS documentation to install additional needed locales.
The utf8_general_ci and binary collations are built into Sphinx. The first one is a generic collation for UTF-8 data (without any so-called language tailoring); it should behave similar to the utf8_general_ci collation in MySQL. The second one is a simple bytewise comparison.
Collation can be overridden via SphinxQL on a per-session basis using the SET collation_connection statement. All subsequent SphinxQL queries will use this collation. SphinxAPI and SphinxSE queries will use the server default collation, as specified in the collation_server configuration directive. Sphinx currently defaults to the libc_ci collation.
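For example, the server-wide defaults can be set in the searchd section of the configuration file (the values here are just one possible choice):

```
searchd
{
    # ... other searchd settings ...
    collation_server      = utf8_general_ci
    collation_libc_locale = en_US.utf8
}
```

An individual SphinxQL session can then override this with, say, SET collation_connection = libc_cs.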
Collations should affect all string attribute comparisons, including those within ORDER BY and GROUP BY, so differently ordered or grouped results can be returned depending on the collation chosen.
Starting with 2.0.1-beta, Sphinx supports User-Defined Functions, or UDF for short. They can be loaded and unloaded dynamically into searchd without having to restart the daemon, and used in expressions when searching. UDF features at a glance are as follows.
Functions can take integer (both 32-bit and 64-bit), float, string, or MVA arguments.
Functions can return integer or float values.
Functions can check the argument number, types, and names and raise errors.
Only simple functions (that is, non-aggregate ones) are currently supported.
User-defined functions need your OS to support dynamically loadable libraries (aka shared objects). Most of the modern OSes are eligible, including Linux, Windows, MacOS, Solaris, BSD and others. (The internal testing has been done on Linux and Windows.) The UDF libraries must reside in a directory specified by the plugin_dir directive, and the server must be configured to use workers = threads mode.
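A matching configuration fragment might look like this (the path is only an example, and depending on your Sphinx version plugin_dir may be documented under the common or the searchd section, so check the options reference):

```
searchd
{
    workers = threads
}

common
{
    plugin_dir = /usr/local/sphinx/lib
}
```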
Relative paths to the library files are not allowed. Once the library is successfully built and copied to the trusted location, you can then dynamically install and deinstall the functions using CREATE FUNCTION and DROP FUNCTION statements respectively. A single library can contain multiple functions. A library gets loaded when you first install a function from it, and unloaded when you deinstall all the functions from that library.
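A typical install/use/uninstall cycle via SphinxQL might look like this (the library file name, function name, and index are examples, not fixed names):

```sql
CREATE FUNCTION myfunc RETURNS INT SONAME 'udfexample.so';
SELECT id, myfunc(group_id) FROM test1 WHERE MATCH('test');
DROP FUNCTION myfunc;
```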
The library functions that will implement a UDF visible to SQL statements need to follow the C calling convention, and a simple naming convention. Sphinx source distribution provides a sample file, src/udfexample.c, that defines a few simple functions showing how to work with integer, string, and MVA arguments; you can use that one as a foundation for your new functions. It includes the UDF interface header file, src/sphinxudf.h, that defines the required types and structures. The sphinxudf.h header is standalone, that is, it does not require any other parts of Sphinx source to compile.
Every function that you intend to use in your SELECT statements requires at least two corresponding C/C++ functions: the initialization call, and the function call itself. You can also optionally define the deinitialization call if your function requires any post-query cleanup. (For instance, if you were allocating any memory in either the initialization call or the function calls.) Function names in SQL are case insensitive, but C function names are not; they need to be all lowercase. Mistakes in the function name prevent UDFs from loading. You also have to pay special attention to the calling convention used when compiling, the list and the types of arguments, and the return type of the main function call. Mistakes in either are likely to crash the server, or, at best, produce unexpected results. Last but not least, all functions need to be thread-safe.
Let's assume for the sake of example that your UDF name in SphinxQL will be MYFUNC. The initialization, main, and deinitialization functions would then need to be named as follows and take the following arguments:
/// initialization function
/// called once during query initialization
/// returns 0 on success
/// returns non-zero and fills error_message buffer on failure
int myfunc_init ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args, char * error_message );

/// main call function
/// returns the computed value
/// writes non-zero value into error_flag to indicate errors
RETURN_TYPE myfunc ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args, char * error_flag );

/// optional deinitialization function
/// called once to cleanup once query processing is done
void myfunc_deinit ( SPH_UDF_INIT * init );
The two mentioned structures, SPH_UDF_INIT and SPH_UDF_ARGS, are defined in the src/sphinxudf.h interface header and documented there. RETURN_TYPE of the main function must be one of the following:
int for the functions that return INT.
sphinx_int64_t for the functions that return BIGINT.
float for the functions that return FLOAT.
The calling sequence is as follows. myfunc_init() is called once when initializing the query. It can return a non-zero code to indicate a failure; in that case the query is not executed, and the error message from the error_message buffer is returned. Otherwise, myfunc() is called for every row, and myfunc_deinit() is then called when the query ends. myfunc() can indicate an error by writing a non-zero byte value to error_flag; in that case, it will not be called for subsequent rows, and a default value of 0 will be substituted. Sphinx might or might not choose to terminate such queries early; neither behavior is currently guaranteed.
As mentioned elsewhere, Sphinx is not a single program called 'sphinx', but a collection of 4 separate programs which collectively form Sphinx. This section covers these tools and how to use them.
indexer is the first of the two principal tools that make up Sphinx. Invoked from either the command line directly, or as part of a larger script, indexer is solely responsible for gathering the data that will be searchable.
The calling syntax for indexer is as follows:
indexer [OPTIONS] [indexname1 [indexname2 [...]]]
Essentially you would list the different possible indexes (that you would later make available to search) in sphinx.conf, so when calling indexer, as a minimum you need to be telling it what index (or indexes) you want to index. If sphinx.conf contained details on 2 indexes, mybigindex and mysmallindex, you could do the following:
$ indexer mybigindex
$ indexer mysmallindex mybigindex
As part of the configuration file, sphinx.conf, you specify one or more indexes for your data. You might call indexer to reindex one of them, ad-hoc, or you can tell it to process all indexes - you are not limited to calling just one, or all at once; you can always pick some combination of the available indexes.
The majority of the options for indexer are given in the configuration file, however there are some options you might need to specify on the command line as well, as they can affect how the indexing operation is performed. These options are:
--config <file> (-c <file> for short) tells indexer to use the given file as its configuration. Normally, it will look for sphinx.conf in the installation directory (e.g. /usr/local/sphinx/etc/sphinx.conf if installed into /usr/local/sphinx), followed by the current directory you are in when calling indexer from the shell. This is mostly of use in shared environments where the binary files are installed somewhere like /usr/local/sphinx/ but you want to provide users with the ability to make their own custom Sphinx set-ups, or if you want to run multiple instances on a single server. In cases like those you could allow them to create their own sphinx.conf files and pass them to indexer with this option. For example:
$ indexer --config /home/myuser/sphinx.conf myindex
--all tells indexer to update every index listed in sphinx.conf, instead of listing individual indexes. This would be useful in small configurations, or cron-type or maintenance jobs where the entire index set will get rebuilt each day, or week, or whatever period is best. Example usage:
Example usage:
$ indexer --config /home/myuser/sphinx.conf --all
--rotate is used for rotating indexes. Unless you have the situation where you can take the search function offline without troubling users, you will almost certainly need to keep search running whilst indexing new documents. --rotate creates a second index, parallel to the first (in the same place, simply including .new in the filenames). Once complete, indexer notifies searchd by sending the SIGHUP signal, and searchd will attempt to rename the indexes (renaming the existing ones to include .old and renaming the .new to replace them), and then start serving from the newer files. Depending on the setting of seamless_rotate, there may be a slight delay in being able to search the newer indexes. Example usage:
$ indexer --rotate --all
--quiet tells indexer not to output anything, unless there is an error. Again, this is mostly used for cron-type, or other script jobs where the output is irrelevant or unnecessary, except in the event of some kind of error. Example usage:
Example usage:
$ indexer --rotate --all --quiet
--noprogress does not display progress details as they occur; instead, the final status details (such as documents indexed, speed of indexing and so on) are only reported at completion of indexing. In instances where the script is not being run on a console (or 'tty'), this will be on by default. Example usage:
$ indexer --rotate --all --noprogress
--buildstops <outputfile.txt> <N> reviews the index source, as if it were indexing the data, and produces a list of the terms that are being indexed. In other words, it produces a list of all the searchable terms that are becoming part of the index. Note: it does not update the index in question, it simply processes the data 'as if' it were indexing, including running queries defined with sql_query_pre or sql_query_post. outputfile.txt will contain the list of words, one per line, sorted by frequency with the most frequent first, and N specifies the maximum number of words that will be listed; if sufficiently large to encompass every word in the index, only that many words will be returned. Such a dictionary list could be used for client application features around "Did you mean..." functionality, usually in conjunction with --buildfreqs, below. Example:
$ indexer myindex --buildstops word_freq.txt 1000
This would produce a document in the current directory, word_freq.txt, with the 1,000 most common words in 'myindex', ordered by most common first. Note that the file will pertain to the last index indexed when specified with multiple indexes or --all (i.e. the last one listed in the configuration file).
--buildfreqs works with --buildstops (and is ignored if --buildstops is not specified). As --buildstops provides the list of words used within the index, --buildfreqs adds the quantity present in the index, which would be useful in establishing whether certain words should be considered stopwords if they are too prevalent. It will also help with developing "Did you mean..." features, where you can see how much more common a given word is compared to another, similar one. Example:
$ indexer myindex --buildstops word_freq.txt 1000 --buildfreqs
This would produce the word_freq.txt as above, however after each word would be the number of times it occurred in the index in question.
--merge <dst-index> <src-index> is used for physically merging indexes together, for example if you have a main+delta scheme, where the main index rarely changes, but the delta index is rebuilt frequently, and --merge would be used to combine the two. The operation moves from right to left - the contents of src-index get examined and physically combined with the contents of dst-index, and the result is left in dst-index. In pseudo-code, it might be expressed as: dst-index += src-index
An example:
$ indexer --merge main delta --rotate
In the above example, where main is the master, rarely modified index, and delta is the less frequently modified one, you might use the above to call indexer to combine the contents of the delta into the main index and rotate the indexes.
--merge-dst-range <attr> <min> <max> applies the given filter range upon merging. Specifically, as the merge is applied to the destination index (as part of --merge, and is ignored if --merge is not specified), indexer will also filter the documents ending up in the destination index, and only documents that pass the given filter will end up in the final index. This could be used, for example, in an index where there is a 'deleted' attribute, where 0 means 'not deleted'. Such an index could be merged with:
$ indexer --merge main delta --merge-dst-range deleted 0 0
Any documents marked as deleted (value 1) would be removed from the newly-merged destination index. The option can be given several times on the command line, to add successive filters to the merge, all of which must be met in order for a document to become part of the final index.
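For such a filter to make sense, the 'deleted' attribute must be present in the indexes being merged. A sketch of the relevant source configuration (the table and column names are hypothetical):

```
source documents
{
    type            = mysql
    # ...
    sql_query       = SELECT id, title, content, deleted FROM documents
    # declare 'deleted' as an unsigned integer attribute (0 = not deleted)
    sql_attr_uint   = deleted
}
```

An application then flips the attribute to 1 via UpdateAttributes() instead of rebuilding, and the merge filter purges those rows for good.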
--merge-killlists
(and its
shorter alias --merge-klists
) changes the way
kill lists are processed when merging indexes. By default, both
kill lists get discarded after a merge. That supports the most typical
main+delta merge scenario. With this option enabled, however, kill lists
from both indexes get concatenated and stored into the destination index.
Note that a source (delta) index kill list will be used to suppress rows
from a destination (main) index at all times.
--keep-attrs
(added in version 2.1.1-beta)
allows reusing existing attributes when reindexing. Whenever
the index is rebuilt, each new document id is checked for presence in the
"old" index, and if it already exists, its attributes are transferred to
the "new" index; if not found, attributes from the new index are used. If
the user has updated attributes in the index, but not in the actual source
used for the index, all updates will be lost when reindexing; using --keep-attrs
preserves the updated attribute values from the previous index.
--dump-rows <FILE>
dumps rows fetched
by SQL source(s) into the specified file, in a MySQL compatible syntax.
Resulting dumps are the exact representation of the data as received by
indexer
and help to reproduce indexing-time issues.
--verbose
guarantees that every row that
caused problems indexing (duplicate, zero, or missing document ID;
or file field IO issues; etc) will be reported. By default, this option
is off, and problem summaries may be reported instead.
--sighup-each
is useful when you are
rebuilding many big indexes, and want each one rotated into
searchd
as soon as possible. With
--sighup-each
, indexer
will send a SIGHUP signal to searchd after successfully
completing the work on each index. (The default behavior
is to send a single SIGHUP after all the indexes were built.)
--print-queries
prints out
SQL queries that indexer
sends to
the database, along with SQL connection and disconnection
events. That is useful to diagnose and fix problems with
SQL sources.
searchd
is the second of the two principal tools in the Sphinx package.
searchd
is the part of the system which actually handles searches;
it functions as a server and is responsible for receiving queries, processing them and
returning a dataset back to the different APIs for client applications.
Unlike indexer
, searchd
is not designed
to be run from a regular script or by command-line invocation; instead it runs either
as a daemon called from init.d (on Unix/Linux type systems) or
as a service (on Windows-type systems). Consequently, not all of the command line options
always apply, and some are build-dependent.
Calling searchd
is simply a case of:
$ searchd [OPTIONS]
The options available to searchd
on all builds are:
--help
(-h
for short) lists all of the
parameters that can be called in your particular build of searchd
.
--config <file>
(-c <file>
for short)
tells searchd
to use the given file as its configuration,
just as with indexer
above.
--stop
is used to asynchronously stop searchd
,
using the details of the PID file as specified in the sphinx.conf
file,
so you may also need to confirm to searchd
which configuration
file to use with the --config
option. NB, calling --stop
will also make sure any changes applied to the indexes with
UpdateAttributes()
will be applied to the index files themselves. Example:
$ searchd --config /home/myuser/sphinx.conf --stop
--stopwait
is used to synchronously stop searchd
.
--stop
essentially tells the running instance to exit (by sending it a SIGTERM)
and then immediately returns. --stopwait
will also attempt to wait until the
running searchd
instance actually finishes the shutdown (eg. saves all
the pending attribute changes) and exits. Example:
$ searchd --config /home/myuser/sphinx.conf --stopwait
Possible exit codes are as follows:
0 on success;
1 if connection to running searchd daemon failed;
2 if daemon reported an error during shutdown;
3 if daemon crashed during shutdown.
--status
command is used to query the status of a running
searchd
instance, using the connection details
from the (optionally) provided configuration file. It will try to connect
to the running instance using the first configured UNIX socket or TCP port.
On success, it will query for a number of status and performance counter
values and print them. You can use Status()
API call to access the very same counters from your application. Examples:
$ searchd --status
$ searchd --config /home/myuser/sphinx.conf --status
--pidfile
is used to explicitly force
using a PID file (where the searchd
process number
is stored) despite any other debugging options that say otherwise
(for instance, --console
). This is a debugging option.
$ searchd --console --pidfile
--console
is used to force searchd
into console mode; typically it will be running as a conventional server application,
and will aim to dump information into the log files (as specified in
sphinx.conf
). Sometimes though, when debugging issues
in the configuration or the daemon itself, or trying to diagnose hard-to-track-down
problems, it may be easier to force it to dump information directly
to the console/command line from which it is being called. Running in console mode
also means that the process will not be forked (so searches are done in sequence)
and logs will not be written to. (It should be noted that console mode
is not the intended method for running searchd
.)
You can invoke it as such:
$ searchd --config /home/myuser/sphinx.conf --console
--logdebug
, --logdebugv
,
and --logdebugvv
options enable additional debug output
in the daemon log. They differ by the logging verboseness level. These are
debugging options, they pollute the log a lot, and thus they should
not be normally enabled. (The normal use case for
these is to enable them temporarily on request, to assist with some
particularly complicated debugging session.)
--iostats
is used in conjunction with the
logging options (the query_log
will need to have been
activated in sphinx.conf
) to provide more detailed
information on a per-query basis as to the input/output operations
carried out in the course of that query, with a slight performance hit
and of course bigger logs. Further details are available under the
query log format section.
You might start searchd
thus:
$ searchd --config /home/myuser/sphinx.conf --iostats
--cpustats
is used to provide actual CPU time
report (in addition to wall time) in both query log file (for every given
query) and status report (aggregated). It depends on clock_gettime() system
call and might therefore be unavailable on certain systems. You might start
searchd
thus:
$ searchd --config /home/myuser/sphinx.conf --cpustats
--port portnumber
(-p
for short)
is used to specify the port that searchd
should listen on,
usually for debugging purposes. This will usually default to 9312, but sometimes
you need to run it on a different port. Specifying it on the command line
will override anything specified in the configuration file. The valid range
is 0 to 65535, but ports numbered 1024 and below usually require
a privileged account in order to run. An example of usage:
$ searchd --port 9313
--listen ( address ":" port | port | path ) [ ":" protocol ]
(or -l
for short) works as --port
, but allows
you to specify not just a port, but a full path: an IP address and port, or a
Unix-domain socket path, that searchd
will listen on.
In other words, you can specify either an IP address (or hostname) and port number,
just a port number, or a Unix socket path. If you specify a port number
but not the address, searchd will listen on all network interfaces.
A Unix path is identified by a leading slash. As the last parameter you
can also specify a protocol handler (listener) to be used for
connections on this socket. Supported protocol values are 'sphinx'
(Sphinx 0.9.x API protocol) and 'mysql41' (MySQL protocol used since
version 4.1 up to at least 5.1).
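The same forms are accepted by the listen directive in sphinx.conf, which is the more permanent alternative to the command line switch; a sketch (the socket path is an assumption):

```
searchd
{
    listen = 9312                       # port only: all interfaces, sphinx protocol
    listen = 127.0.0.1:9306:mysql41     # address:port, MySQL 4.1 protocol (SphinxQL)
    listen = /var/run/searchd.sock      # Unix-domain socket (leading slash)
    # ...
}
```

Multiple listen lines may be given, so a single instance can serve both the native API and SphinxQL clients.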
--index <index>
(or -i
<index>
for short) forces this instance of
searchd
only to serve the specified index.
Like --port
, above, this is usually for debugging purposes;
more long-term changes would generally be applied to the configuration file
itself. Example usage:
$ searchd --index myindex
--strip-path
strips the path names from
all the file names referenced from the index (stopwords, wordforms,
exceptions, etc). This is useful for picking up indexes built on another
machine with possibly different path layouts.
--replay-flags=<OPTIONS>
switch,
added in version 2.0.2-beta, can be used to specify a list of extra binary log
replay options. The supported options are:
accept-desc-timestamp
,
ignore descending transaction timestamps and replay such
transactions anyway (the default behavior is to exit
with an error).
Example:
$ searchd --replay-flags=accept-desc-timestamp
There are some options for searchd
that are specific to Windows platforms, concerning handling searchd as a service;
these are only available in Windows binaries.
Note that on Windows searchd will default to --console
mode, unless you install it as a service.
--install
installs searchd
as a service
into the Microsoft Management Console (Control Panel / Administrative Tools / Services).
Any other parameters specified on the same command line as --install
will also become part of the command line on future starts of the service.
For example, as part of calling searchd
, you will likely also need
to specify the configuration file with --config
, and you would do that
as well as specifying --install
. Once called, the usual start/stop
facilities will become available via the management console, so any methods you could
use for starting, stopping and restarting services would also apply to
searchd
. Example:
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf
If you wanted to have the I/O stats every time you started searchd
,
you would specify its option on the same line as the --install
command thus:
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf --iostats
--delete
removes the service from the Microsoft Management Console
and other places where services are registered, after it was previously installed with
--install
. Note, this does not uninstall the software or delete the indexes.
It means the service will not be called from the services systems, and will not be started
on the machine's next start. If currently running as a service, the current instance
will not be terminated (until the next reboot, or searchd
is called
with --stop
). If the service was installed with a custom name
(with --servicename
), the same name will need to be specified
with --servicename
when calling to uninstall. Example:
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --delete
--servicename <name>
applies the given name to
searchd
when installing or deleting the service, as would appear
in the Management Console; this will default to searchd, but if being deployed on servers
where multiple administrators may log into the system, or a system with multiple
searchd
instances, a more descriptive name may be applicable.
Note that unless combined with --install
or --delete
,
this option does not do anything. Example:
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf --servicename SphinxSearch
--ntservice
is the option that is passed by the
Management Console to searchd
to invoke it as a service
on Windows platforms. It would not normally be necessary to call this directly;
this would normally be called by Windows when the service would be started,
although if you wanted to call this as a regular service from the command-line
(as the complement to --console
) you could do so in theory.
--safetrace
forces searchd
to only use system backtrace() call in crash reports. In certain (rare) scenarios,
this might be a "safer" way to get that report. This is a debugging option.
--nodetach
switch (Linux only) tells
searchd
not to detach into the background. This will also
cause log entries to be printed to the console. Query processing operates
as usual. This is a debugging option.
Last but not least, as every other daemon, searchd
supports a number of signals.
SIGTERM initiates a clean shutdown. New queries will not be handled, but queries that have already started will not be forcibly interrupted.
SIGHUP initiates index rotation. Depending on the value of the seamless_rotate setting, new queries might be shortly stalled; clients will receive temporary errors.
SIGUSR1 forces reopen of searchd log and query log files, letting you implement log file rotation.
search
is one of the helper tools within the
Sphinx package. Whereas searchd
is responsible for
searches in a server-type environment, search
is
aimed at testing the index quickly from the command line,
without building a framework to make the connection to the server
and process its response.
Note: search
is not intended to be deployed as
part of a client application; it is strongly recommended you do not write
an interface to search
instead of
searchd
, and none of the bundled client APIs support
this method. (In any event, search
will reload files
each time, whereas searchd
will cache them in memory
for performance.)
That said, many types of query that you could build in the APIs
can also be made with search
; however, for very
complex searches it may be easier to construct them using a small script
and the corresponding API. Additionally, some newer features may be
available in the searchd
system that have not yet
been brought into search
.
The calling syntax for search
is as
follows:
search [OPTIONS] word1 [word2 [word3 [...]]]
When calling search
, it is not necessary
to have searchd
running; simply make sure that
the account running the search
program has read
access to the configuration file and the index files.
The default behaviour is to apply a search for word1 (AND word2 AND
word3... as specified) to all fields in all indexes as given in the
configuration file. If constructing the equivalent in the API, this would
be the equivalent to passing SPH_MATCH_ALL
to
SetMatchMode
, and specifying *
as the
indexes to query as part of Query
.
There are many options available to search
.
Firstly, the general options:
--config <file>
(-c
<file>
for short) tells search
to use
the given file as its configuration, just as with
indexer
above.
--index <index>
(-i
<index>
for short) tells search
to
limit searching to the specified index only; normally it would attempt to
search all of the physical indexes listed in
sphinx.conf
, not any distributed ones.
--stdin
tells search
to
accept the query from the standard input, rather than the command line.
This can be useful for testing purposes whereby you could feed input via
pipes and from scripts.
Options for setting matches:
--any
(-a
for short) changes
the matching mode to match any of the words as part of the query (word1 OR
word2 OR word3). In the API this would be equivalent to passing
SPH_MATCH_ANY
to SetMatchMode
.
--phrase
(-p
for short)
changes the matching mode to match all of the words as part of the query,
and do so in the phrase given (not including punctuation). In the API this
would be equivalent to passing SPH_MATCH_PHRASE
to
SetMatchMode
.
--boolean
(-b
for short)
changes the matching mode to Boolean
matching. Note that if you use Boolean syntax matching on the command
line, you may need to escape the symbols (with a backslash) to avoid the
shell/command line processor interpreting them; for example, an ampersand must be
escaped on a Unix/Linux system to stop the shell sending the
search
process to the background. This can also be avoided by
using --stdin
, as below. In the API this would be
equivalent to passing SPH_MATCH_BOOLEAN
to
SetMatchMode
.
--ext
(-e
for short) changes
the matching mode to extended
matching which provides various text querying operators.
In the API this would be equivalent to passing
SPH_MATCH_EXTENDED
to SetMatchMode
.
--filter <attr> <v>
(-f
<attr> <v>
for short) filters the results so that
only documents where the given attribute (attr) matches the given value
(v) are returned. For example, --filter deleted 0
only matches
documents with an attribute called 'deleted' where its value is 0. You can
also add multiple filters on the command line by specifying
--filter
multiple times; however, if you apply a second
filter to the same attribute, it will override the first defined
filter.
Options for handling the results:
--limit <count>
(-l
count
for short) limits the total number of matches returned to the
number given. If a 'group' is specified, this will be the number of
grouped results. This defaults to 20 results if not specified (as do the
APIs).
--offset <count>
(-o
<count>
for short) offsets the result list by the number of
places set by the count; this would be used for pagination through
results, where if you have 20 results per 'page', the second page would
begin at offset 20, the third page at offset 40, etc.
--group <attr>
(-g
<attr>
for short) specifies that results should be grouped
together based on the attribute specified. Like the GROUP BY clause in
SQL, it will combine all results where the attribute given matches, and
returns a set of results where each returned result is the best from each
group. Unless otherwise specified, this will be the best match on
relevance.
--groupsort <expr>
(-gs
<expr>
for short) instructs that when results are grouped
with --group
, the expression given in <expr> shall
determine the order of the groups. Note, this does not specify which is
the best item within the group, only the order in which the groups
themselves shall be returned.
--sortby <clause>
(-s
<clause>
for short) specifies that results should be sorted
in the order listed in <clause>. This allows you to specify the
order you wish results to be presented in, ordering by different columns.
For example, you could say --sortby "@weight DESC entrytime
DESC"
to sort entries first by weight (or relevance) and, where
two or more entries have the same weight, to then sort by time,
newest first. You will usually need to put the items in
quotes (--sortby "@weight DESC"
) or use commas
(--sortby @weight,DESC
) to avoid the items being treated
separately. Additionally, like the regular sorting modes, if
--group
(grouping) is being used, this will state how to
establish the best match within each group.
--sortexpr expr
(-S expr
for
short) specifies that the search results should be presented in an order
determined by an arithmetic expression, stated in expr. For example:
--sortexpr "@weight + ( user_karma + ln(pageviews) )*0.1"
(again noting that this will have to be quoted to avoid the shell dealing
with the asterisk). Extended sort mode is discussed in more detail under
the SPH_SORT_EXTENDED
entry under the Sorting modes section of the
manual.
--sort=date
specifies that the results should
be sorted by descending (i.e. most recent first) date. This requires that
there is an attribute in the index that is set as a timestamp.
--rsort=date
specifies that the results should
be sorted by ascending (i.e. oldest first) date. This requires that there
is an attribute in the index that is set as a timestamp.
--sort=ts
specifies that the results should be
sorted by timestamp in groups; it will return all of the documents whose
timestamp is within the last hour, then sorted within that bracket for
relevance. After that, it would return the documents from the last day, sorted
by relevance, then the last week and then the last month. It is discussed
in more detail under the SPH_SORT_TIME_SEGMENTS
entry
under the Sorting modes section of
the manual.
Other options:
--noinfo
(-q
for short)
instructs search
not to look up data in your SQL
database. Specifically, for debugging with MySQL and
search
, you can provide it with a query to look up
the full article based on the returned document ID. It is explained in
more detail under the sql_query_info directive.
spelldump
is one of the helper tools within the Sphinx package.
It is used to extract the contents of a dictionary file that uses
ispell
or MySpell
format, which
can help build word lists for wordforms - all of
the possible forms are pre-built for you.
Its general usage is:
spelldump [options] <dictionary> <affix> [result] [locale-name]
The two main parameters are the dictionary's main file and its affix
file; usually these are named as
[language-prefix].dict
and
[language-prefix].aff
and will be available with most
common Linux distributions, as well as various places online.
[result]
specifies where the dictionary data should
be output to, and [locale-name]
additionally specifies
the locale details you wish to use.
There is an additional option, -c [file]
, which
specifies a file for case conversion details.
Examples of its usage are:
spelldump en.dict en.aff
spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
spelldump ru.dict ru.aff ru.txt .1251
The results file will contain a list of all the words in the dictionary in alphabetical order, output in the format of a wordforms file, which you can use to customise for your specific circumstances. An example of the result file:
zone > zone
zoned > zoned
zoning > zoning
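Once edited to suit your circumstances, such a result file can be wired into an index through the wordforms directive; a sketch with a hypothetical path:

```
index myindex
{
    # ...
    # normalization rules in "source > destination" form, e.g. "zoned > zone"
    wordforms = /usr/local/sphinx/data/en_wordforms.txt
}
```

The index must be rebuilt for changes to the wordforms file to take effect.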
indextool
is one of the helper tools within
the Sphinx package, introduced in version 0.9.9-rc2. It is used to
dump miscellaneous debug information about the physical index.
(Additional functionality such as index verification is planned
in the future, hence the indextool name rather than just indexdump.)
Its general usage is:
indextool <command> [options]
The following options apply to all commands:
--config <file>
(-c <file>
for short)
overrides the built-in config file names.
--quiet
(-q
for short)
keeps indextool quiet; it will not output the banner, etc.
The commands are as follows:
--checkconfig
just loads and verifies the
config file, checking that it is valid and free of syntax errors.
This option was added in version 2.1.1-beta.
--build-infixes INDEXNAME
build infixes for
an existing dict=keywords index (upgrades .sph, .spi in place). You can use
this option for legacy index files that already use dict=keywords, but now
need to support infix searching too; updating the index files with indextool
may prove easier or faster than regenerating them from scratch with indexer.
This option was added in version 2.1.1-beta.
--dumpheader FILENAME.sph
quickly dumps
the provided index header file without touching any other index files
or even the configuration file. The report provides a breakdown of
all the index settings, in particular the entire attribute and
field list. Prior to 0.9.9-rc2, this command was present in
CLI search utility.
--dumpconfig FILENAME.sph
dumps
the index definition from the given index header file in (almost)
compliant sphinx.conf
file format.
Added in version 2.0.1-beta.
--dumpheader INDEXNAME
dumps index header
by index name, looking up the header path in the configuration file.
--dumpdict INDEXNAME
dumps dictionary. This was
added in version 2.1.1-beta.
--dumpdocids INDEXNAME
dumps document IDs
by index name. It takes the data from the attribute (.spa) file and therefore
requires docinfo=extern to work.
--dumphitlist INDEXNAME KEYWORD
dumps all
the hits (occurrences) of a given keyword in a given index, with the keyword
specified as text.
--dumphitlist INDEXNAME --wordid ID
dumps all
the hits (occurrences) of a given keyword in a given index, with the keyword
specified as internal numeric ID.
--fold INDEXNAME OPTFILE
This option is useful to see how the tokenizer actually processes input.
You can feed indextool with text from the given file, or from stdin otherwise.
The output will contain spaces instead of separators (according to your
charset_table settings), and the letters in words will be lowercased.
--htmlstrip INDEXNAME
filters stdin using
HTML stripper settings for a given index, and prints the filtering
results to stdout. Note that the settings will be taken from sphinx.conf,
and not the index header.
--morph INDEXNAME
applies morphology to the
given stdin and prints the result to stdout.
--check INDEXNAME
checks the index data
files for consistency errors that might be introduced either by bugs
in indexer
and/or hardware faults. Starting with
version 2.1.1-beta, --check
also works on RT indexes, RAM and disk chunks.
--strip-path
strips the path names from
all the file names referenced from the index (stopwords, wordforms,
exceptions, etc). This is useful for checking indexes built on another
machine with possibly different path layouts.
--optimize-rt-klists
optimizes
the kill list memory use in the disk chunk of a given RT index. That
is a one-off optimization intended for rather old RT indexes, created
by development versions prior to 1.10-beta release. As of 1.10-beta
releases, this kill list optimization (purging) should happen
automatically, and there should never be a need to use this option.
wordbreaker
is one of the helper tools within
the Sphinx package, introduced in version 2.1.1-beta. It is used to
split compound words, as commonly found in URLs, into their component words.
For example, this tool can split "lordoftherings" into its four
component words, or "http://manofsteel.warnerbros.com" into "man
of steel warner bros". This helps searching, without requiring
prefixes or infixes: searching for "sphinx" wouldn't match "sphinxsearch",
but if you break the compound word and index the separate components,
you'll get a match without the cost of the larger index files that prefix and infix indexing require.
Examples of its usage are:
echo manofsteel | bin/wordbreaker -dict dict.txt split
The input stream will be separated into words using the -dict
dictionary file. (The dictionary should match the language of the compound word.)
The split
command breaks words from the standard input, and
outputs the result in the standard output. There are also test
and
bench
commands that let you test the splitting quality and benchmark
the splitting functionality.
Wordbreaker needs a dictionary to recognize individual substrings within a string. To
differentiate between different guesses, it uses the relative frequency of each
word in the dictionary: higher frequency means higher split probability. You can
generate such a file using the indexer
tool, as in
indexer --buildstops dict.txt 100000 --buildfreqs myindex -c /path/to/sphinx.conf
which will write the 100,000 most frequent words, along with their counts, from myindex into dict.txt. The output file is a text file, so you can edit it by hand, if need be, to add or remove words.
See http://sphinxsearch.com/blog/2013/01/29/a-new-tool-in-the-trunk-wordbreaker/ for more on this tool.
Table of Contents
SphinxQL is our SQL dialect that exposes all of the search daemon functionality using a standard SQL syntax with a few Sphinx-specific extensions. Everything available via the SphinxAPI is also available via SphinxQL but not vice versa; for instance, writes into RT indexes are only available via SphinxQL. This chapter documents supported SphinxQL statements syntax.
SELECT select_expr [, select_expr ...]
FROM index [, index2 ...]
[WHERE where_condition]
[GROUP BY {col_name | expr_alias} [, {col_name | expr_alias}]]
[WITHIN GROUP ORDER BY {col_name | expr_alias} {ASC | DESC}]
[ORDER BY {col_name | expr_alias} {ASC | DESC} [, ...]]
[LIMIT [offset,] row_count]
[OPTION opt_name = opt_value [, ...]]
SELECT statement was introduced in version 0.9.9-rc2. Its syntax is based upon regular SQL but adds several Sphinx-specific extensions and has a few omissions (such as (currently) missing support for JOINs). Specifically,
Column list clause. Column names, arbitrary expressions,
and star ('*') are all allowed (i.e.
SELECT @id, group_id*123+456 AS expr1 FROM test1
will work). Unlike in regular SQL, all computed expressions must be aliased
with a valid identifier. Starting with version 2.0.1-beta, AS
is optional. Special names such as @id and @weight should currently
be used with leading at-sign. This at-sign requirement will be lifted in
the future.
EXIST() function (added in version 2.1.1-beta) is supported. EXIST ( "attr-name", default-value ) replaces non-existent columns with default values. It returns either a value of an attribute specified by 'attr-name', or 'default-value' if that attribute does not exist. As of 2.1.1-beta it does not support STRING or MVA attributes. This function is handy when you are searching through several indexes with different schemas.
SELECT *, EXIST('gid', 6) as cnd FROM i1, i2 WHERE cnd>5
SNIPPET() function (added in version 2.1.1-beta) is supported. This is a wrapper around the snippets functionality, similar to what is available via CALL SNIPPETS. This function takes two arguments: the text to highlight, and a query. The intended use is as follows:
SELECT id, SNIPPET(myUdf(id), "my.query") FROM myIndex WHERE MATCH("my.query")
where myUdf() would be a UDF that fetches a document by its ID from some external storage. This enables applications to fetch the entire result set directly from Sphinx in one query, without having to separately fetch the documents in the application and then send them back to Sphinx for highlighting.
SNIPPET() is a so-called "post limit" function, meaning that computing snippets is postponed not just until the entire final result set is ready, but even after the LIMIT clause is applied. For example, with a LIMIT 20,10 clause, SNIPPET() will be called at most 10 times.
FROM clause. The FROM clause should contain the list of indexes to search through. Unlike in regular SQL, a comma means enumeration of full-text indexes, as in the Query() API call, rather than a JOIN. Index names must follow the rules for C identifiers.
WHERE clause. This clause will map both to fulltext query
and filters. Comparison operators (=, !=, <, >, <=, >=), IN,
AND, NOT, and BETWEEN are all supported and map directly to filters.
OR is not supported yet but will be in the future. MATCH('query')
is supported and maps to fulltext query. Query will be interpreted
according to full-text query language rules.
There must be at most one MATCH() in the clause. Starting with version
2.0.1-beta, {col_name | expr_alias} [NOT] IN @uservar
condition syntax is supported. (Refer to Section 7.9, “SET syntax”
for a discussion of global user variables.)
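As a sketch of the IN @uservar form (the index, attribute, and variable names here are made up):

```sql
SET GLOBAL @candidates = (1, 3, 5, 7);
SELECT * FROM test1 WHERE MATCH('test') AND group_id IN @candidates;
```

This is useful when the same large ID list is reused across many queries, as the list is uploaded to the daemon once rather than sent with each statement.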
GROUP BY clause. Supports grouping by multiple columns or computed expressions:
SELECT *, group_id*1000+article_type AS gkey FROM example GROUP BY gkey
SELECT id FROM products GROUP BY region, price
Implicit grouping is supported when using aggregate functions without specifying a GROUP BY clause. Consider these two queries:
SELECT MAX(id), MIN(id), COUNT(*) FROM books
SELECT MAX(id), MIN(id), COUNT(*), 1 AS grp FROM books GROUP BY grp
Aggregate functions (AVG(), MIN(), MAX(), SUM()) in column list clause are supported. Arguments to aggregate functions can be either plain attributes or arbitrary expressions. COUNT(*) is implicitly supported as using GROUP BY will add @count column to result set. Explicit support might be added in the future. COUNT(DISTINCT attr) is supported. Currently there can be at most one COUNT(DISTINCT) per query and an argument needs to be an attribute. Both current restrictions on COUNT(DISTINCT) might be lifted in the future. A special GROUPBY() function is also supported. It returns the GROUP BY key (and replaces the deprecated @groupby magic variable). That is particularly useful when grouping by an MVA value, in order to pick the specific value that was used to create the current group.
SELECT *, AVG(price) AS avgprice, COUNT(DISTINCT storeid), GROUPBY() FROM products WHERE MATCH('ipod') GROUP BY vendorid
Starting with 2.0.1-beta, GROUP BY on a string attribute is supported, with respect for current collation (see Section 5.12, “Collations”).
You can sort the result set by (an alias of) the aggregate value.
SELECT group_id, MAX(id) AS max_id FROM my_index WHERE MATCH('the') GROUP BY group_id ORDER BY max_id DESC
However, you cannot use aggregate values in the WHERE clause. (A HAVING clause would be required for that, just as in standard SQL, and it is not yet supported.)
GROUP_CONCAT() function is supported, starting with version 2.1.1-beta. When you group by an attribute, the result set only shows attributes from a single document representing the whole group. GROUP_CONCAT() produces a comma-separated list of the attribute values of all documents in the group.
SELECT id, GROUP_CONCAT(price) as pricesList, GROUPBY() AS name FROM shops GROUP BY shopName;
ZONESPANLIST() function returns pairs of matched zone spans. Each pair contains the matched zone span identifier, a colon, and the order number of the matched zone span. For example, if a document reads <b><i>text</i> the <i>text</i></b>, and you query for 'ZONESPAN:(i,b) text', then ZONESPANLIST() will return the string "1:1 1:2 2:1", meaning that the first zone span matched "text" in spans 1 and 2, and the second zone span matched in span 1 only. This was added in version 2.1.1-beta.
WITHIN GROUP ORDER BY clause. This is a Sphinx specific extension that lets you control how the best row within a group will be selected. The syntax matches that of a regular ORDER BY clause:
SELECT *, INTERVAL(posted,NOW()-7*86400,NOW()-86400) AS timeseg FROM example WHERE MATCH('my search query') GROUP BY siteid WITHIN GROUP ORDER BY @weight DESC ORDER BY timeseg DESC, @weight DESC
Starting with 2.0.1-beta, WITHIN GROUP ORDER BY on a string attribute is supported, with respect for current collation (see Section 5.12, “Collations”).
ORDER BY clause. Unlike in regular SQL, only column names (not expressions) are allowed and explicit ASC and DESC are required. The columns however can be computed expressions:
SELECT *, @weight*10+docboost AS skey FROM example ORDER BY skey
Starting with 2.1.1-beta, you can use subqueries to speed up specific searches that involve reranking, by postponing hard (slow) calculations as late as possible. For example,

SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC, cond DESC LIMIT 100;

could be better written as

SELECT * FROM (SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC LIMIT 100) ORDER BY cond DESC;

because in the first case the slow expression would be evaluated for the whole set, while in the second it would be evaluated just for a subset of values.
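The benefit of the rewrite can be illustrated outside of SphinxQL. In this hypothetical Python sketch, slow_expr stands in for any expensive per-row expression; the rewritten form evaluates it only for the LIMIT-ed subset of 100 rows instead of all 10000:

```python
# Illustrative sketch only (not Sphinx code): evaluate the expensive
# expression for the LIMIT-ed subset, not for the whole match set.
rows = list(range(10000))  # stand-in for the full match set

calls = 0
def slow_expr(row_id):
    # stand-in for a_slow_expression(); counts how often it runs
    global calls
    calls += 1
    return -row_id

# Rewritten form: the inner query (ORDER BY id ASC LIMIT 100) cuts the
# set down cheaply; the outer ORDER BY cond DESC reranks just 100 rows.
subset = sorted(rows)[:100]
reranked = sorted(subset, key=slow_expr, reverse=True)
print(calls)  # 100, instead of 10000 in the naive single-query form
```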
Starting with 2.0.1-beta, ORDER BY on a string attribute is supported, with respect for current collation (see Section 5.12, “Collations”).
Starting with 2.0.2-beta, ORDER BY RAND() syntax is supported. Note that this syntax is actually going to randomize the weight values and then order matches by those randomized weights.
LIMIT clause. Both LIMIT N and LIMIT M,N forms are supported. Unlike in regular SQL (but like in Sphinx API), an implicit LIMIT 0,20 is present by default.
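The LIMIT M,N semantics correspond to a plain offset/count slice over the sorted result set, as this illustrative sketch (not Sphinx code) shows:

```python
# LIMIT M,N picks N rows starting at offset M; Sphinx defaults to LIMIT 0,20.
rows = list(range(100))      # stand-in for a sorted result set

offset, count = 0, 20        # the implicit default, LIMIT 0,20
page = rows[offset:offset + count]
print(len(page))             # 20

page3 = rows[40:40 + 20]     # LIMIT 40,20: third page of 20 rows
print(page3[0])              # 40
```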
OPTION clause. This is a Sphinx specific extension that lets you control a number of per-query options. The syntax is:
OPTION <optionname>=<value> [ , ... ]
Supported options and respectively allowed values are:
'agent_query_timeout' - integer (max time in milliseconds to wait for remote queries to complete, see agent-query-timeout under Index configuration options for details)
'boolean_simplify' - 0 or 1, enables simplifying the query to speed it up
'comment' - string, user comment that gets copied to a query log file
'cutoff' - integer (max found matches threshold)
'field_weights' - a named integer list (per-field user weights for ranking)
'global_idf' - use global statistics (frequencies) from the global_idf file for IDF computations, rather than the local index statistics. Added in version 2.1.1-beta.
'idf' - either 'normalized' (default) or 'plain'. Added in version 2.1.1-beta.
The standard IDF (Inverse Document Frequency) calculation may cause
undesired keyword penalization effects in the BM25 weighting functions.
For instance, if you search for [the | something] and [the] occurs
in more than 50% of the documents, then documents with both keywords
[the] and [something] will get less weight than documents with
just one keyword [something]. Using OPTION idf=plain
avoids this.
idf=normalized: bm25 variant, idf = log((N-n+1)/n), as per Robertson et al
idf=plain: plain variant, idf=log(N/n), as per Sparck-Jones
where N is the collection size and n is the number of matched documents. Hence plain IDF varies in [0, log(N)] range, and keywords are never penalized; while the normalized IDF varies in [-log(N), log(N)] range, and too frequent keywords are penalized.
'index_weights' - a named integer list (per-index user weights for ranking)
'max_matches' - integer (per-query max matches value)
'max_query_time' - integer (max search time threshold, msec)
'ranker' - any of 'proximity_bm25', 'bm25', 'none', 'wordcount', 'proximity', 'matchany', 'fieldmask', 'sph04', 'expr', or 'export' (refer to Section 5.4, “Search results ranking” for more details on each ranker)
'retry_count' - integer (distributed retries count)
'retry_delay' - integer (distributed retry delay, msec)
'reverse_scan' - 0 or 1, lets you control the order in which full-scan query processes the rows
'sort_method' - 'pq' (priority queue, set by default) or 'kbuffer' (gives faster sorting for already pre-sorted data, e.g. index data sorted by id). The result set is in both cases the same; picking one option or the other may just improve (or worsen!) performance. This option was added in version 2.1.1-beta.
Example:
SELECT * FROM test WHERE MATCH('@title hello @body world') OPTION ranker=bm25, max_matches=3000, field_weights=(title=10, body=3), agent_query_timeout=10000
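The difference between the two IDF variants described under OPTION idf is easy to verify numerically. The following sketch uses made-up values for the collection size N and matched-document count n; it shows that the plain variant never goes negative, while the normalized variant penalizes a keyword once it matches more than half the collection:

```python
import math

def idf_plain(N, n):
    # Sparck-Jones IDF: log(N/n), always in [0, log(N)]
    return math.log(N / n)

def idf_normalized(N, n):
    # Robertson et al. BM25 IDF: log((N-n+1)/n), in [-log(N), log(N)],
    # negative once n exceeds roughly half of N
    return math.log((N - n + 1) / n)

N = 1000  # hypothetical collection size
print(idf_plain(N, 600) > 0)       # frequent keyword, still non-negative
print(idf_normalized(N, 600) < 0)  # frequent keyword gets penalized
```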
SELECT @@system_variable [LIMIT [offset,] row_count]
Added in version 2.0.2-beta, this is currently a placeholder query that does nothing and reports success. That is in order to keep compatibility with frameworks and connectors that automatically execute this statement.
SHOW META [ LIKE pattern ]
SHOW META shows additional meta-information about the latest query such as query time and keyword statistics. IO and CPU counters will only be available if searchd was started with --iostats and --cpustats switches respectively.
mysql> SELECT * FROM test1 WHERE MATCH('test|one|two');
+------+--------+----------+------------+
| id   | weight | group_id | date_added |
+------+--------+----------+------------+
|    1 |   3563 |      456 | 1231721236 |
|    2 |   2563 |      123 | 1231721236 |
|    4 |   1480 |        2 | 1231721236 |
+------+--------+----------+------------+
3 rows in set (0.01 sec)

mysql> SHOW META;
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| total                 | 3     |
| total_found           | 3     |
| time                  | 0.005 |
| keyword[0]            | test  |
| docs[0]               | 3     |
| hits[0]               | 5     |
| keyword[1]            | one   |
| docs[1]               | 1     |
| hits[1]               | 2     |
| keyword[2]            | two   |
| docs[2]               | 1     |
| hits[2]               | 2     |
| cpu_time              | 0.350 |
| io_read_time          | 0.004 |
| io_read_ops           | 2     |
| io_read_kbytes        | 0.4   |
| io_write_time         | 0.000 |
| io_write_ops          | 0     |
| io_write_kbytes       | 0.0   |
| agents_cpu_time       | 0.000 |
| agent_io_read_time    | 0.000 |
| agent_io_read_ops     | 0     |
| agent_io_read_kbytes  | 0.0   |
| agent_io_write_time   | 0.000 |
| agent_io_write_ops    | 0     |
| agent_io_write_kbytes | 0.0   |
+-----------------------+-------+
26 rows in set (0.00 sec)
Starting with version 2.1.1-beta, you can also use the optional LIKE clause. It lets you pick just the variables that match a pattern. The pattern syntax is that of regular SQL wildcards, that is, '%' means any number of any characters, and '_' means a single character:
mysql> SHOW META LIKE 'total%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| total                 | 3     |
| total_found           | 3     |
+-----------------------+-------+
2 rows in set (0.00 sec)
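For illustration only, the wildcard semantics can be mimicked in a few lines of Python; this is not Sphinx code, and sql_like is a hypothetical helper:

```python
import re

def sql_like(pattern, value):
    # Translate SQL LIKE wildcards into a regular expression:
    # '%' matches any run of characters, '_' matches exactly one.
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
    return re.fullmatch(''.join(parts), value) is not None

print(sql_like('total%', 'total_found'))  # True
print(sql_like('total_', 'totals'))       # True: '_' matches one char
print(sql_like('total%', 'subtotal'))     # False: no leading wildcard
```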
SHOW WARNINGS
SHOW WARNINGS statement, introduced in version 0.9.9-rc2, can be used to retrieve the warning produced by the latest query. The warning message will be returned along with the query itself:
mysql> SELECT * FROM test1 WHERE MATCH('@@title hello') \G
ERROR 1064 (42000): index test1: syntax error, unexpected TOK_FIELDLIMIT near '@title hello'

mysql> SELECT * FROM test1 WHERE MATCH('@title -hello') \G
ERROR 1064 (42000): index test1: query is non-computable (single NOT operator)

mysql> SELECT * FROM test1 WHERE MATCH('"test doc"/3') \G
*************************** 1. row ***************************
        id: 4
    weight: 2500
  group_id: 2
date_added: 1231721236
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS \G
*************************** 1. row ***************************
  Level: warning
   Code: 1000
Message: quorum threshold too high (words=2, thresh=3); replacing quorum operator with AND operator
1 row in set (0.00 sec)
SHOW STATUS [ LIKE pattern ]
SHOW STATUS, introduced in version 0.9.9-rc2, displays a number of useful performance counters. IO and CPU counters will only be available if searchd was started with --iostats and --cpustats switches respectively.
mysql> SHOW STATUS;
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| uptime             | 216   |
| connections        | 3     |
| maxed_out          | 0     |
| command_search     | 0     |
| command_excerpt    | 0     |
| command_update     | 0     |
| command_keywords   | 0     |
| command_persist    | 0     |
| command_status     | 0     |
| agent_connect      | 0     |
| agent_retry        | 0     |
| queries            | 10    |
| dist_queries       | 0     |
| query_wall         | 0.075 |
| query_cpu          | OFF   |
| dist_wall          | 0.000 |
| dist_local         | 0.000 |
| dist_wait          | 0.000 |
| query_reads        | OFF   |
| query_readkb       | OFF   |
| query_readtime     | OFF   |
| avg_query_wall     | 0.007 |
| avg_query_cpu      | OFF   |
| avg_dist_wall      | 0.000 |
| avg_dist_local     | 0.000 |
| avg_dist_wait      | 0.000 |
| avg_query_reads    | OFF   |
| avg_query_readkb   | OFF   |
| avg_query_readtime | OFF   |
+--------------------+-------+
29 rows in set (0.00 sec)
Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, “SHOW META syntax” for its syntax details.
{INSERT | REPLACE} INTO index [(column, ...)] VALUES (value, ...) [, (...)]
INSERT statement, introduced in version 1.10-beta, is only supported for RT indexes. It inserts new rows (documents) into an existing index, with the provided column values.
ID column must be present in all cases. Rows with duplicate IDs will not be overwritten by INSERT; use REPLACE to do that.
index is the name of the RT index into which the new row(s)
should be inserted. The optional column names list lets you explicitly specify
values for just some of the columns present in the index. All the other columns will be
filled with their default values (0 for scalar types, empty string for text types).
Expressions are not currently supported in INSERT, and values should be explicitly specified.
Multiple rows can be inserted using a single INSERT statement, by providing several comma-separated, parenthesized lists of row values.
{INSERT | REPLACE} INTO index [(column, ...)] VALUES (value, ...) [, (...)]
REPLACE syntax is identical to INSERT syntax and is discussed in Section 7.6, “INSERT and REPLACE syntax”.
DELETE FROM index WHERE {id = value | id IN (val1 [, val2 [, ...]])}
DELETE statement, introduced in version 1.10-beta, is only supported for RT indexes. It deletes existing rows (documents) from an existing index based on ID.
index is the name of the RT index from which the row should be deleted.
value is the row ID to be deleted. Support for the batch
id IN (2,3,5) syntax was added in version 2.0.1-beta.
Additional types of WHERE conditions (such as conditions on attributes, etc) are planned, but not supported yet as of 1.10-beta.
SET [GLOBAL] server_variable_name = value SET GLOBAL @user_variable_name = (int_val1 [, int_val2, ...]) SET NAMES value SET @@dummy_variable = ignored_value
SET statement, introduced in version 1.10-beta, modifies a variable value. The variable names are case-insensitive. No variable value changes survive server restart.
SET NAMES statement and SET @@variable_name syntax, both introduced in version 2.0.2-beta, do nothing. They were implemented to maintain compatibility with 3rd party MySQL client libraries, connectors, and frameworks that may need to run this statement when connecting.
The variables fall into the following classes:
per-session server variable (1.10-beta and above)
global server variable (2.0.1-beta and above)
global user variable (2.0.1-beta and above)
Global user variables are shared between concurrent sessions. Currently,
the only supported value type is the list of BIGINTs, and these variables
can only be used along with IN() for filtering purpose. The intended usage
scenario is uploading huge lists of values to searchd
(once) and reusing them (many times) later, saving on network overheads.
Example:
// in session 1
mysql> SET GLOBAL @myfilter=(2,3,5,7,11,13);
Query OK, 0 rows affected (0.00 sec)

// later in session 2
mysql> SELECT * FROM test1 WHERE group_id IN @myfilter;
+------+--------+----------+------------+-----------------+------+
| id   | weight | group_id | date_added | title           | tag  |
+------+--------+----------+------------+-----------------+------+
|    3 |      1 |        2 | 1299338153 | another doc     | 15   |
|    4 |      1 |        2 | 1299338153 | doc number four | 7,40 |
+------+--------+----------+------------+-----------------+------+
2 rows in set (0.02 sec)
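The filtering behavior of IN @uservar amounts to plain set membership, as this hypothetical sketch (not Sphinx code) shows:

```python
# Illustration of the IN @uservar filter: upload a large value list once,
# then reuse it across many queries. The names below are made up.
myfilter = {2, 3, 5, 7, 11, 13}     # SET GLOBAL @myfilter=(2,3,5,7,11,13)

docs = [(1, 456), (3, 2), (4, 2)]   # (id, group_id) pairs, stand-in rows
matched = [doc_id for doc_id, gid in docs if gid in myfilter]
print(matched)  # [3, 4]
```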
Per-session and global server variables affect certain server settings in the respective scope. Known per-session server variables are:
AUTOCOMMIT = {0 | 1}
Whether any data modification statement should be implicitly wrapped by BEGIN and COMMIT. Introduced in version 1.10-beta.
COLLATION_CONNECTION = collation_name
Selects the collation to be used for ORDER BY or GROUP BY on string values in the subsequent queries. Refer to Section 5.12, “Collations” for a list of known collation names. Introduced in version 2.0.1-beta.
CHARACTER_SET_RESULTS = charset_name
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically enforce a charset when connecting to a Sphinx server. Introduced in version 2.0.1-beta.
SQL_AUTO_IS_NULL = value
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically set this variable when connecting to a Sphinx server. Introduced in version 2.0.2-beta.
SQL_MODE = value
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically set this variable when connecting to a Sphinx server. Introduced in version 2.0.2-beta.
PROFILING = {0 | 1}
Enables query profiling in the current session. Defaults to 0. See also Section 7.29, “SHOW PROFILE syntax”. Introduced in version 2.1.1-beta.
Known global server variables are:
QUERY_LOG_FORMAT = {plain | sphinxql}
Changes the current log format. Introduced in version 2.0.1-beta.
LOG_LEVEL = {info | debug | debugv | debugvv}
Changes the current log verboseness level. Introduced in version 2.0.1-beta.
Examples:
mysql> SET autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL query_log_format=sphinxql;
Query OK, 0 rows affected (0.00 sec)
SET TRANSACTION ISOLATION LEVEL { READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SERIALIZABLE }
SET TRANSACTION statement, introduced in version 2.0.2-beta, does nothing. It was implemented to maintain compatibility with 3rd party MySQL client libraries, connectors, and frameworks that may need to run this statement when connecting.
Example:
mysql> SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Query OK, 0 rows affected (0.00 sec)
START TRANSACTION | BEGIN COMMIT ROLLBACK SET AUTOCOMMIT = {0 | 1}
BEGIN, COMMIT, and ROLLBACK statements were introduced in version 1.10-beta. BEGIN statement (or its START TRANSACTION alias) forcibly commits pending transaction, if any, and begins a new one. COMMIT statement commits the current transaction, making all its changes permanent. ROLLBACK statement rolls back the current transaction, canceling all its changes. SET AUTOCOMMIT controls the autocommit mode in the active session.
AUTOCOMMIT is set to 1 by default, meaning that every statement that performs any changes on any index is implicitly wrapped in BEGIN and COMMIT.
Transactions are limited to a single RT index, and also limited in size. They are atomic, consistent, overly isolated, and durable. Overly isolated means that the changes are not only invisible to the concurrent transactions but even to the current session itself.
START TRANSACTION | BEGIN
BEGIN syntax is discussed in detail in Section 7.11, “BEGIN, COMMIT, and ROLLBACK syntax”.
ROLLBACK
ROLLBACK syntax is discussed in detail in Section 7.11, “BEGIN, COMMIT, and ROLLBACK syntax”.
CALL SNIPPETS(data, index, query[, opt_value AS opt_name[, ...]])
CALL SNIPPETS statement, introduced in version 1.10-beta, builds a snippet from provided data and query, using specified index settings.
data is the source data to extract a snippet from. It can be a single string,
or a list of strings enclosed in curly brackets.
index is the name of the index from which to take the text
processing settings. query is the full-text query to build
snippets for. Additional options are documented in
Section 8.7.1, “BuildExcerpts”. Usage example:
CALL SNIPPETS('this is my document text', 'test1', 'hello world', 5 AS around, 200 AS limit);
CALL SNIPPETS(('this is my document text','this is my another text'), 'test1', 'hello world', 5 AS around, 200 AS limit);
CALL SNIPPETS(('data/doc1.txt','data/doc2.txt','/home/sphinx/doc3.txt'), 'test1', 'hello world', 5 AS around, 200 AS limit, 1 AS load_files);
CALL KEYWORDS(text, index, [hits])
CALL KEYWORDS statement, introduced in version 1.10-beta, splits text into particular keywords. It returns tokenized and normalized forms of the keywords, and, optionally, keyword statistics.
text is the text to break down into keywords.
index is the name of the index from which to take the text
processing settings. hits is an optional boolean parameter
that specifies whether to return document and hit occurrence statistics.
SHOW TABLES [ LIKE pattern ]
SHOW TABLES statement, introduced in version 2.0.1-beta, enumerates
all currently active indexes along with their types. As of 2.0.1-beta,
the existing index types are local, distributed, and rt.
Example:
mysql> SHOW TABLES;
+-------+-------------+
| Index | Type        |
+-------+-------------+
| dist1 | distributed |
| rt    | rt          |
| test1 | local       |
| test2 | local       |
+-------+-------------+
4 rows in set (0.00 sec)
Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, “SHOW META syntax” for its syntax details.
mysql> SHOW TABLES LIKE '%4';
+-------+-------------+
| Index | Type        |
+-------+-------------+
| dist4 | distributed |
+-------+-------------+
1 row in set (0.00 sec)
{DESC | DESCRIBE} index [ LIKE pattern ]
DESCRIBE statement, introduced in version 2.0.1-beta, lists
index columns and their associated types. Columns are document ID,
full-text fields, and attributes. The order matches that in which
fields and attributes are expected by INSERT and REPLACE statements.
As of 2.0.1-beta, column types are field, integer, timestamp,
ordinal, bool, float, bigint, string, and mva.
The ID column will be typed either integer or bigint based on whether
the binaries were built with 32-bit or 64-bit document ID support.
Example:
mysql> DESC rt;
+---------+---------+
| Field   | Type    |
+---------+---------+
| id      | integer |
| title   | field   |
| content | field   |
| gid     | integer |
+---------+---------+
4 rows in set (0.00 sec)
Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, “SHOW META syntax” for its syntax details.
CREATE FUNCTION udf_name RETURNS {INT | BIGINT | FLOAT | STRING} SONAME 'udf_lib_file'
CREATE FUNCTION statement, introduced in version 2.0.1-beta, installs a user-defined function (UDF) with the given name and type from the given library file. The library file must reside in a trusted plugin_dir directory. On success, the function is available for use in all subsequent queries that the server receives. Example:
mysql> CREATE FUNCTION avgmva RETURNS INT SONAME 'udfexample.dll';
Query OK, 0 rows affected (0.03 sec)

mysql> SELECT *, AVGMVA(tag) AS q from test1;
+------+--------+---------+-----------+
| id   | weight | tag     | q         |
+------+--------+---------+-----------+
|    1 |      1 | 1,3,5,7 | 4.000000  |
|    2 |      1 | 2,4,6   | 4.000000  |
|    3 |      1 | 15      | 15.000000 |
|    4 |      1 | 7,40    | 23.500000 |
+------+--------+---------+-----------+
DROP FUNCTION udf_name
DROP FUNCTION statement, introduced in version 2.0.1-beta, deinstalls a user-defined function (UDF) with the given name. On success, the function is no longer available for use in subsequent queries. Pending concurrent queries will not be affected and the library unload, if necessary, will be postponed until those queries complete. Example:
mysql> DROP FUNCTION avgmva;
Query OK, 0 rows affected (0.00 sec)
SHOW [{GLOBAL | SESSION}] VARIABLES [WHERE variable_name='xxx']
SHOW VARIABLES statement was added in version 2.0.1-beta to improve compatibility with 3rd party MySQL connectors and frameworks that automatically execute this statement. The WHERE option was added in version 2.1.1-beta.
In version 2.0.1-beta, it did nothing.
Starting from version 2.0.2-beta, it returns the current values of a few server-wide variables. Also, support for GLOBAL and SESSION clauses was added.
mysql> SHOW GLOBAL VARIABLES;
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| autocommit           | 1        |
| collation_connection | libc_ci  |
| query_log_format     | sphinxql |
| log_level            | info     |
+----------------------+----------+
4 rows in set (0.00 sec)
Starting from 2.1.1-beta, support for WHERE variable_name clause was added, to help certain connectors.
SHOW COLLATION
Added in version 2.0.1-beta, this is currently a placeholder query that does nothing and reports success. That is in order to keep compatibility with frameworks and connectors that automatically execute this statement.
mysql> SHOW COLLATION;
Query OK, 0 rows affected (0.00 sec)
SHOW CHARACTER SET
Added in version 2.1.1-beta, this is currently a placeholder query that does nothing and reports that a UTF-8 character set is available. It was added in order to keep compatibility with frameworks and connectors that automatically execute this statement.
mysql> SHOW CHARACTER SET;
+---------+---------------+-------------------+--------+
| Charset | Description   | Default collation | Maxlen |
+---------+---------------+-------------------+--------+
| utf8    | UTF-8 Unicode | utf8_general_ci   | 3      |
+---------+---------------+-------------------+--------+
1 row in set (0.00 sec)
UPDATE index SET col1 = newval1 [, ...] WHERE where_condition [OPTION opt_name = opt_value [, ...]]
UPDATE statement was added in version 2.0.1-beta. Multiple attributes and values can be specified in a single statement. Both RT and disk indexes are supported.
As of version 2.0.2-beta, all attribute types (int, bigint, float, MVA) except strings can be updated. Previously, some of the types were not supported.
where_condition
(also added in 2.0.2-beta) has the same syntax
as in the SELECT statement (see Section 7.1, “SELECT syntax” for details).
When assigning out-of-range values to 32-bit attributes, they will be silently trimmed to their lower 32 bits. For example, if you try to update a 32-bit unsigned int with a value of 4294967297, the value 1 will actually be stored, because the lower 32 bits of 4294967297 (0x100000001 in hex) amount to 1 (0x00000001 in hex).
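The trimming is ordinary modular arithmetic, as this quick check (not Sphinx code) shows:

```python
# A 32-bit attribute keeps only the lower 32 bits of an oversized value.
value = 4294967297            # 0x100000001
stored = value & 0xFFFFFFFF   # mask off everything above bit 31
print(stored)                 # 1
```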
MVA value sets for updating (and also for INSERT or REPLACE, refer to Section 7.6, “INSERT and REPLACE syntax”) must be specified as comma-separated lists in parentheses. To erase an MVA value, just assign () to it.
mysql> UPDATE myindex SET enabled=0 WHERE id=123;
Query OK, 1 rows affected (0.00 sec)

mysql> UPDATE myindex SET bigattr=-100000000000, fattr=3465.23, mvattr1=(3,6,4), mvattr2=() WHERE MATCH('hehe') AND enabled=1;
Query OK, 148 rows affected (0.01 sec)
OPTION clause. This is a Sphinx specific extension that lets you control a number of per-update options. The syntax is:
OPTION <optionname>=<value> [ , ... ]
The list of allowed options is the same as for the SELECT statement. Specifically for the UPDATE statement, you can use these options:
'ignore_nonexistent_columns' - this option, added in version 2.1.1-beta, makes the update silently ignore attempts to update a column that does not exist in the current index schema.
ATTACH INDEX diskindex TO RTINDEX rtindex
ATTACH INDEX statement, added in version 2.0.2-beta, lets you move data from a regular disk index to a RT index.
After a successful ATTACH, the data originally stored in the source disk index becomes a part of the target RT index, and the source disk index becomes unavailable (until the next rebuild). ATTACH does not result in any index data changes. Basically, it just renames the files (making the source index a new disk chunk of the target RT index) and updates the metadata. So it is generally a quick operation, which frequently completes in under a second.
Note that when an index is attached to an empty RT index, the fields, attributes, and text processing settings (tokenizer, wordforms, etc) from the source index are copied over and take effect. The respective parts of the RT index definition from the configuration file will be ignored.
As of 2.0.2-beta, ATTACH INDEX comes with a number of restrictions. Most notably, the target RT index is currently required to be empty, making ATTACH INDEX a one-time conversion operation only. Those restrictions may be lifted in future releases, as we add the needed functionality to the RT indexes. The complete list is as follows.
Target RT index needs to be empty. (See Section 7.27, “TRUNCATE RTINDEX syntax”)
Source disk index needs to have index_sp=0, boundary_step=0, stopword_step=1, dict=crc settings.
Source disk index needs to have an empty index_zones setting.
mysql> DESC rt;
+-----------+---------+
| Field     | Type    |
+-----------+---------+
| id        | integer |
| testfield | field   |
| testattr  | uint    |
+-----------+---------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM rt;
Empty set (0.00 sec)

mysql> SELECT * FROM disk WHERE MATCH('test');
+------+--------+----------+------------+
| id   | weight | group_id | date_added |
+------+--------+----------+------------+
|    1 |   1304 |        1 | 1313643256 |
|    2 |   1304 |        1 | 1313643256 |
|    3 |   1304 |        1 | 1313643256 |
|    4 |   1304 |        1 | 1313643256 |
+------+--------+----------+------------+
4 rows in set (0.00 sec)

mysql> ATTACH INDEX disk TO RTINDEX rt;
Query OK, 0 rows affected (0.00 sec)

mysql> DESC rt;
+------------+-----------+
| Field      | Type      |
+------------+-----------+
| id         | integer   |
| title      | field     |
| content    | field     |
| group_id   | uint      |
| date_added | timestamp |
+------------+-----------+
5 rows in set (0.00 sec)

mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+----------+------------+
| id   | weight | group_id | date_added |
+------+--------+----------+------------+
|    1 |   1304 |        1 | 1313643256 |
|    2 |   1304 |        1 | 1313643256 |
|    3 |   1304 |        1 | 1313643256 |
|    4 |   1304 |        1 | 1313643256 |
+------+--------+----------+------------+
4 rows in set (0.00 sec)

mysql> SELECT * FROM disk WHERE MATCH('test');
ERROR 1064 (42000): no enabled local indexes to search
FLUSH RTINDEX rtindex
FLUSH RTINDEX statement, added in version 2.0.2-beta, forcibly flushes RT index RAM chunk contents to disk.
Backing up a RT index is as simple as copying over its data files, followed by the binary log. However, recovering from that backup means that all the transactions in the log since the last successful RAM chunk write would need to be replayed. Those writes normally happen either on a clean shutdown, or periodically with a (big enough!) interval between writes specified in rt_flush_period directive. So such a backup made at an arbitrary point in time just might end up with way too much binary log data to replay.
FLUSH RTINDEX forcibly writes the RAM chunk contents to disk, and also causes the subsequent cleanup of (now-redundant) binary log files. Thus, recovering from a backup made just after FLUSH RTINDEX should be almost instant.
mysql> FLUSH RTINDEX rt;
Query OK, 0 rows affected (0.05 sec)
FLUSH RAMCHUNK rtindex
FLUSH RAMCHUNK statement, added in version 2.1.2-release, forcibly creates a new disk chunk in a RT index.
Normally, RT index would flush and convert the contents of the RAM chunk into a new disk chunk automatically, once the RAM chunk reaches the maximum allowed rt_mem_limit size. However, for debugging and testing it might be useful to forcibly create a new disk chunk, and FLUSH RAMCHUNK statement does exactly that.
Note that using FLUSH RAMCHUNK increases RT index fragmentation. Most likely, you want to use FLUSH RTINDEX instead. We suggest that you abstain from using this statement unless you're absolutely sure what you're doing.
mysql> FLUSH RAMCHUNK rt;
Query OK, 0 rows affected (0.05 sec)
TRUNCATE RTINDEX rtindex
TRUNCATE RTINDEX statement, added in version 2.1.1-beta, clears the RT index completely. It disposes the in-memory data, unlinks all the index data files, and releases the associated binary logs.
mysql> TRUNCATE RTINDEX rt;
Query OK, 0 rows affected (0.05 sec)
You may want to use this if you are using RT indices as "delta index" files; when you build the main index, you need to wipe the delta index, and thus TRUNCATE RTINDEX. You also need to use this command before attaching an index; see Section 7.24, “ATTACH INDEX syntax”.
SHOW AGENT ['agent'|'index'|index] STATUS [ LIKE pattern ]
Displays the statistics of remote agents or of a distributed index. It includes values such as the age of the last request and the last answer, and the number of errors and successes of various kinds. Statistics are shown for every agent for the last 1, 5, and 15 intervals, each of ha_period_karma seconds. The command is only available via SphinxQL.
mysql> SHOW AGENT STATUS;
+------------------------------------+----------------------------+
| Key                                | Value                      |
+------------------------------------+----------------------------+
| status_period_seconds              | 60                         |
| status_stored_periods              | 15                         |
| ag_0_hostname                      | 192.168.0.202:6713         |
| ag_0_references                    | 2                          |
| ag_0_lastquery                     | 0.41                       |
| ag_0_lastanswer                    | 0.19                       |
| ag_0_lastperiodmsec                | 222                        |
| ag_0_errorsarow                    | 0                          |
| ag_0_1periods_query_timeouts       | 0                          |
| ag_0_1periods_connect_timeouts     | 0                          |
| ag_0_1periods_connect_failures     | 0                          |
| ag_0_1periods_network_errors       | 0                          |
| ag_0_1periods_wrong_replies        | 0                          |
| ag_0_1periods_unexpected_closings  | 0                          |
| ag_0_1periods_warnings             | 0                          |
| ag_0_1periods_succeeded_queries    | 27                         |
| ag_0_1periods_msecsperquery        | 232.31                     |
| ag_0_5periods_query_timeouts       | 0                          |
| ag_0_5periods_connect_timeouts     | 0                          |
| ag_0_5periods_connect_failures     | 0                          |
| ag_0_5periods_network_errors       | 0                          |
| ag_0_5periods_wrong_replies        | 0                          |
| ag_0_5periods_unexpected_closings  | 0                          |
| ag_0_5periods_warnings             | 0                          |
| ag_0_5periods_succeeded_queries    | 146                        |
| ag_0_5periods_msecsperquery        | 231.83                     |
| ag_1_hostname                      | 192.168.0.202:6714         |
| ag_1_references                    | 2                          |
| ag_1_lastquery                     | 0.41                       |
| ag_1_lastanswer                    | 0.19                       |
| ag_1_lastperiodmsec                | 220                        |
| ag_1_errorsarow                    | 0                          |
| ag_1_1periods_query_timeouts       | 0                          |
| ag_1_1periods_connect_timeouts     | 0                          |
| ag_1_1periods_connect_failures     | 0                          |
| ag_1_1periods_network_errors       | 0                          |
| ag_1_1periods_wrong_replies        | 0                          |
| ag_1_1periods_unexpected_closings  | 0                          |
| ag_1_1periods_warnings             | 0                          |
| ag_1_1periods_succeeded_queries    | 27                         |
| ag_1_1periods_msecsperquery        | 231.24                     |
| ag_1_5periods_query_timeouts       | 0                          |
| ag_1_5periods_connect_timeouts     | 0                          |
| ag_1_5periods_connect_failures     | 0                          |
| ag_1_5periods_network_errors       | 0                          |
| ag_1_5periods_wrong_replies        | 0                          |
| ag_1_5periods_unexpected_closings  | 0                          |
| ag_1_5periods_warnings             | 0                          |
| ag_1_5periods_succeeded_queries    | 146                        |
| ag_1_5periods_msecsperquery        | 230.85                     |
+------------------------------------+----------------------------+
50 rows in set (0.01 sec)
Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, “SHOW META syntax” for its syntax details.
mysql> SHOW AGENT STATUS LIKE '%5period%msec%';
+-----------------------------+--------+
| Key                         | Value  |
+-----------------------------+--------+
| ag_0_5periods_msecsperquery | 234.72 |
| ag_1_5periods_msecsperquery | 233.73 |
| ag_2_5periods_msecsperquery | 343.81 |
+-----------------------------+--------+
3 rows in set (0.00 sec)
You can specify a particular agent by its address. In this case, only that agent's data will be displayed, and the 'agent_' prefix will be used instead of 'ag_N_':
mysql> SHOW AGENT '192.168.0.202:6714' STATUS LIKE '%15periods%';
+-------------------------------------+--------+
| Variable_name                       | Value  |
+-------------------------------------+--------+
| agent_15periods_query_timeouts      | 0      |
| agent_15periods_connect_timeouts    | 0      |
| agent_15periods_connect_failures    | 0      |
| agent_15periods_network_errors      | 0      |
| agent_15periods_wrong_replies       | 0      |
| agent_15periods_unexpected_closings | 0      |
| agent_15periods_warnings            | 0      |
| agent_15periods_succeeded_queries   | 439    |
| agent_15periods_msecsperquery       | 231.73 |
+-------------------------------------+--------+
9 rows in set (0.00 sec)
Finally, you can check the status of the agents in a specific distributed index using the SHOW AGENT index STATUS statement. It shows the index HA status (i.e. whether or not it uses agent mirrors at all), and then the mirror information (specifically: address, blackhole and persistent flags, and the mirror selection probability used when one of the weighted-probability strategies is in effect).
mysql> SHOW AGENT dist_index STATUS;
+--------------------------------------+--------------------------------+
| Variable_name                        | Value                          |
+--------------------------------------+--------------------------------+
| dstindex_1_is_ha                     | 1                              |
| dstindex_1mirror1_id                 | 192.168.0.202:6713:loc         |
| dstindex_1mirror1_probability_weight | 0.372864                       |
| dstindex_1mirror1_is_blackhole       | 0                              |
| dstindex_1mirror1_is_persistent      | 0                              |
| dstindex_1mirror2_id                 | 192.168.0.202:6714:loc         |
| dstindex_1mirror2_probability_weight | 0.374635                       |
| dstindex_1mirror2_is_blackhole       | 0                              |
| dstindex_1mirror2_is_persistent      | 0                              |
| dstindex_1mirror3_id                 | dev1.sphinxsearch.com:6714:loc |
| dstindex_1mirror3_probability_weight | 0.252501                       |
| dstindex_1mirror3_is_blackhole       | 0                              |
| dstindex_1mirror3_is_persistent      | 0                              |
+--------------------------------------+--------------------------------+
13 rows in set (0.00 sec)
SHOW PROFILE
The SHOW PROFILE statement, added in version 2.1.1-beta, shows a detailed execution profile of the previous SQL statement executed in the current SphinxQL session. Profiling must be enabled in the current session before running the statement to be instrumented; that can be done with a SET profiling=1 statement. By default, profiling is disabled to avoid potential performance implications, and therefore the profile will be empty.
Here's a complete instrumentation example:
mysql> SET profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT id FROM lj WHERE MATCH('the test') LIMIT 1;
+--------+
| id     |
+--------+
| 946418 |
+--------+
1 row in set (0.05 sec)

mysql> SHOW PROFILE;
+--------------+----------+----------+
| Status       | Duration | Switches |
+--------------+----------+----------+
| unknown      | 0.000610 | 6        |
| net_read     | 0.000007 | 1        |
| dist_connect | 0.000036 | 1        |
| sql_parse    | 0.000048 | 1        |
| dict_setup   | 0.000001 | 1        |
| parse        | 0.000023 | 1        |
| transforms   | 0.000002 | 1        |
| init         | 0.000401 | 3        |
| open         | 0.000104 | 1        |
| read_docs    | 0.001570 | 71       |
| read_hits    | 0.003936 | 222      |
| get_docs     | 0.029837 | 1347     |
| get_hits     | 0.000548 | 1433     |
| filter       | 0.000619 | 1274     |
| rank         | 0.009892 | 2909     |
| sort         | 0.001562 | 52       |
| finalize     | 0.000250 | 1        |
| dist_wait    | 0.000000 | 1        |
| aggregate    | 0.000145 | 1        |
| net_write    | 0.000031 | 1        |
+--------------+----------+----------+
20 rows in set (0.00 sec)
The Status column briefly describes where exactly (in which state) the time was spent. The Duration column shows the wall clock time, in seconds. The Switches column displays the number of times the query engine switched to the given state. Those are just logical engine state switches, not OS-level context switches or function calls (even though some of the sections can actually map to function calls), and they do not have any direct effect on performance. In a sense, the number of switches is just the number of times the respective instrumentation point was hit.
States in the profile are returned in a prerecorded order that roughly maps (but is not identical) to the actual query order.
A list of states may (and will) vary over time, as we refine the states. Here's a brief description of the currently profiled states.
SHOW INDEX index_name STATUS
Added in version 2.1.1-beta. Displays various per-index statistics. Currently, those include:
mysql> SHOW INDEX lj STATUS;
+----------------------+------------+
| Variable_name        | Value      |
+----------------------+------------+
| index_type           | disk       |
| indexed_documents    | 999944     |
| indexed_bytes        | 1115085501 |
| field_tokens_title   | 2473915    |
| field_tokens_content | 191050669  |
| ram_bytes            | 64705411   |
+----------------------+------------+
6 rows in set (0.00 sec)
OPTIMIZE INDEX index_name
Available since version 2.1.1-beta, the OPTIMIZE statement enqueues an RT index for optimization in a background thread.
Over time, RT indexes can grow fragmented into many disk chunks and/or polluted with deleted but unpurged data, which impacts search performance. When that happens, they can be optimized. Basically, the optimization pass merges disk chunk pairs together, purging documents suppressed by the K-list as it goes.
That is a lengthy and IO intensive process, so to limit the impact, all the actual merge work is executed serially in a special background thread, and the OPTIMIZE statement simply adds a job to its queue. Currently, there is no way to check the index or queue status (that might be added in the future to the SHOW INDEX STATUS and SHOW STATUS statements, respectively). The optimization thread can be IO-throttled: you can control the maximum number of IOs per second and the maximum IO size with the rt_merge_iops and rt_merge_maxiosize directives, respectively. The optimization job queue is lost on daemon crash.
The RT index being optimized stays online and available for both searching and updates at (almost) all times during the optimization. It gets locked (very) briefly every time a pair of disk chunks is merged successfully, to rename the old and the new files and update the index header.
At the moment, OPTIMIZE needs to be issued manually; indexes will not be optimized automatically. That might change in future releases.
mysql> OPTIMIZE INDEX rt;
Query OK, 0 rows affected (0.00 sec)
SHOW PLAN
The SHOW PLAN statement, added in 2.1.2-release, displays the execution plan of the previous SELECT statement. The plan gets generated and stored during the actual execution, so profiling must be enabled in the current session before running that statement. That can be done with a SET profiling=1 statement.
Here's a complete instrumentation example:
mysql> SET profiling=1 \G
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT id FROM lj WHERE MATCH('the i') LIMIT 1 \G
*************************** 1. row ***************************
id: 39815
1 row in set (1.53 sec)

mysql> SHOW PLAN \G
*************************** 1. row ***************************
Variable: transformed_tree
   Value: AND(
  AND(KEYWORD(the, querypos=1)),
  AND(KEYWORD(i, querypos=2)))
1 row in set (0.00 sec)
And here's a less trivial example that shows how the actually evaluated query tree can be rather different from the original one because of expansions and other transformations:
mysql> SELECT * FROM test WHERE MATCH('@title abc* @body hey') \G SHOW PLAN \G
...
*************************** 1. row ***************************
Variable: transformed_tree
   Value: AND(
  OR(fields=(title), KEYWORD(abcx, querypos=1, expanded), KEYWORD(abcm, querypos=1, expanded)),
  AND(fields=(body), KEYWORD(hey, querypos=2)))
1 row in set (0.00 sec)
Starting with version 2.0.1-beta, SphinxQL supports multi-statement queries, or batches. The possible inter-statement optimizations described in Section 5.11, “Multi-queries” apply to SphinxQL just as well. The batched queries should be separated by a semicolon. Your MySQL client library needs to support the MySQL multi-query mechanism and multiple result sets. For instance, the mysqli interface in PHP and the DBI/DBD libraries in Perl are known to work.
Here's a PHP sample showing how to utilize mysqli interface with Sphinx.
<?php
$link = mysqli_connect ( "127.0.0.1", "root", "", "", 9306 );
if ( mysqli_connect_errno() )
    die ( "connect failed: " . mysqli_connect_error() );

$batch = "SELECT * FROM test1 ORDER BY group_id ASC;";
$batch .= "SELECT * FROM test1 ORDER BY group_id DESC";
if ( !mysqli_multi_query ( $link, $batch ) )
    die ( "query failed" );

do
{
    // fetch and print result set
    if ( $result = mysqli_store_result($link) )
    {
        while ( $row = mysqli_fetch_row($result) )
            printf ( "id=%s\n", $row[0] );
        mysqli_free_result($result);
    }

    // print divider
    if ( mysqli_more_results($link) )
        printf ( "------\n" );
} while ( mysqli_next_result($link) );
Its output with the sample test1 index included with Sphinx is as follows.
$ php test_multi.php
id=1
id=2
id=3
id=4
------
id=3
id=4
id=1
id=2
The following statements can currently be used in a batch: SELECT, SHOW WARNINGS, SHOW STATUS, and SHOW META. An arbitrary sequence of these statements is allowed. The result sets returned should match those that would be returned if the batched queries were sent one by one.
Since version 2.0.1-beta, SphinxQL supports C-style comment syntax. Everything from an opening /* sequence to a closing */ sequence is ignored. Comments can span multiple lines, cannot nest, and should not get logged. MySQL specific /*! ... */ comments are also currently ignored. (Comment support was added primarily for better compatibility with mysqldump-produced dumps, rather than to improve general query interoperability between Sphinx and MySQL.)
SELECT /*! SQL_CALC_FOUND_ROWS */ col1 FROM table1 WHERE ...
Below is the complete alphabetical list of keywords that are currently reserved in SphinxQL syntax (and therefore cannot be used as identifiers).
AND AGENT AS ASC AVG BEGIN BETWEEN BY CALL COLLATION COMMIT COUNT DELETE DESC DESCRIBE DISTINCT FALSE FROM GLOBAL GROUP ID IN INSERT INTO LIMIT MATCH MAX META MIN NOT NULL OPTION OR ORDER REPLACE ROLLBACK SELECT SET SHOW START STATUS SUM TABLES TRANSACTION TRUE UPDATE VALUES VARIABLES WARNINGS WEIGHT WHERE WITHIN
This section only applies to existing applications that use SphinxQL versions prior to 2.0.1-beta.
In previous versions, SphinxQL was just a wrapper around SphinxAPI and inherited its magic columns and column set quirks. Essentially, SphinxQL queries could return (slightly) different columns, and in a (slightly) different order, than was explicitly requested in the query. Namely, the weight magic column (which is not a real column in any index) was added at all times, and the GROUP BY related @count, @group, and @distinct magic columns were conditionally added when grouping. Also, the order of columns (attributes) in the result set was actually taken from the index rather than the query. (So if you asked for columns C, B, A in your query but they were in the A, B, C order in the index, they would have been returned in the A, B, C order.)
In version 2.0.1-beta, we fixed that. SphinxQL is now more SQL compliant (and will be brought into as much compliance with standard SQL syntax as possible). This is not yet a breaking change, because searchd now supports the compat_sphinxql_magics directive that switches between the old "compatibility" mode and the new "compliance" mode. However, compatibility mode support is going to be removed in the future, so it is strongly advised to update SphinxQL applications and switch to compliance mode.
The important changes are as follows:
The @ID magic name is deprecated in favor of ID. Document ID is considered an attribute.
WEIGHT is no longer implicitly returned, because it is not actually a column (an index attribute), but rather an internal function computed per each row (a match). You have to explicitly ask for it, using the WEIGHT() function. (The requirement to alias the result will be lifted in the next release.)
SELECT id, WEIGHT() w FROM myindex WHERE MATCH('test')
You can now use quoted reserved keywords as aliases. The quote character is the backtick ("`", ASCII code 96 decimal, 60 hex). One particularly useful example would be returning the weight column like the old mode did:
SELECT id, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
The column order is now different and should now match the one explicitly defined in the query. So if you are accessing columns based on their position in the result set rather than by name (for instance, by using mysql_fetch_row() rather than mysql_fetch_assoc() in PHP), check and fix the order of columns in your queries.
SELECT * returns the columns in index order, as it used to, including the ID column. However, SELECT * does not automatically return WEIGHT(). To update such queries in case you access columns by name, simply add it to the query:
SELECT *, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
Otherwise, i.e., in case you rely on column order, select ID, weight, and then other columns:
SELECT id, *, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
The magic @count and @distinct attributes are no longer implicitly returned. You now have to explicitly ask for them when using GROUP BY. (Also note that you currently have to alias them; that requirement will be lifted in the future.)
SELECT gid, COUNT(*) q FROM myindex WHERE MATCH('test') GROUP BY gid ORDER BY q DESC
There are a number of native searchd client API implementations for Sphinx. As of this writing, we officially support our own PHP, Python, and Java implementations. There are also third-party free, open-source API implementations for Perl, Ruby, and C++.
The reference API implementation is in PHP, because (we believe) Sphinx is more widely used with PHP than with any other language. This reference documentation is in turn based on the reference PHP API, and all code samples in this section will be given in PHP.
However, all other APIs provide the same methods and implement the very same network protocol. Therefore the documentation applies to them as well. There might be minor differences in method naming conventions or in the specific data structures used, but the provided functionality does not differ across languages.
Prototype: function GetLastError()
Returns the last error message, as a string, in human-readable format. If there were no errors during the previous API call, an empty string is returned.
You should call it when any other function (such as Query()) fails (typically, the failing function returns false). The returned string will contain the error description.
The error message is not reset by this call; so you can safely call it several times if needed.
Prototype: function GetLastWarning ()
Returns the last warning message, as a string, in human-readable format. If there were no warnings during the previous API call, an empty string is returned.
You should call it to verify whether your request (such as Query()) was completed, but with warnings. For instance, a search query against a distributed index might complete successfully even if several remote agents timed out. In that case, a warning message would be produced.
The warning message is not reset by this call; so you can safely call it several times if needed.
Prototype: function SetServer ( $host, $port )
Sets searchd host name and TCP port. All subsequent requests will use the new host and port settings. The default host and port are 'localhost' and 9312, respectively.
Prototype: function SetRetries ( $count, $delay=0 )
Sets distributed retry count and delay. On temporary failures, searchd will attempt up to $count retries per agent. $delay is the delay between the retries, in milliseconds. Retries are disabled by default. Note that this call will not make the API itself retry on temporary failure; it only tells searchd to do so. Currently, the list of temporary failures includes all kinds of connect() failures and maxed out (too busy) remote agents.
Prototype: function SetConnectTimeout ( $timeout )
Sets the time allowed to spend connecting to the server before giving up. Under some circumstances, the server can be delayed in responding, either due to network delays or a query backlog. In either instance, this allows the client application programmer some degree of control over how their program interacts with searchd when it is not available, and can ensure that the client application does not fail due to exceeding the script execution limits (especially in PHP).
In the event of a failure to connect, an appropriate error code should be returned back to the application in order for application-level error handling to advise the user.
Prototype: function SetArrayResult ( $arrayresult )
PHP specific. Controls matches format in the search results set (whether matches should be returned as an array or a hash).
The $arrayresult argument must be boolean. If $arrayresult is false (the default mode), matches will be returned in PHP hash format with document IDs as keys and the other information (weight, attributes) as values. If $arrayresult is true, matches will be returned as a plain array with complete per-match information, including document ID.
Introduced along with GROUP BY support on MVA attributes. Group-by-MVA result sets may contain duplicate document IDs. Thus they need to be returned as plain arrays, because hashes will only keep one entry per document ID.
Prototype: function IsConnectError ()
Checks whether the last error was a network error on API side, or a remote error reported by searchd. Returns true if the last connection attempt to searchd failed on API side, false otherwise (if the error was remote, or there were no connection attempts at all). Introduced in version 0.9.9-rc1.
Prototype: function SetLimits ( $offset, $limit, $max_matches=1000, $cutoff=0 )
Sets the offset into the server-side result set ($offset) and the number of matches to return to the client starting from that offset ($limit). It can additionally control the maximum server-side result set size for the current query ($max_matches) and the threshold number of matches to stop searching at ($cutoff). All parameters must be non-negative integers.
The first two parameters to SetLimits() are identical in behavior to the MySQL LIMIT clause. They instruct searchd to return at most $limit matches starting from match number $offset. The default offset and limit settings are 0 and 20, that is, to return the first 20 matches.
The max_matches setting controls how many matches searchd will keep in RAM while searching. All matching documents will normally be processed, ranked, filtered, and sorted even if max_matches is set to 1. But only the best N documents are stored in memory at any given moment, for performance and RAM usage reasons, and this setting controls that N. Note that there are two places where the max_matches limit is enforced. The per-query limit is controlled by this API call, but there is also a per-server limit controlled by the max_matches setting in the config file. To prevent RAM usage abuse, the server will not allow setting the per-query limit higher than the per-server limit.
You can't retrieve more than max_matches matches to the client application. The default limit is 1000. Normally, you should not have to go over this limit: one thousand records is enough to present to the end user. And if you're thinking about pulling the results into the application for further sorting or filtering, that would be much more efficient if performed on the Sphinx side.
The $cutoff setting is intended for advanced performance control. It tells searchd to forcibly stop the search query once $cutoff matches have been found and processed.
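The interplay between $offset, $limit, $max_matches, and $cutoff can be sketched in Python (an illustrative model of the server-side trimming logic, not the actual searchd implementation; the function name is hypothetical):

```python
def select_matches(matches, offset=0, limit=20, max_matches=1000, cutoff=0):
    """Model of server-side result trimming; 'matches' is pre-ranked, best first."""
    # $cutoff forcibly stops the search after that many matches are processed
    if cutoff:
        matches = matches[:cutoff]
    # only the best max_matches candidates are ever kept in RAM
    kept = matches[:max_matches]
    # $offset and $limit then behave like MySQL's "LIMIT offset, limit"
    return kept[offset:offset + limit]

# page 2 of a 100-match result set, 20 matches per page
page2 = select_matches(list(range(100)), offset=20, limit=20)
```

In this model, requesting a page past max_matches returns nothing, which mirrors the rule that you cannot retrieve more than max_matches matches.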
Prototype: function SetMaxQueryTime ( $max_query_time )
Sets maximum search query time, in milliseconds. Parameter must be a non-negative integer. Default value is 0, which means "do not limit".
Similar to the $cutoff setting from SetLimits(), but limits elapsed query time instead of processed matches count. Local search queries will be stopped once that much time has elapsed. Note that if you're performing a search which queries several local indexes, this limit applies to each index separately.
Prototype: function SetOverride ( $attrname, $attrtype, $values )
Sets temporary (per-query) per-document attribute value overrides. Only supports scalar attributes. $values must be a hash that maps document IDs to overridden attribute values. Introduced in version 0.9.9-rc1.
The override feature lets you temporarily update attribute values for some documents within a single query, leaving all other queries unaffected. This might be useful for personalized data. For example, assume you're implementing a personalized search function that wants to boost the posts that the user's friends recommend. Such data is not just dynamic, but also personal; so you can't simply put it in the index, because you don't want everyone's searches affected. Overrides, on the other hand, are local to a single query and invisible to everyone else. So you can, say, set up a "friends_weight" value for every document, defaulting to 0, then temporarily override it with 1 for documents 123, 456 and 789 (recommended by exactly the friends of the current user), and use that value when ranking.
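The override lookup rule can be modeled in a few lines of Python (a hypothetical sketch, not the client library; the effective_value helper and the sample data are made up for illustration):

```python
def effective_value(doc_id, indexed_values, overrides, default=0):
    """Per-query overrides win over the attribute value stored in the index."""
    if doc_id in overrides:
        return overrides[doc_id]
    return indexed_values.get(doc_id, default)

# "friends_weight" defaults to 0 in the index; this user's friends
# recommended documents 123, 456 and 789, so only this query sees 1
friends_weight = {123: 1, 456: 1, 789: 1}
indexed = {111: 0, 123: 0, 456: 0}
```

Other concurrent queries simply pass an empty overrides hash and see the indexed defaults.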
Prototype: function SetSelect ( $clause )
Sets the select clause, listing specific attributes to fetch, and expressions to compute and fetch. Clause syntax mimics SQL. Introduced in version 0.9.9-rc1.
SetSelect() is very similar to the part of a typical SQL query between SELECT and FROM. It lets you choose what attributes (columns) to fetch, and also what expressions over the columns to compute and fetch. A certain difference from SQL is that expressions must always be aliased to a correct identifier (consisting of letters and digits) using the 'AS' keyword. SQL also lets you do that, but does not require it. Sphinx enforces aliases so that the computation results can always be returned under a "normal" name in the result set, used in other clauses, etc.
Everything else is basically identical to SQL. Star ('*') is supported. Functions are supported. An arbitrary number of expressions is supported. Computed expressions can be used for sorting, filtering, and grouping, just like regular attributes.
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported when using GROUP BY.
Expression sorting (Section 5.6, “SPH_SORT_EXPR mode”) and geodistance functions (Section 8.4.5, “SetGeoAnchor”) are now internally implemented using this computed expressions mechanism, using magic names '@expr' and '@geodist' respectively.
$cl->SetSelect ( "*, @weight+(user_karma+ln(pageviews))*0.1 AS myweight" );
$cl->SetSelect ( "exp_years, salary_gbp*{$gbp_usd_rate} AS salary_usd, IF(age>40,1,0) AS over40" );
$cl->SetSelect ( "*, AVG(price) AS avgprice" );
Prototype: function SetMatchMode ( $mode )
Sets full-text query matching mode, as described in Section 5.1, “Matching modes”. Parameter must be a constant specifying one of the known modes.
WARNING: (PHP specific) you must not put the matching mode constant name in quotes; that syntax specifies a string and is incorrect:
$cl->SetMatchMode ( "SPH_MATCH_ANY" ); // INCORRECT! will not work as expected
$cl->SetMatchMode ( SPH_MATCH_ANY ); // correct, works OK
Prototype: function SetRankingMode ( $ranker, $rankexpr="" )
Sets ranking mode (aka ranker). Only available in SPH_MATCH_EXTENDED matching mode. Parameter must be a constant specifying one of the known rankers.
By default, in the EXTENDED matching mode Sphinx computes two factors which contribute to the final match weight. The major part is a phrase proximity value between the document text and the query. The minor part is the so-called BM25 statistical function, which varies from 0 to 1 depending on the keyword frequency within the document (more occurrences yield higher weight) and within the whole index (rarer keywords yield higher weight).
However, in some cases you'd want to compute weight differently - or maybe avoid computing it at all for performance reasons because you're sorting the result set by something else anyway. This can be accomplished by setting the appropriate ranking mode. The list of the modes is available in Section 5.4, “Search results ranking”.
The $rankexpr argument was added in version 2.0.2-beta. It lets you specify a ranking formula to use with the expression based ranker, that is, when $ranker is set to SPH_RANK_EXPR. In all other cases, $rankexpr is ignored.
Prototype: function SetSortMode ( $mode, $sortby="" )
Sets matches sorting mode, as described in Section 5.6, “Sorting modes”. Parameter must be a constant specifying one of the known modes.
WARNING: (PHP specific) you must not put the sorting mode constant name in quotes; that syntax specifies a string and is incorrect:
$cl->SetSortMode ( "SPH_SORT_ATTR_DESC" ); // INCORRECT! will not work as expected
$cl->SetSortMode ( SPH_SORT_ATTR_ASC ); // correct, works OK
Prototype: function SetWeights ( $weights )
Binds per-field weights in the order of appearance in the index. DEPRECATED, use SetFieldWeights() instead.
Prototype: function SetFieldWeights ( $weights )
Binds per-field weights by name. Parameter must be a hash (associative array) mapping string field names to integer weights.
Match ranking can be affected by per-field weights. For instance, see Section 5.4, “Search results ranking” for an explanation how phrase proximity ranking is affected. This call lets you specify what non-default weights to assign to different full-text fields.
The weights must be positive 32-bit integers. The final weight will be a 32-bit integer too. Default weight value is 1. Unknown field names will be silently ignored.
There is no enforced limit on the maximum weight value at the moment. However, beware that if you set it too high, you can start hitting 32-bit wraparound issues. For instance, if you set a weight of 10,000,000 and search in extended mode, then the maximum possible weight will be equal to 10 million (your weight) multiplied by 1 thousand (the internal BM25 scaling factor, see Section 5.4, “Search results ranking”) multiplied by 1 or more (the phrase proximity rank). The result is at least 10 billion, which does not fit in 32 bits and will wrap around, producing unexpected results.
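The wraparound hazard is plain unsigned arithmetic, easy to check in Python (the factor of 1000 is the internal BM25 scaling mentioned above; the figures are the illustrative ones from the text):

```python
field_weight = 10_000_000   # the (too high) per-field weight from the example
bm25_scale = 1_000          # internal BM25 scaling factor
proximity = 1               # phrase proximity rank, 1 or more
true_weight = field_weight * bm25_scale * proximity  # 10 billion
# a 32-bit unsigned counter silently wraps the result around
wrapped = true_weight % 2**32
```

The wrapped value bears no useful relation to the true weight, which is why such matches sort unpredictably.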
Prototype: function SetIndexWeights ( $weights )
Sets per-index weights, and enables weighted summing of match weights across different indexes. Parameter must be a hash (associative array) mapping string index names to integer weights. The default is an empty array, which means to disable weighted summing.
When a match with the same document ID is found in several different local indexes, by default Sphinx simply chooses the match from the index specified last in the query. This is to support searching through partially overlapping index partitions.
However, in some cases the indexes are not just partitions, and you might want to sum the weights across the indexes instead of picking one. SetIndexWeights() lets you do that. With summing enabled, the final match weight in the result set will be computed as a sum of the match weight coming from the given index multiplied by the respective per-index weight specified in this call. I.e., if document 123 is found in index A with a weight of 2, and also in index B with a weight of 3, and you called SetIndexWeights ( array ( "A"=>100, "B"=>10 ) ), the final weight returned to the client will be 2*100+3*10 = 230.
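The weighted summing described above is simple arithmetic; here is a small Python sketch of it (a model of the computation only, not the client API; summed_weight is a made-up name):

```python
def summed_weight(per_index_weights, index_weights):
    """per_index_weights maps index name -> this document's match weight there."""
    return sum(w * index_weights[name] for name, w in per_index_weights.items())

# document 123: weight 2 in index A, weight 3 in index B,
# with SetIndexWeights ( array ( "A"=>100, "B"=>10 ) )
final = summed_weight({"A": 2, "B": 3}, {"A": 100, "B": 10})  # 2*100 + 3*10 = 230
```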
Prototype: function SetIDRange ( $min, $max )
Sets an accepted range of document IDs. Parameters must be integers. Defaults are 0 and 0; that combination means to not limit by range.
After this call, only those records that have a document ID between $min and $max (including IDs exactly equal to $min or $max) will be matched.
Prototype: function SetFilter ( $attribute, $values, $exclude=false )
Adds a new integer values set filter.
On this call, an additional new filter is added to the existing list of filters. $attribute must be a string with the attribute name. $values must be a plain array containing integer values. $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.
Only those documents where the $attribute column value stored in the index matches any of the values from the $values array will be matched (or rejected, if $exclude is true).
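The accept/reject rule can be modeled as a set membership test (an illustrative Python sketch; passes_filter is a hypothetical name, not part of any Sphinx API):

```python
def passes_filter(attr_value, values, exclude=False):
    """True if a document with this attribute value survives the filter."""
    matched = attr_value in set(values)
    # exclude=False keeps matching documents; exclude=True rejects them
    return not matched if exclude else matched
```

Multiple SetFilter() calls stack: a document must pass every filter in the list to be matched.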
Prototype: function SetFilterRange ( $attribute, $min, $max, $exclude=false )
Adds a new integer range filter.
On this call, an additional new filter is added to the existing list of filters. $attribute must be a string with the attribute name. $min and $max must be integers that define the acceptable attribute values range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.
Only those documents where the $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).
Prototype: function SetFilterFloatRange ( $attribute, $min, $max, $exclude=false )
Adds a new float range filter.
On this call, an additional new filter is added to the existing list of filters. $attribute must be a string with the attribute name. $min and $max must be floats that define the acceptable attribute values range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.
Only those documents where the $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).
Prototype: function SetGeoAnchor ( $attrlat, $attrlong, $lat, $long )
Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.
$attrlat and $attrlong must be strings that contain the names of latitude and longitude attributes, respectively. $lat and $long are floats that specify the anchor point latitude and longitude, in radians.
Once an anchor point is set, you can use the magic "@geodist" attribute name in your filters and/or sorting expressions. Sphinx will compute the geosphere distance between the given anchor point and the point specified by the latitude and longitude attributes of each full-text match, and attach this value to the resulting match.
The latitude and longitude values, both in SetGeoAnchor and in the index attribute data, are expected to be in radians. The result will be returned in meters, so a geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
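The degrees-to-radians conversion is the most common pitfall here. Here is a quick Python check of the convention (a haversine sketch assuming the common 6371 km mean Earth radius; searchd's exact formula and constant may differ slightly):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; the constant searchd uses may differ

def geodist(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance; inputs in radians, result in meters."""
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 \
        + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# attribute data stored in degrees must be converted to radians first
one_degree = geodist(math.radians(0.0), 0.0, math.radians(1.0), 0.0)
```

One degree of latitude comes out to roughly 111.2 km, a handy sanity check that both the anchor and the attributes really are in radians.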
Prototype: function SetGroupBy ( $attribute, $func, $groupsort="@group desc" )
Sets grouping attribute, function, and groups sorting mode; and enables grouping (as described in Section 5.7, “Grouping (clustering) search results ”).
$attribute is a string that contains the group-by attribute name. $func is a constant that chooses a function applied to the attribute value in order to compute the group-by key. $groupsort is a clause that controls how the groups will be sorted. Its syntax is similar to that described in Section 5.6, “SPH_SORT_EXTENDED mode”.
The grouping feature is very similar in nature to the GROUP BY clause in SQL. The results produced by this function call are going to be the same as those produced by the following pseudo code:
SELECT ... GROUP BY $func($attribute) ORDER BY $groupsort
Note that it's $groupsort
that affects the order of matches
in the final result set. Sorting mode (see Section 8.3.3, “SetSortMode”)
affects the ordering of matches within each group, ie.
which match will be selected as the best one from the group.
So you can, for instance, order the groups by match count
and select the most relevant match within each group at the same time.
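The two-level ordering can be illustrated with a toy Python sketch (not Sphinx code; all names and values below are made up): the sorting mode picks the best match inside each group, while the group-sort clause orders the groups themselves.

```python
from collections import defaultdict

def group_matches(matches, group_key, within_group_key, group_order_key):
    """Toy model of SetGroupBy(): pick the best match per group,
    attach an @count, then order the groups themselves."""
    groups = defaultdict(list)
    for m in matches:
        groups[group_key(m)].append(m)
    result = []
    for members in groups.values():
        best = max(members, key=within_group_key)        # sorting mode: best in group
        result.append({**best, "@count": len(members)})  # attach per-group count
    result.sort(key=group_order_key, reverse=True)       # $groupsort = "@count desc"
    return result

matches = [
    {"id": 1, "category": 7, "weight": 10},
    {"id": 2, "category": 7, "weight": 30},
    {"id": 3, "category": 9, "weight": 20},
]
grouped = group_matches(matches,
                        group_key=lambda m: m["category"],
                        within_group_key=lambda m: m["weight"],
                        group_order_key=lambda m: m["@count"])
```

Here category 7 wins the group ordering because it has more matches, and document 2 represents it because it has the highest weight within the group.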
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported through SetSelect() API call when using GROUP BY.
Starting with version 2.0.1-beta, grouping on string attributes is supported, with respect to current collation.
Prototype: function SetGroupDistinct ( $attribute )
Sets attribute name for per-group distinct values count calculations. Only available for grouping queries.
$attribute
is a string that contains the attribute name.
For each group, all values of this attribute will be stored (as RAM limits
permit), then the amount of distinct values will be calculated and returned
to the client. This feature is similar to COUNT(DISTINCT)
clause in standard SQL; so these Sphinx calls:
$cl->SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );
$cl->SetGroupDistinct ( "vendor" );
can be expressed using the following SQL clauses:
SELECT id, weight, all-attributes, COUNT(DISTINCT vendor) AS @distinct, COUNT(*) AS @count FROM products GROUP BY category ORDER BY @count DESC
In the sample pseudo code shown just above, SetGroupDistinct()
call
corresponds to the COUNT(DISTINCT vendor)
clause only.
GROUP BY
, ORDER BY
, and COUNT(*)
clauses are all equivalents of the SetGroupBy()
settings. Both queries
will return one matching row for each category. In addition to indexed attributes,
matches will also contain total per-category matches count, and the count
of distinct vendor IDs within each category.
Prototype: function Query ( $query, $index="*", $comment="" )
Connects to searchd
server, runs given search query
with current settings, obtains and returns the result set.
$query
is a query string. $index
is an index name (or names) string.
Returns false and sets GetLastError()
message on general error.
Returns search result set on success.
Additionally, the contents of $comment
are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging.
Currently, the comment is limited to 128 characters.
Default value for $index
is "*"
that means
to query all local indexes. Characters allowed in index names include
Latin letters (a-z), numbers (0-9), minus sign (-), and underscore (_);
everything else is considered a separator. Therefore, all of the
following sample calls are valid and will search the same
two indexes:
$cl->Query ( "test query", "main delta" );
$cl->Query ( "test query", "main;delta" );
$cl->Query ( "test query", "main, delta" );
Index specification order matters. If documents with identical IDs are found in two or more indexes, weight and attribute values from the very last matching index will be used for sorting and returning to the client (unless explicitly overridden with SetIndexWeights()). Therefore, in the example above, matches from the "delta" index will always win over matches from "main".
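The "last index wins" rule can be sketched in a few lines of Python (illustrative only; the match hashes and attribute names are made up):

```python
def merge_index_results(per_index_matches):
    """Toy model of the "last index wins" rule for duplicate document IDs.
    per_index_matches holds {doc_id: attrs} hashes, one per index, in the
    order the indexes were specified in the Query() call."""
    merged = {}
    for matches in per_index_matches:
        merged.update(matches)  # a later index overrides earlier ones
    return merged

# "main delta": document 1 exists in both, so delta's attributes win.
main = {1: {"price": 10}}
delta = {1: {"price": 12}, 2: {"price": 5}}
```

With this data, `merge_index_results([main, delta])` yields document 1 with delta's price, plus document 2 which exists only in delta.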
On success, Query()
returns a result set that contains
some of the found matches (as requested by SetLimits())
and additional general per-query statistics. The result set is a hash
(PHP specific; other languages might utilize other structures instead
of hash) with the following keys and values:
"matches" - Hash which maps found document IDs to another small hash containing document weight and attribute values (or an array of similar small hashes if SetArrayResult() was enabled).
"total" - Total amount of matches retrieved on server (ie. placed into the server-side result set) by this query. You can retrieve up to this amount of matches from the server for this query text with the current query settings.
"total_found" - Total amount of matching documents in the index (that were found and processed on server).
"words" - Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small hash with per-keyword statistics ("docs", "hits").
"error" - Query error message reported by searchd
(string, human readable). Empty if there were no errors.
"warning" - Query warning message reported by searchd
(string, human readable). Empty if there were no warnings.
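For illustration, here is the shape of a successful result set written out as a Python dict (the Python API returns an equivalent structure to the PHP hash; all the values below are made up):

```python
# Illustrative Query() result set; document ID, weight, and attribute
# values are invented for the example.
result = {
    "matches": {123: {"weight": 2, "attrs": {"group_id": 456}}},
    "total": 1,          # matches retrieved into the server-side result set
    "total_found": 1,    # total matching documents in the index
    "words": {"test": {"docs": 1, "hits": 2}},
    "error": "",
    "warning": "",
}
```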
It should be noted that Query()
carries out the same actions as
AddQuery()
and RunQueries()
without the intermediate steps;
it is analogous to a single AddQuery()
call, followed by a corresponding
RunQueries()
, then returning the first array element of matches
(from the first, and only, query).
Prototype: function AddQuery ( $query, $index="*", $comment="" )
Adds additional query with current settings to multi-query batch.
$query
is a query string. $index
is an index name (or names) string.
Additionally if provided, the contents of $comment
are sent to the query log,
marked in square brackets, just before the search terms, which can be very useful for debugging.
Currently, this is limited to 128 characters.
Returns an index into the results array returned from RunQueries().
Batch queries (or multi-queries) enable searchd
to perform internal
optimizations if possible. They also reduce network connection overheads and search process
creation overheads in all cases. They do not result in any additional overheads compared
to simple queries. Thus, if you run several different queries from your web page,
you should always consider using multi-queries.
For instance, running the same full-text query but with different
sorting or group-by settings will enable searchd
to perform expensive full-text search and ranking operation only once,
but compute multiple group-by results from its output.
This can be a big saver when you need to display not just plain search results but also some per-category counts, such as the amount of products grouped by vendor. Without multi-query, you would have to run several queries which perform essentially the same search and retrieve the same matches, but create result sets differently. With multi-query, you simply pass all these queries in a single batch and Sphinx optimizes the redundant full-text search internally.
AddQuery()
internally saves full current settings state
along with the query, and you can safely change them afterwards for subsequent
AddQuery()
calls. Already added queries will not be affected;
there's actually no way to change them at all. Here's an example:
$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "hello world", "documents" );
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl->AddQuery ( "ipod", "products" );
$cl->AddQuery ( "harry potter", "books" );
$results = $cl->RunQueries ();
With the code above, 1st query will search for "hello world" in "documents" index
and sort results by relevance, 2nd query will search for "ipod" in "products"
index and sort results by price, and 3rd query will search for "harry potter"
in "books" index while still sorting by price. Note that 2nd SetSortMode()
call
does not affect the first query (because it's already added) but affects both other
subsequent queries.
Additionally, any filters set up before an AddQuery()
will fall through to subsequent
queries. So, if SetFilter()
is called before the first query, the same filter
will be in place for the second (and subsequent) queries batched through AddQuery()
unless you call ResetFilters()
first. Alternatively, you can add additional filters
as well.
This would also be true for grouping and sorting options; the current sorting, filtering, and grouping settings are not affected by this call, so subsequent queries will reuse the current query settings.
AddQuery()
returns an index into an array of results
that will be returned from RunQueries()
call. It is simply
a sequentially increasing 0-based integer, ie. the first call will return 0,
the second will return 1, and so on. It is just a small helper so you won't have
to track the indexes manually if you need them.
Prototype: function RunQueries ()
Connects to searchd, runs a batch of all queries added using AddQuery()
,
obtains and returns the result sets. Returns false and sets GetLastError()
message on general error (such as network I/O failure). Returns a plain array
of result sets on success.
Each result set in the returned array is exactly the same as
the result set returned from Query()
.
Note that the batch query request itself almost always succeeds - unless there's a network error, a blocking index rotation in progress, or another general failure which prevents the whole request from being processed.
However individual queries within the batch might very well fail.
In this case their respective result sets will contain non-empty "error"
message,
but no matches or query statistics. In the extreme case all queries within the batch
could fail. There will still be no general error reported, because the API was able to
successfully connect to searchd
, submit the batch, and receive
the results - but every result set will have a specific error message.
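A caller should therefore check each result set individually. Here is a minimal Python sketch (illustrative only; the sample result array and its error text are made up):

```python
def collect_batch_errors(results):
    """Map positions of failed queries in a RunQueries() result array to
    their "error" messages; the batch call itself still succeeded."""
    return {i: r["error"] for i, r in enumerate(results) if r.get("error")}

# Toy result array: the second query failed, the first did not.
sample = [
    {"error": "", "warning": "", "matches": {}},
    {"error": "no such filter attribute", "matches": {}},
]
```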
Prototype: function ResetFilters ()
Clears all currently set filters.
This call is normally only required when using multi-queries. You might want
to set different filters for different queries in the batch. To do that,
you should call ResetFilters()
and add new filters using
the respective calls.
Prototype: function ResetGroupBy ()
Clears all currently set group-by settings, and disables group-by.
This call is normally only required when using multi-queries.
You can change individual group-by settings using SetGroupBy()
and SetGroupDistinct()
calls, but you can not disable
group-by using those calls. ResetGroupBy()
fully resets previous group-by settings and disables group-by mode
in the current state, so that subsequent AddQuery()
calls can perform non-grouping searches.
Prototype: function BuildExcerpts ( $docs, $index, $words, $opts=array() )
Excerpts (snippets) builder function. Connects to searchd
,
asks it to generate excerpts (snippets) from given documents, and returns the results.
$docs
is a plain array of strings that carry the documents' contents.
$index
is an index name string. Different settings (such as charset,
morphology, wordforms) from given index will be used.
$words
is a string that contains the keywords to highlight. They will
be processed with respect to index settings. For instance, if English stemming
is enabled in the index, "shoes" will be highlighted even if keyword is "shoe".
Starting with version 0.9.9-rc1, keywords can contain wildcards, that work similarly to
star-syntax available in queries.
$opts
is a hash which contains additional optional highlighting parameters:
before_match - A string to insert before a keyword match. Starting with version 1.10-beta, a %PASSAGE_ID% macro can be used in this string. The macro is replaced with an incrementing passage number within a current snippet. Numbering starts at 1 by default but can be overridden with the "start_passage_id" option. In a multi-document call, %PASSAGE_ID% would restart at every given document. Default is "<b>".
after_match - A string to insert after a keyword match. Starting with version 1.10-beta, a %PASSAGE_ID% macro can be used in this string. Default is "</b>".
chunk_separator - A string to insert between snippet chunks (passages). Default is " ... ".
limit - Maximum snippet size, in symbols (codepoints). Integer, default is 256.
around - How many words to pick around each matching keyword block. Integer, default is 5.
exact_phrase - Whether to highlight exact query phrase matches only instead of individual keywords. Boolean, default is false.
use_boundaries - Whether to additionally break passages by phrase boundary characters, as configured in index settings with the phrase_boundary directive. Boolean, default is false.
weight_order - Whether to sort the extracted passages in order of relevance (decreasing weight), or in order of appearance in the document (increasing position). Boolean, default is false.
query_mode - Added in version 1.10-beta. Whether to handle $words as a query in extended syntax, or as a bag of words (default behavior). For instance, in query mode ("one two" | "three four") will only highlight and include those occurrences of "one two" or "three four" when the two words from each pair are adjacent to each other. In default mode, any single occurrence of "one", "two", "three", or "four" would be highlighted. Boolean, default is false.
force_all_words - Added in version 1.10-beta. Ignores the snippet length limit until it includes all the keywords. Boolean, default is false.
limit_passages - Added in version 1.10-beta. Limits the maximum number of passages that can be included into the snippet. Integer, default is 0 (no limit).
limit_words - Added in version 1.10-beta. Limits the maximum number of words that can be included into the snippet. Note the limit applies to any words, and not just the matched keywords to highlight. For example, if we are highlighting "Mary" and a passage "Mary had a little lamb" is selected, then it contributes 5 words to this limit, not just 1. Integer, default is 0 (no limit).
start_passage_id - Added in version 1.10-beta. Specifies the starting value of the %PASSAGE_ID% macro (that gets detected and expanded in the before_match and after_match strings). Integer, default is 1.
load_files - Added in version 1.10-beta. Whether to handle $docs as data to extract snippets from (default behavior), or to treat it as file names, and load data from the specified files on the server side. Starting with version 2.0.1-beta, up to dist_threads worker threads per request will be created to parallelize the work when this flag is enabled. Starting with version 2.0.2-beta, building of the snippets can be parallelized between remote agents: set the 'dist_threads' parameter in the config to a value greater than 1, and then invoke the snippets generation over a distributed index which contains only one(!) local agent and several remote ones. Starting with version 2.1.1-beta, the snippets_file_prefix option also comes into play: the final file name is calculated by concatenating the prefix with the given name. In other words, when snippets_file_prefix is '/var/data' and the file name is 'text.txt', Sphinx will try to generate the snippets from the file '/var/datatext.txt', which is exactly '/var/data' + 'text.txt'. Boolean, default is false.
load_files_scattered - Added in version 2.0.2-beta. It works only with distributed snippets generation with remote agents. The source files for snippets can be distributed among different agents, and the main daemon will merge together all non-erroneous results. So, if one agent of the distributed index has 'file1.txt', another has 'file2.txt', and you call for the snippets with both these files, Sphinx will merge the results from the agents, so you will get the snippets from both 'file1.txt' and 'file2.txt'. Boolean, default is false.
If "load_files" is also set, the request will return an error if any of the files is not available anywhere. Otherwise (if "load_files" is not set), it will just return empty strings for all absent files. The master instance resets this flag when it distributes the snippets among the agents, so for the agents the absence of a file is not a critical error, while for the master it might be. If you want to be sure that all the snippets are actually created, set both "load_files_scattered" and "load_files". If the absence of some snippets caused by some agents is not critical for you, set just "load_files_scattered", leaving "load_files" unset.
html_strip_mode - Added in version 1.10-beta. HTML stripping mode setting. Defaults to "index", which means that index settings will be used. The other values are "none" and "strip", that forcibly skip or apply stripping regardless of index settings; and "retain", that retains HTML markup and protects it from highlighting. The "retain" mode can only be used when highlighting full documents and thus requires that no snippet size limits are set. String, allowed values are "none", "strip", "index", and "retain".
allow_empty - Added in version 1.10-beta. Allows an empty string to be returned as the highlighting result when a snippet could not be generated (no keywords match, or no passages fit the limit). By default, the beginning of the original text would be returned instead of an empty string. Boolean, default is false.
passage_boundary - Added in version 2.0.1-beta. Ensures that passages do not cross a sentence, paragraph, or zone boundary (when used with an index that has the respective indexing settings enabled). String, allowed values are "sentence", "paragraph", and "zone".
emit_zones - Added in version 2.0.1-beta. Emits an HTML tag with an enclosing zone name before each passage. Boolean, default is false.
The snippets extraction algorithm currently favors better passages (with closer phrase matches), and then passages with keywords not yet in the snippet. Generally, it will try to highlight the best match with the query, and it will also try to highlight all the query keywords, as made possible by the limits. In case the document does not match the query, the beginning of the document trimmed down according to the limits will be returned by default. Starting with 1.10-beta, you can also return an empty snippet instead in that case by setting the "allow_empty" option to true.
Returns false on failure. Returns a plain array of strings with excerpts (snippets) on success.
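As a rough illustration of the highlighting step only, here is a naive Python sketch; the real BuildExcerpts() runs on the server and additionally stems keywords, extracts passages, and applies the size limits described above:

```python
import re

def highlight(text, words, before_match="<b>", after_match="</b>"):
    """Naive keyword highlighter: wraps each whole-word keyword occurrence.
    A sketch of the highlighting step only, not of BuildExcerpts() itself."""
    pattern = "|".join(re.escape(w) for w in words.split())
    return re.sub("(" + pattern + ")",
                  lambda m: before_match + m.group(1) + after_match,
                  text, flags=re.IGNORECASE)
```

For example, highlighting "mary lamb" in "Mary had a little lamb" wraps both keywords while preserving their original case.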
Prototype: function UpdateAttributes ( $index, $attrs, $values, $mva=false, $ignorenonexistent=false )
Instantly updates given attribute values in given documents. Returns number of actually updated documents (0 or more) on success, or -1 on failure.
$index
is a name of the index (or indexes) to be updated.
$attrs
is a plain array with string attribute names, listing attributes that are updated.
$values
is a hash where key is document ID, and value is a plain array of new attribute values.
The optional boolean parameter $mva
indicates that this is an update of MVA
attributes. In this case the $values must be a hash where the key is a document ID
and the value is a plain array of plain arrays of integers (new MVA attribute values).
The optional boolean parameter $ignorenonexistent
(added in version 2.1.1-beta) indicates that the
update will silently ignore any warnings about trying to update a
column which does not exist in the current index schema.
$index
can be either a single index name or a list, like in Query()
.
Unlike Query()
, wildcard is not allowed and all the indexes
to update must be specified explicitly. The list of indexes can include
distributed index names. Updates on distributed indexes will be pushed
to all agents.
The updates only work with docinfo=extern
storage strategy.
They are very fast because they're working fully in RAM, but they can also
be made persistent: updates are saved on disk on clean searchd
shutdown initiated by SIGTERM signal. With additional restrictions, updates
are also possible on MVA attributes; refer to mva_updates_pool
directive for details.
Usage example:
$cl->UpdateAttributes ( "test1", array("group_id"), array(1=>array(456)) );
$cl->UpdateAttributes ( "products", array ( "price", "amount_in_stock" ),
    array ( 1001=>array(123,5), 1002=>array(37,11), 1003=>array(25,129) ) );
The first sample statement will update document 1 in index "test1", setting "group_id" to 456. The second one will update documents 1001, 1002 and 1003 in index "products". For document 1001, the new price will be set to 123 and the new amount in stock to 5; for document 1002, the new price will be 37 and the new amount will be 11; etc.
Prototype: function BuildKeywords ( $query, $index, $hits )
Extracts keywords from query using tokenizer settings for given index, optionally with per-keyword occurrence statistics. Returns an array of hashes with per-keyword information.
$query
is a query to extract keywords from.
$index
is a name of the index to get tokenizing settings and keyword occurrence statistics from.
$hits
is a boolean flag that indicates whether keyword occurrence statistics are required.
Usage example:
$keywords = $cl->BuildKeywords ( "this.is.my query", "test1", false );
Prototype: function EscapeString ( $string )
Escapes characters that are treated as special operators by the query language parser. Returns an escaped string.
$string
is a string to escape.
This function might seem redundant because it's trivial to implement in any calling application. However, as the set of special characters might change over time, it makes sense to have an API call that is guaranteed to escape all such characters at all times.
Usage example:
$escaped = $cl->EscapeString ( "escaping-sample@query/string" );
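The same escaping is trivial to sketch in Python (the character set below matches the 2.x PHP API and may change between versions, which is exactly why the API call exists):

```python
def escape_string(s):
    """Pure-Python mirror of EscapeString(): backslash-escapes the
    characters the query parser treats as operators."""
    specials = '\\()|-!@~"&/^$='
    return "".join("\\" + ch if ch in specials else ch for ch in s)
```

For example, `escape_string("escaping-sample@query/string")` escapes the dash, at sign, and slash.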
Prototype: function Status ()
Queries searchd status, and returns an array of status variable name and value pairs.
Usage example:
$status = $cl->Status ();
foreach ( $status as $row )
    print join ( ": ", $row ) . "\n";
Prototype: function FlushAttributes ()
Forces searchd
to flush pending attribute updates
to disk, and blocks until completion. Returns a non-negative internal
"flush tag" on success. Returns -1 and sets an error message on error.
Introduced in version 1.10-beta.
Attribute values updated using UpdateAttributes()
API call are only kept in RAM until a so-called flush (which writes
the current, possibly updated attribute values back to disk). FlushAttributes()
call lets you enforce a flush. The call will block until searchd
finishes writing the data to disk, which might take seconds or even minutes
depending on the total data size (.spa file size). All the currently updated
indexes will be flushed.
Flush tag should be treated as an ever growing magic number that does not mean anything. It's guaranteed to be non-negative. It is guaranteed to grow over time, though not necessarily in a sequential fashion; for instance, two calls that return 10 and then 1000 respectively are a valid situation. If two calls to FlushAttrs() return the same tag, it means that there were no actual attribute updates in between them, and therefore current flushed state remained the same (for all indexes).
Usage example:
$status = $cl->FlushAttributes ();
if ( $status<0 )
    print "ERROR: " . $cl->GetLastError();
Persistent connections allow using a single network connection to run multiple commands that would otherwise require reconnects.
Table of Contents
SphinxSE is a MySQL storage engine which can be compiled into MySQL server 5.x using its pluggable architecture. It is not available for the MySQL 4.x series. It also requires MySQL 5.0.22 or higher in the 5.0.x series, or MySQL 5.1.12 or higher in the 5.1.x series.
Despite the name, SphinxSE does not
actually store any data itself. It is a built-in client
which allows the MySQL server to talk to searchd
,
run search queries, and obtain search results. All indexing and
searching happen outside MySQL.
Obvious SphinxSE applications include:
easier porting of MySQL FTS applications to Sphinx;
allowing Sphinx use with programming languages for which native APIs are not available yet;
optimizations when additional Sphinx result set processing on MySQL side is required (eg. JOINs with original document tables, additional MySQL-side filtering, etc).
You will need to obtain a copy of MySQL sources, prepare those, and then recompile MySQL binary. MySQL sources (mysql-5.x.yy.tar.gz) could be obtained from dev.mysql.com Web site.
For some MySQL versions, there are delta tarballs with already prepared source versions available from the Sphinx Web site. After unzipping one of those over the original sources, MySQL will be ready to be configured and built with Sphinx support.
If such a tarball is not available, or does not work for you for any reason, you will have to prepare the sources manually. You will need the GNU Autotools framework (autoconf, automake and libtool) installed to do that.
copy the sphinx.5.0.yy.diff
patch file
into the MySQL sources directory and run
patch -p1 < sphinx.5.0.yy.diff
If there's no .diff file exactly for the specific version you need to build, try applying a .diff with the closest version numbers. It is important that the patch apply with no rejects.
in MySQL sources directory, run
sh BUILD/autorun.sh
in the MySQL sources directory, create a sql/sphinx
directory and copy all files from the mysqlse
directory
in the Sphinx sources there. Example:
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.0.24/sql/sphinx
configure MySQL and enable Sphinx engine:
./configure --with-sphinx-storage-engine
build and install MySQL:
make
make install
in the MySQL sources directory, create a storage/sphinx
directory and copy all files from the mysqlse
directory
in the Sphinx sources there. Example:
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.1.14/storage/sphinx
in MySQL sources directory, run
sh BUILD/autorun.sh
configure MySQL and enable Sphinx engine:
./configure --with-plugins=sphinx
build and install MySQL:
make make install
To check whether SphinxSE has been successfully compiled
into MySQL, launch the newly built server, run the mysql client and
issue SHOW ENGINES
query. You should see a list
of all available engines. Sphinx should be present and "Support"
column should contain "YES":
mysql> show engines;
+------------+----------+-------------------------------------------------------------+
| Engine     | Support  | Comment                                                     |
+------------+----------+-------------------------------------------------------------+
| MyISAM     | DEFAULT  | Default engine as of MySQL 3.23 with great performance      |
  ...
| SPHINX     | YES      | Sphinx storage engine                                       |
  ...
+------------+----------+-------------------------------------------------------------+
13 rows in set (0.00 sec)
To search via SphinxSE, you will need to create a special ENGINE=SPHINX "search table", and then SELECT from it with a full-text query put into the WHERE clause for the query column.
Let's begin with an example create statement and search query:
CREATE TABLE t1
(
    id INTEGER UNSIGNED NOT NULL,
    weight INTEGER NOT NULL,
    query VARCHAR(3072) NOT NULL,
    group_id INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/test";

SELECT * FROM t1 WHERE query='test it;mode=any';
The first 3 columns of the search table must be of type
INTEGER UNSIGNED
or BIGINT
for the 1st column (document id),
INTEGER
or BIGINT
for the 2nd column (match weight), and
VARCHAR
or TEXT
for the 3rd column (your query), respectively.
This mapping is fixed; you can not omit any of these three required columns,
move them around, or change their types. Also, the query column must be indexed;
all the others must be kept unindexed. Column names are ignored so you
can use arbitrary ones.
Additional columns must be either INTEGER
, TIMESTAMP
,
BIGINT
, VARCHAR
, or FLOAT
.
They will be bound to attributes provided in Sphinx result set by name, so their
names must match attribute names specified in sphinx.conf
.
If there's no such attribute name in Sphinx search results, column will have
NULL
values.
Special "virtual" attribute names can also be bound to SphinxSE columns.
_sph_
needs to be used instead of @
for that.
For instance, to obtain the values of @groupby
, @count
,
or @distinct
virtual attributes, use _sph_groupby
,
_sph_count
or _sph_distinct
column names, respectively.
CONNECTION
string parameter can be used to specify default
searchd host, port and indexes for queries issued using this table.
If no connection string is specified in CREATE TABLE
,
index name "*" (ie. search all indexes) and localhost:9312 are assumed.
Connection string syntax is as follows:
CONNECTION="sphinx://HOST:PORT/INDEXNAME"
You can change the default connection string later:
ALTER TABLE t1 CONNECTION="sphinx://NEWHOST:NEWPORT/NEWINDEXNAME";
You can also override all these parameters per-query.
As seen in the example, both the query text and the search options should be put into the WHERE clause on the search query column (ie. 3rd column); the options are separated by semicolons, and names are separated from values by an equals sign. Any number of options can be specified. Available options are:
query - query text;
mode - matching mode. Must be one of "all", "any", "phrase", "boolean", or "extended". Default is "all";
sort - match sorting mode. Must be one of "relevance", "attr_desc", "attr_asc", "time_segments", or "extended". In all modes besides "relevance" attribute name (or sorting clause for "extended") is also required after a colon:
... WHERE query='test;sort=attr_asc:group_id';
... WHERE query='test;sort=extended:@weight desc, group_id asc';
offset - offset into result set, default is 0;
limit - amount of matches to retrieve from result set, default is 20;
index - names of the indexes to search:
... WHERE query='test;index=test1;';
... WHERE query='test;index=test1,test2,test3;';
minid, maxid - min and max document ID to match;
weights - comma-separated list of weights to be assigned to Sphinx full-text fields:
... WHERE query='test;weights=1,2,3;';
filter, !filter - comma-separated attribute name and a set of values to match:
# only include groups 1, 5 and 19
... WHERE query='test;filter=group_id,1,5,19;';
# exclude groups 3 and 11
... WHERE query='test;!filter=group_id,3,11;';
range, !range - comma-separated (integer or bigint) Sphinx attribute name, and min and max values to match:
# include groups from 3 to 7, inclusive
... WHERE query='test;range=group_id,3,7;';
# exclude groups from 5 to 25
... WHERE query='test;!range=group_id,5,25;';
floatrange, !floatrange - comma-separated (floating point) Sphinx attribute name, and min and max values to match:
# filter by a float size
... WHERE query='test;floatrange=size,2,3;';
# pick all results within 1000 meters from geoanchor
... WHERE query='test;floatrange=@geodist,0,1000;';
maxmatches - per-query max matches value, as in max_matches parameter to SetLimits() API call:
... WHERE query='test;maxmatches=2000;';
cutoff - maximum allowed matches, as in cutoff parameter to SetLimits() API call:
... WHERE query='test;cutoff=10000;';
maxquerytime - maximum allowed query time (in milliseconds), as in SetMaxQueryTime() API call:
... WHERE query='test;maxquerytime=1000;';
groupby - group-by function and attribute, corresponding to SetGroupBy() API call:
... WHERE query='test;groupby=day:published_ts;';
... WHERE query='test;groupby=attr:group_id;';
groupsort - group-by sorting clause:
... WHERE query='test;groupsort=@count desc;';
distinct - an attribute to compute COUNT(DISTINCT) for when doing group-by, as in SetGroupDistinct() API call:
... WHERE query='test;groupby=attr:country_id;distinct=site_id';
indexweights - comma-separated list of index names and weights to use when searching through several indexes:
... WHERE query='test;indexweights=idx_exact,2,idx_stemmed,1;';
fieldweights - comma-separated list of per-field weights that can be used by the ranker:
... WHERE query='test;fieldweights=title,10,abstract,3,content,1;';
comment - a string to mark this query in query log (mapping to $comment parameter in Query() API call):
... WHERE query='test;comment=marker001;';
select - a string with expressions to compute (mapping to SetSelect() API call):
... WHERE query='test;select=2*a+3*b as myexpr;';
host, port - remote searchd
host name
and TCP port, respectively:
... WHERE query='test;host=sphinx-test.loc;port=7312;';
ranker - a ranking function to use with "extended" matching mode, as in SetRankingMode() API call (the only mode that supports full query syntax). Known values are "proximity_bm25", "bm25", "none", "wordcount", "proximity", "matchany", "fieldmask", "sph04" (starting with 1.10-beta), "expr:EXPRESSION" (starting with 2.0.4-release) syntax to support expression-based ranker (where EXPRESSION should be replaced with your specific ranking formula), and "export:EXPRESSION" (starting with 2.1.1-beta):
... WHERE query='test;mode=extended;ranker=bm25;';
... WHERE query='test;mode=extended;ranker=expr:sum(lcs);';
The "export" ranker works exactly like ranker=expr, but it stores the per-document factor values, while ranker=expr discards them after computing the final WEIGHT() value. Note that ranker=export is meant to be used only rarely, to train a ML (machine learning) function or to define your own ranking function by hand, and never in actual production. When using this ranker, you'll probably want to examine the output of the RANKFACTORS() function (added in version 2.1.1-beta) that produces a string with all the field level factors for each document.
SELECT *, WEIGHT(), RANKFACTORS() FROM myindex WHERE MATCH('dog') OPTION ranker=export('100*bm25')
would produce something like
*************************** 1. row ***************************
           id: 555617
    published: 1110067331
   channel_id: 1059819
        title: 7
      content: 428
     weight(): 69900
rankfactors(): bm25=699, bm25a=0.666478, field_mask=2, doc_word_count=1,
               field1=(lcs=1, hit_count=4, word_count=1, tf_idf=1.038127,
               min_idf=0.259532, max_idf=0.259532, sum_idf=0.259532,
               min_hit_pos=120, min_best_span_pos=120, exact_hit=0,
               max_window_hits=1), word1=(tf=4, idf=0.259532)
*************************** 2. row ***************************
           id: 555313
    published: 1108438365
   channel_id: 1058561
        title: 8
      content: 249
     weight(): 68500
rankfactors(): bm25=685, bm25a=0.675213, field_mask=3, doc_word_count=1,
               field0=(lcs=1, hit_count=1, word_count=1, tf_idf=0.259532,
               min_idf=0.259532, max_idf=0.259532, sum_idf=0.259532,
               min_hit_pos=8, min_best_span_pos=8, exact_hit=0,
               max_window_hits=1), field1=(lcs=1, hit_count=2, word_count=1,
               tf_idf=0.519063, min_idf=0.259532, max_idf=0.259532,
               sum_idf=0.259532, min_hit_pos=36, min_best_span_pos=36,
               exact_hit=0, max_window_hits=1), word1=(tf=3, idf=0.259532)
geoanchor - geodistance anchor, as in the SetGeoAnchor() API call. Takes 4 parameters, which are the latitude and longitude attribute names, and the anchor point coordinates, respectively:
... WHERE query='test;geoanchor=latattr,lonattr,0.123,0.456';
One very important note: it is much more efficient to let Sphinx perform sorting, filtering and slicing of the result set than to raise the max matches count and use WHERE, ORDER BY and LIMIT clauses on the MySQL side. This is for two reasons. First, Sphinx does a number of optimizations and performs better than MySQL on these tasks. Second, less data would need to be packed by searchd, transferred, and unpacked by SphinxSE.
Starting with version 0.9.9-rc1, additional query info besides the result set can be retrieved with the SHOW ENGINE SPHINX STATUS statement:
mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+-------------------------------------------------+
| Type   | Name  | Status                                          |
+--------+-------+-------------------------------------------------+
| SPHINX | stats | total: 25, total found: 25, time: 126, words: 2 |
| SPHINX | words | sphinx:591:1256 soft:11076:15945                |
+--------+-------+-------------------------------------------------+
2 rows in set (0.00 sec)
This information can also be accessed through status variables. Note that this method does not require super-user privileges.
mysql> SHOW STATUS LIKE 'sphinx_%';
+--------------------+----------------------------------+
| Variable_name      | Value                            |
+--------------------+----------------------------------+
| sphinx_total       | 25                               |
| sphinx_total_found | 25                               |
| sphinx_time        | 126                              |
| sphinx_word_count  | 2                                |
| sphinx_words       | sphinx:591:1256 soft:11076:15945 |
+--------------------+----------------------------------+
5 rows in set (0.00 sec)
You can perform JOINs between the SphinxSE search table and tables using other engines. Here's an example with "documents" from example.sql:
mysql> SELECT content, date_added FROM test.documents docs
    -> JOIN t1 ON (docs.id=t1.id)
    -> WHERE query="one document;mode=any";
+-------------------------------------+---------------------+
| content                             | docdate             |
+-------------------------------------+---------------------+
| this is my test document number two | 2006-06-17 14:04:28 |
| this is my test document number one | 2006-06-17 14:04:28 |
+-------------------------------------+---------------------+
2 rows in set (0.00 sec)

mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+---------------------------------------------+
| Type   | Name  | Status                                      |
+--------+-------+---------------------------------------------+
| SPHINX | stats | total: 2, total found: 2, time: 0, words: 2 |
| SPHINX | words | one:1:2 document:2:2                        |
+--------+-------+---------------------------------------------+
2 rows in set (0.00 sec)
Starting with version 0.9.9-rc2, SphinxSE also includes a UDF function that lets you create snippets through MySQL. The functionality is similar to the BuildExcerpts() API call, but is accessible through MySQL+SphinxSE.
The binary that provides the UDF is named sphinx.so and should be automatically built and installed to the proper location along with SphinxSE itself. If it does not get installed automatically for some reason, look for sphinx.so in the build directory and copy it to the plugins directory of your MySQL instance.
After that, register the UDF using the following statement:
CREATE FUNCTION sphinx_snippets RETURNS STRING SONAME 'sphinx.so';
The function name must be sphinx_snippets; you can not use an arbitrary name. Function arguments are as follows:
Prototype: function sphinx_snippets ( document, index, words, [options] );
Document and words arguments can be either strings or table columns. Options must be specified like this: 'value' AS option_name. For a list of supported options, refer to the BuildExcerpts() API call. The only UDF-specific additional option is named 'sphinx', and lets you specify the searchd location (host and port).
Usage examples:
SELECT sphinx_snippets('hello world doc', 'main', 'world',
    'sphinx://192.168.1.1/' AS sphinx, true AS exact_phrase,
    '[b]' AS before_match, '[/b]' AS after_match)
FROM documents;

SELECT title, sphinx_snippets(text, 'index', 'mysql php') AS text
FROM sphinx, documents
WHERE query='mysql php' AND sphinx.id=documents.id;
Unfortunately, Sphinx is not yet 100% bug-free (even though I'm working hard towards that), so you might occasionally run into some issues.
Reporting as much as possible about each bug is very important - because to fix it, I need to be able either to reproduce and debug the bug, or to deduce what's causing it from the information that you provide. So here are some instructions on how to do that.
If Sphinx fails to build for some reason, please do the following:
check that headers and libraries for your DBMS are properly installed (for instance, check that the mysql-devel package is present);
report Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) configuration info, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
mysql_config
gcc --version
uname -a
report the error message which is produced by configure or gcc (include only the error message itself, not the whole build log).
If Sphinx builds and runs, but there are any problems running it, please do the following:
describe the bug (ie. both the expected behavior and actual behavior) and all the steps necessary to reproduce it;
include Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) version, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
mysql --version
gcc --version
uname -a
build, install and run debug versions of all Sphinx programs (this is to enable a lot of additional internal checks, so-called assertions):
make distclean
./configure --with-debug
make install
killall -TERM searchd
reindex to check if any assertions are triggered (in this case, it's likely that the index is corrupted and causing problems);
if the bug does not reproduce with debug versions, revert to non-debug and mention it in your report;
if the bug could be easily reproduced with a small (1-100 record) part of your database, please provide a gzipped dump of that part;
if the problem is related to searchd, include relevant entries from searchd.log and query.log in your bug report;
if the problem is related to searchd, try running it in console mode and check if it dies with an assertion:
./searchd --console
if any program dies with an assertion, provide the assertion message.
If any program dies with an assertion, crashes without an assertion or hangs up, you would additionally need to generate a core dump and examine it.
enable core dumps. On most Linux systems, this is done using ulimit:
ulimit -c 32768
run the program and try to reproduce the bug;
if the program crashes (either with or without an assertion), find the core file in current directory (it should typically print out "Segmentation fault (core dumped)" message);
if the program hangs, use kill -SEGV from another console to force it to exit and dump core:
kill -SEGV HANGED-PROCESS-ID
use gdb to examine the core file and obtain a backtrace:
gdb ./CRASHED-PROGRAM-FILE-NAME CORE-DUMP-FILE-NAME
(gdb) bt
(gdb) quit
Note that HANGED-PROCESS-ID, CRASHED-PROGRAM-FILE-NAME and CORE-DUMP-FILE-NAME must all be replaced with specific numbers and file names. For example, a debugging session for a hung searchd would look like:
# kill -SEGV 12345
# ls *core*
core.12345
# gdb ./searchd core.12345
(gdb) bt
...
(gdb) quit
Note that ulimit is not server-wide and only affects the current shell session. This means that you will not have to restore any server-wide limits - but if you relogin, you will have to set ulimit again.
Core dumps should be placed in the current working directory (and Sphinx programs do not change it), so this is where you would look for them.
Please do not immediately remove the core file because there could be additional helpful information which could be retrieved from it. You do not need to send me this file (as the debug info there is closely tied to your system) but I might need to ask you a few additional questions about it.
Table of Contents
indexer program configuration options
searchd program configuration options
Data source type. Mandatory, no default value. Known types are mysql, pgsql, mssql, xmlpipe, xmlpipe2, and odbc.
All other per-source options depend on the source type selected by this option. Names of the options used for SQL sources (ie. MySQL, PostgreSQL, MS SQL) start with "sql_"; names of the ones used for xmlpipe and xmlpipe2 start with "xmlpipe_". All source types except xmlpipe are conditional; they might or might not be supported depending on your build settings, installed client libraries, etc. The mssql type is currently only available on Windows. The odbc type is available both on Windows natively and on Linux through the UnixODBC library.
type = mysql
SQL server host to connect to.
Mandatory, no default value.
Applies to SQL source types (mysql, pgsql, mssql) only.
In the simplest case when Sphinx resides on the same host with your MySQL or PostgreSQL installation, you would simply specify "localhost". Note that MySQL client library chooses whether to connect over TCP/IP or over UNIX socket based on the host name. Specifically "localhost" will force it to use UNIX socket (this is the default and generally recommended mode) and "127.0.0.1" will force TCP/IP usage. Refer to MySQL manual for more details.
sql_host = localhost
SQL server IP port to connect to.
Optional, default is 3306 for mysql source type and 5432 for pgsql type. Applies to SQL source types (mysql, pgsql, mssql) only.
Note that it depends on sql_host setting whether this value will actually be used.
sql_port = 3306
SQL user to use when connecting to sql_host.
Mandatory, no default value.
Applies to SQL source types (mysql, pgsql, mssql) only.
sql_user = test
SQL user password to use when connecting to sql_host.
Mandatory, no default value.
Applies to SQL source types (mysql, pgsql, mssql) only.
sql_pass = mysecretpassword
SQL database (in MySQL terms) to use after the connection and perform further queries within.
Mandatory, no default value.
Applies to SQL source types (mysql, pgsql, mssql) only.
sql_db = test
UNIX socket name to connect to for local SQL servers.
Optional, default value is empty (use client library default settings).
Applies to SQL source types (mysql, pgsql, mssql) only.
On Linux, it would typically be /var/lib/mysql/mysql.sock. On FreeBSD, it would typically be /tmp/mysql.sock.
Note that it depends on sql_host setting whether this value will actually be used.
sql_sock = /tmp/mysql.sock
Automatically detect and convert possible JSON strings that represent numbers into numeric attributes. Optional, default value is 0 (do not convert strings into numbers). Added in 2.1.1-beta.
When this option is 1, values such as "1234" will be indexed as numbers instead of strings; if the option is 0, such values will be indexed as strings. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe sources will all be affected.
json_autoconv_numbers = 1
Whether and how to auto-convert key names within JSON attributes. The only known value is 'lowercase'. Optional, default value is unspecified (do not convert anything). Added in 2.1.1-beta.
When this directive is set to 'lowercase', key names within JSON attributes will be automatically brought to lower case when indexing. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe sources will all be affected.
json_autoconv_keynames = lowercase
What to do if JSON format errors are found. Optional, default value is ignore_attr (ignore errors). Applies only to sql_attr_json attributes. Added in 2.1.1-beta.
By default, JSON format errors are ignored (ignore_attr) and the indexer tool will just show a warning. Setting this option to fail_index will instead make indexing fail at the first JSON format error.
on_json_attr_error = ignore_attr
MySQL client connection flags. Optional, default value is 0 (do not set any flags). Applies to mysql source type only.
This option must contain an integer value with the sum of the flags. The value will be passed to mysql_real_connect() verbatim. The flags are enumerated in mysql_com.h include file. Flags that are especially interesting in regard to indexing, with their respective values, are as follows:
CLIENT_COMPRESS = 32; can use compression protocol
CLIENT_SSL = 2048; switch to SSL after handshake
CLIENT_SECURE_CONNECTION = 32768; new 4.1 authentication
For instance, you can specify 2080 (2048+32) to use both compression and SSL, or 32768 to use new authentication only. Initially, this option was introduced to be able to use compression when the indexer and mysqld are on different hosts. Compression on 1 Gbps links is most likely to hurt indexing time even though it reduces network traffic, both in theory and in practice. However, enabling compression on 100 Mbps links may improve indexing time significantly (up to 20-30% of the total indexing time improvement was reported). Your mileage may vary.
mysql_connect_flags = 32 # enable compression
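Because the directive takes a plain integer sum of flag values, it can help to compute the sum explicitly. A minimal sketch using the flag values listed above:

```python
# MySQL client flag values, as enumerated in mysql_com.h (see list above)
CLIENT_COMPRESS = 32              # can use compression protocol
CLIENT_SSL = 2048                 # switch to SSL after handshake
CLIENT_SECURE_CONNECTION = 32768  # new 4.1 authentication

# The flags occupy distinct bits, so plain addition and bitwise OR agree
flags = CLIENT_COMPRESS + CLIENT_SSL
print(flags)  # 2080, the value to put into mysql_connect_flags
```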
SSL certificate settings to use for connecting to MySQL server. Optional, default values are empty strings (do not use SSL). Applies to mysql source type only.
These directives let you set up a secure SSL connection between indexer and MySQL. The details on creating the certificates and setting up the MySQL server can be found in the MySQL documentation.
mysql_ssl_cert = /etc/ssl/client-cert.pem
mysql_ssl_key = /etc/ssl/client-key.pem
mysql_ssl_ca = /etc/ssl/cacert.pem
ODBC DSN to connect to. Mandatory, no default value. Applies to odbc source type only.
ODBC DSN (Data Source Name) specifies the credentials (host, user, password, etc) to use when connecting to ODBC data source. The format depends on specific ODBC driver used.
odbc_dsn = Driver={Oracle ODBC Driver};Dbq=myDBName;Uid=myUsername;Pwd=myPassword
Pre-fetch query, or pre-query. Multi-value, optional, default is an empty list of queries. Applies to SQL source types (mysql, pgsql, mssql) only.
Multi-value means that you can specify several pre-queries. They are executed before the main fetch query, exactly in the order of appearance in the configuration file. Pre-query results are ignored.
Pre-queries are useful in a lot of ways. They are used to set up encoding, mark records that are going to be indexed, update internal counters, set various per-connection SQL server options and variables, and so on.
Perhaps the most frequent pre-query usage is to specify the encoding that the server will use for the rows it returns. It must match the encoding that Sphinx expects (as specified by the charset_type and charset_table options). Two MySQL-specific examples of setting the encoding are:
sql_query_pre = SET CHARACTER_SET_RESULTS=cp1251
sql_query_pre = SET NAMES utf8
Also specific to MySQL sources, it is useful to disable query cache (for indexer connection only) in pre-query, because indexing queries are not going to be re-run frequently anyway, and there's no sense in caching their results. That could be achieved with:
sql_query_pre = SET SESSION query_cache_type=OFF
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF
Main document fetch query. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.
There can be only one main query. This is the query which is used to retrieve documents from the SQL server. You can specify up to 32 full-text fields (formally, up to SPH_MAX_FIELDS from sphinx.h), and an arbitrary amount of attributes. All of the columns that are neither document ID (the first one) nor attributes will be full-text indexed.
Document ID MUST be the very first field, and it MUST BE UNIQUE UNSIGNED POSITIVE (NON-ZERO, NON-NEGATIVE) INTEGER NUMBER. It can be either 32-bit or 64-bit, depending on how you built Sphinx; by default it builds with 32-bit IDs support, but the --enable-id64 option to configure allows building with 64-bit document and word ID support.
sql_query = \
    SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
        title, content \
    FROM documents
Joined/payload field fetch query. Multi-value, optional, default is an empty list of queries. Applies to SQL source types (mysql, pgsql, mssql) only.
sql_joined_field lets you use two different features: joined fields, and payloads (payload fields). Its syntax is as follows:
sql_joined_field = FIELD-NAME 'from' ( 'query' | 'payload-query' \
    | 'ranged-query' ); QUERY [ ; RANGE-QUERY ]
where
FIELD-NAME is a joined/payload field name;
QUERY is an SQL query that must fetch values to index.
RANGE-QUERY is an optional SQL query that fetches a range of values to index. (Added in version 2.0.1-beta.)
Joined fields let you avoid JOIN and/or GROUP_CONCAT statements in the main document fetch query (sql_query). This can be useful when the SQL-side JOIN is slow, needs to be offloaded to the Sphinx side, or simply to emulate MySQL-specific GROUP_CONCAT functionality in case your database server does not support it.
The query must return exactly 2 columns: document ID, and text to append to a joined field. Document IDs can be duplicate, but they must be in ascending order. All the text rows fetched for a given ID will be concatenated together, and the concatenation result will be indexed as the entire contents of a joined field. Rows will be concatenated in the order returned from the query, and separating whitespace will be inserted between them. For instance, if the joined field query returns the following rows:
( 1, 'red' )
( 1, 'right' )
( 1, 'hand' )
( 2, 'mysql' )
( 2, 'sphinx' )
then the indexing results would be equivalent to that of adding a new text field with a value of 'red right hand' to document 1 and 'mysql sphinx' to document 2.
Joined fields are only indexed differently. There are no other differences between joined fields and regular text fields.
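The concatenation rule above can be sketched in Python; this is an illustrative model of the behavior, not actual indexer code:

```python
def build_joined_field(rows):
    """Concatenate (docid, text) rows into one joined-field value per document.

    Rows are expected in ascending docid order, as Sphinx requires; text
    rows for the same docid are joined with a single separating space.
    """
    joined = {}
    for docid, text in rows:
        joined[docid] = joined[docid] + " " + text if docid in joined else text
    return joined

rows = [(1, "red"), (1, "right"), (1, "hand"), (2, "mysql"), (2, "sphinx")]
print(build_joined_field(rows))  # {1: 'red right hand', 2: 'mysql sphinx'}
```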
Starting with 2.0.1-beta, ranged queries can be used when a single query is not efficient enough or does not work because of database driver limitations. It works similarly to the ranged queries in the main indexing loop, see Section 3.8, “Ranged queries”. The range will be queried for and fetched upfront once, then multiple queries with different $start and $end substitutions will be run to fetch the actual data.
Payloads let you create a special field in which, instead of keyword positions, so-called user payloads are stored. Payloads are custom integer values attached to every keyword. They can then be used at search time to affect the ranking.
The payload query must return exactly 3 columns: document ID; keyword; and integer payload value. Document IDs can be duplicate, but they must be in ascending order. Payloads must be unsigned integers within 24-bit range, ie. from 0 to 16777215. For reference, payloads are currently internally stored as in-field keyword positions, but that is not guaranteed and might change in the future.
Currently, the only method to account for payloads is to use the SPH_RANK_PROXIMITY_BM25 ranker. On indexes with payload fields, it will automatically switch to a variant that matches keywords in those fields, computes a sum of matched payloads multiplied by field weights, and adds that sum to the final rank.
sql_joined_field = \
    tagstext from query; \
    SELECT docid, CONCAT('tag',tagid) FROM tags ORDER BY docid ASC
sql_joined_field = bigint tag from ranged-query; \
    SELECT id, tag FROM tags WHERE id>=$start AND id<=$end; \
    SELECT MIN(id), MAX(id) FROM tags ORDER BY docid ASC
Range query setup. Optional, default is empty. Applies to SQL source types (mysql, pgsql, mssql) only.
Setting this option enables ranged document fetch queries (see Section 3.8, “Ranged queries”). Ranged queries are useful to avoid notorious MyISAM table locks when indexing lots of data. (They also help with other less notorious issues, such as reduced performance caused by big result sets, or additional resources consumed by InnoDB to serialize big read transactions.)
The query specified in this option must fetch min and max document IDs that will be used as range boundaries. It must return exactly two integer fields, min ID first and max ID second; the field names are ignored.
When ranged queries are enabled, sql_query will be required to contain $start and $end macros (because it obviously would be a mistake to index the whole table many times over). Note that the intervals specified by $start..$end will not overlap, so you should not remove document IDs that are exactly equal to $start or $end from your query. The example in Section 3.8, “Ranged queries” illustrates that; note how it uses greater-or-equal and less-or-equal comparisons.
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
Range query step. Optional, default is 1024. Applies to SQL source types (mysql, pgsql, mssql) only.
Only used when ranged queries are enabled. The full document IDs interval fetched by sql_query_range will be walked in steps of this size. For example, if the min and max IDs fetched are 12 and 3456 respectively, and the step is 1000, indexer will call sql_query several times with the following substitutions:
$start=12, $end=1011
$start=1012, $end=2011
$start=2012, $end=3011
$start=3012, $end=3456
sql_range_step = 1000
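The substitution windows above follow directly from the min/max IDs and the step; a sketch of the arithmetic (illustrative, not indexer internals):

```python
def range_steps(min_id, max_id, step):
    """Yield ($start, $end) pairs covering [min_id, max_id] in fixed-size steps."""
    start = min_id
    while start <= max_id:
        yield start, min(start + step - 1, max_id)
        start += step

print(list(range_steps(12, 3456, 1000)))
# [(12, 1011), (1012, 2011), (2012, 3011), (3012, 3456)]
```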
Kill-list query. Optional, default is empty (no query). Applies to SQL source types (mysql, pgsql, mssql) only. Introduced in version 0.9.9-rc1.
This query is expected to return a number of 1-column rows, each containing just the document ID. The returned document IDs are stored within an index. Kill-list for a given index suppresses results from other indexes, depending on index order in the query. The intended use is to help implement deletions and updates on existing indexes without rebuilding (actually, without even touching them), and especially to fight the phantom results problem.
Let us dissect an example. Assume we have two indexes, 'main' and 'delta'. Assume that documents 2, 3, and 5 were deleted since last reindex of 'main', and documents 7 and 11 were updated (ie. their text contents were changed). Assume that a keyword 'test' occurred in all these mentioned documents when we were indexing 'main'; still occurs in document 7 as we index 'delta'; but does not occur in document 11 any more. We now reindex delta and then search through both these indexes in proper (least to most recent) order:
$res = $cl->Query ( "test", "main delta" );
First, we need to properly handle deletions. The result set should not contain documents 2, 3, or 5. Second, we also need to avoid phantom results. Unless we do something about it, document 11 will appear in search results! It will be found in 'main' (but not 'delta'). And it will make it to the final result set unless something stops it.
Kill-list, or K-list for short, is that something. Kill-list attached to 'delta' will suppress the specified rows from all the preceding indexes, in this case just 'main'. So to get the expected results, we should put all the updated and deleted document IDs into it.
Note that in the distributed index setup, K-lists are local to every node in the cluster. They are not transmitted over the network when sending queries. (Because that might be too much of an impact when the K-list is huge.) You will need to set up separate per-server K-lists in that case.
sql_query_killlist = \
    SELECT id FROM documents WHERE updated_ts>=@last_reindex UNION \
    SELECT id FROM documents_deleted WHERE deleted_ts>=@last_reindex
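The suppression order described above (each index's K-list removes matches contributed by the preceding, less recent indexes) can be modeled as follows; an illustrative sketch, not actual searchd code:

```python
def merge_with_klists(index_results):
    """Merge matches from least to most recent index, applying kill-lists.

    index_results is a list of (matched_ids, kill_list) pairs in index order;
    each kill-list suppresses documents coming from all preceding indexes.
    """
    final = set()
    for matched, klist in index_results:
        final -= set(klist)    # suppress killed docs from earlier indexes
        final |= set(matched)
    return final

# 'main' matched docs 2, 3, 5, 7, 11 for 'test'; 'delta' matched 7 and
# kills the deleted (2, 3, 5) and updated (7, 11) documents
print(sorted(merge_with_klists([({2, 3, 5, 7, 11}, []),
                                ({7}, [2, 3, 5, 7, 11])])))  # [7]
```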
Unsigned integer attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
The column value should fit into 32-bit unsigned integer range. Values outside this range will be accepted but wrapped around. For instance, -1 will be wrapped around to 2^32-1 or 4,294,967,295.
You can specify bit count for integer attributes by appending ':BITCOUNT' to the attribute name (see example below). Attributes with less than default 32-bit size, or bitfields, perform slower. But they require less RAM when using extern storage: such bitfields are packed together in 32-bit chunks in the .spa attribute data file. Bit size settings are ignored if using inline storage.
sql_attr_uint = group_id
sql_attr_uint = forum_id:9 # 9 bits for forum_id
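The wrap-around mentioned above is ordinary modulo-2^32 arithmetic; a quick sketch:

```python
def as_uint32(value):
    """Model how out-of-range values wrap around in a 32-bit unsigned attribute."""
    return value % 2**32

print(as_uint32(-1))         # 4294967295, ie. 2^32-1
print(as_uint32(2**32 + 5))  # 5
```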
Boolean attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
Equivalent to sql_attr_uint declaration with a bit count of 1.
sql_attr_bool = is_deleted # will be packed to 1 bit
64-bit signed integer attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only. Note that unlike sql_attr_uint, these values are signed. Introduced in version 0.9.9-rc1.
sql_attr_bigint = my_bigint_id
UNIX timestamp attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
Timestamps can store date and time in the range of Jan 01, 1970 to Jan 19, 2038 with a precision of one second. The expected column value should be a timestamp in UNIX format, ie. 32-bit unsigned integer number of seconds elapsed since midnight, January 01, 1970, GMT. Timestamps are internally stored and handled as integers everywhere. But in addition to working with timestamps as integers, it's also legal to use them along with different date-based functions, such as time segments sorting mode, or day/week/month/year extraction for GROUP BY.
Note that DATE or DATETIME column types in MySQL can not be directly used as timestamp attributes in Sphinx; you need to explicitly convert such columns using UNIX_TIMESTAMP function (if data is in range).
Note that timestamps can not represent dates before January 01, 1970, and UNIX_TIMESTAMP() in MySQL will not return anything expected. If you only need to work with dates, not times, consider the TO_DAYS() function in MySQL instead.
# sql_query = ... UNIX_TIMESTAMP(added_datetime) AS added_ts ...
sql_attr_timestamp = added_ts
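If you prepare data outside of SQL, the same conversion is straightforward; a sketch of producing the UNIX-format value Sphinx expects (seconds since midnight, January 01, 1970, GMT):

```python
import calendar
import datetime

# Treat the datetime as GMT/UTC and convert to seconds since the epoch
dt = datetime.datetime(2009, 2, 13, 23, 31, 30)
ts = calendar.timegm(dt.timetuple())
print(ts)  # 1234567890
```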
Deprecated. Ordinal string number attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
This attribute type (so-called ordinal, for brevity) is intended to allow sorting by string values, but without storing the strings themselves. When indexing ordinals, string values are fetched from the database, temporarily stored, sorted, and then replaced by their respective ordinal numbers in the array of sorted strings. So, the ordinal number is an integer such that sorting by it produces the same result as lexicographically sorting by the original strings.
Earlier versions could consume a lot of RAM for indexing ordinals. Starting with revision r1112, ordinals accumulation and sorting also runs in fixed memory (at the cost of using additional temporary disk space), and honors mem_limit settings.
Ideally the strings should be sorted differently, depending on the encoding and locale. For instance, if the strings are known to be Russian text in KOI8R encoding, sorting the bytes 0xE0, 0xE1, and 0xE2 should produce 0xE1, 0xE2 and 0xE0, because in KOI8R value 0xE0 encodes a character that is (noticeably) after characters encoded by 0xE1 and 0xE2. Unfortunately, Sphinx does not support that at the moment and will simply sort the strings bytewise.
Note that the ordinals are by construction local to each index, and it's therefore impossible to merge ordinals while retaining the proper order. The processed strings are replaced by their sequential number in the index they occurred in, but different indexes have different sets of strings. For instance, if 'main' index contains strings "aaa", "bbb", "ccc", and so on up to "zzz", they'll be assigned numbers 1, 2, 3, and so on up to 26, respectively. But then if 'delta' only contains "zzz" the assigned number will be 1. And after the merge, the order will be broken. Unfortunately, this is impossible to workaround without storing the original strings (and once Sphinx supports storing the original strings, ordinals will not be necessary any more).
sql_attr_str2ordinal = author_name
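The substitution can be modeled in a few lines; a simplified sketch that, like Sphinx, sorts the strings bytewise (illustrative, not actual indexer code):

```python
def string_ordinals(values):
    """Replace each string with its 1-based rank in the sorted set of strings."""
    rank = {s: i + 1 for i, s in enumerate(sorted(set(values)))}
    return [rank[s] for s in values]

names = ["ccc", "aaa", "bbb", "aaa"]
print(string_ordinals(names))  # [3, 1, 2, 1]
# Sorting documents by these integers now matches sorting by the strings
```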
Floating point attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
The values will be stored in single precision, 32-bit IEEE 754 format. The represented range is approximately from 1e-38 to 1e+38. The amount of decimal digits that can be stored precisely is approximately 7. One important usage of the float attributes is storing latitude and longitude values (in radians), for further usage in query-time geosphere distance calculations.
sql_attr_float = lat_radians
sql_attr_float = long_radians
Multi-valued attribute (MVA) declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.
Plain attributes only allow attaching one value per document. However, there are cases (such as tags or categories) when it is desirable to attach multiple values of the same attribute and be able to apply filtering or grouping to value lists.
The declaration format is as follows (backslashes are for clarity only; everything can be declared in a single line as well):
sql_attr_multi = ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE \
    [;QUERY] \
    [;RANGE-QUERY]
where
ATTR-TYPE is 'uint', 'bigint' or 'timestamp'
SOURCE-TYPE is 'field', 'query', or 'ranged-query'
QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs
RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range'
sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
sql_attr_multi = bigint tag from ranged-query; \
    SELECT id, tag FROM tags WHERE id>=$start AND id<=$end; \
    SELECT MIN(id), MAX(id) FROM tags
String attribute declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only. Introduced in version 1.10-beta.
String attributes can store arbitrary strings attached to every document. There's a fixed size limit of 4 MB per value. Also, searchd will currently cache all the values in RAM, which is an additional implicit limit.
Starting from 2.0.1-beta, string attributes can be used for sorting and grouping (ORDER BY, GROUP BY, WITHIN GROUP ORDER BY). Note that attributes declared using sql_attr_string will not be full-text indexed; you can use the sql_field_string directive for that.
sql_attr_string = title # will be stored but will not be indexed
JSON attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 2.1.1-beta.
When indexing JSON attributes, Sphinx expects a text field with JSON formatted data. Currently, only one-level JSON objects are supported. Integer, float, string, and string array values are accepted.
{ "id": 1, "gid": 2, "title": "some title", "tags": [ "tag1", "tag2", "tag3" ] }
These attributes allow Sphinx to work with documents without a fixed set of attribute columns. When you filter on a key of a JSON attribute, documents that don't include the key will simply be ignored.
You can read more on JSON attributes in http://sphinxsearch.com/blog/2013/02/07/sphinx-2-1-json-attributes/.
sql_attr_json = properties
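To illustrate the key-based filtering described above, a SphinxQL query against an index carrying the properties attribute might look as follows (a sketch; the index name myindex and the gid key are assumptions, not part of any real schema):

```sql
SELECT id, properties.gid FROM myindex WHERE properties.gid=2
```

Documents whose properties value lacks the gid key are simply skipped by the filter.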
Deprecated.
Word-count attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 1.10-beta.
Word-count attribute takes a string column, tokenizes it according to index settings, and stores the resulting number of tokens in an attribute. This number of tokens ("word count") is a normal integer that can be later used, for instance, in custom ranking expressions (boost shorter titles, help identify exact field matches, etc).
sql_attr_str2wordcount = title_wc
Per-column buffer sizes.
Optional, default is empty (deduce the sizes automatically).
Applies to odbc, mssql source types only.
Introduced in version 2.0.1-beta.
ODBC and MS SQL drivers sometimes can not return the maximum
actual column size to be expected. For instance, NVARCHAR(MAX) columns
always report their length as 2147483647 bytes to
indexer
even though the actually used length
is likely considerably less. However, the receiving buffers still
need to be allocated upfront, and their sizes have to be determined.
When the driver does not report the column length at all, Sphinx
allocates default 1 KB buffers for each non-char column, and 1 MB
buffers for each char column. Driver-reported column length
also gets clamped by an upper limit of 8 MB, so in case the
driver reports (almost) a 2 GB column length, it will be clamped
and an 8 MB buffer will be allocated instead for that column.
These hard-coded limits can be overridden using the
sql_column_buffers
directive, either in order
to save memory on actually shorter columns, or overcome
the 8 MB limit on actually longer columns. The directive values
must be a comma-separated list of selected column names and sizes:
sql_column_buffers = <colname>=<size>[K|M] [, ...]
sql_query = SELECT id, mytitle, mycontent FROM documents
sql_column_buffers = mytitle=64K, mycontent=10M
Combined string attribute and full-text field declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 1.10-beta.
sql_attr_string only stores the column
value but does not full-text index it. In some cases it might be desired to both full-text
index the column and store it as an attribute. sql_field_string
lets you do
exactly that. Both the field and the attribute will be named the same.
sql_field_string = title # will be both indexed and stored
Deprecated.
Combined word-count attribute and full-text field declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 1.10-beta.
sql_attr_str2wordcount only stores the column
word count but does not full-text index it. In some cases it might be desired to both full-text
index the column and also have the count. sql_field_str2wordcount
lets you do
exactly that. Both the field and the attribute will be named the same.
sql_field_str2wordcount = title # will be indexed, and counted/stored
File based field declaration.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 1.10-beta.
This directive makes indexer interpret field contents
as a file name, and load and index the referred file. Files larger than
max_file_field_buffer
in size are skipped. Any errors during file loading (IO errors, exceeded
limits, etc) will be reported as indexing warnings and will not
terminate indexing early. No content will be indexed for such files.
sql_file_field = my_file_path # load and index files referred to by my_file_path
Post-fetch query.
Optional, default value is empty.
Applies to SQL source types (mysql, pgsql, mssql) only.
This query is executed immediately after sql_query completes successfully. When the post-fetch query produces errors, they are reported as warnings, but indexing is not terminated. Its result set is ignored. Note that indexing is not yet completed at the point when this query gets executed, and further indexing may still fail. Therefore, any permanent updates should not be done from here. For instance, updates on a helper table that permanently change the last successfully indexed ID should not be run from the post-fetch query; they should be run from the post-index query instead.
sql_query_post = DROP TABLE my_tmp_table
Post-index query.
Optional, default value is empty.
Applies to SQL source types (mysql, pgsql, mssql) only.
This query is executed when indexing is fully and successfully completed.
If this query produces errors, they are reported as warnings,
but indexing is not terminated. Its result set is ignored.
The $maxid macro can be used in its text; it will be
expanded to the maximum document ID which was actually fetched
from the database during indexing. If no documents were indexed,
$maxid will be expanded to 0.
sql_query_post_index = REPLACE INTO counters ( id, val ) \
    VALUES ( 'max_indexed_id', $maxid )
Ranged query throttling period, in milliseconds.
Optional, default is 0 (no throttling).
Applies to SQL source types (mysql, pgsql, mssql) only.
Throttling can be useful when indexer imposes too much load on the database server. It causes the indexer to sleep for the given number of milliseconds once per each ranged query step. This sleep is unconditional, and is performed before the fetch query.
sql_ranged_throttle = 1000 # sleep for 1 sec before each query step
Document info query.
Optional, default is empty.
Applies to mysql source type only.
Only used by CLI search to fetch and display document information,
only works with MySQL at the moment, and only intended for debugging purposes.
This query fetches the row that will be displayed by the CLI search utility
for each document ID. It must contain the $id macro,
which expands to the queried document ID.
sql_query_info = SELECT * FROM documents WHERE id=$id
Shell command that invokes xmlpipe stream producer.
Mandatory.
Applies to xmlpipe and xmlpipe2 source types only.
Specifies a command that will be executed and whose output will be parsed for documents. Refer to Section 3.9, “xmlpipe data source” or Section 3.10, “xmlpipe2 data source” for the specific format description.
xmlpipe_command = cat /home/sphinx/test.xml
xmlpipe field declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only. Refer to Section 3.10, “xmlpipe2 data source”.
xmlpipe_field = subject
xmlpipe_field = content
xmlpipe field and string attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only. Refer to Section 3.10, “xmlpipe2 data source”.
Introduced in version 1.10-beta.
Makes the specified XML element indexed as both a full-text field and a string attribute. Equivalent to <sphinx:field name="field" attr="string"/> declaration within the XML file.
xmlpipe_field_string = subject
xmlpipe field and word count attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only. Refer to Section 3.10, “xmlpipe2 data source”.
Introduced in version 1.10-beta.
Makes the specified XML element indexed as both a full-text field and a word count attribute. Equivalent to <sphinx:field name="field" attr="wordcount"/> declaration within the XML file.
xmlpipe_field_wordcount = subject
xmlpipe integer attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_uint.
xmlpipe_attr_uint = author_id
xmlpipe signed 64-bit integer attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_bigint.
xmlpipe_attr_bigint = my_bigint_id
xmlpipe boolean attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_bool.
xmlpipe_attr_bool = is_deleted # will be packed to 1 bit
xmlpipe UNIX timestamp attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_timestamp.
xmlpipe_attr_timestamp = published
Deprecated.
xmlpipe string ordinal attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_str2ordinal.
xmlpipe_attr_str2ordinal = author_sort
xmlpipe floating point attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Syntax fully matches that of sql_attr_float.
xmlpipe_attr_float = lat_radians
xmlpipe_attr_float = long_radians
xmlpipe MVA attribute declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
This setting declares an MVA attribute tag in xmlpipe2 stream. The contents of the specified tag will be parsed and a list of integers that will constitute the MVA will be extracted, similar to how sql_attr_multi parses SQL column contents when 'field' MVA source type is specified.
xmlpipe_attr_multi = taglist
xmlpipe MVA attribute declaration. Declares the BIGINT (signed 64-bit integer) MVA attribute.
Multi-value, optional.
Applies to xmlpipe2 source type only.
This setting declares an MVA attribute tag in xmlpipe2 stream. The contents of the specified tag will be parsed and a list of integers that will constitute the MVA will be extracted, similar to how sql_attr_multi parses SQL column contents when 'field' MVA source type is specified.
xmlpipe_attr_multi_64 = taglist
xmlpipe string declaration.
Multi-value, optional.
Applies to xmlpipe2 source type only.
Introduced in version 1.10-beta.
This setting declares a string attribute tag in xmlpipe2 stream. The contents of the specified tag will be parsed and stored as a string value.
xmlpipe_attr_string = subject
Word-count attribute declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Introduced in version 1.10-beta.
Word-count attribute takes a string column, tokenizes it according to index settings, and stores the resulting number of tokens in an attribute. This number of tokens ("word count") is a normal integer that can be later used, for instance, in custom ranking expressions (boost shorter titles, help identify exact field matches, etc).
xmlpipe_attr_wordcount = title_wc
JSON attribute declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Introduced in version 2.1.1-beta.
This directive is used to declare that the contents of a given XML tag are to be treated as a JSON document and stored into a Sphinx index for later use. Refer to Section 11.1.28, “sql_attr_json” for more details on the JSON attributes.
xmlpipe_attr_json = properties
Perform Sphinx-side UTF-8 validation and filtering to prevent XML parser from choking on non-UTF-8 documents.
Optional, default is 0.
Applies to xmlpipe2 source type only.
On certain occasions it might be hard or even impossible to guarantee that the incoming XMLpipe2 document bodies are in perfectly valid and conforming UTF-8 encoding. For instance, documents with national single-byte encodings could sneak into the stream. The libexpat XML parser is fragile, meaning that it will stop processing in such cases. The UTF-8 fixup feature lets you avoid that. When fixup is enabled, Sphinx will preprocess the incoming stream before passing it to the XML parser and replace invalid UTF-8 sequences with spaces.
xmlpipe_fixup_utf8 = 1
MS SQL Windows authentication flag.
Boolean, optional, default value is 0 (false).
Applies to mssql source type only.
Introduced in version 0.9.9-rc1.
Whether to use the currently logged in Windows account credentials for
authentication when connecting to MS SQL Server. Note that when running
searchd as a service, its account can differ
from the account you used to install the service.
mssql_winauth = 1
MS SQL encoding type flag.
Boolean, optional, default value is 0 (false).
Applies to mssql source type only.
Introduced in version 0.9.9-rc1.
Whether to ask for Unicode or single-byte data when querying MS SQL Server.
This flag must be in sync with charset_type directive;
that is, to index Unicode data, you must set both charset_type
in the index
(to 'utf-8') and mssql_unicode
in the source (to 1).
For reference, MS SQL will actually return data in UCS-2 encoding instead of UTF-8,
but Sphinx will automatically handle that.
mssql_unicode = 1
Columns to unpack using zlib (aka deflate, aka gunzip).
Multi-value, optional, default value is empty list of columns.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 0.9.9-rc1.
Columns specified using this directive will be unpacked by indexer
using the standard zlib algorithm (called deflate; also implemented by gunzip).
When indexing on a different box than the database, this lets you offload the database, and save on network traffic.
The feature is only available if zlib and zlib-devel were both available during build time.
unpack_zlib = col1
unpack_zlib = col2
Columns to unpack using MySQL UNCOMPRESS() algorithm.
Multi-value, optional, default value is empty list of columns.
Applies to SQL source types (mysql, pgsql, mssql) only.
Introduced in version 0.9.9-rc1.
Columns specified using this directive will be unpacked by indexer
using the modified zlib algorithm employed by MySQL COMPRESS() and UNCOMPRESS() functions.
When indexing on a different box than the database, this lets you offload the database, and save on network traffic.
The feature is only available if zlib and zlib-devel were both available during build time.
unpack_mysqlcompress = body_compressed
unpack_mysqlcompress = description_compressed
Buffer size for UNCOMPRESS()ed data. Optional, default value is 16M. Introduced in version 0.9.9-rc1.
When using unpack_mysqlcompress,
due to implementation intricacies it is not possible to deduce the required buffer size
from the compressed data. So the buffer must be preallocated in advance, and unpacked
data cannot exceed the buffer size. This option lets you control the buffer size,
both to limit indexer memory use, and to enable unpacking
of really long data fields if necessary.
unpack_mysqlcompress_maxsize = 1M
Index type. Known values are 'plain', 'distributed', and 'rt'. Optional, default is 'plain' (plain local index).
Sphinx supports several different types of indexes.
Version 0.9.x supported two index types: plain local indexes
that are stored and processed on the local machine; and distributed indexes,
which involve not only local searching but querying remote searchd
instances over the network as well (see Section 5.8, “Distributed searching”).
Version 1.10-beta also adds support
for so-called real-time indexes (or RT indexes for short) that
are also stored and processed locally, but additionally allow
for on-the-fly updates of the full-text index (see Chapter 4, Real-time indexes).
Note that attributes can be updated on-the-fly using
either plain local indexes or RT ones.
Index type setting lets you choose the needed type. By default, plain local index type will be assumed.
type = distributed
Adds document source to local index. Multi-value, mandatory.
Specifies document source to get documents from when the current index is indexed. There must be at least one source. There may be multiple sources, without any restrictions on the source types: ie. you can pull part of the data from MySQL server, part from PostgreSQL, part from the filesystem using xmlpipe2 wrapper.
However, there are some restrictions on the source data. First, document IDs must be globally unique across all sources. If that condition is not met, you might get unexpected search results. Second, source schemas must be the same in order to be stored within the same index.
No source ID is stored automatically. Therefore, in order to be able to tell what source the matched document came from, you will need to store some additional information yourself. Two typical approaches include:
mangling document ID and encoding source ID in it:
source src1
{
    sql_query = SELECT id*10+1, ... FROM table1
    ...
}
source src2
{
    sql_query = SELECT id*10+2, ... FROM table2
    ...
}
storing source ID simply as an attribute:
source src1
{
    sql_query = SELECT id, 1 AS source_id FROM table1
    sql_attr_uint = source_id
    ...
}
source src2
{
    sql_query = SELECT id, 2 AS source_id FROM table2
    sql_attr_uint = source_id
    ...
}
source = srcpart1
source = srcpart2
source = srcpart3
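Building on the second approach above, the stored source_id attribute can then be used at search time to tell the sources apart (a SphinxQL sketch; the index name myindex is an assumption):

```sql
SELECT id, source_id FROM myindex WHERE source_id=2
```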
Index files path and file name (without extension). Mandatory.
Path specifies both directory and file name, but without extension.
indexer
will append different extensions
to this path when generating final names for both permanent and
temporary index files. Permanent data files have several different
extensions starting with '.sp'; temporary files' extensions
start with '.tmp'. It's safe to remove .tmp*
files if indexer fails to remove them automatically.
For reference, different index files store the following data:
.spa stores document attributes (used in extern docinfo storage mode only);
.spd stores matching document ID lists for each word ID;
.sph stores index header information;
.spi stores word lists (word IDs and pointers to the .spd file);
.spk stores kill-lists;
.spm stores MVA data;
.spp stores hit (aka posting, aka word occurrence) lists for each word ID;
.sps stores string attribute data.
path = /var/data/test1
Document attribute values (docinfo) storage mode. Optional, default is 'extern'. Known values are 'none', 'extern' and 'inline'.
Docinfo storage mode defines how exactly docinfo will be
physically stored on disk and in RAM. "none" means that there will be
no docinfo at all (ie. no attributes). Normally you need not set
"none" explicitly, because Sphinx will automatically select "none"
when there are no attributes configured. "inline" means that the
docinfo will be stored in the .spd file,
along with the document ID lists. "extern" means that the docinfo
will be stored separately (externally) from document ID lists,
in a special .spa file.
Basically, externally stored docinfo must be kept in RAM when querying, for performance reasons. So in some cases "inline" might be the only option. However, such cases are infrequent, and docinfo defaults to "extern". Refer to Section 3.3, “Attributes” for in-depth discussion and RAM usage estimates.
docinfo = inline
Memory locking for cached data. Optional, default is 0 (do not call mlock()).
For search performance, searchd
preloads
a copy of .spa
and .spi
files in RAM, and keeps that copy in RAM at all times. But if there
are no searches on the index for some time, there are no accesses
to that cached copy, and OS might decide to swap it out to disk.
First queries to such "cooled down" index will cause swap-in
and their latency will suffer.
Setting mlock option to 1 makes Sphinx lock physical RAM used
for that cached data using mlock(2) system call, and that prevents
swapping (see man 2 mlock for details). mlock(2) is a privileged call,
so it will require searchd
to be either run
from root account, or be granted enough privileges otherwise.
If mlock() fails, a warning is emitted, but the index continues
working.
mlock = 1
A list of morphology preprocessors (stemmers or lemmatizers) to apply. Optional, default is empty (do not apply any preprocessor).
Morphology preprocessors can be applied to the words being indexed to replace different forms of the same word with the base, normalized form. For instance, English stemmer will normalize both "dogs" and "dog" to "dog", making search results for both searches the same.
There are 3 different morphology preprocessors that Sphinx implements: lemmatizers, stemmers, and phonetic algorithms.
The morphology processors that are implemented in Sphinx itself include Russian lemmatizer, English stemmer, Russian stemmer (that supports UTF-8 and Windows-1251 encodings), Czech stemmer, Arabic stemmer, and Soundex and Metaphone phonetic algorithms. Also, you can link with libstemmer library for even more stemmers.
Lemmatizer support was added in version 2.1.1-beta, starting with a Russian lemmatizer. (English lemmatizer support is also planned in the future.) Lemmatizers require a dictionary that needs to be additionally downloaded from the Sphinx website. That dictionary needs to be installed in a directory specified by lemmatizer_base directive. Also, there is a lemmatizer_cache directive that lets you speed up lemmatizing (and therefore indexing) by spending more RAM for, basically, an uncompressed cache of a dictionary.
Additional stemmers provided by Snowball
project libstemmer library
can be enabled at compile time using --with-libstemmer
configure
option.
Built-in English and Russian stemmers should be faster than their
libstemmer counterparts, but can produce slightly different results,
because they are based on an older version.
Soundex implementation matches that of MySQL. Metaphone implementation is based on Double Metaphone algorithm and indexes the primary code.
Built-in values that are available for use in morphology
option are as follows:
none - do not perform any morphology processing;
lemmatize_ru - apply Russian lemmatizer and pick a single root form (added in 2.1.1-beta);
lemmatize_ru_all - apply Russian lemmatizer and index all possible root forms (added in 2.1.1-beta);
stem_en - apply Porter's English stemmer;
stem_ru - apply Porter's Russian stemmer;
stem_enru - apply Porter's English and Russian stemmers;
stem_cz - apply Czech stemmer;
stem_ar - apply Arabic stemmer (added in 2.1.1-beta);
soundex - replace keywords with their SOUNDEX code;
metaphone - replace keywords with their METAPHONE code.
Additional values provided by libstemmer are in 'libstemmer_XXX' format,
where XXX is libstemmer algorithm codename (refer to
libstemmer_c/libstemmer/modules.txt
for a complete list).
Several stemmers can be specified (comma-separated). They will be applied to incoming words in the order they are listed, and the processing will stop once one of the stemmers actually modifies the word. Also when wordforms feature is enabled the word will be looked up in word forms dictionary first, and if there is a matching entry in the dictionary, stemmers will not be applied at all. Or in other words, wordforms can be used to implement stemming exceptions.
morphology = stem_en, libstemmer_sv
The keywords dictionary type. Known values are 'crc' and 'keywords'. Optional, default is 'crc'. Introduced in version 2.0.1-beta.
CRC dictionary mode (dict=crc) is the default dictionary type in Sphinx, and the only one available until version 2.0.1-beta. Keywords dictionary mode (dict=keywords) was added in 2.0.1-beta, primarily to (greatly) reduce indexing impact and enable substring searches on huge collections. It also eliminates the chance of CRC32 collisions. In 2.0.1-beta, that mode was only supported for disk indexes. Starting with 2.0.2-beta, RT indexes are also supported.
CRC dictionaries never store the original keyword text in the index.
Instead, keywords are replaced with their control sum value (either CRC32 or
FNV64, depending on whether Sphinx was built with --enable-id64)
both when searching and indexing, and that value is used internally
in the index.
That approach has two drawbacks. First, in the CRC32 case there is a chance of control sum collision between several pairs of different keywords, growing quadratically with the number of unique keywords in the index. (The FNV64 case is unaffected in practice, as the chance of a single FNV64 collision in a dictionary of 1 billion entries is approximately 1:16, or 6.25 percent. And most dictionaries will be much more compact than a billion keywords, as a typical spoken human language has in the region of 1 to 10 million word forms.) Second, and more importantly, substring searches are not directly possible with control sums. Sphinx alleviates that by pre-indexing all the possible substrings as separate keywords (see Section 11.2.19, “min_prefix_len”, Section 11.2.20, “min_infix_len” directives). That actually has an added benefit of matching substrings in the quickest way possible. But at the same time pre-indexing all substrings grows the index size a lot (factors of 3-10x and even more would not be unusual) and impacts the indexing time respectively, rendering substring searches on big indexes rather impractical.
Keywords dictionary, introduced in 2.0.1-beta, fixes both these drawbacks. It stores the keywords in the index and performs search-time wildcard expansion. For example, a search for a 'test*' prefix could internally expand to 'test|tests|testing' query based on the dictionary contents. That expansion is fully transparent to the application, except that the separate per-keyword statistics for all the actually matched keywords would now also be reported.
Version 2.1.1-beta introduced extended wildcard support: special symbols like '?' and '%' are now supported along with substring (infix) search (e.g. "t?st*", "run%", "*abc*"). Note, however, that these wildcards only work with dict=keywords, and not elsewhere.
Indexing with a keywords dictionary should be 1.1x to 1.3x slower compared to regular, non-substring indexing, but many times faster compared to substring indexing (either prefix or infix). Index size should only be slightly bigger than that of the regular non-substring index, with a 1 to 10% total difference. Regular keyword searching time should be very close or identical across all three discussed index kinds (CRC non-substring, CRC substring, keywords). Substring searching time can vary greatly depending on how many actual keywords match the given substring (in other words, into how many keywords the search term expands). The maximum number of keywords matched is restricted by the expansion_limit directive.
Essentially, keywords and CRC dictionaries represent two different substring searching trade-offs. You can choose to either sacrifice indexing time and index size in favor of top-speed worst-case searches (CRC dictionary), or only slightly impact indexing time but sacrifice worst-case searching time when the prefix expands into very many keywords (keywords dictionary).
dict = keywords
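For illustration, the wildcard searches discussed above can be issued directly via SphinxQL once dict=keywords is enabled (a sketch; the index name myindex is an assumption, and the '?'/'%' forms additionally require version 2.1.1-beta or later):

```sql
SELECT id FROM myindex WHERE MATCH('test*')
SELECT id FROM myindex WHERE MATCH('t?st* run%')
```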
Whether to detect and index sentence and paragraph boundaries. Optional, default is 0 (do not detect and index). Introduced in version 2.0.1-beta.
This directive enables sentence and paragraph boundary indexing.
It's required for the SENTENCE and PARAGRAPH operators to work.
Sentence boundary detection is based on plain text analysis, so you
only need to set index_sp = 1
to enable it. Paragraph
detection is however based on HTML markup, and happens in the
HTML stripper.
So to index paragraph locations you also need to enable the stripper
by specifying html_strip = 1
. Both types of boundaries
are detected based on a few built-in rules enumerated just below.
Sentence boundary detection rules are as follows.
Question and exclamation marks (? and !) are always a sentence boundary.
Trailing dot (.) is a sentence boundary, except:
When followed by a letter. That's considered a part of an abbreviation (as in "S.T.A.L.K.E.R" or "Goldman Sachs S.p.A.").
When followed by a comma. That's considered an abbreviation followed by a comma (as in "Telecom Italia S.p.A., founded in 1994").
When followed by a space and a lowercase letter. That's considered an abbreviation within a sentence (as in "News Corp. announced in February").
When preceded by a space and a capital letter, and followed by a space. That's considered a middle initial (as in "John D. Doe").
Paragraph boundaries are inserted at every block-level HTML tag. Namely, those are (as taken from HTML 4 standard) ADDRESS, BLOCKQUOTE, CAPTION, CENTER, DD, DIV, DL, DT, H1, H2, H3, H4, H5, LI, MENU, OL, P, PRE, TABLE, TBODY, TD, TFOOT, TH, THEAD, TR, and UL.
Both sentences and paragraphs increment the keyword position counter by 1.
index_sp = 1
A list of in-field HTML/XML zones to index. Optional, default is empty (do not index zones). Introduced in version 2.0.1-beta.
Zones can be formally defined as follows. Everything between an opening and a matching closing tag is called a span, and the aggregate of all spans sharing the same tag name is called a zone. For instance, everything between the occurrences of <H1> and </H1> in the document field belongs to the H1 zone.
Zone indexing, enabled by index_zones
directive,
is an optional extension of the HTML stripper. So it will also
require that the stripper
is enabled (with html_strip = 1
). The value of the
index_zones
should be a comma-separated list of
those tag names and wildcards (ending with a star) that should
be indexed as zones.
Zones can nest and overlap arbitrarily. The only requirement is that every opening tag has a matching closing tag. You can also have an arbitrary number of both zones (as in unique zone names, such as H1) and spans (all the occurrences of those H1 tags) in a document. Once indexed, zones can then be used for matching with the ZONE operator, see Section 5.3, “Extended query syntax”.
index_zones = h*, th, title
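Once zones such as these are indexed, they can be targeted with the ZONE operator of the extended query syntax. A sketch (the keywords are illustrative) that restricts matching to the h1 and title zones:

```
ZONE:(h1,title) hello world
```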
Versions earlier than 2.1.1-beta only provided this feature for plain indexes; it is now available for RT indexes as well.
Minimum word length at which to enable stemming. Optional, default is 1 (stem everything). Introduced in version 0.9.9-rc1.
Stemmers are not perfect, and might sometimes produce undesired results.
For instance, running "gps" keyword through Porter stemmer for English
results in "gp", which is not really the intent. min_stemming_len
feature lets you suppress stemming based on the source word length,
ie. to avoid stemming too short words. Keywords that are shorter than
the given threshold will not be stemmed. Note that keywords that are
exactly as long as specified will be stemmed. So in order to avoid
stemming 3-character keywords, you should specify 4 for the value.
For finer-grained control, refer to the wordforms feature.
min_stemming_len = 4
Stopword files list (space separated). Optional, default is empty.
Stopwords are the words that will not be indexed. Typically you'd put most frequent words in the stopwords list because they do not add much value to search results but consume a lot of resources to process.
You can specify several file names, separated by spaces. All the files will be loaded. Stopwords file format is simple plain text. The encoding must match index encoding specified in charset_type. File data will be tokenized with respect to charset_table settings, so you can use the same separators as in the indexed data.
The stemmers will normally be applied when parsing stopwords file. That might however lead to undesired results. Starting with 2.1.1-beta, you can turn that off with stopwords_unstemmed.
Starting with version 2.1.1-beta small enough files are stored in the index header, see Section 11.2.13, “embedded_limit” for details.
While stopwords are not indexed, they still do affect the keyword positions. For instance, assume that "the" is a stopword, that document 1 contains the line "in office", and that document 2 contains "in the office". Searching for "in office" as for exact phrase will only return the first document, as expected, even though "the" in the second one is stopped. That behavior can be tweaked through the stopword_step directive.
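For reference, that position behavior is governed by the stopword_step directive mentioned above. A sketch of relaxing it so that the phrase "in office" would also match "in the office":

```
stopword_step = 0 # stopped words do not increment keyword positions
```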
Stopwords files can either be created manually, or semi-automatically. indexer provides a mode that creates a frequency dictionary of the index, sorted by the keyword frequency; see the --buildstops and --buildfreqs switches in Section 6.1, “indexer command reference”. Top keywords from that dictionary can usually be used as stopwords.
stopwords = /usr/local/sphinx/data/stopwords.txt stopwords = stopwords-ru.txt stopwords-en.txt
Word forms dictionary. Optional, default is empty.
Word forms are applied after tokenizing the incoming text by charset_table rules. They essentially let you replace one word with another. Normally, that would be used to bring different word forms to a single normal form (eg. to normalize all the variants such as "walks", "walked", "walking" to the normal form "walk"). It can also be used to implement stemming exceptions, because stemming is not applied to words found in the forms list.
Starting with version 2.1.1-beta small enough files are stored in the index header, see Section 11.2.13, “embedded_limit” for details.
Dictionaries are used to normalize incoming words both during indexing and searching. Therefore, to pick up changes in the wordforms file it's required to reindex and restart searchd.
Word forms support in Sphinx is designed to support big dictionaries well.
They moderately affect indexing speed: for instance, a dictionary with 1 million
entries slows down indexing about 1.5 times. Searching speed is not affected at all.
Additional RAM impact is roughly equal to the dictionary file size,
and dictionaries are shared across indexes: ie. if the very same 50 MB wordforms
file is specified for 10 different indexes, additional searchd
RAM usage will be about 50 MB.
Dictionary file should be in a simple plain text format. Each line should contain source and destination word forms, in exactly the same encoding as specified in charset_type, separated by a "greater than" sign. Rules from the charset_table will be applied when the file is loaded. So basically it's as case sensitive as your other full-text indexed data, ie. typically case insensitive. Here's a sample of the file contents:
walks > walk walked > walk walking > walk
There is a bundled spelldump utility that helps you create a dictionary file in the format Sphinx can read from source .dict and .aff dictionary files in ispell or MySpell format (as bundled with OpenOffice).
Starting with version 0.9.9-rc1, you can map several source words to a single destination word. Because the work happens on tokens, not the source text, differences in whitespace and markup are ignored.
Starting with version 2.1.1-beta, you can use "=>" instead of ">". Comments (starting with "#") are also allowed. Finally, if a line starts with a tilde ("~"), the wordform will be applied after morphology, instead of before.
core 2 duo > c2d e6600 > c2d core 2duo => c2d # Some people write '2duo' together...
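A hypothetical Python parser for this file format (a sketch that handles both separators and comments, but ignores the "~" post-morphology marker) could look like:

```python
def parse_wordforms(text):
    # Supports both ">" and "=>" separators and "#" comments,
    # mapping each source form to its destination form.
    mapping = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # strip comments
        if not line:
            continue
        if '=>' in line:
            src, dst = line.split('=>', 1)
        elif '>' in line:
            src, dst = line.split('>', 1)
        else:
            continue                            # not a mapping line
        mapping[src.strip()] = dst.strip()
    return mapping
```

Later definitions of the same source form would naturally override earlier ones, matching the multi-file override behavior described below.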
Notice however that the destination wordforms are still always interpreted as a single keyword! Having a mapping like "St John > Saint John" will result in not matching "St John" when searching for "Saint" or "John", because the destination keyword will be "Saint John" with a space character in it (and it is hardly possible to input a destination keyword with a space).
wordforms = /usr/local/sphinx/data/wordforms.txt wordforms = /usr/local/sphinx/data/alternateforms.txt wordforms = /usr/local/sphinx/private/dict*.txt
Starting with version 2.1.1-beta you can specify several files, not just one. Masks can be used as a pattern, and all matching files will be processed in simple ascending order. (If multi-byte codepages are used, and file names can include foreign characters, the resulting order may not be exactly alphabetic.) If the same wordform definition is found in several files, the last one is used, and it overrides previous definitions.
Embedded exceptions, wordforms, or stopwords file size limit. Optional, default is 16K. Added in version 2.1.1-beta.
Before 2.1.1-beta, the contents of exceptions, wordforms, or stopwords
files were always kept in the files. Only the file names were stored into
the index. Starting with 2.1.1-beta, indexer can either save the file name,
or embed the file contents directly into the index. Files sized under
embedded_limit
get stored into the index. For bigger files,
only the file names are stored. This also simplifies moving index files
to a different machine; you may get by just copying a single file.
With smaller files, such embedding reduces the number of the external
files on which the index depends, and helps maintenance. But at the same
time it makes no sense to embed a 100 MB wordforms dictionary into a tiny
delta index. So there needs to be a size threshold, and embedded_limit
is that threshold.
embedded_limit = 32K
Tokenizing exceptions file. Optional, default is empty.
Exceptions let you map one or more tokens (including tokens with characters that would normally be excluded) to a single keyword. They are similar to wordforms in that they also perform mapping, but have a number of important differences.
Starting with version 2.1.1-beta small enough files are stored in the index header, see Section 11.2.13, “embedded_limit” for details.
Short summary of the differences is as follows:
exceptions are case sensitive, wordforms are not;
exceptions can use special characters that are not in charset_table, wordforms fully obey charset_table;
exceptions can underperform on huge dictionaries, wordforms handle millions of entries well.
The expected file format is also plain text, with one line per exception, and the line format is as follows:
map-from-tokens => map-to-token
Example file:
AT & T => AT&T AT&T => AT&T Standarten Fuehrer => standartenfuhrer Standarten Fuhrer => standartenfuhrer MS Windows => ms windows Microsoft Windows => ms windows C++ => cplusplus c++ => cplusplus C plus plus => cplusplus
All tokens here are case sensitive: they will not be processed by charset_table rules. Thus, with the example exceptions file above, "At&t" text will be tokenized as two keywords "at" and "t", because of lowercase letters. On the other hand, "AT&T" will match exactly and produce single "AT&T" keyword.
Note that this map-to keyword is a) always interpreted as a single word, and b) both case and space sensitive! In our sample, the "ms windows" query will not match the document with "MS Windows" text. The query will be interpreted as a query for two keywords, "ms" and "windows". And what "MS Windows" gets mapped to is a single keyword "ms windows", with a space in the middle. On the other hand, "standartenfuhrer" will retrieve documents with "Standarten Fuhrer" or "Standarten Fuehrer" contents (capitalized exactly like this), or any capitalization variant of the keyword itself, eg. "staNdarTenfUhreR". (It won't catch "standarten fuhrer", however: this text does not match any of the listed exceptions because of case sensitivity, and gets indexed as two separate keywords.)
Whitespace in the map-from tokens list matters, but its amount does not. Any amount of whitespace in the map-from list will match any other amount of whitespace in the indexed document or query. For instance, the "AT & T" map-from tokens will match "AT    &    T" text, whatever the amount of space in both the map-from part and the indexed text. Such text will therefore be indexed as the special "AT&T" keyword, thanks to the very first entry from the sample.
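The whitespace rule can be illustrated with a small Python sketch (not Sphinx's tokenizer; charset_table handling is omitted, but the case sensitivity and whitespace collapsing match the description above):

```python
def normalize_ws(tokens_str):
    # Any amount of whitespace between tokens matches any other amount.
    return ' '.join(tokens_str.split())

def exception_matches(indexed_text, map_from):
    # Exceptions are case sensitive: no lowercasing is applied.
    return normalize_ws(indexed_text) == normalize_ws(map_from)
```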
Exceptions also let you capture special characters (that are exceptions from the general charset_table rules; hence the name). Assume that you generally do not want to treat '+' as a valid character, but still want to be able to search for some exceptions from this rule, such as 'C++'. The sample above will do just that, totally independent of which characters are in the table and which are not.
Exceptions are applied to raw incoming document and query data during indexing and searching respectively. Therefore, to pick up changes in the file it's required to reindex and restart searchd.
exceptions = /usr/local/sphinx/data/exceptions.txt
Minimum indexed word length. Optional, default is 1 (index everything).
Only those words that are not shorter than this minimum will be indexed. For instance, if min_word_len is 4, then 'the' won't be indexed, but 'they' will be.
min_word_len = 4
Character set encoding type. Optional, default is 'sbcs'. Known values are 'sbcs' and 'utf-8'.
Different encodings have different methods for mapping their internal characters codes into specific byte sequences. Two most common methods in use today are single-byte encoding and UTF-8. Their corresponding charset_type values are 'sbcs' (stands for Single Byte Character Set) and 'utf-8'. The selected encoding type will be used everywhere where the index is used: when indexing the data, when parsing the query against this index, when generating snippets, etc.
Note that while 'utf-8' implies that the decoded values must be treated as Unicode codepoint numbers, there's a family of 'sbcs' encodings that may in turn treat different byte values differently, and that should be properly reflected in your charset_table settings. For example, the same byte value of 224 (0xE0 hex) maps to different Russian letters depending on whether koi-8r or windows-1251 encoding is used.
charset_type = utf-8
Accepted characters table, with case folding rules. Optional, default value depends on charset_type value.
charset_table is the main workhorse of the Sphinx tokenizing process, ie. the process of extracting keywords from document text or query text. It controls what characters are accepted as valid and what are not, and how the accepted characters should be transformed (eg. should the case be removed or not).
You can think of charset_table as of a big table that has a mapping for each and every of 100K+ characters in Unicode (or as of a small 256-character table if you're using SBCS). By default, every character maps to 0, which means that it does not occur within keywords and should be treated as a separator. Once mentioned in the table, character is mapped to some other character (most frequently, either to itself or to a lowercase letter), and is treated as a valid keyword part.
The expected value format is a comma-separated list of mappings. The two simplest mappings respectively declare a character as valid, and map a single character to another single character. But specifying the whole table in such a form would result in bloated and barely manageable specifications. So there are several syntax shortcuts that let you map ranges of characters at once. The complete list is as follows:
Single char mapping, declares source char 'A' as allowed to occur within keywords and maps it to destination char 'a' (but does not declare 'a' as allowed).
Range mapping, declares all chars in source range as allowed and maps them to the destination range. Does not declare destination range as allowed. Also checks ranges' lengths (the lengths must be equal).
Stray char mapping, declares a character as allowed and maps it to itself. Equivalent to a->a single char mapping.
Stray range mapping, declares all characters in range as allowed and maps them to themselves. Equivalent to a..z->a..z range mapping.
Checkerboard range map. Maps every pair of chars to the second char. More formally, declares odd characters in range as allowed and maps them to the even ones; also declares even characters as allowed and maps them to themselves. For instance, A..Z/2 is equivalent to A->B, B->B, C->D, D->D, ..., Y->Z, Z->Z. This mapping shortcut is helpful for a number of Unicode blocks where uppercase and lowercase letters go in such interleaved order instead of contiguous chunks.
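The checkerboard shortcut expansion can be sketched as follows (illustrative Python, not Sphinx code):

```python
def checkerboard(first, last):
    # Expand "first..last/2": each odd (source) char in the range maps
    # to the next, even char; each even char maps to itself.
    # E.g. A..Z/2 yields A->B, B->B, C->D, D->D, ..., Y->Z, Z->Z.
    mapping = {}
    for code in range(ord(first), ord(last) + 1, 2):
        odd, even = chr(code), chr(code + 1)
        mapping[odd] = even
        mapping[even] = even
    return mapping
```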
Control characters with codes from 0 to 31 are always treated as separators. Characters with codes 32 to 127, ie. 7-bit ASCII characters, can be used in the mappings as is. To avoid configuration file encoding issues, 8-bit ASCII characters and Unicode characters must be specified in U+xxx form, where 'xxx' is hexadecimal codepoint number. This form can also be used for 7-bit ASCII characters to encode special ones: eg. use U+20 to encode space, U+2E to encode dot, U+2C to encode comma.
# 'sbcs' defaults for English and Russian charset_table = 0..9, A..Z->a..z, _, a..z, \ U+A8->U+B8, U+B8, U+C0..U+DF->U+E0..U+FF, U+E0..U+FF # 'utf-8' defaults for English and Russian charset_table = 0..9, A..Z->a..z, _, a..z, \ U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
Ignored characters list. Optional, default is empty.
Useful in the cases when some characters, such as soft hyphenation mark (U+00AD), should be not just treated as separators but rather fully ignored. For example, if '-' is simply not in the charset_table, "abc-def" text will be indexed as "abc" and "def" keywords. On the contrary, if '-' is added to ignore_chars list, the same text will be indexed as a single "abcdef" keyword.
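The difference between a separator and an ignored character can be sketched in Python (a simplified model that accepts only [0-9a-z], unlike a real charset_table):

```python
import re

def tokenize(text, ignore_chars=frozenset()):
    # Ignored characters vanish entirely, so the surrounding pieces fuse
    # into one keyword; any other character outside the accepted set
    # acts as a separator instead.
    cleaned = ''.join(ch for ch in text if ch not in ignore_chars)
    return re.findall(r'[0-9a-z]+', cleaned.lower())
```

With '-' merely absent from the table, "abc-def" splits in two; with '-' ignored, it fuses into "abcdef".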
The syntax is the same as for charset_table, but it's only allowed to declare characters, and not allowed to map them. Also, the ignored characters must not be present in charset_table.
ignore_chars = U+AD
Minimum word prefix length to index. Optional, default is 0 (do not index prefixes).
Prefix indexing lets you implement wildcard searching by 'wordstart*' wildcards (refer to the enable_star option for details on wildcard syntax). When the minimum prefix length is set to a positive number, indexer will index all the possible keyword prefixes (ie. word beginnings) in addition to the keywords themselves. Prefixes that are too short (below the minimum allowed length) will not be indexed.
For instance, indexing a keyword "example" with min_prefix_len=3 will result in indexing the "exa", "exam", "examp", "exampl" prefixes along with the word itself. Searches against such an index for "exam" will match documents that contain "example", even if they do not contain "exam" by itself. However, indexing prefixes will make the index grow significantly (because of many more indexed keywords), and will degrade both indexing and searching times.
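The set of indexed prefixes can be sketched as (illustrative Python):

```python
def indexed_prefixes(word, min_prefix_len):
    # All prefixes no shorter than min_prefix_len, plus the word itself.
    return [word[:n] for n in range(min_prefix_len, len(word))] + [word]
```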
There's no automatic way to rank perfect word matches higher in a prefix index, but there are a number of tricks to achieve that. First, you can set up two indexes, one with prefix indexing and one without it, search through both, and use the SetIndexWeights() call to combine weights. Second, you can enable star-syntax and rewrite your extended-mode queries:
# in sphinx.conf enable_star = 1 // in query $cl->Query ( "( keyword | keyword* ) other keywords" );
min_prefix_len = 3
Minimum infix prefix length to index. Optional, default is 0 (do not index infixes).
Infix indexing lets you implement wildcard searching by 'start*', '*end', and '*middle*' wildcards (refer to the enable_star option for details on wildcard syntax). When the minimum infix length is set to a positive number, indexer will index all the possible keyword infixes (ie. substrings) in addition to the keywords themselves. Infixes that are too short (below the minimum allowed length) will not be indexed. For instance, indexing a keyword "test" with min_infix_len=2 will result in indexing the "te", "es", "st", "tes", "est" infixes along with the word itself. Searches against such an index for "es" will match documents that contain "test", even if they do not contain "es" by itself. However, indexing infixes will make the index grow significantly (because of many more indexed keywords), and will degrade both indexing and searching times.
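The set of indexed infixes can be sketched as (illustrative Python):

```python
def indexed_infixes(word, min_infix_len):
    # All substrings no shorter than min_infix_len and shorter than the
    # full word, plus the word itself (which is indexed anyway).
    subs = set()
    for length in range(min_infix_len, len(word)):
        for start in range(len(word) - length + 1):
            subs.add(word[start:start + length])
    return sorted(subs) + [word]
```

Note how the number of substrings grows quickly with word length, which is where the index-size blowup comes from.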
There's no automatic way to rank perfect word matches higher in an infix index, but the same tricks as with prefix indexes can be applied.
min_infix_len = 3
Maximum substring (either prefix or infix) length to index. Optional, default is 0 (do not limit indexed substrings). Applies to dict=crc only.
By default, substring (either prefix or infix) indexing in the
dict=crc mode will index all
the possible substrings as separate keywords. That might result
in an overly large index. So the max_substring_len
directive lets you limit the impact of substring indexing
by skipping too-long substrings (which, chances are, will never
get searched for anyway).
For example, in a test index of 10,000 blog posts, limiting the max substring length saved 10-15% on the index size, depending on the settings.
There is no performance impact associated with substring length when using dict=keywords mode, so this directive is not applicable and intentionally forbidden in that case. If required, you can still limit the length of a substring that you search for in the application code.
max_substring_len = 12
The list of full-text fields to limit prefix indexing to. Optional, default is empty (index all fields in prefix mode).
Because prefix indexing impacts both indexing and searching performance, it might be desired to limit it to specific full-text fields only: for instance, to provide prefix searching through URLs, but not through page contents. prefix_fields specifies what fields will be prefix-indexed; all other fields will be indexed in normal mode. The value format is a comma-separated list of field names.
prefix_fields = url, domain
The list of full-text fields to limit infix indexing to. Optional, default is empty (index all fields in infix mode).
Similar to prefix_fields, but lets you limit infix-indexing to given fields.
infix_fields = url, domain
Enables star-syntax (or wildcard syntax) when searching through prefix/infix indexes. Optional, default is 0 (do not use wildcard syntax), for compatibility with 0.9.7. Known values are 0 and 1.
This feature enables "star-syntax", or wildcard syntax, when searching through indexes which were created with prefix or infix indexing enabled. It only affects searching; so it can be changed without reindexing by simply restarting searchd.
The default value is 0, which means that star-syntax is disabled and all keywords are treated as prefixes or infixes respectively, depending on the indexing-time min_prefix_len and min_infix_len settings. The value of 1 means that the star ('*') can be used at the start and/or the end of the keyword. The star will match zero or more characters.
For example, assume that the index was built with infixes and that enable_star is 1. Searching should work as follows:
"abcdef" query will match only those documents that contain the exact "abcdef" word in them.
"abc*" query will match those documents that contain any words starting with "abc" (including the documents which contain the exact "abc" word only);
"*cde*" query will match those documents that contain any words which have "cde" characters in any part of the word (including the documents which contain the exact "cde" word only).
"*def" query will match those documents that contain any words ending with "def" (including the documents that contain the exact "def" word only).
enable_star = 1
N-gram lengths for N-gram indexing. Optional, default is 0 (disable n-gram indexing). Known values are 0 and 1 (other lengths to be implemented).
N-grams provide basic CJK (Chinese, Japanese, Korean) support for unsegmented texts. The issue with CJK searching is that there could be no clear separators between the words. Ideally, the texts would be filtered through a special program called segmenter that would insert separators in proper locations. However, segmenters are slow and error prone, and it's common to index contiguous groups of N characters, or n-grams, instead.
When this feature is enabled, streams of CJK characters are indexed as N-grams. For example, if the incoming text is "ABCDEF" (where A to F represent some CJK characters) and length is 1, it will be indexed as if it were "A B C D E F". (With length equal to 2, it would produce "AB BC CD DE EF"; but only 1 is supported at the moment.) Only those characters that are listed in the ngram_chars table will be split this way; others will not be affected.
Note that if the search query is segmented, ie. there are separators between individual words, then wrapping the words in quotes and using extended mode will result in proper matches being found even if the text was not segmented. For instance, assume that the original query is BC DEF. After wrapping in quotes on the application side, it should look like "BC" "DEF" (with quotes). This query will be passed to Sphinx and internally split into 1-grams too, resulting in "B C" "D E F" query, still with quotes that are the phrase matching operator. And it will match the text even though there were no separators in the text.
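The application-side quoting step can be as simple as (illustrative Python):

```python
def quote_segments(query):
    # Wrap each pre-segmented word in quotes so that, after internal
    # 1-gram splitting, the phrase operator glues its characters back
    # together in order.
    return ' '.join('"%s"' % word for word in query.split())
```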
Even if the search query is not segmented, Sphinx should still produce good results, thanks to phrase based ranking: it will pull closer phrase matches (which in case of N-gram CJK words can mean closer multi-character word matches) to the top.
ngram_len = 1
N-gram characters list. Optional, default is empty.
To be used in conjunction with ngram_len, this list defines characters, sequences of which are subject to N-gram extraction. Words comprised of other characters will not be affected by the N-gram indexing feature. The value format is identical to charset_table.
ngram_chars = U+3000..U+2FA1F
Phrase boundary characters list. Optional, default is empty.
This list controls what characters will be treated as phrase boundaries, in order to adjust word positions and enable phrase-level search emulation through proximity search. The syntax is similar to charset_table. Mappings are not allowed and the boundary characters must not overlap with anything else.
On phrase boundary, additional word position increment (specified by phrase_boundary_step) will be added to current word position. This enables phrase-level searching through proximity queries: words in different phrases will be guaranteed to be more than phrase_boundary_step distance away from each other; so proximity search within that distance will be equivalent to phrase-level search.
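The position arithmetic can be sketched in Python (a simplified model, not Sphinx internals; it ignores the followed-by-separator condition described below):

```python
def positions_with_boundaries(tokens, boundary_chars, step):
    # Each phrase boundary bumps the position counter by `step` on top
    # of the regular per-keyword increment, pushing later phrases far
    # away in position space.
    result = []
    pos = 0
    for tok in tokens:
        if tok in boundary_chars:
            pos += step
        else:
            pos += 1
            result.append((tok, pos))
    return result
```

Words separated by a boundary end up more than `step` positions apart, so a proximity search within `step` behaves like a phrase-level search.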
Phrase boundary condition will be raised if and only if such character is followed by a separator; this is to avoid abbreviations such as S.T.A.L.K.E.R or URLs being treated as several phrases.
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis
Phrase boundary word position increment. Optional, default is 0.
On phrase boundary, current word position will be additionally incremented by this number. See phrase_boundary for details.
phrase_boundary_step = 100
Whether to strip HTML markup from incoming full-text data. Optional, default is 0. Known values are 0 (disable stripping) and 1 (enable stripping).
Both HTML tags and entities are considered markup and get processed.
HTML tags are removed, while their contents (i.e., everything between <P> and </P>) are left intact by default. You can choose to keep and index attributes of the tags (e.g., HREF attribute in an A tag, or ALT in an IMG one). Several well-known inline tags are completely removed; all other tags are treated as block level and replaced with whitespace. For example, 'te<B>st</B>' text will be indexed as a single keyword 'test'; however, 'te<P>st</P>' will be indexed as two keywords 'te' and 'st'. Known inline tags are as follows: A, B, I, S, U, BASEFONT, BIG, EM, FONT, IMG, LABEL, SMALL, SPAN, STRIKE, STRONG, SUB, SUP, TT.
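A rough Python model of the inline-vs-block distinction (a simplification; the real stripper also handles attributes, entities, comments, and malformed markup):

```python
import re

INLINE_TAGS = {"a", "b", "i", "s", "u", "basefont", "big", "em", "font",
               "img", "label", "small", "span", "strike", "strong",
               "sub", "sup", "tt"}

def strip_tags(html):
    # Inline tags are removed outright, fusing the text around them;
    # all other tags are treated as block-level and become a space.
    def repl(match):
        return '' if match.group(1).lower() in INLINE_TAGS else ' '
    return re.sub(r'</?([a-zA-Z]+)[^>]*>', repl, html)
```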
HTML entities get decoded and replaced with corresponding UTF-8 characters. The stripper supports both numeric forms (such as &#239;) and text forms (such as &oacute; or &nbsp;). All entities as specified by the HTML4 standard are supported.
Stripping does not work with xmlpipe source type (it's suggested to upgrade to xmlpipe2 anyway). It should work with properly formed HTML and XHTML, but, just as most browsers, may produce unexpected results on malformed input (such as HTML with stray <'s or unclosed >'s).
Only the tags themselves, and also HTML comments, are stripped. To strip the contents of the tags too (eg. to strip embedded scripts), see html_remove_elements option. There are no restrictions on tag names; ie. everything that looks like a valid tag start, or end, or a comment will be stripped.
html_strip = 1
A list of markup attributes to index when stripping HTML. Optional, default is empty (do not index markup attributes).
Specifies HTML markup attributes whose contents should be retained and indexed even though other HTML markup is stripped. The format is per-tag enumeration of indexable attributes, as shown in the example below.
html_index_attrs = img=alt,title; a=title;
A list of HTML elements for which to strip contents along with the elements themselves. Optional, default is empty string (do not strip contents of any elements).
This feature lets you strip element contents, ie. everything between the opening and the closing tags. It is useful to remove embedded scripts, CSS, etc. The short tag form for empty elements (ie. <br />) is properly supported; the text that follows such a tag will not be removed.
The value is a comma-separated list of element (tag) names whose contents should be removed. Tag names are case insensitive.
html_remove_elements = style, script
Local index declaration in the distributed index. Multi-value, optional, default is empty.
This setting is used to declare the local indexes that will be searched when the given distributed index is searched. Many local indexes can be declared per distributed index, and any local index can also be mentioned several times in different distributed indexes.
Note that by default all local indexes will be searched sequentially, utilizing only 1 CPU or core. To parallelize processing of the local parts in the distributed index, you should use the dist_threads directive, see Section 11.4.29, “dist_threads”.
Before dist_threads, there was also a legacy solution: configuring searchd to query itself instead of using local indexes (refer to Section 11.2.33, “agent” for the details). However, that creates redundant CPU and network load, and dist_threads is now strongly suggested instead.
local = chunk1 local = chunk2
Remote agent declaration in the distributed index. Multi-value, optional, default is empty.
The agent directive declares remote agents that are searched every time the enclosing distributed index is searched. The agents are, essentially, pointers to networked indexes. Prior to version 2.1.1-beta, the value format was:
agent = address:index-list
Starting with 2.1.1-beta, the value can additionally specify multiple alternatives (agent mirrors) for either the address only, or the address and index list:
agent = address1 [ | address2 [...] ]:index-list agent = address1:index-list [ | address2:index-list [...] ]
In both cases the address specification must be one of the following:
address = hostname:port # eg. server2:9312 address = /absolute/unix/socket/path # eg. /var/run/sphinx2.sock
Where hostname is the remote host name, port is the remote TCP port number, index-list is a comma-separated list of index names, and square braces [] designate an optional clause.
In other words, you can point every single agent to one or more remote indexes, residing on one or more networked servers. There are absolutely no restrictions on the pointers. To point out a couple of important things: the host can be localhost, and the remote index can in turn be a distributed index; all of that is legal. That enables a bunch of very different usage modes:
sharding over multiple agent servers, and creating an arbitrary cluster topology;
sharding over multiple agent servers, mirrored for HA/LB (High Availability and Load Balancing) purposes (starting with 2.1.1-beta);
sharding within localhost, to utilize multiple cores (historical and not recommended in versions 1.x and above, use multiple local indexes and dist_threads directive instead);
All agents are searched in parallel. An index list is passed verbatim to the remote agent. How exactly that list is searched within the agent (ie. sequentially or in parallel too) depends solely on the agent configuration (ie. dist_threads directive). Master has no remote control over that.
# config on box1
# sharding an index over 3 servers
agent = box2:9312:chunk2
agent = box3:9312:chunk3

# config on box2
# sharding an index over 3 servers
agent = box1:9312:chunk1
agent = box3:9312:chunk3

# config on box3
# sharding an index over 3 servers
agent = box1:9312:chunk1
agent = box2:9312:chunk2
New syntax added in 2.1.1-beta lets you define so-called agent mirrors that can be used interchangeably when processing a search query. Master server keeps track of mirror status (alive or dead) and response times, and does automatic failover and load balancing based on that. For example, this line:
agent = box1:9312|box2:9312|box3:9312:chunk2
Declares that box1:9312, box2:9312, and box3:9312 all have an index called chunk2, and can be used as interchangeable mirrors. If any one of those servers goes down, the queries will be distributed between the remaining two. When it gets back up, the master will detect that and begin routing queries to all three boxes again.
Another way to define the mirrors is to explicitly specify the index list for every mirror:
agent = box1:9312:box1chunk2|box2:9312:box2chunk2
This works essentially the same as the previous example, but different index names will be used when querying different servers: box1chunk2 when querying box1:9312, and box2chunk2 when querying box2:9312.
By default, all queries are routed to the best of the mirrors. The best one is picked based on the recent statistics, as controlled by the ha_period_karma config directive. The master stores a number of metrics (total query count, error count, response time, etc) recently observed for every agent. It groups those by time spans, and the karma is that time span length. The best agent mirror is then determined dynamically based on the last 2 such time spans. The specific algorithm used to pick a mirror can be configured with the ha_strategy directive.
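A simplified sketch of the mirror selection idea (illustrative Python only; the actual ha_strategy algorithms weigh more metrics than this):

```python
def pick_mirror(stats):
    # stats: {mirror: (alive, avg_response_ms)} over recent karma spans.
    # Route to the fastest mirror that is currently alive; dead mirrors
    # are excluded until they come back up.
    alive = {mirror: ms for mirror, (ok, ms) in stats.items() if ok}
    return min(alive, key=alive.get) if alive else None
```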
The karma period is in seconds and defaults to 60 seconds. The master stores up to 15 karma spans with per-agent statistics for instrumentation purposes (see the SHOW AGENT STATUS statement). However, only the last 2 of those spans are ever used for HA/LB logic.
When there are no queries, the master sends a regular ping command every ha_ping_interval milliseconds in order to collect statistics and at least check whether the remote host is still alive. ha_ping_interval defaults to 1000 msec. Setting it to 0 disables pings, and statistics will only be accumulated based on actual queries.
# sharding index over 4 servers total # in just 2 chunks but with 2 failover mirrors for each chunk # box1, box2 carry chunk1 as local # box3, box4 carry chunk2 as local # config on box1, box2 agent = box3:9312|box4:9312:chunk2 # config on box3, box4 agent = box1:9312|box2:9312:chunk1
Persistently connected remote agent declaration. Multi-value, optional, default is empty. Introduced in version 2.1.1-beta.
The agent_persistent directive syntax matches that of the agent directive. The only difference is that the master will not open a new connection to the agent for every query and then close it. Rather, it will keep a connection open and attempt to reuse it for subsequent queries. The maximum number of such persistent connections per agent host is limited by the persistent_connections_limit option of the searchd section.
Note that you have to set persistent_connections_limit to something greater than 0 if you want to use persistent agent connections. Otherwise, when it is not defined, zero persistent connections are assumed, and agent_persistent acts exactly like plain agent.
Persistent master-agent connections reduce TCP port pressure and save on connection handshakes. As of this writing, they are supported only in workers=threads mode. In other modes, simple non-persistent connections (i.e., one connection per operation) will be used, and a warning will show up in the console.
agent_persistent = remotebox:9312:index2
Remote blackhole agent declaration in the distributed index. Multi-value, optional, default is empty. Introduced in version 0.9.9-rc1.
agent_blackhole lets you fire-and-forget queries to remote agents. That is useful for debugging (or just testing) production clusters: you can set up a separate debugging/testing searchd instance, and forward the requests to this instance from your production master (aggregator) instance without interfering with production work. The master searchd will attempt to connect to and query the blackhole agent normally, but it will neither wait for nor process any responses. Also, all network errors on blackhole agents will be ignored. The value format is completely identical to the regular agent directive.
agent_blackhole = testbox:9312:testindex1,testindex2
Remote agent connection timeout, in milliseconds. Optional, default is 1000 (ie. 1 second).
When connecting to remote agents, searchd will wait at most this much time for the connect() call to complete successfully. If the timeout is reached but connect() does not complete, and retries are enabled, a retry will be initiated.
agent_connect_timeout = 300
Remote agent query timeout, in milliseconds. Optional, default is 3000 (ie. 3 seconds). Added in version 2.1.1-beta.
After connecting, searchd will wait at most this much time for remote queries to complete. This timeout is fully separate from the connection timeout; so the maximum possible delay caused by a remote agent equals the sum of agent_connect_timeout and agent_query_timeout. Queries will not be retried if this timeout is reached; a warning will be produced instead.
agent_query_timeout = 10000 # our query can be long, allow up to 10 sec
Whether to pre-open all index files, or open them per each query. Optional, default is 0 (do not preopen).
This option tells searchd that it should pre-open all index files on startup (or rotation) and keep them open while it runs. Currently, the default mode is not to pre-open the files (this may change in the future). Preopened indexes take a few (currently 2) file descriptors per index. However, they save on per-query open() calls; and they are also invulnerable to subtle race conditions that may happen during index rotation under high load. On the other hand, when serving many indexes (100s to 1000s), it still might be desirable to open them on a per-query basis in order to save file descriptors. This directive does not affect indexer in any way, it only affects searchd.
preopen = 1
Whether to keep the dictionary file (.spi) for this index on disk, or precache it in RAM. Optional, default is 0 (precache in RAM). Introduced in version 0.9.9-rc1.
The dictionary (.spi) can be kept either in RAM or on disk. The default is to fully cache it in RAM. That improves performance, but might cause too much RAM pressure, especially if prefixes or infixes were used. Enabling ondisk_dict results in 1 additional disk IO per keyword per query, but reduces the memory footprint. This directive does not affect indexer in any way, it only affects searchd.
ondisk_dict = 1
Whether to enable in-place index inversion. Optional, default is 0 (use separate temporary files). Introduced in version 0.9.9-rc1.
inplace_enable greatly reduces the indexing disk footprint, at the cost of slightly slower indexing (it uses around 2x less disk, but yields around 90-95% of the original performance). Indexing involves two major phases. The first phase collects, processes, and partially sorts documents by keyword, and writes the intermediate result to temporary files (.tmp*). The second phase fully sorts the documents, and creates the final index files. Thus, rebuilding a production index on the fly involves around 3x peak disk footprint: the 1st copy for the intermediate temporary files, the 2nd for the newly constructed copy, and the 3rd for the old index that will be serving production queries in the meantime. (Intermediate data is comparable in size to the final index.) That might be too much disk footprint for big data collections, and inplace_enable lets you reduce it. When enabled, it reuses the temporary files, outputs the final data back to them, and renames them on completion. However, this might require additional temporary data chunk relocation, which is where the performance impact comes from. This directive does not affect searchd in any way, it only affects indexer.
inplace_enable = 1
In-place inversion fine-tuning option. Controls preallocated hitlist gap size. Optional, default is 0. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
inplace_hit_gap = 1M
In-place inversion fine-tuning option. Controls preallocated docinfo gap size. Optional, default is 0. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
inplace_docinfo_gap = 1M
In-place inversion fine-tuning option. Controls relocation buffer size within indexing memory arena. Optional, default is 0.1. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
inplace_reloc_factor = 0.1
In-place inversion fine-tuning option. Controls in-place write buffer size within indexing memory arena. Optional, default is 0.1. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
inplace_write_factor = 0.1
Whether to index the original keywords along with the stemmed/remapped versions. Optional, default is 0 (do not index). Introduced in version 0.9.9-rc1.
When enabled, index_exact_words forces indexer to put the raw keywords in the index along with the stemmed versions. That, in turn, enables the exact form operator in the query language to work. This impacts the index size and the indexing time. However, searching performance is not impacted at all.
index_exact_words = 1
Position increment on overshort (less than min_word_len) keywords. Optional, allowed values are 0 and 1, default is 1. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
overshort_step = 1
Position increment on stopwords. Optional, allowed values are 0 and 1, default is 1. Introduced in version 0.9.9-rc1.
This directive does not affect searchd in any way, it only affects indexer.
stopword_step = 1
Hitless words list. Optional, allowed values are 'all', or a list file name. Introduced in version 1.10-beta.
By default, the Sphinx full-text index stores not only a list of matching documents for every given keyword, but also a list of its in-document positions (aka hitlist). Hitlists enable phrase, proximity, strict order and other advanced types of searching, as well as phrase proximity ranking. However, hitlists for specific frequent keywords (that can not be stopped for some reason despite being frequent) can get huge and thus slow to process while querying. Also, in some cases we might only care about boolean keyword matching, and never need position-based searching operators (such as phrase matching) nor phrase ranking.
hitless_words lets you create indexes that either do not have positional information (hitlists) at all, or skip it for specific keywords.
A hitless index will generally use less space than the respective regular index (about 1.5x smaller can be expected). Both indexing and searching should be faster, at the cost of missing positional query and ranking support. When searching, positional queries (eg. phrase queries) will be automatically converted to the respective non-positional (document-level) or combined queries. For instance, if keywords "hello" and "world" are hitless, the "hello world" phrase query will be converted to the (hello & world) bag-of-words query, matching all documents that mention both of the keywords, but not necessarily as the exact phrase. And if, in addition, keywords "simon" and "says" are not hitless, the "simon says hello world" query will be converted to ("simon says" & hello & world), matching all documents that contain "hello" and "world" anywhere in the document, and also "simon says" as an exact phrase.
hitless_words = all
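The phrase-to-bag-of-words conversion described above can be sketched in a few lines. This is an illustrative model, not Sphinx source code; degrade_phrase and its SphinxQL-like output format are made up for the example.

```python
def degrade_phrase(words, hitless):
    # Keep runs of non-hitless keywords as exact sub-phrases,
    # and demote hitless keywords to plain document-level terms.
    parts, run = [], []

    def flush():
        if run:
            parts.append('"%s"' % " ".join(run) if len(run) > 1 else run[0])
            run.clear()

    for w in words:
        if w in hitless:
            flush()
            parts.append(w)
        else:
            run.append(w)
    flush()
    return "(" + " & ".join(parts) + ")"

print(degrade_phrase(["hello", "world"], {"hello", "world"}))
# -> (hello & world)
print(degrade_phrase(["simon", "says", "hello", "world"], {"hello", "world"}))
# -> ("simon says" & hello & world)
```

Note how the non-hitless run "simon says" survives as an exact phrase while the hitless keywords become plain AND terms, matching the example in the text.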
Expand keywords with exact forms and/or stars when possible. Optional, default is 0 (do not expand keywords). Introduced in version 1.10-beta.
Queries against indexes with the expand_keywords feature enabled are internally expanded as follows. If the index was built with prefix or infix indexing enabled, every keyword gets internally replaced with a disjunction of the keyword itself and a respective prefix or infix (the keyword with stars). If the index was built with both stemming and index_exact_words enabled, the exact form is also added. Here's an example that shows how internal expansion works when all of the above (infixes, stemming, and exact words) are combined:
running -> ( running | *running* | =running )
Expanded queries take naturally longer to complete, but can possibly improve the search quality, as the documents with exact form matches should be ranked generally higher than documents with stemmed or infix matches.
Note that the existing query syntax does not allow emulating this kind of expansion, because internal expansion works on the keyword level and expands keywords within phrase or quorum operators too (which is not possible through the query syntax).
This directive does not affect indexer in any way, it only affects searchd.
expand_keywords = 1
Blended characters list. Optional, default is empty. Introduced in version 1.10-beta.
Blended characters are indexed both as separators and valid characters. For instance, assume that & is configured as blended and AT&T occurs in an indexed document. Three different keywords will get indexed, namely "at&t", treating blended characters as valid, plus "at" and "t", treating them as separators.
Positions for tokens obtained by replacing blended characters with whitespace are assigned as usual, so regular keywords will be indexed just as if there was no blend_chars specified at all. An additional token that mixes blended and non-blended characters will be put at the starting position. For instance, if "AT&T company" occurs at the very beginning of a text field, "at" will be given position 1, "t" position 2, "company" position 3, and "AT&T" will also be given position 1 ("blending" with the opening regular keyword). Thus, querying for either AT&T or just AT will match that document, and querying for "AT T" as a phrase will also match it. Last but not least, a phrase query for "AT&T company" will also match it, despite the position gap ("at&t" sits at position 1 and "company" at position 3).
Blended characters can overlap with special characters used in the query syntax (think of T-Mobile or @twitter). Where possible, the query parser will automatically handle a blended character as blended. For instance, "hello @twitter" within quotes (a phrase operator) would handle the @-sign as blended, because the @-syntax for the field operator is not allowed within phrases. Otherwise, the character would be handled as an operator. So you might want to escape the keywords.
Starting with version 2.0.1-beta, blended characters can be remapped, so that multiple different blended characters could be normalized into just one base form. This is useful when indexing multiple alternative Unicode codepoints with equivalent glyphs.
blend_chars = +, &, U+23
blend_chars = +, &->+ # 2.0.1 and above
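The position assignment described above can be modeled with a short sketch. blend_tokenize is a hypothetical helper, not part of Sphinx; it only illustrates the default blending behavior (whole mixed tokens, no trimming).

```python
import re

def blend_tokenize(text, blended):
    # Emit regular sub-tokens (blended chars treated as separators) at
    # consecutive positions, plus the mixed token at its starting position.
    tokens, pos = [], 0
    for chunk in text.lower().split():
        parts = [p for p in re.split("[" + re.escape(blended) + "]", chunk) if p]
        start = pos + 1
        for p in parts:
            pos += 1
            tokens.append((p, pos))
        if parts and any(c in blended for c in chunk):
            tokens.append((chunk, start))  # e.g. "at&t" blends at position 1
    return tokens

print(blend_tokenize("AT&T company", "&"))
# -> [('at', 1), ('t', 2), ('at&t', 1), ('company', 3)]
```

The output matches the manual's example: "at" and "t" take positions 1 and 2, "company" takes 3, and the mixed "at&t" token shares position 1 with the opening regular keyword.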
Blended tokens indexing mode. Optional, default is trim_none. Introduced in version 2.0.1-beta.
By default, tokens that mix blended and non-blended characters get indexed in their entirety. For instance, when both the at-sign and the exclamation mark are in blend_chars, "@dude!" will result in two tokens being indexed: "@dude!" (with all the blended characters) and "dude" (without any). Therefore an "@dude" query will not match it. The blend_mode directive adds flexibility to this indexing behavior. It takes a comma-separated list of options.
blend_mode = option [, option [, ...]]
option = trim_none | trim_head | trim_tail | trim_both | skip_pure
Options specify token indexing variants. If multiple options are specified, multiple variants of the same token will be indexed. Regular keywords (resulting from that token by replacing blended characters with whitespace) are always indexed.
trim_none, index the entire token.
trim_head, trim heading blended characters, and index the resulting token.
trim_tail, trim trailing blended characters, and index the resulting token.
trim_both, trim both heading and trailing blended characters, and index the resulting token.
skip_pure, do not index the token if it's purely blended, that is, consists of blended characters only.
Returning to the "@dude!" example above, setting blend_mode = trim_head,
trim_tail
will result in two tokens being indexed, "@dude" and "dude!".
In this particular example, trim_both
would have no effect,
because trimming both blended characters results in "dude" which is already
indexed as a regular keyword. Indexing "@U.S.A." with trim_both
(and assuming that dot is blended two) would result in "U.S.A" being indexed.
Last but not least, skip_pure
enables you to fully ignore
sequences of blended characters only. For example, "one @@@ two" would be
indexed exactly as "one two", and match that as a phrase. That is not the case
by default because a fully blended token gets indexed and offsets the second
keyword position.
The default behavior is to index the entire token, equivalent to blend_mode = trim_none.
blend_mode = trim_tail, skip_pure
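The trimming variants can be sketched like this. blend_variants is a made-up illustration of the rules above, not the actual tokenizer; it only computes the extra token variants for a single mixed token.

```python
def blend_variants(token, blended, modes):
    # Return the set of extra token variants indexed for one mixed token.
    if "skip_pure" in modes and all(c in blended for c in token):
        return set()  # purely blended token: index nothing at all
    out = set()
    for mode in modes:
        t = token
        if mode in ("trim_head", "trim_both"):
            t = t.lstrip(blended)   # strip leading blended characters
        if mode in ("trim_tail", "trim_both"):
            t = t.rstrip(blended)   # strip trailing blended characters
        if mode != "skip_pure" and t:
            out.add(t)
    return out

print(blend_variants("@dude!", "@!", ["trim_head", "trim_tail"]))
# -> {'dude!', '@dude'} (in some order)
```

As in the manual's example, trim_head and trim_tail together yield "dude!" and "@dude", while trim_both would collapse to "dude", which the regular tokenizer indexes anyway.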
RAM chunk size limit. Optional, default is empty. Introduced in version 1.10-beta.
RT index keeps some data in memory (so-called RAM chunk) and also maintains a number of on-disk indexes (so-called disk chunks). This directive lets you control the RAM chunk size. Once there's too much data to keep in RAM, RT index will flush it to disk, activate a newly created disk chunk, and reset the RAM chunk.
The limit is pretty strict; RT index should never allocate more memory than it's limited to. The memory is not preallocated either, hence, specifying 512 MB limit and only inserting 3 MB of data should result in allocating 3 MB, not 512 MB.
rt_mem_limit = 512M
Full-text field declaration. Multi-value, mandatory. Introduced in version 1.10-beta.
Full-text fields to be indexed are declared using the rt_field directive. The names must be unique. The order is preserved; so field values in INSERT statements without an explicit list of inserted columns will have to be in the same order as configured.
rt_field = author
rt_field = title
rt_field = content
Unsigned integer attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Declares an unsigned 32-bit attribute. Introduced in version 1.10-beta.
rt_attr_uint = gid
Boolean attribute declaration. Multi-value (there might be multiple attributes declared), optional. Declares a 1-bit unsigned integer attribute. Introduced in version 2.1.2-release.
rt_attr_bool = available
BIGINT attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Declares a signed 64-bit attribute. Introduced in version 1.10-beta.
rt_attr_bigint = guid
Floating point attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Declares a single precision, 32-bit IEEE 754 format float attribute. Introduced in version 1.10-beta.
rt_attr_float = gpa
Multi-valued attribute (MVA) declaration. Declares the UNSIGNED INTEGER (unsigned 32-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only.
rt_attr_multi = my_tags
Multi-valued attribute (MVA) declaration. Declares the BIGINT (signed 64-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only.
rt_attr_multi_64 = my_wide_tags
Timestamp attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Introduced in version 1.10-beta.
rt_attr_timestamp = date_added
String attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Introduced in version 1.10-beta.
rt_attr_string = author
JSON attribute declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Introduced in version 2.1.1-beta.
Refer to Section 11.1.28, “sql_attr_json” for more details on the JSON attributes.
rt_attr_json = properties
Agent mirror selection strategy, for load balancing. Optional, default is random. Added in 2.1.1-beta.
The strategy used for mirror selection, or in other words, for choosing a specific agent mirror in a distributed index. Essentially, this directive controls how exactly the master does the load balancing between the configured mirror agent nodes. As of 2.1.1-beta, the following strategies are implemented:
ha_strategy = random
The default balancing mode. Simple linear random distribution among the mirrors. That is, an equal selection probability is assigned to every mirror. Kind of similar to round-robin (RR), but unlike RR, it does not impose a strict selection order.
The default simple random strategy does not take mirror status, error rate, and, most importantly, actual response latencies into account. So to accommodate for heterogeneous clusters and/or temporary spikes in agent node load, we have a group of balancing strategies that dynamically adjust the probabilities based on the actual query latencies observed by the master.
The adaptive strategies based on latency-weighted probabilities basically work as follows:
latency stats are accumulated, in blocks of ha_period_karma seconds;
once per karma period, latency-weighted probabilities get recomputed;
once per request (including ping requests), "dead or alive" flag is adjusted.
Currently (as of 2.1.1-beta), we begin with equal probabilities (or percentages, for brevity), and on every step, scale them by the inverse of the latencies observed during the last "karma" period, and then renormalize them. For example, if during the first 60 seconds after the master startup 4 mirrors had latencies of 10, 5, 30, and 3 msec/query respectively, the first adjustment step would go as follows:
initial percentages: 0.25, 0.25, 0.25, 0.25;
observed latencies: 10 ms, 5 ms, 30 ms, 3 ms;
inverse latencies: 0.1, 0.2, 0.0333, 0.333;
scaled percentages: 0.025, 0.05, 0.008333, 0.0833;
renormalized percentages: 0.15, 0.30, 0.05, 0.50.
Meaning that the 1st mirror would have a 15% chance of being chosen during the next karma period, the 2nd one a 30% chance, the 3rd one (slowest at 30 ms) only a 5% chance, and the 4th and the fastest one (at 3 ms) a 50% chance. Then, after that period, the second adjustment step would update those chances again, and so on.
The rationale here is, once the observed latencies stabilize, the latency weighted probabilities stabilize as well. So all these adjustment iterations are supposed to converge at a point where the average latencies are (roughly) equal over all mirrors.
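The adjustment step above can be reproduced in a few lines. This is a simplified sketch of the arithmetic only; the actual searchd bookkeeping also tracks errors and the dead-or-alive flag.

```python
def rebalance(weights, latencies_ms):
    # Scale the current selection probabilities by the inverse of the
    # observed latencies, then renormalize so they sum to 1 again.
    scaled = [w / lat for w, lat in zip(weights, latencies_ms)]
    total = sum(scaled)
    return [s / total for s in scaled]

# The 4-mirror example from the text: 10, 5, 30, and 3 msec/query.
probs = rebalance([0.25, 0.25, 0.25, 0.25], [10.0, 5.0, 30.0, 3.0])
print([round(p, 2) for p in probs])
# -> [0.15, 0.3, 0.05, 0.5]
```

The slowest mirror (30 ms) drops to a 5% selection chance while the fastest (3 ms) rises to 50%, matching the worked example.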
ha_strategy = nodeads
Latency-weighted probabilities, but dead mirrors are excluded from the selection. A "dead" mirror is defined as a mirror that resulted in multiple hard errors (eg. network failure, no answer, etc) in a row.
ha_strategy = noerrors
Latency-weighted probabilities, but mirrors with worse errors/success ratio are excluded from the selection.
ha_strategy = roundrobin
Simple round-robin selection, that is, selecting the 1st mirror in the list, then the 2nd one, then the 3rd one, etc, and then repeating the process once the last mirror in the list is reached. Unlike with the randomized strategies, RR imposes a strict querying order (1, 2, 3, .., N-1, N, 1, 2, 3, ... and so on) and guarantees that no two subsequent queries will be sent to the same mirror.
A list of keywords considered "frequent" when indexing bigrams. Optional, default is empty. Added in 2.1.1-beta.
Bigram indexing is a feature to accelerate phrase searches. When indexing, it stores a document list for either all or some of the adjacent words pairs into the index. Such a list can then be used at searching time to significantly accelerate phrase or sub-phrase matching.
Some of the bigram indexing modes (see Section 11.2.65, “bigram_index”) require you to define a list of frequent keywords. These are not to be confused with stopwords! Stopwords are completely eliminated during both indexing and searching. Frequent keywords are only used by bigrams to determine whether to index the current word pair or not.
bigram_freq_words lets you define a list of such keywords.
bigram_freq_words = the, a, you, i
Bigram indexing mode. Optional, default is none. Added in 2.1.1-beta.
Bigram indexing is a feature to accelerate phrase searches. When indexing, it stores a document list for either all or some of the adjacent words pairs into the index. Such a list can then be used at searching time to significantly accelerate phrase or sub-phrase matching.
bigram_index controls the selection of specific word pairs. The known modes are:
all, index every single word pair. (NB: probably totally not worth it even on a moderately sized index, but added anyway for the sake of completeness.)
first_freq, only index word pairs where the first word is in a list of frequent words (see Section 11.2.64, “bigram_freq_words”). For example, with bigram_freq_words = the, in, i, a, indexing the "alone in the dark" text will result in the "in the" and "the dark" pairs being stored as bigrams, because they begin with a frequent keyword ("in" and "the" respectively), but "alone in" would not be indexed, because "in" is the second word in that pair.
both_freq, only index word pairs where both words are frequent. Continuing with the same example, in this mode indexing "alone in the dark" would only store "in the" (the very worst pair of them all from a searching perspective) as a bigram, but none of the other word pairs.
For most use cases, both_freq would be the best mode, but your mileage may vary.
bigram_index = both_freq
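The mode selection logic can be sketched as follows. bigram_pairs is an illustrative helper, not Sphinx code; it assumes plain whitespace tokenization.

```python
def bigram_pairs(text, mode, freq_words=frozenset()):
    # Decide which adjacent word pairs get stored as bigrams, per mode.
    words = text.split()
    pairs = zip(words, words[1:])
    if mode == "all":
        keep = pairs
    elif mode == "first_freq":
        keep = (p for p in pairs if p[0] in freq_words)
    elif mode == "both_freq":
        keep = (p for p in pairs if p[0] in freq_words and p[1] in freq_words)
    else:  # "none", the default: no bigrams at all
        keep = ()
    return [" ".join(p) for p in keep]

freq = {"the", "in", "i", "a"}
print(bigram_pairs("alone in the dark", "first_freq", freq))
# -> ['in the', 'the dark']
print(bigram_pairs("alone in the dark", "both_freq", freq))
# -> ['in the']
```

This reproduces the "alone in the dark" walkthrough from the text for both frequency-based modes.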
Enables computing and storing of field lengths (both per-document and average per-index values) into the index. Optional, default is 0 (do not compute and store). Added in 2.1.1-beta.
When index_field_lengths is set to 1, indexer will 1) create a respective length attribute for every full-text field, sharing the same name; 2) compute a field length (counted in keywords) for every document and store it in the respective attribute; 3) compute the per-index averages. The length attributes will have a special TOKENCOUNT type, but their values are in fact regular 32-bit integers, and they are generally accessible.
The BM25A() and BM25F() functions in the expression ranker are based on these lengths and require index_field_lengths to be enabled. Historically, Sphinx used a simplified, stripped-down variant of BM25 that, unlike the complete function, did not account for document length. (We later realized that it should have been called BM15 from the start.) Starting with 2.1.1-beta, we added support for both a complete variant of BM25, and its extension towards multiple fields, called BM25F. They require per-document and per-field lengths, respectively. Hence the additional directive.
index_field_lengths = 1
Regular expressions (regexps) to filter the fields and queries with. Optional, multi-value, default is an empty list of regexps. Added in 2.1.1-beta.
In certain applications (like product search) there can be many different ways to call a model, or a product, or a property, and so on. For instance, 'iphone 3gs' and 'iphone 3 gs' (or even 'iphone3 gs') are very likely to mean the same product. Or, for a more tricky example, '13-inch', '13 inch', '13"', and '13in' in a laptop screen size descriptions do mean the same.
Regexps provide you with a mechanism to specify a number of rules specific to your application to handle such cases. In the first 'iphone 3gs' example, you could possibly get away with a wordforms file tailored to handle a handful of iPhone models. However, even in the comparatively simple second '13-inch' example there are just way too many individual forms, and you are better off specifying rules that would normalize both '13-inch' and '13in' to something identical.
Regular expressions listed in regexp_filter are applied in the order they are listed. That happens at the earliest stage possible, before any other processing, even before tokenization. That is, regexps are applied to the raw source fields when indexing, and to the raw search query text when searching.
We use the RE2 engine to implement regexps. So when building from source, the RE2 library must be installed in the system and Sphinx must be configured and built with the --with-re2 switch. Binary packages should come with RE2 built in.
# index '13"' as '13inch'
regexp_filter = \b(\d+)\" => \1inch

# index 'blue' or 'red' as 'color'
regexp_filter = (blue|red) => color
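The two example filters above can be tried out with Python's re module standing in for RE2 (for these simple patterns the two syntaxes behave identically). apply_filters is just an illustration of the ordered, pre-tokenization substitution; it is not part of Sphinx.

```python
import re

# (pattern, replacement) pairs, applied in listed order, mirroring regexp_filter
FILTERS = [
    (r'\b(\d+)"', r'\1inch'),   # index '13"' as '13inch'
    (r'blue|red', r'color'),    # index 'blue' or 'red' as 'color'
]

def apply_filters(text):
    # Run every filter over the raw text, in order, before tokenization.
    for pattern, repl in FILTERS:
        text = re.sub(pattern, repl, text)
    return text

print(apply_filters('13" blue laptop'))
# -> 13inch color laptop
```

Since both the indexed fields and the query text pass through the same filters, '13"' in a document and '13"' in a query normalize to the same '13inch' token.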
Whether to apply stopwords before or after stemming. Optional, default is 0 (apply stopword filter after stemming). Added in 2.1.1-beta.
By default, stopwords are stemmed themselves, and applied to tokens after stemming (or any other morphology processing). In other words, by default, a token is stopped when stem(token) == stem(stopword). That can lead to unexpected results when a token gets (erroneously) stemmed to a stopped root. For example, 'Andes' gets stemmed to 'and' by our current stemmer implementation, so when 'and' is a stopword, 'Andes' is also stopped.
The stopwords_unstemmed directive fixes that issue. When it is enabled, stopwords are applied before stemming (and therefore to the original word forms), and tokens are stopped when token == stopword.
stopwords_unstemmed = 1
The path to a file with global (cluster-wide) keyword IDFs. Optional, default is empty (use local IDFs). Added in 2.1.1-beta.
On a multi-index cluster, per-keyword frequencies are quite likely to differ across different indexes. That means that when the ranking function uses TF-IDF based values, such as the BM25 family of factors, the results might be ranked slightly differently depending on what cluster node they reside on.
The easiest way to fix that issue is to create and utilize a global frequency dictionary, or a global IDF file for short. This directive lets you specify the location of that file. It is suggested (but not required) to use the .idf extension. When the IDF file is specified for a given index and OPTION global_idf is set to 1, the engine will use the keyword frequencies and collection document counts from the global_idf file, rather than just the local index. That way, IDFs and the values that depend on them will stay consistent across the cluster.
IDF files can be shared across multiple indexes. Only a single copy of an IDF file will be loaded by searchd, even when many indexes refer to that file. Should the contents of an IDF file change, the new contents can be loaded with a SIGHUP. You can build an .idf file using the indextool utility, by dumping dictionaries using the --dumpdict switch first, then converting those to .idf format using --buildidf, then merging all the .idf files across the cluster using --mergeidf. Refer to Section 6.5, “indextool command reference” for more information.
global_idf = /usr/local/sphinx/var/global.idf
Indexing RAM usage limit. Optional, default is 32M.
The enforced memory usage limit that indexer will not go above. Can be specified in bytes, kilobytes (using the K postfix), or megabytes (using the M postfix); see the example. This limit will be automatically raised if set to an extremely low value that causes I/O buffers to be less than 8 KB; the exact lower bound for that depends on the indexed data size. If the buffers are less than 256 KB, a warning will be produced. The maximum possible limit is 2047M. Too low values can hurt indexing speed, but 256M to 1024M should be enough for most if not all datasets. Setting this value too high can cause SQL server timeouts: during the document collection phase, there will be periods when the memory buffer is partially sorted and no communication with the database is performed, and the database server can time out. You can resolve that either by raising the timeouts on the SQL server side or by lowering mem_limit.
mem_limit = 256M
# mem_limit = 262144K # same, but in KB
# mem_limit = 268435456 # same, but in bytes
Maximum I/O operations per second, for I/O throttling. Optional, default is 0 (unlimited).
I/O throttling related option. It limits maximum count of I/O operations (reads or writes) per any given second. A value of 0 means that no limit is imposed.
indexer can cause bursts of intensive disk I/O during indexing, and it might be desired to limit its disk activity (and leave something for other programs running on the same machine, such as searchd). I/O throttling helps to do that. It works by enforcing a minimum guaranteed delay between subsequent disk I/O operations performed by indexer. Modern SATA HDDs are able to perform up to 70-100+ I/O operations per second (that's mostly limited by disk head seek time). Limiting indexing I/O to a fraction of that can help reduce the search performance degradation caused by indexing.
max_iops = 40
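Since throttling works by enforcing a minimum delay between operations, the effective delay for a given max_iops is simple arithmetic (a sketch of the calculation, not the actual implementation):

```python
def min_io_delay_ms(max_iops):
    # max_iops = 0 means unlimited, i.e. no enforced delay between ops.
    if max_iops == 0:
        return 0.0
    return 1000.0 / max_iops  # milliseconds between subsequent I/O ops

print(min_io_delay_ms(40))   # -> 25.0 (caps indexer at 40 ops/sec)
print(min_io_delay_ms(100))  # -> 10.0
```

So max_iops = 40 from the example above translates into at least 25 ms between any two disk operations.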
Maximum allowed I/O operation size, in bytes, for I/O throttling. Optional, default is 0 (unlimited).
I/O throttling related option. It limits the maximum file I/O operation (read or write) size for all operations performed by indexer. A value of 0 means that no limit is imposed. Reads or writes that are bigger than the limit will be split into several smaller operations, and counted as several operations by the max_iops setting. At the time of this writing, all I/O calls should be under 256 KB (the default internal buffer size) anyway, so max_iosize values higher than 256 KB will not affect anything.
max_iosize = 1048576
Maximum allowed field size for XMLpipe2 source type, bytes. Optional, default is 2 MB.
max_xmlpipe2_field = 8M
Write buffer size, bytes. Optional, default is 1 MB.
Write buffers are used to write both temporary and final index files when indexing. Larger buffers reduce the number of required disk writes. Memory for the buffers is allocated in addition to mem_limit. Note that several (currently up to 4) buffers for different files will be allocated, proportionally increasing the RAM usage.
write_buffer = 4M
Maximum file field adaptive buffer size, bytes. Optional, default is 8 MB, minimum is 1 MB.
The file field buffer is used to load files referred to from sql_file_field columns. This buffer is adaptive, starting at 1 MB at first allocation, and growing in 2x steps until either the file contents can be loaded, or the maximum buffer size, specified by the max_file_field_buffer directive, is reached. Thus, if no file fields are specified, no buffer is allocated at all. If all files loaded during indexing are under (for example) 2 MB in size, but the max_file_field_buffer value is 128 MB, peak buffer usage would still be only 2 MB. However, files over 128 MB would be entirely skipped.
max_file_field_buffer = 128M
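The adaptive growth described above can be modeled as follows. file_field_buffer_size is a hypothetical helper illustrating the doubling behavior, not indexer code.

```python
MB = 1 << 20

def file_field_buffer_size(file_size, max_buffer, start=MB):
    # Files larger than max_file_field_buffer are entirely skipped.
    if file_size > max_buffer:
        return None
    size = start
    while size < file_size:  # grow in 2x steps until the file fits
        size *= 2
    return min(size, max_buffer)

print(file_field_buffer_size(2 * MB, 128 * MB) // MB)  # -> 2
print(file_field_buffer_size(3 * MB, 128 * MB) // MB)  # -> 4
print(file_field_buffer_size(200 * MB, 128 * MB))      # -> None
```

This shows why peak usage stays at 2 MB for small files even with a 128 MB cap, while a 200 MB file is skipped outright.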
How to handle IO errors in file fields. Optional, default is ignore_field. Introduced in version 2.0.2-beta.
When there is a problem indexing a file referenced by a file field (Section 11.1.33, “sql_file_field”), indexer can either index the document, assuming empty content in this particular field, or skip the document, or fail indexing entirely. The on_file_field_error directive controls that behavior. The values it takes are:
ignore_field, index the current document without the field;
skip_document, skip the current document but continue indexing;
fail_index, fail indexing with an error message.
The problems that can arise are: open errors, size errors (file too big), and data read errors. Warning messages on any problem will be given at all times, regardless of the phase and the on_file_field_error setting.
Note that with on_file_field_error = skip_document, documents will only be ignored if problems are detected during the early check phase, and not during the actual file parsing phase. indexer will open every referenced file and check its size before doing any work, and then open it again when doing the actual parsing work. So in case a file goes away between these two open attempts, the document will still be indexed.
on_file_field_error = skip_document
Lemmatizer dictionaries base path. Optional, default is /usr/local/share (as in the --datadir switch to the ./configure script). Added in version 2.1.1-beta.
Our lemmatizer implementation (see Section 11.2.6, “morphology” for a discussion of what lemmatizers are) is dictionary driven. lemmatizer_base directive configures the base dictionary path. File names are hardcoded and specific to a given lemmatizer; the Russian lemmatizer uses ru.pak dictionary file. The dictionaries can be obtained from the Sphinx website.
lemmatizer_base = /usr/local/share/sphinx/dicts/
Lemmatizer cache size. Optional, default is 256K. Added in version 2.1.1-beta.
Our lemmatizer implementation (see Section 11.2.6, “morphology” for a discussion of what lemmatizers are) uses a compressed dictionary format that enables a space/speed tradeoff. It can either perform lemmatization off the compressed data, using more CPU but less RAM, or it can decompress and precache the dictionary either partially or fully, using less CPU but more RAM. The lemmatizer_cache directive lets you control exactly how much RAM can be spent on that uncompressed dictionary cache.
Currently, the only available dictionary is ru.pak, the Russian one. The compressed dictionary is approximately 10 MB in size; note that the dictionary stays in memory at all times, too. The default cache size is 256 KB. The accepted cache sizes range from 0 to 2047 MB. It is safe to set the cache size higher than needed; the lemmatizer will only use as much memory as it actually requires. For instance, the entire Russian dictionary decompresses to approximately 110 MB, so setting lemmatizer_cache anywhere higher than that will not affect memory use: even when 1024 MB is allowed for the cache, only the needed 110 MB will be used.
On our benchmarks, the total indexing time with different cache sizes was as follows:
Your mileage may vary, but a simple rule of thumb would be to either go with the small default 256 KB cache when pressed for memory, or spend 128 MB extra RAM and cache the entire dictionary for maximum indexing performance.
lemmatizer_cache = 256M # cache it all
This setting lets you specify an IP address and port, or a Unix-domain
socket path, that searchd will listen on.
Introduced in version 0.9.9-rc1.
The informal grammar for listen
setting is:
listen = ( address ":" port | port | path ) [ ":" protocol ]
I.e. you can specify either an IP address (or hostname) and port
number, or just a port number, or Unix socket path. If you specify
port number but not the address, searchd
will listen on
all network interfaces. Unix path is identified by a leading slash.
Starting with version 0.9.9-rc2, you can also specify a protocol handler (listener) to be used for connections on this socket. Supported protocol values are 'sphinx' (Sphinx 0.9.x API protocol) and 'mysql41' (MySQL protocol used since 4.1 up to at least 5.1). More details on MySQL protocol support can be found in Section 5.10, “MySQL protocol support and SphinxQL”.
listen = localhost
listen = localhost:5000
listen = 192.168.0.1:5000
listen = /var/run/sphinx.s
listen = 9312
listen = localhost:9306:mysql41
There can be multiple listen directives; searchd
will listen for client connections on all specified ports and sockets. If
no listen
directives are found then the server will listen
on all available interfaces using the default SphinxAPI port 9312.
Starting with 1.10-beta, it will also listen on default SphinxQL
port 9306. Both port numbers are assigned by IANA (see
http://www.iana.org/assignments/port-numbers
for details) and should therefore be available.
Unix-domain sockets are not supported on Windows.
Interface IP address to bind on. Optional, default is 0.0.0.0 (i.e. listen on all interfaces). DEPRECATED, use listen instead.
The address setting lets you specify which network interface
searchd will bind to, listen on, and accept incoming
network connections on. The default value is 0.0.0.0, which means listening
on all interfaces. At this time, you can not specify multiple interfaces.
address = 192.168.0.1
searchd
TCP port number.
DEPRECATED, use listen instead.
Used to be mandatory. Default port number is 9312.
port = 9312
Log file name.
Optional, default is 'searchd.log'.
All searchd
run time events will be logged in this file.
You can also use 'syslog' as the file name; in this case, events will be sent to the syslog daemon. To use the syslog option, Sphinx must be configured with --with-syslog at build time.
log = /var/log/searchd.log
Query log file name.
Optional, default is empty (do not log queries).
All search queries will be logged in this file. The format is described in Section 5.9, “searchd
query log formats”.
In case of the 'plain' format, you can use 'syslog' as the path to the log file. In this case, all search queries will be sent to the syslog daemon with LOG_INFO priority, prefixed with '[query]' instead of a timestamp. To use the syslog option, Sphinx must be configured with --with-syslog at build time.
query_log = /var/log/query.log
Query log format. Optional, allowed values are 'plain' and 'sphinxql', default is 'plain'. Introduced in version 2.0.1-beta.
Starting with version 2.0.1-beta, two different log formats are supported.
The default one logs queries in a custom text format. The new one logs
valid SphinxQL statements. This directive lets you switch between the two
formats on search daemon startup. The log format can also be altered
on the fly, using the SET GLOBAL query_log_format=sphinxql syntax.
Refer to Section 5.9, “searchd
query log formats” for more discussion and format
details.
query_log_format = sphinxql
Network client request read timeout, in seconds.
Optional, default is 5 seconds.
searchd
will forcibly close the client connections which fail to send a query within this timeout.
read_timeout = 1
Maximum time to wait between requests (in seconds) when using persistent connections. Optional, default is five minutes.
client_timeout = 3600
Maximum number of children to fork (or in other words, concurrent searches to run in parallel). Optional, default is 0 (unlimited).
Useful to control server load. There will be no more than this many concurrent searches running at any given time. When the limit is reached, additional incoming clients are dismissed with a temporary failure (SEARCHD_RETRY) status code and a message stating that the server is maxed out.
max_children = 10
searchd
process ID file name.
Mandatory.
The PID file will be re-created (and locked) on startup. It will contain
the head daemon process ID while the daemon is running, and it will be unlinked
on daemon shutdown. It's mandatory because Sphinx uses it internally
for a number of things: to check whether there already is a running instance
of searchd; to stop searchd;
and to notify it that it should rotate the indexes. It can also be used by
various external automation scripts.
pid_file = /var/run/searchd.pid
Maximum number of matches that the daemon keeps in RAM for each index and can return to the client. Optional, default is 1000.
Introduced in order to control and limit RAM usage, the max_matches
setting defines how many matches will be kept in RAM while searching each index.
Every match found will still be processed, but only the
best N of them will be kept in memory and returned to the client in the end.
Assume that the index contains 2,000,000 matches for the query. You rarely
(if ever) need to retrieve all of them. Rather, you need
to scan all of them, but choose only the best, at most, say, 500 by some criteria
(i.e. sorted by relevance, or price, or anything else), and display those
500 matches to the end user in pages of 20 to 100 matches. And tracking
only the best 500 matches is much more RAM and CPU efficient than keeping
all 2,000,000 matches, sorting them, and then discarding everything but
the first 20 needed to display the search results page. max_matches
controls N in that "best N" amount.
This parameter noticeably affects per-query RAM and CPU usage.
Values of 1,000 to 10,000 are generally fine, but higher limits must be
used with care. Recklessly raising max_matches
to 1,000,000
means that searchd
will have to allocate and
initialize a 1-million-entry match buffer for every
query. That will obviously increase per-query RAM usage, and in some cases
can also noticeably impact performance.
CAVEAT EMPTOR! Note that there also is another place where this limit
is enforced. max_matches
can be decreased on the fly
through the corresponding API call,
and the default value in the API is also set to 1,000. So in order
to retrieve more than 1,000 matches to your application, you will have
to change the configuration file, restart searchd, and set the proper limit
in the SetLimits() call.
Also note that you can not set the value in the API higher than the value
in the .conf file. This is prohibited in order to have some protection
against malicious and/or malformed requests.
max_matches = 10000
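The "best N" tracking described above can be illustrated with a bounded min-heap. This is only a sketch of the general technique, not searchd's actual implementation; the weights and document IDs below are made up:

```python
import heapq

def best_n(matches, max_matches):
    # matches: iterable of (weight, doc_id) pairs; only max_matches entries
    # are ever held in memory, no matter how many matches are scanned
    heap = []  # min-heap, weakest kept match on top
    for weight, doc_id in matches:
        if len(heap) < max_matches:
            heapq.heappush(heap, (weight, doc_id))
        elif weight > heap[0][0]:
            heapq.heapreplace(heap, (weight, doc_id))
    return sorted(heap, reverse=True)  # best matches first

# scan 2,000,000 simulated matches, but keep only the best 3 in RAM
stream = ((doc_id % 1000, doc_id) for doc_id in range(2000000))
print(best_n(stream, 3))
```

Scanning all 2,000,000 matches still happens; only the peak memory stays bounded at max_matches entries, which is exactly the tradeoff the directive controls.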
Prevents searchd
stalls while rotating indexes with huge amounts of data to precache.
Optional, default is 1 (enable seamless rotation).
Indexes may contain some data that needs to be precached in RAM.
At the moment, .spa, .spi and .spm
files are fully precached (they contain attribute data,
keyword index, and MVA data, respectively.)
Without seamless rotate, rotating an index tries to use as little RAM
as possible and works as follows:
1. new queries are temporarily rejected (with a "retry" error code);
2. searchd waits for all currently running queries to finish;
3. the old index is deallocated and its files are renamed;
4. the new index files are renamed and the required RAM is allocated;
5. new index attribute and dictionary data is preloaded to RAM;
6. searchd resumes serving queries from the new index.
However, if there's a lot of attribute or dictionary data, then the preloading step could take noticeable time - up to several minutes in case of preloading 1-5+ GB files.
With seamless rotate enabled, rotation works as follows:
1. new index RAM storage is allocated;
2. new index attribute and dictionary data is asynchronously preloaded to RAM;
3. on success, the old index is deallocated and both indexes' files are renamed;
4. on failure, the new index is deallocated;
5. at any given moment, queries are served either from the old or the new index copy.
Seamless rotate comes at the cost of higher peak
memory usage during the rotation (because both old and new copies of
.spa/.spi/.spm
data need to be in RAM while
preloading new copy). Average usage stays the same.
seamless_rotate = 1
Whether to forcibly preopen all indexes on startup. Optional, default is 1 (preopen everything).
Starting with 2.0.1-beta, the default value for this option is now 1 (forcibly preopen all indexes). In prior versions, it used to be 0 (use per-index settings).
When set to 1, this directive overrides and enforces
preopen on all indexes.
They will be preopened, no matter what the per-index
preopen setting is. When set to 0, per-index
settings can take effect. (And they default to 0.)
Pre-opened indexes avoid races between search queries
and rotations that can cause queries to fail occasionally.
They also make searchd
use more file
handles. In most scenarios it's therefore preferred and
recommended to preopen indexes.
preopen_indexes = 1
Whether to unlink .old index copies on successful rotation. Optional, default is 1 (do unlink).
unlink_old = 0
When calling UpdateAttributes() to update document attributes in
real-time, changes are first written to the in-memory copy of attributes
(docinfo must be set to extern).
Then, once searchd shuts down normally (via SIGTERM
being sent), the changes are written to disk.
Introduced in version 0.9.9-rc1.
Starting with 0.9.9-rc1, it is possible to tell searchd
to periodically write these changes back to disk, to avoid them being lost. The interval
between flushes is set with attr_flush_period, in seconds.
It defaults to 0, which disables the periodic flushing, but flushing will still occur at normal shutdown.
attr_flush_period = 900 # persist updates to disk every 15 minutes
Instance-wide defaults for the ondisk_dict directive. Optional, default is 0 (precache dictionaries in RAM). Introduced in version 0.9.9-rc1.
This directive lets you specify the default value of
ondisk_dict for all the indexes
served by this copy of searchd. The per-index directive
takes precedence, and will overwrite this instance-wide default value,
allowing for fine-grained control.
ondisk_dict_default = 1 # keep all dictionaries on disk
Maximum allowed network packet size. Limits both query packets from clients, and response packets from remote agents in distributed environment. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 8M. Introduced in version 0.9.9-rc1.
max_packet_size = 32M
Shared pool size for in-memory MVA updates storage. Optional, default size is 1M. Introduced in version 0.9.9-rc1.
This setting controls the size of the shared storage pool for updated MVA values. Specifying 0 for the size disables MVA updates altogether. Once the pool size limit is hit, MVA update attempts will result in an error. However, updates on regular (scalar) attributes will still work. Due to internal technical difficulties, it is currently not possible to store (flush) any updates on indexes where MVAs were updated; though this might be implemented in the future. In the meantime, MVA updates are intended as a measure to quickly catch up with the latest changes in the database until the next index rebuild; not as a persistent storage mechanism.
mva_updates_pool = 16M
Deprecated debugging setting, path (formally prefix) for crash log files. Introduced in version 0.9.9-rc1. Deprecated in version 2.0.1-beta, as crash debugging information now gets logged into searchd.log in text form, and separate binary crash logs are no longer needed.
Maximum allowed per-query filter count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 256. Introduced in version 0.9.9-rc1.
max_filters = 1024
Maximum allowed per-filter values count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 4096. Introduced in version 0.9.9-rc1.
max_filter_values = 16384
TCP listen backlog. Optional, default is 5.
Windows builds currently (as of 0.9.9) can only process requests one by one. Concurrent requests will be enqueued by the TCP stack on the OS level, and requests that can not be enqueued will immediately fail with a "connection refused" message. The listen_backlog directive controls the length of the connection queue. Non-Windows builds should work fine with the default value.
listen_backlog = 20
Per-keyword read buffer size. Optional, default is 256K.
For every keyword occurrence in every search query, there are two associated read buffers (one for document list and one for hit list). This setting lets you control their sizes, increasing per-query RAM use, but possibly decreasing IO time.
read_buffer = 1M
Unhinted read size. Optional, default is 32K.
When querying, some reads know in advance exactly how much data is there to be read, but some currently do not. Most prominently, hit list size is not currently known in advance. This setting lets you control how much data to read in such cases. It will impact hit list IO time, reducing it for lists larger than the unhinted read size, but raising it for smaller lists. It will not affect RAM use because the read buffer will be already allocated. It should therefore not be greater than read_buffer.
read_unhinted = 32K
Limits the number of queries per batch. Optional, default is 32.
Makes searchd perform a sanity check on the number of queries submitted in a single batch when using multi-queries. Set it to 0 to skip the check.
max_batch_queries = 256
Max common subtree document cache size, per-query. Optional, default is 0 (disabled).
Limits RAM usage of a common subtree optimizer (see Section 5.11, “Multi-queries”). At most this much RAM will be spent to cache document entries per each query. Setting the limit to 0 disables the optimizer.
subtree_docs_cache = 8M
Max common subtree hit cache size, per-query. Optional, default is 0 (disabled).
Limits RAM usage of a common subtree optimizer (see Section 5.11, “Multi-queries”). At most this much RAM will be spent to cache keyword occurrences (hits) per each query. Setting the limit to 0 disables the optimizer.
subtree_hits_cache = 16M
Multi-processing mode (MPM). Optional; allowed values are none, fork, prefork, and threads. Default is fork on Unix-based systems, and threads on Windows. Introduced in version 1.10-beta.
Lets you choose how searchd processes multiple
concurrent requests. The possible values are:
none, all requests will be handled serially, one-by-one. Prior to 1.x, this was the only mode available on Windows.
fork, a new child process will be forked to handle every incoming request. Historically, this is the default mode.
prefork, on startup searchd will pre-fork a number of worker processes, and pass the incoming requests to one of those children.
threads, a new thread will be created to handle every incoming request. This is the only mode compatible with the RT indexing backend.
Historically, searchd used the fork-based model,
which generally performs OK but spends a noticeable amount of CPU
in the fork() system call when there's a high number of (tiny) requests
per second. Prefork mode was implemented to alleviate that; with
prefork, worker processes are basically only created on startup
and re-created on index rotation, somewhat reducing fork() call
pressure.
Threads mode was implemented along with RT backend and is required to use RT indexes. (Regular disk-based indexes work in all the available modes.)
workers = threads
Max local worker threads to use for parallelizable requests (searching a distributed index; building a batch of snippets). Optional, default is 0, which means to disable in-request parallelism. Introduced in version 1.10-beta.
A distributed index can include several local indexes. dist_threads
lets you easily utilize multiple CPUs/cores for that (the previously existing
alternative was to specify the indexes as remote agents, pointing searchd
to itself and paying some network overhead).
When set to a value N greater than 1, this directive will create up to N threads for every query, and schedule the specific searches within these threads. For example, if there are 7 local indexes to search and dist_threads is set to 2, then 2 parallel threads would be created: one that sequentially searches 4 indexes, and another one that searches the other 3 indexes.
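The scheduling example above (7 local indexes and dist_threads = 2 yielding a 4+3 split) can be sketched as a simple partitioning. This is an illustration only, not searchd's actual scheduler:

```python
def partition(local_indexes, dist_threads):
    # split the local indexes across at most dist_threads workers,
    # as evenly as possible
    n = max(1, min(dist_threads, len(local_indexes)))
    buckets = [[] for _ in range(n)]
    for i, index in enumerate(local_indexes):
        buckets[i % n].append(index)
    return buckets

# 7 local indexes searched with dist_threads = 2
work = partition(["idx%d" % i for i in range(1, 8)], 2)
print([len(bucket) for bucket in work])  # one thread gets 4 indexes, the other 3
```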
In case of a CPU bound workload, setting dist_threads
to 1x the number of cores is advised (creating more threads than cores
will not improve query time). In case of a mixed CPU/disk bound workload
it might sometimes make sense to use more (so that all cores can be
utilized even when there are threads that wait for I/O completion).
Note that dist_threads
does not require
threads MPM. You can perfectly use it with fork or prefork MPMs too.
Starting with version 2.0.1-beta, building a batch of snippets
with the load_files flag enabled can also be parallelized.
Up to dist_threads threads are created to process
those files. That speeds up snippet extraction when the total amount
of document data to process is significant (hundreds of megabytes).
index dist_test
{
    type = distributed
    local = chunk1
    local = chunk2
    local = chunk3
    local = chunk4
}
# ...
dist_threads = 4
Binary log (aka transaction log) files path. Optional, default is build-time configured data directory. Introduced in version 1.10-beta.
Binary logs are used for crash recovery of RT index data, and also of attribute updates of plain disk indexes that would otherwise only be stored in RAM until flush. When logging is enabled, every transaction COMMIT-ted into an RT index gets written into a log file. Logs are then automatically replayed on startup after an unclean shutdown, recovering the logged changes.
The binlog_path directive specifies the binary log
files location. It should contain just the path; searchd
will create and unlink multiple binlog.* files in that path as necessary
(binlog data, metadata, lock files, etc).
Empty value disables binary logging. That improves performance, but puts RT index data at risk.
WARNING! It is strongly recommended to always explicitly define the 'binlog_path' option in your config. Otherwise, the default path, which in most cases is the same as the working folder, may point to a folder with no write access (for example, /usr/local/var/data). In this case, searchd will not start at all.
binlog_path = # disable logging
binlog_path = /var/data # /var/data/binlog.001 etc will be created
Binary log transaction flush/sync mode. Optional, default is 2 (flush every transaction, sync every second). Introduced in version 1.10-beta.
This directive controls how frequently the binary log will be flushed to the OS and synced to disk. Three modes are supported:
0, flush and sync every second. Best performance, but up to 1 second worth of committed transactions can be lost both on daemon crash, or OS/hardware crash.
1, flush and sync every transaction. Worst performance, but every committed transaction data is guaranteed to be saved.
2, flush every transaction, sync every second. Good performance, and every committed transaction is guaranteed to be saved in case of daemon crash. However, in case of OS/hardware crash up to 1 second worth of committed transactions can be lost.
For those familiar with MySQL and InnoDB, this directive is entirely
similar to innodb_flush_log_at_trx_commit
. In most
cases, the default hybrid mode 2 provides a nice balance of speed
and safety, with full RT index data protection against daemon crashes,
and some protection against hardware ones.
binlog_flush = 1 # ultimate safety, low speed
Maximum binary log file size. Optional, default is 0 (do not reopen binlog file based on size). Introduced in version 1.10-beta.
A new binlog file will be forcibly opened once the current binlog file reaches this limit. This achieves a finer granularity of logs and can yield more efficient binlog disk usage under certain borderline workloads.
binlog_max_log_size = 16M
A prefix to prepend to the local file names when generating snippets. Optional, default is empty. Introduced in version 2.1.1-beta.
This prefix can be used in distributed snippets generation along with
load_files
or load_files_scattered
options.
Note that this is a prefix, and not a path! Meaning that if a prefix
is set to "server1" and the request refers to "file23", searchd
will attempt to open "server1file23" (all of that without quotes). So if you
need it to be a path, you have to include the trailing slash.
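The concatenation behavior can be checked with a trivial sketch; resolve_snippet_file is a hypothetical helper name, mirroring the "server1" + "file23" example above:

```python
def resolve_snippet_file(prefix, requested_name):
    # searchd simply prepends the prefix to the requested file name;
    # no path separator is inserted
    return prefix + requested_name

print(resolve_snippet_file("server1", "file23"))               # server1file23 (probably not what you want)
print(resolve_snippet_file("/mnt/common/server1/", "file23"))  # /mnt/common/server1/file23
```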
Note also that this is a local option; it does not affect the agents in any way. So you can safely set a prefix on a master server. The requests routed to the agents will not be affected by the master's setting. They will however be affected by the agent's own settings.
This might be useful, for instance, when the document storage locations (be those local storage or NAS mountpoints) are inconsistent across the servers.
snippets_file_prefix = /mnt/common/server1/
Default server collation. Optional, default is libc_ci. Introduced in version 2.0.1-beta.
Specifies the default collation used for incoming requests. The collation can be overridden on a per-query basis. Refer to Section 5.12, “Collations” for the list of available collations and other details.
collation_server = utf8_ci
Server libc locale. Optional, default is C. Introduced in version 2.0.1-beta.
Specifies the libc locale, affecting the libc-based collations. Refer to Section 5.12, “Collations” for details.
collation_libc_locale = fr_FR
Trusted location for the dynamic libraries (UDFs). Optional, default is empty (no location). Introduced in version 2.0.1-beta.
Specifies the trusted directory from which the UDF libraries can be loaded. Requires workers = threads to take effect.
workers = threads
plugin_dir = /usr/local/sphinx/lib
A server version string to return via MySQL protocol. Optional, default is empty (return Sphinx version). Introduced in version 2.0.1-beta.
Several picky MySQL client libraries depend on a particular version
number format used by MySQL, and moreover, sometimes choose a different
execution path based on the reported version number (rather than the
indicated capabilities flags). For instance, Python MySQLdb 1.2.2 throws
an exception when the version number is not in X.Y.ZZ format; MySQL .NET
connector 6.3.x fails internally on version numbers 1.x along with
a certain combination of flags, etc. To work around that, you can use the
mysql_version_string
directive and have searchd
report a different version to clients connecting over MySQL protocol.
(By default, it reports its own version.)
mysql_version_string = 5.0.37
RT indexes RAM chunk flush check period, in seconds. Optional, default is 10 hours. Introduced in version 2.0.1-beta.
Actively updated RT indexes that nevertheless fully fit into their RAM chunks can still result in ever-growing binlogs, impacting disk use and crash recovery time. With this directive, the search daemon performs periodic flush checks, and eligible RAM chunks can get saved, enabling consequent binlog cleanup. See Section 4.4, “Binary logging” for more details.
rt_flush_period = 3600 # 1 hour
Per-thread stack size. Optional, default is 64K. Introduced in version 2.0.1-beta.
In the workers = threads mode, every request is processed
with a separate thread that needs its own stack space. By default, 64K per
thread is allocated for the stack. However, extremely complex search requests
might eventually exhaust the default stack and require more. For instance,
a query that matches a few thousand keywords (either directly or through
term expansion) can eventually run out of stack. Previously, that resulted
in crashes. Starting with 2.0.1-beta, searchd attempts
to estimate the expected stack use, and blocks the potentially dangerous
queries. To process such queries, you can either raise the thread stack size
using the thread_stack directive, or switch to a different
workers setting if that is possible.
A query with N levels of nesting is estimated to require approximately
30+0.16*N KB of stack, meaning that the default 64K is enough for queries
with up to 250 levels, 150K for up to 700 levels, etc. If a query's estimated
stack requirement exceeds the configured limit, searchd fails the query and reports
the required stack size in the error message.
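The 30+0.16*N KB estimate can be evaluated directly; this is just the rule of thumb from the paragraph above, not searchd's exact accounting:

```python
def estimated_stack_kb(nesting_levels):
    # approximate per-query stack requirement for a query
    # with the given number of nesting levels
    return 30 + 0.16 * nesting_levels

for levels in (100, 700):
    print("%d levels -> ~%.0f KB" % (levels, estimated_stack_kb(levels)))
```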
thread_stack = 256K
The maximum number of expanded keywords for a single wildcard. Optional, default is 0 (no limit). Introduced in version 2.0.1-beta.
When doing substring searches against indexes built with
dict = keywords
enabled, a single wildcard may
potentially result in thousands and even millions of matched
keywords (think of matching 'a*' against the entire Oxford
dictionary). This directive lets you limit the impact
of such expansions. Setting expansion_limit = N
restricts expansions to no more than N of the most frequent
matching keywords (per each wildcard in the query).
expansion_limit = 16
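The effect of the limit can be sketched as keeping only the N most frequent matching keywords. This is an illustration, not searchd's internal code; the keyword frequencies below are made up:

```python
import heapq

def expand_wildcard(matching_keywords, expansion_limit):
    # matching_keywords: dict mapping an expanded keyword to its
    # document frequency; the limit 0 means "no limit"
    if not expansion_limit:
        return sorted(matching_keywords)
    top = heapq.nlargest(expansion_limit,
                         ((freq, kw) for kw, freq in matching_keywords.items()))
    return [kw for freq, kw in top]

freqs = {"apple": 500, "ant": 40, "abbey": 7, "axiom": 2, "azure": 90}
print(expand_wildcard(freqs, 2))  # the two most frequent 'a*' expansions
```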
Legacy SphinxQL quirks compatibility mode. Optional, default is 0 starting with 2.1.1-beta (SQL compliance mode). Introduced in version 2.0.1-beta.
Starting with version 2.0.1-beta, we're bringing SphinxQL into closer
compliance with standard SQL. However, existing applications must not
get broken, and compat_sphinxql_magics lets you upgrade
safely. It defaults to 0, which enables the modern behavior.
SphinxQL compatibility mode is now deprecated and
will be removed once we complete bringing SphinxQL in line
with standard SQL syntax.
Please refer to Section 7.36, “SphinxQL upgrade notes, version 2.0.1-beta”
for the details and update instructions.
compat_sphinxql_magics = 0 # the future is now
Threaded server watchdog. Optional, default is 1 (watchdog enabled). Introduced in version 2.0.1-beta.
A crashed query in threads
multi-processing mode
(workers = threads
)
can take down the entire server. With watchdog feature enabled,
searchd
additionally keeps a separate lightweight
process that monitors the main server process, and automatically
restarts the latter in case of abnormal termination. Watchdog
is enabled by default.
watchdog = 0 # disable watchdog
Delay between restarting preforked children on index rotation, in milliseconds. Optional, default is 0 (no delay). Introduced in version 2.0.2-beta.
When running in workers = prefork
mode, every index rotation needs to restart all children to propagate the newly
loaded index data changes. Restarting all of them at once might put excessive
strain on CPU and/or network connections. (For instance, when the application
keeps a bunch of open persistent connections to different children, and all those
children restart.) Those bursts can be throttled down with
prefork_rotation_throttle
directive. Note that
the children will be restarted sequentially, and thus "old" results might
persist for a few more seconds. For instance, if
prefork_rotation_throttle
is set to 50 (milliseconds), and
there are 30 children, then the last one would only be actually
restarted 1.5 seconds (50*30=1500 milliseconds) after
the "rotation finished" message in the searchd
event log.
prefork_rotation_throttle = 50 # throttle children restarts by 50 msec each
Path to a file where current SphinxQL state will be serialized. Available since version 2.1.1-beta.
On daemon startup, this file gets replayed. On eligible state changes (eg. SET GLOBAL), this file gets rewritten automatically. This can prevent a hard-to-diagnose problem: if you load UDF functions but Sphinx crashes, then when it gets (automatically) restarted, your UDFs and global variables will no longer be available. Using persistent state helps ensure a graceful recovery with no such surprises.
sphinxql_state = uservars.sql
Interval between agent mirror pings, in milliseconds. Optional, default is 1000. Added in 2.1.1-beta.
For a distributed index with agent mirrors in it (see more in ???), the master sends all mirrors a ping command during idle periods. This is to track the current agent status (alive or dead, network roundtrip, etc). The interval between such pings is defined by this directive.
To disable pings, set ha_ping_interval to 0.
ha_ping_interval = 0
Agent mirror statistics window size, in seconds. Optional, default is 60. Added in 2.1.1-beta.
For a distributed index with agent mirrors in it (see more in ???),
the master tracks several different per-mirror counters. These counters
are then used for failover and balancing. (The master picks the best
mirror to use based on the counters.) Counters are accumulated in
blocks of ha_period_karma seconds.
After beginning a new block, master may still use the accumulated values from the previous one, until the new one is half full. Thus, any previous history stops affecting the mirror choice after 1.5 times ha_period_karma seconds at most.
Despite the fact that at most 2 blocks are used for mirror selection, up to 15 of the last blocks are actually stored, for instrumentation purposes. They can be inspected using the SHOW AGENT STATUS statement.
ha_period_karma = 120
The maximum number of simultaneous persistent connections to remote persistent agents. Each time we connect to an agent defined under 'agent_persistent', we try to reuse an existing connection (if any), or connect and save the connection for future use. However, we cannot hold an unlimited number of such persistent connections, since each one holds a worker on the agent side (and we will eventually receive the 'maxed out' error once all of them are busy). This directive limits that number. It affects the number of connections to each agent's host, across all distributed indexes.
It is reasonable to set the value equal to or less than the max_children option of the agents.
persistent_connections_limit = 29 # assume that each host of agents has max_children = 30 (or 29).
A maximum number of I/O operations (per second) that the RT chunks merge thread is allowed to start. Optional, default is 0 (no limit). Added in 2.1.1-beta.
This directive lets you throttle down the I/O impact arising from
the OPTIMIZE
statements. It is guaranteed that all the
RT optimization activity will not generate more disk iops (I/Os per second)
than the configured limit. Modern SATA drives can perform up to around 100 I/O operations per
second, and limiting rt_merge_iops can reduce search performance degradation caused by merging.
rt_merge_iops = 40
A maximum size of an I/O operation that the RT chunks merge thread is allowed to start. Optional, default is 0 (no limit). Added in 2.1.1-beta.
This directive lets you throttle down the I/O impact arising from
the OPTIMIZE
statements. I/Os bigger than this limit will be
broken down into 2 or more I/Os, which will then be accounted as separate I/Os
with regards to the rt_merge_iops
limit. Thus, it is guaranteed that all the optimization activity will not
generate more than (rt_merge_iops * rt_merge_maxiosize) bytes of disk I/O
per second.
rt_merge_maxiosize = 1M
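Taken together, the two directives above bound the merge thread's disk bandwidth. A minimal illustrative calculation of that guarantee (the helper function and values are hypothetical, not part of Sphinx):

```python
# Worst-case RT merge disk bandwidth implied by the two throttling
# directives: at most rt_merge_iops I/Os per second, each at most
# rt_merge_maxiosize bytes.

def max_merge_bandwidth(rt_merge_iops, rt_merge_maxiosize):
    """Return the merge I/O cap in bytes per second (0 means no cap)."""
    if rt_merge_iops == 0 or rt_merge_maxiosize == 0:
        return 0  # either limit disabled, so no guaranteed byte cap
    return rt_merge_iops * rt_merge_maxiosize

# With rt_merge_iops = 40 and rt_merge_maxiosize = 1M (1048576 bytes),
# optimization I/O is capped at 40 MB per second:
cap = max_merge_bandwidth(40, 1048576)  # 41943040 bytes/sec
```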
Costs for the query time prediction model, in nanoseconds. Optional, default is "doc=64, hit=48, skip=2048, match=64" (without the quotes). Added in 2.1.1-beta.
Terminating queries before completion based on their execution time (via either the SetMaxQueryTime() API call, or the SELECT ... OPTION max_query_time SphinxQL statement) is a nice safety net, but it comes with an inherent drawback: nondeterministic (unstable) results. That is, if you repeat the very same (complex) search query with a time limit several times, the time limit will be hit at different stages, and you will get different result sets.
Starting with 2.1.1-beta, there is a new option, SELECT ... OPTION max_predicted_time, that lets you limit the query time and get stable, repeatable results. Instead of regularly checking the actual current time while evaluating the query, which is nondeterministic, it predicts the current running time using a simple linear model:
predicted_time = doc_cost * processed_documents + hit_cost * processed_hits + skip_cost * skiplist_jumps + match_cost * found_matches
The query is then terminated early when the predicted_time
reaches a given limit.
Of course, this is not a hard limit on the actual time spent (it is, however, a hard limit on the amount of processing work done), and a simple linear model is in no way perfectly precise. So the wall clock time may be either below or above the target limit. However, the error margins are quite acceptable: for instance, in our experiments with a 100 msec target limit the majority of the test queries fell into a 95 to 105 msec range, and all of the queries were within an 80 to 120 msec range. Also, as a nice side effect, using the modeled query time instead of measuring the actual run time results in somewhat fewer gettimeofday() calls, too.
No two server makes and models are identical, so the predicted_time_costs directive lets you configure the costs for the model above. For convenience, they are integers, counted in nanoseconds. (The limit in max_predicted_time is counted in milliseconds, and having to specify cost values as 0.000128 ms instead of 128 ns is somewhat more error prone.) It is not necessary to specify all 4 costs at once, as the missing ones will take the default values. However, we strongly suggest specifying all of them, for readability.
predicted_time_costs = doc=128, hit=96, skip=4096, match=128
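The linear model above can be sketched directly. A minimal illustration (the function and the counter values are hypothetical; Sphinx tracks the actual counters internally while evaluating a query):

```python
# Sketch of the query time prediction model behind max_predicted_time.
# Default per-unit costs, in nanoseconds, as listed in the docs.
DEFAULT_COSTS = {"doc": 64, "hit": 48, "skip": 2048, "match": 64}

def predicted_time_ms(processed_documents, processed_hits,
                      skiplist_jumps, found_matches, costs=DEFAULT_COSTS):
    """Return the modeled query running time, in milliseconds."""
    ns = (costs["doc"] * processed_documents
          + costs["hit"] * processed_hits
          + costs["skip"] * skiplist_jumps
          + costs["match"] * found_matches)
    return ns / 1e6  # nanoseconds -> milliseconds

# Example: a query that processed 500K documents and 1M hits, made
# 10K skiplist jumps, and found 100K matches would be modeled at
# 106.88 msec, and terminated early once the running prediction
# exceeds the OPTION max_predicted_time limit.
t = predicted_time_ms(500_000, 1_000_000, 10_000, 100_000)
```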
fixed #1994, parsing of empty JSON arrays
fixed #1987, handling of index_exact_words with AOT morphology and infixes on
fixed #1984, teaching HTML parser to handle hex numbers
fixed #1983, master and agents networking issue
fixed #1977, escaping of characters doesn't work with exceptions
fixed #1968, parsing of WEIGHT() function (queries to distributed indexes affected)
fixed #1933, quorum operator works incorrectly if its number is an exception
fixed #1932, fixed daemon index recovery after failed rotation
fixed #1923, crash at indexer
with dict=keywords
fixed #1918, fixed crash while hitless words are used within fulltext operators which require hits
fixed #1878, daemon doesn't reset regexp_filter after rotation with seamless_rotate=0
fixed #1682, field end modifier doesn't work with words containing blended chars
fixed #1917, field limit propagation outside of group
fixed #1915, exact form passes to index skipping stopwords filter
fixed #1905, multiple lemmas at the end of a field
fixed #1903, indextool
check mode for hitless indexes and indexes with large amount of documents
fixed #1884, crash at SNIPPET() with local indexes at distributed index
fixed #1802, loading large keywords dictionary
fixed #1786, indextool
fails to handle indexes with AOT morphology
fixed crash of daemon on logging extra large message
fixed expression engine: division by zero, log and sqrt() functions of non-positive arguments
fixed LCS and min_best_span_pos computation
fixed unnecessary escaping in JSON result set
fixed Quick Tour documentation chapter
fixed #1882, race of periodic and forced FLUSHing on an RT index
fixed #1881, quorum syntax with '.' as blended char
fixed #1880, crash on multiquery with one incorrect query
fixed #1876, crash on words with large code points and infix searches
fixed #1875, fixed crash on adding documents with long words in dict=keyword
index with morphology and infixes enabled
fixed #1857, crash in arabic stemmer
fixed evaluating of LCS by an expression ranker
fixed #1848, infixes and morphology clash
fixed #1823, indextool
fails to handle indexes with lemmatizer morphology
fixed #1799, crash in queries to distributed indexes with GROUP BY on multiple values
fixed #1718, expand_keywords
option lost in disk chunks of RT indexes
fixed documentation on rt_flush_period
fixed network protocol issue which results in timeouts of libmysqlclient
for big Sphinx responses
fixed #1778, indexes with more than 255 attributes
fixed #1777, ORDER BY WEIGHT()
fixed #1796, missing results in queries with quorum operator of indexes with some lemmatizer
fixed #1780, incorrect results while querying indexes with wordforms, some lemmatizer and enable_star=1
fixed SHOW PROFILE for fullscan queries
fixed --with-re2 check
fixed #1753, path to re2 sources could not be set using --with-re2
, options --with-re2-libs
and --with-re2-includes
added to configure
fixed #1739, erroneous conversion of RAM chunk into disk chunk when loading id32 index with id64 binary
fixed #1738, unlinking RAM chunk when converting it to disk chunk
fixed #1710, unable to filter by attributes created by index_field_lengths=1
fixed #1716, random crash with multiple running threads
fixed crash while querying index with lemmatizer and wordforms
added FLUSH RAMCHUNK statement
added SHOW PLAN statement
added support for GROUP BY on multiple attributes
added BM25F() function to SELECT
expressions (now works with the expression based ranker)
added indextool --fold
command and -q
switch
added JSON debug check for RT index RAM chunk
added LENGTH() function for MVA
added missing rt_attr_bool directive
added support for selecting over 250 columns via SphinxQL
deprecated custom sort mode, and str2ordinal and str2wordcount attribute types
optimized SELECT, UPDATE for indexes with many attributes (up to 3.5x speedup in extreme cases)
optimized JSON attributes (up to 5-20% faster SELECTs using JSON objects)
optimized xmlpipe2 indexing (up to 9 times faster on some schemas)
fixed #1684, COUNT(DISTINCT smth) with implicit GROUP BY
returns correct value now
fixed #1672, exact token AOT vs lemma (indexer
skips exact form of token that passed AOT through tokenizer)
fixed #1659, fail while loading empty infix dictionary with dict=keywords
fixed #1638, force explicit JSON type conversion for aggregate functions
fixed #1628, GROUP_CONCAT() and GROUPBY() support for distributed agents
fixed #1619, INTEGER()
conversion function doesn't support signed integers
fixed #1615, global IDF vs exact term (=term); fixed global IDF for missed terms; fixed SphinxQL global_idf=0 option
fixed #1607, now ignoring binlog when running daemon with --console
flag
fixed #1606, hard interruption of the daemon by Ctrl+C (SIGINT) signal
fixed #1592, duplicates vs expression ranker
fixed #1578, SORT BY string attribute via API attr_asc
\ attr_desc
fixed #1575, crash of daemon on MVA receive from agents with dist_threads enabled
fixed #1574, agent got kill list of local indexes of distributed index
fixed #1573, ranker expression vs expanded terms
fixed #1572, BM25F
vs negative terms
fixed #1550, float got cut at full-text part of a query
fixed #1541, BM25F
expression in distributes indexes
fixed #1508, #1522, distributed index query lasts up to agent_connect_timeout with epoll path
fixed #1508, master failed to connect waiting agents up to agent_connect_timeout time
fixed #1489, filtering by integer field in JSON using floating point precision
fixed #1485, index_exact_words vs keyword dict with infix
fixed #1478, memory leaks in daemon: PACKEDFACTORS() as UDF argument, index query tokenizer, expression ranker SUM()
fixed #1470, broken UDF unpack (since r3738 UDF version 2)
fixed #1468, multiple conditions in WHERE
for JSON attributes
fixed #1466, index_field_lengths vs XML data source
fixed #1463, daemon shutdown vs RT index optimize (added forced terminate of long merging operation)
fixed #1460, aggregate functions AVG()
, MAX()
, MIN()
, SUM()
do not work for JSON attributes
fixed #1459, BM25F
doesn't work with field_string fields
fixed #1458, factors to copy field_tf
at UDF
fixed #1450, garbage in JSON fields when selecting them from a RT index
fixed #1449, broken build on Mac OS X
fixed #1445, field-start/field-end modifiers did not work for star-expanded keywords
fixed #1443, morphology=lemmatizer_ru_all now works with index_exact_words=1 (exact forms can be matched)
fixed #1442, incorrect COUNT(*)
value in queries to distributed indexes with implicit GROUP BY
fixed #1439, filters on float values in JSON issue, string values quoting issue
fixed #1399, filter error message on string attribute
fixed #1384, added possibility to define a custom DSN line with source=mssql (as with source=odbc)
fixed ATTACH vs wordforms or stopwords; after daemon was restarted this setting was getting lost in RT indexes
fixed balancing of agents in HA
fixed interaction of index_exact_words and AOT lemmatizer
fixed epoll invoking and turned on by default
fixed incorrect handling of wildcards in tokenizer
fixed infix indexing with dict=keywords
fixed max_predicted_time integer overflows
fixed memory error in tokenizer
fixed several memory leaks
fixed PACKEDFACTORS()
to work in different GROUP BY
queries
fixed preprocessor definitions for RE2 in VS solution
fixed rotation of global IDF for workers=threads
and seamless_rotate=1
fixed rotation of old indexes
fixed RT kill list survives TRUNCATE
and works in newly ATTACH
ed index
fixed saving id32 RT index with id64 daemon
fixed stemmer vs RT index INSERT
fixed string case error with JSON attributes in select list of a query
fixed TOP_COUNT
usage in misc/suggest
and updated to PHP 5.3 and UTF-8
added query profiling (SET PROFILING=1 and SHOW PROFILE statements)
added AOT-based Russian lemmatizer (morphology={lemmatize_ru | lemmatize_ru_all}, lemmatizer_base, and lemmatizer_cache directives)
added wordbreaker, a tool to split compounds into individual words
added JSON attributes support (sql_attr_json, on_json_attr_error, json_autoconv_numbers, json_autoconv_keynames directives)
added initial subselects support, SELECT * FROM (SELECT ... ORDER BY cond1 LIMIT X) ORDER BY cond2 LIMIT Y
added bigram indexing, and phrase searching with bigrams (bigram_index, bigram_freq_words directives)
added HA/LB support, ha_strategy and agent_persistent directives, SHOW AGENT STATUS statement
added RT index optimization (OPTIMIZE INDEX statement, rt_merge_iops and rt_merge_maxiosize directives)
added wildcards support to dict=keywords (eg. "t?st*")
added substring search support (min_infix_len=2 and above) to dict=keywords
added --checkconfig switch to indextool to check config file for correctness (bug #1395)
added global IDF support (global_idf directive, OPTION global_idf)
added "term1 term2 term3"/0.5 quorum fraction syntax (bug #1372)
added an option to apply stopwords before morphology, stopwords_unstemmed directive
added an alternative method to compute keyword IDFs, OPTION idf=plain
added boolean query optimizations, OPTION boolean_simplify=1 (bug #1294)
added stringptr return type support to UDFs, and CREATE FUNCTION ... RETURNS STRING syntax
added early query termination by predicted execution time (OPTION max_predicted_time, and predicted_time_costs directive)
added index_field_lengths directive, BM25A() and BM25F() functions to expression ranker
added ranker=export, and PACKEDFACTORS() function
added support for attribute files over 4 GB (bug #1274)
added addr2line output to crash reports (bug #1265)
added OPTION ignore_nonexistent_columns to UPDATE, and a respective UpdateAttributes() argument
added --keep-attrs switch to indexer
added --with-static-mysql, --with-static-pgsql switches to configure
added --morph, --dumpdict switch to indextool
added support for multiple wordforms files, comment syntax, and pre/post-morphology wordforms
added ZONESPANLIST() builtin function
added regexp_filter directive, regexp document/query filtering support (uses RE2)
added min_idf, max_idf, sum_idf ranking factors
added uservars persistence, and sphinxql_state directive (bug #1132)
added ZONESPAN operator
added snippets_file_prefix directive
added Arabic stemmer, morphology=stem_ar directive (bug #519)
added OPTION sort_method={pq | kbuffer}, an alternative match sorting method
added SPZ (sentence, paragraph, zone) support to RT indexes
added support for up to 255 keywords in quorum operator (bug #1030)
added multi-threaded agent querying (bug #1000)
added SHOW INDEX indexname STATUS statement
added LIKE clause support to multiple SHOW xxx statements
added SNIPPET() function
added GROUP_CONCAT() aggregate function
added GROUPBY() builtin function
added iostats and cpustats to SHOW META
added support for DELETE statement over distributed indexes (bug #1104)
added EXIST('attr_name', default_value) builtin function (bug #1037)
added SHOW VARIABLES WHERE variable_name='xxx' syntax
added TRUNCATE RTINDEX statement
changed that UDFs are now allowed in fork/prefork modes via sphinxql_state startup script
changed that compat_sphinxql_magics now defaults to 0
changed that small enough exceptions, wordforms, stopwords files are now embedded into the index header
changed that rt_mem_limit can now be over 2 GB (bug #1059)
optimized tokenizer (up to 1.25x indexing and snippets speedup)
optimized multi-keyword searching (added skiplists)
optimized filtering and scan in several frequent cases (single-value, 2-arg, 3-arg WHERE clauses)
fixed #1778, SENTENCE and PARAGRAPH operators and infix stars clash
fixed #1774, stack overflow on parsing large expressions
fixed #1744, daemon failed to write to log file bigger than 4G
fixed #1705, expression ranker handling of indexes with more than 32 fields
fixed #1700, crash and cutoff in fullscan reverse_scan=1 queries
fixed #1698, proper handling of stopword with blended chars
fixed #1682, field end modifier and index_exact_words clash
fixed #1678, memory leak in SUM() function of an expression ranker
fixed #1670, updating of MVA attributes in distributed indexes via API
fixed #1662, EscapeString() API escapes '<' too now
fixed #1520, SetLimits() API documentation
fixed #1491, documentation: space character is prohibited in charset_table
fixed memory leak in expressions with max_window_hits
fixed rt_flush_period - less strict internal check and more frequent flushes overall
fixed #1655, special characters like ()?* were not processed correctly by exceptions
fixed #1651, CREATE FUNCTION can now be used with BIGINT return type
fixed #1649, incorrect warning message (about statistics mismatch) was returned when mixing wildcards and regular keywords
fixed #1603, passing MVA64 arguments to non-MVA functions caused unpredicted behaviour and crashes (now explicitly forbidden)
fixed #1601, negative numbers in IN() clause caused a syntax error
fixed #1581, dict=keywords and sql_joined_field occasionally caused indexer
to build corrupted indexes
fixed #1546, file descriptor leaked on index rotation (that eventually prevented searchd
from reloading indexes)
fixed #1537, COUNT(*)
and compat_sphinxql_magics=0 via SphinxAPI caused an incorrect error message
fixed #1531, #1589, several matching and highlighting issues when using both blend_chars and multi-wordforms
fixed #1521, indextool --check
did not handle empty RT MVA and gave an incorrect warning
fixed #1392, SphinxSE builds with MySQL 5.6 now
fixed #757, wordforms shared between multiple indexes with different tokenizer settings failed to load (they now load with a warning)
fixed that batch queries did not batch in some cases (because of internal expression alias issues)
fixed that CALL KEYWORDS occasionally gave incorrect error messages
fixed searchd crashes on ATTACHing plain indexes with MVAs
fixed several deadlocks and other threading issues
fixed incorrect sorting order with utf8_general_ci
fixed that in some cases incorrect attribute values were returned when using expression aliases
optimized xmlpipe2 indexing
added a warning for missed stopwords, exception, wordforms files on index load and in indextool --check
fixed #1515, log strings over 2KB were clipped when query_log_format=plain
fixed #1514, RT index disk chunk lost attribute updates on daemon restart
fixed #1512, crash while formatting log messages
fixed #1511, crash on indexing PostgreSQL data source with MVA attributes
fixed #1509, blend_chars vs incomplete multi-form and overshort
fixed #1504, RT binlog replay vs descending tid on update
fixed #1499, sql_field_str2wordcount actually is int, not string
fixed #1498, exceptions starting with a number now work too
fixed #1496, multiple destination keywords in wordform
fixed #1494, lost 'mod', '%' operations in select list. Also corrected a few typos in the docs.
fixed #1490, expand_keywords vs prefix
fixed #1487, `id` in expression fixed
fixed #1483, snippets limits fix
fixed #1481, shebang config changes check on rotation
fixed #1479, port handling in PHP Sphinx API
fixed #1474, daemon crash at SphinxQL packet overflows max_packet_size
fixed #1472, crash on loading index to indextool
for check
fixed #1465, expansion_limit got lost in index rotation
fixed #1427, #1506, utf8 3 and 4-bytes codepoints
fixed #1405, BETWEEN with mixed int and float values
fixed #1475, memory leak in the expression parser
fixed #1457, error messages over 2KB were clipped
fixed #1454, searchd did not display an error message when the binlog path did not exist
fixed #1441, SHOW META in a query batch was returning the last non-batch error
fixed #1435, typo in the documentation
fixed #1430, rt_flush_period now works even with a disabled binlog
fixed #1427, overlong 4-byte UTF-8 codes in source text could cause indexer crashes or index corruption
fixed #1418, warnings from local index searches were lost with dist_threads>0
fixed #1417, crash handler now works on searchd startup stage, too (eg. to report index load time crashes)
fixed #1410, bad numerics like '123abc' now result in a proper SphinxQL error message
fixed #1404, a tiny memory leak in shared mutex
fixed #1394, race in --iostats caused incorrect I/O statistics in threaded modes
fixed #1391, QUORUM operator vs docinfo=inline returned wrong attribute values
fixed #1389, edge case in the ORDER operator caused occasional searchd crashes
fixed #1382, query parts with field limits but without real keywords (like '@name {') are now simply ignored and no longer cause a query syntax error
fixed #1370, Windows indexer builds failed to fetch rows from MSSQL 2012
fixed #1368, ORDER BY RAND() did not work in RT indexes
fixed #1364, queries with hitless words could occasionally crash searchd
fixed #1363, '*' in charset_table was causing query syntax errors with enable_star=1
fixed #1353, added filtering by 'id' syntax (in addition to '@id') to SphinxSE
fixed #1346, fixed NEAR operator behavior vs duplicated keywords
fixed #1345, invalid PROXIMITY operator threshold now causes a query syntax error rather than unexpected search behavior
fixed #1343, misconfigured indexes with 0 full text fields are now explicitly forbidden
fixed #1342, specific error messages (from the preload stage) went missing when failing to load the indexes
fixed #1339, no warning on inconsistent word statistics
fixed #1335, typo in searchd help screen
fixed #1334, typo in SELECT documentation
fixed #1316, PHRASE operator did not match in a rare self-repeating document/query case
fixed #1297, letting queries complete gracefully instead of killing them off in seamless_rotate=1, workers=prefork case
fixed #1295, mentioned index naming requirements (proper identifier) in the FROM clause docs
fixed #1221, incorrect results when using @groupby in select list via SphinxAPI with compat_sphinxql_magics=0
fixed #1180, special SPZ chars occasionally leaking into snippets
fixed #1171, preforked children did not reload logs on SIGUSR1
fixed #1150, added support for `id` syntax in DELETE and parents in WHERE
fixed #1135, crashes when using MVA/strings attributes in expression ranker
fixed #1124, corrupted attributes after merging with an empty index
fixed #1090, SphinxSE snippets UDF updated to support MySQL 5.5
fixed #1041, added initial support for MVA updates (and other mutex protected things) on FreeBSD
fixed #999, fullscan returned empty result sets in mixed batches of fullscan and fulltext queries
fixed #921, document count/bytes 32bit overflow in indexer progress output
fixed #539, added processing suffix rules with dots in .affix file to spelldump
fixed #481, rotation did not work on Windows with preopen=1
fixed #268, added warnings about duplicate elements in xmlpipe2
fixed CSphStaticMutex (double initialization issue)
fixed documentation typo in SQL data sources
fixed too-late initialization of mutex at daemon
fixed that an instance of searchd resurrected by watchdog could leak resources and/or crash
added a console message about crashes during index loading at startup
added more debug info about failed index loading
fixed #1322, J connector broken in rel20 branch (worked in trunk)
fixed #1321, 'set names utf8' passes, but 'set names utf-8' doesn't because of syntax error '-'
fixed #1318, unhandled float comparison operators at filter
fixed #1317, FD leaks on thread seamless rotation
fixed #1313, crash on stopping daemon with incorrect RT index config
fixed #1306, queries like 'jolly roger ;)' and '(((((((((9 brackets)' crashed searchd
fixed #1304, OS X debug compilation
fixed #1302, daemon random crashes on OS X
fixed #1301, indexer
fails to send rotate signal
fixed #1300, lost index settings on attach
fixed #1285, crash on running searchd
with syslog
and watchdog
fixed #1279, linking against explicitly disabled iconv. Also added --with-libexpat
to configure options, which is sometimes required on systems without XML support
fixed #1277, broken build on some toolchains (like uClibc) where LLONG_MIN is not defined; added ULLONG_MAX
fixed #1274, large (>4GB) .spa file failed to load
fixed #1269, crash at RT index with MVA from disk chunk previously updated
fixed #1268, removed a useless warning
fixed #1264, string and MVA attributes aliasing works again
fixed #1246, attributes of 100 character length not being saved
fixed #1216, typos, mem_limit default size and RT documentation
fixed #1148, RT documentation updated
fixed #1140, mem_limit default value
fixed #1138, updated documentation on sql_attr_string
fixed #1129, snippets vs empty files and empty filenames
fixed #1123, configure compatibility fix
fixed #1122, 64bit sql_range_step
fixed #1082, crashes and deadlocks on OS X with workers=threads, and a leak of a read-write lock
fixed #1081, selecting only COUNT(DISTINCT attr1) while grouping by attr2
fixed #1064, mistake while working with timestamp functions
fixed #1043, inaccurate distinct count with many indexes or a distributed index
fixed #1042, arithmetic expressions overflow
fixed #1007, Russian stemming on big endian systems
fixed #986, asserting in SetRankingMode (PHP API)
fixed #975, incorrect ranking in some rare cases
fixed #967, Python API type checking error
fixed #934, API vs fullscan vs non-empty query
fixed #899, error if using SetFilterRange as HAVING from SQL
fixed #867, indexer
accepts index names starting with digit or _
fixed #699, signed vs unsigned 64-bit DocIDs in SphinxQL
fixed #668, now ignoring single @ character (incorrect field operator)
fixed #611, @! operator vs non-existent field, updated documentation
fixed #412, multiple --filter
arguments work as they should in search utility
fixed #108, support for system libstemmer library. The libstemmer sources placed into libstemmer_c
are preferred, but the system lib will be tried if no sources are found
fixed ORDER BY output at query log with SphinxQL mode
fixed documentation entry about sql_joined_field
fixed sample config file
fixed x64 configurations for libstemmer
fixed #1258, xmlpipe2
refused to index indexes with docinfo=inline
fixed #1257, legacy groupby modes vs dist_threads
could occasionally return wrong search results (race condition)
fixed #1253, missing single-word query performance optimization (simplified ranker) vs prefix-expanded keywords vs dict=keywords
fixed #1252, COUNT(*) vs dist_threads could occasionally crash (race condition)
fixed #1251, missing expression support in the IN() function
fixed #1245, FlushAttributes mistakenly disabled by attr_flush_period=0 setting
fixed #1244, per-API-command (search, update, etc) statistics were not updated by SphinxQL requests
fixed #1243, misc issues (broken statistics, weights, checks) with very long keywords having blended parts in RT indexes
fixed #1240, embedded xmlpipe2
schema with more attributes than the sphinx.conf
one caused indexer
to crash
fixed #1239, memory leak when optimizing ABS(const)
and other 1-arg functions
fixed #1228, #761, #1183, #1190, #1198, misc issues occasionally caused by MVA updates (crash on SaveAttributes; index rotation vs index name and TID; looped MVA updates; persistent MVA removal on rotation)
fixed #1227, API queries with SetGeoAnchor()
were logged incorrectly in SphinxQL-format query logs (query_log_format=sphinxql
)
fixed #1214, phrase query parsing issues when blend_chars contained a quote (") symbol
fixed #1213, attribute aliases were not recognized by the subsequent SELECT
items
fixed #1210, crash when indexing an index with joined fields only (no regular fields)
fixed #1209, xmlpipe_fixup_utf8
off by a byte on certain (pretty rare) malformed sequences
fixed #1202, various issues with CALL KEYWORDS
vs RT indexes (crashes vs dict=keywords
, missing modifiers in output)
fixed #1201, snippets vs query_mode=1
vs complex OR-queries could occasionally crash
fixed #1197, indexer
running out of disk space could either crash, or fail to display a proper error message
fixed #1185, keywords with wildcards were not handled when highlighting the entire document
fixed #1184, indexer
crash when ngram_chars was set, but ngram_len=0
fixed #1182, indexer
crash on certain combinations of docinfo=inline
vs bitfields
fixed #1181, GROUP BY
on a MVA64 was truncated at 32 bits
fixed #1179, passage_boundary
in snippets could get ignored (when highlighting the entire document)
fixed #1178, indexer
could crash when charset_table
specified out-of-bounds codes
fixed #1177, SPZ queries in snippets erroneously required passage_boundary option to be explicitly set
fixed #1176, multi-queries with a GROUP/ORDER BY
on a string attributed crashed
fixed #1175, connection id mismatch in SphinxQL-format query logs
fixed #1167, nested parentheses in a full-text query could mistakenly reset preceding field or zone limit operator
fixed #1158, float range filters were not supported in a multi-query batch optimizer
fixed #1157, broken gcc-4.7 build
fixed #1156, empty result set instead of an error message when querying distributed indexes with compat_sphinxql_magics=1 and hitting an error
fixed #1143, dash after a number incorrectly parsed as an operator NOT
fixed #1137, searchd
--stopwait hanged when the running instance crashed during shutdown
fixed #1136, high idle CPU load on systems without pthread_timed_lock()
fixed #1134, issues with prefork
workers on systems without pthread_timed_lock()
fixed #1133, BuildExcerpts()
on a distributed index with load_files
did not distribute the jobs
fixed #1126, inaccurate hits sorting progress report on joined field indexing
fixed #1121, occasional bad entries (wrong characters or invalid SQL) in SphinxQL-format query log
fixed #1118, libsphinxclient
requests failed when using SPH_RANK_EXPR
fixed #1073, improved handling of wordforms/multiforms rules referring to stopwords
fixed #1062, bigint filter ranges truncated when searching via SphinxQL
fixed #1052, SphinxSE range arguments with leading zeroes mistakenly parsed as octal
fixed #1011, negative MVA64 values mistakenly converted to positive (on indexing and/or output)
fixed #974, crash when logging queries over 2048 bytes with performance counters enabled
fixed #909, field-end modifier was ignored when followed by a non-whitespace syntax character (eg quote or bracket)
fixed #907, issue with bigint filtering (large positive or negative values)
fixed #906, #1074, Mac OS X 10.7.3 builds (conflicting memory allocation routines in Sphinx and external libs)
fixed #901, #1066, sending bigger request packets was broken in Python API
fixed #879, filters on weight-dependent expressions did not work correctly
fixed #553, default/missing port value was not handled properly in SetServer() API call
fixed that blended vs multiforms vs min_word_len could hang the query parser
fixed missing command-line switches documentation
fixed #605, pack vs mysql compress
fixed #783, #862, #917, #985, #990, #1032 documentation bugs
fixed #885, bitwise AND/OR were not available via API
fixed #984, crash on indexing data with MAGIC_CODE_ZONE symbol
fixed #1004, RT index loses words from dictionary on segments merging with id64
enabled
fixed #1035, daemon didn't properly handle FDs on socket overflow over FD_SETSIZE (*nix, preopen_indexes=0, worker=threads)
fixed #1038, quoted string for API select
fixed #1046, head SPZ overflow, snippet generation at non fast with SPZ
fixed #1048, distributed index couldn't sort/filter because of missing attributes
fixed #1050, expression ranker vs agents
fixed #1054, max_query_time not handled properly on searching at RT index
fixed #1055, expansion_limit on searching at RT disk chunks
fixed #1057, daemon crashes on generating snippet with 0 documents provided
fixed #1060, load_files_scattered didn't work
fixed #1065, libsphinxclient vs distribute index (agents)
fixed #1067, modifiers were not escaped in legacy query emulation
fixed #1071, master - agent communication got slower for a large query
fixed #1076, #1077, (redundant copying, and a possible mutex leak with uservars)
fixed #1078, blended
vs FIELD_END
fixed #1084, crash/index corruption on loading persistent MVA
fixed #1091, RT ATTACH of plain index with string/MVA attributes preceding regular attributes
fixed #1092, update got binlogged with wrong TID
fixed #1098, crash on creating large expression
fixed #1099, cleaning up temporary files on fail of indexing
fixed #1100, missing xmlpipe_attr_bigint config directive
fixed #1101, now ignoring dashes within keywords when dash is not in charset_table
fixed #1103, ZONE operator incorrectly worked on more than one keyword in a simple zone
fixed #1106, optimized WHERE id=value
, WHERE id IN (values_list)
clauses used in SELECT
, UPDATE
statements
fixed #1112, Sphinx didn't work out-of-the-box because of a collision of the binlog_path option
fixed #1116, crash on FLUSH RTINDEX
unknown-index-name
fixed #1117, occasional RT headers corruption (leading to crashes and/or missing results)
fixed #1119, missing expression ranker support in SphinxSE
fixed #1120, negative total_found, docs and hits counter on huge indexes
fixed #1031, SphinxQL parsing syntax for MVA in INSERT/REPLACE statements
fixed #1027, stalls on attribute update in high-concurrency load
fixed #1026, daemon crash on malformed API command
fixed #1021, max_children option was ignored with workers=threads
fixed #1020, crash on large attribute files loading
fixed #1014, crash on rotation when index has been removed from config file (workers=threads, *nix box)
fixed #1001, broken MVA files in RT index while saving disk chunk
fixed #995, crash on empty MVA updates
fixed #994, crash on daemon shutdown with seamless_rotate=0 and workers=threads
fixed #993, #998, crash on replaying DELETE statement vs RT index with dict=keywords; fixed sequential INSERT into dict=keywords index right after INSERT into dict=crc index
fixed #991, crash on indexing MS SQL source with mssql_unicode enabled
fixed #983, #950, crash on host name lookup (SphinxSE with MySQL 5.5)
fixed #981, snippet inconsistency with allow_empty=0
fixed #980, broken index produced by index merge in rare cases
fixed #971, absent error message at master on agent "maxed out"
fixed #695, #815, #835, #866, malformed warnings in SphinxQL
fixed build of SphinxSE with MySQL 5.1
fixed crash log for 'fork' and 'prefork' workers
added keywords dictionary (dict=keywords) support to RT indexes
added MVA, index_exact_words support to RT indexes (#888)
added MVA64 (a set of BIGINTs) support to both disk and RT indexes (rt_attr_multi_64 directive)
added an expression-based ranker, and a number of new ranking factors
added ATTACH INDEX statement that converts a disk index to RT index
added WHERE clause support to UPDATE statement
added bigint, float, and MVA attribute support to UPDATE statement
added support for up to 256 searchable fields (was up to 32 before)
added FIBONACCI() function to expressions
added load_files_scattered option to snippets
added implicit attribute type promotions in multi-index result sets (#939)
added index names to indexer progress message on merge (#928)
added --replay-flags switch to searchd
added string attribute support and a few previously missing snippets options to SphinxSE
added previously missing Status(), SetConnectTimeout() API calls to Python API
added ORDER BY RAND() support to SELECT statement
added Sphinx version to Windows crash log
added RT index support to indextool --check (checks disk chunks only) (#877)
added prefork_rotation_throttle directive (preforked children restart delay, in milliseconds) (#873)
added on_file_field_error directive (different sql_file_field handling modes)
added manpages for all the programs
added syslog logging support
added sentence, paragraph, and zone support in html_strip_mode=retain mode to snippets
optimized search performance with many ZONE operators
improved suggestion tool (added Levenshtein limit, removed extra DB fetch)
improved sentence extraction (handles salutations, starting initials better now)
changed max_filter_values sanity check to 10M values
added FLUSH RTINDEX statement
added dist_threads directive (parallel processing), load_files, load_files_scattered, batch syntax (multiple documents) support to CALL SNIPPETS statement
added OPTION comment='...' support to SELECT statement (#944)
added SHOW VARIABLES statement
added dummy handlers for SET TRANSACTION, SET NAMES, SELECT @@sysvar statements, and for sql_auto_is_null, sql_mode, and @@-style variables (like @@tx_isolation) in SET statement (better MySQL frameworks/connectors support)
added complete SphinxQL error logging (all errors are logged now, not just SELECTs)
improved SELECT statement syntax, made expressions aliases optional
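Several of the SphinxQL additions above can be seen side by side in the following sketch (index names rt1 and plain1 are placeholders, not part of the release notes):

```sql
-- convert an existing disk index into an RT index
ATTACH INDEX plain1 TO RTINDEX rt1;

-- UPDATE now accepts a WHERE clause, and bigint/float/MVA attributes
UPDATE rt1 SET price=9.99, tags=(1,2,3) WHERE MATCH('book') AND price>10.0;

-- ORDER BY RAND() is now accepted by SELECT
SELECT id FROM rt1 WHERE MATCH('book') ORDER BY RAND();

-- flush RT index RAM chunk to disk
FLUSH RTINDEX rt1;
```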
fixed #982, empty binlogs prevented upgraded daemon from starting up
fixed #978, libsphinxclient build failed on sparc/sparc64 solaris
fixed #977, eliminated (most) compiler warnings
fixed #969, broken expression MVA/string argument type check prevented IF(IN(mva..)) and other valid expressions from working
fixed #966, NOT IN @global_var syntax was not supported
fixed #958, mem_limit over INT_MAX was not clamped
fixed #954, UTF-8 snippets could crash on malformed data
fixed #951, UTF-8 snippets could hang on malformed data
fixed #947, bad float column type was reported via SphinxQL, breaking some clients
fixed #940, group-by with a small enough max_matches limit could occasionally crash and/or sort wrongly
fixed #932, sending huge queries to agents occasionally failed (mainly on Windows)
fixed #926, snippets did not highlight wildcard matches with morphology enabled
fixed #918, crash logger did not report a proper query in dist_threads case
fixed #916, watchdog caused (endless) respawns if there was a crash during shutdown
fixed #904, attribute names were not forcibly case-folded in some API calls (eg. SetGroupDistinct)
fixed #902, query parser did not support stopword_step=0
fixed #897, network sockets dangled (open but unattended) while replaying binlog
fixed #855, allow_empty option in snippets did not always work correctly
fixed #854, indexing with many bigint attributes and docinfo=inline crashed
fixed #838, RT MVA insertion did not sort MVA values, caused matching issues
fixed #833, duplicate MVA values were not eliminated on update
fixed #832, certain (overshort/incorrect) documents crashed indexing MS SQL Unicode columns
fixed #829, query parser did not properly handle numerics with blend_chars
fixed #814, group-by on string attributes in RT indexes did not always work correctly
fixed #812, utf8 stemming produced unexpected stems on words with single-byte chars
fixed #808, huge queries crashed logging with query_log_format=sphinxql
fixed #806, stray single-star keyword crashed on querying
fixed #798, snippets ignored index_exact_words in query_mode
fixed #797, RT klist loader had an occasional off-by-one crash
fixed #791, preopen_indexes erroneously defaulted to 0 on Windows
fixed #790, huge dictionaries (over 4 GB) did not work
fixed #786, inplace_enable could occasionally corrupt the indexes
fixed #775, doc had a typo (soundex vs metaphone)
fixed #772, snippets duplicated blended chars on a SPZ boundary
fixed #762, query parser truncated digit-only keywords over 15 digits
fixed #736, query parser did not properly handle blended/special char sequences
fixed #726, rotation of an index with a changed attribute count crashed
fixed #687, querying multiple indexes with index weights and sort-by expression produced incorrect (unadjusted) weights
fixed #585, (unsupported) string ordinals were silently zeroed out with docinfo=inline (instead of failing)
fixed #583, certain keywords could occasionally crash multiforms
fixed that concurrent MVA updates could crash
fixed that query parser did not ignore a pure blended token with a leading modifier
fixed that query parser did not properly handle a modifier followed by a dash
fixed that substring indexing with dict=crc did not support index_exact_words and zones
fixed that in a rare edge case common subtree cache could crash
fixed that empty result set returned the full schema (rather than just the SELECT-ed columns)
fixed that SphinxQL did not have a sanity check for (currently unsupported) result set schemas over 250 attributes
fixed that updates on regular indexes were not binlogged
fixed that multi-query optimization check for expressions did not handle multi-index case
fixed that SphinxSE did not build vs MySQL 5.5 release
fixed that proximity_bm25 ranker could yield incorrect weight on duplicated keywords
fixed that prefix expansion with dict=keywords occasionally crashed
fixed that strip_path did not work on RT disk chunks
fixed that exclude filters were not properly logged in query_log_format=sphinxql mode
fixed that plain string attribute check in indextool --check was broken
fixed that Java API did not allow specifying a connection timeout
fixed that ordinal and wordcount attributes could not be fetched via SphinxQL
fixed that in a rare edge case OR/ORDER would not match properly
fixed that sending (huge) query responses did not handle EINTR properly
fixed that SPH04 ranker could yield incorrectly high weight in some cases
fixed that C API did not allow zeroing out cutoff, max_matches settings
fixed that on a persistent connection there were occasionally issues handling signals while doing network reads/waits
fixed that in a rare edge case (field start modifier in a certain complex query) querying crashed
fixed that snippets did not support dist_threads with load_files=0
fixed that in some extremely rare edge cases tiny parts of an index could end up corrupted with dict=keywords
fixed that field/zone conditions were not propagated to expanded keywords with dict=keywords
added remapping support to blend_chars directive
added multi-threaded snippet batches support (requires a batch sent via API, dist_threads, and load_files)
added collations (collation_server, collation_libc_locale directives)
added support for sorting and grouping on string attributes (ORDER BY, GROUP BY, WITHIN GROUP ORDER BY)
added UDF support (plugin_dir directive; CREATE FUNCTION, DROP FUNCTION statements)
added query_log_format directive, SET GLOBAL query_log_format | log_level = ... statements; and connection id tracking
added sql_column_buffers directive, fixed out-of-buffer column handling in ODBC/MS SQL sources
added blend_mode directive that enables indexing multiple variants of a blended sequence
added UNIX socket support to C, Ruby APIs
added ranged query support to sql_joined_field
added rt_flush_period directive
added thread_stack directive
added SENTENCE, PARAGRAPH, ZONE operators (and index_sp, index_zones directives)
added keywords dictionary support (and dict, expansion_limit directives)
added passage_boundary, emit_zones options to snippets
added a watchdog process in threaded mode
added persistent MVA updates
added crash dumps to searchd.log, deprecated crash_log_path directive
added id32 index support in id64 binaries (EXPERIMENTAL)
added SphinxSE support for DELETE and REPLACE on SphinxQL tables
added new, more SQL compliant SphinxQL syntax; and a compat_sphinxql_magics directive
added CRC32(), DAY(), MONTH(), YEAR(), YEARMONTH(), YEARMONTHDAY() functions
added reverse_scan=(0|1) option to SELECT
added support for MySQL packets over 16M
added dummy SHOW VARIABLES, SHOW COLLATION, and SET character_set_results support (to support handshake with certain client libraries and frameworks)
added mysql_version_string directive (to workaround picky MySQL client libraries)
added support for global filter variables, SET GLOBAL @uservar=(int_list)
added DELETE ... IN (id_list) syntax support
added C-style comments syntax (for example, SELECT /*!40000 some comment*/ id FROM test)
added UPDATE ... WHERE id=X syntax support
added DESCRIBE, SHOW TABLES statements
added --print-queries switch to indexer that dumps the SQL queries it runs
added --sighup-each switch to indexer that rotates indexes one by one
added --strip-path switch to searchd that skips file paths embedded in the index(-es)
added --dumpconfig switch to indextool that dumps an index header in sphinx.conf format
changed default preopen_indexes value to 1
optimized English stemmer (results in 1.3x faster snippets and indexing with morphology=stem_en)
optimized snippets, 1.6x general speedup
optimized const-list parsing in SphinxQL
optimized full-document highlighting CPU/RAM use
optimized binlog replay (improved performance on K-list update)
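The new SphinxQL statements listed above (string sorting/grouping, global user variables, the new DELETE and UPDATE forms) can be sketched as follows (myindex, myrt, and the attribute names are placeholders):

```sql
-- sorting and grouping on string attributes
SELECT id, title FROM myindex GROUP BY title ORDER BY title ASC;

-- global user variable holding a value list, usable in filters
SET GLOBAL @banned = (101, 102, 103);
SELECT id FROM myindex WHERE gid IN @banned;

-- new DELETE ... IN (id_list) and UPDATE ... WHERE id=X forms
DELETE FROM myrt WHERE id IN (1, 2, 3);
UPDATE myindex SET price = 25 WHERE id = 7;
```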
fixed #767, joined fields vs ODBC sources
fixed #757, wordforms shared by indexes with different settings
fixed #733, loading of indexes in formats prior to v.14
fixed #763, occasional snippets failures
fixed #648, occasionally missed rotations on multiple SIGHUPs
fixed #750, an RT segment merge leading to false positives and/or crashes in some cases
fixed #755, zones in snippets output
fixed #754, stopwords counting at snippet passage generation
fixed #723, fork/prefork index rotation in children processes
fixed #696, freeze on zero threshold in quorum operator
fixed #732, query escaping in SphinxSE
fixed #739, occasional crashes in MT mode on result set send
fixed #746, crash with a named list in SphinxQL option
fixed #674, AVG vs group order
fixed #734, occasional crashes attempting to report NULL errors
fixed #829, tail hits within field position modifier
fixed #712, missing query_mode, force_all_words snippet option defaults in Java API
fixed #721, added dupe removal on RT batch INSERT/REPLACE
fixed #720, potential extraneous highlighting after a blended keyword
fixed #702, exceptions vs star search
fixed #666, ext2 query grouping vs exceptions
fixed #688, WITHIN GROUP ORDER BY related crash
fixed #660, multi-queue batches vs dist_threads
fixed #678, crash on dict=keywords vs xmlpipe vs min_prefix_len
fixed #596, ECHILD vs scripted configs
fixed #653, dependency in expression, sorting, grouping
fixed #661, concurrent distributed searches vs workers=threads
fixed #646, crash on status query via UNIX socket
fixed #589, libexpat.dll missing from some Win32 build types
fixed #574, quorum match order
fixed multiple documentation issues (#372, #483, #495, #601, #623, #632, #654)
fixed that ondisk_dict did not affect RT indexes
fixed that string attributes check in indextool --check was erroneously sensitive to string data order
fixed a rare crash when using BEFORE operator
fixed an issue with multiforms vs BuildKeywords()
fixed an edge case in OR operator (emitted wrong hits order sometimes)
fixed aliasing in docinfo accessors that lead to very rare crashes and/or missing results
fixed a syntax error on a short token at the end of a query
fixed id64 filtering and performance degradation with range filters
fixed missing rankers in libsphinxclient
fixed missing SPH04 ranker in SphinxSE
fixed column names in sql_attr_multi sample (works with example.sql now)
fixed an issue with distributed local+remote setup vs aggregate functions
fixed case sensitive columns names in RT indexes
fixed a crash vs strings from multiple indexes in result set
fixed blended keywords vs snippets
fixed secure_connection vs MySQL protocol vs MySQL.NET connector
fixed that Python API did not work with Python 2.3
fixed overshort_step vs snippets
fixed keyword statistics vs dist_threads searching
fixed multiforms vs query parsing (vs quorum)
fixed missed quorum words vs RT segments
fixed blended keywords occasionally skipping extra character when querying (eg "abc[]")
fixed Python API to handle int32 values
fixed prefix and infix indexing of joined fields
fixed MVA ranged query
fixed missing blended state reset on document boundary
fixed a crash on missing index while replaying binlog
fixed an error message on filter values overrun
fixed passage duplication in snippets in weight_order mode
fixed select clauses over 1K vs remote agents
fixed overshort accounting vs soft-whitespace tokens
fixed rotation vs workers=threads
fixed schema issues vs distributed indexes
fixed blended-escaped sequence parsing issue
fixed MySQL IN clause (values order etc)
fixed that post_index did not execute when 0 documents were successfully indexed
fixed field position limit vs many hits
fixed that joined fields missed an end marker at field end
fixed that xxx_step settings were missing from .sph index header
fixed libsphinxclient missing request cleanup in sphinx_query() (eg after network errors)
fixed that index_weights were ignored when grouping
fixed multi wordforms vs blend_chars
fixed broken MVA output in SphinxQL
fixed a few RT leaks
fixed an issue with RT string storage going missing
fixed an issue with repeated queries vs dist_threads
fixed an issue with string attributes vs buffer overrun in SphinxQL
fixed unexpected character data warnings within ignored xmlpipe tags
fixed a crash in snippets with NEAR syntax query
fixed passage duplication in snippets
fixed libsphinxclient SIGPIPE handling
fixed libsphinxclient vs VS2003 compiler bug
added RT indexes support (Chapter 4, Real-time indexes)
added prefork and threads support (workers directives)
added multi-threaded local searches in distributed indexes (dist_threads directive)
added common subquery cache (subtree_docs_cache, subtree_hits_cache directives)
added string attributes support (sql_attr_string, sql_field_string, xml_attr_string, xml_field_string directives)
added indexing-time word counter (sql_attr_str2wordcount, sql_field_str2wordcount directives)
added CALL SNIPPETS(), CALL KEYWORDS() SphinxQL statements
added field_weights, index_weights options to SphinxQL SELECT statement
added insert-only SphinxQL-talking tables to SphinxSE (connection='sphinxql://host[:port]/index')
added select option to SphinxSE queries
added backtrace on crash to searchd
added SQL+FS indexing, aka loading files by names fetched from SQL (sql_file_field directive)
added a watchdog in threads mode to searchd
added automatic row phantoms elimination to index merge
added hitless indexing support (hitless_words directive)
added --check, --strip-path, --htmlstrip, --dumphitlist ... --wordid switches to indextool
added --stopwait, --logdebug switches to searchd
added --dump-rows, --verbose switches to indexer
added "blended" characters indexing support (blend_chars directive)
added joined/payload field indexing (sql_joined_field directive)
added query_mode, force_all_words, limit_passages, limit_words, start_passage_id, load_files, html_strip_mode, allow_empty options, and %PASSAGE_ID% macro in before_match, after_match options to BuildExcerpts() API call
added @groupby/@count/@distinct columns support to SELECT (but not to expressions)
added query-time keyword expansion support (expand_keywords directive, SPH_RANK_SPH04 ranker)
added query batch size limit option (max_batch_queries directive; was hardcoded)
added SINT() function to expressions
improved SphinxQL syntax error reporting
improved expression optimizer (better constant handling)
improved dash handling within keywords (no longer treated as an operator)
improved snippets (better passage selection/trimming, around option now a hard limit)
optimized index format that yields ~20-30% smaller indexes
optimized sorting code (indexing time 1-5% faster on average; 100x faster in worst case)
optimized searchd startup time (moved .spa preindexing to indexer), added a progress bar
optimized queries against indexes with many attributes (eliminated redundant copying)
optimized 1-keyword queries (performance regression introduced in 0.9.9)
optimized SphinxQL protocol overheads, and performance on bigger result sets
optimized unbuffered attributes writes on index merge
changed attribute handling, duplicate names are strictly forbidden now
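The CALL statements introduced in this release take a data/index/query triple plus optional "value AS option" pairs; a minimal sketch (the document text and index name myindex are placeholders):

```sql
-- build excerpts on the daemon side
CALL SNIPPETS('this is my document text', 'myindex', 'my document',
    5 AS around, 200 AS limit);

-- tokenize a query using the index's tokenization settings
CALL KEYWORDS('hello world', 'myindex');
```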
fixed that SphinxQL sessions could stall shutdown
fixed consts with leading minus in SphinxQL
fixed AND/OR precedence in expressions
fixed #334, AVG() on integers was not computed in floats
fixed #371, attribute flush vs 2+ GB files
fixed #373, segfault on distributed queries vs certain libc versions
fixed #398, stopwords not stopped in prefix/infix indexes
fixed #404, erroneous MVA failures in indextool --check
fixed #408, segfault on certain query batches (regular scan, plus a scan with MVA groupby)
fixed #431, occasional shutdown hangs in preforked workers
fixed #436, trunk checkout builds vs Solaris sh
fixed #440, escaping vs parentheses declared as valid in charset_table
fixed #442, occasional non-aligned free in MVA indexing
fixed #447, occasional crashes in MVA indexing
fixed #449, pconn busyloop on aborted clients on certain arches
fixed #465, build issue on Alpha
fixed #468, build issue in libsphinxclient
fixed #472, multiple stopword files failing to load
fixed #489, buffer overflow in query logging
fixed #493, Python API assertion after error returned from Query()
fixed #500, malformed MySQL packet when sending MVAs
fixed #504, SIGPIPE in libsphinxclient
fixed #506, better MySQL protocol commands support in SphinxQL (PING etc)
fixed #509, indexing ranged results from stored procedures
added Open, Close, Status calls to libsphinxclient (C API)
added automatic persistent connection reopening to PHP, Python APIs
added 64-bit value/range filters, fullscan mode support to SphinxSE
MAJOR CHANGE, our IANA assigned ports are 9312 and 9306 respectively (goodbye, trusty 3312)
MAJOR CHANGE, erroneous filters now fail with an error (were silently ignored before)
optimized unbuffered .spa writes on merge
optimized 1-keyword queries ranking in extended2 mode
fixed #441 (IO race in case of highly concurrent load on a preopened index)
fixed #434 (distributed indexes were not searchable via MySQL protocol)
fixed #317 (indexer MVA progress counter)
fixed #398 (stopwords not removed from search query)
fixed #328 (broken cutoff)
fixed #250 (now quoting paths w/spaces when installing Windows service)
fixed #348 (K-list was not updated on merge)
fixed #357 (destination index was not K-list-filtered on merge)
fixed #369 (precaching .spi files over 2 GBs)
fixed #438 (missing boundary proximity matches)
fixed #371 (.spa flush in case of files over 2 GBs)
fixed #373 (crashes on distributed queries via mysql proto)
fixed critical bugs in hit merging code
fixed #424 (ordinals could be misplaced during indexing in case of bitfields etc)
fixed #426 (failing SE build on Solaris; thanks to Ben Beecher)
fixed #423 (typo in SE caused crash on SHOW STATUS)
fixed #363 (handling of read_timeout over 2147 seconds)
fixed #376 (minor error message mismatch)
fixed #413 (minus in SphinxQL)
fixed #417 (floats w/o leading digit in SphinxQL)
fixed #403 (typo in SetFieldWeights name in Java API)
fixed index rotation vs persistent connections
fixed backslash handling in SphinxQL parser
fixed uint unpacking vs. PHP 5.2.9 (possibly other versions)
fixed #325 (filter settings send from SphinxSE)
fixed #352 (removed mysql wrapper around close() in SphinxSE)
fixed #389 (display error messages through SphinxSE status variable)
fixed linking with port-installed iconv on OS X
fixed negative 64-bit unpacking in PHP API
fixed #349 (escaping backslash in query emulation mode)
fixed #320 (disabled multi-query route when select items differ)
fixed #353 (better quorum counts check)
fixed #341 (merging of trailing hits; maybe other ranking issues too)
fixed #368 (partially; @field "" caused crashes; now resets field limit)
fixed #365 (field mask was leaking on field-limited terms)
fixed #339 (updated debug query dumper)
fixed #361 (added SetConnectTimeout() to Java API)
fixed #338 (added missing fullscan to mode check in Java API)
fixed #323 (added floats support to SphinxQL)
fixed #340 (support listen=port:proto syntax too)
fixed #332 (\r is legal SphinxQL space now)
fixed xmlpipe2 K-lists
fixed #322 (safety gaps in mysql protocol row buffer)
fixed #313 (return keyword stats for empty indexes too)
fixed #344 (invalid checkpoints after merge)
fixed #326 (missing CLOCK_xxx on FreeBSD)
added IsConnectError(), Open(), Close() calls to Java API (bug #240)
added read_buffer, read_unhinted directives
added checks for build options returned by mysql_config (builds on Solaris now)
added fixed-RAM index merge (bug #169)
added logging chained queries count in case of (optimized) multi-queries
added GEODIST() function
added MySpell (OpenOffice) affix file support (bug #281)
added ODBC support (both Windows and UnixODBC)
added support for @id in IN() (bug #292)
added support for aggregate functions in GROUP BY (namely AVG, MAX, MIN, SUM)
added MySQL UDF that builds snippets using searchd
added write_buffer directive (defaults to 1M)
added xmlpipe_fixup_utf8 directive
added suggestions sample
added microsecond precision int64 timer (bug #282)
added listen_backlog directive
added max_xmlpipe2_field directive
added initial SphinxQL support to mysql41 handler, SELECT .../SHOW WARNINGS/STATUS/META are handled
added support for different network protocols, and mysql41 protocol
added fieldmask ranker, updated SphinxSE list of rankers
added mysql_ssl_xxx directives
added --cpustats (requires clock_gettime()) and --status switches to searchd
added performance counters, Status() API call
added overshort_step and stopword_step directives
added strict order operator (aka operator before, eg. "one << two << three")
added indextool utility, moved --dumpheader there, added --debugdocids, --dumphitlist options
added own RNG, reseeded on @random sort query (bug #183)
added field-start and field-end modifiers support (syntax is "^hello world$"; field-end requires reindex)
added MVA attribute support to IN() function
added AND, OR, and NOT support to expressions
improved logging of (optimized) multi-queries (now logging chained query count)
improved handshake error handling, fixed protocol version byte order (omg)
updated SphinxSE to protocol 1.22
allowed phrase_boundary_step=-1 (trick to emulate keyword expansion)
removed SPH_MAX_QUERY_WORDS limit
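Two of the additions above, GEODIST() and aggregate functions in GROUP BY, look like this in SphinxQL terms (a sketch; myindex, the attribute names, and the coordinates are placeholders, and GEODIST() expects radians in this version):

```sql
-- distance between a fixed point and per-document lat/lon attributes
SELECT id, GEODIST(0.93, -2.08, lat_rad, lon_rad) AS dist
FROM myindex ORDER BY dist ASC LIMIT 10;

-- aggregate functions (AVG, MIN, MAX, SUM) over groups
SELECT gid, AVG(price) AS avgprice FROM myindex GROUP BY gid;
```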
fixed CLI search vs documents missing from DB (bug #257)
fixed libsphinxclient results leak on subsequent sphinx_run_queries call (bug #256)
fixed libsphinxclient handling of zero max_matches and cutoff (bug #208)
fixed over-64K string reads (eg. big snippets) in Java API (bug #181)
fixed Java API 2nd Query() after network error in 1st Query() call (bug #308)
fixed typo-class bugs in SetFilterFloatRange (bug #259), SetSortMode (bug #248)
fixed missing @@relaxed support (bug #276), fixed missing error on @nosuchfield queries, documented @@relaxed
fixed UNIX socket permissions to 0777 (bug #288)
fixed xmlpipe2 crash on schemas with no fields, added better document structure checks
fixed (and optimized) expr parser vs IN() with huge (10K+) args count
fixed double EarlyCalc() in fullscan mode (minor performance impact)
fixed phrase boundary handling in some cases (on buffer end, on trailing whitespace)
fixes in snippets (aka excerpts) generation
fixed inline attrs vs id64 index corruption
fixed head searchd crash on config re-parse failure
fixed handling of numeric keywords with leading zeroes such as "007" (bug #251)
fixed junk in SphinxSE status variables (bug #304)
fixed wordlist checkpoints serialization (bug #236)
fixed unaligned docinfo id access (bug #230)
fixed GetRawBytes() vs oversized blocks (headers with over 32K charset_table should now work, bug #300)
fixed buffer overflow caused by too long dest wordform, updated tests
fixed IF() return type (was always int, is deduced now)
fixed legacy queries vs. special chars vs. multiple indexes
fixed write-write-read socket access pattern vs Nagle vs delays vs FreeBSD (oh wow)
fixed exceptions vs query-parser issue
fixed late calc vs @weight in expressions (bug #285)
fixed early lookup/calc vs filters (bug #284)
fixed emulated MATCH_ANY queries (empty proximity and phrase queries are allowed now)
fixed MATCH_ANY ranker vs fields with no matches
fixed index file size vs inplace_enable (bug #245)
fixed that old logs were not closed on USR1 (bug #221)
fixed handling of '!' alias to NOT operator (bug #237)
fixed error handling vs query steps (step failure was not reported)
fixed querying vs inline attributes
fixed stupid bug in escaping code, fixed EscapeString() and made it static
fixed parser vs @field -keyword, foo|@field bar, "" queries (bug #310)
added min_stemming_len directive
added IsConnectError() API call (helps distinguish API vs remote errors)
added duplicate log messages filter to searchd
added --nodetach debugging switch to searchd
added blackhole agents support for debugging/testing (agent_blackhole directive)
added max_filters, max_filter_values directives (were hardcoded before)
added int64 expression evaluation path, automatic inference, and BIGINT() enforcer function
added crash handler for debugging (crash_log_path directive)
added MS SQL (aka SQL Server) source support (Windows only, mssql_winauth and mssql_unicode directives)
added indexer-side column unpacking feature (unpack_zlib, unpack_mysqlcompress directives)
added nested brackets and NOTs support to query language, rewritten query parser
added persistent connections support (Open() and Close() API calls)
added index_exact_words feature, and exact form operator to query language ("hello =world")
added status variables support to SphinxSE (SHOW STATUS LIKE 'sphinx_%')
added max_packet_size directive (was hardcoded at 8M before)
added UNIX socket support, and multi-interface support (listen directive)
added star-syntax support to BuildExcerpts() API call
added inplace inversion of .spa and .spp (inplace_enable directive, 1.5-2x less disk space for indexing)
added builtin Czech stemmer (morphology=stem_cz)
added IDIV(), NOW(), INTERVAL(), IN() functions to expressions
added index-level early-reject based on filters
added MVA updates feature (mva_updates_pool directive)
added select-list feature with computed expressions support (see SetSelect() API call, test.php --select switch), protocol 1.22
added integer expressions support (2x faster than float)
added multiforms support (multiple source words in wordforms file)
added legacy rankers (MATCH_ALL/MATCH_ANY/etc), removed legacy matching code (everything runs on V2 engine now)
added field position limit modifier to field operator (syntax: @title[50] hello world)
added killlist support (sql_query_killlist directive, --merge-killlists switch)
added on-disk SPI support (ondisk_dict directive)
added indexer IO stats
added periodic .spa flush (attr_flush_period directive)
added config reload on SIGHUP
added per-query attribute overrides feature (see SetOverride() API call); protocol 1.21
added signed 64bit attrs support (sql_attr_bigint directive)
improved HTML stripper to also skip PIs (<? ... ?>, such as <?php ... ?>)
improved excerpts speed (up to 50x faster on big documents)
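The query-language additions in this release (exact form operator, field position limit modifier) are written as follows; shown here inside SphinxQL MATCH() for brevity, though in this version they would typically be sent through the native API (myindex is a placeholder):

```sql
-- exact form operator (requires index_exact_words=1)
SELECT id FROM myindex WHERE MATCH('raining =cats and =dogs');

-- field position limit: match only within the first 50 positions of title
SELECT id FROM myindex WHERE MATCH('@title[50] hello world');
```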
fixed a short window of searchd inaccessibility on startup (started listen()ing too early before)
fixed .spa loading on systems where read() is 2GB capped
fixed infixes vs morphology issues
fixed backslash escaping, added backslash to EscapeString()
fixed handling of over-2GB dictionary files (.spi)
added configure script to libsphinxclient
changed proximity/quorum operator syntax to require whitespace after length
fixed potential head process crash on SIGPIPE during "maxed out" message
fixed handling of incomplete remote replies (caused over-degraded distributed results, in rare cases)
fixed sending of big remote requests (caused distributed requests to fail, in rare cases)
fixed FD_SET() overflow (caused searchd to crash on startup, in rare cases)
fixed MVA vs distributed indexes (caused loss of 1st MVA value in result set)
fixed tokenizing of exceptions terminated by specials (eg. "GPS AT&T" in extended mode)
fixed buffer overrun in stemmer on overlong tokens occasionally emitted by proximity/quorum operator parser (caused crashes on certain proximity/quorum queries)
fixed wordcount ranker (could be dropping hits)
fixed --merge feature (numerous different fixes, caused broken indexes)
fixed --merge-dst-range performance
fixed prefix/infix generation for stopwords
fixed ignore_chars vs specials
fixed misplaced F_SETLKW check (caused certain build types, eg. RPM build on FC8, to fail)
fixed dictionary-defined charsets support in spelldump, added \x-style wordchars support
fixed Java API to properly send long strings (over 64K; eg. long document bodies for excerpts)
fixed Python API to accept offset/limit of 'long' type
fixed default ID range (that filtered out all 64-bit values) in Java and Python APIs
added support for 64-bit document and keyword IDs, --enable-id64 switch to configure
added support for floating point attributes
added support for bitfields in attributes, sql_attr_bool directive and bit-widths part in sql_attr_uint directive
added support for multi-valued attributes (MVA)
added metaphone preprocessor
added libstemmer library support, provides stemmers for a number of additional languages
added xmlpipe2 source type, that supports arbitrary fields and attributes
added word form dictionaries, wordforms directive (and spelldump utility)
added tokenizing exceptions, exceptions directive
added an option to fully remove element contents to HTML stripper, html_remove_elements directive
added HTML entities decoder (with full XHTML1 set support) to HTML stripper
added per-index HTML stripping settings, html_strip, html_index_attrs, and html_remove_elements directives
added IO load throttling, max_iops and max_iosize directives
added SQL load throttling, sql_ranged_throttle directive
added an option to index prefixes/infixes for given fields only, prefix_fields and infix_fields directives
added an option to ignore certain characters (instead of just treating them as whitespace), ignore_chars directive
added an option to increment word position on phrase boundary characters, phrase_boundary and phrase_boundary_step directives
added --merge-dst-range switch (and filters) to index merging feature (--merge switch)
added mysql_connect_flags directive (eg. to reduce MySQL network traffic and/or indexing time)
improved ordinals sorting; now runs in fixed RAM
improved handling of documents with zero/NULL ids, now skipping them instead of aborting
added an option to unlink old index on successful rotation, unlink_old directive
added an option to keep index files open at all times (fixes subtle races on rotation), preopen and preopen_indexes directives
added an option to profile searchd disk I/O, --iostats command-line option
added an option to rotate index seamlessly (fully avoids query stalls), seamless_rotate directive
added HTML stripping support to excerpts (uses per-index settings)
added 'exact_phrase', 'single_passage', 'use_boundaries', and 'weight_order' options to BuildExcerpts() API call
added distributed attribute updates propagation
added distributed retries on master node side
added log reopen on SIGUSR1
added --stop switch (sends SIGTERM to running instance)
added Windows service mode, and --servicename switch
added Windows --rotate support
improved log timestamping, now with millisecond precision
added extended engine V2 (faster, cleaner, better; SPH_MATCH_EXTENDED2 mode)
added ranking modes support (V2 engine only; SetRankingMode() API call)
added quorum searching support to query language (V2 engine only; example: "any three of all these words"/3)
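The quorum operator above matches a document when at least N of the quoted keywords occur in it. A minimal toy matcher in Python (illustration only, not Sphinx code; `quorum_match` is a hypothetical helper) makes the condition concrete:

```python
# Illustration of the quorum condition behind '"any three of all these words"/3':
# a document matches when at least `threshold` of the keywords are present.
def quorum_match(doc_words, keywords, threshold):
    hits = sum(1 for w in keywords if w in doc_words)
    return hits >= threshold

doc = set("any three of all these words".split())
assert quorum_match(doc, ["any", "three", "of", "missing"], 3)      # 3 hits
assert not quorum_match(doc, ["any", "nope", "nah", "missing"], 3)  # 1 hit
```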
added query escaping support to query language, and EscapeString() API call
added multi-field syntax support to query language (example: "@(field1,field2) something"), and @@relaxed field checks option
added optional star-syntax ('word*') support in keywords, enable_star directive (for prefix/infix indexes only)
added full-scan support (query must be fully empty; can perform block-reject optimization)
added COUNT(DISTINCT(attr)) calculation support, SetGroupDistinct() API call
added group-by on MVA support, SetArrayResult() PHP API call
added per-index weights feature, SetIndexWeights() API call
added geodistance support, SetGeoAnchor() API call
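SetGeoAnchor() takes the anchor latitude/longitude in radians, and the computed distance is exposed to sorting and filtering in meters. A rough Python sketch of a great-circle distance of this kind, using the standard haversine formula (an illustration under the assumption of a spherical Earth, not Sphinx's exact code or constants):

```python
import math

def geodist(lat1, lon1, lat2, lon2, radius_m=6371000.0):
    """Haversine great-circle distance in meters; all angles in radians."""
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))

# Moscow to St. Petersburg, roughly 630-640 km
moscow = (math.radians(55.75), math.radians(37.62))
spb = (math.radians(59.94), math.radians(30.31))
print(round(geodist(*moscow, *spb)))
```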
added result set sorting by arbitrary expressions in run time (eg. "@weight+log(price)*2.5"), SPH_SORT_EXPR mode
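The effect of SPH_SORT_EXPR with an expression like "@weight+log(price)*2.5" can be mimicked on a small in-memory result set; the sketch below is illustration only (the "weight" and "price" values are made-up sample data, not any Sphinx API):

```python
import math

# Toy result set; Sphinx would supply @weight and attribute values itself.
matches = [
    {"id": 1, "weight": 10, "price": 100.0},
    {"id": 2, "weight": 30, "price": 2.0},
    {"id": 3, "weight": 25, "price": 50.0},
]

def sort_key(m):
    # Equivalent of the expression "@weight+log(price)*2.5"
    return m["weight"] + math.log(m["price"]) * 2.5

# Expression-sorted results come back in descending expression order.
ranked = sorted(matches, key=sort_key, reverse=True)
print([m["id"] for m in ranked])  # → [3, 2, 1]
```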
added result set sorting by @custom compile-time sorting function (see src/sphinxcustomsort.inl)
added result set sorting by @random value
added result set merging for indexes with different schemas
added query comments support (3rd arg to Query()/AddQuery() API calls, copied verbatim to query log)
added keyword extraction support, BuildKeywords() API call
added binding field weights by name, SetFieldWeights() API call
added optional limit on query time, SetMaxQueryTime() API call
added optional limit on found matches count (4th arg to SetLimits() API call, so-called 'cutoff')
added pure C API (libsphinxclient)
added Ruby API (thanks to Dmytro Shteflyuk)
added Java API
added SphinxSE support for MVAs (use varchar), floats (use float), 64bit docids (use bigint)
added SphinxSE options "floatrange", "geoanchor", "fieldweights", "indexweights", "maxquerytime", "comment", "host" and "port"; and support for "expr:CLAUSE"
improved SphinxSE max query size (using MySQL condition pushdown), up to 256K now
added scripting (shebang syntax) support to config files (example: #!/usr/bin/php in the first line)
added unified config handling and validation to all programs
added unified documentation
added .spec file for RPM builds
added automated testing suite
improved index locking, now fcntl()-based instead of buggy file-existence-based
fixed unaligned RAM accesses, now works on SPARC and ARM
added pure C API (libsphinxclient)
added Ruby API
added SetConnectTimeout() PHP API call
added allowed type check to UpdateAttributes() handler (bug #174)
added defensive MVA checks on index preload (protection against broken indexes, bug #168)
added sphinx-min.conf sample file
added --without-iconv switch to configure
removed redundant -lz dependency in searchd
removed erroneous "xmlpipe2 deprecated" warning
fixed EINTR handling in piped read (bug #166)
fixed query time to be adjusted before logging and sending to client (bug #153)
fixed attribute updates vs full-scan early-reject index (bug #149)
fixed gcc warnings (bug #160)
fixed mysql connection attempt vs pgsql source type (bug #165)
fixed 32-bit wraparound when preloading over 2 GB files
fixed "out of memory" message vs over 2 GB allocs (bug #116)
fixed unaligned RAM access detection on ARM (where unaligned reads do not crash but produce wrong results)
fixed missing full scan results in some cases
fixed several bugs in --merge, --merge-dst-range
fixed @geodist vs MultiQuery and filters, @expr vs MultiQuery
fixed GetTokenEnd() vs 1-grams (was causing crash in excerpts)
fixed sql_query_range to handle empty strings in addition to NULL strings (Postgres specific)
fixed morphology=none vs infixes
fixed case sensitive attributes names in UpdateAttributes()
fixed ext2 ranking vs. stopwords (now using atompos from query parser)
fixed EscapeString() call
fixed escaped specials (now handled as whitespace if not in charset)
fixed schema minimizer (now handles type/size mismatches)
fixed word stats in extended2; stemmed form is now returned
fixed spelldump case folding vs dictionary-defined character sets
fixed Postgres BOOLEAN handling
fixed enforced "inline" docinfo on empty indexes (normally ok, but index merge was really confused)
fixed rare count(distinct) out-of-bounds issue (it occasionally caused too high @distinct values)
fixed hangups on documents with id=DOCID_MAX in some cases
fixed rare crash in tokenizer (prefixed synonym vs. input stream eof)
fixed query parser vs "aaa (bbb ccc)|ddd" queries
fixed BuildExcerpts() request in Java API
fixed Postgres specific memory leak
fixed handling of overshort keywords (less than min_word_len)
fixed HTML stripper (now emits space after indexed attributes)
fixed 32-field case in query parser
fixed rare count(distinct) vs. querying multiple local indexes vs. reusable sorter issue
fixed sorting of negative floats in SPH_SORT_EXTENDED mode
added support for sql_str2ordinal_column
added support for up to 5 sort-by attrs (in extended sorting mode)
added support for separate groups sorting clause (in group-by mode)
added support for on-the-fly attribute updates (PRE-ALPHA; will change heavily; use for preliminary testing ONLY)
added support for zero/NULL attributes
added support for 0.9.7 features to SphinxSE
added support for n-grams (alpha, 1-grams only for now)
added support for warnings reported to client
added support for exclude-filters
added support for prefix and infix indexing (see max_prefix_len, max_infix_len)
added @* syntax (resets current field) to query language
added removal of duplicate entries in query index order
added PHP API workarounds for PHP signed/unsigned braindamage
added locks to avoid two concurrent indexers working on same index
added check for existing attributes vs. docinfo=none case
improved groupby code a lot (better precision, and up to 25x faster in extreme cases)
improved error handling and reporting
improved handling of broken indexes (reports error instead of hanging/crashing)
improved mmap() limits for attributes and wordlists (now able to map over 4 GB on x64 and over 2 GB on x32 where possible)
improved malloc() pressure in head daemon (search time should not degrade with time any more)
improved test.php command line options
improved error reporting (distributed query, broken index etc issues now reported to client)
changed default network packet size to be 8M, added extra checks
fixed division by zero in BM25 on 1-document collections (in extended matching mode)
fixed .spl files getting unlinked
fixed crash in schema compatibility test
fixed UTF-8 Russian stemmer
fixed requested matches count when querying distributed agents
fixed signed vs. unsigned issues everywhere (ranged queries, CLI search output, and obtaining docid)
fixed potential crashes vs. negative query offsets
fixed 0-match docs vs. extended mode vs. stats
fixed group/timestamp filters being ignored if querying from older clients
fixed docs to mention pgsql source type
fixed issues with explicit '&' in extended matching mode
fixed wrong assertion in SBCS encoder
fixed crashes with no-attribute indexes after rotate
added support for extended matching mode (query language)
added support for extended sorting mode (sorting clauses)
added support for SBCS excerpts
added mmap()ing for attributes and wordlist (improves search time, speeds up fork() greatly)
fixed attribute name handling to be case insensitive
fixed default compiler options to simplify post-mortem debugging (added -g, removed -fomit-frame-pointer)
fixed rare memory leak
fixed "hello hello" queries in "match phrase" mode
fixed issue with excerpts, texts and overlong queries
fixed logging multiple index name (no longer tokenized)
fixed trailing stopword not flushed from tokenizer
fixed boolean evaluation
fixed pidfile being wrongly unlink()ed on bind() failure
fixed --with-mysql-includes/libs (they conflicted with well-known paths)
fixes for 64-bit platforms
added alpha index merging code
added an option to decrease max_matches per-query
added an option to specify IP address for searchd to listen on
added support for unlimited amount of configured sources and indexes
added support for group-by queries
added support for /2 range modifier in charset_table
added support for arbitrary amount of document attributes
added logging filter count and index name
added --with-debug option to configure to compile in debug mode
added -DNDEBUG when compiling in default mode
improved search time (added doclist size hints, in-memory wordlist cache, and used VLB coding everywhere)
improved (refactored) SQL driver code (adding new drivers should be very easy now)
improved excerpts generation
fixed issue with empty sources and ranged queries
fixed querying purely remote distributed indexes
fixed suffix length check in English stemmer in some cases
fixed UTF-8 decoder for codes over U+20000 (for CJK)
fixed UTF-8 encoder for 3-byte sequences (for CJK)
fixed overshort (less than min_word_len) words prepended to next field
fixed source connection order (indexer does not connect to all sources at once now)
fixed line numbering in config parser
fixed some issues with index rotation
added support for empty indexes
added support for multiple sql_query_pre/post/post_index
fixed timestamp ranges filter in "match any" mode
fixed configure issues with --without-mysql and --with-pgsql options
fixed building on Solaris 9
added boolean queries support (experimental, beta version)
added simple file-based query cache (experimental, beta version)
added storage engine for MySQL 5.0 and 5.1 (experimental, beta version)
added GNU style configure script
added new searchd protocol (all binary, and should be backwards compatible)
added distributed searching support to searchd
added PostgreSQL driver
added excerpts generation
added min_word_len option to index
added max_matches option to searchd, removed hardcoded MAX_MATCHES limit
added initial documentation, and a working example.sql
added support for multiple sources per index
added soundex support
added group ID ranges support
added --stdin command-line option to search utility
added --noprogress option to indexer
added --index option to search
fixed UTF-8 decoder (3-byte codepoints did not work)
fixed PHP API to handle big result sets faster
fixed config parser to handle empty values properly
fixed redundant time(NULL) calls in time-segments mode