Sphinx 2.1.9-release reference manual ===================================== Free open-source SQL full-text search engine ============================================ Copyright (c) 2001-2014 Andrew Aksyonoff Copyright (c) 2008-2014 Sphinx Technologies Inc, http://sphinxsearch.com ---------------------------------------------------------------------------- Table of Contents 1. Introduction 1.1. About 1.2. Sphinx features 1.3. Where to get Sphinx 1.4. License 1.5. Credits 1.6. History 2. Installation 2.1. Supported systems 2.2. Compiling Sphinx from source 2.2.1. Required tools 2.2.2. Compiling on Linux 2.2.3. Known installation issues 2.3. Installing Sphinx packages on Debian and Ubuntu 2.4. Installing Sphinx packages on RedHat and CentOS 2.5. Installing Sphinx on Windows 2.6. Quick Sphinx usage tour 3. Indexing 3.1. Data sources 3.2. Full-text fields 3.3. Attributes 3.4. MVA (multi-valued attributes) 3.5. Indexes 3.6. Restrictions on the source data 3.7. Charsets, case folding, translation tables, and replacement rules 3.8. SQL data sources (MySQL, PostgreSQL) 3.9. xmlpipe data source 3.10. xmlpipe2 data source 3.11. Live index updates 3.12. Delta index updates 3.13. Index merging 4. Real-time indexes 4.1. RT indexes overview 4.2. Known caveats with RT indexes 4.3. RT index internals 4.4. Binary logging 5. Searching 5.1. Matching modes 5.2. Boolean query syntax 5.3. Extended query syntax 5.4. Search results ranking 5.5. Expressions, functions, and operators 5.5.1. Operators 5.5.2. Numeric functions 5.5.3. Date and time functions 5.5.4. Type conversion functions 5.5.5. Comparison functions 5.5.6. Miscellaneous functions 5.6. Sorting modes 5.7. Grouping (clustering) search results 5.8. Distributed searching 5.9. searchd query log formats 5.9.1. Plain log format 5.9.2. SphinxQL log format 5.10. MySQL protocol support and SphinxQL 5.11. Multi-queries 5.12. Collations 5.13. User-defined functions (UDF) 6. Command line tools reference 6.1. indexer command reference 6.2. searchd command reference 6.3. search command reference 6.4. spelldump command reference 6.5. indextool command reference 6.6. wordbreaker command reference 7. SphinxQL reference 7.1. SELECT syntax 7.2. SELECT @@system_variable syntax 7.3. SHOW META syntax 7.4. SHOW WARNINGS syntax 7.5. SHOW STATUS syntax 7.6. INSERT and REPLACE syntax 7.7. REPLACE syntax 7.8. DELETE syntax 7.9. SET syntax 7.10. SET TRANSACTION syntax 7.11. BEGIN, COMMIT, and ROLLBACK syntax 7.12. BEGIN syntax 7.13. ROLLBACK syntax 7.14. CALL SNIPPETS syntax 7.15. CALL KEYWORDS syntax 7.16. SHOW TABLES syntax 7.17. DESCRIBE syntax 7.18. CREATE FUNCTION syntax 7.19. DROP FUNCTION syntax 7.20. SHOW VARIABLES syntax 7.21. SHOW COLLATION syntax 7.22. SHOW CHARACTER SET syntax 7.23. UPDATE syntax 7.24. ATTACH INDEX syntax 7.25. FLUSH RTINDEX syntax 7.26. FLUSH RAMCHUNK syntax 7.27. TRUNCATE RTINDEX syntax 7.28. SHOW AGENT STATUS 7.29. SHOW PROFILE syntax 7.30. SHOW INDEX STATUS syntax 7.31. OPTIMIZE INDEX syntax 7.32. SHOW PLAN syntax 7.33. Multi-statement queries 7.34. Comment syntax 7.35. List of SphinxQL reserved keywords 7.36. SphinxQL upgrade notes, version 2.0.1-beta 8. API reference 8.1. General API functions 8.1.1. GetLastError 8.1.2. GetLastWarning 8.1.3. SetServer 8.1.4. SetRetries 8.1.5. SetConnectTimeout 8.1.6. SetArrayResult 8.1.7. IsConnectError 8.2. General query settings 8.2.1. SetLimits 8.2.2. SetMaxQueryTime 8.2.3. SetOverride 8.2.4. SetSelect 8.3. Full-text search query settings 8.3.1. SetMatchMode 8.3.2. SetRankingMode 8.3.3. SetSortMode 8.3.4. 
SetWeights 8.3.5. SetFieldWeights 8.3.6. SetIndexWeights 8.4. Result set filtering settings 8.4.1. SetIDRange 8.4.2. SetFilter 8.4.3. SetFilterRange 8.4.4. SetFilterFloatRange 8.4.5. SetGeoAnchor 8.5. GROUP BY settings 8.5.1. SetGroupBy 8.5.2. SetGroupDistinct 8.6. Querying 8.6.1. Query 8.6.2. AddQuery 8.6.3. RunQueries 8.6.4. ResetFilters 8.6.5. ResetGroupBy 8.7. Additional functionality 8.7.1. BuildExcerpts 8.7.2. UpdateAttributes 8.7.3. BuildKeywords 8.7.4. EscapeString 8.7.5. Status 8.7.6. FlushAttributes 8.8. Persistent connections 8.8.1. Open 8.8.2. Close 9. MySQL storage engine (SphinxSE) 9.1. SphinxSE overview 9.2. Installing SphinxSE 9.2.1. Compiling MySQL 5.0.x with SphinxSE 9.2.2. Compiling MySQL 5.1.x with SphinxSE 9.2.3. Checking SphinxSE installation 9.3. Using SphinxSE 9.4. Building snippets (excerpts) via MySQL 10. Reporting bugs 11. sphinx.conf options reference 11.1. Data source configuration options 11.1.1. type 11.1.2. sql_host 11.1.3. sql_port 11.1.4. sql_user 11.1.5. sql_pass 11.1.6. sql_db 11.1.7. sql_sock 11.1.8. json_autoconv_numbers 11.1.9. json_autoconv_keynames 11.1.10. on_json_attr_error 11.1.11. mysql_connect_flags 11.1.12. mysql_ssl_cert, mysql_ssl_key, mysql_ssl_ca 11.1.13. odbc_dsn 11.1.14. sql_query_pre 11.1.15. sql_query 11.1.16. sql_joined_field 11.1.17. sql_query_range 11.1.18. sql_range_step 11.1.19. sql_query_killlist 11.1.20. sql_attr_uint 11.1.21. sql_attr_bool 11.1.22. sql_attr_bigint 11.1.23. sql_attr_timestamp 11.1.24. sql_attr_str2ordinal 11.1.25. sql_attr_float 11.1.26. sql_attr_multi 11.1.27. sql_attr_string 11.1.28. sql_attr_json 11.1.29. sql_attr_str2wordcount 11.1.30. sql_column_buffers 11.1.31. sql_field_string 11.1.32. sql_field_str2wordcount 11.1.33. sql_file_field 11.1.34. sql_query_post 11.1.35. sql_query_post_index 11.1.36. sql_ranged_throttle 11.1.37. sql_query_info 11.1.38. xmlpipe_command 11.1.39. xmlpipe_field 11.1.40. xmlpipe_field_string 11.1.41. xmlpipe_field_wordcount 11.1.42. xmlpipe_attr_uint 11.1.43. xmlpipe_attr_bigint 11.1.44. xmlpipe_attr_bool 11.1.45. xmlpipe_attr_timestamp 11.1.46. xmlpipe_attr_str2ordinal 11.1.47. xmlpipe_attr_float 11.1.48. xmlpipe_attr_multi 11.1.49. xmlpipe_attr_multi_64 11.1.50. xmlpipe_attr_string 11.1.51. xmlpipe_attr_wordcount 11.1.52. xmlpipe_attr_json 11.1.53. xmlpipe_fixup_utf8 11.1.54. mssql_winauth 11.1.55. mssql_unicode 11.1.56. unpack_zlib 11.1.57. unpack_mysqlcompress 11.1.58. unpack_mysqlcompress_maxsize 11.2. Index configuration options 11.2.1. type 11.2.2. source 11.2.3. path 11.2.4. docinfo 11.2.5. mlock 11.2.6. morphology 11.2.7. dict 11.2.8. index_sp 11.2.9. index_zones 11.2.10. min_stemming_len 11.2.11. stopwords 11.2.12. wordforms 11.2.13. embedded_limit 11.2.14. exceptions 11.2.15. min_word_len 11.2.16. charset_type 11.2.17. charset_table 11.2.18. ignore_chars 11.2.19. min_prefix_len 11.2.20. min_infix_len 11.2.21. max_substring_len 11.2.22. prefix_fields 11.2.23. infix_fields 11.2.24. enable_star 11.2.25. ngram_len 11.2.26. ngram_chars 11.2.27. phrase_boundary 11.2.28. phrase_boundary_step 11.2.29. html_strip 11.2.30. html_index_attrs 11.2.31. html_remove_elements 11.2.32. local 11.2.33. agent 11.2.34. agent_persistent 11.2.35. agent_blackhole 11.2.36. agent_connect_timeout 11.2.37. agent_query_timeout 11.2.38. preopen 11.2.39. ondisk_dict 11.2.40. inplace_enable 11.2.41. inplace_hit_gap 11.2.42. inplace_docinfo_gap 11.2.43. inplace_reloc_factor 11.2.44. inplace_write_factor 11.2.45. index_exact_words 11.2.46. overshort_step 11.2.47. stopword_step 11.2.48. 
hitless_words 11.2.49. expand_keywords 11.2.50. blend_chars 11.2.51. blend_mode 11.2.52. rt_mem_limit 11.2.53. rt_field 11.2.54. rt_attr_uint 11.2.55. rt_attr_bool 11.2.56. rt_attr_bigint 11.2.57. rt_attr_float 11.2.58. rt_attr_multi 11.2.59. rt_attr_multi_64 11.2.60. rt_attr_timestamp 11.2.61. rt_attr_string 11.2.62. rt_attr_json 11.2.63. ha_strategy 11.2.64. bigram_freq_words 11.2.65. bigram_index 11.2.66. index_field_lengths 11.2.67. regexp_filter 11.2.68. stopwords_unstemmed 11.2.69. global_idf 11.3. indexer program configuration options 11.3.1. mem_limit 11.3.2. max_iops 11.3.3. max_iosize 11.3.4. max_xmlpipe2_field 11.3.5. write_buffer 11.3.6. max_file_field_buffer 11.3.7. on_file_field_error 11.3.8. lemmatizer_base 11.3.9. lemmatizer_cache 11.4. searchd program configuration options 11.4.1. listen 11.4.2. address 11.4.3. port 11.4.4. log 11.4.5. query_log 11.4.6. query_log_format 11.4.7. read_timeout 11.4.8. client_timeout 11.4.9. max_children 11.4.10. pid_file 11.4.11. max_matches 11.4.12. seamless_rotate 11.4.13. preopen_indexes 11.4.14. unlink_old 11.4.15. attr_flush_period 11.4.16. ondisk_dict_default 11.4.17. max_packet_size 11.4.18. mva_updates_pool 11.4.19. crash_log_path 11.4.20. max_filters 11.4.21. max_filter_values 11.4.22. listen_backlog 11.4.23. read_buffer 11.4.24. read_unhinted 11.4.25. max_batch_queries 11.4.26. subtree_docs_cache 11.4.27. subtree_hits_cache 11.4.28. workers 11.4.29. dist_threads 11.4.30. binlog_path 11.4.31. binlog_flush 11.4.32. binlog_max_log_size 11.4.33. snippets_file_prefix 11.4.34. collation_server 11.4.35. collation_libc_locale 11.4.36. plugin_dir 11.4.37. mysql_version_string 11.4.38. rt_flush_period 11.4.39. thread_stack 11.4.40. expansion_limit 11.4.41. compat_sphinxql_magics 11.4.42. watchdog 11.4.43. prefork_rotation_throttle 11.4.44. sphinxql_state 11.4.45. ha_ping_interval 11.4.46. ha_period_karma 11.4.47. persistent_connections_limit 11.4.48. rt_merge_iops 11.4.49. rt_merge_maxiosize 11.4.50. predicted_time_costs A. Sphinx revision history A.1. Version 2.1.9-release, 03 jul 2014 A.2. Version 2.1.8-release, 28 apr 2014 A.3. Version 2.1.7-release, 30 mar 2014 A.4. Version 2.1.6-release, 24 feb 2014 A.5. Version 2.1.5-release, 22 jan 2014 A.6. Version 2.1.4-release, 18 dec 2013 A.7. Version 2.1.3-release, 12 nov 2013 A.8. Version 2.1.2-release, 10 oct 2013 A.9. Version 2.1.1-beta, 20 feb 2013 A.10. Version 2.0.11-dev, xx xxx xxxx A.11. Version 2.0.10-release, 22 jan 2014 A.12. Version 2.0.9-release, 26 aug 2013 A.13. Version 2.0.8-release, 26 apr 2013 A.14. Version 2.0.7-release, 26 mar 2013 A.15. Version 2.0.6-release, 22 oct 2012 A.16. Version 2.0.5-release, 28 jul 2012 A.17. Version 2.0.4-release, 02 mar 2012 A.18. Version 2.0.3-release, 23 dec 2011 A.19. Version 2.0.2-beta, 15 nov 2011 A.20. Version 2.0.1-beta, 22 apr 2011 A.21. Version 1.10-beta, 19 jul 2010 A.22. Version 0.9.9-release, 02 dec 2009 A.23. Version 0.9.9-rc2, 08 apr 2009 A.24. Version 0.9.9-rc1, 17 nov 2008 A.25. Version 0.9.8.1, 30 oct 2008 A.26. Version 0.9.8, 14 jul 2008 A.27. Version 0.9.7, 02 apr 2007 A.28. Version 0.9.7-rc2, 15 dec 2006 A.29. Version 0.9.7-rc1, 26 oct 2006 A.30. Version 0.9.6, 24 jul 2006 A.31. Version 0.9.6-rc1, 26 jun 2006 List of Examples 3.1. Ranged query usage example 3.2. XMLpipe document stream 3.3. xmlpipe2 document stream 3.4. Fully automated live updates 4.1. RT index declaration 5.1. Boolean query example 5.2. Extended matching mode: query example Chapter 1. Introduction ======================= Table of Contents 1.1. About 1.2. 
Sphinx features 1.3. Where to get Sphinx 1.4. License 1.5. Credits 1.6. History

1.1. About
==========

Sphinx is a full-text search engine, publicly distributed under GPL version 2. Commercial licensing (eg. for embedded use) is available upon request.

Technically, Sphinx is a standalone software package that provides fast and relevant full-text search functionality to client applications. It was specially designed to integrate well with SQL databases storing the data, and to be easily accessed by scripting languages. However, Sphinx does not depend on nor require any specific database to function.

Applications can access the Sphinx search daemon (searchd) using any of three different access methods: a) via Sphinx's own implementation of the MySQL network protocol (using a small SQL subset called SphinxQL; this is the recommended way), b) via the native search API (SphinxAPI), or c) via MySQL server with a pluggable storage engine (SphinxSE).

Official native SphinxAPI implementations for PHP, Perl, Python, Ruby and Java are included within the distribution package. The API is very lightweight, so porting it to a new language is known to take a few hours or days. Third-party API ports and plugins exist for Perl, C#, Haskell, Ruby-on-Rails, and possibly other languages and frameworks.

Starting from version 1.10-beta, Sphinx supports two different indexing backends: the "disk" index backend, and the "realtime" (RT) index backend. Disk indexes support online full-text index rebuilds, but online updates can only be done on non-text (attribute) data. RT indexes additionally allow for online full-text index updates. Previous versions only supported disk indexes.

Data can be loaded into disk indexes using a so-called data source. Built-in sources can fetch data directly from MySQL, PostgreSQL, MSSQL, ODBC compliant databases (Oracle, etc) or a custom XML format. Adding new data source drivers (eg. to natively support other DBMSes) is designed to be as easy as possible. RT indexes, as of 1.10-beta, can only be populated using SphinxQL.

As for the name, Sphinx is an acronym which is officially decoded as SQL Phrase Index. Yes, I know about CMU's Sphinx project.

1.2. Sphinx features
====================

Key Sphinx features are:

* high indexing and searching performance;
* advanced indexing and querying tools (flexible and feature-rich text tokenizer, querying language, several different ranking modes, etc);
* advanced result set post-processing (SELECT with expressions, WHERE, ORDER BY, GROUP BY etc over text search results);
* proven scalability up to billions of documents, terabytes of data, and thousands of queries per second;
* easy integration with SQL and XML data sources, and SphinxAPI, SphinxQL, or SphinxSE search interfaces;
* easy scaling with distributed searches.
To expand a bit, Sphinx:

* has high indexing speed (up to 10-15 MB/sec per core on an internal benchmark);
* has high search speed (up to 150-250 queries/sec per core against 1,000,000 documents, 1.2 GB of data on an internal benchmark);
* has high scalability (the biggest known cluster indexes over 3,000,000,000 documents, and the busiest one peaks over 50,000,000 queries/day);
* provides good relevance ranking through a combination of phrase proximity ranking and statistical (BM25) ranking;
* provides distributed searching capabilities;
* provides document excerpts (snippets) generation;
* provides searching from within applications with SphinxAPI or SphinxQL interfaces, and from within MySQL with the pluggable SphinxSE storage engine;
* supports boolean, phrase, word proximity and other types of queries;
* supports multiple full-text fields per document (up to 32 by default);
* supports multiple additional attributes per document (ie. groups, timestamps, etc);
* supports stopwords;
* supports morphological word forms dictionaries;
* supports tokenizing exceptions;
* supports both single-byte encodings and UTF-8;
* supports stemming (stemmers for English, Russian, Czech and Arabic are built-in; stemmers for French, Spanish, Portuguese, Italian, Romanian, German, Dutch, Swedish, Norwegian, Danish, Finnish, and Hungarian are available by building the third party libstemmer library);
* supports MySQL natively (all types of tables, including MyISAM, InnoDB, NDB, Archive, etc are supported);
* supports PostgreSQL natively;
* supports ODBC compliant databases (MS SQL, Oracle, etc) natively;
* ...has 50+ other features not listed here, refer to the API and configuration manual!

1.3. Where to get Sphinx
========================

Sphinx is available through its official Web site at http://sphinxsearch.com/.

Currently, the Sphinx distribution tarball includes the following software:

* indexer: a utility which creates fulltext indexes;
* search: a simple command-line (CLI) test utility which searches through fulltext indexes;
* searchd: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;
* sphinxapi: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby);
* spelldump: a simple command-line tool to extract the items from an ispell or MySpell (as bundled with OpenOffice) format dictionary to help customize your index, for use with wordforms;
* indextool: a utility to dump miscellaneous debug information about the index, added in version 0.9.9-rc2;
* wordbreaker: a utility to break down compound words into separate words, added in version 2.1.1-beta.

1.4. License
============

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See COPYING file for details.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

Non-GPL licensing (for OEM/ISV embedded use) can also be arranged; please contact us to discuss commercial licensing possibilities.
1.5. Credits
============

Author
------

Sphinx initial author (and a benevolent dictator ever since):

* Andrew Aksyonoff, http://shodan.ru

Team
----

Past and present employees of Sphinx Technologies Inc who should be noted for their work on Sphinx (in alphabetical order):

* Adam Rice
* Adrian Nuta
* Alexander Klimenko
* Alexey Dvoichenkov
* Alexey Vinogradov
* Anton Tsitlionok
* Eugene Kosov
* Gloria Vinogradova
* Ilya Kuznetsov
* Kirill Shmatov
* Rich Kelm
* Stanislav Klinov
* Steven Barker
* Vladimir Fedorkov
* Yuri Schapov

Contributors
------------

People who contributed to Sphinx and their contributions (in no particular order):

* Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source
* Len Kranendonk, Perl API
* Dmytro Shteflyuk, Ruby API

Many other people have contributed ideas, bug reports, fixes, etc. Thank you!

1.6. History
============

Sphinx development was started back in 2001, because I didn't manage to find an acceptable search solution (for a database driven Web site) which would meet my requirements. Actually, each and every important aspect was a problem:

* search quality (ie. good relevance)
    * statistical ranking methods performed rather badly, especially on large collections of small documents (forums, blogs, etc)
* search speed
    * especially if searching for phrases which contain stopwords, as in "to be or not to be"
* moderate disk and CPU requirements when indexing
    * important in a shared hosting environment, not to mention the indexing speed.

Despite the amount of time passed and numerous improvements made in the other solutions, there's still no solution which I personally would be eager to migrate to.

Considering that and a lot of positive feedback received from Sphinx users during the last years, the obvious decision is to continue developing Sphinx (and, eventually, to take over the world).

Chapter 2. Installation
=======================

Table of Contents
2.1. Supported systems
2.2. Compiling Sphinx from source
2.2.1. Required tools
2.2.2. Compiling on Linux
2.2.3. Known installation issues
2.3. Installing Sphinx packages on Debian and Ubuntu
2.4. Installing Sphinx packages on RedHat and CentOS
2.5. Installing Sphinx on Windows
2.6. Quick Sphinx usage tour

2.1. Supported systems
======================

Sphinx can be compiled either from source or installed using prebuilt packages. Most modern UNIX systems with a C++ compiler should be able to compile and run Sphinx without any modifications.

Currently known systems Sphinx has been successfully running on are:

* Linux 2.4.x, 2.6.x, 3.x (many various distributions)
* Windows 2000, XP, 7, 8
* FreeBSD 4.x, 5.x, 6.x, 7.x, 8.x
* NetBSD 1.6, 3.0
* Solaris 9, 11
* Mac OS X

CPU architectures known to work include i386 (aka x86), amd64 (aka x86_64), SPARC64, and ARM.

Chances are good that Sphinx should work on other Unix platforms and/or CPU architectures just as well. Please report any other platforms that worked for you!

All platforms are production quality. There are no principal functional limitations on any platform.

2.2. Compiling Sphinx from source
=================================

2.2.1. Required tools
---------------------

On UNIX, you will need the following tools to build and install Sphinx:

* a working C++ compiler. GNU gcc and clang are known to work.
* a good make program. GNU make is known to work.

On Windows, you will need Microsoft Visual C/C++ Studio .NET 2005 or above.
Other compilers/environments will probably work as well, but for the time being, you will have to build the makefile (or other environment specific project files) manually.

2.2.2. Compiling on Linux
-------------------------

1. Extract everything from the distribution tarball (haven't you already?) and go to the sphinx subdirectory. (We are using version 2.0.1-beta here for the sake of example only; be sure to change this to the specific version you're using.)

| $ tar xzvf sphinx-2.0.1-beta.tar.gz
| $ cd sphinx

2. Run the configuration program:

| $ ./configure

There are a number of options to configure. The complete listing may be obtained by using the --help switch. The most important ones are:

* --prefix, which specifies where to install Sphinx; such as --prefix=/usr/local/sphinx (all of the examples use this prefix);
* --with-mysql, which specifies where to look for MySQL include and library files, if auto-detection fails;
* --with-static-mysql, which builds Sphinx with statically linked MySQL support;
* --with-pgsql, which specifies where to look for PostgreSQL include and library files;
* --with-static-pgsql, which builds Sphinx with statically linked PostgreSQL support.

3. Build the binaries:

| $ make

4. Install the binaries in the directory of your choice (defaults to /usr/local/bin/ on *nix systems, but is overridden with configure --prefix):

| $ make install

2.2.3. Known installation issues
--------------------------------

If configure fails to locate MySQL headers and/or libraries, try checking for and installing the mysql-devel package. On some systems, it is not installed by default.

If make fails with a message which looks like

| /bin/sh: g++: command not found
| make[1]: *** [libsphinx_a-sphinx.o] Error 127

try checking for and installing the gcc-c++ package.

If you are getting compile-time errors which look like

| sphinx.cpp:67: error: invalid application of `sizeof' to
| incomplete type `Private::SizeError'

this means that some compile-time type size check failed. The most probable reason is that the off_t type is less than 64-bit on your system. As a quick hack, you can edit sphinx.h and replace off_t with DWORD in the typedef for SphOffset_t, but note that this will prohibit you from using full-text indexes larger than 2 GB. Even if the hack helps, please report such issues, providing the exact error message and compiler/OS details, so I can properly fix them in the next releases.

If you keep getting any other error, or the suggestions above do not seem to help you, please don't hesitate to contact me.

2.3. Installing Sphinx packages on Debian and Ubuntu
====================================================

There are two ways of getting Sphinx for Ubuntu: regular deb packages and the Launchpad PPA repository.

Deb packages:

1. Sphinx requires a few libraries to be installed on Debian/Ubuntu. Use apt-get to download and install these dependencies:

$ sudo apt-get install mysql-client unixodbc libpq5

2. Now you can install Sphinx:

$ sudo dpkg -i sphinxsearch_2.0.10-release-0ubuntu11~precise_amd64.deb

PPA repository (Ubuntu only). Installing Sphinx is much easier from the Sphinxsearch PPA repository, because you will get all dependencies and can also update Sphinx to the latest version with the same command.

1. First, add the Sphinxsearch repository and update the list of packages:

$ sudo add-apt-repository ppa:builds/sphinxsearch-stable
$ sudo apt-get update
2. Install/update the sphinxsearch package:

$ sudo apt-get install sphinxsearch

The Sphinx searchd daemon can be started/stopped using the service command:

$ sudo service sphinxsearch start

2.4. Installing Sphinx packages on RedHat and CentOS
====================================================

Currently we distribute Sphinx RPMS and SRPMS on our website for both 5.x and 6.x versions of Red Hat Enterprise Linux, but they can be installed on CentOS as well.

1. Before installation make sure you have these packages installed:

$ yum install postgresql-libs unixODBC

2. Download the RedHat RPM from the Sphinx website and install it:

$ rpm -Uhv sphinx-2.0.10-1.rhel6.x86_64.rpm

3. After preparing the configuration file (see Quick tour), you can start the searchd daemon:

$ service searchd start

2.5. Installing Sphinx on Windows
=================================

Installing Sphinx on a Windows server is often easier than installing on a Linux environment; unless you are preparing code patches, you can use the pre-compiled binary files from the Downloads area on the website.

1. Extract everything from the .zip file you have downloaded - sphinx-2.0.10-release-win32.zip, or sphinx-2.0.10-release-win32-pgsql.zip if you need PostgreSQL support as well. (We are using version 2.0.10-release here for the sake of example only; be sure to change this to the specific version you're using.) You can use Windows Explorer in Windows XP and up to extract the files, or a freeware package like 7Zip to open the archive.

For the remainder of this guide, we will assume that the folders are unzipped into C:\Sphinx, such that searchd.exe can be found in C:\Sphinx\bin\searchd.exe. If you decide to use any different location for the folders or configuration file, please change it accordingly.

2. Edit the contents of sphinx.conf.in - specifically entries relating to @CONFDIR@ - to paths suitable for your system.

3. Install the searchd system as a Windows service:

C:\Sphinx\bin> C:\Sphinx\bin\searchd --install --config C:\Sphinx\sphinx.conf.in --servicename SphinxSearch

4. The searchd service will now be listed in the Services panel within the Management Console, available from Administrative Tools. It will not have been started, as you will need to configure it and build your indexes with indexer before starting the service. A guide to doing this can be found under Quick tour.

During the next steps of the install (which involve running indexer pretty much as you would on Linux) you may find that you get an error relating to libmysql.dll not being found. If you have MySQL installed, you should find a copy of this library in your Windows directory, or sometimes in Windows\System32, or failing that in the MySQL core directories. If you do receive an error please copy libmysql.dll into the bin directory.

2.6. Quick Sphinx usage tour
============================

All the example commands below assume that you installed Sphinx in /usr/local/sphinx, so searchd can be found in /usr/local/sphinx/bin/searchd.

To use Sphinx, you will need to:

1. Create a configuration file.

The default configuration file name is sphinx.conf. All Sphinx programs look for this file in the current working directory by default.

A sample configuration file, sphinx.conf.dist, which has all the options documented, is created by configure.
Copy and edit that sample file to make your own configuration (assuming Sphinx is installed into /usr/local/sphinx/):

| $ cd /usr/local/sphinx/etc
| $ cp sphinx.conf.dist sphinx.conf
| $ vi sphinx.conf

The sample configuration file is set up to index the documents table from the MySQL database test; there's an example.sql sample data file to populate that table with a few documents for testing purposes:

| $ mysql -u test < /usr/local/sphinx/etc/example.sql

2. Run the indexer to create a full-text index from your data:

| $ cd /usr/local/sphinx/etc
| $ /usr/local/sphinx/bin/indexer --all

3. Query your newly created index! Connect to the server:

| $ mysql -h0 -P9306
| SELECT * FROM test1 WHERE MATCH('my document');
| INSERT INTO rt VALUES (1, 'this is', 'a sample text', 11);
| INSERT INTO rt VALUES (2, 'some more', 'text here', 22);
| SELECT gid/11 FROM rt WHERE MATCH('text') GROUP BY gid;
| SELECT * FROM rt ORDER BY gid DESC;
| SHOW TABLES;
| SELECT *, WEIGHT() FROM test1 WHERE MATCH('"document one"/1');SHOW META;
| SET profiling=1;SELECT * FROM test1 WHERE id IN (1,2,4);SHOW PROFILE;
| SELECT id, id%3 idd FROM test1 WHERE MATCH('this is | nothing') GROUP BY idd;SHOW PROFILE;
| SELECT id FROM test1 WHERE MATCH('is this a good plan?');SHOW PLAN;
| SELECT COUNT(*) FROM test1;
| CALL KEYWORDS ('one two three', 'test1');
| CALL KEYWORDS ('one two three', 'test1', 1);

To query the index from the command line, use the search utility:

| $ cd /usr/local/sphinx/etc
| $ /usr/local/sphinx/bin/search test

To query the index from your PHP scripts, you need to:

1. Run the search daemon which your script will talk to:

| $ cd /usr/local/sphinx/etc
| $ /usr/local/sphinx/bin/searchd

2. Run the attached PHP API test script (to ensure that the daemon was successfully started and is ready to serve the queries):

| $ cd sphinx/api
| $ php test.php test

3. Include the API (it's located in api/sphinxapi.php) into your own scripts and use it.

Happy searching!

Chapter 3. Indexing
===================

Table of Contents
3.1. Data sources
3.2. Full-text fields
3.3. Attributes
3.4. MVA (multi-valued attributes)
3.5. Indexes
3.6. Restrictions on the source data
3.7. Charsets, case folding, translation tables, and replacement rules
3.8. SQL data sources (MySQL, PostgreSQL)
3.9. xmlpipe data source
3.10. xmlpipe2 data source
3.11. Live index updates
3.12. Delta index updates
3.13. Index merging

3.1. Data sources
=================

The data to be indexed can generally come from very different sources: SQL databases, plain text files, HTML files, mailboxes, and so on. From Sphinx's point of view, the data it indexes is a set of structured documents, each of which has the same set of fields and attributes. This is similar to SQL, where each row would correspond to a document, and each column to either a field or an attribute.

Depending on what source Sphinx should get the data from, different code is required to fetch the data and prepare it for indexing. This code is called a data source driver (or simply driver or data source for brevity).

At the time of this writing, there are built-in drivers for MySQL, PostgreSQL, MS SQL (on Windows), and ODBC. There is also a generic driver called xmlpipe, which runs a specified command and reads the data from its stdout. See Section 3.9, "xmlpipe data source", for the format description.

There can be as many sources per index as necessary. They will be sequentially processed in the very same order which was specified in the index definition. All the documents coming from those sources will be merged as if they were coming from a single source.
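To illustrate the multiple-sources-per-index rule just described, here is a minimal hedged sphinx.conf sketch; the source and index names are hypothetical, and the connection settings and queries are elided:

| # two sources feeding one index; they are processed in this order,
| # and the resulting documents are merged together
| source posts_en
| {
|     type = mysql
|     # ... connection settings and sql_query here ...
| }
|
| source posts_archive : posts_en
| {
|     # inherits connection settings, overrides only the query
|     # sql_query = ...
| }
|
| index posts
| {
|     source = posts_en
|     source = posts_archive
|     path   = /usr/local/sphinx/data/posts
| }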
3.2. Full-text fields
=====================

Full-text fields (or just fields for brevity) are the textual document contents that get indexed by Sphinx, and can be (quickly) searched for keywords. Fields are named, and you can limit your searches to a single field (eg. search through "title" only) or a subset of fields (eg. "title" and "abstract" only). The Sphinx index format generally supports up to 256 fields. However, up to version 2.0.1-beta indexes were forcibly limited to 32 fields, because of certain complications in the matching engine. Full support for up to 256 fields was added in version 2.0.2-beta.

Note that the original contents of the fields are not stored in the Sphinx index. The text that you send to Sphinx gets processed, and a full-text index (a special data structure that enables quick searches for a keyword) gets built from that text. But the original text contents are then simply discarded. Sphinx assumes that you store those contents elsewhere anyway.

Moreover, it is impossible to fully reconstruct the original text, because the specific whitespace, capitalization, punctuation, etc will all be lost during indexing. It is theoretically possible to partially reconstruct a given document from the Sphinx full-text index, but that would be a slow process (especially if the CRC dictionary is used, which does not even store the original keywords and works with their hashes instead).

3.3. Attributes
===============

Attributes are additional values associated with each document that can be used to perform additional filtering and sorting during search.

It is often desired to additionally process full-text search results based not only on matching document ID and its rank, but on a number of other per-document values as well. For instance, one might need to sort news search results by date and then relevance, or search through products within a specified price range, or limit blog search to posts made by selected users, or group results by month. To do that efficiently, Sphinx allows you to attach a number of additional attributes to each document, and store their values in the full-text index. It's then possible to use stored values to filter, sort, or group full-text matches.

Attributes, unlike the fields, are not full-text indexed. They are stored in the index, but it is not possible to search them as full-text, and attempting to do so results in an error. For example, it is impossible to use the extended matching mode expression @column 1 to match documents where column is 1, if column is an attribute, and this is still true even if the numeric digits are normally indexed.

Attributes can be used for filtering, though, to restrict returned rows, as well as sorting or result grouping; it is entirely possible to sort results purely based on attributes, and ignore the search relevance tools. Additionally, attributes are returned from the search daemon, while the indexed text is not.

A good example for attributes would be a forum posts table. Assume that only title and content fields need to be full-text searchable - but that sometimes it is also required to limit search to a certain author or a sub-forum (ie. search only those rows that have some specific values of author_id or forum_id columns in the SQL table); or to sort matches by the post_date column; or to group matching posts by month of the post_date and calculate per-group match counts.
This can be achieved by specifying all the mentioned columns (excluding title and content, which are full-text fields) as attributes, indexing them, and then using API calls to set up filtering, sorting, and grouping. Here is an example.

Example sphinx.conf part:
-------------------------

| ...
| sql_query = SELECT id, title, content, \
|     author_id, forum_id, post_date FROM my_forum_posts
| sql_attr_uint = author_id
| sql_attr_uint = forum_id
| sql_attr_timestamp = post_date
| ...

Example application code (in PHP):
----------------------------------

| // only search posts by author whose ID is 123
| $cl->SetFilter ( "author_id", array ( 123 ) );
|
| // only search posts in sub-forums 1, 3 and 7
| $cl->SetFilter ( "forum_id", array ( 1,3,7 ) );
|
| // sort found posts by posting date in descending order
| $cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );

Attributes are named. Attribute names are case insensitive. Attributes are not full-text indexed; they are stored in the index as is. Currently supported attribute types are:

* unsigned integers (1-bit to 32-bit wide);
* UNIX timestamps;
* floating point values (32-bit, IEEE 754 single precision);
* string ordinals (specially computed integers);
* strings (since 1.10-beta);
* JSON (since 2.1.1-beta);
* MVA, multi-value attributes (variable-length lists of 32-bit unsigned integers).

The complete set of per-document attribute values is sometimes referred to as docinfo. Docinfos can either be

* stored separately from the main full-text index data ("extern" storage, in the .spa file), or
* attached to each occurrence of the document ID in the full-text index data ("inline" storage, in the .spd file).

When using extern storage, a copy of the .spa file (with all the attribute values for all the documents) is kept in RAM by searchd at all times. This is for performance reasons; random disk I/O would be too slow. On the contrary, inline storage does not require any additional RAM at all, but that comes at the cost of greatly inflating the index size: remember that it copies all attribute values every time the document ID is mentioned, and that is exactly as many times as there are different keywords in the document. Inline may be the only viable option if you have only a few attributes and need to work with big datasets in limited RAM. However, in most cases extern storage makes both indexing and searching much more efficient.

Search-time memory requirements for extern storage are (1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with 2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM. This is PER DAEMON, not per query. searchd will allocate 160 MB on startup, read the data and keep it shared between queries. The children will NOT allocate any additional copies of this data.

3.4. MVA (multi-valued attributes)
==================================

MVAs, or multi-valued attributes, are an important special type of per-document attributes in Sphinx. MVAs let you attach sets of numeric values to every document. That is useful to implement article tags, product categories, etc. Filtering and group-by (but not sorting) on MVA attributes is supported. As of version 2.0.2-beta, MVA values can either be unsigned 32-bit integers (UNSIGNED INTEGER) or signed 64-bit integers (BIGINT). Up to version 2.0.1-beta, only the unsigned 32-bit values were supported.
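As a quick illustration of how an MVA can be declared (using the separate-query variant discussed just below), here is a hedged sphinx.conf sketch; the post_tags table and its columns are hypothetical:

| source posts
| {
|     # ... connection settings and the main sql_query here ...
|
|     # multi-valued "tag" attribute, one (document ID, value) pair per row
|     sql_attr_multi = uint tag from query; SELECT post_id, tag_id FROM post_tags
| }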
The set size is not limited; you can have an arbitrary number of values attached to each document as long as RAM permits (the .spm file that contains the MVA values will be precached in RAM by searchd). The source data can be taken either from a separate query, or from a document field; see the source type in sql_attr_multi. In the first case the query will have to return pairs of document ID and MVA values, in the second one the field will be parsed for integer values. There are absolutely no requirements as to incoming data order; the values will be automatically grouped by document ID (and internally sorted within the same ID) during indexing anyway.

When filtering, a document will match the filter on an MVA attribute if any of the values satisfy the filtering condition. (Therefore, documents that pass through exclude filters will not contain any of the forbidden values.) When grouping by an MVA attribute, a document will contribute to as many groups as there are different MVA values associated with that document. For instance, if the collection contains exactly 1 document having a 'tag' MVA with values 5, 7, and 11, grouping on 'tag' will produce 3 groups with '@count' equal to 1 and '@groupby' key values of 5, 7, and 11 respectively. Also note that grouping by MVA might lead to duplicate documents in the result set: because each document can participate in many groups, it can be chosen as the best one in more than one group, leading to duplicate IDs. The PHP API historically uses an ordered hash on the document ID for the resulting rows; so you'll also need to use SetArrayResult() in order to employ group-by on MVA with the PHP API.

3.5. Indexes
============

To be able to answer full-text search queries fast, Sphinx needs to build a special data structure optimized for such queries from your text data. This structure is called an index; and the process of building an index from text is called indexing.

Different index types are well suited for different tasks. For example, a disk-based tree-based index would be easy to update (ie. insert new documents to an existing index), but rather slow to search. Sphinx architecture allows internally for different index types, or backends, to be implemented comparatively easily.

Starting with 1.10-beta, Sphinx provides two different backends: a disk index backend, and a RT (realtime) index backend.

Disk indexes are designed to provide maximum indexing and searching speed, while keeping the RAM footprint as low as possible. That comes at a cost of text index updates. You cannot update an existing document or incrementally add a new document to a disk index. You can only batch rebuild the entire disk index from scratch. (Note that you still can update a document's attributes on the fly, even with the disk indexes.)

This "rebuild only" limitation might look like a big constraint at first glance. But in reality, it can very frequently be worked around rather easily by setting up multiple disk indexes, searching through them all (as the short SphinxQL sketch below illustrates), and only rebuilding the one with a fraction of the most recently changed data. See Section 3.11, "Live index updates", for details.

RT indexes enable you to implement dynamic updates and incremental additions to the full-text index. RT stands for Real Time and they are indeed "soft realtime" in terms of writes, meaning that most index changes become available for searching as quickly as 1 millisecond or less, but could occasionally stall for seconds. (Searches will still work even during that occasional writing stall.) Refer to Chapter 4, Real-time indexes for details.
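To make the multi-index workaround above concrete, here is a minimal hedged SphinxQL sketch; main and delta are the conventional index names used in Section 3.12, and listing both in a single query searches them together:

| mysql> SELECT * FROM main, delta WHERE MATCH('keyword');

The equivalent SphinxAPI call would simply pass an index list such as 'main delta' as the index argument to Query().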
Last but not least, Sphinx supports so-called distributed indexes. Compared to disk and RT indexes, those are not a real physical backend, but rather just lists of either local or remote indexes that can be searched transparently to the application, with Sphinx doing all the chores of sending search requests to remote machines in the cluster, aggregating the result sets, retrying the failed requests, and even doing some load balancing. See Section 5.8, "Distributed searching", for a discussion of distributed indexes.

There can be as many indexes per configuration file as necessary. The indexer utility can reindex either all of them (if the --all option is specified), or a certain explicitly specified subset. The searchd utility will serve all the specified indexes, and the clients can specify what indexes to search at run time.

3.6. Restrictions on the source data
====================================

There are a few different restrictions imposed on the source data which is going to be indexed by Sphinx, of which the single most important one is:

ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).

If this requirement is not met, different bad things can happen. For instance, Sphinx can crash with an internal assertion while indexing; or produce strange results when searching due to conflicting IDs. Also, a 1000-pound gorilla might eventually come out of your display and start throwing barrels at you. You've been warned.

3.7. Charsets, case folding, translation tables, and replacement rules
======================================================================

When indexing, Sphinx fetches documents from the specified sources, splits the text into words, and does case folding so that "Abc", "ABC" and "abc" would be treated as the same word (or, to be pedantic, term). To do that properly, Sphinx needs to know

* what encoding the source text is in;
* what characters are letters and what are not;
* what letters should be folded to what letters.

This should be configured on a per-index basis using the charset_type and charset_table options (a short configuration sketch is given at the end of this section). charset_type specifies whether the document encoding is single-byte (SBCS) or UTF-8. charset_table specifies the table that maps letter characters to their case folded versions. The characters that are not in the table are considered to be non-letters and will be treated as word separators when indexing or searching through this index.

Default tables currently include English and Russian characters. Please do submit your tables for other languages!

As of version 2.1.1-beta, you can also specify text pattern replacement rules. For example, given the rules

| regexp_filter = \b(\d+)\" => \1 inch
| regexp_filter = (BLUE|RED) => COLOR

the text 'RED TUBE 5" LONG' would be indexed as 'COLOR TUBE 5 INCH LONG', and 'PLANK 2" x 4"' as 'PLANK 2 INCH x 4 INCH'. Rules are applied in the given order. Text in queries is also replaced; a search for "BLUE TUBE" would actually become a search for "COLOR TUBE". Note that Sphinx must be built with the --with-re2 option to use this feature.
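Here is the configuration sketch promised above for charset_type and charset_table; the table shown approximates the documented UTF-8 default (digits, Latin letters folded to lowercase, underscore, and Russian letters), so treat it as a hedged starting point rather than an authoritative value:

| index example
| {
|     # ... source, path, and other settings ...
|     charset_type  = utf-8
|     charset_table = 0..9, A..Z->a..z, _, a..z, \
|         U+410..U+42F->U+430..U+44F, U+430..U+44F
| }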
3.8. SQL data sources (MySQL, PostgreSQL)
=========================================

With all the SQL drivers, indexing generally works as follows.

* connection to the database is established;
* the pre-query (see Section 11.1.14, sql_query_pre) is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
* the main query (see Section 11.1.15, sql_query) is executed and the rows it returns are indexed;
* the post-query (see Section 11.1.34, sql_query_post) is executed to perform any necessary cleanup;
* connection to the database is closed;
* indexer does the sorting phase (to be pedantic, index-type specific post-processing);
* connection to the database is established again;
* the post-index query (see Section 11.1.35, sql_query_post_index) is executed to perform any necessary final cleanup;
* connection to the database is closed again.

Most options, such as database user/host/password, are straightforward. However, there are a few subtle things, which are discussed in more detail here.

Ranged queries
--------------

The main query, which needs to fetch all the documents, can impose a read lock on the whole table and stall concurrent queries (eg. INSERTs to a MyISAM table), waste a lot of memory for the result set, etc. To avoid this, Sphinx supports so-called ranged queries. With ranged queries, Sphinx first fetches min and max document IDs from the table, and then substitutes different ID intervals into the main query text and runs the modified query to fetch another chunk of documents. Here's an example.

Example 3.1. Ranged query usage example

| # in sphinx.conf
|
| sql_query_range = SELECT MIN(id),MAX(id) FROM documents
| sql_range_step = 1000
| sql_query = SELECT * FROM documents WHERE id>=$start AND id<=$end

If the table contains document IDs from 1 to, say, 2345, then sql_query would be run three times:

1. with $start replaced with 1 and $end replaced with 1000;
2. with $start replaced with 1001 and $end replaced with 2000;
3. with $start replaced with 2001 and $end replaced with 2345.

Obviously, that's not much of a difference for a 2000-row table, but when it comes to indexing a 10-million-row MyISAM table, ranged queries might be of some help.

sql_query_post vs. sql_query_post_index
---------------------------------------

The difference between the post-query and the post-index query is that the post-query is run immediately after Sphinx has received all the documents, but further indexing may still fail for some other reason. On the contrary, by the time the post-index query gets executed, it is guaranteed that the indexing was successful. The database connection is dropped and re-established because the sorting phase can be very lengthy and would simply time out otherwise.

3.9. xmlpipe data source
========================

The xmlpipe data source was designed to enable users to plug data into Sphinx without having to implement new data source drivers themselves. It is limited to 2 fixed fields and 2 fixed attributes, and is deprecated in favor of the xmlpipe2 data source (Section 3.10). For new streams, use xmlpipe2.

To use xmlpipe, configure the data source in your configuration file as follows:

| source example_xmlpipe_source
| {
|     type            = xmlpipe
|     xmlpipe_command = perl /www/mysite.com/bin/sphinxpipe.pl
| }

The indexer will run the command specified in xmlpipe_command, and then read, parse and index the data it prints to stdout. More formally, it opens a pipe to the given command and then reads from that pipe. indexer will expect one or more documents in a custom XML format. Here's an example document stream, consisting of two documents:
Example 3.2. XMLpipe document stream

| <document>
| <id>123</id>
| <group>45</group>
| <timestamp>1132223498</timestamp>
| <title>test title</title>
| <body>
| this is my document body
| </body>
| </document>
|
| <document>
| <id>124</id>
| <group>46</group>
| <timestamp>1132223498</timestamp>
| <title>another test</title>
| <body>
| this is another document
| </body>
| </document>

The legacy xmlpipe driver uses a built-in parser which is pretty fast but really strict and does not actually fully support XML. It requires that all the fields be present, formatted exactly as in this example, and occur in exactly the same order. The only optional field is timestamp; it defaults to 1.

3.10. xmlpipe2 data source
==========================

xmlpipe2 lets you pass arbitrary full-text and attribute data to Sphinx in yet another custom XML format. It also allows specifying the schema (ie. the set of fields and attributes) either in the XML stream itself, or in the source settings.

When indexing an xmlpipe2 source, indexer runs the given command, opens a pipe to its stdout, and expects a well-formed XML stream. Here's sample stream data:

Example 3.3. xmlpipe2 document stream

| <?xml version="1.0" encoding="utf-8"?>
| <sphinx:docset>
|
| <sphinx:schema>
| <sphinx:field name="subject"/>
| <sphinx:field name="content"/>
| <sphinx:attr name="published" type="timestamp"/>
| <sphinx:attr name="author_id" type="int" bits="16" default="1"/>
| </sphinx:schema>
|
| <sphinx:document id="1234">
| <content>this is the main content <![CDATA[[and this <cdata> entry
| must be handled properly by xml parser lib]]></content>
| <published>1012325463</published>
| <subject>note how field/attr tags can be
| in <b class="red">randomized</b> order</subject>
| <misc>some undeclared element</misc>
| </sphinx:document>
|
| <sphinx:document id="1235">
| <subject>another subject</subject>
| <content>here comes another document, and i am given to understand,
| that in-document field order must not matter, sir</content>
| <published>1012325467</published>
| </sphinx:document>
|
| <!-- ... more documents here ... -->
|
| <sphinx:killlist>
| <id>1234</id>
| <id>4567</id>
| </sphinx:killlist>
|
| </sphinx:docset>

Arbitrary fields and attributes are allowed. They also can occur in the stream in arbitrary order within each document; the order is ignored. There is a restriction on maximum field length; fields longer than 2 MB will be truncated to 2 MB (this limit can be changed in the source).

The schema, ie. the complete fields and attributes list, must be declared before any document can be parsed. This can be done either in the configuration file using xmlpipe_field and xmlpipe_attr_XXX settings, or right in the stream using the <sphinx:schema> element. <sphinx:schema> is optional. It is only allowed to occur as the very first sub-element in <sphinx:docset>. If there is no in-stream schema definition, settings from the configuration file will be used. Otherwise, stream settings take precedence.

Unknown tags (which were declared neither as fields nor as attributes) will be ignored with a warning. In the example above, <misc> will be ignored. All embedded tags and their attributes (such as <b> in <subject> in the example above) will be silently ignored.

Support for incoming stream encodings depends on whether iconv is installed on the system. xmlpipe2 is parsed using the libexpat parser, which understands US-ASCII, ISO-8859-1, UTF-8 and a few UTF-16 variants natively. The Sphinx configure script will also check for libiconv presence, and utilize it to handle other encodings. libexpat also enforces the requirement to use the UTF-8 charset on the Sphinx side, because the parsed data it returns is always in UTF-8.

XML elements (tags) recognized by xmlpipe2 (and their attributes where applicable) are:

sphinx:docset
    Mandatory top-level element, denotes and contains the xmlpipe2 document set.

sphinx:schema
    Optional element, must either occur as the very first child of sphinx:docset, or never occur at all. Declares the document schema. Contains field and attribute declarations. If present, overrides per-source settings from the configuration file.

sphinx:field
    Optional element, child of sphinx:schema. Declares a full-text field. Known attributes are:
    * "name", specifies the XML element name that will be treated as a full-text field in the subsequent documents.
    * "attr", specifies whether to also index this field as a string or word count attribute.
      Possible values are "string" and "wordcount". Introduced in version 1.10-beta.

sphinx:attr
    Optional element, child of sphinx:schema. Declares an attribute. Known attributes are:
    * "name", specifies the element name that should be treated as an attribute in the subsequent documents.
    * "type", specifies the attribute type. Possible values are "int", "bigint", "timestamp", "str2ordinal", "bool", "float", "multi" and "json".
    * "bits", specifies the bit size for the "int" attribute type. Valid values are 1 to 32.
    * "default", specifies the default value for this attribute that should be used if the attribute's element is not present in the document.

sphinx:document
    Mandatory element, must be a child of sphinx:docset. Contains arbitrary other elements with field and attribute values to be indexed, as declared either using sphinx:field and sphinx:attr elements or in the configuration file. The only known attribute is "id", which must contain the unique integer document ID.

sphinx:killlist
    Optional element, child of sphinx:docset. Contains a number of "id" elements whose contents are document IDs to be put into a kill-list for this index.

3.11. Live index updates
========================

There are two major approaches to keeping the full-text index contents up to date. Note, however, that both these approaches deal with the task of full-text data updates, and not attribute updates. Instant attribute updates are supported since version 0.9.8. Refer to the UpdateAttributes() API call description for details.

First, you can use disk-based indexes, partition them manually, and only rebuild the smaller partitions (so-called "deltas") frequently. By minimizing the rebuild size, you can reduce the average indexing lag to something as low as 30-60 seconds. This approach was the only one available in versions 0.9.x. On huge collections it actually might be the most efficient one. Refer to Section 3.12, "Delta index updates", for details.

Second, versions 1.x (starting with 1.10-beta) add support for so-called real-time indexes (RT indexes for short) that support on-the-fly updates of the full-text data. Updates on an RT index can appear in the search results in 1-2 milliseconds, ie. 0.001-0.002 seconds. However, RT indexes are less efficient for bulk indexing huge amounts of data. Refer to Chapter 4, Real-time indexes for details.

3.12. Delta index updates
=========================

There's a frequent situation when the total dataset is too big to be reindexed from scratch often, but the amount of new records is rather small. Example: a forum with 1,000,000 archived posts, but only 1,000 new posts per day.

In this case, "live" (almost real time) index updates could be implemented using a so-called "main+delta" scheme.

The idea is to set up two sources and two indexes, with one "main" index for the data which only changes rarely (if ever), and one "delta" for the new documents. In the example above, the 1,000,000 archived posts would go to the main index, and the newly inserted 1,000 posts/day would go to the delta index. The delta index could then be reindexed very frequently, and the documents made available to search in a matter of minutes.

Specifying which documents should go to what index and reindexing the main index could also be made fully automatic. One option would be to make a counter table which tracks the ID that splits the documents, and update it whenever the main index is reindexed.
Example 3.4. Fully automated live updates

| # in MySQL
| CREATE TABLE sph_counter
| (
|     counter_id INTEGER PRIMARY KEY NOT NULL,
|     max_doc_id INTEGER NOT NULL
| );
|
| # in sphinx.conf
| source main
| {
|     # ...
|     sql_query_pre = SET NAMES utf8
|     sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
|     sql_query = SELECT id, title, body FROM documents \
|         WHERE id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
| }
|
| source delta : main
| {
|     sql_query_pre = SET NAMES utf8
|     sql_query = SELECT id, title, body FROM documents \
|         WHERE id>( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
| }
|
| index main
| {
|     source = main
|     path = /path/to/main
|     # ... all the other settings
| }
|
| # note how all other settings are copied from main,
| # but source and path are overridden (they MUST be)
| index delta : main
| {
|     source = delta
|     path = /path/to/delta
| }

Note how we're overriding sql_query_pre in the delta source. We need to explicitly have that override. Otherwise the REPLACE query would be run when indexing the delta source too, effectively nullifying it. However, when we issue the directive in the inherited source for the first time, it removes all inherited values, so the encoding setup is also lost. So sql_query_pre in the delta cannot just be empty; we need to issue the encoding setup query explicitly once again.

3.13. Index merging
===================

Merging two existing indexes can be more efficient than indexing the data from scratch, and is desired in some cases (such as merging 'main' and 'delta' indexes instead of simply reindexing 'main' in the 'main+delta' partitioning scheme). So indexer has an option to do that. Merging the indexes is normally faster than reindexing but still not instant on huge indexes. Basically, it will need to read the contents of both indexes once and write the result once. Merging a 100 GB and a 1 GB index, for example, will result in 202 GB of IO (but that's still likely less than indexing from scratch requires).

The basic command syntax is as follows:

| indexer --merge DSTINDEX SRCINDEX [--rotate]

Only the DSTINDEX index will be affected: the contents of SRCINDEX will be merged into it. The --rotate switch will be required if DSTINDEX is already being served by searchd.

The initially devised usage pattern is to merge a smaller update from SRCINDEX into DSTINDEX. Thus, when merging the attributes, values from SRCINDEX will win if duplicate document IDs are encountered. Note, however, that the "old" keywords will not be automatically removed in such cases. For example, if there's a keyword "old" associated with document 123 in DSTINDEX, and a keyword "new" associated with it in SRCINDEX, document 123 will be found by both keywords after the merge. You can supply an explicit condition to remove documents from DSTINDEX to mitigate that; the relevant switch is --merge-dst-range:

| indexer --merge main delta --merge-dst-range deleted 0 0

This switch lets you apply filters to the destination index along with merging. There can be several filters; all of their conditions must be met in order to include the document in the resulting merged index. In the example above, the filter passes only those records where 'deleted' is 0, eliminating all records that were flagged as deleted (for instance, using the UpdateAttributes() call).

Chapter 4. Real-time indexes
============================

Table of Contents
4.1. RT indexes overview
4.2. Known caveats with RT indexes
4.3. RT index internals
4.4. Binary logging
Real-time indexes (or RT indexes for brevity) are a new backend that lets you insert, update, or delete documents (rows) on the fly. RT indexes were added in version 1.10-beta. While querying of RT indexes is possible using any of the SphinxAPI, SphinxQL, or SphinxSE, updating them is only possible via SphinxQL at the moment. The full SphinxQL reference is available in Chapter 7, SphinxQL reference.

4.1. RT indexes overview
========================

RT indexes should be declared in sphinx.conf, just as every other index type. Notable differences from the regular, disk-based indexes are that a) data sources are not required and will be ignored, and b) you should explicitly enumerate all the text fields, not just attributes. Here's an example:

Example 4.1. RT index declaration

| index rt
| {
|     type = rt
|     path = /usr/local/sphinx/data/rt
|     rt_field = title
|     rt_field = content
|     rt_attr_uint = gid
| }

As of 2.0.1-beta and above, RT indexes are production quality, despite a few missing features.

RT indexes can be accessed using the MySQL protocol. INSERT, REPLACE, DELETE, and SELECT statements against an RT index are supported. For instance, this is an example session with the sample index above:

| $ mysql -h 127.0.0.1 -P 9306
| Welcome to the MySQL monitor. Commands end with ; or \g.
| Your MySQL connection id is 1
| Server version: 1.10-dev (r2153)
|
| Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
|
| mysql> INSERT INTO rt VALUES ( 1, 'first record', 'test one', 123 );
| Query OK, 1 row affected (0.05 sec)
|
| mysql> INSERT INTO rt VALUES ( 2, 'second record', 'test two', 234 );
| Query OK, 1 row affected (0.00 sec)
|
| mysql> SELECT * FROM rt;
| +------+--------+------+
| | id   | weight | gid  |
| +------+--------+------+
| |    1 |      1 |  123 |
| |    2 |      1 |  234 |
| +------+--------+------+
| 2 rows in set (0.02 sec)
|
| mysql> SELECT * FROM rt WHERE MATCH('test');
| +------+--------+------+
| | id   | weight | gid  |
| +------+--------+------+
| |    1 |   1643 |  123 |
| |    2 |   1643 |  234 |
| +------+--------+------+
| 2 rows in set (0.01 sec)
|
| mysql> SELECT * FROM rt WHERE MATCH('@title test');
| Empty set (0.00 sec)

Both partial and batch INSERT syntaxes are supported, ie. you can specify a subset of columns, and insert several rows at a time. Deletions are also possible using the DELETE statement; the only currently supported syntax is DELETE FROM <index> WHERE id=<id>. REPLACE is also supported, enabling you to implement updates.
| mysql> INSERT INTO rt ( id, title ) VALUES ( 3, 'third row' ), ( 4, 'fourth entry' );
| Query OK, 2 rows affected (0.01 sec)
|
| mysql> SELECT * FROM rt;
| +------+--------+------+
| | id   | weight | gid  |
| +------+--------+------+
| |    1 |      1 |  123 |
| |    2 |      1 |  234 |
| |    3 |      1 |    0 |
| |    4 |      1 |    0 |
| +------+--------+------+
| 4 rows in set (0.00 sec)
|
| mysql> DELETE FROM rt WHERE id=2;
| Query OK, 0 rows affected (0.00 sec)
|
| mysql> SELECT * FROM rt WHERE MATCH('test');
| +------+--------+------+
| | id   | weight | gid  |
| +------+--------+------+
| |    1 |   1500 |  123 |
| +------+--------+------+
| 1 row in set (0.00 sec)
|
| mysql> INSERT INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
| ERROR 1064 (42000): duplicate id '1'
|
| mysql> REPLACE INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
| Query OK, 1 row affected (0.01 sec)
|
| mysql> SELECT * FROM rt WHERE MATCH('steroids');
| +------+--------+------+
| | id   | weight | gid  |
| +------+--------+------+
| |    1 |   1500 |  123 |
| +------+--------+------+
| 1 row in set (0.01 sec)

Data stored in an RT index should survive a clean shutdown. When binary logging is enabled, it should also survive a crash and/or dirty shutdown, and recover on subsequent startup.

4.2. Known caveats with RT indexes
==================================

RT indexes are currently a production quality feature, but there are still a few known usage quirks. Those quirks are listed in this section.

* Prefix indexing is supported with dict = keywords starting with 2.0.2-beta. Infix indexing is experimental in trunk.
* A disk chunk optimization routine is not implemented yet.
* On initial index creation, attributes are reordered by type, in the following order: uint, bigint, float, timestamp, string. So when using INSERT without an explicit column names list, specify all uint column values first, then bigint, etc.
* The default conservative RAM chunk limit (rt_mem_limit) of 32M can lead to poor performance on bigger indexes; you should raise it to 256..1024M if you're planning to index gigabytes.
* A high DELETE/REPLACE rate can lead to kill-list fragmentation and impact searching performance.
* No transaction size limits are currently imposed; too many concurrent INSERT/REPLACE transactions might therefore consume a lot of RAM.
* In case of a damaged binlog, recovery will stop on the first damaged transaction, even though it's technically possible to keep looking further for subsequent undamaged transactions, and recover those. This mid-file damage case (due to flaky HDD/CDD/tape?) is supposed to be extremely rare, though.
* Multiple INSERTs grouped in a single transaction perform better than equivalent single-row transactions and are recommended for batch loading of data.

4.3. RT index internals
=======================

An RT index is internally chunked. It keeps a so-called RAM chunk that stores all the most recent changes. RAM chunk memory usage is rather strictly limited with the per-index rt_mem_limit directive. Once the RAM chunk grows over this limit, a new disk chunk is created from its data, and the RAM chunk is reset. Thus, while most changes on the RT index will be performed in RAM only and complete instantly (in milliseconds), those changes that overflow the RAM chunk will stall for the duration of disk chunk creation (a few seconds).

Since version 2.1.1-beta, Sphinx uses double-buffering to avoid INSERT stalls. When data is being dumped to disk, the second buffer is used, so further INSERTs won't be delayed.
The second buffer is defined to be 10% of the size of the standard buffer, rt_mem_limit, but future versions of Sphinx may allow configuring this further.

Disk chunks are, in fact, just regular disk-based indexes. But they're a part of an RT index and automatically managed by it, so you need not configure or manage them manually. Because a new disk chunk is created every time the RAM chunk overflows the limit, and because the in-memory chunk format is close to the on-disk format, the disk chunks will be approximately rt_mem_limit bytes in size each. Generally, it is better to set the limit bigger, to minimize both the frequency of flushes, and the index fragmentation (number of disk chunks). For instance, on a dedicated search server that handles a big RT index, it can be advised to set rt_mem_limit to 1-2 GB. A global limit on all indexes is also planned, but not yet implemented as of 1.10-beta.

Disk chunk full-text index data can not actually be modified, so the full-text field changes (ie. row deletions and updates) suppress a previous row version from a disk chunk using a kill-list, but do not actually physically purge the data. Therefore, on workloads with a high full-text update ratio the index might eventually get polluted by these previous row versions, and searching performance would degrade. Physical index purging that would improve the performance is planned, but not yet implemented as of 1.10-beta.

Data in the RAM chunk gets saved to disk on clean daemon shutdown, and then loaded back on startup. However, on a daemon or server crash, updates from the RAM chunk might be lost. To prevent that, binary logging of transactions can be used; see Section 4.4, "Binary logging" for details.

Full-text changes in an RT index are transactional. They are stored in a per-thread accumulator until COMMIT, then applied at once. Bigger batches per single COMMIT should result in faster indexing.

4.4. Binary logging
===================

Binary logs are essentially a recovery mechanism. With binary logs enabled, searchd writes every given transaction to the binlog file, and uses that for recovery after an unclean shutdown. On clean shutdown, RAM chunks are saved to disk, and then all the binlog files are unlinked.

During normal operation, a new binlog file will be opened every time the binlog_max_log_size limit is reached. Older, already closed binlog files are kept until all of the transactions stored in them (from all indexes) are flushed as a disk chunk. Setting the limit to 0 pretty much prevents the binlog from being unlinked at all while searchd is running; however, it will still be unlinked on clean shutdown. (This is the default case as of 2.0.3-release; binlog_max_log_size defaults to 0.)

There are 3 different binlog flushing strategies, controlled by the binlog_flush directive, which takes the values of 0, 1, or 2. 0 means to flush the log to the OS and sync it to disk every second; 1 means flush and sync every transaction; and 2 (the default mode) means flush every transaction but sync every second. Sync is relatively slow because it has to perform physical disk writes, so mode 1 is the safest (every committed transaction is guaranteed to be written on disk) but the slowest. Flushing the log to the OS prevents data loss on searchd crashes but not on system crashes. Mode 2 is the default.

On recovery after an unclean shutdown, binlogs are replayed and all logged transactions since the last good on-disk state are restored.
Transactions are checksummed so in case of binlog file corruption garbage data will not be replayed; such a broken transaction will be detected and, currently, will stop replay. Transactions also start with a magic marker and are timestamped, so in case of binlog damage in the middle of the file, it's technically possible to skip broken transactions and keep replaying from the next good one, and/or it's possible to replay transactions until a given timestamp (point-in-time recovery), but none of that is implemented yet as of 1.10-beta.

One unwanted side effect of binlogs is that actively updating a small RT index that fully fits into a RAM chunk will lead to an ever-growing binlog that can never be unlinked until clean shutdown. Binlogs are essentially append-only deltas against the last known good saved state on disk, and unless the RAM chunk gets saved, they can not be unlinked. An ever-growing binlog is not very good for disk use and crash recovery time. Starting with 2.0.1-beta you can configure searchd to perform a periodic RAM chunk flush to fix that problem using the rt_flush_period directive. With periodic flushes enabled, searchd will keep a separate thread, checking whether the RT indexes' RAM chunks need to be written back to disk. Once that happens, the respective binlogs can be (and are) safely unlinked.

Note that rt_flush_period only controls the frequency at which the checks happen. There are no guarantees that a particular RAM chunk will get saved. For instance, it does not make sense to regularly re-save a huge RAM chunk that only gets a few rows' worth of updates. The search daemon uses a few heuristics to determine whether to actually perform the flush.

Chapter 5. Searching
====================

Table of Contents

5.1. Matching modes
5.2. Boolean query syntax
5.3. Extended query syntax
5.4. Search results ranking
5.5. Expressions, functions, and operators
5.5.1. Operators
5.5.2. Numeric functions
5.5.3. Date and time functions
5.5.4. Type conversion functions
5.5.5. Comparison functions
5.5.6. Miscellaneous functions
5.6. Sorting modes
5.7. Grouping (clustering) search results
5.8. Distributed searching
5.9. searchd query log formats
5.9.1. Plain log format
5.9.2. SphinxQL log format
5.10. MySQL protocol support and SphinxQL
5.11. Multi-queries
5.12. Collations
5.13. User-defined functions (UDF)

5.1. Matching modes
===================

So-called matching modes are a legacy feature that used to provide (very) limited query syntax and ranking support. Currently, they are deprecated in favor of the full-text query language and so-called rankers. Starting with version 0.9.9-release, it is thus strongly recommended to use SPH_MATCH_EXTENDED and proper query syntax rather than any other legacy mode. All those other modes are actually internally converted to extended syntax anyway. SphinxAPI still defaults to SPH_MATCH_ALL, but that is for compatibility reasons only.

There are the following matching modes available:

* SPH_MATCH_ALL, matches all query words (default mode);
* SPH_MATCH_ANY, matches any of the query words;
* SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;
* SPH_MATCH_BOOLEAN, matches query as a boolean expression (see Section 5.2, "Boolean query syntax");
* SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language (see Section 5.3, "Extended query syntax");
* SPH_MATCH_EXTENDED2, an alias for SPH_MATCH_EXTENDED;
* SPH_MATCH_FULLSCAN, matches query, forcibly using the "full scan" mode as described below.
Note that in this mode any query terms will be ignored: filters, filter ranges, and grouping will still be applied, but no text matching will be performed.

SPH_MATCH_EXTENDED2 was used during the 0.9.8 and 0.9.9 development cycle, when the internal matching engine was being rewritten (for the sake of additional functionality and better performance). By 0.9.9-release, the older version was removed, and SPH_MATCH_EXTENDED and SPH_MATCH_EXTENDED2 are now just aliases.

The SPH_MATCH_FULLSCAN mode will be automatically activated in place of the specified matching mode when the following conditions are met:

1. The query string is empty (ie. its length is zero).
2. docinfo storage is set to extern.

In full scan mode, all the indexed documents will be considered as matching. Such queries will still apply filters, sorting, and group by, but will not perform any full-text searching. This can be useful to unify full-text and non-full-text searching code, or to offload the SQL server (there are cases when Sphinx scans will perform better than analogous MySQL queries). An example of using the full scan mode might be to find posts in a forum. By selecting the forum's user ID via SetFilter() but not actually providing any search text, Sphinx will match every document (i.e. every post) where SetFilter() would match - in this case providing every post from that user. By default this will be ordered by relevancy, followed by Sphinx document ID in ascending order (earliest first).

5.2. Boolean query syntax
=========================

Boolean queries allow the following special operators to be used:

* explicit operator AND:
| hello & world
* operator OR:
| hello | world
* operator NOT:
| hello -world
| hello !world
* grouping:
| ( hello world )

Here's an example query which uses all these operators:

Example 5.1. Boolean query example

| ( cat -dog ) | ( cat -mouse )

There is always an implicit AND operator, so the "hello world" query actually means "hello & world". OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

Since version 2.1.1-beta, queries may be automatically optimized if OPTION boolean_simplify=1 is specified. Some transformations performed by this optimization include:

* Excess brackets: ((A | B) | C) becomes ( A | B | C ); ((A B) C) becomes ( A B C )
* Excess AND NOT: ((A !N1) !N2) becomes (A !(N1 | N2))
* Common NOT: ((A !N) | (B !N)) becomes ((A|B) !N)
* Common Compound NOT: ((A !(N AA)) | (B !(N BB))) becomes (((A|B) !N) | (A !AA) | (B !BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B
* Common subterm: ((A (N | AA)) | (B (N | BB))) becomes (((A|B) N) | (A AA) | (B BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B
* Common keywords: (A | "A B"~N) becomes A; ("A B" | "A B C") becomes "A B"; ("A B"~N | "A B C"~N) becomes ("A B"~N)
* Common phrase: ("X A B" | "Y A B") becomes (("X|Y") "A B")
* Common AND NOT: ((A !X) | (A !Y) | (A !Z)) becomes (A !(X Y Z))
* Common OR NOT: ((A !(N | N1)) | (B !(N | N2))) becomes (( (A !N1) | (B !N2) ) !N)

Note that optimizing the queries consumes CPU time, so for simple queries (or for hand-optimized queries) you'll do better with the default boolean_simplify=0 value. Simplification is often beneficial for complex queries, or for algorithmically generated queries.

Queries like "-dog", which implicitly include all documents from the collection, can not be evaluated.
This is both for technical and performance reasons. Technically, Sphinx does not always keep a list of all IDs. Performance-wise, when the collection is huge (ie. 10-100M documents), evaluating such queries could take very long.

5.3. Extended query syntax
==========================

The following special operators and modifiers can be used when using the extended matching mode:

* operator OR:
| hello | world
* operator NOT:
| hello -world
| hello !world
* field search operator:
| @title hello @body world
* field position limit modifier (introduced in version 0.9.9-rc1):
| @body[50] hello
* multiple-field search operator:
| @(title,body) hello world
* ignore field search operator (will ignore any matches of 'hello world' from field 'title'):
| @!title hello world
* ignore multiple-field search operator (if we have fields title, subject and body, then @!(title) is equivalent to @(subject,body)):
| @!(title,body) hello world
* all-field search operator:
| @* hello
* phrase search operator:
| "hello world"
* proximity search operator:
| "hello world"~10
* quorum matching operator:
| "the world is a wonderful place"/3
* strict order operator (aka operator "before"):
| aaa << bbb << ccc
* exact form modifier (introduced in version 0.9.9-rc1):
| raining =cats and =dogs
* field-start and field-end modifier (introduced in version 0.9.9-rc2):
| ^hello world$
* NEAR, generalized proximity operator (introduced in version 2.0.1-beta):
| hello NEAR/3 world NEAR/4 "my test"
* SENTENCE operator (introduced in version 2.0.1-beta):
| all SENTENCE words SENTENCE "in one sentence"
* PARAGRAPH operator (introduced in version 2.0.1-beta):
| "Bill Gates" PARAGRAPH "Steve Jobs"
* ZONE limit operator:
| ZONE:(h3,h4) only in these titles
* ZONESPAN limit operator:
| ZONESPAN:(h2) only in a (single) title

Here's an example query that uses some of these operators:

Example 5.2. Extended matching mode: query example

| "hello world" @title "example program"~5 @body python -(php|perl) @* code

The full meaning of this search is:

* Find the words 'hello' and 'world' adjacently in any field in a document;
* Additionally, the same document must also contain the words 'example' and 'program' in the title field, with up to, but not including, 5 words between the words in question. (E.g. "example PHP program" would be matched; however, "example script to introduce outside data into the correct context for your program" would not, because the two terms have 5 or more words between them.)
* Additionally, the same document must contain the word 'python' in the body field, but not contain either 'php' or 'perl';
* Additionally, the same document must contain the word 'code' in any field.

There is always an implicit AND operator, so "hello world" means that both "hello" and "world" must be present in a matching document. OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

The field limit operator limits subsequent searching to a given field. Normally, the query will fail with an error message if the given field name does not exist in the searched index. However, that can be suppressed by specifying the "@@relaxed" option at the very beginning of the query:

| @@relaxed @nosuchfield my query

This can be helpful when searching through heterogeneous indexes with different schemas.

Field position limit, introduced in version 0.9.9-rc1, additionally restricts the searching to the first N positions within a given field (or fields).
For example, "@body[50] hello" will not match the documents where the keyword 'hello' occurs at position 51 and below in the body. Proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. For instance, "cat dog mouse"~5 query means that there must be less than 8-word span which contains all 3 words, ie. "CAT aaa bbb ccc DOG eee fff MOUSE" document will not match this query, because this span is exactly 8 words long. Quorum matching operator introduces a kind of fuzzy matching. It will only match those documents that pass a given threshold of given words. The example above ("the world is a wonderful place"/3) will match all documents that have at least 3 of the 6 specified words. Operator is limited to 255 keywords. Instead of an absolute number, you can also specify a number between 0.0 and 1.0 (standing for 0% and 100%), and Sphinx will match only documents with at least the specified percentage of given words. The same example above could also have been written "the world is a wonderful place"/0.5 and it would match documents with at least 50% of the 6 words. Strict order operator (aka operator "before"), introduced in version 0.9.9-rc2, will match the document only if its argument keywords occur in the document exactly in the query order. For instance, "black << cat" query (without quotes) will match the document "black and white cat" but not the "that cat was black" document. Order operator has the lowest priority. It can be applied both to just keywords and more complex expressions, ie. this is a valid query: | (bag of words) << "exact phrase" << red|green|blue Exact form keyword modifier, introduced in version 0.9.9-rc1, will match the document only if the keyword occurred in exactly the specified form. The default behaviour is to match the document if the stemmed keyword matches. For instance, "runs" query will match both the document that contains "runs" and the document that contains "running", because both forms stem to just "run" - while "=runs" query will only match the first document. Exact form operator requires index_exact_words option to be enabled. This is a modifier that affects the keyword and thus can be used within operators such as phrase, proximity, and quorum operators. Field-start and field-end keyword modifiers, introduced in version 0.9.9-rc2, will make the keyword match only if it occurred at the very start or the very end of a fulltext field, respectively. For instance, the query "^hello world$" (with quotes and thus combining phrase operator and start/end modifiers) will only match documents that contain at least one field that has exactly these two keywords. Starting with 0.9.9-rc1, arbitrarily nested brackets and negations are allowed. However, the query must be possible to compute without involving an implicit list of all documents: | // correct query | aaa -(bbb -(ccc ddd)) | | // queries that are non-computable | -aaa | aaa | -bbb NEAR operator, added in 2.0.1-beta, is a generalized version of a proximity operator. The syntax is NEAR/N, it is case-sensitive, and no spaces are allowed beetwen the NEAR keyword, the slash sign, and the distance value. The original proximity operator only worked on sets of keywords. NEAR is more generic and can accept arbitrary subexpressions as its two arguments, matching the document when both subexpressions are found within N words of each other, no matter in which order. NEAR is left associative and has the same (lowest) precedence as BEFORE. 
You should also note that a (one NEAR/7 two NEAR/7 three) query using NEAR is not really equivalent to a ("one two three"~7) one using the keyword proximity operator. The difference here is that the proximity operator allows for up to 6 non-matching words between all the 3 matching words, but the version with NEAR is less restrictive: it would allow for up to 6 words between 'one' and 'two' and then for up to 6 more between that two-word match and the 'three' keyword.

The SENTENCE and PARAGRAPH operators, added in 2.0.1-beta, match the document when both their arguments are within the same sentence or the same paragraph of text, respectively. The arguments can be either keywords, or phrases, or instances of the same operator. Here are a few examples:

| one SENTENCE two
| one SENTENCE "two three"
| one SENTENCE "two three" SENTENCE four

The order of the arguments within the sentence or paragraph does not matter. These operators only work on indexes built with index_sp (the sentence and paragraph indexing feature) enabled, and revert to a mere AND otherwise. Refer to the index_sp directive documentation for notes on what's considered a sentence and a paragraph.

The ZONE limit operator, added in 2.0.1-beta, is quite similar to the field limit operator, but restricts matching to a given in-field zone or a list of zones. Note that the subsequent subexpressions are not required to match in a single contiguous span of a given zone, and may match in multiple spans. For instance, the (ZONE:th hello world) query will match this example document:

| <th>Table 1. Local awareness of Hello Kitty brand.</th>
| .. some table data goes here ..
| <th>Table 2. World-wide brand awareness.</th>

The ZONE operator affects the query until the next field or ZONE limit operator, or the closing parenthesis. It only works on indexes built with zones support (see Section 11.2.9, index_zones) and will be ignored otherwise.

The ZONESPAN limit operator, added in 2.1.1-beta, is similar to the ZONE operator, but requires the match to occur in a single contiguous span. In the example above, (ZONESPAN:th hello world) would not match the document, since "hello" and "world" do not occur within the same span.

5.4. Search results ranking
===========================

Ranking overview
----------------

Ranking (aka weighting) of the search results can be defined as a process of computing a so-called relevance (aka weight) for every given matched document with regards to a given query that matched it. So relevance is in the end just a number attached to every document that estimates how relevant the document is to the query. Search results can then be sorted based on this number and/or some additional parameters, so that the most sought after results would come up higher on the results page.

There is no single standard one-size-fits-all way to rank any document in any scenario. Moreover, there can not ever be such a way, because relevance is subjective. As in, what seems relevant to you might not seem relevant to me. Hence, in the general case it's not just hard to compute, it's theoretically impossible.

So ranking in Sphinx is configurable. It has a notion of a so-called ranker. A ranker can formally be defined as a function that takes a document and a query as its input and produces a relevance value as output. In layman's terms, a ranker controls exactly how (that is, using which specific algorithm) Sphinx will assign weights to documents.

Previously, this ranking function was rigidly bound to the matching mode.
So in the legacy matching modes (that is, SPH_MATCH_ALL, SPH_MATCH_ANY, SPH_MATCH_PHRASE, and SPH_MATCH_BOOLEAN) you can not choose the ranker. You can only do that in the SPH_MATCH_EXTENDED mode. (Which is the only mode in SphinxQL and the suggested mode in SphinxAPI anyway.) To choose a non-default ranker you can either use SetRankingMode() with SphinxAPI, or the OPTION ranker clause in a SELECT statement when using SphinxQL.

As a sidenote, legacy matching modes are internally implemented via the unified syntax anyway. When you use one of those modes, Sphinx just internally adjusts the query and sets the associated ranker, then executes the query using the very same unified code path.

Available rankers
-----------------

Sphinx ships with a number of built-in rankers suited for different purposes. A number of them use two factors, phrase proximity (aka LCS) and BM25. Phrase proximity works on the keyword positions, while BM25 works on the keyword frequencies. Basically, the better the degree of the phrase match between the document body and the query, the higher the phrase proximity (it maxes out when the document contains the entire query as a verbatim quote). And BM25 is higher when the document contains more rare words. We'll save the detailed discussion for later.

Currently implemented rankers are:

* SPH_RANK_PROXIMITY_BM25, the default ranking mode that uses and combines both phrase proximity and BM25 ranking.
* SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to most other full-text engines). This mode is faster but may result in worse quality on queries which contain more than 1 keyword.
* SPH_RANK_NONE, no ranking mode. This mode is obviously the fastest. A weight of 1 is assigned to all matches. This is sometimes called boolean searching: it just matches the documents but does not rank them.
* SPH_RANK_WORDCOUNT, ranking by the keyword occurrence count. This ranker computes the per-field keyword occurrence counts, then multiplies them by field weights, and sums the resulting values.
* SPH_RANK_PROXIMITY, added in version 0.9.9-rc1, returns the raw phrase proximity value as a result. This mode is internally used to emulate SPH_MATCH_ALL queries.
* SPH_RANK_MATCHANY, added in version 0.9.9-rc1, returns rank as it was computed in SPH_MATCH_ANY mode earlier, and is internally used to emulate SPH_MATCH_ANY queries.
* SPH_RANK_FIELDMASK, added in version 0.9.9-rc2, returns a 32-bit mask with the N-th bit corresponding to the N-th fulltext field, numbering from 0. The bit will only be set when the respective field has any keyword occurrences satisfying the query.
* SPH_RANK_SPH04, added in version 1.10-beta, is generally based on the default SPH_RANK_PROXIMITY_BM25 ranker, but additionally boosts the matches when they occur in the very beginning or the very end of a text field. Thus, if a field equals the exact query, SPH04 should rank it higher than a field that contains the exact query but is not equal to it. (For instance, when the query is "Hyde Park", a document entitled "Hyde Park" should be ranked higher than one entitled "Hyde Park, London" or "The Hyde Park Cafe".)
* SPH_RANK_EXPR, added in version 2.0.2-beta, lets you specify the ranking formula at run time. It exposes a number of internal text factors and lets you define how the final weight should be computed from those factors. You can find more details about its syntax and a reference of available factors in a subsection below.
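As an illustration of how a ranker interacts with per-field weights, the same query can be run through SphinxQL with the wordcount ranker and explicit weights (the index, field names, and weight values below are purely illustrative):

| SELECT id, WEIGHT() FROM myindex WHERE MATCH('hello world')
| OPTION ranker=wordcount, field_weights=(title=10, body=1);

With these assumed weights, every keyword occurrence in 'title' contributes ten times more to the final weight than an occurrence in 'body'.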
You should specify the SPH_RANK_ prefix and use capital letters only when using the SetRankingMode() call from the SphinxAPI. The API ports expose these as global constants. Using SphinxQL syntax, the prefix should be omitted and the ranker name is case insensitive. Example:

| // SphinxAPI
| $client->SetRankingMode ( SPH_RANK_SPH04 );
|
| // SphinxQL
| mysql_query ( "SELECT ... OPTION ranker=sph04" );

Legacy matching modes rankers
-----------------------------

Legacy matching modes automatically select a ranker as follows:

* SPH_MATCH_ALL uses SPH_RANK_PROXIMITY ranker;
* SPH_MATCH_ANY uses SPH_RANK_MATCHANY ranker;
* SPH_MATCH_PHRASE uses SPH_RANK_PROXIMITY ranker;
* SPH_MATCH_BOOLEAN uses SPH_RANK_NONE ranker.

Expression based ranker (SPH_RANK_EXPR)
---------------------------------------

The expression ranker, added in version 2.0.2-beta, lets you change the ranking formula on the fly, on a per-query basis. For a quick kickoff, this is how you emulate the PROXIMITY_BM25 ranker using the expression based one:

| SELECT *, WEIGHT() FROM myindex WHERE MATCH('hello world')
| OPTION ranker=expr('sum(lcs*user_weight)*1000+bm25')

The output of this query should not change if you omit the OPTION clause, because the default ranker (PROXIMITY_BM25) behaves exactly as specified in the ranker formula above. But the expression ranker is somewhat more flexible than just that and provides access to many more factors.

The ranking formula is an arbitrary arithmetic expression that can use constants, document attributes, built-in functions and operators (described in Section 5.5, "Expressions, functions, and operators"), and also a few ranking-specific things that are only accessible in a ranking formula. Namely, those are field aggregation functions, field-level, and document-level ranking factors.

A document-level factor is a numeric value computed by the ranking engine for every matched document with regards to the current query. (So it differs from a plain document attribute in that attributes do not depend on the full text query, while factors might.) Those factors can be used anywhere in the ranking expression. Currently implemented document-level factors are:

* bm25 (integer), a document-level BM25 estimate (computed without keyword occurrence filtering).
* max_lcs (integer), a query-level maximum possible value that the sum(lcs*user_weight) expression can ever take. This can be useful for weight boost scaling. For instance, the MATCHANY ranker formula uses this to guarantee that a full phrase match in any field ranks higher than any combination of partial matches in all fields.
* field_mask (integer), a document-level 32-bit mask of matched fields.
* query_word_count (integer), the number of unique keywords in a query, adjusted for the number of excluded keywords. For instance, both the (one one one one) and (one !two) queries should assign a value of 1 to this factor, because there is just one unique non-excluded keyword.
* doc_word_count (integer), the number of unique keywords matched in the entire document.

A field-level factor is a numeric value computed by the ranking engine for every matched in-document text field with regards to the current query. As more than one field can be matched by a query, but the final weight needs to be a single integer value, these values need to be folded into a single one. To achieve that, field-level factors can only be used within a field aggregation function; they can not be used anywhere else in the expression.
For example, you can not use (lcs+bm25) as your ranking expression, as lcs takes multiple values (one in every matched field). You should use (sum(lcs)+bm25) instead; that expression sums lcs over all matching fields, and then adds bm25 to that per-field sum. Currently implemented field-level factors are:

* lcs (integer), the length of a maximum verbatim match between the document and the query, counted in words. LCS stands for Longest Common Subsequence (or Subset). Takes a minimum value of 1 when only stray keywords were matched in a field, and a maximum value of the query keyword count when the entire query was matched in a field verbatim (in the exact query keyword order). For example, if the query is 'hello world' and the field contains these two words quoted from the query (that is, adjacent to each other, and exactly in the query order), lcs will be 2. For example, if the query is 'hello world program' and the field contains 'hello world', lcs will be 2. Note that any subset of the query keywords works, not just a subset of adjacent keywords. For example, if the query is 'hello world program' and the field contains 'hello (test program)', lcs will be 2 just as well, because both 'hello' and 'program' matched in the same respective positions as they were in the query. Finally, if the query is 'hello world program' and the field contains 'hello world program', lcs will be 3. (Hopefully that is unsurprising at this point.)
* user_weight (integer), the user specified per-field weight (refer to SetFieldWeights() in SphinxAPI and OPTION field_weights in SphinxQL respectively). The weights default to 1 if not specified explicitly.
* hit_count (integer), the number of keyword occurrences that matched in the field. Note that a single keyword may occur multiple times. For example, if 'hello' occurs 3 times in a field and 'world' occurs 5 times, hit_count will be 8.
* word_count (integer), the number of unique keywords matched in the field. For example, if 'hello' and 'world' occur anywhere in a field, word_count will be 2, regardless of how many times both keywords occur.
* tf_idf (float), the sum of TF*IDF over all the keywords matched in the field. IDF is the Inverse Document Frequency, a floating point value between 0 and 1 that describes how frequent the keyword is (basically, 0 for a keyword that occurs in every document indexed, and 1 for a unique keyword that occurs in just a single document). TF is the Term Frequency, the number of matched keyword occurrences in the field. As a side note, tf_idf is actually computed by summing IDF over all matched occurrences. That's by construction equivalent to summing TF*IDF over all matched keywords.
* min_hit_pos (integer), the position of the first matched keyword occurrence, counted in words. Indexing begins from position 1.
* min_best_span_pos (integer), the position of the first maximum LCS span occurrence. For example, assume that our query was 'hello world program' and the 'hello world' subphrase was matched twice in the field, in positions 13 and 21. Assume that 'hello' and 'world' additionally occurred elsewhere in the field, but never next to each other and thus never as a subphrase match. In that case, min_best_span_pos will be 13. Note that for single-keyword queries min_best_span_pos will always equal min_hit_pos.
* exact_hit (boolean), whether the query was an exact match of the entire current field. Used in the SPH04 ranker.
* min_idf, max_idf, and sum_idf were added in version 2.1.1-beta.
These factors respectively represent the min(idf), max(idf) and sum(idf) over all the keywords that were matched.

A field aggregation function is a single argument function that takes an expression with field-level factors, iterates it over all the matched fields, and computes the final result. Currently implemented field aggregation functions are:

* sum, sums the argument expression over all matched fields. For instance, sum(1) should return the number of matched fields.

Expressions for the built-in rankers
------------------------------------

Most of the other rankers can actually be emulated with the expression based ranker. You just need to pass a proper expression. Such emulation is, of course, going to be slower than using the built-in, compiled ranker, but still might be of interest if you want to fine-tune your ranking formula starting with one of the existing ones. Also, the formulas define the nitty gritty ranker details in a nicely readable fashion.

* SPH_RANK_PROXIMITY_BM25 = sum(lcs*user_weight)*1000+bm25
* SPH_RANK_BM25 = bm25
* SPH_RANK_NONE = 1
* SPH_RANK_WORDCOUNT = sum(hit_count*user_weight)
* SPH_RANK_PROXIMITY = sum(lcs*user_weight)
* SPH_RANK_MATCHANY = sum((word_count+(lcs-1)*max_lcs)*user_weight)
* SPH_RANK_FIELDMASK = field_mask
* SPH_RANK_SPH04 = sum((4*lcs+2*(min_hit_pos==1)+exact_hit)*user_weight)*1000+bm25

5.5. Expressions, functions, and operators
==========================================

Sphinx lets you use arbitrary arithmetic expressions both via SphinxQL and SphinxAPI, involving attribute values, internal attributes (document ID and relevance weight), arithmetic operations, a number of built-in functions, and user-defined functions. This section documents the supported operators and functions. Here's the complete reference list for quick access.

* Arithmetic operators: +, -, *, /, %, DIV, MOD
* Comparison operators: <, >, <=, >=, =, <>
* Boolean operators: AND, OR, NOT
* Bitwise operators: &, |
* ABS()
* BIGINT()
* BITDOT()
* CEIL()
* CONTAINS()
* COS()
* CRC32()
* DAY()
* EXP()
* FIBONACCI()
* FLOOR()
* GEODIST()
* GEOPOLY2D()
* IDIV()
* IF()
* IN()
* INTERVAL()
* LENGTH()
* LN()
* LOG10()
* LOG2()
* MAX()
* MIN()
* MONTH()
* NOW()
* POLY2D()
* POW()
* SIN()
* SINT()
* SQRT()
* YEAR()
* YEARMONTH()
* YEARMONTHDAY()

5.5.1. Operators
----------------

Arithmetic operators: +, -, *, /, %, DIV, MOD
The standard arithmetic operators. Arithmetic calculations involving those can be performed in three different modes: (a) using single-precision, 32-bit IEEE 754 floating point values (the default), (b) using signed 32-bit integers, (c) using 64-bit signed integers. The expression parser will automatically switch to integer mode if there are no operations that result in a floating point value. Otherwise, it will use the default floating point mode. For instance, a+b will be computed using 32-bit integers if both arguments are 32-bit integers; or using 64-bit integers if both arguments are integers but one of them is 64-bit; or in floats otherwise. However, a/b or sqrt(a) will always be computed in floats, because these operations return a result of non-integer type. To avoid that in the division case, you can use either the IDIV(a,b) or the a DIV b form. Also, a*b will not be automatically promoted to 64-bit when the arguments are 32-bit. To enforce 64-bit results, you can use BIGINT(). (But note that if there are non-integer operations, BIGINT() will simply be ignored.)

Comparison operators: <, >, <=, >=, =, <>
Comparison operators (eg. = or <=) return 1.0 when the condition is true and 0.0 otherwise. For instance, (a=b)+3 will evaluate to 4 when attribute 'a' is equal to attribute 'b', and to 3 when 'a' is not. Unlike MySQL, the equality comparisons (ie. the = and <> operators) introduce a small equality threshold (1e-6 by default). If the difference between the compared values is within the threshold, they will be considered equal.

Boolean operators: AND, OR, NOT
Boolean operators (AND, OR, NOT) were introduced in 0.9.9-rc2 and behave as usual. They are left-associative and have the lowest priority compared to other operators. NOT has a higher priority than AND and OR, but nevertheless lower than any other operator. AND and OR have the same priority, so the use of brackets is recommended to avoid confusion in complex expressions.

Bitwise operators: &, |
These operators perform bitwise AND and OR respectively. The operands must be of integer types. Introduced in version 1.10-beta.

5.5.2. Numeric functions
------------------------

ABS()
Returns the absolute value of the argument.

BITDOT()
BITDOT(mask, w0, w1, ...) returns the sum of products of each bit of the mask multiplied by its weight: bit0*w0 + bit1*w1 + ...

CEIL()
Returns the smallest integer value greater than or equal to the argument.

CONTAINS()
CONTAINS(polygon, x, y) checks whether the (x,y) point is within the given polygon, and returns 1 if true, or 0 if false. The polygon has to be specified using either the POLY2D() function or the GEOPOLY2D() function. The former function is intended for "small" polygons, meaning less than 500 km (300 miles) a side, and it doesn't take into account the Earth's curvature for speed. For larger distances, you should use GEOPOLY2D, which tessellates the given polygon into smaller parts, accounting for the Earth's curvature. These functions were added in version 2.1.1-beta.

COS()
Returns the cosine of the argument.

EXP()
Returns the exponent of the argument (e=2.718... to the power of the argument).

FIBONACCI()
Returns the N-th Fibonacci number, where N is the integer argument. That is, arguments of 0 and up will generate the values 0, 1, 1, 2, 3, 5, 8, 13 and so on. Note that the computations are done using 32-bit integer math and thus the 48th and subsequent numbers will be returned modulo 2^32.

FLOOR()
Returns the largest integer value less than or equal to the argument.

GEOPOLY2D()
GEOPOLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the CONTAINS() function. This function takes into account the Earth's curvature by tessellating the polygon into smaller ones, and should be used for larger areas; see the POLY2D() function.

IDIV()
Returns the result of an integer division of the first argument by the second argument. Both arguments must be of an integer type.

LN()
Returns the natural logarithm of the argument (with the base of e=2.718...).

LOG10()
Returns the common logarithm of the argument (with the base of 10).

LOG2()
Returns the binary logarithm of the argument (with the base of 2).

MAX()
Returns the bigger of two arguments.

MIN()
Returns the smaller of two arguments.

POLY2D()
POLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the CONTAINS() function. This polygon assumes a flat Earth, so it should not be too large; see the GEOPOLY2D() function.

POW()
Returns the first argument raised to the power of the second argument.

SIN()
Returns the sine of the argument.

SQRT()
Returns the square root of the argument.
5.5.3. Date and time functions
------------------------------

DAY()
Returns the integer day of the month (in the 1..31 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.

MONTH()
Returns the integer month (in the 1..12 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.

NOW()
Returns the current timestamp as an INTEGER. Introduced in version 0.9.9-rc1.

YEAR()
Returns the integer year (in the 1969..2038 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.

YEARMONTH()
Returns the integer year and month code (in the 196912..203801 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.

YEARMONTHDAY()
Returns the integer year, month, and date code (in the 19691231..20380119 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.

5.5.4. Type conversion functions
--------------------------------

BIGINT()
Forcibly promotes the integer argument to 64-bit type, and does nothing on a floating point argument. It's intended to help enforce evaluation of certain expressions (such as a*b) in 64-bit mode even though all the arguments are 32-bit. Introduced in version 0.9.9-rc1.

SINT()
Forcibly reinterprets its 32-bit unsigned integer argument as signed, and also expands it to 64-bit type (because the 32-bit type is unsigned). It's easily illustrated by the following example: 1-2 normally evaluates to 4294967295, but SINT(1-2) evaluates to -1. Introduced in version 1.10-beta.

5.5.5. Comparison functions
---------------------------

IF()
IF() behavior is slightly different from that of its MySQL counterpart. It takes 3 arguments, checks whether the 1st argument is equal to 0.0, and returns the 2nd argument if it is not zero, or the 3rd one when it is. Note that unlike the comparison operators, IF() does not use a threshold! Therefore, it's safe to use comparison results as its 1st argument, but arithmetic operators might produce unexpected results. For instance, the following two calls will produce different results even though they are logically equivalent:

| IF ( sqrt(3)*sqrt(3)-3<>0, a, b )
| IF ( sqrt(3)*sqrt(3)-3, a, b )

In the first case, the comparison operator <> will return 0.0 (false) because of the threshold, and IF() will always return 'b' as a result. In the second one, the same sqrt(3)*sqrt(3)-3 expression will be compared with zero without a threshold by the IF() function itself. But its value will be slightly different from zero because of the limited precision of floating point calculations. Because of that, the comparison with 0.0 done by IF() will not pass, and the second variant will return 'a' as a result.

IN()
IN(expr,val1,val2,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns 1 if the 1st argument (expr) is equal to any of the other arguments (val1..valN), or 0 otherwise. Currently, all the checked values (but not the expression itself!) are required to be constant. (It's technically possible to implement arbitrary expressions too, and that might be done in the future.) Constants are pre-sorted and then a binary search is used, so IN() even against a big arbitrary list of constants will be very quick. Starting with 0.9.9-rc2, the first argument can also be an MVA attribute. In that case, IN() will return 1 if any of the MVA values is equal to any of the other arguments.
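For instance, both functions can be combined in a single SphinxQL query; the index name and the 'karma' and 'tag_ids' attributes below are purely illustrative, with 'tag_ids' assumed to be an MVA:

| SELECT id, IF(karma>100, 1, 0) AS is_trusted, IN(tag_ids, 2, 5, 7) AS has_tag
| FROM myindex WHERE MATCH('test');

Here has_tag will be 1 for every matched document whose tag_ids MVA contains any of the values 2, 5, or 7, and 0 otherwise.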
Starting with 2.0.1-beta, IN() also supports the IN(expr,@uservar) syntax to check whether the value belongs to the list in the given global user variable.

INTERVAL()
INTERVAL(expr,point1,point2,point3,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns the index of the argument that is less than the first argument: it returns 0 if expr<point1, 1 if point1<=expr<point2, and so on. The points are required to be in ascending order (point1 < point2 < ... < pointN) for the function to work correctly.

5.5.6. Miscellaneous functions
------------------------------

PACKEDFACTORS()
PACKEDFACTORS(), added in version 2.1.1-beta, returns all the ranking factors for every matched document as a packed binary blob. Its intended use is to pass the factors to a UDF that implements a custom ranking function, for example:

| SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r
| FROM my_index WHERE MATCH('hello')
| ORDER BY r DESC
| OPTION ranker=expr('1');

Inside the UDF, the blob can be unpacked using the helper functions declared in sphinxudf.h:

| SPH_UDF_FACTORS factors;
| sphinx_factors_init(&factors);
| sphinx_factors_unpack((DWORD*)args->arg_values[0], &factors);
| // ... can use the contents of factors variable here ...
| sphinx_factors_deinit(&factors);

PACKEDFACTORS() data is available at all query stages, not just when doing the initial matching and ranking pass. That enables another particularly interesting application of PACKEDFACTORS(), namely re-ranking.

In the example just above, we used an expression-based ranker with a dummy expression, and sorted the result set by the value computed by our UDF. In other words, we used the UDF to rank all our results. Assume now, for the sake of an example, that our UDF is extremely expensive to compute and has a throughput of just 10,000 calls per second. Assume that our query matches 1,000,000 documents. To maintain reasonable performance, we would then want to use a (much) simpler expression to do most of our ranking, and then apply the expensive UDF to only a few top results, say, the top-100 results. Or, in other words, build the top-100 results using a simpler ranking function and then re-rank those with a complex one. We can do that just as well with subselects:

| SELECT * FROM (
|     SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r
|     FROM my_index WHERE match('hello')
|     OPTION ranker=expr('sum(lcs)*1000+bm25')
|     ORDER BY WEIGHT() DESC
|     LIMIT 100
| ) ORDER BY r DESC LIMIT 10

In this example, the expression-based ranker will be called for every matched document to compute WEIGHT(). So it will get called 1,000,000 times. But the UDF computation can be postponed until the outer sort. And it also will be done for just the top-100 matches by WEIGHT(), according to the inner limit. So the UDF will only get called 100 times. And then the final top-10 matches by UDF value will be selected and returned to the application.

For reference, in the distributed case PACKEDFACTORS() data gets sent from the agents to the master in a binary format, too. This makes it technically feasible to implement an additional re-ranking pass (or passes) on the master node, if needed.

If used with SphinxQL but not called from any UDFs, the result of PACKEDFACTORS() is simply formatted as plain text, which can be used to manually assess the ranking factors. Note that this feature is not currently supported by the Sphinx API.

LENGTH()
The LENGTH(attr_mva) function, introduced in version 2.1.2-stable, returns the number of elements in an MVA set. It works with both 32-bit and 64-bit MVA attributes.

5.6. Sorting modes
==================

There are the following result sorting modes available:

* SPH_SORT_RELEVANCE mode, that sorts by relevance in descending order (best matches first);
* SPH_SORT_ATTR_DESC mode, that sorts by an attribute in descending order (bigger attribute values first);
* SPH_SORT_ATTR_ASC mode, that sorts by an attribute in ascending order (smaller attribute values first);
* SPH_SORT_TIME_SEGMENTS mode, that sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;
* SPH_SORT_EXTENDED mode, that sorts by an SQL-like combination of columns in ASC/DESC order;
* SPH_SORT_EXPR mode, that sorts by an arithmetic expression.

SPH_SORT_RELEVANCE ignores any additional parameters and always sorts matches by relevance rank.
All other modes require an additional sorting clause, with the syntax depending on the specific mode. SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and SPH_SORT_TIME_SEGMENTS modes require simply an attribute name. SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended sorting mode, SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC", and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.

SPH_SORT_TIME_SEGMENTS mode
---------------------------

In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called time segments, and then sorted by time segment first, and by relevance second.

The segments are calculated according to the current timestamp at the time when the search is performed, so the results would change over time. The segments are as follows:

* last hour,
* last day,
* last week,
* last month,
* last 3 months,
* everything else.

These segments are hardcoded, but it is trivial to change them if necessary.

This mode was added to support searching through blogs, news headlines, etc. When using time segments, recent records would be ranked higher because of the segment, but within the same segment, more relevant records would be ranked higher - unlike sorting by just the timestamp attribute, which would not take relevance into account at all.

SPH_SORT_EXTENDED mode
----------------------

In SPH_SORT_EXTENDED mode, you can specify an SQL-like sort expression with up to 5 attributes (including internal attributes), eg:

| @relevance DESC, price ASC, @id DESC

Both internal attributes (that are computed by the engine on the fly) and user attributes that were configured for this index are allowed. Internal attribute names must start with the magic @-symbol; user attribute names can be used as is. In the example above, @relevance and @id are internal attributes and price is user-specified.

Known internal attributes are:

* @id (match ID)
* @weight (match weight)
* @rank (match weight)
* @relevance (match weight)
* @random (return results in random order)

@rank and @relevance are just additional aliases to @weight.

SPH_SORT_EXPR mode
------------------

Expression sorting mode lets you sort the matches by an arbitrary arithmetic expression, involving attribute values, internal attributes (@id and @weight), arithmetic operations, and a number of built-in functions. Here's an example:

| $cl->SetSortMode ( SPH_SORT_EXPR,
|     "@weight + ( user_karma + ln(pageviews) )*0.1" );

The operators and functions supported in the expressions are discussed in a separate section, Section 5.5, "Expressions, functions, and operators".

5.7. Grouping (clustering) search results
=========================================

Sometimes it could be useful to group (or in other terms, cluster) search results and/or count per-group match counts - for instance, to draw a nice graph of how many matching blog posts there were in each month; or to group Web search results by site; or to group matching forum posts by author; etc.

In theory, this could be performed by doing only the full-text search in Sphinx and then using the found IDs to group on the SQL server side. However, in practice doing this with a big result set (10K-10M matches) would typically kill performance.

To avoid that, Sphinx offers a so-called grouping mode. It is enabled with the SetGroupBy() API call. When grouping, all matches are assigned to different groups based on the group-by value.
This value is computed from the specified attribute using one of the following built-in functions:

* SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;
* SPH_GROUPBY_WEEK, extracts year and first day of the week number (counting from year start) in YYYYNNN format from timestamp;
* SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;
* SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;
* SPH_GROUPBY_ATTR, uses attribute value itself for grouping.

The final search result set then contains one best match per group. The grouping function value and the per-group match count are returned along as "virtual" attributes named @group and @count respectively.

The result set is sorted by the group-by sorting clause, with the syntax similar to the SPH_SORT_EXTENDED sorting clause syntax. In addition to @id and @weight, the group-by sorting clause may also include:

* @group (groupby function value),
* @count (amount of matches in group).

The default mode is to sort by the groupby value in descending order, ie. by "@group desc".

On completion, the total_found result parameter would contain the total amount of matching groups over the whole index.

WARNING: grouping is done in fixed memory and thus its results are only approximate; so there might be more groups reported in total_found than actually present. @count might also be underestimated. To reduce inaccuracy, one should raise max_matches. If max_matches allows storing all the found groups, the results will be 100% correct.

For example, if sorting by relevance and grouping by a "published" attribute with the SPH_GROUPBY_DAY function, then the result set will contain

* one most relevant match for each day when there were any matches published,
* with the day number and per-day match count attached,
* sorted by day number in descending order (ie. recent days first).

Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported through the SetSelect() API call when using GROUP BY.

5.8. Distributed searching
==========================

To scale well, Sphinx has distributed searching capabilities. Distributed searching is useful to improve query latency (ie. search time) and throughput (ie. max queries/sec) in multi-server, multi-CPU or multi-core environments. This is essential for applications which need to search through huge amounts of data (ie. billions of records and terabytes of text).

The key idea is to horizontally partition (HP) the searched data across search nodes and then process it in parallel.

Partitioning is done manually. You should:

* set up several instances of Sphinx programs (indexer and searchd) on different servers;
* make the instances index (and search) different parts of data;
* configure a special distributed index on some of the searchd instances;
* and query this index.

This index only contains references to other local and remote indexes - so it can not be directly reindexed, and you should reindex those indexes which it references instead.

When searchd receives a query against a distributed index, it does the following:

1. connects to the configured remote agents;
2. issues the query;
3. sequentially searches the configured local indexes (while the remote agents are searching);
4. retrieves the remote agents' search results;
5. merges all the results together, removing the duplicates;
6. sends the merged results to the client.

From the application's point of view, there are no differences between searching through a regular index, or a distributed index, at all.
5.8. Distributed searching
==========================

To scale well, Sphinx has distributed searching capabilities. Distributed searching is useful to improve query latency (ie. search time) and throughput (ie. max queries/sec) in multi-server, multi-CPU or multi-core environments. This is essential for applications which need to search through huge amounts of data (ie. billions of records and terabytes of text).

The key idea is to horizontally partition (HP) the searched data across search nodes and then process it in parallel.

Partitioning is done manually. You should

   * setup several instances of the Sphinx programs (indexer and searchd) on different servers;
   * make the instances index (and search) different parts of the data;
   * configure a special distributed index on some of the searchd instances;
   * and query this index.

This index only contains references to other local and remote indexes - so it could not be directly reindexed, and you should reindex those indexes which it references instead.

When searchd receives a query against a distributed index, it does the following:

   1. connects to the configured remote agents;
   2. issues the query;
   3. sequentially searches the configured local indexes (while the remote agents are searching);
   4. retrieves the remote agents' search results;
   5. merges all the results together, removing the duplicates;
   6. sends the merged results to the client.

From the application's point of view, there are no differences between searching through a regular index, or a distributed index at all. That is, distributed indexes are fully transparent to the application, and actually there's no way to tell whether the index you queried was distributed or local. (Even though as of 0.9.9 Sphinx does not allow combining searching through distributed indexes with anything else, this constraint will be lifted in the future.)

Any searchd instance could serve both as a master (which aggregates the results) and a slave (which only does local searching) at the same time. This has a number of uses:

   1. every machine in a cluster could serve as a master which searches the whole cluster, and search requests could be balanced between masters to achieve a kind of HA (high availability) in case any of the nodes fails;
   2. if running within a single multi-CPU or multi-core machine, there would be only 1 searchd instance querying itself as an agent and thus utilizing all CPUs/cores.

It is scheduled to implement better HA support which would allow specifying which agents mirror each other, do health checks, keep track of alive agents, load-balance requests, etc.
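Putting the above together, a distributed index definition in sphinx.conf might look roughly like this (a sketch only; the index and host names are invented for the example, and the local, agent, agent_connect_timeout and agent_query_timeout directives are described in the index configuration options reference):

| index dist1
| {
|     type                  = distributed
|     local                 = chunk1
|     agent                 = box2:9312:chunk2
|     agent                 = box3:9312:chunk3
|     agent_connect_timeout = 1000
|     agent_query_timeout   = 3000
| }

Querying dist1 would then search the local chunk1 index and the remote chunk2 and chunk3 indexes in parallel, merging the results as described above.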
5.9. searchd query log formats
==============================

In version 2.0.1-beta and above, two query log formats are supported. Previous versions only supported a custom plain text format. That format is still the default one. However, while it might be more convenient for manual monitoring and review, it only logs search queries but not the other types of requests, and it does not always contain the complete search query data. The default text format is also harder (and sometimes impossible) to replay for benchmarking purposes. The new sphinxql format alleviates that. It aims to be complete and automatable, even though at the cost of brevity and readability.

5.9.1. Plain log format
-----------------------

By default, searchd logs all successfully executed search queries into a query log file. Here's an example:

| [Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
| [Fri Jun 29 21:20:34 2007] 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test

This log format is as follows:

| [query-date] query-time [match-mode/filters-count/sort-mode
|     total-matches (offset,limit) @groupby-attr] [index-name] query

Match mode can take one of the following values:

   * "all" for SPH_MATCH_ALL mode;
   * "any" for SPH_MATCH_ANY mode;
   * "phr" for SPH_MATCH_PHRASE mode;
   * "bool" for SPH_MATCH_BOOLEAN mode;
   * "ext" for SPH_MATCH_EXTENDED mode;
   * "ext2" for SPH_MATCH_EXTENDED2 mode;
   * "scan" if the full scan mode was used, either by being specified with SPH_MATCH_FULLSCAN, or if the query was empty (as documented under Matching Modes).

Sort mode can take one of the following values:

   * "rel" for SPH_SORT_RELEVANCE mode;
   * "attr-" for SPH_SORT_ATTR_DESC mode;
   * "attr+" for SPH_SORT_ATTR_ASC mode;
   * "tsegs" for SPH_SORT_TIME_SEGMENTS mode;
   * "ext" for SPH_SORT_EXTENDED mode.

Additionally, if searchd was started with --iostats, there will be an additional block of data after the list of searched index(es). A query log entry might take the form of:

| [Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj]
|     [ios=6 kb=111.1 ms=0.5] test

This additional block is information regarding the I/O operations performed for the search: the number of file I/O operations carried out, the amount of data in kilobytes read from the index files, and the time spent on I/O operations (although there is a background processing component, the bulk of this time is the I/O operation time).

5.9.2. SphinxQL log format
--------------------------

This is a new log format introduced in 2.0.1-beta, with the goal of logging everything and then some, in a format that is easy to automate (for instance, to replay automatically). The new format can either be enabled via the query_log_format directive in the configuration file, or switched back and forth on the fly with the SET GLOBAL query_log_format=... statement via SphinxQL. In the new format, the example from the previous section would look as follows. (Wrapped below for readability, but with just one query per line in the actual log.)

| /* Fri Jun 29 21:17:58.609 2007 conn 2 wall 0.004 found 35254 */
| SELECT * FROM lj WHERE MATCH('test') OPTION ranker=proximity;
|
| /* Fri Jun 29 21:20:34.555 2007 conn 3 wall 0.024 found 19886 */
| SELECT * FROM lj WHERE MATCH('test') GROUP BY channel_id
| OPTION ranker=proximity;

Note that all requests would be logged in this format, including those sent via SphinxAPI and SphinxSE, not just those sent via SphinxQL. Also note that this kind of logging works only with plain log files and will not work if you use 'syslog' for logging.

The features of the SphinxQL log format compared to the default text one are as follows.

   * All request types should be logged. (This is still work in progress.)
   * Full statement data will be logged where possible.
   * Errors and warnings are logged.
   * The log should be automatically replayable via SphinxQL.
   * Additional performance counters (currently, per-agent distributed query times) are logged.

Every request (including both SphinxAPI and SphinxQL) must result in exactly one log line. All request types, including INSERT, CALL SNIPPETS, etc will eventually get logged, though as of the time of this writing that is still a work in progress. Every log line must be a valid SphinxQL statement that reconstructs the full request, except if the logged request is too big and needs shortening for performance reasons. Additional messages, counters, etc can be logged in the comments section after the request.

5.10. MySQL protocol support and SphinxQL
=========================================

Starting with version 0.9.9-rc2, the Sphinx searchd daemon supports the MySQL binary network protocol and can be accessed with the regular MySQL API. For instance, the 'mysql' CLI client program works well. Here's an example of querying Sphinx using the MySQL client:

| $ mysql -P 9306
| Welcome to the MySQL monitor.  Commands end with ; or \g.
| Your MySQL connection id is 1
| Server version: 0.9.9-dev (r1734)
|
| Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
|
| mysql> SELECT * FROM test1 WHERE MATCH('test')
|     -> ORDER BY group_id ASC OPTION ranker=bm25;
| +------+--------+----------+------------+
| | id   | weight | group_id | date_added |
| +------+--------+----------+------------+
| |    4 |   1442 |        2 | 1231721236 |
| |    2 |   2421 |      123 | 1231721236 |
| |    1 |   2421 |      456 | 1231721236 |
| +------+--------+----------+------------+
| 3 rows in set (0.00 sec)

Note that mysqld was not even running on the test machine. Everything was handled by searchd itself.

The new access method is supported in addition to the native APIs, which all still work perfectly well. In fact, both access methods can be used at the same time. Also, the native API is still the default access method. MySQL protocol support needs to be additionally configured.
This is a matter of a one-line config change, adding a new listener with mysql41 specified as the protocol:

| listen = localhost:9306:mysql41

Just supporting the protocol and not the SQL syntax would be useless, so Sphinx now also supports a subset of SQL that we dubbed SphinxQL. It supports standard querying of all the index types with SELECT, modifying RT indexes with INSERT, REPLACE, and DELETE, and much more. Full SphinxQL reference is available in Chapter 7, SphinxQL reference.

5.11. Multi-queries
===================

Multi-queries, or query batches, let you send multiple queries to Sphinx in one go (more formally, one network request).

Two API methods that implement the multi-query mechanism are AddQuery() and RunQueries(). You can also run multiple queries with SphinxQL, see Section 7.33. (In fact, a regular Query() call is internally implemented as a single AddQuery() call immediately followed by a RunQueries() call.) AddQuery() captures the current state of all the query settings set by previous API calls, and memorizes the query. RunQueries() actually sends all the memorized queries, and returns multiple result sets. There are no restrictions on the queries at all, except just a sanity check on the number of queries in a single batch (see Section 11.4.25).

Why use multi-queries? Generally, it all boils down to performance. First, by sending requests to searchd in a batch instead of one by one, you always save a bit by doing fewer network roundtrips. Second, and somewhat more important, sending queries in a batch enables searchd to perform certain internal optimizations. As new types of optimizations are being added over time, it generally makes sense to pack all the queries into batches where possible, so that simply upgrading Sphinx to a new version would automatically enable new optimizations. In the case when there aren't any possible batch optimizations to apply, queries will be processed one by one internally.

Why (or rather when) not use multi-queries? Multi-queries require all the queries in a batch to be independent, and sometimes they aren't. That is, sometimes query B is based on query A's results, and so can only be set up after executing query A. For instance, you might want to display results from a secondary index if and only if there were no results found in a primary index. Or maybe just specify an offset into the 2nd result set based on the amount of matches in the 1st result set. In those cases, you will have to use separate queries (or separate batches).

As of 0.9.10, there are two major optimizations to be aware of: common query optimization (available since 0.9.8); and common subtree optimization (available since 0.9.10).

Common query optimization means that searchd will identify all those queries in a batch where only the sorting and group-by settings differ, and only perform searching once. For instance, if a batch consists of 3 queries, all of them are for "ipod nano", but the 1st query requests the top-10 results sorted by price, the 2nd query groups by vendor ID and requests the top-5 vendors sorted by rating, and the 3rd query requests the max price, the full-text search for "ipod nano" will only be performed once, and its results will be reused to build 3 different result sets.
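Expressed in SphinxQL and sent as a single multi-statement batch (see Section 7.33), that 3-query example might look roughly like this; the 'products' index and the 'price', 'rating' and 'vendorid' attributes are placeholders invented for the sketch:

| SELECT * FROM products WHERE MATCH('ipod nano') ORDER BY price ASC LIMIT 10;
| SELECT *, COUNT(*) FROM products WHERE MATCH('ipod nano') GROUP BY vendorid ORDER BY rating DESC LIMIT 5;
| SELECT MAX(price) FROM products WHERE MATCH('ipod nano');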
So-called faceted searching is a particularly important case that benefits from this optimization. Indeed, faceted searching can be implemented by running a number of queries, one to retrieve the search results themselves, and a few other ones with the same full-text query but different group-by settings to retrieve all the required groups of results (top-3 authors, top-5 vendors, etc). And as long as the full-text query and filtering settings stay the same, common query optimization will trigger, and greatly improve performance.

Common subtree optimization is even more interesting. It lets searchd exploit similarities between batched full-text queries. It identifies common full-text query parts (subtrees) in all queries, and caches them between queries. For instance, look at the following query batch:

| barack obama president
| barack obama john mccain
| barack obama speech

There's a common two-word part ("barack obama") that can be computed only once, then cached and shared across the queries. And common subtree optimization does just that. Per-query cache size is strictly controlled by the subtree_docs_cache and subtree_hits_cache directives (so that caching all sixteen gazillions of documents that match "i am" does not exhaust the RAM and instantly kill your server).

Here's a code sample (in PHP) that fires the same query in 3 different sorting modes:

| require ( "sphinxapi.php" );
| $cl = new SphinxClient ();
| $cl->SetMatchMode ( SPH_MATCH_EXTENDED );
|
| $cl->SetSortMode ( SPH_SORT_RELEVANCE );
| $cl->AddQuery ( "the", "lj" );
| $cl->SetSortMode ( SPH_SORT_EXTENDED, "published desc" );
| $cl->AddQuery ( "the", "lj" );
| $cl->SetSortMode ( SPH_SORT_EXTENDED, "published asc" );
| $cl->AddQuery ( "the", "lj" );
| $res = $cl->RunQueries();

How to tell whether the queries in the batch were actually optimized? If they were, the respective query log will have a "multiplier" field that specifies how many queries were processed together:

| [Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/rel 747541 (0,20)] [lj] the
| [Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the
| [Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the

Note the "x3" field. It means that this query was optimized and processed in a sub-batch of 3 queries. For reference, this is how the regular log would look if the queries were not batched:

| [Sun Jul 12 15:18:17.062 2009] 0.059 sec [ext/0/rel 747541 (0,20)] [lj] the
| [Sun Jul 12 15:18:17.156 2009] 0.091 sec [ext/0/ext 747541 (0,20)] [lj] the
| [Sun Jul 12 15:18:17.250 2009] 0.092 sec [ext/0/ext 747541 (0,20)] [lj] the

Note how the per-query time in the multi-query case was improved by a factor of 1.5x to 2.3x, depending on the particular sorting mode. In fact, for both common query and common subtree optimizations, there were reports of 3x and even bigger improvements, and that's from production instances, not just synthetic tests.

5.12. Collations
================

Introduced to Sphinx in version 2.0.1-beta to supplement string sorting, collations essentially affect string attribute comparisons. They specify both the character set encoding and the strategy that Sphinx uses to compare strings when doing ORDER BY or GROUP BY with a string attribute involved.

String attributes are stored as is when indexing, and no character set or language information is attached to them. That's okay as long as Sphinx only needs to store and return the strings to the calling application verbatim. But when you ask Sphinx to sort by a string value, that request immediately becomes quite ambiguous.
First, single-byte (ASCII, or ISO-8859-1, or Windows-1251) strings need to be processed differently than UTF-8 ones, which may encode every character with a variable number of bytes. So we need to know the character set type in order to interpret the raw bytes as meaningful characters properly.

Second, we additionally need to know the language-specific string sorting rules. For instance, when sorting according to US rules in the en_US locale, the accented character 'ï' (small letter i with diaeresis) should be placed somewhere after 'z'. However, when sorting with French rules and the fr_FR locale in mind, it should be placed between 'i' and 'j'. And some other set of rules might choose to ignore accents altogether, allowing 'ï' and 'i' to be mixed arbitrarily.

Third, but not least, we might need case-sensitive sorting in some scenarios and case-insensitive sorting in others.

Collations combine all of the above: the character set, the language rules, and the case sensitivity. Sphinx currently provides the following four collations.

   1. libc_ci
   2. libc_cs
   3. utf8_general_ci
   4. binary

The first two collations rely on several standard C library (libc) calls and can thus support any locale that is installed on your system. They provide case-insensitive (_ci) and case-sensitive (_cs) comparisons respectively. By default they will use the C locale, effectively resorting to bytewise comparisons. To change that, you need to specify a different available locale using the collation_libc_locale directive. The list of locales available on your system can usually be obtained with the locale command:

| $ locale -a
| C
| en_AG
| en_AU.utf8
| en_BW.utf8
| en_CA.utf8
| en_DK.utf8
| en_GB.utf8
| en_HK.utf8
| en_IE.utf8
| en_IN
| en_NG
| en_NZ.utf8
| en_PH.utf8
| en_SG.utf8
| en_US.utf8
| en_ZA.utf8
| en_ZW.utf8
| es_ES
| fr_FR
| POSIX
| ru_RU.utf8
| ru_UA.utf8

The specific list of the system locales may vary. Consult your OS documentation if you need to install additional locales.

The utf8_general_ci and binary collations are built into Sphinx. The first one is a generic collation for UTF-8 data (without any so-called language tailoring); it should behave similarly to the utf8_general_ci collation in MySQL. The second one is a simple bytewise comparison.

Collation can be overridden via SphinxQL on a per-session basis using the SET collation_connection statement. All subsequent SphinxQL queries will use this collation. SphinxAPI and SphinxSE queries will use the server default collation, as specified in the collation_server configuration directive. Sphinx currently defaults to the libc_ci collation.

Collations should affect all string attribute comparisons, including those within ORDER BY and GROUP BY, so differently ordered or grouped results can be returned depending on the collation chosen.
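For example, switching a SphinxQL session to the built-in UTF-8 collation before sorting on a string attribute might look like this (a sketch; the index and attribute names are made up for the example):

| mysql> SET collation_connection = utf8_general_ci;
| mysql> SELECT * FROM myindex WHERE MATCH('test') ORDER BY title ASC;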
5.13. User-defined functions (UDF)
==================================

Starting with 2.0.1-beta, Sphinx supports User-Defined Functions, or UDF for short. They can be loaded and unloaded dynamically into searchd without having to restart the daemon, and used in expressions when searching. UDF features at a glance are as follows.

   * Functions can take integer (both 32-bit and 64-bit), float, string, or MVA arguments.
   * Functions can return integer or float values.
   * Functions can check the argument number, types, and names and raise errors.
   * Only simple functions (that is, non-aggregate ones) are currently supported.

User-defined functions need your OS to support dynamically loadable libraries (aka shared objects). Most of the modern OSes are eligible, including Linux, Windows, MacOS, Solaris, BSD and others. (The internal testing has been done on Linux and Windows.) The UDF libraries must reside in the directory specified by the plugin_dir directive, and the server must be configured to use workers = threads mode. Relative paths to the library files are not allowed. Once the library is successfully built and copied to the trusted location, you can then dynamically install and deinstall the functions using the CREATE FUNCTION and DROP FUNCTION statements respectively. A single library can contain multiple functions. A library gets loaded when you first install a function from it, and unloaded when you deinstall all the functions from that library.

The library functions that will implement a UDF visible to SQL statements need to follow the C calling convention, and a simple naming convention. The Sphinx source distribution provides a sample file, src/udfexample.c, that defines a few simple functions showing how to work with integer, string, and MVA arguments; you can use that one as a foundation for your new functions. It includes the UDF interface header file, src/sphinxudf.h, that defines the required types and structures. The sphinxudf.h header is standalone, that is, it does not require any other parts of the Sphinx source to compile.

Every function that you intend to use in your SELECT statements requires at least two corresponding C/C++ functions: the initialization call, and the function call itself. You can also optionally define the deinitialization call if your function requires any post-query cleanup. (For instance, if you were allocating any memory in either the initialization call or the function calls.) Function names in SQL are case insensitive; C function names are not, and they need to be all lowercase. Mistakes in the function names prevent UDFs from loading. You also have to pay special attention to the calling convention used when compiling, the list and types of arguments, and the return type of the main function call. Mistakes in either are likely to crash the server, or at best produce unexpected results. Last but not least, all functions need to be thread-safe.

Let's assume for the sake of example that your UDF name in SphinxQL will be MYFUNC. The initialization, main, and deinitialization functions would then need to be named as follows and take the following arguments:

| /// initialization function
| /// called once during query initialization
| /// returns 0 on success
| /// returns non-zero and fills error_message buffer on failure
| int myfunc_init ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args,
|     char * error_message );
|
| /// main call function
| /// returns the computed value
| /// writes non-zero value into error_flag to indicate errors
| RETURN_TYPE myfunc ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args,
|     char * error_flag );
|
| /// optional deinitialization function
| /// called once to cleanup once query processing is done
| void myfunc_deinit ( SPH_UDF_INIT * init );

The two mentioned structures, SPH_UDF_INIT and SPH_UDF_ARGS, are defined in the src/sphinxudf.h interface header and documented there. RETURN_TYPE of the main function must be one of the following:

   * int for the functions that return INT.
   * sphinx_int64_t for the functions that return BIGINT.
   * float for the functions that return FLOAT.
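Once the library exports functions that follow these conventions, installing, using, and removing the UDF from SphinxQL might look like this (a sketch; the udfexample.so library name, the INT return type, and the test1 index are assumptions for the example and must match your actual build and configuration):

| mysql> CREATE FUNCTION myfunc RETURNS INT SONAME 'udfexample.so';
| mysql> SELECT id, myfunc(group_id) AS f FROM test1 WHERE MATCH('test');
| mysql> DROP FUNCTION myfunc;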
The calling sequence is as follows. myfunc_init() is called once when initializing the query. It can return a non-zero code to indicate a failure; in that case the query is not executed, and the error message from the error_message buffer is returned. Otherwise, myfunc() is called for every row, and myfunc_deinit() is then called when the query ends. myfunc() can indicate an error by writing a non-zero byte value to error_flag; in that case, it will not be called for any subsequent rows, and a default value of 0 will be substituted. Sphinx might or might not choose to terminate such queries early; neither behavior is currently guaranteed.

Chapter 6. Command line tools reference
=======================================

Table of Contents

6.1. indexer command reference
6.2. searchd command reference
6.3. search command reference
6.4. spelldump command reference
6.5. indextool command reference
6.6. wordbreaker command reference

As mentioned elsewhere, Sphinx is not a single program called 'sphinx', but a collection of 4 separate programs which collectively form Sphinx. This section covers these tools and how to use them.

6.1. indexer command reference
==============================

indexer is the first of the two principal tools that come as part of Sphinx. Invoked from either the command line directly, or as part of a larger script, indexer is solely responsible for gathering the data that will be searchable.

The calling syntax for indexer is as follows:

| indexer [OPTIONS] [indexname1 [indexname2 [...]]]

Essentially you would list the different possible indexes (that you would later make available to search) in sphinx.conf, so when calling indexer, as a minimum you need to tell it which index (or indexes) you want to index. If sphinx.conf contained details on 2 indexes, mybigindex and mysmallindex, you could do the following:

| $ indexer mybigindex
| $ indexer mysmallindex mybigindex

As part of the configuration file, sphinx.conf, you specify one or more indexes for your data. You might call indexer to reindex one of them, ad-hoc, or you can tell it to process all indexes - you are not limited to calling just one, or all at once, you can always pick some combination of the available indexes.

The majority of the options for indexer are given in the configuration file, however there are some options you might need to specify on the command line as well, as they can affect how the indexing operation is performed. These options are:

   * --config (-c for short) tells indexer to use the given file as its configuration. Normally, it will look for sphinx.conf in the installation directory (e.g. /usr/local/sphinx/etc/sphinx.conf if installed into /usr/local/sphinx), followed by the current directory you are in when calling indexer from the shell. This is most useful in shared environments where the binary files are installed somewhere like /usr/local/sphinx/ but you want to provide users with the ability to make their own custom Sphinx set-ups, or if you want to run multiple instances on a single server. In cases like those you could allow them to create their own sphinx.conf files and pass them to indexer with this option. For example:

| $ indexer --config /home/myuser/sphinx.conf myindex

   * --all tells indexer to update every index listed in sphinx.conf, instead of listing individual indexes. This would be useful in small configurations, or cron-type or maintenance jobs where the entire index set will get rebuilt each day, or week, or whatever period is best. Example usage:

| $ indexer --config /home/myuser/sphinx.conf --all

   * --rotate is used for rotating indexes.
Unless you have the situation where you can take the search function offline without troubling users, you will almost certainly need to keep search running whilst indexing new documents. --rotate creates a second index, parallel to the first (in the same place, simply including .new in the filenames). Once complete, indexer notifies searchd by sending the SIGHUP signal, and searchd will attempt to rename the indexes (renaming the existing ones to include .old and renaming the .new ones to replace them), and then start serving from the newer files. Depending on the setting of seamless_rotate, there may be a slight delay in being able to search the newer indexes. Example usage:

| $ indexer --rotate --all

   * --quiet tells indexer not to output anything, unless there is an error. Again, this is mostly used for cron-type, or other script jobs where the output is irrelevant or unnecessary, except in the event of some kind of error. Example usage:

| $ indexer --rotate --all --quiet

   * --noprogress does not display progress details as they occur; instead, the final status details (such as documents indexed, speed of indexing and so on) are only reported at completion of indexing. In instances where the script is not being run on a console (or 'tty'), this will be on by default. Example usage:

| $ indexer --rotate --all --noprogress

   * --buildstops reviews the index source, as if it were indexing the data, and produces a list of the terms that are being indexed. In other words, it produces a list of all the searchable terms that are becoming part of the index. Note: it does not update the index in question, it simply processes the data 'as if' it were indexing, including running queries defined with sql_query_pre or sql_query_post. outputfile.txt will contain the list of words, one per line, sorted by frequency with the most frequent first, and N specifies the maximum number of words that will be listed; if N is large enough to encompass every word in the index, all of them will be returned. Such a dictionary list could be used for client application features around "Did you mean..." functionality, usually in conjunction with --buildfreqs, below. Example:

| $ indexer myindex --buildstops word_freq.txt 1000

This would produce a document in the current directory, word_freq.txt, with the 1,000 most common words in 'myindex', ordered by most common first. Note that the file will pertain to the last index indexed when specified with multiple indexes or --all (i.e. the last one listed in the configuration file).

   * --buildfreqs works with --buildstops (and is ignored if --buildstops is not specified). As --buildstops provides the list of words used within the index, --buildfreqs adds the quantity present in the index, which would be useful in establishing whether certain words should be considered stopwords if they are too prevalent. It will also help with developing "Did you mean..." features where you can see how much more common a given word is compared to another, similar one. Example:

| $ indexer myindex --buildstops word_freq.txt 1000 --buildfreqs

This would produce the word_freq.txt as above, however after each word would be the number of times it occurred in the index in question.

   * --merge is used for physically merging indexes together, for example if you have a main+delta scheme, where the main index rarely changes, but the delta index is rebuilt frequently, and --merge would be used to combine the two.
The operation moves from right to left - the contents of src-index get examined and physically combined with the contents of dst-index, and the result is left in dst-index. In pseudo-code, it might be expressed as: dst-index += src-index

An example:

| $ indexer --merge main delta --rotate

In the above example, where main is the master, rarely modified index, and delta is the more frequently modified one, you might use the above to call indexer to combine the contents of the delta into the main index and rotate the indexes.

   * --merge-dst-range runs the given filter range upon merging. Specifically, as the merge is applied to the destination index (as part of --merge, and is ignored if --merge is not specified), indexer will also filter the documents ending up in the destination index, and only documents that pass the given filter will end up in the final index. This could be used, for example, in an index where there is a 'deleted' attribute, where 0 means 'not deleted'. Such an index could be merged with:

| $ indexer --merge main delta --merge-dst-range deleted 0 0

Any documents marked as deleted (value 1) would be removed from the newly-merged destination index. It can be added several times to the command line, to add successive filters to the merge, all of which must be met in order for a document to become part of the final index.

   * --merge-killlists (and its shorter alias --merge-klists) changes the way kill lists are processed when merging indexes. By default, both kill lists get discarded after a merge. That supports the most typical main+delta merge scenario. With this option enabled, however, kill lists from both indexes get concatenated and stored into the destination index. Note that a source (delta) index kill list will be used to suppress rows from a destination (main) index at all times.

   * --keep-attrs (added in version 2.1.1-beta) allows reusing existing attributes on reindexing. Whenever the index is rebuilt, each new document id is checked for presence in the "old" index, and if it already exists, its attributes are transferred to the "new" index; if not found, attributes from the new index are used. If the user has updated attributes in the index, but not in the actual source used for the index, all updates will be lost when reindexing; using --keep-attrs enables saving the updated attribute values from the previous index.

   * --dump-rows dumps rows fetched by SQL source(s) into the specified file, in a MySQL-compatible syntax. Resulting dumps are the exact representation of the data as received by indexer and help to reproduce indexing-time issues.

   * --verbose guarantees that every row that caused problems indexing (duplicate, zero, or missing document ID; or file field IO issues; etc) will be reported. By default, this option is off, and problem summaries may be reported instead.

   * --sighup-each is useful when you are rebuilding many big indexes, and want each one rotated into searchd as soon as possible. With --sighup-each, indexer will send a SIGHUP signal to searchd after successfully completing the work on each index. (The default behavior is to send a single SIGHUP after all the indexes were built.)

   * --print-queries prints out the SQL queries that indexer sends to the database, along with SQL connection and disconnection events. That is useful to diagnose and fix problems with SQL sources.
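For instance, a main+delta setup driven from cron might combine the options above roughly like this (an illustrative sketch only; the 'main' and 'delta' index names, the install path, and the schedule are assumptions):

| # rebuild the delta index every 5 minutes, and everything once a night
| */5 * * * *   /usr/local/sphinx/bin/indexer --rotate --quiet delta
| 30 3 * * *    /usr/local/sphinx/bin/indexer --rotate --quiet --all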
6.2. searchd command reference
==============================

searchd is the second of the two principal tools that come as part of Sphinx. searchd is the part of the system which actually handles searches; it functions as a server and is responsible for receiving queries, processing them and returning a dataset back to the different APIs for client applications.

Unlike indexer, searchd is not designed to be run either from a regular script or by command-line calling, but instead either as a daemon to be called from init.d (on Unix/Linux type systems) or to be called as a service (on Windows-type systems), so not all of the command line options will always apply, and some will be build-dependent.

Calling searchd is simply a case of:

| $ searchd [OPTIONS]

The options available to searchd on all builds are:

   * --help (-h for short) lists all of the parameters that can be called in your particular build of searchd.

   * --config (-c for short) tells searchd to use the given file as its configuration, just as with indexer above.

   * --stop is used to asynchronously stop searchd, using the details of the PID file as specified in the sphinx.conf file, so you may also need to confirm to searchd which configuration file to use with the --config option. NB, calling --stop will also make sure any changes applied to the indexes with UpdateAttributes() will be applied to the index files themselves. Example:

| $ searchd --config /home/myuser/sphinx.conf --stop

   * --stopwait is used to synchronously stop searchd. --stop essentially tells the running instance to exit (by sending it a SIGTERM) and then immediately returns. --stopwait will also attempt to wait until the running searchd instance actually finishes the shutdown (eg. saves all the pending attribute changes) and exits. Example:

| $ searchd --config /home/myuser/sphinx.conf --stopwait

Possible exit codes are as follows:

   * 0 on success;
   * 1 if connection to running searchd daemon failed;
   * 2 if daemon reported an error during shutdown;
   * 3 if daemon crashed during shutdown.

   * --status is used to query the running searchd instance's status, using the connection details from the (optionally) provided configuration file. It will try to connect to the running instance using the first configured UNIX socket or TCP port. On success, it will query for a number of status and performance counter values and print them. You can use the Status() API call to access the very same counters from your application. Examples:

| $ searchd --status
| $ searchd --config /home/myuser/sphinx.conf --status

   * --pidfile is used to explicitly force using a PID file (where the searchd process number is stored) despite any other debugging options that say otherwise (for instance, --console). This is a debugging option.

| $ searchd --console --pidfile

   * --console is used to force searchd into console mode; typically it will be running as a conventional server application, and will aim to dump information into the log files (as specified in sphinx.conf). Sometimes though, when debugging issues in the configuration or the daemon itself, or trying to diagnose hard-to-track-down problems, it may be easier to force it to dump information directly to the console/command line from which it is being called. Running in console mode also means that the process will not be forked (so searches are done in sequence) and logs will not be written to. (It should be noted that console mode is not the intended method for running searchd.) You can invoke it as such:

| $ searchd --config /home/myuser/sphinx.conf --console

   * --logdebug, --logdebugv, and --logdebugvv options enable additional debug output in the daemon log.
They differ by the logging verboseness level. These are debugging options; they pollute the log a lot, and thus they should not normally be enabled. (The normal use case for these is to enable them temporarily on request, to assist with some particularly complicated debugging session.)

   * --iostats is used in conjunction with the logging options (the query_log will need to have been activated in sphinx.conf) to provide more detailed information on a per-query basis as to the input/output operations carried out in the course of that query, with a slight performance hit and of course bigger logs. Further details are available under the query log format section. You might start searchd thus:

| $ searchd --config /home/myuser/sphinx.conf --iostats

   * --cpustats is used to provide an actual CPU time report (in addition to wall time) in both the query log file (for every given query) and the status report (aggregated). It depends on the clock_gettime() system call and might therefore be unavailable on certain systems. You might start searchd thus:

| $ searchd --config /home/myuser/sphinx.conf --cpustats

   * --port portnumber (-p for short) is used to specify the port that searchd should listen on, usually for debugging purposes. This will usually default to 9312, but sometimes you need to run it on a different port. Specifying it on the command line will override anything specified in the configuration file. The valid range is 0 to 65535, but ports numbered 1024 and below usually require a privileged account in order to run. An example of usage:

| $ searchd --port 9313

   * --listen ( address ":" port | port | path ) [ ":" protocol ] (or -l for short) works like --port, but allows you to specify not only the port, but a full path, i.e. an IP address and port, or a Unix-domain socket path, that searchd will listen on. In other words, you can specify either an IP address (or hostname) and port number, or just a port number, or a Unix socket path. If you specify a port number but not the address, searchd will listen on all network interfaces. A Unix path is identified by a leading slash. As the last parameter you can also specify a protocol handler (listener) to be used for connections on this socket. Supported protocol values are 'sphinx' (Sphinx 0.9.x API protocol) and 'mysql41' (MySQL protocol used since 4.1 up to at least 5.1).

   * --index (or -i for short) forces this instance of searchd to only serve the specified index. Like --port, above, this is usually for debugging purposes; more long-term changes would generally be applied to the configuration file itself. Example usage:

| $ searchd --index myindex

   * --strip-path strips the path names from all the file names referenced from the index (stopwords, wordforms, exceptions, etc). This is useful for picking up indexes built on another machine with possibly different path layouts.

   * --replay-flags= switch, added in version 2.0.2-beta, can be used to specify a list of extra binary log replay options. The supported options are:

      * accept-desc-timestamp, ignore descending transaction timestamps and replay such transactions anyway (the default behavior is to exit with an error).

Example:

| $ searchd --replay-flags=accept-desc-timestamp

There are some options for searchd that are specific to Windows platforms, concerning handling as a service; they are only available in Windows binaries. Note that on Windows searchd will default to --console mode, unless you install it as a service.
   * --install installs searchd as a service into the Microsoft Management Console (Control Panel / Administrative Tools / Services). Any other parameters specified on the command line alongside --install will also become part of the command line on future starts of the service. For example, as part of calling searchd, you will likely also need to specify the configuration file with --config, and you would do that as well as specifying --install. Once called, the usual start/stop facilities will become available via the management console, so any methods you could use for starting, stopping and restarting services would also apply to searchd. Example:

| C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
|     --config C:\Sphinx\sphinx.conf

If you wanted to have the I/O stats every time you started searchd, you would specify its option on the same line as the --install command thus:

| C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
|     --config C:\Sphinx\sphinx.conf --iostats

   * --delete removes the service from the Microsoft Management Console and other places where services are registered, after it was previously installed with --install. Note, this does not uninstall the software or delete the indexes. It means the service will not be called from the services system, and will not be started on the machine's next start. If currently running as a service, the current instance will not be terminated (until the next reboot, or until searchd is called with --stop). If the service was installed with a custom name (with --servicename), the same name will need to be specified with --servicename when calling to uninstall. Example:

| C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --delete

   * --servicename applies the given name to searchd when installing or deleting the service, as it would appear in the Management Console; this will default to searchd, but if being deployed on servers where multiple administrators may log into the system, or a system with multiple searchd instances, a more descriptive name may be applicable. Note that unless combined with --install or --delete, this option does not do anything. Example:

| C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
|     --config C:\Sphinx\sphinx.conf --servicename SphinxSearch

   * --ntservice is the option that is passed by the Management Console to searchd to invoke it as a service on Windows platforms. It would not normally be necessary to call this directly; this would normally be called by Windows when the service is started, although if you wanted to call this as a regular service from the command line (as the complement to --console) you could do so in theory.

   * --safetrace forces searchd to only use the system backtrace() call in crash reports. In certain (rare) scenarios, this might be a "safer" way to get that report. This is a debugging option.

   * --nodetach switch (Linux only) tells searchd not to detach into the background. This will also cause log entries to be printed out to the console. Query processing operates as usual. This is a debugging option.

Last but not least, like every other daemon, searchd supports a number of signals.

SIGTERM
    Initiates a clean shutdown. New queries will not be handled, but queries that are already started will not be forcibly interrupted.

SIGHUP
    Initiates index rotation. Depending on the value of the seamless_rotate setting, new queries might be shortly stalled; clients will receive temporary errors.

SIGUSR1
    Forces reopen of the searchd log and query log files, letting you implement log file rotation.
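For example, a simple log rotation driven by SIGUSR1 might look like this (a sketch; the log and PID file locations are assumptions and depend on your sphinx.conf settings):

| $ mv /usr/local/sphinx/var/log/searchd.log /usr/local/sphinx/var/log/searchd.log.old
| $ mv /usr/local/sphinx/var/log/query.log /usr/local/sphinx/var/log/query.log.old
| $ kill -USR1 `cat /usr/local/sphinx/var/log/searchd.pid`

searchd will then reopen both logs under their original names and keep running.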
6.3. search command reference
=============================

search is one of the helper tools within the Sphinx package. Whereas searchd is responsible for searches in a server-type environment, search is aimed at testing the index quickly from the command line, without building a framework to make the connection to the server and process its response.

Note: search is not intended to be deployed as part of a client application; it is strongly recommended you do not write an interface to search instead of searchd, and none of the bundled client APIs support this method. (In any event, search will reload files each time, whereas searchd will cache them in memory for performance.)

That said, many types of query that you could build in the APIs could also be made with search, however for very complex searches it may be easier to construct them using a small script and the corresponding API. Additionally, some newer features may be available in the searchd system that have not yet been brought into search.

The calling syntax for search is as follows:

| search [OPTIONS] word1 [word2 [word3 [...]]]

When calling search, it is not necessary to have searchd running; simply make sure that the account running the search program has read access to the configuration file and the index files.

The default behaviour is to apply a search for word1 (AND word2 AND word3... as specified) to all fields in all indexes as given in the configuration file. If constructing the equivalent in the API, this would be equivalent to passing SPH_MATCH_ALL to SetMatchMode, and specifying * as the indexes to query as part of Query.

There are many options available to search. Firstly, the general options:

   * --config (-c for short) tells search to use the given file as its configuration, just as with indexer above.

   * --index (-i for short) tells search to limit searching to the specified index only; normally it would attempt to search all of the physical indexes listed in sphinx.conf, not any distributed ones.

   * --stdin tells search to accept the query from the standard input, rather than the command line. This can be useful for testing purposes whereby you could feed input via pipes and from scripts.

Options for setting matches:

   * --any (-a for short) changes the matching mode to match any of the words as part of the query (word1 OR word2 OR word3). In the API this would be equivalent to passing SPH_MATCH_ANY to SetMatchMode.

   * --phrase (-p for short) changes the matching mode to match all of the words as part of the query, and do so in the phrase given (not including punctuation). In the API this would be equivalent to passing SPH_MATCH_PHRASE to SetMatchMode.

   * --boolean (-b for short) changes the matching mode to Boolean matching. Note if using Boolean syntax matching on the command line, you may need to escape the symbols (with a backslash) to avoid the shell/command line processor applying them, such as ampersands being escaped on a Unix/Linux system to avoid it forking to the search process, although this can be resolved by using --stdin, as noted above. In the API this would be equivalent to passing SPH_MATCH_BOOLEAN to SetMatchMode.

   * --ext (-e for short) changes the matching mode to extended matching, which provides various text querying operators. In the API this would be equivalent to passing SPH_MATCH_EXTENDED to SetMatchMode.

   * --filter (-f for short) filters the results so that only documents where the given attribute (attr) matches the given value (v) are returned.
For example, --filter deleted 0 only matches documents with an attribute called 'deleted' where its value is 0. You can also add multiple filters on the command line, by specifying --filter multiple times, however if you apply a second filter to an attribute it will override the first defined filter.

Options for handling the results:

   * --limit count (-l count for short) limits the total number of matches returned to the number given. If a 'group' is specified, this will be the number of grouped results. This defaults to 20 results if not specified (as with the APIs).

   * --offset (-o for short) offsets the result list by the number of places set by the count; this would be used for pagination through results, where if you have 20 results per 'page', the second page would begin at offset 20, the third page at offset 40, etc.

   * --group (-g for short) specifies that results should be grouped together based on the attribute specified. Like the GROUP BY clause in SQL, it will combine all results where the attribute given matches, and return a set of results where each returned result is the best from each group. Unless otherwise specified, this will be the best match on relevance.

   * --groupsort (-gs for short) instructs that when results are grouped with --group, the given expression shall determine the order of the groups. Note, this does not specify which is the best item within the group, only the order in which the groups themselves shall be returned.

   * --sortby (-s for short) specifies that results should be sorted by the given clause. This allows you to specify the order you wish results to be presented in, ordering by different columns. For example, you could say --sortby "@weight DESC entrytime DESC" to sort entries first by weight (or relevance) and, where two or more entries have the same weight, to then sort by the time with the highest time (newest) first. You will usually need to put the items in quotes (--sortby "@weight DESC") or use commas (--sortby @weight,DESC) to avoid the items being treated separately. Additionally, like the regular sorting modes, if --group (grouping) is being used, this will state how to establish the best match within each group.

   * --sortexpr expr (-S expr for short) specifies that the search results should be presented in an order determined by an arithmetic expression, stated in expr. For example: --sortexpr "@weight + ( user_karma + ln(pageviews) )*0.1" (again noting that this will have to be quoted to avoid the shell dealing with the asterisk). Extended sort mode is discussed in more detail under the SPH_SORT_EXTENDED entry under the Sorting modes section of the manual.

   * --sort=date specifies that the results should be sorted by descending (i.e. most recent first) date. This requires that there is an attribute in the index that is set as a timestamp.

   * --rsort=date specifies that the results should be sorted by ascending (i.e. oldest first) date. This requires that there is an attribute in the index that is set as a timestamp.

   * --sort=ts specifies that the results should be sorted by timestamp in groups; it will return all of the documents whose timestamp is within the last hour, sorted within that bracket by relevance. After that, it would return the documents from the last day, sorted by relevance, then the last week and then the last month. It is discussed in more detail under the SPH_SORT_TIME_SEGMENTS entry under the Sorting modes section of the manual.
Other options:

   * --noinfo (-q for short) instructs search not to look up data in your SQL database. Specifically, for debugging with MySQL and search, you can provide it with a query to look up the full article based on the returned document ID. It is explained in more detail under the sql_query_info directive.

6.4. spelldump command reference
================================

spelldump is one of the helper tools within the Sphinx package.

It is used to extract the contents of a dictionary file that uses ispell or MySpell format, which can help build word lists for wordforms - all of the possible forms are pre-built for you.

Its general usage is:

| spelldump [options] <dictionary> <affix> [result] [locale-name]

The two main parameters are the dictionary's main file and its affix file; usually these are named as [language-prefix].dict and [language-prefix].aff and will be available with most common Linux distributions, as well as various places online.

[result] specifies where the dictionary data should be output to, and [locale-name] additionally specifies the locale details you wish to use.

There is an additional option, -c [file], which specifies a file for case conversion details.

Examples of its usage are:

| spelldump en.dict en.aff
| spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
| spelldump ru.dict ru.aff ru.txt .1251

The results file will contain a list of all the words in the dictionary in alphabetical order, output in the format of a wordforms file, which you can use to customise for your specific circumstances. An example of the result file:

| zone > zone
| zoned > zoned
| zoning > zoning

6.5. indextool command reference
================================

indextool is one of the helper tools within the Sphinx package, introduced in version 0.9.9-rc2. It is used to dump miscellaneous debug information about the physical index. (Additional functionality such as index verification is planned in the future, hence the indextool name rather than just indexdump.) Its general usage is:

| indextool <command> [options]

These options apply to all commands:

   * --config (-c for short) overrides the built-in config file names.
   * --quiet (-q for short) keeps indextool quiet - it will not output its banner, etc.

The commands are as follows:

   * --checkconfig just loads and verifies the config file to check if it's valid, without syntax errors. This option was added in version 2.1.1-beta.

   * --build-infixes INDEXNAME builds infixes for an existing dict=keywords index (upgrades .sph, .spi in place). You can use this option for legacy index files that already use dict=keywords, but now need to support infix searching too; updating the index files with indextool may prove easier or faster than regenerating them from scratch with indexer. This option was added in version 2.1.1-beta.

   * --dumpheader FILENAME.sph quickly dumps the provided index header file without touching any other index files or even the configuration file. The report provides a breakdown of all the index settings, in particular the entire attribute and field list. Prior to 0.9.9-rc2, this command was present in the CLI search utility.

   * --dumpconfig FILENAME.sph dumps the index definition from the given index header file in (almost) compliant sphinx.conf file format. Added in version 2.0.1-beta.

   * --dumpheader INDEXNAME dumps the index header by index name, looking up the header path in the configuration file.

   * --dumpdict INDEXNAME dumps the index dictionary. This was added in version 2.1.1-beta.

   * --dumpdocids INDEXNAME dumps document IDs by index name.
It takes the data from the attribute (.spa) file and therefore requires docinfo=extern to work.

   * --dumphitlist INDEXNAME KEYWORD dumps all the hits (occurrences) of a given keyword in a given index, with the keyword specified as text.

   * --dumphitlist INDEXNAME --wordid ID dumps all the hits (occurrences) of a given keyword in a given index, with the keyword specified as an internal numeric ID.

   * --fold INDEXNAME OPTFILE is useful to see how the tokenizer actually processes its input. You can feed indextool with text from a file if specified, or from stdin otherwise. The output will contain spaces instead of separators (according to your charset_table settings) and lowercased letters in words.

   * --htmlstrip INDEXNAME filters stdin using the HTML stripper settings for a given index, and prints the filtering results to stdout. Note that the settings will be taken from sphinx.conf, and not the index header.

   * --morph INDEXNAME applies morphology to the given stdin and prints the result to stdout.

   * --check INDEXNAME checks the index data files for consistency errors that might be introduced either by bugs in indexer and/or hardware faults. Starting with version 2.1.1-beta, --check also works on RT indexes, RAM and disk chunks.

   * --strip-path strips the path names from all the file names referenced from the index (stopwords, wordforms, exceptions, etc). This is useful for checking indexes built on another machine with possibly different path layouts.

   * --optimize-rt-klists optimizes the kill list memory use in the disk chunk of a given RT index. That is a one-off optimization intended for rather old RT indexes, created by development versions prior to the 1.10-beta release. As of 1.10-beta, this kill list optimization (purging) should happen automatically, and there should never be a need to use this option.

6.6. wordbreaker command reference
==================================

wordbreaker is one of the helper tools within the Sphinx package, introduced in version 2.1.1-beta. It is used to split compound words, as are usual in URLs, into their component words. For example, this tool can split "lordoftherings" into its four component words, or "http://manofsteel.warnerbros.com" into "man of steel warner bros". This helps searching, without requiring prefixes or infixes: searching for "sphinx" wouldn't match "sphinxsearch", but if you break the compound word and index the separate components, you'll get a match without the cost of the larger index files that prefix and infix indexing require.

Examples of its usage are:

| echo manofsteel | bin/wordbreaker -dict dict.txt split

The input stream will be separated into words using the -dict dictionary file. (The dictionary should match the language of the compound word.) The split command breaks words from the standard input, and outputs the result to the standard output. There are also test and bench commands that let you test the splitting quality and benchmark the splitting functionality.

Wordbreaker needs a dictionary to recognize individual substrings within a string. To differentiate between different guesses, it uses the relative frequency of each word in the dictionary: higher frequency means higher split probability. You can generate such a file using the indexer tool, as in

| indexer --buildstops dict.txt 100000 --buildfreqs myindex -c /path/to/sphinx.conf

which will write the 100,000 most frequent words, along with their counts, from myindex into dict.txt. The output file is a text file, so you can edit it by hand, if need be, to add or remove words.
See http://sphinxsearch.com/blog/2013/01/29/a-new-tool-in-the-trunk-wordbreaker/ for more on this tool.

Chapter 7. SphinxQL reference
=============================

Table of Contents

7.1. SELECT syntax
7.2. SELECT @@system_variable syntax
7.3. SHOW META syntax
7.4. SHOW WARNINGS syntax
7.5. SHOW STATUS syntax
7.6. INSERT and REPLACE syntax
7.7. REPLACE syntax
7.8. DELETE syntax
7.9. SET syntax
7.10. SET TRANSACTION syntax
7.11. BEGIN, COMMIT, and ROLLBACK syntax
7.12. BEGIN syntax
7.13. ROLLBACK syntax
7.14. CALL SNIPPETS syntax
7.15. CALL KEYWORDS syntax
7.16. SHOW TABLES syntax
7.17. DESCRIBE syntax
7.18. CREATE FUNCTION syntax
7.19. DROP FUNCTION syntax
7.20. SHOW VARIABLES syntax
7.21. SHOW COLLATION syntax
7.22. SHOW CHARACTER SET syntax
7.23. UPDATE syntax
7.24. ATTACH INDEX syntax
7.25. FLUSH RTINDEX syntax
7.26. FLUSH RAMCHUNK syntax
7.27. TRUNCATE RTINDEX syntax
7.28. SHOW AGENT STATUS
7.29. SHOW PROFILE syntax
7.30. SHOW INDEX STATUS syntax
7.31. OPTIMIZE INDEX syntax
7.32. SHOW PLAN syntax
7.33. Multi-statement queries
7.34. Comment syntax
7.35. List of SphinxQL reserved keywords
7.36. SphinxQL upgrade notes, version 2.0.1-beta

SphinxQL is our SQL dialect that exposes all of the search daemon functionality using a standard SQL syntax with a few Sphinx-specific extensions. Everything available via the SphinxAPI is also available via SphinxQL but not vice versa; for instance, writes into RT indexes are only available via SphinxQL. This chapter documents the supported SphinxQL statement syntax.

7.1. SELECT syntax
==================

| SELECT
|     select_expr [, select_expr ...]
|     FROM index [, index2 ...]
|     [WHERE where_condition]
|     [GROUP BY {col_name | expr_alias} [, {col_name | expr_alias}]]
|     [WITHIN GROUP ORDER BY {col_name | expr_alias} {ASC | DESC}]
|     [ORDER BY {col_name | expr_alias} {ASC | DESC} [, ...]]
|     [LIMIT [offset,] row_count]
|     [OPTION opt_name = opt_value [, ...]]

The SELECT statement was introduced in version 0.9.9-rc2. Its syntax is based upon regular SQL but adds several Sphinx-specific extensions and has a few omissions (such as (currently) missing support for JOINs). Specifically,

   * Column list clause. Column names, arbitrary expressions, and star ('*') are all allowed (ie. SELECT @id, group_id*123+456 AS expr1 FROM test1 will work). Unlike in regular SQL, all computed expressions must be aliased with a valid identifier. Starting with version 2.0.1-beta, AS is optional. Special names such as @id and @weight should currently be used with a leading at-sign. This at-sign requirement will be lifted in the future.

   * EXIST() function (added in version 2.1.1-beta) is supported. EXIST ( "attr-name", default-value ) replaces non-existent columns with default values. It returns either the value of an attribute specified by 'attr-name', or 'default-value' if that attribute does not exist. As of 2.1.1-beta it does not support STRING or MVA attributes. This function is handy when you are searching through several indexes with different schemas.

| SELECT *, EXIST('gid', 6) as cnd FROM i1, i2 WHERE cnd>5

   * SNIPPET() function (added in version 2.1.1-beta) is supported. This is a wrapper around the snippets functionality, similar to what is available via CALL SNIPPETS. It takes two arguments: the text to highlight, and a query. The intended use is as follows:

| SELECT id, SNIPPET(myUdf(id), "my.query")
| FROM myIndex WHERE MATCH("my.query")

where myUdf() would be a UDF that fetches a document by its ID from some external storage.
This enables applications to fetch the entire result set directly from Sphinx in one query, without having to separately fetch the documents in the application and then send them back to Sphinx for highlighting. SNIPPET() is a so-called "post limit" function, meaning that computing snippets is postponed not just until the entire final result set is ready, but even until after the LIMIT clause is applied. For example, with a LIMIT 20,10 clause, SNIPPET() will be called at most 10 times.

* FROM clause. FROM clause should contain the list of indexes to search through. Unlike in regular SQL, a comma means an enumeration of full-text indexes, as in the Query() API call, rather than a JOIN. Index names must follow the rules for C identifiers.

* WHERE clause. This clause maps both to the full-text query and to filters. Comparison operators (=, !=, <, >, <=, >=), IN, AND, NOT, and BETWEEN are all supported and map directly to filters. OR is not supported yet but will be in the future. MATCH('query') is supported and maps to the full-text query. The query will be interpreted according to full-text query language rules. There must be at most one MATCH() in the clause. Starting with version 2.0.1-beta, the {col_name | expr_alias} [NOT] IN @uservar condition syntax is supported. (Refer to Section 7.9, "SET syntax" for a discussion of global user variables.)

* GROUP BY clause. Supports grouping by multiple columns or computed expressions:

| SELECT *, group_id*1000+article_type AS gkey FROM example GROUP BY gkey
| SELECT id FROM products GROUP BY region, price

Implicit grouping is supported when using aggregate functions without specifying a GROUP BY clause. Consider these two queries; the second is the explicit equivalent of the first:

| SELECT MAX(id), MIN(id), COUNT(*) FROM books
| SELECT MAX(id), MIN(id), COUNT(*), 1 AS grp FROM books GROUP BY grp

Aggregate functions (AVG(), MIN(), MAX(), SUM()) in the column list clause are supported. Arguments to aggregate functions can be either plain attributes or arbitrary expressions. COUNT(*) is implicitly supported, as using GROUP BY will add an @count column to the result set. Explicit support might be added in the future. COUNT(DISTINCT attr) is supported. Currently there can be at most one COUNT(DISTINCT) per query, and its argument needs to be an attribute. Both current restrictions on COUNT(DISTINCT) might be lifted in the future.

A special GROUPBY() function is also supported. It returns the GROUP BY key (and replaces the deprecated @groupby magic variable). That is particularly useful when grouping by an MVA value, in order to pick the specific value that was used to create the current group.

| SELECT *, AVG(price) AS avgprice, COUNT(DISTINCT storeid), GROUPBY()
| FROM products
| WHERE MATCH('ipod')
| GROUP BY vendorid

Starting with 2.0.1-beta, GROUP BY on a string attribute is supported, with respect for the current collation (see Section 5.12, "Collations").

You can sort the result set by (an alias of) the aggregate value.

| SELECT group_id, MAX(id) AS max_id
| FROM my_index WHERE MATCH('the')
| GROUP BY group_id ORDER BY max_id DESC

However, you cannot use aggregate values in the WHERE clause. (A HAVING clause would be required for that, just as in standard SQL, and that is not yet supported.)

* GROUP_CONCAT() function is supported, starting with version 2.1.1-beta. When you group by an attribute, the result set only shows attributes from a single document representing the whole group. GROUP_CONCAT() produces a comma-separated list of the attribute values of all documents in the group.
| SELECT id, GROUP_CONCAT(price) as pricesList, GROUPBY() AS name FROM shops GROUP BY shopName;

* ZONESPANLIST() function returns pairs of matched zone spans. Each pair contains the matched zone span identifier, a colon, and the order number of the matched zone span. For example, if a document reads "text the text", and you query for 'ZONESPAN:(i,b) text', then ZONESPANLIST() will return the string "1:1 1:2 2:1", meaning that the first zone span matched "text" in spans 1 and 2, and the second zone span matched in span 1 only. This was added in version 2.1.1-beta.

* WITHIN GROUP ORDER BY clause. This is a Sphinx-specific extension that lets you control how the best row within a group will be selected. The syntax matches that of the regular ORDER BY clause:

| SELECT *, INTERVAL(posted,NOW()-7*86400,NOW()-86400) AS timeseg
| FROM example WHERE MATCH('my search query')
| GROUP BY siteid
| WITHIN GROUP ORDER BY @weight DESC
| ORDER BY timeseg DESC, @weight DESC

Starting with 2.0.1-beta, WITHIN GROUP ORDER BY on a string attribute is supported, with respect for the current collation (see Section 5.12, "Collations").

* ORDER BY clause. Unlike in regular SQL, only column names (not expressions) are allowed and explicit ASC and DESC are required. The columns, however, can be computed expressions:

| SELECT *, @weight*10+docboost AS skey FROM example ORDER BY skey

Starting with 2.1.1-beta, you can use subqueries to speed up specific searches that involve reranking, by postponing hard (slow) calculations as late as possible. For example,

| SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC, cond DESC LIMIT 100;

could be better written as

| SELECT * FROM (SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC LIMIT 100) ORDER BY cond DESC;

because in the first case the slow expression would be evaluated for the whole set, while in the second one it would be evaluated just for a subset of values.

Starting with 2.0.1-beta, ORDER BY on a string attribute is supported, with respect for the current collation (see Section 5.12, "Collations"). Starting with 2.0.2-beta, ORDER BY RAND() syntax is supported. Note that this syntax actually randomizes the weight values and then orders matches by those randomized weights.

* LIMIT clause. Both LIMIT N and LIMIT M,N forms are supported. Unlike in regular SQL (but like in the Sphinx API), an implicit LIMIT 0,20 is present by default.

* OPTION clause. This is a Sphinx-specific extension that lets you control a number of per-query options. The syntax is:

| OPTION opt_name = opt_value [, ...]

Supported options and their respective allowed values are:

* 'agent_query_timeout' - integer (max time in milliseconds to wait for remote queries to complete; see the agent_query_timeout directive under Index configuration options for details)

* 'boolean_simplify' - 0 or 1, enables simplifying the query to speed it up

* 'comment' - string, user comment that gets copied to a query log file

* 'cutoff' - integer (max found matches threshold)

* 'field_weights' - a named integer list (per-field user weights for ranking)

* 'global_idf' - use global statistics (frequencies) from the global_idf file for IDF computations, rather than the local index statistics. Added in version 2.1.1-beta.

* 'idf' - either 'normalized' (default) or 'plain'. Added in version 2.1.1-beta. The standard IDF (Inverse Document Frequency) calculation may cause undesired keyword penalization effects in the BM25 weighting functions.
For instance, if you search for [the | something] and [the] occurs in more than 50% of the documents, then documents with both keywords [the] and [something] will get less weight than documents with just the one keyword [something]. Using OPTION idf=plain avoids this.

* idf=normalized: bm25 variant, idf = log((N-n+1)/n), as per Robertson et al.

* idf=plain: plain variant, idf = log(N/n), as per Sparck-Jones,

where N is the collection size and n is the number of matched documents. Hence, plain IDF varies in the [0, log(N)] range and keywords are never penalized, while normalized IDF varies in the [-log(N), log(N)] range and too-frequent keywords are penalized.

* 'index_weights' - a named integer list (per-index user weights for ranking)

* 'max_matches' - integer (per-query max matches value)

* 'max_query_time' - integer (max search time threshold, msec)

* 'ranker' - any of 'proximity_bm25', 'bm25', 'none', 'wordcount', 'proximity', 'matchany', 'fieldmask', 'sph04', 'expr', or 'export' (refer to Section 5.4, "Search results ranking" for more details on each ranker)

* 'retry_count' - integer (distributed retries count)

* 'retry_delay' - integer (distributed retry delay, msec)

* 'reverse_scan' - 0 or 1, lets you control the order in which a full-scan query processes the rows

* 'sort_method' - 'pq' (priority queue, set by default) or 'kbuffer' (gives faster sorting for already pre-sorted data, e.g. index data sorted by id). The result set is the same in both cases; picking one option or the other may just improve (or worsen!) performance. This option was added in version 2.1.1-beta.

Example:

| SELECT * FROM test WHERE MATCH('@title hello @body world')
| OPTION ranker=bm25, max_matches=3000,
|     field_weights=(title=10, body=3), agent_query_timeout=10000

7.2. SELECT @@system_variable syntax
====================================

| SELECT @@system_variable [LIMIT [offset,] row_count]

Added in version 2.0.2-beta, this is currently a placeholder query that does nothing and reports success. That is in order to keep compatibility with frameworks and connectors that automatically execute this statement.
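As an illustration, consider a statement of this kind that many MySQL connectors send right after connecting. Assuming behavior analogous to the SHOW COLLATION example in Section 7.21 (the response shown here is a sketch, not captured output), the session would look roughly like this:

| mysql> SELECT @@version_comment LIMIT 1;
| Query OK, 0 rows affected (0.00 sec)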
7.3. SHOW META syntax
=====================

| SHOW META [ LIKE pattern ]

SHOW META shows additional meta-information about the latest query, such as query time and keyword statistics. IO and CPU counters will only be available if searchd was started with the --iostats and --cpustats switches respectively.

| mysql> SELECT * FROM test1 WHERE MATCH('test|one|two');
| +------+--------+----------+------------+
| | id   | weight | group_id | date_added |
| +------+--------+----------+------------+
| |    1 |   3563 |      456 | 1231721236 |
| |    2 |   2563 |      123 | 1231721236 |
| |    4 |   1480 |        2 | 1231721236 |
| +------+--------+----------+------------+
| 3 rows in set (0.01 sec)
|
| mysql> SHOW META;
| +-----------------------+-------+
| | Variable_name         | Value |
| +-----------------------+-------+
| | total                 | 3     |
| | total_found           | 3     |
| | time                  | 0.005 |
| | keyword[0]            | test  |
| | docs[0]               | 3     |
| | hits[0]               | 5     |
| | keyword[1]            | one   |
| | docs[1]               | 1     |
| | hits[1]               | 2     |
| | keyword[2]            | two   |
| | docs[2]               | 1     |
| | hits[2]               | 2     |
| | cpu_time              | 0.350 |
| | io_read_time          | 0.004 |
| | io_read_ops           | 2     |
| | io_read_kbytes        | 0.4   |
| | io_write_time         | 0.000 |
| | io_write_ops          | 0     |
| | io_write_kbytes       | 0.0   |
| | agents_cpu_time       | 0.000 |
| | agent_io_read_time    | 0.000 |
| | agent_io_read_ops     | 0     |
| | agent_io_read_kbytes  | 0.0   |
| | agent_io_write_time   | 0.000 |
| | agent_io_write_ops    | 0     |
| | agent_io_write_kbytes | 0.0   |
| +-----------------------+-------+
| 26 rows in set (0.00 sec)

Starting with version 2.1.1-beta, you can also use the optional LIKE clause. It lets you pick just the variables that match a pattern. The pattern syntax is that of regular SQL wildcards, that is, '%' means any number of any characters, and '_' means a single character:

| mysql> SHOW META LIKE 'total%';
| +-----------------------+-------+
| | Variable_name         | Value |
| +-----------------------+-------+
| | total                 | 3     |
| | total_found           | 3     |
| +-----------------------+-------+
| 2 rows in set (0.00 sec)

7.4. SHOW WARNINGS syntax
=========================

| SHOW WARNINGS

SHOW WARNINGS statement, introduced in version 0.9.9-rc2, can be used to retrieve the warning produced by the latest query. The error message will be returned along with the query itself:

| mysql> SELECT * FROM test1 WHERE MATCH('@@title hello') \G
| ERROR 1064 (42000): index test1: syntax error, unexpected TOK_FIELDLIMIT
| near '@title hello'
|
| mysql> SELECT * FROM test1 WHERE MATCH('@title -hello') \G
| ERROR 1064 (42000): index test1: query is non-computable (single NOT operator)
|
| mysql> SELECT * FROM test1 WHERE MATCH('"test doc"/3') \G
| *************************** 1. row ***************************
|         id: 4
|     weight: 2500
|   group_id: 2
| date_added: 1231721236
| 1 row in set, 1 warning (0.00 sec)
|
| mysql> SHOW WARNINGS \G
| *************************** 1. row ***************************
|   Level: warning
|    Code: 1000
| Message: quorum threshold too high (words=2, thresh=3); replacing quorum operator
|          with AND operator
| 1 row in set (0.00 sec)

7.5. SHOW STATUS syntax
=======================

| SHOW STATUS [ LIKE pattern ]

SHOW STATUS, introduced in version 0.9.9-rc2, displays a number of useful performance counters. IO and CPU counters will only be available if searchd was started with the --iostats and --cpustats switches respectively.
| mysql> SHOW STATUS;
| +--------------------+-------+
| | Variable_name      | Value |
| +--------------------+-------+
| | uptime             | 216   |
| | connections        | 3     |
| | maxed_out          | 0     |
| | command_search     | 0     |
| | command_excerpt    | 0     |
| | command_update     | 0     |
| | command_keywords   | 0     |
| | command_persist    | 0     |
| | command_status     | 0     |
| | agent_connect      | 0     |
| | agent_retry        | 0     |
| | queries            | 10    |
| | dist_queries       | 0     |
| | query_wall         | 0.075 |
| | query_cpu          | OFF   |
| | dist_wall          | 0.000 |
| | dist_local         | 0.000 |
| | dist_wait          | 0.000 |
| | query_reads        | OFF   |
| | query_readkb       | OFF   |
| | query_readtime     | OFF   |
| | avg_query_wall     | 0.007 |
| | avg_query_cpu      | OFF   |
| | avg_dist_wall      | 0.000 |
| | avg_dist_local     | 0.000 |
| | avg_dist_wait      | 0.000 |
| | avg_query_reads    | OFF   |
| | avg_query_readkb   | OFF   |
| | avg_query_readtime | OFF   |
| +--------------------+-------+
| 29 rows in set (0.00 sec)

Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, "SHOW META syntax" for its syntax details.

7.6. INSERT and REPLACE syntax
==============================

| {INSERT | REPLACE} INTO index [(column, ...)]
| VALUES (value, ...)
| [, (...)]

INSERT statement, introduced in version 1.10-beta, is only supported for RT indexes. It inserts new rows (documents) into an existing index, with the provided column values.

The ID column must be present in all cases. Rows with duplicate IDs will not be overwritten by INSERT; use REPLACE to do that.

index is the name of the RT index into which the new row(s) should be inserted. The optional column names list lets you explicitly specify values for only some of the columns present in the index. All the other columns will be filled with their default values (0 for scalar types, empty string for text types).

Expressions are not currently supported in INSERT, and values should be explicitly specified.

Multiple rows can be inserted using a single INSERT statement by providing several comma-separated, parens-enclosed lists of row values.

7.7. REPLACE syntax
===================

| {INSERT | REPLACE} INTO index [(column, ...)]
| VALUES (value, ...)
| [, (...)]

REPLACE syntax is identical to INSERT syntax and is discussed in Section 7.6, "INSERT and REPLACE syntax".

7.8. DELETE syntax
==================

| DELETE FROM index WHERE {id = value | id IN (val1 [, val2 [, ...]])}

DELETE statement, introduced in version 1.10-beta, is only supported for RT indexes. It deletes existing rows (documents) from an existing index based on ID.

index is the name of the RT index from which the row should be deleted. value is the row ID to be deleted. Support for the batch id IN (2,3,5) syntax was added in version 2.0.1-beta.

Additional types of WHERE conditions (such as conditions on attributes, etc) are planned, but not supported yet as of 1.10-beta.
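To put the three statements above together, here is a minimal sketch of a session against a hypothetical RT index named rt with id, title, content, and gid columns (the same layout as the DESCRIBE example in Section 7.17); the index name and schema are assumptions, and server responses are omitted:

| INSERT INTO rt (id, title, content, gid) VALUES (1, 'first doc', 'hello world', 10);
| INSERT INTO rt (id, title, content, gid) VALUES
|     (2, 'second doc', 'hello again', 10),
|     (3, 'third doc', 'bye bye', 20);
| REPLACE INTO rt (id, title, content, gid) VALUES (2, 'second doc', 'hello again, updated', 10);
| DELETE FROM rt WHERE id IN (1, 3);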
7.9. SET syntax
===============

| SET [GLOBAL] server_variable_name = value
| SET GLOBAL @user_variable_name = (int_val1 [, int_val2, ...])
| SET NAMES value
| SET @@dummy_variable = ignored_value

SET statement, introduced in version 1.10-beta, modifies a variable value. The variable names are case-insensitive. No variable value changes survive a server restart.

SET NAMES statement and the SET @@variable_name syntax, both introduced in version 2.0.2-beta, do nothing. They were implemented to maintain compatibility with 3rd party MySQL client libraries, connectors, and frameworks that may need to run this statement when connecting.

There are the following classes of variables:

1. per-session server variables (1.10-beta and above)
2. global server variables (2.0.1-beta and above)
3. global user variables (2.0.1-beta and above)

Global user variables are shared between concurrent sessions. Currently, the only supported value type is a list of BIGINTs, and these variables can only be used along with IN() for filtering purposes. The intended usage scenario is uploading huge lists of values to searchd (once) and reusing them (many times) later, saving on network overheads. Example:

| // in session 1
| mysql> SET GLOBAL @myfilter=(2,3,5,7,11,13);
| Query OK, 0 rows affected (0.00 sec)
|
| // later in session 2
| mysql> SELECT * FROM test1 WHERE group_id IN @myfilter;
| +------+--------+----------+------------+-----------------+------+
| | id   | weight | group_id | date_added | title           | tag  |
| +------+--------+----------+------------+-----------------+------+
| |    3 |      1 |        2 | 1299338153 | another doc     | 15   |
| |    4 |      1 |        2 | 1299338153 | doc number four | 7,40 |
| +------+--------+----------+------------+-----------------+------+
| 2 rows in set (0.02 sec)

Per-session and global server variables affect certain server settings in the respective scope. Known per-session server variables are:

AUTOCOMMIT = {0 | 1}
Whether any data modification statement should be implicitly wrapped by BEGIN and COMMIT. Introduced in version 1.10-beta.

COLLATION_CONNECTION = collation_name
Selects the collation to be used for ORDER BY or GROUP BY on string values in subsequent queries. Refer to Section 5.12, "Collations" for a list of known collation names. Introduced in version 2.0.1-beta.

CHARACTER_SET_RESULTS = charset_name
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically enforce a charset when connecting to a Sphinx server. Introduced in version 2.0.1-beta.

SQL_AUTO_IS_NULL = value
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically enforce a charset when connecting to a Sphinx server. Introduced in version 2.0.2-beta.

SQL_MODE = value
Does nothing; a placeholder to support frameworks, clients, and connectors that attempt to automatically enforce a charset when connecting to a Sphinx server. Introduced in version 2.0.2-beta.

PROFILING = {0 | 1}
Enables query profiling in the current session. Defaults to 0. See also Section 7.29, "SHOW PROFILE syntax". Introduced in version 2.1.1-beta.

Known global server variables are:

QUERY_LOG_FORMAT = {plain | sphinxql}
Changes the current log format. Introduced in version 2.0.1-beta.

LOG_LEVEL = {info | debug | debugv | debugvv}
Changes the current log verbosity level. Introduced in version 2.0.1-beta.

Examples:

| mysql> SET autocommit=0;
| Query OK, 0 rows affected (0.00 sec)
|
| mysql> SET GLOBAL query_log_format=sphinxql;
| Query OK, 0 rows affected (0.00 sec)

7.10. SET TRANSACTION syntax
============================

| SET TRANSACTION ISOLATION LEVEL { READ UNCOMMITTED
|     | READ COMMITTED
|     | REPEATABLE READ
|     | SERIALIZABLE }

SET TRANSACTION statement, introduced in version 2.0.2-beta, does nothing. It was implemented to maintain compatibility with 3rd party MySQL client libraries, connectors, and frameworks that may need to run this statement when connecting. Example:

| mysql> SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
| Query OK, 0 rows affected (0.00 sec)

7.11. BEGIN, COMMIT, and ROLLBACK syntax
========================================

| START TRANSACTION | BEGIN
| COMMIT
| ROLLBACK
| SET AUTOCOMMIT = {0 | 1}

BEGIN, COMMIT, and ROLLBACK statements were introduced in version 1.10-beta.
BEGIN statement (or its START TRANSACTION alias) forcibly commits the pending transaction, if any, and begins a new one. COMMIT statement commits the current transaction, making all its changes permanent. ROLLBACK statement rolls back the current transaction, canceling all its changes. SET AUTOCOMMIT controls the autocommit mode in the active session.

AUTOCOMMIT is set to 1 by default, meaning that every statement that performs any changes on any index is implicitly wrapped in BEGIN and COMMIT.

Transactions are limited to a single RT index, and also limited in size. They are atomic, consistent, overly isolated, and durable. Overly isolated means that the changes are not only invisible to concurrent transactions but even to the current session itself.

7.12. BEGIN syntax
==================

| START TRANSACTION | BEGIN

BEGIN syntax is discussed in detail in Section 7.11, "BEGIN, COMMIT, and ROLLBACK syntax".

7.13. ROLLBACK syntax
=====================

| ROLLBACK

ROLLBACK syntax is discussed in detail in Section 7.11, "BEGIN, COMMIT, and ROLLBACK syntax".

7.14. CALL SNIPPETS syntax
==========================

| CALL SNIPPETS(data, index, query[, opt_value AS opt_name[, ...]])

CALL SNIPPETS statement, introduced in version 1.10-beta, builds a snippet from the provided data and query, using the specified index settings.

data is the source data to extract a snippet from. It could be a single string, or a list of strings enclosed in round brackets (as in the examples below). index is the name of the index from which to take the text processing settings. query is the full-text query to build snippets for. Additional options are documented in Section 8.7.1, "BuildExcerpts". Usage example:

| CALL SNIPPETS('this is my document text', 'test1', 'hello world',
|     5 AS around, 200 AS limit);
| CALL SNIPPETS(('this is my document text','this is my another text'), 'test1', 'hello world',
|     5 AS around, 200 AS limit);
| CALL SNIPPETS(('data/doc1.txt','data/doc2.txt','/home/sphinx/doc3.txt'), 'test1', 'hello world',
|     5 AS around, 200 AS limit, 1 AS load_files);

7.15. CALL KEYWORDS syntax
==========================

| CALL KEYWORDS(text, index, [hits])

CALL KEYWORDS statement, introduced in version 1.10-beta, splits text into particular keywords. It returns the tokenized and normalized forms of the keywords, and, optionally, keyword statistics.

text is the text to break down into keywords. index is the name of the index from which to take the text processing settings. hits is an optional boolean parameter that specifies whether to return document and hit occurrence statistics.
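As a minimal illustration (reusing the test1 index from the earlier examples, which is an assumption), the following calls tokenize a short text; the second call also requests the per-keyword document and hit statistics:

| CALL KEYWORDS('hello world', 'test1');
| CALL KEYWORDS('hello world', 'test1', 1);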
7.16. SHOW TABLES syntax
========================

| SHOW TABLES [ LIKE pattern ]

SHOW TABLES statement, introduced in version 2.0.1-beta, enumerates all currently active indexes along with their types. As of 2.0.1-beta, the existing index types are local, distributed, and rt, respectively. Example:

| mysql> SHOW TABLES;
| +-------+-------------+
| | Index | Type        |
| +-------+-------------+
| | dist1 | distributed |
| | rt    | rt          |
| | test1 | local       |
| | test2 | local       |
| +-------+-------------+
| 4 rows in set (0.00 sec)

Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, "SHOW META syntax" for its syntax details.

| mysql> SHOW TABLES LIKE '%4';
| +-------+-------------+
| | Index | Type        |
| +-------+-------------+
| | dist4 | distributed |
| +-------+-------------+
| 1 row in set (0.00 sec)

7.17. DESCRIBE syntax
=====================

| {DESC | DESCRIBE} index [ LIKE pattern ]

DESCRIBE statement, introduced in version 2.0.1-beta, lists index columns and their associated types. Columns are the document ID, full-text fields, and attributes. The order matches that in which fields and attributes are expected by INSERT and REPLACE statements. As of 2.0.1-beta, the column types are field, integer, timestamp, ordinal, bool, float, bigint, string, and mva. The ID column will be typed as either integer or bigint, depending on whether the binaries were built with 32-bit or 64-bit document ID support. Example:

| mysql> DESC rt;
| +---------+---------+
| | Field   | Type    |
| +---------+---------+
| | id      | integer |
| | title   | field   |
| | content | field   |
| | gid     | integer |
| +---------+---------+
| 4 rows in set (0.00 sec)

Starting from version 2.1.1-beta, an optional LIKE clause is supported. Refer to Section 7.3, "SHOW META syntax" for its syntax details.

7.18. CREATE FUNCTION syntax
============================

| CREATE FUNCTION udf_name
|     RETURNS {INT | BIGINT | FLOAT | STRING}
|     SONAME 'udf_lib_file'

CREATE FUNCTION statement, introduced in version 2.0.1-beta, installs a user-defined function (UDF) with the given name and type from the given library file. The library file must reside in a trusted plugin_dir directory. On success, the function is available for use in all subsequent queries that the server receives. Example:

| mysql> CREATE FUNCTION avgmva RETURNS INT SONAME 'udfexample.dll';
| Query OK, 0 rows affected (0.03 sec)
|
| mysql> SELECT *, AVGMVA(tag) AS q from test1;
| +------+--------+---------+-----------+
| | id   | weight | tag     | q         |
| +------+--------+---------+-----------+
| |    1 |      1 | 1,3,5,7 | 4.000000  |
| |    2 |      1 | 2,4,6   | 4.000000  |
| |    3 |      1 | 15      | 15.000000 |
| |    4 |      1 | 7,40    | 23.500000 |
| +------+--------+---------+-----------+

7.19. DROP FUNCTION syntax
==========================

| DROP FUNCTION udf_name

DROP FUNCTION statement, introduced in version 2.0.1-beta, deinstalls a user-defined function (UDF) with the given name. On success, the function is no longer available for use in subsequent queries. Pending concurrent queries will not be affected, and the library unload, if necessary, will be postponed until those queries complete. Example:

| mysql> DROP FUNCTION avgmva;
| Query OK, 0 rows affected (0.00 sec)

7.20. SHOW VARIABLES syntax
===========================

| SHOW [{GLOBAL | SESSION}] VARIABLES [WHERE variable_name='xxx']

SHOW VARIABLES statement was added in version 2.0.1-beta to improve compatibility with 3rd party MySQL connectors and frameworks that automatically execute this statement. The WHERE option was added in version 2.1.1-beta.

In version 2.0.1-beta, it did nothing. Starting from version 2.0.2-beta, it returns the current values of a few server-wide variables. Also, support for the GLOBAL and SESSION clauses was added.

| mysql> SHOW GLOBAL VARIABLES;
| +----------------------+----------+
| | Variable_name        | Value    |
| +----------------------+----------+
| | autocommit           | 1        |
| | collation_connection | libc_ci  |
| | query_log_format     | sphinxql |
| | log_level            | info     |
| +----------------------+----------+
| 4 rows in set (0.00 sec)

Starting from 2.1.1-beta, support for the WHERE variable_name clause was added, to help certain connectors.

7.21. SHOW COLLATION syntax
===========================

| SHOW COLLATION

Added in version 2.0.1-beta, this is currently a placeholder query that does nothing and reports success. That is in order to keep compatibility with frameworks and connectors that automatically execute this statement.

| mysql> SHOW COLLATION;
| Query OK, 0 rows affected (0.00 sec)
7.22. SHOW CHARACTER SET syntax
===============================

| SHOW CHARACTER SET

Added in version 2.1.1-beta, this is currently a placeholder query that does nothing and reports that a UTF-8 character set is available. It was added in order to keep compatibility with frameworks and connectors that automatically execute this statement.

| mysql> SHOW CHARACTER SET;
| +---------+---------------+-------------------+--------+
| | Charset | Description   | Default collation | Maxlen |
| +---------+---------------+-------------------+--------+
| | utf8    | UTF-8 Unicode | utf8_general_ci   | 3      |
| +---------+---------------+-------------------+--------+
| 1 row in set (0.00 sec)

7.23. UPDATE syntax
===================

| UPDATE index SET col1 = newval1 [, ...] WHERE where_condition [OPTION opt_name = opt_value [, ...]]

UPDATE statement was added in version 2.0.1-beta. Multiple attributes and values can be specified in a single statement. Both RT and disk indexes are supported.

As of version 2.0.2-beta, all attribute types (int, bigint, float, MVA) except for strings can be updated. Previously, some of the types were not supported. where_condition (also added in 2.0.2-beta) has the same syntax as in the SELECT statement (see Section 7.1, "SELECT syntax"