Description
Use the simple approach of n-gramming outside of Solr and indexing n-gram documents. For example:
<doc>
  <field name="word">lettuce</field>
  <field name="start3">let</field>
  <field name="gram3">let ett ttu tuc uce</field>
  <field name="end3">uce</field>
  <field name="start4">lett</field>
  <field name="gram4">lett ettu ttuc tuce</field>
  <field name="end4">tuce</field>
</doc>
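A minimal sketch of how the field values above could be generated outside of Solr before indexing. The class and method names here are illustrative only and are not part of the SOLR-20/SOLR-30 client code:

import java.util.ArrayList;
import java.util.List;

public class NGramFields {

    // Returns the space-separated n-grams of a word,
    // e.g. grams("lettuce", 3) -> "let ett ttu tuc uce"
    static String grams(String word, int n) {
        List<String> out = new ArrayList<String>();
        for (int i = 0; i + n <= word.length(); i++) {
            out.add(word.substring(i, i + n));
        }
        return String.join(" ", out);
    }

    public static void main(String[] args) {
        String word = "lettuce";
        for (int n : new int[] {3, 4}) {
            System.out.println("start" + n + " = " + word.substring(0, n));
            System.out.println("gram" + n + "  = " + grams(word, n));
            System.out.println("end" + n + "   = " + word.substring(word.length() - n));
        }
    }
}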
See: http://www.mail-archive.com/solr-user@lucene.apache.org/msg01254.html

Java clients: SOLR-20 (add, delete, commit, optimize) and SOLR-30 (search).
Issue Links
- relates to LUCENE-759: Add n-gram tokenizers to contrib/analyzers (Resolved)