Lecture 3: Tolerant Retrieval
Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Pandu Nayak and Prabhakar Raghavan
This lecture
Dictionary data structures
“Tolerant” retrieval
Wild-card queries
Spelling correction
Soundex
Introduction to Information Retrieval Sec. 3.1
A naïve dictionary
An array of structs: each entry stores the term, its document frequency, and a pointer to its postings list.
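As a sketch of this array-of-structs layout (the exact field widths are not shown here, so the types below are an assumption):

```python
from dataclasses import dataclass

@dataclass
class DictEntry:
    term: str        # a fixed-width char array in a C-style struct
    doc_freq: int    # document frequency of the term
    postings: list   # stands in for the pointer to the postings list

# The naïve dictionary: an array of entries, kept sorted by term so it
# can at least be binary-searched.
dictionary = sorted(
    [DictEntry("aardvark", 1, [7]), DictEntry("a", 2, [1, 3])],
    key=lambda e: e.term,
)
```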
Hashtables
Each vocabulary term is hashed to an integer
(We assume you’ve seen hashtables before)
Pros:
Lookup is faster than for a tree: O(1)
Cons:
No easy way to find minor variants:
judgment/judgement
No prefix search [tolerant retrieval]
If vocabulary keeps growing, need to occasionally do the
expensive operation of rehashing everything
Tree: binary tree
[Figure: binary search tree over the dictionary; leaf terms include aardvark, huygens, sickle, zygot]
Tree: B-tree
[Figure: B-tree whose root splits the dictionary into the ranges a-hu, hy-m, n-z]
Trees
Simplest: binary tree
More usual: B-trees
Trees require a standard ordering of characters and hence
strings … but we typically have one
Pros:
Solves the prefix problem (terms starting with hyp)
Cons:
Slower: O(log M) [and this requires a balanced tree]
Rebalancing binary trees is expensive
But B-trees mitigate the rebalancing problem
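A sketch of the prefix lookup a tree-structured dictionary enables, using a sorted array with binary search as a stand-in for the B-tree:

```python
import bisect

def prefix_matches(sorted_terms, prefix):
    # All terms w with prefix <= w < successor(prefix), e.g. hyp <= w < hyq.
    # (Simplification: assumes the last character of the prefix is not "z".)
    lo = bisect.bisect_left(sorted_terms, prefix)
    hi = bisect.bisect_left(sorted_terms, prefix[:-1] + chr(ord(prefix[-1]) + 1))
    return sorted_terms[lo:hi]
```

`prefix_matches(terms, "hyp")` returns exactly the terms starting with hyp.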
WILD-CARD QUERIES
Introduction to Information Retrieval Sec. 3.2
Wild-card queries: *
mon*: find all docs containing any word beginning
with “mon”.
Easy with binary tree (or B-tree) lexicon: retrieve all
words in range: mon ≤ w < moo
*mon: find words ending in “mon”: harder
Maintain an additional B-tree for terms backwards.
Can retrieve all words in range: nom ≤ w < non.
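The backwards index can be sketched the same way: store every term reversed, so that *mon becomes the prefix query nom ≤ w < non on the reversed lexicon (again with a sorted list in place of the B-tree):

```python
import bisect

def suffix_matches(terms, suffix):
    # Second lexicon: every term written backwards, kept sorted
    rev = sorted(t[::-1] for t in terms)
    pre = suffix[::-1]                             # *mon -> prefix nom
    lo = bisect.bisect_left(rev, pre)
    hi = bisect.bisect_right(rev, pre + "\uffff")  # upper bound of the range
    return sorted(w[::-1] for w in rev[lo:hi])
```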
Query processing
At this point, we have an enumeration of all terms in
the dictionary that match the wild-card query.
We still have to look up the postings for each
enumerated term.
E.g., consider the query:
se*ate AND fil*er
This may result in the execution of many Boolean
AND queries.
Introduction to Information Retrieval Sec. 3.2.1
Permuterm index
For term hello, index under:
hello$, ello$h, llo$he, lo$hel, o$hell, $hello
where $ is a special symbol.
Queries:
X       →  lookup on X$
X*      →  lookup on $X*
*X      →  lookup on X$*
*X*     →  lookup on X*
X*Y     →  lookup on Y$X*
X*Y*Z   →  ??? Exercise!
Query = hel*o
X=hel, Y=o
Lookup o$hel*
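Both sides of the permuterm idea can be sketched as follows (handling queries with a single * only):

```python
def permuterm_rotations(term):
    # Index a term under every rotation of term + "$"
    t = term + "$"
    return [t[i:] + t[:i] for i in range(len(t))]

def permuterm_key(query):
    # Rotate a single-* query so the "*" lands at the end; the answer
    # is then a prefix lookup on the permuterm index.
    pre, _, post = query.partition("*")
    return post + "$" + pre + "*"
```

`permuterm_key` maps hel*o to o$hel*, mon* to $mon*, and *mon to mon$*, matching the lookup rules above.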
Introduction to Information Retrieval Sec. 3.2.2
Bigram (k-gram) indexes
From the text “April is the cruelest month” we get the bigrams:
$a, ap, pr, ri, il, l$, $i, is, s$, $t, th, he, e$, $c, cr, ru, ue, el, le, es, st, t$, $m, mo, on, nt, h$
$ is a special word boundary symbol
Maintain a second inverted index from bigrams to
dictionary terms that match each bigram.
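The bigram enumeration and the second inverted index might be sketched as:

```python
from collections import defaultdict

def bigrams(term):
    # "$" marks the word boundary: april -> $a ap pr ri il l$
    t = "$" + term + "$"
    return [t[i:i + 2] for i in range(len(t) - 1)]

def build_bigram_index(vocabulary):
    # Maps each bigram to the set of dictionary terms containing it
    index = defaultdict(set)
    for term in vocabulary:
        for bg in bigrams(term):
            index[bg].add(term)
    return index
```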
$m → mace, madden
mo → among, amortize
on → along, among
Processing wild-cards
Query mon* can now be run as
$m AND mo AND on
Gets terms that match the AND version of our wildcard
query.
But we’d enumerate moon.
Must post-filter these terms against query.
Surviving enumerated terms are then looked up in
the term-document inverted index.
Fast, space efficient (compared to permuterm).
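Putting the pieces together, a sketch of wildcard processing with post-filtering (the bigram index is passed in as a plain dict of sets):

```python
import re

def wildcard_candidates(query, index):
    # Usable bigrams come from the fixed parts of the query:
    # mon* contributes $m, mo, on (nothing crosses the "*").
    parts = ("$" + query + "$").split("*")
    grams = {p[i:i + 2] for p in parts for i in range(len(p) - 1)}
    hits = None
    for g in grams:
        postings = index.get(g, set())
        hits = postings if hits is None else hits & postings
    # Post-filter: the AND over-generates (moon also has $m, mo and on),
    # so check each surviving term against the actual pattern.
    pattern = re.compile("^" + ".*".join(map(re.escape, query.split("*"))) + "$")
    return {t for t in (hits or set()) if pattern.match(t)}
```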
Search
Type your search terms, use ‘*’ if you need to.
E.g., Alex* will match Alexander.
SPELLING CORRECTION
Introduction to Information Retrieval Sec. 3.3
Spell correction
Two principal uses
Correcting document(s) being indexed
Correcting user queries to retrieve “right” answers
Two main flavors:
Isolated word
Check each word on its own for misspelling
Will not catch typos resulting in correctly spelled words
e.g., from ↔ form
Context-sensitive
Look at surrounding words,
e.g., I flew form Heathrow to Narita.
Document correction
Especially needed for OCR’ed documents
Correction algorithms are tuned for this, e.g., confusing rn with m
Can use domain-specific knowledge
E.g., OCR can confuse O and D more often than it would confuse O
and I (adjacent on the QWERTY keyboard, so more likely
interchanged in typing).
But also: web pages and even printed material have
typos
Goal: the dictionary contains fewer misspellings
But often we don’t change the documents and
instead fix the query-document mapping
Query mis-spellings
Our principal focus here
E.g., the query Alanis Morisett
We can either
Retrieve documents indexed by the correct spelling, OR
Return several suggested alternative queries with the
correct spelling
Did you mean … ?
Introduction to Information Retrieval Sec. 3.3.2
Introduction to Information Retrieval Sec. 3.3.3
Edit distance
Given two strings S1 and S2, the minimum number of
operations to convert one to the other
Operations are typically character-level
Insert, Delete, Replace, (Transposition)
E.g., the edit distance from dof to dog is 1
From cat to act is 2 (Just 1 with transpose.)
from cat to dog is 3.
Generally found by dynamic programming.
See http://www.merriampark.com/ld.htm for a nice
example plus an applet.
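The dynamic program can be sketched directly from the recurrence (insert, delete, replace, without the transposition):

```python
def edit_distance(s1, s2):
    # d[i][j] = minimum number of edits turning s1[:i] into s2[:j]
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete all of s1[:i]
    for j in range(n + 1):
        d[0][j] = j                               # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + cost) # replace (or match)
    return d[m][n]
```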
Introduction to Information Retrieval Sec. 3.3.4
n-gram overlap
Enumerate all the n-grams in the query string as well
as in the lexicon
Use the n-gram index (recall wild-card search) to
retrieve all lexicon terms matching any of the query
n-grams
Threshold by number of matching n-grams
Variants – weight by keyboard layout, etc.
Jaccard coefficient: |X ∩ Y| / |X ∪ Y|
Equals 1 when X and Y have the same elements and
zero when they are disjoint
X and Y don’t have to be of the same size
Always assigns a number between 0 and 1
Now threshold to decide if you have a match
E.g., if J.C. > 0.8, declare a match
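Jaccard on n-gram sets is a one-liner; e.g., for the trigrams of november and december:

```python
def ngrams(term, n=3):
    return {term[i:i + n] for i in range(len(term) - n + 1)}

def jaccard(x, y):
    # |X ∩ Y| / |X ∪ Y|: 1 for identical sets, 0 for disjoint ones
    return len(x & y) / len(x | y) if x | y else 1.0
```

november and december share the trigrams emb, mbe, ber out of nine distinct ones, so their J.C. is 3/9.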
Matching bigrams
Consider the query lord – we wish to identify words
matching 2 of its 3 bigrams (lo, or, rd)
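A sketch of the 2-of-3 threshold, with toy bigram postings (the term lists below are illustrative):

```python
from collections import Counter

def ngram_filter(query, index, n=2, min_match=2):
    grams = {query[i:i + n] for i in range(len(query) - n + 1)}
    counts = Counter()
    for g in grams:
        for term in index.get(g, ()):
            counts[term] += 1
    # Keep terms matching at least min_match of the query's n-grams
    return {t for t, c in counts.items() if c >= min_match}
```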
Introduction to Information Retrieval Sec. 3.3.5
Context-sensitive correction
Need surrounding context to catch errors like flew form Heathrow.
First idea: retrieve dictionary terms close (in
weighted edit distance) to each query term
Now try all possible resulting phrases with one word
“fixed” at a time
flew from heathrow
fled form heathrow
flea form heathrow
Hit-based spelling correction: Suggest the alternative
that has lots of hits.
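The enumeration of one-word-fixed phrases might be sketched as (the alternatives dict is hypothetical input from the edit-distance step):

```python
def candidate_phrases(words, alternatives):
    # Try all phrases with one word "fixed" at a time; alternatives[w]
    # holds the dictionary terms close to w in weighted edit distance.
    phrases = []
    for i, w in enumerate(words):
        for alt in alternatives.get(w, ()):
            phrases.append(words[:i] + [alt] + words[i + 1:])
    return phrases
```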
Exercise
Suppose that for “flew form Heathrow” we have 7
alternatives for flew, 19 for form and 3 for heathrow.
How many “corrected” phrases will we enumerate in
this scheme?
Another approach
Break phrase query into a conjunction of biwords
(Lecture 2).
Look for biwords that need only one term corrected.
Enumerate only phrases containing “common”
biwords.
SOUNDEX
Introduction to Information Retrieval Sec. 3.4
Soundex
Class of heuristics to expand a query into phonetic
equivalents
Language specific – mainly for names
E.g., chebyshev ↔ tchebycheff
Invented for the U.S. census … in 1918
http://www.creativyst.com/Doc/Articles/SoundEx1/SoundEx1.htm#Top
Soundex continued
4. Repeatedly remove one out of each pair of consecutive identical digits.
5. Remove all zeros from the resulting string.
6. Pad the resulting string with trailing zeros and
return the first four positions, which will be of the
form <uppercase letter> <digit> <digit> <digit>.
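A sketch of the full algorithm; steps 1–3 (keep the first letter, map vowels plus h, w, y to 0, and group the remaining consonants into digit classes) follow the standard formulation and precede the steps shown on this slide:

```python
def soundex(term):
    # Steps 2-3: letter-to-digit classes of the standard algorithm
    codes = {}
    for letters, digit in [("aeiouhwy", "0"), ("bfpv", "1"), ("cgjkqsxz", "2"),
                           ("dt", "3"), ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    term = term.lower()
    digits = [codes[c] for c in term if c in codes]
    # Step 4: keep one digit from each run of consecutive identical digits
    deduped = [d for i, d in enumerate(digits) if i == 0 or d != digits[i - 1]]
    # Step 5: drop the first letter's own digit and all zeros
    body = "".join(d for d in deduped[1:] if d != "0")
    # Step 6: pad with trailing zeros, keep <letter><digit><digit><digit>
    return (term[0].upper() + body + "000")[:4]
```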
Soundex
Soundex is the classic algorithm, provided by most
databases (Oracle, Microsoft, …)
How useful is soundex?
Not very – for information retrieval
Okay for “high recall” tasks (e.g., Interpol), though
biased to names of certain nationalities
Zobel and Dart (1996) show that other algorithms for
phonetic matching perform much better in the
context of IR
Exercise
Draw yourself a diagram showing the various indexes
in a search engine incorporating all the functionality
we have talked about
Identify some of the key design choices in the index
pipeline:
Does stemming happen before the Soundex index?
What about n-grams?
Given a query, how would you parse and dispatch
sub-queries to the various indexes?
Introduction to Information Retrieval Sec. 3.5
Resources
IIR 3, MG 4.2
Efficient spell retrieval:
K. Kukich. Techniques for automatically correcting words in text. ACM
Computing Surveys 24(4), Dec 1992.
J. Zobel and P. Dart. Finding approximate matches in large lexicons.
Software - Practice and Experience 25(3), March 1995.
http://citeseer.ist.psu.edu/zobel95finding.html
Mikael Tillenius: Efficient Generation and Ranking of Spelling Error Corrections.
Master’s thesis at Sweden’s Royal Institute of Technology.
http://citeseer.ist.psu.edu/179155.html
Nice, easy reading on spell correction:
Peter Norvig: How to write a spelling corrector
http://norvig.com/spell-correct.html