Introduction to Information Retrieval

This document provides an introduction to information retrieval. It discusses how information retrieval aims to find relevant documents in large collections to satisfy an information need. It describes how early systems used grep to search documents, and why that approach is inefficient. It introduces the inverted index as an efficient way to store term-to-document mappings: for each term, a postings list records the identifiers of the documents in which the term appears. The indexing process of tokenizing, sorting, and generating the dictionary and postings lists is also summarized.

Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Christopher Manning and Prabhakar Raghavan
Lecture 1: Boolean retrieval

Information Retrieval
 Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).

Unstructured (text) vs. structured (database) data in 1996

Unstructured (text) vs. structured (database) data in 2009

Unstructured data in 1680 (Sec. 1.1)
 Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
 One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia.
 Why is that not the answer?
   Slow (for large corpora)
   NOT Calpurnia is non-trivial
   Other operations (e.g., find the word Romans near countrymen) not feasible
   Ranked retrieval (best documents to return) is left to later lectures

Term-document incidence (Sec. 1.1)

 Term        Antony and  Julius  The      Hamlet  Othello  Macbeth
             Cleopatra   Caesar  Tempest
 Antony      1           1       0        0       0        1
 Brutus      1           1       0        1       0        0
 Caesar      1           1       0        1       1        1
 Calpurnia   0           1       0        0       0        0
 Cleopatra   1           0       0        0       0        0
 mercy       1           0       1        1       1        1
 worser      1           0       1        1       1        0

 An entry is 1 if the play contains the word, 0 otherwise.
 Query: Brutus AND Caesar BUT NOT Calpurnia

Incidence vectors (Sec. 1.1)
 So we have a 0/1 vector for each term.
 To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented), then bitwise AND them.
 110100 AND 110111 AND 101111 = 100100
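
As a quick illustration (not from the slides), here is the same computation in Python, treating each term's row of the incidence matrix above as an integer bit vector:

  # Incidence rows from the matrix above, one bit per play,
  # in the column order shown (Antony and Cleopatra first).
  brutus    = 0b110100
  caesar    = 0b110111
  calpurnia = 0b010000

  # Brutus AND Caesar AND NOT Calpurnia:
  # complement Calpurnia's bits, then bitwise-AND all three.
  mask = (1 << 6) - 1                  # keep only the 6 play bits
  result = brutus & caesar & (~calpurnia & mask)
  print(format(result, "06b"))         # -> 100100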

Answers to query (Sec. 1.1)
 Antony and Cleopatra, Act III, Scene ii
   Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
   When Antony found Julius Caesar dead,
   He cried almost to roaring; and he wept
   When at Philippi he found Brutus slain.
 Hamlet, Act III, Scene ii
   Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.

Basic assumptions of Information Retrieval (Sec. 1.1)
 Collection: a fixed set of documents
 Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task

The classic search model

 Task: get rid of mice in a politically correct way
   (misconception?)
 Info need: info about removing mice without killing them
   (mistranslation?)
 Verbal form: "How do I trap mice alive?"
   (misformulation?)
 Query: mouse trap

 The query is sent to the search engine, which runs it over the corpus and returns results; examining the results may lead to query refinement and another round.

How good are the retrieved docs? (Sec. 1.1)
 Precision: fraction of retrieved docs that are relevant to the user's information need
 Recall: fraction of relevant docs in the collection that are retrieved
 More precise definitions and measurements to follow in later lectures

Bigger collections (Sec. 1.1)
 Consider N = 1 million documents, each with about 1000 words.
 At an average of 6 bytes/word including spaces/punctuation, that is 6 GB of data in the documents.
 Say there are M = 500K distinct terms among these.

Can't build the matrix (Sec. 1.1)
 A 500K x 1M matrix has half a trillion 0's and 1's.
 But it has no more than one billion 1's, since 1M documents of 1000 words each contain at most 10^9 term occurrences.
 So the matrix is extremely sparse.
 What's a better representation? We only record the 1 positions.

Inverted index (Sec. 1.2)
 For each term t, we must store a list of all documents that contain t.
 Identify each document by a docID, a document serial number.
 Can we use fixed-size arrays for this?

 Brutus    -> 1 2 4 11 31 45 173 174
 Caesar    -> 1 2 4 5 6 16 57 132
 Calpurnia -> 2 31 54 101

 What happens if the word Caesar is added to document 14?

Inverted index (Sec. 1.2)
 We need variable-size postings lists.
 On disk, a continuous run of postings is normal and best.
 In memory, we can use linked lists or variable-length arrays; there are tradeoffs in size and ease of insertion.
 Each docID in a list is called a posting.

 Brutus    -> 1 2 4 11 31 45 173 174
 Caesar    -> 1 2 4 5 6 16 57 132
 Calpurnia -> 2 31 54 101

 The terms form the dictionary; the docID lists form the postings. Postings are sorted by docID (more later on why).
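
A minimal in-memory sketch of this structure in Python (illustrative, not from the slides): a dictionary maps each term to a sorted, variable-length list of docIDs, which sidesteps the fixed-size-array problem from the previous slide.

  import bisect

  index = {
      "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
      "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
      "Calpurnia": [2, 31, 54, 101],
  }

  def add_posting(index, term, doc_id):
      """Insert doc_id into term's postings list, keeping it sorted."""
      postings = index.setdefault(term, [])
      i = bisect.bisect_left(postings, doc_id)
      if i == len(postings) or postings[i] != doc_id:   # avoid duplicates
          postings.insert(i, doc_id)

  # The word Caesar is added to document 14:
  add_posting(index, "Caesar", 14)
  print(index["Caesar"])   # [1, 2, 4, 5, 6, 14, 16, 57, 132]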

Inverted index construction (Sec. 1.2)

 Documents to be indexed:  Friends, Romans, countrymen.
     (Tokenizer)
 Token stream:             Friends  Romans  Countrymen
     (Linguistic modules)
 Modified tokens:          friend  roman  countryman
     (Indexer)
 Inverted index:           friend     -> 2 4
                           roman      -> 1 2
                           countryman -> 13 16

Indexer steps: Token sequence (Sec. 1.2)
 Sequence of (modified token, document ID) pairs.

 Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
 Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

Indexer steps: Sort


 Sort by terms
 And then docID

Core indexing step



Indexer steps: Dictionary & Postings (Sec. 1.2)
 Multiple entries of a term within a single document are merged.
 Split the result into a Dictionary and Postings.
 Document frequency information is added.
 Why frequency? Will discuss later.
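
Putting the three indexer steps together, a toy end-to-end indexer in Python (a sketch under simplifying assumptions: lowercasing and a crude regex tokenizer stand in for the linguistic modules):

  import re
  from collections import defaultdict

  def build_index(docs):
      """docs: {docID: text}. Returns (postings, doc_freq), where postings
      maps term -> sorted list of docIDs and doc_freq maps term -> list length."""
      # Step 1: token sequence -- (modified token, docID) pairs.
      pairs = []
      for doc_id, text in docs.items():
          for token in re.findall(r"[a-z']+", text.lower()):
              pairs.append((token, doc_id))
      # Step 2: sort by term, then by docID.
      pairs.sort()
      # Step 3: merge duplicate (term, docID) pairs; record document frequency.
      postings = defaultdict(list)
      for term, doc_id in pairs:
          if not postings[term] or postings[term][-1] != doc_id:
              postings[term].append(doc_id)
      doc_freq = {term: len(p) for term, p in postings.items()}
      return dict(postings), doc_freq

  docs = {
      1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
      2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
  }
  postings, doc_freq = build_index(docs)
  print(postings["caesar"], doc_freq["caesar"])   # [1, 2] 2
  print(postings["brutus"], doc_freq["brutus"])   # [1, 2] 2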

Where do we pay in storage? (Sec. 1.2)
 Postings: the lists of docIDs.
 Dictionary: the terms and their counts, plus pointers into the postings.
 Later in the course: how do we index efficiently? How much storage do we need?

The index we just built (Sec. 1.3)
 How do we process a query? (Today's focus)
 Later: what kinds of queries can we process?

Query processing: AND (Sec. 1.3)
 Consider processing the query: Brutus AND Caesar
   Locate Brutus in the Dictionary; retrieve its postings.
   Locate Caesar in the Dictionary; retrieve its postings.
   "Merge" (intersect) the two postings lists:

 Brutus -> 2 4 8 16 32 64 128
 Caesar -> 1 2 3 5 8 13 21 34

The merge (Sec. 1.3)
 Walk through the two postings lists simultaneously, in time linear in the total number of postings entries:

 Brutus -> 2 4 8 16 32 64 128
 Caesar -> 1 2 3 5 8 13 21 34
 Result -> 2 8

 If the list lengths are x and y, the merge takes O(x+y) operations.
 Crucial: postings are sorted by docID.

Intersecting two postings lists (a "merge" algorithm)
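
The original slide presents this as pseudocode; below is one rendering of the two-pointer intersection in Python, assuming the postings are sorted lists of docIDs:

  def intersect(p1, p2):
      """Two-pointer merge of sorted postings lists; O(len(p1) + len(p2))."""
      answer = []
      i = j = 0
      while i < len(p1) and j < len(p2):
          if p1[i] == p2[j]:          # docID present in both lists: keep it
              answer.append(p1[i])
              i += 1
              j += 1
          elif p1[i] < p2[j]:         # advance the pointer at the smaller docID
              i += 1
          else:
              j += 1
      return answer

  brutus = [2, 4, 8, 16, 32, 64, 128]
  caesar = [1, 2, 3, 5, 8, 13, 21, 34]
  print(intersect(brutus, caesar))    # [2, 8]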

Boolean queries: Exact match (Sec. 1.3)
 The Boolean retrieval model lets you ask any query that is a Boolean expression:
   Boolean queries use AND, OR and NOT to join query terms.
   It views each document as a set of words.
   It is precise: a document either matches the condition or it does not.
 Perhaps the simplest model to build an IR system on.
 Primary commercial retrieval tool for 3 decades.
 Many search systems you still use are Boolean: email, library catalogs, Mac OS X Spotlight.

Example: WestLaw (Sec. 1.4)  http://www.westlaw.com/
 Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992)
 Tens of terabytes of data; 700,000 users
 Majority of users still use Boolean queries
 Example query:
   What is the statute of limitations in cases involving the federal tort claims act?
   LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
   (/3 = within 3 words, /S = in same sentence)

Example: WestLaw (Sec. 1.4)  http://www.westlaw.com/
 Another example query:
   Requirements for disabled people to be able to access a workplace
   disabl! /p access! /s work-site work-place (employment /3 place)
 Note that SPACE is disjunction, not conjunction!
 Long, precise queries; proximity operators; incrementally developed; not like web search
 Many professional searchers still like Boolean search:
   You know exactly what you are getting.
   But that doesn't mean it actually works better...

Boolean queries: More general merges (Sec. 1.3)
 Exercise: Adapt the merge for the queries:
   Brutus AND NOT Caesar
   Brutus OR NOT Caesar
 Can we still run through the merge in time O(x+y)? What can we achieve?
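
For the first query, one possible adaptation (a sketch, not the slides' official answer): keep a docID from Brutus's list only when it is absent from Caesar's, still in a single O(x+y) pass.

  def and_not(p1, p2):
      """Postings of docs in p1 but not in p2 (both sorted); O(x+y)."""
      answer = []
      i = j = 0
      while i < len(p1):
          if j == len(p2) or p1[i] < p2[j]:   # p1[i] cannot occur in p2
              answer.append(p1[i])
              i += 1
          elif p1[i] == p2[j]:                # excluded docID: skip it
              i += 1
              j += 1
          else:                               # p2[j] < p1[i]: advance j
              j += 1
      return answer

  print(and_not([2, 4, 8, 16], [1, 2, 3, 8, 13]))   # [4, 16]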

Merging (Sec. 1.3)
 What about an arbitrary Boolean formula?
   (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
 Can we always merge in "linear" time? Linear in what?
 Can we do better?

Query optimization (Sec. 1.3)
 What is the best order for query processing?
 Consider a query that is an AND of n terms.
 For each of the n terms, get its postings, then AND them together.

 Brutus    -> 2 4 8 16 32 64 128
 Caesar    -> 1 2 3 5 8 16 21 34
 Calpurnia -> 13 16

 Query: Brutus AND Calpurnia AND Caesar

Query optimization example (Sec. 1.3)
 Process in order of increasing freq: start with the smallest set, then keep cutting further.
 This is why we kept document freq. in the dictionary.

 Brutus    -> 2 4 8 16 32 64 128
 Caesar    -> 1 2 3 5 8 16 21 34
 Calpurnia -> 13 16

 Execute the query as (Calpurnia AND Brutus) AND Caesar.
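
A sketch of this heuristic in Python (assuming the intersect function from the merge slide and postings/doc_freq structures like those built earlier; the names here are illustrative):

  def process_and_query(terms, postings, doc_freq):
      """AND together the postings of all terms, rarest term first."""
      # Sort terms by document frequency so intermediate results stay small.
      ordered = sorted(terms, key=lambda t: doc_freq.get(t, 0))
      result = postings.get(ordered[0], [])
      for term in ordered[1:]:
          if not result:            # intermediate result already empty: done
              break
          result = intersect(result, postings.get(term, []))
      return result

  postings = {
      "brutus":    [2, 4, 8, 16, 32, 64, 128],
      "caesar":    [1, 2, 3, 5, 8, 16, 21, 34],
      "calpurnia": [13, 16],
  }
  doc_freq = {t: len(p) for t, p in postings.items()}
  print(process_and_query(["brutus", "calpurnia", "caesar"], postings, doc_freq))  # [16]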

More general optimization (Sec. 1.3)
 e.g., (madding OR crowd) AND (ignoble OR strife)
 Get doc. freq.'s for all terms.
 Estimate the size of each OR by the sum of its doc. freq.'s (conservative).
 Process in increasing order of OR sizes.

Exercise
 Recommend a query processing order for:
   (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)

 Term          Freq
 eyes          213312
 kaleidoscope   87009
 marmalade     107913
 skies         271658
 tangerine      46653
 trees         316812
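
One way to work it (not given on the slide), applying the sum-of-frequencies estimate from the previous slide:

  doc_freq = {"eyes": 213312, "kaleidoscope": 87009, "marmalade": 107913,
              "skies": 271658, "tangerine": 46653, "trees": 316812}

  groups = [("tangerine", "trees"), ("marmalade", "skies"), ("kaleidoscope", "eyes")]

  # Conservative estimate of each OR: the sum of its doc frequencies.
  estimates = {g: sum(doc_freq[t] for t in g) for g in groups}
  # (tangerine, trees): 363465, (marmalade, skies): 379571,
  # (kaleidoscope, eyes): 300321

  # Process in increasing order of estimated size:
  order = sorted(groups, key=estimates.get)
  # -> (kaleidoscope OR eyes), then (tangerine OR trees), then (marmalade OR skies)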

Query processing exercises
 Exercise: If the query is friends AND romans AND (NOT countrymen), how could we use the freq of countrymen?
 Exercise: Extend the merge to an arbitrary Boolean query. Can we always guarantee execution in time linear in the total postings size?
   Hint: Begin with the case of a Boolean formula query: in this, each query term appears only once in the query.

Exercise
 Try the search feature at http://www.rhymezone.com/shakespeare/
 Write down five search features you think it could do better

What's ahead in IR? Beyond term search
 What about phrases? E.g., Stanford University
 Proximity: find Gates NEAR Microsoft. The index needs to capture position information in docs.
 Zones in documents: find documents with (author = Ullman) AND (text contains automata).

Evidence accumulation
 1 vs. 0 occurrences of a search term
 2 vs. 1 occurrence
 3 vs. 2 occurrences, etc.
 Usually more seems better
 Need term frequency information in docs

Ranking search results
 Boolean queries just give inclusion or exclusion of docs.
 Often we want to rank/group results:
   Need to measure proximity from query to each doc.
   Need to decide whether docs presented to the user are singletons, or a group of docs covering various aspects of the query.

IR vs. databases: Structured vs. unstructured data
 Structured data tends to refer to information in "tables".

 Employee   Manager   Salary
 Smith      Jones     50000
 Chang      Smith     60000
 Ivy        Smith     50000

 Typically allows numerical range and exact match (for text) queries, e.g., Salary < 60000 AND Manager = Smith.

Unstructured data
 Typically refers to free text
 Allows:
   Keyword queries, including operators
   More sophisticated "concept" queries, e.g., find all web pages dealing with drug abuse
 Classic model for searching text documents

Semi-structured data
 In fact almost no data is "unstructured".
 E.g., this slide has distinctly identified zones such as the Title and Bullets.
 This facilitates "semi-structured" search such as: Title contains data AND Bullets contain search
 ... to say nothing of linguistic structure

More sophisticated semi-structured search
 Title is about Object Oriented Programming AND Author something like stro*rup
   (where * is the wild-card operator)
 Issues: how do you process "about"? How do you rank results?
 The focus of XML search (IIR chapter 10)

Clustering, classification and ranking
 Clustering: given a set of docs, group them into clusters based on their contents.
 Classification: given a set of topics, plus a new doc D, decide which topic(s) D belongs to.
 Ranking: can we learn how to best order a set of documents, e.g., a set of search results?

The web and its challenges
 Unusual and diverse documents
 Unusual and diverse users, queries, information needs
 Beyond terms, exploit ideas from social networks: link analysis, clickstreams, ...
 How do search engines work? And how can we make them better?

More sophisticated information retrieval
 Cross-language information retrieval
 Question answering
 Summarization
 Text mining
 ...

Course details
 Course URL: cs276.stanford.edu [a.k.a. http://www.stanford.edu/class/cs276/]
 Work/Grading:
   Problem sets (2): 20%
   Practical exercises (2): 10% + 20% = 30%
   Midterm: 20%
   Final: 30%
 Textbook: Introduction to Information Retrieval
   In the bookstore and online (http://informationretrieval.org/)
   We're happy to get comments/corrections/feedback on it!

Course staff
 Professor: Christopher Manning, Gates 158, manning@cs.stanford.edu
 Professor: Prabhakar Raghavan, pragh@yahoo-inc.com
 TAs: Andrey Guev, Shakti Sinha, Roshan Sumbaly
 In general, don't use the above addresses, but:
   Newsgroup: su.class.cs276 [preferred]
   cs276-aut0910-staff@lists.stanford.edu

Resources for today's lecture
 Introduction to Information Retrieval, chapter 1
 Shakespeare: http://www.rhymezone.com/shakespeare/
   Try the neat browse-by-keyword-sequence feature!
 Managing Gigabytes, chapter 3.2
 Modern Information Retrieval, chapter 8.2

Any questions?
