Introduction to Information Retrieval
Introducing Information Retrieval and Web Search

Information Retrieval
Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).
These days we frequently think first of web search, but there are many other cases:
  E-mail search
  Searching your laptop
  Corporate knowledge bases
  Legal information retrieval
Unstructured (text) vs. structured (database) data in the mid-nineties [chart]
Unstructured (text) vs. structured (database) data today [chart]
Sec. 1.1
Basic assumptions of Information Retrieval
Collection: A set of documents
  Assume it is a static collection for the moment
Goal: Retrieve documents with information that is relevant to the user's information need and helps the user complete a task
The classic search model

User task: Get rid of mice in a politically correct way
  (misconception?)
Info need: Info about removing mice without killing them
  (misformulation?)
Query: "how trap mice alive"
The query goes to the search engine, which searches the collection and returns results; examining the results may lead to query refinement.
Sec. 1.1
How good are the retrieved docs?
Precision: Fraction of retrieved docs that are relevant to the user's information need
Recall: Fraction of relevant docs in collection that are retrieved
More precise definitions and measurements to follow later
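In symbols, a standard restatement of the two fractions above:

\[
\text{Precision} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|}
\qquad
\text{Recall} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|}
\]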
Introduction to Information Retrieval
Term-document incidence matrices
Sec. 1.1
Unstructured data in 1620
Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia.
Why is that not the answer?
  Slow (for large corpora)
  NOT Calpurnia is non-trivial
  Other operations (e.g., find the word Romans near countrymen) not feasible
  Ranked retrieval (best documents to return): later lectures
Sec. 1.1
Term-document incidence matrices

            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony             1                1             0          0       0        1
Brutus             1                1             0          1       0        0
Caesar             1                1             0          1       1        1
Calpurnia          0                1             0          0       0        0
Cleopatra          1                0             0          0       0        0
mercy              1                0             1          1       1        1
worser             1                0             1          1       1        0

1 if play contains word, 0 otherwise.
Query: Brutus AND Caesar BUT NOT Calpurnia
Sec. 1.1
Incidence vectors
So we have a 0/1 vector for each term.
To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented) and bitwise AND them:
110100 AND
110111 AND
101111 =
100100
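A minimal Python sketch of this Boolean query over incidence vectors, treating each term's row as an integer bit string (the vectors above; the variable names and layout are illustrative, not from the deck):

    # Incidence vectors over the six plays, as bit strings (from the matrix above).
    vectors = {
        "Brutus":    0b110100,
        "Caesar":    0b110111,
        "Calpurnia": 0b010000,
    }

    ALL_ONES = 0b111111  # mask covering the six plays

    # Brutus AND Caesar AND NOT Calpurnia, as bitwise operations on integers.
    result = vectors["Brutus"] & vectors["Caesar"] & (ALL_ONES & ~vectors["Calpurnia"])
    print(format(result, "06b"))  # 100100 -> Antony and Cleopatra, Hamlet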
Sec. 1.1
Answers to query
Antony and Cleopatra, Act III, Scene ii
  Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
  When Antony found Julius Caesar dead,
  He cried almost to roaring; and he wept
  When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
  Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Sec. 1.1
Bigger collections
Consider N = 1 million documents, each with about 1000 words.
Avg 6 bytes/word including spaces/punctuation
  => 6 GB of data in the documents (10^6 docs x 10^3 words x 6 bytes = 6 x 10^9 bytes).
Say there are M = 500K distinct terms among these.
Sec. 1.1
Can't build the matrix
500K x 1M matrix has half-a-trillion 0s and 1s.
But it has no more than one billion 1s. Why? The collection contains at most 10^6 docs x 1000 words = 10^9 word occurrences in total.
  => matrix is extremely sparse.
What's a better representation?
  We only record the 1 positions.
Introduction to Information Retrieval
The Inverted Index: the key data structure underlying modern IR
Sec. 1.2
Inverted index
For each term t, we must store a list of all documents that contain t.
  Identify each doc by a docID, a document serial number
Can we use fixed-size arrays for this?

Brutus    -> 1 2 4 11 31 45 173 174
Caesar    -> 1 2 4 5 6 16 57 132
Calpurnia -> 2 31 54 101

What happens if the word Caesar is added to document 14?
Sec. 1.2
Inverted index
We need variable-size postings lists
  On disk, a continuous run of postings is normal and best
  In memory, can use linked lists or variable length arrays
    Some tradeoffs in size/ease of insertion

Dictionary          Postings (each entry is a posting)
Brutus    -> 1 2 4 11 31 45 173 174
Caesar    -> 1 2 4 5 6 16 57 132
Calpurnia -> 2 31 54 101

Sorted by docID (more later on why).
Sec. 1.2
Inverted index construction

Documents to be indexed: Friends, Romans, countrymen.
        | Tokenizer
Token stream: Friends  Romans  Countrymen
        | Linguistic modules
Modified tokens: friend  roman  countryman
        | Indexer
Inverted index:
  friend     -> 2 4
  roman      -> 1 2
  countryman -> 13 16
Initial stages of text processing
Tokenization
  Cut character sequence into word tokens
  Deal with "John's", a state-of-the-art solution
Normalization
  Map text and query term to same form
  You want U.S.A. and USA to match
Stemming
  We may wish different forms of a root to match
  authorize, authorization
Stop words
  We may omit very common words (or not)
  the, a, to, of
(A small sketch of these stages follows below.)
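A minimal Python sketch of these stages, with a deliberately naive stemmer and stop list standing in for real linguistic modules (all names here are illustrative, not from the deck):

    import re

    STOP_WORDS = {"the", "a", "to", "of"}  # tiny illustrative stop list

    def tokenize(text):
        """Cut a character sequence into word tokens."""
        return re.findall(r"[A-Za-z']+", text)

    def normalize(token):
        """Map tokens to a canonical form (here: lowercase, drop periods/apostrophes)."""
        return token.lower().replace(".", "").replace("'", "")

    def stem(token):
        """Very naive suffix stripping; real systems use e.g. a Porter stemmer."""
        for suffix in ("ization", "ize", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                return token[: -len(suffix)]
        return token

    def preprocess(text):
        tokens = (normalize(t) for t in tokenize(text))
        return [stem(t) for t in tokens if t not in STOP_WORDS]

    print(preprocess("Friends, Romans, countrymen."))
    # ['friend', 'roman', 'countrymen']  (a real module would map countrymen -> countryman)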
Indexer steps: Token sequence
Sequence of (Modified token, Document ID) pairs.

Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
Sec. 1.2
Indexer steps: Sort
Sort by terms, and then by docID (core indexing step).
Sec. 1.2
Indexer steps: Dictionary & Postings
Multiple term entries in a single document are merged.
Split into Dictionary and Postings.
Doc. frequency information is added.
  Why frequency? Will discuss later. (A sketch of all three steps follows below.)
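A compact Python sketch of the three indexer steps: build the (token, docID) sequence, sort it, then merge duplicates into dictionary + postings with document frequency. Function and variable names are illustrative:

    from collections import defaultdict

    def build_index(docs):
        """docs: dict of docID -> preprocessed token list.
        Returns dict of term -> (doc_freq, sorted postings list)."""
        # Step 1: sequence of (modified token, docID) pairs
        pairs = [(term, doc_id) for doc_id, tokens in docs.items() for term in tokens]
        # Step 2: sort by term, then by docID
        pairs.sort()
        # Step 3: merge duplicate pairs, split into dictionary and postings, record doc freq
        postings = defaultdict(list)
        for term, doc_id in pairs:
            if not postings[term] or postings[term][-1] != doc_id:
                postings[term].append(doc_id)  # skip repeated (term, docID) pairs
        return {term: (len(plist), plist) for term, plist in postings.items()}

    docs = {
        1: ["i", "did", "enact", "julius", "caesar", "i", "was", "killed",
            "i", "the", "capitol", "brutus", "killed", "me"],
        2: ["so", "let", "it", "be", "with", "caesar", "the", "noble",
            "brutus", "hath", "told", "you", "caesar", "was", "ambitious"],
    }
    index = build_index(docs)
    print(index["brutus"])  # (2, [1, 2]): doc freq 2, appears in docs 1 and 2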
Sec. 1.2
Where do we pay in storage?
  Dictionary: terms and counts, with pointers to the postings lists
  Postings: lists of docIDs
IR system implementation:
  How do we index efficiently?
  How much storage do we need?
Introduction to Information Retrieval
Query processing with an inverted index
Sec. 1.3
The index we just built
How do we process a query? (our focus)
Later: what kinds of queries can we process?
Sec. 1.3
Query processing: AND
Consider processing the query: Brutus AND Caesar
  Locate Brutus in the Dictionary; retrieve its postings.
  Locate Caesar in the Dictionary; retrieve its postings.
  "Merge" the two postings (intersect the document sets):

Brutus -> 2 4 8 16 32 64 128
Caesar -> 1 2 3 5 8 13 21 34
Sec. 1.3
The merge
Walk through the two postings simultaneously, in time linear in the total number of postings entries:

Brutus -> 2 4 8 16 32 64 128
Caesar -> 1 2 3 5 8 13 21 34
Intersection -> 2 8

If the list lengths are x and y, the merge takes O(x+y) operations.
Crucial: postings sorted by docID.
Intersecting two postings lists (a merge algorithm)
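A minimal Python rendering of this linear merge, assuming postings are sorted lists of docIDs:

    def intersect(p1, p2):
        """Intersect two sorted postings lists in O(len(p1) + len(p2)) time."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:
                answer.append(p1[i])   # docID present in both lists
                i += 1
                j += 1
            elif p1[i] < p2[j]:
                i += 1                 # advance the pointer with the smaller docID
            else:
                j += 1
        return answer

    # Brutus AND Caesar, with the postings from the previous slide:
    print(intersect([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]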
Introduction to Information Retrieval
The Boolean Retrieval Model & Extended Boolean Models
Sec. 1.3
Boolean queries: Exact match
The Boolean retrieval model lets you ask any query that is a Boolean expression:
  Boolean queries are queries using AND, OR and NOT to join query terms
  Views each document as a set of words
  Is precise: document matches condition or not.
  Perhaps the simplest model to build an IR system on
Primary commercial retrieval tool for 3 decades.
Many search systems you still use are Boolean: Email, library catalog, Mac OS X Spotlight
Sec. 1.4
Example: WestLaw   http://www.westlaw.com/
Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992; new federated search added 2010)
Tens of terabytes of data; ~700,000 users
Majority of users still use Boolean queries
Example query:
  What is the statute of limitations in cases involving the federal tort claims act?
  LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
  (/3 = within 3 words, /S = in same sentence)
Sec. 1.4
Example: WestLaw   http://www.westlaw.com/
Another example query:
  Requirements for disabled people to be able to access a workplace
  disabl! /p access! /s work-site work-place (employment /3 place)
Note that SPACE is disjunction, not conjunction!
Long, precise queries; proximity operators; incrementally developed; not like web search
Many professional searchers still like Boolean search
  You know exactly what you are getting
  But that doesn't mean it actually works better.
Sec. 1.3
Boolean queries: More general merges
Exercise: Adapt the merge for the queries:
  Brutus AND NOT Caesar
  Brutus OR NOT Caesar
Can we still run through the merge in time O(x+y)? What can we achieve?
Sec. 1.3
Merging
What about an arbitrary Boolean formula?
  (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
Can we always merge in linear time?
  Linear in what?
Can we do better?
Sec. 1.3
Query optimization
What is the best order for query processing?
Consider a query that is an AND of n terms.
For each of the n terms, get its postings, then AND them together.

Brutus    -> 2 4 8 16 32 64 128
Caesar    -> 1 2 3 5 8 16 21 34
Calpurnia -> 13 16

Query: Brutus AND Calpurnia AND Caesar
Sec. 1.3
Query optimization example
Process in order of increasing freq: start with smallest set, then keep cutting further.
(This is why we kept document freq. in the dictionary.)

Brutus    -> 2 4 8 16 32 64 128
Caesar    -> 1 2 3 5 8 16 21 34
Calpurnia -> 13 16

Execute the query as (Calpurnia AND Brutus) AND Caesar.
Sec. 1.3
More general optimization
e.g., (madding OR crowd) AND (ignoble OR strife)
Get doc. freq.'s for all terms.
Estimate the size of each OR by the sum of its doc. freq.'s (conservative).
Process in increasing order of OR sizes. (A sketch of the ordering heuristic follows below.)
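A small Python sketch of the ordering heuristic for an AND of n terms, reusing the intersect() function from the merge sketch above (the early-exit check is an extra touch, not from the deck):

    def intersect_many(postings_lists):
        """AND together n postings lists, processing the smallest lists first."""
        ordered = sorted(postings_lists, key=len)  # increasing (estimated) size
        result = ordered[0]
        for plist in ordered[1:]:
            result = intersect(result, plist)
            if not result:      # early exit: intersection already empty
                break
        return result

    # Brutus AND Calpurnia AND Caesar, smallest list (Calpurnia) first:
    print(intersect_many([
        [2, 4, 8, 16, 32, 64, 128],        # Brutus
        [1, 2, 3, 5, 8, 16, 21, 34],       # Caesar
        [13, 16],                          # Calpurnia
    ]))  # [16]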
Exercise
Recommend a query processing order for:
  (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
Which two terms should we process first?

Term          Freq
eyes          213312
kaleidoscope   87009
marmalade     107913
skies         271658
tangerine      46653
trees         316812
Query processing exercises
Exercise: If the query is friends AND romans AND (NOT countrymen), how could we use the freq of countrymen?
Exercise: Extend the merge to an arbitrary Boolean query. Can we always guarantee execution in time linear in the total postings size?
  Hint: Begin with the case of a Boolean formula query: in this, each query term appears only once in the query.
Exercise
Try the search feature at http://www.rhymezone.com/shakespeare/
Write down five search features you think it could do better
Introduction to Information Retrieval
Phrase queries and positional indexes
Sec. 2.4
Phrase queries
We want to be able to answer queries such as "stanford university" as a phrase
  Thus the sentence "I went to university at Stanford" is not a match.
The concept of phrase queries has proven easily understood by users; one of the few "advanced search" ideas that works
Many more queries are implicit phrase queries
For this, it no longer suffices to store only <term : docs> entries
Sec. 2.4.1
A first attempt: Biword indexes
Index every consecutive pair of terms in the text as a phrase
For example the text "Friends, Romans, Countrymen" would generate the biwords
  friends romans
  romans countrymen
Each of these biwords is now a dictionary term
Two-word phrase query-processing is now immediate (see the sketch below).
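A minimal Python sketch of biword generation, and of decomposing a longer phrase query into a Boolean AND of biwords (names are illustrative, not from the deck):

    def biwords(tokens):
        """Index every consecutive pair of terms as a single dictionary term."""
        return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

    print(biwords(["friends", "romans", "countrymen"]))
    # ['friends romans', 'romans countrymen']

    def phrase_to_biword_query(phrase):
        """Break a longer phrase into a Boolean AND over its biwords.
        Note: matching all biwords does NOT guarantee the full phrase occurs
        (false positives are possible, as the next slide explains)."""
        return " AND ".join(f'"{bw}"' for bw in biwords(phrase.split()))

    print(phrase_to_biword_query("stanford university palo alto"))
    # "stanford university" AND "university palo" AND "palo alto"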
Sec. 2.4.1
Longer phrase queries
Longer phrases can be processed by breaking them down
  "stanford university palo alto" can be broken into the Boolean query on biwords:
  "stanford university" AND "university palo" AND "palo alto"
Without the docs, we cannot verify that the docs matching the above Boolean query do contain the phrase.
  Can have false positives!
Sec. 2.4.1
Issues for biword indexes
False positives, as noted before
Index blowup due to bigger dictionary
  Infeasible for more than biwords, big even for them
Biword indexes are not the standard solution (for all biwords) but can be part of a compound strategy
Solution 2: Positional indexes
Sec. 2.4.2
In the postings, store, for each term, the position(s) in which tokens of it appear:
<term, number of docs containing term;
 doc1: position1, position2 ... ;
 doc2: position1, position2 ... ;
 etc.>
Sec. 2.4.2
Positional index example
<be: 993427;
 1: 7, 18, 33, 72, 86, 231;
 2: 3, 149;
 4: 17, 191, 291, 430, 434;
 5: 363, 367, ...>
Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
For phrase queries, we use a merge algorithm recursively at the document level
But we now need to deal with more than just equality
Sec. 2.4.2
Processing a phrase query
Extract inverted index entries for each distinct term: to, be, or, not.
Merge their doc:position lists to enumerate all positions with "to be or not to be".
  to:  2:1,17,74,222,551; 4:8,16,190,429,433; 7:13,23,191; ...
  be:  1:17,19; 4:17,191,291,430,434; 5:14,19,101; ...
Same general method for proximity searches (a sketch of the positional merge follows below).
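A minimal Python sketch of the positional merge for a two-term phrase, checking adjacency rather than plain equality. It is simplified to position lists within one document; the book's Figure 2.12 handles the general within-k case:

    def phrase_positions(pos1, pos2):
        """Positions (of term2) where term2 immediately follows term1,
        given the two terms' sorted position lists within the same document."""
        matches = []
        i = j = 0
        while i < len(pos1) and j < len(pos2):
            if pos2[j] == pos1[i] + 1:
                matches.append(pos2[j])   # adjacent occurrence found
                i += 1
                j += 1
            elif pos2[j] <= pos1[i]:
                j += 1
            else:
                i += 1
        return matches

    # "to be" inside doc 4, with the position lists from this slide:
    to_doc4 = [8, 16, 190, 429, 433]
    be_doc4 = [17, 191, 291, 430, 434]
    print(phrase_positions(to_doc4, be_doc4))  # [17, 191, 430, 434]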
Sec. 2.4.2
Proximity queries
LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
  Again, here, /k means "within k words of".
Clearly, positional indexes can be used for such queries; biword indexes cannot.
Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
  This is a little tricky to do correctly and efficiently
  See Figure 2.12 of IIR
Sec. 2.4.2
Positional index size
A positional index expands postings storage substantially
  Even though indices can be compressed
Nevertheless, a positional index is now standardly used because of the power and usefulness of phrase and proximity queries, whether used explicitly or implicitly in a ranking retrieval system.
Sec. 2.4.2
Positional index size
Need an entry for each occurrence, not just once per document
Index size depends on average document size (Why?)
  Average web page has <1000 terms
  SEC filings, books, even some epic poems easily 100,000 terms
Consider a term with frequency 0.1%:

Document size   Postings   Positional postings
1000            1          1
100,000         1          100
Sec. 2.4.2
Rules of thumb
A positional index is 2-4 times as large as a non-positional index
Positional index size is 35-50% of the volume of the original text
Caveat: all of this holds for English-like languages
Sec. 2.4.3
Combination schemes
These two approaches can be profitably combined
For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists
  Even more so for phrases like "The Who"
Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme
  A typical web query mixture was executed in 1/4 of the time of using just a positional index
  It required 26% more space than having a positional index alone
Introduction to Information Retrieval
Structured vs. Unstructured Data
IR vs. databases: Structured vs unstructured data
Structured data tends to refer to information in "tables":

Employee   Manager   Salary
Smith      Jones     50000
Chang      Smith     60000
Ivy        Smith     50000

Typically allows numerical range and exact match (for text) queries, e.g.,
  Salary < 60000 AND Manager = Smith.
Unstructured data
Typically refers to free text
Allows
  Keyword queries including operators
  More sophisticated "concept" queries, e.g.,
    find all web pages dealing with drug abuse
Classic model for searching text documents
Semi-structured data
In fact almost no data is "unstructured"
E.g., this slide has distinctly identified zones such as the Title and Bullets
  ... to say nothing of linguistic structure
Facilitates "semi-structured" search such as
  Title contains data AND Bullets contain search
Or even
  Title is about Object Oriented Programming AND Author something like stro*rup
  where * is the wild-card operator