Working Of Search Engine
Why Search Engine?
Finding key information on the gigantic World Wide Web is like finding a needle lost in a haystack. For that job we would use a special magnet that automatically, quickly and effortlessly attracts the needle for us. In this scenario, the magnet is the “Search Engine”.
“Even a blind squirrel finds a nut, occasionally.” But few of us are determined enough to search through millions, or billions, of pages of information to find our “nut.” So, to reduce the problem to a more or less manageable size, web “search engines” were introduced a few years ago.
Search Engine
A software program that searches a database and gathers and reports information that contains or is related to specified terms; or, a website whose primary function is to provide such a search facility for information available on the Internet or a portion of it.
Eight reasonably well-known Web search engines (shown as logos on the original slide).
Top 10 Search Providers by Searches, August 2007
(The provider names appeared as logos on the original slide and were not extracted.)

Provider      Searches (000)   Share of Total Searches (%)
(1)           4,199,495        53.6
(2)           1,561,903        19.9
(3)           1,011,398        12.9
(4)           435,088          5.6
(5)           136,853          1.7
(6)           71,724           0.9
(7)           37,762           0.5
(8)           34,699           0.4
(9)           32,483           0.4
(10)          31,912           0.4
Other         275,812          3.5
All Search    7,829,129        100.0

Source: Nielsen//NetRatings, 2007
Search Engine History
1990 - The first search engine, Archie, was released. There was no World Wide Web at the time. Data resided on defense contractor, university, and government computers, and techies were the only people accessing it. The computers were interconnected by Telenet, and the File Transfer Protocol (FTP) was used to move files from computer to computer. There was no such thing as a browser; files were transferred in their native format and viewed with the software associated with each file type. Archie searched FTP servers and indexed their files into a searchable directory.
1991 - Gopherspace came into existence with the advent of Gopher. Gopher cataloged FTP sites, and the resulting catalog became known as Gopherspace. 1994 - WebCrawler, a new type of search engine that indexed the entire content of a web page, was introduced. Telenet/FTP now passed information among the new web browsers, which accessed not FTP sites but WWW sites. Webmasters and web site owners began submitting sites for inclusion in the growing number of web directories.
1995 - Meta tags in the web page were first used by some search engines to determine relevancy. 1997 - Search engine rank-checking software was introduced, providing an automated tool to determine a web site's position and ranking within the major search engines. 1998 - Search engine algorithms began incorporating esoteric information into their rankings, e.g. the number of links pointing to a web site as a measure of its “link popularity.” Another approach ranked a web site by the number of clicks (visitors) it received for a relevant keyword or phrase.
2000 - Marketers determined that pay-per-click campaigns were an easy yet expensive way to gain top search rankings. To elevate sites in the search engine rankings, web sites started adding useful and relevant content while optimizing their pages for each specific search engine.
Stages in Information Retrieval
Finding documents: Potentially interesting documents must be found on a Web that consists of millions of documents distributed over tens of thousands of servers.
Formulating queries: The user needs to express exactly what kind of information is to be retrieved.
Determining relevance: The system must determine whether a document contains the required information or not.
Types of Search Engine
On the basis of how they work, search engines are categorized into the following groups:
Crawler-Based Search Engines
Directories
Hybrid Search Engines
Meta Search Engines
Crawler-Based Search Engines
These use automated software programs, known as ‘spiders’, ‘crawlers’, ‘robots’ or ‘bots’, to survey and categorize web pages. A spider finds a web page, downloads it and analyses the information presented on it; the page is then added to the search engine’s database. When a user performs a search, the engine checks its database of web pages for the keywords the user searched, and the results (a list of suggested links) are presented in order of which is ‘closest’ (as defined by the bots). Examples of crawler-based search engines are: Google (www.google.com), Ask Jeeves (www.ask.com)
Robot Algorithm
All robots use the following algorithm for retrieving documents from the Web (a code sketch follows the list):
1. The algorithm uses a list of known URLs. This list contains at least one URL to start with.
2. A URL is taken from the list, and the corresponding document is retrieved from the Web.
3. The document is parsed to retrieve information for the index database and to extract the embedded links to other documents.
4. The URLs of the links found in the document are added to the list of known URLs.
5. If the list is empty or some limit is exceeded (number of documents retrieved, size of the index database, time elapsed since startup, etc.), the algorithm stops; otherwise it continues at step 2.
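A minimal Python sketch of these five steps; fetch_document, parse_for_index and extract_links are hypothetical helpers standing in for the real download and parsing code:

```python
from collections import deque

def crawl(seed_urls, max_docs=1000):
    """The five-step robot algorithm as a worklist loop."""
    known = set(seed_urls)         # every URL ever put on the list
    worklist = deque(seed_urls)    # step 1: list of known URLs
    index = {}                     # the growing index database
    while worklist and len(index) < max_docs:   # step 5: stop conditions
        url = worklist.popleft()                # step 2: take a URL from the list
        document = fetch_document(url)          # step 2: retrieve it (hypothetical)
        if document is None:
            continue
        index[url] = parse_for_index(document)  # step 3: parse for the index (hypothetical)
        for link in extract_links(document):    # step 3: extract embedded links (hypothetical)
            if link not in known:               # step 4: add new URLs to the list
                known.add(link)
                worklist.append(link)
    return index
```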
The crawler treats the World Wide Web as a big graph with pages as nodes and hyperlinks as arcs. It works toward a simple goal: indexing all the keywords in web pages' titles. Three data structures are needed for the crawler (robot) algorithm:
A large linear array, url_table
A heap
A hash table
Url_table: a large linear array containing millions of entries. Each entry contains two pointers - a pointer to the URL and a pointer to the title. These are variable-length strings kept on the heap.
Heap: a large unstructured chunk of virtual memory to which strings can be appended.
Hash table: the third data structure, of size ‘n’ entries. Any URL can be run through a hash function to produce a nonnegative integer less than ‘n’. All URLs that hash to the value ‘k’ are chained together on a linked list starting at entry ‘k’ of the hash table. Every entry placed in url_table is also entered into the hash table. The main use of the hash table is to start with a URL and quickly determine whether it is already present in url_table.
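A toy Python sketch of these three structures; in Python the heap is implicit, since strings are heap-allocated objects, so the "pointers" below are just references:

```python
class CrawlerStore:
    """url_table plus hash table from the slides; Python strings stand in for the heap."""
    def __init__(self, n=1 << 16):
        self.url_table = []                       # each entry: (url, title) string pair
        self.buckets = [[] for _ in range(n)]     # hash table with overflow chains
        self.n = n

    def _hash(self, url):
        return hash(url) % self.n                 # nonnegative integer less than n

    def contains(self, url):
        """Main use: start with a URL, quickly check if it is already in url_table."""
        return any(self.url_table[i][0] == url for i in self.buckets[self._hash(url)])

    def add(self, url, title):
        entry_no = len(self.url_table)
        self.url_table.append((url, title))       # strings conceptually copied to the heap
        self.buckets[self._hash(url)].append(entry_no)  # chain the entry at bucket k
        return entry_no
```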
[Figure: data structures for the crawler - url_table entries hold pointers to URL and title strings stored on the heap (string storage); the hash table (entries 0..n) chains url_table entries with the same hash code on overflow lists.]
Building the index requires two phases: searching (URL processing) and indexing. The heart of the search engine is a recursive procedure, process_url, which takes a URL string as input.
Searching is done by the procedure process_url as follows (sketched below):
It hashes the URL to see if it is already present in url_table. If so, it is done and returns immediately.
If the URL is not already known, its page is fetched.
The URL and title are then copied to the heap, and pointers to these two strings are entered in url_table. The URL is also entered into the hash table.
Finally, process_url extracts all the hyperlinks from the page and calls process_url once per hyperlink, passing the hyperlink's URL as the input parameter.
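A sketch of the recursive procedure as described, reusing the CrawlerStore above; fetch_page, page_title and extract_links are hypothetical helpers:

```python
def process_url(url, store):
    """Recursive searching phase, exactly as the slide describes (depth-first)."""
    if store.contains(url):             # hash lookup: already in url_table, return
        return
    page = fetch_page(url)              # URL not known: fetch its page (hypothetical)
    if page is None:
        return
    store.add(url, page_title(page))    # copy URL and title; update url_table and hash table
    for link in extract_links(page):    # call process_url once per hyperlink
        process_url(link, store)
```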
This design is simple and theoretically correct, but it has a serious problem: the depth-first search is implemented with deep recursion, and the path length is not predictable - it may be thousands of hyperlinks long - causing memory problems such as stack overflow.
Solution: processed URLs are removed from the work list, and breadth-first search is used to limit the path length. To avoid the memory problem, the pages a page points to are not traced immediately in the order they are discovered, but queued and processed later (see the sketch below).
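The fix amounts to replacing the recursion with an explicit FIFO queue; a sketch reworking process_url above, with the same hypothetical helpers:

```python
from collections import deque

def process_urls_bfs(seed_url, store):
    """Breadth-first rewrite of process_url: memory is bounded by the queue, not the call stack."""
    queue = deque([seed_url])
    while queue:
        url = queue.popleft()               # FIFO order makes the traversal breadth-first
        if store.contains(url):
            continue                        # already processed: skip, do not revisit
        page = fetch_page(url)              # hypothetical helper, as before
        if page is None:
            continue
        store.add(url, page_title(page))
        queue.extend(extract_links(page))   # discovered links wait their turn instead of recursing
```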
Keyword Indexing
For each entry in url_table, the indexing procedure examines the title and selects all words not on the stop list. Each selected word is written to a file as a line consisting of the word followed by the current url_table entry number. When the whole table has been scanned, the file is sorted by word. The stop list prevents indexing of prepositions, conjunctions, articles, and other words with many hits and little value. A sketch of this procedure follows.
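A small Python sketch of the indexing phase; the stop list shown is just an illustrative subset:

```python
STOP_LIST = {"a", "an", "and", "but", "in", "of", "on", "or", "the", "to"}  # illustrative subset

def build_index(store, path="index.txt"):
    """Emit one 'word entry_number' line per indexable title word, sorted by word."""
    lines = []
    for entry_no, (url, title) in enumerate(store.url_table):
        for word in title.lower().split():
            if word not in STOP_LIST:          # skip high-hit, low-value words
                lines.append(f"{word} {entry_no}")
    lines.sort()                               # 'the file is sorted by word'
    with open(path, "w") as f:
        f.write("\n".join(lines))
```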
Formulating Queries
Keyword submission causes a POST request to be sent to a CGI script on the machine where the index is located. The CGI script looks up each keyword in the index to find its set of url_table indices. If the user wants the Boolean AND of the keywords, the set intersection is computed; if the Boolean OR is desired, the set union is computed. The script then indexes into url_table to find all the titles and URLs, which are combined into a web page and sent back to the user as the response to the POST.
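A sketch of the lookup step, assuming the sorted index file has been loaded into a dict mapping each word to its list of url_table entry numbers:

```python
def run_query(keywords, mode, index, store):
    """Combine per-keyword entry sets with AND/OR, then resolve entries to (url, title)."""
    posting_sets = [set(index.get(word, ())) for word in keywords]
    if not posting_sets:
        return []
    if mode == "AND":
        hits = set.intersection(*posting_sets)      # Boolean AND: set intersection
    else:
        hits = set.union(*posting_sets)             # Boolean OR: set union
    return [store.url_table[i] for i in sorted(hits)]  # (url, title) pairs for the result page
```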
Determining Relevance
The classic “TF/IDF” algorithm is used for determining relevance. It produces a weight often used in information retrieval and text mining: a statistical measure of how important a word is to a document in a collection. A high tf-idf weight is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents.
Term Frequency
The “term frequency” in the given document is simply the number of times a given term appears in that document. It gives a measure of the importance of the term $t_i$ within the particular document:

$\mathrm{tf}_i = \dfrac{n_i}{\sum_k n_k}$

where $n_i$ is the number of occurrences of the considered term and the denominator is the number of occurrences of all terms.
Term Frequency
The term frequency (TF) is the number of times the word appears in a document divided by the total number of words in the document. For example, if a document contains 100 total words and the word computer appears 3 times, then the term frequency of the word computer in the document is 0.03 (3/100).
Inverse Document Frequency
The “inverse document frequency” is a measure of the general importance of the term, obtained by dividing the number of all documents by the number of documents containing the term, and then taking the logarithm of that quotient:

$\mathrm{idf}_i = \log \dfrac{|D|}{|\{d : t_i \in d\}|}$

where $|D|$ is the total number of documents in the corpus and $|\{d : t_i \in d\}|$ is the number of documents in which the term $t_i$ appears.
Inverse Document Frequency
There are many different formulas used to calculate tf-idf. One way of calculating “document frequency” (DF) is to count how many documents contain the word and divide by the total number of documents in the collection. For example, if the word computer appears in 1,000 documents out of a total of 10,000,000, then the document frequency is 0.0001 (1,000/10,000,000). An alternative is to take the logarithm of the inverse of this ratio; the natural logarithm is commonly used. In this example we would have idf = ln(10,000,000 / 1,000) = 9.21.
Inverse Document Frequency
The final tf-idf score is then calculated by dividing the “term frequency” by the “document frequency”. For our example, the tf-idf score for computer in the collection would be tf-idf = 0.03 / 0.0001 = 300 using the ratio form of document frequency. Using the logarithmic form instead, tf-idf = 0.03 × 9.21 ≈ 0.28.
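A compact Python sketch of the computation, assuming each document is given as a list of words:

```python
import math

def tf(term, document):
    """Term frequency: occurrences of the term divided by total words in the document."""
    return document.count(term) / len(document)

def idf(term, corpus):
    """Inverse document frequency: log of (total documents / documents containing the term)."""
    containing = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / containing) if containing else 0.0

def tf_idf(term, document, corpus):
    """High when the term is frequent here but rare in the collection as a whole."""
    return tf(term, document) * idf(term, corpus)
```

With the slide's numbers (tf = 0.03, a term appearing in 1,000 of 10,000,000 documents), tf_idf returns 0.03 × ln(10,000) ≈ 0.28.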
Directories
A ‘directory’ uses human editors who decide what category a site belongs to; they place websites within specific categories or subcategories in the directory's database. By focusing on particular categories and subcategories, the user can narrow the search to the records most likely to be relevant to his or her interests.
The human editors comprehensively check the website and rank it, based on the information they find, using a pre-defined set of rules. There are two major directories: Yahoo Directory (www.yahoo.com) and Open Directory (www.dmoz.org).
Hybrid Search Engines
Hybrid search engines use a combination of both crawler-based results and directory results. Examples of hybrid search engines are: Yahoo (www.yahoo.com), Google (www.google.com)
Meta Search Engines
Also known as multiple search engines or metacrawlers, meta search engines query several other Web search engines' databases in parallel and then combine the results into one list. Examples of meta search engines include: Metacrawler (www.metacrawler.com), Dogpile (www.dogpile.com). A sketch of the fan-out-and-merge idea follows.
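A hedged Python sketch of the parallel fan-out and merge; search_one is a hypothetical per-engine query function, since each real engine has its own interface:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import zip_longest

def meta_search(query, engines, top_k=10):
    """Query several engines in parallel, then merge their result lists into one."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        result_lists = list(pool.map(lambda e: search_one(e, query, top_k), engines))
    merged, seen = [], set()
    # Round-robin interleave so each engine's top hits appear early; drop duplicates
    for tier in zip_longest(*result_lists):
        for url in tier:
            if url is not None and url not in seen:
                seen.add(url)
                merged.append(url)
    return merged
```

Real metasearchers re-rank rather than simply interleave, but as the cons below note, they cannot see each engine's internal scoring.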
Pros and Cons of Meta Search Engines
Pros:
Easy to use.
Able to search more web pages in less time.
High probability of finding the desired page(s).
Will return at least some results when none were obtained with a traditional search engine.
Cons:
Metasearch results are less relevant, since the metasearcher does not know the internal “alchemy” of each search engine it uses.
Since only the top 10-50 hits are retrieved from each search engine, the total number of hits retrieved may be considerably less than a direct search would find.
Advanced search features (searches with Boolean operators, field limiting, use of “ ”, +/-, a default AND between words, etc.) are usually not available.
Meta Search Engines (continued)

Meta-Search Engine | Primary Web Databases | Ad Databases | Special Features
Vivisimo | Ask, MSN, Gigablast, Looksmart, Open Directory, Wisenut | Google | Clusters results
Clusty | Ask, MSN, Gigablast, Looksmart, Open Directory, Wisenut | Google | Clusters results
Ixquick | AltaVista, EntireWeb, Gigablast, Go, Looksmart, Netscape, Open Directory, Wisenut, Yahoo | Yahoo |
Dogpile | Ask, Google, MSN, Yahoo!, Teoma, Open Directory, more | Google, Yahoo | All top 4 engines
Mamma | About, Ask, Business.com, EntireWeb, Gigablast, Open Directory, Wisenut | Miva, Ask | Refine options
Kartoo | AlltheWeb, AltaVista, EntireWeb, Exalead, Hotbot, Looksmart, Lycos, MSN, Open Directory, Teoma, ToileQuebec, Voila, Wisenut, Yahoo | ?? | Visual results display
"Real" MSEs which aggregate/rank the results in one page   "Pseudo" MSEs type I which exclusively group the results by search engine "Pseudo" MSEs type II which open a separate browser window for each search engine used and Search Utilities, software search tools. Meta Search Engines (MSEs)  Come In Four Flavors
T H A N K  Y O U