How Search Engines Work

A search engine operates in the following order: 1) web crawling, comprising deep crawling using depth-first search (DFS) and fresh crawling using breadth-first search (BFS); 2) indexing; and 3) searching.
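The two crawl orders differ only in how the frontier of discovered links is managed: deep crawling (DFS) follows the most recently discovered link first, while fresh crawling (BFS) visits pages in the order they were found. A minimal sketch, using a hypothetical in-memory link graph in place of real HTTP fetches:

```python
from collections import deque

# Hypothetical link graph: page -> pages it links to.
LINKS = {
    "home": ["about", "news"],
    "about": ["team"],
    "news": ["story1", "story2"],
    "team": [],
    "story1": [],
    "story2": [],
}

def crawl(start, deep=True):
    """Visit pages in DFS order (deep crawl) or BFS order (fresh crawl)."""
    frontier = deque([start])
    seen = {start}
    order = []
    while frontier:
        # DFS pops the most recently discovered page; BFS pops the oldest.
        page = frontier.pop() if deep else frontier.popleft()
        order.append(page)
        for link in LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order
```

A real crawler would also honor robots.txt, throttle requests, and revisit pages periodically; this sketch only shows the traversal order.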

Web search engines work by storing information about a large number of web pages, which they download from the WWW itself. These pages are retrieved by a web crawler (also known as a spider), an automated Web browser which follows every link it sees; exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed. Data about web pages is stored in an index database for use in later queries.

Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text, since it is the text that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem can be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This is consistent with the principle of least astonishment, since the user normally expects the search terms to be on the returned pages. Their increased relevance makes these cached pages very useful, even beyond the fact that they may contain data that is no longer available elsewhere.
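As a rough illustration of indexing, the sketch below builds a toy inverted index mapping each word to the set of pages containing it, and keeps a stored copy of each page's text in the spirit of a cache. The page names and contents are invented for the example:

```python
# Minimal indexing sketch: an inverted index (word -> pages) plus a
# per-page cache of the text exactly as it was indexed.
def build_index(pages):
    index = {}
    cache = {}
    for url, text in pages.items():
        cache[url] = text  # stored copy, like a search engine's cached page
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index, cache

pages = {
    "a.html": "web crawler follows links",
    "b.html": "index stores words from pages",
}
index, cache = build_index(pages)
```

Production indexes add far more (stemming, stop-word handling, positions for proximity search, compression), but the core structure is this word-to-pages mapping.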

When a user submits a query to the search engine, typically by giving keywords, the engine looks up the index and provides a listing of the best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the use of the Boolean operators AND, OR and NOT to further refine the search. An advanced feature is proximity search, which allows users to define the distance between keywords.
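The Boolean operators map naturally onto set operations over the index's posting lists. A minimal sketch, with hypothetical posting lists: AND is intersection, OR is union, and NOT is set difference.

```python
# Hypothetical posting lists: word -> set of page URLs containing it.
INDEX = {
    "search": {"a.html", "b.html", "c.html"},
    "engine": {"a.html", "c.html"},
    "crawler": {"b.html"},
}

def lookup(word):
    """Return the posting list for a word (empty set if unindexed)."""
    return INDEX.get(word, set())

# search AND engine: pages containing both terms.
search_and_engine = lookup("search") & lookup("engine")
# search OR crawler: pages containing either term.
search_or_crawler = lookup("search") | lookup("crawler")
# search NOT engine: pages containing the first term but not the second.
search_not_engine = lookup("search") - lookup("engine")
```

Proximity search requires storing word positions in the postings, not just page membership, so each posting can record where in the page a term occurs.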

The usefulness of a search engine depends on the relevance of the results it gives back. While there may be millions of webpages that contain a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines use methods to rank the results so as to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques are developed.
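One classic ranking method, shown here only as an illustration of the idea, is tf-idf scoring: a page scores higher when a query term appears often in it but rarely across the collection. The documents below are invented, and real engines combine many more signals (such as link analysis):

```python
import math

# Hypothetical document collection for the example.
DOCS = {
    "a.html": "search engine ranks search results",
    "b.html": "web pages contain many words",
    "c.html": "relevant search results first",
}

def rank(query, docs):
    """Return document URLs sorted by summed tf-idf score for the query."""
    n = len(docs)
    tokens = {url: text.lower().split() for url, text in docs.items()}
    scores = {}
    for url, toks in tokens.items():
        s = 0.0
        for term in query.lower().split():
            tf = toks.count(term) / len(toks)          # term frequency in page
            df = sum(1 for t in tokens.values() if term in t)  # document frequency
            if df:
                s += tf * math.log(n / df)             # rare terms weigh more
        scores[url] = s
    return sorted(scores, key=scores.get, reverse=True)
```

For the query "search results", this ranks the page that repeats "search" above the one that mentions it once, and last the page that matches neither term.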

Most Web search engines are commercial ventures supported by advertising revenue and, as a consequence, some employ the controversial practice of allowing advertisers to pay to have their listings ranked higher in search results.

The vast majority of search engines are run by private companies using proprietary algorithms and closed databases, the most popular currently being Google, MSN Search and Yahoo! Search. However, open-source search engine technology is also available, such as ht://Dig, Nutch, Sena, Egothor, OpenFTS, DataparkSearch and many others.