This week I was asked to read two articles and one study: ‘Overlap Among Major Web Search Engines’ by Amanda Spink et al., ‘Searching the World Wide Web’ by Craig Knoblock and ‘Web Search Engines’ by David Hawking.
Amanda Spink’s study was designed to examine search results and see if there was any one search engine that could produce the best results. In the end they determined that there was no single defining search engine and that each works differently. If I’m completely honest I found the whole study to be rather pointless. It’s good to have competition because it motivates and inspires ideas. Without competition technology could grow stagnant. If there really was one search engine to end them all I think users would know about it, and the fact that people do have personal preferences proves that there isn’t.
Craig Knoblock’s article was written before the days of Google and talks about the technology present in 1997. He discusses the history, the basic workings of a search engine and the struggles with storing all the data they use. Nods are made to how rapidly the technology is evolving, although he claims that MetaCrawler could be the future of search engines.
David Hawking discusses how search engines have overcome the problem of storing data and how that data is processed. He explains how the simplest search algorithm works by basically making a list of URLs and then identifying which sites in that list are of good quality. He notes that problems are apparent with this method though, as it would take years to compile the list of pages. So he introduces the technology that search engines use to speed up the process. Things like caching the results for the most popular searches are used so that the engine can quickly look at its own data to make suggestions. He concludes by saying that search engines need to be maintained so the quality of their results does not drop.
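The caching idea Hawking describes can be sketched very simply. This is just an illustrative toy, not how any real engine works: the `run_full_search` function and its tiny fake index are made up for the example, and the caching is done with Python's standard `functools.lru_cache`.

```python
from functools import lru_cache

# Hypothetical stand-in for the expensive part: scanning a large
# index of pages to answer a query. Here it is just a tiny dict.
def run_full_search(query):
    fake_index = {
        "python": ["python.org", "docs.python.org"],
        "search engines": ["example.com/how-search-works"],
    }
    return fake_index.get(query, [])

# Cache results for repeated (popular) queries, so the "engine"
# answers from its own stored data instead of searching again.
@lru_cache(maxsize=128)
def search(query):
    return tuple(run_full_search(query))

print(search("python"))          # first time: runs the full search
print(search("python"))          # second time: served from the cache
print(search.cache_info().hits)  # number of cache hits so far
```

The second identical query never touches `run_full_search`; the cached answer comes straight back, which is the speed-up the article is getting at.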