At one time I was on a team charged with analyzing a large-scale search implementation for a web site that handled hundreds of thousands of hits a day. Sifting through the search data, we were surprised to find that although there were many thousands of unique searches, most of them were for one of just eighty different search strings. We could reduce the load on our server by almost 75 percent simply by caching the results for those eighty strings.
We can extrapolate from this example: autocompletion could still be useful even if the server returned only a subset of all possible search terms, provided that subset reflected the popular content on the site. Implemented this way, the feature would cost far less in server load while remaining genuinely useful to the user.
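A minimal sketch of the idea, assuming a hypothetical query log gathered from server analytics (the terms, the log, and the `autocomplete` helper are all invented for illustration; the real site used eighty popular strings, trimmed to three here):

```python
from collections import Counter

# Hypothetical query log; in practice this would come from server analytics.
query_log = [
    "weather", "news", "weather", "sports", "weather",
    "news", "stocks", "sports", "news", "weather",
]

# Keep only the most popular terms (eighty on the site described
# above; three here to keep the example small).
TOP_N = 3
popular_terms = [term for term, _ in Counter(query_log).most_common(TOP_N)]

# Serve autocomplete suggestions only from that popular subset, so these
# requests never have to touch the full search index on the server.
def autocomplete(prefix):
    return [t for t in popular_terms if t.startswith(prefix)]

print(popular_terms)       # most frequent first: ['weather', 'news', 'sports']
print(autocomplete("w"))   # ['weather']
```

The same precomputed subset could back a result cache: any query found in `popular_terms` is answered from the cache, and only the long tail of rare queries reaches the search backend.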