The World Wide Web has had far-reaching and unpredictable consequences (just as its founders predicted). Perhaps most difficult to predict was the complexity facing scientists who produced tools intended to search this database. It’s a much bigger challenge to create a tool that can search a dynamic, growing, public database whose participants are often motivated by economic self-interest than it is to create a traditional information retrieval tool.
In any dynamic marketplace, some ideas work, and some don’t. You throw some tomato sauce and gnocchi against the wall and find out what sticks after the fact. Sometimes you just make a mess. I got to thinking about this today, enjoying a Labor Day break while dusting my car interior with a magnetic dust mitt (here to stay, IMHO) and pondering why no one I know tries to do their dry cleaning in their clothes dryer (a dud of an idea, if you ask me).
So here we are at a crossroads in the history of Internet search and retrieval. We’ve learned that it’s a far different world from the closed world of academics, librarians, and newspaper and periodical archives that preceded it.
Partly due to people’s inherent need for continuity, and partly due to some already-entrenched vested interests, some bad ideas in the world of Internet search continue to live on past the point of usefulness. In a field where relevance is paramount, these ideas have become irrelevant.
The first failed experiment on my hit list is metatags.
Metadata schemata in general make enormous sense. Information becomes particularly usable when labeled using smart protocols.
But for now, many web site owners are still wondering what to do about metatags. These are bits of information, inserted by webmasters or page authors, that supposedly tell us a bit more about what a page is about. They appear in the head of an HTML document. The two key types of metatag are “keywords,” in which the page author inserts keywords that describe the subject matter of the page, separated (or not) by commas, and “description” – a brief summary of the page. Webmasters and their search engine marketing consultants have for years agonized over these tags.
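For readers who haven’t peeked inside a page’s source, here’s roughly what those two tags look like in the head of an HTML document (the site name and tag values below are invented for illustration):

```html
<html>
  <head>
    <title>Acme Widget Emporium</title>
    <!-- "keywords": comma-separated terms describing the page's subject (hypothetical values) -->
    <meta name="keywords" content="widgets, discount widgets, widget accessories">
    <!-- "description": a brief human-readable summary of the page (hypothetical value) -->
    <meta name="description" content="Acme Widget Emporium sells quality widgets and accessories at discount prices.">
  </head>
  <body>
    <!-- visible page content goes here -->
  </body>
</html>
```

Note that neither tag produces anything visible on the page itself – they exist solely for software, such as search engine crawlers, that chooses to read them. That invisibility is precisely what made them so easy to abuse.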
Metatags, as many in the industry are aware, were an early casualty of web site owners’ opportunism. Marketers, particularly operators of porn sites, which made up much of the money-making power of Internet commerce circa 1995, made search engines like Altavista look pretty silly. Search engines that took metatags seriously were riddled with spam (insincere pages whose metatags were manipulated in order to rank higher in searches) until they began fighting back with increasingly sophisticated ranking methods and filters.
Today, some search engines still look at metatags, but increasingly they put much more emphasis on both visible text on the page and “off-page factors” (popularity, linking structure of the Internet, etc.) to measure page relevance. Google doesn’t bother with metatags – it doesn’t even incorporate the description tag in the summary of page contents, preferring to grab text from the page itself.
This puts site owners in an awkward “should I or shouldn’t I?” position, and doubly so for the consultants who are often hired by companies who need someone to “make sense of all that search engine ranking business, like metatags and those other things we don’t understand.” Should SEO consultants tell their clients that metatag work is unnecessary? Or cling to the mystique? Should they do the work if it’s of marginal benefit? Or could they be doing better things with their time?
The current thinking amongst many consultants seems to be “well, it can’t hurt, so go ahead and use them.” Sure, sure. But why is the conversation always about what “can’t hurt” when we’re marketing our own sites? Can we take one second out to talk about what makes sense generally for the world of Internet information retrieval, independent of our own current site marketing projects?
If somebody would just declare the end of the metatag era, full stop, it would make it easier on everyone. But think for a second. Someone pretty important actually did. Google. Google, for one, has decided emphatically that metatags are too easily manipulated to be of any value in determining a page’s importance or relevance. Google is the #1 search engine property in the world, and trails only AOL, MSN, and Yahoo in unique visitors per month in the US. Maybe someone’s trying to tell you something.
The future darlings of the world of search technology will take this philosophy even further. Good rankings in search engines now have widely acknowledged economic value, so it makes no sense to use page labeling conventions that offer marketers carte blanche to deceive search engines for their private benefit. Easily duped conventions make search engine results irrelevant, thus defeating the consumer’s very purpose in using them.
Showing consumers the types of information that they’re really looking for when they type in their search keywords will be an increasingly sophisticated challenge. Off-page factors may soon be joined by user-defined ranking criteria, peer-to-peer search indexes, sophisticated forms of metasearch, and other innovations, all in the name of avoiding keyword spam and deceptive practices. Search engines have always wanted their ranking algorithms to be a moving target to avoid “reverse engineering” by marketers. The pursuit of the “ultimate moving target” continues.
Metatags as we know them today – I refer specifically to the meta keyword and meta description tags inserted into the head of an HTML document – don’t factor into this future.