Archive: November 2004

Tuesday, November 30, 2004

Lovin’ It, But Not Findin’ It

On ClickZ today, Pete Blackshaw makes a good point about the inadequacy of the site search on Mcdonalds.com. If they’re all about the salads, then how about some smarter results when you type “salads” into the site search box?

I’ve been pondering the McDonald’s salads example for a while, and even worked a lengthy discussion of it into a draft chapter of my forthcoming book. This got mixed reviews at my publisher. An editor wondered why I was so eager to toss in examples from huge companies instead of focusing on “real life” stories about smaller businesses using AdWords. (One might as well ask Machiavelli why he wrote The Prince instead of “The Profitable Baker,” but anyway…)

McDonald’s could really use some search marketing to go with the rest of their marketing. I know they can do it, because along with other large companies, they’re no stranger to bopping users over the head with big banners at sites like ivillage.com.

A Google search for the term “salads” turns up Subways.com in the first organic position. And few if any advertisers. Nice going!

McDonald’s is up to something with AdWords, though, at least in Canada. If you type in “McDonalds” you see an ad for mcdealcanada.com, which is an official McDonald’s site that outlines the daily “inexpensive sandwich of the day” ($1.69 Cdn.) for each day of the week. ($1.79 in Thunder Bay. Sorry.)

That’s a head-scratcher right there. McD’s is trying to create floor traffic by marking down Big Macs and other lower-margin products, when the business pages tell us that McDonald’s new-found profitability is attributable to their salad lineup, which would be extremely easy to promote! But maybe people drive up to the window motivated by the cheap Big Xtra offer, and change to a salad at the last second.

Who knows what they’re thinkin’. But I might have to make a trip to Subways tonight to reward them for their excellent ranking on the term “salads.” Call it search karma.

Posted by Andrew Goodman

 

Who’s Afraid of Google Scholar?, Part I: Amazon

Chris Locke has pointed to a little-known feature being tested by Amazon: hypercitation. The entry for one example, a book on art history, lists the books that book cites, as well as other works that cite it.
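In other words, what Amazon seems to be testing is a citation graph indexed in both directions: the works a book cites, and the works that cite it. A minimal Python sketch of that idea, with made-up titles standing in for real records (Amazon’s actual data model is unknown):

    from collections import defaultdict

    # Forward citations: each work -> the works it cites.
    cites = {
        "art_history_book": ["vasari_lives", "gombrich_story_of_art"],
        "later_monograph":  ["art_history_book"],
        "survey_article":   ["art_history_book", "vasari_lives"],
    }

    # Invert the graph once to answer "who cites this book?" queries.
    cited_by = defaultdict(list)
    for work, references in cites.items():
        for ref in references:
            cited_by[ref].append(work)

    print(cites["art_history_book"])     # what the book cites
    print(cited_by["art_history_book"])  # what cites the book: the monograph and the survey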

I can’t seem to stumble on any other examples of this at the moment. Anyone?

Posted by Andrew Goodman

 

Monday, November 22, 2004

It Begins

Firefox cutting into IE’s lead. I predict that by this time next year, the browser market share will look like this:

Internet Explorer: 75%
Firefox: 25%
Other: 5%

Don’t ask me what “other” browsers might be. That’s my margin of error based on, um, early exit polls, which are never wrong.

Posted by Cory Kleinschmidt

 

Perpetrators of Click Fraud: Not-Not-Evil

Stefanie Olsen reports that Google has launched what may be the first civil suit directed at a group conspiring to defraud advertisers through click fraud. It’s about time. For some time now, I’ve been advocating, not so subtly, jail time for those who conspire to commit click fraud.

(While no one in the industry explicitly disagrees with my stance, they do sometimes scratch their heads and wonder how they could possibly benefit from a response that doesn’t allow them to sell a fraud-auditing tool. Sending conspirators to jail? What’s in it for us? Nothing but justice, friends.)

I’ve started a thread about this topic on SEM 2.0 (you’ll have to join to participate).

Posted by Andrew Goodman

 

GMail to Support Outside POP Accounts

Among other things, Google’s Georges Harik states in this interview on WebTalk Radio that GMail will allow users to check external POP-based accounts (a feature that’s long been available in Yahoo Mail, Hotmail, and other web-based email clients). For those who intend to use GMail as their primary email organizer, that’s a must.

Posted by Andrew Goodman

 

“Reputation Follows from Quality”: Jakob Nielsen

Jakob Nielsen isn’t exactly thinking small today in an Alertbox column that declares the end of the industrial revolution and uncannily describes a more decentralized work process (i.e., what you’re probably already doing).

You’ll rarely read a more sweeping statement than this: “The switch from centralization to decentralization goes to the heart of the human experience. And because the switch will drive up quality, it will tend to be a force for good.” But I tend to agree with Nielsen.

Those who refuse to drop old memes become bewildered by today’s subtle economic changes. Those who already work productively in the new realm welcome those changes, as ‘system’ potentially makes a more comfortable peace with the ‘lifeworlds’ of the privileged members of knowledge classes.

Notions of flexible specialization aren’t new, of course, as the ample literature on “the looming threat of post-Fordism” will attest. As much by accident (but in large part thanks to educational and other opportunities) as anything else, many of us have gained more control over our work lives than ever. (That doesn’t help the person pushing a mop for the same real wage as twenty years ago, of course. Get an education, kids.)

My generation — formerly called “slackers” — often defined its aspirations in terms of what it didn’t want. It didn’t want to offer its loyalty to centralized corporate systems that offered none of the same in return. It didn’t want to dress a certain way, or move to a certain place, just because a “job carrot” was dangled. It believed in a multitude of narratives (and channels, and tastes), and shied away from grand narratives. It believed that merit, not image, should determine one’s pay packet (in that, we were of course hopelessly naive).

As it turns out, the obstinate, sometimes almost pre-industrial work habits of today’s overeducated, disloyal middle classes are quite suitable for the emerging economic order. The generation of knowledge workers flexible enough to pursue goals on a project basis, or in a variety of team formats, has turned out to drive unprecedented productivity gains. The majority of those who lose steady “jobs” through offshoring will ultimately benefit if they can revamp their work styles so they’re always moving towards high-output goals as part of a multitude of different project teams.

In declaring the industrial revolution over (finally!!!), Nielsen has hit the nail right on the head. He even reminds us that search marketing is part of the picture of serving niche markets on short notice.

Posted by Andrew Goodman

 

Sunday, November 21, 2004

Google Scholar as Academic Metasearch: Political not Algorithmic

Like many who have dared to hastily critique Google products, I’ve been proven wrong on my lukewarm assessment of Google Scholar. Already.

Taking slightly more time to check it out, I’m beginning to understand the power of the service. The real problem — analogous to a world with 2,000 channels and nothing on, or a world where I can set up a wireless network for $50 and check email on my laptop while feeding the neighbour’s beagle, but instead watch NFL football while posting to my blog — is that the power of these tools is comically underutilized.

But just as it might take Howard Stern to rapidly increase the uptake of Sirius satellite radio, it may take Google releasing a better way of searching for academic materials to improve the chances that students and professors adopt a more systematic approach to research. So be it.

In any case. The “library search” feature of Google Scholar is already powerful, and as it improves, could ultimately make the whole world into a big “interuniversity loan” system. Obviously, many libraries have reciprocal agreements to share materials. But the more easily researchers are able to pinpoint the availability of these materials, the more likely they are to initiate a request for them. Of course it probably won’t be too much longer before 95% of those materials are scanned and available on-demand, which will pose a separate set of problems for authors, but these are nothing new to academic authors who have seen an evolution from informal, unauthorized copying to a more consistent practice of putting together course kits with permissions. (For some students seeking wider access to materials for free, bootleg distribution services will no doubt crop up to spur innovation in the “legit” publishing world.)

This time around my example search was for Peter deLeon’s Thinking About Political Corruption, an original and insightful study, though it might read a little stiffly for those who get their information from Wolf Blitzer. The Google Scholar search tool allows you to enter a zip code or other jurisdiction (I entered “ontario”) to determine the material’s availability in libraries. Pretty good result. I saw that the book was available at York University, Ryerson University, The University of Western Ontario, the University of Windsor, etc. The handy tables already seem useful in that they offer a quick link to library hours and basic info for each institution cited.

To continue extending this kind of unification of disparate data, Google will need to engage in a process of ongoing cooperation with library systems. It won’t be able to automate everything, and probably won’t be able to fall back on some easy, ready-made metadata protocol. In short, the process is as much “political” (requiring detail work and talking to people who have traditions of their own) as it is algorithmic. (For the uninitiated, the title of my blog entry today is a tribute to an important expository article, “Justice as Fairness: Political not Metaphysical,” by the most-cited contemporary Anglo-American philosopher, John Rawls. Rawls ultimately situated liberal notions of economic justice and rights in American traditions, responding to interpretations that viewed him as an ‘abstract’ neo-Kantian rights theorist. As this article is one of the most-cited philosophy papers of all time, it does, incidentally, underscore the incompleteness of Google’s beta effort thus far: Google Scholar shows only 85 citations of it.)

The other interesting and quite Googlish feature of the library search results is the “regional libraries” results, which seem to be a list of libraries slightly farther afield which have the material you’ve searched for. Google Scholar brings up available copies of the de Leon book at dozens of libraries in Michigan, New York state, and as far away as Minnesota. Google is good at geography. 🙂

Given the strong start for Google Scholar, then, it’s possible to imagine it getting much better. Part of this scenario must involve cooperation from research professionals.

As for de Leon’s fine book, York University library (York U., Toronto, is Canada’s second-largest university behind the University of Toronto, which does not seem well-connected to Google Scholar at this juncture) keeps only one copy on the shelves. Further investigation quickly shows that the book is available. Political corruption has been a pivotal theme in North American politics of late. Armchair analyses in the media are proffered daily. In Canada, press accounts of advertising spending scandals dominated the debate in the recent election in which the ruling Liberal Party was reduced to a minority government. In this context, de Leon’s analysis stands up as one of the only systematic studies of the phenomenon of political corruption in the context of developed democracies. Published in 1993, Thinking About Political Corruption put forward a bold but far from proven hypothesis: that the decentralized nature of U.S. political institutions directly correlates with ongoing misappropriation of government monies and failure to uphold federal standards. This analysis descends from Theodore Lowi’s vaguely similar argument in The End of Liberalism (1969). For all I know, only a handful of students (other than my seminar of 20 upper-year undergrads at Trent University six years ago) have ever debated de Leon’s thesis. Needless to say, it hasn’t made its way into news coverage of politics.

According to Google Scholar, de Leon’s book is “cited by 6” other sources. (This obviously is an incomplete list; we assume that this tool will work much better in a year’s time.) One of those citations is a 2004 article by Mark E. Warren in the American Journal of Political Science (48:2, 328-343), “What Does Corruption Mean in a Democracy?” In the abstract we learn that Warren argues: “Despite a growing interest in corruption, the topic has been absent from democratic theory.” Warren finds that most incidences of political corruption in a democracy indicate a “deficit in democracy.” Back to you, Peter de Leon.

Actually, back to my alma mater (Phase II), York University. York prides itself on being home to one of the only research institutions — the Centre for Practical Ethics — that studies political ethics in a systematic way. In spite of this, at the height of the academic year, de Leon’s groundbreaking study of political corruption sits lonely on the shelves of Scott Library, waiting for a taker. To explain why would be a much longer discussion. Perhaps it’s best to change the subject and summarize it with a joke that used to be told by snobby University of Toronto students: “Trent University library burned to the ground last Wednesday. Luckily there were no casualties, as no one was in there at the time.”

As much as this might surprise us given the magnitude of the social utility that would arise from making progress in this area, the study of political corruption — much like Google Scholar itself — is just getting off the starting line. I wish both well. Now, back to football.

Posted by Andrew Goodman

 

Saturday, November 20, 2004

Firefox Developers Riding the Wave

The publicity avalanche surrounding Firefox continues unabated. News.com profiles developers at the forefront of the Firefox movement and explains how some crafty developers are taking advantage of Mozilla’s open-source architecture to generate closed-source commercial products.

“Business is pretty crazy right now,” said Pete Collins, who last year founded the Mozdev Group in anticipation of demand for private Mozilla development work. “With the popularity of Firefox and the economy rebounding, we’ve been swamped. We don’t even advertise–clients find us and provide us with work.”

How cool is that? I’m thinking this is just the tip of the iceberg, folks. I can tell you that I am definitely willing to pay some bucks in order to have a powerful browsing experience. Microsoft’s years of neglect have only compounded that feeling of frustration and obsolescence that IE users experience these days.

I’ve even found a few new extensions since my last post about Firefox. I’ve also played around with Firefox’s more advanced options, and I can now replicate the functionality of my former favorite browser, NetCaptor. So, bye bye, NetCaptor. All of this is made possible by a rock-solid open-source web browser that simply blows IE away. It’s even better than I thought it would be.

Oh, and I got my Firefox t-shirt last week. Yes, I think that fully qualifies me as a geek. 🙂

There’s even a Firefox blog. With all this glowing publicity about the upstart browser, I think we’re about to see another “Halloween memo” from Microsoft pretty soon. Will 2005 be the year of Google and Firefox vs. Microsoft? Get your tickets now!

Posted by Cory Kleinschmidt

 

Thursday, November 18, 2004

I Love Grilled Cheese

In the next few days, hundreds of publishers will try to get people to stampede to their site by working the phrase “grilled cheese” into a sentence like “I like to eat grilled cheese out of a trucker cap.” The eBayers are over the top with this one. It’s the campiest trend on the Internet since Mahir, jennicam, all your base are belong to us, and nigritude ultramaroon (common misspelling).

Posted by Andrew Goodman

 

Google Scholar vs. Real Scholarship

Today’s release of Google Scholar, an academic search tool developed by a Google engineer in his “20% time,” is an interesting and noble but less-than-groundbreaking contribution to research.

Professors (and librarians) will worry that time-strapped students will carry the trend towards sloppy Internet-based research even further. Pulling an all-nighter? Enter something into a search box. Students, take note: the stuff you pull up on Google Scholar will be a fairly random, incomplete selection of materials, including many abstracts. The best way to write your paper is still to identify the key readings you need to consult to put together a coherent argument, and plop your butt down in the library and actually read through them.

Typing a few queries myself, I discovered just how radically incomplete the results are. In my favorite field, political philosophy, memorable journal articles such as “Communitarianism: A Guide for the Perplexed,” “Communitarianism: The Good, the Bad, and the Muddly,” and the mindbending “The Foucauldian impasse: no sex, no self, no revolution,” are not actually available — only various citations or mentions of them. Most of the available citations lead only to abstracts at various subscriber-only services. In a few cases, actual journal articles are offered (usually in PDF form), but it’s hit-and-miss. One doesn’t blame Google Scholar for this, but the very presence of the tempting search box might lull some users into believing that this is a powerful search tool. Many more powerful tools are already available, particularly to students enrolled at accredited institutions.

The nice thing about better educational institutions — and this is part of the ranking methodology used by third parties — is that when you access their library systems, you can get just about anything you need, no matter how rarefied or rare. Sometimes, you can get a whole pile of that material and actually work on it in a relatively quiet space — handy when the only space to call “your own” is half a dorm room.

Distance education has much to recommend it. But just as nothing truly replaces face-to-face contact in the business world, it doesn’t hurt to spend actual time on a campus soaking up wisdom and tracking down journals and books. At this stage, it still makes a lot of sense to be in the physical presence of professors, fellow students, libraries and library people, if only to familiarize oneself with the notion that there really are people doing serious research.

No doubt the introduction of tools like Google Scholar will push the various academic subscription services and libraries to standardize their protocols for making obscure information available to students (particularly grad students) and researchers. But for the foreseeable future, you’re going to get a lot farther, faster, by talking to a professor or librarian who can help you figure out where to look for the actual material you need.

What is interesting is the embryonic categorization that’s being built into Google Scholar. The top result for the Gad Horowitz “Foucauldian impasse” article is an entry called [citations] — confusingly, the system only sees two citations of this piece although there are likely dozens or hundreds in the academic literature. In green letters you also see what amounts to a “meta categorization” statement: “Michel Foucault, critical assessments, 1995.” Better than nothing, but again, librarians are likely wincing watching Google reinvent the wheel. We’ll be watching this space.

Posted by Andrew Goodman

 

Tuesday, November 16, 2004

Will Google Ban Affiliate Bidding?

Some useful discussion on John Battelle’s blog about the ongoing issues surrounding trademarked terms and (in a slightly separate but related vein) affiliates bidding on brand-name keywords.

One affiliate worries that Google will “ban” all affiliates from bidding on keywords.

I worry that these kinds of rumors may spread and create a distorted debate, and of course I have no idea what Google will actually do. But it’s worth looking more closely at exactly what’s being discussed.

In the first place, there are many different kinds of affiliates and they behave in different ways. Some are responsible and clever, some aren’t. Some have their own sites. Others don’t. But I don’t believe it will be easy to come up with hard-and-fast rules to eliminate a whole category of bid types or words from the bidding universe, because these issues are rarely clear-cut.

On the whole, I believe that affiliate-parent relationships are best dealt with privately. But that’s not to say that Google itself won’t also have to step in to quell certain ongoing practices. And when they do, certain aggressive full-time affiliate folks who find themselves shut out of the action will have only themselves to blame for their lemming-like, risk-free participation in the AdWords auction. Funny, but choking AdWords with crappy ads sounds a lot like stuffing a search index full of crummy spam pages. It costs nearly nothing, but pays off if your affiliate link generates sales. Hmmm. There has to be a better way to live.

So, from the standpoint of the poor user, it looks like too many advertisers are in there choking the system with dictionaries full of keywords that lead users only to a big-company site like eBay. And they’re doing so using the keyword replace function for maximum coverage with minimum work. In other words, they’re using generic ad copy and hoping the automated tool makes it seem somewhat personalized. Some time ago I predicted that the impact of matching a user’s search query exactly (which, until now, has generally improved user response) would be diminished if every ad on the page had the same title. Soon, the spoils would go to the advertiser who took time to write a genuinely interesting or personalized title.
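To see why templated ads collapse into identical titles, here is a minimal Python sketch of keyword insertion. The placeholder syntax is modelled on AdWords’ keyword replace feature; the substitution logic and the 25-character title limit handling are my guesses, not Google’s implementation:

    import re

    def expand_title(template: str, search_term: str, max_len: int = 25) -> str:
        """Render an ad title template against the user's search term."""
        def substitute(match: re.Match) -> str:
            fallback = match.group(1)
            term = search_term.title()
            # Use the searched keyword if it fits the title limit, else fall back.
            return term if len(term) <= max_len else fallback
        return re.sub(r"\{KeyWord:([^}]*)\}", substitute, template)

    # Eight affiliates, one lazy template, eight identical titles:
    template = "{KeyWord:Fitness Equipment}"
    for _ in range(8):
        print(expand_title(template, "stairmaster"))  # prints "Stairmaster" 8 times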

Lo and behold, this day has arrived! I just did a search for “stairmaster.” Here, in order, are the titles used by the eight ad listings on the SERP:

Stairmaster

Stairmaster

Stairmaster

Stairmaster

Stairmaster

Stairmaster

Stairmaster

Stairmaster

How do you think the user is going to feel about that?

What I’m talking about may not be immediately apparent if you do a search in the United States (though it’s not hard to find here either). Checking out the results on a Google search for “Audi A3” for the Canadian user, I saw only three ads, but all were affiliate ads pointing to eBay. That’s silly, I thought.

I tried the same query for “Audi A4,” but for the U.S. viewer. I got a mix of ad listings. Nothing to worry about there. Then I tried “Audi A4” for the Canadian viewer. Ouch. Eight — all eight — of the sponsored links were affiliate links to eBay. The reason these don’t show up on the first page of the U.S. listings is that there are more advertisers there; in Canada, with fewer advertisers, the folks who play the “choke AdWords with keywords” game can reach the first page while lowballing at 5-10 cents, which is about all their arbitrage strategy allows them to bid.
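The arithmetic behind that bid ceiling is simple break-even math. A sketch with invented conversion and commission numbers (real eBay affiliate payouts varied):

    def max_profitable_bid(conversion_rate: float, commission: float) -> float:
        """Highest cost-per-click an arbitrageur can pay and still break even:
        expected revenue per click = conversion rate x commission per sale."""
        return conversion_rate * commission

    # If one click in 200 produces a sale paying a $12 commission, any bid
    # above 6 cents loses money -- hence the 5-10 cent lowballing.
    print(max_profitable_bid(1 / 200, 12.00))  # 0.06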

Recent Google moves to put such keywords “on hold” or “in trial” before they accrue too many impressions are likely directed at such advertisers. A couple of things. First, Google has denied (on forums when asked) that recent moves such as this are meant to separate naughty advertisers from nice ones. Second, they also claim that the new system is actually giving a looser leash to some diligent advertisers who find themselves flirting with the 0.5% CTR cutoff. Maybe. No one really knows how this is supposed to work or what it’s really supposed to do. What we do see is some campaigns working slightly better, while others are being whacked with a lot of “on hold” and “in trial” keywords based on a predictive model that Google is currently tinkering with.
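Still, the gating logic is easy to imagine. A purely speculative sketch using the 0.5% CTR cutoff mentioned above; the smoothing prior and the impression threshold are invented for illustration:

    def keyword_status(clicks: int, impressions: int,
                       prior_ctr: float = 0.01, prior_weight: int = 100) -> str:
        """Guess at an "on hold / in trial" filter: blend observed CTR with
        a prior so brand-new keywords aren't judged on statistical noise."""
        if impressions < 1000:
            return "in trial"  # not enough data to judge yet
        predicted = (clicks + prior_ctr * prior_weight) / (impressions + prior_weight)
        return "normal" if predicted >= 0.005 else "on hold"

    print(keyword_status(clicks=2, impressions=500))    # in trial
    print(keyword_status(clicks=3, impressions=2000))   # on hold (~0.19% predicted)
    print(keyword_status(clicks=30, impressions=2000))  # normal (~1.48% predicted)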

Google has some tough politics to juggle at this juncture. The “affiliate folks” who are being asked to stop choking AdWords with dictionaries full of words are also the same “folks” that help Google generate so much revenue. These participants in the mayhem of low-ball bidding and search engine optimization and keyword arbitrage and such are often the very same people who are AdSense publishers, sharing revenue with Google on contextual ads. It’s those publishers who have been responsible for taking Google AdWords from “fairly profitable” to “insanely profitable” since the inception of the AdSense program. Google has to enact policies many such webmasters won’t like, while reassuring them that they love them all dearly. It’s a juggling act I don’t envy.

If I had to make a prediction, I’d guess that Google will soon learn that they can’t coddle this crowd at the same time as handcuffing them. At some point they’ll need to be more decisive, and that will alienate a lot of the “AdSense crowd” and negatively impact Google revenues in the short term. But that probably won’t happen soon, as a drop in AdSense revenue would hurt Google’s stock, and in spite of what the founders say, the stock price matters to most Google employees.

Thus far, the “new way of dealing with keyword relevancy” move hasn’t completely quelled the fun and games in affiliate-land, as the Canadian user who typed “Audi A4” found out today. But this could become less of an issue as Google’s new keyword relevance method gets refined. A query for “Frigidaire ovens” returns a reasonable mix of advertisers, including a top ad result from one of Google Canada’s most aggressive advertisers, Sears.ca. But still, there are two of the eBay affiliate lowballers on that page. Were Canadian retailers to wake up and smell the baking banana bread, of course, those arbitrageurs would be crowded right off the page. For now, too many Canadian companies have chosen to ignore keyword bidding, so you can still get great exposure at the 10-cent level.

Google will no doubt continue to study ways of forcing advertisers to be more responsible with their keyword-dumping orgies. Quite simply, seeing eight generic affiliate ads for eBay for a single query is a horrible user experience. Don’t even ask Jakob Nielsen what he thinks. Actually, let’s. Jakob, maybe it’s about time for an update on your April 2003 column “Will Plain-Text Ads Continue to Rule?”

Posted by Andrew Goodman

 

Saturday, November 13, 2004

So What’s Your Favorite Firefox Extension?

As you probably know by now, Firefox is the browser that’s hotter than Pac-Man fever. One of the best things about Firefox is the dozens of extensions that expand the functionality of this already nimble browser.

My favorite extension thus far is called McSearch Preview, although I’m not sure why it’s called that. This extension inserts website thumbnails alongside the search results of all the major engines. I never thought this capability would be very useful, but I find it surprisingly so.

What’s your favorite extension? Please include the download link and a brief explanation of why you like it!

Posted by Cory Kleinschmidt

 

Thursday, November 11, 2004

As Promised, MSN Releases Sincere Search

Tried the new MSN Search today, as you, dear reader, surely have done by now, too.

Capsule summary…

Is it good? Yes.

Is it cool? Yes.

Is it better than Google Search? In some ways, no. In other, important, ways: YES.

Using the product feels like a bit of a trip back in time, to when AltaVista came out with Raging Search. Or perhaps, much earlier, to a less cluttered, less spammy time, when information wanted to be found.

The MSN product clearly aims high, right at where search enthusiasts live, offering advanced features right out of the gate without making the interface confusing for the average user. Unlike AltaVista Raging Search, they haven’t used the term “search enthusiast market” in their publicity. (Then again, what publicity? It seems to amount to leaking news on purpose to mainstream news outlets, feeding info to John Battelle, and posting cryptic comments on this blog.)

Ultimately, you’re dead if you go directly after the elite as your market, but you do need to impress the elite and avoid talking down to the mass market if you hope to maintain credibility with an emerging generation of “always on” young users.

Google (and MSN is following the same route) targeted the elite as a PR strategy, but won over the mass market with ease of use. MSN may similarly win over the mass market because they understand that this market is growing ever more sophisticated. You can’t get away with marketing to people who barely know how to turn on their computer. If Tara and Chris and Gary like it, chances are that more users might give it a whirl too. As of this writing, neither Tara Calishain nor Gary Price has had much to say yet. Chris Sherman’s first assessment seemed cautiously lukewarm.

When playing with this new toy, I could have sworn I heard strains of the Moonglows’ 1954 classic, Sincerely, drifting through the air. I harkened back to a simpler time, when search engine indexes weren’t riddled with spam. When they didn’t expect the user to be a complete dumbass (as we certainly saw with previous versions of MSN Search). When features were pleasing and information was plentiful.

Trying my Osler.com examples from last night, I found certain areas where MSN Search demonstrated clear superiority over Google Search. Power users have long lamented the fact that Google shows only a tiny proportion of the sites that “link to” any given site when you use the link:www.example.com nomenclature. For Osler.com, Google displays 91 inbound links. MSN Search gives you the Full Monty: it says 2,300. (The present site shows around 1,000 inbounds on Google Search; MSN Search gives us credit for 11,500. Now that’s comprehensive.)

The sample search for “Douglas Rienzo” served 63 results on MSN Search. The top three results were journal articles or mentions; the fourth was Rienzo’s bio at Osler.com. Google found 41 results. The disparity may be neither here nor there. A closer analysis would be required. Google’s ranking had put Rienzo’s bio at the very top.

With personalization sliders enabled on MSN Search to privilege freshness of page, Rienzo’s bio falls to the eleventh result on the page, from fourth. It never rises higher than fourth no matter how “static” the setting.

Clearly, neither ranking is “correct.” Users who know that they’re looking for fresh articles would of their own accord adjust the setting. Those looking for static biographical pages on company websites might use different settings. All the more important that Microsoft is previewing this personalization technology. It’s available by clicking on the “search builder” link from the MSN Search Beta home page, and then clicking on “results ranking.” Up pops the interface with three “sliders,” exactly the one that Gary Price stumbled on earlier.

The “personalization sliders” were a thrill to use for this search enthusiast. It may sound like a small thing, but setting them in my own way allowed some typically hard-to-find pages to bubble up to the first and second pages of the SERPs. But this feature does not yet go far enough. Since one of the key benefits of such personalization will be to stamp out spam, other variables should be controllable to really put the advanced searcher in the driver’s seat. If a common spam technique du jour is high keyword density or stuffing keywords into h2 heading tags, then the savvy user might want a suite of settings that discounts such techniques. Such a user might want to cycle through three or four searches quickly to see if they can uncover different information on the first couple pages of SERPs. Kind of like personalized metasearch – searching the same index, but with different algorithmic weightings.
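Mechanically, “different algorithmic weightings” likely amounts to a weighted sum over per-page signals. A minimal sketch assuming three signals that mirror MSN’s three sliders; the real formula and the signal values below are invented:

    from dataclasses import dataclass

    @dataclass
    class PageSignals:
        relevance: float   # query-match score, 0..1
        freshness: float   # recency score, 0..1
        popularity: float  # link-based score, 0..1

    def slider_score(p: PageSignals, w_rel: float, w_fresh: float, w_pop: float) -> float:
        """Combine per-page signals using user-set slider weights."""
        return w_rel * p.relevance + w_fresh * p.freshness + w_pop * p.popularity

    bio = PageSignals(relevance=0.9, freshness=0.2, popularity=0.6)      # static bio page
    article = PageSignals(relevance=0.7, freshness=0.9, popularity=0.4)  # fresh journal piece

    # Default weights favour the static bio...
    print(slider_score(bio, 1.0, 0.3, 0.5) > slider_score(article, 1.0, 0.3, 0.5))  # True
    # ...but cranking up the freshness weight lets recent pages bubble past it.
    print(slider_score(article, 1.0, 2.0, 0.5) > slider_score(bio, 1.0, 2.0, 0.5))  # True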

As the closing bars of “Sincerely” continued to waft eerily through the room, the song morphed into harsh cover versions of itself, and a sad premonition overcame me. This is as good as it’s going to get for MSN Search. We wish them all the best, because a search tool this good will help a lot of users find the information they need. But MSN’s index has yet to be put to the acid test; it has yet to be pummelled with a systematic stream of spam. And the very reason they’ve come up with certain features (like better disclosure of all inbound links) is that they’re way behind in the race, so they have to give us what we actually want, instead of what some corporate strategist thinks we should want.

The participation of this feisty “upstart” in the search wars certainly does put a strange spin on things. Wasn’t it always Microsoft that sat back and refused to update its products because they had a virtual monopoly? While Google’s pace of innovation has been breathtaking, they’ve resisted making certain changes or releasing certain information in order to avoid tipping off the competition — or worse, simply because they can. So has Google already become like the Microsoft of search (not, as Battelle correctly insists, its Netscape)? Have they become fat, happy, and arrogant? A little company from Redmond hopes so.

MSN’s sincere little search engine should blow the lid off and cause Google to do a little bit of soul-searching about how its core product serves an ever-more-sophisticated user. But the MSN Search technology will have to be very good if it’s to get through even the first year of its life without being shown up as just another easily spammable wannabe.

Posted by Andrew Goodman

 

Billions and Billions Crawled

Google attempts to mute Microsoft’s buzz by announcing that its index has jumped to a colossal eight billion pages. The company isn’t offering many details beyond pointing out that they “continue to innovate on the crawling side of the business,” as one official put it. Certainly there are a number of obvious ways that Google could be finding additional pages, given its various overlapping products (e.g., conversion tracking, Blogger, AdSense, the toolbar, Froogle). Perhaps there are also some non-obvious technical advances involved in following links on dynamic sites, etc.

Assuming the pages in it are useful, a huge increase in index size is a happy event. Some time ago I conferred with a large law firm, Osler, Hoskin & Harcourt (osler.com). They had trouble getting all the pages on their highly dynamic site indexed. As much as one might want to scold the company for having such a hard-to-spider site, the more pages Google can find *without* them having to rework their site, the better. And better for the consumer. This site is full of articles and resources, as well as listings of lawyers and their bios.

Today, I noticed that on key queries valued by the company, like “business law canada,” osler.com now ranks #3 (behind some public-domain and library resources), and finally ahead of the tiny immigration law firm from upstate New York that used to routinely rank first on this query.

Looking for Douglas Rienzo, a partner in the Osler firm? Two years ago, when I searched for particular associates and partners, the only Google mentions of their names pointed anywhere but the Osler site. Now, the top results are bios that appear on Osler.com, with contact info. This is neither here nor there, but it does prove that many more of the important pages on the massive Osler site got spidered and treated well by Googlebot.

It’s impossible to gauge search quality on just a couple of example searches. But it is heartening to see that on some queries, like “business law canada” — to say nothing of “Douglas Rienzo” — the results are now better, not worse, than they were two years ago.

Chris Sherman wonders, does this increase in index size portend the release of new search features or perhaps a significant rejigging of the ranking algorithm?

Can I be first to dub the next cataclysmic Google re-index? We’re at about the one-year anniversary of the algorithmic imbroglio that was “Florida.”

Let’s call the next big one “Ohio.”

Posted by Andrew Goodman

 

Wednesday, November 10, 2004

Is it News? Microsoft Search to Exit Vapor-Land Tomorrow

Word seems to have leaked out quite handily to every major media outlet: Microsoft will launch its new search index tomorrow.

Let’s review the progress of this “news” story:

  • Many search enthusiasts have already tried out the preview version of the product.
  • Anonymous Microsoft supporters (their IP addresses resolve to a subterranean facility near Dayton, Ohio) have for several months posted comments on this blog of the “look out baby, here it comes!” variety.
  • Research expert Gary Price stumbled on a page he evidently wasn’t supposed to see, displaying advanced features such as “personalization sliders” that seem to outgun Google’s beta personalization. This is a topic we’ve reported on previously in this space. The page was quickly removed. Will such advanced features be public at launch? Or ever?
  • Today’s news is telling us that there will be news about this tomorrow.

As with most other things Microsoft, when Microsoft’s new search engine launches, it won’t be news. The question is, will it be any good? Will it be disruptive? Or just good enough? And most importantly: will you use it?

Posted by Andrew Goodman

 

Google Dips Toe in ‘Reseller’ Channel

Google has taken the first step towards acknowledging the role of webmasters, Search Engine Marketing consultancies (SEMs), and agencies in spreading the gospel about its AdWords paid search advertising program. The company has announced a new program dubbed Google Advertising Professionals, which offers new tools and resources as well as a training and certification component.

“We’ve opened the door to programmatically recognizing the ecosystem,” said Sukhinder Singh, Google’s General Manager of Local Search and Third-Party Partnerships.

The most welcome development for those managing multiple accounts will be a new interface called My Client Center, offering SEMs the ability to oversee all accounts under their management with a single login. (No billing flow-through will be necessary to access client accounts, just permission from the client to do so.)

The program will be heavy on learning. Google will offer training modules and tips for increasing one’s AdWords campaign management business. In addition, PPC advertising professionals will be given the chance to take an exam in order to become qualified Google Advertising Professionals. Those who pass with a grade of 75% or more will qualify to display a Google Advertising Professionals logo. The time-limited exam will cost $50 and “is intended to be rigorous,” emphasizes Singh. Additional hurdles before being permitted to display the logo include 90 days of experience using the My Client Center interface, and a modest $1,000 total spent within the interface.

To those who might think of this initiative as aimed at mainly small webmasters, Singh points out that the programs will benefit professionals who work at companies of varying sizes. Moreover, since this is just the first step in Google’s now-formalized approach to encouraging and training what amount to resellers, Singh agrees that “there is the opportunity to differentiate further,” adding more features and different streams in future. “For some time now, Google has recognized that there have been thousands of third parties doing this [promoting AdWords to their clients] organically on their own,” says Singh.

This recognition has not yet translated into a formal triage of small-timers vs. big spenders, it seems, even if one is aware that big spenders have important informal relationships with Google’s sales teams. For now, Google’s initiative seems intended to ensure basic competence amongst those advocating PPC marketing, while giving them some resources to better do their jobs. These include tips on what to charge for professional services. This handy primer does come off a bit funny given the high percentage of Googlers who are much more intimately familiar with the world of salary and options than they are with the much grittier realities of open marketplace competition. Maybe it’s a good thing that Google doesn’t get too esoteric with discussions of the business models of your average webmaster, as that reality could be sobering enough to make one want to come up with a better business idea, like, say, starting a new search engine.

If Google Advertising Professionals are tantamount to resellers, formalized recognition in the form of reseller commissions can’t come too soon. For now, Google has no stated plans to offer such commissions, but one assumes that could change.

Posted by Andrew Goodman

 

Tuesday, November 09, 2004

Rising Online Ad Tide Lifts Many Boats… Even Microsoft’s!

Steve Ballmer says Microsoft will double its online ad revenue in five years. Might we suggest they pick up the pace?

Posted by Andrew Goodman

 

CNN Covers FireFox

You know something is gaining serious public mindshare when it’s covered by mainstream news sites. Now that Firefox 1.0 has officially been released, big news outlets like CNN are taking note.

Of course, that might be because CNN is owned by Time Warner, which owns Netscape, which owned Mozilla, which created Firefox. But I’m willing to bet that Firefox 1.0 is big enough news that other major news sources will soon be covering it.

Posted by Cory Kleinschmidt

 

Saturday, November 06, 2004

Blogs We Love

PaidContent.org, by Rafat Ali, is one of the most informative and exhaustive blogs covering the online content industry. I highly recommend it. However, it is a chore to read due to the site’s perplexing layout and navigation. Please, Rafat, give your excellent site a much-needed face lift soon!

Posted by Cory Kleinschmidt

 

Wednesday, November 03, 2004

Evenin’, Squire (Can We Interest You in an Alarm System?)

Charlene Li weighs in with an excellent post about the significance of Google’s acquisition of Keyhole. Geography is quite often a proxy for income and other demographic characteristics, so deeper use of such technology could help online advertisers target better without necessarily gathering extensive info on users.

Li points out:

What if Google could overlay Claritas PRIZM segments against users’ IP addresses – and allow advertisers to adjust their bids up if certain households searched on specific terms? For example, if a search user was identified to be in the “Landed Gentry” segment, the advertiser may be more willing to pay more for this person’s attention.

Although such targeting would only be a rough guide in some cases, in specific sub-sectors, such as the so-called “country squires” subsegment of the Claritas profiling, IP targeting would map very closely to the target demographic.

In the Greater Toronto Area where I live, the very large, newly built homes in certain areas around the suburb of Aurora, for example, rest on large lots. In certain neighbourhoods, you’d have to travel a fair distance before you bled into a non-affluent one. And certain “exurbs” such as parts of North Oakville, Georgetown, and Caledon not only harbor large lots with large homes, but in some cases, brand new schools, brand new office parks, etc.

Unraveling the mysteries of urban, suburban, and exurban geography is a matter for anthropologists and marketers alike. Those marketers who do a better job of it will clean up, as they so often have.

In Mississauga, an established but still-expanding suburb west of Toronto, there are clusters of brand-new buildings which employ thousands of knowledge workers for larger companies such as Microsoft. Depending on IP address, one could begin targeting these “business users,” confident in the knowledge that they’re more likely to mean certain things when typing certain keyword searches.

As it stands, Google AdWords offers a sort of “do-it-yourself” version of IP targeting, whereby one could theoretically target various slivers of the urban/suburban/exurban geography.

But until companies like Google build “prefabricated” versions of this to help their advertisers achieve substantive goals without having to become amateur demographers and geographers, advertiser adoption may be slow.

Depending on how affluent or how business-oriented certain subsectors are, the bidding wars on certain keywords could become ferocious once advertisers became more certain about the characteristics of those viewing their ads.

It’s clear that as things currently stand, keyword bidding is quite primitive. On Google AdWords, niche or B2B advertisers find it difficult to exclude mass-market searchers on certain terms, because meanings overlap. The result is that the CTR drops too low, and these advertisers’ ads get disabled… so they’re stuck advertising on extremely narrow terms and hoping that someday they’ll show enough ads to their target audience to make a difference.

Being able to experiment with different response rates for different IP-mapped “user profiles” (for example, allowing one to sell to known business users and home users living in certain areas) would offer greater predictability, and would result in higher CTRs. Search (and paid listings) would become more relevant, and highly focused advertisers wouldn’t be punished or disabled based on the popularity of certain keywords in the mass market.

Imagine if you sold riding mowers and wanted to be absolutely sure to grab the attention of people with “lawns” that are roughly between 1 and 5 acres. You probably already know how to send direct-mail pitches to lawn-care companies. But what about those who own their own mower? Do you run a multi-million-dollar TV campaign telling everybody in America that nothing runs like a Deere? Or do you launch a new premium product with advanced features that will get people talking over the fence, and crank up bids really high on certain keywords, showing these only to the group of viewers searching from neighborhoods with “those” types of homes, to aim more squarely at a much smaller segment of the market? Once you’ve made a quick inroad into that market, putting your latest toy into the hands of a decent number of sneezers, you should be able to sell them related stuff as well. If you even make that stuff, or have the capacity to retail it, that is.
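In keyword-bidding terms, that strategy reduces to a per-segment bid multiplier. A hypothetical sketch; nothing like this existed in the AdWords interface in 2004, and the segment names and multipliers are invented:

    # Map inferred IP-based segments to multipliers on a base keyword bid.
    SEGMENT_MULTIPLIERS = {
        "landed_gentry": 2.5,   # affluent exurban homeowners, big lots
        "business_park": 1.8,   # office-park IP blocks, B2B intent
        "default": 1.0,
    }

    def adjusted_bid(base_bid_cents: int, segment: str) -> int:
        """Scale a keyword bid by the searcher's inferred segment."""
        return round(base_bid_cents * SEGMENT_MULTIPLIERS.get(segment, 1.0))

    # A riding-mower advertiser lowballs the mass market but pays up for squires:
    print(adjusted_bid(10, "default"))        # 10 cents
    print(adjusted_bid(10, "landed_gentry"))  # 25 cents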

Maybe the mowing example is a rotten one. Those suckers don’t seem to break down often enough to warrant replacing a perfectly good one. But maybe Toyota could make a Prius Mower or something for those wanting the ultimate compromise between landed comfort and “look at me” social conscience.

This is only an example. The principles aren’t entirely new, but advancing technology combined with the advent of keyword bidding adds plenty of new wrinkles and will open up new opportunities. This is merely a hypothetical view of what should become possible online in the next 18-24 months. Void where prohibited. Wear safety boots while mowing. Do not stick your hand underneath mower, even when shut off.

Yours truly,

Lord Goodman

Posted by Andrew Goodman

 

Monday, November 01, 2004

The Fall and Rise of Online Advertising

Today’s news that online ad serving company DoubleClick may be mulling a sale comes on the heels of a lukewarm forecast for fourth-quarter earnings and a downgrade to “underperform” by Piper Jaffray analyst Safa Rashtchy. (Or “Sara” Rashtchy, as newratings.com referred to “her,” although Safa’s photo would seem to indicate otherwise.) Once the carrier of the torch for the entire online advertising industry, DoubleClick now limps along with slight profitability on anemic revenues of around $75 million a quarter. That’s small potatoes when you’ve spent most of your life thinking and acting like a “big” company.

Meanwhile, Fastclick, a growing startup in the online ad serving space, has closed a $75 million round of Series A financing. Bob Davis, a partner in Highland Capital Partners and former CEO of Lycos, is one of the heavy hitters behind this financing. Let’s hope one of the tough questions asked of Fastclick was “how will your life cycle be any different from DoubleClick’s?” Then again, from a VC standpoint, any life cycle that includes going public and trading above a split-adjusted $100 per share for the better part of a year, as DoubleClick did, has to be considered a “win.”

Fastclick’s product benefits look promising: advertisers would have the ability to control ad delivery based on predetermined metrics like cost per lead, cost per click, and cost per order, and to carefully track post-click behavior. Ultimately, though, the Achilles heel of the business model is the same one that faced DoubleClick (and which now awaits the likes of Google and Overture): publisher-driven disintermediation. Those who control the traffic can squeeze third-party ad technology middlemen. Advertiser demand is clearly there, but Fastclick’s, or any other intermediary’s, ability to deliver the big online reach that makes advanced ad serving and tracking worthwhile is in serious doubt and can change from year to year.
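The tug-of-war between advertiser metrics and publisher reach comes down to effective CPM: what a performance-priced campaign actually earns a publisher per thousand impressions. A back-of-envelope sketch using standard industry arithmetic (the numbers are invented, and this is not Fastclick’s actual engine):

    def effective_cpm(payout_per_action: float, actions_per_click: float,
                      clicks_per_impression: float) -> float:
        """eCPM = payout x actions-per-click x clicks-per-impression x 1000."""
        return payout_per_action * actions_per_click * clicks_per_impression * 1000

    # A $20 lead payout, 2% conversion per click, 0.5% click-through rate:
    print(effective_cpm(20.00, 0.02, 0.005))  # 2.0 -- i.e., a $2.00 eCPM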

Online advertising is dead. Long live online advertising.

Posted by Andrew Goodman

 

Online, Useless Information Just Doesn’t Cut It

I’ve been greatly enjoying the gradually improving My Yahoo! functionality. However, there are certain things about the experience that continue to be needlessly irritating. Not all of this is Yahoo’s fault, but they share the blame.

The Wall Street Journal headline modules seem to be divvied into two types: articles for paid subscribers only, denoted by a [$$$], and free ones. Unfortunately, in practice there is no difference: most of the time I click on one of these headlines, I’m asked to become a WSJ subscriber in order to view the content. Some might indeed want to do this, but a lot of the headlines in question are just garden-variety stories also available through major newswires, and easily accessed with a couple of mouse clicks through Google News.

Another irritant is the use of unreliable third-party data feeds for easily available information, such as a real-time update of the PGA Tour money list. At this time of year, I usually look at the bottom of the list to get the answers to morbid questions like “did Paul Azinger finish 127th on the list?” Yahoo’s partner for this info, Golfserv, takes two days longer to update this information than the PGATour.com site itself, thus necessitating a trip to, well, PGATour.com. The PGATour.com site, along with several others, also seems to update real-time tournament scoring info about 60 minutes sooner than Yahoo’s Golfserv feed.

We can send a man to the moon, but we still can’t… etc.

Posted by Andrew Goodman

 

A Little Poem for Americans

Very soon we’ll vote for a new president.
Or we’ll keep the current edition.
This race has come down to the wire.
Every vote must count this time out.

Forget what the polls say.
Only you matter on Tuesday,
Republican or Democrat.

Kerry or Bush,
Everything comes down to this.
Remember to vote!
Remember to vote!
Your vote is your voice.

😉

Posted by Cory Kleinschmidt
