Monday, April 26, 2010
Google Affirms the Vital Role of Marketing and Advertising Agencies
It’s great news that Google has taken the time to think through the pivotal role of agencies in helping advertisers advertise on the Google AdWords platform, and to release a new AdWords Certification program. As the head of a search marketing agency, I value the fact that Google is explicitly affirming its philosophical support for the agency world at the same time as it releases specific changes in programs and pricing that support that relationship. Official mission statements are important; they ensure that no one at any level in the company is hearing contradictory messages. Sometimes, all it takes for us to be able to work better together is to hear someone say (and write): you’ve got a formal place in our ecosystem, a special place that won’t be interchangeable with everyone else’s, or too easily devalued.
So, the obligatory punch on the shoulder, and “aww shucks, thanks, guys”.
To be sure, no one is naive enough to think that Google won’t also work directly with some advertisers. But there should be no more talk that Google is uncertain in its approach to the agency ecosystem, or that the powers that be at Google somehow want to “cut agencies out of the equation.” You don’t invest in support, agency teams, new certification programs, and new API models unless you’re sincere about it.
Waiving AdWords API fees for agencies using their own bid management tools adds up to a significant chunk of change. It also, as Google notes, leads to more innovation. In developing new ways to automate marketing, developers at agencies (and the end client) won’t have to mentally subtract out the cost of the API tokens when deciding how much time and money to invest in new tools. If some agencies abuse the privilege, that’s easy enough to stop. Tell them to stop it, or the fees will kick back in (and their Partner status could be revoked).
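To put rough numbers on that mental math, here’s a back-of-the-envelope sketch in Python. The usage volume and the per-unit fee are hypothetical placeholders, not actual AdWords API pricing; the point is only that a recurring line item drops out of the build-vs-buy calculation.

```python
# Back-of-the-envelope: what a waived API fee means for a tool-building budget.
# Both numbers below are hypothetical placeholders, not actual AdWords API pricing.

def monthly_api_cost(api_units_per_month: int, fee_per_1000_units: float) -> float:
    """Fee an agency would previously have budgeted for API usage."""
    return api_units_per_month / 1000 * fee_per_1000_units

if __name__ == "__main__":
    units = 40_000_000          # hypothetical monthly usage for a bid-management tool
    hypothetical_fee = 0.25     # assumed dollars per 1,000 API units, illustrative only
    saved = monthly_api_cost(units, hypothetical_fee)
    print(f"Monthly fee no longer subtracted from the tooling budget: ${saved:,.2f}")
```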
Outside the AdWords ad world, this might seem like a minor deal. To those in it, it’s pretty significant because it means Google has indeed evolved into a mature player much as many of us hoped and expected.
Here’s a quick before and after to give you a sense of things:
Before: A confusing Google AdWords Professionals certification that was very easy to achieve and handed out to a wide variety of semi-qualified individuals, with no clear delineation between scrappy upstarts who could pass a simple test and would be happy to help you with your AdWords advertising, and real agencies with infrastructure and a track record of working cooperatively with Google and solving many client problems over time. Later, a Qualified Company certification got bolted onto that. While a step in the right direction, it left too much confusion in the marketplace and didn’t give enough credit to the difference between entities (agencies) and individuals (anyone who can get a decent grade on what amounts to an open-book exam).
After: A redefinition of Qualified Individual status to help individuals showcase their talents to potential employers (rather than competing directly for clients with more qualified agencies or experienced in-house talent), and a redefinition of company certification as Certified Partner status, requiring more rigorous exams and carrying a range of other benefits like a searchable Google Partner listing.
There’s quite a bit more to it, but that’s a start.
I started trying to articulate the case for such an evolution at Google as early as 2005, mainly in both editions of Winning Results with Google AdWords. While many in the space took Google’s informality at face value (panting with lust at any announcement of any kind of Google certification), I always figured they’d have to take another crack at this. The ecosystem of resellers and partners, assuming it demonstrates its value and shows itself deep, wily, and resilient enough to maintain customer relationships rather than being disintermediated or crushed, has to be treated formally as such, much as it always has been in the technology world, with companies like Microsoft serving as the global standard (though there have been hundreds of others). As Google began rolling this type of thing out with Google Website Optimizer and Google Analytics (as strange as it is to be a “reseller” of free products), the logic became clearer, and you knew, or hoped, that Google would soon be on its way towards formalizing those relationships on a few fronts.
The old approach and the old programs amounted to us out here being asked to “fan” Google on a Facebook page, without much interaction, formality, or “anything in it for us”; as a result, Google couldn’t ask for much on the other side in terms of stated qualifications, business characteristics, more rigorous certifications, and so on.
The new approach takes aim at the future and walks us all kicking and screaming into adult relationships. The old, informal ways were fun and we will miss them. But they’re a thing of the past.
I’ll leave off by quoting at length from Winning Results with Google AdWords, 2nd ed. (2008), where I addressed this sort of thing.
“Third parties often advise clients on how to use AdWords, or directly manage complex campaigns. … Observing Google’s progress in dealing with the environment of marketing and advertising agencies, one gets the sense that they have never fully given up on the idea that advertisers really should be coming directly to them for advice. However, this situation appears to be improving.
A Google Advertising Professionals (GAP) program, launched in November 2004, was an interesting initiative that was supposed to sort out qualified from unqualified individual AdWords campaign management practitioners. A company-wide (agency) version of this is also available. This is more of a training and indoctrination program than anything else, however. The reward to the qualified professionals and agencies is minimal at best, though ostensibly it helps advertisers avoid working with “hacks”.
Agencies certainly get much less out of Google in terms of financial rewards (such as a commission) than they have in any relationship in the history of advertising. On a variety of fronts, including the Google-agency relationship, observers have asked the question: is Google sucking the proverbial oxygen out of the room? While consultative relationships have improved and become more formalized — a key improvement, to be sure — many of the leading AdWords consultants and evangelists must make their living from service fees alone … while Google’s extreme profit margins continue to fuel the company’s growth. There are practical hurdles to be addressed before such traditional advertising industry practices can be adopted, particularly in the “geek culture” which has served Google so well. However, the goodwill … of the search marketing agency community … may hinge on a recalibration of their financial relationship with Google.
In its formative years, having the right (geeky, iconoclastic, world-beating) attitude at the right time was a big part of what made Google into a global powerhouse. Some critics predict that this same attitude could be its undoing. Experts believe that the degree of cooperation with the developer community (and I would add, the marketing ecosystem) will determine whether the company has the staying power of a Microsoft.
Through the back door, Google may be studying ways of responding to the above analysis. Beyond AdWords, the company has new, highly technical products, like Google Analytics and Google Website Optimizer. It has initiated partner and reseller programs for these products. By instituting criteria for membership, working closely with the community on product development, and figuring out ways of steering valuable consulting business to such resellers and partners, Google can study the ins and outs of forming such productive relationships. Such relationships seem to be founded on classic models common in the software industry, especially in high-ticket enterprise software. What makes this unorthodox (as usual) is that Google’s products are often free, and many of the customers for them are small to midsized businesses. What will it mean for my consulting firm to “resell” Google’s free product to a small customer, I wonder? Like many others, including Google themselves, I can’t wait to unravel that puzzle. …”
Posted by Andrew Goodman
Twitter Ad Potential: Huge (Source: History, Users’ Love of Searching)
Regarding that last post about Twitter and monetization: I haven’t changed my mind about most of it, except for the projection that Twitter would put up very modest ad revenue numbers in its first two years. That part, I now realize, is wrong!
Certainly, they’re well behind Facebook in many areas (revenues included) and possibly will continue to be forever, but what do they have going that Facebook doesn’t have as much of yet? Search! (Fascinating piece today by Eli Goodman of comScore: What History Tells Us About Facebook’s Potential as a Search Engine, Part 1).
Goodman’s point so far seems to be that as search improves and as users come to expect it to be highly useful, usage increases, familiarity with the tools increases, etc. This is going to happen with Facebook, and it’s going to happen with Twitter.
By contrast with Facebook, though, Twitter already gets 19 billion monthly searches — about 19% of what Google does in a month. Astounding. And that’s with a search platform that often doesn’t work well, or sometimes doesn’t return any results at all. Twitter searching is going to grow to incredible levels. And where inventory and granularity are that huge, even very cautious forms of monetization lead to sizeable revenues and positive feedback loops in CPM rates and user satisfaction.
So I’m coming to the realization that 2011 is going to be a strong year for Twitter’s ad revenues, and 2012 could shock people.
A huge wrinkle here, though. Those supposed 19 billion monthly searches count API calls from third parties, and that would include standing queries from users, more like how people use feeds to display their favorite content. But hang on, isn’t that a good thing? That’s great contextual information and where there is such great contextual information, eventually there will be ad deals, and ad revenues. Sure, there will be ad-free ways to use third party tools, just as some advertising will actually appeal to users (or at least they will tolerate it).
Based on a more conservative definition of a “search,” let’s dial the 19 billion back, then, to around potentially one billion actual searches per month in 2011 for either Twitter or Facebook (so, more like 1-2% of Google’s overall volume). That’s still impressive. Based on Eli Goodman’s logic, that could certainly lead to a snowball effect of bona fide search product development and bona fide user addiction. In essence, the search product and ad product teams at Twitter and Facebook alike won’t be able to hire people and build products quickly enough.
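For anyone who wants the arithmetic spelled out, here’s a quick sketch using only the round numbers already quoted above (the post’s own estimates, not fresh data):

```python
# The arithmetic behind the volume estimates above (rough, round numbers from
# the post itself, not measured data).

twitter_reported = 19_000_000_000    # reported monthly "searches", incl. API calls
twitter_share_of_google = 0.19       # "about 19% of what Google does in a month"

implied_google_volume = twitter_reported / twitter_share_of_google   # ~100 billion
conservative_actual = 1_000_000_000  # dialed-back estimate of bona fide searches

print(f"Implied Google monthly volume: ~{implied_google_volume / 1e9:.0f} billion")
print(f"Conservative Twitter/Facebook share: {conservative_actual / implied_google_volume:.1%}")
```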
Promoted tweets, then, should be viewed as just a light pilot project to try something sort of “alternative” in the space for a reasonably guaranteed amount of cash. Down the road, Twitter can monetize something we’re all very familiar with as the highest-CPM, win-winningest digital advertising channel: search and keywords. I doubt Twitter’s founders or any of the early adopters predicted this type of user behavior in the early going. Certainly, it’s a credit to them and their far-sighted investors that they all bet big on the potential and the direction of user excitement, rather than trying to get too specific, too early on, about how it would get used or how it would make money.
Labels: facebook, monetization, twitter
Posted by Andrew Goodman
Tuesday, April 13, 2010
Twitter’s Monetization Model: On the Mark, or Off-Target?
As Twitter moves to pilot its first experiments in monetization, it might be interesting to speculate on its prospects for success. To help, I’ll go through some of the elements of success and failure that have been proven in the last twelve years or so of online advertising experimentation. Without all of these elements being in place, ad-supported models have tended to fail.
1. Large enough audience to matter. Wrapping some ads around content or functionality geared to a relatively small audience is tricky on a number of levels. First, no one in the press cares, and investors don’t care. Most importantly, advertisers and agencies don’t care, since there’s not enough to buy, so you get lumped into remnant or at least underpriced network inventory unless you’ve got a really smart little sales force. Second, any hiccup gives you a greater chance of killing the golden goose of whatever you wrap the ads around. Third, you lack statistically significant data for testing and refining, so it’s hard to perfect. Fourth, related to the third point, dipping a toe into the water becomes difficult. Large publishers can run tests without alienating anyone as they test the model in a small sliver of the content.
2. Targeting by keyword. Publishers and ad mavens have bent over backward to insist that targeting can be based on concepts, personalization, demographics, and factors other than keywords. Even Google, the King of Keywords, began fairly early in its attempt to paint the keyword as only one sub-facet in the global effort to better align advertising with user tastes and intent. (Bonus: that effort to blend into the woodwork might have helped Google in court if trademark and patent lawsuits really started to escalate out of hand, or if they started losing cases so badly that they’d need to substantially revise their business model ahead of schedule.) Deny it all you like, but keywords still “click” with advertisers. Users like them too, because it’s a way of seeing relatively relevant ads without feeling too creeped out. Keywords triggering relevant text ads and offers are the display-advertising-in-content cousin to permission marketing as it was conceived by Seth Godin for email. Somewhere, a line can get crossed. Keywords do a really great job of helping advertisers and users connect without that line being crossed as often.
3. Doesn’t get in the way, or even at times enhances the experience. Advertising is a necessary evil to some, but to a substantial part of the population, it’s a buying aid or even a cultural experience. Glossy ads in fashion magazines are part of the “art” and “positioning” and are seen as less intrusive than advertising that really “gets in the way” of reading an article online. The same goes for billboards by the highway: an eyesore to some, they’re a part of cultural history to others — and hence, provide free buzz over and above the advertising cost. Burma Shave was before most of us were born, but chances are, you’ve heard of the roadside signs.
4. Is in a place online that people willingly go to or are addicted to, rather than being an app that is a bit cumbersome to use, take-it-or-leave-it, overly incentivized (paid in points or cash to “surf”), or weakly appreciated but maybe a flash-in-the-pan. Related to this, the user base has to understand what the owners plan to do around advertising and what kind of “trade-off” they can expect. Do they get involved in using something for one reason, then find it’s infected their user experience or device (i.e. “scumware”)? Or is the format and the trade-off relatively transparent?
5. Isn’t susceptible to “banner blindness”. For the time being, we can consider this one relatively unimportant, as initially, enough advertisers will be lining up to try new things where the audience is big enough and attention can be grabbed. But performance marketers are turned off by ads that don’t perform, and historically these types of ad formats have had limited upside when compared with personal, anticipated, and relevant communications (especially when the latter are connected with keywords). You can be “big” with the support of brand-building advertisers, but with the approval of direct marketers on top of that, you can be huge… because then any advertiser, large or small, can justify it to themselves or to someone on their board of directors. And agencies too can come up with those justifications.
To look at some quick examples:
- Intrusive or oversized display ad formats — leaderboards, “popovers,” garish animations, etc. — have had mixed success. They’ve driven online advertising to a degree, but somehow got surpassed by little old search, despite their reach. That’s because they fail on counts 3, 4, and 5, and aren’t even all that great on 1 and 2.
- Weird apps like Pointcast, eTour, and Gator eventually failed because people uninstalled the apps, or never installed them in the first place. Performance was uneven and users squealed. To incentivize users to do things they wouldn’t otherwise do, you either deceive them or pay them too much (thus killing profit). Fail all around.
- Point 4 relates to Facebook — in both senses. The network effect and addiction factor actually outweigh the fact that Facebook has been particularly brazen in doing wacky, unpredictable, privacy-invading things to its users. Facebook is very strong on point 1 and has point 2 covered also. Because its audience is very large, it can be cautious relating to points 3 and 5, monetizing below “potential,” thus leaving long-term potential on the table. Huge win.
So how about Twitter? Twitter’s scheme sounds like it will largely succeed on points 1 through 4. The ad revenue, once disclosed, will appear pitifully small for the first year or two. As long as trust is built gradually and testing provides insight, that revenue should pyramid up over time.
Some will question whether users will remain addicted to Twitter long term. Facebook is an entire social environment, and Twitter still feels like a “feature,” a quick hit, despite a large user base on paper. That one hangs in the balance. Perhaps the litmus test for any would-be top-tier destination would be: are users choosing to download and keep their favorite mobile app related to that content, brand, function, or community, in the most accessible place on their mobile device? Will people get bored with them and stop? Will ads be easier to ignore on mobile devices? Will people look for versions of apps that allow them to ignore ads? (That’s where point #3 really comes in.)
Change will be rapid, but based on these criteria, it appears that Twitter has the correct fundamentals and the right strategy in place for a long-term win. But with Facebook holding a two-year head start here, you still get the nagging feeling that Twitter just needs to keep hitting certain user targets and to look reasonably dangerous revenue-wise, with the more realistic goal being a sale of the company to Google or Facebook.
Labels: online advertising, twitter
Posted by Andrew Goodman
Wednesday, April 07, 2010
Don’t Go to Google, TripAdvisor, or OurFaves for Restaurant Reviews
If you come to my town, where’s the best place to go to look for that perfect restaurant, or opinions about a place you’re considering dining at? Me. Seriously.
And maybe a few of my friends.
We know the truth.
You can sometimes get some of that truth from Toronto Life and Zagat.
Once in a while, we’ll maybe go write a review on Yelp. But probably not. If you ate at 50 places a year and 30 of them were really good, you’d tire of writing them all up.
So, if you go to Google’s review aggregation that includes results from TripAdvisor, OurFaves, and Google itself… you’ll see a bunch of inexplicable one-star reviews for some of the best restaurants in the city.
“I could barely see my food…” Have you heard of ambience?
“The staff was unfriendly…” … after you put ketchup on the filet of sole.
“Cramped…” Sorry it isn’t the Rainforest Cafe.
Don’t get me wrong: I’m a big fan of user-generated content and recommendations. Unfortunately, when you go to TripAdvisor, you often have to wade through the most inexplicable, knuckle-dragger “reviews” of some of the best hotels and restaurants known to mankind.
It’s also an interface issue with Google’s review aggregation, though. The Harbord Room actually averages four stars on Yelp… but you wouldn’t know this from Google’s tally, which makes it look like there are a lot of dissatisfied visitors, and the average looks like it’s just over one star.
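A purely hypothetical illustration of the problem: if the aggregated tally draws on a thin, skewed sample of reviews, the displayed figure bears little resemblance to the fuller picture. The counts and star ratings below are invented, not the Harbord Room’s actual reviews.

```python
# Hypothetical illustration: a thin, skewed sample can make a well-loved place
# look awful. All counts and star ratings below are invented.

yelp_reviews = [4, 5, 4, 4, 3, 5, 4, 4]   # the broad base of opinion on one site
aggregator_sample = [1, 1, 2]             # the few gripes an aggregator surfaces

yelp_average = sum(yelp_reviews) / len(yelp_reviews)
aggregator_average = sum(aggregator_sample) / len(aggregator_sample)

print(f"Full review base:    {yelp_average:.1f} stars")        # ~4.1
print(f"Aggregator's sample: {aggregator_average:.1f} stars")  # ~1.3
```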
You know what? Just let me make the reservation, and if the place is no good, I’ll take full responsibility. 🙂
In the meantime, though, you could trust good reviewers like this guy.
Labels: yelp-or-bust
Posted by Andrew Goodman
Can Search Engines Sniff Out “Remarkable”?
I never tire of listening to experts like Mike Grehan speaking about the new signals search engines are beginning to look at, because it’s so important to bust the myths about how search engines work.
To hear many people talk, today’s major engines are faced with little more than a slightly beefed-up, slightly larger version of a closed database search. Need the medical records for your patient Johnny Jones from your closed database of 500 medical records? Just type in johnny or jones or johnny jones, and you’re good to go. Isn’t that search, in a nutshell? It is, if you can guarantee that you’re referring to a nutshell like that. But with web search, it’s nothing like that.
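As a toy sketch of that “nutshell” case, here is roughly what a closed-database lookup amounts to; the records and field names are invented for illustration:

```python
# A toy "closed database" lookup: the nutshell case described above.
# Records, names, and file paths are invented for illustration.

records = [
    {"id": 17, "first": "Johnny", "last": "Jones", "file": "chart-0017.pdf"},
    {"id": 42, "first": "Mary",   "last": "Smith", "file": "chart-0042.pdf"},
]

def lookup(query: str):
    """With a tiny, well-labeled collection, simple term matching is enough."""
    tokens = query.lower().split()
    return [r for r in records
            if all(t in f"{r['first']} {r['last']}".lower() for t in tokens)]

print(lookup("johnny"))        # finds the one matching record
print(lookup("johnny jones"))  # ditto; no ranking or intent-guessing required
```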
The World Wide Web now has a trillion pages or page-like entities… that Google knows about. (They don’t know what to do with all of them, but they’ll admit to the trillion.) Some observers estimate that there will soon be five trillion of these in total, too many to index or handle. Who knows, maybe 10% of that could be useful to a user or worthy of indexing. But until some signal tells the search engine to index them in earnest, they’ll just sit there, invisible. That’s out of necessity: there’s just too much.
The difference isn’t only quantitative, it’s also qualitative. User queries have all sorts of intents, and search engines aren’t just trying to show you “all the pages that match”. There are too many pages that match, in one way or another. The task of measuring relevancy, quality, and intent is far more complex than it looks at first.
And on top of that, people are trying to game the algorithm. Millions of people. This is known as “adversarial” information retrieval in an “open” system where anyone can post information or spam. The complexity of rank ordering results on a particular keyword query therefore rises exponentially.
In light of all this, search engines have done a pretty good job of looking at off-page signals to tell what’s useful, relevant, and interesting. The major push began with the linking structure of the web, and now the effort has vastly expanded to many other emerging signals; especially, user behavior (consumption of the content; clickstreams; user trails) and new types of sharing and linking behavior in social media.
This is a must, because any mechanical counting and measuring exercise is bound to disappoint users if it isn’t incredibly sophisticated and subtle. Think links. Thousands of SEO experts are still teaching you tricks for how to get “authoritative” inbound links to your sites & pages. But do users want to see truly remarkable content, or content that scored highly in part because someone followed an SEO to-do list? And how, then, do we measure what is truly remarkable?
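To make “mechanical counting and measuring” concrete, here’s a toy PageRank-style iteration over a tiny invented link graph. It’s nothing like a production ranking system; it just shows how a manufactured link (“spam” below) mechanically boosts whatever it points at.

```python
# Toy PageRank-style iteration over an invented four-page link graph, to make
# "mechanical counting" concrete. Not how any real engine ranks pages today.

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "spam": ["c"],   # a manufactured "authoritative" link pointed at c
}

damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # simple power iteration; converges quickly on a graph this small
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

for page in sorted(rank, key=rank.get, reverse=True):
    print(page, round(rank[page], 3))   # c benefits mechanically from the spam link
```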
Now that Twitter is a key source of evidence for the remarkability of content, let’s consider it as an interesting behavioral lab. Look at two kinds of signal. The first is where you ask a few friends to retweet your article or observation, and they do. A prickly variation of that is where you have a much larger circle of friends, or you orchestrate semi-fake friends to do your bidding, with significant automation involved.
But another type of remarkable happens when your contribution truly makes non-confidantes want to retweet and otherwise mention you: when your article or insight achieves “breakout” beyond your circle of confidantes, and generates further confirming signals of user satisfaction later on, as people stumble upon it.
Telling the difference is an incredible challenge for search engines. Garden variety tactical optimization will work to a degree, mainly because some signals of interest will tend to dwarf the many instances of “zero effort or interest”. But we should all hope that search engines get better and better at sniffing out the difference between truly remarkable (or remarkably relevant to you the end user) and these counterfeit signals that can be manufactured by tacticians simply going through the motions.
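As a purely hypothetical sketch of the distinction, consider a crude “breakout ratio”: what share of retweets came from accounts outside the author’s existing circle? The data, field names, and the metric itself are invented for illustration; real systems would weigh far richer signals.

```python
# Hypothetical "breakout ratio": what share of retweets came from accounts
# outside the author's existing circle? All names, data, and the metric itself
# are invented for illustration.

def breakout_ratio(retweeter_ids, follower_ids):
    """0.0 = everything came from your own circle; 1.0 = all strangers."""
    if not retweeter_ids:
        return 0.0
    outside = [r for r in retweeter_ids if r not in follower_ids]
    return len(outside) / len(retweeter_ids)

followers = {"pal1", "pal2", "pal3"}

orchestrated = ["pal1", "pal2", "pal3", "pal1"]           # friends doing your bidding
organic = ["pal1", "stranger9", "stranger12", "blog_x"]   # strangers picked it up

for label, retweeters in [("orchestrated", orchestrated), ("organic", organic)]:
    print(label, round(breakout_ratio(retweeters, followers), 2))
```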
Labels: search engine relevancy
Posted by Andrew Goodman