Archive: March 2008

Monday, March 31, 2008

Google: tooting their own horn

A little self promotion never hurt anyone, right?

That’s the attitude Google adopted in recent months as they began aggressively promoting their content network, albeit with some help from their friends.

Google feels strongly about their content network – namely, content targeting’s ability to get Google AdWords advertisers’ names out quickly, easily, and potentially at a lower CPC than a traditional AdWords campaign – and they want everyone to know about it. That’s why Google advertisers have recently been direct-marketed with “success stories” about happy clients using the content network.

It’s particularly interesting to note that today’s Google success stories are pushed out to advertisers right inside the AdWords interface, in an announcements area, so it seems almost part of the navigational experience.

Take the SEM agency SearchRev, for example. They’re gushing over Google AdSense after seeing retail clients increase revenue by 25 per cent since implementing it. SearchRev used the content network to test out a few “general” type keywords that didn’t include clients’ brand names. As a result, retailers successfully introduced themselves to new customers who hadn’t encountered them before. Even though most of those newbies didn’t click through the ads, it didn’t matter; many of them ended up searching for the retailer later on and joining the company’s growing list of keyword search targets.

Since SearchRev specializes in SEM, and thus qualifies as a “third-party agent,” eyebrows may be raised at the thought of them partnering with Google. Does a third-party endorsement look suspect when it’s done within the AdWords interface, providing the agency with exclusive exposure?

In theory, this new reciprocal promotional arrangement with SearchRev could make Google a hypocrite. But can we really fault them for finding a creative way to advertise? They are a corporation, after all. More importantly, recent improvements to AdSense suggest they’re listening to customer demand and backing up their self-promotion with a quality product.

Despite its relative success, Google’s content network developed a somewhat shady reputation among advertisers in recent years, with rumours of “built-for-AdSense” booby-trap sites surfacing. Google responded with several positive changes to the content network over the last couple of years. These include placement performance reports and various new flavors of site exclusion.

Sure, Google’s running the show a bit differently than in the past, but given their growth imperatives today, it’s unsurprising that they’re working harder to push their product.

Labels: content targeting

Posted by Matt Larkin

 

Saturday, March 29, 2008

Google Canada Goes “Lights Out”

Google.ca has a new look today, in conjunction with Earth Hour.

It’s a good idea, but practically speaking, I have mixed feelings about Earth Hour. What am I going to do that’s so environmentally-aware at 8:00 p.m. on a Saturday night in March? Rollerblade in the twilight, through melting ice and grit, in frigid weather? Watch TV in the dark? Go to a candlelit restaurant and talk to myself in a corner? See a movie at the power-hungry multiplex?

And before any of you wiseacres say “just do what you and your wife did last time there was a big blackout,” geez Louise, she’s out of town, OK?! Why do you think I’m posting so often on a Saturday?

Posted by Andrew Goodman

 

EU Bans Fake Reviews

Is this cool or what? In Europe, fake reviews and websites purporting to be from customers will soon be banned.

Author beware: the law will also apply to authors who go onto book sites “such as Amazon” to write glowing reviews.

The law is set to go into effect in April.

No word on how regulators intend to track down brothers-in-law, cousins, moms, and hirelings who are asked to glowingly write about your villa. Or whether you’re allowed to ask your friends to go and say nice things about your book.

Labels: ugc

Posted by Andrew Goodman

 

Worth a Look: Google Uses Massive Amounts of Data to Combat Fraud

In case any of the naysayers are still listening, here’s the scoop on how Google looks for patterns so it can do the best job of proactively filtering fraudulent clicks.

No, Google’s post does not give away all the methods, of course, but engage Googlers like Shuman Ghosemajumder in conversation and you’ll find they’re willing to go into a little more depth. For example, fraudsters know that clicking on a certain ad will create anomalies in the data that will trigger further investigation of their IP addresses, methods, or the targeting of a particular advertiser’s ads. So they try to “even out” the anomalies by creating more impressions to balance out the clicks, among other attempts to make the fraud seem more “real.” But all this does is dig them in deeper, creating other anomalies.
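That dynamic (padding clicks with extra impressions) lends itself to a toy illustration. Here is a minimal sketch of a CTR-baseline check, assuming invented source names and thresholds; Google’s actual filters are, of course, far more elaborate and proprietary:

```python
def flag_anomalies(sources, baseline_ctr=0.02, tolerance=3.0):
    """Flag traffic sources whose CTR deviates wildly from a baseline.

    Padding clicks with fake impressions just shifts the anomaly: a source
    that would be flagged for implausibly high CTR gets flagged for
    implausibly low CTR (or sheer impression volume) instead. Toy model only.
    """
    flagged = []
    for name, impressions, clicks in sources:
        ctr = clicks / impressions
        if ctr > baseline_ctr * tolerance or ctr < baseline_ctr / tolerance:
            flagged.append(name)
    return flagged

sources = [
    ("honest-site.example", 10_000, 210),    # ~2.1% CTR, normal
    ("fraud-ring.example", 1_000, 400),      # 40% CTR, obvious
    ("padded-fraud.example", 400_000, 400),  # clicks "evened out" -> 0.1% CTR, still anomalous
]
print(flag_anomalies(sources))  # ['fraud-ring.example', 'padded-fraud.example']
```

Note how the third source, which tried to balance its clicks with a mountain of impressions, simply trips the check from the other direction.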

To sum up: we don’t see much click fraud these days with our clients’ accounts, due to meticulous campaign organization, careful attention to geotargeting, filtering methods on our campaigns, and a variety of other cautious settings, to say nothing of Google’s proactive filtering.

To verify that an account is running without major fraud problems, I find a fourth-party audit service such as PPC Assurance helpful, as stated here before.

If you say that in a conference session on click fraud, the anomalously high proportion of bearishly inclined attendees (who are also anomalously likely to fill out the comment form at the end) is unlikely to give the above analysis a high rating, or to give Shuman a 5 out of 5 and say that his talk “kicked ass.” It doesn’t change the facts, though.

And that ties into a future post about the quality of speakers at conferences, and the weakness of evaluation stats. Highly polarizing speakers can be near the top of the list in terms of the quality of the information they provide, but the combination of 1’s and 5’s they receive can actually make them look “bland” at a 3.3 rating. Talk to you about this next week.

Labels: click fraud

Posted by Andrew Goodman

 

Friday, March 28, 2008

How Yahoo Could Avoid Microsoft – Part 1

The following went out to the Traffick Occasional newsletter list a few days ago. Here’s the reprint.

Hi all,

As usual, it’s been awhile.

So our industry is at something of a crossroads again. This watershed moment is the will-they-or-won’t-they question of merging Yahoo into Microsoft to form a giant online powerhouse.

My main take has been that consolidation in the paid search platforms currently operated by Microsoft and Yahoo would make life easier for us marketers day-to-day. Consolidation of the search audiences would also provide the combined company with more data and make it more competitive.

http://traffick.c.topica.com/maakEKlabFWUMauv0PjbaeQx03/

http://traffick.c.topica.com/maakEKlabFWUNauv0PjbaeQx03/

But search ad platforms are part of a much bigger story – even if we confine the story only to search.

More recently, I posted a promise to map out an interesting scenario for how Yahoo could find its feet in such a way that would satisfy shareholders that they should do a different deal, and not the Microsoft one.

Here goes. These are kind of geek-mode, wacky ideas, so I thought I’d bounce them off loyal Traffick readers for comments. If you have any, please comment.

Part 1 of this exploration comes in the form of a letter I wrote to my colleagues at the consumer review site I co-founded, HomeStars.com. (To see what we’re up to these days, check out http://traffick.c.topica.com/maakEKlabFWUrauv0PjbaeQx03/ — I’m giving you the Toronto home page so you can see what an individual city home page looks like — but it’s now available in your city too!)

I had just excitedly taken in an SES New York keynote by Andrew Tomkins, Chief Scientist at Yahoo Search. I believe Andrew is saying things that say a lot about the future of Yahoo. And it would be a shame if some of this great work were interrupted by an old-school Microsoft takeover-and-purge operation.

Here, slightly edited, was my excitable note to HomeStars colleagues:

“Hi all,

Big news!

Well, I think so, anyway — as it relates to our little world of UGC [user-generated content] which is of course actually the biggest world there is in terms of online content.

I just took in a keynote from one of Yahoo’s chief scientists, Andrew Tomkins. He talked a lot about how the current “face” of search engines has remained quite static given how much the engines have invested and how much growth in content we’ve seen in the past five years. It is unlikely to remain static in the years to come.

The overall topic of the talk was really to drill down further on Yahoo’s recent embrace of open formats / microformats. To take an example, they have worked with beta partners such as Yelp to embed new tags (like hreview, and various other elements of their category and visual structure, such as the star system) and to “interoperate”… the visual result on a search result is a structured type of result on a Yahoo search for a restaurant review that is totally favorable to Yelp because Yelp is communicating all of its pertinent info to Yahoo in a format that Yelp wants (to quote Tomkins directly, ***”we’re structuring it that way because we’re working directly with Yelp, and that’s the way Yelp thought it should look”***), using a contemporary abstract vocabulary. It looks like a little Yelp microsite coming up as one of the top search results, in other words. In even more evocative words, Yahoo is essentially accepting a form of custom Yelp widget, because this is better for users than an uncommunicative one-way display method based on outdated or no standards. That goes beyond Google’s “guessing method” that it uses to sometimes put a few important internal links from a given site in front of a search user. (Google, we love that you try, but you have to admit, open formats are intriguing. And we don’t love it when you remain overly proprietary, and that goes for most other tech companies we want to admire, of course.)
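To make the idea concrete, here is a minimal sketch of an hReview-style block and how a consumer (say, a search engine’s crawler) might pull structured fields out of it with Python’s standard-library HTML parser. The class names follow the hReview microformat vocabulary, but the markup and business values are invented, not Yelp’s actual output:

```python
from html.parser import HTMLParser

# Hypothetical hReview-style markup; class names (fn, rating, summary)
# are from the hReview vocabulary, the content is made up.
SNIPPET = """
<div class="hreview">
  <span class="item"><span class="fn">Luigi's Trattoria</span></span>
  <span class="rating">4.5</span>
  <blockquote class="summary">Great thin-crust pizza.</blockquote>
</div>
"""

class HReviewParser(HTMLParser):
    """Collects text content for a few hReview fields, keyed by class name."""
    FIELDS = {"fn", "rating", "summary"}

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None  # field name whose text we're waiting for

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        for c in classes:
            if c in self.FIELDS:
                self._current = c

    def handle_data(self, data):
        if self._current and data.strip():
            self.fields[self._current] = data.strip()
            self._current = None

parser = HReviewParser()
parser.feed(SNIPPET)
print(parser.fields)
# {'fn': "Luigi's Trattoria", 'rating': '4.5', 'summary': 'Great thin-crust pizza.'}
```

The point of the open format is exactly this: the publisher decides what the fields mean, and any consumer that speaks the vocabulary can extract them without guessing.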

Other Yahoo partners in the early going of beta testing the display of OPEN FORMAT structured content include epicurious and babyzone (or babycenter or babysomething).

This may sound dryly academic but it is a major development in search, IMHO. I would like to talk more about the implications when we have time. Yahoo’s market share is of course lower, but this commitment to openness as opposed to proprietary standards is potentially going to make Google’s methods of spidering reviews and other UGC look awkwardly proprietary. And, if we can get traction with the engine that has 11% market share, it is still better than a kick in the teeth.

The other bullet point here is: I got to ask the only question after the keynote! They really should make a rule that usual suspects like Goodman are barred from the mike 🙂 …

Anyway, I asked:

(1) In terms of recurring elements of UGC page structures, like the “star format” – (Yahoo is displaying red stars for Yelp on their 5-star system), I asked if there might come to be some standardization – would there be an attempt to normalize these or would a site that had “a big banana with a number in the middle” be able to convey that methodology to Yahoo and so forth;

(2) What did he think of the ecosystem if Epicurious or Yelp were ever acquired by Yahoo – does it then become a situation where other players no longer have as good a chance of coming up in search results around, let’s say, restaurant reviews…

His answers were:

(1) For some time to come he would expect many niche industries and sites and user bases to have their own numbering and visual systems, so there would be no standardization forced on them, i.e. the display box might even show a yellow banana with a number in it;

(2) He said the right things about the need for outside content to be valued highly and that a preference for showing users the best third-party results should be “hard-baked” into the design of search.

Now the purpose of the good scientist’s talk was certainly not to explain to folks how they can jump the queue and sign what amounts to a “metadata partnership” with Yahoo…. but that does lead to a whole new discussion of how Yahoo – if the scientists have their way – may be able to replace its non-transparent paid inclusion program with a much richer, “unpaid inclusion” cooperative effort with quality publishers.

**This potential to participate in microformat communications with a search engine confers no promised ranking benefit**, but obviously I’d expect good things from being on a SE’s radar in this way. This is a big departure from the Yahoo of years ago standing up and announcing the format of its paid inclusion program, as if this would solve all the problems of publishers.

The new direction in organic search is, on one hand, open, free, and informational; on the other, you have to think that if they are identifying “ideal types” like Yelp, there will be an indirect bias towards a smaller universe of known players. It’s a new opportunity to get on the ground floor of a genuine and sincere effort to serve users, as opposed to trying to play catchup on a 10-year-old, broken, PageRank system that has allowed all sorts of perverse incentives (the ones that are causing us to bend over backwards trying to get links rather than letting them develop naturally).

For developers, Yahoo are creating a development kit so that abstract formats can be built easily. This sounds promising.

The reality of the massive growth in web content (most of it user-gen) is – something must change so that search engines work better with formatted, quality content, rather than their own proprietary, generic, semi-intuiting way of trying to sort out what’s what. Google long ago broke with the majority of “troglodyte metadata” conventions, but nothing really solid has risen to take its place (Google Base is a failure). I see the new adoption of contemporary open formats by Yahoo as a big step in an evolution towards a more usable web, much more so than, say, the SiteMaps protocol.

Fun fact: in order-of-magnitude terms, the planet is not far from creating as much content daily as it would if, given current cognition abilities and typing speeds, everyone sat down and typed every waking hour. The amount of UGC today is ridiculously high, then, to the point where something fundamental has already shifted: we can already point to and imagine the upper limit, and that hypothetical upper limit looks not much different from what is currently happening (much of this content is private, of course – look at your email and Skype chat logs). [Thanks to Dr. Tomkins for these insights.]

The need for a “communicative” approach in relation to publishers, as Tomkins sees it, is contrasted with the “overly high hopes” search idealists have in things like personalization. Personalization methods may improve results a small fraction of the time, but give worse results the other 98%. User personalization is pie in the sky compared with the potential of improved ecosystem communication with publishers.

Thanks again, Dr. Tomkins.

Best,

Andrew

To sum up: some of what they’re working on is game-changing. It changes the face of “organic search” and so-called SEO. Yahoo has a vision. It’s a compelling one, in my opinion.

But that isn’t the end of the story. It will help to flesh out the vision a little further. I’ll explain how startups like Mahalo are on the right track, but ultimately, utterly wrong. I’ll talk about how Yahoo has it right, if they move forward in a certain direction. And I’ll discuss their target audience and the potential that yes, they could still come back to be a credible alternative to Google in many markets. They could do it with a friendly version of a Microsoft takeover. But they could do it with another partner too.

Part 2 to your inbox in a day or two. Stay tuned.

Really over and out this time,

Andrew Goodman
Editor-at-Large, Traffick.com
www.traffick.com

Founder & Principal, Page Zero Media
www.pagezero.com

Posted by Andrew Goodman

 

Monday, March 24, 2008

Notes From SES… Er, Great Times With Larry Chase and Bryan Eisenberg

As anyone in our industry will tell you, it goes without saying: the people are some of the warmest and most genuine you’ll ever meet.

This year’s SES New York was particularly rewarding from that standpoint. Along with getting to know several people better – yes, even people I already knew, like Kevin Ryan and Rory Brown – I met new faces (to me) like Pauline Ores, the social media expert at IBM. The week was fantastic.

But particular special mention has to go out to my friends Bryan Eisenberg and Larry Chase. Prior to SES, Larry hosted a dinner at one of his favorite haunts in Manhattan, where we got a chance to network with several new people as well as old gurus. Larry’s tales of a long-ago trip across Canada, running out of money, and working his way across the prairies and the Rockies, were particularly fun. (That’s Mona Elesseily and me deciding what to order.)

Larry has a detailed recap of SES New York here. Highly recommended.

As anyone hanging around in the speaker room (Tim Ash, Li Evans, Rory Brown, Matt McGowan) knows from the leftovers they scarfed down, Bryan kidnapped a few of us and took us on a little walking tour of Brooklyn, complete with genuine delicious Brooklyn pizza. Search people are nothing if not authentic.

Thanks, guys.

Labels: bill barnes, brooklyn pizza, bryan eisenberg, jill whalen, larry chase, mona elesseily, pauline kerbici, ses new york

Posted by Andrew Goodman

 

What’s the Deal With GA and Geography?

If you delve into Google Analytics reports by geography in my neck of the woods, you get some curious results: places that are really neighborhoods are classified as municipalities. Places that are only distant memories, officially speaking, are still ontologically in your face in GA reporting.

For example, one site I work with gets a lot of traffic from Etobicoke, Malton, Weston, and Islington. If we’re laying out the information architecture of our website, should we perhaps use Google Analytics as a guide in planning? Definitely not.

Problem: Etobicoke, once a town near Toronto, then a borough of Metro Toronto, and finally, a mere ward district and place with a name that is meaningful from a real estate standpoint, is definitely not a city or town today. Malton is a mere “neighborhood” within Mississauga, although it might once have been a town, city, postal unit, etc. etc. in the distant past. “Islington” is a nice name for a certain intersection and surrounding areas, in loving memory of many decades ago when Islington was a town and postal unit. Rexdale and Weston are much the same as all of the above. The problem is compounded by the fact that many users are still associated with national ISP’s like Rogers and Sympatico that have IP addresses assigned to these locales.

Not only isn’t GA hip to the subtleties, it’s using designations that don’t exist and phantom names-of-things that recall fond memories of malt shops and filling stations from the 1940’s.

We can all agree that an information architecture on a site like Toronto Life should be as flexible as possible, and include neighborhood names and informal names. (I personally get a great kick out of people from West Queen West who write letters to the editor decrying Toronto Life’s version, Queen West West. Ha ha! I’ll say it again – Queen West West! Queen West West! Hope that guy’s reading.)

If it’s neighborhoods you’re after, there are certainly databases of neighborhoods for every city and town out there through major data providers. But if you want to know the difference between a neighborhood, a ghost town, a real town, a city, or a metropolitan area, don’t look to GA. It hasn’t a clue.
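If you do need to act on GA’s geography reports in the meantime, rolling the legacy names up to their present-day municipalities with a small lookup table is a workable stopgap. A sketch covering only the Toronto-area names mentioned above (a production mapping would come from one of those database providers, and the visit figures here are invented):

```python
# Map GA's legacy/phantom locality names to current municipalities.
# Covers only the names discussed above; a production table would come
# from a proper geographic data provider.
CANONICAL_CITY = {
    "Etobicoke": "Toronto",
    "Weston": "Toronto",
    "Islington": "Toronto",
    "Rexdale": "Toronto",
    "Malton": "Mississauga",
}

def normalize_city(ga_city: str) -> str:
    """Return the present-day municipality for a GA-reported locality."""
    return CANONICAL_CITY.get(ga_city, ga_city)

# Hypothetical GA rows: (reported city, visits)
rows = [("Etobicoke", 120), ("Malton", 45), ("Toronto", 300)]
totals = {}
for city, visits in rows:
    key = normalize_city(city)
    totals[key] = totals.get(key, 0) + visits
print(totals)  # {'Toronto': 420, 'Mississauga': 45}
```

The lookup does nothing clever, but it at least keeps “Etobicoke” from showing up in a report as a peer of Toronto.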

Rather disappointing. Beyond that, people who don’t re-examine their geographic assumptions are likely to get tripped up when using the information for real-world purposes.

In general, geography is a lot harder than it looks at first glance, especially when you’re building a website. Looks like that could be an interesting panel for the Local Search tracks.

Labels: google analytics, local search

Posted by Andrew Goodman

 

Saturday, March 22, 2008

The Way That Yahoo Can Resist Microsoft

Yahoo has a lot of substance to it with all kinds of substantial products and services across the globe. Call it peanut butter or call it honey, but you have to admit, a lot of it is pretty sweet (or sticky, or both).

That’s why Yahoo’s chiefs don’t want Microsoft to acquire them. They don’t want all that Yahoo to just disappear.

There are only a couple of credible ways out of this. One way is to convince its board and the suing shareholders that a combination of a radical transformation in the world of search – some elements of which Yahoo’s scientists are all over – plus the familiar consumer orientation that companies like Yahoo have already proven they have in spades – could result in an increasingly vibrant second-place-to-Google entity. Not a mere also-ran, but an also-favorite just a few percentage points back in the pack.

But there are a few elements missing. Microsoft has those elements, but they come attached to a giant steamroller. There are a couple of credible alternatives, no more.

My formula for Yahoo wriggling out of the clutches of the behemoth? That’s a bit of a geek-fest, and at times it’s going to seem kind of old school to new readers of the blog. So instead I’ll send it out to the Traffick newsletter subscriber list (hey, I said this was old school). The newsletter is still free :). If you’re not on the list, you can sign up (see signup box at right). Don’t worry, I won’t pound you with an autoresponder. I send out a little missive to this list every 2-3 months.

Labels: microhoo

Posted by Andrew Goodman

 

‘Jericho’ Canceled Again

It was the viral protest heard ’round the world. So much so that the cancellation of the CBS show Jericho, this time for good, was international news.

For the brains behind the ‘Nuts to CBS’ promotion — Jeff Braverman at New-Jersey-based NutsOnline.com — it must have been a sweet run, anyway. Perhaps this time around, viewers could send boxes of chocolate to their favorite cast members — dark chocolate, of course — to symbolize an empty theater after curtain call.

Posted by Andrew Goodman

 

Wednesday, March 19, 2008

Contextual Ad Relevance Post du Jour: These Are the Mikes I Know

Google must really be having a hard time getting a read on my email thread today.

In this one, I was *not* talking about Mike Grehan. But there he is. 🙂

Labels: mike in manhattan

Posted by Andrew Goodman

 

Tuesday, March 18, 2008

The Long Tail Not Always Good, If Quality Score is Your Thing

I had the pleasure of moderating the panel on Ads in a Quality Score World at SES New York today. Along with two advertiser-side speakers (Joel Lapp and Jon Kelly), Frederick Vallaeys of Google and David Miller of Yahoo weighed in.

Frederick pointed out that very long phrases and very low volume keywords well down the long tail are not necessarily an advantage to a marketer, as they don’t reflect how “real users” normally search. The sweet spot of the long tail is the 2-to-4-word phrase; 5-to-8-word phrases, not so much. Among other things, Google will have such limited data on these that they have no choice but to assign slightly worse quality scores to them.

I did notice this difference in one of our new accounts today. In a group of 2-4-word phrases, most started out with Great or OK quality scores, but some 5-word phrases were marginally Poor. Also, there were subtle differences in meaning between some 4-word phrases and others. For example (fictitious example), “martian loyalty points offer” was OK, whereas “martian loyalty points program” was marginally Poor. Although the difference in meaning was not enough to deactivate either phrase, the latter may well have less predicted relevance to our offer, because it’s more generic. Rather than an ecommerce transaction from a reseller, that user might be looking for an information page from the official Martian Loyalty website.

Basically, then, marketers need to stop asking “to long tail or not to long tail,” and instead, even within the long tail, consider whether they’re getting too fine, or dumping too many irrelevant phrases into otherwise functional ad groups.
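As a rough audit in that spirit, you could mechanically flag the very long phrases before they dilute an ad group. A minimal sketch, using the 2-to-4-word sweet spot from Frederick’s comments as the default threshold (keywords are fictitious, echoing the example above):

```python
def split_by_length(keywords, max_words=4):
    """Partition keywords into the short sweet spot vs. long-tail outliers.

    max_words=4 reflects the 2-to-4-word sweet spot discussed above;
    tune it to your own account's data.
    """
    keep, review = [], []
    for kw in keywords:
        (keep if len(kw.split()) <= max_words else review).append(kw)
    return keep, review

keywords = [
    "martian loyalty points offer",
    "best martian loyalty points program for families",
    "loyalty points",
]
keep, review = split_by_length(keywords)
print(review)  # ['best martian loyalty points program for families']
```

Anything landing in the review bucket isn’t automatically bad; it’s just a candidate for its own tighter ad group, or for the bin.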

This may dovetail with a point made by David Miller of Yahoo Search Marketing. In his approach to explaining a successful campaign structure, he referred a couple of times to “thinking at the ad group level.” Although in a formal sense keywords are evaluated for quality individually, we know that the whole picture matters. So at YSM, if you have a high-impression, loosely-relevant keyword generating a lot of the clicks or impressions of the ads in a given ad group, its lack of targeting could be “dragging down” the quality index for the whole group.
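To see how one loosely-relevant, high-impression keyword can drag down a whole group, consider an impression-weighted average. This is my own illustrative model with invented numbers, not either engine’s published formula:

```python
def group_quality(keywords):
    """Impression-weighted average quality for an ad group.

    `keywords` is a list of (impressions, quality on a 1-10 scale) tuples.
    Illustrative model only, not YSM's or Google's actual math.
    """
    total_impressions = sum(imp for imp, _ in keywords)
    return sum(imp * q for imp, q in keywords) / total_impressions

tight = [(1000, 8), (800, 7), (500, 8)]
with_loose = tight + [(5000, 3)]  # one high-impression, loosely-relevant keyword
print(round(group_quality(tight), 2))       # 7.65
print(round(group_quality(with_loose), 2))  # 4.47
```

One sloppy keyword, because it soaks up most of the impressions, nearly halves the group-level figure; that is the “dragging down” effect in miniature.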

Although the specifics are bound to change, the concept of tight targeting, and making advertisers pay a premium for experiments with loose targeting, appears to be here to stay. There is nothing that says that long tail keywords are always particularly well targeted to a given ad or landing page. Often, they are detritus mucking up the rest of the group – and potentially, your forecasting and tracking efforts.

Related: Keyword Intent: Tidy Campaigns Avoid the Dump and Chase

Labels: ad quality

Posted by Andrew Goodman

 

Monday, March 17, 2008

Ad Exchanges – What You Need to Know

To learn more about the changes in ad exchanges, I interviewed Ramsey McGrory, VP of Exchange Development at Right Media, and Jay Sears, SVP of Strategic Products and Business Development at ContextWeb. If you’re interested in more information on ad exchanges, both will be speaking in the Ad Exchanges are Everything session at SES New York 2008 on March 19, 2008 (Day 3).

1) What are the benefits of using ad exchanges (for publishers and advertisers)?

Answer from Ramsey McGrory:

At its most fundamental level, an exchange addresses underlying needs of buyers and sellers by providing the ad serving platform, community, controls and services.

For a publisher, this means monetizing its inventory as effectively as possible; minimizing the operational workload through automation and a better understanding of the process; maximizing the controls that protect its brand, direct sales, and users; and providing new products to its clients so it can secure a greater share of brand, performance, behavioral and search budgets where possible.

For an advertiser, this means greater access to inventory, greater visibility into and control of pricing, performance, global frequency, messaging while minimizing the ad ops workload to manage it. Exchanges ideally provide protection mechanisms for advertisers as well, preventing ads from ending up on sites with objectionable content and preserving brand integrity. For those who are driving their businesses in Search, an exchange represents an opportunity to create coordinated cross channel marketing campaigns, and create incremental revenue streams as a service provider leveraging the expertise of search into the online display space.

2) How has the emerging ad exchange market been unfolding? Can you detail some significant technological (or otherwise) developments?

Answer from Jay Sears:

The ad exchange space has been emerging at light speed. Much of this is driven by the media and audience fragmentation occurring in the marketplace.

While portals were once the dominant source of news and information, page views on the top 3 portals declined 18% from August 2004 to August 2007 vs. an overall 21% total internet growth in page views. As David Sifry’s Technorati web logs growth chart has shown every year for the past few years, users are moving to the Long Tail – over 120,000 new content blogs are created every day.

Exchanges can present a huge opportunity for advertisers to reach the increasingly fragmented web audience in a single efficient buy. They allow advertisers to trade directly on the exchange. Advertisers can use any pricing model (CPM, CPC, CPA), a variety of ad formats (graphical, rich media, text) across numerous publishers on exchanges.
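When mixing the pricing models Jay mentions (CPM, CPC, CPA) on one exchange, the usual way to compare bids is to normalize everything to effective CPM. A back-of-envelope sketch with invented numbers, not any exchange’s internal math:

```python
def ecpm_from_cpc(cpc, ctr):
    """Effective CPM for a CPC buy: cost per click x clicks per 1,000 impressions."""
    return cpc * ctr * 1000

def ecpm_from_cpa(cpa, ctr, conversion_rate):
    """Effective CPM for a CPA buy: payout per action x actions per 1,000 impressions."""
    return cpa * ctr * conversion_rate * 1000

# A $0.50 CPC buy at 1% CTR and a $25 CPA buy at 1% CTR with a 2%
# conversion rate both clear the same $5.00 eCPM as a flat $5 CPM buy.
print(round(ecpm_from_cpc(0.50, 0.01), 2))        # 5.0
print(round(ecpm_from_cpa(25.0, 0.01, 0.02), 2))  # 5.0
```

That shared yardstick is what lets a seller rank a CPM bid, a CPC bid, and a CPA bid against each other for the same impression.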

3) As we all know, the search engines have been investing more in ad exchanges (i.e. Yahoo & Right Media, Google & Double Click). Where do you see the search engines taking this business? What are the implications of this in terms of online advertising in general?

Answer from Jay Sears:

2008 is the year of the “P Word”—Platform. Yahoo! has APEX, the advertiser and publisher exchange; Microsoft has AdECN, a “network of networks” exchange; AOL has Platform A, its collection of ad networks with various specialties and Google has the DoubleClick advertising exchange.

Display advertising is the next battleground after search. The display market is highly fragmented and Google will have its hands full working to create a dominant position from its current 3% market share (Yahoo! holds a 30% market share). The big media companies must go after the display market to continue to scale their businesses. Search is “supply-gated”—there is only so much of it. In contrast 95% of user time online is spent looking at content. Once an advertiser finds success for a particular offer running on a specific type of content, the advertiser should be able to replicate, or scale, this type of success on an exchange.

4) Where do you see the ad exchange market in the next 5 years? 10 years?

Answer from Ramsey McGrory:

Exchanges will provide a mechanism for buying and selling when each segment of the market is mature enough. For example, because of short supply and because of no accepted industry standards, video ads are generally not sold through an exchange. As video advertising grows and as standards are created, an exchange will become more important to that format. In the online display space, there are many flavors of exchanges currently. I think the market will solidify around a few that can provide and support the tools, services and ecosystem required for buyers and sellers. The rest of the companies will change their business models or integrate with the larger exchanges.

5) When it comes to using ad exchanges, can you suggest a few best practices?

Answer from Ramsey McGrory:

Though it’s not a best practice per se, I think a critical first step is to understand the difference between an exchange and a network marketing itself as an exchange. An exchange is a platform that enables buyers and sellers to work together. It is effectively a technology solution with a set of tools and practices. A network is a company that is buying or selling inventory.

If an ‘exchange’ company comes to you and tries to buy or sell media with you, they are a network. Why does it matter? It matters because each is motivated differently. A network buys and sells media with the goal of maximizing yield for itself. An exchange provides the technology, tools, practices and services to enable buyers or sellers to operate efficiently and their interest is completely aligned with their clients.

I believe there is significant confusion around what an exchange provides and how buyers and sellers leverage it. Many technology providers are building ‘an exchange’ as a point solution, which assumes current ad serving technology, practices and products don’t change and that the exchange is bolted on. What we found is that adopting an exchange methodology is less about buying inventory from or selling inventory to the exchange. It’s about questioning the vision of how technology and services must change in a world that’s different from when ad servers such as DoubleClick were first created (mid/late ’90s).

After the differences are understood, I recommend using an exchange that provides plenty of buyers and sellers to work with, the tools to manage campaigns for performance and delivery, and the controls for advertisers, publishers, networks and agencies to protect themselves and their clients.

Answer from Jay Sears:

Jay believes you need to ask yourself the following five questions to determine which exchanges best suit your needs:

#1. Inventory – what kind of inventory will you find on the exchange? Remnant or premium inventory? Spot market (bid on, similar to how SEMs buy) vs. futures market (reserved/guaranteed inventory, similar to how agencies buy)? Safe for brands or direct response only? Designed for agency and/or SEM workflow?

#2. Pricing models – what pricing models are available? CPM, CPC and/or CPA?

#3. Targeting – what types of targeting are available? Contextual – category or keyword? Behavioral – what types of behavioral targeting? Geo-targeting? Other targeting types?

#4. Formats – what ad formats are available? Graphical ads? Rich media ads? In-banner video ads? Pre-roll video ads? Text ad formats?

#5. Publisher types – what types of publishers are in the exchange? Portals and large sites only? Long Tail sites including blogs and specialized niche content sites? Ad networks? Social media sites?

Posted by Mona Elesseily

 

Thursday, March 13, 2008

Relative Complexities of Paid and Organic Search, and Implications for Marketing Effectiveness

Because there have always been more professional SEO advocates and amateur SEO junkies than paid search practitioners and advocates, many have come to assume that organic search somehow “performs better” than paid search.

In one ugly distortion of reality, an analytics vendor we like continues to give truncated examples showing that bounce rates (or very short visits) on paid search are higher in many cases than they are for organic search. This “might mean you should stop wasting money on paid search and begin focusing more on organic optimization.” It might, but it probably doesn’t. The premise doesn’t lead to the conclusion. Obviously this man isn’t a marketer.

I just had a conversation with a client – one with a big site and lots of both kinds of traffic – that noted their revenue per paid visit is more than double what it is per organic visit. Why the disparity? Doesn’t everyone know that organic is better and we should be doing better with those visitors?

Table 1

Organic search referral revenue per click: 8.5 cents
Paid search referral revenue per click: 19 cents

(Varies wildly by page and keyword – but these are the averages.)
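For what it’s worth, the “more than double” claim checks out against the Table 1 averages. A quick sanity check (a sketch only; the two figures are the client’s reported averages, nothing more):

```python
# Revenue-per-click averages from Table 1, in dollars.
organic_rpc = 0.085  # organic search referral revenue per click
paid_rpc = 0.19      # paid search referral revenue per click

ratio = paid_rpc / organic_rpc
print(f"Paid earns {ratio:.2f}x per click vs. organic")  # ~2.24x, i.e. "more than double"
```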

Not at all. Let’s look at some numbers.

How many landing pages do Google and marketers collectively need to keep track of on the paid side? Back of the envelope, assume 600,000 AdWords accounts of any size or active significance. Assume, generously, an average of 100 landing pages being used for each. No – let’s assume very active ad testing that includes somebody varying destination URLs as part of the test: 150 landing pages per advertiser, on average. That’s 90,000,000 landing pages in the whole Google paid search universe. That’s probably a bit high, but let it go for now.

On top of that, Google knows that the majority of those pages are of a certain caliber and can check them more carefully. They aren’t indexing them per se, but most of these things “make it into the index.” There isn’t a whole other job of kicking spam pages out of the index (though there is something analogous going on… it’s just a lot harder to create AdWords accounts than to spam the organic index).

Google’s whole organic index (not counting the pages they don’t index) contains, perhaps, upwards of 20 billion pages. That’s more than 200X larger than the paid search universe, with less ability to “know” about the intent behind the pages. In terms of where to rank pages on which queries, hey, the organic algo is trying, but it’s bound to be less accurate.
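The back-of-envelope figures above can be reproduced in a couple of lines. (The account, page, and index counts are the post’s assumptions, not measured data.)

```python
# Back-of-envelope: paid landing-page universe vs. organic index size.
advertisers = 600_000           # assumed active AdWords accounts of any significance
pages_per_advertiser = 150      # generous average, allowing for active ad testing

paid_pages = advertisers * pages_per_advertiser
print(f"Paid landing-page universe: {paid_pages:,}")  # 90,000,000

organic_pages = 20_000_000_000  # rough size of Google's organic index
ratio = organic_pages / paid_pages
print(f"Organic index is ~{ratio:.0f}x larger")  # ~222x, i.e. "more than 200X"
```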

On the other hand, paid search advertisers are telling the sorting system which keywords they think they’ll profit from. They’re shaping messages to ensure that only high-intent buyers come to their chosen landing pages. If they send them to the “wrong” page, they don’t make as much money, so they learn to send users to the right pages. The organic search engine might be fond of sending people to “wrong” pages – from a business model standpoint for the site owner, anyway.

The subject probably needs deeper treatment than I’m able to give it here, and my math might be out a bit, but the principle is clear: it’s a no-brainer that revenue from organic searches will be lower than that from paid searches, in part because high-ranking pages may be “wrong” pages from the site owner’s business standpoint.

If that’s such a no-brainer, how come all those SEOs keep telling you different?

Labels: organic search, paid search

Posted by Andrew Goodman

 

Well That’ll Make the AOL-Yahoo Merger Messier, Won’t It?

So AOL acquires huge social network Bebo.

Sort of reminds me of when Overture acquired two search engine companies prior to being acquired by Yahoo. Bulk up when in reality you’re planning your exit. A bargaining chip, if you will.

Posted by Andrew Goodman

 

Tuesday, March 11, 2008

SES Toronto – Call for Speakers

Interested in speaking at SES Toronto, June 17-18, 2008? Canada’s premier search marketing event is fast approaching. I’ve posted the preliminary call for speakers here. At this stage, we’re also entertaining new session ideas.

Posted by Andrew Goodman

 

Do You Lie Wildly About Your Traffic?

I was pleased to see that a friend of mine got a freelance article published in a business publication recently. Good quality piece about the techniques for igniting a web content business.

The company being profiled displayed a certain degree of cockiness about its prospects, but at the same time noted that its growth had “leveled off” and that they needed partnerships or investment to move forward. Fair enough.

The problem was that they weren’t nearly clear enough on their unique contribution to the space… and that they were wildly inflating their traffic claims. I know, because it’s not hard to check up on these things these days. The various tools available will give you the answer within an order of magnitude.

Other popular sites mentioned in the story – such as a well known sports blog – passed the sniff test. Their traffic turned out to be every bit as strong as portrayed.

Not so for the featured company. Who do they think they’re kidding? At this point in dot com history, bragging about traffic you don’t have isn’t going to cut it for long.

Labels: traffic

Posted by Andrew Goodman

 

Friday, March 07, 2008

Meanest Post of the Year: We Already Have a Winner!

All other candidates are conceding. Marc Andreessen’s post of last month on the New York Times Deathwatch contains a scathing indictment of the lack of Internet expertise on its very large board. The concluding sentence encapsulates the whole mean mess in a compact format:

“So, if you want to issue bonds to pay for FCC-approved snack cake manufacturing in a submarine on display at a national park by a sundress-wearing cigarette-puffing Levitra-popping Judy Miller, you’re pretty much set.

Go team!”

So, if you’ve said anything mean to anyone this year, or if your “analysis” of a situation has been a tad “sharp,” relax! Our friendly and incisive Netscape founder has you beat by a country mile!

Labels: media, new york times

Posted by Andrew Goodman

 

Site Performance Increasingly Important to SEM Performance

Google has just released a post indicating that they’re poised to incorporate landing page load time into Quality Scores.

What does this mean? Well… it means they’re about to incorporate landing page load times into quality scores.

[As a side note, much like quality-based bidding itself, it’s possible that they’ve already been testing the inclusion of such site performance variables. After all, they gave the thumbs-down to popups years ago. A variety of annoying and intrusive page designs and ad serving formats have probably come to Google’s attention since then.]

More broadly, it means you’ll be in some trouble if your site profits from advertising, if that advertising is causing pages to be slow to load.

Google is saying you’ll be warned in the keyword status area, so you have a month to make necessary adjustments to lessen load times before your quality score gets whacked.

As for even broader meaning, I think it’s fair to take away the theory that Google does now, and will increasingly in the future, incorporate assessments of site performance into the ranking algorithms for organic search. They’re publicly stating the importance of these factors to users, so take heed.

Labels: quality score, usability

Posted by Andrew Goodman

 

Thursday, March 06, 2008

Holly Asks Ask to Get a Hobby? [At Least They Didn’t Call it a ‘Wife Engine’ Dept.]

On the subject of Ask.com’s latest restructuring, we have the following guest post from Holly Buchanan, of Future Now. Take it away Holly!


So Ask.com is now going to focus on married women. OK, they have my interest since marketing to women online is my focus.

“With the shift, the Oakland-based company will return to its roots by concentrating on finding answers to basic questions about recipes, hobbies, children’s homework, entertainment and health.”

Um – OK. I’m a woman and I can tell you that I’ve done close to a hundred searches this month and not one was on any of the above topics.

Here’s something for Ask.com to think about – women don’t compartmentalize their lives. Women business owners often use the same products at work that they use at home and vice versa.

Married women are searching for many things related to both work and home life. I don’t know if they are going to use one search engine to search for “recipes, hobbies, children’s homework, entertainment and health,” and a different search engine for work related subjects.

The Pew Internet study, Women and Men Online, did find that men tend to stick to a single engine, while women had a few favorites. But I wonder if that’s simply because a woman was using MSN as her home page and searching on that, or because she happened to be looking at content on Yahoo and used their search engine. I think it’s a matter of “convenience” more than “favorites.”

But with Google search now being included in so many task bars – I wonder if women will find THAT more convenient.

Will women actually type in www.ask.com to go to another search engine with other good options so readily available? I don’t know.

In the end, I suspect it will come down to two things:

  1. Convenience. Which search engine is at her fingertips at the time.
  2. Best results. Which search engine delivers the most relevant results.

If I were Ask.com and focusing on women, those would be the two things I would concentrate on.

Oh, and one other suggestion – “hobbies” sounds condescending. Women have passions and interests. Those words might win you more fans.


Holly Buchanan is co-author of The Soccer Mom Myth (Wizard Academy Press, 2008).

Labels: ask.com

Posted by Andrew Goodman

 

Wednesday, March 05, 2008

Tearful Ask Fans Lament Decline

Ask.com is gradually sinking… like the Titanic. Danny Sullivan proffers this detailed obit. I worry for Lisa Barone’s health in that she is “intensely angry” with Barry Diller.

It’s stunning to note that share is king and that differentiation is hard to achieve. Nothing has changed, really, since I riffed on this here 4.5 years ago. Google is winning. Everyone else isn’t.

Labels: search engine market share

Posted by Andrew Goodman

 

Tuesday, March 04, 2008

Facebook Lands Monetizer Extraordinaire

Facebook has done themselves a favor by hiring long-time Googler Sheryl Sandberg as their COO. Sandberg was one of the brilliant early thinkers in the Google AdWords program and part of the reason Google maintained its monetization “compass” for so long… leading to enormous long-term profitability.

Labels: facebook

Posted by Andrew Goodman

 

Saturday, March 01, 2008

Wall Street Doesn’t Get It (Part I)

If you’re an outsider looking to come up to speed on what’s going on in the world of online advertising – particularly with regard to the complex machinations of the paid search algorithms which line search engine companies’ pockets – look no further than this post by Magid Abraham and James Lamberti of comScore.

This detailed explanation of click monetization is intended to act as a corrective to rampant misinterpretations of the recent comScore report showing tepid growth in total paid clicks, which led to $20 billion being shaved off Google’s stock market valuation. In particular, note that the “ad coverage index” at Google has continued to drop – most recently, from 52% to 48%. This means that Google is deliberately monetizing fewer queries, to increase user satisfaction with search engines.
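To see why a falling coverage index matters, here’s a hedged sketch with invented query volumes; only the 52% and 48% coverage figures come from the comScore discussion above. The point is that monetized queries (and hence paid clicks) can shrink even while total query volume grows:

```python
# Illustrative only: hypothetical query volumes showing how a falling
# "ad coverage index" (share of queries that show ads) can slow paid-click
# growth even as total queries rise. Coverage expressed in whole percent.
queries_q1, coverage_q1 = 10_000_000, 52
queries_q2, coverage_q2 = 10_500_000, 48  # queries up 5%, coverage down

monetized_q1 = queries_q1 * coverage_q1 // 100  # 5,200,000 queries showing ads
monetized_q2 = queries_q2 * coverage_q2 // 100  # 5,040,000 - fewer, despite growth
print(monetized_q1, monetized_q2)
```

With fewer (but presumably better-targeted) ads in circulation, revenue per paid click can rise even as paid-click counts stall – which is the dynamic the comScore authors argue Wall Street misread.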

Rarely do you see a piece of writing that so deeply and incisively delves into the economics of this world. It stacked up well with the recent panel I was on, Decrypting Quality Scores, where experts from the search engines – most notably Google’s Nick Fox – joined agency side folk (like me) in explaining the ins and outs of the engines’ “ad quality” initiatives.

The resolve they’ve shown to take so many ads out of circulation, on purpose, goes beyond incrementalism to the point of being worth characterizing as “aggressive.” And you know what, it worked. Ad revenue continues to rise, paid clicks have slowed, and users still think the search engines are nice places to look. A bold, forward-looking, user-driven initiative was indeed just what was needed to save Google from much steeper stock market plunges – and those would have been justified.

If you get it, you know that Google’s financial position is going to be just fine this year and next. The stock market may see it otherwise though, as markets are governed by other considerations, like sentiment and liquidity.

Posted by Andrew Goodman

 
