It may sound like a joke, but Index Exchange has been coordinating trips to Toronto for numerous major publishers and DSPs to meet its squad of 200 engineers. Many a wiseacre would comment they’d rather avoid engineers entirely than travel long distances to meet them, but Index Exchange President and CEO Andrew Casale notes that these get-togethers have been essential in the development of the company’s header wrapper.
“People don’t realize what and who is behind the tech,” he comments over a drink at the Miami Publisher Forum. “When we’re building a wrapper, there are 10 engineers working on it. That’s easy to say, but it’s hard to appreciate.”
Index attributes its header success to efforts like this to solicit the needs of the publishing community. The whole concept of a wrapper, Casale claims, was the idea of publishers—Index built it and continues to improve on it based on their requirements. And what do you know, that includes some incredible advances regarding bid architecture, working with ad libraries and pre-fetching—but with those same advancements come interesting new challenges.
GAVIN DUNAWAY: The wrapper has come a long way in a short time; what are the most significant developments?
ANDREW CASALE: If you look at the earliest form of wrapper, it was basically, “Can you do a lot of what a publisher’s engineering team would have to do to integrate header bidders?” This meant taking API specs from third-party bidders, understanding each and coding them onto the site. Read each spec, understand it and integrate it. Set it up so they can all sit next to each other. That was a lot of work we were able to free up from a publisher’s engineering team.
But that’s still the most basic implementation of header, and it might work with the most simplistic publisher pages: the page renders on load, the ad slots are all static, you fire up all the headers and they return bids. Easy.
Now enter the new web—responsive design, infinite scroll, viewability-first implementations like smart loading. The web’s evolved quite a bit too in the last 16 months—most sites don’t work the same way they used to. The first big improvement to the wrapper was solving for those environmental challenges.
Another huge area of development is race conditions. It sounds like a complicated term, but the idea is: there is a race going on. The header fires bidders and they respond with bids while the ad server is running in parallel, rendering the placements and setting up the slots. The latter has to finish before the former or you have what we call a race condition, which can lead to errors and missed ad opportunities. It’s about getting timing right.
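The coordination Casale describes can be sketched in a few lines. This is a minimal illustration of the idea, not Index Exchange’s actual wrapper code, and all of the function names are hypothetical: the wrapper refuses to inject bids into the ad server until both the bidders have responded and the slots exist.

```javascript
// Minimal sketch of avoiding the race condition: targeting is only set
// once BOTH the header bidders have returned AND the ad server has
// finished defining its slots. All names here are illustrative.
function createWrapper() {
  const state = { bidsReady: false, slotsReady: false, flushed: false };
  const log = [];
  function tryFlush() {
    // Proceed only when both sides of the "race" have finished.
    if (state.bidsReady && state.slotsReady && !state.flushed) {
      state.flushed = true;
      log.push('targeting-set'); // safe: slots exist and bids are in
    }
  }
  return {
    onBidsReturned() { state.bidsReady = true; log.push('bids'); tryFlush(); },
    onSlotsDefined() { state.slotsReady = true; log.push('slots'); tryFlush(); },
    log,
  };
}
```

Whichever event fires first, targeting is only applied after both have completed, so the ordering of the two parallel processes no longer matters.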
That’s where the wrapper becomes embedded into the very fabric of a publisher; each publisher’s environment is unique and each publisher’s ad server is fired through an ad library that’s very similar to a wrapper.
The ad library is a custom set of JavaScript that allows the CMS and ad tech to play nice with each other. Each ad library is about 1 to 10,000 lines of code depending on the publisher. We have to reverse engineer the ad library before we can build the wrapper. The wrapper now also has to play nice in that framework and the timing must be spot on.
This library development led us to other ways to push the boundaries on speed. There’s a newer development called pre-fetch, which is a response to the constant gripes about header creating latency and affecting user experience.
Suppose a page is responsive and as you scroll, slots are continually added—for example, a four-slot page on initial render can become a 12-slotter as you continue to scroll down. The header can anticipate that—a publisher can tell us that, on average, 50% of our readers scroll and four more slots will come into view. When we do the work in the header to fetch demand, we can anticipate subsequent slots that don’t appear at render, so as they come into view they are instantly available.
This is a huge change because of the way header used to work: as a slot scrolled into view, we’d have to initiate a new auction, which was a jarring user experience that would hang the page and create latency.
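The pre-fetch idea can be sketched as a small cache of auction results keyed by slot. This is a simplified illustration under assumed names (`runAuction` and the slot IDs are invented for the example): auctions for likely-to-appear slots run ahead of time, so a slot scrolling into view hits the cache instead of triggering a new auction.

```javascript
// Sketch of pre-fetch: run auctions early for slots the publisher
// predicts will scroll into view, then serve from cache at view time.
// runAuction is a hypothetical stand-in for the real auction call.
function createPrefetcher(runAuction) {
  const cache = new Map();
  return {
    // Anticipate below-the-fold slots before the user gets there.
    prefetch(slotIds) {
      for (const id of slotIds) cache.set(id, runAuction(id));
    },
    // When a slot actually appears, its result is instantly available;
    // only an unanticipated slot falls back to a fresh auction.
    onSlotInView(slotId) {
      return cache.has(slotId) ? cache.get(slotId) : runAuction(slotId);
    },
  };
}
```

The trade-off Casale raises later in the interview is visible here: every pre-fetched auction is work done up front, whether or not the user ever scrolls that far.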
GD: So you’re running all the auctions earlier, with a probability engine, worked into the auction itself, saying these slots may or may not appear. Does the winning bidder understand the ad may never be served?
AC: The winning bidder would never see it unless the creative renders—the creative has a pixel that signals the win. So you can anticipate in the header without affecting the end advertiser. If the creative never renders for the user, it’s assumed or implied that the opportunity was never available.
GD: So the creative isn’t even—for lack of a better term—fetched until the placement appears on the page.
AC: But the auction is already done, which is the big change. We’ve anticipated who wants to buy down there before the user ever gets down there.
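This render-gated win notification can be sketched as follows. The logic is a simplification under invented names (a real win pixel is an HTTP beacon fired from the creative, not a local flag): the auction closes at pre-fetch time, but the winner only learns of the win if the creative actually renders.

```javascript
// Sketch: the auction resolves early, but the win notification (the
// "pixel") only fires on creative render. Names are illustrative.
function runPrefetchedAuction(bids) {
  // Highest CPM wins; assumes a non-empty bid list.
  const winner = bids.reduce((a, b) => (b.cpm > a.cpm ? b : a));
  let notified = false;
  return {
    winner,
    renderCreative() {
      // Firing the win signal only here keeps unrendered wins
      // invisible to the buyer, as described above.
      notified = true;
    },
    wasNotified: () => notified,
  };
}
```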
GD: What effect has the move from multi-request to single-request bid architecture had on header bidding?
AC: A lot of the early header bidders were based on multi-request architecture. More advanced single-request architecture is transcending those norms.
The premise of multi-request architecture is that each slot on the page should be its own autonomous auction, facilitated by its own request. Think of it like this: if there are four slots on the page, the page initiates four requests. If the site has five header bidders and I hit refresh, 20 requests hit my browser. The average browser queue can only handle six things simultaneously. That means that two-thirds of the 20 are waiting—they’re blocked.
Some of our savvier pubs have been encouraging bidders to move to single-request architecture. It doesn’t matter how many slots are on a page—every header bidder gets one request. So there could be eight slots on a page—if you’ve got five header bidders, that’s five requests. That’s a huge improvement—in terms of latency and the header, these architectural improvements are going to dramatically reduce the concerns people have today.
We’re tolerating multi-request bid architecture from probably two-thirds of all bidders in the header right now. As those bidders embrace single-request, we’ll probably eliminate 75% of all requests.
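The arithmetic behind those request counts is simple enough to write down (a sketch for illustration, not production code):

```javascript
// Request volume per page load: under multi-request each slot fires its
// own request per bidder; under single-request each bidder gets exactly
// one request regardless of slot count.
function requestCount(slots, bidders, architecture) {
  return architecture === 'multi' ? slots * bidders : bidders;
}
```

Plugging in the article’s numbers: four slots and five bidders under multi-request yields 20 requests, most of which sit blocked behind the browser’s roughly six-connection queue, while eight slots and five bidders under single-request still yields just five.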
GD: What other benefits come along with single-request architecture?
AC: Single-request also unlocks tandem ads. Because single-request architecture sells the page in one go, you know everything that’s on the page. We signal to the buy side that we’ve got five slots, and you can buy all five. This fixes not only early multi-request header architecture, but also the autonomous, tag-based programmatic world: each auction for each slot used to be independent, but now you can sell the entire page. It’s pretty cool.
You can do more than sell it—you can buy the entire page. You could theoretically do sequential ads going down a page. It’s approaching the same level as direct, especially if you’re approaching programmatic guaranteed and trying out crazy executions.
This isn’t even theoretical, though. Some of the bigger media companies are already selling tandem ads through the header leveraging single-request architecture. They can literally sell the journey. The best we could do before in programmatic was match audience and data to the request, which was great, but limited. “Wow, that travel site is following me around, now those shoes are following me around.” It was very direct response. Now if that travel site really wants a user, it can create an ad experience based around that user’s journey on a media site.
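The mechanics of a tandem buy follow directly from the single request describing all inventory at once. A sketch, with field names invented for the example (real bid requests follow protocols like OpenRTB, which this does not attempt to reproduce):

```javascript
// Sketch: one request exposes every slot on the page, so a buyer can
// place a tandem bid covering all of them. Field names are made up.
function buildSingleRequest(pageUrl, slots) {
  return { pageUrl, slots }; // one payload describing all inventory
}
function bidOnWholePage(request, cpmPerSlot) {
  // A tandem bid answers every slot in the request, letting the buyer
  // own the user's journey down the page rather than a single slot.
  return request.slots.map(s => ({ slotId: s.id, cpm: cpmPerSlot }));
}
```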
GD: That’s fascinating because it adds a whole new value to private marketplaces. Now I can give buyers access to whole pages a user lands on instead of slot by slot.
AC: When we go to the buy side with a single request, we’re doing it before the page is rendered. Some of the bigger media companies have far more than the standard display sizes available; some have as many as 60. We can now tell the buy side this page has 60 different potential ad sizes available in this moment. They could pick one of the higher-impact ones that could nullify other spots; if that’s a desirable outcome based on the sequencing within the ad server, we could reformat the page on the fly.
GD: How do you have that ability?
AC: Because the ad library has that power, and it talks to the wrapper! Think about sites that have giant units at the top—at $50 CPM, that could nullify the need for any other ad on the page. If we find a buyer that’s set on the masthead, we’ll tell the ad library, “OK, we’re not going to sell the rest.”
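That nullification decision can be sketched as a simple rule the wrapper hands to the ad library. The threshold, slot names, and function are all hypothetical, chosen to mirror the $50-CPM masthead example above:

```javascript
// Sketch: if a high-impact unit clears above its floor, tell the ad
// library to suppress the remaining slots; otherwise render everything.
function resolvePageLayout(bids, takeoverSlot, takeoverFloor) {
  const takeover = bids.find(b => b.slotId === takeoverSlot);
  if (takeover && takeover.cpm >= takeoverFloor) {
    return {
      render: [takeoverSlot],
      // "OK, we're not going to sell the rest."
      suppressed: bids.filter(b => b.slotId !== takeoverSlot)
                      .map(b => b.slotId),
    };
  }
  return { render: bids.map(b => b.slotId), suppressed: [] };
}
```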
GD: While this is all pretty bullish on the header, there still have to be some headwinds—what are the biggest challenges header bidding is facing?
AC: This advancement is unprecedented; in a short amount of time header has become quite elaborate. People that are calling it a hack should really study just what this thing can do, because it’s becoming more powerful than any piece of ad tech we’ve seen on a page.
However, the speed of advancement is also creating interesting challenges. For instance, some of the bigger pubs pushed for single-request instead of multi-request bid architecture because when you hit refresh on a page and you open up a web development tool, you don’t want to see eight calls from a header bidder because there are eight slots on a page. It’s a lot of noise. You want one.
So some pubs now say, “Single-request architecture or you can’t work with us.” Some of the companies in the header want to build out single-request, but it may take them six months. It’s not an easy architectural change.
The wrapper’s speed of advancement is sometimes accelerating beyond the advancement of bidders. There’s potentially an upside—if you’re a single-request bidder, the browser sees you as just one request and therefore a simple payload. If you’re eight requests, the browser sees you as slow.
And because everything is coded down to the end domain, the single-request might get prioritized over the multi-request. There are claims of impropriety in the wrapper—“Why does this bidder have an advantage over this bidder?”—but it’s actually just the browser.
It’s a daily conversation we’re having when walking pubs through how the code works. Not everyone has built single-request, so you have to tell your partners on multi-request to move to the superior architecture. It’s easy to say that, but not always to do.
Pre-fetch is another example: we’re going to anticipate that half the users scroll down, but the other half don’t. So if you’re pre-fetched as a bidder, that means potentially 50% of the time you take an infrastructure hit for the users that never scroll down—that’s a cost.
If you’re a forward-looking bidder that realizes pubs want better user experience and pre-fetch is good for the user, every now and then you might take an infrastructure hit on an unmonetized pre-fetch. However, some platforms refuse to do pre-fetch because they don’t want to incur costs on an opportunity they’re not eligible to win.
Most advanced bidders share the view that this is a cost of business. Not everyone feels that same way, but I always side with the pub. The pub is the reason the wrapper exists, the pub is the reason all these new ideas have been born, and at the end of the day, if you’re the vendor, you should be doing what’s best for the pub.