Through header bidding, publishers aren’t just seeing more revenue from their demand partners–they’re also seeing huge amounts of data. The problem is, that data isn’t always easy to understand, and isn’t always easy to act on. At times, it’s not even clear where pubs should look to find the insights they need in order to improve their bottom line.
Publishers need to access and process data in order to optimize their header bidding strategies. To understand the challenges at hand, and to get a sense of how pubs can find some relief, I called up Shareably Cofounder Peter Kim and Roxot Marketing Director Alex Kharitoshin. Peter’s background gives him a compelling perspective on these issues–he’s running Shareably’s ad ops efforts now, but prior to the company’s founding, he had been an engineer. Throughout this conversation, we talked about the engineering issues in acting on data from header integrations, the Prebid.js environment and Roxot’s interest in solving problems within it, the data challenges server-to-server integrations pose, and a few other related topics to boot.
BRIAN LaRUE: Peter, where are you seeing discrepancies in your reports and what do you do to resolve or otherwise account for them?
PETER KIM: As we’ve added more demand partners into our ad stack, we started seeing discrepancies in revenue. In the past, we looked at reports from our demand partners and tried to reconcile them with what we could pull out of DFP. What Roxot provides is client-side data collection, a second way of consolidating data with the demand partner. We can dig down and see whether a discrepancy in revenue was a discrepancy in impressions or in bid prices.
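That dig-down can be expressed as simple arithmetic: revenue is impressions × eCPM ÷ 1000, so a gap between two reports can be attributed to an impression-count component and a bid-price component. A minimal sketch (the report fields here are illustrative assumptions, not any partner’s actual schema):

```javascript
// Splits a revenue discrepancy between two reports (e.g. a demand partner's
// numbers vs. what can be pulled out of DFP) into an impression-count effect
// and a bid-price effect. The two effects sum to the total discrepancy.
// Report shape ({ impressions, revenue }) is a hypothetical example.
function splitDiscrepancy(partner, dfp) {
  // Effective CPM implied by a report: revenue per 1000 impressions.
  const cpm = r => (r.impressions ? (r.revenue / r.impressions) * 1000 : 0);
  // Revenue gap explained by counting different numbers of impressions,
  // valued at the DFP-implied price.
  const impressionEffect = ((partner.impressions - dfp.impressions) * cpm(dfp)) / 1000;
  // Revenue gap explained by the two sides pricing the same impressions differently.
  const priceEffect = (partner.impressions * (cpm(partner) - cpm(dfp))) / 1000;
  return { total: partner.revenue - dfp.revenue, impressionEffect, priceEffect };
}
```

If the impression effect dominates, the sides disagree about delivery; if the price effect dominates, they disagree about what the impressions were worth.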
ALEX KHARITOSHIN: When you have data from multiple sources, it’s really hard to compare. Every demand partner tags even country or mobile device differently. Even with timeouts and response times, demand partners don’t know whether they’ve reached the client side—they just respond with their bids. Publishers simply don’t have a source of reliable information.
BRIAN: When you look at the data from your demand partners, how does that data help you make decisions that help Shareably’s bottom line?
PETER: What we’ve been able to do with data from demand partners has been very limited, because of resource constraints and how we prioritize solidifying our own data versus the partners’ data. Also because of fragmentation—there isn’t any standard of how data is structured across partners. Collecting all the data and transforming it into something publishers can make business decisions off of requires one-off integrations. That’s an intensive engineering task. Before we go into extracting data from our demand partners, we first want to solidify data we can control—client-side header bidding data going through DFP. With DFP server reporting, the data is always structured and there’s an accessible API. Then go into the unknown—client-side header bidding data. The third frontier is looking at demand partners’ data to see if we can bring that in.
BRIAN: What kind of solutions were you looking at to analyze your data before you got to a point where you decided you needed some vendor assistance?
PETER: When Prebid.js first came out, the solution they offered was that you could integrate Google Analytics and piggyback off that. But we found Google Analytics data unreliable, because of Shareably’s volume and scale. A lot of the data coming in was getting lost, getting sampled, getting throttled.
BRIAN: When Roxot was developing these tools, what complaints were you hearing from publishers that made you think of a solution you could offer?
ALEX: The biggest problem was the ability to make data-driven decisions on ad stack optimizations. If you don’t know how your ad stack is doing, how do you know if adding a partner is going to help you or not? In the Prebid environment, there are so many moving parts and variables that you can’t maximize your effectiveness by simply turning on and off bidders based on revenue they report. The problem that we were trying to solve with Prebid Analytics is recency, quality, and availability of ad auction data for every Prebid publisher. We have good experience with the data; we work with machine learning. We used the skills we had to provide publishers with a solution like Prebid Analytics. The goal with the product was to make each element actionable and easy to understand, according to the publisher’s needs. Each dashboard provides you with a unique perspective on your prebid data. You can spot an issue on Total Dashboard and move around other views to dig deeper.
Prebid Analytics provides publishers with client-side data for header bidding performance, and it’s an official partner of Prebid.org. Integration is very similar to plugging a new demand provider into Prebid.js.
PETER: Prebid.js is the most-used open source header bidding client tool out there, and they’ve released the API where you can switch out the internal analytics with whatever third-party tool you want.
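Hooking a third-party tool into that API looks much like configuring any other Prebid.js module. A sketch, using Prebid’s `enableAnalytics` call with its Google Analytics adapter as the example (the option values shown are illustrative, and a vendor like Roxot would ship its own provider name):

```javascript
// Queue the call so it runs once Prebid.js has loaded.
pbjs.que.push(function () {
  pbjs.enableAnalytics([
    {
      provider: 'ga', // swap in a third-party analytics adapter's name here
      options: {
        global: 'ga',             // name of the GA global function on the page
        enableDistribution: false // whether to send load-time distribution events
      }
    }
  ]);
});
```

Because the analytics hook is an adapter like any other, switching providers is a configuration change rather than a re-integration.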
BRIAN: Peter, how has your engineering background influenced the way you approach revenue optimization?
PETER: Coming in from an engineering background, I’ve found the space to be especially fascinating because of all the opportunities in programmatic—any optimizations you do in programmatic are scalable. It doesn’t require a lot of manpower, my optimizations will be there to stay, and it doesn’t cost anything to run—it’s just code running on the client or server. Right now we primarily make our revenue off programmatic display.
BRIAN: How does your stake in programmatic inform who goes into the header versus who goes into the waterfall?
PETER: We only run eight to 10 bidders and evaluate very stringently how much they affect our bottom line. We’re a very small team, and going through that business development process of adding a bidder is something we don’t take very lightly.
Programmatic is moving very, very quickly. We want to make sure we’re working with demand partners who are moving as quickly as us. That maintains our flexibility to adapt in the market.
BRIAN: How does the service you’re getting from Prebid Analytics now help you toward your goals of managing, processing and taking action on data?
PETER: Consolidating reporting from our demand partners and DFP manually, we found a lot of discrepancies. Prebid Analytics helped us identify certain bugs. After that we started digging into the data a little bit more, looking at individual bidders and statistics such as average bid price, winning eCPM, win rate and timeout rate. We saw the timeout rate for a lot of bidders was a lot higher than we were expecting. That information helps open up the conversation with our demand partners to work together to resolve the issue. When you have more data, you have leverage.
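The per-bidder statistics Peter mentions fall out of a straightforward aggregation over auction records. A minimal sketch, assuming a toy record shape ({ bidder, status, cpm, won })–not Prebid Analytics’ actual schema:

```javascript
// Summarizes one bidder's performance from a list of auction records.
// status is 'bid' or 'timeout'; cpm and won are set on 'bid' records.
function summarizeBidder(records, bidder) {
  const mine = records.filter(r => r.bidder === bidder);
  const bids = mine.filter(r => r.status === 'bid');
  const timeouts = mine.filter(r => r.status === 'timeout');
  const wins = bids.filter(r => r.won);
  const avgCpm = arr =>
    arr.length ? arr.reduce((sum, r) => sum + r.cpm, 0) / arr.length : 0;
  return {
    requests: mine.length,
    avgBidCpm: avgCpm(bids),     // average bid price across all bids
    winningEcpm: avgCpm(wins),   // average CPM on impressions actually won
    winRate: mine.length ? wins.length / mine.length : 0,
    timeoutRate: mine.length ? timeouts.length / mine.length : 0
  };
}
```

An unexpectedly high `timeoutRate` for a bidder is exactly the kind of number that, as Peter says, opens up the conversation with that demand partner.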
BRIAN: Server-to-server is becoming more widespread. How does that affect publishers’ ability to access data and act on it?
PETER: It depends. Client-side header bidding came to be because there’s a lack of transparency in the marketplace. Server-to-server is a little different. When you’re working with an enterprise solution, you have to make sure they’re providing the data you want, and trust them enough that they’re not going to use the data to unfairly optimize their own goals. In the market today, the trust between SSPs and publishers isn’t the greatest. As of today, we still aren’t on a server-to-server solution.
Prebid Server, just announced recently, promises you’ll be making the calls in the server, but all the bids are returned to the client, where the auction still takes place. You still have control of the end data. That hybrid solution could be a happy medium, but it’s early. Right now, we’re bearish on enterprise server-to-server solutions. We’re cautious of entering a one-off agreement with one demand partner when we should be fairly competing all the demand partners.
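In Prebid.js, opting into that hybrid model is an `s2sConfig` setting: the listed bidders are called from the server, their bids come back to the page, and the client-side auction proceeds as usual. A sketch under those assumptions; the account ID and bidder list are placeholders:

```javascript
// Route selected bidders through Prebid Server while keeping the auction
// (and the resulting data) on the client. Values here are illustrative.
pbjs.setConfig({
  s2sConfig: {
    accountId: 'YOUR_ACCOUNT_ID',     // placeholder Prebid Server account
    enabled: true,
    bidders: ['bidderA', 'bidderB'],  // hypothetical bidders moved server-side
    timeout: 1000,                    // ms to wait for the server-side call
    adapter: 'prebidServer'
  }
});
```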
ALEX: With server-to-server, you have to make sure that data is transparently accessible and not hidden in the vendor’s black box. Passing back all the data to the client is the only way to get transparent and clear data on your whole stack.
BRIAN: What needs to be done, either in-house or through partner support, to present server-to-server bidder data together with data from client-side header bidders, so publishers can make impactful decisions?
ALEX: The ideal option is for the server-to-server solution to pass all of its data back to the client side; publishers still have to connect, in one place, all the data tied to one impression. This way the data is completely transparent and actionable—you don’t have to go to different data sources and combine everything manually. Prebid Analytics will collect the data as it does with demand partners in the header, without additional manipulations or dev work. With this data, publishers can use Prebid Analytics to identify which partners should go to the cloud and who should stay in the header.
PETER: When you go with an enterprise server-to-server solution, and they only show up in Prebid.js as one bidder, you have to go to the demand partner and have them split the data, then merge that data with what you’re getting from Prebid.js in order to have a full picture. If that partner eventually goes away, or you realize they weren’t the best solution, you have to rebuild that same integration with another partner. Even knowing there’s consolidation coming in the marketplace, with an open source solution you can trust the structure is going to stay intact.