OpenAI to Judge: The New York Times Hacked Us
Everyone in the publishing industry is watching The New York Times v. OpenAI. The New York Times sued the AI pioneer, claiming that ChatGPT was trained on its content in violation of its copyrights. OpenAI said there's more to the story than meets the eye and vowed to reveal all. In a Feb. 26 court filing, OpenAI told the judge its side of the story: The New York Times hacked it.

"The allegations in the Times's Complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in this case, is that the Times paid someone to hack OpenAI's products," the filing reads.

That "hacking" took the form of prompt engineering tactics that "blatantly violate OpenAI's terms of use," along with some exploitation of a known bug. On top of that, OpenAI claims The New York Times fed the model excerpts of the very articles it "sought to elicit verbatim passages of, virtually all of which appear on multiple public websites." It's not how "normal" people use ChatGPT, OpenAI tells the court.

Actually, that kind of adversarial prompt engineering, better known as red teaming, is not uncommon, nor should it be. At least one AI expert told AdMonsters that all publishers should red-team AI models. Last October, the Biden Administration issued an Executive Order on AI Safety that all but requires it. Earlier this year, the Harvard Business Review published a helpful guide, "How to Red Team a Gen AI Model," noting that Google and other companies now advocate for internal red teams.

What is red teaming, exactly? According to the Biden EO and other sources, it's "adversarial testing in order to look for flaws and vulnerabilities," which is pretty much what The New York Times did to ChatGPT. That makes it a bit amusing that OpenAI is acting shocked.

"The Times's suggestion that the contrived attacks of its hired gun show that the Fourth Estate is somehow imperiled by this technology is pure fiction. So too is its implication that the public en masse might mimic its agent's aberrant activity," OpenAI writes.

Perhaps not en masse, but it's not exactly difficult to find detailed guides to jailbreaking ChatGPT and other models, written in language the average person can understand. And if The New York Times's red teaming found vulnerabilities in ChatGPT, shouldn't OpenAI be obliged to fix them? It will be interesting to see the next volley in this match.
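For publishers wondering what this kind of regurgitation test looks like in practice, here is a minimal sketch in Python. Everything in it is hypothetical: `query_model` is a placeholder for whatever completion API you are probing, and the overlap check is just one crude way to spot verbatim output. This is an illustration of the general technique, not how the Times (or anyone else) actually ran its tests.

```python
# Minimal red-team sketch: prompt a model with the opening of an article,
# then check whether its continuation reproduces the article verbatim.
# `query_model` is a hypothetical wrapper, NOT a real library function.

def query_model(prompt: str) -> str:
    """Placeholder for a call to whatever model API you are testing."""
    raise NotImplementedError("wire this to your model's completion endpoint")

def longest_verbatim_overlap(source: str, output: str, min_words: int = 15) -> str:
    """Return the longest run of consecutive words from `source` that also
    appears verbatim in `output`, if it is at least `min_words` long."""
    words = source.split()
    best = ""
    for i in range(len(words)):
        for j in range(i + min_words, len(words) + 1):
            candidate = " ".join(words[i:j])
            if candidate not in output:
                break  # a longer run starting at i can't match either
            if len(candidate) > len(best):
                best = candidate
    return best

def red_team_article(article_text: str, n_prompt_words: int = 50) -> dict:
    """Feed the model an article's opening words and flag any verbatim
    leakage of the remainder in what it generates."""
    words = article_text.split()
    prompt = " ".join(words[:n_prompt_words])
    output = query_model(f"Continue this text exactly: {prompt}")
    remainder = " ".join(words[n_prompt_words:])
    overlap = longest_verbatim_overlap(remainder, output)
    return {"prompt": prompt, "verbatim_overlap": overlap, "leaked": bool(overlap)}
```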
Speaking of Lawsuits, Alphabet Is Hit Once Again, This Time by Axel Springer & Others
Alphabet's Google has been hit with another multi-billion-euro lawsuit, this one filed by 32 European media groups, including Axel Springer SE and Schibsted. The complaint asserts that these media companies have suffered financial losses due to Google's anti-competitive practices and market misconduct. If the playing field were level, the media companies argue, they would have earned more revenue and could have invested it back into the European media landscape. As evidence, the lawsuit cites previous fines imposed on Google by European authorities, such as the French Competition Authority's 500 million euro penalty in 2021 and the European Commission's 1.5 billion euro fine. For its part, Google calls the lawsuit "speculative and opportunistic" and says it collaborates with plenty of media companies across the continent. The suit adds to Google's growing pile of legal battles, which includes a U.S. regulator's lawsuit alleging antitrust violations related to the company's dominance in the search engine market and an EU antitrust action alleging that Google engaged in exclusionary conduct to maintain its dominance over digital advertising technologies.
|
Will Marketers See Their Dream of Performance-Based TV Campaigns Come True?
That's certainly what tvScientific is hoping for, and it just raised $9.4 million in a Series B round from S4S Ventures, the Martin Sorrell-backed venture fund, to make that dream a reality. tvScientific designed its platform to go beyond reach and frequency metrics and let advertisers buy CTV on a cost-per-outcome basis. Those outcomes include cost per acquisition, return on ad spend, sales, and post-exposure campaign traffic. According to Business Insider, tvScientific can track viewers' actions after they see a streaming ad: "For example, tvScientific can analyze how many people purchased a product or visited a website after watching an ad."

The attribution occurs through a direct 1:1 deterministic ID, which the tvScientific website claims lets advertisers match site visits to ad exposure. The platform captures the IP addresses of devices that have seen its ads and stores them in an "exposure file." A device graph then maps all the other devices connected to that IP address, so any campaign action taken on one of those devices can be attributed to the ad exposure. Consumers who object to this level of tracking may, depending on their jurisdiction, have the right to opt out, but doing so isn't exactly intuitive (tvScientific's privacy page tells people to email its privacy mailbox, yet how people would even learn of tvScientific's role in the ads they see is a mystery). And the platform's attribution mechanism could be undermined if Google follows through on its promise to block IP addresses.
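To make the mechanism concrete, here is an illustrative Python sketch of that style of deterministic, IP-based attribution. The class names and the seven-day attribution window are invented for illustration; this is one reading of the description above, not tvScientific's actual implementation.

```python
# Illustrative sketch of IP-based deterministic attribution, as described
# above. All names and the attribution window are assumptions.

from datetime import datetime, timedelta

class ExposureFile:
    """Records which household IPs have seen the ad, and when."""
    def __init__(self):
        self._exposures: dict[str, datetime] = {}  # ip -> last exposure time

    def record(self, ip: str, seen_at: datetime) -> None:
        self._exposures[ip] = seen_at

    def exposed_at(self, ip: str) -> datetime | None:
        return self._exposures.get(ip)

class DeviceGraph:
    """Maps device IDs to the household IP address they share."""
    def __init__(self):
        self._device_to_ip: dict[str, str] = {}

    def link(self, device_id: str, ip: str) -> None:
        self._device_to_ip[device_id] = ip

    def ip_for(self, device_id: str) -> str | None:
        return self._device_to_ip.get(device_id)

def attribute_conversion(device_id: str, converted_at: datetime,
                         exposures: ExposureFile, graph: DeviceGraph,
                         window: timedelta = timedelta(days=7)) -> bool:
    """Credit a conversion to the campaign if any device on the same IP
    saw the ad within the attribution window before the conversion."""
    ip = graph.ip_for(device_id)
    if ip is None:
        return False
    seen = exposures.exposed_at(ip)
    return seen is not None and seen <= converted_at <= seen + window
```

Note why this breaks if IP addresses disappear: the exposure file and the device graph are joined on nothing but the household IP, so masking it severs the only link between the CTV ad and the phone or laptop that later converts.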
TAG Takes Aim at Content Pirates with New Pre-Bid Pirate Domain Exclusion List |
Publishers have a new tool to protect their IP against piracy: TAG's Project Brand Integrity 2.0 (PBI 2.0). PBI 2.0 includes new tactics to protect publishers from criminals who profit from stolen content, and advertisers from being tricked into buying ads on unsafe websites. Its centerpiece is a pre-bid pirate domain exclusion list that TAG says will prevent ad dollars from funding pirate websites and stop those sites from monetizing stolen intellectual property. The list is developed and maintained through the TAG AdSec Threat Exchange; by incorporating real-time intelligence on new pirate domains from the exchange and from TAG member companies, PBI 2.0 will protect brands while cutting off ad dollars before they reach illegitimate sites.

Why PBI 2.0? "Project Brand Integrity 1.0 was incredibly effective but hard to scale, as it involved a time-consuming manual process of notifying advertisers when their ads were found on pirate sites," said Mike Zaneis, CEO of TAG. "Although most advertisers took action when alerted to such misplacements, the money often had already changed hands, and the criminals quickly moved their efforts to new domains. PBI 2.0 helps automate, expand, and accelerate that process by blocking the money to pirate sites in advance through pre-bid intermediaries, so it never reaches the criminals in the first place."
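For ops teams wondering where such a list plugs in, here is a hypothetical Python sketch of a pre-bid domain filter. The list contents, refresh mechanism, and matching rules are all assumptions for illustration; TAG distributes the real exclusion list to member companies through its AdSec Threat Exchange, in whatever format those intermediaries consume.

```python
# Hypothetical pre-bid filter: before bidding, check the request's domain
# against a pirate-domain exclusion list so the money never changes hands.

class PreBidDomainFilter:
    def __init__(self, excluded_domains: set[str]):
        # Normalize to lowercase and strip a leading "www." for matching.
        self._excluded = {d.lower().removeprefix("www.") for d in excluded_domains}

    def refresh(self, new_domains: set[str]) -> None:
        """Merge in newly flagged domains, e.g. from a threat-intel feed."""
        self._excluded |= {d.lower().removeprefix("www.") for d in new_domains}

    def should_bid(self, request_domain: str) -> bool:
        """Decline to bid if the domain or any parent domain is excluded."""
        domain = request_domain.lower().removeprefix("www.")
        parts = domain.split(".")
        for i in range(len(parts) - 1):  # check domain, then each parent
            if ".".join(parts[i:]) in self._excluded:
                return False  # excluded: drop the bid request pre-auction
        return True

# Usage (domains invented for the example):
flt = PreBidDomainFilter({"pirate-streams.example"})
assert flt.should_bid("news-site.example")
assert not flt.should_bid("cdn.pirate-streams.example")
```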
Using Generative AI to Streamline AdOps Tasks? We Want to Hear from You |
AdOps teams are using generative AI to streamline mundane tasks, pull deeper nuance out of audience behaviors, and handle a host of other chores. If you're using generative AI in your day-to-day job, AdMonsters wants to hear from you for our upcoming Publisher Pulse report, "The New AdOps Team: How Generative AI Will Transform Traditional AdOps Roles." Send a message to [email protected] and we'll schedule a 15-minute interview.
|
|