Kids Online Safety Act Passes US Senate; House Will Take Up the Bills Post Recess
Two bills, the Kids Online Safety Act (KOSA) and COPPA 2.0, passed the US Senate with overwhelming support in late July. Energy and Commerce Committee Chair Cathy McMorris Rodgers said her committee will begin work on a House version as soon as Congress returns from its recess in September.

KOSA

The first bill, KOSA, requires social media platforms to "take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying)." Additionally, platforms will need to incorporate certain safety-by-design features, such as settings that restrict access to minors' personal data and tools that allow parents or guardians to supervise and manage their children's use of the platform.

KOSA has broad support from children's advocacy groups. In May of last year, the Surgeon General warned of the negative impact of social media on the mental health of the nation's youth. Not all kids like the bill, however, and over 300 teens traveled to DC to voice their displeasure. KOSA has plenty of other detractors, too. The Electronic Frontier Foundation calls it censorship, warning that politicians could use KOSA to pressure platforms into suppressing content about the history of slavery or racism in America on the grounds that it "depresses" kids. Others fear it will suppress information on reproductive rights and LGBTQ issues.

How this affects publishers: While KOSA doesn't explicitly require age gating, some in the industry warn that publishers covering sensitive topics may feel compelled to implement age gates anyway.

COPPA 2.0

The other bill, the Children and Teens' Online Privacy Protection Act — aka COPPA 2.0 — strengthens the existing Children's Online Privacy Protection Act enacted under the Clinton Administration. The primary goal of the original COPPA is to enable parents to control what information is collected about their children under 13 when they go online. COPPA 2.0 extends protections to teens up to age 17, although kids aged 14 to 17 can consent to data collection without their parents' permission. The bill also expands the definition of private data to encompass biometric data such as fingerprints, voice prints, facial imagery, and gait, given that companies can easily use this data to track people in real life.

How this affects publishers: Tech industry stakeholders caution that COPPA 2.0 may restrict how third-party companies can advertise to kids under 17. The updated legislation bans targeted advertising to minors, preventing companies from using personal data such as phone locations or web browsing history to aim ads at young users. Meta has offered its own approach, saying, "We think there's a better way to help parents oversee their teens' online experiences: federal legislation should require app stores to get parents' approval whenever their teens under 16 download apps." – SS
Google Plans To Make It Harder for Users To Find Explicit Deepfakes
Deepfakes are quickly becoming the bully's preferred method of sexual abuse. While celebrities are a main target due to the abundance of their images, advances in autoencoders — the AI technology that encodes and reconstructs facial features for manipulation — now require less data to create convincing deepfakes. The result? Millions of women across the globe are becoming victims of explicit deepfakes in videos that often use their real names in the titles. According to the New York Times, a study found that 90% of deepfake images were non-consensual and sexually explicit, and most of the victims were women.

What's it like to grow up in a world like this? A majority of girls aged 18 and younger (57%) told researchers that these deepfakes are a major source of anxiety, much of it rooted in the fear that anyone may have seen the images — a boss, a coworker, the neighbors down the street, an opponent in a school board race.

Changes Google has made to its search engine this year may prevent this nightmare from occurring. Last week, the company announced in a blog post that it has updated its ranking algorithms to attack the problem head-on. The changes reduce the visibility of fake explicit images by over 70% in searches related to specific people, prioritizing news articles and other non-explicit content in their place.

While prioritizing quality news articles is relatively straightforward, distinguishing between consensual and non-consensual imagery is not. The post notes, "Generally, if a site has a lot of pages that we've removed from Search under our policies, that's a pretty strong signal that it's not a high-quality site, and we should factor that into how we rank other pages from that site. So, we're demoting sites that have received a high volume of removals for fake explicit imagery. This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results." A toy sketch of how such a demotion signal might work appears after this item.

According to a Wired investigation, Google was aware of the proliferation of harmful deepfakes but was slow to act. Its management "rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate portrayals of people spreading online without their permission." – SS
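To make the quoted signal concrete, here is a toy sketch of site-level demotion based on removal volume. Everything in it (the threshold, the scoring function, the names) is a hypothetical illustration of the idea Google describes, not Google's actual implementation.

    # Toy illustration of the site-level demotion signal Google describes:
    # sites with a high volume of removals for fake explicit imagery get
    # all of their pages ranked lower. Names and numbers are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Site:
        domain: str
        removals: int      # pages removed under the fake-explicit-imagery policy
        total_pages: int

    def demotion_factor(site: Site, threshold: float = 0.05) -> float:
        """Return a multiplier in (0, 1] applied to every page score on the site."""
        removal_rate = site.removals / max(site.total_pages, 1)
        if removal_rate <= threshold:
            return 1.0  # too few removals to treat as a site-wide quality signal
        # The further past the threshold, the stronger the demotion, floored at 0.1.
        return max(0.1, 1.0 - (removal_rate - threshold) * 5)

    def adjusted_score(relevance: float, site: Site) -> float:
        """Combine a page's query relevance with the site-level signal."""
        return relevance * demotion_factor(site)

    # A site with many removals sees every page demoted; a clean site is untouched.
    flagged = Site("deepfake-host.example", removals=400, total_pages=1000)
    clean = Site("news-site.example", removals=2, total_pages=1000)
    print(adjusted_score(0.9, flagged))  # noticeably lower
    print(adjusted_score(0.9, clean))    # unchanged

The key design choice implied by Google's post is that the signal is site-wide: once removals cross some internal bar, every page from that domain scores lower, not just the pages that were removed.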
X Is Training Grok, Its AI Assistant, on User Posts
Despite all the troubles Elon Musk caused when he purchased X, publishers still rely on the platform to build and engage audiences. While many brands have ceased advertising on X due to trust and safety issues, the media and sports industries still post quite a bit, averaging around 75.9 and 41.7 posts per week, respectively.

But publishers may want to rethink the number of times they post (if at all). According to Forbes, X has begun training its AI assistant, Grok, on the content its users share. The social platform did not ask users if they were okay with Grok using their posts and interactions for training purposes; X made that decision for them. Of course, any user who hears about it can wander over to X's privacy and safety settings and opt out.

This kind of move has rankled publishers since the launch of ChatGPT made headlines. In December of last year, the New York Times filed a lawsuit in the Federal District Court in Manhattan, accusing OpenAI and Microsoft of using millions of New York Times articles to train their AI models without permission or compensation. They're not alone: Reuters, The Washington Post, and others are seeking payment from AI companies that use their content to train AI models. Some 550 publishers have installed blockers to stop these companies from accessing their content for training purposes. That works if Grok comes directly to a publisher's site, but not when journalists post their content to X; a minimal example of such a blocker follows this item. – SS
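For illustration, these blockers typically take the form of directives in a site's robots.txt file. The snippet below uses the real, published user-agent tokens of three AI-focused crawlers (OpenAI's GPTBot, Google's Google-Extended, and Common Crawl's CCBot); which crawlers any given publisher actually blocks is an assumption here, and the protocol relies on crawlers choosing to honor it.

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

As the article notes, this only protects content fetched from the publisher's own site; once a journalist posts the same material to X, it falls under X's data policies instead.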
AI Companies, Unlike Google & Facebook, Willing to Pay Publishers for Content
The walled gardens seem to have an aversion to paying publishers for content and will go to great lengths to avoid it. Take Meta: in April, Facebook began removing news from its sites rather than paying news organizations to use their content. A month later, Google began rolling out AI-generated summaries in search results that are based on publisher content, prompting the News/Media Alliance to send a letter to the Justice Department and the FTC.

But outside of the walled gardens, AI companies appear more willing to play nice with publishers. At the end of July, Perplexity.ai announced a revenue-sharing deal with TIME, Fortune, and several other publishers to use their content, promising a minimum 10% share (and potentially higher) of revenue from sponsored questions displayed below search results in the search app.

Last month, TIME COO Mark Howard said his company signed a multi-year content deal and strategic partnership with OpenAI to allow its AI models access to TIME's current and archived content going back 101 years. OpenAI has also inked deals with The Atlantic, Axel Springer, and Vox, among several other publishers.

OpenAI also recently unveiled its search tool, SearchGPT. Following Perplexity.ai's model, SearchGPT will prominently cite and link to publishers in searches with clear, in-line, named attribution, TechCrunch reports.

As privacy regulations and other challenges eat into revenue, many in the industry advise publishers to look to AI companies as a path to growth. During his keynote at the AdMonsters PubForum in Austin, TIME CTO Burhan Hamid told publishers that these kinds of deals could be an alternative revenue stream. He also suggested that the media websites of the future might well look something like Perplexity.ai. – SS