HUMAN Archives - AdMonsters https://admonsters.com/tag/human/

HUMAN’s Satori Team Uncovers Konfety Fraud Operation With New Malvertising Tactics https://www.admonsters.com/humans-satori-team-uncovers-konfety-fraud-operation-with-new-malvertising-tactics/ Tue, 16 Jul 2024 13:00:35 +0000

The post HUMAN’s Satori Team Uncovers Konfety Fraud Operation With New Malvertising Tactics appeared first on AdMonsters.

HUMAN’s Satori Threat Intelligence Team uncovered a massive mobile malvertising scheme named Konfety, exploiting sophisticated tactics through decoy apps and their “evil twins” to generate up to 10 billion fraudulent programmatic bids per day.


They named the scheme Konfety, which means “candy” in Russian, in a nod to CaramelAds, the Russian mobile advertising SDK that the threat actors managed to abuse. Konfety is a massive fraud perpetrated against DSPs and advertising networks, and at its peak, Konfety-related programmatic bids reached 10 billion requests per day.

To learn more about the threat, AdMonsters talked with Lindsay Kaye, VP of Threat Intelligence at HUMAN, who was instrumental in uncovering Konfety. For a complete discussion, see the HUMAN Satori Threat Alert: Konfety Spreads “Evil Twin” Apps for Multiple Fraud Schemes.

Susie Stulz: Konfety uses several new mechanisms in malvertising. This scheme uses decoy apps and evil twins. Can you provide an overview of the scheme and how it worked?

Lindsay Kaye: Sure. The threat actors created about 250 decoy Android application package files — or APKs — which they uploaded to the Google Play Store. These apps don’t exhibit any fraudulent behavior when we download and execute them.

And yet, in the real world, we saw a lot of IVT coming from those apps, so we started investigating. We found that the APKs in the Play Store are decoys, and they provide something really important to the threat actors: the legitimate identifiers of Google Play Store apps.

After a lot of research, we discovered the presence of evil twins to those decoy apps. Those evil twins are not distributed in the Play Store; they spread through malvertising, and they are the apps responsible for the ad fraud.

SS: So, the evil twin apps offered “inventory” in the programmatic markets 10 billion times per day?

LK: Yes, and at first glance, it looks like the fraudulent traffic comes from these decoy apps because both the evil twins and the decoy apps use the same Google identifiers. We believe threat actors have developed a new and very sophisticated technique to host malicious apps outside of the Play Store.

SS: Is that what tipped you off that a unique type of malvertising was at work?

LK: We saw no ad fraud stemming from the decoy apps we downloaded from the Play Store itself. In fact, those apps do not show ads, even if they technically can support advertising. However, when we looked at third-party repositories, like VirusTotal and some others, we noticed that there were two APKs with the same name. To dig deeper, we looked at the hashes and saw they were different.

SS: What do you mean by hashes?

LK: Hashes are identifiers generated by applying a hash function to a file’s contents. They act as digital fingerprints: any change to a file produces a different hash. Comparing hashes allows us to determine whether two files with the same name are identical or different.
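In code, the comparison the Satori team describes comes down to a few lines: hash each file’s bytes and compare the digests. A minimal Python sketch (the byte strings below are placeholders standing in for real APK contents):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

# Two hypothetical APKs that share a filename but not contents.
decoy = b"decoy racing app bytes"
evil_twin = b"evil twin fraud module bytes"

print(sha256_of(decoy) == sha256_of(decoy))      # True: identical contents, identical hash
print(sha256_of(decoy) == sha256_of(evil_twin))  # False: any difference changes the hash
```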

SS: So, were the different hashes the first clue?

LK: Yes, that was the first tip, and we began investigating from there. We thought this was interesting: two APKs with the same name but different hashes. 

But the two APKs themselves were also really different; they weren’t even pretending to be the same app. The decoy APK in the Google Play store may be a car racing app, but its evil twin wasn’t. It was just stealing the legitimate Google identifiers of the decoy to commit ad fraud.

SS: How often were the decoy apps downloaded?

LK: Not very often; they averaged 10,000 downloads per app, which is nothing in the app world. This is one of the things that stood out to us: Apps with a small number of installs were generating a huge amount of IVT. 

SS: Is the CaramelAds SDK inherently fraudulent?

LK: The SDK has some vulnerabilities that allow threat actors to abuse it. If you’re looking for an SDK to monetize your mobile app, I suggest looking elsewhere until those vulnerabilities are fixed.

SS: At present, HUMAN has observed ad fraud only stemming from Konfety, but haven’t you noticed other things getting loaded on the user devices, such as a search tool and intent signals? What are the purposes of these things?

LK: To date, we have only observed ad fraud, but in the report, we describe other things, like intent filters, that were loaded onto the devices. These are links that pretend to open other applications, such as Zoom or TikTok. Certainly, those intent links can be used for other frauds that target the user, such as credential stealing or pushing other kinds of malware onto the device. We just didn’t observe that kind of activity to date.

Obviously, this is an ongoing threat, and one that we expect will evolve and we will continue to monitor.

SS: What advice do you have for AdOps teams so they can avoid the Konfety threat?

LK: The most important thing AdOps teams can do is to use an IVT monitoring tool or platform. Obviously, HUMAN offers one, but there are others. Campaigns like Konfety show that the threat actors are getting more sophisticated, making their threats very difficult to detect.

Uncovering the evil twins required an extremely complex investigation that AdOps teams might not have the time or skillset to conduct on their own.

The second thing I’d recommend is for AdOps teams to look at their past traffic. Do you see a lot of ads served to apps that have a small number of downloads? If yes, you might want to investigate it and share your findings with your partners. Sharing insights makes the industry safer.

As I said earlier, avoid using CaramelAds until its vulnerabilities have been fixed. 
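The past-traffic audit Kaye suggests can be sketched in a few lines of Python. The app names, install counts, impression volumes, and the 1,000-impressions-per-install threshold below are illustrative assumptions, not figures from the report:

```python
from typing import NamedTuple

class AppTraffic(NamedTuple):
    app_id: str
    installs: int             # e.g., Play Store install count
    monthly_impressions: int  # impressions your campaigns served to this app

def suspicious(apps, impressions_per_install_limit=1_000):
    """Flag apps whose ad volume is implausible given their install base."""
    return [a.app_id for a in apps
            if a.monthly_impressions > a.installs * impressions_per_install_limit]

apps = [
    AppTraffic("com.example.racing", 10_000, 30_000_000),    # Konfety-like profile
    AppTraffic("com.example.weather", 5_000_000, 80_000_000),
]
print(suspicious(apps))  # -> ['com.example.racing']
```

An app flagged this way isn’t proof of fraud, but, as Kaye notes, it’s the kind of finding worth investigating and sharing with partners.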

SS: The challenge, I think, is that fraudsters are often copycats. They see threat actors succeed with one tactic, in this case, decoys and evil twins, and they create their version of it. Does this mean evil twins in malvertising will be with us for a while?

LK: That’s likely, so AdOps teams must choose their SDKs wisely and work with only reputable companies. However, even then, threat actors may find new vulnerabilities to exploit, so monitoring IVT regularly is critical.

Cybersecurity has always been a game of cat and mouse, and Konfety is a great example of this. Threat actors were getting kicked out of the Play Store, so they found a way to commit fraud outside the official app stores.

SS: Final question: the report offers a great deal of technical description, sample code, domain names, the names of the decoy apps, and so on. Where can readers access that report?

LK: It’s available online, at: https://www.humansecurity.com/learn/blog/satori-threat-intelligence-alert-konfety-spreads-evil-twin-apps-for-multiple-fraud-schemes

Merry-Go-Round Scheme Conceals Ads for Consumers and Brands https://www.admonsters.com/merry-go-round-conceals-ads-for-consumers-and-brands/ Thu, 30 May 2024 13:00:23 +0000

The post Merry-Go-Round Scheme Conceals Ads for Consumers and Brands appeared first on AdMonsters.

HUMAN’s Satori Threat Intelligence Team says that a scheme called Merry-Go-Round, at its peak, reached 782 million bid requests a day. 

HUMAN’s Satori Threat Intelligence issued a Security Threat Alert this morning, detailing a scheme it calls Merry-Go-Round. At its peak, Merry-Go-Round reached 782 million fraudulent bid requests daily, cleverly evading detection through a sophisticated cloaking mechanism.

Although the scheme has been detected and interrupted, the Satori team warns that the industry isn’t out of the woods, as the operation is still active and accounts for 200 million fake bid requests daily.

How It Works

Consumers visit several piracy and adult-content websites that are affected by Merry-Go-Round (HUMAN has not published the names of those domains). 

The Merry-Go-Round kicks off when a user clicks on a story or video from one of the affected sites’ directories. An overlay hijacks the click, opening a second tab to display the content the user expects to see. Meanwhile, the original tab, now out of the user’s focus, redirects the user through a series of pages on fake sites that the fraudsters created for the scam. Those sites, all of which have benign names such as beautyparade.co and caloriamania.co, don’t have any actual content. They’re simply pages cluttered with ads sold via the open programmatic markets.

The volume of impressions created on these out-of-focus tabs is immense. Let’s say a user visits one of the affected sites to download a movie and doesn’t notice the out-of-focus pop-under tab for the entire two hours he or she watches the movie. Every 60 seconds, the out-of-focus tab directs the user to the next page in the fake domains that make up the Merry-Go-Round network. Each page can contain up to 100 fake ads, so over the course of that movie, some 12,000 bid requests will occur. If, like many people, the user doesn’t notice the open tab and leaves it open for 24 hours, some 150,000 ad requests will be generated.
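As a sanity check, the arithmetic above (one fake page per minute, up to 100 ad slots per page) works out as follows; a tab left open for 24 hours yields roughly the article’s “some 150,000” figure:

```python
ROTATIONS_PER_HOUR = 60  # the out-of-focus tab loads a new fake page every 60 seconds
ADS_PER_PAGE = 100       # upper bound per the report

def bid_requests(hours_open: float) -> int:
    """Upper-bound bid requests generated by one out-of-focus tab."""
    return int(hours_open * ROTATIONS_PER_HOUR * ADS_PER_PAGE)

print(bid_requests(2))   # 12000: the two-hour movie in the example
print(bid_requests(24))  # 144000: a tab left open for a full day
```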

The more tabs left open, the more fake bid requests sent to SSPs. In one instance, HUMAN saw more than 789,000 ad requests associated with Merry-Go-Round from a single residential IP address in a single day.
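A first-pass check for that pattern is simply counting ad requests per client IP per day; the log format and the 10,000-request threshold below are illustrative assumptions, not part of HUMAN’s methodology:

```python
from collections import Counter

def flag_heavy_ips(ip_log, daily_limit=10_000):
    """Return IPs whose daily ad-request count exceeds a plausibility limit."""
    counts = Counter(ip_log)
    return {ip: n for ip, n in counts.items() if n > daily_limit}

# Toy one-day log: one IP generating far more requests than a human plausibly could.
log = ["203.0.113.7"] * 15_000 + ["198.51.100.4"] * 40
print(flag_heavy_ips(log))  # -> {'203.0.113.7': 15000}
```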

Cloaking Mechanism

So, how do brands and their advertising partners not know these sites are fake? Don’t they audit sites in their networks?

To evade detection, the Merry-Go-Round perpetrators have deployed a sophisticated domain cloaking mechanism built on path-dependent domain loading, a method in which the content displayed on the website depends on how the user arrives there. Brand auditors who directly type a Merry-Go-Round domain into their browsers will see a seemingly legitimate, if mundane, website, as the fraudsters have programmed those sites to prevent redirects during direct visits.

“These actors have gone out of their way to conceal what they’re doing,” explained Will Herbig, Director of Fraud Operations at HUMAN Security. “They scrubbed all the referral information between the Merry-Go-Round domains and the piracy domains, as well as all the referrals within the Merry-Go-Round network. They’ve also added some anti-crawler features to the website. As a result, it is very challenging for a layperson at a brand to detect the scheme.”

To protect their budgets from the Merry-Go-Round scheme, Herbig recommends that brands know as much about their partners as possible. Direct relationships can help brands avoid these types of situations.

The rise of domain cloaking techniques like path-dependent domain loading and IP address filtering presents a significant challenge in ad fraud detection. These techniques allow fraudsters to mask a website’s true nature, creating a major disconnect between what advertisers believe they’re buying (ad impressions on legitimate sites) and what they actually get (impressions on hidden, malicious content).

“We found quite a bit of fraud around this domain cloaking, and we’re going to be publishing other findings along those lines throughout the rest of the summer. It continues to be an area where we’re seeing quite a lot of fraud, and the techniques there are evolving, making it harder and harder for people, especially advertisers, to know whether what they’re getting is actually real,” Herbig said.

For more details, including examples of the iFrames and overlays used in the Merry-Go-Round scheme, download the report, Satori Threat Intelligence Alert: Merry-Go-Round Conceals Ads from Users and Brands.

HUMAN Security Holiday Report Explains How Grinch Bots Steal the Holidays https://www.admonsters.com/human-security-holiday-report-explains-how-grinch-bots-steal-the-holidays/ Thu, 03 Aug 2023 13:37:19 +0000

The post HUMAN Security Holiday Report Explains How Grinch Bots Steal the Holidays appeared first on AdMonsters.

HUMAN Security released its 2023 Bad Bot Holiday Report, which details what cybercriminals were up to last holiday season.

The cybercriminals started early, planned carefully, and then unleashed a torrent of bad bots to bilk retailers and consumers alike. HUMAN’s report offers websites and online retailers a look at their ploys so that security teams can keep their sites and their customers safe in the upcoming holiday season.

Below are the things to watch out for, according to HUMAN.

Cybercriminals Start Early

Cybercriminals begin planning their crimes in the months leading up to Cyber Monday. From September to November of last year, HUMAN measured 99% more bad bot traffic to retail sites than the yearly average.

Human traffic, on the other hand, stayed relatively flat, reaching its peak during Cyber Week.

What Was All That Bot Traffic Up To?

While most consumers spent last summer and fall barbecuing and getting their kids ready for school, the cybercriminals were laying the groundwork for their crimes. According to HUMAN, they were busy: 

  • Harvesting sensitive data from breaches, leaky databases, phishing campaigns, and dark web lists  
  • Executing automated credential stuffing, carding, and brute force attacks to validate credentials, credit card numbers, and other PII 
  • Submitting fake leads and contaminating web engagement metrics.

“Cybercriminals use bad bots to prepare in the summer and fall, so they will be ready when the holiday season rolls around,” HUMAN warns. “These bad bots then launch large-scale attacks during major online traffic periods and sales events.”

Types of Attacks

HUMAN noted that three types of attacks dominated the holiday season:

Account Takeovers

These attacks use stolen credentials to gain unauthorized access to a user’s accounts, make purchases, drain bank accounts, and inflict a host of other ills. Account takeover attacks were up 123% in the second half of last year. In fact, 48.2% of all log-ins were malicious.

Carding Attacks

Carding, or using bots to test stolen credit card, bank card, and gift card numbers, is the biggest threat to e-commerce retailers during the holiday season. Once fraudsters validate the numbers, they buy all sorts of things to resell online.

In early November 2022, the percentage of malicious checkout attempts out of total checkout attempts rose 350%. The percentage of carding attacks out of total checkouts increased 900% in the days following Cyber Monday. This was likely due to bots continuing their attacks on e-commerce sites even after human traffic subsided. 

For e-commerce alone, HUMAN measured a significant peak in the summer months, when almost 30% of checkout attempts were malicious. This was followed by another small peak in October and a jump during the holiday season.

Scraping

Scraping is when bots harvest a website’s data to capture competitive intel. Scraping also takes a toll on a website’s SEO ranking (most sites invest in SEO ahead of the holiday season, so this is especially frustrating).

“Brands and marketers are profiting from online advertising during the holiday season, with holiday sales last year growing more than 5.3% over 2021 to $936.3 billion, according to the National Retail Federation, and consumers spending nearly $1,500 on gifts, travel, and entertainment, according to PwC research,” said Liel Strauch, HUMAN’s Senior Director of Enterprise Research. “It’s no wonder cybercriminals and fraudsters are already planning and embarking on their schemes. Our research demonstrates why bots are one of the most prolific tools for cybercrime: their increased sophistication gives fraudsters an uncanny ability to mimic human behavior online. They’re utilizing carding attacks, account takeovers, and scraping attacks to target both consumers and e-commerce sites, which can impact a consumer’s bank account and an e-commerce site’s profits.”

Learn More

The report goes into more detail, which you can read here.

A Look into the Flourishing Bot Economy https://www.admonsters.com/a-look-into-the-flourishing-bot-economy/ Mon, 13 Mar 2023 20:00:07 +0000

The post A Look into the Flourishing Bot Economy appeared first on AdMonsters.

While the world frets about the possibility of a recession, one positively flourishing sector is the bot economy. And it’s not just growing in size; the level of sophistication of bot networks is increasing in leaps and bounds. As a result, the bot economy is now a favored tool for sophisticated organized criminal activity. 

Recently, HUMAN Security made headlines when it reported it had successfully taken down a massive bot network known as VASTFLUX. At its height, VASTFLUX stole potentially tens of millions of dollars in revenue by launching fraudulent SSPs to host auctions for impressions that didn’t exist, and by using ad seats on DSPs to purchase ads that contained their zero-day payload.

That payload triggered unexpected new sideloaded auctions monetized by their fraudulent SSPs. It was a dazzlingly elaborate scheme that required real seats on DSPs, technical expertise, and supporting infrastructure that cost millions of dollars. This, to HUMAN, is a perfect example of the bot economy.

To learn more about today’s bot networks and how the industry can work together to limit their damage, Admonsters spoke with Zach Edwards, Senior Manager of Threats Insight for HUMAN. 

AdMonsters: You say that bot schemes are on the rise. By how much? What’s the main tactic?

Zach Edwards: We see a huge spike in account takeovers. They’ve increased by 98% in the last six months. Once deployed, bots break into username-protected accounts and cause all sorts of grief for the victims.

AdMonsters: In a previous email, HUMAN said that the bot economy is flourishing with SaaS delivery and customer support. Does that mean anyone can buy a bot and use it to start stealing ad revenue from publishers and advertisers?

ZE: Not exactly. However much you simplify it, that scenario remains out of reach: the amount of money, technical skill, and infrastructure required means that bot networks on par with VASTFLUX are beyond a college student looking to make quick money.

It’s great for the industry that the barriers are high. But at the same time, the bad actors who exist and target our ad system are not in jail. 

AdMonsters: Then what do you mean by bot SaaS models?

ZE: It’s software as a malicious service, meaning that bots are sold and used for malicious activities. In this ecosystem, we see overlapping threat actors: people who develop a threat tactic, backburner it for a few years, then bring it out again. 

It’s important to think about this particular service ecosystem as a big affiliate structure, so it’s much more sophisticated than buying a sneaker bot, which anyone can buy on the web.

As you said in the intro, these bot networks require capital, infrastructure, technical expertise, and huge operations to create accounts. I believe that people in the industry will really benefit from an understanding of the operational chunk of a bot network.

AdMonsters: Okay, how do bot operations work?

ZE: There are multiple structures. Often, a bad actor will sign up for a DSP and submit fake or real corporate credentials tied back to a know-your-customer (KYC) process. But, they don’t accurately disclose where they do business or their location. This is where things are imploding.

The fraudsters will lay low, purchasing inventory and displaying ads without malicious code until they build their credentials. Once they’re flying under the radar a bit, they begin to deploy the malicious code. They also have sophisticated detection capabilities. For instance, they can detect when an ad is screened for bots and display an innocuous ad in such scenarios to avoid getting caught. This is the classic fraudulent DSP.

All malicious bot networks need a cashout mechanism to divert the legitimate actor’s budget into their own pockets. In the case of VASTFLUX (discovered by my colleagues, HUMAN Threat Researcher Vikas Parthasarathy and Data Scientist Marion Habiby), the malicious ads triggered additional invisible auctions. In a sense, the fraudsters cashed out by acting as fraudulent SSPs and selling millions of dollars’ worth of fake inventory.

AdMonsters: So the bad guys buy one legitimate impression, then sell that same impression to multiple unsuspecting buyers?

ZE: Exactly. To the buyers, it looks like they purchased a legitimate impression, so they don’t put a stop to their buys.

It’s important to note that VASTFLUX targeted real users with some portion of bots involved. But that’s just one investigation. We’re looking into dozens of others where that’s flipped the other way. These schemes rely more heavily on bots compared to real users. 

The latter schemes rely on bots and fake traffic, which they can get from criminal organizations establishing affiliate networks spanning thousands of websites. The crime organization’s customers can purchase traffic to specific websites from different countries, and referral traffic from specific domains and social networks. This process allows the bad actors to customize what the fake traffic looks like or customize which bots they rent. The more enterprising ones can turn around and sell that customized bot network to their customers.

AdMonsters: What can the industry do to recognize when they’re buying fraud?

ZE: We need to recognize we’re buying too many impressions from a specific app. If you buy 30 million impressions a month on a specific app, you definitely want to be in contact with that app publisher. Reaching out to that app publisher and informing them of the exchanges in which you’re purchasing that app’s inventory will create a feedback loop that can let you know if things aren’t lining up. That app publisher may tell you they don’t sell on those exchanges, or that their apps don’t have the number of users required to generate 30 million impressions each month, or you may get a great direct buy deal with a discount on your impressions just by reaching out.

I’m not suggesting that such conversations alone can uncover schemes like VASTFLUX. Still, they are an excellent way for buyers and sellers to assess whether fraud exists in cases where a marketer is buying vast amounts of inventory. And anyway, those dialogues could lead to partnerships or discounts, so they never hurt.

Residential Proxy IP Networks: What Everyone in Ad Tech Needs to Know https://www.admonsters.com/residential-proxy-ip-networks-what-everyone-in-ad-tech-needs-to-know/ Wed, 01 Feb 2023 19:49:23 +0000

The post Residential Proxy IP Networks: What Everyone in Ad Tech Needs to Know appeared first on AdMonsters.

Residential IP proxy networks are popping up everywhere and undoubtedly provide useful services for many businesses.

Still, some wonder if nefarious players are leveraging them to commit ad fraud (the answer is yes, according to multiple sources, but more on that later).

If you have yet to hear of a residential proxy IP network, I encourage you to Google them and start reading. Some networks claim to have millions of residential IP addresses for rent. These IP addresses are global, with some networks boasting nearly 200 locations, all of which can be used to get around the restrictions applied to content and websites.

What Is a Residential IP Proxy Network?

A residential IP proxy network is a network that pays consumers to share their Internet. The network then re-sells those consumers’ residential IP addresses to its customers — companies or users who want, for whatever reason, to appear as if they’re residential IPs within a specific region.

To get their pool of residential IP addresses, apps like Honeygain, Earn.app, Pawns.app, and many others pay home users to share their Internet traffic. These users install a proxy network app on their smartphones or computers and forget about it until the money rolls in. The payments are small (ranging from $0.20 per GB of shared data to $75 per month). Still, if users are looking for pocket money, residential IP proxy networks promise to pay them for doing nothing. 

Some networks emphasize that their residential IP addresses are “ethically sourced,” meaning the consumer is told quite clearly that sharing their Internet means someone else will use their IP address to access sites. Consumers don’t know who will use their IP addresses or for what purpose. Many networks promise consumers that IP addresses will only be used for approved use cases, but they need to go into more detail on what that actually means.

As Brian Krebs explains in his security blog, other residential IP networks, such as 911 S5, get their IP addresses by offering free VPN services and then using those users’ IP addresses without telling them.

Residential Proxy IP Use Cases

It’s hard to know how many residential proxy IP networks are out there. Jonathan Tomek, VP of Research and Development at Digital Envoy, believes there are thousands. Millions of people must want to rent residential IP addresses if that’s the case. What are their use cases? 

In researching this article, I read the websites of a dozen or so providers. Web scraping is a key use case for all of them, and they offer particular services, such as the ability to scrape product data from the world’s largest online retailers. If you want to outwit bot detection for price comparison or competitive analysis purposes, a residential IP proxy network is the tool for you. 

Several residential IP proxy networks say their services are best suited to ad verification. Suppose a multinational brand launches a campaign in six languages across five regions. In that case, it can deploy residential IP proxies to view each ad in each region and verify that the ads appeared as expected. 

The worrisome issue with residential proxy networks is that they’re tools, and tools are deployed for legitimate or nefarious purposes. Multiple industry experts say ad fraud is common, as identifying residential IP proxies can be tricky.

“Unlike datacenter proxies, which typically are easily identified in the industry, residential proxies seem much harder to block. Thus, they are sometimes favored by scheme operators who seek to bypass detection and restrictions,” explained Gilit Saporta, Director of Fraud Analytics at DoubleVerify.

These networks are actively deployed in various fraudulent schemes, including ad fraud. This past summer, the FBI seized the website rsocks.net and disrupted the RSOCKS botnet, which, according to the DoJ, hijacked millions of computers to “convert residential computers into proxy servers, allowing the botnet’s customers to use them for malicious activity or to appear as coming from a residential IP address.” 

In 2019, security professionals discovered that TheMoon botnet, long known for its DDoS attacks, had switched tactics and targeted YouTube in an ad fraud scheme. That same year, DoubleVerify identified (and stopped) OctoBot, an ad fraud scheme that bilked CTV advertisers out of millions of dollars each month. DoubleVerify’s Saporta said residential IP proxies played a role in that scheme.

In 2018, cybersecurity experts and federal investigators discovered that 1.7 million IP addresses had been hacked and deployed to view up to 12 billion digital ads daily.

Rich Kahn, the founder of Anura.io, sees residential IP proxy traffic used in ad fraud on a daily basis. “We defend a lot of lead generation companies. We see human fraud firms take advantage of these residential proxy networks to make their IP address appear legit.” He estimates that one in four leads that stem from advertising is fraudulent; some, but not all, of that fraud comes through residential IP proxies. 

“I just reviewed a campaign with a client in which we marked a series of fraudulent transactions that came through their network. All of those transactions came from residential cable modems, which is a strong indication of a residential IP proxy network,” said Kahn.

Zach Edwards of HUMAN agrees. “This is an extremely common way to commit ad fraud, and we’ve written about residential proxy usage for ad fraud within our Terracotta investigation. HUMAN does not have public numbers to share about real residential proxy usage, but we actively monitor proxy networks.” 

Residential Proxy IP Networks Make Life Easier for Fraudsters

According to some security professionals, residential IP proxy networks add a lot of efficiency to criminal operations and let fraudsters hide from law enforcement and blockers.

Today, nefarious players don’t need to bother hacking devices and hijacking residential IPs; they can just rent all the proxies they need from a network. It’s affordable, too. When investigating rsocks.net, the DoJ noted it costs just $200 per day for 90,000 proxies. This makes ad fraud schemes profitable as long as the fraudster earns more in CPC commissions than they pay for the proxies.
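The break-even condition is simple to sketch. The $200-per-day rental figure is from the DoJ, while the per-click payout below is an assumed value for illustration:

```python
proxy_cost_per_day = 200.0  # DoJ figure: $200/day rents 90,000 proxies
assumed_cpc = 0.05          # hypothetical average payout per fraudulent click

# Fraudulent clicks per day needed before the proxy rental pays for itself.
breakeven_clicks = proxy_cost_per_day / assumed_cpc
print(breakeven_clicks)  # -> 4000.0
```

Spread across 90,000 rotating IPs, that is a tiny, hard-to-spot per-IP click rate, which is exactly why this math favors the fraudster.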

Of course, a handful of hits to an ad from the same IP address will raise red flags. Still, with hundreds of thousands (or even a million) of residential IP addresses available from just one network, getting around those security checks is easy. BlackProxies, a residential proxy network that claims to have a million “real” IPs in its network, offers “blazing fast rotating residential proxies.” 

Another convenience: fraudsters no longer need to set up a farm of thousands of cell phones to view ads; they only need to pay $200 or so to a network with 90,000 residential IP addresses.

So are fraudsters abandoning traditional click farms in favor of residential proxy IP networks? Digital Element's Tomek, who worked on the 3ve investigation (a botnet behind a massive ad fraud scheme whose takedown HUMAN led), is convinced they are, though he admits it's difficult to quantify, as this traffic looks like organic residential users to ad verification tools.

Detecting Residential IP Proxies

It’s possible to detect when fraudsters use residential IP proxies to view and click on ads, and all of the people interviewed for this article say their companies offer detection services. Methodologies vary from company to company.

“We go down to the user level to identify what’s happening. Is there anything more than just bouncing off that IP address? Fraudsters need to do a certain amount of automation and other things that we can detect. That’s how we catch them,” explained Kahn of Anura.io.
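As a rough illustration of the kind of user-level automation signal Kahn describes, one naive heuristic is to flag an IP whose click cadence is too regular to be human. The sketch below is purely illustrative and is not Anura's methodology; real systems combine dozens of device, browser, and behavioral signals.

```python
from statistics import pstdev

def looks_automated(click_timestamps, min_clicks=5, min_jitter_s=1.0):
    """Flag an IP whose inter-click gaps are suspiciously uniform.

    Illustrative heuristic only: timing regularity is one naive cue
    among the many signals a production detection system would use.
    """
    if len(click_timestamps) < min_clicks:
        return False
    ts = sorted(click_timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # Near-zero variance in the gaps suggests a script on a timer.
    return pstdev(gaps) < min_jitter_s

# A bot clicking every 30 seconds exactly vs. a human's irregular pattern.
bot = [0, 30, 60, 90, 120, 150]
human = [0, 42, 51, 130, 275, 290]
print(looks_automated(bot), looks_automated(human))  # → True False
```

The point of the example is Kahn's: a rotating residential IP hides *where* the traffic comes from, but automation still leaves behavioral fingerprints that have nothing to do with the IP address.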

Detecting residential IP proxies is an art. DoubleVerify, for instance, detects them and other masked IPs by combining network telemetry with traffic analysis, drawing on its extensive experience classifying the diverse scenarios of valid and invalid browsing.

Edwards of HUMAN Security advises advertisers to weigh new ad channels and buying opportunities carefully, and to avoid ad traffic where residential proxy usage can't be segmented for analysis. "If you are unable to change the channels/apps/websites you're buying on while seeing similar impacts in the residential proxy usage stats, there could be a residential proxy bot adversary targeting a wide swath of inventory across that network, which can end up inflating costs for all campaigns on that network," he said.

A Complicated Future

Edwards warns that new developments will complicate the detection of residential IP proxies. For instance, privacy changes, such as Apple’s iCloud Private Relay technology, will mean millions of legitimate consumers will use new traffic-sharing technology. Consequently, the digital ad tech industry should expect to see more and more residential proxies and pooled IP addresses in ad traffic in the future.

Note: Digital Element is a client of Susie Stulz.

The post Residential Proxy IP Networks: What Everyone in Ad Tech Needs to Know appeared first on AdMonsters.

HUMAN Discovers and Shuts Down Massive Ad Fraud Scheme https://www.admonsters.com/human-discovers-and-shuts-down-massive-ad-fraud-scheme/ Fri, 20 Jan 2023 19:50:37 +0000
Mobile advertising is big business, and where money flows, fraudsters follow. 

Last year, advertisers spent over $327 billion targeting users as they engaged with popular mobile apps, but as HUMAN Security announced yesterday, a chunk of that spending went into the pockets of fraudsters who successfully launched a massive ad fraud scheme.

HUMAN discovered the highly sophisticated scheme last summer and dubbed it VASTFLUX, a name combining fast flux, an evasion technique used by cybercriminals, with VAST, the video ad-serving template the criminals exploited to perpetrate this crime.

Massive Ad Fraud Scheme

In this multistep fraud, attackers purchased mobile inventory via programmatic exchanges and then injected malicious JavaScript code. That code allowed the attackers to stack as many as 25 video ads on top of one another, enabling the fraudsters to register multiple ad views. All ads, of course, were completely invisible to the user, which was instrumental in evading detection.
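To make the stacking mechanic concrete, here is a hedged Python sketch (not HUMAN's actual detection logic) of how a verifier might spot it: several ad slots reporting impressions from one placement while occupying essentially the same on-screen rectangle, even though only the top of the stack could ever be seen.

```python
# Illustrative sketch of spotting stacked ads: many slots reporting
# impressions while sharing one on-screen rectangle. This is not
# HUMAN's actual methodology; slot data here is invented.

def detect_stacking(slots, overlap_threshold=0.9):
    """Return the size of the largest group of near-coincident slots.

    Each slot is (slot_id, x, y, width, height). A legitimate
    placement should yield 1; a VASTFLUX-style stack yields many.
    """
    def overlap_ratio(a, b):
        ax, ay, aw, ah = a[1:]
        bx, by, bw, bh = b[1:]
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        smaller = min(aw * ah, bw * bh)
        return inter / smaller if smaller else 0.0

    best = 1
    for anchor in slots:
        stack = sum(1 for s in slots
                    if overlap_ratio(anchor, s) >= overlap_threshold)
        best = max(best, stack)
    return best

# Three ads stacked on one 300x250 rectangle, plus one placed elsewhere.
slots = [("a", 0, 0, 300, 250), ("b", 0, 0, 300, 250),
         ("c", 2, 0, 300, 250), ("d", 400, 0, 300, 250)]
print(detect_stacking(slots))  # → 3
```

Geometry alone is only half the story, of course; VASTFLUX also rendered the stack invisibly, which is why viewability measurement matters alongside slot-position checks.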

"What was technically impressive and incredibly concerning about VASTFLUX was the fraudsters hijacked impressions on legitimate apps, which makes it nearly impossible for users to tell if they are impacted," said Gavin Reid, HUMAN's newly appointed CISO, in a statement.

HUMAN discovered VASTFLUX as its data scientists were investigating an entirely different threat. VASTFLUX managed to spoof some 1,700 apps, target 120 publishers, and run ads on 11 million devices. At its peak, the fraud accounted for more than 12 billion fraudulent ad requests per day.

"It is clear the bad actors were well organized and went to great lengths to avoid detection, making sure the attack would run as long as possible—making as much money as possible," Marion Habiby, a data scientist at HUMAN Security, told Wired.

A Bot Economy

Thanks to their ability to mimic human behavior, bots are a favored and prolific tool of cybercriminals. Tamer Hassan, CEO and Co-Founder of HUMAN Security, says bots are used in 77% of all digital attacks.

“What’s especially important to understand is there is a bot economy that supports sophisticated organized criminal activity, allowing anyone to buy bots. This allows bad actor groups to function like legitimate businesses and fund other criminal schemes.” 

For instance, his company has seen:

  • Botnets leased or even franchised, much as other SaaS products are marketed and sold.
  • Customer success services, complete with customized solutions and 24/7 support via encrypted chat rooms.
  • Marketplaces where everyday users, not sophisticated cybercriminals, can purchase bot support to secure coveted items like tickets and sneakers.

As a result of this economy, malicious actors, such as those behind VASTFLUX, can easily develop, deploy, and adapt botnets to bilk advertisers while evading detection.

According to Hassan, the ultimate goal is to eliminate the financial incentive for these schemes, an effort that will require cooperation among everyone in the industry.

“We need to change our approach and embrace modern defense as a core framework for effective intra-industry and public-private collaboration. This approach goes after the economics of cybercrime, ultimately making schemes like VASTFLUX unprofitable while collective protection lowers the cost of defense. Winning the economic game is how we win as an industry against cybercriminals.” 

The post HUMAN Discovers and Shuts Down Massive Ad Fraud Scheme appeared first on AdMonsters.
