Pulling a defense straight out of Twitter’s playbook never looks good. Yet that’s exactly what Meta did Monday when accused of placing ads near problematic content.
What’s going down?
Disney, Pizza Hut, Walmart, Match, and Tinder have been suspending ad campaigns on Instagram. This comes after the Wall Street Journal and The Canadian Centre for Child Protection conducted tests to mimic child predators and found ads near sexually explicit material.
Something should be said when a platform gives major brands like these control over whether their ads appear near content like this. Something else has to be said when that control fails.
Both groups documented what Instagram served them after conducting the tests.
Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands […]
In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl.
It’s important to note that while these accounts make up only a small portion of Instagram users, the Wall Street Journal says “tens of thousands” of accounts fit the bill. The Journal also reported seeing similar content when it followed those accounts.
Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.
Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.
The Twitter Defense
Meta, of course, denies any wrongdoing and says that these tests aren’t indicative of the actual user experience. In a statement, the company said:
We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it. We continue to invest aggressively to stop it – and report every quarter on the prevalence of such content, which remains very low. Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions.
These results are based on a manufactured experience that does not represent what billions of people around the world see every single day when they use our products and services. We tested Reels for nearly a year before releasing it widely – with a robust set of safety controls and measures. In 2023, we actioned over 4 million Reels per month across Facebook and Instagram globally for violating our policies.
Again, this goes to show that, just like Twitter, Meta would rather claim the results were manufactured than actually fix the problem. That these companies would rather not spend the money to do the right thing is just baffling.
I know how we got here, but it’s still wild that we are here.