
Whistleblowers Say Social Media Pushed Dangerous Content for Profit


New whistleblower revelations have sparked concern over how major social media companies handle harmful content, with some firms accused of knowingly allowing dangerous or offensive material to spread widely among users in the name of engagement and growth.

The accounts come from testimony given by more than a dozen current and former employees at some of the biggest technology companies, describing major decisions that shaped how their algorithms recommended content to users.

An engineer at Meta Platforms, the company that owns Facebook and Instagram, told reporters that the company had pushed its teams to loosen restrictions on “borderline content” in order to compete with the rapid growth of the popular app TikTok.

The engineer said the directive was driven by financial pressure, recalling that teams were told “the stock price is down.”

Algorithm Competition and Safety Concerns

The whistleblowers’ accounts provide a rare glimpse into how the social media industry reacted to the explosive rise of TikTok’s recommendation engine, which dramatically reshaped how short-form videos are distributed online.

Former Meta Platforms researcher Matt Motyl said the company launched Instagram Reels in 2020 as a direct attempt to compete with TikTok’s format.

According to internal studies shared with investigators, the feature initially rolled out with limited safeguards, resulting in higher levels of harmful interactions compared with other parts of Instagram.

Research documents suggested comments on Reels showed significantly higher levels of bullying and harassment, hate speech and incitement to violence.

Motyl explained that many decisions were influenced by business considerations tied to user engagement and advertising revenue.

“Meta’s products are used by north of three billion people and the more time they can keep you on there, the more ads they sell, the more money they make,” he said.

However, he warned that when platforms fail to manage these systems responsibly, “really bad things happen.”

Another former Meta engineer, identified only as Tim, said leadership pressure to compete with TikTok created a shift in strategy.

He recalled how internal discussions focused on catching up quickly with the rival platform.

“People started becoming paranoid and reactive and they were like, let’s just do whatever we can to catch up,” he said.


Claims of Prioritising Political Relationships

Separate testimony from a trust and safety staff member at TikTok also raised questions about how the company prioritises content moderation cases.

The employee, referred to as Nick, provided rare access to internal dashboards used by moderators to review reports from users.

Nick said staff were instructed to prioritise certain cases involving political figures while some serious complaints involving young users received less urgent attention.

According to the whistleblower, the company aimed to maintain favourable relationships with policymakers.

Decisions were sometimes made to maintain a “strong relationship” with political figures, the staffer said, arguing this was intended to avoid possible regulatory action or bans in certain countries.

Nick described the ethical tension faced by moderation teams tasked with reviewing harmful content daily.

“If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point you can decide, should I say something?” he said.

Nick also claimed that reports involving serious risks to minors, including allegations of cyberbullying or sexual blackmail, had at times been ranked lower in internal review systems than politically sensitive reports.

Engineers Describe Limits of Algorithm Oversight

Former machine learning engineer Ruofan Ding, who worked on TikTok’s recommendation systems, said algorithms used to suggest videos operate in ways that can be difficult for engineers themselves to fully control.

“The algorithms are a ‘black box’ whose internal workings are difficult to scrutinise,” he explained, noting that developers focus largely on technical optimisation rather than evaluating each individual piece of content.

“We have no control of the deep-learning algorithm in itself,” Ding said, explaining that engineers often treat posts simply as data points rather than analysing their meaning.

He compared the relationship between algorithm developers and safety teams to different departments working on parts of a car.

“There’s the team that are responsible for the acceleration, the engine. So we expect the team working on the braking system was doing a good job,” he said.

However, Ding added that as TikTok pushed frequent updates to its recommendation system, he began noticing increasing amounts of “borderline” content surfacing after users spent longer browsing the platform.


Companies Reject the Allegations

Both Meta Platforms and TikTok have rejected the allegations.

Meta Platforms denied that the company intentionally promotes harmful content for financial gain.

According to a spokesperson for the company, “These allegations are not accurate. We have taken a number of strict policy steps and have invested heavily in safety measures over the past decade.”

TikTok, for its part, dismissed the accounts as “fabricated claims.”

According to the company, “Our technology is designed to prevent harmful content from appearing in users’ feeds. Teen accounts have numerous safety settings enabled by default.”

However, the allegations and the whistleblower testimony have raised fresh concerns about the extent to which social media algorithms can shape public discourse.

Brandon Silverman, a former executive at CrowdTangle, said social media companies should accept that they have a part to play in shaping online behaviour.

“Nobody’s saying you’re responsible for all polarisation,” he said, addressing the social media companies. “We’re just saying you contribute to it. And if you just made a few changes, you might not contribute to it as much.”


Meta CEO Mark Zuckerberg. PHOTO/File
