The Federal Trade Commission rebuked social media and streaming companies including YouTube, Amazon and Facebook on Thursday, accusing them of failing to adequately protect users from privacy intrusions and safeguard children and teens on their sites.
In a sprawling 129-page staff report, the agency summed up a years-long study of industry practices, accusing the companies of not “consistently prioritizing” users’ privacy, of broadly scooping up data to power new artificial intelligence tools and of refusing to confront potential risks to kids.
FTC Chair Lina Khan, a Democrat whose aggressive oversight of the tech giants has drawn plaudits from liberals and conservatives alike, said the report shows how companies’ practices “can endanger people’s privacy, threaten their freedoms and expose them to a host of harms.” The findings on child safety were “especially troubling,” she added.
In 2020, the FTC demanded that nine social networks and video streaming providers hand over information on how they collect, use and sell people’s personal data, how their products are powered by algorithms and how their policies affect kids and teens.
The agency was able to compel information from companies whose practices lawmakers and regulators have often criticized as too opaque. They included Amazon, Facebook (now Meta), Google-owned YouTube, Twitter (now X), Snap, TikTok owner ByteDance, Discord, Reddit and Meta-owned WhatsApp. (Amazon founder Jeff Bezos owns The Washington Post.)
FTC employees wrote that the report described “general findings,” noting that not all of them applied to every company in every instance. Still, agency staffers identified numerous pervasive patterns that they said exposed users to harm or left them in the dark about how their data was being used to make money for the companies.
According to the report, the companies have collected troves of data on users and nonusers, often in “ways consumers might not expect,” and many of the guardrails put in place to protect that information were erected only in response to global regulations. While the companies are increasingly mining that data to launch AI products, the agency found, consumers typically lacked “any meaningful control over how personal information was used” for those products.
The findings, the authors wrote, revealed “an inherent tension between business models that rely on the collection of user data and the protection of user privacy.” The agency’s Democratic leadership has spoken out before against “commercial surveillance” practices they say have come to dominate Silicon Valley.
Kate Sheerin, Discord’s head of public policy for the United States and Canada, called the report’s focus on consumers “an important step.” But she said it “lumps very different models into one bucket and paints [with] a broad brush, which might confuse consumers and portray some platforms, like Discord, inaccurately.”
Google spokesman José Castañeda said the company “has the strictest privacy policies in our industry,” including restrictions against using sensitive data to serve ads and personalizing ads to users under 18.
Meta, which owns Facebook and WhatsApp, had no immediate comment. Spokesmen for the other companies did not respond to requests for comment.
An FTC official, who briefed reporters on the condition of anonymity to discuss the findings, declined to comment on how the study might shape the agency’s enforcement but said it showed that many of the issues ran much deeper than expected.
According to the report, many of the companies studied “bury their heads in the sand when it comes to children” on their sites. Many claimed that because their products were not directly targeted at children and their policies did not allow children on their sites, they knew nothing of children being present on them. “This is not credible,” agency staffers wrote.
Child safety advocates have long expressed concern that under the existing federal child privacy law, the Children’s Online Privacy Protection Act, or COPPA, companies can avoid accountability by claiming not to have knowledge that children are accessing their sites.
Concerns about companies failing to protect younger users were particularly pronounced for teens, whom many platforms simply treated like “traditional adult users” and typically did not afford the protections given to young children, the agency wrote.
The FTC official declined to comment on Instagram’s newly released safety tools for teens but said companies cannot be relied upon to regulate themselves.
The report recommended that Congress both pass comprehensive federal privacy legislation covering all consumers and expand existing guardrails for children to apply to teens.
Since the study began four years ago, the social media market has become more fractured and decentralized as upstarts such as TikTok challenge long-standing leaders and as platforms such as Telegram cater to increasingly niche audiences. Asked whether the agency’s analysis was still relevant, the FTC official said it was difficult to obtain information from the internet companies even with the agency’s investigative authority.
The official added that the highlighted practices are tied to the companies’ business models, which have not changed.
While the study began during the Trump administration, the FTC under Khan has dialed up its enforcement against the tech sector over data privacy and child safety complaints, including by launching sprawling efforts to update privacy regulations.
The study’s release arrives as lawmakers at the federal and state levels push to pass expanded protections for children’s privacy and safety. Dozens of states have passed laws to that effect over the past year, and a key House committee advanced a pair of bills Wednesday that would mark the most significant update to child online safety laws in decades.
But those efforts face opposition from tech industry and business groups that say they trample on users’ free speech rights, force companies to collect more data and stifle innovation.
(c) Washington Post