On Wednesday, Senators Richard Blumenthal (D., Conn.) and Marsha Blackburn (R., Tenn.) will introduce bipartisan legislation, dubbed the Kids Online Safety Act, aimed at holding social-media platforms responsible for harm they cause to children.
The proposed bill would also require tech companies to provide regular assessments of how their algorithms, design features and targeted-advertising systems might contribute to harm to minors, and to offer minors the ability to opt out of algorithmic recommendations.
If this proposal becomes law, it will represent a shift from the immunity that Big Tech companies have enjoyed since the adoption of Section 230 of the Communications Decency Act, which essentially shields internet companies from liability for harmful content posted by their users.
The bill is being introduced with bipartisan support. President Biden has been a longtime supporter of regulating Big Tech, including amending Section 230, so it wouldn't be difficult for him to support this initiative if needed.
The proposed bill would also mandate more transparency around the algorithms that tech companies use to target content to specific users.
“In hearings over the last year, Senator Blumenthal and I have heard countless stories of physical and emotional damage affecting young users, and Big Tech’s unwillingness to change,” said Sen. Blackburn. In the senators’ view, the bill would give children and parents the tools to protect themselves against harmful content.
Regulatory Changes in Content Moderation
For the last few years, after numerous events demonstrated the harmful effects that content posted on social-media platforms can have on individuals and communities, regulators around the world have turned the spotlight on how to make sure that Meta, TikTok, Twitter and others step up their content-moderation efforts.
The testimony of Frances Haugen, the former Facebook employee who revealed that the company tolerated certain content despite evidence of its harmful effects on teenagers, was probably the trigger for policymakers to start drafting or finalizing legislation.
Europe recently approved the Digital Services Act (DSA), which will hold Big Tech companies accountable for illegal content posted on their platforms: they will be required to put mechanisms in place to ensure that such content is removed in a timely fashion. Even content that is legal but harmful must be removed quickly.
Read More: EU Parliament Approves Digital Service Act Targeting Big Tech
The U.K. is also proposing new legislation, the Online Safety Bill, with requirements similar to those of the European DSA, but it adds new criminal offences to ensure that companies do their utmost to remove harmful content.
Unlike the proposed U.S. bill, these two laws are not limited to children: they cover content posted on digital platforms that is harmful to anyone, although they include some additional protections for minors.
China has also taken measures to protect consumers from harmful or undesirable content. The new rules, which take effect on March 1, allow the Chinese government to investigate and change the algorithms of Big Tech companies. The regulation targets how companies’ algorithms recommend content to consumers, in an attempt to limit the negative effects of, among other things, social media and disinformation.
Read More: China’s Rules on Algorithms are Promising; the Devil is in the Details
The main challenge with most of these laws lies in monitoring companies’ compliance with their provisions and in how far the authority in charge can go in requiring changes to companies’ algorithms. But they are a first step toward ending the immunity that internet companies have enjoyed: first by protecting minors, then perhaps gradually extending to everyone else.