An “eraser button”? Focused Ideas Could Help Bridle Big Tech

Washington: Break up Big Tech? Strip tech companies of their shield against liability in cases where the content they push to users causes harm? Or create a new regulator to strictly oversee the industry?

Those ideas have drawn official attention in the US, Europe, the UK and Australia as controversy surrounds Facebook (which on Thursday renamed itself Meta), Google, Amazon and other giants. Revelations of deep-seated problems, surfaced by former Facebook product manager Frances Haugen and backed by a slew of internal company documents, have sparked legislative and regulatory efforts.

But while regulators are still contemplating major moves like breaking up some companies or limiting their acquisitions, the most realistic changes may be more modest and less grandly ambitious, and also ones that people can actually see popping up in their social feeds.

That’s why lawmakers are getting creative as they introduce several bills intended to take Big Tech down a peg. One bill proposes an eraser button that would allow parents to quickly delete all personal information collected from their children or teens. Another proposal would ban specific features such as video auto-play, push alerts, like buttons and follower counts for children under 16. Also being floated are a prohibition on collecting personal data from anyone between the ages of 13 and 15 without their consent, and a new digital bill of rights for minors that would likewise limit the collection of personal data from teens.

For online users of all ages, personal data is paramount. It’s at the heart of social platforms’ lucrative business model: collecting data from their users and using it to sell personalized ads aimed at pinpointing specific consumer groups. Data is the financial lifeblood of a social network giant like Facebook, now Meta, which is valued at about $1 trillion. Almost all of its revenue comes from ad sales, which totaled about $86 billion last year.

That means a proposed law targeting personal data collected from young people could affect the bottom line of social media companies. On Tuesday, executives from YouTube, TikTok and Snapchat offered in-principle support during a congressional hearing on child protection, but would not commit to backing the legislation already on the table. Instead, they offered boilerplate Washington lobbyist-speak, saying they looked forward to working with Congress on the matter. Translation: they want to water down the proposals.

Senators Edward Markey, D-Mass., and Richard Blumenthal, D-Conn., have proposed two bills addressing the safety of children online. They say they are hearing more and more stories of teens who overdosed on opioids obtained online or who died by suicide when their depression or self-loathing was exacerbated by social media.

Among Haugen’s many condemnations of Facebook, her disclosure of the company’s internal research showing that use of the Instagram photo-sharing app appears to harm some teens has resonated most with the public.

When it comes to children, Republican and Democratic lawmakers, otherwise divided over alleged political bias and hate speech on social media, find solid consensus that something needs to be done, and quickly. “One thing that unites Democrats and Republicans is ‘Won’t somebody please think of the children,’” said Gautam Hans, a technology and free-speech expert and professor at Vanderbilt University. “It’s very marketable on a bipartisan basis.”

In the UK, efforts are moving towards tighter regulations to protect social media users, especially young people. Members of the UK Parliament sought guidance from Haugen on how to improve British online safety legislation. She appeared in London before a parliamentary committee on Monday, warning members that time was running out to regulate social media companies that use artificial intelligence to push engaging content to users.

EU privacy and competition regulators have been far more aggressive than their US counterparts in reining in the tech giants. They have fined some companies billions of dollars and adopted sweeping new rules in recent years. The UK this spring established a new regulator to oversee Facebook and Google.

US regulators only kicked into gear in 2019, when the Federal Trade Commission fined Facebook $5 billion and YouTube $170 million in separate cases for alleged privacy violations. Late last year, the US Department of Justice and several states filed historic antitrust lawsuits against Google over its market dominance in online search. The FTC and several states took a parallel antitrust action against Facebook, accusing it of abusing its market power to crush smaller competitors.

Beyond child protection measures, US legislators on both sides of the aisle have drawn up a slew of proposals to crack down on social media: targeting anti-competitive practices by Big Tech companies, possibly ordering breakups, and going after the algorithms the platforms deploy to determine what shows up in a user’s feed.

All these proposals face heavy lifting on the way to final enactment.

For example, the Justice Against Malicious Algorithms Act was introduced by senior House Democrats after Haugen testified about how social media algorithms push extreme content to users and stoke anger to boost engagement. The bill would hold social media companies accountable by removing their shield against liability, known as Section 230, for recommendations that harm users.

Some experts who support stricter regulation of social media say the law could have unintended consequences. It doesn’t spell out clearly enough which specific algorithmic behaviors would lead to a loss of liability protection, they suggest, making it difficult to see how it would work in practice and leading to wide disagreement over what it might actually do.

For example, Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, describes the bill as worded so broadly that, perhaps beyond what its authors intended, it could strip away the liability shield almost entirely. Southern Methodist University First Amendment scholar Jared Schroeder said there is a noble purpose behind the bill, but constitutional free-speech guarantees could hinder any attempt to sue social media platforms.

A spokesperson for Meta, which owns the Facebook service, declined to comment Friday on the legislative proposals. In a statement, the company said it has long advocated for updated regulations, but did not provide specifics.

Meta CEO Mark Zuckerberg has suggested changes that would give legal protection to internet platforms only if they can prove that their systems for identifying illegal content are up to par. That requirement could be harder for smaller tech companies and startups to meet, leading critics to charge that it would ultimately work in Facebook’s favor.

___

This story was originally published on October 31, 2021. It was updated on November 2, 2021 to clarify that Paul Barrett, who teaches a seminar in law, economics and journalism at New York University, holds the position of deputy director of NYU’s Stern Center for Business and Human Rights.
