CEOs of Facebook, Twitter and Google testify before Congress about disinformation

House Energy and Commerce Committee members are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey about their platforms’ efforts to contain unfounded claims of electoral fraud and vaccine skepticism. Opaque algorithms that prioritize user engagement and promote misinformation are also expected to come under examination, according to a memo from the committee.

Technology platforms, which had already faced intense pressure to combat misinformation and foreign interference before the 2020 elections, were subjected to further scrutiny in the months that followed. Even though some of the companies implemented new measures to crack down on electoral conspiracy theories, it was not enough to stop President Donald Trump’s hardline supporters from storming the United States Capitol.

The hearing also marks the CEOs’ first appearance before Congress since Trump was banned or suspended from their respective platforms after the Capitol riot. In their prepared remarks, some of the executives address the events of January 6 head-on.

“The attack on Capitol Hill was a horrific attack on our values and our democracy, and Facebook is committed to helping law enforcement bring the insurrectionists to justice,” Zuckerberg’s testimony says. But Zuckerberg also adds: “We do more to address misinformation than any other company.”

The hearings coincide with legislation under active consideration in the House and Senate to rein in the technology industry. Some bills target companies’ economic dominance and alleged anti-competitive practices. Others focus on the platforms’ approach to content moderation or data privacy. The various proposals could introduce strict new requirements for technology platforms, or expose them to greater legal liability in ways that may reshape the industry.

For the high-profile executives, Thursday’s session may also be their last chance to make their case in person to lawmakers before Congress embarks on potentially sweeping changes to federal law.

At the center of the coming political battle is Section 230 of the Communications Decency Act, the signature liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which has been interpreted broadly by the courts and is credited with enabling the development of the open internet.


The CEOs’ written testimony ahead of the high-profile hearing on Thursday outlines areas of potential common ground with lawmakers, indicating where the companies intend to work with Congress – and where Big Tech wants Congress to back off.

Zuckerberg proposes narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, in which online platforms could be sued over user content if the companies fail to comply with certain best practices established by a third party.
The other two CEOs do not wade into the Section 230 debate or discuss the role of government with such granularity, but they do offer their own visions for content moderation. Pichai’s testimony calls for clearer content policies and for giving users a way to appeal content decisions. Dorsey’s testimony reiterates his calls for more user-driven content moderation, along with better settings and tools that let users personalize their online experience.

By this point, the CEOs have considerable experience testifying before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November on content moderation, and before that, Zuckerberg and Pichai testified before the House last summer on antitrust issues.

In the days leading up to Thursday’s hearing, the companies have argued that they acted aggressively to beat back misinformation. Facebook said on Monday that it removed 1.3 billion fake accounts last fall and now has more than 35,000 people working on content moderation. Twitter said this month that it would begin applying warning labels to misinformation about the coronavirus vaccine, and that repeated violations of its Covid-19 policies could lead to permanent bans. YouTube said this month that it removed tens of thousands of videos containing false information about Covid-19 vaccines and, in January, after the Capitol riot, announced it would restrict channels that share false claims disputing the outcome of the 2020 election.

But these claims of progress are unlikely to appease committee members, whose memo cited several research papers indicating that misinformation and extremism remain widespread on the platforms.
