Blackburn: Tech Platforms Need an Objective Cop on the Beat

September 24, 2019

WASHINGTON, D.C. – Today, Senator Marsha Blackburn (R-Tenn.) spoke on the Senate floor about the responsibility of tech companies to police extremist content on their platforms while also protecting free speech.


REMARKS AS PREPARED

 

Thank you, Madam President.

 

The internet and social media platforms have transformed the way humans communicate. Correspondence that just a few years ago would have taken paper, pen, and postage is now sent and received with the click of a mouse.

 

Everything happens online, from shopping to party planning to campaigning. We share our photos and milestones with our “friends”—and with the companies that have built multibillion-dollar empires based on their ability to convince us to surrender just one more unique piece of data.

 

Beyond social media, we live our everyday, transactional lives online, too. We bank via apps, sign up for credit cards using codes we receive by email, and manage our finances with cloud-based software.

Information we once would have locked securely in a desk drawer, we now plug into an online form without a second thought.

 

We have contributed to our own “virtual you”—that is, our personal online footprint—by trusting these platforms to keep our data secure.

 

In a way, this level of connectivity and trust has made life a lot easier, but it has also made us vulnerable to exploitation and exposure.

 

I have spoken before about consumers’ justifiable expectation of a right to privacy online.

 

This year I introduced the BROWSER Act in an effort to codify this right. BROWSER gives Big Tech basic guidelines to follow when collecting and selling user data.

 

But protecting an individual’s data is only part of the picture.

 

Last week, the Senate Committee on Commerce, Science, and Transportation held a hearing to address the role digital services play in the distribution of violent and extremist content.

 

We welcomed testimony from Facebook, Twitter, and Google detailing what they’re doing to remove extremist content from their platforms.

 

But I will tell you, before we talk about policing content, we as members of this Body need to make sure we understand how the American people view their use of social media and the internet.

 

Whether social media platforms should be regulated under the First Amendment is beside the point.

 

Americans view these services as open public forums, where they can speak their minds on everything from defense funding to the Emmy Awards.

 

These consumers don’t want the Wild West, nor do they want to be censored based on a content reviewer’s subjective opinion.

 

What they want is an “objective cop on the beat” who is equipped to identify incitement, threats, and other types of speech that could put lives at risk.

 

This, of course, is easier said than done. In the case of Facebook, for example, that translates to creating a set of standards that 30,000 in-house engineers and analysts and 15,000 content reviewers will be able to apply.

 

45,000 people. And that’s just one platform.

 

You know, there is a reason why, time and again, Big Tech executives ask Congress to exert more regulatory control over the way they do business—and it’s this:

 

Policing legitimately dangerous content is a big job—and policing “awful but lawful” content, as Facebook CEO Mark Zuckerberg likes to call it, is an even bigger, more daunting task.

 

It takes 45,000 people to do a bare-minimum job for one company: imagine trying to create easy-to-understand, bright-line standards that 45,000 employees will be able to digest and apply quickly enough to keep up with the flow of content.

 

That has got to be an intimidating task.

 

But I will tell you—if those executives think the government can do a better job deciding, down to the letter, what those standards should be, they are sorely mistaken.

 

Only the engineers and innovators know their companies well enough to set their own internal policies for acceptable uses of their platforms.

 

But that’s not to say I won’t be taking an interest in their ideas.

 

For example: Facebook is in the process of putting together a “content oversight board” to adjudicate appeals from users whose posts have been deemed in violation and taken down.

 

They’ve pledged to make the identities of the moderators and their decisions public—barring any safety risks—and to choose a diverse panel.

 

The biggest unanswered questions here are: will the moderators really reflect the American political spectrum, and how will they be chosen? The American people are surely going to demand more than a promise to be fair and impartial.

 

As I said, government cannot make these decisions for Big Tech—but we can help guide the way.

 

This is where working groups like the Judiciary Committee’s Tech Task Force come into play.

 

Last week, I was speaking to a group of private sector tech gurus, and I told them that the only way we’ll be able to move forward is if the government does more listening, and they do more talking.

 

I stand by what I said.

 

It is not, and should not be, Congress’s job to decide, in retrospect, what sort of culture companies like Facebook and Twitter meant to create.

 

I yield the floor.