Saturday, January 30, 2021

Counterpoint — Yale Law Professor Emily Bazelon Provides A Discussion Of The Internet Freedom Of Speech Controversy Without Ideological Bias

Not without trepidation, this blogger posted an essay yesterday that examined the Internet freedom of speech controversy with a right-of-center orientation. Yesterday's essay was Point and today's essay (below) is Counterpoint. If this is (fair & balanced) adherence to free and open discussion of an issue in a democracy, so be it.

[x NY Fishwrap 'Zine]
Why Is Big Tech Policing Speech? Because The Government Isn’t
By Emily Bazelon


In the months leading up to the November election, the social media platform Parler attracted millions of new users by promising something competitors, increasingly, did not: unfettered free speech. “If you can say it on the streets of New York,” promised the company’s chief executive, John Matze, in a June CNBC interview, “you can say it on Parler.”

The giants of social media — Facebook, Twitter, YouTube, Instagram — had more stringent rules. And while they still amplified huge amounts of far-right content, they had started using warning labels and deletions to clamp down on misinformation about COVID-19 and false claims of electoral fraud, including in posts by President Trump. Conservative figures, including Senator Ted Cruz, Eric Trump and Sean Hannity, grew increasingly critical of the sites and beckoned followers to join them on Parler, whose investors include the right-wing activist and heiress Rebekah Mercer. The format was like Twitter’s, but with only two clear rules: no criminal activity and no spam or bots. On Parler, you could say what you wanted without being, as conservatives complained, “silenced.”

After the election, as Trump sought to overturn his defeat with a barrage of false claims, Matze made a classic First Amendment argument for letting the disinformation stand: More speech is better. Let the marketplace of ideas run without interference. “If you don’t censor, if you don’t — you just let him do what he wants, then the public can judge for themselves,” Matze said of Trump’s Twitter account on the New York Times podcast “Sway.” “Just sit there and say: ‘Hey, that’s what he said. What do you guys think?’”

Matze was speaking to the host of “Sway,” Kara Swisher, on January 7 — the day after Trump told supporters to march on the US Capitol and fight congressional certification of the Electoral College vote. In the chaos that followed Trump’s speech, the American marketplace of ideas clearly failed. Protecting democracy, for Trump loyalists, had become a cry to subvert and even destroy it. And while Americans’ freedoms of speech and the press were vital to exposing this assault, they were also among its causes. Right-wing media helped seed destabilizing lies; elected officials helped them grow; and the democratizing power of social media spread them, steadily, from one node to the next.

Social media sites effectively function as the public square where people debate the issues of the day. But the platforms are actually more like privately owned malls: They make and enforce rules to keep their spaces tolerable, and unlike the government, they’re not obligated to provide all the freedom of speech offered by the First Amendment. Like the bouncers at a bar, they are free to boot anyone or anything they consider disruptive. In the days after January 6, they swiftly cracked down on whole channels and accounts associated with the violence. Reddit removed the r/DonaldTrump subreddit. YouTube tightened its policy on posting videos that called the outcome of the election into doubt. TikTok took down posts with hashtags like #stormthecapitol. Facebook indefinitely suspended Trump’s account, and Twitter — which, like Facebook, had spent years making some exceptions to its rules for the president — took his account away permanently.

Parler, true to its stated principles, did none of this. But it had a weak point: It was dependent on other private companies to operate. In the days after the Capitol assault, Apple and Google removed Parler from their app stores. Then Amazon Web Services stopped hosting Parler, effectively cutting off its plumbing. Parler sued, but it had agreed, in its contract, not to host content that “may be harmful to others”; having promised the streets of New York, it was actually bound by the rules of a kindergarten playground. In a court filing, Amazon provided samples of about 100 posts it had notified Parler were in violation of its contract in the weeks before the Capitol assault. “Fry ’em up,” one said, with a list of targets that included Nancy Pelosi and Chuck Schumer. “We are coming for you and you will know it.” On January 21, a judge denied Parler’s demand [PDF] to reinstate Amazon’s services.

It’s unlikely the volume of incendiary content on Parler could rival that of Twitter or Facebook, where groups had openly planned for January 6. But Parler is the one that went dark. A platform built to challenge the oligopoly of its giant rivals was deplatformed by other giants, in a demonstration of how easily they, too, could block speech at will.

Over all, the deplatforming after January 6 had the feeling of an emergency response to a wave of lies nearly drowning our democracy. For years, many tech companies had invoked the American ethos of free speech while letting disinformation and incitement spread abroad, even when it led to terrible violence. Now they leapt into action as if, with America in trouble, American ideals no longer applied. Parler eventually turned to overseas web-hosting services to get back online.

“We couldn’t beat you in the war of ideas and discourse, so we’re pulling your mic” — that’s how Archon Fung, a professor at Harvard’s Kennedy School of Government, put it, in expressing ambivalence about the moves. It seemed curiously easier to take on Trump and his allies in the wake of Democrats’ victories in the Senate runoffs in Georgia, which gave them control of both chambers of Congress along with the White House. (Press officers for Twitter and Facebook said no election outcome influenced the companies’ decisions.) And in setting an example that might be applied to the speech of other groups — foreign dissidents, sex-worker activists, Black Lives Matter organizers — the deplatforming takes on an ominous cast.

Fadi Quran, a campaign director for the global human rights group Avaaz, told me he, too, found the precedent worrying. “Although the steps may have been necessary to protect American lives against violence,” he said, “they are a reminder of the power big tech has over our information infrastructure. This infrastructure should be governed by deliberative democratic processes.”

But what would those democratic processes be? Americans have a deep and abiding suspicion of letting the state regulate speech. At the moment, tech companies are filling the vacuum created by that fear. But do we really want to trust a handful of chief executives with policing spaces that have become essential parts of democratic discourse? We are uncomfortable with government doing it; we are uncomfortable with Silicon Valley doing it. But we are also uncomfortable with nobody doing it at all. This is a hard place to be — or, perhaps, two rocks and a hard place.

When Twitter banned Trump, he found a seemingly unlikely defender: Chancellor Angela Merkel of Germany, who criticized the decision as a “problematic” breach of the right to free speech. This wasn’t necessarily because Merkel considered the content of Trump’s speech defensible. The deplatforming troubled her because it came from a private company; instead, she said through a spokesman, the United States should have a law restricting online incitement, like the one Germany passed in 2017 to prevent the dissemination of hate speech and fake news stories.

Among democracies, the United States stands out for its faith that free speech is the right from which all other freedoms flow. European countries are more apt to fight destabilizing lies by balancing free speech with other rights. It’s an approach informed by the history of fascism and the memory of how propaganda, lies and the scapegoating of minorities can sweep authoritarian leaders to power. Many nations shield themselves from such anti-pluralistic ideas. In Canada, it’s a criminal offense to publicly incite hatred “against any identifiable group.” South Africa prosecutes people for uttering certain racial slurs. A number of countries in Europe treat Nazism as a unique evil, making it a crime to deny the Holocaust.

In the United States, laws like these surely wouldn’t survive Supreme Court review, given the current understanding of the First Amendment — an understanding that comes out of our country’s history and our own brushes with suppressing dissent. The First Amendment did not prevent the administration of John Adams from prosecuting more than a dozen newspaper editors for seditious libel or the Socialist and labor leader Eugene V. Debs from being convicted of sedition over a speech, before a peaceful crowd, opposing involvement in World War I. In 1951, the Supreme Court upheld the convictions of Communist Party leaders for “conspiring” to advocate the overthrow of the government, though the evidence showed only that they had met to discuss their ideological beliefs.

It wasn’t until the 1960s that the Supreme Court enduringly embraced the vision of the First Amendment expressed, decades earlier, in a dissent by Justice Oliver Wendell Holmes Jr.: “The ultimate good desired is better reached by free trade in ideas.” In Brandenburg v. Ohio, that meant protecting the speech of a Ku Klux Klan leader at a 1964 rally, setting a high bar for punishing inflammatory words. Brandenburg “wildly overprotects free speech from any logical standpoint,” the University of Chicago law professor Geoffrey R. Stone points out. “But the court learned from experience to guard against a worse evil: the government using its power to silence its enemies.”

This era’s concept of free speech still differed from today’s in one crucial way: The court was willing to press private entities to ensure they allowed different voices to be heard. As another University of Chicago law professor, Genevieve Lakier, wrote in a law-review article last year [PDF], a hallmark of the 1960s was the court’s “sensitivity to the threat that economic, social and political inequality posed” to public debate. As a result, the court sometimes required private property owners, like TV broadcasters, to grant access to speakers they wanted to keep out.

But the court shifted again, Lakier says, toward interpreting the First Amendment “as a grant of almost total freedom” for private owners to decide who could speak through their outlets. In 1974, it struck down a Florida law requiring newspapers that criticized the character of political candidates to offer them space to reply. Chief Justice Warren Burger, in his opinion for the majority, recognized that barriers to entry in the newspaper market meant this placed the power to shape public opinion “in few hands.” But in his view, there was little the government could do about it.

Traditionally, conservatives have favored that libertarian approach: Let owners decide how their property is used. That’s changing now that they find their speech running afoul of tech-company rules. “Listen to me, America, we were wiped out,” the right-wing podcaster Dan Bongino, an investor in Parler, said in a Fox News interview after Amazon pulled its services. “And to all the geniuses out there, too, saying this is a private company, it’s not a First Amendment fight — really, it’s not?” The law that prevents the government from censoring speech should still apply, he said, because “these companies are more powerful than a de facto government.” You needn’t sympathize with him to see the hit Parler took as the modern equivalent of, in Burger’s terms, disliking one newspaper and taking the trouble to start your own, only to find no one will sell you ink to print it.

One problem with private companies’ holding the ability to deplatform any speaker is that they’re in no way insulated from politics — from accusations of bias to advertiser boycotts to employee walkouts. Facebook is a business, driven by profit and with no legal obligation to explain its decisions the way a court or regulatory body would. Why, for example, hasn’t Facebook suspended the accounts of other leaders who have used the platform to spread lies and bolster their power, like the president of the Philippines, Rodrigo Duterte? A spokesman said suspending Trump was “a response to a specific situation based on risk” — but so is every decision, and the risks can be just as high overseas.

“It’s really media and public pressure that is the difference between Trump coming down and Duterte staying up,” says Evelyn Douek, a lecturer at Harvard Law School. “But the winds of public opinion are a terrible basis for free-speech decisions! Maybe it seems like it’s working right now. But in the longer run, how do you think unpopular dissidents and minorities will fare?”

Deplatforming works, at least in the short term. There are indications that in the weeks after the platforms cleaned house — with Twitter suspending not just Trump but some 70,000 accounts, including many QAnon influencers — conversations about election fraud decreased significantly across several sites. After Facebook reintroduced a scoring system to promote news sources based on its judgment of their quality, the list of top performers, usually filled by hyperpartisan sources, featured CNN, NPR and local news outlets.

But there’s no reason to think the healthier information climate will last. The very features that make social media so potent work both to the benefit and the detriment of democracy. YouTube, for instance, changed its recommendation algorithm in 2019, after researchers and reporters (including Kevin Roose at The New York Times) showed how it pushed some users toward radicalizing content. It’s also telling that, since the election, Facebook has stopped recommending civic groups for people to join. After January 6, the researcher Aric Toler at Bellingcat surfaced a cheery video, automatically created by Facebook to promote its groups, which imposed the tagline “community means a lot” over images of a militia brandishing weapons and a photo of Robert Gieswein, who has since been charged in the assault on the Capitol. “I’m afraid that the technology has upended the possibility of a well-functioning, responsible speech environment,” the Harvard law professor Jack Goldsmith says. “It used to be we had masses of speech in a reasonable range, and some extreme speech we could tolerate. Now we have a lot more extreme speech coming from lots of outlets and mouthpieces, and it’s more injurious and harder to regulate.”

For decades, tech companies mostly responded to such criticism with proud free-speech absolutism. But external pressures, and the absence of any other force to contain users, gradually dragged them into the expensive and burdensome role of policing their domains. Facebook, for one, now has legions of low-paid workers reviewing posts flagged as harmful, a task gruesome enough that the company has agreed to pay $52 million in mental-health compensation to settle a lawsuit by more than 10,000 moderators.

Perhaps because it’s so easy to question their motives, some executives have taken to begging for mercy. “We are facing something that feels impossible,” said Jack Dorsey, Twitter’s chief executive, while being grilled by Congress last year. And Facebook’s founder and chief executive, Mark Zuckerberg, has agreed with lawmakers that the company has too much power over speech. Two weeks after suspending Trump, Facebook said its new oversight board, an independent group of 20 international experts, would review the decision, with the power to make a binding ruling.

Zuckerberg and Dorsey have also suggested openness to government regulation that would hold platforms to external standards. That might include, for example, requiring rules for slowing the spread of disinformation from known offenders. European lawmakers, with their more skeptical free-speech tradition (and lack of allegiance to American tech companies), have proposed requiring platforms to show how their recommendations work and giving users more control over them, as has been done in the realm of privacy. Steps like these seem better suited to combating misinformation than eliminating, as is often suggested, the immunity platforms currently enjoy from lawsuits, which directly affects only a narrow range of cases, mostly involving defamation.

There is no consensus on a path forward, but there is precedent for some intervention. When radio and television radically altered the information landscape, Congress passed laws to foster competition, local control and public broadcasting. From the 1930s until the 1980s, anyone with a broadcast license had to operate in the “public interest” — and starting in 1949, that explicitly included exposing audiences to multiple points of view in policy debates. The court let the elected branches balance the rights of private ownership with the collective good of pluralism.

This model coincided with relatively high levels of trust in media and low levels of political polarization. That arrangement has been rare in American history, and it’s hard to imagine a return to it. But it’s worth remembering that radio and TV also induced fear and concern, and our democracy adapted and thrived. The First Amendment of that era aided us. The guarantee of free speech is for democracy; it is worth little, in the end, apart from it. ###

[Emily Bazelon is a staff writer for The New York Times Magazine and a former senior editor at Slate. Bazelon is also a senior research scholar in Law and Truman Capote Fellow for Creative Writing and Law at Yale Law School. Her 2019 book, Charged: The New Movement to Transform American Prosecution and End Mass Incarceration, won the Los Angeles Times Book Prize in the current-interest category. Before that, Bazelon wrote Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of Character and Empathy (2013). She is a graduate of Yale College (BA, English) and Yale Law School (JD) and was an editor of the Yale Law Journal.]

Copyright © 2021 The New York Times Company

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2021 Sapper's (Fair & Balanced) Rants & Raves