The Trump administration’s battle with Silicon Valley over content moderation escalated this month when the Justice Department urged Congress to strip some immunity protections from social media platforms for content they host.
The move followed a largely symbolic executive order President Trump signed after Twitter — for the first time — slapped a fact check on two of his false tweets, a move the president tried to frame as a crackdown on conservative speech. On Tuesday, Twitter took action for the fifth time against the president, hiding a Trump tweet threatening “serious force” against protesters who tried to establish an autonomous zone in Washington, D.C., saying it violated Twitter’s rules on abusive behavior.
There is much disagreement within and among social media companies about what to do about politicians’ lies and misleading or false political ads. (Twitter has banned political ads, while Facebook allows them, though it said in mid-June it would soon allow all users to opt out of them. And on Friday, Facebook, faced with a growing advertiser boycott, outlined a broader category of hateful content it would ban in ads.) Four years after the Russians exploited the platforms to sow division and boost Trump’s candidacy, there also remains little consensus about how — or if — Congress should regulate the technologies that remain beset by bots, disinformation and hate speech.
On Wednesday, a congressional hearing on disinformation underscored the dangers of doing nothing.
Rep. Robert Latta, an Ohio Republican, noted that liability protections have allowed social media companies to become the true gatekeepers of the internet, “but too often, they don’t want to take responsibility for the content behind those gates.”
Still, no matter what the Justice Department desires or how much hand-wringing there is on Capitol Hill, it’s unlikely a divided Congress will swiftly make radical changes to Section 230 — the critical portion of the 1996 Communications Decency Act that prevents social media companies from being held liable for content but gives them the power to take down posts and set guidelines.
Social media companies will continue to tangle with the executive branch, and technology leaders may never come up with a consistent logic regarding false content. But that doesn’t mean Congress can’t do anything before November to at least blunt some of the damage of political ad microtargeting. Congress can quickly act to reduce voter exposure to uncontested lies online by passing the Banning Microtargeted Political Ads Act, a smart bill introduced last month by Rep. Anna Eshoo, D-Palo Alto.
Bombarding me with online advertisements for products based on searches I conducted is annoying and creepy, and it poses risks to my personal privacy. But bombarding Americans with specially crafted political ads based on their likes, shares and other identifying characteristics is damaging our democracy.
Research agrees. “Online political campaigns targeting Facebook users by gender, location and political allegiance significantly increased support for Republican candidate Donald Trump,” a 2018 University of Warwick study showed. “The micro-targeted campaigns exploiting Facebook’s profiling tools were highly effective both in persuading undecided voters to support Mr. Trump, and in persuading Republican supporters to turn out on polling day.”
These ads can be laced with falsehoods or designed to enrage. And since journalists, and sometimes even political opponents, can’t see all the microtargeted ads in real time — Trump’s campaign reportedly put forth 5.9 million different ads on Facebook during the 2016 election cycle — they aren’t always able to hold the politicians or political groups distributing them accountable.
“Microtargeting political ads fractures our open democratic debate into millions of private, unchecked silos, allowing for the spread of false promises, polarizing lies, disinformation, fake news, and voter suppression,” Eshoo — whose district lies in the heart of Silicon Valley — said when she introduced the bill. “With spending on digital ads in the 2020 election expected to exceed $1.3 billion, Congress must step in to protect our nation’s democratic process.”
Eshoo, in a letter to her colleagues, points to an example from 2016 when the Trump campaign created an ad that framed Hillary Clinton, the Democratic nominee, as racist and showed it only to African American voters who vote infrequently — and in specific districts where the vote was close — to persuade them to stay home.
The letter cites an October 2016 Bloomberg News story that describes how the ad was “delivered to certain African American voters through Facebook ‘dark posts’ — nonpublic posts whose viewership the campaign controls.” The idea, Bloomberg quoted Trump’s campaign manager Brad Parscale as saying, is to make it so, “only the people we want to see it, see it.”
During last year’s impeachment hearings, thousands of microtargeted ads “flooded the internet, portraying Trump as a heroic reformer cracking down on foreign corruption while Democrats plotted a coup,” the Atlantic reported.
While our election laws prohibit foreigners from buying campaign advertisements, the Russians set up fake accounts and shell companies that made them seem American in order to buy microtargeted ads.
Americans agree this is awful. Seventy-two percent of Americans said internet companies shouldn’t make information about users’ behavior available to political campaigns for the purpose of microtargeting voters, according to a March survey by the John S. and James L. Knight Foundation and Gallup.
Under Eshoo’s bill, political groups and politicians could still target ads to broad geographies like states, municipalities and congressional districts. Voters of all political persuasions would see the ads, and opponents, reporters and public interest groups could evaluate them as well. There would be more accountability. Falsehoods could be corrected.
There’s no reason — except profit — that Facebook needs to make information about users’ behavior available to those who want to use it to microtarget for political purposes. Facebook could avoid being forced into this change by Congress by reforming its political ad microtargeting on its own.
In Friday’s announcement, Facebook CEO Mark Zuckerberg signaled the company would begin to label — and even remove — some politicians’ speech that violates its policies: “Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down.”
While a bold shift for Facebook, the new policy focuses on voter suppression and hate speech and falls short of a broader commitment to fact check politicians’ untruths. Defending that decision last month, Zuckerberg said, “Political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say.”
If Zuckerberg really believes this, then he should reform his political microtargeting ad policy to make it so.
Janine Zacharia, a former Washington Post reporter, is a lecturer in the department of communication at Stanford University.