By Yaël Eisenstat

Special to The Washington Post

I joined Facebook in June 2018 as its “head of Global Elections Integrity Ops” in the company’s business integrity organization, focused specifically on political advertising. I had spent much of my career working to strengthen and defend democracy — including freedom of speech — as an intelligence officer, diplomat and White House adviser. Now I had the opportunity to help correct the course of a company that I viewed as playing a major role in one of the biggest threats to our democracy.

A year and a half later, as the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing for misinformation in pages and organic posts.

Because we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered, political ads should have faced an even higher bar for integrity than what people were posting as organic content.

Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my persistence, and I was accused of “creating confusion.” My leadership in the business integrity organization rejected some of the proactive solutions my team was building to try to solve highly consequential problems.

Ultimately, I was not empowered to do the job I was hired to do, and I left within six months. So, I don’t know if anybody up the chain ever considered our proposals to combat misinformation in political ads. Based on the company’s current policy allowing politicians to lie in ads, it seems they did not.

As we now know, paid advertising was just a small fraction of the Russian activities ahead of our 2016 presidential election, and social media affects civil discourse and warps democracy in many other ways. But how the company decides to handle the current controversy is the biggest test for whether it will ever truly put society and democracy ahead of profit and ideology. This is a very real tension at Facebook: I repeatedly saw passionate and thoughtful work in my own group not make it past the few voices who ultimately decided the company’s overall direction.

Free political speech is core to our democratic principles, and it’s true that social media companies should not be the arbiters of truth. But Facebook and other companies use our behavioral data to target, and potentially manipulate, us through advertising. The only way they can prevent abuse of their platforms to harm our electoral process is to end their most egregious targeting and amplification practices and to provide real transparency. Until they volunteer — or are forced by government — to do so, I now believe they should halt political advertising.

Banning political ads will create problems of its own. But allowing candidates to spread disinformation using sophisticated targeting tools cannot be the only alternative.

The “culture of fear,” nasty political campaigns and amplified extreme voices are not new in American society. But these platforms have fueled and exacerbated them at a new scale — exploiting our emotional biases to keep our eyeballs on their screens, vacuuming up our data and selling targeting tools to advertisers — and in doing so have tilted the playing field toward the most salacious and fanatical voices.

Whether politicians should be allowed to run false ads, who should decide whether claims are true and who should govern the internet are important questions that society will continue to debate. But those are different matters from whether companies should profit by providing potent information warfare tools that let political advertisers target us with disinformation. The answer there is clear: We can’t afford to let them anymore.

— Yaël Eisenstat is a visiting fellow at Cornell Tech and a former elections integrity head at Facebook, CIA officer, and White House adviser.