As 2020 Election Day nears, technology companies like Facebook are doing a better job than they did in 2016 in fighting the social media misinformation campaigns of foreign adversaries like Russia and Iran, according to Alex Stamos. He should know: Stamos is a former security chief for the world's largest social media platform.
But the cybersecurity expert, who now is director of the Stanford Internet Observatory, says the toughest misinformation threat technology companies face, and can’t solve, is from within the U.S. — disinformation sowed by U.S. politicians, and one figure, in particular.
“What’s changed in 2020 versus 2016 is the massive amount of cooperation between large and small technology platforms and the government. In 2016, it was really no one’s job to think about how Russia or others might use online platforms to cause chaos in the election,” Stamos said at the CNBC Technology Executive Council virtual summit on Thursday.
“Now there is a dedicated group between the government and tech and massive takedowns of Russian assets and also Iran. A huge difference,” he said.
Employees work in Facebook’s “War Room” at the social media giant’s Menlo Park, California, headquarters shortly after launching ahead of the 2018 midterm elections. It’s the nerve center for the fight against misinformation and manipulation by foreign actors trying to influence elections in the United States and elsewhere.
NOAH BERGER | AFP | Getty Images
Stamos also pointed to efforts like the Election Integrity Partnership, with which he works. But making sure voters get the election information they need to make informed decisions is being made harder by the threat from within our own borders.
“The most important disinformation this cycle is coming from domestic sources, and that is a huge problem for technology companies,” Stamos said. “They are loath to wade into democratic processes in the U.S.”
“It’s wonderful tech companies have war rooms and counter-disinformation efforts, but we also need to see leadership from the federal government. Instead, we’re seeing the federal government sowing and spreading misinformation, and it’s very significant in terms of causing nervousness and lack of confidence in the voting process,” said Alexis Wichowski, deputy CTO for innovation in the New York City Mayor’s Office of the CTO, who spoke at the CNBC TEC virtual summit with Stamos.
The former Facebook security chief said collaboration between tech platforms and federal agencies such as the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), which didn’t exist in 2016, along with offensive operations by the military’s Cyber Command, is working against foreign disinformation.
“But at the same time, there is election disinformation out of the White House, out of the Twitter account of the president himself, and then also from these influencers around him trying to amplify election disinformation,” Stamos said. “I’m not sure we have a good solution between now and Election Day.”
He used an early claim of victory in the presidential election as an example of why censorship won’t work.
Claims made online can be labeled or taken down, but if President Trump goes to a White House podium and declares victory, it will be covered by every major news organization.
“There is only so much they can do if the president gets on a podium of the White House. … that is newsworthy and will be run on TV and everywhere and they can’t censor it.”
This week, top executives from the biggest tech companies, including Facebook, Twitter and Alphabet, were grilled by Senate Republicans over claims of bias.
President Trump has railed against technology companies and signed an executive order last May targeting Twitter and others after the social media company fact-checked some of his tweeted claims, which he called “censorship.”
The Senate hearing was focused on Section 230 of the Communications Decency Act, and threats to alter the Act as a response to social media company actions. Facebook and Twitter recently limited access to a New York Post story claiming corrupt actions involving Democratic Party candidate Joe Biden and his son Hunter, which Twitter CEO Jack Dorsey later said was the wrong move.
“Section 230 is one of the most misunderstood laws about the internet, cited by everyone as too important and the cause of lots of behavior. But the truth is the vast majority of stuff people don’t like online is protected under the First Amendment,” Stamos said.
What technology companies can do, Stamos said, is add context, like news organizations would, and he said that makes more sense than trying to block claims entirely.
In the end, Stamos said, individuals need to be very careful about what they look at and where they get information online. They should rely on state and local election officials and authorities, of which there are roughly 8,000 in the U.S., and get information on voting directly from them.
They may even turn to non-traditional sources of information online, and given the situation, Wichowski said that is a reasonable option. “In the absence of leadership from the White House or confusing leadership messaging from all levels of government, people look to whomever they can trust,” Wichowski said.
That can even include celebrities and social media influencers, she added, “if they don’t trust elected officials. … can’t just rely on traditional sources of power.”
The New York City technology officer said one thing citizens should not worry about is the processing of ballots. “From an election perspective, decentralization is a strength. It may take more time to process ballots, but we want to protect system integrity from outside actors, and each region and district has a solid handle on its own slice of the pie, and decentralization is the way to go about it.”
Stamos said after Election Day, one major issue a new Congress should take up is how to structure more federal support for local agencies to make sure disinformation is not promulgated by politicians.