
Russian-Bought Facebook Ads Aren't The Only Way Elections Can Be Swayed & I'm Shook


The House Intelligence Committee released thousands of Facebook ads on Thursday, May 10, that were paid for by a Russian agency in an effort to sway the 2016 U.S. election and stir dissent, according to published reports. Nearly two years after the presidential election, new revelations continue to emerge about the scope of Russian influence in U.S. elections, but even more worrisome is how much room technology leaves for manipulating information. Russian-bought Facebook ads aren't the only way elections can be swayed, and it may be more important than ever to scrutinize your experience on the internet, especially as elections take place across the country this year.

The release of the ads last Thursday comes as Facebook is still tweaking its privacy policies and political advertising restrictions ahead of upcoming elections, not just in the U.S. but around the world. Before September 2017, when Facebook revealed that 470 accounts had purchased 3,000 ads for more than $100,000 over two years, it had repeatedly denied that Russians used the platform for political purposes. Then there was the Cambridge Analytica bombshell, in which it came to light that millions of users' Facebook data had reportedly been shared with the data-mining company in an apparent attempt to identify voters via social media and influence their behavior. Cambridge Analytica denied those claims and has since filed for bankruptcy. The May 2 statement posted to the company's website reads, in part,

Over the past several months, Cambridge Analytica has been the subject of numerous unfounded accusations and, despite the Company’s efforts to correct the record, has been vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas ... Despite Cambridge Analytica’s unwavering confidence that its employees have acted ethically and lawfully ... the siege of media coverage has driven away virtually all of the Company’s customers and suppliers.

It's certainly not the best time for online trust, but Facebook has made a visible effort to fix these vulnerabilities.

"Now, I wish I could tell you that we're gonna be able to stop all interference, but that just wouldn't be realistic," Facebook CEO Mark Zuckerberg said in a September 2017 video address. "There will always be bad actors in the world and we can't prevent all governments from all interference. But we can make it harder. We can make it much harder. And that's what we're going to focus on doing."

But it's not just Facebook that carries this kind of influence; several other companies you've probably heard of play a role in handling personal information for advertising purposes, as Mashable reported. Personal data and advertising have become tightly intertwined, as companies show "interest-based" ads informed by the data you provide to services like Facebook, Google, or Amazon. For example, on Wednesday, May 9, Google announced it would suspend all political advertising related to Ireland's abortion referendum, set for the end of the month, to avoid influencing the vote.

"Google is walking a very fine line,” David Yoffie, a professor at the Harvard Business School, told NBC News on Thursday. “Search, plus Android, gives Google amazing insight into individual behavior. Google’s stated privacy policies seem adequate, but the question that I cannot answer is whether Google’s stated policy and actual behavior are one and the same. Facebook had a stated policy for the last three years which most of us found acceptable, until Cambridge Analytica came to light.”


So what's at stake going forward? Well, a lot more than online ads.

Aviv Ovadya, chief technologist at the Center for Social Media Responsibility and an MIT graduate who has worked for tech companies like Quora, told BuzzFeed that social media and other algorithmically driven online services are incredibly vulnerable to manipulation. He didn't paint a pretty picture of what that manipulation could do to democracy.

One of the biggest threats Ovadya outlined was "reality apathy," a sort of super-sized version of what some of those Russian-bought Facebook ads attempted in 2016. The sheer bombardment of politically charged or falsified information on the internet can blur the line between fact and fiction so severely, Ovadya said, that anyone can make it look like "anything has happened, regardless of whether or not it did."

For example, in a YouTube demonstration of artificial intelligence (AI), machine-learning software generates a fake speech by President Barack Obama. The program creates a completely believable phony video of Obama by scanning archival footage of him talking, regardless of whether those videos are high-resolution. This kind of technology could pose far greater threats than Facebook ads, for example by convincing an entire country that an event happened when it didn't. It could also draw on public information about your friends or loved ones to craft phony but believable personalized messages for phishing scams.

Google also debuted an eerily human AI voice on Tuesday, May 8, that can make phone calls for you to complete rudimentary tasks like scheduling appointments. Some technology ethicists raised concerns about AI that can deceive humans, and honestly, that's when I fully descended into futuristic-nightmare territory.

The Future of Humanity Institute (yes, that's its real name) at the University of Oxford noted in a February 2018 report that AI technologies are advancing rapidly, and so are the ways they can be used maliciously. Like the new iteration of the Google assistant, AI already powers plenty of technology many of us use every day, including automatic speech recognition, machine translation, spam filters, and search engines. In the near future, the report says, AI will enable driverless cars, digital assistants for nurses and doctors, and drones for expediting disaster relief operations. None of these are inherently bad things. The problem is that these forms of tech can evolve so quickly that damage can be done before they can be regulated.

The report found that AI is putting more psychological distance between "bad actors" and the crimes they intend to commit, which gives more incentive for crimes like election-tampering. "If an actor knows that an attack will not be tracked back to them, and if they feel less empathy toward their target and expect to experience less trauma, then they may be more willing to carry out the attack," the report states. Even spookier, the report makes clear that many of the digital threats to our institutions and democratic systems are not yet known to us.

The nightmare scenarios are quick to emerge, but that's also because basic internet tools like online ads have already been shown to have a significant impact. What's more, lawmakers don't seem to have a grasp on how to handle any of this. In April, ABC's Jimmy Kimmel Live aired a sketch featuring clips of senators questioning Zuckerberg, and while the questions are funny, it's kind of scary to think that, as far as the government is concerned, these people are currently the last line of defense against cyber attacks.

It will be important to watch what steps, if any, the government and powerful tech companies take to put guardrails around these new technologies, especially this year as the midterm elections play out. Clearly, these threats are emerging and go way beyond Facebook advertisements. I just hope our leaders can get ahead of these unanticipated problems before it's too late.