From autonomous vehicles going awry to chatbots generating hate speech, artificial intelligence (AI) systems, which are still evolving, have been shown to be replete with problems.
That has some AI researchers worried about the use of AI in the upcoming U.S. elections, said one such researcher during a panel discussion, “AI and Ethics,” held on January 16 at Princeton University. In addition to AI ethics, the discussion covered natural language processing, clinical responsibility and bias in AI datasets.
“This is going to be a big year for AI and the elections. There are certain reasons the systems are at risk now in this time of technology,” continued panelist Rebecca Mercuri, a leading expert in electronic voting systems who has spent decades researching election technology.
“So, it’s like if it doesn’t work on that one day in November, we are in big trouble. We are doomed,” she said.
“Now computers and the internet provide opportunities to everybody to carry out covert attacks on all aspects of the elections. AI is increasingly becoming an important tool in the election subversion arsenal.”
The panel discussion took place during a meeting of the Princeton Joint ACM/IEEE Computer Society Chapter.
Joining Mercuri on the panel were Robert Krovetz, president of Lexical Research, a Hillsborough company focused on research and development in natural language processing; Lauren Maffeo, associate principal analyst at GetApp (Maryland and Barcelona), who examines the impact of emerging technologies like AI and blockchain on small and midsize businesses; and Casimir Kulikowski, a professor of computer science at Rutgers University and a pioneering researcher who has worked for the past 50 years on pattern recognition, AI and expert systems, and on their biomedical and health informatics applications.
Since 1992, Mercuri has strenuously promoted the use of voter-verified paper ballots as the definitive evidence for recounts. Her company, Notable Software (Philadelphia), provides forensic investigations and expert witness services for contested elections, criminal defense and intellectual property matters.
She has been following the political landscape closely, including the delayed Iowa caucus results and potential headaches in California. She discussed her observations during the meeting and in interviews after the meeting.
“The problems that occurred in Iowa had nothing to do with AI,” she said. “They did, though, appear to result at least in part from the use of uncertified phone app software to convey caucus results. The use of the internet and Wi-Fi/wireless technologies is not appropriate for any election tabulation and vote consolidation processes. Nor should the primary and caucus season be used as a test-bed for new election software, voting or vote-tallying equipment.”
The Iowa Democratic Party reportedly blamed the problem on a coding error in a mobile phone app that precincts used to report results. The app was recording data accurately, but it was reporting out only partial data because of that error in the reporting system.
“The caucus method of primary voting has previously been lauded for its openness and transparency. Tallies from each caucus location should have been hand-recorded, and the results made available within hours of the end of the voting session. This is how caucuses are supposed to work. That totals are still dribbling in days later raises questions as to the integrity of the results and casts doubt on the caucus process,” she said.
“I have serious concerns about the upcoming Super Tuesday primary on March 3rd. The 5 million voters in California’s most populous municipality, Los Angeles, will be using new systems that allow a sample ballot on one’s cell phone to be used to generate a QR code that can then be scanned electronically and cast at any voting center up to 10 days before the election.
“The panoply of issues that this scenario presents, ranging from vote selling to outright hacking and denial of service attacks, is astonishing. That California’s November general election gives all of its 55 presidential electors to the candidate with more than 50% of the votes provides further incentive for shenanigans to take place in a close race.
As November nears, voters can expect to see election fraud cases, Mercuri said. “There is a wide gamut of illegal and illicit election manipulation, including hacking and rigging and all sorts of attacks, so that’s the premise we have with AI, and how we can use AI to do bad things,” such as voter disenfranchisement.
Then there was the Facebook–Cambridge Analytica scandal, Mercuri noted. “In 2014, a Facebook quiz invited users to find out their personality type. The app collected the data of those taking the quiz, but also recorded the public data of their friends because they were on Facebook. So about 305,000 people installed the app, but it gathered information on up to 87 million people, according to Facebook. It claimed that some of that data was sold to Cambridge Analytica, which used it to psychologically profile voters in the U.S. presidential election,” she said.
She said there are recent cases in Europe and the United States in which online questionnaires helped voters pick the candidates most closely aligned with their political views.
What’s next? “Let AI vote for the masses and tell them who they should support,” she quipped.
“If you think the MIT Media Lab doesn’t also want to corner the market on this, they have an automatic democracy project that imagines making citizens responsible for all legislative decisions with digital agents voting on their behalf,” she said.
“So, this is where things are headed if we don’t stop it. So, please stop it!”
This is Part I of a two-part article on the “AI and Ethics” panel discussion.