AI and Ethics, Part 2: While Solid Applications Exist, Healthy Skepticism of AI a Good Thing, Panelists Say at Princeton Meeting

At a January 16 panel discussion at Princeton University on artificial intelligence (AI), ethics in AI, natural language processing, clinical responsibility and bias in AI datasets, and in subsequent conversations and emails, the panelists made one point very clear: there are serious concerns about how AI systems are being applied.

The panel discussion took place at a meeting of the Princeton Joint ACM/IEEE Computer Society Chapter.

Researchers who study the effects of AI on society pointed to vulnerabilities that could cause such systems, including those used in U.S. elections, to replicate bad human behavior.

The panel included Rebecca Mercuri, a leading expert in electronic voting systems who has spent decades researching election technology and is the founder of Notable Software (Philadelphia), a consulting company specializing in computer forensics, security and expert witness testimony; Robert Krovetz, president of Lexical Research, a Hillsborough company focused on research and development in natural language processing; Lauren Maffeo, associate principal analyst at GetApp (Maryland and Barcelona), who examines the impact of emerging technologies like AI and blockchain on small and midsize businesses; and Casimir Kulikowski, a professor of computer science at Rutgers University and a pioneering researcher who has worked for the past 50 years on pattern recognition, AI and expert systems, and on their biomedical and health informatics applications.

According to Krovetz, many AI systems are guided by the choice of “training data” that may be used in the initial setup of the system. “If we are not careful about the data that is used to train AI systems, we might be training these systems to be unethical, to the detriment of their users and the general public,” he said.

“The system that produced hate speech did so because it replicated unwanted human behavior,” he said.
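Krovetz’s point can be shown with a toy example. The sketch below (hypothetical data and labels, not anything presented at the panel) trains a tiny scikit-learn text classifier on a dataset in which a hostile example was carelessly labeled acceptable; the resulting filter then waves similar hostility through:

```python
# A minimal sketch of how biased training labels propagate into model
# behavior. All texts and labels here are hypothetical illustrations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "have a great day",
    "thanks for your help",
    "this is spam spam spam",
    "buy cheap pills now",
    "you people are terrible",  # hostile, but...
]
labels = ["ok", "ok", "block", "block", "ok"]  # ...carelessly labeled "ok"

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The filter learned exactly what the data taught it: hostility passes.
print(model.predict(["you are all terrible"]))  # -> ['ok']
```

The model is not malicious; it is faithful to mislabeled data, which is precisely the failure mode Krovetz describes.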

He added that “while we should rightly be concerned about the negative consequences of unethical use, we also mentioned positive applications. My favorite is a system developed at the University of Waterloo that can help reduce the number of children and animals that are inadvertently locked in a car.”

Prescribed: A Healthy Dose of Skepticism

Maffeo prescribed a “healthy dose of skepticism about any product that claims to have AI.” She explained that “any AI system is only as good as the data it’s trained on and the teams building it. If the teams’ training techniques and priorities for AI systems aren’t documented, we can’t trust them to produce accurate, ethical results.”

She added, “The lack of AI literacy really concerns me. There is so much hype and confusion about it, which means that even high-level leaders don’t make effective choices about why, when, and how to use it. I also worry that we won’t teach AI skills quickly enough. This runs the risk of keeping wealth concentrated with a few AI developers, while most people get left behind.”
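One concrete, widely discussed form that the documentation Maffeo calls for can take is the “model card,” a short structured record of what a model is for, what data shaped it, and where it is known to fail. The sketch below is a minimal illustration in Python; the class and every field value are hypothetical placeholders, not something the panel produced:

```python
# A minimal model-card sketch: a structured record of training
# provenance and known limits. All values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str  # provenance: where the data came from, and when
    known_limitations: list = field(default_factory=list)
    evaluated_groups: list = field(default_factory=list)

card = ModelCard(
    name="resume-screener-v2",
    intended_use="Rank resumes for first-round review, human in the loop",
    training_data="2015-2019 hiring decisions; may encode past bias",
    known_limitations=["Underperforms on non-US date formats"],
    evaluated_groups=["gender", "age band"],
)
print(card.training_data)
```

Even this much documentation gives an auditor something to check a system against, which is the basis for trust that Maffeo argues is otherwise missing.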

Kulikowski said, “Even in the future, it will be hard to trust AI, [given] the currently ambiguous and often deliberately muddied interpretations of AI.

“Artificial intelligence/machine learning is very different from most previous technologies, as it can be made to work autonomously in a most general way, so that the range of its misuses or misapplications is incredibly amplified.”


He added, “We ought always be suspicious and not have blind faith in any technology, and especially in the current machine-learning versions of AI, which I dub ‘machine guessing,’ and which have many useful applications as long as one understands the limits of just doing pattern matching on large databases with heuristic guesses about the voting schemes involved, and [knows] about the highly uncontrolled data gathered in an uncritical, but sometimes in a deliberately biased and self-serving manner.

“My own research on medical decision-support systems has taught me the key lesson for any AI application: that, like [applications used] in medicine and nursing, it requires responsible human judgment and ethical empathy towards patients — or those being affected by ‘AI’ more generally.

“Machines just cannot and will not share the empathy, nor act in the trustworthy way, that we expect of humans caring for other humans. Whatever our philosophy, the notion that we may be entering an Age of Spiritual Machines is an unscientific and deceptive conjecture, and an outright sham.”
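Kulikowski’s “machine guessing” quip, with its “heuristic guesses about the voting schemes involved,” refers to a real mechanism: many simple pattern matchers each cast a vote, and the majority wins. The sketch below (hypothetical rules and applicant data, not any real system) shows how such a vote turns a handful of shallow, possibly biased rules, here including a ZIP-code proxy, into a confident-looking decision:

```python
# A minimal majority-vote sketch. The three "matchers" and the
# applicant record are hypothetical toy rules, not a real system.
from collections import Counter

def majority_vote(guesses):
    """Return the most common label among the individual guesses."""
    return Counter(guesses).most_common(1)[0][0]

applicant = {"income": 40_000, "age": 23, "zip": "08540"}
guesses = [
    "approve" if applicant["income"] > 30_000 else "deny",
    "deny" if applicant["age"] < 25 else "approve",        # age proxy
    "approve" if applicant["zip"] == "08540" else "deny",  # ZIP proxy
]
print(majority_vote(guesses))  # -> 'approve', by 2 votes to 1
```

No rule here understands the applicant; the ensemble simply tallies guesses, which is why Kulikowski insists that responsible human judgment must sit on top of such systems.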

Part 1 of this story, “AI and Ethics Part 1: Will Vulnerable AI Disrupt the 2020 Elections?” appeared here.
