Dele Atanda said that he was not afraid of robots or artificial intelligence (AI).
“I am afraid of the people who own the robots and the people who task the robots with achieving certain goals,” he said.
Atanda knows something about AI. He is the founder and CEO of metaMe (New York), which he says is the world’s first self-sovereign AI and clean data marketplace.
“I do believe that technology, ultimately, is for the betterment of mankind. That’s the role technology should serve, and I definitely think that smart cities could be incredible,” he said.
“But you have to be cognizant of the huge dystopic threat that is lurking, and almost imminent, in that regard.”
Atanda made his remarks at Propelify Innovation Festival 2020, during a discussion titled “AI and Smart Cities and the Impact on Privacy” with Aaron Price, Propelify founder and the president and CEO of TechUnited:NJ (New Brunswick). The panel took place on Oct. 8 during a day themed as #BetterConnected.
In short, metaMe is focused on providing people with a personal AI and digital identity designed to serve their needs. It is a universal wallet for managing your money and personal data, as well as a marketplace for selling data to brands for money and rewards, or in exchange for tailored products and services, he said.
But safeguards must be put in place to protect humanity and the ability to operate ethically and progressively, steering clear of “animalistic tendencies” driven by the economics of greed, as well as the implicit biases that could lead to the “Dark Ages” of AI, he said.
“It is important that we not only design systems with a profit principle in mind, but also with a particular ethical social agenda. And if we encode this into our designs, then, yes, I think they should be a net positive,” he said.
AI starts with an algorithmic framework: processing engines trained on data sets. Decisions are based on that data, which can carry inherent social consequences and amplify existing social bias, Atanda said.
“So, if you feed it more of a particular type of data set, it’s going to become better at observing patterns within those data sets,” he said.
“If, for example, you feed a facial recognition algorithm with a lot of Caucasian faces, it’s going to understand different types of Caucasian faces.
“But if it is not fed with faces of African-Americans or people of color, then it won’t be able to make accurate predictions when it encounters people of color.
“If you train an AI on that sort of incomplete data, and you don’t give it guidance around how to use that data and understand its shortcomings, then it’s going to accept that that is the social norm, and amplify that in its decision-making processes, making decisions based off of that,” he said.
This type of bias and discrimination has been shown to negatively affect AI systems used to:
- allocate parole time, when people of color are given longer parole sentences;
- regulate how self-driving cars interact with pedestrians, when vehicles failed to recognize people of color; and
- analyze images of crime suspects, when dark-skinned people have been incorrectly targeted as suspects.
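The mechanism Atanda describes can be sketched in a few lines. The following is a hypothetical toy illustration (not metaMe's system or any real recognition model): a one-feature classifier whose decision threshold is fit only on "Group A" examples, standing in for a training set that under-represents people of color. When the same threshold is applied to "Group B," whose positive examples follow a different feature distribution, accuracy drops, even though the model was never told to treat the groups differently.

```python
import random

random.seed(0)  # deterministic synthetic data for the illustration

def sample(group, positive, n):
    """Draw n synthetic feature values for one group/class.

    Positives for Group A cluster near 0.8; for Group B near 0.55,
    mimicking a feature the model learned only from Group A faces.
    Negatives for both groups cluster near 0.2.
    """
    center = (0.8 if group == "A" else 0.55) if positive else 0.2
    return [min(1.0, max(0.0, random.gauss(center, 0.1))) for _ in range(n)]

# Training data: Group A only -- the skewed data set.
train_pos = sample("A", True, 500)
train_neg = sample("A", False, 500)

# Fit the decision threshold midway between the training class means.
threshold = (sum(train_pos) / len(train_pos)
             + sum(train_neg) / len(train_neg)) / 2

def accuracy(group):
    """Evaluate the fixed threshold on fresh data from one group."""
    pos = sample(group, True, 500)
    neg = sample(group, False, 500)
    correct = sum(x > threshold for x in pos) + sum(x <= threshold for x in neg)
    return correct / 1000

acc_a = accuracy("A")  # high: the model was trained on this group
acc_b = accuracy("B")  # lower: this group was absent from training
print(f"Group A accuracy: {acc_a:.0%}")
print(f"Group B accuracy: {acc_b:.0%}")
```

Nothing in the code is malicious; the disparity comes entirely from what the training data did and did not contain, which is the point Atanda is making about incomplete data sets.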
“So, how does AI make our lives better or worse? And how does it have an impact on smart cities?” Price asked.
Atanda responded, “There’s a big promise that AI will give us much more intelligent services, hyper personalization, reliable results very quickly, with much greater efficiency, much better utilization of resources generally.
“But, of course, we’ve seen that there is a dark side to AI, on a service level,” he added. “If AI’s biases are not addressed, we’re going to be looking at more and more social injustice.”