Artificial Intelligence and Ethics — Our Final Invention?

Francis Cordor
June 19, 2019 03:40 pm

Ever notice that Google seems to know exactly what you are thinking, presenting relevant keyword terms you hadn’t even fully formulated for yourself yet? What about the sponsored ads in your newsfeed – are they becoming more interesting to you? Are retailers sending you coupons a few days before you run out of a given item? Seems everything is getting smarter and more automated, doesn’t it?

You’re not imagining it. These are examples of artificial intelligence (AI) in action. AI is alive and well today — and growing. We hear about self-driving cars, and may have even encountered one on the road. Our own cars have accident avoidance systems. Some can even park themselves. Our homes have Siri, Alexa, and Cortana, each willing to lend a helping hand when asked. Chatbots appear on our screens, letting us know they’re there should we need them; while some are staffed by human customer service agents, many are powered by artificial intelligence.

While there’s a certain “gee whiz” factor when you ask Alexa to play you a song or turn on the lights, there’s also a nagging feeling of “what have we done?” After all, we’ve seen plenty of science fiction movies where artificial intelligence goes horribly awry. We can’t help but wonder just how far artificial intelligence can go, and whether, as James Barrat asks in his book Our Final Invention, this might be our last invention.

Barrat is not the first to ponder the limits of AI. Sci-fi writer Isaac Asimov gave the ethics of artificial intelligence a lot of thought many decades ago. In fact, Asimov introduced the “Three Laws of Robotics” in his 1942 short story “Runaround.” These laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Though the laws make a neat plot device, there’s some debate as to whether they have any bearing on artificial intelligence in the real world. For example, an article on Gizmodo, Why Asimov’s Three Laws of Robotics Can’t Protect Us, summarizes an AI theorist’s views on the inadequacy of the three laws as follows:

“They are inherently adversarial; based on a known flawed ethical framework (deontology); rejected by researchers; and fail even in fiction.”

While it’s smart to consider, and find ways to avoid, the possibility of dangerous AI behaviors such as assassination and warfare, there are many other far-reaching ethical challenges AI developers face. The World Economic Forum lists nine of them:

  1. Unemployment — Automation is great, until it kills your job. What will happen when self-driving shuttles, cars, and trucks rule the road? A lot of bus drivers, taxi drivers, and truck drivers will be out of work. What happens when chat bots can accurately answer questions and solve customer problems? A lot of customer service representatives will be out of work, as will everyone else who supports massive call centers.
  2. Inequality — The wealth gap will likely become wider in a post-work society. Those who develop and own artificial systems will reap the bounty while those who have lost the ability to generate an income will probably suffer as a result.
  3. Humanity — Do you feel silly thanking Siri or Alexa? Maybe you don’t bother to anymore because they aren’t real. Social graces could suffer. Meanwhile, machines and systems can become addictive by activating the reward centers in our brains. Developers are already using A/B testing and artificial intelligence to get us hooked on things like video games and click bait headlines.
  4. Artificial stupidity — You’ve heard the phrase garbage in, garbage out. What if the machine learning that takes place starts with garbage? A self-driving car that can’t tell the difference between a stop sign and a bus stop sign would be both stupid and deadly.
  5. Racist robots — Likewise, what if the developer, or the data a system learns from, is biased or outright racist? A model trained on skewed data can reproduce and even amplify that bias.
  6. Security — We’ve heard of hackers taking over unsecured webcams and Internet of Things devices. What would happen if criminals were to hack AI devices — or an armed drone for that matter?
  7. Unintended consequences — Humans can understand context, but artificial intelligence, not so much. What if we tasked a system with solving a given problem, such as eradicating cancer, and it solved it by wiping out mankind? Problem solved, but not in the way we intended.
  8. Singularity — What if artificial intelligence systems become so advanced that humans no longer have the advantage? Singularity is often at the heart of those previously mentioned sci-fi stories. It’s a horrifying prospect, and one that could make AI our last invention.
  9. Robot rights — It sounds crazy at first, but we are building systems that learn through both reward and aversion — just like humans and animals. All those robot rebellions in the movies? Didn’t they all start because robots were treated like second class citizens or oppressed in some way?

While we have a long way to go before we must worry about robots’ feelings and their rights (though Alexa may disagree), there are plenty of ethical concerns that need to be addressed now. In his review of James Barrat’s Our Final Invention, Seth Baum writes, “Ultimately, the risk from AI is driven by the humans who design AI, and the humans who sponsor them, and other humans of influence.”

In other words, whether AI becomes our final invention comes down to us. The ethical choices we make now can help mitigate the problems associated with automation and AI: widespread unemployment, inequality, threats to our humanity, artificial stupidity, bias, security, unintended consequences, singularity, and perhaps even robot rights.

Featured Photo: Courtesy of WCCS Computers