Q&A on the Ethics of AI

Earlier this month, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted an intriguing discussion on the Ethics of Artificial Intelligence (AI). Our experts, Rob Enderle, Founder of The Enderle Group, and Eric Hibbard, Chair of the SNIA Security Technical Work Group, shared their experiences and insights on what it takes to keep AI ethical. If you missed the live event, it is available on-demand along with the presentation slides at the SNIA Educational Library.

As promised during the live event, our experts have provided written answers to the questions from this session, many of which we did not have time to get to.

Q. The webcast cited a few areas where AI as an attacker could make a potential cyber breach worse. Are there also some areas where AI as a defender could make cybersecurity or general welfare more dangerous for humans?

A. Indeed, we addressed several different scenarios where AI operates at a speed of thought and reaction much faster than a human's. One area we didn't address is the impact of AI on general cybersecurity. Phishing attacks using AI are getting more sophisticated, and an AI that can compromise systems with cameras or microphones can pick up significant amounts of information from users.

As we continue to automate responses to attacks, there could be situations where an attacker is misidentified and an innocent person is charged by mistake. AI operates at large scale, sometimes making decisions on data that is not apparent to humans looking at the same data. This might cause an issue where an AI believes a human is in the wrong in ways that we could not otherwise see. An AI might also overreact to an attack. For instance, on noticing an attempt to hack into a company's infrastructure, shutting that infrastructure down out of an abundance of caution could leave workers with no power, lights, or air conditioning. Some water-cooling systems will burst if shut down suddenly, which could cause both safety issues and severe damage.

Q. What are some of the technical and legal standards currently in place that are trying to regulate AI from an ethics standpoint? Are legal experts actually familiar enough with AI technology and bias training to make informed decisions?

A. The legal community is definitely aware of AI. As an example, the American Bar Association Science and Technology Law Section's (ABA SciTech) Artificial Intelligence & Robotics Committee has been active since at least 2008. ABA SciTech is currently planning its third National Institute on Artificial Intelligence (AI) and Robotics for October 2021, in which AI ethics will figure prominently. That said, case law on AI ethics/bias in the U.S. is still limited, but it is expected to grow as AI becomes more prevalent in business decisions and operations.

It is also worth noting that international standards on AI ethics/bias either exist or are under development. For example, the IEEE 7000 Standards Working Groups are already developing standards for the future of ethical intelligent and autonomous technologies. In addition, ISO/IEC JTC 1/SC 42 is developing AI and Machine Learning standards that include ethics/bias as an element.

Q. The webcast talked a lot about automated vehicles and the work done by companies in terms of safety as well as in terms of liability protection. Is there a possibility that these two conflict?

A. In the webcast we discussed the fact that autonomous vehicle safety requires a multi-layered approach that could include connectivity in-vehicle, with other vehicles, with smart city infrastructure, and with individuals' schedules and personal information. This is obviously a complex environment, and current liability processes make it difficult for companies and municipalities to work together without encountering legal risk.

For instance, let's say an autonomous car sees a pedestrian in danger and could place itself between the pedestrian and that danger, but it doesn't because the resulting accident could expose the vehicle's maker to liability. Or, hitting ice on a corner, the car turns control over to the driver so that the driver is clearly responsible for the accident, even though the autonomous system could be more effective at reducing the chance of a fatal outcome.

Q. You didn’t discuss much on AI as a teacher. Is there a possibility that AI could be used to educate students, and what are some of the ethical implications of AI teaching humans?

A. An AI can scale to individually-focused custom teaching plans far better than a human could. However, AIs aren't inherently unbiased, and if they're corrupted through their training, they will perform consistently with that training. If the training promotes unethical behavior, that is what the AI will teach.

Q. Could an ethical issue involving AI become unsolvable by current human ethical standards? What is an example of that, and what are some steps to mitigate that circumstance?

A. Certainly. Ethics are grounded in rules, and those rules aren't consistent and are in flux. These two conditions make it virtually impossible to assure that an AI is truly ethical, because the related standard is fluid. Machines like immutable rules; ethics rules aren't immutable.

Q. I can’t believe that nobody’s brought up HAL from Arthur C. Clarke’s 2001 book. Wasn’t this a prototype of AI ethics issues?

A. We spent some time on this at the end of the session, where Jim mentioned that our "Socratic forebears" were some of the early science fiction writers such as Clarke and Isaac Asimov. We discussed Asimov's Three Laws of Robotics and how Asimov and others later theorized how smart robots could get around the three laws. In truth, there have been decades of thought on the ethics of artificial intelligence, and we're fortunate to be able to build on that as we address what are now real-world problems.
