There has been a lot of discussion of artificial intelligence in the media recently.
Stephen Hawking,
Elon Musk, and
Bill Gates have all made statements about the possible dangers of superintelligent AI in the future. Others, such as
Jerry Kaplan and
Robin Hanson, feel that much of the concern is overblown or misplaced, though they still see AI as raising serious problems that need to be addressed. Even Nick Bostrom, whose
book is among the primary writings on the subject, agrees that mental pictures of "Terminator scenarios" are unproductive.
The current state of machine intelligence is well below the level of general intelligence that is popularly associated with the idea of AI. But the consequences of AI are already huge. Some are personal, such as
Facebook broadcasting information about the recently deceased. Some are insensitive, such as
Microsoft's ill-fated Tay or
Google Photos tagging black people as gorillas. Others are world-shaking, like the
36-minute, trillion-dollar "Flash Crash" in 2010 or
the use of autonomous weapons by NATO.
The common thread through these incidents is that machine intelligence is exploitable, complex, and unpredictable, and that research in machine intelligence generally
does not draw on previous ethical traditions, perhaps because of its position in industry rather than academia.
It's easy to be intimidated by the prospect of machine learning, and adding highly impactful and complicated ethical considerations certainly doesn't make things seem any easier.
Fortunately, many of the existing problems are engineering problems. The 2010 Flash Crash did not sink the broader economy, and Google's self-driving car continues to be relatively safe despite
occasional incidents. Designing learning systems that behave safely and promote the common good is possible, and there are practical steps one can take to keep automated systems in line.
Much of the process of humanizing digital systems is contextual, but here are a few possible starting points for considering the context and impact of machine learning systems:
- Accommodate complex data that may not meet your expectations (see the validation sketch after this list).
- Explore incorrectly classified data points, and where possible preserve fairness across data classes (see the error-analysis sketch after this list).
- Ask how data is gathered, for what purpose, and whether that process is appropriate to your context.
- Consider different levels of autonomous operation for systems--when and how do humans intervene?
- Review similar previous projects to see where they ran into issues and what best practices they put forth.
- Differentiate between metrics and goals. A self-driving car may use similarity to a human driver as a metric, but its goal is to avoid accidents.
- Make conscious choices regarding the tradeoffs between performance and interpretability.
- Define properties of your dataset so that it is clear when a learner will or will not be reliable (see the last sketch after this list).
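For the first item, here is a minimal sketch in Python of defensive input validation. The field names and ranges are hypothetical; the point is to surface unexpected data rather than silently coerce it.

```python
# Hypothetical expected ranges for numeric fields; adjust to your own schema.
EXPECTED_RANGES = {"age": (0, 120), "income": (0, 10_000_000)}

def validate_record(record: dict) -> list:
    """Return a list of problems with a record instead of silently coercing it."""
    problems = []
    for field, (lo, hi) in EXPECTED_RANGES.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
        elif not isinstance(value, (int, float)) or not (lo <= value <= hi):
            problems.append(f"unexpected value for {field}: {value!r}")
    return problems
```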
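For exploring misclassifications and watching error rates across classes of data, one possible sketch, assuming labels, predictions, and a hypothetical "group" column are collected in a pandas DataFrame:

```python
import pandas as pd

def error_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize error rates per group so that disparities are easy to spot."""
    errors = df["label"] != df["prediction"]
    return (
        df.assign(error=errors)
          .groupby("group")["error"]
          .agg(error_rate="mean", n_examples="count")
    )

def misclassified(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows the model got wrong, for manual inspection."""
    return df[df["label"] != df["prediction"]]
```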
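And for the last item, a sketch of one way to make reliability explicit: flag inputs that look unlike anything seen during training. The z-score threshold here is illustrative, not a standard.

```python
import numpy as np

def outside_training_distribution(x, train_mean, train_std, z_threshold=4.0):
    """Flag feature vectors that are far from the recorded training statistics."""
    z_scores = np.abs((np.asarray(x) - train_mean) / (train_std + 1e-9))
    return bool(np.any(z_scores > z_threshold))
```

Checks like these are not extra machinery bolted onto a model; they are part of the same data exploration and evaluation work the list already describes.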
This may seem like an intimidating list, and it is far from complete. But every step in this process, from data exploration to evaluating a solution's robustness, is already a necessary part of machine learning. There is no separation between bridge design experts and bridge safety experts--building a reliable system that fulfills its goal is an integral part of building any system in the first place.