Here's an Itemized List of Thirty Years of Disagreements

Apr 21, 2016 00:14

I went to a talk by Jerry Kaplan called "AI: Think Again" tonight. I have a lot of thoughts about it, and as the title suggests there are many places I disagree with him. However, I think the majority of those disagreements occur somewhat in the future, or else are stylistic. For example, he said he wouldn't want to watch F1 racing with driverless cars. WHAT THE FUCK, MAN?! That's just crazy talk. Obviously it's different than current F1 racing, but it's awesome. Robot cars, going fast and crashing gloriously.

In terms of the core points, there are a lot of places where I do agree with Dr. Kaplan, and I have some hope of working with him on a course on ethics for workers in machine learning and artificial intelligence. I think approaching these ethical issues as engineering issues, and taking responsibility for them as designers, is very important. I hope I can produce something, perhaps with his help, that will enable many more people to engage productively with the ethical issues involved in building autonomous systems.

All that serious stuff being said, here are my somewhat shorthand notes, and digressions thereupon.

Dr. Kaplan said that he has seen no persuasive evidence that machines are on the path to thinking. Later in the talk he suggested that this is generally not the most interesting question to ask--that really what we are concerned with is whether machines are on the path to needing to be considered as agents and granted moral clout. I think very much that they are, even independent of other issues. Of course this is, at this time, universally NOT because we need to respect the preferences of machines, so much as because we need to respect the preferences of their users. However, I do somewhat believe that humans might just be big old neural networks--and that a neural network built in such a way that it could pass a Turing test would have its own moral worth, and its preferences would need to be considered by society. He expressed after the talk that whole brain emulations are not something he would assign moral worth to--that statement somewhat concerns me! Though I've read enough Robin Hanson to realize that if you just grant legal status to emulations of people you get some bad Malthusian results real quick.

"Where are the robots?" There was some expression that after various barriers are broken by AI, such as Deep Blue, self-driving cars, and Watson, that there is an expectation of robots doing everything. On the one hand, of course automation is generally increasing human capability and reducing crew sizes more than replacing humans outright. On the other hand, say you build a neural network with a hundred hidden layers and plug it into a robot body with cameras and speakers and a microphone and try to teach it like a child. What do you think would happen? It's not commercially interesting but I'm curious philosophically why one would think it's necessarily impossible for this entity to attain sentience.

Dr. Kaplan expressed that while machines can perform the same tasks as humans at or far above human levels, this "doesn't mean that machines are intelligent in the same way as people." I agree that this is true of most AI tasks; however, what about ML models that explicitly build "conceptual understanding" by mixing pre-built underlying models in different ways? How much do we need to know about our models and about our own brains? Is the substrate, or even the computational algorithm (or being written in C versus Java versus Python), actually important in determining whether something is intelligent in the "same way as people"? With tools like LIME becoming available, we may be able to start understanding a bit more of "why" deep neural nets work the way they do, and I certainly can imagine the possibility that they think the way humans do!
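For the unfamiliar, here's roughly what I mean by poking at a model with LIME. This is a minimal sketch: the random-forest classifier and the iris dataset are stand-in choices on my part, not anything discussed in the talk.

```python
# A minimal sketch of using LIME to ask "why" a model made one prediction.
# The dataset/classifier here are placeholders just to make the example runnable.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    discretize_continuous=True,
)

# Explain one prediction: LIME fits a simple local surrogate model around this
# instance and reports which features pushed the prediction where.
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())
```

It won't tell you whether the model "thinks like a person," but it at least gives you a per-prediction story to argue about.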

Dr. Kaplan mentioned that IQ is meaningless--while I agree that it's a bad measure of what we call "intelligence" I'm skeptical of it being meaningless since it's powerfully predictive, even at levels >4 standard deviations away from the mean (though no longer linearly predictive). This is being a bit more pedantic than many of my points though.

Dr. Kaplan noted that in many arenas, incremental progress occurred for many years before breakthroughs, especially Deep Blue and self-driving cars. I suspect much of this was the case with Watson, but this was explicitly NOT the case with AlphaGo. Though a certain amount of this should be credited to Google simply pouring far more resources into the problem than was expected, the fact that these breakthroughs can arrive much faster than anticipated is an especially important concern!

In reference to a discussion in a press release by the creator of the perceptron, Dr. Kaplan said regarding machine translation, "He was right! It was just fifty years later!" Meanwhile, he dismissed the creator's thoughts on other matters. This seems suspect to me; couldn't the other ideas have also been prescient, but simply be delayed to a time when computing power and data availability were more ubiquitous? A day like today?

Dr. Kaplan addressed the idea that neural networks are biologically inspired, saying that airplanes are as well. He said, "we're not worried that 747s will build nests." However, the guiding principles from nature used in a 747 are basic mechanics, and very well understood. We understand neural networks only at a very surface level, and the entire purpose of automating them is to give them the ability to surprise us! They certainly do surprise us, with incidents like labeling black people as gorillas and turning into racist little shits at the drop of ten thousand trolls' hats. While this is a far, far cry from developing their own preferences and rising up against us, within the scope of tasks that machines are capable of they certainly do behave in an unruly fashion a lot of the time--and those scopes are ever expanding.

In reference to HAL 9000 from 2001, Dr. Kaplan said, "How hard is it to say it's not okay to kill people in pursuit of your goals?" My first thought is that, in some circumstances, it WILL be necessary for automated systems to kill people, even if this is not their intended purpose! For example, a self-driving car facing a trolley dilemma. Dr. Kaplan did address that this is important--that in order for automated systems to make truly appropriate moral decisions, it will be necessary for them to know how to evaluate things like human life.

He also said "If you were on the engineering team for Hal 9000, you'd be fired." My thought following this was, what about the team designing Tay? Was anyone fired? Of course, no one died when Tay was released, but it was a CLEAR case of gross incompetence on the part of the team at Microsoft. Many community standards exist for creating Twitter bots, and they were repeatedly ignored. Blacklists were created specifically to safeguard the bot but they were woefully insufficient. In order for this to be comforting, I need to see real consequences visited on real people who are building real projects! Maybe it hasn't been publicized but I don't think anything major really happened to the engineers working on Tay. And then they re-released her and she fucked up again. I guess I just need to continue my twenty year plus trend of not putting any trust in Microsoft engineering. Apologies to my roommate who's an engineer at Microsoft.

Dr. Kaplan said that what is important for autonomous agents is that we teach them to abide by human social conventions--and this is an area in which there is very little research to be built on! I'm happy to point to this paper on teaching reinforcement learning agents social norms through storytelling, but one paper is not enough to make a field of research.
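To give a flavor of what that kind of research looks like in practice, here's a toy sketch of one common approach: shaping an agent's reward to penalize norm violations. The environment, the norm list, and the penalty size are all invented for illustration--this is not the method from the storytelling paper.

```python
# A toy sketch of reward shaping for social norms: the agent gets its usual
# task reward, minus a penalty whenever an action violates a (hand-coded) norm.
# Everything here (the norm list, the penalty size, the Q-learning setup) is
# made up for illustration.
from collections import defaultdict

NORM_VIOLATIONS = {"cut_in_line", "take_without_paying"}
NORM_PENALTY = 10.0

def shaped_reward(task_reward, action):
    """Task reward minus a penalty for socially unacceptable actions."""
    penalty = NORM_PENALTY if action in NORM_VIOLATIONS else 0.0
    return task_reward - penalty

def q_learning_step(q, state, action, task_reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """Standard tabular Q-learning update, but on the norm-shaped reward."""
    reward = shaped_reward(task_reward, action)
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

q = defaultdict(float)
actions = ["wait_in_line", "cut_in_line", "pay", "take_without_paying"]
# e.g. the agent grabbed the item without paying and reached the exit:
q_learning_step(q, "at_register", "take_without_paying", 5.0, "exit", actions)
```

The hard part, of course, is everything this sketch hand-waves: where the norms come from and how the agent recognizes that an action violates one.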

My other thought on that is that properly teaching agents social norms is very much an anthropomorphic analogy, both in terms of the way he phrased it and in terms of... if teaching something social norms isn't a mark of sentience, I'll start getting curious about why I think other humans are sentient. Obviously I exaggerate slightly, but given the amount of shit he talked about anthropomorphizing AI throughout the talk, it made me a bit salty when he started doing it himself.

One concept that I found very interesting from the talk was the idea of a Safe Operating Envelope. For example, when a self-driving car runs out of fuel or is confused by its circumstances, it tries to safely come to a stop. This seems like a great design pattern, and it underlies a lot of ideas that I've seen and approved of in a number of circumstances. That said, I think there are a lot of boundary cases that can make the idea problematic. For example, if a self-driving car exits its SOE on a crowded freeway, it definitely can't just safely come to a stop. If an automated stock trader is participating in a flash crash, how can it really tell this is happening? Obviously it could watch for something like a greater-than-1% move in the overall market over a short window, but I'm a bit concerned that isn't enough. I guess part of this is I'd love to see more about what kinds of warning signals could work well for stock-trading algorithms.
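To make the pattern concrete, here's a minimal sketch of what an SOE check might look like for a trading agent. The 1% threshold, the five-minute window, and the cancel-and-halt fallback are my own guesses at plausible behavior, not anything Dr. Kaplan specified.

```python
# A minimal sketch of the Safe Operating Envelope pattern for a trading agent:
# keep trading while recent market moves stay inside the envelope, otherwise
# fall back to a safe state (here: cancel orders and stop). The 1% threshold
# and five-minute window are invented for illustration.
from collections import deque

class TradingEnvelope:
    def __init__(self, max_move=0.01, window=300):
        self.max_move = max_move            # max tolerated fractional index move
        self.prices = deque(maxlen=window)  # assumes one index price per second

    def within_envelope(self, index_price):
        self.prices.append(index_price)
        if len(self.prices) < 2:
            return True
        move = abs(self.prices[-1] - self.prices[0]) / self.prices[0]
        return move <= self.max_move

def trading_loop(envelope, get_index_price, place_orders, cancel_all_orders):
    while True:
        price = get_index_price()
        if not envelope.within_envelope(price):
            # Out of the envelope: do the trading equivalent of pulling over.
            cancel_all_orders()
            break
        place_orders(price)
```

The awkward part is the same as with the car on the freeway: by the time the envelope check trips during a flash crash, "pulling over" may itself be the destabilizing move.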

Dr. Kaplan mentioned another cool idea I'd like to hear more about regarding licensing for autonomous agents. If a robot is going to give you a massage, it should probably pass some standards to verify that it won't destroy your body--how would you come up with and enforce these standards? How can you interact with enough masseuses to gain the specific subject matter knowledge needed to come up with useful and coherent licenses, without alienating them at the prospect of being replaced by robots?

Dr. Kaplan also mentioned the idea of job mortgages, which I found a bit concerning, as on the face of it it sounds a lot like the student debt crisis. Of course, working at Udacity with the jobs guarantee, I guess we're doing a version of this--one that I'm hoping will benefit workers a lot more than traditional universities do.

He mentioned that AI replaces labor with capital, which drives the value of capital up, which drives the concentration of wealth toward the richest. He then discussed how redistributive economics is necessary in this ecosystem. Of course this wasn't an econ talk, so I guess I shouldn't expect a solid answer as to how this should work. That said, he mentioned after the talk the old program of taking government land on loan and being granted ownership of the land if one works it for seven years. Granting public capital to citizens who put it to use does sound promising--especially in this era, where capital can mean already-existing unoccupied rentals and second-tier refurbished computers rather than the dangerously expansionist land grants of the past that created conflict with indigenous Americans.

Dr. Kaplan said that danger from AI is an engineering problem, not an existential threat to humanity. Why not both? These are (clearly?) not mutually exclusive, especially after the advent of nuclear weapons.

He also mentioned that he believes the future will be more Star Trek and less Terminator. I wonder what metrics he's using to compare our societies. It definitely seems to me that we already live in a crazy cyberpunk dystopia. We are a long way away from Star Trek.

On a number of issues, he referred to laws governing property, corporations, and pets. I can respect that, for building a legal system surrounding autonomous agents, these are good places to start when working out liability and culpability. However, leaning on them for societal solutions feels a bit close to passing the buck to me. I'd much rather see a stronger engineering solution--a set of widely distributed best practices, following from a coherent set of design principles, that can guide us in building machines that will make the future better instead of worse.

Overall I got a reasonable amount out of the talk. A lot of my disagreements are about my being more concerned with taking personal responsibility for these issues than he seems to be. Some of them are about my looking at a greater time horizon than he does. But regardless of all of this, I think we both agree that a set of best practices needs to exist, and that we need to have conversations about them and improve them.

ai, ethics, education, jerry kaplan, machine ethics, industrious, exposition
