François Chollet. "What worries me about AI".

Apr 02, 2019 11:06

Interesting thoughts from F. Chollet, the father of Keras, one of the deep learning frameworks:
https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704
An anti-Facebook article, one might say.

Excerpt:
Human behavior as an optimization problem

In short, social network companies can simultaneously measure everything about us, and control the information we consume. And that’s an accelerating trend. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A large subset of the field of AI - in particular “reinforcement learning” - is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the target at hand - in this case, us. By moving our lives to the digital realm, we become vulnerable to that which rules it - AI algorithms.
[Figure: a reinforcement learning loop for human behavior]
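To make that loop concrete, here is a minimal sketch of such a closed loop as a naive epsilon-greedy bandit. This is my own illustration, not code from the article; the candidate items, the reward signal, and the class name are all invented placeholders.

import random

# Naive epsilon-greedy loop: serve an item, observe a behavioral signal,
# update the estimate of how well that item steers the target.
class EngagementOptimizer:
    def __init__(self, candidate_items, epsilon=0.1):
        self.items = list(candidate_items)
        self.epsilon = epsilon
        self.value = {item: 0.0 for item in self.items}  # estimated reward per item
        self.count = {item: 0 for item in self.items}

    def choose(self):
        # Explore occasionally; otherwise exploit the best-looking item so far.
        if random.random() < self.epsilon:
            return random.choice(self.items)
        return max(self.items, key=lambda item: self.value[item])

    def update(self, item, reward):
        # Incremental mean update, as in a basic multi-armed bandit.
        self.count[item] += 1
        self.value[item] += (reward - self.value[item]) / self.count[item]

# Each step: show content, measure the reaction, feed it back into the loop.
optimizer = EngagementOptimizer(["story_a", "story_b", "story_c"])
for _ in range(1000):
    shown = optimizer.choose()
    reward = random.random()  # stand-in for an observed reaction (clicks, dwell time, opinion shift)
    optimizer.update(shown, reward)

The point of the sketch is only that the loop is closed: the system's next choice depends on how the target reacted to the previous one.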

This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack:

Identity reinforcement: this is an old trick that has been leveraged since the very first ads in history, and still works just as well as it did the first time. It consists of associating a given view with markers that you identify with (or wish you did), thus making you automatically side with the target view. In the context of AI-optimized social media consumption, a control algorithm could make sure that you only see content (whether news stories or posts from your friends) where the views it wants you to hold co-occur with your own identity markers, and inversely for views the algorithm wants you to move away from.
Negative social reinforcement: if you make a post expressing a view that the control algorithm doesn’t want you to hold, the system can choose to only show your post to people who hold the opposite view (maybe acquaintances, maybe strangers, maybe bots), and who will harshly criticize it. Repeated many times, such social backlash is likely to make you move away from your initial views.
Positive social reinforcement: if you make a post expressing a view that the control algorithm wants to spread, it can choose to only show it to people who will “like” it (it could even be bots). This will reinforce your belief and put you under the impression that you are part of a supportive majority.
Sampling bias: the algorithm may also be more likely to show you posts from your friends (or the media at large) that support the views it wants you to hold. Placed in such an information bubble, you will be under the impression that these views have much broader support than they do in reality (a toy sketch of this appears after the list).
Argument personalization: the algorithm may observe that exposure to certain pieces of content, among people with a psychological profile close to yours, has resulted in the sort of view shift it seeks. It may then serve you with content that is expected to be maximally effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally-effective content from scratch, specifically for you.
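As a toy illustration of the sampling-bias vector above, here is a sketch of a feed ranker that over-weights posts supporting a target view. It is my own example, not the article's; the post fields and the bias factor are invented.

import random

def build_feed(posts, target_view, feed_size=10, bias=5.0):
    """Select a feed where posts supporting `target_view` are `bias` times
    more likely to be shown than posts opposing it."""
    weights = [bias if post["view"] == target_view else 1.0 for post in posts]
    return random.choices(posts, weights=weights, k=feed_size)

posts = [
    {"id": 1, "view": "pro", "text": "..."},
    {"id": 2, "view": "con", "text": "..."},
    {"id": 3, "view": "pro", "text": "..."},
    {"id": 4, "view": "con", "text": "..."},
]
feed = build_feed(posts, target_view="pro")
# Most of the feed now supports "pro", so the reader sees an inflated consensus.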

From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched; they are just the way we work. They're in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

And an interesting remark:
In short, I think it’s because we’re really bad at AI. But that may be about to change.

Until 2015, all ad targeting algorithms across the industry were running on mere logistic regression. In fact, that’s still true to a large extent today - only the biggest players have switched to more advanced models. Logistic regression, an algorithm that predates the computing era, is one of the most basic techniques you could use for personalization. It is the reason why so many of the ads you see online are desperately irrelevant. Likewise, the social media bots used by hostile state actors to sway public opinion have little to no AI in them. They’re all extremely primitive. For now.
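For context, this is roughly what logistic-regression-based ad targeting looks like: a minimal sketch with invented features and synthetic data, not any company's actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Each row is one (user, ad) pair: [age_bucket, past_click_rate, minutes_on_site]
X = rng.random((1000, 3))
# Synthetic label: "clicked", mostly driven by the past click rate feature.
y = (X[:, 1] + 0.3 * rng.random(1000) > 0.6).astype(int)

model = LogisticRegression()
model.fit(X, y)

# Predicted click probability for a new user decides whether to serve the ad.
new_user = np.array([[0.4, 0.8, 0.2]])
click_probability = model.predict_proba(new_user)[0, 1]
print(f"estimated click probability: {click_probability:.2f}")

A linear model over a handful of hand-picked features is about as simple as personalization gets, which is Chollet's point: the crude targeting we see today is nowhere near the ceiling of what the technology allows.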

Machine learning and AI have been making fast progress in recent years, and that progress is only beginning to get deployed in targeting algorithms and social media bots. Deep learning only started making its way into newsfeeds and ad networks in 2016. Who knows what will come next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. When your product is a social newsfeed, what use are you going to make of natural language processing and reinforcement learning?

We’re looking at a company that builds fine-grained psychological profiles of almost two billion humans, that serves as a primary news source for many of them, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it scares me. And consider that Facebook may not even be the most worrying threat here. Ponder, for instance, China’s use of information control to enable unprecedented forms of totalitarianism, such as its “social credit system”. Many people like to pretend that large corporations are the all-powerful rulers of the modern world, but what power they hold is dwarfed by that of governments. If given algorithmic control over our minds, governments may well turn into far worse actors than corporations.

Now, what can we do about it? How can we defend ourselves? As technologists, what can we do to avert the risk of mass manipulation via our social newsfeeds?