Machines don't learn, as I've previously mentioned.
https://philosophy.livejournal.com/2045332.html
Then what are they doing when they appear to be learning?
Well, they hack. They hack the activity of learning and thinking by trying everything and failing their way through it really fast, so fast that they appear to be apprehending the meaning of the activity instead of processing the arbitrarily assigned symbols associated with it. If you look at what those machines do at each step, it becomes obvious what the activity they engage in actually is.
If Chinese Rooms are "rooms that appear to understand Chinese" then Learning Rooms are "rooms that appear to learn".
https://www.alphr.com/artificial-intelligence/1008697/ai-learns-to-cheat-at-qbert-in-a-way-no-human-has-ever-done-before
In the case of "learning to identify pictures", machines are shown a coupla hundred thousand to millions of pictures of pretty much everything, and through lots of failures of seeing "gorilla" in bundles of "not gorilla" pixels they eventually get to correctly matching bunches of pixels on the screen to the term "gorilla"... except that they don't even do it that well all of the time.
https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
Needless to say, "increasing performance at identifying gorilla pixels" is hardly the same thing as "learning what a gorilla is".
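To see what that per-step activity actually is, here's a deliberately dumb sketch in Python. Everything in it is made up for illustration: the four-number "pictures", the make_picture helper, the label 1 standing for "gorilla". Real systems chew through millions of photos with enormous networks, but the step-by-step activity is the same sort of thing: guess, get an error number back, nudge the weights, repeat tens of thousands of times really fast.

    import random

    # A made-up "picture" is just 4 numbers; label 1 means "gorilla".
    def make_picture(is_gorilla):
        base = [0.8, 0.2, 0.7, 0.1] if is_gorilla else [0.1, 0.9, 0.2, 0.8]
        return [x + random.uniform(-0.3, 0.3) for x in base], int(is_gorilla)

    weights = [0.0, 0.0, 0.0, 0.0]
    bias = 0.0

    # The whole "learning" activity: guess, get told how wrong the guess was,
    # nudge the numbers a little, repeat tens of thousands of times, very fast.
    for step in range(20000):
        pixels, label = make_picture(random.random() < 0.5)
        guess = sum(w * p for w, p in zip(weights, pixels)) + bias
        error = label - (1.0 if guess > 0 else 0.0)
        weights = [w + 0.01 * error * p for w, p in zip(weights, pixels)]
        bias += 0.01 * error

    # At no step above does anything "know what a gorilla is"; there is only
    # arithmetic on pixel numbers and an error signal.
    pixels, _ = make_picture(True)
    print("guess for a 'gorilla' picture:",
          sum(w * p for w, p in zip(weights, pixels)) + bias > 0)

Nothing in that loop ever touches "what a gorilla is"; swap the label for any other arbitrarily assigned symbol and the loop neither notices nor cares.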
Mitigating this dumb sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything.
https://medium.com/@harshitsikchi/towards-safe-reinforcement-learning-88b7caa5702e
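Here's a hedged sketch of what that prodding amounts to, in the dumbest possible terms. The actions, the reward, and the "allowed" list below are all invented for this illustration; actual safe-RL work constrains exploration in far more elaborate ways than a hardcoded whitelist. The machine still just fails through options at speed, only now over a pruned set.

    import random

    ALL_ACTIONS = list(range(100))                  # "absolutely everything"
    ALLOWED = [a for a in ALL_ACTIONS if a < 50]    # an artificially prodded smaller subset

    def reward(action):
        # Made-up payoff: the machine never knows what an action "means";
        # it only sees a number come back after trying it.
        return -abs(action - 42) + random.uniform(-1, 1)

    def fail_through(actions, tries=10000):
        # Trial and error at speed: try, score, keep the best, repeat.
        best, best_score = None, float("-inf")
        for _ in range(tries):
            a = random.choice(actions)
            score = reward(a)
            if score > best_score:
                best, best_score = a, score
        return best

    print("trying absolutely everything:", fail_through(ALL_ACTIONS))
    print("trying a smaller subset:     ", fail_through(ALLOWED))

Same hack, smaller haystack.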
It's no wonder Go masters are quitting. There's no point in trying to go up against that kind of dumb crap that flies at light speed.
https://www.bbc.com/news/technology-50573071