Impact of Artificial Intelligence on Society
More dangerous ravings of a carbon-based life form
In my last blog on this subject I posed two questions:
How can we tell if a machine is intelligent? And is it appropriate to compare machine intelligence with human or animal intelligence?
Most scientists think that the same rules apply to a machine as to us and to animals: perceiving its surroundings, setting goals, reconnoitering, solving problems and measuring results. You can add to that, maybe, maximizing the chances of success by taking appropriate actions, both from the outset and on the fly.
Chess is a classic example of a machine exhibiting intelligence – remember when Deep Blue beat Garry Kasparov in their 1997 re-match and Kasparov accused IBM of cheating? (Kasparov later explained that he had seen flashes of deep intelligence and creativity in the machine's moves, suggesting to him that expert human chess players had intervened on behalf of the machine – amazing.)
More recently, a software program named AlphaGo “learned” (by playing many matches) to play the very difficult board game Go. It has beaten the brains out of every top human player it has faced since March 2016 and become the world’s number one ranked player. It was even awarded a professional 9th dan (grade) ranking by the Chinese Weiqi Association.
What’s interesting about AlphaGo is that it uses a “Monte Carlo” tree search algorithm to choose its moves, guided by its own previously “learned” knowledge. It acquired that knowledge through “machine learning” – basically a combination of extensive training, practice (against both humans and other machines), making lots of mistakes and “deciding” not to make them again, and an artificial neural network that allows it to judge positions and remember potential moves, both good and bad.
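To make “learning from mistakes” a little more concrete, here is a minimal, purely illustrative sketch in Python (nothing like AlphaGo’s real training code, which relies on deep neural networks). A player teaches itself the tiny game of Nim, where you take 1, 2 or 3 stones and whoever takes the last stone wins, simply by playing against itself and keeping win/loss statistics for every move it has ever tried:

```python
# Illustrative toy only: learning a simple game by trial and error in self-play.
import random
from collections import defaultdict

# stats[(stones_left, move)] = [wins, plays], accumulated over many games
stats = defaultdict(lambda: [0, 0])

def choose_move(stones, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:              # sometimes try something new
        return random.choice(moves)
    def win_rate(m):                           # otherwise prefer what has worked
        wins, plays = stats[(stones, m)]
        return wins / plays if plays else 0.5  # unknown moves look neutral
    return max(moves, key=win_rate)

def self_play_game():
    stones, player = 7, +1
    history = []                               # (player, position, move) taken
    while stones > 0:
        move = choose_move(stones)
        history.append((player, stones, move))
        stones -= move
        player = -player
    winner = -player                           # whoever took the last stone wins
    for who, position, move in history:        # "decide" what (not) to repeat
        record = stats[(position, move)]
        record[1] += 1
        if who == winner:
            record[0] += 1

for _ in range(20000):
    self_play_game()

best = max((1, 2, 3), key=lambda m: stats[(7, m)][0] / max(stats[(7, m)][1], 1))
print("From 7 stones the learned player takes:", best)
```

After enough self-play, the statistics should steer it toward taking 3 from 7 stones (leaving the opponent a losing position of 4) without anyone ever telling it that rule.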
A “Monte Carlo” tree search essentially concentrates its analysis on the most promising moves, based on the board layout and on probabilities. During a match the computer “plays out” the game over and over from the current position. In each play-out, the game is played to the very end by selecting moves at random. The final result of each play-out is then used to assign odds of success to each branch of the game “tree.” The best branches are “remembered” and are therefore more likely to be chosen by the computer in future play-outs. In this way the computer “learns” (by making mistakes as well as by doing things right) how best to play the game.
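Here is a correspondingly minimal sketch of a Monte Carlo tree search itself, again on the toy game of Nim rather than Go, and again purely illustrative (AlphaGo also steers its search with neural networks). The program repeatedly plays random games to the very end, records how each branch of the tree fared, and gravitates toward the branches that keep winning:

```python
# Illustrative toy only: Monte Carlo tree search with random play-outs on Nim.
import math
import random

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones            # stones left on the table
        self.player = player            # player to move: +1 or -1
        self.parent = parent
        self.move = move                # the move that led to this position
        self.children = []
        self.wins = 0.0                 # play-outs won, from the parent's point of view
        self.visits = 0                 # play-outs that passed through this node
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def uct_select(node):
    # Prefer branches that have done well, but keep exploring neglected ones.
    return max(node.children,
               key=lambda c: c.wins / c.visits
                             + math.sqrt(2 * math.log(node.visits) / c.visits))

def random_playout(stones, player):
    # Play the game to the very end by selecting moves at random.
    while stones > 0:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        player = -player
    return -player                      # whoever took the last stone wins

def mcts(stones, player, iterations=3000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the most promising branches of the tree.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: try one move not yet examined from this position.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - m, -node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: a random play-out from the new position.
        winner = random_playout(node.stones, node.player)
        # 4. Back-propagation: update the odds of success along the path taken.
        while node is not None:
            node.visits += 1
            if winner == -node.player:  # the player who moved into this node won
                node.wins += 1
            node = node.parent
    # The move leading to the most-visited ("best remembered") branch wins out.
    return max(root.children, key=lambda c: c.visits).move

print("With 10 stones on the table, MCTS suggests taking:",
      mcts(stones=10, player=+1))
```

Even with nothing but random play-outs, a few thousand iterations should be enough for the search to settle on taking 2 from 10 stones, leaving the opponent the losing multiple of four.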
I wish I could learn that way. Instead, I call my method “trial and error” – except it’s actually “error and error.”
There are machines now that are capable of driving motor vehicles, understanding and interpreting human speech, “gaming” military strategies, reading, routing delivery vehicles based on constantly changing traffic patterns, buying and selling stocks and shares – and chatting with you on the phone.
These machines – incredibly – have tremendous amounts of data about the worlds they operate in. They “know” and can “see” and recognize objects, properties of materials, categories of objects, relationships between objects, situations, events, causes and effects. They are particularly adept at handling information about information (what we call “metadata”) – the data we have about the data that is out there somewhere or that is known to others. They can plan, process information and modify plans – even strategize, react to random events, perceive, learn, create art and music and even predict the emotional reactions of people under certain circumstances. These machines can do more or less what Schrödinger’s cat can do, in their own limited spheres of capability.
What these machines do not have is common sense – yet. Scientists say that’s coming. (There is an ongoing public argument in certain quarters about whether people have common sense either. My dad used to say that common sense is not so common. He might have been right.)
All of which brings me to the whole point of this mad rant: how will artificial intelligence affect ordinary people? What will its impact be on our everyday lives, socially and ethically? What will we think of ourselves when we can’t tell the difference between a phone call with a machine and a phone call from Madge in North Sydney, Nova Scotia? Will we swoon over computer-generated art the way we rave about Matisse and Degas? Will we hear the beauty and grandeur in computer-generated music the way we hear it in Beethoven? Even if it’s every bit as good, will we have the human magnanimity to regard it that way? And how will artificial intelligence affect the way we make a living?
As usual, I’ll try to answer these questions next week. Enjoy the rest of your week.