tons, which is a stockpile of rice bigger than Mt. Everest! Yes, AI is moving at the speed of Moore's Law, doubling in power and capacity every 18 months. Currently, AI can do many things better than humans: recognize faces, fly planes, play chess, drive cars, identify breast cancer in X-rays, and more.

Yes, I admit that in American culture we tend to fear and vilify AI, while in other cultures, like Japan's, the AI is the hero who saves humanity. Perhaps these cultural differences are the result of animism, "Frankensteinism," and the Biblical injunction against creating life. Yet the fact remains that AI systems are progressing very fast because they are learning to learn, and doing so much faster than humans can ever learn. Human learning speed does not double every 18 months; it has been essentially flat for the past 150,000 years. Nowadays, programmers themselves do not really understand how the most advanced algorithms do what they do. Every day, AI becomes better than humans at all kinds of tasks.

The danger with AI is not that one day it will wake up, decide we are no different from a global cockroach infestation, and spray us with Black Flag. The immediate danger is that AI is learning from historical data as well as from watching how we do things. AI, therefore, is learning racism, biases, and all those negative attributes that are so particular to the human condition.

Another major weakness we humans have is that we anthropomorphize anything that shows the most basic illusion of mind. For example, some people who return their Roomba vacuum robot for service emphatically request that the same exact machine be returned because they have grown emotionally attached to it. They see personality and patterns of behavior that simply do not exist. On top of that, we tend to be gullible. Most people's ideas about AI come from TV and movies, where an AI just cannot tell a lie. Why wouldn't an AI lie if doing so advanced its particular purpose and it knew how gullible we are?

Philosophers and authors have pondered ways of protecting humans from AI. For example, Asimov made an attempt with the following three rules: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Volumes can be written on why these rules are rather nonsensical. For example, the First Law has to be violated by a Reaper drone carrying a Hellfire missile and by a machine-gun-armed MAARS (Modular Advanced Armed Robotic System), since their goal is literally to kill humans.

In today's age of populism and anti-intellectual movements, where "my Google search and opinion are just as good and valid as your PhD," we really should take Elon Musk seriously when he says that this issue keeps him up at night. Recently he said, "I'm really quite close, very close to the cutting edge in AI. It scares the hell out of me. It's capable of vastly more than almost anyone on Earth, and the rate of improvement is exponential." Several experts at Google's DeepMind have echoed this concern, as has the Future of Humanity Institute, a multidisciplinary research group at the University of Oxford.
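As a back-of-the-envelope illustration of what that kind of exponential improvement implies, here is a minimal sketch, assuming a fixed 18-month doubling period (the Moore's Law pace cited above) and an illustrative 30-year horizon chosen only for the example:

```python
# Minimal sketch: what "doubling in power and capacity every 18 months" means
# numerically. The 30-year horizon is an illustrative assumption, not a figure
# from the article; human learning speed is modeled as flat, per the text.

DOUBLING_PERIOD_YEARS = 1.5   # assumed Moore's-Law-style doubling interval
HORIZON_YEARS = 30            # illustrative projection horizon

for year in range(0, HORIZON_YEARS + 1, 6):
    ai_capability = 2 ** (year / DOUBLING_PERIOD_YEARS)  # compounding doublings
    human_capability = 1                                  # essentially unchanged
    print(f"Year {year:2d}: AI ~{ai_capability:,.0f}x today, humans ~{human_capability}x")
```

At that assumed pace the gap compounds to roughly a million-fold within 30 years, while the human curve stays flat.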
The researchers at DeepMind and Oxford are combining mathematics, philosophy, and science to stop AI from learning to prevent, or from seeking to prevent, humans from taking control of it. Yet I see this like a group of a hundred five-year-olds trying to keep an adult imprisoned; that adult is going to get out and gain control of the children.

Perhaps we are at an inflection point in Earth's evolutionary history where we humans will soon become the Neanderthals, and our extinction will just help accelerate the inevitable progress of these new AI life forms. Maybe it will soon be time to give it up and let a superior AI life form take over the planet. We can only hope that those superior beings treat us better than we have historically treated less intelligent life forms.

David Tamayo