So, when things go wrong and your pig cannot fly, what do you do? You try to explain why it failed and fix it, right? Well, that is the rub. The hard part of using AI approaches is the explainability issue, and that has slowed the adoption of AI for critical tasks where errors or failures have severe consequences. The internal architecture of an AI model is so complex that it cannot be explained the way traditional software bugs can. When you debug traditional software, you can find the lines of code that caused the wrong output and fix them. With AI, it is not so simple. You have to reevaluate (retrain) your model with examples it can learn from so that it discriminates the way you want it to. In general, the more training data you have, the better your AI will perform.

If you decide to use AI, you will very quickly learn that the quantity and quality of your data are most of the effort. Data is so important that there is a plethora of tools for cleansing, validating, munging, tracking, balancing, morphing, and synthesizing it. Most companies quickly realize that their data is not organized or complete enough to begin an AI effort.

I would be remiss not to discuss the latest advances in GPT models, specifically ChatGPT, developed by OpenAI. The holy grail of AI is artificial general intelligence (AGI): the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can. If you interact with ChatGPT, you will be amazed at how fast it answers your questions with in-depth information and lets you follow up with more questions while it understands the topic being discussed. Google searches are good, but each consecutive search does not consider what you asked five seconds ago; each search is independent. The more you use ChatGPT, however, the more you will start to see errors. It seems brilliant one minute, and the next it says something you suspect is false. You will soon learn that these approaches "hallucinate." They are so determined to answer your question that they make things up! Why does it do that? Because it does not understand what it is saying; it is only predicting the most probable next word, given the previous sequence of words. It is like knowing someone so well that you can predict the next words they are going to say. There is a tremendous amount of work (and money) going into this technology, and the hallucination problem is being solved, but again, the curation of the data used to feed these LLMs will be a topic of hot debate.

AI is here to stay, but our definition of AI is not. New technologies, like ChatGPT, will continue to change our understanding, and someday AI may not feel artificial at all, just as a FaceTime call with someone halfway across the world would have been seen as magic 100 years ago and is now perfectly normal.
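To make the "retrain, don't debug" point concrete, here is a minimal sketch, assuming Python with scikit-learn (neither is mentioned above, and the data and labels are made up for illustration). When a model misclassifies an input, there is no line of code to patch; the remedy is adding corrective labeled examples and fitting again.

```python
# A minimal sketch of "fixing" a model by retraining rather than patching code.
# Assumes scikit-learn; the data and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Original training data: two features, binary label.
X_train = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y_train = np.array([0, 1, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Suppose this input comes back from production with the wrong prediction.
x_bad = np.array([[0.2, 0.9]])
print("before retraining:", model.predict(x_bad))

# There is no line of code to fix inside the model. Instead, add labeled
# examples that cover the failure case and retrain.
X_train = np.vstack([X_train, x_bad, [[0.1, 0.8]]])
y_train = np.append(y_train, [0, 0])
model = LogisticRegression().fit(X_train, y_train)
print("after retraining:", model.predict(x_bad))  # now more likely the intended label
```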
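The point that data quantity and quality dominate the effort can also be made concrete. A minimal sketch, assuming pandas (the file name and column names are hypothetical placeholders): even the most basic cleansing and validation, such as deduplication, handling missing values, and range checks, is real engineering work that happens before any model is trained.

```python
# A minimal data-cleansing and validation sketch, assuming pandas.
# The file name and columns are hypothetical.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")

df = df.drop_duplicates()                    # cleanse: remove duplicate rows
df = df.dropna(subset=["temperature"])       # cleanse: drop rows missing a key field
valid = df["temperature"].between(-40, 125)  # validate: plausible physical range
print(f"dropping {len(df) - valid.sum()} out-of-range rows")
df = df[valid]
df.to_csv("sensor_readings_clean.csv", index=False)
```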
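Finally, to ground the next-word-prediction explanation of hallucination, here is a sketch assuming the Hugging Face transformers library and the small, publicly available GPT-2 model. These are not ChatGPT's internals, which the article does not describe, but they illustrate the same autoregressive principle: the model repeatedly emits a likely next token given everything so far.

```python
# A minimal sketch of autoregressive next-word prediction: the model keeps
# picking the most probable next token given the previous sequence of words.
# Assumes Hugging Face `transformers` and the public GPT-2 weights.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The hard part of using AI approaches is"
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    for _ in range(10):  # extend the prompt by ten tokens
        logits = model(input_ids).logits
        # The model outputs a probability distribution over its vocabulary;
        # greedily take the single most probable next token.
        next_id = logits[0, -1].argmax()
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The model never "knows" facts; it only continues the sequence with
# statistically likely words, which is exactly why it can hallucinate.
```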