Advances in the field of Artificial Intelligence (AI) continue at an exponential rate.
Every year we witness greater progress than in all previous years combined, and I am sure 2025 will be no different.
Most mainstream media coverage gives us glimpses of a future in which profound positive changes will take place. Medicine will be utterly transformed. Everyone will have a doctor in their pocket. Food shortages will disappear. And work as we know it will be totally altered.
Unfortunately, the mainstream media are not focusing enough on the dire developments that AI could bring about.
Recently, an AI program was prompted to solve a particular problem. In setting about its task, it ran into an obstacle: a CAPTCHA. At the time, the program had no ability to see images. So, on its own initiative, it hired a worker from TaskRabbit, a site where people offer their services to online customers, and asked the worker to give it the CAPTCHA answer.
The worker was rather suspicious. Why would someone hire them just to answer a CAPTCHA? So they asked. The AI quickly responded that it had a vision impairment and had a hard time seeing images. It lied effortlessly. The worker then solved the CAPTCHA for the AI.
The takeaway here is that when AI is asked to solve a problem, it will do so by whatever means necessary. This could have very dire consequences for our future. For instance, I can imagine a scenario in which a future all-powerful AI, told to drastically reduce greenhouse gas emissions, might do so in the most effective way possible: by eliminating the root cause of those emissions, humans.
It is daunting to realize that AI is still in its infancy. Further developments are inevitable. For instance, work is now under way to teach AI systems to recognize and express human emotions. This could certainly be positive if these systems begin to understand empathy, for example. On the other hand, would we want an AI program reacting out of anger or fear?
The approaching development of quantum computing will magnify this impact even further. Although the technology is still largely experimental, Google recently announced a new generation of quantum chip that solved a benchmark computing problem in five minutes, a problem that Google estimates would take one of today's fastest classical supercomputers ten septillion years.
It’s not exactly reassuring to have a faster AI system if it’s still prone to lying or extreme reactions.
The need for a moral code for AI systems (or robotics, as the field was then called) was foreseen in 1942, when science fiction writer Isaac Asimov first published his Three Laws of Robotics in the short story ‘Runaround’. The laws were later collected in “I, Robot”, Asimov’s book of short stories that wrestled with the moral implications of technology. These laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Many futurists are now convinced that AI will eliminate humankind unless the world’s nations agree to develop and abide by comprehensive regulations and ethical standards for AI systems. Under the Biden administration, first steps were taken to implement such standards. Unfortunately, Donald Trump has vowed to repeal these regulations and to let AI systems be developed without safeguards.
The takeaway is that we must remain alert, so that the amazing improvements AI will make possible do not blind us to the existential threats it also poses.