Is Artificial Intelligence Really an Existential Threat?
In recent years, artificial intelligence (AI) has become a hot topic, attracting the attention of both scientists and the public. Stories of AI growing intelligent beyond human control, even threatening the survival of humanity, have appeared in the media with increasing frequency. But do these concerns reflect the true nature of artificial intelligence?
Humanity’s obsession with artificial intelligence
Since the 1940s, when the first computers were born, people have begun to worry about the capabilities of these machines. One typical example is the 1970 science fiction film “Colossus: The Forbin Project”, about a supercomputer that controls all of America’s nuclear weapons and gradually conquers the world. The idea of powerful, uncontrollable AI has inspired many works of art and haunted many scientists.
For more than half a century, experts have predicted that computers would achieve human-level intelligence within a few years and quickly surpass us. In reality, despite great advances, artificial intelligence has not reached that level. Although it has existed since the 1960s, AI has only recently become ubiquitous thanks to language- and image-processing systems. But are these systems really as frightening as we think?
Over the past six decades, experts have repeatedly predicted that computers would reach human-level intelligence within five years and surpass it within ten.
New study: AI is not an existential threat
A new study from the University of Bath and TU Darmstadt, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), makes remarkable findings about the capabilities of large language models (LLMs). According to the study, LLMs, the most popular form of artificial intelligence today, are actually more controllable, predictable, and safe than feared.
Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, stressed that stories of AI posing a threat to humanity have slowed the technology's development and adoption. Concerns that LLMs might develop new capabilities without human intervention are, he argued, unfounded. The study found that LLMs excel at tasks where they follow explicit instructions, but they are incapable of independently learning or developing new skills.
The study also found that while LLMs can exhibit some surprising behaviors, these can all be explained by how the models were built and trained. The idea of an AI evolving into a dangerous entity on its own therefore has no basis.
Artificial intelligence has been around since at least the 1960s and has been used in many fields for decades.
We tend to think of this technology as "new" only because AI systems that process language and images have recently become widely available. Yet AI may not be the existential threat many people believe it to be. According to the new study, large language models can only follow instructions, cannot develop new skills on their own, and are "controllable, predictable, and safe" by nature.
The real danger lies with humans, not AI
That doesn’t mean AI is completely harmless, though. The team from the University of Bath and TU Darmstadt warns that AI can still cause worrying problems. Current AI systems are already capable of manipulating information and generating fake news, and they can be misused for malicious purposes. The danger lies not with the AI itself, but with the people who program and control it.
It is important that we take a careful and responsible approach to the development and application of AI. Rather than fearing that machines will become the enemy of humanity, we need to pay attention to the people behind these systems. It is humans who will determine whether AI will be a useful tool or a potential threat to society.
AI is not an independent, conscious entity; it is a tool created by humans. The real threat comes from how we use that tool.
Artificial intelligence, especially in the form of large language models, is not the existential threat that many fear. These systems are controllable and predictable, and they cannot develop new skills or grow dangerous beyond our control. This does not mean we can let our guard down, however: the real risk lies with the people who program and deploy AI systems. Continued research, oversight, and responsible application of AI are therefore essential to ensure that this technology serves the good of humanity.