AI Pioneer Warns of Risk
Prominent AI researcher Geoffrey Hinton quit his job at Google and is now saying ‘bad actors’ will use generative AI for ‘bad things.’ And my take on it.

Geoffrey Hinton, often referred to as “the Godfather of A.I.,” is a pioneer in artificial intelligence whose research has significantly contributed to the development of A.I. technology. In 2012, Dr Hinton and two of his graduate students at the University of Toronto developed a technology that laid the groundwork for the generative A.I. systems widely used in the tech industry today. More recently, however, Dr Hinton has become one of a growing number of critics of the A.I. industry, warning of the potential dangers of the technology.
In an interview, Dr Hinton stated that he had resigned from his position at Google, where he had worked for over a decade and became one of the most respected voices in the field, to speak out freely about the risks of A.I. He expressed regret about his life’s work and warned that generative A.I. could be a tool for misinformation and ultimately pose a risk to humanity. Dr Hinton’s journey from an A.I. groundbreaker to a doomsayer is a remarkable moment for the technology industry, which is at its most important inflexion point in decades.
Industry leaders believe that A.I. systems could be as significant as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But many fear that they are releasing something dangerous into the wild. Generative A.I. can already produce misinformation and may soon put jobs at risk. Dr Hinton’s concerns have been echoed by 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, who released a letter warning of the risks of A.I.
Dr Hinton’s career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, he embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea, but it became Dr Hinton’s life’s work. In the 1980s, Dr Hinton was a professor of computer science at Carnegie Mellon University but left the university for Canada because he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department.
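The idea Dr Hinton championed — a network that learns skills by analyzing data rather than by following hand-written rules — can be illustrated with a toy example. The sketch below is not Dr Hinton’s work or any production system; it is a minimal two-layer network, written in Python with NumPy, trained by gradient descent to learn the XOR function, a classic test case that a single-layer network cannot solve:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data: the XOR function. The network is never told the
# rule; it infers it by repeatedly adjusting its weights to reduce error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(20000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of mean squared error, propagated
    # back through both layers (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

predictions = (out > 0.5).astype(int)
print(predictions.ravel())
```

After training, the network's thresholded outputs should recover the XOR pattern. Scaled up from four examples and a handful of weights to billions of examples and billions of weights, this same learn-from-data principle underlies the generative systems discussed in this article.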
Dr Hinton is deeply opposed to the use of artificial intelligence on the battlefield, which he calls “robot soldiers.” In 2012, Dr Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs, and cars. Google spent $44 million to acquire a company started by Dr Hinton and his two students. Their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr Sutskever went on to become the chief scientist at OpenAI. In 2018, Dr Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Dr Hinton long thought that using neural networks to learn from huge amounts of digital text was a powerful way for machines to understand and generate language, though an inferior one to the way humans handle it. Last year, however, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others.

In addition to concerns about the potential dangers of A.I., Dr Hinton is also worried about the implications of the technology on employment. As A.I. becomes more advanced, many fear that it will replace human workers in many industries, leading to widespread job loss and economic upheaval.
Dr Hinton is not alone in his concerns. Many experts and academics have voiced similar worries about the impact of A.I. on jobs and the economy. Some have called for the development of new social safety nets and job training programs to help workers transition to new roles as A.I. becomes more prevalent in the workforce.
Despite these concerns, the tech industry continues to push forward with the development of A.I. systems. Companies like Google, Microsoft, and Amazon are investing billions of dollars in the technology, believing that it holds the key to unlocking new levels of productivity and innovation.
But as the risks and potential downsides of A.I. become more apparent, it is clear that a more cautious approach is needed. Dr Hinton and other A.I. pioneers are calling for greater transparency and accountability in the development of these systems, as well as more robust safety protocols to prevent unintended consequences.
As Dr Hinton himself has said, “We need to be very careful about what we’re doing. We’re playing with fire here.”
In conclusion, Geoffrey Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflexion point in decades. While many industry leaders believe that the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education, there are growing concerns about the potential risks and downsides of these technologies.
As one of the most respected voices in the field of A.I., Dr Hinton’s decision to speak out about these risks is a significant development. While it is clear that A.I. holds enormous potential for innovation and progress, it is equally clear that we must proceed with caution and carefully consider the potential downsides and risks of these technologies. By doing so, we can help ensure that A.I. is used in a responsible and ethical manner and that it benefits society as a whole.
Here is my take on it…
The concerns raised by Dr Hinton and other experts about the potential risks of generative AI are not new. The increasing capabilities of AI systems, combined with the lack of transparency and accountability in their development and deployment, raise valid concerns about their impact on society and humanity as a whole. The potential risks include the spread of misinformation, job displacement, and even existential threats to human civilization.
It is important to acknowledge that AI systems have the potential to bring many benefits to society, such as improving healthcare, education, and environmental sustainability. However, it is equally important to ensure that these systems are developed and deployed in an ethical, responsible, and transparent manner that prioritizes human well-being and safety.
The responsibility of ensuring the safe and ethical development and deployment of AI systems does not rest solely on the shoulders of individual researchers or companies. Governments, regulators, and civil society organizations also have a crucial role to play in shaping the policies, regulations, and norms that govern the development and deployment of AI.
In summary, the concerns raised by experts about the potential risks of generative AI should be taken seriously and addressed through a collaborative effort between researchers, industry, governments, and civil society. It is crucial to strike a balance between the potential benefits of AI and its potential risks to ensure a safe and prosperous future for humanity.