Geoffrey Hinton, the renowned researcher known as the “Godfather of AI,” announced on Tuesday that he had resigned from Google so that he could speak openly about the potential hazards of artificial intelligence. He explained on his Twitter account that he had come to believe computers might surpass human intelligence much sooner than expected, and that he did not want to be constrained by potential conflicts of interest arising from his employment at Google.
Hinton, aged 75, told the New York Times he was concerned that AI could generate false images and text so convincing that people would no longer be able to tell what is real and what is not. His groundbreaking research on deep learning and neural networks has played a significant role in the development of modern AI technology.
Since his departure, Hinton has said he believes Google has been responsible in its approach to AI. He noted that there are many positive things about Google he would like to talk about, but that his comments will be more trustworthy now that he is no longer associated with the company. Google confirmed that Hinton retired from his position after serving as the head of the Google Research team in Toronto for a decade.
Asked to provide additional information, Hinton declined to comment further on Tuesday but said he would be willing to discuss the matter in more detail at a conference the following day.
Geoffrey Hinton warns AI chatbots could be dangerous
In an interview with the BBC, Hinton said that some of the potential risks associated with AI chatbots are “unsettling” and that, although they do not currently possess greater intelligence than humans, he believes that could change soon.
In an interview with MIT Technology Review, Hinton also raised concerns about “bad actors” who could use AI for harmful purposes, including manipulating elections or inciting violence. He further told the New York Times that only a few people had believed AI could eventually surpass human intelligence.
Hinton initially believed that the idea of AI surpassing human intelligence was far-fetched and could take decades to materialize. His stance has since changed, and he now believes it could happen sooner than he thought. The release of ChatGPT by Microsoft-backed startup OpenAI in November 2022 has led to a growing number of “generative AI” applications capable of generating text or images, raising concerns about how such technology should be regulated in the future.
The debate over AI’s potential dangers centres on whether the primary risks lie in the future or the present. Some argue that the main threat is the hypothetical scenario of computers exceeding human intelligence, while others believe that automated technology already being deployed by businesses and governments can cause real-world harm today.
The popularity of AI chatbots has made AI a topic of discussion not just among experts and developers but also among the general public, according to Alondra Nelson, the former head of the White House Office of Science and Technology Policy.
She believes that this is an opportunity to have a conversation about what a democratic and non-exploitative future with technology should look like.
However, some experts have expressed concern about the potential dangers of AI, with some computer scientists even regretting their work. Dr Carissa Veliz, an associate professor in philosophy at the University of Oxford, argues that policymakers should take these concerns seriously and regulate AI before it’s too late. Google’s chief scientist, Jeff Dean, says the company remains committed to a responsible approach to AI and continues to learn about emerging risks while innovating boldly.