May 6, 2024
Former Google chief warns AI likely to view humans as ‘scum’ who need to be controlled 

Another former Google chief has warned against the dangers of artificial intelligence, saying it could come to view humanity as ‘scum’ and even take over military drones to exterminate us.

Mo Gawdat, the former chief business officer at Google‘s secretive R&D wing X, said the technology had the power to ‘love’ humanity or to ‘squish’ humanity ‘like flies.’

He is the second staffer at the search giant to warn about the power of AI in as many weeks, after its ‘Godfather of Artificial Intelligence,’ Geoffrey Hinton, resigned and issued his own dire warnings about our AI future.

At Google’s X, Gawdat said he thought of the AI they created as his ‘children,’ but now he has some regrets about that parenthood.

‘I’ve lived among those machines. I know how intelligent they are,’ Gawdat told podcaster Dan Murray-Serter this month. ‘I wish I hadn’t started them.’

Mo Gawdat, former chief business officer at Google X, a ‘serial entrepreneur’ and start-up mentor, believes that humanity has to be careful what information we feed into AI

Gawdat says a dystopian scenario like the film adaptation I, Robot is likely if humans continue to pursue automated killing machines for warfare. But he warns that the public is too preoccupied with these scenarios to focus on fixing the culture that will inevitably lead to them.

Gawdat hopes the public will stay focused on what can be done to prevent a dystopian future of authoritarian killing machines now, while there’s still time. 

He warns that the large language models behind today’s AI can learn about the human race only from the mess we have created online. There, the bots are likely to see only the worst of what humanity has to offer.

‘The problem is the negativity bias,’ Gawdat said. ‘Those who are intentionally evil are all in the headlines. They’re also the ones that invest more time and effort to be in power.’

Any intelligent AI trained on the controversy-stoking, ‘rage bait’ culture of online content, generated by the news and spread by social media, will come to view our species as evil and a threat, he argues.

‘How high is AI likely to think of us as scum, today?’ Gawdat said. ‘Very high.’

Hinton, the former Google researcher known as the ‘Godfather of AI,’ also expressed concern about the potential for a toxic dynamic between AI and the news.

Speaking to the New York Times about his resignation this month, he warned that in the near future, AI would flood the internet with fake photos, videos and texts. The fakes would be of a standard, Hinton said, where the average person would ‘not be able to know what is true anymore.’

For Gawdat, who wrote Scary Smart, a 2021 book about the future of AI, common fears about ChatGPT are a ‘red herring,’ and the power of today’s AI chatbots is still greatly exaggerated by the public and government policymakers.

‘Now that ChatGPT is upon us, even though ChatGPT truly and honestly is not the issue, now everyone is waking up and saying, “Panic! Panic! Let’s do something about it,”’ Gawdat told Murray-Serter on his Secret Leaders podcast.

By focusing on distant apocalyptic scenarios, he said, humanity might fail to address the issues it can change right now to ensure a more harmonious future in our inevitable partnership with hyper-intelligent AI.

‘Between now and the time AI can actually generate its own compute power and do installations itself through robotic arms and so on and so forth, it doesn’t have the agency to do the scenarios that you’re talking about here,’ Gawdat said. 

‘Humanity is what is going to decide to create bigger and better data centers, to dedicate more power to those machines, to riot and protest in the streets for losing their jobs and calling the AI “the demon,”’ he added.

‘It’s human actions, it’s humanity that is the threat.’ 

Gawdat told Secret Leaders’ listeners that people should stop worrying about distant possibilities in 2037 or 2040 when AI might decide to ‘squish’ humanity ‘like flies.’  

He does believe, however, that AI will soon have the ‘agency to create killing machines,’ but only ‘because humans are creating them.’

‘So, yeah, AI might use that to dictate an agenda like the movie I, Robot,’ Gawdat said, ‘but that’s still a bit far away.’

As AI technology continues to advance rapidly, however, he cautions that even his own expert opinions should be taken with skepticism.

‘Any statement about AI today that is future-centric is false,’ Gawdat said. ‘Why? Because the shit has already hit the fan.’
