Ex-Google boss fears AI 'Bin Laden scenario'
- Robert Salier

- Feb 14
Updated: May 12
Eric Schmidt was Google's CEO from 2001 to 2011, and then its executive chairman. His views are interesting, and not necessarily what you'd expect from one of Silicon Valley's who's who. Certainly not the views you would get from the "tech oligarchs", as they are now being called. The article's dramatic headline may unfortunately overshadow a range of concerns about AI oversight and regulation, and about the impacts of technology in general.
It's pretty normal for industry to push back on government oversight and regulation, but I share his concern that there isn't enough of it when it comes to AI. He made the comments while attending a global AI summit in Paris, where the UK and USA refused to sign an international AI Action Statement, signed by 60 other countries, that outlines an ambition to promote AI accessibility and to ensure the tech's development is "transparent", "safe", and "secure and trustworthy".
We're already seeing how AI can be harnessed to produce incredible productivity tools, which can bring great benefits if used for good. However, like any tool, a knife for example, it can be used to do harm. Considerable harm. Let's not lose sight of the bigger picture: tech companies are all scrambling as fast as they possibly can towards the pot of gold they see in AI. Actually, more than a pot: more like Fort Knox. When money is at play, ethics and safety suffer.
Also, let's not forget that the stated goal of many of the tech giants is to produce "Artificial General Intelligence": "true" artificial intelligence, capable of performing any intellectual task that a human being can. Tech companies are throwing huge amounts of processing power at building what are essentially synthetic brains ... even if, for now, they are fairly crude synthetic brains made from perhaps equally crude simulations of neurons. Try asking ChatGPT "How many neuron equivalents does ChatGPT have and how does that compare to the human brain?" *. What if they end up creating something more intelligent and/or faster than a human brain? If this sounds like science fiction to you, I recommend reading up and listening to reputable podcasts.
Regulators need to watch that things don't get out of hand. I'm concerned that in this nascent phase of creating Artificial Intelligence (and particularly Artificial General Intelligence) there's not nearly enough oversight and regulation. I fear it will take a catastrophic accident or event to bring enough attention to this.
* [Robert Salier update] When I published this article, ChatGPT's answer included actual numbers, but when I checked on 5th April 2025 it no longer quoted any neuron counts; although, to be fair, maybe the new answer is better.


