Collab Episode: The AI bubble is much worse than we thought. It’s a financial nightmare!
Episode Summary
The tables have turned! I had the pleasure of joining Ben Byford for the second time on his Machine Ethics Podcast, after a four-year gap, discussing why the AI bubble is far worse than we thought and how it has become a financial nightmare.
Ben is a good friend of our show here at The CEO Retort and a previous guest who discussed digital ethics and ethical AI, without the hyperbolic doom-and-gloom fantasy.
As part of our collaboration, I’m the guest this time, and Ben and I discuss podcasting, the history of OpenAI, London startups, AI use cases, and whether GenAI is even safe.
But the key topic of our discussion is the AI bubble, snake oil salesmen, and whether we need all these data centres.
Above all, we explore the financial fallout once the AI bubble bursts.
As ethicists, we also discuss the practical implications of replacing human workers, the danger of data oligarchies, the erosion of trust in AI, AI psychosis and more.
Join the conversation and let us know what you think about the AI bubble.
Key Takeaways
- 00:00:00 – Preview
- 00:01:45 – CEOR Intro
- 00:02:25 – Tim’s journey with AI as a deep-tech entrepreneur since 2002, and why he decided to launch The CEOR podcast.
- 00:08:53 – Why Tim believes Google is still the dominant force in AI today, not OpenAI, based on his observations at Google Campus London.
- 00:10:32 – How did the current wave of AI cause the rise of the AI bubble?
- 00:22:03 – What impact does the AI bubble have on regular people’s lives?
- 00:27:19 – What makes this AI bubble different?
- 00:33:30 – The AI sector’s failure to justify its claims that AI “can change everything”, and the dangerous consequences of this delusion.
- 00:39:39 – Replacing humans is a fantasy. It’s about profit and control, not innovation. There is nothing new here!
- 00:46:10 – The “anti-human” rhetoric from the AI sector, coupled with aggressive and unlawful data grabs and thought manipulation, and why it must stop.
- 00:52:30 – What could the world look like after the AI bubble bursts?
- 00:56:37 – Why “AI Data Centres” are not “long-term” investments with big ROIs like railways.
- 01:01:20 – Why nonprofits and socially responsible AI research and services will play a critical role in the post-AI bubble cleanup.
- 01:05:12 – Tim’s advice on how to protect yourself from chatbots’ praise!
Our Favourite Quote from This Episode
References and Citations
- Watch the Machine Ethics edition of this podcast with additional commentary from Ben: The AI Bubble.
- My discussion with Ben about AI readiness at his Machine Ethics Podcast
- OpenAI Exec Says It Could Use Some Financial Support From the Government – Futurism
- OpenAI discussed government loan guarantees for chip plants, not data centers, Altman says – Reuters
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google Research
- Transformer-based translation system, these models have resulted in substantial gains in translation quality – Google Research
- Recent Advances in Google Translate – Google Research
- The AI boom is based on a fundamental mistake – The Verge
- All the president’s millions: how the Trumps are turning the presidency into riches – The Guardian
- AI tools may soon manipulate people’s online decision-making, say researchers – The Guardian
- Frustrated with today’s ‘attention economy’? You’re really going to hate what comes next – Fast Company
- Forget the attention economy. Prepare for the intention economy – Fast Company
- May Bulman: Tony Blair’s institute turned into Oracle’s sales and lobbying operation – CEOR
- Making Chips To Last Their Expected Lifetimes – Semiconductor Engineering
- The question everyone in AI is asking: How long before a GPU depreciates? – CNBC
- Morally corrupt innovations are the easiest innovations to create – It’s the lazy approach with dangerous consequences – CEOR
- Sara Grimes: How Tech and AI Ignore Children’s Rights – CEOR
- The Elite College Students Who Can’t Read Books – The Atlantic
- AI hallucinations are getting worse – and they’re here to stay – New Scientist
- Trust, attitudes and use of artificial intelligence: A global study 2025 – KPMG
- Oklahoma high schools to teach 2020 election conspiracy theories as fact – The Guardian
- AI-generated ‘slop’ is slowly killing the internet, so why is nobody trying to stop it? – The Guardian
- The Future of AI Under Trump and Project 2025
- The End of AI Utopia Thanks to Dumb Politics – CEOR
- Generative AI is Damaging Children’s Mental Health and Safety in the Age of “Brain Rot”
- Why 2025 Is The Era of Internet of Sh*t (IoS) – CEOR
- “Brain rot” is the 2024 Word of the Year — why is this bad news? – CEOR