Collab Episode: The AI bubble is much worse than we thought. It’s a financial nightmare!

Watch or Listen

Episode Summary

The tables have turned! I had the pleasure of joining Ben Byford for the second time on his Machine Ethics Podcast, after a four-year gap, discussing why the AI bubble is far worse than we thought and how it has become a financial nightmare.

Ben is a good friend of our show here at The CEO Retort and a previous guest, with whom we discussed digital ethics and ethical AI without the hyperbolic doom-and-gloom fantasy.

As part of our collaboration, I’m the guest this time, and Ben and I discuss podcasting, the history of OpenAI, London startups, AI use cases, and whether GenAI is even safe.

But the key topic of our discussion is the AI bubble, snake oil salesmen, and whether we need all these data centres.

Above all, we explore the financial fallout once the AI bubble bursts.

As ethicists, we also discuss the practical implications of replacing human workers, the danger of data oligarchies, the erosion of trust in AI, AI psychosis and more.

Join the conversation and let us know what you think about the AI bubble.

Key Takeaways

(YouTube Timestamps)
  • 00:00:00 – Preview
  • 00:01:45 – CEOR Intro
  • 00:02:25 – Tim’s journey with AI as a deeptech entrepreneur since 2002 and why he decided to launch The CEOR podcast.
  • 00:08:53 – Why Tim believes Google is still the dominant force in AI today, not OpenAI, based on his observations at Google Campus London.
  • 00:10:32 – How did the current wave of AI cause the rise of the AI bubble?
  • 00:22:03 – What impact does the AI bubble have on regular people’s lives?
  • 00:27:19 – What makes this AI bubble different?
  • 00:33:30 – The AI sector’s failure to justify its claims that AI “can change everything”, and the dangerous consequences of this delusion.
  • 00:39:39 – Replacing humans is a fantasy. It’s about profit and control, not innovation. There is nothing new here!
  • 00:46:10 – The “anti-human” rhetoric from the AI sector, coupled with aggressive and unlawful data grabs and thought manipulation, and why it must stop.
  • 00:52:30 – What could the world look like after the AI bubble bursts?
  • 00:56:37 – Why “AI Data Centres” are not “long-term” investments with big ROIs like railways.
  • 01:01:20 – Why nonprofits and socially responsible AI research and services will play a critical role in the post-AI bubble cleanup.
  • 01:05:12 – Tim’s advice on how to protect yourself from chatbots’ praise!

Our Favourite Quote from This Episode

You don’t need AI for that. These are the people behind the AI, basically masquerading it as an AI innovation, but in fact, it’s an attempt to aggressively grab as much of your data as possible. That way, you can “behave yourself”.

About our Guest

Ben Byford

The CEO Retort Guest Profile: Ben Byford

Ben is the host of the Machine Ethics Podcast and one of the earliest and most avid advocates for digital ethics and ethical AI, long before they became fashionable in the current AI hype cycle.

Ben runs the Ethical by Design consultancy, focusing on AI models. He makes games with his company, Nuclear Candy Games. He has also worked on web and app projects and has taught data science, design and coding with Cambridge Spark, Decoded and Ada College.
