
Leaked Documents — OpenAI Has a Very Clear Definition of “AGI” — It’s Not Intelligence

December 27, 2024


Yes, you’ve read this correctly. According to leaked documents obtained by The Information, OpenAI and Microsoft agreed to define AGI as a system “that can generate $100 billion in profits”.

That’s it!

So, if I want to prove to you that I’m an intelligent person, all I have to do is demonstrate that I am generating $100 billion *in profits*? Really?

As a quick FYI — OpenAI’s website states that AGI refers to “AI systems that are smarter than humans”. 🤔

I don’t know about you, but both definitions seem somewhat different to me <sarcasm>

Let me tell you a secret as a tech entrepreneur of over two decades:

Sadly, far too many tech startups use this trick: offering investors obscure “targets” that get them excited while also allowing the startup to move the goalposts later.

Why do you think deeptech startups don’t attract the same level of investment? Because most deeptechs have clear targets: developing a new drug, building a new cybersecurity system, or building AI models on ethical standards from the ground up without using copyrighted material (as is the case with my company Nebuli).

Such clear targets are not sexy enough for many investors, apparently.

I called this out on an episode highlighting a real problem: investors are not prioritising deeptech. I shared how DeepMind would have died had it not been for Google’s acquisition. Yes, they could not raise more investment. Yet their CEO has just won the Nobel Prize!

So it seems that unless you build a deeptech company that makes scientifically suspicious, out-of-this-world promises (which later turn out to be fraudulent), you don’t stand a chance of getting the funds you deserve. Just look at Theranos, the fake blood-testing company that was the darling of the media and investors alike.

Why do you think social media startups pitched investors on huge vanity metrics (massive user sign-ups, engagement figures, and so on) that later turned out to include millions of fake users?

You get the picture!

So the fact that these leaked documents do not provide any clear scientific metrics that would allow OpenAI to define “AGI” should be an eye-opener.

I would certainly argue that the promise of $100 billion *IN PROFIT* is what attracted the investment, not AGI.

Don’t forget, the AGI concept has been around for a long time, and I cannot tell you how many times I have heard promises that it would arrive in “5-10 years”. I’ve been waiting for 25 years, and I’m still waiting!

What do you all think? Should this leaked report change the market’s direction and priorities?

In my opinion, it should certainly prompt more serious scrutiny, not least of the aggressive data grab used to train these models in a somewhat rushed and reckless fashion.
