Rumours on GPT5 by OpenAI, AGI and some thoughts on our future

GPT-5 is rumoured to have 2,000 to 5,000 billion parameters (i.e. 2–5 trillion).

Training is said to take roughly double the time: 6–10 months instead of 3–4 months.

Training reportedly runs on 25,000 Nvidia A100s plus some H100s. That is a huge amount of hardware and power; operating it alone is a major challenge.

Data cut-off: Dec 2022
End of training: Dec 2023
Release: March 2024

GPT-5 is expected to achieve AGI.

It is expected that the competition will not be able to keep up with training once AGI is achieved.

(did somebody from OpenAI ask for a “pause”?)

Not much time left for humanity to learn and adapt to what’s coming, let alone create policies, LOL.

Sam Altman, CEO of OpenAI, says he cannot imagine a world where human intelligence is worthless.

I also cannot fully imagine how it will be, but we are heading towards it. We will see extreme changes and opportunities, and we already get some ideas from our daily use of AI. All of us will be affected.

Policy makers will be years or decades too late.

I believe we already see the start of a separation between humans who use and embrace AI and humans who avoid or fight it.

OpenAI know they have to move fast because DeepMind can throw a lot more hardware and expertise at the problem. 25,000 machines isn’t even stretching Google’s cloud. I think that’s less than the hardware in just one of their many, many datacentres.

However, looking at Bard I think I figured out where OpenAI have their advantage. You see, for all the years of NLP usage and data Google have, it was all about searches. And people frame a search very differently to the way they write a prompt for a chatbot. It’s a radically different kind of NLP that is needed, and I just have the feeling that Bard didn’t account for that, and that Google’s data actually made them go a little in the wrong direction.

Meanwhile, OpenAI have years of GPT data, the usage of Jasper, etc. So while they don’t have the hardware or expertise that Google have, they did have a much more applicable source of training for the right kind of NLP. But that advantage lessens every single day that Bard gathers usage data and DeepMind actively work on a more chat-like, conversational kind of NLP.

If you remember, Google actively chose their dialogue-based LLM (LaMDA) over the more accurate PaLM-based LLM that I (and many others) had expected. Although, according to recent news from the CEO, Google have since switched to more PaLM-based systems. So while my guess may be wrong, it is not an entirely random or blind guess, and it does seem to fit some of what we know…

Anyway, the main thing is that OpenAI moved first, and they need to keep that momentum going as long as they can. The more time passes, the greater the odds that Google will catch up and even overtake… unless OpenAI can leverage that first-mover advantage to gain the funding and resources to scale up and even the odds while they still have the lead.

I’m pretty certain we’re nowhere close to AGI yet. Sure, GPT-5 might be able to pass the Turing Test that used to be the benchmark for ‘proper’ AI, but that’s because the test is based on chatting, and ChatGPT is a dedicated chatbot. AGI has to be versatile, able to be intelligent in all sorts of applications, just as a person is. In other words, it has to go far beyond writing like a human, or chatting well enough to fool a human. It needs to be capable of actual reasoning, of figuring out knowledge it was never trained on: the dangerous stuff. If we were six months from that, we’d be hearing a lot more than a request for a short pause; we’d be hearing of major protests and calls for backtracking and turning off the AIs…


Good point: Google lacks training material for “real prompts”, and a real thumbs up/down signal; they only have dwell time and other indirect feedback.

That is THE reason why ChatGPT is still free for most users: endless free training data.

And Google is too big, too slow, it seems.

The Google Brain and DeepMind teams had rivalry instead of collaboration? Typical corporate BS. That’s a huge issue, imo.

Do they have more servers? Yes. Can they deploy them for new projects? No, we have to ask A and B and C, and there are those forecasts, and bonus programs, and the last round of 360-degree feedback, yadda yadda :wink:

On a tech topic, what we see already with

  • LangChain
  • Auto-GPT
  • and finally also ChatGPT Plugins (similar to LangChain actions)

is turning towards a more “versatile” “intelligence” that can do (try out) things and learn from it.

The “context size/memory problem” can be solved pretty neatly in some new ways using a vector DB, as @RealityMoez already mentioned somewhere.
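To make the vector-DB idea concrete, here is a minimal sketch of the retrieval step. The embedding vectors and stored “memories” below are toy values I made up for illustration; in a real system the embeddings would come from an embedding model and live in an actual vector store (e.g. FAISS or Pinecone), but the ranking logic is the same: cosine similarity between the query embedding and each stored embedding.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "memory" store: (text, embedding) pairs. These 3-dimensional
# vectors are made up; real embeddings have hundreds of dimensions.
memory = [
    ("The user's name is Alice.",       [0.9, 0.1, 0.0]),
    ("The project deadline is Friday.", [0.1, 0.8, 0.3]),
    ("The user prefers Python.",        [0.2, 0.1, 0.9]),
]

def recall(query_embedding, top_k=1):
    """Return the top_k stored texts most similar to the query embedding."""
    ranked = sorted(memory,
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query "about deadlines" (toy embedding close to the second entry):
print(recall([0.0, 0.9, 0.2]))  # → ["The project deadline is Friday."]
```

The retrieved texts are then pasted into the prompt, which is how a small context window can appear to “remember” arbitrarily large histories.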

I think what is dubbed “AGI” will just be the maximum on a scale from “a little helpful” to “unbelievably helpful”.

Letting an AI prompt itself, iteratively and automatically, is possible today, April 11 2023,
just like those chess computers 30 years ago, or brute-force password crackers…
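The self-prompting loop is structurally very simple; this is the core pattern behind tools like Auto-GPT. The sketch below uses a stub function in place of a real LLM call (in practice it would be an API request to a model), so `stub_model` and its deterministic behaviour are just stand-ins for illustration.

```python
def stub_model(prompt):
    """Stand-in for a real LLM call (e.g. an API request).
    Here it just appends a marker so the loop is observable."""
    return prompt + " -> refined"

def self_prompt_loop(task, max_iterations=3):
    """Feed the model's own output back in as the next prompt."""
    prompt = task
    history = []
    for _ in range(max_iterations):
        output = stub_model(prompt)
        history.append(output)
        prompt = output   # the AI "prompts itself"
    return history

steps = self_prompt_loop("write a plan")
# each element of `steps` builds on the previous output
```

In a real agent the loop also inspects each output, decides on actions (search, run code, write files), and stops when the task looks done, but the feedback-of-output-into-input above is the essential trick.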

I also believe Google will catch up, at some point.

But I believe less and less that it will still be relevant by then.


Where did you hear/read about rivalry? Alphabet own both companies; if there is rivalry in that situation, it is by design, because it makes both more effective. It’s like when a coach or manager of a sports team allows rivalry between players on the same team because it spurs them both on. If the rivalry isn’t productive, the coach just fires the worse player.


I have no insider knowledge, so I just have to rely on typical secondary sources, news, e.g.

“because it makes both more effective”

Yes, I fully agree, but that’s not how such huge international corps usually work.

Everyone has 20 management levels above them that they need to report some nonsense KPIs to,
and 20 levels below from which they need to poll those KPIs.

Everything honest (off the record) I’ve heard from Google insiders paints the exact same picture.

Therefore these statements about the two teams not working well together are very plausible, imo.