How does Google PaLM compare to OpenAI GPT?

I often read about, or get asked, what PaLM 540B by Google is, and here's the answer:
A much smarter LLM than GPT-3 175B.

Source: AI + IQ testing (human vs AI) – Dr Alan D. Thompson – Life Architect

If you look at it that way, many reports about the possible replacement of Google, even from well-known sources, appear rather uninformed when viewed today, and above all sensationalist.

For the interested reader, I definitely recommend reading up on the PaLM model, a Google language model built on over 500 billion parameters instead of 175 billion.

The final question remains: what, if anything, will Google use the (expensive) compute for? Certainly not just to clean SEO spam out of the SERPs, that much is clear to me. For that purpose they already have more than enough methods and signals, and don't need the power (and cost) of such an LLM.

There are also finance people questioning and cutting costs at Google, as in any major corporation.


The difference is not merely one of size. Different AI models have entirely different modes of use or primary purposes. ChatGPT is based on GPT (Generative Pre-trained Transformer), a general-purpose model: it doesn't specialize in anything, and so can be used for all kinds of things, from casual chat, to trying to be somewhat creative, to writing code, where the output needs to be precise and a single comma out of place breaks everything.

I for one was initially quite surprised that, when Google announced it would be introducing an AI sidekick to its search engine, they went with Bard. They have at least a half-dozen varieties of PaLM-based systems already in use, and those tend to be mostly in applications that depend on accurate answers.

For example, consider Med-PaLM, their AI model specific to medicine. While it doesn't yet diagnose quite as accurately as a real doctor (though it scores impressively high nonetheless), what is amazing is that it gives harmful misdiagnoses LESS OFTEN than actual doctors do.

Naturally I'd expected an AI built around that kind of accuracy and safety to be their most likely choice. So to say I was surprised they went with Bard, a variant of a dialog-focused AI, is something of an understatement.

But it actually makes sense. A huge share of the queries one might want an AI for don't have one right answer. So many of the searches people make are for things that are subjective, that change (over time, by season/day/month, suddenly à la trends and burstiness, or by context such as past search history, geography, or whatever), or where we actually want a range of options. Those are things the PaLM model doesn't seem to have been built for.

Thankfully, while OpenAI has largely only ever built one language model (GPT) in various versions, plus an image AI, DeepMind (the company whose success OpenAI was literally founded to cash in on) has built dozens of different AIs for all sorts of purposes, and many of them are in commercial use. To the extent that, even though DeepMind can only sell AIs to other companies under the Alphabet group, it still turned profitable two years ago.

So why did Google go with Bard, a dialog model, a specific type of AI built expressly for dialog applications? Well, partly it's the situation: it is meant to supplement search, not supplant it. Bard is more focused than GPT on the specifics of chat-based dialog. And right now, none of the companies involved know exactly how people will use these AI tools. So a robust, flexible dialogue-based model might be exactly what they need to handle anything, while usage data tells them what specifics to build the next AI for.
