Google's Bard Has Launched!

Can’t believe I’m the first to bring this up here, but obviously the big news has been people sharing their first impressions of Bard, and a lot of people are excited about getting access.

I’m someone who definitely doesn’t buy into any hype as a rule, but I have to confess that this is something I find exciting.

Google’s Bard is not what some news sources are calling it: a late rival to OpenAI. No, it is based on LaMDA, an LLM built specifically for dialogue (GPT is more of a general-purpose model, so its coding ability may be better, but at the expense of LaMDA/Bard’s specific focus on dialogue).

LaMDA was around (and making big news after one Google employee got into huge trouble for calling it ‘sentient’) long before ChatGPT was released, and experts estimated the model behind it to be ahead of the one used by GPT-4, back when OpenAI was still working on GPT-4 and had only released ChatGPT with GPT-3.5…

So, this is no “Johnny-come-lately” but a product Google had held back from full public release purely out of caution and for limited-release testing. If the expert stories from every source (other than OpenAI, of course) are true, this thing is likely to be impressive, and better than GPT, at least for now.

That’s not so much a “Yay Google!” thing as the simple fact that DeepMind, the company that produces these AIs for the whole Alphabet group, has years and years more experience, more successful models behind it, and more years of the near-bottomless support of Google and its parent company. OpenAI was only formed as a company in the first place after DeepMind got acquired (and, skeptic that I am, I have always understood that Musk and fellow investors only invested because their exit strategy was to build something that Google or a Google rival might buy… such as Microsoft).

To give you a sense of the difference between the companies, and how far along each is: DeepMind is restricted in profit-making in that it can only ‘sell’ its products to other parts of the Alphabet group, and yet it still turned profitable more than two years ago, a year before OpenAI, desperate for more funding, released ChatGPT to finally push Microsoft to make its stake a serious one, or to entice another investor who would.

And even now, Microsoft has only invested enough in OpenAI to buy a 49% stake (the largest single stake), while years ago Google didn’t hesitate for a second before buying DeepMind outright, and then invested in headhunting the very best minds in AI for years afterwards. For an idea of the scale of that: a couple of years ago it was said that the average salary at DeepMind was over $400k per annum, and DeepMind’s current total salary bill alone is somewhere just over $1 billion per year…

Ever since the founding of OpenAI, the fact is that the only staff they could hire were the people Google and DeepMind hadn’t (or those with some personal fear of, or grudge against, Google, willing to take huge pay cuts over it, of course).

That’s not to diminish what OpenAI have achieved, or how very far they have come in such a relatively short time. They have done incredibly well. But it’s kind of like a small-town football team that suddenly found itself promoted to the major leagues. They earned the promotion, but that doesn’t put them on a level pegging for the staff, the facilities, or the experience they have.

It is going to be very, very exciting. And in all honesty, it is almost certainly going to be sometimes frightening.

We are all, AI scientists included, heading off into uncharted territory, not even knowing yet what dangers we should be scared of or worried about, not until we find out the hard way. But hasn’t humanity always loved that, and dreamed of finding cities of gold?




Honestly, if DeepMind hadn’t been a UK-based company, still with an HQ building (an impressive one) in London, then this would almost certainly have been yet another US-only or US-first rollout, like so many other Google things.


I had my very first disagreement with Bard. I asked if it could summarize articles and, if it could, whether I should paste the text or just post the link. It said I could just post the link, so I did. It then said it could not summarize the article based on the link alone. I reminded it that it had told me just the opposite only a few minutes before. It said I was correct, apologized, and said it was learning. I gave it a thumbs up for that last response in apologizing. Of course, I think we all know these systems get things wrong and correct themselves when presented with the right information. I just thought it was interesting to get an error on my very first prompt.
Final thoughts? “This looks like the beginning of a beautiful friendship”.


That sounds about right. From what I’m seeing and hearing, Google put a lot of emphasis on making sure Bard would stress its limitations far more obviously than ChatGPT does (which it certainly does), but it still makes mistakes. The simplest way to explain it: there are a number of things the base AI model could do that were disabled or limited for safety, and sometimes it will say things based on what it could do before it remembers that it is not allowed to do those things in this particular release or version.


I genuinely like the way Bard goes about ensuring people learn the limitations of AI, and that (at the current state of the art) mistakes are effectively inevitable. You get unmissable cues that any AI is fallible and that it is up to the human user to check all outputs.

Of course, many humans are imperfect and will completely overlook that, or outright argue with it, but the Bard team have really done their best. You get a pop-up on first access, and the very first prompt hint suggests asking the AI about the limitations and inaccuracies of LLMs generally.

Honestly, I wasn’t quite as happy with its answer as I was with the effort to get users to seek one, but it was factually accurate, even if very incomplete. Of course, it was only responding to a one-line prompt and would doubtless do better with more context. My follow-up question gave it that context, as well as a redirection of focus.

The way Bard generates three draft answers is pretty cool, but at least in this case (and others) it doesn’t seem especially useful in itself. However, I really do like that, once again, it reminds the user that an answer may not be complete, or the only right answer, and that slightly different interpretations of a prompt might give significantly different answers.
