The Biggest Mistake With AI Use?

For a lot of people (and by ‘a lot’ I certainly mean the vast majority, at the 99%+ level), the past few years of AI, from Jarvis to ChatGPT, came out of nowhere. Heck, I’d probably say that 90% of people never even used Jarvis, so ChatGPT itself just suddenly appeared out of nowhere for them.

Very few people were much interested in the research science of AI. Most of you had probably never heard of PaLM or LaMDA at all until very recently, despite both of them predating ChatGPT. That’s not a criticism, simply a statement of fact.

Heck, I saw articles and posts on social media from people who were ‘supposed’ to be smart SEOs, talking about the novel idea of Bing maybe using an AI in a search engine, completely ignoring that Google had already been using a dozen different forms of AI and Machine Learning in their normal results for literal years before. It’s simply that those AIs were not something you directly or openly interacted with. Those AIs did things like process your query and attempt to get a better understanding of not just your meaning, but the intent behind that meaning.

There were AIs that helped detect and remove spam, AI built into Google Maps so that its directions reduced average journey times by 30%, and many, many others, but you couldn’t talk with any of them.

That’s a key difference, one of the absolute central points, and I want you to bear it in mind.

ChatGPT is very, very far from the first AI that could work out complex things. AI started to outplay Chess Grandmasters long, long ago. Then it was Go players. An AI can model thousands of possible choices, and the predicted outcomes, in the blink of an eye to choose the one that has the statistically highest chance of being the best move. Whether that is a chess move, or mapping network connections, or route planning, or… predicting the next few words that are most likely to produce writing that matches the patterns it has been taught are ‘good’.
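To make that ‘predicting the next words’ part concrete, here is a minimal sketch of the core loop. Everything in it is a toy I made up for illustration - a real LLM scores tens of thousands of candidate tokens with a neural network at every step - but the principle is the same: score the candidates, pick the statistically most likely one, repeat.

```python
# Toy sketch of greedy next-token prediction. The probability table is
# invented for illustration; a real LLM computes these scores with a
# neural network over a vocabulary of tens of thousands of tokens.
toy_model = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.09},
    "the cat sat on the mat": {".": 0.70, "and": 0.18, "quietly": 0.05},
}

def next_token(context: str) -> str:
    """Return the single most probable continuation of the context."""
    candidates = toy_model.get(context, {})
    return max(candidates, key=candidates.get) if candidates else ""

text = "the cat sat on the"
while (token := next_token(text)):
    text = f"{text} {token}"

print(text)  # -> "the cat sat on the mat ."
```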

What it is not doing is ‘creating’. It is not ‘writing’ in the way we do: thinking about the reader, imagining ourselves communicating with them, working out how to keep them engaged and get our meaning across. Instead, it is looking at the patterns of every document it has been trained on and predicting which words will most closely produce results that match one of those patterns. It is specifically, by design, trying to write like things that have already been written before, using facts and statements that have been used before, in the most successful (and thus widespread) patterns and styles.

All AI-generated writing, at this stage in AI development, is inherently derivative. It is based on what already exists and was successful. It is formulaic, because it quite literally detects and copies formulas.

It is really important to understand this. AI can write great ‘filler’ content: the predictable blurb on the side of your box that is the same as the writing on the side of anyone else’s box, and that’s okay because nobody reads the box anyway. AI will create better filler text, faster and cheaper, than a human who knows nothing about the topic and brings not one original thought or creative idea to their writing. But regardless of who wrote filler text, or why, or how fast, it is still filler text - still derivative, uncreative, formulaic, and predictable - and ultimately not something you’d use for an article, or anywhere you want people to be impressed or persuaded by the writing.

Now, there are times when you can inject all the creativity and usefulness into the prompt, and all you need an AI to do is impose order and pattern on that information. That’s fine. If you are hooking up a couple of data sources into a prompt - data that has never been merged together in some cool and useful way - then AI can write the filler text and impose the structure at a scale that simply couldn’t be matched by humans. That’s where AI-generated content can shine.
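A hedged sketch of what I mean, in Python. The data sources, field names, and wording here are all invented for illustration, but the pattern is the point: the value lives in the merged data, and the AI merely imposes structure and prose on it at scale.

```python
# Illustrative only: both "data sources" are hypothetical stand-ins for
# real feeds that have never been merged before.
weather_by_city = {"Bergen": "rain, 9°C", "Oslo": "clear, 12°C"}
events_by_city = {"Bergen": "a food festival", "Oslo": "a marathon"}

def build_prompt(city: str) -> str:
    """Merge two data sources into one prompt for an LLM to turn into prose."""
    return (
        f"Write a two-sentence visitor briefing for {city}. "
        f"Today's weather: {weather_by_city[city]}. "
        f"Today's main event: {events_by_city[city]}. "
        "Keep it factual and practical."
    )

# One prompt per city; at thousands of cities, refreshed daily, this is
# the kind of scale no human writing team could match.
for city in weather_by_city:
    print(build_prompt(city))
```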

But if there is something that could be written by humans, and is meant to be read by humans, and whether those readers are impressed and persuaded matters… That is the last place you should use AI.

So why are so many people using AI in the absolute worst way?

The answer is a cognitive bias called the Dunning-Kruger effect. Those who are not good at writing don’t have great knowledge or experience of writing, and thus are not good at telling good writing from bad. Their lack of ability means that they literally can’t tell the difference between good and bad, and instead tend to judge writing by how many words it has.

When those people see that ChatGPT can write words, and they can’t tell that the words are unoriginal, derivative, or formulaic, or even why those things matter, well, they figure words are words.

There’s a second part to that same flaw though. Those who use ChatGPT or any other AI because they are poor at writing, at using language, are missing the really, really huge ‘elephant in the room’. Large Language Model (LLM) based AIs use language. They are built entirely around a finer understanding of language than any prior generation of AI. Your sole control over those AIs, your entire input into how well they will work, comes down to how well you can use the one thing they understand: language.

If you want to be ahead in prompt engineering in the coming years, then the single most important thing you can learn is better language skills: the ability to more clearly express your needs in a prompt.

AI is not a tool to help you avoid learning how to write. It is an entirely new economy based on how well you can use language to prompt a machine.

6 Likes

Great ideas to sleep on.

1 Like

Since I have been dwelling in this pit for about as long as you have, I have seen lots of different attempts at approaching content automation. And you are absolutely right, of course. AI is a tool. It’s not a replacement for creativity.

At least not in its current state.

Whether it will ever become anything more than a good tool for filler text remains to be seen.

I used GPT-4 last night to generate a monster article about content automation. I’m still reading through it to see what can be used, and what is simply filler text.

At first glance it looks pretty well laid out. But I have to read through it a few more times to see if it gets across what I had in mind.

It sure is interesting though to use a tool this powerful. And yes, it’s all about the prompting.

So far, I am having a lot of fun playing with GPT-4. It even managed to make me feel emotional when I asked it about the background for my dog’s name :smiling_face_with_tear:

4 Likes

AI makes a great work buddy. You can converse with it to bounce ideas around about the best way to approach a piece of writing, what points you might want to cover, and even what order of tackling those might be the most expected.

You can give it snippets of writing you aren’t happy with and ask it to suggest improvements.

You can list the topics you do cover and ask it if it thinks there are others that should be covered.

These are all things that play to its super-power: knowing what has already been written, by others, across a library of millions of documents.

But what AI can’t do is tell you what isn’t in that library. You can’t ask it for original or new ideas outside of everything it knows (because it has no concept of anything outside of all it knows). It literally can’t imagine anything new. It simply has an unimaginably massive knowledge of everything already written.

Oh, and once you bounce those ideas around and write the creative article yourself, AI makes for a stunningly effective editor (editor as in the job title, not the tool), and will catch any typos, correct your grammar, and in all other ways make your content fit the expected patterns of publications it has learned so well.
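If you wanted to script that editing pass rather than paste into the chat window, a minimal sketch with OpenAI’s Python library might look like this - the model name and the instructions are placeholders to adapt, not a recipe:

```python
# Minimal sketch of an automated copy-editing pass. The model name and
# system instructions are placeholders; adapt them to your own needs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def edit_draft(draft: str) -> str:
    """Ask the model to act as an editor: fix typos and grammar only."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": ("You are a copy editor. Fix typos and grammar, "
                         "but do not change the author's ideas or voice.")},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(edit_draft("Their are three reason this aproach workes."))
```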

3 Likes

Yeah. I agree. I’ve been using various AIs for quite a few years now. GPT-4 is by far the best I have come across so far. Midjourney is pretty amazing as well with the right prompts.

Google Bard is so far the most pathetic attempt, but apparently Google Bard is switching to a more ‘capable’ language model, according to the CEO. I think they were too scared of ending up like that Twitter AI of Microsoft’s (Tay, I think), spitting antisemitism at everyone.

If Google is gonna have a chance at not becoming AltaVista, they’d better gear up and get in the game fast.

I’ve got a little experiment going in GPT-4 now, asking it to pretend to be a well-known Norwegian stand-up comedian. I have to compare its jokes to his jokes to see if they are any good.

2 Likes

Right now, which chatbot is best for us comes down in part to the prompts we write, and which one responds best to our own style of prompting. I’ve heard people say Bard answers the same prompts better than ChatGPT 4, and people saying the exact opposite. The difference is the prompts they are giving.

You’re spot on about Google being more cautious though. After all, remember that Bard is just a purpose-tweaked version of LaMDA, and that LaMDA was working and in some use long before ChatGPT launched with its version 3.5 engine. It simply wasn’t released to public use.

I also believe that Google are being more strategic. By that I mean that while OpenAI literally only have one LLM to use, DeepMind has many choices, and a lot of people expected a PaLM based AI to be chosen. PaLM is exceptionally good at giving accurate answers, even in safety critical applications - as proven by MedPaLM.

What’s impressive about MedPaLM isn’t that it is getting close to being as good at diagnosis as actual doctors. No, what’s impressive is that MedPaLM gives harmful misdiagnosis less often than actual doctors do. In safety terms, it is already above trained doctors.

Lots of people I spoke with, and me too, thought it was obvious that Google would go with a variant of PaLM for that kind of accuracy. When they first announced their plans for Bard, using LaMDA, I was genuinely, massively surprised. So I guessed that maybe they had even wider test data on LaMDA…

But right after that, as I so often do, I questioned that assumption. Why else might Google deliberately go with a Dialogue based AI (the final DA in LaMDA stands for Dialogue Application)? Well, for one thing, Google are the current holders of the ‘Answers’ space. When people want an answer, they already go to Google. If there is such a thing as a clear right answer, Google will already get you to it fast, and successfully, and own well over 90% of the market-share of that need.

It’s when you are unsure, where many possible answers might apply, that Google isn’t as good. And for that, a Dialogue based chatbot, something you can have a back and forth dialogue with, is likely the better choice and doesn’t cannibalize what Google SERPs already do.

OpenAI and Bing don’t have that worry. For the tiny market share that Bing have, just about any change at all can hardly be worse than where they have been for the past 10 years. Disruption, however drastic, is good for them. They literally have nothing to lose, and everything to gain, even from the most random roll of the dice.

Even in the worst possible outcome - that Bing might lose all of its tiny market share forever (which is highly unlikely, given it is the pre-installed default on new Windows PCs) - Microsoft will make more money from the integration of AI into Word, Office, Outlook, etc. than anything Bing was ever worth. Literally nothing to lose and everything to gain.

1 Like

Yeah. I was very surprised as well by Google’s choice. We will see what they roll out next though. They have lots of firepower if they choose to use it. I do think they were surprised at how good GPT 3.5 was, and how fast 4 rolled out.

And of course you are right about Bing. They literally have nothing to lose. Just upside for them so far.

1 Like

It’s going to be exciting. Reminiscent of the height of the Browser Wars, or indeed when dozens of search engines were all competing to be the best.

I personally think of Bard as simply Google putting in some early stake to show they were taking part. But I absolutely think that DeepMind are working full steam on something new, and they have a LOT more years of experience (and successes) behind them.

We have no idea how long OpenAI were planning and preparing their big move. But we absolutely know that Bard is what Google were able to respond with very quickly, without planning or forewarning… Bard was simply the instinctive parry to a long-planned surprise attack. What will they release when they have thought and purpose behind it? :slight_smile:

But, on the flipside, what are OpenAI planning as their next move, and had they got it to a certain point before launching the attack?

All we can do is wait and watch, and learn how we can best adapt to the new paradigm as it evolves.

2 Likes

What is worrying in all this is the fact that they don’t know much about what’s really in there. I would really recommend having a look at this interview: Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368 - YouTube

I honestly have absolutely no worries at all about the machine takeover or robot uprising. None at all. The real risks are far more likely - that AI will indeed be able to do a lot of human work cheaper and faster (even if not as well). Because the past half-century has shown us where that goes: an ever-widening gap between the richest and the rest of us.

What happened to blue-collar workers over the past 60 years is very likely going to happen to most white-collar workers over the next 50 years. Any knowledge-based profession is about to be faced with something that can flawlessly learn and recall the most complex knowledge and interact with people to inform them.

Surgeons will have longer than many other doctors, since they combine fine motor skills with their knowledge, but lawyers have only their creativity to save them. They’ll have to compete against cheap AI legal advisors for anything but the most creative aspects of law. Accountants are certainly at immediate risk. Incomes in a lot of professions will plummet.

That loss of money and shrinking of the white-collar and professional segments will have a knock-on effect on those professions that have traditionally sold best to the middle and upper-middle classes. That means a loss of market for all sorts of jobs and careers that were not themselves being replaced by robots, but whose market was, and where robots don’t need their services: all sorts of artisans and craftspeople, fitness coaches, restaurants, theatres, and so forth.

Those who will benefit will be the wealthy - those who can invest hard in AI early and replace all those pesky and expensive humans. Robots won’t replace us - greed for profit will.

3 Likes

Yeah. I agree. I think some of the first to go will be customer service reps. That’s one of the lowest-hanging fruits when it comes to savings for any organisation. I still recommend the interview though. It’s 3 hours long, and they discuss many aspects of the issues AGI raises that have rarely been discussed in public.

Anyways. You have some really good points, Ammon.

1 Like

Accountants are a long way from being replaced, a few years at least. Data in equals data out. The data streams are consistently polluted and the laws change and multiply every single year.

2 Likes

Hi @Mark_Upshaw and welcome to the discussion. I could be wrong about accountants, as I’m sure there are aspects of their work that I’m unaware of as an outsider. But the ability to plug in entirely new knowledge and instantly replace the old is the kind of thing AI is better than humans at. They can be far better at fully absorbing changes to laws and practices, both in terms of breadth and depth (how the changes connect to everything else), and most especially in doing so far, far faster.

It is intuition and creativity that AI lacks.

Obviously, I am saying “AI” and not “ChatGPT”. With ChatGPT the input part is still too limited. However, a licensed version of the GPT-4 LLM, further trained and tuned for specific accounting needs and knowledge, could go much further - and GPT-4’s image recognition means it could even replace the type of bookkeeper who is presented with receipts and has to process them.
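Purely as food for thought, a sketch of the shape such a receipt pipeline might take - the model name, the prompt, and the output fields are all my assumptions, not anything any accounting product actually does:

```python
# Hypothetical sketch of receipt processing with a vision-capable model.
# Model name, prompt wording, and output fields are assumptions.
import base64
from openai import OpenAI

client = OpenAI()

def extract_receipt_fields(image_path: str) -> str:
    """Send a receipt photo and ask for structured bookkeeping fields."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Extract vendor, date, total, and VAT from this "
                          "receipt, as JSON.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(extract_receipt_fields("receipt.jpg"))
```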

Are you familiar with Amazon’s practices in their warehouses etc.? Where the human workers are effectively managed and instructed by machines, AI, and algorithms? The human workers are the drones, under the control of an automated Quartermaster - because, very simply, software is cheap and hardware (robots) is not.

Isn’t a Quartermaster kind of an accountant and recordkeeper of accounts, where many of the account items are items rather than currency? They have to manage and balance the books, comply with regulations and practices, handle the stock levels that are dictated, etc.?

Just food for thought. But Amazon workers have had AI bosses for years now, very successfully managing logistical tasks that are just as complex as accounting, if not more so.

3 Likes

Yes, AI will replace almost all accountants eventually. The sooner the better, for ease. But the further it can be pushed out, the better for keeping revenue agents from snatching funds out of your accounts in error.

Accounting is far more complicated than inventory and logistics, because of reporting to principals, shareholders, and regulating/taxing agencies. Different books for different reasons.

All doctors that I have known, going back decades, had a minimum of three sets of books: one for the IRS, one for Medicare, and one for themselves, so they can see where their money is actually going and make decisions about where it should flow.

After an audit, the entries (codes) must change to conform to the auditor’s desires, which will seem to contradict the current law. But there is always room for interpretation among agents, lawyers, accountants, and the gov’t.

Now, with green incentives and special carve-outs and other programs, the difference for big and medium-sized businesses is red to black, depending on whether you take advantage of the incentives or carve-outs in the right way.

The system seems to be designed and redesigned each year for the largest conglomerates to unfairly compete against everyone else.

Do you remember when Trump said the system is rigged and the moderator asked how he would know? “Because I take advantage of it.”

2 Likes

What I see is that the big separation isn’t going to be that the professions will be “gone”, but that the individuals in a profession who can use the new tools (AI, etc.) will become so much better at their jobs that they will work at a level of volume that others who refuse simply cannot match.

2 Likes

Welcome @Todd_Lemieux
I wish I could tell you something comforting and affirming for a response to your first post here. Instead, I can only offer you honesty. Extrapolate your thought further.

Yes, individuals in the profession can use the new tools.

So can the individuals outside of the profession.

Any business owner, or employee of a business owner, will be able to use the exact same tools, even the exact same prompts, to get the exact same results. They’ll be able to find those prompts in their local Chamber of Commerce, shared by fellow business owners. They’ll be able to find those prompts in AIPRM for free. They’ll be able to find those prompts on any small business forum.

4 Likes

Thank you, excellent thoughts. I think the gap between rich and poor countries is going to widen, and many naive people will believe that with AI tools they will be better professionals, or more productive. They have hope, they have faith, they don’t want to think the opposite. They don’t want to hear about arbitrary layoffs. They believe that now they will work less because the AI will help them, that they will rest while the AI works, and that everyone will have time to be distracted or watch NETFLIX.

The reality is that for employers it will be cheaper to hire or keep one human with AI help than five guys with medium productivity who generate expenses: vacations, health, etc. The one guy left in the job will either be very creative or have something that the AI can’t give yet. Let’s keep alert without being fatalistic, or naive like those who eat up everything that is published.

3 Likes