It never works. But it will sometimes claim otherwise, because it is trained to respond in the most positive and helpful manner.
OpenAI built it. They know what it can and cannot do. ChatGPT is not connected to the internet, nor to any live source of data. All it has is its internal ‘mind’, trained on documents written before 2022 (the actual cut-off was about halfway through 2021), plus your prompt. You can paste new, updated data into a prompt, but that data may still conflict with the fact that, to GPT’s ‘mind’, it is still halfway through 2021.
From what I saw in the press release, OpenAI plans to soon launch a series of plugins, and one of those seems to add at least some capability to access the web and get fresh data. They are making a big deal out of it precisely because it is not something ChatGPT already does, or can do; it is a whole new capability.
An AI like this is trained on a fixed set of documents. You have to be careful, because any misinformation in those documents, any biases, prejudices, or hate speech, becomes a part of the AI forever. So the builders very, very carefully select which documents to include. But if the AI is allowed to access other sources and learn from them, it can be corrupted, fed malicious or hateful data and opinions, just as happened in the famous incident with Microsoft’s own chatbot, Tay, which they created and released on Twitter. It was supposed to learn from its interactions, hopefully becoming more natural, more human-seeming. Instead, people taught it hate speech, misogyny, anti-Semitism, and so forth.
It was very public, very embarrassing. Nobody involved in AI forgets the lessons learned.
So OpenAI built safeguards into ChatGPT. That’s why it has only a relatively small, short-term memory - not long enough or deep enough for you to corrupt it. That’s why it literally has no connection to the Internet, or to any other live source of data that people might find and spam to corrupt it. It literally isn’t plugged in, except via prompts.
Now, with that as an absolute fact, it does make for very interesting, and very educational, discussions about how it manages to look like it might be reading web pages. Much of that is down to the fact that most writers follow formulas, set patterns. We know what a press release should look like and what needs to be in it. Same deal with a news article. In fact, those are two of the absolute most predictable formats there are.
Yet the minute someone points out that ChatGPT literally has no possible way of looking up a web page (it really doesn’t), someone will almost always try to prove it can by feeding it the kind of URL whose content is easiest to predict.
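To see how much a formulaic URL gives away on its own, here is a minimal sketch (the URL and the simple slug-to-headline rule are my own illustrative assumptions, not anything ChatGPT actually runs) showing that a descriptive news-style URL practically contains its own headline:

```python
from urllib.parse import urlparse

def predicted_topic(url: str) -> str:
    """Guess a page's subject purely from its URL slug - no web access needed."""
    # Take the last path segment (the "slug") of the URL...
    slug = urlparse(url).path.rstrip("/").split("/")[-1]
    # ...and turn its hyphens into spaces to recover a headline-like phrase.
    return slug.replace("-", " ").capitalize()

# A hypothetical, typically-formatted news URL:
url = "https://example.com/2021/06/15/apple-announces-new-macbook-pro"
print(predicted_topic(url))  # -> Apple announces new macbook pro
```

If a few lines of string handling can recover the gist of the page from the URL alone, a model trained on millions of such pages can go much further, filling in a plausible-sounding article in the standard format - which is exactly why this "proof" proves nothing.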