It’s a nice idea.
The problem is that GPT, the underlying AI beneath ChatGPT (also available as a product in its own right), was trained on a huge corpus of data, the most recent of which dates from 2021. Obviously, ChatGPT didn't exist at that time, so none of that data contained ANY knowledge of how to prompt something that hadn't been invented yet. As a result, GPT knows next to nothing about how to prompt ChatGPT.
The researchers did, I believe, inject some basic tips and hard-coded knowledge about ChatGPT, enough for it to recognize when people are referring to it, and so forth, but its self-knowledge is extremely limited and unreliable.
I’d very much recommend taking the time to learn at least the basics of how current LLM-based AI works, what it can and cannot do, and so on. You can make a start on that with A Crash Course in LLM-based AI.