OpenAI, the company that built, trained, and operates ChatGPT and all of the GPT large language models, built them in English. That is to say, their staff speak English, read English, select training documents written in English, and measure the performance of the results by testing in English.
They specifically stated, on release, that it is best used with English, as that is the only language it is tested and measured in. It gives poorer results in any other language, having much less data to draw from.
Now, that is not to say that the training corpus, the huge mass of documents it was trained on, didn't include language courses aimed at teaching English speakers other languages. It almost certainly also included some forums and sites like Reddit that contain posts and comments in other languages. But compared to the English-language material it was fed, those are mere table scraps.
That's a limitation not with AIPRM, nor just with ChatGPT, but with the underlying Large Language Models themselves, whose whole framework is built mostly to handle English structures of language and grammar, is quality-tested in English, and so on.
The Language Model data is so large that even its lesser abilities in French, German, Spanish, etc. are still better than anything there has been before, but it is always, always worse than using English in prompts. If you really want to watch it struggle and almost burn out, give it prompts and instructions in a language that reads from right to left, such as Arabic, when it was built for left-to-right text.
You need to understand this. You need to be aware of it.
You will NOT get an Italian-speaking AI that is as good as it is in English until a company with a significant base in Italy, native Italian staff, native Italian familiarity, and expert knowledge of Italian literature, science, politics, etc. is involved both in the selection of the training data and in every step of quality control along the way.