There are categories for prompts, e.g., copywriting, SaaS, software engineering. I recognize the words in many of these categories as English, but I am entirely unclear about what output one would expect in most of them. Normally this would not matter, because I know I am not a software engineer and never will be. But maybe others are like me: an enthusiast who sees the possibility of making my own.
I don’t expect an explanation of each category. Rather, it would help to know how the categories matter. Are they meant for humans or for ChatGPT? That is, does selecting a category load some set of preferences that clues ChatGPT into the likely intent of the user? Or is it just a way for, say, engineers to find others who may have posted prompts about engineering? So far, I have prompted in several categories and have not noticed a difference between writing my own prompts and using the few that already exist there.
Writing this, I realized: "Of course, it is the history of the prompt that matters. It is machine learning, after all. There is no set of preferences that loads; that is old school. The title of a prompt is set by ChatGPT, so it must know." Yes?
The next question: are the ratings based only on user like/dislike ratios? ChatGPT itself, or a third party, does not rate the prompts?