Folks: please don't rely on ChatGPT, either for answers or as a source of knowledge - it is way too unreliable for that, and way too often it simply makes up answers that look valid but are actually complete bulls&$#.
See the article by Wolfram for an insight:
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

In short, LLMs (Large Language Models) are neural networks designed and trained to GENERATE natural-looking text. They were never designed to give correct answers. The fact that they most often do is interesting and surprising, but it doesn't make them worth using for learning, nor as a source of knowledge or information. There are thousands of examples on the net; actually, the first well-known case where ChatGPT was found hallucinating (now a commonly and more or less officially used term) involved a list of reference papers it produced when asked about some specific research - half of them didn't exist. The list was properly formatted, referred to existing journals, and looked in every possible way deceivingly OK, but it was simply made up.
So don't be deceived by GPT. It is an impressive thing, and at some iteration it will definitely become a reliable tool, but it is not there yet.
Interesting tidbit: what we write here is part of the C4 dataset used to train LLMs (see the attached image). Not that we have a large influence, but still. Data taken from
https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

Sooner or later I am going to add a note about not using ChatGPT on the forums to the rules (at other sites it is often banned, as it creates more problems by misguiding students and lay people than it solves).