OpenAI’s incredibly capable, albeit flawed, GPT-3 was arguably the first to demonstrate that artificial intelligence can write convincingly, even if not quite like a human.
GPT-3’s successor, most likely to be called GPT-4, is expected to arrive in the coming years, possibly as early as 2023.
ChatGPT, a fine-tuned version of GPT-3.5, was released to the public on Wednesday. ChatGPT is essentially a chatbot that can be put to a wide variety of uses.
ChatGPT, which made its debut in a public demo yesterday afternoon, can engage with a wide range of subject matter, including scientific concepts, television scripts, and programming.
OpenAI says that GPT-3.5 was trained on a combination of text and code published before the fourth quarter of 2021. Like GPT-3 and other text-generating artificial intelligence, GPT-3.5 learned the relationships between sentences, words, and parts of words by ingesting enormous amounts of content from the internet.
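Those “words and parts of words” are what language-model practitioners call tokens. A toy sketch of the idea, using a tiny hypothetical vocabulary and a greedy longest-match rule (this is an illustration only, not OpenAI’s actual tokenizer):

```python
# Hypothetical mini-vocabulary of whole words and word fragments.
VOCAB = {"un", "believ", "able", "token", "iz", "ation", " "}

def tokenize(text, vocab=VOCAB):
    """Greedily split `text` into the longest matching vocabulary pieces."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate substring first, shrinking until a match.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary piece matches: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unbelievable tokenization"))
# → ['un', 'believ', 'able', ' ', 'token', 'iz', 'ation']
```

Real tokenizers such as the byte-pair encodings used by GPT-style models learn their vocabularies from data rather than using a hand-picked set, but the splitting of rare words into familiar fragments works on the same principle.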
Instead of releasing the fully trained GPT-3.5, OpenAI split it into several systems optimized for different applications, all of which can be accessed via the OpenAI API.
The lab claims that text-davinci-003 excels at both long-form and “high-quality” writing, and can handle more complex instructions than models built on GPT-3.
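For readers curious what calling text-davinci-003 looks like in practice, here is a minimal sketch of a request to OpenAI’s completions endpoint. The endpoint and field names follow OpenAI’s published completions API; the prompt, parameter values, and the API-key placeholder are illustrative, and the request is only constructed here, not sent:

```python
import json
import urllib.request

# Request body for the completions endpoint; the prompt and sampling
# parameters below are arbitrary examples.
payload = {
    "model": "text-davinci-003",
    "prompt": "Explain what a language model is in one sentence.",
    "max_tokens": 100,
    "temperature": 0.7,
}

# Build (but do not send) the HTTP request. "YOUR_API_KEY" is a
# placeholder, not a real credential.
request = urllib.request.Request(
    "https://api.openai.com/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)

print(request.get_method(), request.get_full_url())
# → POST https://api.openai.com/v1/completions
```

Sending the request with `urllib.request.urlopen(request)` would return a JSON response whose generated text sits under the `choices` field.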
Jan Leike, a data scientist at OpenAI, says that text-davinci-003 is related to, but distinct from, InstructGPT, a family of GPT-3-based models introduced by OpenAI earlier this year that better match a user’s intent while generating text that is less likely to cause problems.
According to a tweet by Leike, text-davinci-003, and by extension GPT-3.5, “scores higher on human preference ratings” despite having “less severe constraints.”