OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art

OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, that the company calls “the latest milestone in its effort in scaling up deep learning.”

GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.
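
For developers who come off the waitlist, a call looks roughly like the sketch below. This is a minimal sketch assuming the openai Python package (the pre-1.0 interface current at launch) and an API key in the OPENAI_API_KEY environment variable; the prompt itself is just an illustration.

```python
# Minimal sketch of a GPT-4 chat completion request, assuming the
# openai Python package (pre-1.0 interface) and an API key set in
# the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GPT-4 adds over GPT-3.5 in one sentence."},
    ],
    max_tokens=100,  # upper bound on billed completion tokens
)

print(response["choices"][0]["message"]["content"])
print(response["usage"])  # prompt_tokens, completion_tokens, total_tokens
```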

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens (again, about 750 words).
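
At those rates, the cost of a call falls straight out of the usage numbers the API reports back. A back-of-the-envelope sketch, with token counts invented purely for illustration:

```python
# Back-of-the-envelope GPT-4 cost estimate at the launch pricing of
# $0.03 per 1,000 prompt tokens and $0.06 per 1,000 completion tokens.
# The token counts below are made up for illustration.
PROMPT_PRICE = 0.03 / 1000      # dollars per prompt token
COMPLETION_PRICE = 0.06 / 1000  # dollars per completion token

prompt_tokens = 1500      # roughly 1,100 words of input
completion_tokens = 500   # roughly 375 words of output

cost = prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE
print(f"${cost:.4f}")  # $0.0750
```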

Tokens represent raw text; for example, the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.” Prompt tokens are the parts of words fed into GPT-4, while completion tokens are the content GPT-4 generates.
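
You can inspect an actual split with OpenAI's open source tiktoken library; the exact pieces depend on the encoding, so the “fan/tas/tic” split above is illustrative rather than exactly what GPT-4's tokenizer produces.

```python
# Inspect how GPT-4's tokenizer splits text, using OpenAI's tiktoken
# library (pip install tiktoken). Exact splits depend on the encoding.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
tokens = enc.encode("fantastic")
print(tokens)                             # the token IDs
print([enc.decode([t]) for t in tokens])  # the text piece behind each ID
```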

GPT-4 has been hiding in plain sight, as it turns out. Microsoft confirmed today that Bing Chat, its chatbot tech co-developed with OpenAI, is running on GPT-4.

Other early adopters include Stripe, which is using GPT-4 to scan business websites and deliver a summary to customer support staff. Duolingo built GPT-4 into a new language learning subscription tier.

Morgan Stanley is creating a GPT-4-powered system that’ll retrieve info from company documents and serve it up to financial analysts. And Khan Academy is leveraging GPT-4 to build some sort of automated tutor.

GPT-4 can generate text and accept image and text inputs — an improvement over GPT-3.5, its predecessor, which only accepted text — and performs at “human level” on various professional and academic benchmarks. For example, GPT-4 passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.

