8 years of cost reduction in 5 weeks: how Stanford's Alpaca model changes everything, including the economics of OpenAI and GPT-4.
The breakthrough, built on self-instruct, has big implications for Apple's secret large language model, Baidu's ErnieBot, Amazon's own efforts, and even governmental projects like the newly announced BritGPT.
I will go through how Stanford put the model together and why it costs so little, and demonstrate it in action against ChatGPT and GPT-4.
And what are the implications of short-circuiting human annotation like this?