The Verge explains all those AI terms

Don't get your LLMs in a bunch; The Verge has put together a handy primer on all the acronyms and terms you need to know if you hope to survive Skynet.

This list covers the basic things you need to understand about AI, from language models to hallucinations. When I talk with a close friend who works as an AI developer about her work, I notice there's something I miss or have to ask about every sentence or two. I wish I'd had this list a few months ago!

Diffusion models: AI models that can be used for things like generating images from text prompts. They are trained by first adding noise (such as static) to an image and then learning to reverse that process, so the model can produce a clear image starting from noise. There are also diffusion models that work with audio and video.
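If a concrete picture helps, here's a minimal sketch of the "add noise, then learn to undo it" idea. It's my own toy illustration in Python with NumPy, not anything from The Verge's piece, and it only shows the forward noising step that a real diffusion model would be trained to reverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise_level):
    """Forward step: blend the image with Gaussian noise ("static").

    noise_level runs from 0 (clean image) to 1 (pure noise).
    """
    noise = rng.standard_normal(image.shape)
    return np.sqrt(1 - noise_level) * image + np.sqrt(noise_level) * noise

# A stand-in "image": an 8x8 grid of pixel values.
clean = rng.random((8, 8))

# Progressively noisier versions of the same image. A diffusion model
# is trained on pairs like these, learning to predict the noise that
# was added so it can run the process in reverse and recover (or
# generate) a clean image starting from pure static.
for level in (0.1, 0.5, 0.9):
    noisy = add_noise(clean, level)
    print(f"noise level {level}: mean abs difference from clean = "
          f"{np.abs(noisy - clean).mean():.3f}")
```

The real training step, predicting and subtracting that noise with a neural network, is where all the hard work happens, but the basic loop is just this noising process run backward.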

Foundation models: These generative AI models are trained on a huge amount of data and, as a result, can be the foundation for a wide variety of applications without specific training for those tasks. (The term was coined by Stanford researchers in 2021.) OpenAI's GPT, Google's Gemini, Meta's Llama, and Anthropic's Claude are all examples of foundation models. Many companies are also marketing their AI models as multimodal, meaning they can process multiple types of data, such as text, images, and video.

Frontier models: In addition to foundation models, AI companies are working on what they call "frontier models," which is basically just a marketing term for their unreleased future models. Theoretically, these models could be far more powerful than the AI models that are available today, though there are also concerns that they could pose significant risks.

The Verge