14 must-know generative AI terms
Improve your generative AI vocabulary with these 14 must-know terms.
Generative AI: machine learning (ML) techniques that allow computer models to create new, realistic content such as images, text, audio, and video
Large Models: the engine behind the power of generative AI, composed of neural networks with billions of parameters
Foundation Model: reusable, modularly trained ML models that serve as generalized starting points for developing generative AI applications
Large Language Model: natural language processing (NLP) system of deep neural networks with billions of parameters that are trained on massive text datasets
Multimodal: combining multiple modes or types of data such as text, images, speech, and video to build integrated AI models and applications
Fine-Tuning: the process of taking a pre-trained model and adapting it to a downstream task by updating the weights through additional training on a smaller, task-specific dataset
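A minimal sketch of that idea, using a toy one-parameter model instead of a neural network: a "pretrained" weight is updated with a few gradient steps on a small task-specific dataset. All numbers are made up for illustration.

```python
pretrained_w = 1.0  # stand-in for a weight learned on a large generic dataset

# Small task-specific dataset: inputs x with targets y = 3 * x
task_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

def fine_tune(w, data, lr=0.01, epochs=200):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

tuned_w = fine_tune(pretrained_w, task_data)  # moves toward the task optimum
```

Real fine-tuning updates billions of weights the same way, just with backpropagation through a deep network.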
Parameter efficient fine-tuning (PEFT): adapts a pre-trained model to new tasks using limited task data and without substantially changing the original model's parameters
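One popular PEFT technique is LoRA (low-rank adaptation), which can be sketched with tiny hand-written matrices: the pretrained weight W stays frozen, and only a low-rank product B @ A with far fewer values is trained and added on top. Shapes and numbers here are illustrative only.

```python
def matmul(A, B):
    """Multiply two matrices represented as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add(A, B):
    """Elementwise sum of two same-shape matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (2x2, 4 values)
B = [[0.5], [0.0]]             # trainable low-rank factor (2x1)
A = [[0.0, 1.0]]               # trainable low-rank factor (1x2)

# Effective weight used at inference: W + B @ A
# (only 4 trainable values here; the gap widens hugely at real scale)
effective_W = add(W, matmul(B, A))
```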
Prompt Engineering: process of constructing effective natural language prompts to provide as inputs to LLMs to guide them towards the desired task or response
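A small sketch of prompt construction: a template combining a task instruction, a one-shot example, and the user's input. The wording and task are illustrative, not a prescribed format.

```python
def build_prompt(user_text):
    """Assemble an instruction, a worked example, and the new input."""
    return (
        "Classify the sentiment of the review as positive or negative.\n\n"
        "Review: The food was wonderful.\n"
        "Sentiment: positive\n\n"
        f"Review: {user_text}\n"
        "Sentiment:"
    )

prompt = build_prompt("The service was slow and rude.")
```

The example and the trailing "Sentiment:" cue steer the model toward a one-word label rather than free-form text.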
RLHF (Reinforcement Learning from Human Feedback): trains AI by letting humans positively or negatively reinforce its behavior in an interactive feedback loop
Embedding: vector (an array of numbers) representation of an entity like a word, image, or video that encodes key information
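A sketch of how embeddings are compared: cosine similarity between vectors, where semantically related entities score higher. These 3-dimensional vectors are made up; real embeddings often have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: "cat" and "kitten" point in similar directions
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.95]

cat_kitten = cosine_similarity(cat, kitten)  # high
cat_car = cosine_similarity(cat, car)        # low
```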
Vector Database: repositories of vector representations of entities like words, images, documents, and videos that enable similarity searches and other vector-based operations
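A toy in-memory "vector database" to make the idea concrete: store (id, vector) pairs and answer queries by brute-force cosine search. Production vector databases instead use approximate-nearest-neighbor indexes to scale to millions of vectors; the document names and vectors below are invented.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# id -> embedding vector (hypothetical documents)
store = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_cars": [0.0, 0.2, 0.95],
}

def search(query_vec, k=1):
    """Return the ids of the k vectors most similar to the query."""
    ranked = sorted(store, key=lambda d: cosine(store[d], query_vec),
                    reverse=True)
    return ranked[:k]
```

A query embedded near "doc_cats" would retrieve that document first, which is the core operation behind retrieval-augmented generation.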
Agent: a system that can reason, observe, plan, and act, accelerating the delivery of generative AI applications
Token: discrete atomic unit of data like a word, image patch, or audio snippet used as input to or output from a foundation model — roughly 750 English words is around 1,000 tokens
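A rough word-count heuristic, assuming the common rule of thumb that about 75 English words correspond to roughly 100 tokens; real tokenizers (byte-pair encoding and similar) vary with the text, so this is only an estimate.

```python
def estimate_tokens(text):
    """Estimate token count as ~100 tokens per 75 English words."""
    words = len(text.split())
    return round(words * 100 / 75)

# A 75-word string estimates to about 100 tokens
sample = "word " * 75
estimate = estimate_tokens(sample)
```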
Context length: the maximum number of tokens provided as context to a model when making predictions or generations — typically a 4K context holds a few pages of text, while a 100K context holds a novel.
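A sketch of one common way applications deal with that limit: keep only the most recent tokens that fit the model's context window. The token ids below are placeholders.

```python
def truncate_to_context(tokens, context_length):
    """Keep only the last `context_length` tokens (drop the oldest)."""
    return tokens[-context_length:]

history = list(range(5000))                    # stand-in for 5,000 token ids
window = truncate_to_context(history, 4096)    # fits a 4K-token model
```

Other strategies, such as summarizing the dropped prefix, trade extra model calls for retaining older information.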