AI Glossary: Every AI Term Explained

A comprehensive dictionary of AI terminology, explained in plain English. Bookmark this page for quick reference.

A

AGI (Artificial General Intelligence)

A hypothetical AI system that could perform any intellectual task that a human can. Unlike current AI, which excels at specific tasks, AGI would have general reasoning abilities. We haven't achieved AGI yet.

Algorithm

A set of rules or instructions that a computer follows to solve a problem or complete a task. Machine learning algorithms learn patterns from data rather than following fixed rules.

Alignment

The challenge of ensuring AI systems behave according to human values and intentions. Alignment research focuses on making AI helpful, harmless, and honest.

B

Bias

When AI systems produce unfair or discriminatory outputs due to biased training data or flawed algorithms. For example, a hiring AI trained on historical data might perpetuate past discrimination.

Black Box

An AI system whose internal workings are not understandable or explainable, even to its creators. Deep neural networks are often described as black boxes because it's hard to explain why they make specific decisions.

C

Chatbot

An AI program designed to have conversations with humans, typically through text. Examples include ChatGPT, Claude, and customer service bots.

Context Window

The amount of text an AI model can consider at once when generating a response. Larger context windows allow AI to reference more of a conversation or document. Measured in tokens.

Computer Vision

AI technology that enables computers to interpret and understand visual information from images and videos. Used in facial recognition, self-driving cars, and medical imaging.

D

Deep Learning

A subset of machine learning using neural networks with many layers. “Deep” refers to the number of layers. Deep learning powers most modern AI breakthroughs in image recognition, language, and more.

Diffusion Model

A type of AI that generates images by learning to reverse a process of adding noise to images. DALL-E, Midjourney, and Stable Diffusion all use diffusion models.

E-F

Embedding

A way of representing words, sentences, or other data as numbers (vectors) that capture meaning. Similar concepts have similar embeddings, enabling AI to understand relationships between things.
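A minimal sketch of that idea, using made-up toy vectors: cosine similarity measures how closely two embeddings point in the same direction, so related concepts score higher than unrelated ones. (Real embeddings have hundreds or thousands of dimensions; these 4-dimensional values are invented for illustration.)

```python
import numpy as np

def cosine_similarity(a, b):
    """Score how closely two embedding vectors point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings (purely illustrative values).
king  = np.array([0.9, 0.8, 0.1, 0.2])
queen = np.array([0.8, 0.9, 0.2, 0.1])
pizza = np.array([0.1, 0.2, 0.9, 0.8])

print(cosine_similarity(king, queen))  # high: related concepts
print(cosine_similarity(king, pizza))  # low: unrelated concepts
```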

Fine-tuning

Taking a pre-trained AI model and training it further on specific data to improve performance for a particular task. Like teaching a generalist to become a specialist.

Foundation Model

A large AI model trained on broad data that can be adapted for many tasks. GPT-4, Claude, and Llama are foundation models. They serve as a starting point for more specialized applications.

G-H

Generative AI

AI systems that create new content—text, images, audio, video, or code—rather than just analyzing existing content. ChatGPT and DALL-E are generative AI.

GPT (Generative Pre-trained Transformer)

OpenAI's series of large language models, which power ChatGPT. The "transformer" architecture they're built on revolutionized AI language understanding.

Hallucination

When AI generates false information presented as fact. The AI isn't lying—it's essentially making confident mistakes. Always verify important AI-generated claims.

I-L

Inference

The process of using a trained AI model to make predictions or generate outputs. Training teaches the model; inference uses what it learned.

LLM (Large Language Model)

AI models trained on massive amounts of text to understand and generate human language. GPT-4, Claude, Gemini, and Llama are all LLMs.

LoRA (Low-Rank Adaptation)

A technique for efficiently fine-tuning large AI models by only training a small number of parameters. Makes customization faster and cheaper.
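A rough sketch of why LoRA is cheap, with toy matrix sizes: instead of updating every value in a large frozen weight matrix W, LoRA trains two small matrices A and B whose product is added on top. The numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy sizes: full dimension d, low rank r (real models use d in the thousands)

W = rng.normal(size=(d, d))         # frozen pre-trained weight matrix
A = rng.normal(size=(r, d)) * 0.01  # small trainable matrix
B = np.zeros((d, r))                # starts at zero, so the adaptation begins as a no-op

trainable = A.size + B.size  # only 2*8 + 8*2 = 32 values to train
frozen = W.size              # versus 8*8 = 64 values left untouched

W_effective = W + B @ A  # the adapted weights actually used at inference
```

At larger scales the savings are dramatic: the trainable fraction shrinks as the rank stays small while the full matrices grow.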

M-N

Machine Learning (ML)

A type of AI where systems learn from data rather than being explicitly programmed. The system finds patterns and improves with experience.

Multimodal

AI that can process multiple types of input—text, images, audio, video—and often generate multiple types of output. GPT-4V and Gemini are multimodal.

Neural Network

An AI architecture inspired by the human brain, consisting of interconnected nodes (neurons) organized in layers. The foundation of modern deep learning.
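The core mechanic can be sketched in a few lines: each layer multiplies its inputs by weights, adds a bias, and passes the result through an activation function. This toy network (random, untrained weights) just shows the forward pass.

```python
import numpy as np

def relu(x):
    """A common activation function: pass positives through, zero out negatives."""
    return np.maximum(0, x)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    hidden = relu(W1 @ x + b1)  # layer 1: weighted sum, then activation
    return W2 @ hidden + b2     # layer 2: weighted sum to a single output

print(forward(np.array([1.0, 0.5, -0.2])))
```

Training would adjust W1, W2, b1, and b2 until the outputs become useful; that adjustment is what "learning" means here.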

NLP (Natural Language Processing)

The field of AI focused on enabling computers to understand, interpret, and generate human language. Chatbots, translation, and text analysis all use NLP.

O-P

Open Source

AI models whose code and weights are publicly available for anyone to use, modify, and distribute. Llama and Stable Diffusion are commonly cited examples, though many such models are more precisely "open weight" than fully open source, since their training data and licenses carry restrictions.

Parameter

The internal values in a neural network that are adjusted during training. More parameters generally mean more capability. GPT-4 reportedly has over 1 trillion parameters.

Prompt

The text input you give to an AI system. Good prompts are clear, specific, and provide necessary context. The art of writing effective prompts is called “prompt engineering.”

R-S

RAG (Retrieval-Augmented Generation)

A technique that gives AI access to external knowledge by retrieving relevant information before generating a response. Reduces hallucinations and enables access to current information.
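A stripped-down sketch of the retrieve-then-generate flow, using word overlap as a stand-in for real retrieval (production RAG systems rank documents by embedding similarity instead). The documents and query here are invented examples.

```python
import re

def words(text):
    """Lowercase a string and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (real RAG uses embeddings)."""
    scored = sorted(documents, key=lambda d: len(words(query) & words(d)), reverse=True)
    return scored[:k]

documents = [
    "The return policy allows refunds within 30 days of purchase.",
    "Our office is open Monday through Friday, 9am to 5pm.",
]

query = "What is the refund policy?"
context = retrieve(query, documents)[0]

# The retrieved passage is prepended to the prompt before it reaches the model:
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

Because the model answers from retrieved text rather than memory alone, it can cite current or private information it was never trained on.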

Reinforcement Learning

A type of machine learning where AI learns through trial and error, receiving rewards for good actions and penalties for bad ones. Used to train game-playing AI and robotics.
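The trial-and-error loop can be sketched with a toy two-action problem: the agent doesn't know which action pays off, so it explores occasionally, tracks the average reward of each action, and gradually exploits the better one. The payoff numbers are invented for illustration.

```python
import random

values = {"left": 0.0, "right": 0.0}   # the agent's estimated reward per action
counts = {"left": 0, "right": 0}
payoffs = {"left": 0.2, "right": 0.8}  # hidden true reward probabilities (unknown to the agent)

random.seed(0)
for _ in range(500):
    # Explore 10% of the time; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < payoffs[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(max(values, key=values.get))  # the agent should come to prefer "right"
```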

RLHF (Reinforcement Learning from Human Feedback)

A training technique where humans rate AI outputs, and the model learns to produce responses that humans prefer. Key to making ChatGPT helpful and safe.

T-Z

Temperature

A setting that controls how creative or random an AI's outputs are. Low temperature = more predictable, focused responses. High temperature = more creative, varied outputs.
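Under the hood, temperature divides the model's raw scores (logits) before they're turned into probabilities. A sketch with made-up scores for three candidate words shows the effect: low temperature concentrates probability on the top choice, high temperature flattens the distribution.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into next-word probabilities, scaled by temperature."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate words

print(softmax_with_temperature(logits, 0.2))  # low temp: probability piles onto the top word
print(softmax_with_temperature(logits, 2.0))  # high temp: probabilities spread out more evenly
```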

Token

The basic unit of text that AI models process. Roughly 4 characters or 0.75 words in English. AI pricing and context limits are often measured in tokens.
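That 4-characters-per-token rule of thumb makes a quick back-of-the-envelope estimator possible. Real tokenizers vary by model and language, so treat this as a rough approximation, not an exact count.

```python
def estimate_tokens(text):
    """Rough estimate using the ~4 characters per token rule of thumb for English."""
    return max(1, len(text) // 4)

sentence = "AI pricing and context limits are often measured in tokens."
print(estimate_tokens(sentence))  # rough estimate; actual tokenizers will differ
```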

Transformer

The neural network architecture behind modern LLMs. Introduced in 2017, transformers use “attention” mechanisms to understand relationships between words regardless of their position.

Zero-shot Learning

When an AI performs a task it wasn't explicitly trained for, based on its general knowledge. Modern LLMs can often handle new tasks without specific examples.


Missing a term? Let us know and we'll add it to the glossary. This page is updated regularly as new AI terminology emerges.
