The practically.dev Universe

The dictionary of AI & product terms you've always wanted. 50 terms and counting.

A

AI Agent

ai

An AI system that doesn't just answer questions — it actually goes and does stuff. Like, you tell it "book me a flight to Tokyo" and it searches airlines, picks options, and books one. Agents are the next frontier of AI products, and honestly, they're still kind of finicky.

A/B Testing

product

Showing two different versions of something to two different groups of users and seeing which one performs better. Version A vs Version B — hence the name. It's how product teams make decisions with data instead of opinions (in theory).
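Comparing the two versions is simple arithmetic; deciding whether the difference is real is statistics. Here's a back-of-the-napkin sketch using a two-proportion z-test (the traffic and conversion numbers are made up):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is B's conversion rate really better than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: A converts 200/5000 (4%), B converts 250/5000 (5%)
z, p = two_proportion_z(200, 5000, 250, 5000)
print(f"z={z:.2f}, p={p:.3f}")  # conventionally "significant" if p < 0.05
```

That 1-percentage-point lift turns out to be statistically significant here, but only because each arm has 5,000 users. Same lift, 500 users per arm? Not significant. Sample size matters.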

API

general

Application Programming Interfaces are like drive-thru windows, but in code: you give them specific inputs and they give you predictable outputs. Most AI capabilities are accessed through APIs — you send text to OpenAI's API, it sends back a response.

Activation Rate

product

The percentage of new users who hit that magical "aha moment" where they actually get what your product does. If 100 people sign up and 40 complete your onboarding flow, that's a 40% activation rate. Low activation? Your product might be confusing.

B

Bias (in AI)

ai

When an AI produces unfair or skewed results because of problems in its training data or design. Train a hiring model on historical data from a company that mostly hired men, and it'll learn to prefer men. Garbage in, garbage out — but with real consequences.

Batch Processing

data

Processing a big pile of data all at once on a schedule — like running a nightly job to generate recommendations for all your users. Cheaper and simpler than real-time, but your data is always at least a little stale.

C

Context Window

ai

The maximum amount of text an AI model can look at and consider at once, measured in tokens. Think of it as the model's working memory. GPT-4 has a 128K token context window. Claude has 200K. Bigger = can handle longer documents, but also = more expensive.

Churn Rate

product

The percentage of customers who stop using your product over a given period. It's the silent killer of SaaS businesses. Reducing churn by even a couple percentage points is often worth more than acquiring a bunch of new users.
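The math is trivial, but churn compounds: small monthly losses turn into big annual ones. A quick sketch with made-up numbers:

```python
def churn_rate(start, lost):
    """Fraction of starting customers who left during the period."""
    return lost / start

def remaining_after(months, monthly_churn):
    """Fraction of customers left after compounding monthly churn."""
    return (1 - monthly_churn) ** months

monthly = churn_rate(1000, 30)  # lose 30 of 1,000 customers = 3% monthly churn
print(f"monthly churn: {monthly:.1%}")
print(f"at 3% churn, after a year: {remaining_after(12, 0.03):.1%} remain")
print(f"at 5% churn, after a year: {remaining_after(12, 0.05):.1%} remain")
```

Two extra points of monthly churn is the difference between keeping ~69% of your customers after a year and keeping ~54%. That's why "reducing churn by a couple points" is such a big deal.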

Cohort Analysis

product

Grouping users by something they have in common (usually sign-up date) and tracking how they behave over time. "January signups have 40% retention at day 30" — that kind of thing. One of the most useful tools in a PM's analytics toolkit.

D

DAU/MAU

product

Daily Active Users divided by Monthly Active Users. This ratio tells you how "sticky" your product is — how often people come back. Social media apps aim for 50%+. Most B2B products are happy with 20-30%. If yours is at 5%, people are signing up and forgetting you exist.
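Computing it from an activity log is straightforward — count unique users per day, average that, and divide by unique users for the month. A toy sketch (the log format and users are invented; real logs live in your analytics warehouse):

```python
from datetime import date

events = [  # (user_id, date) pairs — a toy activity log for one month
    ("alice", date(2024, 1, 1)), ("bob", date(2024, 1, 1)),
    ("alice", date(2024, 1, 2)), ("carol", date(2024, 1, 15)),
]

mau = len({u for u, _ in events})  # everyone active at all this month

daily = {}
for u, d in events:
    daily.setdefault(d, set()).add(u)  # unique users per active day
avg_dau = sum(len(users) for users in daily.values()) / len(daily)

print(f"DAU/MAU = {avg_dau / mau:.0%}")
```

One subtlety: "average DAU" here only averages over days with activity. In production you'd average over all calendar days in the month, which pushes the ratio down.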

Data Pipeline

data

An automated series of steps that moves data from point A to point B, cleaning and transforming it along the way. Like a factory assembly line, but for data. If your pipeline breaks, everything downstream breaks too — which is why data engineers are always stressed.

Data Warehouse

data

A special type of database designed for analytics instead of running your app. It's where companies dump all their data from different sources so analysts can run queries and dashboards. Snowflake, BigQuery, and Redshift are the big players.

E

Embedding

ai

A way to turn text (or images, or whatever) into a list of numbers that captures its meaning. "Dog" and "puppy" would have similar embeddings, while "dog" and "refrigerator" would be far apart. This is how vector databases and semantic search actually work under the hood.
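"Similar" here usually means cosine similarity — how closely two vectors point in the same direction. A sketch with tiny made-up 4-dimensional embeddings (real ones have hundreds or thousands of dimensions):

```python
from math import sqrt

def cosine_similarity(a, b):
    """1.0 = same direction (similar meaning), near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings — the numbers are invented for illustration
dog          = [0.8, 0.6, 0.1, 0.0]
puppy        = [0.7, 0.7, 0.2, 0.1]
refrigerator = [0.0, 0.1, 0.9, 0.8]

print(cosine_similarity(dog, puppy))         # high — similar meaning
print(cosine_similarity(dog, refrigerator))  # low — unrelated
```

Semantic search is basically this at scale: embed the query, embed the documents, return the ones with the highest cosine similarity.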

ETL

data

Extract, Transform, Load — the classic three-step process of getting data from where it lives (extract), making it useful (transform), and putting it somewhere you can analyze it (load). It's been around forever and it's still the backbone of most data work.
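The three steps map directly to code. A minimal sketch — the CSV source and table schema are invented, and the in-memory SQLite stands in for a real warehouse:

```python
import csv
import io
import sqlite3

# Extract: read raw data from the source (an in-memory CSV string here)
raw = "user_id,signup_date,plan\n1,2024-01-05,pro\n2,2024-01-06,free\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: normalize types and filter — keep only paying users
paying = [(int(r["user_id"]), r["signup_date"]) for r in rows if r["plan"] == "pro"]

# Load: write into an analytics table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE paying_users (user_id INTEGER, signup_date TEXT)")
db.executemany("INSERT INTO paying_users VALUES (?, ?)", paying)
count = db.execute("SELECT COUNT(*) FROM paying_users").fetchone()[0]
print(count)
```

A production pipeline adds scheduling, retries, and monitoring around exactly this shape — extract from many sources, transform with more care, load into Snowflake or BigQuery instead of SQLite.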

Explainability

ai

The ability to understand and explain how an AI model makes its decisions. Some models are very explainable. LLMs? Not so much. This is becoming a big deal as regulators start asking companies "why did your AI do that?"

F

Fine-Tuning

ai

Taking an existing AI model and training it further on your own data to make it better at specific tasks. Like hiring a generalist and then teaching them your company's domain. It's expensive and time-consuming, so most teams start with prompt engineering instead.

Feature Engineering

data

The process of taking raw data and transforming it into useful inputs for a machine learning model. It's one of those things that sounds boring but is actually where a lot of the magic happens — the right features can make a mediocre model great.

Funnel Analysis

product

Tracking the steps users take toward a goal and seeing where they drop off. 1000 visited pricing → 500 clicked Start Trial → 200 entered email → 50 started a trial. Each step is a chance to ask "why are we losing people here?"
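The numbers above are just step-over-step division. A sketch of that funnel:

```python
# The funnel from the example above: (step name, users who reached it)
funnel = [
    ("visited pricing", 1000),
    ("clicked Start Trial", 500),
    ("entered email", 200),
    ("started trial", 50),
]

rates = []
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    rate = n / prev_n
    rates.append(rate)
    print(f"{prev_step} -> {step}: {rate:.0%} converted")
```

The biggest drop-off (email → trial at 25% here) is usually where you look first.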

Few-Shot Learning

ai

Giving an AI a few examples in your prompt to show it what you want. "Here are 3 examples of good product briefs. Now write one for this feature." It's one of the easiest ways to get better results from LLMs without any actual training.
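In practice this is just string assembly — the examples are the instruction. A sketch of a few-shot prompt for classifying feedback (the labels and messages are invented for illustration):

```python
# Few-shot examples: (message, label) pairs the model should imitate
examples = [
    ("Users can't find the export button", "bug-report"),
    ("Please add dark mode", "feature-request"),
    ("How do I reset my password?", "question"),
]

def few_shot_prompt(new_message):
    """Build a prompt where the examples show the model the expected format."""
    shots = "\n\n".join(f"Message: {m}\nLabel: {l}" for m, l in examples)
    return f"{shots}\n\nMessage: {new_message}\nLabel:"

print(few_shot_prompt("The app crashes when I upload a photo"))
```

The trailing "Label:" is the trick — the model's most natural continuation is exactly the classification you want.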

G

Guardrails

ai

Safety mechanisms built around AI systems to keep them from going off the rails (pun intended). Content filters, output validation, topic restrictions. If you're shipping an AI product without guardrails, you're going to have a very bad day eventually.

H

Hallucination

ai

When an AI confidently makes something up. It'll sound perfectly reasonable and authoritative, but the information is completely fabricated. This is one of the biggest challenges in shipping AI products — your model will occasionally just... lie.

Human-in-the-Loop

ai

A system where humans review or approve AI decisions before they go live. Essential for anything high-stakes — you probably don't want a model autonomously approving bank loans or writing legal contracts without someone checking its work.

I

Inference

ai

Running a trained model to get predictions or outputs. Training is the learning phase; inference is the using phase. When you send a message to ChatGPT and it responds, that's inference. It's also where most of the ongoing cost comes from.

L

LLM

ai

Large Language Models take in a prompt and generate text. They're trained on massive piles of internet text, and they power things like ChatGPT and Claude. Think of them as very sophisticated autocomplete.

Latency

general

How long it takes to get a response after you ask for one. In AI products, this is a big deal — users don't want to sit around for 10 seconds waiting for a chatbot to respond. Lower latency = happier users, but usually = more expensive infrastructure.

M

MLOps

ai

Machine Learning Operations — basically DevOps but for ML models. It's the set of practices for deploying, monitoring, and maintaining models in production. Because training a model is only like 20% of the work; keeping it running well is the other 80%.

Multimodal AI

ai

AI that can work with multiple types of data — text, images, audio, video — in a single model. GPT-4 can look at pictures. Gemini can process video. The trend is clear: models that can see, hear, and read are going to be the standard.

Model Drift

ai

When your AI model gets worse over time because the real world changes but your model doesn't. A model trained on 2023 user behavior might not work great in 2026 because people change. This is why monitoring and retraining matter.

N

North Star Metric

product

The single metric that best captures the core value your product delivers. For Spotify, it's time spent listening. For Airbnb, it's nights booked. Finding the right one is important — it's supposed to align every team around what actually matters.

O

OKRs

product

Objectives and Key Results — a goal-setting framework where you pick a big ambitious Objective and measurable Key Results. Originally from Intel, popularized by Google, now used by basically every startup. Sometimes they work great. Sometimes they're just busywork.

P

Precision

data

When your model makes a prediction, precision measures how often it's actually right. If your spam filter flags 100 emails and 90 are actually spam, that's 90% precision. The other 10 were probably important emails from your boss.
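The spam-filter example above, as a one-liner over sets of email IDs:

```python
def precision(flagged, actually_spam):
    """Of everything we flagged, what fraction was right?"""
    true_positives = len(flagged & actually_spam)
    return true_positives / len(flagged)

flagged = set(range(100))  # the 100 emails the filter flagged
spam    = set(range(90))   # 90 of them really were spam
print(f"{precision(flagged, spam):.0%}")  # 90%
```

Precision punishes false alarms. A filter that flags everything catches all the spam but has terrible precision — which is where recall (below) comes in as the other half of the picture.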

Prompt Engineering

ai

The art (yes, art) of writing instructions to AI models to get useful outputs. Turns out, how you ask an LLM to do something matters a lot. A well-crafted prompt can be the difference between a useless answer and a brilliant one.

Product-Market Fit

product

The magical moment when your product satisfies a real market demand and people actually want to use it (and ideally pay for it). For AI products, this means the AI is genuinely solving a problem better than alternatives — not just being AI for AI's sake.

Product Analytics

product

The practice of tracking what users actually do in your product (not what they say they do) and using that data to make it better. Amplitude, Mixpanel, and PostHog are the big tools. If you're making product decisions without analytics, you're guessing.

R

RAG

ai

Retrieval-Augmented Generation is a fancy way of saying "look stuff up before answering." Instead of relying purely on what it learned during training, the AI pulls relevant info from a database first, then generates a response. Makes answers way more accurate.
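The loop is retrieve → stuff into prompt → generate. A minimal sketch, with a toy keyword retriever standing in for a vector database and the final LLM call left as a stub (documents and function names are invented):

```python
docs = [
    "Refunds are processed within 5 business days.",
    "Pro plan includes unlimited seats.",
    "Support hours are 9am-5pm EST.",
]

def retrieve(query, k=1):
    """Score docs by word overlap with the query (a real system uses embeddings)."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in a real system, this prompt goes to the LLM

print(answer("How long do refunds take?"))
```

The "only this context" instruction is doing real work: it tells the model to answer from the retrieved facts rather than its training data, which is what cuts down hallucination.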

Recall

data

Recall measures how much of what your model should have found it actually found. If there were 100 spam emails in your inbox and your filter only caught 70, that's 70% recall. The other 30 are sitting there pretending to be real.
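Same set arithmetic as precision, flipped around — you divide by what was actually out there, not by what you flagged:

```python
def recall(caught, all_spam):
    """Of everything we should have caught, what fraction did we catch?"""
    return len(caught & all_spam) / len(all_spam)

all_spam = set(range(100))  # 100 spam emails actually in the inbox
caught   = set(range(70))   # the filter caught 70 of them
print(f"{recall(caught, all_spam):.0%}")  # 70%
```

Precision and recall trade off against each other: flag more aggressively and recall goes up while precision goes down. Tuning that trade-off is most of the job.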

Reinforcement Learning

ai

A type of ML where the model learns by trial and error — taking actions and getting rewards or penalties. It's how AlphaGo learned to beat the world's best Go player, and it's a key ingredient in how ChatGPT learned to not be weird.

RLHF

ai

Reinforcement Learning from Human Feedback — the secret sauce that turned LLMs from rambling text generators into helpful assistants. Humans rate the model's outputs as good or bad, and the model learns to produce more of the good ones. It's why ChatGPT sounds helpful and not unhinged.

Real-Time Processing

data

Processing data the moment it arrives, instead of waiting for a batch. Essential for things like fraud detection (you want to catch the fraud now, not tomorrow morning) and live chatbots. More complex and expensive, but sometimes you need it.

Retention

product

The percentage of users who stick around over time. Arguably the most important metric for any product — if people aren't coming back, nothing else matters. High retention = you built something people need. Low retention = you built a leaky bucket.

S

Supervised Learning

ai

A type of machine learning where you show the model a bunch of examples with correct answers ("this email is spam", "this email is not spam") and it learns patterns. Like teaching a kid with flash cards — except the kid has billions of flash cards.

Synthetic Data

data

Fake data that looks and acts like real data. Companies generate it when real data is too sensitive, too scarce, or too expensive to collect. Turns out, good fake data is almost as good as real data for training models.

T

Transformer

ai

The neural network architecture behind basically every modern AI model. GPT, Claude, Gemini — all transformers. The key innovation is something called "attention" which lets the model understand relationships between words. It's from a 2017 paper literally titled "Attention Is All You Need."

Token

ai

A chunk of text that AI models process. A token can be a word, part of a word, or even a single character. When people talk about API pricing or context limits, they're usually talking in tokens. Roughly speaking, 1 token ≈ ¾ of a word.
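That ¾-of-a-word rule of thumb is handy for estimating API costs before you have a real tokenizer wired up. A sketch using the equivalent ~4-characters-per-token heuristic (real tokenizers like OpenAI's tiktoken give exact counts; this is only a ballpark):

```python
def estimate_tokens(text):
    """Rough English-text heuristic: ~4 characters per token.

    This is an approximation for quick cost estimates, not an exact count —
    actual tokenization depends on the model's vocabulary.
    """
    return max(1, round(len(text) / 4))

prompt = "Summarize this meeting transcript in three bullet points."
print(estimate_tokens(prompt))
```

Multiply the estimate by your per-token price and you have a cost ceiling before writing a single API call.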

Technical Debt

general

All the shortcuts and quick fixes in your codebase that someone will have to clean up someday. Like actual debt — taking it on lets you move faster now, but you'll pay interest later in bugs, slow development, and engineers who really want to do a rewrite.

U

User Segmentation

product

Dividing your users into groups based on what they do, who they are, or what they need. Power users vs casual users, enterprise vs SMB, that kind of thing. Helps you build features for actual humans instead of some imaginary average user.

Unsupervised Learning

ai

Machine learning without the answer key. You give the model a pile of data and say "find patterns." Good at grouping similar customers together or spotting anomalies, but you can't really tell it what to look for.

V

Vector Database

data

A place where developers store specially formatted data to use for AI and search. Instead of matching exact keywords like a normal database, vector databases find things by meaning — so searching for "happy" would also surface "joyful" and "excited."

Z

Zero-Shot Learning

ai

When an AI model can do a task it was never specifically trained to do. You ask GPT-4 to translate Swahili even though nobody explicitly trained it on Swahili translation — and it just... does it. One of the wild emergent abilities of large models.
