
Week 1: Foundations and Economics · Lesson 1.5

The cost-latency-quality frontier

How do we reason about quality improvements that change cost and latency?

Retired course. Due to the fast pace of AI, this course was retired before full release. Exercises, datasets, and videos referenced in this lesson are not available. The slide content and frameworks remain free to study.


Reader Notes

Lesson 1.5 closes Week 1. Over the last four lessons you quantified non-determinism, mapped the evaluation surface, built a failure taxonomy from traces, and measured how consistently the system performs using the reliability metrics from the previous lesson. You now have a clear picture of what the AI system does and how consistently it does it. Today's problem is different. The product team hands you three constraints (a latency SLA, a monthly budget, and a quality floor) and says "pick a model." The discovery: no model satisfies all three. This is not an evaluation problem. It is a decision-making-under-constraints problem, and it is what product teams face before they even start building an evaluation system. By the end of this lesson, the deliverables are a benchmark table comparing model configurations, a framework for distinguishing dominated positions from true tradeoffs, and a Model Selection Decision Template with explicit reasoning. That template feeds directly into Week 4's release criteria.
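The dominated-versus-true-tradeoff distinction can be sketched in code. A configuration is dominated when some other configuration is at least as good on every axis (cost, latency, quality) and strictly better on at least one; the remaining configurations form the frontier of genuine tradeoffs. The sketch below is illustrative, not from the lesson: the `Config` class, the model names, and all the numbers are made up for demonstration, and the constraint values mimic the lesson's scenario where no model satisfies all three.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    name: str
    cost_per_1k: float     # USD per 1K requests (lower is better)
    p95_latency_ms: float  # p95 latency (lower is better)
    quality: float         # eval pass rate, 0-1 (higher is better)

def dominates(a: Config, b: Config) -> bool:
    """True if a is at least as good as b on every axis, strictly better on one."""
    at_least_as_good = (a.cost_per_1k <= b.cost_per_1k
                        and a.p95_latency_ms <= b.p95_latency_ms
                        and a.quality >= b.quality)
    strictly_better = (a.cost_per_1k < b.cost_per_1k
                       or a.p95_latency_ms < b.p95_latency_ms
                       or a.quality > b.quality)
    return at_least_as_good and strictly_better

def frontier(configs):
    """Configs not dominated by any other: the true tradeoff positions."""
    return [c for c in configs
            if not any(dominates(o, c) for o in configs if o is not c)]

def feasible(configs, max_cost, max_latency_ms, min_quality):
    """Configs that satisfy all three product constraints at once."""
    return [c for c in configs
            if c.cost_per_1k <= max_cost
            and c.p95_latency_ms <= max_latency_ms
            and c.quality >= min_quality]

# Hypothetical benchmark table.
configs = [
    Config("small",  0.40,  300, 0.81),
    Config("medium", 1.20,  650, 0.88),
    Config("large",  4.00, 1400, 0.93),
    Config("legacy", 1.50,  900, 0.84),  # worse than "medium" on all three axes
]

print([c.name for c in frontier(configs)])
# → ['small', 'medium', 'large'] — "legacy" is dominated, the rest are real tradeoffs

print(feasible(configs, max_cost=1.00, max_latency_ms=500, min_quality=0.85))
# → [] — the lesson's scenario: no single model meets all three constraints
```

The useful property of this framing: a dominated configuration can be discarded without any judgment call, while choosing among frontier configurations requires an explicit decision about which constraint to relax, which is exactly what the Decision Template documents.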
