Map your understanding and build lessons from the gaps

Claude Opus 4.6 traces your confusion to its source. It maps what you already understand, finds the specific misconception underneath, and builds personalized learning experiences around it.
Try in Claude
  • Author
    Anthropic
  • Category
    Personal
  • Model
    Opus 4.6
  • Features
    Extended Thinking

1. Describe the task

Some concepts don’t click no matter how many times you revisit them or how clearly they’re explained. The issue may not be the explanation itself but a wrong assumption somewhere in your understanding.

Claude Opus 4.6 gathers context before acting. For learning, that changes everything. Instead of explaining a concept from scratch, it maps what you already know, finds where your understanding breaks, and builds the explanation from that specific point. The diagnostic shapes everything that follows, including the artifacts.

From a single conversation, Opus 4.6 produces three artifacts: an interactive lesson built around your specific misconception, a workbook that applies the concept to your own data, and a concept map connecting it forward to what you’ll encounter next.

Describe the concept you’re stuck on and what you do understand.

I keep running into “Bayesian reasoning” in things I read—essays, podcasts, even conversations at work. People say “update your priors” or “you’re ignoring the base rate” and I nod along, but I can’t actually follow the logic when it gets specific. I understand basic probability fine. I can calculate odds, I know what a conditional probability is in the abstract. But when someone explains why a 99% accurate test doesn’t mean a 99% chance you’re sick, I lose the thread halfway through.

Help me understand this. Then build me an interactive lesson, a workbook I can use to audit which signals in my hiring pipeline actually predict success, and a concept map connecting it to what I’ll encounter next.

2. Give Claude context

Opus 4.6 invests in understanding before acting, so what you share upfront directly shapes the diagnostic. Describe where your understanding is solid, where it gets foggy, and what prompted you to learn this now. The more precisely you draw that boundary, the faster Claude finds the break. Don’t worry about getting it perfect—Claude may ask you targeted follow-up questions to narrow things down before continuing.

Required context

  • No files needed for the diagnostic itself. If you want a personalized workbook, upload a CSV or XLSX export of your data. Opus 4.6 scans across the full dataset and builds the analysis around your actual figures.
  • Your screening or pipeline data (optional): CSV or XLSX with candidate records, interview scores, hiring decisions, and outcomes—or any data where you’re using a signal to screen or sort a population


Optional context

Extended Thinking gives Opus 4.6 space to think step by step before responding. For a task like this—where Claude needs to reason carefully about your knowledge structure to find a hidden misconception—that extra reasoning depth makes the diagnostic more precise. You can see Claude’s thinking process as it works through the problem.

3. What Claude creates

Claude asks diagnostic questions first, probing what you know until it identifies the specific misconception. Then it rebuilds from the gap and produces all three artifacts. Opus 4.6 stays persistent through this entire sequence, and its writing keeps pace—the explanation, lesson, workbook, and concept map all maintain a consistent voice calibrated to your actual comprehension level.

From Claude

Let me figure out where your understanding is solid and where it breaks.

You said you understand basic probability and conditional probability in the abstract. Quick scenario: a disease affects 1 in 1,000 people. A test for it is 99% accurate—it correctly identifies 99% of sick people, and only gives false positives 1% of the time. You take the test and it comes back positive. What’s the probability you actually have the disease?

[After your response]

You said about 99%. That’s the exact misconception I was looking for—and you’re in good company. Studies show most doctors get this wrong too.

The answer is about 9%. Here’s why. Imagine testing 1,000 people. One person actually has the disease—the test catches them. But of the 999 healthy people, 1% get false positives. That’s about 10 people. So you have roughly 11 positive results total, and only 1 of them is actually sick. Your positive test means a 1-in-11 chance—about 9%—not 99%.

The number you were ignoring is the base rate: how rare the disease is in the first place. When something is rare, even an accurate test produces more false alarms than real catches, because it’s being applied to an enormous pool of people who don’t have it and a tiny pool who do. The false positives from the large group swamp the true positives from the small one.
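
If you want to check the arithmetic yourself, here is the same frequency count as a minimal Python sketch. The 1-in-1,000 prevalence, 99% sensitivity, and 1% false positive rate are the figures from the scenario above; everything else is just illustrative scaffolding:

  # Base rate neglect, worked as a frequency count using the scenario's figures
  population = 1000           # people tested
  prevalence = 1 / 1000       # 1 in 1,000 actually have the disease
  sensitivity = 0.99          # the test catches 99% of sick people
  false_positive_rate = 0.01  # 1% of healthy people test positive anyway

  sick = population * prevalence                    # 1 person
  healthy = population - sick                       # 999 people
  true_positives = sick * sensitivity               # about 1 person
  false_positives = healthy * false_positive_rate   # about 10 people

  p_sick_given_positive = true_positives / (true_positives + false_positives)
  print(round(p_sick_given_positive, 3))            # about 0.09, i.e. 9%, not 99%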

This is why the concept trips people up in practice. A hiring manager sees that 90% of their successful hires had a certain trait, so they screen for it. But if 90% of all candidates have that trait—successful or not—it predicts nothing. A prosecutor tells a jury that the chance of a DNA match with an innocent person is 1 in a million, so the defendant must be guilty. But in a city of 10 million people, you’d expect 10 innocent matches. The evidence isn’t as damning as it sounds.
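
The courtroom example works the same way. Here is a minimal sketch of that arithmetic, assuming exactly one actual perpetrator and that, before the DNA evidence, everyone in the city is equally likely; those assumptions are mine, not spelled out in the example:

  # Prosecutor's fallacy: a 1-in-a-million match rate in a city of 10 million
  city_population = 10_000_000
  false_match_rate = 1 / 1_000_000   # chance an innocent person matches by coincidence
  true_perpetrators = 1              # assumed: exactly one guilty person in the city

  expected_innocent_matches = (city_population - true_perpetrators) * false_match_rate
  p_guilty_given_match = true_perpetrators / (true_perpetrators + expected_innocent_matches)

  print(round(expected_innocent_matches))    # about 10 innocent matches expected
  print(round(p_guilty_given_match, 2))      # about 0.09, far from proof of guilt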

The core confusion: people treat the accuracy of a signal as if it’s the probability of the conclusion, without asking how common the thing they’re looking for is in the first place.

Now let me build you three things you can keep.

Interactive lesson: Base rate explorer

[View the interactive lesson]

A simulator where you can see base rates reshape probability in real time. It visualizes a population as a grid of dots—healthy and sick, qualified and unqualified—and shows what happens when you apply a screening test:

  • Drag the base rate slider and watch the ratio of true positives to false positives shift. At 50% prevalence the test works like you’d expect. At 1% prevalence it’s nearly useless
  • Adjust the test’s sensitivity and specificity independently to see which matters more (specificity dominates when the base rate is low)
  • Toggle between medical testing, hiring screening, and fraud detection to see the same math play out across domains
  • Run a “100 courtrooms” simulation to see how often a prosecutor’s statistical argument convicts an innocent person

Play with the sliders until the relationship between base rate and predictive value becomes something you feel, not just something you calculate. The short sketch below runs the same sweep in code.
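
A minimal numerical version of what the slider shows, holding the test’s quality fixed at an illustrative 99% sensitivity and 99% specificity and sweeping only the base rate:

  # Positive predictive value (PPV) as the base rate changes, test quality held constant
  sensitivity = 0.99   # P(test positive | actually positive)
  specificity = 0.99   # P(test negative | actually negative)

  for base_rate in (0.50, 0.10, 0.01, 0.001):
      true_pos = sensitivity * base_rate
      false_pos = (1 - specificity) * (1 - base_rate)
      ppv = true_pos / (true_pos + false_pos)
      print(f"base rate {base_rate:>6.1%} -> P(positive is real) {ppv:.1%}")

At 50% prevalence a positive result is almost certainly real; at 0.1% it is wrong roughly nine times out of ten, which is the shift the slider makes visible.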

Signal audit workbook: your hiring pipeline

Since you uploaded your spreadsheet, I pulled your team’s hiring data directly. Three tabs, plus a short code sketch after the download link if you want to rerun the core calculation yourself:

  • Your pipeline + base rates: Your actual candidate flow—applications received, phone screens passed, interviews completed, offers extended, hires made, and hires still performing at 12 months. Each stage shows the base rate: what percentage of all candidates at that stage eventually become successful hires. The overall base rate of a successful hire from your applicant pool is 8%, which means any screening signal needs to clear a high bar to be genuinely predictive.
  • Signal audit: For each criterion your team scores in interviews—technical assessment, culture fit rating, years of experience, referral source—this tab calculates the true predictive value when the base rate is factored in. Your technical assessment bumps the probability of a successful hire from 8% to 34%. That’s a real signal. But your culture fit score, despite feeling predictive, only moves it from 8% to 11%. Almost all candidates score well on it, so a high score tells you very little.
  • Decision framework: A five-question checklist for evaluating any screening criterion: What’s the base rate without this signal? What percentage of successful hires have it? What percentage of unsuccessful hires also have it? Does it add information beyond signals you already screen for? What would change if you dropped it entirely?

[Download: signal-audit-workbook.xlsx]
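
If you want to rerun the core calculation outside the workbook, here is a small pandas sketch of the signal audit. The file name and column names (hiring_pipeline.csv, successful_hire, passed_technical, and so on) are hypothetical placeholders, not the actual fields in your export:

  import pandas as pd

  # Hypothetical export of candidate records: one row per candidate,
  # with 0/1 columns for each screening signal and for the outcome
  df = pd.read_csv("hiring_pipeline.csv")

  base_rate = df["successful_hire"].mean()   # P(success) across all candidates

  def predictive_value(signal_column):
      """P(successful hire | the candidate has this signal)."""
      with_signal = df[df[signal_column] == 1]
      return with_signal["successful_hire"].mean()

  for signal in ("passed_technical", "high_culture_fit", "has_referral"):
      print(f"{signal}: {base_rate:.0%} -> {predictive_value(signal):.0%}")

A signal that barely moves the number past the base rate is not adding information, no matter how predictive it feels in the interview room.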

Concept map: how Bayesian reasoning connects forward

A one-page visual showing how base rate reasoning connects to related ideas you’ll encounter: signal detection theory (the formal framework for trading off false positives and false negatives), the prosecutor’s fallacy (a high-stakes version of base rate neglect in courtrooms), information gain (how to measure which evidence actually moves the needle), and why A/B tests need proper control groups. Each concept links to the reasoning chain that explains it. The next time someone invokes one of these terms, you’ll know exactly which piece of the puzzle it belongs to.

[View the concept map]

4. Follow-up prompts

Continue the conversation with Claude to refine, expand, or explore further.

Test it against your own data

Upload your pipeline or screening data and ask Claude to separate real signals from noise. Opus 4.6 scans across large files and datasets before responding, so it handles messy real-world numbers without oversimplifying.

Here’s my team’s full hiring pipeline from the last 2 years—applications through 12-month retention. Walk me through which interview criteria actually predict success once you factor in the base rates. I especially want to know about our new case study exercise—we added it in Q3 and our hire quality seems better. Is the case study a real signal or are we seeing what we want to see?


Keep working in your spreadsheet

Open the workbook and continue the analysis with Claude in Excel, which works directly in a sidebar—it can read your data, modify formulas, and build on the workbook without leaving your spreadsheet.

I downloaded the signal audit workbook. Can you add a tab that checks whether any of our interview signals are redundant—measuring the same thing as another signal we already screen for? I want to cut the ones that aren’t adding new information.


Challenge what you think you know

Ask Claude to stress-test your understanding. Opus 4.6 won’t just confirm what you’ve told it—it pushes back on your reasoning and finds gaps you missed.

I think I understand Bayesian reasoning now. Give me five real-world scenarios and have me identify the base rate, the signal, and whether the conclusion accounts for both. Don’t go easy on me—include some tricky cases where the base rate is hidden or where two signals interact.

Build a longer learning sequence

Connect this concept into a multi-session learning project. Opus 4.6 carries instructions through longer sessions without drifting, so your diagnostic context holds as you move through related topics.

Now that I understand base rates, I want to build out my understanding of the related concepts on the map—signal detection theory, information gain, and the prosecutor’s fallacy. Can you create a learning sequence that covers one per session, with a workbook for each?

5. Tricks, tips, and troubleshooting

Front-load what you know

Opus 4.6 invests in understanding your knowledge structure before trying to change it. The more clearly you describe where your understanding is solid, the faster Claude can locate where it breaks. “I understand basic probability but I can’t follow base rate arguments” is a sharper starting point than “I don’t understand statistics.” The gap between what clicks and what doesn’t is usually where the misconception lives.

Say it once

You don’t need to repeat your requirements or add “and remember to...” reminders. Opus 4.6 follows instructions precisely through the full diagnostic, the artifact construction, and any follow-ups. State what you want upfront and trust that it carries through.

Let the diagnostic do its work

It might feel slow when Claude asks you questions instead of jumping to an explanation. That’s the point. Opus 4.6 gathers context before acting, and here that means mapping your knowledge before trying to change it. The explanation it builds afterward is calibrated to exactly where your understanding breaks. If you want to see Claude’s reasoning as it works through the diagnostic, turn on Extended Thinking.

Start in chat, expand in Cowork

The diagnostic conversation and interactive artifacts work well in Claude.ai chat in your web browser. If you want to build a longer learning project that spans multiple sessions (covering related concepts and building workbooks from different datasets), use Cowork in Claude Desktop [Research Preview]. Opus 4.6 stays persistent across extended projects, and Cowork maintains context across conversations and saves files directly to your machine.

Ready to try for yourself?

Try Opus 4.6 on a concept you can’t seem to crack and see what changes when the explanation starts from your understanding, not from scratch.
Try in Claude