This guide is designed to be used alongside the Anthropic Education Report: The AI Fluency Index. It works for teams of any size, whether you're a leadership group exploring AI skills development, a faculty team discussing implications for your institution, or a professional learning community reflecting on how you collaborate with AI at work.
Each section below includes a brief summary of a key topic, followed by discussion prompts. You don't need to cover every section or every question. Choose what's most relevant to your group.
The data shows that most people already demonstrate a number of fluency behaviors naturally. Roughly 86% of conversations include iteration and refinement, about half include goal clarification, and around 41% include examples of what good output looks like. These are real skills, and many people on your team are likely practicing them already.
1. Looking at the behavioral prevalence chart in the report, which fluency behaviors do you think are strongest on your team? Which would you guess are weakest?
2. The most common behavior is iteration and refinement. When you think about how your team uses AI, do people tend to iterate and push back on initial responses, or accept the first output? What drives that behavior?
Iteration and refinement is the single strongest correlate of every other fluency behavior in the data. Conversations with iteration show substantially higher rates across the board: +23.6pp for clarifying goals, +22.6pp for providing examples, +17.1pp for identifying missing context, +16.3pp for specifying format, and +14.7pp for questioning reasoning.
1. The report shows that iteration is correlated with other fluency behaviors but cannot yet establish causation. What do you think explains this relationship? Could it be that more complex tasks naturally require both iteration and more fluency skills?
2. What barriers might prevent people on your team from iterating? Think about time pressure, perceived effort, uncertainty about what to ask next, or simply not knowing that iteration is valuable.
3. If you were designing a team norm or practice to encourage more iteration, what would it look like? What would a "good" follow-up message look like compared to a less effective one?
When AI produces polished outputs, users put more effort into directing the work up front but become less likely to evaluate what they get back. The report calls this the artifact effect.
The data shows higher rates of directive behaviors in artifact conversations: +14.7pp for clarifying goals, +14.5pp for specifying format, +13.4pp for providing examples, and +9.7pp for iterating. But all three discernment behaviors drop: -3.1pp for questioning reasoning, -3.7pp for checking facts, and -5.2pp for identifying missing context.
Because this data is correlational, not causal, the report offers several possible explanations. Polished outputs may signal "done" even when they shouldn't. Artifact tasks may involve less factual precision than other work. Or users may be evaluating outputs through other channels, like running code or sharing drafts with colleagues, rather than within the conversation itself.
1. Have you ever accepted an AI-generated output because it looked polished, only to discover problems later? What happened?
2. The report notes that users may be evaluating artifacts outside the conversation. In your team's experience, where does the real evaluation of AI outputs happen? Is it in the conversation, or after?
3. The report argues that as AI models produce increasingly polished-looking outputs, the ability to critically evaluate those outputs will become more valuable. What would help people on your team maintain a critical eye when outputs look finished?
The research points to three areas where many users could strengthen their skills: staying in the conversation (iterating), questioning polished outputs, and setting the terms of collaboration.
1. Of the three recommendations from the report (iterate more, question polished outputs, set the terms of collaboration), which feels most actionable for your team right now? Which would be hardest to adopt, and why?
2. The report suggests telling the AI how you want it to interact with you, with instructions like "push back if my assumptions are wrong" or "tell me what you're uncertain about." Has anyone on your team tried this? What difference did it make?
3. What does your organization's current approach to AI skills development look like? Is it formal training, peer learning, trial and error, or something else? How well is it working?
4. If you could focus on building one AI fluency skill across your organization in the next quarter, which would it be? What's one concrete step you could take to get started?
These hands-on exercises can extend the discussion into practice. Choose one based on your group's interests and available time.
Each participant opens a conversation with Claude on a task relevant to their work. After receiving the first response, they must send at least three follow-up messages that refine, push back, or redirect before considering the output complete. Afterward, discuss: how did the output change? What would you have missed if you'd stopped at the first response?
Have Claude generate an artifact (a document, a piece of code, an analysis) on a topic your group knows well. Review the output together and identify what looks right but may be wrong, what's missing, and what assumptions went unchecked. Use this to ground the conversation about evaluation skills.
Each participant writes a short "collaboration preamble" that they would use at the beginning of important AI conversations: how they want the model to interact with them, what kind of pushback they want, what they expect the model to flag. Share and compare approaches across the group.
This discussion guide is a companion to the Anthropic Education Report: The AI Fluency Index (2026). It was created by the AI Fluency Program at Anthropic to support leaders, educators, and teams in making sense of the report's findings and cultivating stronger AI fluency across their organizations.