What Happens When AI Becomes a Skill
0x41434f
It feels like AI tools such as OpenAI’s ChatGPT and Anthropic’s Claude are no longer novelties, or even optional productivity enhancers. They’re becoming embedded in how we think and work, quietly transforming both education and the workplace. As someone who’s been watching this shift closely, particularly through initiatives like the Anthropic Economic Index, I’ve been trying to understand not just where AI is being used, but how deeply it’s reshaping cognitive labor itself.
What makes Anthropic’s research stand out is that it doesn’t rely on projections or surveys. It studies how people actually use Claude across millions of anonymized conversations. Their most recent Education Report offered a fascinating look into how university students are integrating AI into their academic lives. The patterns they found say as much about the future of work as they do about the future of learning.
Students, especially those in STEM fields, are leading adopters. Computer Science students made up nearly 37% of all student conversations on Claude, despite being just 5% of degree recipients in the U.S. Natural Sciences and Math also showed disproportionately high usage. In contrast, students in Business, Health, and the Humanities were underrepresented. This skew likely reflects a mix of factors, including AI’s strengths in tasks like coding and data analysis, greater awareness in technical communities, and perhaps a more immediate perceived link to future career paths.
But what struck me most wasn’t just who was using AI, but how. Anthropic identified four broad patterns of student-AI interaction: direct problem solving, direct output creation, collaborative problem solving, and collaborative output creation. Roughly a quarter of all conversations fell into each category. This even distribution suggests students are not just asking for quick answers, but engaging in complex, multi-step dialogues. They are brainstorming, refining, and building with the model. Claude isn’t just functioning as a smarter search engine. It’s becoming a creative partner.
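To make the taxonomy concrete, here is a minimal sketch of it in code. The four category names follow the report; everything else, including the keyword heuristic and the sample conversations, is invented for illustration and merely stands in for the LLM-based classification Anthropic actually performs.

```python
from collections import Counter
from enum import Enum


class Interaction(Enum):
    DIRECT_PROBLEM_SOLVING = "direct problem solving"
    DIRECT_OUTPUT_CREATION = "direct output creation"
    COLLAB_PROBLEM_SOLVING = "collaborative problem solving"
    COLLAB_OUTPUT_CREATION = "collaborative output creation"


# Hypothetical signal for "output creation": the student asks for an artifact.
OUTPUT_WORDS = ("write", "draft", "essay", "report", "code")


def classify(turns: list[str]) -> Interaction:
    """Toy heuristic: several turns of back-and-forth counts as collaborative;
    a request for an artifact counts as output creation."""
    collaborative = len(turns) > 2
    creates_output = any(w in t.lower() for t in turns for w in OUTPUT_WORDS)
    if collaborative:
        return (Interaction.COLLAB_OUTPUT_CREATION if creates_output
                else Interaction.COLLAB_PROBLEM_SOLVING)
    return (Interaction.DIRECT_OUTPUT_CREATION if creates_output
            else Interaction.DIRECT_PROBLEM_SOLVING)


# Invented sample conversations, each a list of student turns.
sample = [
    ["solve this integral"],
    ["write an essay on the French Revolution"],
    ["here is my proof", "why does step 3 fail?", "ok, retry with induction"],
    ["draft an intro paragraph", "make it shorter", "now add a thesis"],
]
print(Counter(classify(c) for c in sample))
```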
Yet the report doesn’t shy away from the difficult questions. Nearly half the interactions were classified as “direct,” and while many of those are perfectly valid for learning, some examples raised red flags: students asking for answers to multiple-choice questions, or for help evading plagiarism detection. Even in collaborative interactions, the risk remains that students might offload too much of the actual thinking. One conversation prompt in the dataset was simply “solve my statistics homework with explanations.” Helpful, perhaps, but also revealing.
To go deeper, Anthropic mapped these conversations to Bloom’s Taxonomy, a classic framework that ranks cognitive skills from basic recall up to higher-order creation. What they found was something like an inverted pyramid. Students were using Claude most frequently for “Creating” and “Analyzing,” both near the top of the hierarchy, while “Remembering,” the foundation, was nearly absent. In some ways, this is encouraging. AI is helping students engage with difficult material and build new things. But it also raises the question: if AI handles the higher-order thinking, are students still developing the foundational skills underneath?
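The aggregation step itself is simple. A hedged sketch, assuming each conversation has already been labeled with a level of the revised taxonomy; the counts below are invented, shaped only loosely like the pattern the report describes.

```python
from collections import Counter

# Revised Bloom's Taxonomy, ordered from foundational to most complex.
BLOOM = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

# Hypothetical per-conversation labels; a real pipeline would produce these
# with a classifier run over anonymized transcripts.
labels = (["create"] * 40 + ["analyze"] * 30 + ["evaluate"] * 10 +
          ["apply"] * 11 + ["understand"] * 8 + ["remember"] * 1)

counts = Counter(labels)
for level in reversed(BLOOM):  # print the top of the hierarchy first
    share = counts[level] / len(labels)
    print(f"{level:>10}: {'#' * round(share * 50)} {share:.0%}")
```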
This concern isn’t limited to the classroom. In fact, it echoes trends already visible in the workplace. Anthropic’s Economic Index applies a similar methodology to professional contexts, mapping real-world usage to job tasks drawn from the U.S. Department of Labor’s O*NET database. The highest concentrations of AI use were in software engineering, data science, and content writing. These mid-to-high wage occupations, often associated with cognitive work, are seeing the earliest and most significant impacts.
The Index also sheds light on how workers use AI, distinguishing between automation, where AI performs a task directly, and augmentation, where AI supports or enhances the user’s work. Across all sectors, augmentation was more common, making up about 57% of usage. That suggests people are working with AI, not simply handing off tasks to it. But the boundary between those two modes can be subtle. A user might ask Claude to draft an entire memo, which looks like automation, and then edit it heavily, turning the interaction into a form of augmentation.
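The arithmetic behind that split is straightforward once conversations are mapped to occupations and labeled by mode. A minimal sketch with invented numbers; the occupation rows are hypothetical, and the Index’s real taxonomy is finer-grained than a binary split:

```python
from collections import Counter

# Invented conversation counts per occupation and top-level mode; the real
# Index derives these from anonymized Claude traffic mapped to O*NET tasks.
usage = {
    "Software Developers": Counter(augmentation=620, automation=380),
    "Technical Writers":   Counter(augmentation=540, automation=460),
    "Translators":         Counter(augmentation=310, automation=690),
}

for occupation, modes in usage.items():
    share = modes["augmentation"] / sum(modes.values())
    print(f"{occupation:>20}: {share:.0%} augmentation")
```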
These distinctions matter, especially as AI becomes more capable. In the report, roles like copywriting showed high rates of iterative collaboration, while translation leaned toward direct automation. Certain cognitive tasks are more easily shared or offloaded than others. And as these systems become more embedded in software, not just chat interfaces, the degree of automation could grow rapidly.
This was one of the main arguments in the “GPTs are GPTs” paper. It introduced a measure called “exposure”: whether access to an LLM, or to software built on top of one, could cut the time a task takes by at least half without sacrificing quality. By that measure, the paper estimated that once LLM-powered tooling is accounted for, roughly half of all work tasks could be affected. Not just writing prompts and reading answers, but embedding AI into workflows in ways that transform how work is structured. The implication is clear: the biggest gains, and the biggest shifts, will come not from using AI directly, but from building with it.
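The paper’s rubric translates directly into code. A minimal sketch, assuming its three task labels, E0 (no exposure), E1 (an LLM alone can halve the time a task takes), and E2 (it can, but only with additional software built on top of the LLM), along with its weighted measures alpha, beta, and zeta; the example labels are invented:

```python
# Task-level exposure rubric from "GPTs are GPTs" (Eloundou et al., 2023):
#   E0: no exposure
#   E1: an LLM alone can cut task time by at least 50%
#   E2: the same, but only with extra software built on the LLM
def exposure_scores(task_labels: list[str]) -> dict[str, float]:
    n = len(task_labels)
    e1 = task_labels.count("E1") / n
    e2 = task_labels.count("E2") / n
    # The paper's weightings: alpha counts only E1, zeta counts E2 fully,
    # beta splits the difference by weighting E2 at one half.
    return {"alpha": e1, "beta": e1 + 0.5 * e2, "zeta": e1 + e2}


# A hypothetical occupation where half the tasks need LLM-powered tooling:
print(exposure_scores(["E0", "E1", "E2", "E2"]))
# {'alpha': 0.25, 'beta': 0.5, 'zeta': 0.75}
```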
Another paper, “AI in the Knowledge Economy,” offered a helpful framework. It modeled AI agents as either non-autonomous copilots or autonomous coworkers, and showed how their capabilities and independence could shift labor market dynamics. Basic autonomous agents might displace lower-skilled workers, pushing them into more complex roles. But more advanced agents could absorb high-skill tasks, flattening hierarchies or redistributing work in unexpected ways. The key insight was that the design of an AI system, its level of autonomy and its cognitive reach, has real consequences for inequality and productivity alike.
Reading this, I thought back to the Anthropic data. The student usage patterns already reflect both modes. Some are using Claude as a copilot, collaborating through multiple turns. Others are handing it a task and walking away. The balance isn’t just theoretical. It’s happening now.
And then came the Shopify memo. When CEO Tobi Lütke wrote that “reflexive AI usage is now a baseline expectation,” it felt like a natural, if blunt, extension of everything above. He wasn’t just encouraging experimentation. He was mandating it. Employees are now expected to use AI in the prototype phase of every project. Before asking for more resources, teams must show why AI cannot get the job done. AI is framed not just as a tool but as a multiplier, something that turns already excellent contributors into people who can do ten times as much.
That memo, which was never meant for public release, lands differently because it is real. It confirms what the data has been suggesting: the expectation isn’t just that you’ll be familiar with AI. It’s that you’ll be fluent. It’s no longer enough to be curious. You need to be competent.
And fluency isn’t just about generating better outputs. It is about learning how to think with these systems, knowing when to trust, when to verify, and when to push back. It is about asking the right questions, shaping the right prompts, and designing the right feedback loops. It is about knowing how to build with AI in ways that are useful, responsible, and durable.
The shift we’re seeing is not about replacing people. It’s about reconfiguring how we work and learn alongside these new systems. That’s why I keep returning to the idea that AI is a skill now. Not a tool. A skill. One that will define how we navigate the next decade of work, education, and creativity.
We are just beginning to understand what that means.