There is a particular kind of silence that settles over a hiring manager's office when resumes keep piling up but offer letters stop going out. No announcement. No press release. No awkward town-hall explanation from an executive about how the industry is "evolving." Just a quiet, almost bureaucratic stillness: the sound of a door that was left open slowly, almost imperceptibly, swinging shut.
Last month, Anthropic published something arguably more significant than a new model: a peer-reviewed labor economics paper built on real usage data from millions of Claude conversations. The study introduced a metric called "Observed Exposure," which sounds boring until you read what it actually measures.
| Subject | Anthropic |
|---|---|
| Founded | 2021 |
| Headquarters | San Francisco, California, USA |
| Founders | Dario Amodei, Daniela Amodei, and others formerly of OpenAI |
| Type | AI Safety Company / Private |
| Primary Product | Claude (Large Language Model) |
| Key Report | Anthropic Economic Index — Labor Market Impact Study |
| Lead Researchers | Maxim Massenkoff, Peter McCrory |
| Notable Finding | 33% of tasks in technical occupations already covered by AI in real-world usage |
| Reference Website | https://www.anthropic.com/research |
Not what AI might theoretically accomplish, but what it is actually doing in real work settings today. It turns out there is a big difference between the two. And the gap between what the data shows and how companies are responding may be the most expensive blind spot in business right now.
The headline finding is that 33% of tasks in computer and mathematical occupations are already covered by AI in observed usage. For computer programmers, the coverage rate is 75%; for data entry workers, 67%. Technical writers, market research analysts, and financial analysts cluster near the top of the exposure list. These are not speculative forecasts. They are drawn from observed behavior across millions of professional interactions with a working AI system, which makes them a different kind of claim than the projections labor economists have been offering for the last three years.
But the automation figure is not what makes the Anthropic paper genuinely unsettling. It's what comes immediately after it. Researchers Maxim Massenkoff and Peter McCrory found no consistent rise in unemployment among workers in highly exposed occupations. None. Statistically, the jobs are still there, and the workers still hold them.
So far, so comforting, until you look at what happens to the people trying to enter those occupations for the first time. For workers between the ages of 22 and 25, hiring rates into AI-exposed occupations have fallen by roughly 14%. Not layoffs. Not restructuring. Just a quiet, gradual withdrawal of opportunity from people who have not yet built the institutional standing that shields older workers from the first wave of any disruption.
The historical resonance is hard to ignore. In 1869, Massachusetts established the nation's first Bureau of Statistics of Labor, partly in response to the Second Industrial Revolution tearing through New England. The machines weren't just faster; they were rearranging who was allowed to participate in economic life. The reformers behind that early bureau understood that you cannot solve a problem you haven't measured. Anthropic is attempting something structurally similar: building a measurement framework before the disruption becomes obvious, and before the post-hoc analyses start arguing over what happened and why. That kind of foresight is rarer than it sounds.
Inside most companies, the current state of affairs is far messier than any dataset can capture. A McKinsey survey from late 2025 found that 72% of knowledge workers regularly use generative AI tools at work. Yet HR has almost no visibility into which roles have quietly changed and which have not.
Someone is prompting Claude to write that competitive analysis at eleven o'clock at night. Someone else is using it to produce the first three drafts of a financial model that would once have taken two junior analysts a full week. The work is getting done. The job descriptions, performance reviews, and salary bands still read as if it were 2019.
The result is that companies are caught between two bad responses. The first is to keep hiring as before: posting the same roles with the same requirements and paying full compensation for jobs that are now 30 to 50 percent automated. This is what most companies are doing, and it is wasting money in ways that won't become visible until the next earnings cycle, when someone starts asking hard questions.
The second is a panicked hiring freeze: blanket, blunt, and unexamined. It stops the bleeding in one place while institutional knowledge drains out through ordinary attrition in another. The deeper flaw in both responses is that they treat a role as either fully automated or untouched, human or machine. As the Anthropic data makes clear, almost nothing works that way.
The traditional hiring process was built on a simpler premise: humans do all the work, and the skills you test for in an interview are the skills that actually produce the output. For a large and growing slice of the economy, that premise no longer holds. When AI covers 75% of a computer programmer's task profile, the coding interview is partly assessing what Claude can do at two in the morning.
Meanwhile, the skills that now seem to matter most go unevaluated: knowing when to trust AI output, knowing which questions to ask rather than how to execute every answer, and knowing how to design workflows that pair human judgment with machine speed.
Watching all of this, middle managers in particular seem to be standing on ground that is shifting faster than their mental models can keep up. Not because they lack intelligence, but because the tools they have always used to understand their teams (headcount, utilization rates, output metrics) were never designed for a world in which a significant share of the work is done by something that doesn't appear on an org chart.
The Anthropic data suggests that the time for thoughtful, measured change may already be over. What comes next is harder to predict. But the counting has begun, and historically, that is when things start to get serious.
