In this part, I present a comprehensive analysis report built from the AI Economic Index. If you're interested in how I built the data pipelines, you can visit the first part of this post series, AI Economic Index: Pipelines, where I discussed the data pipeline architecture from the raw data all the way down to the analytics.

 

This report is divided into four sections, following the four pages available on the report dashboard:

  • Automation vs augmentation trends
  • Usage share trends
  • Effectiveness and efficiency in work
  • Exploring more insights with AI data analyst

 

Finally, I'll wrap up with a conclusion at the end. So if you're after the TL;DR, you can scroll straight down.

 

1. Automation vs augmentation trends

 

Dashboard page: AI Economic Index - Automation vs Augmentation Trends

 

The automation vs augmentation trends track the overall split between fully automated interactions and human-augmented interactions across five Anthropic dataset releases from February 2025 through March 2026. Automation captures patterns where Claude drives task completion, either by handling a task end-to-end with minimal input (directive), or by running within a feedback loop where the user steps in only as needed (feedback loops). On the other hand, augmentation captures patterns where Claude plays a supporting role, such as helping users learn (learning), iterating on work together (task iteration), or validating outputs the user has already produced (validation).
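The taxonomy above boils down to a simple grouping of five interaction sub-types into two top-level categories. A minimal Python sketch (the key names are my own shorthand, not the dataset's actual labels):

```python
# The five collaboration sub-types described above, grouped into the two
# top-level categories. Key names are illustrative shorthand.
COLLABORATION_MODES = {
    "directive": "automation",       # Claude handles the task end-to-end
    "feedback_loop": "automation",   # user steps in only as needed
    "learning": "augmentation",      # Claude helps the user learn
    "task_iteration": "augmentation",  # iterating on work together
    "validation": "augmentation",    # checking user-produced outputs
}

# The overall automation share is then just the summed share of the
# sub-types that map to "automation".
```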

 

The data shows a clear arc: automation surged in mid‑2025, peaking at 49.1% in September, nearly overtaking augmentation. This period likely coincides with the release of more capable models and broader workforce integration. However, from January 2026 onward, augmentation reasserted its dominance, reaching 52.8% by March. The trend suggests that organizations initially leaned into aggressive automation but have since recalibrated toward hybrid workflows, recognizing that many tasks still benefit from human judgment layered on top of AI assistance.

 

By breaking down automation and augmentation components, we see that:

  • Directive automation expanded significantly (+5.8 pp), reflecting the rise of agentic AI systems capable of executing multi‑step tasks with minimal human input. Its September 2025 peak at 38.8% has since moderated, consistent with the broader automation correction.

  • Feedback Loop automation declined (−2.8 pp), suggesting that higher‑quality first‑pass AI outputs are reducing the need for rapid back‑and‑forth correction cycles.

  • Task Iteration, the most prevalent augmentation pattern, has declined from its February 2025 high but remains the second‑largest sub‑type at 25.6%, affirming that working side by side with AI has become a regular part of how people get things done.

  • Validation has grown steadily from 2.7% to 4.9%, a 79% relative increase, indicating that having AI double‑check human work is becoming a common way of working.

  • Learning has remained stable (22.5% to 22.4%), suggesting AI‑assisted learning is a steady‑state behavior rather than a trend in flux.
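The bullets above mix percentage-point (pp) changes with relative changes, and the distinction matters. A quick Python sketch using the rounded Validation figures (the report's 79% presumably comes from unrounded source values, since the rounded endpoints give roughly 81%):

```python
# Percentage-point change vs. relative change, using the rounded
# Validation figures above (2.7% -> 4.9% of interactions).
start, end = 2.7, 4.9  # share of interactions, in percent

pp_change = end - start                        # +2.2 percentage points
relative_change = (end - start) / start * 100  # ~81% relative increase

print(f"{pp_change:+.1f} pp, {relative_change:+.0f}% relative")
```

A pp change describes movement in absolute share; the relative change describes growth against the starting share, which is why a small sub-type like Validation can post a large relative gain.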

 

2. Usage share trends

 

Dashboard page: AI Economic Index - Usage Share Trends

 

This dataset tracks how AI use is spread across 13 major job groups over five reporting periods. In simple terms, it shows which professions are leaning more into AI and which ones are slowing down as adoption spreads across the workforce.

 

By March 2026, the picture looks like this:

  • Computer & Mathematical jobs still lead with 30.9%, but their share has dropped from 37.2% as other fields catch up.

  • Education has grown strongly, now at 12.9% (up from 9.3%).

  • Sales more than doubled its share to 4.7%, showing rapid adoption in customer‑facing work.

  • Office & Administrative Support rose to 9.0%, reflecting growing use of AI for routine tasks.

  • Healthcare and Community Services also show steady gains.

  • Meanwhile, Architecture & Engineering and Production have seen sharp declines, suggesting that hands‑on, physical tasks are harder to integrate with current AI tools.

 

It's important to note that the decline in Computer & Mathematical jobs doesn't mean engineers or data scientists are using AI less. They still dominate in absolute numbers. What's happening is that non‑technical fields are catching up quickly, spreading AI use more evenly across industries. The big picture is that AI use is spreading; it's no longer just the domain of coders and data scientists.

 

3. Effectiveness and efficiency in work

 

Dashboard page: AI Economic Index - Effectiveness and Efficiency in Work

 

To measure how effective and efficient AI is at work, I look at four key metrics. The first is task success, which simply asks whether the AI managed to complete the job the user asked for. The second is the AI autonomy score, a scale from 1 to 5 that shows how much freedom the AI had to make decisions during the conversation, ranging from no independence at all to almost complete control. The third is time savings, comparing how long a skilled professional would normally need to finish the same task against how long it actually took with AI support, including the time spent typing, reading, reviewing, and applying the assistant's outputs. Finally, I consider whether a human could have completed the task alone. In some cases, the user could have done the work themselves, though it might have taken longer; in others, the task would have been too difficult or time‑consuming without AI's help.
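The four metrics above can be pictured as a single per-conversation record. This is a minimal illustrative sketch; the field names and types are my own, not the dataset's actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-conversation record for the four metrics described
# above. Field names are illustrative, not the dataset's real schema.
@dataclass
class WorkMetrics:
    task_success: bool         # did the AI complete the requested task?
    autonomy_score: int        # 1 (no independence) .. 5 (near-full control)
    minutes_with_ai: float     # typing + reading + reviewing + applying outputs
    minutes_without_ai: float  # estimated skilled-professional time alone
    requires_ai: bool          # would the task be infeasible without AI?

example = WorkMetrics(task_success=True, autonomy_score=3,
                      minutes_with_ai=10.2, minutes_without_ai=2.0,
                      requires_ai=False)
```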

 

On average, across all occupations, the success rate is 72.7%, though the numbers vary a lot depending on the field. For example, Transportation and Material Moving shows the highest success rate at nearly 88%, followed closely by Arts & Media and Healthcare Support, all above 80%. On the other end, Computer & Mathematical jobs and Business & Financial Operations sit lower, around 66-67%, despite having the largest number of tasks mapped. This pattern suggests something important: fields with fewer tasks tend to apply AI to a smaller, more manageable set of problems, ones that AI can solve reliably. Meanwhile, technical and business roles attempt a much wider range of complex tasks, which naturally lowers their average success rate. In computing especially, failures are easier to spot because the logic is strict: code either runs correctly or it doesn't, which leaves little room for ambiguity.

 

Across the platform, AI averages an autonomy score of 3.35 out of 5. That means it usually works in a moderately independent way, where it is capable of making decisions and carrying tasks forward, but not fully unsupervised. Some jobs give AI more freedom than others. Production roles show the highest autonomy score (3.55), which makes sense because production tasks are often step‑by‑step and procedural, like programming equipment or calculating dimensions. Once instructions are clear, AI can handle them with little ambiguity. Computer and Mathematical jobs also score high (3.50), reflecting the structured nature of coding and debugging, where rules are well‑defined and AI can operate with confidence. At the other end of the spectrum, Legal work records the lowest autonomy (3.15). That's because law relies heavily on human judgment, interpretation of precedent, and awareness of liability. Even when AI drafts documents or analyzes statutes, lawyers keep tight control. Similarly, Healthcare Support and Cleaning & Maintenance score low, though for different reasons: healthcare tasks involve safety‑critical patient contact, while cleaning tasks are physical and context‑dependent, making them harder for AI to manage alone. Interestingly, Arts and Media sits below average (3.27) despite having one of the highest success rates. This shows that creative professionals let AI help with drafts and edits but still keep strong editorial control.

 

Another interesting detail in the data is how tight or wide the confidence intervals are, that is, how much uncertainty there is in the autonomy scores. For high‑volume sectors like Computer & Mathematical (3.499–3.505) and Arts & Media (3.263–3.276), the intervals are extremely narrow. That's because these fields have a huge number of observations, so the estimates are very precise. By contrast, sectors with fewer tasks show wider ranges. For example, Transportation (3.289–3.413), Building & Grounds (3.096–3.253), and Healthcare Support (3.109–3.218) all have broader spreads. This happens because smaller sample sizes leave more room for variation, meaning the autonomy patterns in these areas are less settled and could shift as more data comes in.
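The sample-size effect described above follows directly from how a confidence interval for a mean is computed: its half-width shrinks with the square root of the number of observations. A small Python sketch with illustrative numbers (the sample sizes and standard deviation here are assumptions, not the dataset's actual values):

```python
import math

# 95% normal-approximation confidence interval for a mean.
# Half-width = 1.96 * std / sqrt(n), so it shrinks with sqrt(n).
def ci_95(mean, std, n):
    half_width = 1.96 * std / math.sqrt(n)
    return mean - half_width, mean + half_width

# Illustrative only: a high-volume sector vs. a low-volume sector.
big = ci_95(3.50, 0.9, 500_000)   # huge n -> very tight interval
small = ci_95(3.35, 0.9, 800)     # small n -> much wider interval

print(big, small)
```

With these assumed inputs, the high-volume interval spans only a few thousandths of a point, on the same order as the Computer & Mathematical range quoted above, while the low-volume interval is roughly twenty times wider.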

 

One point worth noting is that autonomy doesn't directly match success rates. For example, Production has the highest autonomy but only a 69.9% success rate, while Transportation has a mid‑range autonomy score yet the highest success rate at 87.9%. This highlights a trade‑off: giving AI more independence can speed things up and scale tasks, but it also increases the risk of errors when instructions aren't crystal clear.

 

One of the most surprising findings in the dataset is about time. Across all 22 job groups, tasks done with AI actually take longer: the median time is 10.2 minutes, compared to just 2 minutes without AI. At first glance, this might look like AI slowing things down, but that's not always what's happening. Another possibility is that people use AI for bigger, more complex, multi‑step tasks than they would attempt alone. In other words, AI is not only about doing the same work faster, but also about enabling people to take on work they couldn't or wouldn't do without help.

 

This is captured in the Time Savings Ratio, which compares how long tasks take with AI versus without it. Every occupational group shows a negative ratio, meaning tasks with AI take longer. For example, in Computer & Mathematical jobs, the ratio is -4.28, which means tasks take about 4.3 times longer with AI. That could be because AI is used for much more complex coding and analysis work. The same pattern appears across other occupations, which all show tasks stretching from just a couple of minutes without AI to 10-16 minutes with it.
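The exact formula behind the Time Savings Ratio isn't published here, so the following is one plausible definition consistent with the numbers in the text, sketched in Python (treat the formula and the input times as assumptions):

```python
# Assumed definition: fraction of the no-AI baseline time that was saved.
# Positive = time saved; negative = the AI-assisted task took longer.
def time_savings_ratio(minutes_without_ai, minutes_with_ai):
    return (minutes_without_ai - minutes_with_ai) / minutes_without_ai

# Illustrative medians: ~2 min without AI vs ~10.6 min with AI gives
# roughly -4.28, i.e. about 4.3x longer than the baseline.
print(time_savings_ratio(2.0, 10.56))
```

Under this definition a ratio of 0.5 would mean the task took half as long with AI, while -4.28 means the AI-assisted task consumed the baseline time plus about 4.3 baselines more.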

 

Another important measure is how often tasks are classified as requiring AI capability, meaning they couldn't be completed to the right standard without AI's help. This metric shows where AI is a structural necessity, not just a convenience. The highest dependency is in Computer & Mathematical jobs, where nearly 1 in 4 tasks (22.9%) rely on AI. These include advanced code generation, complex data pipelines, and multi‑file refactoring, work that would be extremely difficult for humans to handle alone. Production roles follow closely at 21.2%, driven by program modifications, numerical computations, and electronics programming. Installation, Maintenance & Repair also shows a high dependency rate at 19.1%, reflecting the complexity of diagnosing and configuring technical systems. By contrast, fields like Personal Care & Service (0.9%) and Farming, Fishing & Forestry (0.5%) show almost no structural dependency. In these areas, AI is more of a helpful add‑on than a requirement, used for convenience rather than necessity.

 

4. Exploring more insights with AI data analyst

 

Dashboard page: AI Economic Index - Explore More with AI Data Analyst

 

This page is all about how you can discover more insights from my data by asking Bruin questions. For example, you can simply ask:

 

Based on the data, what kind of tasks can AI help me as a physicist?

 

Then, Bruin will relate your question to the data it has, query the data assets from the pipelines, and return answers based on actual query results. This approach helps reduce, or even avoid, what's often called AI hallucinations, which happen when an AI generates information that sounds convincing but isn't accurate. By linking questions directly to real data, the system stays grounded, delivers more reliable answers, and highlights insights that truly matter, helping avoid incidents like this. You can join the AI Economic Index Discord server here.

 

Conclusions

 

The data across all sections tells a consistent story: AI at work is evolving toward balance, not toward full automation.

 

The automation surge of mid-2025 turned out to be a temporary peak rather than a permanent shift. By early 2026, augmentation had regained its dominance, and the steady growth of validation use cases reflects this well. Organizations are increasingly using AI to support and review human work, not just to replace it. Meanwhile, AI-assisted learning has remained stable throughout, suggesting it has become a default behavior rather than a passing trend.

 

Adoption is also spreading well beyond its technical roots. Computer and Mathematical jobs still lead in usage share, but their relative dominance is shrinking as Education, Sales, Healthcare, and Administrative roles catch up. AI is no longer concentrated in the hands of engineers and data scientists.

 

On the effectiveness side, the 72.7% average success rate is promising, but the more revealing finding is the time savings paradox. Tasks completed with AI consistently take longer than those done without it. Rather than a sign of inefficiency, this could suggest that people are using AI to take on more complex, ambitious work than they would attempt alone. This is reinforced by the AI dependency metric, where fields like Computer and Mathematical jobs and Production show the highest share of tasks that genuinely require AI to complete to a satisfactory standard.

 

Taken together, the picture that emerges is one where AI's greatest contribution is not speed or automation on its own, but the expansion of what individuals and organizations can realistically attempt.

 

Buy Me a Coffee at ko-fi.com