How this AI job risk calculator works (and what it does NOT claim)
This calculator uses a task-based model: exposure is driven by the tasks you do, not your job title. It then applies practical modifiers such as adoption speed, regulation, safety requirements, and on-site constraints. The output is an estimate meant to help you reason about career moves and skill investments.
1) Task substitution vs job replacement
Most “AI replaces jobs” headlines are really about task substitution. A role can stay the same on paper while the task mix changes dramatically.
This calculator therefore treats job titles as labels and focuses on what you do (task mix) and how your work is evaluated (repeatability, data, verification).
2) The three levers that change the risk fastest
- Routine digital share: the higher it is, the more likely it is that AI plus workflow automation can substitute for the work.
- Clarity of evaluation: if “correctness” is easy to verify, automation scales faster.
- AI tool fluency: people who use AI well can compress time-to-output, shifting labor demand.
3) Driver table (quick reference)
| Driver | Higher value means… | Why it increases risk | How to mitigate (practical) |
|---|---|---|---|
| Routine digital tasks | More repeatable, text/form/report work | These tasks are easiest to standardize, automate, and QA at scale | Own the workflow: validation, QA, exception handling, and stakeholder outcomes |
| Repeatability | Same inputs → similar outputs | Automation works best when variability is low | Move into edge cases, exceptions, and decision trade-offs |
| Evaluation clarity | Easy to verify correctness | When success is measurable, AI-assisted output can be productionized faster | Become the evaluator: tests, audits, acceptance criteria, governance |
| Data availability | Lots of examples/docs/clean data | AI systems learn and generalize better with clean, labeled patterns | Own data quality and policy; focus on ambiguous, context-heavy work |
| On-site / physical constraints | More in-person, real-world actions | GenAI is strongest in digital work; physical work is impacted more indirectly | Shift to high-trust coordination, diagnostics, and safety/accountability |
| Regulation / safety-critical context | More compliance and liability | Slows substitution, but increases AI-assisted documentation/review demand | Specialize in regulated workflows, audit trails, and risk management |
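To make the table concrete, here is one way these drivers could be encoded as numeric inputs, with the last two (on-site constraints and regulation) turned into a damping multiplier. The field names and the 0.5 floors are illustrative assumptions for this article, not the calculator's actual encoding; the multiplier reappears in the pipeline sketch in the next section.

```typescript
// Illustrative encoding of the drivers above (hypothetical fields and constants,
// not the calculator's actual internals).
interface DriverInputs {
  routineDigitalShare: number; // 0-100: share of repeatable text/form/report work
  repeatability: number;       // 0-100: same inputs -> similar outputs
  evaluationClarity: number;   // 0-100: how easily correctness can be verified
  dataAvailability: number;    // 0-100: how much clean, labeled example data exists
  onSiteShare: number;         // 0-100: share of in-person, physical work
  regulationLevel: number;     // 0-100: compliance / safety-critical intensity
}

// On-site and regulated work slow substitution, so they act as damping multipliers.
// The 0.5 floors are assumptions: even fully on-site, fully regulated work keeps
// some exposure (documentation, scheduling, and reporting are still digital tasks).
function contextMultiplier(inputs: DriverInputs): number {
  const onSiteDamping = 1 - 0.5 * (inputs.onSiteShare / 100);         // 1.0 down to 0.5
  const regulationDamping = 1 - 0.5 * (inputs.regulationLevel / 100); // 1.0 down to 0.5
  return onSiteDamping * regulationDamping;
}
```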
4) Formulas (what we actually compute)
This model is intentionally transparent. We compute an exposure score from your task mix, then scale it by work-design and context multipliers. Finally, we split exposure into replacement risk and augmentation potential using AI tool fluency.
Notes: percent inputs are normalized so they sum to 100%. “clamp” limits a score to the 0–100 range. The constants are heuristics chosen for interpretability, so the output is an estimate, not a guaranteed prediction.
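For reference, here is a minimal TypeScript sketch of that pipeline. Every constant below (the per-task exposure scale, the 0.6 fluency shift, and so on) is an illustrative assumption chosen for readability, not the live calculator's values.

```typescript
// Minimal sketch of the scoring pipeline described above.
// All constants are illustrative assumptions, not the live calculator's values.

interface Task {
  share: number;    // time spent on this task (any scale; normalized below)
  exposure: number; // 0-1: how routine, digital, and easily verified the task is
}

// Cap a score to the 0-100 range.
const clamp = (x: number): number => Math.max(0, Math.min(100, x));

function computeRisk(
  tasks: Task[],
  contextMultiplier: number, // 0-1 damping from on-site / regulation constraints
  aiFluency: number          // 0-100: how effectively the person uses AI tools
) {
  // 1) Normalize task shares so they sum to 100%.
  const totalShare = tasks.reduce((sum, t) => sum + t.share, 0) || 1;

  // 2) Exposure score: share-weighted average of per-task exposure, on a 0-100 scale.
  const rawExposure = tasks.reduce(
    (sum, t) => sum + (t.share / totalShare) * 100 * t.exposure,
    0
  );

  // 3) Scale by work-design / context multipliers, then clamp to 0-100.
  const exposure = clamp(rawExposure * contextMultiplier);

  // 4) Split exposure using AI tool fluency: fluent users shift exposure away from
  //    replacement risk and toward augmentation potential. 0.6 is a heuristic shift.
  const fluency = aiFluency / 100;
  const replacementRisk = clamp(exposure * (1 - 0.6 * fluency));
  const augmentationPotential = clamp(exposure * (0.4 + 0.6 * fluency));

  return { exposure, replacementRisk, augmentationPotential };
}

// Example: 60% routine reporting (high exposure), 40% stakeholder work (low exposure),
// mild on-site/regulation damping, decent AI fluency.
const result = computeRisk(
  [
    { share: 60, exposure: 0.8 },
    { share: 40, exposure: 0.2 },
  ],
  0.85,
  70
);
console.log(result); // ≈ { exposure: 47.6, replacementRisk: 27.6, augmentationPotential: 39.0 }
```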
5) AI curiosities (useful mental models)
- Jobs rarely vanish overnight — tasks do. The first wave is usually “drafting + summarizing + auto-fill” paired with human review.
- Verification is the bottleneck. In many workflows, the constraint shifts from “creating text” to “checking correctness, legality, and safety.”
- The Jevons effect can apply: when the cost of producing something drops, demand can rise, so AI can both shrink some tasks and increase the total volume of work in a category.
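A toy illustration of that last point, with made-up numbers: per-task effort drops, demand expands, and the total amount of work in the category can end up higher than before.

```typescript
// Toy Jevons-effect arithmetic (made-up numbers, purely illustrative).
const hoursPerReportBefore = 4;
const reportsPerWeekBefore = 10;
const totalHoursBefore = hoursPerReportBefore * reportsPerWeekBefore; // 40 hours/week

// AI cuts drafting time by 75%, but cheaper reports invite 5x the requests.
const hoursPerReportAfter = 1;
const reportsPerWeekAfter = 50;
const totalHoursAfter = hoursPerReportAfter * reportsPerWeekAfter; // 50 hours/week

console.log(totalHoursAfter > totalHoursBefore); // true: each report got cheaper, total work grew
```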
6) What lowers risk (without pretending you can ‘future-proof’ everything)
Lower risk usually comes from owning accountability, trust, and real-world constraints. AI can generate drafts; it can’t easily own consequences.
If you want a complementary calculator, check out our AI Carbon Footprint Calculator to estimate the resource footprint of AI usage.