Hire engineers who actually know how to use AI
Vetted evaluates real Claude Code sessions to measure craft, judgment, and technical fluency — so you hire based on signal, not theater.
Free during beta · No credit card required
Resumes can't measure AI fluency
Traditional hiring signals break down when the most important skill is how someone collaborates with AI.
Resumes list tools, not skill
Saying "proficient in Claude Code" tells you nothing about how someone actually uses it — or whether they can think alongside AI.
Coding tests miss the point
LeetCode doesn't measure whether an engineer can debug AI output, leverage tooling, or make real trade-offs under ambiguity.
Interviews are expensive guesswork
A 45-minute interview can't reveal how someone works through ambiguous problems with AI assistance over hours of real work.
How it works
Three steps to better hiring signal
Replace guesswork with evidence. Get a clear picture of how candidates actually work with AI.
Candidate completes a session
They work on a real problem using Claude Code. No contrived puzzles — just authentic engineering work that shows how they actually think and build.
Vetted analyzes everything
Our AI evaluates the full session across four dimensions — Craft, Clarity, Judgment, and Ambition — using rubrics built from real engineering standards.
You get actionable signal
A detailed report with scores, evidence, and specific observations. Enough signal to make a confident hiring decision — in minutes, not weeks.
Evaluation rubric
Four dimensions of excellence
Each session is scored across the dimensions that matter most for AI-native engineering — with evidence and specific observations.
Craft
Tool mastery & best practices
Do they use CLAUDE.md files, MCP servers, hooks, and custom slash commands? Or are they using Claude Code like a basic chatbot?
Clarity
Communication & intent
Can they articulate what they want clearly? Do they provide context, set constraints, and guide the AI with domain knowledge?
Judgment
Decision-making & critical thinking
Do they catch AI mistakes? Push back on bad suggestions? Make thoughtful trade-offs when multiple approaches exist?
Ambition
Scope & real-world impact
Are they solving real problems with meaningful complexity? Or just generating boilerplate and calling it a day?
What you get
Reports with real signal
Not a pass/fail. A nuanced breakdown of how the candidate thinks, builds, and collaborates with AI.
Strong AI-native engineer
Demonstrates advanced Claude Code proficiency with strong tool usage, clear intent communication, and solid technical judgment.
“When Claude suggested a recursive approach, the candidate identified the O(2^n) complexity issue, pushed back, and guided Claude toward a dynamic programming solution with memoization — demonstrating both technical depth and effective AI collaboration.”
FAQ
Common questions
What session files can I upload?
We accept .txt, .jsonl, and .json files up to 10MB. You can export sessions from Claude Code using /export, or upload the raw .jsonl files from ~/.claude/projects/.
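For example, a typical export flow looks something like this (a sketch; exact folder and file names vary by project and Claude Code version):

```
# Inside an active Claude Code session, save the transcript:
/export

# Or grab the raw session logs directly from disk.
# Each project gets its own folder under ~/.claude/projects/,
# with one .jsonl file per session:
ls ~/.claude/projects/
ls ~/.claude/projects/<your-project>/
```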
How are sessions evaluated?
Our AI analyzes the full session transcript against a rubric covering four dimensions: Craft, Clarity, Judgment, and Ambition. Each dimension is scored 1–5 with specific evidence from the session.
How long does analysis take?
Most sessions are analyzed in under 60 seconds. You'll see real-time progress while we process your file.
What happens to my session data?
Session data is used only for analysis and is never shared with third parties. We hash IPs for rate limiting and don't store personal information beyond what's needed for the analysis.
Can I share reports with others?
Yes — each report has a unique, shareable link. You can send it to candidates for transparency, or keep it internal.
How much does it cost?
Free during beta. We'll announce pricing well before any changes, and early users will get preferential rates.
Stop guessing. Start evaluating.
Join the waitlist and be first to hire with real AI fluency signal.