// assess what matters

Measure AI prompting skills. Not résumé claims.

Put candidates in a real AI sandbox. Create a test in minutes, get an objective PromptScore from 0–100.

inpromptify.com/test/sandbox
Write a Marketing Email
Write a compelling product launch email for CloudSync Pro targeting enterprise CTOs. Focus on data synchronization pain points and include a clear CTA.
GPT-4o

Subject: Your data sync takes 23 hours/week. Let's fix that.

Dear [Name],

Enterprise teams lose an average of 23 hours weekly to fragmented data systems. CloudSync Pro delivers real-time bi-directional sync across your entire stack…
Follow-up prompt: Refine the subject line to be more specific…

Trusted by forward-thinking teams

Acme Corp · Globex · Initech · Hooli · Pied Piper · Aviato

Scoring

A score that means something

PromptScore evaluates efficiency, output quality, iteration strategy, and token economy. One number, fully comparable across candidates.

Detailed breakdowns show exactly where a candidate excels—or falls short. No more guessing.

PromptScore

Sarah Chen · Marketing Email Task

87/100
Output Quality92
Efficiency84
Iteration Strategy88
Token Economy79
3 attempts · 847 tokens · 6:12 · Top 15%
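As an illustration of how four sub-scores can collapse into one comparable number, here is a minimal weighted-average sketch. The weights and metric keys below are assumptions for the sketch, not InpromptiFy's published formula:

```python
# Minimal sketch of a PromptScore-style composite. The weights are
# illustrative assumptions; InpromptiFy's real scoring model is not
# shown on this page.

WEIGHTS = {
    "output_quality": 0.35,
    "efficiency": 0.25,
    "iteration_strategy": 0.25,
    "token_economy": 0.15,
}

def composite_score(metrics: dict) -> int:
    """Weighted average of 0-100 sub-scores, rounded to a single 0-100 number."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS))

# Sub-scores from the sample scorecard above:
print(composite_score({
    "output_quality": 92,
    "efficiency": 84,
    "iteration_strategy": 88,
    "token_economy": 79,
}))  # → 87
```

A real formula would likely be more sophisticated, but a fixed weighting is what makes the resulting number comparable across candidates.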

Dashboard

Compare candidates at a glance

Your employer dashboard shows every test result, ranked and filterable.

Marketing Email Task · 5 candidates

1. Sarah Chen · 87
2. Marcus Rivera · 82
3. Anja Petrov · 76
4. James Okafor · 71
5. Priya Nair · 64

Process

Three steps. Five minutes.

01

Create

Define a task, pick an AI model, set token and time limits.

02

Assess

Candidates enter a sandboxed environment and solve the task with a real LLM.

03

Hire

Get a PromptScore 0–100 with detailed analytics. Compare and decide.
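In code form, step 01 amounts to bundling a task, a model, and two limits. This is a hypothetical sketch; the field names and the `make_assessment` helper are assumptions for illustration, not InpromptiFy's actual API:

```python
# Hypothetical sketch of step 01 (Create). Field names and this helper
# are assumptions, not InpromptiFy's real API.

def make_assessment(task: str, model: str, token_limit: int,
                    time_limit_min: int) -> dict:
    """Bundle the Create inputs: a task, a model, and token/time limits."""
    if token_limit <= 0 or time_limit_min <= 0:
        raise ValueError("token and time limits must be positive")
    return {
        "task": task,
        "model": model,                        # the AI model candidates use
        "token_limit": token_limit,            # token budget per candidate
        "time_limit_minutes": time_limit_min,  # keeps conditions identical
    }

config = make_assessment(
    task="Write a product launch email for CloudSync Pro.",
    model="gpt-4o",
    token_limit=2000,
    time_limit_min=10,
)
```

The fixed limits are what make step 02 standardized: every candidate enters the sandbox under the same budget and clock.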

Why InpromptiFy

The old way vs. the better way

Old way

45-minute interview

  • “Tell me about a time you used AI”
  • Subjective interviewer notes
  • No standardization across candidates
  • Résumé claims you can't verify
~45min per candidate
InpromptiFy

5-minute test, objective score

  • Hands-on task with a real AI model
  • Automated PromptScore 0–100
  • Every candidate, same conditions
  • Detailed analytics on technique
~5min per candidate

Features

Everything you need to assess AI skills

Real LLM Sandbox

Candidates use actual AI models in a controlled, monitored environment.

Prompt Scoring

Automated 0–100 scoring based on efficiency, output quality, and iteration strategy.

Custom Tasks

Create assessments tailored to your role — marketing, engineering, data.

Token Budgets

Set token limits to measure how efficiently candidates work with AI.

Time Controls

Configurable time limits keep assessments standardized and fair.

Analytics Dashboard

Compare candidates side-by-side with detailed performance breakdowns.

Pricing

Simple, transparent pricing

Start free. Scale when you're ready.

Free

$0/mo

3 tests/month. Perfect for trying it out.

Get Started

Plus

$14.99/mo

15 tests/month. Ideal for freelancers.

Get Started

Pro

$79/mo

100 tests/month. Full analytics. Teams.

Start Free Trial

Business

$249/mo

500 tests/month. ATS integrations. API.

Start Free Trial

Stop guessing. Start measuring.

Set up your first assessment in under five minutes.