Configuring Scoring Criteria
This is where you decide what qualifies a candidate and how HeyMilo evaluates them across every stage of your end-to-end AI recruiting flow. By setting up clear, weighted criteria, you ensure every candidate is evaluated fairly and consistently, making it easy to identify your top talent.
Understanding HeyMilo Scoring
Your AI agent automatically:
Evaluates responses against your defined criteria
Assigns scores based on response quality and relevance
Provides reasoning for each score given
Ranks candidates by overall performance
Highlights strengths and areas of concern
Each candidate profile details:
Overall Score (0-100): Composite score across all criteria
Question Scores (1-5): Individual question performance
Summary: Key highlights and lowlights from the interview
Speech Score: Assessment of clarity, delivery, and tone
Transcript and Recording: Full interview transcript and audio recording available for review
HeyMilo automatically assigns each question a weight from 1-5. Recruiters also have the option to configure weights on a 1-10 scale.
Some employers are happy using the AI-generated questions as they are, while others prefer to edit the evaluation criteria or tweak the questions themselves to better fit their needs. In some cases, employers might not use the AI-generated questions at all and start from scratch.
Reviewing the AI-generated questions and customizing them to your exact needs ensures the interview reflects your real hiring standards and gathers the most relevant information.
Every interview stage in HeyMilo includes a Questions & Scoring section:

Resume Screening
Voice / Video Interview
SMS Screening
Application Form
Each stage is fully customizable and transparent.

HeyMilo automatically:
generates questions + criteria based on the role for each agent you layer in
proposes scoring logic
suggests dealbreakers where appropriate
You decide what stays, what changes, and what matters.
Customization Is the Point
Some employers:
use the AI-generated questions exactly as they are
make light edits to wording or scoring
generate evaluation criteria + tune with AI
replace everything with their own rubric and design every detail
All four are supported, and you are never locked into the AI’s first draft.
What You Can Customize (Across All Stages)
For every question or criterion, you can:
edit the wording
choose how it’s evaluated
assign importance or weight
decide if it’s a dealbreaker
control follow-up behavior
exclude it from the final score if needed
reorder questions or criteria to control interview flow
This applies to every screening agent.
Reordering Questions and Criteria
You can drag and drop questions or criteria to change their order.

Reordering affects:
the sequence candidates experience questions in
which topics are covered earlier vs later
how structured or conversational the interview feels
Examples:
Put dealbreakers first to identify unqualified candidates early
Start voice interviews with an open-ended warm-up question
Group related topics together (e.g. tools → collaboration → outcomes)
Understanding Weights (What “Importance” Really Means)
Weighting tells HeyMilo how much a question should influence the final score.

Higher weight = matters more in the final score
Lower weight = still evaluated, but less critical
You do not need to make everything high weight; many teams stick with the default settings.
How scores + weighting work together
Scored questions use a 1–5 answer scale. You can fully customize what a bad/non-ideal response looks like (1), and what a good answer looks like (5).

Each question has an importance level using a 1-10 scale (low → average → critical)
HeyMilo combines score + weight to calculate the overall result
This allows strong candidates to stand out for the right reasons, not just because they did well on easy questions.
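To make the math concrete, here is a minimal sketch of how a 1-5 answer score and a 1-10 importance weight could roll up into a 0-100 overall score. This is an illustrative assumption, not HeyMilo's published formula; the function name and the rescaling step are hypothetical.

```python
# Illustrative only -- not HeyMilo's actual formula.
# Each question gets a 1-5 answer score and a 1-10 importance weight;
# the weighted average is then rescaled onto the 0-100 overall range.

def overall_score(question_results):
    """question_results: list of (score_1_to_5, weight_1_to_10) tuples."""
    total_weight = sum(weight for _, weight in question_results)
    if total_weight == 0:
        return 0.0
    weighted_avg = sum(score * weight for score, weight in question_results) / total_weight
    return round((weighted_avg - 1) / 4 * 100, 1)  # map the 1-5 scale onto 0-100

# The candidate who nails the high-weight question outranks the one who
# only shines on the low-weight question, even with the same raw scores.
print(overall_score([(5, 9), (3, 2)]))  # 90.9
print(overall_score([(3, 9), (5, 2)]))  # 59.1
```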
Examples:
Sales role
Objection handling = high weight
Background story = low weight
Engineering role
Problem-solving approach = high weight
Tool familiarity = medium weight
Support role
Communication clarity = high weight
Years of experience = medium weight
Weighting helps you avoid accidental bias, where less critical questions dominate results.
Resume Screening: Early Signal, Clean Filters
Contextual Resume Screening works best when you want to filter fast before other screeners and interviews.
You can configure:
Required Qualifications
Auto-disqualifiers based on requirements like certifications, portfolio links, licenses, or minimum experience

Examples:
Design role: portfolio required, scored on impact and craft
Support role: customer-facing experience required
Warehouse role: certifications required, no scoring needed
Use resume screening to remove noise early while keeping standards consistent.
Application Forms: Structured Intake
Form questions are ideal when you need structured data and want to qualify fit before interviews.

Supported question types include:
short answer
numeric (minimum or slider)
multiple choice
date
file upload

You can also enforce criteria so candidates who don’t meet requirements don’t advance.

Examples:
Healthcare: license upload required
Contract roles: start date required
Product roles: portfolio link + explanation
Forms reduce back-and-forth and clean up your pipeline.
SMS Screening: Speed and Eligibility
SMS Screening is optimized for quick engagement and qualification.

Question types supported:
yes/no
numeric
pass/fail

You decide:
which questions are dealbreakers
minimum acceptable values
SMS works best when you want fast answers with minimal friction, especially for on-the-go candidates.
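As a rough mental model, an SMS rule set boils down to yes/no dealbreakers plus numeric minimums. The sketch below is purely illustrative; the field names and rule format are invented, not HeyMilo's configuration format, and the real setup lives in the dashboard.

```python
# Hypothetical SMS screening rules -- structure invented for illustration.

rules = {
    "authorized_to_work": {"type": "yes/no",  "dealbreaker": True},
    "years_experience":   {"type": "numeric", "minimum": 2},
    "weekend_shifts_ok":  {"type": "yes/no",  "dealbreaker": False},  # informational
}

def passes(rules, answers):
    for question, rule in rules.items():
        value = answers.get(question)
        if rule["type"] == "yes/no" and rule.get("dealbreaker") and value != "yes":
            return False  # hard stop on a dealbreaker
        if rule["type"] == "numeric" and value < rule["minimum"]:
            return False  # below the minimum acceptable value
    return True

answers = {"authorized_to_work": "yes", "years_experience": 1, "weekend_shifts_ok": "no"}
print(passes(rules, answers))  # False -- fails the 2-year minimum
```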
Voice / Video Interviews: Judgment and Depth
Voice interviews are where nuance matters. HeyMilo starts you off with a balanced set of open-ended questions that you can edit or replace.

You define:
evaluation criteria (what you’re looking for)
what a strong answer looks like (score of 5)
what signals a weak response (score of 1)
how much each question matters
how many follow-ups are allowed to probe for more detail
You can also generate evaluation criteria with AI, or have AI tune a question to be more concise, more open-ended, or reshaped with a custom prompt.
Question types you can mix:
Scored questions (1–5)
Structured questions (pass/fail, multiple choice, numeric)
Informational questions (no score impact)
Examples:
Sales: weight objection handling higher than background
Engineering: prioritize problem framing and tradeoffs
Leadership: score decision-making and reflection
This is where the AI recruiter starts thinking and interviewing like you, with adaptive, conversational voice AI.
Dealbreakers vs Scoring
A simple way to think about it:
Dealbreakers = eligibility
Scoring = quality
This keeps interviews fair, efficient, and predictable.
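In code terms, the split looks like a filter followed by a ranking: dealbreakers remove ineligible candidates outright, and scores only order the people who cleared them. The candidates and fields below are made up for illustration.

```python
# Illustrative only: dealbreakers gate eligibility, scores rank quality.

candidates = [
    {"name": "Ana",  "has_license": True,  "score": 78},
    {"name": "Ben",  "has_license": False, "score": 95},  # high score, but ineligible
    {"name": "Cara", "has_license": True,  "score": 84},
]

eligible = [c for c in candidates if c["has_license"]]             # dealbreaker check
ranked = sorted(eligible, key=lambda c: c["score"], reverse=True)  # quality ranking

for c in ranked:
    print(c["name"], c["score"])  # Cara 84, then Ana 78 -- Ben never ranks
```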
Tags (Optional, Informational)
Tags extract additional useful details from candidate responses (transcript) without affecting scores.

Tags appear in candidate profiles + reports to help with sorting and review.
Test Before You Launch
Every Questions & Scoring section includes a Test Now option you can use before activating your agent.

Testing lets you experience the interview as a candidate, see how follow-ups trigger, validate scoring logic, and catch unclear wording early.
Always test before activating!
Define Evaluation Categories
Consider organizing your scoring around key competency areas:
Technical Skills
Role-specific expertise
Tool and technology proficiency
Problem-solving abilities
Industry knowledge
Soft Skills
Communication effectiveness
Leadership potential
Teamwork and collaboration
Adaptability and learning
Examples for Weighting Your Criteria
Consider assigning importance percentages based on role requirements:
Example: Sales Representative
Communication Skills: 30%
Sales Experience: 25%
Results Achievement: 20%
Cultural Fit: 15%
Technical Skills: 10%
Example: Software Engineer
Technical Skills: 40%
Problem-Solving: 25%
Experience: 20%
Communication: 10%
Cultural Fit: 5%
Example: Customer Service Manager
Leadership Experience: 30%
Customer Service Skills: 25%
Communication: 20%
Problem-Solving: 15%
Cultural Fit: 10%
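If you think in percentages like the examples above, the overall result is just a weighted sum. Here is a worked sketch using the Sales Representative breakdown; the candidate's category scores are invented for illustration, and only the weights come from the example.

```python
# Worked example using the Sales Representative weights above.
# The category scores (0-100) are made up purely for illustration.

weights = {
    "Communication Skills": 0.30,
    "Sales Experience":     0.25,
    "Results Achievement":  0.20,
    "Cultural Fit":         0.15,
    "Technical Skills":     0.10,
}

candidate = {
    "Communication Skills": 90,
    "Sales Experience":     80,
    "Results Achievement":  70,
    "Cultural Fit":         60,
    "Technical Skills":     50,
}

overall = sum(candidate[category] * weight for category, weight in weights.items())
print(round(overall, 1))  # 75.0 -- strong communication lifts the total
```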
Scoring Best Practices
✅ Start Conservative
Begin with broader scoring ranges
Refine criteria based on actual candidate performance
Avoid being too restrictive initially
✅ Be Specific
Define exactly what you're looking for
Use concrete examples in your rubrics
Avoid subjective or vague criteria
✅ Test Your Scoring
Review initial candidate scores carefully
Adjust weightings if needed
Ensure scores align with your expectations
✅ Regular Calibration
Review scoring accuracy monthly
Compare AI scores with hiring outcomes
Adjust criteria based on performance data
Common Scoring Mistakes
❌ Over-Weighting Single Factors
Balance technical and soft skills appropriately
Consider the full candidate profile
❌ Setting Unrealistic Standards
Avoid requiring perfection in all areas
Consider growth potential, not just current skills
Allow for different paths to success
❌ Ignoring Cultural Fit
Don't focus solely on technical qualifications
Consider long-term retention factors
Evaluate team integration potential
❌ Not Updating Criteria
Review and iterate on scoring for future opportunities
Adapt to changing role requirements
Incorporate lessons learned from hiring
Interpreting Overall Candidate Scores
Score Ranges
90-100: Exceptional candidate, likely top 5%
80-89: Strong candidate, definitely worth interviewing
70-79: Good candidate, consider for next round
60-69: Average candidate, may need development
Below 50: Likely not a good fit for the role
Beyond the Numbers
Consider additional factors:
Growth Trajectory: Is the candidate improving?
Potential: Could they excel with training?
Unique Strengths: Do they bring something special?
Team Needs: Do they fill a specific gap?
Score Trends
Monitor patterns across candidates:
Are scores too high/low overall?
Which questions differentiate best?
Are knockout criteria too strict?
Do scores predict hiring success?
Continuous Improvement
Track Hiring Outcomes
Monitor which scored candidates get hired
Measure performance of hired candidates
Identify scoring criteria that predict success
Gather Feedback
Ask hiring managers/team members about score accuracy
Get input from successful hires
Review candidate feedback on the process
Refine As Needed
Update scoring criteria
Adjust weightings based on results
Add new criteria as roles evolve
Pro Tip: The best scoring system is one that consistently identifies candidates who succeed in the role. Focus on criteria that predict actual job performance, not just interview performance.
Ready to Invite Candidates?
With compelling questions and intelligent scoring configured, your interview agent is ready to start evaluating candidates. The next step is activating your agent and learning how to share it with candidates to drive high completion rates.
Remember: Scoring is an iterative process. Start with a solid foundation and refine based on real candidate data and hiring outcomes!