Includes an editable Google Sheet and a scoring‑rubric video.
A good SDR scorecard makes performance obvious without encouraging spammy behaviour. This guide shows exactly what to measure, how to weight inputs vs outcomes, and how to write rubrics that drive better conversations. You’ll get an editable structure, Google Sheet formulas, and a plan to calibrate in weekly 1:1s.
Key takeaways: Blend inputs (activity quality) and outcomes (meetings/pipeline). Weight what you control early in ramp, then tilt to outcomes. Use clear 1–5 rubrics, a single source of truth (CRM), and review weekly.
Blend inputs, quality, and outcomes; coach weekly.
Score SDRs on inputs, quality, and outcomes with explicit rubrics. Start 50/30/20 (inputs/quality/outcomes) in ramp, shift to 30/30/40 by month three. Review weekly with coaching notes.
Inputs track consistent effort; quality checks if messages are specific and targeted; outcomes confirm market signal. Use a single sheet per SDR pulling from CRM exports; lock formulas and document definitions.
Measure controllable inputs and conversion‑linked outputs; ignore vanity metrics like opens or raw dials without context.
Inputs: verified accounts added, targeted emails sent, calls with voicemail, LinkedIn touches, research notes. Quality: reply rate, positive reply %, ICP fit, message specificity. Outcomes: meetings held, qualified opps, pipeline created.
Use higher input weight in the first 60–90 days, then tilt to outcomes once patterns stabilise.
Example: Month 1 = 50/30/20; Month 2 = 40/30/30; Month 3+ = 30/30/40 (inputs/quality/outcomes). Document the schedule in the scorecard so expectations are clear.
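One way to keep that schedule live in the sheet is a small weight table on the Config tab keyed by ramp month; the layout and the RampMonth named cell below are illustrative, not part of the guide’s prescribed structure.
# Hypothetical Config layout for the ramp schedule:
#   A      B       C        D
# 1 Month  Inputs  Quality  Outcomes
# 2 1      0.50    0.30     0.20
# 3 2      0.40    0.30     0.30
# 4 3+     0.30    0.30     0.40
# Pull the active weight row by ramp month (RampMonth = assumed named cell):
ActiveWeights = INDEX(Config!B2:D4, MIN(RampMonth, 3), 0)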
Write behaviour‑anchored descriptions so a ‘3’ vs ‘4’ is objective.
Inputs: 3 = meets weekly targets with some batching; 4 = consistent pacing with research notes; 5 = proactive list hygiene and segmentation.
Quality: 3 = 3–5% replies; 4 = 5–8% replies with clear ICP fit; 5 = >8% replies and peer‑proof usage.
Outcomes: 3 = hits meeting target; 4 = exceeds by 10–20%; 5 = creates pipeline beyond quota with clean hand‑offs.
One tab per SDR; one ‘Config’ tab for weights/targets; import CRM exports weekly and refresh pivots.
Use =SUMPRODUCT(values, weights) for the overall score. Lock cells with definitions; colour‑code by traffic‑light bands; add a ‘Coaching notes’ column per week.
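To wire the weekly CRM export into each SDR tab, one option is a SUMIFS lookup against a raw ‘Import’ tab; the tab name, column order, metric label, and named cells below are assumptions for illustration.
# Hypothetical 'Import' tab holding the weekly CRM export:
#   A = SDR, B = Week, C = Metric, D = Value
# Pull one metric into an SDR tab (SDRName, WeekNum = assumed named cells):
TargetedEmails = SUMIFS(Import!D:D, Import!A:A, SDRName,
                        Import!B:B, WeekNum, Import!C:C, "targeted_emails")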
Review highlights, blockers, and two specific experiments for the next week; update rubrics if a definition causes debate.
Bring two examples of messages (one good, one to improve). Agree on one experiment per channel and log it in the sheet. Re‑check pipeline attribution with AEs to keep trust high.
Show score, trend, and pipeline created per SDR; tie variable comp to outcomes and quality, not raw volume.
Avoid incentives that push quantity over relevance. Link a portion of variable to qualified opportunities and meeting quality (show rate, AE acceptance).
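A minimal sketch of such a split, assuming a 60/40 weighting between qualified opportunities and show rate with a 125% payout cap; every name and ratio here is illustrative, not a recommended plan.
# Assumed named cells: Variable (at-risk pay), QualifiedOpps, OppTarget, ShowRate
# 60% on qualified opps, 40% on show rate vs. an assumed 80% bar, each capped at 125%
VariablePayout = Variable * (0.6 * MIN(QualifiedOpps / OppTarget, 1.25)
                           + 0.4 * MIN(ShowRate / 0.8, 1.25))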
If your scorecard drives better replies, don’t lose those prospects on a slow landing page: target INP ≤ 200 ms, LCP ≤ 2.5 s, and CLS ≤ 0.1.
Optimise hero and proof images (≤150 KB WebP), reserve dimensions, and lazy‑load non‑critical JS. Provide a static PDF as a fast fallback.
Make early wins achievable; increase complexity gradually.
The formulas below are config‑driven so you can tune weights without breaking the math.
# Config!B2:D2 = weights for Inputs, Quality, Outcomes (e.g. 0.3, 0.3, 0.4)
# Metrics tab holds weekly data by SDR; InputsScore, QualityScore, OutcomesScore,
# Replies, Delivered, and TargetedEmails are assumed to be named ranges there.

OverallScore = SUMPRODUCT({InputsScore, QualityScore, OutcomesScore}, Config!B2:D2)

ReplyRate = Replies / Delivered

QualityScore = IF(ReplyRate < 0.03, 2,
                IF(ReplyRate < 0.05, 3,
                 IF(ReplyRate < 0.08, 4, 5)))

InputsScore = IF(TargetedEmails < 60, 2,
               IF(TargetedEmails < 80, 3,
                IF(TargetedEmails < 100, 4, 5)))
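OverallScore above references an OutcomesScore the block never defines; a minimal sketch that mirrors the outcomes rubric (3 = on target, 4 = exceeds by 10–20%, 5 = beyond), assuming MeetingsHeld and MeetingTarget named cells:
# Assumed named cells: MeetingsHeld, MeetingTarget (per SDR, per week)
OutcomesScore = IF(MeetingsHeld < MeetingTarget, 2,
                 IF(MeetingsHeld < MeetingTarget * 1.1, 3,
                  IF(MeetingsHeld < MeetingTarget * 1.2, 4, 5)))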
Define each metric once and link it to a system and field.
Reward relevance, not volume.
Make the differences explicit.
Small team? This fits between calls.
Run a pilot before you roll it out.
ICP: Ideal Customer Profile. AE accepted: meeting/opportunity approved by AE. Reply rate: unique replies / delivered. Rubric: behaviour‑anchored scoring scale.
Review quarterly with the team.
Update targets and weights if reply or meeting rates shift materially; log changes with dates so trends remain comparable.
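A lightweight way to keep that log, assuming a dedicated ‘Changelog’ tab; the layout and example row are purely illustrative.
# Hypothetical 'Changelog' tab: Date (A), Setting (B), Old (C), New (D), Reason (E)
# Example row: 2025-03-03 | Quality weight | 0.30 | 0.35 | reply rates compressed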
Score what the business values: accepted meetings and pipeline.
Define “accepted” clearly (ICP fit, pain, next step). Add a weekly 10‑minute AE review of the last five meetings: accept/decline + one note. Feed the note back into coaching and adjust the rubric examples so SDRs see what “good” looks like in context.
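To make the AE review visible in the sheet, one option is a small ‘AEReview’ log plus an acceptance‑rate formula; the tab, columns, and named cells here are assumptions.
# Hypothetical 'AEReview' tab: Meeting (A), SDR (B), Accepted (C: "Yes"/"No"), Note (D)
AEAcceptRate = COUNTIFS(AEReview!B:B, SDRName, AEReview!C:C, "Yes")
             / COUNTIFS(AEReview!B:B, SDRName)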
Turn the score into actions.
Which angle got the highest reply rate last week—and why? What would make this message more specific to the buyer? Which 10 accounts should we drop or swap? What tiny experiment will you run on Day‑1 emails before next week?
Want an SDR scorecard tailored to your ICP, motion, and ramp plan?