
The Problem With How We Assess Client Competency

Your client tells you they've climbed to 5.10. You write it down. But how current is that grade? In what environment? On lead, or as a second? The number alone tells you almost nothing — and yet most of us have been building trip plans around it for years.


Ascents Team

March 20, 2026

Every guide has a version of this story. A client lists a 5.10 on their résumé. You ask a few follow-up questions and discover it was a single-pitch sport route, five years ago, in perfect conditions, with a guide clipping bolts above them. The grade was real. But what it says about their capability on the alpine objective you’re planning together — almost nothing.

And yet that number — 5.10, WI4, AD, S5 — is often the primary currency of pre-trip client assessment. We collect grades like coordinates and assume they navigate us reliably toward good decisions. Most of the time they do, roughly. But the margin for error on a technical alpine objective is not “roughly.”

This is the problem the Client Technical Competency assessment — the CTC — was built to address. Not to replace guide judgment. To give it a better foundation.


What a grade actually tells you

A grade is a measure of technical difficulty at a point in time, in a specific context, under conditions that may or may not have been demanding. It is useful. But it is, on its own, an incomplete picture of a person’s current capability on your planned route.

Consider what it doesn’t tell you:

When. A client who led 5.11 two seasons ago has a very different current capability from one who led it last month. Finger strength goes first. Technique degrades. The confidence to commit to a move on a poorly-protected slab does not survive a two-year layoff intact.

Where. A 5.10 leader at a sport crag and a 5.10 leader on a committing multi-pitch are not the same client. The grade is the same. The environment it was earned in — the consequence, the commitment, the self-management required — is categorically different. A 5.10 earned outdoors on traditional gear, multi-pitch, in variable conditions is worth approximately twice what the same grade indoors tells you for the purposes of planning an alpine objective.

In what role. Following and leading are different activities. A client who has seconded 5.10 for years is not equivalent to one who leads it regularly. They may be a great candidate for an objective as a second. They are not yet a candidate as a co-leader or for independent movement on a mixed alpine route.

Under what psychological conditions. This is the one we talk about least and arguably assess worst. A technically proficient client who freezes at exposure — who can execute the moves but cannot manage their fear response in a committing position — is a significant field management problem regardless of grade. And there is currently no systematic way to capture this from a résumé.

“The grade is the same. The environment it was earned in is categorically different. A 5.10 earned on traditional gear, multi-pitch, in variable conditions is worth approximately twice what the same grade indoors tells you.”

— From the CTC design framework

The standard workaround — and why it’s not enough

Experienced guides compensate for this informally. We ask follow-up questions. We probe for context. We read the subtext of how someone describes their experience — the vocabulary they use, the routes they name, the way they talk about the parts that were hard.

This is good guiding instinct, and it works reasonably well in a one-to-one conversation with a client you’ve spent some time with. But it has three meaningful failure modes.

First, it’s not systematic. The questions an experienced guide asks in a good conversation depend on that guide’s intuition, their mood that day, the pressure of a busy booking season, and whether the client happens to trigger any of their internal pattern-matches. The same client, assessed by two different guides on two different days, might get meaningfully different risk profiles — not because the client changed, but because the process varied.

Second, it doesn’t produce a record. The conversation happens, the judgment is formed, and it lives in the guide’s head. If anything goes wrong — if the trip ends in an incident, a rescue, or a claim — the guide has to reconstruct their pre-trip assessment from memory. That is not a strong position to defend.

Third, it doesn’t scale. When a guide service has three guides leading trips simultaneously, the institutional knowledge of the senior guide does not transfer automatically to the others. The junior guide who joined last season is making client assessments with whatever framework they developed in their own certification training, modulated by whatever briefings their employer provides. That gap in consistency is a risk exposure most guide service owners would rather not have.

The core problem: Guide expertise in client assessment is real and hard-won. The problem is that it is largely invisible, inconsistently applied, and produces no durable record. The CTC makes it structural.

What the CTC does differently

The CTC is not a form. It is a scoring engine with three evidence sources: the client’s logbook, their responses to a structured questionnaire, and direct guide observation where available. Each source has a different reliability weight. Logbook evidence is weighted highest because it is objective, timestamped, and graded. Questionnaire responses are self-reported, so they carry a lower weight — and the system automatically flags cases where self-reported competency is significantly higher than what the logbook actually demonstrates.
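To make the weighting concrete, here is a minimal sketch of how three evidence sources with different reliability weights might be combined, and how a self-report inflation flag could be raised. The weights, function names, and the 15-point tolerance are illustrative assumptions, not the actual CTC implementation.

```python
# Hypothetical evidence-source weights: logbook highest (objective,
# timestamped, graded), questionnaire lower (self-reported),
# observation in between when available.
EVIDENCE_WEIGHTS = {
    "logbook": 0.6,
    "questionnaire": 0.25,
    "observation": 0.15,
}

def combine_evidence(scores):
    """Weighted average over whichever evidence sources are present
    (None means the source is unavailable for this client)."""
    present = {k: v for k, v in scores.items() if v is not None}
    total_weight = sum(EVIDENCE_WEIGHTS[k] for k in present)
    return sum(EVIDENCE_WEIGHTS[k] * v for k, v in present.items()) / total_weight

def self_report_flag(logbook_score, questionnaire_score, tolerance=15.0):
    """Flag clients whose self-reported competency significantly
    exceeds what the logbook actually demonstrates."""
    return questionnaire_score - logbook_score > tolerance

combined = combine_evidence({"logbook": 70, "questionnaire": 90, "observation": None})
inflated = self_report_flag(70, 90)
```

Renormalising over the sources that are present means a client with no direct guide observation is still scored, just on a narrower evidence base.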

Each logbook entry is not just a grade — it is a grade with context. The entry records the environment (alpine committing, multi-pitch outdoor, single-pitch, gym), whether the client led or followed, whether conditions were ideal or challenging, and when it happened. The scoring engine applies multipliers based on all of these factors. A WI4 route led on a Scottish winter day in poor visibility in the past twelve months scores differently from a WI4 seconded at a roadside crag in ideal conditions three years ago. Both are logged as WI4. Only one says something reliable about what this client can do next month on your objective.
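The multiplier idea can be sketched in a few lines. Every factor value below is an assumption chosen for illustration (the CTC's actual multipliers are not published here); what matters is the shape: environment, role, conditions, and recency all scale the same base grade.

```python
# Illustrative context multipliers on a single logbook entry.
# All numeric values are assumptions, not the CTC's real weights.
from datetime import date

ENVIRONMENT = {"alpine_committing": 1.0, "multi_pitch_outdoor": 0.85,
               "single_pitch": 0.6, "gym": 0.4}
ROLE = {"led": 1.0, "followed": 0.6}
CONDITIONS = {"challenging": 1.0, "ideal": 0.8}

def recency_factor(climbed, today, half_life_days=730):
    """Evidence decays with age: half weight after roughly two years."""
    age = (today - climbed).days
    return 0.5 ** (age / half_life_days)

def entry_score(base_points, environment, role, conditions, climbed, today):
    return (base_points * ENVIRONMENT[environment] * ROLE[role]
            * CONDITIONS[conditions] * recency_factor(climbed, today))

today = date(2026, 3, 20)
# WI4 led in challenging conditions, about a year ago...
recent = entry_score(40, "multi_pitch_outdoor", "led", "challenging",
                     date(2025, 2, 1), today)
# ...versus WI4 seconded at a roadside crag in ideal conditions, three years ago.
stale = entry_score(40, "single_pitch", "followed", "ideal",
                    date(2023, 3, 1), today)
```

Both entries start from the same WI4 base, but the context multipliers pull them far apart, which is exactly the distinction the prose above describes.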

The composite is weighted by objective type. For an alpine ice objective, the alpine and water ice columns carry the most weight. The rock score barely matters. This means the same client assessed for a technical rock route would produce a very different profile — because the disciplines that matter are different.
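A sketch of the objective-type weighting, under the same caveat: the discipline names and weight values are hypothetical, chosen only to show how the same client produces different composites for different objectives.

```python
# Hypothetical per-objective weights over discipline scores.
OBJECTIVE_WEIGHTS = {
    "alpine_ice":     {"alpine": 0.45, "water_ice": 0.40, "rock": 0.10, "fitness": 0.05},
    "technical_rock": {"rock": 0.65, "alpine": 0.15, "water_ice": 0.05, "fitness": 0.15},
}

def composite(discipline_scores, objective):
    """Weighted composite for one objective type; missing disciplines score 0."""
    weights = OBJECTIVE_WEIGHTS[objective]
    return sum(w * discipline_scores.get(d, 0) for d, w in weights.items())

# A client strong on ice, weaker on rock:
client = {"alpine": 60, "water_ice": 75, "rock": 40, "fitness": 80}
ice_profile = composite(client, "alpine_ice")        # rock barely matters here
rock_profile = composite(client, "technical_rock")   # same client, different profile
```

For this client the alpine-ice composite comes out well above the technical-rock one, purely because the weights change: the scores themselves are identical.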

Alongside the score, the system surfaces flags for the guide — specific conditions that warrant direct attention before committing to an objective.

These are not decisions. They are inputs. The guide still makes the call. But they make it with a structured picture of the client’s actual demonstrated capability — not just a headline grade — in front of them.


The five things the CTC scores that grades don’t capture

After extensive work with certified guides to develop the framework, we settled on five dimensions that matter for any alpine objective but are systematically underweighted by grade-only assessment:

Recency. How current the demonstrated grade is. Ability degrades; a grade from five years ago is not evidence of today's capability.

Environment. Where the grade was earned: gym, roadside, or committing alpine. The same grade in different environments represents categorically different competency.

Role. Whether the client led or followed. Following and leading are different skills; route-finding, protection judgment, and commitment only come from leading.

Conditions. Whether conditions were ideal or challenging when the grade was achieved. A client who has only climbed in perfect conditions has never been tested by the conditions you will likely encounter.

Psychological readiness. Comfort with exposure, self-management under stress, retreat history. Technical proficiency and psychological readiness are not the same thing; the most common guide-management problems in the field are psychological, not technical.

That last dimension — psychological readiness — deserves a note about how it is assessed. The CTC does not infer this from technical grades. It captures it through a dedicated questionnaire domain that asks clients directly about their experience with exposure, their response when scared in committing terrain, and whether they have a history of retreating from objectives when conditions or ability were not adequate.

That last question is the most revealing one. A client who has turned around mid-route is demonstrating good judgment, not failure. The CTC treats it as a positive signal — and it surfaces it to the guide accordingly.
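One way this could look in the scoring engine, purely as a sketch: questionnaire field names, point values, and flag labels below are all hypothetical, but the key design choice from the text is preserved, namely that retreat history adjusts the score upward rather than down.

```python
# Hypothetical scoring of the psychological-readiness questionnaire domain.
def psych_readiness_signals(answers):
    """Return (score_adjustment, flags) from psychological-domain answers.
    A history of retreating is treated as a positive signal of judgment."""
    adjustment, flags = 0, []
    if answers.get("has_retreated_from_objective"):
        adjustment += 5                      # good judgment, not failure
        flags.append("retreat_history_positive")
    if answers.get("freezes_at_exposure"):
        adjustment -= 10                     # needs direct guide attention
        flags.append("exposure_management_attention")
    return adjustment, flags

adj, flags = psych_readiness_signals({"has_retreated_from_objective": True,
                                      "freezes_at_exposure": False})
```

Surfacing the positive flag, rather than silently folding it into a number, keeps the signal visible to the guide making the call.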

What this means for documentation

There is a second reason to care about systematic client assessment, beyond making better pre-trip decisions: the documentation it produces.

When an incident occurs on a guided trip — even a minor one — the central legal question is: what did the guide know about their client’s capability, and how did they assess it? A guide who can produce a timestamped, structured competency assessment showing they reviewed the client’s logbook, assessed five dimensions of capability, and set minimum requirements for the objective before departing is in a fundamentally stronger position than one who has to reconstruct their judgment from memory after the fact.

The CTC assessment record maps directly to ISO 31000 — the international risk management standard used by courts, insurers, and corporate risk managers to evaluate whether a professional exercised a reasonable standard of care. This is not a coincidence. We designed it that way deliberately, because the guide’s client assessment is part of the same risk management process as their pre-trip hazard identification and go/no-go decision. It should be documented with the same rigour.

On documentation: A structured client assessment record does not create new liability. It defends against the liability that already exists. The guide without documentation is the one who looks like they didn’t think it through.

A note on what the CTC is not

We have been careful, in developing this framework, about one specific risk: the risk that a scoring system becomes a substitute for guide judgment rather than a support for it.

The CTC does not tell you whether to take a client on an objective. It does not flag “this client should not be here.” It does not override your reading of a person you have spent time with, observed in the field, or worked with before. Guide expertise — the accumulated pattern recognition of years of professional experience — is not replaceable by a scoring algorithm, and we have no interest in trying.

What it does do is make the inputs to your judgment more structured, more complete, and more durable. It ensures you have asked the questions worth asking. It surfaces the flags worth surfacing. And it produces a record of the assessment that protects you, your business, and ultimately the profession.

The judgment is still yours. The CTC just gives it a better foundation.


We are currently working with a small group of AMGA and ACMG certified guides to refine the CTC framework before it becomes part of the Ascents platform. If you are a certified guide and this resonates — or if you think we have something wrong — we would genuinely like to hear from you. The best assessment tools in the profession were built by the profession. That is what we are trying to do here.