1. Charter and Design Lock
Scope, sample assumptions, quality thresholds, and gate criteria confirmed.
A Market Research Proposal to Inform ARDC’s Grantmaking Strategy
2026-03-13
Disruptive Technologies
Universal Internet
Social Learning Revolution
Evidence-Based Investment
Proactive Investment
Knowledge Leadership
Entry: Becoming a Ham
Continuation: Being a Ham
End-to-End Journey View
Acquisition: Bringing in New Licensees
Retention: Keeping Licensees Engaged
Current & Former Licensees
| Who We’ll Learn From | How We’ll Reach Them | What We’ll Learn |
|---|---|---|
| Entire licensee population | FCC ULS Mail-to-web survey + interviews | Overall population profile |
| Early-term new licensees | FCC ULS Mail-to-web survey | Initial integration experience |
| Mid-term licensees | FCC ULS Mail-to-web survey | Factors influencing long-term participation |
| Renewal window licensees | FCC ULS Mail-to-web survey | Propensity to renew and why |
| Recently non-renewing licensees | FCC ULS Mail-to-web survey | Reasons for non-renewal |
Prospective Licensees
Feeder Channels
| Who We’ll Learn From | How We’ll Reach Them | What We’ll Learn |
|---|---|---|
| Early-term new licensees | FCC ULS Mail-to-web survey | Feeder channel mix |
| Feeder channel leaders | Expert interviews (Technology, Emcomm, Youth STEM) | Feeder channel description and performance |
Tracks 1 + 2 Together Tell the Whole Story
Tracks Share Execution and Analytics
Incremental Deliverables
Final Deliverables
Note: This deliverables set is preliminary and will be refined before engagement begins, then updated during charter development.
Scope, sample assumptions, quality thresholds, and gate criteria confirmed.
Expert interviews surface full breadth of data collection requirements. Qual signal translated into measurement refinements and survey draft.
Comprehension and operations validated before full launch approval.
ULS sample fielding with response monitoring.
Snapshot, trajectories, and investment levers for licensed populations.
-> Output to integration: Decision Kit inputs and board-ready briefings.
Scope, target feeder channels, survey sample size, interview information requirements, and gate criteria confirmed.
Design survey questions targeted to recent licensees, with a conditional branch for respondents who fit the recent-licensee criteria.
Create interview structure for Expert Interviews across key feeder channels.
Launch survey with oversampling for recent licensees.
Schedule and perform interviews. Capture key feedback in structured system.
Feeder Channel Map and Scoring, and investment levers for an improved licensing journey.
Planning anchor: Assume a target pool of about 12,000 invitations yielding roughly 1,200 to 1,500 responses. That is enough for strong whole-population analysis, plus some support for subgroups. Further review will determine whether any additional outreach or oversampling is required.
Illustrative cost components we can estimate now
| Component | Directional range / signal | Why it matters |
|---|---|---|
| Invitation mail | $3 to $5 per invitation | Largest visible variable cost at the planning volume |
| Survey hosting | $1,000 to $3,000 | Baseline platform cost |
| Survey implementation | ~$3,000 | Optional setup / programming support |
| Full-service contract house project management | $10,000 to $25,000 | Optional, depending on execution model |
| Qualitative discovery | Variable | Depends on interview count, structure, and synthesis depth |
| Reminder mail / incentives | Optional / variable | Can improve response or subgroup yield |
Note: Directional ranges shown here are drawn from the proposal’s planning assumptions and estimated cost-component table. Final budget and sequencing should be set after module selection, confidence targets, and execution responsibilities are agreed.
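The planning anchor and the invitation-mail range imply a budget envelope that can be sanity-checked directly. A minimal sketch using only figures quoted in this deck (12,000 invitations, $3 to $5 per piece, 1,200 to 1,500 expected completes):

```python
invitations = 12_000

# Implied response rate for the planning targets quoted in the deck
for completes in (1_200, 1_500):
    print(f"{completes:,} completes -> {completes / invitations:.1%} response rate")

# Invitation-mail envelope at the deck's $3-$5 per-piece range
low, high = 3 * invitations, 5 * invitations
print(f"Invitation mail: ${low:,} to ${high:,}")
# -> Invitation mail: $36,000 to $60,000
```

At the planning volume, invitation mail alone dominates the visible cost components, which is consistent with the table's framing.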
What will move the final budget up or down
How to read this slide
What It Represents
How ARDC Can Use It
Engagement Factors
%%{init: {
"flowchart": { "nodeSpacing": 70, "rankSpacing": 90, "curve": "basis" },
"themeVariables": { "fontSize": "22px", "nodePadding": 18 }
}}%%
flowchart TB
A["Initial License"] --> B["Renewal Window"] --> C["Expiration"] --> D["End of Grace"]
T1["Thriving"] -.-> O1["On-time Renewal"]
T2["Steady"] -.-> O1
T3["Drifting"] -.-> O1
T3 -.-> O3["No Renewal"]
T4["Early Exit"] -.-> O3
T5["Immediate Exit"] -.-> O3
A -.-> T1
A -.-> T2
A -.-> T3
A -.-> T4
A -.-> T5
%% Styling (matched to long-form exhibit palette)
classDef anchor fill:#ffffff,stroke:#111111,stroke-width:2px;
classDef red fill:#fff5f5,stroke:#c53030,stroke-width:2px;
classDef orange fill:#fffaf0,stroke:#dd6b20,stroke-width:2px;
classDef green fill:#f0fff4,stroke:#2f855a,stroke-width:2px;
class B,C,D anchor;
class A,T1,O1 green;
class T2,T3 orange;
class T4,T5,O3 red;
Trajectories are recurring patterns in participant journeys. We will identify the common paths people take through Entry and Continuation activities; these trajectories can serve as models of success or failure. They are like personas, archetypes that describe a journey rather than a person. We will use them to focus investment on the classes of problems that most deserve it.
How People Come into Amateur Radio
How ARDC Can Use This
Initial Discovery
Interest Development
Licensing Intent
License Achievement
| Resource | Required? | Comments |
|---|---|---|
| Consultant | Required | Typically fixed + costs, based on project size and complexity |
| Survey Costs | ||
| Qualitative Discovery | Required | Needs to be done in some form (focus groups / selected interviews) |
| Hosting | Required | $1,000 to $3,000 |
| Implementation | Optional | $3,000 |
| Invitation mail | Required | Typically $3 to $5 per invitation (letter + custom tracking code) |
| Incentive | Optional | Small gift card or chance to win a drawing |
| Reminder mail | Optional | Slightly less than initial mailing (postcard) |
| Full-Service Project Management | Optional | $10,000 to $25,000 |
| Interview Costs | ||
| Impartial interviewer | Optional | Tradeoff: impartiality vs domain expertise |
| Invitations | Optional | Needed only if seeking a representative sample |
| Incentive | Optional | May not be required within the community |
| Transcription and data formatting | Optional | Highly recommended for processing lengthy interviews |
Note: Cost values remain directional until scope lock, module selection, and charter agreement.
Market Research + Strategy
Deep background in market research, product strategy, and evidence-based decision support.
Data-Driven Consulting
Experience helping large organizations translate data and research into practical strategic choices.
Amateur Radio Domain Knowledge
Lifelong amateur radio operator with broad familiarity across operating modes, participation styles, and community culture.
Nonprofit + ARDC Context
Active nonprofit and ARDC committee/leadership experience, including direct familiarity with strategic decision environments.
Why this matters for this proposal
This work benefits from the combination of research capability, ecosystem understanding, and strategic framing.
Multi-Method Research
Designed research programs that combine quantitative and qualitative inputs for product, positioning, and decision-making needs.
Why it matters here: Relevant to ARDC’s two-track mixed-method design.
Longitudinal Benchmarking
Built and operated recurring benchmark programs that track change over time and support trend-based decisions.
Why it matters here: Relevant to durable measurement and knowledge leadership over time.
Customer / User Data Platforms
Led work involving customer data, segmentation, behavioral analysis, and advanced analytics for strategic use.
Why it matters here: Relevant to participation profiling, segmentation, and investment targeting.
Adoption and Friction Research
Conducted large-scale research focused on how people adopt, engage, hesitate, or drop off across journeys and systems.
Why it matters here: Relevant to understanding entry, continuation, attrition, and intervention points.
Proven pattern: translate research into practical frameworks, recurring measurement, and action-oriented decisions.
Core idea: The right sample is the one that supports ARDC’s most important investment decisions with enough confidence to act.
Level 1 — Whole-Population Decisions
Population-wide questions about participation, continuation, and broad investment opportunity areas.
Evidence need: Strong national read
Level 2 — Major-Segment Decisions
Comparisons across large groups such as license class, tenure, and other major segments.
Evidence need: Solid segment comparison
Level 3 — Narrow Subgroup Decisions
Questions about smaller populations, crossed segments, states, or respondent-defined groups.
Evidence need: Deeper precision; may require oversampling
What this means for sample design
When design gets more expensive
Recommended planning posture
What follows: Clearer decision priorities make the sample size, confidence, and cost tradeoffs much easier to define.
Core idea: ARDC can get strong whole-population insight at practical sample sizes. Precision falls as the analysis moves to smaller groups, crossed segments, and state-level minimums.
400 completes
Early directional read; limited subgroup use.
800 completes
Stronger general analysis; some large-segment comparison.
1,200 completes
Recommended planning target; strong population view and usable major-segment analysis.
1,500 completes
Stronger margins for major classes and slightly more room for segment work.
Major license classes at practical sample scale
Using the class distribution in the current licensee population:
Technician: 49.2% · General: 25.1% · Extra: 21.4%
| Sample | Tech | General | Extra |
|---|---|---|---|
| 800 completes | ~393, ±4.9% | ~201, ±6.9% | ~171, ±7.5% |
| 1,200 completes | ~590, ±4.0% | ~302, ±5.6% | ~257, ±6.1% |
At 800 completes, the major license classes are analyzable, but General and Extra remain relatively coarse. At 1,200 completes, class-level analysis becomes materially more useful for investment-oriented comparison.
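The error bands in the table above are standard worst-case margins of error at 95% confidence, ±1.96·√(0.25/n). A minimal sketch reproducing the 1,200-complete row (subgroup counts taken from the table):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error in percentage points at 95% confidence for a subgroup of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Subgroup sizes at 1,200 completes, from the class-distribution table
for label, n in [("Technician", 590), ("General", 302), ("Extra", 257)]:
    print(f"{label}: n={n}, ±{margin_of_error(n):.1f}%")
# -> Technician: n=590, ±4.0%
# -> General: n=302, ±5.6%
# -> Extra: n=257, ±6.1%
```

Using p = 0.5 gives the widest (most conservative) band; margins shrink slightly for results far from 50%.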
Why narrower subgroup ambitions increase sample needs quickly
Illustrative example: if ARDC wanted to expect at least 30 respondents from every state under a simple national random sample, the smallest state would drive the requirement.
Bottom line: Whole-population insight is affordable. State-level minimums are not.
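The arithmetic behind that bottom line is simple: under a uniform national random sample, the smallest state's share of the population sets the total. A sketch under an illustrative assumption (the 0.2% smallest-state share below is hypothetical, not a figure from the proposal):

```python
import math

def required_national_n(min_per_state: int, smallest_state_share: float) -> int:
    """National completes needed so the expected count from the smallest state meets the minimum."""
    return math.ceil(min_per_state / smallest_state_share)

# Hypothetical: the smallest state holds 0.2% of the licensee population
print(required_national_n(30, 0.002))
# -> 15000 completes nationally, an order of magnitude above the 1,200-1,500 planning target
```

And that only guarantees 30 *expected* respondents; guaranteeing 30 with high probability, or oversampling small states directly, pushes cost and complexity further still.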
How to interpret error bands in practice: If a result pattern is 32 / 28 / 21 / 11 / 8 and the relevant margin of error is about ±10 points, it can support broad prioritization, but not precise ranking of nearby options. Wide error bands are best used to identify tiers and meaningful gaps, not to claim false precision.
Bias deserves attention up front. No survey project is bias-free, and this one will carry some risk as well. But Track 1 begins from a stronger position than many studies because it uses a defensible population frame built from FCC ULS records rather than a convenience sample. That lowers sampling risk materially. The more important question for this project is whether the major remaining bias risks are anticipated early and addressed through invitation design, instrument design, fielding discipline, weighting, and careful interpretation.
Nonresponse bias
People who do not respond may differ systematically from those who do, so response patterns must be monitored and corrected where possible.
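Correction usually takes the form of post-stratification weighting: each respondent is weighted by the ratio of their group's population share to its sample share. A minimal sketch, where the population shares are the class percentages quoted earlier in this deck and the sample shares are invented to illustrate a survey in which Technicians under-responded:

```python
# Population shares from the deck's class distribution; sample shares are hypothetical.
population_share = {"Technician": 0.492, "General": 0.251, "Extra": 0.214}
sample_share     = {"Technician": 0.410, "General": 0.300, "Extra": 0.240}

# Post-stratification weight = population share / sample share
weights = {k: population_share[k] / sample_share[k] for k in population_share}
for k, w in weights.items():
    print(f"{k}: weight {w:.2f}")
# -> Technician: weight 1.20
# -> General: weight 0.84
# -> Extra: weight 0.89
```

Weighting restores group proportions but cannot fix bias within a group, which is why fielding discipline and response monitoring still matter.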
Breakoff / incomplete survey bias
If certain respondents are more likely to abandon the survey before finishing, later questions can become less representative than earlier ones.
Recall bias
Respondents may misremember past behavior, timing, or causes, especially when describing earlier participation, lapse, or return.
Measurement bias
Question wording, answer choices, or survey structure can unintentionally shape responses and distort what is being measured.
Social desirability bias
Some respondents may overstate socially valued behaviors or understate disengagement, uncertainty, or less respected motivations.
Implication: In this project, bias control is less about finding a frame and more about disciplined execution and interpretation.
Privacy and data stewardship should be designed in from the start. This project will follow today’s practical privacy-by-design best practices: use public-record data only for clearly bounded research purposes, minimize identity-bearing data, separate outreach identifiers from analysis data, and report results only in aggregated form with conservative safeguards against re-identification.
Major stewardship topics
ARDC Market Research Proposal · Jim Idelson · March 2026