What Drives Participation in Amateur Radio and Digital Communications?

A Market Research Proposal to Inform ARDC’s Grantmaking Strategy

Jim Idelson

2026-03-13

Three Forces are Reshaping Amateur Radio

The environment is not static, and strategy cannot be static either

Disruptive Technologies

  • Innovation has decentralized.
  • Digital modes are accelerating activity.
  • RF and internet paths increasingly blend.

Universal Internet

  • Boundaries are collapsing.
  • Local activity connects globally.
  • Access to stations and communities is broader.

Social Learning Revolution

  • Knowledge spreads through online communities.
  • Learning is faster and distributed.
  • New entrants engage through modern channels.
Participation in amateur radio is being radically reinvented.

Market Knowledge is the Foundation ARDC Needs to Stay Ahead

ARDC’s investment strategy must be informed by today’s market and focused on the future

Evidence-Based Investment

  • Move from intuition to measurable evidence.
  • Evaluate proposals against consistent criteria.
  • Set baselines for impact over time.

Proactive Investment

  • Shape the agenda, not just react.
  • Identify leverage points earlier.
  • Act where intervention has the highest return.

Knowledge Leadership

  • Build durable market-wide insights.
  • Track meaningful trends continuously.
  • Support evidence-guided decisions.
  • A capability built over time.
A data-driven understanding of the amateur radio landscape, how it functions, and how it is changing

Research Objectives

The core questions ARDC needs answered to make stronger, evidence-based investment decisions

  • Is the Licensing Journey Working?
    What factors shape how people discover amateur radio and move from interest to first license?
  • How Do We Launch New Hams?
    What happens in the period immediately after initial licensing? What ensures a successful launch?
  • How Do We Keep Licensees Active Long-Term?
    Which factors most influence long-term continuation, deeper involvement, and progression over time?
  • What Measures are Important?
    What practical metrics should ARDC use to evaluate investments and compare alternatives consistently?
  • Can We Apply Learnings to Real Grants?
    How do we transform this knowledge into a practical framework ARDC can use across future grant cycles?
See “Research Objectives - Expanded” in the Additional Information section for the detailed sub-questions behind each objective.

Understanding the Opportunity Space

Entry and Continuation are two distinct but connected phases of the amateur radio journey


Entry: Becoming a Ham

  • A journey from awareness to interest to first license.
  • Many different paths.
  • Mixed success; friction along the way.
  • No one is licensed at this stage.

Continuation: Being a Ham

  • Trajectory varies for every ham.
  • Potential for increasing involvement or drift and departure.
  • Not everyone engages and continues for the long term.
  • Everyone in this stage is licensed.

Two Tracks: A Research Model that Matches the Real World

Tracks focused on different populations with different challenges

End-to-End Journey View

  • Entry and Continuation are phases of one system.
  • Initial licensing is a critical handoff.
  • Different populations require separate approaches.
  • Trajectories show how participation paths diverge over time.
  • ARDC can pinpoint where investments can improve outcomes anywhere on that journey.
Exhibit 3.1: End-to-end participation flow from entry pathways through initial licensing and continuation outcomes

Acquisition: Bringing in New Licensees

  • Population: Entry stage license candidates.
  • Focus: Upstream entry pathways and friction points.
  • Methods: Survey + qualitative channel insight.
  • Output: Where to fund for stronger entry flow.

Retention: Keeping Licensees Engaged

  • Population: Licensees from the moment of licensing.
  • Focus: Post-license continuation outcomes.
  • Method: Survey of FCC ULS current & prior licensees.
  • Output: Robust benchmarks for engagement and drift.

Track 1: Retention of Current, Licensed Participants

Understand when and how licensee participation increases or fades over time

Current & Former Licensees

  • Capture licensee profiles over the course of the license term journey.
  • Being licensed is not a proxy for participation and involvement.
  • An Engagement Score will surface where the journey goes well and where it breaks down.
  • Sample size of 1,200 completed surveys recommended.
Track 1 focus view of Exhibit 3.1
Who We’ll Learn From | How We’ll Reach Them | What We’ll Learn
Entire licensee population | FCC ULS mail-to-web survey + interviews | Overall population profile
Early-term new licensees | FCC ULS mail-to-web survey | Initial integration experience
Mid-term licensees | FCC ULS mail-to-web survey | Factors influencing long-term participation
Renewal window licensees | FCC ULS mail-to-web survey | Propensity to renew and why
Recently non-renewing licensees | FCC ULS mail-to-web survey | Reasons for non-renewal
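
The license-term segments in this table can be derived directly from ULS dates. A minimal sketch of one way to do it; the cutoffs here (two-year early term, 90-day renewal window, two-year grace period) are planning assumptions to confirm at design lock, not final definitions.

from datetime import date, timedelta

EARLY_TERM = timedelta(days=2 * 365)     # assumption: first two years of the term
RENEWAL_WINDOW = timedelta(days=90)      # assumption: final 90 days before expiration
GRACE_PERIOD = timedelta(days=2 * 365)   # post-expiration grace period

def uls_segment(grant: date, expires: date, today: date) -> str:
    """Classify one licensee into the sampling segments above."""
    if today > expires:
        if today <= expires + GRACE_PERIOD:
            return "recently non-renewing"
        return "out of frame (beyond grace period)"
    if today >= expires - RENEWAL_WINDOW:
        return "renewal window"
    if today <= grant + EARLY_TERM:
        return "early-term new licensee"
    return "mid-term licensee"

print(uls_segment(date(2024, 6, 1), date(2034, 6, 1), date(2026, 3, 13)))
# -> early-term new licensee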

Track 2: Acquisition of New Licensees through Multiple Channels

Understand how people become licensed and the ecosystem that guides that journey

Prospective Licensees

  • Understand the Entry path experience.
  • Capture satisfaction levels.
  • Method: qualitative interviews.

Feeder Channels

  • Feeder Channel Model: map the feeder channel ecosystem.
  • Compare feeder channel performance.
  • Identify investment opportunities.
  • Method: expert interview data collection.
Track 2 focus view of Exhibit 3.1
Who We’ll Learn From | How We’ll Reach Them | What We’ll Learn
Early-term new licensees | FCC ULS mail-to-web survey | Feeder channel mix
Feeder channel leaders | Expert interviews (technology, Emcomm, youth STEM) | Feeder channel description and performance

Two Tracks Working Together

One integrated framework for immediate decisions and long-term knowledge leadership

Tracks 1 + 2 Together Tell the Whole Story

  • The two tracks examine different phases of one end-to-end participation journey.
  • Early new licensee outcomes in Track 1 are linked back to pathways and experiences studied in Track 2.
  • This connection shows where pre-license experience drives durable participation and where it breaks.
  • ARDC gets a clearer basis for deciding where to intervene across the full journey.

Tracks Share Execution and Analytics

  • Expert interviews run in parallel, with overlap where contributors inform both tracks.
  • A shared survey architecture supports both tracks through conditional branching (a minimal sketch follows this list).
  • Engagement Scoring applies across the full journey.
  • ARDC can run recent-licensee cohort tracking and targeted deep dives that cut across the entire journey.
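
To make the shared-architecture point concrete: one instrument can serve both tracks by routing each respondent to the right module. The module names and the 24-month "recent licensee" cutoff below are hypothetical placeholders, not decided instrument design.

from datetime import date

RECENT_MONTHS = 24  # planning assumption for "recent licensee"

def modules_for(respondent: dict, today: date) -> list:
    """Route one respondent to the right modules of the shared instrument."""
    modules = ["core_profile", "engagement_battery"]      # asked of everyone
    granted = respondent["first_license_date"]
    months = (today.year - granted.year) * 12 + (today.month - granted.month)
    if months <= RECENT_MONTHS:
        modules.append("entry_pathways")                  # Track 2 questions
    else:
        modules.append("continuation_outcomes")           # Track 1 questions
    return modules

print(modules_for({"first_license_date": date(2025, 1, 15)}, date(2026, 3, 13)))
# -> ['core_profile', 'engagement_battery', 'entry_pathways']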

Deliverables by Phase

Built continuously during execution, then packaged into decision-ready final outputs

Incremental Deliverables

  • Program charter and design lock
  • ULS sampling frame and segment definitions
  • Survey and interview instrument package
  • Qualitative discovery findings
  • Cognitive test and pilot validation report
  • Survey operations and response tracking
  • Cleaned datasets and data dictionary
  • Cross-track integration matrix

Final Deliverables

  • Track 1 final report (Retention findings)
  • Track 2 final report (Acquisition findings)
  • Feeder channel map and scoring framework
  • Engagement and trajectory baseline metrics
  • Integrated decision kit for grantmaking
  • Executive briefing deck and summary memo

Note: This deliverables set is preliminary and will be refined before engagement begins, then updated during charter development.

Timelines

Workflow summaries for Track 1 and Track 2, with a possible range of 26 to 40 weeks

Track 1 Workflow - Current and Former Licensees

1. Charter and Design Lock

Scope, sample assumptions, quality thresholds, and gate criteria confirmed.

2. Qualitative Discovery and Instrument Build

Expert interviews surface the full breadth of data collection requirements. Qualitative signal is translated into measurement refinements and a survey draft.

3. Cognitive Testing and Pilot Gate

Comprehension and operations validated before full launch approval.

4. Full Survey Execution

ULS sample fielding with response monitoring.

5. Analysis and Track 1 Reporting

Snapshot, trajectories, and investment levers for licensed populations.

-> Output to integration: Decision Kit inputs and board-ready briefings.

Track 2 Workflow - Entry Paths for Prospective Licensees

1. Charter and Design Lock

Scope, target feeder channels, survey sample size, interview information requirements, and gate criteria confirmed.

2. Quantitative Design and Instrument Build

Design survey questions targeted to recent licensees. Plan for a conditional branch when a respondent fits the recent-licensee criteria.

3. Qualitative Design and Instrument Build

Create interview structure for Expert Interviews across key feeder channels.

4. Survey Execution

Launch survey with oversampling for recent licensees.

5. Expert Interviews

Schedule and perform interviews. Capture key feedback in a structured system.

6. Analysis and Track 2 Reporting

Feeder channel map and scoring, plus investment levers for an improved licensing journey.

Early Budget Ranges and Cost Drivers

Several costs are estimable now; final budget depends on scope, sampling, and execution choices

Planning anchor: Assume a target pool of about 12,000 invitations yielding roughly 1,200 to 1,500 responses. That is enough for strong whole-population analysis, plus some support for subgroups. Further review will determine whether any additional outreach or oversampling is required.

Illustrative cost components we can estimate now

Component | Directional range / signal | Why it matters
Invitation mail | $3 to $5 per invitation | Largest visible variable cost at the planning volume
Survey hosting | $1,000 to $3,000 | Baseline platform cost
Survey implementation | ~$3,000 | Optional setup / programming support
Full-service contract house project management | $10,000 to $25,000 | Optional, depending on execution model
Qualitative discovery | Variable | Depends on interview count, structure, and synthesis depth
Reminder mail / incentives | Optional / variable | Can improve response or subgroup yield

Note: Directional ranges shown here are drawn from the proposal’s planning assumptions and estimated cost-component table. Final budget and sequencing should be set after module selection, confidence targets, and execution responsibilities are agreed.
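
To show how the planning anchor and the table combine, a minimal arithmetic sketch using only the figures above; qualitative discovery, incentives, and reminder mail are excluded because the proposal leaves them variable or optional.

INVITATIONS = 12_000

low = (
    INVITATIONS * 3        # invitation mail at $3 each
    + 1_000                # survey hosting, low end
)
high = (
    INVITATIONS * 5        # invitation mail at $5 each
    + 3_000                # survey hosting, high end
    + 3_000                # optional survey implementation
    + 25_000               # optional full-service project management
)

print(f"Estimable components: ${low:,} to ${high:,}")
# -> Estimable components: $37,000 to $91,000

This also shows why mail volume is the biggest early driver: moving from $3 to $5 per invitation shifts the total by $24,000 at the planning volume of 12,000 invitations.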

What will move the final budget up or down

  • Scope and depth of analysis for each track
  • Confidence targets for key segments and sub-populations
  • Oversampling intensity for priority groups
  • Number and depth of expert interviews
  • Internal vs. external execution mix
  • Opportunities for parallelism across the two tracks

How to read this slide

  • These are planning ranges, not a final quote
  • The biggest early driver is likely to be mail volume
  • Contract pricing should allow for flexibility during final charter refinement and scope decisions

Additional Information

Supporting detail, methodology, and reference material

Research Objectives - Expanded

The core questions ARDC needs answered to make stronger, evidence-based investment decisions

  • Is the Licensing Journey Working?
    What factors shape how people discover amateur radio and move from interest to first license?
    • What is the mix of prospective licensees coming into the hobby by area of interest (Emcomm, technology, etc.)?
    • Which feeder channels (clubs, education centers, Emcomm groups, etc.) are most successful, and why or why not?
    • Is investment warranted, and what are ARDC’s best opportunities to improve this ecosystem?
  • How Do We Launch New Hams?
    What happens in the period immediately after initial licensing? What ensures a successful launch?
    • What tools are available to get new hams started? What is working and where are the gaps?
    • Does this differ by interest area? Feeder channel? Demographics?
    • How many licensees drop out very shortly after licensing, and why?
    • How can ARDC investment increase the portion of licensees who fully engage and become long-term hams?
  • How Do We Keep Licensees Active Long-Term?
    Which factors most influence long-term continuation, deeper involvement, and progression over time?
    • What generates excitement and desire to stay involved? Is it technical? Public service? Learning? Social?
    • Who stays? Who drops out? Why?
    • What high-potential programs should ARDC support or initiate?
  • What Measures are Important?
    What practical metrics should ARDC use to evaluate investments and compare alternatives consistently?
    • What does progress look like? What outcomes does ARDC care about?
    • What metrics best reflect progress? How do we capture them?
  • Can We Apply Learnings to Real Grants?
    How do we transform this knowledge into a practical framework ARDC can use across future grant cycles?
    • What tools will help evaluators make informed recommendations?
    • What tools can drive proactive ARDC strategic investing?

Engagement Metric

A single multi-factor signal for participation depth and retention risk

What It Represents

  • A measure of how connected an individual is to amateur radio.
  • A composite score based on multiple dimensions.
  • Supports early detection of likely continuation vs. attrition risk.

How ARDC Can Use It

  • Compare segment health using a common signal.
  • Prioritize interventions for at-risk groups.
  • A baseline for tracking engagement over time (a minimal scoring sketch follows).
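
A minimal sketch of how a composite Engagement Score could be assembled. The dimensions and weights are hypothetical placeholders; the real battery and weighting would be fixed during instrument design.

WEIGHTS = {
    "on_air_activity":   0.35,  # recency/frequency of operating
    "community_ties":    0.25,  # club, net, or event participation
    "skill_progression": 0.20,  # upgrades, new modes, building projects
    "identity":          0.20,  # self-described importance of the hobby
}

def engagement_score(dims: dict) -> float:
    """Combine 0-1 dimension scores into a single 0-100 signal."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return 100 * sum(WEIGHTS[k] * dims[k] for k in WEIGHTS)

# A "Drifting" profile: licensed, some identity, little recent activity.
print(engagement_score({
    "on_air_activity": 0.1,
    "community_ties": 0.2,
    "skill_progression": 0.1,
    "identity": 0.6,
}))  # -> 22.5

Whatever the final dimensions, keeping the score on a fixed 0 to 100 scale is what makes segment comparison and over-time tracking possible.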

Trajectories: Engagement-Driven Outcomes

Engagement Levels are Likely Indicators of High or Low Participation and Renewal Outcomes

%%{init: {
  "flowchart": { "nodeSpacing": 70, "rankSpacing": 90, "curve": "basis" },
  "themeVariables": { "fontSize": "22px", "nodePadding": 18 }
}}%%
flowchart TB
  A["Initial License"] --> B["Renewal Window"] --> C["Expiration"] --> D["End of Grace"]

  T1["Thriving"] -.-> O1["On-time Renewal"]
  T2["Steady"] -.-> O1
  T3["Drifting"] -.-> O1
  T3 -.-> O3["No Renewal"]
  T4["Early Exit"] -.-> O3
  T5["Immediate Exit"] -.-> O3

  A -.-> T1
  A -.-> T2
  A -.-> T3
  A -.-> T4
  A -.-> T5

  %% Styling (matched to long-form exhibit palette)
  classDef anchor fill:#ffffff,stroke:#111111,stroke-width:2px;
  classDef red fill:#fff5f5,stroke:#c53030,stroke-width:2px;
  classDef orange fill:#fffaf0,stroke:#dd6b20,stroke-width:2px;
  classDef green fill:#f0fff4,stroke:#2f855a,stroke-width:2px;

  class B,C,D anchor;
  class A,T1,O1 green;
  class T2,T3 orange;
  class T4,T5,O3 red;

Trajectories are recurring patterns in participant journeys. We expect to find common patterns as people move through Entry and Continuation activities. These trajectories can serve as models of success or failure. A trajectory is like a persona, an archetype, but it describes a journey rather than a person. We will use trajectories to focus our energy on the classes of problems most deserving of investment.

Amateur Radio Feeder Channel Flow

How feeder channels make a difference for people on the licensing journey

How People Come into Amateur Radio

  • There are many ways people arrive at amateur radio — Emcomm, STEM, school club, radiosport, and more.
  • Everyone still moves through the same basic stages from discovery to licensing.
  • Some channels are better at early awareness; others are better at helping people commit and get licensed.
  • Some channels do the full journey; others only support part of it.

How ARDC Can Use This

  • Build a clear map of the feeder-channel ecosystem.
  • Compare different channels using one common progression model (sketched below).
  • Spot where people are getting stuck and where support is missing.
  • Prioritize investments in channels that create stronger movement and better early outcomes after licensing.
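
A minimal sketch of what "one common progression model" could look like: every channel is scored on the same discovery-to-license stages, so conversion and drop-off are comparable across channels. All stage names and counts here are hypothetical; real values would come from Track 2 data.

STAGES = ["aware", "interested", "preparing", "licensed"]

channels = {
    "Emcomm group": [1000, 400, 250, 180],   # hypothetical stage counts
    "School club":  [1000, 600, 200,  90],
}

for name, counts in channels.items():
    # Conversion between consecutive stages exposes where people stall.
    rates = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
    steps = ", ".join(
        f"{STAGES[i]}->{STAGES[i + 1]}: {r:.0%}" for i, r in enumerate(rates)
    )
    print(f"{name}: {steps}")

In this illustration the school club creates more interest but loses people before licensing, which would point investment at its preparing-to-licensed step rather than at awareness.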

Cost Components (Planning Table)

Reference ranges from the proposal to support early budgeting conversations

Resource | Required? | Comments
Consultant | Required | Typically fixed + costs, based on project size and complexity

Survey Costs
Qualitative Discovery | Required | Needs to be done in some form (focus groups / selected interviews)
Hosting | Required | $1,000 to $3,000
Implementation | Optional | $3,000
Invitation mail | Required | Typically $3 to $5 per invitation (letter + custom tracking code)
Incentive | Optional | Small gift card or chance to win a drawing
Reminder mail | Optional | Slightly less than initial mailing (postcard)
Full-Service Project Management | Optional | $10,000 to $25,000

Interview Costs
Impartial interviewer | Optional | Tradeoff: impartiality vs. domain expertise
Invitations | Optional | Needed only if seeking a representative sample
Incentive | Optional | May not be required within the community
Transcription and data formatting | Optional | Highly recommended for processing lengthy interviews

Note: Cost values remain directional until scope lock, module selection, and charter agreement.

Appendix A — About Jim Idelson

Research rigor, domain fluency, and decision-ready framing for this proposal

Market Research + Strategy
Deep background in market research, product strategy, and evidence-based decision support.

Data-Driven Consulting
Experience helping large organizations translate data and research into practical strategic choices.

Amateur Radio Domain Knowledge
Lifelong amateur radio operator with broad familiarity across operating modes, participation styles, and community culture.

Nonprofit + ARDC Context
Active nonprofit and ARDC committee/leadership experience, including direct familiarity with strategic decision environments.

Why this matters for this proposal

  • Combines research rigor with domain familiarity
  • Bridges methodology and practical interpretation
  • Designed to turn findings into decision-ready guidance
  • Well suited to connect evidence with ARDC’s grantmaking choices

This work benefits from the combination of research capability, ecosystem understanding, and strategic framing.

Appendix B — Selected Relevant Research Experience

A repeatable pattern of research design, evidence building, and decision support

Multi-Method Research

Designed research programs that combine quantitative and qualitative inputs for product, positioning, and decision-making needs.

Why it matters here: Relevant to ARDC’s two-track mixed-method design.

Longitudinal Benchmarking

Built and operated recurring benchmark programs that track change over time and support trend-based decisions.

Why it matters here: Relevant to durable measurement and knowledge leadership over time.

Customer / User Data Platforms

Led work involving customer data, segmentation, behavioral analysis, and advanced analytics for strategic use.

Why it matters here: Relevant to participation profiling, segmentation, and investment targeting.

Adoption and Friction Research

Conducted large-scale research focused on how people adopt, engage, hesitate, or drop off across journeys and systems.

Why it matters here: Relevant to understanding entry, continuation, attrition, and intervention points.

Proven pattern: translate research into practical frameworks, recurring measurement, and action-oriented decisions.

Appendix C — Designing the Sample Around ARDC’s Decisions

Sample design should follow ARDC’s investment decisions — not generic survey convention

Core idea: The right sample is the one that supports ARDC’s most important investment decisions with enough confidence to act.

Level 1 — Whole-Population Decisions

Population-wide questions about participation, continuation, and broad investment opportunity areas.

Evidence need: Strong national read

Level 2 — Major-Segment Decisions

Comparisons across large groups such as license class, tenure, and other major segments.

Evidence need: Solid segment comparison

Level 3 — Narrow Subgroup Decisions

Questions about smaller populations, crossed segments, states, or respondent-defined groups.

Evidence need: Deeper precision; may require oversampling

What this means for sample design

  • Start with population-level decisions
  • Protect major strategic segments
  • Add narrower precision selectively
  • Let realized counts guide respondent-defined segments

When design gets more expensive

  • State-by-state minimums
  • Crossed-segment analysis
  • Rare or priority populations
  • Small respondent-defined buckets

Recommended planning posture

  • Start with strong whole-population insight
  • Ensure useful precision for major segments
  • Add oversampling only where it changes investment decisions

What follows: Clearer decision priorities make the sample size, confidence, and cost tradeoffs much easier to define.

Appendix C — Implications for Sample Size and Confidence

Population-level insight is achievable at practical scale; narrower subgroups increase sample needs quickly

Core idea: ARDC can get strong whole-population insight at practical sample sizes. Precision falls as the analysis moves to smaller groups, crossed segments, and state-level minimums.

400 completes

Early directional read; limited subgroup use.

800 completes

Stronger general analysis; some large-segment comparison.

1,200 completes

Recommended planning target; strong population view and usable major-segment analysis.

1,500 completes

Stronger margins for major classes and slightly more room for segment work.

Major license classes at practical sample scale

Using the class distribution in the current licensee population:

Technician: 49.2% · General: 25.1% · Extra: 21.4%

Sample | Tech | General | Extra
800 completes | ~393, ±4.9% | ~201, ±6.9% | ~171, ±7.5%
1,200 completes | ~590, ±4.0% | ~302, ±5.6% | ~257, ±6.1%

At 800 completes, the major license classes are analyzable, but General and Extra remain relatively coarse. At 1,200 completes, class-level analysis becomes materially more useful for investment-oriented comparison.
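
The margins above come from the standard 95% confidence interval for a proportion at worst-case p = 0.5. A minimal sketch that reproduces the table; small rounding differences in the expected counts are immaterial.

import math

CLASS_SHARES = {"Technician": 0.492, "General": 0.251, "Extra": 0.214}

def margin(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error in percentage points, worst case p = 0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for completes in (800, 1200):
    for cls, share in CLASS_SHARES.items():
        n = round(completes * share)
        print(f"{completes} completes | {cls}: n≈{n}, ±{margin(n):.1f} pts")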

Why narrower subgroup ambitions increase sample needs quickly

Illustrative example: if ARDC wanted to expect at least 30 respondents from every state under a simple national random sample, the smallest state would drive the requirement (the arithmetic is sketched after this list).

  • For the 50 states, that implies about 15,200 completes
  • To be highly confident of that minimum, the requirement is closer to 20,000 completes
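
The arithmetic behind those two bullets, under one loudly labeled assumption: the smallest state's share of licensees is taken as roughly 0.2% (the ~15,200 figure implies a share near 0.197%); the true share would come from the ULS frame.

import math

TARGET_MIN = 30            # desired completes in every state
SMALLEST_SHARE = 0.00197   # ASSUMED share of licensees in the smallest state

# Expected-value requirement: n * share >= TARGET_MIN
print(math.ceil(TARGET_MIN / SMALLEST_SHARE))
# -> 15229, matching the ~15,200 figure above

# High-confidence requirement (normal approximation to the binomial):
# the expected count m must satisfy m - 1.96 * sqrt(m) >= TARGET_MIN
m = (1.96 / 2 + math.sqrt(1.96**2 / 4 + TARGET_MIN)) ** 2   # m ~= 43
print(math.ceil(m / SMALLEST_SHARE))
# -> ~21,700, the same ballpark as the "closer to 20,000" figure above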

Bottom line: Whole-population insight is affordable. State-level minimums are not.

How to interpret error bands in practice: If a result pattern is 32 / 28 / 21 / 11 / 8 and the relevant margin of error is about ±10 points, it can support broad prioritization, but not precise ranking of nearby options. Wide error bands are best used to identify tiers and meaningful gaps, not false precision.

Appendix D — Bias Risk and Bias Control

All survey projects carry bias risk, but this one starts from a stronger sampling foundation

Bias deserves attention up front. No survey project is bias-free, and this one will carry some risk as well. But Track 1 begins from a stronger position than many studies because it uses a defensible population frame built from FCC ULS records rather than a convenience sample. That lowers sampling risk materially. The more important question for this project is whether the major remaining bias risks are anticipated early and addressed through invitation design, instrument design, fielding discipline, weighting, and careful interpretation.

Nonresponse bias
People who do not respond may differ systematically from those who do, so response patterns must be monitored and corrected where possible.

Breakoff / incomplete survey bias
If certain respondents are more likely to abandon the survey before finishing, later questions can become less representative than earlier ones.

Recall bias
Respondents may misremember past behavior, timing, or causes, especially when describing earlier participation, lapse, or return.

Measurement bias
Question wording, answer choices, or survey structure can unintentionally shape responses and distort what is being measured.

Social desirability bias
Some respondents may overstate socially valued behaviors or understate disengagement, uncertainty, or less respected motivations.

Implication: In this project, bias control is less about finding a frame and more about disciplined execution and interpretation.
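
As one concrete example of the weighting lever named above, a minimal post-stratification sketch. The class shares are the ones used elsewhere in this proposal (remaining license classes omitted, as in the table above); the achieved sample counts are hypothetical.

# Scale respondents so the achieved sample matches known population shares.
population_share = {"Technician": 0.492, "General": 0.251, "Extra": 0.214}
sample_counts    = {"Technician": 540,   "General": 330,   "Extra": 290}  # hypothetical

n = sum(sample_counts.values())
weights = {
    cls: population_share[cls] / (sample_counts[cls] / n)
    for cls in population_share
}
print(weights)
# Technician ~1.06 (under-represented, weighted up);
# Extra ~0.86 (over-represented, weighted down)

The same mechanics extend to additional cells such as tenure or geography as part of the weighting plan.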

Appendix E — Privacy and Data Stewardship

Privacy and data stewardship should be designed in from the start. This project will follow today’s practical privacy-by-design best practices: use public-record data only for clearly bounded research purposes, minimize identity-bearing data, separate outreach identifiers from analysis data, and report results only in aggregated form with conservative safeguards against re-identification.

Major stewardship topics

  • Purpose limitation: use data only for defined research objectives
  • Data minimization: collect and retain only what is needed
  • Separation of outreach identifiers from analysis data (sketched at the end of this appendix)
  • No enhanced directory / no re-publication of licensee-level data
  • Security controls for access, transfer, and storage
  • Aggregate reporting with suppression and disclosure controls
  • Transparency and voluntary participation
  • Opt-out support throughout outreach and data handling
  • Retention limits and deletion confirmation
  • DPIA-style review / governance before launch
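
To illustrate the separation of outreach identifiers from analysis data, a minimal sketch. Field names are hypothetical; the point is the split and the random, non-derived token.

import secrets

def split_record(uls_record: dict) -> tuple:
    """Split one ULS-derived record into outreach and analysis halves."""
    token = secrets.token_hex(16)      # random token, not derived from identity
    outreach = {                       # goes to the mail house only
        "token": token,
        "name": uls_record["name"],
        "address": uls_record["address"],
    }
    analysis = {                       # goes to the research dataset
        "token": token,
        "license_class": uls_record["license_class"],
        "grant_year": uls_record["grant_year"],
    }
    return outreach, analysis

Because the token is random rather than derived from a call sign, the analysis file cannot be re-linked to identity without the separately controlled outreach table, which supports the no-re-publication commitment above.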