The single biggest predictor of whether your SBIR application gets funded isn't your team, your market, or your writing quality. It's whether federal reviewers classify your innovation as genuine R&D or as product development.
SBIR innovation requirements demand genuine technical risk and scientific novelty -- not a better version of something that already exists. Most founders get this classification wrong, and the mistake costs 3-6 months of wasted effort per application.
What determines whether your technology qualifies as R&D for SBIR? Federal agencies like NSF and ARPA-H classify innovations into three tiers: new scientific principles (Tier A), novel applications of known science (Tier B), and engineering optimization (Tier C). Each tier has hard score boundaries -- Tier C proposals hit a ceiling of 4 out of 9, making them unfundable regardless of writing quality.
At Cada, we've written 50+ SBIR applications with an 86% success rate. We classify every technology into one of these three tiers before recommending a grant program. That classification determines which agencies are realistic targets, what score range to expect, and whether an application is worth the 40-100+ hours it takes to write. Here's the exact framework we use.
The A/B/C Innovation Classification: How SBIR R&D Requirements Actually Work
Federal grant reviewers at NSF and ARPA-H don't evaluate "innovation" as a single spectrum. They classify it into distinct tiers, and each tier has hard score boundaries that determine whether your application can succeed -- regardless of how well it's written.
Tier A: New Scientific Principle or Method
You're creating knowledge that doesn't exist. A new therapeutic mechanism, a novel measurement principle, a previously undemonstrated computational architecture. The approach itself is the innovation, not just the outcome.
- NSF score floor: 7 out of 9
- ARPA-H score floor: 7 out of 10
- What reviewers look for: "Has this been demonstrated before? Is the method itself novel?"
Tier B: Novel Application of Known Science to a New Domain
The underlying science exists, but nobody has applied it this way. You're taking a proven principle and using it to solve a problem in a domain where it hasn't been tried.
- NSF score floor: 5 out of 9
- ARPA-H score floor: 5 out of 10
- What reviewers look for: "Is this a genuinely new application, or is it an obvious extension?"
Tier C: Engineering Optimization of Existing Approaches
You're making something better, faster, or cheaper without a fundamental technical leap. The science is known, the application is known -- you're improving the implementation.
- NSF score ceiling: 4 out of 9
- ARPA-H score ceiling: 4 out of 10
- What reviewers look for: "Could a competent engineering team do this with existing knowledge?"
That score ceiling on Tier C is the critical insight. A Tier C proposal can be brilliantly written with a massive market opportunity, and it will still score below the funding line. NSF reviewers explicitly distinguish R&D from product development. ARPA-H program managers require "non-incremental" innovation -- 10x improvement, not 10% optimization.
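To make that ceiling concrete, here's a minimal sketch (our illustration in Python, not an agency tool) that encodes the tier boundaries above as data and checks whether each tier can clear a funding line. The funding line of 5 is our assumption for illustration -- actual cutoffs vary by agency and cycle -- and the Tier A/B ceilings are simply the scale maximums:

```python
# Illustrative sketch only: encodes the A/B/C tier boundaries described
# above. The funding-line values are assumptions for illustration; actual
# cutoffs vary by agency and cycle.

TIER_BOUNDS = {
    # tier: {agency: (score floor, score ceiling) on that agency's scale}
    "A": {"NSF": (7, 9), "ARPA-H": (7, 10)},
    "B": {"NSF": (5, 9), "ARPA-H": (5, 10)},
    "C": {"NSF": (0, 4), "ARPA-H": (0, 4)},
}

FUNDING_LINE = {"NSF": 5, "ARPA-H": 5}  # assumed, not official

def can_clear_funding_line(tier: str, agency: str) -> bool:
    """Tier C fails no matter how well it's written: its score
    ceiling sits below the funding line."""
    _, ceiling = TIER_BOUNDS[tier][agency]
    return ceiling >= FUNDING_LINE[agency]

for tier in "ABC":
    print(tier, {a: can_clear_funding_line(tier, a) for a in ("NSF", "ARPA-H")})
# Tiers A and B can clear the line; Tier C cannot.
```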
Does Your Technology Qualify for SBIR? The Self-Assessment
Answer these four questions honestly. Each one maps to a classification tier.
Question 1: What's the nature of your technical risk?
- "Will this work at all?" (unknown feasibility) -- Points toward Tier A. Your core approach hasn't been demonstrated. You're testing whether a principle or method can produce a desired outcome.
- "Will this work in this context?" (proven elsewhere, new domain) -- Points toward Tier B. The underlying science works, but applying it to your specific problem introduces technical uncertainty.
- "Can we make this work better/faster/cheaper?" (optimization) -- Points toward Tier C. The approach works. You're engineering improvements to performance, cost, or scale.
Question 2: Could a skilled engineering team replicate your approach using existing knowledge?
If yes -- if a team with the right domain expertise could build what you're building using published methods, existing frameworks, and known techniques -- your innovation is likely Tier C. The grant-funded R&D component must require creating new knowledge, not applying known knowledge more skillfully.
Question 3: What's novel -- your method or your outcome?
- Novel method producing a novel outcome -- Tier A
- Known method producing a novel outcome in a new domain -- Tier B
- Known method producing a better version of a known outcome -- Tier C
This distinction trips up many applied-technology founders. Building the first AI-powered inventory management system is a novel outcome, but if the AI architecture is conventional, the innovation is Tier C in the eyes of NSF/ARPA-H reviewers.
Question 4: The ARPA-H 10x test
Can you identify at least one quantifiable metric where your technology achieves >= 10x improvement over the current standard of care or practice? Not 2x. Not "significantly better." Ten times better on a specific, measurable dimension with a sourced baseline.
If yes, you likely have a Tier A or B innovation for ARPA-H. If your best metric shows 2-3x improvement, you're in Tier C territory for ARPA-H (though potentially Tier B for NSF, which doesn't require the 10x threshold).
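If it helps to see the four questions as explicit logic, here's a rough decision-tree sketch. The parameter names and answer encodings are hypothetical, invented for this illustration -- no reviewer scores applications this mechanically:

```python
# A rough decision-tree sketch of the four-question diagnostic above.
# Parameter names and answer encodings are hypothetical.

def classify_tier(risk: str,
                  replicable_with_known_methods: bool,
                  method_is_novel: bool,
                  best_improvement_ratio: float,
                  agency: str = "NSF") -> str:
    """risk is one of: 'will_it_work_at_all', 'new_domain', 'optimization'."""
    # Question 2: a skilled team could replicate it -> likely Tier C
    if replicable_with_known_methods:
        return "C"
    # Question 4: ARPA-H applies the 10x non-incremental test
    if agency == "ARPA-H" and best_improvement_ratio < 10:
        return "C"
    # Questions 1 and 3: nature of the risk and locus of novelty
    if risk == "will_it_work_at_all" and method_is_novel:
        return "A"
    if risk == "new_domain":
        return "B"
    return "C"

# A known technique applied to an unproven domain with 3x gains:
print(classify_tier("new_domain", False, False, 3, agency="NSF"))     # B
print(classify_tier("new_domain", False, False, 3, agency="ARPA-H"))  # C
```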
Why Innovation Classification Determines Your Grant Outcome
The classification framework isn't an academic exercise. It directly controls your application's score range before a reviewer reads a single word of your proposal.
At NSF: The Innovation Classification Gate
NSF SBIR reviewers score "Technical Risk & Innovation" as a core criterion. Their primary question: "Is the R&D genuinely high-risk/high-reward and scientifically sound, or is this incremental engineering dressed up as research?"
Based on Cada's analysis of NSF review patterns across 50+ applications, this classification is the single most important determinant of NSF SBIR success. Engineering optimizations that score 5-6 on a general "innovation" scale feel competitive to the applicant, but NSF reviewers who distinguish R&D from product development will score them at the Tier C ceiling of 4.
Common NSF decline patterns tied to classification:
- "Incremental improvement, not R&D breakthrough"
- "Objectives describe product development, not R&D"
- "Technical approach has no novelty (replicates prior funded work)"
- Risk framed as "we need funding" rather than "this hasn't been demonstrated scientifically"
At ARPA-H: The 10x Non-Incremental Test
ARPA-H is even more explicit. Their mission statement says it: "non-incremental, transformative health technologies that cannot be achieved through conventional approaches."
Every ARPA-H submission requires a quantitative metrics comparison table showing current standard versus your proposed technology. At least one metric must show >= 10x improvement, with a sourced baseline.
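As a sketch of what that comparison might look like as data -- the metrics, values, and citation labels below are hypothetical, invented only to show the structure:

```python
# Hypothetical metrics-comparison rows: invented values, not from any
# real submission. Each row needs a sourced baseline.

metrics = [
    {"metric": "time to detection (days)", "current": 14.0, "proposed": 1.0,
     "baseline_source": "(hypothetical citation)"},
    {"metric": "per-test cost (USD)", "current": 120.0, "proposed": 60.0,
     "baseline_source": "(hypothetical citation)"},
]

def improvement_ratio(current: float, proposed: float) -> float:
    """For lower-is-better metrics (time, cost): improvement = current / proposed."""
    return current / proposed

def meets_10x_test(rows) -> bool:
    """At least one metric must show >= 10x improvement over its baseline."""
    return any(improvement_ratio(r["current"], r["proposed"]) >= 10 for r in rows)

print(meets_10x_test(metrics))  # True: 14 days -> 1 day is a 14x improvement
```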
"Better, faster, cheaper" without a mechanism change is banned language -- literally. Based on Cada's analysis of ARPA-H submission feedback and review patterns, terms like "optimization," "enhancement," "improvement (without quantified 10x basis)," "incremental," and "evolutionary" signal the wrong innovation tier and trigger immediate red flags.
Where NIH funds hypothesis-driven basic research and accepts incremental advances, ARPA-H explicitly rejects that framing. Using NIH language ("hypothesis-driven," "exploratory study," "preliminary data suggests") in an ARPA-H submission signals you don't understand the agency's culture.
What To Do With Your Classification
If You're Tier A: Strong NSF and ARPA-H Fit
Your innovation clears the classification gate at both agencies. Your challenge isn't proving novelty -- it's proving feasibility. Reviewers will ask: "This is ambitious, but can you actually do it?"
Focus your application on:
- Team credentials that match the specific R&D proposed
- Concrete milestones with measurable success criteria
- A realistic timeline (12 months for an NSF Phase I at $275K; the base period for ARPA-H)
- Evidence that your approach is feasible, not just novel
If You're Tier B: Good Fit, But Framing Is Everything
Tier B is where most fundable technologies land, and where framing makes the difference between a 5 (borderline) and a 7 (competitive). The same technology can read as Tier B or Tier C depending on how you write it.
The framing test: Does your proposal emphasize what's new about applying known science to this domain? Or does it read like you're engineering a product?
Consider a company building a hyperspectral imaging system for real-time soil nutrient analysis. The underlying spectroscopy science exists -- it's used in mining and food processing. But applying it to in-field agricultural soil analysis with variable moisture, organic matter, and lighting conditions -- that's a novel application with genuine technical uncertainty. Framed as "we're building a better soil sensor," it's Tier C. Framed as "we're adapting hyperspectral imaging to a domain where it hasn't been validated, with unsolved challenges around moisture interference and real-time calibration," it's Tier B.
NSF wants the novel application clearly separated from the known science. ARPA-H wants the 10x metric quantified. Both want R&D objectives, not product development tasks.
If You're Tier C: SBIR May Not Be Your Path
This is the hardest message to deliver, but it saves months: if your innovation is genuinely Tier C at NSF and ARPA-H, a standard SBIR application to those agencies will not succeed. The score ceiling means even a perfectly written proposal can't cross the funding threshold.
Alternatives to consider:
- DoD SBIR programs (AFWERX, Army xTech) -- more product-development tolerant, often seeking "better, faster, cheaper" solutions for specific military needs
- State-level grants -- many states fund product development, not just R&D
- Revenue-based financing -- if your technology works and has market traction, non-dilutive options exist outside grants
- Reframing -- before giving up on NSF/ARPA-H, test whether your technology has a Tier B component (see next section)
The Reframing Test: Can You Move From C to B?
Some Tier C technologies have a Tier B component buried inside. The test: can you isolate a technical risk that requires creating new knowledge?
Example 1: From product development to novel architecture
- Tier C framing: "Improving our drone's battery life with better power management"
- Tier B framing: "Developing a novel bio-inspired energy harvesting system that converts ambient thermal gradients into supplemental flight power -- a thermoelectric principle proven in industrial waste heat recovery but never miniaturized for aerial platforms"
The difference: the first describes optimization of existing methods. The second identifies a specific technical challenge (miniaturized thermoelectric harvesting) that requires creating new engineering knowledge.
Example 2: From optimization to novel application
- Tier C framing: "Building a better soil sensor for farmers"
- Tier B framing: "Adapting hyperspectral imaging for real-time in-field soil nutrient analysis -- a measurement technique proven in mining and food processing but never validated under variable outdoor conditions with heterogeneous soil matrices"
The difference: the first is product development. The second identifies the novel application of known measurement science to an unproven domain.
A critical warning: Reframing must be honest. If the R&D component is a thin veneer over product development, reviewers will catch it.
NSF's review specifically asks: "Is the approach itself innovative, or just the outcome?" ARPA-H program managers evaluate against the Heilmeier Catechism, which explicitly asks: "What is new in your approach and why do you think it will be successful?" A "new mechanism" must be real, not cosmetic.
Frequently Asked Questions
Can a software or SaaS company qualify for SBIR?
Yes -- if the software requires creating new algorithms, methods, or computational architectures (Tier A or B). Building features on existing frameworks like TensorFlow, React, or standard cloud infrastructure is Tier C. The R&D must be in the software's core approach, not in its market application.
NSF funds software R&D regularly. The key: your Phase I objectives must describe technical research, not product development milestones. "Build user dashboard" is a product task. "Develop and validate a novel graph-based recommendation algorithm that achieves < 50ms latency on sparse datasets" is R&D.
My technology has been patented. Does that mean it qualifies as R&D?
Not necessarily. Patents protect implementation, not scientific novelty in the grant reviewer's sense. The question is whether your next phase of work -- the work you're proposing to do with SBIR funding -- involves genuine R&D with technical risk. A patented product that needs engineering refinement is still Tier C.
What if my technology classifies differently at different agencies?
This happens regularly. A technology that's Tier B at NSF (novel application of known science) might be Tier C at ARPA-H (doesn't meet the 10x threshold). Conversely, a breakthrough health technology that's Tier A at ARPA-H might face challenges at NSF if it doesn't fit a current topic area.
Cada's roadmap assessment includes per-agency classification specifically because a single technology can have different grant paths depending on which agency evaluates it.
I've been told ARPA-H only funds moonshots. Is that true?
ARPA-H requires non-incremental innovation (10x improvement, not 10% optimization), but that doesn't mean science fiction. It means a new mechanism or approach that produces dramatically better health outcomes than current practice. A novel diagnostic that detects cancer 10x earlier than current screening is non-incremental. A marginally better version of an existing diagnostic is not.
The bar is high but specific: at least one quantifiable metric must show >= 10x improvement with a sourced baseline. That's the test.
How long does the classification assessment take?
For most technologies, 15-30 minutes of structured conversation with someone who knows the agency review criteria. The four diagnostic questions above give you a strong preliminary signal. A professional assessment (like Cada's free 15-minute call) adds agency-specific nuance and catches common framing mistakes.
Not Sure Where Your Technology Falls?
Innovation classification is the first question to answer -- before investing 40-100+ hours in a grant application. Getting it wrong means either missing a strong funding opportunity (Tier B founders who assume they're Tier C) or wasting months on an application that can't succeed (Tier C founders who don't realize the score ceiling exists).
Cada's roadmap assessment includes innovation classification as part of agency fit analysis. We classify your technology against the A/B/C framework, test it against the ARPA-H 10x threshold, and recommend which agencies (if any) are realistic targets. 15 minutes, no pitch, straight answer.
