Our Approach
We built something grant consultants can't, and ChatGPT won't.
Most founders face a choice: expensive consultants who take months, or AI tools that generate generic text reviewers reject. We combined the best of both into something neither can do alone.
The problem with the status quo
The $20K Consultant
- Charges $15K--$30K retainer
- 6--8 weeks per proposal
- Paid regardless of outcome
- One proposal at a time
- Limited to agencies they know
The DIY AI Approach
- Free (or nearly free)
- Fast output
- No agency-specific knowledge
- Generic text that scores poorly
- No competitive strategy
- No eligibility verification
- No awareness of agency AI policies
Both approaches leave money on the table. One is slow and expensive. The other is fast and wrong.
Cada's three layers
Each layer builds on the last. Together, they produce proposals built to score well under each agency's own evaluation criteria.
Agency-Specific Intelligence
Every agency scores differently. NSF uses dual-reviewer panels. NIH runs study sections on a 1--9 scale. DARPA evaluates against the Heilmeier Catechism. ARPA-H wants 10x improvements, not incremental ones.
We've built playbooks calibrated to how each agency evaluates -- including their AI policies, disclosure requirements, and compliance rules. The same workflow that's fine at NSF could get your NIH application thrown out.
30+ agencies mapped
Review Simulation
Before you submit, your proposal gets scored by the same criteria real reviewers use.
NSF proposals face a dual-reviewer simulation. NIH proposals go through a full study section. ARPA-H submissions get evaluated against the program manager's priorities.
This catches the problems that look fine to you but tank your score.
Expert Strategic Framing
AI can write text. It can't decide which of the 40+ open solicitations your technology should target, how to position your innovation against what reviewers have already seen, or which eligibility traps will disqualify you.
That's judgment -- built from writing hundreds of proposals across 30+ agencies.
But why can't I just use AI myself?
You absolutely can start there. Here are five things founders consistently miss when they go it alone.
Wrong agency match
AI will suggest SBIR because it's the most-documented program. But your technology might score higher at ARPA-H, or qualify for a DOE FOA that AI doesn't know exists because the solicitation was posted last week.
Eligibility traps
NIH requires PI citizenship for some mechanisms but not others. STTR requires a research institution partner. AFWERX has a completely different application format. These aren't in the training data -- they're in the FOA fine print.
Competitive framing
Saying "our approach is novel" scores a 3. Saying "existing approaches fail because [specific limitation], and our mechanism addresses this by [specific technical differentiation]" scores a 1. Reviewers see hundreds of proposals. Framing is the difference.
No verification
AI will confidently recommend programs that closed last year, cite award amounts that changed, and miss that the solicitation you're targeting requires a nonprofit lead applicant. We verify every opportunity against live sources.
AI compliance risk
NIH will reject applications "substantially developed by AI" and is actively scanning submissions with detection tools. NSF requires disclosure. DOD has no formal policy, but reviewers penalize AI-generated content. The rules are different at every agency -- and off-the-shelf AI doesn't know which ones apply to your submission.
How Cada compares
| | Traditional Consultant | DIY with AI | Cada |
|---|---|---|---|
| Upfront cost | $15K--$30K retainer | Free | Free roadmap |
| Revenue model | Paid regardless of outcome | N/A | Success fee -- paid when you're funded |
| Timeline | 6--8 weeks per proposal | Hours (no strategy) | Days to weeks |
| Agency knowledge | Limited to their network | Generic training data | 30+ agencies, calibrated playbooks |
| Proposal quality | Variable, depends on writer | Generic, scores poorly | Agency-specific, review-simulated |
| Incentive alignment | Bill more hours | None | Only wins when you win |
| AI compliance | May not understand new rules | Your risk entirely | Agency-specific AI policies built into every playbook |
Every agency has different AI rules
There is no government-wide standard. Each agency sets its own policy on AI use in proposals -- and the penalties for getting it wrong range from rejection to misconduct investigation.
NIH
Applications "substantially developed by AI" will not be reviewed. NIH is actively scanning submissions with AI-detection tools.
Post-award detection can trigger an Office of Research Integrity referral, cost disallowance, and grant termination.
NSF
AI assistance is permitted, but proposers are encouraged to disclose AI use in the Project Description. The PI is fully responsible for accuracy.
Fabrication, falsification, or plagiarism via AI tools constitutes research misconduct -- same as if a person did it.
DOD
No published prohibition on AI in SBIR proposals. But reviewers are identifying and penalizing AI-generated content -- over 64% of flagged applications fail initial triage.
Separate risk: inputting ITAR or CUI-controlled information into commercial AI tools could constitute an unauthorized disclosure.
DARPA
No blanket AI policy for BAA responses. Individual solicitations may have program-specific requirements.
DARPA's culture prizes novel, boundary-pushing proposals -- generic AI-generated content is particularly disadvantageous here.
ARPA-H
No restriction on proposer AI use. Notably, ARPA-H has disclosed that it is piloting LLM tools in its own review process to "organize, summarize, and surface key information."
Human reviewers still make all funding decisions. ARPA-H is under HHS but is not bound by NIH's policy.
DOE
DOE's generative AI policy (DOE P 203.1) explicitly "does not apply to recipients of financial assistance from DOE." Grant applicants are not bound by it.
Individual FOAs may contain program-specific language. No department-wide prohibition on AI-assisted proposals.
See which grants fit your technology
Free assessment. No obligation. Takes 10 minutes.
Get Your Grant Roadmap