Hundreds of Proposals Written
Federal, State & Foundation Grants

Our Approach

We built something grant consultants can't, and ChatGPT won't.

Most founders face a choice: expensive consultants who take months, or AI tools that generate generic text that reviewers reject. We combined the best of both into something neither can do alone.


The problem with the status quo

The $20K Consultant

  • Charges a $15K--$30K retainer
  • 6--8 weeks per proposal
  • Paid regardless of outcome
  • One proposal at a time
  • Limited to agencies they know

The DIY AI Approach

  • Free (or nearly free)
  • Fast output
  • No agency-specific knowledge
  • Generic text that scores poorly
  • No competitive strategy
  • No eligibility verification
  • No awareness of agency AI policies

Both approaches leave money on the table. One is slow and expensive. The other is fast and wrong.


Cada's three layers

Each layer builds on the last. Together, they produce proposals engineered to score well against the criteria each agency actually uses to decide what gets funded.

01 -- Agency-Specific Intelligence

Every agency scores differently. NSF convenes merit review panels. NIH runs study sections on a 1--9 scale. DARPA evaluates against the Heilmeier Catechism. ARPA-H wants 10x, not incremental.

We've built playbooks calibrated to how each agency evaluates -- including its AI policies, disclosure requirements, and compliance rules. The same workflow that's fine at NSF could get your NIH application thrown out.

30+ agencies mapped
02 -- Review Simulation

Before you submit, your proposal gets scored against the same criteria real reviewers use.

NSF proposals face a dual-reviewer simulation. NIH proposals go through a full study section. ARPA-H submissions get evaluated against the program manager's priorities.

This catches the problems that look fine to you but tank your score.

03 -- Expert Strategic Framing

AI can write text. It can't decide which of the 40+ open solicitations your technology should target, how to position your innovation against what reviewers have already seen, or which eligibility traps will disqualify you.

That's judgment -- built from writing hundreds of proposals across 30+ agencies.


But why can't I just use AI myself?

You absolutely can start there. Here are five things founders consistently miss when they go it alone.

Wrong agency match

AI will suggest SBIR because it's the most-documented program. But your technology might score higher at ARPA-H, or qualify for a DOE FOA that AI doesn't know exists because the solicitation was posted last week.

Eligibility traps

NIH requires PI citizenship for some mechanisms but not others. STTR requires a research institution partner. AFWERX has a completely different application format. These aren't in the training data -- they're in the FOA fine print.

Competitive framing

Saying "our approach is novel" scores a 3. Saying "existing approaches fail because [specific limitation], and our mechanism addresses this by [specific technical differentiation]" scores a 1. Reviewers see hundreds of proposals. Framing is the difference.

No verification

AI will confidently recommend programs that closed last year, cite award amounts that changed, and miss that the solicitation you're targeting requires a nonprofit lead applicant. We verify every opportunity against live sources.

AI compliance risk

NIH will reject applications "substantially developed by AI" and is actively scanning submissions with detection tools. NSF requires disclosure. DOD has no formal policy, but reviewers penalize AI-generated content. The rules are different at every agency -- and off-the-shelf AI doesn't know which ones apply to your submission.


How Cada compares

                     | Traditional Consultant       | DIY with AI            | Cada
Upfront cost         | $15K--$30K retainer          | Free                   | Free roadmap
Revenue model        | Paid regardless of outcome   | N/A                    | Success fee -- paid when you're funded
Timeline             | 6--8 weeks per proposal      | Hours (no strategy)    | Days to weeks
Agency knowledge     | Limited to their network     | Generic training data  | 30+ agencies, calibrated playbooks
Proposal quality     | Variable, depends on writer  | Generic, scores poorly | Agency-specific, review-simulated
Incentive alignment  | Bill more hours              | None                   | Only wins when you win
AI compliance        | May not understand new rules | Your risk entirely     | Agency-specific AI policies built into every playbook

Every agency has different AI rules

There is no government-wide standard. Each agency sets its own policy on AI use in proposals -- and the penalties for getting it wrong range from rejection to misconduct investigation.

NIH -- Restrictive

Applications "substantially developed by AI" will not be reviewed. NIH is actively scanning submissions with AI-detection tools.

Post-award detection can trigger an Office of Research Integrity referral, cost disallowance, and grant termination.

NOT-OD-25-132, effective Sept 2025
NSF -- Disclosure Required

AI assistance is permitted, provided proposers disclose AI use in the Project Description. The PI is fully responsible for accuracy.

Fabrication, falsification, or plagiarism via AI tools constitutes research misconduct -- same as if a person did it.

PAPPG 24-1, Supplement 1, Dec 2025
DOD / AFWERX -- No Formal Policy

No published prohibition on AI in SBIR proposals. But reviewers are identifying and penalizing AI-generated content -- over 64% of flagged applications fail initial triage.

Separate risk: entering ITAR-controlled data or CUI into commercial AI tools could constitute an unauthorized disclosure.

No formal directive as of April 2026
DARPA -- No Formal Policy

No blanket AI policy for BAA responses. Individual solicitations may have program-specific requirements.

DARPA's culture prizes novel, boundary-pushing proposals -- generic AI-generated content is particularly disadvantageous here.

Check each BAA individually
ARPA-H -- AI on Both Sides

No restriction on proposer AI use. Notably, ARPA-H has disclosed that it is piloting LLM tools in its own review process to "organize, summarize, and surface key information."

Human reviewers still make all funding decisions. ARPA-H is under HHS but is not bound by NIH's policy.

ARPA-H ADVOCATE program statement, 2026
DOE -- No Applicant Policy

DOE's generative AI policy (DOE P 203.1) explicitly "does not apply to recipients of financial assistance from DOE." Grant applicants are not bound by it.

Individual FOAs may contain program-specific language. No department-wide prohibition on AI-assisted proposals.

DOE P 203.1, Dec 2025

This is why "just use ChatGPT" is risky. The same AI-assisted workflow that's fully compliant at NSF could get your NIH application rejected -- or trigger a misconduct investigation. We track these policies across every agency we work with and build compliance into every proposal.

See which grants fit your technology

Free assessment. No obligation. Takes 10 minutes.

Get Your Grant Roadmap