AI, R&D Grants, and the Tragedy of the Commons

 

The Signal and the Noise

While scanning through the latest batch of EU-funded R&D projects, I noticed something odd. The proposals were polished, almost too polished. Buzzwords landed just right. Impact sections flowed with mechanical precision. Even the risk matrices read like algorithmic poetry.

It felt familiar, but not in a good way. As if they’d all been passed through the same invisible machine.

They had. And in a way, so had mine.

A Confession, from Inside the System

I’ve spent years helping innovators secure public funding, from Horizon Europe to national R&D schemes. But I don’t just ghostwrite or polish structure. I get involved. Deeply.

I bring my own science and innovation background to the table. I question assumptions, sharpen the IP, rethink the user journey. When done right, a grant proposal isn't just a document, it's a blueprint for a stronger, clearer innovation strategy.

And yet, I also use generative AI tools. They’re too useful not to. I use them to iterate, to tighten language, to explore call texts. They’re part of my workflow now, just as they are for many others across science, technology, and policy. We’re all part of this sudden acceleration, a collective boost in capacity that risks overwhelming the very system we depend on.

And I wonder: are we building better ideas, or just better artifacts?

Proposal as a Product (PaaP)

In systems terms, we’re experiencing a kind of feedback breakdown.

Generative tools have decoupled effort from output. A proposal that once took months can now take days, if not hours (and the same is true in other fields). So more people write, and more people submit. Proposal quality, at least on paper, skyrockets. But funding rates have stayed the same. The bottleneck shifts to reviewers, who now face a tsunami of near-identical excellence and must select the top 10-20 proposals out of 1,000 near-excellent ones: a 1-2% success rate, with 98-99% of proposals rejected and binned.

The result? The proposal becomes a product, optimized to survive a scoring rubric rather than express a real innovation. Impact pathways are simulated. Ethics boxes are ticked. AI helps you say exactly what reviewers want to hear, even if it’s not exactly what you mean.

Toolmakers and the Tragedy of Scale

Some of the tools driving this change were built with good intentions: “to democratize access, to reduce bureaucratic friction, to empower SMEs.”

But they may end up breaking the commons.

By making it trivially easy to submit high-polish proposals, we risk saturating the ecosystem. Reviewers burn out. Success rates plummet. The signal gets buried under procedural noise. And the very innovators these systems were meant to empower now face an even steeper climb.

We tool users, and tool builders, must take a harder look at the unintended consequences of scale.

Let’s Go Deeper

Fact: AI is here, and it’s not going away. But we need to talk about how we use it, and what happens when we don't draw a line between copilot and ghostwriter.

Copilot vs. Ghostwriter: The Invisible Line

  • Using AI as a copilot means it supports your thinking. You feed it your structure, insights, data points. It sharpens, rephrases, and accelerates. But the ideas, the architecture, the ethical frame—that’s yours.

  • Ghostwriting, on the other hand, means generating entire proposals from scratch with minimal human input. Just feed the tool the call text, a few bullet points, and it spits out a full draft, shiny, fluent, and eerily reviewer-friendly. It's synthetic compliance.

This distinction matters. Because when everyone starts ghostwriting, and the AI becomes better than most humans at mimicking fundable language, we’re no longer selecting good ideas, we’re selecting good mimicry.

How is that a good use of taxpayers’ money?

Success Rates: When 1% Becomes the Norm

We’re already seeing success rates drop to 5% or lower in some calls. With mass AI adoption, 1% success rates aren’t far-fetched, particularly in competitive EU instruments like the EIC Accelerator, the EIC Pathfinder, or highly popular Horizon Europe topics.

That turns proposal writing into a statistical long shot. When every proposal is “near-excellent,” even excellent ones get lost in the crowd. Worse, applicants will lose trust in the system. Why invest in anything that has a 1-in-100 shot?

Reviewers Under Siege

Proposal inflation breaks the other end of the system too: the reviewers.

  • Volume overload: They’re now asked to read, score, and comment on dozens of nearly indistinguishable proposals.

  • Superficial assessment: Under time pressure, they skim rather than scrutinize.

  • Bias amplification: Small differences in phrasing or formatting, often enhanced by AI, can skew outcomes.

Some are already exploring AI-assisted review tools to cope. But this opens a recursive loop: AI-generated proposals being scored by AI-generated evaluations. At that point, the process risks becoming a closed, self-optimizing system, detached from the real-world viability or originality of the ideas.

Lottery Dynamics: When Skill Stops Mattering

This is where lottery dynamics kick in. In systems where the number of "qualifying" entries far exceeds the available rewards, outcomes become stochastic. Like a lottery, the difference between success and failure is no longer quality, it’s pure luck.
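To make the lottery dynamic concrete, here is a minimal, purely illustrative simulation (all numbers are assumptions, not real call statistics): when reviewer scoring noise is larger than the genuine quality differences between near-excellent proposals, the funded set ends up close to a random draw.

```python
import random

random.seed(42)

N_PROPOSALS = 1000   # assumed pool of AI-polished submissions
N_FUNDED = 15        # assumed ~1.5% success rate
QUALITY_SPREAD = 0.5 # true quality is tightly clustered: all "near-excellent"
REVIEW_NOISE = 2.0   # reviewer scoring spread exceeds real quality differences

# True quality of each proposal, narrowly distributed around the same mean.
quality = [random.gauss(90.0, QUALITY_SPREAD) for _ in range(N_PROPOSALS)]

# Observed review scores: true quality plus (larger) reviewer noise.
scores = [q + random.gauss(0.0, REVIEW_NOISE) for q in quality]

# Fund the top-scoring proposals.
funded = set(sorted(range(N_PROPOSALS),
                    key=lambda i: scores[i], reverse=True)[:N_FUNDED])

# Compare against the proposals that are genuinely the best.
truly_best = set(sorted(range(N_PROPOSALS),
                        key=lambda i: quality[i], reverse=True)[:N_FUNDED])

overlap = len(funded & truly_best)
print(f"{overlap} of the {N_FUNDED} genuinely best proposals were funded")
```

Under these assumed parameters, only a small fraction of the genuinely best proposals land in the funded set; the rest of the awards are, in effect, decided by noise. That is the statistical core of the lottery dynamic.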

That doesn’t just demotivate applicants. It undermines the legitimacy of the process. And in public funding, legitimacy is everything.

Why the System Is Failing

A Missed Opportunity for RRI and AREA

This collapse didn’t come out of nowhere. It reflects a failure to embed Responsible Research and Innovation (RRI) and AREA principles (Anticipate, Reflect, Engage, Act) in the very tools and workflows we built. Had RRI been applied to AI grant writing tools and consultancy practices, we might have asked:

  1. Anticipate: What happens when every SME has access to a proposal factory?

  2. Reflect: Are we optimizing for submission rate or innovation quality?

  3. Engage: Have we talked to reviewers and policy-makers about how this changes their task?

  4. Act: Should we set boundaries for automated authorship? Should funding agencies evolve alongside the tools?

Instead, we raced for productivity, which is the wrong KPI here. We solved for access but ignored saturation. We empowered everyone, but didn’t redesign the system to handle the new volume or the shift in power dynamics.

Policy and Systemic Responses: The Coming Recalibration

 

Photo Credit: George Stavrinos https://www.flickr.com/people/ssj_george/

 

Systems under pressure adapt, sometimes messily, but always eventually. As proposal volume continues to swell under the weight of AI-assisted authorship, funding bodies will have no choice but to rethink how they allocate scarce public resources. This will, of course, take time, perhaps years. Some likely (and already emerging) responses include:

1. Pre-Screening and Human Interviews

Expect more multi-stage funding processes, where brief expressions of interest, interviews, or real-time pitching are used to filter applicants before full proposals are allowed. This reintroduces a human filter early on, aimed at surfacing authentic ideas and motivated teams before the templated polish sets in.

Ironically, this takes us back to a more relationship-based, subjective model, precisely what simplified digital submission systems tried to escape.

2. Post-Proposal or Pay-on-Results Models

To counter the growing disconnect between promised impact and delivered outcomes, funders may shift towards post-proposal validation: pilot phases, staged funding, milestone-based payments, or post-hoc evaluations.

This is tragic in a way. The EU has spent years simplifying the post-award process, especially through lump-sum funding models, to reduce bureaucracy. But the influx of indistinguishable proposals may now force a return to more conditional, risk-managed funding, a step backward in simplicity, but possibly forward in integrity.

3. Triage by Reputation, Not Just Content

We may also see increased reliance on track record, networks, and social proof: has this team delivered before? Are they embedded in real ecosystems? While problematic (it risks excluding new entrants), this approach tries to find proxies for real-world grounding in an ocean of text-generated optimism.

4. Shift to Mission-Oriented, Systems-Based Funding

More calls may focus on specific missions or ecosystems, where success isn’t just proposal quality but systemic contribution. Here, AI can still help, but it can’t replace deep knowledge, collaboration, or lived experience.

Step Back to the Big Picture

A Pause for Reflection

I take pride in my work. And I’ll continue helping innovators write stronger proposals, not just in language, but in logic, ethics, and systems thinking. But I also believe it's time to slow down and engage in broader conversations about the kind of future we're building.

Yes, AI has given us extraordinary productivity. But we must stop mistaking acceleration for progress.

We’ve optimized the pipeline, but not the purpose.

Every time I hear the line “AI won’t take your job, but people who use AI will”, especially in fields like medicine, law, or academia, I find myself uneasy. Yes, it’s true in a narrow sense. But it reflects a shallow, individualistic interpretation of technology’s role in society.

In My Opinion

From Problem Solving to Purpose Alignment

AI can solve problems. That's never been in doubt (in my mind at least). What’s missing is a deeper commitment to solving the right problems, the ones that matter over decades, not quarters.

We don’t need more AI that writes grant proposals. We need AI that helps us rethink health systems, support social resilience, or enable climate transition at the community level. That’s where the real potential lies.

Not in matching keywords, but in aligning technology with sustainable, long-term societal goals.

This is where frameworks like RRI matter, not just in academic theory, but in practice. If we had applied Anticipate, Reflect, Engage, Act to this AI explosion earlier, perhaps we’d be better prepared now. But it’s not too late.

 

Photo Credit: George Stavrinos https://www.flickr.com/people/ssj_george/

 

A Final Thought

We are not just users of a system. We are shapers of it. As consultants, as scientists, as innovators, we have the tools—and the responsibility—to ask deeper questions about the structures we’re reinforcing or unraveling with every AI-assisted keystroke.

Let’s keep using AI. But let’s also keep asking: To what end? Because not everything that can be accelerated should be.

 