Scenario Result

Frontier Allocation

This scenario is driven by the frontier score and is therefore more willing to concentrate the round around differentiated upside. Recorded by the DEAP Crash Demo Operator.

Milestone Score

7.6

Band 6.2-8.9

Need Score

17,796.2

Band 14,628.5-20,963.9

Frontier Score

7.5

Quality 6.9
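The report does not state how the headline Milestone, Need, Quality, and Frontier scores are derived. The sketch below shows one derivation that reproduces all four exactly: averages of the per-project values from the Project Results Table, weighted by each project's share of the total frontier score. This is an inference from the published numbers, not a documented DEAP formula.

```python
# Per-project values copied from the Project Results Table.
projects = {
    "GridPatch":   {"quality": 6.95, "milestone": 7.90, "need": 20160, "frontier": 8.05},
    "MND STaR":    {"quality": 7.05, "milestone": 8.10, "need": 24255, "frontier": 7.90},
    "Primer Fund": {"quality": 6.75, "milestone": 7.10, "need": 10080, "frontier": 7.20},
    "AgriTwin":    {"quality": 6.85, "milestone": 7.00, "need": 15680, "frontier": 6.85},
}

total_frontier = sum(p["frontier"] for p in projects.values())  # 30.0

def weighted(metric: str) -> float:
    # Average of a metric, weighted by each project's frontier score.
    return sum(p[metric] * p["frontier"] for p in projects.values()) / total_frontier

print(round(weighted("milestone"), 1))  # 7.6
print(round(weighted("need"), 1))       # 17796.2
print(round(weighted("quality"), 1))    # 6.9
print(round(weighted("frontier"), 1))   # 7.5
```

All four rounded outputs match the headline scores above (7.6, 17,796.2, 6.9, 7.5).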

Budget Deployed

£180,000

0 projects hit their optimal cap

Project Allocation

Hybrid-style redistribution is still capped at each project's optimal budget assumption.

Scenario Notes

This scenario uses the same observed facts and committee assumptions as every other saved run.

Strategy

Frontier First

Confidence

High

Input-confidence bands are derived from project confidence labels and source types. They are directional uncertainty bands, not probabilistic 95% forecasts.

Capping Rule

Allocations are capped at each project's optimal budget assumption. Any remainder is redistributed across uncapped, screening-pass projects.
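The cap-and-redistribute rule above can be sketched as an iterative water-filling loop. The function name, data shapes, and tolerance are illustrative assumptions, not the DEAP implementation; the example caps and weights are hypothetical.

```python
def allocate(budget: float, weights: dict, caps: dict) -> dict:
    """Distribute `budget` in proportion to `weights`, capping each project at
    its optimal-budget cap and redistributing any remainder across the
    projects that have not yet hit their cap."""
    alloc = {p: 0.0 for p in weights}
    active = set(weights)              # screening-pass projects still under cap
    remaining = budget
    while remaining > 1e-9 and active:
        total_w = sum(weights[p] for p in active)
        spent = 0.0
        for p in list(active):
            share = remaining * weights[p] / total_w
            give = min(share, caps[p] - alloc[p])   # never exceed the cap
            alloc[p] += give
            spent += give
            if alloc[p] >= caps[p] - 1e-9:          # project hit its optimal cap
                active.remove(p)
        remaining -= spent
        if spent <= 1e-9:                           # nothing more can be placed
            break
    return alloc

# Hypothetical example: A's proportional share (50) exceeds its cap (40),
# so the 10-unit remainder flows to B and C in their 0.3 : 0.2 ratio.
result = allocate(100, {"A": 0.5, "B": 0.3, "C": 0.2}, {"A": 40, "B": 100, "C": 100})
```

In this example the result is A = 40, B = 36, C = 24 (up to floating-point noise), which mirrors the rule: capped projects keep their cap, and the remainder is re-split among uncapped projects.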

Project Results Table

Project-level outputs for this frozen scenario.

Project | Hub | Allocation | Quality | Milestone | Need | Frontier | Cost-Effectiveness | Weight
GridPatch | LifeArc | £48,300 | 6.95 | 7.90 | 20,160 | 8.05 | 0.132 | 26.8%
MND STaR target validation programme | LifeArc | £47,400 | 7.05 | 8.10 | 24,255 | 7.90 | 0.147 | 26.3%
Primer Fund blood test for early MND detection | LifeArc | £43,200 | 6.75 | 7.10 | 10,080 | 7.20 | 0.203 | 24.0%
AgriTwin | LifeArc | £41,100 | 6.85 | 7.00 | 15,680 | 6.85 | 0.175 | 22.8%

Explanation (all rows): frontier allocation assigns each project the normalized weight shown, then caps the outcome at the project's optimal budget assumption.
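The weight column matches each project's frontier score normalized by the portfolio total (8.05 + 7.90 + 7.20 + 6.85 = 30.0), and since 0 projects hit their optimal cap in this run, each allocation is simply weight × £180,000. The sketch below treats this as an observed pattern in the table, not a documented formula.

```python
# Frontier scores copied from the Project Results Table.
frontier = {"GridPatch": 8.05, "MND STaR": 7.90, "Primer Fund": 7.20, "AgriTwin": 6.85}
budget = 180_000
total = sum(frontier.values())  # 30.0

for name, score in frontier.items():
    weight = score / total                 # normalized frontier weight
    print(f"{name}: weight {weight:.1%}, allocation £{budget * weight:,.0f}")

# GridPatch: weight 26.8%, allocation £48,300
# MND STaR: weight 26.3%, allocation £47,400
# Primer Fund: weight 24.0%, allocation £43,200
# AgriTwin: weight 22.8%, allocation £41,100
```

The computed allocations sum to the £180,000 deployed budget, consistent with no cap binding.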

Assumptions Snapshot

These statements explain what is treated as observed versus modelled in the demo.

  • 2 project rows are anchored in observed public material; 2 are synthetic comparators used only for committee calibration.
  • Observed public stage and public funding facts support the narrative layer.
  • Requested budgets, minimum viable budgets, optimal budgets, screening checks, rubric scores, and milestone costs are committee modelling assumptions.
  • Quality, milestone, cost-effectiveness, need, frontier, and allocation outputs are deterministic calculations from the DEAP formulas.
  • Scenario allocations are capped at each project's optimal budget assumption, and any remainder is redistributed across uncapped projects.