<aside>

Problem

Frontier AI is scaling far faster than safety, governance, and incident-response capacity. Capital, talent, and political attention are flowing into building bigger models and infrastructure, while the field trying to reduce AI x-risk is comparatively underfunded, fragmented, and slow.

If nothing changes, deployment and capability scaling will continue to outrun safety and governance.

There is no shared, live “field map” of needs, talent, and feasible interventions, nor pooled funds designed to move quickly on bottlenecks.

</aside>

<aside>

Solution: A Field Execution Engine for AI Safety

A small, focused organization that maps field needs and resources, designs pop-up collaboration projects (“scope → fund → staff → deliver”) that help multiple AI safety orgs close non-technical bottlenecks within months, and converts funders through concrete, measurable pilots.

</aside>

Roadmap

MVP: Needs Map & Ally Engine (Months 0–6)

Pilot Pop-up Project (Months 6–12)

Series of Pop-up Projects + Agency Build (Year 2)

Field Map & Pooled Funds (Years 2–3)


Theory of Change

Key assumptions to pressure-test early


Budget & Needs for Pilot

Numbers are indicative and based on current best guesses. They will be refined with partners and funders.

Phase 1 – MVP & Fundraising (Months 0–6) ~$50k–$64k

Phase 2 – Pilot Pop-up Project (Months 6–12) ~$300k



Founder: Oksana Kotelnikova — 7+ years building and scaling NGOs in civic tech and humanitarian response, now field-building in AI safety.

Join me! I’m looking for:

Individual Participants: Mid-career and senior people pivoting into AI x-risk/safety

<aside> <img src="/icons/arrow-right_blue.svg" alt="/icons/arrow-right_blue.svg" width="40px" />

Join to use the DB

</aside>

Contributors: People working in AI Safety, Governance, and field-building

<aside> <img src="/icons/wrench_orange.svg" alt="/icons/wrench_orange.svg" width="40px" />

Contribute

</aside>