How We Think About Impact
OpenToken takes a different approach from the typical grant panel: rigorous vetting at the gate, community-driven prioritisation inside the platform.
The problem with gatekeeping
Most compute access programmes operate like grant panels. A small committee reviews applications, decides which projects deserve support, and allocates resources accordingly. This model has an obvious appeal — it sounds rigorous — but it has a serious flaw: it concentrates judgment about what research matters in the hands of the people least likely to understand the local context.
A committee in London or San Francisco is not well placed to decide whether a speech corpus for Yoruba depression screening is more valuable than a crop yield model for smallholder farmers in Rwanda. The researchers doing the work know. The communities they serve know. OpenToken's job is not to substitute our judgment for theirs.
This is a principle, not a platitude. The history of international development is littered with well-intentioned centralised allocation mechanisms that ended up reproducing the very inequities they set out to address — prioritising projects legible to Western institutions over those most needed by the communities they claimed to serve. We are determined not to replicate that pattern.
What we actually vet for
OpenToken vets projects for legitimacy, not merit. We verify that projects are what they claim to be. We do not rank the relative importance of different research questions — that is for communities, researchers, and the wider public to decide.
Concretely, our vetting process confirms three things:
Are the people real? We verify that the individuals and institutions behind a project are who they say they are. This means checking institutional affiliations, publication records, company registrations, and professional credentials. A PhD student at the University of Nairobi and the CTO of a Lagos startup are equally welcome — but both must be verifiable.
Is the project genuine? We confirm that the stated workload matches the compute request. A project requesting 10,000 GPU-hours of A100 time should be able to explain, in reasonable technical detail, why that volume and that hardware class are appropriate. We are looking for proportionality, not perfection.
Is the project lawful and ethical? We screen for compliance with applicable law, data protection requirements (our Phase 1 infrastructure is GDPR-compliant), and basic ethical standards. We do not fund projects that would cause harm, violate human rights, or involve deceptive practices.
If a project passes these three checks, it is listed on the OpenToken platform. That is the extent of our gatekeeping.
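For readers who think in code, the gate can be pictured as three boolean checks, all of which must pass before listing. This is an illustrative sketch, not our production system; the Project shape and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Project:
    # Hypothetical fields, for illustration only.
    team_verified: bool       # identities, affiliations, registrations check out
    workload_justified: bool  # stated workload matches the compute request
    compliant: bool           # lawful, data-protection-compliant, meets basic ethics

def passes_vetting(p: Project) -> bool:
    """Legitimacy gate: all three checks must pass. None of them ranks merit."""
    return p.team_verified and p.workload_justified and p.compliant
```

A project that returns True is listed; prioritisation from that point on belongs to the community, not to us.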
The community decides
Once a project is listed, its public profile tells its story: what it does, who is behind it, how much compute it needs, and what it hopes to achieve. Anyone — researchers, donors, corporate sponsors, the wider public — can see the projects on the platform and decide which ones they want to support.
This is the GoFundMe principle applied to AI compute. The community, not OpenToken, determines which projects attract attention, support, and resources. A malaria diagnostics project in Accra and a language model for Sinhala-Tamil translation compete not on the basis of an internal score, but on the strength of their stories, their teams, and the communities that rally behind them.
This design choice is deliberate. We believe that the people closest to a problem are best placed to judge its urgency. We believe that researchers in the Global South should control their own research agendas — what we call epistemic sovereignty — rather than having priorities set by distant institutions. And we believe that transparency creates accountability more effectively than centralised judgment.
Our impact framework
That said, we recognise that compute providers and institutional partners need confidence that the projects on our platform are genuine and impactful. Saying "the community decides" is not a licence to list anything.
To provide that confidence, OpenToken maintains an internal impact framework anchored in internationally recognised standards. This framework is advisory, not determinative — it informs our understanding of each project but does not override the community's role in directing support.
The framework draws on three established reference points:
The UN Sustainable Development Goals. We assess which of the 17 SDGs and 169 targets a project contributes to. This provides a common, globally recognised language for describing impact that funders, providers, and development institutions already understand.
The OECD DAC Evaluation Criteria. The six criteria adopted by the OECD Development Assistance Committee — relevance, coherence, effectiveness, efficiency, impact, and sustainability — provide the evaluative structure. We adapt these for prospective project assessment, asking questions like: does this project address a genuine need? Is the compute request proportionate? Will the benefits persist beyond the allocation period?
The Hamburg Declaration on Responsible AI for the SDGs. Co-developed by UNDP and launched in 2025, this declaration frames the principle that AI development should be inclusive and equitable. Our emphasis on epistemic sovereignty — ensuring that Global South researchers define their own research questions and retain control of their outputs — operationalises this principle.
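To make the "common language" point concrete, here is one way a framework assessment could be recorded, tagging the SDG targets a project serves and the DAC-derived questions it is assessed against. The structure, the target tags, and the question wordings beyond the three quoted above are illustrative assumptions, not a published schema.

```python
# Illustrative only: a hypothetical assessment record anchored in the
# standards named above (SDG targets, OECD DAC evaluation criteria).
assessment = {
    "project": "Speech corpus for Yoruba depression screening",  # example from this post
    "sdg_targets": ["3.4", "3.8"],  # SDG 3 (health) targets; tags are an assumption
    "dac_criteria": {
        "relevance":      "Does this project address a genuine need?",
        "coherence":      "Does it fit alongside existing local initiatives?",
        "effectiveness":  "Is the workplan likely to deliver the stated outputs?",
        "efficiency":     "Is the compute request proportionate?",
        "impact":         "What difference will it make to the community served?",
        "sustainability": "Will the benefits persist beyond the allocation period?",
    },
}
```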
The six dimensions
We assess projects across six dimensions:
Development relevance. Does the project address a genuine development need? Is it aligned with specific SDG targets? Is the need validated by the community it serves, rather than assumed by external actors?
Epistemic sovereignty. Does the project build indigenous research capacity? Is the research agenda locally defined? Do outputs — models, datasets, publications — remain under local control? Or does the project replicate knowledge extraction patterns, using local researchers as data collectors for externally led initiatives?
Feasibility and legitimacy. Is the team verifiable? Is the workplan realistic? For projects with commercial potential, is the downstream revenue pathway credible? This is the core of our vetting function — confirming that projects are genuine.
Compute appropriateness. Is the GPU request proportionate to the stated workload? Is the hardware class appropriate? Has the team considered compute efficiency? Larger requests receive more rigorous scrutiny: a student requesting 500 GPU-hours faces a simpler process than a research team requesting 50,000 (a sketch of this tiering follows the list below).
Sustainability and scalability. Will the project's benefits persist after the compute allocation ends? Are the outputs durable — trained models, published datasets, deployed services — or ephemeral? Is there a plan for continued operation, whether through revenue, grants, or institutional support?
Community and openness. Does the project contribute to the broader ecosystem? Is the team willing to share outputs openly, maintain a public project profile, and participate in impact reporting? For Tier 1 (attribution) projects, community engagement is the compensation — this dimension matters most. For commercial customers paying full rates, openness is encouraged but not required.
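As a sketch of how the rubric and the proportionality tiering referenced above might look in code (the field names, thresholds, and tier labels are all assumptions for illustration; the actual rubric is internal):

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactProfile:
    """Advisory profile: each dimension scored 1-5. Never a public ranking."""
    development_relevance: int
    epistemic_sovereignty: int
    feasibility_legitimacy: int
    compute_appropriateness: int
    sustainability_scalability: int
    community_openness: int

    def __post_init__(self) -> None:
        # Enforce the 1-5 scale described in the text.
        for f in fields(self):
            score = getattr(self, f.name)
            if not 1 <= score <= 5:
                raise ValueError(f"{f.name} must be scored 1-5, got {score}")

def review_tier(gpu_hours: int) -> str:
    """Hypothetical scrutiny tiers: larger requests get more rigorous review.
    Thresholds are illustrative assumptions, not published policy."""
    if gpu_hours <= 500:
        return "light-touch review"
    if gpu_hours <= 10_000:
        return "standard technical review"
    return "in-depth technical review"
```

On this sketch, the student's 500 GPU-hour request from above falls in the light-touch tier, while the research team's 50,000-hour request triggers the most rigorous one.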
How scores are used
Each dimension is scored 1–5. The resulting profile is advisory information, not a ranking: we do not publish scores, nor do we use them to determine which projects receive compute ahead of others. The scores serve two specific purposes:
Provider confidence. Compute providers who supply infrastructure to OpenToken may request to see our impact assessment for specific projects or for the portfolio as a whole. The framework gives providers a structured, standards-anchored view of the projects their hardware is powering — useful for ESG reporting, stakeholder communications, and internal decision-making.
Internal quality assurance. The framework helps OpenToken maintain consistency in vetting decisions across a growing portfolio. As we scale from 10 projects to 1,000, the rubric ensures that our legitimacy checks remain rigorous and our impact understanding remains structured.
Scores are disclosed to compute providers on request. They are not disclosed to the public, to donors, or to other projects. They do not determine allocation priority — the community layer does that.
What this means in practice
A project that scores exceptionally well on our framework but attracts no community support will not receive priority over a lower-scoring project that resonates with donors and sponsors. Conversely, a project that generates enormous community enthusiasm but fails our basic legitimacy checks will not be listed at all.
This is the sweet spot we are trying to occupy: rigorous vetting at the gate, community-driven prioritisation inside the platform. We verify that projects are real. We let the community decide which ones matter most.
We expect this framework to evolve. As we onboard more projects and learn from the pilot phase, we will refine both the dimensions and the process. We welcome feedback from researchers, providers, and the communities we serve.
OpenToken is a compute brokerage designed to ensure no innovative AI project fails for want of infrastructure. Learn more at www.opentoken.global.