Updates

Policy notes, announcements, and reflections from OpenToken.

Policy Note

What Are OTUs?

When we tell someone that a project needs “2,000 GPU-hours,” it sounds precise. It isn’t. The OpenToken Unit (OTU) is a standardised measure that makes heterogeneous compute legible to innovators, donors, and partners.

The problem with GPU-hours

When we tell someone that a project needs “2,000 GPU-hours,” it sounds precise. It isn’t. A GPU-hour on an NVIDIA A100 is not the same as a GPU-hour on an RTX 4090. The A100 has more than three times the memory. The RTX 4090 has comparable raw processing speed but can handle a much narrower range of workloads. An hour on a V100 — a perfectly capable machine — delivers roughly a third of the useful work of either.

This matters because OpenToken aggregates compute from multiple sources. Our pilot draws capacity from commercial cloud providers, sustainable infrastructure partners, academic HPC facilities, and distributed GPU networks. The hardware is heterogeneous by design. That is how we keep costs low enough to offer free compute to innovators who could never afford hyperscale cloud pricing. But heterogeneity creates a communication problem. When a donor funds compute for a project, they deserve to know how much research (or, more broadly, how much ‘intelligence’) their contribution actually enables, regardless of which machine happens to execute it. When a researcher receives an allocation, they need to understand what they are getting. And when we report our impact, we need a unit that means something consistent.

Introducing the OpenToken Unit

The OpenToken Unit (OTU) is a standardised measure of AI compute. One OTU is defined as one hour of a single NVIDIA A100 80GB GPU. This particular GPU is the workhorse of the current AI research ecosystem, widely deployed across university computing centres and commercial providers alike.

Every other GPU type is converted into OTU equivalents using a fixed exchange rate. An hour on a more powerful machine is worth more than one OTU. An hour on a less powerful machine is worth less. The exchange rate captures both the processing speed and the memory capacity of the hardware, because for the workloads our innovators run, memory is often the binding constraint, not raw speed.
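As a concrete sketch, the conversion is a simple multiplication by a per-GPU exchange rate. In the example below, only the A100’s rate of 1.0 is fixed by definition; the other rates and GPU labels are hypothetical placeholders, not OpenToken’s published figures.

```python
# Hypothetical sketch of OTU conversion. Only the A100 rate of 1.0 is fixed
# by definition; the other exchange rates are illustrative placeholders.
OTU_RATES = {
    "A100-80GB": 1.0,   # reference unit: one hour = one OTU
    "RTX-4090": 0.5,    # assumed: fast, but memory-limited
    "V100-32GB": 0.35,  # assumed: roughly a third of an A100's useful work
}

def hours_to_otus(gpu_type: str, hours: float) -> float:
    """Convert clock hours on a given GPU into OTU equivalents."""
    return OTU_RATES[gpu_type] * hours

def otus_to_hours(gpu_type: str, otus: float) -> float:
    """Clock hours on a given GPU needed to fulfil an OTU allocation."""
    return otus / OTU_RATES[gpu_type]

print(hours_to_otus("RTX-4090", 800))  # 800 RTX 4090 hours -> 400.0 OTUs
print(otus_to_hours("RTX-4090", 400))  # 400 OTUs -> 800.0 RTX 4090 hours
```

In practice the real rates would also weight memory capacity, as noted above; a single scalar per GPU type is simply the smallest form such a table can take.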

This is not a novel concept. The US National Science Foundation’s ACCESS programme uses a similar approach to allocate compute across dozens of heterogeneous national facilities. Grid computing federations have used token-based allocation models for decades. We are applying an established principle to a new context: making donated compute legible to a global community of innovators, donors, and partners.

Why this matters for donors and sponsors

If you contribute to a project on OpenToken, the OTU tells you exactly what your contribution achieved.

Without OTU, we would have to say something like: “Your donation funded 1,200 hours on a mix of RTX 4090s and V100s across two providers.” That is technically accurate but practically meaningless. It gives you no sense of how much research you enabled or how your contribution compares to another donor’s.

With OTU, we can say: “Your donation delivered 600 OTUs to a project building a crop disease detection model for farmers in East Africa.” That number is comparable across projects, across hardware, and across time. If someone else contributes 600 OTUs to a different project, you know the two contributions were equivalent in scale — even if the underlying hardware was completely different.

This clarity is also essential for corporate sponsors and institutional partners. ESG reporting requires auditable metrics. Development finance institutions need standardised impact data. OTU provides a single, consistent number that can be tracked, reported, and verified.

Why this matters for researchers

Researchers applying to OpenToken describe their compute needs in familiar terms: the number of GPUs they need, the memory requirements, and the duration. We convert this into OTU during the allocation process, so that the researcher’s allocation is expressed in a unit that is independent of which specific hardware fulfils it.

This has a practical benefit. If a researcher is allocated 400 OTUs and we initially provision the work on RTX 4090 hardware, that allocation represents approximately 800 GPU-hours. If we later move the work to A100-class hardware, the same 400 OTUs represent approximately 400 GPU-hours. The researcher’s entitlement stays the same. The hardware can change without renegotiation.

This flexibility is important because our provider landscape is evolving. As new partners come online and hardware generations turn over, the specific machines available to us will shift. OTU insulates researchers from that complexity. They receive a commitment measured in useful work, not in clock time on a particular machine.

OpenToken is a compute brokerage designed to ensure no innovative AI project fails for want of infrastructure. Learn more at www.opentoken.global.

Policy Note

How We Think About Impact

Most compute access programmes operate like grant panels. OpenToken takes a different approach: rigorous vetting at the gate, community-driven prioritisation inside the platform.

The problem with gatekeeping

Most compute access programmes operate like grant panels. A small committee reviews applications, decides which projects deserve support, and allocates resources accordingly. This model has an obvious appeal — it sounds rigorous — but it has a serious flaw: it concentrates judgment about what research matters in the hands of the people least likely to understand the local context.

A committee in London or San Francisco is not well placed to decide whether a speech corpus for Yoruba depression screening is more valuable than a crop yield model for smallholder farmers in Rwanda. The researchers doing the work know. The communities they serve know. OpenToken’s job is not to substitute our judgment for theirs.

This is a principle, not a platitude. The history of international development is littered with well-intentioned centralised allocation mechanisms that ended up reproducing the very inequities they set out to address — prioritising projects legible to Western institutions over those most needed by the communities they claimed to serve. We are determined not to replicate that pattern.

What we actually vet for

OpenToken vets projects for legitimacy, not merit. We verify that projects are what they claim to be. We do not rank the relative importance of different research questions — that is for communities, researchers, and the wider public to decide.

Concretely, our vetting process confirms three things:

Are the people real? We verify that the individuals and institutions behind a project are who they say they are. This means checking institutional affiliations, publication records, company registrations, and professional credentials. A PhD student at the University of Nairobi and the CTO of a Lagos startup are equally welcome — but both must be verifiable.

Is the project genuine? We confirm that the stated workload matches the compute request. A project requesting 10,000 GPU-hours of A100 time should be able to explain, in reasonable technical detail, why that volume and that hardware class are appropriate. We are looking for proportionality, not perfection.

Is the project lawful and ethical? We screen for compliance with applicable law, data protection requirements (our Phase 1 infrastructure is GDPR-compliant), and basic ethical standards. We do not fund projects that would cause harm, violate human rights, or involve deceptive practices.

If a project passes these three checks, it is listed on the OpenToken platform. That is the extent of our gatekeeping.
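For the “is the project genuine” check, a reviewer can sanity-check proportionality with a back-of-envelope estimate. The sketch below uses the common 6·N·D FLOPs rule of thumb for dense-model training and an assumed 40% hardware utilisation; neither figure is OpenToken policy, and the function name is our own.

```python
# Hypothetical back-of-envelope check that a compute request is proportionate
# to a stated training workload. The 6*N*D FLOPs rule of thumb and the 40%
# utilisation figure are common approximations, not OpenToken policy.
def estimated_a100_hours(params: float, tokens: float,
                         peak_flops: float = 312e12,  # A100 BF16 peak
                         utilisation: float = 0.4) -> float:
    """Rough A100-hours to train a dense model of `params` on `tokens`."""
    total_flops = 6 * params * tokens  # ~6 FLOPs per parameter per token
    seconds = total_flops / (peak_flops * utilisation)
    return seconds / 3600

# A 1B-parameter model trained on 20B tokens:
print(round(estimated_a100_hours(1e9, 20e9)))  # -> 267 A100-hours
```

A request wildly out of line with such an estimate is not automatically rejected, but it is exactly the kind of gap the vetting conversation would probe.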

The community decides

Once a project is listed, its public profile tells its story: what it does, who is behind it, how much compute it needs, and what it hopes to achieve. Anyone — researchers, donors, corporate sponsors, the wider public — can see the projects on the platform and decide which ones they want to support.

This is the GoFundMe principle applied to AI compute. The community, not OpenToken, determines which projects attract attention, support, and resources. A malaria diagnostics project in Accra and a language model for Sinhala-Tamil translation compete not on the basis of an internal score, but on the strength of their stories, their teams, and the communities that rally behind them.

This design choice is deliberate. We believe that the people closest to a problem are best placed to judge its urgency. We believe that researchers in the Global South should control their own research agendas — what we call epistemic sovereignty — rather than having priorities set by distant institutions. And we believe that transparency creates accountability more effectively than centralised judgment.

Our impact framework

That said, we recognise that compute providers and institutional partners need confidence that the projects on our platform are genuine and impactful. Saying “the community decides” is not a licence to list anything.

To provide that confidence, OpenToken maintains an internal impact framework anchored in internationally recognised standards. This framework is advisory, not determinative — it informs our understanding of each project but does not override the community's role in directing support.

The framework draws on three established reference points:

The UN Sustainable Development Goals. We assess which of the 17 SDGs and 169 targets a project contributes to. This provides a common, globally recognised language for describing impact that funders, providers, and development institutions already understand.

The OECD DAC Evaluation Criteria. The six criteria adopted by the OECD Development Assistance Committee — relevance, coherence, effectiveness, efficiency, impact, and sustainability — provide the evaluative structure. We adapt these for prospective project assessment, asking questions like: does this project address a genuine need? Is the compute request proportionate? Will the benefits persist beyond the allocation period?

The Hamburg Declaration on Responsible AI for the SDGs. Co-architected by UNDP in 2025, this declaration frames the principle that AI development should be inclusive and equitable. Our emphasis on epistemic sovereignty — ensuring that Global South researchers define their own research questions and retain control of their outputs — operationalises this principle.

The six dimensions

We assess projects across six dimensions:

Development relevance. Does the project address a genuine development need? Is it aligned with specific SDG targets? Is the need validated by the community it serves, rather than assumed by external actors?

Epistemic sovereignty. Does the project build indigenous research capacity? Is the research agenda locally defined? Do outputs — models, datasets, publications — remain under local control? Or does the project replicate knowledge extraction patterns, using local researchers as data collectors for externally led initiatives?

Feasibility and legitimacy. Is the team verifiable? Is the workplan realistic? For projects with commercial potential, is the downstream revenue pathway credible? This is the core of our vetting function — confirming that projects are genuine.

Compute appropriateness. Is the GPU request proportionate to the stated workload? Is the hardware class appropriate? Has the team considered compute efficiency? Larger requests receive more rigorous scrutiny — a student requesting 500 GPU-hours faces a simpler process than a research team requesting 50,000.

Sustainability and scalability. Will the project’s benefits persist after the compute allocation ends? Are the outputs durable — trained models, published datasets, deployed services — or ephemeral? Is there a plan for continued operation, whether through revenue, grants, or institutional support?

Community and openness. Does the project contribute to the broader ecosystem? Is the team willing to share outputs openly, maintain a public project profile, and participate in impact reporting? For Tier 1 (attribution) projects, community engagement is the compensation — this dimension matters most. For commercial customers paying full rates, openness is encouraged but not required.

How scores are used

Each dimension is scored 1–5. The resulting profile is advisory information, not a ranking. We do not publish scores or use them to determine which projects receive compute ahead of others. They serve two specific purposes:

Provider confidence. Compute providers who supply infrastructure to OpenToken may request to see our impact assessment for specific projects or for the portfolio as a whole. The framework gives providers a structured, standards-anchored view of the projects their hardware is powering — useful for ESG reporting, stakeholder communications, and internal decision-making.

Internal quality assurance. The framework helps OpenToken maintain consistency in vetting decisions across a growing portfolio. As we scale from 10 projects to 1,000, the rubric ensures that our legitimacy checks remain rigorous and our impact understanding remains structured.

Scores are disclosed to compute providers on request. They are not disclosed to the public, to donors, or to other projects. They do not determine allocation priority — the community layer does that.
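To make the advisory nature of the profile concrete, here is one way such a record might be represented internally. The dimension names come from the framework above; the data layout, identifier, and example scores are purely illustrative assumptions.

```python
# Illustrative sketch of an advisory impact profile. Dimension names follow
# the framework above; the layout and example values are assumptions.
from dataclasses import dataclass

DIMENSIONS = (
    "development_relevance",
    "epistemic_sovereignty",
    "feasibility_and_legitimacy",
    "compute_appropriateness",
    "sustainability_and_scalability",
    "community_and_openness",
)

@dataclass
class ImpactProfile:
    project_id: str
    scores: dict  # dimension name -> integer score from 1 to 5

    def __post_init__(self):
        assert set(self.scores) == set(DIMENSIONS), "all six dimensions required"
        assert all(s in (1, 2, 3, 4, 5) for s in self.scores.values())

# A hypothetical profile; scores inform conversations, never rankings.
profile = ImpactProfile(
    project_id="example-project",
    scores={d: 4 for d in DIMENSIONS},
)
```

Note that the record deliberately has no aggregate score or ranking method: the framework informs provider conversations and internal quality assurance, not allocation priority.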

What this means in practice

A project that scores exceptionally well on our framework but attracts no community support will not receive priority over a lower-scoring project that resonates with donors and sponsors. Conversely, a project that generates enormous community enthusiasm but fails our basic legitimacy checks will not be listed at all.

This is the sweet spot we are trying to occupy: rigorous vetting at the gate, community-driven prioritisation inside the platform. We verify that projects are real. We let the community decide which ones matter most.

We expect this framework to evolve. As we onboard more projects and learn from the pilot phase, we will refine both the dimensions and the process. We welcome feedback from researchers, providers, and the communities we serve.


OpenToken is a compute brokerage designed to ensure no innovative AI project fails for want of infrastructure. Learn more at www.opentoken.global.