Reviewing the UK's new volunteering data standard from the ODI

The Open Data Institute's announcement in March 2026 was carefully worded. The project had reached "a key milestone." Not a completion. Not a launch. A milestone. That distinction matters more than it might appear, because the gap between what was announced and what actually exists in the codebase tells a story the press release does not quite tell.
This is not a criticism. The project has produced something genuinely significant for the UK volunteering sector, backed by government funding, sector credibility, and real technical foundations. But with the ODI committing to stewardship and the sector beginning to build on top of this work, an honest account of where things stand (and where they do not) is more useful than another round of applause.
What was actually announced
The ODI milestone announcement, published on 26 March 2026, reported the outputs of a six-month alpha phase funded by the Department for Culture, Media and Sport and delivered in partnership with Do IT and Team Kinetic. Three things were presented as the headline deliverables.
First, a published open data standard for volunteering opportunities — a shared vocabulary, described in formal linked-data terms, covering what opportunities involve, where they are, who they suit, and how to apply. The standard is published at standard.volunteeringdata.io and is stewarded by an open Standards Working Group.
Second, an operational API: a live, queryable data service at api.volunteeringdata.io that exposes the standardised data over HTTP, supports full-text and geospatial search, and includes a SPARQL endpoint for those who want direct access to the underlying graph.
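For readers unfamiliar with SPARQL endpoints, a sketch of what querying one looks like may help. The endpoint hostname below comes from the announcement, but the exact path, the `vol:` namespace, and the property names are placeholders of mine, not the published standard's actual terms:

```typescript
// Sketch: building and sending a SPARQL query to a public endpoint.
// The /sparql path and the vol: prefix and property names are illustrative
// assumptions, NOT the published ontology's actual terms.
const SPARQL_ENDPOINT = "https://api.volunteeringdata.io/sparql"; // assumed path

function buildOpportunityQuery(keyword: string, limit: number): string {
  // Escape double quotes so user input cannot break out of the string literal.
  const safe = keyword.replace(/"/g, '\\"');
  return `
    PREFIX vol: <https://standard.volunteeringdata.io/ns#>
    SELECT ?opp ?title WHERE {
      ?opp a vol:VolunteeringOpportunity ;
           vol:title ?title .
      FILTER(CONTAINS(LCASE(?title), LCASE("${safe}")))
    }
    LIMIT ${limit}`;
}

// Sending the query is a plain HTTP POST using the standard SPARQL media types.
async function runQuery(query: string): Promise<unknown> {
  const res = await fetch(SPARQL_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/sparql-query",
      Accept: "application/sparql-results+json",
    },
    body: query,
  });
  return res.json();
}
```

The point is that no SDK is required: any HTTP client, in any language, can query the graph directly, which is what makes SPARQL endpoints attractive as shared infrastructure.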
Third, three pilots demonstrating real-world application: SCVO's Milo platform confirming implementation of the standard across Scotland's Third Sector Interfaces; research into the challenges facing small and grassroots organisations; and an AI-powered opportunity discovery tool built by Do IT using ChatGPT, able to match volunteers to opportunities through natural-language conversation.
Ministerial endorsement came from Civil Society Minister Stephanie Peacock. The ODI's chief executive framed it as proof that "a shared, open approach to volunteering data is not only technically achievable but genuinely wanted by the sector." 68% of contributing organisations agreed that a shared open standard would benefit the sector as a whole.
By the standards of the UK VCSE sector, this is an unusually well-resourced, well-governed, and well-documented piece of infrastructure work. That should be said clearly.
What the codebase actually shows
The GitHub organisation at github.com/volunteeringdata tells a slightly different story: not a contradictory one, but a more granular one. There are four public repositories.
The standard repo, with 208 commits and four versioned releases, is the most mature. The ontology is real, the Standards Working Group process is real, and the WebVOWL visualisation confirms the data model has meaningful depth. This is the part of the announcement that most closely matches the underlying reality.
The open-data-infrastructure repo, with 422 commits, is where the API and data pipeline live. It is a C#/.NET application wrapping an Apache Jena Fuseki triple-store, deployed via GitHub Actions to two Azure App Services. The README is honest about what is unfinished: CORS is explicitly listed as a TODO, geospatial search using GeoSPARQL is marked as not yet supported, the Lucene indexing is noted as needing improvement, and the Croissant dataset endpoint is marked as broken. There is a single data source, Do IT's JSON feed; generic JSON and CSV input formats are listed as future work.
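To make the GeoSPARQL gap concrete, this is roughly the query shape that "geospatial search" would enable once supported. The `geo:`, `geof:`, and `units:` terms are the OGC GeoSPARQL standard's own vocabulary; the `vol:` terms are placeholders of mine, not the published ontology:

```typescript
// Sketch of a GeoSPARQL radius query: opportunities within `metres` of a
// point. The vol: prefix and class name are illustrative assumptions.
function buildNearbyQuery(lon: number, lat: number, metres: number): string {
  return `
    PREFIX vol:   <https://standard.volunteeringdata.io/ns#>
    PREFIX geo:   <http://www.opengis.net/ont/geosparql#>
    PREFIX geof:  <http://www.opengis.net/def/function/geosparql/>
    PREFIX units: <http://www.opengis.net/def/uom/OGC/1.0/>
    SELECT ?opp WHERE {
      ?opp a vol:VolunteeringOpportunity ;
           geo:hasGeometry/geo:asWKT ?wkt .
      FILTER(geof:distance(?wkt, "POINT(${lon} ${lat})"^^geo:wktLiteral, units:metre) < ${metres})
    }`;
}
```

Jena ships a GeoSPARQL module, so this is plausibly a configuration-and-indexing task rather than a rebuild, though the repo itself is the authority on why it remains a TODO.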
The object repo is a published TypeScript/npm library providing typed RDF mapping classes. It is on version 0.4.0, has continuous integration running, and is live on npm. This is genuinely useful developer infrastructure.
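For readers who have not met "typed RDF mapping" before, the technique looks roughly like the following. To be clear, this is not the npm package's actual API, which I have not reproduced here; it is a hypothetical illustration of the general shape: plain typed objects whose properties correspond to ontology terms and which serialise to triples:

```typescript
// Hypothetical illustration of typed RDF mapping. This is NOT the published
// library's API, just the general shape of the technique it implements.
type Triple = [subject: string, predicate: string, object: string];

class Opportunity {
  constructor(
    public iri: string,
    public title: string,
    public organisation: string,
  ) {}

  // Serialise the typed object into RDF triples. The namespace and predicate
  // names are placeholder assumptions, not the standard's actual terms.
  toTriples(): Triple[] {
    const ns = "https://standard.volunteeringdata.io/ns#"; // assumed namespace
    return [
      [this.iri, "rdf:type", `${ns}VolunteeringOpportunity`],
      [this.iri, `${ns}title`, JSON.stringify(this.title)],
      [this.iri, `${ns}organisation`, this.organisation],
    ];
  }
}
```

The value of a library like this is that implementers get compile-time checking against the ontology's shape instead of hand-assembling triples and discovering typos at query time.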
The mcp-server repo, which provides a Model Context Protocol integration so that AI agents can query the dataset natively, has two commits and no releases. It exists as a proof of concept and a config file.
The security posture, assessed separately, reveals a meaningful attack surface: no authentication or authorisation on any endpoint, bulk enumeration endpoints that allow full dataset harvesting in seconds, SPARQL query templates publicly exposed in the API documentation, and no visible rate limiting or pagination controls. The Swagger UI is served on the production endpoint. These are not fatal flaws for an alpha-phase open data API, but they are not production-ready, and if the project's ambition is to become critical national infrastructure for the volunteering sector, they will need to be addressed before that becomes credible.
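Of the missing controls, rate limiting is among the cheapest to retrofit. A minimal token-bucket sketch, independent of any web framework (the wiring to HTTP middleware and the choice of per-client key are left out):

```typescript
// Minimal token-bucket rate limiter sketch: each client gets `capacity`
// requests, refilled continuously at `refillPerSecond`. A real deployment
// would key one bucket per API client and wire this into HTTP middleware.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if the caller is throttled.
  allow(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.last) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond,
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A control this small would not stop a determined scraper, but it would close the "full dataset harvested in seconds" path while keeping the API genuinely open.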
The genuine achievements
None of the above should obscure what this project has actually accomplished, because it is more than the sector has managed before.
The existence of a formal, open, sector-backed ontology for volunteering data is new. Previous attempts at data standardisation in the VCSE sector have tended to produce spreadsheet schemas or informal agreements between specific platforms, not reusable linked-data vocabularies with governance structures and versioned releases. The OWL/RDF approach means the standard is extensible, machine-readable, and compatible with broader semantic web infrastructure. That is the right technical foundation.
The involvement of SCVO and GoVo as early adopters is significant. Milo is live infrastructure used across Scotland's Third Sector Interfaces. GoVo is the platform selected for the Big Help Out campaign, giving it a national profile. When two platforms of that scale commit to implementing a standard in its early life, they lower the coordination cost for everyone who follows. Standards die without early adopters; this one has them.
The AI demonstration, while built on ChatGPT rather than the sector's own infrastructure, proved something important: that standardised volunteering data can power natural-language discovery tools. The same standardised data that drives a search filter can drive a conversational AI. The same API that feeds a website widget can feed a language model. This convergence is not trivial — it is what transforms a data standard from a technical convenience into something with genuine public-facing value.
The hackathon produced working prototypes in accessible opportunity discovery, conversational volunteer matching, crisis response coordination, and AI-powered search. Four functional prototypes in two days from 25 participants is strong evidence that the foundation can be built on.
The research finding that platform gatekeeping prevents smaller organisations from publishing data at all is important, even though it points to a problem rather than a solution. Naming it clearly is the first step to designing around it.
The honest limitations
The announcement describes a "published standard." What exists is a well-developed alpha-version ontology with active governance. The distinction matters because a published standard implies stability. Implementations can depend on it without expecting breaking changes. Ontology version 4 has been released, but the working group discussions visible on GitHub show that the model is still actively debated in areas including accessibility, geolocation, and the representation of volunteer-involving organisations. Implementations built now are building on living, changing ground.
The API is real, but it is currently populated almost entirely by Do IT data. One source is not a shared data infrastructure; it is a single publisher with a shared API on top. The value of standardisation is proportional to the number of parties publishing in that standard. SCVO's Milo has committed to implementing it. GoVo has agreed to join an early adopter group. But agreeing to implement and actually publishing standardised data are different things, and neither commitment comes with a published timeline.
The AI demonstration was built on ChatGPT. This is not a criticism: using an existing AI platform is the sensible way to demonstrate proof of concept quickly. But it means the AI capability sits inside a commercial product controlled by a third party, not inside the open infrastructure being developed. The MCP server, which would allow any AI agent to query the volunteering dataset natively, is two commits old with no documentation. The path from "ChatGPT demo on standardised data" to "AI-native open infrastructure the sector owns" is not a short one.
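To make that gap concrete: an MCP integration ultimately amounts to describing queryable tools to an AI agent. A hypothetical descriptor for an opportunity-search tool might look like the following; the shape is illustrative, not the actual contents of the mcp-server repository:

```typescript
// Hypothetical MCP-style tool descriptor. Illustrative only; NOT the actual
// configuration or tool names in the project's mcp-server repository.
const searchTool = {
  name: "search_volunteering_opportunities",
  description:
    "Full-text search over the open volunteering dataset. Returns matching opportunities.",
  inputSchema: {
    type: "object",
    properties: {
      keyword: { type: "string", description: "Free-text search term" },
      limit: { type: "number", description: "Maximum results to return" },
    },
    required: ["keyword"],
  },
} as const;
```

The descriptor is the easy part; the engineering effort sits in hosting, authentication, and keeping the tool's behaviour aligned with a still-evolving ontology.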
The research into small and grassroots organisations identified platform gatekeeping as a barrier but did not solve it. The standard currently requires technical implementation, either within a platform that adopts it or through direct API publishing. For the community football club, the neighbourhood food bank, or the street-level mutual aid group that represents the majority of volunteering activity by volume, neither route is accessible. The announcement cites £24.69 billion in economic value contributed by 24 million volunteers; the gap between that scale and the current reach of the infrastructure is vast.
What this means for the sector
The UK volunteering sector has operated, for decades, on informal data infrastructure. Opportunities are published on organisation websites, on local council pages, on platform-specific listings, in Facebook groups, in email newsletters. The friction this creates is not hypothetical; it is measurable in the volunteers who never found the right opportunity, in the organisations that spent staff time entering the same information on five different platforms, in the crisis responses that were slower than they needed to be because data was siloed.
What this project has done is demonstrate, credibly and in public, that a better approach is technically feasible and sector-supported. That demonstration has value independent of the current state of the technology, because it changes the political and commercial calculus for everyone operating in the space.
Platform providers who previously had no incentive to share data now face a different question: not "why would we standardise?" but "why wouldn't we, given that the standard exists and early adopters are already using it?" The ODI's stewardship role provides the governance continuity that makes adoption a rational long-term bet rather than a donation to a project that might not survive.
For organisations like SCVO, which already had federated data infrastructure across Scotland's Third Sector Interfaces, the standard provides an upgrade path rather than a replacement. For newer platforms, it removes the need to design a data model from scratch and provides interoperability from day one.
The AI dimension is the most forward-looking piece of the announcement, and also the least developed. But the direction is clear: as more volunteers use AI assistants to manage their lives, as local councils build AI-powered resident services, as emergency response coordinators reach for conversational tools during crises, the question "where do I find a volunteer with these skills, near this location, available at this time?" will increasingly be asked of machines rather than search engines. Whether those machines have access to standardised, open, queryable volunteering data will determine whether they can answer it. The MCP server, embryonic as it is, points directly at this future.
How much work remains
The honest answer is: a great deal.
The standard needs to achieve stability. The working group process is healthy, but a standard that changes frequently is a standard that is expensive to implement. The project needs to reach a point where the ontology is stable enough that SCVO, GoVo, and others can build production implementations without constant re-engineering. This is a governance problem as much as a technical one.
The data pipeline needs to be federated. One source (however good) is not a sector infrastructure. Each new publisher added to the standard multiplies its value, but adding publishers requires solving the same problem at different levels of technical sophistication: a national platform like GoVo needs API-level integration; a medium-sized charity needs a simple data export; a small community group needs either a form-based publishing tool or a platform that abstracts the standard away entirely.
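For the middle tier, "a simple data export" could be as small as a spreadsheet-to-standard mapper. A sketch, with hypothetical column and field names (the published standard's actual terms would replace these):

```typescript
// Sketch of the "simple data export" tier: map one row from a charity's
// spreadsheet into a standard-shaped record. Column headers and the target
// field names are hypothetical, not the published standard's terms.
interface StandardOpportunity {
  title: string;
  description: string;
  postcode: string;
}

function rowToOpportunity(row: Record<string, string>): StandardOpportunity {
  return {
    title: (row["Opportunity name"] ?? "").trim(),
    description: (row["What volunteers will do"] ?? "").trim(),
    postcode: (row["Postcode"] ?? "").trim().toUpperCase(),
  };
}
```

The code is trivial by design; the hard part is the surrounding service that hosts the upload form, validates the output, and publishes it to the shared API on the organisation's behalf.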
The security posture needs to be production-ready. The current API is appropriate for a DCMS-funded alpha with a small developer audience. It is not appropriate for infrastructure that SCVO's national systems or GoVo's Big Help Out volumes are routing through. Authentication, rate limiting, input validation, and pagination are not optional features; they are the difference between open infrastructure and an open vulnerability.
The AI integration needs to move from demonstration to owned infrastructure. The ChatGPT prototype showed what is possible. The MCP server points at what is needed. What sits between them is months of engineering, a decision about hosting and governance, and a plan for making AI-queryable volunteering data a feature of the standard rather than an afterthought.
The small organisation problem needs a direct solution. The research identified it. The standard does not yet solve it. A data publishing tool simple enough for a grassroots group to use, ideally embedded within platforms those groups already use, is the missing piece that determines whether this becomes infrastructure for the whole sector or infrastructure for the better-resourced part of it.
And the adoption needs to broaden beyond the current early adopter group. Two committed platforms is a foundation. Ten is a network. Fifty is infrastructure. The step from foundation to network is not automatic; it requires active partnership development, technical support for implementing organisations, and continued demonstration of value.
The right verdict
The ODI announcement described this as a "key milestone." That framing is accurate. A milestone on a road is not the destination, but it is evidence that the road exists and that progress is being made.
The volunteering data standard is real, technically sound, and backed by credible governance. The API works. The early adopters are serious. The AI direction is the right one. The sector appetite for this exists: 68% is not consensus, but it is majority support, which is more than most infrastructure initiatives attract.
What it is not, yet, is the shared data layer for UK volunteering. The gap between the alpha and that ambition is not measured in months; it is measured in adoption breadth, data quality, security maturity, small-organisation access, and sustainable funding for the next phase of development.
The foundation has been poured. The question now is whether the sector, funders, and platforms will continue building on it or whether this becomes another thoughtful, well-intentioned piece of UK open data infrastructure that stalled between the demonstration and the deployment.
The technical groundwork suggests it deserves better than that fate. Whether it gets it depends on decisions being made now.
This analysis draws on the ODI project announcement of March 2026, the volunteeringdata.io GitHub repositories, and independent technical review of the published API specification.
This is why I’m building Impactful, a platform that helps organisations connect with young people who want to volunteer. If your organisation cares about youth volunteering, I’d like to hear from you.