
Behind the Startup: How OpenLedger is Building a Blockchain-Native AI Ecosystem

by Ishan Pandey, May 26th, 2025

Too Long; Didn't Read

OpenLedger is decentralizing AI with transparent data attribution, rewards, and agent-based economies. Learn how their team plans to make AI accountable.

As AI and blockchain converge, one project stands at the intersection of this revolution: OpenLedger. With promises to decentralize model training, reward data attribution, and power agent economies, OpenLedger is pushing toward a new era of transparent and community-owned AI.

In this interview, we sat down with core contributor Kamesh to understand the principles, innovations, and roadmap that underpin OpenLedger’s unique thesis.

Ishan Pandey: Hi Kamesh, it's a pleasure to welcome you to our “Behind the Startup” series. Could you start by telling us about yourself and how you became involved with OpenLedger?


Kamesh: Hey Ishan, I’m a core contributor at OpenLedger. Before OpenLedger, I worked at an AI/ML research and development company, where I worked with enterprise clients like Walmart, Cadbury and more. We noticed a few main issues in the AI segment: black-box models, a lack of transparency, and no way of knowing which data led to a specific inference. The bigger issue is that centralized companies have not fairly compensated their data contributors, and that is what we’re trying to address with OpenLedger.


Ishan Pandey: OpenLedger is being positioned as the “blockchain built for AI.” Can you walk us through the gap in the current infrastructure that OpenLedger is trying to solve and why this moment in time is critical?


Kamesh: As mentioned previously, centralized companies have trained their AI models on user data without permission and have made billions of dollars without paying anyone fairly. On OpenLedger, every training step, data source, and model upgrade leaves a trace that anyone can inspect. This matters right now because people ask AI for financial advice, health suggestions, and even election coverage. With such sensitive topics, it is important to ensure that the model is using accurate data and not hallucinating. With Proof of Attribution, we can identify and eliminate the data that led to a specific harmful inference, ensuring safety in sensitive use cases.


Ishan Pandey: One of the more ambitious ideas behind OpenLedger is to create attribution-based rewards for data and model contributions. In practical terms, how do you measure contribution in a decentralized environment?


Kamesh: Think of a shared on-chain history that records every dataset and model tweak along with the wallet that submitted it. Whenever the network trains a new version or responds to a user query, it looks back through that history to see which contributions were involved. Each time your entry shows up, you automatically receive a portion of the fee tied to that action. This information is public, so anyone can open the explorer and trace exactly how their work has been used and what it has earned.
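The fee-splitting idea Kamesh describes can be sketched in a few lines. This is a hedged illustration, not OpenLedger's actual protocol: it assumes each contribution involved in serving a query is recorded against a wallet address, and that the fee is divided in proportion to how often each wallet's entries appear.

```python
# Illustrative sketch of attribution-based fee splitting (hypothetical,
# not OpenLedger's on-chain logic): each wallet earns a share of the fee
# proportional to how many of its recorded contributions were involved.
from collections import Counter

def split_fee(fee: float, contributions_used: list[str]) -> dict[str, float]:
    """contributions_used lists the wallet address behind each contribution
    the network found in its shared history for this query."""
    counts = Counter(contributions_used)
    total = sum(counts.values())
    return {wallet: fee * n / total for wallet, n in counts.items()}

# Alice's data appeared in 2 of the 4 contributions used, so she gets half.
payouts = split_fee(10.0, ["0xAlice", "0xBob", "0xAlice", "0xCarol"])
```

Because the history and the split rule are both public, anyone can recompute these payouts from the explorer and check they match what was distributed.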


Ishan Pandey: Let’s talk about the Model Factory and OpenLoRA. From a technical perspective, how are these tools built to handle resource sharing, GPU bottlenecks, and the demands of model iteration at scale?


Kamesh: Think of Model Factory as a no-code platform where anyone can fine-tune a specialized language model without renting an entire data center. You pick a base model and select the parameters. When your fine-tune finishes, it’s saved as a lightweight LoRA adapter, so many versions can live side by side without eating huge amounts of memory or bandwidth. OpenLoRA then lets you plug those adapters into a shared base model during inference, so a single GPU can switch between dozens of specializations, allowing for iteration at scale. Model Factory and OpenLoRA are very important pillars of the ecosystem because they let everyone participate in AI development at minimal cost.
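The serving pattern behind OpenLoRA can be sketched as follows. This is a toy illustration of the general multi-adapter idea, not OpenLedger's implementation: the class names (`BaseModel`, `LoraAdapter`) and the scalar "weights" are stand-ins for a real transformer and its low-rank updates.

```python
# Toy sketch of multi-adapter serving: one shared base model holds many
# small LoRA adapters, and each request picks which adapter to apply.
# Names and numbers are illustrative, not OpenLedger APIs.

class LoraAdapter:
    def __init__(self, name: str, delta: float):
        self.name = name
        self.delta = delta  # stand-in for the adapter's low-rank weight update

class BaseModel:
    def __init__(self, weight: float = 1.0):
        self.weight = weight  # stand-in for the frozen base weights
        self.adapters: dict[str, LoraAdapter] = {}

    def register(self, adapter: LoraAdapter) -> None:
        # Adapters are small, so dozens can sit in memory side by side.
        self.adapters[adapter.name] = adapter

    def infer(self, x: float, adapter_name: str) -> float:
        # Base weights plus the selected adapter's update; switching
        # specializations is just a dictionary lookup, not a model load.
        a = self.adapters[adapter_name]
        return x * (self.weight + a.delta)

base = BaseModel()
base.register(LoraAdapter("legal-qa", 0.1))
base.register(LoraAdapter("med-summaries", 0.3))
result = base.infer(2.0, "legal-qa")
```

The design point is that the expensive object (the base model) is loaded once, while per-specialization state stays tiny, which is why a single GPU can serve many fine-tunes.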


Ishan Pandey: You’re also introducing a concept called “Proof of Attribution” (PoA). What exactly is being measured here, and how do you ensure it’s a reliable metric for assessing agent activity?


Kamesh: Proof of Attribution is how we track which data a model used to arrive at a specific inference, and how we reward every meaningful contribution. When a model uses your data to produce an inference, that use is recorded on-chain. Each time users rely on the model, a portion of the revenue is automatically routed back to you, and the entire trail is open for anyone to verify. Contributors can see proof of their work on-chain and get rewarded for it fairly.
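The "open trail" property Kamesh describes is the kind of thing a hash-chained append-only log provides. The sketch below is a conceptual illustration using a plain list and SHA-256, not OpenLedger's on-chain record format: each inference record names the data entries it relied on and commits to the previous record's hash, so rewriting history breaks verification.

```python
# Conceptual sketch (not OpenLedger's format): an append-only attribution
# log where each record is hash-chained to the one before it, so anyone
# can verify the trail of data usage has not been rewritten.
import hashlib
import json

def append_record(ledger: list[dict], data_ids: list[str], fee: float) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"data_ids": data_ids, "fee": fee, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def verify(ledger: list[dict]) -> bool:
    prev = "0" * 64
    for rec in ledger:
        body = {"data_ids": rec["data_ids"], "fee": rec["fee"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False  # a record was altered or reordered
        prev = rec["hash"]
    return True

ledger: list[dict] = []
append_record(ledger, ["dataset-42"], 0.5)
append_record(ledger, ["dataset-42", "dataset-77"], 1.0)
```

On a real chain the consensus layer enforces this immutability; the point here is only that attribution records, once linked, give contributors a trail they can independently check.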


Ishan Pandey: AI royalty as a concept hinges on long-term tracking and trust. How do you plan to handle issues of model forking, proxy usage, and downstream value attribution across AI agents?


Kamesh: Our priority is to ensure that contributors always get compensated fairly. To avoid issues such as model forking and proxy usage, we will host all the models ourselves, and external access will be via API only.


Ishan Pandey: You’ve hinted at a testnet rollout. Could you share the next set of milestones you're working toward, and what developers can expect when engaging with OpenLedger in its current phase?


Kamesh: Contributors are already spinning up nodes and streaming real data into the network. We currently have over 4 million active nodes running on our testnet, and we just wrapped up Epoch 2. We have over 10 projects building on OpenLedger already, including one led by a former Google DeepMind researcher. We are also very excited to share that we will soon be heading toward our TGE and mainnet launch. We will share the full details soon, so keep an eye out.


Ishan Pandey: Finally, as someone who’s building deep-tech infrastructure, what advice would you give to developers or researchers looking to enter this intersection of decentralized AI?


Kamesh: The best advice I can give people is to keep your product simple. People should be able to understand what you do within the first few minutes. Users care more about whether a product works smoothly and solves their problem than about fancy keywords.


Don’t forget to like and share the story!


Vested Interest Disclosure: This author is an independent contributor publishing via our business blogging program. HackerNoon has reviewed the report for quality, but the claims herein belong to the author. #DYO

