
ML meets Blockchain: The Clash of Buzzwords

Blockchain technology and ZK proofs may be the last missing puzzle pieces for creating trustless and secure decentralized machine learning networks. This complex symphony of technologies opens the door to knowledge locked in private databases, such as those held by hospitals or governments, without leaking information, while still providing verifiable models and predictions.

Intro

Machine learning has become one of the most popular buzzwords in modern tech, alongside Blockchain and Web 3.0. Once the hype is stripped away, there is a good reason for this. Machine learning, mainly in the form of deep learning, has capitalized on advances in hardware and opened up a whole world of use cases previously unreachable with purely deterministic algorithms. Average desktop computers still cannot efficiently train large deep neural networks, so training is often delegated to cloud services powered by large server farms. There is a problem with this approach: if the data owners lack the resources to generate complex ML models themselves, their only option for "verifying" the model is to trust the cloud service that the supplied model is correct. Another interesting pain point is large volumes of privately owned data. Medical data stored in hospitals would be highly valuable for training ML models and providing predictions, but it is not accessible to external users for privacy and safety reasons. The solution to both problems may be closer than expected, thanks to ZK proofs and Blockchain technology.

Extracting verifiable pieces of information while hiding the unwanted parts is, in fact, a textbook use case for zero-knowledge proofs. Using zero-knowledge magic to generate verifiable subsets of credentials is not a new idea. Still, the lack of a standardized approach that also covers Blockchain use cases opens up a new research topic.

Machine learning and ZK

This research aims to give clues and directions toward achieving ML model generation in a trustless environment and deriving predictions from models trained on private data. Solving these issues is not straightforward, but we can follow the path from the fundamental assumptions and see how far we get. Let's start with the basics.

Zero-knowledge proofs convince a verifier that a specific computation was executed correctly without revealing the input values. The proof is verified against a public value, called a commitment, announced before the computation; a good example of a commitment is a hash of the inputs. The ZK proof can then attest that the computation was performed correctly on the inputs whose hash was previously published as the commitment.
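To make the commit-then-prove idea concrete, here is a minimal Python sketch. It contains no real zero-knowledge machinery: the "verification" simply re-runs the computation, which a real verifier would never do; it only illustrates how a result is checked against a previously published commitment (a hash of the inputs).

```python
import hashlib
import json

def commit(inputs: dict) -> str:
    # Public commitment: a hash of the (still private) inputs.
    return hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()

def computation(inputs: dict) -> int:
    # Stand-in for the private computation, e.g. one training step.
    return inputs["a"] * inputs["b"] + inputs["c"]

# Prover side: publish the commitment first, then compute and announce the result.
private_inputs = {"a": 3, "b": 7, "c": 1}
commitment = commit(private_inputs)
claimed_output = computation(private_inputs)

# Verifier side (illustration only): a real ZK proof would convince the verifier
# that *some* inputs hashing to `commitment` yield `claimed_output`, without the
# verifier ever seeing the inputs. Here we cheat and re-run the computation.
assert commit(private_inputs) == commitment
assert computation(private_inputs) == claimed_output
print(f"output {claimed_output} is consistent with commitment {commitment[:16]}...")
```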

Any computation can be represented as an arithmetic circuit, so in theory we can generate ZK proofs for arbitrary computations, including training ML models and producing predictions from the trained models. However, practical constraints stand in our way. The steps of an ML algorithm, and their number, depend heavily on the input data, which is a significant practical restriction: supporting every execution path requires highly complex circuits. The circuits may become so complex that the zkSNARK proof generation time exceeds the limits of any practical application.
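As an illustration of what "representing a computation as an arithmetic circuit" means, the hand-rolled sketch below flattens the statement out = x * y + x into two gates checked over a finite field. This is only a toy; it does not follow any particular SNARK's constraint format.

```python
P = 2**61 - 1  # a prime modulus; real systems use the proving system's field

def circuit_constraints(w: dict) -> list:
    # Each entry must evaluate to 0 (mod P) for the witness to satisfy the circuit.
    x, y, v1, out = (w[k] % P for k in ("x", "y", "v1", "out"))
    return [
        (x * y - v1) % P,    # gate 1: v1 = x * y
        (v1 + x - out) % P,  # gate 2: out = v1 + x
    ]

# The witness includes the intermediate wire v1, not just inputs and output.
witness = {"x": 5, "y": 4, "v1": 20, "out": 25}
assert all(c == 0 for c in circuit_constraints(witness))
print("witness satisfies all gates")
```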

Limitations of ZK on Blockchain

Another severe limitation is the Blockchain's computational power. EVM smart contracts can efficiently verify only zkSNARK proofs, specifically those generated using Groth16 and, in some cases, PLONK.

We may be approaching a dead end, but there is another way of looking at the problem. Even though our ML algorithms may be complex and highly input-dependent, they are executed on CPUs with a relatively small set of deterministic instructions. Instead of modeling an arithmetic circuit for each specific problem, we could simulate the program execution step by step. More generally, if we could prove that the state changes produced by executing each program instruction are correct, we could "easily" prove arbitrary computations.

This sounds like a job for Cairo or Noir, programming languages designed for writing programs with verifiable execution traces. The execution trace can then be used as input for generating a proof of correct execution. This approach sounds promising, but there is a catch - the proofs of these two systems are not verifiable on EVM blockchains (at least not through smart contract methods).

There are, however, ways to generate STARK proofs of the program execution and then use those proofs as inputs for SNARK provers. The SNARK prover proves that the STARK proof was correctly generated, and the resulting SNARK proof can be verified on-chain. This approach is used in the design of Polygon zkEVM. To summarize what we have so far - we have the tools for building provers and verifiers for ML algorithms. Next stop - decentralization.
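The sketch below shows only the shape of that pipeline: prove the execution trace with a STARK, wrap the STARK proof in a SNARK, and verify the SNARK on-chain. All the functions and types are hypothetical placeholders, not the API of Cairo, Noir, or Polygon zkEVM.

```python
from dataclasses import dataclass

@dataclass
class StarkProof:
    trace_commitment: str  # commitment to the program's execution trace

@dataclass
class SnarkProof:
    statement: str  # "a valid STARK proof exists for this execution"

def stark_prove(program: str, inputs: dict) -> StarkProof:
    # Placeholder: a STARK prover attests to the step-by-step execution trace.
    return StarkProof(trace_commitment=f"trace:{program}:{sorted(inputs)}")

def snark_wrap(stark: StarkProof) -> SnarkProof:
    # Placeholder: a SNARK circuit that verifies the STARK proof and emits a
    # small proof cheap enough for an EVM contract (e.g. a Groth16/PLONK verifier).
    return SnarkProof(statement=f"stark_verified:{stark.trace_commitment}")

def onchain_verify(snark: SnarkProof) -> bool:
    # Placeholder for the smart-contract verifier call.
    return snark.statement.startswith("stark_verified:")

proof = snark_wrap(stark_prove("train_model", {"dataset": "d1"}))
print("accepted on-chain:", onchain_verify(proof))
```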

Decentralized ML

Let's assume that we have a decentralized network of nodes capable of generating and verifying ZK proofs for the execution of ML algorithms. If node A asks node B to train a new ML model over some data, the flow is the following (sketched in code right after the list):

  • Node A sends the data to node B.
  • Node B trains the model and generates the execution proof.
  • Node B sends the model parameters along with the execution proof to node A.
  • Node A verifies the proof and accepts the model if the proof is correct, or rejects it otherwise.
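A minimal sketch of this exchange, with "training" reduced to computing a mean so the example stays self-contained. The fake_prove/fake_verify functions are stand-ins for a real proving system; a real verifier would work from a data commitment, not the raw data.

```python
import hashlib
import json
import statistics

def fake_prove(data: list, model: dict) -> str:
    # Placeholder for a ZK proof that `model` results from running the agreed
    # training algorithm on `data`.
    return hashlib.sha256(json.dumps([data, model], sort_keys=True).encode()).hexdigest()

def fake_verify(data: list, model: dict, proof: str) -> bool:
    # Placeholder verifier; a real ZK verifier would not need the raw data.
    return proof == fake_prove(data, model)

# 1. Node A sends data to node B.
data = [1.0, 2.0, 3.0, 4.0]

# 2. Node B "trains" the model and generates the execution proof.
model = {"mean": statistics.mean(data)}
proof = fake_prove(data, model)

# 3.-4. Node B returns (model, proof); node A verifies and accepts or rejects.
print("node A accepts the model:", fake_verify(data, model, proof))
```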

Technically, we have everything we need, but the incentives are a bit off. Node A is incentivized to offload hard work to node B, but node B has no incentive to perform the computation. Let's add some monetary incentives and see how that plays out.

Node B only wants to work if it gets paid, so node A offers some money for the service. Node B could take the money and disappear, which is certainly not what node A would prefer. To prevent that, node A needs to commit to a reward up front, while node B should receive the reward only if the execution proof is correct.

A blockchain would be a good escrow: node A can lock some money, and node B can unlock the payment by providing the execution proof. To enable this scheme, node A would first commit to the data by writing a data hash to the blockchain and then send the data to node B. Node B would check whether the data hash matches the provided data and refuse the job if it doesn't. If the hash matches, node B trains the model over the given data and submits the proof, linked to the committed data hash, to the blockchain.

Assuming that the blockchain can verify the proof and transfer the money to node B, node B may still refuse to send the model to node A, so node A needs to receive the model before payment. But if node B sends the model parameters to node A first, node A may falsely claim that node B never sent any parameters and cancel the payment. It may seem that we are hitting a wall - again.

ML and non-EVM Blockchains

If we look from the perspective of an EVM blockchain - we are probably stuck. But what if we use a non-EVM blockchain that allows custom transactions and custom block parameters? If we could put all model parameters on-chain, we would have a simple trustless scheme (sketched in code after the list):

  • Node A deposits money into an escrow on the blockchain and commits to a data hash.
  • Node A sends the data to node B.
  • Node B verifies the data hash and computes the model parameters.
  • Node B submits the execution proof and the model parameters on-chain.
  • The blockchain verifies the proof against the provided model parameters and transfers the money to node B if everything is correct.
  • Node A collects the model parameters from the blockchain.
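A minimal sketch of this escrow flow, with the blockchain reduced to a single in-memory Escrow object. The on-chain proof check is faked by recomputing a hash, where a real chain would run a succinct verifier; all names here (Escrow, submit_result, ...) are hypothetical, not an existing API.

```python
import hashlib
import json
import statistics

def h(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class Escrow:
    """Stand-in for an on-chain escrow holding node A's reward."""
    def __init__(self, reward: int, data_hash: str):
        self.reward, self.data_hash = reward, data_hash
        self.model_params, self.paid = None, False

    def submit_result(self, model_params: dict, proof: str) -> bool:
        # Faked on-chain verification of the proof against the committed data hash.
        if proof == h({"data_hash": self.data_hash, "model": model_params}):
            self.model_params, self.paid = model_params, True  # release payment to node B
        return self.paid

# 1. Node A deposits the reward and commits to the data hash.
data = [1.0, 2.0, 3.0, 4.0]
escrow = Escrow(reward=100, data_hash=h(data))

# 2.-3. Node B checks the data against the committed hash and trains the model.
assert h(data) == escrow.data_hash
model = {"mean": statistics.mean(data)}

# 4.-5. Node B submits the parameters and proof; payment is released if the proof checks out.
proof = h({"data_hash": escrow.data_hash, "model": model})
print("node B paid:", escrow.submit_result(model, proof))

# 6. Node A reads the model parameters from the chain.
print("node A reads model:", escrow.model_params)
```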

The main issue is costly storage on EVM blockchains, but a blockchain that allows custom transactions, like one built with Cosmos, can serve as the special-purpose chain we need here.

Private Data Models

What about predictions from models trained on private data? Fortunately, the situation is much simpler here, because the node that generates the model is governed by the same party that owns the data. For example, a hospital node may generate a commitment hash of its data and store it on-chain without revealing any data. An external node may then request a classification of a patient with given symptoms, based on the model trained on the hospital's hidden data. The flow would be the following (sketched in code after the list):

  • The external node submits the request on the blockchain along with a fee for the prediction service.
  • The hospital node performs the prediction and generates the execution trace.
  • The hospital node submits both the predicted value and the proof on-chain.
  • If the proof matches the committed data hash, the requested prediction method, and the requested inputs, the hospital node receives the fee deposited by the external node.
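A sketch of this flow in the same spirit: the hospital's records never leave the node, and only a commitment hash, the prediction, and the (faked) proof touch the "chain". Here the proof is just a hash binding the committed data, the method, the inputs, and the output; a real ZK proof would let the chain check this without ever recomputing it from the private data. The prediction method is a made-up nearest_match helper, purely for illustration.

```python
import hashlib
import json

def h(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# The hospital commits to its private records on-chain without revealing them.
private_records = [
    {"symptoms": ["fever", "cough"], "diagnosis": "flu"},
    {"symptoms": ["sneezing"], "diagnosis": "cold"},
]
data_commitment = h(private_records)

# 1. The external node posts a prediction request and a fee.
request = {"method": "nearest_match", "symptoms": ["fever", "cough"], "fee": 10}

# 2. The hospital runs the (trivial) prediction over its private data.
def nearest_match(records, symptoms):
    best = max(records, key=lambda r: len(set(r["symptoms"]) & set(symptoms)))
    return best["diagnosis"]

prediction = nearest_match(private_records, request["symptoms"])

# 3. The hospital submits the prediction plus a proof bound to the data commitment,
#    the requested method, and the requested inputs.
proof = h({"data": data_commitment, "method": request["method"],
           "inputs": request["symptoms"], "output": prediction})

# 4. On-chain check (faked): the fee is released only if the proof matches the
#    commitment, method, inputs, and claimed output.
expected = h({"data": data_commitment, "method": request["method"],
              "inputs": request["symptoms"], "output": prediction})
print("prediction:", prediction, "| fee released:", proof == expected)
```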

Merging ML with Blockchain

The proposed protocol requires us to think outside of EVM blockchains and construct specialized blockchains for the specific purpose of ML. This decision comes naturally, as we need the following:

  • Network nodes that can communicate with each other.
  • Distributed ledger to create escrows and verify the results.
  • Consensus mechanism to synchronize nodes and their logs.

EVM blockchains pass all these requirements, but the limitations in costs and transaction size leave us with no choice but to look outside the EVM space and create custom chains that allow larger transactions.

Let's investigate the possibilities of such a decentralized ML system. All executions are verifiable, and the money reaches its target account only when all conditions are satisfied. This setup is fertile ground for commercial use cases. As we mentioned in the introduction, average users do not own hardware powerful enough to train huge neural models efficiently, but they can still take part in parallel training across multiple nodes, train simpler models on private data, or train members of ensemble models.

In practice, this means that average users can offer their spare hardware resources to train ML models, run predictions on local data, and earn passive income. Private data silos can provide verifiable predictions from models trained on their private data without any data leaks, monetizing their data and making huge quantities of data available to the research and business communities.

Conclusion 

  • EVM blockchains are not the only solution.
  • Private data can be monetized without leaking information.
  • Available hardware resources can be utilized for training ML models and running predictions, thus creating passive income.
  • Proving the execution steps of a program may be, in some cases, easier than constructing arithmetic circuits for a specific computation.

Got any questions or comments? Find us on Twitter @3327_io
