5 Feb 2024 | KD CONWAY, CATHIE SO, XIAOHANG YU, KARTIN WONG
opML is an innovative approach that enables blockchain systems to perform AI model inference. It uses an interactive fraud proof protocol, similar to optimistic rollup systems, to ensure decentralized and verifiable consensus for ML services, enhancing trust and transparency. Unlike zkML, which relies on zero-knowledge proofs, opML uses fraud proofs for efficiency and low cost, allowing execution of large models like 7B-LLaMA on standard PCs without GPUs. opML combines blockchain and AI to create accessible, secure, and efficient on-chain machine learning.
opML addresses the challenge of performing AI computations directly on-chain, which is infeasible due to high gas costs. Instead, it uses fraud proofs to verify ML results on-chain, reducing computational burden and memory usage. opML is more cost-efficient and practical for large models compared to zkML, which is limited by high proof generation costs and memory consumption.
opML's design principles include deterministic ML execution, separation of execution from proving, and optimistic machine learning with interactive fraud proofs. It uses a fraud-proof virtual machine (FPVM) to trace instruction steps and prove them on-chain. The system also includes a high-efficiency ML engine that serves both native execution and fraud-proof scenarios, ensuring consistency and determinism.
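For fraud proofs to work, every party must be able to reproduce bit-identical results. The sketch below is an illustration, not opML's actual engine code: it shows one common way to get determinism, assuming inference is done in fixed-point arithmetic instead of hardware floating point (the scaling factor and helper names are ours).

```python
import numpy as np

# Illustrative only: opML needs bit-exact results across machines, so the ML
# engine must avoid hardware-dependent floating-point behavior. One common
# approach (assumed here) is fixed-point arithmetic with a fixed scale.

SCALE = 1 << 16  # Q16.16 fixed-point scaling factor (assumption)

def to_fixed(x: np.ndarray) -> np.ndarray:
    """Quantize floats to 64-bit fixed-point integers."""
    return np.round(x * SCALE).astype(np.int64)

def fixed_matmul(w_fx: np.ndarray, x_fx: np.ndarray) -> np.ndarray:
    """Integer-only matmul; rescale once at the end to stay in Q16.16."""
    acc = w_fx @ x_fx            # exact integer arithmetic, no rounding drift
    return acc // SCALE          # deterministic rescale on every platform

rng = np.random.default_rng(0)
w = to_fixed(rng.random((4, 8)))
x = to_fixed(rng.random(8))
print(fixed_matmul(w, x))        # identical output on any machine
```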
The opML workflow involves a requester initiating a task, a submitter performing the ML task and committing the result on-chain, and verifiers checking the results. If a dispute arises, the bisection protocol is initiated to pinpoint the erroneous step. The smart contract then arbitrates the dispute, ensuring the correct result is committed.
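The following sketch illustrates the core of the bisection protocol. The `state_at` callbacks are hypothetical stand-ins for the state commitments that each party would actually post through the on-chain contract.

```python
# A minimal sketch of the bisection (dispute) game between a submitter and a
# challenger. `*_state_at(step)` are hypothetical helpers returning each
# party's claimed state hash at a given step; in the real protocol these
# commitments are exchanged via the smart contract.

def bisect(submitter_state_at, challenger_state_at, lo, hi):
    """Both parties agree at step `lo` and disagree at step `hi`.
    Narrow the range until a single disputed step remains."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if submitter_state_at(mid) == challenger_state_at(mid):
            lo = mid          # agreement: the error lies in (mid, hi]
        else:
            hi = mid          # disagreement: the error lies in (lo, mid]
    return hi                 # first step where the two executions diverge

# The smart contract then re-executes only this single step on-chain
# (a one-step proof) to decide which party is correct.
```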
opML's fraud-proof virtual machine (FPVM) ensures equivalence between off-chain and on-chain execution. It uses a Merkle tree to commit to VM state and memory, enabling efficient on-chain arbitration. The system also includes a multi-phase dispute game to handle large models efficiently, reducing memory usage and improving performance.
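A toy example of the memory commitment, assuming memory is split into fixed-size pages hashed into a binary Merkle tree (the page size and hash function here are illustrative): the contract only ever needs the 32-byte root plus Merkle proofs for the few words a disputed step touches.

```python
import hashlib

# Sketch of committing VM memory as a Merkle root (assumed layout:
# fixed-size pages hashed pairwise into a binary tree).

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pages: list[bytes]) -> bytes:
    nodes = [h(p) for p in pages]
    while len(nodes) > 1:
        if len(nodes) % 2:                     # duplicate last node if odd
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

memory_pages = [b"\x00" * 4096 for _ in range(8)]
memory_pages[3] = b"model weights live here".ljust(4096, b"\x00")
print(merkle_root(memory_pages).hex())   # one 32-byte commitment to all memory
```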
The multi-phase dispute game allows computations to be performed in native environments for most phases, with only the final phase executed in the fraud-proof VM. This approach significantly enhances execution performance, making opML suitable for large models. The system also includes an incentive mechanism to ensure validators check results, preventing cheating and ensuring safety and liveness.
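Below is a rough sketch of a two-phase version of this idea (the paper generalizes it to more phases). It reuses the `bisect` helper from the earlier sketch, and the `submitter`/`challenger` interfaces are hypothetical.

```python
# Sketch of a two-phase dispute game. Phase 1 bisects over coarse computation
# units (e.g. whole ML ops) executed natively; only the single disputed unit
# is then re-run inside the FPVM, where phase 2 bisects over its instructions.

def dispute(submitter, challenger, num_ops):
    # Phase 1: bisect over high-level ops using fast native execution.
    disputed_op = bisect(submitter.op_state_at, challenger.op_state_at,
                         lo=0, hi=num_ops)

    # Phase 2: load only that op into the fraud-proof VM and bisect over its
    # instruction trace, which is short compared to the full model run.
    num_steps = submitter.vm_trace_length(disputed_op)
    disputed_step = bisect(lambda s: submitter.vm_state_at(disputed_op, s),
                           lambda s: challenger.vm_state_at(disputed_op, s),
                           lo=0, hi=num_steps)
    return disputed_op, disputed_step   # settled by a one-step on-chain proof
```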
opML's security analysis under the AnyTrust assumption shows that a single honest validator can force the system to behave correctly. This is stronger than a majority-trust model, which breaks down once malicious validators outnumber honest ones. opML's incentive mechanism, including the Attention Challenge, ensures that validators actually check results, mitigating the Verifier's Dilemma.
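As a toy illustration of the attention-challenge idea (not the paper's exact construction), validators can be sampled per task in a deterministic, publicly recomputable way, so that staying silent when selected is provable and punishable; the rate and hashing scheme below are assumptions.

```python
import hashlib

# Toy illustration only: deterministically sample which validators must post
# a result commitment for a given task, making non-participation detectable.

CHALLENGE_RATE = 0.1  # fraction of tasks each validator must respond to (assumption)

def must_respond(validator_addr: str, task_id: int) -> bool:
    """Pseudo-random but deterministic: anyone can recompute the selection,
    so a selected validator that stays silent can be penalized."""
    digest = hashlib.sha256(f"{validator_addr}:{task_id}".encode()).digest()
    sample = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return sample < CHALLENGE_RATE

print(must_respond("0xValidatorA", task_id=42))
```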
opML can be extended to support training and fine-tuning processes, ensuring the correctness of ML model updates on the blockchain. By integrating zkML and opML, privacy can be enhanced: zkML handles the input-processing layers and opML the remaining layers, achieving a secure and non-reversible level of obfuscation. opML's integration with training processes ensures auditable and transparent model updates.
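A conceptual sketch of this zkML + opML split follows; the prover and engine interfaces are hypothetical and stand in for whatever zkML backend and opML engine are actually used.

```python
# Conceptual sketch of the zkML + opML split described above (interfaces are
# hypothetical). The first layers run under zkML so the raw input never leaves
# the prover; their output activations become the public input that the rest
# of the model processes under opML's fraud-proof protocol.

def hybrid_inference(private_input, model, split_layer, zk_prover, opml_engine):
    # Private portion: prove layers [0, split_layer) in zero knowledge.
    hidden, zk_proof = zk_prover.prove(model.layers[:split_layer], private_input)

    # Public portion: run the remaining layers optimistically; the result is
    # posted on-chain and only re-checked via fraud proofs if challenged.
    output = opml_engine.run(model.layers[split_layer:], hidden)
    return output, zk_proof
```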