Decentralized AI Compute

0G Compute is the decentralized marketplace for AI inference and training. Access GPU power on-demand with verifiable execution and pay-per-use pricing.


AI Compute is Broken

Cloud AI services are expensive, opaque, and centralized.

Developers and enterprises pay premium prices with no visibility into execution, no performance guarantees, and no long-term ownership. A handful of infrastructure providers control nearly all AI compute, creating vendor lock-in and systemic risk.

Prohibitive Cost

GPU capacity carries heavy cloud markups, and availability is limited during demand spikes.

Black Box Execution

No way to verify that models ran correctly or that results weren't tampered with.

Centralized Control

A few providers control all compute, creating vendor lock-in and single points of failure.

0G Compute flips the script. Access a global network of GPU providers at 90% lower cost, with cryptographic proof that your models executed correctly.

Start Using 0G Compute

AI Compute Without Compromise

0G Compute combines decentralized infrastructure with proof-based verification to deliver trustless execution at scale.

Low-Latency, Elastic Compute

Run inference and fine-tuning globally, without capacity constraints.

Verifiable AI Execution

Every job comes with cryptographic guarantees of who ran it, how, and what was produced.

Dynamic, Cost-Effective Pricing

Pay per task, not per instance. Providers compete on cost and reliability, not brand reputation.

Composable, Ecosystem-Agnostic Design

Plug into the 0G modular stack or connect to other chains and protocols.

Take Your AI Truly Onchain

The 0G Service Marketplace connects builders to on-demand AI compute services, all powered by the 0G Compute network.

Inference (Live)

Run LLMs, vision models, speech-to-text, and more through decentralized providers.

Fine-Tuning (Live)

Customize models using your own datasets, with proof-based execution and onchain settlement.

Training (Coming Soon)

Full training pipelines to unlock end-to-end AI development onchain.

Built for High-Performance AI Workloads

0G Compute is not a single API; it is a modular execution system designed to support high-throughput AI workloads while preserving transparency and trust.

01

Smart Contracts

Onchain registration, payments, and settlement with cryptographic verification of every transaction.

02

Provider Network

Permissionless marketplace where independent GPU operators compete on reliability and cost.

03

Client SDKs

One-line integration handling signing, routing, and local verification for developers.

04

Verification Layer

Proof-based execution with cryptographic guarantees you can independently audit.

0G Compute Architecture

Trustless Inference Pipeline

From request to verified result, 0G Compute ensures every step is transparent and cryptographically secured.


Inference Flow

Users submit inference requests through the 0G network. Smart contracts handle payment and route jobs to optimal providers.

  • Request Submission: Submit model, inputs, and payment in a single transaction.
  • Intelligent Routing: Jobs are automatically routed to the provider with the best price-performance ratio (a minimal client-side sketch follows the flow below).
Request → Escrow → GPU → Result → Verification
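As a rough illustration of this flow from the client side, the sketch below signs a request and posts it to a provider endpoint. The request shape, endpoint path, and `submitInference` helper are assumptions made for illustration, not the official 0G SDK API.

```typescript
// Hypothetical client-side sketch of the request flow above.
// The request/response shapes and endpoint path are illustrative,
// not the official 0G Compute SDK interface.
import { Wallet } from "ethers";

interface InferenceRequest {
  model: string;   // model identifier advertised by the provider
  input: string;   // prompt or serialized input payload
  maxFee: string;  // maximum fee the escrow may release for this job
}

async function submitInference(wallet: Wallet, providerUrl: string, req: InferenceRequest) {
  // 1. Sign the request so the provider can bill the on-chain escrow account.
  const payload = JSON.stringify(req);
  const signature = await wallet.signMessage(payload);

  // 2. Send the signed job to the provider selected by the routing layer.
  const res = await fetch(`${providerUrl}/v1/inference`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ payload, signature, account: wallet.address }),
  });

  // 3. The response carries the result plus proof material for later verification.
  return res.json() as Promise<{ output: string; proof: string }>;
}
```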

Execution Proofs

Multiple verification mechanisms ensure results are trustworthy, from hardware attestations to zero-knowledge proofs.

  • TEEML (TEE): Hardware-based attestation using Trusted Execution Environments.
  • OPML / ZKML: Optimistic and zero-knowledge proofs for cryptographic verification (see the simplified sketch after this list).
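As one simplified example of what local verification can look like, the sketch below checks that a result was signed by the provider key registered on-chain; TEE attestation reports and OPML/ZKML proofs add further guarantees on top of this basic check. The types and helper names are illustrative assumptions, not the production verification flow.

```typescript
// Simplified local check: confirm the result was signed by the expected
// provider address. Real TEEML/OPML/ZKML verification layers attestation
// reports or proof verification on top of this basic signature check.
import { verifyMessage } from "ethers";

interface SignedResult {
  output: string;     // model output returned by the provider
  signature: string;  // provider's signature over the output
}

function verifyProviderSignature(result: SignedResult, expectedProvider: string): boolean {
  // Recover the signer from the signature and compare it with the provider
  // address registered in the on-chain marketplace contract.
  const recovered = verifyMessage(result.output, result.signature);
  return recovered.toLowerCase() === expectedProvider.toLowerCase();
}
```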

Trust Through Verification

0G Compute doesn't ask you to trust providers: it proves they executed correctly.

TEE Attestation

Hardware enclaves provide tamper-proof execution environments with remote attestation.

Optimistic Proofs

OPML allows fast results with economic guarantees against fraud.

Zero-Knowledge Proofs

ZKML provides mathematical certainty without revealing model weights or inputs.

Economic Security

Provider stakes ensure aligned incentives and slashing for misbehavior.

Metrics

Live statistics from the 0G Compute network

Active GPU Providers (live)
Avg. Cost Savings vs. cloud providers (live)

Earn by Providing Compute

Turn your idle GPUs into a revenue stream. Join the 0G Compute network and earn tokens for every inference you serve.

Become a Provider
Register GPU → Serve Inferences → Earn Rewards

Frequently Asked Questions

How do I run inference on 0G Compute?

You can access models via the Web UI, CLI, or SDK. Deposit tokens into your account, select a model and provider, then submit your inference request. The SDK handles payment and verification automatically.

Which verification method should I choose?

TEEML offers the fastest verification with hardware-based attestation. OPML provides economic security with challenge periods. ZKML gives mathematical certainty but requires more compute. Choose based on your security and latency needs.

How do I become a compute provider?

Register your GPU with the 0G Compute network, stake the required tokens, and run the provider software. You'll automatically receive inference requests and earn rewards for successful completions.
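A hedged sketch of what the registration and staking step could look like from a script, assuming an ethers-style wallet and a marketplace contract exposing a registerProvider-style method. The contract address, ABI fragment, method name, and stake amount are assumptions for illustration; the actual contracts and provider software setup come from the 0G documentation.

```typescript
// Illustrative only: the ABI fragment, method name, and values below are
// assumptions for this sketch, not the deployed 0G Compute contracts.
import { Contract, Wallet, JsonRpcProvider, parseEther } from "ethers";

const MARKETPLACE_ABI = [
  "function registerProvider(string url, string model, uint256 pricePerToken) payable",
];

async function registerGpu(privateKey: string, rpcUrl: string, marketplaceAddr: string) {
  const wallet = new Wallet(privateKey, new JsonRpcProvider(rpcUrl));
  const marketplace = new Contract(marketplaceAddr, MARKETPLACE_ABI, wallet);

  // Register the serving endpoint, advertised model, and price, and attach
  // the stake that backs slashing for misbehavior.
  const tx = await marketplace.registerProvider(
    "https://my-gpu-node.example.com", // endpoint served by the provider software
    "deepseek-v3.1",                   // model this GPU will serve
    1000n,                             // price per token, in the smallest fee unit
    { value: parseEther("100") },      // required stake (illustrative amount)
  );
  await tx.wait();
}
```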

What models are available?

Current models include DeepSeek V3.1, Qwen 2.5, and GPT-OSS variants for chat, Whisper Large V3 for speech-to-text, and Flux Turbo for text-to-image. New models are added regularly.

How is pricing determined?

Pricing is market-driven, set by supply and demand. Providers set their rates, and the network routes requests to optimal price-performance matches. On average, users save 90% compared to cloud providers.
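To make "optimal price-performance matches" concrete, here is a toy scoring function over advertised provider quotes. The field names, weights, and numbers are illustrative assumptions, not the network's actual routing policy.

```typescript
// Toy illustration of price-performance routing: pick the provider whose
// advertised price and latency give the best combined score. The scoring
// weights are arbitrary; the real network's routing policy may differ.
interface ProviderQuote {
  address: string;
  pricePerMTokens: number; // USD per million tokens
  p50LatencyMs: number;    // median observed latency
}

function pickProvider(quotes: ProviderQuote[]): ProviderQuote {
  const score = (q: ProviderQuote) =>
    q.pricePerMTokens + q.p50LatencyMs / 1000; // lower is better
  return quotes.reduce((best, q) => (score(q) < score(best) ? q : best));
}

// Example: under these weights, the cheaper-but-slower provider wins.
const chosen = pickProvider([
  { address: "0xProviderA", pricePerMTokens: 0.4, p50LatencyMs: 900 }, // score 1.3
  { address: "0xProviderB", pricePerMTokens: 1.2, p50LatencyMs: 300 }, // score 1.5
]);
```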

Get Started with 0G Compute

Whether you need AI compute power or have GPUs to share, 0G Compute connects you to the decentralized AI economy.

Users

Run AI inference at 90% lower cost than cloud providers.

Start Computing

Providers

Monetize your GPU hardware by serving AI workloads.

Become a Provider

Documentation

Learn how to integrate 0G Compute into your stack.

Read Docs