Launching on TON

Cocoon — the Confidential Compute Open Network

A decentralized network for secure, private AI inference. App developers reward GPU owners with TON for processing inference requests. Users get privacy; operators earn; everyone wins.

Apply

Build on Cocoon

Or DM us: @cocoon

Earn with your GPUs

Or DM us: @cocoon

How Cocoon works

1

Developer

Submits a confidential inference request via the Cocoon SDK or API.

2

Dispatcher

Assigns the job to the best available secure node, weighing latency and capacity.

3

Secure Node

Runs the model in a confidential computing environment; inputs and outputs stay private.

4

Private Response

Encrypted results are returned to the app; the TON payout is settled on-chain.
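The four steps above can be sketched end to end. This is a minimal illustration only: the class and function names (`Node`, `pick_node`, `run_inference`) are hypothetical stand-ins, not Cocoon's actual SDK, and the "enclave" here is a stub standing in for real confidential-compute hardware.

```python
# Hypothetical sketch of the Cocoon request flow.
# All names are illustrative; the real SDK/API may differ.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    latency_ms: float   # measured network latency to this node
    free_slots: int     # spare inference capacity

def pick_node(nodes):
    """Step 2: the dispatcher assigns the job to the best secure node,
    considering only nodes with spare capacity, preferring low latency."""
    candidates = [n for n in nodes if n.free_slots > 0]
    return min(candidates, key=lambda n: n.latency_ms)

def run_inference(node, encrypted_prompt):
    """Step 3 (stub): in a real deployment the enclave decrypts the
    prompt, runs the model, and re-encrypts the output for the caller."""
    return f"enc(output:{encrypted_prompt})@{node.node_id}"

# Step 1: the developer submits an encrypted request.
nodes = [Node("gpu-a", 40.0, 0), Node("gpu-b", 25.0, 2), Node("gpu-c", 60.0, 1)]
chosen = pick_node(nodes)
response = run_inference(chosen, "enc(prompt)")
# Step 4: the encrypted result returns to the app; the TON payout
# would be settled on-chain (not modeled here).
print(chosen.node_id)  # gpu-b: lowest latency among nodes with capacity
```

Note that `gpu-a` is skipped despite decent latency because it has no free slots; the dispatcher trades off both signals, as step 2 describes.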

For developers

Why build on Cocoon?

  • Low-cost, confidential AI inference at scale.
  • Plug-and-play access to a global GPU network.
  • Seamless Telegram distribution and demand.

For GPU owners

Why provide compute?

  • Earn TON by powering confidential AI workloads.
  • Flexible capacity; you control uptime and hardware.
  • Fair, transparent payouts.

Security & privacy

Confidential by design

Inputs, prompts, and outputs are processed inside secure enclaves and are never exposed to node operators.

Audit & attest

Nodes register with attested hardware and configuration; dispatchers enforce admission policies and record attestation proofs.
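The attest-and-enforce step above might look roughly like the following. This is a hypothetical sketch under assumed policy fields (`tee`, `vram_gb`, `measurement_verified`); Cocoon's actual registration schema and admission rules are not specified here.

```python
# Hypothetical dispatcher-side admission check; field names and the
# allowed TEE list are illustrative assumptions, not Cocoon's API.
ALLOWED_TEE = {"sev-snp", "tdx", "sgx"}  # assumed set of accepted TEE types

def admit_node(attestation: dict, min_vram_gb: int = 24) -> bool:
    """Admit a node only if its attested TEE type, hardware, and
    verified measurement all satisfy the dispatcher's policy."""
    return (
        attestation.get("tee") in ALLOWED_TEE
        and attestation.get("vram_gb", 0) >= min_vram_gb
        and attestation.get("measurement_verified", False)
    )

# A node with a verified SEV-SNP report and enough VRAM is admitted;
# one without a recognized TEE is rejected.
ok = admit_node({"tee": "sev-snp", "vram_gb": 48, "measurement_verified": True})
rejected = admit_node({"tee": "none", "vram_gb": 48, "measurement_verified": True})
```

In a real system the `measurement_verified` flag would come from cryptographically checking the hardware attestation report, and each admission decision would be logged as the recorded proof the text mentions.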