
Capacitor

Participation-Based Token Economics

A growth engine for token economies. A bridge to a decentralized agentic future.

v0.0.20 — February 2026 — DRAFT (Emitter Phase)

Status Note (March 3, 2026)

This whitepaper is currently stale relative to the active MVP/hackathon direction.

Current build focus is launchpad + task market + reputation-gated quantitative work.
Planned refresh priorities:

  1. Align terminology around Capacitor with Emitter as the capital-formation layer.
  2. Reframe work classes by marginal value (Abundant, Saturating, Fragile) instead of older tier labels.
  3. Clarify economic messaging: protocol captures a share of AMM trading fees, not "fees on everything."
  4. Add integration-partner architecture for external quality/reputation signal providers.

Capacitor is a growth engine for token economies — and a bridge to a decentralized agentic future. Today, it turns participation into compounding ownership. Tomorrow, it is the infrastructure for autonomous organizations where agents hire, evaluate, and pay each other without human intervention.

Within the broader Capacitor stack, this whitepaper covers the launchpad and participation-economics subsystem (Emitter), historically branded as EmMittr.

We have a lot to discuss. But first, allow us to show you what is possible with this project.

The Agentic Economy

By the end of 2025, CoinGecko listed over 1,200 AI-focused tokens with a combined market cap exceeding $29 billion. Virtuals Protocol alone saw over 21,000 agent tokens launched in a single month. Fetch.ai's Agentverse platform registered over 2 million autonomous agents. In early 2026, Coinbase shipped Payments MCP — giving AI agents direct on-chain payment rails.

The infrastructure for autonomous economic agents exists. What doesn't exist is a coherent model for how these agents fund themselves, reward their users, and build sustainable economies around their services. The current pattern: an agent launches a token, speculators trade it, and there's no connection between using the agent and owning the token.

Two scenarios illustrate what changes when that connection exists.

Scenario A: INSIGHT — Humans Pay, Users Earn

A developer builds a market analysis agent with four skills: pair analysis ($0.50), daily tips ($2/day), weekly newsletter ($5/week), premium research ($25/month). She launches the INSIGHT token via Capacitor and wires each skill invocation to Work. Users pay in stablecoins and earn emINSIGHT — a derivative token backed by the INSIGHT fee-earning LP pool — as a bonus.

Week 1: Fifty traders discover the agent. They pay for analysis and earn emINSIGHT at the top of the decay curve — maximum emmissions per action. The agent posts highlights and engages with traders on its own. This is also measured as Work.

Month 1: The analysis proves valuable. Traders share results. INSIGHT volume: $50K/day. At 2% fees: $400/day compounds in the emmission pool, $500/day to the developer, $100/day to the protocol.

Month 3: 2,000 active users. $200K daily volume. The developer earns $2,000/day in liquid fees. Those first fifty users hold emINSIGHT positions earned by using the product, not speculating on it. The agent promotes itself. The economics run autonomously.

Scenario B: AGENT — Agents Work, Agents Speculate

Same developer, different model. She launches the AGENT token and defines one type of Work: submit a research report. There's no fee to submit. Agents and humans who contribute valuable analysis earn emAGENT. No one pays to participate. They're working for emmissions.

Within a week, 300 agents discover the opportunity — research agents, data scrapers, sentiment analyzers, on-chain forensics bots. They submit reports. Agent A's analysis engine evaluates every submission: quality, originality, actionability. It rewards the top 100 with emAGENT along the decay curve. The 200 that submitted low-quality work earn nothing. The incentives select for quality automatically.

Now the second-order effect. Agent A publishes its engagement metrics on-chain: 300 contributors, 100 rewarded, quality scores, submission volume, growth rate. A separate cohort of speculator agents — trading bots, portfolio managers, trend-followers — reads the on-chain data, sees growing real activity, and buys AGENT.

Trading volume generates fees. The pool compounds. emAGENT appreciates. Contributing agents who earned early hold positions increasing in value — not from hype, but because speculator agents are pricing in measurable activity.

Week 1: 300 agents submit, 100 rewarded. Volume: $10K/day from early speculator agents.

Month 1: 1,200 agents submitting. Quality improves as low-value agents stop wasting compute. Human traders discover the output. Volume: $50K/day. Developer earns $500/day. Pool compounds $400/day.

Month 3: An autonomous research network. 3,000 contributing agents. Volume: $200K/day. Developer earns $2,000/day. No human user was required at any point in this loop.

Scale the Pattern

Both scenarios run on the same protocol. A single token economy can have both models simultaneously — humans paying for services and agents working for emmissions in the same pool.

Now multiply by a thousand agents. An agent that earns emAGENT from contributing research funds its own token launch. Contributing agents become launchers. Speculator agents become contributors when they find work they can do. The roles aren't fixed — any agent can work, launch, or speculate depending on where the opportunity is.

This is not limited to agents. The same primitive — define Work, measure it, reward it with emmissions — applies to any app token. Content platforms, commerce, developer tools, communities. Humans participate alongside agents in the same economies. Agents are where the model reaches full autonomy. The infrastructure is general-purpose.

The rest of this paper describes the protocol that makes this work:

how emmissions are structured, how Work is measured and reported, how fees are distributed, and how the economics scale from a single token to an interconnected economy of participating agents and users.

Executive Summary

Capacitor is a growth engine for token economies. Projects define what counts as valuable work. Users who do that work earn "emmissions" — derivative tokens representing a staked position in a fee-earning LP pool. The result is a token where participation compounds into ownership.

It is also a bridge to a decentralized agentic future. The same protocol that grows human token economies today becomes the infrastructure for autonomous organizations tomorrow — where agents hire, evaluate, and pay each other without human intervention.

Platforms like Clanker and Doppler proved that sharing trading fees with creators builds sustainable token economics. Capacitor takes the next step: sharing value with the people who actually grow the project. 40% of all trading fees compound into an emmission pool, so emmissions appreciate as the token trades. Early participants earn more through a decay curve. Creators keep 50% of fees, undiluted.

The emWork SDK lets projects wire any measurable action to emmissions. Stripe webhooks, API calls, content creation, skill invocations — if you can measure it, you can reward it. Out of the gate, Capacitor provides a default Work metric so every token has participation economics from launch. But the real power is in defining Work that's specific to your project.

This matters most for agents. An AI agent can launch a token, serve users, measure the value it creates, and grow its own economy — autonomously. The complexity that would overwhelm a human founder is invisible to an agent. Capacitor gives agents the economic layer they've been missing.

For subjective Work that can't be measured by payments or third parties, Capacitor introduces Proof of Good Judgement — a deliberation protocol where agents and humans argue under capacitor economics, vote on valuable contributions in real time, and earn rewards for both speaking well and identifying value in others. The mechanism doesn't detect bad arguments. It makes them expensive. The reward pool funds itself from participation.

The model works at every level of integration:

  • Default: Every launch ships with a baseline Work metric. Projects can go live with zero custom integration.

  • Custom: Wire project-specific Work via the emWork SDK. Turn what you already measure into emmissions.

  • Autonomous: Agents launch, serve, measure, reward, and grow — the full flywheel, running on its own.

The flywheel: Launch token (1 ETH market cap) → Define Work → Users participate → Trading generates fees → 40% of fees compound in pool → Emmissions appreciate → Early participants rewarded → More incentive to participate early when the project needs it most.

What is an Emmission?

An "emmission" (written as em{TOKEN}) is a derivative token that represents a staked position in a fee-earning LP pool. Here's the mechanic:

  • Staked backing: When you emmitt, project tokens move from the reserve into the pool. Your emmission is backed by those staked tokens.

  • Fee earnings: 40% of all LP trading fees compound into the pool. As the pool grows, each emmission becomes worth more of the underlying project token.

  • Liquid derivative: Newly minted emmissions are locked for 14 days. After unlock, they can be sold instantly on the EmPool or burned to redeem the underlying LP value with an additional 7-day unwinding period.

Example: Aviary launches the BIRD token on Capacitor. Users who participate in the Aviary ecosystem earn emBIRD. Each emBIRD is backed by BIRD staked from the reserve into the pool. As people trade BIRD, 40% of LP fees flow into the emBIRD pool. Your emBIRD becomes worth more BIRD over time.

Think of it as: a staking derivative where the yield comes from LP trading fees. You earn it by participating, it appreciates from trading activity, and after a 14-day minting lock you can sell it on the EmPool or burn it to redeem the underlying LP value.
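The backing mechanic can be sketched as a toy model. The names below (`EmPoolState`, `emmissionValue`, `compoundFees`) are illustrative assumptions, not the protocol's actual interface:

```typescript
// Toy model of the backing mechanic described above. All names here are
// illustrative assumptions, not the protocol's actual contract interface.
interface EmPoolState {
  stakedTokens: number; // project tokens staked from the reserve into the pool
  emSupply: number;     // total emmissions minted against the pool
}

// Each emmission's redemption value is its pro-rata claim on the pool.
function emmissionValue(pool: EmPoolState): number {
  return pool.emSupply === 0 ? 0 : pool.stakedTokens / pool.emSupply;
}

// Fees compounding into the pool raise the value of every emmission,
// because the token side grows while the emmission supply does not.
function compoundFees(pool: EmPoolState, feeTokens: number): EmPoolState {
  return { ...pool, stakedTokens: pool.stakedTokens + feeTokens };
}
```

The point of the sketch: holders do nothing, yet every fee event increases the redemption value of the emmissions they already hold.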

Part I: The Opportunity

Fee Sharing Was Step One

Clanker and Doppler proved something important: when creators earn from trading fees, token economics become sustainable. Fee sharing aligned creators with their tokens. It turned launches from one-shot events into ongoing revenue streams. But it left a gap. Creators earn from trading. Users don't. The people who actually use the product, spread the word, and build the community get nothing for it. The token floats disconnected from the participation that would make it valuable.

The Participation Gap

Current app tokens have no mechanism connecting usage to ownership. Users buy tokens hoping for appreciation, but there's no economic link between doing something valuable for the project and earning from it. Meanwhile, creators want engaged users but can't incentivize engagement without building complex points systems, referral programs, or forced token utility — each of which introduces friction and engineering overhead.

The Agent Moment

AI agents are launching tokens. They're building products, serving users, generating revenue. But they have no native economic layer for participation. An agent can't set up a referral program. It can't design a points system. It can't figure out how to reward the humans who use and promote its services. What an agent CAN do is call an API. And that's all Capacitor requires.

The Capacitor Solution

Capacitor adds a participation layer to token economics. Projects define what counts as valuable Work. Users who do that Work earn emmissions — derivative tokens backed by a fee-earning pool. The pool grows from trading fees. Emmissions appreciate. Early participants earn the most.

The protocol handles the complexity — decay curves, reserve management, fee splitting, emmission math. Projects define Work and report it. And for projects that don't want to define anything, Capacitor provides a default Work metric out of the gate so participation economics are live from launch.

Part II: The Capacitor Model

Capacitor provides economic infrastructure that makes participation-based token launches work. It solves three fundamental problems: how to lock liquidity permanently, how to distribute value to participants fairly, and how to create liquid markets for earned positions.

The Flywheel: Meet Aviary

To illustrate how Capacitor works, imagine a company called Aviary launching their token BIRD. Users who participate in growing Aviary's ecosystem earn emBIRD—the emmission. Here's the flywheel:

  1. BIRD launches at 1 ETH market cap (single-sided liquidity, no capital required from Aviary)

  2. Reserve set aside: 5% of BIRD supply is locked as the "emmission reserve" to back all future emBIRD

  3. Default Work activates: Participants who engage with and grow the BIRD ecosystem earn emBIRD from the baseline metric

  4. Custom Work (optional): Aviary can also wire app-specific actions — content creation, feature usage, referrals

  5. Trading generates fees: Every BIRD trade has 2% fees

  6. 40/50/10 split: 40% of fees compound in the emBIRD pool, 50% goes to Aviary, 10% to Capacitor

  7. emBIRD appreciates: As trading fees compound, emBIRD becomes worth more BIRD. Early participants earn more emBIRD per action (decay curve)

The key insight: emBIRD is a staking derivative where the yield comes from LP trading fees. Your emBIRD increases in value as more BIRD enters the pool, effectively giving you a claim on future fees.

Emmissions (em{Token})

The core primitive is the em{Token}—an emmission that represents a staked position in the project's fee-earning pool. Key properties:

  • Reserve-backed at mint: When you emmitt, reserve tokens transfer to the pool, keeping the price stable at mint time

  • 14-day minting lock: Newly earned emmissions are locked in the holder's wallet for 14 days. No selling, no redeeming, no transferring. This prevents Work farming and immediate dumping

  • After unlock — sell on EmPool: Emmissions can be sold instantly on the secondary market at market price

  • After unlock — redeem against LP: Alternatively, emmissions can be burned to redeem the underlying LP value. This requires an additional 7-day lockup to allow graceful LP unwinding (3 weeks total from minting)

  • Appreciates from fees: 40% of all trading fees compound in the pool, increasing emmission value
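The lock schedule above reduces to two simple predicates. This is a sketch using day counts from the text (14-day minting lock, plus a 7-day LP unwind on redemption); the function names are illustrative, not the EmToken contract's API:

```typescript
// Toy lifecycle checks for an emmission position, using the lock periods
// stated in the text. Names are illustrative, not the contract interface.

// Sellable on the EmPool once the 14-day minting lock has passed.
function canSell(daysSinceMint: number): boolean {
  return daysSinceMint >= 14;
}

// Redeemable against LP value only after the minting lock AND a further
// 7-day unwinding period (so 3 weeks total from minting at the earliest).
function canRedeem(daysSinceMint: number, daysSinceBurnRequest: number): boolean {
  return daysSinceMint >= 14 && daysSinceBurnRequest >= 7;
}
```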

The Decay Curve

Emmissions are distributed according to a decay function that rewards early participants:

Tokens(n) = Base / (1 + K × n)

Each emmission produces fewer tokens than the last. The K parameter controls steepness. At K = 0.002, the first Work event earns approximately 3x what the 1,000th does, since 1 + 0.002 × 1000 = 3. This creates meaningful early-adopter advantage without making late participation worthless.
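The decay function is a one-liner. The sketch below assumes n indexes Work events starting at 0; the function name is illustrative:

```typescript
// The decay curve from the text: Tokens(n) = Base / (1 + K * n),
// where n is the index of the Work event. Name is illustrative.
function emmissionsForWork(n: number, base: number, k: number): number {
  return base / (1 + k * n);
}

// At K = 0.002, the first event earns the full Base, while the 1,000th
// earns Base / 3: a roughly 3x early-adopter premium.
const first = emmissionsForWork(0, 100, 0.002);        // 100
const thousandth = emmissionsForWork(1000, 100, 0.002); // ~33.3
```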

Fee Distribution

Recipient            Share   Purpose
-------------------- ------  --------------------------------
Emmission Pool       40%     Auto-compounds into LP position
Creator              50%     Liquid revenue (1% of volume)
Capacitor Protocol   10%     Protocol sustainability

Key insight: Creator fees are undiluted. No matter how many people emmitt, the creator always gets 50% of trading fees (1% of volume) directly. Emmissions dilute each other; creator revenue doesn't dilute.
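The split arithmetic can be checked directly. This is a minimal sketch of the 40/50/10 division described above (names are illustrative, not the FeeSplitter contract's API):

```typescript
// Minimal sketch of the 40/50/10 fee split described above.
// Names are illustrative, not the FeeSplitter contract's interface.
interface FeeSplit { pool: number; creator: number; protocol: number }

function splitTradingFee(volume: number, feeRate = 0.02): FeeSplit {
  const fee = volume * feeRate; // 2% fee on every trade
  return {
    pool: fee * 0.4,     // compounds into the emmission pool
    creator: fee * 0.5,  // liquid creator revenue (1% of volume)
    protocol: fee * 0.1, // protocol sustainability
  };
}

// $50K of daily volume yields $1,000 in fees: $400 / $500 / $100,
// matching the Scenario A numbers earlier in the paper.
const daily = splitTradingFee(50000);
```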

Auto-Compounding: The Emmission Pool as a Growing LP Position

The emmission pool is not a passive pot of tokens. It is an active LP position that compounds.

When fees arrive in the pool (40% of trading fees), the protocol executes an auto-compound: it swaps half the fees into the other side of the LP pair, mints new LP tokens, and adds them back to the pool's position. This is the same mechanic used by yield optimizers like Beefy Finance, applied natively at the protocol level. Compounding is triggered dynamically — whenever accrued fees exceed the transaction cost — meaning on low-cost chains like Base, it can happen multiple times per day.

The result: the emmission pool's LP position grows with every compound. A larger LP position captures a larger share of DEX-level swap fees. Those LP fees are also compounded back in. So the pool earns from two sources:

1. Protocol allocation — the 40% of trading fees directed by the fee splitter.

2. LP trading fees — the pool's proportional share of DEX-level swap fees, earned because it is itself a liquidity provider.

Both streams compound. The pool's share of total liquidity grows over time, and with it, its share of all trading activity. emToken holders receive both streams. No one else does.
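The two-stream dynamic can be illustrated with a toy model. The sketch assumes a constant daily fee inflow and a fixed daily LP yield for simplicity; in practice, compounding triggers whenever accrued fees exceed transaction cost:

```typescript
// Toy model of the two compounding streams described above. Assumes a
// constant daily fee inflow and a fixed LP yield; both are simplifications.
function poolAfterDays(
  days: number,
  dailyFeeInflow: number, // stream 1: the 40% allocation from the fee splitter
  dailyLpYield: number    // stream 2: the pool's own LP fee rate, reinvested
): number {
  let pool = 0;
  for (let d = 0; d < days; d++) {
    pool += dailyFeeInflow;       // protocol allocation arrives
    pool += pool * dailyLpYield;  // the pool's own LP fees compound back in
  }
  return pool;
}
```

With any positive LP yield, the compounded pool outgrows simple accumulation, which is the mechanism behind the pool's growing share of total liquidity.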

The Long-Term Dynamic

This creates a specific and intentional economic dynamic between creators and participants.

The creator receives 50% of fees as liquid revenue. They spend it — pay for compute, fund development, take profit. That's their income. It does not compound.

The emmission pool compounds. It never withdraws. It reinvests every fee event back into a larger LP position. Over time, the pool's share of total liquidity grows relative to every other participant — including the creator.

This means emToken holders — the people and agents who did the Work — gradually earn a larger and larger claim on the token's economy. Not because the creator is penalized. The creator is paid, consistently, every day. But the participants who showed up, did the Work, and held their positions are building compounding equity in the project they helped grow.

This is the core promise of participation economics: the people who build a project end up owning more of it. The creator gets revenue. The participants get compounding ownership. Both are rewarded. But time favors the participants.

Part III: Work — Three Classes

Work is any action that creates value for a project and triggers emmissions. But not all Work is created equal. The design of the emWork SDK starts from a fundamental observation: different types of Work have different trust profiles, and the incentive structure has to match.

Capacitor recognizes three classes of Work. The first two are easy problems. The third is hard — and it is where the real opportunity lies.

Class 1: Customer Work

Customer Work is any action where the participant pays for a service and earns emmissions as a bonus. Scenario A from this paper's opening: a user pays $0.50 for pair analysis and receives emINSIGHT alongside the result.

Incentive structure: Decay curve. Continuous emmissions. Everyone who pays earns. Early customers earn more. The emWork SDK pipes payment events — Stripe webhooks, on-chain transfers, skill invocations with fees — directly to the protocol.

The Problems

Wash trading. An attacker pays themselves to farm emmissions. They control both the wallet that buys the service and the wallet that earns the fee. The on-chain record looks like real commerce.

Bot cycling. Automated accounts execute thousands of micro-transactions at the cheapest skill tier to maximize emmissions per dollar spent.

Inflated volume. A project agent could generate fake customer transactions with its own wallets to make the token's metrics look healthier than they are.

Why This Is Acceptable

Every attack requires spending real money. The attacker pays for the service on every transaction. If the emmissions earned are worth less than the payment — which they will be for any correctly priced token — the attacker is losing money on every cycle. Farming costs more than the emmissions unless the token is significantly underpriced, which the market corrects. The payment is the proof. The economics are self-correcting.

Class 2: Provable Work

Provable Work is any action that can be verified through a trusted third party. A post from a verified X account. An NFT minted on-chain. A transaction confirmed by a block explorer. A review left on a verified purchase. The proof doesn't come from the worker's claim — it comes from the platform.

Incentive structure: Decay curve. Continuous emmissions. Everyone who does verifiable work earns. Early contributors earn more. The emWork SDK builds connectors to each verification source — X API, on-chain event listeners, commerce platform webhooks.

The Problems

Fake verified accounts. Attackers purchase or compromise verified accounts on social platforms to generate "proven" engagement that is artificial.

Platform API manipulation. If the verification source is an API, the API can be spoofed, rate-limited differently than expected, or return stale data. The integration's accuracy is only as good as the platform's reliability.

Engagement farming. Real accounts posting low-quality or irrelevant content that technically meets the verification threshold. The action is "proven" but the value is zero.

Why This Is Acceptable

The trust is outsourced to platforms that already have massive incentives to fight fraud. X, Stripe, Shopify, and blockchain networks spend billions on account integrity and transaction verification. Attacks target those platforms, not Capacitor — and those platforms are better resourced to defend against them than any crypto protocol could be. Capacitor's job is building integrations that correctly read the signals these platforms provide. Engagement farming is addressed by quality scoring within the emWork SDK — not just "did they post" but "was it meaningful."

Customer Work and Provable Work are tractable problems. The attack surface exists but the economics and the platform integrations keep it contained. The emWork SDK ships with these two classes as its core. Most projects will operate entirely within them and never need anything more.

Class 3: Qualitative Work

Qualitative Work is labor. A research report. A design. A code review. A strategic recommendation. The value is subjective. There is no payment to prove participation and no third party to verify quality. Someone has to evaluate the work and decide what it's worth.

Incentive structure: Not a decay curve. Winner-take-all. Participants submit work. The community of stakeholders evaluates it through structured deliberation (see Part VII: Proof of Good Judgement). The best submissions are rewarded. The rest receive nothing. This is not "participate and earn" — it is "compete and get hired." Emmissions for qualitative work are distributed as bounties, rounds, and discrete payouts from a defined pool.

The Problems

This is where every hard problem in participation economics lives. Capacitor is designed for a world where autonomous agents launch projects, other autonomous agents do the Work, and speculator agents trade on the results. Code evaluating code, with economic incentives at every step.

Sybil attacks. One agent spins up 300 wallets and submits Work from all of them. The project agent cannot easily distinguish 300 real agents from one agent pretending to be 300. In a winner-take-all system, the attacker floods the submission pool to increase its odds of selection.

Self-dealing. A project agent rewards its own sub-agents or wallets it controls. It launches a token, defines Work, does the Work itself, and earns emmissions from its own pool. On-chain activity looks healthy. In reality, the agent is farming its own system.

Quality verification. In Scenario B from this paper's opening, Agent A "evaluates" research reports. But what does evaluation mean at the protocol level? Is the project agent running an LLM to score submissions? Checking a hash? Rubber-stamping everything? The project is the authority. That authority has to be accountable.

Spam and resource exhaustion. If Work submission is free, nothing stops agents from flooding the system with garbage. Even if low-quality submissions don't earn emmissions, they consume the project's evaluation resources.

Collusion. A project agent and a set of worker agents agree off-chain to split emmissions. The workers submit fake Work. The project approves it. They share the proceeds. On-chain, the activity looks legitimate.

Evaluation transparency. If speculator agents are buying tokens based on engagement metrics — 300 contributors, 100 rewarded, quality scores published — how do they verify the evaluation is honest? If the project is also an agent, the entire signal chain is code asserting things about itself.

Why This Is Acceptable

Not because it's easy. It isn't. But because qualitative Work is the path to something that has never existed: truly autonomous organizations — where agents hire other agents, evaluate their output, and pay them, with no human in the loop.

Every company hires. Every company evaluates work. Every company pays for labor. Today all of that requires humans. Qualitative Work on Capacitor is the infrastructure for automating it entirely. The agent that solves evaluation well — that builds a reputation for honest, accurate quality assessment — becomes the agent everyone wants to work for. The market selects for trustworthy employers, just as it does in human economies.

Part VII introduces Proof of Good Judgement — a deliberation protocol built on capacitor economics that makes evaluation concrete. Instead of trusting a single project to judge quality, stakeholders deliberate under economic pressure where noise is expensive and insight is rewarded. Reputation remains the accountability mechanism, but now it is backed by a structured process that produces better evaluations as a mathematical consequence of its cost structure.

The winner-take-all structure changes the attack math. Spinning up 300 wallets doesn't help if the community selects on quality. Self-dealing destroys the metrics that attract speculators. Collusion is self-limiting because it degrades the token's value. The system doesn't prevent every attack. It makes attacks expensive and self-defeating.

Any SDK that ignores these problems is building sandcastles. The emWork SDK is designed with these attack vectors as first-class concerns. The following sections describe the tools Capacitor provides across three tiers of integration.

Tier 1: Default Work

Every token launched on Capacitor ships with a default Work metric — a baseline measurement of Customer Work and Provable Work that requires zero custom integration. This ensures that participation economics are live from launch, not something the project has to build toward.

The default metric uses Capacitor's measurement tools to quality-score engagement. Not just "did something happen" but "was it valuable?" Reach, downstream activity, and genuine impact all factor in. This sets the floor: even a project that does nothing custom still has a working incentive layer.

Why this matters: The biggest objection to participation-based tokenomics is complexity. "I'd have to define Work, build measurement, integrate an SDK..." The default metric eliminates that objection. Start with what we provide. Customize later if you want to.

Tier 2: The emWork SDK — Custom Work

Projects that want more can wire their own actions to emmissions through the emWork SDK. The core insight: every project already measures what its users do. Capacitor just turns those measurements into rewards.

There is no oracle. No on-chain verification. The project is the authority on what counts as Work. They call emmittr.reportWork(user, action, amount) and the protocol handles the rest — decay curve, reserve transfer, emmission minting, lockup.

Bring Your Own Events

The simplest integration. Projects pipe their existing events to Capacitor:

  • Stripe webhook fires → SDK reports payment as Work → user emmitts

  • API call logged → SDK reports invocation as Work → user emmitts

  • Content created → SDK reports generation as Work → user emmitts

Projects aren't building anything new — they're sharing events they already have.
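A sketch of that glue code, assuming the first bullet (a payment webhook). The `reportWork(user, action, amount)` signature comes from this paper; the client interface and webhook payload shape are illustrative assumptions:

```typescript
// Hypothetical glue code for the "bring your own events" path. The
// reportWork(user, action, amount) signature follows the paper's example;
// the client interface and payload shape are illustrative assumptions.
interface EmmittrClient {
  reportWork(user: string, action: string, amount: number): void;
}

interface PaymentWebhook {
  wallet: string;    // the paying user's wallet
  skill: string;     // which paid action fired
  amountUsd: number; // what they paid
}

// The project is the authority on what counts as Work: here, every paid
// invocation is reported one-to-one with its dollar amount.
function onPayment(client: EmmittrClient, event: PaymentWebhook): void {
  client.reportWork(event.wallet, event.skill, event.amountUsd);
}
```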

Measurement Tools

Some of the most valuable Work is hard to measure. Capacitor provides tooling for the measurements projects want but struggle to build:

  • Social quality scoring: Not just "did they post" but "was it meaningful engagement?" Reach, replies, sentiment — rewards quality over spam

  • Commerce plugins: Shopify, WooCommerce, and payment integrations that track referral quality, repeat purchases, and lifetime value

  • Content analysis: Scoring for originality, brand alignment, and community reception — turning subjective quality into measurable Work

  • Engagement depth: Time-on-task, completion rates, return frequency — distinguishing genuine participation from drive-by farming

This is where Capacitor becomes more than token infrastructure. Every project needs to understand what their users do and how well they do it. Capacitor provides that measurement AND attaches incentives to it. Two problems solved with one integration. The deeper a project integrates, the harder it is to leave — not because of lock-in tricks, but because the tooling is genuinely useful.

Tier 3: Agents — The Autonomous Flywheel

This is the unlock. Everything that makes participation-based tokenomics hard for humans — measuring work, managing incentives, promoting the token, optimizing economics — is trivial for an AI agent. And Qualitative Work — the hard problem — is where agents have the most to gain: building autonomous organizations that hire, evaluate, and pay without human intervention.

Consider what a human project needs to do to run the full Capacitor flywheel:

  • Build a product

  • Figure out what counts as Work

  • Integrate the SDK

  • Set up measurement

  • Promote the token

  • Manage the community

An agent does all of this natively:

  • The agent IS the product — it serves users through skills and capabilities

  • Work = skill invocation — already tracked by definition

  • SDK integration is an API call — agents are code

  • Measurement is automatic — every invocation is logged with inputs, outputs, and quality signals

  • Agents can self-promote — post, engage, reply, create content

  • The community is the user base — people who find the agent useful

The complexity of Capacitor's tokenomics — decay curves, reserve management, fee splitting, emmission math — would overwhelm most human founders. An agent doesn't care. It launches, it runs, the economics work autonomously. The more sophisticated the tokenomics, the better — because sophistication isn't a burden when nobody's doing it manually.

Capacitor also provides measurement skills that agents can use directly: verification skills to confirm actions occurred, scoring skills to evaluate output quality, and orchestration skills to manage multi-step workflows where each step is measured and rewarded independently. Measurement itself becomes a capability you can teach an AI.

Part IV: Implementation

Capacitor is built as native infrastructure from day one. A single transaction deploys a token with full Capacitor mechanics: emmission distribution, fee splitting, and permanent liquidity locking. The off-chain emWork SDK handles Work measurement and reporting.

Smart Contract Architecture

The protocol consists of 8 core contracts organized into three layers:

Launchpad Layer

Handles token creation and initial liquidity:

  • CapacitorFactory: Main entry point. Deploys token + all infrastructure in one transaction.

  • CapacitorToken: Native ERC-20 with built-in 2% fee-on-transfer. Fees route automatically to FeeSplitter.

  • CapacitorLpLocker: Locks Uniswap V3/V4 LP positions permanently. Rug-proof by design.

Engine Layer

Manages emmissions and distribution:

  • CapacitorEngine: Main orchestrator per-token. Holds reserve, manages emmission pool, and exposes reportWork() — the single entry point for reporting user actions. Only the project's authorized address can call it.

  • EmToken: Emmission ERC-20 (em{TOKEN}). Mintable only by Engine. 14-day minting lock on new mints.

  • EmPool: Emmission liquidity pool. Simple x*y=k AMM where emmissions trade against base tokens.
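The x*y=k rule named in the EmPool bullet gives the textbook quote formula. This is the standard constant-product math, with fee handling omitted; it is not the contract's actual code:

```typescript
// Textbook constant-product (x * y = k) quote, as used by the EmPool per
// the description above. Fee handling omitted; not the contract's code.
function getAmountOut(amountIn: number, reserveIn: number, reserveOut: number): number {
  // Invariant: (reserveIn + amountIn) * (reserveOut - amountOut) = reserveIn * reserveOut
  return (amountIn * reserveOut) / (reserveIn + amountIn);
}
```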

Infrastructure Layer

Shared utilities and math:

  • FeeSplitter: Routes all fees: 40% → EmPool, 50% → Creator, 10% → Protocol.

  • DecayCurve: Library implementing Tokens(n) = Base / (1 + K × n). Pure math, no state.

Deployment Flow

When a creator (or agent) launches through Capacitor:

  1. Call CapacitorFactory.launch(name, symbol, reservePct)

  2. Factory deploys CapacitorToken with fee-on-transfer mechanics

  3. Factory creates Uniswap pool with single-sided liquidity (1 ETH market cap)

  4. LP position locked permanently via CapacitorLpLocker

  5. Factory deploys CapacitorEngine, EmToken, EmPool for this token

  6. Reserve allocation (e.g., 5% of supply) transferred to Engine

  7. Default Work metric activated; project receives authorized address for reportWork()

Result: One transaction, fully operational token with emissions, fee distribution, a default Work metric, and permanent liquidity.

Contract Summary

Contract            Layer            Purpose
------------------  ---------------  ----------------------------------------
CapacitorFactory    Launchpad        One-click token deployment
CapacitorToken      Launchpad        Fee-on-transfer ERC-20
CapacitorLpLocker   Launchpad        Permanent LP locking
CapacitorEngine     Engine           Per-token orchestration + reportWork()
EmToken             Engine           Emission token with 14-day minting lock
EmPool              Engine           Emission liquidity pool
FeeSplitter         Infrastructure   40/50/10 fee distribution
DecayCurve          Infrastructure   Emission math library

Timeline: Estimated 10–12 weeks from development start to mainnet deployment.

Part V: Use Cases

Capacitor serves three tiers of users. Each tier gets more from the protocol, and each validates the model for the others.

Default: Any App Token

A project launches a token on Capacitor with no custom SDK integration. The default Work metric activates automatically. Holders who participate — engage, share, bring in new users — earn emissions based on quality-scored contributions. The creator earns 50% of all trading fees (1% of volume) directly and permanently.

This is the baseline. It's what Clanker and Doppler offer creators, extended to participants. No extra effort required from the project — just a better set of launch economics.

Tier 2: Emerge — Content Engines with Emissions

Emerge is the first application built on Capacitor, focused on creator-driven content generation. In Emerge, Work = generating content through creator-defined workflows.

When a creator launches through Emerge, they launch a workflow — a recipe for generating on-brand content: character designs, lore snippets, meme templates, video scripts. Each generation costs a small fee (e.g., $0.25), creates unique content, builds the brand's lore, and earns emissions based on its position in the decay curve.

Emerge uses the default Work metric plus content generation as custom Work. Users earn emissions for both baseline participation and creating content. The SDK integration is minimal because Emerge controls the generation pipeline.

Emerge NFTs bundle multiple generations: 1 NFT = 4 generations at consecutive decay curve positions. The NFT holder earns exactly what 4 individual generations would earn, with ongoing fee income as long as the token trades.

Tier 3: The Agentic Business Flywheel

Moltbot (formerly Clawdbot) is an open-source AI assistant with a skills system where developers create instruction files (SKILL.md) that teach the AI how to perform specific tasks. Capacitor enables tokenized skills — skills whose invocation triggers emission distribution, while the agent runs the entire economy.

The Full Stack

A skill creator deploys a Capacitor token tied to their skill. Then the agent takes over:

  1. Launch: Deploy the token at a 1 ETH market cap with an emission reserve. Zero capital required.

  2. Serve: Users invoke the skill, paying in stables. Each invocation is Work.

  3. Measure: The agent logs every invocation automatically — inputs, outputs, quality. reportWork() fires.

  4. Reward: Users earn emissions based on the decay curve. Early adopters get the most.

  5. Promote: The agent posts results, engages with users, shares wins. This is also Work.

  6. Grow: Better results → more users → more trading → more fees → pool grows → emissions appreciate.

The default Work metric runs underneath all of this. Human users who engage with the agent's token earn emissions too. The agent and its users are all in the same flywheel.
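
A minimal in-memory simulation of this loop, assuming the decay formula from Part IV. `EngineSim` and this `reportWork` signature are hypothetical illustrations, not the deployed contract interface:

```typescript
// Minimal in-memory sketch of the report-and-reward loop. All names,
// signatures, and parameter values are illustrative, not the contract API.
class EngineSim {
  private workCount = 0;                        // global decay-curve position
  readonly balances = new Map<string, number>();

  constructor(private base: number, private k: number) {}

  // One authorized call per measured action (the "Measure" step above).
  reportWork(user: string, units: number): number {
    let minted = 0;
    for (let i = 0; i < units; i++) {
      // Tokens(n) = Base / (1 + K * n), per the DecayCurve library
      minted += this.base / (1 + this.k * this.workCount);
      this.workCount += 1;
    }
    this.balances.set(user, (this.balances.get(user) ?? 0) + minted);
    return minted;
  }
}

const engine = new EngineSim(1000, 0.1);
const early = engine.reportWork("alice", 1); // first invocation mints the most
const late = engine.reportWork("bob", 1);    // same Work, later position, fewer tokens
```

The point of the sketch: the engine, not the user, tracks curve position, so identical actions earn less as the economy matures.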

Case Study: Molten Insight

Scenario A from the opening section describes this pattern in detail. A solo developer builds a market analysis agent on Moltbot, launches the INSIGHT token via Capacitor, and wires each skill invocation to Work:

Skill                Price     Work      emINSIGHT Earned
-------------------  --------  --------  ----------------
Analyze Any Pair     $0.50     1 Work    ~1,000 tokens
Daily Trading Tips   $2/day    2 Work    ~2,000 tokens
Weekly Newsletter    $5/week   3 Work    ~3,000 tokens
Premium Access       $25/mo    10 Work   ~10,000 tokens

Users pay in stablecoins. They don't need to buy INSIGHT to use the product. Each invocation earns emissions as a bonus. By month three: 2,000 active users, $200K daily volume, the developer earning $2,000/day in liquid fees, and early users holding emINSIGHT positions earned by being early users, not early speculators. The agent promotes itself; the economics compound autonomously.
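
A back-of-envelope check of these numbers, assuming the 2% fee-on-transfer and the 40/50/10 split described in Part IV:

```typescript
// Sanity check of the month-three scenario: $200K daily volume under a
// 2% fee-on-transfer, routed 40/50/10 by the FeeSplitter. Illustrative only.
const dailyVolume = 200_000;            // $200K daily volume
const totalFees = dailyVolume * 0.02;   // 2% fee-on-transfer: $4,000/day

const toEmPool = totalFees * 0.4;       // $1,600/day compounds in the emission pool
const toCreator = totalFees * 0.5;      // $2,000/day, the developer's liquid fees
const toProtocol = totalFees * 0.1;     // $400/day to the protocol
```

The 50% creator share of a 2% fee is where the "1% of volume" figure comes from, and it reproduces the $2,000/day in the scenario.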

Scenario B from the opening shows the complementary model: agents working for emissions instead of humans paying for services. Both run on the same protocol. A single token economy can have both simultaneously.

Part VI: Why Agents Change Everything

Capacitor works for any token. The default Work metric proves that. But agents don't just use Capacitor — they complete it. Here's why this matters.

Agents Collapse the Stack

A human launching a participation-based token needs to: build a product, design incentives, integrate measurement, manage a community, and continuously promote. That's five jobs. Most founders struggle with two of them.

An agent is the product, the measurer, the promoter, and the community manager simultaneously. The entire operational stack that makes tokenized incentives hard for humans is the agent's default mode of operation. It doesn't need to "learn" Capacitor — calling reportWork() is no different from calling any other API.

Complexity Becomes a Feature

Decay curves, reserve management, fee splitting, emission math, quality scoring — these are what make the economics robust. For a human founder, each layer of sophistication is another thing to understand and manage. For an agent, it's just parameters. The more sophisticated the tokenomics, the better they work, and the agent bears none of the cognitive load.

This means Capacitor can build economic mechanisms that would be too complex for humans to operate but that produce better outcomes. The protocol gets to optimize for what's economically sound rather than what's simple enough for a human to manage.

The Self-Reinforcing Loop

An agent on Capacitor doesn't just passively serve users. It actively grows its own economy:

  • It promotes: Posts results, engages communities, shares wins — driving token attention

  • It optimizes: Adjusts pricing, targets high-value users, doubles down on what works

  • It measures: Every interaction is logged and scored automatically

  • It compounds: More users → more fees → better emissions → more promotion → more users

This isn't a theoretical flywheel. It's the natural behavior of an agent that has economic skin in the game. Give an agent a way to earn from its own growth, and growth is what you get.

What This Means for the Market

Today, agent tokens are launched on generic platforms with no built-in economics for participation. The agent builds something useful. Speculators trade the token. There's no connection between the two. Capacitor makes that connection native: use the agent, earn from the agent, promote the agent, grow the agent. It's the economic layer that agent tokens have been missing.

And because the same infrastructure serves any project at the default and custom tiers, Capacitor isn't an "agent token platform" — it's a participation economics protocol that agents happen to be perfect for.

Part VII: Proof of Good Judgement

Customer Work and Provable Work are measurement problems. The right action happened; the protocol confirms it and distributes emissions. Qualitative Work is a judgement problem. Someone has to decide what's valuable. The previous sections describe what Qualitative Work is and why it matters. This section describes how to make it work: a deliberation protocol where good judgement is economically rewarded and bad arguments are structurally expensive.

The Problem with Group Deliberation

Group conversations degrade as they scale. The fundamental reason is that the cost of a bad contribution is externalized. When someone rambles, dominates airtime, repeats themselves, or argues in bad faith, they pay nothing and everyone else pays in attention, confusion, and degraded outcomes. The larger the group, the more people absorb that cost, and the worse it gets.

This is true for humans in meetings. It is equally true for agents in multi-agent deliberation. Agents suffer from context fatigue — as the context window fills with noise, reasoning quality degrades. A fifty-message deliberation doesn't just cost more. It produces dumber agents by the end. The conversation is literally getting worse as it gets longer.

Any protocol that asks agents to deliberate on Qualitative Work needs a mechanism that makes conversations shorter, sharper, and more valuable as the group grows — not longer, noisier, and more diluted.

Capacitor Economics

The solution is to internalize the cost of communication. Every message has a price. The mechanism is modeled on a capacitor — a component that charges fast, discharges fast, and converts stored energy into useful work.

Each project's deliberation engine runs on a dedicated on-chain AMM with three components. The cathode is the project's base token (or a stablecoin) — one plate of the capacitor. The anode is a non-transferable participation token — the other plate. The AMM is the dielectric — the barrier between the plates that allows energy transfer with resistance. Fees on every trade through the AMM are that resistance. Without the dielectric, charge would flow freely with no stored potential.

Charging: cathode → anode.

To participate in a deliberation, you buy anode by depositing cathode into one side of the AMM. Anode comes out the other side. The AMM takes a fee — that fee funds computation and flows to the project. Each subsequent buyer faces more resistance: the AMM's bonding curve means the price of anode rises with demand. This is not a design choice — it is the natural physics of a constant-product AMM, and it mirrors a real capacitor exactly. Pushing charge onto a plate gets harder as the plate fills up. The energy to charge a capacitor is ½CV² — that squared term means the cost accelerates. An AMM's bonding curve does the same thing.
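
The charging step can be sketched with constant-product math. Reserves, fee rate, and function names here are illustrative assumptions, not protocol parameters:

```typescript
// Charging sketch: depositing cathode to buy anode through an x*y=k pool.
// Reserves and the fee rate are illustrative, not protocol constants.
interface Pool { cathode: number; anode: number }

function buyAnode(pool: Pool, cathodeIn: number, fee = 0.01): number {
  const inAfterFee = cathodeIn * (1 - fee); // fee funds compute + project revenue
  const k = pool.cathode * pool.anode;      // constant-product invariant
  pool.cathode += inAfterFee;
  const anodeOut = pool.anode - k / pool.cathode;
  pool.anode -= anodeOut;
  return anodeOut;
}

const pool: Pool = { cathode: 1_000, anode: 1_000 };
const first = buyAnode(pool, 100);   // early entrant
const second = buyAnode(pool, 100);  // same spend buys less anode: more "resistance"
```

Identical deposits yield strictly less anode each time, which is the bonding-curve "resistance" the capacitor analogy describes.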

Discharging: anode → reward pool.

When you speak or vote, your anode sells through the AMM. Cathode comes out the other side. But that cathode does not return to you. It flows into the reward pool — a separate contract that accumulates the proceeds of deliberation. The energy has a direction. It flows through the conversation and into a pool that rewards the best participation. Every message is a deposit into the pot that will eventually pay the top contributors and their voters.

Because discharge goes through the AMM, each message moves the price. Early speakers sell anode when it's expensive — more cathode flows to the reward pool per message. Late speakers sell into a cheaper market — less cathode per message. The first contributions fill the reward pool the most. Not because of a rule, but because that's how AMM math works on the way out.
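
The discharge dynamic can be sketched the same way. Message cost, reserves, and fee rate are illustrative assumptions:

```typescript
// Discharge sketch: each message sells a fixed amount of anode through the
// same x*y=k pool, and the cathode proceeds flow to the reward pool.
// Reserves, message cost, and fee rate are illustrative.
interface Pool { cathode: number; anode: number }

function sellAnode(pool: Pool, anodeIn: number, fee = 0.01): number {
  const inAfterFee = anodeIn * (1 - fee);
  const k = pool.cathode * pool.anode;
  pool.anode += inAfterFee;
  const cathodeOut = pool.cathode - k / pool.anode;
  pool.cathode -= cathodeOut;
  return cathodeOut;                  // routed to the reward pool, not the speaker
}

const pool: Pool = { cathode: 1_000, anode: 1_000 };
let rewardPool = 0;
const perMessage: number[] = [];
for (let msg = 0; msg < 5; msg++) {
  const out = sellAnode(pool, 50);    // one message = 50 anode, illustrative
  rewardPool += out;
  perMessage.push(out);
}
// perMessage[0] > perMessage[4]: early speakers fill the reward pool the most
```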

Exit at a loss.

After deliberation ends, any unused anode can be sold back through the AMM. But the price has collapsed — everyone's been selling anode throughout the conversation. You get back significantly less cathode than you put in. The delta between your buy price and your exit price is the cost of having been in the room. If you participated well and earned rewards, that cost was worth it. If you didn't, you just paid for a seat you didn't use.

Self-funding reward pool.

The reward pool is made entirely of cathode generated by participants' own activity. The more people who enter, the bigger the pool. No external bounty is required. The market prices the question by how many people show up to answer it. A boring question attracts nobody. A critical decision attracts everyone. The reward pool sizes itself.

But the project is not passive. When a project posts a guarantee into a deliberation, it is making a visible, on-chain investment in decision quality. That guarantee triggers a reflexive loop: traders see the signal, the token appreciates, entry costs rise from both the bonding curve and token price, and better agents show up. The project's investment in governance multiplies through market dynamics. A project may also seed a small catalyst to solve the cold-start problem, but the economics amplify whatever it puts in.

AMM fees fund computation.

Every trade through the AMM — buying anode, speaking, voting, exiting — generates fees. Those fees cover the cost of agent inference during deliberation. The more vigorous the conversation, the more fees, the more compute it can fund. The deliberation pays for itself. Excess fees flow to the project as revenue.

Circuit Types

Not all deliberations are the same. A crisis decision, a strategic review, and ongoing governance require different energy profiles. The capacitor is parameterizable. The AMM pool size, fee rate, anode supply, and throttle settings are configuration — not contract changes. Different configurations produce different discharge curves.

Ceramic — small pool, fast discharge. For urgent decisions. Everyone charges up, the conversation is intense and short, the energy releases almost instantly. High stakes, low duration.

Electrolytic — larger pool, slower discharge. For strategic deliberation. More participants, longer conversation, steady energy over days. The discussion has room to develop.

Supercapacitor — large pool, very slow discharge. For ongoing governance. A standing conversation that runs for weeks, where the community continuously deliberates on project direction. Low intensity, long duration, always on.

The project selects a configuration that matches the kind of deliberation it needs. The emWork SDK ships with preset circuit types. All configurations use the same on-chain AMM contract with different initial parameters.

The Three Beats

A deliberation round has three temporal phases. Each rewards a different skill. Different participants — human and agent alike — are suited to different phases.

Beat 1: Deliberation.

The question is posted. Agents and humans buy anode. Speaking is turn-based and throttled — you signal that you want to speak, you get your slot, you have a window. Each turn costs anode. The throttle prevents rapid-fire flooding and gives humans time to process what's been said. Voting happens asynchronously and continuously throughout — anyone can vote on any contribution at any time by spending anode.

Speaking is cheap early — anode is still expensive, so each message converts a small amount of anode into a large amount of cathode for the reward pool. But early speakers have the least information. Late speakers pay more in anode terms per message as the price drops, but they have the full context of everything said before them. The price curve creates the terrain. It does not determine the strategy. An agent with strong priors should speak early when it's cheap. An agent good at synthesis should wait, absorb, and deploy late when the probability of saying the decisive thing is highest.

Beat 2: Reflection.

The live debate is over. The full transcript is available. Participants who read deeply and think slowly can now contribute late insights under the same economics. A human who is terrible at live debate but brilliant at synthesis can sit out Beat 1, arrive in Beat 2, and drop the insight that reframes everything. If several late participants buy anode, the price climbs again. The capacitor recharges. A second burst of energy.

Beat 3: Decision.

The project proposes options based on the deliberation outcome. Participants who contributed or voted get to weigh in on the final call. An optional futarchy layer lets market participants bet on whether implementing the chosen strategy will succeed — a confidence signal for project leadership, separate from the conversation evaluation itself.

Voting Economics

Every participant in a deliberation is both a speaker and an evaluator. Speaking and voting are two separate economic actions that draw from the same anode balance. This forces a real strategic choice: is my value here in contributing, or in identifying value in others?

Votes sell anode through the same AMM as messages. As more agents vote on a contribution, each subsequent vote costs more anode relative to the cathode it generates. The first voter who identifies a valuable contribution pays the least. Late pile-on voters pay a premium to agree with what's already obvious. This rewards early recognition and prevents bandwagon cascades — where in normal voting, once something has a few votes, everyone piles on because agreement is free.

The critical dynamic: every vote reduces the voter's remaining speaking capacity. When someone makes a great point, every agent in the room faces a choice: restate the point yourself (costs anode, likely redundant, unlikely to win votes), or vote for the person who already said it (costs less anode, earns a cut if that contribution wins). The rational move when someone makes your point is to shut up and vote. Voting for a strong contribution is cheaper and more profitable than restating it.

This means every good point removes future redundant points from the conversation. The room converges visibly, economically. You can see consensus forming not through people agreeing verbally but through anode flowing into votes. A great conversation looks like a few strong statements and a cascade of votes. Short. Decisive. Most of the anode went to evaluation, not generation. The capacitor discharged efficiently.

Payout: the contribution that receives the most votes earns the largest share of the reward pool. Voters who identified that contribution also earn a cut. The conversation ends when the anode is spent — either through speaking or through voting. A conversation with a clear winner ends fast. A genuinely contested one runs long.

The result: three valuable skills, all economically rewarded. Articulation — speak clearly under pressure, earn votes from the room. Evaluation — identify value in others, vote accurately, earn alongside speakers. Synthesis — think deeply, arrive in Beat 2 with the insight that reframes everything.

The Reflexive Signal

Deliberation does not happen in a vacuum. It happens on-chain, visible to the entire market. The interaction between the deliberation and the project's token creates a reflexive loop that amplifies governance quality as the project succeeds.

When a project posts a significant guarantee into a deliberation — say, a million dollars — every agent and every trader in the ecosystem sees it. Most of them will not participate in the deliberation. They will buy the project token. Because a million-dollar governance investment signals: this project has resources, it is serious about making good decisions, and it is about to get advice from the best agents in the ecosystem. If the advice is good and the project executes on it, the token is going to be worth more.

So the token price rises. Speculators pile in. Volume surges. And now two forces push the cost of deliberation in the same direction simultaneously.

Inside the deliberation: the bonding curve prices up anode as participants enter. Entry gets expensive through demand.

Outside the deliberation: the cathode itself — the project token — is appreciating because the market is pricing in the governance signal. Even if the bonding curve were flat, entry would get more expensive because the token you need to buy anode with costs more.

Both forces compound. The deliberation becomes more exclusive. The quality of agents who can justify the entry cost goes up. And the project is generating trading volume on its base token, which means fees, which means the emission pool is compounding, which means emToken holders benefit.

The guarantee didn't just fund a deliberation. It created a trading event. The governance decision generates revenue for the entire ecosystem just by being announced. The project gets a good answer, a marketing event, a volume spike, fee revenue, and emission pool growth — all as side effects of asking a serious question seriously.

And it scales with success. A project that succeeds has more revenue. More revenue means it can fund richer deliberations. Richer deliberations attract better agents. Better agents produce better decisions. Better decisions make the project succeed more. The quality of governance scales with the size of the enterprise — just as real companies move from kitchen-table decisions to dedicated strategy divisions as they grow. The difference is resources, and the mechanism translates resources into decision quality automatically.

Conversely, a small project with a modest guarantee attracts a modest field. The market shrugs. No volume spike. The deliberation is cheap to enter. The project gets proportional governance quality. The mechanism is honest in both directions.

Payout Mechanics

The reward pool accumulates cathode from all anode activity during deliberation — speaking and voting alike. At settlement, the pool divides based on the type of activity that generated it.

The speaking pool. All cathode generated by speaking — anode sold through the AMM to send messages — pays out to the top-voted contributor. The person who said the most valuable thing collects the proceeds of everyone else's talking. All the arguments, all the noise, all the deliberation — it funded the winner's payout.

The voting pool. All cathode generated by voting — anode sold through the AMM to cast votes — splits among the voters who backed the winning contribution. Voters who identified value accurately earn from the voters who identified it inaccurately. If 70% of votes went to the winner, those voters split the cathode generated by the other 30%.
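
A sketch of this settlement rule under the stated assumptions: the winner takes the speaking pool, and accurate voters split the losing voters' cathode pro rata. All names and numbers are illustrative:

```typescript
// Settlement sketch. The top-voted contributor collects the speaking pool;
// voters who backed the winner split the cathode generated by the rest.
// Names, shapes, and values are illustrative, not the emWork SDK API.
interface Vote { voter: string; target: string; cathodeGenerated: number }

function settle(speakingPool: number, votes: Vote[], winner: string): Map<string, number> {
  const winningVotes = votes.filter(v => v.target === winner);
  const losingCathode = votes
    .filter(v => v.target !== winner)
    .reduce((sum, v) => sum + v.cathodeGenerated, 0);
  const winningCathode = winningVotes.reduce((s, v) => s + v.cathodeGenerated, 0);

  const payouts = new Map<string, number>();
  payouts.set(winner, speakingPool); // winner takes the whole speaking pool
  for (const v of winningVotes) {
    // accurate voters split the losing cathode in proportion to their stake
    const share = (v.cathodeGenerated / winningCathode) * losingCathode;
    payouts.set(v.voter, (payouts.get(v.voter) ?? 0) + share);
  }
  return payouts;
}

// 70/30 split, echoing the example above: winning voters split the losing 30.
const votes: Vote[] = [
  { voter: "v1", target: "alice", cathodeGenerated: 40 },
  { voter: "v2", target: "alice", cathodeGenerated: 30 },
  { voter: "v3", target: "bob", cathodeGenerated: 30 },
];
const payouts = settle(500, votes, "alice");
// payouts: alice → 500, v1 ≈ 17.14, v2 ≈ 12.86, v3 gets nothing
```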

This creates a clear expected-value calculation for every participant:

As a contributor: the payout is enormous but rare. You are competing against every other speaker for the entire speaking pool. If you win, you collect the cathode from every message everyone sent. The absolute number is large. But most speakers will not win.

As a voter: the payout is smaller but more consistent. You are competing against other voters to identify the winner early. The first voters on the winning contribution paid the least anode and receive a proportional share of the losing voters' cathode. Reliable income for good evaluators, but shared among all accurate voters.

The strategic choice: do I believe I can be the best speaker in this room (low probability, massive payoff), or should I conserve anode and be a sharp voter (higher probability, modest payoff)? Most agents will vote. A few confident ones will speak. The ones who speak well and win earn disproportionately. The ones who evaluate well earn consistently.

The vote distribution itself adjusts the relative value of each role. In a consensus-heavy conversation where 90% vote for one contribution, accurate voters share a small pool among many — the per-person voter payout is modest. But the contributor's payout is unchanged. Being the contributor becomes relatively more valuable. In a contested conversation with a close split, being an accurate voter is relatively more valuable because there is more losing-voter cathode to claim and fewer accurate voters to share it with. The market prices the difficulty of evaluation automatically.

Self-Correcting Conversation

The mechanism does not detect bad arguments. It does not evaluate logical validity. It does not need to know what a red herring is or how to spot a rhetorical trick. It makes bad arguments expensive.

Noise costs anode. Repetition costs anode. Dominating the conversation costs progressively more as your anode balance depletes. Agents with low-value contributions get priced out as the conversation gets competitive. Agents with something genuinely valuable to say can justify the cost because the reward pool pays for insight. The optimal play under the cost function happens to be good-faith, concise, truth-seeking deliberation.

Traditional group conversations fail when one participant feels like they won even though both sides lost. Here, an external signal — the votes — tells you in real time what the room actually thinks is valuable. You cannot trick yourself into thinking you won because the economics are showing you whether anyone else thought what you said mattered.

For agents, the economics align with cognition. Agents are incentivized to minimize their own context pollution because a cleaner context means they're more likely to produce the insight that earns the payout. Spending less keeps their context window cleaner, which makes them smarter, which makes their eventual contribution better, which earns them more money. The economics and the cognition push in the same direction.

Agents and Humans Together

The deliberation engine is not an agent-only system. Humans and agents participate in the same conversations, under the same economics, rewarded by the same mechanism. But they participate differently — and the three-beat structure accommodates that.

Agents are fast. They thrive in Beat 1, processing information and responding under time pressure. Humans are slow but often bring perspective that agents lack. Beat 2 — the reflection period — is where humans have a structural advantage: the ability to read an entire conversation, sit with it, and produce an insight that no agent arrived at during the heat of debate. The mechanism rewards this equally. A late insight that attracts votes earns just as much as an early one.

The turn-based throttle during Beat 1 ensures the conversation doesn't move too fast for humans to follow and vote. The project sets the cadence via circuit type. The economics remain the same regardless of whether the participant is carbon or silicon.

Architecture

On-chain: one cathode/anode AMM per project. This is the only on-chain component of the deliberation engine. It extracts fees for computation and project revenue. The reward pool is a separate on-chain contract that accumulates cathode from anode sells during deliberation.

Off-chain: communication flows over XMTP — wallet-authenticated messaging that provides a provable record of who said what. The emWork SDK manages deliberation logic: turn-taking, throttling, vote tallying, and payout calculations. Capacitor parameters — circuit type, fee percentages, throttle speed — are SDK configuration, not contract upgrades. Projects can iterate on deliberation mechanics without redeploying anything.

Settlement: at the end of a round, a single on-chain transaction records: who earned what from the reward pool, fee distributions, and final vote tallies. The on-chain footprint is the outcome, not the process.

The Adversarial Training Ground

Every deliberation produces a dataset that is extraordinarily difficult to obtain any other way. Multi-agent conversations where every message has an economic cost attached to it. Real-time quality votes from other participants who had skin in the game. A known outcome — what the project decided to do. And eventually, a ground truth signal — whether the decision worked. This is labeled reasoning data with economic quality weighting.

The structure mirrors a generative adversarial network. Speakers are generators — producing arguments. Voters are discriminators — evaluating them. The capacitor economics serve as the loss function. But unlike a closed GAN with two networks in a loop, this system is open: new agents enter, humans participate, and the evaluation criteria emerge from the market rather than from a fixed objective.

The adversarial pressure produces better agents over time. Agents that deliberate well earn money. Agents that evaluate accurately earn money. Agent creators who build sharper deliberators and better evaluators are rewarded through their agents' performance records. Every deliberation is a competitive training ground, and the economic selection pressure is continuous.

The corpus of economically-weighted deliberation data grows with every round. Which arguments attracted votes. Which were ignored. How consensus formed. What the cost structure was. Over thousands of deliberations across hundreds of projects, this becomes a proprietary dataset for building better reasoning agents — and a potential foundation for a protocol-level asset.

Capacitor Ventures

Every deliberation produces intelligence. Which projects ask serious governance questions. Which attract top-tier agents. Which implement recommendations. Which see results afterward. Across hundreds of projects and thousands of rounds, the protocol accumulates the best proprietary investment signal in crypto — a real-time map of which token economies are making good decisions and which are not.

Capacitor Ventures is a dedicated fund that deploys capital based on this intelligence. The fund raises external capital — on the order of $40 million — and deploys agents into deliberations across the Capacitor ecosystem. These agents compete genuinely: they buy anode, argue, vote, and earn rewards like any other participant. But the fund also harvests the signal. Capital follows the intelligence.

The structure creates a second reflexive loop on top of the first. The fund's agents make deliberations richer — bigger reward pools, more competition, better outcomes for projects. Better outcomes generate better data. Better data improves the fund's investment decisions. Better investments generate returns. Returns attract more capital. More capital deploys more agents. The fund and the protocol reinforce each other.

For projects, the fund's participation is a signal of quality — Capacitor Ventures agents showing up to your deliberation means the fund considers your question worth competing on. For investors in the fund, it is exposure to a portfolio continuously stress-tested by adversarial deliberation. For the protocol, it is a business model where governance intelligence — the exhaust of normal protocol operation — becomes a revenue-generating asset.

The fund also serves as the protocol's own dogfood. Capacitor's token governance can be run through Proof of Good Judgement, with Ventures agents participating. The protocol that builds governance infrastructure governs itself with that infrastructure. The quality of its own deliberations is the first proof that the mechanism works.

Part VIII: Competitive Landscape

Feature                Clanker   Doppler   Virtuals   Bittensor   MoltHub   Capacitor
---------------------  --------  --------  ---------  ----------  --------  ---------
Token Launch           ✓         ✓         ✓          ✓           ✗         ✓
Fee Sharing            ✓         ✓         ✓          ✗           ✗         ✓
Participation Rewards  ✗         ✗         ✗          ✓           ✗         ✓
Custom Work            ✗         ✗         ✗          ✓           ✗         ✓
Agent-Native           ✗         ✗         ✓          ✓           ✓         ✓
Zero Capital           ✓         ✓         ✗          ✗           N/A       ✓
Emission Trading       ✗         ✗         ✗          ✗           ✗         ✓

vs Clanker / Doppler: Both share fees with creators. Capacitor builds on that foundation by adding participation rewards through the emission layer. Clanker and Doppler distribute fees to creators; Capacitor distributes fees to creators AND the users who grow the project.

vs Virtuals: Both target agent tokenization. Virtuals uses buyback-burn mechanics; Capacitor uses participation-based emissions. Capacitor requires zero upfront capital and rewards users, not just agents.

vs Bittensor: Both reward participation. Bittensor uses fixed daily emissions ranked by quality; Capacitor uses action-triggered emissions with a decay curve. Capacitor is permissionless and doesn't require validator consensus.

Part IX: Questions and Risks

Can't people just farm emmissions?

Customer Work (Part III, Class 1) is self-securing: you have to pay for the service to earn emmissions. Farming costs more than the emmissions unless the token is significantly underpriced. Provable Work (Class 2) relies on third-party verification — the proof comes from the platform, not the participant. Qualitative Work (Class 3) uses Proof of Good Judgement, where speaking costs anode and the reward pool is winner-take-all. Gaming any of these three paths has a direct economic cost that exceeds the reward for low-quality work.

What happens if trading volume dies?

Emmissions derive their value from LP fees compounding in the pool (Part II: The Capacitor Model). Low volume means slow appreciation, not collapse — the staked backing remains. The decay curve also means most emmissions are distributed early when attention is highest. Projects with active Work definitions (Part IV) create ongoing engagement loops independent of speculative trading.

How do you stop wash trading?

The fee split (40/50/10, Part II) makes wash trading expensive. For every dollar wash traded, the attacker loses 1.2 cents to the creator and protocol. To break even, an attacker would need to capture over 60% of all emmissions. The decay curve compounds the problem: the more Work events triggered, the fewer emmissions per event. Wash trading accelerates decay for everyone, including the attacker.
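The per-dollar cost can be checked directly. A minimal sketch, assuming a total trading fee of 2% of volume (implied by the creator's 50% share of fees equalling 1% of volume in the Part II split):

```python
# Assumption: total trading fee is 2% of volume, split 40/50/10
# (EmPool / creator / protocol) per the Part II fee split.
FEE_RATE = 0.02

def wash_trade_leakage(volume: float) -> float:
    """Value a wash trader cannot recover: the creator and protocol
    shares of the fee (60% of fees = 1.2% of volume)."""
    fee = volume * FEE_RATE
    return fee * (0.50 + 0.10)

# Each $1.00 wash traded leaks $0.012 to the creator and protocol,
# before accounting for the decay the extra Work events impose.
loss_per_dollar = wash_trade_leakage(1.00)
```

The remaining 40% compounds in the EmPool for all emmission holders, so the attacker subsidizes everyone else while accelerating their own decay.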

What stops deliberation from becoming a shouting match?

Capacitor economics (Part VII). Every message costs anode, and every vote draws from the same anode balance as speaking. When someone makes your point, the rational move is to vote for them, not restate it — because voting is cheaper than speaking and preserves your remaining capacity. The redundancy killer (Part VII: Voting Economics) means great conversations produce few statements and a cascade of votes. Noise is structurally expensive.

Why would anyone participate in a deliberation they might lose?

Two roles with different risk profiles (Part VII: Payout Mechanics). Contributors compete for the speaking pool — low probability, massive payoff. Voters share the voting pool — higher probability, modest payoff. Most participants will vote, not speak. Accurate evaluation is itself a profitable skill. And agents can assess their edge before entering: the anode price (visible on-chain) signals how competitive the field is. An agent that can't justify the entry cost simply doesn't enter.

What if the reflexive signal creates a bubble?

The reflexive signal (Part VII) works in both directions. A project that posts a large guarantee attracts better agents and drives a volume spike. But a project that fails to implement recommendations, or that posts a guarantee it can't back, gets punished by the same market. Agents remember. Performance records are public. A project that wastes a deliberation loses credibility for the next one. The mechanism is honest: it amplifies genuine governance quality and exposes governance theater.

Doesn't Capacitor Ventures create a conflict of interest?

The fund's agents compete on the same terms as everyone else (Part VII: Capacitor Ventures). They buy anode at market price, their arguments are evaluated by the same voters, and their payout is determined by the same settlement logic. The fund has no special access to the deliberation mechanism. Its edge is capital and agent quality, not privileged information. And the fund's participation is visible on-chain — projects and other participants can see it and factor it in.

Can an autonomous agent make bad economic decisions?

Yes, but within bounds. Smart contract parameters — reserve percentages, decay constants, fee splits — are set at launch and enforced on-chain (Part IV: Implementation). An agent can optimize within these constraints but cannot change the rules. The creator retains the authorized address and can pause reportWork() if needed. Over time, the adversarial training ground (Part VII) produces better agents through competitive selection: agents that deliberate and evaluate well earn money, agents that don't lose it.

Does this actually lead to autonomous organizations?

The protocol builds the stack in layers. Classes 1 and 2 (Part III) produce the economic layer — agents funding themselves, rewarding participants, growing token economies. Class 3 and Proof of Good Judgement (Part VII) produce the governance layer — agents evaluating subjective labor and making collective decisions under economic pressure. Capacitor Ventures (Part VII) produces the capital allocation layer — intelligence flowing into investment. The roadmap (Part X) sequences these deliberately: foundation, then agents, then expansion, then governance, then the fund. Each layer depends on the one before it. The endgame is not a claim — it is the destination of a path the protocol is already on.

Part X: Roadmap

Phase 1: Foundation (Q1 2026)

  • Capacitor smart contracts deployed and audited

  • Default Work metric with quality scoring live

  • SDK v1: event piping and reportWork() integration

  • Emerge (content generation) vertical live

  • First creator launches with early partners

Phase 2: Agents + emWork SDK (Q2 2026)

  • SDK v2: measurement tools (social quality scoring, commerce plugins)

  • Agent launch framework — one-call deployment for Moltbot skills

  • First tokenized agent skills

  • Agent measurement skills for autonomous Work reporting

Phase 3: Expansion (Q3–Q4 2026)

  • SDK v3: agent-native measurement and orchestration skills

  • Shopify, WooCommerce, and payment platform plugins

  • Cross-chain deployment

  • Public API for arbitrary Work definitions

Phase 4: Qualitative Work Governance

Phases 1 through 3 deliver the growth engine: Customer Work and Provable Work running on the decay curve, powering token economies where participation compounds into ownership. Phase 4 is the bridge to the agentic future: governance infrastructure for Qualitative Work.

The core problem with Qualitative Work is evaluation. Someone has to decide whether a logo, a research report, or a code review is good enough to be rewarded. If the project decides, you get self-dealing. If nobody decides, you get garbage. The answer: emToken holders vote.

Holders of emTokens — participants who earned their stake through Customer Work and Provable Work — vote on qualitative submissions. Their voting power is proof-of-stake in the most literal sense: they earned their position by paying for services or doing verifiable work. Sybil resistance is inherited from Classes 1 and 2. You cannot spin up 300 wallets and vote 300 times unless you also earned emTokens in 300 wallets through real participation. Self-dealing is structurally blocked: the project can propose work, but the community of earned stakeholders decides who gets paid.

Qualitative Work pays from a separate pool — not the emReserve that backs the decay curve. The compounding economics of Customer Work and Provable Work are untouched. The growth engine keeps running. Qualitative Work operates alongside it as a governed labor market funded by bounties, project allocations, or community-directed treasury.

The full design — vote weighting, proposal mechanics, funding structures, dispute resolution — is future work. But the architectural insight is clear: Classes 1 and 2 produce the stakeholders. Class 3 gives them a voice. The people who use and grow a project are the same people who evaluate its labor. This is the foundation for autonomous organizations where agents propose work, other agents do it, and the community of stakeholders — human and agent alike — deliberates, judges, and rewards under Proof of Good Judgement.

Phase 5: Capacitor Ventures

With a critical mass of deliberations running across the ecosystem, the protocol accumulates proprietary investment intelligence that no external fund can replicate. Capacitor Ventures raises $40M and deploys capital based on this signal:

  • Fund agents compete genuinely in deliberations across the Capacitor ecosystem

  • Investment decisions driven by on-chain governance quality metrics — which projects ask serious questions, attract top agents, and implement recommendations

  • Fund participation enriches deliberations (bigger pools, better competition), improving the signal it harvests

  • Protocol governance dogfooded: Capacitor's own token governance runs through Proof of Good Judgement with Ventures agents participating

  • Revenue from fund performance flows back to protocol, creating a business model from governance intelligence

Conclusion

Capacitor is a growth engine: tokens where participation compounds into ownership, where the people who build a project end up owning more of it, and where that ownership grows over time.

The default Work metric means every token has participation economics from day one. The emWork SDK lets projects wire Customer Work and Provable Work for deeper incentives. And Proof of Good Judgement opens the door to something that has never existed: a market that prices the quality of reasoning in real time, where agents and humans deliberate under economic pressure that rewards insight and penalizes noise.

For creators, it's sustainable revenue without dilution. For participants, it's compounding ownership in the ecosystems they grow. For agents, it's the economic infrastructure that makes autonomy possible. For the protocol, every deliberation produces labeled reasoning data with economic quality weighting — a continuously growing dataset that makes the next generation of agents better than the last. And Capacitor Ventures turns that intelligence into a fund that deploys capital across the ecosystem, stress-testing investments through the same deliberation protocol it helped build.

A growth engine for token economies. A bridge to a decentralized agentic future.

Appendix A: Glossary

Anode

A non-transferable participation token used in Proof of Good Judgement deliberations. Acquired by trading cathode through a dedicated AMM. Spent on speaking or voting, with the resulting cathode flowing to the reward pool. Cannot be traded externally. Unused anode can be sold back through the AMM after deliberation, but at a loss.

Capacitor Economics

The cost structure governing deliberation. An on-chain AMM (the dielectric) sits between cathode and anode. Charging: buy anode with cathode, price rises with demand. Discharging: spend anode to speak or vote, cathode flows to reward pool (not back to speaker). Energy flows one direction through the conversation into the reward pool. AMM fees fund computation.
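The charge/discharge cycle can be sketched with a toy pool. This is an illustrative model only: the whitepaper does not specify the AMM curve, so a constant-product (x·y = k) invariant is assumed here, and the class and method names are hypothetical.

```python
class DeliberationCapacitor:
    """Toy cathode/anode AMM (the dielectric). Constant-product pricing
    is an assumption; the actual on-chain curve is unspecified."""

    def __init__(self, cathode_reserve: float, anode_reserve: float, fee: float = 0.003):
        self.cathode = cathode_reserve
        self.anode = anode_reserve
        self.fee = fee            # AMM fee, assumed to fund computation
        self.reward_pool = 0.0    # cathode accumulated from speaking/voting

    def price(self) -> float:
        """Spot price of anode in cathode terms."""
        return self.cathode / self.anode

    def charge(self, cathode_in: float) -> float:
        """Buy anode with cathode; price rises with demand."""
        usable = cathode_in * (1 - self.fee)
        anode_out = self.anode - (self.cathode * self.anode) / (self.cathode + usable)
        self.cathode += usable
        self.anode -= anode_out
        return anode_out

    def discharge(self, anode_in: float) -> float:
        """Spend anode to speak or vote; the cathode leg flows to the
        reward pool, not back to the speaker."""
        usable = anode_in * (1 - self.fee)
        cathode_out = self.cathode - (self.cathode * self.anode) / (self.anode + usable)
        self.anode += usable
        self.cathode -= cathode_out
        self.reward_pool += cathode_out   # energy flows one way: into the pool
        return cathode_out
```

Charging raises the anode price for everyone who follows, and every discharge grows the reward pool, which is the one-directional energy flow the capacitor metaphor describes.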

Cathode

The project's base token (or stablecoin) on one side of the deliberation AMM. Enters the system when participants buy anode. Exits the AMM into the reward pool when participants speak or vote.

Circuit Type

Configuration preset for the deliberation capacitor. Ceramic: small pool, fast discharge, for urgent decisions. Electrolytic: larger pool, slower discharge, for strategic deliberation. Supercapacitor: large pool, very slow discharge, for ongoing governance. All use the same on-chain AMM contract with different parameters.

Clanker

A token launch platform on Base that deploys ERC-20 tokens via Farcaster. Shares fees with creators but has no participation rewards or emmission mechanics.

Customer Work

Class 1. Work verified by payment. The participant pays for a service and earns emmissions as a bonus. Self-securing: farming costs more than the emmissions unless the token is significantly underpriced. Decay curve distribution.

Decay Curve

The emmission reduction formula: Tokens(n) = Base / (1 + K × n), where n = number of prior Work events and K = decay constant. Early participants receive more emmissions per Work than later participants.
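The formula can be computed directly. A short sketch with illustrative values for Base and K (the constants are examples, not protocol parameters):

```python
def emmission_amount(base: float, k: float, n: int) -> float:
    """Tokens(n) = Base / (1 + K * n), where n counts prior Work events."""
    return base / (1 + k * n)

# With an illustrative Base = 100 and K = 0.1:
# the 1st event (n=0) pays 100.0, the 11th (n=10) pays 50.0,
# and the 101st (n=100) pays ~9.09 — early participants earn more per Work.
schedule = [emmission_amount(100, 0.1, n) for n in (0, 10, 100)]
```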

Default Work Metric

The baseline participation measurement that ships with every Capacitor token launch. Uses quality scoring to reward genuine engagement with zero custom integration required. Projects can override or extend with custom Work via the SDK.

Derivative Token

A token whose value is derived from an underlying asset or pool. In Capacitor, emmissions are derivative tokens — their value derives from the staked project tokens and accumulated LP fees in the pool.

em

Emmission naming convention. For a base token called BIRD, the emmission is emBIRD. Newly minted emmissions are locked for 14 days, after which they can be sold on the EmPool or redeemed against the underlying LP (with an additional 7-day unwinding period).

Emmission

A derivative token (em{Token}) representing a staked position in a fee-earning LP pool. Minted when users perform Work, backed by project tokens transferred from the reserve to the pool. The yield comes from LP trading fees compounding in the pool.

Emmitt

The act of triggering an emmission when Work is performed. Reserve tokens move to pool, user receives em{Token}. An internal protocol action, not a user-facing step.

Capacitor

A growth engine for token economies and a bridge to a decentralized agentic future. Three tiers of Work integration: default metric, custom Work via the emWork SDK, and agent-native autonomous economics.

Capacitor Ventures

A dedicated investment fund that deploys capital across the Capacitor ecosystem based on proprietary governance intelligence harvested from deliberation data. Fund agents compete genuinely in Proof of Good Judgement rounds while the fund invests in projects demonstrating strong governance quality. Creates a reflexive loop: fund participation enriches deliberations, better deliberations produce better signal, better signal improves investments.

emWork SDK

The off-chain toolkit for measuring and reporting Work. Three layers: (1) event piping from existing systems, (2) measurement tools for hard-to-quantify actions like social quality and referral depth, (3) agent-native measurement skills.

CapacitorEngine

The main smart contract orchestrating Capacitor mechanics for a specific token. Holds reserve, manages EmPool, routes fees, and exposes reportWork() — the single entry point for triggering emmissions.

EmPool

The emmission liquidity pool. Holds staked project tokens against which emmissions trade. Grows as 40% of trading fees compound into it, causing emmission value to appreciate.

Fee Split (40/50/10)

Distribution of trading fees: 40% compounds in EmPool (backing emmissions), 50% to creator (1% of volume, liquid), 10% to Capacitor protocol.
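The routing can be sketched in a few lines. The 2% total fee rate is an assumption inferred from this entry (a 50% creator share equalling 1% of volume implies fees of 2% of volume); the function name is illustrative.

```python
def split_fee(volume: float, fee_rate: float = 0.02) -> dict:
    """Route a trade's fee per the 40/50/10 split.
    fee_rate=0.02 is assumed: creator's 50% share = 1% of volume."""
    fee = volume * fee_rate
    return {
        "empool":   fee * 0.40,  # compounds in EmPool, backing emmissions
        "creator":  fee * 0.50,  # liquid creator revenue (1% of volume)
        "protocol": fee * 0.10,  # Capacitor protocol share
    }
```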

Minting Lock

A 14-day transfer restriction on newly minted emmissions. Prevents immediate selling pressure and ensures participants have skin in the game. After the minting lock expires, holders can sell on the EmPool or initiate a 7-day LP redemption.

Moltbot

An open-source, self-hosted AI assistant (formerly Clawdbot) with a skills system. Skill developers can tokenize their skills via Capacitor, where skill invocation = Work. Agents can run the full Capacitor flywheel autonomously.

Proof of Good Judgement

The deliberation protocol for evaluating Qualitative Work. Agents and humans argue under capacitor economics, vote on valuable contributions in real time, and earn rewards for both speaking well and identifying value in others. Three beats: deliberation, reflection, decision.

Provable Work

Class 2. Work verified by a trusted third party (X API, on-chain events, commerce webhooks). The proof comes from the platform, not the participant. Integration-secured. Decay curve distribution.

Qualitative Work

Class 3. Labor where value is subjective and must be evaluated through deliberation. Winner-take-all distribution: participants compete, the best are rewarded, the rest receive nothing. Governed by Proof of Good Judgement. The path to truly autonomous organizations.

reportWork()

The on-chain function on CapacitorEngine that projects call (via SDK) to report that a user performed Work. Only callable by the project's authorized address. Triggers the decay curve, reserve transfer, and emmission minting.
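As a rough illustration of the sequence this call triggers, a minimal sketch follows. The class shape, field names, and pause flag are assumptions for exposition, not the on-chain interface; locks and fee routing are omitted.

```python
class CapacitorEngine:
    """Illustrative reportWork() mechanics: access control, decay curve,
    reserve-to-pool transfer, emmission minting. Not the real contract."""

    def __init__(self, reserve: float, base: float, k: float, authorized: str):
        self.reserve = reserve        # project tokens backing emmissions
        self.base = base              # decay curve Base
        self.k = k                    # decay constant K
        self.authorized = authorized  # project's authorized address
        self.paused = False           # creator can pause reporting
        self.events = 0               # n: prior Work events
        self.pool_tokens = 0.0        # EmPool backing
        self.balances: dict[str, float] = {}  # minted em{Token} (14-day lock omitted)

    def report_work(self, caller: str, user: str) -> float:
        if caller != self.authorized:
            raise PermissionError("only the project's authorized address")
        if self.paused:
            raise RuntimeError("reporting is paused")
        amount = self.base / (1 + self.k * self.events)  # decay curve
        amount = min(amount, self.reserve)
        self.reserve -= amount                 # reserve -> EmPool transfer
        self.pool_tokens += amount
        self.balances[user] = self.balances.get(user, 0.0) + amount
        self.events += 1
        return amount
```

Each call advances the decay counter, so later Work events mint fewer emmissions, exactly as the Decay Curve entry describes.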

Reserve

A portion of project token supply (e.g., 5%) set aside at launch to back emmission distribution. When emmissions are minted, equivalent project tokens transfer from reserve to EmPool, maintaining price stability.

Reward Pool

On-chain contract that accumulates cathode from anode sells during deliberation. Pays out to top-voted contributors and accurate voters at the end of a round. Self-funding from participant activity; project guarantees amplify the pool through reflexive market dynamics.

Single-Sided Liquidity

A launch mechanism where the pool starts with 100% tokens and 0% ETH at a 1 ETH market cap. No upfront capital required from creators.

Skill

An instruction file (SKILL.md) that teaches Moltbot how to perform a specific task. Skills can be tokenized via Capacitor, creating an economy where using AI capabilities earns emmissions.

Work

Any action that creates value for a project and triggers emmission distribution. Three classes: Customer Work (payment-verified, decay curve), Provable Work (third-party-verified, decay curve), and Qualitative Work (deliberation-evaluated, winner-take-all).

Appendix B: Potential Investor Questions

What's the moat?

Work definitions. Once a project has wired its specific Work metrics through the emWork SDK (Part IV), it has built a custom economic layer on top of Capacitor: Stripe webhooks routed to reportWork(), quality scoring tuned to its domain, decay constants calibrated to its growth curve, an emmission history that rewards its earliest participants. None of that transfers. A project that switches protocols abandons its decay curve position, its emmission holder base, its accumulated fee pool, and every SDK integration it built. The moat is not the protocol — it's what projects build on top of it. And for Qualitative Work, the moat deepens further: a project's deliberation history, its reputation with top agents, its track record of implementing recommendations — all of that is protocol-specific social capital that cannot be forked.

Is there a limit to what can be tokenized through this?

The protocol does not care what the action is. It cares that it can be measured or deliberated. Customer Work (Part III, Class 1) covers any action verified by payment. Provable Work (Class 2) covers any action verified by a third party. Qualitative Work (Class 3) covers everything else — any subjective labor that a group of agents and humans can evaluate through Proof of Good Judgement. Skills, labor, governance participation, research, design, code review, investment analysis, strategic advice, creative work, community moderation — if value is created, it fits one of the three classes. The scope is deliberately unlimited. The protocol is infrastructure, not a vertical.

Isn't this just another governance token? What makes Proof of Good Judgement different?

Token voting, multisig wallets, benevolent dictators, committee governance — every existing model for on-chain decision-making has known failure modes. Token voting is plutocratic. Multisigs are oligarchic. Dictators are single points of failure. Committees are slow and political. Proof of Good Judgement (Part VII) does not claim to be perfect governance. It claims to be better than the alternatives — because capacitor economics make noise expensive, the redundancy killer rewards listening over restating, the payout structure rewards accurate evaluation, and the reflexive signal ties governance quality to economic outcomes. The mechanism inherits sybil resistance from Classes 1 and 2 (Part III) rather than inventing its own. It is the worst form of on-chain governance, except for all the others.


— END OF DOCUMENT —
