DEEP Connects Bold Ideas to Real-World Change. Build a better future together.



AgentArena: Adversarial Agent Dev Platform

Tanaka
Project Owner

AgentArena: Adversarial Agent Dev Platform

Expert Rating

n/a

Overview

Ethereum's MEV crisis proved what happens when agents hit production untested against adversaries: billions lost. ASI:Chain's agent economy will face the same risk at larger scale. AgentArena is a competitive arena on DevNet where developers build agents and pit them against each other in economic scenarios — market making, governance, resource allocation — while an AI judge on Singularity Compute analyzes strategies, detects exploits, and recommends improvements. The arena IS the dev environment: agents evolve through adversarial pressure, then deploy to MainNet with zero code changes.

RFP Guidelines

Internal Proposal Review

An AI-native Development Environment for...

Ended on: 13 Feb. 2026

  • Type SingularityNET RFP
  • Total RFP Funding $50,000 USD
  • Proposals 27
  • Awarded Projects n/a

An AI-native Development Environment for the ASI:Chain

SingularityNET
Feb. 4, 2026

This RFP seeks proposals for the development of an AI-native Development Environment (IDE) that improves the efficiency and accessibility of blockchain application development for the ASI:Chain.

Proposal Description

Our Team

Kyoto Digital fields a team with rare expertise at the intersection of multi-agent systems, blockchain development, and competitive AI. We have built agent competition platforms, deployed autonomous trading systems, and contributed to decentralized governance research. Our combined 30 years of experience span game-theoretic AI, Rholang development, and developer platform engineering — the exact skillset AgentArena demands.

Company Name (if applicable)

Kyoto Digital

Project details

Ethereum's MEV problem cost billions because agents were never stress-tested against adversaries before deployment. ASI:Chain faces the same risk at a larger scale — agents here will compete for resources, governance power, and economic influence, not just transaction ordering. Yet today, developers write agent logic, deploy to DevNet, and hope.

 

AgentArena fixes this by making competition the development environment.

 

HOW IT WORKS

 

Developers submit agents as Rholang processes or MeTTa programs. The Arena Engine deploys them to ASI:Chain DevNet — real BlockDAG consensus, real CBC Casper finality, not a simulator — and runs them in competitive economic scenarios:

 

• Market Making — agents provide liquidity under volatility shocks and adversarial arbitrage

• Governance — strategic voting under manipulation and coalition dynamics

• Resource Allocation — bidding for Singularity Compute capacity

• Escrow — multi-party negotiation with dispute resolution and collusion
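As an illustrative sketch of the competition loop, the following Python toy (the real agents would run as Rholang processes on DevNet, and every name here — `ScenarioConfig`, `run_scenario`, the scoring formula — is hypothetical) shows how a Market Making round might shock a price and score agents on captured spread versus adverse selection:

```python
from dataclasses import dataclass
from typing import Callable, Dict
import random

@dataclass
class ScenarioConfig:
    """Hypothetical arena scenario; field names are illustrative only."""
    arena: str                 # e.g. "market_making"
    rounds: int = 100
    volatility: float = 0.02   # per-round price shock magnitude
    seed: int = 0

def run_scenario(cfg: ScenarioConfig,
                 agents: Dict[str, Callable[[float], float]]) -> Dict[str, float]:
    """Toy market-making round loop: each agent quotes a spread around a
    randomly shocked mid-price and is scored on expected captured spread
    minus an inventory-risk penalty. A stand-in for on-chain execution,
    not the actual DevNet engine."""
    rng = random.Random(cfg.seed)
    price = 100.0
    scores = {name: 0.0 for name in agents}
    for _ in range(cfg.rounds):
        price *= 1 + rng.gauss(0, cfg.volatility)   # volatility shock
        for name, quote_spread in agents.items():
            spread = quote_spread(price)
            # wider spread -> fewer fills; tighter -> more adverse selection
            fill_prob = max(0.0, 1.0 - spread / price)
            scores[name] += fill_prob * spread - cfg.volatility * price * (1 - fill_prob)
    return scores
```

Under a fixed seed the run is deterministic, which is the property that makes leaderboard scoring and replays reproducible.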

 

After each round, an AI Judge running on Singularity Compute analyzes what happened: classifying strategies, detecting exploits with exact mechanisms, flagging emergent phenomena like price manipulation rings, and generating code-level recommendations referencing MeTTa type specs.
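To make the judge's output concrete, here is a minimal sketch of what one verdict could look like — the `JudgeReport` fields and the cancel-ratio heuristic are assumptions for illustration; the actual judge is an AI analysis pipeline on Singularity Compute, not a hand-written rule:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JudgeReport:
    """Illustrative shape of one AI Judge verdict (field names assumed)."""
    agent_id: str
    strategy_label: str                  # e.g. "passive_mm", "quote_stuffing"
    exploits: List[str] = field(default_factory=list)
    emergent_flags: List[str] = field(default_factory=list)
    recommendations: List[str] = field(default_factory=list)

def judge_round(agent_id: str, actions: List[dict]) -> JudgeReport:
    """Toy heuristic judge: classifies an agent by its observed action mix."""
    cancels = sum(1 for a in actions if a.get("type") == "cancel")
    quotes = sum(1 for a in actions if a.get("type") == "quote")
    report = JudgeReport(agent_id=agent_id, strategy_label="unknown")
    if quotes and cancels / max(quotes, 1) > 0.8:
        report.strategy_label = "quote_stuffing"
        report.exploits.append("high cancel ratio suggests spoofing pressure")
        report.recommendations.append("rate-limit cancels in the agent spec")
    elif quotes:
        report.strategy_label = "passive_mm"
    return report
```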

 

Developers then replay every decision their agent made — what it saw, what it chose, what happened next — and iterate. The cycle is: specify → compete → analyze → improve.

 

WHY THIS MATTERS

 

Unit tests cannot prepare agents for adversarial economics. You cannot write a test for "does my agent survive an attacker I haven't imagined." Emergent behavior only appears when populations interact under real incentives. AgentArena provides that pressure — and because it runs on actual DevNet, agents deploy to MainNet with zero code changes.

 

MeTTa's type system enables unique capabilities: type-checked strategies (proving an agent never bids more than its balance) and cross-agent analysis at the type level (the judge can formally reason about how two agents' typed interfaces interact).
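For intuition, the "never bids more than its balance" invariant can be stated as a dynamic check over a trace — shown here in Python with an assumed `bid`/`balance` trace format; the point of MeTTa's type system is that the equivalent property would be proven statically, before the agent ever runs:

```python
def check_bid_invariant(trace: list[dict]) -> bool:
    """Dynamic check of the invariant a typed spec would prove statically:
    the agent never bids more than its current balance. The trace format
    (dicts with 'bid' and 'balance' keys) is assumed for illustration."""
    return all(step["bid"] <= step["balance"] for step in trace)
```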

 

Target: 80 active developers and 200+ unique agents across 4 arena types within 6 months.

Open Source Licensing

MIT - Massachusetts Institute of Technology License

Background & Experience

Our team lead designed a multi-agent simulation platform used by 3 blockchain protocols to test tokenomic models before MainNet launch. Our AI engineer built adversarial agent generators for DeFi security auditing, identifying $2.3M in potential exploits across 15 protocols. Our platform engineer developed competitive coding platforms serving 8,000+ developers with real-time match execution. We contributed to MeTTa agent specification discussions in the Hyperon community and authored a research paper on typed agent communication protocols at AAMAS 2024.

Links and references

https://foldspace.vercel.app/

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period of the RFP that this proposal is applied to.

  • Total Milestones

    3

  • Total Budget

    $50,000 USD

  • Last Updated

    13 Feb 2026

Milestone 1 - M1: Arena Engine + DevNet Integration

Description

Build the Arena Engine: scenario definitions, agent deployment to DevNet as Rholang processes, real-time competition execution with BlockDAG consensus, scoring and ranking. Two arena types: Market Making and Escrow. MeTTa Agent Specification Format and spec-to-Rholang compiler. Agent lifecycle management.

Deliverables

1. Arena Engine managing agent deployment, scenario execution, and result collection on DevNet.
2. Market Making Arena with configurable volatility and scoring.
3. Escrow Arena with dispute resolution and collusion scenarios.
4. MeTTa Agent Specification Format and spec-to-Rholang compiler.
5. Scoring/ranking system with persistent leaderboard.
6. Test suite: 20 scenario configurations.

Budget

$20,000 USD

Success Criterion

1. Manages 10+ simultaneous agents on DevNet without crashes across 50 runs.
2. Market Making top agent scores 30%+ above a random baseline.
3. Escrow handles disputes correctly in 90%+ of 30 test cases.
4. MeTTa-to-Rholang compiler produces valid output for 85%+ of a 25-spec test suite.
5. All code open-sourced under the MIT license.

Milestone 2 - M2: AI Judge + Adversarial Agent Gen

Description

AI Judge system: strategy classification, exploit detection, emergent behavior flagging, improvement recommendations. Adversarial agent generator creating AI opponents (front-running, manipulation, collusion). Both on Singularity Compute. Add Governance Arena and Resource Allocation Arena.

Deliverables

1. AI Judge: strategy classification, exploit detection, emergent behavior detection.
2. Improvement recommendations referencing MeTTa specs.
3. Adversarial agent generator with multiple attack strategies.
4. Governance Arena and Resource Allocation Arena.
5. Singularity Compute deployment for all AI workloads.

Budget

$18,000 USD

Success Criterion

1. Strategy classification reaches 80%+ agreement with human experts.
2. Exploit detection finds 75%+ of injected exploits.
3. Adversarial agents defeat naive baselines in 80%+ of competitions.
4. Singularity Compute inference completes in under 8 s per judge cycle (p95).

Milestone 3 - M3: Agent Studio + Replay + Beta Launch

Description

Web-based Agent Studio with MeTTa spec editor, type checking, Rholang compilation, one-click arena submission, step-by-step replay viewer. Beta launch with docs, 3 tutorial agents, leaderboard, community forum.

Deliverables

1. Agent Studio: MeTTa editor with type checking and Rholang panel.
2. Replay Viewer: step-by-step decisions with economic context.
3. 3 tutorial agents with guided exercises.
4. Persistent leaderboard and documentation site.
5. Public beta launch.

Budget

$12,000 USD

Success Criterion

1. New developers submit their first agent within 45 minutes (4/5 testers succeed).
2. Strategy iteration cycle under 5 minutes.
3. 25+ registered developers and 50+ agent submissions within 3 weeks.
4. User satisfaction of 4.0+/5.0 from the beta survey.
5. All code MIT licensed.

Join the Discussion (0)

Expert Ratings

Reviews & Ratings

    No Reviews Available

    Check back later by refreshing the page.
