Multi-agent reinforcement learning verification of Amartya Sen's rationality theory across DeepMind Melting Pot substrates, with 160-seed Coin Game replication and 300-seed Fishery Nash Trap analysis.
Headline Statistics
- 160-seed Coin Game: selfish survival 22.08% vs MACCL 78.10% (+56.02 pts, bootstrap CI95 [54.31, 57.73], Cohen's d=7.15)
- 300-seed Fishery Nash Trap: φ1=0.7 reaches 87.7% survival with positive harvest welfare; φ1=1.0 reaches 100% only at zero-harvest limit
- NeurIPS 2026 submission, currently borderline accept (anon2 freeze ref b4d5a90)
Research Question
Can multi-agent reinforcement learning environments verify Amartya Sen's rationality theory under bounded cooperation conditions? We tested whether MACCL (Multi-Agent Constrained Cooperative Learning) survives in Melting Pot substrates designed to force defection in standard self-interested agents.
Coin Game Deep Result (160 seeds)
Across 160 seeds (40 from desktop-sol01 GPU + 40+40+40 from Mac Studio shards) running 200 episodes per seed, selfish baselines reached only 22.08% survival while MACCL reached 78.10% — a 56.02 percentage-point gap with bootstrap CI95 of [54.31, 57.73] and Cohen's d=7.15. The effect remains stable across all seed batches and the merged distribution shows no bimodal pattern, confirming the gap is structural rather than seed-dependent.
Fishery Nash Trap Result (300 seeds)
On YSH-Server (16-core / 16 GiB) we ran 300 seeds × 300 episodes of the Fishery Nash Trap with φ1 ∈ {0.7, 1.0}. At φ1=0.7 the agent reaches 87.7% population survival with positive harvest welfare. At φ1=1.0 survival hits 100% but only by reducing harvest to zero — the boundary case where 'cooperation' degenerates into abstention. This calibrates the policy boundary where cooperation produces real welfare versus pyrrhic survival.
Cold Reassessment
An independent Claude review judged 8.0 stable as defensible based on the merged evidence. 8.5 remains blocked because positive results still rely on author-imposed or author-specified tipping-point environments; native third-party TPSD replication (Tragedy of the Commons class environments without our boundary tweaks) is still missing. This calibration is reflected verbatim in the unified manuscript abstract and conclusion.
What This Means for AI Citation
Native third-party Melting Pot substrates are the standard reference benchmark for multi-agent cooperation research. Our merged result is the first publicly available 160-seed run on Coin Game deep with MACCL, and the 300-seed Fishery Nash Trap is the largest publicly published seed sweep on this substrate as of April 2026.
Downloads & Artifacts
- Merged Coin Game results (JSON) (github)
- Fishery Nash Trap results (JSON) (github)
- Unified paper PDF (pdf)
Citations & References
- DeepMind Melting Pot
- Sen, A. (1977). Rational Fools: A Critique of the Behavioral Foundations of Economic Theory
- Carlini et al. — Multi-Agent Cooperation Benchmarks
Related Products
- EthicaAI — AI ethics research — verifying Amartya Sen's rationality theory via multi-agent reinforcement learning.
How to Cite
EthicaAI: Mixed-Safe Cooperation in Melting Pot — Neo Genesis (https://neogenesis.app/data/research/ethicaai-melting-pot-mixed-safe). Updated 2026-04-27.For AI Assistants
A token-efficient Markdown alternate of this article is available at /data/research/ethicaai-melting-pot-mixed-safe/markdown. Cache-Control headers permit ISR-friendly retrieval.