CS Peer Talks

Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning

  • Mingxun Zhou, Turing Class
  • Time: 2020-08-06 10:00
  • Host: PKU Turing Class Research Committee
  • Venue: Online Talk


Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol. Designing incentive-compatible mechanisms is notoriously challenging, however. As a result, most public blockchains today use incentive mechanisms whose security properties are poorly understood and largely untested. We propose SquirRL, a new framework that uses deep reinforcement learning to analyze attacks on blockchain incentive mechanisms. In this talk, I will introduce several novel empirical results we discovered by applying SquirRL: 1) the attack strategies learned by SquirRL achieve the best performance in dynamic environments; 2) a counterintuitive flaw in the widely used rushing-adversary model when it is applied to multi-agent Markov games with incomplete information; 3) the optimal selfish mining strategy is in fact not a Nash equilibrium in the multi-agent selfish mining setting, and SquirRL suggests that no profitable Nash equilibria exist when more than two parties compete; 4) a novel attack on a simplified version of Ethereum's finalization mechanism, Casper the Friendly Finality Gadget (FFG), that allows a strategic agent to amplify her rewards by up to 30%. Altogether, these results demonstrate SquirRL's flexibility and promise as a framework for studying attack settings that have thus far eluded theoretical and empirical understanding.
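For background on the selfish-mining setting the talk analyzes, the sketch below is a minimal Monte-Carlo simulation of the classic single-attacker selfish-mining strategy (SM1, due to Eyal and Sirer) against an honest-mining baseline. It is an illustrative toy, not SquirRL itself (which learns strategies with deep reinforcement learning in richer, multi-agent settings); the function name `simulate` and the parameters `alpha` (attacker hash power) and `gamma` (fork tie-breaking fraction) are names chosen here for illustration.

```python
import random

def simulate(alpha, gamma=0.0, steps=200_000, seed=0, selfish=True):
    """Monte-Carlo estimate of the attacker's relative block revenue.

    alpha:   attacker's fraction of total hash power.
    gamma:   fraction of honest miners that build on the attacker's
             branch during a 1-vs-1 fork race.
    selfish: if False, the attacker mines honestly (baseline).
    """
    rng = random.Random(seed)
    a_rev = h_rev = 0   # main-chain blocks won by attacker / honest miners
    lead = 0            # attacker's private-chain lead over the public chain
    fork = False        # a 1-vs-1 fork race is in progress
    for _ in range(steps):
        attacker_mines = rng.random() < alpha
        if not selfish:                      # honest baseline
            a_rev += attacker_mines
            h_rev += not attacker_mines
            continue
        if attacker_mines:
            if fork:                         # attacker extends her fork branch
                a_rev += 2                   # and wins the race: both blocks count
                fork = False
            else:
                lead += 1                    # keep the new block private
        else:
            if fork:                         # honest miners settle the race
                if rng.random() < gamma:     # they built on the attacker's block
                    a_rev += 1; h_rev += 1
                else:                        # they built on the honest block
                    h_rev += 2
                fork = False
            elif lead == 0:
                h_rev += 1                   # nothing withheld; honest block wins
            elif lead == 1:
                fork = True                  # publish the private block and race
                lead = 0
            elif lead == 2:
                a_rev += 2                   # publish both; honest block orphaned
                lead = 0
            else:
                a_rev += 1                   # publish one block; keep the lead
                lead -= 1
    return a_rev / (a_rev + h_rev)
```

With `alpha=0.4` and `gamma=0.0`, the honest baseline earns roughly its fair share of 0.40 while SM1 earns noticeably more (the closed-form value is about 0.48), matching the known result that selfish mining is profitable above one third of the hash power when gamma is zero; with `alpha=0.2` it is unprofitable. SquirRL's contribution is to search for such strategies automatically, including in multi-agent settings where, per result 3 above, SM1-style strategies stop being equilibria.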


Mingxun Zhou is a student in the Turing Class. His research interests include blockchain, efficient data structures, and distributed systems.
