Building Agent-Based Models for DeFi: A Practical Bottom-Up Guide
How we use ABMs to explore complex systems like MEV extraction, starting from first principles
1. Introduction
Agent-Based Modeling (ABM) is an increasingly valuable tool for studying complex decentralized systems, particularly in DeFi, where markets are driven by heterogeneous agents and emergent behavior. At POL Finance, ABMs are a core part of the research methodology for simulating mechanisms such as automated market makers, lending protocols, and most recently, MEV (Maximal Extractable Value) dynamics.
This post outlines a practical, step-by-step approach to building ABMs, based on the active development of an MEV simulation currently in progress at POL. Rather than diving into highly theoretical discussions, the focus here is on how to structure a minimal yet functional ABM, following the so-called “bottom-up” modeling principles from the book An Introduction to Agent-Based Modeling by Wilensky & Rand.
By starting simple and iterating with agent behaviors, interactions, and environment structures, the POL team is able to use ABMs as a flexible exploratory framework, helping both define key research questions and analyze emergent outcomes.
This guide aims to help protocol designers, researchers, and DeFi analysts adopt a similar modeling mindset when exploring their own systems.
2. Top-Down vs Bottom-Up in ABM Design
Agent-Based Modeling allows for multiple entry points into the modeling process. At POL Finance, the team often alternates between two complementary strategies: top-down and bottom-up design.
In a top-down approach, the process begins with a specific question or real-world phenomenon. From there, agents and rules are designed to reflect that context. The goal is clear from the outset, and the model is gradually refined to the level of detail required to answer the question effectively.
In contrast, the bottom-up approach starts without a fixed hypothesis. It begins with a domain or mechanism of interest and constructs a minimal prototype, often just a few agent types and rules. The model evolves as it runs. Questions emerge through experimentation, and both the model and its purpose co-develop over time.
In the development of the MEV ABM at POL, the team followed a predominantly bottom-up strategy. The goal wasn't predefined; instead, the model started with basic components of a DeFi transaction environment, and through iteration, began to surface interesting dynamics related to MEV extraction and agent incentives.
This flexibility is one of ABM’s greatest strengths. Whether starting with a clear research question or just curiosity about a system, the methodology supports both paths, and even combinations of the two.
In the next section, we'll outline the concrete steps followed at POL when building agent-based models from scratch, following the practical design process outlined by Wilensky and Rand.
3. Key Steps for Designing an ABM
At POL Finance, the process of designing an agent-based model follows a flexible but structured flow. This section outlines the main components of that process, contextualized through the MEV simulation currently in development.
3.1 Choose a Guiding Question or Domain
In some cases, modeling begins with a clear question. In others, it starts from a domain of interest. In the MEV model, the team started with some questions as a guide:
How do different agents compete for MEV opportunities under various execution conditions?
Which blockchain agents and configurations generate higher levels of MEV?
How do changes in the fee market (priority gas auction, latency, tx expiration) impact the magnitude and distribution of MEV?
These questions were not rigid; they evolved as the model took shape.
3.2 Find a Concrete Reference or Mechanism
Before writing any code, it helps to anchor the model in real-world systems or research. For MEV, the team drew from protocol documentation, current execution models (e.g., PBS, ePBS), and insights from prior ABMs related to auctions or validator behavior.
This step ensures the model is not built in abstraction but grounded in known dynamics that can be observed, tested, or reproduced.
3.3 Define Agent Types
A foundational step is identifying which types of agents will participate in the system. In the MEV ABM developed at POL, the core agent types include:
Searchers: agents that identify profitable transaction bundles from the mempool.
Builders: agents that aggregate bundles and construct candidate blocks.
Proposers: agents that select blocks submitted by builders and propose them to the chain.
Validators: (optional) agents that act under different roles depending on the protocol being modeled (e.g., as delegates, committees, or reward distributors).
Each agent type captures a distinct role in the block-building pipeline, reflecting the modular execution layers observed in Ethereum post-Merge and in PBS (Proposer-Builder Separation) inspired designs.
3.4 Choose Agent Properties
Each agent is defined by a minimal set of attributes relevant to its function. For example:
Searchers: bidding strategy, bundle success rate, capital constraints
Builders: bundle selection strategy, inclusion criteria, latency
Proposers: fee preferences, selection policy, MEV policy
These properties don’t need to be realistic or complete from the start. The goal is to create just enough structure for each agent to act meaningfully in the model.
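The agent types and properties above can be sketched as plain dataclasses. This is a minimal, illustrative skeleton, not the POL implementation: the field names and default values (bid margin, latency, selection policy) are assumptions chosen to show "just enough structure" for each role.

```python
from dataclasses import dataclass

# Illustrative agent skeletons for a PBS-style pipeline.
# All fields and defaults are assumptions for this sketch.

@dataclass
class Searcher:
    bid_margin: float = 0.5   # share of expected profit bid away
    capital: float = 100.0    # capital constraint
    submissions: int = 0      # bookkeeping for inclusion-rate metrics

@dataclass
class Builder:
    latency: float = 0.0      # delay before bundles are seen
    max_bundles: int = 3      # inclusion criterion / block-size proxy

@dataclass
class Proposer:
    policy: str = "highest_bid"  # selection policy
```

Starting from dataclasses keeps the attribute set explicit and easy to extend as the model grows.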
3.5 Define Agent Behavior
This step defines what agents do during each time step. In the MEV model:
Searchers scan for profitable opportunities and submit bundles with a bid.
Builders receive bundles and construct blocks by selecting the most profitable combination.
Proposers evaluate builder submissions and choose one based on policies like highest bid or fairness.
Agent behavior is governed by simple decision rules, often using heuristics, randomization, or thresholds. Over time, these behaviors can be extended with reinforcement learning, strategy switching, or richer memory.
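As a concrete example of such heuristics, the three behaviors could be written as simple per-step functions. The bundle schema and the fixed-margin bidding rule below are assumptions for illustration, not the actual POL model.

```python
# Toy decision rules for each role; dict schemas and the fixed-margin
# bid heuristic are assumptions for this sketch.

def searcher_step(mempool, margin=0.5):
    """Pick the most profitable opportunity and bid away a profit share."""
    opportunities = [tx for tx in mempool if tx["profit"] > 0]
    if not opportunities:
        return None
    best = max(opportunities, key=lambda tx: tx["profit"])
    return {"txs": [best], "bid": margin * best["profit"]}

def builder_step(bundles, max_bundles=3):
    """Greedily pack the highest-bidding bundles into a candidate block."""
    chosen = sorted(bundles, key=lambda b: b["bid"], reverse=True)[:max_bundles]
    return {"bundles": chosen, "bid": sum(b["bid"] for b in chosen)}

def proposer_step(blocks):
    """Highest-bid policy: select the block paying the most."""
    return max(blocks, key=lambda b: b["bid"]) if blocks else None
```

Each rule fits in a few lines, which makes it easy to later swap in strategy switching or learned policies without touching the rest of the model.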
3.6 Design the Time Step and Execution Logic
ABMs operate in discrete time steps. Each step needs a clear schedule and order of execution. In the MEV ABM:
Searchers submit bundles.
Builders construct blocks and submit to proposers.
Proposers choose which block to include.
Optionally: environment updates (rewards, slashing, chain state).
This schedule mimics the sequential structure of real-world execution pipelines, particularly in systems with auction-based MEV markets.
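One way to keep that ordering explicit is a scheduler that runs the phases in sequence. This sketch takes the phase behaviors as plain functions (names and signatures are illustrative assumptions), so the only thing it fixes is the execution order:

```python
# Explicit per-step schedule; phase functions are passed in, so this
# sketch fixes only the ordering (names are illustrative).

def run_step(searcher_fns, builder_fns, proposer_fn, mempool):
    # 1. Searchers submit bundles (None means no opportunity found).
    bundles = [f(mempool) for f in searcher_fns]
    bundles = [b for b in bundles if b is not None]
    # 2. Builders construct blocks from the bundle pool.
    blocks = [f(bundles) for f in builder_fns]
    # 3. The proposer chooses which block to include; environment
    #    updates (rewards, slashing, chain state) would follow here.
    return proposer_fn(blocks)
```

Keeping the schedule in one function makes it trivial to reorder phases or insert latency between them when experimenting.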
3.7 Add Environmental or Static Components
Not everything in the model needs to be an agent. Some parts are static but essential. For instance:
Mempool: where searchers access transactions.
Gas limits: constraints for block size.
Reward functions: mechanisms for paying agents.
These components define the rules of the environment and constrain or influence agent behavior.
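A minimal environment sketch might bundle these static components together. The gas limit and the builder/proposer reward split below are made-up numbers for illustration only:

```python
from collections import deque

# Static environment components; the gas limit and reward split
# are illustrative values, not protocol constants.

class Environment:
    def __init__(self, gas_limit=30_000_000):
        self.mempool = deque()     # transactions awaiting inclusion
        self.gas_limit = gas_limit  # block-size constraint

    def add_tx(self, tx):
        self.mempool.append(tx)

    def fits(self, block_gas):
        """Check a candidate block against the gas limit."""
        return block_gas <= self.gas_limit

def reward(block_value, builder_share=0.2):
    """Split a block's value between builder and proposer."""
    builder_cut = builder_share * block_value
    return builder_cut, block_value - builder_cut
```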
3.8 Choose Parameters and Output Metrics
From the beginning, it's useful to define:
Parameters to vary: number of agents, max bundles per block, latency delays, etc.
Metrics to track: total MEV extracted, inclusion rate per agent type, block efficiency, proposer fairness, etc.
These metrics help compare protocol designs, test hypotheses, and evaluate outcomes statistically.
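In practice this can be as simple as aggregating per-run records into summary statistics. The record fields below (`mev`, `included`, `submitted`) are an assumed schema for the sketch:

```python
from statistics import mean

# Aggregate per-run records into summary metrics.
# The record schema (mev / included / submitted) is an assumption.

def summarize(runs):
    return {
        "total_mev": sum(r["mev"] for r in runs),
        "mean_mev_per_run": mean(r["mev"] for r in runs),
        "inclusion_rate": mean(r["included"] / r["submitted"] for r in runs),
    }
```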
In the next section, we’ll explore how diagrams like entity-relationship charts (ERDs) can help organize agent interactions and visualize system structure before any code is written.
4. From ERD to ABM Logic
Before jumping into code, it often helps to visualize the structure of the system and how agents interact. At POL Finance, one of the most useful tools for this is an Entity Relationship Diagram (ERD), a visual map that connects agent types, their properties, and the key relationships between them.
While ERDs are traditionally used in database design, they translate surprisingly well to ABMs. In practice, they help answer questions like:
Which agents interact directly?
What resources or data are shared across agent types?
What are the key attributes or states that drive decisions?
Even in models that start small, like the early versions of the MEV ABM, sketching an ERD helped clarify:
How bundles move from searchers to builders
How builders submit blocks to proposers
What criteria proposers use to select blocks
How rewards and feedback flow back into the system
These diagrams don’t have to be perfect or complete; they're just tools to make agent logic explicit and to reduce confusion when iterating on the model.
5. Instrumenting the Simulation
Designing an ABM is only half the story. Once agents are running and interacting, it becomes essential to track, analyze, and interpret what’s happening during and after the simulation.
At POL Finance, instrumenting an ABM involves adding lightweight but powerful tooling to capture agent behaviors, performance, and systemic metrics over time.
Here’s how that plays out in the MEV ABM project:
What to Measure
Choosing the right metrics depends on the question, or in the case of bottom-up exploration, on what patterns start to emerge. In MEV-related models, some core outputs include:
Total MEV extracted (by searchers or builders)
Inclusion rate of bundles per searcher
Block efficiency (e.g., value density)
Proposer selection bias (who gets selected and why)
Distribution of rewards across agent types
These outputs are collected per simulation run, and optionally per time step, enabling both aggregate analysis and temporal evolution studies.
Logging and State Tracking
To make debugging and post-hoc analysis easier, the model tracks:
Agent-specific states (e.g., current strategy, last reward, number of submissions)
System-level states (e.g., mempool size, number of valid bundles, average gas used)
Event logs (who submitted what, which block was accepted, etc.)
These can be saved as CSV files or JSON logs, and later visualized using Python notebooks or interactive dashboards.
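A lightweight event log is enough to support both export formats. This is a generic sketch (the event schema is an assumption), writing to in-memory strings so it stays self-contained; in practice the same code would write to files:

```python
import csv
import io
import json

# Minimal structured event log with CSV/JSON export.
# The event schema (step / kind / extra fields) is an assumption.

class EventLog:
    def __init__(self):
        self.events = []

    def record(self, step, kind, **data):
        """Append one structured event for later analysis."""
        self.events.append({"step": step, "kind": kind, **data})

    def to_json(self):
        return json.dumps(self.events)

    def to_csv(self, fieldnames):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(self.events)
        return buf.getvalue()
```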
Reproducibility and Exploration
Instrumenting the model also supports multi-seed runs, executing the same configuration with different random seeds to observe variability and emergent trends.
This step is critical in exploratory ABMs, as it reveals whether behaviors are robust or just artifacts of a particular seed.
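A multi-seed harness can be only a few lines. Here the toy `simulate` function stands in for a full ABM run (hypothetical; the real model entry point would replace it), and seeding a local `random.Random` keeps each run reproducible:

```python
import random
from statistics import mean, stdev

# Toy stand-in for a full ABM run: a seeded RNG makes each
# configuration reproducible. `simulate` is hypothetical.

def simulate(seed, steps=100):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(steps))  # stand-in for total MEV

def multi_seed(seeds):
    """Run the same configuration across seeds and summarize variability."""
    results = [simulate(s) for s in seeds]
    return {"mean": mean(results), "stdev": stdev(results)}
```

Comparing the spread across seeds against the effect of a parameter change is a quick test of whether an observed pattern is robust or a single-seed artifact.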
In the next section, we’ll share key reflections and lessons learned from building the MEV ABM, including what worked, what didn’t, and what’s next.
6. Reflections and Lessons Learned
Building the MEV ABM at POL Finance has provided both technical and conceptual insights, not only into the mechanics of MEV extraction, but also into the modeling process itself.
Start Simple, Learn Fast
The most important principle reaffirmed during development was: start simple. Even when modeling a complex system like MEV markets, beginning with minimal agent behavior made it easier to spot bugs, understand dynamics, and surface meaningful questions early on.
Many of the most valuable insights came not from answering predefined questions, but from observing what agents did in early versions and adjusting the model accordingly.
Behavior > Complexity
Rather than overfitting agents with too many rules or parameters, focusing on a few key behaviors led to more interpretable outcomes. For example, just defining how builders prioritize bundles under different proposer policies was enough to produce emergent competitive behavior among searchers.
In ABMs, rich dynamics often come from simple rules interacting, not from increasing code complexity.
Code is Not the Model
Another key realization: the model is not just the code. Diagrams, logs, and structured documentation (like the ERD or the scheduling logic) were just as essential for understanding what was happening and communicating it to others.
Investing time in visual clarity and modular design paid off when iterating, debugging, and planning next steps.
The Model Shapes the Question
In bottom-up modeling, it’s common for the model to inform the question, not the other way around. As the MEV simulation evolved, the team began to explore new dimensions like:
How do proposer policies affect fairness?
How does searcher competition impact builder strategies?
How could different auction mechanisms be tested within the same model?
This feedback loop between modeling and questioning is where ABM becomes a true exploratory tool.