Since the launch of Uniswap V1 in November 2018, decentralized exchanges (DEXes) with liquidity pool-based automated market makers (AMMs) have become a dominant model in decentralized finance with total daily transaction volumes reaching well into the hundreds of millions of dollars. Although liquidity pool-based AMMs had been described prior to Uniswap, most notably in Hanson (2002), Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation, traditional financial markets have typically relied on central limit order books. In the context of decentralized finance, however, where on-chain state storage and computation are relatively expensive and slow, passive provisioning of liquidity pools that automatically ‘update’ after each order offers significant advantages.
In conjunction with the rapid growth of decentralized finance, a number of researchers have begun to publish theoretical analyses of liquidity pool-based AMMs. These papers initially focused on basic analysis of constant product market makers (CPMMs) (such as Uniswap V1/V2) then subsequently expanded in a variety of novel directions, such as analysis of constant function market makers (CFMMs) or range-bound concentrated liquidity.
This research on AMMs has begun to lay a solid theoretical foundation for formal theories of decentralized finance as well as pave the way for the development of a number of innovative protocols. However, despite the significance of these results, they remain poorly appreciated by the public at large. In this article, I would like to present a high-level, accessible overview of the major results in AMM research, with the hope that readers will come to a deeper understanding of the DEXes they use every day.
I will aim to highlight papers which I found particularly interesting or significant; however, this should not be read as a comprehensive survey of the field, and readers are welcome to note any omissions in the comments. My understanding of the literature may also be flawed, in which case the reader is invited to suggest corrections.
(Please note that I am currently looking for employment; see the end of this post for further details.)
The basic operating model of Uniswap V1/V2 markets is typically referred to as a “constant product market” (CPM), and most forks of Uniswap operate on the same model. To briefly review for the reader, these markets follow the principle that a token swap must leave the product of the two reserves unchanged. For example, consider an ETH/USDC liquidity pool, and denote by x the amount of ether in the pool and by y the amount of USDC. Only swaps which preserve the product xy = k are permitted (setting aside fees).
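As a minimal sketch of this rule (with hypothetical reserve and trade sizes), the output of a fee-less constant product swap can be computed directly from the invariant:

```python
def swap_output(x: float, y: float, dx: float) -> float:
    """Amount of token y received for depositing dx of token x,
    holding the product x * y invariant (no fees)."""
    k = x * y
    new_x = x + dx
    new_y = k / new_x
    return y - new_y

# Hypothetical pool: 100 ETH and 300,000 USDC (marginal price 3,000 USDC/ETH).
out = swap_output(100.0, 300_000.0, 1.0)
# The trader receives slightly less than 3,000 USDC due to price impact.
assert 0 < out < 3_000.0
```

Note that the received amount is strictly below the marginal price times the input, which is exactly the price impact discussed below.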
In late 2019, Angeris et al. published a basic analysis of the Uniswap constant product market as An analysis of Uniswap markets. Building on some previous work, e.g. that of Zhang et al. (2018), Angeris et al. (2019) presents formal definitions of constant product markets and produces a number of foundational results that formalize certain “intuitive” characteristics of their operation.
Suppose that we have a two-token market with reserves of size R and Q respectively. A trader who wants to trade in r tokens (of the same type as the reserves R) and receive q tokens in return must submit a trade which conforms to the following:
RQ = (R + γr)(Q – q) = k
In this case, the parameter γ represents the fee charged. For example, if γ = 0.99, a surcharge of ~1% is applied to the input token quantity r. The above expression is quite intuitive: a valid trade will increase the reserves of the input token R by quantity γr and decrease the reserves of the output token Q by quantity q, thereby increasing the marginal price of the output token. After the trade, the constant k is updated to (R + r)(Q – q); notice here that when γ < 1, i.e. when a fee is charged, the constant has actually increased, representing the fact that trading fees progressively increase the total size of the liquidity reserves.
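The fee mechanics can be sketched in a few lines (again with hypothetical numbers): the output q is solved from RQ = (R + γr)(Q – q), while the full input r is added to the reserves, so the post-trade product strictly exceeds the pre-trade product whenever γ < 1.

```python
def swap_output_with_fee(R: float, Q: float, r: float, gamma: float) -> float:
    """Output q satisfying R*Q = (R + gamma*r)*(Q - q)."""
    return Q - (R * Q) / (R + gamma * r)

# Hypothetical pool and trade: 100 / 300,000 reserves, 1% fee (gamma = 0.99).
R, Q, gamma = 100.0, 300_000.0, 0.99
r = 1.0
q = swap_output_with_fee(R, Q, r, gamma)

# Only gamma*r is used for pricing, but all of r enters the reserves,
# so the invariant "constant" grows when a fee is charged.
k_before = R * Q
k_after = (R + r) * (Q - q)
assert k_after > k_before
```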
Angeris et al. analyze opportunities for arbitrage, which are considered to originate from pricing differences between the CPM and a ‘reference market’ with infinite liquidity. If an arbitrageur can execute a sequence of trades such that they end up with more tokens of a particular type than they had before the trade, that is considered a successful arbitrage. It intuitively follows that when trading fees are zero, continual arbitrage keeps the CPM’s prices equivalent to the reference market’s prices. However, Angeris et al. show that when trading fees are nonzero, the CPM’s price for a token may fluctuate between a lower bound of γm and an upper bound of m/γ, where m is the reference market price of the token. When the trading fee f = 1 – γ is small, we have the approximation 1/γ ≈ 1 + f, so the CPM price is approximately bounded between (1 – f)m and (1 + f)m: a band around the reference price whose width is proportional to the fee.
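This no-arbitrage band is easy to express as a predicate; the sketch below (with hypothetical prices and a 0.3% fee, as charged by Uniswap V2) simply checks whether the pool’s marginal price has left the interval [γm, m/γ]:

```python
def arb_profitable(pool_price: float, m: float, gamma: float) -> bool:
    """An arbitrageur can profit only when the pool's marginal price
    leaves the band [gamma * m, m / gamma] around reference price m."""
    return not (gamma * m <= pool_price <= m / gamma)

# Hypothetical reference price of 3,000 with a 0.3% fee (gamma = 0.997).
m, gamma = 3_000.0, 0.997
assert not arb_profitable(2_995.0, m, gamma)   # inside the band: no arbitrage
assert arb_profitable(3_050.0, m, gamma)       # outside the band: profitable
```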
This paper also formally proves a number of intuitively “obvious” results:
Although the above should come as little surprise to any regular user of Uniswap V1/V2-based DEXes, it is nevertheless valuable for these results to be formally demonstrated, as they provide the foundation for the development of further results.
One final result which may be of particular interest to readers is a characterization of the returns of liquidity providers. Suppose that you are providing liquidity for a pool containing a risky token with price p against a standard numéraire (e.g., ETH or USDC). Angeris et al. show that your portfolio has a value proportional to sqrt(p). It’s easy to see that in the absence of trading fees or liquidity rewards, this is never better than holding the two assets individually, with equality only when the price returns to its initial value, a phenomenon we know as “impermanent loss.”
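The sqrt(p) result and the impermanent loss comparison can be verified numerically. In the sketch below (hypothetical initial reserves), arbitrage moves an xy = k pool at external price p to reserves x = sqrt(k/p), y = sqrt(kp), for a total value of 2·sqrt(kp); by the AM–GM inequality this never exceeds the value of simply holding the initial reserves:

```python
import math

def lp_value(k: float, p: float) -> float:
    """Value (in the numeraire) of an x*y = k pool after arbitrage at
    external price p: reserves become x = sqrt(k/p), y = sqrt(k*p)."""
    return 2.0 * math.sqrt(k * p)

def hodl_value(x0: float, y0: float, p: float) -> float:
    """Value of simply holding the initial reserves at price p."""
    return p * x0 + y0

# Hypothetical position opened at price p0 = 1 with x0 = y0 = 100.
x0 = y0 = 100.0
k = x0 * y0
for p in (0.25, 0.5, 1.0, 2.0, 4.0):
    # 2*sqrt(k*p) <= p*x0 + y0 by AM-GM, with equality only at p = p0.
    assert lp_value(k, p) <= hodl_value(x0, y0, p) + 1e-9
```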
In the xy = k constant product market, notice that the two token reserves have equal market values at all given times. Providing liquidity is therefore similar to holding a 50:50 portfolio of the two tokens, with continual rebalancing due to external arbitrage.
What happens if we instead consider x²y = k? In this setting, it turns out that external arbitrage will always rebalance the liquidity pool so that the market value of the tokens denoted by x is exactly twice that of the market value of the tokens denoted by y. The intuition behind this result is that one can think of the invariant as x²y = xxy = k, where in analogy to the xy = k example there are “two pools” for token x and one pool for token y, each of equivalent size.
This suggests a more general method for forming such invariants from weighted products of arbitrarily many assets, e.g., x²y³z = k. Indeed, this is precisely the strategy used by Balancer, as described in the whitepaper by Martinelli and Mushegian (2019), Balancer: A non-custodial portfolio manager, liquidity provider, and price sensor. It turns out that if you have n assets in a liquidity pool and require that trades preserve the constant product of the reserves xᵢ raised to the powers wᵢ, the resulting liquidity pool is effectively a continuously rebalanced portfolio of the n assets with constant weights proportional to the wᵢ. Users can therefore provide liquidity to Balancer-like AMMs as one way of holding “index funds” of assets continually rebalanced to prespecified proportions. AMMs which use such invariant functions are sometimes referred to as geometric mean market makers (G3Ms).
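The two-to-one rebalancing claim for x²y = k can be checked directly. In the sketch below (hypothetical invariant value), arbitrage equalizes the pool’s marginal price with the external price p; for this invariant the marginal price of x is 2y/x, so at equilibrium the value of the x reserves, p·x, is exactly twice the value of the y reserves:

```python
def g3m_reserves(k: float, p: float):
    """Equilibrium reserves of an x**2 * y = k pool when the external
    price of x (in units of y) is p. The marginal price of x is 2*y/x,
    so arbitrage enforces p = 2*k/x**3."""
    x = (2.0 * k / p) ** (1.0 / 3.0)
    y = k / x**2
    return x, y

# Hypothetical invariant k = 1,000, across several external prices.
for p in (0.5, 1.0, 3.0):
    x, y = g3m_reserves(1_000.0, p)
    assert abs(x**2 * y - 1_000.0) < 1e-6   # the invariant holds
    assert abs(p * x - 2.0 * y) < 1e-6      # value of x is twice value of y
```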
Unsurprisingly, we see that some of the largest Balancer V2 pools by pool value correspond to certain “natural” portfolios that many users plausibly want to hold, such as 50/50 BTC/ETH portfolios or DAI/USDC/USDT for diversifying stablecoin risk:
Taking this one step further, Beethoven X, a Balancer fork on Fantom, explicitly promotes a number of “index fund pools”:
For example, providing liquidity to the “Fantom Conservatory of Music” pool would give you weighted exposure to various components of the Fantom DeFi ecosystem:
Given the difficulty of maintaining a weighted portfolio by hand (owing to transaction fees and the 24/7, high-volatility nature of cryptocurrency prices), it is unsurprising that Balancer-like liquidity pools have attracted hundreds of millions of dollars of capital. At the same time, however, it is crucial to note that the “auto-rebalancing” nature of Balancer pools comes at a cost — by definition, arbitrageurs are extracting value from the liquidity pool. These losses can be mitigated by charging a swap fee, but the presence of a fee disincentivizes small rebalancing arbitrage swaps.
A recent article by Evans et al. (2021), Optimal Fees for Geometric Mean Market Makers, analyzes these tradeoffs in greater detail. Noting that classical portfolio theory (e.g., due to Merton) suggests that, in the presence of transaction costs, rebalancing should not occur within a “no-trade” region around the optimal portfolio weights, they apply methods from stochastic control to model arbitrage costs as a function of swap fees and determine the resulting effect on the liquidity provider’s portfolio value.
In general, one can model the LP-arbitrage dynamic as a two-player game. The liquidity provider seeks to minimize a penalty function which represents the “tracking error” of their liquidity shares relative to some desired set of portfolio weights, and the arbitrageur wishes to continually extract maximal value through trading against the liquidity provider’s assets. As seen with the simpler constant product market, G3Ms also have a “no-trade” region where the deviation of AMM prices from reference prices is sufficiently small that trading fees make it unprofitable for an arbitrageur to rebalance the liquidity pools, and the size of this “no-trade” region increases with the trading fee.
In their primary analysis, Evans et al. assume that the liquidity provider measures their portfolio’s tracking error with a penalty function which corresponds to a standard mean-variance preference over rates of return with a fixed risk aversion parameter. This penalty function is quadratic in the difference between the actual portfolio weights and the desired set of portfolio weights. When the G3M’s liquidity pool contains just two assets, a numéraire and a risky asset with price following geometric Brownian motion with growth rate μ and volatility σ, the liquidity provider’s penalty function can be numerically calculated as a function of μ and σ as well as trading fees γ.
Figure 1 from Evans et al. illustrates this calculation for some representative sets of parameters. We can make the following observations:
Importantly, the penalty function J is discontinuous at γ = 1, suggesting that fees should be as small as possible without being exactly zero. This result has direct practical implications for the design of Balancer-like AMMs as well as for liquidity providers to such AMMs.
The observation that liquidity pool-based AMMs continue to attract increasingly large amounts of capital provides a strong motivation for analysis of the setting where the majority of liquidity for an asset resides in on-chain AMMs, rather than with respect to an infinitely liquid “reference market” as in the previously described results. This is now the case for the vast majority of cryptocurrency assets, e.g. LUSD, FRAX, GNO, XMON, and so on.
In a 2020 publication, When does the tail wag the dog? Curvature and market making, Angeris et al. begin to develop a general framework for analysis of the curvature of a constant function market maker (CFMM)’s trading function, i.e., the invariant function of the liquidity reserves which is preserved across trades. By providing formal definitions of price sensitivity and liquidity depth, Angeris et al. successfully derive a number of heuristic results which explain e.g. the relative success of Curve for stablecoin swaps.
In general, Angeris et al. study the case of a two-asset AMM (pairing a risky traded asset with a numéraire) with a corresponding price impact function g which measures the change in the marginal price of the risky asset after trades of a given size. For a given trade of size s, the price impact function is considered μ-stable if the trade results in a change in marginal price of at most μs. Similarly, the price impact function is considered κ-liquid if the trade results in a change in marginal price of at least κs. It is often the case that these bounds may only hold for trades smaller than some size L.
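These definitions can be made concrete for the constant product case. The sketch below (hypothetical reserves and trade bound L) computes the change in marginal price after buying t units of the risky asset; since this impact function is convex and zero at t = 0, the pool is κ-liquid with κ equal to the slope at zero and μ-stable (for trades up to L) with μ equal to the average slope over [0, L]:

```python
def impact(R: float, Q: float, t: float) -> float:
    """Change in marginal price after buying t units of the risky asset
    from an x*y = k pool (risky reserve R, numeraire reserve Q)."""
    k = R * Q
    return k / (R - t) ** 2 - Q / R

# Hypothetical pool with initial marginal price 1 and trade bound L.
R, Q, L = 1_000.0, 1_000.0, 100.0
kappa = 2.0 * Q / R**2        # slope of impact at t = 0
mu = impact(R, Q, L) / L      # average slope over [0, L]
for t in (1.0, 10.0, 50.0, 100.0):
    # kappa-liquidity (lower bound) and mu-stability (upper bound):
    assert kappa * t <= impact(R, Q, t) <= mu * t + 1e-9
```

Doubling R and Q in this sketch shrinks both μ and κ, matching the intuition that deeper reserves flatten the effective curvature.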
In the simplified case of a Uniswap V1/V2 constant product market with two stablecoins, Angeris et al. show that μ and κ are both decreasing functions of the total size of reserves R as well as L. Intuitively, for a fixed price impact, the maximum possible trade size increases with the size of the reserves; correspondingly, the effective curvature of the AMM decreases as more liquidity is added. Although this result was previously demonstrated in An analysis of Uniswap markets, the curvature-based framework here extends nicely to more general settings.
In contrast, Curve, an AMM designed for optimized stablecoin swaps, has a more complex trading function,
where ∆, ∆’ represent the amounts traded out and in respectively, R, R’ represent their reserves, and α, β are fixed parameters. It can be seen that as β increases, the price impact for a trade of given size also increases:
Notably, if the total value of the reserves is given by P_V, then it can be shown that
corresponding to the maximum slippage a trader can expect for a given trade size. Notice that unlike the Uniswap constant product market, here the μ-stability is not limited to trades smaller than some threshold ∆ < L.
Looking again at the case of stablecoin swaps, trading is typically “uninformed”: users largely swap between stablecoins because the specific protocols they want to interact with have specific requirements about which stablecoins they accept. We therefore expect prices to exhibit mean-reverting tendencies, especially given the ease of redeeming stablecoins into their underlying backing assets and the presence of external peg-maintenance programs. Consequently, low-curvature AMMs should be preferable for stablecoin swaps, as they allow traders to make swaps with lower price impact.
In practice, as the following figure illustrates, Curve’s trading function has significantly lower curvature than either Uniswap or Balancer:
Beyond price impact alone, lower curvature is also beneficial to liquidity providers. For a given trading fee γ, Angeris et al. show that the liquidity provider is guaranteed to make a profit when the trade size ∆ is lower than a function which is linear in 1/μ. Therefore, as the curvature μ decreases, trades of increasingly large size become profitable to the liquidity provider. This drives more liquidity to the AMM, which in turn further reduces the price impact experienced by traders, and so on. Correspondingly, Curve is generally considered to be the dominant AMM for stablecoin liquidity.
A number of further analyses are described in the remainder of the paper, which I will briefly discuss. For example, Angeris et al. go on to consider the case of informed trading, where a trader has an informational edge which allows them to predict the next price update of an external market. In this case, it can be shown that the losses of the liquidity provider — or, conversely, the profits of the informed trader — are minimized with lower curvature. This is analogous to results in classic market microstructure, where market makers quoting prices on multiple markets adjust the shape and liquidity of the order book in response to adverse selection (the problem of trading with informed traders). When adverse selection is observed or predicted, market makers reduce the liquidity available to traders in a way that corresponds to increasing the curvature of a CFMM.
Further models are developed which describe price stability between multiple AMMs with different curvature, particularly in the context of yield farming, and dynamic hedging of liquidity provision through explicit computation of the first- and second-order Greeks for LP shares. These results supply a strong theoretical basis for future work and suggest a number of natural extensions. For example, although the analyses by Angeris et al. characterize the ‘desirability’ of different CFMMs depending on the nature of the underlying asset or the trading activity, a more general and heretofore unresolved question is how one might take a price process of some given theoretical form — which could be fit to empirical data — and design a trading function with optimal LP payoff characteristics.
Although the question of taking a particular price process and designing an optimized CFMM is not yet resolved, a number of intriguing results have been developed which characterize the correspondence between a desired portfolio value function and a CFMM’s trading function. For example, recall that for a Uniswap V1/V2 constant product market xy = k, the portfolio value is proportional to sqrt(p) (where p is the price of the risky asset denominated against the paired numéraire). Recent papers have attempted to solve the inverse of this problem, where the trading function of a CFMM is directly recovered from a given portfolio value function.
One such set of results was given in Angeris et al. (2021), Replicating Market Makers. Suppose that you are given a target portfolio value function V(c) which takes as input a vector of asset prices c with respect to a reference market and returns the total value of a portfolio constructed from those assets. If V is of the appropriate form — a concave, nonnegative, nondecreasing, 1-homogeneous function — then one can explicitly recover a corresponding trading function,
where R represents the reserves in the liquidity pool. It is interesting to note that the portfolio value function V and the trading function ψ are related via Fenchel conjugacy.
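One direction of this correspondence can be verified numerically. For a pool of reserves constrained to a trading function’s level set, arbitrage drives the reserves to the point that minimizes their value at the reference prices; for the xy = k curve this minimum has the closed form 2·sqrt(c₁c₂k), which recovers the sqrt(p) portfolio value mentioned earlier when c₂ = 1. The sketch below (hypothetical prices and invariant, crude grid search rather than a proper solver) checks this:

```python
import math

def min_portfolio_value(k: float, c1: float, c2: float) -> float:
    """Crudely minimize c1*x + c2*y over the curve x*y = k by grid search."""
    best = float("inf")
    for i in range(1, 200_000):
        x = i * 0.01
        best = min(best, c1 * x + c2 * k / x)
    return best

# Hypothetical invariant and reference prices.
k, c1, c2 = 100.0, 3.0, 12.0
# Closed form implied by the conjugacy relation: V(c) = 2*sqrt(c1*c2*k).
assert abs(min_portfolio_value(k, c1, c2) - 2.0 * math.sqrt(c1 * c2 * k)) < 1e-2
```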
Beyond recovering the trading functions of well-known CFMMs such as Uniswap and Balancer, one can apply this method to create options-like payoffs. For example, suppose that the liquidity provider has a risky asset and a riskless asset, and that they would like to provide liquidity with a payoff replicating that of a covered call on the risky asset with strike price K. It is straightforward to derive the corresponding trading function: the CFMM will only permit transitions between two specific ‘states’ of the reserves, namely, either one unit of the risky asset and zero units of the riskless asset or zero units of the risky asset and K units of the riskless asset. External arbitrage then ensures this is equivalent to the payoff of the desired covered call.
However, note that in the above example, providing liquidity to the covered call payoff essentially requires full collateralization, which is lacking in capital efficiency. Motivated by this, Angeris et al. derive the trading function required to replicate the price of a covered call as calculated via the Black-Scholes-Merton model. With strike price K, time to maturity τ, implied volatility σ, and reserves R, they find that the corresponding trading function is:
Examples of this trading function for certain parameter values are plotted below (the lines dictate the ‘level curves’ where the trading function is equal to 0):
It is interesting to note that the permissible set of reserves changes based on the time to expiry τ, which can also be seen in the rightmost figure above. In particular, to reproduce the decay of the option’s time value as it approaches expiry, increasingly large capital reserves are required. In a no-fee setting, this is not possible via arbitrage alone; however, if fees are captured from arbitrageurs, that may allow the CFMM to more accurately represent the time evolution of the price of a covered call. This approach is currently being developed by Primitive.
In general, a large number of papers, which cannot all be presented here in depth, have analyzed the problem of designing a CFMM with a given payoff function for liquidity providers:
It is plausible that the rapid pace of advances in designing ‘custom’ liquidity provider payoffs will lead to the development of novel DeFi building blocks, greater capital efficiency, and improved availability of liquidity.
The final category of AMM innovations we will discuss is the development of ‘concentrated liquidity’ positions, for example in Uniswap V3 and CrocSwap. These allow liquidity providers to provide liquidity within a specified price range. When the marginal price of the traded assets moves outside of that price range, their liquidity is effectively deactivated, limiting the degree of impermanent loss suffered by the liquidity provider but also preventing them from receiving any trading fees or liquidity incentives.
A series of articles by Guillaume Lambert have analyzed the payoff characteristics of Uniswap V3 concentrated liquidity positions. Intuitively, it is clear that a liquidity position deployed over the smallest possible interval (a single “tick”) behaves similarly to a fully collateralized covered call or cash-secured put, as the liquidity position is either 1 unit of the underlying risky asset (if the market price is below the tick’s lower bound) or equal to the strike price in the riskless unit (if the market price is above the tick’s upper bound). In the limiting case where the tick size is infinitesimally small and converges to a single value, that value is analogous to the strike price of the corresponding option.
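This payoff shape can be seen from the standard Uniswap V3 reserve formulas. The sketch below (hypothetical liquidity and range) evaluates a position’s no-fee value across prices: below the range the value is linear in p (the position is entirely the risky asset), and above the range it is flat (entirely the numéraire), which is exactly the covered-call-like profile:

```python
import math

def v3_position_value(L: float, p: float, pa: float, pb: float) -> float:
    """No-fee value (in the numeraire) of a Uniswap V3 position with
    liquidity L over price range [pa, pb], evaluated at price p."""
    if p <= pa:   # entirely converted into the risky asset
        x = L * (1.0 / math.sqrt(pa) - 1.0 / math.sqrt(pb))
        return p * x
    if p >= pb:   # entirely converted into the numeraire
        return L * (math.sqrt(pb) - math.sqrt(pa))
    x = L * (1.0 / math.sqrt(p) - 1.0 / math.sqrt(pb))
    y = L * (math.sqrt(p) - math.sqrt(pa))
    return p * x + y

# A hypothetical narrow range around 1.0, approximating a strike at 1.0.
pa, pb = 0.99, 1.01
# Above the range, the value is constant regardless of how high p goes:
assert v3_position_value(1.0, 2.0, pa, pb) == v3_position_value(1.0, 3.0, pa, pb)
```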
One interesting result from Lambert is as follows. One can decompose a concentrated liquidity position into the sum of a short put, priceable via Black-Scholes-Merton, and a ‘range component’ which approximates the option’s theta decay at certain times-to-expiry. The ‘range component’ can be put into closed form with the Feynman-Kac formula, and together the entire liquidity position can be priced as a single put option with a given time-to-expiry (calculated as a function of the range over which liquidity is provided). Is it preferable to hold the liquidity position to capture trading fees or to lend the position to an options buyer? Under certain assumptions, the latter is almost always preferred.
Another interesting perspective from which concentrated liquidity can be analyzed is that of liveness. To motivate this, recall that blockchains occasionally experience losses of liveness. On Solana, for example, an outage in September 2021 created extreme price differences between on-chain AMMs and off-chain markets. For central limit order book (CLOB) exchanges such as Serum, these price differences led to extreme confusion when liveness was restored: market makers rushed to cancel outdated orders at prices well above market, and lending platforms rushed to liquidate deeply undercollateralized positions. In total, the Serum order book experienced a 37% loss of liquidity when the Solana network was restored. In contrast, CFMM-based AMMs such as Saber experienced fewer disruptions, with a much lower 5% loss in liquidity.
To formally understand which AMMs perform better when liveness is lost, Chitra et al. (2021), How Liveness Separates CFMMs and Order Books, demonstrate an equivalence between pro-rata limit order books and concentrated liquidity AMMs. They then show that for a sudden price shock of size Δ, traditional CFMMs require only O(1) transactions to readjust prices, whereas concentrated liquidity AMMs (equivalently, CLOBs) require O(Δ) transactions for the same readjustment. In situations where liveness is lost and regained, system throughput is typically reduced and a rush to unwind or liquidate positions leads to the development of race conditions, hence market mechanisms that update prices in O(1) transactions are strongly preferred.
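As a toy illustration of this scaling argument (not the paper’s formal model), the sketch below counts the transactions needed to move prices after a shock: a single arbitrage swap repricing a CFMM, versus one fill per tick crossed in a tick-based book with a hypothetical tick size:

```python
def cpmm_swaps_to_reprice(p0: float, p1: float) -> int:
    """A single arbitrage trade moves a CFMM's marginal price from p0
    to any target price p1, so one transaction always suffices: O(1)."""
    return 1

def tick_book_swaps_to_reprice(p0: float, p1: float, tick: float) -> int:
    """A tick-based book needs roughly one fill per tick crossed
    between p0 and p1, so transactions scale with the shock: O(delta)."""
    return max(1, round(abs(p1 - p0) / tick))

# A hypothetical 50% price shock with a 0.5-unit tick size.
assert cpmm_swaps_to_reprice(100.0, 150.0) == 1
assert tick_book_swaps_to_reprice(100.0, 150.0, tick=0.5) == 100
```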
Despite this disadvantage, concentrated liquidity AMMs improve capital efficiency and predictability of returns for liquidity providers. Conceivably, novel AMMs which blend both ‘ambient’ liquidity and concentrated liquidity positions such as CrocSwap may allow users to capture the benefits of both models.
Given the strikingly rapid pace of innovation in decentralized finance, it is sometimes easy to forget that Uniswap itself is only slightly over three years old! Even so, it is apparent that AMMs, both in theory and in practice, have made great strides in the intervening time. As decentralized finance matures, I believe that research advances in AMM design will be an invaluable cornerstone driving the development of capital-efficient, on-chain ‘building blocks.’
Unfortunately, for narrative clarity and concision, a number of interesting topics have been omitted from this piece, such as:
I hope to discuss these topics, and more, in future articles.
Please note that I am currently looking for employment and am happy to receive inquiries through Twitter DMs: https://twitter.com/0xfbifemboy
I have a technical background with professional expertise in statistical methodology and machine learning. I am proficient in R, Julia, and Python, with moderate competency in C++, and am novice-level in Solidity and Rust (but hopefully learning fast). My ideal role is research-based with a theoretical or mathematical component but also with the opportunity to learn from more experienced DeFi developers.
I’m particularly interested in novel protocols which are pushing forward the cutting edge of on-chain finance from both theoretical and practical perspectives, but ultimately there isn’t much in crypto that I don’t find fascinating.