July 14, 2023

Blockchain in the age of LLMs

tldr

  • LLMs are good at mapping between human languages. Informal to formal, natural language to intent, intent to transactions, and vice versa. This makes LLMs an ideal interface that adapts itself to each user.
  • LLMs can help you discover your intent, communicate it on-chain, and negotiate more peer-to-peer matches (CoWs) between semantically similar intents.
  • LLMs will soon author most retail transactions. By bridging the UI gap for retail users, LLMs could drive mass adoption of blockchains.
  • AIs can use blockchain to access any human-specific resource and employ humans in entirely AI-managed projects.

Introduction

AI and blockchain don’t seem to have many touchpoints today. In this article we argue that this will soon change – and that it will have wide-reaching implications for blockchains and the teams building on them.

LLMs – ChatGPT in particular – need no introduction. But what exactly makes these models so useful? And how will they impact blockchains?

In this article we will show how LLMs can be a massive UX shortcut – lifting us from today’s painful UX to something far beyond what neobanks offer, thanks to LLMs’ adaptability, on-chain transparency and flexible intent matching.

We also cover how this will affect chain, protocol and wallet teams, and how it will change what winning looks like.

Finally, we will cover how blockchains are the ideal rails for LLMs to bank and to hire humans for anything LLMs can’t do themselves – and how LLMs are starting to manage human teams on-chain.

Taken together, AI could bring mass adoption to blockchains.

First, let’s find a simple way to understand LLMs.

What makes LLMs effective?

LLMs are models that translate between any type of human expression.

  A striking aspect of LLMs is that – at a high level – they are universal translation machines between any form of human expression. Like a two-way BabelFish. 

Not just natural language to natural language (English to Cantonese), but any mode of expression, such as:

  • Mathematical formula -> 10-year-old child’s English prose
  • Idea -> plan
  • Plan -> code base
  • Unclear idea -> Fitting questions for clarification -> Clear idea
  • Haiku -> Rap lyrics
  • Description -> Image (with the help of image models)

Second, they contain much of all recorded human thought. An Encyclopedia.

So one way to see LLMs is as an interactive Encyclopedia that you can talk to and get responses from in any form of expression.

Now, what does this mean for blockchains?

Why this is useful

Blockchains are made for developers

Blockchains are powerful formalized environments. They commoditize trust by providing unfakeable and decentralized historical records.

Blockchains are a young ecosystem and primarily written by and for developers. They’re exceptionally open, modular and well documented.

This makes them great for decentralized collaboration among developers. But not immediately great for retail use.

LLMs can bridge the gap to users

Now, LLMs commoditize the translation between any mode of expression. So LLMs can completely close the gap for retail users by translating natural language to blockchain transactions.

Thanks to open and well-documented interfaces, LLMs have everything they need to translate natural language intent into calldata.

LLMs could be the magic shortcut to kill bad UX. 
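
To make this concrete, here is a minimal sketch of the translation step. Both ask_llm and encode_swap_calldata are hypothetical stand-ins, not real APIs: the point is that the LLM only produces a structured intent, and a deterministic encoder turns that into a transaction.

  # Sketch only: `ask_llm` and `encode_swap_calldata` are hypothetical stand-ins.
  import json

  def ask_llm(prompt: str) -> str:
      # Stand-in for a real model call; returns a canned answer here.
      return ('{"action": "swap", "sell_token": "USDT", "buy_token": "ETH", '
              '"sell_amount": 1000, "max_slippage_bps": 30}')

  def encode_swap_calldata(intent: dict) -> str:
      # Stand-in: a real encoder would ABI-encode a router call deterministically.
      return "0x..."

  def natural_language_to_intent(request: str) -> dict:
      prompt = ("Translate the user request into a JSON intent with the keys "
                "action, sell_token, buy_token, sell_amount, max_slippage_bps.\n"
                f"Request: {request}")
      return json.loads(ask_llm(prompt))

  def intent_to_tx(intent: dict) -> dict:
      # The LLM only produces the structured intent; it never writes raw calldata.
      assert intent["action"] == "swap"
      return {"to": "0xRouter...", "data": encode_swap_calldata(intent), "value": 0}

  tx = intent_to_tx(natural_language_to_intent("Swap 1000 USDT to ETH, max 0.3% slippage"))

Keeping the LLM on the natural-language side and the encoding deterministic is what makes this a shortcut rather than a new risk.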

But translation alone is not enough – LLMs can also help us figure out what we want.

AI can help you discover and communicate your intent

LLMs can help us discover and express latent intent.

People talk about intents as if what we want and how to communicate it were obvious – as if we just needed an interface to communicate them.

But we'd argue that in most cases:

  • We’re unaware of our intent;

Or

  • We don't know how to turn what we want into a transaction.

LLMs can help you discover latent intent and express it effectively on-chain.

Discover your intent with the help of LLMs

How do you get from a vague desire (“invest sensibly”) to concrete transactions? Maybe through a very noisy and incomplete process of research, recommendations, understanding and analysis. And at the end of this process, you’ll probably only know a few of the best next steps.

LLMs can access all public data, profile your wallet and clarify your intents with the help of your feedback.

LLMs can make this process more effective and help you discover intents you would otherwise miss. They can

  1. Profile you with your on-chain data,
  2. Refine your intent with your input, and
  3. Do the research you would like to do.

Profiling alone will find a lot of intent you don’t have the time to define. 

Profile your wallet

LLMs can turn best practices into intents based on your history and holdings.

Most blockchains are transparent. Your trade history and token holdings are public and spell out a lot about you – your interests, risk tolerance and what you might do next.

LLMs can analyze your wallet and do what family offices do for their clients: make suggestions for you.

Here’s how LLMs can turn on-chain data into meaningful suggestions.

Clustering

Nothing is easier for AI than computing good embeddings – that is, finding other wallets with similar behavior, then making suggestions based on what those wallets do.
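
As a toy illustration – the feature vectors, addresses and holdings below are made up – clustering can be as simple as nearest neighbours over behavioural features:

  # Toy clustering sketch: wallet features, addresses and holdings are made up.
  import numpy as np

  # Hypothetical features: [stablecoin share, DeFi txs/month, NFT txs/month, avg trade size in ETH]
  wallets = {
      "you":   np.array([0.6, 12.0, 0.0, 0.5]),
      "0xaaa": np.array([0.5, 15.0, 1.0, 0.7]),
      "0xbbb": np.array([0.1, 2.0, 30.0, 3.0]),
  }
  holdings = {"0xaaa": {"USDC", "wstETH", "LUSD"}, "0xbbb": {"BAYC", "ETH"}}
  your_holdings = {"USDT", "ETH"}

  def cosine(a, b):
      return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

  neighbours = sorted((a for a in wallets if a != "you"),
                      key=lambda a: cosine(wallets["you"], wallets[a]),
                      reverse=True)
  closest = neighbours[0]
  print("most similar wallet:", closest)
  print("assets to look at:", holdings[closest] - your_holdings)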

But clustering alone is quite indiscriminate. You can use LLM magic to get much more customized results.

Customizing general advice

Advice on how to manage your assets is easy to find. But turning general advice – "diversify your holdings" – into a practical tx-by-tx strategy for your wallet takes effort.

LLMs can easily translate these general suggestions into specific intents customized for your wallet.

For example, an LLM can turn the general advice "Diversify your stablecoin holdings" into "You could split your USDT into USDT, USDC, LUSD and RAI relative to the log of their market cap".
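
A sketch of that arithmetic, with made-up market caps:

  # Illustrative only: market caps are made up, not live data.
  import math

  usdt_balance = 10_000
  market_caps = {"USDT": 83e9, "USDC": 27e9, "LUSD": 0.3e9, "RAI": 0.05e9}

  weights = {t: math.log(mc) for t, mc in market_caps.items()}
  total = sum(weights.values())
  allocation = {t: round(usdt_balance * w / total, 2) for t, w in weights.items()}
  print(allocation)  # split proportional to the log of each market cap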

But you’re hardly defined by just your wallet and what "experts" say.

User-guided discovery

The most valuable source to discover your intent is you.

LLMs can help guide you from your high-level goals to specific intents, informed by your wallet history, and your answers to a few questions.

However, in many cases what you want also depends on hard facts that you’d need to research (like current lending rates).

Outsource your research

LLMs can research structured and unstructured data to inform your intents.

LLMs aren’t limited to just your input. They can also research the things you wish you had time for: read your Twitter feed, collect up-to-date lending pool APYs, monitor for protocol launches, or figure out where to farm for an airdrop. AIs can automate this entirely for you.
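
A sketch of what such outsourced research could boil down to – fetch_apys stands in for reading subgraphs, protocol APIs or docs, and the numbers are canned:

  # `fetch_apys` is a hypothetical stand-in; the numbers are canned, not live data.
  def fetch_apys() -> dict:
      return {"Aave v3 USDC": 0.031, "Compound v3 USDC": 0.028, "Morpho USDC": 0.036}

  def draft_lending_intent(amount_usdc: float) -> dict:
      apys = fetch_apys()
      best = max(apys, key=apys.get)
      return {"action": "lend",
              "market": best,
              "amount": amount_usdc,
              "note": f"best of {len(apys)} markets at {apys[best]:.1%} APY (snapshot)"}

  print(draft_lending_intent(5_000))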

At the end, you’ll have a nice list of things that you want to do – your intents. 

But turning what you want into specific on-chain transactions is another difficult task.

Turning your Intent into transactions

UX – or the way to turn intent into transactions – is a pain in crypto.

However, LLMs can directly translate your intent into smart contract calls. And remove all friction between knowing what you want and expressing it on-chain as txs.

And LLMs can construct much smarter transactions than we can today.

Make CoWs happen with Fuzzy Intent Matching

CoWs are rare. LLMs make them happen more often.

Your intents don’t exist in isolation. In many cases, you’re looking for someone else to trade with: a counterparty.

P2P trades are more efficient than peer-to-pool trades, so we should aim to find coincidence of wants (CoWs) as often as possible.

Unfortunately, CoWs, even in CowSwap, seldom happen. If you want to trade ETH to USDC, you need to find someone trading USDC to ETH in the same block.

But, what if someone submits an intent to trade USDT to ETH, but also holds USDC – maybe they would be willing to buy ETH with USDC as well? Then there potentially is a CoW with your trade.

LLMs can help locate these CoW opportunities by turning almost-matching intents into matching intents. Here’s how.

LLMs can easily map specifically expressed intents to a higher level intent space behind them (“What the user probably really wanted to do.”). And then fuzzy match intents that are semantically close. Thanks to their semantic understanding LLMs can do this out of the box.
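
A toy sketch of that fuzzy matching, where embed stands in for a real embedding model (the vectors below are made up) and a match is any opposite-side intent above a similarity threshold:

  # `embed` stands in for a semantic embedding model; vectors are made up.
  import numpy as np

  def embed(text: str) -> np.ndarray:
      canned = {
          "sell ETH for a dollar-pegged stablecoin": np.array([1.0, 0.1, 0.0]),
          "buy ETH with USDT or other stables":      np.array([0.9, 0.2, 0.1]),
          "sell a BAYC for ETH":                     np.array([0.0, 0.0, 1.0]),
      }
      return canned[text]

  def cosine(a, b):
      return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

  my_intent = {"general": "sell ETH for a dollar-pegged stablecoin", "side": "sell"}
  order_book = [
      {"general": "buy ETH with USDT or other stables", "side": "buy"},
      {"general": "sell a BAYC for ETH", "side": "sell"},
  ]

  matches = [o for o in order_book
             if o["side"] != my_intent["side"]
             and cosine(embed(my_intent["general"]), embed(o["general"])) > 0.9]
  print(matches)  # candidate CoWs worth re-negotiating with the counterparties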

From there, LLMs can help you get more CoWs through re-negotiation:

  • Inward intent renegotiation: Find other intents that fuzzy-match your intent, then propose a re-expression of your intent that matches an intent it has found on-chain. For example, "Is it ok to buy LUSD instead of USDC? I found a matching limit order and you'd save 0.3% on trading fees with this CoW."
  • Outward intent renegotiation and offers: Ask other LLMs that hold almost-matching intents to propose an adjustment to their humans: "I want to buy this other BAYC that you have; would you accept to sell that one for X ETH?"

Wallets could even surface intents that match your assets to you. “Do you want to sell this position? There is a matching OTC offer in the market atm.”

With LLMs, we can effortlessly scale intent negotiation and find many more win-wins. 

But fuzzy matching is not even the most effective way to increase peer-to-peer matches.

Wide intents – making CoWs happen with range-conditions

Wide intents make CoWs easier.

LLMs can also help you construct much broader intents – intents that include a wide range of acceptable conditions, to make matching easier.

Some examples of intents with options:

  • Include lists of replacement options for assets in your trade (e.g., buy any staked ETH instead of WETH; use any stablecoin from your wallet to buy the NFT; or get the ETH loan from any of the top lending platforms);
  • Price and time ranges: Specify ranges of acceptable price (without publishing slippage) and longer time-frames for execution;
  • Oracle checks and within-block conditions (e.g., making trades invalid if sandwiched) or specifying fallback options in case the transaction fails.

All of these will drastically increase CoWs – and reduce your trade costs.
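
For intuition, here is one way such a wide intent might be written down. All field names are illustrative, not an existing standard:

  # Illustrative structure for a "wide" intent; not an existing standard.
  wide_intent = {
      "action": "swap",
      "sell": {"token": "USDT", "alternatives": ["USDC", "LUSD", "DAI"]},
      "buy":  {"token": "WETH", "alternatives": ["wstETH", "rETH", "cbETH"]},
      "amount_usd": {"min": 1_000, "max": 5_000},
      "min_price": "oracle_median(ETH/USD) * 0.995",   # checked at execution time
      "valid_until": "2023-07-21T00:00:00Z",
      "conditions": ["revert_if_sandwiched"],
      "fallback": "cancel",                             # or route to an on-chain pool
  }

The wider the ranges and substitution lists, the more counterparties the intent can match against.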


So far, we’ve seen how LLMs can make your interaction with blockchains seamless. But just letting LLMs compose complex transactions by calling a string of smart contracts sounds a bit dicey.

Constraining LLMs with composable intent modules

Intent modules give LLMs the grammar to turn intents into safe transactions.

We mentioned earlier that LLMs are very good at semantically mapping to any formal language. So let’s define a new language designed to express intents safely, restrict LLMs to using that language, then compile transactions safely from there.

We’ll call this language “Composable intent modules”. Modules designed as safe building blocks.

Imagine, for example, a Safe-Swap Wrapper that double-checks whether you get enough back out for what you put into a swap. E.g. it could check that you receive at least the median of five trusted oracle prices. If no quotes exist or the swap returns less, the wrapper makes your tx fail.

Another could be a lower-level module, like a Good Swap that gets quotes from five trusted solvers, picks the best, and submits transactions through three private RPCs.

Modules can also come with meta information. For example, instructions for your LLM on how to monitor the execution of the Good Swap and a description of how the module works, so that the LLM can explain it to you.
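
For intuition, here is roughly what the Safe-Swap Wrapper's check could look like, sketched in Python with made-up numbers. The real thing would live in an on-chain component, not a script:

  # Sketch of the Safe-Swap Wrapper check; numbers are illustrative.
  from statistics import median

  def safe_swap_check(amount_in: float, amount_out: float,
                      oracle_prices: list[float], max_loss: float = 0.005) -> None:
      if len(oracle_prices) < 5:
          raise RuntimeError("not enough oracle quotes - fail the tx")
      fair_out = amount_in * median(oracle_prices)
      if amount_out < fair_out * (1 - max_loss):
          raise RuntimeError("swap returns too little - fail the tx")

  # e.g. selling 2 ETH against five trusted ETH/USDC oracle quotes
  safe_swap_check(2.0, 3_700.0, [1_860, 1_855, 1_858, 1_862, 1_859])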

Intent modules can encompass different levels of abstraction:

  • Low-level: Trusted calls and contracts;
  • App level: Trusted protocols, oracles, solvers;
  • Decorators: Safety wrappers (oracle price checks, token lists, tx simulation);
  • Micro-intents: Swap, stake, lend, borrow, bridge;
  • Macro-intents: Markowitz portfolio optimization, yield optimization, dollar-cost averaging, iceberg order, managing a leveraged CDP.

Compose larger intents out of smaller pre-defined building blocks.
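
A sketch of such composition: a dollar-cost-averaging macro-intent assembled from swap micro-intents, each wrapped in a safety decorator like the Safe-Swap check above. The structure is illustrative, not a spec:

  # Illustrative composition of a macro-intent from micro-intent building blocks.
  def dca_macro_intent(sell_token: str, buy_token: str,
                       total_usd: float, weeks: int) -> list[dict]:
      per_week = total_usd / weeks
      return [
          {
              "module": "swap",                      # micro-intent
              "decorators": ["safe_swap_wrapper"],   # e.g. the oracle check sketched above
              "sell": sell_token,
              "buy": buy_token,
              "amount_usd": per_week,
              "execute_after_days": 7 * i,
          }
          for i in range(weeks)
      ]

  plan = dca_macro_intent("USDC", "WETH", total_usd=4_000, weeks=8)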

But LLMs aren’t just restricted to on-chain components.

Intent modules that query off-chain data

Intent modules can also use off-chain data. The module can specify an open-source library that the LLM can run to get off-chain data (e.g. an optimized swap route) to construct your intent. To verify that the LLM has run the right code, the code can produce a zero-knowledge proof that is verified by an on-chain component.
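
Sketched at the interface level – prove and verify stand in for a real proving stack, and the route below is canned:

  # `prove` and `verify` are hypothetical stand-ins for a real ZK proving stack.
  def prove(program_id: str, inputs, output) -> dict:
      # A real prover would return a succinct proof that `program_id` ran on `inputs`.
      return {"program": program_id, "inputs": inputs, "output": output}

  def verify(proof: dict) -> bool:
      # Stand-in for the on-chain verifier component.
      return proof["program"] == "route_finder_v1"

  def compute_swap_route(sell: str, buy: str, amount: float):
      route = [sell, "USDC", buy]   # canned result standing in for a real router
      proof = prove("route_finder_v1", inputs=(sell, buy, amount), output=route)
      return route, proof

  route, proof = compute_swap_route("USDT", "WETH", 1_000)
  assert verify(proof)              # only then is the intent accepted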

So, with a trusted formal intent language, LLMs can translate your intent (described in natural language) into a form that safely compiles to transactions.

However, how do you verify that the transactions will really do what you want?  

Trusted back-translation

Make AI-built transactions readable through safe back-translation.

Experienced users might read the intent language like pseudo-code. But most people will need an explanation in natural language.

We shouldn't trust the LLM with this back-translation – it could deceive us. Instead, the intent modules can simply include natural-language explanations of what they do.

E.g. The Good Swap could include the template "You're paying X and will receive at least Y, otherwise this swap fails."
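
In code terms, the back-translation can be as simple as filling a template shipped with the module. The field names below mirror the Good Swap example and are illustrative:

  # Templates shipped with the module do the back-translation, not the LLM.
  GOOD_SWAP_TEMPLATE = (
      "You're paying {amount_in} {token_in} and will receive at least "
      "{min_out} {token_out}, otherwise this swap fails."
  )

  decoded_intent = {"amount_in": 2.0, "token_in": "ETH",
                    "min_out": 3_700, "token_out": "USDC"}
  print(GOOD_SWAP_TEMPLATE.format(**decoded_intent))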


But LLMs can do more than discover what you want.

LLMs will make the txs we wish we could make

We can use LLMs to do things we find easy to express but hard to actually do.

Infinite attention

Precisely react to a wide and even unpredictable set of events as you want to.

LLMs are faster than us and have practically infinite attention. They can:

  • Execute long sequences of transactions, with arbitrary wait-times or failures in between;
  • Monitor for exceptional events (outliers) and find safe ways to respond to them;
  • Explore piles of information (e.g., reading documents and whitepapers or reviewing all APY rates on stable pools) and pick the most fitting option;
  • Monitor for types of conditions, and then execute precisely pre-defined strategies.

Again, the difference between mere automation and LLMs is that LLMs can semantically match intents to specific situations. Thanks to their fuzziness, they can cover a much wider range of scenarios than a simple on-chain intent.

LLMs will make it trivial to re-stake or shift positions at the right moments, react to news the way you want to, have the patience to bridge transactions, or write a strategy and farm for an airdrop.
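
A minimal sketch of such a monitor – fetch_events and classify stand in for a real data feed and an LLM classifier, and the playbook entries are illustrative:

  # `fetch_events` and `classify` are hypothetical stand-ins; playbook entries are illustrative.
  PLAYBOOK = {
      "stablecoin_depeg":  "rotate the depegged stable into USDC, cap slippage at 1%",
      "staking_apr_spike": "move idle ETH into the highest-APR whitelisted staking pool",
  }

  def fetch_events() -> list[str]:
      return ["USDT trades at 0.93 on major venues"]   # canned example event

  def classify(event: str) -> str | None:
      # A real monitor would let an LLM map the raw event to a playbook key.
      return "stablecoin_depeg" if "0.9" in event else None

  for event in fetch_events():
      key = classify(event)
      if key:
          print(f"event: {event!r} -> premeditated intent: {PLAYBOOK[key]}")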

But time and attention aren’t the only things holding us back from making good transactions.

Overcoming emotional bias

Effortlessly prepare for a large number of scenarios.

There’s a difference between how we wish we could react – for example, exit after hitting a price target, or react strategically when a stablecoin melts down – and how we actually react – with greed and panic.

LLMs can help us make ideal decisions and stoically execute the intent we defined during our calmer moments.

With the help of LLMs, we can prepare whole sets of intents for all kinds of scenarios. And we can let our LLM execute it when the time comes – or at least present us with a premeditated plan to sign off.

But making everyday transactions frictionless for you is just the start of what LLMs will do on blockchains.

LLMs will use blockchains as financial rails

Blockchains are the ideal environment for LLMs to bank. Permissionless, trustless, deterministic, transparent, well documented and open source.

Blockchains also have no hurdles for AI; no human-grease is needed, no KYC. And no human can flip a switch and close your account. Financial minecraft: simple and infinitely programmable blocks – every AI’s dream.

If LLMs, representing millions of users, choose blockchains as their financial rails, this could easily push mass consumer adoption of blockchains.

Mass consumer adoption

LLMs have already achieved mass adoption as chatbots. It’s a small step to give them access to blockchains and let users express financial intents.

We won’t just use LLMs to get information, but to find, select, and pay for products. And to get loans and make investment choices.

If blockchains mature quickly enough, the rational choice for LLMs will be to use them and not tradfi. This could be enough to turn the tide.

Regardless of mass adoption, blockchains will very likely be the place where LLMs seek and pay for services they want to buy for themselves.

AI hiring humans via blockchains

LLMs can hire humans for any task on-chain.

LLMs are limited to things software can do. But through blockchains, AIs can bribe humans. Some services that AIs might buy from humans include:

  • Higher intelligence: As long as AIs are not as smart as humans, they can buy their input to improve decisions.
  • Proof-of-humanity: If certain actions require proof-of-humanity – like getting a wallet verified with Worldcoin, providing a proof of residence, opening a bank account, or solving a captcha – AIs can pay humans to do it for them.
  • Representation: Represent the AI in real world meetings or do anything that currently requires or is more effectively done as a human.
  • Physical stuff: Do things that require a physical body: Go and collect something, assemble something, conduct an experiment, or do a human thing for another human.

With today’s LLMs you probably couldn’t tell whether a human or an AI is managing the project.

AI-managed projects

It's feasible that today’s LLMs could manage entire projects. LLMs can make up for lack of intelligence with precise coordination and infinite support.

Whenever more intelligence is critical, the AI can ask an experienced human for input. E.g. on the overall project goal, plan or software architecture.

The rails to allow AIs to manage projects already exist. Task platforms like Dework provide everything an AI needs to hire humans on-chain.

One fun project for AIs would be to ask humans to build the parts that are missing to fulfill the AI's users' intents – e.g. missing intent modules, or missing protocol attestations – and then crowdsource the development from the users who need these components.

But really any project is possible.


The changes to how we will transact and how blockchains are used will likely have important implications for chains, protocols and wallets.

How to win in a world of LLMs

How will LLMs change the game?

Provable facts will matter more than brand and "marketing"

LLMs probably won’t be influenced by unverifiable claims and “marketing”.

Conversely, verifiable facts (uptime, transaction costs, block-time, pre-confirmations, depth/liquidity, prices, security attestations) will matter more.

You might also write your docs and SDK differently if they are mostly used by LLMs.

Better solutions can win overnight

When AIs build your intents and rationally optimize, protocols like Morpho, which offer strict improvements over existing solutions, can gain big market share practically overnight.

This means solutions with economies of scale will grow even faster – but rent-seekers will be quickly overturned by better solutions.

Today, you might still use SushiSwap out of habit, but tomorrow LLMs will just pick CowSwap.

Blockchains will become much more useful

It will take a few minutes of chatting with an AI to construct your investment strategy for the year. And thanks to translation, modularity and open interfaces, you can actually express all of it on-chain. Add that you can find direct counterparties and skip exchange fees – and blockchains will be much more useful.

Will LLMs make monolithic UIs obsolete?

Monolithic UIs need to cater to all. LLMs will build everyone the UI they want.

If LLMs author most transactions and LLMs can interact with protocols directly, then fixed UIs could become less important.

Conversations like the following are already possible:

User: "Show me a sensible timeline with my token holdings."

LLM: "Sure, I’ll chart the last 12 months, group similar assets (e.g., stablecoins) together, and scale each line’s thickness with the log of the USD value of the holding. How does that sound?"

You: "Sounds good."

LLM: "Here's the chart."

The hard problem of building a UI to suit everyone might be over. LLMs will build everyone the UI they want.
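
The conversation above could literally end in a small script the LLM writes and runs for you – here with synthetic balances and matplotlib:

  # Synthetic balances; a real version would read them from your wallet history.
  import math
  import matplotlib.pyplot as plt

  months = list(range(12))
  balances_usd = {
      "stablecoins (USDT+USDC+LUSD)": [8_000 + 100 * m for m in months],
      "ETH":                          [5_000 + 400 * m for m in months],
      "NFTs":                         [1_500 for _ in months],
  }

  for label, series in balances_usd.items():
      plt.plot(months, series, label=label,
               linewidth=math.log10(series[-1]))   # line thickness ~ log USD value
  plt.xlabel("month")
  plt.ylabel("USD value")
  plt.legend()
  plt.show()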

What will wallets do?

What is a wallet? Something that holds your keys, makes RPC calls for you, gives you a UI to express intents, and monitors your txs. We probably still want wallets to hold our keys, but LLMs can do the rest just as well.

Some wallets might use fine-tuned LLMs that help you find intents faster, express them safely with whitelisted intent modules, and give the LLM nice UI building blocks to present information about your wallet (like an adaptable dashboard).

Chains that attract LLMs will get a lot of volume

Whoever becomes the main chain for ChatGPT and other LLMs will have a head start towards mass adoption. The potential volume from a single large LLM service can dwarf today’s wallet volumes. LLM integrations are perhaps the most valuable orderflow integrations.

Protocols can specialize more

If brand is less important and every solution is equally visible in the eyes of AIs, more specialized solutions become more feasible.

You could build a protocol specializing in just small OTC-trades, or only TWAP for volatile tokens, or KYB-ed lending between small German businesses. And AIs will find them when they are the right fit for an intent.

Security concerns

LLMs are inscrutable and hard to align. You can’t guarantee that there isn’t some prompt hidden in a smart contract to send your funds to the bin, while telling you it’s just a normal swap.

Formal intent modules and secure back-translation could be ways to contain this risk. But this needs more research.

There are also concerns about giving financial rails to systems that could soon be smarter than us. There’s probably little we can do about this, but that’s a discussion for another article.

Summary

We made a number of bold claims in this article.

  • LLMs will make blockchains more fun by discovering and describing our intents for us. By being smart about intents, more P2P transactions will happen, and global barter trading will make us all better off.
  • Maybe LLMs will take a big part of the UX problem off our hands.
  • Much of blockchain traffic will be driven by LLMs. Especially consumer LLMs that use blockchains as financial rails.
  • The chains and protocols that get the attention of AIs will win. 
  • Very soon (or today?) we will see AIs managing projects and bribing humans to help them out.

It’s not exactly clear yet how to bring LLMs safely on-chain. But we showed that a formal intent language can be a starting point.

We hope that some of the implications and ideas we highlighted will be a useful starting point for teams to explore the impact of LLMs on blockchains.

It’s not AI or blockchain, it's AI 💗 blockchain.
