Best Practices for Building High Performance Apps
Configure web hosting to keep costs under control
- Vercel and Railway provide convenient serverless platforms for hosting your application, abstracting away the logistics of web hosting compared with using a cloud provider directly. You may end up paying a premium for that convenience, especially at higher volumes.
- AWS and other cloud providers offer more flexibility and commodity pricing.
- Before choosing any service, check pricing carefully: many providers offer loss-leader pricing at lower volumes, then charge much higher rates once you pass a certain threshold.
- For example, suppose a $20 plan includes 1 TB per month of data transfer, with $0.20 per GB beyond that. The second TB (and each TB after it) will cost $200, ten times the price of the first. If the next tier up says "contact us", don't assume it will be charging $20 per TB.
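The tier math above can be sketched as a tiny cost function. This is purely illustrative: the plan, the $0.20/GB overage rate, and the 1 TB = 1,000 GB simplification are the hypothetical numbers from the example, not any particular provider's pricing.

```typescript
// Illustrative only: monthly cost under a hypothetical $20 plan that
// includes 1 TB of transfer, then charges $0.20/GB (1 TB = 1,000 GB here).
function monthlyCostUsd(tbTransferred: number): number {
  const BASE_FEE = 20;
  const INCLUDED_TB = 1;
  const OVERAGE_PER_GB = 0.2;
  const overageTb = Math.max(0, tbTransferred - INCLUDED_TB);
  return BASE_FEE + overageTb * 1000 * OVERAGE_PER_GB;
}

console.log(monthlyCostUsd(1));  // 20: within the included tier
console.log(monthlyCostUsd(2));  // 220: the second TB alone costs $200
console.log(monthlyCostUsd(10)); // 1820: 91x the base fee
```

At 10 TB the bill is dominated by overage, which is why serving static files through a cheap CDN tier matters long before you reach "contact us" volumes.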
- If you are building a high-traffic app and you aren't careful about serving static files more cheaply, it will be easy to exceed the loss-leader tier and pay much more than you expect.
- For production deployments on AWS, consider:
- Amazon S3 + CloudFront for static file hosting and CDN
- AWS Lambda for serverless functions
- Amazon ECS or EKS for containerized applications
- Amazon RDS for database needs
- This setup typically provides granular cost control and scalability for high-traffic applications.
Avoid unnecessary RPC calls to methods with static responses
- `eth_chainId` always returns `10143`
- `eth_gasPrice` always returns `52 * 10^9`
- `eth_maxPriorityFeePerGas` always returns `2 * 10^9`
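Since these values never change on Monad Testnet, they can be pinned as constants instead of fetched over RPC on every page load. A minimal sketch (the constant and helper names here are made up for illustration):

```typescript
// Static responses on Monad Testnet: pin them instead of calling the RPC.
// Values come from the list above; bigint math avoids float precision issues.
const MONAD_TESTNET_CHAIN_ID = 10143;
const MONAD_TESTNET_GAS_PRICE = 52n * 10n ** 9n;       // eth_gasPrice, in wei
const MONAD_TESTNET_MAX_PRIORITY_FEE = 2n * 10n ** 9n; // eth_maxPriorityFeePerGas, in wei

// Hypothetical helper: fee fields for a transaction request, zero RPC round trips.
function staticFeeFields() {
  return {
    chainId: MONAD_TESTNET_CHAIN_ID,
    gasPrice: MONAD_TESTNET_GAS_PRICE,
    maxPriorityFeePerGas: MONAD_TESTNET_MAX_PRIORITY_FEE,
  };
}
```

Spreading `staticFeeFields()` into a transaction request avoids three RPC calls that would otherwise precede every transaction.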
Use a hardcoded value instead of an `eth_estimateGas` call if gas usage is static
Many on-chain actions have a fixed gas cost. The simplest example is that a transfer of native tokens always costs 21,000 gas, but there are many others. This makes it unnecessary to call `eth_estimateGas` for each transaction. Use a hardcoded value instead, as suggested here. Eliminating an `eth_estimateGas` call substantially speeds up the user workflow in the wallet, and avoids a potential bad behavior in some wallets when `eth_estimateGas` reverts (discussed in the linked page).
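For the native-transfer case, a sketch of what hardcoding looks like (the helper name is hypothetical; the commented usage assumes a configured viem wallet client):

```typescript
// A native-token transfer always uses exactly 21,000 gas, so the limit
// can be hardcoded rather than fetched via eth_estimateGas.
const NATIVE_TRANSFER_GAS = 21_000n;

// Hypothetical helper: builds a viem-style request with the gas limit pinned.
function buildNativeTransferRequest(to: `0x${string}`, valueWei: bigint) {
  return {
    to,
    value: valueWei,
    gas: NATIVE_TRANSFER_GAS, // no eth_estimateGas round trip
  };
}

// Usage sketch (assuming a configured viem wallet client):
// const hash = await walletClient.sendTransaction(
//   buildNativeTransferRequest(recipient, parseEther('0.1'))
// )
```

The same pattern applies to any contract call whose gas usage you have measured to be constant; just swap in the measured value.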
Reduce `eth_call` latency by submitting multiple requests concurrently
Making multiple `eth_call` requests serially introduces unnecessary latency due to multiple round trips to an RPC node. You can make many `eth_call`s concurrently, either by condensing them into a single `eth_call` or by submitting a batch of calls. Alternatively, you might find it better to switch to an indexer.
Condensing multiple `eth_call`s into one
- Multicall: Multicall is a utility smart contract that lets you aggregate multiple read requests (`eth_call`) into a single one. This is particularly effective for fetching data points like token balances, allowances, or contract parameters simultaneously. The standard `Multicall3` contract is deployed on Monad Testnet at `0xcA11bde05977b3631167028862bE2a173976CA11`. Many libraries offer helper functions to simplify multicall usage, e.g. viem. Read more about `Multicall3` here.
- Custom Batching Contracts: For complex read patterns or scenarios not easily handled by the standard multicall contract, you can deploy a custom smart contract that aggregates the required data in a single function, which can then be invoked via a single `eth_call`.
Multicall executes calls serially as you can see from the code here. So while using multicall avoids multiple round trips to an RPC server, it is still inadvisable to put too many expensive calls into one multicall. A batch of calls (explained next) can be executed on the RPC in parallel.
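As a concrete sketch, here is how a set of ERC-20 `balanceOf` reads could be prepared for viem's `multicall` action. The helper name and placeholder addresses are invented for illustration; the network call itself is left commented since it needs a live client:

```typescript
// Sketch: aggregate several ERC-20 balanceOf reads into one eth_call
// via Multicall3 (address from the section above).
const MULTICALL3_ADDRESS = '0xcA11bde05977b3631167028862bE2a173976CA11';

const balanceOfAbi = [
  {
    name: 'balanceOf',
    type: 'function',
    stateMutability: 'view',
    inputs: [{ name: 'owner', type: 'address' }],
    outputs: [{ name: '', type: 'uint256' }],
  },
] as const;

function buildBalanceCalls(tokens: `0x${string}`[], owner: `0x${string}`) {
  // One entry per token; viem's multicall turns these into a single
  // Multicall3 eth_call instead of N separate requests.
  return tokens.map((address) => ({
    address,
    abi: balanceOfAbi,
    functionName: 'balanceOf' as const,
    args: [owner] as const,
  }));
}

// Usage sketch (assuming a configured viem public client):
// const results = await publicClient.multicall({
//   multicallAddress: MULTICALL3_ADDRESS,
//   contracts: buildBalanceCalls(tokens, owner),
// })
```

Keep the per-multicall workload modest for the reason above: the aggregated calls still execute serially inside the contract.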
Submitting a batch of calls
Most major libraries support batching multiple RPC requests into a single message.
For example, viem handles `Promise.all()` on an array of promises by submitting them as a single batch:

```typescript
const resultPromises = Array(BATCH_SIZE)
  .fill(null)
  .map(async (_, i) => {
    return await PUBLIC_CLIENT.simulateContract({
      address: ...,
      abi: ...,
      functionName: ...,
      args: [...],
    })
  })

const results = await Promise.all(resultPromises)
```
Use indexers for read-heavy loads
If your application frequently queries historical events or derived state, consider using an indexer, as described next.
Use an indexer instead of repeatedly calling `eth_getLogs` to listen for your events
Below is a quickstart guide for the most popular data indexing solutions. Please view the indexer docs for more details.
Using Allium
- Allium Explorer
  - Blockchain analytics platform that provides SQL-based access to historical blockchain data (blocks, transactions, logs, traces, and contracts).
  - You can create Explorer APIs through the GUI to query and analyze historical blockchain data. When creating a Query for an API here (using the `New` button), select `Monad Testnet` from the chain list.
  - Relevant docs:
- Allium Datastreams
  - Provides real-time blockchain data streams (including blocks, transactions, logs, traces, contracts, and balance snapshots) through Kafka, Pub/Sub, and Amazon SNS.
  - GUI to create new streams for onchain data. When creating a stream, select the relevant `Monad Testnet` topics from the `Select topics` dropdown.
  - Relevant docs:
- Allium Developers
  - Enables fetching wallet transaction activity and tracking balances (native, ERC20, ERC721, ERC1155).
  - For the request's body, use `monad_testnet` as the `chain` parameter.
  - Relevant docs:
Using Envio HyperIndex
- Follow the quick start to create an indexer. In the `config.yaml` file, use network ID `10143` to select Monad Testnet.
- Example configuration
  - Sample `config.yaml` file:

```yaml
name: your-indexers-name
networks:
  - id: 10143 # Monad Testnet
    # Optional custom RPC configuration - only add if default indexing has issues
    # rpc_config:
    #   url: YOUR_RPC_URL_HERE # Replace with your RPC URL (e.g., from Alchemy)
    #   interval_ceiling: 50 # Maximum number of blocks to fetch in a single request
    #   acceleration_additive: 10 # Speed up factor for block fetching
    #   initial_block_interval: 10 # Initial block fetch interval size
    start_block: 0 # Replace with the block you want to start indexing from
    contracts:
      - name: YourContract # Replace with your contract name
        address:
          - 0x0000000000000000000000000000000000000000 # Replace with your contract address
          # Add more addresses if needed for multiple deployments of the same contract
        handler: src/EventHandlers.ts
        events:
          # Replace with your event signatures
          # Format: EventName(paramType paramName, paramType2 paramName2, ...)
          # Example: Transfer(address from, address to, uint256 amount)
          # Example: OrderCreated(uint40 orderId, address owner, uint96 size, uint32 price, bool isBuy)
          - event: EventOne(paramType1 paramName1, paramType2 paramName2)
          # Add more events as needed
```

  - Sample `EventHandlers.ts` file:

```typescript
import {
  YourContract,
  YourContract_EventOne,
} from "generated";

// Handler for EventOne
// Replace parameter types and names based on your event definition
YourContract.EventOne.handler(async ({ event, context }) => {
  // Create a unique ID for this event instance
  const entity: YourContract_EventOne = {
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    // Replace these with your actual event parameters
    paramName1: event.params.paramName1,
    paramName2: event.params.paramName2,
    // Add any additional fields you want to store
  };

  // Store the event in the database
  context.YourContract_EventOne.set(entity);
})

// Add more event handlers as needed
```

- Important: The `rpc_config` section under a network (see the `config.yaml` sample) is optional and should only be configured if you experience issues with the default Envio setup. This configuration allows you to:
  - Use your own RPC endpoint
  - Configure block fetching parameters for better performance
- Relevant docs:
Using GhostGraph
See also: Ghost
- Relevant docs:
Using Goldsky
See also: Goldsky
- Goldsky Subgraphs
  - To deploy a Goldsky subgraph follow this guide.
  - As the network identifier, use `monad-testnet`. For subgraph configuration examples, refer to The Graph Protocol section below.
  - For information about querying Goldsky subgraphs, see the GraphQL API documentation.
- Goldsky Mirror
  - Enables direct streaming of on-chain data to your database.
  - For the chain name in the `dataset_name` field when creating a `source` for a pipeline, use `monad_testnet` (see the example below).
  - Example `pipeline.yaml` config file:

```yaml
name: monad-testnet-erc20-transfers
apiVersion: 3
sources:
  monad_testnet_erc20_transfers:
    dataset_name: monad_testnet.erc20_transfers
    filter: address = '0x0' # Add erc20 contract address. Multiple addresses can be added with 'OR' operator: address = '0x0' OR address = '0x1'
    version: 1.2.0
    type: dataset
    start_at: earliest

# Data transformation logic (optional)
transforms:
  select_relevant_fields:
    sql: |
      SELECT
        id,
        address,
        event_signature,
        event_params,
        raw_log.block_number as block_number,
        raw_log.block_hash as block_hash,
        raw_log.transaction_hash as transaction_hash
      FROM
        ethereum_decoded_logs
    primary_key: id

# Sink configuration to specify where data goes eg. DB
sinks:
  postgres:
    type: postgres
    table: erc20_transfers
    schema: goldsky
    secret_name: A_POSTGRESQL_SECRET
    from: select_relevant_fields
```

  - Relevant docs:
Using QuickNode Streams
See also: QuickNode Streams
- On your QuickNode Dashboard, select `Streams` > `Create Stream`. In the create stream UI, select Monad Testnet under Network. Alternatively, you can use the Streams REST API to create and manage streams; use `monad-testnet` as the network identifier.
- You can consume a Stream by choosing a destination during stream creation. Supported destinations include Webhooks, S3 buckets, and PostgreSQL databases. Learn more here.
- Relevant docs:
Using The Graph's Subgraph
See also: The Graph
- Network ID to be used for Monad Testnet: `monad-testnet`
- Example configuration
  - Sample `subgraph.yaml` file:

```yaml
specVersion: 1.2.0
indexerHints:
  prune: auto
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: YourContractName # Replace with your contract name
    network: monad-testnet # Monad testnet configuration
    source:
      address: "0x0000000000000000000000000000000000000000" # Replace with your contract address
      abi: YourContractABI # Replace with your contract ABI name
      startBlock: 0 # Replace with the block where your contract was deployed/where you want to index from
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        # List your entities here - these should match those defined in schema.graphql
        # - Entity1
        # - Entity2
      abis:
        - name: YourContractABI # Should match the ABI name specified above
          file: ./abis/YourContract.json # Path to your contract ABI JSON file
      eventHandlers:
        # Add your event handlers here, for example:
        # - event: EventName(param1Type, param2Type, ...)
        #   handler: handleEventName
      file: ./src/mapping.ts # Path to your event handler implementations
```

  - Sample `mappings.ts` file:

```typescript
import {
  // Import your contract events here
  // Format: EventName as EventNameEvent
  EventOne as EventOneEvent,
  // Add more events as needed
} from "../generated/YourContractName/YourContractABI" // Replace with the contract name and ABI name you supplied in subgraph.yaml
import {
  // Import your schema entities here
  // These should match the entities defined in schema.graphql
  EventOne,
  // Add more entities as needed
} from "../generated/schema"

/**
 * Handler for EventOne
 * Update the function parameters and body according to your event structure
 */
export function handleEventOne(event: EventOneEvent): void {
  // Create a unique ID for this entity
  let entity = new EventOne(
    event.transaction.hash.concatI32(event.logIndex.toI32())
  )

  // Map event parameters to entity fields
  // entity.paramName = event.params.paramName
  // Example:
  // entity.sender = event.params.sender
  // entity.amount = event.params.amount

  // Add metadata fields
  entity.blockNumber = event.block.number
  entity.blockTimestamp = event.block.timestamp
  entity.transactionHash = event.transaction.hash

  // Save the entity to the store
  entity.save()
}

/**
 * Add more event handlers as needed
 * Format:
 *
 * export function handleEventName(event: EventNameEvent): void {
 *   let entity = new EventName(
 *     event.transaction.hash.concatI32(event.logIndex.toI32())
 *   )
 *
 *   // Map parameters
 *   entity.param1 = event.params.param1
 *   entity.param2 = event.params.param2
 *
 *   // Add metadata
 *   entity.blockNumber = event.block.number
 *   entity.blockTimestamp = event.block.timestamp
 *   entity.transactionHash = event.transaction.hash
 *
 *   entity.save()
 * }
 */
```

  - Sample `schema.graphql` file:

```graphql
# Define your entities here
# These should match the entities listed in your subgraph.yaml

# Example entity for a generic event
type EventOne @entity(immutable: true) {
  id: Bytes!
  # Add fields that correspond to your event parameters
  # Examples with common parameter types:
  # paramId: BigInt! # uint256, uint64, etc.
  # paramAddress: Bytes! # address
  # paramFlag: Boolean! # bool
  # paramAmount: BigInt! # uint96, etc.
  # paramPrice: BigInt! # uint32, etc.
  # paramArray: [BigInt!]! # uint[] array
  # paramString: String! # string

  # Standard metadata fields
  blockNumber: BigInt!
  blockTimestamp: BigInt!
  transactionHash: Bytes!
}

# Add more entity types as needed for different events
# Example based on Transfer event:
# type Transfer @entity(immutable: true) {
#   id: Bytes!
#   from: Bytes! # address
#   to: Bytes! # address
#   tokenId: BigInt! # uint256
#   blockNumber: BigInt!
#   blockTimestamp: BigInt!
#   transactionHash: Bytes!
# }

# Example based on Approval event:
# type Approval @entity(immutable: true) {
#   id: Bytes!
#   owner: Bytes! # address
#   approved: Bytes! # address
#   tokenId: BigInt! # uint256
#   blockNumber: BigInt!
#   blockTimestamp: BigInt!
#   transactionHash: Bytes!
# }
```

- Relevant docs:
Using thirdweb's Insight API
See also: thirdweb
- REST API offering a wide range of on-chain data, including events, blocks, transactions, token data (such as transfer transactions, balances, and token prices), contract details, and more.
- Use chain ID `10143` for Monad Testnet when constructing request URLs.
  - Example: `https://insight.thirdweb.com/v1/transactions?chain=10143`
- Relevant docs:
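A small sketch of building Insight request URLs with the chain ID pinned to Monad Testnet. The helper name is hypothetical; the base URL and `chain` query parameter follow the example endpoint above:

```typescript
// Builds an Insight API URL for Monad Testnet (chain ID 10143), matching
// the example endpoint above. Extra query params are passed through as-is.
const MONAD_TESTNET_CHAIN_ID = 10143;

function insightUrl(path: string, params: Record<string, string> = {}): string {
  const url = new URL(`https://insight.thirdweb.com/v1/${path}`);
  url.searchParams.set('chain', String(MONAD_TESTNET_CHAIN_ID));
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// insightUrl('transactions')
//   -> https://insight.thirdweb.com/v1/transactions?chain=10143
```

Centralizing URL construction like this keeps the chain ID in one place if you later add Monad Mainnet support.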
Manage nonces locally if sending multiple transactions in quick succession
This only applies if you are setting nonces manually. If you are delegating nonce management to the wallet, you don't need to worry about this.
`eth_getTransactionCount` only updates after a transaction is finalized. If you are sending multiple transactions from the same wallet in short succession, you should implement local nonce tracking.
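A minimal local nonce tracker can look like the sketch below. The class name is made up; the idea is to query `eth_getTransactionCount` once to seed it, then assign consecutive nonces locally instead of re-querying before every transaction:

```typescript
// Minimal local nonce tracker: seed it once from eth_getTransactionCount,
// then hand out consecutive nonces locally for transactions sent in
// quick succession.
class LocalNonceManager {
  private next: number;

  constructor(initialNonce: number) {
    this.next = initialNonce;
  }

  // Reserve the next nonce for an outgoing transaction.
  take(): number {
    return this.next++;
  }

  // Re-sync with the chain after an error (e.g. a dropped transaction).
  reset(chainNonce: number): void {
    this.next = chainNonce;
  }
}

// Usage sketch (assuming a configured viem public client):
// const initial = await publicClient.getTransactionCount({ address, blockTag: 'pending' })
// const nonces = new LocalNonceManager(initial)
// const tx1 = { ...request, nonce: nonces.take() }
// const tx2 = { ...request, nonce: nonces.take() } // no extra RPC round trip
```

If a transaction fails to land, re-seed with `reset()` so local state doesn't drift ahead of the chain.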
Submit multiple transactions concurrently
If you are submitting a series of transactions, submit them concurrently instead of sequentially for improved efficiency.
Before:

```typescript
for (let i = 0; i < TIMES; i++) {
  const tx_hash = await WALLET_CLIENT.sendTransaction({
    account: ACCOUNT,
    to: ACCOUNT_1,
    value: parseEther('0.1'),
    gasLimit: BigInt(21000),
    baseFeePerGas: BigInt(50000000000),
    chain: CHAIN,
    nonce: nonce + Number(i),
  })
}
```
After:

```typescript
const transactionsPromises = Array(BATCH_SIZE)
  .fill(null)
  .map(async (_, i) => {
    return await WALLET_CLIENT.sendTransaction({
      to: ACCOUNT_1,
      value: parseEther('0.1'),
      gasLimit: BigInt(21000),
      baseFeePerGas: BigInt(50000000000),
      chain: CHAIN,
      nonce: nonce + Number(i),
    })
  })

const hashes = await Promise.all(transactionsPromises)
```