Best Practices
BlockPI Network's RPC service allows users to configure their endpoints with different features, such as Archive mode.
When Archive mode is enabled for a specific endpoint, requests sent to that endpoint are routed to archive nodes. These requests typically take longer to process because of the large amount of data involved. To bill users more accurately, we charge an additional 30% fee for requests that use archive nodes, while keeping the cost of regular requests low.
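For example, reading historical state, such as an account balance at a block far behind the chain head, requires archive data. The sketch below is illustrative only; the endpoint placeholder, the example address, and the chosen block height are assumptions, not part of the BlockPI API:

import json
import requests

# Illustrative: reading state at an old block requires an archive node,
# so this request should go to an endpoint with Archive mode enabled.
archiveNodeUrl = "https://ethereum.blockpi.network/v1/rpc/<key-with-archive-mode-on>"
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    # example address; block 1,000,000 is far older than the state a full node keeps
    "params": ["0x0000000000000000000000000000000000000000", hex(1000000)],
}
resp = requests.post(archiveNodeUrl,
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(payload))
print(resp.json())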
We apply a stringent success-rate measurement: any request that does not receive the expected response is counted as unsuccessful in our statistics. This includes, but is not limited to:
1. Any request that triggers BlockPI RPC restrictions, including rate limits, block range limits, data size limits, etc.
2. Any error returned directly from the node, such as non-existent methods, incorrect parameters, or missing parameters.
3. Others, such as timeout errors.
We recommend that users log request errors so that the specific reasons for unsuccessful requests can be accurately analyzed.
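One simple way to do this is to wrap each RPC call so that both transport failures and JSON-RPC error objects are written to a log file. The sketch below is illustrative; the endpoint placeholder, log file name, and call_rpc helper are our own assumptions, not part of the BlockPI API:

import json
import logging
import requests

fullNodeUrl = "https://ethereum.blockpi.network/v1/rpc/<your-api-key>"
logging.basicConfig(filename="rpc_errors.log", level=logging.WARNING)

def call_rpc(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    try:
        resp = requests.post(fullNodeUrl,
                             headers={"Content-Type": "application/json"},
                             data=json.dumps(payload), timeout=30)
    except requests.exceptions.RequestException as e:
        # transport-level failure (timeout, DNS, connection reset, ...)
        logging.warning("transport error for %s: %s", method, e)
        raise
    body = resp.json()
    if "error" in body:
        # JSON-RPC error object returned by the node or the gateway
        # (rate limit, block range limit, bad method or parameters, ...)
        logging.warning("rpc error for %s: %s", method, body["error"])
    return body

# example: this call logs an error because the method does not exist
call_rpc("eth_nonexistentMethod", [])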
As a user, if you want your RUs to be used efficiently, you can send only the requests that actually need archive data to the designated endpoint. In this case, you would generate two endpoints, one for regular requests and one with Archive mode enabled, and route each request based on a condition. Here is an example:
# Send requests for data older than the most recent 128 blocks to the archive nodes
import json
import requests

def request():
    # generate two keys, one with Archive mode enabled
    fullNodeUrl = "https://ethereum.blockpi.network/v1/rpc/<key-normal>"
    archiveNodeUrl = "https://ethereum.blockpi.network/v1/rpc/<key-with-archive-mode-on>"
    # target block number
    blockNum = "0x10336aa"
    # get the latest block number
    payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 83}
    headers = {"Content-Type": "application/json"}
    latestNum = requests.post(fullNodeUrl, headers=headers, data=json.dumps(payload)).json()["result"]
    # Send the request to the appropriate endpoint: blocks older than the most
    # recent 128 typically require an archive node.
    traceBlockPayload = {"jsonrpc": "2.0", "method": "trace_block", "params": [blockNum], "id": 1}
    if int(latestNum, 16) - int(blockNum, 16) >= 128:
        resp = requests.post(archiveNodeUrl, headers=headers, data=json.dumps(traceBlockPayload))
    else:
        resp = requests.post(fullNodeUrl, headers=headers, data=json.dumps(traceBlockPayload))
    print(resp.text)

request()
Here is another example using eth_getLogs. Since this method consumes considerable server resources, its block range is limited to 1024 blocks per request; this protects the node servers from being overwhelmed. If your job needs to query a range of more than 1024 blocks, it can be segmented into multiple requests of 1000 blocks each. Note that with an interval of 1000, each sub-request covers at most 1001 blocks (fromBlock and toBlock are inclusive), which stays safely under the limit.
import json
import requests

fullNodeUrl = "https://ethereum.blockpi.network/v1/rpc/<your-api-key>"
headers = {"Content-Type": "application/json"}
interval = 1000

def get_logs(from_block_number, to_block_number):
    # Fetch logs in chunks of at most `interval` blocks so that each
    # request stays within the 1024-block range limit.
    logs = []
    while from_block_number <= to_block_number:
        end_block_number = min(to_block_number, from_block_number + interval)
        payload = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "eth_getLogs",
            "params": [{
                "fromBlock": hex(from_block_number),
                "toBlock": hex(end_block_number)
            }]
        }
        response = requests.post(fullNodeUrl, headers=headers, data=json.dumps(payload))
        if response.status_code != 200:
            raise Exception("Failed to retrieve logs for block range: "
                            f"{from_block_number}-{end_block_number}")
        result = response.json()["result"]
        logs.extend(result)
        from_block_number = end_block_number + 1
    return logs

def get_all_logs(from_block_number, to_block_number):
    logs = []
    current_block_number = from_block_number
    while current_block_number <= to_block_number:
        # Cap the sub-range so we never query past the requested end block
        end_block_number = min(to_block_number, current_block_number + interval)
        logs_in_range = get_logs(current_block_number, end_block_number)
        logs.extend(logs_in_range)
        print("Processed block range:", current_block_number, "-", end_block_number,
              ", total logs:", len(logs_in_range))
        current_block_number = end_block_number + 1
    return logs

from_block_number = 10962850
to_block_number = 10962950
logs = get_all_logs(from_block_number, to_block_number)
print("Total logs:", len(logs))
To safeguard the system, every RPC provider sets a timeout after which WebSocket connections are periodically disconnected. In BlockPI's case, the timeout is 30 minutes. Therefore, during development, you need to implement a mechanism that detects a dropped connection and reconnects. Here is a Python example:
import asyncio
import json
import websockets

async def connect(url):
    while True:
        try:
            async with websockets.connect(url) as ws:
                print("websocket connection is established")
                # subscribe to new block headers
                request = {
                    "jsonrpc": "2.0",
                    "id": 2,
                    "method": "eth_subscribe",
                    "params": ["newHeads"]
                }
                await ws.send(json.dumps(request))
                while True:
                    message = await ws.recv()
                    print("message received:", message)
        except websockets.exceptions.ConnectionClosedError as e:
            if str(e) == "no close frame received or sent":
                # the server-side timeout dropped the connection; reconnect
                print("keepalive triggered, reconnecting...")
                await asyncio.sleep(5)
                continue
            else:
                print("websocket connection closed:", e)
                return
        except Exception as e:
            print("Unknown error occurred:", e)
            await asyncio.sleep(5)
            continue

if __name__ == "__main__":
    url = "wss://polygon-mumbai.blockpi.network/v1/ws/d69ca19cf64365849ca8152b7f32f319bad9fc22"
    asyncio.run(connect(url))
Please note that this is just sample code; you may need to adapt it to your specific development environment and requirements.