AVS Database Replication

AVS developers often use blockchains as a decentralized database, especially when they need consensus across nodes, but this approach comes with high costs and limited scalability. The Zellular sequencer offers a more efficient alternative, allowing an AVS to replicate data directly on its own nodes while maintaining consensus, without the drawbacks of relying on a blockchain.

A Practical Example: Logging AVS Node Downtime

Downtime logs are critical data that require consensus among all AVS nodes, as they directly influence fee and penalty structures. A reliable sub-module for monitoring node downtime is therefore essential for many AVSs, serving as the backbone of their reward and penalty systems.

To demonstrate Zellular's utility for database replication, let's walk through the task of logging AVS node downtime and replicating that data across the nodes using the Zellular sequencer.

Detecting & Validating Node Downtime

In AVS architecture, an aggregator typically queries nodes to validate the tasks they are responsible for and collects their validation signatures. The aggregator is therefore often the first to detect that a node has failed to respond with its validation signature, a potential sign of downtime.

However, relying solely on the aggregator's claims about node downtime is insufficient for a decentralized system: the aggregator itself could act maliciously and make untruthful claims that impose financial costs on individual nodes. To ensure reliability, AVS nodes need a way to independently validate the uptime or downtime of other nodes as a secondary task.

To achieve this, each AVS node must expose a designated endpoint that the aggregator can query with the ID of a node suspected of being down. This allows every node to independently confirm or refute the downtime status. A node's downtime is logged only when a threshold of nodes confirms it. An illustrative endpoint is sketched below.
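
As an illustration, such an endpoint might look like the following minimal sketch. The endpoint path, probe_node(), and sign() are assumptions specific to each AVS, and Flask stands in for whatever web framework the AVS uses:

import time
from flask import Flask, request

app = Flask(__name__)

@app.route('/liveness')
def liveness():
    # The aggregator queries this endpoint with the ID of a suspected node
    node_id = request.args.get('node', type=int)
    is_up = probe_node(node_id)  # hypothetical: e.g. an HTTP health check with a timeout
    event = {
        'node': node_id,
        'event_type': 'up' if is_up else 'down',
        'timestamp': int(time.time()),
    }
    event['sig'] = sign(event)  # hypothetical: the node signs its verdict
    return event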

Note that to compute a node's total downtime over a given period, uptime events must also be detected and validated in the same way.

Hosting Downtime Data on AVS Nodes

Once the downtime/uptime events of a specific node have been validated by the other nodes and their confirmations collected, the corresponding signatures must be aggregated and stored in a decentralized database.

The popular approach among Web3 developers is to use blockchains to store such data in a decentralized way. However, by enabling an AVS to store the proofs on its own nodes, Zellular offers a faster, cheaper, and therefore more scalable solution.

Let’s explore how Zellular can help store the data within AVS nodes while guaranteeing consensus among them.

First, the aggregator must post each aggregated proof to the Zellular sequencer:

import zellular

zellular_client = zellular.Zellular("avs-liveness-checker", base_url)
zellular_client.send([{'node': 8, 'event_type': 'down', 'timestamp': 1732522695, 'sig': '0x23a3...46da'}])
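
Each event carries the ID of the monitored node, the type of event ('down' or 'up'), the Unix timestamp at which it occurred, and the aggregated confirmation signature collected from the validating nodes.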

AVS nodes, on the other hand, continuously poll the Zellular sequencer for new proofs:

import json
import zellular

zellular_client = zellular.Zellular("avs-liveness-checker", base_url)
index = 0  # or the last index already processed and persisted locally
for batch, index in zellular_client.batches(after=index):
    events = json.loads(batch)
    for event in events:
        if verify(event):  # AVS-specific check of the aggregated confirmations
            add_event_log(event)

AVS nodes verify each proof they receive from the Zellular sequencer. Once a threshold of nodes (a predefined number, for instance 15 out of 20) has confirmed an event, each node adds the corresponding downtime/uptime record to the log in its local database.
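
How verify() works depends on the AVS's signature scheme, but conceptually it checks that a threshold of nodes signed the event. Here is a minimal sketch, assuming the event carries a signers field listing the confirming node IDs and hypothetical BLS helpers (canonical_message, aggregate_public_keys, bls_verify):

THRESHOLD = 15  # e.g. 15 of 20 AVS nodes must confirm an event

def verify(event):
    signers = event.get('signers', [])  # assumed field: IDs of the confirming nodes
    if len(signers) < THRESHOLD:
        return False
    message = canonical_message(event)           # deterministic encoding of the event
    agg_pubkey = aggregate_public_keys(signers)  # hypothetical BLS helper
    return bls_verify(agg_pubkey, message, event['sig'])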

Using Zellular in this architecture not only guarantees that no event is missing from any node's log, but also ensures all nodes store the same sequence of downtime/uptime events. Therefore, if a query is made to retrieve the total downtime of a node over a specific period, all nodes will calculate and return identical responses. This consistency is critical for decentralized reward and penalty mechanisms.
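
For instance, because every node stores the same ordered event log, a query for a node's total downtime over a period can be computed locally and will yield the same result on every node. A minimal sketch of such a query, assuming the log is a list of events ordered by timestamp:

def total_downtime(events, node_id, start, end):
    # Sum the seconds node_id was down within the [start, end] window
    total, down_since = 0, None
    for event in events:
        if event['node'] != node_id:
            continue
        t = min(max(event['timestamp'], start), end)  # clamp to the window
        if event['event_type'] == 'down' and down_since is None:
            down_since = t
        elif event['event_type'] == 'up' and down_since is not None:
            total += t - down_since
            down_since = None
    if down_since is not None:
        total += end - down_since  # node was still down at the end of the window
    return total

Since the replicated log is identical on all nodes, this function returns the same value wherever it runs, which is exactly the property a decentralized penalty mechanism needs.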