There is a need for more equitable data regarding the post-1.4.18 rewards.

Hi there,

I want to first recognize how much of a breakdown in communications there was around this, and I accept responsibility for that gap. The Node Runners TG has repeatedly been described as a private Telegram group, and I want to reiterate that while it has always been open to anyone who wanted to join, I can understand why this belief persisted.

Regarding Group Accessibility
The group originally had an invite link, but we were inundated with problems sourced from it, and with limited moderation support, managing the group became a massive overhead. So we disabled invite links but continued to allow people to add anyone to the group, which gave an audit trail. This setting has been in place for several months, and we later added a voteban bot to help the community self-police the one rule we had in place. One thing I want to note: before the poll went online, I explicitly confirmed that the permissions allowed anyone to be added, and even removed the voteban bot during this time so that, in the event the conversation got heated, nobody could voteban their way into controlling the vote. The majority of the members of this Telegram group have been long-term node operators, but there are also many recent joins.

Nevertheless, the group was increasingly a poor fit for handling comms, as I am now aware of multiple Telegram groups, WeChat groups, a Discord server (possibly two?), Warpcast group chats, multiple Farcaster channels, and several other smaller groups. Even before this weekend, I was waking up to hundreds or thousands of new direct messages each day, on top of the messages I was responding to during the day. This was untenable and prone to creating poor dynamics for clear communication. After the events of the retroactive rewards update, setting up a public discussion board was the most prudent way to ensure these breakdowns did not recur, and it now exists in the form of this Forum.

Regarding the Rewards Data
During the first week, heavy network instability was causing massive loss of messages on the network. A deeper synopsis of the cause and the events that followed can be found here: Quilibrium knowledge learning - #2 by cassie. The retroactive rewards have been an unfortunate side effect of the original intention to incentivize the network during this pre-mainnet phase. Originally, the pre-mainnet network was intended to be heavy on stress testing but smaller in size. We wanted to fairly reward node operators running during this time, so the original application built for this stress testing (sized for a maximum of 256 nodes) was intended to follow a simple process for accumulating rewards and building consensus. We had no idea how much the message of Quilibrium would resonate, or how quickly – no other grassroots-driven project has grown at the scale we have so quickly, even with incentivization. It has been an incredibly humbling process, and it forced us to respond quickly as many of our assumptions broke along the way.

The first stages of retroactive rewards were intended to address the problems we encountered as things scaled up and bugs were found, and originally we tried to keep things embedded in the protocol itself while we worked to get the network to heal under the load. Things initially did start to move in that direction, except we began to suffer more greatly from the success of word of mouth, until the ceremony application was no longer remotely capable of handling the strain. So we instead tried targeted fixes as we explored solutions, and along the way attempted best-faith approximations of the retroactive rewards. To address the many critiques on this point: it was indeed not fully decentralized, in the sense that calculating these rewards required large data processing pipelines to reconstruct who was online, what their operating parameters were, and what the outcome should be given those conditions. This occurred for a few cycles, until 1.4.19, where the nodes themselves are able to self-track their rewards, giving us time to stabilize the messaging layer of the network (mostly by getting rid of gossipsub).

How the rewards were calculated
Originally, for 1.4.18, we intended to scale the rewards on essentially the same criteria as used in 1.4.19 – the faster the hardware and the more cores, the greater the rewards. We used the difficulty metric, i.e. the speed of the VDF’s completion, as one of the mediating factors. Unfortunately, we ran into the previously described issues with network partitioning and dropped messages, resulting in tens of thousands of nodes being cleaved from the network and unfairly left out of rewards. Alongside this, some specific actors made things more complicated: they violated the AGPL license of the protocol, which requires contributing code back if it is used on the network, by running a faster implementation of the VDF, presumably something similar to the Rust VDF we rolled out after the first week. The issue is that they were doing this before any known Q developers had begun the migration to a Rust VDF, so if we had maintained the reward approach used for the first week, they would have been unfairly rewarded the grand majority (~90%) of week two’s rewards.
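As an illustration only (this is not the actual pipeline code), the weighting described above – more cores and faster VDF completion yielding a larger share of a fixed issuance pool – can be sketched roughly as follows. The field names and the linear weighting are assumptions for the example:

```python
# Illustrative sketch only, not protocol code. Assumes each node's share of a
# fixed per-interval pool is proportional to (cores x VDF speed), normalized
# across all nodes seen online in the interval.

def reward_shares(nodes, pool):
    """nodes: list of dicts with 'id', 'cores', and 'vdf_difficulty'
    (VDF iterations completed per interval; higher = faster hardware).
    pool: total QUIL to distribute for the interval."""
    weights = {n["id"]: n["cores"] * n["vdf_difficulty"] for n in nodes}
    total = sum(weights.values())
    return {node_id: pool * w / total for node_id, w in weights.items()}

nodes = [
    {"id": "a", "cores": 4, "vdf_difficulty": 10_000},
    {"id": "b", "cores": 64, "vdf_difficulty": 12_000},
]
shares = reward_shares(nodes, pool=1000.0)
```

Under a scheme like this, a node running a much faster (e.g. Rust) VDF inflates its `vdf_difficulty` and thus its weight, which is why a small set of such nodes could have captured the large majority of an interval's pool.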

What was done to improve conditions for all in the revision
We were able to reconstruct a better view of who was online during the network partition because our DHT logs remained relatively uninhibited – the DHT does not succumb to network partitioning failures in the same way gossipsub does. We combined this dataset with the node manifests to synthesize a more accurate picture of the actual state of the network. Because the intention of the rewards was to model the 2.0 PoMW issuance curve, the issuance had to be maintained, resulting in many nodes that had already seen rewards receiving far fewer from the two weeks of 1.4.18. As a consequence, tens of thousands of nodes that were previously rewarded zero QUIL for the first week became properly eligible.
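The reconstruction step can be pictured as a join between the two datasets. This is a minimal sketch under assumed shapes for the data – all names here are hypothetical, and the real pipeline is far larger:

```python
# Hypothetical sketch of joining DHT sightings with node manifests to
# reconstruct who was actually online through the partition.

def reconstruct_online_nodes(dht_sightings, manifests):
    """dht_sightings: iterable of (peer_id, timestamp) entries from the DHT
    logs, which stayed usable through the partition.
    manifests: dict mapping peer_id -> node manifest (e.g. {'cores': ...}).
    Returns only the peers that were both seen in the DHT and published a
    manifest, giving a more accurate picture of network state."""
    seen = {peer_id for peer_id, _ in dht_sightings}
    return {pid: m for pid, m in manifests.items() if pid in seen}

sightings = [("peer-a", 1700000000), ("peer-b", 1700000060), ("peer-a", 1700000120)]
manifests = {"peer-a": {"cores": 4}, "peer-c": {"cores": 64}}
online = reconstruct_online_nodes(sightings, manifests)
```

A peer that appears in the DHT logs but has no manifest (or vice versa) drops out of the joined set, which is the trade-off of any reconstruction built from two partial views.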

What about the high earnings of the later nodes?
Earnings were scaled according to prover core count and duration – in the later days of 1.4.18 there was an influx of extremely large machines, the largest being 384 cores, with the proofs accompanying it to back it up.

Closing Remarks
There was no right answer here. If the issuance curve had been sidestepped and the second option of the poll chosen (let week 1 follow the previous issuance, but applied to all the new nodes), it would have produced one of the largest issuance intervals in the history of the retroactive rewards – approximately 10M QUIL per day, well beyond the highest rate the retroactive rewards (or PoMW) would ever see. Rapidly inflating the token supply would have been a poor decision in terms of cryptoeconomics, but naturally the revision also felt unfair to many. I held a vote so that node operators could discuss and decide which outcome was more favorable (notably, many of those voting to keep things as revised had themselves lost tens of thousands of QUIL from the revision), and ran it for a full 24 hours so people could vote from any time zone, bring in anyone who wanted to be heard, and debate it out. It was never going to be a perfect outcome for as long as we still had to do these retroactive rewards, which is why it was critical to time-box this, given the strain it causes on developer resources and time. Getting 1.4.19 out before the retroactive reward calculations were complete reflects the desire to move on from this very broken part of the protocol’s history.
