PSA: 2.0 Launch Steps

I guess this is the only info we have on the matter? It doesn’t say how long the claim window would last, though.


Okay, thank you very much for your help. I will continue to pay attention to this information.


Having clear terms about pre-2.0 rewards (and any differences or expiration for pre-1.4.19 rewards as well) posted and shared prominently will be crucial.

Yeah, this should be a pinned topic in the forum, covering the hybrid mint for previous-version rewards and any potential expiration date or window for required action.

@cassie, would love some more info on this when you have the time.


thank you, appreciate the answer on this. great news

It’s in the Q&A from Cassie’s last stream (#3). She didn’t have a date, but I would assume something reasonable like a few weeks or a month.

The point of the time limit/expiration on pre-2.0 rewards is so that old proofs are not left hanging around forever. Leaving the mint open indefinitely for old proofs would open the door for somebody to try to print infinite money by spoofing/faking older proofs.


Also, I want to be clear: the bridge to Ethereum will be down temporarily while transitioning to 2.0, but that is not related to this mint expiration.

Hey all, want to clear up the confusion real quick about the different sets of QUIL token rewards, where they’ll be, and what (if any) time limits exist on claiming them.

Everything that was bridgeable (rewards prior to “pre-1.4.18/post-1.4.18” categories)
If they have been bridged to Ethereum, they will be able to be bridged back after the 2.0 upgrade is complete.
If they have not been bridged, they will be immediately registered under the reward address upon deployment of the token application.
No time limits are involved in either case.

Rewards under the “pre-1.4.18/post-1.4.18” categories
They will be immediately registered under the reward address upon deployment of the token application. No time limits are involved.

Rewards earned during 1.4.19/20/21
These rewards must have a terminal point; otherwise they would essentially become an infinite mint glitch. The token application’s mint function will have a short-lived window of one week to complete the claim: during that week it will accept either the 1.4.19/20/21 proofs or 2.0 protocol proofs, and after it closes it will strictly only allow 2.0 protocol proofs. It is in the interest of maximizing rewards for your nodes to do this as soon as the token application is deployed (nodes will automatically take action on this for you; your only action is to upgrade to 2.0 when the upgrade is released), so that your node can switch to providing network proofs for 2.0.
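For illustration, here is a minimal Go sketch of that gating rule, assuming only the stated one-week window; the ProofKind type and canMint function are placeholders, not the token application’s actual API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ProofKind distinguishes the two proof classes discussed above.
// These names are illustrative placeholders, not the real API.
type ProofKind int

const (
	ProofPre20 ProofKind = iota // rewards proven on 1.4.19/20/21
	Proof20                     // proofs produced under the 2.0 protocol
)

// claimWindow is the one-week grace period, measured from the moment
// the token application is deployed.
const claimWindow = 7 * 24 * time.Hour

var errWindowClosed = errors.New("pre-2.0 proofs are no longer accepted by mint")

// canMint sketches the gating rule: during the window either proof kind is
// accepted; after it closes, only 2.0 protocol proofs are.
func canMint(kind ProofKind, deployedAt, now time.Time) error {
	if kind == Proof20 {
		return nil
	}
	if now.Before(deployedAt.Add(claimWindow)) {
		return nil
	}
	return errWindowClosed
}

func main() {
	deployedAt := time.Now()
	fmt.Println(canMint(ProofPre20, deployedAt, deployedAt.Add(3*24*time.Hour)))  // <nil>: inside the window
	fmt.Println(canMint(ProofPre20, deployedAt, deployedAt.Add(10*24*time.Hour))) // error: window closed
	fmt.Println(canMint(Proof20, deployedAt, deployedAt.Add(10*24*time.Hour)))    // <nil>: 2.0 proofs always accepted
}
```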

A very important note for node operators who have been moving backups around – please make sure your config files correspond to the store files being used for claiming 1.4.19/20/21 proofs, or claiming earned QUIL will fail (the proofs are bound to the key used in the config).


Quick update: we are now in the testnet verification process. Things are taking a lot longer than expected. To ensure nothing is dangerously rushed, we are setting a buffer period of up to 2024/07/24 for mainnet launch completion.


Update from @cassie:

On stage 2 of 4 of testnet verification, hopefully will have it wrapped up tonight. Right now it’s raw performance drills. If anything goes awry, I’ll let y’all know about schedule updates.

Basically why these tests are so important comes down to the topic of each stage:
Stage 1. Can it be clobbered by Sybils and effectively DDoS’d? We’ve cleared this stage; things are good.
Stage 2. Are dummy imported states from pre-mainnet correct and transactable? Do they have the expected performance under optimal conditions? Closing out this stage tonight, but so far, so good.
Stage 3. Does finalized pre-mainnet data import correctly? Does the bridge behave as expected? This will be a quicker evaluation than the previous stages.
Stage 4, the final check. When bandwidth is unilaterally constrained to incredibly bad conditions (~10 Mbps), how do we cope? Does it handle things correctly, or does it degrade severely enough that bootstrap nodes have to act as guardians?


Update from @cassie on https://status.quilibrium.com/

Performance degradation identified under specific user-invoked conditions, cases identified and fix being implemented. Stage 2 will need to be repeated post-update. Given evaluation time, adjusted ETA: Sat, Jul 27, PDT

More context:

High level: performance degradation conditions were found and are being fixed, but this will require a rerun of stage 2 to get final performance numbers. That pushes the ETA out to 7/27, but it would have been a really painful bug: if exploited, it would have slowed the network down (though without harming privacy or security).

Longer version for the tech folks: I found a repeatable condition where specific user-invokable behaviors nuke performance slow-loris style under the mixnet, allowing one user to clog the mixnet for a full frame duration before aborting, resulting in an empty set of transactions in a frame. I’m adjusting the identifiable abort behaviors of our RPM implementation, but once I do this I’ll have to rerun part of the stage 2 tests to verify the impact on max optimal performance. I will share those numbers after the stage 2 tests are run again, but where things were at in perf (minus the bug) is going to raise a lot of eyebrows.
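As a toy illustration of that failure mode (assumptions: this is not the actual RPM/mixnet code; evaluateFrame and the channel plumbing are made up for the example), here is a Go sketch showing how one silent submitter could burn the whole frame and leave it with zero transactions:

```go
package main

import (
	"fmt"
	"time"
)

// Toy model of the pre-fix behavior: the frame's transaction set depends on
// every submitter delivering its masked payload before the frame deadline.
func evaluateFrame(payloads []<-chan []byte, frameDuration time.Duration) [][]byte {
	deadline := time.After(frameDuration)
	var txs [][]byte
	for _, ch := range payloads {
		select {
		case p := <-ch:
			txs = append(txs, p)
		case <-deadline:
			// One submitter going silent (slow-loris style) forces an abort:
			// the whole frame duration elapses and it settles with zero transactions.
			return nil
		}
	}
	return txs
}

func main() {
	honest := make(chan []byte, 1)
	honest <- []byte("tx-from-honest-submitter")
	silent := make(chan []byte) // never sends: simulates the stalling submitter

	txs := evaluateFrame([]<-chan []byte{honest, silent}, 100*time.Millisecond)
	fmt.Printf("transactions settled in frame: %d\n", len(txs)) // 0: the frame was emptied
}
```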


Update from @cassie:

Working through the night to get this RPM stall issue resolved. I’m on the right track, but given this is security sensitive code it can’t be rushed. I’ll post an update as soon as I’ve cleared the problem.
For those familiar with the paper: I’ve switched to a tradeoff using a higher-degree polynomial; this lowers the number of active mixers in the equation but still requires the same total number of nodes to complete it. I also moved the user out of the loop on processing it, but added an extra check on the share step, a ZKPoK, to retain malicious security when distributing shares. The original paper didn’t do this at all (fine for firm offline/online boundaries, not fine for live systems that alternate matrix generation), which is why I had to loop the user in in the first place.
This should net higher performance overall and prevent user-driven halts.
The reduction in bandwidth involved means that we’ll have even more TPS.
It will also be really useful when we get to AI/ML primitives, because the overhead was going to be pretty rough before.


Update from @cassie:

Alright, finally found a resolution for the bug; will get it wired up with the new pattern and resume stage two.


As posted on the status page:

2.0 Upgrade: Testnet Verification: Stage 2 Resumed

The user-driven halt bug in the mixnet has been resolved, and Stage 2 of the Testnet verification process has resumed following the emergency patch update of 1.4.21.1. Additional test cases have been added to the verification battery for the updated transaction settlement interaction flows. Updated ETA: Fri, Aug 2, PDT

Deeper details:

The previous interaction model for transactions required online interactivity between the submitter of a transaction and the nodes in the shard, to prevent the nodes forming the mixnet from colluding to identify the submitter. This had the side effect that the submitter had to be present for the first half of the mixnet generation process as well as the second half, in order to submit the masked transaction payload prior to mixnet evaluation. Because of that separation, a submitter could provide the mask but not the transaction payload, essentially forcing an abort on the mixnet evaluation, which would consume the entire frame period. While this would be an identifiable abort and the submitter could be blacklisted by nodes, it could be leveraged as a simple DoS vector. The RPM paper on which the mixnet is based provides no guidance for this; as best I can tell, it assumes identifiable aborts are an acceptable consequence. In the fully separated mixnet model, where there is no interactivity between submitter and mixnet, an identifiable abort would remove only a mixer, which is indeed an acceptable consequence. To retain the same privacy-preserving guarantees against mixnet collusion, the transaction share submission step will become two rounds, with a ZKPoK and a decommitment, such that if a decommitment doesn’t land, the transaction is simply a no-op.
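Here is a rough Go sketch of that two-round flow, under assumptions: shareSubmission, settleSlot, verifyZKPoK, and verifyOpening are illustrative placeholders, not the real implementation. The point it demonstrates is that a missing or invalid decommitment degrades to a per-slot no-op rather than an abort of the whole mixnet evaluation.

```go
package main

import (
	"bytes"
	"fmt"
	"time"
)

// Round 1: the submitter provides a binding commitment to its masked
// transaction share plus a ZKPoK that it knows a valid opening.
type shareSubmission struct {
	commitment []byte
	proof      []byte
}

// Placeholder verifiers; the real checks are the ZKPoK and commitment
// opening described in the post, not trivial byte comparisons.
func verifyZKPoK(commitment, proof []byte) bool     { return len(proof) > 0 }
func verifyOpening(commitment, opening []byte) bool { return bytes.Equal(commitment, opening) }

// settleSlot models round 2: wait for the decommitment until the frame
// deadline. If it never lands (or fails to verify), the slot becomes a no-op
// instead of forcing an abort of the whole mixnet evaluation.
func settleSlot(sub shareSubmission, decommit <-chan []byte, frameDeadline time.Time) ([]byte, bool) {
	if !verifyZKPoK(sub.commitment, sub.proof) {
		return nil, false
	}
	select {
	case opening := <-decommit:
		if !verifyOpening(sub.commitment, opening) {
			return nil, false
		}
		return opening, true
	case <-time.After(time.Until(frameDeadline)):
		return nil, false
	}
}

func main() {
	deadline := time.Now().Add(100 * time.Millisecond)

	// Honest submitter: the decommitment arrives and matches the commitment.
	ok := make(chan []byte, 1)
	ok <- []byte("masked-share")
	_, settled := settleSlot(shareSubmission{[]byte("masked-share"), []byte("zkpok")}, ok, deadline)
	fmt.Println("honest slot settled:", settled) // true

	// Stalling submitter: no decommitment ever lands; the slot is simply a no-op.
	stall := make(chan []byte)
	_, settled = settleSlot(shareSubmission{[]byte("masked-share"), []byte("zkpok")}, stall, deadline)
	fmt.Println("stalling slot settled:", settled) // false: no frame-wide abort
}
```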


2.0 Upgrade: Testnet Verification: Stage 3

Stage 2 of Testnet Verification has concluded, and Stage 3 is now underway: replaying the finalized dataset and validating the bridge. After Stage 3 concludes, we will enter the final stage of verification before mainnet release: constrained-conditions testing.


Update from @cassie:

Alright folks, we are finally in the home stretch. Stage four has begun, but yes, because of how long things have taken, it will bring us into Sunday. We’re almost at the finish line


Update on the status page:

Stage 4 is still in progress – the test scenarios are taking significantly longer than projected, but no issues or instabilities have arisen from the testing at this time. Bumping the ETA one final time to leave room for this stage to complete: 2.0 will launch Aug 7, 10pm PDT. We appreciate your patience.


I’ve never seen such an untrustworthy project, especially one with announcements from the founder themselves. Looking at the update schedule for version 2.0, you’ve postponed the release from July 20 to July 27, then August 2, August 4, and August 7. Will there be another delay by the 7th?