Breaking RSA Using Shor's Algorithm

Our construction

Other physical gate error rates

Throughout this paper, we focus primarily on a physical gate error rate of 0.1%, because we consider this error rate to be plausibly achievable by quantum hardware, yet sufficiently low to enable error correction with the surface code. 

This being said, it is of course important to be conservative when estimating the security of cryptographic schemes. The reader may therefore also be interested in estimates for lower error rates. With this in mind, in addition to including a 0.01% curve in Figure 1, we present the following rule of thumb for translating to other error rates: 

$$V(p) \approx \frac{V(0.1\%)}{(-\log_{10}(p) - 2)^3}$$

Here V(p) is the expected spacetime volume, assuming a physical gate error rate of p. The rule of thumb is based on the following observation: 

Each time the code distance of a construction is increased by 2, logical errors are suppressed by a factor roughly equal to the ratio between the code's threshold error rate and the hardware's physical error rate. The surface code has a threshold error rate of around 1%. For example, this means that at a physical error rate of 0.01% each increase of the code distance by 2 suppresses logical errors by roughly 100x, whereas at a physical error rate of 0.1% the same suppression would require increasing the code distance by 4. Consequently, code distances at a physical error rate of 0.01% tend to be half as large as they are at 0.1%. The code distance is a linear measure, and we are packing the computation into a three-dimensional spacetime (one of the dimensions being time), so proportional improvements to the code distance get cubed.
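To make the distance-halving part of this observation concrete, here is a minimal Python sketch. It relies only on the per-step suppression rule just described; the ~1% threshold is the value mentioned above, while the suppression target of twelve orders of magnitude and the helper name are illustrative assumptions rather than parameters taken from our construction.

```python
# Minimal sketch of the distance-halving argument, assuming only that each
# increase of the code distance by 2 suppresses logical errors by a factor of
# (threshold / physical error rate). The ~1% threshold comes from the text;
# the 12-orders-of-magnitude target and the helper name are illustrative.
import math

P_THRESHOLD = 1e-2  # assumed surface code threshold (~1%)


def min_code_distance(p_phys: float, suppression_orders: int = 12) -> int:
    """Smallest odd code distance giving roughly `suppression_orders` orders of
    magnitude of logical error suppression under the per-step rule."""
    orders_per_step = math.log10(P_THRESHOLD / p_phys)  # 1 at p=0.1%, 2 at p=0.01%
    steps = math.ceil(suppression_orders / orders_per_step - 1e-9)  # guard float noise
    return 2 * steps + 1


for p in (1e-3, 1e-4):
    print(f"p = {p:.2%}: ~{P_THRESHOLD / p:.0f}x suppression per +2 of distance, "
          f"d ≈ {min_code_distance(p)}")
# p = 0.10%: ~10x suppression per +2 of distance, d ≈ 25
# p = 0.01%: ~100x suppression per +2 of distance, d ≈ 13
```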

Hence the idea is that at a physical error rate of 0.01% code distances are half as big, meaning the computational pieces being packed into spacetime are 2^3 = 8 times smaller, meaning in turn that the optimal packing will also be roughly eight times smaller. Similarly, at an error rate of 0.001%, code distances are a third as big, so spacetime volumes should be roughly 3^3 = 27 times smaller.
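As a quick numerical check of this cubing argument, the following sketch evaluates the denominator of the rule of thumb stated earlier at a few error rates (the helper function and its name are purely illustrative):

```python
# Evaluate the rule of thumb V(p) ≈ V(0.1%) / (-log10(p) - 2)^3 at a few
# physical error rates to recover the 1x, 8x, and 27x figures quoted above.
import math


def volume_reduction_factor(p_phys: float) -> float:
    """The denominator (-log10(p) - 2)^3 of the rule of thumb, i.e. the factor
    by which the expected spacetime volume shrinks relative to the 0.1% baseline."""
    return (-math.log10(p_phys) - 2) ** 3


for p in (1e-3, 1e-4, 1e-5):
    print(f"p = {p:.3%}: V(p) ≈ V(0.1%) / {volume_reduction_factor(p):.0f}")
# p = 0.100%: V(p) ≈ V(0.1%) / 1
# p = 0.010%: V(p) ≈ V(0.1%) / 8
# p = 0.001%: V(p) ≈ V(0.1%) / 27
```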

Of course, this rule of thumb is imperfect. For example, it does not account for code distances having to be rounded to integer values, for the spacetime tradeoffs having to change because the reaction time of the control system stays constant as the code distance shrinks, or for the fact that at very low error rates one would switch to different error correcting codes. Regardless, for order-of-magnitude estimates involving physical error rates between 0.3% and 0.001%, we consider this rule of thumb to be a sufficient guide.