On Lattices, Learning with Errors, Random Linear Codes, and Cryptography

January 9, 2024 | Oded Regev
This paper presents a reduction from worst-case lattice problems such as GAPSVP and SIVP to a learning problem that is a natural extension of the "learning from parity with error" problem to higher moduli; equivalently, the learning problem can be viewed as the problem of decoding a random linear code. The reduction is quantum: an efficient solution to the learning problem would yield an efficient quantum algorithm for GAPSVP and SIVP. A main open question is whether the reduction can be made classical.

The paper also presents a classical public-key cryptosystem whose security is based on the hardness of the learning problem. The cryptosystem is more efficient than previous lattice-based cryptosystems, with a public key of size $ \tilde{O}(n^2) $ and encryption increasing the message size by a factor of $ \tilde{O}(n) $; under certain conditions, the public key size can be reduced to $ \tilde{O}(n) $. The main theorem states that for appropriate choices of the modulus $ p $ and the error distribution $ \chi $, a solution to $ LWE_{p,\chi} $ implies a quantum solution to worst-case lattice problems.
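To make the learning problem concrete, the following is a minimal sketch of how LWE instances look: the solver is given pairs $ (a_i, b_i) $ with $ b_i = \langle a_i, s \rangle + e_i \bmod p $ and must recover the secret $ s $. The parameter values below are illustrative assumptions, not the paper's exact choices (the paper takes $ p $ polynomial in $ n $ and $ \chi $ a discretized Gaussian).

```python
import numpy as np

# Illustrative parameters only; the paper takes p polynomial in n and the
# error distribution chi to be a discretized Gaussian of relative width alpha.
n = 64         # dimension of the secret
p = 4093       # modulus
m = 512        # number of samples handed to the solver
alpha = 0.005  # relative noise width (assumed value, for illustration)

rng = np.random.default_rng(0)
s = rng.integers(0, p, size=n)  # the hidden secret s in Z_p^n

def lwe_sample():
    """Return one sample (a, b) with b = <a, s> + e (mod p)."""
    a = rng.integers(0, p, size=n)              # uniform a in Z_p^n
    e = int(np.rint(rng.normal(0, alpha * p)))  # discretized Gaussian error
    return a, (int(a @ s) + e) % p

samples = [lwe_sample() for _ in range(m)]
# LWE_{p,chi}: recover s from `samples`.  Without the error e this is plain
# Gaussian elimination; with the error it is conjectured to be hard.
```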
The paper further discusses the hardness of the learning-from-parity-with-errors problem and the cryptographic implications of the main theorem. It presents a public-key cryptosystem based on the learning problem whose security rests on the worst-case quantum hardness of lattice problems; the cryptosystem is more efficient than previous lattice-based constructions and, under certain conditions, can be made practical. The paper also examines the quantum nature of the reduction and the difficulty of making it classical, showing that quantum computation is needed in one step of the proof of the main theorem, and it describes the role of the Fourier transform of Gaussian measures and the smoothing parameter in the proof. It concludes with open questions about the hardness of the learning problem and the prospects for dequantizing the main theorem.
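To convey the flavor of such an LWE-based cryptosystem, here is a hedged single-bit sketch: the public key is a batch of LWE samples, encryption sums a random subset of them and adds $ \lfloor p/2 \rfloor $ times the message bit, and decryption checks whether the masked value is closer to $ 0 $ or to $ p/2 $. The parameters and noise width below are toy values chosen only so the example runs, not the values required by the security proof.

```python
import numpy as np

# Toy parameters for illustration only; the security argument requires
# specific relations between n, m, p, and the noise width.
n, m, p = 64, 512, 4093
alpha = 0.0005
rng = np.random.default_rng(1)

def keygen():
    """Private key: s.  Public key: m LWE samples (A, b) with b = A s + e (mod p)."""
    s = rng.integers(0, p, size=n)
    A = rng.integers(0, p, size=(m, n))
    e = np.rint(rng.normal(0, alpha * p, size=m)).astype(int)
    return s, (A, (A @ s + e) % p)

def encrypt(pk, bit):
    """Sum a random subset of the public samples; encode the bit as 0 or floor(p/2)."""
    A, b = pk
    subset = rng.integers(0, 2, size=m).astype(bool)  # random subset of [m]
    c1 = A[subset].sum(axis=0) % p
    c2 = (int(b[subset].sum()) + bit * (p // 2)) % p
    return c1, c2

def decrypt(s, ct):
    """c2 - <c1, s> mod p is close to 0 for bit 0 and close to p/2 for bit 1."""
    c1, c2 = ct
    d = (c2 - int(c1 @ s)) % p
    return 0 if min(d, p - d) < p // 4 else 1

s, pk = keygen()
assert all(decrypt(s, encrypt(pk, b)) == b for b in (0, 1))
```

Decryption succeeds because the accumulated error (a sum of at most $ m $ small Gaussian terms) stays well below $ p/4 $ for these parameter choices.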