This paper presents improved practical algorithms for lattice basis reduction, focusing on the $ L^3 $-algorithm and its variants. The authors propose a practical floating-point version of the $ L^3 $-algorithm that is stable and efficient even for large integer inputs. They also introduce a variant of the $ L^3 $-algorithm with "deep insertions" and a practical algorithm for block Korkin–Zolotarev reduction. These algorithms are shown to solve subset sum problems with up to 66 random weights of arbitrary bit length within a few hours on a UNISYS 6000/70 computer, or within minutes on a SPARC 1+.
The $ L^3 $-algorithm is a polynomial-time algorithm that finds a non-zero vector in an $ m $-dimensional lattice guaranteed to be at most $ 2^{m/2} $ times the length of the shortest non-zero vector in that lattice. Its performance has since been improved by suitable modifications, and new lattice reduction algorithms have been developed.
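For concreteness, the following is a minimal sketch of the classical $ L^3 $ reduction with the common parameter $ \delta = 3/4 $: Gram–Schmidt orthogonalization, size reduction, and the swap test. It uses plain floating-point arithmetic and recomputes the Gram–Schmidt data after every swap for simplicity, so it illustrates the textbook algorithm rather than the paper's optimized floating-point variant; the function names are illustrative, not the authors'.

```python
def dot(u, v):
    """Inner product, computed in floating point for this sketch."""
    return sum(float(a) * float(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt vectors B* and coefficients mu of the basis B."""
    n = len(B)
    Bstar = [[float(x) for x in B[0]]]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        v = [float(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bstar[j])]
        Bstar.append(v)
    return Bstar, mu

def lll(B, delta=0.75):
    """Reduce the integer basis B (a list of integer lists) in place."""
    n = len(B)
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0, updating mu on the fly.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                for l in range(j):
                    mu[k][l] -= q * mu[j][l]
                mu[k][j] -= q
        # Lovasz condition: advance if it holds, otherwise swap and back up.
        if dot(Bstar[k], Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            Bstar, mu = gram_schmidt(B)  # full recompute; real code updates incrementally
            k = max(k - 1, 1)
    return B
```

Note that the swap step only ever exchanges $ b_k $ with its immediate predecessor $ b_{k-1} $; the deep-insertion variant discussed below relaxes exactly this restriction.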
The paper discusses the importance of finding reasonably short vectors in a random lattice for solving linear and non-linear integer programming problems. It also presents several attempts to improve the practical performance of the $ L^3 $-algorithm, including new algorithms for basis reduction in the square norm.
The authors also describe a practical algorithm for block Korkin–Zolotarev reduction and introduce a variant of the $ L^3 $-algorithm that uses "deep insertions." These algorithms produce considerably shorter lattice vectors than the original $ L^3 $-algorithm and perform well in practice, although they may be inefficient in the worst case.
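The deep-insertion idea can be made concrete in terms of the Gram–Schmidt quantities $ c_i = \|b_i^*\|^2 $ and the coefficients $ \mu_{k,i} $: instead of only testing a swap with $ b_{k-1} $, the algorithm checks whether moving $ b_k $ in front of some earlier $ b_i $ would shrink the $ i $-th Gram–Schmidt vector by more than the $ \delta $ factor. The sketch below is a simplified reading of that rule with hypothetical helper names; it omits the recomputation of the Gram–Schmidt data and the backtracking that must follow an actual insertion.

```python
def deep_insertion_position(mu, c, k, delta=0.75):
    """Return a position i < k where b_k should be inserted, or None.

    Assumes c[i] = ||b*_i||^2 and mu[k][i] for the current (size-reduced) b_k.
    """
    # Squared length of b_k: ||b*_k||^2 plus its projections onto b*_0..b*_{k-1}.
    proj = c[k] + sum(mu[k][j] ** 2 * c[j] for j in range(k))
    for i in range(k):
        if delta * c[i] > proj:
            # Inserting b_k at position i would make the new i-th Gram-Schmidt
            # vector shorter than delta * c[i], so a "deep" insertion pays off.
            return i
        proj -= mu[k][i] ** 2 * c[i]  # project b_k away from b*_i and move on
    return None

def deep_insert(B, k, i):
    """Move b_k in front of b_i; positions i, ..., k-1 shift right by one."""
    B.insert(i, B.pop(k))
```

Because the scan may move $ b_k $ far toward the front, individual stages can become expensive, which is consistent with the variant performing well in practice while lacking a good worst-case bound.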
The paper also discusses the application of these algorithms to the subset sum problem: given positive integers $ a_1, \ldots, a_n $ and $ s $, find $ x_1, \ldots, x_n \in \{0,1\} $ satisfying $ \sum_{i=1}^{n} a_i x_i = s $. The authors show that these algorithms solve almost all subset sum problems of sufficiently low or high density. They also compare the performance of these algorithms with other methods, such as the Lagarias–Odlyzko algorithm, and show that the new algorithms have a substantially improved success rate.
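To illustrate how a subset sum instance is turned into a lattice problem, the sketch below builds a Lagarias–Odlyzko-style basis in which a 0–1 solution corresponds to a short lattice vector. The paper's own basis and scaling differ in details (those refinements are part of its contribution), so this shows only the general shape of the encoding; the weight $ N $ is chosen arbitrarily here.

```python
def subset_sum_basis(a, s, N=None):
    """Return n+1 basis vectors of an (n+1)-dimensional integer lattice.

    If x in {0,1}^n solves sum(a_i * x_i) == s, then
    x_1*b_1 + ... + x_n*b_n - b_{n+1} = (x_1, ..., x_n, 0)
    is a short lattice vector that lattice basis reduction may find.
    """
    n = len(a)
    if N is None:
        N = n + 1                      # weight on the last coordinate (illustrative choice)
    basis = []
    for i in range(n):
        row = [0] * (n + 1)
        row[i] = 1                     # identity part records the coefficients x_i
        row[n] = N * a[i]              # weighted subset sum constraint
        basis.append(row)
    basis.append([0] * n + [N * s])    # the target row
    return basis

# Example: the short vector (1, 0, 1, 0) encodes the solution 3 + 8 = 11.
B = subset_sum_basis([3, 5, 8, 13], 11)
```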
The paper concludes with a discussion of the practical implementation of these algorithms, including the use of floating-point arithmetic and the importance of minimizing floating-point errors. The authors also mention the use of block Korkin–Zolotarev reduction with pruning to speed up the algorithm and improve its performance.
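One way to picture the kind of floating-point safeguard discussed is a scalar-product routine that falls back to exact integer arithmetic when cancellation destroys the double-precision result. The threshold and fallback policy below are assumptions made for this sketch, not the paper's exact correction rules.

```python
import math

# Roughly half of double precision's 53-bit mantissa; an illustrative choice.
CANCEL_THRESHOLD = 2.0 ** -26

def scalar_product(u, v):
    """Floating-point inner product of integer vectors with an exact fallback."""
    fp = sum(float(a) * float(b) for a, b in zip(u, v))
    bound = math.sqrt(dot_sq(u)) * math.sqrt(dot_sq(v))
    if abs(fp) < CANCEL_THRESHOLD * bound:
        # Heavy cancellation: the double-precision result may have lost all
        # significant digits, so recompute exactly with Python's big integers.
        return float(sum(a * b for a, b in zip(u, v)))
    return fp

def dot_sq(u):
    """Squared Euclidean norm in floating point."""
    return sum(float(a) ** 2 for a in u)
```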