Gaussian Elimination is not Optimal

1969 | VOLKER STRASSEN
Strassen's paper presents an algorithm for multiplying two square matrices of order n with fewer than 4.7·n^(log 7) arithmetic operations (logarithms are to base 2, so log 7 ≈ 2.807), a marked improvement over the roughly 2n³ operations of the standard method. The same recursion yields methods for matrix inversion, solving linear systems of equations, and computing determinants, each in less than a constant times n^(log 7) operations.

The paper contrasts this with Gaussian elimination, which had been shown to be optimal among algorithms restricted to operations on rows and columns. It also notes that Winograd modified the existing algorithms, trading some multiplications for additional additions and subtractions. The core construction is a family of recursive algorithms α_{m,k} for multiplying matrices of order m·2^k. These use a divide-and-conquer approach: each matrix is split into four blocks, and the block products are formed from only seven recursive multiplications rather than the eight of the naive scheme. Counting operations shows that multiplying matrices of order 2^k requires 7^k multiplications and fewer than 6·7^k additions and subtractions; from these counts the paper derives the bound of fewer than 4.7·n^(log 7) arithmetic operations for multiplying two matrices of order n. A companion family of algorithms β_{m,k} inverts matrices of order m·2^k by a similar recursive block strategy; like Gaussian elimination, it assumes the matrix is invertible and that all divisions it performs are valid. The paper concludes that these algorithms provide asymptotically more efficient methods for the basic matrix operations than the traditional approaches.
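The seven-product recursion at the heart of the α_{m,k} algorithms can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `strassen`, the NumPy types, and the restriction to power-of-two order are choices of this sketch.

```python
import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) using Strassen's
    seven-product recursion. Illustrative sketch only."""
    n = A.shape[0]
    if n <= 2:
        # Small base case: fall back to ordinary multiplication.
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive block products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine the seven products into the four blocks of C = A @ B.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

Because each level replaces eight half-size multiplications with seven, the multiplication count satisfies T(n) = 7·T(n/2), which solves to the n^(log 7) growth the paper proves.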
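The block-recursive inversion behind the β_{m,k} algorithms can be sketched via the Schur complement. This is an illustrative reconstruction, not the paper's exact formulation: it uses ordinary `@` products where the paper would invoke the fast multiplication to reach the n^(log 7) bound, and, like the paper's algorithm, it assumes every leading block it must invert is itself invertible (no pivoting is performed).

```python
import numpy as np

def block_invert(M):
    """Invert an n x n matrix (n a power of 2) by recursive block
    elimination via the Schur complement. Illustrative sketch: assumes
    every leading block encountered is invertible, the same proviso
    the paper makes for Gaussian elimination."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    h = n // 2
    A11, A12, A21, A22 = M[:h, :h], M[:h, h:], M[h:, :h], M[h:, h:]
    I1 = block_invert(A11)            # A11^{-1}, recursively
    S = A22 - A21 @ I1 @ A12          # Schur complement of A11
    S_inv = block_invert(S)           # S^{-1}, recursively
    # Standard block-inverse formula assembled from I1 and S_inv.
    C12 = -I1 @ A12 @ S_inv
    C21 = -S_inv @ A21 @ I1
    C11 = I1 - I1 @ A12 @ C21
    C22 = S_inv
    return np.block([[C11, C12], [C21, C22]])
```

The recursion reduces one inversion of order n to two inversions of order n/2 plus a fixed number of multiplications, which is how the paper transfers the n^(log 7) multiplication bound to inversion, linear systems, and determinants.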