REEVALUATING AMDAHL'S LAW

May 1988 | JOHN L. GUSTAFSON
John L. Gustafson reevaluates Amdahl's law in the context of massively parallel processing. Amdahl's law, proposed in 1967, suggests that the maximum speedup achievable with parallel processing is limited by the fraction of serial work, s. Gustafson argues that the assumptions underlying Amdahl's law do not apply to modern massively parallel systems, and he presents timing results from a 1024-processor system that challenge the law's validity for current approaches to massive parallelism.

Amdahl's law gives speedup as 1/(s + p/N), where s is the serial fraction, p = 1 - s is the parallel fraction, and N is the number of processors. Gustafson points out that this model assumes the problem size, and therefore p, stays fixed as N grows, which is not the case in practice. He instead introduces a scaled-size model in which the problem grows with the number of processors: the serial and parallel fractions are measured on the parallel system, and the scaled speedup becomes s + pN = N + (1 - N)s, a line with a much more moderate slope that allows far higher efficiency.

Gustafson argues that speedup should therefore be measured by scaling the problem to the number of processors, not by fixing the problem size. He presents results from three applications at Sandia that achieved unprecedented speedups, contradicting Amdahl's predictions and suggesting that very high efficiency is attainable on massively parallel systems.

Gustafson concludes that the computing research community should overcome the "mental block" against massive parallelism caused by the misuse of Amdahl's law. He expects to extend this success to a broader range of applications and even larger values of N.
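
As a minimal illustration of the two formulas above, the following Python sketch (not part of the original article) compares the fixed-size speedup 1/(s + p/N) with the scaled speedup N + (1 - N)s for N = 1024 processors. The serial fraction used here is an arbitrary illustrative value, not one of the measured Sandia figures.

    # Sketch comparing Amdahl's fixed-size speedup with Gustafson's
    # scaled speedup. The serial fraction below is an illustrative
    # assumption, not a value reported in the article.

    def amdahl_speedup(s: float, n: int) -> float:
        """Fixed-size model: speedup = 1 / (s + p/N), with p = 1 - s."""
        p = 1.0 - s
        return 1.0 / (s + p / n)

    def gustafson_speedup(s: float, n: int) -> float:
        """Scaled-size model: speedup = s + p*N = N + (1 - N)*s, with p = 1 - s."""
        return n + (1 - n) * s

    if __name__ == "__main__":
        n = 1024      # number of processors, as in the Sandia system
        s = 0.01      # assumed serial fraction (illustrative only)
        print(f"Amdahl    (s={s}, N={n}): {amdahl_speedup(s, n):8.1f}")
        print(f"Gustafson (s={s}, N={n}): {gustafson_speedup(s, n):8.1f}")
        # With s = 0.01 the fixed-size model caps the speedup near 91,
        # while the scaled model yields roughly 1014.

The point of the comparison is the one Gustafson makes: under the fixed-size assumption even a 1% serial fraction limits 1024 processors to a speedup of about 91, whereas when the problem scales with N the same serial fraction still permits a speedup close to N.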