This paper investigates the complexity of functions computed by deep feedforward neural networks with piecewise linear activations, focusing on the number of linear regions these functions have. The authors argue that deep networks can map different portions of their input space to the same output, allowing them to compute highly complex and structured functions by reusing low-level computations exponentially often. They provide theoretical results showing that deep networks with rectifier and maxout units can compute functions with exponentially more linear regions than shallow networks, even with a moderate number of hidden layers. The analysis is not limited to a specific model family and extends to other types of networks, such as convolutional networks. The paper contributes to understanding the advantage of depth in neural networks and provides new complexity bounds for rectifier and maxout networks.
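
To make the "reusing low-level computations" intuition concrete, below is a small illustrative Python sketch (my own construction for this review, not the paper's exact argument): a 1-D rectifier network in which each hidden layer of 2 units "folds" the input interval, so the number of linear pieces doubles with every layer, while a shallow network with the same total number of units can only produce linearly many pieces.

```python
import numpy as np

def hat(x):
    # Piecewise-linear "hat" map on [0, 1]: rises to 1 at x = 0.5, falls back to 0.
    # Built from 2 rectifier units: hat(x) = 2*relu(x) - 4*relu(x - 0.5).
    relu = lambda z: np.maximum(z, 0.0)
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_fold(x, depth):
    # Compose the hat map `depth` times; each composition doubles the linear pieces.
    for _ in range(depth):
        x = hat(x)
    return x

def count_linear_pieces(f, depth, n_grid=200_001):
    # Estimate the number of linear pieces numerically by detecting slope changes
    # of f on a fine grid over [0, 1].
    xs = np.linspace(0.0, 1.0, n_grid)
    ys = f(xs, depth)
    slopes = np.diff(ys) / np.diff(xs)
    changes = np.sum(np.abs(np.diff(slopes)) > 1e-6)
    return changes + 1

for L in range(1, 6):
    pieces = count_linear_pieces(deep_fold, L)
    print(f"depth {L} (2 units/layer, 2*{L} units total): ~{pieces} linear pieces; "
          f"a 1-hidden-layer net with {2*L} units has at most {2*L + 1} pieces on a 1-D input")
```

Running this prints roughly 2, 4, 8, 16, 32 pieces for depths 1 through 5, illustrating the exponential-versus-linear gap the paper quantifies in much greater generality (multivariate inputs, rectifier and maxout layers of arbitrary widths).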