The Expressive Power of Neural Networks: A View from the Width

1 Nov 2017 | Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang
The paper "The Expressive Power of Neural Networks: A View from the Width" by Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang explores the expressive power of neural networks, focusing on how width affects their ability to approximate functions. The authors present a universal approximation theorem for width-bounded ReLU networks, showing that width-$(n+4)$ ReLU networks can approximate any Lebesgue integrable function on an $n$-dimensional space with respect to $L^1$ distance. They also demonstrate a phase transition, where width-$n$ ReLU networks cannot approximate all functions. The paper further investigates the role of width in the expressive power of neural networks, proving that there exist wide networks that cannot be approximated by narrow networks with a polynomial increase in depth. Experimental results support the idea that a polynomial increase in depth may be sufficient for narrow networks to approximate wide and shallow networks. The authors conclude that both depth and width are crucial for understanding the expressive power of neural networks, and their contributions provide a more comprehensive understanding of the role of width in this context.The paper "The Expressive Power of Neural Networks: A View from the Width" by Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang explores the expressive power of neural networks, focusing on how width affects their ability to approximate functions. The authors present a universal approximation theorem for width-bounded ReLU networks, showing that width-$(n+4)$ ReLU networks can approximate any Lebesgue integrable function on an $n$-dimensional space with respect to $L^1$ distance. They also demonstrate a phase transition, where width-$n$ ReLU networks cannot approximate all functions. The paper further investigates the role of width in the expressive power of neural networks, proving that there exist wide networks that cannot be approximated by narrow networks with a polynomial increase in depth. Experimental results support the idea that a polynomial increase in depth may be sufficient for narrow networks to approximate wide and shallow networks. The authors conclude that both depth and width are crucial for understanding the expressive power of neural networks, and their contributions provide a more comprehensive understanding of the role of width in this context.