New publications

Universal Approximation in Dropout Neural Networks

We have submitted Universal Approximation in Dropout Neural Networks, and it is currently under review. This is joint work between Oxana Manita, Mark Peletier, Jacobus Portegies, Albert Senen-Cerda, and myself. A preprint is available on arXiv.

Abstract

We prove two universal approximation theorems for a range of dropout neural networks. These are feed-forward neural networks in which each edge is given a random {0,1}-valued filter, and which have two modes of operation: in the first, each edge output is multiplied by its random filter, resulting in a random output, while in the second, each edge output is multiplied by the expectation of its filter, leading to a deterministic output. It is common to use the random mode during training and the deterministic mode during testing and prediction.
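
For readers who prefer code, here is a minimal sketch of the two modes for a single layer. It assumes independent Bernoulli filters with retention probability p on every edge; the class name, shapes, and NumPy implementation are illustrative and not taken from the paper.

```python
import numpy as np

class DropoutLayerSketch:
    """Illustrative single layer with the two dropout modes described above.
    Assumes an independent {0,1}-valued Bernoulli filter with mean p on every edge."""

    def __init__(self, weights, p=0.5, rng=None):
        self.weights = np.asarray(weights, dtype=float)  # edge weights, shape (n_out, n_in)
        self.p = p                                       # E[filter] for each edge
        self.rng = rng if rng is not None else np.random.default_rng()

    def forward_random(self, x):
        # Random mode: every edge output w_ij * x_j is multiplied by its own
        # {0,1}-valued filter, so the layer output is random.
        filters = self.rng.binomial(1, self.p, size=self.weights.shape)
        return (filters * self.weights) @ x

    def forward_deterministic(self, x):
        # Deterministic mode: every edge output is multiplied by the
        # expectation of its filter, E[filter] = p.
        return (self.p * self.weights) @ x
```

In this sketch the random mode would be used during training and the deterministic mode during testing and prediction, matching the convention described above.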

Both theorems are of the following form: given a function to approximate and a threshold ε>0, there exists a dropout network that is ε-close in probability and in L^q. The first theorem applies to dropout networks in the random mode. It makes only mild assumptions on the activation function, applies to a wide class of networks, and can even be applied to approximation schemes other than neural networks. Its core is an algebraic property showing that deterministic networks can be exactly matched in expectation by random networks. The second theorem makes stronger assumptions and gives a stronger result: given a function to approximate, it establishes the existence of a network that approximates in both modes simultaneously. The proof components are a recursive replacement of edges by independent copies and a special first-layer replacement that couples the resulting larger network to the input.
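
To make the algebraic core a little more concrete, here is a hedged sketch in LaTeX. The first display is the standard single-edge rescaling that realizes expectation matching; the paper's construction handles full networks and is more involved. The second display is only one schematic reading of "ε-close in probability and in L^q"; the precise hypotheses, norms, and statements are those of the preprint.

```latex
% Illustration only: for a single edge (or a single linear layer without a
% nonlinearity), replacing the deterministic weight w_e by w_e/p_e and
% attaching a filter \xi_e \sim \mathrm{Bernoulli}(p_e) with p_e > 0 gives
\[
  \mathbb{E}\!\left[ \xi_e \, \frac{w_e}{p_e} \, x \right]
  = p_e \cdot \frac{w_e}{p_e} \, x
  = w_e \, x ,
\]
% so the random-mode edge matches the deterministic edge in expectation.
% Schematically, the approximation guarantees in the abstract can be read as
\[
  \mathbb{P}\bigl( \lVert \widetilde{N}(x) - f(x) \rVert > \varepsilon \bigr) < \varepsilon
  \qquad \text{and} \qquad
  \lVert \widetilde{N} - f \rVert_{L^q} < \varepsilon ,
\]
% where f is the target function and \widetilde{N} the (random) dropout
% network; see the preprint for the exact formulation.
```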

The functions to be approximated are assumed to be elements of general normed spaces, and the approximations are measured in the corresponding norms. The networks are constructed explicitly. Because of the different methods of proof, the two results give independent insight into the approximation properties of random dropout networks. With this, we establish that dropout neural networks broadly satisfy a universal-approximation property.

Curious for more?

Head on over to My Articles for more of my work, and check out My Research for a peek into upcoming themes. You can also find out who is on our team right here: Academic Supervision.

Jaron
Jaron Sanders received M.Sc. degrees in Mathematics and Physics from the Eindhoven University of Technology, The Netherlands, in 2012, and a PhD degree in Mathematics in 2016. After obtaining his PhD, he worked as a post-doctoral researcher at the KTH Royal Institute of Technology in Stockholm, Sweden. He then worked as an assistant professor at the Delft University of Technology, and now works as an assistant professor at the Eindhoven University of Technology. His research interests are applied probability, queueing theory, stochastic optimization, stochastic networks, wireless networks, and interacting (particle) systems.
https://www.jaronsanders.nl