Dataflow matrix machines as generalized recurrent neural networks

Dec 27, 2016 01:29

A year ago I posted about dataflow programming and linear models of computation:

http://anhinga-anhinga.livejournal.com/82757.html

It turns out that those dataflow matrix machines are a fairly powerful generalization of recurrent neural networks.

advanced software, artificial intelligence, machine learning, strange technology, computer science, dataflow matrix machines, neural nets, software continualization


anhinga_anhinga December 28 2016, 19:13:07 UTC
I am trying to keep two angles of view on this subject at the same time: neural nets as a computational platform and neural nets as a machine learning platform.

The activation functions in that Wikipedia article I link above all have one argument (and one result). What if we allow two arguments? For example, what if we allow a neuron to accumulate two separate linear combinations on its two inputs during the "down movement", and to multiply them together during the "up movement"?

It turns out that this is very powerful. For example, if we think about one of those inputs as the "main signal", and about the other input as the "modulating signal", then what we get is a fuzzy conditional. By setting the modulating signal to zero, we can turn off parts of the network, and redirect the signal flow in the network. By setting the modulating signal to one, we just let the signal through. By setting it to something between 0 and 1, we can attenuate the signal, and by setting it above 1 or below 0, we can amplify or negate the signal.
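To make this concrete, here is a tiny sketch in Python/NumPy (the names are mine, not from the preprints) of one such two-argument neuron acting as a fuzzy conditional:

import numpy as np

# "Down movement": accumulate two separate linear combinations of the inputs.
# "Up movement": multiply the two accumulated values together.
def multiplicative_neuron(inputs, weights_main, weights_mod):
    main_signal = np.dot(weights_main, inputs)        # first linear combination
    modulating_signal = np.dot(weights_mod, inputs)   # second linear combination
    return main_signal * modulating_signal            # two-argument "activation": multiplication

x = np.array([0.5, -1.0, 2.0])
w_main = np.array([1.0, 0.0, 0.0])   # picks out the main signal (0.5)
w_off  = np.array([0.0, 0.0, 0.0])   # modulating signal 0: branch switched off
w_on   = np.array([0.0, 0.0, 0.5])   # modulating signal 1: signal passes through

print(multiplicative_neuron(x, w_main, w_off))   # 0.0 -> signal blocked
print(multiplicative_neuron(x, w_main, w_on))    # 0.5 -> signal passes unchanged

Values of the modulating signal between 0 and 1 attenuate the main signal, and values above 1 or below 0 amplify or negate it, exactly as described above.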

This has been understood for decades. In particular, the earliest proof of Turing completeness of RNNs known to me, from 1987, features multiplication neurons prominently:

http://www.demo.cs.brandeis.edu/papers/neuring.pdf

But then it was mostly forgotten. In the 1990s, people managed to prove Turing completeness of RNNs without multi-argument activation functions.

Of course, this does illustrate that theoretical Turing completeness and practical convenience of programming are not the same thing. (There was a recent talk by Edward Grefenstette of DeepMind, which I hope to discuss at some later point, arguing that the practical power of traditional RNNs is closer to that of finite state machines, their theoretical Turing completeness notwithstanding. One of the objectives of the DMM line of research is to boost the practical convenience of recurrent neural networks as a programming platform.)

The original RNNs had mediocre machine learning properties because of the problem of vanishing gradients. The first architecture which overcame this problem was LSTM in 1997:

https://en.wikipedia.org/wiki/Long_short-term_memory

LSTM and other architectures of this family eliminate the problem of vanishing gradients by introducing "memory" and "gates" (multiplicative masks) as additional mechanisms. However, a more straightforward way to think about this is to introduce neurons with linear activation functions for memory, and bilinear neurons (the multiplication neurons I am discussing here) for gates (for good references, see Appendix C of the latest preprint in this series, https://arxiv.org/abs/1610.00831 ).
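For example (a rough sketch in the same spirit, with my own names and with bias terms omitted for brevity), the standard LSTM cell-state update c_t = f_t * c_{t-1} + i_t * g_t can be read as two multiplication neurons feeding one memory neuron with a linear activation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_state_update(c_prev, x, h_prev, Wf, Wi, Wg):
    v = np.concatenate([x, h_prev])   # combined input to the gates
    f = sigmoid(Wf @ v)               # forget gate (modulating signal)
    i = sigmoid(Wi @ v)               # input gate (modulating signal)
    g = np.tanh(Wg @ v)               # candidate memory content (main signal)
    kept = f * c_prev                 # multiplication neuron #1
    written = i * g                   # multiplication neuron #2
    return kept + written             # memory neuron with a linear activation (plain sum)

Nothing here is an "extra mechanism": the gates are just multiplication neurons, and the memory is just a neuron whose activation function is the identity.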

So, here is the story. Neurons with two arguments are back; they are necessary for modern RNN architectures such as LSTM and Gated Recurrent Units to work. But the way these architectures are usually presented avoids saying explicitly that we have neurons with a two-argument activation function (namely, multiplication) here; instead, people talk about these things as "extra mechanisms added to RNNs", which makes them much more difficult to understand.



