
Neural and automata networks. Dynamical behavior and applications. (English) Zbl 1103.37302

Mathematics and its Applications (Dordrecht) 58. Dordrecht: Kluwer Academic Publishers (ISBN 0-7923-0632-5). xiii, 250 p. (1990).
This book is devoted to the dynamic behaviour of threshold automata networks. Such systems consist of large collections of locally interconnected cells which evolve at discrete time steps through mutual interactions. Threshold functions are used as transition functions, by analogy with the behaviour of neurons. McCulloch and Pitts introduced this model in 1943 as a simplified model of the nervous system, which is why such networks are also called neural networks. Despite their apparent simplicity, neural networks with a potentially infinite number of cells can simulate Turing machines.
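To fix ideas, the following minimal sketch (not taken from the book; the weight matrix \(A\), the thresholds \(b\) and the 0/1 state convention are illustrative assumptions) shows a small threshold network of the McCulloch-Pitts type iterated synchronously:

```python
import numpy as np

def threshold_step(x, A, b):
    """One synchronous update of a McCulloch-Pitts threshold network:
    cell i switches on iff its weighted input sum reaches its threshold."""
    return (A @ x >= b).astype(int)

# A hypothetical 3-cell network in which each cell listens to the other two.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
b = np.array([1, 1, 2])      # per-cell thresholds
x = np.array([1, 0, 0])      # initial configuration

for t in range(4):
    print(t, x)
    x = threshold_step(x, A, b)
```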
As shown in the book, statistical physics is one area of application for this model. On the other hand, the growing interest in this model today stems from its application to artificial intelligence. Indeed, neural networks are well suited to the problem of learning from examples. In this approach, known in the literature as “connectionism”, cells progressively update their connection weights so as to encode the examples, and the retrieval process consists in iterating the network.
This highlights the importance of the dynamic behaviour of neural networks, as presented in this book. Given the sequence of states \(\{x^t;\; t\geq 0\}\) assumed by a network at successive time steps, the problem is to determine the period \(p\) and the transient length \(T\), characterized by \(x^{t+p}=x^t\) for all \(t\geq T\), and \(x^{t+p'}\neq x^t\) whenever \(t<T\) or \(p'<p\).
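Since the state space of a finite network is finite, \(T\) and \(p\) can be computed by iterating until the first repeated configuration; a minimal sketch (with illustrative parameters, not from the book):

```python
import numpy as np

def period_and_transient(step, x0):
    """Return (T, p) for the orbit of x0 under the deterministic map
    `step`: the first repeat x^{T+p} = x^T determines both quantities."""
    seen, x, t = {}, tuple(x0), 0
    while x not in seen:
        seen[x] = t
        x = tuple(step(np.array(x)))
        t += 1
    T = seen[x]
    return T, t - T

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # illustrative weights
b = np.array([1, 1, 2])
step = lambda x: (A @ x >= b).astype(int)
print(period_and_transient(step, [1, 0, 0]))      # (0, 2): T = 0, p = 2
```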
Since a neural network can simulate a Turing machine, the general problem of analyzing a sequence generated by such a system is undecidable. As a consequence, the book addresses particular classes of networks. An important distinction concerns the operation mode: the most important modes are the parallel iteration mode, where all cells are activated simultaneously, and the serial iteration mode, where the cells are activated sequentially in a predetermined order.
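The distinction is easy to make concrete. In the sketch below (illustrative parameters, not from the book), the parallel sweep lets two mutually excitatory cells swap states forever, whereas the serial sweep, in which each cell sees the updates already made, kills the oscillation:

```python
import numpy as np

def parallel_step(x, A, b):
    """Parallel mode: every cell reads the previous configuration."""
    return (A @ x >= b).astype(int)

def serial_step(x, A, b):
    """Serial mode: cells fire one after another in index order,
    each seeing the updates already made during the sweep."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = int(A[i] @ x >= b[i])
    return x

A = np.array([[0, 1], [1, 0]])   # two mutually excitatory cells
b = np.array([1, 1])
x = np.array([1, 0])
print(parallel_step(x, A, b))    # [0 1] -- the cells swap states (p = 2)
print(serial_step(x, A, b))      # [0 0] -- cell 1 sees cell 0 already off
```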
The main mathematical tool presented in the book is the concept of a Lyapunov functional, defined by \(E(x^t)=-\langle x^t, A x^{t-1}\rangle+\langle b, x^t+x^{t-1}\rangle\), where \(A\) and \(b\) are the parameters of the threshold function. This functional, first introduced by Hopfield, is strictly decreasing and bounded from below in the transient phase of various classes of networks. For instance, if \(m\leq E(x^t)\leq M\), and \(E(x^{t+1})<E(x^t)-c\) whenever \(x^{t+1}\neq x^t\), then the limit orbit is a fixed point, i.e., \(p=1\), and the transient length satisfies \(T\leq (M-m)/c\).
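The monotonicity is easy to observe numerically. In the sketch below (illustrative symmetric weights, not from the book), \(E\) decreases along the transient and then stays constant once the fixed point is reached:

```python
import numpy as np

def E(x_t, x_prev, A, b):
    """The functional E(x^t) = -<x^t, A x^{t-1}> + <b, x^t + x^{t-1}>."""
    return -x_t @ (A @ x_prev) + b @ (x_t + x_prev)

A = np.array([[0, 2, 1], [2, 0, 1], [1, 1, 0]])   # symmetric, illustrative
b = np.array([1, 1, 1])
x = np.array([0, 0, 1])
for t in range(4):
    x_new = (A @ x >= b).astype(int)
    print(t + 1, x_new, E(x_new, x, A, b))        # E: 1, -1, -2, -2
    x = x_new
```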
This approach is used to show that if \(A\) is symmetric with nonnegative diagonal elements, then \(p=1\) for discrete neural networks operating sequentially. However, the transient length depends heavily on the entries of \(A\), and may be exponential in the number of cells. The remark of Section 3.7 on simulating parallel iterations by appropriate sequential iterations can be used to derive that \(p=2\) for parallel iteration with symmetric matrices, as well as properties of some classes of iterations with memory. This fact is not made explicit in the book. Indeed, the presentation might have been clearer and more concise had the authors, who are also the originators of most of the results in the area, not followed the chronological order of their contributions so closely. The same remark holds for the notion of algebraic invariant, presented in detail in Chapter 2 to derive results which also appear in Chapter 3.
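The bound \(p\le 2\) for symmetric matrices under parallel iteration is easy to test empirically. The following sketch (random integer weights, arbitrary sizes, not from the book) exhaustively checks every orbit:

```python
import numpy as np
from itertools import product

def period(step, x0):
    """Length of the limit cycle reached from x0 under `step`."""
    seen, x, t = {}, tuple(x0), 0
    while x not in seen:
        seen[x] = t
        x = tuple(step(np.array(x)))
        t += 1
    return t - seen[x]

rng = np.random.default_rng(0)
n = 5
for _ in range(20):
    M = rng.integers(-3, 4, size=(n, n))
    A, b = M + M.T, rng.integers(-3, 4, size=n)    # A is symmetric
    step = lambda x: (A @ x >= b).astype(int)
    # Every orbit of the parallel iteration should have period 1 or 2.
    assert all(period(step, x0) in (1, 2) for x0 in product((0, 1), repeat=n))
```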
The concept of Lyapunov functionals is also used to show, among other things, that \(p=4\) for discrete antisymmetric neural networks operating in parallel, and that, if the state set is continuous and the cells evolve in parallel, then \(p=1\) if \(A\) is positive definite, and \(p=2\) if \(A\) is negative definite. For uniform one-dimensional networks, ad hoc tools can be used to show that \(p\leq 4\). Moreover, in the particular case of the majority rule, \(p=1\), and the transient length is bounded by the number of cells.
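For the majority rule these bounds can be checked exhaustively on small lines. The sketch below assumes fixed zero boundaries (one common convention; the book's exact boundary treatment may differ) and reports the largest period and transient observed:

```python
import numpy as np
from itertools import product

def majority_step(x):
    """Parallel majority over {left, self, right} on a line,
    with fixed zero boundary cells (an illustrative convention)."""
    p = np.concatenate(([0], x, [0]))
    return (p[:-2] + p[1:-1] + p[2:] >= 2).astype(int)

def transient_and_period(x0):
    seen, x, t = {}, tuple(x0), 0
    while x not in seen:
        seen[x] = t
        x = tuple(majority_step(np.array(x)))
        t += 1
    return seen[x], t - seen[x]

n = 8
stats = [transient_and_period(x0) for x0 in product((0, 1), repeat=n)]
print(max(p for _, p in stats), max(T for T, _ in stats))   # worst p and T
```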
Relations with optimization problems are presented in Chapter 5, and applications to the thermodynamic limit on the Bethe lattice are studied in Chapter 6. Finally, a dynamics which can be analyzed via Lyapunov functionals is studied in Chapter 7, in connection with the Potts model, with applications to image smoothing and combinatorial optimization problems.
In conclusion, this book presents a broad mathematical framework for the study of dynamical neural networks, and gives an up-to-date review of the main results in the area. Unlike statistical physics, which is treated in Chapters 6 and 7, applications to connectionism are not covered, but this hardly matters, since several books deal with this aspect [see, e.g., F. Fogelman-Soulié, Y. Robert and M. Tchuente (eds.), Automata networks in computer science: theory and applications. Manchester University Press and Princeton University Press (1987)].

MSC:

37B15 Dynamical aspects of cellular automata
37-02 Research exposition (monographs, survey articles) pertaining to dynamical systems and ergodic theory
68Q80 Cellular automata (computational aspects)
68R99 Discrete mathematics in relation to computer science
68T05 Learning and adaptive systems in artificial intelligence
82C32 Neural nets applied to problems in time-dependent statistical mechanics