Sequence Recognition with Recurrent Neural Networks

Document Type

Article

Publication Date

1-1-1993

Abstract

The simple recurrent network (SRN) introduced by Elman (1990) can be trained to predict each successive symbol of any sequence in a particular language, and thus act as a recognizer of the language. Here we show several conditions occurring within the class of regular languages that result in recognition failure by any SRN with a limited number of nodes in the hidden layer. Simulation experiments show how modified versions of the SRN can overcome these failure conditions. In one case, it is found to be necessary to train the SRN to show at its output units both the current input symbol and the predicted symbol. In another case, the SRN must show the current contents of the context units. It is shown that the SRN with both modifications, called the auto-associative recurrent network (AARN), overcomes the identified conditions for SRN failure even when they occur simultaneously. However, it cannot be trained to recognize all of the regular languages. © 1993, Taylor & Francis Group, LLC. All rights reserved.
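
Since the abstract turns on the SRN's next-symbol prediction scheme, a minimal sketch may help readers unfamiliar with the architecture. This is a generic Elman-style network in Python/NumPy with hypothetical names (`SRN`, `train_step`), not the paper's code: the hidden layer receives the current input plus context units that copy the previous hidden state, and the output predicts the next symbol. The AARN modification described above would additionally include the current input symbol and the context activations among the output targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class SRN:
    """Elman-style simple recurrent network for next-symbol prediction."""

    def __init__(self, n_symbols, n_hidden, lr=0.1):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_symbols))   # input -> hidden
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))   # context -> hidden
        self.W_out = rng.normal(0, 0.5, (n_symbols, n_hidden))  # hidden -> output
        self.lr = lr
        self.context = np.zeros(n_hidden)  # copy of the previous hidden state

    def step(self, x):
        # Hidden activation combines the current input with the context units.
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        y = softmax(self.W_out @ h)  # predicted distribution over next symbols
        self.context = h             # context units copy the hidden layer
        return h, y

    def train_step(self, x, target):
        # Context is treated as a fixed extra input (Elman's truncated
        # training), so no gradient flows further back through time.
        ctx = self.context.copy()
        h, y = self.step(x)
        d_out = y - target                        # cross-entropy/softmax gradient
        d_h = (self.W_out.T @ d_out) * (1 - h**2)  # tanh derivative
        self.W_out -= self.lr * np.outer(d_out, h)
        self.W_in -= self.lr * np.outer(d_h, x)
        self.W_ctx -= self.lr * np.outer(d_h, ctx)
        return -np.log(y[target.argmax()] + 1e-12)  # prediction loss

# Usage: learn to predict the next symbol in repetitions of "abc",
# a trivially regular sequence, using one-hot symbol coding.
symbols = "abc"
one_hot = {s: np.eye(3)[i] for i, s in enumerate(symbols)}
net = SRN(n_symbols=3, n_hidden=8)
seq = "abc" * 200
for cur, nxt in zip(seq, seq[1:]):
    net.train_step(one_hot[cur], one_hot[nxt])
```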

Identifier

21344478893 (Scopus)

Publication Title

Connection Science

External Full Text Location

https://doi.org/10.1080/09540099308915692

e-ISSN

1360-0494

ISSN

0954-0091

First Page

139

Last Page

152

Issue

2

Volume

5

Fund Ref

Astrobiology Research Trust

