Artificial Neural Network for Character Recognition on Embedded-Based FPGA

Phaklen Ehkan*, Lee Yee Ann, Fazrul Faiz Zakaria, and M. Nazri M. Warip

School of Computer and Communication Engineering, Universiti Malaysia Perlis, Pauh Putra Campus, 02060 Arau, Perlis, Malaysia
{Phaklen,ffaiz,nazriwarip}@unimap.edu.my, [email protected]

* Corresponding author.

Abstract. An embedded system application involves a diverse set of skills that extend across traditional disciplinary boundaries, including computer hardware, software, algorithms, interfacing, and the application domain. Field Programmable Gate Arrays (FPGAs), which offer design flexibility like software but with performance closer to Application Specific Integrated Circuits (ASICs), are used as the embedded platform in this project. A character recognition system using an artificial neural network (ANN) approach is implemented on an Altera Cyclone II 2C35 FPGA device, and the results are very promising.

Keywords: FPGA, Artificial neural network, Character recognition.

1 Introduction

Recently, the electronic devices field has witnessed a great revolution with the birth of extraordinary FPGA platforms [1]. These platforms are an excellent choice for modern digital systems. The parallel structure of a NN makes it potentially fast for the computation of certain tasks. Hardware realization of a NN depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are well suited to the hardware implementation of NNs. An ANN is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements called neurons and is configured for a specific application, such as pattern recognition or classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. FPGA realization of ANNs with a large number of neurons is still a challenging task [2].

2 Biological Neuron

The ANN is a mathematical computational model that was inspired by the actual neurons of the nervous system of animals [3]. The human nervous system is made up of billions of neurons, or nerve cells. These cells are connected with each other by synapses, and there are trillions of synapses in the whole nervous system. The neuron cell consists of the soma, the axon, and dendrites (Fig. 1). The soma is the cell body of the neuron; it contains a nucleus which controls the function and operations of the cell. The neuron has a long extension that extends out from the soma, called the axon. The axon carries signals out from the soma to other neurons. At the end of the axon are terminals that connect to the dendrites of other neurons. These dendrites are signal receivers that carry signals from other neuron cells into the soma. The axons and dendrites are not directly connected or coupled together; instead, the signals carried by the axon of one neuron are transmitted to the dendrites of other neurons over very small gaps called synapses. In the entire nervous system, the neurons are connected into a network of neurons. This network allows the basic control of bodily functions, reflexes, and the memory, learning, emotion, and abstract thought observed in living beings. It consists of billions of neuron cells working continuously to relay, process, and store information. One ability of biological neurons that has become a highlight for researchers is their special ability to learn.

Fig. 1. Biological Neuron Cell [4]

3 Artificial Neural Network (ANN)

ANNs are computational models inspired by and based on the function and operation of biological neurons. They were developed to imitate the principles of computation performed by the biological NN [2]. Moreover, they attempt to apply the learning ability of the biological NN to computational models for information processing in computer systems. The ANN was developed as an alternative to the conventional general-purpose processor, which is mostly sequential, structured, and linear. Such processors are good at solving most problems that can be solved in a sequential manner, but are not very effective at problems such as pattern or character recognition and signal processing. The ANN is made up of a network of nodes, or artificial neurons. The complexity of a real neuron is abstracted into a node of the artificial neuron. This node basically consists of multiple inputs, each multiplied by a respective weight; a central node, analogous to the soma, that performs the computations and determines the activation of the action potential; and an output that carries the result out from the node to adjacent nodes in the network. These individual artificial neurons are then connected with other neurons to realise the ANN. The artificial neurons can be connected into straightforward single-layer networks, into small and simple multi-layer ANNs, or into massive and complex multi-layer ANNs (Fig. 2). The nodes exchange information with the other nodes in the network.


Fig. 2. Complex Multi-layer ANN [5]
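To make the node computation described above concrete, the following is a minimal software sketch of a single artificial neuron: each input is multiplied by its weight, the products are summed, and an activation function determines the output. The sigmoid activation and the bias term used here are illustrative assumptions; the paper does not specify the activation function of its FPGA design.

import math

def neuron_output(inputs, weights, bias=0.0):
    # Weighted sum of the inputs, analogous to the soma collecting dendrite signals
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    # Sigmoid activation (an assumption for illustration) decides the node output
    return 1.0 / (1.0 + math.exp(-total))

# Example: a node with three inputs
print(neuron_output([1.0, 0.0, 1.0], [0.4, -0.2, 0.7]))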

An ANN is a massively parallel computational system, and it is also highly distributed, like the biological NN in the brain. This parallel and distributed processing is achieved because each node can process the information from its inputs and compute its output individually, without relying much on the other nodes in the ANN. Another interesting characteristic of an ANN is fault tolerance. This characteristic is based on the brain's ability to create new connections to other neurons when a neuron is damaged. Most ANN implementations today are software-based. Hardware-based implementations do exist, but in very small numbers. Much can be gained by implementing the ANN directly in hardware, especially to fully exploit the parallelism of the ANN.

4 FPGA Implementation of ANN for Character Recognition

Fig. 3 shows the top-level development flow for the FPGA implementation of ANN for character recognition. The entire system consists of seven functional blocks and a few on-board devices (user I/O and memory devices).

4.1 ANN Block

The main processing block models the implementation of the ANN in the FPGA. It consists of 3 layers: input, hidden, and output layers. The input nodes are connected to the toggle switches used to enter the character pattern to be recognised into the system. The outputs from these input nodes are then multiplied by the weight factor associated with each input-hidden node connection before being fed to the nodes of the hidden layer. Each hidden node sums all the input-weight products of its respective input lines and performs a series of calculations. The calculated value is re-multiplied by the weight factor associated with each hidden-output node connection before being fed to the nodes of the output layer of the ANN. The ANN only performs feed-forward operations to recognise the user input pattern. Before that, it has to be trained to recognise and differentiate the different input patterns. A BP (backpropagation) algorithm is used for the learning process. The ANN is fed with predefined input patterns, and the output from the system is compared with the expected output value. If the output from the ANN differs from the expected value, the ANN calculates the difference between the two values and modifies the weight factor of each of its connections accordingly so that the output from the ANN matches the expected one. This learning process is repeated several times until the ANN is able to perform the required task efficiently. The learning process of the ANN block is supervised by the ANN training supervisor block.

The development flow shown in Fig. 3 proceeds through the following steps:

1. Develop the architecture, algorithm, and learning rule for the ANN to be implemented on the FPGA. The planned architecture for the ANN implemented on the FPGA device is described in VHDL [7]; this includes the learning algorithm and rule for the ANN to function.
2. Train the ANN for character recognition using the pre-planned learning rule. After compilation, the VHDL code is downloaded into the FPGA and the ANN is trained for the character recognition application using the developed rule so that it is able to recognise and classify all the training characters.
3. Deploy the ANN on actual data for character recognition. The ANN is later deployed for the character recognition application using actual data obtained from the user.
4. Data collection, analysis, and presentation. The resulting data obtained from the deployment of the ANN is collected and analysed, and the analysed data is presented in a format easily understood by other readers.

Fig. 3. Development Flow of FPGA Implementation of ANN for Character Recognition
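As a rough software analogue of the feed-forward pass and backpropagation weight update described above (this is not the authors' VHDL design; the layer sizes, learning rate, and sigmoid activation used here are illustrative assumptions):

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SimpleANN:
    # Illustrative 3-layer feed-forward network trained with backpropagation.
    # Layer sizes and learning rate are assumptions, not the paper's values.
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        rnd = random.Random(1)
        self.w_ih = [[rnd.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.w_ho = [[rnd.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        # Hidden layer: weighted sum of inputs, then activation
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w_ih]
        # Output layer: weighted sum of hidden outputs, then activation
        self.o = [sigmoid(sum(w * hi for w, hi in zip(row, self.h))) for row in self.w_ho]
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # Output-layer error terms: (t - o) * o * (1 - o) for a sigmoid unit
        d_out = [(t - oi) * oi * (1 - oi) for t, oi in zip(target, o)]
        # Hidden-layer error terms, backpropagated through the output weights
        d_hid = [self.h[j] * (1 - self.h[j]) *
                 sum(d_out[k] * self.w_ho[k][j] for k in range(len(d_out)))
                 for j in range(len(self.h))]
        # Adjust the weight factor of every connection
        for k, row in enumerate(self.w_ho):
            for j in range(len(row)):
                row[j] += self.lr * d_out[k] * self.h[j]
        for j, row in enumerate(self.w_ih):
            for i in range(len(row)):
                row[i] += self.lr * d_hid[j] * x[i]

# Example: train on one 16-bit pattern (a 4 x 4 grid flattened row by row)
net = SimpleANN(n_in=16, n_hid=8, n_out=4)
pattern = [1,0,0,1, 1,1,1,1, 1,0,0,1, 1,0,0,1]   # arbitrary illustrative pattern
target = [1, 0, 0, 0]                            # arbitrary one-hot target
for _ in range(1000):
    net.train_step(pattern, target)
print(net.forward(pattern))

In the FPGA design itself, the corresponding floating-point arithmetic is carried out by the dedicated floating point processor block.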

4.2 ANN Training Supervisor Block

This block supervises the training process of the ANN block. It feeds the predefined input patterns to the ANN during the learning phase and provides the expected output for each input pattern to be learned. It also formats the user input pattern and feeds the formatted user input to the ANN for processing. It then receives the output from the ANN and formats it for the LCD controller functional block to be displayed on the LCD.
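A hypothetical sketch of the supervisor's role during training: it holds a table of predefined patterns with their expected outputs and repeatedly presents each pair to the ANN. The character patterns, targets, and names below are illustrative only and are not taken from the paper.

# Hypothetical training table: character -> (flattened 4 x 4 pattern, one-hot target).
TRAINING_SET = {
    "A": ([0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,0,1], [1, 0]),
    "Z": ([1,1,1,1, 0,0,1,0, 0,1,0,0, 1,1,1,1], [0, 1]),
}

def supervise_training(train_step, epochs=1000):
    # Present every predefined pattern together with its expected output each epoch.
    # `train_step(pattern, target)` is any single-pattern training routine,
    # e.g. SimpleANN.train_step from the earlier sketch.
    for _ in range(epochs):
        for pattern, target in TRAINING_SET.values():
            train_step(pattern, target)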

4.3 Pseudo-Random Number Generator Block

This block generates the pseudo-random numbers to be assigned as the initial weight factors for all connections between the input and hidden nodes, and for all connections between the hidden and output nodes of the ANN. It uses a linear feedback shift register (LFSR) to generate the pseudo-random numbers.
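A small software model of such a generator, assuming a 16-bit Galois LFSR with a common maximal-length tap set (the paper does not state the register width or tap positions), and showing how the raw values might be scaled into small initial weights:

def lfsr16(seed=0xACE1):
    # 16-bit Galois LFSR; the toggle mask 0xB400 corresponds to the polynomial
    # x^16 + x^14 + x^13 + x^11 + 1, a common maximal-length choice (assumed here).
    state = seed & 0xFFFF
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield state

gen = lfsr16()
# Map the raw 16-bit values to small initial weights in (-1, 1), purely as an example.
initial_weights = [(next(gen) / 32768.0) - 1.0 for _ in range(8)]
print(initial_weights)

On the FPGA, the same structure is simply a shift register with XOR feedback, producing one new value per clock cycle.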

4.4 Floating Point Processor Block

This block performs arithmetic and logic operations on floating-point numbers for the ANN block. The ANN block provides the floating-point numbers to be processed and specifies the desired operation. The processor then signals the ANN and ANN training supervisor blocks when it finishes its operation.

4.5 SRAM Driver Block

This functional block provides the interface between the ANN block and the SRAM memory device on the Altera DE2 board [6]. The block is needed because the ANN block has to access an external memory device to store the weight factors for all connections. It provides the necessary abstraction of the on-board SRAM memory and makes it easier to access the SRAM from the ANN block.
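As a purely hypothetical illustration of the kind of mapping such a driver hides from the ANN block, the sketch below assigns each connection weight a linear SRAM word address and packs the weight into a storable word; the base addresses, addressing scheme, and IEEE-754 storage format are assumptions, not details from the paper.

import struct

def weight_address(layer, src, dst, n_src_nodes):
    # Hypothetical linear word address for the weight of connection src -> dst.
    # layer 0 = input-hidden weights, layer 1 = hidden-output weights.
    LAYER_BASE = [0x0000, 0x1000]      # assumed base addresses per layer
    return LAYER_BASE[layer] + dst * n_src_nodes + src

def weight_to_word(value):
    # Pack a weight as a 32-bit IEEE-754 word, one plausible storage format.
    return struct.unpack("<I", struct.pack("<f", value))[0]

print(hex(weight_address(0, src=3, dst=2, n_src_nodes=16)))
print(hex(weight_to_word(-0.75)))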

4.6 LCD Controller Block

This block controls what is displayed on the LCD module. It provides the LCD driver block with the data to generate custom characters on the LCD module, and also provides the LCD driver block with the text string to be displayed on the LCD module. It receives the user input data and formats it to be displayed on the left side of the LCD text field. It also receives the output from the ANN and displays the recognised character on the LCD module.

4.7 LCD Driver Block

The LCD driver block is similar to the SRAM driver block in that it provides the necessary abstraction for interfacing with the LCD module. It initialises the on-board LCD module when the system is reset and interfaces with the on-board LCD module to send the custom character data and text strings to be displayed. The custom character data and the text strings to be displayed are obtained from the LCD controller block.

5 Results

The entire system of this FPGA implementation of ANN for character recognition was deployed. The logic resources required by the FPGA-based ANN character recognition system running at 50 MHz are shown in Table 1. The system is designed to recognise a character pattern on a 4 x 4 grid. Fig. 4 displays the process and sample results from the implementation.


Table 1. Logic Resources for Character Recognition System

Altera Cyclone II EP2C35F672C6 FPGA device
Resource type               Resource requirement    Percentage utilization
Logic elements              11,670 / 33,216         35%
Combinational functions     11,256 / 33,216         34%
Logic registers             5,932 / 33,216          18%
Total pins                  158 / 475               33%
Memory bits                 4,956 / 483,480         1%
Embedded Multipliers        54 / 70                 77%
PLLs                        0 / 4                   0%

Fig. 4. Process and Sample Results of Character Recognition: (a) Training Mode, (b) Running Mode, (c) Recognized character 'A', (d) Recognized character 'Z'

6 Conclusion

This system has produced favourable results. The input pattern and the recognition output are displayed on the on-board LCD module of the Altera DE2 board. This FPGA implementation of character recognition can be seen as a starting point for an introduction to ANNs, for future exploration of the wide range of ANN applications, and for the multiple designs available to develop an ANN. This work may also be seen as a starting point for developing an ANN that is directly embedded in hardware devices instead of running the ANN algorithms on a computer system. A computer system-based implementation consumes much more processing time than a hardware implementation.

References

1. Brown, S., Vranesic, Z.: Fundamentals of Digital Logic with VHDL Design, 3rd edn. McGraw-Hill, New York (2009)
2. Omondi, A.R., Rajapakse, J.C., Bajger, M.: FPGA Implementations of Neural Networks. Springer, Heidelberg (2006)
3. Gurney, K.: An Introduction to Neural Networks. UCL Press (1997)
4. EnchantedLearning.com: Brain Cells (2001), http://www.enchantedlearning.com/subjects/anatomy/brain/Neuron.shtml
5. Mol, A.C.D.A., Martinez, A.S., Schirru, R.: A Neural Model for Transient Identification in Dynamic Processes with 'don't know' Response. Annals of Nuclear Energy 30(13), 1365–1381 (2003)
6. Altera DE2 Board, http://www.altera.com/education/univ/materials/boards/de2/unv-de2-board.html
7. Pedroni, V.A.: Circuit Design with VHDL. MIT Press, Cambridge (2004)