The EE-Method, An Evolutionary Engineering Developer Tool: Neural Net Character Mapping

Lehireche A.

Rahmoun A.

Computer Science Department, University Djilali Liabes, Sidi Bel Abbes, Algeria. Email: [email protected]

Faculty of Planning & Management, King Faisal University, KSA. Email: [email protected]

Abstract

The Evolutionary Engineering (EE) challenge is to prove that it is possible to build systems (i.e. solutions) without going through any design process [3],[13]. Evolutionary Engineering is defined as "the art of using evolutionary algorithms such as genetic algorithms [1] to build complex systems" [3]. Our main goal is to show that the EE-Method [6] is a suitable setting for this task. In this paper we show, step by step, how to build a neural net based system using the EE-Method. The EE-Method can be viewed as just an application of GP; the need for a well-specified approach motivates such a method. Also, to demonstrate the effectiveness of the evolvability principle on complex systems, we present a more complex example than those in [7],[8]: an evolved neural net pattern recognizer that maps an input character image to a standard representation, i.e. an image or a code.

Keywords: Evolutionary Engineering (EE), Genetic Programming (GP), Genetic Algorithms (GA), Method, Specification, Neural Net (NN), Pattern Recognition, Evolvability.

1. INTRODUCTION

Computer science theory is based on two major concepts: computability and complexity. The computability concept deals with the fact that solving a problem implies the existence of an algorithm. This also means that computer scientists can analyze problems and have the ability to design the desired algorithms. It is evident that the algorithm design (i.e. the solution) is the result of the problem analysis. A highly relevant question is: is it possible to solve a problem without going through any design process?

Evolutionary Engineering (EE), a discipline of soft computing engineering, aims to solve the problem of building complex systems without going through any design process. EE is defined as "the art of using evolutionary algorithms such as genetic algorithms [1] to build complex systems" [3]. Essentially, by imitating nature, the evolutionary engineering scientist describes an elementary structure of the system and then evolves this structure toward the desired system. Genetic Algorithms are used to evolve such huge systems. One should keep in mind that EE design might start from scratch: only prior knowledge, learning techniques, and a powerful physical (or logical) machine support and devices are necessary to make a particular application grow. It is well known that EE is just another way of speaking about GP [4]. The authors do not pretend to stir up the GP concepts; they simply give a well-specified approach, the EE-Method [6], to guide the EE designer along the design process towards achieving and implementing a particular application. The need for a well-specified approach determines the necessity of such a method. Our experience in the EEDIS lab has shown that the EE-Method is an easy-to-use and easy-to-understand tool. This paper brings the EE-Method into practice by applying it step by step to a neural net based system: a neural net character pattern recognizer. Our main goal is to point out that the EE-Method steps are highly relevant and also to prove that such systems are evolvable. We implemented the GENET Evolution Software (GES) to bring the EE-Method into operation. Section 2 reports the EE-Method as specified in [6], sections 3 to 8 apply the method, section 9 yields the implementation results and section 10 concludes this paper.

2. EVOLUTIONARY ENGINEERING METHOD

The EE-Method [6] aspires to make the EE approach explicit and simple. It also tries to enlarge the scope of GP. Six steps make up the EE-Method. Each step is a vital phase. The step order ensures the coherence of the approach but it is not unique [6]. Given a COMPLEX system:


Step 1: Ensure The Availability Of

- the inputs (data, parameters, or values),
- the outputs, and
- the association between inputs and outputs, i.e. the ability to express any output in terms of the inputs,
of the desired system.

Step 2: Choose A Model

Choose a model with which the desired system should be implemented. Examples: neural networks, automata, Petri nets, electronic circuits, graphs, programs, etc.

Step 3: Choose System Genotype

Use Genetic Programming techniques to encode the model. This step produces the structure of a chromosome. A chromosome must encode the whole system. The chromosome is the genotype form of the system.

Step 4: Determine The Adaptation Function Of The System

The adaptation function is a measure that points out the degree of evolution of the system during the evolving phase. Generally it represents a measure of the conformity of the input-output association (i.e. the fitness or objective function in GAs).

Step 5: Choose System Phenotype

To be able, during the evolving phase, to evaluate the adaptation function of an occurrence of the system, we must proceed as follows:
1. extract the real characteristics of the occurrence of the system by decoding the genotype,
2. implement the system model according to these characteristics,
3. simulate the behavior of the system.
The real characteristics of an occurrence of the system are called the system phenotype.
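For illustration only, the generic evaluation procedure of Step 5 can be sketched as follows in Python; decode_genotype, build_system, simulate and fitness are hypothetical placeholders to be supplied by the chosen model, not part of the EE-Method itself.

def evaluate(chromosome, inputs, desired_outputs,
             decode_genotype, build_system, simulate, fitness):
    # 1. genotype -> phenotype: extract the real characteristics
    phenotype = decode_genotype(chromosome)
    # 2. implement the system model according to these characteristics
    system = build_system(phenotype)
    # 3. simulate the behavior of the system and measure its adaptation
    actual_outputs = simulate(system, inputs)
    return fitness(desired_outputs, actual_outputs)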

Step 6: Use A Genetic Algorithm As Follows

A) Generate randomly a population of chromosomes (each chromosome is a system genotype).
B) For each chromosome:
   1. genotype -> phenotype -> system implementation,
   2. start up the system,
   3. inject the inputs and retrieve the outputs,
   4. evaluate the adaptation function of the system.
C) Select the genotypes of the most adapted occurrences of the system.
D) Produce a new generation by applying the crossover and mutation operators to the selected genotypes.
E) Repeat B), C) and D) until the desired system is reached.

3. EE-METHOD STEP 1: INPUTS, OUTPUTS

Character pattern recognition is fundamentally a picture mapping process. The output can be either a standard form, in which case we deal with a retinal function, or a simple binary value that encodes the input pattern (see Figure 1). In evolutionary engineering terms the building process is the same for both approaches; only the number of output neurons and the fitness differ. In what follows we deal with the second approach, i.e. mapping a character pattern to a binary code value.

[Figure 1 illustrates the two mappings: an input pattern mapped to an output standard form, and an input pattern mapped to an output binary code.]

Figure 1. Character Mapping Outputs.

So, as input we have the character pattern, a bitmap picture of 10x10 pixels, and as output the associated binary code. To encode the 26 letter patterns we need 5 bits.

4. EE-METHOD STEP 2: CHOOSING THE MODEL

The model with which we choose to implement our example is the Neural Network (NN). A NN is characterized by its topology and by the functionality of its artificial neuron.

Topology of the NN

A recurrent NN is chosen. A recurrent NN is a fully self-connected NN. This topology has the advantage of not requiring any specific detail of the NN, such as the number of layers, the number of neurons in each layer, the number of hidden layers, how the neurons are connected, and so on: only the number of neurons has to be chosen. For our application the NN contains 105 neurons: 100 input neurons and 5 output neurons (see Figure 2), where Input 1, ..., Input 100 are the external inputs E1, ..., E100 as specified in Figure 3.
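As a concrete, purely illustrative reading of this topology (not from the paper), the following Python sketch fixes the sizes involved; treating the first 100 neurons as the ones driven by the external inputs and the last 5 neurons as the ones read out is an assumption made only for illustration.

N_NEURONS = 105    # fully self-connected recurrent NN
N_INPUTS = 100     # neurons driven by the external inputs E1, ..., E100
N_OUTPUTS = 5      # neurons whose signals are read as the 5-bit output code

W = [[0.0] * N_NEURONS for _ in range(N_NEURONS)]  # W[j][i]: weight from neuron j to neuron i
E = [0.0] * N_NEURONS                              # external inputs (10x10 bitmap bits, 0 elsewhere)
S = [0.0] * N_NEURONS                              # output signals of the 105 neurons

input_neurons = range(N_INPUTS)                        # assumed: neurons 0..99
output_neurons = range(N_NEURONS - N_OUTPUTS, N_NEURONS)  # assumed: neurons 100..104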

[Figure 2 depicts the recurrent neural network: the external inputs Input 1, Input 2, ..., Input 100 feed the network, and Output 1 to Output 5 are read from five of its neurons.]

Figure 2. Recurrent Neural Network (105 neurons).

Artificial neuron functionality

Figures 3 and 4 show in detail the behavior of the artificial neuron. The output function is a sigmoid. External inputs are used to control the NN [3]; we use them to inject the input data into the NN. Notation: Sj is the input signal j; Wji is the weight associated to Sj for neuron i; Ei is the external input of neuron i, whose weight is clamped to 1.0 when used and to 0.0 otherwise.
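For illustration, the neuron of Figures 3 and 4 can be written directly in Python under this notation (a minimal sketch, not the authors' code):

import math

def neuron_output(i, W, S, E, use_external_input=True):
    # activity of neuron i: ACTIVi = Ei + sum_j Wji * Sj (Figure 3)
    external_weight = 1.0 if use_external_input else 0.0
    activ = external_weight * E[i] + sum(W[j][i] * S[j] for j in range(len(S)))
    # output function of Figure 4: a sigmoid in (-1, +1)
    return 2.0 / (1.0 + math.exp(-activ)) - 1.0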

[Figure 3 shows the artificial neuron i: it receives the internal input signals S1, S2, ..., SN weighted by W1,i, W2,i, ..., WN,i, and the external input Ei with a weight of 1.0. Its activity is ACTIVi = Ei + Σj Wji * Sj and its output is OUTPUTi = f(ACTIVi).]

Figure 3. Artificial Neuron [3].

[Figure 4 plots the neuron output function f, a sigmoid ranging from -1 to +1.]

Figure 4. Neuron Output Function f(ACTIVi) = 2 / (1 + exp(-ACTIVi)) - 1.

5. EE-METHOD STEP 3: SYSTEM GENOTYPE

The set of weights fully determines the behavior of the recurrent NN; the chromosome which represents the system genotype is therefore simply the set of weights in coded form. Figure 5 describes the chromosome structure. Wji denotes the weight associated to the signal Sj coming from neuron j into neuron i. Each weight is coded with 7 bits and takes its value in [-1, +1]. The chromosome length is 105 * 105 * 7 = 77175 bits.

[Figure 5 shows the chromosome as the concatenation of the 7-bit codes of W1,1, W2,1, ..., W105,105, i.e. bits 0 to 77174.]

Figure 5. Chromosome Structure.

In Figure 5, W1,1 is interpreted as follows:
Bit 0 = 1 => the weight is negative,
Bit 1 = 1 => weight = weight + 1 * 2^-1 = weight + 0.5,
Bit 2 = 1 => weight = weight + 1 * 2^-2 = weight + 0.25,
Bit 3 = 0 => weight = weight + 0 * 2^-3 = weight + 0,
Bit 4 = 1 => weight = weight + 1 * 2^-4 = weight + 0.0625,
Bit 5 = 1 => weight = weight + 1 * 2^-5 = weight + 0.03125,
Bit 6 = 0 => weight = weight + 0 * 2^-6 = weight + 0.
So W1,1 = -(0.5 + 0.25 + 0.0625 + 0.03125) = -0.84375.
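For illustration, this decoding can be written as the following Python sketch; the sign/magnitude interpretation of the 7 bits follows the example above, and the assumed ordering of the 7-bit groups along the chromosome (W1,1, W2,1, ..., W105,105) follows Figure 5.

def decode_weight(bits):
    # bits[0] is the sign bit, bits[1..6] contribute 2^-1 .. 2^-6
    magnitude = sum(bit * 2.0 ** -(k + 1) for k, bit in enumerate(bits[1:]))
    return -magnitude if bits[0] == 1 else magnitude

def decode_chromosome(chromosome, n=105, code_len=7):
    # build the n x n weights table from a 77175-bit chromosome (list of 0/1)
    assert len(chromosome) == n * n * code_len
    W = [[0.0] * n for _ in range(n)]   # W[j][i] = weight from neuron j to neuron i
    pos = 0
    for i in range(n):                  # to neuron i (Figure 5 order: W1,1 .. W105,1, W1,2 ...)
        for j in range(n):              # from neuron j
            W[j][i] = decode_weight(chromosome[pos:pos + code_len])
            pos += code_len
    return W

# The 7-bit group 1 1 1 0 1 1 0 decodes to -0.84375, as computed above.
assert decode_weight([1, 1, 1, 0, 1, 1, 0]) == -(0.5 + 0.25 + 0.0625 + 0.03125)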

6. EE-METHOD STEP 4: THE ADAPTATION FUNCTION OF THE SYSTEM

During the evolution phase, for a system under test, we note:
- Ai: the actual output pattern binary code,
- Di: the desired output pattern binary code,
- Fitness: the adaptation function.

The adaptation function (i.e. the fitness) is the distance (i.e. the gap) between the actual outputs and the desired outputs. The fitness is not an absolute measure but depends on the way we compute it; the fitness formula strongly influences this measure. The fitness is expressed as follows:

SSD = Σ (Di - Ai)², (i = 1, ..., 5)
Fitness = if (SSD > 1) then 1/SSD else 1 - SSD



Example:

Table 1. Example of desired and actual outputs

Desired output (Di):   1      0      1      1      0
Actual output (Ai):    0.45   0.43   0.25   0.12   0.23

Given the data in Table 1, the fitness is computed as follows:
SSD = Σ (Di - Ai)², (i = 1, ..., 5)
    = (1 - 0.45)² + (0 - 0.43)² + (1 - 0.25)² + (1 - 0.12)² + (0 - 0.23)²
    = 0.3025 + 0.1849 + 0.5625 + 0.7744 + 0.0529
    = 1.8772
Since SSD > 1, Fitness = 1/SSD = 1/1.8772 = 0.5327.
This result means that the gap between the system in evolution and the desired system is 0.4673.
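For illustration, the fitness of Step 4 can be sketched in Python as follows; the call reproduces the worked example above (Table 1).

def fitness(desired, actual):
    # SSD-based fitness: 1/SSD if SSD > 1, otherwise 1 - SSD
    ssd = sum((d - a) ** 2 for d, a in zip(desired, actual))
    return 1.0 / ssd if ssd > 1.0 else 1.0 - ssd

print(fitness([1, 0, 1, 1, 0], [0.45, 0.43, 0.25, 0.12, 0.23]))  # ~0.5327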

7. EE-METHOD STEP 5: SYSTEM PHENOTYPE

In this step the evolutionary engineer has to make choices on how the system must be implemented. In our case we have to implement (simulate) a recurrent NN. Because we are evolving systems by means of their genotype form, we need to decode each chromosome to obtain the real values of the recurrent NN weights. These values are stored in a "weights table" (see Table 2). The external input data are stored separately in a vector noted "E"; in our case, each entry of E stores one bit of the input pattern bitmap. The computed output signals are stored in a vector noted "S"; S encodes the input pattern. The weights table and the vectors E and S are the phenotype form of the system in evolution.

Table 2. The Weights Table (Wj,i is the weight of the signal from neuron j into neuron i)

                From neuron 1   From neuron 2   ...   From neuron 105
To neuron 1     W1,1            W2,1            ...   W105,1
To neuron 2     W1,2            W2,2            ...   W105,2
...             ...             ...             ...   ...
To neuron 105   W1,105          W2,105          ...   W105,105

In neural terms, "mapping" is known as a neural net with time-independent input and time-independent output (TII, TIO); for this reason the recurrent NN must run until stabilization. The number of stabilization cycles must be determined separately, otherwise the result is unpredictable. Given the weights table W, the external input data vector E and the output signal vector S, the recurrent NN simulation algorithm is as follows:

Loop {
    // compute the output signal of each neuron
    For i = 1 to neuron number {
        // compute the activity of neuron i (see Figure 3)
        ACTIVi = 0
        For j = 1 to neuron number {
            ACTIVi = ACTIVi + Wji * Sj
        }
        // inject the input data into neuron i (see Figure 3)
        ACTIVi = ACTIVi + Ei
        // apply the output function f (see Figure 4)
        Si = 2 / (1 + exp(-ACTIVi)) - 1
    }
    // the recurrent NN is activated many times to reach stability
} Until Stabilization
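For illustration, this simulation translates almost directly into Python. The sketch below assumes, as in the earlier sketches, that W[j][i] is the decoded weight from neuron j to neuron i, that E holds one bit of the 10x10 bitmap per input neuron (and 0 for the other neurons), and that the 5 output signals are read from the last 5 neurons; the number of stabilization cycles (50 in section 8) is passed as a parameter.

import math

def simulate(W, E, cycles=50, n_outputs=5):
    # run the recurrent NN for a fixed number of cycles (assumed stabilization)
    n = len(E)
    S = [0.0] * n                              # output signals, initially 0
    for _ in range(cycles):                    # "Loop ... Until Stabilization"
        for i in range(n):                     # for each neuron i
            activ = E[i] + sum(W[j][i] * S[j] for j in range(n))
            S[i] = 2.0 / (1.0 + math.exp(-activ)) - 1.0
    return S[-n_outputs:]                      # assumed: the last 5 neurons carry the code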

8. EE-METHOD STEP 6: EVOLUTION PHASE

The evolution phase is an operational phase: the evolutionary engineer must implement the overall software, taking into account all the decisions made in steps 1 to 5. Figure 6 describes the architecture of such software.
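For illustration, the generational loop of Step 6 could look like the Python sketch below, using the parameters listed after it (uniform crossover, roulette-wheel selection, elitism; the fitness-scaling constant is omitted for brevity). decode_chromosome, simulate and fitness refer to the routines sketched in the previous sections, and pattern_E is the assumed 105-entry external input vector built from the 10x10 bitmap.

import random

def random_chromosome(length):
    return [random.randint(0, 1) for _ in range(length)]

def roulette_select(population, scores):
    # roulette-wheel selection, proportional to fitness
    return random.choices(population, weights=scores, k=1)[0]

def uniform_crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(chromosome, p_mut):
    return [1 - bit if random.random() < p_mut else bit for bit in chromosome]

def evolve(pattern_E, target_code, pop_size=100, chrom_len=105 * 105 * 7,
           p_cross=0.6, p_mut=0.001, target_fitness=0.99):
    # A) random initial population
    population = [random_chromosome(chrom_len) for _ in range(pop_size)]
    while True:
        # B) genotype -> phenotype -> simulation -> fitness, for each chromosome
        scores = [fitness(target_code, simulate(decode_chromosome(c), pattern_E))
                  for c in population]
        best = max(scores)
        if best > target_fitness:                      # desired system reached
            return population[scores.index(best)]
        # C) selection and D) reproduction, with elitism
        new_population = [population[scores.index(best)][:]]
        while len(new_population) < pop_size:
            a = roulette_select(population, scores)
            b = roulette_select(population, scores)
            child = uniform_crossover(a, b) if random.random() < p_cross else a[:]
            new_population.append(mutate(child, p_mut))
        population = new_population                    # E) repeat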

Evolution Software Parameters:

Neuron number: 105
Number of input neurons: 100
Number of output neurons: 5
Weight code length: 7 bits
Chromosome length: 77175 bits
Number of cycles to reach NN stability: 50
Type of crossover: uniform crossover
Crossover probability: 0.6
Mutation probability: 0.001
Selection strategy: roulette wheel
Evolution strategy: elitism
Scaling constant: 2.0
Population size: 100
Number of generations: until fitness > 0.99

9. EVOLUTION RESULTS

In the EEDIS laboratory we have implemented this software; we name it GENET EVOLUTION SOFTWARE (GES) (Figure 6). This software uses the evolutionary engineering concepts as specified by the EE-Method. The results

presented in Figures 7, 8 and 9 show the evolution process for the pattern "P", i.e. mapping the picture of "P" into the binary code value "01111". The evolution process was successful for each letter pattern of the alphabet.

10. CONCLUSION

Evolutionary Engineering creates an elementary structure of the system and then evolves this structure toward the desired system. Such a process relies on observing and imitating natural systems. This paper shows, step by step, how to apply the EE-Method to a neural net based system, i.e. pattern recognition. This application needs a recurrent neural net of 105 neurons, so the weights table contains 11025 entries. It is easy to see that the evolution process has to tune 11025 parameters; the search space is of size 2^(11025*7). It is evident that such a task is hyper complex. The results show that the evolvability principle is effective.

[Figure 6 depicts the GES architecture: the GA maintains the population of chromosomes of generation i; each chromosome Chr i undergoes a genotype-to-phenotype transformation into a weights table, which drives the simulator (the system in evolution); the fitness unit scores the simulated behavior and feeds the result back to the GA.]

Figure 6. Evolution Process schema i.e. the GES Architecture.

Figure 7. Evolution process: chromosome from the initial population, Fitness=0.24

Figure 8. Evolution process: chromosome from an intermediate population, Fitness = 0.8688

Figure 9. Evolution process: chromosome from the elite, Fitness = 1.

11. REFERENCES

[1] Goldberg D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Publishing Company, 1989.
[2] Hugo de Garis, Michael Korkin, Felix Gers, Norberto Eiji Nawa, Michael Hough, "CAM-Brain, ATR's Artificial Brain Project: An Overview", Brain Builder Group, Evolutionary Systems Department, ATR Human Information Processing Research Labs, Sept 1998.
[3] Hugo de Garis, Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos, PhD thesis, 1991.
[4] Koza J.R., Bennett III F.H., Andre D., Keane M.A., Genetic Programming III: Darwinian Invention and Problem Solving, Morgan Kaufmann Publishers, 1999.
[5] Lehireche A., Rahmoun A., "Evolving in Real Time a Neural Net Controller of Robot-Arm: Track and Evolve", INFORMATICA International Journal, Lithuanian Academy of Sciences, Vol. 15, No. 1, pp. 63-76, 2004, © IMI, LT.
[6] Lehireche A., Rahmoun A., Gafour A., "Highlights the Evolutionary Engineering Approach: the EE-Method", ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 2001), Beirut, © 2001 IEEE.
[7] Lehireche A., Rahmoun A., "The EE-Method, An Evolutionary Engineering Developer Tool: Neural Net Case Study", ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 2003), Tunis, © 2003 IEEE.
[8] Lehireche A., Rahmoun A., "On Applying an Evolutionary Engineering Method to Evolve a Neural Net XOR System", Scientific Journal of King Faisal University, Vol. 5, 2004, S.A.
[9] Gafour A., Faraoun K., Lehireche A., "Genetic Fractal Image Compression", ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 2003), Tunis, © 2003 IEEE.
[10] Nicholas J. Macias, "Ring Around the PIG: A Parallel GA with Only Local Interactions Coupled with a Self-Reconfigurable Hardware Platform to Implement an O(1) Evolutionary Cycle for Evolvable Hardware", Proceedings of the 1999 Congress on Evolutionary Computation, © 1999 IEEE.
[11] Rojas R., Neural Networks: A Systematic Introduction, Springer, Berlin, Heidelberg, 1996.
[12] Van Rooij A.J.F., Jain L.C., Johnson R.P., Neural Network Training Using Genetic Algorithms, World Scientific, Series in Machine Perception & Artificial Intelligence, Vol. 26.
[13] W. B. Langdon, Adil Qureshi, Genetic Programming: Computers Using 'Natural Selection' to Generate Programs, lecture notes of a survey, Dept. of Computer Science, University College London, 1996.
[14] Yann Le Cun, Modèles Connexionnistes de l'Apprentissage (Connectionist Models of Learning), Thèse de Doctorat, Université Paris 6, 1987.