Evolving Spiking Neural Networks

J.D. Schaffer, "Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design," Proc. SPIE 9494, Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940M (June 20, 2015); doi:10.1117/12.2175896; http://dx.doi.org/10.1117/12.2175896

Evolving Spiking Neural Networks: A novel growth algorithm exhibits unintelligent design

J. David Schaffer
College of Community and Public Affairs, Binghamton University, PO Box 6000, Binghamton, NY 13902

ABSTRACT
Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to those of conventional von Neumann machines, and because they share properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n), where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that exhibit a robust spike-bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design": extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

Keywords: Spiking neural networks, evolutionary computation, genetic algorithms, topology evolution, network growth process, unintelligent design, noise robustness, tonic burster, sequence detector.

1. INTRODUCTION
The field of artificial intelligence has long sought technologies that could provide machine-based intelligence, either for autonomous machine behavior or to amplify human capabilities. To date, these efforts have produced some useful technologies and artifacts (e.g. medical decision support systems, powerful search algorithms, some natural language processing capabilities such as IBM's Watson), but we believe no one is fully satisfied with the achievements so far; they fall short of our dreams (e.g. we cannot yet trust a machine with major healthcare decisions, and machines still need extensive human support to discover data patterns). Over two decades ago, there was a gathering of people interested in coupling evolutionary computation (genetic algorithms) with neural networks (NNs) [20]. Work in this paradigm has continued, but getting it to work has been challenging. One conjecture to explain this is the limited computational power of existing NN models. In the period following that meeting, a new model, spiking neural networks (SNNs) [4], emerged with more computational power than the previous two generations of neural networks.

We have recently initiated research in this direction: evolving SNNs. The applications envisioned include technologies to autonomously learn features embedded in spatio-temporal signals [12] that enable discrimination between classes of such signals, technologies that can learn sensory-motor control strategies for autonomous machines [1], and possibly technologies for smart prosthetics and for augmenting human capabilities, sensory, motor and cognitive. The obvious advantage of SNNs for interfacing to mammalian bodies is that they compute with the same signals as nervous tissue: spike trains.

Copyright 2015 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

The big idea is not just to evolve task-specific machines (idiot savants), but rather to evolve learning machines, which should greatly improve their scope and robustness.

The emergence of SNNs has opened the door on many exciting challenges for computing with spikes. Maass [7] is among those who have proven computational properties for SNNs showing, in essence, that every interesting computational property of the previous generations can be achieved with SNNs, often with many fewer neurons. On the minus side, the number of parameters has increased as a cost of these advantages. In addition, to date there are no design rules or learning algorithms for specifying a network topology for a given task (see our exception below). There is also the exciting prospect of schemes for dynamically adjusting the synaptic weights, either transiently (e.g. Markram et al. [8]) or permanently (e.g. Hebb [5]), thus enabling a type of learning that we observe in living nervous tissue.

We have built SNN simulator software, tested it on toy problems, modeled rat brain structures [10], and applied it to a simple mobile robot task (light following with obstacle avoidance) [1]. These initial efforts all employed fixed network topologies; evolution only tuned the parameters (not to imply that this was easy). We then attacked the challenge of the network topology in two ways. First, a PhD student devised a design approach for a limited (albeit very useful) class of tasks: spatio-temporal pattern detectors [12]. Having a design approach enabled evolution to search for the hidden patterns to detect, and not worry about the network. It evolved an SNN sensor that fired whenever it encountered a pattern, that pattern being more prevalent in one class of signals than another. This kept the chromosome length from growing too large by requiring only "genes" that specify the characteristics of the pattern, which are few, and not "genes" for specifying the network parameters, which are many. The second approach was the development of a general approach to topology evolution. We describe this approach in this paper, but first we offer this simple list of advantages and disadvantages.

The advantages of using SNNs include:
A. Being asynchronous computing machines, low-power circuit implementations are anticipated.
B. Novel computational properties are believed to be available (e.g. highly multiplexed signal pathways (axons), high component reuse). They are hybrid analog/digital, non-von Neumann machines.
C. Internal spike train communication may more easily allow linking these devices to the human body (for smart prosthetics and human performance augmenting applications).
D. Synaptic plasticity is easily implemented and believed to be the basis for human learning, hence learning capabilities may be expected.

The disadvantages include:
A. There is no theory of how intelligent computing might be done.
B. There is still no consensus on the specific capabilities of the neurons and synapses needed for high-level computational performance at the network level.
C. There are no established design rules for building SNNs for general purposes.
D. Little is understood about how whole-brain learning is achieved with low-level synaptic plasticity [9].

2. SOFTWARE COMPONENTS FOR EVOLVING SNNS
Figure 1 shows a schematic of the software configuration used for each experiment. The GA block is the genetic algorithm executable; it includes the eval routine that embodies the SNN topology growth and the parameter decoding algorithms explained below. The basic GA operation is Eshelman's CHC algorithm [3]. The eval routine decodes each bit-string chromosome and writes files containing the complete specification for one SNN in the form required by the SNN simulator. The simulator used is Sichtig's SSNNS [12],[17], which implements the SRM0 model of Gerstner and Kistler [4]. SSNNS produces an output file containing the spikes produced when the given SNN is run on the task provided (input spikes file). Upon completion of each simulation, the eval routine resumes: it reads the output spikes file and computes a performance metric, which it then returns to the GA. This process continues until a halting criterion is reached. Control of the GA process is by parameters specified in the cfg (configuration) file, and the bit-string chromosome decoding is controlled by parameters specified in the eval.ctl file shown in Figure 1.

Figure 1. Schematic of the GA-SNN software modules
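To make the loop concrete, the following is a minimal sketch (in Python) of one evaluation as described above, assuming a hypothetical command-line invocation of SSNNS and simple file formats; the helper names, the toy penalty function, and the CLI are ours, not the actual GA/eval/SSNNS interfaces.

```python
import subprocess

def read_spikes(path):
    """Read one spike time (ms) per line from a simulator-style spike file (assumed format)."""
    with open(path) as f:
        return [float(line.split()[0]) for line in f if line.strip()]

def spike_mismatch_penalty(output, target, miss_cost=100.0):
    """Toy penalty: timing error over paired spikes plus a cost per missing/extra spike.
    The actual metric [15] differs; this only illustrates the eval routine's role."""
    timing = sum(abs(o - t) for o, t in zip(sorted(output), sorted(target)))
    return timing + miss_cost * abs(len(output) - len(target))

def evaluate(spec_file, input_file, target_file):
    """One GA evaluation: run the simulator on the decoded SNN spec, then score its output.
    The 'ssnns' command line below is hypothetical."""
    subprocess.run(["ssnns", spec_file, input_file, "-o", "out.spk"], check=True)
    return spike_mismatch_penalty(read_spikes("out.spk"), read_spikes(target_file))
```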

2.1 The Chromosome Structure
The GA's chromosome consists of bit strings with two parts, one for determining the SNN topology (via a growth algorithm) and the other for setting the SNN parameter values. These are explained below.

2.2 The SNN Growth Algorithm
The data structure that defines the network topology is a connection matrix C, where c_ij is 1 if neuron i is connected to neuron j, and 0 if not. The number of network inputs and the number of outputs are given as part of the problem definition. For the tonic burster task there is one input and two outputs. There is a row in C for each input and neuron, and a column for each neuron and output. An example connection matrix is shown in Figure 2.

        n n n o o   s   is_out
  i     1 1 1 0 0   3   0
  n     0 1 1 0 1   2   1
  n     1 0 1 1 0   3   1
  n     1 1 0 0 0   3   0
  s     3 3 3 1 1

Figure 2. Example connection matrix. The first column identifies each row as being "connected from" an input (i) or neuron (n). The top row identifies the "connected to" element as a neuron (n) or an output (o). The row and column labeled "s" contain the row/column sums. The matrix elements indicate that row i is (1 = a synapse) or is not (0) connected to column j. The final column (is_out) flags the neurons connected to outputs.

The network growth algorithm begins with an empty matrix (all elements are zero) and proceeds to insert connections obeying the following rules:
1. No input may connect directly to an output.
2. Every input must connect to at least one neuron.
3. All synapses from a given neuron will be excitatory or inhibitory (not both).
4. Each output must be connected to by one and only one neuron.
5. It is desirable (required for feasibility) that every neuron be on a path to an output (i.e. no dangling neurons).
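A minimal sketch of how these constraints could be checked on a grown connection matrix follows (rule 3 concerns synapse signs rather than the 0/1 matrix, so it is omitted); the function and argument names are ours, and the real growth code enforces the rules incrementally rather than after the fact.

```python
def check_constraints(C, n_in, n_neurons, n_out):
    """Check the growth rules on a 0/1 connection matrix C.
    Rows: n_in inputs then n_neurons neurons; columns: n_neurons neurons then n_out outputs."""
    # Rule 1: no input may connect directly to an output.
    for i in range(n_in):
        if any(C[i][n_neurons + k] for k in range(n_out)):
            return False
    # Rule 2: every input must connect to at least one neuron.
    for i in range(n_in):
        if not any(C[i][:n_neurons]):
            return False
    # Rule 4: each output must be driven by exactly one neuron.
    for k in range(n_out):
        if sum(C[n_in + j][n_neurons + k] for j in range(n_neurons)) != 1:
            return False
    # Rule 5: every neuron should lie on a path to an output (no dangling neurons).
    reaches_out = {j for j in range(n_neurons)
                   if any(C[n_in + j][n_neurons + k] for k in range(n_out))}
    changed = True
    while changed:
        changed = False
        for j in range(n_neurons):
            if j in reaches_out:
                continue
            # Neuron j reaches an output if it connects to any neuron that already does.
            if any(C[n_in + j][m] and m in reaches_out for m in range(n_neurons)):
                reaches_out.add(j)
                changed = True
    return len(reaches_out) == n_neurons
```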

The genes in the chromosome that drive the growth process are illustrated in Figure 3. There are binary genes for: the number of neurons, whether each is excitatory or inhibitory, row and column targets for their respective row and column sums, and extra synapses (explained below).

Figure 3. Illustration of the genes that drive the network growth process. Note that this illustration is NOT the tonic burster; it has 2 inputs and 1 output. It matches the sequence detector.

2.3 Decoding bit strings into numbers
Before going further into the growth algorithm, we provide the details of how binary genes are decoded into numbers, using the Nn gene (coding for the number of neurons) as an example. Figure 3 shows the bit string "10" coding for 4 neurons. The first step in decoding is to convert the bits into an integer (m) using a reflected Gray coding [11]; here, "10" => 3. Then the final gene value (v) is computed using a base (b) and an increment (c) as follows:

    v = b + m * c    (1)

The example in Figure 3 used b=1 and c=1, so Nn = 4. The same b and c were used for the targets and extra_synapses (2 bits each) in Figure 3. The excitatory/inhibitory flags were simply '1' = excitatory, '0' = inhibitory.
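The decoding is small enough to state in code; the sketch below reproduces the reflected Gray decode and equation (1), with function names of our choosing.

```python
def gray_to_int(bits):
    """Decode a reflected Gray-coded bit string (e.g. '10') to an integer (here 3)."""
    value = 0
    for b in bits:
        # Next binary bit = previous binary bit XOR next Gray bit.
        value = (value << 1) | ((value & 1) ^ int(b))
    return value

def gene_value(bits, base, increment):
    """Equation (1): v = base + m * increment, with m the Gray-decoded integer."""
    return base + gray_to_int(bits) * increment

# Example from Figure 3: '10' -> m = 3, and with base = 1, increment = 1, Nn = 4.
assert gene_value("10", base=1, increment=1) == 4
```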

2.4 The SNN growth algorithm, continued
The growth algorithm proceeds as shown in Figure 4. The desirability of each cell is computed as the sum of the row and column desirabilities, where each desirability is the target minus the current sum. The algorithm halts when max_synapses is reached. It generally does not halt until feasibility is established, but this subroutine is recursive, and it sometimes occurs that feasibility cannot be established; in these circumstances the algorithm continues until feasibility is established, or reports failure if it cannot continue because its halting condition has been met. Otherwise, once feasibility is established, max_synapses is reset to n_synapses (the count when feasibility was established) plus extra_synapses. This permits a gene to control how richly the connection matrix is populated beyond what is needed for feasibility. Note that the targets merely act as attractors for the row and column sums; there is no requirement that they be satisfied. Their only job is to influence the order in which the connections are made. The sums can actually exceed the targets, making some cell desirabilities negative. For this reason, it is believed that even quite large connection matrices can be grown with only a modest number of bits for the targets.

Figure 4. The network growth algorithm
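Read alongside Figure 4, the growth loop can be sketched as follows. This is our paraphrase, not the actual implementation: it reuses the check_constraints sketch from Section 2.2, collapses the recursive feasibility step into a single check, and adds an arbitrary hard cap on synapses.

```python
def grow_network(n_in, n_neurons, n_out, row_targets, col_targets,
                 extra_synapses, hard_cap=50):
    """Greedy growth sketch after Figure 4 (simplified paraphrase)."""
    rows, cols = n_in + n_neurons, n_neurons + n_out
    C = [[0] * cols for _ in range(rows)]

    def allowed(r, c):
        # Rule 1: inputs (rows 0..n_in-1) may not connect to outputs (cols >= n_neurons).
        if r < n_in and c >= n_neurons:
            return False
        # Rule 4: an output column may receive at most one connection.
        if c >= n_neurons and sum(C[i][c] for i in range(rows)) >= 1:
            return False
        return C[r][c] == 0

    n_synapses, budget, feasible = 0, hard_cap, False
    while n_synapses < budget:
        cells = [(r, c) for r in range(rows) for c in range(cols) if allowed(r, c)]
        if not cells:
            break
        # Desirability of a cell = (row target - row sum) + (column target - column sum).
        r, c = max(cells, key=lambda rc:
                   (row_targets[rc[0]] - sum(C[rc[0]])) +
                   (col_targets[rc[1]] - sum(C[i][rc[1]] for i in range(rows))))
        C[r][c] = 1
        n_synapses += 1
        if not feasible and check_constraints(C, n_in, n_neurons, n_out):
            feasible = True
            budget = n_synapses + extra_synapses   # gene-controlled extra connections
    return C
```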

2.5 Encoding SNN parameters
The network parameters that must be defined are, for each neuron: the absolute refractory period (during which no amount of excitation can make it fire) and Tr (a time constant that controls the exponential return of the membrane potential to its resting value from its depolarized value); and for each synapse: the weight, the delay, and Ts and Tm (time constants that define the exponential rise and fall of the PSP (post-synaptic potential)). To define a search space that permits complete freedom for each synapse to take any permitted value would require O(n^2) parameters for n neurons. In order to constrain this explosive growth, some freedom was compromised so that this part of the chromosome should also grow O(n). This was achieved by encoding a set of these parameters for each neuron (and input) rather than for every synapse. The scheme for computing a synapse's parameters involves combining the parameters from the neurons it connects, as illustrated in Figure 5. Self-connected neurons use only the values from themselves, where the weight is simply the neuron's own weight. This scheme, while it does constrain the growth of the chromosome, may not give the evolutionary search sufficient freedom to find sufficiently good solutions for some tasks. This is one important element to be examined by experimentation. To help in this, the software user is given control over the decoding scheme (equation (1) above) by specifying the number of bits, base and increment for each parameter, thus giving control over the range and precision of each parameter. In addition, a "default" value must be provided; this value is assigned whenever the gene bits decode to zero. The full control is provided in a file called eval.ctl read by the software (see Figure 1). An example is shown in Figure 6. The first line specifies the software version to guard against providing an incompatible control file, and the parameters "allow_self" and "allow_multi" are controls (1 = yes, 0 = no) over the growth algorithm's use of self-connected neurons and neuron pairs with more than one synapse. The effects of these are being tested.

Figure 5. Setting synapse parameters using the values from the neurons it connects

Figure 6. Example eval.ctl file
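As an illustration only, the sketch below shows how per-parameter (bits, base, increment, default) entries in the spirit of eval.ctl could map gene bits to parameter values, reusing the gray_to_int sketch from Section 2.3. The numeric values are invented, the real eval.ctl format (Figure 6) is richer, and the averaging rule for combining neuron parameters into synapse parameters is only an assumed placeholder for the scheme in Figure 5.

```python
# Hypothetical per-parameter decoding spec: (n_bits, base, increment, default).
PARAM_SPEC = {
    "Tr":     (3, 1.0, 1.0, 2.0),
    "weight": (4, 0.5, 0.5, 1.0),
    "delay":  (4, 1.0, 1.0, 1.0),
    "Ts":     (3, 1.0, 0.5, 2.0),
    "Tm":     (3, 1.0, 0.5, 4.0),
}

def decode_param(name, bits):
    """Decode one parameter's gene bits; all-zero bits select the default value."""
    n_bits, base, increment, default = PARAM_SPEC[name]
    assert len(bits) == n_bits
    m = gray_to_int(bits)              # from the decoding sketch in Section 2.3
    return default if m == 0 else base + m * increment

def synapse_params(pre, post):
    """Combine per-neuron parameter dicts into one synapse's parameters.
    Assumed rule: simple average; a self-connection uses the neuron's own values."""
    if pre is post:
        return dict(pre)
    return {k: 0.5 * (pre[k] + post[k]) for k in pre}
```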

2.6 The test problems
For our first experiments with this approach we chose two "toy" problems: the tonic burster of Watts [19] and the sequence detector proposed by Jin [6].

2.6.1 The tonic burster
Figure 7 illustrates the tonic burster (TB) task: the SNN takes a tonic input (spikes arriving as fast as possible, namely at 1000 Hz) and should produce the spike-bursting behavior shown. It can be thought of as a central pattern generator where the spike bursts activate the walking or swimming behavior of some robot. The fitness reported to the GA is a metric that reflects the difference between the output spike patterns produced by the simulator and some fixed targets [15]. For the TB experiments, the targets are the spikes shown in Figure 7.

Figure 7. The tonic burster task. A SNN should convert tonic excitation (spikes arriving 1 per ms continuously) into a specified spike bursting pattern as illustrated.
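For concreteness, here is a sketch of how the tonic input and a periodic burst target could be generated; the burst period, length, spacing and offset below are illustrative assumptions, not the actual target spike trains used in the experiments.

```python
def tonic_input(duration_ms):
    """Tonic excitation: one input spike every ms for the whole simulation."""
    return list(range(1, duration_ms + 1))

def burst_target(duration_ms, period_ms=220, burst_len=8, isi_ms=3, offset_ms=40):
    """Illustrative periodic burst target (all four shape parameters are assumed values)."""
    spikes = []
    t = offset_ms
    while t < duration_ms:
        spikes.extend(t + k * isi_ms for k in range(burst_len))
        t += period_ms
    return [s for s in spikes if s <= duration_ms]
```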

2.6.2 The sequence detector
As a second task, we chose the sequence detector proposed by Jin [6]. Jin provided an SNN model consisting of 6 neurons and having four separate input channels (see Figure 8). The output neuron (N4) should fire if spikes appear at the inputs in the sequence 1, 2, 3, 4, and not otherwise. We note that Jin's SNN model differed from the Gerstner SRM0 model. Subsequently, Roy [12] devised a design solution (i.e. one not requiring evolution) to this and a similar task, the temporal pattern detector, using the SRM0 model. Roy's SNN model required only n-1 neurons for an n-channel detector. For our initial experiments, only a 2-channel task was attempted.

Figure 8. The 6-neuron sequence detector after Jin[6] . Spikes arriving on the input channels in sequence 1,2,3,4 are required to cause N4 to fire. Any other input pattern does not.

The same scheme used for the tonic burster was applied to this task, with only slight modifications to the parameter ranges (eval.ctl file). The number-of-neurons gene was kept at 2 bits, but the range was shifted down to allow 1-neuron SNNs. In addition, the bits for delays were reduced from 5 to 4, and the increments were reduced for Ts, Tm, Tr and the weights, allowing finer gradations in these gene-specified parameters. The effect of these changes, and the fact that this task had two inputs compared to the tonic burster's one, reduced the total chromosome length from 317 bits (TB run_17) to 277 (sequence detector run_8). The evaluation routine for this task had to be slightly modified from that used for the tonic burster. The TB task was stationary in the sense that any proposed SNN only needed to be evaluated once. The sequence detector needed to be tested with many input sequences to rigorously assess its behavior. Since there is a virtually unlimited array of possible input spike arrival patterns, we wrote a program that generates random instances of these, about half of which are "in sequence". A new set of 40 patterns was generated each generation and all parent SNNs were re-evaluated. This significantly increased the run time for experiments. It also complicated the selection of a "best" SNN since every chromosome might now have a different number of tests, and the distribution of the fitness scores among these tests had to be taken into account.
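The following is a sketch of the kind of random pattern generator described above, for the 2-channel case; the 2-8 ms in-sequence separation and the 10 ms target offset follow the description in Section 3.2, while everything else (including how out-of-sequence pairs are timed) is an assumption of ours.

```python
import random

def random_pattern(in_sequence, rng, t_range=(50, 1900), sep_range=(2, 8)):
    """One 2-channel test pattern: spike times for IN1 and IN2 plus the teacher target.
    In-sequence pairs: IN2 follows IN1 by 2-8 ms, with a target 10 ms after the IN2 spike."""
    t1 = rng.randint(*t_range)
    sep = rng.randint(*sep_range)
    if in_sequence:
        t2, target = t1 + sep, [t1 + sep + 10]
    else:
        t2, target = t1 - sep, []          # IN2 arrives first, so no target spike
    return {"IN1": [t1], "IN2": [t2], "target": target}

def test_set(n_patterns=40, seed=0):
    """A fresh set of patterns, roughly half of them in sequence."""
    rng = random.Random(seed)
    flags = [i < n_patterns // 2 for i in range(n_patterns)]
    rng.shuffle(flags)
    return [random_pattern(f, rng) for f in flags]
```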

3. RESULTS
3.1 Tonic Burster
Seventeen experimental runs were completed on the tonic burster task with static synapses. With each experiment a little more was learned about how to define the search space and improve the calculation of the fitness function. At one point it seemed that the simulation time was too short, allowing poorer SNNs to survive, so the simulation time was increased, with the attendant increase in experimental run time. Experiment 17 ran for about 8 days (on a single-core machine that was not state of the art). The best SNN found in that run scored 1592 (a perfect imitation of the target spike patterns would have scored zero). It comprised three neurons and is illustrated along with all its parameters in Figure 9, and its spiking behavior in Figure 10. Since the GA is a stochastic process, one may wish to know whether the results just described are to be expected or could have been a statistical anomaly. To check this, we ran a second experiment identical to run_17 except for the random seeds (another 8 days of computing). This time a 4-neuron SNN was found, but its performance was remarkably similar (1594).

Figure 9. The best SNN found in run_17 with its evolved parameters. Note that the inputs are weighted about four times larger for N3 than for N1 and N2. Also note the huge weights on the neurons' self-inhibitions.

Figure 10. Spike behavior for run_17 Best_guy_1592. Each neuron's spikes are plotted pointing up at the axis value of the neuron number. The desired target pattern is plotted above each neuron that connects to an output, with spikes pointing down. The left figure is with the full tonic input; the right figure is with every other input spike removed.

We see in Figure 10 that the desired target patterns were approximated, but not perfectly achieved. The bursting behavior is evident, but it is not exactly what was aimed for. The spikes at output 2 (neuron N2) have a longer period than the target, and the output commences rather sooner than desired. The target pattern has N2 firing soon after each burst of N3, while the relationship of the actual outputs N2 and N3 is not perfectly consistent; the periods of N2 and N3 are not the same. These imperfections are reflected in the fitness score, a penalty for not matching the targets, which would have been zero if the match had been perfect. The nature of the spike patterns from N1 is unclear, but they do not directly figure into the calculation of fitness. They do, however, play a critical role in the spiking behavior of the whole system. There is an initial transient dynamic that results from the fact that when the first input spike arrives, the network is in a quiescent state. After the second burst, the repeating dynamic is established. The numbers of spikes in each burst are: 8, 11, 12, 12, 12, 12. To test whether the evolved design is somehow over-tuned to the exact duration of the simulation used during evolution (1375 ms), we ran the simulation for 2000 ms. The pattern largely continues, with three more bursts of 12, 11, and 11 spikes each.

Next we ask, how robust is the observed bursting behavior? All during the evolution of this network (and its ancestors), the input spiking pattern presented to it was the same and perfect: a spike every ms for the duration of the simulation. We took this best SNN and presented it with two perturbed input spike patterns to observe its behavior: one test input consisted of just every other spike (i.e. a spike every 2 ms), and the other was the perfect input spike pattern with a random 37% of its spikes removed. The performance under the every-other-spike condition is shown in Figure 10 (right); the randomly-perturbed-spikes condition is discussed elsewhere [13]. We observe that the general character of the bursting remains remarkably intact, and yet there are traces of the input perturbations observable. For instance, the triple-spike bursts from N1 (Fig. 10, left) become spike pairs when the input spikes arrive at half the rate (Fig. 10, right). Also, the spiking of N2 now has longer ISIs, reflecting the reduced input stimulation, yet the timing of the bursts from N3 still deviates little from the targets (Fig. 10, right). A close-up view of the spiking from N3 for the fifth burst is shown in Figure 11. The target pattern has eight spikes, while N3 spikes 12 times with tonic input and 9 times with every-other input spike. The reduced input excitation also delays the burst slightly. This seems like a desirable property for intelligent information processing: the macro-scale behavior evolved for is quite robust (spike bursts from N2 and N3), while still retaining some trace of the information in the inputs (the reduced and delayed spiking).

[Figure 11 plot: "Best_1592 every-other vs all_in, N3 burst 5"; y axis: neuron spikes; x axis: t (ms), 900-1100.]
Figure 11. Zoom in on the fifth burst of N3, comparing tonic input (line 1) with every-other-spike input (line 2). The targets are shown pointing downward above line 2.

3.2 Sequence Detector
Only minor modifications were needed to the chromosome specifications (above). The extra re-evaluations on random sets of inputs slowed the run time and also made the fitness variable for any given SNN. Consequently, winning SNN designs had to be extracted from the evolutionary trace file by setting criteria. Our heuristic criteria were: to be considered a candidate solution, a chromosome must have survived at least 100 generations, and its worst penalty had to be no greater than 1000 (using the same metric as the tonic burster). Nine candidates met these criteria. The one with the minimum worst performance (621), called candidate_1, was examined for its performance on a new randomly generated set of 40 input spike pairs, 22 of which were designed to be in the proper sequence. The SNN topology is shown in Figure 12 and its spiking behavior is shown in Figure 13.

Figure 12. Candidate_1 for the 2-input sequence detector task.

Figure 13. Spiking behavior of Candidate_1 for the 2-input sequence detector task. Each neuron's spikes are shown at the neuron's location on the y axis and point upwards. The correct spike location is shown above, pointing down. N2 is the output neuron (see Figure 12). The input spike pairs are shown below N1 on the y axis. It is difficult to see which input pairs are in sequence at this resolution, but the targets above N2 show which ones are. Note that N1, N3 and N4 never spike at all.

Curiously, one immediately sees in Figure 13 that only N2 ever fired on the 40 test patterns. Of course, we cannot say that the other neurons never will, but it seems likely that N2 is sufficient, and the "extra" neurons were inherited from ancestors where they might have served useful purposes. Perhaps we can call this an example of unintelligent design, something that is well known to evolutionary biologists [16]. It is not unexpected that evolution might produce such superfluous designs; after all, there was no selection pressure for "efficient" designs, only for "successfully performing" designs. It is possible to conceive of schemes for adding this kind of selection pressure in the future.

Further examination of Figure 13 shows that this SNN never failed to spike when the input pattern was in sequence according to the random input set generator. Once, at 1345 ms, it fired when the teacher did not consider the inputs to be in the proper sequence. Upon close examination, the input spikes arrive at 1331 and 1332 ms, in the proper sequence, but only 1 ms apart. The teacher sets the target to be 10 ms after the spike at IN2, but its in-sequence pairs had to have a separation of 2-8 ms. Evolution tried to meet this specification (all chromosomes faced this challenge), but was unable to. In the examples in Figure 13 where the spike from N2 appears to match the target, it is actually off by between 0 and 4 ms (too small to detect at the resolution of Figure 13). Its errors of commission, like the case at 1345 ms, were penalized less than errors of omission [15], so evolution chose the path of lesser penalty. This topic is discussed more elsewhere [13], as is the topic of adding learning via synaptic plasticity [14].

4. CONCLUSIONS
We have presented an approach to evolving SNNs, topology and parameters together, that seems to work at least for the two small tasks tested. Functioning designs were found that exhibited the desired behaviors even beyond the simulation time used for evolution, and that were robust to noise perturbations. Observations suggest that spike encoding of information may be capable of retaining very fine detail (e.g. the noise perturbations) in addition to macro-scale patterns (e.g. bursts). Some evolved designs were inefficient in that they contained non-functioning neurons: unintelligent design(?). However, there was no selection pressure for efficient designs, only well-performing ones. This growth process was designed to scale to much larger networks, because the number of bits for the row/column targets is presumed not to need to grow very fast. They only act as attractors and therefore need only contain enough information to tip the decision process towards some synapses during SNN growth. The algorithm makes no explicit attempt to exploit regularities such as symmetry, as does Stanley's HyperNEAT approach [18]. Only additional experimentation will show what limits there are to scaling up this process.

ACKNOWLEDGEMENTS
This work was supported by the Visiting Faculty Research Program at the US Air Force Research Laboratory, Rome, NY.

REFERENCES
[1] R. Batllori, C.B. Laramee, W. Land, J.D. Schaffer, "Evolving spiking neural networks for robot control," Procedia CS 6: 329-334, (2011).
[2] J. Clune, B.E. Beckmann, P.K. McKinley, and C. Ofria, "Investigating Whether HyperNEAT Produces Modular Neural Networks," Genetic and Evolutionary Computation Conference GECCO, 635-642, (2010).
[3] L. Eshelman, "The CHC Adaptive Search Algorithm: How to have safe search while engaging in nontraditional genetic recombination," in G. Rawlins (Ed.), Proceedings of FOGA, Morgan Kaufmann, Palo Alto, CA, 265-283, (1991).
[4] W. Gerstner and W.M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, (2002).
[5] D.O. Hebb, Organization of Behavior, New York: Wiley, (1949).
[6] D.Z. Jin, "Spiking neural network for recognizing spatiotemporal sequences of spikes," Physical Review E 69, (2004).
[7] W. Maass and M. Schmitt, "On the complexity of learning for spiking neurons with temporal coding," Information and Computation, 153: 26-46, (1999).
[8] H. Markram, Y. Wang, and M. Tsodyks, "Differential signaling via the same axon of neocortical pyramidal neurons," Proc. Natl. Acad. Sci. (Neurobiology) 95, 5323-5328, (1998).
[9] E.A. Phelps, "Learning: Challenges in the merging of levels," in Science of Memory: Concepts, H.L. Roediger III, Y. Dudai and S.M. Fitzpatrick (eds.), Oxford University Press, 45-48, (2007).
[10] A.M. Rosen, H. Sichtig, J.D. Schaffer and P.M. Di Lorenzo, "Taste-specific cell assemblies in a biologically informed model of the nucleus of the solitary tract," J Neurophysiol. 104(1), 4-17, (2010).
[11] J. Rowe, D. Whitley, L. Barbulescu, and J.P. Watson, "Properties of Gray and Binary Representations," Evolutionary Computation, 12(1), 47-76, (2004).
[12] A. Roy, "Evolving Spike Neural Network Based Spatio-temporal Signal Classifier with an Application to Characterizing Alcoholic Brains using Visually Evoked Response Potential," PhD Thesis, Binghamton University, (2014).
[13] J.D. Schaffer, "Evolving Spiking Neural Networks: A Novel Growth Algorithm Corrects the Teacher," Eighth IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), (2015, submitted).
[14] J.D. Schaffer, "Hebbian Plasticity Improves Evolution of Spiking Neural Networks using a Gene-driven Growth Process," Genetic and Evolutionary Computation Conference GECCO, (2015, submitted).
[15] J.D. Schaffer, H. Sichtig, C.B. Laramee, "A series of failed and partially successful fitness functions for evolving spiking neural networks," Genetic and Evolutionary Computation Conference GECCO (Companion), 2661-2664, (2009).
[16] N.H. Shubin, "The Evolutionary Origins of Hiccups and Hernias," Scientific American, Jan. (2009).
[17] H. Sichtig, "The SGE Framework: Discovering Spatio-temporal Patterns in Biological Systems with Spiking Neural Networks (S), a Genetic Algorithm (G), and Expert Knowledge (E)," PhD Dissertation, Binghamton University, Binghamton, NY, (2009).
[18] K.O. Stanley, "Compositional pattern producing networks: A novel abstraction of development," Genetic Programming and Evolvable Machines, 8, 131-162, (2007).
[19] L. Watts, "A tour of Neuralog and Spike: tools for simulating networks of spiking neurons," Tech. report, Synaptics, Inc., (1993).
[20] D. Whitley and J.D. Schaffer (eds.), COGANN-92: Combinations of Genetic Algorithms and Neural Networks, IEEE Computer Society Press, Los Alamitos, CA, (1992).