MATHEMATICAL STRATEGIES FOR PROGRAMMING BIOLOGICAL CELLS

by Jomar F. Rabajante

A master’s thesis submitted to the Institute of Mathematics College of Science University of the Philippines Diliman, Quezon City

as partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics (Mathematics in Life and Physical Sciences)

April 2012

This is to certify that this Master’s Thesis entitled “Mathematical Strategies for Programming Biological Cells”, prepared and submitted by Jomar F. Rabajante to fulfill part of the requirements for the degree of Master of Science in Applied Mathematics, was successfully defended and approved on March 23, 2012.

Cherryl O. Talaue, Ph.D. Thesis Co-Adviser

Baltazar D. Aguda, Ph.D. Thesis Co-Adviser

Carlene P. Arceo, Ph.D. Thesis Reader

The Institute of Mathematics endorses the acceptance of this Master’s Thesis as partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics (Mathematics in Life and Physical Sciences).

Marian P. Roque, Ph.D. Director Institute of Mathematics

This Master’s Thesis is hereby officially accepted as partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics (Mathematics in Life and Physical Sciences).

Jose Maria P. Balmaceda, Ph.D. Dean, College of Science

Brief Curriculum Vitae

09 October 1984    Born, Sta. Cruz, Laguna, Philippines

1997-2001          Don Bosco High School, Sta. Cruz, Laguna

2006               B.S. Applied Mathematics (Operations Research Option), University of the Philippines Los Baños

2006-2008          Corporate Planning Assistant, Insular Life Assurance Co. Ltd.

2008               Professional Service Staff, International Rice Research Institute

2008-present       Instructor, Mathematics Division, Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños

PUBLICATIONS

• Rabajante, J.F., Figueroa, R.B. Jr. and Jacildo, A.J. 2009. Modeling the area restrict searching strategy of stingless bees, Trigona biroi, as a quasi-random walk process. Journal of Nature Studies, 8(2): 15-21.

• Esteves, R.J.P., Villadelrey, M.C. and Rabajante, J.F. 2010. Determining the optimal distribution of bee colony locations to avoid overpopulation using mixed integer programming. Journal of Nature Studies, 9(1): 79-82.

• Castilan, M.G.D., Naanod, G.R.K., Otsuka, Y.T. and Rabajante, J.F. 2011. From Numbers to Nature. Journal of Nature Studies, 9(2)/10(1): 35-39.

• Tambaoan, R.S., Rabajante, J.F., Esteves, R.J.P. and Villadelrey, M.C. 2011. Prediction of migration path of a colony of bounded-rational species foraging on patchily distributed resources. Advanced Studies in Biology, 3(7): 333-345.


Table of Contents

Acknowledgments
Abstract
List of Figures

Chapter 1. Introduction

Chapter 2. Preliminaries: Biology of Cellular Programming
  2.1 Stem cells in animals
  2.2 Transcription factors and gene expression
  2.3 Biological noise and stochastic differentiation

Chapter 3. Preliminaries: Mathematical Models of Gene Networks
  3.1 The MacArthur et al. GRN
  3.2 ODE models representing GRN dynamics
    3.2.1 Cinquin and Demongeot ODE formalism
    3.2.2 ODE model by MacArthur et al.
  3.3 Stochastic Differential Equations

Chapter 4. Preliminaries: Analysis of Nonlinear Systems
  4.1 Stability analysis
  4.2 Bifurcation analysis
  4.3 Fixed point iteration
  4.4 Sylvester resultant method
  4.5 Numerical solution to SDEs

Chapter 5. Results and Discussion: Simplified GRN and ODE Model
  5.1 Simplified MacArthur et al. model
  5.2 The generalized Cinquin-Demongeot ODE model
  5.3 Geometry of the Hill function
  5.4 Positive invariance
  5.5 Existence and uniqueness of solution

Chapter 6. Results and Discussion: Finding the Equilibrium Points
  6.1 Location of equilibrium points
  6.2 Cardinality of equilibrium points
    6.2.1 Illustration 1
    6.2.2 Illustration 2

Chapter 7. Results and Discussion: Stability of Equilibria and Bifurcation
  7.1 Stability of equilibrium points
  7.2 Bifurcation of parameters

Chapter 8. Results and Discussion: Introduction of Stochastic Noise

Chapter 9. Summary and Recommendations

Appendix A. More on Equilibrium Points: Illustrations
  A.1 Assume n = 2, ci = 1, cij = 1
    A.1.1 Illustration 1
    A.1.2 Illustration 2
    A.1.3 Illustration 3
    A.1.4 Illustration 4
  A.2 Assume n = 2, ci = 2
    A.2.1 Illustration 1
    A.2.2 Illustration 2
    A.2.3 Illustration 3
    A.2.4 Illustration 4
    A.2.5 Illustration 5
  A.3 Assume n = 3
    A.3.1 Illustration 1
    A.3.2 Illustration 2
    A.3.3 Illustration 3
    A.3.4 Illustration 4
    A.3.5 Illustration 5
    A.3.6 Illustration 6
    A.3.7 Illustration 7
    A.3.8 Illustration 8
  A.4 Ad hoc geometric analysis
  A.5 Phase portrait with infinitely many equilibrium points

Appendix B. Multivariate Fixed Point Algorithm

Appendix C. More on Bifurcation of Parameters: Illustrations
  C.1 Adding gi > 0, Illustration 1
  C.2 Adding gi > 0, Illustration 2
  C.3 gi as a function of time
    C.3.1 As a linear function
    C.3.2 As an exponential function
  C.4 The effect of γij
  C.5 Bifurcation diagrams
    C.5.1 Illustration 1
    C.5.2 Illustration 2
    C.5.3 Illustration 3

Appendix D. Scilab Program for Euler-Maruyama

List of References

Acknowledgments

I owe my deepest gratitude to those who made this thesis possible:

To Dr. Baltazar D. Aguda from the National Cancer Institute, USA for providing the thesis topic, for imparting knowledge about models of cellular regulation, for simplifying the MacArthur et al. (2008) GRN, for giving his valuable time to answer my questions despite long distance communication, and for his patience, unselfish guidance and encouragement;

To Dr. Cherryl O. Talaue for her all-out support, for spending time checking my proofs and editing my manuscript, for granting my requests to write recommendation letters, for the guidance, for the encouragement, and for always being available;

To Dr. Carlene P. Arceo for doing the proofreading of my thesis manuscript despite her being on sabbatical leave, and to the members of my thesis panel for the constructive criticisms;

To Mr. Mark Jayson V. Cortez and Ms. Jenny Lynn B. Carigma for checking my manuscript for grammatical and style errors as well as for the motivation;

To the University of the Philippines Los Baños (UPLB) and to the Math Division, Institute of Mathematical Sciences and Physics (IMSP), UPLB for allowing me to go on study leave with pay;

To Dr. Virgilio P. Sison, the Director of IMSP, for all the support and for being the co-maker in my DOST scholarship contract;

To Prof. Ariel L. Babierra, the Head of the Math Division, IMSP and to Dr. Editha C. Jose for the invaluable suggestions, help and encouragement;

To the Philippine Council for Industry, Energy and Emerging Technology Research and Development (PCIEERD), Department of Science and Technology (DOST) for the generous financial support; and

To my family for the inspiration, and to El Elyon for the unwavering strength.


Abstract

Mathematical Strategies for Programming Biological Cells
Jomar F. Rabajante
University of the Philippines, 2012

Co-Adviser: Cherryl O. Talaue, Ph.D. Co-Adviser: Baltazar D. Aguda, Ph.D.

In this thesis, we study a phenomenological gene regulatory network (GRN) of a mesenchymal cell differentiation system. The GRN is composed of four nodes consisting of pluripotency and differentiation modules. The differentiation module represents a circuit of transcription factors (TFs) that activate osteogenesis, chondrogenesis, and adipogenesis. We investigate the dynamics of the GRN using Ordinary Differential Equations (ODEs). The ODE model is based on a non-binary simultaneous decision model with autocatalysis and mutual inhibition. The simultaneous decision model can represent a cellular differentiation process that involves more than two possible cell lineages. We prove some mathematical properties of the ODE model, such as positive invariance and existence and uniqueness of solutions. We employ geometric techniques to analyze the qualitative behavior of the ODE model. We determine the location and the maximum number of equilibrium points given a set of parameter values. The solutions to the ODE model always converge to a stable equilibrium point. Under some conditions, the solution may converge to the zero state. We are able to show that the system can induce multistability that may give rise to co-expression or to domination by some TFs.

We illustrate cases showing how the behavior of the system changes when we vary some of the parameter values. Varying the values of some parameters, such as the degradation rate and the amount of exogenous stimulus, can decrease the size of the basin of attraction of an undesirable equilibrium point as well as increase the size of the basin of attraction of a desirable equilibrium point. A sufficient change in some parameter values can make a trajectory of the ODE model escape an inactive or a dominated state. Sufficient amounts of exogenous stimuli affect the potency of cells. The introduction of an exogenous stimulus is a possible strategy for controlling cell fate. A dominated TF can be made to exceed a dominating TF by adding a corresponding exogenous stimulus. Moreover, increasing the amount of exogenous stimulus can shut down the multistability of the system so that only one stable equilibrium point remains.

We also consider the case where random noise is present in the system. We add a Gaussian white noise term to our ODE model, making the model a system of stochastic differential equations (SDEs). Simulations reveal that it is possible for cells to switch lineages when the system is multistable. We are able to show that a sole attractor can regulate the effect of moderate stochastic noise in gene expression.


List of Figures

1.1 Analysis of mesenchymal cell differentiation system.

2.1 Stem cell self-renewal, differentiation and programming. This diagram illustrates the abilities of stem cells to proliferate through self-renewal, differentiate into specialized cells and reprogram towards other cell types.
2.2 Priming and differentiation. Colored circles represent genes or TFs. The sizes of the circles determine lineage bias. Priming is represented by colored circles having equal sizes. The largest circle governs the possible phenotype of the cell. [70]
2.3 The flow of information. The blue solid lines represent general flow and the blue dashed lines represent special (possible) flow. The red dotted lines represent the impossible flow as postulated in the Central Dogma of Molecular Biology [41].
2.4 C. Waddington's epigenetic landscape ("creode") [168].

3.1 The coarse-graining of the differentiation module. The network in (a) is simplified into (b), where arrows indicate up-regulation (activation) while bars indicate down-regulation (repression). [113]
3.2 The MacArthur et al. [113] mesenchymal gene regulatory network. Arrows indicate up-regulation (activation) while bars indicate down-regulation (repression).
3.3 Gene expression or the concentration of the TFs can be represented by a state vector, e.g. ([X1], [X2], [X3], [X4]) [70]. For example, TFs of equal concentration can be represented by a vector with equal components, such as (2.4, 2.4, 2.4, 2.4).
3.4 Hierarchic decision model and simultaneous decision model. Bars represent repression or inhibition, while arrows represent activation. [36]

4.1 The slope of F(X) at the equilibrium point determines the linear stability. Positive gradient means instability, negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients. Refer to the Insect Outbreak Model: Spruce Budworm in [122].
4.2 Sample bifurcation diagram showing saddle-node bifurcation.
4.3 An illustration of a cobweb diagram.

5.1 The original MacArthur et al. [113] mesenchymal gene regulatory network.
5.2 Possible paths that result in positive feedback loops. Shaded boxes denote that the path repeats.
5.3 The simplified MacArthur et al. GRN.
5.4 Graph of the univariate Hill function when ci = 1.
5.5 Possible graphs of the univariate Hill function when ci > 1.
5.6 The graph of Y = Hi([Xi]) shrinks as the value of Ki + Σ_{j≠i} γij[Xj]^cij increases.
5.7 The Hill curve gets steeper as the value of the autocatalytic cooperativity ci increases.
5.8 The graph of Y = Hi([Xi]) is translated upwards by gi units.
5.9 The 3-dimensional curve induced by Hi([X1], [X2]) + gi and the plane induced by ρi[Xi], an example.
5.10 The intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi with varying values of Ki + Σ_{j≠i} γij[Xj]^cij, an example.
5.11 The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci = 1 and gi = 0. The value of Ki + Σ_{j≠i} γij[Xj]^cij is fixed.
5.12 The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci = 1 and gi > 0. The value of Ki + Σ_{j≠i} γij[Xj]^cij is fixed.
5.13 The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci > 1 and gi = 0. The value of Ki + Σ_{j≠i} γij[Xj]^cij is fixed.
5.14 The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci > 1 and gi > 0. The value of Ki + Σ_{j≠i} γij[Xj]^cij is fixed.
5.15 Finding the univariate fixed points using a cobweb diagram, an example. We define the fixed point as [Xi] satisfying H([Xi]) + gi = ρi[Xi].
5.16 The curves are rotated, making the line Y = ρi[Xi] the horizontal axis. Positive gradient means instability, negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients.
5.17 When gi = 0, [Xi] = 0 is a component of a stable equilibrium point.
5.18 When gj > 0, [Xj] = 0 will never be a component of an equilibrium point.

6.1 Sample numerical solution in time series with the upper bound and lower bound.
6.2 Y = [Xi]^ci / (K + [Xi]^ci) will never touch the point (1, 1) for 1 < ci < ∞.
6.3 An example where ρi(Ki^(1/ci)) > βi; Y = Hi([Xi]) and Y = ρi[Xi] only intersect at the origin.

7.1 When gi = 0, ci = 1 and the decay line is tangent to the univariate Hill curve at the origin, then the origin is a saddle.
7.2 Varying the values of parameters may vary the size of the basin of attraction of the lower-valued stable intersection of Y = Hi([Xi]) + gi and Y = ρi[Xi].
7.3 The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where c > 1 and g = 0. The value of Ki + Σ_{j≠i} γij[Xj]^cij is taken as a parameter.
7.4 The possible topologies when Y = Hi([Xi]) essentially lies below the decay line Y = ρi[Xi], gi = 0.
7.5 The origin is unstable while the points where [Xi]* = β/ρ − K − Σ_{j≠i} [Xj]* are stable.
7.6 Increasing the value of gi can result in an increased value of [Xi] where Y = Hi([Xi]) + gi and Y = ρi([Xi]) intersect.
7.7 Increasing the value of gi can result in an increased value of [Xi]*, and consequently a decreased value of [Xj] where Y = Hj([Xj]) + gj and Y = ρj([Xj]) intersect, j ≠ i.

8.1 For Illustration 1; ODE solution and SDE realization with G(X) = 1.
8.2 For Illustration 1; ODE solution and SDE realization with G(X) = X.
8.3 For Illustration 1; ODE solution and SDE realization with G(X) = √X.
8.4 For Illustration 1; ODE solution and SDE realization with G(X) = F(X).
8.5 For Illustration 1; ODE solution and SDE realization using the random population growth model.
8.6 For Illustration 2; ODE solution and SDE realization with G(X) = 1.
8.7 For Illustration 2; ODE solution and SDE realization with G(X) = X.
8.8 For Illustration 2; ODE solution and SDE realization with G(X) = √X.
8.9 For Illustration 2; ODE solution and SDE realization with G(X) = F(X).
8.10 For Illustration 2; ODE solution and SDE realization using the random population growth model.
8.11 For Illustration 3; ODE solution and SDE realization with G(X) = 1.
8.12 For Illustration 3; ODE solution and SDE realization with G(X) = X.
8.13 For Illustration 3; ODE solution and SDE realization with G(X) = √X.
8.14 For Illustration 3; ODE solution and SDE realization with G(X) = F(X).
8.15 For Illustration 3; ODE solution and SDE realization using the random population growth model.
8.16 Phase portrait of [X1] and [X2].
8.17 Reactivating switched-off TFs by introducing random noise where G(X) = 1.

9.1 The simplified MacArthur et al. GRN.

A.1 Intersections of F1, F2 and the zero-plane, an example.
A.2 The intersection of Y = H1([X1]) + 1 and Y = 10[X1] with [X2] = 1.001 and [X3] = 0.
A.3 The intersection of Y = H2([X2]) and Y = 10[X2] with [X1] = 0.10103 and [X3] = 0.
A.4 The intersection of Y = H3([X3]) and Y = 10[X3] with [X1] = 0.10103 and [X2] = 1.001.
A.5 A sample phase portrait of the system with infinitely many non-isolated equilibrium points.

C.1 Determining the adequate g1 > 0 that would give rise to a sole equilibrium point where [X1]* > [X2]*.
C.2 An example where, without g1, [X1]* = 0.
C.3 [X1]* escaped the zero state because of the introduction of g1, which is a decaying linear function.
C.4 An example of shifting from a lower stable component to a higher stable component through adding gi(t) = −υi t + gi(0).
C.5 [X1]* escaped the zero state because of the introduction of g1, which is a decaying exponential function.
C.6 Parameter plot of γ, an example.
C.7 Intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where c > 1 and g = 0; and an event of bifurcation.
C.8 Saddle-node bifurcation; β1 is varied.
C.9 Saddle-node bifurcation; K1 is varied.
C.10 Saddle-node bifurcation; ρ1 is varied.
C.11 Cusp bifurcation; β1 and g1 are varied.
C.12 Cusp bifurcation; K1 and c are varied.
C.13 Cusp bifurcation; K1 and g1 are varied.
C.14 Cusp bifurcation; ρ1 and g1 are varied.
C.15 Saddle-node bifurcation; ρ2 is varied.
C.16 Saddle-node bifurcation; g2 is varied.
C.17 Saddle-node bifurcation; ρ2 is varied.
C.18 Saddle-node bifurcation; g2 is varied.

Chapter 1
Introduction

The field of Biomathematics has proven to be useful and essential for understanding the behavior and control of dynamic biological interactions. These interactions span a wide spectrum of spatio-temporal scales — from interacting chemical species in a cell to individual organisms in a community, and from fast interactions occurring within seconds to those that slowly progress over years. Mathematical and in silico models enable scientists to generate quantitative predictions that may serve as initial input for testing biological hypotheses, minimizing trial and error, as well as to investigate complex biological systems that are impractical or infeasible to study through in situ and in vitro experiments.

One classic question that scientists want to answer is how simple cells generate complex organisms. In this study, we are interested in the analysis of gene interaction networks that orchestrate the differentiation of stem cells into the various cell lineages that make up an organism. We are also motivated by the prospects of utilizing stem cells in regenerative medicine (such as through replenishment of damaged tissues as well as treatment of Parkinson's disease and diabetes) [1, 50, 107, 151, 171, 180], in revolutionizing drug discovery [2, 48, 136, 141, 142], and in the control of so-called cancer stem cells, which have been hypothesized to maintain the growth of tumors [57, 65, 110, 171, 172]. The ongoing -omics (genomics, transcriptomics, proteomics, etc.) and systems biology revolutions [3, 33, 61, 62, 63, 93, 96, 99, 100, 108, 133] are continually providing details about gene networks.

The focus of this study is the mathematical analysis of a gene network [113] involved in the differentiation of multipotent stem cells into three mesenchymal cell types, namely, cells that form bone (osteoblasts), cartilage (chondrocytes), and fat (adipocytes).
This gene network shows the coupled interaction among stem-cell-specific transcription factors and lineage-specifying transcription factors induced by exogenous stimuli.

MacArthur et al. [113] proposed a model of this gene network, and we hypothesize that further and more substantial analytical and computational study of this model would reveal important insights into the control of the mesenchymal cell differentiation system. We refer to the process of controlling the fate of a stem cell towards a chosen lineage as cellular programming. We analyze the gene network of MacArthur et al. [113] by simplifying the network model while preserving the essential qualitative dynamics.

In Chapter (5) of this thesis, we simplify the MacArthur et al. [113] network model to highlight the essential components of the mesenchymal cell differentiation system and to ease the analysis. We translate the simplified network model into a system of Ordinary Differential Equations (ODEs) using the Cinquin-Demongeot formalism [38]. The system of ODEs formulated by Cinquin-Demongeot [38] is one of the mathematical models appropriate for representing the dynamics depicted in the simplified MacArthur et al. [113] gene network. The state variables of the ODE model represent the concentrations of the transcription factors involved in gene expression. The Cinquin-Demongeot [38] ODE model can represent various biological interactions, such as molecular interactions during gene transcription, and it can represent cellular differentiation with more than two possible outcomes.

Stability and bifurcation analyses of the ODE model are important in understanding the dynamics of cellular differentiation. An asymptotically stable equilibrium point is associated with a certain cell type. In Chapters (6) and (7), we determine the biologically feasible (nonnegative real-valued) coexisting stable equilibrium points of the ODE model for a given set of parameters. We also identify whether varying the values of some parameters, such as those associated with the exogenous stimuli, can steer the system toward a desired state.
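To fix ideas, a minimal computational sketch may help. In the Cinquin-Demongeot formalism, each TF concentration [Xi] obeys an ODE of the form d[Xi]/dt = βi[Xi]^ci / (Ki + Σ_{j≠i} γij[Xj]^cij + [Xi]^ci) + gi − ρi[Xi], combining Hill-type autocatalysis, mutual inhibition through the γij terms, an exogenous stimulus gi, and linear decay; a Gaussian white noise term then turns this into an SDE. The Python sketch below integrates a two-TF instance with the Euler-Maruyama scheme (the thesis's own integrator, in Scilab, appears in Appendix D). All parameter values, the shared exponent c, and the additive-noise choice G(X) = 1 are hypothetical illustrations, not the thesis's actual settings.

```python
import math
import random

def F(x, beta, K, gamma, c, g, rho):
    """Deterministic right-hand side of a generalized Cinquin-Demongeot
    system: Hill-type autocatalysis with mutual inhibition, plus an
    exogenous stimulus g[i], minus linear decay rho*x[i].  For simplicity,
    beta, K, gamma, rho and the exponent c are shared by all TFs here;
    the full model allows them to differ per TF."""
    n = len(x)
    dx = [0.0] * n
    for i in range(n):
        inhibition = sum(gamma * x[j] ** c for j in range(n) if j != i)
        hill = beta * x[i] ** c / (K + inhibition + x[i] ** c)
        dx[i] = hill + g[i] - rho * x[i]
    return dx

def euler_maruyama(x0, T, steps, sigma, seed=0, **p):
    """Integrate d[Xi] = F_i(X) dt + sigma dW_i (additive noise, G(X) = 1)."""
    rng = random.Random(seed)
    dt = T / steps
    x = list(x0)
    path = [list(x)]
    for _ in range(steps):
        f = F(x, **p)
        for i in range(len(x)):
            dW = rng.gauss(0.0, math.sqrt(dt))
            # clip at zero so concentrations stay biologically feasible
            x[i] = max(0.0, x[i] + f[i] * dt + sigma * dW)
        path.append(list(x))
    return path

params = dict(beta=5.0, K=1.0, gamma=1.0, c=2, g=[0.0, 0.0], rho=1.0)

# Deterministic run (sigma = 0): from a biased start the trajectory
# settles on a stable equilibrium where X1 dominates and X2 shuts off.
det = euler_maruyama([3.0, 0.1], T=50.0, steps=5000, sigma=0.0, **params)

# Stochastic run: moderate noise jitters the path and, in multistable
# regimes, can push the state across basins of attraction (a lineage switch).
sto = euler_maruyama([3.0, 0.1], T=50.0, steps=5000, sigma=0.3, **params)
```

With g = 0 the dominated TF decays to the zero state, consistent with the behavior summarized in the abstract; raising g[1] above zero is precisely the "exogenous stimulus" strategy studied in Chapter (7) and Appendix C.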
Furthermore, in Chapter (8), we numerically investigate the robustness of the gene network against stochastic noise by adding a noise term to the deterministic ODEs. The objectives of the study are summarized in the following diagram:

Figure 1.1: Analysis of mesenchymal cell differentiation system.


Chapter 2
Preliminaries: Biology of Cellular Programming

2.1 Stem cells in animals

Stem cells are very important for the development, growth and repair of tissues. These are cells that can undergo mitosis (cell division) and have two contrasting abilities: the ability to self-renew and the ability to differentiate into different specialized cell types. Self-renewal is the ability of stem cells to proliferate, that is, one or both daughter cells remain stem cells after cell division. When a stem cell undergoes differentiation, it develops into a more mature (specialized) cell, losing its abilities to self-renew and to differentiate towards other cell types. In addition, scientists have shown that some cells can dedifferentiate and some can be transdifferentiated. Dedifferentiation means that a differentiated cell is transformed back to an earlier stage, while transdifferentiation means that a cell is programmed to switch cell lineages.

The maturity of a stem cell is classified based on the cell's potency (the cell's capability to differentiate into various types). The three major kinds of stem cell potency are totipotency, pluripotency and multipotency. Figure (2.1) shows these three types of potency and the differentiation process.

Figure 2.1: Stem cell self-renewal, differentiation and programming. This diagram illustrates the abilities of stem cells to proliferate through self-renewal, differentiate into specialized cells and reprogram towards other cell types.

Totipotent stem cells have the potential to generate all cells including extraembryonic tissues, such as the placenta, and they are the ancestors of all cells of an organism. A zygote is an example of a totipotent stem cell. Pluripotent stem cells are descendants of totipotent stem cells that have lost their ability to generate extraembryonic tissues but not their ability to generate all cells of the embryo. Examples of these stem cells are the cells of the epiblast from the inner cell mass of the blastocyst embryo. These stem cells can differentiate into almost all types of cells; specifically, they form the endoderm, mesoderm and ectoderm germ layers.

Pluripotent stem cells form all cell types found in an adult organism. The stomach, intestines, liver, pancreas, urinary bladder, lungs and thyroid are formed from the endoderm layer; the central nervous system, lens of the eye, epidermis, hair, sweat glands, nails, teeth and mammary glands are formed from the ectoderm layer. The mesoderm layer connects the endoderm and ectoderm layers, and forms the bones, muscles, connective tissues, heart, blood cells, kidneys, spleen and middle layer of the skin. Embryonic stem (ES) cells, epiblast stem cells, embryonic germ cells (derived from primordial germ cells), spermatogonial male germ stem cells and induced pluripotent

Chapter 2. Preliminaries

Biology of Cellular Programming

6

stem cells (iPSCs) are examples of pluripotent stem cells that are cultured in vitro. ES cells are derived from the inner cell mass of the blastocyst embryo upon explantation (isolated from the normal embryo). Some adult stem cells, which can be somatic (related to the body) or germline (related to the gametes such as ovum and sperm), with embryonic stem cell-like pluripotency have been found by researchers under certain environments [16, 97, 103, 125, 170, 181]. Umbilical cord blood, adipose tissue and bone marrow are found to be sources of pluripotent stem cells. The production of iPSCs in 2006 [109, 162] is a major breakthrough for stem cell research. The iPSCs are cells that are artificially reprogrammed to dedifferentiate from differentiated or partially differentiated cells to become pluripotent again. With only few ethical issues compared to embryo cloning, iPSCs can be used for possible therapeutic purposes such as treating degenerative diseases, repairing damaged tissues and reprogramming cancer stem cells. However, there are plenty of issues on the use of iPSCs such as safety and efficiency. Currently, there is still no strong proof that generated iPSCs and natural ES cells are totally identical [158]. Pluripotent stem cells that differentiate to specific cell lineages lose their pluripotency, that is, they lose their ability to generate other kinds of cells. Multipotent stem cells are descendants of pluripotent stem cells but are already partially differentiated — they have the ability to self-renew yet can differentiate only to specific cell lineages. Multipotent stem cells are adult stem cells that are commonly considered as progenitor cells (cells that are in the stage between being pluripotent and fully differentiated). When a multipotent stem cell further differentiates, it matures to a more specialized cell lineage. Oligopotent and unipotent stem cells are progenitor cells that have very limited ability for self-renewal and are less potent. 
Oligopotent stem cells are descendants of multipotent stem cells and can only differentiate into very few cell types. Usually, stem cells are given special names based on the degree of potency, such as tripotent and bipotent, depending on whether the cell can differentiate into three or two cell fates, respectively. Unipotent stem cells, which are commonly called precursor cells, can only differentiate into one cell type but are not the same as fully differentiated cells. Fully differentiated cells are at the determined terminal state, that is, they have completed the differentiation process, have exited the cell cycle, and have lost the ability to self-renew [23, 123].

Figure 2.2: Priming and differentiation. Colored circles represent genes or TFs. The sizes of the circles determine lineage bias. Priming is represented by colored circles having equal sizes. The largest circle governs the possible phenotype of the cell. [70]

In vitro, ex vivo and in vivo programming have already been done [138, 139, 151, 177]. The idea of programming biological cells indicates that some cells are "plastic" (i.e., some cells have the ability to change lineages). This plasticity of cells shows that some cells do not permanently inactivate unexpressed genes but rather retain all genetic information (see Figure (2.2)). Three in vitro approaches to cellular programming have been discussed in a review by Yamanaka [177]. These approaches are nuclear transfer, cell fusion and transcription-factor transduction [19, 44, 51, 58, 106, 177]. The process of nuclear transfer was used to successfully clone Dolly the sheep. Transcription-factor transduction, commonly called direct programming, alters the expression of transcription factors (TFs) by overexpression or by deletion. Overexpressing one TF may down-regulate other TFs, which can lead to a change in the phenotype of a cell.

In 2006, Yamanaka and Takahashi [162] identified four factors — OCT3/4, SOX2, c-MYC and KLF4 — that are enough to reprogram mouse fibroblasts into iPSCs (through the use of a retrovirus). In 2007, Yamanaka, Takahashi and colleagues [161] generated iPSCs from adult human fibroblasts using the same defined factors. The three cellular programming approaches discussed by Yamanaka [177] share common features — demethylation of pluripotency gene promoters and activation of ES-cell-specific TFs such as OCT4, SOX2 and NANOG [113, 124, 129]. In this study, we only consider the TF transduction approach.

To understand cellular differentiation and TF transduction, we need to look at gene regulatory networks. Gene regulatory networks (GRNs) establish the interactions of molecules and other signals for the activation or inhibition of genes. We consider the key pluripotency transcription factors OCT4, SOX2 and NANOG as the elements of the core pluripotency module in our GRN.

For a more detailed discussion about stem cells in animals, the following references may be consulted [1, 12, 20, 22, 25, 34, 39, 42, 59, 74, 78, 80, 84, 103, 117, 148, 151, 159, 169, 177].

2.2 Transcription factors and gene expression

Genes contain hereditary information and are segments of deoxyribonucleic acid (DNA). Gene expression is the process in which information from a gene is used to synthesize functional products, such as the proteins that give the cell its structure and function.

Genes in the DNA direct protein synthesis. Transcription and translation are the two major processes that transform the information from nucleic acids to proteins (see Figure (2.3)).

Figure 2.3: The flow of information. The blue solid lines represent general flow and the blue dashed lines represent special (possible) flow. The red dotted lines represent the impossible flow as postulated in the Central Dogma of Molecular Biology [41].

In the transcription process, the DNA commands the synthesis of ribonucleic acid (RNA) and the information is transcribed from the DNA template to the RNA. The RNA, specifically messenger RNA or mRNA, then carries the information to the part of the cell where protein synthesis will happen. In the translation process, the cell translates the information from the mRNA into proteins. During transcription, the promoter (a DNA sequence where the RNA polymerase enzyme attaches) initiates transcription, while the terminator (also a DNA sequence) marks the end of transcription. However, the RNA polymerase binds to the promoter only after some transcription factors (TFs), a collection of proteins, are attached to the promoter. Gene expression is usually regulated by DNA-binding proteins (such as TFs) during the transcription process, sometimes utilizing external signals. TFs play a main role in gene regulatory networks.

A TF that binds to an enhancer (a control element) and stimulates transcription of a gene is called an activator; a TF that binds to a silencer (also a control element) and inhibits transcription of a gene is called a repressor. Hundreds of TFs have been discovered in eukaryotes. In highly specialized cells, only a small fraction of the genes are activated. Examples of TFs are OCT4, SOX2 and NANOG as well as RUNX2, SOX9 and PPAR-γ. RUNX2, SOX9 and PPAR-γ stimulate the formation of bone cells, cartilage cells and fat cells, respectively [113]. For a more detailed discussion about the relationship between transcription factors and gene expression, the following references may be consulted [24, 89, 126].

2.3 Biological noise and stochastic differentiation

It is believed that stochastic fluctuations in gene expression affect cell fate commitment in normal development and in in vitro culture of cells. The path that a cell takes is not absolutely deterministic but is affected by two kinds of noise — intrinsic and extrinsic [128, 130, 160, 174]. Intrinsic noise is the inherent noise produced during biochemical processes inside the cell, while extrinsic noise is the noise produced by the external environment (such as by other cells). In some cases, extrinsic noise dominates the intrinsic noise and influences cell-to-cell variation [174] because the internal environment of a cell is regulated by homeostasis. Unregulated random fluctuations can have negative effects on the organism. However, in most cases, these stochastic fluctuations are naturally regulated enough to maintain order [30, 111]. Stochastic fluctuations can also have positive effects on the system, such as driving oscillations and inducing switching in cell fates [71, 111, 174]. The papers [113] and [176] discuss the importance of random noise in dedifferentiation, especially in the production of iPSCs.

When a stem cell undergoes cell division, the two daughter cells may both remain identical to the original, may both be differentiated, or one may remain identical to the original while the other differentiates. Cells that undergo differentiation have plenty of cell lineages to choose from, but their cell fates are based on some pattern formation [24]. The model "creode" by C. Waddington [168], as shown in Figure (2.4), illustrates the paths that a cell might take. In Waddington's model, cell differentiation is depicted by a ball rolling down a landscape of hills and valleys. The parts of the valleys where the ball can stay without rolling can be regarded as attractors that represent cell types. GRNs determine the topography of the landscape.


Figure 2.4: C. Waddington's epigenetic landscape — "creode" [168].

For a more detailed discussion about biological noise and stochastic differentiation, the following references may be consulted [9, 15, 26, 28, 30, 53, 64, 80, 81, 83, 85, 94, 101, 105, 111, 112, 127, 131, 132, 152, 164, 176].

Chapter 3

Mathematical Models of Gene Networks

This chapter gives a review of the existing literature on models of gene regulatory networks (GRNs). Commonly, to start the mathematical analysis of GRNs, a directed graph is constructed to visualize the interactions of the molecules involved. Various network analysis techniques, such as clustering algorithms and motif analysis, are available to extract information from the constructed directed graph [4, 30, 45, 68, 90]. The study of the network topology is important in understanding the biological system that the network represents.

Gene regulatory systems are commonly modeled as Bayesian networks, Boolean networks, generalized logical networks, Petri nets, ordinary differential equations, partial differential equations, chemical master equations, stochastic differential equations and rule-based simulations [29, 45]. The choice of mathematical model depends on the assumptions made about the nature of the GRN and on the objectives of the study. In this thesis, we study the directed graph constructed by MacArthur et al. [113], the corresponding system of ordinary differential equations (ODEs) following the Cinquin-Demongeot formalism [38], and the corresponding stochastic differential equations (SDEs).

By using an ODE model, we assume that the time-dependent macroscopic dynamics of the GRN are continuous in both time and state space. We assume continuous dynamics because the process of lineage determination involves a temporal extension, that is, cells pass through intermediate stages [70]. We use ODEs to model the average dynamics of the GRN. ODEs are primarily used to represent the deterministic dynamics of phenomenological (coarse-grained) regulatory networks [70, 121]. In addition, we can add a random noise term to the ODE model to study stochasticity in cellular differentiation.

3.1 The MacArthur et al. GRN

The MacArthur et al. [113] GRN is composed of a pluripotency module (the circuit consisting of OCT4, SOX2, NANOG and their heterodimer and heterotrimer) and a differentiation module (the circuit consisting of RUNX2, SOX9 and PPAR-γ) [113]. The transcription factors RUNX2, SOX9 and PPAR-γ activate the formation of bone cells, cartilage cells and fat cells, respectively.

Figure 3.1: The coarse-graining of the differentiation module. The network in (a) is simplified into (b), where arrows indicate up-regulation (activation) while bars indicate down-regulation (repression). [113]

The derivation of the core differentiation module is shown in Figure (3.1), where the interactions through intermediaries are consolidated to create a simplified network. The MacArthur et al. [113] GRN that we are going to study is shown in Figure (3.2). Feedback loops (which are important for the existence of homeostasis) and autoregulation (or autoactivation, which means that a molecule enhances its own expression) are necessary to attain pluripotency [177]. These feedback loops and autoregulation are also present in the MacArthur et al. GRN [113]; however, they are not enough to generate iPSCs.

Figure 3.2: The MacArthur et al. [113] mesenchymal gene regulatory network. Arrows indicate up-regulation (activation) while bars indicate down-regulation (repression).

Based on the deterministic computational analysis of MacArthur et al. [113], their pluripotency module cannot be reactivated once silenced, that is, it becomes resistant to reprogramming. However, they found that introducing stochastic noise to the system can reactivate the pluripotency module [113].

3.2 ODE models representing GRN dynamics

A state X = ([X1 ], [X2 ], . . . , [Xn ]) represents a temporal stage in the cellular differentiation or programming process (see Figure (3.3)). We define [Xi ] as a component (coordinate) of a state. A stable state (stable equilibrium point) X ∗ = ([X1 ]∗ , [X2 ]∗ , . . . , [Xn ]∗ ) represents a certain cell type, e.g., pluripotent, tripotent, bipotent, unipotent or terminal state.

Figure 3.3: Gene expression or the concentration of the TFs can be represented by a state vector, e.g. ([X1 ], [X2 ], [X3 ], [X4 ]) [70]. For example, TFs of equal concentration can be represented by a vector with equal components, such as (2.4, 2.4, 2.4, 2.4).

Modelers of GRNs often use a function H^+ (or H^−) that is bounded and monotone increasing (or decreasing), with values between zero and one. Examples of such functions are the sigmoidal, hyperbolic and threshold piecewise-linear functions. If we use the sigmoidal H^+ and H^− called the Hill functions, we define

H^+([X], K, c) := [X]^c / (K^c + [X]^c)    (3.1)

for activation of gene expression and

H^−([X], K, c) := 1 − H^+([X], K, c) = K^c / (K^c + [X]^c)    (3.2)

for repression, where the variable [X] is the concentration of the molecule involved [69, 73, 96, 121, 144]. The parameter K is the threshold or dissociation constant and is equal to the value of [X] at which the Hill function is equal to 1/2. The parameter c is called the Hill constant or Hill coefficient and describes the steepness of the Hill curve. The Hill constant often denotes multimerization-induced cooperativity (a multimer is an assembly of multiple monomers or molecules) and may represent the number of cooperative binding sites if c is restricted to a positive integer. However, in some cases, the Hill constant can be a positive real number (usually 1 < c < n, where n is the number of equivalent cooperative binding sites) [73, 174]. If c = 1, then there is no cooperativity [38] and the Hill function becomes the Michaelis-Menten function, which is hyperbolic. If data are available, we can estimate the value of c by inference.

Various ODE models and formulations are presented in [13, 14, 27, 30, 31, 32, 43, 47, 69, 76, 96, 115, 135, 173]. Examples of these are the neural network [166] model, the S-systems (power-law) [167] model, the Andrecut [7] model, the Cinquin-Demongeot 2002 [36] model, and the Cinquin-Demongeot 2005 [38] model. The Cinquin-Demongeot 2002 and 2005 models can represent various GRNs and are more amenable to analysis.
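As a quick numerical illustration, the Hill functions (3.1) and (3.2) can be checked directly. The short Python sketch below (with illustrative values K = 2 and c = 4, chosen arbitrarily rather than taken from the references) verifies that H^+ is half-maximal at [X] = K, that activation and repression sum to one, and that c = 1 recovers the hyperbolic Michaelis-Menten form.

```python
def hill_activation(x, K, c):
    """H+([X], K, c) = [X]^c / (K^c + [X]^c), Equation (3.1)."""
    return x**c / (K**c + x**c)

def hill_repression(x, K, c):
    """H-([X], K, c) = K^c / (K^c + [X]^c), Equation (3.2)."""
    return K**c / (K**c + x**c)

# Illustrative values (chosen arbitrarily, not taken from the references).
K, c = 2.0, 4

# Half-maximal at [X] = K, and activation + repression sum to one.
assert abs(hill_activation(K, K, c) - 0.5) < 1e-12
assert abs(hill_activation(3.0, K, c) + hill_repression(3.0, K, c) - 1.0) < 1e-12

# c = 1 recovers the hyperbolic Michaelis-Menten form [X] / (K + [X]).
assert abs(hill_activation(3.0, K, 1) - 3.0 / (K + 3.0)) < 1e-12
```

Increasing c with K fixed makes the curve approach a step function at [X] = K, which is the "steepness" interpretation of the Hill coefficient described above.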

3.2.1 Cinquin and Demongeot ODE formalism

According to Waddington's model [168], cell differentiation is similar to a ball rolling down a landscape of hills and valleys. The ridges of the hills can be regarded as the unstable equilibrium points, while the parts of the valleys where the ball can stay without rolling further (i.e., at relative minima of the landscape) can be regarded as stable equilibrium points (attractors). Hence, the movement of the ball and its possible location after some time can be represented by dynamical systems, specifically ODEs. However, it should be noted that existing evidence showing the presence of attractors is limited to some mammalian cells [112].

The theory that some cells can differentiate into many different cell types gives the idea that the model representing the dynamics of such cells may exhibit multistability (multiple stable equilibrium points). However, not all GRNs are reducible to a binary or boolean hierarchic decision network (see Figure (3.4)), which is why Cinquin and Demongeot formulated models that can represent cellular differentiation with more than two possible outcomes (multistability) obtained through different developmental pathways [3, 38, 35].

Figure 3.4: Hierarchic decision model and simultaneous decision model. Bars represent repression or inhibition, while arrows represent activation. [36]

The simultaneous decision network (see Figure (3.4)) is a near approximation of the Waddington illustration where there are possibly many cell lineages involved. In 2002, Cinquin and Demongeot proposed an ODE model representing the simultaneous decision network [36]. In 2005, they proposed another ODE model representing the simultaneous decision network but with autocatalysis (autoactivation) [38]. Both Cinquin-Demongeot models are based on the simultaneous decision graph where there is mutual inhibition. All elements in the Cinquin-Demongeot models are symmetric, that is, each node has the same relationship with all other nodes, and all equations in the system of ODEs have equal parameter values.


Equations (3.3) and (3.4) are the Cinquin-Demongeot ODE models without autocatalysis (2002 version, [36]) and with autocatalysis (2005 version, [38]), respectively. Suppose we have n antagonistic transcription factors. The state variable [Xi] represents the concentration of the corresponding TF protein, and the TF expression is subject to first-order degradation (exponential decay). The parameters β, c and g represent the relative speed of transcription (or the strength of the unrepressed TF expression relative to the first-order degradation), cooperativity and "leak", respectively. The parameter g is a basal expression of the corresponding TF, a constant production term that enhances the value of [Xi] and is possibly affected by an exogenous stimulus. For simplification, only the transcription regulation process is considered in [38]. The models are assumed to be intracellular and cell-autonomous (i.e., we only consider processes inside a single cell without the influence of other cells).

Without autocatalysis:

d[Xi]/dt = β / ( 1 + Σ_{j=1, j≠i}^{n} [Xj]^c ) − [Xi],   i = 1, 2, . . . , n    (3.3)

With autocatalysis:

d[Xi]/dt = β[Xi]^c / ( 1 + Σ_{j=1}^{n} [Xj]^c ) − [Xi] + g,   i = 1, 2, . . . , n    (3.4)

The terms

β / ( 1 + Σ_{j=1, j≠i}^{n} [Xj]^c )   and   β[Xi]^c / ( 1 + Σ_{j=1}^{n} [Xj]^c )    (3.5)

are Hill-like functions. In this study, we consider only the Cinquin-Demongeot (2005 version) model (3.4) because autocatalysis is a common property of cell fate-determining factors known as "master" switches [38].
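The dynamics of model (3.4) can be explored with a few lines of code. The sketch below (plain Python with a forward-Euler step; the values β = 5, c = 2, g = 0.01 and the biased initial condition are illustrative assumptions, not parameter values from [38]) exhibits the mutual-inhibition outcome: a small initial bias toward one TF is amplified until that TF dominates while the others are repressed.

```python
def cd2005_rhs(x, beta, c, g):
    """Right-hand side of Equation (3.4):
    d[Xi]/dt = beta*[Xi]^c / (1 + sum_j [Xj]^c) - [Xi] + g."""
    denom = 1.0 + sum(v**c for v in x)
    return [beta * v**c / denom - v + g for v in x]

def euler(rhs, x0, dt, steps, **params):
    """Explicit forward-Euler integration, returning the final state."""
    x = list(x0)
    for _ in range(steps):
        dx = rhs(x, **params)
        x = [v + dt * d for v, d in zip(x, dx)]
    return x

# Three antagonistic TFs with a slight initial bias toward X1; the parameter
# values are illustrative choices, not values taken from [38].
x_final = euler(cd2005_rhs, [1.2, 1.0, 1.0], dt=0.01, steps=20000,
                beta=5.0, c=2, g=0.01)
# the initially dominant TF is up-regulated while the other two are driven
# down to a low, leak-sustained level
```

Starting instead from exactly equal concentrations keeps all components equal, which corresponds to the symmetric (primed) state of the model.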


In [38], Cinquin and Demongeot observed that their model (with autocatalysis) can show the priming behavior of stem cells (i.e., genes are equally expressed) as well as the up-regulation of one gene and down-regulation of the others. They also proved that the multistability of their model with g = 0 can be manipulated by changing the value of c (cooperativity); however, manipulating the level of cooperativity is of minimal biological relevance. Also, their model is more sensitive to stochastic noise when the equilibrium points are near each other.

3.2.2 ODE model by MacArthur et al.

MacArthur et al. [113] proposed an ODE model (Equations (3.6) and (3.7)) to represent their GRN (refer to Figure (3.2)). Let [Pi] be the concentration of the TF protein in the pluripotency module, specifically, [P1] := [OCT4], [P2] := [SOX2] and [P3] := [NANOG]. Also, let [Li] be the concentration of the TF protein in the differentiation module, where [L1] := [RUNX2], [L2] := [SOX9] and [L3] := [PPAR-γ]. The parameter si represents the effect of the growth factors stimulating differentiation towards the i-th cell lineage, specifically, s1 := [RA + BMP4], s2 := [RA + TGF-β] and s3 := [RA + Insulin]. In mouse ES cells, RUNX2 is stimulated by retinoic acid (RA) and BMP4; SOX9 by RA and TGF-β; and PPAR-γ by RA and insulin. The derivation of the ODE model and the interpretation of the parameters are discussed in the supplementary materials of [113].

d[Pi]/dt = k1 [P1][P2](1 + [P3]) / ( (1 + k0 Σ_j sj)(1 + [P1][P2](1 + [P3]) + kPL Σ_j [Lj]) ) − b[Pi]    (3.6)

d[Li]/dt = ( k2 (si + k3 Σ_{j≠i} sj)^{2m} + [Li]^2 ) / ( 1 + kLC1 [P1][P2] + kLC2 [P1][P2][P3] + [Li]^2 + kLL (si + k3 Σ_{j≠i} sj) Σ_{j≠i} [Lj]^2 ) − b[Li]    (3.7)

However, this system of coupled ODEs is difficult to study using analytic techniques, so MacArthur et al. [113] conducted numerical simulations to investigate the behavior of the system. They analyzed the system analytically only for the specific case where [Pi] = 0, i = 1, 2, 3, that is, when the pluripotency module is switched off. The ODE model (3.8) that they analyzed in this case follows the Cinquin-Demongeot [38] formalism with c = 2, that is,

d[Li]/dt = [Li]^2 / ( 1 + [Li]^2 + a Σ_{j≠i} [Lj]^2 ) − b[Li],   i = 1, 2, 3    (3.8)

MacArthur et al. [113] analytically proved that the three cell types (tripotent, bipotent and terminal states) are simultaneously stable for some parameter values in (3.8). However, as the effect of an exogenous stimulus is increased above some threshold value, the tripotent state becomes unstable, leaving only two stable cell types (bipotent and terminal state). If the effect of the exogenous stimulus is further increased, the bipotent state also becomes unstable, leaving the terminal state as the sole stable cell type. In addition, MacArthur et al. [113] showed that dedifferentiation is not possible without the aid of stochastic noise.
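Model (3.8) is simple enough to simulate directly. The Python sketch below (forward Euler; the parameter values a = 1, b = 0.2 are illustrative choices, not values from [113]) shows two of the behaviors just described: a strongly biased initial condition commits to a single lineage, and the fully silenced state [Li] = 0 is an equilibrium that the deterministic dynamics can never leave without noise.

```python
def lineage_rhs(L, a, b):
    """Right-hand side of Equation (3.8):
    d[Li]/dt = [Li]^2 / (1 + [Li]^2 + a*sum_{j!=i} [Lj]^2) - b*[Li]."""
    total_sq = sum(v**2 for v in L)
    return [v**2 / (1.0 + v**2 + a * (total_sq - v**2)) - b * v for v in L]

def integrate(rhs, x0, dt, steps, **p):
    """Explicit forward-Euler integration, returning the final state."""
    x = list(x0)
    for _ in range(steps):
        x = [v + dt * d for v, d in zip(x, rhs(x, **p))]
    return x

# Illustrative parameter values (not taken from [113]).
a, b = 1.0, 0.2

# A strong bias toward lineage 1 commits the system to that lineage.
committed = integrate(lineage_rhs, [3.0, 0.1, 0.1], dt=0.01, steps=20000, a=a, b=b)

# The silenced state [L1] = [L2] = [L3] = 0 is an equilibrium: without noise,
# the deterministic dynamics stay there exactly.
silent = integrate(lineage_rhs, [0.0, 0.0, 0.0], dt=0.01, steps=1000, a=a, b=b)
```

Repeating the first run with larger effective inhibition or degradation illustrates how varying the parameters changes which of the equilibria survive, in the spirit of the bifurcation analysis summarized above.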

3.3 Stochastic Differential Equations

A time-dependent Gaussian white noise term can be added to the ODE model to investigate the effect of random fluctuations in gene expression. This Gaussian white noise term combines and averages multiple heterogeneous sources of temporal noise. Equations (3.10) to (3.13) show some of the different SDE models [71, 72, 113, 174] of the form

dX = F(X)dt + σG(X)dW    (3.9)

that we use in this study. We employ different G(X) to observe the various effects of the added Gaussian white noise term. We let F(X) be the right-hand side of our ODE equations, σ be a diagonal matrix of parameters representing the amplitude of noise, and W be a Brownian motion (Wiener process). If the genes in a cell are isogenic (essentially identical), then we can suppose the diagonal entries of the matrix σ are all equal.

dX = F(X)dt + σdW    (3.10)

dX = F(X)dt + σX dW    (3.11)

dX = F(X)dt + σ√X dW    (3.12)

dX = F(X)dt + σF(X)dW    (3.13)

Notice that in Equations (3.11) and (3.12), the noise term is affected by the value of X: as the concentration X increases, the effect of the noise term also increases. In Equation (3.13), the noise term is affected by the value of F(X), that is, as the deterministic change in the concentration X with respect to time (dX/dt = F(X)) increases, the effect of the noise term also increases. In Equation (3.10), the noise term does not depend on any variable. For a more detailed discussion about various modeling techniques, the following references may be consulted [6, 11, 18, 21, 46, 52, 55, 60, 66, 67, 75, 77, 79, 87, 88, 92, 118, 137, 140, 143, 149, 153, 154, 163, 165, 175, 179, 182].
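Sample paths of such SDEs are typically generated with the Euler-Maruyama scheme, which replaces dW by Gaussian increments of variance dt. The Python sketch below is a minimal scalar illustration: the drift F(X) = −X and the amplitude σ = 0.1 are illustrative stand-ins for the GRN right-hand sides, not quantities from the models above, and abs() in the square-root noise guards against small negative excursions of the numerical path.

```python
import math
import random

def euler_maruyama(f, g, x0, sigma, dt, steps, seed=0):
    """Simulate the scalar SDE dX = F(X)dt + sigma*G(X)dW (Equation (3.9))
    with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment ~ N(0, dt)
        x = x + f(x) * dt + sigma * g(x) * dW
    return x

# Illustrative scalar drift standing in for the GRN right-hand side.
f = lambda x: -x

# The four noise structures of Equations (3.10)-(3.13):
additive       = lambda x: 1.0                 # (3.10): G(X) = 1
multiplicative = lambda x: x                   # (3.11): G(X) = X
square_root    = lambda x: math.sqrt(abs(x))   # (3.12): G(X) = sqrt(X)
drift_shaped   = lambda x: f(x)                # (3.13): G(X) = F(X)

x_add  = euler_maruyama(f, additive,       1.0, 0.1, 0.001, 5000)
x_mult = euler_maruyama(f, multiplicative, 1.0, 0.1, 0.001, 5000)
```

Comparing many paths across seeds shows the qualitative difference discussed above: additive noise keeps fluctuating near the equilibrium, while state-dependent noise dies out as X approaches zero.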

Chapter 4

Analysis of Nonlinear Systems

This chapter gives a brief discussion of the theoretical background on the qualitative analysis of coupled nonlinear dynamical systems. Consider the autonomous system of ODEs

d[Xi]/dt = Fi([X1], [X2], . . . , [Xn]),   i = 1, 2, . . . , n,    (4.1)

with initial condition [Xi](0) := [Xi]0 ∀i. We assume that t ≥ 0 and Fi : B → R, i = 1, 2, . . . , n, where B ⊆ R^n. If we have a nonautonomous system of ODEs, d[Xi]/dt = Fi([X1], [X2], . . . , [Xn], t), i = 1, 2, . . . , n, then we convert it to an autonomous system by defining t := [Xn+1] and d[Xn+1]/dt = 1 [134].

For simplicity, let F := (Fi, i = 1, 2, . . . , n), X := ([Xi], i = 1, 2, . . . , n) and X0 := ([Xi]0, i = 1, 2, . . . , n). For an ODE model to be useful, it is necessary that it has a solution. Existence of a unique solution for a given initial condition is important to effectively predict the behavior of our system. Moreover, we are assured that the solution curves of an autonomous system do not intersect each other when the existence and uniqueness conditions hold [56]. Suppose X(t) is a differentiable function. The solution to (4.1) satisfies the following integral equation:

X(t) = X0 + ∫_0^t F(X(τ)) dτ.    (4.2)


The following are theorems that guarantee local existence and uniqueness of solutions to ODEs:

Theorem 4.1 Existence theorem (Peano, Cauchy). Consider the autonomous system (4.1). Suppose that F is continuous on B. Then the system has a solution (not necessarily unique) on [0, δ] for sufficiently small δ > 0 given any X0 ∈ B.

Theorem 4.2 Local existence-uniqueness theorem (Picard, Lindelöf, Lipschitz, Cauchy). Consider the autonomous system (4.1). Suppose that F is locally Lipschitz continuous on B, that is, F satisfies the following condition: for each point X0 ∈ B there is an ε-neighborhood of X0 (denoted B_ε(X0), where B_ε(X0) ⊆ B) and a positive constant m0 such that |F(X) − F(Y)| ≤ m0 |X − Y| for all X, Y ∈ B_ε(X0). Then the system has exactly one solution on [0, δ] for sufficiently small δ > 0 given any X0 ∈ B.

Theorem (4.2) can be extended to a global case stated as:

Theorem 4.3 Global existence-uniqueness theorem. If there is a positive constant m such that |F(X) − F(Y)| ≤ m |X − Y| for all X, Y ∈ B (i.e., F is globally Lipschitz continuous on B), then the system has exactly one solution defined for all t ∈ R⊕ for any X0 ∈ B.
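A standard scalar example (a textbook illustration, not one of the GRN models) makes the local/global distinction concrete. Consider d[X]/dt = [X]^2 with [X](0) = [X]0 > 0. Here F([X]) = [X]^2 is continuously differentiable, hence locally Lipschitz continuous on R, but it is not globally Lipschitz continuous since its derivative 2[X] is unbounded. Separation of variables gives the unique solution

[X](t) = [X]0 / (1 − [X]0 t),

which exists only on the interval [0, 1/[X]0) and blows up in finite time as t → 1/[X]0.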

If all the partial derivatives ∂Fi/∂[Xj], i, j = 1, 2, . . . , n, are continuous on B (i.e., F ∈ C^1(B)), then F is locally Lipschitz continuous on B. If the absolute values of these partial derivatives are also bounded for all X ∈ B, then F is globally Lipschitz continuous on B. The global condition says that if the growth of F with respect to X is at most linear, then we have a global solution. If F satisfies the local Lipschitz condition but not the global Lipschitz condition, then it is possible that after some finite time t, the solution will "blow up".

We define a point X = ([X1], [X2], . . . , [Xn]) as a state of the system, and the collection of these states is called the state space. The solution curve of the system starting from a fixed initial condition is called a trajectory or orbit. The collection of trajectories


given any initial condition is called the flow of the differential equation and is denoted by φ(X0). The concept of the flow of the differential equation indicates the dependence of the system on initial conditions. The flow of the differential equation can be represented geometrically in the phase space R^n using a phase portrait. There exists a corresponding vector defined by the ODE that is tangent to each point in every trajectory; the collection of all tangent vectors of the system is a vector field. A vector field is often helpful in visualizing the phase portrait of the system. Moreover, various methods are also available to numerically solve the system (4.1), such as the Euler and fourth-order Runge-Kutta (RK4) methods.
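The RK4 scheme just mentioned can be written compactly for system (4.1). The sketch below (plain Python) integrates an illustrative linear system d[X1]/dt = −[X2], d[X2]/dt = [X1], chosen because its trajectories are circles with period 2π, which gives a simple correctness check.

```python
def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step for dX/dt = F(X)."""
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + (dt / 6.0) * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Illustrative test system with a known solution: a rotation about the origin,
# d[X1]/dt = -[X2], d[X2]/dt = [X1]; trajectories are circles with period 2*pi.
rotation = lambda x: [-x[1], x[0]]

x = [1.0, 0.0]
dt, steps = 0.01, 628          # integrate to t = 6.28, roughly one full period
for _ in range(steps):
    x = rk4_step(rotation, x, dt)
# x should now be close to the starting point (1, 0)
```

The same `rk4_step` function can be applied to any right-hand side of the form used in the previous chapter, such as the Cinquin-Demongeot model (3.4).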

4.1 Stability analysis

In nonlinear analysis of systems, it is important to find points where our system is at rest and determine whether these points are stable or unstable. In modeling cellular differentiation, an asymptotically stable equilibrium point, which is an attractor, is associated with a certain cell type. For any initial condition in a neighborhood of the attractor, the trajectories tend towards the attractor even if slightly perturbed.

Definition 4.1 Equilibrium point. The point X∗ := ([X1]∗, [X2]∗, . . . , [Xn]∗) ∈ R^n is said to be an equilibrium point (also called a critical point, stationary point or steady state) of the system (4.1) if and only if F(X∗) = 0.

Finding the equilibrium points corresponds to solving for the real-valued solutions to the system of equations F (X) = 0. It is possible that this system of equations has a unique solution, several solutions, a continuum of solutions, or no solution. In order to describe the local behavior of the system (4.1) near a specific equilibrium point X ∗ , we linearize the system by getting the Jacobian matrix JF (X), defined as

JF(X) =
[ ∂F1/∂[X1]   ∂F1/∂[X2]   · · ·   ∂F1/∂[Xn] ]
[ ∂F2/∂[X1]   ∂F2/∂[X2]   · · ·   ∂F2/∂[Xn] ]
[     ...          ...     · · ·       ...    ]
[ ∂Fn/∂[X1]   ∂Fn/∂[X2]   · · ·   ∂Fn/∂[Xn] ]    (4.3)

and then evaluating JF (X ∗ ). If none of the eigenvalues of the matrix JF (X ∗ ) has zero real part then X ∗ is called a hyperbolic equilibrium point. In this chapter, we focus the discussion on hyperbolic equilibrium points; but for details about nonhyperbolic equilibrium points, refer to [134]. We use the eigenvalues of JF (X ∗ ) to determine the stability of equilibrium points.

Definition 4.2 Asymptotically stable and unstable equilibrium points. The equilibrium point X ∗ is asymptotically stable when the solutions near X ∗ converge to X ∗ as t → ∞. The equilibrium point X ∗ is unstable when some or all solutions near X ∗ tend away from X ∗ as t → ∞.

Theorem 4.4 Stability of equilibrium points. If all the eigenvalues of JF (X ∗ ) have negative real parts then X ∗ is an asymptotically stable equilibrium point. If at least one of the eigenvalues of JF (X ∗ ) has a positive real part then X ∗ is an unstable equilibrium point.

For simplicity, we will simply call an asymptotically stable equilibrium point "stable". There are various tests for determining the stability of an equilibrium point, such as using Theorem (4.4) or using geometric analysis as shown in Figure (4.1). In addition, we define X∗ as a saddle if it is an unstable equilibrium point but JF(X∗) has at least one eigenvalue with negative real part. For further details regarding the local behavior of nonlinear systems in the neighborhood of an equilibrium point, refer to the Stable Manifold Theorem and the Hartman-Grobman Theorem [134].
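In practice, the Jacobian at an equilibrium is often approximated numerically and its eigenvalues inspected. The sketch below (plain Python; the two-dimensional system F(X) = (X1 − X1^3, −X2) is an illustrative example, not one of the GRN models) uses finite differences for (4.3) and the quadratic formula for the eigenvalues of the resulting 2 × 2 matrix; the origin comes out as a saddle, with one positive and one negative eigenvalue.

```python
import cmath

def jacobian(f, x, h=1e-6):
    """Forward-difference approximation of the Jacobian matrix (4.3) at x."""
    fx = f(x)
    n = len(x)
    J = []
    for i in range(n):
        row = []
        for j in range(n):
            xp = list(x)
            xp[j] += h
            row.append((f(xp)[i] - fx[i]) / h)
        J.append(row)
    return J

def eig2(J):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Illustrative system with a saddle at the origin.
f = lambda x: [x[0] - x[0]**3, -x[1]]
lams = eig2(jacobian(f, [0.0, 0.0]))
# one eigenvalue has positive real part and one has negative real part
```

Applying the same routine at the other equilibria of this example, X = (±1, 0), yields two eigenvalues with negative real parts, so those points are asymptotically stable by Theorem (4.4).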

Chapter 4. Preliminaries: Analysis of Nonlinear Systems

Figure 4.1: The slope of F(X) at the equilibrium point determines the linear stability. A positive gradient means instability; a negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients. Refer to the Insect Outbreak Model: Spruce Budworm in [122].

It is also useful to determine the set of initial conditions X0 whose trajectories converge to a specific stable equilibrium point X∗. We call this set of initial conditions the domain or basin of attraction of X∗, denoted by

\Omega_{X^*} := \{ X_0 : \lim_{t \to \infty} \phi(X_0) = X^* \}.    (4.4)

In addition, a set B̂ ⊆ B is called positively invariant with respect to the flow φ(X0) if for any X0 ∈ B̂, φ(X0) ⊆ B̂ for all t ≥ 0; that is, the flow of the ODE remains in B̂.

There are other types of attractors, such as ω-limit cycles and strange attractors [56]. A limit cycle is an isolated periodic orbit (a closed trajectory which is not an equilibrium point). An asymptotically stable limit cycle is called an ω-limit cycle. Strange attractors usually occur when the dynamics of the system are chaotic. Moreover, under some conditions, a trajectory may be contained in a non-attracting but neutrally stable center (see [56] for a discussion of centers). However, the extensive numerical simulations by MacArthur et al. [113] suggest that their ODE model (Equations (3.6) and (3.7)) has no periodic orbits or strange trajectories. Cinquin and Demongeot [38] also claim that the solutions to their model (refer to Equations (3.4)) always tend towards an equilibrium and never oscillate.


The existence of a center, ω-limit cycle or strange attractor that would result in recurring changes in phenotype is abnormal for a natural fully differentiated cell. Limit cycles are associated with the concept of continuous cell proliferation (self-renewal), where there are recurring biochemical states during cell division cycles [82]. However, cell division is beyond the scope of this thesis. Various theorems are available for checking the possible existence or non-existence of limit cycles (although most apply only to two-dimensional planar systems). The Poincaré-Bendixson Theorem for planar systems [134] states that if F ∈ C¹(B) and a trajectory remains in a compact region of B whose ω-limit set (e.g., attracting set) does not contain any equilibrium point, then the trajectory approaches a periodic orbit. Furthermore, if F ∈ C¹(B), a trajectory remains in a compact region of B, and there are only finitely many equilibrium points, then the ω-limit set of any trajectory of the planar system can be one of three types: an equilibrium point, a periodic orbit, or a compound separatrix cycle. Several studies have shown the effects of the presence of positive or negative feedback loops in GRNs, such as possible multistability (existence of multiple stable equilibrium points) and the existence of oscillations [8, 37, 45, 104, 119, 155]. It is also important to note that a strange (chaotic) attractor cannot exist for n < 3 [56].

4.2 Bifurcation analysis

The behavior of the solutions of system (4.1) depends not only on the initial conditions but also on the values of the parameters. The parameters of the model may be associated with real-world quantities that can be manipulated to control the solutions. Varying the value of one or more parameters may result in dramatic changes in the qualitative nature of the solutions, such as a change in the number of equilibrium points or a change in stability. Here, we now let F be a function of the state variables X and of the parameter matrix µ (i.e., F(X, µ)). We call a parameter value where such a dramatic change occurs a bifurcation value, denoted by µ∗. If we simultaneously vary the values of p parameters, then we have a p-parameter bifurcation.


If a p-parameter bifurcation is sufficient for a bifurcation type to occur, then we classify that bifurcation type as being of codimension p. Examples of codimension-one bifurcations are the saddle-node (fold), the supercritical Poincaré-Andronov-Hopf and the subcritical Poincaré-Andronov-Hopf bifurcations. Transcritical, supercritical pitchfork and subcritical pitchfork bifurcations are also often regarded as codimension one. The cusp bifurcation is of codimension two.

Figure 4.2: Sample bifurcation diagram showing a saddle-node bifurcation.

In a local bifurcation, the equilibrium point X∗ is nonhyperbolic at the bifurcation value. For n ≥ 2, if JF(X∗) has a pair of purely imaginary eigenvalues and no other eigenvalues with zero real part at the bifurcation value, then under some assumptions a Hopf bifurcation may occur and a limit cycle might arise from X∗. We can visualize the bifurcation of equilibria using a bifurcation diagram. For further details about bifurcation theory, refer to [86, 102, 134]. Software packages are available for numerical bifurcation analysis, such as Oscill8 [40], which uses AUTO (http://indy.cs.concordia.ca/auto/).

4.3 Fixed point iteration

Definition 4.3 Fixed point. The point X ∗ is a fixed point of the real-valued function Q if Q(X ∗ ) = X ∗ .


We use fixed point iteration (FPI) to find approximate stable equilibrium points of the Cinquin-Demongeot [38] model. If X∗ is a stable equilibrium point then, for initial conditions X0 sufficiently close to X∗ (where X0 ≠ X∗), the sequence generated by FPI converges to X∗ (i.e., FPI is locally convergent). If X0 = X∗, we can have either a stable or an unstable equilibrium point.

Algorithm 1 Fixed point iteration
Suppose Q is continuous on the region B. Input an initial guess X^(0) := X0 ∈ B and an acceptable tolerance ε ∈ R⊕.
While ‖X^(i+1) − X^(i)‖ > ε do X^(i+1) := Q(X^(i)).
If ‖X^(i+1) − X^(i)‖ ≤ ε is satisfied, then X^(i+1) is the approximate fixed point.

Figure 4.3: An illustration of a cobweb diagram.

The geometric illustration of FPI is called a cobweb diagram as illustrated in Figure (4.3).
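As a minimal sketch, Algorithm 1 for a univariate map Q can be transcribed as follows. The choice Q = cos is only a convenient textbook example with a single attracting fixed point, not one of the thesis models.

```python
import math

def fixed_point_iteration(Q, x0, tol=1e-10, max_iter=10000):
    """Algorithm 1: iterate x <- Q(x) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = Q(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("FPI did not converge; x0 may not be near a stable fixed point")

# Q(x) = cos(x) has a unique attracting fixed point near 0.739085
x_star = fixed_point_iteration(math.cos, 1.0)
```

As the text notes, convergence is only local: FPI finds a stable fixed point when started sufficiently close to it, which is exactly why it is suited to locating stable (rather than unstable) equilibria.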

4.4 Sylvester resultant method

To find the equilibrium points, we can rewrite the Cinquin-Demongeot ODE model, when the exponents are positive integers, as a system of polynomial equations. Assume F(X) = 0 can be written as a polynomial system P(X) = 0. Solving multivariate nonlinear polynomial systems is still an active area of research. However, various algebraic and geometric methods for solving P(X) = 0 are already available, such as Newton-like methods, homotopic solvers, subdivision methods, algebraic solvers using Gröbner bases, and geometric solvers using resultant construction [120]. In resultant construction, we treat the problem of solving P(X) = 0 as a problem of finding intersections of curves.

All Pi(X) should have no common factor of degree greater than zero, so that P(X) = 0 has a finite number of complex solutions. The following Bézout Theorem gives a bound on the number of complex solutions, counting multiplicities.

Theorem 4.5 Bézout theorem. Consider real-valued polynomials P1, P2, ..., Pn where Pi has degree deg_i. Suppose all the polynomials have no common factor of degree greater than zero (i.e., they are collectively relatively prime). Then the number of isolated complex solutions to the system P1(X) = P2(X) = ... = Pn(X) = 0 is at most (deg_1)(deg_2)...(deg_n).

The method of using the Sylvester resultant is a classical algorithm in algebraic geometry for finding the complex solutions of a system of two polynomial equations in two variables. It can also be used to solve a polynomial system of n equations in n variables with n > 2, by repeated application of the algorithm. The idea of using Sylvester resultants for solving multivariate polynomial systems is to eliminate all but one variable. There are other resultant construction methods for multivariate polynomial systems with n > 2, such as the Dixon resultant, Macaulay resultant and U-resultant methods, but we will only focus on the Sylvester resultant. The algorithm is illustrated in the following paragraphs.


Consider two polynomials P1([X1],[X2]) and P2([X1],[X2]). We eliminate [X1] by constructing the Sylvester matrix associated to the two polynomials with [X1] as the variable (i.e., we take [X2] as a fixed parameter). The size of the Sylvester matrix is (deg_1 + deg_2) × (deg_1 + deg_2), where deg_1 and deg_2 are the degrees of P1 and P2 in the variable [X1], respectively. We give an example to show how to construct a Sylvester matrix. Let us suppose

P_1([X_1],[X_2]) = 2[X_1]^3 + 4[X_1]^2[X_2] + 7[X_1][X_2]^2 + 10[X_2]^3 + 8    (4.5)
P_2([X_1],[X_2]) = 5[X_1]^2 + 2[X_1][X_2] + [X_2]^2 + 6.    (4.6)

Since the degree of P1 in terms of [X1] is 3 and the degree of P2 in terms of [X1] is 2, the size of the Sylvester matrix (with [X1] as variable) is 5 × 5. The Sylvester matrix of P1 and P2 with [X1] as variable is

\begin{pmatrix}
2 & 4[X_2] & 7[X_2]^2 & 10[X_2]^3 + 8 & 0 \\
0 & 2 & 4[X_2] & 7[X_2]^2 & 10[X_2]^3 + 8 \\
5 & 2[X_2] & [X_2]^2 + 6 & 0 & 0 \\
0 & 5 & 2[X_2] & [X_2]^2 + 6 & 0 \\
0 & 0 & 5 & 2[X_2] & [X_2]^2 + 6
\end{pmatrix}    (4.7)

The first row of the Sylvester matrix contains the coefficients of [X1 ]3 , [X1 ]2 , [X1 ]1 and [X1 ]0 in P1 . We shift each element of the first row one column to the right to form the second row. The third row contains the coefficients of [X1 ]2 , [X1 ]1 and [X1 ]0 in P2 . We shift each element of the third row one column to the right to form the fourth row. We again shift each element of the fourth row one column to the right to form the fifth row. Generally, we continue the process of shifting each element of the previous row to form the next row until the coefficient of [X1 ]0 reaches the last column. All cells of the matrix without entries coming from the coefficients of the polynomials are assigned the value zero. We use the determinant of the Sylvester matrix to find the intersection of P1 and P2 .
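The construction above can be checked with a computer algebra system. The following sketch (assuming SymPy is available) builds the matrix (4.7) explicitly and compares its determinant with SymPy's built-in resultant:

```python
from sympy import Matrix, expand, resultant, symbols

x1, x2 = symbols('x1 x2')
P1 = 2*x1**3 + 4*x1**2*x2 + 7*x1*x2**2 + 10*x2**3 + 8   # (4.5)
P2 = 5*x1**2 + 2*x1*x2 + x2**2 + 6                      # (4.6)

# the 5x5 Sylvester matrix (4.7) of P1 and P2 with respect to x1
S = Matrix([
    [2, 4*x2, 7*x2**2, 10*x2**3 + 8, 0],
    [0, 2, 4*x2, 7*x2**2, 10*x2**3 + 8],
    [5, 2*x2, x2**2 + 6, 0, 0],
    [0, 5, 2*x2, x2**2 + 6, 0],
    [0, 0, 5, 2*x2, x2**2 + 6],
])

res_from_det = expand(S.det())           # res(P1, P2; [X1]), a polynomial in x2
res_builtin = expand(resultant(P1, P2, x1))
```

Both expressions are univariate polynomials in x2, as Definition 4.4 requires; the variable x1 has been eliminated.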


Definition 4.4 Sylvester resultant. We call the determinant of the Sylvester matrix of P1 and P2 in [X1 ] (where [X2 ] is a fixed parameter) the Sylvester resultant, denoted by res(P1 , P2 ; [X1 ]).

Theorem 4.6 Zeroes of the Sylvester resultant. The values where res(P1 , P2 ; [X1 ]) = 0 are the complex values of [X2 ] where P1 ([X1 ], [X2 ]) = P2 ([X1 ], [X2 ]) = 0.

We denote the complex values of [X2] where P1([X1],[X2]) = P2([X1],[X2]) = 0 by [X2]∗. To find [X1]∗, we solve the univariate system P1([X1],[X2]∗) = P2([X1],[X2]∗) = 0 for each possible value of [X2]∗.

The following theorem can be used to determine if P1 and P2 either do not intersect, or intersect at infinitely many points.

Theorem 4.7 None and infinitely many solutions. res(P1 , P2 ; [X1 ]) is nonzero for any [X2 ] if and only if P1 ([X1 ], [X2 ]) = P2 ([X1 ], [X2 ]) = 0 has no complex solutions. Furthermore, the following statements are equivalent: 1. res(P1 , P2 ; [X1 ]) is identically zero (i.e., zero for any values of [X2 ]). 2. P1 and P2 have a common factor of degree greater than zero. 3. P1 = P2 = 0 has infinitely many complex solutions.

We can extend the Sylvester resultant method to the multivariate case, say with three polynomials P1([X1],[X2],[X3]), P2([X1],[X2],[X3]) and P3([X1],[X2],[X3]), by getting R1 = res(P1, P2; [X1]) and R2 = res(P2, P3; [X1]). Notice that R1 and R2 are both in terms of [X2] and [X3]. We then get R3 = res(R1, R2; [X2]), which is in terms of [X3] only. We solve the univariate polynomial equation R3 = 0 using available solvers to obtain [X3]∗. After this, we find [X2]∗ by substituting [X3]∗ in R1 and R2 and solving R1 = R2 = 0. We then find [X1]∗ by solving P1([X1],[X2]∗,[X3]∗) = P2([X1],[X2]∗,[X3]∗) = P3([X1],[X2]∗,[X3]∗) = 0.
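For a concrete illustration of this elimination cascade, consider the toy linear system below (chosen for hand-checkability; it is not one of the thesis models). Repeated resultants eliminate [X1] and then [X2], leaving a univariate polynomial in [X3]:

```python
from sympy import resultant, solve, symbols

x1, x2, x3 = symbols('x1 x2 x3')
# toy system with the obvious common solution x1 = x2 = x3 = 1
P1 = x1 - x2
P2 = x1 - x3
P3 = x1 + x2 + x3 - 3

R1 = resultant(P1, P2, x1)   # eliminates x1: a polynomial in x2, x3
R2 = resultant(P2, P3, x1)   # eliminates x1: a polynomial in x2, x3
R3 = resultant(R1, R2, x2)   # eliminates x2: a polynomial in x3 alone

roots_x3 = solve(R3, x3)     # candidate values of [X3]*
```

Back-substituting each root of R3 into R1 and R2, and then into the original system, recovers [X2]∗ and [X1]∗ exactly as described above.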

For a more detailed discussion on solving systems of multivariate polynomial equations, the following references may be consulted [17, 49, 98, 156, 157, 178].

4.5 Numerical solution to SDEs

The solutions to ODEs are functions, while the solutions to SDEs are stochastic processes. We define a continuous-time stochastic process X as a set of random variables X(t) where the index variable t ≥ 0 takes a continuous set of values; the index variable t may represent time. Suppose we have an SDE model of the form dX = F(X)dt + σG(X)dW, where W is a stochastic process called Brownian motion (the Wiener process). The differential dW of W is called white noise. Brownian motion is the continuous version of a "random walk" and has the following properties:

1. For each t, the random variable W(t) is normally distributed with mean zero and variance t.
2. For each ti < ti+1, the normal random variable ∆W(ti) = W(ti+1) − W(ti) is independent of the random variables W(tj), 0 ≤ tj ≤ ti (i.e., W has independent increments).
3. Brownian motion W can be represented by continuous paths (which are, however, nowhere differentiable).

Suppose W(t0) = 0. We can simulate a Brownian motion on a computer by discretizing time as 0 = t0 < t1 < ... and drawing a random number representing ∆W(ti−1) from the normal distribution N(0, ti − ti−1) = √(ti − ti−1) · N(0, 1). This implies that we obtain W(ti) by multiplying a standard normal random number by √(ti − ti−1) and then adding the product to W(ti−1).
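A minimal sketch of this recipe follows; the time grid and random seed are arbitrary choices, not values from the thesis.

```python
import numpy as np

def brownian_path(t, rng):
    """Simulate W(t_i) on the grid t, with W(t_0) = 0 and
    increments dW(t_i) ~ N(0, t_{i+1} - t_i)."""
    dW = np.sqrt(np.diff(t)) * rng.standard_normal(len(t) - 1)
    return np.concatenate(([0.0], np.cumsum(dW)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
W = brownian_path(t, rng)
```

Each call produces a different realization unless the seed of the random generator is fixed, which is exactly the behavior described for SDE solutions below.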


The solution to an SDE model has different realizations because it is driven by random numbers. We can approximate a realization of the solution by using numerical solvers such as the Euler-Maruyama and Milstein methods. In this thesis, we use the Euler-Maruyama method, which is analogous to the Euler method for ODEs.

Algorithm 2 Euler-Maruyama method
Discretize the time as 0 < t1 < t2 < ... < t_end. Suppose Y_ti is the approximate solution to X(ti). Input the initial condition X_t0 and let Y_t0 := X_t0.
For i = 0, 1, 2, ..., end − 1 do
  ∆W(ti) = √(ti+1 − ti) · rand_N(0,1), where rand_N(0,1) is a standard normal random number;
  Y_ti+1 = Y_ti + F(Y_ti)(ti+1 − ti) + σG(Y_ti)∆W(ti).
end
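A direct transcription of Algorithm 2 is sketched below; the drift F(x) = −x and diffusion G(x) = 1 are placeholder choices for illustration, not the thesis model.

```python
import numpy as np

def euler_maruyama(F, G, x0, t, sigma, rng):
    """Approximate one realization of dX = F(X)dt + sigma*G(X)dW on the grid t."""
    x = np.empty(len(t))
    x[0] = x0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        dW = np.sqrt(dt) * rng.standard_normal()   # Delta W(t_i) ~ N(0, dt)
        x[i + 1] = x[i] + F(x[i]) * dt + sigma * G(x[i]) * dW
    return x

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, 1.0, t, sigma=0.2, rng=rng)
```

Setting sigma = 0 recovers the ordinary Euler method, which is a convenient sanity check against the known deterministic solution.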

Euler-Maruyama has order 1/2; that is, for any time t the expected value of the error E{|Xt − Yt|} is an element of O((∆t)^{1/2}) as ∆t → 0. Note that for easy simulation, we can suppose that we have equal step sizes ∆ti = ti+1 − ti. For a more detailed discussion on Brownian motion and SDEs, the references [95, 147] may be consulted. For a more detailed discussion on the analysis of nonlinear systems, the references [3, 56, 134, 146, 147] may be consulted.

Chapter 5. Results and Discussion: Simplified GRN and ODE Model

In this thesis, we represent the dynamics of the simplified gene network of MacArthur et al. [113] using a system of ordinary differential equations (ODEs) based on the Cinquin-Demongeot formalism [38]. We prove the existence and uniqueness of solutions to the ODE model under some assumptions.

5.1

Simplified MacArthur et al. model

Figure 5.1: The original MacArthur et al. [113] mesenchymal gene regulatory network.


Let us recall the MacArthur et al. [113] GRN in Chapter (3) (see Figure (5.1)). This GRN represents a multipotent cell that could differentiate into three cell types — bone, cartilage and fat.

Figure 5.2: Possible paths that result in positive feedback loops. Shaded boxes denote that the path repeats.

We refer to the group of OCT4, SOX2, NANOG and their multimers (protein complexes) as the pluripotency module, and the group of SOX9, RUNX2 and PPAR-γ as the differentiation module. OCT4, SOX2, NANOG and their multimers in the original MacArthur et al. GRN [113] do not have autoactivation loops, but notice that the path NANOG → OCT4-SOX2-NANOG → OCT4 → OCT4-SOX2 → SOX2 → OCT4-SOX2-NANOG → NANOG is one of the positive feedback loops of the GRN (see Figure (5.2)). A positive feedback loop that contains OCT4, SOX2, NANOG and their multimers can be regarded as an autoactivation loop of the pluripotency module. Both the OCT4-SOX2-NANOG and OCT4-SOX2 multimers inhibit SOX9, RUNX2 and PPAR-γ (as represented by the green bars in Figure (5.1)). On the other hand, SOX9, RUNX2 and PPAR-γ inhibit OCT4, SOX2 and NANOG (as represented by the blue bars in Figure (5.1)). These inhibitions imply that the pluripotency module inhibits the differentiation module and vice versa.

Figure 5.3: The simplified MacArthur et al. GRN.

Since the pluripotency module can be represented as a node with autoactivation and mutual inhibition with the other nodes, we can simplify the GRN in Figure (5.1) by coarse-graining. We represent the pluripotency module as one node, which we call the sTF (stemness transcription factor) node. From eight nodes, we are down to four. The coarse-grained biological network of the MacArthur et al. GRN [113] is shown in Figure (5.3), and from now on we shall refer to it as our simplified network. This simplified network represents a phenomenological model of the mesenchymal cell differentiation system. Since each node undergoes autocatalysis (autoactivation) and inhibition by the other nodes (as shown by the arrows and bars), the simplified GRN is in the simultaneous-decision-model form that can be translated into a Cinquin-Demongeot [38] ODE model (refer to Figure (3.4)). It is difficult to study the qualitative behavior of the ODE model by MacArthur et al. [113] (see Equations (3.6) and (3.7)) using analytic methods; this is the reason for simplifying the MacArthur et al. [113] GRN while preserving the essential qualitative dynamics. We translate the dynamics of the simplified network into a Cinquin-Demongeot [38] ODE model for easier analysis. One limitation of a phenomenological model is that it excludes time-delays that may arise from the deleted molecular details. However, a phenomenological model is sufficient to address the general principles of cellular differentiation and cellular programming, such as the temporal behavior of the dynamics of the GRN [70].

5.2 The generalized Cinquin-Demongeot ODE model

In [38], Cinquin and Demongeot suggested extending their model to include combinatorial interactions and non-symmetrical networks (i.e., each node does not have the same relationship with the other nodes, and the equations in the system of ODEs do not have equal parameter values). We include more adjustable parameters in their model to represent a wider range of situations; in this generalized model, some differentiation factors can be stronger than others. We generalize the Cinquin-Demongeot (2005) ODE model as follows, with X = ([X1], [X2], ..., [Xn]):

\frac{d[X_i]}{dt} = F_i(X) = \frac{\beta_i [X_i]^{c_i}}{K_i^{c_i} + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} + \alpha_i s_i - \rho_i [X_i]    (5.1)

where i = 1, 2, ..., n and n is the number of nodes.
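For numerical work, the right-hand side of (5.1) can be sketched as follows. The array shapes and parameter names are implementation choices mirroring the equation, and the n = 2 values used in the check are arbitrary, not fitted thesis parameters.

```python
import numpy as np

def cd_rhs(X, beta, K, c, cij, gamma, g, rho):
    """Right-hand side F(X) of the generalized Cinquin-Demongeot model (5.1).

    X, beta, K, c, g, rho are length-n arrays; gamma and cij are n-by-n arrays,
    gamma[i, j] being the stimulus with which X_j inhibits X_i (gamma[i, i] unused).
    Here g[i] plays the role of the constant production term alpha_i * s_i.
    """
    n = len(X)
    dX = np.empty(n)
    for i in range(n):
        inhibition = sum(gamma[i, j] * X[j] ** cij[i, j]
                         for j in range(n) if j != i)
        hill = beta[i] * X[i] ** c[i] / (K[i] ** c[i] + X[i] ** c[i] + inhibition)
        dX[i] = hill + g[i] - rho[i] * X[i]
    return dX
```

Note that at X = 0 the Hill term vanishes and d[Xi]/dt = gi ≥ 0, consistent with the positive invariance result proved later in this chapter.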


In our simplified network, we have four nodes and thus n = 4. Some of our results are applicable not only to n = 4 but to any dimension. The state variable [Xi] represents the concentration of the corresponding TF. Specifically, let [X1] := [RUNX2], [X2] := [SOX9], [X3] := [PPAR-γ] and [X4] := [sTF]. To have biological significance, we restrict [Xi] and the parameters to be nonnegative real numbers.

The parameter βi is the relative speed of transcription, ρi is the assumed first-order degradation rate associated with Xi, and γij is the differentiation stimulus that affects the inhibition of Xi by Xj. If γij = 0 then Xj does not inhibit the growth of [Xi]. We denote the term αi si by gi := αi si, which represents the basal or constitutive expression of the corresponding TF affected by the exogenous stimulus with concentration si. In other words, αi si is a constant production term that enhances the concentration of Xi. Specifically, let s1 := [RA + BMP4], s2 := [RA + TGF-β], s3 := [RA + Insulin] and s4 := 0.

We define the multivariate function Hi by

H_i([X_1], [X_2], \ldots, [X_n]) = \frac{\beta_i [X_i]^{c_i}}{K_i^{c_i} + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}}    (5.2)

which comes from the typical Hill equation. The term \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} in the denominator reflects the inhibitory influence of the other TFs on the change in concentration of Xi. We denote the parameter K_i := K_i^{c_i} > 0, which is related to the threshold or dissociation constant. The parameter ci ≥ 1 represents the Hill constant; it affects the steepness of the Hill curve associated with [Xi] and denotes the homomultimerization-induced positive cooperativity (for autocatalysis). The parameter cij denotes the heteromultimerization-induced negative cooperativity (for mutual inhibition). Cooperativity describes the interactions among binding sites, where the affinity or relationship of a binding site positively


or negatively changes depending on itself or on the other binding sites. Note that cooperativity requires more than one binding site.

Notice that the lower bound of Hi (Equation (5.2)) is zero and its upper bound is βi. Thus, the parameter βi can also be interpreted as the maximal expression rate of the corresponding TF.

Explicitly, two mathematically unequal concentrations can be regarded as biologically equal if their difference is not significant; that is, [Xi] ≈ [Xj] if [Xi] = [Xj] ± ε where ε is acceptably small. We say that [Xi] sufficiently dominates [Xj] if [Xi] > [Xj] and [Xi] ≉ [Xj]. In addition, scientists compare the concentration of Xi to the concentration of Xj by looking at the ratio of [Xi] to [Xj]; for example, [Xi] ≉ [Xj] (with [Xj] ≠ 0) if [Xi]/[Xj] > ε₁ ≥ 1 or if [Xi]/[Xj] < ε₂ ≤ 1, where ε₁ and ε₂ are some acceptable tolerance constants.

We say that a TF is switched-off or inactive if [TF] = 0, and switched-on otherwise. However, as an approximation, a TF with sufficiently low concentration can be considered "switched-off". If no component representing a node from the differentiation module sufficiently dominates [sTF] (e.g., [sTF] ≥ [RUNX2], [sTF] ≥ [SOX9] and [sTF] ≥ [PPAR-γ]) and sTF is switched-on, then the state represents a pluripotent cell. If all the components of a state are (approximately) equal and all TFs are switched-on, then the state represents a primed stem cell. If at least one component from the differentiation module sufficiently dominates [sTF], then the state represents either a partially differentiated or a fully differentiated cell. If exactly three components from the differentiation module are (approximately) equal, then the state represents a tripotent cell. If exactly two components from the differentiation module are (approximately) equal and sufficiently dominate all other components (possibly including [sTF]), then the state represents a bipotent cell. If sTF is switched-off, then the cell has lost its ability to self-renew.


If exactly one component from the differentiation module sufficiently dominates all other components (possibly including [sTF]) but sTF is still switched-on, then the state represents a unipotent cell. If exactly one TF from the differentiation module remains switched-on and all other TFs, including sTF, are switched-off, then the state represents a fully differentiated cell. A trajectory converging to the equilibrium point (0, 0, ..., 0) is a trivial case because the zero state neither represents a pluripotent cell nor a cell differentiating into bone, cartilage or fat. The trivial case may represent a cell differentiating towards other cell types (e.g., towards becoming a neural cell) which are not in the domain of our GRN. The zero state may also represent a cell in a quiescent stage.

Definition 5.1 Stable component and stable equilibrium point. If [Xi] converges to [Xi]∗ for all initial conditions [Xi]0 near [Xi]∗, then we say that the i-th component [Xi]∗ of an equilibrium point X∗ is stable; otherwise, [Xi]∗ is unstable. The equilibrium point X∗ = ([X1]∗, [X2]∗, ..., [Xn]∗) of the system (5.1) is stable if and only if all its components are stable.

5.3 Geometry of the Hill function

The Hill function defined by Equation (5.2) is a multivariate sigmoidal function when ci > 1 and a multivariate hyperbolic-like function when ci = 1. If 1 < ci < n then Xi has autocatalytic cooperativity, and if 1 < cij < n then the affinity of Xj to Xi has negative cooperativity. In addition, the state variable Xi has no autocatalytic cooperativity if ci = 1, while the affinity of Xj to Xi has no negative cooperativity if cij = 1. We can investigate the multivariate Hill function by looking at the univariate function defined by

H_i([X_i]) = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}}    (5.3)


where [Xj], j ≠ i, is taken as a parameter. This means that we project the high-dimensional space onto a two-dimensional plane. If ci = 1, the graph of the univariate Hill function in the first quadrant of the Cartesian plane is hyperbolic (for any value of [Xj], j ≠ i), similar to the topology shown in Figure (5.4). If ci > 1, the graph of the univariate Hill function in the first quadrant is sigmoidal or "S"-shaped (for any value of [Xj], j ≠ i), similar to one of the topologies shown in Figure (5.5). When the value of

K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}    (5.4)

in the denominator of Hi([Xi]) increases, the graph of the Hill curve (for any ci ≥ 1) shrinks, as illustrated in Figure (5.6). When the value of ci increases, the graph of Y = Hi([Xi]) gets steeper, as illustrated in Figure (5.7). If we add a term gi to Hi([Xi]), then the graph of Y = Hi([Xi]) in the Cartesian plane is translated upwards by gi units, as illustrated in Figure (5.8). We investigate the geometry of the Hill function as a prerequisite to our study of the behavior of the equilibrium points of system (5.1).
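These geometric observations are simple to verify numerically. In the sketch below, w stands for the lumped denominator term in (5.4) beyond Ki, and all parameter values are arbitrary illustrative choices:

```python
import numpy as np

def hill(x, beta=1.0, K=1.0, c=2.0, w=0.0):
    """Univariate Hill function (5.3); w = sum over j != i of gamma_ij * [Xj]^cij."""
    return beta * x ** c / (K + x ** c + w)

x = 1.5
shrunk = hill(x, w=4.0) < hill(x, w=0.0)          # larger denominator: curve shrinks
slope_c2 = (hill(1.01, c=2.0) - hill(0.99, c=2.0)) / 0.02
slope_c4 = (hill(1.01, c=4.0) - hill(0.99, c=4.0)) / 0.02
steeper = slope_c4 > slope_c2                      # larger c_i: steeper near threshold
```

The function is also bounded between 0 and beta, which is the observation used below to interpret βi as the maximal expression rate.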

Figure 5.4: Graph of the univariate Hill function when ci = 1.


Figure 5.5: Possible graphs of the univariate Hill function when ci > 1.


Figure 5.6: The graph of Y = Hi([Xi]) shrinks as the value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} increases.

Figure 5.7: The Hill curve gets steeper as the value of autocatalytic cooperativity ci increases.


Figure 5.8: The graph of Y = Hi ([Xi ]) is translated upwards by gi units.

5.4 Positive invariance

We solve the multivariate equation Fi(X) = 0 (for a specific i) by solving for the intersections of the (n+1)-dimensional surface induced by Hi([X1], [X2], ..., [Xn]) + gi and the (n+1)-dimensional hyperplane induced by ρi[Xi], as illustrated in Figure (5.9). That is, we find the real solutions to

\frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} + \alpha_i s_i = \rho_i [X_i].    (5.5)

For easier analysis, we observe the intersections of the univariate functions defined by Y = Hi([Xi]) + gi and Y = ρi[Xi] while varying the value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} in the denominator of the univariate Hill function Hi([Xi]) (see Figure (5.10) for an illustration). In the univariate case, we can view Y = ρi[Xi] as a line in the Cartesian plane passing through the origin with slope ρi.
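The intersections of Y = Hi([Xi]) + gi and Y = ρi[Xi] can be located numerically by scanning for sign changes of their difference on a grid, as in this sketch. It is a crude bracketing method that can miss tangential intersections, and all parameter values are arbitrary illustrative choices:

```python
import numpy as np

def hill(x, beta, K, c, w):
    """Univariate Hill function: beta * x^c / (K + x^c + w)."""
    return beta * x ** c / (K + x ** c + w)

def intersections(beta, K, c, w, g, rho, xmax=10.0, npts=100001):
    """Grid points where H(x) + g - rho*x changes sign, i.e. approximate
    real solutions of the univariate version of (5.5)."""
    x = np.linspace(0.0, xmax, npts)
    f = hill(x, beta, K, c, w) + g - rho * x
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return x[idx]

# with c = 2 and g = 0, the line Y = 0.35 x crosses the sigmoid three times
roots = intersections(beta=1.0, K=1.0, c=2.0, w=0.0, g=0.0, rho=0.35)
```

The three intersections found here (at 0 and near 0.41 and 2.45) correspond to the three-intersection topology discussed in the proof of the positive invariance lemma below, where the outer two are the stable ones.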

Figure 5.9: The 3-dimensional surface induced by Hi([X1], [X2]) + gi and the plane induced by ρi[Xi], an example.

The following lemma guarantees that the state variables of our ODE model (5.1) will never take negative values.


Figure 5.10: The intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi with varying values of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}, an example.

Lemma 5.1 Positive invariance. The flow φ(X0) of the generalized (multivariate) Cinquin-Demongeot ODE model (5.1), where X0 = ([X1]0, [X2]0, ..., [Xn]0) ∈ R⊕^n can be any initial condition, always stays in R⊕^n.

Proof. Suppose ρi > 0 for all i and we have a nonnegative initial value X0. Figures (5.11) to (5.14) illustrate all possible cases showing the topologies of the intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi. We employ the concept of fixed point iteration (where we define our fixed point as [Xi] satisfying Hi([Xi]) + gi = ρi[Xi]), or the geometric analysis shown in Figure (4.1) (where we rotate the graph of the curves, making Y = ρi[Xi] the horizontal axis), for each topology of the intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi. Figures (5.15) and (5.16) illustrate how the fixed point method and the geometric analysis of Figure (4.1) are carried out. Given specific values of [Xj], j ≠ i, the univariate Hill curve Y = Hi([Xi]) and the line Y = ρi[Xi] have the following possible numbers of intersections (see Figures (5.11) to (5.14)):

• two intersections (where one is stable);


• one intersection (which is stable); and
• three intersections (where two are stable).

We can see that there exists a stable intersection located in the first quadrant (including the axes) of the Cartesian plane that always attracts the trajectory of our ODE model for any initial condition, without the trajectory escaping the first quadrant (including the axes). Hence, the flow of the ODE model (5.1) stays in R⊕^n when ρi > 0 for all i.

Now, suppose ρi = 0 for at least one i. Then d[Xi]/dt ≥ 0 for all ([X1], [X2], ..., [Xn]) given a nonnegative initial condition [Xi]0; that is, the change in [Xi] with respect to time is always nonnegative, implying that the value of [Xi] never decreases from the initial condition [Xi]0. Since [Xi]0 ≥ 0, then [Xi] ≥ 0 for any time t.

Thus, φ(X0) ⊆ R⊕^n for all X0 ∈ R⊕^n. ∎

Consequently, Lemma (5.1) implies that Fi is a function Fi : R⊕^n → R^n, for i = 1, 2, ..., n. The following results are consequences of the proof of Lemma (5.1). We use Lemma (5.2) in proving theorems in the succeeding chapters.

Lemma 5.2 Suppose ρi > 0 for all i. Then the generalized Cinquin-Demongeot ODE model (5.1) with X0 ∈ R⊕^n always has a stable equilibrium point. Moreover, any trajectory of the model converges to a stable equilibrium point.

Proof. This follows from the proof of Lemma (5.1). ∎

Proposition 5.3 Suppose ρi > 0 for all i. Then Fi(X) will not "blow up" and will not approach infinity, given any initial condition X0 ∈ R⊕^n.

Proof. This follows since all trajectories of our system converge to a stable equilibrium point, by Lemmas (5.1) and (5.2). ∎


Figure 5.11: The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci = 1 and gi = 0. The value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} is fixed.

Figure 5.12: The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci = 1 and gi > 0. The value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} is fixed.


Figure 5.13: The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci > 1 and gi = 0. The value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} is fixed.

Figure 5.14: The possible number of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci > 1 and gi > 0. The value of K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} is fixed.


Figure 5.15: An example of finding the univariate fixed points using a cobweb diagram. We define a fixed point as a value [Xi] satisfying Hi([Xi]) + gi = ρi[Xi].

Figure 5.16: The curves are rotated so that the line Y = ρi[Xi] becomes the horizontal axis. A positive gradient at an intersection means instability; a negative gradient means stability. If the gradient is zero, we examine the gradients in the left and right neighborhoods.
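The cobweb construction in Figures (5.15) and (5.16) can be imitated numerically. The sketch below uses hypothetical parameter values (β = 5, K = 1, c = 2, g = 0, ρ = 1, with all other [Xj] fixed at 0 — illustrative assumptions, not values from the thesis): it brackets the roots of Hi([Xi]) + gi − ρi[Xi] by sign changes on a grid and refines each bracket by bisection.

```python
import numpy as np

# Hypothetical parameter values for one lineage i (illustrative only):
beta, K, c, g, rho = 5.0, 1.0, 2.0, 0.0, 1.0

def H(x):
    """Univariate Hill function Hi([Xi]), all other [Xj] fixed at 0."""
    return beta * x**c / (K + x**c)

def G(x):
    """G(x) = Hi(x) + gi - rho*x; the fixed points are the roots of G."""
    return H(x) + g - rho * x

# Fixed points lie in [g/rho, (g + beta)/rho); scan that range for sign
# changes of G, then refine each bracket by bisection.
grid = np.linspace(g / rho, (g + beta) / rho, 20001)
fixed_points = []
for a, b in zip(grid[:-1], grid[1:]):
    if G(a) == 0.0:
        fixed_points.append(a)
    elif G(a) * G(b) < 0.0:
        lo, hi = a, b
        for _ in range(60):          # bisection refinement
            mid = 0.5 * (lo + hi)
            if G(lo) * G(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        fixed_points.append(0.5 * (lo + hi))

print(fixed_points)
```

For these illustrative values the scan returns three fixed points — 0 and (5 ± √21)/2 — matching the three-intersection case for ci > 1 and gi = 0 (cf. Figure 5.13).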

5.5  Existence and uniqueness of solution

Recall Peano's Existence Theorem (4.1), which states that if each Fi is continuous on B, then the system of ODEs has a local solution (not necessarily unique) for any initial condition X0 ∈ B ⊆ R⊕^n. Also recall the local and global existence-uniqueness theorems (4.2) and (4.3). If the partial derivatives ∂Fi/∂[Xj], i, j = 1, 2, . . . , n, are continuous on B ⊆ R⊕^n, then the system of ODEs has a unique local solution for any initial condition X0 ∈ B. Moreover, if the absolute values of these partial derivatives are bounded for all X ∈ B, then the system of ODEs has exactly one solution defined for all t ∈ R⊕ for any initial condition X0 ∈ B. Lipschitz continuity is important in proving the existence and uniqueness of solutions.

Observing Figures (5.4) and (5.5), we can see that there are functions Hi that are not differentiable at [Xi] = 0. Consequently, if Hi is not differentiable at [Xi] = 0, then neither is Fi. If we include [Xi] = 0 in the domain of Fi, then Fi is not Lipschitz continuous. We classify Fi based on the nature of the parameter ci. We define two types of ci:

Type 1: ci ≥ 1 and either an integer or a rational of the form ci = pi/qi where pi, qi are positive integers and qi is odd.

Type 2: ci > 1 and either an irrational or a non-integer rational of the form ci = pi/qi where pi, qi are positive integers and qi is even.

The function Fi with ci of type 1 is differentiable at [Xi] = 0, while the function Fi with ci of type 2 is not differentiable at [Xi] = 0. We now prove several theorems that assure the existence and uniqueness of the solution to our ODE model (5.1), given an initial condition.

Theorem 5.4 Suppose Fi : R⊕^n → R^n, for i = 1, 2, . . . , n. Suppose that for all i, ci is of type 1. Then the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial condition X0 ∈ R⊕^n.


Proof. Since Ki > 0, the denominator of the Hill function Hi (5.2) is never zero for any X ∈ R⊕^n. This implies that each Fi is defined and continuous on R⊕^n. Since for all i, ci is of type 1, it follows that Fi is differentiable on R⊕^n. The partial derivative ∂Fi/∂[Xi] is as follows:

∂Fi/∂[Xi] = [ (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) βi ci [Xi]^(ci−1) − βi ci [Xi]^(2ci−1) ] / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij)² − ρi.   (5.6)

The partial derivative ∂Fi/∂[Xl], i ≠ l, is as follows:

∂Fi/∂[Xl] = −βi [Xi]^ci cil γil [Xl]^(cil−1) / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij)².   (5.7)

The denominator

(Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij)²   (5.8)

in the partial derivative ∂Fi/∂[Xl] (for l = i and l ≠ i) is never zero for any X ∈ R⊕^n. Hence, all the partial derivatives ∂Fi/∂[Xl], i, l = 1, 2, . . . , n, are continuous on R⊕^n.

Notice that the degree of the denominator (5.8) is greater than the degree of its corresponding numerator in Equations (5.6) and (5.7). It follows that as the value of at least one state variable approaches infinity, ∂Fi/∂[Xi] approaches the constant −ρi and ∂Fi/∂[Xl] (l ≠ i) vanishes.

Since ∂Fi/∂[Xl] (for l = i and l ≠ i) is continuous on R⊕^n (i.e., there are no "asymptotes" on R⊕^n that would make the partial derivatives "blow up") and ∂Fi/∂[Xl] (l = i and l ≠ i) approaches a constant as the value of at least one state variable approaches infinity, the partial derivatives ∂Fi/∂[Xl], i, l = 1, 2, . . . , n, are bounded for all X ∈ R⊕^n.


Therefore, the system has a unique solution defined for all t ∈ R⊕ for any initial condition X0 ∈ R⊕^n. □

Proposition 5.5 Suppose Fi : R⊕^n → R^n, for i = 1, 2, . . . , n. Suppose that for at least one i, ci is of type 2. Then the generalized Cinquin-Demongeot ODE model (5.1) has a local solution (not necessarily unique) given [Xi]0 = 0 as an initial value. Moreover, the generalized Cinquin-Demongeot ODE model (5.1) has a unique local solution given [Xi]0 ≠ 0 as an initial value.

Proof. Since Ki > 0, the denominator of the Hill function Hi (5.2) is never zero for any X ∈ R⊕^n. This implies that each Fi is defined and continuous on R⊕^n. By Peano's Existence Theorem, the system has a local solution (not necessarily unique) given [Xi]0 = 0 as an initial condition.

Suppose that for at least one i, ci is of type 2. Then for such i, Fi is differentiable on R⊕^n except when [Xi] = 0. Note that the partial derivatives ∂Fi/∂[Xl], i, l = 1, 2, . . . , n (see Equations (5.6) and (5.7)), are continuous on R+^n (i.e., Fi is locally Lipschitz continuous on R+^n). Hence, the generalized Cinquin-Demongeot ODE model (5.1) has a unique local solution given [Xi]0 ≠ 0 as an initial value. □

Remark: From the preceding proposition, at [Xi] > 0 the trajectory of our ODE model (5.1) is unique, but when the trajectory passes through [Xi] = 0 the ODE model (5.1) may have more than one solution (i.e., uniqueness is not guaranteed). Nevertheless, this does not affect our analysis when gi = 0, since in that case [Xi] = 0 is a component of a stable equilibrium point (i.e., [Xi] stays zero as t → ∞). Thus, assuming gi = 0, the flow of our ODE model does not change its qualitative behavior even if the trajectory passes through [Xi] = 0. See Figure (5.17) for illustration. If gi > 0, we can show that even with ci of type 2 for at least one i, our ODE model (5.1) can still have a unique solution defined for all t ∈ [0, ∞) by restricting the domain of Fi.


Proposition 5.6 Suppose there are Fj having gj > 0 and cj of type 2, and suppose there are no Fi, i ≠ j, having gi = 0 and ci of type 2. Then the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial values [Xj]0 ∈ R+ and [Xi]0 ∈ R⊕, i ≠ j.

Proof. Notice that for gj > 0, [Xj] = 0 can never be a component of a stable equilibrium point (see Figure (5.18)). This implies that we can reduce the space of positive invariance associated with [Xj] (refer to Lemma (5.1)) from R⊕ to R+. Thus, for an initial value [Xj] > 0, we can restrict the domain of Fj with respect to the variable [Xj] to R+ (i.e., we eliminate the possibility of taking [Xj] = 0 as an initial value).

We follow the same flow of proof as in Theorem (5.4). Since we only consider [Xj] ∈ R+, Fj is now differentiable everywhere on its restricted domain. The absolute values of the partial derivatives of Fj are continuous and bounded for all X where [Xj] ∈ R+. Moreover, since there are no Fi, i ≠ j, having gi = 0 and ci of type 2, each such Fi must have ci of type 1. The absolute values of the partial derivatives of Fi are continuous and bounded for all X where [Xi] ∈ R⊕. Hence, we conclude that the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial values [Xj]0 ∈ R+ and [Xi]0 ∈ R⊕, i ≠ j. □

Suppose gj > 0 but [Xj]0 = 0 is the j-th component of the initial condition. We can still solve the ODE model (5.1) numerically even when Fj is not differentiable at [Xj] = 0. However, we need to do the numerical simulation with caution, because multiple solutions may arise. We can use a multivariate fixed-point algorithm to investigate the corresponding stable equilibrium point for this kind of system. Note that a cij of type 2 does not affect the existence and uniqueness of a solution, because [Xj]^cij only affects the shrinkage of the graph of Hi([Xi]) (see Figure (5.6)).
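As a numerical companion to the existence-uniqueness discussion (and to Lemma (5.2)), the sketch below integrates a hypothetical symmetric two-lineage instance of model (5.1) with SciPy. All parameter values are illustrative assumptions, not values from the thesis. With g2 = 0, the trajectory settles at a stable equilibrium whose second component is zero, consistent with Figure (5.17).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical symmetric two-lineage instance of (5.1); illustrative values.
beta, K, c, gamma, rho, g = 5.0, 1.0, 2.0, 1.0, 1.0, 0.0

def F(t, x):
    x1, x2 = x
    f1 = beta * x1**c / (K + x1**c + gamma * x2**c) - rho * x1 + g
    f2 = beta * x2**c / (K + x2**c + gamma * x1**c) - rho * x2 + g
    return [f1, f2]

# Start in the [X1]-dominant region; since g2 = 0, the trajectory should
# settle at a stable equilibrium with [X2]* = 0.
sol = solve_ivp(F, (0.0, 200.0), [1.0, 0.1], rtol=1e-9, atol=1e-12)
x1_star, x2_star = sol.y[0, -1], sol.y[1, -1]
print(x1_star, x2_star)
```

With x2 = 0 the x1-equation reduces to the univariate fixed-point problem, whose nonzero stable root here is (5 + √21)/2.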


Figure 5.17: When gi = 0, [Xi ] = 0 is a component of a stable equilibrium point.

Figure 5.18: When gj > 0, [Xj ] = 0 will never be a component of an equilibrium point.

Chapter 6

Results and Discussion
Finding the Equilibrium Points

In this chapter, we determine the location and number of equilibrium points of the generalized Cinquin-Demongeot ODE model (5.1). We only consider the biologically feasible equilibrium points — those that are real-valued and nonnegative. For the following discussions, recall that Ki > 0 ∀i. Appendix A contains illustrations related to this chapter.

6.1  Location of equilibrium points

Lemma 6.1 Given nonnegative state variables and parameters in (5.1), if gi > 0 then ρi > 0 is a necessary and sufficient condition for the existence of an equilibrium point.

Proof. Since [Xi] ≥ 0, gi > 0 and all other parameters are nonnegative, the decay term −ρi[Xi] with ρi > 0 is necessary for

Fi(X) = βi[Xi]^ci / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) − ρi[Xi] + gi

to be zero. The Hill curve induced by Hi([X1], [X2], . . . , [Xn]) (5.2) translated upwards by gi > 0 and the hyperplane induced by ρi[Xi] always intersect when ρi > 0 and do not intersect when ρi = 0 (see Figures (5.11) to (5.14)). □



Remark: If gi = 0 and ρi = 0, then we have an equilibrium point with zero i-th component (i.e., (. . . , 0, . . .)), but this equilibrium point is obviously unstable.

Theorem 6.2 The generalized Cinquin-Demongeot ODE model (5.1) has an equilibrium point with i-th component equal to zero (i.e., [Xi]* = 0) if and only if gi = 0.


Proof. If gi = 0 then

Fi(X) = βi[Xi]^ci / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) − ρi[Xi] + 0 = 0

has [Xi] = 0 as a root. Conversely, if [Xi] = 0 is a root of Fi(X) = 0, then by substitution,

βi·0^ci / (Ki + 0^ci + Σ_{j≠i} γij[Xj]^cij) − ρi·0 + gi = 0,

so gi must be zero. □

The following corollary is very important because the case where the trajectory converges to the origin is trivial. This zero state represents neither a pluripotent cell nor a cell differentiating into bone, cartilage or fat.

Corollary 6.3 The zero state (0, 0, . . . , 0) is an equilibrium point if and only if gi = 0 ∀i.

Proposition 6.4 Suppose ρi > 0. If both βi > 0 and gi > 0, then gi/ρi cannot be an i-th component of an equilibrium point.

Proof. Suppose βi > 0, gi > 0 and gi/ρi is an i-th component of an equilibrium point. Then

Fi([X1], . . . , gi/ρi, . . . , [Xn]) = βi(gi/ρi)^ci / (Ki + (gi/ρi)^ci + Σ_{j≠i} γij[Xj]^cij) − ρi(gi/ρi) + gi = 0,

which reduces to

βi(gi/ρi)^ci / (Ki + (gi/ρi)^ci + Σ_{j≠i} γij[Xj]^cij) = 0,

implying that βi(gi/ρi)^ci = 0. Thus βi = 0 or gi = 0, a contradiction. □

Remark: If gi, ρi > 0, then [Xi] = gi/ρi can only be an i-th component of an equilibrium point if βi = 0.

Theorem 6.5 Suppose ρi > 0. The value (gi + βi)/ρi is an upper bound of, but is never attained by, [Xi]* (where [Xi]* is the i-th component of an equilibrium point). The equilibrium points of our system lie in the hyperspace

[g1/ρ1, (g1 + β1)/ρ1) × [g2/ρ2, (g2 + β2)/ρ2) × . . . × [gn/ρn, (gn + βn)/ρn).   (6.1)

Proof. From Lemma (5.2), our system (5.1) always has an equilibrium point. Note that [Xi]* < ∞ ∀i because [Xi]* = ∞ cannot be a component of an equilibrium point. The minimum value of Hi is zero, which happens when βi = 0 or when [Xi] = 0. Hence, if Hi([X1], [X2], . . . , [Xn]) = 0 then Fi(X) = gi − ρi[Xi] = 0, implying [Xi] = gi/ρi. The supremum of Hi is βi, which is attained only as [Xi] → ∞. If Hi([X1], [X2], . . . , [Xn]) = βi then Fi(X) = βi − ρi[Xi] + gi = 0, implying [Xi] = (gi + βi)/ρi (but note that this is just an upper bound and cannot be a component of an equilibrium point). See Figure (6.1) for illustration. □

Remark: The Hill curve and ρi[Xi] intersect at infinity when gi → ∞, βi → ∞ or ρi → 0. Moreover, if we have multiple stable equilibrium points lying in the hyperspace (6.1), then one strategy for enlarging the basin of attraction of a stable equilibrium point is to increase the value of βi (however, the number of stable equilibrium points may change under this strategy).

In Chapter 5 Section 5.4, we were able to show the existence of an equilibrium point, but not its value. Solving the system Fi(X) = 0, i = 1, 2, . . . , n, can be interpreted as finding the intersections of the (n + 1)-dimensional curves induced by each Fi(X) with the (n + 1)-dimensional zero-hyperplane.


Figure 6.1: Sample numerical solution in time series with the upper bound and lower bound.
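Figure (6.1) can be reproduced in miniature. The sketch below integrates a one-dimensional instance of (5.1) with hypothetical parameters (β = 2, K = 1, c = 2, ρ = 1, g = 0.5 — illustrative assumptions) and checks that the limiting value lies in the interval [g/ρ, (g + β)/ρ) of Theorem (6.5).

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-dimensional instance of (5.1) with hypothetical parameters.
beta, K, c, rho, g = 2.0, 1.0, 2.0, 1.0, 0.5

def F(t, x):
    return [beta * x[0]**c / (K + x[0]**c) - rho * x[0] + g]

sol = solve_ivp(F, (0.0, 100.0), [0.0], rtol=1e-9, atol=1e-12)
x_star = sol.y[0, -1]

# Bounds from Theorem 6.5: gi/rho_i <= [Xi]* < (gi + beta_i)/rho_i
lower, upper = g / rho, (g + beta) / rho
print(lower, x_star, upper)
```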

6.2  Cardinality of equilibrium points

In this section, we use the Bézout Theorem (4.5) and the Sylvester resultant method to determine the number and exact values of equilibrium points. Suppose ci and cij are integers for all i and j. The polynomial equation corresponding to (i = 1, 2, . . . , n)

Fi(X) = βi[Xi]^ci / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) − ρi[Xi] + gi = 0   (6.2)

is

Pi(X) = βi[Xi]^ci + (gi − ρi[Xi]) (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij)
      = −ρi[Xi]^(ci+1) + (βi + gi)[Xi]^ci − (Ki + Σ_{j≠i} γij[Xj]^cij) ρi[Xi] + gi Σ_{j≠i} γij[Xj]^cij + gi Ki = 0.   (6.3)


Theorem 6.6 Assume that the equations in the polynomial system (6.3) have no common factor of degree greater than zero for a given set of parameter values. Then the number of equilibrium points of the generalized Cinquin-Demongeot ODE model (5.1) (where ci and cij are integers) is at most

max{c1 + 1, c1j + 1 ∀j} × max{c2 + 1, c2j + 1 ∀j} × . . . × max{cn + 1, cnj + 1 ∀j}.

Proof. The degree of Pi is deg Pi = max{ci + 1, cij + 1 ∀j}. Since we assume that the equations in the polynomial system (6.3) have no common factor of degree greater than zero, the Bézout Theorem (4.5) implies that the number of complex solutions to the polynomial system is at most

max{c1 + 1, c1j + 1 ∀j} × max{c2 + 1, c2j + 1 ∀j} × . . . × max{cn + 1, cnj + 1 ∀j}.

This product is therefore an upper bound on the number of equilibrium points. □
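The bound in Theorem (6.6) is easy to evaluate mechanically. The small helper below, applied to the hypothetical four-lineage setting of Illustration 2 (ci = cij = 2, an illustrative choice), gives 3⁴ = 81.

```python
# Bézout-type upper bound from Theorem 6.6.
def bezout_bound(c, c_off):
    """c[i] = ci; c_off[i] = list of the cij for j != i."""
    bound = 1
    for ci, row in zip(c, c_off):
        bound *= max([ci + 1] + [cij + 1 for cij in row])
    return bound

# Hypothetical exponents: four lineages with ci = cij = 2.
n = 4
c = [2] * n
c_off = [[2] * (n - 1) for _ in range(n)]
print(bezout_bound(c, c_off))  # 81
```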

The Bézout Theorem (4.5) gives only an upper bound on the number of equilibrium points, not the exact number. Also, Theorem (6.6) depends on the values of ci and cij as well as on n. According to Cinquin and Demongeot, manipulating the strength of cooperativity (ci and cij) is of minimal biological relevance [38]. Nevertheless, the possible dependence of the number of equilibrium points on n (the dimension of our state space) has a biological implication: the dependence on n may be due to the potency of the cell. It is necessary to check that the equations in the polynomial system have no common factor of degree greater than zero, because if they do, then there are infinitely many complex solutions. Recall from Theorem (4.7) that we can detect infinitely many complex solutions by checking whether res(P1, P2; Xi) (the determinant of the Sylvester matrix) is identically zero, or by checking whether P1 and P2 have a non-constant common factor. However, the infinite number of complex solutions arises when [Xi] can take any complex value. There can be solutions with negative (and possibly complex-valued) components


that have no biological importance. Consequently, we need to do an ad hoc investigation to remove the solutions with negative or non-real-valued components, and to check whether infinitely many solutions still arise when [Xi] ∀i are restricted to be nonnegative real numbers. It is possible that our polynomial system (6.3) has a finite number of nonnegative real solutions even though the system has a non-constant common factor. In order to determine the exceptions, we determine the set of parameter values (where the strengths of cooperativity are integer-valued) that give rise to a system of equations having a non-constant common factor. We have found exactly one case where such a common factor exists.

Theorem 6.7 Suppose ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0 and Ki = Kj = K > 0, for all i and j. Then the ODE model (5.1) has infinitely many non-isolated equilibrium points if and only if β > ρK. Moreover, if β ≤ ρK then there is exactly one equilibrium point, which is the origin.

Proof. Recall (6.3): from the nonlinear system Fi(X) = 0 (i = 1, 2, . . . , n),

βi[Xi]^ci / (Ki + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) − ρi[Xi] + gi = 0,

we have the corresponding polynomial system Pi(X) = 0 (i = 1, 2, . . . , n),

βi[Xi]^ci − ρi Ki[Xi] − ρi[Xi]^(ci+1) − ρi[Xi] Σ_{j≠i} γij[Xj]^cij + gi Ki + gi[Xi]^ci + gi Σ_{j≠i} γij[Xj]^cij = 0.

Suppose ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0 and Ki = Kj = K > 0 (notice that we obtain a Michaelis-Menten-like symmetric system).


Then the polynomial system can be written as (i = 1, 2, . . . , n)

β[Xi] − ρK[Xi] − ρ[Xi]² − ρ[Xi] Σ_{j≠i} [Xj] = 0

⇒ [Xi] (β − ρK − ρ[Xi] − ρ Σ_{j≠i} [Xj]) = 0

⇒ [Xi] = 0 or β − ρK − ρ[Xi] − ρ Σ_{j≠i} [Xj] = 0.   (6.4)

Notice that the factor

β − ρK − ρ[Xi] − ρ Σ_{j≠i} [Xj] = β − ρK − ρ Σ_{j=1}^{n} [Xj]   (6.5)

is common to all equations in the polynomial system given the assumed parameter values. Thus, by Theorem (4.7), there are infinitely many complex solutions when [Xj] can be any complex number. However, since we have restricted [Xj] to be nonnegative, we investigate further to determine the conditions for the existence of infinitely many solutions given strictly nonnegative [Xj]. We focus our investigation on real-valued solutions. Let B = β − ρK.

Case 1: If β = ρK then B = 0, and thus B − ρ Σ_{j=1}^{n} [Xj] is never zero except when [Xj] = 0 ∀j (since each [Xj] can take only nonnegative values). Hence, the only equilibrium point of the system is the origin.

Case 2: If β < ρK then B < 0, and thus B − ρ Σ_{j=1}^{n} [Xj] is always negative and has no zero for any nonnegative values of [Xj]. Hence, the only equilibrium point is the origin (which is derived from [Xi] = 0, i = 1, 2, . . . , n, in Equation (6.4)).

Case 3: If β > ρK then B > 0, and thus the equation B − ρ Σ_{j=1}^{n} [Xj] = 0 has solutions. Notice that the set of nonnegative real-valued solutions to B − ρ Σ_{j=1}^{n} [Xj] = 0 is a hyperplane (e.g., a line for n = 2 and a plane for n = 3). Hence, there are infinitely many non-isolated equilibrium points when β > ρK.

Conversely, if the generalized Cinquin-Demongeot ODE model (5.1) with ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0 and Ki = Kj = K > 0 has infinitely many equilibrium points, then the model's corresponding polynomial system has a common factor of degree greater than zero. The only possible common factor is the one shown in (6.5), and the only case where this factor has infinitely many non-isolated nonnegative solutions is β > ρK. □

Corollary 6.8 Suppose ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0 and Ki = Kj = K > 0. If β > ρK, then the equilibrium points of system (5.1) are the origin and the non-isolated points lying on the hyperplane with equation

Σ_{j=1}^{n} [Xj] = β/ρ − K,  [Xj] ≥ 0 ∀j.   (6.6)

Proof. This is a consequence of the proof of Theorem (6.7). □

Theorem 6.9 The generalized Cinquin-Demongeot ODE model (5.1) (where ci and cij are integers) has a finite number of equilibrium points except when all of the following conditions are satisfied: ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK, for all i and j. Proof. When ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK for all i and j then, by Theorem (6.7), the generalized Cinquin-Demongeot ODE model (5.1) has an infinite number of equilibrium points. Now, suppose at least one of the following conditions is not satisfied: ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK, for all i and j. Recall the corresponding polynomial system Pi (X) = 0 (6.3) to our generalized Cinquin-Demongeot ODE model (5.1) (where ci and cij are integers), which is (for i = 1, 2, . . . , n)

βi[Xi]^ci − ρi Ki[Xi] − ρi[Xi]^(ci+1) − ρi[Xi] Σ_{j≠i} γij[Xj]^cij + gi Ki + gi[Xi]^ci + gi Σ_{j≠i} γij[Xj]^cij = 0.   (6.7)

Suppose gi = 0. The polynomials in the system (6.7) then factor in the form

[Xi] (βi[Xi]^(ci−1) − ρi Ki − ρi[Xi]^ci − ρi Σ_{j≠i} γij[Xj]^cij).   (6.8)

The factor [Xi] is definitely not a common factor of our system. Moreover, there is always a [Xi]^(ci−1) term in the factor βi[Xi]^(ci−1) − ρi Ki − ρi[Xi]^ci − ρi Σ_{j≠i} γij[Xj]^cij that prevents it from being a common factor of our system. For example, suppose gi = 0, γij = 1, βi = β, ρi = ρ, Ki = K and ci = cij for all i and j. Then we have

Factor in equation 1: β[X1]^(c1−1) − ρK − ρ Σ_{j=1}^{n} [Xj]^cj

Factor in equation 2: β[X2]^(c2−1) − ρK − ρ Σ_{j=1}^{n} [Xj]^cj

...

Factor in equation n: β[Xn]^(cn−1) − ρK − ρ Σ_{j=1}^{n} [Xj]^cj.   (6.9)

Notice that the presence of [Xi]^(ci−1) in equation i makes at least one factor unique ("at least one" because at most n − 1 equations may satisfy the restriction ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK, and at least one equation does not).


Suppose gi ≠ 0 for at least one i. By the argument above (applied to the equations with gj = 0, j ≠ i), together with the presence of [Xi]^ci (in the first term of Equation (6.7)) and [Xi] (in the second, third and fourth terms of Equation (6.7)), the polynomials in the polynomial system (6.7) are collectively relatively prime. For example, suppose γij = 1, βi = β, ρi = ρ, Ki = K, gi = g and ci = cij for all i and j. Then we have

β[X1]^c1 − ρK[X1] − ρ[X1] Σ_{j=1}^{n} [Xj]^cj + gK + g Σ_{j=1}^{n} [Xj]^cj

β[X2]^c2 − ρK[X2] − ρ[X2] Σ_{j=1}^{n} [Xj]^cj + gK + g Σ_{j=1}^{n} [Xj]^cj

...

β[Xn]^cn − ρK[Xn] − ρ[Xn] Σ_{j=1}^{n} [Xj]^cj + gK + g Σ_{j=1}^{n} [Xj]^cj.   (6.10)

Notice that the presence of [Xi] in equation i makes at least one equation relatively prime to the others ("at least one" because at most n − 1 equations may satisfy the restriction ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK, and at least one equation does not). Therefore, by Theorem (4.7), there is a finite number of equilibrium points. □

Now, we prove a theorem showing that the equilibrium points ([X1]*, [X2]*, [X3]*, 0) of a system with n = 4 and g4 = 0 are exactly the equilibrium points of the corresponding system with n = 3. In general, we state the following theorem:

Theorem 6.10 Suppose gn = 0. Then the n-dimensional system is more general than the (n − 1)-dimensional system. That is, we can derive the equilibrium points of the (n − 1)-dimensional system by taking the equilibrium points of the n-dimensional system with [Xn]* = 0.

Proof. When [Xn]* = 0 and gn = 0, the n-dimensional system reduces to an (n − 1)-dimensional system. □
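Theorem (6.10) can be verified symbolically for a small case. The sketch below uses illustrative choices (ci = cij = 2, γij = 1, other parameters kept symbolic — assumptions, not the thesis's values) and confirms that setting [X3] = 0 in a three-dimensional right-hand side recovers the two-dimensional one.

```python
from sympy import symbols, simplify

x1, x2, x3, beta, rho, K, g = symbols('x1 x2 x3 beta rho K g', nonnegative=True)

def Fi(xi, others, gi):
    # Component of (5.1) with ci = cij = 2 and gamma_ij = 1 (illustrative).
    return beta * xi**2 / (K + xi**2 + sum(o**2 for o in others)) - rho * xi + gi

F1_3d = Fi(x1, [x2, x3], g)   # 3-D system, g1 = g (and g3 = 0 assumed)
F1_2d = Fi(x1, [x2], g)       # corresponding 2-D system
diff = simplify(F1_3d.subs(x3, 0) - F1_2d)
print(diff)   # 0
```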


In the following illustrations, we show how to find equilibrium points using the Sylvester resultant. We assign specific values to some parameters. Let us consider our simplified MacArthur et al. GRN with n = 4 (5.3).

6.2.1  Illustration 1

Consider all parameters equal to 1, except g2 = g3 = g4 = 0. We have the following polynomial system:

P1([X1], [X2], [X3], [X4]) = [X1] − [X1](1 + [X1] + [X2] + [X3] + [X4]) + (1 + [X1] + [X2] + [X3] + [X4]) = 0
P2([X1], [X2], [X3], [X4]) = [X2] − [X2](1 + [X1] + [X2] + [X3] + [X4]) = 0
P3([X1], [X2], [X3], [X4]) = [X3] − [X3](1 + [X1] + [X2] + [X3] + [X4]) = 0
P4([X1], [X2], [X3], [X4]) = [X4] − [X4](1 + [X1] + [X2] + [X3] + [X4]) = 0.   (6.11)

The Sylvester matrix associated with P1 and P2, with [X1] as the variable, is

[ −1       1 − [X2] − [X3] − [X4]           1 + [X2] + [X3] + [X4]        ]
[ −[X2]    −[X2]² − [X2][X3] − [X2][X4]     0                             ]
[ 0        −[X2]                            −[X2]² − [X2][X3] − [X2][X4]  ]   (6.12)

Then res(P1, P2; [X1]) = [X2]². Therefore, [X2]* = 0. By the same procedure, res(P1, P3; [X1]) = [X3]² and res(P1, P4; [X1]) = [X4]². Hence, [X3]* = [X4]* = 0.

Note that we cannot use res(P2, P3; [X1]), res(P3, P4; [X1]) and res(P2, P4; [X1]), because these resultants are identically zero; that is, P2, P3 and P4 have common factors. So we need to be careful in choosing which combinations of polynomial equations are used in determining the equilibrium points. Note that P1 does not share a common factor with the other three polynomial equations.


Substituting [X2]* = [X3]* = [X4]* = 0 in P1, we have −([X1]*)² + 1 + [X1]* = 0. This means that [X1]* = (1 + √5)/2 (we disregard (1 − √5)/2 because it is negative). Therefore, we have exactly one equilibrium point, ((1 + √5)/2, 0, 0, 0).
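Illustration 1 can be reproduced with a computer algebra system. The sketch below uses SymPy's `resultant`; depending on the sign convention the resultant may differ by a factor of −1, which does not affect the conclusion [X2]* = 0.

```python
from sympy import symbols, resultant, expand, solve, sqrt, Rational

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
s = 1 + x1 + x2 + x3 + x4
P1 = x1 - x1 * s + s
P2 = x2 - x2 * s
P3 = x3 - x3 * s
P4 = x4 - x4 * s

r12 = expand(resultant(P1, P2, x1))   # [X2]**2 up to sign
r13 = expand(resultant(P1, P3, x1))   # [X3]**2 up to sign
print(r12, r13)

# With [X2]* = [X3]* = [X4]* = 0, P1 reduces to -x1**2 + x1 + 1 = 0.
roots = solve(P1.subs({x2: 0, x3: 0, x4: 0}), x1)
print(roots)   # (1 ± sqrt(5))/2
```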

6.2.2  Illustration 2

Consider all parameters equal to 1, except ci = cij = 2 and gi = 0, i, j = 1, 2, 3, 4. We have the following polynomial system:

P1([X1], [X2], [X3], [X4]) = [X1]² − [X1](1 + [X1]² + [X2]² + [X3]² + [X4]²) = 0
P2([X1], [X2], [X3], [X4]) = [X2]² − [X2](1 + [X1]² + [X2]² + [X3]² + [X4]²) = 0
P3([X1], [X2], [X3], [X4]) = [X3]² − [X3](1 + [X1]² + [X2]² + [X3]² + [X4]²) = 0
P4([X1], [X2], [X3], [X4]) = [X4]² − [X4](1 + [X1]² + [X2]² + [X3]² + [X4]²) = 0.   (6.13)

The Sylvester matrix associated with P1 and P2, with [X1] as the variable, is

[ a11  a12  a13  a14  0   ]
[ 0    a11  a12  a13  a14 ]
[ a31  a32  a33  0    0   ]
[ 0    a31  a32  a33  0   ]
[ 0    0    a31  a32  a33 ]   (6.14)

where a11 = −1, a12 = 1, a13 = −1 − [X2]² − [X3]² − [X4]², a14 = 0, a31 = −[X2], a32 = 0 and a33 = [X2]² − [X2] − [X2]³ − [X2][X3]² − [X2][X4]². Then

res(P1, P2; [X1]) = −[X2]³ (−[X2] + [X4]² + 2[X2]² + [X3]² + 1)(−[X2] + [X4]² + [X2]² + [X3]² + 1).   (6.15)


Notice that the factors −[X2] + [X4]² + 2[X2]² + [X3]² + 1 and −[X2] + [X4]² + [X2]² + [X3]² + 1 in (6.15) have no real zeros. Therefore, [X2]* = 0. Since the system is symmetric, it follows that [X1]* = [X3]* = [X4]* = 0 as well. Hence, we have exactly one equilibrium point, which is the origin.

Additional illustrations are presented in Appendix A (for n = 2 and n = 3). When all parameters are equal to 1 except ci = cij = 2 and gi = 0 for all i, j, the only equilibrium point is the origin. This kind of system is actually the original Cinquin-Demongeot ODE model [38] without "leak", where β = 1 and c = 2 (refer to system (3.4)). We state the following proposition:

Proposition 6.11 If ci > 1, gi = 0, Ki ≥ 1, βi = 1 and ρi = 1 for all i, then our system has only one equilibrium point, which is the origin.

Proof. Let us first consider the case where [Xj] = 0 ∀j ≠ i and Ki = 1. The graphs of Y = Hi([Xi]) (with increasing values of ci) and Y = [Xi] are illustrated in Figure (6.2). If ci → ∞ then [Xi]^ci → ∞ for any [Xi] > 1. As [Xi]^ci → ∞ (with [Xi] > 1), the univariate Hill function Hi([Xi]) → β = 1. Hence, the univariate Hill curve Y = Hi([Xi]) never touches the point (1, 1) lying on Y = [Xi] for finite ci.

Now, as the values of γij[Xj] ∀j ≠ i and Ki increase, the univariate Hill curve Y = Hi([Xi]) only shrinks and thus does not intersect the decay line Y = [Xi] except at the origin (see Chapter 5 Section 5.3 for the discussion of the geometry of the Hill curve). Hence, for any values of [Xi] and [Xj] (for all j ≠ i),

[Xi]^ci / (K + [Xi]^ci + Σ_{j≠i} γij[Xj]^cij) < [Xi]   (6.16)

except when [Xi] = 0. □

Figure 6.2: Y = [Xi]^ci/(K + [Xi]^ci) will never touch the point (1, 1) for 1 < ci < ∞.

Proposition (6.11) implies that the system with ci > 1, gi = 0, Ki ≥ 1, βi = 1 and ρi = 1 for all i, j represents a trivial case (i.e., the fate of the cell is not in the domain of our GRN, or the cell is in a quiescent stage). This is not the only set of parameters that gives a trivial case. A generalization of Proposition (6.11) is stated as follows:

Proposition 6.12 If ci > 1, gi = 0 and

ρi Ki^(1/ci) ≥ βi   (6.17)

for all i, then our system has only one equilibrium point, which is the origin.

Proof. Let us first consider the case where [Xj] = 0 for all j ≠ i. Recall that the upper bound of Hi([Xi]) is βi. Also recall that when [Xi] = Ki^(1/ci), then Hi([Xi]) = βi/2 (see Section 3.2 in Chapter 3); the point (Ki^(1/ci), βi/2) is the inflection point of our univariate Hill curve. We substitute [Xi] = Ki^(1/ci) into the decay function Y = ρi[Xi]; if the value ρi Ki^(1/ci) is greater than or equal to the upper bound βi, then Y = Hi([Xi]) and Y = ρi[Xi] intersect only at the origin. See Figure (6.3) for illustration.

of ρi (Ki 1/ci ) is larger or equal to the value of the upper bound βi then Y = Hi ([Xi ]) and Y = ρi [Xi ] only intersect at the origin. See Figure (6.3) for illustration.


Figure 6.3: An example where ρi Ki^(1/ci) > βi; Y = Hi([Xi]) and Y = ρi[Xi] intersect only at the origin.

Now, as the values of γij[Xj] for all j ≠ i increase, the univariate Hill curve Y = Hi([Xi]) only shrinks and thus does not intersect the decay line Y = ρi[Xi] except at the origin. □

However, note that the condition in Proposition (6.12) is sufficient but not necessary. There are cases where ρi Ki^(1/ci) < βi and yet Y = Hi([Xi]) and Y = ρi[Xi] intersect only at the origin.
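Condition (6.17) can be probed numerically. With the hypothetical values β = 2, K = 4, c = 2, ρ = 1 (so that ρK^(1/c) = 2 ≥ β, an illustrative choice), the Hill curve stays strictly below the decay line for all sampled [Xi] > 0.

```python
import numpy as np

# Hypothetical values satisfying condition (6.17): rho*K**(1/c) >= beta.
beta, K, c, rho = 2.0, 4.0, 2.0, 1.0
assert rho * K**(1.0 / c) >= beta        # 2 >= 2, condition holds

x = np.linspace(1e-9, 100.0, 200001)     # strictly positive grid
H = beta * x**c / (K + x**c)             # univariate Hill function
gap = rho * x - H                        # decay line minus Hill curve
# The gap should be strictly positive for all x > 0, so the only
# intersection of Y = H([Xi]) and Y = rho*[Xi] is the origin.
print(float(gap.min()))
```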

Corollary 6.13 If ci > 1, gi = 0, Ki ≥ 1 and ρi ≥ βi for all i, then our system has only one equilibrium point, which is the origin.

Proof. Since ρi ≥ βi and Ki ≥ 1, we have ρi Ki^(1/ci) ≥ βi, and we invoke Proposition (6.12). □

For ci = 1 and gi = 0, we state the following proposition:

Proposition 6.14 Suppose ci = 1, gi = 0 and βi/Ki ≤ ρi for all i. Then our system has only one equilibrium point, which is the origin.

Proof. Let us first consider the case where [Xj] = 0 for all j ≠ i. Recall that Y = Hi([Xi]) with ci = 1 is a hyperbolic curve. The partial derivative

∂Hi/∂[Xi] = ∂/∂[Xi] ( βi[Xi] / (Ki + [Xi]) ) = Ki βi / (Ki + [Xi])²   (6.18)

shows that the slope of the hyperbolic curve is monotonically decreasing as [Xi] increases. The partial derivative at [Xi] = 0 is

∂Hi/∂[Xi] = βi/Ki ≤ ρi,   (6.19)

which means that the slope of Y = Hi([Xi]) at [Xi] = 0 is at most the slope of the decay line Y = ρi[Xi] there. Hence, the Hill curve Y = Hi([Xi]) lies below the decay line for all [Xi] > 0. □

Suppose ci ≥ 1 and gi = 0 for all i. In general, the origin is the only equilibrium point of our ODE model (5.1) if and only if the univariate curve Y = Hi([Xi]) lies below the decay line Y = ρi[Xi] (i.e., Hi([Xi]) < ρi[Xi] for all [Xi] > 0) for all i. This statement is similar to Theorem (7.6) in the next chapter.

Remark: We have seen the importance of the univariate Hill function Hi([Xi]). For instance, when n = 1, c1 = 2, ρ1 > 0 and g1 = 0, the Hill curve and the decay line intersect at

\[
[X_1]^* = 0 \quad \text{and} \quad [X_1]^* = \frac{\beta_1 \pm \sqrt{\beta_1^2 - 4\rho_1^2 K_1}}{2\rho_1}.
\tag{6.20}
\]

Notice that the equilibrium points depend on the parameters β1, ρ1 and K1. According to Cinquin and Demongeot [38], a sufficiently large c coupled with a sufficiently large β is needed for the existence of an equilibrium point with a component dominating the other components. Moreover, decreasing the value of ρi or adding the term gi may result in an increased value of [Xi]*.
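As a quick numerical check of (6.20), the following sketch (with assumed sample values β1 = 4, ρ1 = 1, K1 = 1; these are illustrative, not taken from the thesis) computes the two nonzero intersections and verifies that every intersection satisfies H1([X1]) = ρ1[X1]:

```python
import math

beta1, rho1, K1 = 4.0, 1.0, 1.0   # illustrative sample values only

def hill(x):
    # Univariate Hill function H1([X1]) with c1 = 2 and g1 = 0
    return beta1 * x ** 2 / (K1 + x ** 2)

# Nonzero equilibria solve rho1*x^2 - beta1*x + rho1*K1 = 0, i.e. (6.20)
disc = beta1 ** 2 - 4 * rho1 ** 2 * K1
roots = [(beta1 - math.sqrt(disc)) / (2 * rho1),
         (beta1 + math.sqrt(disc)) / (2 * rho1)]

for x in [0.0] + roots:
    # each equilibrium is an intersection of the Hill curve and the decay line
    assert abs(hill(x) - rho1 * x) < 1e-9
print([round(r, 4) for r in roots])   # -> [0.2679, 3.7321]
```

Here β1² > 4ρ1²K1, so the discriminant is positive and three equilibria exist; when β1² < 4ρ1²K1 the origin is the only intersection, matching the discussion above.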

Chapter 7

Results and Discussion

Stability of Equilibria and Bifurcation

We determine the stability of the equilibrium points of the generalized Cinquin-Demongeot (2005) ODE model (5.1) for a given set of parameters. We also identify whether varying the values of some parameters, such as those associated with the exogenous stimuli, can steer the system towards a desired state.

7.1

Stability of equilibrium points

Recall Lemma (5.2): Suppose ρi > 0 for all i. Then the generalized Cinquin-Demongeot ODE model (5.1) with X0 ∈ R⊕^n always has a stable equilibrium point. Moreover, any trajectory of the model will converge to a stable equilibrium point.

Theorem 7.1 Given a set of parameters where ρi > 0 for all i, if the system (5.1) has only one equilibrium point then this point is stable. Proof. This is a consequence of Lemma (5.2).



The following theorem assures us that our system (for any dimension n) will never have an asymptotically stable limit cycle:

Theorem 7.2 Suppose ρi > 0 for all i. Then any trajectory of our system (5.1) never converges to a neutrally stable center, to an ω-limit cycle, or to a strange attractor. This also implies that (5.1) will never have an asymptotically stable limit cycle.


Proof. Since for any nonnegative initial condition the trajectory of the ODE model converges to a stable equilibrium point (see Lemma (5.2)), no trajectory can keep orbiting a center, converge to an ω-limit cycle, or converge to a strange attractor. Indeed, suppose an ω-limit cycle exists. Then for some initial condition the trajectory of the system converges to this ω-limit cycle, contradicting Lemma (5.2), which states that any trajectory converges to a stable equilibrium point.



Now, the following is the Jacobian of our system:

\[
J_F(X) =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\tag{7.1}
\]

where

\[
a_{ii} = \frac{\partial F_i}{\partial [X_i]}
= \frac{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)\beta_i c_i [X_i]^{c_i-1} - \beta_i c_i [X_i]^{2c_i-1}}{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)^{2}} - \rho_i
= \frac{\Bigl(K_i + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)\beta_i c_i [X_i]^{c_i-1}}{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)^{2}} - \rho_i
\tag{7.2}
\]

\[
a_{il} = \frac{\partial F_i}{\partial [X_l]}
= \frac{-\beta_i [X_i]^{c_i}\, c_{il}\,\gamma_{il}[X_l]^{c_{il}-1}}{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)^{2}}, \quad i \ne l.
\tag{7.3}
\]

Notice that ∂Fi/∂[Xi] > 0 if

\[
\frac{\Bigl(K_i + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)\beta_i c_i [X_i]^{c_i-1}}{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)^{2}} - \rho_i > 0
\tag{7.4}
\]

or, equivalently,

\[
\rho_i < \frac{\Bigl(K_i + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)\beta_i c_i [X_i]^{c_i-1}}{\Bigl(K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n}\gamma_{ij}[X_j]^{c_{ij}}\Bigr)^{2}}.
\tag{7.5}
\]

Theorem 7.3 If ρi > 0, gi = 0 and ci > 1 for all i, then the origin is a stable equilibrium point of the system (5.1).

Proof. By Corollary (6.3), if gi = 0 for all i then the origin is an equilibrium point. Since ci > 1, the term [Xi]^{ci−1} and every off-diagonal entry ail vanish at X = (0, 0, …, 0), so the characteristic polynomial associated with the Jacobian of our system at the origin is

\[
|J_F(0) - \lambda I| =
\begin{vmatrix}
-\rho_1 - \lambda & 0 & \cdots & 0 \\
0 & -\rho_2 - \lambda & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & -\rho_n - \lambda
\end{vmatrix}
= (-\rho_1 - \lambda)(-\rho_2 - \lambda)\cdots(-\rho_n - \lambda).
\tag{7.6}
\]


The eigenvalues (λ) are −ρ1 , −ρ2 , . . . , −ρn which are all negative. Therefore, the zero state is a stable equilibrium point.
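The diagonal structure used in this proof can also be checked numerically. The sketch below builds a central-difference Jacobian of F at the origin for an assumed sample parameter set (n = 3, ci = cij = 2, βi = Ki = γij = 1, gi = 0, ρ = (0.5, 1.0, 1.5); these values are illustrative only) and recovers diag(−ρ1, −ρ2, −ρ3):

```python
import numpy as np

# Finite-difference check that the Jacobian of (5.1) at the origin is
# diag(-rho_1, ..., -rho_n) when c_i > 1 and g_i = 0. Sample parameters
# (n = 3, c_i = c_ij = 2, beta_i = K_i = gamma_ij = 1) are illustrative only.
n = 3
rho = np.array([0.5, 1.0, 1.5])

def F(X):
    # F_i = X_i^2 / (1 + X_i^2 + sum_{j != i} X_j^2) - rho_i X_i
    denom = 1.0 + np.sum(X ** 2)          # same denominator for every i here
    return X ** 2 / denom - rho * X

eps = 1e-6
J = np.zeros((n, n))
for l in range(n):
    e = np.zeros(n)
    e[l] = eps
    J[:, l] = (F(e) - F(-e)) / (2 * eps)  # central-difference column l

print(np.round(J, 6))                     # -> diag(-0.5, -1.0, -1.5)
```

The quadratic terms cancel in the central difference, so only the linear decay −ρi[Xi] survives at the origin, exactly as in the proof.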



We can vary the size of the basin of attraction of the stable zero i-th component (or of any lower-valued stable component) of an equilibrium point by varying the value of βi or Ki , or sometimes by varying the value of ρi . Let us consider Figure (7.2) for illustration. In Figure (7.2), the original basin of attraction of the origin is [0, +∞) but increasing the value of βi decreases the basin of attraction to [0, C). Decreasing the value of Ki decreases the basin of attraction of the origin to [0, A), and decreasing the value of ρi decreases the basin of attraction of the origin to [0, B).

Figure 7.2: Varying the values of parameters may vary the size of the basin of attraction of the lower-valued stable intersection of Y = Hi ([Xi ]) + gi and Y = ρi [Xi ].

In addition, the size of the basin of attraction of an equilibrium point depends on the number of existing equilibrium points and on the size of the hyperspace (6.1). Given specific parameter values, the hyperspace (6.1) is fixed, and the basin of attraction of each existing equilibrium point is distributed in this hyperspace. If there are multiple stable equilibrium points, then multiple basins of attraction share the hyperspace.

Now, we propose two additional methods for determining the stability of equilibrium points, other than the usual numerical methods for solving ODEs and other than using the Jacobian: a multivariate fixed-point algorithm and ad hoc geometric analysis. We discuss the multivariate fixed-point algorithm in Appendix B. We prove the following theorems using ad hoc geometric analysis.

Theorem 7.5 Suppose ci > 1. Then [Xi]* = 0 (where [Xi]* is the i-th component of an equilibrium point) is always a stable component.

Proof. Recall from Theorem (6.2) that our system has an equilibrium point with i-th component equal to zero if and only if gi = 0. The only possible topologies of the intersections of Y = Hi([Xi]) and Y = ρi[Xi] are shown in Figure (7.3). Notice that a zero i-th component is always stable.



Figure 7.3: The possible numbers of intersections of Y = ρi[Xi] and Y = Hi([Xi]) + gi where ci > 1 and gi = 0. The value of Ki + Σ_{j=1, j≠i}^{n} γij[Xj]^{cij} is taken as a parameter.

Theorem (7.5) is very important because it proves that once the pluripotency module (where g4 = 0; see the discussion in Chapter 5, Section 5.2) is switched off, it can never be switched on again, unless we make g4 > 0 or introduce some random noise. This is consistent with the observation of MacArthur et al. in [113].

Theorem 7.6 Suppose ci ≥ 1 and gi = 0 for all i. The only stable equilibrium point of our ODE model (5.1) is the origin if and only if the univariate curve Y = Hi([Xi]) essentially lies below the decay line Y = ρi[Xi] (i.e., Hi([Xi]) ≤ ρi[Xi] for all [Xi] > 0) for all i.

Proof. Suppose the curve Y = Hi([Xi]) essentially lies below the decay line Y = ρi[Xi]. Then the intersections can take any of the forms shown in Figure (7.4), and it is clear that zero is the only stable intersection.

Figure 7.4: The possible topologies when Y = Hi ([Xi ]) essentially lies below the decay line Y = ρi [Xi ], gi = 0.

Conversely, suppose the only stable equilibrium point is the origin. Hence [Xj], j ≠ i, must converge to zero. We substitute [Xj]* = 0, j ≠ i, into Hi([X1], [X2], …, [Xn]) (5.2). The intersections of Y = Hi(0, …, [Xi], …, 0) = Hi([Xi]) and Y = ρi[Xi] must contain the origin (since we assumed that the origin is an equilibrium point). Looking at the possible topologies of the intersections of Y = Hi([Xi]) and Y = ρi[Xi] (see Figures (5.11) and (5.13)), zero can only be the sole stable intersection if the intersections take one of the forms shown in Figure (7.4). Therefore, the curve Y = Hi([Xi]) essentially lies below the decay line Y = ρi[Xi].



Theorem 7.7 Suppose ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK. Then the origin is an unstable equilibrium point of the system (5.1), while the points lying on the hyperplane

\[
\sum_{j=1}^{n} [X_j] = \frac{\beta}{\rho} - K
\tag{7.7}
\]

are stable equilibrium points.

Proof. From Corollary (6.8), the origin and the points lying on the hyperplane are equilibrium points of the system (5.1) given the assumed parameter values.

Suppose Σ_{j≠i} [Xj] = 0 in the denominator of Hi (5.2). At [Xi] = 0, the slope of the Hill curve Y = Hi([Xi]) is

\[
\frac{\partial H_i}{\partial [X_i]} = \frac{\beta}{K}.
\tag{7.8}
\]

Since β > ρK, then β/K > ρ. This implies that the slope of Y = Hi([Xi]) at [Xi] = 0 is greater than the slope of the decay line Y = ρ[Xi]. Therefore, when Σ_{j≠i} [Xj] = 0 in the denominator of Hi (5.2), there are two intersections of Y = Hi([Xi]) and Y = ρ[Xi]: at the origin (which is unstable) and at [Xi] = β/ρ − K (which is stable).

Now, suppose Σ_{j≠i} [Xj] in the denominator of Hi varies. Then the intersections of Y = Hi([Xi]) and Y = ρ[Xi] are at the origin (which is unstable) and at [Xi] = β/ρ − K − Σ_{j≠i} [Xj] (which is stable). Hence, the hyperplane [Xi] = β/ρ − K − Σ_{j≠i} [Xj], where [Xi] and the [Xj] are nonnegative, is a set of stable equilibrium points. See Figure (7.5) for illustration.

Figure 7.5: The origin is unstable while the points where [Xi]* = β/ρ − K − Σ_{j≠i} [Xj]* are stable.

In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value of the system may lead the trajectory to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. The basin of attraction of each stable non-isolated equilibrium point may not be as large as the basin of attraction of a stable isolated equilibrium point. This special phenomenon represents competition where the co-expression, extinction and domination of the TFs depend on the value of each TF, and the dependence among TFs is a continuum. The existence of an attracting hyperplane was also discovered by Cinquin and Demongeot in [38].
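Points on the hyperplane of Theorem (7.7) can be verified directly against the model's right-hand side. The sketch below is a plain transcription of (5.1) (the layout of the parameters as Python lists is my own); with β = 5 and ρ = K = γij = ci = cij = 1, g = 0, any nonnegative point with Σ[Xj] = β/ρ − K = 4 makes every component of the vector field vanish:

```python
# A direct transcription of the generalized Cinquin-Demongeot right-hand side
# (5.1) for arbitrary dimension n; parameter names follow the thesis.
def rhs(X, beta, c, K, gamma, cc, g, rho):
    n = len(X)
    out = []
    for i in range(n):
        denom = K[i] + X[i] ** c[i] + sum(
            gamma[i][j] * X[j] ** cc[i][j] for j in range(n) if j != i)
        out.append(beta[i] * X[i] ** c[i] / denom + g[i] - rho[i] * X[i])
    return out

# Symmetric Michaelis-Menten-type case of Theorem (7.7): beta = 5 and all
# other parameters 1 (g = 0); nonnegative points with sum = beta/rho - K = 4
# should make the right-hand side vanish.
n = 4
ones = [[1.0] * n for _ in range(n)]
params = dict(beta=[5.0] * n, c=[1] * n, K=[1.0] * n,
              gamma=ones, cc=ones, g=[0.0] * n, rho=[1.0] * n)
print(rhs([1.0, 1.0, 1.0, 1.0], **params))   # -> [0.0, 0.0, 0.0, 0.0]
print(rhs([4.0, 0.0, 0.0, 0.0], **params))   # -> [0.0, 0.0, 0.0, 0.0]
```

Both test points lie on the hyperplane Σ[Xj] = 4, so each is an equilibrium even though the components differ, illustrating the continuum of non-isolated equilibria.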

7.2

Bifurcation of parameters

We have seen in Chapter 6 and in Section 7.1 (see also Appendix C) the effects of the parameters βi, Ki, ρi, ci, cij and gi on the number of equilibrium points, the sizes of their basins of attraction, and their behavior. Varying the values of some parameters can decrease the size of the basin of attraction of an undesirable equilibrium point as well as increase the size of the basin of attraction of a desirable one. We can mathematically manipulate the parameter values to ensure that the initial condition is in the basin of attraction of our desired equilibrium point.

Intuitively, we can make the i-th component of an equilibrium point dominate the other components by increasing βi or gi or, in some instances, by decreasing ρi. Decreasing the value of Ki, or sometimes increasing the value of ci, shrinks the basin of attraction of the lower-valued stable intersection of Y = Hi([Xi]) + gi and Y = ρi[Xi]; thus, the chance of converging to an equilibrium point with [Xi]* > [Xj]*, j ≠ i, may increase. However, the effect of Ki and ci in increasing the value of [Xi]* is not as drastic as that of βi, gi and ρi, since Ki and ci do not affect the upper bound of the hyperspace (6.1). In addition, increasing the value of ci or of cij may result in an increased number of equilibrium points, and possibly in multistability (by Theorem (6.6)). We show in Appendix C some numerical bifurcation analysis to illustrate possible bifurcation types that may occur.

In this section, we determine how to obtain an equilibrium point that has an i-th component sufficiently dominating the other components, especially by introducing an exogenous stimulus. We focus on the parameter gi because the introduction of an exogenous stimulus is experimentally feasible.

Increasing the effect of exogenous stimuli

If we increase the value of gi up to a sufficient level, then we can increase the value of [Xi] where Y = Hi([Xi]) + gi and Y = ρi[Xi] intersect. We can also make such an increased [Xi] the only intersection. See Figure (7.6) for illustration.

Moreover, as we increase the value of gi up to a sufficient level, we increase the possible value of [Xi]*. Since [Xi] inhibits [Xj], then as we increase the value of [Xi]*, we can decrease the value of [Xj], j ≠ i, where Y = Hj([Xj]) + gj and Y = ρj[Xj] intersect. We can also make such a decreased [Xj] the only intersection. If gj = 0, we can make [Xj] = 0 the only intersection of Y = Hj([Xj]) and Y = ρj[Xj]. See Figure (7.7) for illustration.


Figure 7.6: Increasing the value of gi can result in an increased value of [Xi] where Y = Hi([Xi]) + gi and Y = ρi[Xi] intersect.

Therefore, by changing the value of gi we can obtain a sole stable equilibrium point where the i-th component dominates the others. For any initial condition, the trajectory of the ODE model (5.1) will converge to this sole equilibrium point. By varying the value of gi, we can manipulate the cell fate of a stem cell: controlling the tripotency, bipotency, unipotency and terminal state of the cell. We present illustrations in Appendix C showing the effect of increasing the value of gi.

Remark: Suppose, given a specific initial condition, the solution to our system tends to an equilibrium point with [Xi]* = 0. If we want our solution to escape [Xi]* = 0, then one strategy is to add gi > 0. The idea of adding a sufficient amount of gi > 0 is to make the solution of our system escape a certain equilibrium point. However, it is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates; we may rather introduce a gi that degrades through time. We can make gi a function of time (i.e., gi varies through time). This strategy amounts to adding another equation and state variable to our system of ODEs.


Figure 7.7: Increasing the value of gi can result in an increased value of [Xi]*, and consequently in a decreased value of [Xj], j ≠ i, where Y = Hj([Xj]) + gj and Y = ρj[Xj] intersect.

We can think of gi as an additional node in our GRN, which we call the injection node. In our case, we consider functions that represent a degrading amount of gi; refer to Appendix C for illustration. Adding a degrading amount of gi affects cell fate, but this strategy may not give rise to a sole equilibrium point. Moreover, this strategy is only applicable to systems with multiple stable equilibrium points where the convergence of trajectories is sensitive to initial conditions.
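A minimal one-dimensional sketch of the degrading-injection strategy (all parameter values assumed for illustration: β = K = 1, c = 2, ρ = 0.2, with g(t) = g0·e^{−0.1t} implemented as an extra state variable; none of these values come from the thesis):

```python
def run(g0, delta=0.1, dt=0.01, steps=20000):
    # Euler integration of dx/dt = x^2/(1 + x^2) + g(t) - 0.2*x, where the
    # injected stimulus degrades exponentially: dg/dt = -delta*g.
    x, g = 0.0, g0
    for _ in range(steps):
        x += dt * (x * x / (1.0 + x * x) + g - 0.2 * x)
        g += dt * (-delta * g)       # the injection node decays through time
    return x

# Without injection the switched-off state x = 0 is never escaped; a transient
# injection pushes x into the basin of the upper stable equilibrium.
print(run(0.0))                      # -> 0.0
print(round(run(2.0), 3))
```

Without the injection, x = 0 is stable and never escaped; the transient injection carries x into the basin of the upper equilibrium (5 + √21)/2 ≈ 4.791 (the larger root of (6.20) for these sample values), where it remains even after g has degraded to nearly zero.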

Chapter 8

Results and Discussion

Introduction of Stochastic Noise

We numerically investigate the effect of random noise on the cell differentiation system using Stochastic Differential Equations (SDEs). In [38], Cinquin and Demongeot suggested extending their model to include stochastic kinetics. We have written a Scilab [150] program (see Algorithm (5)-(6) in Appendix D) to simulate the effect of stochastic noise on the dynamics of our GRN. We employ several functions G (see Section 3.3 in Chapter 3) to observe the various effects of the added Gaussian white noise term. The different functions G are:

\[ G(X) = 1, \tag{8.1} \]
\[ G(X) = X, \tag{8.2} \]
\[ G(X) = \sqrt{X}, \tag{8.3} \]
\[ G(X) = F(X), \text{ and} \tag{8.4} \]
\[ G(X) = \sqrt{H(X) + \check{g} + \check{\rho}X}. \tag{8.5} \]

In function (8.1), the noise term is not dependent on any variable. This function is used by MacArthur et al. in [113]. The noise term with (8.2) or (8.3) is affected by the value of X; that is, as the concentration X increases/decreases, the effect of the noise term also increases/decreases. Function (8.2) is used by Glauche et al. in [71]. However, using (8.2) or (8.3) has an undesirable biological implication: as [Xi] dominates the concentrations of the other TFs, the effect of random noise on [Xi] intensifies.


In (8.4), the noise term is affected by the value of F(X) (the right-hand side of our corresponding ODE model); that is, as the deterministic change in the concentration X with respect to time, dX/dt = F(X), increases/decreases, the effect of the noise term also increases/decreases. In using (8.4), we expect a decreasing amount of noise through time because our ODE system (5.1) always converges to an equilibrium point. In other words, as F(X) → 0 (since F(X*) = 0), the effect of the noise also vanishes. Function (8.5) is based on the random population growth model [5, 91]. Here ǧ and ρ̌ are the matrices containing the parameters gi and ρi (i = 1, 2, …, n), respectively.

In Algorithm (5)-(6) (see Appendix D), we use the Euler method to numerically solve the system of ODEs, while we use the Euler-Maruyama method to numerically solve the corresponding system of SDEs. The output of the algorithm is a time series of the solutions. The solution of the ODE model is visualized by a thick solid line, while a realization of the SDE model is visualized by a thin solid line. In the Euler-Maruyama method, whenever [Xi] < 0 we set [Xi] = 0.

The SDE models that we have used in this thesis are not exhaustive. We can consider other types of SDE models, such as

\[
dX = F(X)\,dt + \sigma_A \sqrt{H(X)}\,dW_A - \sigma_B \sqrt{\check{\rho}X}\,dW_B + \sigma_C \sqrt{\check{g}}\,dW_C.
\tag{8.6}
\]

In the following examples we suppose n = 4 and σii = 0.5. Let the simulation step size be 0.01.

Illustration 1

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters equal to 1 except for βi = 5 and gi = 0 for all i. This system has infinitely many non-isolated stable equilibrium points (see Theorem (7.7)). The corresponding system of SDEs is as follows:

\[
d[X_i] = \left( \frac{5[X_i]}{1 + [X_1] + [X_2] + [X_3] + [X_4]} - [X_i] \right) dt + \sigma_{ii}\, G_i([X_i])\, dW_i, \quad i = 1, 2, 3, 4.
\tag{8.7}
\]

We assume G1 = G2 = G3 = G4 = G. Suppose the initial condition is [Xi]0 = 4 for all i. Figures (8.1) to (8.5) show different realizations of the corresponding SDE model. In the deterministic case, we expect the solutions [X1], [X2], [X3] and [X4] to converge to an equilibrium point with equal components because our system is symmetric and [X1]0 = [X2]0 = [X3]0 = [X4]0. However, because of the presence of noise, some TFs seem to dominate the others. It is possible that the solution to the SDE approaches a different equilibrium point. This biological volatility is due to the presence of infinitely many stable equilibrium points.
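The realizations shown here were produced with the Scilab program of Appendix D; the following is a minimal Python sketch (not the thesis code, with an arbitrary seed) of one Euler-Maruyama realization of (8.7) with G(X) = 1:

```python
import math
import random

random.seed(1)                       # fix one sample path for reproducibility
n, beta, sigma, dt, steps = 4, 5.0, 0.5, 0.01, 5000
X = [4.0] * n                        # initial condition [X_i]_0 = 4

for _ in range(steps):
    denom = 1.0 + sum(X)             # K + sum_j [X_j] with K = gamma_ij = 1
    for i in range(n):
        drift = beta * X[i] / denom - X[i]
        dW = math.sqrt(dt) * random.gauss(0.0, 1.0)
        X[i] = max(X[i] + drift * dt + sigma * dW, 0.0)   # clip [X_i] < 0 to 0
print([round(v, 3) for v in X])
```

Along a realization, the sum Σ[Xj] hovers near the attracting hyperplane value β/ρ − K = 4, while the noise lets the individual components drift along the hyperplane, so some components end up dominating the others.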


Figure 8.1: For Illustration 1; ODE solution and SDE realization with G(X) = 1.

Figure 8.2: For Illustration 1; ODE solution and SDE realization with G(X) = X.

Figure 8.3: For Illustration 1; ODE solution and SDE realization with G(X) = √X.

Figure 8.4: For Illustration 1; ODE solution and SDE realization with G(X) = F (X).


Figure 8.5: For Illustration 1; ODE solution and SDE realization using the random population growth model.


Illustration 2

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters equal to 1 except for ci = cij = 2 (for all i, j), g1 = 5, g2 = 3, g3 = 1 and g4 = 0. This system has a sole equilibrium point, which is ([X1]* ≈ 5.72411, [X2]* ≈ 3.23066, [X3]* ≈ 1.02313, [X4]* = 0). The corresponding system of SDEs is as follows:

\[
d[X_i] = \left( \frac{[X_i]^2}{1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2} + g_i - [X_i] \right) dt + \sigma_{ii}\, G_i([X_i])\, dW_i, \quad i = 1, 2, 3, 4.
\tag{8.8}
\]

We assume G1 = G2 = G3 = G4 = G. Figures (8.6) to (8.10) show different realizations of the corresponding SDE model with initial condition [Xi]0 = 3 for all i. In the deterministic case, we expect the solution to converge to the sole equilibrium point. From our simulations, it seems that our system is robust against the presence of moderate noise: each realization of the SDE model nearly follows the deterministic trajectory. We expect this to happen because, for any initial condition, the solution to our system tends towards only one attractor. Recall that one possible strategy for controlling our system to have only one stable equilibrium point is to introduce an adequate amount of exogenous stimuli (see Chapter 7, Section 7.2). In order to ensure that cells will not change lineages, we need to make the desired i-th lineage have a corresponding [Xi]* that sufficiently dominates the concentrations of the other TFs.
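The stated sole equilibrium can be reproduced by Euler-integrating the deterministic part of (8.8); a short sketch:

```python
# Forward Euler for the drift of (8.8); the trajectory settles at the sole
# equilibrium regardless of the (nonnegative) initial condition.
g = [5.0, 3.0, 1.0, 0.0]
X = [3.0] * 4                        # initial condition [X_i]_0 = 3
dt = 0.01
for _ in range(20000):               # integrate up to t = 200
    denom = 1.0 + sum(x * x for x in X)
    X = [x + dt * (x * x / denom + gi - x) for x, gi in zip(X, g)]
print([round(x, 5) for x in X])      # close to (5.72411, 3.23066, 1.02313, 0)
```

Since the Euler map has the same fixed points as the ODE, the final iterate agrees with the stated equilibrium to within the convergence tolerance.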


Figure 8.6: For Illustration 2; ODE solution and SDE realization with G(X) = 1.

Figure 8.7: For Illustration 2; ODE solution and SDE realization with G(X) = X.

Figure 8.8: For Illustration 2; ODE solution and SDE realization with G(X) = √X.

Figure 8.9: For Illustration 2; ODE solution and SDE realization with G(X) = F (X).


Figure 8.10: For Illustration 2; ODE solution and SDE realization using the random population growth model.


Illustration 3

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters ci = cij = 2, βi = 1, Ki = 1, γij = 1/8, ρi = 1/21 and gi = 0 for all i, j. This system has multiple stable equilibrium points (see Figure (8.16)). The corresponding system of SDEs is as follows:

\[
d[X_i] = \left( \frac{[X_i]^2}{1 + [X_i]^2 + \frac{1}{8}\sum_{j \ne i} [X_j]^2} - \frac{1}{21}[X_i] \right) dt + \sigma_{ii}\, G_i([X_i])\, dW_i, \quad i = 1, 2, 3, 4.
\tag{8.9}
\]

We assume G1 = G2 = G3 = G4 = G. Figures (8.11) to (8.15) show different realizations of the corresponding SDE model with initial condition [Xi]0 = 2 for all i. In the deterministic case, we expect the solutions [X1], [X2], [X3] and [X4] to converge to an equilibrium point with equal components because our system is symmetric and [X1]0 = [X2]0 = [X3]0 = [X4]0. For a system with regulated noise, the solution to the SDE may follow the behavior of the trajectory of the ODE. However, it is also possible that [X1]*, [X2]*, [X3]* and [X4]* tend towards different values, especially when the effect of the noise becomes significant. A sufficient amount of random noise may cause cells to shift cell types, especially when the initial condition is near the boundary of the basins of attraction of the equilibrium points. Figure (8.16) shows the possible values of [X1]* and [X2]* given a varying initial condition [X1]0. To regulate the effect of noise, we can change the values of some parameters, such as the degradation rate and the effect of the exogenous stimulus, to decrease the size of the basin of attraction of an undesirable equilibrium point. Multistability in the presence of random noise represents stochastic differentiation of cells.
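The multistability behind this sensitivity is already visible in the deterministic part of (8.9): Euler-integrating from a symmetric and from a biased initial condition (the biased start [6, 0.5, 0.5, 0.5] is an arbitrary choice of mine) ends at different equilibria:

```python
def settle(X, dt=0.01, steps=60000):
    # Forward Euler for the deterministic part of (8.9)
    X = list(X)
    for _ in range(steps):
        sq = [x * x for x in X]
        total = sum(sq)
        X = [x + dt * (s / (1.0 + s + (total - s) / 8.0) - x / 21.0)
             for x, s in zip(X, sq)]
    return X

sym = settle([2.0, 2.0, 2.0, 2.0])     # symmetric start: equal components
biased = settle([6.0, 0.5, 0.5, 0.5])  # biased start: [X1] stays dominant
print([round(v, 3) for v in sym])
print([round(v, 3) for v in biased])
```

The symmetric start settles with four equal components (≈ 15.225 each), while the biased start settles with [X1] ≈ 20.952 and the other components switched off; which basin the noise pushes a trajectory into decides the realized cell type.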


Furthermore, the presence of noise may induce abnormal fluctuations in the concentrations of the TFs, specifically when using functions (8.2) and (8.3), as shown in Figure (8.12).

Figure 8.11: For Illustration 3; ODE solution and SDE realization with G(X) = 1.

Figure 8.12: For Illustration 3; ODE solution and SDE realization with G(X) = X.

Figure 8.13: For Illustration 3; ODE solution and SDE realization with G(X) = √X.

Figure 8.14: For Illustration 3; ODE solution and SDE realization with G(X) = F (X).


Figure 8.15: For Illustration 3; ODE solution and SDE realization using the random population growth model.

Figure 8.16: Phase portrait of [X1 ] and [X2 ].


Illustration 4

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters ci = cij = 2, βi = 1, Ki = 1, γij = 1, ρi = 1 and gi = 0 for all i, j. Suppose the initial condition is [Xi]0 = 0 for all i, which means that all TFs are switched off. Figure (8.17) shows that, with the presence of noise, the TFs can be reactivated. However, inactive TFs cannot be activated by using any of the functions (8.2), (8.3), (8.4) or (8.5).

Figure 8.17: Reactivating switched-off TFs by introducing random noise, where G(X) = 1.

Chapter 9

Summary and Recommendations

We simplify the gene regulatory network (GRN) model of MacArthur et al. [113] to study the mesenchymal cell differentiation system. The simplified MacArthur GRN is given in the following figure:

Figure 9.1: The simplified MacArthur et al. GRN

We translate the simplified network model into a system of Ordinary Differential Equations (ODEs) using a generalized Cinquin-Demongeot model [38]. We generalize the Cinquin-Demongeot ODE model as:

\[
\frac{d[X_i]}{dt} = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j\ne i}^{n} \gamma_{ij}[X_j]^{c_{ij}}} + g_i - \rho_i [X_i]
\]

where i = 1, 2, …, n. The state variables of the ODE model represent the concentrations of the transcription factors (TFs) involved in gene expression. For our simplified network, [X1] := [RUNX2], [X2] := [SOX9], [X3] := [PPAR-γ] and [X4] := [sTF]. Some of our results are applicable not only to n = 4 but to any dimension.

An asymptotically stable equilibrium point is associated with a certain cell type, e.g., tripotent, bipotent, unipotent or terminal state. If [Xi] sufficiently dominates the concentrations of the other TFs, then the chosen lineage is towards the i-th cell type.

For an ODE model to be useful, it is necessary that it has a solution. It is difficult to predict the behavior of our system if the solution is not unique. We are able to prove that there exists a unique solution to our model for some values of ci and cij. The exponents ci and cij represent cooperativity among binding sites.

We propose two additional methods for determining the behavior of equilibrium points other than the usual numerical methods for solving ODEs, and other than using the Jacobian: (1) ad hoc geometric analysis; and (2) a multivariate fixed-point algorithm. The geometry of the Hill function Hi is essential in understanding the behavior of the equilibrium points of our ODE system. From the geometric analysis, we are able to prove that our state variables [Xi] will never be negative (i.e., R⊕^n is positively invariant with respect to the flow of our ODE model) and that our ODE model always has a stable equilibrium point. Any trajectory of the model will converge to a stable equilibrium point.

A stable equilibrium point (0, 0, …, 0) is trivial because this state neither represents a pluripotent cell nor a cell differentiating into bone, cartilage or fat. In our case, the cell may differentiate to other cell types which are not in the domain of our GRN. The zero state may also represent a cell that is in a quiescent stage. We are able to prove theorems associated with the existence of a stable zero state, such as:

• Our system has an equilibrium point with i-th component equal to zero if and only if gi = 0. Moreover, if ρi > 0 and ci > 1 then this zero i-th component is always stable.

• The zero state (0, 0, …, 0) is an equilibrium point if and only if gi = 0 for all i. If ρi > 0 and ci > 1 for all i then the zero state is stable.

• Suppose gi = 0 for all i. The only stable equilibrium point of our ODE model is the origin if and only if the univariate Hill curve Y = Hi([Xi]) essentially lies below the decay line Y = ρi[Xi] for all i.

If converging to the zero state is undesirable, we can decrease the size of the basin of attraction of the zero state by sufficiently increasing the value of βi or by sufficiently decreasing the values of Ki and ρi. In addition, we can add gi > 0 to escape a stable zero state. In the case where ci > 1 and gi = 0, if the TF associated with [Xi] is switched off, then it can never be switched on again because the zero i-th component of an equilibrium point is stable. Two possible strategies for escaping an inactive state are to increase the value of gi or to introduce some random noise.

The following theorems give us ideas regarding the location and number of the equilibrium points ([X1]*, [X2]*, …, [Xn]*):

• Suppose ρi > 0. The equilibrium points of our system lie in the hyperspace [g1/ρ1, (g1 + β1)/ρ1] × [g2/ρ2, (g2 + β2)/ρ2] × … × [gn/ρn, (gn + βn)/ρn].

• The generalized Cinquin-Demongeot ODE model (where ci and cij are integers) has a finite number of equilibrium points except when all of the following conditions are satisfied: ci = cij = 1, gi = 0, γij = 1, βi = βj = β > 0, ρi = ρj = ρ > 0, Ki = Kj = K > 0 and β > ρK, for all i and j.
• If the generalized Cinquin-Demongeot ODE model (where ci and cij are integers) has a finite number of equilibrium points, then the number of equilibrium points is at most max{c1 + 1, c1j + 1 ∀j} × max{c2 + 1, c2j + 1 ∀j} × … × max{cn + 1, cnj + 1 ∀j}.

We are able to find one case where there are infinitely many stable non-isolated equilibrium points. This happens in a symmetric Michaelis-Menten-type system. In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value of the system may lead the trajectory of the system to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. This special phenomenon represents a competition where the co-expression, extinction and domination of the TFs continuously depend on the value of each TF.

If gn = 0, then the n-dimensional system is more general than the (n − 1)-dimensional system. That is, we can derive the equilibrium points of the (n − 1)-dimensional system by getting the equilibrium points of the n-dimensional system where [Xn]* = 0. It is clear that when [Xn]* = 0 and gn = 0, the n-dimensional system reduces to an (n − 1)-dimensional system.

Furthermore, we are able to prove an additional theorem related to the behavior of the solutions of our ODE model: if ρi > 0 for all i, then our system never converges to a center, to an ω-limit cycle or to a strange attractor. The existence of a center, ω-limit cycle or strange attractor, which would result in recurring changes in phenotype, is abnormal for a natural fully differentiated cell.

The parameters βi, Ki, ρi, γij, ci, cij and gi affect the number of equilibrium points, the sizes of their basins of attraction, and their behavior. We can make the i-th component of an equilibrium point dominate the other components by increasing βi, by increasing gi or, sometimes, by decreasing ρi. Decreasing the value of Ki or increasing the value of ci may also increase the chance of having an [Xi]* dominating the steady-state concentrations of the other TFs.
However, the effect of Ki and ci in increasing the value of [Xi ]∗ is not as drastic compared to that of βi , gi and ρi , since Ki and ci do not affect the upper bound of [Xi ]∗ . In some instances, varying γij may also induce the system to have a steady state


where some TFs dominate the other TFs. Furthermore, increasing the value of ci or of cij may result in multistability.

We focus on manipulating the effect of the exogenous stimulus because this is experimentally feasible. It is possible that changing the value of gi results in a sole stable equilibrium point where the i-th component dominates the others. That is, we can steer our system towards a desired state given any initial condition just by introducing an adequate amount of exogenous stimulus. However, this strategy is not applicable to the pluripotency module (sTF node) when g4 = 0. If g4 = 0, the only possible strategy for a switched-off sTF node to be reactivated is to introduce random noise.

We also consider the case where gi changes through time. It is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we consider an amount of gi that degrades through time. We consider two types of functions to represent gi: a linear function with negative slope, and an exponential function with negative exponent. The idea of initially adding a sufficient amount of gi > 0 is to make the solution of our system escape a certain equilibrium point. However, this strategy is only applicable to systems with multiple stable equilibrium points where the convergence of the trajectories is sensitive to initial conditions.

Multistability in the presence of random noise represents stochastic differentiation of cells. In the presence of random noise, it is possible that the solutions tend towards different attractors. Random noise may cause cells to shift cell types, especially when equilibrium points are near each other, or when the initial condition is near the boundary of the basins of attraction of the equilibrium points. However, we can increase the robustness of our system against the effect of moderate random noise by increasing the size of the basin of attraction of the desired equilibrium point, or by having only one stable equilibrium point. Suppose we have only one stable equilibrium point. Since the solution of our ODE system tends toward this single attractor for any initial condition, we can expect the realization of the corresponding SDE model to (approximately) follow the deterministic trajectory. One possible strategy to ensure that our system has only one stable


equilibrium point is to introduce an adequate amount of exogenous stimulus. For validation, we recommend comparing our results with other models of GRN dynamics and with existing information gathered in actual experiments. We can extend the results of this thesis by considering other kinds of GRNs, possibly with more cell lineages involved. We can also include cell division and intercellular interactions.

Appendix A

More on Equilibrium Points: Illustrations

In our numerical computations, "difficult and computationally expensive" means the problem is not efficiently solvable using Scientific WorkPlace [116] run on a laptop with an Intel Pentium P6200 2.13 GHz processor and 2 GB RAM.

In determining the values of equilibrium points, we need to check whether the derived solutions are really solutions of the system, because we may have encountered approximation errors during our numerical computations.

Solving the system Fi(X) = 0, i = 1, 2, . . . , n can be interpreted as finding the intersections of the (n + 1)-dimensional curves induced by each Fi(X) and the (n + 1)-dimensional zero-hyperplane. Figure (A.1) shows an illustration for n = 2.

Figure A.1: Intersections of F1 , F2 and zero-plane, an example.
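Such a residual check can be automated. The sketch below (written in Python for illustration; the thesis computations themselves use Scientific WorkPlace and Scilab) substitutes a candidate point back into the two-dimensional polynomial system (A.1) given in the next section, with all parameter values as adjustable examples:

```python
# Residual check: confirm that a numerically derived candidate (x1, x2) really
# solves P1 = P2 = 0 up to a small tolerance, guarding against approximation
# errors. The system is the n = 2, ci = cij = 1 polynomial system (A.1);
# all default parameter values below are illustrative.

def P1(x1, x2, beta1=1.0, K1=1.0, rho1=1.0, gam12=1.0, g1=1.0):
    return (-rho1 * x1**2 + (beta1 + g1) * x1
            - (K1 + gam12 * x2) * (rho1 * x1) + g1 * gam12 * x2 + g1 * K1)

def P2(x1, x2, beta2=1.0, K2=1.0, rho2=1.0, gam21=1.0, g2=1.0):
    return (-rho2 * x2**2 + (beta2 + g2) * x2
            - (K2 + gam21 * x1) * (rho2 * x2) + g2 * gam21 * x1 + g2 * K2)

def is_equilibrium(x1, x2, tol=1e-8):
    # a candidate is accepted only if both residuals vanish numerically
    return abs(P1(x1, x2)) < tol and abs(P2(x1, x2)) < tol
```

With all parameters equal to 1, the point [X1]∗ = [X2]∗ = (1 + √3)/2 passes the check, while the nearby point (1, 1) does not.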

We give some cases where we use the Sylvester matrix in finding equilibrium points.

A.1 Assume n = 2, ci = 1, cij = 1

We determine the equilibrium points when ci = 1 and cij = 1 for all i and j in a two-dimensional system. The system of polynomial equations is as follows:

    P1([X1], [X2]) = −ρ1[X1]² + (β1 + g1)[X1] − (K1 + γ12[X2])(ρ1[X1]) + g1γ12[X2] + g1K1 = 0
    P2([X1], [X2]) = −ρ2[X2]² + (β2 + g2)[X2] − (K2 + γ21[X1])(ρ2[X2]) + g2γ21[X1] + g2K2 = 0    (A.1)

If P1 and P2 have no common factors, then by Theorem (6.6) the number of complex solutions to the polynomial system (A.1) is at most 4. The corresponding Sylvester matrix of P1 and P2 with [X1] as variable is

    | a11  a12  a13 |
    | a21  a22   0  |    (A.2)
    |  0   a21  a22 |

where a11 = −ρ1, a12 = β1 + g1 − K1ρ1 − γ12ρ1[X2], a13 = g1γ12[X2] + g1K1, a21 = −γ21ρ2[X2] + g2γ21 and a22 = −ρ2[X2]² + (β2 + g2 − K2ρ2)[X2] + g2K2. The Sylvester resultant res(P1, P2; X1) is a polynomial in [X2] of degree at most 4. By the Fundamental Theorem of Algebra, res(P1, P2; X1) = 0 has at most 4 complex solutions, which is consistent with Theorem (6.6). It is difficult and computationally expensive to find the exact solutions to res(P1, P2; X1) = 0 in terms of the arbitrary parameters, so we investigate specific cases where we assign values to some parameters.
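The claim that res(P1, P2; X1) has degree at most 4 can be spot-checked numerically: a polynomial of degree at most 4 has vanishing fifth-order finite differences on equally spaced points. A sketch in Python (the determinant formula expands the 3 × 3 matrix (A.2); the parameter values g1 = 2, g2 = 1 are arbitrary examples):

```python
# Evaluate the determinant of the Sylvester matrix (A.2) as a function of [X2]
# and verify that it is a polynomial of degree at most 4 via fifth-order
# finite differences. Parameter values are arbitrary examples.

def sylvester_det(x2, beta1=1.0, K1=1.0, rho1=1.0, gam12=1.0, g1=2.0,
                  beta2=1.0, K2=1.0, rho2=1.0, gam21=1.0, g2=1.0):
    a11 = -rho1
    a12 = beta1 + g1 - K1 * rho1 - gam12 * rho1 * x2
    a13 = g1 * gam12 * x2 + g1 * K1
    a21 = -gam21 * rho2 * x2 + g2 * gam21
    a22 = -rho2 * x2**2 + (beta2 + g2 - K2 * rho2) * x2 + g2 * K2
    # determinant of [[a11, a12, a13], [a21, a22, 0], [0, a21, a22]]
    return a11 * a22 * a22 - a12 * a21 * a22 + a13 * a21 * a21

vals = [sylvester_det(float(t)) for t in range(6)]
for _ in range(5):                      # take five successive differences
    vals = [b - a for a, b in zip(vals, vals[1:])]
fifth_difference = vals[0]              # zero iff degree <= 4 on these points
```

Here the fifth difference vanishes exactly; with these sample parameter values the resultant happens to degenerate to a quadratic in [X2].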

A.1.1 Illustration 1

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary g1 and arbitrary g2. Assume g1 > 0 or g2 > 0. The Sylvester matrix with [X1] as variable is as follows:

    | −1          g1 − [X2]               g1([X2] + 1)            |
    | g2 − [X2]   −[X2]² + g2[X2] + g2    0                       |    (A.3)
    | 0           g2 − [X2]               −[X2]² + g2[X2] + g2    |

It follows that

    res(P1, P2; X1) = (g1 + g2)[X2]² − (g1g2 + g2²)[X2] − g2².    (A.4)

Then the roots of res(P1, P2; X1) = 0 are

    [X2] = ( g1g2 + g2² ± √((g1g2 + g2²)² + 4(g1 + g2)g2²) ) / ( 2(g1 + g2) ).    (A.5)

By the same procedure as above, the roots of res(P1, P2; X2) = 0 are

    [X1] = ( g1g2 + g1² ± √((g1g2 + g1²)² + 4(g1 + g2)g1²) ) / ( 2(g1 + g2) ).    (A.6)

Since g1 > 0 or g2 > 0, we have g1g2 + g2² ≤ √((g1g2 + g2²)² + 4(g1 + g2)g2²) and g1g2 + g1² ≤ √((g1g2 + g1²)² + 4(g1 + g2)g1²), so only the roots taken with the plus sign are nonnegative. Hence, we have the equilibrium point ([X1]∗, [X2]∗) equal to

    ( ( g1g2 + g1² + √((g1g2 + g1²)² + 4(g1 + g2)g1²) ) / ( 2(g1 + g2) ),  ( g1g2 + g2² + √((g1g2 + g2²)² + 4(g1 + g2)g2²) ) / ( 2(g1 + g2) ) ).

Therefore, for this example, we have exactly one equilibrium point.
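The closed-form point can be double-checked by substituting (A.5) and (A.6) back into system (A.1); a sketch in Python using the sample values g1 = 2, g2 = 1 (all other parameters equal to 1):

```python
import math

def equilibrium(g1, g2):
    # positive roots (A.6) and (A.5) of the two Sylvester resultants
    x1 = (g1*g2 + g1**2 + math.sqrt((g1*g2 + g1**2)**2
                                    + 4*(g1 + g2)*g1**2)) / (2*(g1 + g2))
    x2 = (g1*g2 + g2**2 + math.sqrt((g1*g2 + g2**2)**2
                                    + 4*(g1 + g2)*g2**2)) / (2*(g1 + g2))
    return x1, x2

def residuals(x1, x2, g1, g2):
    # system (A.1) with all remaining parameters equal to 1
    p1 = -x1**2 + (1 + g1)*x1 - (1 + x2)*x1 + g1*x2 + g1
    p2 = -x2**2 + (1 + g2)*x2 - (1 + x1)*x2 + g2*x1 + g2
    return p1, p2

x1, x2 = equilibrium(2.0, 1.0)
p1, p2 = residuals(x1, x2, 2.0, 1.0)
```

The residuals vanish to machine precision, and x1 > x2 illustrates the observation below that g1 > g2 yields [X1]∗ > [X2]∗.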


Now, observe that if g1 > g2 then [X1]∗ > [X2]∗, and if g2 > g1 then [X2]∗ > [X1]∗. For example, assume g1 = 2g2 > 0, that is, the ratio of g1 to g2 is 2 : 1. Then the equilibrium point is

    ( ( 2g2² + 4g2² + √((2g2² + 4g2²)² + 4(2g2 + g2)·4g2²) ) / ( 2(2g2 + g2) ),  ( 2g2² + g2² + √((2g2² + g2²)² + 4(2g2 + g2)g2²) ) / ( 2(2g2 + g2) ) )

    = ( ( 6g2² + √(36g2⁴ + 48g2³) ) / ( 6g2 ),  ( 3g2² + √(9g2⁴ + 12g2³) ) / ( 6g2 ) )

    = ( ( 6g2 + 2√(9g2² + 12g2) ) / 6,  ( 3g2 + √(9g2² + 12g2) ) / 6 ).

Clearly, [X1]∗ = (6g2 + 2√(9g2² + 12g2))/6 > [X2]∗ = (3g2 + √(9g2² + 12g2))/6.

In addition, if g1 = g2 = g > 0, then [X1]∗ = [X2]∗ = (g + √(g² + 2g))/2.

On the other hand, if g1 and g2 are both zero, then, by Theorem (6.7), the only equilibrium point is (0, 0).

A.1.2 Illustration 2

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary β1 = β2 = β, arbitrary g1, and g2 = 0. The Sylvester matrix with [X1] as variable is as follows:

    | −1       β + g1 − 1 − [X2]       g1([X2] + 1)            |
    | −[X2]    −[X2]² + (β − 1)[X2]    0                       |    (A.7)
    | 0        −[X2]                   −[X2]² + (β − 1)[X2]    |

It follows that

    res(P1, P2; X1) = βg1[X2]².    (A.8)


Then the only root of res(P1, P2; X1) = 0 is

    [X2] = 0.    (A.9)

Substituting [X2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

    P1([X1], 0) = −[X1]² + (β + g1)[X1] − [X1] + g1 = 0    (A.10)
    P2([X1], 0) = 0.

Thus,

    [X1] = ( −(β + g1 − 1) ± √((β + g1 − 1)² + 4g1) ) / (−2) = ( (β + g1 − 1) ∓ √((β + g1 − 1)² + 4g1) ) / 2.    (A.11)

Suppose g1 > 0. Since β + g1 − 1 < √((β + g1 − 1)² + 4g1), we have the equilibrium point ([X1]∗, [X2]∗) equal to

    ( ( (β + g1 − 1) + √((β + g1 − 1)² + 4g1) ) / 2,  0 ).

Therefore, we have exactly one equilibrium point, where [X1]∗ > [X2]∗ when g1 > 0. If g1 = 0 and β > 1, then we have two equilibrium points: (0, 0) and (β − 1, 0). If g1 = 0 and β ≤ 1, then the only equilibrium point is (0, 0).

A.1.3 Illustration 3

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary K1 = K2 = K, arbitrary g1, and g2 = 0. The Sylvester matrix with [X1] as variable is as follows:

    | −1       1 + g1 − K − [X2]       g1([X2] + K)            |
    | −[X2]    −[X2]² + (1 − K)[X2]    0                       |    (A.12)
    | 0        −[X2]                   −[X2]² + (1 − K)[X2]    |

It follows that

    res(P1, P2; X1) = g1[X2]².    (A.13)

Then the only root of res(P1, P2; X1) = 0 is

    [X2] = 0.    (A.14)

Substituting [X2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

    P1([X1], 0) = −[X1]² + (1 + g1)[X1] − K[X1] + g1K = 0    (A.15)
    P2([X1], 0) = 0.

Thus,

    [X1] = ( −(1 + g1 − K) ± √((1 + g1 − K)² + 4g1K) ) / (−2) = ( (1 + g1 − K) ∓ √((1 + g1 − K)² + 4g1K) ) / 2.    (A.16)

Suppose g1 > 0. Since 1 + g1 − K < √((1 + g1 − K)² + 4g1K), we have the equilibrium point ([X1]∗, [X2]∗) equal to

    ( ( (1 + g1 − K) + √((1 + g1 − K)² + 4g1K) ) / 2,  0 ).

Therefore, we have exactly one equilibrium point, where [X1]∗ > [X2]∗ when g1 > 0. If g1 = 0 and K < 1, then we have two equilibrium points: (0, 0) and (1 − K, 0). If g1 = 0 and K ≥ 1, then the only equilibrium point is (0, 0).

A.1.4 Illustration 4

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary ρ1 = ρ2 = ρ, arbitrary g1, and g2 = 0. Assume ρ > 0. The Sylvester matrix with [X1] as variable is as follows:

    | −ρ        1 + g1 − ρ − ρ[X2]      g1([X2] + 1)             |
    | −ρ[X2]    −ρ[X2]² + (1 − ρ)[X2]   0                        |    (A.17)
    | 0         −ρ[X2]                  −ρ[X2]² + (1 − ρ)[X2]    |

It follows that

    res(P1, P2; X1) = g1ρ[X2]².    (A.18)

Then the only root of res(P1, P2; X1) = 0 is

    [X2] = 0.    (A.19)

Substituting [X2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

    P1([X1], 0) = −ρ[X1]² + (1 + g1)[X1] − ρ[X1] + g1 = 0    (A.20)
    P2([X1], 0) = 0.

Thus,

    [X1] = ( −(1 + g1 − ρ) ± √((1 + g1 − ρ)² + 4ρg1) ) / (−2ρ) = ( (1 + g1 − ρ) ∓ √((1 + g1 − ρ)² + 4ρg1) ) / (2ρ).    (A.21)

Suppose g1 > 0. Since 1 + g1 − ρ < √((1 + g1 − ρ)² + 4ρg1), we have the equilibrium point ([X1]∗, [X2]∗) equal to

    ( ( (1 + g1 − ρ) + √((1 + g1 − ρ)² + 4ρg1) ) / (2ρ),  0 ).

Therefore, we have exactly one equilibrium point, where [X1]∗ > [X2]∗ when g1 > 0. If g1 = 0 and ρ < 1, then we have two equilibrium points: (0, 0) and ((1 − ρ)/ρ, 0). If g1 = 0 and ρ ≥ 1, then the only equilibrium point is (0, 0).

A.2 Assume n = 2, ci = 2

A.2.1 Illustration 1

Consider ci = 2 and cij = 1 for all i and j. The system of polynomial equations is as follows:

    P1([X1], [X2]) = −ρ1[X1]³ + (β1 + g1)[X1]² − (K1 + γ12[X2])(ρ1[X1]) + g1γ12[X2] + g1K1 = 0
    P2([X1], [X2]) = −ρ2[X2]³ + (β2 + g2)[X2]² − (K2 + γ21[X1])(ρ2[X2]) + g2γ21[X1] + g2K2 = 0    (A.22)


By Theorem (6.6), the number of complex solutions to the polynomial system (A.22) is at most 9. The corresponding Sylvester matrix of P1 and P2 with [X1] as variable is

    | a11  a12  a13  a14 |
    | a21  a22  0    0   |
    | 0    a21  a22  0   |    (A.23)
    | 0    0    a21  a22 |

where a11 = −ρ1, a12 = β1 + g1, a13 = −K1ρ1 − γ12ρ1[X2], a14 = g1γ12[X2] + g1K1, a21 = −γ21ρ2[X2] + g2γ21 and a22 = −ρ2[X2]³ + (β2 + g2)[X2]² − K2ρ2[X2] + g2K2.

The Sylvester resultant res(P1, P2; X1) is a polynomial in [X2] of degree at most 9. By the Fundamental Theorem of Algebra, res(P1, P2; X1) = 0 has at most 9 complex solutions, which is consistent with Theorem (6.6). It is difficult and computationally expensive to find the exact solutions to res(P1, P2; X1) = 0 in terms of the arbitrary parameters, so we investigate specific cases where we assign values to some parameters.

Suppose all parameters in the system (A.22) are equal to 1, except for arbitrary g1 and arbitrary g2. The Sylvester matrix is as follows:

    | a11  a12  a13  a14 |
    | a21  a22  0    0   |
    | 0    a21  a22  0   |    (A.24)
    | 0    0    a21  a22 |

where a11 = −1, a12 = 1 + g1, a13 = −1 − [X2], a14 = g1([X2] + 1), a21 = g2 − [X2] and a22 = −[X2]³ + (1 + g2)[X2]² − [X2] + g2. The Sylvester resultant res(P1, P2; X1) is a polynomial in [X2] of degree at most 9, so by the Fundamental Theorem of Algebra, res(P1, P2; X1) = 0 has at most 9 complex solutions. But it is still difficult and computationally expensive to find the exact solutions to res(P1, P2; X1) = 0 in terms of the arbitrary g1 and g2.


If we add another assumption g1 = 2g2 (thus, we only have one arbitrary parameter), the Sylvester matrix is as follows:

    | a11  a12  a13  a14 |
    | a21  a22  0    0   |
    | 0    a21  a22  0   |    (A.25)
    | 0    0    a21  a22 |

where a11 = −1, a12 = 1 + 2g2, a13 = −1 − [X2], a14 = 2g2([X2] + 1), a21 = g2 − [X2] and a22 = −[X2]³ + (1 + g2)[X2]² − [X2] + g2. The Sylvester resultant res(P1, P2; X1) of the above matrix is a polynomial in [X2] of degree at most 9, so by the Fundamental Theorem of Algebra, res(P1, P2; X1) = 0 has at most 9 complex solutions. Notice that we only know the upper bound on the number of equilibrium points and not their exact values. Even with only one arbitrary parameter, it is still difficult and computationally expensive to find the exact solutions to res(P1, P2; X1) = 0. Hence, we do not continue finding the exact values of all the equilibrium points using the Sylvester resultant method for systems more complicated than a system with n = 2, ci = 2, cij = 1 and at least one arbitrary parameter. Notice that the above Sylvester matrix is only 4 × 4, which suggests that solving res(P1, P2; X1) = 0 for a larger Sylvester matrix with at least one arbitrary parameter may be even more difficult. Nevertheless, in some instances where we do not have any arbitrary parameter, solving res(P1, P2; X1) = 0 is easy. For example, if we further assume that g1 = 2g2 with g2 = 1, then res(P1, P2; X1) = 0 has only one real nonnegative solution: [X2]∗ ≈ 1.3143.
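The value [X2]∗ ≈ 1.3143 can be reproduced without computing the resultant symbolically: P2 in system (A.22) is linear in [X1], so [X1] can be eliminated by hand and the remaining one-variable equation bisected. A sketch in Python (the bracket [1.25, 1.35] was chosen by inspecting signs):

```python
def eliminated_P1(x2, g1=2.0, g2=1.0):
    # system (A.22) with ci = 2, cij = 1 and all other parameters 1:
    # P2 is linear in [X1], so solve it for [X1] and substitute into P1
    x1 = (x2**3 - (1 + g2)*x2**2 + x2 - g2) / (g2 - x2)
    return -x1**3 + (1 + g1)*x1**2 - (1 + x2)*x1 + g1*x2 + g1

# bisection on a bracket that encloses the positive root
lo, hi = 1.25, 1.35
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if eliminated_P1(lo) * eliminated_P1(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)   # [X2]* for the g1 = 2g2, g2 = 1 case
```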

A.2.2 Illustration 2

If ci = cij = 2, according to Theorem (6.6), the upper bound of the number of equilibrium points is 9. Consider that all parameters are equal to 1 except for ci = cij = 2 and gi = 0, i, j = 1, 2. The only equilibrium point is the origin.

A.2.3 Illustration 3

Consider that all parameters are equal to 1 except for ci = cij = 2 (i, j = 1, 2) and g2 = 0. The only equilibrium point is ([X1 ]∗ ≈ 1.7549, [X2 ]∗ = 0).

A.2.4 Illustration 4

Consider that all parameters are equal to 1 except for ci = cij = 2, gi = 0 and βi = 3, i, j = 1, 2. There are seven equilibrium points (the values below are approximate):

• ([X1]∗ = 2.618, [X2]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 2.618),
• ([X1]∗ = 0.38197, [X2]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 0.38197),
• ([X1]∗ = 0.5, [X2]∗ = 0.5),
• ([X1]∗ = 1, [X2]∗ = 1), and
• ([X1]∗ = 0, [X2]∗ = 0).
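These seven points can be confirmed directly: on the axes each nonzero component solves x(−x² + 3x − 1) = 0, with positive roots (3 ± √5)/2 ≈ 2.618 and 0.38197. A Python check of all seven listed points against the polynomial system:

```python
import math

def P(x, y, beta=3.0):
    # one component of the n = 2, ci = cij = 2, gi = 0 polynomial system:
    # -x^3 + beta*x^2 - (1 + y^2)*x
    return -x**3 + beta * x**2 - (1 + y**2) * x

hi_root = (3 + math.sqrt(5)) / 2    # ≈ 2.618
lo_root = (3 - math.sqrt(5)) / 2    # ≈ 0.38197
points = [(hi_root, 0.0), (0.0, hi_root), (lo_root, 0.0), (0.0, lo_root),
          (0.5, 0.5), (1.0, 1.0), (0.0, 0.0)]
residual = max(max(abs(P(x, y)), abs(P(y, x))) for x, y in points)
```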

A.2.5 Illustration 5

Consider that all parameters are equal to 1 except for ci = cij = 2, g2 = 0, βi = 20 and ρi = 10, i, j = 1, 2. There are three equilibrium points (the values below are approximate):

• ([X1]∗ = 1.4633, [X2]∗ = 0),
• ([X1]∗ = 0.5, [X2]∗ = 0), and
• ([X1]∗ = 0.13668, [X2]∗ = 0).

A.3 Assume n = 3

A.3.1 Illustration 1

If ci = cij = 1, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound of the number of equilibrium points is 8.


Consider that all parameters are equal to 1 except for g2 = 0 and g3 = 0. The only equilibrium point is ([X1 ]∗ ≈ 1.618, [X2 ]∗ = 0, [X3 ]∗ = 0).
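The value 1.618 is the golden ratio: with [X2] = [X3] = 0, all parameters equal to 1 and g1 = 1, the first equation of the model reduces to [X1]/(1 + [X1]) + 1 − [X1] = 0, i.e. −[X1]² + [X1] + 1 = 0. A one-line verification in Python:

```python
import math

# with [X2] = [X3] = 0, all parameters 1 and g1 = 1, the first equation
# reduces to x/(1 + x) + 1 - x = 0, whose positive root is the golden ratio
x = (1 + math.sqrt(5)) / 2
residual = x / (1 + x) + 1 - x
```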

A.3.2 Illustration 2

If ci = cij = 2, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound of the number of equilibrium points is 27. Consider that all parameters are equal to 1 except for ci = cij = 2 (i, j = 1, 2, 3), g2 = 0 and g3 = 0. The only equilibrium point is ([X1 ]∗ ≈ 1.7549, [X2 ]∗ = 0, [X3 ]∗ = 0).

A.3.3 Illustration 3

If ci = cij = 3, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound of the number of equilibrium points is 64. Consider that all parameters are equal to 1 except for ci = cij = 3 and gi = 0, i, j = 1, 2, 3. The only equilibrium point is the origin.

A.3.4 Illustration 4

Consider that all parameters are equal to 1 except for ci = cij = 3 (i, j = 1, 2, 3), g2 = 0 and g3 = 0. The only equilibrium point is ([X1 ]∗ ≈ 1.8668, [X2 ]∗ = 0, [X3 ]∗ = 0).

A.3.5 Illustration 5

Consider that all parameters are equal to 1 except for ci = cij = 3, βi = 3 and gi = 0, i, j = 1, 2, 3. There are ten equilibrium points (the values below are approximate):

• ([X1]∗ = 1.0097 × 10^−28 ≈ 0, [X2]∗ = 0.6527, [X3]∗ = 0),
• ([X1]∗ = 1.5510 × 10^−25 ≈ 0, [X2]∗ = 2.8794, [X3]∗ = 0),
• ([X1]∗ = 0.6527, [X2]∗ = 0, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0.6527),


• ([X1]∗ = 2.8794, [X2]∗ = 0, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 2.8794),
• ([X1]∗ = 1, [X2]∗ = 1, [X3]∗ = 0),
• ([X1]∗ = 1, [X2]∗ = 0, [X3]∗ = 1),
• ([X1]∗ = 0, [X2]∗ = 1, [X3]∗ = 1), and
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0).

A.3.6 Illustration 6

Consider that all parameters are equal to 1 except for ci = cij = 3, βi = 20, ρi = 10, g2 = 0 and g3 = 0, i, j = 1, 2, 3. There are seven equilibrium points (the values below are approximate):

• ([X1]∗ = 0.10103, [X2]∗ = 1.001, [X3]∗ = 0),
• ([X1]∗ = 0.10103, [X2]∗ = 0, [X3]∗ = 1.001),
• ([X1]∗ = 0.10039, [X2]∗ = 1.6173, [X3]∗ = 0),
• ([X1]∗ = 0.10039, [X2]∗ = 0, [X3]∗ = 1.6173),
• ([X1]∗ = 0.10213, [X2]∗ = 0, [X3]∗ = 0),
• ([X1]∗ = 0.83362, [X2]∗ = 0, [X3]∗ = 0), and
• ([X1]∗ = 1.8123, [X2]∗ = 0, [X3]∗ = 0).

A.3.7 Illustration 7

Consider that all parameters are equal to 1 except for ci = cij = 2, γij = γ, ρi = ρ and gi = 0, i, j = 1, 2, 3. Notice that this is the system used by MacArthur et al. in [113] (refer to system (3.8)). The nonlinear system (3.8) is of the form:

    [X1]²/(1 + [X1]² + γ[X2]² + γ[X3]²) − ρ[X1] = 0
    [X2]²/(1 + [X2]² + γ[X1]² + γ[X3]²) − ρ[X2] = 0    (A.26)
    [X3]²/(1 + [X3]² + γ[X1]² + γ[X2]²) − ρ[X3] = 0.


The corresponding polynomial system is

    P1([X1], [X2], [X3]) = [X1]² − ρ[X1] − ρ[X1]³ − γρ[X1][X2]² − γρ[X1][X3]² = 0
    P2([X1], [X2], [X3]) = [X2]² − ρ[X2] − ρ[X2]³ − γρ[X1]²[X2] − γρ[X2][X3]² = 0    (A.27)
    P3([X1], [X2], [X3]) = [X3]² − ρ[X3] − ρ[X3]³ − γρ[X1]²[X3] − γρ[X2]²[X3] = 0.

The Sylvester matrix associated to P1 and P2 with [X1] as variable is as follows:

    | a11  a12  a13  0    0   |
    | 0    a11  a12  a13  0   |
    | a31  0    a33  0    0   |    (A.28)
    | 0    a31  0    a33  0   |
    | 0    0    a31  0    a33 |

where a11 = −ρ, a12 = 1, a13 = −ρ − γρ[X2]² − γρ[X3]², a31 = −γρ[X2] and a33 = [X2]² − ρ[X2] − ρ[X2]³ − γρ[X2][X3]².

The Sylvester matrix associated to P1 and P3 with [X1] as variable has the same form:

    | a11  a12  a13  0    0   |
    | 0    a11  a12  a13  0   |
    | a31  0    a33  0    0   |    (A.29)
    | 0    a31  0    a33  0   |
    | 0    0    a31  0    a33 |

where a11 = −ρ, a12 = 1, a13 = −ρ − γρ[X2]² − γρ[X3]², a31 = −γρ[X3] and a33 = [X3]² − ρ[X3] − ρ[X3]³ − γρ[X2]²[X3].

The following are the Sylvester resultants (with [X1] as variable) associated to the polynomial system (A.27):


• res(P1, P2; [X1]) = −ρ[X2]³ (ρ − [X2] + ρ[X2]² + γρ[X3]²) (ρ − [X2] + ρ[X2]² + γρ[X2]² + γρ[X3]²) (γ − 2γρ² + γ²ρ² + ρ²[X2]² − ρ[X2] + ρ² − γ²ρ²[X2]² + γ³ρ²[X2]² − 2γ²ρ²[X3]² + γ³ρ²[X3]² + γ²ρ[X2] − γρ²[X2]² + γρ²[X3]²)

• res(P1, P3; [X1]) = −ρ[X3]³ (ρ − [X3] + ρ[X3]² + γρ[X2]²) (ρ − [X3] + ρ[X3]² + γρ[X3]² + γρ[X2]²) (γ − 2γρ² + γ²ρ² + ρ²[X3]² − ρ[X3] + ρ² − γ²ρ²[X3]² + γ³ρ²[X3]² − 2γ²ρ²[X2]² + γ³ρ²[X2]² + γ²ρ[X3] − γρ²[X3]² + γρ²[X2]²).

We investigate all possible combinations of the factors of res(P1, P2; [X1]) and res(P1, P3; [X1]) and their simultaneous nonnegative real zeros. For example, the factors −ρ[X2]³ in res(P1, P2; [X1]) and −ρ[X3]³ in res(P1, P3; [X1]) have a simultaneous nonnegative real zero, namely [X2]∗ = [X3]∗ = 0. However, it is also possible that a factor of res(P1, P2; [X1]) and a factor of res(P1, P3; [X1]) do not have a simultaneous zero.

Suppose the factors of res(P1, P2; [X1]) and res(P1, P3; [X1]) have a simultaneous nonnegative real zero. An interesting property of such a zero is that it satisfies one of the following characteristics:

1. [X2]∗ = [X3]∗ = 0;
2. [X2]∗ = 0 and [X3]∗ > [X2]∗;
3. [X3]∗ = 0 and [X2]∗ > [X3]∗;
4. [X2]∗ = [X3]∗ > 0;
5. [X2]∗ > [X3]∗ > 0; and
6. [X3]∗ > [X2]∗ > 0.

Since the structure of each equation in the nonlinear system (A.26) is similar, the above enumeration of characteristics of solutions also applies to the relationship between [X1] and [X2], as well as to the relationship between [X1] and [X3]. We can conclude that an equilibrium point satisfies one of the following characteristics (depending on the values of γ and ρ):


1. [X1]∗ = [X2]∗ = [X3]∗;
2. [X1]∗ > [X2]∗ = [X3]∗;
3. [X2]∗ > [X1]∗ = [X3]∗;
4. [X3]∗ > [X1]∗ = [X2]∗;
5. [X1]∗ < [X2]∗ = [X3]∗;
6. [X2]∗ < [X1]∗ = [X3]∗;
7. [X3]∗ < [X1]∗ = [X2]∗;
8. [X1]∗ > [X2]∗ > [X3]∗;
9. [X2]∗ > [X3]∗ > [X1]∗;
10. [X3]∗ > [X1]∗ > [X2]∗;
11. [X1]∗ > [X3]∗ > [X2]∗;
12. [X2]∗ > [X1]∗ > [X3]∗; and
13. [X3]∗ > [X2]∗ > [X1]∗.

Each characteristic may represent a cell that is tripotent (primed), bipotent, unipotent or in a terminal state. However, it is also possible to have the origin as the equilibrium point, which is a trivial case. Our observation regarding these possible characteristics of equilibrium points is also consistent with the findings of MacArthur et al. [113]. By Theorem (6.6), our system may have at most 27 equilibrium points.

A.3.8 Illustration 8

Consider the system in Illustration 7 (A.3.7), where γ = 1/8 and ρ = 1/21. This system has

equilibrium points satisfying all the characteristics enumerated in Illustration 7 (A.3.7). Moreover, this system has 27 equilibrium points, which is equal to the upper bound on the number of possible equilibrium points. The equilibrium points are (the values below are approximate):

• ([X1]∗ = 1.3235 × 10^−23 ≈ 0, [X2]∗ = 20.952, [X3]∗ = 0),
• ([X1]∗ = 18.619, [X2]∗ = 0, [X3]∗ = 18.619),
• ([X1]∗ = 18.619, [X2]∗ = 18.619, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 18.619, [X3]∗ = 18.619),
• ([X1]∗ = 20.832, [X2]∗ = 3.1685, [X3]∗ = 3.1685),
• ([X1]∗ = 3.1685, [X2]∗ = 20.832, [X3]∗ = 3.1685),
• ([X1]∗ = 3.1685, [X2]∗ = 3.1685, [X3]∗ = 20.832),
• ([X1]∗ = 4.7755 × 10^−2, [X2]∗ = 4.7755 × 10^−2, [X3]∗ = 4.7755 × 10^−2),
• ([X1]∗ = 20.894, [X2]∗ = 3.1056, [X3]∗ = 0),
• ([X1]∗ = 20.894, [X2]∗ = 0, [X3]∗ = 3.1056),
• ([X1]∗ = 3.1056, [X2]∗ = 20.894, [X3]∗ = 0),
• ([X1]∗ = 3.1056, [X2]∗ = 0, [X3]∗ = 20.894),
• ([X1]∗ = 0, [X2]∗ = 20.894, [X3]∗ = 3.1056),
• ([X1]∗ = 0, [X2]∗ = 3.1056, [X3]∗ = 20.894),
• ([X1]∗ = 4.7741 × 10^−2, [X2]∗ = 0, [X3]∗ = 4.7741 × 10^−2),
• ([X1]∗ = 4.7741 × 10^−2, [X2]∗ = 4.7741 × 10^−2, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 4.7741 × 10^−2, [X3]∗ = 4.7741 × 10^−2),
• ([X1]∗ = 16.752, [X2]∗ = 16.752, [X3]∗ = 16.752),
• ([X1]∗ = 20.952, [X2]∗ = 0, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 20.952),
• ([X1]∗ = 2.0033 × 10^−25 ≈ 0, [X2]∗ = 4.7728 × 10^−2, [X3]∗ = 0),
• ([X1]∗ = 4.7728 × 10^−2, [X2]∗ = 0, [X3]∗ = 0),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 4.7728 × 10^−2),
• ([X1]∗ = 18.432, [X2]∗ = 18.432, [X3]∗ = 5.5685),


• ([X1]∗ = 18.432, [X2]∗ = 5.5685, [X3]∗ = 18.432),
• ([X1]∗ = 5.5685, [X2]∗ = 18.432, [X3]∗ = 18.432), and
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0).

The equilibrium points where [X3]∗ = 0 are the equilibrium points of the system when n = 2. Also, the equilibrium points where [X2]∗ = [X3]∗ = 0 are the equilibrium points of the system when n = 1.

Remark: Given fixed values of [Xj], j ≠ i, the univariate Hill curve Y = Hi([Xi]) and the line Y = ρi[Xi] have the following possible numbers of intersections (see Figures (5.11) to (5.14)):

• two intersections, where one is stable;
• one intersection, which is stable; and
• three intersections, where two are stable.

Notice that the number of stable intersections is greater than or equal to the number of unstable ones. However, when we collect all possible equilibrium points of the ODE model (5.1) (where the values of [Xj], j ≠ i, are not fixed), the number of stable equilibrium points is not necessarily greater than or equal to the number of unstable equilibrium points. For example, in the previous illustration (A.3.8) (Illustration 8), we have 27 equilibrium points of which only 8 are stable. The following is the list of stable and unstable equilibrium points:

• ([X1]∗ = 1.3235 × 10^−23 ≈ 0, [X2]∗ = 20.952, [X3]∗ = 0) — stable (terminal state),
• ([X1]∗ = 18.619, [X2]∗ = 0, [X3]∗ = 18.619) — stable (bipotent),
• ([X1]∗ = 18.619, [X2]∗ = 18.619, [X3]∗ = 0) — stable (bipotent),
• ([X1]∗ = 0, [X2]∗ = 18.619, [X3]∗ = 18.619) — stable (bipotent),
• ([X1]∗ = 20.832, [X2]∗ = 3.1685, [X3]∗ = 3.1685) — unstable,
• ([X1]∗ = 3.1685, [X2]∗ = 20.832, [X3]∗ = 3.1685) — unstable,
• ([X1]∗ = 3.1685, [X2]∗ = 3.1685, [X3]∗ = 20.832) — unstable,
• ([X1]∗ = 4.7755 × 10^−2, [X2]∗ = 4.7755 × 10^−2, [X3]∗ = 4.7755 × 10^−2) — unstable,


• ([X1]∗ = 20.894, [X2]∗ = 3.1056, [X3]∗ = 0) — unstable,
• ([X1]∗ = 20.894, [X2]∗ = 0, [X3]∗ = 3.1056) — unstable,
• ([X1]∗ = 3.1056, [X2]∗ = 20.894, [X3]∗ = 0) — unstable,
• ([X1]∗ = 3.1056, [X2]∗ = 0, [X3]∗ = 20.894) — unstable,
• ([X1]∗ = 0, [X2]∗ = 20.894, [X3]∗ = 3.1056) — unstable,
• ([X1]∗ = 0, [X2]∗ = 3.1056, [X3]∗ = 20.894) — unstable,
• ([X1]∗ = 4.7741 × 10^−2, [X2]∗ = 0, [X3]∗ = 4.7741 × 10^−2) — unstable,
• ([X1]∗ = 4.7741 × 10^−2, [X2]∗ = 4.7741 × 10^−2, [X3]∗ = 0) — unstable,
• ([X1]∗ = 0, [X2]∗ = 4.7741 × 10^−2, [X3]∗ = 4.7741 × 10^−2) — unstable,
• ([X1]∗ = 16.752, [X2]∗ = 16.752, [X3]∗ = 16.752) — stable (tripotent),
• ([X1]∗ = 20.952, [X2]∗ = 0, [X3]∗ = 0) — stable (terminal state),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 20.952) — stable (terminal state),
• ([X1]∗ = 2.0033 × 10^−25 ≈ 0, [X2]∗ = 4.7728 × 10^−2, [X3]∗ = 0) — unstable,
• ([X1]∗ = 4.7728 × 10^−2, [X2]∗ = 0, [X3]∗ = 0) — unstable,
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 4.7728 × 10^−2) — unstable,
• ([X1]∗ = 18.432, [X2]∗ = 18.432, [X3]∗ = 5.5685) — unstable,
• ([X1]∗ = 18.432, [X2]∗ = 5.5685, [X3]∗ = 18.432) — unstable,
• ([X1]∗ = 5.5685, [X2]∗ = 18.432, [X3]∗ = 18.432) — unstable, and
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0) — stable (trivial case).

In the following section, we illustrate how to use ad hoc geometric analysis in determining the stability of equilibrium points.
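Tabulated values like these can be double-checked by substituting them back into system (A.26) with γ = 1/8 and ρ = 1/21; a sketch in Python (the listed coordinates are rounded to about five significant figures, hence the loose tolerance):

```python
def F(x, gamma=1.0/8, rho=1.0/21):
    # right-hand sides of system (A.26) at a point x = ([X1], [X2], [X3])
    s = [xi**2 for xi in x]
    total = sum(s)
    return [s[i] / (1 + s[i] + gamma * (total - s[i])) - rho * x[i]
            for i in range(3)]

sample = [(16.752, 16.752, 16.752),   # stable, tripotent
          (20.952, 0.0, 0.0),         # stable, terminal state
          (18.619, 0.0, 18.619)]      # stable, bipotent
worst = max(abs(r) for p in sample for r in F(p))
```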

A.4 Ad hoc geometric analysis

Consider that all parameters are equal to 1 except for ci = cij = 3, βi = 20, ρi = 10, g2 = 0 and g3 = 0, i, j = 1, 2, 3 (see Illustration 6 (A.3.6)). One of the equilibrium points is ([X1 ]∗ = 0.10103, [X2 ]∗ = 1.001, [X3 ]∗ = 0). To determine the stability of this equilibrium point we use ad hoc geometric analysis. First, we look at the intersection of Y = H1 ([X1 ]) + 1 and Y = 10[X1 ] with [X2 ] = 1.001 and


[X3] = 0. Then we determine whether [X1]∗ = 0.10103 is stable. As shown in Figure (A.2), we conclude that [X1]∗ = 0.10103 is stable. Now, we test whether [X2]∗ = 1.001 is stable by looking at the intersection of Y = H2([X2]) and Y = 10[X2] with [X1] = 0.10103 and [X3] = 0. Also, we test whether [X3]∗ = 0 is stable by looking at the intersection of Y = H3([X3]) and Y = 10[X3] with [X1] = 0.10103 and [X2] = 1.001. As shown in Figures (A.3) and (A.4), we conclude that [X2]∗ = 1.001 is unstable and [X3]∗ = 0 is stable. Because one component ([X2]∗ = 1.001) is unstable, the equilibrium point ([X1]∗ = 0.10103, [X2]∗ = 1.001, [X3]∗ = 0) is unstable.
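In one dimension, the graphical test amounts to checking the sign of Hi([Xi]) + gi − 10[Xi] on either side of the fixed component: a crossing from + to − indicates a stable component, a crossing from − to + an unstable one. A Python version of the two decisive checks of this example (parameter values as in Illustration 6 (A.3.6)):

```python
# components of the equilibrium (0.10103, 1.001, 0) of Illustration 6:
# ci = cij = 3, beta_i = 20, rho_i = 10, g1 = 1, g2 = g3 = 0

def f1(x1, x2=1.001, x3=0.0):
    H1 = 20 * x1**3 / (1 + x1**3 + x2**3 + x3**3)
    return H1 + 1 - 10 * x1          # H1 + g1 - rho*[X1]

def f2(x2, x1=0.10103, x3=0.0):
    H2 = 20 * x2**3 / (1 + x1**3 + x2**3 + x3**3)
    return H2 - 10 * x2              # g2 = 0

d = 1e-3
x1_stable = f1(0.10103 - d) > 0 > f1(0.10103 + d)    # + to -: stable
x2_unstable = f2(1.001 - d) < 0 < f2(1.001 + d)      # - to +: unstable
```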

Figure A.2: The intersection of Y = H1 ([X1 ]) + 1 and Y = 10[X1 ] with [X2 ] = 1.001 and [X3 ] = 0.


Figure A.3: The intersection of Y = H2 ([X2 ]) and Y = 10[X2 ] with [X1 ] = 0.10103 and [X3 ] = 0.

Figure A.4: The intersection of Y = H3 ([X3 ]) and Y = 10[X3 ] with [X1 ] = 0.10103 and [X2 ] = 1.001.

A.5 Phase portrait with infinitely many equilibrium points

For example, the phase portrait of the system

    d[X1]/dt = 5[X1]/(1 + [X1] + [X2]) − [X1]
    d[X2]/dt = 5[X2]/(1 + [X1] + [X2]) − [X2]    (A.30)

is shown in Figure (A.5). The phase portrait was graphed using the Java applet at http://www.scottsarra.org/applets/dirField2/dirField2.html [145].

Figure A.5: A sample phase portrait of the system with infinitely many non-isolated equilibrium points.

Appendix B

Multivariate Fixed Point Algorithm

We have written a Scilab [150] program for finding approximate values of stable equilibrium points. This program employs the fixed point iteration method. However, in doing numerical computations we need to be cautious about possible round-off errors.

Algorithm 3 Multivariate fixed point algorithm (1st Part)

    //input
    n=input("Input n=")
    for i=1:n
        disp(i, "FOR EQUATION")
        coeffbeta(i)=input("beta=")
        K(i)=input("K=")
        rho(i)=input("positive rho=")
        g(i)=input("g=")
        disp(i, "exponent of x")
        c(i)=input("ci=")
        for m=1:n
            if m~=i then
                disp(m, "coefficient of x")
                gam(i,m)=input("gamma=")
                disp(m, "exponent of x")
                z(i,m)=input("cij=")
            else
                gam(i,m)=1
            end
        end
    end
    for i=1:n
        disp(i, "initial value for x")
        x(i,1)=input("=")
    end


Algorithm 4 Multivariate fixed point algorithm (2nd Part)

    //fixed point iteration process
    tol=input("tolerance error=")
    j=1
    y(1)=1000
    while (y(j)>tol)&(j ...

Appendix C

More on Bifurcation of Parameters: Illustrations

C.1 Adding gi > 0, Illustration 1

Consider the following system:

    d[X1]/dt = 3[X1]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X1]
    d[X2]/dt = 3[X2]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X2]    (C.1)
    d[X3]/dt = 3[X3]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X3].

This system has the following equilibrium points (the values below are approximate):

• ([X1]∗ = 0, [X2]∗ = 0.6527, [X3]∗ = 0) — unstable,
• ([X1]∗ = 0, [X2]∗ = 2.8794, [X3]∗ = 0) — stable (terminal state),
• ([X1]∗ = 0.6527, [X2]∗ = 0, [X3]∗ = 0) — unstable,
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0.6527) — unstable,
• ([X1]∗ = 2.8794, [X2]∗ = 0, [X3]∗ = 0) — stable (terminal state),
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 2.8794) — stable (terminal state),
• ([X1]∗ = 1, [X2]∗ = 1, [X3]∗ = 0) — stable (bipotent),
• ([X1]∗ = 1, [X2]∗ = 0, [X3]∗ = 1) — stable (bipotent),
• ([X1]∗ = 0, [X2]∗ = 1, [X3]∗ = 1) — stable (bipotent), and
• ([X1]∗ = 0, [X2]∗ = 0, [X3]∗ = 0) — stable (trivial case).
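The on-axis values solve x³ − 3x² + 1 = 0 (obtained from 3x³/(1 + x³) = x with x ≠ 0), whose positive roots are approximately 0.6527 and 2.8794, and the mixed points can be checked by direct substitution. A sketch in Python:

```python
def G(x):
    # right-hand side of system (C.1) at x = ([X1], [X2], [X3])
    d = 1 + sum(xi**3 for xi in x)
    return [3 * xi**3 / d - xi for xi in x]

# a sample of the listed equilibrium points
sample = [(2.8794, 0.0, 0.0), (0.6527, 0.0, 0.0),
          (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (0.0, 0.0, 0.0)]
worst = max(abs(r) for p in sample for r in G(p))
```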


Now, let us add g1 = 1, that is, consider the following system:

    d[X1]/dt = 3[X1]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X1] + 1
    d[X2]/dt = 3[X2]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X2]    (C.2)
    d[X3]/dt = 3[X3]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X3].

This system has the following sole equilibrium point: ([X1]∗ ≈ 3.9522, [X2]∗ = 0, [X3]∗ = 0), which is stable.

If we also add g2 = 1, that is, consider the following system:

    d[X1]/dt = 3[X1]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X1] + 1
    d[X2]/dt = 3[X2]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X2] + 1    (C.3)
    d[X3]/dt = 3[X3]³/(1 + [X1]³ + [X2]³ + [X3]³) − [X3],

then we have the following equilibrium points (the following are approximate values):
• ([X1]∗ = 2.4507, [X2]∗ = 2.4507, [X3]∗ = 0) — stable (bipotent),
• ([X1]∗ = 1.0581, [X2]∗ = 3.8929, [X3]∗ = 0) — stable (unipotent), and
• ([X1]∗ = 3.8929, [X2]∗ = 1.0581, [X3]∗ = 0) — stable (unipotent).

C.2 Adding gi > 0, Illustration 2

Consider the following system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{5[X_1]^2}{1+[X_1]^2+[X_2]^2} - 2[X_1]\\
\frac{d[X_2]}{dt} &= \frac{5[X_2]^2}{1+[X_1]^2+[X_2]^2} - 2[X_2].
\end{aligned}
\tag{C.4}
\]

We can add g1 > 0 to the system if we want [X1]∗ to sufficiently dominate [X2]∗. We can do an ad hoc geometric analysis to determine whether the value of g1 is enough to drive the system to a sole equilibrium point where [X1]∗ > [X2]∗.


We first graph

\[
Y = \frac{5[X_1]^2}{1+[X_1]^2} + g_1 \qquad\text{and}\qquad Y = 2[X_1],
\]

then we determine whether the two curves have a sole intersection. If they have more than one intersection, we increase the value of g1. We find the value of the sole intersection and denote it by [X1](∗). We then substitute [X1](∗) for [X1] and determine whether

\[
Y = \frac{5[X_2]^2}{1+\left([X_1]^{(*)}\right)^2+[X_2]^2} \qquad\text{and}\qquad Y = 2[X_2]
\]

intersect at only one point. If there is more than one intersection, we increase g1 and adjust [X1](∗). If there is only one intersection, then [X2]∗ = 0.

Figure C.1: Determining the adequate g1 > 0 that would give rise to a sole equilibrium point where [X1 ]∗ > [X2 ]∗ .

The sole stable equilibrium point of the system with adequate gi > 0 is the computed ([X1]∗ = [X1](∗), [X2]∗ = 0). See Figure (C.1) for illustration. If we suppose g1 = 1, the sole equilibrium point is ([X1]∗ ≈ 2.698, [X2]∗ = 0).
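The graphical procedure can also be carried out numerically. The sketch below (plain-Python bisection, our own step of the method, not code from the thesis) locates the sole intersection [X1](∗) for g1 = 1, and then confirms that the second pair of curves meets only at [X2] = 0: eliminating the trivial root, their intersections satisfy 2[X2]^2 − 5[X2] + 2(1 + ([X1](∗))^2) = 0, whose discriminant is negative.

```python
def f(x, g1=1.0):
    # X1-nullcline condition with [X2] = 0: 5x^2/(1+x^2) + g1 - 2x
    return 5 * x**2 / (1 + x**2) + g1 - 2 * x

def bisect(lo, hi, tol=1e-10):
    # Simple bisection; assumes f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_star = bisect(2.0, 4.0)                 # sole intersection [X1](*)
disc = 25 - 16 * (1 + x_star**2)          # discriminant of 2y^2 - 5y + 2(1 + x*^2)
```

Here `disc < 0` certifies that Y = 5[X2]^2/(1 + ([X1](∗))^2 + [X2]^2) and Y = 2[X2] cross only at [X2] = 0, matching the conclusion [X2]∗ = 0.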

C.3 gi as a function of time

Making gi a function of time (i.e., gi changes through time) means that we are adding another equation and state variable to our system of ODEs. We can think of gi as an additional node in our GRN, which we call the injection node. In this thesis, we consider two types of functions: a linear function with negative slope and an exponential function with negative exponent. Both represent a gi that degrades through time. Suppose that, given a specific initial condition, the solution to our system tends to an equilibrium point with [Xi]∗ = 0. If we want our solution to escape [Xi]∗ = 0, one strategy is to add gi > 0; the idea of adding a sufficient amount of gi > 0 is to make the solution of our system escape that equilibrium point. However, it is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we instead consider introducing a gi that degrades through time. We numerically investigate how adding a degrading amount of gi affects cell fate. Note that this strategy is only applicable to systems with multiple stable equilibrium points, where convergence of trajectories is sensitive to initial conditions.

C.3.1 As a linear function

Suppose

\[
g_i(t) = -\upsilon_i t + g_i(0) \qquad\text{or}\qquad \frac{dg_i}{dt} = -\upsilon_i,
\tag{C.5}
\]

where the degradation rate υi is positive. We set gi(t) = 0 whenever this expression becomes negative.


Here is an example where, without g1, [X1]∗ = 0 (see Figure (C.2)); but by adding g1(t) = −t + 5, the solution converges to a new equilibrium point with [X1]∗ ≈ 2.98745 (see Figure (C.3)). Figure (C.2) shows the numerical solution to the system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{3[X_1]^5}{1+[X_1]^5+[X_2]^5} - [X_1]\\
\frac{d[X_2]}{dt} &= \frac{3[X_2]^5}{1+[X_1]^5+[X_2]^5} - [X_2].
\end{aligned}
\tag{C.6}
\]

Figure (C.3), on the other hand, shows the numerical solution to the system (with g1(t))

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{3[X_1]^5}{1+[X_1]^5+[X_2]^5} - [X_1] + g_1\\
\frac{d[X_2]}{dt} &= \frac{3[X_2]^5}{1+[X_1]^5+[X_2]^5} - [X_2]\\
\frac{dg_1}{dt} &= -1, \qquad g_1 \ge 0.
\end{aligned}
\tag{C.7}
\]

The initial values are [X1]0 = 0.5, [X2]0 = 1, and g1(0) = 5. Figure (C.4) shows that [X1]∗ shifted from a lower stable component to a higher stable component.

Figure C.2: An example where without g1 , [X1 ]∗ = 0.


Figure C.3: [X1 ]∗ escaped the zero state because of the introduction of g1 which is a decaying linear function.

Figure C.4: An example of shifting from a lower stable component to a higher stable component through adding gi (t) = −υi t + gi (0).
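The behavior in Figures (C.2) to (C.4) can be reproduced with a forward-Euler integration of system (C.7). The Python sketch below is our own minimal reimplementation; the step size and end time are our choices, not values from the thesis.

```python
def simulate(x1=0.5, x2=1.0, g1=5.0, dt=0.01, t_end=50.0):
    """Forward Euler for system (C.7); g1 decays linearly and is clipped at 0."""
    for _ in range(int(t_end / dt)):
        denom = 1 + x1**5 + x2**5
        dx1 = 3 * x1**5 / denom - x1 + g1
        dx2 = 3 * x2**5 / denom - x2
        x1 += dt * dx1
        x2 += dt * dx2
        g1 = max(g1 - dt, 0.0)   # dg1/dt = -1, with g1 >= 0
    return x1, x2

x1_final, x2_final = simulate()
```

By the end of the run, [X1] has settled near the higher stable value 2.98745 (a root of x^5 − 3x^4 + 1 = 0) while [X2] has decayed toward zero, in agreement with Figure (C.3).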

C.3.2 As an exponential function

Suppose

\[
g_i(t) = g_i(0)\,e^{-\upsilon_i t} \qquad\text{or}\qquad \frac{dg_i}{dt} = -\upsilon_i g_i,
\tag{C.8}
\]

where the degradation rate υi is positive; again we set gi(t) = 0 whenever the expression becomes negative. Consider the system used in the previous subsection (C.3.1). The time series in Figure (C.5) has the same behavior as that in Figure (C.3). Figure (C.5) shows the numerical solution to the system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{3[X_1]^5}{1+[X_1]^5+[X_2]^5} - [X_1] + g_1\\
\frac{d[X_2]}{dt} &= \frac{3[X_2]^5}{1+[X_1]^5+[X_2]^5} - [X_2]\\
\frac{dg_1}{dt} &= -g_1, \qquad g_1 \ge 0,
\end{aligned}
\tag{C.9}
\]

where the initial values are [X1]0 = 0.5, [X2]0 = 1, and g1(0) = 5.

Figure C.5: [X1 ]∗ escaped the zero state because of the introduction of g1 which is a decaying exponential function.

C.4 The effect of γij

The parameter γij can also affect the equilibrium points. Consider the system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{[X_1]^2}{1+[X_1]^2+\gamma[X_2]^2+\gamma[X_3]^2} - \frac{1}{2}[X_1]\\
\frac{d[X_2]}{dt} &= \frac{[X_2]^2}{1+\gamma[X_1]^2+[X_2]^2+\gamma[X_3]^2} - \frac{1}{2}[X_2]\\
\frac{d[X_3]}{dt} &= \frac{[X_3]^2}{1+\gamma[X_1]^2+\gamma[X_2]^2+[X_3]^2} - \frac{1}{2}[X_3].
\end{aligned}
\tag{C.10}
\]

Figure (C.6) shows a case where varying the value of γ affects the value of the equilibrium points. Varying γ may induce cell differentiation (also refer to [113]).

Figure C.6: Parameter plot of γ, an example.

C.5 Bifurcation diagrams

The following are some of the bifurcation types that we have found during our simulations. We use the software Oscill8 [40] to draw the bifurcation diagrams. Note that the bifurcation diagrams below depend on the given parameter values and initial conditions.


In a one-parameter bifurcation diagram, the horizontal axis represents the parameter and the vertical axis represents [Xi]. Thick lines denote stable equilibrium points, and thin lines denote unstable equilibrium points. In a two-parameter bifurcation diagram, the horizontal axis represents the first parameter and the vertical axis the second. The lines in such a diagram indicate where bifurcations occur.

C.5.1 Illustration 1

Suppose n = 1. Bifurcation may arise due to the sigmoidal shape of the Hill curve (see Figure (C.7) for illustration). The system initially has a sole stable equilibrium point; varying a parameter may then change the number of equilibrium points, e.g., to three equilibrium points, one unstable and two stable. The bifurcation happens at the transition, when there are two equilibrium points, one stable and one unstable.

Figure C.7: Intersections of Y = ρi [Xi ] and Y = Hi ([Xi ]) + gi where c > 1 and g = 0; and an event of bifurcation.

Figures (C.8) to (C.10) illustrate the occurrence of saddle node bifurcations given the equation

\[
\frac{d[X_1]}{dt} = \frac{\beta_1 [X_1]^c}{K_1 + [X_1]^c} - \rho_1 [X_1] + g_1
\tag{C.11}
\]

with initial condition [X1]0 = 5 as well as parameter values c = 2, β1 = 3, K1 = 1, ρ1 = 1 and g1 = 0. Moreover, Figures (C.11) to (C.14) illustrate that cusp bifurcations may also arise.
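The saddle node seen when β1 is varied can be checked by hand: with c = 2, K1 = 1, ρ1 = 1 and g1 = 0, the nonzero equilibria of (C.11) satisfy [X1]^2 − β1[X1] + 1 = 0, which has real roots only when β1 ≥ 2, so the fold sits at β1 = 2. A short Python count of the nonzero equilibria on either side of the fold:

```python
import math

def nonzero_equilibria(beta):
    """Positive roots of x^2 - beta*x + 1 = 0 (equilibria of (C.11) besides x = 0)."""
    disc = beta**2 - 4
    if disc < 0:
        return []                       # only the trivial equilibrium x = 0 exists
    r = math.sqrt(disc)
    return [(beta - r) / 2, (beta + r) / 2]

below = nonzero_equilibria(1.5)   # before the fold: no nonzero equilibria
above = nonzero_equilibria(3.0)   # after the fold: two additional equilibria
```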

Figure C.8: Saddle node bifurcation; β1 is varied.


Figure C.9: Saddle node bifurcation; K1 is varied.

Figure C.10: Saddle node bifurcation; ρ1 is varied.


Figure C.11: Cusp bifurcation; β1 and g1 are varied.

Figure C.12: Cusp bifurcation; K1 and c are varied.


Figure C.13: Cusp bifurcation; K1 and g1 are varied.

Figure C.14: Cusp bifurcation; ρ1 and g1 are varied.

C.5.2 Illustration 2

Suppose we have the system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{\beta_1 [X_1]^c}{K_1 + [X_1]^c + \gamma_{12}[X_2]^c} - \rho_1 [X_1] + g_1\\
\frac{d[X_2]}{dt} &= \frac{\beta_2 [X_2]^c}{K_2 + [X_2]^c + \gamma_{21}[X_1]^c} - \rho_2 [X_2] + g_2
\end{aligned}
\tag{C.12}
\]

with initial conditions [X1]0 = 5 and [X2]0 = 5 as well as parameter values c = 2, β1 = 3, β2 = 3, γ12 = 1, γ21 = 1, K1 = 1, K2 = 1, ρ1 = 1, ρ2 = 1, g1 = 1 and g2 = 0. A saddle node bifurcation arises when we vary the parameter ρ2; the bifurcation diagram is shown in Figure (C.15). A saddle node bifurcation also arises when we vary the parameter g2; the bifurcation diagram is shown in Figure (C.16).

Figure C.15: Saddle node bifurcation; ρ2 is varied.


Figure C.16: Saddle node bifurcation; g2 is varied.
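Stable equilibria of (C.12) can be approximated with the fixed point iteration of Appendix B. The Python sketch below mirrors Algorithm 4 for the parameter values above (with g2 = 0, so both denominators coincide since γ12 = γ21 = 1); from the initial guess (5, 5) the iteration converges to an [X1]-dominant equilibrium.

```python
# Fixed point iteration x <- (H(x) + g)/rho for system (C.12),
# with c = 2, beta = 3, gamma = 1, K = 1, rho = 1, g = (1, 0).
def hill_map(x, beta=3.0, K=1.0, rho=1.0, c=2, gamma=1.0, g=(1.0, 0.0)):
    """One fixed point update, componentwise."""
    denom = K + x[0]**c + gamma * x[1]**c   # same denominator for both equations here
    h0 = beta * x[0]**c / denom
    h1 = beta * x[1]**c / denom
    return ((h0 + g[0]) / rho, (h1 + g[1]) / rho)

def fixed_point(x0, tol=1e-10, maxiter=1000):
    x = x0
    for _ in range(maxiter):
        x_new = hill_map(x)
        change = max(abs(a - b) for a, b in zip(x_new, x))
        x = x_new
        if change < tol:
            break
    return x

x_star = fixed_point((5.0, 5.0))
```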

C.5.3 Illustration 3

Suppose we have the system

\[
\begin{aligned}
\frac{d[X_1]}{dt} &= \frac{\beta_1 [X_1]^c}{K_1 + [X_1]^c + \gamma[X_2]^c + \gamma[X_3]^c + \gamma[X_4]^c} - \rho_1 [X_1] + g_1\\
\frac{d[X_2]}{dt} &= \frac{\beta_2 [X_2]^c}{K_2 + [X_2]^c + \gamma[X_1]^c + \gamma[X_3]^c + \gamma[X_4]^c} - \rho_2 [X_2] + g_2\\
\frac{d[X_3]}{dt} &= \frac{\beta_3 [X_3]^c}{K_3 + [X_3]^c + \gamma[X_1]^c + \gamma[X_2]^c + \gamma[X_4]^c} - \rho_3 [X_3] + g_3\\
\frac{d[X_4]}{dt} &= \frac{\beta_4 [X_4]^c}{K_4 + [X_4]^c + \gamma[X_1]^c + \gamma[X_2]^c + \gamma[X_3]^c} - \rho_4 [X_4] + g_4
\end{aligned}
\tag{C.13}
\]

with initial condition X0 = (5, 5, 5, 5) as well as parameter values c = 2, βi = 3 (i = 1, 2, 3, 4), γ = 1, Ki = 1 (i = 1, 2, 3, 4), ρi = 1 (i = 1, 2, 3, 4), g1 = 1, g2 = 0, g3 = 0 and g4 = 0. Figures (C.17) and (C.18) illustrate the possible occurrence of saddle node bifurcations.


Figure C.17: Saddle node bifurcation; ρ2 is varied.

Figure C.18: Saddle node bifurcation; g2 is varied.


Appendix D

Scilab Program for Euler-Maruyama

Algorithm 5 Euler-Maruyama with Euler method (1st Part)
//input parameters
n=input("Input n=")
for i=1:n
  disp(i, "FOR EQUATION")
  coeffbeta(i)=input("beta=")
  K(i)=input("K=")
  rho(i)=input("rho=")
  g(i)=input("g=")
  disp(i, "exponent of x")
  c(i)=input("ci=")
  for m=1:n
    if m~=i then
      disp(m, "coefficient of x")
      gam(i,m)=input("gamma=")
      disp(m, "exponent of x")
      z(i,m)=input("cij=")
    else
      gam(i,m)=1
    end
  end
  sig(i)=input("sigma=")
end
for i=1:n
  disp(i, "initial value for x")
  y(i,1)=input("=")
  x(i,1)=y(i,1)
end


Algorithm 6 Euler-Maruyama with Euler method (2nd Part)
//Euler-Maruyama process
tend=input("end time of simulation t_end=")
hstep=input("step size=")
j=1
while (j*hstep<tend)
  for i=1:n
    denomx=K(i)+x(i,j)^c(i)
    denomy=K(i)+y(i,j)^c(i)
    for m=1:n
      if m~=i then
        denomx=denomx+gam(i,m)*x(m,j)^z(i,m)
        denomy=denomy+gam(i,m)*y(m,j)^z(i,m)
      end
    end
    //deterministic Euler step
    x(i,j+1)=x(i,j)+hstep*(coeffbeta(i)*x(i,j)^c(i)/denomx-rho(i)*x(i,j)+g(i))
    //Euler-Maruyama step; additive noise sig(i)*dW assumed
    y(i,j+1)=y(i,j)+hstep*(coeffbeta(i)*y(i,j)^c(i)/denomy-rho(i)*y(i,j)+g(i))+sig(i)*sqrt(hstep)*grand(1,1,"nor",0,1)
  end
  j=j+1
end
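For a one-dimensional instance, the Euler-Maruyama step can be sketched in Python as follows. The noise here is assumed additive (σ dW on top of the Hill-type drift), and the parameter values are our illustrative choices; with σ = 0 the scheme reduces to the deterministic Euler method.

```python
import math
import random

def euler_maruyama(x0=5.0, beta=3.0, K=1.0, rho=1.0, c=2, g=0.0,
                   sigma=0.0, dt=0.01, t_end=50.0, seed=0):
    """Euler-Maruyama for dX = (beta*X^c/(K+X^c) - rho*X + g) dt + sigma dW."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(t_end / dt)):
        drift = beta * x**c / (K + x**c) - rho * x + g
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

x_det = euler_maruyama(sigma=0.0)     # deterministic limit (plain Euler)
x_noisy = euler_maruyama(sigma=0.1)   # one stochastic sample path
```

In the deterministic limit the trajectory settles at the larger positive root of x^2 − 3x + 1 = 0, i.e. (3 + √5)/2 ≈ 2.618, which gives a quick sanity check of the scheme.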