A Single-Machine Distributed-Scheduling Methodology Using Cooperative-Interaction Via Coupling-Agents

In-Jae Jeong*, V. Jorge Leon‡
Department of Industrial Engineering*‡, Department of Engineering Technology‡
Texas A&M University‡, Hanyang University*

Revised: June 28, 2002; November 4, 2003; March 31, 2004

Abstract

This paper considers a single-machine scheduling problem where the decision authorities and information are distributed among multiple sub-production systems. The sub-production systems share the single machine and must cooperate with each other to achieve the global goal of minimizing a linear function of the job completion times, e.g., total weighted completion time. It is assumed that neither the sub-production systems nor the shared machine have complete information about the entire system. The associated scheduling problems are formulated as 0/1 integer programs. The solution approach is based on Lagrangian relaxation techniques modified to require less global information. Specifically, there is no need for a global upper bound, or for a single master problem with a complete view of all the coupling constraints. The proposed methodology exhibits promising performance when experimentally compared to Lagrangian relaxation with the subgradient method, with the added benefit that it can be applied in situations with more restrictive information sharing.

1. Introduction

Consider a facility consisting of three large production areas where each area has its own products and personnel. The areas are defined based on customers, demand levels, and processing similarities. As a result, each production area possesses distinct production configuration and planning characteristics. For instance, two areas have low product mix and large, steady demands, and are configured as continuous lines. The third area is characterized by a large product mix and large demand variability, and is configured somewhat like a job shop. Not surprisingly, the scheduling methodologies vary significantly between the first two areas and the third: machine scheduling for the first two areas is simple and almost directly related to the master production schedule, whereas scheduling for the third area is more complicated and requires specialized scheduling. Due to these distinctions, each area has its own operators, supervisors, and schedulers. Although they operate as almost independent organizations, all three areas share testers and specialized processing equipment.

A scheduling consultant was hired to resolve the many coordination problems generated by the resource sharing. After carefully sizing the scheduling problem at hand, the consultant concluded that a computerized system under the responsibility of one central scheduler could easily solve the overall scheduling problem in a few seconds. The system was developed and tested with real data, and the experimental results suggested significant improvements over the current manual methods. Despite significant effort, however, the system has not been successfully implemented to date. The main implementation barriers include:

- The more general scheduling methodology requires more detailed shop-floor data previously unnecessary for the scheduling of two of the areas. The additional data collection and reporting results in impractical database maintenance requirements for the daily operation of the scheduling software.

- The new software promotes a centralized organizational structure that makes sense for optimization but is not in concert with the distributed nature of the business.

This and other similar experiences motivated the authors to contemplate the following questions: Is close-to-optimal coordination on the shared resources achievable while maintaining as much of the autonomy of each area as possible? Can this coordination be achieved with minimal information sharing requirements?


This paper's emphasis is on situations where complete information sharing is impractical, ruling out the application of global optimization techniques. Cooperative Interaction via Coupling Agents (CICA), developed by Jeong and Leon (2002a), is used here as a formal methodology for decision making in organizationally distributed systems with very limited information sharing. The paper is organized as follows. Section 2 describes the problem and introduces the necessary notation. Section 3 surveys related work. Section 4 presents the solution approach and the proposed algorithm. The experimental results are shown in Section 5 and the conclusions appear in Section 6.

2. The Single Machine Distributed Scheduling Problem

Consider m production areas that share a machine. A machine is "shared" if two or more production areas require their jobs to be processed on that machine. Let U be the set of all jobs; a subset of jobs $U_i$ is associated with production area $i$, where $|U_i| = n_i$. The job subsets are disjoint, so a total of $N = |U| = \sum_{i=1}^{m} n_i$ jobs need processing on the shared machine. The distributed problem under consideration consists of scheduling the N jobs on the shared machine to minimize a linear function of completion times. The problem is termed 'distributed' because it possesses: (1) Distributed decision authorities: the scheduler in organization $i$ can only prescribe the schedule for its $n_i$ jobs. (2) Distributed information: the system information is dispersed among the distinct production areas sharing the machine. What specific information is private and what is exchanged is discussed later in the paper. For clarity of the presentation and the derivations, total weighted completion time is considered as the objective function. As will be seen later, the methodology can be adapted to any given linear function of the jobs' completion times. Using notation similar to that proposed in Pritsker et al. (1969), let T denote the planning horizon and the 0/1 decision variable $x_{jt}$ be defined as


$$x_{jt} = \begin{cases} 1, & \text{if job } j \text{ has started by time } t, \\ 0, & \text{otherwise,} \end{cases} \qquad t = 1,\ldots,T.$$

Associated with job $j$ are its processing time $p_j$, start time $s_j = \sum_{t=1}^{T}(1 - x_{jt})$, and completion time $C_j = s_j + p_j = \sum_{t=1}^{T}(1 - x_{jt}) + p_j$. Let $w_j$ denote the weight per unit of completion time of job $j$; the total weighted completion time can then be written as

$$\sum_{i=1}^{m} \sum_{j \in U_i} w_j \left[ \sum_{t=1}^{T} (1 - x_{jt}) + p_j \right].$$

The machine capacity constraints can be expressed as

$$\sum_{j \in U} (x_{jt} - x_{j,t-p_j}) \le 1, \qquad t = 1,\ldots,T.$$

If $x_{jt} - x_{j,t-p_j} = 1$, then job $j$ is in process at time $t$; if $x_{jt} - x_{j,t-p_j} = 0$, then the job either has not yet started or has already been completed. Therefore the total number of jobs in process during any particular time slot must not exceed one. Additionally, precedence constraints between jobs in an area are also considered, although they are not necessary for the validity of the proposed model. If job $q$ must precede job $r$, with $q, r \in U_i$, the precedence relation is denoted as $(q, r)$. The precedence constraints can be expressed as

$$\sum_{t=1}^{T} (x_{qt} - x_{rt}) \ge p_q \qquad \forall (q,r): q, r \in U_i, \; i = 1,\ldots,m.$$

Our motivation to

include local precedence constraints is to be able to use the proposed model as a building block for solving more complex production configurations, e.g., flowshops and jobshops. In practice, these precedence constraints may also become useful in systems with re-entrant flows; e.g., two jobs with a precedence relationship may represent pass 1 and pass 2 of the same lot. Given the above definitions, it is now possible to specify what is meant here by partial information sharing. Sub-production system $i$ only has direct access to the following information associated with jobs $j \in U_i$:

• Partial objective function terms $\sum_{j \in U_i} w_j \left[ \sum_{t=1}^{T}(1 - x_{jt}) + p_j \right]$, but not the terms associated with $j \in U_k$ for $k \ne i$.

• Capacity constraints $\sum_{j \in U_i} (x_{jt} - x_{j,t-p_j}) \le 1 \; \forall t$; i.e., not those involving $j \in U_k$ for $k \ne i$. Hence, the schedule prescribed by sub-production system $i$ ensures no capacity violations among its own jobs, but it may conflict with the schedules prescribed by systems $k \ne i$.

• Precedence constraints between its own jobs, $\sum_{t=1}^{T}(x_{qt} - x_{rt}) \ge p_q \; \forall (q,r): q, r \in U_i$, but not the precedence constraints between jobs in $U_k$, $k \ne i$.

As expected, the capacity constraints pose the main coordination challenge in this scenario, because without them the problem would be separable and the optimal solution could be obtained by solving each sub-problem independently. This restricted level of information sharing may appear unrealistic in practice and has been specified in the interest of simplicity in the presentation of the proposed methodology. More realistic cases allowing more information sharing can also be handled by the methodology, e.g., when the precedence relations or all the capacity constraints are public information.
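To make these definitions concrete, the following minimal Python sketch evaluates the start times, completion times and total weighted completion time implied by a 0/1 start-indicator matrix. The function and variable names are illustrative assumptions, not part of the model.

```python
import numpy as np

def weighted_completion_time(x, p, w):
    """Evaluate total weighted completion time for a 0/1 matrix x[j, t],
    where x[j, t] = 1 if job j has started by period t (t = 0,...,T-1).
    p and w are arrays of processing times and completion-time weights.
    Illustrative sketch of the Section 2 definitions, not the authors' code."""
    s = (1 - x).sum(axis=1)   # start time: s_j = sum_t (1 - x_jt)
    C = s + p                 # completion time: C_j = s_j + p_j
    return float((w * C).sum())

# Example: two jobs with p = (2, 1), w = (3, 1) on a horizon of T = 4 periods.
x = np.array([[1, 1, 1, 1],   # job 0 starts at time 0
              [0, 0, 1, 1]])  # job 1 starts at time 2
print(weighted_completion_time(x, np.array([2, 1]), np.array([3, 1])))  # 3*2 + 1*3 = 9
```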

It is conjectured that only better performance can be expected as information sharing increases; that case is not treated in this paper. The conjecture is demonstrated experimentally in the realm of linear programming and integer programming in Jeong and Leon (2003, 2002b). The single-machine problem with precedence constraints is a well-known NP-hard problem (Pinedo, 1995). Without the precedence constraints the problem can be solved in polynomial time when there is complete information sharing. However, under partial information sharing the single-machine problem is challenging even for the simple case without precedence constraints.

3. Literature review

When considering potential solution methodologies it is important to distinguish among (i) distribution of information, (ii) distribution of computation, and (iii) distribution of decision authority. Information is distributed if there is only partial information sharing in the system; e.g., if the information resides in two distant databases but can be accessed by anyone in the system, then the information is not distributed.


Computation is distributed when the problem can be decomposed into sub-problems that facilitate the solution of the overall problem. For instance, it is possible to perform distributed computation under non-distributed information; however, it is not possible to solve problems with distributed information using non-distributed computation. Finally, decision authorities are distributed if no single decision maker has the authority to specify all the decision variables. Of interest to this paper are methodologies that can deal with distributed information and distributed decision authority. From this perspective, the main distributed modeling approaches were surveyed to formulate and solve the problem at hand. The literature is broadly classified into mathematical decomposition methods, auction/bidding algorithms, and constraint-directed heuristic methods.

Lagrangian Relaxation originated as a decomposition method for solving centralized problems that are linked by coupling constraints with special characteristics (Nemhauser and Wolsey, 1989). However, the interaction between the master and sub-problems can serve as a model for distributed systems. For scheduling problems, many researchers have studied Lagrangian Relaxation with the subgradient method as a distributed algorithm (Roundy et al., 1991; Luh and Hoitomt, 1993; Gou et al., 1994). However, it has limited applicability in general distributed applications because a single master problem must have access to the detailed information contained in every coupling constraint. Further, the master problem requires global information about the system, such as a globally feasible solution.

In another branch of research, job-resource allocation problems are represented using economic models. Considering time slots on machines as objects and job-based agents as persons, job-resource allocation operates like an auction where a person bids on objects (Bertsekas, 1990; Bertsekas and Tsitsiklis, 1989). One of the most important works in market-like distributed problem solving was initiated by Davis and Smith (1983), who proposed a "contract-net" protocol for the dynamic allocation of tasks to processors. This protocol has been used to model dynamic scheduling in manufacturing environments (Shaw, 1988; Duffie and Prabhu, 1994; Lin and Solberg, 1992). In static scheduling, the auction mechanism is more complex than in dynamic scheduling. Kutanoglu and Wu (1999) proposed a price-directed combinatorial auction mechanism. Other auction algorithms


include an auction algorithm for production planning by Ertogral and Wu (2000), an optimal auction algorithm for the classical assignment problem by Bertsekas (1990), and the distributed resource allocation approach of Wellman (1993).

Constraint-directed Heuristic Search (CHS) has been very popular among the Computer Science and Artificial Intelligence research communities for solving machine scheduling problems. CHS provides a meaningful framework in which the distributed system is composed of job-based and resource-based agents from the outset, rather than taking a centralized system and decomposing it into subsystems to develop a distributed algorithm (Fox and Smith, 1984; Smith et al., 1986, 1990; Sycara et al., 1991; Zweben and Fox, 1994). However, it can hardly be combined with an optimization process; it only generates a feasible schedule that satisfies various constraints and rules (Pinedo, 1995).

4. A Solution Methodology using CICA

CICA (Jeong and Leon, 2002a) carefully combines and extends concepts used in existing approaches into a methodology that is better suited for situations where information sharing is restricted. CICA has yielded promising results when applied to simple continuous nonlinear optimization (Jeong and Leon, 2002a), linear programming (Jeong and Leon, 2003) and integer programming (Jeong and Leon, 2002b). CICA is based on Lagrangian Relaxation (LR) with modifications to reduce the amount of global information required for its application. CICA and LR differ in the way coupling constraints can be relaxed, in the sharing of information associated with the coupling constraints, and in the updating of the Lagrangian multipliers. CICA also uses multi-agent negotiation and interaction paradigms, as in auction-bidding and artificial intelligence approaches. Specifically, coupling constraints can be grouped into arbitrary subsets, each associated with a decision agent termed a Coupling Agent; this is a major distinction from LR, where all coupling constraints must be accessible to a single decision entity. In turn, the resulting separable sub-problems are associated with quasi-autonomous agents termed Coupled Autonomous Organizations.


In this paper, the machine capacity constraints of the shared machine are associated with a single coupling agent, and each sub-production system is associated with a coupled organization. Sub-production systems interact with the shared machine by exchanging information triplets during an iterative process aimed at solving the scheduling problem. Figure 1 shows the decision structure and information flows of the proposed methodology for the single-machine problem described in the previous section. Associated with the shared machine there is a problem (MP), and associated with each sub-production system there is a problem (SP$_i$). At iteration $n$, the information from (MP) to (SP$_i$) is a triplet formed by the start times $s_{yj}^{n-1}$ determined by the shared machine and the weights $\mu_j^{n-1}$ and $\pi_j^{n-1}$, which represent the cost of shifting job $j$ away from $s_{yj}^{n-1}$. Similarly, the information from (SP$_i$) to (MP) consists of the start times $s_{xj}^n$ and the weights $\alpha_j^n$ and $\beta_j^n$ determined by the sub-production system.

[Figure 1. CICA model for a single-machine problem: the shared machine problem (MP) sends the triplet $(s_{yj}^{n-1}, \mu_j^{n-1}, \pi_j^{n-1})$, $j \in U_i$, to each sub-production system problem (SP$_1$, ..., SP$_i$, ..., SP$_m$), and each (SP$_i$) returns $(s_{xi}^n, \alpha_i^n, \beta_i^n)$.]

Sections 4.1 and 4.2 describe in detail the decision problems associated with the shared machine and the sub-production systems. Sections 4.3 and 4.4 describe the derivation of the penalty weights passed between the shared machine and the sub-production systems.
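The triplets exchanged in Figure 1 can be pictured as two simple message types. The following dataclasses are an illustrative sketch of that interface; the names are assumptions, not from the paper.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class MachineToSubSystem:
    """Triplet sent from the shared machine (MP) to sub-production system i
    at iteration n: proposed start times and E/T penalty weights per job."""
    s_y: Dict[int, int]      # s_yj^{n-1}: start time of job j proposed by MP
    mu: Dict[int, float]     # mu_j^{n-1}: penalty per unit of earliness
    pi: Dict[int, float]     # pi_j^{n-1}: penalty per unit of tardiness

@dataclass
class SubSystemToMachine:
    """Triplet returned by sub-production system i to the shared machine."""
    s_x: Dict[int, int]      # s_xj^n: start time of job j proposed by SP_i
    alpha: Dict[int, float]  # alpha_j^n: earliness weight from local data
    beta: Dict[int, float]   # beta_j^n: tardiness weight from local data
```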


4.1 Sub-production system problem (SP$_i$)

Let $s_{yj}^{n-1}$ be the starting time of job $j$ as specified by the shared machine after the $(n-1)$th iteration. The subscript $y$ indicates that the solution is associated with the shared machine, and $\mu_j^{n-1}$ and $\pi_j^{n-1}$ are the earliness/tardiness (E/T) weights incurred by starting job $j$ one unit early/late relative to $s_{yj}^{n-1}$. The problem of sub-production system $i$ (SP$_i$) at the $n$th iteration can be formulated as follows:

(SP$_i$):
$$\text{Min} \quad \sum_{j \in U_i} w_j\left[\sum_{t=1}^{T}(1 - x_{jt}^n) + p_j\right] + \sum_{j \in U_i}\left[\mu_j^{n-1}\sum_{t=1}^{s_{yj}^{n-1}} x_{jt}^n + \pi_j^{n-1}\sum_{t=s_{yj}^{n-1}+1}^{T}(1 - x_{jt}^n)\right] \tag{1}$$

s.t.
$$x_{j,t+1}^n \ge x_{jt}^n \qquad \forall j \in U_i, \; t = 1,\ldots,T-1 \tag{2}$$
$$\sum_{t=1}^{T-p_j+1} x_{jt}^n \ge 1 \qquad \forall j \in U_i \tag{3}$$
$$\sum_{t=1}^{T}(x_{qt}^n - x_{rt}^n) \ge p_q \qquad \forall (q,r): q, r \in U_i \tag{4}$$
$$\sum_{j \in U_i}(x_{jt}^n - x_{j,t-p_j}^n) \le 1 \qquad \forall t \tag{5}$$
$$x_{jt}^n \in \{0,1\} \qquad \forall j \in U_i, \; \forall t \tag{6}$$

Note that $\sum_{t=1}^{s_{yj}^{n-1}} x_{jt}^n$ is the earliness and $\sum_{t=s_{yj}^{n-1}+1}^{T}(1 - x_{jt}^n)$ is the tardiness of job $j$ with due date equal to the starting time $s_{yj}^{n-1}$. Therefore the objective of sub-production system $i$ in (1) is to find a compromise between its local optimal solution and the shared machine's solution $s_{yj}^{n-1}$. Constraint (2) implies that once started, a job remains started in all subsequent time periods. Constraint (3) forces each job to be finished within the planning horizon. The precedence constraints are shown in (4); the left-hand side of (4) represents the interval between the starting times of jobs $q$ and $r$, which must be at least $p_q$ in order to finish job $q$ before job $r$ starts processing. Constraint (5), the machine conflict constraint, implies that at most one job can be processed on the machine in any time period.


Expression (1) is derived starting from the case where complete information is available to each sub-production system. Given complete information, the objective function can be written as

$$\text{Min} \quad \sum_{j \in U_i} w_j\left[\sum_{t=1}^{T}(1 - x_{jt}) + p_j\right] - \sum_{t}\theta_t\left[1 - \sum_{j \in U_i}(x_{jt} - x_{j,t-p_j})\right] \tag{7}$$

where $\theta_t$ is the positive Lagrangian multiplier of the machine conflict constraint at time $t$. However, in this paper, passing complete information about the machine conflict constraints is not allowed, since that information is assumed to be private to the shared machine. Thus, the shared machine passes only an information vector composed of its solution $s_{yj}^{n-1}$ and the E/T weights $\mu_j^{n-1}$ and $\pi_j^{n-1}$. The E/T weights reflect penalties associated with violating the machine conflict constraints; by sending this information to the sub-production systems, the shared machine provides them with partial information about the global system. Section 4.4 explains how to find these E/T weights. A more compact form of (1) can be written as

$$\text{Min} \quad \sum_{j \in U_i}\left[(\mu_j^{n-1} - w_j)\sum_{t=1}^{s_{yj}^{n-1}} x_{jt}^n + (\pi_j^{n-1} + w_j)\sum_{t=s_{yj}^{n-1}+1}^{T}(1 - x_{jt}^n)\right] \tag{8}$$

The sub-production system's problem is equivalent to a 1|prec|weighted E/T problem when $\mu_j^{n-1} \ge w_j$, $\forall j \in U_i$, which is an NP-hard problem (Garey et al., 1988).
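The authors solve SP$_i$ and MP exactly with CPLEX (see Section 5). As an illustration only, the following sketch assembles SP$_i$ in its compact form (8) with the open-source PuLP modeling library; the library choice and all function and parameter names are assumptions, not the authors' implementation, and constraint (6) is enforced through the binary variable category.

```python
import pulp

def build_sp_model(jobs, T, p, w, mu_prev, pi_prev, s_y_prev, precedences):
    """Sketch of sub-production problem (SP_i) with compact objective (8).
    jobs: list of job ids local to system i; p, w, mu_prev, pi_prev, s_y_prev:
    dicts keyed by job id; precedences: list of local (q, r) pairs."""
    prob = pulp.LpProblem("SP_i", pulp.LpMinimize)
    x = pulp.LpVariable.dicts(
        "x", [(j, t) for j in jobs for t in range(1, T + 1)], cat="Binary")

    # Objective (8): (mu - w) on early periods, (pi + w) on late periods.
    prob += pulp.lpSum(
        (mu_prev[j] - w[j]) * pulp.lpSum(x[j, t] for t in range(1, s_y_prev[j] + 1))
        + (pi_prev[j] + w[j]) * pulp.lpSum(1 - x[j, t] for t in range(s_y_prev[j] + 1, T + 1))
        for j in jobs)

    for j in jobs:
        # (2) once started, a job stays started
        for t in range(1, T):
            prob += x[j, t + 1] >= x[j, t]
        # (3) every job must finish within the horizon
        prob += pulp.lpSum(x[j, t] for t in range(1, T - p[j] + 2)) >= 1

    # (4) local precedence constraints
    for (q, r) in precedences:
        prob += pulp.lpSum(x[q, t] - x[r, t] for t in range(1, T + 1)) >= p[q]

    # (5) at most one local job in process per period
    for t in range(1, T + 1):
        prob += pulp.lpSum(
            x[j, t] - (x[j, t - p[j]] if t - p[j] >= 1 else 0) for j in jobs) <= 1
    return prob, x
```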

4.2 Shared machine problem (MP)

Each sub-production system sends an information vector $(s_{xj}^n, \alpha_j^n, \beta_j^n)$ to the shared machine, where $s_{xj}^n$ is the starting time of job $j$ determined by solving SP$_i$ at the $n$th iteration. The subscript $x$ indicates that the solution is associated with the sub-production systems, and $\alpha_j^n$ and $\beta_j^n$ are the E/T weights incurred by starting job $j$ one time unit early/late relative to $s_{xj}^n$. To distinguish the solutions proposed by the shared machine and by the sub-production systems, $y_{jt}^n$ is introduced instead of $x_{jt}^n$ to denote the shared machine's solution. The shared-machine problem can be formulated as follows:


(MP):
$$\text{Min} \quad \sum_{j=1}^{N}\left[\alpha_j^n \sum_{t=1}^{s_{xj}^n} y_{jt}^n + \beta_j^n \sum_{t=s_{xj}^n+1}^{T}(1 - y_{jt}^n)\right] \tag{9}$$

s.t.
$$y_{j,t+1}^n \ge y_{jt}^n \qquad \forall j, \; t = 1,\ldots,T-1 \tag{10}$$
$$\sum_{t=1}^{T-p_j+1} y_{jt}^n \ge 1 \qquad \forall j \tag{11}$$
$$\sum_{j=1}^{N}(y_{jt}^n - y_{j,t-p_j}^n) \le 1 \qquad \forall t \tag{12}$$
$$y_{jt}^n \in \{0,1\} \qquad \forall j, \; \forall t \tag{13}$$

Note that $\sum_{t=1}^{s_{xj}^n} y_{jt}^n$ is the earliness and $\sum_{t=s_{xj}^n+1}^{T}(1 - y_{jt}^n)$ is the tardiness of job $j$ with due date equal to the starting time $s_{xj}^n$. Therefore, the objective of the shared machine problem is to find a compromise among the local solutions proposed by the sub-production systems. Constraints (10) and (11) are analogous to constraints (2) and (3). Constraint (12) is the machine conflict constraint over all jobs. Expression (9) is derived by considering the complete-information case:

$$\text{Min} \quad \sum_{j=1}^{N} w_j\left[\sum_{t=1}^{T}(1 - y_{jt}) + p_j\right] - \sum_{\forall(q,r)} \rho_{(q,r)}\left[\sum_{t=1}^{T} y_{qt} - \sum_{t=1}^{T} y_{rt} - p_q\right] \tag{14}$$

where $\rho_{(q,r)}$ is the positive Lagrangian multiplier of the precedence constraint for $(q,r)$. However, in this paper it is assumed that a sub-production system cannot disclose its local information (i.e., its local objective function combined with its precedence constraints) to the shared machine; hence the need to substitute this information with the triplet $(s_{xj}^n, \alpha_j^n, \beta_j^n)$. The calculation of these E/T weights is explained in Section 4.3.

The shared machine's problem is a traditional 1||weighted E/T problem, which is NP-hard (Garey et al., 1988) when $\alpha_j^n$ and $\beta_j^n$ are nonnegative; no nonnegativity restriction is imposed on the penalty weights in this paper.
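Analogously to the sketch for SP$_i$, a minimal evaluation of the MP objective (9) for a candidate 0/1 matrix can be written as follows; the data layout and names are illustrative assumptions.

```python
import numpy as np

def mp_objective(y, alpha, beta, s_x):
    """Evaluate the shared-machine objective (9) for a 0/1 matrix y[j, t]
    (1 if job j has started by period t, periods stored 0-indexed).
    alpha, beta are the E/T weights and s_x the start times received from
    the sub-production systems.  Illustrative sketch only."""
    total = 0.0
    n_jobs, T = y.shape
    for j in range(n_jobs):
        s = int(s_x[j])
        earliness = y[j, :s].sum()        # sum_{t <= s_xj} y_jt
        tardiness = (1 - y[j, s:]).sum()  # sum_{t > s_xj} (1 - y_jt)
        total += alpha[j] * earliness + beta[j] * tardiness
    return total
```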


4.3 Updating the Lagrangian multipliers for the sub-production systems

This section describes the derivation of the E/T weights for the sub-production system models and how to update the Lagrangian multipliers associated with the precedence constraints. Let $s_{xi}^n = (s_{x1}^n,\ldots,s_{xn_i}^n)$ be the vector of start times prescribed by sub-production system $i$ and $s_{yi}^n = (s_{y1}^n,\ldots,s_{yn_i}^n)$ be the vector of start times specified by the shared machine at the $n$th iteration. First, the local objective function combined with the relaxed local precedence constraints is rewritten as a function of the start times $s_{xi}^n$:

$$\sum_{j \in U_i} w_j\left[\sum_{t=1}^{T}(1 - x_{jt}^n) + p_j\right] - \sum_{(q,r) \in U_i} \rho_{(q,r)}^n\left[\sum_{t=1}^{T}(x_{qt}^n - x_{rt}^n) - p_q\right] = g_i(s_{xi}^n) = \sum_{j \in U_i} w_j\left(s_{xj}^n + p_j\right) - \sum_{(q,r) \in U_i} \rho_{(q,r)}^n\left(s_{xr}^n - s_{xq}^n - p_q\right). \tag{15}$$

E/T weights must be determined for three cases: (1) a job that is a predecessor, (2) a job that is a successor, and (3) a job without precedence constraints. For each case, the E/T weights are calculated as follows.

The E/T weights of a job $q$ that is a predecessor of a job $r$:

$$\alpha_q^n = g_i(s_{xi}^n \text{ with } s_{xq}^n - 1) - g_i(s_{xi}^n) = -w_q - \sum_{\forall j:(q,j)} \rho_{(q,j)}^n, \qquad \beta_q^n = g_i(s_{xi}^n \text{ with } s_{xq}^n + 1) - g_i(s_{xi}^n) = w_q + \sum_{\forall j:(q,j)} \rho_{(q,j)}^n. \tag{16}$$

The earliness penalty $\alpha_q^n$ represents the change in the Lagrangian function value (i.e., the local objective function (15)) caused by shifting $s_{xq}^n$ one unit to the left. In this case the local objective function decreases by $w_q$ because the completion time is reduced by one unit. In addition, the violation of the precedence constraint $(q,r)$ is reduced, since the predecessor job $q$ is completed one time unit earlier; this holds for every precedence constraint in which job $q$ is the predecessor, so the total improvement is $-\sum_{\forall j:(q,j)} \rho_{(q,j)}^n$. The same reasoning applies to the tardiness penalty in (16).

The E/T weights of a job $r$ that is the successor of a job $q$:

$$\alpha_r^n = g_i(s_{xi}^n \text{ with } s_{xr}^n - 1) - g_i(s_{xi}^n) = -w_r + \sum_{\forall j:(j,r)} \rho_{(j,r)}^n, \qquad \beta_r^n = g_i(s_{xi}^n \text{ with } s_{xr}^n + 1) - g_i(s_{xi}^n) = w_r - \sum_{\forall j:(j,r)} \rho_{(j,r)}^n. \tag{17}$$

Similarly to case (1), shifting $s_{xr}^n$ one unit to the left reduces the completion time of job $r$, decreasing the objective by $w_r$. However, it may increase the violation of the precedence constraint $(q,r)$, since the successor job $r$ now starts one time unit earlier; this holds for every precedence constraint in which job $r$ is the successor, so the total cost of violating precedence constraints is $\sum_{\forall j:(j,r)} \rho_{(j,r)}^n$. The same reasoning applies to the tardiness penalty in (17).

The penalty weights of a job $k$ that has no precedence constraints:

$$\alpha_k^n = -w_k, \qquad \beta_k^n = w_k, \tag{18}$$

where (18) is derived using arguments similar to those for (16) and (17).

What remains in order to calculate (16) and (17) is a method for computing the multipliers $\rho$ associated with the precedence constraints. Let

$f_i(s_{xi}^n)$: local objective value of the solution proposed by sub-production system $i$,
$f_i(s_{yi}^n)$: local objective value of the solution proposed by the shared machine,
$t_i^{n-1}$: positive step size,
$\psi_i^{n-1}$: positive scalar step length,
$p$: positive constant step parameter.

Given $s_{yj}^{n-1}$ from the shared machine, sub-production system $i$ updates the Lagrangian multiplier of each precedence constraint $(q,r): q,r \in U_i$ for the $n$th iteration as follows:

$$\rho_{(q,r)}^n = \max\left(0,\; \rho_{(q,r)}^{n-1} - t_i^{n-1}\left(s_{yr}^{n-1} - s_{yq}^{n-1} - p_q\right)\right), \tag{19}$$

$$t_i^{n-1} = \frac{\psi_i^{n-1}\left[f_i(s_{xi}^n) - f_i(s_{yi}^{n-1})\right]}{\sum_{\forall(q,r)}\left(s_{yr}^{n-1} - s_{yq}^{n-1} - p_q\right)^2}, \tag{20}$$

$$\psi_i^{n-1} = \begin{cases} \psi_i^{n-2} \times p, & \text{if } f_i(s_{xi}^n) - f_i(s_{yi}^{n-1}) \ge f_i(s_{xi}^{n-1}) - f_i(s_{yi}^{n-2}), \\ \psi_i^{n-2}, & \text{otherwise.} \end{cases} \tag{21}$$

It is well known that the subgradient of (14), given $s_{yj}^{n-1}$, is $\pm\left(s_{yr}^{n-1} - s_{yq}^{n-1} - p_q\right)$. For a maximization (minimization) Lagrangian dual problem, $\rho_{(q,r)}^n$ is updated along the positive (negative) subgradient direction, as shown in (19). In the traditional subgradient method, $(z^* - z^n)$ is used instead of $f_i(s_{xi}^n) - f_i(s_{yi}^{n-1})$ when determining the step size in (20), where $z^*$ is an upper bound of the centralized problem and $z^n$ is the Lagrangian objective value (see Bazaraa et al., 1993 for details). An upper bound of the centralized problem could be obtained from a globally feasible solution. However, in a distributed environment it is not easy to find a globally feasible solution, since no one has complete information about the entire system. This research therefore proposes the new step-size rule (20), which does not require a globally feasible solution. If sub-production system $i$ agrees with the solution proposed by the shared machine, that is, $s_{xi}^n = s_{yi}^{n-1}$, then $f_i(s_{xi}^n) - f_i(s_{yi}^{n-1}) = 0$ and the rule leaves the previous multiplier $\rho_{(q,r)}^{n-1}$ unchanged. If $s_{xi}^n \ne s_{yi}^{n-1}$, the two solutions differ and the rule changes the current multiplier in proportion to $f_i(s_{xi}^n) - f_i(s_{yi}^{n-1})$. Finally, as shown in (21), the step length $\psi_i^{n-1}$ is reduced by the step parameter $p$ if $f_i(s_{xi}^n) - f_i(s_{yi}^{n-1})$ fails to improve relative to the previous iteration. In the traditional subgradient method, $p = 0.5$ has given good empirical results (Wolfe and Crowder, 1974). However, Jeong and Leon (2003) reported that the traditional step parameter may not work when solving parameter design problems with CICA. This issue is addressed in Section 5.

Similar derivations are possible for objective functions other than total weighted completion time, as long as they are linear. In general, the main inconvenience is that the resulting expressions for the penalty weights may not be as simple as those obtained in (16) through (18). In other words, the E/T weights measure the change in objective value caused by a one-unit left/right shift of a job's start time; this is an effective characterization when the objective is a linear function of the start times, but if it is not linear, such a unit-shift increment may not represent the true change in the objective function.

4.4 Updating the Lagrangian multipliers for the shared machine

Analogously to the E/T weights for the sub-production systems in Section 4.3, the E/T weights for the shared machine must reflect the variation of the function $-\sum_t \theta_t\left[1 - \sum_{j \in U_i}(y_{jt} - y_{j,t-p_j})\right]$ under a one-unit left/right shift from the shared machine's solution at the end of the $n$th iteration. However, this approach may not be effective here, since the decision variables are 0/1 integers; moreover, it is not easy to express the machine conflict constraints as a function of the job start times. Therefore, a method is developed to estimate the E/T weights for the machine conflict constraints given the solution $s_{yi}^n$.

Let $\theta_t^n$ be the Lagrangian multiplier for the machine conflict constraint at time $t$ at the $n$th iteration. The multipliers can be regarded as the cost of occupying the corresponding time slot. As illustrated in Figure 2, time slots $s_{yj}^n + 1$ through $s_{yj}^n + p_j$ are assigned to job $j$. To estimate the earliness penalty for the given schedule, calculate the cost increment of shifting the schedule one unit to the left: the new schedule seizes time slot $s_{yj}^n$ and releases $s_{yj}^n + p_j$, so the cost increment is $\theta_{s_{yj}^n}^n - \theta_{s_{yj}^n + p_j}^n$.

[Figure 2. The schedule of job $j$ and the Lagrangian multipliers $\theta_1^n,\ldots,\theta_T^n$ of the time slots; job $j$ occupies slots $s_{yj}^n + 1$ through $s_{yj}^n + p_j$.]

The schedule can be left-shifted down to the first time slot, so the total number of possible shifts is $s_{yj}^n$. The average cost increment is used as the earliness penalty:

$$\mu_j^n = \frac{\sum_{t=1}^{s_{yj}^n}\left(\theta_t^n - \theta_{t+p_j}^n\right)}{s_{yj}^n}. \tag{22}$$

The same reasoning applies to the tardiness penalty. The current schedule can be right-shifted up to the end of the horizon. By shifting the start time of the job one unit to the right, the new schedule seizes time slot $s_{yj}^n + p_j + 1$ and releases $s_{yj}^n + 1$, so the cost increment is $\theta_{s_{yj}^n + p_j + 1}^n - \theta_{s_{yj}^n + 1}^n$. The total number of possible right shifts is $T - s_{yj}^n - p_j$, and the average cost increment is used to estimate the tardiness penalty:

$$\pi_j^n = \frac{\sum_{t=s_{yj}^n+1}^{T}\left(\theta_{t+p_j}^n - \theta_t^n\right)}{T - s_{yj}^n - p_j}. \tag{23}$$

What remains in order to determine $\theta$ in (22) and (23) is a method for updating the Lagrangian multiplier of each machine conflict constraint. The method proposed in this paper is:

$$\theta_t^n = \max\left(0,\; \theta_t^{n-1} - s^n\left(1 - \sum_{j=1}^{N}(x_{jt}^n - x_{j,t-p_j}^n)\right)\right), \tag{24}$$

$$s^n = \frac{\tau^n \hat{Z}_{MP}^n}{\sum_{t=1}^{T}\left[1 - \sum_{j=1}^{N}(x_{jt}^n - x_{j,t-p_j}^n)\right]^2}, \tag{25}$$

$$\tau^n = \begin{cases} \tau^{n-1} \times p, & \text{if } \hat{Z}_{MP}^n > \hat{Z}_{MP}^{n-1}, \\ \tau^{n-1}, & \text{otherwise,} \end{cases} \tag{26}$$

where $\hat{Z}_{MP}^n$ is the objective value of the shared machine problem (MP) at the $n$th iteration. Note that $\theta_t^n$ is updated to optimize the Lagrangian dual of (7), as shown in (24). The subgradient of (7), given $s_{xj}^n\ \forall j$, or equivalently $x_{jt}^n\ \forall j,t$, is $\pm\left(1 - \sum_{j=1}^{N}(x_{jt}^n - x_{j,t-p_j}^n)\right)$. For a maximization (minimization) Lagrangian dual problem, $\theta_t^n$ is updated along the positive (negative) subgradient direction. Expression (25) is equivalent to the traditional step-size rule except that it does not use a global upper bound for the centralized problem; instead, the local value $\hat{Z}_{MP}^n$ is used. If $\hat{Z}_{MP}^n = 0$, the shared machine agrees with the solution proposed by the sub-production systems, and the rule leaves the previous multiplier $\theta_t^{n-1}$ unchanged. If $\hat{Z}_{MP}^n \ne 0$, the two solutions differ and the rule changes the current multiplier in proportion to $\hat{Z}_{MP}^n$. The step length $\tau^n$ is reduced by the step parameter $p$ if $\hat{Z}_{MP}^n$ has not improved (i.e., decreased), as shown in (26).
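The weight and multiplier updates of Sections 4.3 and 4.4 can be sketched in Python as follows. This is a minimal illustration under stated assumptions: the step sizes from (20)-(21) and (25)-(26) are passed in as precomputed numbers, the E/T-weight function merges cases (16)-(18) so that a job appearing as both a predecessor and a successor accumulates both terms, and conflict multipliers beyond the horizon are treated as zero cost. None of the names come from the paper.

```python
def et_weights_subsystem(w, rho, precedences):
    """E/T weights per (16)-(18) on the sub-production side.
    w: completion-time weights (dict by job); rho: multipliers keyed by (q, r);
    precedences: list of local (q, r) pairs."""
    alpha, beta = {}, {}
    for j in w:
        as_pred = sum(rho[(q, r)] for (q, r) in precedences if q == j)
        as_succ = sum(rho[(q, r)] for (q, r) in precedences if r == j)
        alpha[j] = -w[j] - as_pred + as_succ   # (16)/(17)/(18) combined
        beta[j] = w[j] + as_pred - as_succ
    return alpha, beta

def update_rho(rho, s_y, p, t_step):
    """Subgradient-style update (19) of the precedence multipliers, given the
    shared machine's start times s_y and a step size from (20)-(21)."""
    return {(q, r): max(0.0, rho[(q, r)] - t_step * (s_y[r] - s_y[q] - p[q]))
            for (q, r) in rho}

def et_weights_machine(theta, s_y, p, T):
    """E/T weights per (22)-(23) on the shared-machine side.  theta is a dict
    over time slots 1..T; slots beyond the horizon are assumed cost-free."""
    mu, pi = {}, {}
    for j in s_y:
        s, pj = s_y[j], p[j]
        mu[j] = (sum(theta[t] - theta.get(t + pj, 0.0) for t in range(1, s + 1)) / s
                 if s >= 1 else 0.0)
        shifts = T - s - pj
        pi[j] = (sum(theta.get(t + pj, 0.0) - theta[t] for t in range(s + 1, T + 1)) / shifts
                 if shifts >= 1 else 0.0)
    return mu, pi

def update_theta(theta, x, p, step):
    """Subgradient update (24) of the machine-conflict multipliers, given the
    sub-production systems' start indicators x[j][t] and a step size from (25)."""
    new_theta = dict(theta)
    for t in theta:
        in_process = sum(x[j].get(t, 0) - x[j].get(t - p[j], 0) for j in x)
        new_theta[t] = max(0.0, theta[t] - step * (1 - in_process))
    return new_theta
```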


4.5 CICA algorithm for single-machine distributed scheduling problems

The CICA algorithm for the single-machine distributed scheduling problem is as follows.

Initialization: Set the maximum number of iterations $N$. Set $s_{yj}^0 = \mu_j^0 = \pi_j^0 = 0$, $s^0 = t_i^0 = 0$, $\tau^0 = \psi_i^0 = 2$ for all $i, j$, and choose $p$ with $0 < p < 1$. Set $n = 1$.

Step 1: Sub-production system problem. For $i = 1,\ldots,m$:
Step 1.1: Solve problem SP$_i$ and find $s_{xi}^n$.
Step 1.2: Calculate the E/T weights $\alpha_j^n$ and $\beta_j^n$, $\forall j \in U_i$, as shown in (16), (17) and (18).
Step 1.3: Calculate the step length $\psi_i^{n-1}$ as shown in (21).
Step 1.4: Calculate the step size $t_i^{n-1}$ as shown in (20).
Step 1.5: Update the Lagrangian multipliers $\rho_{(q,r)}^n$ as shown in (19), $\forall (q,r) \in U_i$.
Step 1.6: Send the information vector $(s_{xi}^n, \alpha_j^n, \beta_j^n)$, $\forall j \in U_i$, to the shared machine.

Step 2: Shared machine problem.
Step 2.1: Solve problem MP and find $s_{yi}^n$.
Step 2.2: Calculate the E/T weights $\mu_j^n$ and $\pi_j^n$, $\forall j$, as shown in (22) and (23).
Step 2.3: Calculate the step length $\tau^n$ as shown in (26).
Step 2.4: Calculate the step size $s^n$ as shown in (25).
Step 2.5: Update the Lagrangian multipliers $\theta_t^n$ as shown in (24), $\forall t$.
Step 2.6: Send the information vector $(s_{yi}^n, \mu_j^n, \pi_j^n)$, $\forall j \in U_i$, to sub-production system $i$, for all $i$.

Step 3: If $n = N$ or $s_{xi}^n = s_{yi}^n$ for all $i$, stop. Otherwise, set $n = n + 1$ and go to Step 1.
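The overall interaction can be summarized by a small driver loop. The following Python sketch mirrors Steps 1-3 above with placeholder callables standing in for the subproblem solves and the update rules of Sections 4.3-4.4; the names and message layout are illustrative assumptions, not the authors' implementation.

```python
def cica_loop(solve_sp, solve_mp, sub_side_update, machine_side_update, m, max_iter):
    """Control-flow sketch of the CICA algorithm of Section 4.5."""
    machine_msg = None  # will hold {"s_y": {...}, "mu": {...}, "pi": {...}}
    sub_msgs = []
    for n in range(1, max_iter + 1):
        # Step 1: each sub-production system solves SP_i with the latest
        # (s_y, mu, pi) it received and reports (s_x, alpha, beta).
        sub_msgs = [sub_side_update(i, solve_sp(i, machine_msg)) for i in range(m)]
        # Step 2: the shared machine solves MP over all reported jobs and
        # derives the next (s_y, mu, pi) via (22)-(26).
        machine_msg = machine_side_update(solve_mp(sub_msgs), sub_msgs)
        # Step 3: stop when every system's proposal matches the machine's.
        if all(machine_msg["s_y"][j] == msg["s_x"][j]
               for msg in sub_msgs for j in msg["s_x"]):
            break
    return machine_msg, sub_msgs
```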


5. Experimental Study

This section introduces an example to illustrate the behavior of CICA. In the latter part of the section, CICA is experimentally compared with an algorithm based on Lagrangian Relaxation with the subgradient method using randomly generated problems. Finally, a simple heuristic method is proposed to restore global feasibility for the single-machine problem. In this paper, the sub-production system problems and the shared machine problem are solved optimally using CPLEX, and the overall algorithm is coded in C.

Being a modified LR-based methodology, CICA does not guarantee global feasibility or optimal convergence. Feasibility restoration heuristics are commonly used to post-process the solutions obtained with LR-based algorithms. For instance, in our experiments we tested a greedy schedule-generation algorithm, implemented by the shared machine, that uses the start times from the organizations' local solutions as priority indexes. However, the main interest here is to compare CICA with traditional LR, because LR has been proven to be a good approach for scheduling problems; clearly, one should expect LR to dominate CICA because LR uses more global information.

In the experimental study, performance measures are calculated to evaluate the degree of capacity violation before feasibility restoration. CV is proposed as a measure of capacity violation on the shared machine for the sub-production systems' solution, and PV as a measure of precedence violation for the shared machine's solution:

$$CV = \sum_{t=1}^{T} \max\left(0,\; \sum_{j=1}^{N}(x_{jt} - x_{j,t-p_j}) - 1\right), \tag{27}$$

$$PV = \sum_{\forall(q,r)} \max\left(0,\; p_q - (s_{yr} - s_{yq})\right). \tag{28}$$

CV represents the total excess use of capacity in the sub-production systems' solution, and PV represents the total precedence violation, per unit time, in the shared machine's solution.
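A minimal Python sketch of the violation measures (27)-(28) and of the start-time-ordering restoration heuristic mentioned above; the dictionary-based data layout and function names are illustrative assumptions, not from the paper.

```python
def violation_measures(x, s_y, p, precedences, T):
    """CV and PV as in (27)-(28).  x[j][t] are the sub-production systems'
    0/1 start indicators, s_y the shared machine's start times."""
    cv = 0
    for t in range(1, T + 1):
        in_process = sum(x[j].get(t, 0) - x[j].get(t - p[j], 0) for j in x)
        cv += max(0, in_process - 1)
    pv = sum(max(0, p[q] - (s_y[r] - s_y[q])) for (q, r) in precedences)
    return cv, pv

def restore_feasibility(s_x, p):
    """Greedy restoration: sequence the jobs on the shared machine in ascending
    order of their proposed start times and schedule them back to back, which
    preserves every local precedence relation."""
    schedule, clock = {}, 0
    for j in sorted(s_x, key=s_x.get):
        schedule[j] = clock
        clock += p[j]
    return schedule
```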


To measure the quality of the solutions, two measures reflecting the distance to the optimal solution are proposed. First, the closeness of an individual solution to the centralized (optimal) solution is evaluated as

$$PD_x = \frac{Z_x - Z^*}{Z^*}, \qquad PD_y = \frac{Z_y - Z^*}{Z^*}, \tag{29}$$

where $Z_x = \sum_{i=1}^{m} f_i(s_{xi})$, $Z_y = \sum_{i=1}^{m} f_i(s_{yi})$ and $Z^*$ are the global objective values of the sub-production systems' solution, the shared machine's solution and the optimal solution, respectively. The second type of solution quality measure is the compromise gap (CG) between the sub-production systems' solution and the shared machine's solution:

$$CG_x = \frac{Z_x - Z_y}{Z_x}, \qquad CG_y = \frac{Z_x - Z_y}{Z_y}. \tag{30}$$

$CG_x$ represents the percent deviation of the shared machine's solution from the sub-production systems' solution, and $CG_y$ the percent deviation of the sub-production systems' solution from the shared machine's solution. A small CG does not always mean that the solutions are close to the optimum, since both solutions may deviate greatly from the optimal objective value. Therefore, solution quality must be evaluated by considering CV, PV, PD and CG simultaneously.

5.1 Example

Consider a problem with two sub-production systems and one shared machine. For each sub-production system, six jobs need to be processed on the shared machine, and there are three precedence constraints between the operations of each sub-production system. The problem data are shown in Table 1.


Table 1. Problem data for the example

                        i = 1                  i = 2
  n_i                   6                      6
  p_j, j in U_i         (5,5,1,1,2,5)          (4,3,1,4,3,4)
  w_j, j in U_i         (7,2,1,2,9,5)          (8,8,4,7,2,1)
  (q,r), q,r in U_i     (1,2),(2,3),(3,4)      (1,2),(2,3),(3,4)

In parameter design problems, Jeong and Leon (2002a) reported that the choice of the step parameter is significant for the performance of CICA. Thus the effect of the step parameter is also illustrated in this example problem. Figure 3 shows the objective values of the sub-production systems' solution and the shared machine's solution for the step parameters p = 0.25, p = 0.5, p = 0.75 and p = 1.0, respectively. For the small step parameters (p = 0.25 and p = 0.5), the solutions converge fast but the compromise gap is large; for large step parameters, the Lagrangian multipliers stabilized quickly. For p = 1.0, the solutions diverge until the algorithm reaches the maximum number of iterations, because the step lengths do not change. It appears that p = 0.75 gives the best result in terms of deviation from the optimal objective value. However, with p = 0.75 it took about 50 iterations to converge, whereas it took about 20 and 30 iterations with p = 0.25 and p = 0.5, respectively. Table 2 shows the performance measures for each step parameter in this example. In terms of feasibility, p = 0.5 and p = 0.75 give the best results: the capacity violation (CV) of the sub-production systems' solution is 0.105 in both cases and there is no precedence violation (PV) in the shared machine's solution. In fact, the shared machine solution is the global optimal solution for p = 0.5 and p = 0.75, so the percent deviation (PD) from the optimum is zero for the shared machine's solution in both cases. However, the PD of the sub-production systems' solution is smaller for p = 0.75 than for p = 0.5, and thus the compromise gap (CG) between the sub-production systems and the shared machine is also smaller for p = 0.75. Table 2 shows that the effect of the step parameter is important in this example; p = 0.75 gives the best result in terms of feasibility, closeness to optimality and compromise gap.


[Figure 3. The variation of the objective values with step parameters p = 0.25, 0.5, 0.75 and 1.0. Each panel plots the sub-production systems' solution, the shared machine's solution and the optimal objective value against the iteration number.]

Table 2. The performance measures for the example problem

  p      PDx     PDy     CGx     CGy     CV    PV
  0.25   0.356   0.014   0.575   0.365   16    0.184
  0.5    0.087   0       0.095   0.087    4    0
  0.75   0.014   0       0.014   0.014    4    0
  1.0    0.082   0.008   0.089   0.098   10    0.342

5.2 Experimental Comparison between CICA and Lagrangian Relaxation (LR) This experiment considers two sub-production systems with one shared machine problem.

Each sub-production system has six jobs to be processed on the shared

machine.

The weight for completion time is generated from the discrete uniform

distribution U(1,10). In this experiment, two factors are considered, the variance of processing time and the number of jobs that have precedence constraints. Processing times are randomly generated from either U(1,5) or U(1,10). Additionally, precedence constraints have three levels. In level 1, two jobs have a precedence relationship (e.g., (q,r) = (1,2)). In levels 2 and 3, three and four jobs are interrelated by precedence constraints, respectively (e.g., in case of level 2, (1,2) (2,3)). Thus the total number of problem types is 6 and 20 problems are randomly generated of each type. For each problem, CICA is applied with different step parameters, p = 0.25, p = 0.5, p = 0.75. Also, CICA is compared with the classical Lagrangian Relaxation with subgradient method (see Bazaraa et al., 1993 for more details).

In Lagrangian Relaxation, the

machine conflict constraints are relaxed and the centralized problem is decomposed into m independent sub-problems. Lagrangian Relaxation is implemented using a random solution as an initial lower bound and a step parameter p = 0.5 that has performed well empirically (Fisher, 1981; Wolfe and Crowder, 1974). Tables 3, 4 and 5 show the PC, CG, CV and PV of CICA with step parameters, p = 0.25, p = 0.5, p = 0.75 respectively. A problem type represents a combination of the distribution of processing time and the level of precedence constraint. For example, ‘[1,5] level 1’ means that processing times are generated from U[1,5] and two jobs have a


precedence constraint. For each problem type, tables show the minimum, average and the maximum of each performance measures. Statistical analysis is performed to investigate whether the step parameter and problem factors affect the performance measures of CICA. First, the effect of the step parameter is tested. For all performance measures, the effect of the step parameter is significant with p-value less than 0.001. The null hypotheses that performance means using two different step parameters are equal, cannot be accepted with p-value less than 0.001 for all pair of treatments and for all performance measures. Second, the effect of the variations of the processing time and the level of precedence constraints are tested. For the PDx of sub-production solution, the significant factor is the level of precedence constraints with p-value less than 0.001.

As the level of

precedence constraints increase, CICA performs better. Meanwhile, for CV of subproduction solution, the significant factor is the variations of processing time with pvalue less than 0.001. The CV of sub-production solution is increased as the variations of processing time increases.

All other performance measures for the shared machine

solution (PDy and PV) are not affected by the variation of processing time and the level of precedence constraints.

Therefore CICA performs the best in cases where the

variation of processing time is low, the level of precedence constraints is high and p = 0.75. Finally, CICA with p = 0.75 is compared with the classical Lagrangian Relaxation with subgradient algorithm since the step parameter gives the best result for CICA. The PDx and CV of the Lagrangian Relaxation are shown in Table 6. It would be fair to compare the Lagrangian solution with the sub-production solution since the subproduction solution may violate the machine conflict constraints. A statistical analysis is performed for null hypothesis H0: µ CICA = µ LR and the alternative hypothesis H1:

µ CICA > µ LR for PDx and CV. The p-value for PDx is less than 0.001 and the p-value for CV is 0.88. Therefore Lagrangian Relaxation is significantly better than CICA for PDx but the null hypothesis cannot be rejected for CV. Table 7 shows the average computational time to find the optimal solution, the proposed solution and the Lagrangian solution. The results suggest that the growth of the computation time of CICA is slower as the problem increases in job processing times and 23

precedence constraints. It is unclear to the authors on exactly why the difference in computation times since both formulations are similar – it is conjectured that it has to do with the nature of the updated cost coefficients in the objective function from iteration to iteration; it was observed that CICA tends to have more integer valued coefficients, while LR coefficients had real values, thus slowing down the computation speed.

The

experiments were run on a personal computer with 3 GHz processor. 5.3 Experimental Study with Feasibility Restoration To further explore the potential for practical applicability of CICA this section describes its performance after feasibility restoration and with larger problem instances. Restoring feasibility under partial information is in general a challenging problem; however, in a single-machine environment simple heuristic procedures can be developed to restore global feasibility without significant loss of privacy. Here at the end of the algorithm, the shared machine gathers the starting time of each job from sub-production systems. In turn, jobs are sequenced in ascending order of starting time. This feasibility restoration procedure does not violate the precedence constraints since it does not change the sequence of jobs proposed by sub-production systems. Also, it does not violate the machine conflict constraints since jobs are scheduled sequentially on the shared machine. The experiments in the previous section are performed with 12 jobs and only 2 subproduction systems. This section considers experiments with 12, 24 and 36 jobs, as well as, runs with 4 sub-productions systems. Sixty random problems are generated similar as in the previous section with 10 replicates per each of the 6 problem types. CICA is run with the step parameter set at 0.75 because it was the best performer in the previous experiment. Table 8 show the quality of the solutions after feasibility restoration in terms of PD. CICA yields results that on the average are less than 2% from the optimal with best and worst solutions at optimal and 23%, respectively.

These results are promising

considering the fact that no global information sharing is used. As expected, Lagrangian Relaxation exhibits even better performance. Table 9 depict the corresponding computation times. The results confirm that the times grow exponentially with the size of the problem suggesting the need to develop


more efficient heuristics to practically solve larger problem instances. Another way to reduce the computation time is by using more efficient formulations of the scheduling problem. The development of faster solution approaches is on-going research by the authors. Table 10 summarizes the results when 4 sub-production systems and 36 jobs are considered. When compared with the two-sub-production case, for 36 jobs, CICA’s solution quality degrades from 1.2% to 3.5% as the number of sub-production systems increases. This can be explained by the fact that the information becomes more dispersed among the four systems.

The computation times decrease because now each sub-

production system solves a smaller sub-problem. As expected, the performance of the Lagrangian Relaxation does not change significantly in terms of solution quality. 6. Conclusion This paper investigated the application of CICA proposed by Jeong and Leon (2002a) to a single shared-machine scheduling problem. The scheduling problems of the subproduction systems and the shared machine were formulated as 0/1 integer programs. Experimental results suggest that the proposed algorithm can yield close-to-optimal solutions after feasibility restoration for various problem settings. Future research on this line of work can focus on improving the computational efficiency of the methodology to solve larger problem sets. Preliminary experimentation with alternative problem formulations is yielding significant computation time reductions with similar solution quality performance. The alternative formulations being tested are based on efficient LP relaxations of the associated scheduling problem (Dyer and Wolsey, 1990). Other aspects that require further study include the inclusion of multiple coupling agents, heterogeneous objective functions, and asynchronous interactions. Acknowledgement. The authors will like to thank Mr. Sun Woo Kim, PhD student in Industrial Engineering at Texas A&M University for his help in re-running the original experiments and additional runs with larger problems performed after the first version of this paper.


Biographical Sketches Dr. In-Jae Jeong is currently an assistant professor of Industrial Engineering department in Hanyang University, Seoul, Korea. His research interests are in the area of distributed decision making and optimization. Dr. V. Jorge Leon is a professor at Texas A&M University holding a joint appointment in the Department of Industrial Engineering and the Department of Engineering Technology.

His research interests are in the optimization of the operation of

manufacturing systems. Dr. Leon is the Program Coordinator of the Manufacturing and Mechanical Engineering Technology programs at Texas A&M University, and is a member of the Journals Committee for the Society of Manufacturing Engineers. He is a member of ASEE, INFORMS, and SME.


Table 3. CICA with step parameter p = 0.25 Problem type [1,5] level 1 [1,5] level 2 [1,5] level 3 [1,10] level 1 [1,10] level 2 [1,10] level 3

min 0.000 0.000 0.009 0.000 0.014 0.338

PDx avg 0.148 0.237 0.294 0.179 0.350 0.399

max 0.465 0.430 0.420 0.431 0.426 0.428

min 0.000 0.000 0.000 0.000 0.000 0.000

PDy avg 0.025 0.022 0.077 0.030 0.048 0.078

max 0.164 0.128 0.264 0.170 0.222 0.207

min 0.000 0.000 0.008 0.000 0.015 0.283

CGx avg 0.203 0.378 0.367 0.294 0.597 0.547

max 0.739 0.777 0.668 1.117 1.006 0.821

min 0.000 0.000 0.008 0.000 0.014 0.220

CGy avg 0.171 0.245 0.247 0.196 0.356 0.348

max 0.868 0.437 0.400 0.528 0.501 0.451

min 0.000 0.000 2.000 0.000 4.000 18.000

CV avg 6.000 9.700 12.400 12.900 25.100 28.600

max 17.000 22.000 22.000 34.000 37.000 40.000

CV avg

max

min 0.000 0.000 0.000 0.000 0.000 0.000

PV avg 0.052 0.197 0.415 0.011 0.152 0.521

max 0.382 1.273 1.171 0.217 0.575 1.621

min

PV avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.016 0.069 0.277 0.020 0.071 0.293

0.313 0.447 1.273 0.217 0.426 1.304

Table 4. CICA with step parameter p = 0.5 Problem type [1,5] level 1 [1,5] level 2 [1,5] level 3 [1,10] level 1 [1,10] level 2 [1,10] level 3

min 0.000 0.000 0.001 0.000 0.000 0.000

PDx avg 0.072 0.123 0.140 0.100 0.121 0.172

max 0.408 0.419 0.410 0.349 0.382 0.383

min 0.000 0.000 0.000 0.000 0.000 0.000

PDy avg 0.052 0.018 0.046 0.016 0.010 0.037

max 0.284 0.144 0.223 0.258 0.048 0.207

min

CGx avg

max

min

CGy avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.093 0.169 0.193 0.120 0.157 0.243

0.688 0.723 0.695 0.535 0.618 0.622

0.000 0.000 0.000 0.000 0.000 0.000

0.073 0.123 0.140 0.110 0.122 0.173

0.408 0.419 0.410 0.467 0.382 0.383

27

min

0.000 4.800 19.000 0.000 6.400 21.000 0.000 8.650 22.000 0.000 6.250 17.000 0.000 10.100 28.000 0.000 16.050 36.000

Table 5. CICA with step parameter p = 0.75 Problem type [1,5] level 1 [1,5] level 2 [1,5] level 3 [1,10] level 1 [1,10] level 2 [1,10] level 3

min

PDx avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.095 0.053 0.028 0.150 0.066 0.036

0.363 0.257 0.092 0.370 0.239 0.283

min

PDy avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.020 0.003 0.005 0.013 0.001 0.005

0.195 0.020 0.036 0.119 0.014 0.063

min

CGx avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.094 0.054 0.030 0.140 0.071 0.044

0.383 0.257 0.092 0.370 0.239 0.394

min

CGy avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.118 0.062 0.030 0.191 0.074 0.037

0.620 0.346 0.101 0.587 0.315 0.283

min

CV avg

max

0.000 0.000 0.000 0.000 0.000 0.000

2.350 2.750 3.100 3.150 5.950 5.350

6.000 6.000 9.000 9.000 13.000 21.000

Table 6. Lagrangian Relaxation with subgradient method

Problem type

PDx

CV

min

avg

max

min

avg

max

[1,5] level 1

0.000

0.013

0.061

0.000

2.150

7.000

[1,5] level 2

0.000

0.010

0.068

0.000

1.950

9.000

[1,5] level 3

0.000

0.011

0.066

0.000

1.150

7.000

[1,10] level 1

0.000

0.009

0.054

0.000

2.450

12.000

[1,10] level 2

0.000

0.012

0.115

0.000

2.700

19.000

[1,10] level 3

0.000

0.008

0.085

0.000

1.800

10.000

28

min

PV avg

max

0.000 0.000 0.000 0.000 0.000 0.000

0.053 0.027 0.035 0.011 0.000 0.071

0.421 0.286 0.300 0.217 0.000 0.507

Table 7. The comparison of the average computational time Computational time(sec.) Problem type

CICA Algorithm

Lagrangian Relaxation

Optimal solution

min

avg

max

min

avg

max

[1,5] level 1

0.100

24.500

28.809

67.688

12.281

21.978

45.829

[1,5] level 2

0.102

24.547

27.624

43.547

11.703

30.402

83.407

[1,5] level 3

0.105

25.500

29.015

66.141

11.375

42.782

92.296

[1,10] level 1

0.138

27.312

39.886

174.781

20.969

58.970

133.157

[1,10] level 2

0.147

28.000

39.913

110.219

24.063

89.519

225.156

[1,10] level 3

0.163

28.594

46.486

141.312

38.312

135.101 328.406

29

Table 8. Percent deviation comparison after feasibility restoration. Lagrangian Relaxation

No. of Jobs

CICA algorithm

min

avg

max

min

avg

max

12

0.000

0.001

0.046

0.000

0.015

0.229

24

0.000

0.004

0.059

0.000

0.016

0.163

36

0.000*

0.000*

0.003*

0.000

0.012

0.123

*Due to excessive run times these results are for 12 replicates only – all other cases use 60 replicates.

Table 9. Computational time with feasibility restoration. Lagrangian Relaxation

No. of Jobs

CICA algorithm

min

avg

Max

min

avg

max

12

12.69

63.85

204.03

23.92

38.49

155.05

24

114.78

1541.40

9129.61

34.89

141.98

1280.61

1813.11* 7054.66* 13230.67*

69.67

1079.04

5446.33

36

*Due to excessive run times these results are for 12 replicates only – all other cases use 60 replicates.

Table 10. Results comparison for 4 sub-production system problems, 36 jobs. Lagrangian Relaxation min

avg

max

CICA algorithm Min

avg

max

0.035

0.245

Percent Deviation (PD) 0.000

0.001

0.004

0.000

Computational Time (sec.) 277.64

1092.79

3695.39

76.08

332.14

30

1507.78

References Bazaraa, M. S., Sherali, H. D. and Shetty, C. M. (1993) Nonlinear programming: Theory and algorithms. 2nd Edn., John Wiley & Sons, New York Bertsekas, D. P. (1990) The auction algorithm for assignment and other network flow problems: A tutorial. Interfaces, 20(4), 133-149 Bertsekas, D. P. and Tsitsiklis, J. N. (1989) Parallel and distributed computation: Numerical methods. Prentice Hall, Englewood Cliffs, NJ Davis, R. and Smith, R.G. (1983) Negotiation as a metaphor for distributed problem solving. Artificial Intelligence, 20, 63-109 Duffie, N.A. and Prabhu, V.V. (1994) Real-time distributed scheduling of heterarchical manufacturing systems. Journal of Manufacturing Systems, 13(2), 94-107 Dyer, M. E. and Wolsey, L. A. (1990) Formulating the single machine sequencing problem with release dates as a mixed integer program. Discrete Applied Mathematics, 26, 255 - 270 Ertogral K. and Wu S. D. (2000) Auction-theoretic coordination of production planning in the supply chain, IIE Transactions, 32, 931 - 940 Fisher, M. (1981) The Lagrangian relaxation method for solving integer programming problems. Management Science, 27, 1-18 Fox, M.S. and Smith, S.F. (1984) ISIS-A knowledge-based system for factory scheduling. Expert System, 1(1) 25-48, Garey, M., Tarjan, R. and Wilfong, G. (1988) One-processor scheduling with symmetric earliness and tardiness penalties. Mathematics of Operations Research, 13, 330-348 Gou, L., Hasegawa, T. and Luh, P.B. (1994) Holonic planning and scheduling for a robotic assembly testbed. Proceedings of the 4th Rensselaer international conference on Computer Integrated Manaufacturing and Automation Technology, 142-149 Kutanoglu, E. and Wu, S.D. (1999) On combinatorial auction and Lagrangian relaxation for distributed resource scheduling. IIE Transactions, 31(9), 813-826 Jeong I. J. and Leon V. J. (2002a) Decision making and cooperative interaction via coupling agents in organizationally distributed system. IIE Transactions – Special Issue in Large Scale Optimization, 34,789-802

31

Jeong, I. J. and Leon V. J. (2002b) A Distributed Scheduling Methodology for a TwoMachine Flowshop Using Cooperative-Interaction Via Multiple Coupling-Agents, Journal of Manufacturing Systems, 21(3), 126-140 Jeong I. J. and Leon V. J. (2003) Distributed allocation of capacity of a single-facility using cooperative interaction via coupling agents. International Journal of Production Research, 1(1), 15-30 Lin, G.Y. and Solberg, J.J. (1992) Integrated shop floor control using autonomous agents. IIE Transactions, 24(3), 57-11 Luh, P.B. and Hoitomt, D.J. (1993) Scheduling of manufacturing system using the Lagrangian relaxation technique. IEEE Transaction on robotics and automation, 38, 1066-1079 Nemhauser, G. L. and Wolsey, L. A. (1989) Integer and combinatorial optimization, John Wiley & Sons, New York Pinedo, M. (1995) Scheduling: Theory, algorithms and systems. Prentice Hall, Englewood Cliffs, NJ Pritsker A., Watters, L. and Wolfe P. (1969) Multiproject scheduling with limited resources: A zero-one programming approach. Management Science: Theory, 16(1), 93-108 Roundy, R.D., Maxwell, W.L. and Herer, Y.T., Tayur, S.R. and Getzler, A.W. (1991) A price-directed approach to real-time scheduling of manufacturing operations. IIE Transactions, 23, 149-160 Shaw, M.J. (1988) Dynamic scheduling in cellular manufacturing systems: A framework for networked decision making. Journal of Manufacturing system, 7(2), 83-94 Sherali, H. and Choi, G. (1996) Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs. Operations Research Letters, 19,105-113 Smith, S.F., Ow, P.S., Potvin, J.Y., Muscettola, N., Matthys, D.C. (1990) An integrated framework for generating and revising factory schedule. Journal of the Operations Research Society, 41, 539-552

32

Smith, S.F., Fox, M.S. and Ow, P.S. (1986) Constructing and maintaining retailed production plans: Investigations into the development of knowledge-based scheduling systems. AI magazine, 7(4), 45-61 Sycara, K., Roth, S., Sadeh, N. and Fox, M. (1991) Distributed constrained heuristic search. IEEE Transaction on Systems Man and Cybernetics, 21(6), 1446 – 1461 Uma, R. N. and Wein, J. (1998) On the relationship between combinatorial and LP-based approaches to NP-hard scheduling. Lecture Notes in Computer Science, 1412, 394408 Wellman, M.P. (1993) A market-oriented programming environment and its application to distributed multicommodity flow problems. Journal of Artificial Intelligence Research, 1(1), 1-23 Wolfe, P and Crowder, H. D. (1974) Validation of subgradient optimization. Mathematical Programming 6, 62-88 Zweben, M. and Fox, M. (1994) Intelligent scheduling. Morgan Kaufman, San Francisco.

33