Component Deployment in Computational Grids

Gregor von Laszewski, Peter Lane, Eric Blau, Jarek Gawor, Other authors TBD

Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439, U.S.A. {gregor}@mcs.anl.gov

Abstract. Grids comprise an infrastructure that enables scientists to use a diverse set of distributedly owned and managed remote services and resources as part of complex scientific problem-solving processes. Because of the diversity of such a complex environment, deploying and maintaining components in a fashion that transparently supports the problem-solving process is challenging. We analyze some of the challenges involved in deploying components in Grids and report on three practical solutions used by the Globus Research Project that are related to component deployment. Lessons learned from this experience lead us to believe that it is necessary to support a variety of component deployment strategies and to select the one best suited for the community.

1 Introduction

Grids comprise an infrastructure enabling scientists to use a diverse set of distributed services that access a variety of dispersed resources as part of complex scientific problem-solving processes. This includes, for example, the use of compute resources such as supercomputers, access to information resources such as large-scale databases, and access to knowledge resources such as collaboration between colleagues. The integration of these resources within the collaborative problem-solving process is instrumental in furthering the scientific discovery process while using advanced problem-solving environments. The Grid concept is based on coordinated resource sharing for problem-solving processes in dynamic, multi-institutional virtual organizations [8].

In general, we observe two trends in creating production-level Grids. The first trend is the creation of common production Grids that are maintained as part of a virtual organization spanning multiple administrative domains and enabling access to high-end resources such as supercomputers and advanced instruments, such as a particle collider. The Globus Project, together with other efforts such as the NASA Information Power Grid (IPG) [11] and the Alliance Virtual Machine Room (VMR) [21], has made progress in developing the fundamental technologies needed to build computational Grids [1, 7, 22]. These Grids can be used collectively by researchers who are members of the virtual organization. The deployment of services and components in


such collectively maintained production Grids is performed by a well-trained administrative staff. Nevertheless, training the staff and performing the installation and maintenance tasks pose a significant overhead toward making production Grids a reality.

The second trend is to integrate vast amounts of spare compute resources, such as workstations and personal computers, into a shared computing pool. This pool of resources can be accessed either by a trusted application (as demonstrated by the SETI@home project [25]) or by the members of the community that contribute to the compute pool (as demonstrated by the Condor project [5, 14]). In each case it is important that deployment issues be addressed carefully in order to minimize the hurdle of deploying the necessary components on the local clients. Both trends are an important ingredient in making resources available to larger audiences while integrating them into production Grids.

Within the Grid community, we identified three basic user communities that must deal with the deployment of components: Grid programmers that develop services in a collaborative fashion to be deployed in the Grid, administrators that deploy these services on resources in order to make them accessible to others, and scientific or application users that access the services provided as part of the Grid. The deployment of components in Grids plays a central role in the lifecycle of Grid component development, which in many current Grid efforts requires intense interaction between designers, administrators, and users.

[Figure 1 diagram: the component life cycle comprises the phases design, packaging, installation, configuration, use, quality control, update, and uninstall, annotated with the typical involvement of the software engineer, administrator, and application user in each phase.]

Figure 1: Deploying and maintaining components in Grids is so far a resource-intensive task that requires interaction between designers, administrators, and users.

A blueprint for the deployment of Grid components is depicted in Figure 1. We distinguish several phases that are inherently connected with the deployment of components.


First, the software architect must design the components in such a way that future deployment in the Grid is supported. The components must be packaged conveniently so that they can be installed and configured during a deployment process in Grids. Once a component is deployed, it is used by application users or scientists, but also by other Grid architects developing new Grid services. It is important to acknowledge that the deployment of components must be followed by a quality-control phase that evaluates the practical use and the correctness of the design. Updates, and in the worst case the uninstallation of a component, must be supported. Thus, care must be taken to design a deployment strategy that supports the distributed characteristics of Grids. The goal must be to reduce the effort required to deploy and maintain the components that enable the effective use of this complex environment in the scientific problem-solving process. Hence, it is desirable that much of the logistics of maintaining such components be performed automatically.

The rest of the paper describes some of the unique requirements for component deployment within Grids. Because of the complexity of these requirements, we have prototyped deployment strategies for three common deployment scenarios. Finally, we compare the scenarios and summarize our conclusions.

2 Requirements

The problems associated with component deployment are manifold and are already complex for deployment on a single workstation or computer. These problems are amplified when targeting Grids as the infrastructure on which components are deployed. Common commercial distributed environments such as CORBA [18] provide insufficient support for the deployment of components in Grids, because these technologies usually target a single administrative domain, while Grids try to encompass multiple domains while keeping them autonomous. One of the key issues in such an environment is that components be designed with a security infrastructure that allows them to be invoked securely and to interoperate across administrative domains. One focus of the Globus Project is to enable a Grid security infrastructure (GSI). GSI is based on public-key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) communication protocol, with extensions to these standards for single sign-on and delegation [2].

We use a visionary example to illustrate several other typical issues and to derive requirements for the deployment of components in Grids. This example is depicted in Figure 2. Here a group of scientists uses a problem-solving environment that determines a large LU factorization as part of the structure determination of a molecule in three-dimensional space. The scientist is not interested in the details of the solving process, e.g., on which Grid resources the calculation is performed. He uses the Grid as a service infrastructure that returns the information requested as part of his problem-solving process. First, the problem-solving environment uses the appropriate components to integrate Grid resources into the problem-solving process. It queries which resources have a component available that can perform an LU factorization


with the given input data. Returned is a ranked list of possible choices from which the best suited is chosen. Once the components are determined, they are assembled and executed to deliver the appropriate feedback to the scientists. The steps shown in Figure 2 are:

1. Collaborate with colleagues to derive a problem-solving strategy.
2. The problem-solving environment selects high-level components used in the problem-solving process.
3. Based on the tasks to be solved, Grid resources and components using these resources are selected.
4. A component is created that solves the problem.
5. The component is executed as a Grid service using the Grid resources.
6. The output is fed back to the scientists.

Figure 2: Component assembly to support the scientific problem-solving process in Grids.

We see that in order to make such a Grid-enabled environment possible, we have to integrate several locally maintained code repositories into a virtual code repository. From it we must discover, select, and assemble the components best suited for the task. The selection of components is determined by functional properties, performance, licensing issues, and cost. We must address the issue of whether the components can be trusted to be executed in a trusted environment. We must make sure that the interfaces between components maintain portability. Version control must be strictly enforced, not only on the delivery of components but potentially also on statically or dynamically linked libraries. A discovery process must be designed in such a way that only authenticated users may know of the existence or the properties of the components. Introspection of components must be provided so that, prior to the use of a component, its functionality can be judged and approved for inclusion within other components. Components themselves must be able to perform similar component assembly strategies, while keeping the authentication of the users in mind. That means that the same query and assembly of a component for user A could result in a completely different implementation for user B, as user B may have different access rights to resources within the Grid. This paper provides a first step towards the definition of a


generalized Grid Deployment Infrastructure (GDI). In the next section we outline different scenarios for component deployment that are part of our example.
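To illustrate the discovery-and-selection step described above, the following sketch ranks candidate components advertised by a (hypothetical) virtual code repository. The Candidate fields and the scoring rule are our own illustrative assumptions; the paper defines no concrete API for this step:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of the discovery-and-selection step: rank candidate
 *  components returned from a virtual code repository. Untrusted candidates
 *  (e.g., without a valid signature) are filtered out before ranking. */
public class ComponentSelector {

    static class Candidate {
        final String host;
        final double gflops;       // advertised performance
        final double costPerHour;  // advertised cost
        final boolean trusted;     // e.g., carries a valid signature

        Candidate(String host, double gflops, double costPerHour, boolean trusted) {
            this.host = host;
            this.gflops = gflops;
            this.costPerHour = costPerHour;
            this.trusted = trusted;
        }
    }

    /** Drop untrusted candidates, then order the rest best score first:
     *  higher performance and lower cost both improve the score. */
    static List<Candidate> rank(List<Candidate> found) {
        List<Candidate> usable = new ArrayList<>();
        for (Candidate c : found) {
            if (c.trusted) {
                usable.add(c);
            }
        }
        usable.sort(Comparator.comparingDouble(
                (Candidate c) -> c.gflops / (1.0 + c.costPerHour)).reversed());
        return usable;
    }

    public static void main(String[] args) {
        List<Candidate> found = List.of(
                new Candidate("hpc.example.gov", 50.0, 4.0, true),
                new Candidate("pool.example.org", 5.0, 0.0, true),
                new Candidate("rogue.example.net", 80.0, 0.0, false));
        for (Candidate c : rank(found)) {
            System.out.println(c.host);
        }
    }
}
```

In a real GDI the score would also reflect licensing and the user's access rights, so the same query could yield a different ranking for different users, as noted above.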

3 Component Deployment Scenarios

As is evident from the previous section, the requirements posed on a deployment infrastructure for Grid components and services are plentiful, owing to the complex and diverse infrastructure comprising computational Grids. Hence, we cannot assume there is a single strategy that fulfills all of our requirements. We decided to concentrate on three common use scenarios that we have identified within our example. These scenarios are defined as follows:

Scenario Thick Grid Service: A Grid designer develops components and services that are to be installed locally on a computer by an administrator. Such components and services are typically written in C and may be run with root access on the machine.

Scenario Thin Grid Service: A scientific user uses a Web browser to access Grid services in a transparent fashion. The communication is performed only through a browser interface and does not allow users to install any software on the client machine on which the browser runs.

Scenario Slender Grid Service: Practical experience with thin Grid services has shown that the user interface provided by the current generation of browsers is too limited or incurs an unreasonably long startup cost. A slender client allows the browser to cache components accessing Grid and scientific services locally, as well as to integrate specialized locally available applications within the Web interface.

Each of these scenarios has implications for the deployment of components in the Grid. In the next sections, we describe how we implemented a deployment strategy for each of the scenarios.

4 Thick Grid Services and Components

A subset of deployment issues for Grid services and components based on thick Grid services, designed with traditional programming languages such as C and FORTRAN, can be found in [10], together with their practical application to the Globus Toolkit. As an initial solution, the Globus Project, together with NCSA, has developed a packaging tool that simplifies the deployment of thick Grid services and components. The issues intended to be addressed with this tool are as follows:

• A packaging manager that helps during the installation, upgrade, and uninstallation of components and that is compatible with existing packaging managers such as the RedHat package manager (RPM).




• An inter-component dependency checker that can track dependencies at compile, link, and run time.
• A versioning system that identifies compatibility between different versions of components, based on a variation of the libtool versioning scheme.
• Distribution of binaries with compile-time flavors, such as the numerical resolution.
• Distribution of relocatable binaries that are independent of any absolute path within the distribution.
• Integration of dependencies on external programs and libraries not distributed with the current component.
• Support for runtime configuration files that allow the custom configuration of components that are installed with statically based runtime managers. An example would be the modification of runtime parameters after the package has been installed in the destination path.
• Support for source code distribution of the components.
• Inclusion of meta-information in the description of the components to assist in the preservation of the components.

The packaging toolkit has been tested on the 2.0 beta version of the Globus Toolkit. As part of this test, various packages have been released that are targeted at different platforms and include various sets of components. In the future it is to be expected that an administrator may choose a particular set of components and a platform and get back a custom-assembled package for his needs. The automatic generation of metadata related to such a package is integrated into the Grid information infrastructure that is implemented as part of MDS. Thus, it will be possible to query for installations of Globus and the supported features in various domains. The benefit of using a native C implementation of Grid components lies in its speed and the possible integration with other component frameworks such as the Common Component Architecture [3].
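The libtool-style versioning mentioned in the list above encodes each library version as a current:revision:age triple, where the library implements every interface number from current - age through current. The following is our own minimal illustration of such a compatibility check, not code from the packaging tool itself:

```java
/** Sketch of libtool-style version compatibility: a component built against
 *  interface number 'wanted' can link against a library whose version triple
 *  is current:revision:age iff current - age <= wanted <= current. */
public class LibVersion {
    final int current, revision, age;

    LibVersion(int current, int revision, int age) {
        if (age > current) {
            throw new IllegalArgumentException("age may not exceed current");
        }
        this.current = current;
        this.revision = revision;
        this.age = age;
    }

    /** True if this library still implements interface number 'wanted'. */
    boolean supports(int wanted) {
        return wanted >= current - age && wanted <= current;
    }

    public static void main(String[] args) {
        LibVersion v = new LibVersion(3, 0, 1); // implements interfaces 2 and 3
        System.out.println(v.supports(2)); // compatible
        System.out.println(v.supports(1)); // too old: below current - age
    }
}
```

The revision field only orders bug-fix releases of the same interface and plays no role in the compatibility test.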

5 Thin Grid Services and Components

Although the packaging mechanism for Globus allows software architects to develop a set of components that can be installed and configured by system administrators and Grid component designers, it is still too complex, or impossible, for Grid users to deal with their installation. Thus, many projects suggest developing thin clients that expose the scientific problem-solving process through convenient Web-based interfaces. A good example of such a Grid-based application is the astrophysical computing portal [13], allowing scientists of the community to access supercomputing resources and to interact in the problem-solving process as required by the astrophysical community (see [24]). This application has two different aspects that make it interesting for the deployment of components in computational Grids. First, the thin clients enable application users to interact easily with the system without updating their local software, as all


communication is performed through a Web-based portal using HTML and DHTML technologies. If state-of-the-art Web browsers are available on the application users' clients, no additional installation task is required. Naturally, the designer of the software must take special care to fulfill the application users' needs, as well as provide sufficient guidance for administrators to install such a portal. An obvious advantage of such a portal is the reduced overhead in installing and maintaining it. Customization for users is provided by saving the state of the last interaction with the portal. Second, the portal is used to develop reusable components as part of the Cactus framework. This allows astrophysicists to develop components collaboratively that can be shared in the community.

[Figure 3 diagram: the Astrophysics Collaboratory application server, operated within a virtual organization, combines the Cactus portal and Cactus workbench with general ASC Grid services (software deployment and maintenance, a component and software repository, software configuration, session management, resource monitoring and management, a parameter archive, and user credential and account management). The server interfaces to the Grid through the Java CoG Kit and Grid protocols: GRAM for compute resources, GSI-CVS for the code repository, GridFTP for storage, LDAP for MDS, GSI, MyProxy, HTTP to the Web server, and JDBC to a database. Deployment on clients is avoided owing to the availability of HTML-based interfaces; components are deployed and maintained on the Web server through Java and JSP components.]
Figure 3: The structure of the ASC application server, showing the various components.

This is achieved by exposing tools such as a GSI-enhanced CVS and component compilation through the portal. The astrophysical community uses the term thorns instead of components. The thorns are used to enhance the problem-solving environment with new methods and services. Other users are able to access the components


from the shared component repository. The Globus Toolkit is used to provide the necessary security infrastructure. Additional administrative overhead is needed to maintain such a portal and to determine rules and policies for interacting as a member of the community: providing access to the software repository, guidelines for the software configuration, and automatic software deployment.

6 Slender Grid Services and Components

At SC2001 in Denver we demonstrated how the use of slender services and components can support component development and deployment in computational Grids (see Figure 4). We define slender clients as clients that are developed in an advanced programming language and can be installed through a Web-based portal on the local client machine. This allows the creation of advanced portals and the integration of locally available executables as part of the problem-solving process. Furthermore, it enables the integration of sophisticated collaborative tools provided by third parties. Although slender clients can be written in any programming language, in our prototype we chose the Java programming language. We rely on the Java CoG Kit [22] for the interface to the Grid resources that can be accessed through the slender clients. In our demonstration we developed two popular components and deployed them through the Java Web Start technology. Java Web Start provides a deployment framework that allows client users with access to a local resource to install Java components that can then be accessed from the cache of the local browser. This reduces the installation time when a component is used more than once. Additionally, it provides an automatic framework for updating a component when a new version is placed on the Web server. While developing and designing a sophisticated set of components using the protocols defined by the Globus Project, we achieved interoperability between the Globus Toolkit and our Web-enabled slender clients. Thus, our practical experience demonstrated the usability of this technology as part of a deployment strategy for Grid computing.
Furthermore, we conducted tests with users not familiar with the Web Start technology: they were able to deploy the components on their clients in a matter of two minutes. This is in strict contrast to the installation of the currently distributed Globus clients, which is sufficiently complex that users typically attend a training session to learn about the installation process. Although the Globus Toolkit packaging framework described in a previous section aims to improve this situation by delivering a client package, the developers of the Globus Toolkit must clearly maintain a significant variety of binary releases that are to be installed with an InstallShield-like capability on Linux, Unix, and Windows platforms. By using Java for this task, we cover a significant portion of the target machines and are even able to support the Macintosh platform, so far not explored by the other packaging effort.
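Deployment through Java Web Start is driven by a JNLP descriptor served from the portal's Web server. The sketch below is illustrative only: the code base URL, jar names, and main class are placeholders, not the files used in the SC2001 demonstration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical JNLP file for a slender Grid client. Web Start caches the
     jars locally and re-downloads them only when a newer version appears
     on the server. -->
<jnlp spec="1.0+" codebase="https://portal.example.org/clients"
      href="mds-browser.jnlp">
  <information>
    <title>MDS Browser (slender client)</title>
    <vendor>Example Virtual Organization</vendor>
    <offline-allowed/> <!-- cached copy keeps working without the network -->
  </information>
  <security>
    <all-permissions/> <!-- requires the jars to be signed -->
  </security>
  <resources>
    <j2se version="1.3+"/>
    <jar href="mds-browser.jar" main="true"/>
    <jar href="cog-jglobus.jar"/> <!-- Java CoG Kit for Grid protocol access -->
  </resources>
  <application-desc main-class="org.example.MdsBrowser"/>
</jnlp>
```

Because the descriptor requests all permissions, Web Start runs the client only if the referenced jars are signed, which matches the signed-component approach discussed below.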


[Figure 4 diagram: slender clients access, through the Grid, advanced instruments, distributed Grid and Web servers, supercomputing centers, compute clusters, and information servers.]

Figure 4: The slender client environment, which significantly simplifies the deployment and maintenance of clients accessing Grid functionality.

During the SC2001 demonstration, we also showed that it is possible to integrate programs natively installed on the client into a Grid component framework. Our demonstration obtained a molecule via a secure connection to a Globus-enabled information resource and displayed it with the help of a natively compiled molecular viewer (RasMol). Thus, this easy deployment methodology enables the inclusion of new resources within computational Grids. Furthermore, we are able to integrate certificates into components that can be used to guarantee authenticity and portability between deployed components.

7 Comparison of the Scenarios

While experimenting practically with the various deployment scenarios in Grid settings, we have come to acknowledge a number of advantages and disadvantages of each approach. We are still in the process of refining this analysis. In Table 1 we list several of the features we have looked at for the development and deployment of components. Each of the approaches can access sophisticated native components in C or

FORTRAN. The overhead of accessing these native components is low in each case. Nevertheless, a significant benefit can be achieved by using Java as the component integration language, owing to its well-known advantages for component development [9].

Table 1: Comparison between the various scenarios.

[Table 1 compares the thick, slender, and thin (slim) scenarios across the following features: primary language (native C/FORTRAN, accessed via JNI in the slender case; Java applications versus third-tier applets; HTML/XML), component versioning, component interoperability through signatures, component meta-information, component repository (Web server versus GSI-CVS), source repository (CVS), portable GUI and its sophistication (Tcl/Tk and Qt versus Swing versus HTML), interactivity limited to the browser [20], speed of interface interaction, first-time activation cost and cost of subsequent use, support for power users, incremental component update, integration of client-side programs, access restriction to the client by sophisticated policies, standard component frameworks (Beans/EJB [16]), sandboxing, desktop integration, offline operation, automatic installation of supporting components, deployment in Grids (distributed versus replicated), and the deployment protocol (JNLP for slender clients; for Grid-wide deployment still to be defined).]
Additional benefits can be found in the implicit integration of documentation and meta-information as part of the Java packaging. These packages can be signed with

the standard Java keytool to generate signatures improving the authenticity of the packages before download. Component repositories can thus be implemented as part of already existing Web services, enabling others to share their components with the community in an easy fashion. As pointed out before, the Java Web Start technology provides an excellent mechanism for installing such signed components on local clients. A further advantage is the availability of a sophisticated user-interface development environment that interfaces with the desktop and so far cannot be replicated with HTML/XML technologies.

The slender scenario has a disadvantage in its use of the Java programming language, as portability is determined by the availability of a JVM for the targeted Grid resource. Since some Grid resources have insufficient support for Java, it is necessary to interface to these resources through native libraries. Other reasons may be restrictions in the address space or the speed at which numerical calculations are performed (though studies show that Java performance can be improved dramatically [17]).

In our practical experience, many application users and designers are initially pleased with an HTML-based interface but quickly experience frustration, as the interface does not provide enough flexibility and speed during a continuous period of use. An example of such an application is the use of our Java-based MDS browser [23], which has thousands of users, in contrast to its original CGI counterpart, which is simply too slow between page updates. On the other hand, users that are not able or willing to install this tool still use the CGI-based browser. The ability to sandbox client-side applications is a further advantage of the slender client and enables the creation of sophisticated high-throughput compute resources similar in spirit to SETI@home [25], Condor, and the Frontier SDK [19].
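Since MDS is LDAP-based, a slender Java client can perform such directory queries through the standard JNDI API. The host, port, base DN, and object class below are invented placeholders; only the JNDI calls themselves are standard:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

/** Hedged sketch: an MDS lookup expressed as a plain LDAP search via JNDI.
 *  Server address, base DN, and object class are placeholders. */
public class MdsQuery {

    /** Build an LDAP filter for software installations with a given name. */
    static String softwareFilter(String name) {
        return "(&(objectclass=GlobusSoftware)(cn=" + name + "))";
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://mds.example.org:2135"); // placeholder
        try {
            DirContext ctx = new InitialDirContext(env);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> results =
                    ctx.search("o=Grid", softwareFilter("globus"), controls);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        } catch (NamingException e) {
            // No directory server is reachable in this self-contained sketch.
            System.out.println("MDS query failed: " + e.getMessage());
        }
    }
}
```

A real client would additionally authenticate the connection via GSI rather than binding anonymously.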

8 Related Work in the Grid Community

Related work in component assembly and deployment was performed in Grid-related activities even before the term Grid was coined [12]. The Globus Project has previously defined a schema for the Metacomputing Directory Service [4, 6] that allows meta-information about components installed on remote resources to be stored. With input from the Globus team, the University of Tennessee has completed a schema proposal to the Grid Forum [15]. Most recently, new projects such as the European GridLab project and the DOE Science Grid will also need to address the deployment of Grid components and services. The research conducted in this paper will be beneficial for this work.

9 Summary

In this paper we have outlined three scenarios that have a significant impact on the deployment strategy of components within Grids. Although none of the deployment strategies is all-encompassing, we have learned that together they address many aspects

of the current practical component deployments in Grids. We found the strategy of signed slender clients to be much superior to the thin-client approach supported by other communities. In fact, we found that artificial requirements were often put in place, without reason, that prevented developers from considering slender-client deployment strategies, even ones that allow the integration of previously written client software. Additionally, we have shown that with the availability of Java we were able to deploy components with ease on Java-enabled platforms, including Solaris, Linux, Windows, and Macintosh. We will continue our research in this field, as it is essential for the acceptance and the success of Grids.

Acknowledgments. This work was supported by the Mathematical, Information, and Computational Science Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38. DARPA, DOE, and NSF support Globus Project research and development. We thank Ian Foster and Mike Russell for valuable discussions during the course of the ongoing development. This work would not have been possible without the help of the Globus team. Globus Toolkit and Globus Project are trademarks held by the University of Chicago.

References

1. The Globus Project WWW page, 2001, http://www.globus.org/.
2. The Globus Security Web pages, 2001, http://www.globus.org/security.
3. R. Armstrong, D. Gannon, A. Geist, K. Keahey, S. Kohn, L. McInnes, and S. Parker. Toward a Common Component Architecture for High Performance Scientific Computing. In Proc. 8th IEEE Symp. on High Performance Distributed Computing, 1999.
4. K. Czajkowski, S. Fitzgerald, I. Foster, and C. Kesselman. Grid Information Services for Distributed Resource Sharing. In Proc. 10th IEEE International Symposium on High Performance Distributed Computing, 2001, http://www.globus.org.
5. D. H. J. Epema, M. Livny, R. van Dantzig, X. Evers, and J. Pruyne. A Worldwide Flock of Condors: Load Sharing among Workstation Clusters. Future Generation Computer Systems, 12.
6. S. Fitzgerald, I. Foster, C. Kesselman, G. von Laszewski, W. Smith, and S. Tuecke. A Directory Service for Configuring High-Performance Distributed Computations. In Proc. 6th IEEE Symp. on High Performance Distributed Computing, 1997, 365-375.
7. I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputer Applications, 11(2), 115-128, http://www.globus.org.
8. I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. Intl. J. Supercomputer Applications, 15(3), 200-202, http://www.globus.org/research/papers/anatomy.pdf.
9. V. Getov, G. von Laszewski, M. Philippsen, and I. Foster. Multi-Paradigm Communications in Java for Grid Computing. Communications of the ACM, 44(10), 119-125, http://www.globus.org/cog/documentataion/papers/.
10. Globus. Packaging Design Document, 2001, http://www-unix.globus.org/packaging/rfc.html.
11. W. E. Johnston, D. Gannon, and B. Nitzberg. Grids as Production Computing Environments: The Engineering Aspects of NASA's Information Power Grid. In Proc. 8th IEEE Symposium on High Performance Distributed Computing, 1999, IEEE Press, http://www.ipg.nasa.gov/.
12. G. von Laszewski. An Interactive Parallel Programming Environment Applied in Atmospheric Science. In G.-R. Hoffman and N. Kreitz, eds., Making Its Mark, Proceedings of the 6th Workshop on the Use of Parallel Processors in Meteorology, World Scientific, European Centre for Medium-Range Weather Forecasts, Reading, UK, 1996, 311-325.
13. G. von Laszewski, M. Russell, I. Foster, J. Shalf, G. Allen, G. Daues, J. Novotny, and E. Seidel. Community Software Development with the Astrophysics Simulation Collaboratory. Concurrency and Computation, to be published, http://www.globus.org/cog.
14. M. Litzkow, M. Livny, and M. Mutka. Condor - A Hunter of Idle Workstations. In Proc. 8th Intl. Conf. on Distributed Computing Systems, 1988, 104-111.
15. J. Miller. Grid Software Object Specification, 2001, http://www-unix.mcs.anl.gov/gridforum/gis/reports/software/software.pdf.
16. R. Monson-Haefel. Enterprise JavaBeans. O'Reilly, Beijing; Sebastopol, CA, 1999.
17. J. E. Moreira, S. P. Midkiff, M. Gupta, P. V. Artigas, P. Wu, and G. Almasi. The NINJA Project. Communications of the ACM, 44(10), 102-109.
18. OMG. CORBA: Common Object Request Broker Architecture, 2001, http://www.omg.org.
19. Parabon. Frontier SDK Documentation, 2001, http://www.parabon.com/developers/frontier/index.jsp.
20. R. N. Srinivas. Java Web Start to the Rescue. JavaWorld, 2001, http://www.javaworld.com/javaworld/jw-07-2001/jw-0706-webstart.html.
21. J. Towns. The Alliance Virtual Machine Room, 2001, http://archive.ncsa.uiuc.edu/SCD/Alliance/VMR/.
22. G. von Laszewski, I. Foster, J. Gawor, and P. Lane. A Java Commodity Grid Kit. Concurrency and Computation: Practice and Experience, 13(8-9), 643-662, http://www.globus.org/cog/documentation/papers/cog-cpe-final.pdf.
23. G. von Laszewski and J. Gawor. Copyright of the LDAP Browser/Editor, 1999.
24. G. von Laszewski, M. Russell, I. Foster, J. Shalf, G. Allen, G. Daues, J. Novotny, and E. Seidel. Community Software Development with the Astrophysics Simulation Collaboratory. Concurrency and Computation, to be published, http://www.globus.org/cog.
25. W. T. Sullivan, III, D. Werthimer, S. Bowyer, J. Cobb, D. Gedye, and D. Anderson. A New Major SETI Project Based on Project Serendip Data and 100,000 Personal Computers. In Proc. of the Fifth Intl. Conf. on Bioastronomy, 1997, http://setiathome.ssl.berkeley.edu/woody_paper.html.