
Market-Based Massively Parallel Internet Computing

Peter Cappello, Bernd O. Christiansen, Michael O. Neary, and Klaus E. Schauser
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106
{cappello, bernd, neary, [email protected]}

Abstract

Recent advances in Internet connectivity and implementations of safer distributed computing through languages such as Java provide the foundation for transforming computing resources into tradable commodities. We have developed Javelin, a Java-based prototype of a globally distributed, heterogeneous, high-performance computational infrastructure that conveniently enables rapid execution of massively parallel applications. Our infrastructure consists of three entities: hosts, clients, and brokers. Our goal is to allow users to buy and sell computational power, using supply-and-demand market mechanisms to marshal computational power far beyond what can be achieved via conventional techniques. Several research issues must be worked out to make this vision a reality: allocating resources between computational objects via market mechanisms; expressing and enforcing scheduling and quality-of-service constraints; modeling programming in a global computing ecosystem; supporting heterogeneous execution without sacrificing computational speed; ensuring host security; global naming and communication; and client privacy.

1. Introduction

Internet-based cluster computing involving thousands of computers has been very successful lately, demonstrating its usefulness for several important and newsworthy applications. In January 1998, a PC in California running a primality test part-time for 46 days found the 37th known Mersenne prime, the largest prime discovered so far [27]. This computer was part of a world-wide distributed computing effort to find large primes, involving a group of more than 4000 workstations and PCs. In January 1997, RSA announced a code-breaking challenge. Within only 3.5 hours, a graduate student at Berkeley cracked the 40-bit RSA code, running on a 128-processor UltraSparc NOW cluster. Two weeks later, the 48-bit RSA code was cracked using 3,500 workstations spread across Europe. In June 1997, the 56-bit DES was cracked using approximately 78,000 computers, as many as 14,000 per day [15]. In October 1997, the 56-bit RSA RC-5 was cracked in a very large world-wide effort [16]. Quoting the press release: "At the close of the contest there were over 4000 active teams processing over 7 billion keys each second at a combined computing power equivalent to more than 26 thousand high-end personal computers. The work was performed entirely using consumer PCs during off-hours or otherwise idle time. Add them all together, however, and you have the world's largest computer."

The above examples, while dramatic and encouraging, have several shortcomings. Security is of foremost concern: participation requires either downloading and running an executable, or downloading unknown source code and then compiling it. The code may contain bugs or viruses that can destroy or spy on local data. Secondly, participating in such a computational challenge is not easy. Each of the examples mentioned above runs on a different set of architectures and requires different mechanisms to download and participate. We think that this lack of a unified framework is one reason why, of all the computers connected to the Internet, only a small number participate. Finally, participants currently do so only on a volunteer basis, donating idle computing time that otherwise would have been wasted. There is no incentive beyond curiosity, helping a worthy cause, or fame (if one is lucky enough to find the next record prime!).

(Bernd Christiansen was supported by the German Academic Exchange Service (Deutscher Akademischer Austauschdienst). Klaus E. Schauser was supported by NSF CAREER Award CCR-9502661. Computational resources were provided by NSF Instrumentation Grant CDA-9529418, Sun Microsystems, and NSF Infrastructure Grant CDA-9216202.)

Several recent projects, such as Globus [22], Legion [28], Charlotte [6], Atlas [5], ParaWeb [9], and Popcorn [10] share the vision of seamlessly integrating networked computers into a global computing infrastructure. They have identified many of the issues, and address them holistically. However, apart from Popcorn, none use market-based mechanisms to coordinate client and host interaction.

1.1. Our Approach

We think it is necessary to make participation in such a global infrastructure as easy as possible, to ensure secure execution of untrusted code, and to include an economic incentive. This leads to a supply-and-demand driven, market-based global computing environment.

Architecture: The infrastructure consists of three entities: brokers, clients, and hosts (see Figure 1). A broker coordinates supply and demand for computing resources. A client is a process that requests resources, and a host is a process that offers its resources (i.e., CPU cycles, memory, and permanent storage) as a commodity by registering with a broker. To clients, a broker provides the illusion of a supercomputer that can be used to run distributed applications, where each task of an application is mapped to a host so that the client's overall requirements are met. In our model, the roles of hosts and clients are not fixed: a machine may register as a host when idle and act as a client when its owner needs additional computational power. While we target our infrastructure to work across the whole Internet, it is also usable on a smaller scale. In fact, we anticipate that its initial implementations will be inside large organizations with intranets consisting of a large number of heterogeneous computers.
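As an illustration of these three roles, the broker's matchmaking between registering hosts and requesting clients can be sketched as follows. This is a toy sketch, not Javelin's actual API; every class and method name here (Broker, register, requestHosts) is hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative sketch of the broker's role: hosts register their
// resources, clients request capacity, and the broker matches the two.
// All names are hypothetical, not Javelin's real interface.
class Broker {
    // Hosts currently offering resources, in registration order.
    private final Queue<String> idleHosts = new ArrayDeque<>();

    // A host offers its resources by registering with the broker.
    void register(String hostAddress) {
        idleHosts.add(hostAddress);
    }

    // A client asks for up to n hosts; the broker grants what it has.
    List<String> requestHosts(int n) {
        List<String> granted = new ArrayList<>();
        while (granted.size() < n && !idleHosts.isEmpty()) {
            granted.add(idleHosts.remove());
        }
        return granted;
    }
}

public class BrokerDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.register("host-a.example.org");  // idle machine acts as host
        broker.register("host-b.example.org");
        // A client now requests two hosts for a parallel computation.
        System.out.println(broker.requestHosts(2).size());  // prints 2
    }
}
```

Note that the same machine could call both register and requestHosts at different times, reflecting the paper's point that the host and client roles are not fixed.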

Figure 1. The Javelin architecture: clients, brokers, and hosts.

Micro-Economic Model: It is important to motivate individual users to participate. We address this issue by giving individual users an economic incentive to offer their resources. Hosts receive credit that may be exchanged for a micro-currency (cyber money) or other resources (e.g., CPU cycles, information access, backups, Internet access, or software licenses). The broker coordinates supply and demand, bills clients, and credits hosts. This leads to market-based resource allocation, including on-line bidding auctions for available resources and reserving resources ahead of time (futures). It can be described by a micro-economic model that captures the costs of resources, as well as communication costs and the broker's commission. Market mechanisms not only provide an incentive for participating, but also address issues inherent to distributed systems. We believe that they can be extended naturally to achieve scalability, load balancing, and the distribution of the broker process itself.

Easy Participation: We want everyone connected to the Internet or an intranet to participate easily. To this end, our design is based on widely used components: Web browsers and the portable language Java. Our architecture is not dependent on Java or Web browsers; this is merely one convenient way of offering resources. Experienced users may install sophisticated host software, such as screen savers, that automatically makes resources available, or even provides confined execution environments, allowing for the safe execution of untrusted binaries.

Prototype Implementation: We have implemented a Java-based prototype, called Javelin [12]. By simply pointing their browser to a known URL, users automatically enable their computer system to host parts of parallel computations. This is achieved by downloading and executing an applet that spawns a small daemon thread which waits and "listens" for tasks from a broker. The key technology is the ability to safely execute an untrusted piece of Java code sent over the Internet, something most of today's Web browsers can do (assuming all security bugs have been found and fixed). Java also solves the problem of heterogeneous platforms. While the computational speed of most Java virtual machine implementations is still not very good, emerging technologies such as just-in-time compilation promise to overcome these problems in the near future.

Services: A host offering its resources provides the service of accepting code from the network and executing it (for example, by running a Java virtual machine). Hosts also can offer other services, such as FFT or ray tracing, when they have software packages that are very expensive, efficient, or complex. (Such services can promise better performance, since they require neither code shipping nor a safe execution environment.) From an economic perspective, we view the Internet or an intranet as a service market [36]. Services may range from running standard applications to general-purpose services that allow for the safe execution of arbitrary untrusted code. Consequently, a broker is just another service. A broker itself may require additional resources and take advantage of other services, such as other brokers. Service markets are being pursued by many research groups [36, 30, 42, 59, 40, 57] and standardization organizations [41]; they allow for convenient software reuse and for building applications from basic building blocks.

Applications: Given the current limited Internet/intranet communication bandwidth, communication requirements are a concern. Fortunately, the advanced Gigabit networks that are being developed and deployed should result in a dramatic improvement. Nevertheless, communication probably will not perform as well as dedicated parallel hardware. On the other hand, Javelin's potential number of nodes is much larger than that of any existing supercomputer. Computation-bound simulations or scientific computations can either be much larger or run much faster. Potential applications include large, coarse-grain scientific simulations such as quantum computing simulations, ILP, SOR, prime number factorization, rendering, ray tracing, and code-cracking problems. For these applications, we expect to obtain performance far superior to today's supercomputers at a fraction of the cost.
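The host-side participation described above, an applet that spawns a small daemon thread waiting for tasks from the broker, can be sketched as follows. Since the applet and browser machinery are omitted here, a plain daemon thread and a local queue stand in for the broker connection; all names are illustrative, not Javelin's actual implementation.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the host-side daemon: a small daemon thread that waits
// ("listens") for tasks from the broker and runs them. A local queue
// stands in for the broker connection, so every name is illustrative.
public class HostDaemon {
    private final BlockingQueue<Runnable> tasksFromBroker = new LinkedBlockingQueue<>();

    void start() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    // Block until the broker delivers a task.
                    Runnable task = tasksFromBroker.take();
                    task.run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // shut down quietly
            }
        });
        worker.setDaemon(true);  // dies with the JVM, like the applet's thread
        worker.start();
    }

    void deliver(Runnable task) {  // stands in for a message from the broker
        tasksFromBroker.add(task);
    }

    public static void main(String[] args) throws InterruptedException {
        HostDaemon host = new HostDaemon();
        host.start();
        host.deliver(() -> System.out.println("task executed"));
        Thread.sleep(200);  // give the daemon thread time to run the task
    }
}
```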

1.2. Outline

The rest of the paper is structured as follows. In Section 2, we discuss important research issues that must be addressed in a global computing infrastructure and the benefits of using market-based mechanisms for resource allocation. Section 3 presents experimental results from a prototype we developed. Section 4 discusses related work. Section 5 summarizes our future research and highlights the importance of our work.

2. A Globally Distributed Computing Infrastructure

The envisioned infrastructure realizes the abstraction of a virtual supercomputer drawing on a world-wide market of networked computational resources. This virtual supercomputer is configured dynamically from network resources, paid for on a per-use basis, and administered without human intervention. This implies that certain aspects of its operation must be hidden from the client, such as the dynamically varying number of supporting hosts, the heterogeneity of the host set, and host failures. Similarly, properties associated with real supercomputers, such as privacy and a suitable programming interface, must be provided. The infrastructure's ease of use and economic incentive to participate should ensure a number of hosts larger than the largest NOW ever constructed. Its resulting computational capacity is potentially much larger than that of any existing system, enabling parallel applications that are commensurately larger and/or faster than any undertaken thus far.

In this section, we first describe the properties that the virtual supercomputer must have. This property list also represents a research agenda; other researchers already are pursuing many of these issues, and we touch on some of those efforts in Section 4. After describing the system properties, we describe some of their benefits.

2.1. System Properties

The computing environment we envision has several unique properties that, taken together, distinguish it from other distributed systems and pose challenging technical problems.

Ease of use: We view ease of use as a key property of the proposed infrastructure, since it relies on the participation of thousands of users. We envision a web-centric approach where a user only needs ubiquitous software, such as a Java-enabled web browser, to participate.

Interoperability: In heterogeneous systems like the Internet, hosts and clients may have different instruction sets, word sizes, or operating systems. The proposed infrastructure must provide the means to overcome this heterogeneity. This issue has been addressed either by employing machine-independent languages, such as Java [26], E [18], and Limbo [33], or by providing multiple binary executables [51]. Machine-independent languages achieve portability at the expense of some performance; multiple binary executables achieve performance at the expense of portability. It thus is desirable to support both approaches, to meet any application's requirements.

Security: Executing an untrusted piece of code poses integrity and security threats to the host. To protect the host from buggy or malicious code, any untrusted piece of code must be run within an environment that allows only limited access to system resources such as the file system. Host security already has been addressed by a variety of mechanisms including Software Fault Isolation [13], Secure Remote Helper Applications [25], and, of course, interpreted languages such as Java. Although much has been done, researchers continue to work vigorously on this important problem. To the extent that trust exists, protection costs can be reduced. The natural progression is that trust can be established locally, but diminishes as the size of the system and the number of participants grow large. We thus move from small, centrally coordinated, trusted systems to large distributed systems of less trusted participants that require protective mechanisms. In the limit, we approach globally distributed systems of anonymous, mutually untrusted participants requiring high levels of protection. Axelrod's [4] iterated prisoner's dilemma tournament shows that, even between mutually unknown agents, trust can arise spontaneously over time. However, in the presence of reputation systems (i.e., services that associate an object's or agent's identity with past performance), the time and cost of achieving trust can be reduced.

Client privacy: A company's internal data and "know-how" represent a value that usually is protected from unauthorized access. The proposed infrastructure must provide mechanisms that enable clients to hide the data, and possibly the algorithms, that are passed to untrusted hosts (see, e.g., Feigenbaum [19]). Although encrypted computing might not be possible for all applications, a number of important practical problems, including FFT and MM, can be encrypted [1]. Another way of ensuring client privacy is to split the computation into fragments such that no part by itself reveals any useful information about the complete computation.

Performance: Since our infrastructure aims at providing better performance than is available locally, the efficient execution of anonymous code is essential. The interpretation overhead of portable languages is being overcome by modern compilation techniques, such as just-in-time compilation [56, 46], that allow for an execution speed close to that of compiled C code.

Scalability: As performance relies heavily on the number of participants, scalability is a key issue. We intend to provide an infrastructure that is scalable with respect to communication (i.e., limitations imposed by subsidiary technologies, such as Java applet security, must be overcome), computation (i.e., resource allocation must be distributed), and administration (i.e., requiring neither login accounts on hosts nor operating system changes). Although some of these issues have been addressed by various researchers, an environment that scales to the Internet's size remains a research issue.

Incentive: Since our vision relies on the cooperation of thousands of hosts, we want to give potential hosts an incentive to participate by capturing the interactions between clients and hosts in a microeconomic model. We feel that the application of market mechanisms is the right way to solve the problem of resource allocation in potentially planet-wide distributed systems. Important underlying technologies required for the implementation of a market mechanism, such as cyber money, authentication schemes, and secure transactions, are maturing rapidly. Section 2.2 identifies in more detail the research issues related to market-based resource allocation.

Correctness: Economic incentives have a dark side: the specter of hosts faking computations and returning wrong or inaccurate results. To deal with this problem, clients may be able to apply algorithm-specific techniques that cheaply verify a computation's correctness (e.g., it is simple to verify a proposed solution to a system of linear equations), or cheaply verify its correctness with high probability (see, e.g., Blum [7]). Other techniques include statistical methods (e.g., in an image-rendering computation, the client can compute some random pixels and compare them to the corresponding pixels returned by hosts), checksum computations inserted into the code to ensure that the code was actually run, or redundant computation (i.e., performing a computation on several hosts). Reputation services, similar to credit and bond ratings, also give hosts an economic disincentive for faking computations.

Fault tolerance: In a potentially Internet-wide distributed system, fault tolerance is crucial: hosts may be untrusted and communication is unreliable. It can be provided either transparently by the underlying architecture or explicitly by the application itself. Since fault tolerance consumes resources, it is an added-value service that, if requested, must be paid for. Although fault tolerance is discussed comprehensively in the literature, further research is needed to meet the requirements of the economic setting.

Quality of service: Quality of service must be incorporated and ensured. A host negotiating for a task should be able to calculate accurate runtime estimates based on the task's profile as well as its own machine characteristics, in order to decide whether it can meet the client's runtime constraints without actually performing the computation. One possibility is micro-benchmarks that characterize the host as a point in a multidimensional performance space [43]. A suitable benchmark must be representative, quickly computable by the host, and quickly evaluated by the broker. The broker also may want assurance that the benchmark has indeed been computed by the target host (as opposed to the host passing it off to a faster surrogate in order to appear to be a more capable machine). We think that, analogous to real-life economics, hosts, brokers, and clients can have reputations (e.g., whether a host's availability or quality-of-service guarantees have been met in the past).

Locality: At present, neither the latency nor the bandwidth across the Internet is satisfactory for communication-intensive parallel applications. Thus, both must be taken into account when mapping applications to the available resources. This implies the need for methods to determine or forecast the communication latency and bandwidth requirements and execution-time constraints for given problems [58].

Computational substrate: The envisioned infrastructure provides a substrate on which various communication, data, and programming models may be implemented; different models are suitable for different applications. We intend to provide the programmer with abstractions, such as a global file system, shared memory, and reliable communication channels. These abstractions are services that can be purchased optionally.

Dynamic reconfiguration: We expect hosts to be able to freely associate with, and disassociate from, our infrastructure: hosts can withdraw their computational resources at any time, although, depending on their contract, this may result in a penalty. Mechanisms for task migration, such as checkpointing, object serialization, and worm programs [14], have been investigated by other researchers. Toolkit-level support already is available.
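The cheap correctness check mentioned above for linear systems can be made concrete: verifying a host's proposed solution x to Ax = b costs only O(n^2) multiplications, whereas solving the system honestly costs O(n^3). The method and class names below, and the residual tolerance, are illustrative.

```java
// Sketch of a cheap algorithm-specific correctness check: a client
// verifies a host's claimed solution x of A.x = b by computing the
// residual row by row, in O(n^2) time, far cheaper than re-solving.
public class SolutionCheck {
    static boolean verifies(double[][] a, double[] x, double[] b, double tol) {
        for (int i = 0; i < b.length; i++) {
            double sum = 0.0;
            for (int j = 0; j < x.length; j++) {
                sum += a[i][j] * x[j];        // one row of A.x
            }
            if (Math.abs(sum - b[i]) > tol)   // residual too large: reject
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] a = {{2, 1}, {1, 3}};
        double[] b = {5, 10};
        System.out.println(verifies(a, new double[]{1, 3}, b, 1e-9));  // true
        System.out.println(verifies(a, new double[]{1, 1}, b, 1e-9));  // false: a faked result
    }
}
```

The same pattern, an O(cheap) check on an O(expensive) result, underlies the random-pixel and checksum techniques as well.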

Offline operation: Since dial-up accounts and mobile computing are common, clients must be able to disconnect from the network after having submitted an application, returning later to retrieve results. Similarly, we expect hosts to be able to receive a task, compute it offline, and then reestablish a network connection to return the results.

2.2. Benefits of Market-Based Resource Allocation

Some of the benefits to expect from market-based resource allocation are:

Coordination that scales: As computations have grown in size and complexity, resource allocation and control have become more distributed. A natural next step in the evolution of large, complex, distributed computational systems is the use of market mechanisms. Indeed, if mechanisms for resource coordination are to scale, they must be distributed. Markets are a mechanism for highly adaptive, distributed coordination that has been honed over centuries, and they continue to outperform centralized mechanisms for the coordination of large, complex systems. Trading and pricing provide potential compute suppliers with an incentive to participate in a global computational market whose size and efficiency may dwarf what we are capable of marshaling at present. Such a market requires an infrastructure, some of whose components already exist in prototypical form (e.g., the Internet, the Web, and Java). By making computing resources tradable commodities, the virtual supercomputer enables its distributed resources to be better utilized. Market-based resource allocation, privacy, security, and guarantees of correctness, taken together, form the foundation for industrial use of potentially planet-wide networks of workstations. We pursue our goal of massive participation in the supercomputer not just by decreasing the costs of participation, but also by increasing the benefits: solving the technical obstacles to a web-based distributed computing infrastructure decreases the costs to potential hosts, while markets increase the benefits. This next step is crucial to the creation of a dynamic, large, adaptive infrastructure. By creating a market in computation, we enable people to maximize the value of their computational resources. With a trading mechanism, suppliers will compete for the right to serve computational consumers (clients).
Paying only for what you use: A side benefit is that computational consumers will pay for only what they use (better utilization of existing resources). They will no longer have to buy hardware to meet their maximum usage.

Degrade gracefully: Dynamic reconfiguration allows the infrastructure to degrade gracefully when hosts withdraw from the market (temporarily or otherwise). As the average number of participating hosts increases, so does the gracefulness of degradation due to host withdrawal: the law of large numbers implies increasingly stable system statistics.

Upgrade gracefully: We are used to improved performance every time we buy a new computer. We are equally used to slogging along at that rate until we buy the next one. With the introduction of a computation market, we can expect our computational performance/cost ratio to increase continuously, as faster, less expensive equipment makes its way into the infrastructure.

An aggressive spot market: Costs can be reduced by computing during the participating hosts' off-peak hours. Not including communication costs, computational costs at any particular time will be the lowest that any supplier is willing to offer at that time.

A reliable futures market: If we want to plan financially for a future computation, we can secure that computational power today by purchasing computational futures (whose prices presumably will reflect a market forecast of dropping computation prices). Other market derivatives can be used for analogous benefits.

Concentrated computation: The size of computations will be limited primarily by the money that we are willing to bring to bear on the problem. By today's standards, enormous computations will be possible. Communication constraints may limit the size of computation that can be concentrated at one location, perhaps leading ultimately to the need for a market in bandwidth.

Locality of hosts: An important design goal of Atlas [5] is that "The infrastructure should exploit existing hierarchical domains, so that pooled resources are given preferentially to other members of their organization, and so that applications can exploit (higher) local bandwidth." Market mechanisms naturally favor local hosts (i.e., hosts within the same organization); for such hosts, the costs associated with client privacy, host security, and communication are reduced, enabling organizationally internal hosts to underbid outside competitors. Indeed, the farther away a host is, the greater its communication cost and latency. A realistic microeconomic model favors host locality precisely to the extent that locality improves the performance/cost ratio.

2.3. Open Research Issues for Market-Based Resource Allocation

The research issues that we see as central to this next step in the evolution of complex computational systems are briefly described below. Most of these issues are already being studied (see related work in Section 4).

Enforceable contracts: Contracts are a market mechanism for achieving performance guarantees. If the terms of a contract are violated, damage results. The cost of protection can be reduced when restitution can be guaranteed with high probability at relatively low cost. Bonding and insurance mechanisms may help.

Authentication of identity: Authentication is a prerequisite technology for enforceable contracts.

Method of coordination: Objects need a negotiation protocol with minimal overhead. (It will still be large compared with command-style allocation mechanisms.) The negotiation protocol must be capable of evolution: objects with more advanced negotiation protocols must continue to negotiate with older, less advanced objects.

Cost of coordination: Malone et al. [35] have observed that as information technology advances, the cost of coordination goes down, and that this marginally increases the percentage of coordination that should be done via markets, as opposed to centralized resource allocation. It will be interesting to investigate the complexity of coordination costs (i.e., the costs incurred to select a host) associated with advertising, negotiation/bidding, accounting, and achieving the requisite level of trust.

Transaction granularity: How big do transactions need to be to warrant market bidding? Clearly, their minimum size depends on the cost of accounting and negotiation [35]. It would be useful to have an experimentally verifiable quantitative model that can be used to determine the minimum economically feasible transaction size.

Cost of trading algorithms: Humans can use trading strategies that are quite complex. However, Star [47] has successfully conducted double-auction markets between software entities using simple decision algorithms, yielding efficiencies comparable to human markets. How computationally complex do automated strategies need to be in order to obtain the efficiencies that markets provide? How large must their inputs be?

Profit of trading algorithms: How can we measure the quality (e.g., profitability) of a trading algorithm? What can be proved, or experimentally shown, about an algorithm's profit/cost ratio?
Most of the questions above can be addressed experimentally, but the scientific validity of the methodology must itself be investigated with great care.
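To fix ideas, the double-auction clearing mentioned above can be sketched with a very simple decision rule: sort bids and asks, and match while the best remaining bid still meets the best remaining ask. This is an illustrative toy, not Star's actual algorithm; prices and names are invented for the example.

```java
import java.util.Arrays;

// Minimal sketch of double-auction clearing between software entities:
// sort bids and asks, then match the highest bid with the lowest ask
// while the bid covers the ask. A real market would also set a trade
// price (e.g., the midpoint); here we only count clearing trades.
public class DoubleAuction {
    static int clear(double[] bids, double[] asks) {
        double[] b = bids.clone(), a = asks.clone();
        Arrays.sort(b);  // ascending; read bids from the top end
        Arrays.sort(a);  // ascending; read asks from the bottom
        int trades = 0;
        int bi = b.length - 1, ai = 0;
        while (bi >= 0 && ai < a.length && b[bi] >= a[ai]) {
            trades++;  // buyer willing to pay b[bi] meets seller asking a[ai]
            bi--;
            ai++;
        }
        return trades;
    }

    public static void main(String[] args) {
        double[] bids = {10.0, 8.0, 5.0};  // what clients will pay per unit of compute
        double[] asks = {4.0, 7.0, 9.0};   // what hosts demand
        System.out.println(clear(bids, asks));  // prints 2: (10,4) and (8,7) clear
    }
}
```

Even this trivial rule illustrates the open questions above: how much sophistication beyond it is needed before automated traders reach human-market efficiencies?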

3. Java-based Prototype

In this section, we briefly describe Javelin [12], our prototype infrastructure for Internet-based parallel computing using Java. Our system is based on Internet software technology that is essentially ubiquitous: Web technology. It is intended to be a substrate on which various programming models may be implemented. The Javelin architecture follows the model presented in Figure 1. The current prototype already provides some of the properties listed in Section 2, such as ease of use and interoperability. Others, such as scalability and fault tolerance, will be addressed in the near future.

Once these fundamental technological issues have been solved, we will be in a situation where computation is truly a tradable commodity, and buying and selling computational resources will become feasible. At this point we will integrate market-based protocols and algorithms into our design, leading eventually to a system that is naturally load-balanced around the globe and offers users a true incentive to participate. We conducted our performance measurement experiments in a heterogeneous environment: Pentium PCs, Sparc-5s, a 64-node Meiko CS-2 whose individual nodes are Sparc-10 processors, and single- and dual-processor UltraSparcs connected by 10 and 100 Mbit Ethernet. Although all machines run Solaris, heterogeneity was introduced by using Java VMs with and without JIT. Furthermore, all experiments were conducted under these machines' typical workloads.

3.1. Implementation

Our most important goal is simplicity, i.e., to enable everyone connected to the Internet or an intranet to easily participate in Javelin. To this end, our design is based on widely used components: Web browsers and the portable language Java. By simply pointing their browser to the known URL of a broker, users automatically make their resources available to host parts of parallel computations. This is achieved by downloading and executing an applet that spawns a small daemon thread that waits and "listens" for tasks from the broker. The simplicity of this approach makes it easy for a host to participate: all that is needed is a Java-capable Web browser and the URL of the broker.

Tasks are represented as applets that are embedded in HTML pages. This design decision implies certain limitations due to Java applet security: e.g., all communication must be routed through the broker, and every file access involves network communication. Therefore, in general, coarse-grained applications with a high computation-to-communication ratio are well suited to Javelin.

To make the infrastructure more convenient for programmers, it is important to provide a global name space. We have already implemented a user-level global file system for Solaris, called Ufo, which allows access to remote files just as if they were local but requires no root access or OS modifications [2]. We currently are investigating the incorporation of the Ufo file system into the Javelin architecture.

In the following we briefly discuss performance numbers from our prototype. For a more detailed presentation of the performance results and the applications used, the reader is referred to [12].

3.2. Raytracing Measurements

We have ported a sequential raytracer written in Java to Javelin as an example of a real-world application that benefits from additional processors even if communication is relatively slow. The raytracer was written by Frederico Inacio de Moraes and was first parallelized by Laurence Vanhelsuwe [53]. To evaluate the performance and dynamic behavior of our infrastructure, we have raytraced images of size 1024x1024 pixels for the two scenes shown in Figure 2, and for a randomly generated scene: simple is built from a plane and two spheres; cone435 is built from a plane and 435 spheres arranged as a cone; random is built from 12,288 randomly chosen primitives.

Figure 3(a) shows the speedup curve for our parallel raytracer running on simple and cone435 on a cluster of two Sparc-5s, five UltraSparcs, and one dual-processor UltraSparc connected by 10 and 100 Mbit Ethernet. Figure 3(b) gives the speedup curve for raytracing random on the 64 Sparc-10 nodes of our Meiko CS-2. We used a simple Web "browser" written in Java that executes applets and applications by spawning Sun's JVM with its JIT compiler. The graphs illustrate that the achievable speedup depends on the computation-to-communication ratio: the more complex the scene, the longer it takes to compute the color of a pixel, while communication costs stay the same and thus matter less overall.
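The dependence on the computation-to-communication ratio can be made concrete with a toy model (our assumption, not a fit to the measured curves): if the total compute time C is divided over p hosts and each host additionally pays a fixed communication cost m, the speedup is C / (C/p + m), which flattens as p grows and rises as the scene gets more compute-heavy.

```java
// Toy speedup model illustrating the computation-to-communication tradeoff
// discussed above. C = total sequential compute time, m = fixed per-host
// communication overhead, p = number of hosts. This is a simplification
// (it ignores load imbalance and router contention), not Javelin's model.
class SpeedupModel {
    static double speedup(double computeSeconds, double commPerHost, int hosts) {
        return computeSeconds / (computeSeconds / hosts + commPerHost);
    }
}
```

With m = 0 the model reproduces ideal linear speedup; with any fixed m, a more complex scene (larger C) pushes the curve back toward linear, matching the qualitative behavior of the cone435 vs. simple curves.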

Figure 3. Speedup curves for images simple and cone435 on a SparcStation cluster (a) and for random on a 64-processor Meiko CS-2 (b).

Figure 2. Ray-traced images: (a) simple, (b) cone89. (For clarity of illustration, the cone scene is shown with only 89 of its 435 spheres.)

3.3. Mersenne Prime Measurements

As our second application we implemented a parallel primality test that searches for Mersenne prime numbers. This type of application is well suited to Javelin, since it is very coarse-grained, with a high computation-to-communication ratio when testing large Mersenne primes. In our current implementation, we developed a Java application that tests for Mersenne primality, given a range of prime exponents. For our measurements, we tested Mersenne primality for all 119 prime exponents between 4000 and 5000. We selected this range because, on the one hand, the numbers are large enough to simulate the true working conditions of the application, and on the other hand, they are small enough that the full set of measurements completes in a reasonable amount of time.

The first set of measurements was performed on clusters of identical machines. Figure 4(a) presents the speedup curve for test runs on a 64-node Meiko CS-2, while Figure 4(b) shows the curve for a cluster of 8 Sun UltraSparcs. In both cases, the speedup was close to linear as long as the ratio of job size to number of processors was large enough. For large numbers of processors, communication through a single router becomes a bottleneck. In a more realistic setting where huge primes are checked, we do not expect this to be a problem, since the computation will be extremely coarse-grained. For our tests, we chose a strategy in which the biggest tasks (those with the largest amount of computation) were enqueued at the beginning of the task queue, ensuring that the hosts that participate first receive the largest amount of work. This led to an even task distribution.

Next, we tested our prototype on a heterogeneous environment consisting of 16 PCs, 8 UltraSparcs, and one Sparc-5; the execution time for 32 Meiko nodes is given for comparison. We chose an uneven distribution of platforms to reflect a plausible snapshot of the computing resources registered at a broker at some moment. The broker should distribute the tasks such that resource utilization is close to optimal.
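The standard algorithm for deciding whether 2^p - 1 is prime is the Lucas-Lehmer test. The paper does not show its implementation, so the following is an independent sketch using BigInteger:

```java
import java.math.BigInteger;

// Independent sketch of the Lucas-Lehmer test, the standard algorithm for
// Mersenne primality searches like the one measured above. (The paper's own
// Java application is not shown; this is our reconstruction, not its code.)
class LucasLehmer {
    // Returns true iff M_p = 2^p - 1 is prime, for an odd prime exponent p.
    static boolean isMersennePrime(int p) {
        BigInteger two = BigInteger.valueOf(2);
        BigInteger m = BigInteger.ONE.shiftLeft(p).subtract(BigInteger.ONE); // 2^p - 1
        BigInteger s = BigInteger.valueOf(4);
        for (int i = 0; i < p - 2; i++) {
            s = s.multiply(s).subtract(two).mod(m); // s <- s^2 - 2 (mod M_p)
        }
        return s.signum() == 0; // M_p is prime iff the residue vanishes
    }
}
```

Each exponent is an independent, atomic computation whose cost grows rapidly with p, which is exactly why this workload maps so well onto a broker that hands out exponents as tasks.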
Figure 4. Speedup curves for the primality test on a 64-node Meiko CS-2 (Sparc-10 processors) (a) and on a cluster of 8 UltraSparcs (b).

Figure 5 shows total execution times for different combinations of the previously mentioned architectures. In our case, the PCs are slightly faster than the UltraSparcs. The combined resources of all machines showed an improvement, but not a linear one, due to saturation of the router and the large differences in computing speed among the hosts. There are three basic approaches to solving this problem: the client could give out more than one number at a time, reducing the number of messages; more than one router could be used; or hosts themselves could become clients for other hosts, leading to a work-stealing approach as in our raytracer application.

In conclusion, the initial results from our prototype are highly encouraging. The measurements show that we can achieve almost linear speedup for small to medium numbers of hosts with this type of coarse-grained application. The next step is to seek ways to avoid saturating the routing service and other bottlenecks in the system, so that the result will be an infrastructure that truly scales to the size of the Internet. In the long term, we believe the most natural way to avoid bottlenecks on a global scale is the market-based resource allocation approach, since it has already proven itself superior to centralized schemes in the real world.
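The first of the three remedies, handing out several exponents per request, is the simplest to realize and cuts broker round-trips by roughly the batch size. A hypothetical sketch (the class and method names are ours, not Javelin's):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of task batching: instead of sending one exponent per
// message, the client deals out fixed-size blocks of exponents, so each host
// makes far fewer trips through the (otherwise saturated) router.
class ExponentBatcher {
    static List<int[]> batches(int[] exponents, int batchSize) {
        List<int[]> out = new ArrayList<>();
        for (int i = 0; i < exponents.length; i += batchSize) {
            int end = Math.min(i + batchSize, exponents.length);
            int[] b = new int[end - i];
            System.arraycopy(exponents, i, b, 0, b.length);
            out.add(b); // one message to a host now carries a whole batch
        }
        return out;
    }
}
```

Batching trades message count against load-balancing granularity: larger batches mean fewer messages but a coarser distribution of work among hosts of unequal speed.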

4. Related Work

4.1. Global Computing Related Work

There is a rapidly expanding body of work based on the vision of seamlessly integrating networked computers into a global computing resource. Recent network computing approaches include Condor [32], Linda [55], PVM [51], Piranha [24], MPI [39], Networks of Workstations (NOW)

Figure 5. Primality test on a heterogeneous platform consisting of Pentiums, Sparcs, and UltraSparcs.

[3], Legion [28], Globus [22], and WebOS [52].

The goal of the Legion research project is to provide secure shared object and name spaces, application-controlled fault tolerance, improved response time, and greater throughput; multiple-language support is another of its goals. Globus is viewed as a networked virtual supercomputer, also known as a metacomputer: an execution environment in which high-speed networks connect supercomputers, databases, scientific instruments, and advanced display devices. The project aims to build a substrate of low-level services, such as communication, resource location and scheduling, authentication, and data access, on which higher-level metacomputing software can be built. WebOS is being developed with an eye towards providing operating system services for wide-area applications, such as resource discovery and management, a global name space, remote process execution, authentication, and security.

Most of these systems require the user to have login access to all machines used in the computation, and all of them require the maintenance of binaries for every architecture used in the computation. Java helps to address these issues.

The flexibility of Java for Internet computing has been observed by several other researchers. A new Java-based generation of projects aims at establishing a software infrastructure on which a global computing vision can be implemented. These projects include ATLAS [5], Charlotte [6], ParaWeb [9], Bayanihan [44], and Popcorn [10]. All of these projects are explicitly designed to run parallel applications and provide a specific programming model. Other recent systems, e.g., JPVM [21], use Java to overcome heterogeneity, but are not intended to execute on anonymous machines. The use of Java as a means for building distributed systems that execute throughout the Internet has also recently been proposed by Chandy et al. [11] and Fox et al. [23].
ATLAS provides a global computing model based on Java and on the Cilk programming model [8], which is best suited to tree-based computations. ATLAS ensures scalability using a hierarchy of managers. The current implementation uses native libraries, which may raise portability problems.

Charlotte supports distributed shared memory and uses a fork-join model for parallel programming. A distinctive feature of this project is its eager scheduling of tasks, in which a task may be submitted to several servers, providing fault tolerance and ensuring timely execution. Both Charlotte and ATLAS provide fault tolerance based on the fact that each task is atomic. In Charlotte, changes to the shared memory become visible only after a task completes successfully; this allows a task to be resubmitted to a different server in case the original server fails. In ATLAS, each subtask is computed by a subtree in the hierarchy of servers; any subtask that does not complete times out and is recomputed from a checkpoint file local to its subtree.

ParaWeb provides two separate implementations of a global computing infrastructure, each with a different programming model. Its Java Parallel Class Library implementation provides new Java classes that supply a message-passing framework for spawning threads on remote machines and for sending and receiving messages. ParaWeb's Java Parallel Runtime System is implemented by modifying the Java interpreter to provide global shared memory and to allow transparent instantiation of threads on remote machines.

Bayanihan identifies issues and concepts of a Java-based system like Javelin. It classifies participation in a global computing framework into different levels of volunteer computing, touching on economic concepts (e.g., bartering for CPU time, though without proposing a broker model). The current prototype provides a general framework for executing tasks within a so-called "chassis object" that can be either a Java applet or an application. Tasks are dealt out by a centralized server (the work manager) and executed by work engines on the clients.
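Charlotte's eager scheduling, submitting the same atomic task to several servers and taking whichever result arrives first, can be approximated in a few lines with a thread pool. This is purely an illustration of the idea, not Charlotte's implementation:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of eager scheduling: the same atomic task is replicated
// across several "servers" (threads here); the first successful result
// wins, so a slow or failed server does not stall the computation.
class EagerScheduler {
    static <T> T eager(List<Callable<T>> replicas) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(replicas.size());
        try {
            return pool.invokeAny(replicas); // first replica to finish supplies the answer
        } finally {
            pool.shutdownNow();              // cancel the now-redundant replicas
        }
    }
}
```

The scheme works precisely because each task is atomic: a duplicated or abandoned execution has no visible side effects until it commits.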
Communication in Bayanihan is based on the HORB [29] distributed object library; scheduling and fault tolerance schemes are left to the application programmer.

Popcorn provides a Java API for writing parallel programs. Applications are decomposed by the programmer into small, self-contained subcomputations, called computelets. The API facilitates something like RMI, except that the application does not specify a destination on which the computelet is to execute; rather, a "market" brings together buyers and sellers of CPU time and determines which seller will run the computelet. Market incentives are supported.

Some of the mechanisms needed to implement the proposed framework may eventually be realized through recently released standard Java components, such as Remote Method Invocation (RMI) [50] and Object Serialization [49], or have already been provided by other research groups (e.g., [45]).

The secure execution of arbitrary binaries has recently been addressed by at least two techniques. First, software-based fault isolation [13] guards against insecure system calls by patching program binaries. Second, secure remote helper applications [25] use operating system tracing facilities to limit the use of resources that could violate system integrity.

Another important goal of our project is to allow a natural integration of our infrastructure with other existing models for distributed systems. There has been substantial work in the last decade geared towards heterogeneity and open systems. However, promising standards for interoperability and portability between different combinations of software and hardware components emerged only a few years ago; we now have widely used technologies such as CORBA. We expect to see a shift in Web technology towards systems based on distributed objects and IIOP.

4.2. Market-Based Related Work

As far back as 1968, Ivan Sutherland considered futures markets in computation. In 1981, Dertouzos described a related phenomenon: an information marketplace that is still taking shape. Then, in 1985, Kurose et al. [31] explored microeconomics as a way to optimize a distributed system of channels. In 1986, Malone considered important parallels between human organizations and computer systems with regard to the organization of information processing systems.

Miller and Drexler [37] expound the view that market economies can be viewed as ecosystems. They argue convincingly that markets promote cooperation (what they term symbiotic behavior) and the use of specialized knowledge and abilities better than biological ecosystems do (where symbiotic behavior is sufficiently unusual as to attract special attention when discovered). If we want to build a computational ecosystem for solving problems, they consequently argue, markets are a better model than biology. In [38], they give what may be the seminal work associating market mechanisms with computer resource allocation. In [17], Drexler and Miller recast processor scheduling as a variant of a sealed-bid auction, and recast storage management as rental negotiation, yielding a clever algorithm for distributed garbage collection that can collect loops of unreferenced objects crossing trust boundaries.

Malone et al. [34] present work that matches the spirit of what we want to accomplish, but in a more restricted setting: they describe a system, called Enterprise, for sharing tasks among workstations connected by a LAN. Processors send out "bids" on tasks to be done, and other processors respond with bids giving estimated completion times that reflect workstation speed and load; a simple scheduling protocol assigns tasks to workstations. Stonebraker et al. [48] use a market mechanism for data migration: each site tries to maximize its income by buying and selling storage objects and by processing queries about its objects. We are encouraged by the similarity of their approach to those of Waldspurger et al. [54] and Ferguson et al. [20], which apply these mechanisms to other computer resources.
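The sealed-bid treatment of processor scheduling can be caricatured in a few lines: hosts submit sealed bids for a CPU slot, the highest bidder wins, and under a second-price (Vickrey) rule pays only the runner-up's bid, which encourages truthful bidding. This toy model is ours, not Drexler and Miller's algorithm:

```java
// Toy sealed-bid (second-price / Vickrey) auction for a single CPU slot,
// loosely in the spirit of the market mechanisms surveyed above. The
// highest bidder wins but pays the second-highest bid.
class CpuAuction {
    // Returns {winnerIndex, price}. Requires at least two bids.
    static long[] run(long[] bids) {
        int winner = 0;
        for (int i = 1; i < bids.length; i++)
            if (bids[i] > bids[winner]) winner = i;      // find highest bidder
        long second = Long.MIN_VALUE;
        for (int i = 0; i < bids.length; i++)
            if (i != winner && bids[i] > second) second = bids[i]; // runner-up's bid
        return new long[] { winner, second };
    }
}
```

In a broker-mediated system, a rule of this kind would let the broker price CPU slots without trusting hosts to reveal their true valuations voluntarily.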

5. Future Research and Conclusions

Market-based global computing is attractive because it spans a wide range of interdisciplinary research issues, ranging from purely technical distributed-systems issues, such as security, privacy, communication, and dynamic compilation, to economic issues, such as microeconomic models. Much research is under way in each of these areas; especially important is fundamental research on correctness, security, privacy, authentication, payment, and heterogeneity. In addition, we think it is important to:

- Have a method for comparing bidding protocols for distributed markets.
- Understand how the micro-economic model scales, and how it can be used to allocate resources and balance loads while adapting to a dynamic set of participants.
- Have methods for estimating the runtime and communication requirements of tasks, analyzing the capabilities of computers, and heuristically matching tasks to existing resources.
- Derive a good set of communication primitives, and a workable fault tolerance method, based on a sound understanding of performance issues.
- Develop a large prototype with real users and real applications, allowing researchers to study global computing issues in a realistic setting.

Market-based global computing has several prerequisite technologies: we need millions of computers globally connected via high-speed communication links; these computers must be able to execute downloaded programs in a secure environment; and we need to enliven objects with the ability to engage in electronic commerce and all of its prerequisite functions, such as authentication. The maturation of all these technologies is converging now, making global computing possible.

We believe that, when animated by supply and demand, potential hosts will self-organize into a global computing market. This computing infrastructure will expand the set of price-performance points, enabling scientists to use their computational and monetary resources more flexibly and effectively. More importantly, this computing market will give scientists the computational power needed to perform simulations that are too large for today's computational organizations.

The pursuit of this vision opens a rich palette of research topics in the areas of distributed computing and automated market-based resource allocation. We have identified the technologies that, we believe, are being undertaken neither by commercial ventures nor by other researchers: these are linchpin technologies. Research on them is crucial to the development of a global computing infrastructure more flexible, powerful, adaptive, and responsive than anything that can be centrally directed. It is truly important work, and we are deeply challenged and excited by it.

References [1] A. Alexandrov, M. Ibel, K. E. Schauser, and C. Scheiman. SuperWeb: Research Issues in Java-Based Global Computing. Concurrency: Practice and Experience, June 1997. [2] A. D. Alexandrov, M. Ibel, K. E. Schauser, and C. J. Scheiman. Extending the Operating System at the User Level: the Ufo Global File System. In 1997 Annual Technical Conference on UNIX and Advanced Computing Systems (USENIX’97), Jan. 1997. [3] T. E. Anderson, D. E. Culler, and D. Patterson. A case for NOW (Networks of Workstations). IEEE Micro, 15(1), Feb. 1995. [4] R. Axelrod. The Evolution of Cooperation. Basic Books, New York, 1984. [5] J. E. Baldeschwieler, R. D. Blumofe, and E. A. Brewer. ATLAS: An Infrastructure for Global Computing. In Proceedings of the Seventh ACM SIGOPS European Workshop on System Support for Worldwide Applications, 1996. [6] A. Baratloo, M. Karaul, Z. Kedem, and P. Wyckoff. Charlotte: Metacomputing on the Web. In Proceedings of the 9th Conference on Parallel and Distributed Computing Systems, 1996. [7] M. Blum and S. Kannan. Designing Programs that Check Their Work. JACM, 42(1), 1995. [8] R. D. Blumofe. Executing Multithreaded Programs Efficiently. PhD thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Sept. 1995. [9] T. Brecht, H. Sandhu, M. Shan, and J. Talbot. ParaWeb: Towards World-Wide Supercomputing. In Proceedings of the Seventh ACM SIGOPS European Workshop on System Support for Worldwide Applications, 1996. [10] N. Camiel, S. London, N. Nisan, and O. Regev. The POPCORN Project: Distributed Computation over the Internet in Java. In 6th International World Wide Web Conference, Apr. 1997. [11] K. M. Chandy, B. Dimitrov, H. Le, J. Mandleson, M. Richardson, A. Rifkin, P. A. G. Sivilotti, W. Tanaka, and L. Weisman. A World-Wide Distributed System Using Java and the Internet. In Proceedings of the Fifth IEEE International Symposium on High Performance Distributed Computing, Syracuse, NY, Aug. 1996.

[12] B. Christiansen, P. Cappello, M. F. Ionescu, M. O. Neary, K. E. Schauser, and D. Wu. Javelin: Internet-Based Computing Using Java. Concurrency: Practice and Experience, 9(11), Nov. 1997. [13] Colusa Software. Omniware Technical Overview, 1995. http://www.colusa.com. [14] G. F. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems – Concepts and Design. Addison–Wesley, 2 edition, 1994. [15] DESCHALL. Internet-Linked Computers Challenge Data Encryption Standard. Press Release, 1997. http://www.frii.com/˜rcv/despr4.htm. [16] Distributed.net. Secure Encryption Challenged by Internet-Linked Computers. Press Release, Oct. 1997. http://www.distributed.net/pressroom/56-PR.html. [17] E. Drexler and M. Miller. Incentive Engineering for Computational Resource Management. In B. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B. V., North-Holland, 1988. [18] Electric Communities. The E Programming Language, 1996. http://www.communities.com/e/epl.html. [19] J. Feigenbaum. Encrypting Problem Instances — Or, ..., Can You Take Advantage of Someone Without Having to Trust Him? In Proceedings of the CRYPTO’85 Conference, 1985. [20] D. Ferguson, C. Nikolaou, and Y. Yemini. An Economy for Managing Replicated Data in Autonomous Decentralized Systems. In Proceedings of the International Symposium on Autonomous Decentralized Systems, Kawasaki, Japan, Mar. 1993. [21] A. Ferrari. JPVM – The Java Parallel Virtual Machine. http://www.cs.virginia.edu/˜ajf2j/jpvm.html. [22] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputer Applications, 1997. [23] G. Fox and W. Furmanski. Towards Web/Java based High Performance Distributed Computing – An Evolving Virtual Machine. In Proceedings of the Fifth IEEE International Symposium on High Performance Distributed Computing, Syracuse, NY, Aug. 1996. [24] D. Gelernter and D. Kaminsky. 
Supercomputing out of Recycled Garbage: Preliminary Experience with Piranha. In Proceedings of the Sixth ACM International Conference on Supercomputing, July 1992. [25] I. Goldberg, D. Wagner, R. Thomas, and E. A. Brewer. A Secure Environment for Untrusted Helper Applications — Confining the Wily Hacker. In Proceedings of the 1996 USENIX Security Symposium, 1996. [26] J. Gosling and H. McGilton. The Java Language Environment – A Whitepaper. Technical report, Sun Microsystems, Oct. 1995. [27] Great Internet Mersenne Prime Search. GIMPS Discovers 37th Known Mersenne Prime. Press Release, Jan. 1998. http://www.mersenne.org/3021377.htm. [28] A. S. Grimshaw, W. A. Wulf, and the Legion team. The Legion Vision of a Worldwide Virtual Computer. Communications of the ACM, 40(1), Jan. 1997. [29] S. Hirano. HORB: Extended Execution of Java Programs. In First International Conference on WorldWide Computing and its Applications (WWCA 97), 1997. http://ring.etl.go.jp/openlab/horb/.

[30] E. Kovacs and S. Wirag. Trading and Distributed Application Management: An Integrated Approach. In Proceedings of the 5th IFIP/IEEE International Workshop on Distributed Systems: Operation and Management, Oct. 1994. [31] J. F. Kurose, M. Schwartz, and Y. Yechiam. A Microeconomic Approach to Decentralized Optimization of Channel Access Policies in Multiaccess Networks. In Proceedings of the 5th IEEE International Conference on Distributed Computing Systems, Denver, CO, May 1985. [32] M. Litzkow, M. Livny, and M. W. Mutka. Condor – A Hunter of Idle Workstations. In Proceedings of the 8th International Conference of Distributed Computing Systems, June 1988. [33] Lucent Technologies Inc. Inferno. http://inferno.bell-labs.com/inferno/. [34] T. Malone, R. E. Fikes, K. R. Grant, and M. T. Howard. Enterprise Computation. In B. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B. V., North-Holland, 1988. [35] T. Malone, J. Yates, and R. Benjamin. Electronic Markets and Electronic Hierarchies. Communications of the ACM, 30(6), June 1987. [36] M. Merz, K. Müller-Jones, and W. Lamersdorf. Services, Agents, and Electronic Markets: How do they Integrate. In Proceedings of the International Conference on Distributed Systems (ICDP ’96), Dresden, Germany, Mar. 1996. [37] M. S. Miller and K. E. Drexler. Comparative Ecology: A Computational Perspective. In B. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B. V., North-Holland, 1988. [38] M. S. Miller and K. E. Drexler. Markets and Computation: Agoric Open Systems. In B. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B. V., North-Holland, 1988. [39] MPI: A Message-Passing Interface Standard. The International Journal of Supercomputer Applications and High Performance Computing, 8(3), 1994. [40] O. Nierstrasz and D. Tsichritzis, editors. Component-Oriented Software Composition. Prentice Hall, 1995.
[41] Draft International Standard 13235 – ODP Trading Function. International Organisation for Standardization, International Electrotechnical Commission, May 1995. [42] A. Puder, S. Markwitz, G. Gudermann, and K. Geihs. AI– based Trading in Open Distributed Processing. In K. Raymond and L. Armstrong, editors, Open Distributed Processing: Experiences with Distributed Environments, Proceedings of the 3rd IFIP TC 6/WG 6.1 International Conference on Open Distributed Processing. Chapman & Hall, 1995. [43] R. Saavedra-Barrera, A. Smith, and E. Miya. Machine Characterization Based on an Abstract High-Level Language. IEEE Transactions on Computers, 38(12), December 1989. [44] L. F. G. Sarmenta. Bayanihan: Web-Based Volunteer Computing Using Java. In 2nd International Conference on World-Wide Computing and its Applications, Mar. 1998. [45] P. A. G. Sivilotti and K. M. Chandy. Reliable Synchronization Primitives for Java Threads. Technical Report CS-TR96-11, California Institute of Technology, June 1996. [46] Softway. Guava – Softway’s just-in-time compiler for Sun’s Java language. http://guava.softway.com.au/.

[47] S. Star. TRADER: A Knowledge-based System for Trading in Markets. In Proceedings of the 1st International Conference on Economics and Artificial Intelligence, Aix-en-Provence, France, Sept. 1986. [48] M. Stonebraker, R. Devine, M. Kornacker, W. Litwin, A. Pfeffer, A. Sah, and C. Staelin. An economic paradigm for query processing and data migration in Mariposa. In Proceedings of the Third International Conference on Parallel and Distributed Information Systems, Austin, TX, Sept. 1994. [49] Sun Microsystems, Inc. Java Object Serialization Specification, May 1996. Revision 0.9. [50] Sun Microsystems, Inc. Java Remote Method Invocation Specification, May 1996. Revision 0.9. [51] V. S. Sunderam. PVM: A Framework for Parallel Distributed Computing. Technical Report ORNL/TM-11375, Dept. of Math and Computer Science, Emory University, Atlanta, GA, USA, Feb. 1990. [52] A. Vahdat, P. Eastham, C. Yoshikawa, E. Belani, T. Anderson, D. Culler, and M. Dahlin. WebOS: Operating System Services For Wide Area Applications. Technical Report CSD-97-938, UC Berkeley, 1997. [53] L. Vanhelsuwe. Create Your Own Supercomputer With Java. JavaWorld, 2(1), Jan. 1997. [54] C. A. Waldspurger, T. Hogg, B. A. Huberman, J. O. Kephart, and W. S. Stornetta. Spawn: A Distributed Computational Economy. IEEE Transactions on Software Engineering, 18(2), Feb. 1992. [55] R. A. Whiteside and J. S. Leichter. Using Linda for Supercomputing on a Local Area Network. Technical Report YALEU/DCS/TR-638, Department of Computer Science, Yale University, New Haven, Connecticut, 1988. [56] T. Wilkinson. Kaffe – A free virtual machine to run Java code, 1997. http://www.tjwassoc.demon.co.uk/kaffe/kaffe.htm. [57] A. Wolisz and V. Tschammer. Service Provider Selection in an Open Services Environment. In Proceedings of the 2nd IEEE Workshop on Future Trends on Distributed Computing in the 1990’s, Cairo, Egypt, Sept. 1990. IEEE. [58] R. Wolski, N. Spring, and C. Peterson.
Implementing a Performance Forecasting System for Metacomputing: The Network Weather Service. In Proceedings of the ACM/IEEE Conference on Supercomputing (SC97), San Jose, CA, Nov. 1997. [59] A. M. Zaremski and J. M. Wing. Specification Matching of Software Components. In Proceedings of the 3rd ACM SIGSOFT Symposium on the Foundations of Software Engineering, Oct. 1995.
