A Bandwidth Management and Pricing Proxy

Austin Poulton, Peter Clayton and F F Jacot-Guillarmod
Department of Computer Science, Rhodes University, P.O. Box 94, Grahamstown 6140*

Abstract
The authors present the design of precedence-based bandwidth management and pricing extensions to proxy server software for controlling Internet communication in an intelligent and adaptive manner. Proportional fairness and congestion avoidance disciplines are examined as flow control mechanisms. The proposed proxy server is designed to preemptively alleviate Internet congestion.

1 INTRODUCTION

Most corporations and institutions connect to the Internet via a single access link to an ISP WAN. The capacity of the service provider's access link is usually several orders of magnitude lower than the capacity of the customer networks. Hence, traffic bottlenecks form at WAN entry points and are a source of packet loss and delay. Network administrators are faced with bandwidth contention and the resulting poor performance (high link loads and retransmissions). The objective is to alleviate congestion on the saturated access link. Many people argue that bandwidth will be cheap and abundant in the future and that there is no need for traffic management. Experience has shown, however, that demand will expand to consume capacity.

Therefore simply providing more Internet bandwidth is a naive and fiscally expensive option. Free and unlimited access to the Internet via the corporate or university network by employees or students is no longer tolerable. In addition, emerging communication applications require preferential network treatment. The alternative available to network administrators is to control bandwidth as a scarce resource. This implies a policy-based control system, possibly incorporating usage pricing for the domain. In a policy-based network, not all domain users are equal; they are differentiated by status or by the price paid for Internet access. For example, a manager's or faculty member's Internet communication sessions have greater priority than a secretary's or student's traffic. As Internet demand grows there is increasing pressure to account for service provider costs and to charge for Internet usage within the domain. Charging for usage provides a business incentive to provide better quality, both in terms of operational management (the human resources required to manage the network) and future capacity provisioning of the network. Policy-based networking necessitates automation systems that implement Internet bandwidth control and prevent congestion at access points. Figure 1 depicts the prevalent campus or corporate network architecture.

Figure 1: Campus Internet access via a proxy system. The intra-domain hosts of the campus intranet reach the Internet through an FTP/Web proxy, which establishes connections on behalf of local clients to foreign hosts and serves as a multiplexing and control point for traffic, and through an access router (and possibly firewall) attached to the low capacity access link.

* This work was undertaken in the Distributed Multimedia CoE at Rhodes University, with financial support from Telkom, Lucent Technologies, Dimension Data, and THRIP.

Most corporate/campus networks employ a web proxy server through which all Internet traffic is directed. HTTP and FTP data constitute the majority of all Internet traffic. Hence, the authors consider the proxy server the logical location for Internet bandwidth management and pricing software in current best effort networks.


The distinguishing feature of our proposal is that the flow control is receiver based as opposed to sender based: the proxy is able to control TCP flows from the receiver side. The proxy system has the following design objectives:
• A flexible priority policy definition for differentiating between various Internet usage classes at an IP address or finer granularity (user level). It is imperative that network administrators are able to define their own pricing and usage policies.
• A dynamic bandwidth and congestion control system to avoid instantaneous traffic peaks over the access link and hence prevent saturation, as discussed above.
• The use of efficient data structures and control operations that augment the proxy with the additional functionality. The performance and responsiveness of a campus proxy is critical.
Figure 2 (section 3) illustrates the proposed proxy server's core components, based on the design goals articulated; these are described in the following sections.

2 RELATED WORK

The Squid web proxy is the only proxy server that implements a bandwidth management mechanism. Squid employs a usage control system based on a delay pool discipline [6]. The mechanism relies on the network administration defining delay factors for each pool and a usage quota threshold associated with a particular pool. Hosts that exceed a given Internet usage quota are added to the access control list (ACL) of the appropriate delay pool. The pools are implemented as queues that delay the return of web pages requested by hosts within the domain. Each pool uses its delay factor to delay the return of pages to hosts in the delay pool ACL. Currently, scripts must be written to automate the processing of usage logs and the insertion of offending hosts into the delay pools.

The delay pool usage control strategy enforces local flow control between the proxy and the local hosts (see figure 1). The strategy not only results in a perceived degradation of Internet responsiveness for hosts in the delay pool but also prevents them from issuing outgoing HTTP requests as quickly, hence influencing the wide area flow control. The local flow control strategy is effective over larger time scales (weekly, monthly) and has been adopted in various forms by a number of institutions, including Rhodes University, with a large degree of success. However, the mechanism is inadequate in terms of preventing short time scale or instantaneous traffic peaks, since the delay pool ACLs are updated periodically (usually daily) from download logs.

PacketShaper from Packeteer [3] is a bandwidth management device that resides at the WAN entry point and manages traffic flows. The product allows maximum usage thresholds to be defined for each flow, where a flow may be defined at varying granularities, from ports (services) to IP addresses. Flows are then monitored and policed. The operation of the Packeteer flow control firmware is described later in this paper. This solution is relatively expensive, costing upward of R 30000.

ALTQ is a traffic management project [2]. ALTQ is a FreeBSD queuing platform for PC routers. It allows various queuing disciplines to be deployed, including class based queuing (CBQ) and weighted fair queuing (WFQ). The interface buffer is redefined into the required number of queues (for different classes) and each queue is then serviced according to its defined priority.

3 AUTHORS' SYSTEM

The authors propose a cheap dynamic bandwidth management and pricing proxy for FTP and/or HTTP traffic.

Figure 2: Component diagram of the authors' proxy. The bandwidth control and pricing components comprise a priority policy (defined by the campus network administration), an IP address priority table (maintaining all user IP addresses in the campus and their associated priority values), an active connection list (the active Internet connections to which control functions are applied) and a control system (receiver based flow control driven by the priority policy).

3.1 The Priority Policy and Associated Table

The system is reliant on a priority policy, defined by the network administration, that is used by the proxy to control and price connections. The priority policy defines a number of service classes and associated priority values, which denote varying levels of service. Each IP address (host) in the campus domain must have an associated priority value, based on the requirements of the user and the subscription/usage charges they are prepared to pay. Obviously priorities may be extended to individual user granularity; however, for our initial proof of concept purposes, IP addresses will suffice.

A host's priority value is used by the proxy to control and price connections initiated by it. Each campus IP address must be registered with the campus priority policy and inserted into the IP address priority table. Hence, the IP address priority table entries are defined as:

    struct {
        int32 ip_address;
        byte  priority;
    }

Since the priority value of each local host needs to be obtained for control and pricing operations by the proxy each time a connection is requested to a foreign server, an efficient data structure and lookup functions are required for the priority table. The authors propose a persistent hash table structure for the IP address priority table. The hash table's size will be proportional to the size of the domain's IP address pool. The priority system has the following advantages:
• Flexibility: allows for the definition of various priority classes and associated values used for control and charging.
• Efficiency: use of efficient data structures and fast lookup functions.
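As a minimal sketch (the table size, hash function, default priority and function names below are illustrative assumptions, and persistence to disk is omitted), the priority table and its lookup might be implemented in C as follows:

#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE       4096  /* assumed: on the order of the domain's IP address pool */
#define DEFAULT_PRIORITY 1     /* assumed: lowest service class for unregistered hosts  */

/* One entry of the IP address priority table (cf. the struct above). */
struct priority_entry {
    uint32_t ip_address;          /* local host address, host byte order   */
    uint8_t  priority;            /* priority value assigned by the policy */
    struct priority_entry *next;  /* chaining for hash collisions          */
};

static struct priority_entry *table[TABLE_SIZE];

static unsigned hash_ip(uint32_t ip)
{
    return (ip ^ (ip >> 16)) % TABLE_SIZE;
}

/* Register a host with the campus priority policy (or update its value). */
void priority_set(uint32_t ip, uint8_t priority)
{
    unsigned h = hash_ip(ip);
    struct priority_entry *e;

    for (e = table[h]; e != NULL; e = e->next) {
        if (e->ip_address == ip) {
            e->priority = priority;
            return;
        }
    }
    e = malloc(sizeof *e);
    e->ip_address = ip;
    e->priority   = priority;
    e->next       = table[h];
    table[h]      = e;
}

/* Look up a host's priority value; called once per outgoing connection request. */
uint8_t priority_lookup(uint32_t ip)
{
    struct priority_entry *e;

    for (e = table[hash_ip(ip)]; e != NULL; e = e->next)
        if (e->ip_address == ip)
            return e->priority;
    return DEFAULT_PRIORITY;
}

Since the table is consulted on every connection request, the constant expected lookup time keeps this step off the proxy's performance-critical path.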

3.2 Charging and Billing

Internet charging is a subjective topic that should be based on the corporation's or institution's pricing policy. The priority scheme devised by the authors allows for the development of flexible charging algorithms by network administrators. Charging algorithms may be derived from a combination of a priori information, such as the local host's priority value, and a posteriori information, such as usage data. The authors do not prescribe any particular pricing treatment since it is beyond the scope of this discussion. However, it is intuitive that IP addresses with higher priority values will attract higher subscription and/or usage charges since they receive better relative Internet service from the proxy. Billing is essentially an offline function and should not impinge on the performance dependent operation of the proxy. Billing information may be generated periodically from the usage logs and the IP address priority table to apply priority based pricing.
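Purely as an illustration (the tariff constants and the linear scaling below are assumptions, not part of the authors' proposal), a periodic billing routine driven by the usage logs and the priority table might combine the two kinds of information as follows:

#include <stdint.h>

/* Assumed tariff constants -- illustrative only. */
#define BASE_SUBSCRIPTION  20.00   /* flat monthly charge per registered IP address  */
#define RATE_PER_MEGABYTE   0.05   /* usage charge per megabyte relayed by the proxy */

/*
 * Example charging function: a priori information (the host's priority value
 * from the IP address priority table) scales both the subscription and the
 * usage component, so higher service classes attract higher charges.
 */
double monthly_charge(uint8_t priority, double megabytes_transferred)
{
    return priority * BASE_SUBSCRIPTION
         + priority * RATE_PER_MEGABYTE * megabytes_transferred;
}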

3.3 Bandwidth Control

Differentiated bandwidth control is at the core of providing priority based Internet service to domain users. The objective is for the proxy to apply flow control at individual connection granularity. Hence, the proxy has to provide receiver-based flow control for connections to foreign servers, as opposed to the delay pool flow control, where Squid controls the return of data to local hosts. The authors discuss two dynamic bandwidth control treatments for the proposed proxy.

3.3.1 Proportional Fairness Control

The concept of proportional fairness control is based on the premise that each user has a proportional share of the available bandwidth at any given time. In our proxy context, a proportional share is governed by the user's IP address priority value. Proportional fairness control is implemented by adjusting the TCP socket receive buffer of each active connection that the proxy maintains. The receive buffer limits the maximum window that may be advertised by the receiver. Hence, the objective is to apply flow control by limiting the size of the receive buffers [4] on the proxy server for all connections from foreign servers to the proxy. Since there can never be more than one window's worth of data in transit between sender and receiver, the receive buffer size limits the throughput of a TCP connection [1,5]:

\[ T \le \frac{B_R}{RTT} \]

where B_R is the receive buffer size and RTT is the round trip time for the connection. For example, a 16 KB receive buffer and a 200 ms round trip time bound a connection's throughput at roughly 80 KB/s. Limiting the receive buffers of a set of connections going through one host (the proxy server) provides proportional fairness among all connections that share the common Internet access link. Hence, to achieve proportional fairness each connection i should be assigned a buffer size

\[ b_i = B \, \frac{k_i}{\sum_j k_j} \]

where b_i is the receive buffer size for connection i, k_i is the priority value for the connection, derived from the priority value of the local host that initiated it, \sum_j k_j is the sum of the priority values of all active connections, and B is the aggregate delay product of the shared Internet access link. The delay product is defined as the product of the access link's bandwidth and the mean RTT of all connections that traverse the link. A connection's priority value effectively determines the size of its receive buffer and hence its throughput. For example, with a delay product of B = 64 KB and three active connections with priority values 1, 1 and 2, the priority-2 connection is assigned a 32 KB receive buffer and the other two 16 KB each. Thus, a "soft QoS" paradigm is achieved whereby connections with higher priority values receive better Internet service. All active connections' buffer sizes have to be adjusted whenever a connection starts or stops in order to maintain the overall fair sharing of the access link.

Proportional fairness control is advantageous in terms of maintaining optimized proportional sharing of the access link. However, the control treatment has a large processing overhead. Given that a proxy needs to remain responsive, the performance of the treatment is critical. Receive buffers have to be reallocated whenever a TCP connection is started or stopped. This entails traversing all the active connections and adjusting their receive buffers. On a busy proxy server this treatment may incur a considerable overhead and hence affect the performance of the proxy.

3.3.2 Congestion Avoidance Control

The authors consider and propose a preemptive flow control scheme based on the concept of congestion avoidance. The congestion avoidance control treatment relies on monitoring the Internet access link load and preemptively intervening to avoid link saturation. Essentially, active connections are not subjected to bandwidth control until the threat of congestion arises. The scheme relies on SNMP traps being set by the SNMP agent resident on the access router whenever certain link loads are attained. These SNMP traps are caught by the proxy system, which then applies flow control in a preemptive manner. Flow control is applied to active connections in ascending order of their priority values. Hence, all lower priority connections are throttled first and then, if the access link load continues to escalate, higher priority connections are also throttled.
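As a minimal sketch (the linked-list representation, the priority cut-off parameter and the fixed shrink factor below are illustrative assumptions, not the paper's specification), the throttle pass triggered by an SNMP trap might look like this in C, using the receive buffer mechanism described in section 3.4:

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

#define THROTTLE_FACTOR 0.5   /* assumed: shrink receive buffers to 50% of their size */

/* Simplified record from the proxy's active connection list (kept in
 * ascending order of priority, as proposed for the throttle function). */
struct connection {
    int      sockfd;            /* socket to the foreign server        */
    uint8_t  priority;          /* priority of the initiating host     */
    int      rcvbuf;            /* current receive buffer size (bytes) */
    struct connection *next;
};

/*
 * Throttle pass run when an SNMP trap reports that a link load threshold
 * has been reached: walk the priority-ordered list and shrink the receive
 * buffers of every connection at or below the cut-off priority for that
 * threshold (cf. the example throttle rules of table 1).
 */
void throttle(struct connection *active_list, uint8_t priority_cutoff)
{
    struct connection *c;

    for (c = active_list; c != NULL && c->priority <= priority_cutoff; c = c->next) {
        int newbuf = (int)(c->rcvbuf * THROTTLE_FACTOR);

        /* Receive buffer manipulation via setsockopt(), as described in
         * section 3.4; shrinking moderately avoids the pitfalls noted there. */
        if (setsockopt(c->sockfd, SOL_SOCKET, SO_RCVBUF,
                       &newbuf, sizeof(newbuf)) == 0)
            c->rcvbuf = newbuf;
    }
}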

Several link load thresholds may be specified as proportions of the access link's total capacity, each threshold defining which priority connections should be throttled and to what degree. Hence, throttling rules need to be defined by the network administrator for each link load threshold.

Threshold   % of Capacity   Throttle Actions (throughput reduction)
LOW         70              STANDARD priority connections by 40%
MED         80              GOOD and STANDARD priority connections by 50%
HIGH        90              ALL connections by 60%

Table 1: Example throttle rules

The table above gives an example of the actions to be taken when certain Internet access link load thresholds are reached. The throttle function will use the defined rules and apply flow control to the appropriate connections. For performance considerations, the authors propose a priority ordered active connection list for the proxy.

Congestion avoidance control allows for the flexible definition of flow control during times of congestion. The scheme may not be considered as optimal or as fair as the proportional fairness treatment. However, the objective was to avoid congestion in such a way that higher priority connections are least affected by flow control. This may appear counterintuitive, since it might well be the case that the higher priority connections are the cause of the congestion. However, all connections will ultimately be subjected to flow control, given the link load level reached. In addition, the domain's priority policy is the basis for maintaining the relative priority of connections. The congestion avoidance control is designed to act preemptively while intervening only when link loads approach saturation. Consequently, the scheme does not impose a large flow control overhead.

3.4 Flow Control Implementation

A brief description of flow control mechanisms is given to complete the discussion. TCP receive buffer manipulation (section 3.3.1) is achieved through the setsockopt() system call, which accordingly adjusts the TCP buffer high-water mark [5]. This mechanism is mostly reliable and effective in terms of limiting throughput via window advertisements. However, the TCP stack and its flow control system were not intended to be meddled with, and undesirable results occur when a receive buffer is shrunk by a large proportion or too quickly; the result is corrupted data in the socket buffer. Nonetheless, the mechanism is expedient in terms of making use of the TCP stack, and efficient since the work is done at the kernel level.

The alternative is to develop kernel level TCP processes to monitor the TCP timers for individual connections and to delay data acknowledgements destined for the foreign server based on the value of the retransmission timeout timer. Additionally, the process may also spoof the advertised receive window by overwriting the value on outbound TCP segments. This implementation has been adopted by Packeteer [3] in their firmware bandwidth control product, PacketShaper.

4 CONCLUSION AND FUTURE WORK

The authors have presented the design of an intelligent proxy system that is capable of providing bandwidth management control and pricing functionality for current best effort Internet access. The flow control and pricing treatments are based on the definition of a flexible priority policy that is specified by the network administrators. The priority based control and pricing system allows campus network administrators to control congestion, provide users with incentives to limit their usage and account for service provider costs. The system described is performance sensitive, imposing little processing and a relatively small memory overhead. Hence, the proposed proxy server fulfils the design criteria outlined.

The authors are in the process of developing OPNET simulations for the proposed proxy. This has involved modifying the TCP processes within the proxy server node model to use the flow control implementation described above. The authors will be experimenting with and benchmarking other flow control techniques to realise the proxy's desired features.

5 REFERENCES

[1] Jon Crowcroft and Philippe Oechslin, "Differentiated End-to-end Internet Services using a Weighted Proportional Fair Sharing TCP", ACM Computer Communications Review, 28 (1998), pp. 53-67.
[2] Kenjiro Cho, "Managing Traffic with ALTQ", Proceedings of USENIX 1999, Monterey, CA, June 1999.
[3] Packeteer Inc., "Controlling TCP/IP Bandwidth", White Paper, November 1998.
[4] W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, USA, 1994.
[5] W. Richard Stevens, TCP/IP Illustrated, Volume 2: The Implementation, Addison-Wesley, USA, 1995.
[6] Squid Documentation, available at: http://squid.nlanr.net
