NITC Research Review
National Institute of Technology Calicut

For Private Circulation Only

Volume 3, Issue 1, December 2009

Hybrid Energy Systems - Prospects and Initiatives at NIT Calicut

CONTENTS

Hybrid Energy Systems - Prospects and Initiatives at NIT Calicut
  S. Ashok ... 3

Types of arcs in a fuzzy graph
  Sunil Mathew, M.S. Sunitha ... 17

Steel fiber reinforced SCC wall panels in one-way in-plane action
  N. Ganesan, P.V. Indira, S. Rajendra Prasad ... 21

Bioaccumulation of heavy metal ions - A Review
  S. Bhuvaneshwari, K. Suguna, V. Sivasubramanian ... 26

Planning Annualised Hours
  M.R. Sureshkumar, V. Madhusudanan Pillai ... 30

Nanoscience and Biomedicine: Converging Technologies
  Mahesh Kumar Teli, G.K. Rajanikant ... 35

Organic Light Emitting Diodes: A Review on Device Physics and Modeling using Artificial Neural Networks
  T.A. Shahul Hameed, M.R. Baiju, P. Predeep ... 44

Strength and behaviour of retrofitted multi-storey RCC frames under lateral loading
  N. Ganesan, P.V. Indira, Shyju P. Thadathil ... 54

To the Authors

NITC Research Review is a publication devoted to throwing light on the research activities carried out at National Institute of Technology Calicut and their outcomes, and is meant for limited circulation. The authors retain the right to publish these results, fully or partially, in any journal of their choice. Technical papers based on research conducted at NITC, and technical notes or review papers on the state of the art of a topic or theme, may be submitted through any member of the editorial committee. One hard copy and one soft copy of the manuscript, complete in all respects, are required. Matter should be arranged in the following order: title, names of authors, affiliation, abstract, nomenclature, main body of the paper, acknowledgements, references, and appendices if any. All figures and tables should be numbered and captioned.


NIT CALICUT RESEARCH REVIEW

Hybrid Energy Systems Prospects and Initiatives at NIT Calicut S. Ashok*

Introduction: Electricity is a major commodity for the socio-economic development of any country and plays a vital role in nearly every human activity. The major share of electricity is generated from fossil fuels such as coal, oil, and gas. These fuels have severe impacts on the atmosphere, and their reserves are limited; they are expected to be largely exhausted around the middle of this century. In India, about 18,000 villages are yet to be electrified, and roughly 18% of the world's population still lacks electricity. Meeting the demand of these villages, along with the growing overall demand, could lead to a severe power crisis unless new sources and technologies are developed. For over four decades, scientists and engineers around the world have advocated the utilization of renewable energy resources: they are abundant (though dilute and variable), locally available, fairly evenly distributed over the earth, cause no severe harm to the environment, and allow simple on-site generation. Table 1 lists the installed capacity and estimated potential of different renewables in India.

Table 1 Installed capacity and estimated potential from different renewables in India

Since renewable sources are dilute and variable in nature, many complexities exist in their conversion, conditioning, control, and coordination. They are used as standalone systems for applications such as lighting, water pumping for irrigation, traffic control, and television in remote areas, but such systems are costly, unreliable, inflexible, and need individual conditioning and control units. In this challenging environment, the Hybrid Energy System (HES) is one of the viable solutions for harvesting energy from renewable resources. This work discusses different types of HES, their advantages, and their future research scope.

Hybrid Energy System: A hybrid energy system combines several (two or more) energy sources, each with appropriate energy conversion technology, connected together to feed power to a local load or grid. Figure 1 gives a general pictorial representation of a hybrid energy system. Since it falls under the distributed generation umbrella, there is no unified standard or structure. Its benefits include reduced line and transformer losses, reduced environmental

Renewable source               Installed Capacity    Estimated Potential
Wind                           2483 MW               45000 MW
Biomass Power/Cogeneration     613 MW                19500 MW
Biomass Gasifier               58 MW                 -
Small Hydro                    1603 MW               15000 MW
Waste to Energy                41 MW                 1700 MW
Solar PV                       151 MW                20 MW/sq.km

* Professor, Department of Electrical Engg., e-mail: [email protected]


impacts, relieved transmission and distribution congestion, increased system reliability, improved power quality, peak shaving, and increased overall efficiency.

Barriers:

Maximum power extraction: When sources with different V-I characteristics are connected together, one tends to dominate the others. In this circumstance, extracting maximum power for a constant load is difficult.

Stochastic nature of sources: These distributed sources are site specific and dilute, so the power converters and controllers have to be designed to meet the requirement. Complexities arise in matching the voltage and frequency levels of inverted DC sources (PV systems, fuel cells, etc.) with controlled AC sources (wind, hydro, etc.), because the V-I characteristics of these sources depend on atmospheric conditions, which vary from time to time, and forecasting of these sources is not accurate.

Figure 1

Major features of hybrid energy systems: HES allow a wide variety of primary energy sources, frequently renewables, to generate power as a standalone system for rural electrification where grid extension is not possible or uneconomic. The design and development of HES components offers flexibility for future extension and growth: devices can be added as the need arises while assuring reliable operation with the existing system. If generation exceeds demand, the excess can be fed into the grid, creating new revenue; the "whole" is worth more than the "parts". Since many sources are involved in power generation, stability, reliability, and efficiency are high. The running cost of thermal and nuclear plants is high, whereas most renewable-based generation has minimal running cost and the sources are abundant in nature.

Coordination: To provide reliable power, these HES are connected to the utility grid. Frequency mismatch often arises between the two systems, which can lead to instability of the overall system.

Energy conversion technology: The sun is the primary source of all energy, available in many forms such as oil, coal, wind, hydro, and sunlight, from which we generate electrical energy directly or indirectly. So far, no single viable method exists for conversion and utilization.

Power quality: A variety of power electronic converters are involved in the power conditioning of a hybrid energy system between the sources and the load. These converters inject many harmonic components, which cause various disturbances to the load and the power distribution system.


Major research work carried out at NITC campus: We have developed a hybrid energy system consisting of biomass, wind, solar photovoltaic (SPV), and battery sources. Figure 2 shows the proposed hybrid energy system model. The sources are operated to deliver energy at optimum efficiency. An optimization model is developed to supply the available energy to the loads according to their priority. It is also proposed to maintain a fair level of energy storage to meet the peak load demand, together with the biomass, wind, and solar photovoltaic sources, during periods of low or no solar radiation or low wind.
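As a rough illustration of the hour-by-hour balance such an optimization enforces, the sketch below allocates the available source energy to the demand and the battery while guarding minimum and maximum state of charge. The function, its greedy allocation order, and all numbers are illustrative assumptions, not taken from the paper's model.

```python
# Hypothetical single-hour energy balance (all energies in kWh).
# E_p, E_w, E_g: available PV, wind, and gasifier energy; demand: hourly load;
# soc, cap: battery state of charge (0..1) and capacity; soc_min: discharge guard.

def allocate_hour(E_p, E_w, E_g, demand, soc, cap, soc_min=0.4):
    supplied = 0.0
    for avail in (E_p, E_w, E_g):            # serve the load source by source
        supplied += min(avail, demand - supplied)
    surplus = (E_p + E_w + E_g) - supplied
    if supplied < demand:                    # shortage: discharge, keeping SOC >= soc_min
        draw = min(demand - supplied, (soc - soc_min) * cap)
        soc -= draw / cap
        supplied += draw
    else:                                    # excess: charge, keeping SOC <= 1
        charge = min(surplus, (1.0 - soc) * cap)
        soc += charge / cap
        surplus -= charge
    q_dump = surplus if supplied >= demand else 0.0   # energy no one can use
    return supplied, soc, q_dump
```

Unserved surplus that neither the load nor the battery can absorb is what the model calls dumped energy.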

Figure 2

MODEL DEVELOPMENT

The objective of the proposed optimization model is to optimize the availability of energy to the loads according to their levels of priority. It is also proposed to maintain a fair level of energy storage in the battery to meet the peak load demand (together with the gasifier, wind, and PV array) during periods of low or no radiation and very low wind speed. The loads are classified as primary and deferrable loads. It is desired to minimize the dumped energy Qdump(t), the excess energy which cannot be utilized by the loads. The objective function is to maximize the energy supplied to the loads:

    maximize  Σ_t Σ_i [Q_P,i(t) + Q_W,i(t) + Q_G,i(t) + Q_B,i(t)]        (1)

where
    t is the hour of a particular day, t = 1, 2, ..., 24
    i is the load type (primary and deferrable loads)
    P_i(t) is the demand of load i at time t in kW
    I_i(t) is the fraction of hour t during which load i is supplied energy

Load constraints: The energy distributed from the energy sources at period t to each load i is given by

    Q_P,i(t) + Q_W,i(t) + Q_G,i(t) + Q_B,i(t) = P_i(t) I_i(t)            (2)

where Q_P, Q_W, Q_G, Q_B are the energies supplied by the PV array, wind system, gasifier, and battery respectively.

PV array constraints: E_P(t), the sum of the energy supplied by the PV array to the loads and to the battery bank in hour t, is

    E_P(t) = Σ_i Q_P,i(t) + Q_P,B(t) + Q_P,R(t)                          (3)

where Q_P,i(t) is the energy supplied by the PV array to the loads, Q_P,B(t) is the energy supplied by the PV array to the battery bank, and Q_P,R(t) is the energy dumped by the PV array. Since the energy generated by the system varies with insolation, the available array energy E_P(t) at any particular time is given by

    E_P(t) = V S(t)                                                      (4)

where V is the capacity of the PV array and S(t) is the insolation index.

Wind energy system constraints: E_W(t), the sum of the energy supplied by the wind energy system to the loads and battery bank at hour t, is

    E_W(t) = Σ_i Q_W,i(t) + Q_W,B(t) + Q_W,R(t)                          (5)

where Q_W,i(t) is the energy supplied by the wind energy system to the loads, Q_W,B(t) is the energy supplied to the battery bank, and Q_W,R(t) is the dumped energy of the wind energy system.

Gasifier constraints: E_G(t) is the sum of the energy supplied by the gasifier-based power generation system to the loads and battery bank, with a possibility of excess. It is desired to run the generator at its optimum capacity to ensure longevity and efficiency.

    E_G(t) = Σ_i Q_G,i(t) + Q_G,B(t) + Q_G,R(t)                          (6)

where Q_G,i(t) is the energy supplied by the gasifier to the loads, Q_G,B(t) is the energy supplied to the battery bank, and Q_G,R(t) is the dumped energy from the gasifier.

Battery bank constraints: The battery bank serves as an energy source when discharging and as a load when charging. The net energy balance to the battery determines its state of charge (SOC), expressed as

    SOC(t) = SOC(t-1) + [Q_P,B(t) + Q_W,B(t) + Q_G,B(t) - Σ_i Q_B,i(t)] / Q_B   (7)

where Q_B is the capacity of the battery bank. The battery has to be protected against overcharging; therefore, the charge level at (t-1) plus the influx of energy from the PV, wind, and gasifier during period (t-1, t) should not exceed the capacity of the battery. Mathematically,

    SOC(t-1) Q_B + Q_P,B(t) + Q_W,B(t) + Q_G,B(t) ≤ Q_B                  (8)

It is also necessary to guard the battery against excessive discharge. Therefore the SOC at any period t should be greater than a specified minimum SOC, SOC_min:

    SOC(t) ≥ SOC_min                                                     (9)

Dumped energy: From the above equations, the total dumped energy in each hour t is

    Q_dump(t) = Q_P,R(t) + Q_W,R(t) + Q_G,R(t)                           (10)

Maximum power point tracking systems for the PV array and the wind system have been developed on our campus to harvest maximum energy from the sources.

Peak power point tracking of PV array: A peak power point tracking PV-array-fed induction motor drive has been developed on our campus. The system, shown in figure 3, consists of a PV array, DC chopper, inverter, microcontroller unit, and single-phase capacitor-run induction motor drive. The PV array provides electricity to the load through the power conditioning circuits, namely the chopper and the inverter. A microcontroller operates the proposed system in closed loop, generating firing pulses for both the chopper and the inverter in order to track the peak power point. Dedicated software for the firing pulse generation was developed on the MPLAB platform and tested successfully in PROTEUS software, which

Figure 3

is made especially for microcontroller-based applications. The proposed system was simulated on the MATLAB/SIMULINK platform and its performance computed. Figure 3 shows the simulated model of the proposed system. The proposed system was fabricated and tested successfully in the Electrical lab.

Figure 4

Figure 4 shows the experimental setup of the proposed system. The computed performance of both the simulation and the experimental setup are compared and presented in figure 5. Finally, peak power point tracking is carried out on the proposed system using the perturb and observe technique, and the performance characteristics are shown in figure 6, figure 7, and figure 8.
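The perturb and observe technique mentioned above can be sketched in a few lines: perturb the operating voltage, and reverse the perturbation direction whenever the measured power falls. The `pv_power` measurement function, the step size, and the starting voltage below are illustrative assumptions, not values from the paper.

```python
# Minimal perturb-and-observe MPPT loop over a measured power function.

def perturb_and_observe(pv_power, v=17.0, step=0.2, iters=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step        # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

# Illustrative array curve with its maximum power point near 18 V:
mpp_v, mpp_p = perturb_and_observe(lambda v: max(0.0, 100.0 - (v - 18.0) ** 2))
```

Note that the operating point does not settle exactly: once it reaches the peak, it oscillates around it by one step, which is the characteristic trade-off of perturb and observe.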

Figure 6. (a) Available Photovoltaic power at NIT Campus.

Figure 7. V-P and V-I curves for different solar insolation and atmospheric temperature

Figure 8. Response of the drive system for different solar insolation and atmospheric temperature


Peak power point tracking of wind generator: Wind energy is transformed into mechanical energy by means of a wind turbine with one or several blades. The turbine is coupled to the generator through a mechanical drive train. The speed and direction of the wind impinging upon a wind turbine are constantly changing; over any given time interval, the wind speed fluctuates about some mean value. The power obtained by the turbine is a function of wind speed, with a shape such as that shown in Figure 9.
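The cubic dependence of turbine power on wind speed follows the standard relation P = ½ρACpv³. As a sketch, the function below evaluates it using the 13 m rotor diameter and Cp = 0.59 listed later in the article for the field-test system; the function itself is an illustration, not the paper's model.

```python
import math

# Turbine power from P = 0.5 * rho * A * Cp * v^3 (v in m/s, result in kW).

def wind_power_kw(v, rotor_d=13.0, rho=1.225, cp=0.59):
    area = math.pi * (rotor_d / 2.0) ** 2            # swept area, m^2
    return 0.5 * rho * area * cp * v ** 3 / 1000.0   # convert W to kW

# The cube law means doubling the wind speed yields eight times the power,
# which is why tracking the peak point across gusts matters so much.
```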

Figure 9. Typical power curve for a wind turbine

A peak power point tracking wind generator system has been developed on our campus. This system consists of a wind generator, DC chopper, and microcontroller unit. The wind generator provides electricity to the load through the power conditioning circuit (chopper). A microcontroller operates the proposed system in closed loop, generating firing pulses for the chopper in order to track the peak power point.

CASE STUDY 1: EXISTING SCHEME

The evaluation is based on a typical farming village in the hilly terrain of the Western Ghats in Kannur district, Kerala, India. It is 35 km away from the nearest town, the modes of transportation are limited, and the village is 15 km from the existing grid. About 50% of the population is deprived of electricity. The village has approximately 150 families with a population of over 600, and a house density of 30-50 households per km². The expected growth rate is 10%. The principal demand is for lighting and television; in this study, the electrical appliances in the village include 11 W CFL lamps, 20 W stereos, and 60 W television sets. The villagers are fully dependent on

agriculture for their livelihood. The major crops of the area are rubber, arecanut, and coconut. Electricity is essential for shops, the school, irrigation, households, and public utilities.

Existing standalone power units: Conventional DG sets are used by a small fraction of the population for irrigation and household purposes. Solar PV panels are used for applications such as individual household and street lighting. A few water-pumping units are realized with wind-powered turbines. Standalone micro hydro units supply power for irrigation and household applications, and a community-based micro hydro unit supplies 30% of the population. Vehicle alternators and induction motors operated as generators are used for power generation in pico hydro schemes. The details of the various renewable power units in operation in the village are given in Table 2. Figures 10 to 16 demonstrate the potential of renewables such as solar PV, wind, and micro hydro resources, and the feasibility of installing a hybrid energy system at the proposed site.

Figure 10 Turbine – Generator set of 5 kW Community based pico hydro unit

Figure 11 Water intake of 5 kW Community based pico hydro unit

Figure 12 Control panel of 5 kW Community based pico hydro unit

Figure 15 A 700W Individual owned pico hydro unit in the Western Ghats of Kannur

Figure 13 A 1 kW Individual owned pico hydro unit in the Western Ghats of Kannur

Figure 14 Solar panels in a school in Western Ghats of Kannur, Kerala

Figure 16 Wind powered water pumping unit in the Western Ghats of Kannur

Table 2 Renewable power units in operation in the village

Sl. No  Description                        Capacity       Daily Operating Hours  Population Benefited
1       Community based pico hydro unit    5 kW, 230 V    14 hours               30%
2       Individual owned pico hydro unit   1 kW, 230 V    8 hours                1%
3       Individual owned pico hydro unit   1 kW, 230 V    8 hours                1%
4       Individual owned pico hydro unit   700 W, 230 V   10 hours               1%
5       Other pico hydro units             1-2 kW         4 hours                5%
6       Solar PV units                     2-3 kW         4 hours                5%
7       Wind units                         1-2 kW         4 hours                2%
8       Diesel generator sets              5 kW           4 hours                5%
        TOTAL                              20 kW          8 hours (average)      50%

Load data: From the hourly load profile of a typical day in the village, the daily energy demand is found to be 317 kWh. The peak demand is 20 kW and the load factor is 66%. The demand curve reflects the principal loads of the village population: lighting and television sets. The average hourly load profile for a typical day is shown in Figure 17.
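The stated figures are mutually consistent: the load factor is the ratio of average to peak demand, so peak × load factor × 24 hours recovers the daily energy.

```python
# Cross-checking the village load data: 20 kW peak at a 66% load factor
# implies a daily energy demand of about 317 kWh.

peak_kw = 20.0
load_factor = 0.66
daily_kwh = peak_kw * load_factor * 24.0   # = 316.8 kWh, i.e. ~317 kWh
```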

Figure 18: Solar irradiation for a typical day

Figure 17 Hourly load profile for a typical day

Figure 19 Wind velocity and temperature data for a typical day

Table 3 Component sizes and costs of different combinations (H: micro hydro, W: wind, S: solar PV)

No.  Combination  Micro hydro (kW)  Wind units (5 kW)  SPV (120 W)  DG set (kW)  Batt. No. (360 Ah, 6 V)  COE (Rs./kWh)  % Renewables  % DG
1    H/W/S        15                1                  26           0            32                       7.09           100            0
2    H/W          15                1                  0            10           24                       6.30           98.4           1.6
3    H/W          15                2                  0            0            28                       6.50           100            0
4    H/S          15                0                  50           10           32                       9.33           89.44          10.56
5    W/S          0                 1                  332          15           72                       26.49          75.57          24.43
6    W/S          0                 2                  268          10           68                       22.05          82.01          17.99
7    W/S          0                 3                  191          10           56                       17.90          88.25          11.75
8    W/S          0                 4                  113          5            48                       13.47          94.22          5.78
9    W/S          0                 5                  26           0            40                       9.90           100            0
10   H            15                0                  0            10           12                       7.05           79.82          20.18
11   W            0                 1                  0            20           8                        28.90          18.12          81.88
12   W            0                 2                  0            15           8                        19.90          37.77          62.23
13   W            0                 3                  0            15           8                        13.42          57.45          42.55
14   W            0                 4                  0            15           16                       10.15          77.26          22.74
15   W            0                 5                  0            10           32                       8.68           97.27          2.73
16   W            0                 6                  0            0            40                       9.85           100            0
17   S            0                 0                  359          15           76                       31.40          76.8           23.2

CASE STUDY 2: A hybrid energy system consisting of PV, wind, biomass gasifier, and battery bank, along with the developed power conditioner, has been implemented at a site (Vallam, 325 km south of Chennai, India). Utility grid supply is already available at the site; the hybrid energy system was implemented to reduce the demand on the utility grid. Table 4 shows the ratings and parameters of the hybrid energy system.

Biomass is available in abundance at the site. Fig.20 shows the hourly wind speed and wind power, and Fig.21 shows the hourly solar insolation and the calculated power from the solar photovoltaic system at the site on the day of the field test. To demonstrate the effectiveness of the developed power conditioner and the hybrid energy system, a real-time test was conducted for 24 hours at the site.

TABLE 4: Ratings and Parameters of the Hybrid Energy System

PV system
  Capacity                    100 kW, 400 V
  Size of PV panel            1341 mm x 990 mm x 36 mm
  Number of panels            556
  Overall efficiency          15%

Wind system
  Capacity                    90 kW, 415 V
  Air density                 1.225 kg/m3
  Coefficient of performance  0.59
  Rotor diameter              13 m
  Number of blades            3
  Working wind speed          3-25 m/s
  Tower height                18 m

Biomass gasifier
  Capacity                    125 kW, 400 V
  Minimum loading             30%

Battery
  Rating                      6 V, 1156 Ah
  Minimum state of charge     40%
  Minimum charging rate       1 A/Ah
  Maximum charging current    41 A

The site has an average solar radiation of 5.32 kWh/m2/day and an average wind speed of 6.48 m/s.

Fig.20 Hourly wind speed and wind power at the site

Fig.21 Hourly solar radiation and solar power at the site 11

The load demand of the site for the particular day is shown in fig.22. Necessary measuring and monitoring equipment was connected to the power conditioner and the hybrid energy system to monitor the power from the sources, the load demand, the battery state of charge, and the control signals from the power conditioner. The different operations of the power conditioner were observed, and the results are presented in the next section. The excess power fed into, and the power drawn from, the utility grid were measured. Solar radiation, wind speed, and load demand data are available for the site and were used for sensitivity and economic analysis. During the field test, the hybrid energy system was not interfaced with the utility grid, since the grid does not accept the operating voltage of the hybrid system (400 V); however, the power conditioner algorithm has been programmed to interface with the utility grid. Hourly load demand and meteorological data were recorded for one year for the site, and an economic analysis was carried out with these data using HOMER, the micropower optimization software developed by the National Renewable Energy Laboratory, USA.

Introduction: The constant decline in the cost of renewable energy technology and new research on alternative energy technologies have increased the utilization of renewable energy sources. A single renewable energy source cannot provide a continuous supply of energy because of its low availability in different seasons. It is therefore necessary to use a hybrid system in which two or more renewable energy sources are exploited together to achieve high energy availability. A power conditioner to control and supervise the operations of the hybrid energy system is proposed. Continuous monitoring of the load demand, optimal allocation of sources to loads, and battery charging and discharging control are the major tasks of the power conditioner.
This work discusses a new algorithm for power management of the hybrid system through the conditioner. Utility grid interfacing is also taken care of by the control

Fig.22 Load Demand for a typical day

Fig.23 Optimal allocation using proposed power conditioner

algorithm. The modular power conditioner hardware was developed and implemented at a site with a system consisting of PV, wind, biomass gasifier, and battery. A case study was carried out in real time and the results are presented. Hourly annual demand and meteorological data were collected for the site, and an economic analysis was carried out with the data. The results show that the total annualized cost is Rs 7,320,464 (US $152,955) and the cost of energy is Rs 5.85 (US $0.122) per kWh.

Control Algorithm: A control algorithm was developed and implemented in the power conditioner hardware to control the entire operation of the hybrid energy system. Monitoring the load demand, solar insolation, wind speed, biomass fuel availability, battery state of charge, and grid availability are the major tasks of the control algorithm. Fig.24 shows the flow chart of the control algorithm.

Fig.24 Flowchart of the power conditioner algorithm

The control algorithm is developed to satisfy the load demand by optimally allocating the sources. It first checks the availability of solar and wind power and calculates the power from solar and wind using (1) and (2). It measures the demand, and if the demand matches the availability, the sources are allocated accordingly. After the allocation, any excess power is used for charging the battery bank. The power conditioner checks the state of charge of the battery; once it reaches its maximum, the excess power is fed into the utility grid. If the demand is high, the power conditioner allocates the battery together with wind and PV to meet the load demand; before the allocation, the state of charge of the battery is checked. The biomass gasifier is allocated only when the demand is high or cannot be met by the wind, PV, and battery bank. A sudden increase in load demand is taken care of by the battery before the biomass gasifier starts. The control algorithm checks for any shortage after the allocation of all the sources and the battery, and then connects the grid to the hybrid energy system to meet the shortage. It also ensures

that the grid will supply only the remaining power required by the system. For a power conditioner with n sources, 2^n allocations are possible. Table 5 shows the different possible allocations of the power conditioner for the hybrid energy system.

Table 5: Different possible allocations of the power conditioner

Biomass gasifier  Wind  PV   Battery
Off               Off   Off  Discharging
Off               Off   On   Charging/Floating
Off               Off   On   Discharging
Off               On    Off  Charging/Floating
Off               On    Off  Discharging
Off               On    On   Charging/Floating
Off               On    On   Discharging
On                Off   Off  Charging/Floating
On                Off   Off  Discharging
On                Off   On   Charging/Floating
On                Off   On   Discharging
On                On    Off  Charging/Floating
On                On    Off  Discharging
On                On    On   Charging/Floating
On                On    On   Discharging
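The allocation priority the control algorithm follows (renewables first, then battery, gasifier only on shortfall, grid for the residue) can be sketched as below. The function, its names, and the example energies are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical priority dispatch for one hour (all energies in kWh).
# battery_avail is the usable battery energy after the SOC check.

def dispatch(demand, pv, wind, gasifier_cap, battery_avail):
    alloc = {}
    remaining = demand
    for name, avail in (("pv", pv), ("wind", wind)):   # renewables first
        alloc[name] = min(avail, remaining)
        remaining -= alloc[name]
    alloc["battery"] = min(battery_avail, remaining)   # SOC checked before use
    remaining -= alloc["battery"]
    alloc["gasifier"] = min(gasifier_cap, remaining)   # started only on shortfall
    remaining -= alloc["gasifier"]
    alloc["grid"] = remaining                          # grid supplies only the rest
    return alloc
```

A 100 kWh demand against 40 kWh PV, 30 kWh wind, and 10 kWh of usable battery, for example, leaves the gasifier to cover the final 20 kWh with nothing drawn from the grid.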


Power Conditioner: The power conditioner is developed using two microcontrollers: one to control the operation of the hybrid energy system and the other to log the wind speed and solar insolation data. Fig.25 shows the power conditioner and the different energy sources connected through AC and DC buses. The controller is designed around a PIC16F873 microcontroller and receives the wind speed and solar insolation data every minute from the environmental monitoring station. The data acquisition system in the power conditioner is designed around a PIC18F458 microcontroller with 1 GB of memory to log the data, which can be transferred to a computer for further analysis. Circuitry is provided to measure the load demand and the battery state of charge. The power conditioner controls the allocation of sources through relays, sending suitable control signals depending on the availability of the sources. It measures the battery bank voltage using a voltage divider circuit, and the current flow to and from the battery using a resistor-shunt circuit, to determine the state of charge of the battery. The relays are operated by the microcontroller through control signals to the relay driver circuits; in the proposed hardware there are four such relays to control the operation of the wind, PV, biomass gasifier, and battery, apart from the grid interfacing. The ultimate aim of the power conditioner design is to utilize the renewable energy sources to the maximum extent. However, during a shortage of power after allocating all the sources including the battery bank, or during excess power after the demand is met and the battery bank is fully charged, the power conditioner interfaces the hybrid energy system with the utility grid to draw or feed power.

RESULTS AND DISCUSSION

In the real-time test conducted, the various functions of the power conditioner were exercised with the hybrid energy system.
Fig.23 shows the allocation of sources under different conditions by the power conditioner, and how the demand is met

Fig.25 Schematic of the proposed power conditioner

by the hybrid energy system (PV, wind, and biomass gasifier). It shows how the sources were allocated according to the load demand and availability; the entire operation of the power conditioner can be seen from Fig.23. The power from PV is fully utilized to supply the load demand as well as to charge the battery during the daytime. The charging and discharging of the battery bank are also shown. It was observed that the power conditioner utilized the battery bank effectively, switching the batteries into charging mode whenever excess power was available from the sources and into discharging mode whenever there was a shortage of power from the sources. From Fig.23, the maximum demand of 204.79 kW, occurring at 11:00 am, is met by all the energy sources along with the battery bank; during that time excess power is available from the hybrid energy system and can be fed into the grid, as indicated in the figure. The power conditioner turns off the biomass gasifier when the load demand can be met by the PV, wind, and battery bank together. For example, on the typical day, at 16:00 hours the load demand is only 161.41 kW, and the power conditioner allocates the PV, wind, and battery bank to supply the load without allocating the biomass gasifier; the remaining power is taken from the grid. During the day of the real-time site test, the power conditioner allocated the biomass gasifier for only 8 hours; the demand was met by the renewable energy sources (PV + wind) along with the battery bank and the grid in the remaining periods. This

reduces the running cost of the hybrid energy system. It is observed that the power conditioner allocates the sources optimally according to demand and availability while satisfying the constraints, and controls the charging and discharging of the battery effectively. Whenever the biomass gasifier was required, the battery supplied power for a short period, since the generator takes some time to start.

ECONOMIC ANALYSIS

With the data collected from the site, a detailed economic analysis was carried out using the micropower optimization software HOMER; the results are presented in this section. Fig.7 shows the monthly average contributions of the different sources and the utility grid. It shows that not only the demand but also the availability of the sources varies; the utility grid compensates for the shortage.

The grid contributes about 12% of the demand. Fig.9 shows the total monthly purchase and sale of energy with the utility grid. The total annual energy drawn from the grid is 163,344 kWh, and the energy fed into the grid is 90,679 kWh.
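In net terms, the annual grid exchange above works out to a modest import:

```python
# Net annual energy exchange with the utility grid, from the figures above (kWh).
drawn = 163_344
fed = 90_679
net_drawn = drawn - fed   # 72,665 kWh purchased from the grid, net of sales
```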

Fig.7 Monthly average power from hybrid energy system

Fig.8 Annual contribution of different sources and grid

Fig.9 Monthly feeding and drawing of energy from the utility grid

Fig.8 shows the annual contribution of the sources in the hybrid energy system and the utility grid. The total energy from the PV system is 307,089 kWh, about 22% of the total energy supplied to the load. The total energy from the wind system is 398,514 kWh, about 29% of the total. The biomass gasifier supplies the remaining 516,750 kWh, about 37% of the total energy supplied to the load by the hybrid energy system.

The cost of energy is calculated by dividing the total annualized cost of the system components by the total energy production. The average cost per tonne of biomass fuel is Rs 2,000.00 (US $41.80), and the total biomass fuel used is 882 tonnes/year. Table 3 shows the cost details and the annualized costs, calculated for an interest rate of 9% and a 20-year project period. The total annualized cost is calculated as Rs 7,320,464 (US $152,955). The shortage of power is met by the grid and excess power is fed into the grid. The cost of purchasing energy from the

TABLE 3: ECONOMIC ANALYSIS OF HYBRID ENERGY SYSTEM

Component         Capital (Rs)  Replacement (Rs)  O&M (Rs/yr)  Fuel (Rs)   Salvage (Rs)  Total (Rs)
PV                2,190,930     0                 100,000      0           -2,346        2,288,584
Wind system       985,918       0                 75,000       0           0             1,060,918
Biomass gasifier  479,266       12,918            516,750      1,763,840   -11,807       2,760,966
Grid              0             0                 736,103      0           0             736,103
Battery           87,637        1,558             8,000        0           -261          96,934
Converter         205,400       0                 37,500       0           -489          242,411
Other             109,546       0                 25,000       0           0             134,546
System            4,058,697     14,476            1,498,353    1,763,840   -14,902       7,320,464

utility grid is Rs 5.75 per kWh, and the cost of selling energy to the grid is Rs 3.15 per kWh. The cost of energy for the hybrid energy system is Rs 5.85 (US $0.122) per kWh. The annual net purchase from the grid is Rs 6,719,550.

CONCLUSION

A power conditioner algorithm for the optimal control and operation of a hybrid energy system has been presented and implemented in hardware. A system consisting of PV, wind, biomass gasifier, and battery has been implemented with the developed power conditioner, and a real-time field test was conducted for a period of 24 hours. The implemented algorithm was found to allocate the sources effectively, and the hybrid energy system supplied the demand of the particular site effectively. Although grid interfacing was not implemented, owing to the different voltage levels of the grid and the hybrid energy system, the software has been developed to address grid interfacing issues as well. The economic analysis shows that the total annualized cost is Rs 7,320,464 (US $152,955) and the cost of energy is Rs 5.85 (US $0.122) per kWh. The annual net purchase from the grid is Rs 6,719,550.
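The annualization of capital costs at the stated 9% interest over a 20-year project period is conventionally done with the capital recovery factor, as in HOMER-style analyses. The sketch below computes it; treating this as exactly the paper's procedure is an assumption.

```python
# Capital recovery factor: the fraction of a capital cost charged per year
# over an n-year project at interest rate i.

def capital_recovery_factor(i, n):
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

crf = capital_recovery_factor(0.09, 20)   # ~0.1095 per year
# A capital cost C then contributes roughly C * crf to the annualized cost.
```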


NIT CALICUT RESEARCH REVIEW

Types of arcs in a fuzzy graph
Sunil Mathew* & M.S. Sunitha**

1 Introduction

Fuzzy graphs were introduced by A. Rosenfeld [5] in 1975, ten years after Zadeh's landmark paper "Fuzzy Sets" [9] in 1965. Fuzzy graph theory is now finding numerous applications in modern science and technology, especially in the fields of information theory, neural networks, expert systems, cluster analysis, medical diagnosis and control theory. Fuzzy modeling is an essential tool in all branches of science, engineering and medicine. Fuzzy models give more precision, flexibility and compatibility to the system when compared with classical models. Rosenfeld obtained the fuzzy analogues of several basic graph-theoretic concepts such as bridges, paths, cycles, trees and connectedness, and established some of their properties [5].

2 Preliminaries

A fuzzy graph (f-graph) [5] is a pair G : (σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. We assume that S is finite and nonempty, and that μ is reflexive and symmetric [5]. In all the examples σ is chosen suitably. Also, we denote the underlying graph by G* : (σ*, μ*) where σ* = {u ∈ S : σ(u) > 0} and μ* = {(u, v) ∈ S×S : μ(u, v) > 0}. A fuzzy graph H : (τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ(u) ≤ σ(u) for every u and υ(u, v) ≤ μ(u, v) for every u and v [4]. In particular, we call a partial fuzzy subgraph H : (τ, υ) a fuzzy subgraph of G : (σ, μ) if τ(u) = σ(u) for every u in τ* and υ(u, v) = μ(u, v) for every arc (u, v) in υ*. A fuzzy subgraph H : (τ, υ) spans the fuzzy graph G : (σ, μ) if τ = σ. A

connected f-graph G : (σ, μ) is a fuzzy tree (f-tree) if it has a fuzzy spanning subgraph F : (σ, υ) which is a tree, where for all arcs (x, y) not in F there exists a path from x to y in F whose strength is more than μ(x, y) [5]. Note that here F is a tree which contains all nodes of G and hence is a spanning tree of G. Also note that F is the unique maximum spanning tree (MST) of G [7]. A path P of length n is a sequence of distinct nodes u0, u1, ..., un such that μ(u(i-1), ui) > 0, i = 1, 2, ..., n, and the degree of membership of a weakest arc is defined as its strength. If u0 = un and n ≥ 3, then P is called a cycle, and a cycle P is called a fuzzy cycle (f-cycle) if it contains more than one weakest arc [4]. The strength of connectedness between two nodes x and y is defined as the maximum of the strengths of all paths between x and y and is denoted by CONNG(x, y). An x-y path P is called a strongest x-y path if its strength equals CONNG(x, y) [5]. An f-graph G : (σ, μ) is connected if for every x, y in σ*, CONNG(x, y) > 0. Throughout this paper, we assume that G is connected. An arc of an f-graph is called strong if its weight is at least as great as the strength of connectedness of its end nodes when it is deleted, and an x-y path P is called a strong path if P contains only strong arcs [1]. An arc is called an f-bridge of G if its removal reduces the strength of connectedness between some pair of nodes in G [5]. Similarly, an f-cutnode w is a node in G whose removal from G reduces the strength of connectedness between some pair of nodes other than w. A complete fuzzy graph (CFG) is an f-graph

* Research Scholar, Department of Mathematics. e-mail: [email protected]
** Assistant Professor, Department of Mathematics. e-mail: [email protected]

DECEMBER 2009


G : (σ, μ) such that μ(x, y) = σ(x) ∧ σ(y) for all x and y.

3 Types of arcs in a fuzzy graph

Depending on the value of CONN_{G-(x,y)}(x, y) for an arc (x, y) in a fuzzy graph G, we define the following three different types of arcs. Note that CONN_{G-(x,y)}(x, y) denotes the strength of connectedness between x and y in the fuzzy graph obtained from G by deleting the arc (x, y).

Definition 1: An arc (x, y) in G is called α-strong if μ(x, y) > CONN_{G-(x,y)}(x, y).

Definition 2: An arc (x, y) in G is called β-strong if μ(x, y) = CONN_{G-(x,y)}(x, y).

Definition 3: An arc (x, y) in G is called a δ-arc if μ(x, y) < CONN_{G-(x,y)}(x, y).

Remark 1: A strong arc is either α-strong or β-strong, by Definition 1 and Definition 2 respectively.

Definition 4: A δ-arc (x, y) is called a δ*-arc if μ(x, y) > μ(u, v), where (u, v) is a weakest arc of G.

Definition 5: A path in an f-graph G : (σ, μ) is called an α-strong path if all its arcs are α-strong, and a β-strong path if all its arcs are β-strong.

Example 1: Let G : (σ, μ) be with σ* = {u, v, w, x} and μ(u, v) = 0.2 = μ(x, u), μ(v, w) = 1 = μ(w, x), μ(v, x) = 0.3. Here, (v, w) and (w, x) are α-strong arcs, (u, v) and (x, u) are β-strong arcs and (v, x) is a δ-arc. Also (v, x) is a δ*-arc since μ(v, x) > μ(u, v), where (u, v) is a weakest arc of G. Here P1 : x, w, v is an α-strong x-v path whereas P2 : x, u, v is a β-strong x-v path.

Note that in an f-graph G, the types of arcs cannot be determined by simply examining the arc weights, for the membership value of a δ-arc can exceed the membership values of α-strong and β-strong arcs. Also, the membership value of a β-strong arc can exceed that of an α-strong arc, as can be seen from the following examples.

(a) Membership value of a δ-arc exceeds membership value of a β-strong arc.

In Example 1, μ(v, x) = 0.3 > 0.2 = μ(u, v). Here, (v, x) is a δ-arc whereas (u, v) is β-strong.

(b) Membership value of a δ-arc exceeds membership value of an α-strong arc.

Example 2: Let G : (σ, μ) be with σ* = {u, v, w, x} and μ(u, v) = 1 = μ(v, w), μ(u, w) = 0.4, μ(w, x) = 0.3, μ(x, u) = 0.1. Here, (u, v), (v, w) and (w, x) are α-strong arcs, whereas (u, w) and (x, u) are δ-arcs with μ(u, w) = 0.4 > 0.3 = μ(w, x).

(c) Membership value of a β-strong arc exceeds membership value of an α-strong arc.

Example 3: Let G : (σ, μ) be with σ* = {u, v, w, x} and μ(u, v) = μ(u, w) = μ(v, w) = 1, μ(w, x) = 0.5, μ(x, u) = 0.1. Here, (u, v), (v, w), (u, w) are β-strong arcs, whereas (w, x) is α-strong and (x, u) is a δ-arc with μ(u, w) = μ(u, v) = μ(v, w) = 1 > 0.5 = μ(w, x).

4 Types of arcs in a strongest path

Now we shall discuss the types of arcs of a strongest path in G.

Remark 2: A strongest path may contain all types of arcs. In Example 1, the strength of the path P : u, v, x, w is 0.2; it is a strongest path from u to w and it contains all types of arcs, namely (u, v) is β-strong, (x, w) is α-strong and (v, x) is a δ-arc.

Remark 3: As per Remark 1, a strong path contains only α-strong and β-strong arcs but no δ-arcs.

Remark 4: In a graph G, each path is strong as well as strongest. But in a fuzzy graph a strongest path need not be a strong path, and a strong path need not be a strongest path. In Example 1, P1 : u, v, x, w is a strongest u-w path, but not a strong u-w path. Note that P2 : u, v, w and P3 : u, x, w are strong u-w paths. Now, P4 : v, u, x is a strong v-x path which is not a strongest v-x path, and P5 : v, w, x is the strongest v-x path.

Remark 5: A strongest path without δ-arcs is a
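The classification is mechanical once strengths of connectedness can be computed. The following sketch is my own illustration, not part of the paper: it computes max-min connectedness with a Floyd-Warshall-style pass and reproduces the arc types of Example 1.

```python
def conn_strength(nodes, mu, s, t):
    """Strength of connectedness CONNG(s, t): the maximum over all s-t
    paths of the minimum arc membership on the path (max-min)."""
    # Initialise with direct arc memberships (the graph is undirected).
    best = {(u, v): mu.get((u, v), mu.get((v, u), 0.0))
            for u in nodes for v in nodes if u != v}
    for k in nodes:  # allow k as an intermediate node
        for u in nodes:
            for v in nodes:
                if len({u, v, k}) == 3:
                    best[(u, v)] = max(best[(u, v)],
                                       min(best[(u, k)], best[(k, v)]))
    return best[(s, t)]

def classify_arc(nodes, mu, arc):
    """Return 'alpha', 'beta' or 'delta' for one arc by comparing its
    membership with the connectedness of its end nodes after deletion."""
    reduced = {e: w for e, w in mu.items() if e != arc and e != arc[::-1]}
    w, c = mu[arc], conn_strength(nodes, reduced, *arc)
    return "alpha" if w > c else "beta" if w == c else "delta"

# Example 1 from the text.
nodes = ["u", "v", "w", "x"]
mu = {("u", "v"): 0.2, ("x", "u"): 0.2,
      ("v", "w"): 1.0, ("w", "x"): 1.0, ("v", "x"): 0.3}
for arc in mu:
    print(arc, classify_arc(arc=arc, nodes=nodes, mu=mu))
```

Running this prints alpha for (v, w) and (w, x), beta for (u, v) and (x, u), and delta for (v, x), matching Example 1.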

strong path; for, it contains only α-strong and β-strong arcs.

Proposition 1: A strong path P from x to y is a strongest x-y path in the following cases:
(i) if P contains only α-strong arcs;
(ii) if P is the unique strong x-y path;
(iii) if all x-y paths in G are of equal strength.

Proof: (i) Let G : (σ, μ) be an f-graph and let P be a strong x-y path in G containing only α-strong arcs. If possible, suppose that P is not a strongest x-y path, and let Q be a strongest x-y path in G. Then P ∪ Q will contain at least one cycle C in which every arc of C − P will have strength greater than the strength of P. Thus a weakest arc of C is an arc of P; let (u, v) be such an arc of C. Let C′ be the u-v path in C not containing the arc (u, v). Then μ(u, v) ≤ strength of C′ ≤ CONN_{G-(u,v)}(u, v), which implies that (u, v) is not α-strong, a contradiction. Thus P is a strongest x-y path.

(ii) Let G : (σ, μ) be an f-graph and let P be the unique strong x-y path in G. If possible, suppose that P is not a strongest x-y path, and let Q be a strongest x-y path in G. Then strength of Q > strength of P; that is, for every arc (u, v) in Q, μ(u, v) > μ(x′, y′), where (x′, y′) is a weakest arc of P.

Claim: Q is a strong x-y path. For otherwise, if there exists an arc (u, v) in Q which is a δ-arc, then μ(u, v) < CONN_{G-(u,v)}(u, v) ≤ CONNG(u, v), and hence μ(u, v) < CONNG(u, v). Then there exists a path from u to v in G whose strength is greater than μ(u, v); let it be P′. Let w be the last node after u common to Q and P′ in the u-w subpath of P′, and let w′ be the first node before v common to Q and P′ in the w′-v subpath of P′. (If P′ and Q are disjoint u-v paths, then w = u and w′ = v.) Then the path P′′, consisting of the x-w path of Q, the w-w′ path of P′, and the w′-y path of Q, is

an x-y path in G such that strength of P′′ > strength of Q, a contradiction to the assumption that Q is a strongest x-y path in G. Thus (u, v) cannot be a δ-arc, and hence Q is a strong x-y path in G. Thus we have another strong path from x to y, other than P, which contradicts the assumption that P is the unique strong x-y path in G. Hence P must be a strongest x-y path in G.

(iii) If all paths from x to y have the same strength, then each such path is a strongest x-y path. In particular, a strong x-y path is a strongest x-y path.

We observe that if all arcs of an f-graph G are β-strong, as in graphs without bridges, then each strongest path is a strong path, but the converse need not be true. For, consider the f-graph G : (σ, μ) with σ* = {u, v, w, x, y} and μ(u, v) = μ(v, w) = μ(w, x) = μ(x, u) = 0.2, μ(u, y) = μ(y, w) = 0.1. Here all arcs are β-strong and u, y, w is a strong u-w path, but it is not a strongest u-w path.

References
1. K.R. Bhutani, A. Rosenfeld, Strong arcs in fuzzy graphs, Information Sciences 152 (2003) 319-322.
2. K.R. Bhutani, A. Rosenfeld, Fuzzy end nodes in fuzzy graphs, Information Sciences 152 (2003) 323-326.
3. K.R. Bhutani, A. Battou, On M-strong fuzzy graphs, Information Sciences 1-2 (2003) 103-109.
4. J.N. Mordeson, P.S. Nair, Fuzzy Graphs and Fuzzy Hypergraphs, Physica-Verlag, 2000.
5. A. Rosenfeld, Fuzzy graphs, in: L.A. Zadeh, K.S. Fu, M. Shimura (Eds.), Fuzzy Sets and their Applications to Cognitive and Decision Processes, Academic Press, New York, 1975, 77-95.
6. Sameena K., M.S. Sunitha, Strong arcs and maximum spanning trees in fuzzy graphs, International Journal of Mathematical Sciences 5 (2006) 17-20.

7. M.S. Sunitha, A. Vijayakumar, A characterization of fuzzy trees, Information Sciences 113 (1999) 293-300.
8. Sunil Mathew, M.S. Sunitha, Types of arcs in a fuzzy graph, Information Sciences 179 (2009) 1760-1768.
9. L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338-353.



Steel fiber reinforced SCC wall panels in one-way in-plane action
N. Ganesan*, P.V. Indira* and S. Rajendra Prasad**

Abstract

Eight Steel Fibre Reinforced Self Compacting Concrete (SFRSCC) rectangular wall panels, hinged at top and bottom with free vertical edges, were tested and their properties evaluated. The panels were subjected to a uniformly distributed load applied at a small eccentricity of t/6 to reflect possible eccentric loading in practice. The variables considered were four values of Slenderness Ratio (SR), viz. 12, 15, 21 and 30, and four values of Aspect Ratio (AR), viz. 0.75, 1.07, 1.5 and 1.875. The thickness of the wall panels was kept constant, and the vertical and horizontal reinforcement was kept constant at 0.88% and 0.74% respectively. The crack patterns of the specimens, failure modes and load-deformation characteristics are reported. The ultimate strength of SFRSCC wall panels decreases non-linearly with increase in SR and decreases linearly with increase in AR.

Keywords: Aspect ratio, slenderness ratio, self compacting concrete, steel fibres, wall panels

Introduction

Over the years, Reinforced Concrete (RC) walls have gained greater acceptance as load-bearing structural members, and RC wall construction has become increasingly popular worldwide. The trend towards RC core walls in high-rise buildings is the reason for this popularity, the wall acting as an integral component of the core-wall system of tall buildings. Also they can appear

as integral components in box frames, folded plates, box girders, etc. Recently, Self Compacting Concrete (SCC) has gained much attention in the concrete industry and is being used successfully in many applications throughout the world1. The increased flowability of SCC can ease the constructability requirements of pre-cast elements, for which an important aspect of the design is the ability to place and consolidate concrete within the form and around the internal reinforcing2. With the increased flowability of SCC, it is possible to produce thin concrete walls with minimum reinforcement consisting of smaller diameter reinforcing bars. This leads to a reduction in the cost of the building as well as an increase in its usable space. Investigations on the strength and behaviour of SFRSCC wall panels have not yet been reported. Hence a large-scale experimental investigation was recently carried out at the National Institute of Technology Calicut to study the strength and behaviour of SFRSCC wall panels.

Experimental Programme

The experimental programme consists of casting and testing of 8 wall panels under compression. Table 1 gives the details of overall dimensions, SR and AR of the wall panels. The thickness of the wall panels was kept constant. For casting the specimens, the formwork was fabricated using Indian Standard (IS) equal angles of 40mm×40mm×6mm.

* Professor, Department of Civil Engineering
** Research Scholar, Department of Civil Engineering


Table 1 - Details of wall panels and variables

Panel         Panel Size        Variables
Designation   h×L×t (mm)        SR       AR
OWSFS-1       480×320×40        12       1.5
OWSFS-2       600×400×40        15       1.5
OWSFS-3       840×560×40        21       1.5
OWSFS-4       1200×800×40       30       1.5
OWAFS-1       600×320×40        15       1.875
OWAFS-2       600×400×40        15       1.5
OWAFS-3       600×560×40        15       1.07
OWAFS-4       600×800×40        15       0.75

(i) Materials used

The materials consist of Portland Pozzolana Cement (PPC, fly-ash based) conforming to IS 1489 (Part 1): 19913, fine aggregate conforming to grading zone III as per IS 383-19704 with a specific gravity of 2.67, coarse aggregate with a maximum size of 12.5 mm and a specific gravity of 2.78, and potable water. Straight steel fibres of length 23 mm were used; the volume fraction and aspect ratio of the steel fibres are 0.5% and 60 respectively. Mineral admixtures consisting of silica fume and Class-C fly ash, and chemical admixtures comprising a naphthalene-based superplasticizer and a polysaccharide-based Viscosity Modifying Agent (VMA), were used.

(ii) Mix proportions

The mix proportions of SFRSCC were obtained after extensive trials based on the guidelines of EFNARC5 for M30 grade concrete. As per EFNARC, a concrete can be considered self-compacting if it fulfils the requirements of filling ability, passing ability and segregation resistance. When the above mix was checked for filling ability using the V-funnel test, the time required to empty the V-funnel was 8 seconds, and the slump flow by Abrams cone was found to be 650 mm. When the passing ability was checked using the L-box test, the ratio H2/H1 was found to be 0.9. For checking the segregation resistance, the V-funnel at T5min test was conducted: the trap door of the V-funnel was opened 5 minutes after filling, and the time required to empty the V-funnel was found to be 11 seconds. Table 2

gives the details of the constituents of the SFRSCC mix thus obtained.

Table 2. Mix proportions of SCC

Particulars                      Quantity (kg/m³)
Cement (Fly Ash based)           493
Fly Ash                          20
Silica fume                      10
Fine aggregate                   789
Coarse aggregate                 740
Water                            246
Superplasticiser                 5
VMA                              0.012
Vol. fraction of steel fibres    0.5%
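The fresh-property checks described above can be expressed as a small acceptance test. The numeric limits below are the typical ranges given in the EFNARC (2002) guidelines and should be treated as indicative, not as values quoted from this paper:

```python
# Typical acceptance ranges from the EFNARC (2002) SCC guidelines;
# treat the exact limits as indicative rather than authoritative.
LIMITS = {
    "slump_flow_mm": (650, 800),
    "v_funnel_s": (6, 12),
    "l_box_h2_h1": (0.8, 1.0),
    "v_funnel_t5min_increase_s": (0, 3),
}

def is_self_compacting(results):
    """True if every measured fresh property lies in its acceptance range."""
    return all(lo <= results[key] <= hi for key, (lo, hi) in LIMITS.items())

# Fresh-property results reported for the mix in this paper:
# 650 mm slump flow, 8 s V-funnel, L-box ratio 0.9, 11 s at T5min (+3 s).
measured = {"slump_flow_mm": 650, "v_funnel_s": 8,
            "l_box_h2_h1": 0.9, "v_funnel_t5min_increase_s": 11 - 8}
print(is_self_compacting(measured))  # → True
```

Note that the segregation-resistance criterion is expressed here as the increase in V-funnel time after 5 minutes (11 s minus the initial 8 s), which is how the T5min check is usually judged.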

(iii) Reinforcement

The reinforcement, in the form of a rectangular grid fabricated using 6 mm diameter High Yield Strength Deformed bars (Fe 415), was placed in a single layer at mid-thickness of the panel. The spacing of bars in both directions did not exceed three times the panel thickness, with a clear side cover of 10 mm. The yield strength of the reinforcement steel was 445 N/mm2. The percentages of vertical and horizontal reinforcement provided in the panels are 0.88 and 0.74 respectively.

(iv) Casting of specimens

The specimens were cast horizontally on a level floor in the Structural Engineering Laboratory. The wall panels were moist cured with wet gunny bags for an initial period of three days and were then immersed in the curing tank. After 28 days of curing, the panels were taken out of the curing tank, white-washed and made ready for testing. Three 150 mm cubes were cast along with the wall panels for each series and tested on the day of testing of the panels. The values of the cube compressive strength of concrete are given in Table 3.

TESTING OF WALL PANELS

The wall panels were tested under pinned end conditions at both ends, with a uniformly distributed load applied at a small eccentricity of t/6 to reflect

possible eccentric load in practice, as carried out by other investigators6,7. All specimens were tested in the vertical position in a Compression Testing Machine of 2,943 kN (300 tonnes) capacity. A levelling ruler was used to ensure proper levelling of the panels, and a plumb-bob was used to ensure their verticality. Fig. 2 shows the details of the test set-up. The loading was gradually increased in stages up to failure. At each stage, lateral deformations at the quarter- and mid-height points along the central vertical line of the panel were measured using LVDTs. The experimental ultimate loads (Pue) were recorded and are given in Table 3, along with the normalized values of the ultimate loads, obtained by dividing the ultimate load by fc×L×t and called the axial strength ratio of the panels.
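The normalization is a one-line computation; in the sketch below the fc value is purely illustrative, chosen only to show the arithmetic and unit handling, and is not taken from the paper:

```python
def axial_strength_ratio(p_ue_kn, fc_mpa, length_mm, t_mm):
    """Axial strength ratio: Pue / (fc * L * t).
    Pue is converted from kN to N so the units cancel
    (N against N/mm2 * mm * mm)."""
    return p_ue_kn * 1e3 / (fc_mpa * length_mm * t_mm)

# Panel OWSFS-1 (L = 320 mm, t = 40 mm), with an assumed
# (hypothetical) concrete strength fc of 34.2 N/mm2.
print(round(axial_strength_ratio(264.87, 34.2, 320, 40), 2))  # → 0.61
```

Because the ratio is dimensionless, it lets panels of different plan sizes and concrete strengths be compared directly, which is why Table 3 reports it alongside the raw ultimate loads.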

RESULTS AND DISCUSSION

(i) Crack patterns and failure mode

The crack patterns observed on both the tension and compression faces of the panels indicated the following: (i) the specimen OWSFS-1 failed by crushing near the edge; (ii) the panel OWSFS-2 failed by bending at mid-height, forming central horizontal cracks on the tension side and crushing on the compression side; (iii) the wall panels OWSFS-3 and OWSFS-4 failed by bending, with multiple narrow-width cracks at mid-height. In the case of the wall panel with SR equal to 12, the wall tended to crush before yielding of the reinforcement. The failure patterns of the SFRSCC wall panels of the OWAFS series are similar to those of the OWSFS series. The specimens OWAFS-1 and OWAFS-2 failed due to bending at mid-height, and the wall panel OWAFS-3 failed by bending with multiple narrow-width cracks. This kind of failure pattern may be due to the improvement in the tensile strain carrying capacity of the composite in the neighbourhood of the steel fibres, which arrest micro-cracks and enhance ductility. However, the failure pattern of OWAFS-4 differs from that of the wall panels having AR more than 0.75: this panel failed by crushing near the edges. Fig. 3 shows typical crack patterns of tested specimens.

Fig.2 Test set-up

Table 3. Experimental ultimate loads

Panel         fcu        Experimental        Pue /
designation   (N/mm2)    ultimate load       (fc·L·t)
                         (Pue) (kN)
OWSFS-1       42.73      264.87              0.61
OWSFS-2                  323.73              0.59
OWSFS-3                  441.45              0.58
OWSFS-4                  412.02              0.38
OWAFS-1                  215.82              0.49
OWAFS-2                  274.68              0.50
OWAFS-3                  392.40              0.51
OWAFS-4                  711.23              0.65

Fig.3 Typical crack pattern of tested specimen

(ii) Load-deformation response

The deformation response exhibited by a structure under load is usually known as its structural behaviour and is normally presented using a load versus deflection diagram. The load versus lateral deflection curves of the wall panels are shown in Figs. 4 and 5. Fig. 4 shows the load versus lateral deflection plots for the effect of SR. It may be noted from the figure that the curves are linear up to the formation of the first crack, beyond which they exhibit non-linearity. In general, as SR increases the load carrying capacity decreases and the lateral deflection increases for all the specimens; a significant increase in lateral deflection can be seen for the wall panel with SR = 30. The continuously increasing deflection as the loading increases indicates that the wall panels exhibit a smooth, ductile type of failure until the ultimate load is reached. The load versus lateral deflection plots for different values of AR are given

in Fig. 5. In this case also the curves are initially linear up to the formation of the first crack, beyond which they become non-linear. The load-deflection plots indicate that SFRSCC wall panels exhibit softening behaviour, which means that the wall panels behave in a more ductile manner. This softening of the material is due to the presence of a higher percentage of finer particles in the SFRSCC mix, in addition to the fibres, which transforms the material to behave in a ductile manner and also induces a higher degree of compressibility8. A further review of the load-deflection curves in Figs. 4 and 5 shows that the deflections at the mid-height points of the walls are generally proportional to those at the quarter-height points. This indicates that the mid-height and quarter-height points move in an approximately single-curvature manner in the vertical direction, which is typical of one-way behaviour. In the case of the OWAFS series, as the AR increases the strength decreases gradually. This may be attributed to the reduction in the bearing area of the wall panels as AR increases.

Conclusions

1. The development of a larger number of finer cracks in SFRSCC wall panels indicates better cracking performance. This behaviour will significantly improve the serviceability limit states and durability.

Fig. 4 Effect of Slenderness Ratio in SFRSCC wall panels

2. SFRSCC wall panels exhibit higher ductility. Hence the SFRSCC wall panel appears to be an ideal structural element for seismic-resistant structures.

References
1. Domone, P.L., "Self-compacting concrete: An analysis of 11 years of case studies", Journal of Cement and Concrete Composites, 2006, 28, pp. 197-208.
2. Precast/Prestressed Concrete Institute, "Interim Guidelines for the Use of Self-Consolidating Concrete in Precast/Prestressed Concrete Institute Member Plants", TR-6-03, Chicago, IL, 2003.

Fig. 5 Effect of Aspect Ratio in SFRSCC wall panels

3. IS 1489 (Part 1): 1991, "Indian standard code of practice for Portland-Pozzolana Cement - Specification (Fly Ash based)", Bureau of Indian Standards, New Delhi, 1991.
4. IS 383: 1970, "Indian standard code of practice for specification for coarse and fine aggregate from natural sources for concrete", Bureau of Indian Standards, New Delhi, 1970.
5. EFNARC, "Specifications and guidelines for self compacting concrete", European Federation of National Trade Associations, Surrey, UK, Feb. 2002.
6. Pillai, S.U., and Parthasarathy, C.V., "Ultimate strength and design of concrete walls", Building and Environment, 1977, Vol. 12, pp. 25-29.
7. Saheb, S.M., and Desayi, P., "Ultimate strength of RC wall panels in one-way in-plane action", Journal of Structural Engineering, ASCE, October 1989, 115(10), pp. 2617-2630.
8. Ganesan, N., and Ramana Murthy, J.V., "Strength and Behaviour of Confined Steel Fibre Reinforced Concrete Columns", ACI Materials Journal, American Concrete Institute, No. 3, May-June 1990, pp. 221-227.



Bioaccumulation of heavy metal ions - A Review
S. Bhuvaneshwari*, K. Suguna** and V. Sivasubramanian***

Abstract

Algae, bacteria, fungi and yeasts have proved to be potential metal biosorbents. Chitin is poly-β-(1→4)-2-acetamido-2-deoxy-D-glucopyranose, found in the cell walls of certain fungi, bacteria, algae and yeasts. Chitosan, poly-β-(1→4)-2-amino-2-deoxy-D-glucopyranose, is produced by alkaline deacetylation of chitin. Chitosan is a well-known biosorbent of metal ions: among the many low-cost adsorbents, it has the highest sorption capacity for several metal ions, chelating five to six times greater amounts of metals than chitin. Heavy metals of concern include copper, chromium, mercury, uranium and cadmium. Heavy metal pollution has become a serious threat and a matter of great environmental concern, as these metals are non-biodegradable and thus persistent. Bioaccumulation for the removal of heavy metal ions may provide an attractive alternative to physico-chemical methods, since the conventional techniques for removing heavy metals from contaminated water have disadvantages such as incomplete removal and high energy and reagent requirements.

Introduction

Biosorption is the property of certain types of inactive, dead microbial biomass to bind and concentrate heavy metals from even very dilute aqueous solutions. The biomass exhibits this property acting just as a chemical substance, as an ion exchanger of biological origin. It is particularly the cell wall structure of certain algae, fungi and bacteria that was found responsible for this

phenomenon. The opposite of biosorption is metabolically driven active bioaccumulation by living cells, which is an altogether different phenomenon requiring a different approach for its exploration. Biotechnology has been investigated as an alternative method for treating metal-containing wastewater of low concentration. In response to heavy metals, microorganisms have evolved various measures via processes such as transport across the cell membrane, biosorption to cell walls and entrapment in extracellular capsules, precipitation, complexation and oxidation-reduction reactions. It has been proved that they are capable of adsorbing heavy metals from aqueous solutions, especially at metal concentrations below 50 mg/L. The utilization of microbial biomass, either alive or dead, for the removal of metals from industrial wastewater and polluted waters has already been recognized1. Chitosan is one such organic material, found rarely in living organisms but abundant in the cell walls of certain fungi, bacteria, algae and yeasts. The chitin of fungi possesses principally the same structure as the chitin occurring in other organisms. However, not all fungi contain chitin; variations in the amount of chitin may depend on physiological parameters in the natural environment as well as on the fermentation conditions in biotechnological processing or in the culture of fungi. The interest in the potential utilization of fungal chitosan as a biosorbent is

* Lecturer, Department of
** Research Scholar, Department of Chemical Engg.
*** Assistant Professor, Department of Chemical Engg., NIT Calicut. e-mail: [email protected]


increasing due to the need for economical and efficient adsorbents to remove heavy metal ions from wastewater. This is attributed to the free amino groups exposed in chitosan as a result of the deacetylation of chitin2.

Determination of physico-chemical properties

The viscosity-average molecular weight of chitosan is calculated from the Mark-Houwink-Sakurada equation, which relates the intrinsic viscosity to the polymer's molecular weight. Size exclusion chromatography and gel permeation chromatography have been applied to study the molecular weight of polymers3, and detectors are also used to determine the molecular weight of chitin. The degree of deacetylation of chitosan was determined by potentiometric titration4. Chitosan was dissolved in a known excess of hydrochloric acid. From the titration of this solution with a 0.1 M sodium hydroxide solution, a curve with two inflexion points was obtained. The difference between the volumes at these two inflexion points corresponds to the acid consumed in the salification of the amine groups and permits the determination of chitosan's degree of acetylation through the equation

%NH2 = 16.1 × (V2 − V1) × Mb / W

where V1 and V2 are the base volumes at the first and second inflexion points, respectively, in mL, Mb is the base molarity, and W is the original weight of the polymer in g. The optimum condition for the deacetylation reaction was observed at a temperature of 130 °C for 90 min, corresponding to a chitosan molecular weight of about 150 kDa and a deacetylation degree of 90%5.

Biosorption

Biosorbents are prepared by pretreating the biomass in different ways, including heat treatment, detergent washing, and the use of acids, alkalis and enzymes6. Metabolism-independent adsorption of pollutants onto microbial biomass is based on the partition process.
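The titration formula above is easy to sanity-check numerically; the titration volumes in the sketch below are hypothetical figures chosen only to illustrate the arithmetic, not measurements from any cited study:

```python
def deacetylation_degree(v1_ml, v2_ml, base_molarity, sample_weight_g):
    """%NH2 = 16.1 * (V2 - V1) * Mb / W, from potentiometric titration.
    v1_ml, v2_ml: base volumes at the two inflexion points (mL);
    base_molarity: NaOH concentration; sample_weight_g: polymer mass (g)."""
    return 16.1 * (v2_ml - v1_ml) * base_molarity / sample_weight_g

# Hypothetical titration of 0.2 g of chitosan with 0.1 M NaOH:
# inflexion points at 5.0 mL and 16.2 mL.
print(round(deacetylation_degree(5.0, 16.2, 0.1, 0.2), 1))  # → 90.2
```

With these example volumes the computed %NH2 comes out near the 90% deacetylation degree reported for the optimized reaction conditions.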
Biosorption offers many advantages over conventional treatment methods7, among them metal recovery and the regeneration of biosorbents.

Chitosan was the polysaccharide with the best capacity for copper biosorption (75%)8, while chitin presented the maximum iron uptake9; metal biosorption is generally better in a single-metal system. The alga Distigma proteus, isolated from industrial wastewater, removed 48% of Cd2+ after 2 days and 90% after 8 days, and 75% Cr removal by Aspergillus niger was determined by the diphenyl carbazide colorimetric assay and atomic absorption spectrophotometry10.

Factors affecting biosorption

In the biosorption process, several operational parameters have an influence, such as the dose of adsorbent, agitation speed, temperature, initial pH and contact time. pH appears to be the most important parameter: it affects the solution chemistry of the metals, the activity of the functional groups in the biomass and the competition of metallic ions11. In the range of 20-35 °C, temperature does not influence the biosorption performance12. The adsorption of metal by chitin and chitosan in aqueous solution was directly influenced by the metal concentration.

Biosorption using biomass

Rhizopus arrhizus biomass obtained 54% recovery of uranium. Cd and Cu sorption by Microcystis aeruginosa showed 22% and 61% metal recovery. Dry mycelia of Saccharomyces cerevisiae and Pseudomonas aeruginosa used for Pb2+ recovery showed about 30% and 50% respectively. These results are lower than those obtained for chitin and chitosan extracted from C. elegans (IFM 46109), suggesting that this microorganism has biotechnological potential as a source of polysaccharides and for metal bioremediation of contaminated water. The best result found for chitin was an iron recovery of 56%, and for chitosan a Cu recovery of 75%; these were reported for mycelia of zygomycetes. The biosorption of Cu(II) reached a maximal capacity of 39.84 mg Cu(II)/g dry cell weight of Thiobacillus thiooxidans at pH 5.0. One of the best metal-sorbing biomass types is the ubiquitous Sargassum seaweed13. Ni and Cd are adsorbed by dried cells of E. agglomerans SM 38; at optimum pH their removal reached 25.2% and 32%, respectively, while for B. subtilis WD 90 the removal was 27% and 25%, respectively.

Immobilization of biomass

Biomass immobilization has various applications. The principal techniques available in the literature for the application of biosorption are based on adsorption on inert supports, entrapment in a polymeric matrix, covalent bonding to vector compounds, or cell cross-linking. Immobilization offers easy and convenient usage compared with free biomass, which is easily biodegradable, and gives a better shelf life14. For entrapment in a polymeric matrix, the polymers used were, for example, alginate and polyacrylamide. As an example of adsorption on inert supports, activated carbon was used as a support for an Enterobacter aerogenes biofilm15. Support materials are introduced prior to sterilization and inoculation with the starter culture and are left inside the continuous culture for a period of time, after which a film of microorganisms is apparent on the support surfaces. Rhizopus arrhizus fungal biomass has been immobilized in reticulated foam biomass support particles, and Rhizopus nigricans has been immobilized on polyurethane foam cubes and coconut fibres.

Chitosan preparation, membrane formulation and applications

Many of the methods reported for converting the chitin in crustacean shell to chitosan are slow and consume significant amounts of reagents; a relatively rapid and mild deacetylation method is now followed to convert chitin to chitosan16. Chitosan membranes produced from a solution of chitosan in formic acid17 present an application of chitosan membranes for the removal of heavy metal ions. Macroporous chitosan membranes were prepared according to the method described by Zeng and Ruckenstein.
The porous membrane is a very important configuration for use as a biomedical material [18]. Chitosan has various applications: it is used in waste water treatment, to stabilize food and oil spills, and as a bacterial immobilizer; it is effective in improving the quality of paper, both in wet-end addition and in sizing operations; and it plays a major role in agriculture and horticulture, especially in orchid cultivation [19]. Among its biomedical applications, as beads for controlled drug release, chitosan-alginate beads have been proven to resist the pH and pepsin concentration in the human stomach [20], and chitosan has an accelerating effect on the regeneration of bone tissue [21].

Conclusion

Rapid industrialization and progressive urbanization are highly responsible for the accumulation of metal ions in the environment. The assessment of the metal-binding capacity of some types of biomass has gained momentum since 1985. Indeed, some biomass types are very effective in accumulating heavy metals. Availability is a major factor to be taken into account in selecting biomass for clean-up purposes. Optimization of specific biosorption process applications has to be done in conjunction with industrial users and requires specific process engineering expertise and a serious developmental commitment for an effective outcome.

Acknowledgement

The authors are grateful to the Department of Science and Technology, Ministry of Science and Technology, Government of India, New Delhi, for their financial support (Project No: SR/FTP/CS68/2007).

References

1. B. Volesky and Z.R. Holan, Biosorption of heavy metals. Biotechnology Progress, vol. 11, no. 3, p. 235-250, 1995.
2. M. Beran, L. Adamek, P. Hanak, and P. Molik, Isolation and some applications of fungal chitin-glucan complex and chitosan.
3. P. Pochanavanaich and W. Suntornsuk, Fungal chitosan production and its characterization. Letters in Applied Microbiology, vol. 35, p. 17-21, 2002.
4. Marco Antonio Torres, Marisa Masumi Beppu, Eduardo Jose Arruda, Viscous and viscoelastic properties of chitosan solutions and gels. Brazilian Journal of Food Technology, vol. 9, no. 2, p. 101-108, 2006.
5. Kalaivani Nadarajah, Dawn Carmel Paul, Abdul Jalil Abdul Kader, Effects of alkaline and acid treatment to the yield and quality

of chitosan extracted from Absidia sp. Journal of Salwa Technology, vol. 44, p. 33-42, 2006.
6. R. Suleman Qaiser, Anwar Saleemi, Muhammad Mahmood Ahmad, Heavy metal uptake by agro based waste materials. Electronic Journal of Biotechnology, vol. 10, no. 3, p. 409-416, 2007.
7. Hima Karnika Alluri, Srinivasa Reddy Ronda, Vijaya Saradhi Settalluri, Jayakumar Singh Bondili, Suryanarayana V. and Venkateshwar P., Biosorption: an eco-friendly alternative for heavy metal removal. African Journal of Biotechnology, vol. 6, no. 25, p. 2924-2931, 2007.
8. J.L. Zhou and R.J. Kiff, The uptake of copper from aqueous solution by immobilized fungal biomass. Journal of Chemical Technology and Biotechnology, vol. 52, p. 317-330, 1991.
9. A. Meyer and F.M. Wallis, The use of Aspergillus niger (strain 4) biomass for lead uptake from aqueous systems. Water SA, vol. 23, no. 2, 1997.
10. R. Schmuhl, H.M. Krieg and K. Keizer, Adsorption of Cu(II) and Cr(VI) ions by chitosan: kinetics and equilibrium. vol. 27, p. 1-6.
11. K. Anand Kishore, M. Praveen Kumar, V. Ravi Krishna and G. Venkat Reddy, Optimization of process variables of citric acid production using Aspergillus niger in a batch fermentor. vol. 16, p. 16-20, 2008.
12. Iqbal Ahmad, Shaheen Zafar, Farah Ahmad, Heavy metal biosorption potential of Aspergillus and Rhizopus sp. isolated from waste water treated soil. Journal of Applied Sciences & Environmental Management, vol. 9, no. 1, p. 123-126, 2005.
13. B. Bina, M. Kermani, H. Movahedian and Z. Khazaei, Biosorption and recovery of copper and zinc from aqueous solution by non-living biomass of the marine brown alga Sargassum. Pakistan Journal of Biological Sciences, vol. 9, no. 8, 2006.
DECEMBER 2009

14. Rani Faryal, Maria Yusuf, Kiran Munir, Faheem Tahir and Abdul Hameed, Enhancement of Cr6+ removal by Aspergillus niger RH19 using a biofermentor. vol. 39, no. 5, p. 1873-1881, 2007.
15. K. Nadarajah, J. Kader, Mohd. Mazmira and D.C. Paul, Production of chitosan by fungi. Pakistan Journal of Biological Sciences, vol. 4, no. 3, p. 263-265, 2001.
16. Yuzhu Fu and T. Viraraghavan, Column studies for biosorption of dyes from aqueous solutions on immobilized Aspergillus niger fungal biomass. Water SA, vol. 29, no. 4, 2003.
17. Trang Si Trung, Wah Wah Thein-Han, Nguyen Thi Qui, Chuen-How Ng, Willem F. Stevens, Functional characteristics of shrimp chitosan and its membranes as affected by the degree of deacetylation. Bioresource Technology, vol. 97, p. 659-663, 2006.
18. Z.Y. Gu, P.H. Xue and W.J. Li, Preparation of the porous chitosan membrane by cryogenic induced phase separation. Polymers for Advanced Technologies.
19. R.W. Coughlin, M.R. Deshaies and E.M. Davis, Preparation of chitosan for heavy metal removal. Environmental Progress, vol. 9, no. 35, 1990.
20. Rosa Valeria da Silva Amorim, Wanderley de Souza, Kazutaka Fukushima, Galba Maria de Campos-Takaki, Faster chitosan production by Mucoralean strains in submerged culture. Brazilian Journal of Microbiology, vol. 32, p. 20-23, 2001.
21. W. Kaminski, Z. Modrzejewska, Application of chitosan membranes in separation of heavy metal ions. Separation Science and Technology, vol. 32, no. 16, p. 2659-2668, 1997.


Planning Annualised Hours
M.R. Sureshkumar* and V. Madhusudanan Pillai**

1. Introduction

Demand fluctuation is a major concern for industry, and some industries face a seasonal demand pattern. The usual ways of managing such a demand pattern include varying the workforce size, building inventory, subcontracting and varying workforce utilization. All these methods have their own disadvantages; for example, varying the workforce utilization leads to idle time in slow periods and costly overtime in hectic periods. Annualising working hours is another method of facing seasonal demand, and this method is increasingly popular in Europe, and in the UK in particular. Annualisation is an arrangement of hours in which staff work to annualised contracted hours rather than a weekly or monthly number of hours. The working hours vary with the demand pattern: during busy periods employees may have to work more, while in slack periods they work less. The hours the employee has to execute are decided in advance and can vary on a daily, weekly or monthly basis. Thus, annualised hours (AH) allows the employer to vary workforce availability according to the demand level without incurring much overtime, hiring, training or subcontracting cost. AH application gives positive results, including reduced inventory cost, decreased unit cost, less labour turnover and training, easier recruitment, and better customer service. Thus, the overall expense of the firm is reduced. The AH method has already been successfully implemented in many manufacturing organizations where employers are often faced with busy and slack periods. However, the scheme has been implemented mainly in the service-type industrial sector. The pattern for the introduction of collectively-agreed AH schemes is that the basic parameters are laid down in sectoral agreements, with their concrete implementation referred to agreement at company or workplace level, between management and local trade unions. Under the influence of legislation, collective agreements at sector and company level are very common for AH schemes [1]. The major advantages of annualising working hours are the reduced cost and the reduction in the use of temporary workers and overtime in comparison to other options. A reduction in the use of temporary workers can also lead to an improvement in productivity and in the quality of the product or service. Rhodia Consumer Specialities [2] introduced annualised hours for staff scheduling; the benefits were that consumer complaints fell by 25% and the service standard improved substantially. The results from Tesco [3] distribution showed that after the introduction of the AH system the stock levels were reduced to a large extent.

2. Problem Description

In an AH problem the number of workers is decided on the basis of total annual demand.

* Graduate student, Department of Mechanical Engineering ** Assistant Professor, Department of Mechanical Engineering. e-mail: [email protected]

NIT CALICUT RESEARCH REVIEW

The manpower requirement is calculated as given below, assuming 52 weeks in a year.

Average working hours in a week = 35 hours
Total annual forecasted demand = 16100 hours
Total number of holiday weeks in a year per employee = 6
Total number of working weeks = 52 - 6 = 46
Manpower required = 16100/(46 × 35) = 10 workers

The workers are assumed to be cross-trained: they are able to perform different types of task, but with different relative efficiency. A relative efficiency is considered for each type of task performed by each category. A value of 0.8 signifies that a worker in a given category needs to work 1/0.8 hours to meet a demand that a worker with a relative efficiency equal to 1 would meet in 1 hour. The duties to be performed by an employee are predetermined and there is a specified number of worker categories, for instance 3. Since the possibilities of overtime or hiring temporary workers are not considered, a capacity shortage is possible during certain weeks as a result of the relative efficiency considered for different types of tasks. However, in AH applications shortages will be a small percentage of the required capacity compared to other methods of worker assignment.

3. Objective function

The success of an organisation lies in the level at which its customers are satisfied: as the service level improves, customer satisfaction increases. The capacity of the organization is fixed as per the forecasted demand. If the required capacity is more than the actual capacity then the service level deteriorates and the customer will not be satisfied. If the relative capacity shortage, defined as the capacity shortage relative to the required capacity, is large then the demand cannot be met. On the other hand, if the capacity shortage is a small part of the required capacity then the workers can meet the demand with a small extra effort; that is, the demand will be met with slightly reduced service quality. The maximum relative capacity shortage, which has to be minimised, can be considered as the objective function, thus optimising the service level. This function avoids large capacity shortages and tends to distribute capacity over the course of the year in a regular way. It minimises the maximum capacity shortage; however, it gives no consideration to periods whose capacity shortages are less than the maximum relative capacity shortage. To obtain a small capacity shortage in every week, a secondary objective function, the sum of relative capacity shortages, is considered. The objective function is then defined as the weighted sum of these two functions; that is, it minimises the weighted sum of two terms: (i) the maximum relative capacity shortage and (ii) the sum of weekly relative capacity shortages. The objective function described here is the same as that of Corominas et al. [4].

4. Models for annualised hours

The following models can be considered in the annualised scenario.

i. Holiday weeks are partially individualised - Mixed Integer Linear Programming (MILP) Model

In this type, a part of the holiday weeks is individualised, which means that a certain number of holiday weeks is fixed as per the agreement and the remaining holiday weeks are assigned on the basis of the demand requirement. Workers are not allowed to take individualised holiday weeks at the same time [5]. For example, if the total number of holiday weeks is six and two of these are individualised, the placing of the remaining four holiday weeks is determined by the model. This type of assignment of holiday weeks reduces the shortages compared to completely individualised holiday weeks.

ii. Holiday periods are individualised - MILP Model

Instead of holiday weeks, the periods in which holiday weeks can be availed are individualised. That is, the assignment may be, for example, two weeks in winter and four weeks in summer. The holiday weeks which lie in these periods are determined by the model.

iii. When spikes in demand exist - MILP Model

This model can be used when there is a spike in demand in some particular weeks, as happens in festival seasons.

iv. Variable working weeks - Linear Programming (LP) Model

In the previous models the weekly working hours for an employee belong to a finite set. In this model the weekly working hours are instead treated as variable and can differ from worker to worker, subject to an upper bound on the weekly working hours. However, the total annual working hours will be the same for all employees.

4.1 General characteristics of the MILP model

i. The weekly working hours are taken from a finite set. For example, a set can contain 25, 35, and 50 weekly working hours.
ii. The total number of holiday weeks is fixed previously.
iii. Overtime is not allowed.
iv. Hiring of temporary workers is not allowed.
v. The average working hours for a group of 12 consecutive weeks cannot be larger than 44 hours per week.
vi. All the workers cannot take their holiday weeks at the same time.

In all the above models the objective is to minimise the weighted sum of the maximum relative capacity shortage and the sum of weekly capacity shortages, subject to the relevant constraints.

4.2 General characteristics of the LP model

i. The weekly working hours vary for different workers, subject to an upper bound, for example 50 hours.
ii. The total number of individualised holiday weeks is as agreed previously through contract.

Characteristics iii to vi of the MILP model are applicable here also.

4.3 General model description

Objective function:

Minimises the weighted sum of (i) the maximum relative capacity shortage and (ii) the sum of weekly relative capacity shortages.

Constraints:
1. Considers the effect of the maximum relative capacity shortage;
2. Assigns the required number of hours to each worker as stipulated in the contract;
3. Ensures that the assigned hours, considering efficiency and capacity shortage in hours, are greater than or equal to the required hours as per the forecast;
4. Equalises the time allotted for all types of task in a week for a category of workers with the time assigned to that category of workers for the same week;
5. Ensures the contractual condition that the average time assigned over 12 consecutive weeks is less than 44 hours;
6. States the non-negativity condition for the variables.

5. Implication of AH

A problem for illustration is modelled with a staff strength of five, 46 working weeks and six holiday weeks. The workers are grouped in three categories and three types of task are considered. Each category of workers has a different efficiency for each type of task. The relative efficiencies of the categories of workers for performing the tasks are given in Table 1.

Table 1. Relative efficiency

             Task 1   Task 2   Task 3
Category 1     1        0.9      0.8
Category 2     0        1        0.9
Category 3     0        0        1
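The sizing rule of Section 2 and the efficiency conversion behind Table 1 can be sketched together in a few lines of Python. This is an illustrative sketch, not the authors' notation: the function names and the dictionary encoding of Table 1 are choices made here.

```python
import math

# Relative efficiencies from Table 1; a missing pair means the category
# cannot perform that task (efficiency 0).
EFFICIENCY = {
    ("cat1", "task1"): 1.0, ("cat1", "task2"): 0.9, ("cat1", "task3"): 0.8,
    ("cat2", "task2"): 1.0, ("cat2", "task3"): 0.9,
    ("cat3", "task3"): 1.0,
}

def manpower_required(annual_demand_hours, weekly_hours=35, holiday_weeks=6):
    """Section 2 sizing rule: workers needed to cover the annual demand."""
    working_weeks = 52 - holiday_weeks
    return math.ceil(annual_demand_hours / (working_weeks * weekly_hours))

def hours_needed(category, task, demand_hours):
    """Working hours a worker must put in to meet `demand_hours` of a task:
    a relative efficiency of 0.8 means 1/0.8 hours per demanded hour."""
    eff = EFFICIENCY.get((category, task), 0.0)
    if eff == 0.0:
        raise ValueError(f"{category} cannot perform {task}")
    return demand_hours / eff

print(manpower_required(16100))          # 10, as computed in Section 2
print(hours_needed("cat1", "task3", 8))  # 10.0 hours for 8 demanded hours
```

With the numbers used in the text, 16100/(46 × 35) gives exactly 10 workers, and a category-1 worker needs 8/0.8 = 10 hours to meet 8 demanded hours of task 3.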

The annualised hours problem can be formulated as an MILP model or as an LP model. Here, a comparison is made with respect to the capacity shortages under these models. In the literature, the planning of annualised hours is carried out with a finite set of weekly working hours [4], and AH models with a finite set of weekly working hours are usually modelled as MILP. The finite set of weekly working hours for the problem modelled here contains 25, 35 and 50 hours. The same problem is modelled as an LP with an upper bound of 50 on the weekly working hours. The forecasted demand, capacity and shortage profiles of these models for the same problem are given in Figures 1 and 2. It can be seen that variable weekly working hours can fit the requirement better than a finite set of weekly working hours. (With a finite set of weekly working hours, the weekly working hours assigned must be selected from the finite set. This may lead to a poor distribution of working hours and consequently higher capacity shortages.)
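Model iv (variable weekly working hours) can be prototyped as a small linear program. The sketch below is an illustrative reconstruction using scipy, not the authors' full formulation: the weights w1 and w2, the variable names and the reduced constraint set (a single worker's plan, with no worker categories, holiday weeks or 12-week averaging) are assumptions made here.

```python
import numpy as np
from scipy.optimize import linprog

def plan_hours(demand, annual_hours, max_week=50.0, w1=1.0, w2=0.1):
    """Variables: x_t (hours assigned in week t), s_t (capacity shortage
    in hours in week t) and M (maximum relative shortage).  Minimise
    w1*M + w2*sum(s_t/d_t) subject to x_t + s_t >= d_t, sum(x_t) equal
    to the contracted annual hours, and 0 <= x_t <= max_week."""
    d = np.asarray(demand, dtype=float)
    T = len(d)
    n = 2 * T + 1                      # [x_0..x_{T-1}, s_0..s_{T-1}, M]
    c = np.zeros(n)
    c[T:2 * T] = w2 / d                # sum of relative shortages
    c[-1] = w1                         # maximum relative shortage

    # -x_t - s_t <= -d_t  (assigned hours plus shortage cover the demand)
    A1 = np.zeros((T, n))
    A1[np.arange(T), np.arange(T)] = -1.0
    A1[np.arange(T), T + np.arange(T)] = -1.0
    # s_t/d_t - M <= 0    (M dominates every weekly relative shortage)
    A2 = np.zeros((T, n))
    A2[np.arange(T), T + np.arange(T)] = 1.0 / d
    A2[:, -1] = -1.0
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([-d, np.zeros(T)])

    A_eq = np.zeros((1, n))            # contracted annual hours all used
    A_eq[0, :T] = 1.0

    bounds = [(0, max_week)] * T + [(0, None)] * (T + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[annual_hours],
                  bounds=bounds)
    return res.x[:T], res.x[T:2 * T]

# Four weeks of demand (hours) against 160 contracted hours: 20 hours of
# shortage are unavoidable, and the LP spreads assigned hours so that the
# maximum relative shortage stays small.
x, s = plan_hours([40, 60, 30, 50], annual_hours=160)
```

The MILP variants of Section 4 differ only in restricting each x_t to a finite set, which is what makes them integer programs.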

Fig. 1. Demand, capacity and shortage profiles under a finite set of working hours

Fig. 2. Demand, capacity and shortage profiles under variable working hours

The importance of annualised hours can be better understood from a comparison with a fixed-hours scenario. In the fixed-hours scenario, the weekly working hours of a worker are the same in every week; they are taken as 35 hours, the same as the average weekly working hours considered for the AH scheme. For the same demand profile of the problem, this method of worker assignment leads to large capacity shortages, as can be seen from Figure 3, which shows that the capacity shortages are greatest in periods 13 to 30. The capacity shortages when AH schemes are implemented are shown in Figures 1 and 2, which show smaller capacity shortages in periods 13 to 30. The capacity shortage obtained by the fixed-hours method is greater than that of any of the AH methods (see Table 2 for a comparison). Excess capacity also results when the fixed-hours method is used: for example, excess capacity can be seen in period 44 and in periods 47 to 52 of Figure 3, and similarly in some other periods. The chance of excess capacity is very remote in an AH scheme with variable weekly working hours; this possibility cannot be written off in an AH scheme with a finite set of weekly working hours.

Table 2. Comparison of shortages

Method                               Total Capacity Shortage (hours)   Total Demand (hours)
Finite set of weekly working hours                225                         8050
Variable weekly working hours                     182                         8050
Fixed weekly working hours                        781                         8050

In an AH problem the number of staff required is determined by considering the total annual demand hours. When an AH problem is modelled using a finite set of weekly working hours, scope for capacity shortage exists; further, when the holiday weeks are individualised, the capacity shortage increases. The effect of relative efficiency augments this problem: cross-trained workers are assumed to perform different categories of task at different relative efficiencies. This implies that the combination of a finite set of weekly working hours and relative efficiency provides scope for larger capacity shortages than variable weekly working hours (see Table 2 for a comparison of the capacity shortages under a finite set of weekly working hours and variable weekly working hours). Variable weekly working hours can match the forecasted requirement more closely, and hence the capacity shortage will be less.
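For comparison, the fixed-hours baseline behind Figure 3 and Table 2 is easy to reproduce for any demand profile. This is a minimal sketch with made-up numbers: demand is taken as total hours per week for the whole workforce, and capacity is simply headcount times the fixed week.

```python
def fixed_hours_profile(demand, workers, weekly_hours=35.0):
    """Total shortage and excess hours under a fixed-hours schedule, the
    baseline against which the AH models are compared (cf. Table 2)."""
    capacity = workers * weekly_hours
    shortage = sum(max(d - capacity, 0.0) for d in demand)
    excess = sum(max(capacity - d, 0.0) for d in demand)
    return shortage, excess

# Five workers give a flat 175 h/week: busy weeks overshoot the fixed
# capacity while slack weeks leave it idle.
short, idle = fixed_hours_profile([150, 200, 230, 120, 175], workers=5)
print(short, idle)  # 80.0 80.0
```

The simultaneous shortage and idle time in this toy profile is exactly the pattern the paper observes in Figure 3, and what the AH schemes remove by reshaping the weekly hours.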

Fig. 3. Demand, capacity and shortage profiles under fixed working hours in each week

6. Conclusion

AH is a good method for matching capacity to seasonal demand. The capacity shortage can be reduced to a great extent by the AH method of scheduling without incurring additional cost. The problem was analysed with three different methods, and it was found that when the weekly working hours are taken as variable the capacity shortage obtained is less than with the other methods.

References

1. EIRO, Annualised hours in Europe. Available online at: http://www.eurofound.europa.eu/eiro/2003/08/study/index.htm (accessed on 20-Aug-2007).
2. Workforce logistics, Annual hours case study - Rhodia. Available online at: http://www.workforce-logistics.com/Case_study_Annual_Hours_at_Rhodia.htm (accessed on 11-Sept-2007).
3. MacMeeking, J., 1995, Why Tesco's new composite distribution needed annual hours. International Journal of Retail & Distribution Management, 23, pp. 36-38.
4. Corominas, A., Lusa, A., and Pastor, R., 2007, Planning annualised hours with a finite set of weekly working hours and cross trained workers. European Journal of Operational Research, 176, pp. 230-239.
5. Corominas, A., Lusa, A., and Pastor, R., 2004, Planning annualised hours with a finite set of working hours and joint holidays. Annals of Operations Research, 128, pp. 217-233.


Nanoscience and Biomedicine: Converging Technologies
Mahesh Kumar Teli* and G.K. Rajanikant**

Abstract

Nanotechnology is an emerging field that could potentially make a major impact on human health. Nanomaterials promise to revolutionize medicine and are increasingly used in drug delivery and molecular diagnostics. Nanotechnology will facilitate the integration of diagnostics with therapeutics and assist the development of personalized medicine, i.e. the prescription of the specific therapeutics best suited to an individual. This review provides an integrated overview of the application of nanotechnology-based molecular diagnostics and drug delivery in the development of nanomedicine and, ultimately, personalized medicine. Finally, we identify critical gaps in our knowledge of nanoparticle toxicity and how these gaps need to be assessed to enable nanotechnology to transit safely from bench to bedside.

Keywords: nanotechnology, nanomedicine, diagnostics, imaging, targeted drug delivery, personalized medicine

Introduction

Since the landmark 1959 lecture by Nobel Laureate Richard Feynman entitled "There's plenty of room at the bottom", the concept of nanotechnology has been influencing many different fields of research, including chemistry, physics, electronics, optics, materials science and biomedical science (Feynman, 1960). The concept led to the new paradigm that size and shape dictate the function of materials. This distinguishes the emerging nanoscience from other conventional technologies, which have some aspect at the nanosize range. The National Science Foundation and the National Nanotechnology Initiative define nanotechnology as the understanding and technological application of materials and assemblies at the nanometric scale (1-100 nm), where unique phenomena such as optical, magnetic, electronic and structural properties not seen with macromolecules enable novel applications (Nowack and Bucheli, 2007; Gazit, 2007). One area of nanotechnology application that holds the promise of providing great benefit for society is the realm of medicine. Given the inherent nanoscale functional components of living cells, it was inevitable that nanotechnology would be applied in medicine, giving rise to the term nanomedicine. Owing to their unique characteristics, including superparamagnetic or fluorescent properties and a small size comparable to biomolecules, nanostructured materials have emerged as novel biomedical imaging, diagnostic and therapeutic agents for the future biomedical field (Table 1). Moreover, the conjugation of targeting moieties on the surface of these multifunctional nanomaterials gives them specific targeted imaging and therapeutic properties (Wang and Chen, 2009). Nanoparticles and nanodevices under investigation for imaging and drug/gene delivery applications include quantum dots, nanoshells, nanospheres, gold nanoparticles, dendrimers, paramagnetic nanoparticles, liposomes and carbon nanotubes (Vo-Dinh, 2007). In this review, we summarize important applications of nanotechnology in medicine, with emphasis on diagnostics, imaging, drug delivery and therapy.

* Research Scholar, School of Biotechnology ** Assistant Professor, School of Biotechnology. e-mail: [email protected]


Table 1. Nanomedicine in the 21st century

Nanodiagnostics
• Molecular diagnostics
• Imaging with nanoparticle contrast materials
• Nanobiosensors
• Nanoendoscopy

Nanopharmaceuticals
• Nanotechnology-based drugs
• Targeted drug delivery systems
• Implanted nanopumps and nanocoated stents for drug delivery
• Gene/cell therapy

Reconstructive surgery
• Tissue engineering with nanotechnology scaffolds
• Implantation of rejection-resistant artificial tissues and organs
• Nanosensor-implanted catheters for real-time data during surgery
• Nanolaser surgery

Nanorobotics
• Nanorobotic vascular surgery
• Remote-controlled nanorobots for tumor detection and destruction

Implants
• Bioimplantable sensors that bridge the gap between electronic and neurological circuitry
• Durable rejection-resistant artificial tissues and organs
• Nanocoated stent implantations in coronary arteries to elute drugs and to prevent reocclusion
• Implantation of nanopumps for drug delivery

Nanotechnology for cytogenetics and diagnostics

Cytogenetics, a part of molecular diagnostics, has been used mainly to describe chromosome structure and identify abnormalities related to diseases. Localizing specific gene probes by fluorescent in situ hybridization (FISH) combined with conventional fluorescence microscopy has reached its limit, and molecular cytogenetics is now enhanced by nanotechnology. Endothelial progenitor cells taken from human umbilical cord blood and labeled with perfluorocarbon nanoparticles (200 nm) can be detected by magnetic resonance imaging (MRI) (Partlow et al, 2007). Further, superparamagnetic iron oxide nanoparticles are emerging as ideal probes for noninvasive cell tracking.

Combining advances in related fields such as nanotechnology, biotechnology and pharmaceutics, nanomedicine offers the potential to move from a 'one-size-fits-all' approach to one more individually tailored for higher efficacy (Jain, 2009). For diagnosis, this translates into the recognition and characterization of very early (even presymptomatic) disease, with assessment performed preferably non-invasively. One of the earliest applications of nanotechnology in MRI was the use of paramagnetic iron oxide particles; when taken up by healthy hepatocytes, these particles could help to distinguish between normal and cancerous liver cells (Saini et al, 1995). Similarly, substantial success in nanotechnology-enabled molecular imaging has been achieved in all imaging modalities, including optical, nuclear, ultrasound and computed tomography (Bergman, 1997; Herschman, 2003; Wickline and Lanza, 2003; Lanza and Wickline, 2003; Sakamoto et al, 2005; Winter et al, 2005; Kobayashi and Brechbiel, 2005; Caruthers et al, 2006). For example, carbon nanotube based X-ray devices that emit a scanning X-ray beam composed of multiple smaller beams while remaining stationary will enable the construction of smaller and faster X-ray imaging systems for medical tomography, such as CT scanners, which will produce higher-resolution images (Zhang et al, 2005). Another study indicated that it is feasible to use silica nanospheres as contrast-enhancing agents for ultrasonic imaging (Liu et al, 2006). Because of their small dimensions, most 'nanodiagnostics' fall under the broad category of nanobiochips and nanoarrays (Jain, 2007). Nanotechnology-on-a-chip is a new paradigm for total chemical analysis systems (Jain, 2005). Protein nanobiochips in development can detect traces of proteins in biological fluids that are not detected by conventional immunoassays.
Nanobiosensors based on nanotechnology are portable and sensitive detectors of chemical and biological agents, which are useful for point-of-care testing of patients. Small pieces of DNA attached to gold particles (~13 nm) can detect millions of different DNA sequences simultaneously (Jain, 2007). Quantum dots (QDs) are inorganic fluorophores with potential applications for cancer diagnosis (Zhang et al, 2009). Another application of QDs is viral diagnosis: for example, current respiratory syncytial virus (RSV) detection methods are not sensitive enough and are time-consuming, whereas antibody-conjugated nanoparticles rapidly and sensitively detect RSV and estimate relative levels of surface protein expression (Agrawal et al, 2005). Nanotechnology will have an impact on improving our understanding of the central nervous system (CNS) and on developing new treatments, both medical and surgical, for CNS disorders (Jain, 2006). For instance, neuroscientists have successfully detected the activity of individual neurons lying adjacent to blood vessels using platinum nanowires, with the blood vessels as conduits to guide the wires (Llinas et al, 2005). Iron oxide nanoparticles can outline not only brain tumors under MRI, but also other lesions in the brain that may otherwise have gone unnoticed (Neuwelt et al, 2004). Further, an iron oxide nanoparticle coated with a biocompatible polymer and tagged with chlorotoxin (a tumor-targeting agent) and a fluorophore has been shown to cross the blood-brain barrier and specifically target brain tumors, as established through in vivo magnetic resonance and biophotonic imaging (Veiseh et al, 2009). QD technology has been employed to gather information about how the CNS environment becomes inhospitable to neuronal regeneration following injury or degenerative events (Gao et al, 2008).

Nanotherapeutics

Unfortunately, early diagnosis is futile if not coupled with effective therapy. However, developing an effective drug delivery system is a major challenge for pharmaceutical companies, since nearly half of all drugs are poorly soluble in water, and solubility is an essential factor for drug effectiveness.
Packaging a small-molecule drug into nanoparticles not only improves its bioavailability, biocompatibility and safety profile but also facilitates targeted transport, immune evasion and favorable drug release kinetics at the target site, maximizing patient compliance (Devalapally et al, 2007). In this regard, nanotechnology is already moving from passive structures to active structures in the medical field, through more targeted drug therapies or "smart drugs" created by conjugating nanocarriers to specific ligands or aptamers. These new nanodrug therapies have already been shown to cause fewer side effects and to be more effective than traditional therapies (Jain, 2007). There are a number of advantages of nanoparticles in comparison to microparticles. For example, nanoscale particles can travel through the blood stream without sedimentation or blockage of the microvasculature; small nanoparticles can circulate in the body and penetrate tissues such as tumors; and nanoparticles can be taken up by cells through natural means such as endocytosis. Nanotechnology improves drug delivery through the following approaches:
• Minuscule particle size to increase the surface area, thereby enhancing the rate of dissolution.
• Development of novel nanoparticle formulations with improved stability and shelf-life.
• Development of nanoparticle formulations for improved absorption of insoluble compounds and macromolecules, enabling improved bioavailability and release rates, potentially reducing the dose required and increasing safety through reduced side effects.
• Nanoparticle formulations with sustained release profiles of up to 24 h, improving patient compliance with drug regimens.
• Nanoparticles conjugated to specific ligands for targeted drug delivery.
• Delivery of biological therapies (gene, protein or stem cells), for which nanotechnology is particularly useful.

A few nanotechnology-based products are already approved for the treatment of cancer: Doxil (a liposome preparation of doxorubicin) and Abraxane (paclitaxel in nanoparticle formulation). Many of these already commercialized products are not available directly to the consumer.
Instead, they are employed by researchers involved in drug discovery, by physicians in need of better imaging techniques, and as prescriptions to treat particular kinds of illness (Table 2). Further, gold nanoparticles offer a novel class of selective photothermal agents to destroy malignant cells (El-Sayed et al, 2006); the ability of gold nanoparticles to detect cancer has been demonstrated previously. Similarly, immunotargeted nanoshells, engineered both to scatter light in the near-infrared range, enabling optical molecular cancer imaging, and to absorb light, enable the selective destruction of targeted carcinoma cells through photothermal therapy (Loo et al, 2005). Hence, it will now be possible to design an 'all-in-one' active agent that can be used to find cancer noninvasively and then destroy it. This selective technique has potential in molecularly targeted photothermal therapy in vivo. Nanoshells, e.g. AuroShellTM (Nanospectra Biosciences Inc.), are in commercial development for the targeted destruction of various cancers. In addition, novel nanoparticles preprogrammed to alter their structure and properties through the incorporation of molecular sensors able to respond to physical or biological stimuli, including changes in pH, redox potential or enzymes, will make most effective drug delivery systems (Wagner, 2007). An important role of nanotechnology in the management of infections is the use of formulations which improve the action of known bactericidal agents. The bactericidal properties of some agents are apparent only in nanoparticulate form. These formulations are made of simple, nontoxic metal oxides such as magnesium oxide (MgO) and calcium oxide (CaO, lime) in nanocrystalline form, carrying active forms of halogens, for example MgO·Cl2 and MgO·Br2. When these ultrafine powders contact vegetative cells of Escherichia coli, Bacillus cereus, or Bacillus globigii, over 90% are killed within a few minutes. Aluminum oxide and copper oxide nanoparticles have also been demonstrated to exhibit significant antimicrobial activity (Sadiq et al, 2009; Ren et al, 2009).
Silver nanoparticles have been incorporated in commercial preparations for wound care to prevent infection (Bhattacharyya and Bradley, 2008; Rai et al, 2009). A simple molecule from a hydrocarbon and an ammonium compound, a diacetylene amine salt, has been used to produce a
unique nanotube structure with antimicrobial capability (Lee et al, 2004). Some drug delivery devices are implanted in the body for release of therapeutic substances, and the lining of these devices can be improved by nanotechnology. For example, forming microcapsules by depositing coatings onto the particle surface makes it possible to control drug release kinetics by: (a) diffusion of the drug through a polymeric coating, or (b) degradation of a biodegradable polymer coating on the drug particles, releasing the core drug material. A self-assembling cube-shaped perforated microcontainer, no larger than a dust speck, could serve as a delivery system for medications/cells and can be tracked easily by MRI (Gimi et al, 2005). Delivery of drugs to the central nervous system is a challenge, and most of the strategies based on nanotechnology are directed at overcoming the blood-brain barrier, a major hurdle in drug delivery to the brain (Jain, 2007). Nanotechnology can also facilitate neuroprotection. Water-soluble derivatives of buckminsterfullerene (C60) are a unique class of nanoparticle compounds with potent antioxidant properties; robust neuroprotection against excitotoxic, apoptotic and metabolic insults in cortical cell cultures has been demonstrated with carboxyfullerenes (Aksenova et al, 2005). Apart from therapeutic drugs, several genes are being introduced into cells using nanotechnology. A variety of nanoparticles, including nanoliposomes, gelatin nanoparticles, calcium phosphate nanoparticles, dendrimers and other nanostructures, are now being considered or used for nonviral gene delivery (Chowdhury, 2007). Nanotechnology is also well suited to optimize the generally encouraging results already achieved in cell transplantation (Halberstadt et al, 2006). The small size of nanomaterial constructs provides an increasing number of options to label, transfect, visualize, and monitor cells/tissues used in transplantation.
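The two coating-controlled release mechanisms mentioned above can be contrasted with textbook kinetic models: a Higuchi-type square-root-of-time law for diffusion through an intact polymeric coating, and a zero-order law for a steadily eroding biodegradable coating. This is only an illustrative sketch; the rate constants below are assumed for demonstration and do not come from the cited work.

```python
import math

def diffusion_release(t, k=0.12):
    """Higuchi-type diffusion-controlled release: released fraction ~ k*sqrt(t), capped at 1."""
    return min(1.0, k * math.sqrt(t))

def erosion_release(t, rate=0.02):
    """Zero-order release from a surface-eroding biodegradable coating: fraction ~ rate*t."""
    return min(1.0, rate * t)

# Compare the two profiles at a few time points (hours):
for t in (1, 4, 16, 36):
    print(f"t={t:2d} h  diffusion={diffusion_release(t):.2f}  erosion={erosion_release(t):.2f}")
```

Diffusion-controlled release is fast initially and slows as the depletion zone grows, while erosion-controlled release stays linear until exhaustion; this is why coating choice sets the release profile.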
Neural progenitor cells, encapsulated in vitro within a three-dimensional network of nanofibers formed by self-assembly of peptide amphiphile molecules, facilitate growth of
nerve cells in tissue cultures (Silva et al, 2004). Nanodevices such as carbon nanotubes that locate and deliver anticancer drugs at the specific tumor site are under investigation. Nanotechnology promises the construction of artificial cells, enzymes and genes. Currently, nanodevices such as respirocytes, microbivores and probes encapsulated by biologically localized embedding find greater application in the treatment of anaemia and infections. Thus, in the present scenario, nanotechnology is spreading its wings to address key problems in the field of medicine (Sandhiya et al, 2009).

Table 2. Commercialized nanotechnology products for various biomedical applications

Appetite Control
Megace® ES [Par Pharmaceutical Companies, Inc. (USA)]
• Drug designed to stimulate appetite.
• Utilizes Elan's NanoCrystal technology delivery system to improve the rate of dissolution and bioavailability of the original megesterol acetate oral suspension.

Cancer
Abraxane™ [American Pharmaceutical Partners, Inc. (USA)]
• Anti-cancer drug for advanced breast cancer.
• Albumin-bound form of paclitaxel with a mean particle size of approximately 130 nanometers.
Doxil® [ALZA Corporation (USA)]
• Anti-cancer drug for refractory ovarian cancer and AIDS-related Kaposi's sarcoma.
• Lipid nanoparticles that incorporate a polyethylene glycol (PEG) coating.
Emend® [Merck & Co., Inc. (USA)]
• Anti-nausea drug for chemotherapy patients.
• 80 or 125 mg of aprepitant formulated as NanoCrystal drug particles.
Cholesterol
TriCor® [Abbott Laboratories (USA)]
• Cholesterol-lowering drug that employs Elan's NanoCrystal technology for easy administration.

Imaging
Qdot Nanocrystals [Invitrogen Corporation (USA)]
• Qdot nanocrystals are nanometer-scale atom clusters containing a semiconductor material (cadmium mixed with selenium or tellurium) coated with an additional semiconductor shell (zinc sulfide) to improve the optical properties of the material.
TriLite™ Technology [Crystalplex Corporation (USA)]
• Alloyed nanocrystal aggregates of 8-12 individual nanocrystals.
• These nanoclusters are 40-50 nm in size and are functionalized on the surface with carboxyl groups using a proprietary Crystalplex technology.

Medical Tools
EnSeal Laparoscopic Vessel Fusion System [SurgRx, Inc. (USA)]
• Indicated for surgical hemostasis.
• An electrode consisting of nanometer-sized conductive particles embedded in a temperature-sensitive material.
• Each particle acts like a discrete thermostatic switch to regulate the amount of current that passes into the tissue area with which it is in contact.
TiMESH [GfE Medizintechnik GmbH (Germany)]
• Indicated for laparoscopic and open surgery.
• Surgical mesh with properties including biocompatibility, resistance to infection and the ability to be recognized by the body as a solid titanium implant.
Acticoat® [Smith & Nephew, Inc. (USA)]
• Patented nanocrystalline technology for safe bactericidal concentrations of silver.
SilvaGard™ Technology [AcryMed, Inc. (USA)]
• Antimicrobial silver nanoparticles in solution form.

Bone Replacement
Vitoss [Orthovita (USA)]
• Scaffolds (porous, 100 nm) for bone defects and to enhance resorption and new bone growth.
Zirconium Oxide [Altair Nanotechnologies, Inc. (USA)]
• Nano-sized zirconium oxide for dental
applications, including fillings and prosthetic devices.

Diagnostic Tests
CellTracks® [Immunicon Corporation (USA)]
• Patented magnetic ferrofluid nanoparticles conjugated to antibodies directed against circulating tumor/endothelial cells.
• After the rare cells are enriched from the patient sample, they are fluorescently labeled.
NanoChip® Technology [CombiMatrix Corporation (USA)]
• Provides an open platform that allows customers to easily run and customize common assays.
• This technology involves electronically addressing biotinylated samples, hybridizing complementary DNA reporter probes and applying stringency to remove unbound and nonspecifically bound strands after hybridization.
Microarrays [CombiMatrix Corporation (USA)]
• Semiconductor-based array technology that enables the preparation of materials with nanoscale control.
• Allows for the parallel synthesis of large numbers of nano-structured materials, which can then be tested using the same chip-based technology.
• Can be used for rapid analysis of samples, detection of disease, etc.

Hormone Therapy
Estrasorb™ [Novavax, Inc. (USA)]
• Proprietary micellar nanoparticle drug-delivery platform.
• Used to deliver a therapeutic dose of 17β-estradiol in a moisturizing emulsion.

Immunosuppressant
Rapamune® [Wyeth (USA)]
• Immunosuppressant indicated for the prophylaxis of organ rejection in patients receiving renal transplants.

Nanotechnology and personalized medicine

Personalized medicine means the prescription of specific treatments and therapeutics best suited for an individual, taking into consideration both genetic and environmental factors that influence response to therapy. Besides pharmacogenetics and pharmacogenomics, other -omics such as proteomics and metabolomics are also contributing to the development of personalized medicine (Jain, 2009). Nanotechnology is also making important contributions to personalized medicine through refinement of various technologies used for diagnosis and therapeutics, as well as the interactions among these (Figure 1).

Figure 1. Relationship of nanotechnology and personalized medicine

One example of the application of nanotechnology in improving cancer management is as follows: alphanubeta3-targeted paramagnetic nanoparticles have been employed for noninvasive detection of very small regions of angiogenesis associated with nascent melanoma tumors (Schmieder et al, 2005). Each particle is filled with thousands of molecules of the metal used to enhance contrast in conventional MRI scans. The surface of each particle is decorated with a substance that attaches to the newly forming blood vessels present at tumor sites. This enables the detection of sparse biomarkers with molecular MRI in vivo while the growths are still invisible to conventional MRI. Earlier detection can potentially increase the effectiveness of treatment, particularly in the case of melanoma. Another advantage of this approach is that the same nanoparticle used to detect the tumors can be used to deliver stronger doses of anticancer drugs directly to the tumor site without systemic toxicity. Nanoparticle MRI would enable physicians to more readily evaluate the effectiveness of the treatment by comparing MRI scans before and after
treatment. This fulfills some of the important components of personalized cancer therapy: early detection, combination of diagnostics with therapeutics, and monitoring of the efficacy of therapy. Both nanomedicine and personalized medicine are already present on the medical scene, although not officially designated as specialties of medicine. Both will continue to interact and evolve and will play an important role in shaping the future of medical practice. Nanotechnology will play the most important role in the integration of diagnostics with therapeutics, which is an essential component of personalized medicine.

Conclusions

Nanotechnology is an emerging field that is potentially changing the way we treat diseases through ground-breaking diagnostic and therapeutic methods. Employing constructs such as dendrimers, liposomes, nanoshells, nanotubes, emulsions and quantum dots, these advances lead toward the concept of personalized medicine and the potential for very early, even pre-symptomatic, diagnosis coupled with highly effective targeted therapy. Current preclinical research in nanomedicine promises new ways to diagnose disease, to deliver specific therapy and to monitor the effects acutely and non-invasively. The potential that nanotechnology offers for a drug with improved circulating half-life, greater functional surface area and other benefits is superb, but it is offset by new and as yet unknown constraints. Coupled with changing circulation times, for example, are changes in clearance from the body. Some nanoparticles might be retained not only for days but potentially for years; in that case, the safety profile of all components becomes of paramount significance. Therefore, significant challenges remain in pushing this field into clinically viable therapies. Current problems for nanotechnology and nanomedicine involve understanding the issues related to toxicity and the environmental impact of nanoscale materials (Jain, 2008).
Nanoparticles must be evaluated on a particle-by-particle basis, and a rational characterisation strategy must include absorption, distribution, metabolism and excretion (ADME) tests and physicochemical and toxicological characterisation, involving both in vitro and in vivo studies. As we continue exploring nanotechnology for biomedical applications, it is essential to ensure that the nanotechnologies developed are safe. Nanotoxicology is an emerging field of research that will become an integral part of nanotechnology research; however, the burden of ensuring the safety of these tools and technologies resides with all of us.

References

1. Agrawal A, Tripp RA, Anderson LJ, Nie S. Real-time detection of virus particles and viral protein expression with two-color nanoparticle probes. J Virol 2005;79:8625-28.
2. Aksenova MV, Aksenov MY, Mactutus CF, Booze RM. Cell culture models of oxidative stress and injury in the central nervous system. Curr Neurovasc Res 2005;2:73-89.
3. Bergman A. Hepatocyte-specific contrast media for CT. An experimental investigation. Acta Radiol Suppl 1997;411:1-27.
4. Bhattacharyya M, Bradley H. A case report of the use of nanocrystalline silver dressing in the management of acute surgical site wound infected with MRSA to prevent cutaneous necrosis following revision surgery. Int J Low Extrem Wounds 2008;7:45-8.
5. Caruthers SD, Winter PM, Wickline SA, Lanza GM. Targeted magnetic resonance imaging contrast agents. Methods Mol Med 2006;124:387-400.
6. Chowdhury EH. pH-sensitive nano-crystals of carbonate apatite for smart and cell-specific transgene delivery. Expert Opin Drug Deliv 2007;4:193-6.
7. Devalapally H, Chakilam A, Amiji MM. Role of nanotechnology in pharmaceutical product development. J Pharm Sci 2007;96:2547-65.
8. El-Sayed IH, Huang X, El-Sayed M. Selective laser photo-thermal therapy of epithelial carcinoma using anti-EGFR antibody conjugated gold nanoparticles. Cancer Lett 2006;239:129-35.
9. Feynman RP. There's Plenty of Room at the Bottom - An Invitation to Enter a New Field of Physics. Eng Sci 1960;23:22-36.
10. Gao X, Chen J, Chen J, Wu B, Chen H, Jiang X. Quantum dots bearing lectin-functionalized nanoparticles as a platform for in vivo brain imaging. Bioconjug Chem 2008;19:2189-95.
11. Gazit E. Plenty of room for biology at the bottom: An introduction to bionanotechnology. Imperial College Press, London, 2007; pp 129-146.
12. Gimi B, Leong T, Gu Z, Yang M, Artemov D, Bhujwalla ZM, Gracias DH. Self-assembled 3D radiofrequency-shielded (RS) containers for cell encapsulation. Biomed Microdevices 2005;7:341-5.
13. Halberstadt C, Emerich DF, Gonsalves K. Combining cell therapy and nanotechnology. Expert Opin Biol Ther 2006;6:971-81.
14. Herschman HR. Molecular imaging: looking at problems, seeing solutions. Science 2003;302:605-8.
15. Jain KK. Applications of nanobiotechnology in clinical diagnostics. Clin Chem 2007;53:2002-9.
16. Jain KK. Handbook of Nanomedicine. Humana/Springer, Totowa, NJ, USA, 2008.
17. Jain KK. Nanobiotechnology-based drug delivery to the central nervous system. Neurodegener Dis 2007;4:287-91.
18. Jain KK. Nanotechnology-based lab-on-a-chip devices; in Encyclopedia of Diagnostic Genomics and Proteomics. New York, Dekker, 2005; pp 891-5.
19. Jain KK. Role of nanobiotechnology in the development of personalized medicine. Nanomedicine 2009;4:249-52.
20. Jain KK. Role of nanotechnology in developing new therapies for diseases of the nervous system (editorial). Nanomedicine 2006;1:9-12.
21. Jain KK. Textbook of personalized medicine. Humana/Springer, Totowa, NJ, USA, 2009.
22. Kobayashi H, Brechbiel MW. Nano-sized MRI contrast agents with dendrimer cores. Adv Drug Deliv Rev 2005;57:2271-86.
23. Lanza GM, Wickline SA. Targeted ultrasonic contrast agents for molecular imaging and therapy. Curr Probl Cardiol 2003;28:625-53.
24. Lee SB, Koepsel R, Stolz DB, Warriner HE, Russell AJ. Self-assembly of biocidal nanotubes from a single-chain diacetylene amine salt. J Am Chem Soc 2004;126:13400-5.
25. Liu J, Levine AL, Mattoon JS, Yamaguchi M, Lee RJ, Pan X, Rosol TJ. Nanoparticles as image enhancing agents for ultrasonography. Phys Med Biol 2006;51:2179-89.
26. Llinas RR, Walton KD, Nakao M, Hunter I, Anquetil PA. Neuro-vascular central nervous recording/stimulating system: using nanotechnology probes. J Nanoparticle Res 2005;7:111-27.
27. Loo C, Lowery A, Halas N, West J, Drezek R. Immunotargeted nanoshells for integrated cancer imaging and therapy. Nano Lett 2005;5:709-11.
28. Neuwelt EA, Várallyay P, Bagó AG, Muldoon LL, Nesbit G, Nixon R. Imaging of iron oxide nanoparticles by MR and light microscopy in patients with malignant brain tumours. Neuropathol Appl Neurobiol 2004;30:456-71.
29. Nowack B, Bucheli TD. Occurrence, behaviour and effects of nanoparticles in the environment. Environ Pollut 2007;150:5-22.
30. Partlow KC, Chen J, Brant JA, Neubauer AM, Meyerrose TE, Creer MH, Nolta JA, Caruthers SD, Lanza GM, Wickline SA. 19F magnetic resonance imaging for stem/progenitor cell tracking with multiple unique perfluorocarbon nanobeacons. FASEB J 2007;21:1647-54.
31. Rai M, Yadav A, Gade A. Silver nanoparticles as a new generation of antimicrobials. Biotechnol Adv 2009;27:76-83.
32. Ren G, Hu D, Cheng EW, Vargas-Reus MA, Reip P, Allaker RP. Characterisation of copper oxide nanoparticles for antimicrobial applications. Int J Antimicrob Agents 2009;33:587-90.
33. Sadiq IM, Chowdhury B, Chandrasekaran N, Mukherjee A. Antimicrobial sensitivity of Escherichia coli to alumina nanoparticles. Nanomedicine 2009;5:282-6.
34. Saini S, Edelman RR, Sharma P, Li W, Mayo-Smith W, Slater GJ, Eisenberg PJ, Hahn PF. Blood-pool MR contrast material for detection and characterization of focal hepatic lesions: initial clinical experience with ultrasmall superparamagnetic iron oxide (AMI-227). AJR Am J Roentgenol 1995;164:1147-52.
35. Sakamoto JH, Smith BR, Xie B, Rokhlin SI, Lee SC, Ferrari M. The molecular analysis of breast cancer utilizing targeted nanoparticle-based ultrasound contrast agents. Technol Cancer Res Treat 2005;4:627-36.
36. Sandhiya S, Dkhar SA, Surendiran A. Emerging trends of nanomedicine - an overview. Fundam Clin Pharmacol 2009;23:263-9.
37. Schmieder AH, Winter PM, Caruthers SD, Harris TD, Williams TA, Allen JS, Lacy EK, Zhang H, Scott MJ, Hu G, Robertson JD, Wickline SA, Lanza GM. Molecular MR imaging of melanoma angiogenesis with alphanubeta3-targeted paramagnetic nanoparticles. Magn Reson Med 2005;53:621-7.
38. Silva GA, Czeisler C, Niece KL, Beniash E, Harrington DA, Kessler JA, Stupp SI. Selective differentiation of neural progenitor cells by high-epitope density nanofibers. Science 2004;303:1352-5.
39. Veiseh O, Sun C, Fang C, Bhattarai N, Gunn J, Kievit F, Du K, Pullar B, Lee D, Ellenbogen RG, Olson J, Zhang M. Specific targeting of brain tumors with an optical/magnetic resonance imaging nanoprobe across the blood-brain barrier. Cancer Res 2009;69:6200-7.
40. Vo-Dinh T. Nanotechnology in biology and medicine: methods, devices, and applications. CRC Press, Florida, USA, 2007.
41. Wagner E. Programmed drug delivery: nanosystems for tumor targeting. Expert Opin Biol Ther 2007;7:587-93.
42. Wang H, Chen X. Applications for site-directed molecular imaging agents coupled with drug delivery potential. Expert Opin Drug Deliv 2009;6:745-68.
43. Wickline SA, Lanza GM. Nanotechnology for molecular imaging and targeted therapy. Circulation 2003;107:1092-5.
44. Winter PM, Shukla HP, Caruthers SD, Scott MJ, Fuhrhop RW, Robertson JD, Gaffney PJ, Wickline SA, Lanza GM. Molecular imaging of human thrombus with computed tomography. Acad Radiol 2005;12:9-13.
45. Zhang H, Zeng X, Li Q, Gaillard-Kelly M, Wagner CR, Yee D. Fluorescent tumour imaging of type I IGF receptor in vivo: comparison of antibody-conjugated quantum dots and small-molecule fluorophore. Br J Cancer 2009;101:71-9.
46. Zhang J, Yang G, Cheng Y, Gao B, Qiu Q, Lee YZ, Lu JP, Zhou O. Stationary scanning X-ray source based on carbon nanotube field emitters. Appl Phys Lett 2005;86:184104.

Organic Light Emitting Diodes: A Review on Device Physics, and Modeling using Artificial Neural Networks T.A. Shahul Hameed*, M.R. Baiju** and P. Predeep***

Abstract

The search for flexible and robust displays has imparted greater momentum to research in the field of Organic and Polymer Light Emitting Diodes (OLEDs and PLEDs). The ease of fabricating multi-color displays through affordable and cost-effective techniques has accelerated these investigations. The field being interdisciplinary, understanding its physics and modeling has become unequivocally important for shaping tomorrow's device technology. Here a review on the physics of OLEDs is presented. Further, the scope of Artificial Neural Networks (ANNs) in device modeling is explored by fabricating and characterizing an MEH-PPV based device and modeling it using the MATLAB tool for ANN.

Keywords

PLED, MEH-PPV, PEDOT-PSS, ISAM, Electro Phosphorescent Device, ANN, Back propagation

I. Introduction

Research in organic displays [1] has been gaining momentum for the last two decades, owing to their capacity to form flexible multi-color displays. Their potential advantages

include easy processing, robustness and inexpensive fabrication compared to their inorganic counterparts. In fact, this newcomer in displays is rapidly moving from fundamental research into industrial products, throwing up [2] many new challenges such as degradation and lifetime. In order to design suitable display structures, it is beyond doubt that information on device physics must be brought out. Such studies will lead to the development of accurate and reliable models of performance, design optimization, integration with existing platforms, design of silicon driver circuitry and prevention of device degradation. Moreover, a clear understanding of the device physics is necessary for optimizing electrical properties, including balanced carrier injection and the location of the emission zone in the device. The commendable efforts to unveil the basic operation of PLEDs, the physical phenomena and device models using analytical and experimental evidence are reviewed in the coming sections. This is appended with our attempt to implement an artificial neural network based model of an MEH-PPV device.
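As a sketch of the kind of ANN-based device model just mentioned (not the authors' actual MATLAB implementation), a minimal one-hidden-layer network trained by back propagation can learn a diode-like I-V curve. The synthetic exponential data, the network size and the learning rate below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measured MEH-PPV I-V data (illustrative only):
# a diode-like exponential turn-on, normalized to [0, 1].
V = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
I = (np.exp(4.0 * V) - 1.0) / (np.exp(4.0) - 1.0)

# One hidden layer of 8 tanh units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(x):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # (hidden, predicted current)

for _ in range(2000):
    h, y = forward(V)
    err = y - I                    # dLoss/dy for 0.5 * mean squared error
    gW2 = h.T @ err / len(V); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2) # back-propagate through tanh
    gW1 = V.T @ dh / len(V); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(V)[1] - I) ** 2).mean())
print(f"final MSE: {mse:.5f}")
```

Once trained on measured data, such a network interpolates the device characteristic without an explicit physical model, which is the appeal of ANN device modeling.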

* Department of Electronics and Communication Engineering, TKM College of Engineering, Kollam, Kerala ** Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala *** Professor, Laboratory for Unconventional Electronics and Photonics, NIT Calicut e-mail: [email protected] 44
2. Device Structure, Principle

The simplest structure showing the essential parts of a PLED is shown in Fig. 1. In practical implementations, more layers for carrier injection and transport are normally incorporated. The polymer LED [3] is a dual carrier injection device in which electrons are injected from the cathode into the LUMO of the polymer and holes are injected from the anode into the HOMO of the conducting polymer; they recombine radiatively within the polymer to give off light. The fabrication of the device is easy through spin casting of the carrier transport layer and EL layer (MEH-PPV) at thicknesses in the Å range.

Fig.1 Simple structure of a PLED

3. Device Physics

For OLEDs it is common practice to follow many concepts derived from inorganic semiconductor physics. In fact, most of the organic materials used in LEDs form disordered amorphous films without a crystal lattice, and hence the mechanisms used for molecular crystals cannot be extended to them. A detailed study of the device physics of organic diodes based on aromatic amines (TPD) and the aluminium chelate complex (Alq) was carried out by W. Brütting et al. [4]. The basic steps in electroluminescence are shown in Fig. 2, where charge carrier injection, transport, exciton formation and recombination are accounted for in the presence of the built-in potential. The built-in potential across the organic layers is due to the different work functions [5] of anode and cathode. It is found by the photovoltaic nulling method [6], in which the OLED is illuminated and an external voltage is applied till the photocurrent equals the dark current. Its physical significance is that it reduces the applied external voltage V, such that a net drift current in the forward bias direction is achieved only if V exceeds the built-in voltage Vbi. Carrier injection is described [4] by Fowler-Nordheim tunneling (J ∝ F² exp(-κ/F)) or by Richardson-Schottky thermionic emission, in which the image-force-lowered barrier is surmounted thermally.

Fig.2 Basic Steps of Electroluminescence [4]
The current is either space charge limited (SCLC) or trap charge limited (TCLC). The recombination process in OLEDs has been described [4] by Langevin theory, because it is based on the diffusive motion of positive and negative carriers in their mutually attractive Coulomb field. More precisely, the recombination constant R is proportional to the carrier mobility.
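The proportionality between the Langevin recombination coefficient and carrier mobility can be made concrete: γ = q(μn + μp)/(ε0 εr). This is the standard Langevin expression; the mobility and permittivity values below are illustrative assumptions for a disordered organic film, not measured values from the paper.

```python
Q = 1.602e-19     # elementary charge (C)
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def langevin_coefficient(mu_n, mu_p, eps_r):
    """Langevin bimolecular recombination coefficient, gamma = q(mu_n + mu_p)/(eps0*eps_r),
    in m^3/s for mobilities in m^2/(V s). Directly proportional to carrier mobility."""
    return Q * (mu_n + mu_p) / (EPS0 * eps_r)

# Illustrative values (assumed): mu_p ~ 1e-6 cm^2/Vs, mu_n one decade lower, eps_r = 3.
mu_p = 1e-10  # m^2/Vs
mu_n = 1e-11  # m^2/Vs
gamma = langevin_coefficient(mu_n, mu_p, eps_r=3.0)
print(f"gamma = {gamma:.2e} m^3/s")
```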

Apart from the dependence of current on voltage and temperature, the current depends directly on the thickness of the organic layer: it was observed that the thinner the device, the higher the current output. Similar observations were also made by the same group [4] on the J-V and
luminance characteristics of ITO/TPD/Alq/Ca heterojunction devices for different organic layer thicknesses. The thickness dependence of current at room temperature leads to the inference that the electron current in the Alq device is predominantly space charge limited with a field-dependent charge carrier mobility, and that trapping in energetically distributed states is additionally involved at low voltage, especially for thick layers. The temperature dependence of current in the Al/Alq/Ca device (from 120 K to 340 K) indicates that the device has a lower turn-on current at higher temperature and that recombination in the OLED is a bimolecular process following Langevin theory. The mathematical analysis of the device, considering traps and temperature, was a new approach in device physics. In the search for a highly efficient device, combining Alq and NPB with a thickness of 60 nm for the Alq layer was found to yield higher quantum efficiency, whereas thickness variation of the NPB layer did not show any measurable effect.

The field and temperature dependence of the electron mobility in Alq leads [4] to a mobility of the form μ(F,T) = μ0 exp(-Δ/kBT) exp(β√F), where Δ is an activation energy and β the field coefficient. The behavior of hopping transport in disordered organic solids has been better explained by the Gaussian Disorder Model [7]. The quantitative model for device capacitance, with an equivalent circuit of the heterolayer device, gives more insight into interfacial charges and the electric field distribution in heterolayer devices. The transport behavior in polymer semiconductors has been a matter of active debate, and many theories have been put forward by different groups. Charge transport is not a coherent motion of carriers in well defined bands; it is a stochastic process of hopping between delocalized states, which leads [4] to low carrier mobilities.

The trap-free limit for the dual carrier device was studied by Bozano et al. [8]. Space charge limited current was observed above moderate voltages (>4 V), while the zero-field electron mobility is an order of magnitude lower than the hole mobility. Balanced carrier injection is one of the prerequisites for the optimal operation of single layer PLEDs. Balanced carrier transport implies that injected electrons and holes have the same drift mobilities. In fact, it is difficult to achieve this in single layer devices due to the predominance of one of the carriers, and hence bi-layer devices are used to circumvent the problem. ITO/PPV/TPD:PC/Al devices were fabricated, where ITO/PPV is an ideal hole injecting contact for the trap-free molecularly doped polymer TPD:PC. Here the ITO/PPV contact acts as an infinite, non-depletable charge reservoir which is able to satisfy the demand of the TPD:PC layer under trap-free space-charge-limited (TFSCL) conditions [9]. The trap-free space charge limited current [8,10] can be expressed by the Mott-Gurney law, J = (9/8) ε0 ε μp V²/d³, where ε0 is the permittivity of vacuum, ε is the relative permittivity of the polymer, μp is the mobility of holes in the trap-free polymer, and d is the inter-electrode distance. Trapping is relatively severe at low electric fields and in thick PPV layers; at high electric fields, trapping is minimized even for thick PPV layers. The carrier drift distance x at a given electric field E before trapping occurs is given by x = μτE, where τ is the trapping time. The deep-trapping product μτ determines the average carrier range per applied electric field before the carriers are immobilized in deep traps. The difference in the μτ values of electrons and holes in PPV (10⁻¹² and 10⁻⁹ cm²/V, respectively) reflects their discrepancy in transport. It is understood [11] that it is not the structure of PPV that contributes to this difference; rather, oxygen-related impurities in PPV with strong electron accepting character and
reduction potential lower than PPV may act as the predominant electron traps and limit the range of electrons. The study of the temperature dependence of current density versus electric field for single carrier (both electron-dominated and hole-dominated) and dual carrier devices at 200 K and 300 K exhibits interesting results [8]. At both temperatures, the reduction in space charge due to neutralization contributes to a significant enhancement in current density in dual carrier devices. It was also deduced that the electric field dependence of the mobility is significantly stronger for electrons than for holes. The electric field coefficient β is related [12] to temperature by the empirical relation β = B(1/T - 1/T0), where B and T0 are constants. In MEH-PPV devices, charge balance is improved by cooling, which in turn leads to enhanced quantum efficiency. By adjusting barrier heights at the level of 0.1 eV, quantum efficiency close to the theoretical maximum can be achieved. In order to limit space charge effects and hence enhance the performance in terms of current density, the intrinsic carrier mobility must be addressed by modifying the dielectric constant or by electrically pulsing the device at an interval greater than the recombination time. Another means [13] of improvement is alignment of the polymer backbone, but such efforts may lead to quenching.
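The trap-free space-charge-limited (Mott-Gurney) current discussed above can be evaluated numerically, which makes the strong thickness dependence of the current explicit: halving d raises J eightfold at fixed bias. The mobility, permittivity and bias values below are illustrative assumptions, not fitted device parameters.

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def j_sclc(V, d, mu=1e-10, eps_r=3.0):
    """Trap-free space-charge-limited current density (Mott-Gurney law):
    J = (9/8) * eps0 * eps_r * mu * V^2 / d^3, in A/m^2,
    for V in volts, d in metres, mu in m^2/(V s)."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * V**2 / d**3

# Thickness dependence at a fixed 5 V bias (illustrative parameters):
for d_nm in (200, 100):
    J = j_sclc(5.0, d_nm * 1e-9)
    print(f"d = {d_nm} nm -> J = {J:.3g} A/m^2")
```

The quadratic V-dependence of this law is also what Blom and de Jong observed for hole-only PPV devices, as noted at the end of this section.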

model. I. D. Parker [14] examined the factors that control carrier injection, with particular reference to tunneling, by experimenting on the ITO/MEH-PPV/Ca device. The thickness dependence [14] of current density with respect to bias and field strength is shown in Fig. 3. It is obvious from these figures that the device operating voltage can be reduced by reducing the polymer thickness. The field dependence of the I-V behavior points to the tunneling model of carrier injection, in which carriers are field-emitted through a barrier at the electrode/polymer interface (Fig. 4).

Fig.3 Thickness Dependence of the I-V Characteristics in ITO/MEH-PPV/Ca Device [14]

4. Device Models

Device modeling is useful in many ways, such as optimization of design, integration with existing tools, prediction of problems in process control, and better understanding of degradation mechanisms. By modeling the current-voltage and luminance behavior of PLEDs, quantum and power efficiencies can be seen analytically, which normally would have to be subjected to experimental validation.

4.1 Band based and Exciton based Models

Both band based models and exciton based models have been proposed to explain the electronic structure and operation of polymer devices. Of the two, there are more supportive arguments for the band based

Fig.4 Field v Current Dependence for ITO/MEH-PPV/ Ca Device [14]

For a clear understanding of the device physics and models, it is customary to fabricate single
carrier and dual carrier devices. On replacing Ca, which has a low work function (2.9 eV), with higher work function metals like In (4.2 eV) or Au (5.2 eV), hole-only devices can be made. This increases the offset between the Fermi energy of the cathode and the LUMO of the polymer, which causes a substantial reduction in injected electrons, and holes become the dominant carriers. As expected, the external quantum efficiency is reduced in single carrier devices. The current characteristics show only a slight dependence on temperature, which is predicted [15] by Fowler-Nordheim tunneling.

From the band based model and characterization, improvements in device performance were suggested [14] by Parker. Of the devices he made, ITO/MEH-PPV/Ca devices exhibited the best results, for the reasons explained elsewhere. Device turn-on happens at the flat band condition; the turn-on voltage is in fact the voltage required to reach the flat band condition, and it depends on the band gap of the polymer and the work functions of the electrodes. The operating voltage of the device is sensitive to the barrier height whereas the turn-on voltage is not. From the equations mentioned before, an approximation for the current can be made in the standard Fowler-Nordheim form

I ∝ F² exp(−k/F)

where F is the field strength. The constant k is defined by

k = 8π(2m*)^(1/2) φ^(3/2) / (3qh)

where φ is the barrier height and m* is the effective mass of the holes.
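As a numerical illustration of the field dependence above, the relative Fowler-Nordheim current can be evaluated directly; the barrier height and effective mass used here are hypothetical round numbers, not the parameters fitted in [14]:

```python
import math

# Physical constants (SI units)
Q = 1.602e-19      # elementary charge, C
H = 6.626e-34      # Planck constant, J s
M_E = 9.109e-31    # electron rest mass, kg

def fn_current(field, phi_ev=0.6, m_ratio=0.5):
    """Relative Fowler-Nordheim tunneling current, I ~ F^2 exp(-k/F).

    field   : electric field strength F, in V/m
    phi_ev  : barrier height in eV (hypothetical value)
    m_ratio : hole effective mass as a fraction of m_e (hypothetical)
    """
    phi = phi_ev * Q                          # barrier height in joules
    k = (8 * math.pi * math.sqrt(2 * m_ratio * M_E)
         * phi ** 1.5) / (3 * Q * H)          # k has units of V/m
    return field ** 2 * math.exp(-k / field)

# The current rises very steeply with field, as in Fig. 4:
low, high = fn_current(1e8), fn_current(2e8)
```

Doubling the field here raises the relative current by several orders of magnitude, reflecting the exponential barrier term.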

A rigid band model better explains the experimental results: holes and electrons tunnel into the polymer when the applied electric field tilts the polymer bands enough to present sufficiently thin barriers. Fig.5 clearly indicates how this model envisages the tunneling of holes.

(8)

where V is the applied voltage and φ is the barrier height. This prediction of the barrier height dependence of the operating voltage has been supported by experimental results. Since the efficiency of the device is a function of the current density due to minority carriers, increasing the barrier height leads to an exponential decrease in current and efficiency, as shown in Fig.6.

Fig.6 Device Efficiency v Barrier Height^(3/2) [14]

Parker suggested suitable combinations of electrode materials and polymers so that low turn-on and operating voltages can be achieved.

Fig.5 Band Diagram (in Forward Bias) for the Model, indicating positions of the Fermi Level for different electrode materials [14]

J. C. Scott et al [6] contributed to unveiling phenomena like the built-in potential, charge transport, recombination and charge injection, with a numerical model to calculate the recombination

profile in single and multilayer structures. 'Essentially trap free' transport, the Langevin mechanism for recombination and a model of thermionic injection with a Schottky barrier at the metal-organic interface are the important features used by them. It is to be highlighted that charge trapping is neglected in the analysis and transport is described in terms of trap free space charge limited currents. The Fowler-Nordheim mechanism was used to explain the injection, but by analytical methods and simulations, thermionic injection is said [16] to best suit the injection in organic diodes.

Blom and de Jong [17] made commendable efforts in the characterization and modeling of polymer light emitting diodes. Their experiments on PPV devices, both single carrier and dual carrier, paved the way to a better understanding of the mobility of electrons and holes. Electron only devices are fabricated with a PPV layer sandwiched between two Ca electrodes, whereas hole only devices have an evaporated Au layer on top. For hole only devices, the current density depends [17] quadratically on V, following the trap free space charge limited current law

J = (9/8) ε μ_p V² / L³		(9)

where μ_p is the hole mobility and L is the thickness of the device. The hole only device shows the effect of the space charge of holes, while electron only devices show trapping of electrons. For the double carrier device, two additional phenomena become important: recombination and charge neutralization. Recombination is bimolecular, since its rate is directly proportional to the electron and hole concentrations. Without traps and field dependent mobility, the current in the double carrier device is [17]

(10)

where B is the bimolecular recombination constant. In PLEDs, the conversion efficiency is dependent on the applied voltage, whereas in conventional LEDs it is not. The temperature dependence of charge transport in PLEDs is investigated by performing J-V measurements on hole only and double carrier devices. Carrier transport is strongly dependent on temperature [18], and Fig.7 shows the variation of current density with applied voltage at different temperatures.

Fig.7 Experimental and Calculated (Solid lines) J-V characteristics in hole only (squares) and double carrier (circles) devices for different thicknesses [17]

Also, the plot of the bimolecular recombination constant B for different temperatures (Fig.8) sheds light on the fact that recombination is of the Langevin type [19]; mathematically it is expressed in terms of the mobility as B = q(μ_n + μ_p)/ε.

Fig.8 Temperature Dependence of Bimolecular Recombination Constant [17]

The enhancement of the maximum conversion efficiency is achieved by decreasing non-radiative recombination and by the use of an electron transport layer, which shifts the recombination zone away from the metallic cathode.
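A short numerical sketch of the two relations discussed here, the trap free space charge limited current of Eq. (9) and the Langevin recombination constant; the mobility and permittivity values below are illustrative placeholders, not the values fitted in [17]:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
Q = 1.602e-19      # elementary charge, C

def sclc_current_density(v, thickness, mu_p, eps_r=3.0):
    """Trap free SCLC (Mott-Gurney) law: J = (9/8) eps mu_p V^2 / L^3.
    v in volts, thickness in metres, mu_p in m^2/Vs; returns A/m^2."""
    return 9.0 / 8.0 * eps_r * EPS0 * mu_p * v ** 2 / thickness ** 3

def langevin_b(mu_n, mu_p, eps_r=3.0):
    """Langevin bimolecular recombination constant B = q(mu_n + mu_p)/eps."""
    return Q * (mu_n + mu_p) / (eps_r * EPS0)

# Illustrative PPV-like parameters: mu_p = 5e-11 m^2/Vs, L = 100 nm.
j1 = sclc_current_density(2.0, 100e-9, 5e-11)
j2 = sclc_current_density(4.0, 100e-9, 5e-11)   # doubling V quadruples J
```

The quadratic dependence on V is what distinguishes the space charge limited regime from injection limited behaviour.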

A device model based on Poisson's equation and the conservation of charges was presented by Kawabe et al [20]. By assuming that the recombination rate is proportional to the collision cross section A, the electric field, the sum of the electron and hole mobilities and the product of the carrier densities, the charge conservation equation has been rewritten as

(12)

where the + and − signs indicate the electron and hole currents. By the conservation law of the total current,

(13)

with the boundary conditions given by [20] the current injection at both electrodes. Besides the current density, the relative quantum efficiency was calculated by [20] the model equation

(14)

Numerical values of the parameters were used to simulate the J-V and quantum efficiency characteristics. Two devices, one with a semiconducting polymer (BEH-PPV) and the other with a dye doped polymer, were fabricated by spin casting techniques and characterized. The results validate the model for single layer devices; its suitability for complex devices is yet to be tested. The model has the advantage of incorporating charged traps, as shown in the equation below

(15)

where the + and − signs indicate positive and negative charges respectively. This sheds light on the causes of the degradation process in real devices due to the accumulation of electrons in the vicinity of the cathode. The inferences include: a low barrier height is needed for low voltage operation; high mobility is needed for high brightness devices; and low electron mobility confines the emission region near the cathode and should be avoided to prevent electrode quenching.

4.2 Modeling with Artificial Neural Network

4.2.1 Requirement of the Model

Several approaches have been followed to generate device level behavioral models. Numerical solutions of the semiconductor equations have been applied to model the physics of devices accurately. Similarly, starting from basic device physics, microscopic or particle level simulation approaches provide the device electrical characteristics. The main drawback of these models is that they are computationally very intensive; their capability to provide a device level modeling interface for circuit simulation is thus limited by CPU time. Other typical approaches to realizing this crucial interface for accurate and fast circuit simulation have included analytical parameterized device models, table look-up models and even tensor product splines. Parameter extraction for these models presents a difficult problem even if the models are physically sound and include every possible phenomenon that describes the device. To overcome the difficulties of the parameterized models, a variety of look-up table methods with different interpolation techniques have been used. These models store the device data as tables. As is obvious, the table size grows as the third power of the number of input parameters and becomes the limiting factor in the modeling accuracy. Moreover, efficient interpolation schemes are required to represent the device characteristics effectively over the entire operating range.
Considering all the above mentioned limitations of the conventional device modeling approaches, a neural network model, such as the one developed in this work, is found to be a potential alternative for

modeling of device characteristics. This application of the artificial neural network (ANN) was first proposed by Litovski et al. [21] in 1992. Since then, very few studies have been reported on black box modeling of microelectronic devices using ANNs. As the evolution and growth of ANNs have been tremendous over the last decade, with many good analytical studies in supervised learning, it is logical to extend them to the modeling of polymer light emitting diodes. The main advantages of this approach over conventional models are reduced CPU time, reduced memory requirements and ease of parameter extraction.

4.2.2 Architecture of the Network

A typical multilayer perceptron (MLP) network consists [22] of a set of source nodes forming the input layer, one or more hidden layers of computation nodes, and an output layer of nodes. The input signal propagates through the network layer by layer. The computation performed by such a feed forward network, with a single hidden layer of nonlinear activation functions and a linear output layer, can be written mathematically as

x = f(s) = B φ(As + a) + b		(16)

where s is the vector of inputs and x the vector of outputs, A is the weight matrix of the first layer and a its bias vector, and B and b are, respectively, the weight matrix and the bias vector of the second layer. The function φ denotes [23] an element wise nonlinearity.

The supervised learning problem of the MLP can be solved with the back-propagation algorithm. The algorithm consists of two steps. In the forward pass, the predicted outputs corresponding to the given inputs are evaluated as in Equation (16). In the backward pass, partial derivatives of the cost function with respect to the different parameters are propagated back through the network. The chain rule of differentiation gives computational rules for the backward pass very similar to those of the forward pass. The network weights can then be adapted using any gradient-based optimization algorithm [24], such as gradient descent or Levenberg-Marquardt. The whole process is iterated until the weights have converged.

4.2.3 Implementation of the Model

The model is implemented in a multilayer structure as shown in Fig.9. It has three layers: an input layer with five neurons to which all the inputs are fed, a hidden layer with fifteen neurons and an output layer with a single neuron. The inputs are the applied voltage, the thickness of the polymer layer, the active area, the thickness of the cathode and the temperature at which the measurements are made. The number of neurons in the hidden layer was chosen by trial so as to get the least error and fast convergence.

Fig.9 ANN Model of Polymer Light Emitting Diode

The output of the network is the diode current, and the training data consist of a few representative data points obtained from experimental measurements on a MEH-PPV based device. The model was trained using the Levenberg-Marquardt algorithm and implemented using the neural network toolbox [25] of MATLAB 7. The training epochs were chosen so that the sum-squared error (the difference between the actual and the expected network outputs) and the sum-squared weights (SSW) become less than a specified value.
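The 5-15-1 forward pass of Eq. (16) can be sketched as follows; the weights here are random placeholders standing in for the Levenberg-Marquardt fitted values, so the numerical output is illustrative only:

```python
import math
import random

# Sketch of the 5-15-1 MLP of Eq. (16), x = B phi(A s + a) + b, with a
# tanh hidden layer. The weights below are random placeholders; in the
# paper they are fitted to measured J-V data with Levenberg-Marquardt.
random.seed(0)
N_IN, N_HID = 5, 15
A = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_HID)]
a = [random.gauss(0, 1) for _ in range(N_HID)]
B = [random.gauss(0, 1) for _ in range(N_HID)]
b = random.gauss(0, 1)

def mlp_current(s):
    """Forward pass: s = [voltage, polymer thickness, active area,
    cathode thickness, temperature] -> predicted diode current."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, s)) + bias)
              for row, bias in zip(A, a)]
    return sum(w * h for w, h in zip(B, hidden)) + b

# One hypothetical operating point (units as in Table 1):
i_pred = mlp_current([2.0, 100.0, 20.0, 1.435, 300.0])
```

In practice the same forward pass is what MATLAB's toolbox evaluates after training; only the weight values differ.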


Fig.10 Comparison of currents from the model and the experiment

The devices used here have the structure ITO/PEDOT/MEH-PPV/Al. The fabrication was carried out according to the procedure described elsewhere. The three devices whose characteristics are plotted here have the specifications listed in Table 1.

Device   Active Area (sq. mm)   Al Thickness (kÅ)   Polymer Thickness (nm)
1        20                     1.435               100
2        25                     1.435               75
3        100                    1.675               75

Table 1: Specifications of fabricated devices

The network was also trained with data extracted from other reported [26] results; in those cases the device currents at different temperatures form the target vector, and the model showed better convergence. Figure 10 shows the model output compared with the experimental results. It is evident that the model values show only a very small difference from the experimental results. More data from similar devices would give better convergence.

4.2.4 Parameter Extraction

It is to be highlighted that the model can very well be used for extracting the device current at different temperatures for a device at a particular voltage.

Fig.11 Device current at 2V for different active areas

Figure 11 shows the device current at an applied voltage of 2 V for the three devices listed in Table 1. The extraction is carried out by adding a module to the MATLAB code for the ANN model.

Conclusions

In this paper an attempt has been made to review major milestones in the development of organic light emitting diodes, focusing on device physics and modeling. The theories pertaining to carrier injection, recombination and emission are reviewed, and the works explaining Fowler-Nordheim tunneling in carrier injection and Langevin theory in recombination are revisited. In device modeling, there have been exciton based and band based models unveiling the basic phenomena of luminescence. We have presented an artificial neural network based model of an ITO/PEDOT-PSS/MEH-PPV/Al device, giving the device current as output. It shows commendable convergence with the experimental output of the device we fabricated. The model is less computationally intensive and demands less memory, providing the flexibility to extract device currents at different temperatures and active areas.

References

1. J. H. Burroughes, D. D. C. Bradley, A. R. Brown, R. N. Marks, K. Mackay, R. H. Friend, P. L. Burns and A. B. Holmes, Nature, 347, 539 (1990).

2. G. Yu and Alan J. Heeger, Synthetic Metals 85 (1997) 1183.

3. Y. Cao et al., Synthetic Metals 87 (1997) 171.

4. Wolfgang Brutting, Stefan Berleb and Anton G. Muckl, Organic Electronics 2 (2001) 1-36.

5. I. H. Campbell, T. W. Hagler, D. L. Smith and J. P. Ferraris, Phys. Rev. Lett. 76 (1996) 1900.

6. J. C. Scott, Philip J. Brock, Jesse R. Salem, Sergio Ramos, George G. Malliaras, Sue A. Carter and Luisa Bozano, Synthetic Metals 111-112 (2000) 289-293.

7. H. Bassler, Phys. Stat. Sol. (b) 175 (1993) 15.

8. L. Bozano, S. A. Carter, J. C. Scott, G. G. Malliaras and P. J. Brock, Applied Physics Letters 74(8), 22 February 1999.

9. H. Antoniadis, M. A. Abkowitz and B. R. Hsieh, Applied Physics Letters 65(16), 17 October 1994.

10. M. A. Lampert and P. Mark, Current Injection in Solids (Academic, New York, 1970).

11. Papadimitrakopoulos, K. Konstadinidis, T. Miller, R. Opila, E. A. Chandross and M. E. Galvin, Chem. Mater. 6, 1563 (1994).

12. W. D. Gill, Journal of Applied Physics 43, 5033 (1972).

13. L. Bozano, S. A. Carter and P. J. Brock, Applied Physics Letters 73, 3911 (1998).

14. I. D. Parker, Journal of Applied Physics 75, 1656 (1994).

15. S. M. Sze, Physics of Semiconductor Devices (Wiley, New York, 1981).

16. G. G. Malliaras, J. R. Salem, P. J. Brock and J. C. Scott, Phys. Rev. B 58 (1998) R13411.

17. P. W. M. Blom and Marc J. M. de Jong, IEEE Journal of Selected Topics in Quantum Electronics 4(1), 1998.

18. P. W. M. Blom, M. J. M. de Jong and S. Breedijk, Appl. Phys. Lett. 71, 930-932 (1997).

19. P. Langevin, Ann. Chim. Phys. 28, 433-530 (1903).

20. Y. Kawabe, M. M. Morrell, G. E. Jabbour, S. E. Shaheen, B. Kippelen and N. Peyghambarian, Journal of Applied Physics 84(9), 1998.

21. V. B. Litovski, I. Radjenovic, Z. M. Mrcarica and S. L. Milenkovic, Electronics Letters 28(18), 27 August 1992.

22. A. K. Jain, J. Mao and J. Mohiuddin, IEEE Computer 29(3), 33-44, March 1996.

23. Simon Haykin, Neural Networks - A Comprehensive Foundation, 2nd ed., Prentice-Hall India, 1998.

24. M. T. Hagan and M. B. Menhaj, IEEE Trans. on Neural Networks 5, 989 (1994).

25. Neural Network Toolbox Documentation, MATLAB 7, The MathWorks Inc., U.S.A.

26. L. F. Santos, R. F. Bianchi and R. M. Faria, Journal of Non-Crystalline Solids 338-340 (2004) 590-594.

Strength and behaviour of retrofitted multi-storey RCC frames under lateral loading

N. Ganesan*, P.V. Indira* and Shyju P. Thadathil**

Abstract

This paper deals with an experimental investigation carried out to examine the suitability of ferrocement as a retrofitting material for RCC frames subjected to distress under reversed lateral cyclic loading. The investigation consists of casting and testing a three-bay three-storey RCC frame under reversed lateral cyclic loading until it was severely distressed. After unloading, the frame was retrofitted with ferrocement at the beam-column joints. The strengthened specimen was subjected to the same loading sequence in the 2nd stage. Performance parameters such as stiffness degradation, energy dissipation capacity and ductility of the bare frame and the retrofitted frame were compared and the results are presented.

Keywords: RCC frames, retrofitting, cyclic, ductility, ferrocement.

Introduction

Lateral load resistance of reinforced concrete moment resisting frames depends upon their ability to deform well into the inelastic range and dissipate energy through stable hysteretic behaviour. These inelastic deformations are mainly

concentrated in certain critical regions of the frame, such as the end sections of the beams at beam-column joints, which are suitably designed and detailed to undergo large inelastic deformations. As per global practice, the resulting yield mechanism under lateral cyclic loads for the entire structural frame should be capable of sustaining large drifts by distributing the inelastic demand uniformly across its members. Hence ductile moment resisting frames are designed to develop plastic hinges at the ends of the beams, while the columns remain elastic except at the base of the frame. This forms the strong-column weak-beam mechanism. RCC framed structures built before effective design provisions for lateral loads were established exist at various places. Due to several design deficiencies relative to current code requirements (IS: 456 (2000), IS: 1893 (2002), ACI 318, NZS 3101, etc.), the behaviour of such structures under lateral load conditions is governed by undesirable modes of failure such as the weak column-strong beam mechanism. In addition, many RCC frames are already in a distressed and deteriorated condition due to material deficiencies, poor workmanship, and unexpected loading from natural hazards like

* Professor, Department of Civil Engineering
** Research Scholar, Department of Civil Engineering


cyclones, earthquakes and blasts, which generate stress reversals in members. For economic and environmental reasons it would be too costly to demolish and rebuild these structures. Hence it is necessary to repair and strengthen these framed structures with effective techniques to extend their life cycles and provide preferred collapse mechanisms. The properties which enable structures to withstand the effects of severe stress reversals are ductility and energy dissipation capacity. This paper describes an experimental investigation carried out to study the effect of a retrofitting technique, namely ferrocement wrapping and laminate application, for strengthening frames under distress.

Research significance

Many of the RCC framed structures currently in service are conventionally designed. It is essential to evaluate the performance of these structures in terms of strength, degradation of stiffness, ductility and energy dissipation capacity under lateral loads. Investigations and analytical studies on the effect of confinement using ferrocement wrapping on damaged structural elements are scanty, particularly for RCC frames. The objective of this paper is to report the findings of an experimental investigation into the relative performance of bare and retrofitted frames in terms of stiffness, ductility, strength and failure mechanisms. This will add to the knowledge of how RC frames behave under lateral loading and provide much needed experimental data for further theoretical development in retrofitting with ferrocement. It will also allow engineers in the field to properly evaluate and retrofit such structures.

Experimental Programme

Test specimens

The experimental investigation consists of casting and testing a one-fourth scale model of a three-bay three-storey reinforced concrete frame under static lateral reversed cyclic loads in two stages.
In the first stage the specimen was loaded till collapse, and in the second stage the collapsed specimen was retrofitted with ferrocement and tested again. The RCC frame conforms to a typical

multi-storey building's details in relation to span-to-depth ratio, shear reinforcement percentage, longitudinal reinforcement percentage and material strengths, without the special detailing required for seismic resistance. The three-bay RCC frame stood 2.775 m tall and 3.15 m wide. The beams were 100 mm wide by 150 mm deep in cross section, and the columns also had cross sectional dimensions of 100 × 150 mm. To provide fixity at the bottom, a reinforced concrete base 100 mm wide, 450 mm deep and 1800 mm long was built integrally with the frame and connected with mild steel rods to the foundation block already designed and cast. The clear span of the beams was 850 mm and the clear storey height of the columns was 600 mm. The concrete used was of grade M20. Fig.1 shows the details of the bare frame.

Fig. 1- Details of bare frame

Test set up

The loading frame, which was already erected on the test floor of the laboratory as in Fig.1, was used to attach the hydraulic jacks in line with the points of application of load. Hydraulic jacks were used for the application of load through

the load cell attached. In order to measure lateral deflections, LVDTs having 300 mm travel and a least count of 0.01 mm were used. The LVDTs were attached to a steel frame consisting of ISMC 150 channels and ISA 50 angles, erected on the test floor using bolts. The application of push and pull loads on the frame was made possible by an arrangement of 25 mm diameter mild steel rods threaded at both ends and mild steel channels.

Instrumentation

Strain gauges of 120 Ω resistance and 4 mm gauge length were used to measure the strain on the longitudinal steel bars at the 1st storey interior joint, since this location is subjected to higher stress than the other storeys. Mechanical demec gauges were used to measure the strains on the members of the frame. Deflections at the top, second and first storey levels were measured using LVDTs. A data acquisition system was used to obtain steel strains, deflections and load cell readings continuously, as shown in Fig.2. Three hydraulic jacks of capacity 250 kN, 100 kN and 100 kN were provided at the top, middle and bottom storeys respectively for applying the lateral loading.

Loading sequence and observations in testing - stage I

The frames were subjected to quasi-static lateral reversed cyclic loading. The loading history for the test frames was pre-determined and consisted of a series of step-wise increasing load cycles, as shown in Fig.3. The loading cycles were kept symmetric, as the strength of the test specimen was the same in both directions, except for the last cycle, when the available stroke of the hydraulic jack in one direction was limited. The load was applied by the 250 kN, 100 kN and 100 kN hydraulic jacks at the top, middle and bottom storeys along the centerline of the beams. In the initial cycle, a 4.905 kN load was applied at each storey level. The load increment in the subsequent cycles was 9.81 kN per cycle. In the final cycle, the frame could sustain a base shear of 91.233 kN with a large lateral deflection. The testing was stopped at this stage and the load was gradually released. The maximum deflection at the top storey was 62.86 mm from the neutral position. At this stage the beam-column joints were severely distressed and visible cracks had developed. The load was released and the frame was subjected to the retrofitting process with the ferrocement wrapping technique.
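The step-wise loading history described above can be sketched as follows; the per-storey amplitudes (4.905 kN initially, increasing by 9.81 kN per cycle) follow the text, while the five-cycle count and the push/pull pairing are illustrative assumptions:

```python
def load_history(n_cycles=5, first_kn=4.905, step_kn=9.81):
    """Per-storey load amplitude for each cycle, and the symmetric
    push/pull pair (+P, -P) applied within each cycle."""
    amplitudes = [first_kn + step_kn * i for i in range(n_cycles)]
    sequence = [(+p, -p) for p in amplitudes]
    return amplitudes, sequence

amps, seq = load_history()
# amps -> [4.905, 14.715, 24.525, 34.335, 44.145]
```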

Fig. 3- loading history

Application of Ferrocement

Fig. 2- Data acquisition system

GI hexagonal wire mesh of size ½ inch × 22 gauge was used for the ferrocement application. The beam-column joints were thoroughly cleaned using a wire brush and roughened using a chisel to get adequate bond. Two layers of mesh were carefully wrapped tightly over the required length (2 times the effective depth of the section) using

winding wire. In addition, L-shaped mesh strips were placed at the joints to provide integrity to the wrapping. Plastering was done with cement mortar of ratio 1:2 by weight. The cement used conforms to the requirements of 43 grade Ordinary Portland Cement as per IS: 8112-1989, and the sand conforms to zone II as per IS: 383-1970. The water-cement ratio used was 0.5. The mortar was applied so that the thickness did not exceed 15 mm. The finished specimen was cured for 7 days with wet gunny bags wrapped around it.

Test results in the II stage

Damage pattern and failure mechanism

The same loading sequence as in the I stage was applied to the retrofitted frame in the II stage. The frame was loaded till the lateral load resistance was almost lost, showing large lateral deformation without any increase in load carrying capacity. For the retrofitted frame, the first crack was observed in the end section of the beams at the interior joint of the 1st storey. Even though the base of the frame was fixed to the foundation block by inserting mild steel rods through the holes already provided, slight rotations at these points meant that the restraint offered by the base simulated partial fixity. Hence in the initial cycles of loading, cracking was observed in the 1st and 2nd storeys, and in the final cycles the footing base, i.e. the column section, was also cracked. In reality, one may expect a lower rotational restraint offered by the surrounding soil to the footing, and hence such behaviour can be justified. With the increase in loading, there was further cracking of the concrete in the joint regions and in the columns at the footing base. In the 1st stage, spalling of the concrete had been observed at the 1st storey joints of the bare frame BF1 in the final cycle; such behaviour was absent in the 2nd stage. In the final cycle, the frame could sustain a base shear of 95.65 kN with a large lateral deflection. The testing was stopped at this stage and the load was gradually released. The maximum deflection at the top storey was 68.36 mm from the neutral position. Fig. 4 shows the failure of the retrofitted frame.

The inelastic actions in the frame were concentrated at the ends of the beams, near the beam-column joints. A few diagonal cracks were observed in the 1st and 2nd storey joints of the bare and retrofitted frames. In the case of the retrofitted frame, a large number of finer cracks formed in the final cycles of loading, compared with the fewer, wider cracks of the bare frame under distress. Damage to the beam-column joints of the bare frame is highly undesirable and detrimental to the overall lateral response of the structure. The computed value of the joint shear strength of the bare frame was 91.24 kN. The total flexural capacity of the column sections above and below the joint was 1.36 times the total flexural capacity of the beam sections to the left and right of the joint; this satisfies the requirement of the strong column-weak beam concept. The base shear versus top storey deflection plots for all cycles of load for the bare and retrofitted frames are shown in Figs. 5 and 6. From these figures, it may be noted that there is a residual deflection for each cycle of loading; these residual deflections were found to increase as the loading cycles progressed. The bare frame BF1 developed a side sway collapse mechanism in the final stage of loading, after severe cracking at the beam and column sections around the 1st storey beam-column joints. The column sections at the base of the frame were also severely cracked at this stage. The retrofitted frame RF1 showed similar behaviour at the ultimate stage, but the collapse was gradual.

Fig. 4- Retrofitted frame at collapse

Load-deformation response

The base shear versus top storey deflection curves for the frames are plotted in Figs. 5 and 6.

Fig. 5-Base Shear versus Top Storey Deflection of bare R.C Frame-BF1

Fig. 6- Base Shear versus Top Storey Deflection of Retrofitted frame-RF1

The frame retrofitted with ferrocement exhibits a smaller deflection than the bare frame for the same level of loading in the initial cycles, which indicates an increase in the stiffness of the frame as the ferrocement wrapping contributes flexural strength in combination with the parent RCC joint. The ultimate base shear was found to be 95.65 kN for the retrofitted frame RF1, which is 4.84 % more than the value shown by the bare frame BF1.

Stiffness Degradation

The stiffness of the frame was calculated as the base shear required to cause unit deflection at the top storey level. The stiffness in a particular cycle was calculated from the slope of the line joining the peak values of the base shear in each half cycle. The comparison of the stiffness degradation of the bare RC frame BF1 and the retrofitted frame RF1 is shown in Fig. 7.
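The cycle stiffness defined above, the slope of the line joining the two half-cycle peaks, can be computed as, for example; the peak values used below are hypothetical, not the measured data:

```python
def cycle_stiffness(peak_push, peak_pull):
    """Slope of the line joining the positive and negative half-cycle
    peaks, each given as (top-storey deflection mm, base shear kN).
    Returns the cycle stiffness in kN/mm."""
    (d1, v1), (d2, v2) = peak_push, peak_pull
    return (v1 - v2) / (d1 - d2)

# Hypothetical first-cycle peaks:
k1 = cycle_stiffness((1.3, 14.7), (-1.1, -12.5))
```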

Fig. 7- Comparison of Stiffness Degradation of the frames BF1 & RF1

It may be noted from Fig. 7 that the frame retrofitted with ferrocement exhibits less degradation of stiffness than the bare frame. The initial stiffness of the frame retrofitted with ferrocement is much higher, although the stiffness corresponding to the 5th cycle of loading is 3.38% less than that of the bare frame. The substantial increase in lateral stiffness in the first cycle shows the effect of the ferrocement wrapping, while in the subsequent loading cycles the frame could sustain large deformations.

Energy Dissipation Capacity

The energy dissipation capacity of a member under load is equal to the work done in straining or deforming the structure up to the limit of useful deflection, i.e., it is numerically equal to the area under the load-deflection curve. The proportionate energy dissipation during the various load cycles was calculated from the sum of the areas under the hysteresis loops of the base shear versus top storey deflection diagrams. The variation of the energy dissipation capacity of the specimens during each cycle is shown in Fig. 8. The cumulative energy dissipation capacity of the strengthened frame is 27.42% higher than that of the bare frame.

Fig. 8- Comparison of cumulative energy dissipation capacity of the specimens

Ductility

The ductility factor is determined as the ratio of the average of the maximum deflections in each half cycle to the deflection at yield obtained from the load-deflection graph. Fig. 9 shows the variation of the cumulative ductility factor of the bare and retrofitted frames. The ferrocement wrapping technique was found to increase the cumulative ductility factor of the bare frame by 60%.

Fig. 9- Comparison of cumulative ductility factor of the specimens
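The two measures above, energy dissipated per cycle (area enclosed by the hysteresis loop) and the ductility factor, can be computed from recorded base shear-deflection points as sketched here; the loop and the deflection values are made-up illustrations, not the test data:

```python
def loop_energy(points):
    """Area enclosed by a closed hysteresis loop of (deflection mm,
    base shear kN) points, via the shoelace formula. Returns kN mm."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def ductility_factor(max_push, max_pull, yield_deflection):
    """Average of the half-cycle peak deflections over the yield value."""
    return (abs(max_push) + abs(max_pull)) / 2.0 / yield_deflection

# A 2 mm wide, 10 kN tall rectangular loop encloses 20 kN mm:
e = loop_energy([(0.0, -5.0), (2.0, -5.0), (2.0, 5.0), (0.0, 5.0)])
mu = ductility_factor(12.0, -10.0, 5.0)
```

Summing `loop_energy` over all cycles gives the cumulative dissipation values compared in Table 1.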

Table 1 gives a comparison of the structural properties of the bare and retrofitted frames.

COMPARISON OF EXPERIMENTAL RESULTS IN TWO STAGES OF TESTING

                      Base shear at       Stiffness             Cumulative        Cumulative energy
                      ultimate loading    degradation           ductility         dissipation
                      (kN)                (kN/mm)               factor            (kN mm)
Bare frame            91.233              6.12 to 1.48          9.075             6398
Retrofitted frame     95.6475             11.32 to 1.43         19.24             8152.5
Improvement           4.84 % higher       84.97 % higher in     112.01 % higher   27.42 % higher
achieved              after retrofitting  the 1st cycle;        after             after
                                          3.38 % less in the    strengthening     strengthening
                                          final cycle

Table 1- Comparison of test results

Concluding remarks

The major findings of the experimental investigation are summarized as follows:

(i) Ferrocement application is a quick rehabilitation process for RC frames which involves easy methods of intervention to enhance structural properties like stiffness, lateral load carrying capacity and ductility.

(ii) The ferrocement composite wrapping allowed the retrofitted structure to withstand a higher displacement demand in the final loading cycles.

(iii) The retrofitted structure showed a large deformation capacity without exhibiting any loss of strength and was able to carry a higher ultimate load.

(iv) The cyclic behaviour of the strengthened frame was stable and no significant cumulative damage was observed on the strengthened members.

(v) The retrofitted specimen provided 27.42% higher energy dissipation and a 60% increase in the cumulative ductility factor compared with the bare frame. This is highly desirable for multi-storeyed framed structures resisting lateral loads.

Hence the ferrocement retrofitting technique can be adopted as an efficient and effective method of retrofitting RCC frames.

References

1. Bracci, J.M., Reinhorn, A.M. and Mander, J.B., "Seismic retrofit of reinforced concrete buildings designed for gravity loads: performance of structural model," ACI Structural Journal, November-December 1995, pp. 711-723.

2. Paramasivam, P., Lim, C.T.E. and Ong, K.C.G., "Strengthening of RC beams with ferrocement laminates," Cement and Concrete Composites, 1998, pp. 53-65.

3. Alpa S., "Seismic Retrofitting by Conventional Methods," ICJ, August 2002, pp. 489-495.

4. Yong Lu, "Comparative Study of Seismic Behaviour of Multi-storey Reinforced Concrete Framed Structures," J. Structural Engineering, February 2002, pp. 169-178.

5. Rocha, P., Delgado, P., Costa, A. and Delgado, R., "Seismic retrofit of RC frames," Computers and Structures, May 2004, pp. 1523-1534.

6. Arulselvan S., Perumal E.B.P., Subramanian K. and Shanthakumar A.R., "Experimental Investigation on Two Dimensional RC Infilled Frame-RC Plane Frame Interaction for Seismic Resistance," Proceedings of National Conference, EQADS-2006, pp. B.43-B.55.

7. Huang C.H. and Sung Y.C., "Experimental Study and Modelling of Masonry-Infilled Concrete with and without CFRP Jacketing," Structural Engineering and Mechanics, Vol. 22, No. 4, November 2006, pp. 449-467.

8. Kien Vinh Duong et al., "Seismic Behavior of Shear-Critical Reinforced Concrete Frame: Experimental Investigation," ACI Structural Journal, May-June 2007, pp. 304-313.

9. Stefano Pampanin, Davide Bolognini and Alberto Pavese, "Performance based retrofit strategy for existing reinforced concrete frame systems using fiber reinforced polymer composites," ASCE Journal of Composites for Construction, March-April 2007, pp. 211-226.

10. Alexandros G. Tsonos, "Cyclic Load Behavior of Reinforced Concrete Beam-Column Subassemblages of Modern Structures," ACI Structural Journal, July-August 2007, pp. 468-478.
