Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

MHD Effects on Unsteady Flow past an Accelerated Vertical Plate with Variable Temperature and Mass Diffusion in the presence of Thermal Radiation

M. Muralidharan (1), R. Muthucumaraswamy (2)

(1) Department of Mathematics, Mohamed Sathak A.J. College of Engineering, Siruseri, IT Park, Chennai, INDIA.
(2) Department of Applied Mathematics, Sri Venkateswara College of Engineering, Sriperumbudur, Tamilnadu, INDIA.

ABSTRACT
Thermal radiation effects on unsteady flow past a uniformly accelerated infinite vertical plate with variable temperature and mass diffusion, in the presence of a transversely applied magnetic field, are presented. The plate temperature is raised to Tw and the concentration level near the plate is also raised to C'w. The dimensionless governing equations are solved using the Laplace-transform technique. The velocity, temperature and concentration profiles are studied for different physical parameters: magnetic field parameter, radiation parameter, thermal Grashof number, mass Grashof number, Schmidt number, Prandtl number and time. It is observed that the velocity increases with increasing values of the thermal Grashof number or mass Grashof number, and that the velocity increases with decreasing magnetic field parameter or radiation parameter.
Keywords: accelerated, isothermal, radiation, vertical plate, heat transfer, mass diffusion, magnetic field.

1. INTRODUCTION
MHD plays an important role in agriculture, the petroleum industry, geophysics and astrophysics. Important applications arise in the study of geological formations, in the exploration and thermal recovery of oil, and in the assessment of aquifers, geothermal reservoirs and underground nuclear waste storage sites. MHD flow has applications in meteorology, solar physics and the motion of the Earth's core. It also

has applications in the fields of stellar and planetary magnetospheres, aeronautics, chemical engineering and electronics. In the field of power generation, MHD is receiving considerable attention due to the possibilities it offers for much higher thermal efficiencies in power plants. Radiative heat and mass transfer play an important role in manufacturing industries; the design of fins, steel rolling, nuclear power plants, gas turbines and various propulsion devices for aircraft, missiles, satellites and space vehicles are examples of such engineering applications. England and Emery [2] studied the thermal radiation effects of an optically thin gray gas bounded by a stationary vertical plate. Das et al. [1] analyzed radiation effects on flow past an impulsively started infinite isothermal vertical plate. Gupta et al. [4] studied free convection flow past a linearly accelerated vertical plate in the presence of viscous dissipative heat using a perturbation method. Kafousias and Raptis [5] extended the above problem to include mass transfer effects subject to variable suction or injection. Free convection effects on flow past an accelerated vertical plate with variable suction and uniform heat flux in the presence of a magnetic field were studied by Raptis et al. [6]. MHD effects on flow past an infinite vertical plate, for both impulsive and accelerated motion of the plate, were studied by Raptis and Singh [7]. Mass transfer effects on flow past a uniformly accelerated vertical plate were studied by Soundalgekar [9]. Again, mass transfer effects on flow past an accelerated vertical plate with uniform heat flux was


analyzed by Singh and Singh [8]. Basant Kumar Jha and Ravindra Prasad [3] analyzed mass transfer effects on the flow past an accelerated infinite vertical plate with heat sources.
Hence, it is proposed to study hydromagnetic effects on flow past a uniformly accelerated infinite vertical plate with heat and mass transfer in the presence of thermal radiation. The dimensionless governing equations are solved using the Laplace-transform technique, and the solutions are obtained in terms of exponential and complementary error functions. Such a study is useful in the magnetic control of molten iron flow in the steel industry, liquid metal cooling in nuclear reactors and the magnetic suppression of molten semi-conducting materials.

2. MATHEMATICAL FORMULATION
The unsteady flow of a viscous incompressible fluid past a uniformly accelerated isothermal vertical infinite plate with variable temperature and mass diffusion in the presence of a magnetic field is considered. The fluid is initially at rest and surrounds an infinite vertical plate with temperature T∞ and concentration C'∞. The x-axis is taken along the plate in the vertically upward direction and the y-axis is taken normal to the plate. At time t' ≤ 0, the plate and the fluid are at the same temperature T∞. At time t' > 0, the plate is accelerated in its own plane with a velocity proportional to time (in dimensionless form, U = t at the plate), the temperature of the plate is raised to Tw, and the concentration level near the plate is also raised to C'w. A transverse magnetic field of uniform strength B0 is applied normal to the plate. The fluid considered here is a gray, absorbing-emitting radiating but non-scattering medium. Under the usual Boussinesq approximation, the unsteady flow is governed by the following dimensionless equations, as discussed in Muralidharan and Muthucumaraswamy [10]:

\[ \frac{\partial U}{\partial t} = Gr\,\theta + Gc\,C + \frac{\partial^{2} U}{\partial Y^{2}} - M\,U \tag{1} \]

\[ \frac{\partial \theta}{\partial t} = \frac{1}{Pr}\,\frac{\partial^{2}\theta}{\partial Y^{2}} - \frac{R}{Pr}\,\theta \tag{2} \]

\[ \frac{\partial C}{\partial t} = \frac{1}{Sc}\,\frac{\partial^{2} C}{\partial Y^{2}} \tag{3} \]

The initial and boundary conditions in non-dimensional form are

\[ U = 0,\quad \theta = 0,\quad C = 0 \qquad \text{for all } Y,\ t \le 0 \]
\[ t > 0:\quad U = t,\quad \theta = t,\quad C = t \qquad \text{at } Y = 0 \]
\[ U \to 0,\quad \theta \to 0,\quad C \to 0 \qquad \text{as } Y \to \infty \tag{4} \]

The dimensionless governing equations (1) to (3), subject to the initial and boundary conditions (4), are solved by the usual Laplace-transform technique and the solutions are derived as follows:

\[
C = t\left[\left(1 + 2\eta^{2}Sc\right)\mathrm{erfc}\!\left(\eta\sqrt{Sc}\right) - \frac{2\eta\sqrt{Sc}}{\sqrt{\pi}}\exp\!\left(-\eta^{2}Sc\right)\right] \tag{5}
\]
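As a quick numerical check, the concentration profile of equation (5) can be evaluated with the standard library's complementary error function. This sketch is not part of the paper; it simply evaluates the closed-form solution, with eta = Y/(2*sqrt(t)):

```python
from math import erfc, exp, pi, sqrt

def concentration(eta, t, Sc):
    """Dimensionless concentration C(eta, t) of equation (5); the wall
    concentration ramps linearly in time, i.e. C = t at Y = 0."""
    x = eta * sqrt(Sc)
    return t * ((1.0 + 2.0 * x * x) * erfc(x)
                - (2.0 / sqrt(pi)) * x * exp(-x * x))
```

At the plate (eta = 0) the expression reduces to C = t, matching the boundary condition, and at a fixed eta > 0 it decreases as Sc grows, which is the trend reported for figure 1.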
\[
\begin{aligned}
\theta ={}& \frac{t}{2}\Big[\exp(2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{bt}) + \exp(-2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{bt})\Big]\\
&+ \frac{\eta\sqrt{Pr\,t}}{2\sqrt{b}}\Big[\exp(2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{bt}) - \exp(-2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{bt})\Big]
\end{aligned} \tag{6}
\]
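The temperature solution can be checked numerically in the same way. The sketch below (not from the paper) assumes b = R/Pr; at the plate it reduces to theta = t, and increasing the radiation parameter R lowers the temperature, as reported for figure 3:

```python
from math import erfc, exp, sqrt

def temperature(eta, t, Pr, R):
    """Dimensionless temperature theta(eta, t) of equation (6), with
    b = R/Pr and eta = Y/(2*sqrt(t)); plate temperature ramps as theta = t."""
    b = R / Pr
    p, q = eta * sqrt(Pr), sqrt(b * t)
    ep, em = exp(2.0 * eta * sqrt(R * t)), exp(-2.0 * eta * sqrt(R * t))
    s = ep * erfc(p + q) + em * erfc(p - q)   # symmetric erfc pair
    d = ep * erfc(p + q) - em * erfc(p - q)   # antisymmetric erfc pair
    return 0.5 * t * s + (p * sqrt(t) / (2.0 * sqrt(b))) * d
```

Note that the coefficient of the antisymmetric pair carries a factor eta, so the boundary value theta(0, t) = t is exact regardless of R.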
\[
\begin{aligned}
U ={}& (1 + 2ac + 2ed)\left\{\frac{t}{2}\Big[\exp(2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta + \sqrt{Mt}) + \exp(-2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta - \sqrt{Mt})\Big]\right.\\
&\qquad\left.{}+ \frac{\eta\sqrt{t}}{2\sqrt{M}}\Big[\exp(2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta + \sqrt{Mt}) - \exp(-2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta - \sqrt{Mt})\Big]\right\}\\
&+ (a + e)\Big[\exp(2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta + \sqrt{Mt}) + \exp(-2\eta\sqrt{Mt})\,\mathrm{erfc}(\eta - \sqrt{Mt})\Big]\\
&- a\exp(ct)\Big[\exp(2\eta\sqrt{(M+c)t})\,\mathrm{erfc}(\eta + \sqrt{(M+c)t}) + \exp(-2\eta\sqrt{(M+c)t})\,\mathrm{erfc}(\eta - \sqrt{(M+c)t})\Big]\\
&- e\exp(dt)\Big[\exp(2\eta\sqrt{(M+d)t})\,\mathrm{erfc}(\eta + \sqrt{(M+d)t}) + \exp(-2\eta\sqrt{(M+d)t})\,\mathrm{erfc}(\eta - \sqrt{(M+d)t})\Big]\\
&- a\Big[\exp(2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{bt}) + \exp(-2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{bt})\Big]\\
&- act\Big[\exp(2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{bt}) + \exp(-2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{bt})\Big]\\
&- \frac{ac\,\eta\sqrt{Pr\,t}}{\sqrt{b}}\Big[\exp(2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{bt}) - \exp(-2\eta\sqrt{Rt})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{bt})\Big]\\
&+ a\exp(ct)\Big[\exp(2\eta\sqrt{Pr(b+c)t})\,\mathrm{erfc}(\eta\sqrt{Pr} + \sqrt{(b+c)t}) + \exp(-2\eta\sqrt{Pr(b+c)t})\,\mathrm{erfc}(\eta\sqrt{Pr} - \sqrt{(b+c)t})\Big]\\
&- 2e\,\mathrm{erfc}(\eta\sqrt{Sc})
+ e\exp(dt)\Big[\exp(2\eta\sqrt{Sc\,d\,t})\,\mathrm{erfc}(\eta\sqrt{Sc} + \sqrt{dt}) + \exp(-2\eta\sqrt{Sc\,d\,t})\,\mathrm{erfc}(\eta\sqrt{Sc} - \sqrt{dt})\Big]\\
&- 2ed\,t\left[(1 + 2\eta^{2}Sc)\,\mathrm{erfc}(\eta\sqrt{Sc}) - \frac{2\eta\sqrt{Sc}}{\sqrt{\pi}}\exp(-\eta^{2}Sc)\right]
\end{aligned} \tag{7}
\]

where

\[
b = \frac{R}{Pr},\qquad c = \frac{M - R}{Pr - 1},\qquad d = \frac{M}{Sc - 1},\qquad a = \frac{Gr}{2c^{2}(1 - Pr)},\qquad e = \frac{Gc}{2d^{2}(1 - Sc)},\qquad \eta = \frac{Y}{2\sqrt{t}}.
\]

The concentration profiles have the common feature that the concentration decreases monotonically from the surface to a zero value far away in the free stream, and it is observed that the wall concentration increases with decreasing Schmidt number.

The concentration profiles for different values of time (t = 0.2, 0.4, 0.6, 0.8, 1) and Sc = 0.3 are shown in figure 2. The trend shows that the wall concentration increases with increasing values of time t. The temperature profiles, calculated for different values of the thermal radiation parameter (R = 0.2, 2, 5, 10) at time t = 1, are shown in figure 3. The effect of the thermal radiation parameter on the temperature profiles is significant: the temperature increases with decreasing radiation parameter.
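The closed-form velocity solution can also be evaluated numerically. The sketch below (Python, not from the paper) assumes the constant definitions b = R/Pr, c = (M − R)/(Pr − 1), d = M/(Sc − 1), a = Gr/(2c²(1 − Pr)), e = Gc/(2d²(1 − Sc)) and η = Y/(2√t), and requires Pr < 1 < Sc with M + c > 0 and M + d > 0 so that all square roots are real. A useful sanity check is that U reduces to t at the plate:

```python
from math import erfc, exp, pi, sqrt

def velocity(eta, t, Gr=10.0, Gc=5.0, Pr=0.71, Sc=2.01, M=0.2, R=2.0):
    """Dimensionless velocity U(eta, t); eta = Y/(2*sqrt(t)).

    Assumes Pr < 1 < Sc and M + c > 0, M + d > 0. Constants:
    b = R/Pr, c = (M-R)/(Pr-1), d = M/(Sc-1),
    a = Gr/(2*c**2*(1-Pr)), e = Gc/(2*d**2*(1-Sc)).
    """
    b = R / Pr
    c = (M - R) / (Pr - 1.0)
    d = M / (Sc - 1.0)
    a = Gr / (2.0 * c * c * (1.0 - Pr))
    e = Gc / (2.0 * d * d * (1.0 - Sc))

    def pair(scale, shift):
        # (sum, difference) of exp(+/- 2*eta*sqrt(scale*shift*t)) *
        # erfc(eta*sqrt(scale) +/- sqrt(shift*t))
        u, q = eta * sqrt(scale), sqrt(shift * t)
        p1 = exp(2.0 * eta * sqrt(scale * shift * t)) * erfc(u + q)
        p2 = exp(-2.0 * eta * sqrt(scale * shift * t)) * erfc(u - q)
        return p1 + p2, p1 - p2

    sM, dM = pair(1.0, M)          # homogeneous (magnetic) terms
    sMc, _ = pair(1.0, M + c)
    sMd, _ = pair(1.0, M + d)
    sPb, dPb = pair(Pr, b)         # note sqrt(Pr*b*t) = sqrt(R*t)
    sPbc, _ = pair(Pr, b + c)
    sSd, _ = pair(Sc, d)
    x = eta * sqrt(Sc)

    # ramp response of the homogeneous operator (equals t at eta = 0)
    ramp_M = 0.5 * t * sM + (eta * sqrt(t) / (2.0 * sqrt(M))) * dM
    # ramped-concentration contribution (same bracket as the C solution)
    conc = t * ((1.0 + 2.0 * x * x) * erfc(x)
                - (2.0 / sqrt(pi)) * x * exp(-x * x))

    return ((1.0 + 2.0 * a * c + 2.0 * e * d) * ramp_M
            + (a + e) * sM
            - a * exp(c * t) * sMc
            - e * exp(d * t) * sMd
            - a * (1.0 + c * t) * sPb
            - (a * c * eta * sqrt(Pr * t) / sqrt(b)) * dPb
            + a * exp(c * t) * sPbc
            - 2.0 * e * erfc(x) + e * exp(d * t) * sSd
            - 2.0 * e * d * conc)
```

For the parameter set of figure 5 (Gr = 10, Gc = 5, Pr = 0.71, Sc = 2.01, M = 0.2, R = 2, t = 0.4), the profile starts at U = t at the wall and decays to zero far from the plate.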

3. RESULTS AND DISCUSSION
For a physical understanding of the problem, numerical computations are carried out for different values of the physical parameters Gr, Gc, Sc, Pr, M and t. The value of the Schmidt number Sc is taken to be 2.01, and the value of the Prandtl number is chosen to represent air (Pr = 0.71). The numerical values of the velocity, temperature and concentration are computed for different physical parameters such as the Prandtl number, thermal Grashof number, mass Grashof number, Schmidt number and time. Figure 1 illustrates the concentration profiles for different values of the Schmidt number (Sc = 0.16, 0.3, 0.6, 2.01) at t = 0.2. The effect of the Schmidt number on the concentration field is significant. The profiles have the common feature that the concentration decreases in a monotone fashion.

Fig. 1 Concentration profiles for different values of Sc
Fig. 2 Concentration profiles for different values of t

Fig. 3 Temperature profiles for different values of R
Fig. 6 Velocity profiles for different values of M
Fig. 4 Temperature profiles for different values of t
Fig. 7 Velocity profiles for different values of R
Fig. 5 Velocity profiles for different values of t
Fig. 8 Velocity profiles for different values of Gr, Gc

Figure 4 depicts the temperature profiles for different values of time (t = 0.2, 0.4, 0.6, 1) in the presence of thermal radiation (R = 2). It is clear that the temperature increases with increasing values of time t. The velocity profiles for different times (t = 0.2, 0.3, 0.4) with R = 2, Gr = 10, Gc = 5, Pr = 0.71, Sc = 2.01 and M = 0.2 are presented in figure 5. It is observed that the velocity increases with increasing values of time t. Figure 6 illustrates the effect of the magnetic field parameter on the velocity for M = 0.2, 2, 5 with R = 10, Gr = 10, Gc = 5, Pr = 0.71, Sc = 2.01 and t = 0.6. It is observed that the velocity increases with decreasing values of the magnetic field parameter; that is, an increase in the magnetic field parameter leads to a fall in the velocity. This agrees with expectation, since the magnetic field exerts a retarding force on the free convective flow. Figure 7 demonstrates the effect of the radiation parameter on the velocity for R = 2, 5, 10 with M = 0.2, Gr = 10, Gc = 5, Pr = 0.71, Sc = 2.01 and t = 0.4. It is observed that the velocity increases with decreasing radiation parameter. Figure 8 shows the effect of the thermal Grashof number (Gr = 5, 10) and mass Grashof number (Gc = 5, 10) on the velocity with R = 2, Pr = 0.71, Sc = 2.01 and M = 0.2 at time t = 0.4. It is observed that the velocity increases with increasing values of the thermal Grashof number or mass Grashof number.

4. CONCLUSION
A theoretical solution for hydromagnetic flow past a uniformly accelerated infinite vertical plate with variable temperature and mass diffusion in the presence of thermal radiation has been presented. The dimensionless governing equations are solved by the usual Laplace-transform technique. The effects of different parameters such as the thermal Grashof number, mass Grashof number and time are studied graphically. It is observed that the velocity increases with increasing values of

Gr, Gc and t, but the trend is reversed with respect to the magnetic field parameter.

NOMENCLATURE AND GREEK SYMBOLS
a*    absorption coefficient
C'    species concentration in the fluid
C     dimensionless concentration
C'w   wall concentration
C'∞   concentration in the fluid far away from the plate
Cp    specific heat at constant pressure
D     mass diffusion coefficient
Gc    mass Grashof number
Gr    thermal Grashof number
g     acceleration due to gravity
k     thermal conductivity of the fluid
Pr    Prandtl number
Sc    Schmidt number
qr    radiative heat flux in the y-direction
R     radiation parameter
T     temperature of the fluid near the plate
Tw    temperature of the plate
T∞    temperature of the fluid far away from the plate
t'    time
t     dimensionless time
u     velocity of the fluid in the x-direction
u0    velocity of the plate
U     dimensionless velocity component in the x-direction
x     spatial coordinate along the plate
y'    spatial coordinate normal to the plate
β     volumetric coefficient of thermal expansion
β*    volumetric coefficient of expansion with concentration
μ     coefficient of viscosity
ν     kinematic viscosity
ρ     density of the fluid
σ     Stefan-Boltzmann constant
θ     dimensionless temperature
η     similarity parameter
erfc  complementary error function

REFERENCES
[1] U.N. Das, R.K. Deka and V.M. Soundalgekar, Radiation effects on flow past an impulsively started vertical infinite plate, J. Theo. Mech., Vol. 1, pp. 111-115, 1996.
[2] W.G. England and A.F. Emery, Thermal radiation effects on the laminar free convection boundary layer of an


absorbing gas, J. Heat Transfer, Vol. 91, pp. 37-44, 1969.
[3] Basant Kumar Jha and Ravindra Prasad, Free convection and mass transfer effects on the flow past an accelerated vertical plate with heat sources, Mechanics Research Communications, Vol. 17, pp. 143-148, 1990.
[4] Gupta A.S., Pop I. and Soundalgekar V.M., Free convection effects on the flow past an accelerated vertical plate in an incompressible dissipative fluid, Rev. Roum. Sci. Techn.-Mec. Apl., Vol. 24, pp. 561-568, 1979.
[5] Kafousias N.G. and Raptis A., Mass transfer and free convection effects on the flow past an accelerated vertical infinite plate with variable suction or injection, Rev. Roum. Sci. Techn.-Mec. Apl., Vol. 26, pp. 11-22, 1981.
[6] Raptis A., Tzivanidis G.J. and Peridikis C.P., Hydromagnetic free convection flow past an accelerated vertical infinite plate with variable suction and heat flux, Letters in Heat and Mass Transfer, Vol. 8, pp. 137-143, 1981.
[7] Raptis A. and Singh A.K., MHD free convection flow past an accelerated vertical plate, Letters in Heat and Mass Transfer, Vol. 8, pp. 137-143, 1981.
[8] Singh A.K. and Singh J., Mass transfer effects on the flow past an accelerated vertical plate with constant heat flux, Astrophysics and Space Science, Vol. 97, pp. 57-61, 1983.
[9] Soundalgekar V.M., Effects of mass transfer on flow past a uniformly accelerated vertical plate, Letters in Heat and Mass Transfer, Vol. 9, pp. 65-72, 1982.
[10] Muralidharan M. and Muthucumaraswamy R., Thermal radiation on linearly accelerated vertical plate with variable temperature and uniform mass flux, Indian Journal of Science and Technology, Vol. 3, pp. 398-401, 2010.



Evaluation and Adoption of Free and Open Source Software Lekshmy Preeti Money

Praseetha S

Mohankumar D

Computer Division, Avionics Entity, Vikram Sarabhai Space Centre, Indian Space Research Organization, Veli, Thiruvananthapuram, India

ABSTRACT
This paper takes an in-depth look at Open Source Software: the various licensing schemes of OSS, the quality benefits OSS imparts over commercial software, widely reported factors for switching to OSS and an evaluation of OSS against each of those factors, and methodologies for adopting OSS, including a case study of the InfoCache software.
Keywords: Open Source Software (OSS), security, ease of evolution, maintainability, testability, reliability, understandability, operability, brownfield modernization, greenfield adoption, licences, copyright, copyleft

1. INTRODUCTION
Open Source Software (OSS, also known as Free and Open Source Software, FOSS) refers to software that may be freely used, modified or distributed subject to certain restrictions with respect to copyright and the protection of its open source status. It must be noted that the "free" in FOSS doesn't mean free of cost, but freedom to modify and/or redistribute. [3]

2. OSD: A FORMAL DEFINITION
The Open Source Initiative (OSI) formed and established the Open Source Definition (OSD) [5]. The OSD is a formalization of what it means to distribute software that is open, viz.:
• Free distribution: the license cannot restrict selling or giving away
• Source code included, i.e. the software includes unobfuscated source code
• Derived works, i.e. the software can be modified or distributed by others
• Integrity of the author's source code, i.e. it is known who gets credit for the source code
• No discrimination against persons or groups (e.g. ethnic or religious groups)
• No discrimination against fields of endeavour
• Distribution of license (i.e. forbidding the addition of further restrictive licensing)
• License must not be specific to a product (i.e. rights cannot depend on a particular distribution)
• License must not contaminate other software (e.g. both licensed software and OSS can coexist in the same distribution)

3. VARIOUS KINDS OF OSS PROJECTS
Open source desktop environments and related end-user desktop applications have increased in both number and functionality. Table 1 below shows a broad categorization of the various types of OSS projects.

Table 1. Categorization of OSS projects
Type | Control style | Community structure | Examples
Exploration oriented (sharing knowledge and innovation) | Cathedral-like central control [7] | Project leader, many readers | GNU, Perl, Linux kernel
Utility oriented (satisfying an individual need) | Bazaar-like decentralized control [7] | Many peripheral developers, peer support to passive users | Linux system excluding the kernel
Service oriented (providing stable services) | Council-like central control | Core project members, passive system developers | Apache, PostgreSQL

4. INFOCACHE: A CASE STUDY
To study the switch from proprietary software to OSS, InfoCache, an indigenously developed software of VSSC/ISRO, is considered. It is a central repository of information for storing and analyzing mission related data. This includes data related to mission performance, telemetry and Post Flight Analysis (PFA) reports. It has four main functional modules: Mission Performance Display (also known as the View module), Flight Data Archive, Non Conformances & Failures database and PFA Reports Search Engine. InfoCache was


originally developed in Microsoft Active Server Pages (ASP) and Microsoft Access database, using Internet Information Service (IIS) web server on Windows operating system.

Apache Tomcat serves as the middle tier. Tomcat is a Web server and a servlet container; Apache is a Web server. Tomcat is often integrated with Apache to enhance the latter with servlet capabilities and to capitalize on Apache's optimized and robust static-page delivery mechanisms.

Fig.1. Version 1.0 of InfoCache View module

Fig.5. Version 2.0 InfoCache Enhanced View Mission, Parameter selection and display in single page as opposed to hierarchical structure of pages in Version 1.0 Fig.2. Version 1.0 Listing Subsystems for selected Mission

Fig.3. Version 1.0 Listing pages of subsystem

Fig.4. Version 1.0 Parameters listed for selected subsystem

Inefficiencies in the existing software design, coupled with the organizational policy favouring development in OSS, led to the project being re-developed using open source tools. The project, undertaken by COM/VSSC, aimed at re-developing the Mission Performance Display or View module (the module for viewing post flight data associated with each mission) for better maintenance and a more configurable display, implemented using open source solutions: Java, Java Server Pages (JSP), MySQL database and Apache Tomcat as the middle tier.

Despite the fact that ASP supported dynamic paging, the previous module used static web pages to display the data of each launch vehicle mission; hence thousands of ASP files had to be maintained. Figures 1 to 4 show the various pages required to view parameters in the old version of the InfoCache View module. The enhanced View module would use dynamically created web pages for display and thus reduce the number of middle-tier files. This would eliminate manual intervention for enabling new mission data. Display properties would be easily reconfigured, since they are stored in the database. An option for page wise security instead of module wise security would also be implemented. Fig. 5 shows the enhanced View module. The following two functionalities would also be developed as add-ons to the existing software:
• 'Online Upload' for addition / modification of mission data based on a well-defined workflow and online status display, including e-mail notification of updates, approval or rejection.
• 'Advanced Post Flight Analysis Search', which would provide full text search and highlighting of search string occurrences in a document (Ref. Fig. 6). The previous Search Engine conducted searches using keywords stored in the database. The new search engine would be implemented using an open source


component, SearchBlox, customized as SearchVSSC.

Fig.6. SearchVSSC, customized from the Open Source component SearchBlox

The database design would also be modified. In the earlier version of InfoCache, there were multiple MS Access databases, one for each subsystem of a particular mission. The new system envisaged a single database with multiple database tables for all missions. There would also be a shift from the proprietary MS Access to an open source solution, MySQL. InfoCache also sought the following security enhancements:
• Email ID authenticated login
• Module wise access rights
• Role based access
• Avoiding inconsistency by record locking
• Logging of all transactions

An evaluation of the limitations of the proprietary software used in InfoCache and the merits of the open source counterparts chosen for development in the subsequent version follows.

4.1. Apache Tomcat vs. IIS
IIS is bound to Microsoft technologies, whereas Apache is by far the most widely used Web server and runs on diverse platforms: UNIX, Linux, Mac OS and Windows. From the security angle, IIS is more vulnerable, requires more patching, and is more likely to have as-yet unexploited or unknown problems.

4.2. ASP vs. JSP
ASP essentially limits the developer to Microsoft platforms, and even the simplest of scripting mistakes can cause the server to crash or hang, effectively bringing down a website. JSP is based entirely upon Sun's popular Java programming language and gives developers the advantages of developing in Java. JSP supports the same modularity, reusability, platform independence and access to Java APIs that Java programming supports. The threading model and error handling of JSP also help prevent server hangs and crashes. JSP also beats ASP by supporting the definition of tags that abstract functionality

within a page, so tags can be defined in a tag library and then used within any JSP page. This makes for a better separation of page content from its code, which is one of Web development's prime directives. Global changes need only be made to the tags defined in a central library, making time-consuming, page-by-page fixes a thing of the past. Unlike ASP, JSP is far less platform-specific, and it doesn't rely as heavily on the company that created it (Sun Microsystems) for support or performance improvements.

4.3. MySQL vs. MS Access
Microsoft Access, a popular data management application, has the feel of a single-user data manager designed for local use. MySQL, on the other hand, easily handles many simultaneous users. It was designed from the ground up to run in a networked environment and to be a multiple-user system capable of servicing large numbers of clients. MySQL can manage hundreds of megabytes of data, and more; Access has limitations in this regard. MySQL also offers higher levels of security, and it runs on several platforms, whereas Access is a single-platform application.

To summarize, the high level of security offered by OSS, platform independence, the multitude of user forums, the high level of community support, interoperability with other commonly available open source components and ISRO's policy in favour of using FOSS solutions were the main reasons for choosing OSS over closed source systems in the InfoCache project.

5. OSS LICENCES
Typically, software licensing [6] is designed to protect the intellectual property rights of software owners; the objective of Open Source licensing, however, is to protect the open distributability of the software. Open Source licensing structures are classified as either Restrictive or Permissive, with others somewhere in between. The classification is based on the distribution intent of the original owner.
Restrictive licenses are preferred by projects that want to ensure the future openness of a project; they restrict the ability to make a derivative project's code proprietary. Restrictive licenses require derivatives, improvements or enhancements to be made available under similar terms. Examples of restrictive licenses are copyleft and reciprocal licenses and the GPL.


Permissive licenses are often preferred if a project needs to protect trade secrets or is intended for a secure environment. Although the licensing of the original project remains in force, modifications and enhancements may become proprietary. The distribution of code is permitted provided copyright notices and liability disclaimers are included and contributors' names are not used to endorse the new product. Examples of permissive OSS licenses include the Berkeley Software Distribution (BSD) license and the Apache Software License.
Some of the commonly used Open Source licenses are:
• Apache License, 2.0
• BSD licenses
• GNU General Public License (GPL)
• GNU Library or "Lesser" General Public License
• MIT license
• Mozilla Public License 1.1 (MPL)
• Common Development and Distribution License
• Eclipse Public License
• Artistic Licenses

6. QUALITY BENEFITS IMPARTED BY OSS
OSS can enhance certain software attributes as compared to development with proprietary software. These attributes include security, ease of evolution, maintainability, testability, reliability, understandability and interoperability. [1]

6.1. Security
Hoepman et al. [8] argue that OSS is essential to building secure systems. Their main argument is that "opening the source allows an independent assessment of the exposure of a system, and the risk associated with using the system, makes patching bugs easier and more likely and forces software developers to spend more effort on the quality of their code." They further state that OSS is easier for multiple security teams, outside vendors or independent agencies to assess. In high security organizations like ISRO, secure software, i.e. software less prone to external or internal threats, is of paramount importance.

6.2. Ease of evolution
Another hallmark of OSS is a vibrant community; if the community is vibrant, the software is more likely to rapidly evolve new features and bug fixes. Feature innovation in OSS does not depend on a single entity, e.g. a vendor, so the software product is easier to change and evolve. Open source also allows for code branching: the process by which a new community forms around an existing application code base to take the features, functions or underpinnings in a different direction than that of the original team. In the InfoCache scenario, the vibrant user communities of Java, MySQL and Apache were a definite advantage.

6.3. Maintainability
By its nature, OSS enhances maintainability. Not only is the software more available for maintenance, but active OSS communities participate in and cultivate a high level of interest in the software's viability. In many cases, the code team is also part of the user population and has a vested interest in the product's functionality and growth.

6.4. Testability, Reliability, Understandability
Because OSS code is readily available, testing is easier, both functionally and structurally. Since open source communities are very comfortable with full disclosure of bugs (as opposed to hiding them or treating them as features), software users can more easily track defects and their workarounds. The argument that thorough testing is impossible because of the absence or incompetence of software requirements (as is frequently the case in OSS projects) doesn't hold, because some testing approaches don't need such requirements. [9] For the same reason that testability is easier, reliability is also higher with OSS. OSS's open nature also brings understandability, since there are no black boxes.

6.5. Interoperability
Interoperability is much greater with OSS because projects almost always use shared components, and these components tend to comply with relevant international standards. Frequently these standards are the main form


of requirements satisfaction for all externally furnished (including open source) components available.

7. ADOPTION FACTORS AND EVALUATION CRITERIA OF OSS
Here are five distinct adoption factors and evaluation criteria for deciding whether adoption of OSS is truly beneficial to an organization. This section details the pros and cons of OSS associated with each factor. [2]

7.1. Cost Advantage
The OSS movement has always tried to downplay the fact that OSS generally doesn't require a licence fee, but studies have shown that lower costs have largely helped drive the use of OSS. Not all OSS is free of cost, however. Several enterprise Linux distributions are available, such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server. These products include additional services for enterprise customers, such as the certification of Linux for certain hardware, access to software updates and support services. To estimate the costs involved in introducing OSS, an organization can calculate the Total Cost of Ownership (TCO) in the environment in which the adoption will occur. Switching costs include the costs necessary to migrate data from the old system to the new one and the costs required to retrain personnel. As illustrated, OSS isn't always free, and its cost advantages over proprietary software might be limited or even absent. Therefore OSS's potentially lower cost isn't necessarily a sufficient condition for adoption. That said, in the case of InfoCache, since there was no change in the operating system (Windows in both cases) and the software development team was already trained in Java and MySQL, switching costs were minimal and licensing costs were greatly reduced.

7.2. Source Code Availability
The Open Source Software movement has always emphasized the advantages of source code availability. Proponents argue that making source code available lets everyone peer review the code, resulting in higher quality software.
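The TCO comparison described above can be sketched as simple arithmetic: one-off switching costs (migration, retraining) plus recurring costs (licence fees, support) over the evaluation horizon. Every figure in the sketch below is invented for illustration only; real inputs would come from the organization's own accounting:

```python
def tco(license_fee, support, migration, retraining, years=3):
    """Total cost of ownership over a horizon: one-off switching costs
    (migration, retraining) plus recurring annual costs (licence, support)."""
    return migration + retraining + years * (license_fee + support)

# Hypothetical 3-year comparison (all numbers invented for the sketch):
proprietary = tco(license_fee=12000, support=3000, migration=0, retraining=0)
open_source = tco(license_fee=0, support=5000, migration=8000, retraining=6000)
```

The point of the sketch is that a zero licence fee does not automatically win: switching and support costs can narrow, and in some environments reverse, the gap.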
It is also suggested that it gives users more choice and control because it

lets them read and modify the source code. Although many OSS advocates have proclaimed these advantages, several authors have questioned or cast doubts on them. Consider the following three scenarios.

7.2.1. Source code's availability is neither an advantage nor a disadvantage
This is because the organization never uses the source code. Organizations might have little need to modify the source code of highly mature infrastructure software such as Linux and Apache. In this scenario, OSS serves as a black box, and its advantages and disadvantages are comparable to those of proprietary packaged software. This was the scenario in InfoCache.

7.2.2. The organization considers source code availability an advantage, but doesn't use it to study or customize the program
Some organizations expressed a greater trust in OSS because of the source code's availability. They felt that the program was less likely to contain hidden features and that bugs in the software would be quickly fixed. Although organizations in this scenario might not actually use the source code, its availability gives them the option of doing so later.

7.2.3. OSS serves as a white box
Organizations can use the source code to study the software's inner workings or to adapt the software to their own needs. In this case, source code availability can be an important factor in the adoption decision. Organizations might also customize the software, e.g. customized webmail applications.

7.3. Maturity
Another important question is whether OSS is mature enough for use in organizations. Several OSS evaluation frameworks (Navica Open Source Maturity Model, www.navicasoft.com; Open Business Readiness Rating, www.openbrr.com; CapGemini Open Source Maturity Model, www.seriouslyopen.org; Qualification and Selection of Open Source Software, www.qsos.org) aim to help decision makers decide whether a particular OSS package is

Lekshmy Preeti Money Praseetha S Mohankumar D

mature enough to adopt. Reliability, i.e. the software's ability to function as expected under certain conditions, is one aspect of maturity. OSS that receives a high maturity rating can be considered reliable. If an organization is unfamiliar with OSS, it should restrict its use to software that is generally considered to be mature, such as Linux and Apache.
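Frameworks like those listed above typically reduce maturity to a weighted score over a set of criteria. A minimal sketch follows; the criteria, weights and scores are assumptions for illustration, not the actual rubric of any of the named models.

```python
# Illustrative weighted-criteria scoring, not the OSMM/OpenBRR/QSOS rubric.

def maturity_score(scores, weights):
    """Weighted average of per-criterion scores (each on a 1..10 scale)."""
    assert scores.keys() == weights.keys()
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

candidate = {"reliability": 9, "documentation": 6, "community": 8, "support": 7}
weights   = {"reliability": 4, "documentation": 2, "community": 2, "support": 2}
print(round(maturity_score(candidate, weights), 2))
```

Weighting reliability most heavily reflects the text's emphasis on reliability as the key aspect of maturity; a real assessment would take its weights from the chosen framework.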

7.4. Avoiding Vendor Lock-In
Organizations frequently adopt OSS to reduce vendor lock-in and become less dependent on their software vendors. Although using OSS can reduce vendor lock-in, choosing OSS won't make an organization fully independent of vendors. For instance, vendors such as SAP require customers to use Linux Enterprise versions. Products offered by OSS companies such as SourceLabs and SpikeSource are another situation in which customers might be dependent on an OSS vendor. Decision makers should therefore not adopt OSS simply to reduce their dependency on their vendor. Instead, they should investigate the degree to which the organization would depend on OSS vendors for services such as support or updates.

7.5. External Support Availability
External support availability for OSS is rated important. Support for OSS can take different forms. Some vendors offer support contracts, e.g. the enterprise versions of Linux distributions and the services offered by companies such as SourceLabs. Large software vendors such as IBM openly declare their commitment to OSS and offer various services to customers. Additionally, many independent consultancy firms will install, configure and maintain OSS systems.

Table 2 summarizes the claims and counterclaims of each of the five factors discussed above.

Table 2: Evaluation factors for adopting OSS

Factor: Source Code Availability
Merits/Claims: Source code availability leads to higher quality, enables customizations, provides more choice and control, and provides more trust in the software. Source code can be important when developing products based on OSS because it provides more insight.
Demerits/Counter-claims: Lack of knowledge to apply modifications. Lack of need to apply bug fixes. Source code might not matter to organizations.

Factor: Maturity
Merits/Claims: Linux/OSS is reliable; category killers (such as Linux and Apache) exist in some domains.
Demerits/Counter-claims: Linux/OSS is unreliable (claimed by advocates of proprietary software).

Factor: Vendor Lock-In
Merits/Claims: OSS avoids vendor lock-in.
Demerits/Counter-claims: Choice of Enterprise Linux might be mandated by an external vendor; still dependent on the OSS vendor for updates, services and support.

Factor: External Support
Merits/Claims: External support is important for OSS. Support for OSS is available from commercial vendors.
Demerits/Counter-claims: Type of support differs; support is lacking for some OSS.

A study of the above factors indicates that managers must take care while adopting OSS. Doing so for the wrong reason can harm the organization, while not adopting OSS might leave considerable opportunities unused.

8. WHEN AND HOW TO CONVERT TO OSS

8.1. Ways to Integrate Open Source Software

8.1.1. Green Field Adoption
Here, the OSS is implemented either as a distro (short for distribution) or as a standalone


application or as an integrated stack of related or integrated solutions. Distros incorporate the Linux kernel wrapped with a series of customized desktop and server maintenance applications and a load-and-install routine. A standalone solution can be any open source equivalent to commercial software (e.g. a CRM package), which could run either out-of-the-box or customized. Because the open source code is available, the customization could include integration with other enterprise software, as well as extensions to the internal functions of the application itself. A stack is a collection of open source applications that are interoperable and/or interconnected. It can be anything from the popular LAMP (Linux, Apache, mySQL, PHP) stack to a comprehensive business-in-a-box array of back office applications. Several vendors conduct ongoing testing, security monitoring, patch management and version upgrading of stacks and distros.

8.1.2. Brown Field Modernization
This is the process of understanding and evolving existing software assets within an architectural framework. Modernization can take many forms:
• Application portfolio management
• Application improvement
• Language-to-language conversion
• Platform migration
• Non-invasive application integration
• Services oriented architecture transformation
• Data architecture migration
• Application and data architecture consolidation
• Data warehouse deployment
• Application package selection and deployment
• Reusable software assets and component reuse
• Model driven architecture transformation

One approach to Brown field modernization is to use certain building blocks in various combinations to create the target architecture. The building blocks include:
1) Refactoring: a behaviour-preserving code transformation that modifies or cleans up source code without changing the platform, language or external behaviour. Refactoring might be the only action taken if the code is hard to maintain or change. However, refactoring might also be required to effectively wrap the application as a service and plug it into an orchestration.
2) Translating: this usually involves converting code into another language, frequently COBOL to Java. Automated translation tools are available, but without subsequent refactoring, conversion might wind up going from spaghetti COBOL to spaghetti Java. Translation can also involve different platforms or data models.
3) Wrapping: this surrounds existing applications with an interface layer to expose some or all of their functionality as a service, so that it is easily re-used in building new solutions. Sometimes wrapping is possible without refactoring the existing application; in other cases much refactoring is needed.
4) Replacement: this means eventually decommissioning an application and replacing it with an entirely new solution, either custom-designed or off the shelf. Even if the solution represents a novel vision of how to run the business, the enterprise must still know something about what the old application does.
5) Orchestration: this requires integrating OSS-enabled legacy applications or newly created custom or packaged applications, components or services with the help of processes implemented using business-process enactment and monitoring services.
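The "wrapping" building block can be illustrated in a few lines: an existing routine is left untouched while a new interface layer exposes it to new callers. The legacy function and facade names below are invented for illustration.

```python
# Sketch of the "wrapping" building block; names are hypothetical.

def legacy_discount(code: str, amount: float) -> float:
    # Stand-in for old, hard-to-change code that must not be rewritten.
    return amount * 0.9 if code == "GOLD" else amount

class PricingService:
    """New service facade; new solutions depend on this, not on the legacy code."""
    def quote(self, customer_class: str, amount: float) -> float:
        # Translate the new interface's vocabulary onto the legacy one.
        code = "GOLD" if customer_class == "premium" else "STD"
        return legacy_discount(code, amount)

svc = PricingService()
print(svc.quote("premium", 100.0))  # 90.0
```

As the text notes, such a facade can often be added without refactoring the wrapped code, but tangled legacy internals may force refactoring first.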

Fig.7. Brown field modernization involves reengineering legacy software so that one or more qualities are significantly improved

Achieving Brown field modernization by converting to OSS requires understanding and evolving assets within an architectural framework. In Fig. 7, business value increases with higher levels of abstraction. "Discover & understand" (Fig. 7) means gaining an understanding of the existing application and identifying the knowledge it contains. An


enterprise deals with each application individually as part of its overall strategy for modernizing its application portfolio. As Fig. 7 shows, different applications require different combinations of modernization scenarios. All six scenario categories in the figure require some understanding of the existing application, but the precise knowledge that an enterprise must discover depends on the particular scenario. For example, applications that need to be restructured in place might require only refactoring. Other applications that need to be wrapped as services require wrapping, but might also require refactoring and orchestration. Performing data migration to a relational database will require at least translation and refactoring. All these applications require the enterprise to extract knowledge from the existing "as is" application, a process that becomes the sixth building block: extraction. Each of the other five building blocks incorporates knowledge at a different abstraction level, from the bottom (implementation) to the top (business). Focusing on the knowledge abstraction level is crucial to the modernization because an organization seldom has the luxury of looking at each application in isolation and deciding what is best for just that one piece. Rather, the view must be of the overall architecture: each piece as part of the larger plan to manage the application portfolio. When using OSS for Brown field modernization, an organization must:
• let the enterprise architecture drive strategy and apply this strategy across the entire software portfolio;
• use different modernization tactics, i.e. apply a combination of the five modernization building blocks;
• plan the modernization journey by understanding existing software, so that the highest level of software quality can be achieved with the least effort.
Incorporating OSS into existing software is a viable strategy for improving certain software qualities.
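The scenario-to-building-block combinations just described can be captured as a lookup table. The scenario labels below are paraphrases of the text, not a standard taxonomy.

```python
# Scenario -> building-block combinations, paraphrased from the text.

BUILDING_BLOCKS = {
    "restructure in place": {"refactoring"},
    "wrap as service": {"wrapping", "refactoring", "orchestration"},
    "migrate data to relational db": {"translation", "refactoring"},
    "replace application": {"replacement"},
}

def plan(scenario):
    # Every scenario also needs knowledge extraction from the "as is" system.
    blocks = BUILDING_BLOCKS[scenario] | {"extraction"}
    return sorted(blocks)

print(plan("wrap as service"))
```

The union with "extraction" mirrors the text's point that all six scenario categories require some understanding of the existing application.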
Both the Green Field and Brown Field approaches can enhance the quality of any enterprise software portfolio.

9. CONCLUSION
Open Source Software is an inevitable evolution that can fulfil software engineering's basic needs. It is increasingly important that software engineers become proficient regarding the advantages and constraints associated with specific open source components, as well as legal issues related to licenses. They should not take the widely claimed advantages or disadvantages of open source software for granted, but rather should investigate how each of these claims could manifest itself in an organization-specific context.

REFERENCES
[1] A. Gold, T. Costello and P. Laplante, "Open Source Software: Is It Worth Converting?", IT Professional, July 2007.
[2] J. Verelst, H. Manneet and K. Ven, "Should You Adopt Open Source Software?", IEEE Software, May 2008.
[3] C. Ebert, "Open Source Software in Industry", IEEE Software, May 2008.
[4] B.A. Calloni, J.F. McDowen and R. Stanley, "Open Source Software: Free Isn't Exactly Cheap!", AIAA 2005-7108.
[5] S. Hissam, C.B. Weinstock, D. Plakish and J. Asund, "Perspectives on Open Source Software", November 2001, CMU/SEI-2001-TR-019, ESC-TR-2001-019.
[6] http://mil-oss.org/learn-more/oss-licensingoverview
[7] E. Raymond, The Cathedral and the Bazaar, http://www.catb.org/~esr/writings/cathedralbazaar/
[8] J.-H. Hoepman and B. Jacobs, "Increased Security Through Open Source", Communications of the ACM, Vol. 50, No. 1, 2007, pp. 79-83.
[9] A. Elcock and P. Laplante, "Testing without Requirements", Innovations in Systems and Software Engineering: A NASA Journal, Vol. 2, Dec. 2006, pp. 137-145.

Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

More Properties of L-Intuitionistic Fuzzy or L-Vague Substructures of a Ring

Vasanti Gopal1    Nistala V.E.S. Murthy2

1 Aditya Institute of Technology and Management, Tekkali, Andhra Pradesh
2 Department of Computer Science and Systems Engineering, Andhra University, Vishakapatnam, Andhra Pradesh.

ABSTRACT
The aim of this paper is basically to study some of the standard properties of the intuitionistic L-fuzzy (inverse) images of intuitionistic L-fuzzy sub rings (left ideals, right ideals, ideals) under a crisp map. Also, we study some properties of L-if/v-quotient sub rings.
Keywords: Intuitionistic L-fuzzy or L-vague sub ring, intuitionistic L-fuzzy image, intuitionistic L-fuzzy inverse image.
A.M.S. subject classification: 08A72, 03F55, 06C05, 16D25.

1 INTRODUCTION
The notion of a fuzzy set was introduced by Zadeh, and since then this concept has been applied to various algebraic structures. The idea of "Intuitionistic Fuzzy Set" was first introduced by Atanassov [1] as a generalization of the notion of fuzzy set. Later on, in 1984, Atanassov and Stoeva [2] further generalized the notion of intuitionistic fuzzy subset to L-intuitionistic fuzzy subset, where L is any complete lattice with a complete order reversing involution N. Thus an L-intuitionistic fuzzy subset A of a set X is a pair (μ_A, ν_A), where μ_A, ν_A : X → L are such that μ_A ≤ Nν_A. Let us recall that a complete order reversing involution is a map N : L → L such that (1) N0_L = 1_L and N1_L = 0_L; (2) α ≤ β implies Nβ ≤ Nα; (3) NNα = α; (4) N(∨_{i∈I} α_i) = ∧_{i∈I} Nα_i and N(∧_{i∈I} α_i) = ∨_{i∈I} Nα_i. Interestingly, the same notion of intuitionistic fuzzy subset of a set was also introduced by Gau and Buehrer [7] in 1993, under the different name of Vague subset.

Thus, whether we call it an intuitionistic fuzzy subset of a set, or an if-subset of a set for short, or a vague subset of a set, they are one and the same. Instead of using long phrases like intuitionistic fuzzy subset or vague subset, from here onwards we use the phrase if-subset. Obviously, if/v-subset simply means intuitionistic fuzzy/vague subset. In 1982, Liu [14] introduced the concepts of fuzzy ring and fuzzy ideal. In 1985, Ren [13] studied the notions of fuzzy ideal and fuzzy quotient ring. Fuzzy rings and fuzzy ideals in the sense of Liu and Ren were actually an extension of Rosenfeld's fuzzy group, obtained by starting with an ordinary ring and then defining a fuzzy sub ring based on the ordinary operations of the given ring. Coming to generalizations of substructures of rings to if/v-subsets, we briefly mention the following papers: In 2003, Banerjee-Basnet [6] introduced and studied the notions of if/v-sub rings and if/v-ideals of a ring. The same year, Hur-Kang-Song [8] introduced and studied the notion of if/v-sub ring of a ring. Further studies on if/v-ideals, if/v-prime/maximal ideals, if/v-nil radicals etc. were made in Hur et al. [9], Jun-Ozturk-Park [11], Wang-Lin [19] etc. In 2008, Palaniappan, Arjunan and Palanivelrajan [12] introduced the concept of homomorphism and anti-homomorphism in intuitionistic L-fuzzy sub rings and proved some results. Meena and Thomas [16] studied some properties of L-if/v-sub rings and L-if/v-ideals, and also introduced the notion of quotient ring. In this paper, we make an exclusive and somewhat detailed study of the usual algebraic


properties of L-if/v-sub structures and their L-if/v-(inverse) images under a crisp map.

For any set X, let the set of all L-if-subsets of X be denoted by A_L(X). By defining, for any pair of if-subsets A = (μ_A, ν_A) and B = (μ_B, ν_B) of X, A ≤ B iff μ_A ≤ μ_B and ν_B ≤ ν_A, A_L(X) becomes a complete infinitely distributive lattice, provided L is also a complete infinitely distributive lattice. In this case, for any family (A_i)_{i∈I} of L-if-subsets of X, ∪_{i∈I} A_i = (∨_{i∈I} μ_{A_i}, ∧_{i∈I} ν_{A_i}) and ∩_{i∈I} A_i = (∧_{i∈I} μ_{A_i}, ∨_{i∈I} ν_{A_i}), where for φ_i : X → L, (∨_{i∈I} φ_i)x = ∨_{i∈I} φ_i x and (∧_{i∈I} φ_i)x = ∧_{i∈I} φ_i x.

For any set X, one can naturally associate with X the L-if-subset (μ_X, ν_X) = (1_X, 0_X), where 1_X, or simply 1, is the constant map assuming the value 1 of L for each x ∈ X and 0_X, or simply 0, is the constant map assuming the value 0 of L for each x ∈ X; this turns out to be the largest element in A_L(X). Observe that then the L-if-empty subset ∅ of X gets naturally associated with the L-if-subset (μ_∅, ν_∅) = (0, 1), which turns out to be the least element in A_L(X). Also, for any φ : X → L, both (φ, Nφ), where Nφ : X → L is defined by (Nφ)(x) = N(φ(x)) for each x ∈ X, and (Nφ, φ) define L-if/v-subsets of X, because N is an involution on L. Let A = (μ_A, ν_A) be an L-if-subset of X. Then, since N is an order reversing involution, it turns out that (ν_A, μ_A) is also an L-if-subset of X; thus for any L-if-subset A = (μ_A, ν_A), the L-if-complement of A, denoted by A^c, is defined by (ν_A, μ_A). Further, for any pair A, B of L-if/v-subsets of X, we define B/A to be B ∩ A^c; observe that A^c = X/A = X ∩ A^c.

Throughout this paper, whenever P is an L-if/v-subset of a set X, μ_P and ν_P always denote the membership and non-membership functions of the L-if/v-subset P respectively. Also, we frequently use the standard conventions ∨∅ = 0_L and ∧∅ = 1_L.

2. MAIN RESULTS
In what follows, we first give some definitions and statements, and then equivalent statements which are quite frequently used in some later propositions without explicit mention. Then analogues of some crisp theoretic results are established.

Throughout this section, R is an arbitrary but fixed ring with respect to some addition and multiplication.

Definitions and Statements 2.1:

(a) Let A, B be a pair of L-if/v-subsets of a multiplicative set R, by which we mean simply a set R with a binary operation called multiplication. Let C be defined by μ_C x = ∨_{x=yz} {μ_A y ∧ μ_B z} and ν_C x = ∧_{x=yz} {ν_A y ∨ ν_B z}, for each x ∈ R. Then C is an L-if/v-subset of R, as follows: Let x = yz, y, z ∈ R. Then μ_A y ∧ μ_B z ≤ Nν_A y ∧ Nν_B z = N(ν_A y ∨ ν_B z) ≤ N(∧_{x=yz} (ν_A y ∨ ν_B z)) = Nν_C x, for all y, z ∈ R such that x = yz. Now it follows that μ_C ≤ Nν_C.

(b) For any pair of L-if/v-subsets A, B of R, the L-if/v-subset C defined as in (a) above is called the L-if/v-product of A by B and is denoted by A ⊙ B.

(c) (i) Let A, B be a pair of L-if/v-subsets of an additive set R. Then, as in (a) and (b) above, the L-if/v-sum of A by B exists and is denoted by A ⊕ B. Thus, μ_{A⊕B} x = ∨_{x=y+z} {μ_A y ∧ μ_B z} and ν_{A⊕B} x = ∧_{x=y+z} {ν_A y ∨ ν_B z}, for each x ∈ R.
(ii) Further, when the binary operation addition


is abelian, A  B = B  A . (d) For any L -if/v-subset A of an inversed set R , by which we mean a set R with a unary operation '-', the L -if/v-inverse of A , denoted by  A , defined by (   A ,   A ) is in fact an L -if/v-subset of R , where for each x  R  A (x) =  A ( x) and   A (x) =

 A (  x) .

complete lattice where in for all subsets ( i )iI , (  j ) jJ and elements  ,  , the following hold: (1)   ( jJ  j ) =  jJ (   j ) , (2)   (iI  i ) = iI (    i ) . Consequently, in a complete infinite distributive lattice, for all subsets ( i )iI and

(  j ) jJ , the following are true:

(e) For any y  R and for any pair  ,  of

(3)  iI  jJ ( i   j ) =

L , the L -if/v-point of R , denoted by y ,  ,

( iI  i )  ( jJ  j ) and

is defined by the L -if/v-subset y ,  = (

(4) iI  jJ ( i   j ) =









 y ,  y ) where  y (x) =  ,  y (x) =  



when x = y and  y (x) =  y (x) = 0 when x  y . (f) An L -if/v-subset A of R is called an L-if/v-sub ring of R iff: (1)  A (xy )

 A (xy )

x, y  R .



 A (x)

  A (x)

  A ( y) and   A ( y ) , for each

(2)  A ( x  y)

  A (x)   A ( y) and  A ( x  y)   A (x)   A ( y) , for each x, y  R . (3)  A ( x)   A (x) and  A ( x)   A (x) , for each x  R . (g) For any L -if/v-sub ring A of a ring R , {x  R/ A ( x) =  A (0) A = and

 A ( x) =  A (0)} and A = {x  R/ A ( x) > 0 and  A (x) < 1 } . (h) A complete lattice L is strongly regular iff 0 is  - prime or  > 0,  > 0 implies    > 0 and 1 is  - prime or  < 1,  < 1 implies    < 1. (i) For any L -if/v-subset A of R and for any  ,   L , the ( ,  ) -level subset of A, denoted by A ,  , is defined by A ,  =

{g  R/ A g   , A g   } . (j) Some times for us L is a complete infinite distributive lattice, by which we mean a

(iI  i )  ( jJ  j ) . Let X , Y be a pair of sets and let f : X  Y be a map. Let A = (  A , A ) and B = (  B , B ) be if/v-subsets of X and Y respectively. (k)Let X , Y be a pair of sets and let f : X

 Y be a map. Let A = (  A , A ) and B = (  B ,  B ) be L-if/v-subsets of X and Y respectively. (i) Let  D , D : Y  L be defined by  D y

  A f 1 y and  D y =  A f 1 y y  Y . Then D is a well defined L-if/v-subset of Y . =

(ii) Let X, Y, f and A be as above. Then the L-if/v-subset D of Y defined as in (i) above is called the L-intuitionistic fuzzy/vague image of A under f, or simply the L-if/v-image of A under f. (iii) Let μ_C, ν_C : X → L be defined by μ_C x = μ_B fx and ν_C x = ν_B fx, for each x ∈ X. Then C is a well defined L-if/v-subset of X. (iv) Let X, Y, f and B be as above. Then the L-if/v-subset C of X defined as in (iii) above is called the L-intuitionistic fuzzy/vague inverse image of B under f, or simply the L-if/v-inverse image of B under f.

(l) Clearly, for each y ∈ Y, μ_{fX} y = ∨ 1_X f⁻¹ y = χ_{fX} y, or μ_{fX} = χ_{fX}, and ν_{fX} y = ∧ 0_X f⁻¹ y = Nχ_{fX} y, or ν_{fX} = Nχ_{fX}, implying fX = (χ_{fX}, Nχ_{fX}).

(m) For any L-if/v-sub ring A of a ring R: (1) A is an L-if/v-left-ideal of R iff μ_A(xy) ≥ μ_A(y) and ν_A(xy) ≤ ν_A(y) for each x, y ∈ R. (2) A is an L-if/v-right-ideal of R iff μ_A(xy) ≥ μ_A(x) and ν_A(xy) ≤ ν_A(x) for each x, y ∈ R. (3) A is an L-if/v-ideal of R iff A is both an L-if/v-right-ideal and an L-if/v-left-ideal of R, or (μ_A(xy) ≥ μ_A(x) and μ_A(xy) ≥ μ_A(y)), or (μ_A(xy) ≥ μ_A(x) ∨ μ_A(y) and ν_A(xy) ≤ ν_A(x) ∧ ν_A(y)). (4) The set of all L-if/v-cosets of R, denoted by R/A, whenever A is an L-if/v-ideal of R, is called the L-if/v-quotient set of R by A.
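For a finite X and Y with L = [0, 1], the image and inverse image constructions of (k) above can be sketched directly; the sets, the map and the membership values below are invented for illustration, and the empty-preimage cases use the conventions ∨∅ = 0 and ∧∅ = 1.

```python
# Finite sketch of definitions (k)(i) and (k)(iii) with L = [0, 1].

def image(A, f, Y):
    """L-if/v-image: mu_D(y) = sup of mu_A over f^{-1}(y), nu_D(y) = inf of nu_A.
    Empty preimages follow the conventions sup {} = 0 and inf {} = 1."""
    mu_A, nu_A = A
    D = {}
    for y in Y:
        pre = [x for x in mu_A if f[x] == y]
        D[y] = (max((mu_A[x] for x in pre), default=0.0),
                min((nu_A[x] for x in pre), default=1.0))
    return D

def inverse_image(B, f):
    """L-if/v-inverse image: mu_C(x) = mu_B(f(x)), nu_C(x) = nu_B(f(x))."""
    mu_B, nu_B = B
    return {x: (mu_B[fx], nu_B[fx]) for x, fx in f.items()}

f = {"a": 1, "b": 1, "c": 2}                      # a crisp map X -> Y, Y = {1, 2, 3}
A = ({"a": 0.4, "b": 0.7, "c": 0.2}, {"a": 0.5, "b": 0.1, "c": 0.6})
print(image(A, f, [1, 2, 3]))  # y = 3 has empty preimage -> (0.0, 1.0)
```

Note how the membership value of the image takes a supremum while the non-membership value takes an infimum, exactly mirroring (k)(i).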

The following Lemma, which provides alternative equivalent statements for some of the above definitions and statements, is quite useful and is frequently used without an explicit mention of it in several proofs in later sections.

Lemma 2.2 Let A, B, (A_i)_{i∈I} be L-if/v-subsets of a ring R. Let α = ∨ μ_A R, β = ∧ ν_A R and y_{α,β} = (μ_{y_α}, ν_{y_β}). Then the following are true:
1. μ_{A⊕B}(x) = ∨_{y∈R} (μ_A(y) ∧ μ_B(−y+x)) = ∨_{y∈R} (μ_A(x−y) ∧ μ_B(y)) and ν_{A⊕B}(x) = ∧_{y∈R} (ν_A(y) ∨ ν_B(−y+x)) = ∧_{y∈R} (ν_A(x−y) ∨ ν_B(y)), for each x ∈ R. In particular, μ_{A⊕B}(x−y) = ∨_{z∈R} (μ_A(x−z) ∧ μ_B(z−y)) and ν_{A⊕B}(x−y) = ∧_{z∈R} (ν_A(x−z) ∨ ν_B(z−y)).
2. A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C, whenever L is a complete infinite distributive lattice.
3. A ⊕ y_{α,β} = (μ_A ⊕ μ_{y_α}, ν_A ⊕ ν_{y_β}), where (μ_A ⊕ μ_{y_α})x = μ_A(x−y) and (ν_A ⊕ ν_{y_β})x = ν_A(x−y), for each x, y ∈ R. In particular, A ⊕ 0_{α,β} = A.
4. −(−A) = A.
5. A ⊆ −A iff −A ⊆ A iff A = −A.
6. A ⊆ B iff −A ⊆ −B.
7. −(∪_{i∈I} A_i) = ∪_{i∈I} (−A_i).
8. −(∩_{i∈I} A_i) = ∩_{i∈I} (−A_i).
9. −(A ⊕ B) = (−A) ⊕ (−B).
10. −(A ⊙ B) = (−A) ⊙ B.
11. g_{α,β} ⊙ h_{γ,δ} = (gh)_{α∧γ, β∨δ}.
12. g_{α,β} ⊕ h_{γ,δ} = (g+h)_{α∧γ, β∨δ}.

Lemma 2.3 For any L-if/v-subset A of a ring R, the following are true:
(a) μ_A(x+y) ≥ μ_A(x) ∧ μ_A(y) and ν_A(x+y) ≤ ν_A(x) ∨ ν_A(y) implies: (1) μ_A(nx) ≥ μ_A(x) and (2) ν_A(nx) ≤ ν_A(x), for each x ∈ R and each integer n ≥ 1.
(b) A is an L-if/v-sub ring of R implies (1) μ_A(0) ≥ μ_A(x) and (2) ν_A(0) ≤ ν_A(x), for each x ∈ R.
(c) Whenever A is an L-if/v-(left, right) ideal, for each x ∈ R, μ_A(x) ≥ μ_A(1) and ν_A(x) ≤ ν_A(1). Consequently, then, μ_A(0) ≥ μ_A(x) ≥ μ_A(1) and ν_A(0) ≤ ν_A(x) ≤ ν_A(1).
(d) A is an L-if/v-sub ring iff (1) μ_{A⊙A} ≤ μ_A and ν_{A⊙A} ≥ ν_A, or equivalently A ⊙ A ⊆ A; (2) μ_{A⊕A} = μ_A and ν_{A⊕A} = ν_A, or equivalently A ⊕ A = A; and (3) μ_{−A} = μ_A and ν_{−A} = ν_A, or equivalently −A = A.
(e) A is an L-if/v-sub ring iff (i) μ_A(xy) ≥ μ_A(x) ∧ μ_A(y) and ν_A(xy) ≤ ν_A(x) ∨ ν_A(y) for each x, y ∈ R, and (ii) μ_A(x−y) ≥ μ_A(x) ∧ μ_A(y) and ν_A(x−y) ≤ ν_A(x) ∨ ν_A(y) for each x, y ∈ R.
(f) A is an L-if/v-sub ring (left ideal, right

ideal, ideal) of R implies:
1. A_* = {x ∈ R | μ_A(x) = μ_A(0), ν_A(x) = ν_A(0)} is a sub ring (left ideal, right ideal, ideal) of R;
2. A^* = {x ∈ R | μ_A(x) > 0, ν_A(x) < 1} is a sub ring (left ideal, right ideal, ideal) of R, whenever L is strongly regular.

Lemma 2.4 For any pair of rings R and S and for any crisp homomorphism f : R → S, the following are true: 1. A is an L-if/v-sub ring (left ideal, right ideal, ideal) of R implies f(A) is an L-if/v-sub ring (left ideal, right ideal, ideal) of S, whenever L is a complete infinite distributive lattice (and f is onto). 2. B is an L-if/v-sub ring (left ideal, right ideal, ideal) of S implies f⁻¹(B) is an L-if/v-sub ring (left ideal, right ideal, ideal) of R.

Lemma 2.5 For any family of L-if/v-sub rings (left ideals, right ideals, ideals) (A_i)_{i∈I} of a ring R, ∩_{i∈I} A_i is an L-if/v-sub ring (left ideal, right ideal, ideal) of R.

It may so happen that, even though each A_i is a non-empty L-if/v-sub ring, ∩_{i∈I} A_i may be the empty L-if/v-subset, which is trivially an L-if/v-sub ring of R, as shown in the following example:

Example 2.6 With A_n = (1/n, 1 − 1/n), ∩_{n=1}^∞ A_n = (0, 1) = ∅, the empty sub ring of R.

The union A ∪ B of L-if/v-sub rings A, B of a ring R need not be an L-if/v-sub ring, as shown in the following example:

Example 2.7 Let A = (χ_{2Z}, 1 − χ_{2Z}) and B = (χ_{3Z}, 1 − χ_{3Z}) be the I-if/v-sub rings of Z, the additive ring of integers, where I = [0, 1] is the closed interval of real numbers. Then A ∪ B = (χ_{2Z} ∨ χ_{3Z}, (1 − χ_{2Z}) ∧ (1 − χ_{3Z})) and μ_{A∪B}(5) = (χ_{2Z} ∨ χ_{3Z})(5) = 0 ∨ 0 = 0. If A ∪ B were an L-if/v-sub ring of R, then 0 = μ_{A∪B}(3+2) ≥ μ_{A∪B}(3) ∧ μ_{A∪B}(2) = ((χ_{2Z} ∨ χ_{3Z})(3)) ∧ ((χ_{2Z} ∨ χ_{3Z})(2)) = (0 ∨ 1) ∧ (1 ∨ 0) = 1 ∧ 1 = 1, a contradiction. So A ∪ B is not an L-if/v-sub ring of R.

Lemma 2.8 For any family of L-if/v-sub rings (left ideals, right ideals, ideals) (A_i)_{i∈I} of R, ∪_{i∈I} A_i is an L-if/v-sub ring (left ideal, right ideal, ideal) of R whenever (A_i)_{i∈I} is a sup/inf assuming chain.

Definitions 2.9 Let R be an additive group, not necessarily abelian. (1) For any L-if/v-subgroup A of the group R and for any g ∈ R, the L-if/v-subset g ⊕ A = (μ_{g⊕A}, ν_{g⊕A}) of R, where μ_{g⊕A}, ν_{g⊕A} : R → L are defined by μ_{g⊕A} x = μ_A(−g+x) and ν_{g⊕A} x = ν_A(−g+x), is called the L-if/v-left coset of A by g in R. The L-if/v-subset A ⊕ g = (μ_{A⊕g}, ν_{A⊕g}) of R, where μ_{A⊕g} x = μ_A(x−g) and ν_{A⊕g} x = ν_A(x−g), is called the L-if/v-right coset of A by g in R. Observe that μ_{g⊕A} x = μ_A(−g+x) ≤ Nν_A(−g+x) = Nν_{g⊕A} x and μ_{A⊕g} x = μ_A(x−g) ≤ Nν_A(x−g) = Nν_{A⊕g} x.
(2) The set of all L-if/v-left cosets of A in R is denoted by (R/A)_L. The set of all L-if/v-right cosets of A in R is denoted by (R/A)_R.
(3) (Later on we show, as in the crisp set up, that) the number of L-if/v-left cosets of A in R is the same as the number of L-if/v-right cosets of A in R, and this common number, denoted by (R : A), is called the L-if/v-index of A in R.

Theorem 2.10 For any L-if/v-subgroup A of R and g, h ∈ R, the following are true: (1) (g ⊕ A)_{μ_A 0, ν_A 0} = g + A_{μ_A 0, ν_A 0} and (A ⊕ g)_{μ_A 0, ν_A 0} = A_{μ_A 0, ν_A 0} + g. (2) g ⊕ A = h ⊕ A ⇔ g + A_* =

h + A_*. (3) A ⊕ g = A ⊕ h ⇔ A_* + g = A_* + h. (4) When R is abelian, clearly, g ⊕ A = A ⊕ g for all g in R.

Corollary 2.11 For any L-if/v-subgroup A of a group R, the following are true: (1) The number of L-if/v-left (right) cosets of A in R is the number of left (right) cosets of A_* in R. (2) (R : A) = (R : A_*).

Definitions 2.12 (1) For any L-if/v-sub ring A of a ring R and for any r ∈ R, the L-if/v-subset r ⊕ A = (μ_{r⊕A}, ν_{r⊕A}) of R, where μ_{r⊕A}, ν_{r⊕A} : R → L are defined by μ_{r⊕A} x = μ_A(x−r) and ν_{r⊕A} x = ν_A(x−r), is called the L-if/v-coset of A by r in R. Observe that μ_{r⊕A} x = μ_A(x−r) ≤ Nν_A(x−r) = Nν_{r⊕A} x. (2) The set of all L-if/v-cosets of A in R is denoted by (R/A).

Definition 2.13 Let A be an L-if/v-sub ring of a ring R. Then + and · on the set of L-if/v-subsets R/A = {x ⊕ A | x ∈ R} are defined as follows: (i) (r ⊕ A) + (s ⊕ A) = (r + s) ⊕ A, for each r, s ∈ R. (ii) (r ⊕ A) · (s ⊕ A) = rs ⊕ A, for each r, s ∈ R.

Observe that + and · on R/A = {x ⊕ A | x ∈ R} are well defined, as follows: Let x, y, u, v ∈ R and let x ⊕ A = u ⊕ A and y ⊕ A = v ⊕ A. Then by 2.15, x + A_* = u + A_* and y + A_* = v + A_*, or x − u, y − v ∈ A_*. Hence (i) x + y − (u + v) = x − u + y − v ∈ A_*, or (x + y) + A_* = (u + v) + A_*. Again by 2.15, (x + y) ⊕ A = (u + v) ⊕ A. (ii) Now, since A is an L-if/v-ideal, by 2.20, A_* is a crisp ideal, so xy − uv = (x − u)y + u(y − v) ∈ A_*, or xy + A_* = uv + A_*. Again by 2.15, (xy) ⊕ A = (uv) ⊕ A.

Lemma 2.14 For any L-if/v-sub ring A of a ring R and for any x, y ∈ R such that x ⊕ A = y ⊕ A, μ_A(x) = μ_A(y) and ν_A(x) = ν_A(y).

Theorem 2.15 For any L-if/v-ideal A of a ring R: 1. (R/A, +, ·) is a ring. 2. R/A ≅ R/A_*.

Theorem 2.16 For any L-if/v-sub ring B of a ring R and for any crisp ideal N of R, the L-if/v-subset C : R/N → L, where μ_C(x + N) = ∨ μ_B(x + N) and ν_C(x + N) = ∧ ν_B(x + N) for each x ∈ R, is an L-if/v-sub ring of R/N when L is a complete infinite distributive lattice.

Definitions 2.17 For any L-if/v-sub ring B of a ring R and for any ideal N of R, the L-if/v-sub ring C : R/N → L, where L is a complete infinite distributive lattice, defined by μ_C(x + N) = ∨ μ_B(x + N) and ν_C(x + N) = ∧ ν_B(x + N) for each x ∈ R, is called the L-if/v-quotient sub ring of R/N relative to B and is denoted by B/N. In other words, when N is an ideal of R, B is any L-if/v-sub ring of R, and L is a complete infinite distributive lattice, B/N : R/N → L is defined by μ_{B/N}(r + N) = ∨ μ_B(r + N) and ν_{B/N}(r + N) = ∧ ν_B(r + N), for each r ∈ R.

For any pair of L-if/v-sub rings A and B of a ring R such that A ⊆ B: (i) A is called an L-if/v-left ideal of B iff for each x, r ∈ R, μ_A(rx) ≥ μ_B(r) ∧ μ_A(x) and ν_A(rx) ≤ ν_B(r) ∨ ν_A(x). (ii) A is called an L-if/v-right ideal of B iff for each x, r ∈ R, μ_A(xr) ≥

More Properties of L-Intutionistic Fuzzy or L-Vague Substructures of a Ring

 B (r )   A (x) and  A (xr )   B (r )   A (x) . (iii) A is called an L -if/v-ideal of B iff A is both L -if/v-left ideal and L -if/v-right ideal of B .

Theorem 2.18 For any pair of L-if/v-sub rings A and B of a ring R such that A is an L-if/v-(left, right) ideal of B: 1. $A^+$ is a (left, right) ideal of $B^+$. 2. $A^*$ is a (left, right) ideal of $B^*$ whenever L is strongly regular.

Lemma 2.19 For any pair of L-if/v-sub rings A and B of a ring R such that A is an L-if/v-ideal of B, the L-if/v-subset C : $B^+/A^+$ → L defined by, for each b ∈ $B^+$, $\mu_C(b + A^+) = \bigvee \mu_B(b + A^+)$ and $\nu_C(b + A^+) = \bigwedge \nu_B(b + A^+)$, is an L-if/v-sub ring of $B^+/A^+$ whenever L is a strongly regular complete infinite distributive lattice.

Definition 2.20 For any pair of L-if/v-sub rings A and B of a ring R such that A is an L-if/v-ideal of B and L is a strongly regular complete infinite distributive lattice, the L-if/v-sub ring B/A : $B^+/A^+$ → L defined by $\mu_{B/A}(b + A^+) = \bigvee \mu_B(b + A^+)$ and $\nu_{B/A}(b + A^+) = \bigwedge \nu_B(b + A^+)$ for each b ∈ $B^+$ is called the L-if/v-quotient sub ring of B relative to A and is denoted by B/A.

Lemma 2.21 For any pair of L-if/v-sub rings A and B of a ring R such that A is an L-if/v-ideal of B, $(B/A)^+ = B^+/A^+$.

Theorem 2.22 For any L-if/v-(left, right) ideal A of R and an L-if/v-sub ring B of R, A ∩ B is an L-if/v-(left, right) ideal of

B.
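The membership inequalities that define an L-if/v-left ideal can be checked mechanically on a small example. The sketch below is illustrative only, not the paper's construction: it takes L = [0, 1], lets A be an L-if/v-subset of Z_6 concentrated on the crisp ideal {0, 3}, lets B be the whole ring with full membership, and verifies the left-ideal conditions pointwise (all values are hypothetical):

```python
# Sketch: check the L-if/v-left-ideal inequalities on Z_6 with L = [0, 1].
# A is concentrated on the crisp ideal {0, 3}; B is the whole ring.
R = range(6)  # the ring Z_6

mu_A = {x: 1.0 if x % 3 == 0 else 0.4 for x in R}
nu_A = {x: 0.0 if x % 3 == 0 else 0.5 for x in R}
mu_B = {x: 1.0 for x in R}
nu_B = {x: 0.0 for x in R}

def is_left_ideal(mu_A, nu_A, mu_B, nu_B, n=6):
    """Check mu_A(rx) >= mu_B(r) ^ mu_A(x) and nu_A(rx) <= nu_B(r) v nu_A(x)."""
    for r in range(n):
        for x in range(n):
            rx = (r * x) % n
            if mu_A[rx] < min(mu_B[r], mu_A[x]):
                return False
            if nu_A[rx] > max(nu_B[r], nu_A[x]):
                return False
    return True

print(is_left_ideal(mu_A, nu_A, mu_B, nu_B))  # True for this choice of A and B
```

Multiplying any x by any r keeps the product inside the crisp ideal whenever x is, so the inequalities hold for every pair.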


Vasanti Gopal Nistala V.E.S. Murthy



Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Resilient and Distributed Group Key Agreement Protocol with Authentication Security G. Raghu

D.Jamuna

K.Guru Raghavendra Reddy

K. Mahesh Kumar

Jayaprakash Narayan College of Engineering, Mahabubnagar, Andhra Pradesh, India

ABSTRACT Many applications in dynamic peer groups are becoming increasingly popular nowadays. There is a need for security services to provide group-oriented communication privacy and data integrity. To provide this form of group communication privacy, it is important that members of the group can establish a common secret key for encrypting group communication data. A secure distributed group key agreement and authentication protocol is required to handle this issue. Instead of performing individual re-keying operations, an interval-based approach to re-keying is adopted in the proposed scheme. The interval-based algorithms considered in this paper are the Batch algorithm and the Queue-batch algorithm. The interval-based approach provides re-keying efficiency for dynamic peer groups while preserving both the distributed and contributory properties. The performance of these interval-based algorithms under different settings, such as different join and leave probabilities, is analyzed. The Queue-batch algorithm performs the best among the interval-based algorithms. Keywords: Authentication, dynamic peer groups, group key agreement, re-keying, secure group communication, security 1. INTRODUCTION A group key agreement protocol is different from traditional centralized group key management protocols. Centralized protocols rely on a centralized key server to efficiently distribute the group key. An excellent body of work on centralized key distribution protocols exists. In those approaches, group members are arranged in a logical key hierarchy known as a key tree. Using the tree topology, it is easy to distribute the group key to members whenever there is any change in the group membership (e.g., a new

member joins or an existing member leaves the group). In the distributed key agreement protocols we consider, however, there is no centralized key server available. This arrangement is justified in many situations, e.g., in peer-to-peer or ad hoc networks where centralized resources are not readily available. Moreover, an advantage of distributed protocols over centralized protocols is the increase in system reliability, because the group key is generated in a shared and contributory fashion and there is no single point of failure. In the special case of a communication group having only two members, these members can create a group key using the Diffie–Hellman key exchange protocol [6]. In the protocol, the two members use a cyclic group of prime order with a generator. Each member generates a secret exponent, computes its public key by raising the generator to that exponent, and sends the public key to the other party. Since each member knows its own exponent, each can raise the other party's public key to its exponent and produce a common group key. Using the common group key, the two members can encrypt their data to prevent eavesdropping by intruders. To prevent a new user from reading past communications (backward confidentiality) and a departed user from reading future communications (forward confidentiality) [6], re-keying, which means renewing the keys associated with the nodes of the key tree, is performed. In this paper, we propose, based on the tree-based group Diffie–Hellman protocol [11], several group key agreement protocols for a dynamic communication group in which members are located in a distributed fashion and can join and leave the group at any time. 2. RELATED WORK Diffie and Hellman [6] proposed the first two-party single-round key agreement protocol. Joux proposed a single-round three-party key agreement protocol that uses bilinear pairings. Burmester and Desmedt [5] had
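The two-party exchange just described can be sketched in a few lines (toy parameters only; a real deployment would use a large prime and a vetted group):

```python
import secrets

# Toy two-party Diffie–Hellman exchange (tiny prime for illustration only).
p = 23          # prime modulus of the cyclic group
alpha = 5       # generator of the group

a = secrets.randbelow(p - 2) + 1   # member 1's secret exponent
b = secrets.randbelow(p - 2) + 1   # member 2's secret exponent

A = pow(alpha, a, p)   # member 1's public key, sent to member 2
B = pow(alpha, b, p)   # member 2's public key, sent to member 1

# Each side raises the other's public key to its own secret exponent.
key_1 = pow(B, a, p)
key_2 = pow(A, b, p)
assert key_1 == key_2  # both sides now hold the same group key
```

The three-argument `pow` performs modular exponentiation, which is the only primitive the protocol needs.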


proposed a multiparty two-round key agreement (BD) protocol using a ring structure of participants. These protocols assume only a passive adversary, who cannot control the channel, and justify their security on purely heuristic models. Burmester and Desmedt proved that their ring-structure-based group key agreement protocol is secure against a passive adversary in the standard model under the decision Diffie–Hellman (DDH) assumption. Several variations of the Diffie–Hellman and Joux protocols have been suggested to incorporate authentication, and a trial-and-error approach has been adopted to provide informal security. To achieve secure group communication, these schemes re-key at each join or leave event. Li et al. [13] and Yang et al. [13] then apply the periodic re-keying concept of Kronos [15] to the key tree setting. All the key-tree-based approaches require a centralized key server for key generation. Burmester and Desmedt [5] propose a computation-efficient protocol at the expense of high communication overhead. Steiner et al. [3] propose Cliques, in which every member introduces its key component into the result generated by its preceding member and passes the new result to its following member. Cliques is efficient in re-keying for leave or partition events, but imposes a high workload on the last member in the chain. Kim et al. [10] propose TGDH, which arranges keys in a tree structure. The setting of TGDH is similar to that of the One-Way Function Tree (OFT) scheme [16] except that TGDH uses Diffie–Hellman instead of one-way functions for the group key generation. Kim et al. [10] also suggest a variant of TGDH called STR, which minimizes the communication overhead by trading off computation complexity. All the above schemes are decentralized and hence avoid the single-point-of-failure problem of the centralized case, though they introduce high message traffic due to distributed communication.
Reference [11] considers re-keying at single join, single leave, merge, or partition events. Our work considers a more general case that consists of a batch of join and leave events. A comparison between centralized and decentralized re-keying is studied by Amir et al. [1].

In particular, Amir et al. [1] suggest a centralized key distribution scheme based on Cliques [3] and compare the performance of both schemes. In contrast, our work compares centralized and decentralized key management schemes adapted from a key tree setting. Rather than emphasizing re-keying efficiency, [3] focuses on security issues and develops authenticated group key agreement schemes based on the Burmester–Desmedt model, Cliques, and TGDH, respectively. 3. GROUP KEY ESTABLISHMENT The tree-based group Diffie–Hellman (TGDH) protocol [11] is used to establish the group key in a dynamic peer group. Each member maintains a set of keys, which are arranged in a hierarchical binary tree. A node ID v is assigned to every tree node. For a given node v, a secret (or private) key K_v and a blinded (or public) key BK_v are associated. All arithmetic operations are performed in a cyclic group of prime order p with generator α. Therefore, the blinded key of node v can be generated by BK_v = α^{K_v} mod p

(1)

Each leaf node in the tree corresponds to the individual secret and blinded keys of a group member. Every member holds all the secret keys along its key path, starting from its associated leaf node up to the root node. Therefore, the secret key held by the root node is shared by all the members and is regarded as the group key. The node ID of the root node is set to 0. Each non-leaf node v has two child nodes whose node IDs are given by 2v+1 and 2v+2. Based on the Diffie–Hellman protocol, the secret key of a non-leaf node v can be generated from the secret key of one child node of v and the blinded key of the other child node of v: K_v = (BK_{2v+1})^{K_{2v+2}} mod p = (BK_{2v+2})^{K_{2v+1}} mod p = α^{K_{2v+1} K_{2v+2}} mod p (2) Unlike the keys at non-leaf nodes, the secret key at a leaf node is selected by its corresponding group member through a secure pseudo-random number generator. Since the blinded keys are publicly known, every member can compute the keys along its key path to the root node based on its individual secret key.
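The node-ID convention (root 0, children 2v+1 and 2v+2) makes parent and key-path computations one-liners; a small sketch, with illustrative helper names:

```python
def parent(v):
    """Parent of node v under the 0-rooted ID scheme with children 2v+1, 2v+2."""
    return (v - 1) // 2

def key_path(v):
    """Node IDs on the key path from node v up to the root (node 0)."""
    path = [v]
    while v != 0:
        v = parent(v)
        path.append(v)
    return path

print(key_path(7))  # [7, 3, 1, 0]
```

A member sitting at leaf 7 therefore needs to renew exactly the keys at nodes 3, 1 and 0 above its own.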



Fig. 1. Key tree used in the tree-based group Diffie–Hellman protocol. To illustrate, consider the key tree in Fig. 1. Every member Mi generates its own secret key and all the secret keys along its path to the root node. For example, member M1 generates the secret key K7 and it can request the blinded key BK8 from M2, BK4 from M3, and BK2 from M4, M5, or M6. Given M1's secret key K7 and the blinded key BK8, M1 can generate the secret key K3. Given the blinded key BK4 and the newly generated secret key K3, M1 can generate the secret key K1. Given the secret key K1 and the blinded key BK2, M1 can generate the secret key K0 at the root. Any communication in the group can be encrypted based on the secret key K0, which is the group key. 4. SECURED GROUP KEY COMMUNICATION The secured group key authentication communication model comprises the following phases. A. TREE-BASED GROUP KEY IN DYNAMIC PEERS Tree-based group Diffie–Hellman is used to efficiently maintain the group key in a dynamic peer group with more than two members. Each member maintains a set of keys, which are arranged in a hierarchical binary tree. We assign a node ID to every tree node. For a given node, we associate a secret (or private) key and a blinded (or public) key.
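The derivation by member M1 described in Section 3 can be traced numerically with equations (1) and (2); the sketch below uses toy group parameters and arbitrary illustrative leaf secrets:

```python
p, alpha = 23, 5  # toy cyclic-group parameters (illustration only)

def blind(k):
    """Blinded (public) key of a node with secret key k: BK = alpha^k mod p."""
    return pow(alpha, k, p)

def combine(bk_sibling, k_own):
    """Secret key of a parent node from one child's blinded key and the
    other child's secret key: K = BK_sibling^K_own mod p (equation (2))."""
    return pow(bk_sibling, k_own, p)

# Leaf secrets chosen by the members at nodes 7 and 8 (illustrative values).
K7, K8 = 6, 15
BK8 = blind(K8)

# M1 derives the key of node 3 from its own K7 and the sibling's BK8.
K3 = combine(BK8, K7)
# The member at node 8 would derive exactly the same K3 the other way round.
assert K3 == combine(blind(K7), K8)
```

Repeating `combine` with BK4 and then BK2 walks M1 up the key path 7 → 3 → 1 → 0 to the group key K0.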

All arithmetic operations are performed in a cyclic group of prime order with a generator. Each leaf node in the tree corresponds to the individual secret and blinded keys of a group member. Every member holds all the secret keys along its key path starting from its associated leaf node up to the root node. Therefore, the secret key held by the root node is shared by all the members and is regarded as the group key. The secret key of a non-leaf node can be generated from the secret key of one child node and the blinded key of the other child node. The secret key at a leaf node is selected by its corresponding group member through a secure pseudo-random number generator. Since the blinded keys are publicly known, every member can compute the keys along its key path to the root node based on its individual secret key. To provide both backward confidentiality (i.e., joined members cannot access previous communication data) and forward confidentiality (i.e., departed members cannot access future communication data), re-keying is performed whenever there is any group membership change (join of a new member or departure of an existing member). Individual re-keying is performed after every single join or leave event. Before the group membership is changed, a special member called the sponsor is elected to be responsible for updating the keys held by the new member or the departed member. TGDH uses the convention that the rightmost member under the subtree rooted at the sibling of the join or leave node takes the sponsor role. The existence of a sponsor does not violate the decentralized requirement of the group key generation, since the sponsor does not add an extra contribution to the group key. B. INTERVAL-BASED DISTRIBUTED RE-KEYING Interval-based distributed algorithms significantly reduce the computation and communication costs of maintaining the group key.
The interval-based approach provides re-keying efficiency for dynamic peer groups while preserving both the distributed (i.e., no centralized key server is involved) and contributory (i.e., each member contributes to the resulting group key) properties. Interval-based re-keying maintains the re-keying frequency regardless of the dynamics of join and leave events, with a tradeoff of weakening both backward and forward confidentiality as a result of delaying the update of the group key.

Interval-based re-keying is used in order to eliminate the difficulties of individual re-keying, such as inefficiency and the out-of-sync problem. Interval-based re-keying is done through three approaches: the Rebuild algorithm, the Batch algorithm, and the Queue-batch algorithm.

The interval-based algorithms are developed based on the following assumptions. The group communication satisfies view synchrony, which defines reliable and ordered message delivery under the same membership view. Intuitively, when a member broadcasts a message under a membership view, the message is delivered to the same set of members viewed by the sender. Note that this view-synchrony property is essential not only for group key agreement, but also for reliable multipoint-to-multipoint group communication in which every member can be a sender. Since the interval-based re-keying operations involve nodes lying on more than one key path, more than one sponsor may be elected. Also, a renewed node may be re-keyed by more than one sponsor. Therefore, it is assumed that the sponsors can coordinate with one another such that the blinded keys of all the renewed nodes are broadcast only once.

a) REBUILD AND BATCH ALGORITHM The Rebuild algorithm minimizes the resulting tree height so that the re-keying operations for each group member can be reduced. At the beginning of every re-keying interval, the whole key tree is reconstructed with all existing members that remain in the communication group, together with the newly joining members. The resulting tree is a left-complete tree, in which the depths of the leaf nodes differ by at most one and the deeper leaf nodes are located at the leftmost positions. Rebuild is suitable for some cases, such as when the membership events are so frequent that we can directly reconstruct the whole key tree for simplicity, or when some members lose the re-keying information and the simplest way of recovery is to rebuild the key tree.

The Batch algorithm is based on the centralized approach, which is now applied to a distributed system without a centralized key server. Given the numbers of joins and leaves within a re-keying interval, we attach new group members to different leaf positions of the key tree in order to keep the key tree as balanced as possible.

b) QUEUE-BATCH ALGORITHM The Rebuild and Batch re-keying approaches perform all re-keying steps at the beginning of every re-keying interval. This results in a high processing load during the update instance and thereby delays the start of the secure group communication. Thus, a more effective algorithm, the Queue-batch algorithm, is proposed. It reduces the re-keying load by pre-processing the joining members during the idle re-keying interval. The Queue-batch algorithm is divided into two phases, namely the Queue-subtree phase and the Queue-merge phase. The first phase occurs whenever a new member joins the communication group during the re-keying interval; in this case, the new member is appended to a temporary key tree. The second phase occurs at the beginning of every re-keying interval, when the temporary tree (which contains all newly joining members) is merged into the existing key tree.

Pseudo-code of the Queue-subtree phase:
Queue-subtree(T')
1. if (a new member joins) {
2.   if (T' == NULL) /* no new member in T' */
3.     create a new tree T' with the only new member;
4.   else { /* there are new members in T' */
5.     find the insertion node;
6.     add the new member to T';
7.     select the rightmost member under the subtree rooted at the sibling of the joining node to be the sponsor;
8.     if (sponsor) /* sponsor's responsibility */
9.       re-key renewed nodes and broadcast new blinded keys;
10.  }
11. }


Pseudo-code of the Queue-merge phase:
Queue-merge(T, T', Ml, L)
1. if (L == 0) { /* there is no leave */
2.   add T' to either the shallowest node (which need not be the leaf node) of T such that the merge will not increase the resulting tree height, or the root node of T if the merge to any location will increase the resulting tree height;
3. } else { /* there are leaves */
4.   add T' to the highest leave position of the key tree T;
5.   remove the remaining L-1 leaving leaf nodes and promote their siblings;
6. }
7. elect members to be sponsors if they are the rightmost members of the subtrees rooted at the sibling nodes of the departed leaf nodes in T, or if they are the rightmost member of T';
8. if (sponsor) /* sponsor's responsibility */
9.   re-key renewed nodes and broadcast new blinded keys;
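The left-complete shape that Rebuild targets, and that Batch and Queue-merge try to preserve by choosing shallow insertion points, can be characterized purely by leaf depths; a small sketch, with an illustrative function name and assuming n ≥ 1 members:

```python
import math

def leaf_depths(n):
    """Leaf depths of a left-complete binary tree with n leaves: depths differ
    by at most one, and the deeper leaves occupy the leftmost positions."""
    if n == 1:
        return [0]
    k = math.floor(math.log2(n))
    extra = n - 2 ** k          # leaves that must sit one level deeper
    return [k + 1] * (2 * extra) + [k] * (2 ** k - extra)

print(leaf_depths(6))  # [3, 3, 3, 3, 2, 2]
# A full binary tree satisfies the Kraft equality exactly:
assert sum(2.0 ** -d for d in leaf_depths(6)) == 1.0
```

Minimizing the maximum leaf depth this way is what bounds the per-member re-keying cost after a Rebuild.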

Fig. 2. Example of the Queue-merge phase.

Analysis of the Queue-batch algorithm: The main idea of the Queue-batch algorithm is to exploit the idle re-keying interval to pre-process some re-keying operations. When we compare its performance with the Rebuild or Batch algorithms, we only need to consider the re-keying operations occurring at the beginning of every re-keying interval. When J = 0, Queue-batch is equivalent to Batch in the pure-leave scenario. For J > 0, the number of renewed nodes in Queue-batch during the Queue-merge phase is equivalent to that of Batch with J = 1. Thus, the expected number of renewed nodes is

5. PERFORMANCE EVALUATION ON SECURED GROUP KEY MANAGEMENT To reflect the latency of generating the latest group key for data confidentiality, we evaluate the performance of the interval-based algorithms in two aspects: mathematical analysis and simulation-based experiments. The mathematical analysis considers the complexity of the algorithms under the assumption that the key tree is completely balanced. Using simulations, we then study their performance in a more general setting. We also compare the performance of our interval-based algorithms with a centralized key distribution approach. The analysis of the three proposed algorithms is based on two performance measures, i.e., the number of exponentiation operations and the number of renewed nodes. The number of exponentiation operations gives a measure of the computation load in terms of node density versus the communication group's packet drop (Graph 1). A node is said to be renewed if it is a non-leaf node and its associated keys are renewed. The number of renewed nodes measures the communication cost, since the new blinded keys of the renewed nodes have to be broadcast to the whole group. Graph 1: Node Density vs. Packets Dropped. The existing key tree is assumed to be completely balanced prior to the interval-based re-keying event. Every existing member has the same leave probability. The computation of the blinded group key of the root node is counted in the blinded key computations. With this assumption, the number of blinded key computations simply equals the number of renewed nodes, provided that the blinded key of each renewed node is broadcast only once.
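The renewed-node count for a batch of membership changes can be computed directly as the union of the key paths of the changed leaves; a minimal sketch under the 2v+1/2v+2 node-ID scheme (the helper name is illustrative):

```python
def renewed_nodes(changed_leaves):
    """Non-leaf node IDs whose keys must be renewed when the given leaf
    nodes change: the union of all their ancestors up to the root (node 0)."""
    renewed = set()
    for v in changed_leaves:
        while v != 0:
            v = (v - 1) // 2   # parent of node v
            renewed.add(v)
    return renewed

# Leaves 7 and 8 share the ancestors 3, 1 and 0, so only three nodes renew.
print(sorted(renewed_nodes([7, 8])))  # [0, 1, 3]
```

Because changed leaves share ancestors near the root, batching several events renews far fewer nodes than handling each event individually, which is the effect the interval-based algorithms exploit.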



especially in case of high join and leave probabilities.

6. CONCLUSION The proposed model of this work provides distributed collaborative key agreement protocols for dynamic peer groups. The key agreement is performed in a setting in which there is no centralized key server to maintain or distribute the group key. We show that one can use the TGDH protocol to achieve such distributed and collaborative key agreement. To reduce the re-keying complexity, we propose an interval-based approach to carry out re-keying for multiple join and leave requests at the same time, with a tradeoff between security and performance.

Graph 2: No. of Nodes vs. Delay. Graph 2 compares the Batch and Queue-batch re-keying algorithms in terms of node density versus the group users' communication delay. It shows better behavior with increasing node density for the Queue-batch re-keying algorithm compared to the Batch re-keying model.

Fig. 3, given in the Appendix, illustrates the performance difference between individual re-keying and interval-based re-keying. We observe that Queue-batch performs better than individual re-keying.

In particular, the proposal showed that the Queue-batch algorithm can significantly reduce both computation and communication costs when membership events are highly frequent. The proposal also addresses both authentication and implementation for the interval-based key agreement algorithms. REFERENCES [1] Y. Amir, Y. Kim, C. Nita-Rotaru, J. L. Schultz, J. Stanton, and G. Tsudik, "Secure group communication using robust contributory key agreement," IEEE Trans. Parallel Distrib. Syst., vol. 15, no. 5, pp. 468–480, May 2004. [2] Y. Amir and J. Stanton, The Spread Wide Area Group Communication System. Johns Hopkins Univ., Baltimore, MD, CNDS-98-4, 1998. [3] G. Ateniese, M. Steiner, and G. Tsudik, "Authenticated group key agreement and friends," in Proc. 5th ACM Conf. Computer and Communication Security, Nov. 1998, pp. 17–26. [4] S. Blake-Wilson and A. Menezes, "Authenticated Diffie–Hellman key agreement protocols," in Proc. 5th Annu. Workshop on Selected Areas in Cryptography (SAC'98), 1998, vol. LNCS 1556, pp. 339–361. [5] M. Burmester and Y. Desmedt, "A secure and efficient conference key distribution system," in Proc. Advances in Cryptology—EUROCRYPT'94, 1995, vol. LNCS 950, pp. 275–286. [6] W. Diffie and M. Hellman, "New directions in cryptography," IEEE Trans. Inf. Theory, vol.


22, no. 6, pp. 644–654, 1976. [7] A. Fekete, N. Lynch, and A. Shvartsman, "Specifying and using a partitionable group communication service," in Proc. 16th ACM Symp. Principles of Distributed Computing (PODC), Aug. 1997, pp. 53–62. [8] C. G. Günther, "An identity-based key exchange protocol," in Proc. Advances in Cryptology—EUROCRYPT'89, 1989, vol. LNCS 434, pp. 29–37. [9] M. Just and S. Vaudenay, "Authenticated multiparty key agreement," in Proc. Advances in Cryptology—ASIACRYPT'96, 1996, vol. LNCS 1163, pp. 36–49. [10] Y. Kim, A. Perrig, and G. Tsudik, "Communication-efficient group key agreement," in Proc. 17th IFIP Int. Information Security Conf. (SEC'01), Nov. 2001, pp. 229–244. [11] Y. Kim, A. Perrig, and G. Tsudik, "Tree-based group key agreement," ACM Trans. Inf. Syst. Security, vol. 7, no. 1, pp. 60–96, Feb. 2004. [12] P. P. C. Lee, "Distributed and collaborative key agreement protocols with authentication and

implementation for dynamic peer groups," M.Phil. thesis, The Chinese University of Hong Kong, Jun. 2003. [13] X. S. Li, Y. R. Yang, M. G. Gouda, and S. S. Lam, "Batch re-keying for secure group communications," in Proc. 10th Int. World Wide Web Conf. (WWW10), Orlando, FL, May 2001, pp. 525–534. [14] A. Perrig, "Efficient collaborative key management protocols for secure autonomous group communication," in Int. Workshop on Cryptographic Techniques and E-Commerce (CrypTEC'99), Jul. 1999, pp. 192–202. [15] S. Setia, S. Koussih, and S. Jajodia, "Kronos: a scalable group re-keying approach for secure multicast," in Proc. IEEE Symp. Security and Privacy, May 2000, pp. 215–228. [16] A. T. Sherman and D. A. McGrew, "Key establishment in large dynamic groups using one-way function trees," IEEE Trans. Software Eng., vol. 29, no. 5, pp. 444–458, May 2003.

Appendix

Fig. 3. Comparison between individual re-keying and the Queue-batch algorithm. (a) Average number of exponentiations; (b) average number of renewed nodes.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Unsteady MHD Viscoelastic Fluid Flow along a moving Vertical Cone J. Prakash*

R. Sivaraj

B. Rushi Kumar

Fluid Dynamics Division, SAS, VIT University, Vellore, Tamil Nadu, India * Department of Mathematics, University of Botswana, Gaborone, Botswana

ABSTRACT The present investigation deals with the study of unsteady, MHD, free convective, chemically reacting, viscoelastic fluid (Walters' liquid-B model) flow over a moving vertical cone embedded in a non-Darcy porous medium with thermal radiation. The constitutive equations for the boundary layer regime are solved by an efficient finite difference scheme of the Crank–Nicolson type, and solutions for the velocity, temperature and concentration distributions are obtained. The graphical results are presented and the physical aspects are discussed to interpret the effect of the various significant parameters of the problem. The interplay of the inertia coefficient with the thermal radiation parameter plays an important role in viscoelastic fluid flow through a non-Darcy porous medium. Keywords: MHD, viscoelastic fluid, thermal radiation, non-Darcy porous medium 1. INTRODUCTION The use of non-Newtonian fluids has grown considerably because of their many applications in chemical process industries, food preservation techniques, petroleum production and power engineering. In view of these applications, the study of boundary layer behavior has been extended to viscoelastic fluids. Beard and Walters [1] introduced the boundary layer treatment for an idealized viscoelastic fluid. Some theoretical studies [2, 3] analyzed the flow and heat transfer characteristics of the Walters Liquid Model-B. In the last few decades, heat and mass transfer analysis of chemically reacting fluid flow through porous media under the influence of a magnetic field has attracted considerable attention from researchers, because such processes exist in many branches of science and technology [4, 5]. In

many practical situations, the porous medium is bounded by an impermeable wall, has higher flow rates, and reveals a non-uniform porosity distribution in the near-wall region, making Darcy's law inapplicable. So it is necessary to include non-Darcian effects in the analysis of convective transport in the porous medium. The inertia effect is expected to be important at a higher flow rate, and it can be accounted for through the addition of a velocity-squared term in the momentum equation, which is known as Forchheimer's extension. It is worth mentioning that the solution of non-Darcian forced-flow boundary layers is of great importance in many practical applications such as biomechanical problems, filtration, transpiration cooling and geothermal systems. In the literature, attention has been devoted to fluid flow through non-Darcy porous media [6, 7]. The hot walls and the working fluid usually emit thermal radiation within such systems. The role of thermal radiation is of major importance in the design of many advanced energy conversion systems operating at high temperature, and knowledge of radiative heat transfer becomes very important in nuclear power plants, gas turbines, and various propulsion devices for aircraft, missiles and space vehicles [8, 9]. Since no attempt has been made to analyze the unsteady, MHD, viscoelastic fluid flow over a moving vertical cone embedded in a non-Darcy porous medium with thermal radiation, the present study addresses this problem. The results of a parametric study on the fluid flow, heat and mass transfer characteristics are shown graphically and the physical aspects are discussed. 2. MATHEMATICAL FORMULATION We consider a two-dimensional unsteady flow and free convection heat and mass transfer of


an electrically conducting, incompressible, chemically reacting viscoelastic (Walter’s liquid-B model) fluid over a moving vertical cone embedded in non-Darcy porous medium. The x -axis is taken in the direction along the surface of the cone which is set to motion and the y -axis is taken perpendicular to it. The wall y  0 is maintained at constant temperature Tw and concentration Cw , higher than the ambient temperature T and ambient concentration C , respectively. The fluid is electrically conducting in the presence of an external transversely applied, magnetic field of strength B0 . The governing equations for this investigation are based on the balances of mass, linear momentum, energy and concentration species. Taking into consideration of these assumptions, the equations that describe the physical situation can be written in Cartesian frame of references, as follows:

$$\frac{\partial(ru)}{\partial x} + \frac{\partial(rv)}{\partial y} = 0 \qquad (2.1)$$

$$\frac{\partial u}{\partial t^*} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = \nu\frac{\partial^2 u}{\partial y^2} - K_0\frac{\partial^3 u}{\partial t^*\,\partial y^2} + g\beta_T\cos\alpha\,(T - T_\infty) + g\beta_C\cos\alpha\,(C - C_\infty) - \frac{\sigma_e B_0^2}{\rho}u - \frac{\nu}{K_1}u - \frac{C_b}{K_1}u^2 \qquad (2.2)$$

$$\frac{\partial T}{\partial t^*} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \frac{k}{\rho C_P}\frac{\partial^2 T}{\partial y^2} - \frac{1}{\rho C_P}\frac{\partial q_r}{\partial y} - Q_T(T - T_\infty) \qquad (2.3)$$

$$\frac{\partial C}{\partial t^*} + u\frac{\partial C}{\partial x} + v\frac{\partial C}{\partial y} = D\frac{\partial^2 C}{\partial y^2} - K_R(C - C_\infty)^l \qquad (2.4)$$

The appropriate initial and boundary conditions of the problem are

$$t^* \le 0:\; u = 0,\; v = 0,\; T = T_\infty,\; C = C_\infty \;\text{ for all } x, y$$
$$t^* > 0:\; u = u_p,\; v = 0,\; T = T_w,\; C = C_w \;\text{ at } y = 0$$
$$u = 0,\; T = T_\infty,\; C = C_\infty \;\text{ at } x = 0$$
$$u \to 0,\; T \to T_\infty,\; C \to C_\infty \;\text{ as } y \to \infty \qquad (2.5)$$

where r is the dimensional local radius of the cone, K0 is the dimensional Walters-B viscoelastic parameter, K1 is the dimensional porous permeability parameter, Cb is the drag coefficient, which is independent of viscosity, QT is the dimensional heat absorption coefficient, qr is the dimensional radiative heat flux, u_p is the dimensional velocity of the moving cone, σs is the Stefan-Boltzmann constant, λT is the heat source parameter and l is the order of the chemical reaction (a natural number).

The radiative heat flux qr under the Rosseland approximation (see Raptis [10]) has the form

$$q_r = -\frac{4\sigma_s}{3k_e}\frac{\partial T^4}{\partial y} \qquad (2.6)$$

Introducing the following non-dimensional quantities

$$X = \frac{x}{L},\quad Y = \frac{y}{L}\,Gr^{1/4},\quad R = \frac{r}{L},\quad r = x\sin\alpha,\quad U = \frac{uL}{\nu}\,Gr^{-1/2},\quad V = \frac{vL}{\nu}\,Gr^{-1/4},\quad t = \frac{\nu t^*}{L^2}\,Gr^{1/2},$$
$$\theta = \frac{T - T_\infty}{T_w - T_\infty},\quad \phi = \frac{C - C_\infty}{C_w - C_\infty},\quad Gr = \frac{g\beta_T(T_w - T_\infty)L^3}{\nu^2},\quad E = \frac{K_0}{L^2}\,Gr^{1/2},\quad M = \frac{\sigma_e B_0^2 L^2}{\rho\nu\,Gr^{1/2}},$$
$$\frac{1}{K} = \frac{L^2}{K_1\,Gr^{1/2}},\quad F_I = \frac{C_b L}{K_1},\quad \Pr = \frac{\mu C_P}{k},\quad F = \frac{k\,k_e}{4\sigma_s T_\infty^3},\quad Sc = \frac{\nu}{D},\quad \lambda_T = \frac{Q_T L^2}{\nu\,Gr^{1/2}},$$
$$K_r = \frac{K_R(C_w - C_\infty)^{l-1}L^2}{\nu\,Gr^{1/2}},\quad N = \frac{\beta_C(C_w - C_\infty)}{\beta_T(T_w - T_\infty)},\quad U_P = \frac{u_p L}{\nu}\,Gr^{-1/2} \qquad (2.7)$$

where E is the viscoelastic parameter, K is the permeability parameter of the porous medium, FI is the local inertia coefficient, Gr is the thermal Grashof number, N is the buoyancy ratio parameter, M is the magnetic field parameter, Pr is the Prandtl number, F is the thermal radiation parameter, Sc is the Schmidt number, Kr is the chemical reaction parameter, α is the semi-vertical angle of the cone and UP is the velocity of the moving cone.

In view of equation (2.7), the basic field equations (2.1)-(2.4) can be expressed in non-dimensional form as

$$\frac{\partial(RU)}{\partial X} + \frac{\partial(RV)}{\partial Y} = 0 \qquad (2.8)$$

$$\frac{\partial U}{\partial t} + U\frac{\partial U}{\partial X} + V\frac{\partial U}{\partial Y} = \frac{\partial^2 U}{\partial Y^2} - E\frac{\partial^3 U}{\partial t\,\partial Y^2} - \left(M + \frac{1}{K}\right)U - F_I U^2 + \theta\cos\alpha + N\phi\cos\alpha \qquad (2.9)$$

$$\frac{\partial \theta}{\partial t} + U\frac{\partial \theta}{\partial X} + V\frac{\partial \theta}{\partial Y} = \frac{1}{\Pr}\left(1 + \frac{4}{3F}\right)\frac{\partial^2 \theta}{\partial Y^2} - \lambda_T\,\theta \qquad (2.10)$$

$$\frac{\partial \phi}{\partial t} + U\frac{\partial \phi}{\partial X} + V\frac{\partial \phi}{\partial Y} = \frac{1}{Sc}\frac{\partial^2 \phi}{\partial Y^2} - K_r\,\phi^l \qquad (2.11)$$

The appropriate initial and boundary conditions become

$$t \le 0:\; U = 0,\; V = 0,\; \theta = 0,\; \phi = 0 \;\text{ for all } X, Y$$
$$t > 0:\; U = U_P,\; V = 0,\; \theta = 1,\; \phi = 1 \;\text{ at } Y = 0$$
$$U = 0,\; V = 0,\; \theta = 0,\; \phi = 0 \;\text{ at } X = 0$$
$$U \to 0,\; \theta \to 0,\; \phi \to 0 \;\text{ as } Y \to \infty \qquad (2.12)$$

The dimensionless local values of the skin friction (τ_X), Nusselt number (Nu_X) and Sherwood number (Sh_X) are given by

$$\tau_X = \left(\frac{\partial U}{\partial Y}\right)_{Y=0},\quad Nu_X = -X\left(\frac{\partial \theta}{\partial Y}\right)_{Y=0},\quad Sh_X = -X\left(\frac{\partial \phi}{\partial Y}\right)_{Y=0} \qquad (2.13)$$

The average skin friction, average Nusselt number and average Sherwood number in dimensionless form can be written as

$$\bar{\tau} = \int_0^1\left(\frac{\partial U}{\partial Y}\right)_{Y=0}dX,\quad \overline{Nu} = -\int_0^1\left(\frac{\partial \theta}{\partial Y}\right)_{Y=0}dX,\quad \overline{Sh} = -\int_0^1\left(\frac{\partial \phi}{\partial Y}\right)_{Y=0}dX \qquad (2.14)$$
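The combined diffusion coefficient in the thermal equation (2.10) follows from linearizing $T^4$ about the ambient temperature, a standard step under the optically thick Rosseland assumption; the sketch below takes the radiation parameter as $F = k\,k_e/(4\sigma_s T_\infty^3)$, the form consistent with the factor appearing in (2.10):

```latex
% Taylor-expand T^4 about T_\infty, neglecting higher-order terms:
T^4 \cong 4T_\infty^3 T - 3T_\infty^4
\;\Longrightarrow\;
\frac{\partial q_r}{\partial y}
  = -\frac{16\sigma_s T_\infty^3}{3k_e}\,\frac{\partial^2 T}{\partial y^2},
\qquad
-\frac{1}{\rho C_P}\frac{\partial q_r}{\partial y}
  = \frac{k}{\rho C_P}\cdot\frac{4}{3F}\,\frac{\partial^2 T}{\partial y^2},
\qquad
F \equiv \frac{k\,k_e}{4\sigma_s T_\infty^3}.
```

After non-dimensionalization, the thermal diffusion term therefore carries the combined coefficient (1/Pr)(1 + 4/(3F)).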

3. FINITE DIFFERENCE NUMERICAL SOLUTION
In order to solve the non-linear coupled equations (2.8)-(2.11) under the initial and boundary conditions (2.12), an implicit finite difference scheme of Crank-Nicolson type has been employed. The finite difference equations


corresponding to equations (2.8)-(2.11) are discretized using the Crank-Nicolson method and are given as equations (3.1)-(3.4) in the Appendix. The region of integration is a rectangle with X_max = 1 and Y_max = 22, where Y_max corresponds to Y = ∞ and lies well outside the momentum, thermal and mass-diffusion boundary layers. The maximum of Y was chosen as 22 after some preliminary numerical experiments, such that the boundary conditions (2.12) are satisfied within a tolerance of 10^-5. The mesh sizes have been fixed at ΔX = 0.05 and ΔY = 0.05, with time step Δt = 0.01. The steady-state solution is assumed to have been reached when the absolute difference between the values of the velocity U, temperature θ and concentration φ at two consecutive time steps is less than 10^-5 at all grid points. The scheme is unconditionally stable. The local truncation error is O(Δt² + ΔX² + ΔY²) and tends to zero as Δt, ΔX and ΔY tend to zero, so the Crank-Nicolson method is compatible. Stability and compatibility together ensure convergence.

4. RESULTS AND DISCUSSION
This section describes the behaviour of the parameters involved in the flow, heat and mass transfer characteristics. Numerical solutions of the problem are evaluated and the results are illustrated graphically in Figs. 1-12. Throughout the computations we employ t = 15, E = 0.005, M = 0.5, K = 1.0, FI = 0.5, N = 1.0, α = 20°, Pr = 0.71, F = 0.5, λT = 1.0, Sc = 0.6, Kr = 0.5 and UP = 0.5, unless otherwise stated. The velocity profiles for various values of E, FI, F, α and UP are depicted in Figs. 1-5, respectively. In Fig. 1, the viscoelastic parameter is varied from E = 0.0 (the Newtonian case) to E = 0.008 (weak elastic effects); an increase in the viscoelastic parameter increases the velocity near the cone surface (Y < 2), whereas the trend is reversed far from the cone. Figure 2 shows that an increase in the local inertia coefficient decreases the fluid velocity near the cone (Y < 3.5) but produces the opposite effect far from it. It is observed from Fig. 3 that an increase in the radiation parameter decreases the fluid velocity, because the momentum boundary layer thins as the thermal radiation parameter increases. Figure 4 shows that the fluid velocity decreases with increasing semi-vertical angle of the cone near the surface (Y < 3.5); the reverse trend is observed farther from the cone.

Fig. 1: Effect of the viscoelastic parameter (E = 0.0, 0.003, 0.005, 0.008) on the velocity distribution

Fig. 2: Effect of the local inertia coefficient (FI = 0.0, 0.3, 0.6, 1.0) on the velocity distribution



Fig. 3: Effect of the thermal radiation parameter (F = 0.0, 0.5, 1.0, 1.5) on the velocity distribution

Fig. 6: Effect of the Prandtl number (Pr = 0.1, 0.71, 3, 7) on the temperature distribution

Fig. 4: Effect of the semi-vertical angle of the cone (α = 15°, 30°, 45°, 60°) on the velocity distribution

Fig. 7: Effect of the local inertia coefficient (FI = 0.0, 0.3, 0.6, 1.0) on the temperature distribution

Fig. 5: Effect of the velocity of the moving cone (UP = 0.0, 0.5, 1.0, 1.5) on the velocity distribution

Fig. 8: Effect of the thermal radiation parameter (F = 0, 1, 2, 3) on the temperature distribution



It is noted from Fig. 5 that increasing the velocity of the moving cone tends to increase the fluid velocity near the cone (Y < 2) while decreasing it far from the cone. Figures 6-8 present the influence of Pr, FI and F on the temperature distribution, respectively. Figure 6 shows that increasing the Prandtl number decreases the heat transfer: large values of the Prandtl number correspond to weaker thermal diffusivity and hence a thinner thermal boundary layer. Physically, these behaviours are all qualitatively valid for air (Pr = 0.71) and water (Pr = 7.0).

Fig. 11: Effect of the velocity of the moving cone (UP = 0.0, 0.5, 1.0, 1.5) on the local skin friction distribution

Fig. 9: Effect of the Schmidt number (Sc = 0.22, 0.6, 0.78, 2.0) on the concentration distribution

Fig. 12: Effect of the thermal radiation parameter (F = 0.0, 0.5, 1.0, 1.5) on the local Nusselt number distribution

Fig. 10: Effect of the local inertia coefficient (FI = 0.0, 1.5, 2.5, 4.0) on the concentration distribution

Figure 7 shows that an increase in the local inertia coefficient increases the temperature profiles. Figure 8 shows that an increase in the radiation parameter raises the temperature near the cone (Y < 2) but

decreases the temperature far from the cone. The reason is that large values of the radiation parameter enhance conduction over radiation, which decreases the thickness of the thermal boundary layer. Figures 9-10 show the effects of Sc and FI on the concentration distribution, respectively. Figure 9 shows that an increase in the Schmidt number decreases the mass transfer


because an increase in the Schmidt number decreases the molecular diffusivity. The values of the Schmidt number are chosen to represent the species hydrogen (0.22), water vapour (0.6), ammonia (0.78), carbon dioxide (0.96) and hydrocarbon derivatives (2.0). An increase in the local inertia coefficient increases the fluid concentration, as seen in Fig. 10. Figure 11 shows that the magnitude of the local skin friction increases with increasing velocity of the moving cone. Figure 12 shows that an increase in the radiation parameter slightly increases the local Nusselt number near the leading edge of the stream-wise coordinate (X < 0.4); the reverse trend is observed for higher values of X.

5. CONCLUSIONS
Numerical solutions are obtained for the unsteady, chemically reacting viscoelastic fluid flow along a moving vertical cone embedded in a non-Darcy porous medium in the presence of a magnetic field, heat source and thermal radiation. The Crank-Nicolson method is used to solve the problem, and the results are evaluated numerically and displayed graphically. In the light of the present investigation, the following conclusions can be drawn:
- The velocity profiles increase with increasing E and UP near the cone, while the trend is reversed far from the cone.
- An increase in α and FI decreases the velocity near the cone; farther from the cone, the reverse trend is observed.
- The fluid temperature rises with increasing FI, while it falls with increasing Pr and F.
- Higher values of Sc decrease the mass transfer, whereas FI follows the opposite trend.
It is concluded that FI and F play a vital role in viscoelastic fluid flow through a non-Darcy porous medium, compared with the other physical parameters.

REFERENCES
[1] Beard D.W. and Walters K., Elastico-viscous boundary layer flows, I. Two-dimensional flow near a stagnation point, Proceedings of the Cambridge Philosophical Society, 60, 667-674, 1964.
[2] Mohiddin S.G., Prasad V.R., Varma S.V.K. and Beg O.A., Numerical study of unsteady free convective heat and mass transfer in a Walters-B viscoelastic flow along a vertical cone, International Journal of Applied Mathematics and Mechanics, 6(15), 88-114, 2010.
[3] Sivaraj R. and Rushi Kumar B., Unsteady MHD dusty viscoelastic fluid Couette flow in an irregular channel with varying mass diffusion, International Journal of Heat and Mass Transfer, 55, 3076-3089, 2012.
[4] Pal D. and Talukdar B., Buoyancy and chemical reaction effects on MHD mixed convection heat and mass transfer in a porous medium with thermal radiation and Ohmic heating, Communications in Nonlinear Science and Numerical Simulation, 15, 2878-2893, 2010.
[5] Prakash J., Sivaraj R. and Rushi Kumar B., Influence of chemical reaction on unsteady MHD mixed convective flow over a moving vertical porous plate, International Journal of Fluid Mechanics, 3(1), 1-14, 2011.
[6] Pal D. and Mondal H., Effect of variable viscosity on MHD non-Darcy mixed convective heat transfer over a stretching sheet embedded in a porous medium with non-uniform heat source/sink, Communications in Nonlinear Science and Numerical Simulation, 15, 1553-1564, 2010.
[7] Pal D. and Mondal H., MHD non-Darcy mixed convective diffusion of species over a stretching sheet embedded in a porous medium with non-uniform heat source/sink, variable viscosity and Soret effect, Communications in Nonlinear Science and Numerical Simulation, 17, 672-684, 2012.


[8] Kishore P.M., Rajesh V. and Verma S.V., Viscoelastic buoyancy-driven MHD free convective heat and mass transfer past a vertical cone with thermal radiation and viscous dissipation effects, International Journal of Applied Mathematics and Mechanics, 6(15), 67-87, 2010.
[9] Hayat T. and Qasim M., Influence of thermal radiation and Joule heating on MHD flow of a Maxwell fluid in the presence of thermophoresis, International Journal of Heat and Mass Transfer, 53, 4780-4788, 2010.
[10] Raptis A., Radiation and viscoelastic flow, International Communications in Heat and Mass Transfer, 26, 889-895, 1999.



APPENDIX

U in, j 1  U in1,1j  U in, j  U in1, j  U in, j 11  U in1,1j 1  U in, j 1  U in1, j 1 4X 

Vin, j1  Vin, j11  Vin, j

 Vin, j 1

2Y

U in, j 1  U in, j t





U in, j 1  U in1,1j

U in, j



4iX

 U in, j

 U in1, j

2X

  V U n i, j

n 1 n 1 n 1 n n  U i , j 1  2U i , j  U i , j 1  U i , j 1  2U i , j  2

E   1   t   FI

4

in, j 1  in, j t 





in, j 1  in1,1j

 in, j

t 1  Sc

 cos  

in, j 1  in, j 2

Vin, j

2X





2X 2  Y 

2

n i, j

 in, j 1



4Y  in, j 1

  V 

in, j 11  2in, j 1  in, j 11  in, j 1  2in, j 2

in, j 1  in, j

in, j 11  in, j 11  in, j 1  in, j 1

2  Y 

U in, j in, j 1  in1,1j  in, j  in1, j

n 1 n 1  U i, j  U i, j  M   K 2 

 N cos  

 

 in1, j



4Y

 U in, j 1

n 1 n 1 n 1 n n  i , j 1  2i, j  i , j 1  i , j 1  2i, j  2

1  4 1  Pr  3F 

in, j 1  in, j

n 1  U in, j 11  U in, j 1  U in, j 1 i , j 1

2  Y 

2 n 1 U i, j  2U in, j 1U in, j  U i2, nj

U in, j

(3.1)

U in, j 1  U in, j 11  U in, j  U in, j 1

 T

2

n 1  in, j 11  in, j 1  in, j 1 i , j 1

4Y  Kr

(3.3)

in, j 1  in, j

in, j 1  in, j

(3.2)

 (3.4)

2

where t is a dimensionless time-step, X and Y are the dimensionless finite difference grid size in the X and Y directions, respectively. n designates the grid point in the time variable t , i and j designate the grid points along the X and Y directions, respectively.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Modeling Biochemical Oxygen Demand in a river with a storage zone along its banks

Nitash Kaushik¹, Babita Tyagi², Girija Jayaraman³

¹Department of Mathematics, Accurate Institute of Management and Technology, Greater Noida, Gautam Budh Nagar, Uttar Pradesh, INDIA.
²Department of Mathematics, Galgotia's University, Greater Noida, Gautam Budh Nagar, Uttar Pradesh, INDIA.
³Centre for Atmospheric Sciences, Indian Institute of Technology Delhi, INDIA.

ABSTRACT
Mathematical models have long been accepted as a useful tool to assess and manage water quality in water bodies. Dissolved Oxygen (DO), a surrogate variable for the general health of an aquatic ecosystem, and Biochemical Oxygen Demand (BOD) are two important parameters for assessing the level of pollution in a river or stream. The models developed to date account only for the portion of BOD that is in dissolved form, not the portion in settleable form. They also do not account for the storage zones situated along the banks of rivers, and hence do not represent the actual situation caused by the discharge of partially treated or untreated waste water, which contains a significant portion of BOD in settleable form, into a wide water body where the water becomes stagnant along the banks. The present work develops a model to predict the concentration of total BOD when partially treated or untreated waste water is discharged into a river having a storage zone along its banks, thus addressing the situation stated above.
Keywords: mathematical modeling, biochemical oxygen demand, settleable BOD, main zone, storage zone.

1. INTRODUCTION
Dissolved Oxygen is a surrogate variable for the general health of an aquatic ecosystem. Low dissolved oxygen in a river adversely affects the aquatic system and consequently human life. Waste dumped into a river consumes the oxygen dissolved in the river water during the process of stabilization. The amount of oxygen consumed by bacteria to stabilize the organic waste aerobically at a stated temperature and in a specified period of time, called BOD, is used as an adjunct to DO determination.

The model presented by Streeter and Phelps 1925 [13], with the subsequent mathematical formulation by Fair 1939 [6], is the first published mathematical model used to determine the DO condition in a stream below a single point source under steady-state conditions. The interesting characteristic of the Streeter-Phelps model is the idea that the river may be represented by a single one-dimensional system. The model was well suited to the computational capabilities of that time, but it did not include the part of BOD that is in settleable form; this situation arises when partially treated or untreated waste enters the river. Bhargava 1983, 1986(b) [2, 4] incorporated the settleable part of BOD along with the soluble part and evaluated the model for accurate prediction of the DO-sag related parameters. Bhargava 1983, 1986(a) and 1986(b) [2, 3 & 4], however, did not include the dispersion term in his model. Tyagi et al. 1999 [15] accounted for both parts of BOD in their model, along with dispersion. Various authors (Chapra and Runkel 1999 [5], Thackston and Schnelle 1970 [14], Gooseff 2005 [7]) have explored and analyzed the impact of dead zones (also referred to as storage zones) over the last three decades. Several approaches have been developed to determine the impact of these storage zones on solute transport (Rutherford 1994 [10], Bencala & Walters 1983 [1]). Bencala & Walters 1983 [1] suggested that a river be divided into two areas, namely the main zone and the storage zone.


Nitash Kaushik Babita Tyagi Girija Jayaraman

The model presented by Chapra and Runkel (1999) [5] is extended to incorporate the settleable part of BOD along with the dissolved part, which represents the situation in which partially treated or untreated waste enters a river having a stagnant zone along its banks. The present work therefore addresses such a situation and develops a model to study the cumulative effect of the storage zone and the two types of BOD below a point source, under steady-state conditions.

2. DEVELOPMENT OF MODEL
A mathematical model is developed for the physical system described below.

2.1 Physical System
Consider a river with an immobile storage zone along both banks. Let partially treated or untreated oxygen-demanding waste be released continuously at a constant rate into the river through a point source (waste treatment plant).

2.2 Mathematical Representation
The cross-section of the river is divided into two zones, namely the main zone in the centre of the river and the storage zone along the two banks, where the velocity is assumed to be zero. A mathematical model is developed for the above system based on the following assumptions:
- The entire BOD in the waste is in two forms, dissolved and settleable. The dissolved part of the BOD decays according to first-order kinetics, while the settleable part is removed by a linear law. The ratio of the settleable part to total BOD is assumed fixed at the outfall.
- The storage zone has cross-sectional area As, consisting of two parts located near the two banks of the river, while the main zone has area A, located in the centre of the river.
- No transverse gradient exists within either zone; however, there is exchange of mass between the two zones, linearly related to the difference in their respective concentrations.
- In the main zone, advection, reaction and exchange of mass are considered the relevant phenomena; in the storage zone, only exchange of mass with the main zone and reaction within the storage zone are considered.
- In the storage zone the settleable BOD settles at the outfall itself, while in the main zone it is carried forward with the flow and settles only after a particular distance downstream. The effects of advective forces are therefore included through the transition time Ts = d/v, in which all the settleable part is removed from the waste; Ts is longer for deeper rivers and for smaller flocculated particle sizes.
- Exchange of mass between the two zones is considered only for the dissolved part of BOD.
- The temperature effect on decomposition rates is the same in each zone.
- There is no other source or sink of BOD in the river.

Using the above assumptions, the steady-state mass-balance equations for BOD in the main zone and the storage zone are as follows.

2.2.1 Main Zone
$$0 = -u\,\frac{dB_d}{dx} - K_d B_d + \alpha\,(B_s - B_d) \qquad \text{----(1)}$$
$$L(x) = L_0\left(1 - \frac{x/u}{d/v}\right),\; x \le x'; \qquad L(x) = 0,\; x > x' \qquad \text{----(2)}$$

2.2.2 Storage Zone
$$0 = \alpha\,\frac{A}{A_s}\,(B_d - B_s) - K'_d B_s \qquad \text{----(3)}$$

where Bd = dissolved part of BOD in the main zone (mg/L); Bs = dissolved part of BOD in the storage zone (mg/L); L = settleable part of BOD in the main zone (mg/L); L0 = initial settleable BOD in the main zone (mg/L); D = DO deficit in the main zone (mg/L); Ds = DO deficit in the storage zone (mg/L); α = storage-zone exchange coefficient (per day); x' = u·Ts, the distance at which all the settleable BOD is removed (m); u = mean cross-sectional flow velocity (m/day); Kd = decomposition rate of the dissolved BOD in the main zone (day⁻¹); K'd = decomposition rate of the dissolved BOD in the storage zone (day⁻¹); Ks = decomposition rate of settleable BOD in the main zone (day⁻¹); v = settling velocity of the particles (m/day); d = depth of the river (m).

2.3 Boundary Conditions
The associated boundary condition, reflecting the release of BOD-causing material, is Bd = B0 at x = 0.

3. METHOD OF SOLUTION
The value of Bs computed from eq. (3) is

$$B_s = \frac{B_d}{1 + \dfrac{A_s}{A}\,\dfrac{K'_d}{\alpha}} \qquad \text{----(4)}$$

Substituting Bs into eq. (1) gives

$$0 = -u\,\frac{dB_d}{dx} - E_d K_d B_d \qquad \text{----(5)}$$

where

$$E_d = 1 + \frac{\alpha}{K_d}\left[\frac{K'_d}{\dfrac{\alpha A}{A_s} + K'_d}\right]$$

Ed can be identified as a dimensionless enhancement factor representing the impact of the storage zone on decomposition. Let K̄d = Ed Kd, where K̄d is an apparent decomposition rate that accounts for the storage zone (per day). Equation (5) then becomes

$$0 = -u\,\frac{dB_d}{dx} - \bar{K}_d B_d \qquad \text{----(6)}$$

Solving (6) with the boundary condition Bd = B0 at x = 0 gives

$$B_d = B_0\, e^{-(\bar{K}_d/u)\,x} \qquad \text{----(7)}$$

so that the total BOD (BM) in the main zone is given by

$$B_M = B_0\, e^{-(\bar{K}_d/u)\,x} + L_0\left(1 - \frac{x/u}{d/v}\right),\; x \le x' \qquad \text{----(8a)}$$
$$B_M = B_0\, e^{-(\bar{K}_d/u)\,x},\; x > x' \qquad \text{----(8b)}$$

Using equations (4), (8a) and (8b), the total BOD in the storage zone is given by

$$B_s = \frac{B_0\, e^{-(\bar{K}_d/u)\,x}}{1 + \dfrac{A_s}{A}\,\dfrac{K'_d}{\alpha}},\; x \ge 0 \qquad \text{----(9)}$$
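The closed-form profiles (7)-(9) are straightforward to evaluate numerically. The sketch below does so in Python using the parameter values of Tables 1-2; it is an illustrative script, not part of the paper, and note that u·(d/v) gives x' ≈ 562 m, which Table 1 rounds to 600 m.

```python
import numpy as np

# Parameter values from Tables 1-2
u, alpha = 2808.0, 3.456     # mean velocity (m/day), exchange coefficient (day^-1)
B0, L0 = 6.0, 4.0            # initial dissolved / settleable BOD (mg/L)
A, As = 0.4, 0.7             # main-zone / storage-zone areas (m^2)
v_over_d = 5.0               # settling velocity over depth, v/d (day^-1)
Kd, Kpd = 3.0, 3.0           # decay rates, main and storage zone (day^-1)

# Enhancement factor E_d and apparent rate Kbar_d (eqs. (5)-(6))
Ed = 1.0 + (alpha / Kd) * Kpd / (alpha * A / As + Kpd)
Kbar = Ed * Kd
x_prime = u / v_over_d       # distance at which all settleable BOD has settled

def total_bod_main(x):
    """Total BOD in the main zone, eqs. (8a)-(8b)."""
    dissolved = B0 * np.exp(-Kbar * x / u)                       # eq. (7)
    settleable = np.where(x <= x_prime,
                          L0 * (1.0 - (x / u) * v_over_d), 0.0)  # eq. (2)
    return dissolved + settleable

def dissolved_bod_storage(x):
    """Dissolved BOD in the storage zone, eq. (9)."""
    return B0 * np.exp(-Kbar * x / u) / (1.0 + (As / A) * (Kpd / alpha))
```

At the outfall the main-zone total is B0 + L0 = 10 mg/L, while the storage-zone concentration is damped by the factor 1 + (As/A)(K'd/α) ≈ 2.52, consistent with the lower storage-zone curves discussed below.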

4. RESULTS AND DISCUSSION
To analyse the cumulative impact of settleable BOD and the storage zone on the river's DO, the model is applied to Uvas Creek, for which the physical parameters and their values are listed in Table 1. To predict the DO concentration in the considered river system, some kinetic and chemical parameters are appropriately taken from the literature; their values are given in Table 2. The model is applied to simulate a 3 km stretch of the Creek. The results of the conventional S-P model and of the storage-zone version of the S-P model (presented by Chapra) are also displayed for comparison:
Case 1: The entire BOD is in dissolved form and the Creek has no storage zone (S-P model).
Case 2: The entire BOD is in dissolved form and the Creek has a storage zone (Chapra's model).
Case 3: A part of the total BOD is in settleable form and the Creek has a storage zone (the present model).



Table 1: Hydraulic parameters

Parameter   Value   Unit
u           2808    m/day
α           3.456   day⁻¹
B0          6.0     mg/L
L0          4.0     mg/L
A           0.4     m²
As          0.7     m²
(v/d)       5       day⁻¹
x'          600     m

Table 2: Chemical and kinetic parameters

Parameter   Value   Unit
Kd          3       day⁻¹
K'd         3       day⁻¹
Ks          7       day⁻¹

Fig. 1

Fig. 1 depicts the comparative BOD distribution in the main zone as predicted by the conventional Streeter-Phelps model, the Streeter-Phelps storage-zone model (i.e. Chapra's model) and the present model. It is observed from the figure that the BOD concentration predicted by the present model is lower than that predicted by both the S-P model and Chapra's model: since the settleable part is removed faster, less BOD remains with distance downstream.

Fig. 2

Fig. 2 depicts the comparative BOD distribution in the storage zone as predicted by the Streeter-Phelps storage-zone model (Chapra's model) and the present model. Over the initial stretch (0-50 m) downstream, the BOD concentration predicted by the present model is much lower than that predicted by Chapra's model, because the settleable part of the BOD in the present model settles at the start of the reach; the difference between the two models gradually diminishes with distance downstream. The cumulative effect of the settleable part of BOD in the waste water and of the storage zone in the river is to increase the demand on the river's DO for stabilization of the waste discharged into the river.

5. CONCLUSION
It is concluded that predicting the distribution of BOD (which ultimately affects the DO in the river) in a river with storage zones along its two banks depends on the treatment of the waste. If untreated or partially treated waste enters such a river, the model presented by Chapra is of little use; the present model predicts the BOD concentration more accurately, and decisions based on its predictions would consequently be more rational in such real-life situations.

REFERENCES
[1] Bencala, K.E. and Walters, R.A. (1983) "Simulation of solute transport in a mountain pool-and-riffle stream: a transient storage model", Water Resources Research, 19(3), 718-724.
[2] Bhargava, D.S. (1983) "Most rapid BOD assimilation in Ganga and Yamuna rivers", Journal of Environmental Engineering, ASCE 109, 174-187.
[3] Bhargava, D.S. (1986a) "DO sag model for extremely fast river purification", Journal of Environmental Engineering, ASCE 112, 572-585.
[4] Bhargava, D.S. (1986b) "Models for



polluted streams subject to fast purification", Water Research, 20, 1-8.
[5] Chapra, S.C. and Runkel, R.L. (1999) "Modeling impact of storage zones on stream dissolved oxygen", Journal of Environmental Engineering, ASCE 125, 415-419.
[6] Fair, G.M. (1939) "The Dissolved Oxygen Sag - An Analysis", Sewage Works Journal, Vol. 11, No. 3, p. 445.
[7] Gooseff, M.N. (2005) "Determining in-channel (dead zone) transient storage by solute transport in a bedrock channel-alluvial channel sequence, Oregon", Water Resources Research, Vol. 41, W06014.
[8] McBride, G.B. and Rutherford, J.C. (1984) "Accurate modelling of river pollutant transport", Journal of Environmental Engineering, ASCE 110, 808-826.

[9] Runkel, R.L. (1998) "One dimensional transport with inflow and storage (OTIS): A solute transport model for streams and rivers", U.S. Geological Survey Water Resources Investigation Report 98-4018, p. 73.
[10] Rutherford, J.C. (1994) "River mixing", Wiley, New York.
[11] Schmid, B.H. (1995) "On the transient storage equations for longitudinal solute transport in open channels: temporal moments accounting for the effects of first-order decay", Journal of Hydraulic Research, Vol. 33, No. 5, 595-609.
[12] Singh, S.K. (2003) "Treatment of stagnant zone in riverine advection dispersion", Journal of Hydraulic Engineering, ASCE, Vol. 129, No. 6, 470-473.
[13] Streeter, H.W. and Phelps, E.B. (1925) "A study of the Pollution and Natural Purification of the Ohio Rivers", U.S. Public Health Service Bulletin, No. 146.
[14] Thackston, E.L. and Schnelle, K.B. (1970) "Predicting effects of dead zones on stream mixing", Journal of the Sanitary Engineering Division, ASCE 96, 319-331.
[15] Tyagi, B., Gakkhar, S. and Bhargava, D.S. (1999) "Mathematical modeling of stream DO-BOD accounting for settleable BOD and periodically varying BOD source", Environmental Software, Elsevier, U.K., 14, 461-471.



APPENDIX: Figure 1:

Figure 2:



Analysis of 𝑁-policy Markovian queue with balking, reneging and multiple working vacations

P. Vijaya Laxmi, K. Jyothsna

Department of Applied Mathematics, Andhra University, Visakhapatnam, India.

ABSTRACT
This paper presents the analysis of a finite buffer 𝑀/𝑀/1 queue with multiple working vacations and 𝑁-policy. Arriving customers may balk (that is, not join the queue) with a state-dependent probability, and waiting customers may renege (that is, leave the queue after joining without receiving service) after an exponentially distributed impatience time. As soon as the system becomes empty, the server leaves for a working vacation; during any working vacation, the server works at a different (lower) rate rather than stopping service completely. On return from a working vacation, if the server finds the queue length greater than or equal to a predetermined threshold value 𝑁, it immediately begins normal service; otherwise, if the server finds fewer than 𝑁 customers, it takes another vacation and continues to do so until it finds at least 𝑁 customers waiting in the queue. Such a policy is known as multiple working vacations with 𝑁-policy. Inter-arrival times, vacation times, and service times during a regular service period and during a vacation period are independent and exponentially distributed random variables. The steady-state behaviour of the model is considered and various performance measures are discussed. Finally, some numerical results examine the effect of different parameters on the system performance characteristics.
Keywords: finite buffer, multiple working vacations, balking, reneging, 𝑁-policy.

arising in daily activities, but also in various machine repair models. An M/M/1 queue with customer balking and with reneging has been discussed by Haight [3] and [4], respectively. The combined impact of balking and reneging in a finite capacity M/M/1 queue has been studied by Ancker and Gafarian [1], [2]. Under the working vacation (WV) policy, the server provides service at a lower speed during the vacation period rather than stopping service completely. At the end of a vacation, if the queue is nonempty, a service period begins at the normal service rate; otherwise the server takes another vacation. This policy, called multiple working vacations (MWV), was introduced by Servi and Finn [6], who studied the M/M/1 queueing system with working vacations and obtained the steady-state distribution of the number of customers in the system and the sojourn time. The N-policy queueing system was first introduced by Yadin and Naor [7]: the server is turned on when N (N ≥ 1) or more customers are present and turned off when the system is empty; after being turned off, the server does not operate until N customers are present in the system. Hersh and Brosh [5] considered a queueing system with Poisson arrivals and exponentially distributed service times where the server operates under the N-policy. Zhang and Xu [8] studied an M/M/1/MWV queue with N-policy. This paper deals with a finite buffer single server N-policy queue where customers may balk or renege due to impatience and the server takes multiple working vacations. Customers are allowed to decide whether to join or balk, to wait or abandon. At a vacation completion epoch, the server switches to the regular service rate only if there are at least N customers in the queue,


otherwise it takes another vacation. The system length distribution is obtained by solving the steady-state equations recursively.

2. MODEL DESCRIPTION
Consider an M/M/1/K queueing system with balking, reneging, multiple working vacations and N-policy, where K is the finite capacity of the system. The assumptions of the model are as follows.

i) Customers arrive one at a time according to a Poisson process with rate λ. An arriving customer who finds i customers ahead of him in the system joins the queue with probability b_i and balks with probability 1 − b_i. We assume 0 ≤ b_{i+1} ≤ b_i < 1 for 1 ≤ i ≤ K − 1, b_0 = 1, and b_i = 0 for i ≥ K.

ii) After joining the queue, each customer waits a certain length of time T for service to begin; if service has not begun by then, the customer gets impatient and leaves the queue without receiving service. T is assumed to follow an exponential distribution with mean 1/α. Since the arrivals and the departures of impatient customers are independent, with i customers in the system the average reneging rate is r(i) = (i − 1)α for 1 ≤ i ≤ K, and r(i) = 0 for i > K.

iii) Customers are served on a first-come, first-served (FCFS) discipline. Regular service times are assumed to follow an exponential distribution with mean 1/μ.

iv) The server takes a working vacation whenever the system becomes empty. When a vacation ends, if there are at least N customers in the queue, a service period begins and the server serves the queue at its usual service rate; otherwise the server takes another vacation, and continues to do so until it finds at least N customers at a vacation completion instant. The server remains active during the working vacation period and serves arriving customers at a different service rate. This type of vacation is called multiple working vacations (MWV) with N-policy.

v) The vacation durations and the service times during a vacation are exponentially distributed with parameters φ and η, respectively. The service rate η during a working vacation is different from, and generally lower than, the regular service rate μ. Inter-arrival times, vacation times, and service times during regular service and during vacation are mutually independent.

3. STEADY-STATE PROBABILITIES
In this section, we first develop the steady-state probability equations of the Markov process, and then compute the steady-state probabilities by a recursive technique.

3.1 Steady-state equations: At steady state, let P_{0,i}, 0 ≤ i ≤ K, be the probability that there are i customers in the system when the server is on working vacation, and P_{1,i}, 1 ≤ i ≤ K, the probability that there are i customers in the system when the server is in a regular busy period. Using the theory of Markov processes, we obtain the following set of steady-state equations:

0 = η P_{0,1} + μ P_{1,1} − λ P_{0,0},   (3.1.1)
0 = (η + iα) P_{0,i+1} + λ b_{i−1} P_{0,i−1} − (λ b_i + η + (i−1)α) P_{0,i},  1 ≤ i ≤ N−1,   (3.1.2)
0 = (η + iα) P_{0,i+1} + λ b_{i−1} P_{0,i−1} − (λ b_i + η + (i−1)α + φ) P_{0,i},  N ≤ i ≤ K−1,   (3.1.3)
0 = λ b_{K−1} P_{0,K−1} − (η + (K−1)α + φ) P_{0,K},   (3.1.4)
0 = (μ + α) P_{1,2} − (μ + λ b_1) P_{1,1},   (3.1.5)
0 = λ b_{i−1} P_{1,i−1} + (μ + iα) P_{1,i+1} − (μ + λ b_i + (i−1)α) P_{1,i},  2 ≤ i ≤ N−1,   (3.1.6)
0 = λ b_{i−1} P_{1,i−1} + (μ + iα) P_{1,i+1} + φ P_{0,i} − (μ + λ b_i + (i−1)α) P_{1,i},  N ≤ i ≤ K−1,   (3.1.7)


0 = λ b_{K−1} P_{1,K−1} + φ P_{0,K} − (μ + (K−1)α) P_{1,K}.   (3.1.8)

3.2 Recursive solution: In this subsection we solve the system of equations (3.1.1)-(3.1.8) recursively. From equations (3.1.2)-(3.1.4) we obtain

P_{0,i} = Z_{K−i−1} P_{0,K},  0 ≤ i ≤ K−1,

where

Z_0 = H_{K−1},
Z_1 = H_{K−2} Z_0 − G_{K−2},
Z_i = H_{K−i−1} Z_{i−1} − G_{K−i−1} Z_{i−2},  2 ≤ i ≤ K−N,
Z_i = (H_{K−i−1} − φ/(λ b_{K−i−1})) Z_{i−1} − G_{K−i−1} Z_{i−2},  K−N+1 ≤ i ≤ K−1,

with

H_i = (λ b_{i+1} + η + φ + iα)/(λ b_i),  0 ≤ i ≤ K−1,
G_i = (η + (i+1)α)/(λ b_i),  0 ≤ i ≤ K−2.

From equations (3.1.1) and (3.1.5)-(3.1.8) we have

P_{1,i} = Y_i P_{0,K},  1 ≤ i ≤ K,

where

Y_0 = 0,  Y_1 = (λ/μ) Z_{K−1} − (η/μ) Z_{K−2},
Y_i = R_i Y_{i−1} − S_i Y_{i−2},  2 ≤ i ≤ N,
Y_i = R_i Y_{i−1} − S_i Y_{i−2} − U_i Z_{K−i},  N+1 ≤ i ≤ K,

with

R_i = (μ + λ b_{i−1} + (i−2)α)/(μ + (i−1)α),  2 ≤ i ≤ K,
S_i = λ b_{i−2}/(μ + (i−1)α),  2 ≤ i ≤ K,
U_i = φ/(μ + (i−1)α),  N+1 ≤ i ≤ K.

Using the normalization condition, the remaining unknown probability is

P_{0,K} = [1 + Σ_{i=0}^{K−1} Z_{K−i−1} + Σ_{i=1}^{K} Y_i]^{−1}.
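The recursion of this subsection is straightforward to implement. The sketch below (Python) uses the illustrative parameter values and balking function of Section 5; the variable names are ours, not the paper's:

```python
K, N = 20, 4                                   # capacity and N-policy threshold
lam, mu, eta, phi, alpha = 1.1, 1.5, 1.1, 1.2, 0.3

def b(i):                                      # balking function of Section 5
    return 1.0 if i == 0 else (0.0 if i >= K else 1.0 - i / K**2)

H = lambda i: (lam * b(i + 1) + eta + phi + i * alpha) / (lam * b(i))
G = lambda i: (eta + (i + 1) * alpha) / (lam * b(i))

# Z recursion: P_{0,i} = Z_{K-i-1} * P_{0,K}
Z = [0.0] * K
Z[0] = H(K - 1)
Z[1] = H(K - 2) * Z[0] - G(K - 2)
for i in range(2, K):
    if i <= K - N:
        Z[i] = H(K - i - 1) * Z[i - 1] - G(K - i - 1) * Z[i - 2]
    else:                                      # states below N carry no phi term
        Z[i] = (H(K - i - 1) - phi / (lam * b(K - i - 1))) * Z[i - 1] \
               - G(K - i - 1) * Z[i - 2]

# Y recursion: P_{1,i} = Y_i * P_{0,K}
R = lambda i: (mu + lam * b(i - 1) + (i - 2) * alpha) / (mu + (i - 1) * alpha)
S = lambda i: lam * b(i - 2) / (mu + (i - 1) * alpha)
U = lambda i: phi / (mu + (i - 1) * alpha)
Y = [0.0] * (K + 1)
Y[1] = (lam * Z[K - 1] - eta * Z[K - 2]) / mu
for i in range(2, K + 1):
    Y[i] = R(i) * Y[i - 1] - S(i) * Y[i - 2]
    if i >= N + 1:
        Y[i] -= U(i) * Z[K - i]

P0K = 1.0 / (1.0 + sum(Z) + sum(Y[1:]))        # normalization
P0 = [Z[K - i - 1] * P0K for i in range(K)] + [P0K]   # P_{0,0..K}
P1 = [0.0] + [Y[i] * P0K for i in range(1, K + 1)]    # P_{1,1..K} (index 0 unused)
```

The redundant balance equation (3.1.8), which is not used in the construction, can then serve as a consistency check on the computed probabilities.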

4. PERFORMANCE MEASURES
One can evaluate various performance measures of the model, such as the probability that the server is in a regular busy period (P_b), the probability that the server is on working vacation (P_wv), the expected queue length (L_q), the average system length (L_s), the average balking rate (B.R.), the average reneging rate (R.R.) and the average rate of losing a customer (L.R.). They are given by

P_b = Σ_{i=1}^{K} P_{1,i},    P_wv = Σ_{i=0}^{K} P_{0,i},

L_q = Σ_{i=1}^{K} (i − 1)(P_{0,i} + P_{1,i}),    L_s = Σ_{i=1}^{K} i (P_{0,i} + P_{1,i}).

Using the concepts of Ancker and Gafarian [1], [2], we obtain the average balking rate as

B.R. = Σ_{i=1}^{K} λ(1 − b_i) P_{0,i} + Σ_{i=1}^{K} λ(1 − b_i) P_{1,i},

and the average reneging rate as

R.R. = Σ_{i=1}^{K} (i − 1)α P_{0,i} + Σ_{i=1}^{K} (i − 1)α P_{1,i}.

The average rate of customer loss is the sum of the average balking rate and the average reneging rate: L.R. = B.R. + R.R.

5. SENSITIVITY ANALYSIS
In this section, we present some numerical results to show the effect of the model parameters on the system performance measures. We fix the capacity of the system at K = 20 and the threshold at N = 4. The other parameters are chosen as λ = 1.1, μ = 1.5, η = 1.1, φ = 1.2, α = 0.3, and the balking function is taken as b_i = 1 − i/K² for 1 ≤ i ≤ K − 1, with b_0 = 1 and b_i = 0 for i ≥ K.

Figure 1 presents the effect of η on the expected queue length L_q for varying values of φ. We observe that the expected queue length decreases as η increases, for any vacation rate φ; further, for fixed η, the expected queue length decreases as the vacation rate φ increases. The average queue length L_q is shown as a function of the arrival rate λ, with and without balking and reneging (B&R), in Figure 2. It can be observed that as λ increases, the average queue length L_q increases. But in the case of


without balking and reneging, the queue length increases much more sharply. The effect of λ on the average rate of customer loss L.R. for the different balking functions (i) b_i = 1/(i + 1), (ii) b_i = 1 − i/K² and (iii) b_i = 1 − e^{−i} is shown in Figure 3. It is evident from the figure that as λ increases, the average rate of customer loss L.R. increases for all three balking functions, but the average rate of customer loss is lowest for the balking function given in (i).
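As an independent numerical cross-check of Sections 3 and 4, the generator of the underlying Markov chain can be assembled directly from the model description and the global balance equations solved numerically. The sketch below is our construction, not the paper's method; the state encoding and the least-squares solve with an appended normalization row are implementation choices:

```python
import numpy as np

K, N = 20, 4
lam, mu, eta, phi, alpha = 1.1, 1.5, 1.1, 1.2, 0.3
b = lambda i: 1.0 if i == 0 else (0.0 if i >= K else 1.0 - i / K**2)

# State encoding: 0..K -> (vacation, i); K+1..2K -> (busy, i), i = 1..K
V = lambda i: i
Bu = lambda i: K + i
n = 2 * K + 1
Q = np.zeros((n, n))

def add(src, dst, rate):                        # add one transition to the generator
    Q[src, dst] += rate
    Q[src, src] -= rate

for i in range(K + 1):                          # working-vacation states
    if i < K:
        add(V(i), V(i + 1), lam * b(i))         # arrival that joins
    if i >= 1:
        add(V(i), V(i - 1), eta + (i - 1) * alpha)  # vacation-rate service + reneging
    if i >= N:
        add(V(i), Bu(i), phi)                   # vacation ends with >= N customers
for i in range(1, K + 1):                       # regular busy states
    if i < K:
        add(Bu(i), Bu(i + 1), lam * b(i))
    dst = Bu(i - 1) if i >= 2 else V(0)         # emptying the system starts a vacation
    add(Bu(i), dst, mu + (i - 1) * alpha)       # regular service + reneging

A = np.vstack([Q.T, np.ones(n)])                # pi Q = 0 together with sum(pi) = 1
rhs = np.zeros(n + 1)
rhs[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

P0, P1 = pi[:K + 1], np.concatenate([[0.0], pi[K + 1:]])
idx = np.arange(K + 1)
Lq = float(np.sum((idx[1:] - 1) * (P0[1:] + P1[1:])))
BR = float(sum(lam * (1 - b(i)) * (P0[i] + P1[i]) for i in range(1, K + 1)))
RR = float(np.sum((idx[1:] - 1) * alpha * (P0[1:] + P1[1:])))
LR = BR + RR
```

The resulting L_q, B.R., R.R. and L.R. can be compared directly with the values produced by the recursive solution of Section 3.2.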

REFERENCES
[1] Ancker Jr., C. J. and Gafarian, A. V., Some queueing problems with balking and reneging: II, Operations Research, 11, 928-937, 1963b.
[2] Ancker Jr., C. J. and Gafarian, A. V., Some queueing problems with balking and reneging: I, Operations Research, 11, 88-100, 1963a.
[3] Haight, F. A., Queueing with balking, Biometrika, 44, 360-369, 1957.
[4] Haight, F. A., Queueing with reneging, Metrika, 2, 186-197, 1959.
[5] Hersh, M. and Brosh, I., The optimal strategy structure of an intermittently operated service channel, European Journal of Operational Research, 5, 133-141, 1980.
[6] Servi, L. D. and Finn, S. G., M/M/1 queues with working vacations (M/M/1/WV), Performance Evaluation, 50, 41-52, 2002.
[7] Yadin, M. and Naor, P., Queueing systems with a removable service station, Operations Research Quarterly, 14, 393-405, 1963.
[8] Zhang, Z. and Xu, X., Analysis for the M/M/1 queue with multiple working vacations and N-policy, Information and Management Sciences, 19, 495-506.



A Three Species Ecological Model with a Prey, Predator and Competitor to the Prey

Papa Rao A. V.(1)    Lakshmi Narayan K.(2)    Shahnaz Bathul(3)

(1) Department of Mathematics, Vignan Institute of Technology and Science, Deshmukhi, India
(2) Department of Mathematics, SLC's State Institute of Educational Technology, Hyderabad, India
(3) Department of Mathematics, Jawaharlal Nehru Technological University, Hyderabad, India

ABSTRACT The present paper is devoted to an analytical investigation of a three species ecological model with a prey (N1), a predator (N2) and a competitor (N3) to the prey that is neutral with respect to the predator; in addition, all three species are provided with alternative food. The model is characterized by a set of first order non-linear ordinary differential equations. All eight equilibrium points of the model are identified. Local stability is discussed using the Routh-Hurwitz criteria, global stability is discussed using a Lyapunov function, and the stability analysis is supported by numerical simulation in Matlab.

Keywords: Prey, Predator, Competitor to the prey, Equilibrium points, Normal steady state, Numerical simulation.

1. INTRODUCTION
Ecology is the study of living beings (animals and plants) in relation to their habits and habitats. This discipline is a branch of evolutionary biology that seeks to explain how, and to what extent, living beings are regulated in nature. Allied to the problem of population regulation is the problem of species distribution: prey, predator, competition and so on. Research in the area of theoretical ecology was initiated in 1925 by Lotka [5] and by Volterra [6]. The general concepts of modelling have been presented in the treatises of Paul Colinvaux [4], Freedman [1] and Kapur [2, 3]. Ecological symbioses can be broadly classified as prey-predator, competition, mutualism, commensalism, amensalism and so on. Recently, Shiva Reddy [7] discussed the stability analysis of a three species ecosystem consisting of a prey, a predator and a super predator, and Paparao [8] discussed a prey-predator model with a cover linearly varying with the prey population and an alternative food for the predator. Here we discuss a more general three species model, which resembles a small fish, a star fish and a shark: the small fish and the star fish compete for the

same food, and the shark preys only on the small fish, not on the star fish. The model is characterized by a set of first order ordinary differential equations. All eight equilibrium points of the model are identified; local stability is discussed using the Routh-Hurwitz criteria and global stability using a Lyapunov function. The stability analysis is supported by numerical simulation in Matlab.

2. BASIC EQUATIONS
The model equations for the three species prey-predator-competitor system form the following system of first order ordinary differential equations, with the notation: N1, N2 and N3 are the populations of the prey, the predator and the competitor to the prey, with natural growth rates a1, a2 and a3 respectively; α11 is the rate of decrease of the prey due to insufficient food and intra-species competition; α12 is the rate of decrease of the prey due to inhibition by the predator; α21 is the rate of increase of the predator due to successful attacks on the prey; α22 is the rate of decrease of the predator due to insufficient food other than the prey and intra-species competition; α13 is the rate of decrease of the prey due to competition with the third species; α33 is the rate of decrease of the competitor to the prey due to insufficient food and intra-species competition; α31 is the rate of decrease of the competitor to the prey due to competition with the prey.

dN1/dt = a1 N1 − α11 N1² − α12 N1 N2 − α13 N1 N3,
dN2/dt = a2 N2 − α22 N2² + α21 N1 N2,              ----- (2.1)
dN3/dt = a3 N3 − α33 N3² − α31 N1 N3.
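System (2.1) can be integrated numerically. The sketch below uses a classical fourth-order Runge-Kutta step with the parameter values and initial populations of Example 6.2 from Section 6; the step size and time horizon are our choices:

```python
# Parameter values of Example 6.2
a1, a2, a3 = 2.0, 1.0, 2.0
al11, al12, al13 = 0.1, 0.1, 0.1
al22, al21 = 0.2, 0.1
al33, al31 = 0.2, 0.1

def f(N):
    """Right-hand side of system (2.1)."""
    N1, N2, N3 = N
    return (N1 * (a1 - al11 * N1 - al12 * N2 - al13 * N3),
            N2 * (a2 - al22 * N2 + al21 * N1),
            N3 * (a3 - al33 * N3 - al31 * N1))

def rk4_step(N, h):
    """One classical RK4 step of size h."""
    k1 = f(N)
    k2 = f(tuple(x + 0.5 * h * k for x, k in zip(N, k1)))
    k3 = f(tuple(x + 0.5 * h * k for x, k in zip(N, k2)))
    k4 = f(tuple(x + h * k for x, k in zip(N, k3)))
    return tuple(x + h / 6.0 * (p + 2 * q + 2 * r + s)
                 for x, p, q, r, s in zip(N, k1, k2, k3, k4))

N = (10.0, 15.0, 20.0)        # initial population sizes used in Fig. 6.2.1
h = 0.01
for _ in range(20000):        # integrate up to t = 200
    N = rk4_step(N, h)
```

For these parameter values the trajectory settles on the interior equilibrium E8, where all three right-hand sides of (2.1) vanish.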



3. EQUILIBRIUM STATES
The system under investigation has eight equilibrium states.

I. E1: the extinct state,
N̄1 = 0, N̄2 = 0, N̄3 = 0.   --- (3.1)

II. E2: only the predator survives; the prey and the competitor to the prey are extinct,
N̄1 = 0, N̄2 = a2/α22, N̄3 = 0.   --- (3.2)

III. E3: the prey and the predator are extinct and the competitor to the prey survives,
N̄1 = 0, N̄2 = 0, N̄3 = a3/α33.   --- (3.3)

IV. E4: the predator and the competitor to the prey are extinct and the prey survives,
N̄1 = a1/α11, N̄2 = 0, N̄3 = 0.   --- (3.4)

V. E5: the prey and the predator exist and the competitor to the prey is extinct,
N̄1 = (a1 α22 − a2 α12)/(α11 α22 + α12 α21),  N̄2 = (a2 α11 + a1 α21)/(α11 α22 + α12 α21),  N̄3 = 0.   --- (3.5)
This case arises only when a1 α22 > a2 α12.

VI. E6: the prey and the competitor to the prey exist and the predator is extinct,
N̄1 = (a1 α33 − a3 α13)/(α11 α33 − α13 α31),  N̄2 = 0,  N̄3 = (a3 α11 − a1 α31)/(α11 α33 − α13 α31).   --- (3.6)
This equilibrium state exists only when α11 α33 > α13 α31, a1 α33 > a3 α13 and a3 α11 > a1 α31.

VII. E7: the predator and the competitor to the prey exist and the prey is extinct,
N̄1 = 0, N̄2 = a2/α22, N̄3 = a3/α33.   --- (3.7)

VIII. E8: the prey, the predator and the competitor to the prey all exist,

N̄1 = (a1 α22 α33 − a2 α12 α33 − a3 α13 α22) / D,
N̄2 = (a2 (α11 α33 − α13 α31) + α21 (a1 α33 − a3 α13)) / D,   --- (3.8)
N̄3 = (a3 α11 α22 + α12 (a3 α21 + a2 α31) − a1 α31 α22) / D,   --- (3.9)

where D = α22 (α11 α33 − α13 α31) + α12 α21 α33. This equilibrium state exists only when a1 α22 α33 > a2 α12 α33 + a3 α13 α22.

4. THE STABILITY OF THE EQUILIBRIUM STATES
Let N = (N1, N2, N3)^T. The basic equations (2.1) are linearized to obtain the equations for the perturbed state,

dU/dt = A U,   --- (4.1)

where

    | a1 − 2α11 N1 − α12 N2 − α13 N3        −α12 N1                 −α13 N1          |
A = |         α21 N2                 a2 − 2α22 N2 + α21 N1             0             |   --- (4.2)
    |        −α31 N3                         0              a3 − 2α33 N3 − α31 N1    |

and U = (u1, u2, u3)^T is the perturbation over the equilibrium state N̄ = (N̄1, N̄2, N̄3)^T, that is,

N = N̄ + U.   --- (4.3)

The characteristic equation for the system is

det(A − λ I) = 0.   --- (4.4)

The equilibrium state is stable if the three roots of equation (4.4) are negative, in case they are real, or have negative real parts, in case they are complex.

4.1 Local stability analysis of E8: The variational matrix of the system (2.1) at E8 is

    | −α11 N̄1   −α12 N̄1   −α13 N̄1 |
A = |  α21 N̄2   −α22 N̄2      0     |   --- (4.1.1)
    | −α31 N̄3      0      −α33 N̄3 |

with characteristic equation

λ³ + a1 λ² + a2 λ + a3 = 0,   --- (4.1.2)

where

a1 = α11 N̄1 + α22 N̄2 + α33 N̄3,
a2 = (α11 α33 − α13 α31) N̄1 N̄3 + (α11 α22 + α12 α21) N̄1 N̄2 + α33 α22 N̄2 N̄3,
a3 = [α22 (α11 α33 − α13 α31) + α21 α12 α33] N̄1 N̄2 N̄3.

By the Routh-Hurwitz criteria, all eigenvalues of the characteristic equation (4.1.2) have negative real parts if and only if a1 > 0, (a1 a2 − a3) > 0 and a3 (a1 a2 − a3) > 0. Clearly a1 > 0 and (a1 a2 − a3) > 0, and based on certain algebraic deductions applicable in this case, it can be verified that a3 (a1 a2 − a3) > 0 if α11 α33 > α13 α31. Therefore the roots of (4.1.2) are real and negative or complex conjugates having negative real parts, and the system is locally stable at the interior equilibrium point E8(N̄1, N̄2, N̄3) if α11 α33 > α13 α31.

4.2 The solution of the linearized system of equations:

The solutions of the linearized system (4.1) are given as

u1 = A1 e^(s1 t) + B1 e^(s2 t) + C1 e^(s3 t),   --- (4.2.1)
u2 = A2 e^(s1 t) + B2 e^(s2 t) + C2 e^(s3 t),   --- (4.2.2)
u3 = A3 e^(s1 t) + B3 e^(s2 t) + C3 e^(s3 t),   --- (4.2.3)

where s1, s2 and s3 are the roots of the characteristic equation (4.1.2), u10, u20 and u30 are the initial sizes of the perturbations u1, u2 and u3, and

A1 = [u10 (s1 + α22 N̄2)(s1 + α33 N̄3) − u20 α12 N̄1 (s1 + α33 N̄3) − u30 α13 N̄1 (s1 + α22 N̄2)] / [(s1 − s2)(s1 − s3)],

A2 = [u20 (s1 + α11 N̄1)(s1 + α33 N̄3) + u10 α21 N̄2 (s1 + α33 N̄3) − α13 N̄1 (u30 α21 N̄2 + u20 α31 N̄3)] / [(s1 − s2)(s1 − s3)],

A3 = [u30 (s1 + α11 N̄1)(s1 + α22 N̄2) − u10 α31 N̄3 (s1 + α22 N̄2) + α12 N̄1 (u30 α21 N̄2 + u20 α31 N̄3)] / [(s1 − s2)(s1 − s3)].

The coefficients B1, B2, B3 are obtained from A1, A2, A3 by replacing s1 with s2 and the denominator with (s2 − s1)(s2 − s3), and the coefficients C1, C2, C3 by replacing s1 with s3 and the denominator with (s3 − s1)(s3 − s2).
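Numerically, the spectral solution above can be reproduced without writing out the nine coefficients: diagonalizing the variational matrix gives u(t) = V e^{Λt} V⁻¹ u0, which is equivalent to (4.2.1)-(4.2.3). The sketch below does this with NumPy for the parameter values of Example 6.1 and the initial perturbations of Example 6.3; the eigendecomposition route is our implementation choice, not the paper's:

```python
import numpy as np

# Parameter values of Examples 6.1 / 6.3
a = np.array([4.0, 0.63, 2.0])
a11, a12, a13 = 0.1010, 0.4, 0.001
a22, a21 = 0.1010, 0.1010
a33, a31 = 0.5010, 0.6

# Interior equilibrium E8 (Section 3): a1 = a11 N1 + a12 N2 + a13 N3,
# a2 = a22 N2 - a21 N1, a3 = a33 N3 + a31 N1.
M = np.array([[a11, a12, a13],
              [-a21, a22, 0.0],
              [a31, 0.0, a33]])
N1, N2, N3 = np.linalg.solve(M, a)

# Variational matrix (4.1.1) evaluated at E8
A = np.array([[-a11 * N1, -a12 * N1, -a13 * N1],
              [a21 * N2, -a22 * N2, 0.0],
              [-a31 * N3, 0.0, -a33 * N3]])

s, V = np.linalg.eig(A)            # s_k are the roots of (4.1.2)
u0 = np.array([15.0, 1.0, 2.0])    # initial perturbations (u10, u20, u30)
c = np.linalg.solve(V, u0)         # expansion coefficients

def u(t):
    """u(t) = V diag(e^{s t}) V^{-1} u0, i.e. equations (4.2.1)-(4.2.3)."""
    return (V * np.exp(s * t)) @ c
```

All s_k have negative real parts here (α11 α33 > α13 α31, in line with §4.1), so the perturbations decay to zero.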

5. GLOBAL STABILITY
Theorem: The equilibrium point E8(N̄1, N̄2, N̄3) is globally asymptotically stable.
Proof: As Lyapunov function for E8, take

V(N1, N2, N3) = (N1 − N̄1) − N̄1 ln(N1/N̄1) + (N2 − N̄2) − N̄2 ln(N2/N̄2) + (N3 − N̄3) − N̄3 ln(N3/N̄3).   --- (5.1)

Differentiating V with respect to t, we get

dV/dt = (1 − N̄1/N1) dN1/dt + (1 − N̄2/N2) dN2/dt + (1 − N̄3/N3) dN3/dt   --- (5.2)
      = (N1 − N̄1)(a1 − α11 N1 − α12 N2 − α13 N3) + (N2 − N̄2)(a2 − α22 N2 + α21 N1) + (N3 − N̄3)(a3 − α33 N3 − α31 N1).

Estimating the cross terms with |xy| ≤ (x² + y²)/2 gives

dV/dt ≤ −[α11 − (α12 + α21)/2 − (α13 + α31)/2](N1 − N̄1)² − [α22 − (α12 + α21)/2](N2 − N̄2)² − [α33 − (α13 + α31)/2](N3 − N̄3)².   --- (5.3)

Hence dV/dt < 0 away from E8 whenever α11 > (α12 + α21)/2 + (α13 + α31)/2, α22 > (α12 + α21)/2 and α33 > (α13 + α31)/2, and under these conditions E8(N̄1, N̄2, N̄3) is globally asymptotically stable.

6. NUMERICAL EXAMPLES
Example 6.1: a1 = 4; a2 = 0.63; a3 = 2; α11 = 0.1010; α12 = 0.4; α13 = 0.001; α22 = 0.1010; α21 = 0.1010; α33 = 0.5010; α31 = 0.6. Graphs 6.1.1 and 6.1.2 show the variation with initial population sizes 15, 9 and 20 of the prey, predator and competitor respectively.

Fig 6.1.1: The variation of N1, N2 & N3 with respect to time t for the system of Eq. (2.1)

From the graph, N1, N2 & N3 converge with diminishing amplitude; as time increases, the population sizes tend to the equilibrium point.

Fig 6.1.2: The phase portrait of N1, N2 & N3 for the system of Eq. (2.1)

The graph shows the phase portrait of N1, N2 & N3. The curve is a concentric spiral: for the given parameter values, the system (2.1) is globally asymptotically stable and converges to the equilibrium point E (2.9961, 9.2322, 0.4540).

Example 6.2: a1 = 2; a2 = 1; a3 = 2; α11 = 0.1; α12 = 0.1; α13 = 0.1; α22 = 0.2; α21 = 0.1; α33 = 0.2; α31 = 0.1. Graphs 6.2.1 and 6.2.2 show the variation with initial population sizes 10, 15 and 20 of the prey, predator and competitor respectively.

Fig 6.2.1: The variation of N1, N2 & N3 with respect to time t for the system of Eq. (2.1)

From the graph, N1, N2 & N3 converge; as time increases, the population sizes tend to the equilibrium point.

Fig 6.2.2: The phase portrait of N1, N2 & N3 for the system of Eq. (2.1)

The graph shows the phase portrait of N1, N2 & N3: for the given parameter values, the system (2.1) is globally asymptotically stable and converges to the equilibrium point E (4.9821, 7.4866, 7.5134).

For the linearized system, exact solutions have been derived in equations (4.2.1)-(4.2.3). The trajectories of these solutions for the following parameter values are shown below.

Example 6.3: a1 = 4; a2 = 0.63; a3 = 2; α11 = 0.1010; α12 = 0.4; α13 = 0.001; α22 = 0.1010; α21 = 0.1010; α33 = 0.5010; α31 = 0.6. Graphs 6.3.1 and 6.3.2 show the variation with initial perturbation sizes 15, 1 and 2 of the prey, predator and competitor respectively.

Fig 6.3.1: The variation of u1, u2 & u3 with respect to time t for the system of Eq. (4.1)

From the graph, u1, u2 & u3 converge with diminishing amplitude; as time increases, they tend to the equilibrium point.

Fig 6.3.2: The phase portrait of u1, u2 & u3 for the system of Eq. (4.1)

The graph shows the phase portrait of u1, u2 & u3: for the given parameter values, the linearized system (4.1) converges to the interior equilibrium point and hence is asymptotically stable.

7. CONCLUSION
In this paper we have studied a three species ecological model with a prey, a predator and a competitor to the prey that is neutral with respect to the predator. All possible equilibrium points are identified, and the stability of the interior equilibrium point is discussed using the Routh-Hurwitz criteria for local stability and a Lyapunov function for global stability. The solutions of the linearized equations for the interior equilibrium point are determined and are represented graphically with a



suitable example. Further, the non-linear system of equations is examined for global stability with the help of suitable examples using Matlab. The analytical discussion and numerical examples show that the system is globally asymptotically stable.

REFERENCES
[1] Freedman, H. I., Deterministic Mathematical Models in Population Ecology, Dekker, New York, 1980.
[2] Kapur, J. N., Mathematical Modelling in Biology and Medicine, Affiliated East-West Press, 1985.
[3] Kapur, J. N., Mathematical Modelling, Wiley Eastern, 1985.
[4] Paul Colinvaux, Ecology, John Wiley and Sons Inc., New York, 1977.
[5] Lotka, A. J., Elements of Physical Biology, Williams and Wilkins, Baltimore, New York, 1925.
[6] Volterra, V., Leçons sur la théorie mathématique de la lutte pour la vie, Gauthier-Villars, Paris, 1931.
[7] Shiva Reddy, K., Lakshmi Narayan, K. and Pattabhiramacharyulu, N. Ch., A three species ecosystem consisting of a prey, predator and super predator, International Journal of Mathematics Applied in Science and Technology, Vol. 2, No. 1 (2010), pp. 95-107.
[8] Paparao, A. V. and Lakshmi Narayan, K., A prey-predator model with a cover linearly varying with the prey population and an alternative food for the predator, International Journal of Open Problems in Computer Science and Mathematics, Vol. 2, No. 2, June 2009.



Key Based Secure Image Encryption

Priyanath Mahanti
Department of Information Technology, MCKV Institute of Engineering, Howrah, West Bengal, India

ABSTRACT This paper presents a secure and fast image encryption scheme using an extremely sensitive chaotic logistic map and a Linear Feedback Shift Register (LFSR). The proposed algorithm generates the encryption key K from two sub-keys (K1 and K2) having a large key space. The first sub-key, K1, is generated from a user defined password and LFSR sequences using a nonlinear recurrence relation, and the second sub-key, K2, is generated from the user defined password and a chaotic signal. The results of several experimental and statistical analyses and key sensitivity tests show that the proposed image encryption scheme provides an effective and secure way for real-time image encryption.

Keywords: Security, Cryptography, Image Encryption, Image Decryption, Chaotic map, LFSR

1. INTRODUCTION
Nowadays the internet is used widely and digital information exchange over public networks has increased rapidly. Because of this, protection of digital information is a prime concern, to prevent illegal access to secret information. To date, various algorithms have been proposed for image encryption, and many researchers are now working on how images can be encrypted using chaotic signals; a variety of chaos-based cryptosystems with good cryptographic properties have been proposed. Soheil Fateri and Rasul Enayatifar proposed a new method for image encryption using a hybrid model of cellular automata and a logistic map function. Priyanath Mahanti proposed a method for binary image encryption based on a user password using an LFSR and a logistic map. Vinod Patidar, G. Purohit, K. K. Sud and N. K. Pareek proposed RGB image encryption using a novel permutation-substitution scheme based on the chaotic standard map.

1.1 Symmetric key cryptography
In a symmetric key encryption scheme, the encryption and decryption keys are the same, or it is computationally "easy" to determine the decryption key knowing only the encryption key and vice versa; thus the same key is used to encrypt and decrypt the message. Symmetric key cryptosystems are fast and efficient, especially if a large volume of data (e.g. an image) needs to be processed. On the other hand, the key distribution problem is inherently associated with symmetric key cryptosystems.

1.2 Public key cryptography
In a public key cryptosystem, two keys are used: each node has a key pair consisting of a private key and a public key. The public key is known to all nodes, while the private key is kept secret by the respective node only. A message is encrypted using the receiver's public key, and decryption is possible only with the receiver's private key. This method solves the key distribution problem, but the encryption and decryption processes take longer.

1.3 Chaotic signal and logistic map
The logistic map is one of the simplest forms of a chaotic signal. One of the main characteristics of chaotic behavior is the great dependency on initial conditions: a small difference between two initial conditions gives rise to a large trajectory difference after a few iterations. The Lyapunov exponent is a number that describes the dynamics of the orbit; it gives a notion of the divergence of nearby trajectories, presenting a method to quantify chaotic behavior. It is defined as

λ1 = (1/n) ln(|g′(x0) g′(x1) ⋯ g′(x_{n−1})|)   ………. (1.1)

where g is the relationship between the present and previous values of the trajectory. If λ1 is found to be positive, the map is said to be sensitive to the initial conditions; this is one of the most important indicators of chaos. The logistic map is given by the equation

X_{n+1} = r X_n (1 − X_n),  n ∈ Z   …………… (1.2)



where the symbol Z represents the set of positive integers. If the value of r lies between 3.57 and 4 (i.e. 3.57 ≤ r ≤ 4) and the initial value of X lies between 0 and 1 (i.e. 0 ≤ X0 ≤ 1), then equation (1.2) generates a chaotic signal with a white-noise-like property, random in nature. Equation (1.2) is sensitive to the parameter r and to the value of X0, i.e. a small variation of the initial value produces a signal which differs significantly. Equation (1.2) is one of the simplest forms of a chaotic signal. Basically this map, like any one-dimensional map, is a rule for getting a number from a number. The parameter r is fixed, but if one studies the map for different values of r (up to 4), it is found that r is the catalyst for chaos. In this paper, equation (1.2) is modified to achieve a logistic map extremely sensitive to the initial condition, by a change in the bifurcation parameter, as shown in equations (1.3) and (1.4) below:

X0 = (A² B − C²) − A   ….…..…… (1.3)
X_{n+1} = (A² B − X_n²) − A   …………… (1.4)

where n ∈ Z, A = 0.5, 3.57 ≤ B ≤ 4 and −0.5 ≤ C ≤ 0.5. Equations (1.3) and (1.4) together satisfy the following conditions:
 they generate a chaotic signal having a white-noise-like property;
 the signal is random in nature;
 its probability density function is uniformly distributed;
 it has a great dependency on the initial condition, i.e. on X0, as λ1 is close to 1 (one).
From Fig. 1, Fig. 2 and Fig. 3, the signal behavior with initial values A = 0.5, B = 3.999999999 and 3.999999998, with C = 0.3457, can be seen.

Fig. 1: The chaotic behaviour of Equation (1.3) & (1.4) in its 300 iterations

Fig. 2: Difference B/W and Auto Correlation of Map 1 & Map 2.

Fig. 3: Synchronisation and Cross Correlation of Map 1 & Map 2.

From Fig. 1 it is seen that a very small difference in the value of B generates a different chaotic map, which proves that the map depends greatly on its initial value. Fig. 3 shows the synchronisation and the cross correlation between the two maps. Table 1 below shows λ1 values (λ1 > 0 means chaotic), indicating that the chaotic map is sensitive to the initial conditions, for different combinations of initial values.

Sl. No.   A     B        C             λ1
1.        0.5   3.9999   0.123456789   0.99793
2.        0.5   3.9999   0.123456788   0.99801
3.        0.5   3.9999   0.234567899   0.99830
4.        0.5   3.7777   0.234567899   0.98659
5.        0.5   3.6666   0.234567899   0.97803

Table 1: λ1 value vs. initial conditions
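The λ1 values in Table 1 can be checked in the same spirit for the basic map (1.2). The sketch below estimates the Lyapunov exponent of the standard logistic map by averaging ln|g′(x)| along an orbit, with g′(x) = r(1 − 2x); the r and x0 values are illustrative choices:

```python
import math

def lyapunov(r, x0, n=100000, burn=1000):
    """Estimate (1.1) for the logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(burn):                       # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))   # ln|g'(x_i)|
        x = r * x * (1 - x)
    return acc / n
```

A positive estimate (about ln 2 ≈ 0.69 near r = 4) confirms chaotic behavior, while an r inside a periodic window yields a negative value.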

1.4 Linear feedback shift register sequences
A linear feedback shift register (LFSR) is a shift register whose input bit is a function of its previous state. The initial value of the LFSR is called the seed. An LFSR is a linear system, thus



the stream produce by LFSR is deterministic but an LFSR with suitable feedback function can produce an output stream which appears random with very long cycle. LFSRs have long been used as pseudo – random number generator for use in stream ciphers but unfortunately, LFSR encryption system succumbs easily to a known plaintext attack i.e. if an attacker knows only a few bits of the plain text along with the corresponding cipher text then attacker can easily determine the recurrence relation and therefore attacker can compute all subsequent bits of the key. Three general methods are use to reduce known plaintext attack are given below,  By using non-linear recurrences for example, Xn+3 ≡ Xn+2 Xn + Xn+1  Non-linear combination of several LFSRs.  Irregular clocking of the LFSRs. 2. PROPOSED WORK The proposed scheme, like other image encryption method, consists of two parts namely encryption and decryption. The simplified block diagram representing proposed scheme is shown in Fig. 4. Plain Image 3D Matrix (M*N*3)

Fig. 4. Simplified Block Diagram of Proposed Encryption/Decryption Scheme

2.1 The plain image
The plain colour image, having height H and width W, is the image which needs to be encrypted. The image is a 3D matrix with H × W × 3 integer elements, the values of the integers being 0 to 255.
2.2 Conversion of a 2D matrix from the 3D matrix
From the 3D matrix we form a 2D matrix such that
NH × NW = H × W × 3 ………(2.1)
where NH and NW are the height and width of the 2D matrix. The proposed algorithm tries to form a square matrix, in which case NH = NW. If that is not possible, it forms a rectangle such that |NW − NH| is minimum.
2.3 Substitution
In this step each integer Xi,j of the 2D matrix is replaced by another integer Ci,j, where 0 ≤ Ci,j ≤ 255, using equation (2.2); during decryption we obtain Xi,j from Ci,j using equation (2.3). Here M−1 is the multiplicative inverse of M modulo 256; M and M−1 are both relatively prime to 256 and are determined from the user password.
Ci,j = (Xi,j × M) mod 256 ………(2.2)
Xi,j = (Ci,j × M−1) mod 256 ………(2.3)
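The reshape of equation (2.1) and the substitution of equations (2.2)/(2.3) can be sketched as below. The value M = 77 is an arbitrary odd choice (hence coprime to 256), and the helper `best_2d_shape` is a hypothetical implementation of the "square, else minimal |NW − NH|" rule; the paper derives M from the user password.

```python
import math

def best_2d_shape(h, w):
    """Find NH, NW with NH * NW == h * w * 3 and |NW - NH| minimal (eq. 2.1)."""
    total = h * w * 3
    best = (1, total)
    for nh in range(1, math.isqrt(total) + 1):
        if total % nh == 0:
            best = (nh, total // nh)   # nh grows, so the last hit is best
    return best

M = 77                       # illustrative odd multiplier, coprime to 256
M_inv = pow(M, -1, 256)      # multiplicative inverse: (M * M_inv) % 256 == 1

def substitute(pixels, m):
    """Apply eq. (2.2) with multiplier m (eq. (2.3) uses the inverse)."""
    return [(x * m) % 256 for x in pixels]

plain = [0, 1, 17, 128, 255]           # a few sample pixel values
cipher = substitute(plain, M)          # eq. (2.2)
recovered = substitute(cipher, M_inv)  # eq. (2.3)
```

For a 256 × 256 colour image, `best_2d_shape(256, 256)` gives a 384 × 512 matrix, since 384 · 512 = 256 · 256 · 3.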

2.4 Interchange between rows and columns, and permutation
In this step, rows and columns are interchanged and then a permutation is applied; both operations are governed by the user password. During decryption, the reverse process is performed.

2.5 Key generation
The proposed algorithm generates the encryption key (K) from two sub-keys (K1 and K2). The first sub-key (K1) is generated from the user-defined password and Linear Feedback Shift Register (LFSR) sequences using a non-linear recurrence relation. From the user-given password (which can be of any length and any combination of characters, numbers and special characters), the seed of the LFSR is first generated; then, using this seed together with a complex, non-linear recurrence, a key K1 of the same size as the final key K is generated from the LFSR. K1 is


Priyanath Mahanti

now rounded off to an integer sequence according to equation (2.4):
K1 = Round(k1 mod 256) ………(2.4)
The second sub-key (K2) is generated using equations (1.3) and (1.4). K2 is then amplified by a scaling factor of 10^4 and rounded off to an integer sequence as per equation (2.5):
K2 = Round([k2 × 10^4] mod 256) ………(2.5)
Here the initial value of X (i.e. X0) and B used in equations (1.3) and (1.4) are determined by the user-selected password, such that X0 lies between 0 and 1, B lies between 3.57 and 4, and A = 0.5 always. K2 and K1 are of the same size. The final key (K) is generated from K1 and K2 as K = K1 XOR K2. K and the plain image are of the same size.
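The combination of the two sub-keys can be sketched as follows. The values in `k1` and the chaotic samples fed to `quantize_chaotic` are made-up stand-ins: in the paper, k1 comes from the LFSR recurrence and k2 from equations (1.3)/(1.4).

```python
# Sketch of the final-key construction K = K1 XOR K2 (eqs. 2.4, 2.5).
# k1 and the chaotic samples below are illustrative stand-ins only.

def quantize_chaotic(k2_values):
    """Scale chaotic-map outputs by 10**4 and reduce mod 256 (eq. 2.5)."""
    return [round(v * 10**4) % 256 for v in k2_values]

k1 = [12, 250, 7, 99]                     # stand-in integer LFSR sub-key
k2 = quantize_chaotic([0.25, 0.5, 0.75, 0.1])
K = [a ^ b for a, b in zip(k1, k2)]       # final key, same size as the image
```

Because XOR is self-inverse, an attacker would need both sub-keys, i.e. both the LFSR seed and the chaotic initial conditions, to reconstruct K.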

2.6 Encryption
Encryption is done by adding the key to the secret image bit by bit. This process is called exclusive-or and is denoted by XOR; in other words, it uses the rules 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1 and 1 + 1 = 0. The encrypted image Ci,j is
Ci,j = Xi,j XOR Ki,j ………(2.6)
As long as the user password is kept secret, the above cryptosystem is secure. During decryption, the decryption key (the same key, K) is again generated from the user-defined password, and the secret image is then recovered by XORing K with the encrypted image.

3. EXPERIMENTAL RESULT AND PERFORMANCE EVALUATION
The performance of the proposed cryptosystem is tested on a test image (Fig. 5) using many password combinations. The test image is the Lena colour image of size 256 × 256 × 3. The test image (Fig. 5) is encrypted using the password 'hello'. From the password 'hello' the final key (K) is generated, which is shown in Fig. 6. The test image is encrypted with key K, and the encrypted image is shown in Fig. 7. Fig. 8 to Fig. 10 show the decrypted images obtained using the passwords 'Hello', 'HELLO' and 'hello' respectively.

Fig. 5    Fig. 6    Fig. 7    Fig. 8    Fig. 9    Fig. 10
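Equation (2.6) is a plain XOR stream cipher: encryption and decryption are the same operation, so applying the key twice recovers the plain image. A minimal sketch with made-up pixel and key values:

```python
# XOR cipher of eq. (2.6); pixel and key values below are illustrative.

def xor_cipher(pixels, key):
    """Encrypt or decrypt: XOR each pixel with the matching key byte."""
    return [p ^ k for p, k in zip(pixels, key)]

plain = [12, 0, 255, 100, 200]
key = [137, 55, 9, 250, 31]
cipher = xor_cipher(plain, key)
assert xor_cipher(cipher, key) == plain   # decryption restores the image
```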

Extreme key sensitivity is an essential feature of any good cryptosystem, guaranteeing security against brute-force attack. The key sensitivity of the proposed cryptosystem is very high: a small change in the password produces a different seed for the LFSR and also a different initial value (X0) for equation (1.3). Combining both effects generates a different key (K'), and decryption using this key does not recover the original secret image, as shown in Fig. 8 and Fig. 9. The proposed cryptosystem also provides good strength against statistical attacks. Fig. 11 and Fig. 12 show the histograms of the test image (Lena image) and of the encrypted image. From Fig. 11 and Fig. 12 it is seen that the histogram of the encrypted image gives no clue about the secret image; thus the proposed scheme is, to some extent, secure against statistical attacks.
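The histogram test illustrated by Fig. 11/12 can be sketched numerically: the byte histogram of a well-encrypted image should be nearly flat, unlike that of a natural image. Here a seeded pseudo-random byte stream stands in for the encrypted image (an assumption, since the actual key stream is not reproduced here).

```python
# Histogram-flatness sanity check; a pseudo-random stream stands in for
# the encrypted image bytes.
import random
from collections import Counter

random.seed(1)
encrypted = [random.randrange(256) for _ in range(65536)]
hist = Counter(encrypted)

expected = len(encrypted) / 256               # 256 bytes per bin on average
spread = max(hist.values()) - min(hist.values())
```

For a flat histogram, `spread` stays small relative to the expected per-bin count; a natural image would show pronounced peaks instead.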

Fig. 11. Histogram of Lena Image



Fig. 12. Histogram of Encrypted Image

4. CONCLUSION
In this paper, I proposed a cryptosystem for colour images. Experimental results show that this cryptosystem is highly secure, but the user password needs to be kept secret, otherwise the system can be compromised. The key space of the proposed scheme is very large, so a brute-force attack is relatively hard. A symmetric-key encryption technique is mainly used, hence the encryption and decryption processes are fast. For password distribution, a public-key cryptosystem can be used.
ACKNOWLEDGEMENT
I would like to express my gratitude to Mr. Banibrata Patra, Associate System Engineer, IBM India Pvt. Ltd, Noida, for reviewing the manuscript.




Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Operations on Red Black Trees through R-Trains

K N S Deepthi

G. Ganesan

Department of Mathematics, Adikavi Nannaya University, Rajahmundry, Andhra Pradesh, India

ABSTRACT
The 'r-train' (or 'train' in short), a powerful and robust kind of data structure, offers flexible ways of storing data, in particular when the data are huge in number. The r-train is neither a dynamic array nor a HAT. Expansion of an r-train by insertion of new data (possibly a large amount of data) is very simple. Also, existing data can be deleted or replaced very easily. Searching for a data element residing (or not residing) in this data structure works very fast if parallel processing is done, which is simple here compared with the cases of other classical data structures. In this paper, we illustrate the effectiveness of this structure on Red Black Trees.
Keywords: r-train, HAT, Red Black Tree

1. INTRODUCTION

A data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address, a bit string that can itself be stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. Data structures can be classified in several ways, viz.:
 Primitive / Non-primitive
 Homogeneous / Heterogeneous
 Static / Dynamic
 Linear / Non-linear
There are some engineering situations that require more than simple rudimentary data structures like arrays, linked lists, hash tables, binary search trees, dictionaries, etc. Present-day databases of huge size, in many situations,

require a dash of creativity: a new or better-equipped, more powerful data structure of a rudimentary nature. There are some situations where we need to create an entirely new type of data structure of a generalized nature, and creating such a new but simple data structure is not always a straightforward task. The objective of this research work is to introduce a new and robust kind of dynamic data structure which encapsulates the merits of arrays and of linked lists, and at the same time inherits a reduced amount of their characteristic demerits. By 'dynamic' we mean that it changes over time; it is not static. Apparently this new data structure may seem to be of hybrid nature, a hybrid of linked list and array, but construction-wise (by virtue of its architecture) it is more. In the new data structure, namely the 'r-train' [3], a large number of data elements can be stored, and basic operations like insertion, deletion and search can be executed very easily; in particular, parallel programming can be done very easily, in many situations with improved time complexity and performance. The term 'train' was coined in [3] from the usual railway transportation system because of the many similarities in nature of the r-train object with an actual railway train; analogously, we use the terms coach, passenger, availability of seats, etc. in our discussion in this work. The notion of r-trains is not a direct generalization of that of linked lists. It is not just a linked list of arrays, as it may look; it is something more. However, every linked list is an example of a 1-train (i.e. an r-train with r = 1). The main aim of introducing the data structure 'r-train' is to store a large array in an alternative way. One basic demerit of the r-train, inherited from the properties of the array, is



that new data cannot be inserted between two existing consecutive data elements, because we preserve the unique index of each and every datum, which remains unaltered throughout time. However, the data structure r-train clearly dominates the array and the linked list when dealing with a huge number of data. It is neither a dynamic array [2] nor a HAT [4].

2. r-TRAIN (TRAIN): A NEW FLEXIBLE DYNAMIC DATA STRUCTURE
2.1 Larray: a data structure like Array
Before introducing the data structure 'r-train', we first need to introduce a very simple data structure called the 'larray' as a basic data structure. We then define the terms 'coach' and 'tagged coach', etc.
2.2 What is a Larray?
Larray is not a valid dictionary word; it stands for "Like ARRAY". A larray is like an array of elements of identical datatype, where zero or more of the elements may be the null element (i.e. no element, empty), denoted by є. Assuming that є is of the same datatype, the memory space reserved for it is the same as that required by any other element of the larray. The number of elements (including є) in a larray is called the length of the larray. Examples of larrays:
1) a = < 5, 2, є, 13, 25, є, 2 >
2) b = < 6, 9, 8 >
3) c = < є >
4) d = < є, є, є, є >
5) e = < 2.8, є, є, є, є >
6) f = < >, which is called the empty larray.
If all the elements of a larray are є, it is called a null larray, denoted by θ. Clearly θ can be of any length (any non-negative integer), and hence the null larray is not unique. As a special instance, the empty larray f above is also a null larray of length 0 (zero), i.e. with 0 (zero) є elements. But empty larray and null larray are not the same concept in this work: the empty larray is a unique object, unlike a null larray. In our theory, an object like k = < , , , , > is undefined and should not be confused with an empty or null larray. In the above examples, the larrays c and d are null larrays, but e is not. The larray f is an empty larray, but c and d are not. It is obvious that two distinct larrays may contain є elements of different datatypes, because the є elements of a larray are of the same datatype as the non-є elements of that larray (by virtue of its construction). For example, each є element in the larray a above requires 2 bytes in memory, whereas each є element in the larray e above requires 4 bytes.

Similarly, each є element in the larray d above requires as many bytes as defined at the time of creation of d. But in the case of larray d, we cannot extract any idea about the datatype of its є elements just by looking at it; in such a case we must have information from outside. By the name z of a larray z, we also mean the address of the larray z in memory, analogous to the case of arrays.
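The larray can be modelled as a fixed-length list with a distinct sentinel object standing in for the null element є. This is an illustrative Python model, not code from the paper:

```python
# Illustrative larray model: a list whose "null element" є is a unique
# sentinel object, so a stored 0 or None is never confused with є.

EPS = object()   # the null element є

def make_larray(*elems):
    return list(elems)

def is_null_larray(a):
    """A null larray: every element is є (the empty larray qualifies)."""
    return all(x is EPS for x in a)

a = make_larray(5, 2, EPS, 13, 25, EPS, 2)   # larray a from the examples
d = make_larray(EPS, EPS, EPS, EPS)          # a null larray of length 4
f = make_larray()                            # the empty larray

assert not is_null_larray(a)
assert is_null_larray(d) and is_null_larray(f)
```

Using a sentinel object rather than `None` mirrors the paper's point that є occupies a real slot of the element datatype.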

2.3 Coach:
By a coach C we mean a pair (A, e), where A is a non-empty larray (which could be a null larray) and e is an address in memory. Here e is basically a kind of link address: it gives the address of the next coach, and thus links two coaches. If the coach C is a single coach or the last coach, then the address field e is set to an invalid address; otherwise e is the address of the next coach. There is no confusion in differentiating the two objects e and є; in fact, a possible value of e can never be є for any coach. A coach can easily be stored in memory, with the data e residing next to the last element of the larray A. Logical diagrams of a coach are available on subsequent pages in Figures 1, 2, 3, 4 and 5.

Suppose that the larray A has m elements. If each element of A is of size x bytes and the data e requires two bytes of memory, then to store the coach C in memory exactly (m·x + 2) consecutive bytes are required, and the coach can be created by the programmer accordingly. The notion of the r-train is introduced in the next section. Before that, we define what is meant by the 'status' and the 'tag' of a coach of an r-train.



2.4 Status of a Coach & Tagged Coach (TC)
The status s of a coach is a non-negative integer variable equal to the number of є elements present in it (i.e. in its larray) at this point of time. Therefore 0 ≤ s ≤ r. The significance of the variable s is that it tells us the exact number of free spaces available in the coach at this point of time. If there is no є element in the larray of the coach C, then the value of s is 0 at this point of time. If C = (A, e) is a coach, then the corresponding tagged coach (TC) is denoted by (C, s), where s is the status of the coach. This means that C is a coach tagged with information on the total amount of available free space (termed here є elements) inside it at this time. For example, consider the examples of Section 2.2 above. Clearly, a TC with the larray a will be denoted by (C, 2), a TC with the larray b by (C, 0), a TC with the larray d by (C, 4), and so on. In our discussion here, without any confusion, the two statements "there is no є element in the larray" and "there is no є element in the coach" have identical meaning, where the larray is that of the coach.
2.5 r-Train (or Train): A New Data Structure
In this section, we introduce the new powerful, robust and flexible data structure 'r-train'. An r-train is basically a linked list of tagged coaches. This linked list is called the 'pilot' of the r-train. The implementation of this pilot in memory may also be done using the data structure array in some cases, which will be explained later in this paper. The pilot of a train is thus, in general, a linked list; but if we are sure that no extra coach will be required in future, it is better to implement the pilot as an array. Details about these operations are explained in subsequent sections.
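The coach, link and status notions of Section 2.4 can be modelled with a small sketch (an illustrative Python model; the paper does not prescribe this representation):

```python
# Illustrative coach model: a fixed-size larray plus a link field.
# є is a sentinel object; None plays the role of an invalid address.

EPS = object()

class Coach:
    def __init__(self, r):
        self.larray = [EPS] * r   # starts empty, so status == r
        self.next = None          # the link field e

    @property
    def status(self):
        """Number of є elements, i.e. free seats: 0 <= status <= r."""
        return sum(1 for x in self.larray if x is EPS)

c = Coach(3)
c.larray[0] = 5
c.larray[1] = 9
assert c.status == 1              # one free seat left, so (C, 1) as a TC
```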

The number of coaches in an r-train is called the 'length' of the r-train, which may increase or decrease from time to time. An 'r-train' may also be called simply a 'train' if there is no confusion.

An r-train T of length l (> 0) is denoted by T = < (C1, sC1), (C2, sC2), (C3, sC3), ……, (Cl, sCl) >, where the coach Ci is (Ai, ei), with Ai a larray of length r, ei the address of the next coach Ci+1 (or an invalid address if Ci is the last coach), and sCi the status of the coach Ci, for i = 1, 2, 3, …, l. For an r-train, START is the address of the pilot (viewing its implementation in memory as a linked list); thus START points at the data C1 in memory. The length l of the pilot can be any natural number, but the larrays of the TCs are each of fixed length r, storing data elements (including є elements) of a common datatype, where r is a natural number. Thus, by name, a 1-train, 60-train, 75-train, 100-train, etc. are instances of r-trains, whereas the term 0-train is undefined. The data structure r-train is new but very simple, a type of 2-tier data structure, with very convenient methods for executing various fundamental operations like insertion, deletion and searching on huge numbers of data, in particular for parallel computing. The most important characteristic of the data structure r-train is that it can likely store x elements of identical datatype even when an array of x such elements cannot be stored in memory at some moment of time. Also, the data can be accessed well by using the indices. In an r-train, the coach names C1, C2, C3, ……, Cl also mean the addresses (pointers), serially numbered as in arrays; for example, the array z = (5, 9, 3, 2) can easily be accessed by calling its name z only. The status of a coach reflects the availability of a seat (i.e. whether a data element can be stored in this coach now or not); status may vary with time. Each coach of an r-train can accommodate exactly r data elements, serially numbered, each data element being called a passenger.
Thus each coach of an r-train points at a larray of r passengers. By definition, the data ei is not a passenger for any i. Consider a coach Ci = (Ai, ei). In the larray Ai = < ei1, ei2, ……, eir >, the data element eij is called the "jth



passenger" or "jth data element", for j = 1, 2, 3, 4, …, r. Thus we can view an r-train as a linked list of larrays. Starting from any coach, one can visit the inside of all the subsequent coaches but not any of the previous coaches; the r-train is a forward linear object, not a circular one. It is neither a dynamic array nor a HAT. It has an added advantage over the HAT: starting from one data element, all the following data elements can be read without referring to any hash table or to the pilot.

The following figure shows an r-train with 50 coaches.

2.5.1 Example
Any linked list is a 1-train where each coach has a common status equal to zero (as each coach accommodates one and only one passenger). However, the converse is not necessarily true: if each coach of a train has status 0, it does not necessarily mean that the train is a 1-train (linked list). The notion of the 1-train is not a generalization of the 'linked list'.

Figure-2 [A train with 50 Coaches]

The following figure shows one coach [the ith coach] of an 11-train, where data are to be read clockwise starting from ei1.

2.5.2 Example
Consider a 3-train T of length 3 given by T = < (C1, 0), (C2, 1), (C3, 1) >, where C1 = < 5, 9, 2, e1 >, C2 = < 7, є, 8, e2 >, and C3 = < 4, 7, є, an invalid address >. Here e1 is the address of coach C2 (i.e. the address of larray A2), and e2 is the address of coach C3 (i.e. the address of larray A3). Since it is a 3-train, each coach Ci can accommodate exactly three passengers (including є elements, if any). In coach C1 the larray is A1 = < 5, 9, 2 >, which means that the first passenger is the integer 5, the second passenger is the integer 9, and the last/third passenger is the integer 2, the data e1 being the address of the next coach C2. Thus T is a larray of three TCs: (C1, 0), (C2, 1) and (C3, 1). The logical diagram of the above 3-train T is shown below, where data in coaches are to be read clockwise starting from e11 for coach C1, from e21 for coach C2, and from e31 for coach C3:

Figure-3 [A Coach Ci of an 11-train]

2.5.3 Full Coach
A coach is said to be a full coach if it does not contain any є, i.e., if its status s is 0. The physical significance of the variable 'status' is the availability of space at this point of time for storing a data element. In a full coach we cannot insert any more data (passengers) at this point of time (though it may become possible at a later time). Clearly, the coaches of any linked list (i.e., 1-train) are all full and always full.



2.5.4 Empty Coach
A coach is said to be an empty coach if every passenger in it is є, i.e., if the corresponding larray is a null larray. Thus, for any empty coach of an r-train, the status is equal to r. A coach may be neither empty nor full.

3. AN r-TRAIN IN MEMORY
Consider the data 9.2, 2.4, 1.4, 7.8, 6.5, 7.8, 4.3, 2.9, 6.8, 7.1, which are stored in the data segment of the 8086 memory using the data structure 3-train as shown below, starting from START = 0A02h.

The above 3-train is T = < (10A5h, 0), (BA98h, 0), (00ADh, 0), (E49Bh, 1) >, which is of length 4, where coach C1 begins at address 10A5h, coach C2 at address BA98h, coach C3 at address 00ADh, and coach C4 at address E49Bh. Here START = 0A02h, i.e. the pilot of this 3-train is stored at address 0A02h. Also, in this example the pilot is implemented as an array, not as a linked list.

4. FUNDAMENTAL OPERATIONS ON THE DATA STRUCTURE 'r-TRAIN'
The two fundamental operations on the data structure r-train are 'insertion' and 'deletion', which are defined below:

4.1 Insertion
There are two types of insertion operation in the data structure r-train:
(i) insertion (addition) of a new coach in an r-train;
(ii) insertion of a data element (passenger) in a coach of an r-train.
4.1.1 Insertion of a new coach in an r-train
Insertion of a new coach can be done at the end of the pilot, nowhere else. Consider the r-train T = < (C1, sC1), (C2, sC2), (C3, sC3), ……, (Ck, sCk) > with k coaches, where the coach Ci = (Ai, ei) for i = 1, 2, 3, …, k. After insertion of a new coach, the updated r-train immediately becomes T = < (C1, sC1), (C2, sC2), (C3, sC3), ……, (Ck, sCk), (Ck+1, r) >. Initially, at the time of insertion, we create Ck+1 as an empty coach (i.e. with status = r), which is likely to get filled up with passengers (data) later on. For insertion of a new coach Ck+1 in an r-train, we need to do the following:
(i) update the pilot (linked list);
(ii) update ek in Ck and make it equal to the address of Ck+1;
(iii) set ek+1,j = є for j = 1, 2, ……, r;
(iv) set ek+1 = an invalid address;
(v) set sCk+1 = r.
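Both insertion operations listed above, plus the coach-by-coach search described later in Section 4.2.3, can be sketched in a compact model. This is an illustrative Python model (pilot as a list, є as a sentinel), not the paper's C implementation:

```python
# Illustrative r-train model: the pilot is a list of fixed-size coaches.

EPS = object()   # the null element є

class RTrain:
    def __init__(self, r):
        self.r = r
        self.pilot = []                          # list of coach larrays

    def insert_coach(self):
        """4.1.1: append a new empty coach (status r) after the last one."""
        self.pilot.append([EPS] * self.r)

    def insert_element(self, i, x):
        """4.1.2: put x in the lowest-indexed є seat of coach i."""
        coach = self.pilot[i]
        for j, seat in enumerate(coach):
            if seat is EPS:
                coach[j] = x
                return True
        return False                             # coach full: insertion fails

    def status(self, i):
        """Number of free (є) seats in coach i."""
        return sum(1 for seat in self.pilot[i] if seat is EPS)

    def search(self, x):
        """4.2.3: scan coach by coach; return (coach, seat) or None."""
        for i, coach in enumerate(self.pilot):
            for j, seat in enumerate(coach):
                if seat is not EPS and seat == x:
                    return (i, j)
        return None

t = RTrain(3)
t.insert_coach()
t.insert_element(0, 5)
t.insert_element(0, 9)
assert t.status(0) == 1
assert t.search(9) == (0, 1)
```

Because each coach is an independent fixed-size block, the per-coach scans in `search` are the part that parallelises naturally, as the paper notes.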



4.1.2 Insertion of an element x inside the coach Ci = (Ai, ei) of an r-train
Insertion of an element (a new passenger) x inside the coach Ci is feasible if x is of the same datatype as the other passengers of the coach and if there is an empty space available inside the coach Ci. If the status of Ci is greater than 0, the data can be stored successfully in the coach; otherwise the insertion operation fails. After each successful insertion, the status s of the coach is updated by s = s − 1. For insertion of x, we can replace the lowest-indexed passenger є of Ci with x.

4.2 Deletion
There are two types of deletion operation in the data structure r-train:
(i) deletion of a data element (≠ є) from any coach of the r-train;
(ii) deletion of the last coach Ci, if it is an empty coach, from an r-train.
4.2.1 Deletion of a data element eij (≠ є) from the coach Ci of an r-train
Deletion of ei from the coach Ci is not allowed, as it is the link. But we can delete a data element eij from the coach Ci. Deletion of a data element (passenger) from a coach means replacement of the data by an є element (of the same datatype). Consequently, if eij = є, the question of deletion does not arise; it is pre-assumed here that eij is a non-є member element of the coach Ci. For j = 1, 2, ……, r, deletion of eij is done by replacing it with the null element є and updating the status s by s = s + 1. Deletion of a data element (passenger) does not affect the size r of the coach. For example, consider the tagged coach (Ci, m) where Ci = < ei1, ei2, ei3, ei4, ……, eir >. If we delete ei3 from the coach Ci, then the updated tagged coach will be (Ci, m+1) where Ci = < ei1, ei2, є, ei4, ……, eir >.
4.2.2 Deletion of the last coach Ci from an r-train
Deletion of coaches from an r-train is allowed from the last coach only, and in the backward direction, one after another. An interim coach cannot be deleted. The last coach Ci can be deleted only if it is an empty coach.

If the last coach is not empty, it cannot be deleted until all its passengers are deleted to make it empty. To delete the empty last coach Ci of an r-train, we do the following:
(i) update ei−1 of the coach Ci−1 by storing an invalid address in it;
(ii) delete (Ci, r) from the r-train T = < (C1, sC1), (C2, sC2), (C3, sC3), ……, (Ci−1, sCi−1), (Ci, r) > to get the updated r-train T = < (C1, sC1), (C2, sC2), (C3, sC3), ……, (Ci−1, sCi−1) >;
(iii) update the pilot.
4.2.3 Searching for a Passenger (data) in an r-train of length k
Searching for a data element x in an r-train T is very easy. If we know in advance the coach Ci of the passenger x, then by visiting the pilot we can enter the coach Ci of the r-train directly and read the data elements ei1, ei2, ……, eir of the larray Ai for a match with x. If we do not know the coach, we start searching from the coach C1 onwards up to the last coach; we need not go back to the pilot for any help. Here lies an important dominance of the data structure r-train over the data structure HAT introduced by Sitarski [4] in 1996. In a multiprocessor system the search can be done in parallel very fast, which is obvious from the physiology of the data structure r-train.

5. RED-BLACK TREES
A red-black tree [1] is a self-balancing binary search tree, a data structure used in computer science, typically to implement associative arrays. The original structure was invented in 1972 by Rudolf Bayer, who named it the "symmetric binary B-tree"; it acquired its modern name in a 1978 paper by Leonidas J. Guibas and Robert Sedgewick. It is complex, but has good worst-case running time for its operations and is efficient in practice: it can search, insert, and delete in O(log n) time, where n is the total number of elements in the tree. Put very simply, a red-black tree is a binary search tree that inserts and deletes in such a way that the tree is always reasonably balanced.

A red–black tree is a special type of binary tree, used in computer science to organize pieces of comparable data, such as text fragments or numbers. The leaf nodes of red–black trees do not contain data. These leaves need not be explicit in computer memory — a null child pointer can encode the fact that this child is a leaf — but it simplifies some algorithms for operating on red–black trees if the leaves really are explicit nodes. To save memory, sometimes a single sentinel node performs the role of all leaf nodes; all references from internal nodes to leaf nodes then point to the sentinel node.

Red-black trees, like all binary search trees, allow efficient in-order traversal (Left-Root-Right) of their elements. The search time results from the traversal from root to leaf, and therefore a balanced tree, having the least possible tree height, gives O(log n) search time.

5.1 Properties
A red-black tree is a binary search tree where each node has a color attribute, the value of which is either red or black. In addition to the ordinary requirements imposed on binary search trees, the following requirements apply to red-black trees:
1. A node is either red or black.
2. The root is black. (This rule is sometimes omitted from other definitions. Since the root can always be changed from red to black, but not necessarily vice versa, this rule has little effect on analysis.)
3. All leaves are the same color as the root.
4. Both children of every red node are black.
5. Every simple path from a given node to any of its descendant leaves contains the same number of black nodes.
These constraints enforce a critical property of red-black trees: the path from the root to the furthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is roughly balanced. Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height of the tree, this theoretical upper bound on the height allows red-black trees to be efficient in the worst case, unlike ordinary binary search trees.

6. IMPLEMENTATION OF r-TRAINS IN RED-BLACK TREES
Each coach is filled using the technique of red-black trees. If the number of elements to be stored is odd, then a single place is filled with 'void' (or empty); otherwise, the elements are filled in the clockwise direction without any empty spaces. The structure of a coach using red-black trees is shown below:

Figure-6: Structure of a coach in red-black trees
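The five properties above translate directly into a validity checker. The following sketch (the `Node` class and the sample trees are made up for illustration, not taken from the paper) treats the implicit `None` leaves as black, matching the convention described earlier:

```python
# Checker for the red-black properties on hand-built trees.

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color
        self.left, self.right = left, right

def black_height(n):
    """Black nodes on every root-to-leaf path; -1 if a property fails."""
    if n is None:
        return 1                              # null leaves count as black
    lh, rh = black_height(n.left), black_height(n.right)
    if lh < 0 or rh < 0 or lh != rh:
        return -1                             # property 5 violated below n
    if n.color == "red" and any(c.color == "red" for c in (n.left, n.right) if c):
        return -1                             # property 4: red node, red child
    return lh + (1 if n.color == "black" else 0)

def is_red_black(root):
    """Properties 1-5, including the black root (property 2)."""
    return root is not None and root.color == "black" and black_height(root) > 0

# A black root with two red children is a valid red-black tree.
t = Node(2, "black", Node(1, "red"), Node(3, "red"))
assert is_red_black(t)

# Two reds in a row violate property 4.
bad = Node(2, "black", Node(1, "red", Node(0, "red")), Node(3, "black"))
assert not is_red_black(bad)
```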



Figure 7: A coach with 12 elements

In this way a coach is filled by using the concept of red-black trees. After the declaration of the train size 'n', the coaches are formed, and the structure is represented as follows:

Figure-8: Structure of an r-train-like tree with 7 coaches

Fundamental operations like insertion, deletion, searching and display on this new data structure are described in the algorithm below.

6.1 Algorithm:
1. Start.
2. Read all the operations to be performed.
3. Insert elements into each coach using red-black trees.
4. Allocate the size of the train.
5. Insertion of a coach:
   Insertion of a coach at the root. Operations to be done:
      (i) r link = Null
      (ii) l link = Null
      (iii) Head = coach value
   Insertion of a coach in the middle:
      Check whether the tree is full or not.
      If the tree is full, the coach cannot be inserted;
      otherwise the coach can be inserted:
         Choose the location where the coach is to be inserted.
         Let the new coach be inserted between coaches A and B.
         Choose whether the new coach is to be inserted to the left or right of coach A.
         If left:
            l link(A) = address of new coach
            Choose whether coach B is added to the left or right of the new coach.
            If left: l link(new coach) = address of coach B
            If right: r link(new coach) = address of coach B
         Repeat the steps from 5.2.4 to 5.2.7 for the right option.
         If coach B is added to the left link of the new coach, then the right link of the new coach is null, and vice versa.
   Insertion of a new coach at the end:
      Check whether the tree is full or not.
      If the tree is full, the coach cannot be inserted;
      otherwise the coach can be inserted:
         Choose the leaf coach at which to insert the new coach; let S be the leaf coach.
         Now choose whether the new coach is inserted to the left or right of coach S.
         If left: l link(S) = address of new coach
         If right: r link(S) = address of new coach
      End of insertion operation.
6. Deletion of a coach:
   First check whether the tree is empty or not.
   If empty, the deletion operation cannot be done;
   otherwise deletion of a coach can be performed.
   Deletion of the root coach: the entire tree is deleted. The operations to be done are
      (i) l link = Null



(ii) r-link = Null
6.3 Deletion of a coach at the last position
6.3.1 Choose the leaf coach to be deleted. Let S be the coach to be deleted, which is connected to coach X.
6.3.2 First check whether the coach is connected at the left or right of coach X: if left, l-link(X) = Null; if right, r-link(X) = Null.
6.3.3 End of the deletion operation.
7. Searching of a coach: the searching operation can be performed using the binary search technique.
8. Display: using a while loop we can display all coaches from the root to the end coach.
9. End.

7. CONCLUSION
Today, as multiprocessor computer architectures that provide parallelism have become the dominant computing platform (through the proliferation of multi-core processors), the term has come to stand mainly for data structures that can be accessed by multiple threads, which may actually access the data simultaneously because they run on different processors that communicate with one another. There are many organizations in the present-day business world where large amounts of data must be dealt with. Insertion, deletion and searching operations are required to be fast even when the data are huge in number. Such situations require methods that work more efficiently than simple rudimentary data structures such as arrays, linked lists, hash tables, binary search trees, dictionaries, etc. Obviously, there is a need for a new, better-performing data structure which at the same time must be rudimentary in nature. Creating such a new but simple data structure is not always a straightforward task.

The r-train is neither a dynamic array nor a HAT. It is reasonably space efficient as well. Parallel processing will be easier for huge amounts of data stored using the data structure r-train. The pilot of a train is, in general, a linked list; but if we are sure that there will be no requirement of any extra coach in future, then it is better to implement the pilot as an array. Code in C or in any higher-level language for the operations, viz. Create_larray, Create_rtrain, Insert_Coach, Insert_element, Delete_element, Delete_last_coach, Search_element, etc., is not presented here; in fact, it is a straightforward task for programmers. In our next work, we will attempt to make the data structure "r-train" more flexible, more robust, more powerful and hence more useful by allowing different coaches to accommodate data of different datatypes, while all the data of a single coach will be of an identical datatype. By implementing this data structure in trees we can deal with larger amounts of data than the r-train by creating links to each coach.

REFERENCES
[1] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. Data Structures and Algorithms. Addison-Wesley, 1983.
[2] Michael T. Goodrich and John G. Kloss II. Tiered Vectors: Efficient Dynamic Arrays for Rank-Based Sequences. Workshop on Algorithms and Data Structures, 1663:205-216, 1999. doi:10.1007/3-540-48447-7_21. http://citeseer.ist.psu.edu/519744.html
[3] Ranjith Biswas. R-train: A New Flexible Dynamic Data Structure. An International Journal (Japan), Vol. 14(4), April 2011.
[4] Edward Sitarski. HATs: Hashed array trees. Algorithm Alley, Dr. Dobb's Journal, 21(11), September 1996.
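As a complement to Algorithm 6.1, the coach-and-link structure can be modeled in a few lines of Python. This is an illustrative sketch only: the class and function names are ours, and a sorted list stands in for the red-black tree inside each coach.

```python
class Coach:
    """One coach: its elements plus left/right links to other coaches."""
    def __init__(self, elements):
        self.elements = sorted(elements)  # stand-in for the coach's red-black tree
        self.l_link = None
        self.r_link = None

def insert_leaf_coach(leaf, new_coach, side):
    """Step 5 of Algorithm 6.1: attach a new coach at a leaf coach."""
    if side == "left":
        leaf.l_link = new_coach
    else:
        leaf.r_link = new_coach

def delete_leaf_coach(parent, side):
    """Step 6 of Algorithm 6.1: detach the leaf coach on the given side."""
    if side == "left":
        parent.l_link = None
    else:
        parent.r_link = None

def display(coach):
    """Step 8 of Algorithm 6.1: elements of all coaches from the root down."""
    if coach is None:
        return []
    return coach.elements + display(coach.l_link) + display(coach.r_link)
```

For example, attaching a coach holding {4, 5} to the left of a root coach holding {1, 2, 3} and calling display yields all five elements, root coach first.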


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Direct Similarity Analysis of a Nonlocal Gaseous Ignition Model R.Asokan Department of Mathematics, School of Mathematics, Madurai Kamaraj University, Madurai, INDIA.

ABSTRACT
The nonlinear parabolic equation ω_τ = ω_yy − (1/2) y ω_y + e^ω − 1 + e^{−τ} g(T − e^{−τ}), where T is the final time, modeling the nonlocal gaseous ignition problem, is subjected to the direct method of Clarkson and Kruskal [2] to obtain its similarity solution.

Keywords: Direct method, Nonlinear parabolic equation, Similarity transformation.

1. INTRODUCTION
The semilinear parabolic equation
ω_t = Δω + f(ω), ----(1)
where f(ω) is either e^ω or ω^p (p > 1) [1], models the thermal combustion process in a solid fuel, where heat transfer by conduction is constant and the reaction rate depends on temperature. For an ideal gaseous fuel in a bounded container, the motion caused by the compressibility of the gas leads to the addition of a nonlocal integral term that complicates the model. Indeed, the ignition period of a thermal event is modeled by the integro-parabolic equation [1]
ω_t = Δω + e^ω + ((γ − 1)/|Ω|) ∫_Ω e^ω dy, (x,t) ∈ Ω × (0, ∞), ----(2)
where Ω ⊂ R^n is a bounded open container, ω is the temperature perturbation of the gas, γ > 1 is the gas parameter and |Ω| = vol(Ω). Another model equation is
ω_t = Δω + e^ω + ((γ − 1)/|Ω|) ∫_Ω ω_t dy. ----(3)
However, (2) may be obtained from equation (3) by integrating the latter over Ω, applying Green's first identity to ∫ Δω dy and omitting the nonlocal gradient term. Thus the gaseous ignition model (2) depicts the thermal behaviour of a gaseous fuel for which the flux of the temperature's spatial rate of change across the container's boundary is negligible.

Souplet [5] considered the nonlocal parabolic equation
ω_t − Δω = ∫ e^{ω(y,t)} dy ----(4)
and proved that
lim_{t→T} ‖ω(t)‖_∞ / |log(T − t)| = 1, ----(5)
where T is the final time. Similar behaviour for equation (2), which includes the same nonlocal term as well as the ignition term e^ω, was also observed by Bricher [6]. Further, Bricher briefly reviewed comparison methods for nonlocal problems as discussed in [3] and used them to obtain monotonicity results for a class of integro-parabolic equations that includes (2), as well as a comparison result for solutions to (2) and (3). Bricher made the hot-spot change of variables
τ = −log(T − t), y = x (T − t)^{−1/2}, ω(y, τ) = ω(x, t) + log(T − t), ----(6)
in (2) in order to obtain the parabolic equation for ω(y, τ):
ω_τ = Δω − (1/2) y·∇ω + e^ω − 1 + e^{−τ} g(T − e^{−τ}), ----(7)
on the set (y, τ) ∈ B(τ) × (τ0, ∞), with ω → 0 as τ → ∞, using a stabilization technique. In the present paper, we give a detailed account of the similarity reductions of the parabolic equation (Stephen Bricher [6])


ω_τ = ω_yy − (1/2) y ω_y + e^ω − 1 + e^{−τ} g(T − e^{−τ}), ----(8)
by the direct method of Clarkson and Kruskal [2].

We transform (8) to
v_yy − (1/2) y v_y − (T − t) v_t + e^v − 1 + (T − t) g(t) = 0, ----(9)
through
t = T − e^{−τ}, ω(τ, y) = v(t, y). ----(10)
Further, equation (9) can be changed via u(y,t) = e^{v(y,t)} to
u u_yy − u_y² − (1/2) y u u_y − (T − t) u u_t + u³ − u² + (T − t) u² g(t) = 0. ----(11)
The scheme of the present paper is organized as follows: in Section 2 the direct method, presented in an easily applicable form by Sachdev and Mayil Vaganan [4], is used to reduce the nonlinear parabolic equation to an ODE; the conclusions of the present study are set forth in Section 3.

2. DIRECT SIMILARITY ANALYSIS OF (11)
Now we make the ansatz
u(y,t) = A(y,t) + B(y,t) f(z), B(y,t) ≠ 0, ----(12)
where z = z(y,t) is the similarity variable. Putting (12) into (11), we have
[A A_yy − A_y² − (1/2) y A A_y − (T − t) A A_t + A³ − A² + (T − t) A² g(t)]
+ [A B_yy + B A_yy − 2 A_y B_y − (1/2) y (A B_y + B A_y) − (T − t)(A B_t + B A_t) + 3A²B − 2AB + 2(T − t) A B g(t)] f
+ [2A B_y z_y + A B z_yy − 2 A_y B z_y − (1/2) y A B z_y − (T − t) A B z_t] f′
+ [2B B_y z_y + B² z_yy − 2B B_y z_y − (1/2) y B² z_y − (T − t) B² z_t] f f′
+ [B B_yy − B_y² − (1/2) y B B_y − (T − t) B B_t + 3AB² − B² + (T − t) B² g(t)] f²
− B² z_y² f′² + B³ f³ + A B z_y² f″ + B² z_y² f f″ = 0. ----(13)
For (13) to represent an ordinary differential equation (ODE) governing f(z), we introduce functions Γ_n(z), n = 1, 2, …, 8, in the following manner:
A A_yy − A_y² − (1/2) y A A_y − (T − t) A A_t + A³ − A² + (T − t) A² g(t) = B² z_y² Γ1(z), ----(14)
A B_yy + B A_yy − 2 A_y B_y − (1/2) y (A B_y + B A_y) − (T − t)(A B_t + B A_t) + 3A²B − 2AB + 2(T − t) A B g(t) = B² z_y² Γ2(z), ----(15)
2A B_y z_y + A B z_yy − 2 A_y B z_y − (1/2) y A B z_y − (T − t) A B z_t = B² z_y² Γ3(z), ----(16)
2B B_y z_y + B² z_yy − 2B B_y z_y − (1/2) y B² z_y − (T − t) B² z_t = B² z_y² Γ4(z), ----(17)
B B_yy − B_y² − (1/2) y B B_y − (T − t) B B_t + 3AB² − B² + (T − t) B² g(t) = B² z_y² Γ5(z), ----(18)
−1 = Γ6(z), ----(19)
B = z_y² Γ7(z), ----(20)
A = B Γ8(z). ----(21)
In view of (14)–(21), equation (13) takes the form
Γ1(z) + Γ2(z) f + Γ3(z) f′ + Γ4(z) f f′ + Γ5(z) f² + Γ6(z) f′² + Γ7(z) f³ + Γ8(z) f″ + f f″ = 0. ----(22)

The following remarks are used in the determination of A, B, z and Γ_n(z), n = 1, 2, …, 8, from the system (14)–(21):
Remark 1: If A(y,t) is found to have the form A(y,t) = Â(y,t) + B(y,t) Γ(z), then we may put Γ(z) ≡ 0.


Remark 2: If B(y,t) is found to have the form B(y,t) = B̂(y,t) Γ(z), then we may choose Γ(z) ≡ 1.
Remark 3: If z(y,t) is to be determined from a relation f(z) = ẑ(y,t), where f(z) is an invertible function, then without loss of generality we may take f(z) ≡ z.

Using Remark 1 in equation (21), we obtain Γ8(z) ≡ 0 and therefore
A = 0. ----(23)
Substituting (23) into equations (14)–(16) gives Γ1(z) = Γ2(z) = Γ3(z) = 0, and (18) becomes
B B_yy − B_y² − (1/2) y B B_y − (T − t) B B_t − B² + (T − t) B² g(t) = B² z_y² Γ5(z). ----(24)
By Remark 2, equation (20) gives Γ7(z) = 1 and therefore
B = z_y². ----(25)
We write (17) as
z_yy = (1/2) y z_y + (T − t) z_t + z_y² Γ4(z). ----(26)
Now inserting (25) and (26) into equation (24), we get
4 (z_yy / z_y²) Γ4(z) + 2 Γ4′(z) − 2 z_yy² / z_y⁴ + (T − t) g(t) / z_y² = Γ5(z). ----(27)

2.1. Case Γ4 = 0, Γ5 = l5
With Γ4 = 0 and z_yy = 0, equation (26) can be written as
(1/2) y z_y + (T − t) z_t = 0. ----(28)
The characteristic equations of (28) are
2 dy / y = dt / (T − t), dz = 0. ----(29)
Solving (29), we obtain
z = y (T − t)^{1/2} ----(30)
as the similarity variable. Inserting Γ4 = 0, Γ5 = l5 and (30) into (27), we get
g(t) = l5. ----(31)
Applying (30) in (25), we have
B(y,t) = T − t. ----(32)
Using (23) and (32) in equation (12), we get the similarity transformation of (11) as
u(y,t) = (T − t) f(z). ----(33)
Applying Γ1 = Γ2 = Γ3 = Γ4 = Γ8 = 0, Γ5 = l5, Γ6 = −1 and Γ7 = 1 in equation (22), we get the ordinary differential equation for f(z) appearing in the similarity transformation (33):
f f″ − f′² + l5 f² + f³ = 0. ----(34)
Setting f′ = p(f), so that f″ = p p′(f), equation (34) becomes
f p p′(f) = p² − l5 f² − f³. ----(35)
This can be written as
p′(f) = p/f + (−l5 f − f²) p^{−1}, ----(36)
which is of Bernoulli form. Its solution is obtained through the substitution
p² = w. ----(37)
Inserting (37) into (36), we obtain
w′(f) − (2/f) w = −2 (l5 f + f²). ----(38)
This is linear in w, with integrating factor 1/f². The general solution of equation (38) is
f′² = p² = w = M f² − 2 f² (l5 log f + f), ----(39)
where M is a constant of integration.

3. CONCLUSION
Despite the difficulties posed by (11), we have succeeded in solving it. Indeed, we reduced (11) to the ordinary differential equation (34), whose general solution was obtained in (39).

REFERENCES
[1] Bebernes, J. and Eberly, D., Mathematical Problems from Combustion Theory, Appl. Math. Sci. 83, Springer-Verlag, New York, 1989.
[2] Clarkson, P.A. and Kruskal, M.D., New similarity reductions of the Boussinesq equation, J. Math. Phys. 30, 2201-2213, 1989.
[3] Langlais, M. and Phillips, D., Stabilization of solutions of nonlinear and degenerate evolution equations, Nonlinear Anal. Methods Appl. 9, 321-333, 1985.
[4] Sachdev, P.L. and Mayil Vaganan, B., Exact free surface flows for shallow water equations: Part I, The incompressible case, Stud. Appl. Math. 93, 251-274, 1994.
[5] Souplet, P., Uniform blow up profiles and boundary behavior for diffusion equations with nonlocal source, J. Diff. Eqs. 153, 374-406, 1999.
[6] Stephen Bricher, Total versus single point blow up for a nonlocal gaseous ignition model, Rocky Mountain J. Math. 32, 2543, 2002.
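The two key steps of the reduction, namely the similarity variable (30) and the first integral (39), can be checked symbolically. The following sympy script (ours, not part of the paper) verifies that z = y(T − t)^{1/2} satisfies (28) and that (39) solves the ODE (34):

```python
import sympy as sp

# Check (30): z = y*sqrt(T - t) satisfies (28): (1/2) y z_y + (T - t) z_t = 0.
y, t, T = sp.symbols('y t T', positive=True)
z = y * sp.sqrt(T - t)
lhs28 = sp.Rational(1, 2) * y * sp.diff(z, y) + (T - t) * sp.diff(z, t)
assert sp.simplify(lhs28) == 0

# Check (39): f'^2 = w(f) = M f^2 - 2 f^2 (l5*log f + f) solves (34):
# f f'' - f'^2 + l5 f^2 + f^3 = 0.  Differentiating f'^2 = w(f) with respect
# to z gives 2 f' f'' = w'(f) f', i.e. f'' = w'(f)/2.
F, M, l5 = sp.symbols('F M l5', positive=True)
w = M * F**2 - 2 * F**2 * (l5 * sp.log(F) + F)
fpp = sp.diff(w, F) / 2
residual = sp.expand(F * fpp - w + l5 * F**2 + F**3)
assert residual == 0
```

Both residuals vanish identically, confirming the hand computation above.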


Probabilistic Differential Privacy for Mining Frequent Item-sets From Log Records S.GuruRaja

B.Leela Padmavathi

Department of Computer Science and Engineering, Kandula Sreenivasa Reddy Memorial College of Engineering, Kadapa, Andhra Pradesh, India.

ABSTRACT
This paper enables search engine companies to make their search logs available to researchers without disclosing their users' sensitive information: search engine companies can apply our algorithm to generate statistics that are probabilistic differentially private while retaining good utility. Beyond publishing search logs, we believe that our findings are of interest for mining frequent item-sets, as our algorithm protects privacy against much stronger attackers than those considered in existing work on privacy-preserving mining of frequent items/item-sets.

Keywords: web mining, user history, histograms, privacy, search log, frequent itemsets.

1. INTRODUCTION
Today's search engines do not just collect and index web pages, they also collect and mine information about their users. They store the queries, clicks, IP-addresses, and other information about the interactions with users in what is called a search log. Search logs contain valuable information that search engines use to shape their services better to their users' needs. They enable the discovery of trends, patterns, and anomalies in the search behavior of users, and they can be used in the development and testing of new algorithms to improve search performance and quality. Scientists all around the world would like to tap this gold mine for their own research; search engine companies, however, do not release search logs because they contain sensitive information about their users, for example searches for diseases, lifestyle choices, personal tastes, and political affiliations.

We show that the uses of k-anonymity [12] in search logs [13], [14], [15], [16] are insufficient in the light of attackers who can actively influence the search log. We then turn to probabilistic

differential privacy [11], a much stronger privacy guarantee; we show that it is nevertheless possible to achieve good utility with probabilistic differential privacy. We then describe a Privacy-Preserving Algorithm, developed independently by Korolova et al. [9] and others [4] with the goal of achieving relaxations of k-anonymity. We offer a new analysis that shows how to set the parameters of the algorithm to guarantee probabilistic differential privacy, a much stronger privacy guarantee, as our analytical comparison shows. Beyond publishing search logs, our results are of interest for mining frequent itemsets: the algorithm protects privacy against much stronger attackers than those considered in existing work on privacy-preserving publishing of frequent items/itemsets [10]. Our paper concludes with an extensive experimental evaluation that guarantees privacy in search log publishing and also in mining frequent itemsets.

2. PRELIMINARIES
In this section we introduce the problem of mining frequent keywords, queries, clicks and other items of log records.

2.1 Search Logs
When a user submits a query and clicks on one of the results, a new entry is added to the search log. Without loss of generality, we assume that a search log has the following schema: (USER-ID, QUERY, CLICKS), where a USER-ID identifies a user, a QUERY is a set of keywords, and CLICKS is a list of urls that the user clicked on. A user history or search history consists of all search entries from a single user. Such a history is usually partitioned into sessions containing similar queries. A query


pair consists of two subsequent queries from the same user that are contained in the same session.

We say that a user history contains a keyword k if there is a search log entry such that k is a keyword in the query of the search log. A keyword histogram of a search log S records, for each keyword k, the number of users c_k whose search history in S contains k. A keyword histogram is thus a set of pairs (k, c_k). We classify a keyword in a histogram as frequent if its count exceeds some predefined threshold τ; when we do not want to specify whether we count keywords, queries, etc., we also refer to these objects as items.

With this terminology, we can define our goal as mining frequent items (utility) without disclosing sensitive information about the users (privacy).

2.2 Disclosure Limitations for Publishing Search Logs
A simple type of disclosure is the identification of a particular user's search history in the published search log. The concept of k-anonymity has been introduced to avoid such identifications.

2.2.1 Definition (k-anonymity [12]): A search log is k-anonymous if the search history of every individual is indistinguishable from the history of at least k − 1 other individuals in the published search log.

There are several proposals in the literature to achieve different variants of k-anonymity for search logs. Motwani and Nabar add or delete keywords from sessions until each session contains the same keywords as at least k − 1 other sessions in the search log [14], followed by a replacement of the user-id by a random number. We call the output of this algorithm a k-session anonymous search log. He and Naughton generalize keywords by taking their prefix until each keyword is part of at least k search histories and publish a histogram of the partially generalized keywords [15]. We call the output a k-keyword anonymous search log. Efficient ways to anonymize a search log are also discussed in work by Yuan et al. [16].

Stronger disclosure limitations try to limit what an attacker can learn about a user. Probabilistic differential privacy is a very strong guarantee, and in some cases it can be too strong to be practically achievable.

2.2.2 Definition (probabilistic differential privacy [11]): An algorithm A satisfies (ε, δ)-probabilistic differential privacy if for all search logs S we can divide the output space Ω into two sets Ω1, Ω2 such that
Pr[A(S) ∈ O] ≤ e^ε · Pr[A(S′) ∈ O] ........ (2.1)
for all neighboring search logs S, S′ and for all O ⊆ Ω1, and
Pr[A(S) ∈ Ω2] ≤ δ. ........ (2.2)

This definition guarantees that the algorithm achieves ε-differential privacy with high probability (≥ 1 − δ). The set Ω2 contains all outputs that are considered privacy violations according to ε-differential privacy; the probability of such an output is bounded by δ.

3. UTILITY MEASURES
We can compute the utility of an algorithm producing sanitized search logs both theoretically and experimentally.

3.1 Theoretical Utility Measures
For simplicity, suppose we want to mine all items (such as keywords, queries, etc.) with frequency at least τ in a search log; we call such items frequent items and all other items infrequent items. Consider a discrete domain of items D. Each user contributes a set of these items to a search log S. We denote by f_d(S) the frequency of item d ∈ D in search log S. An algorithm may make mistakes for items with frequency very close to τ, and thus we do not take these items into account in our notion of accuracy. We formalize this "slack" by a parameter ξ. We call an item d with frequency f_d ≥ τ + ξ a very-frequent item and an item d with frequency f_d ≤ τ − ξ a very-infrequent item.

We can then measure the inaccuracy of an algorithm using only its inability to retain the very-frequent items and its inability to filter out the very-infrequent items.
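The very-frequent / very-infrequent classification above is easy to operationalize; a small helper (illustrative only, the naming is ours):

```python
def classify_items(freqs, tau, xi):
    """Partition items by the slack parameter xi of Section 3.1.

    freqs maps an item d to its frequency f_d(S); returns the
    (very-frequent, very-infrequent, borderline) item sets.
    """
    very_frequent = {d for d, f in freqs.items() if f >= tau + xi}
    very_infrequent = {d for d, f in freqs.items() if f <= tau - xi}
    borderline = set(freqs) - very_frequent - very_infrequent
    return very_frequent, very_infrequent, borderline
```

Items in the borderline band (τ − ξ, τ + ξ) are exactly those for which an algorithm is allowed to err.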


3.2 Experimental Utility Measures
Traditionally, the utility of a privacy-preserving algorithm has been evaluated by comparing some statistics of the input with the output to see "how much information is lost". We use two real applications from the information retrieval community: query substitution, as a representative application for search quality, and index caching, as a representative application for search performance. For both applications the sufficient statistics are histograms of keywords, queries, or query pairs.

3.2.1 Query Substitution: We use an algorithm developed by Jones et al. in which related queries for a query are identified in two steps [6]. First, the query is partitioned into subsets of keywords, called phrases, based on their mutual information. Next, for each phrase, candidate query substitutions are determined based on the distribution of queries. We run this algorithm to generate ranked substitutions on the sanitized search logs. We then compare these rankings with the rankings produced by the original search log, which serve as ground truth.

Query substitutions are suggestions to rephrase a user query to match it to documents or advertisements that do not contain the actual keywords of the query. Query substitutions can be applied in query refinement, sponsored search, and spelling error correction [6]. Algorithms for query substitution examine query pairs to learn how users re-phrase queries.

3.2.2 Index Caching: Search engines maintain an inverted index which, in its simplest instantiation, contains for each keyword a posting list of identifiers of the documents in which the keyword appears. This index can be used to answer search queries, but also to classify queries for choosing sponsored search results. The index is usually too large to fit in memory, but maintaining a part of it in memory reduces response time for all these applications. We use the formulation of the index caching problem from Baeza-Yates [2]. We are given a keyword search workload, a distribution over keywords indicating the likelihood of a keyword appearing in a search query. It is our goal to cache in memory a set of posting lists that for a given workload maximizes the cache-hit probability while not exceeding the storage capacity. Here the hit probability is the probability that the posting list of a keyword can be found in memory given the keyword search workload.

In this paper, we use an improved version of the algorithm developed by Baeza-Yates to decide which posting lists should be kept in memory [2]. Our algorithm first assigns each keyword a score, which equals its frequency in the search log divided by the number of documents that contain the keyword. Keywords are chosen using a greedy bin-packing strategy where we sequentially add posting lists for the keywords with the highest scores until the memory is filled. Our inverted index stores the posting list for each keyword sorted according to document relevance, which allows retrieving documents in the order of their relevance. Hence, for an incoming query the search engine retrieves the posting list for each keyword in the query either from memory or from disk. If the intersection of the posting lists happens to be empty, then less relevant documents are retrieved from disk for those keywords for which only a truncated posting list is kept in memory.

4. NEGATIVE RESULTS

In this section we discuss the deficiency of the existing privacy model for search log publication, i.e., k-anonymity.

4.1 Insufficiency of k-Anonymity
k-anonymity and its variants prevent an attacker from uniquely identifying the user that corresponds to a search history in the sanitized search log. Nevertheless, even without unique identification of a user, an attacker can infer the keywords used by the user. k-anonymity does not protect against this severe information disclosure.

5. ACHIEVING PRIVACY
We introduce a mining algorithm called the Privacy-Preserving Algorithm. This algorithm ensures probabilistic differential privacy and follows a simple two-phase framework. In the first phase, it generates a histogram of items in the input search log and then removes from the histogram the items with frequencies below a threshold. In the second phase, it adds noise to the histogram counts and eliminates the items


whose noisy frequencies are smaller than another threshold. The resulting histogram (referred to as the sanitized histogram) is then returned as the output.

5.1 Algorithm
The Privacy-Preserving Algorithm for mining frequent items of a search log (Fig. 1).

5.1.1 Input: a search log S and positive numbers m, τ, τ'.

5.1.2 Steps:
1. For each user u select a set s_u of up to m distinct items from u's search history in S.
2. Based on the selected items create a histogram consisting of pairs (k, c_k), where k denotes an item and c_k denotes the number of users u that have k in their search history s_u. We call this histogram the original histogram.
3. Delete from the histogram the pairs (k, c_k) with count c_k smaller than τ.
4. For each pair (k, c_k) in the histogram, sample a random number η_k from the Laplace distribution Lap(λ), and add η_k to the count c_k, resulting in a noisy count c̃_k ← c_k + η_k.
5. Delete from the histogram the pairs (k, c̃_k) with noisy counts c̃_k ≤ τ'.
6. Publish the remaining items and their noisy counts.

Fig. 1 Privacy-Preserving Algorithm

To understand the purpose of the various steps, one has to keep in mind the privacy guarantee we would like to achieve. Steps 1, 2 and 4 of the algorithm are fairly standard: it is known that adding Laplacian noise to histogram counts achieves ε-differential privacy. However, these steps alone result in poor utility, because for large domains many infrequent items will have high noisy counts. To deal better with large domains, we restrict the histogram to items with counts of at least τ in Step 3. This restriction leaks information, and thus the output after Step 4 is not ε-differentially private; one can show that it is not even (ε, δ)-probabilistic differentially private (for δ < 1/2). Step 5 masks the information leaked in Step 3 in order to achieve probabilistic differential privacy.
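Steps 1-6 can be sketched in a few lines of Python. This is an illustrative sketch only: the function name and interface are ours, and the Laplace noise is drawn as the difference of two exponentials because the standard library has no Laplace sampler.

```python
import random
from collections import Counter

def privacy_preserving_histogram(user_histories, m, tau, tau_prime, lam, seed=None):
    """Sketch of Steps 1-6 of Section 5.1.

    user_histories maps a user id to the items of that user's history;
    lam is the scale of the Laplace noise Lap(lam).
    """
    rng = random.Random(seed)
    # Steps 1 + 2: each user contributes at most m distinct items; count users per item.
    histogram = Counter()
    for items in user_histories.values():
        histogram.update(sorted(set(items))[:m])  # any choice of up to m items
    # Step 3: keep only pairs (k, c_k) with c_k >= tau.
    kept = {k: c for k, c in histogram.items() if c >= tau}
    # Step 4: add Laplace(lam) noise; the difference of two iid exponentials
    # with mean lam is Laplace-distributed with scale lam.
    noisy = {k: c + rng.expovariate(1 / lam) - rng.expovariate(1 / lam)
             for k, c in kept.items()}
    # Steps 5 + 6: publish items whose noisy count exceeds tau_prime.
    return {k: c for k, c in noisy.items() if c > tau_prime}
```

For the published counts to actually guarantee (ε, δ)-probabilistic differential privacy, λ and τ' must be chosen as a function of ε, δ and m, as analyzed in Section 6.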

6. CHOOSING PARAMETERS
Apart from the privacy parameters ε and δ, the algorithm requires the data publisher to specify two more parameters: τ, the first threshold used to eliminate keywords with low counts (Step 3), and m, the number of contributions per user. These parameters affect both the noise added to each count and the second threshold τ'.

Privacy parameters: in this paper we set δ = 0.001; thus the probability that the output of the algorithm could violate the privacy of any user is very small. We explore different levels of (ε, δ)-probabilistic differential privacy by varying ε.

6.1 Choosing Threshold τ
The following optimality result tells us how to choose τ in order to maximize utility.

6.1.1 Proposition: For fixed ε, δ and m, choosing τ = ……….. (6.1) minimizes the value of τ'.

6.2 Choosing the Number of Contributions m
Here we discuss how to set m optimally. We will do so by studying the effect of varying m on the coverage and the precision of the top-j most frequent items in the sanitized histogram. The top-j coverage of a sanitized search log is defined as the fraction of distinct


items among the top-j most frequent items in the original search log that also appear in the sanitized search log. The top-j precision of a sanitized search log is defined as the distance between the relative frequencies in the original search log versus the sanitized search log for the top-j most frequent items. In particular, we study two distance metrics between the relative frequencies: the average L-1 distance and the KL-divergence. As a study of coverage, Table 1 shows the number of distinct items (recall that items can be keywords, queries, query pairs, or clicks) in the sanitized search log as m increases. We observe that coverage decreases as we increase m.

TABLE 1. Distinct item counts with different m

The number of distinct keywords decreases by 55% while at the same time the number of distinct query pairs decreases by 96% as we increase m from 1 to 40. This trend has two reasons. First, from Proposition 6.1.1 we see that the threshold τ' increases super-linearly in m. Second, as m increases, the number of keywords contributed by the users increases only sub-linearly in m; fewer users are able to supply m items for increasing values of m. Hence, fewer items pass the threshold τ' as m increases. The reduction is larger for query pairs than for keywords, because the average number of query pairs per user is smaller than the average number of keywords per user in the original search log (shown in Table 2).

TABLE 2. Average numbers of items per user in the original search log

To understand how m affects precision, we measure the total sum of the counts in the sanitized histogram as we increase m in Table 1. Higher total counts offer the possibility to match the original distribution at a finer granularity. We observe that as we increase m, the total counts increase until a tipping point is reached, after which they start decreasing again. This effect is expected for the following reason: as m increases, each user contributes more items, which leads to higher counts in the sanitized histogram. However, the total count increases only sub-linearly with m (and eventually decreases) due to the reduction in coverage. We found that the tipping point where the total count starts to decrease corresponds approximately to the average number of items contributed by each user in the original search log. This suggests that we should choose m to be smaller than the average number of items, because it offers better coverage, higher total counts, and reduced noise compared to higher values of m. Note that m = 1 does not always give the best precision; for keywords, m = 8 has the lowest KL-divergence. As we can see from these results, there are two "regimes" for setting the value of m. If we are mainly interested in coverage, then m should be set to 1. However, if we are only interested in a few top-j items, then we can increase precision by choosing a larger value for m; in this case we recommend the average number of items per user. The index caching application does not require high coverage because of its storage restriction; however, high precision of the top-j most frequent items is necessary to determine which of them to keep in memory. On the other hand, in order to generate many query substitutions, a larger number of distinct queries and query pairs is required. Thus m should be set to a large value for index caching and to a small value for query substitution.
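The top-j coverage measure used above can be stated compactly (illustrative helper, our naming):

```python
def top_j_coverage(original, sanitized, j):
    """Top-j coverage (Section 6.2): the fraction of the top-j most
    frequent items of the original histogram that also appear in the
    sanitized histogram.
    """
    top = sorted(original, key=original.get, reverse=True)[:j]
    return sum(1 for k in top if k in sanitized) / j
```

For instance, if only one of the two most frequent original items survives sanitization, the top-2 coverage is 0.5.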

Input Screen 1

77

S.GuruRaja B.Leela Padmavathi

Input Screen 2

7.3 Output Screens K-anonymity based itemsets:

Input Screen 3

Output Screen 1

After compare with T (i.e. 2) Input Screen 4

Output Screen 2

Output Screen 3

Input Screen 5

After adding random value 3 to ck then ck` value=6

7.2 Input Values

Output Screen 4

After compare with T` (i.e. 4) 78

Probabilistic Differential Privacy for Mining Frequent Item-sets From Log Records

frequent items while filtering out all infrequent items.

Output Screen 5

7.4 Output Graph

10. FUTURE WORK A topic of future work is the development of algorithms that allow publishing useful information about infrequent items in a search log. REFERENCES [1] Jiawei Han and Micheline Kamber. Data

Graph 1. Frequent Itemset (i.e. Madurai university) is displayed

8. BEYOND SEARCH LOGS While the main focus of this project report is search logs, our results apply to other scenarios as well. For example, consider a retailer who collects customer transactions. Each transaction consists of a basket of products together with their prices, and a time-stamp. In this case algorithm can be applied to publish frequently purchased products or sets of products. This information can also be used in a recommender system or in a market basket analysis to decide on the goods and promotions in a store [1]. Another example, concerns monitoring the health of patients. Each time a patient sees a doctor, the doctor records the diseases of the patient and the suggested treatment. It would be interesting to publish frequent combinations of diseases. 9. CONCLUSION Our probabilistic differentially private Privacy–Preserving algorithm is able to retain

Mining: Concepts and Techniques. Morgan Kaufmann, 1st edition, September 2000. [2] Roberto Baeza-Yates. Web usage mining in search engines. Web Mining: Applications and Techniques, 2004. [3] Charu C Aggarwal and Philips Y U. Privacy-Preserving Data Mining: Models & Algorithms. Kluwer Academic Publishers. [4] Michaela Gotz, Ashwin Machanavajjhala, Guozhang Wang, Xiaokui Xiao, and Johannes Gehrke. Privacy in search logs. CoRR, abs/0904.0682v2, 2009. [5] R. C.-W. Wong, A. W.-C. Fu, K. Wang, and J. Pei, “Minimality attack in privacy preserving data publishing,” in VLDB, 2007, pp. 543–554. [6] Rosie Jones, Benjamin Rey, Omid Madani, and Wiley Greiner.Generating query substitutions. In WWW, 2006. [7] Ravi Kumar, Jasmine Novak, Bo Pang, and Andrew Tomkins. On anonymizing query logs via token-based hashing. In WWW, 2007. [8] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT, 2006. [9] Aleksandra Korolova, Krishnaram Kenthapadi, Nina Mishra, and Alexandros Ntoulas. Releasing search queries and clicks privately. In WWW, 2009. [10] Yongcheng Luo, Yan Zhao, and Jiajin Le. A survey on the privacy preserving algorithm of association rule mining. Electronic Commerce and Security, International Symposium, 1:241–245, 2009. [11] Ashwin Machanavajjhala, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. Privacy: Theory meets practice on the map. In ICDE, 2008. [12] Pierangela Samarati. Protecting respondents’ identities in microdata release. IEEE Trans. on Knowl & Data Eng.,13(6):1010–1027, 2001. 79

S.GuruRaja B.Leela Padmavathi

[13] Eytan Adar. User 4xxxxx9: Anonymizing query logs. In WWW Workshop on Query Log Analysis, 2007.
[14] Rajeev Motwani and Shubha Nabar. Anonymizing unstructured data. arXiv, 2008.
[15] Yeye He and Jeffrey F. Naughton. Anonymization of set-valued data via top-down, local generalization. PVLDB, 2(1):934-945, 2009.
[16] Yuan Hong, Xiaoyun He, Jaideep Vaidya, Nabil Adam, and Vijayalakshmi Atluri. Effective anonymization of query logs. In CIKM, 2009.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Performance Characterization of Single Vacation Queue with Change over Time

P. Vijaya Laxmi#1

Seleshi Demie#2

Department of Applied Mathematics, Andhra University, Visakhapatnam, India

ABSTRACT
This paper analyzes an infinite buffer single server queue with single vacation and change over time. The inter-arrival, service and vacation times are exponentially distributed. The server begins service if there are at least c units in the queue. The service takes place in batches with a minimum of size a and a maximum of size b (a ≤ c ≤ b). We obtain closed-form expressions for the steady state probabilities and derive the expected lengths of the busy period, idle period and busy cycle. Some numerical results and the effect of important parameters on the performance measures are presented.
Keywords: Queue, Single vacations, Change over time, Infinite buffer, Busy period.

1. INTRODUCTION
Queueing systems with server vacations have been studied extensively over the past two decades and have been found to be applicable in the analysis and modeling of communication networks, manufacturing, production and transportation systems. Vacation models are useful for systems in which the server wants to utilize the idle time for different purposes. Comprehensive surveys on queueing systems with server vacations can be found in [2], [5], [6], etc.

Batch service models are useful to investigate the performance of various processes in production, manufacturing and traffic control problems without overloading the system. A batch service queue with multiple vacations and change over time is considered in [4]. The steady state behavior of a state dependent bulk service queue with delayed vacations has been investigated in [3].

Real-life queueing situations exist where jobs are served with a control policy. For example, manufacturing systems process jobs only when the number of jobs to be processed exceeds a specified level, and once processing starts, it is profitable to continue it even when the queue size is less than the specified level but not less than a secondary limit. An M/M/1 queue under the policy (a, c, d) has been investigated in [1].

This paper analyzes a bulk service M/M(a,c,b)/1 queueing system with single vacation and change over time. Using the recursive method, the steady state probabilities of the system states are obtained. The expected lengths of the busy period, idle period and busy cycle are derived. Numerical results are presented in the form of graphs.

2. MODEL DESCRIPTION AND ANALYSIS
Let us consider an M/M(a,c,b)/1 queueing system with single vacation and change over time. It is assumed that the inter-arrival and service times are exponentially distributed with parameters λ and μ, respectively. Service begins only if there are at least c jobs in the queue. The jobs are served in batches with a minimum size a and a maximum size b (a ≤ c ≤ b). At a service completion epoch, if the queue size is less than c but not less than a, the server continues to serve. If there are j jobs (0 ≤ j ≤ a − 2) in the queue after a service completion epoch, the server goes for a vacation, which is exponentially distributed with rate φ. At a service completion epoch, if the queue size is a − 1, then instead of going for a vacation immediately the server waits for some


time in the system, called the change over time, which is exponentially distributed with rate α. The server starts service on finding an arrival during this change over time; otherwise it takes a vacation. On returning from a vacation, if the server finds j (0 ≤ j ≤ c − 1) jobs in the queue, it remains dormant in the system until c customers are available in the queue, and min{j, b} jobs are taken for service according to the FCFS rule when j ≥ c. The inter-arrival times, service times and vacation times are independently distributed. The traffic intensity ρ is given by ρ = λ/(bμ). The state of the system at time t is described by the random variables N(t), denoting the queue size, and ζ(t), the state of the server, which is defined as

ζ(t) = 0, if the server is on vacation,
       1, if the server is busy,
       2, if the server is in change over time,
       3, if the server is dormant.

The stochastic process {N(t), ζ(t), t ≥ 0} is a two-dimensional Markov process with state space
S = {(n, 0) | n ≥ 0} ∪ {(n, 1) | n ≥ 0} ∪ {(a − 1, 2)} ∪ {(n, 3) | 0 ≤ n ≤ c − 1}.
Let us define P_{n,i}(t) = Pr{N(t) = n, ζ(t) = i}, denoted by P_{n,i} at steady state. For the system under steady state, the Chapman-Kolmogorov forward difference equations can be written as

(φ + λ)P_{0,0} = μP_{0,1},                                                         (2.1)
(φ + λ)P_{n,0} = λP_{n−1,0} + μP_{n,1},  1 ≤ n ≤ a − 2,                            (2.2)
(λ + φ)P_{a−1,0} = λP_{a−2,0} + αP_{a−1,2},                                        (2.3)
(λ + φ)P_{n,0} = λP_{n−1,0},  n ≥ a,                                               (2.4)
(λ + μ)P_{0,1} = λP_{c−1,3} + λP_{a−1,2} + μ Σ_{n=a}^{b} P_{n,1} + φ Σ_{n=c}^{b} P_{n,0},   (2.5)
(λ + μ)P_{n,1} = λP_{n−1,1} + μP_{n+b,1} + φP_{n+b,0},  n ≥ 1,                     (2.6)
(λ + α)P_{a−1,2} = μP_{a−1,1},                                                     (2.7)
λP_{0,3} = φP_{0,0},                                                               (2.8)
λP_{n,3} = λP_{n−1,3} + φP_{n,0},  1 ≤ n ≤ c − 1.                                  (2.9)

From (2.4) we get
P_{n,0} = ξ^{n+1−a} P_{a−1,0},  n ≥ a,                                             (2.10)
where ξ = λ/(λ + φ). Applying the shift operator E, defined by E^x P_{n,1} = P_{n+x,1}, to equation (2.6), we get
(μE^{b+1} − (λ + μ)E + λ) P_{n−1,1} = −φ ξ^{n+b+1−a} P_{a−1,0},  n ≥ 1.
Let k(z) = μz^{b+1} − (λ + μ)z + λ. Then, by Rouche's theorem, there exists a unique positive real number r < 1 such that k(r) = 0. Therefore, we have
P_{n,1} = K r^n − (φ ξ^{n+b+2−a}/k(ξ)) P_{a−1,0},  n ≥ 0,                          (2.11)
where K is a constant. Using equations (2.7) and (2.11), we have
P_{a−1,2} = (μ/(λ + α)) K r^{a−1} − (μφξ^{b+1}/((λ + α)k(ξ))) P_{a−1,0}.           (2.12)
From (2.3) and (2.12), we get
P_{a−2,0} = k₁ P_{a−1,0} − k₂ K,                                                   (2.13)
where
k₁ = (λ + φ)/λ + φαμξ^{b+1}/(λ(λ + α)k(ξ))   and   k₂ = αμ r^{a−1}/(λ(λ + α)).
From equations (2.2), (2.11) and (2.13) it follows that
P_{n,0} = τ₁(n) P_{a−1,0} − τ₂(n) K,  0 ≤ n ≤ a − 3,                               (2.14)
where
τ₁(n) = k₁ ξ^{n−a+2} + (a − 2 − n)μφξ^{n+b−a+3}/(λ k(ξ)),   and
τ₂(n) = k₂ ξ^{n−a+2} + (μ r^{a−2}/λ) Σ_{i=0}^{a−3−n} r^{−i} ξ^{i+n+3−a}.
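The root r of k(z) = μz^{b+1} − (λ + μ)z + λ in (0, 1), on which (2.11)-(2.14) depend, has no closed form but is easy to locate numerically. A minimal sketch using bisection (the parameter values are those used later in Section 4; the helper names are ours):

```python
def k(z, lam, mu, b):
    # Characteristic function k(z) = mu*z**(b+1) - (lam + mu)*z + lam.
    return mu * z ** (b + 1) - (lam + mu) * z + lam

def inner_root(lam, mu, b, tol=1e-12):
    """Bisection for the unique root r of k in (0, 1).

    k(0) = lam > 0 and, when rho = lam/(b*mu) < 1, k is negative just
    below z = 1 (z = 1 itself is always a root), so a sign change
    brackets r.
    """
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k(mid, lam, mu, b) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Parameter values of Section 4: lam = 1.5, b = 15, rho = 0.45, mu = lam/(b*rho).
lam, b, rho = 1.5, 15, 0.45
mu = lam / (b * rho)
r = inner_root(lam, mu, b)
```

Once r is known, (2.11)-(2.14) give all the busy-state probabilities in terms of the constant K and P_{a−1,0}.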


Using (2.1) and putting n = 0 in (2.11) and (2.14), K can be obtained as
K = k₃ P_{a−1,0},                                                                 (2.15)
where
k₃ = [(λ + φ)τ₁(0)k(ξ) + μφξ^{b+2−a}] / (k(ξ)[μ + (φ + λ)τ₂(0)]).
Equations (2.8), (2.14) and (2.15) yield
P_{0,3} = (φ/λ)[τ₁(0) − τ₂(0)k₃] P_{a−1,0}.                                        (2.16)
Using (2.9), (2.14) and (2.16) we get
P_{n,3} = (φ/λ) Σ_{i=0}^{n} [τ₁(i) − τ₂(i)k₃] P_{a−1,0},  0 ≤ n ≤ a − 3.           (2.17)
We use (2.9) at n = a − 2, (2.13) and (2.17) at n = a − 3 to get
P_{a−2,3} = (φ/λ) k₄ P_{a−1,0},                                                    (2.18)
where
k₄ = k₁ + Σ_{i=0}^{a−3} τ₁(i) − k₃ (k₂ + Σ_{i=0}^{a−3} τ₂(i)).
From this equation and (2.10) we have
P_{n,3} = (φ/λ)[k₄ + Σ_{i=0}^{n−a+1} ξ^i] P_{a−1,0},  a − 1 ≤ n ≤ c − 1.           (2.19)
Finally, using the normalization condition
Σ_{n=0}^{∞} (P_{n,0} + P_{n,1}) + P_{a−1,2} + Σ_{n=0}^{c−1} P_{n,3} = 1,           (2.20)
P_{a−1,0} can be obtained as
P_{a−1,0} = { Σ_{n=0}^{a−3} [τ₁(n) − τ₂(n)k₃] + k₁ − k₂k₃ + 1 + ξ/(1 − ξ) + k₃/(1 − r) − φξ^{b+2−a}/((1 − ξ)k(ξ)) + k₅ + (φ/λ)k₄ + (φ/λ) Σ_{n=0}^{a−3} Σ_{i=0}^{n} [τ₁(i) − τ₂(i)k₃] + (φ/λ) Σ_{n=a−1}^{c−1} (k₄ + Σ_{i=0}^{n−a+1} ξ^i) }^{−1},
where
k₅ = μk₃r^{a−1}/(λ + α) − μφξ^{b+1}/((λ + α)k(ξ)).

Once the state probabilities are known, we can evaluate various performance measures such as the average number of jobs in the queue (L_q) and the average waiting time in the queue (W_q). The average number of jobs in the queue can be obtained by computing
L_q = Σ_{i=1}^{∞} i(P_{i,0} + P_{i,1}) + (a − 1)P_{a−1,2} + Σ_{i=1}^{c−1} i P_{i,3}.
Using Little's rule, the average waiting time in the queue is given by W_q = L_q/λ.

3. EXPECTED LENGTH OF IDLE AND BUSY PERIOD
The busy period (B) commences with the start of a service and lasts until, for the first time on completion of a service, there are j units (j = 0, 1, 2, ..., a − 1) left in the queue. Let us define the joint distribution of B and J as F_j(t) = Pr{B ≤ t, J = j}, the corresponding density function as f_j(t)dt = Pr{t ≤ B ≤ t + dt, J = j}, j = 0, 1, ..., a − 1, and let f_j*(s) be the Laplace transform of f_j(t). For the busy period analysis, let us consider a process that avoids the states 0, 1, ..., a − 1 wherein the server becomes free or is in change over time. The state transition equations can be written as
f_j(t) = μP_{j,1}(t),  j = 0, 1, 2, ..., a − 1,
(d/dt) P_{0,1}(t) = −(λ + μ)P_{0,1}(t) + μ Σ_{n=a}^{b} P_{n,1}(t),                 (3.1)


(d/dt) P_{n,1}(t) = −(λ + μ)P_{n,1}(t) + λP_{n−1,1}(t) + μP_{n+b,1}(t),  n ≥ 1.    (3.2)
From the definition of the busy period, we have P_{0,1}(0) = 1. Using this and taking the Laplace transform of (3.1) and (3.2), we get
(s + λ + μ)P*_{0,1}(s) = 1 + μ Σ_{n=a}^{b} P*_{n,1}(s),                            (3.3)
(s + λ + μ)P*_{n,1}(s) = λP*_{n−1,1}(s) + μP*_{n+b,1}(s),  n ≥ 1.                  (3.4)
From (3.4), P*_{n,1}(s) = R^n P*_{0,1}(s), where R is the solution of μz^{b+1} − (s + λ + μ)z + λ = 0 in (0, 1), whose existence and uniqueness is guaranteed by Rouche's theorem. Using this and (3.3) we get
P*_{0,1}(s) = (1 − R)/(μ + s − μR^a).
Finally,
P*_{n,1}(s) = R^n(1 − R)/(μ + s − μR^a),  n ≥ 0.
Therefore, for 0 ≤ j ≤ a − 1, f_j*(s) = μR^j(1 − R)/(μ + s − μR^a), and the Laplace transform b*(s) of the probability density function (PDF) b(t) of the busy period B is given by
b*(s) = Σ_{j=0}^{a−1} f_j*(s) = μ(1 − R^a)/(μ + s − μR^a).                         (3.5)
From (3.5) one gets b(t) = Σ_{j=0}^{a−1} f_j(t). Also we have b*(0) = Σ_{j=0}^{a−1} f_j*(0) = 1. Further, f_j*(0) is the probability that the busy period terminates leaving j (0 ≤ j ≤ a − 1) jobs in the queue:
f_j*(0) = lim_{s→0} f_j*(s) = (1 − r)r^j/(1 − r^a),  j = 0, 1, ..., a − 1,
where r is the unique root of μz^{b+1} − (λ + μ)z + λ = 0 in (0, 1) and r = lim_{s→0} R. Thus, the expected busy period E[B] is given by
E[B] = −(d/ds) b*(s)|_{s=0} = 1/(μ(1 − r^a)).                                      (3.6)
In this model, the idle period occurs when the server is on vacation, in dormancy or in change over time. Following this, the expected length of an idle period is
E[I] = [ (1/λ)(Σ_{n=0}^{c−1} (c − n)P_{n,3} + P_{a−1,2}) + (1/φ) Σ_{n=0}^{∞} P_{n,0} ] / [ Σ_{n=0}^{∞} P_{n,0} + P_{a−1,2} + Σ_{n=0}^{c−1} P_{n,3} ].   (3.7)
Consequently, the expected busy cycle is E[BC] = E[B] + E[I]. In the following section, we discuss numerical examples for certain varying parameters of the system.

4. NUMERICAL ILLUSTRATIONS
Numerical analysis is carried out to demonstrate the effect of various parameters on the system performance. We consider the parameters a = 4, c = 12, b = 15, φ = 0.5, λ = 1.5, ρ = 0.45 and α = 2.5, except when they are treated as variables.

Figure 1 demonstrates the impact of the minimum batch size (a) on the expected busy period E[B] for different values of ρ. One can see that E[B] decreases as a increases. This is evident, as the server keeps on giving service as long as the queue size is not less than a. Further, we can observe that for a fixed a, E[B] increases with ρ.
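As a quick numerical check of (3.6), E[B] = 1/(μ(1 − r^a)) can be evaluated for increasing minimum batch size a; consistent with the discussion of Figure 1, E[B] decreases in a. A small sketch (parameters as in Section 4; the helper name is ours):

```python
def expected_busy_period(a, lam, mu, b, tol=1e-12):
    """E[B] = 1/(mu*(1 - r**a)), r the root of mu*z**(b+1)-(lam+mu)*z+lam in (0,1)."""
    k = lambda z: mu * z ** (b + 1) - (lam + mu) * z + lam
    lo, hi = 0.0, 1.0 - 1e-9          # k(lo) > 0, k(hi) < 0 when rho < 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k(mid) > 0:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return 1.0 / (mu * (1.0 - r ** a))

lam, b, rho = 1.5, 15, 0.45
mu = lam / (b * rho)
ebs = [expected_busy_period(a, lam, mu, b) for a in (2, 3, 4, 5, 6)]
```

Since r < 1, r^a shrinks as a grows, so the denominator μ(1 − r^a) grows and E[B] falls, exactly the monotone behaviour reported for Figure 1.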



Figure 2 shows the effect of c on L_q for different values of ρ. It is observed that L_q increases as c increases. Since c is the minimum threshold to start service, increasing c results in a greater accumulation of customers in the queue, thereby increasing L_q. It is also observed that L_q increases as ρ increases for a fixed c.

Figure 3 depicts the change of L_q with α for different values of a. It is observed that L_q increases as α increases. As α gets larger, the mean duration of the change over time becomes smaller, so that the server goes on vacation without waiting a reasonable duration of time for an arrival; this contributes to the increase in L_q. Further, we observe that for a fixed α, L_q increases as a increases. The effect of φ on W_q, with and without change over time, is presented in Figure 4. We see that W_q decreases as φ increases in both cases.

5. CONCLUSION
In this paper, we have analyzed a single server infinite buffer queue with single vacation and change over time. We have developed the steady state probability equations and obtained various system performance measures. We have derived the expected busy period, idle period and busy cycle. Numerical illustrations in the form of graphs are reported to demonstrate how the various parameters of the model influence the behaviour of the system.

REFERENCES
[1] Baburaj, C. and Surendranath, T. M., An M/M/1 bulk service queue under the policy (a, c, d), Int. J. Agricult. Stat., 1, 27-33, 2005.
[2] Doshi, B. T., Single server queues with vacations, in Stochastic Analysis of Computer and Communication Systems, North Holland, Amsterdam, 1990.
[3] Madhu Jain and Poonam Singh, State dependent bulk service queue with delayed vacations, JKAU: Eng. Sci., 16, 3-15, 2005.
[4] Reddy, G. V. K., Nadarajan, R. and Arumuganathan, R., Analysis of a bulk queue with N-policy multiple vacations and setup times, Comput. Ops. Res., 25, 957-967, 1998.
[5] Takagi, H., Queueing Analysis: A Foundation of Performance Evaluation, Vol. I, Vacation and Priority Systems, North Holland, Amsterdam, 1991.
[6] Tian, N. and Zhang, Z. G., Vacation Queueing Models: Theory and Applications, Springer, New York, 2006.


Analytical Hierarchy Process for selection of a student for All Round Excellence Award

P. Kousalya
Department of Humanities & Sciences (Mathematics), Vignana Bharathi Institute of Technology, Ghatkesar, Hyderabad, INDIA.

ABSTRACT
The study gives an application of the Analytical Hierarchy Process (AHP), a Multi Criteria Decision Making method. Here AHP is applied for the selection of a student from an engineering college who is eligible for the All Round Excellence Award for the year 2004-05, taking the subjective judgments of the decision maker into consideration. Seven criteria were identified in students for this award, and the alternatives are the five branches of the engineering college. It is observed that a student of the ECE (Electronics and Communications Engineering) branch received the award.
Keywords: AHP, Multi Criteria Decision Making, consistency
1. INTRODUCTION

The Analytical Hierarchy Process (AHP) was first developed by Professor Thomas L. Saaty in the 1970s and has since received wide application in a variety of areas. Kousalya P. et al. [1] applied AHP to study student absenteeism in engineering colleges. Kousalya P. et al. [2] applied AHP and also TOPSIS for the selection of a student for the All Round Excellence Award. Kousalya P. et al. [3] made a comparative study of the performance of averaging methods and stochastic vector methods in Analytical Hierarchy Process problems. Kousalya P. et al. [4] applied fuzzy Analytic Hierarchy Process and TOPSIS methods to student absenteeism. Kousalya P. [5] gave a mathematical model of a multi-criteria decision-making approach. R. Ramanathan et al. [8] employed group preference aggregation methods in AHP. T. Ramsha Prabhu et al. [9] evaluated iron making technologies using an AHP approach.

2. THE MATHEMATICS OF AHP THEORY
Consider n elements to be compared, C1, ..., Cn, and denote the relative weight (or priority, or significance) of Ci with respect to Cj by a_ij; form a square matrix A = (a_ij) of order n with the constraints a_ij = 1/a_ji for i ≠ j, and a_ii = 1 for all i. Such a matrix is said to be a reciprocal matrix. The weights are consistent if they are transitive, that is a_ik = a_ij a_jk for all i, j and k. Such a matrix might exist if the a_ij are calculated from exactly measured data. Then find a vector ω of order n such that Aω = λω. For such a matrix, ω is said to be an eigenvector (of order n) and λ is an eigenvalue. For a consistent matrix, λ = n. For matrices involving human judgment, the condition a_ik = a_ij a_jk does not hold, as human judgments are inconsistent to a greater or lesser degree. In such a case the ω vector satisfies the equation Aω = λmax ω with λmax ≥ n. The difference, if any, between λmax and n is an indication of the inconsistency of the judgments; if λmax = n then the judgments are consistent. Finally, a Consistency Index can be calculated from (λmax − n)/(n − 1). This needs to be assessed against judgments made completely at random; Saaty has calculated large samples of random matrices of increasing order and the Consistency Indices of those matrices. A true Consistency Ratio is calculated by dividing the Consistency Index for the set of judgments by the index for the corresponding random matrix. Saaty suggests that if that ratio exceeds 0.1, the set of judgments may be too inconsistent to be reliable. In practice, CRs of more than 0.1 sometimes have to be accepted. A CR of 0 means that the judgments are perfectly consistent.



Table 1: Pair wise comparison scale (AHP Saaty's scale, taken from Saaty)

Intensity of Importance    Comparison of factor i and j
1                          Equally Important
3                          Weakly Important
5                          Strongly Important
7                          Very Strongly Important
9                          Extremely Important
2, 4, 6, 8                 Intermediate values between adjacent scales
Reciprocals of above       In comparing elements i and j: if i is 3 compared to j, then j is 1/3 compared to i
Rationals                  Force consistency; measured values available

3. METHODOLOGY
The following steps are involved in any decision making process.
3.1 Establishment of a structural hierarchy: A complex decision is structured into a hierarchy descending from an overall objective to various criteria and sub criteria down to the lowest level. The overall goal of the decision is represented at the top level of the hierarchy. The criteria and the sub criteria, which contribute to the decision, are represented at the intermediate levels. Finally, the decision alternatives are laid down at the last level of the hierarchy. According to Saaty (2000), a hierarchy can be constructed by creative thinking, recollection and using people's perspectives.
3.2 Establishment of comparative judgments: Once the hierarchy has been structured, the next step is to determine the priorities of the elements at each level. A set of comparison matrices of all elements in a level with respect to an element of the immediately higher level is constructed. The pair wise comparisons state how much more important element A is than element B. The preferences are quantified using the nine-point scale shown in Table 1.

3.3 Synthesis of priorities and measurement of consistency:
The pair wise comparisons generate the matrix of rankings for each level of the hierarchy. After all matrices are developed and all pair wise comparisons are obtained, the eigenvectors (relative weights) are computed.

Eigenvector method: Suppose we wish to compare a set of n objects in pairs according to their relative weights. Denote the objects by A1, A2, ..., An and their weights by w1, w2, ..., wn. The pair wise comparisons may be represented by a matrix with entry wi/wj in row i and column j:

        A1       A2       ...   An
A1      w1/w1    w1/w2    ...   w1/wn
A2      w2/w1    w2/w2    ...   w2/wn
...
An      wn/w1    wn/w2    ...   wn/wn

This matrix has positive entries everywhere and satisfies the reciprocal property a_ji = 1/a_ij; it is called a reciprocal matrix. If we multiply this matrix by the transpose of the vector w^T = (w1, w2, ..., wn), we obtain the vector nw. Our problem takes the form Aw = nw. We started with the assumption that w was given. But if we only had A and wanted to recover w, we would have to solve the system (A − nI)w = 0 in the unknown w. This has a nonzero solution if n is an eigenvalue of A, i.e., a root of the characteristic equation of A. But A has unit rank, since every row is a constant multiple of the first row; thus all eigenvalues λi, i = 1, 2, ..., n, of A are zero except one. Also it is known that Σ_{i=1}^{n} λi = tr(A) = n, and hence λi = 0 for i ≠ max while λmax = n. The solution w of this problem is any column of A. These solutions differ by a multiplicative constant; however, the solution is normalized so that its components sum to unity. The consistency ratio is calculated as per the following steps:



- Calculate the eigenvector (the relative weights) and λmax for each matrix of order n.
- Compute the consistency index for each matrix of order n by the formula CI = (λmax − n)/(n − 1).
- The consistency ratio is then calculated using the formula CR = CI/RI,
where RI is a known random consistency index obtained from a large number of simulation runs and varies depending upon the order of the matrix.

Table 2: Average random index (RI) based on matrix size (adapted from Saaty, 2000)

Size of matrix (n):          1  2  3     4     5     6     7     8     9     10
Random consistency index:    0  0  0.52  0.89  1.11  1.25  1.35  1.40  1.45  1.49

The acceptable CR range varies according to the size of the matrix: 0.05 for a 3 by 3 matrix, 0.08 for a 4 by 4 matrix and 0.1 for all larger matrices, n ≥ 5 (Saaty, 2000). If the value of CR is equal to or less than that value, the evaluation within the matrix is acceptable, indicating a good level of consistency in the comparative judgments represented in that matrix. If CR is more than the acceptable value, the judgments within the matrix are inconsistent and the evaluation process should be reviewed.

The matrix of pair wise comparisons of the criteria as given by the decision maker is shown in Table 3, along with the resulting vector of priorities. The vector of priorities is the principal eigenvector of the matrix; it gives the relative priority of the criteria measured on the ratio scale of Table 1. Next we move to the pair wise comparisons of the lower level and lastly to the pair wise comparisons of the lowest level. The elements to be compared pair wise are the engineering branches, with respect to how much better one is than the other in satisfying each criterion in level 2. Thus there will be fifteen 5 x 5 matrices of judgments. The ranks of the alternative branches are shown in Figure 1, and the hierarchical decomposition of criteria, sub criteria and alternatives is given in Figure 2 of Appendix 2.

4. CONCLUSIONS
The student of the ECE branch gets the All Round Excellence Award, as he/she is good in academics and general behavior, which are the highest priority criteria, and had the largest priority from Figure 2 and Table 4. The student of the CSE branch is nearly as good and performed better than the EEE students.
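The consistency steps above reduce to two one-line formulas plus a lookup in Table 2; a small sketch using the values reported for Table 3 (λmax = 7.4524, n = 7):

```python
# Consistency check: CI = (lmax - n)/(n - 1), CR = CI / RI,
# with RI taken from Table 2.
RI = {1: 0.0, 2: 0.0, 3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25,
      7: 1.35, 8: 1.40, 9: 1.45, 10: 1.49}

def consistency_ratio(lmax, n):
    ci = (lmax - n) / (n - 1)
    return ci / RI[n]

# Values reported for Table 3: lmax = 7.4524, n = 7 gives CR around 0.056.
cr = consistency_ratio(7.4524, 7)
```

The result is well below the 0.1 threshold for n ≥ 5, so the judgments in Table 3 are acceptably consistent.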

REFERENCES

[1] Kousalya P., Ravindranath and Vizayakumar K., Student absenteeism in engineering colleges: Evaluation of alternatives using AHP, Journal of Applied Mathematics and Decision Sciences, Vol. 6, 1-26, 2006.
[2] Kousalya P. and Mahender Reddy G., Selection of a student for All Round Excellence Award using fuzzy AHP and TOPSIS methods, International Journal of Engineering Research and Applications (IJERA), Vol. 1, Issue 4, pp. 1993-2002, 2011.
[3] Kousalya P., Pradeep Kumar R. L. N. and Ravindranath V., Comparative performance of averaging methods and stochastic vector methods in Analytical Hierarchy Process problems, Proceedings of the International Conference on Supply Chain Management, IIT Kharagpur, 2011.
[4] Kousalya P. and Mahender Reddy G., Application of fuzzy Analytic Hierarchy Process and TOPSIS methods, Proceedings of the National Conference-cum-Workshop on Scientific Computing, School of Computational Science, Indian Institute of Information Technology & Management, Kerala, January 2012.
[5] Kousalya P., Selection of a student for All Round Excellence Award: A mathematical model of a Multi-criteria Decision Making approach, Proceedings of the International Conference at Bengal Institute of Technology and Management, West Bengal, 2012.



[6] Lee W. B., Lau H., Liu Z. and Tam S., A fuzzy analytic hierarchy process approach in modular product design, Expert Systems Review, 18(1), 32-42, 2001.
[7] Patrick T. Harker, The art and science of decision making: The Analytic Hierarchy Process, Springer-Verlag, 1989.
[8] R. Ramanathan and L. S. Ganesh, Group preference aggregation methods employed in AHP: An evaluation and an intrinsic process for deriving members' weightages, European Journal of Operational Research, 79, 249-265, 1994.
[9] T. Ramsha Prabhu and K. Vizaya Kumar, Evaluation of iron making technologies: An AHP approach, Technological Forecasting and Social Change, 1995.



APPENDIX -1

[Bar chart omitted: "Ranks of branches for All Round Excellence Award", showing the priority of each branch (EEE, ECE, CSE, MECH, ICE) on a scale from 0 to 0.25.]

Figure 1: Ranks of alternatives (five engineering branches)

Table 3: Priority vector of criteria

        C1    C2   C3    C4    C5    C6    C7    Priority vector
C1      1     5    3     5     5     1     3     0.316851
C2      0.2   1    0.33  0.33  0.33  0.2   1     0.045595
C3      0.33  3    1     3     3     1     1     0.158735
C4      0.2   3    0.33  1     1     0.33  0.33  0.067133
C5      0.2   3    0.33  1     1     0.33  1     0.078653
C6      1     5    1     3     3     1     3     0.234051
C7      0.33  1    1     3     1     0.33  1     0.098985
λmax = 7.4524, C.I. = 0.0754, C.R. = 0.0559
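Table 3 can be reproduced numerically: form the comparison matrix, take the principal eigenvector as the priority vector, and compute CI and CR with RI = 1.35 for n = 7 from Table 2. A sketch (the entries use the rounded 0.33 values of the table, so the computed figures match the reported ones only approximately):

```python
import numpy as np

# Pairwise comparison matrix of Table 3 (0.33 is the rounded 1/3 used there).
A = np.array([
    [1.00, 5, 3.00, 5.00, 5.00, 1.00, 3.00],
    [0.20, 1, 0.33, 0.33, 0.33, 0.20, 1.00],
    [0.33, 3, 1.00, 3.00, 3.00, 1.00, 1.00],
    [0.20, 3, 0.33, 1.00, 1.00, 0.33, 0.33],
    [0.20, 3, 0.33, 1.00, 1.00, 0.33, 1.00],
    [1.00, 5, 1.00, 3.00, 3.00, 1.00, 3.00],
    [0.33, 1, 1.00, 3.00, 1.00, 0.33, 1.00],
])
n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))            # principal eigenvalue lambda_max
lmax = eigvals.real[i]
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()                             # normalized priority vector
ci = (lmax - n) / (n - 1)                   # consistency index
cr = ci / 1.35                              # RI = 1.35 for n = 7 (Table 2)
```

The largest component of w corresponds to C1 (Academics), in agreement with the priority vector of Table 3, and cr stays below the 0.1 acceptability limit.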

Table 4: Local and global priorities

      C1     C2     C3     C4     C5     C6     C7     C8     C9     C10    C11    C12    C13    C14    C15    Global
      (0.31) (0.04) (0.2)  (0.2)  (0.2)  (0.2)  (0.5)  (0.5)  (0.5)  (0.5)  (0.33) (0.33) (0.33) (0.5)  (0.5)
A1    0.28   0.1    0.23   0.28   0.19   0.18   0.14   0.18   0.272  0.181  0.273  0.2    0.27   0.23   0.29   0.225
A2    0.442  0.51   0.45   0.28   0.46   0.38   0.09   0.08   0.112  0.181  0.273  0.2    0.27   0.28   0.19   0.236
A3    0.063  0.01   0.05   0.09   0.06   0.09   0.11   0.23   0.072  0.075  0.09   0.2    0.09   0.08   0.1    0.102
A4    0.15   0.02   0.19   0.22   0.19   0.22   0.26   0.23   0.272  0.282  0.273  0.2    0.27   0.23   0.23   0.234
A5    0.06   0.01   0.0    0.11   0.09   0.1    0.4    0.2    0.27   0.28   0.09   0.2    0.09   0.18   0.1    0.197
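The global priorities in the last column of Table 4 are obtained by weighting each alternative's local priorities by the criteria weights and summing. A minimal sketch with illustrative numbers (three criteria and three alternatives, not the paper's data):

```python
import numpy as np

# Hypothetical local priorities: rows = alternatives, columns = criteria.
# Each column sums to 1, as a normalized priority vector should.
local = np.array([
    [0.28, 0.10, 0.23],
    [0.44, 0.51, 0.45],
    [0.28, 0.39, 0.32],
])
crit_w = np.array([0.5, 0.3, 0.2])   # criteria weights, summing to 1
glob = local @ crit_w                # weighted sum across each row
ranked = np.argsort(glob)[::-1]      # best alternative first
```

Because both the columns of `local` and `crit_w` sum to one, the global priorities also sum to one, so they can be read directly as a ranking, as in Table 4.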



APPENDIX -2

Goal: All Round Excellence Award
  Academics (C1)
  Attendance (C2)
  Co-curricular Activities (C3): Paper Presentation (C31), Group Discussion (C32), Debate (C33), Quiz (C34)
  Extra-curricular Activities (C4): In-door games (C41), Out-door games (C42)
  Cultural Activities (C5): Singing (C51), Choreography (C52)
  General Behavior (C6): Honesty (C61), Peer Relationship (C62), Teacher Relationship (C63)
  Departmental Activities (C7): Event Management (C71), Coordinating (C72)
Alternatives: EEE (A1), ECE (A2), ICE (A3), CSE (A4), MECH (A5)

Figure 2: Hierarchical decomposition of criteria, sub criteria and alternatives



A Two Species Amensalism Model with Constant Harvesting

B. Sita Rambabu1   K. L. Narayan2   Shahnaz Bathul3
1 Department of Mathematics, AVN Institute of Engineering & Technology, Hyderabad.
2 Department of Mathematics, SLC Institute of Engineering & Technology, Hyderabad.
3 Department of Mathematics, Jawaharlal Nehru Technical University, Kukatpally, Hyderabad.

ABSTRACT
In this paper a two species amensalism model is taken up for analytical study. In this model a constant number of the first species is harvested. Moreover, both species are provided with limited resources. The series solution of the non-linear system is approximated by the Homotopy Analysis Method (HAM) and the results are supported by numerical simulations.
Keywords: Amensalism, series solution, zero order deformation, embedded parameter, linear operator, HAM.

1. INTRODUCTION
Symbioses are a broad class of interactions among organisms; amensalism involves one organism affecting another negatively without receiving any positive or negative effect itself. A.V.N. Acharyulu and Pattabhi Ramacharyulu [1] studied the stability of an enemy-amensal species pair with limited resources, and B. Shivaprakash and T. Karunanithi [10] tested the growth rates of both species for the validity of the logistic model under culture operation. In the present paper, a two species amensalism model in which a constant number of the first species is harvested and both species have limited resources is taken up for analytical study; the series solution of the non-linear system is approximated by HAM and supported by numerical simulations.

1.1 About HAM: In 1992 Liao employed the basic idea of homotopy in topology to propose a powerful analytical method for nonlinear problems, namely the Homotopy Analysis Method

[2, 3]. Later, M. Ayub, A. Rasheed, T. Hayat and Fadi Awawdeh [6, 7] successfully applied this technique to solve different types of non-linear problems. The HAM itself provides a convenient way to control and adjust the convergence region and rate of the approximation series. In this paper we use the HAM to investigate the series solutions of a two species amensalism model with constant harvesting. The governing equations of the system are as follows:

dx  a1 x(t )  11 x 2 (t )  12 x(t ) y (t ) dt dy  a2 y (t )   22 y 2 (t )  L dt

--(1.1)

With the notation x(t) ,y(t): populations of the species1 and species2 a1, a2: rates of natural growth of species 1 and species 2. 11 ,  22 : rates of decrease due to insufficient food of the species 1 and species 2. 12 : rate of decrease of the species 1 due to inhibition by the species 2. L: the rate of harvesting constant 2. BASIC IDEAS OF HAM In this paper, we apply the homotopy analysis method to the discussed problem. To show the basic idea, let us consider the following differential equation N(u,t) =0,

----(2.1)

Where N is a nonlinear operator, t denote independent variable, u (t) is an unknown function, respectively. For simplicity, we ignore all boundary or initial conditions, which can be treated in the similar way. By means of generalizing the traditional homotopy method, 92


Liao constructs the so-called zero-order deformation equation
(1 − p)L[φ(t; p) − u₀(t)] = p h H(t) N[φ(t; p)],                                   ----(2.2)
where p ∈ [0, 1] is the embedding parameter, h is a nonzero auxiliary parameter, H(t) is an auxiliary function, L is an auxiliary linear operator, u₀(t) is an initial guess of u(t) and φ(t; p) is an unknown function. It is important that one has great freedom to choose these auxiliary objects in HAM. Obviously, when p = 0 and p = 1, it holds that
φ(t; 0) = u₀(t),   φ(t; 1) = u(t),                                                ----(2.3)
respectively. Thus, as p increases from 0 to 1, the solution φ(t; p) varies from the initial guess u₀(t) to the solution u(t). Expanding φ(t; p) in a Taylor series with respect to p, one has
φ(t; p) = u₀(t) + Σ_{m=1}^{∞} u_m(t) p^m,                                         ----(2.4)
where
u_m(t) = (1/m!) ∂^m φ(t; p)/∂p^m |_{p=0}.                                         ----(2.5)
If the initial guess, the auxiliary linear operator L, the nonzero auxiliary parameter h and the auxiliary function H are properly chosen, the power series (2.4) converges at p = 1. Then we have, under these assumptions, the solution series
u(t) = u₀(t) + Σ_{m=1}^{∞} u_m(t),                                                ----(2.6)
which must be one of the solutions of the original nonlinear equation, as proved by Liao [15]. When h = −1 and H(t) = 1, Eq. (2.2) becomes
(1 − p)L[φ(t; p) − u₀(t)] + p N[φ(t; p)] = 0,                                     ----(2.7)
which is used mostly in the homotopy perturbation method, where the solution is obtained directly, without using the Taylor series, as explained by H. Jafari, M. Zabihi and M. Saidy [14]; J. H. He [15, 16] and S. J. Liao [17] compare the HAM and the HPM. According to the definitions above, the governing equations can be deduced from the zero-order deformation equation (2.2). Define the vector
u⃗_k = {u₀(t), u₁(t), ..., u_k(t)}.
Differentiating Eq. (2.2) m times with respect to the embedding parameter p, then setting p = 0 and finally dividing by m!, we have the so-called mth-order deformation equation
L[u_m(t) − χ_m u_{m−1}(t)] = h H(t) R_m(u⃗_{m−1}),                                ----(2.8)
where
R_m(u⃗_{m−1}) = (1/(m − 1)!) ∂^{m−1} N[φ(t; p)]/∂p^{m−1} |_{p=0}
and
χ_m = 0 for m ≤ 1,   χ_m = 1 for m > 1.                                           ----(2.9)

Consider the nonlinear differential equation(1.1) with initial conditions .we assume the solution of the system(1.1) ,x(t),y(t) can be expressed by following set of base functions in the form 

m 1

m 1

Where am, bm are coefficients to be determined. This provides us the so called rule of solution expression i.e., the solution of (1.1) must be expressed in the same from as (3.1) and the other expressions must be avoided. According to (1.1) and (3.1) we chose the linear operator. To solve the system of Eqs.(1.1), Homotopy analysis method is employed. We consider the following initial approximations

x0 (t )  x(t  0)  x0

y0 (t )  y(t  0)  y0

----(3.2) The linear and non-linear operators are denoted as follows. L1[ x(t ; p)] 

----(2.7) which is used mostly in the homotopy perturbation method, whereas the solution obtained directly, without using Taylor series which is explained by H.Jafari,M.Zabihi and M.Saidy [14] and J.H.He [15,16] and S.J.Liao [17] compare the HAM and HPM. According



x(t )   amt m , y(t )   bmt m

dx(t ; p) dy (t ; p) , L2 [ y (t ; p)]  , dt dt

----(3.3)

dx(t; p)  a1 x(t ; p) dt 11 x 2 (t ; p)  12 x(t ; p) y (t; p) N1[ x(t , p)] 

--(3.3)


B. Sita Rambabu K.L.Narayan Shahnaz Bathul

N2[y(t; p)] = dy(t; p)/dt − a2 y(t; p) + α22 y²(t; p) + L. ----(3.4)

Using the above definitions, the zero-order deformation equations can be constructed:

(1 − p) L1[x(t; p) − x0(t)] = p h1 N1[x, y],
(1 − p) L2[y(t; p) − y0(t)] = p h2 N2[x, y]. ----(3.5)

When p = 0 and p = 1, from the zero-order deformation equations one has

x(t; 0) = x0(t), x(t; 1) = x(t),
y(t; 0) = y0(t), y(t; 1) = y(t). ----(3.6)

Expanding x(t; p) and y(t; p) in Taylor series with respect to the embedding parameter p, one obtains

x(t; p) = x0(t) + Σ_{m=1}^{∞} x_m(t) p^m,
y(t; p) = y0(t) + Σ_{m=1}^{∞} y_m(t) p^m, ----(3.7)

where

x_m(t) = (1/m!) d^m x(t; p)/dp^m |_{p=0},
y_m(t) = (1/m!) d^m y(t; p)/dp^m |_{p=0}. ----(3.8)

At p = 1,

x(t) = x0(t) + Σ_{m=1}^{∞} x_m(t),
y(t) = y0(t) + Σ_{m=1}^{∞} y_m(t). ----(3.9)

Define the vectors

x⃗_m = [x0(t), x1(t), …, x_m(t)],
y⃗_m = [y0(t), y1(t), …, y_m(t)]. ----(3.10)

Applying the procedure stated before, the following mth-order deformation equations are obtained:

L1[x_m(t) − χ_m x_{m−1}(t)] = h1 H1(t) R_{1m}(x⃗_{m−1}, y⃗_{m−1}),
L2[y_m(t) − χ_m y_{m−1}(t)] = h2 H2(t) R_{2m}(x⃗_{m−1}, y⃗_{m−1}). ----(3.11)

Let us consider H1(t) = H2(t) = 1 and the initial conditions x0(t) = x(t = 0) = x0, y0(t) = y(t = 0) = y0. In the above equations,

R_{1m}(x⃗_{m−1}, y⃗_{m−1}) = (1/(m−1)!) d^{m−1} N1[x(t; p)]/dp^{m−1} |_{p=0}
  = dx_{m−1}(t)/dt − a1 x_{m−1} + α11 Σ_{n=0}^{m−1} x_n(t) x_{m−n−1}(t) + α12 Σ_{n=0}^{m−1} x_n(t) y_{m−n−1}(t), ----(3.12)

R_{2m}(x⃗_{m−1}, y⃗_{m−1}) = (1/(m−1)!) d^{m−1} N2[y(t; p)]/dp^{m−1} |_{p=0}
  = dy_{m−1}(t)/dt − a2 y_{m−1} + α22 Σ_{n=0}^{m−1} y_n(t) y_{m−n−1}(t) + L.

The solutions are obtained successively; see Appendices A, B and C. The series solution of the approximation is

x(t) ≈ x0 + x1(t) + x2(t) + x3(t),
y(t) ≈ y0 + y1(t) + y2(t) + y3(t).

4. NUMERICAL EXAMPLE

Example 4.1: x0 = 3; y0 = 13; a1 = 2; a2 = 1.5; α11 = 0.5; α12 = 0.6; α22 = 0.5; h1 = 1; h2 = −1.
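The first-order terms of the series above have the closed forms x1(t) = h1·k1·t and y1(t) = h2·k2·t derived in Appendix A. The following sketch evaluates the resulting one-term approximation for the parameters of Example 4.1. The harvesting constant L is not listed in the example, so L = 1 below is our own assumed value, and the sign convention of k1, k2 follows the nonlinear operators N1, N2.

```python
# One-term HAM approximation for Example 4.1, using the Appendix A
# closed forms x1 = h1*k1*t, y1 = h2*k2*t.  L = 1 is an assumed value.
x0, y0 = 3.0, 13.0
a1, a2 = 2.0, 1.5
al11, al12, al22 = 0.5, 0.6, 0.5   # alpha_11, alpha_12, alpha_22
h1, h2 = 1.0, -1.0
L = 1.0                            # harvesting constant (assumption)

# k1, k2 as in Appendix A (signs taken from N1, N2)
k1 = -a1 * x0 + al11 * x0**2 + al12 * x0 * y0
k2 = -a2 * y0 + al22 * y0**2 + L

def x_approx(t):
    """First-order HAM series: x(t) ~ x0 + x1(t) = x0 + h1*k1*t."""
    return x0 + h1 * k1 * t

def y_approx(t):
    """First-order HAM series: y(t) ~ y0 + y1(t) = y0 + h2*k2*t."""
    return y0 + h2 * k2 * t
```

Higher-order terms x2, x3, y2, y3 would be accumulated in the same way from the closed forms of Appendices B and C.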


A Two Species Amensalism Model with constant Harvesting

5. CONCLUSION

The solutions obtained by the HAM converge when proper values of 'h' are chosen, as supported by the numerical example.

REFERENCES
[1] Acharyulu K.V.L.N. & Pattabhi Ramacharyulu N.Ch., "On the stability of an amensal-harvested enemy species pair with limited resources", International Journal of Computational Intelligence Research (IJCIR), Vol. 6, No. 3, pp. 343-358, June 2010.
[2] S.J. Liao, The proposed homotopy analysis technique for the solution of nonlinear problems, PhD thesis, Shanghai Jiao Tong University, 1992.
[3] S.J. Liao, Beyond Perturbation: Introduction to the Homotopy Analysis Method, CRC Press, Boca Raton: Chapman & Hall, 2003.
[4] S.J. Liao, On the homotopy analysis method for nonlinear problems, Appl. Math. Comput., 147 (2004), 499-513.
[5] S.J. Liao, A new branch of solutions of boundary-layer flows over an impermeable stretched plate, Int. J. Heat Mass Transfer, 48 (2005), 2529-2539.
[6] M. Ayub, A. Rasheed, T. Hayat, Exact flow of a third grade fluid past a porous plate using homotopy analysis method, Int. J. Eng. Sci., 41 (2003), 2091-2103.
[7] T. Hayat, M. Khan, Homotopy solutions for a generalized second-grade fluid past a porous plate, Nonlinear Dyn., 42 (2005), 395-405.
[8] H. Jafari, M. Zabihi and M. Saidy, Application of homotopy perturbation method for solving gas dynamic equation, Appl. Math. Sci., 2 (2008), 2393-2396.
[9] Fadi Awawdeh, H.M. Jaradat, O. Alsayyed, "Solving system of DAEs by homotopy analysis method", Chaos, Solitons and Fractals, 42 (2009), 1422-1427, Elsevier.
[10] B. Sivaprakash, T. Karunanithi, "Modeling of interaction between Schizosaccharomyces pombe and Saccharomyces cerevisiae to predict stable operating conditions in a chemostat", (IJCRGG), ISSN: 0974-4290.


Appendix: A

d  L1  x1 (t )  1 x0 (t )   h1  x0 (t )  a1 x0 (t )  11 x02 (t )  12 x0 (t ) y0 (t )   dt   m (t )  0, m  0 1, m  0 L1  x1 (t )   h1  a1 x0 (t )  11 x02 (t )  12 x0 (t ) y0 (t )  x1 (t )  h1  a1 x0  11 x02  12 x0 y0  t  h1k1t where k1  a1 x0  11 x02  12 x0 y0 d  L1  y1 (t )  1 y0 (t )   h2  y0 (t )  a2 y0 (t )   22 y02 (t )  L   dt  L1  y1 (t )   h2  a2 y0 (t )   22 y02 (t )  L  y1 (t )  h2  a2 y0   22 y02  L  t  h2 k 2t where k2  a2 y0   22 y02  L


Appendix: B

L1[x2(t) − χ2 x1(t)] = h1 [ dx1(t)/dt − a1 x1(t) + α11 Σ_{n=0}^{1} x_n(t) x_{1−n}(t) + α12 Σ_{n=0}^{1} x_n(t) y_{1−n}(t) ],

x2(t) = (h1 + h1²) k1 t + [ −a1 h1² k1 + 2 α11 h1² k1 x0 + α12 h1² k1 y0 + α12 h1 h2 k2 x0 ] t²/2
      = h1 (1 + h1) k1 t + M1 t²/2,

where M1 = −a1 h1² k1 + 2 α11 h1² k1 x0 + α12 h1² k1 y0 + α12 h1 h2 k2 x0.

L2[y2(t) − χ2 y1(t)] = h2 [ dy1(t)/dt − a2 y1(t) + α22 Σ_{n=0}^{1} y_n(t) y_{1−n}(t) + L ],

y2(t) = (h2 + h2²) k2 t + h2 L t + [ −a2 h2² k2 + 2 α22 h2² k2 y0 ] t²/2
      = (h2 + h2²) k2 t + h2 L t + M2 t²/2,

where M2 = −a2 h2² k2 + 2 α22 h2² k2 y0.

Appendix: C

L1[x3(t) − χ3 x2(t)] = h1 [ dx2(t)/dt − a1 x2(t) + α11 Σ_{n=0}^{2} x_n(t) x_{2−n}(t) + α12 Σ_{n=0}^{2} x_n(t) y_{2−n}(t) ],

x3(t) = (1 + h1)(h1 + h1²) k1 t
      + [ (1 + h1) M1 − a1 h1² (1 + h1) k1 + 2 α11 h1² x0 (1 + h1) k1 + α12 h1 x0 (h2 + h2²) k2 + α12 h1 h2 L x0 + α12 h1² y0 (1 + h1) k1 ] t²/2
      + [ −(1/2) a1 h1 M1 + α11 h1 M1 x0 + α11 h1³ k1² + (1/2) α12 h1 M2 x0 + (1/2) α12 h1 M1 y0 + α12 h1² h2 k1 k2 ] t³/3.

L2[y3(t) − χ3 y2(t)] = h2 [ dy2(t)/dt − a2 y2(t) + α22 Σ_{n=0}^{2} y_n(t) y_{2−n}(t) + L ],

y3(t) = (1 + h2)(h2 + h2²) k2 t
      + [ −a2 h2 (h2 + h2²) k2 + (1 + h2) M2 + 2 α22 h2 k2 y0 (h2 + h2²) + h2 L ] t²/2
      + [ −(1/2) a2 h2 M2 + α22 h2 M2 y0 + α22 h2³ k2² ] t³/3.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

On Intuitionistic Fuzzy Implicative Hyper BCK- Ideals of Hyper BCK-Algebras B. Satyanarayana1

R. Durga Prasad2

D. Ramesh3

Y. Pragathi Kumar4

1,3 Department of Mathematics, Acharya Nagarjuna University, Guntur, Andhra Pradesh, INDIA
2 Department of Science and Humanities, RISE Gandhi Group of Institutions, Ongole, Andhra Pradesh, INDIA
4 Department of Science and Humanities, Tenali Engineering College, Anaumarlapudi, Andhra Pradesh, INDIA

ABSTRACT
In this paper, we define the notions of intuitionistic fuzzy (weak) implicative hyper BCK-ideals and present some theorems which characterize these notions in terms of level subsets. We also obtain the relationships among these notions, intuitionistic fuzzy (strong, weak, reflexive) hyper BCK-ideals and intuitionistic fuzzy positive implicative hyper BCK-ideals of types 1, 2, ..., 8, together with some related results.
Keywords: hyper BCK-algebra, fuzzy set, intuitionistic fuzzy implicative hyper BCK-ideal.

1. INTRODUCTION
The hyper structure theory (also called the theory of multialgebras) was introduced in 1934 by F. Marty [10] at the 8th Congress of Scandinavian Mathematicians. In [9], Y.B. Jun et al. applied hyper structures to BCK-algebras and introduced the notion of hyper BCK-algebras, which is a generalization of BCK-algebras. After the introduction of the concept of fuzzy sets by Zadeh [11], Atanassov [1, 2] introduced intuitionistic fuzzy sets. In [9], Jun et al. introduced the notion of implicative hyper BCK-ideals and obtained some related results. In this paper we first define the notions of intuitionistic fuzzy (weak) implicative hyper BCK-ideals, and then we present some theorems which characterize these notions in terms of level subsets. We also obtain the relationships among these notions, intuitionistic fuzzy (strong, weak, reflexive) hyper BCK-ideals and intuitionistic fuzzy

positive implicative hyper BCK-ideals of types 1, 2, ..., 8, and obtain some related results.

2. PRELIMINARIES
Let H be a non-empty set endowed with a hyper operation "∘", that is, ∘ is a function from H × H to P*(H) = P(H) \ {∅}. For any two subsets A and B of H, A ∘ B denotes the set ∪_{a∈A, b∈B} a ∘ b. We shall write x ∘ y instead of x ∘ {y}, {x} ∘ y, or {x} ∘ {y}.

2.1 Definition [4]. By a hyper BCK-algebra we mean a non-empty set H endowed with a hyper operation "∘" and a constant 0 satisfying the following axioms:
(HK-1) (x ∘ z) ∘ (y ∘ z) ≪ x ∘ y,
(HK-2) (x ∘ y) ∘ z = (x ∘ z) ∘ y,
(HK-3) x ∘ H ≪ {x},
(HK-4) x ≪ y and y ≪ x imply x = y,
for all x, y, z ∈ H, where the relation "≪" on H is defined by: x ≪ y if and only if 0 ∈ x ∘ y, and for every A, B ⊆ H, A ≪ B is defined by: for all a ∈ A there exists b ∈ B such that a ≪ b. In this case we call "≪" the hyper order in H. In the sequel H denotes a hyper BCK-algebra. Note that condition (HK-3) is equivalent to the condition:
(p1) x ∘ y ≪ {x} for any x, y ∈ H.
In any hyper BCK-algebra H the following hold:
(P2) 0 ∘ 0 = {0},
(P3) 0 ≪ x,
(P4) x ≪ x,
(P5) A ⊆ B implies A ≪ B,


(P6) 0 ∘ x = {0},
(P7) x ∘ 0 = {x},
(P8) 0 ∘ A = {0},
(P9) x ∈ x ∘ 0,
for all x ∈ H and for any non-empty subsets A, B of H.

Let I be a non-empty subset of a hyper BCK-algebra H with 0 ∈ I. Then I is called: a hyper BCK-subalgebra of H if x ∘ y ⊆ I for all x, y ∈ I; a weak hyper BCK-ideal of H if x ∘ y ⊆ I and y ∈ I imply x ∈ I, for all x, y ∈ H; a hyper BCK-ideal of H if x ∘ y ≪ I and y ∈ I imply x ∈ I, for all x, y ∈ H; a strong hyper BCK-ideal of H if (x ∘ y) ∩ I ≠ ∅ and y ∈ I imply x ∈ I, for all x, y ∈ H. I is said to be reflexive if x ∘ x ⊆ I for all x ∈ H; S-reflexive if (x ∘ y) ∩ I ≠ ∅ implies x ∘ y ≪ I, for all x, y ∈ H; closed if x ≪ y and y ∈ I imply x ∈ I, for all x, y ∈ H. It is easy to see that every S-reflexive subset of H is reflexive. A hyper BCK-algebra H is called a positive implicative hyper BCK-algebra if and only if for all x, y, z ∈ H,

(x ∘ y) ∘ z = (x ∘ z) ∘ (y ∘ z).

Let I be a non-empty subset of H with 0 ∈ I. Then I is said to be a weak implicative hyper BCK-ideal of H if (x ∘ z) ∘ (y ∘ x) ⊆ I and z ∈ I imply x ∈ I, and an implicative hyper BCK-ideal of H if (x ∘ z) ∘ (y ∘ x) ≪ I and z ∈ I imply x ∈ I, for all x, y, z ∈ H.

A fuzzy set in a set X is a function μ : X → [0, 1], and the complement of μ, denoted μ̄, is the fuzzy set in X given by μ̄(x) = 1 − μ(x) for all x ∈ X. Let μ and λ be fuzzy sets in H. For s, t ∈ [0, 1], the set U(μ; s) = {x ∈ X | μ(x) ≥ s} is called the upper s-level cut of μ, and the set L(λ; t) = {x ∈ X | λ(x) ≤ t} is called the lower t-level cut of λ; these can be used to characterize μ and λ.

Let μ be a fuzzy subset of H with μ(0) ≥ μ(x) for all x ∈ H. Then μ is called:
(i) a fuzzy weak implicative hyper BCK-ideal of H if

μ(x) ≥ min{ inf_{a ∈ (x∘z)∘(y∘x)} μ(a), μ(z) },

(ii) a fuzzy implicative hyper BCK-ideal of H if

μ(x) ≥ min{ sup_{a ∈ (x∘z)∘(y∘x)} μ(a), μ(z) },

for all x, y, z ∈ H.

2.2 Definition [1, 2].

As an important generalization of the notion of fuzzy sets in X, Atanassov [1, 2] introduced the concept of an intuitionistic fuzzy set (IFS for short): an intuitionistic fuzzy set in a non-empty set X is an object of the form A = {(x, μ_A(x), λ_A(x)) | x ∈ X}, where the functions μ_A : X → [0, 1] and λ_A : X → [0, 1] denote the degree of membership (namely μ_A(x)) and the degree of non-membership (namely λ_A(x)) of each element x ∈ X to the set A, respectively, and 0 ≤ μ_A(x) + λ_A(x) ≤ 1 for all x ∈ X. For the sake of simplicity, we use the symbol A = (X, μ_A, λ_A) or A = (μ_A, λ_A).

2.3 Definition [6]. An IFS A = (μ_A, λ_A) in H is called an intuitionistic fuzzy hyper BCK-ideal of H if it satisfies:
(k1) x ≪ y implies μ_A(x) ≥ μ_A(y) and λ_A(x) ≤ λ_A(y),
(k2) μ_A(x) ≥ min{ inf_{a ∈ x∘y} μ_A(a), μ_A(y) },
(k3) λ_A(x) ≤ max{ sup_{b ∈ x∘y} λ_A(b), λ_A(y) },
for all x, y ∈ H.


2.4 Definition [6]. An IFS A = (μ_A, λ_A) in H is called an intuitionistic fuzzy strong hyper BCK-ideal of H if it satisfies, for any x, y ∈ H,

inf_{a ∈ x∘x} μ_A(a) ≥ μ_A(x) ≥ min{ sup_{b ∈ x∘y} μ_A(b), μ_A(y) }

and

sup_{c ∈ x∘x} λ_A(c) ≤ λ_A(x) ≤ max{ inf_{d ∈ x∘y} λ_A(d), λ_A(y) }.

2.5 Definition [6]. An IFS A = (μ_A, λ_A) in H is called an intuitionistic fuzzy s-weak hyper BCK-ideal of H if it satisfies, for any x, y ∈ H:
(s1) μ_A(0) ≥ μ_A(y) and λ_A(0) ≤ λ_A(y),
(s2) there exist a, b ∈ x ∘ y such that

μ_A(x) ≥ min{ μ_A(a), μ_A(y) } and λ_A(x) ≤ max{ λ_A(b), λ_A(y) }.

2.6 Definition [6]. An IFS A = (μ_A, λ_A) in H is called an intuitionistic fuzzy weak hyper BCK-ideal of H if it satisfies

μ_A(0) ≥ μ_A(x) ≥ min{ inf_{a ∈ x∘y} μ_A(a), μ_A(y) }

and

λ_A(0) ≤ λ_A(x) ≤ max{ sup_{b ∈ x∘y} λ_A(b), λ_A(y) }

for all x, y ∈ H.

2.7 Definition [6]. An IFS A = (μ_A, λ_A) in H is said to satisfy the "sup-inf" property if for any subset T of H there exist x0, y0 ∈ T such that

μ_A(x0) = sup_{x ∈ T} μ_A(x) and λ_A(y0) = inf_{y ∈ T} λ_A(y).

3. INTUITIONISTIC FUZZY IMPLICATIVE HYPER BCK-IDEALS

3.1 Definition: Let A = (μ_A, λ_A) be an intuitionistic fuzzy set in H with μ_A(0) ≥ μ_A(x) and λ_A(0) ≤ λ_A(x) for all x ∈ H. Then A = (μ_A, λ_A) is called:
(i) an intuitionistic fuzzy weak implicative hyper BCK-ideal of H if

μ_A(x) ≥ min{ inf_{a ∈ (x∘z)∘(y∘x)} μ_A(a), μ_A(z) }

and

λ_A(x) ≤ max{ sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) };

(ii) an intuitionistic fuzzy implicative hyper BCK-ideal of H if

μ_A(x) ≥ min{ sup_{a ∈ (x∘z)∘(y∘x)} μ_A(a), μ_A(z) }

and

λ_A(x) ≤ max{ inf_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) },

for all x, y, z ∈ H.

3.2 Theorem: Every intuitionistic fuzzy implicative hyper BCK-ideal of H is an intuitionistic fuzzy weak implicative hyper BCK-ideal.
Proof: Let A = (μ_A, λ_A) be an intuitionistic fuzzy implicative hyper BCK-ideal of H and let x, y, z ∈ H. Since

inf_{a ∈ (x∘z)∘(y∘x)} μ_A(a) ≤ sup_{a ∈ (x∘z)∘(y∘x)} μ_A(a) and inf_{b ∈ (x∘z)∘(y∘x)} λ_A(b) ≤ sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b).


Therefore

μ_A(x) ≥ min{ sup_{a ∈ (x∘z)∘(y∘x)} μ_A(a), μ_A(z) } ≥ min{ inf_{a ∈ (x∘z)∘(y∘x)} μ_A(a), μ_A(z) }

and

λ_A(x) ≤ max{ inf_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) } ≤ max{ sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) }.

Thus A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H.

3.3 Example: Let H = {0, a, b}. Consider the following Cayley table:

∘ | 0     a      b
0 | {0}   {0}    {0}
a | {a}   {0,a}  {0,a}
b | {b}   {a}    {0,a}

Then (H, ∘) is a hyper BCK-algebra [4]. Define an IFS A = (μ_A, λ_A) on H by

μ_A(0) = 1, μ_A(a) = 0, μ_A(b) = 0.6 and λ_A(0) = 0, λ_A(a) = 1, λ_A(b) = 0.4. Then A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H, but it is not an intuitionistic fuzzy implicative hyper BCK-ideal of H, because

μ_A(a) = 0 < 1 = μ_A(0) = min{ sup_{t ∈ (a∘0)∘(a∘a)} μ_A(t), μ_A(0) }

and

λ_A(a) = 1 > 0 = λ_A(0) = max{ inf_{t ∈ (a∘0)∘(a∘a)} λ_A(t), λ_A(0) }.

3.4 Theorem:
(i) Every intuitionistic fuzzy implicative hyper BCK-ideal of H is an intuitionistic fuzzy strong hyper BCK-ideal.
(ii) Every intuitionistic fuzzy weak implicative hyper BCK-ideal of H is an intuitionistic fuzzy weak hyper BCK-ideal.
Proof: (i) Let A = (μ_A, λ_A) be an intuitionistic fuzzy implicative hyper BCK-ideal of H. Putting y = 0 and z = y in Def. 3.1(ii), we get

μ_A(x) ≥ min{ sup_{a ∈ (x∘y)∘(0∘x)} μ_A(a), μ_A(y) } = min{ sup_{a ∈ x∘y} μ_A(a), μ_A(y) }

and

λ_A(x) ≤ max{ inf_{b ∈ (x∘y)∘(0∘x)} λ_A(b), λ_A(y) } = max{ inf_{b ∈ x∘y} λ_A(b), λ_A(y) }.  ….. (i)

First we show that, for x, y ∈ H, x ≪ y implies μ_A(x) ≥ μ_A(y) and λ_A(x) ≤ λ_A(y). For this, let x, y ∈ H be such that x ≪ y; then 0 ∈ x ∘ y, and so from (i) we get

μ_A(x) ≥ min{ sup_{a ∈ x∘y} μ_A(a), μ_A(y) } ≥ min{ μ_A(0), μ_A(y) } = μ_A(y)

and

λ_A(x) ≤ max{ inf_{b ∈ x∘y} λ_A(b), λ_A(y) } ≤ max{ λ_A(0), λ_A(y) } = λ_A(y).  …… (ii)

Let x ∈ H and a ∈ x ∘ x. Since x ∘ x ≪ {x}, we have a ≪ x for all a ∈ x ∘ x, and so by (ii), μ_A(a) ≥ μ_A(x) and λ_A(a) ≤ λ_A(x) for all a ∈ x ∘ x. Hence

inf_{a ∈ x∘x} μ_A(a) ≥ μ_A(x) and sup_{a ∈ x∘x} λ_A(a) ≤ λ_A(x).  ….. (iii)


Combining (i) and (iii), we get

inf_{a ∈ x∘x} μ_A(a) ≥ μ_A(x) ≥ min{ sup_{b ∈ x∘y} μ_A(b), μ_A(y) }

and

sup_{c ∈ x∘x} λ_A(c) ≤ λ_A(x) ≤ max{ inf_{d ∈ x∘y} λ_A(d), λ_A(y) }

for all x, y ∈ H. Thus A = (μ_A, λ_A) is an intuitionistic fuzzy strong hyper BCK-ideal of H.

(ii) Let A = (μ_A, λ_A) be an intuitionistic fuzzy weak implicative hyper BCK-ideal of H. Putting y = 0 and z = y in Def. 3.1(i), we get

μ_A(x) ≥ min{ inf_{a ∈ (x∘y)∘(0∘x)} μ_A(a), μ_A(y) } = min{ inf_{a ∈ x∘y} μ_A(a), μ_A(y) }

and

λ_A(x) ≤ max{ sup_{b ∈ (x∘y)∘(0∘x)} λ_A(b), λ_A(y) } = max{ sup_{b ∈ x∘y} λ_A(b), λ_A(y) }.

Therefore

μ_A(0) ≥ μ_A(x) ≥ min{ inf_{a ∈ x∘y} μ_A(a), μ_A(y) }

and

λ_A(0) ≤ λ_A(x) ≤ max{ sup_{b ∈ x∘y} λ_A(b), λ_A(y) }

for all x, y ∈ H. Thus A = (μ_A, λ_A) is an intuitionistic fuzzy weak hyper BCK-ideal of H.

3.5 Example: Let H = {0, a, b, c}. Consider the following Cayley table:

∘ | 0     a     b      c
0 | {0}   {0}   {0}    {0}
a | {a}   {0}   {0}    {0}
b | {b}   {b}   {0}    {0}
c | {c}   {c}   {b,c}  {0,b,c}

Then (H, ∘) is a hyper BCK-algebra [4]. Define an IFS A = (μ_A, λ_A) in H by μ_A(0) = 1, μ_A(a) = 1, μ_A(b) = 0.4, μ_A(c) = 0 and λ_A(0) = 0, λ_A(a) = 0, λ_A(b) = 0.7, λ_A(c) = 0. Then A = (μ_A, λ_A) is an intuitionistic fuzzy strong hyper BCK-ideal (and so an intuitionistic fuzzy weak hyper BCK-ideal), but it is not an intuitionistic fuzzy implicative (and so not an intuitionistic fuzzy weak implicative) hyper BCK-ideal of H, because

μ_A(b) = 0.4 < 1 = μ_A(0) = min{ inf_{t ∈ (b∘0)∘(c∘b)} μ_A(t), μ_A(0) }

and

λ_A(b) = 0.7 > 0 = λ_A(0) = max{ sup_{t ∈ (b∘0)∘(c∘b)} λ_A(t), λ_A(0) }.

This implies that the converse of Theorem 3.4 is not correct in general.

3.6 Theorem: Let A = (μ_A, λ_A) be an IFS of H. Then the following statements hold:
(i) A is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H if and only if, for all s, t ∈ [0, 1], the non-empty level sets U(μ_A; s) and L(λ_A; t) are weak implicative hyper BCK-ideals of H.
(ii) If A is an intuitionistic fuzzy implicative hyper BCK-ideal of H, then for all s, t ∈ [0, 1], the non-empty level sets U(μ_A; s) and L(λ_A; t) are implicative hyper BCK-ideals of H.
(iii) If the non-empty level sets U(μ_A; s) and L(λ_A; t) are S-reflexive implicative hyper BCK-ideals of H for all s, t ∈ [0, 1], and A satisfies the "sup-inf" property, then A is an intuitionistic fuzzy implicative hyper BCK-ideal of H.

3.7 Theorem: Let A = (μ_A, λ_A) be an IFS on H.
(i) If A satisfies the "sup-inf" property and for all s, t ∈ [0, 1], U(μ_A; s) and L(λ_A; t) are


reflexive, and A is an intuitionistic fuzzy implicative hyper BCK-ideal of H, then A is an intuitionistic fuzzy positive implicative hyper BCK-ideal of type 3.
(ii) Let H be a positive implicative hyper BCK-algebra. If A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H, then A is an intuitionistic fuzzy positive implicative hyper BCK-ideal of type 1.
Proof: (i) Suppose that A = (μ_A, λ_A) is an IFS of H that has the "sup-inf" property, that for all s, t ∈ [0, 1] the sets U(μ_A; s) and L(λ_A; t) are reflexive, and that A is an intuitionistic fuzzy implicative hyper BCK-ideal of H. By Theorem 3.6(ii), for all s, t ∈ [0, 1], U(μ_A; s) and L(λ_A; t) are implicative hyper BCK-ideals of H. Since U(μ_A; s) and L(λ_A; t) are reflexive, by Theorem 4.6(iii) [5] they are positive implicative hyper BCK-ideals of type 3, and so by Theorem 3.14(iii) [9], A = (μ_A, λ_A) is an intuitionistic fuzzy positive implicative hyper BCK-ideal of type 3.
(ii) Suppose A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H. For x, y, z ∈ H, put

s = min{ inf_{a ∈ (x∘y)∘z} μ_A(a), inf_{b ∈ y∘z} μ_A(b) }.

Thus for all a ∈ (x∘y)∘z and b ∈ y∘z, μ_A(a) ≥ s and μ_A(b) ≥ s. Since H is a positive implicative hyper BCK-algebra, we have (x∘z)∘(y∘z) = (x∘y)∘z ⊆ U(μ_A; s) and y∘z ⊆ U(μ_A; s). By Theorem 3.4(ii), A = (μ_A, λ_A) is an intuitionistic fuzzy weak hyper BCK-ideal of H. It is easy to show that U(μ_A; s) is a weak hyper BCK-ideal of H; then x∘z ⊆ U(μ_A; s). Therefore, for all u ∈ x∘z,

μ_A(u) ≥ s = min{ inf_{a ∈ (x∘y)∘z} μ_A(a), inf_{b ∈ y∘z} μ_A(b) }.

Put

t = max{ sup_{c ∈ (x∘y)∘z} λ_A(c), sup_{d ∈ y∘z} λ_A(d) }.

Thus for all c ∈ (x∘y)∘z and d ∈ y∘z, λ_A(c) ≤ t and λ_A(d) ≤ t. Since H is a positive implicative hyper BCK-algebra, we have (x∘z)∘(y∘z) = (x∘y)∘z ⊆ L(λ_A; t) and y∘z ⊆ L(λ_A; t), by Theorem 3.4(ii). It is easy to show that L(λ_A; t) is a weak hyper BCK-ideal of H; then x∘z ⊆ L(λ_A; t). Therefore, for all v ∈ x∘z,

λ_A(v) ≤ t = max{ sup_{c ∈ (x∘y)∘z} λ_A(c), sup_{d ∈ y∘z} λ_A(d) }.

Thus A = (μ_A, λ_A) is an intuitionistic fuzzy positive implicative hyper BCK-ideal of type 1.

3.8 Theorem: Let A = (μ_A, λ_A) be an IFS on H. Then A is an intuitionistic fuzzy weak implicative hyper BCK-ideal if and only if the fuzzy sets μ_A and λ̄_A are fuzzy weak implicative hyper BCK-ideals of H.
Proof: Assume A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal. Obviously, μ_A is a fuzzy weak implicative hyper BCK-ideal of H. Now

λ̄_A(x) = 1 − λ_A(x)
  ≥ 1 − max{ sup_{a ∈ (x∘z)∘(y∘x)} λ_A(a), λ_A(z) }
  = min{ inf_{a ∈ (x∘z)∘(y∘x)} (1 − λ_A(a)), 1 − λ_A(z) }
  = min{ inf_{a ∈ (x∘z)∘(y∘x)} λ̄_A(a), λ̄_A(z) }.

Hence μ_A and λ̄_A are fuzzy weak implicative hyper BCK-ideals of H.


Conversely, assume that μ_A and λ̄_A are fuzzy weak implicative hyper BCK-ideals of H. Now

1 − λ_A(x) = λ̄_A(x)
  ≥ min{ inf_{b ∈ (x∘z)∘(y∘x)} λ̄_A(b), λ̄_A(z) }
  = min{ inf_{b ∈ (x∘z)∘(y∘x)} (1 − λ_A(b)), 1 − λ_A(z) }
  = min{ 1 − sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b), 1 − λ_A(z) }
  = 1 − max{ sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) }.

So λ_A(x) ≤ max{ sup_{b ∈ (x∘z)∘(y∘x)} λ_A(b), λ_A(z) }. Thus A = (μ_A, λ_A) is an intuitionistic fuzzy weak implicative hyper BCK-ideal of H.

3.9 Theorem: Let A = (μ_A, λ_A) be an IFS on H. Then A is an intuitionistic fuzzy weak implicative hyper BCK-ideal if and only if the IFSs (μ_A, μ̄_A) and (λ̄_A, λ_A) are intuitionistic fuzzy weak implicative hyper BCK-ideals of H.
Proof: The proof follows from Theorem 3.8.

3.10 Theorem: Let A = (μ_A, λ_A) be an IFS on H. Then A is an intuitionistic fuzzy implicative hyper BCK-ideal if and only if the fuzzy sets μ_A and λ̄_A are fuzzy implicative hyper BCK-ideals.

REFERENCES
[1] Atanassov, K.T., Intuitionistic fuzzy sets, Fuzzy Sets and Systems, 20, 87-96, 1983.
[2] Atanassov, K.T., New operations defined over the intuitionistic fuzzy sets, Fuzzy Sets and Systems, 61, 137-142, 1994.
[3] Atanassov, K.T., Intuitionistic fuzzy systems, Springer Physica-Verlag, Berlin, 1999.
[4] Bakhshi, M., Zahedi, M.M. and Borzooei, R.A., Fuzzy (positive, weak) hyper BCK-ideals, Iranian Journal of Fuzzy Systems, 1, 63-79, 2004.
[5] Borzooei, R.A. and Bakhshi, M., Some results on hyper BCK-algebras, Quasigroups and Related Systems, 11, 9-24, 2004.
[6] Borzooei, R.A. and Jun, Y.B., Intuitionistic fuzzy hyper BCK-ideals of hyper BCK-algebras, Iranian Journal of Fuzzy Systems, 1(1), 65-78, 2004.
[7] Durga Prasad, R., Satyanarayana, B. and Ramesh, D., On intuitionistic fuzzy positive implicative hyper BCK-ideals of BCK-algebras, IJAS, 6, 175-196, 2012.
[8] Jun, Y.B. and Xin, X.L., Implicative hyper BCK-ideals in hyper BCK-algebras, Mathematicae Japonicae, 52, 435-443, 2000.
[9] Jun, Y.B., Zahedi, M.M., Xin, X.L. and Borzooei, R.A., On hyper BCK-algebras, Italian J. of Pure and Appl. Math., 8, 127-136, 2000.
[10] Marty, F., Sur une généralisation de la notion de groupe, 8th Congress Math. Scandinaves, Stockholm, 45-49, 1934.
[11] Zadeh, L.A., Fuzzy sets, Information and Control, 8, 338-353, 1965.

3.11 Theorem: Let A = (μ_A, λ_A) be an IFS on H. Then A is an intuitionistic fuzzy implicative hyper BCK-ideal if and only if the IFSs (μ_A, μ̄_A) and (λ̄_A, λ_A) are intuitionistic fuzzy implicative hyper BCK-ideals of H.
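The conditions of Definition 3.1 are directly checkable on a finite hyper BCK-algebra such as the one in Example 3.3. The following sketch encodes that Cayley table and tests both the weak implicative (inf) and the implicative (sup) conditions; the dictionary representation and helper names are our own, not notation from the paper.

```python
# Check Definition 3.1 on the hyper BCK-algebra of Example 3.3.
from itertools import product

H = ["0", "a", "b"]
table = {  # Cayley table of Example 3.3: table[(x, y)] = x o y
    ("0", "0"): {"0"}, ("0", "a"): {"0"},      ("0", "b"): {"0"},
    ("a", "0"): {"a"}, ("a", "a"): {"0", "a"}, ("a", "b"): {"0", "a"},
    ("b", "0"): {"b"}, ("b", "a"): {"a"},      ("b", "b"): {"0", "a"},
}

def hop(A, B):
    """Set-valued product A o B = union of x o y over x in A, y in B."""
    return set().union(*(table[(x, y)] for x in A for y in B))

mu = {"0": 1.0, "a": 0.0, "b": 0.6}    # membership degrees of Example 3.3
lam = {"0": 0.0, "a": 1.0, "b": 0.4}   # non-membership degrees

def weak_implicative(mu, lam):
    """Definition 3.1(i): inf-based condition for all x, y, z."""
    for x, y, z in product(H, repeat=3):
        S = hop(hop({x}, {z}), hop({y}, {x}))   # (x o z) o (y o x)
        if mu[x] < min(min(mu[t] for t in S), mu[z]) - 1e-12:
            return False
        if lam[x] > max(max(lam[t] for t in S), lam[z]) + 1e-12:
            return False
    return True

def implicative(mu, lam):
    """Definition 3.1(ii): sup-based condition for all x, y, z."""
    for x, y, z in product(H, repeat=3):
        S = hop(hop({x}, {z}), hop({y}, {x}))
        if mu[x] < min(max(mu[t] for t in S), mu[z]) - 1e-12:
            return False
        if lam[x] > max(min(lam[t] for t in S), lam[z]) + 1e-12:
            return False
    return True
```

Running both checks reproduces the claim of Example 3.3: A is an intuitionistic fuzzy weak implicative, but not implicative, hyper BCK-ideal.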


A Unified Approach to Computability K. Venkata Rao1

B. Gayatri Devi2

1 Government College (A), Rajahmundry, Andhra Pradesh, India
2 Godavari Institute of Engineering and Technology, Rajahmundry, Andhra Pradesh, India

ABSTRACT Mapcode Theory is proposed as a unified approach for the theory of computation.

1 INTRODUCTION

The concept of computability in the theory of computation is introduced to students through many models such as finite automata, pushdown automata, context-free grammars, unlimited register machines, partial recursive functions, λ-calculus, and Turing machines. When dealing with practical computing problems, the same students need to learn a programming language and study how to express algorithms in that language. Thus there is a disconnect between theory and practice. In a series of articles [6], [9], [10], [11], [12] and a book [13], a theory called the "mapcode formalism" has been presented as an alternative approach to this problem. Thus a student can learn about both the theoretical and practical issues in terms of a single framework. In continuation of this work, in the article [7] partial recursive maps are characterized in terms of mapcode theory. The concept of nondeterminism is modeled in terms of set-valued maps called choice maps in [8]. When a computing mechanism is in a state x and nondeterminism is acting on it, it may yield a state from a possible set Φ(x) of states. When the set Φ(x) of possible outcomes at the state x becomes a singleton set, the nondeterminism reduces to the deterministic case. In this article, nondeterministic mapcode theory is presented as a unified approach for some of the important computing methods. This approach shows the structural similarities in the systems and the advantage of studying mapcode theory. Let ℕ and ℕ⁺ denote the set of all non-negative integers and the set of all positive integers respectively. The symbol ≜ may be read as "is defined as".

2 NONDETERMINISTIC MAPCODE THEORY

Let X be a nonempty set. A map Φ : X → P(X) is called a choice map. If Φ is a choice map then the pair (X, Φ) is called a choice structure. Let (X, Φ) be a choice structure. An element x ∈ X is called a dynamic state of the choice structure if Φ(x) ≠ ∅. The set of all dynamic states of Φ is denoted by dyn(Φ). If Φ(x) = ∅ then x is called a static state of the choice structure. Suppose Φ, Φ1, and Φ2 are choice maps on X.

Definitions 1
1. For x ∈ X, we define (Φ2 ∘ Φ1)(x) = Φ2(Φ1(x)). Φ2 ∘ Φ1 is a choice map on X called the composition of Φ2 with Φ1.
2. Φ⁰(x) ≜ {x}, and recursively for k ≥ 1, Φᵏ ≜ Φ ∘ Φᵏ⁻¹ = Φᵏ⁻¹ ∘ Φ.
3. A point x ∈ X is called a fixed point of Φ if Φ(x) = {x}. fix(Φ) ≜ {x | Φ(x) = {x}}.
4. A point x ∈ X is called a hang point of Φ if x ∈ Φ(x) and Φ(x) ≠ {x}.
5. A subset V ⊆ X is said to be invariant under Φ if Φ(x) ⊆ V for every x ∈ V.
6. An element x ∈ X is called a stable point of Φ if Φᵏ(x) ⊆ dyn(Φ) for every k ≥ 0. The set of all stable points of Φ is denoted by stab(Φ).
7. An element x ∈ X is said to be a weakly convergent point of Φ if Φᵏ(x) ∩ fix(Φ) ≠ ∅ for some k ≥ 0. The set of all weakly convergent points of Φ is


denoted by conw () . 8. An element x  X is said to be a convergent point of  if x  stab() , and

k ( x)  fix() for some k  0 . The set of all convergent points of  is denoted by con() . Remarks 2 1. If ( x) is at most a singleton set, then there exists a unique partial map F : D  X  X such that dyn() = D and ( x) = {F ( x)} for all x  D . In such a case the dynamics of choice structure is reduced to the dynamics of the map F . 2. The set con() is an invariant set. 3. fix()  con()  conw ()  stab() .

k ( x)  fix()  k 1 ( x)  fix() for k 1. 5. 1 (stab())  stab() . 4.

The definitions of convergence and weak convergence given above are conceptually easy to understand, but verifying convergence using these definitions is not convenient in practice. So we give below a more practical characterization of convergence.

Definitions 3: Let (X, Φ) be given.
1. If y ∈ Φ(x) we write x → y and say that x maps to y. Thus → defines a binary relation on X.
2. For n ≥ 0, a finite sequence (x0, x1, …, xn) of elements of X is called a run of length n starting at x0 and ending at xn if x0 → x1 → ⋯ → xn. In this case x1 ∈ Φ(x0), x2 ∈ Φ(x1), …, xn ∈ Φ(xn−1); also xn ∈ Φⁿ(x0).
3. If (x0, x1, …, xn) is a run we write x0 →* xn and say that x0 yields xn. It may be observed that →* is the transitive closure of →.
4. If (x0, x1, …, xn) is a run and 0 ≤ m < n, then (x0, x1, …, xm) is also a run. In such a case we say that (x0, x1, …, xn) is an extension of the run (x0, x1, …, xm).
5. A run is said to be aborted if it ends in a state that is not in dyn(Φ); that is, if it ends in a static state.
6. A run is said to be terminal if it ends in a fixed point of Φ. If (x0, x1, …, xn) is a terminal run and m is the least non-negative integer such that xm ∈ fix(Φ), then xm = xm+1 = ⋯ = xn.

Theorem 4: Suppose Φ is a choice map on X, and x ∈ X.
1. x ∈ con(Φ) if and only if there exists k ≥ 0 such that there are runs at x, every run at x can be extended, and every run starting at x of length k is terminal.
2. x ∈ conw(Φ) if and only if there exists a run starting at x that is terminal.
Proof: 1. Suppose that x ∈ con(Φ), so that

Φⁿ(x) ⊆ dyn(Φ) for all n ≥ 0, and Φᵏ(x) ⊆ fix(Φ) for some k ≥ 0. Then Φⁿ(x) ⊆ dyn(Φ) for all n ≥ 0 implies that there are runs at x, and that every run at x can be extended. Consider any run (x = x0, x1, x2, …, xk) of length k. Then xk ∈ Φᵏ(x) ⊆ fix(Φ). Hence it is a terminal run.

Conversely, assume the condition. There are runs at x, so x ∈ dyn(Φ). Every run at x can be extended, so Φⁿ(x) ⊆ dyn(Φ) for every n ≥ 0. Take any xk ∈ Φᵏ(x). If k > 1, Φᵏ(x) = Φ(Φᵏ⁻¹(x)), and there exists xk−1 ∈ Φᵏ⁻¹(x) such that xk ∈ Φ(xk−1). Continuing inductively we get a sequence (x = x0, x1, …, xk) such that x0 → x1 → ⋯ → xk. Then it is a run of length k and hence terminal. So xk ∈ fix(Φ). Hence Φᵏ(x) ⊆ fix(Φ).

2. Suppose x ∈ conw(Φ). Then there exists k ≥ 0 such that Φᵏ(x) ∩ fix(Φ) ≠ ∅. So there exists an element xk ∈ Φᵏ(x) ∩ fix(Φ).


If k > 1 , then xk k ( x) = (k 1 ( x)) and there

exists

xk 1  k 1 ( x)

such

that

xk  ( xk 1 ) . Continuing inductively we get a finite sequence ( xi ),0  i  k such that x = x0  x1   xk  fix(). Its length is k .

Conversely, assume that there exists a terminal run starting at x of length k. Then there exist x_i, 0 ≤ i ≤ k, such that x = x_0 → x_1 → ⋯ → x_k ∈ fix(Φ). It is clear that x_k ∈ Φ^k(x). Hence Φ^k(x) ∩ fix(Φ) ≠ ∅.

3 LIMIT CHOICE MAPS

Let Φ be a choice map on X. We have observed in Remark 2 that the sequence of sets Φ^k(x) ∩ fix(Φ) is monotonically increasing for all x.

Definition 5 Let (X, Φ) be a choice structure. Define Φ^∞(x) = lim_{k→∞} (Φ^k(x) ∩ fix(Φ)), for all x ∈ X. The choice map Φ^∞ is called the limit map of Φ. Elements of Φ^∞(x) are called limit points of Φ at x.

Remark 1
1. fix(Φ^∞) = fix(Φ).
2. Φ^∞(x) = {y ∈ fix(Φ) | x →* y}, for x ∈ X.

Dijkstra's Note: Let (X, Φ) be given. Suppose Φ(x) is finite for all x. If x ∈ con(Φ) then there exists k ≥ 0 such that Φ^k(x) ⊆ fix(Φ). In such a case Φ^∞(x) = Φ^k(x). In particular, Φ^∞(x) is finite. So it is impossible to have a convergent choice structure with Φ(x) finite and Φ^∞(x) infinite for some x ∈ X. It is this fact that Dijkstra is pointing out when he says [3] that there cannot exist a program that says “set x to any positive integer”. This can be understood in terms of continuity in the space P(X).

Definition 6 For any A ⊆ X,
1. the set bas(Φ, A) = {x ∈ con(Φ) | Φ^∞(x) ⊆ A} is called the basin of A with respect to Φ, and
2. the set basw(Φ, A) = {x ∈ conw(Φ) | Φ^∞(x) ∩ A ≠ ∅} is called the weak basin of A with respect to Φ.

Remark 2 The set bas(Φ, A) is an invariant set under Φ. However, the set basw(Φ, A) is not an invariant set. If we define Φ(0) = {0}, Φ(1) = {0, 2}, and Φ(x) = {x + 1} for x ≥ 2, then 1 ∈ basw(Φ, {0}) and Φ(1) ⊄ basw(Φ, {0}).
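This behaviour is easy to experiment with. The following Python sketch is our own illustration, assuming the example map Φ(0) = {0}, Φ(1) = {0, 2}, Φ(x) = {x + 1} for x ≥ 2: it models Φ as a set-valued function, iterates Φ^k(x), and tests weak convergence by searching for a nonempty Φ^k(x) ∩ fix(Φ) up to a finite horizon.

```python
def phi(x):
    # Example choice map: Phi(0) = {0}, Phi(1) = {0, 2}, Phi(x) = {x + 1} for x >= 2.
    if x == 0:
        return {0}
    if x == 1:
        return {0, 2}
    return {x + 1}

def phi_k(x, k):
    """Phi^k(x): the set of states reachable from x in exactly k steps."""
    states = {x}
    for _ in range(k):
        states = set().union(*(phi(s) for s in states))
    return states

def weakly_convergent(x, horizon=20):
    """x is in conw(Phi) iff Phi^k(x) meets fix(Phi) for some k (searched up to a horizon)."""
    fix = {s for s in range(horizon + 25) if phi(s) == {s}}
    return any(phi_k(x, k) & fix for k in range(horizon))

print(weakly_convergent(1))  # True: the run 1 -> 0 is terminal
print(weakly_convergent(2))  # False within the horizon: 2 -> 3 -> 4 -> ... never reaches a fixed point
```

Here 1 is weakly convergent but not convergent, since the run 1 → 2 → 3 → ⋯ never terminates.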

Dijkstra [3] introduces the notion of a state space X but does not define what is meant by a nondeterministic mechanism acting on X. Rather, he says that if we denote such a mechanism by Φ then Φ is characterized by a certain action it has on sets. Taking our mathematical starting point in defining a nondeterministic mechanism as a choice map Φ, we first define the notion of a Guarded Command.

Definition 7 Given D ⊆ X and a partial map F : X → X, the pair (D, F) is called a Guarded Command on X. The set D ∩ dom(F) is called the guard and F is called the command. Its action is first to check whether a given state x is in D ∩ dom(F). If it is, x is changed to F(x). If it is not, then no action is taken.

Suppose Q = {(D_1, F_1), (D_2, F_2), …, (D_k, F_k)} is a collection of guarded commands on X. Let D denote the union of all the D_i's. Dijkstra introduces two different ways of composing guarded commands. The alternative construct IF is denoted by IF = if D_1 → F_1 □ ⋯ □ D_k → F_k fi, and the repetitive construct DO is denoted by DO = do D_1 → F_1 □ ⋯ □ D_k → F_k od.
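The composed constructs can be made concrete. The following Python sketch is our own illustration, not the paper's formalism: it runs Dijkstra's classic guarded-command gcd loop, do x > y → x := x − y □ y > x → y := y − x od, by repeatedly applying an enabled command until no guard holds.

```python
# Guarded commands (D_i, F_i) on states (x, y), sketching Dijkstra's gcd loop:
#   do x > y -> x := x - y  []  y > x -> y := y - x  od
commands = [
    (lambda s: s[0] > s[1], lambda s: (s[0] - s[1], s[1])),
    (lambda s: s[1] > s[0], lambda s: (s[0], s[1] - s[0])),
]

def phi_IF(s):
    """The set {F_i(s) | s in D_i}; empty when no guard holds."""
    return {F(s) for D, F in commands if D(s)}

def phi_Q(s):
    """Agrees with phi_IF on the union of the guards and fixes every state outside it."""
    return phi_IF(s) or {s}

def run_DO(s):
    """Iterate phi_Q until a fixed point; here the guards are disjoint, so the choice is deterministic."""
    while phi_Q(s) != {s}:
        s = next(iter(phi_Q(s)))
    return s

print(run_DO((12, 18)))  # (6, 6): both components end at gcd(12, 18)
```

Because the two guards are disjoint, each step has exactly one successor; in general a choice map may offer several.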

K. Venkata Rao B. Gayatri Devi

To interpret Dijkstra's constructs ``IF'' and ``DO'' in terms of the present formalism, we have now only to identify the choice maps corresponding to IF and DO. This we proceed to do.

Definitions 8 Let Q = {(D_1, F_1), …, (D_k, F_k)} be a finite collection of guarded commands and D = ∪ D_i.
1. The choice map Φ_IF associated with Q is defined by Φ_IF(x) = {F_i(x) | x ∈ D_i, 1 ≤ i ≤ k} if x ∈ D, and ∅ if x ∉ D. The map Φ_IF is called the quilt of the guarded commands in Q.
2. Define the choice map Φ_Q by Φ_Q(x) = Φ_IF(x) if x ∈ D, and {x} if x ∉ D.

Remark 3 If the sets D_i are pairwise disjoint then there exists a unique partial map F : X → X such that Φ_IF(x) = {F(x)} for every x ∈ dyn(Φ_IF) = dom(F). We write F = (D_1, F_1) □ (D_2, F_2) □ ⋯ □ (D_k, F_k).

By definition, Φ_Q(x) ≠ ∅ for all x ∈ X, so that dyn(Φ_Q) = X. Clearly dyn(Φ_IF) = D and D^c ⊆ fix(Φ_Q). There could be points of D also in fix(Φ_Q). Let E = {x ∈ D | x ∈ D_i ⟹ F_i(x) = x}. Then E ⊆ fix(Φ_Q), and in fact fix(Φ_Q) = D^c ∪ E and fix(Φ_IF) = E. It is to be noted that the set E is not mentioned explicitly by Dijkstra.

The ``basic theorem for the alternative construct'' takes the following form. The proof follows immediately from the definitions of Φ_Q and Φ_IF.

Theorem 9: Let A, B ⊆ X be such that A ⊆ D and F_j(A ∩ D_j) ⊆ B for all j. Then Φ_IF(A) ⊆ B.

Next, let us consider the repetitive construct DO.

Definition 10 Let Q be given as above. Then the choice map Φ_DO is defined by Φ_DO(x) = Φ_Q^∞(x) if x ∈ bas(Φ_Q, D^c), and ∅ otherwise.

Clearly dyn(Φ_DO) = bas(Φ_Q, D^c) and fix(Φ_DO) = D^c. Also Φ_DO(X) ⊆ D^c. The ``fundamental invariance theorem for loops'' takes the following form.

Theorem 11: Let V ⊆ X be such that Φ_IF(V ∩ D) ⊆ V. Then Φ_DO(V ∩ con(Φ_Q)) ⊆ V ∩ D^c.

After all this work we have finally the simple definitions below.

Definitions 12 1. IF ≡ Φ_IF. 2. DO ≡ Φ_DO. Given a collection Q of guarded commands, we take the associated choice map Φ to be the map Φ_Q.

4 NONDETERMINISTIC MAPCODE COMPUTATIONAL SYSTEMS

Definition 13 Let S, X, and T be three nonempty sets. Let Q be a finite collection {(D_i, F_i) | i ∈ I} of guarded commands on X. Let ι : S → D_{i_0} be a map for some i_0. Let D = ∪ D_i and ω : X → T be a partial map. A nondeterministic mapcode machine is the 6-tuple (S, X, T, ι, Q, ω). A nondeterministic mapcode computational system is a collection


M of such machines (S, X, T, ι, Q, ω). Suppose that M = (S, X, T, ι, Q, ω) is a nondeterministic mapcode machine. Given an input s ∈ S, the input map ι takes the value s into a state ι(s) in D_{i_0} ⊆ X. The nondeterministic computation map Φ_Q will keep on acting on it until it reaches a state in X \ D, on which no command F_i can act. If that state is also in the set dom(ω), then the output map ω will read out the terminal state as an element of T. This is how the machine M behaves. Thus an element in H ≡ (X \ D) ∩ dom(ω) is taken as a halting state of the computation. Given s, the set of all such possible outcomes is a subset of T denoted by η(s). Then η : S → P(T), and it is clear that η = ω ∘ Φ_Q^∞ ∘ ι, defined on the set Ω = {s ∈ S | Φ_Q^∞(ι(s)) ⊆ H}. The set Ω is called the halting set of the machine.

Definitions 14 Let M be given.
1. The map η is called the input-output map of the machine, and Ω = {s | η(s) ≠ ∅}.
2. A partial map f : S → P(T) is said to be computable with M if dyn(f) ⊆ Ω and, for any s ∈ dyn(f), η(s) ⊆ f(s).

Definitions 15
1. A collection M of nondeterministic mapcode machines (S, X, T, ι, Q, ω) is called a computational system from S into T. Given a computational system M, a partial map f : S → P(T) is said to be computable in M if there exists a machine M in M computing the map f.
2. Two computational systems M_1 and M_2 are said to be equivalent if they generate the same set of computable maps from S into T.

We will now present some standard models as mapcode computational systems: 1. the collection of Unlimited Register Machines introduced by Nigel Cutland [2]; 2. the collection of Turing machines. This shows the importance and applicability of the theory presented in this article.

5 UNLIMITED REGISTER MACHINES

The ``Unlimited Register Machine'' (URM) presented by Nigel Cutland in his book [2] is a mathematical idealization of a computer. We present it as a Mapcode Computational System. Let ℕ* be the set of all infinite strings (u_1, u_2, …) of elements of ℕ in which all but finitely many u_i's are zero. Let X = ℕ × ℕ*. Let a general element of X be denoted by (a; u) for some a ∈ ℕ (which indicates the stage of the computation) and some element u ∈ ℕ* (which indicates the state of the computation). The element x = (a; u) represents the state u at the stage a.

Definition 16 Let S = ℕ^n, T = ℕ for some n.
1. For any a ∈ ℕ define D_a = {a} × ℕ*.
2. For any a ∈ ℕ, define ι_a : S → X by ι_a(u_1, …, u_n) = (a; u_1, u_2, …, u_n, 0, 0, …). For any b_0 ∈ ℕ define ω_{b_0} : D_{b_0} → ℕ by ω_{b_0}(b_0; u_1, …) = u_1.
3. For any b, i, j ∈ ℕ and x = (a; u_1, …), define the total maps Z_i, S_i, T_{i,j} and J_{b;i,j} on X by
Z_i(x) = (a + 1; u_1, …, u_{i−1}, 0, u_{i+1}, …);
S_i(x) = (a + 1; u_1, …, u_{i−1}, u_i + 1, u_{i+1}, …);
T_{i,j}(x) = (a + 1; u_1, …, u_{j−1}, u_i, u_{j+1}, …);
J_{b;i,j}(x) = (b; u_1, u_2, …) if u_i = u_j, and (a + 1; u_1, u_2, …) if u_i ≠ u_j.
The maps Z_i, S_i, T_{i,j} and J_{b;i,j} are called commands on X.
4. For any a ∈ ℕ and a command F on X, the pair (D_a, F) is called an admissible patch on X.

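The URM commands Z_i, S_i, T_{i,j} and J_{b;i,j} translate directly into state transformers. The following Python sketch is our own illustration; representing a state (a; u) as a pair of a stage and a zero-defaulting dictionary of registers is our choice, not the paper's.

```python
from collections import defaultdict

# A state x = (a; u) is modeled as (stage, registers); registers default to 0, as in N*.
def Z(i):
    """Z_i: zero register i and advance the stage."""
    def f(a, u):
        u = u.copy(); u[i] = 0
        return a + 1, u
    return f

def S(i):
    """S_i: increment register i and advance the stage."""
    def f(a, u):
        u = u.copy(); u[i] += 1
        return a + 1, u
    return f

def T(i, j):
    """T_{i,j}: copy register i into register j and advance the stage."""
    def f(a, u):
        u = u.copy(); u[j] = u[i]
        return a + 1, u
    return f

def J(b, i, j):
    """J_{b;i,j}: jump to stage b if u_i = u_j, otherwise advance the stage."""
    def f(a, u):
        return (b, u.copy()) if u[i] == u[j] else (a + 1, u.copy())
    return f

a, u = J(7, 1, 2)(1, defaultdict(int, {1: 2, 2: 2}))  # registers 1 and 2 agree, so jump
print(a)  # 7
```

Note that J(a, i, i) always jumps to its own stage a, which is exactly why the text observes that such a command acts as the identity on D_a.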

For any i ∈ ℕ, the jump command F = J_{a;i,i} is always the identity on the set D_a = {a} × ℕ*.

Definition 17
1. A mapcode machine M given by (S, X, T, ι, Q, ω) is called an URM machine if there exist n, N ∈ ℕ such that
(a) S = ℕ^n, T = ℕ,
(b) Q = {(D_1, F_1), (D_2, F_2), …, (D_{N+1}, F_{N+1})} is a set of N + 1 admissible patches on X, with F_{N+1} = Id, the identity on D_{N+1},
(c) Φ_Q(V) ⊆ V, where V = [1, N + 1] × ℕ*, and
(d) ι = ι_1, ω = ω_{N+1}.
2. The collection M of all such URM machines is a mapcode computational system.
3. A map f : ℕ^n → ℕ is said to be computable if there exists an URM machine M computing the map f.

An equivalent way of saying Φ_Q(V) ⊆ V is that whenever F_a = J_{b;i,j} is a jump command, it is required that 1 ≤ b ≤ N + 1.

In the theory of the ``Unlimited Register Machine'' presented by Nigel Cutland, the program directly acts on the space ℕ*. In his sense, a program is an ordered finite sequence of instructions. The execution begins with the first instruction and passes on to the next instruction until a jump instruction is given. The program also has to keep track of the stage of the command; this is not addressed explicitly in his theory. We introduced the set [1, N + 1] to keep track of the stage, and the instructions are modified accordingly. This helps replace the informal description of the action of the program by mathematical terms.

6 TURING MACHINES

Let R be a finite set containing at least two elements. Let # be a special symbol in R called blank. Then R* is the set of all finite strings of elements of R, and the symbol Λ stands for the empty string in R*.

Recall: For any two strings u = (u_1, u_2, …, u_n) and v = (v_1, v_2, …, v_m) ∈ R*, the concatenation of u and v is defined and denoted by u ∙ v = (u_1, u_2, …, u_n, v_1, v_2, …, v_m).

Let | be a special symbol called ``head'' (or separator) which is not in R. Let R̄* be the set of all two-sided strings of elements of R ∪ {|} in which
1. the head symbol ``|'' appears once and only once, and
2. all but finitely many elements in the string are the blank ``#''.

Thus z = (…, z_{−2}, z_{−1}, z_0 | z_1, z_2, …) is a general element of R̄*, or simply z = (z_i)_{i∈ℤ}, where z_i ∈ R for all i ∈ ℤ and there exists n ∈ ℕ such that z_i = # for all i ∈ ℤ with |i| ≥ n. We sometimes call the element z_1 the first coordinate of z. In such a case, the element to the left of | is the 0-th coordinate, and the relative places of the other elements of z are to be understood in a similar way. The symbol `|' may be viewed as a separator between z_0 and z_1, or as a head reading the symbol z_1. (z)_i ≡ z_i denotes the i-th coordinate of z (with reference to `|'). When the set R is equipped with a group structure, we choose the zero element 0 of R as #.

Notation: Suppose z = (…, z_{−2}, z_{−1}, z_0 | z_1, z_2, …) ∈ R̄* and z_i = # for i < −l or i > m, for some l ≥ 1 and m ≥ 2. Then it is sometimes convenient to write z = u | v, where u = (z_{−l}, z_{−l+1}, …, z_{−1}, z_0) and v = (z_1, z_2, z_3, …, z_m). We can also write z = u′ ∙ u″ | v′ ∙ v″, for u = u′ ∙ u″, v = v′ ∙ v″. It is to be observed that the pair (l, m) is not unique and hence the


representation is not unique. We consider three kinds of maps on R̄*: the left and right shift maps, denoted R_l and R_r respectively; the replacement maps g_c for any c ∈ R; and the identity map Id.

Definition 18
1. The left shift and the right shift operations on R̄* are defined by
R_l(…, z_{−2}, z_{−1}, z_0 | z_1, z_2, …) = (…, z_{−2}, z_{−1} | z_0, z_1, z_2, …), and
R_r(…, z_{−2}, z_{−1}, z_0 | z_1, z_2, …) = (…, z_{−2}, z_{−1}, z_0, z_1 | z_2, z_3, …).
We may sometimes write (R_l(z))_i = z_{i−1} and (R_r(z))_i = z_{i+1}. In other words, R_r(v ∙ z_0 | z_1 ∙ u) = v ∙ z_0 ∙ z_1 | u, and R_l(v ∙ z_0 | z_1 ∙ u) = v | z_0 ∙ z_1 ∙ u.
2. The replacement map g_c : R̄* → R̄* for any c ∈ R is defined by g_c(…, z_{−2}, z_{−1}, z_0 | z_1, z_2, …) = (…, z_{−2}, z_{−1}, z_0 | c, z_2, …). The map g_c reads the element z_1 under the head, erases it, and writes the element c.
3. Let Id : R̄* → R̄* be the identity map.
Let F be the collection of all shift maps, replacement maps and the identity map.

Definition 19 Let Γ be a nonempty finite set and X = Γ × R̄*.
1. For any a ∈ Γ and v ∈ R, D_{a,v} ≡ {a} × {z ∈ R̄* | z_1 = v}.
2. For any b ∈ Γ and g ∈ F, define a partial map F_{b,g} : X → X by F_{b,g}(a, u) = (b, g(u)). The map F_{b,g} is called a command map on X.
3. For any a ∈ Γ, v ∈ R and F a command map on X, the pair (D_{a,v}, F) is called an admissible patch on X.

Definition 20 A machine M = (S, X, T, ι, Q, ω) with T = {0, 1} is called a Turing Machine if there exist a finite set R containing at least two elements and a finite set Γ such that S ⊆ (R \ {#})*, X = Γ × R̄*, and
1. Q is a finite collection of admissible patches (D_{a,v}, F) on X,
2. there exists q_0 ∈ Γ such that ι = ι_{q_0} : S → X is given by ι_{q_0}(u) = (q_0, Λ | u), and
3. there exist two distinct elements y, n ∈ Γ such that {y, n} × R̄* ⊆ fix(Φ_Q), and ω = ω_{y,n} : {y, n} × R̄* → T is given by ω_{y,n}(a, u) = 1 if a = y, and 0 if a = n.

7 CONCLUSION

While presenting the Mapcode model for the Turing Machine, the notation and terminology are carefully chosen to suit the argument involved in presenting Real Computability, discussed in [1], as a mapcode machine.

Dijkstra [3] introduces the notion of a nondeterministic mechanism acting on a state space X but does not define the notion. Rather, he says that such a mechanism induces a set action and that the action characterizes the mechanism. This article presents an alternative approach to the understanding of Dijkstra's formalism. Our approach also suggests that there is a weak convergence related to nondeterministic mechanisms. It also presents some of the standard computing models as mapcode machines.

REFERENCES

[1] Blum, L., Cucker, F., Shub, M., and Smale, S.: Complexity and Real


Computation, Springer-Verlag, New York, USA, 1998.
[2] Cutland, N.: Computability: An Introduction to Recursive Function Theory, Cambridge University Press, Cambridge, UK, 1980.
[3] Dijkstra, E.W.: A Discipline of Programming, Prentice-Hall, Englewood Cliffs, NJ, 1976.
[4] Gries, D.: The Science of Programming, Springer-Verlag, New York, 1981.
[5] Knuth, D.E.: The Art of Computer Programming, Volumes I-III, Third Edition, Pearson Education Asia, New Delhi, India, 2002.
[6] Viswanath, K.: Dynamical Systems and Computer Science, Mathematics Newsletter of the Ramanujan Mathematical Society, 13(3), 2003, pp. 33-48.
[7] Venkata Rao, K., and Viswanath, K.: Mapcode Characterization of Partial Recursive Maps, Sankhya, The Indian Journal of Statistics, 2010, Volume 72-A, Part 1, pp. 276-292.
[8] Venkata Rao, K., and Viswanath, K.: Computing with Multiple Discrete Flows, Journal of Differential Equations and Dynamical Systems, October 2010, Volume 18(4), pp. 401-414.
[9] Viswanath, K.: Building Programs and Proving them Correct with Dynamical Systems, Proceedings of the International Conference on Computing and Control, Austin, Texas, USA, August 2004, International Institute of Informatics and Systemics, Orlando, Florida, USA, pp. 372-377.
[10] Viswanath, K.: An Invitation to Mathematical Computer Science, Discrete Mathematics and its Applications, Ed. M. Sethumadhavan, Narosa, 2006.
[11] Viswanath, K.: Simple Foundations for Computing, Proceedings of the 40th Annual Convention of the Computer Society of India, November 10-12, 2005, Hyderabad, India.
[12] Viswanath, K.: Computing with Dynamical Systems, Journal of Differential Equations and Dynamical Systems, Vol. 14(1), pp. 1-24, 2006.
[13] Viswanath, K.: An Introduction to Mathematical Computer Science, The Universities Press, Hyderabad, India, 2008.

Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Grid Resource Management (GRM) by adopting Software Agents (SA)

T.A. Rama Raju1    M.S. Prasada Babu2
1 Jawaharlal Nehru Technological University, Kakinada, Andhra Pradesh, India
2 College of Engineering, Andhra University, Vishakhapatnam, Andhra Pradesh, India

ABSTRACT
Grids and Agent communities offer two approaches to open distributed systems. Grids focus on robust infrastructure and services, and Agents on the operation of autonomous intelligent problem solvers. The interests of Grids and Agents converge when efficient management of the services provided by Grids comes to the fore. Cooperation, coordination and negotiation are issues that Grid users and Grid resource providers need to address in order to interoperate successfully. We propose a Grid service platform that dynamically provisions resources for both interactive and batch applications to meet their QoS constraints while ensuring good resource mutualization by adopting software agents (SA). Moreover, we believe that relying on software agents for grid resource management (GRM) will allow a more flexible, robust and scalable solution.

Keywords: Grid, Software Agents, Resource Management, Distributed Systems.

1. INTRODUCTION
In spite of the success of Grid computing in providing solutions for a number of large-scale science and engineering problems, the problem of GRM remains an open challenge. GRM problems include the management of user requests, their matching with suitable resources through service discovery, and the scheduling of tasks on the matched resources. A high level of complexity arises from the various issues GRM must address: resource heterogeneity, site autonomy and security mechanisms, the need to negotiate between resource users and resource providers, the handling of the addition of new policies, scalability, fault tolerance and QoS, to cite only the most relevant. Software agents are

emerging as a solution for providing flexible, autonomous and intelligent software components. Multi-Agent Systems (MAS) are collections of software agents that interact through cooperation, coordination or negotiation in order to reach individual and common goals. There are five main dimensions that software computing is presently trying to address and that are shared by the Agent perspective:
• ubiquity
• interconnection
• intelligence (and learning)
• delegation (autonomic computing)
• human orientation.
Agents and MAS meet these requirements, and many computer scientists consider Agent-oriented programming to be the next paradigm in software computing. Grid computing pioneers, such as the developers of the “de facto” Grid middleware Globus Toolkit, agree with

agent experts on the advantages that the convergence of the two approaches may bring (see their paper: I. Foster, N. R. Jennings and C. Kesselman, (2004). Brain meets brawn: Why Grid and agents need each other. Proc. 3rd Int. Conf. on Autonomous Agents and Multi-Agent Systems, New York, USA). Software systems have been influencing and controlling all aspects of our daily life; society has developed an extraordinary dependence on computers. Software agents are critically needed to meet the dynamic changes in information technology, which contains vast amounts of discrete information requiring complex processing under uncertain conditions in order to abstract knowledge and make decisions. This article presents an overview and discussion of the most important aspects of the rapidly evolving technology of software

T.A.Rama Raju M.S.Prasada Babu

agents, in order to set up clear distinctions between the many different principles upon which we study and develop agents and multi-agent systems. A software agent is a software program that acts for a user or other program in a relationship of agency, a term which derives from the Latin agere: an agreement to act on one's behalf. Such “action on behalf of” implies the authority to decide which, if any, action is appropriate.


2. ATTRIBUTES OF SOFTWARE AGENTS
The basic attributes of software agents are that:
• Agents are not strictly invoked for a task, but activate themselves,
• Agents may reside in wait status on a host, perceiving context,
• Agents may get to run status on a host upon starting conditions,
• Agents do not require interaction of a user,
• Agents may invoke other tasks, including communication.
The term “agent” describes a software abstraction, an idea or a concept, similar to OOP terms such as methods, functions and objects. The concept of an agent provides a convenient and powerful way to describe a complex software entity that is capable of acting with a certain degree of autonomy in order to accomplish tasks on behalf of its host. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior. Agent concepts commonly include:
• Persistence (code is not executed on demand but runs continuously and decides for itself when it should perform some activity),
• Autonomy (agents have capabilities of task selection, prioritization, goal-directed behavior, and decision making without human intervention),
• Social ability (agents are able to engage other components through some sort of communication and coordination, and may collaborate on a task),

• Reactivity (agents perceive the context in which they operate and react to it appropriately),
• Learning ability (when designing an agent, the developer may furnish it with all the intelligence needed to carry out its assigned roles and achieve specific goals),
• Reasoning (a decision-making mechanism by which an agent decides to act on the basis of the information it receives, and in accordance with its own objectives).

What makes it an Agent? An Agent should at least be characterized by possessing the autonomy and pro-activity properties. Explicitly, to be an agent, it has to work independently without any external intervention, and once initiated, it must plan ahead and take the initiative instead of waiting for external instructions.

3. CLASSIFICATION OF AGENTS
Due to the multiplicity of properties that agents can possess, as well as the tasks they can perform, agents fall into different types of classes, for instance:

• Navigation Agents
• Search Agents
• Search and Retrieval Agents
• Management Agents
• Report Agents
• Testing Agents, Development Agents, and so on.

4. MULTI-AGENT SYSTEMS (MAS)
A Multi-Agent System can be viewed as a distributed system that coherently incorporates various discrete and independent problem-solving agents: a loosely coupled network of software agents that interact to solve problems

4. MULTI-AGENT SYSTEMS (MAS) A Multi-Agent System can be viewed as a distributed system that coherently incorporates various discrete and independent problem solving agents. MAS as a loosely coupled network of software agents that interact to solve problems 114


that are beyond the individual capacities of each problem solver.

In MAS, we focus on the coordinated behavior of a system of individual autonomous agents working cooperatively by interacting with each other and with their environment, which is usually heterogeneous in nature.

5. ADVANTAGES OF MAS
Tsatsoulis and Soh [19] assert that MAS usually inherit the following major advantages:
1. Reliability: MAS are more fault-tolerant and robust.
2. Modularity and scalability: instead of adding new capacities to a monolithic system, agents can be added and deleted without breaking or interrupting the system.
3. Adaptivity: agents have the ability to re-configure themselves to accommodate new changes and faults.
4. Concurrency: agents are capable of reasoning and performing tasks in parallel, which in turn provides more flexibility and speeds up computation.
5. Dynamics: agents can dynamically collaborate to share their resources and solve problems.

In addition, Stone and Veloso [11] claim that parallelism is another important advantage of MAS, overcoming the limitations imposed by time-bounded reasoning requirements. They also claim the advantage of robustness in MAS that have redundant agents; for instance, “if control and responsibilities are sufficiently shared among different agents, the system can tolerate failures by one or more of the agents.” They assume that this feature is absolutely essential for domains that must degrade gracefully; for example, “if a single entity, processor or agent, controls everything, then the entire system could crash if there is a single failure.” Furthermore, they comment that “although a multi agent system need not be implemented on multiple processors, to provide full robustness against failure, its agents should be distributed across several machines.”

6. APPLICATIONS OF MULTI-AGENT SYSTEMS
Multi-Agent Systems are appropriate in situations where we have complex designs that incorporate numerous components and remotely distributed agents with different expertise and conflicting interests. Aylett et al. [13] assume that the MAS approach is worth considering for problems that are inherently distributed, either physically or geographically, such that independent processes can be distinctly identified. Examples of such problems include distributed sensor networks, air traffic control systems, decision support systems, and other networked/distributed control systems. They also claim that distribution is not a sufficient reason to use MAS, but that there should also be essential requirements for intelligence or adaptivity in the sub-processes that involve explicit reasoning about behavior. In addition, they recognize the importance of information ownership to existing agents, especially in situations where information is distributed over different agents and no single agent can access all the information.

Moreover, it would be a trivial solution and an insignificant decision to use MAS just because we have complex requirements; not all complex designs require MAS. We usually adopt multi-agent systems when dealing with situations where different clients or organizations with potentially conflicting goals and unincorporated information need to interact with each other to effectively achieve their targets and perform their tasks. Furthermore, it should be emphasized that in some situations, where centralized control by a single agent is essential or no parallel actions are possible, we cannot employ the MAS approach. In this sense, Aylett et al. have identified a number of situations that qualify for adopting MAS, including:
1. Applications that require “interconnections and inter-operation of multiple autonomous, self-interested existing legacy systems, expert systems, and decision systems”.


2. Problems involving distributed, autonomous and selfish information sources.
3. Problems requiring solutions based upon “different distributed experts, such as health care provisioning, in which some central agent cannot possibly perform the task without help from other experts”.
4. Problems characterized by having “cross organizational boundaries for which an understanding of the interactions among societies and organizations is needed”.
5. Problems characterized by a lack of a global view for all agents, but having several agents with local views.

7. PROPOSED WORK: GRID AND AGENT TECHNOLOGIES
We are currently looking at the intermingling of Grid and Agent technologies. We propose an environment based on an agent marketplace, in which agents representing Grid users can detect and meet agents representing Grid services, negotiate with them over the conditions of the service, and execute the tasks on the Grid node corresponding to the service (see Figure 1). Agents allow flexible negotiation, exchanging messages on behalf of the entities they represent. In our system, agents can use an extra information source for negotiations, consisting of a table containing the possible grid configurations ranked by estimated utility.

For each possible configuration, this utility table stores the expected time of task completion, given values for parameters used to model the state of the resources (e.g. available resources, system load, network traffic) and for execution parameters of the task (e.g. granularity and time between each submission of a single job). The Broker Agent (BA) representing the Grid user can check this utility table to decide which of the bids proposed by Seller Agents (SAs) is the best, and also decide how to prepare the task for effective execution in order to minimize the time to completion.
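As a toy illustration of such a utility table, the Broker's choice among Seller bids reduces to a ranked lookup. Every configuration name and timing below is hypothetical, purely for illustration; the actual table described above also conditions on load and execution parameters.

```python
# Hypothetical utility table: (resource, load level) -> expected completion time in seconds.
utility = {
    ('clusterA', 'low'):  120,
    ('clusterA', 'high'): 480,
    ('clusterB', 'low'):  150,
}

def best_bid(bids):
    """Pick the Seller bid whose configuration minimizes expected completion time."""
    known = [b for b in bids if b in utility]
    return min(known, key=utility.get) if known else None

bids = [('clusterA', 'high'), ('clusterB', 'low')]
print(best_bid(bids))  # ('clusterB', 'low'): 150 s beats 480 s
```

A real Broker would interpolate over resource-state parameters rather than match exact table keys, but the ranking principle is the same.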

Figure 1: The Overall Architecture.

The negotiation between agents for the usage of resources is based on the FIPA Contract-Net protocol, a common protocol used to dynamically create relationships in response to current processing requirements embodied in a contract. The contract is an agreement between a manager (the Broker) and a contractor (the Seller Agent acting on behalf of a Grid resource), resulting from the contractor successfully bidding for the contract. The agents are implemented using the JADE framework, which supports the FIPA standard for agent architectures, and they behave in a coordinated manner by means of the sequential execution of their behaviours (see Figure 2). We use Globus Toolkit 3 as Grid middleware for the interaction between agents and the Grid services and for the effective scheduling of jobs onto the available resources.

Figure 2: The Agents operation Sequence Diagram.


8. AGENT-BASED GRID COMPUTING SYSTEM
Resource management is central to the operation of a grid. The basic function of resource management is to accept requests for resources from machines within the grid and assign specific machine resources to a request from the overall pool of grid resources for which the user has access permission. A resource management system matches requests to resources, schedules the matched resources, and executes the requests using the scheduled resources.

9. RESOURCE MANAGEMENT
The basic issues related to metacomputing resource management include the representation and management of knowledge that models data semantics, communication protocols, resource discovery and quality of service (QoS) support [17]. The main issues related to local resource management are multi-processor scheduling, resource allocation and monitoring. Next, we give a detailed description of how some of these issues are settled in AGCS. AGCS is composed of Local Grid systems. Every Local Grid system is implemented as a Multi-Agent Environment (MAE) or other FIPA-compliant multi-agent system; it mainly consists of the following components: RA (Resource Agent), AMS (Agent Management System), DF (Directory Facilitator), and LGS (Local Grid System). As summarized in [16], there are three different models for grid resource management architecture: the hierarchical model, the abstract owner model, and the computational market/economy model.
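The match-schedule-execute cycle of a resource management system, described above, can be sketched minimally. Every name and field below is illustrative only, not taken from any actual grid middleware.

```python
# Minimal match -> schedule -> execute cycle for a resource manager (illustrative only).
def match(request, pool):
    """Return the resources in the pool that satisfy the request's CPU demand."""
    return [r for r in pool if r['cpus'] >= request['cpus']]

def schedule(candidates):
    """Pick the least-loaded matched resource."""
    return min(candidates, key=lambda r: r['load'], default=None)

def execute(request, resource):
    """Dispatch the request to the scheduled resource (here: just report the assignment)."""
    return f"{request['job']} -> {resource['name']}"

pool = [{'name': 'n1', 'cpus': 4, 'load': 0.9},
        {'name': 'n2', 'cpus': 8, 'load': 0.2}]
req = {'job': 'render', 'cpus': 4}
print(execute(req, schedule(match(req, pool))))  # render -> n2
```

In the agent-based setting the matching and scheduling steps are carried out through negotiation between the Broker and Seller Agents rather than by a central function.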

Figure 3: Resource Management Architecture.

10. CONCLUSIONS
Establishing grids is an important undertaking in developing scalable infrastructures. We have examined Agent-Based Grid Computing (ABGC) from the implementation point of view. Based on the model, ABGC has been constructed using MAE, which is a multi-agent environment platform. Due to the very generic nature of grid computing, research on it can proceed at different levels, such as the operating system layer, the information layer, the knowledge layer, and the service-oriented layer. The Agent-Based Grid Computing System (ABGCS) focuses on the service-oriented layer in terms of the current running environment. ABGCS will be a useful platform for research on Agent-Based Grid Computing.

REFERENCES:
[1] K. Czajkowski, I. Foster, N. Karonis, C. Kesselman, S. Martin, W. Smith and S. Tuecke, A resource management architecture for metacomputing systems, in: Proceedings IPPS/SPDP Workshop on Job Scheduling Strategies for Parallel Processing, 1998.
[2] N.R. Jennings, M.J. Wooldridge, eds, Agent technology: foundations, applications, and markets, Springer-Verlag, 1998.
[3] R. Buyya, D. Abramson, J. Giddy, Nimrod/G: an architecture for a resource management and scheduling system in a global computational grid, in: Proceedings 4th International Conference on High Performance Computing in Asia-Pacific Region, Beijing, China, 2000.
[4] O.F. Rana and D.W. Walker, The Agent Grid: agent-based resource integration in PSEs, in: Proceedings 16th IMACS World Congress on Scientific

T.A.Rama Raju M.S.Prasada Babu

Computation, Applied Mathematics and Simulation, Lausanne, Switzerland, 2000.
[5] J. Cao, D.J. Kerbyson and G.R. Nudd, Performance evaluation of an agent-based resource management infrastructure for grid computing, in: Proceedings 1st IEEE International Symposium on Cluster Computing and the Grid, pp. 311-318, 2001.
[6] K. Krauter, R. Buyya and M. Maheswaran, A taxonomy and survey of grid resource management systems, to appear in: Software: Practice and Experience, 2001.
[7] J. Cao, D.J. Kerbyson and G.R. Nudd, Performance evaluation of an agent-based resource management infrastructure for grid computing, in: Proceedings 1st IEEE/ACM International Symposium on Cluster Computing and the Grid, Brisbane, Australia, 2001, pp. 311-318.
[8] J.D. Turner, D.P. Spooner, J. Cao, S.A. Jarvis, D.N. Dillenberger and G.R. Nudd, A Transaction Definition Language for Java Application Response Measurement, Journal of Computer Resource Management, 105, 55-65, 2002.
[9] J. Cao, S.A. Jarvis, S. Saini, D.J. Kerbyson and G.R. Nudd, ARMS: an agent-based resource management system for grid computing, to appear in: Scientific Programming, 2002.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

A Note on Planar and Median Graphs
S. Venu Madava Sarma(1)  T. V. Pradeep Kumar(2)
(1) K. L. University, Vaddeswaram, Andhra Pradesh, India
(2) A.N.U College of Engineering & Technology, Acharya Nagarjuna University, Andhra Pradesh, India

ABSTRACT
The main aim of this paper is to discuss problems on planar graphs and median graphs, and to obtain some fundamental results and relations between them.
Keywords: Graph, Hamiltonian graph, Median graph, Planar graph.

1. INTRODUCTION
Graph theory is one of the oldest subjects in mathematics; it started with the Königsberg bridge problem in 1735. This problem led to the concept of the Eulerian graph: Euler studied the Königsberg bridge problem and constructed a structure, called the Eulerian graph, to solve it. Sparse graphs are most useful in practical applications such as computer or road networks, and with respect to Hamiltonicity, sparse graphs tend to be more difficult to solve than dense graphs.

1.1 Definition: A graph, usually denoted G(V,E) or G = (V,E), consists of a set of vertices V together with a set of edges E. The number of vertices in a graph is usually denoted n, while the number of edges is usually denoted m.

1.2 Example: The graph depicted in Figure 1 has vertex set V = {a, b, c, d, e, f} and edge set E = {(a,b), (b,c), (c,d), (c,e), (d,e), (e,f)}.

[Fig. 1: the graph on the vertices a, b, c, d, e, f.]

1.3 Definition: Two vertices u and v are adjacent if there exists an edge (u,v) that connects them.

1.4 Definition: Every graph has associated with it an adjacency matrix, which is a binary n×n matrix A in which aij = 1 and aji = 1 if vertex vi is adjacent to vertex vj, and aij = 0 and aji = 0 otherwise. The natural graphical representation of an adjacency matrix is a table, such as shown below.
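As a quick check, the adjacency matrix of Definition 1.4 can be built directly from the edge set of Figure 1 (a small illustrative sketch):

```python
# Build the adjacency matrix for the graph of Figure 1 from its edge set.
V = ["a", "b", "c", "d", "e", "f"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("c", "e"), ("d", "e"), ("e", "f")]

idx = {v: i for i, v in enumerate(V)}
n = len(V)
A = [[0] * n for _ in range(n)]
for u, v in E:
    A[idx[u]][idx[v]] = 1   # a_ij = 1 ...
    A[idx[v]][idx[u]] = 1   # ... and a_ji = 1 (undirected graph)
```

The resulting matrix is symmetric and matches the table below.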

    a  b  c  d  e  f
a   0  1  0  0  0  0
b   1  0  1  0  0  0
c   0  1  0  1  1  0
d   0  0  1  0  1  0
e   0  0  1  1  0  1
f   0  0  0  0  1  0

Adjacency matrix for the graph in Figure 1.

1.5 Definition: Examining either Figure 1 or the given adjacency matrix, we can see that not every vertex is adjacent to every other. A graph in which all vertices are adjacent to all others is said to be complete.

1.6 Definition: A subgraph of a graph G is a graph whose points and lines are contained in G. A complete subgraph of G is a section of G that is complete.

2. PLANAR GRAPHS, MEDIAN GRAPHS
In this section we give some preliminaries about planar graphs and median graphs, and obtain some results relating planar and median graphs.


2.1 Definition: A planar graph is one that can be drawn on a plane in such a way that there are no "edge crossings", i.e. edges intersect only at their common vertices.

2.2 Examples: [figures of planar graphs]

2.3 Definition: A median graph is an undirected graph in which any three vertices a, b and c have a unique median: a vertex m(a,b,c) that belongs to shortest paths between any two of a, b and c.

2.4 Result: Every median graph is a planar graph.
Proof: Let G be a median graph. By definition, G is an undirected graph in which any three vertices a, b and c have a unique median: a vertex m(a,b,c) that belongs to shortest paths between any two of a, b and c. By the definition of a planar graph, we have to show that G can be drawn so that it has no edge crossings. Since in a median graph there is a shortest path between any two vertices, there is a median between any three vertices. This shows that there is a path which visits every vertex with no edge crossings, since it is a shortest path. Hence every median graph is a planar graph. Hence the result.

REFERENCES
[1] Bermond J.C., "Hamiltonian graphs", in Selected Topics in Graph Theory, ed. by L.W. Beineke and R.J. Wilson, Academic Press, 1978.
[2] Bondy J.A. and Murty U.S.R., Graph Theory with Applications, American Elsevier Publishing Company, New York, 1976.
[3] Narsingh Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice Hall of India, 1990.
[4] S.G. Srinivas, S. Vetrivel, N.M. Elango, "Applications of graph theory in computer science: an overview", International Journal of Engineering Science and Technology, Vol. 2(9), 2010, pp. 4610-4621.
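The median of Definition 2.3 can be made concrete on a small tree (every tree is a median graph). This illustrative sketch finds m(a,b,c) by intersecting the vertices lying on the three pairwise shortest paths; the tree used is our own example.

```python
from collections import deque

# A small tree (trees are median graphs):  a - b - c - d,  with c - e.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"], "d": ["c"], "e": ["c"]}

def dist_from(s):
    """BFS distances from s to every vertex."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def median(a, b, c):
    da, db, dc = dist_from(a), dist_from(b), dist_from(c)
    # v lies on a shortest u-w path iff d(u,v) + d(v,w) = d(u,w)
    return [v for v in adj
            if da[v] + db[v] == da[b]
            and da[v] + dc[v] == da[c]
            and db[v] + dc[v] == db[c]]
```

For the three leaves a, d, e the unique median is the branch vertex c.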


Probabilistic Rough Indexing in Information Systems under Fuzziness
D. Rekha(1)  K. Thangadurai(2)  D. Latha(3)
(1) Department of Computer Applications, St. Peter's University, Chennai, India
(2) Department of Computer Science, Government College, Karur, Tamilnadu, India
(3) Department of Computer Science, Adikavi Nannaya University, Rajahmundry, Andhra Pradesh, India

ABSTRACT
In 2004, G. Ganesan et al. introduced the concept of indexing any information system with fuzzy decision attributes using a threshold. These indices are based on the two-way approach of Pawlak's rough sets. However, Pawlak's approach does not quantify the importance of the basic granules involved in the approximation. Later, Y.Y. Yao discussed a new probabilistic rough set model which quantifies the contribution of the basic granules appropriately. In this paper, we extend the work of G. Ganesan et al. to the probabilistic rough set model in order to improve the efficiency of rough indices in information systems with fuzzy decision attributes.
Keywords: information system, rough set, probabilistic rough set, rough index

1. INTRODUCTION
The concept of rough sets [4,5], introduced by Z. Pawlak, finds various technical applications in areas such as knowledge discovery and acquisition. This approach approximates any input concept in two ways, in terms of unions of basic categories. However, it does not capture the degree of involvement of the basic categories. To overcome this, W. Ziarko defined Variable Precision Rough Sets in 1993, which deal with approximations based on the degrees of contribution of the basic categories in defining the approximations for a given input. Later, Bing Zhou, Y.Y. Yao, Slezak and others contributed to extending this model through probabilistic approaches. In parallel, contributions from researchers such as Dubois, Prade, Nakamura and Biswas have been noteworthy in hybridizing rough and fuzzy models for real-time applications. In 2005, G. Ganesan et al.

discussed the importance of defining thresholds in rough fuzzy computing. In 2008, G. Ganesan et al. introduced an indexing terminology for information systems with fuzzy decision attributes using these thresholds. Recently, Yiyu Yao and Bing Zhou discussed the Naïve Bayesian Rough Set Model in [8]; the initial approach in this direction was given by Slezak in [7]. In this paper, we extend the work of G. Ganesan et al. on rough indexing to information systems using the probabilistic Naïve Bayesian Rough Set Model.

2. DECISION THEORETIC AND PROBABILISTIC ROUGH SETS
Rough set theory [4,5] gives two-way approximations, namely the lower and upper approximations, for a given input. For a finite universe of discourse U and an equivalence relation E, the equivalence class of any x ∈ U is [x]_E = {y ∈ U | x E y}. The family of equivalence classes U/E = {[x]_E | x ∈ U} is a partition of the universe U. For a given concept C, Pawlak defined the lower approximation apr_E(C) = {x ∈ U | [x]_E ⊆ C} and the upper approximation apr̄_E(C) = {x ∈ U | [x]_E ∩ C ≠ ∅}. According to Pawlak, for a given concept C, three disjoint regions can be defined, namely the positive, negative and boundary regions:

Positive region: POS_E(C) = {x ∈ U | [x]_E ⊆ C}
Boundary region: BND_E(C) = {x ∈ U | [x]_E ∩ C ≠ ∅ and [x]_E ⊄ C}
Negative region: NEG_E(C) = {x ∈ U | [x]_E ∩ C = ∅}

Understanding the limitations of Pawlak's restrictive model, several researchers


focused on generalizing this approach towards the parameterized rough set model, the probabilistic rough set model and the generalized rough set model. In 1994, Pawlak and Skowron [6] defined the rough membership function by considering the degree of overlap between an equivalence class and the concept C to be approximated; it is viewed as the conditional probability that an object belongs to C given that the object is in [x]_E (for simplicity, we write [x] for [x]_E):

μ_C(x) = Pr(C | [x]) = |C ∩ [x]| / |[x]|

Using the definition quoted above, in [8] the positive, boundary and negative regions are expressed as:

POS(C) = {x ∈ U | Pr(C | [x]) = 1}
BND(C) = {x ∈ U | 0 < Pr(C | [x]) < 1}
NEG(C) = {x ∈ U | Pr(C | [x]) = 0}

In 2009, Greco et al. [3] discussed the parameterized rough set model by generalizing the above definitions. In this model, two thresholds, namely α and β, are used to define the probabilistic regions, and the positive, boundary and negative regions are modified as follows:

POS(α,β)(C) = {x ∈ U | Pr(C | [x]) ≥ α}
BND(α,β)(C) = {x ∈ U | β < Pr(C | [x]) < α}
NEG(α,β)(C) = {x ∈ U | Pr(C | [x]) ≤ β}

These probabilistic regions lead to three-way decisions, namely acceptance, deferment and rejection respectively, for any object x in U. However, in several cases it is easier to compute the probability of an equivalence class [x] for a given concept C using

Pr([x] | C) = |[x] ∩ C| / |C|

Hence, by Bayes' theorem, the positive, boundary and negative regions are given by

POS_B(α′,β′)(C) = {x ∈ U | log(Pr([x] | C) / Pr([x] | C^c)) ≥ α′}
BND_B(α′,β′)(C) = {x ∈ U | β′ < log(Pr([x] | C) / Pr([x] | C^c)) < α′}
NEG_B(α′,β′)(C) = {x ∈ U | log(Pr([x] | C) / Pr([x] | C^c)) ≤ β′}

where α′ = log(Pr(C^c) / Pr(C)) + log(α / (1 − α)) and β′ = log(Pr(C^c) / Pr(C)) + log(β / (1 − β)).
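The rough membership values and the resulting regions can be computed directly from a partition. In this illustrative sketch (our own universe, concept and thresholds), Pawlak's regions are the special case α = 1, β = 0 of the parameterized model:

```python
# Rough membership Pr(C | [x]) = |C ∩ [x]| / |[x]| and the (α, β) regions.
# Universe, concept and thresholds are illustrative.
U_E = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10}, {11, 12}]   # partition U/E
C = {1, 2, 3, 5, 9, 10}                                  # concept

def pr(C, B):
    return len(C & B) / len(B)

def regions(alpha, beta):
    POS = {x for B in U_E if pr(C, B) >= alpha for x in B}
    BND = {x for B in U_E if beta < pr(C, B) < alpha for x in B}
    NEG = {x for B in U_E if pr(C, B) <= beta for x in B}
    return POS, BND, NEG

pawlak = regions(1.0, 0.0)   # Pr = 1 / 0 < Pr < 1 / Pr = 0
param = regions(0.6, 0.2)    # three-way: accept / defer / reject
```

With these data, the parameterized model accepts two blocks that Pawlak's model would only defer.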

Now, we shall discuss the conventional approach to approximating fuzzy sets under rough computing, which was discussed in [1].

3. ANALYSIS OF A FUZZY SET USING A THRESHOLD
Consider a set D, called an R-domain [1], satisfying the following properties:
a) D ⊆ (0,1).
b) If a fuzzy concept C is under computation, eliminate the values C(x) and C^c(x), x ∈ U, from the domain D, if they exist.
c) After the computation using C, the values removed in (b) may be included in D, provided C is not involved in further computation.
Consider the universe of discourse U = {x1, x2, …, xn}. Let α, α1, α2 be thresholds assuming values from the domain D, where D is constructed using the fuzzy concepts A and B. For a given threshold α and a fuzzy set A, the strong α-cut is given by A[α] = {x ∈ U | μ_A(x) > α}. The union and intersection of fuzzy sets [10] are given by the maximum and minimum of the corresponding membership values respectively. In 1972, Zadeh [9] introduced the concept of hedges: in fuzzy logic, in order to improve the handling of fuzziness, he introduced the concepts of concentration and dilation. For example, for the linguistic variable 'low' with membership function μ, the hedges 'very' and 'very very' emphasize the variable, with corresponding membership values μ^2 and μ^4; they are called concentrations. The hedges 'slightly' and 'more slightly' dilute the linguistic variable, with corresponding membership values μ^(1/2) and μ^(1/4); they are called dilations. Using the definitions of fuzzy sets mentioned above, the following properties were derived in [1]:
a) A[α1] ∪ A[α2] = A[α], where α = min(α1, α2)
b) A[α1] ∩ A[α2] = A[α], where α = max(α1, α2)
c) (A ∪ B)[α] = A[α] ∪ B[α]
d) (A ∩ B)[α] = A[α] ∩ B[α]
e) A^c[α] = (A[1 − α])^c
f) (A ∪ B)^c[α] = A^c[α] ∩ B^c[α]
g) (A ∩ B)^c[α] = A^c[α] ∪ B^c[α]
Using the mathematical tool derived above, a rough set approach to fuzzy sets using a threshold was introduced in [1], as discussed below.

3.1 Rough Approximations on Fuzzy Sets Using α
Let π be any partition of U, say {B1, B2, …, Bt}. For a given fuzzy concept C, the lower and upper approximations with respect to π can be defined as lower(C) = ∪{B ∈ π | B ⊆ C[α]} and upper(C) = ∪{B ∈ π | B ∩ C[α] ≠ ∅} respectively.
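The strong α-cut and the two hedge operators can be sketched directly (memberships are our own illustrative values):

```python
# Strong α-cut A[α] = {x : μ_A(x) > α}, with concentration ('very': μ²)
# and dilation ('slightly': μ^(1/2)). Membership values are illustrative.
A = {"x1": 0.9, "x2": 0.5, "x3": 0.2}

def cut(A, alpha):
    return {x for x, mu in A.items() if mu > alpha}

def very(A):        # concentration: μ²
    return {x: mu ** 2 for x, mu in A.items()}

def slightly(A):    # dilation: μ^(1/2)
    return {x: mu ** 0.5 for x, mu in A.items()}
```

Concentration shrinks the cut while dilation enlarges it, which is exactly how the indexing algorithms of Section 5 move the threshold.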



3.1.1 Propositions
Here, by using the properties of rough sets, the following propositions can be obtained [1]:
a) lower(A ∩ B) = lower(A) ∩ lower(B)
b) upper(A ∪ B) = upper(A) ∪ upper(B)
c) lower(A ∪ B) ⊇ lower(A) ∪ lower(B)
d) upper(A ∩ B) ⊆ upper(A) ∩ upper(B)
e) lower(A^c) = (upper(1 − A))^c
f) upper(A^c) = (lower(1 − A))^c

Now, we hybridize the concepts dealt with in the above two sections, which gives an approach to handling a fuzzy concept under Naïve Bayesian probabilistic rough sets.

4. NAÏVE BAYESIAN PROBABILISTIC ROUGH SET MODEL FOR A FUZZY CONCEPT
Since the same threshold α has been used in both of the above sections for different purposes, for homogeneity we denote the threshold used to obtain a strong cut on fuzzy sets by λ. Hence, for a given fuzzy concept F with threshold λ, the probabilistic positive, boundary and negative regions on the approximation space U/E are respectively defined as

POS_λ(F) = {x ∈ U | Pr(F[λ] | [x]) = 1}
BND_λ(F) = {x ∈ U | 0 < Pr(F[λ] | [x]) < 1}
NEG_λ(F) = {x ∈ U | Pr(F[λ] | [x]) = 0}

For given parameters α and β, the regions of the parameterized rough set model are given by

POS(α,β,λ)(F) = {x ∈ U | Pr(F[λ] | [x]) ≥ α}
BND(α,β,λ)(F) = {x ∈ U | β < Pr(F[λ] | [x]) < α}
NEG(α,β,λ)(F) = {x ∈ U | Pr(F[λ] | [x]) ≤ β}

and the regions of the Naïve Bayesian rough set model are given by

POS_B(α′,β′,λ)(F) = {x ∈ U | log(Pr([x] | F[λ]) / Pr([x] | (F[λ])^c)) ≥ α′}
BND_B(α′,β′,λ)(F) = {x ∈ U | β′ < log(Pr([x] | F[λ]) / Pr([x] | (F[λ])^c)) < α′}
NEG_B(α′,β′,λ)(F) = {x ∈ U | log(Pr([x] | F[λ]) / Pr([x] | (F[λ])^c)) ≤ β′}

where α′ = log(Pr(C^c) / Pr(C)) + log(α / (1 − α)) and β′ = log(Pr(C^c) / Pr(C)) + log(β / (1 − β)), with C = F[λ].

5. ROUGH INDICES
Let U be the universe of discourse and λ be any value in (0,1). Let X = {W1, W2, …, Wn} be any partition defined on U. For any fuzzy set A, define A[λ] = {x ∈ U | μ_A(x) > λ}, where λ is chosen from an R-domain satisfying the property that dil^n(λ) and con^n(λ) are members of the R-domain for any positive integer n (dil represents dilation and con represents concentration). The lower and upper approximations are given by lower(A) = ∪{W ∈ X | W ⊆ A[λ]} and upper(A) = ∪{W ∈ X | W ∩ A[λ] ≠ ∅} respectively [1]. The following algorithm, as in [2], illustrates the method of indexing the elements of U using the lower and upper approximations of the given fuzzy set A. Let M denote the largest number under consideration, such that n + M is always positive and n − M is always negative for any integer n under consideration.

5.1 ALGORITHMS
Algorithm rough_index(x, A, λ)
// Returns the rough index of x, an element of the universe of discourse
1. Let x_index be an integer initialized to 0.
2. Pick the equivalence class K containing x.
   If A(y) = 0 for all y ∈ K then x_index = −M; go to step 6.
3. If A(x) = 1 then
      if A(y) = 1 for all y ∈ K then x_index = M; go to step 6.
4. Compute lower(A) and upper(A).
5. If x ∈ lower(A) then
      x_index = M
      while x ∈ lower(A) do
         λ = dil(λ)                  // dilation of λ
         x_index = x_index + 1
         recompute lower(A)
   else if x ∉ upper(A) then
      x_index = −M
      while x ∉ upper(A) do
         λ = con(λ)                  // concentration of λ
         x_index = x_index − 1
         recompute upper(A)
   else
      let μ = λ
      while x ∉ lower(A) (with threshold μ) and x ∈ upper(A) (with threshold λ) do
         μ = con(μ)                  // concentration of μ
         λ = dil(λ)                  // dilation of λ
         recompute lower(A) and upper(A)
         x_index = x_index + 1
      if x ∈ lower(A) then x_index = −x_index
6. Return x_index.

Consider the universe of discourse U = {a, b, c, d, e, f, g, h} with the partition X = {{a,e,f}, {b,g}, {c,h}, {d}}. Let λ = 0.5. Consider the fuzzy set {(a,0.6), (b,0.4), (c,0.8), (d,0.24), (e,0.44), (f,0.56), (g,0.98), (h,0.77)}. By the above algorithm, b is indexed by −1 and d is indexed by −2−M. Similarly, the other elements of U can be indexed. These indices are called rough indices.
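The algorithm above can be run on this example. In the sketch below (our own function names), dil and con act on the threshold, mirroring the hedges of Section 3, and M is the large constant of Section 5:

```python
# Rough-index algorithm of Section 5.1; dil(t) = t**0.5, con(t) = t**2.

def cut(A, t):
    # strong t-cut: elements with membership strictly above t
    return {x for x, mu in A.items() if mu > t}

def lower(part, A, t):
    c = cut(A, t)
    return {x for B in part if B <= c for x in B}

def upper(part, A, t):
    c = cut(A, t)
    return {x for B in part if B & c for x in B}

def rough_index(x, A, part, lam, M=1000):
    dil = lambda t: t ** 0.5
    con = lambda t: t ** 2
    K = next(B for B in part if x in B)
    if all(A[y] == 0 for y in K):
        return -M
    if A[x] == 1 and all(A[y] == 1 for y in K):
        return M
    if x in lower(part, A, lam):
        idx = M
        while x in lower(part, A, lam):
            lam = dil(lam)
            idx += 1
        return idx
    if x not in upper(part, A, lam):
        idx = -M
        while x not in upper(part, A, lam):
            lam = con(lam)
            idx -= 1
        return idx
    mu, idx = lam, 0
    while x not in lower(part, A, mu) and x in upper(part, A, lam):
        mu, lam = con(mu), dil(lam)
        idx += 1
    return -idx if x in lower(part, A, mu) else idx

part = [{"a", "e", "f"}, {"b", "g"}, {"c", "h"}, {"d"}]
A = {"a": 0.6, "b": 0.4, "c": 0.8, "d": 0.24,
     "e": 0.44, "f": 0.56, "g": 0.98, "h": 0.77}
```

With these data, b receives index −1 and d receives −M−2, i.e. −2−M, matching the worked example.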


Here, it is to be noted that elements of the same equivalence class have the same rough index. Sometimes, elements of different equivalence classes may have the same rough index; clearly, this depends on the choice of λ and on the fuzzy set under consideration. Since the elements of the same equivalence class always share a rough index, instead of indexing the elements of U one may apply the above algorithm to the equivalence classes themselves. Now, we modify this algorithm for the three-way approach on rough sets as follows:

Algorithm three_way_rough_index(x, A, λ)
// Returns the three-way rough index of x
1. Let x_index be an integer initialized to 0.
2. Pick the equivalence class K containing x.
   If A(y) = 0 for all y ∈ K then x_index = −M; go to step 6.
3. If A(x) = 1 then
      if A(y) = 1 for all y ∈ K then x_index = M; go to step 6.
4. Compute POS_λ(A), BND_λ(A) and NEG_λ(A).
5. If x ∈ POS_λ(A) then
      x_index = M
      while x ∈ POS_λ(A) do
         λ = dil(λ)                  // dilation of λ
         x_index = x_index + 1
         recompute POS_λ(A)
   else if x ∈ NEG_λ(A) then
      x_index = −M
      while x ∈ NEG_λ(A) do
         λ = con(λ)                  // concentration of λ
         x_index = x_index − 1
         recompute NEG_λ(A)
   else
      let μ = λ
      while x ∉ (POS_μ(A) ∪ NEG_λ(A)) do
         μ = con(μ)                  // concentration of μ
         λ = dil(λ)                  // dilation of λ
         recompute POS_μ(A) and NEG_λ(A)
         x_index = x_index + 1
      if x ∈ POS_μ(A) then x_index = −x_index
6. Return x_index.

This algorithm can be illustrated in the same manner as the previous example. Now, we parameterize the algorithm using the parameters α and β:

Algorithm naive_bayesian_rough_index(x, A, λ, α, β)
// Returns the Naïve Bayesian rough index of x
1. Let x_index be an integer initialized to 0.
2. Pick the equivalence class K containing x.
   If A(y) = 0 for all y ∈ K then x_index = −M; go to step 6.
3. If A(x) = 1 then
      if A(y) = 1 for all y ∈ K then x_index = M; go to step 6.
4. Compute POS_B(α′,β′,λ)(A), BND_B(α′,β′,λ)(A) and NEG_B(α′,β′,λ)(A).
5. If x ∈ POS_B(α′,β′,λ)(A) then
      x_index = M
      while x ∈ POS_B(α′,β′,λ)(A) do
         λ = dil(λ)                  // dilation of λ
         x_index = x_index + 1
         recompute POS_B(α′,β′,λ)(A)
   else if x ∈ NEG_B(α′,β′,λ)(A) then
      x_index = −M
      while x ∈ NEG_B(α′,β′,λ)(A) do
         λ = con(λ)                  // concentration of λ
         x_index = x_index − 1
         recompute NEG_B(α′,β′,λ)(A)
   else
      let μ = λ
      while x ∉ (POS_B(α′,β′,μ)(A) ∪ NEG_B(α′,β′,λ)(A)) do
         μ = con(μ)                  // concentration of μ
         λ = dil(λ)                  // dilation of λ
         recompute POS_B(α′,β′,μ)(A) and NEG_B(α′,β′,λ)(A)
         x_index = x_index + 1
      if x ∈ POS_B(α′,β′,μ)(A) then x_index = −x_index
6. Return x_index.

6. NAÏVE BAYESIAN INDEXING IN AN INFORMATION SYSTEM WITH A FUZZY DECISION ATTRIBUTE
According to the perspective of Z. Pawlak, any information system is given by T = (U, A, C, D), where U is the universe of discourse, A is a set of primitive attributes, and C and D are subsets of A called the condition and decision features respectively (C and D may not exist in some information systems).

Consider an information system with condition attributes C = {a1, a2, …, an} and decision attributes D = {d1, d2, …, ds}, with records U = {x1, x2, …, xm}. For any index key ak in C, the indiscernibility relation is given by xi ~ak xj (read as: xi is related to xj with respect to ak) if and only if ak(xi) = ak(xj). Clearly, this indiscernibility relation partitions the universe of discourse U. However, the procedure for selecting appropriate minimal attribute sets (reducts) for effectiveness is not discussed in this paper.

For example, consider the decision table with C={a,b,c,d} and D={E}.

      a   b   c   d   E
x1    1   0   2   1   1
x2    1   0   2   0   1
x3    1   2   0   0   2
x4    1   2   2   1   0
x5    2   1   0   0   2
x6    2   1   1   0   2
x7    2   1   2   1   1

Let us consider the index key 'c'. The records x1, x2, x4, x7 have the value 2; x3, x5 have the value 0; and x6 has the value 1. Hence, the partition of U with respect to c is {{x1,x2,x4,x7}, {x3,x5}, {x6}}. However, in real-time systems we find several information systems with fuzzy decision attributes, and the algorithms discussed above are applicable to such information systems. Here, the Naïve Bayesian rough indexing of the data can be derived from the fuzzy decision attribute as discussed in the previous section. For example, consider the knowledge representation of an information system with C = {a,b,c,d} and D = {E}, where E is fuzzy-valued.


      a   b   c   d   E(xi)
x1    1   0   2   1   0.45
x2    1   0   2   0   0.7
x3    1   2   0   0   0.65
x4    1   2   2   1   0.1
x5    2   1   0   0   0.91
x6    2   1   1   0   0.6
x7    2   1   2   1   0.35
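The two preprocessing steps used on this table, partitioning U by the index key 'c' and taking the strong λ-cut of the fuzzy decision attribute E, can be sketched as:

```python
# Partition by index key 'c' and strong λ-cut of E (values from the table).
from collections import defaultdict

c_vals = {"x1": 2, "x2": 2, "x3": 0, "x4": 2, "x5": 0, "x6": 1, "x7": 2}
E_vals = {"x1": 0.45, "x2": 0.7, "x3": 0.65, "x4": 0.1,
          "x5": 0.91, "x6": 0.6, "x7": 0.35}

groups = defaultdict(set)
for record, value in c_vals.items():
    groups[value].add(record)     # indiscernibility classes w.r.t. 'c'

lam = 0.5
E_cut = {x for x, mu in E_vals.items() if mu > lam}   # E[λ]
```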

On considering 'c' as the index key, the partition obtained is {{x1,x2,x4,x7}, {x3,x5}, {x6}}. Let λ = 0.5. Here, E[λ] = {x2, x3, x5, x6}. For given α and β, the Naïve Bayesian indexing algorithm can then be applied.

7. CONCLUSION
In this paper, an approach to indexing the records of an information system using the concept of Naïve Bayesian rough sets has been presented. These rough indices are useful for analyzing and indexing a database when fuzzy information about the key values is available.

REFERENCES
[1]. G. Ganesan, C. Raghavendra Rao, Rough Set: Analysis of Fuzzy Sets using thresholds, UGC Sponsored National Conference on Recent Trends in Computational Mathematics, March 2004, Gandhigram Rural Institute, Tamilnadu; Narosa Publishers, pp. 81-87, 2005.
[2]. G. Ganesan, D. Latha, C. Raghavendra Rao, Rough Index in Information System with Fuzziness in Decision Attributes, International Journal of Fuzzy Mathematics, Vol. 16, No. 4, 2008.
[3]. Greco, S., Matarazzo, B. and Slowinski, R., Parameterized rough set model using rough membership and Bayesian confirmation measures, International Journal of Approximate Reasoning, 49, 285-300, 2009.
[4]. Pawlak, Z., Rough sets, International Journal of Computer and Information Sciences, 11, 341-356, 1982.
[5]. Pawlak, Z., Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, Dordrecht, 1991.
[6]. Pawlak, Z. and Skowron, A., Rough membership functions, in: Yager, R.R., Fedrizzi, M. and Kacprzyk, J., Eds., Advances in the Dempster-Shafer Theory of Evidence, John Wiley and Sons, New York, 251-271, 1994.
[7]. Slezak, D. and Ziarko, W., The investigation of the Bayesian rough set model, International Journal of Approximate Reasoning, 40, 81-91, 2005.
[8]. Yiyu Yao and Bing Zhou, Naive Bayesian Rough Sets, Proceedings of RSKT 2010, LNAI 6401, pp. 719-726, 2010.
[9]. Kohavi, R., Useful feature subsets and Rough set reducts, Proceedings, Third International Workshop on Rough Sets and Soft Computing, pp. 310-317, 1994.
[10]. Zadeh, L.A., Fuzzy Sets, Journal of Information and Control, 8, pp. 338-353, 1965.


Machine Learning Algorithm in Computational Intelligence
Premalatha M.(1)  Vijayalakshmi C.(2)
(1) Research Scholar, Department of Mathematics, Sathyabama University, Chennai
(2) Associate Professor, Department of Mathematics, Sathyabama University, Chennai

ABSTRACT
Intelligent systems based on various combinations of neural nets and fuzzy logic are called Neural Fuzzy Systems (NFS). This paper presents elegant algorithms for combining neural nets with fuzzy logic, resulting in both feed-forward and recurrent neural fuzzy systems with a medical application. The goal is the simplest form of neural network used for the classification of linearly separable patterns. NFS provide several key advantages, which are highlighted and discussed. These problems, coupled with others, have inspired a growing interest in intelligent techniques including fuzzy logic, neural networks, genetic algorithms, expert systems and probabilistic reasoning.
Keywords: Neural Network and Fuzzy Logic, Back Propagation Algorithm, Neural Fuzzy System, Medical Diagnosis, Advantages and Disadvantages of NN and FL

1. INTRODUCTION
Artificial Neural Networks (ANNs) mimic biological information processing mechanisms. ANNs are developed to try to achieve biological-system-type performance using a dense interconnection of simple processing elements analogous to biological neurons. Neural networks and fuzzy logic are two key technologies that have recently received growing attention for solving real-world, nonlinear, time-variant problems. Intelligent combinations of these two technologies can exploit their advantages while eliminating their limitations; such combinations of neural networks and fuzzy logic are called Neural Fuzzy Systems (NFS). Intelligent systems based on neural fuzzy techniques have shown good potential for solving many complex real-world problems. Biological systems learn, and then interpolate and extrapolate, using slowly propagated (about 100 m/s) information, compared to the propagation speed (3 × 10^8 m/s) of a signal in an electronic system. In recent years, intelligent techniques such as fuzzy logic, fuzzy reasoning and fuzzy modelling have been used in many fields, including engineering, medicine and the social sciences.

2. MACHINE LEARNING
Machine learning aims to generate classifying expressions simple enough to be understood easily by a human.

2.1 Neural Network and Fuzzy Logic
Neural network approaches combine the complexity of some of the statistical techniques with the machine learning objective of imitating human intelligence; however, this is done at a more "unconscious" level, and hence there is no accompanying ability to make learned concepts transparent to the user [1]. Fuzzy logic uses graded statements rather than ones that are strictly true or false. Fuzzy logic techniques have been successfully applied in a number of applications: computer vision, decision making, and system design including ANN training. The most extensive use of fuzzy logic is in the area of control, where examples include controllers for cement kilns, braking systems, elevators, washing machines, hot water heaters, air-conditioners and video cameras [4].

2.2 Artificial Intelligence vs. Computational Intelligence

[Fig. 1 Machine Intelligence: a taxonomy relating machine intelligence to artificial intelligence (hard computing) and computational intelligence (soft computing: fuzzy logic, genetic algorithms, artificial neural networks).]


2.3 Neural Network Selection

Table 2: Neural Network Type
- Prediction: use input values to predict some output (e.g. pick the best stocks in the market, predict weather, identify people with cancer risks). Examples: Back-propagation, Delta Bar Delta, Extended Delta Bar Delta, Directed Random Search, Higher Order Neural Networks, Self-Organizing Map into Back-propagation.
- Classification: use input values to determine the classification. Examples: Learning Vector Quantization, Counter-propagation, Probabilistic Neural Networks.
- Data Association: like classification, but the network also recognizes data that contains errors. Examples: Hopfield, Boltzmann Machine, Hamming Network, Bidirectional Associative Memory, Spatio-temporal Pattern Recognition.
- Data Conceptualization: analyze the inputs so that grouping relationships can be inferred. Examples: Adaptive Resonance Network, Self-Organizing Map.
- Data Filtering: smooth an input signal. Example: Recirculation.

[Fig. 2 Perceptron Structure: inputs X1(k) and X2(k) with weights W1(k) and W2(k), bias weight W0 (threshold θ), and output Yk.]

2.4 Linear Classification
Let us consider the pattern classes C1 and C2, where C1 = {(0,5), (0,1), (1,1)} and C2 = {(1,0)}. The objective is to obtain a decision surface based on perceptron learning. For simplicity, let us assume η = 0.5 and the initial weight vector W(1) = (0, 0)^T, with the training sequence x(1) = (0,5), x(2) = (0,1), x(3) = (1,1) from C1 and x(4) = (1,0) from C2.

Iteration 1: weight update W(2) = W(1) + η x(1) = (0, 0)^T + 0.5 (0, 5)^T = (0, 2.5)^T
Iteration 2: weight update W(3) = W(2) + η x(2) = (0, 2.5)^T + 0.5 (0, 1)^T = (0, 3)^T
Iteration 3: weight update W(4) = W(3) − η x(3) = (0, 3)^T − 0.5 (1, 1)^T = (−0.5, 2.5)^T
Iteration 4: weight update W(5) = W(4) − η x(4) = (−0.5, 2.5)^T − 0.5 (1, 0)^T = (−1, 2.5)^T
Iterations 5 through 8 (I5, I6, I7, I8) produce no further corrections: with W = (−1, 2.5)^T, every training pattern is classified correctly.

Therefore, the algorithm converges, and the decision surface for the perceptron is d(x) = −x1 + 2.5 x2 = 0.

Now consider the input (1, 2), which is not in the training set. If we calculate the output, y = W^T X = (−1, 2.5)(1, 2)^T = 4 > 0, so the output belongs to
class C1, as expected.
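The same task can be run with the standard error-driven perceptron rule (update only on misclassification). This sketch uses that standard rule, so the intermediate weight vectors differ from the worked run above, but the learned surface separates the classes identically and classifies the test input (1, 2) into C1:

```python
# Standard perceptron on the patterns of Section 2.4 (C1 -> +1, C2 -> -1).
data = [((0, 5), +1), ((0, 1), +1), ((1, 1), +1), ((1, 0), -1)]
eta = 0.5
w = [0.0, 0.0]

converged = False
for _ in range(20):                  # passes over the training set
    errors = 0
    for (x1, x2), t in data:
        s = w[0] * x1 + w[1] * x2
        if t * s <= 0:               # misclassified (or on the boundary)
            w[0] += eta * t * x1
            w[1] += eta * t * x2
            errors += 1
    if errors == 0:
        converged = True
        break

y = w[0] * 1 + w[1] * 2              # test input (1, 2)
```

The run converges after two passes to w = (−0.5, 2.5), and y > 0 places (1, 2) in class C1.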


sigma units. A different propagation rule is known as the propagation rule for the sigma-pi unit.

X2

S k t    w jk t  y jm t    k t 

-1x1+2.5x2 = 0

j

Often, the yjm are weighted before multiplication. The activation function is a non-decreasing function of the total input of the unit:   (3.3) y t  1  A S t   A  w t  y t    t 

X1

k

k

k

k

  j

Fig. 3 Decision surface on 2D plane

The derived function expression is then a classifier. Constructing ML-type expressions from sample data is known as ―concept learning‖. Testing Data → Classification Rules → Learning Algorithm → Training Data FOR

DISTRIBUTED

1. A set of processing units (neurons, cells) 2. A state of activation yk for every unit, which equivalent to the output of the unit, Connections between the units, generally each connection is denoted by a weight wjk which determines the effect which the signal of unit j has on unit k 3. A propagation rule, which determines the effective input sk of a unit from its external inputs 4. An activation function Ak which determines the new level of activation based on the effective input sk (t) and the current activation yk (t) i.e. The update 5. An external input θk for each unit 6. A method for information gathering (the learning rule) 7. An environment within which the system must operate, providing input signals and if necessary error signals [5].

jk

j

k

 

k θk

W

2.5 Classification process from Training to Testing

3. A FRAME WORK REPESENTATION

----(3.2)

m

W jk

s k   w jk y j   k W

j

F k

j

w

Yj

θk Fig.4. The propagation rule used here is the ‗standard‘ weighted summation.

For this smoothly function often a sigmoid (S shaped) function like

y k t   As k  

1 1  e  sk

----(3.4)

The activation is not deterministically determined by the neuron input, but the neuron input determines the probability p that a neuron gets a high activation value:

p y k  1 

1 1  e  sk / T

----(3.5)

3.2 Neural Network represented by Directed Graph An Artificial Neural Network is a data processing system, consisting large number of simple highly interconnected processing elements as artificial neuron in a network structure that can be represented using a directed graph G, an ordered 2-tuple (V,E), consisting a e1

3.1 Connections between units


The total input to unit k is simply the weighted sum of the separate outputs from each of the connected units plus a bias or offset term θ_k:

S k t  

 w t y t    jk

j

k

---- (3.1)

j

The contribution for positive w_jk is considered as an excitation and for negative w_jk as an inhibition. In some cases more complex rules for combining inputs are used, in which a distinction is made between excitatory and inhibitory inputs. Units with the propagation rule (3.1) are called sigma units.

Fig. 5 Directed Graph

The graph G consists of a set V of vertices and a set E of edges: V = {v1, v2, v3, v4, v5}; E = {e1, e2, e3, e4, e5}.
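The 2-tuple G = (V, E) maps directly onto a dictionary-based structure; a sketch follows (the particular endpoints of each edge are an assumption, since Fig. 5 itself did not survive extraction):

```python
# Directed graph G = (V, E) for the network of Fig. 5. The vertex and edge
# names come from the text; which vertices each edge joins is an assumption,
# because the figure itself is not preserved.
V = {"v1", "v2", "v3", "v4", "v5"}
E = {"e1": ("v1", "v3"), "e2": ("v3", "v5"), "e3": ("v2", "v4"),
     "e4": ("v2", "v3"), "e5": ("v4", "v5")}

adj = {v: [] for v in V}          # successor (adjacency) lists derived from E
for name, (src, dst) in E.items():
    adj[src].append(dst)

print(adj["v3"])                  # units fed by v3
```

In a weighted network each edge would additionally carry a weight w_jk, turning `adj` into a weighted adjacency structure.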


Machine Learning Algorithm in Computational Intelligence

4. BACK PROPAGATION ALGORITHM

The back propagation algorithm is one of the most popular algorithms for training a network, owing to both its simplicity and its applicability. The algorithm consists of two phases: the training phase and the recall phase. In the training phase, the weights of the network are first randomly initialized. Then the output of the network is calculated and compared to the desired value. Subsequently, the error of the network is calculated and used to adjust the weights of the output layer. In a similar fashion, the network error is also propagated backward and used to update the weights of the previous layers. Fig. 6 shows how the error values are generated and propagated for the weight adjustments of the network. In the recall phase, only the feed forward computations, using the weights assigned in the training phase and the input patterns, take place. Fig. 7 shows both the feed forward and back propagation paths.

The feed forward process is used in both the recall and training phases. In the training phase, the weight matrix is first randomly initialized. Subsequently, the output of each layer is calculated, starting from the input layer and moving forward toward the output layer. Thereafter, the error at the output layer is calculated by comparing the actual output with the desired value, and is used to update the weights of the output and hidden layers. There are two different methods of updating the weights. In the first method, the weights are updated for each of the input patterns using an iteration method. In the second method, an overall error for all the input-output patterns of the training set is calculated. The training phase is terminated when the error value is less than the minimum value set by the designer. One disadvantage of the back propagation algorithm is that the training phase is very time consuming. During the recall phase, the network with the final weights resulting from the training process is employed. Therefore, for every input pattern in this phase, the output is calculated using both the linear calculations and the nonlinear activation functions.
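The two-phase training procedure just described can be condensed into a short pure-Python sketch for a two layer (one hidden layer) network trained on the XOR problem. This is a generic illustration of the algorithm under arbitrary settings (3 hidden units, learning rate 0.5), not code from the paper:

```python
import math, random

random.seed(0)

def sig(s):
    return 1.0 / (1.0 + math.exp(-s))

# XOR training pairs: (input vector, desired output)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H, mu = 3, 0.5                            # hidden units, learning rate
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # +bias
w_ho = [random.uniform(-1, 1) for _ in range(H + 1)]                  # +bias

def forward(x):
    # feed forward pass, used in both training and recall phases
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_ih]
    o = sig(sum(w_ho[j] * h[j] for j in range(H)) + w_ho[H])
    return h, o

for _ in range(20000):                    # training phase
    for x, t in data:
        h, o = forward(x)
        d_out = (t - o) * o * (1 - o)     # output-layer equivalent error
        for j in range(H):
            d_hid = d_out * w_ho[j] * h[j] * (1 - h[j])   # hidden-layer error
            w_ih[j][0] += mu * d_hid * x[0]
            w_ih[j][1] += mu * d_hid * x[1]
            w_ih[j][2] += mu * d_hid
            w_ho[j] += mu * d_out * h[j]
        w_ho[H] += mu * d_out

print([round(forward(x)[1], 2) for x, _ in data])   # recall phase
```

Per-pattern updating is used here (the first of the two updating methods mentioned above); batching the error over all patterns before updating gives the second method.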

The process provides a very fast performance of the network in the recall phase, which is one of its important advantages. The training phase of the back propagation algorithm proceeds as follows:

1. Initialize the weights of the network.
2. Scale the input/output data.
3. Select the structure of the network.
4. Choose activation functions for the neurons. These activation functions can be uniform, or they can differ between layers.
5. Select a training pair from the training set and apply the input vector to the network input.
6. Calculate the output of the network based on the initial weights and the input set.
7. Calculate the error between the network output and the desired output (the target vector from the training pair).
8. Propagate the error backward and adjust the weights in such a way as to minimize the error, starting from the output layer and moving backward to the input layer.
9. Repeat steps 5-8 for each vector in the training set until the error for the set is lower than the required minimum error.

After enough repetitions of these steps, the error between the actual outputs and the target outputs should be reduced to an acceptable value, and the network is said to be trained. At this point, the network can be used in the recall or generalization phase, where the weights are not changed.

Fig. 6 Back Propagation of the Error in a Two Layer Network
Fig. 6(a) Forward Propagation: Training and Recall Phase
Fig. 6(b) Backward Propagation: Training Phase

5. NEURAL FUZZY SYSTEM

By properly combining neural nets with fuzzy logic, NeuFuz (Fig.3) attempts to solve the problems associated with both fuzzy logic and neural nets. NeuFuz learns system behaviour by using system input-output data and does not use any mathematical modelling [8].


Premalatha .M Vijayalakshmi .C

Fig. 7 Learning mechanism of NeuFuz

5.1 Neural Net Learning

With an input level of x, the output of the layer 1 neuron (Figure 3) is g1 · x, where g1 is the gain of the neuron in layer 1. The input of the layer 2 neuron is g1 · x · W1. Continuing this way, we have the input of the layer 4 neuron, z, as

z = (g1 · x · W1 · W2 · g2 + b) · W3 = (a · x + b) · c        (5.1)

where a = g1 · g2 · W1 · W2, c = W3, and g2 is the gain of the layer 2 neuron. Gains g1 and g2 can be kept constant, and we can adjust only the weights W1, W2 and W3 during learning. The equivalent error d_k^out at the output layer is

d_k^out = (t_k - o_k) f'(net_k)        (5.2)

where o_k is the output of the output neuron k, t_k is the desired output of the output neuron k, and f'(net_k) is the derivative, which is unity for layer 2 and 3 neurons as mentioned above. The weight modification equation for the weights between the hidden layer and the output layer, w_jk, is

w_jk^new = w_jk^old + μ · d_k^out · o_j        (5.3)

where μ is the learning rate, w_jk is the weight between hidden layer neuron j and output layer neuron k, and o_j is the output from hidden layer neuron j. The general equation for the equivalent error in the hidden layer is

d_j^hidden = f'(net_j) Σ_k d_k^out w_jk        (5.4)

For the middle layer the equivalent error is different; the net input net_j is

net_j^hidden = Σ_i W_ij · o_i        (5.5)

Thus, for the input layer (Figure 3), the equivalent error expression becomes

d_i^input = f'(net_i) Σ_j d_j^hidden · W_ij · Π_k W_kj · o_k        (5.6)

where both i and k are indices in the input layer and j is the index in the hidden layer; the summation is over j and the product is over k [10].
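Because the layer 2 and layer 3 activations are linear (f' = 1), the first layers collapse to a single affine map, which is what equation (5.1) states; a quick numerical check (all constant values here are arbitrary, chosen only for illustration):

```python
# Equation (5.1): the linear layers collapse to an affine map,
# z = (g1*x*W1*W2*g2 + b)*W3 = (a*x + b)*c with a = g1*g2*W1*W2, c = W3.
# All numeric values are arbitrary.
g1, g2 = 1.0, 2.0
W1, W2, W3 = 0.5, -0.4, 3.0
b, x = 0.25, 1.5

z_layered = (g1 * x * W1 * W2 * g2 + b) * W3
a, c = g1 * g2 * W1 * W2, W3
z_affine = (a * x + b) * c

print(z_layered, z_affine)   # the two forms agree
```

This equivalence is what lets NeuFuz keep the gains fixed and learn only W1, W2 and W3.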







6. ADVANTAGES AND DISADVANTAGES OF NN AND FL

6.1 Advantages of FL and NN

Fuzzy Logic:
- Fuzzy logic converts complex problems into simpler problems using approximate reasoning.
- Fuzzy rules and membership functions are stated in human-type language using linguistic variables.
- It applies where it is extremely difficult, if not impossible, to develop a mathematical model of a complex system.
- It can be implemented in software on existing microprocessors or in dedicated hardware.
- Fuzzy logic based solutions are cost effective for a wide range of applications.

Neural Network:
- Neural nets learn system behaviour by using system input-output data.
- Neural nets try to mimic the human brain's learning mechanism.
- Neural net based solutions do not use mathematical modelling of the system.
- Neural nets can solve many problems that are either unsolved or inefficiently solved by existing techniques, including fuzzy logic.
- Neural nets can develop solutions to meet a pre-specified accuracy.

6.2 Disadvantages of FL and NN

Fuzzy Logic:
- As system complexity increases, it becomes more challenging to determine the correct set of rules and membership functions to describe system behaviour.
- The generalization capability of fuzzy logic is poor compared to neural nets.

Neural Network:
- Neural net implementation is typically more costly than other technologies, in particular fuzzy logic (embedded control is a good example).
- It is difficult, if not impossible, to determine the proper size and structure of a neural net to solve a given problem.

7. ANN PERFORMANCE IN MEDICAL SYSTEM

Suppose 100 patients were referred to the mammography department for diagnosis of breast cancer. Let us further assume that, of the 100 individuals, 38 actually had a cancerous tumour, and the remaining 62 either did not have any tumour or

did not have one that was malignant (cancerous). Let us further assume that a diagnostic procedure (conducted manually by physicians, by an automated system, or by both) correctly diagnosed 32 of the 38 cancer sufferers as having breast cancer but misdiagnosed the remaining six as being cancer free. Let us also assume that the procedure correctly classified 58 of the 62 cancer-free patients as such and misclassified the remaining 4 as having breast cancer. Finally, let us assume that, on average, 35% of those who are referred to the mammography procedure actually have breast cancer. For this data, TP = 32, FN = 6, TN = 58, FP = 4, and P(D) = 0.35. Hence:

Sensitivity = TPF = 32 / (32 + 6) × 100 = 84.21%
FNF = 1 - 84.21% = 15.79%
Specificity = TNF = 58 / (58 + 4) × 100 = 93.55%
FPF = 1 - 93.55% = 6.45%
PA = 84.21 × 0.35 + 93.55 × (1 - 0.35) = 90.28%
PV = (84.21 × 0.35) / (84.21 × 0.35 + (100 - 93.55) × (1 - 0.35)) = 87.55%

The overall system accuracy is 90.28%, while the predictive value of a positive test is 87.55%.

Graph 1: NN Overall Accuracy Performance (bar chart of Sensitivity, Specificity, PA and PV on a 78.00%-94.00% scale)

8. CONCLUSION

The Neural Fuzzy System's learning and generalization capabilities allow the generated rules and membership functions to provide more reliable and accurate solutions than alternative methods. With the proper combination of fuzzy logic and neural networks, it is possible to completely map (100%) the neural net knowledge to fuzzy logic. This enables users to generate fuzzy logic solutions that meet a pre-specified accuracy of outputs. This is possible because the neural net is able to learn to a pre-specified accuracy, especially for the training set, and the learned knowledge can be fully mapped to fuzzy logic. In general, NN architectures cannot compete with conventional techniques at performing precise numerical operations. However, there are large classes of problems, often involving ambiguity, prediction, or classification, that are more amenable to solution by NN than by other available techniques. Applications of ANNs in these fields are sure to help unravel some of the mysteries of various diseases and biological processes. To analyse the performance of the network, various test data are given as input to the network. To speed up the learning process, adjustments are applied at each neuron in all hidden and output layers. The results show that the multilayer neural network is trained more quickly than a single layer neural network and its classification efficiency is also higher.

ACKNOWLEDGEMENT

Special thanks to my guide Dr. C. Vijayalakshmi for continuous support, encouragement and guidance.

REFERENCES

[1] L.A. Zadeh, "What is Fuzzy Logic and what is its application?", Scientific Computing Seminar, NERSC, 2002.
[2] M. Premalatha and C. Vijayalakshmi, "Analysis of Soft Computing in Neural Network", in Proc. 2nd International Conference on Computer Applications, Pondicherry, Vol. 2, pp. 172-177, 2012.
[3] M. Premalatha and C. Vijayalakshmi, "Comparative Study of Statistical Pattern Recognition and Neural Network", in Proc. 3rd National Conference on Signal Processing, Communication and VLSI Design, Coimbatore, pp. 803-809, 2011.
[4] H. Haimovich, M.M. Seron, G.C. Goodwin and J.C. Agüero, "A neural approximation to the explicit solution of constrained linear MPC", in Proc. European Control Conference, Cambridge, UK, 2003.
[5] L. Xu, "BYY learning, regularized implementation, and model selection on modular networks with one hidden layer of binary units", Neurocomputing, 2002 (in press).
[6] A. Abraham and M.R. Khan, "Neuro-Fuzzy Paradigms for Intelligent Energy Management", in Innovations in Intelligent Systems: Design, Management and Applications, A. Abraham, L. Jain and B. Jan van der Zwaag (Eds.), Studies in Fuzziness and Soft Computing, Springer-Verlag Germany, Chap. 12, pp. 285-314, 2003.
[7] S. Rajasekaran and G.A. Vijayalakshmi Pai, Neural Network, Fuzzy Logic, and Genetic Algorithm - Synthesis and Applications, Prentice Hall, Chapter 3, pp. 34-86, 2005.
[8] Fakhreddine Karray and Clarence de Silva, Soft Computing and Intelligent Systems Design - Theory, Tools and Applications, Addison Wesley, Chapter 5, pp. 249-293, 2004.
[9] M. Ganesh, Introduction to Fuzzy Sets and Fuzzy Logic, Prentice Hall, pp. 1-256, 2008.
[10] Timothy Ross, Fuzzy Logic with Engineering Applications, John Wiley & Sons, pp. 1-623, 2004.
[11] J.P. Vila and V. Wagner, "Predictive neuro-control of uncertain systems: Design and use of a neuro-optimizer", Automatica, Vol. 39, No. 5, pp. 767-777, 2003.
[12] D. Mandic and J. Chambers, Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability, John Wiley & Sons, New York, 2001.
[13] Fakhreddine Karray and Clarence de Silva, Soft Computing and Intelligent Systems Design - Theory, Tools and Applications, Addison Wesley, Chapter 5, pp. 249-293, 2004.
[14] T. Kohonen, Self-Organizing Maps (3rd ed.), Springer, 2001.
[15] A. Rodriguez, B. Arcay, C. Dafonte, M. Manteiga and I. Carricajo, "Automated knowledge-based analysis and classification of stellar spectra using fuzzy reasoning", Expert Systems with Applications, Vol. 27, No. 2, pp. 237-244, 2004.
[16] H. Chris Tseng, "Internet Applications with Fuzzy Logic: A Survey", Vol. I, 2007.
[17] M.A. Brdyś and P. Tatjewski, Iterative Algorithms for Multilayer Optimizing Control, Imperial College Press/World Scientific, London, 2005.
[18] J.P. Vila and V. Wagner, "Predictive neuro-control of uncertain systems: Design and use of a neuro-optimizer", Automatica, Vol. 39, No. 5, pp. 767-777, 2003.
[19] A. Abraham, "Neuro-Fuzzy Systems: State-of-the-Art Modeling Techniques", in Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, J. Mira and A. Prieto (Eds.), Springer-Verlag Germany, Granada, Spain, pp. 269-276, 2001.
[20] H. Bunke and A. Kandel, Neuro-Fuzzy Pattern Recognition, World Scientific Publishing Co., Singapore, 2000.
[21] X.H. Li and C.L.P. Chen, "The Equivalence Between Fuzzy Logic Systems and Feedforward Neural Networks", IEEE Transactions on Neural Networks, Vol. 11, No. 2, pp. 356-365, 2000.
[22] D.W. Tank and J.J. Hopfield, "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit", IEEE Transactions on Circuits and Systems, Vol. 33, pp. 533-541, 1986.
[23] Youshen Xia and Jun Wang, "Global exponential stability of recurrent neural networks for solving optimization related problems", IEEE Transactions on Neural Networks, Vol. 11, No. 4, pp. 1017-1022, July 2000.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Mobile Agent based Intelligent Fire Control System

Megha Ainapurkar, Adarsh Kumar Handa

PC College of Engineering, Goa University, Goa, India

ABSTRACT
The proposed system architecture presents a fire control system using distributed mobile agents. In recent years, mobile agents have attracted considerable interest in distributed systems. In mobile agent technology, a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host, and continue its execution there. In this paper a multi mobile agent system is presented to reduce the time delay and loss of resources as compared to the traditional fire control system.

Keywords: Mobile Agent, BAS

1. INTRODUCTION
The emergence of distributed intelligent systems and smart fire detection systems has changed the traditional fire control system. Building Automation Systems (BAS) are concerned with the automatic control of building services, key areas being Heating, Ventilation and Air Conditioning (HVAC), lighting and shading. Mobile agent technology offers a new computing paradigm in which a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host on the network, and resume execution on the new host. The use of mobile code has a long history dating back to the use of remote job entry systems in the 1960's. Today's agent incarnations can be characterized in a number of ways, ranging from simple distributed objects to highly organized software with embedded intelligence.

2. INTELLIGENT AGENT
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

One of the most widely accepted definitions of the term is: "an agent acts on behalf of someone else, after having been authorized". An agent is a physical or virtual entity: which is capable of acting in an environment; which can communicate directly with other agents; which is driven by a set of tendencies; which possesses resources of its own; which is capable of perceiving its environment; which has only a partial representation of this environment; which possesses skills and can offer services; which may be able to reproduce itself; and whose behaviour tends towards satisfying its objectives, taking account of the resources and skills available to it and depending on its perception and the communications it receives. The definition given by Wooldridge & Jennings is as follows: "An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment, in order to meet its design objectives."

3. EXISTING FIRE CONTROL SYSTEMS FOR BAS
Building Automation Systems (BAS) use computer-based monitoring to coordinate, organize and optimize building control sub-systems such as security, fire/life safety, elevators, etc. Common applications include equipment scheduling (turning equipment off and on as required). Tremendous advances in computer technology are reflected in the sophistication and falling costs of Direct Digital Control (DDC) systems for buildings. DDC systems are now affordable for all but the smallest and simplest of buildings, and allow much finer control and energy savings than pneumatic controls. Besides flexible control of lighting and HVAC systems, DDC can also integrate fire and intruder alarms, security and access systems, and local and wide area computer networks. The following figure explains the design of a BAS.




Fig. 1 BAS Design (Home Agent, Master Controlling Machine, and a computer with an agent platform)

4. ARCHITECTURE OF NEW BAS SYSTEM BASED ON EXISTING BAS SYSTEM

The existing Fire Control System can only turn appliances on/off and generate alarm signals. The architecture (Fig. 2) proposed in this paper extends the system's functionality by making use of intelligent mobile agents. Fig. 2 represents the flow of the improved BAS. The new design contains the following entities.


4.1 Interface Agent (On Home Machine)
This is a local agent which keeps receiving information from all the sensors connected to the home electrical appliances. If an alarm is raised or fire smoke is detected by the agent, it switches off all the appliances and immediately moves to the closest fire station to deliver the address, owner details, etc.

4.2 MAS (Multi Agent System)
This is a collection of agents which can communicate with each other to complete the task. After receiving the information from the Interface Agent, the MAS activates the Database Agent to check whether a fire vehicle and a driver are available.

On receiving the details, it sends the information to the Transfer Agent. The Transfer Agent sends an SMS alert to the driver and a message to the Vehicle Agent to activate the system. The Vehicle Agent then takes the vehicle out of the parking slot and keeps it ready to go.

Fig. 2 Improved BAS Design (Fire Station MAS agent systems connected to a Centralized Database)

4.3 Master Controlling Agent
This agent controls all the other agents and activates the agents on all the machines in the network. It also checks whether a driver and vehicle are available at the requested fire station; otherwise it activates the Selection Agent to select the next closest fire station and sends the details to that station.
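The interaction among the entities of Sections 4.1-4.3 can be sketched as plain Python control flow. This is a toy model of the message sequence only; all class and function names are mine, and a real deployment would use a mobile agent platform (such as JADE) with genuine agent migration and messaging:

```python
# Toy walk-through of the dispatch sequence of Sections 4.1-4.3. Names are
# illustrative only; a real system would use a mobile agent platform
# (e.g. JADE) with real messaging instead of direct function calls.
stations = {
    "station_A": {"vehicle": False, "driver": True},
    "station_B": {"vehicle": True, "driver": True},
}

def database_agent(station):
    s = stations[station]
    return s["vehicle"] and s["driver"]          # both resources needed

def selection_agent(exclude):
    # choose the next closest station that has resources available
    return next(s for s in stations if s != exclude and database_agent(s))

def master_controlling_agent(alarm_station):
    if database_agent(alarm_station):
        return alarm_station
    return selection_agent(alarm_station)        # fall back to another station

def interface_agent(home_address):
    # fire detected: appliances are switched off, then the closest
    # station with a free vehicle and driver is alerted by SMS
    chosen = master_controlling_agent("station_A")
    return f"SMS to driver at {chosen}: fire at {home_address}"

print(interface_agent("12 Main St"))
```

Here station_A lacks a vehicle, so the Selection Agent reroutes the alert to station_B, mirroring the fallback behaviour described above.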



The proposed design can be implemented on an intranet but not yet in an open environment, as current technology is still working through the security issues of mobile agents.

5. ADVANTAGES OF NEW DESIGN
1. The system is fast as compared to the manual process.
2. It can reduce losses by a substantial amount and can save many lives.
3. The system is not restricted to any particular station; it can transfer the data to other stations as fast as possible to reduce the loss.
4. It is a fully automated approach with hardly any human intervention, so valuable time can be saved.

6. CONCLUSIONS
The design will help to reduce the burden caused by manual working. The intelligent mobile agent system can improve the existing system by distributing the data across the network as fast as possible and connecting to the closest fire station. The use of artificial intelligence can take the world into a new era in which most of the work will be handled by such intelligent systems.




Software Reliability Growth Model - Type-I Generalized Half Logistic Distribution

R.R.L. Kantam¹, V. Ramakrishna², M.S. Ravikumar³

¹,³ Dept. of Statistics, Acharya Nagarjuna University, Nagarjunanagar, India.
² Dept. of Computer Science and Engineering, Koneru Lakshmaiah University, Vaddeswaram, India.

ABSTRACT
A Non Homogeneous Poisson Process is considered with its mean value function represented by a Generalized Half Logistic Model. The process is used to approximate a Software Reliability Growth Model (SRGM). The suitability of the model to live data, along with the parametric estimates, is presented.

Keywords: NHPP, GHLD, SRGM.

1. INTRODUCTION
Computer software is intended to instruct the computer's hardware to perform the necessary computations. An important goal is to know how to initially estimate, and subsequently measure, the quality and reliability of developed software. For a software product, too, the methodologies of reliability can be applied. Since software is an intellectual creation, it is generally difficult to measure its quality. The only way to ensure software quality is to identify systematically the opportunities for improvement at every stage of software development. The associated preferability of a software product may be known as software reliability. The imperfectness of software can be known from the existing discrepancy between what it can do and what the computing environment wants it to do. These discrepancies are known as software faults. Even if we know that the software contains faults, we generally do not know their exact identity. The two available approaches for indicating software faults are program proving and program testing. The possible imperfectness of these approaches induces the need for a metric that reflects the degree of program correctness. One such metric in software engineering practice is software reliability. A commonly used approach for measuring software reliability is via an analytical model

whose parameters are generally estimated from available data on software failures. Reliability and other relevant measures are computed from the fitted model. In a way, software reliability is a probability measure: it is defined as the probability that software faults do not cause a failure during a specified exposure period in a specified environment. It is useful for giving the user confidence about software correctness. The analytical models proposed to address the problem of software reliability measurement can be classified into four main groups according to the nature of the failure process studied. One among them is the group of failure count models. In this group, the number of software failures experienced within a given time period is a time dependent stochastic process. Since faults are removed from the system as and when failures are experienced, it is expected that the observed number of failures per unit time (known as the failure intensity) will decrease. The basic idea behind most of the failure count models is that of a Poisson distribution whose parameter takes different forms for different models. Letting N(t) be the cumulative number of failures observed by time t, N(t) can be modeled as a Poisson process with a time dependent failure rate:

P[N(t) = y] = [m(t)]^y e^{-m(t)} / y!,  y = 0, 1, 2, …        -----(1.1)
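Equation (1.1) is straightforward to evaluate numerically. A sketch using the half-logistic mean value function m(t) = a(1 - e^{-bt})/(1 + e^{-bt}) adopted later in the paper (the parameter values below are arbitrary, chosen only for illustration):

```python
import math

def m(t, a, b):
    # Half-logistic mean value function from Section 2 (m(t) -> a as t grows)
    return a * (1 - math.exp(-b * t)) / (1 + math.exp(-b * t))

def p_failures(y, t, a, b):
    # Equation (1.1): P[N(t) = y] = m(t)^y * exp(-m(t)) / y!
    mt = m(t, a, b)
    return mt ** y * math.exp(-mt) / math.factorial(y)

a, b = 100.0, 0.05            # arbitrary illustrative parameter values
probs = [p_failures(y, t=10.0, a=a, b=b) for y in range(120)]
print(sum(probs))             # ≈ 1: the masses over y form a distribution
```

The truncation at y < 120 is safe here because the Poisson mean m(10) is about 24.5, so the omitted tail is negligible.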

Here m(t) is called the mean value function, and its derivative is called the failure intensity function μ(t). If the limiting value of N(t) is a finite constant, that limit is called the eventual expected number of failures of the system. If F(t) is the cdf of a positive valued continuous random variable and 'a' is the eventual number of failures, then 'a·F(t)' is generally taken as the mean value function of the failure count Poisson process. With the above Poisson mass function, all such models are termed non-homogeneous Poisson processes (NHPP) for measuring software reliability, generally known as 'software reliability growth models' (SRGM). In the literature a number of models exist under this category, as can be seen from [3], [4] and [5]. In this paper we adopt a generalized HLD to define the mean value function of the NHPP in order to study software reliability growth. The rest of the paper is organized as follows: estimation of the parameters of the corresponding NHPP from failure count data is presented in Section 2; goodness of fit of the model for data sets is discussed in Section 3.

2. SOFTWARE RELIABILITY GROWTH MODEL (SRGM) WITH TYPE-I GHLD

Consider the non-homogeneous Poisson process given by equation (1.1) with mean value function given by

m(t) = a (1 - e^{-bt}) / (1 + e^{-bt}),

where 'a' is the limiting value of m(t), known as the eventual expected number of software failures, and b = 1/σ, the reciprocal of the scale parameter.

For n > 0, σ/θ = constant, so the model given by equation (10) does not approach isotropy at any stage. The spatial volume of the model increases with the increase in time. The model given by equation (10) for a cloud of cosmic strings possesses a line singularity as ρ, λ, θ and σ

Five Dimensional Spherically Symmetric Strings in Scalar- Tensor Theory of gravitation

tend to infinity and the spatial volume tends to zero at the initial epoch t = 0.

Case 3.1: The case ρ = λ refers to geometric strings. If m = 2, the solution of the field equations (5)-(7) is given by

e^A = t^{2n},  e^B = t^{-n}        (18)

and the scalar field by

φ = φ₁ t^{k₁}        (19)

where φ₁ is an arbitrary constant and

k₁ = ± [ {n(9n + 6) - 2} / 2 ]^{1/2},  n ≠ -2/3.

The spherically symmetric five dimensional model in this case reduces to the form

ds² = dt² - t^{2n} (dr² + r² dΘ² + r² sin²Θ dφ²) - t^{-n} dy²        (20)

The string energy density ρ and the tension density λ are

ρ = λ = (φ₁/2) (15n² - 6n) / t^{2-2k₁}        (21)

The particle density ρ_p, the scalar expansion θ, the shear scalar σ, the spatial volume V and the deceleration parameter q are given by

ρ_p = ρ - λ = 0,
θ = 5n/(2t),
σ = √42 n/(6t),        (22)
V = t^{5n},
q = -1 + 3/(5n).

It is observed that the spatial volume V is zero at t = t₀, where t₀ = 0, and the expansion scalar is infinite, which shows that the universe starts

evolving with zero volume at t = t₀ with an infinite rate of expansion. The model has a "Barrel type" singularity at the initial epoch for n ≠ -2/3. The string energy density ρ, the tension density λ and the shear scalar σ diverge at the initial singularity. As t increases, the scale factors and the spatial volume increase but the expansion scalar decreases; thus, the rate of expansion slows down with increasing time. The scalar expansion θ and the shear scalar σ also decrease as t increases. As t → ∞ the volume V becomes infinite whereas θ, σ, ρ and λ tend to zero. Therefore, the model essentially gives an empty universe for large time t, and it has a "Cigar type" singularity at late times provided n ≠ -2/3. For an expanding universe, the scale factor A(t) should be an increasing function of time, while the scale factor B(t) corresponding to the extra dimension should decrease or be constant so as to have compactification of the extra dimension. Thus the extra dimension B(t) contracts whereas A(t) expands indefinitely with time for n > 0.

Case 3.2: Now we consider the massive string (ρ + λ = 0), i.e., the sum of the rest energy density and the tension density for a cloud of strings vanishes. Then we have m = 0. The solution of the field equations (5)-(7) can be expressed as

e^A = 1        (23)

e^B = t^{n}        (24)

and the scalar field is given by φ = φ₂ t^{k₂}, where φ₂ is an arbitrary constant and

k₂ = ± [ (n² - 2n) / 2 ]^{1/2}.

The model, for massive strings, reduces to

ds² = dt² - dr² - r² dΘ² - r² sin²Θ dφ² - t^{n} dy²        (25)

J.Satish

The string energy density ρ and the tension density λ are

ρ = −λ = (ω φ₂²/2) (n² + 2n) / t^(2−2k₂).   (26)

The particle density is

ρ_p = ρ − λ = 2ρ = ω φ₂² (n² + 2n) / t^(2−2k₂).   (27)

If n = −2 then ρ = λ = 0, which shows that the cosmic strings do not exist. It is also observed that the scalar field becomes a constant when n = −2; this situation leads to the general relativity case. The kinematical parameters are given by

scalar expansion θ = n/(2t),
shear scalar σ = √7 n/(6t),
spatial volume V = √g = t^n,
deceleration parameter q = 2/n − 1.

As time increases the scale factor B(t) increases indefinitely, indicating that there is no compactification of the extra dimension with time in the model for massive strings. Since ρ_p/|λ| = 2 is constant, the anisotropy is maintained throughout, and the model inflates since q < 0 for all n > 2. The present-day universe is undergoing accelerated expansion, and it may be noted that the current observations of SNe Ia and the CMBR favour accelerating models (q < 0). The dominant energy conditions imply that ρ ≥ 0 and ρ² ≥ λ²; these conditions do not restrict the sign of λ, and the expressions given by equation (26) satisfy all of them. Since ρ_p/|λ| = 2 > 1, we may conclude that the particles dominate over the strings in this model. It is easy to see that σ/θ ≈ 0.88 for our models, whereas the present upper limit of σ/θ is 10⁻⁵, obtained from indirect arguments concerning the isotropy of the primordial blackbody radiation [29]. The value of σ/θ for our models is considerably greater than the present value; this fact indicates that our solutions represent the early stages of the evolution of the universe.

Case 3.3: The p-strings, or Takabayasi strings, are represented by ρ = (1 + α)λ, α > 0. The solution of the field equations (5)-(7) is given by

e^A = t^(n(2α+1)/(α+1))   (28)

and e^B = t^n,   (29)

and the scalar field is given by φ = φ₃ t^(k₃),   (30)

where φ₃ is an arbitrary constant and

k₃ = [ 2{n²(4α² + 4α + 1) + 2n(3α² + 3α) + α²} / (ω(α + 1)²) ]^(1/2).

The line element for p-strings reduces to the form

ds² = dt² − t^(2n(2α+1)/(α+1)) (dr² + r² dθ² + r² sin²θ dφ²) − t^(2n) dy².   (31)

Now the string energy density ρ, the tension density λ and the particle density ρ_p are given by

ρ = (ω φ₃²/2) {n(5 + 4α) − 2(α + 1)} / (4 t^(2−2k₃) (α + 1)²),   (32)

λ = ρ/(1 + α),  ρ_p = ρ − λ = αρ/(1 + α),

and the kinematical parameters are given by

scalar expansion θ = n(5 + 4α) / (2t(α + 1)),

shear scalar σ = n √(148α² + 458α + 427) / (6t(α + 1)),



Five Dimensional Spherically Symmetric Strings in Scalar- Tensor Theory of gravitation

spatial volume V = √g t^(n(4α+5)/(α+1)),   (33)

deceleration parameter q = {(3 − 4n)α + (3 − 5n)} / (n(4α + 5)).   (34)

From the above parameters we observe that ρ, λ, the scalar expansion θ and the shear scalar σ all decrease with the increase of time. At the singularity stage t → 0 we have V → 0 while ρ, λ, θ and σ are all infinitely large; as t → ∞, V → ∞ and ρ, λ, θ and σ all vanish, which represents an empty universe. All these parameters remain finite and physically significant for all t > 0. Hence the space-time admits a big-bang singularity, and the rate of expansion of the universe decreases with the increase of time. As time increases the scale factors A and B increase indefinitely, indicating that there is no compactification of the extra dimension with time.

Case 3.4: The case λ = 0 corresponds to a dust-filled universe without strings, whose future asymptote is the Einstein-de Sitter universe. In this case the solution of the field equations (5)-(7) is given by

e e t A

B

n

where

4

is

an

arbitrary

(35) constant

and

1 2

 6n(n  1)  k4    .    The line element in this case can be written as





ds 2  dt 2  t n dr 2  r 2 d 2  r 2 sin 2  d 2 t n dy 2 (36) The string energy density ρ is

  4

2

(12n 2  6n) t 2 2 k4

p     



(12n 2  6n) , t 22 k4

2n , t 148 n , 6t

(37)

V  t 4n ,
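The deceleration parameters quoted in the cases above all fit the power-law pattern q = 1/p − 1 for a mean scale factor a(t) ∝ t^p with the usual definition q = −a ä/ȧ². The short check below is our own illustration, not part of the paper: it records the closed form with exact fractions and confirms it against a finite-difference evaluation; the exponents p = 4n/3 and p = 5n/3 (at n = 1) are our assumptions, chosen to reproduce q = 3/(4n) − 1 and q = 3/(5n) − 1 quoted above.

```python
from fractions import Fraction

def q_powerlaw(p):
    # For a(t) = t^p:  a' = p t^(p-1),  a'' = p(p-1) t^(p-2),
    # so q = -a a'' / a'^2 = -(p-1)/p = 1/p - 1, independent of t.
    return 1 / Fraction(p) - 1

def q_numeric(p, t=2.0, h=1e-5):
    # Finite-difference check of q = -a a'' / a'^2 at one instant.
    a = lambda s: s ** p
    da = (a(t + h) - a(t - h)) / (2 * h)
    dda = (a(t + h) - 2 * a(t) + a(t - h)) / h ** 2
    return -a(t) * dda / da ** 2

# n = 1: p = 4/3 gives q = 3/(4n) - 1 = -1/4; p = 5/3 gives q = 3/(5n) - 1 = -2/5
assert q_powerlaw(Fraction(4, 3)) == Fraction(-1, 4)
assert q_powerlaw(Fraction(5, 3)) == Fraction(-2, 5)
assert abs(q_numeric(4 / 3) + 0.25) < 1e-4
```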

 4n(4n  3)  q   . 2  16n  Usually the model decelerate when q>0 and inflates when q n0 and U(n), n n0 , n1 ,..., N  such that T( N ) = T f .

= ( N , n0 ){T0  W (n0 , N )W 1 (n0 , N )[T0  (n0 , N  1  1)T f ]}

= Tf Therefore the periodic system S1 is completely contollable. Now conversely suppose that the periodic system S1 is completely contollable. Then to show that W (n0 , N ) is non singular. Let  be any arbitrary constant column s-vector. Since W (n0 , N ) is non singular we can construct a quadratic form

TW (n0 , N ) =

Theorem 6 The discrete matrix system with

239

Venkata Sundaranand Putcha Sudarsan M S V D N 1

{  (n , j    1)B( j   )B ( j   ) (n , j    1)} T

*

1

0

1

*

2

2

0

1

j = n0

N 1

=



T

definite matrix. Hence W (n0 , N ) is non singular. It can easily be verified that W (n0 , N ) is periodic with period  .

( j  1, n0 )( j  1, n0 ), where

j = n0

N 1

This implies 1 g =0  1 =0 [since g is a non zero vector]. This is a contradiction. Therefore W (n0 , N ) is a symmetric positive

( j  1, n0 ) = B* ( j  2 )* (n0 , j  1  1) =    0 2

j = n0

Therefore W (n0 , N ) is positive semi definite. If possible suppose that there exists some column s-vector 1  0 such that
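The Gramian test of Theorem 6 is easy to exercise numerically. The sketch below is our own illustration with hypothetical period-2 coefficient matrices; for simplicity it takes the shifts σ₁ = σ₂ = 0 and uses the equivalent forward-time form of the controllability matrix, W = Σ_j Φ(N, j+1) B(j) B^T(j) Φ^T(N, j+1), together with the corresponding Gramian-based control.

```python
import numpy as np

# Hypothetical period-2 coefficients (our example data, not from the paper).
A_seq = [np.array([[0.0, 1.0], [1.0, 1.0]]),
         np.array([[1.0, 0.0], [0.5, 1.0]])]
B_seq = [np.array([[1.0], [0.0]]),
         np.array([[0.0], [1.0]])]
A = lambda n: A_seq[n % 2]
B = lambda n: B_seq[n % 2]

def phi(a, b):
    """State transition matrix Phi(a, b) = A(a-1) ... A(b), for a >= b."""
    M = np.eye(2)
    for j in range(b, a):
        M = A(j) @ M
    return M

def gramian(n0, N):
    W = np.zeros((2, 2))
    for j in range(n0, N):
        P = phi(N, j + 1) @ B(j)
        W += P @ P.T
    return W

n0, N = 0, 4
W = gramian(n0, N)
assert abs(np.linalg.det(W)) > 1e-9       # non-singular => completely controllable

# Steer x0 to xf with the Gramian-based control u(j) = B^T Phi^T W^{-1} (xf - Phi x0).
x0 = np.array([1.0, -1.0]); xf = np.array([2.0, 3.0])
rhs = xf - phi(N, n0) @ x0
u = {j: B(j).T @ phi(N, j + 1).T @ np.linalg.solve(W, rhs) for j in range(n0, N)}
x = x0.copy()
for j in range(n0, N):
    x = A(j) @ x + (B(j) @ u[j]).ravel()
print(np.allclose(x, xf))  # True
```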

Indeed, α₁^T W(n₀, N) α₁ = 0 means

Σ_{j=n₀}^{N−1} α₁^T Φ(n₀, j + σ₁ + 1) B(j + σ₂) B*(j + σ₂) Φ*(n₀, j + σ₁ + 1) α₁ = 0,

i.e., Σ_{j=n₀}^{N−1} ψ₁^T(j + 1, n₀) ψ₁(j + 1, n₀) = 0, where ψ₁(j + 1, n₀) = B*(j + σ₂) Φ*(n₀, j + σ₁ + 1) α₁,

i.e., Σ_{j=n₀}^{N−1} ‖ψ₁(j + 1, n₀)‖² = 0, i.e., ψ₁(n + 1, n₀) = 0 for all n ∈ {n₀, n₁, …, N − 1}.

Therefore B*(n + σ₂) Φ*(n₀, n + σ₁ + 1) α₁ = 0 for all n ∈ {n₀, n₁, …, N − 1}. Since the periodic system S1 is completely controllable, there exists a control ξ(n), say, making T(N) = 0 if T(n₀) = α₁ g, where g is any non-zero 1×s matrix, i.e.,

α₁ g = −Σ_{j=n₀}^{N−1} Φ(n₀, j + σ₁ + 1) B(j + σ₂) ξ(j).

Consider

‖α₁ g‖² = (α₁ g)^T (α₁ g) = −Σ_{j=n₀}^{N−1} ξ*(j) B*(j + σ₂) Φ*(n₀, j + σ₁ + 1) α₁ g = 0

[since ψ₁(n + 1, n₀) = 0 for all n ∈ {n₀, n₁, …, N − 1}]. This implies α₁ g = 0 and hence α₁ = 0 [since g is a non-zero vector], which is the required contradiction.

Theorem 7: Suppose Ũ(n) is any other control taking (T₀, n₀) to (T_f, N). Then

Σ_{j=n₀}^{N−1} ‖Ũ(j)‖² > Σ_{j=n₀}^{N−1} ‖U(j)‖²,

where U(j) = −B*(j + σ₂) Φ*(n₀, j + σ₁ + 1) W⁻¹(n₀, N)[T₀ − Φ(n₀, N + σ₁ + 1) T_f], provided Ũ(n) ≠ U(n) for at least one n ∈ {n₀, n₀ + 1, …, N − 1}.

Proof: From the hypothesis, U(n) and Ũ(n) are both controls taking (T₀, n₀) to (T_f, N), so

T(N) = T_f = Φ(N + σ₁, n₀)[T₀ + Σ_{j=n₀}^{N−1} Φ(n₀, j + σ₁ + 1) B(j + σ₂) U(j)]   (7)

T(N) = T_f = Φ(N + σ₁, n₀)[T₀ + Σ_{j=n₀}^{N−1} Φ(n₀, j + σ₁ + 1) B(j + σ₂) Ũ(j)]   (8)

Subtracting (7) from (8) we get

Σ_{j=n₀}^{N−1} Φ(n₀, j + σ₁ + 1) B(j + σ₂)[Ũ(j) − U(j)] = 0.

Pre-multiplying this equation on both sides by [T₀ − Φ(n₀, N + σ₁ + 1) T_f]^T [W⁻¹(n₀, N)]^T we get

Σ_{j=n₀}^{N−1} [T₀ − Φ(n₀, N + σ₁ + 1) T_f]^T [W⁻¹(n₀, N)]^T Φ(n₀, j + σ₁ + 1) B(j + σ₂)[Ũ(j) − U(j)] = 0,

i.e., Σ_{j=n₀}^{N−1} U^T(j)[Ũ(j) − U(j)] = 0.   (9)


Controllability and Observability of Periodic Matrix Difference System

From (9) it follows that

Σ_{j=n₀}^{N−1} U^T(j) U(j) = Σ_{j=n₀}^{N−1} U^T(j) Ũ(j).

Therefore

Σ_{j=n₀}^{N−1} ‖Ũ(j)‖² = Σ_{j=n₀}^{N−1} ‖Ũ(j) − U(j) + U(j)‖²
  = Σ_{j=n₀}^{N−1} ‖Ũ(j) − U(j)‖² + Σ_{j=n₀}^{N−1} ‖U(j)‖² + 2 Σ_{j=n₀}^{N−1} U^T(j)[Ũ(j) − U(j)]
  = Σ_{j=n₀}^{N−1} ‖Ũ(j) − U(j)‖² + Σ_{j=n₀}^{N−1} ‖U(j)‖²,

i.e., Σ_{j=n₀}^{N−1} ‖Ũ(j)‖² > Σ_{j=n₀}^{N−1} ‖U(j)‖².
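Theorem 7 says the Gramian-based control is the minimum-energy control among all controls realizing the same transfer. The sketch below is our own illustration, with hypothetical constant coefficients and the shifts taken as zero: it builds the reachability map M whose columns are Φ(N, j+1)B(j), computes the minimum-energy control u* = M^T (M M^T)⁻¹ [x_f − Φ(N, n₀) x₀], and checks that an alternative control, obtained by adding a null-space direction of M, reaches the same final state with strictly larger energy.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # hypothetical constant coefficient matrix
Bv = np.array([1.0, 0.5])                # hypothetical input column
n0, N = 0, 4

# x(N) = A^N x0 + M @ [u(0), ..., u(N-1)]; column j of M is Phi(N, j+1) B.
M = np.column_stack([np.linalg.matrix_power(A, N - (j + 1)) @ Bv
                     for j in range(n0, N)])

x0 = np.array([0.0, 0.0]); xf = np.array([3.0, 1.0])
rhs = xf - np.linalg.matrix_power(A, N) @ x0
W = M @ M.T                                   # controllability Gramian
u_star = M.T @ np.linalg.solve(W, rhs)        # minimum-energy control

# Any u with M @ u = rhs also reaches xf; perturb u_star inside null(M).
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                                    # a null-space direction: M @ v = 0
u_alt = u_star + v

assert np.allclose(M @ u_star, rhs)
assert np.allclose(M @ u_alt, rhs)
assert np.linalg.norm(u_alt) > np.linalg.norm(u_star)   # strictly larger energy
```

Because u* lies in the row space of M and v in its null space, ‖u* + v‖² = ‖u*‖² + ‖v‖², which is exactly the decomposition used at the end of the proof above.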

Theorem 8: If for a given final state T_f there exists a constant square matrix Γ of order s such that W(n₀, N) Γ = T₀ − Φ(n₀, N + σ₁ + 1) T_f, then the control U(n) = −B*(n + σ₂) Φ*(n₀, n + σ₁ + 1) Γ transfers the system S1 from T(n₀) = T₀ to T(N) = T_f.

Proof: By substituting the value of U(n) in the general solution of (2) with n = N we get

T(N) = Φ(N + σ₁, n₀)[T₀ − Σ_{j=n₀}^{N−1} Φ(n₀, j + σ₁ + 1) B(j + σ₂) B*(j + σ₂) Φ*(n₀, j + σ₁ + 1) Γ]
  = Φ(N + σ₁, n₀)[T₀ − W(n₀, N) Γ] = T_f.

Definition 9: The discrete time varying system with periodic coefficients S1 is said to be completely observable if for any initial time n₀ and any initial state T(n₀) = T₀ there exists a finite time N > n₀ such that knowledge of U(n) and Z(n) on {n₀, …, N} suffices to determine T₀ uniquely.

Theorem 10: The discrete time varying matrix system with periodic coefficients S1 is completely observable if and only if the symmetric observability matrix

V(n₀, N) = Σ_{j=n₀}^{N−1} Φ^T(j + σ₁ + 1, n₀) C^T(j + σ₃) C(j + σ₃) Φ(j + σ₁ + 1, n₀)   (10)

is non-singular; V(n₀, N) is periodic with period σ̂, where σ̂ is the least common multiple of σ₁ and σ₃.

Proof: Suppose that V(n₀, N) is non-singular, and let U(n) ≡ 0 for n ∈ {n₀, n₁, …, N}. Then the output is

Z(n) = C(n + σ₃) Φ(n + σ₁ + 1, n₀) T₀.   (11)

Pre-multiplying (11) on both sides by Φ^T(n + σ₁ + 1, n₀) C^T(n + σ₃), we get

Φ^T(n + σ₁ + 1, n₀) C^T(n + σ₃) Z(n) = Φ^T(n + σ₁ + 1, n₀) C^T(n + σ₃) C(n + σ₃) Φ(n + σ₁ + 1, n₀) T₀.   (12)

By taking the summation on both sides of (12) from n₀ to N − 1, we get

Σ_{n=n₀}^{N−1} Φ^T(n + σ₁ + 1, n₀) C^T(n + σ₃) Z(n) = V(n₀, N) T₀,

so that T₀ = V⁻¹(n₀, N) Σ_{n=n₀}^{N−1} Φ^T(n + σ₁ + 1, n₀) C^T(n + σ₃) Z(n).

Therefore the system S1 is completely observable. Conversely, suppose that the system S1 is completely observable; we show that V(n₀, N) is non-singular. Since V(n₀, N) is symmetric, we can construct the quadratic form β^T V(n₀, N) β, where β is an arbitrary constant column s-vector:

β^T V(n₀, N) β = Σ_{j=n₀}^{N−1} [C(j + σ₃) Φ(j + σ₁ + 1, n₀) β]^T [C(j + σ₃) Φ(j + σ₁ + 1, n₀) β] ≥ 0.

Therefore V(n₀, N) is positive semi-definite. Suppose, if possible, that there exists some column s-vector β₁ ≠ 0 such that β₁^T V(n₀, N) β₁ = 0, i.e., Σ_{j=n₀}^{N−1} ‖C(j + σ₃) Φ(j + σ₁ + 1, n₀) β₁‖² = 0, i.e., C(j + σ₃) Φ(j + σ₁ + 1, n₀) β₁ = 0 for all j ∈ {n₀, n₁, …, N}. If T₀ = β₁ k, where k is any non-zero 1×s matrix, then the output is

Z(n) = C(n + σ₃) Φ(n + σ₁ + 1, n₀) T₀ = C(n + σ₃) Φ(n + σ₁ + 1, n₀) β₁ k = 0.

Therefore T₀ cannot be determined from the knowledge of the output Z(n), which contradicts the assumption that S1 is completely observable. Therefore V(n₀, N) is positive definite and hence non-singular. It can easily be verified that V(n₀, N) is periodic with period σ̂.
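The reconstruction formula T₀ = V⁻¹(n₀, N) Σ Φ^T C^T Z(n) in the proof of Theorem 10 can be checked numerically. The sketch below is our own illustration with hypothetical period-2 coefficients, taking U(n) ≡ 0 and the shifts σ₁ = σ₃ = 0.

```python
import numpy as np

# Hypothetical period-2 coefficients (our example data, not from the paper).
A_seq = [np.array([[0.0, 1.0], [1.0, 1.0]]),
         np.array([[1.0, 0.0], [0.5, 1.0]])]
C_seq = [np.array([[1.0, 0.0]]),
         np.array([[0.0, 1.0]])]
A = lambda n: A_seq[n % 2]
C = lambda n: C_seq[n % 2]

def phi(a, b):
    """State transition matrix Phi(a, b) = A(a-1) ... A(b), for a >= b."""
    M = np.eye(2)
    for j in range(b, a):
        M = A(j) @ M
    return M

n0, N = 0, 4
x0 = np.array([2.0, -1.0])
z = {n: C(n) @ phi(n, n0) @ x0 for n in range(n0, N)}   # outputs with U = 0

# Observability matrix V and the weighted output sum.
V = np.zeros((2, 2))
s = np.zeros(2)
for n in range(n0, N):
    Cn = C(n) @ phi(n, n0)
    V += Cn.T @ Cn
    s += (Cn.T @ z[n]).ravel()

x0_hat = np.linalg.solve(V, s)      # T0 = V^{-1} * sum of Phi^T C^T Z(n)
print(np.allclose(x0_hat, x0))  # True
```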

Theorem 11: The discrete time varying system S2 is completely controllable (respectively, completely observable) if and only if the dual system S2D,

T(n + 1) = −A^T(n) T(n) + C^T(n) U(n), T(n₀) = T₀,
Z(n) = B^T(n) T(n),

is completely observable (respectively, completely controllable).

Proof: The proof follows from the proofs of Theorems 6 and 10.

4. SUMMARY AND CONCLUSIONS
In this paper, the general solution of the periodic matrix difference system is presented, and its period is characterized by the periods of the coefficient matrices. Necessary and sufficient conditions for the controllability and observability of the periodic matrix difference system are established through the symmetric periodic controllability and observability matrices.

ACKNOWLEDGEMENTS
Venkata Sundaranand Putcha is supported by DST-CMS project Lr.No. SR/S4/MS:516/07, Dt.21-04-2008. Sudarsan M S V D is thankful to the Principal and Management of VRSEC.


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Eccentric Graph of Unique Eccentric Node Graph

S.Sriram¹   D. Ranganayakulu²

¹ Manonmaniam Sundaranar University; Valliammai Engineering College, SRM Nagar, Kattankulathur, Tamilnadu, India
² Department of Mathematics, S.I.V.E.T. College, Gowrivakkam, Chennai, India

ABSTRACT: The eccentricity e(u) of a node u is the maximum distance from u to any other node of G. A node v is an eccentric node of u if the distance from u to v equals e(u). The eccentric graph of a graph G, denoted by Ge, has the same set of nodes as G, with two nodes u and v adjacent in Ge iff either u is an eccentric node of v in G or v is an eccentric node of u in G. A graph G is a unique eccentric node graph if each node of G has a unique eccentric node. In this paper we study the eccentric graph of unique eccentric node graphs, in particular those of diameter two and three. We also study the eccentric graphs of Harary graphs.

Keywords: Eccentricity, Eccentric graph, Unique eccentric node graph, Harary graph.

1. INTRODUCTION
In a graph G, the distance between nodes u and v, denoted d(u,v), is defined as the length of a shortest path between u and v. The eccentricity e(v) of a node v is defined as e(v) = max{d(u,v) : u ∈ V(G)}. The radius and diameter of G are defined as r(G) = min{e(v) : v ∈ V(G)} and diam(G) = max{e(v) : v ∈ V(G)}. A node u is a central node if e(u) = r(G), and a peripheral node if e(u) = diam(G). The sets of all central nodes and peripheral nodes are denoted by C(G) and P(G), respectively. A graph G is self-centered if r(G) = diam(G). The set of all eccentric nodes of u is denoted by E(u), and the set of all eccentric nodes of G is EC(G) = ∪_{u ∈ V(G)} E(u). A graph G is an m-eccentric node graph if |E(u)| = m for every node u ∈ V(G); if m = 1, then G is called a unique eccentric node graph. For all u ∈ V(G), N_i(u) = {v ∈ V(G) : d(u,v) = i}.

There are many graphs associated with a given graph. The eccentric graph [2] of a graph G, denoted Ge or EG(G), is defined on the same set of nodes as G, with two nodes u and v adjacent in EG(G) if and only if either u is an eccentric node of v or v is an eccentric node of u. A graph G and its eccentric graph EG(G) are shown in Fig. 1. A graph G is complete if there is an edge between every pair of distinct nodes; a complete graph on n nodes is denoted by Kn. The Bistar graph B_{m,n} [4] is the graph obtained by joining m pendant edges at one end and n pendant edges at the other end of K₂. The Broom graph B_{n,d} [6] consists of a path P_d together with (n − d) end nodes, all adjacent to the same end vertex of P_d. The Harary graph H_{k,n} is a k-regular graph on n nodes, numbered from 0 to n − 1. When k = 2l is even, each node i is adjacent to i ± 1, i ± 2, …, i ± l (mod n). When k = 2l + 1 is odd and n = 2r is even, each node i is adjacent to the 2l nodes mentioned above as well as to i + r. When k = 2l + 1 and n = 2r + 1 are both odd, H_{k,n} is obtained from H_{k−1,n} by adding the edges (i, i + (n − 1)/2) for 0 ≤ i ≤ (n − 1)/2. H_{3,8} and H_{4,8} are shown in Fig. 2.

Here we study the eccentric graph of unique eccentric node graphs, in particular of those of diameter two and three, and also the eccentric graphs of Harary graphs.
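The quantities defined above — eccentricity, radius, diameter and the eccentric-node sets E(u) — are easy to compute with breadth-first search. The following sketch is our own illustration on the cycle C₅ (not an example from the paper); it shows that C₅ is a self-centered 2-eccentric node graph.

```python
from collections import deque

def distances(adj, s):
    # BFS distances from node s in an unweighted graph.
    d = {s: 0}
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                Q.append(v)
    return d

adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # cycle C5
ecc = {u: max(distances(adj, u).values()) for u in adj}   # eccentricities e(u)
r, diam = min(ecc.values()), max(ecc.values())
E = {u: {v for v, dv in distances(adj, u).items() if dv == ecc[u]} for u in adj}

assert r == diam == 2                      # self-centered
assert all(len(E[u]) == 2 for u in adj)    # C5 is a 2-eccentric node graph
```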


Fig. 1(a), Fig. 1(b): a graph G and its eccentric graph EG(G).
Fig. 2(a), Fig. 2(b): the Harary graphs H_{3,8} and H_{4,8}.

2. KNOWN RESULTS
We state the following known results.
Result 1 [1]: The eccentric graph of a self-centered unique eccentric node graph is a union of K₂'s.
Result 2 [3]: For every unique eccentric node graph, d ≥ 2r − 1.
Result 3 [3]: In any unique eccentric node graph G, |P(G)| is even.
Result 4 [3]: A unique eccentric node graph G is self-centered if and only if each node of G is an eccentric node.

3. MAIN RESULTS
In this section we study the eccentric graph of a unique eccentric node graph, in particular of those of diameter two and three. We also study the eccentric graphs of the Harary graphs H_{k,n}, where at least one of k and n is even.

Theorem 1: Let G = (V, E) be a non-self-centered unique eccentric node graph such that P(G) = EC(G). Then


EG(G) = G₀ ∪ G₁ ∪ G₂ ∪ … ∪ G_{k−1}, where each G_i is a Bistar, a K₂ or a Broom graph and |P(G)| = 2k.

Proof: Let G = (V, E) be a non-self-centered unique eccentric node graph such that P(G) = EC(G). Since |P(G)| is even, let P(G) = {u₀, u₁, u₂, …, u_{2k−1}} with

E(u_i) = {u_{(i+k) mod 2k}} for i = 0, 1, 2, …, 2k−1.

Let S(u_i) = {v ∈ V(G) − P(G) : E(v) = {u_i}} for i = 0, 1, 2, …, 2k−1. Since G is a unique eccentric node graph, S(u_i) ∩ S(u_j) = ∅ for i ≠ j, and E(u_i) = {u_j} iff E(u_j) = {u_i} for all u_i, u_j ∈ P(G). This implies that the eccentric graph EG(G) has the same node set as G and the edge set

E(EG(G)) = {(u_i, u_{(i+k) mod 2k}) : u_i ∈ P(G)} ∪ {(v, u_i) : v ∈ S(u_i)}.

Let G_i = ⟨{u_i, u_{(i+k) mod 2k}} ∪ S(u_i) ∪ S(u_{(i+k) mod 2k})⟩, i = 0, 1, …, k−1. Then G_i is
(i) a Bistar, if both S(u_i) and S(u_{(i+k) mod 2k}) are non-empty;
(ii) a K₂, if both S(u_i) and S(u_{(i+k) mod 2k}) are empty;
(iii) a Broom graph, if exactly one of S(u_i) and S(u_{(i+k) mod 2k}) is non-empty.

Claim: EG(G) = ∪_{i=0}^{k−1} G_i.

It is obvious that ∪_{i=0}^{k−1} G_i ⊆ EG(G). …(1)

On the other hand, let u ∈ EG(G); then u ∈ V(G), so u ∈ P(G) or u ∈ V(G) − P(G). Hence u = u_i or u ∈ S(u_i) for some u_i ∈ P(G). By the definition of G_i we have u ∈ G_i, and therefore u ∈ ∪_{i=0}^{k−1} G_i. Since u is arbitrary, EG(G) ⊆ ∪_{i=0}^{k−1} G_i. …(2)

The claim follows from (1) and (2). Thus EG(G) = G₀ ∪ G₁ ∪ G₂ ∪ … ∪ G_{k−1}, where each G_i is a Bistar, a K₂ or a Broom graph and |P(G)| = 2k.

Theorem 2: Let G = (V, E) be a unique eccentric node graph of diameter two. Then EG(G) is a union of K₂'s.

Proof: Let G = (V, E) be a unique eccentric node graph of diameter two. Then G is an (n−2)-regular graph, and hence a self-centered unique eccentric node graph. Thus, by Result 1, EG(G) is a union of K₂'s.

Theorem 3: Let G = (V, E) be a unique eccentric node graph of diameter three. Then EG(G) is a Bistar B_{m,n}.

Proof: Let G = (V, E) be a unique eccentric node graph of diameter three. Then e(u) = 2 or e(u) = 3 for all u ∈ V(G), so every node of G is either a central node or a peripheral node. Since the eccentric node of a central node is a peripheral node, P(G) = EC(G). Also, |P(G)| = 2. Let P(G) = {u₁, u₂} and S(u_i) = {u ∈ V(G) − P(G) : E(u) = {u_i}}, i = 1, 2. Now the eccentric graph EG(G) has the same node set as G and edge set E(EG(G)) = {(u, v) : either u, v ∈ P(G), or u ∈ S(v) and v ∈ P(G)}. It follows that EG(G) is a Bistar B_{m,n}. Thus the eccentric graph of a unique eccentric node graph of diameter three is a Bistar B_{m,n}.

Theorem 4: Let G = H_{k,n}, where at least one of k and n is even. If r(G) = 2, then G is an (n−k−1)-eccentric node graph.

Proof: Let G = H_{k,n}, where at least one of k and n is even, and suppose r(G) = 2. Since G is a self-centered graph, for every u ∈ V(G) all non-adjacent nodes of u are eccentric nodes of u in G. Also, since G is k-regular, |E(u)| = n − k − 1 for all u ∈ V(G). Therefore G is an (n−k−1)-eccentric node graph.
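Theorem 3 can be checked on the smallest example: the path P₄ is a unique eccentric node graph of diameter three, and its eccentric graph is the Bistar B₁,₁. The sketch below (our own illustration) builds EG(P₄) by brute force.

```python
from collections import deque

def distances(adj, s):
    # BFS distances from node s.
    d = {s: 0}
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                Q.append(v)
    return d

# Path P4 on nodes 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = {u: max(distances(adj, u).values()) for u in adj}
E = {u: {v for v, dv in distances(adj, u).items() if dv == ecc[u]} for u in adj}

# Diameter-3 unique eccentric node graph.
assert max(ecc.values()) == 3 and all(len(E[u]) == 1 for u in adj)

# Eccentric graph: u ~ v iff u is an eccentric node of v or vice versa.
eg_edges = {frozenset((u, v)) for u in adj for v in E[u]}
assert eg_edges == {frozenset({0, 3}), frozenset({1, 3}), frozenset({0, 2})}
# Centers 0 and 3 are adjacent with one pendant node each: the Bistar B_{1,1}.
```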


Theorem 5: Let G = H_{k,n}, where both k and n are even, such that r(G) > 2, and let m be the remainder when n − 1 is divided by k. Then G is an m-eccentric node graph.

Proof: Let G = H_{k,n}, where both k and n are even, such that r(G) > 2, and let m be the remainder when n − 1 is divided by k. Since G is k-regular, we have |N_i(u)| = k for 1 ≤ i < d and |N_d(u)| = m for all u ∈ V(G). This implies that each node of G has exactly m eccentric nodes in G. Therefore G is an m-eccentric node graph.

Corollary 6: The Harary graph H_{k,n}, where k and n are even, is a unique eccentric node graph when n − 2 is divisible by k.

Corollary 7: The eccentric graph of the Harary graph H_{k,n}, where k and n are even, is a union of K₂'s when n − 2 is divisible by k.

Theorem 8: Let G = H_{k,n}, where k is even and n is odd, such that r(G) > 2, and let m be the remainder when n − 1 is divided by k. Then G is an m-eccentric node graph if m ≠ 0, and a k-eccentric node graph otherwise.

Proof: Let G = H_{k,n}, where k is even and n is odd, such that r(G) > 2, and let m be the remainder when n − 1 is divided by k.
Case (i): n − 1 is not divisible by k. Since G is k-regular, we have |N_i(u)| = k for 1 ≤ i < d and |N_d(u)| = m for all u ∈ V(G), so each node of G has exactly m eccentric nodes; therefore G is an m-eccentric node graph.
Case (ii): n − 1 is divisible by k. Then |N_i(u)| = k for 1 ≤ i ≤ d for all u ∈ V(G), so each node of G has exactly k eccentric nodes; therefore G is a k-eccentric node graph.
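Theorem 5 and Corollary 6 are easy to verify computationally for small even-k Harary graphs. The sketch below (our own illustration) builds H_{k,n} for even k and counts the eccentric nodes of every vertex: H_{4,12} has m = (12 − 1) mod 4 = 3 eccentric nodes per vertex, while H_{4,14}, where n − 2 is divisible by k, is a unique eccentric node graph.

```python
from collections import deque

def harary_even_k(k, n):
    """H_{k,n} for even k = 2l: node i is adjacent to i±1, ..., i±l (mod n)."""
    l = k // 2
    return {i: {(i + s) % n for s in range(-l, l + 1) if s != 0} for i in range(n)}

def ecc_counts(adj):
    # For each node, count its eccentric nodes |E(u)| via BFS.
    counts = []
    for s in adj:
        d = {s: 0}
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    Q.append(v)
        e = max(d.values())
        counts.append(sum(1 for v in d if d[v] == e))
    return counts

# Theorem 5: H_{4,12} has m = (12-1) mod 4 = 3 eccentric nodes per node.
assert set(ecc_counts(harary_even_k(4, 12))) == {3}
# Corollary 6: H_{4,14} (n-2 divisible by k) is a unique eccentric node graph.
assert set(ecc_counts(harary_even_k(4, 14))) == {1}
```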

Corollary 9: The Harary graph H_{k,n}, where k is even and n is odd, is a 2-eccentric node graph when n − 3 is divisible by k.

Corollary 10: The eccentric graph of the Harary graph H_{k,n}, where k is even and n is odd, is an odd cycle when n − 3 is divisible by k.

Theorem 11: Let G = H_{k,n}, where k is odd and n is even, such that r(G) > 2, and let m be the remainder when n − k − 1 is divided by 2(k − 1). Then G is an m-eccentric node graph if m ≠ 0, and a 2(k − 1)-eccentric node graph otherwise.

Proof: Let G = H_{k,n}, where k is odd and n is even, such that r(G) > 2, and let m be the remainder when n − k − 1 is divided by 2(k − 1).
Case (i): n − k − 1 is not divisible by 2(k − 1). Since G is k-regular, we have |N₁(u)| = k, |N_i(u)| = 2(k − 1) for 1 < i < d, and |N_d(u)| = m for all u ∈ V(G), so each node of G has exactly m eccentric nodes; therefore G is an m-eccentric node graph.
Case (ii): n − k − 1 is divisible by 2(k − 1). Then |N₁(u)| = k and |N_i(u)| = 2(k − 1) for 1 < i ≤ d, for all u ∈ V(G), so each node of G has exactly 2(k − 1) eccentric nodes; therefore G is a 2(k − 1)-eccentric node graph.

Corollary 12: The Harary graph H_{k,n}, where k is odd and n is even, is a unique eccentric node graph when n − k − 2 is divisible by 2(k − 1).

Corollary 13: The Harary graph H_{k,n}, where k is odd and n is even, is a 2-eccentric node graph when n − k − 3 is divisible by 2(k − 1).

Corollary 14: The eccentric graph of the Harary graph H_{k,n}, where k is odd and n is even, is a union of K₂'s when n − k − 2 is divisible by 2(k − 1).


Corollary 15: The eccentric graph of the Harary graph H_{k,n}, where k is odd and n is even, is an odd cycle when n − k − 3 is divisible by 2(k − 1).

REFERENCES
[1] F. Buckley and F. Harary, Distance in Graphs, Addison-Wesley, Redwood City, CA, 1990.
[2] J. Akiyama, K. Ando and D. Avis, Eccentric graphs, Discrete Math., 56 (1985), 1-6.
[3] K.R. Parthasarathy and R. Nandakumar, Unique eccentric point graphs, Discrete Math., 46 (1983), 69-74.
[4] R. Arundadhi and R. Sattanathan, Acyclic and star coloring of Bistar graph families, International Journal of Scientific and Research Publications, Vol. 2, Issue 3, 2012, 1-4.
[5] Douglas B. West, Introduction to Graph Theory, PHI Learning Pvt. Ltd., 2009.
[6] M.J. Morgan, S. Mukwembi and H.C. Swart, On the eccentric connectivity index of a graph, Discrete Mathematics, 311 (2011), 1229-1234.



k-graceful and (k,d)-graceful labeling in Extended Duplicate Graph of Path

P.P. Ulaganathan   K. Thirusangu   B. Selvam

Department of Mathematics, S.I.V.E.T. College, Gowrivakkam, Chennai, India

ABSTRACT: In this paper, we prove that the Extended Duplicate Graph of a path is k-graceful and (k,d)-graceful.

Keywords: Graph labeling, k-graceful labeling, (k,d)-graceful labeling, Extended Duplicate Graphs

1. INTRODUCTION
A graph labeling is an assignment of integers to the vertices or edges or both, subject to certain conditions. This concept was introduced by Rosa in 1967 [6]. Labeled graphs serve as useful models for a broad range of applications such as coding theory, circuit design, astronomy, radar, X-ray crystallography, communication network addressing and database management. Hence, in the intervening years, various labelings of graphs — graceful, harmonious, magic, anti-magic, bi-magic, prime, cordial, binary, mean, arithmetic, etc. — have been investigated in the literature [2].

Golomb introduced graceful labeling [3]. In 1989 Gallian showed that all Mobius ladders are graceful. Lee introduced Skolem-graceful labeling [5]. Youssef proved that if G is Skolem-graceful then G ∪ K_n is graceful [9]. Gnanajothi introduced odd-graceful labeling and proved that books are odd-graceful [4]. In 1982, Slater introduced k-graceful labeling [7]; he showed that C_n is k-graceful if and only if either n ≡ 0 or 1 (mod 4) with k even and k ≤ (n−1)/2, or n ≡ 3 (mod 4) with k odd and k ≤ (n²−1)/2. Acharya and Hegde generalized k-graceful labelings to (k,d)-graceful labelings [1].

In [8], it is proved that Extended Duplicate Graphs of paths are graceful and Skolem-graceful. Though k-graceful and (k,d)-graceful labelings have been studied for many kinds of graphs, they have not been investigated for duplicate graphs. In this paper we prove that Extended Duplicate Graphs of paths are k-graceful and (k,d)-graceful.

2. PRELIMINARIES
In this section, we give the basic notions relevant to this paper. Let G = G(V, E) be a finite, simple and undirected graph with p vertices and q edges. By a labeling we mean a one-to-one mapping that carries a set of graph elements onto a set of numbers, called labels (usually a set of integers).

Definition 2.1: A graph G with q edges is k-graceful if there is a labeling f : V → {0, 1, 2, …, k+(q−1)} such that {|f(x) − f(y)| : (x, y) ∈ E} = {k, k+1, k+2, …, k+(q−1)}.

Definition 2.2: A graph G with q edges is (k,d)-graceful if there is a labeling f : V → {0, 1, 2, …, k+(q−1)d} such that {|f(x) − f(y)| : (x, y) ∈ E} = {k, k+d, k+2d, …, k+(q−1)d}.

Definition 2.3: Let G(V, E) be a simple graph. A duplicate graph of G is DG = (V₁, E₁), where the vertex set V₁ = V ∪ V′ with V ∩ V′ = ∅ and f : V → V′ is bijective (for v ∈ V we write f(v) = v′ for convenience), and the edge set E₁ of DG is defined as follows: the edge ab is in E if and only if both ab′ and a′b are edges in E₁. Clearly the duplicate graph of a path graph is disconnected. We give the following definition from [8].


Definition 2.4: Let DG = (V1, E1) be a duplicate graph of the path graph G (V, E). We add an edge between any one vertex from V to any other vertex in V, except the terminal vertices of V and V. For convenience, let us take v2  V and v2  V and thus the edge (v2, v2 ) is formed. We call this new graph as the Extended Duplicate Graph of the path Pm and it is denoted as EDG (Pm).

Consider the paths of the type P2n; n  N. To get k-graceful labeling, define a map f : V  {0,1, 2, …, k+(q-1)} as given in above algorithm. Therefore the vertices v1, v3, v5, …, vm+1, vm, vm-2, …, v2 receive 0, 1, 2, …, (q-1)/2 as labels and v1, v3 , v5 ...vm 1 , vm , vm 2 ,..., v2 receive k+(q-1)/2, k+(q+1)/2, k+(q+3)/2, …, k+(q-1) as labels. Thus all the (2m+2) vertices are labeled.

3. MAIN RESULTS 3.1 k-graceful labeling for EDG (Pm) In this section, we present an algorithm and prove the existence of k-graceful labeling for EDG (Pm). Algorithm: procedure (k-graceful labeling for EDG (Pm)) V  {v1, v2, …, vm, vm+1, v1, v2 ,..., vm , vm 1 } E  {e1, e2, …, em, em+1, e1, e2 ,..., em } if (m = 2n) for i=0 to m/2 do v2i+1  i; v2 i 1  (m+k)+i; end for for i= 0 to (m-2)/2 do v2i+2  m-i; v2 i  2  (2m+k)-i; end for else for i = 0 to (m-1)/2 v2i+1  i; v2 i 1  (m+k)+i; v2i+2  m-i; v2 i  2  (2m+k)-i; end for end if end procedure Theorem: The Extended Duplicate Graph of path Pm, m > 2 admits k-graceful labeling. Proof: Let EDG (Pm) be an Extended duplicate graph of path Pm. Clearly EDG (Pm) has p vertices and q edges where p = 2m+2, q = 2m+1.

Define induced function f * : E  N such that | f (x) – f (y) |  {k, k+1, k+2, …, k+(q-1)}, for all x, y  E. Now the edges are labeled as follows: The edges are in the following five forms, namely, (v2i+1, v2 i  2 ), ( v2 i 1 , v2i+2) where 0 < i < (m-2)/2; (v2i+2, v2 j 1 ), ( v2 i  2 , v2j+1) where j=i+1,0 < i < (m-2)/2 and (v2, v2 ). For any i, where 0 < i < (m-2)/2, f * (v2i+1, v2 i  2 ) = | f (v2i+1) – f ( v2 i  2 ) | = | i – (2m+k – i) | = | {k+(q-1) – 2i} | These m/2 edges are labeled with numbers k+(q-1), k+(q-3), …, k+{(q+3)/2}. For any i, where 0 < i < (m-2)/2, f * ( v2 i 1 , v2i+2) =| {(m+k)+i} – (m-i) | = k + 2i These m/2 edges are labeled with numbers k, k+2, k+4, …, k+{(q-5)/2}. Let j = i+1, for any i, where 0 < i < (m2)/2, f * (v2i+2, v2 j 1 ) =| f (v2i+2) – f ( v2 j 1 ) | = | (m-i) – (m+k+j) | = k + 1 + 2i These m/2 edges are labeled with numbers k+1, k+3, k+5, …, k+{(q-3)/2}. Let j = i+1, for any i, where 0 < i < (m2)/2,

Denote the set of vertices as V = {v1, v2, …, vm, vm+1, v1, v2 ...vm , vm 1 }

f * ( v2 i  2 ,v2j+1) = | f ( v2 i  2 ) – f (v2j+1) | = | (2m+k-i) – j | = | k+ (q–2)-2i |

Case (1): In this case, we prove this theorem for even paths. Let Pm be a path, where m=2n; n  N.

These m/2 edges are labeled with numbers k+(q-2), k+(q-4), …, k+{(q+1)/2}. For the edge (v2, v2 ),

249

P.P. Ulaganathan K. Thirusangu B. Selvam

f * (v2, v2 ) = | f (v2) – f ( v2 ) | = | m – (2m + k) | = k+m = k + (q-1)/2 Thus the edges ( v1 , v2), (v2, v3 ), ( v3 , v4) ,…, (vm, vm 1 ), (v2, v2 ) receive k, k+1, k+2, k+3, …, k+{(q-3)/2}, k+{(q-1)/2} as labels and (vm+1, vm ), ( vm , vm-1) ,…, ( v2 , v1) receive k+{(q+1)/2}, k+{(q+3)/2}, …, k+(q-1) as labels. That is f * (E) = {k, k+1, k+2, k+3, …, k+(q-1)} which are all distinct and hence EDG (Pm) is k-graceful. Case (2): In this case, we prove this theorem for odd paths. Let Pm be a path, where m = 2n+1; n  N. Consider the paths of the type P2n+1; n  N. To get k-graceful labeling, define a map f : V  {0, 1, 2, …, k+(q-1)} as given in above algorithm. Therefore the vertices v1, v3, v5, … vm, vm+1, vm-1, vm-3, …, v2 receive 0, 1, 2, … (q-1)/2 as labels and v1, v3 ,...vm , vm 1 , vm 1 , vm 3 ,..., v2 receive k+(q-1)/2, k+(q+1)/2, k+(q+3)/2, …, k+(q-1) as labels. Thus all the (2m+2) vertices are labeled. Define induced function f * : E  N such that | f (x) – f (y) |  {k, k+1, k+2, …, k+(q-1)}, for all x, y  E. Now the edges are labeled as follows: The edges are in the following five forms, namely, (v2i+1, v2 i  2 ), ( v2 i 1 , v2i+2) where 0 < i < (m-1)/2; (v2i+2, v2 j 1 ), ( v2 i  2 , v2j+1) where j=i+1,0 < i < (m-3)/2 and (v2, v2 ). For any i, where 0 < i < (m-1)/2, f * (v2i+1, v2 i  2 ) = | f (v2i+1) – f ( v2 i  2 ) | = | i – (2m+k – i) | = | {k+(q-1)-2i} | These m/2 edges are labeled with numbers k+(q-1), k+(q-3), …, k+{(q+1)/2}. For any i, where 0 < i < (m-1)/2, f * ( v2 i 1 , v2i+2) =| {(m+k)+i} – (m-i) | = k + 2i

These m/2 edges are labeled with numbers k, k+2, k+4, …, k+{(q-3)/2}. Let j = i+1, for any i, where 0 < i < (m-3)/2, f * (v2i+2, v2 j 1 ) =| f (v2i+2) – f ( v2 j 1 ) | = | (m-i) – (m+k+j) | = k+1+2i These m/2 edges are labeled with numbers k+1, k+3, k+5, …, k+{(q-5)/2}. Let j = i+1, for any i, where 0 < i < (m3)/2, f * ( v2 i  2 ,v2j+1) =| f ( v2 i  2 ) – f (v2j+1) | = | (2m+k-i) – j | = | k+(q-2) – 2i | These m/2 edges are labeled with numbers k+(q2), k+(q-4), …, k + {(q+3)/2}. For the edge (v2, v2 ), f * (v2, v2 ) = | f (v2) – f ( v2 ) | = | m – (2m + k) | = k+m = | k + (q-1)/2 | Thus the edges ( v1 , v2), (v2, v3 ), ( v3 , v4) ,…, ( vm , vm+1), (v2, v2 ) receive k, k+1, k+2, k+3, …, k+{(q-3)/2}, k+{(q-1)/2} as labels and ( vm 1 , vm), (vm, vm 1 ) ,…, ( v2 , v1) receive k+{(q+1)/2}, k+{(q+3)/2}, …, k+(q-1) as labels. That is f * (E) = {k, k+1, k+2, k+3, …, k+(q-1)} which are all distinct and hence EDG (Pm) is k-graceful. Illustration : k-Graceful labeling for the graphs EDG (P6) and EDG (P7) are shown in the diagrams given in Appendix I. 3.2 (k,d)-graceful labeling for EDG (Pm) In this section, we present an algorithm and prove the existence of (k,d)-graceful labeling for EDG (Pm). Algorithm: procedure ((k,d)-graceful labeling for EDG (Pm)) V  {v1, v2, …, vm, vm+1, v1, v2 ,..., vm , vm 1 } E  {e1, e2, …, em, em+1, e1, e2 ,..., em } if (m = 2n) for i=0 to m/2 do v2i+1  di; v2 i 1  (dm+k)+di; end for for i= 0 to (m-2)/2 do

250

k-graceful and (k,d)-graceful labeling in Extended Duplicate Graph of Path

v2i+2 ← d(m−i); v′2i+2 ← (2dm+k)−di;
end for
else
for i = 0 to (m−1)/2 do
v2i+1 ← di; v′2i+1 ← (dm+k)+di;
v2i+2 ← d(m−i); v′2i+2 ← (2dm+k)−di;
end for
end if
end procedure

Theorem: The Extended Duplicate Graph of a path Pm, m > 2, admits a (k,d)-graceful labeling.
Proof: Let EDG(Pm) be the extended duplicate graph of the path Pm. Clearly EDG(Pm) has p vertices and q edges, where p = 2m+2 and q = 2m+1. Denote the set of vertices as V = {v1, v2, …, vm, vm+1, v′1, v′2, …, v′m, v′m+1}.

Case (1): In this case, we prove the theorem for even paths. Let Pm be a path, where m = 2n; n ∈ N. Consider the paths of the type P2n; n ∈ N. To get a (k,d)-graceful labeling, define a map f : V → {0, 1, 2, …, k+(q−1)d} as given in the above algorithm.

Therefore the vertices v1, v3, v5, …, vm+1, vm, vm−2, …, v2 receive 0, d, 2d, 3d, …, {(q−1)/2}d as labels and v′1, v′3, v′5, …, v′m+1, v′m, v′m−2, …, v′2 receive k+{(q−1)/2}d, k+{(q+1)/2}d, k+{(q+3)/2}d, …, k+(q−1)d as labels. Thus all the (2m+2) vertices are labeled. Define the induced function f* : E → N such that | f(x) − f(y) | ∈ {k, k+d, k+2d, …, k+(q−1)d} for every edge (x, y) ∈ E.

Now the edges are labeled as follows. The edges are of the following five forms, namely, (v2i+1, v′2i+2) and (v′2i+1, v2i+2), where 0 ≤ i ≤ (m−2)/2; (v2i+2, v′2j+1) and (v′2i+2, v2j+1), where j = i+1 and 0 ≤ i ≤ (m−2)/2; and (v2, v′2).

For any i, where 0 ≤ i ≤ (m−2)/2,
f*(v2i+1, v′2i+2) = | f(v2i+1) − f(v′2i+2) | = | di − (2dm+k−di) | = k+(q−1−2i)d.
These m/2 edges are labeled with the numbers k+(q−1)d, k+(q−3)d, k+(q−5)d, …, k+{(q+3)/2}d.

For any i, where 0 ≤ i ≤ (m−2)/2,
f*(v′2i+1, v2i+2) = | {(dm+k)+di} − (dm−di) | = k+2di.
These m/2 edges are labeled with the numbers k, k+2d, k+4d, …, k+{(q−5)/2}d.

Let j = i+1; for any i, where 0 ≤ i ≤ (m−2)/2,
f*(v2i+2, v′2j+1) = | f(v2i+2) − f(v′2j+1) | = | d(m−i) − {(dm+k)+dj} | = (k+d)+2di.
These m/2 edges are labeled with the numbers k+d, k+3d, k+5d, …, k+{(q−3)/2}d.

Let j = i+1; for any i, where 0 ≤ i ≤ (m−2)/2,
f*(v′2i+2, v2j+1) = | f(v′2i+2) − f(v2j+1) | = | (2dm+k−di) − dj | = k+(q−2−2i)d.
These m/2 edges are labeled with the numbers k+(q−2)d, k+(q−4)d, k+(q−6)d, …, k+{(q+1)/2}d.

For the edge (v2, v′2),
f*(v2, v′2) = | f(v2) − f(v′2) | = | dm − (2dm+k) | = k+{(q−1)/2}d.

Thus the edges (v′1, v2), (v2, v′3), (v′3, v4), …, (vm, v′m+1), (v2, v′2) receive k, k+d, k+2d, k+3d, …, k+{(q−3)/2}d, k+{(q−1)/2}d as labels and (vm+1, v′m), (v′m, vm−1), …, (v′2, v1) receive k+{(q+1)/2}d, k+{(q+3)/2}d, …, k+(q−1)d as labels. That is, f*(E) = {k, k+d, k+2d, …, k+(q−1)d}, which are all distinct, and hence EDG(Pm) is (k,d)-graceful.

Case (2): In this case, we prove the theorem for odd paths. Let Pm be a path, where m = 2n+1; n ∈ N. Consider the paths of the type P2n+1; n ∈ N. To get a (k,d)-graceful labeling, define a map f : V → {0, 1, 2, …, k+(q−1)d} as given in the above algorithm.



Therefore the vertices v1, v3, v5, …, vm, vm+1, vm−1, vm−3, …, v2 receive 0, d, 2d, 3d, …, {(q−1)/2}d as labels and v′1, v′3, …, v′m, v′m+1, v′m−1, v′m−3, …, v′2 receive k+{(q−1)/2}d, k+{(q+1)/2}d, k+{(q+3)/2}d, …, k+(q−1)d as labels. Thus all the (2m+2) vertices are labeled. Define the induced function f* : E → N such that | f(x) − f(y) | ∈ {k, k+d, k+2d, …, k+(q−1)d} for every edge (x, y) ∈ E.

Now the edges are labeled as follows. The edges are of the following five forms, namely, (v2i+1, v′2i+2) and (v′2i+1, v2i+2), where 0 ≤ i ≤ (m−1)/2; (v2i+2, v′2j+1) and (v′2i+2, v2j+1), where j = i+1 and 0 ≤ i ≤ (m−3)/2; and (v2, v′2).

For any i, where 0 ≤ i ≤ (m−1)/2,
f*(v2i+1, v′2i+2) = | f(v2i+1) − f(v′2i+2) | = | di − (2dm+k−di) | = k+(q−1−2i)d.
These (m+1)/2 edges are labeled with the numbers k+(q−1)d, k+(q−3)d, k+(q−5)d, …, k+{(q+1)/2}d.

For any i, where 0 ≤ i ≤ (m−1)/2,
f*(v′2i+1, v2i+2) = | {(dm+k)+di} − (dm−di) | = k+2di.
These (m+1)/2 edges are labeled with the numbers k, k+2d, k+4d, …, k+{(q−3)/2}d.

Let j = i+1; for any i, where 0 ≤ i ≤ (m−3)/2,
f*(v2i+2, v′2j+1) = | f(v2i+2) − f(v′2j+1) | = | d(m−i) − {(dm+k)+dj} | = k+(1+2i)d.
These (m−1)/2 edges are labeled with the numbers k+d, k+3d, k+5d, …, k+{(q−5)/2}d.

Let j = i+1; for any i, where 0 ≤ i ≤ (m−3)/2,
f*(v′2i+2, v2j+1) = | f(v′2i+2) − f(v2j+1) | = | (2dm+k−di) − dj | = k+(q−2−2i)d.
These (m−1)/2 edges are labeled with the numbers k+(q−2)d, k+(q−4)d, k+(q−6)d, …, k+{(q+3)/2}d.

For the edge (v2, v′2),
f*(v2, v′2) = | f(v2) − f(v′2) | = | dm − (2dm+k) | = k+{(q−1)/2}d.

Thus the edges (v′1, v2), (v2, v′3), (v′3, v4), …, (v′m, vm+1), (v2, v′2) receive k, k+d, k+2d, k+3d, …, k+{(q−3)/2}d, k+{(q−1)/2}d as labels and (v′m+1, vm), (vm, v′m−1), …, (v′2, v1) receive k+{(q+1)/2}d, k+{(q+3)/2}d, …, k+(q−1)d as labels. That is, f*(E) = {k, k+d, k+2d, …, k+(q−1)d}, which are all distinct, and hence EDG(Pm) is (k,d)-graceful.

Illustration: (k,d)-graceful labelings for the graphs EDG(P6) and EDG(P7) are shown in the diagrams given in Appendix II.

4. CONCLUSION
In this paper, we proposed algorithms and implemented them to obtain a k-graceful labeling and a (k,d)-graceful labeling for the extended duplicate graph of the path Pm. In future it would be interesting to extend different types of labelings to the extended duplicate graph of the path Pm.

REFERENCES:
[1] Acharya, B.D. and Hegde, S.M., Arithmetic graphs, J. Graph Theory, 14, 275-299, 1999.
[2] Gallian, J.A., A Dynamic Survey of Graph Labeling, The Electronic Journal of Combinatorics, 17, #DS6, 2010.
[3] Golomb, S.W., How to number a graph, in Graph Theory and Computing, R.C. Read, ed., Academic Press, New York, 23-37, 1972.
[4] Gnanajothi, R.B., Topics in Graph Theory, Ph.D. Thesis, Madurai Kamaraj University, 1991.
[5] Lee, S.M. and Shee, S.C., On Skolem-graceful graphs, Discrete Math., 93, 195-200, 1991.
[6] Rosa, A., On certain valuations of the vertices of a graph, Theory of Graphs (Internat. Symposium, Rome, July 1966), Gordon and Breach, N.Y. and Dunod, Paris, 349-355, 1967.
[7] Slater, P.J., On k-graceful graphs, Proc. of the 13th S.E. Conf. on Combinatorics, Graph Theory and Computing, 53-57, 1982.
[8] Thirusangu, K., Ulaganathan, P.P. and Selvam, B., Graceful and Skolem graceful labeling in extended duplicate graph of path, International Journal of Science and Technology, Vol. 4, No. 2, pp. 107-111, 2011.
[9] Youssef, M.Z., New families of graceful graphs, Ars Combin., 67, 303-311, 2003.
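As a computational cross-check of the two theorems above, the following sketch (our illustration, not part of the paper) implements the labeling algorithm and confirms, for a range of m, k and d, that the induced edge labels are pairwise distinct and form exactly the set {k, k+d, …, k+(q−1)d}; taking d = 1 recovers the k-graceful labeling of the previous section.

```python
# Vertex v_i is encoded as ('v', i) and its duplicate v'_i as ('p', i).

def edg_labels(m, k, d):
    """Vertex labeling of EDG(P_m) from the (k,d)-algorithm above."""
    f = {}
    if m % 2 == 0:                              # even case of the algorithm
        for i in range(m // 2 + 1):
            f[('v', 2*i + 1)] = d*i
            f[('p', 2*i + 1)] = d*m + k + d*i
        for i in range((m - 2) // 2 + 1):
            f[('v', 2*i + 2)] = d*(m - i)
            f[('p', 2*i + 2)] = 2*d*m + k - d*i
    else:                                       # odd case
        for i in range((m - 1) // 2 + 1):
            f[('v', 2*i + 1)] = d*i
            f[('p', 2*i + 1)] = d*m + k + d*i
            f[('v', 2*i + 2)] = d*(m - i)
            f[('p', 2*i + 2)] = 2*d*m + k - d*i
    return f

def edg_edges(m):
    # EDG(P_m): edges (v_i, v'_{i+1}) and (v'_i, v_{i+1}) for 1 <= i <= m,
    # plus the extending edge (v_2, v'_2); q = 2m+1 edges in all.
    E = [(('v', i), ('p', i + 1)) for i in range(1, m + 1)]
    E += [(('p', i), ('v', i + 1)) for i in range(1, m + 1)]
    E.append((('v', 2), ('p', 2)))
    return E

for m in range(3, 11):
    for k in (1, 4, 5):
        for d in (1, 2, 3):
            q = 2*m + 1
            f = edg_labels(m, k, d)
            labels = sorted(abs(f[u] - f[v]) for u, v in edg_edges(m))
            assert labels == [k + d*j for j in range(q)], (m, k, d)
print("labelings verified")
```

For m = 6, k = 4, d = 1 the computed labeling matches Fig. 1 of Appendix I.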

Appendix I

[Fig. 1: k-graceful labeling of EDG(P6) with k = 4.]
[Fig. 2: k-graceful labeling of EDG(P7) with k = 4.]
[Fig. 3: k-graceful labeling of EDG(P6) with k = 5.]
[Fig. 4: k-graceful labeling of EDG(P7) with k = 5.]


Appendix II

[Fig. 5: (k,d)-graceful labeling of EDG(P6) with k = 2, d = 2.]
[Fig. 6: (k,d)-graceful labeling of EDG(P7) with k = 2, d = 2.]
[Fig. 7: (k,d)-graceful labeling of EDG(P6) with k = 2, d = 3.]
[Fig. 8: (k,d)-graceful labeling of EDG(P7) with k = 2, d = 3.]


Proceedings of National Conference on Mathematical & Computational Sciences- 6th&7th July, 2012 Adikavi Nannaya University, Rajahmundry [AP], India

Dynamics in Topological groups

V.V.S. Ramachandram¹  B. Sanakara Rao²

¹ Department of Mathematics, B.V.C College of Engineering, Rajahmundry, Andhra Pradesh, India
² Department of Mathematics, Adikavi Nannayya University, Rajahmundry, India

ABSTRACT In this paper we define a dynamical system (G, f), where G is a topological group and f is a continuous homomorphism from G to itself. We establish some properties of the set of fixed points and the set of periodic points. We extend the notion of conjugacy to systems of the above type, and discuss various properties of such systems that are preserved by this new conjugacy.

Mathematics Subject Classification: 55F08, 58F11, 54H20, 28D05

Key words: Discrete dynamical system, Homomorphism, Periodic point, Fixed point, Trajectory, Orbit, Invariant set, Strongly invariant set, Topological conjugacy, ω-limit set.

1. INTRODUCTION
Topological dynamics is a very active field in pure and applied mathematics that involves tools and techniques from many areas, such as analysis, geometry and number theory, and has applications in many fields, such as physics and astronomy. A discrete dynamical system can be defined by taking a nonempty set X with some structure and a function f : X → X. In the present paper, we establish some basic properties of the elements of a discrete dynamical system defined on a topological group G.

Basic Definitions:
Discrete dynamical system: Let G be a nonempty set having some structure (algebraic, topological or measure theoretic) and let f : G → G be a structure-preserving function. Then the ordered pair (G, f) is a discrete dynamical system. In the present paper, whenever we say a dynamical system we mean an ordered pair (G, f), where G is a topological group and f : G → G is a continuous homomorphism.

Trajectory: In the dynamical system (G, f), the trajectory of an element a ∈ G is given by the sequence a, f(a), f²(a), f³(a), …. The range of this sequence is called the orbit of the element a ∈ G.

Fixed point: In the dynamical system (G, f), an element a ∈ G is called a fixed point if f(a) = a.

Periodic point: In the dynamical system (G, f), an element a ∈ G is a periodic point if fⁿ(a) = a for some positive integer n. The least such value of n is called the period of a.

Invariant set: In the dynamical system (G, f), a subset A of G is invariant if f(A) ⊆ A [3].

Strongly invariant set (S-invariant set): In the dynamical system (G, f), a subset A of G is a strongly invariant subset of G if f(A) = A.


V .V. S. Ramachandram B.Sanakara Rao

Topological Conjugacy: Let F and G be two topological groups and let f : F → F, g : G → G be continuous homomorphisms. Then f is topologically conjugate to g if there is a continuous, open isomorphism φ : F → G such that φ∘f = g∘φ.

Basic Results: The following basic results are useful in proving the theorems obtained in the present paper. We state them without proof.
Result 1: If f : G → G is a homomorphism from a topological group G to itself and e is the identity element in G, then f(e) = e.
Result 2: If f : G → G is a function, fⁿ(a) = a, and n, p are positive integers such that n divides p, then f^p(a) = a.

Main Results:
Theorem 1: In the dynamical system (G, f), if a is a fixed point and b is a periodic point in G, then a∘b is a periodic point in G.
Proof: Let k be the period of the point b. Then f^k(b) = b. Since every fixed point is a periodic point of period one, a is a periodic point. Now, f^k(a∘b) = f^k(a)∘f^k(b) = a∘b. Hence a∘b is a periodic point, and its period divides k.

Theorem 2: In the dynamical system (G, f), the set of all periodic points in G is a subgroup of G.
Proof: Let K be the set of all periodic points in the topological group G. Since e is a periodic point, K is a nonempty subset of G. Let a, b ∈ K and let their periods be m, n respectively. Put r = l.c.m. of {m, n}. By Result 2, we get
f^r(a∘b) = f^r(a)∘f^r(b) = a∘b and f^r(a⁻¹) = (f^r(a))⁻¹ = a⁻¹.
This proves that K is a subgroup of G.

Theorem 3: In the dynamical system (G, f), if A is a subset of G such that f(G) ⊆ A ⊆ G, then A is invariant with respect to f.
Proof: Suppose A ⊆ G. Then f(A) ⊆ f(G) ⊆ A, so f(A) ⊆ A. Hence A is an invariant set in G.

Theorem 4: In the dynamical system (G, f), the set of all fixed points in G is S-invariant.
Proof: Let A be the set of all fixed points in G. If f(a) ∈ f(A) with a ∈ A, then f(a) = a ∈ A; hence f(A) ⊆ A. Also, a ∈ A implies a = f(a) ∈ f(A); hence A ⊆ f(A). Therefore f(A) = A, proving the theorem.

Theorem 5: In the dynamical system (G, f), the set of all fixed points in G is a closed subgroup of G.
Proof: Let H be the set of all fixed points in G and let e be the identity element in G. Since e is a fixed point, H ≠ ∅. Suppose a is a limit point of H. There is a sequence {aₙ} of elements of H such that lim_{n→∞} aₙ = a. Since f is a continuous function, f(lim_{n→∞} aₙ) = f(a), i.e., lim_{n→∞} f(aₙ) = f(a); and since each aₙ is fixed, lim_{n→∞} aₙ = f(a). By the uniqueness of the limit of a sequence we get f(a) = a. This shows that a is a fixed point in G and hence H is a closed subset of G. Let a, b ∈ H, so that f(a) = a and f(b) = b. Since f is a homomorphism, f(a∘b) = f(a)∘f(b) = a∘b and f(a⁻¹) = (f(a))⁻¹ = a⁻¹. Hence H is a closed subgroup of G.

The following is a well-known theorem from [2]. We state the theorem without proof.
Theorem 6: Let F and G be topological groups, f : F → F, g : G → G, and let φ : F → G be a topological conjugacy of f and g. Then,
(a) φ⁻¹ : G → F is a topological conjugacy;
(b) φ∘fⁿ = gⁿ∘φ for all natural numbers n;
(c) p is a periodic point of f if and only if φ(p) is a periodic point of g; further, the prime periods of p and φ(p) are identical;
(d) if p is a periodic point of f with stable set Wˢ(p), then the stable set of φ(p) is φ(Wˢ(p));
(e) the periodic points of f are dense in F iff the periodic points of g are dense in G;
(f) f is topologically transitive on F if and only if g is topologically transitive on G;
(g) f is chaotic on F if and only if g is chaotic on G.

In view of the above theorem we can prove the following theorem.
Theorem 7: Let F and G be topological groups, let f : F → F, g : G → G be two continuous homomorphisms, and let φ : F → G be a topological conjugacy of f and g. Then,
(1) a ∈ Orb(x, f) ⇒ φ(a) ∈ Orb(φ(x), g);
(2) a ∈ ω(x, f) ⇔ φ(a) ∈ ω(φ(x), g);
(3) x ∈ Ker f ⇒ φ(x) ∈ Ker g;
(4) Ker φ ⊆ Ker f; moreover, the equality holds if g is a one-one function.

Proof of (1): Let a ∈ Orb(x, f). Then a = fⁿ(x) for some n ≥ 0, so φ(a) = φ(fⁿ(x)) = gⁿ(φ(x)), and hence φ(a) ∈ Orb(φ(x), g).

Proof of (2): Let a ∈ ω(x, f). Then a is a limit point of Orb(x, f), so there exists a subsequence {f^{nₖ}(x)} such that lim_{nₖ→∞} f^{nₖ}(x) = a. By the continuity of φ, φ(lim_{nₖ→∞} f^{nₖ}(x)) = φ(a), i.e., lim_{nₖ→∞} φ(f^{nₖ}(x)) = φ(a), i.e., lim_{nₖ→∞} g^{nₖ}(φ(x)) = φ(a). Hence φ(a) ∈ ω(φ(x), g). Conversely, suppose φ(a) ∈ ω(φ(x), g). Then there exists a subsequence {g^{nₖ}(φ(x))} such that lim_{nₖ→∞} g^{nₖ}(φ(x)) = φ(a), i.e., lim_{nₖ→∞} φ(f^{nₖ}(x)) = φ(a). Since φ⁻¹ is continuous, lim_{nₖ→∞} f^{nₖ}(x) = a, and hence a ∈ ω(x, f).

Proof of (3): Let x ∈ Ker f, so f(x) = e. Then φ(f(x)) = e′, i.e., g(φ(x)) = e′, so φ(x) ∈ Ker g.

Proof of (4): Let x ∈ Ker φ, so φ(x) = e′. Then g(φ(x)) = e′, i.e., φ(f(x)) = e′ = φ(e). Since φ is one-one, f(x) = e, so x ∈ Ker f. This proves that Ker φ ⊆ Ker f. If the function g is one-one, then x ∈ Ker f ⇒ f(x) = e ⇒ φ(f(x)) = e′ ⇒ g(φ(x)) = e′ = g(e′); since g is one-one, φ(x) = e′, so x ∈ Ker φ. This shows that the equality holds if g is a one-one function.

Theorem 8: If φ : F → G is a topological conjugacy, then x ∈ Ker φ implies x is eventually fixed at the identity element.
Proof: Let x ∈ Ker φ, so φ(x) = e′. By (4) of Theorem 7, Ker φ ⊆ Ker f, so f(x) = e and hence fⁿ(x) = e for all n ≥ 1. Hence the element x is eventually fixed at the identity element.

REFERENCES
[1] R.L. Devaney, Chaotic Dynamical Systems (second edition), Addison-Wesley, 1989.
[2] M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press.
[3] M.J. Evans et al., Characterization of turbulent one-dimensional mappings via ω-limit sets, Transactions of the American Mathematical Society, Vol. 326(1), 1991.



Lattice Algebraic Properties of (Inverse) Images of Type-2 Fuzzy Subsets

Nistala V.E.S. Murthy¹  Lokavarapu Sujatha²

¹ Department of Computer Science and Systems Engineering, Andhra University, Visakhapatnam, Andhra Pradesh, India
² Department of Mathematics, Andhra University, Visakhapatnam, Andhra Pradesh, India

ABSTRACT The aim of this paper is to introduce the notions of the type-2 fuzzy image of a type-2 fuzzy subset and the type-2 fuzzy inverse image of a type-2 fuzzy subset under a crisp map, and to study some of the standard lattice algebraic properties of the same. Keywords: Type-2 fuzzy subset, type-2 fuzzy image of a type-2 fuzzy subset, type-2 fuzzy inverse image of a type-2 fuzzy subset. A.M.S. subject classification: 08A72, 03F55, 06C05, 16D25.

1 Introduction
The traditional view in science, especially in mathematics, is to avoid uncertainty at all levels at any cost. Thus "being uncertain" is regarded as "being unscientific". But unfortunately in real life most of the information that we have to deal with is uncertain. One of the paradigm shifts in science and mathematics in this century is to accept uncertainty as part of science, together with the desire to be able to deal with it, as there is very little left in the practical real world for scientific and mathematical processing without this acceptance. One of the earliest successful attempts in this direction is the development of the theories of Probability and Statistics. However, both of them have their own natural limitations. Another successful attempt in the same direction is the so-called Fuzzy Set Theory, introduced by Zadeh.

According to Zadeh, a fuzzy subset of a set X is any function f from the set X to the closed interval [0,1] of real numbers. An element x belonging to the set X belongs to the fuzzy subset f with the degree of membership fx, a real number between 0 and 1. These fuzzy subsets of a set X are also called type-1 fuzzy subsets of X. Observing that fuzzy subsets require a specific real number between (or including) 0 and 1 to be associated with each element of X, which is not always possible in several practical applications, Zadeh [1975] himself introduced the so-called interval valued fuzzy subsets and type-2 fuzzy subsets of a set X as means to handle even more inexact or uncertain information. Thus, a type-2 fuzzy subset of a set X is any function f from the set X to the complete lattice of all type-1 fuzzy subsets of the closed interval [0,1] of real numbers. An element x belonging to the set X belongs to the fuzzy subset f with the degree of membership fx, a type-1 fuzzy subset of the closed interval [0,1]. Mizumoto and Tanaka [1976, 1981] have studied the set-theoretic operations of type-2 sets and the properties of membership grades of such sets, and examined the operations of algebraic product and algebraic sum for them [24]. Nieminen [1977] has provided more details about the algebraic structure of type-2 sets.



Dubois and Prade [1978, 79, 80] discussed composition of type-2 relations as an extension of the type-1 sup-star composition, but this formula is only for the minimum t-norm. Karnik and Mendel [1998] have provided a general formula for the extended sup-star composition of type-2 relations. Looking at all these and other papers in print and on-line, one thing which becomes evident is that various (lattice) algebraic properties of type-2 fuzzy images and type-2 fuzzy inverse images of type-2 fuzzy subsets under a crisp map (which, incidentally, not only play a crucial role in the study of both type-2 fuzzy Algebra and type-2 fuzzy Topology but are also necessary for the individual, exclusive development of Type-2 Fuzzy Set Theory) are not yet studied.

Note that, according to Hamrawi [2011], page 36, the following version of the type-2 fuzzy image of a type-2 fuzzy subset was introduced by Mendel and John [2002]: Let X = X₁ × … × Xₙ be the Cartesian product of universes, and let Ã₁, …, Ãₙ be type-2 fuzzy subsets in each respective universe. Also let Y be another universe and B̃ a type-2 fuzzy subset of Y such that B̃ = f(Ã₁, …, Ãₙ), where f : X → Y is a monotone mapping. Then applying the Extension Principle to type-2 fuzzy subsets (the Type-2 Extension Principle) leads to the following:

B̃(y) = sup_{(x₁,…,xₙ) ∈ f⁻¹(y)} min(Ã₁(x₁), …, Ãₙ(xₙ)), where y = f(x₁, …, xₙ).

However, (1) it is not clear why f : X → Y needs to be monotone in the definition of the type-2 fuzzy image of a type-2 fuzzy subset under an arbitrary crisp map, and (2) it is not clear how f can be assumed to be monotone without any orderings on X and Y, which may or may not exist. Interestingly, the above expression remains valid without f : X → Y being monotone and is simply the type-2 fuzzy image of the type-2 Cartesian product of n type-2 fuzzy subsets of n universes.

Now, the aim of this paper is, (a) to introduce the notions of the type-2 fuzzy image of a type-2 fuzzy subset and the type-2 fuzzy inverse image of a type-2 fuzzy subset under a crisp map, and (b) to make an exclusive study of both type-2 fuzzy images and type-2 fuzzy inverse images of type-2 fuzzy subsets under a crisp map, in lines similar to those in various other set theories, like Crisp Set Theory, (L-)Zadeh's (Interval Valued) Fuzzy Set Theory, Goguen's L-Fuzzy Set Theory, Atanassov's (L-)Intuitionistic Fuzzy Set Theory, etc. Further, as in the crisp setup, injectivity and surjectivity of maps are characterized in terms of some lattice algebraic properties of type-2 fuzzy images and type-2 fuzzy inverse images.

Now coming back to the developments on this side of the theories, Goguen unified some of them mathematically. However, one must observe here that, as mentioned earlier, when it comes to practical applications the fuzzy subsets and the type-2 fuzzy subsets are quite different, because fuzzy sets require a specific real number between 0 and 1 to be associated with each of their elements, while type-2 fuzzy sets require a reasonable set of ordered pairs in [0,1] to be associated with each of their elements.

In what follows we show that for any set X, the set Z₂(X) of all type-2 fuzzy subsets of X is a complete infinite distributive complete DeMorgan lattice. Let us recall that a complete DeMorgan lattice is a complete lattice with a unary complement operation that satisfies the complete DeMorgan identities, namely, for any subset (αᵢ)_{i∈I} of L, (∨_{i∈I} αᵢ)ᶜ = ∧_{i∈I} αᵢᶜ and (∧_{i∈I} αᵢ)ᶜ = ∨_{i∈I} αᵢᶜ. A complete infinite distributive DeMorgan lattice is a complete DeMorgan lattice which is also a complete infinite



distributive lattice. Later on we show that the collection of all type-2 fuzzy subsets of any set is a complete infinite distributive complete DeMorgan lattice.

2 Main Results: Definitions and Statements
2.1 (a) A type-2 fuzzy subset A of a set X is any map A : X → I^I, where I = [0,1] is the closed interval of all real numbers between 0 and 1.
(b) For any set X, the type-2 fuzzy subset again denoted by X, defined by Xx = 1̄ for each x in X, where 1̄α = 1 for each α ∈ I, is the whole type-2 fuzzy subset of X; and the type-2 fuzzy subset denoted by ∅, defined by ∅x = 0̄ for each x in X, where 0̄α = 0 for each α ∈ I, is the empty type-2 fuzzy subset of X.
(c) The collection of all type-2 fuzzy subsets of a set X is denoted by Z₂(X).
(d) For any pair of type-2 fuzzy subsets A, B of X, define A ≤ B if and only if Ax ≤ Bx in I^I for each x ∈ X.
(e) For any type-2 fuzzy subset A of a set X, define the complement of A, denoted by Aᶜ, by Aᶜx = 1 − Ax for each x ∈ X, where (1 − Ax)α = 1 − Axα for each α ∈ I.

Let (Aᵢ)_{i∈I} be a family of type-2 fuzzy subsets of a set X. Then
(a) the type-2 fuzzy union of (Aᵢ)_{i∈I}, denoted by ∨_{i∈I} Aᵢ, is defined by (∨_{i∈I} Aᵢ)x = ∨_{i∈I} Aᵢx in I^I for each x ∈ X;
(b) the type-2 fuzzy intersection of (Aᵢ)_{i∈I}, denoted by ∧_{i∈I} Aᵢ, is defined by (∧_{i∈I} Aᵢ)x = ∧_{i∈I} Aᵢx in I^I for each x ∈ X.

DeMorgan Algebra of Type-2 Fuzzy Subsets
Theorem 2.2 For any set X, the set Z₂(X) of all type-2 fuzzy subsets of X is a complete infinite distributive complete DeMorgan lattice, since I^I is so.

Images and Inverse Images: Definitions and Statements
2.3 Let X, Y be a pair of sets and let f : X → Y be an arbitrary but fixed map. Let A : X → I^I and B : Y → I^I be a pair of type-2 fuzzy subsets of X and Y respectively, where I^I is the complete infinite distributive complete DeMorgan lattice.
(a) The type-2 fuzzy image of A, denoted by fA, is defined by fA : Y → I^I such that fAy = ∨A(f⁻¹y) for each y ∈ Y. Observe that whenever y ∈ Y is such that f⁻¹y = ∅, fAy = ∨A(f⁻¹y) = ∨∅ = 0̄.
(b) The type-2 fuzzy inverse image of B, denoted by f⁻¹B, is defined by f⁻¹B : X → I^I such that f⁻¹Bx = Bfx for each x ∈ X.

Lemma 2.4 For any pair of type-2 fuzzy subsets A and C of X and for any subset E of X such that Ae ≤ Ce for each e ∈ E, we have ∨AE ≤ ∨CE.
Lemma 2.5 For any map A : X → I^I and for any pair of subsets P and Q of X such that P ⊆ Q, we have ∨AP ≤ ∨AQ.

In what follows, we show that several of the (lattice) algebraic properties that hold good for images and inverse images of crisp sets also hold good for type-2 fuzzy subsets.

Lemma 2.6 For any map f : X → Y and for any pair of type-2 fuzzy subsets A and C of X such that A ≤ C, we have fA ≤ fC.
Lemma 2.7 For any map f : X → Y and for any pair of type-2 fuzzy subsets B and D of Y such that B ≤ D, we have f⁻¹B ≤ f⁻¹D.
Lemma 2.8 For any map f : X → Y and for any type-2 fuzzy subset A of X, we have A ≤ f⁻¹fA. In particular, X = f⁻¹fX.



Lemma 2.9 For any map f : X → Y and for any type-2 fuzzy subset B of Y, we have ff⁻¹B ≤ B.
Lemma 2.10 For any map f : X → Y and for any family of type-2 fuzzy subsets (Aᵢ)_{i∈I} of X, we have ∨_{i∈I} fAᵢ = f(∨_{i∈I} Aᵢ).
Lemma 2.11 For any map f : X → Y and for any family of type-2 fuzzy subsets (Aᵢ)_{i∈I} of X, we have f(∧_{i∈I} Aᵢ) ≤ ∧_{i∈I} (fAᵢ). The equality holds whenever f is 1-1, and a strict inequality is possible otherwise.
Lemma 2.12 For any map f : X → Y and for any family of type-2 fuzzy subsets (Bᵢ)_{i∈I} of Y, we have f⁻¹(∨_{i∈I} Bᵢ) = ∨_{i∈I} f⁻¹Bᵢ.
Lemma 2.13 For any map f : X → Y and for any family of type-2 fuzzy subsets (Bᵢ)_{i∈I} of Y, we have f⁻¹(∧_{i∈I} Bᵢ) = ∧_{i∈I} f⁻¹Bᵢ.
Lemma 2.14 For any map f : X → Y and for any type-2 fuzzy subset A of X, we have fA = 0̄ iff A = 0̄. In particular f0̄ = 0̄, and when f is 1-1, fA = fX iff A = X.
Lemma 2.15 For any map f : X → Y and for any type-2 fuzzy subset B of Y,
1. f⁻¹B = X iff B ≥ fX. In particular, f⁻¹Y = f⁻¹fX = X.
2. f⁻¹B = 0̄ iff B ≤ Y − fX. In particular, f⁻¹0̄ = 0̄.
Lemma 2.16 For any map f : X → Y and for any type-2 fuzzy subset A of X, fX − fA ≤ f(X − A), and the equality holds whenever f is one-one.
Lemma 2.17 For any map f : X → Y and for any type-2 fuzzy subset B of Y, f⁻¹(Bᶜ) = (f⁻¹B)ᶜ, or f⁻¹(1 − B) = 1 − f⁻¹B.
Lemma 2.18 For any map f : X → Y and for any type-2 fuzzy subset B of Y, we have ff⁻¹B = B ∧ fX, where X denotes the whole type-2 fuzzy subset of X; hence always ff⁻¹B ≤ B. In particular, f onto implies ff⁻¹B = B.
Lemma 2.19 For any map f : X → Y and for any type-2 fuzzy subset B of Y, we have f⁻¹B = f⁻¹(B ∧ fX), where X denotes the whole type-2 fuzzy subset of X.
Lemma 2.20 For any map f : X → Y and for any pair of type-2 fuzzy subsets A of X and B of Y, we have fA ≤ B iff A ≤ f⁻¹(B).
Lemma 2.21 For any map f : X → Y and for any pair of type-2 fuzzy subsets A of X and B of Y, we have
1. ff⁻¹f(A) = f(A).
2. f⁻¹ff⁻¹(B) = f⁻¹(B).

In what follows we show that the identities (g∘f)A = g(f(A)), for all type-2 fuzzy subsets A of X, and (g∘f)⁻¹C = f⁻¹(g⁻¹C), for all type-2 fuzzy subsets C of Z, remain valid for type-2 fuzzy subsets whenever f : X → Y and g : Y → Z are crisp maps. But first we make the following simple observations, which will be used in the proofs of the above: Let X, Y, Z and f, g be as above. Then f⁻¹g⁻¹z ≠ ∅ iff g⁻¹z ≠ ∅ and f⁻¹y ≠ ∅ for some y ∈ g⁻¹z; or equivalently, f⁻¹g⁻¹z = ∅ iff g⁻¹z = ∅, or g⁻¹z ≠ ∅ and f⁻¹y = ∅ for all y ∈ g⁻¹z.

Lemma 2.22 For any pair of maps f : X → Y, g : Y → Z and for each type-2 fuzzy subset A of X, (g∘f)(A) = g(f(A)).
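The composition identities of Lemmas 2.22 and 2.23 can likewise be confirmed numerically under a finite encoding of type-2 fuzzy subsets (our illustration; grades are maps from the discretized interval {0.0, 0.5, 1.0} into [0,1], with pointwise joins):

```python
ALPHAS = (0.0, 0.5, 1.0)

def sup(gs):
    # Pointwise join of a list of grades; the empty join is the zero grade.
    return {a: max([g[a] for g in gs], default=0.0) for a in ALPHAS}

def image(f, A, Y):                      # (fA)y = sup A(f^{-1}y)
    return {y: sup([A[x] for x in A if f[x] == y]) for y in Y}

def inv(f, B):                           # (f^{-1}B)x = B(fx)
    return {x: B[f[x]] for x in f}

X, Y, Z = ['x1', 'x2'], ['y1', 'y2', 'y3'], ['z1', 'z2']
f = {'x1': 'y1', 'x2': 'y2'}
g = {'y1': 'z1', 'y2': 'z1', 'y3': 'z2'}
gf = {x: g[f[x]] for x in X}             # the composite crisp map g o f
A = {'x1': {0.0: 0.3, 0.5: 0.6, 1.0: 0.9},
     'x2': {0.0: 0.8, 0.5: 0.2, 1.0: 0.5}}
C = {'z1': {0.0: 0.4, 0.5: 0.7, 1.0: 0.1},
     'z2': {0.0: 0.9, 0.5: 0.9, 1.0: 0.9}}

assert image(gf, A, Z) == image(g, image(f, A, Y), Z)   # Lemma 2.22
assert inv(gf, C) == inv(f, inv(g, C))                  # Lemma 2.23
```

Both identities hold here even though f is not onto and g is not one-one, in line with the lemmas, which place no injectivity or surjectivity conditions on the crisp maps.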



Lemma 2.23 For any pair of maps f : X → Y, g : Y → Z and for each type-2 fuzzy subset C of Z, (g∘f)⁻¹(C) = f⁻¹(g⁻¹(C)).
Lemma 2.24 For any map f : X → Y and for any family of type-2 fuzzy subsets (Aᵢ)_{i∈I} of X, we have
1. ∧_{i∈I} fAᵢ = 0̄ implies ∧_{i∈I} Aᵢ = 0̄. However, the converse is true whenever f is one-one, and the implication may be false otherwise.
2. ∨_{i∈I} fAᵢ = 0̄ iff ∨_{i∈I} Aᵢ = 0̄.
Lemma 2.25 For any map f : X → Y and for any family of type-2 fuzzy subsets (Bᵢ)_{i∈I} of Y,
1. ∧_{i∈I} Bᵢ = 0̄ implies ∧_{i∈I} f⁻¹Bᵢ = 0̄. However, the converse is true whenever f is onto, and the implication may be false otherwise.
2. ∨_{i∈I} Bᵢ = 0̄ implies ∨_{i∈I} f⁻¹Bᵢ = 0̄. However, the converse is true whenever f is onto, and the implication may be false otherwise.

In what follows, as in the crisp setup, we characterize injectivity and surjectivity of maps in terms of some lattice algebraic properties of fuzzy images and fuzzy inverse images.

Lemma 2.26 For any map f : X → Y, f is injective iff for each type-2 fuzzy subset A of X, f⁻¹fA = A.
Lemma 2.27 For any map f : X → Y, f is injective iff for any pair of type-2 fuzzy subsets A₁ and A₂ of X, A₁ < A₂ implies fA₁ < fA₂.
Lemma 2.28 For any map f : X → Y, f is injective iff for any pair of type-2 fuzzy subsets A₁ and A₂ of X, fA₁ ≤ fA₂ implies A₁ ≤ A₂.
Lemma 2.29 For any map f : X → Y, f is injective iff for any family of type-2 fuzzy subsets (Aᵢ)_{i∈I} of X, f(∧_{i∈I} Aᵢ) = ∧_{i∈I} fAᵢ.
Lemma 2.30 For any map f : X → Y, f is surjective iff for each type-2 fuzzy subset B of Y, B = ff⁻¹B.
Lemma 2.31 For any map f : X → Y, f is surjective iff for any pair of type-2 fuzzy subsets B₁ and B₂ of Y, B₁ < B₂ implies f⁻¹B₁

( Ai )iI of X , f (iI Ai ) = iI fAi . Lemma 2.30 For any map f : X  Y , f is surjective iff for each type-2 fuzzy subset B of Y , B = ff 1 B . Lemma 2.31 For any map f : X  Y , f is surjective iff for any pair of type-2 fuzzy subsets B1 and B2 of Y , B1 < B2 implies f 1B1