www.nitap.in

Proceedings of

North-Eastern Regional Science Congress on

“Science For Shaping The Future of India” on 11th -13th March’2013

International Journal on Current Science & Technology Vol.-I | No.-I | January-June' 2013

ISSN: 2320 5636


Published By:

NATIONAL INSTITUTE OF TECHNOLOGY ARUNACHAL PRADESH
(An Institute of National Importance)

Designed & Printed at:

INFLAME MEDIA

Kolkata, West Bengal


Sponsored by

Department of Science & Technology, Govt. of India, and Indian Science Congress Association, Kolkata

Organized by

NATIONAL INSTITUTE OF TECHNOLOGY


ARUNACHAL PRADESH
(An Institute of National Importance, Estd. by MHRD, Govt. of India)
PO-Yupia, P.S.-Doimukh, Dist.-Papum Pare, Pin-791112, Arunachal Pradesh
Ph: +91 360 228 4801, Fax: +91 360 228 4972
E-mail: [email protected]; [email protected]

EDITORIAL BOARD MEMBERS
[1] Prof. C. T. Bhunia - Director, NIT AP
[2] Prof. M. V. Pitke - Former Professor, TIFR, Mumbai; Chair, CSE Section
[3] Prof. Ajit Pal - Professor, IIT Kharagpur
[4] Prof. Atal Chowdhuri - Professor, Jadavpur University
[5] Prof. Y. B. Reddy - Professor, Grambling State University (USA)
[6] Prof. Mohammad S. Obaidat - Professor, Monmouth University (USA)
[7] Dr. Bubu Bhuyan - Associate Professor, NEHU
[8] Prof. Swapan Mondal - Professor, Kalyani Govt. Engg. College
[9] Prof. Swapan Bhattacharjee - Director, NIT Surathkal; Chair, ECE Section
[10] Prof. P. P. Sahu - Professor, Tezpur University
[11] Prof. S. R. Bhadrachowdhury - Professor, BESU
[12] Prof. F. Masulli - Professor, University of Genova
[13] Prof. S. Sen - Professor, Calcutta University
[14] Prof. P. K. Basu - Professor, Calcutta University
[15] Prof. S. C. Dutta Roy (Bhatnagar Awardee) - Professor, IIT Delhi; Chair, EEE Section
[16] Prof. P. Sarkar - Professor, NITTR, Kolkata
[17] Prof. G. K. N. Chetry - Professor, Manipur University; Chair, BioScience Section
[18] Dr. Pinaki Chakraborty - Assistant Dean (R&D), NIT AP
[19] Dr. Nabakumar Pramanik - Assistant Dean (Exam.), NIT AP
[20] Dr. K. R. Singh - Assistant Professor, NIT AP
[21] Dr. U. K. Saha - Assistant Professor, NIT AP
[22] Dr. Parogama Sen - Associate Professor, Calcutta University; Chair, Physical Science Section
[23] Prof. A. K. Bhunia - Professor, Burdwan University

NORTH-EASTERN REGIONAL SCIENCE CONGRESS

Programme Committee:
[1] Prof. Dilip Kumar Sinha - Former Vice-Chancellor, Visva-Bharati University
[2] Dr. Manoj Kumar Chakrabarti - General Secretary (Membership Affairs), ISCA
[3] Dr. (Mrs.) Vijay Laxmi Saxena - General Secretary (Scientific Activities), ISCA
[4] Mr. N. B. Basu - Treasurer, ISCA
[5] Dr. Amit Krishna De - Executive Secretary, ISCA
[6] Prof. S. C. Dutta Roy - IIT Delhi
[7] Prof. Sanghamitra Roy - ISI, Kolkata
[8] Prof. S. K. Bhattacharyya - Director, NIT Surathkal
[9] Prof. S. Sen - University of Calcutta, West Bengal
[10] Prof. Surabhi Banerjee - Vice-Chancellor, Central University of Orissa
[11] Prof. S. R. Bhadrachowdhury - Bengal Engineering & Science University, Howrah
[12] Prof. M. L. Das - Dhirubhai Ambani Institute of ICT, Gujarat
[13] Prof. M. V. Pitke - Former Professor, TIFR, Mumbai
[14] Prof. S. Raha - Bose Institute, Kolkata
[15] Prof. Rabindra Nath Bera - Sikkim Manipal University
[16] Prof. Binay Singh - NERIST, Arunachal Pradesh
[17] Prof. P. P. Sahu - Tezpur University, Assam
[18] Dr. Bubu Bhuyan - NEHU, Shillong

Local Organizing Committee:
[1] Prof. C. T. Bhunia - Chairman, Conference & Director, NITAP
[2] Dr. Pinaki Chakraborty - Convenor, Conference, NITAP
[3] Prof. P. D. Kashyap - NITAP
[4] Dr. Nabakumar Pramanik - NITAP
[5] Dr. U. K. Saha - NITAP
[6] Dr. K. R. Singh - NITAP
[7] Mr. Swarnendu Chakraborty - NITAP

Working Programme Committee:
[1] Dr. Pinaki Chakraborty - Convenor, Conference, NITAP
[2] Dr. Nabakumar Pramanik - NITAP
[3] Dr. U. K. Saha - NITAP
[4] Dr. K. R. Singh - NITAP

PREFACE

"As for the future, your task is not to foresee it, but to enable it." ...Antoine de Saint-Exupery

The National Institute of Technology is an Institute of National Importance and a unitary university by an Act of Parliament. It is full of never-say-die spirit in implementing its defined objectives of Education, Research, Ethics and Service to Society. Nothing can be more credible for an institute of higher learning than to provide quality teaching and productive research. In its pursuit of quality teaching, and in an attempt to complete the man-making process in a holistic approach, unique compulsory courses on Values & Ethics, Entrepreneurship Practices, Historiography of Science & Technology and NCC, among others, have been purposefully included in the B.Tech syllabi of this institute. In line with that, to lay a solid foundation in research, Ph.D. programmes were introduced in the very third year of its inception. God is in favour of doers, and we are highly privileged to get the opportunity to organize the North-Eastern Regional Science Congress in this centenary year of the Indian Science Congress Association. On my own behalf and on behalf of the entire NIT family, I put on record our gratitude to the Indian Science Congress Association for showing their confidence and faith in our academic potential to organize the North-Eastern Regional Science Congress. We feel all the more honoured that several distinguished scientists and promising young researchers of several leading universities, e.g. the University of Calcutta, other National Institutes of Technology, Manipur University, North Eastern Hill University and Tezpur University, among others, have spontaneously and generously contributed their thought-provoking research papers to this conference. I thank and salute the esteemed contributors. We at NIT Arunachal believe in taking up challenges to realize what we think is essential for making NIT par excellence. To us, the sky is the only limit.

Therefore, our initiative to publish a bi-yearly research journal on current science & technology on a regular basis could not find a better moment to see the light of day than the eve of the North-Eastern Regional Science Congress. The proceedings of the conference are therefore published as the premier issue of the journal. Accolades to the authors, the editors, the organizers, the readers and all the members of the family of NIT Arunachal for their commitment to "Stop not till the goal is reached." I have full confidence that the journal-cum-proceedings published on the occasion of the North-Eastern Regional Science Congress will bring scholarship in its totality.

"There is nothing so practical as a good theory." ...Ludwig Boltzmann

Professor Chandan Tilak Bhunia DIRECTOR National Institute of Technology Arunachal Pradesh

INDEX OF CONTENTS

Sl. No. | Title and Authors | Page
1 | The evaluation of research performance of Indian states, by Dr. Gangan Prathap | 11
2 | Imbalance of Technical Education in the North East India and its Effects, by Sainkupar Marwein Mawiong | 15
3 | Reviewing And Suggestions For Revamping Technical Higher Education In India To Meet The Challenges Of Future Scenario, by A. Bhunia, A. Bhunia, S. K. Chakraborty, P. Chakraborty, R. S. Goswami, N. Pramanik, M. K. De, P. K. Samanta and C. T. Bhunia | 21
4 | Imbalance in Technical Education-Regional, by Bikash Sah, Nupur, Santosh Shukla, Krishna Kumar | 31
5 | A comparative study of fungal diseases of French bean (Phaseolus vulgaris L.) in organic and conventional farming systems, by G. K. N. Chhetry and H. C. Mangang | 35
6 | Arbuscular mycorrhizal fungi associated with the rhizospheric soil of potato plant (Solanum tuberosum) in Barak valley of South Assam, India, by Sujata Bhattacharjee & G. D. Sharma | 41
7 | Biodiversity and conservation strategies of home garden crops in Manipur, by A. Premila and G. K. N. Chhetry | 45
8 | Metabolic Pathways: A review, by Daizy Deb and Rhythm Upadhyaya | 49
9 | Ichthyofaunal Diversity of Simen River in Assam and Arunachal Pradesh, India, by Biplab Kumar Das, Aloka Ghosh and Devashish Kar | 55
10 | Recent Advances in Papaya Cultivation and Breeding, by Aditi Chakraborty and S. K. Sarkar | 59
11 | Traditional organic practices with traditional inputs farming for the cultivation of French bean in Manipur, by G. K. N. Chhetry and H. C. Mangang | 65
12 | Induced breeding of eel-loach Pangio pangia (Hamilton, 1822), by Kh. Geetakumari, Ch. Basudha and N. Prakash | 73
13 | Fungal Airspora over onion field in Manipur valley, by A. Premila | 77
14 | Variation in Indoor and Outdoor Aeromycoflora of a Rice Mill in Imphal, by A. Premila | 81
15 | Biochemical Networks: The Chemistry of Life, by Rhythm Upadhyaya and Rhyme Upadhyaya | 85
16 | Applications of zeolites for alkylation reactions: catalytic and thermodynamic properties, by Dr. V. R. Chumbhale | 91
17 | Multichannel Transceiver System Design Using Uncoordinated Direct Sequence Spread Spectrum, by S. Kalita, R. Kaushik, M. Jajoo, P. P. Sahu | 97
18 | Effect of demyelination on conduction velocity in demyelinating polyneuropathic patients, by H. K. Das and P. P. Sahu | 101
19 | From Transistor to Medicine: Materials, Devices, and Systems, by Tapas Kumar Maiti | 105
20 | Enzyme-modified Field Effect Transistors (ENFETs) as Biosensors: A Research Review, by Manoj Kumar Sarma and Jiten Ch. Dutta | 109
21 | Acetylcholine Gated Spiking Neuron Model, by Soumik Roy, Meenakshi Boro, Jiten Ch. Dutta and Reginald H. Vanlalchaka | 115
22 | Power Efficient Adiabatic Gray to Binary & Binary to Gray Code Converter Circuits, by Reginald H. Vanlalchaka and Soumik Roy | 119
23 | Light Induced Plating For Enhanced Efficiency by Improving Fill Factor And Short Circuit Current, by Santanu Maity, Avra Kundu, Hiranmay Saha, Utpal Gangopadhyay | 125
24 | Image Denoising Using Sparse and Overcomplete Representations - A Study, by M. K. Rai Baruah, Bhabesh Deka | 129
25 | FOTOFUSION - An Analysis of Image Editing on Android Platform as an Application in Smart Phones, by Smita Das, Nitesh Kr. Singh, Mukesh Kumar, Ashok Ajad, Priya Khan | 135
26 | Denoising of Speckled Images, by Sagarika Das | 141
27 | A Study of Randomness and Variable Key in Cryptography, by Achinta Kumar Gogoi, Bidyut Kalita | 147
28 | Approach towards realizing error propagation effect of AES and studies thereof in the light of Redundancy Based Technique, by B. Sarkar, C. T. Bhunia, U. Maulik | 153
29 | Cipher Combining Technique to tackle Error Propagation Behavior of AES, by Rajat Subhra Goswami, Swarnendu Kumar Chakraborty, Abhinandan Bhunia, C. T. Bhunia | 159
30 | Two New Protocols for Improving Performance of Aggressive Packet Combining, by Swarnendu Kumar Chakraborty, Rajat Subhra Goswami, Abhinandan Bhunia, C. T. Bhunia | 161
31 | Review and Security Analysis of an Efficient Biometric-Based Remote User Authentication Scheme Using Smart Cards, by Subhasish Banerjee, Uddalak Chatterjee and Kiran Sankar Das | 167
32 | Evolution Strategy for the C-Means Algorithm: Application to Multimodal Image Segmentation, by Francesco Masulli, Anna Maria Massone, Andrea Schenone | 171
33 | A Deterministic Inventory Model for Deteriorating Items With Time Dependent Demand and Allowable Shortage Under Trade Credit, by Pinki Majumder and U. K. Bera | 197
34 | Development of LabVIEW Based Electronic Nose Using k-NN Algorithm for the Detection and Classification of Fruity Odors, by N. Jagadesh Babu | 207

THE EVALUATION OF RESEARCH PERFORMANCE OF INDIAN STATES

Gangan Prathap
CSIR-National Institute of Science Communication and Information Resources, New Delhi 110012
E-mail: [email protected]

ABSTRACT
We examine how various states in India have performed in academic research on a per-GDP basis. The scientific output, measured in terms of the number of papers published in a prescribed window, serves as a quantity proxy; together with the GDP in current dollar terms, it yields the quality proxy, papers/GDP. A second-order indicator, the product of the square of the quality proxy and the quantity proxy, is then the most practical single-number scalar indicator of performance, combining quality and quantity of output or outcome.

Keywords - Quality; Quantity; Quasity; Exergy; Performance; Bibliometrics.

I. INTRODUCTION
As early as 1939, J D Bernal made an attempt to measure the amount of scientific activity in a country and relate it to the economic investments made. In The Social Function of Science (1939), Bernal [1] estimated the money devoted to science in the United Kingdom using existing sources of data: government budgets, industrial data (from the Association of Scientific Workers) and University Grants Committee reports. He was also the first to propose an approach that became the main indicator of science and technology: Gross Expenditure on Research and Development (GERD) as a percentage of GDP. He compared the UK's investment (0.1%) with that of the United States (0.6%) and the USSR (0.8%) and suggested that Britain should devote 0.5-1.0% of its national income to research. Since then, research evaluation at the country and regional levels has progressed rapidly, and there are now exercises carried out at regular intervals in the United States of America, the European Union, the OECD, UNESCO, Japan, China, etc. Science is a socio-cultural activity that is highly disciplined and easily quantifiable. The output of science can be easily measured in terms of articles published, citations, etc. Inputs are mainly the financial and human resources

invested in science and technology activity. The financial resources invested in research are used to calculate what is called the Gross Domestic Expenditure on R&D (GERD), and the human resources devoted to these activities (FTER for Full Time Equivalent Researcher) are usually computed as a fraction of the workforce or the population. The US science adviser, J R Steelman pointed out in 1947 that “The ceiling on research and development activities is fixed by the availability of trained personnel, rather than by the amounts of money available. The limiting resource at the moment is manpower”.

II. METHODOLOGY
In most countries, due to a legacy of poor investment in higher education and research, both GERD and FTER per million of population are sub-optimal. To see how far R&D investment in manpower and funding terms is sub-optimal in India, it is a good exercise to see how output is related to actual GDP. In the present exercise, the scientific output P, measured in terms of articles published from the various states of India as registered by the Web of Science over a 3-year period (2007-2009), is taken as the output term [2]. The GDP of each state in 2009, in billions of dollars ($Bn), is taken as the proxy for the input term (http://www.economist.com/content/indian-summary, accessed on 22 July 2011). A simple and crude measure of the quality of scientific activity is of course given by the ratio of output to input, q = P/$Bn. This indicator usually favours small states at the expense of larger states, where the law of diminishing returns sets in. Indeed, there will always be cases of high input but low output and therefore low quality, or low input and medium output but high quality, etc. It is therefore desirable to assess overall performance in terms of a single indicator. The challenge is, when given an output or outcome O and an input of size Q, to combine quality q with quantity Q and/or output O to yield a single indicator that is the best proxy for performance. The Quasity-Exergy

paradigm [3] proposes that in any general situation where performance needs to be evaluated, given an input Q (for quantity) and an output or outcome O (for quasity), quality is defined as quasity/quantity (q = O/Q), and the simplest and most effective indicator for performance becomes X = qO = q^2 Q. Thus in this case, where Q = $Bn and O = P, X = P^2/$Bn. That is, in Quantity-Quality-Quasity terms, the indicator P/$Bn (papers per billion dollars of GDP) is the "quality" measure, the quantity (read: size) measure is $Bn (billion dollars of GDP), and the quasity measure is P (papers published during 2007-2009). The energy-like term X = P/$Bn x P is a product of the quality and quasity terms and perhaps best represents the "performance" of each state on a per-GDP basis.
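The two indicators just defined are one-line formulas; the sketch below is a minimal illustration (not code from the paper), using Delhi's published figures from Tables I and II (P = 14157 papers, GDP = $36.1 Bn) to show the arithmetic.

```python
def quality(papers: int, gdp_bn: float) -> float:
    """Quality proxy q = P / $Bn (papers per billion dollars of GDP)."""
    return papers / gdp_bn

def exergy(papers: int, gdp_bn: float) -> float:
    """Second-order performance indicator X = q*O = q^2 * Q = P^2 / $Bn."""
    return papers ** 2 / gdp_bn

# Delhi, 2007-2009 window (values as published in Tables I and II)
P, GDP = 14157, 36.1
q = quality(P, GDP)   # about 392.16 papers per $Bn
X = exergy(P, GDP)    # about 5.55 million
```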

III. THE RELATIVE SCIENTIFIC PERFORMANCE OF VARIOUS INDIAN STATES ON A PER-GDP BASIS
Table I presents the output from the various Indian states as registered in the Web of Science during 2007-2009 [2]. Tamil Nadu accounts for the largest number of publications on what we call the quasity basis. Table II sorts the results on a quality basis (papers per billion dollars of GDP). This is obtained by inverting the relationship proposed in Prathap [3], namely quasity = quantity x quality. Here, the GDP of the state in billions of dollars ($Bn) is taken as the quantity term. The Union Territory of Chandigarh, which has many top national research and academic institutions, ranks first among the Indian states for academic scientific research on this basis. Delhi, which has a privileged status as the National Capital Region, ranks second, and the erstwhile Union Territory of Puducherry ranks third. The exergy term, the product of quality and quasity, is offered as the best single-number indicator of performance. On this basis, Delhi emerges first. This is not surprising, as a very large number of premier research and academic institutions are based in Delhi. All this can be easily represented on a Quantity-Quality-Quasity diagram, where the product qO (also q^2 Q) is the energy-like term (called exergy X) and is a scalar measure of the scientific activity during the window concerned that takes into account both quality and quantity. We see from Table II and Figures 1 and 2 that Delhi's research during this period forges ahead of the rest of the field. Indeed, in exergy terms, Delhi contributes 38% of India's scientific output, while in GDP terms it accounts for only 3.3% of India's GDP.
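The 38% and 3.3% shares quoted for Delhi can be re-derived by plain arithmetic on the published table values (X for Delhi and for India as a whole, and the corresponding GDP figures); the snippet below is only such a check, not part of the paper's method.

```python
# Exergy and GDP values as published in Table II
x_delhi, x_india = 5551818.53, 14586922.87
gdp_delhi, gdp_india = 36.1, 1081.8

exergy_share = 100 * x_delhi / x_india   # ~38% of India's output in exergy terms
gdp_share = 100 * gdp_delhi / gdp_india  # ~3.3% of India's GDP
```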

IV. CONCLUSIONS
Reference [3] proposed a practical theory of performance, associating quality with vector properties, input quantity with scalar properties, and an intermediate term, quasity (quantity x quality), also a vector. This trinity of terms generates an energy-like term called exergy, which serves as the simplest indicator for performance. We have applied these ideas to the comparative research evaluation of various Indian states on a per-GDP basis.

TABLE I

Tamil Nadu Is Ranked First On The Basis Of The Number Of Papers Published During 2007-09.

State | Number of Papers P
Tamil Nadu | 17507
Maharashtra | 16577
Uttar Pradesh | 15843
Karnataka | 15156
West Bengal | 14471
Delhi | 14157
Andhra Pradesh | 9494
Kerala | 4559
Gujarat | 4094
Madhya Pradesh | 3835
Punjab | 3151
Rajasthan | 2814
Chandigarh | 2640
Haryana | 2555
Assam | 2210
Orissa | 2105
Uttarakhand | 1223
Himachal Pradesh | 1137
Bihar | 1019
Jammu & Kashmir | 988
Pondicherry | 875
Jharkhand | 698
Goa | 626
Meghalaya | 364
Chhattisgarh | 238
Arunachal Pradesh | 195
Manipur | 156
Sikkim | 124
Tripura | 96
Mizoram | 84
Andaman & Nicobar Islands | 77
Nagaland | 68
Lakshadweep | 2
Total | 125619

TABLE II
On a quality basis (papers per billion dollars of GDP), Chandigarh ranks first. On the second-order indicator basis, Delhi emerges first.

States/UTs | GDP $Billion | q = P/$Bn | Exergy X = P x P/$Bn
Chandigarh | 4.1 | 643.90 | 1699902.44
Delhi | 36.1 | 392.16 | 5551818.53
Puducherry | 2.8 | 312.50 | 273437.50
Karnataka | 62.9 | 240.95 | 3651897.23
Tamil Nadu | 80.0 | 218.84 | 3831188.11
Sikkim | 0.6 | 206.67 | 25626.67
Arunachal Pradesh | 1.0 | 195.00 | 38025.00
West Bengal | 76.9 | 188.18 | 2723144.88
Meghalaya | 2.1 | 173.33 | 63093.33
Andaman & Nicobar Islands | 0.5 | 154.00 | 11858.00
Uttar Pradesh | 103.5 | 153.07 | 2425127.04
Goa | 4.2 | 149.05 | 93303.81
Jammu & Kashmir | 7.6 | 130.00 | 128440.00
Himachal Pradesh | 8.9 | 127.75 | 145254.94
Uttarakhand | 9.9 | 123.54 | 151083.74
Assam | 18.6 | 118.82 | 262586.02
India | 1081.8 | 116.12 | 14586922.87
Manipur | 1.4 | 111.43 | 17382.86
Andhra Pradesh | 85.7 | 110.78 | 1051762.38
Kerala | 41.2 | 110.66 | 504477.69
Mizoram | 0.8 | 105.00 | 8820.00
Madhya Pradesh | 37.3 | 102.82 | 394295.58
Maharashtra | 175.3 | 94.56 | 1567580.88
Punjab | 40.5 | 77.80 | 245155.58
Orissa | 31.8 | 66.19 | 139340.41
Rajasthan | 46.3 | 60.78 | 171027.99
Haryana | 44.2 | 57.81 | 147692.87
Gujarat | 80.1 | 51.11 | 209248.89
Nagaland | 1.5 | 45.33 | 3082.67
Jharkhand | 17.5 | 39.89 | 27840.23
Tripura | 2.6 | 36.92 | 3544.62
Bihar | 32.7 | 31.16 | 31754.16
Chhattisgarh | 22.7 | 10.48 | 2495.33
Lakshadweep | 0.3 | 6.67 | 13.33

Fig. 1: The graphical representation of scientific performance of various Indian states on a quality-quasity map.


Fig. 2: The graphical representation of scientific performance of various Indian states on a quality-quasity map (zoomed in).

[...]

...copies of the requested packet. The receiver, getting i copies, can now make pair-wise XORs to locate error positions. For example, if i = 2, we have three copies of the packet (Copy-1 = the stored copy in the receiver's buffer, Copy-2 = one of the retransmitted copies, Copy-3 = another retransmitted copy) and three pairs for the XOR operation.


Table (I): Algorithm of MPC

Comparing pairs | Number of bits in error (x) | Common copy in two consecutive pairs
Copy-1 and Copy-2 | 1 | Copy-1 common in first two pairs
Copy-1 and Copy-3 | 2 | Copy-3 common in next two pairs
Copy-3 and Copy-2 | 3 | Copy-2 common in next two pairs (Copy-3 and Copy-2, Copy-2 and Copy-1)

As an example, assume that an actual packet 10100011 was received as:
Copy-1 = 10101011
Copy-2 = 10101111
Copy-3 = 10100001
Under the XOR operation:
Copy-1 XOR Copy-2 (say, C12) = 00000100 (one bit in error)
Copy-2 XOR Copy-3 (C23) = 00001110 (three bits in error)
Copy-3 XOR Copy-1 (C31) = 00001010 (two bits in error)
Now we have to decide with which copy the bit inversion will start and how to proceed thereafter. We define an algorithm for the purpose as follows: make a table (see Table (I)) in ascending order of the number of bits in error as indicated by the XOR operation. The bit inversion and FCS checking process begins with the common copy indicated in the last column of the table so prepared, and proceeds down the table if required. If none of the inversions yields a result, the receiver has to request further retransmission. As per Table (I), in this example the detection of error locations and consequent bit inversion starts with Copy-1 and, if required, is followed by Copy-3 and then by Copy-2.

IV. REVIEW OF THE AGGRESSIVE PACKET COMBINING SCHEME (APC)
APC is a modification of the majority-logic packet combining (MjPC) scheme [30] so as to apply it in wireless networks. APC is best illustrated as in [23]:
i. ORIGINAL PACKET = 11111 is sent from the sender. The sender sends three copies of the packet.
ii. All the copies reach the receiver with errors: FIRST COPY: 11011, SECOND COPY: 11110, THIRD COPY: 11011.
iii. The receiver applies majority logic bit by bit on the three erroneous copies (11011, 11110, 11011) and thus gets a generated copy, 11011.
iv. The receiver applies an error detection scheme to find whether the generated copy is correct or not. As it is not correct, the receiver chooses the least reliable bits from the majority logic; in this case these are the 3rd and 5th bits from the left.
v. The receiver applies brute-force correction, as in PC, to the 3rd and 5th bits, followed by error detection. By this process it may get the correct copy; if it fails, it requests retransmission, whereupon the sender will again send three copies.
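The locate-and-invert idea can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the FCS check is modelled here with CRC-32, and the candidate bit positions are simply taken to be the positions where the received copies disagree.

```python
from binascii import crc32
from itertools import combinations

def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length bit strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def majority(copies):
    """Bit-by-bit majority vote over an odd number of copies (step iii of APC)."""
    return "".join(max("01", key=col.count) for col in zip(*copies))

def locate_and_invert(copies, fcs_ok):
    """Brute-force inversion of the disagreeing bit positions, copy by
    copy, until the FCS check passes (the locate-and-invert step)."""
    # Suspect positions: bits on which the received copies disagree.
    suspects = [k for k in range(len(copies[0]))
                if len({c[k] for c in copies}) > 1]
    for base in copies + [majority(copies)]:
        for r in range(len(suspects) + 1):
            for subset in combinations(suspects, r):
                cand = "".join(
                    ("1" if base[k] == "0" else "0") if k in subset else base[k]
                    for k in range(len(base)))
                if fcs_ok(cand):
                    return cand
    return None  # all inversions failed: request further retransmission

# Worked example from the text: actual packet 10100011, three bad copies.
true_pkt = "10100011"
ref_fcs = crc32(true_pkt.encode())  # stands in for the FCS computed at the sender
copies = ["10101011", "10101111", "10100001"]
recovered = locate_and_invert(copies, lambda p: crc32(p.encode()) == ref_fcs)
```

With the example copies, the suspect positions are the 5th, 6th and 7th bits, and a single inversion of the 5th bit of Copy-1 already passes the FCS check.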

V. TWO MODIFICATIONS OF APC

SCHEME I (enhancing throughput): The APC as proposed by Leung [23] has a very low throughput. One basic parameter for measuring throughput is the average number of times (n) a packet is transmitted/retransmitted before it is successfully received. In APC, n >= 3, making the throughput less than or at best equal to (1/3) x 100%. Exactly, if stop-and-wait ARQ is employed with APC, n = 3/(1-p), where p is the probability that a packet is in error; p = 1 - (1-a)^N, where a is the bit error rate (BER) and N the packet length in bits. For GBN ARQ with APC, n = 3[{1+(L-1)p}/(1-p)], where L is the window size in GBN. Such a low throughput of APC does not guarantee the claim of bandwidth savings in APC. We propose that the normal GBN protocol be applied with the modification that, when a packet is acknowledged negatively, m (m = any odd number >= 3) copies each of the negatively acknowledged packet and of all other subsequent packets transmitted by that time be retransmitted. This will make:

n <= 3[{1+(L-1)p}/(1-p)].

This will raise the throughput of the proposed scheme over that of APC. The only issue is the choice of m, which is the deciding factor for higher throughput in the proposed scheme. The condition under which the proposed scheme provides better throughput is:

(m-1) <= 2/[1-(1-a)^N] ......(1)

For a set of a and N, the variation of the m required for the proposed scheme to have higher throughput than conventional APC is portrayed in Fig. (1).

Fig. (1): Variation of the number of copies with BER.

SCHEME II: We propose that when a packet is acknowledged negatively, the same packet be retransmitted along with a bit-wise XOR of that packet with the correctly received copy of the just-previous packet. Say the first packet, 11001100 (A), is received correctly, and the second packet, 11110000, is received erroneously as 01110000 (B). When the second packet is acknowledged negatively, the transmitter will transmit the following: 11110000 (a copy of the erroneous packet) and the XOR of the previously received correct packet with the present negatively acknowledged packet, i.e. in this case (11001100 XOR 11110000) = 00111100. Say these copies are both received erroneously, as 11001101 (C) and 10111100 (D). Using A and D, the receiver will reconstitute a copy of the second packet as A XOR D = 01110000 (E). The receiver now has three erroneous copies, B, C and E, and will apply MPC on them to recover a correct copy of the second packet. The proposed scheme will considerably enhance throughput, as 2 copies in place of 3 (as in APC) are transmitted.

VI. CONCLUSION AND FUTURE RESEARCH

We have proposed two suggestions for modification of APC for performance improvement in terms of throughput. All these modifications need to be compared through simulation studies to arrive at definite conclusions.
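The Scheme II exchange can be traced in a short Python sketch. The bit patterns A-E are the example values from the text; the helper `xor_bits` and the sender/receiver framing are illustrative assumptions, not code from the paper.

```python
def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length bit strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

# Sender side (Scheme II): on a NAK for packet 2, retransmit the packet
# itself plus its XOR with the last correctly received packet.
prev_ok = "11001100"               # packet 1 (A), received correctly
pkt2 = "11110000"                  # packet 2, original
xor_copy = xor_bits(prev_ok, pkt2) # 00111100, sent alongside pkt2

# Receiver side: both retransmissions arrive with channel errors.
B = "01110000"                # first erroneous reception of packet 2
C = "11001101"                # erroneous copy of the retransmitted packet
D = "10111100"                # erroneous copy of the XORed packet
E = xor_bits(prev_ok, D)      # reconstituted copy of packet 2
# The receiver now holds three erroneous copies B, C, E and applies MPC.
```

Because E = A XOR D, and D differs from A XOR pkt2 only where the channel corrupted it, E is simply another erroneous copy of packet 2 obtained at the cost of two transmissions instead of three.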

VII. REFERENCES
[1] C T Bhunia, A Few Modified ARQ Techniques, Proceedings of the International Conference on Communications, Computers & Devices, ICCCD-2000, 14-16 December 2000, IIT Kharagpur, India, Vol. II, pp. 705-708.
[2] C T Bhunia and A Chowdhury, ARQ Technique with Variable Number of Copies in Retransmission, Proceedings of the Conference on Computer Networking and Multimedia (COMNAM-2000), 21-22 December 2000, Jadavpur University, Calcutta, India, pp. 16-21.
[3] C T Bhunia and A Chowdhury, Performance Analysis of ARQ Techniques used in Computer Communication Using Delay as a Parameter, Proceedings of COMNAM-2000, Jadavpur University, Calcutta, India, pp. 22-24.
C T Bhunia, ARQ with Two Level Coding with Generalized Parity and i (i>1) Copies of Parts in Retransmission, Proceedings of the National Conference on Data Communications (NCDC-2000), Computer Society of India, Chandigarh, India, 7-8 April 2000, p. 19.
[4] C T Bhunia, ARQ Techniques: Review and Modifications, IETE Technical Review, Sept-Oct 2001, Vol. 18, No. 5, pp. 381-401.
[5] R J Benice and A H Frey Jr, An Analysis of Retransmission Schemes, IEEE Trans Comm Tech, COM-12, pp. 135-145, Dec 1964.
[6] S Lin, D Costello Jr and M J Miller, Automatic Repeat Request Error Control Schemes, IEEE Comm Mag, Vol. 22, pp. 5-17, Dec 1984.
[7] A R K Sastry, Improving Automatic Repeat Request (ARQ) Performance on Satellite Channels Under High Error Rate Conditions, IEEE Trans Comm, April 1977, pp. 436-439.
[8] Joel M Morris, On Another Go-Back-N ARQ Technique for High Error Rate Conditions, IEEE Trans Comm, Vol. 26, No. 1, Jan 1978, pp. 186-189.
[9] E J Weldon Jr, An Improved Selective Repeat ARQ Strategy, IEEE Trans Comm, Vol. 30, No. 3, March 1982, pp. 480-486.
[10] Don Towsley, The Stutter Go Back-N ARQ Protocol, IEEE Trans Comm, Vol. 27, No. 6, June 1979, pp. 869-875.
[11] Dimitri Bertsekas et al, Data Networks, Prentice Hall of India, 1992, Ch. 2.
[12] G E Keiser, Local Area Networks, McGraw-Hill, USA, 1995.
[13] N D Birrell, Pre-emptive Retransmission for Communication over Noisy Channels, IEE Proc Part F, Vol. 128, 1981, pp. 393-400.
[14] H Bruneel and M Moeneclaey, On the Throughput Performance of Some Continuous ARQ Strategies with Repeated Transmissions, IEEE Trans Comm, Vol. COM-34, 1986, pp. 244-249.
[15] Y Wang and S Lin, A Modified Selective Repeat Type-II Hybrid ARQ System and its Performance Analysis, IEEE Trans Comm, Vol. COM-31, May 1983, pp. 593-608.
[16] S B Wicker and M J Bartz, Type-II Hybrid ARQ Protocols using Punctured MDS Codes, IEEE Trans Comm, Vol. 42, Feb-April 1994, pp. 1431-1440.
[17] O Yuen, Design Trade-offs in Cellular/PCS Systems, IEEE Comm Mag, Vol. 34, No. 9, Sept 1996, pp. 146-152.
[18] H Liu, H Ma, M E Zarki and S Gupta, Error Control Schemes for Networks: An Overview, Mobile Networks and Applications, Vol. 2, 1997, pp. 167-182.
[19] K Pahlavan and A H Levesque, Wireless Data Communication, Proc. IEEE, Vol. 82, No. 9, Sept 1994, pp. 1398-1430.
[20] Dzmitry Kliazovich, Nadhir Ben Halima and Fabrizio Granelli, Context-Aware Receiver-Driven Retransmission Control in Wireless Local Area Networks (found on the Internet).
[21] Y Hirayama, H Okada, T Yamazato and M Katayama, Time-Dependent Analysis of the Multiple-Route Packet Combining Scheme in Wireless Multihop Networks, Int J Wireless Information Networks, Vol. 42, No. 1, Jan 2005, pp. 35-44.
[22] Yiu-Wing Leung, Aggressive Packet Combining for Error Control in Wireless Networks, IEICE Trans Comm, Vol. E83-B, No. 2, Feb 2000, pp. 380-385.
[23] Shyam S Chakraborty et al, An ARQ Scheme with Packet Combining, IEEE Comm Letters, Vol. 2, No. 7, July 1998, pp. 200-202.
[24] Shyam S Chakraborty et al, An Exact Analysis of an Adaptive GBN Scheme with Sliding Observation Interval Mechanism, IEEE Comm Letters, Vol. 3, No. 5, May 1999, pp. 151-153.
[25] Shyam S Chakraborty et al, An Adaptive ARQ Scheme with Packet Combining for Time Varying Channels, IEEE Comm Letters, Vol. 3, No. 2, Feb 1999, pp. 52-54.
[26] C T Bhunia, Modified Packet Combining Scheme using Error Forecasting Decoding to Combat Error in Network, Proc. ICITA'05 (IEEE Computer Soc.), Sydney, Vol. 2, 4-7 July 2005, pp. 641-646.
[27] C T Bhunia, Packet Reversed Packet Combining Scheme, Proc. IEEE Computer Soc., CIT'07, Aizu University, Japan, pp. 447-451.
[28] C T Bhunia, Error Forecasting Schemes of Error Correction at Receiver, Proc. ITNG'2008, IEEE Computer Society, USA, pp. 332-336.
[29] C T Bhunia, Exact Analyzing Performance of New and Modified GBN Scheme for Noisy Wireless Environment, J Inst Engrs, India, Vol. 89, Jan 2009, pp. 27-31.
[30] S B Wicker, Adaptive Rate Error Control Through the Use of Diverse Combining and Majority Logic Decoding in a Hybrid ARQ Protocol, IEEE Trans Comm, Vol. 39, No. 3, March 1991, pp. 380-385.
[31] C T Bhunia, IT, Network & Internet, New Age International Publishers, India, 2005.
[32] Michele Zorzi and Ramesh R Rao, Lateness Probability of a Retransmission Scheme for Error Control on a Two-State Markov Channel, IEEE Transactions on Communications, Vol. 47, No. 10, October 1999, pp. 1537-1548.
[33] C T Bhunia et al, Pre-Emptive Dynamic Source Routing: A Repaired Back Up Approach and Stability Based DSR with Multiple Routes, J Comp & Information Tech (CIT), Croatia, Vol. 16, No. 2, 2008, pp. 91-99.

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013


REVIEW AND SECURITY ANALYSIS OF AN EFFICIENT BIOMETRIC-BASED REMOTE USER AUTHENTICATION SCHEME USING SMART CARDS

Subhasish Banerjee
Department of Computer Science & Informatics, Bengal Institute of Technology & Management, Santiniketan, India
E-mail: [email protected]

Kiran Sankar Das
M.Tech (CSE), Bengal Institute of Technology & Management, Santiniketan, India
E-mail: [email protected]

Uddalak Chatterjee
Department of Computer Science & Informatics, Bengal Institute of Technology & Management, Santiniketan, India
E-mail: [email protected]

ABSTRACT
A path-breaking scheme for biometric-based remote user authentication was proposed by Li and Hwang in 2010. Later, in 2011, A. K. Das showed some shortfalls of the Li-Hwang scheme and proposed an efficient biometric-based remote user authentication scheme using smart cards that overcomes the shortfalls of the original Li-Hwang scheme and provides mutual authentication. In this paper, we review and analyze Das's scheme and point out some existing flaws, mainly based on tampering with the smart card and revealing its stored information.

I. INTRODUCTION
In the field of recent e-commerce and m-commerce, remote user authentication has been a major research domain. However, day-by-day progress in technology and network access methods has exposed serious security weaknesses in the remote user authentication process, due to weak password management and advanced attack techniques. Several schemes [1-6] have shown various ways to tamper with user authentication and gain access unethically to various authentication processes. In traditional identity-based user recognition systems, remote user authentication was based on passwords. But passwords can be guessed easily with basic dictionary attacks. Later, to overcome these problems, passwords were encrypted with cryptographic secret keys. But long cryptographic keys are difficult to memorize; moreover, they can be lost, forgotten, or easily shared, and are therefore unable to provide non-repudiation. For client-server systems, password-based authentication with smart cards is proposed in [7-8]. A biometric system is basically a pattern recognition system which extracts a feature set from the user's provided biometric and verifies it against the template set stored in the system's database [9-11]. In recent work [12-14], biometric-based remote user authentication schemes have shown strong protection against password theft and fake-user attacks. Some advantageous features of biometric keys are as follows:

• Biometric keys cannot be lost or forgotten.
• Biometric keys are very difficult to share or copy.
• Biometric keys are extremely hard to forge or distribute.
• Biometric keys cannot be guessed.
• Someone's biometrics cannot easily be used by others.

Therefore, biometric-key-based authentication is more secure and reliable than traditional password-based authentication schemes. In this report we analyze Das's scheme and show that it is still vulnerable to various attacks and does not provide mutual authentication between the user and the server. In [16-17], researchers revealed that


the secret information stored in a smart card can be revealed by monitoring its power consumption. Therefore an attacker can obtain the information stored in a user's smart card and can also intercept the message packets communicated between user and server. The rest of this paper is organized as a short review of A. K. Das's scheme [15], followed by the security analysis.

II. REVIEW OF A. K. DAS'S SCHEME
In 2011 Das proposed an improved and efficient biometric-based remote user authentication scheme using smart cards. The scheme is composed of three phases: a. Registration phase, b. Login phase, c. Authentication phase. The notations used in this report are shown in the following table.

Notation    Description
Ci          User i
Ri          Trusted registration centre
Si          Server
PWi         Password shared between user and server
IDi         Identity of the user i
Bi          Biometric template of the user i
h(.)        A secure one-way hash function
Xs          Secret information maintained by the server
Rc          A random number chosen by client Ci
Rs          A random number chosen by server Si
A||B        Data A concatenated with data B
A⊕B         XOR operation of A and B

Registration Phase:
A.) Before the remote user Ci can log in to the system, Ci first enters his/her biometrics on a specific device and offers his/her identification and password to the registration centre Ri.
B.) Ri then computes the values fi, ei and ri, where Xs is a secret value generated by the server.
C.) Ri stores (IDi, h(.), fi, ei, ri) on the user's smart card and sends it to the user via a secure channel.


Login Phase. The user has to perform the following steps to log in to the system.

A.) Ci inserts his/her smart card into the card reader and provides his/her biometric information Bi on the specific device, which verifies the user's biometrics. If the check holds, Ci passes biometric verification.
B.) Ci inputs IDi and PWi; the smart card checks the supplied password against the stored value and, if it holds, computes the components of the login request.
C.) Ci sends the login request message <IDi, M2, M3> to the server Si.

Authentication Phase. After receiving the login request message, the server performs the following steps.

A.) Si checks the format of IDi.
B.) If it is valid, Si verifies the received values; if the verification is satisfied, Si computes its reply, where Rs is a random number generated by the server.
C.) Si then computes the components of the reply message.
D.) Then Si sends the reply message to Ci.
E.) After receiving the message sent by Si, Ci verifies it; if it is satisfied, Ci computes the corresponding values.
F.) Ci verifies the server's response; if it holds, Ci computes the final acknowledgement.
G.) Then Ci sends the acknowledgement message to Si.
H.) After receiving the message, Si verifies whether the received and the recomputed values are equal. If they are equal, Si accepts the user's login request. The exchange is described in Figure 1.

Figure 1: Login and Authentication in A. K. Das's Scheme

III. SECURITY ANALYSIS OF A. K. DAS'S SCHEME
In this part we analyze the security aspects of Das's scheme. To do so, we assume that an attacker could obtain the secret information stored in the smart card by continuously monitoring and analysing its power consumption [16-17], and could also obtain the communicated messages by intercepting the channel between the user and the server. We discuss various attacks on Das's scheme, such as the User Impersonation Attack and the Server Masquerading Attack, and finally show how the scheme fails to provide mutual authentication.

1. User Impersonation Attack: Suppose an attacker is able to read the information stored in the smart card, obtaining the secret values, and also intercepts the login message <IDi, M2, M3> from user Ci. The attacker then performs the following steps:

A. The attacker first computes a forged login message, using a random number generated by the attacker.
B. The attacker then sends the forged message to the server Si.
C. Upon receiving the forged message, the server Si checks the format of IDi and verifies the received values. As the forged message is formed in the same way as the real user's, the verification passes.
D. Si then treats the attacker as a valid user and therefore computes the corresponding reply values.
E. Si then sends the reply message to the attacker in the authentication phase.

2. Server Masquerading Attack: If the attacker can obtain the secret data and intercept the messages exchanged between the server and the real user in the login and authentication phases, it can act as a server and retrieve messages from the real user.

A.) The attacker performs the server-side calculations, using a random number generated by the attacker.
B.) The attacker then sends the forged message to the user Ci.
C.) Upon receiving it, Ci checks the verification condition, which holds, and therefore computes the subsequent values and verifies the final condition. This also holds, and hence Ci is convinced that the message came from a trusted legal server.
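The failure mode can be illustrated with a toy protocol. The sketch below is not Das's scheme (its message equations are not reproduced in this report); the names `register`, `login`, `server_accepts` and the hash construction are illustrative assumptions. It only shows the general principle: when the card-resident values alone suffice to rebuild a valid login message, extracting them via power analysis [16-17] is enough to impersonate the user.

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    """A stand-in for the one-way hash h(.)."""
    return hashlib.sha256(b"|".join(parts)).digest()

SERVER_SECRET = b"x_s"  # plays the role of Xs, known only to the server

def register(identity: bytes) -> bytes:
    # The card stores r = h(ID || Xs): the card-resident secret an
    # attacker can extract through power analysis.
    return H(identity, SERVER_SECRET)

def login(identity: bytes, r: bytes) -> tuple:
    rc = os.urandom(16)                 # random nonce, like Rc
    return identity, rc, H(r, rc)       # login request <ID, Rc, M2>

def server_accepts(identity: bytes, rc: bytes, m2: bytes) -> bool:
    return H(H(identity, SERVER_SECRET), rc) == m2

# Legitimate flow
card_r = register(b"alice")
assert server_accepts(*login(b"alice", card_r))

# Impersonation: an attacker who extracted card_r from the smart card
# can build a fresh, valid login message without knowing any password.
forged = login(b"alice", card_r)
assert server_accepts(*forged)
```

The forged request is indistinguishable from a genuine one because nothing in it depends on a secret the card does not itself store.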


IV. CONCLUSION
In this paper we have reviewed and analyzed Das's scheme and shown that it fails to provide security against various attacks. A better biometric-based remote user authentication scheme can therefore be proposed to enhance all the security aspects.

V. REFERENCES


1. Lamport, L., "Password authentication with insecure communication", Communications of the ACM, vol. 24, no. 11, pp. 770-772, 1981.
2. Hwang, M. S., Li, L. H., "A new remote user authentication scheme using smart cards", IEEE Transactions on Consumer Electronics, vol. 46, no. 1, pp. 28-30, 2000.
3. Yoon, E. J., Ryu, E. K., Yoo, K. Y., "Further improvement of an efficient password based remote user authentication scheme using smart cards", IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 612-614, 2004.
4. Das, M. L., Saxena, A., Gulati, V. P., "A dynamic ID-based remote user authentication scheme", IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 629-631, 2004.
5. Lin, C. W., Tsai, C. S., Hwang, M. S., "A new strong password authentication scheme using one-way hash functions", Journal of Computer and Systems Sciences International, vol. 45, no. 4, pp. 623-626, 2006.
6. Bindu, C. S., Reddy, P., Satyanarayana, B., "Improved remote user authentication scheme preserving user anonymity", International Journal of Computer Science and Network Security, vol. 83, pp. 62-66, 2008.
7. Fan, L., Li, J. H., Zhu, H. W., "An enhancement of timestamp-based password authentication scheme", Computers & Security, vol. 21, no. 7, pp. 665-667, 2002.
8. Shen, J. J., Lin, C. W., Hwang, M. S., "Security enhancement for the timestamp-based password authentication scheme using smart cards", Computers & Security, vol. 22, no. 7, pp. 591-595, 2003.
9. Jain, A. K., Ross, A., Prabhakar, S., "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, 2003.
10. Maltoni, D., Maio, D., Jain, A. K., Prabhakar, S., "Handbook of Fingerprint Recognition", Springer, New York, 2nd Ed., 2009.
11. Prabhakar, S., Pankanti, S., Jain, A. K., "Biometric recognition: security and privacy concerns", IEEE Security and Privacy Magazine, vol. 1, no. 2, pp. 33-42, 2003.
12. Khan, M. K., Zhang, J., Wang, X., "Chaotic hash-based fingerprint biometric remote user authentication scheme on mobile devices", Chaos, Solitons & Fractals, vol. 35, no. 3, pp. 519-524, 2008.
13. Li, C. T., Hwang, M. S., "An efficient biometric-based remote user authentication scheme using smart cards", Journal of Network and Computer Applications, vol. 33, pp. 1-5, 2010.
14. Lin, C. H., Lai, Y. Y., "A flexible biometrics remote user authentication scheme", Computer Standards & Interfaces, vol. 27, no. 1, pp. 19-23, 2004.
15. Das, A. K., "Analysis and improvement on an efficient biometric-based remote user authentication scheme using smart cards", IET Information Security, vol. 5, no. 3, pp. 145-151, 2011.
16. Kocher, P., Jaffe, J., Jun, B., "Differential power analysis", Proceedings of Advances in Cryptology, pp. 388-397, 1999.
17. Messerges, T. S., Dabbish, E. A., Sloan, R. H., "Examining smart-card security under the threat of power analysis attacks", IEEE Transactions on Computers, vol. 51, no. 5, pp. 541-552, 2002.

Evolution Strategy for the C-Means Algorithm: Application to multimodal image segmentation Francesco Masulli DIBRIS - Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi - University of Genoa Via Dodecaneso 35, 16146 Genoa - Italy and SICRMM Temple University, Philadelphia - PA [email protected]

Anna Maria Massone CNR - SPIN via Dodecaneso 33 - I-16146 Genoa - Italy [email protected]

Andrea Schenone DIBRIS - Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi - University of Genoa Via Dodecaneso 35, 16146 Genoa - Italy [email protected]

February 24, 2013

Abstract
Evolution Strategies (ES) are a class of Evolutionary Computation methods for continuous parameter optimization problems founded on the model of organic evolution. In this paper we present a novel clustering algorithm based on the application of an ES to the search for the global minimum of the C-Means (CM) objective functional. The new algorithm is then applied to the clustering step of an interactive


system for the segmentation of multimodal medical volumes obtained by different medical imaging diagnostic tools. In order to aggregate voxels with similar properties in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. As a consequence, in this application clustering supports an inference process based on the complementary information carried by each image (e.g. functional or anatomical) in order to extract regions corresponding to the different anatomical and/or pathological tissues. A quantitative comparison of segmentation results obtained by the original CM and by the new algorithm is reported in the paper.

1 Introduction

C-Means (CM) [6] is a widely used clustering method based on a simple and efficient numerical approximation to the maximum likelihood technique for the estimation of the parameters of probability mixtures [6, 3]. The CM shows some intrinsic problems. In particular, it is subject to the problem of trapping in local optima of its objective function. In the clustering literature, many algorithms based on fuzzy set theory have been proposed in order to overcome this limit of CM, among them the Fuzzy C-Means algorithm [3], Deterministic Annealing [20], and the Possibilistic C-Means [12, 13]. As shown by Miyamoto and Mukaidono in [18], all those methods are different kinds of regularization [26] of the local optima problem of CM. Nevertheless, even with these methods we have no guarantee of finding the optimal solution of the clustering problem. In order to overcome this problem, in this paper we present a novel clustering algorithm based on the application of a global search technique based on an Evolution Strategy (ES) [19, 25, 1] to the minimization of the objective function of the C-Means algorithm [6]. Evolution Strategies are a class of methods for continuous parameter optimization problems founded on the model of organic evolution. In this paper we present a novel clustering algorithm based on the application of a (µ, λ)-ES to the search for the global minimum of the classical C-Means (CM) objective function [6, 3]. The new Evolution Strategy based C-Means (ESCM) algorithm is applied to the clustering step of an interactive system for the segmentation of multimodal medical volumes [22]. This computer-based system supports the clinical oncologist in the tasks


of delineating the volumes to be treated by radiotherapy and surgery, and of quantitatively assessing (in terms of tumor mass or detection of metastases) the effect of oncological treatments. In order to aggregate voxels with similar properties in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. Clustering algorithms can point out clusters of close voxels in that multidimensional feature space representing the probability distribution of intensities in the different modalities, and therefore sets of voxels with similar intensity values can be defined within the whole multimodal medical volume. These sets of voxels can then be used to delineate regions of interest, that is, to make a segmentation of the multimodal volumetric image. In this application clustering supports an inference process based on the complementary information carried by each image (e.g. functional or anatomical), each of them considered as an independent dimension of the input space, in order to extract regions corresponding to the different anatomical and/or pathological tissues. A quantitative comparison of segmentation results obtained by the original CM and by the new algorithm is reported in the paper. The paper is organized as follows. The next section introduces the C-Means following the parametric learning framework. In Sect.s 3 and 4 we give some material on Evolution Strategies and present a novel application of them to clustering. In Sect. 5 we set clustering as the basic step of an inference process that, starting from raw data, mines regions of interest in multimodal medical volumes. In Sect. 6 we present an experimental comparison of the application of the CM and of the new clustering algorithm to the segmentation of multimodal images. Conclusions are drawn in Sect. 7.

2 Parametric Learning Approach to Clustering

2.1 Maximum Likelihood estimation of cluster parameters

Let X = {xk | xk ∈ R^d, k = 1, ..., n} be a set of unlabeled randomly sampled vectors xk = (x1k, ..., xdk), or training set, and Y = {yj | yj ∈ R^d, j = 1, ..., c}


be the set of centers of the clusters (or classes) ωj. Following a parametric learning approach, we make the following assumptions:

1. the samples come from a known number c of classes ωj, j ∈ {1, ..., c};
2. the a priori probabilities P(ωj) (i.e. the probability of drawing patterns of class ωj from X) are known;
3. the forms of the class-conditional probability densities p(x | ωj, Θj) (i.e. the probability density of sample xk inside class ωj) are known, while the parameter vectors Θj are unknown.

Note that the third assumption reduces the clustering problem to the estimation of the vectors Θj (parametric learning). In this setting, we assume that samples are obtained by selecting a class ωj and then selecting a pattern x according to the probability law p(x | ωj, Θj), i.e.:

p(x | Θ) = Σ_{j=1}^{c} p(x | ωj, Θj) P(ωj)    (1)

where Θ = (Θ1, ..., Θc). A density function of this form is called a mixture density [6], the p(xk | ωj, Θj) are called the component densities, and the P(ωj) are called the mixing parameters. A well-known parametric statistics method for estimating the parameter vector Θ is based on maximum likelihood [6]. It assumes that the parameter vector Θ is fixed but unknown. The likelihood of the training set X is the joint density

p(X | Θ) = Π_{k=1}^{n} p(xk | Θ).    (2)
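Eq.s 1 and 2 translate directly into code. The following NumPy sketch (the function name is ours, and Gaussian component densities are assumed, anticipating Eq. 8) evaluates the logarithm of the likelihood of Eq. 2 for given parameters:

```python
import numpy as np

def mixture_log_likelihood(X, means, covs, priors):
    """log p(X | Theta) = sum_k log sum_j p(x_k | w_j, Theta_j) P(w_j)."""
    n, d = X.shape
    comp = np.zeros((n, len(priors)))
    for j, (y, S, P) in enumerate(zip(means, covs, priors)):
        diff = X - y
        quad = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(S), diff)
        norm = (2 * np.pi) ** (-d / 2) * np.linalg.det(S) ** -0.5
        comp[:, j] = P * norm * np.exp(-0.5 * quad)  # P(w_j) p(x | w_j, Theta_j)
    return np.log(comp.sum(axis=1)).sum()
```

The log of the product in Eq. 2 becomes a sum of logs, which is numerically safer than multiplying n small densities.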

Then the maximum likelihood estimate Θ̂ is that value of Θ that maximizes the likelihood of the observed training set X. If p(X | Θ) is a differentiable function of Θ, by maximizing the logarithm of the likelihood we can obtain the following conditions for the maximum-likelihood estimate Θ̂j:

Σ_{k=1}^{n} P(ωj | xk, Θ̂) ∇_{Θ̂j} log p(xk | ωj, Θ̂j) = 0  ∀ j.    (3)

Moreover, if the a priori class probabilities P(ωj) are also unknown, the clustering problem can be faced as the constrained maximization of the likelihood p(X | Θ) over Θ and P(ωj), subject to the constraints:

P(ωj) ≥ 0  and  Σ_{j=1}^{c} P(ωj) = 1.    (4)

If p(X | Θ) is differentiable and the a priori probability estimates P̂(ωj) ≠ 0 for any j, then P̂(ωj) and Θ̂j must satisfy:

P̂(ωj) = (1/n) Σ_{k=1}^{n} P̂(ωj | xk, Θ̂)    (5)

and

Σ_{k=1}^{n} P̂(ωj | xk, Θ̂) ∇_{Θ̂j} log p(xk | ωj, Θ̂j) = 0    (6)

where

P̂(ωj | xk, Θ̂) = p(xk | ωj, Θ̂j) P̂(ωj) / Σ_{h=1}^{c} p(xk | ωh, Θ̂h) P̂(ωh).    (7)

Let us assume now that the component densities are multivariate normal, i.e.:

p(xk | ωj, Θ̂j) = (1 / ((2π)^{d/2} |Σj|^{1/2})) exp[-(1/2)(xk - yj)^t Σj^{-1} (xk - yj)]    (8)

where d is the dimensionality of the feature space, yj is the mean vector, Σj is the covariance matrix, (xk - yj)^t is the transpose of (xk - yj), Σj^{-1} is the inverse of Σj, and |Σj| the determinant of Σj. In the general case (i.e. when yj, Σj, and P(ωj) are all unknown) the maximum likelihood principle yields useless singular solutions. As shown by Duda and Hart [6], we can obtain meaningful solutions by considering the largest of the finite local maxima of the likelihood function. The local-maximum-likelihood estimate for P(ωj) is the same as Eq. 5, while

ŷj = Σ_{k=1}^{n} P̂(ωj | xk, Θ̂j) xk / Σ_{k=1}^{n} P̂(ωj | xk, Θ̂j)    (9)


Table 1: C-Means (CM) Algorithm. 1. assign the number of clusters and the tolerance ǫ1 for the stop criterion; 2. initialize the centers of clusters; 3. do until any center changes less than ǫ1 ; (a) assign the samples to the clusters with smaller Euclidean distance using Eq.s 12 and 14; (b) recalculate the centers using Eq. 9; 4. end do.
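Table 1 can be sketched in a few lines of NumPy (the function name and the random initialization inside the hyperbox I of Eq. 15 are our choices):

```python
import numpy as np

def c_means(X, c, eps1=1e-4, seed=0):
    """C-Means (Table 1): alternate hard assignments (Eq.s 12, 14)
    and center updates (Eq. 9) until no center moves more than eps1."""
    rng = np.random.default_rng(seed)
    # Step 2: initialize the centers at random inside the data hyperbox I
    lo, hi = X.min(axis=0), X.max(axis=0)
    Y = rng.uniform(lo, hi, size=(c, X.shape[1]))
    while True:
        # Step 3a: assign each sample to the nearest center (Euclidean distance)
        dist = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Step 3b: recalculate the centers as the means of their clusters
        newY = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                         else Y[j] for j in range(c)])
        if np.linalg.norm(newY - Y, axis=1).max() < eps1:   # stop criterion
            return newY, labels
        Y = newY
```

Empty clusters simply keep their previous center; more elaborate re-seeding policies are possible but outside the scope of this sketch.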

Σ̂j = Σ_{k=1}^{n} P̂(ωj | xk, Θ̂j)(xk - ŷj)(xk - ŷj)^t / Σ_{k=1}^{n} P̂(ωj | xk, Θ̂j)    (10)

where (from Eq.s 7 and 8)

P̂(ωj | xk, Θ̂j) = P̂(ωj) |Σ̂j|^{-1/2} exp[-(1/2)(xk - ŷj)^t Σ̂j^{-1} (xk - ŷj)] / Σ_{h=1}^{c} P̂(ωh) |Σ̂h|^{-1/2} exp[-(1/2)(xk - ŷh)^t Σ̂h^{-1} (xk - ŷh)].    (11)

The set of Eq.s 5, 9, 10, and 11 can be interpreted as a gradient ascent or hill-climbing procedure for maximizing the likelihood. A Lloyd-Picard iteration can start with Eq. 11, using initial estimates to evaluate P̂(ωj | xk, Θ̂j), and then use Eq.s 5, 9, and 10 to update the estimates. Like all hill-climbing procedures, the results of this iteration depend upon the starting point; moreover, the inversion of Σ̂j is quite time consuming, and there is the possibility of multiple solutions.
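The Lloyd-Picard iteration can be sketched as follows (names are ours; the (2π)^{-d/2} factor of Eq. 8 is dropped in the posterior since it cancels in the normalization of Eq. 11):

```python
import numpy as np

def lloyd_picard(X, means, covs, priors, iters=50):
    """Iterate Eq.s 11 (posteriors), 5 (priors), 9 (means), 10 (covariances)."""
    n, d = X.shape
    means = np.array(means, dtype=float)
    covs = np.array(covs, dtype=float)
    priors = np.array(priors, dtype=float)
    c = len(priors)
    for _ in range(iters):
        # Eq. 11: unnormalized posteriors, then normalize over the classes
        post = np.zeros((n, c))
        for j in range(c):
            diff = X - means[j]
            quad = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[j]), diff)
            post[:, j] = priors[j] * np.linalg.det(covs[j]) ** -0.5 * np.exp(-0.5 * quad)
        post /= post.sum(axis=1, keepdims=True)
        w = post.sum(axis=0)                     # n * P_hat(w_j)
        priors = w / n                           # Eq. 5
        means = (post.T @ X) / w[:, None]        # Eq. 9
        for j in range(c):                       # Eq. 10
            diff = X - means[j]
            covs[j] = (post[:, j, None] * diff).T @ diff / w[j]
    return means, covs, priors
```

On well-separated data the iteration converges in a handful of steps, but (as noted above) the fixed point reached depends on the starting estimates.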

2.2 C-Means (CM) Algorithm

An efficient implementation of the previous procedure is based on the following approximation of Eq. 11:


P(ωj | xk, Θ̂j) = 1 if Dj(xk) = min_{1≤h≤c} Dh(xk), and 0 otherwise    (12)

where Dj(xk) is a local cost function or distortion measure, which in many cases can be taken as the scaled Mahalanobis distance Mj(xk),

Mj²(xk) ≡ |Σj|^{1/d} (xk - ŷj)^t Σj^{-1} (xk - ŷj).    (13)

This observation is the rationale of the C-Means (CM), also named the Basic Isodata algorithm [6] and Hard C-Means [3]. It is worth noting that the usage of the Mahalanobis distance still involves a heavy computational overhead. In many implementations of CM a strong approximation of Dj(xk) is adopted, using the Euclidean distance Ej(xk):

Ej(xk) ≡ ||xk - ŷj||.    (14)

The resulting CM algorithm is an efficient approximate way to obtain the maximum likelihood estimate of the centers of clusters [6]. One implementation of the CM using the Euclidean distance is illustrated in Tab. 1. In this algorithm the initialization of the number of clusters (Step 1) is performed using a priori knowledge of the problem. At Step 2, the positions of the centers of clusters can be initialized either using a priori knowledge or at random in the d-dimensional hyperbox I:

I = Π_{i=1}^{d} [min_k(xik), max_k(xik)],   I ⊂ R^d    (15)

As demonstrated by Bezdek [3], the CM, while maximizing the likelihood of the training set, at the same time minimizes a global error function Jw defined as the expectation of the squared local cost function:

Jw ≡ <D²> = Σ_{k=1}^{n} Σ_{j=1}^{c} ujk Dj²(xk)    (16)

where ujk ≡ P(ωj | xk) or, in general, a membership value of pattern xk (k ∈ {1, ..., n}) in cluster ωj (j ∈ {1, ..., c}). The CM, while being an efficient approximation of the maximum likelihood procedure for estimating the centers of clusters, shows some intrinsic problems. In particular, it is subject to the problem of trapping in local minima of Jw (i.e. in local maxima of the likelihood).


This locality in searching for minima is its main limitation, in particular when we try to apply this algorithm as the basis for inference procedures. In order to overcome these problems, many attempts, based on different fuzzy clustering paradigms, have been proposed in the literature. The most popular fuzzy clustering method is the Fuzzy C-Means algorithm by Bezdek [3], which is based on the constrained minimization of a generalization of the CM global error expectation. We cite also the technique proposed by Rose et al. [20], based on the maximum entropy principle [9] and using a Deterministic Annealing technique, and the Possibilistic C-Means algorithm by Krishnapuram and Keller [12, 13]. In [18], Miyamoto and Mukaidono showed that the Fuzzy C-Means [3] and the maximum entropy methods correspond to different types of application of regularization theory to the CM in order to reduce the problem of local minima. An alternative approach to the solution of the local minima problem of CM can be based on the application of global search techniques. In [5] we proposed a global search method for the minimization of Jw based on the Simulated Annealing technique [11]. In the next sections we shall present some search techniques based on Evolution Strategies, which will be applied to the clustering problem.

3 Evolution Strategies

Evolution Strategies (ES) [19, 25, 1] are a class of Evolutionary Computation methods for continuous parameter optimization problems founded on the model of organic evolution. During each generation (iteration of the ES algorithm) a population of individuals (potential solutions) is evolved to produce new solutions. Only the highest-fit solutions survive to become parents for the next generation. In biological terms, the genetic encoding of an individual is called its genotype. New genotypes are created from existing ones by modifying the genetic material. The interaction of a genotype with its environment induces an observed response called the phenotype. Reproduction takes place at the genotype level, while survival is determined at the phenotype level. Only highly fit individuals survive and reproduce in future generations.


Individuals in the population are composed of object variables and strategy parameters. In basic ES, an individual is represented as a vector

a = (x1, ..., xn, σ1, ..., σn) ∈ ℜ^{2n}    (17)

consisting of n object variables and their corresponding n standard deviations for individual mutations. There are two variants of an ES: the multi-membered ES plus strategies (denoted (µ + λ)-ES) and the multi-membered ES comma strategies (denoted (µ, λ)-ES). In a (µ + λ)-ES, µ parents create λ ≥ 1 offspring individuals by means of recombination and mutation, and the µ best of parents and offspring are selected to form the next population. In a (µ, λ)-ES, with λ > µ ≥ 1, the µ best individuals are selected from the offspring only. We shall now discuss the ES operators, i.e. recombination, mutation, and selection.

3.1 Recombination

Recombination (or crossover) in ES is performed on individuals of the population. The most used recombination rules are:

1. no recombination;
2. discrete recombination: the components of two parents are selected at random from either the first or the second parent to form an offspring individual;
3. intermediate recombination: offspring components lie somewhere between the corresponding components of the parents;
4. global and discrete recombination: one parent is selected and fixed, and for each component a second parent is selected anew from the population to determine the component value using discrete recombination;
5. global and intermediate recombination: one parent is selected and fixed, and for each component a second parent is selected anew from the population to determine the component value using intermediate recombination.

The recombination operator may be different for object variables and strategy parameters.
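Rules 2 and 3 above can be sketched in NumPy (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def discrete_recombination(p1, p2):
    """Rule 2: each offspring component is copied from one parent at random."""
    mask = rng.random(p1.shape) < 0.5
    return np.where(mask, p1, p2)

def intermediate_recombination(p1, p2):
    """Rule 3: each offspring component lies between the parents' components."""
    u = rng.random(p1.shape)
    return u * p1 + (1.0 - u) * p2
```

The global variants (rules 4 and 5) apply the same component-wise operations but redraw the second parent from the population for each component.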


3.2 Mutation

In a mutation, each xj is mutated by adding an individual (0, σj)-normally distributed random number. The σj themselves are also subject to mutation and recombination (self-adaptation of strategy parameters [24]), and a complete mutation step m(a) = a′ is obtained by the following equations:

s = exp(N(0, τ′))    (18)

σ′j = σj · exp(Nj(0, τ)) · s    (19)

x′j = xj + Nj(0, σ′j)    (20)

Mutation is performed on the σj by multiplication with two log-normally distributed factors: one individual factor, sampled for each σj (τ = 1/√(2√n)), and one common factor s (τ′ = 1/√(2n)), sampled once per individual. This way, a scaling of mutations along the coordinate axes can be learned by the algorithm itself, without an exogenous control of the σj. More sophisticated ES using so-called correlated mutations are presented in [1].
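Eq.s 18-20 in code (a sketch; the helper name is ours, and the two learning rates follow the values quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(x, sigma):
    """Self-adaptive mutation step m(a) = a' of Eq.s 18-20."""
    n = len(x)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))    # individual factor rate
    tau_common = 1.0 / np.sqrt(2.0 * n)      # common factor rate
    s = np.exp(rng.normal(0.0, tau_common))  # Eq. 18: one sample per individual
    new_sigma = sigma * np.exp(rng.normal(0.0, tau, n)) * s  # Eq. 19
    new_x = x + rng.normal(0.0, 1.0, n) * new_sigma          # Eq. 20
    return new_x, new_sigma
```

Because both factors are log-normal, the mutated step sizes remain strictly positive without any explicit clipping.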

3.3 Selection

Selection for survival is completely deterministic, as it is based only on the rank of fitness. It is also called an extinctive selection, as the λ − µ worst individuals are definitively excluded from contributing offspring to the next generation. It is worth noting that the (µ + λ)-ES is elitist and therefore, while its performance improves monotonically, the implemented search is local and unable to deal with changing environments. On the contrary, the (µ, λ)-ES enables the search algorithm to escape from local optima, to follow a moving optimum, to deal with noisy objective functions, and to self-adapt the strategy parameters effectively. The ratio µ/λ is named the degree of extinctiveness and is linked to the probability of locating the global optimum: if it is large there is high convergence reliability, whereas if it is small there is high convergence velocity. Investigations presented in [24] suggest an optimal ratio of µ/λ = 1/7.
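The comma strategy's selection step is a rank truncation; a minimal sketch (the function name is ours, and minimization of the objective is assumed, matching the use of Jw in the ESCM below):

```python
def comma_selection(offspring, fitness, mu):
    """(mu, lambda)-selection: deterministically keep the mu best offspring;
    parents are always discarded (extinctive, rank-based)."""
    ranked = sorted(zip(fitness, offspring), key=lambda pair: pair[0])
    return [individual for _, individual in ranked[:mu]]
```

A plus-strategy variant would simply rank parents and offspring together before truncating.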


Table 2: Evolution Strategy based C-Means (ESCM) algorithm.
1. assign µ, λ, the number of clusters, and the threshold ε2;
2. initialize the population;
3. evaluate Jw for each individual (Eq. 16);
4. do while ∆Jw^best/Jw^best is greater than ε2:
5. count1 = 0;
(a) while count1 less than µ:
    i. count1++;
    ii. select by rank two individuals for mating;
    iii. order consistently the centers of clusters in both selected individuals using algorithm RI (Tab. 3);
    iv. crossover object variables (discrete recombination);
    v. crossover strategy parameters (intermediate recombination);
    vi. mutate the individual as shown in Sect. 3.2;
(b) end do;
(c) evaluate Jw for each individual (Eq. 16);
(d) select the µ fittest individuals for the next population;
6. end do.
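The procedure of Tab. 2 can be rendered compactly as follows. This is a hedged Python sketch, not the authors' implementation: Jw is taken as the within-clusters sum of squared distances with crisp memberships, a fixed number of generations stands in for the ε2 stop test, and all names and defaults are our own.

```python
import numpy as np

def jw(centers, X):
    """Within-clusters sum of squared distances: our reading of Eq. 16,
    with crisp nearest-center memberships."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def reindex(ca, cb):
    """RI algorithm (Tab. 3): greedily pair closest centers of the two
    parents so that equal indices are likely to mean the same cluster."""
    c = len(ca)
    M = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=-1)
    order = np.empty(c, dtype=int)
    for _ in range(c):
        i, j = np.unravel_index(np.argmin(M), M.shape)
        order[i] = j
        M[i, :] = np.inf   # "delete" row i and column j by masking
        M[:, j] = np.inf
    return cb[order]

def escm(X, c, mu=10, lam=70, gens=20, seed=0):
    """One possible rendering of the ESCM generation loop of Tab. 2."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(0), X.max(0)
    d = X.shape[1]
    # step 2: random centers in the data hyperbox, step sizes in [0, alpha]
    pop = [(rng.uniform(lo, hi, (c, d)),
            rng.uniform(0.0, 0.1 * (hi - lo).max(), c)) for _ in range(mu)]
    tau = 1 / np.sqrt(2 * np.sqrt(c * d))
    tau_p = 1 / np.sqrt(2 * c * d)
    best_y, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([jw(y, X) for y, _ in pop])            # step 3 / (c)
        if fit.min() < best_f:
            best_f, best_y = fit.min(), pop[int(fit.argmin())][0].copy()
        ranks = np.argsort(fit)
        p = np.linspace(2.0, 0.0, mu)
        p /= p.sum()                                          # linear rank probabilities
        offspring = []
        for _ in range(lam):
            (y1, s1), (y2, s2) = (pop[ranks[k]] for k in rng.choice(mu, 2, p=p))
            y2 = reindex(y1, y2)                              # step (a)iii
            pick = rng.random((c, 1)) < 0.5
            y = np.where(pick, y1, y2)                        # discrete recombination
            s = 0.5 * (s1 + s2)                               # intermediate recombination
            s = s * np.exp(rng.normal(0.0, tau, c)) * np.exp(rng.normal(0.0, tau_p))
            y = y + rng.normal(0.0, s[:, None], y.shape)      # mutation (Sect. 3.2)
            offspring.append((y, s))
        fit_off = np.array([jw(y, X) for y, _ in offspring])
        pop = [offspring[k] for k in np.argsort(fit_off)[:mu]]  # (mu, lambda) survival
    return best_y, best_f
```

Because the best individual seen so far is tracked separately, the returned cost never increases across generations even though (µ, λ) survival itself is non-elitist.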


4 Evolution Strategy based C-Means (ESCM) algorithm

In order to overcome the limits of C-Means, a (µ, λ)-ES can be used to find the global optimum of Jw (Eq. 16). Tab. 2 illustrates the Evolution Strategy based C-Means (ESCM) algorithm. Each genotype a is a list containing the object variables (i.e., the centers of clusters) and the strategy parameters:

a = (y1, ..., yc, σ1, ..., σc)    (21)

where c is the number of clusters. ESCM works in a (c × (d + 1))-dimensional space, where d is the dimension of the pattern space. After the initialization of parameters (step 1), the population is initialized (step 2) in the following way: centers of clusters (i.e., object variables) are initialized at random in the hyperbox I (Eq. 15), while strategy parameters are initialized at random in the range [0, α], where α is of the order of 1/10 of the side of I. The remaining steps are quite standard for a (µ, λ)-ES, with the exception of Step 5(a)iii. In fact, we must note that, before mixing the object variables of the parents (centers of clusters) using discrete recombination crossover, they must be re-indexed, in such a way that centers with the same index are likely to correspond to the same cluster. The re-indexing algorithm, described in Tab. 3, is a modified version of the algorithm proposed in [27]. Besides, the stop condition (Step 4)

∆Jw^best / Jw^best < ε2    (22)

is based on the normalized difference of the objective function Jw evaluated on the fittest individual of two successive generations. In principle, ESCM allows us to avoid local minima of Jw and to find the global optimum, improving in this way the reliability of the inferential tasks associated with the clustering procedure. Moreover, it is simple to create variants of the basic ESCM. For instance, if we want to reduce the interference of big blobs on the localization of the centers of small clusters, it is straightforward to replace Jw in the algorithm with the following scaled global error function Js:


Table 3: Re-indexing (RI) algorithm.
1. compile the matrix of distances M among the centers of clusters of the two individuals;
2. count2 = 0;
3. while count2 less than c:
(a) count2++;
(b) find the minimal item of the matrix;
(c) assign the same index to both centers of clusters in the two individuals;
(d) delete the corresponding row and column in the matrix of distances M;
4. end do.
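The RI algorithm of Tab. 3 can be sketched as a greedy matching on the distance matrix (an illustrative sketch; row/column "deletion" is emulated here by masking with infinity, and the function name is ours):

```python
import numpy as np

def reindex(ca, cb):
    """RI algorithm (Tab. 3): repeatedly pick the globally closest pair of
    centers from the two individuals, give both the same index, and remove
    that row and column from the distance matrix M."""
    ca, cb = np.asarray(ca, dtype=float), np.asarray(cb, dtype=float)
    c = len(ca)
    M = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=-1)  # step 1
    order = np.empty(c, dtype=int)
    for _ in range(c):                      # step 3
        i, j = np.unravel_index(np.argmin(M), M.shape)  # step (b)
        order[i] = j                        # step (c): same index for both
        M[i, :] = np.inf                    # step (d): mask row i
        M[:, j] = np.inf                    # step (d): mask column j
    return cb[order]                        # cb re-ordered to match ca
```

After re-indexing, center k of the second parent is the one closest (in the greedy sense) to center k of the first parent, so discrete recombination mixes like with like.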

Js ≡ Σ_{j=1}^{c} (1/Cj) Σ_{k=1}^{n} ujk Dj²(xk),    (23)

where Cj is the cardinality of cluster wj.
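Eq. (23) can be sketched as follows (a hedged illustration assuming crisp nearest-center memberships ujk and Euclidean Dj, which may differ from the authors' exact setting):

```python
import numpy as np

def js(centers, X):
    """Scaled global error Js of Eq. (23): each cluster's squared-distance
    contribution is divided by its cardinality C_j, so small clusters are
    not dominated by big blobs."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)          # crisp memberships u_jk
    total = 0.0
    for j in range(len(centers)):
        members = labels == j
        if members.any():
            total += d2[members, j].sum() / members.sum()  # divide by C_j
    return total
```

With this scaling, a cluster of 1000 points and a cluster of 10 points contribute on the same per-point footing to the objective.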

5 Segmentation of multimodal medical volumes

5.1 Multimodal medical volumes (MMV)

Medical images are obtained by different acquisition modalities, including X-ray computed tomography (CT), magnetic resonance imaging (MRI), single photon emission tomography (SPECT), positron emission tomography (PET), ultrasound (US), etc. [15]. Multimodal volumes can be derived from sets of such different diagnostic volumes by spatial co-registration of the volumes, in order to fully correlate complementary information (e.g., structural and functional) about the same patient.


The visual inspection of a large set of such volumetric images only partially permits the physician to exploit the available information. Therefore, computer-assisted approaches may be helpful in the clinical oncological environment as a support to diagnosis, in order to delineate volumes to be treated by radiotherapy and surgery, and to assess quantitatively (in terms of tumor mass or detection of metastases) the effect of oncological treatments. The extraction of such volumes or other entities of interest from imaging data is named segmentation, and is usually performed, in the image space, by defining sets of voxels with similar features within a whole multimodal volume.

5.2 Clustering-based inference approach to MMV segmentation

It is worth noting that it is very difficult or impossible to settle the solution of the multimodal volume segmentation problem in a reliable rule-based systems framework, as physicians are hardly able, at least for the low-level steps of image analysis, to describe the rationale of their decisions. Moreover, at higher levels of image analysis, the rationales of physicians, even if more precise, strongly depend on many factors, such as different clinical frameworks, different anatomical areas, different theoretical approaches, etc. Inference procedures based on learning from data must then be employed to design computer-assisted systems for segmenting multimodal medical volumes. Actually, in such data-based systems, a possible supervised approach has two major drawbacks:

• it is very time-consuming (especially for large volumes), as it requires the labeling of the prototypical samples needed for applying the generalization process. Even if the number of clusters is predefined, a careful manual labeling of the voxels in the training set belonging with certainty to the different clusters is not trivial, especially when it concerns multimodal data sets; and

• heavy biases may be introduced by unskilled or fatigued physicians, due to the large inter-user and intra-user variability generally observed when manual labeling is performed.


On the contrary, unsupervised methods may fully exploit the implicit multidimensional structure of the data and make the clustering of the feature space independent of the user's definition of training regions [2, 8], due to their self-organizing approach.

A multimodal volume may be defined by the spatial registration of a set of d different imaging volumes. As a consequence, its voxels are associated with an array of d values, each representing the intensity of a single modality in a voxel. From another point of view, the d different intensity values related to a voxel in such a multimodal volume can be viewed as the coordinates of the voxel within a d-dimensional feature space where multimodal analysis can be made. An image space (usually 3D), defined by the spatial coordinates of the data set, and a multidimensional feature space, as described before, must be considered for a more complete description of the segmentation problem. The interplay between these two spaces turns out to be very important in the task of understanding the data structure.

Actually, the definition of clusters within the above described d-dimensional feature space and the classification of all the voxels of the volumes into the resulting classes are the main steps in segmenting multimodal volumes. This approach, where an inference process based on clustering constitutes the principal procedure for MMV segmentation, has been followed in many recent papers [4, 22, 17, 10, 14], and it has been shown to be more robust to noise in the discrimination of different tissues than techniques based on edge detection [4]. Nevertheless, the clustering method used must itself be well founded in statistics and must not be limited by intrinsic problems, such as the problem of local optima in CM. Moreover, many bias effects must be taken into account when considering clustering for the segmentation of medical images.
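The mapping from d co-registered volumes to a d-dimensional feature space described above can be sketched as follows (an illustrative helper; the function name is our own):

```python
import numpy as np

def to_feature_space(volumes):
    """Turn d co-registered volumes (all of equal shape) into an
    (n_voxels, d) matrix: row k holds the d modality intensities of voxel k,
    i.e. its coordinates in the d-dimensional feature space."""
    return np.stack([np.asarray(v, dtype=float).ravel() for v in volumes],
                    axis=-1)
```

Clustering then runs on the rows of this matrix, while the voxel index k retains the link back to the (usually 3D) image space for the final classification step.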
Actually, very heterogeneous clusters may be found in the feature space, with very different probability densities, and considering the cardinality of the clusters may be necessary in order to include the statistical nature of the data set in the analysis. Furthermore, the partial volume effect during acquisition may produce a truly intrinsic ambiguity of the borders between regions of interest. As a consequence, unsupervised clustering-based segmentation of medical images emerges as a very difficult task, whose usefulness is related to the balance of two conflicting actions, namely, the elimination of noise and redundancy from the original images and the preservation of significant information in the segmented image. These constraints may force users to introduce their knowledge into the sequence of analysis, and further refinements are often needed in order to obtain meaningful and reliable results.

5.3 Interactive segmentation system

From all these considerations, a correct architecture for a computer-based system for multimodal medical volume segmentation should include a computational core grounded on unsupervised clustering, together with powerful interactive tools for knowledge-based refinements that physicians can tune and organize for the specific diagnostic tasks to be performed. This way, as requested in clinical practice, physicians stay in control both of the sequence of choices and of the results in the analysis process, in order to introduce their theoretical and heuristic knowledge into the segmentation process. A system based on those assumptions has been developed by our group and is described in [22]. It is an interactive system with a friendly Graphical User Interface, supporting a full sequence of analysis of multimodal medical images. The main functions performed by this system are: feature extraction, dimensionality reduction, unsupervised clustering, voxel classification, and intra- and post-processing refinements. The main component of this system is the clustering subsystem, which makes it possible to run alternative clustering algorithms in the feature space, including C-Means [6], the Capture Effect Neural Network [7], Fuzzy C-Means [3], Deterministic Annealing [20, 21], and Possibilistic C-Means [12, 13]. In [16, 17] we report some comparisons of the application of such algorithms to clinical images.

6 Experimental analysis

6.1 Data set

We have implemented the Evolution Strategy based C-Means (ESCM) algorithm as a clustering module of the previously described graphical interactive system supporting the full sequence of analysis of multimodal medical volumes.


Figure 1: T1-weighted (a) and T2-weighted (b) MRI images of a patient with glioblastoma multiforme in the right temporal lobe.

In order to illustrate the inference task of MMV segmentation based on clustering in a specific case, and to show the gain in precision and reliability obtained in this task by using ESCM instead of the original CM, let us now consider a simple data set consisting of a multimodal transverse slice of the head (Fig. 1), composed of spatially correlated T1-weighted and T2-weighted MRI images from a head acquisition volume of an individual with glioblastoma multiforme. The images are 288 x 362 with 256 gray levels. The tumor is located in the right temporal lobe and appears bright on the T2-weighted image and dark on the T1-weighted image. A large amount of edema surrounds the tumor and appears very bright on the T2-weighted image. The lower signal area within the mass suggests tissue necrosis. Each pixel in the above defined two-modal slice is associated with an array of two intensity values (T1 and T2). Therefore, each of these pairs of pixel intensities is represented by a point in a 2D feature space (Fig. 2), whose coordinates represent the intensity values of that pixel in each modality belonging to the multimodal set. The segmentation task consists in finding the main classes in this feature space and in associating each pixel in the image with one of these classes.

Figure 2: Feature space (T2 versus T1) obtained from the MRI images in Fig. 1.

The main classes in the data set are: white matter, gray matter, cerebro-spinal fluid (CSF), tumor, edema, necrosis, and scalp. A slight mis-registration between the images may be responsible for some misclassification errors in the final results.

6.2 Methods

We give here some information on the implementation of the clustering algorithms used in the experimental analysis.

• The CM uses 7 clusters and a tolerance for the stop criterion ε1 = .01; the centers of clusters are initialized at random, and convergence is reached in 10-15 fast iterations.

• For the ESCM using Jw, according to the µ/λ = 1/7 rule proposed by Schwefel [24], we selected µ = 10 and λ = 70. Moreover, we initialized c = 7, ε2 = .005, and the centers of clusters at random. We implemented the selection by rank using a linear probability distribution with negative slope, while the intermediate recombination is implemented as the average of the components of the parents.

• The implementation of ESCM using Js is identical to the previous one, with the obvious exception of the objective function. A typical plot of Js^best is presented in Fig. 3. Using ∆Js^best/Js^best ≤ ε2 as the stop condition, the ESCM ends in 15 iterations.

Figure 3: Cost function of best individuals versus iteration of ESCM.

Figure 4: Segmentation obtained by the CM algorithm with 7 clusters.

Figure 5: Segmentation obtained by the ESCM algorithm using Jw and with 7 clusters.

6.3 Results and Discussion

Let us compare the results produced by the ESCM clustering algorithm and by the standard C-Means (CM) algorithm. In Fig. 4 the results of the unsupervised segmentation with the CM algorithm are shown. CM almost correctly defines the scalp and the white matter. Nevertheless, it makes mistakes in the classification of gray matter and edema in the left side of the brain, and in particular it is not able to separate tumor, necrosis, and CSF. Similar results are obtained by the basic ESCM with the standard cost function Jw (Fig. 5). Nevertheless, as an important difference, over a large number of tests ESCM turns out to be far more stable than CM with respect to the positions of the centroids and to the extension of the clusters in the feature space.

Figure 6: Segmentation obtained by the ESCM algorithm using Js and with 7 clusters.

Eventually, by using the newly defined scaled global error function Js to take into account the cardinality of the clusters, the results of ESCM (Fig. 6) improve dramatically. Actually, we may notice that, in comparison with CM and with the basic version of ESCM, the final version of ESCM correctly distinguishes between tumor and CSF, and within the tumor region it is able to find the necrosis region. The correct definition of scalp and white matter, and the misclassification in the left side of the brain, remain as with CM.

7 Conclusions

The C-Means (CM) [6], while being an efficient approximation of the maximum likelihood procedure for estimating the centers of clusters, shows some intrinsic problems. In particular, it is subject to the problem of trapping in local minima of its objective function Jw (Eq. 16). This locality in searching for minima is a major limitation, in particular when we try to apply this algorithm as the basis for inference procedures.

In order to overcome the limits of C-Means, we have proposed in this paper a novel clustering algorithm based on the application of an Evolution Strategy (ES) [19, 25, 1] to the search for the global minimum (Evolution Strategy based C-Means, or ESCM, algorithm). The ESCM is based on a (µ, λ)-ES where the object variables of the genotypes are the centers of clusters. The implementation of the (µ, λ)-ES is quite standard, but before mixing the object variables of the parents using discrete recombination crossover, they are re-indexed, in such a way that centers with the same index are likely to correspond to the same cluster. It is worth noting that it is easy to make variants of the basic ESCM. For instance, with the straightforward replacement of Jw by the scaled global error function Js (Eq. 23), it is possible to reduce the interference of big blobs on the localization of the centers of small clusters.

In this paper we considered a complex inference process based on clustering, consisting in multimodal medical volume (MMV) segmentation. This approach has been shown to be very robust to noise and able to process the complementary information carried by each image (e.g., functional or anatomical) [4]. In this inference task, devoted to aggregating voxels with similar properties (corresponding to the different anatomical and/or pathological tissues) in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. Nevertheless, the clustering method used must itself be well founded in statistics and must not be limited by intrinsic problems, such as the problem of local optima in CM.
Moreover, many bias effects (due, e.g., to heterogeneous clusters and to the partial volume effect during acquisition) must be taken into account when considering clustering for the segmentation of medical images. We have implemented the ESCM algorithm as a clustering module of the previously described graphical interactive system supporting the physician in the full sequence of analysis of multimodal medical volumes. In the experimental results presented in the paper, we have compared the segmentations obtained by applying CM, ESCM using Jw, and ESCM using Js to a simple data set consisting of a multimodal transverse slice of the head (Fig. 1), composed of spatially correlated T1-weighted and T2-weighted MRI images from a head acquisition volume of an individual with glioblastoma multiforme.


The two implementations of ESCM give more stable solutions than CM with respect to the positions of the centroids and the extension of the clusters in the feature space. In particular, the ESCM using Js, as it is able to take into account the cardinality of the clusters, dramatically improves the quality of the segmentation results.

Acknowledgments The images are from the BrighamRAD Teaching Case Database of the Department of Radiology at Brigham and Women’s Hospital in Boston.

References

[1] T. Baeck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, 1996.
[2] A.M. Bensaid, L.O. Hall, L.P. Clarke, and R.P. Velthuizen. MRI segmentation using supervised and unsupervised methods. In Proc. 13th IEEE Eng. Med. Biol. Conf., pages 483–489, Orlando, 1991. IEEE.
[3] J.C. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981.
[4] J.C. Bezdek, L.O. Hall, and L.P. Clarke. Review of MR image segmentation techniques using pattern recognition. Med. Phys., 20:1033–1048, 1993.
[5] P. Bogus, A. Massone, and F. Masulli. A simulated annealing C-Means clustering algorithm. In F. Masulli and R. Parenti, editors, Proceedings of SOCO'99 ICSC Symposium on Soft Computing, Genova, pages 534–540, Millet, Canada, 1999. ICSC.
[6] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[7] F. Firenze and P. Morasso. The capture effect model: a new approach to self-organized clustering. In Sixth International Conference on Neural Networks and their Industrial and Cognitive Applications, NEURO-NIMES 93 Conference Proceedings and Exhibition Catalog, pages 65–54, Nimes, France, 1993.
[8] G. Gerig, J. Martin, R. Kikinis, O. Kubler, M. Shenton, and F.A. Jolesz. Unsupervised tissue type segmentation of 3D dual-echo MR head data. Im. Vis. Comput., 10:349–360, 1992.
[9] E.T. Jaynes. Information theory and statistical mechanics. Physical Review, 106:620–630, 1957.
[10] Z-X. Ji, Q-S. Sun, and D-S. Xia. A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image. Computerized Medical Imaging and Graphics, 35:383–397, 2011.
[11] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
[12] R. Krishnapuram and J.M. Keller. A possibilistic approach to clustering. IEEE Transactions on Fuzzy Systems, 1:98–110, 1993.
[13] R. Krishnapuram and J.M. Keller. The Possibilistic C-Means algorithm: insights and recommendations. IEEE Transactions on Fuzzy Systems, 4:385–393, 1996.
[14] H. Mahmoud, F. Masulli, and S. Rovetta. A fuzzy clustering segmentation approach for feature-based medical image registration. In Proc. CIBB 2012, 9th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics, Houston, TX, USA, CIBB Proceedings Series, ISBN 978-88-906437-1-2, 2012.
[15] M.N. Maisey et al. Synergistic imaging. Eur. J. Nucl. Med., 19:1002–1005, 1992.
[16] F. Masulli, P. Bogus, A. Schenone, and M. Artuso. Fuzzy clustering methods for the segmentation of multivariate images. In M. Mares, R. Mesia, V. Novak, J. Ramik, and A. Stupnanova, editors, Proceedings of the 7th International Fuzzy Systems Association World Congress IFSA'97, volume III, pages 123–128, Prague, 1997. Academia.
[17] F. Masulli and A. Schenone. A fuzzy clustering based segmentation system as support to diagnosis in medical imaging. Artificial Intelligence in Medicine, 16:129–147, 1999.
[18] S. Miyamoto and M. Mukaidono. Fuzzy C-Means as a regularization and maximum entropy approach. In M. Mares, R. Mesia, V. Novak, J. Ramik, and A. Stupnanova, editors, Proceedings of the 7th International Fuzzy Systems Association World Congress IFSA'97, volume III, pages 86–91, Prague, 1997. Academia.
[19] I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.
[20] K. Rose, E. Gurewitz, and G. Fox. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11:589–594, 1990.
[21] K. Rose, E. Gurewitz, and G. Fox. Constrained clustering as an optimization method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:785–794, 1993.
[22] A. Schenone, F. Firenze, F. Acquarone, M. Gambaro, F. Masulli, and L. Andreucci. Segmentation of multivariate medical images via unsupervised clustering with adaptive resolution. Computerized Medical Imaging and Graphics, 20:119–129, 1996.
[23] K.E. Parsopoulos and M.N. Vrahatis. Recent approaches to global optimization problems through Particle Swarm Optimization. Natural Computing, 1(2-3):235–306, 2002.
[24] H.P. Schwefel. Collective phenomena in evolutionary systems. In Preprints of the 31st Annual Meeting of the International Society for General Systems Research, volume 2, pages 1025–1033, Budapest, 1988.
[25] H.P. Schwefel. Evolution and Optimum Seeking. Wiley, 1995.
[26] A. Tikhonov and V. Arsenin. Solutions of Ill-posed Problems. Winston and Sons, New York, 1977.
[27] E.C.K. Tsao, J.C. Bezdek, and N.R. Pal. Fuzzy Kohonen clustering networks. Pattern Recognition, 27(5):757–764, 1994.


A DETERMINISTIC INVENTORY MODEL FOR DETERIORATING ITEMS WITH TIME DEPENDENT DEMAND AND ALLOWABLE SHORTAGE UNDER TRADE CREDIT †

PINKI MAJUMDER AND U.K.BERA

Department of Mathematics, National Institute of Technology, Agartala, Tripura (West), India. E-mail: [email protected], bera [email protected]

Abstract. In this research we develop a deterministic inventory model of deteriorating items with time dependent demand and trade credit. Here the supplier offers a credit limit to the retailer, and the retailer in turn offers a credit limit to the customer. This paper develops a model to determine an optimal ordering policy under conditions of allowable shortage and permissible delay in payment. Numerical examples are used to illustrate all the results obtained in this paper. Finally, the model is solved by the Generalised Reduced Gradient (GRG) method using LINGO software.

Key words: time dependent demand, shortage, deterioration, trade credit, optimization.

1. Introduction

In today's business transactions, it is more and more common to see that retailers are allowed a fixed time period before they settle their account with the supplier. We term this period the trade credit period. Before the end of the trade credit period, the retailer can sell the goods, accumulate revenue, and earn interest. A higher interest is charged if the payment is not settled by the end of the trade credit period. Goyal [6] develops an economic order quantity under conditions of permissible delay in payments for an inventory system. Jamal et al. [7] consider an ordering policy for deteriorating items with allowable shortage and permissible delay in payment. Furthermore, Sarker et al. [11] address a model to determine an optimal ordering policy for deteriorating items under inflation, permissible delay in payment, and allowable shortage. Chen and Ouyang [2] extend the Jamal et al. [7] model by fuzzifying the carrying cost rate, interest paid rate, and interest earned rate simultaneously, based on interval-valued fuzzy numbers and triangular fuzzy numbers, to fit the real world. Kumar et al. developed an EOQ model for a time varying demand rate under trade credits. Chen and Kang [3] proposed integrated inventory models considering permissible delay in payment and a variant pricing strategy, and Liang et al. [4] developed an optimal order quantity under advanced sales and permissible delay in payments. Deterioration is applicable to many inventories in practice, such as blood, fashion goods, agricultural products and medicine, highly volatile liquids such as gasoline and alcohol, electronic goods, radioactive substances, photographic film, grain, etc. So decay or deterioration of physical goods in stock is a very realistic feature, and inventory researchers felt the necessity to take this factor into consideration. Shah and Jaiswal presented an inventory model for items deteriorating at a constant rate. Covert and Philip [1], Deb and Chaudhuri [5], and Kumar et al. [8] developed inventory models with time dependent deterioration rates. Recently, Meher, Panda [9] and Sahu [10] developed an inventory model where

† Corresponding Author.


demand is a Weibull function of time. In the classical inventory models, the demand rate is assumed to be constant. In reality, demand for physical goods may be time dependent and price dependent. Meher, Panda and Sahu [10] develop an inventory model where demand is a function of time. In this paper we establish a deterministic inventory model with allowable shortage, time dependent demand, Weibull deterioration, and two trade credit periods. Here we derive the optimal value of the cycle time which minimizes the total average cost. Lastly, numerical examples are set to illustrate all the results obtained in this paper.

2. Assumptions and notation

The following notations and assumptions are used for the development of the proposed model.

2a. Notation
(i) D(t) = a(1 − bt): the annual demand as a decreasing function of time, where a > 0 is the fixed demand and b (0 < b < 1) denotes the rate of demand.
(ii) C = the unit purchase cost.
(iii) S = the unit selling cost (S > C).
(iv) h = the inventory holding cost per year, excluding interest charges.
(v) A = the ordering cost per order.
(vi) P = the unit shortage cost.
(vii) Q = the order quantity at time t = 0.
(viii) θ(t) = the deterioration rate, which is a Weibull function of time: θ(t) = αβt^(β−1), where 0 < α < 1, β > 0 and t > 0.
(ix) M = the retailer's trade credit period offered by the supplier, in years.
(x) N = the customer's trade credit period offered by the retailer, in years.
(xi) Ic = interest charges payable per $ per year to the supplier.
(xii) Ie = interest earned per $ per year.
(xiii) I(t) = inventory level at time t.
(xiv) T1 = length of the period with positive stock of the item.
(xv) T2 = length of the period with negative stock of the item.
(xvi) T = length of the replenishment cycle, T = T1 + T2.
(xvii) Z(T1, T2) = total inventory cost when the length of the period with positive stock of the item is T1 and the length of the period with negative stock of the item is T2.
(xviii) Z1(T1, T2) = total relevant cost per unit time when N ≤ M ≤ T1 < T.
(xix) Z2(T1, T2) = total relevant cost per unit time when N ≤ T1 ≤ M < T.
(xx) Z3(T1, T2) = total relevant cost per unit time when 0 ≤ T1 ≤ N ≤ M < T.
(xxi) T1* = optimal value of T1.
(xxii) T2* = optimal value of T2.

2b. Assumptions
(i) The inventory system under consideration deals with a single item.
(ii) The planning horizon is infinite.
(iii) The demand of the product is a declining function of time.
(iv) Shortages are allowed.
(v) Ic ≥ Ie, S ≥ C, M ≥ N.
(vi) The supplier offers full trade credit to the retailer. When T1 ≥ M, the account is settled at T1 = M; the retailer pays off all units sold, keeps his/her profits, and starts paying interest charges on the items in stock at rate Ic. When T1 ≤ M, the account is settled at T1 = M and the retailer does not need to pay any interest on the stock.


(vii) The retailer can accumulate revenue and earn interest after his/her customer pays the amount of the purchasing cost to the retailer, until the end of the trade credit period offered by the supplier. That is, the retailer can accumulate revenue and earn interest during the period from N to M at rate Ie under the condition of trade credit.
(viii) The deteriorated units can neither be repaired nor replaced during the cycle time.

3. Mathematical Formulation

The inventory level I(t) depletes to meet the demand and deterioration. The rate of change of the inventory level is governed by the following differential equation:

dI(t)/dt + θ(t) I(t) = −D(t),  0 ≤ t ≤ T    (1)

which is equivalent to

dI(t)/dt + αβ t^(β−1) I(t) = −a(1 − bt),  0 ≤ t ≤ T    (2)

with the initial condition I(0) = Q and the boundary condition I(T1) = 0. Consequently, the solution of (2) is given by

I(t) = a e^(−αt^β) [ (α/(β+1))(T1^(β+1) − t^(β+1)) − (bα/(β+2))(T1^(β+2) − t^(β+2)) − (b/2)(T1² − t²) + (T1 − t) ],  0 ≤ t ≤ T    (3)

The order quantity is

Q = I(0) = a [ (α/(β+1)) T1^(β+1) − (bα/(β+2)) T1^(β+2) − (b/2) T1² + T1 ]    (4)
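To make the notation concrete, Eqs. (3) and (4) can be evaluated numerically as follows (a hedged sketch; the parameter values a = 100, b = 0.1, α = 0.05, β = 2 are illustrative defaults of ours, not from the paper):

```python
import math

def demand(t, a=100.0, b=0.1):
    """Declining linear demand D(t) = a(1 - bt), notation (i)."""
    return a * (1 - b * t)

def theta(t, alpha=0.05, beta=2.0):
    """Weibull deterioration rate theta(t) = alpha*beta*t^(beta-1), notation (viii)."""
    return alpha * beta * t ** (beta - 1)

def inv_level(t, T1, a=100.0, b=0.1, alpha=0.05, beta=2.0):
    """Inventory level I(t) of Eq. (3)."""
    poly = (alpha / (beta + 1) * (T1 ** (beta + 1) - t ** (beta + 1))
            - b * alpha / (beta + 2) * (T1 ** (beta + 2) - t ** (beta + 2))
            - b / 2 * (T1 ** 2 - t ** 2)
            + (T1 - t))
    return a * math.exp(-alpha * t ** beta) * poly

def order_qty(T1, a=100.0, b=0.1, alpha=0.05, beta=2.0):
    """Order quantity Q = I(0) of Eq. (4)."""
    return a * (alpha / (beta + 1) * T1 ** (beta + 1)
                - b * alpha / (beta + 2) * T1 ** (beta + 2)
                - b / 2 * T1 ** 2
                + T1)
```

As a consistency check, the stock vanishes at t = T1 (boundary condition I(T1) = 0) and I(0) reproduces Q.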

the total cost of inventory system per time unit include the following : A (a) Ordering cost : (T1 +T 2) T β+1

bT β+2

1 (b) Deterioration cost per unit time : (TCaα [ 1 − β+2 ] 1 +T2 ) β+1 (c)Inventory holding cost per unit time: (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 [ T − 2(β+1) − (β+1)(β+3) T1 + 2 T1 (T1 +T2 ) (β+1)(2β+3) 1 T P I(t)dt (d)Shortage cost = − (T1 +T ) 2

(β+2) αβ T (β+1)(β+2) 1

−

bT13 3

+

T12 ] 2

T1

P = − (T1 +T 2)

T

α (1 − αtβ )[ β+1 (T1β+1 − tβ+1 ) −

bα (T1β+2 β+2

− tβ+2 ) − 2b (T12 − t2 ) + (T1 − t)]dt

T1 bT 3 T2 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 = (T1 +T2 ) [ (β+1)(β+2) T1 − (β+1)(β+3) T1 + (β+1)(2β+3) T1 − 2(β+1) − 31 + 21 ]+ 2 T1 β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 ) β+1 β+1 2β+2 β+1 2β+3 2 β+1 β+3 β+1 β+2 β+2 (T1 +T2 ) α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + T2 ) − β+3 β+1 β+2 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )] β+3 3 2
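Because the closed forms above truncate e^(αt^β) at first order in α, they are easy to sanity-check numerically. A minimal sketch (illustrative parameter values, not the paper's examples) comparing the closed-form order quantity (4) against direct quadrature of the exact integral Q = ∫₀^{T1} a(1−bu) e^{αu^β} du:

```python
import math

# Illustrative parameters (not taken from the paper's examples)
a, b = 100.0, 0.2      # demand D(t) = a(1 - b t)
alpha, beta = 0.01, 2  # Weibull deterioration theta(t) = alpha*beta*t^(beta-1)
T1 = 1.0

# Closed-form order quantity, equation (4)
Q_closed = a * (alpha/(beta+1)*T1**(beta+1)
                - b*alpha/(beta+2)*T1**(beta+2)
                - b*T1**2/2 + T1)

def simpson(f, lo, hi, n=1000):
    # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i*h)
    return s * h / 3

# Exact Q = integral_0^T1 of a(1 - b u) e^{alpha u^beta} du
Q_exact = simpson(lambda u: a*(1 - b*u)*math.exp(alpha*u**beta), 0.0, T1)

print(Q_closed, Q_exact)
assert abs(Q_closed - Q_exact) / Q_exact < 0.01  # first-order-in-alpha approx
```

For small α the two values agree to within the neglected O(α²) terms.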

Regarding interest charged and interest earned, three cases may arise, based on the relative lengths of M, N and T1. The three cases are as follows: Case 1: N ≤ M ≤ T1 < T; Case 2: N ≤ T1 ≤ M < T; Case 3: T1 ≤ N ≤ M < T.


Pinki Majumder and U.K.Bera

Figure 1. Case 1: N ≤ M ≤ T1 < T.

Figure 2. Case 2: N ≤ T1 ≤ M < T.

Figure 3. Case 3: T1 ≤ N ≤ M < T.


4. Interest charged
According to the given assumptions, three cases occur for the interest charged on the items kept in stock per year.

Case 1. N ≤ M ≤ T1 < T.

Annual interest payable = (CIc/(T1+T2)) ∫ from M to T1 of I(t) dt
= (CIc a/(T1+T2)) ∫ from M to T1 of (1 − αt^β) [ α/(β+1)(T1^(β+1) − t^(β+1)) − bα/(β+2)(T1^(β+2) − t^(β+2)) − (b/2)(T1^2 − t^2) + (T1 − t) ] dt
= (CIc a/(T1+T2)) [ α^2 b/((β+1)(2β+3)) T1^(2β+3) − α^2/(2(β+1)^2) T1^(2β+2) − bαβ/((β+1)(β+3)) T1^(β+3) + αβ/((β+1)(β+2)) T1^(β+2) − b T1^3/3 + T1^2/2 − ( α/(β+1) T1^(β+1) − αb/(β+2) T1^(β+2) − b T1^2/2 + T1 ) ( M − α/(β+1) M^(β+1) ) − αβ/((β+1)(β+2)) M^(β+2) + αβb/(2(β+2)(β+3)) M^(β+3) − α^2/((β+1)(2β+2)) M^(2β+2) + α^2 b/((β+2)(2β+3)) M^(2β+3) − b M^3/6 + M^2/2 ].
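The closed form above follows by exact term-by-term integration of the polynomial integrand. A short numerical sketch (with illustrative parameter values, not the paper's) confirming the bracketed expression against direct quadrature:

```python
import math

# Illustrative values only
a, b, alpha, beta = 1000.0, 0.2, 0.05, 2
T1, M = 0.5, 0.1

def f(t):
    # integrand (1 - alpha t^beta) * [bracket of I(t)/a], as in Case 1
    return (1 - alpha*t**beta) * (
        alpha/(beta+1)*(T1**(beta+1) - t**(beta+1))
        - b*alpha/(beta+2)*(T1**(beta+2) - t**(beta+2))
        - b/2*(T1**2 - t**2) + (T1 - t))

# Simpson quadrature of integral_M^{T1} f(t) dt
n = 2000
h = (T1 - M)/n
quad = h/3*(f(M) + f(T1) + sum((4 if i % 2 else 2)*f(M + i*h)
                               for i in range(1, n)))

p = lambda x, e: x**e  # shorthand for powers
closed = (alpha**2*b/((beta+1)*(2*beta+3))*p(T1, 2*beta+3)
          - alpha**2/(2*(beta+1)**2)*p(T1, 2*beta+2)
          - b*alpha*beta/((beta+1)*(beta+3))*p(T1, beta+3)
          + alpha*beta/((beta+1)*(beta+2))*p(T1, beta+2)
          - b*T1**3/3 + T1**2/2
          - (alpha/(beta+1)*p(T1, beta+1) - alpha*b/(beta+2)*p(T1, beta+2)
             - b*T1**2/2 + T1)*(M - alpha/(beta+1)*p(M, beta+1))
          - alpha*beta/((beta+1)*(beta+2))*p(M, beta+2)
          + alpha*beta*b/(2*(beta+2)*(beta+3))*p(M, beta+3)
          - alpha**2/((beta+1)*(2*beta+2))*p(M, 2*beta+2)
          + alpha**2*b/((beta+2)*(2*beta+3))*p(M, 2*beta+3)
          - b*M**3/6 + M**2/2)

print(quad, closed)
assert abs(quad - closed) < 1e-9
```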

Case 2. N ≤ T1 ≤ M < T. In this case the annual interest payable is 0.

Case 3. T1 ≤ N ≤ M < T. In this case the annual interest payable is 0.

5. Interest earned
According to the given assumptions, three cases also occur for the interest earned per year.

Case 1. N ≤ M ≤ T1 < T.
Annual interest earned = (SIe/(T1+T2)) [ a(1 − bT2)T2(M − N) + ∫ from N to M of a(1 − bt) t dt ]
= (SIe/(T1+T2)) [ a(1 − bT2)T2(M − N) + a( M^2/2 − bM^3/3 − N^2/2 + bN^3/3 ) ].

Case 2. N ≤ T1 ≤ M < T.
Annual interest earned = (SIe/(T1+T2)) [ a(1 − bT2)T2(M − N) + a(1 − bT1)T1(M − T1) + ∫ from N to T1 of a(1 − bt) t dt ]
= (SIe/(T1+T2)) [ a(1 − bT2)T2(M − N) + a( T1^2/2 − bT1^3/3 − N^2/2 + bN^3/3 ) + a(1 − bT1)T1(M − T1) ].
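The accrued-revenue integral appearing in Cases 1 and 2 has the antiderivative a(t²/2 − bt³/3); a minimal numerical sketch (illustrative values) checking the Case-1 closed form:

```python
# Check the Case-1 accrued-interest integral: integral_N^M a(1 - b t) t dt
a, b, M, N = 1000.0, 0.2, 0.10, 0.022

closed = a*(M**2/2 - b*M**3/3 - N**2/2 + b*N**3/3)

# trapezoidal rule
n = 1000
h = (M - N)/n
s = 0.0
for i in range(n + 1):
    t = N + i*h
    w = 0.5 if i in (0, n) else 1.0
    s += w * a*(1 - b*t)*t
numeric = s*h

print(closed, numeric)
assert abs(closed - numeric) < 1e-6
```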

Case 3. T1 ≤ N ≤ M < T.
Annual interest earned = (SIe/(T1+T2)) [ a(1 − bT2)T2(M − N) + a(1 − bT1)T1(M − N) ].

The annual total cost incurred by the retailer is

Z(T1, T2) = ordering (setup) cost + deterioration (purchasing) cost + holding cost + shortage cost + interest payable − interest earned.

Writing OC, DC, HC and SC for the ordering, deterioration, holding and shortage costs per unit time derived in Section 3, and IE1, IE2, IE3 for the case-wise interest earned above, the three branches of Z(T1, T2) are

Z1(T1, T2) = OC + DC + HC + SC + (interest payable of Case 1) − IE1, for N ≤ M ≤ T1 < T;
Z2(T1, T2) = OC + DC + HC + SC − IE2, for N ≤ T1 ≤ M < T (interest payable is zero);
Z3(T1, T2) = OC + DC + HC + SC − IE3, for T1 ≤ N ≤ M < T (interest payable is zero).

Since Z1(M, T2) = Z2(M, T2) and Z2(N, T2) = Z3(N, T2), the cost function Z(T1, T2) is continuous and well defined, and Z1(T1, T2), Z2(T1, T2) and Z3(T1, T2) are all defined for T1 > 0, T2 > 0.

6. Determination of the optimal solution of Z(T1, T2)
The optimal solution (T1, T2) of Z1(T1, T2) is determined by the simultaneous equations

∂Z1(T1, T2)/∂T1 = 0   (1)
∂Z1(T1, T2)/∂T2 = 0   (2)

Substituting the closed form of Z1 and differentiating term by term turns (1) and (2) into two lengthy, highly nonlinear equations in T1 and T2, equations (3) and (4); their simultaneous solution gives the optimal values T1* and T2*. In the same way, the optimal solution of Z2(T1, T2) is determined by ∂Z2/∂T1 = 0 and ∂Z2/∂T2 = 0 (equations (5) and (6)), which expand to equations (7) and (8), whose simultaneous solution gives the optimal values T1* and T2* for Case 2. Likewise, the optimal solution of Z3(T1, T2) is determined by ∂Z3/∂T1 = 0 and ∂Z3/∂T2 = 0 (equations (9) and (10)), which expand to equations (11) and (12).
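The first-order conditions are solved numerically in practice. As an illustrative cross-check of the model's structure (not the paper's actual solution procedure), the Case-1 average cost can be assembled by numerical quadrature of I(t) and minimized by a coarse grid search; the data are those of Example 1 below, but the grid is far too coarse to reproduce the reported optimum:

```python
import math

def simpson(f, lo, hi, n=100):
    # composite Simpson's rule (n even)
    h = (hi - lo) / n
    return h/3 * (f(lo) + f(hi)
                  + sum((4 if i % 2 else 2) * f(lo + i*h) for i in range(1, n)))

# Data of Example 1
A, C, S, P, hold_rate = 350.0, 60.0, 70.0, 20.0, 4.0
a, b, alpha, beta = 2900.0, 0.35, 0.01, 2
Ic, Ie, M, N = 0.02, 0.015, 0.02, 0.01

def inv(t, T1):
    # on-hand inventory level, equation (3)
    g = (alpha/(beta+1)*(T1**(beta+1) - t**(beta+1))
         - b*alpha/(beta+2)*(T1**(beta+2) - t**(beta+2))
         - b/2*(T1**2 - t**2) + (T1 - t))
    return a * math.exp(-alpha*t**beta) * g

def Z1(T1, T2):
    # Case-1 average cost, with every integral done numerically
    T = T1 + T2
    holding = hold_rate * simpson(lambda t: inv(t, T1), 0.0, T1)
    shortage = -P * simpson(lambda t: inv(t, T1), T1, T)
    deterioration = C*a*alpha*(T1**(beta+1)/(beta+1) - b*T1**(beta+2)/(beta+2))
    payable = C*Ic * simpson(lambda t: inv(t, T1), min(M, T1), T1)
    earned = S*Ie*(a*(1 - b*T2)*T2*(M - N)
                   + a*(M**2/2 - b*M**3/3 - N**2/2 + b*N**3/3))
    return (A + holding + shortage + deterioration + payable - earned) / T

# coarse grid search over T1 >= M (Case 1) and T2
best = min((Z1(0.02*i, 0.5*j), 0.02*i, 0.5*j)
           for i in range(1, 21) for j in range(1, 21))
print(best)
```

A Newton-type or quasi-Newton routine on the two partial-derivative equations would replace the grid search in a serious implementation.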

The simultaneous solution of equations (11) and (12) gives the optimal values T1* and T2*.

7. Numerical Examples
To illustrate the results of the proposed model, we solve the following numerical examples.

Example 1. Let C = 60, S = 70, P = 20, Ic = 0.02, Ie = 0.015, A = 350, a = 2900, b = 0.35, α = 0.01, β = 2, M = 0.02, N = 0.01, h = 4. Then T1* = 0.02229108, T2* = 8.023180, and the minimum average cost is Z1(T1*, T2*) = 103.6384.

Example 2. Let C = 50, S = 80, P = 50, Ic = 0.06, Ie = 0.01, A = 300, a = 1000, b = 0.2, α = 0.01, β = 2, M = 0.10, N = 0.022, h = 8. Then T1* = 0.03180632, T2* = 3.611006, and the minimum average cost is Z2(T1*, T2*) = 129.9500.

Example 3. Let C = 50, S = 70, P = 30, Ic = 0.070, Ie = 0.030, A = 250, a = 1000, b = 0.4, α = 0.30, β = 2, M = 0.09589041, N = 0.01369863, h = 4. Then T1* = 0.005365123, T2* = 2.080469, and the minimum average cost is Z1(T1*, T2*) = 100.4811.

8. Conclusion
In this paper, an EOQ inventory model is considered for determining the optimal cycle time under a Weibull deterioration rate in a demand-declining market where shortages are allowed. The proposed model also incorporates other realistic and practical features, such as a trade credit period. The credit policy in payment has become a very powerful tool to attract


new customers and a good incentive policy for the buyers. In keeping with this reality, these factors are incorporated into the present model. Numerical examples are presented to justify the claim of each case of the model analysis by obtaining the optimal inventory length and shortage time period, and by calculating the total variable cost. The proposed model can be extended in several ways; for instance, to a partial trade credit period, to quantity discounts, or by taking the selling price, ordering cost and demand as fuzzy numbers.

9. References
[1] Covert R.P. and Philip G.C. (1973), An EOQ model for items with Weibull distribution deterioration, AIIE Transactions, 5, 323-326.
[2] Chen L.H. and Ouyang L.Y. (2006), Fuzzy inventory model for deteriorating items with permissible delay in payment, Appl. Math. Comput., 182, 711-726.
[3] Chen L.H. and Kang F.S. (2010), Integrated inventory models considering permissible delay in payment and variant pricing strategy, Appl. Math. Model., 34, 36-46.
[4] Chen M.L. and Chang M.C. (2011), Optimal order quantity under advance sales and permissible delays in payment, African Journal of Business Management, 5(17), 7325-7334.
[5] Deb M. and Chaudhuri K.S. (1986), An EOQ model for items with finite rate of production and variable rate of deterioration, Opsearch, 23, 175-181.
[6] Goyal S.K. (1985), Economic order quantity under conditions of permissible delay in payments, J. Operat. Res. Soc., 36, 335-338.
[7] Jamal A.M.M., Sarker B.R. and Wang S. (1997), An ordering policy for deteriorating items with allowable shortage and permissible delay in payment, Journal of the Operational Research Society, 48, 826-833.
[8] Kumar M., Tripathi R.P. and Singh S.R. (2008), Optimal ordering policy and pricing with variable demand rate under trade credits, Journal of National Academy of Mathematics, 22, 111-123.
[9] Meher M.K., Panda G.C. and Sahu S.K. (2012), An inventory model with Weibull deterioration rate under the delay in payment in demand declining market, Applied Mathematical Sciences, 6(23), 1121-1133.
[10] Shah Y.K. and Jaiswal M.C. (1977), An order-level inventory model for a system with constant rate of deterioration, Opsearch, 14, 174-184.
[11] Sarker B.R., Jamal A.M.M. and Wang S. (2000), Supply chain models for perishable products under inflation and permissible delay in payment, Computers & Operations Research, 27, 59-75.


DEVELOPMENT OF LABVIEW BASED ELECTRONIC NOSE USING K-NN ALGORITHM FOR THE DETECTION AND CLASSIFICATION OF FRUITY ODORS
N. Jagadesh Babu
Assistant Professor, EIE Department, GITAM University, Visakhapatnam, A.P., India. E-mail: [email protected]

ABSTRACT
The basic objective of this paper is the development of an electronic nose system that can detect and classify different fruits based on their odor with the help of LabVIEW. The system consists of two Figaro gas sensors (TGS 2620 and TGS 2602), which are used for odor detection, and the k-NN algorithm, which is used to classify the different fruits. Olfaction is one's sense of smell and a primary human sensory system. The detection of odors has been applied to many industrial applications, including indoor air quality, health care, safety and security, environmental monitoring, quality control of food products, medical diagnosis, psychoanalysis, agriculture, pharmaceuticals, military applications and detection of hazardous gases, to name but a few. The biological nose is an obvious choice for such applications, but there are disadvantages to having human beings perform these tasks: they face difficulties such as fatigue, infections, mental state, subjectivity and exposure to hazardous materials. For these reasons, machines — which show higher accuracy than human beings — are preferred for the above applications.
Keywords: Electronic nose, Virtual Instrumentation, k-NN algorithm, Fruity odors

I. INTRODUCTION
An electronic nose is a device intended to detect odors or flavors. The expression "electronic sensing" refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982, research has been conducted to develop technologies [2], commonly referred to as electronic noses, that could detect and recognize odors and flavors. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications, including data storage and retrieval. These devices have undergone

much development and are now used to fulfill industrial needs.

Other techniques to analyze odors: In all industries, odor assessment is usually performed by human sensory analysis, by chemosensors, or by gas chromatography. The latter technique gives information about volatile organic compounds, but the correlation between analytical results and actual odor perception is not direct, owing to potential interactions between several odorous components.

Working principle: The electronic nose was developed in order to mimic human olfaction, which functions as a non-separative mechanism: an odor or flavor is perceived as a global fingerprint. Essentially, the instrument consists of headspace sampling, a sensor array, and pattern recognition modules, which generate the signal patterns used for characterizing odors. Electronic noses include three major parts: a sample delivery system, a detection system and a computing system.

Detection system: This consists of a sensor set and is the reactive part of the instrument. When in contact with volatile compounds, the sensors react: they experience a change of electrical properties. Each sensor is sensitive to all volatile molecules, but each in its specific way. Most electronic noses use sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface, transforming the signal into a digital value. Recorded data are then computed based on statistical models. The more commonly used sensors include:

Computing system: This combines the responses of all of the sensors, which represent the input for the data treatment. This part of the instrument performs global fingerprint analysis and provides results and representations that can be easily


interpreted. Moreover, the electronic nose results can be correlated to those obtained from other techniques (sensory panel, GC, GC/MS). Many data interpretation systems are used for the analysis of results, including artificial neural networks (ANN), fuzzy logic and pattern recognition modules.

Performing an analysis: As a first step, an electronic nose needs to be trained with qualified samples so as to build a reference database. The instrument can then recognize new samples by comparing their volatile-compound fingerprints to those contained in its database, and can thus perform qualitative or quantitative analysis. This, however, may also pose a problem: many odors are made up of multiple different molecules, which the device may wrongly interpret by registering them as different compounds, resulting in incorrect or inaccurate results depending on the primary function of the nose.

Applications: Electronic nose instruments are used by research and development laboratories, quality control laboratories, and process and production departments for various purposes: the detection of lung cancer by detecting the VOCs (volatile organic compounds) that indicate it; the quality control of food products, since a nose could be conveniently placed in food packaging to clearly indicate when food has started to rot; and possible future applications in the field of crime prevention and security. The ability of the electronic nose to detect odorless chemicals makes it ideal for use by the police force — for example, detecting drug odors despite other airborne odors capable of confusing police dogs. However, this is unlikely in the meantime, as the cost of the electronic nose is too great, and until its price drops significantly it is unlikely to happen. It may also be used as a bomb detection method in airports: through careful placement of several electronic noses and effective computer systems, one could triangulate the location of bombs to within a few meters in less than a few seconds.

II. VIRTUAL INSTRUMENTATION
Virtual instrumentation is the use of customizable software and modular measurement hardware to create measurement systems, called virtual instruments. Traditional hardware instrumentation systems are made up of predefined hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g. an analog-to-digital converter can act as a hardware complement of a virtual oscilloscope, and a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrument concept: a synthetic instrument is a kind of virtual instrument that is purely software defined, performing a specific synthesis, analysis or measurement function on completely generic, measurement-agnostic hardware.

Figure 2.1 2: Architecture of VI

Traditional instruments (left) and software-based virtual instruments (right) largely share the same architectural components but have radically different philosophies. Every virtual instrument consists of two parts: software and hardware. A virtual instrument typically has a sticker price comparable to, and often many times less than, a similar traditional instrument for the current measurement task. However, the savings compound over time, because virtual instruments are much more flexible when measurement tasks change. With virtual instrumentation, software based on user requirements defines general-purpose measurement and control hardware functionality. Virtual instrumentation combines mainstream commercial technologies, such as the PC, with flexible software and a wide variety of measurement and control hardware, so engineers and scientists can create user-defined systems that meet their exact application needs. With virtual instrumentation, engineers and scientists reduce development time, design higher-quality products and lower their design costs.

III. VIRTUAL INSTRUMENTATION DESIGN The same design engineers that use a wide variety of software design tools must use hardware to test prototypes.

Commonly, there is no good interface between the design phase and testing/validation phase, which means that the design usually must go through completion phase and enter a testing/ validation phase. Issues discovered in the testing phase require a design-phase reiteration.

wide band-gap insulators to metallic and superconducting. Tin dioxide belongs to a class of materials that combines high electrical conductivity with optical transparency and thus constitutes an important component for optoelectronic applications.

Virtual instrumentation is necessary because it delivers instrumentation with the rapid adaptability required for today’s concept, product, and process design, development and delivery. Only with virtual instrumentation can engineers and scientist create the user defined instruments required to keep up the worlds demands. To meet the ever-increasing demand to innovate and deliver ideas and products faster, scientists and engineering are turning to advanced electronics, processors, and software.

The electrical resistance of the sensor is attributed to this potential barrier. In the presence of a deoxidizing gas, the surface density of the negatively charged oxygen decreases, so the barrier height in the grain boundary is reduced .The reduced barrier height decreases sensor resistance. The relationship between sensor resistance and the concentration of deoxidizing gas can be expressed by the following equation over a certain range of gas concentration. Sensors Configuration:

IV METHODOLOGY Gas Sensor-1

PC

TGS-2620 Gas

LabVIEW

and

NIcDAQ

k-NN Algorithm

Sensor-2

TGS-2602 Figure 3.1 1: Overview of Process

FIGURE 3.1 5: SENSORS WITH PCB BOARD

V. DATA ACQUISITION

Figure: overview photo

IV. GAS SENSOR Tin dioxide is the inorganic compound with the formula SnO2. The mineral form of SnO2 is called cassiterite, and this is the main ore of tin. This colorless, diamagnetic solid is amphoteric. The wide variety of electronic and chemical properties of metal oxides makes them exciting materials for basic research and for technological applications alike. Oxides span a wide range of electrical properties from

Data acquisition is the process of sampling signals that measure real world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems (abbreviated with the acronym DAS or DAQ) typically convert analog waveforms into digital values for processing. The components of data acquisition systems include: Sensors that convert physical parameters to electrical signals. Signal conditioning circuitry to convert sensor signals into a form that can be converted to digital values. Analog-to-digital converters, which convert conditioned sensor signals to digital values. NI cDAQ-9174:

Figure 3.1-6: NI cDAQ-9174 modules and chassis

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 209

The NI cDAQ-9174 is a 4-slot NI CompactDAQ USB chassis designed for small, portable, mixed-measurement test systems. The cDAQ-9174 can be combined with up to four NI C Series I/O modules for a custom analog input, analog output, digital I/O, and counter/timer measurement system. Modules are available for a variety of sensor measurements including thermocouples, RTDs, strain gages, load and pressure transducers, torque cells, accelerometers, flow meters, and microphones. MATLAB Script Node: calls the MATLAB software to execute scripts. You must have a licensed copy of MATLAB version 6.5 or later installed on your computer to use MATLAB script nodes, because the script nodes invoke the MATLAB script server to execute scripts written in the MATLAB language syntax. Because LabVIEW uses ActiveX technology to implement MATLAB script nodes, they are available only on Windows.
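The sampling-and-quantization pipeline described under Data Acquisition can be sketched in software. The model below is a deliberately simplified, hypothetical one: the unity gain, zero offset, 5 V reference and 12-bit resolution are illustrative assumptions, not cDAQ-9174 specifications.

```python
def condition(v_sensor, gain=1.0, offset=0.0):
    """Signal conditioning: scale/shift the raw sensor voltage into the
    ADC input range (unity gain and zero offset are assumptions)."""
    return gain * v_sensor + offset

def adc(v, v_ref=5.0, bits=12):
    """Ideal analog-to-digital conversion of a 0..v_ref volt signal."""
    v = min(max(v, 0.0), v_ref)                # clamp to the input range
    return round(v / v_ref * (2 ** bits - 1))  # quantize to an integer code

# A TGS 2620 reading of 1.708 V (as in the results table) becomes
print(adc(condition(1.708)))  # -> 1399
```

In the real system the chassis performs this conversion in hardware; the sketch only shows where each stage of the sensor-to-number chain sits.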

VI. K-NN ALGORITHM KNN stands for K-Nearest Neighbor. It is a pattern recognition technique for classifying objects based on the closest training examples in the feature space. k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-nearest neighbor algorithm is among the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its nearest neighbor. The same method can be used for regression, by assigning to the object the average of the values of its k nearest neighbors. A common weighting scheme is to give each neighbor a weight of 1/d, where d is the distance to the neighbor; this scheme is a generalization of linear interpolation. The neighbors are taken from a set of objects for which the correct classification (or, in the case of regression, the value of the property) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data. Nearest neighbor rules in effect compute the decision boundary implicitly. It is also possible to compute the decision boundary itself explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary


complexity.

k-value Selection: The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification but make the boundaries between classes less distinct. A good k can be selected by various heuristic techniques, for example cross-validation. The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes.

Euclidean distance: The k-nearest-neighbor classifier generally uses the Euclidean distance between a test sample and the specified training samples. Let xi be an input sample with p features (xi1, xi2, …, xip), n be the total number of input samples (i = 1, 2, …, n) and p the total number of features (j = 1, 2, …, p). The Euclidean distance d(xi, xt) between samples xi and xt (t = 1, 2, …, n) is defined as

d(xi, xt) = √((xi1 − xt1)² + (xi2 − xt2)² + … + (xip − xtp)²)

Equation 3.1-1: Euclidean Distance

K-NN Example:

Figure 3.1-11: K-NN Example

The test sample (green circle) should be classified either to the first class of blue squares or to the second class of red triangles. If k = 3 it is assigned to the second class, because there are 2 triangles and only 1 square inside the inner circle. If k = 5 it is assigned to the first class (3 squares vs. 2 triangles inside the outer circle).
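The k = 3 versus k = 5 behaviour in this example can be reproduced numerically. The coordinates below are invented purely to mimic the figure (two "triangle" points near the test sample, three "square" points farther out); this is a minimal Python sketch, not the LabVIEW VI used in this work.

```python
import math
from collections import Counter

# Training points invented to mimic the figure: the query ("green
# circle") sits between two triangles, with the squares farther out
train = [((0.0, 2.0), "square"), ((2.0, 2.0), "square"), ((2.0, 0.0), "square"),
         ((1.0, 0.5), "triangle"), ((0.5, 1.0), "triangle")]
query = (0.8, 0.8)

def vote(k):
    """Class of the query by majority vote of its k nearest neighbors."""
    near = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in near).most_common(1)[0][0]

print(vote(3))  # -> triangle (2 triangles vs. 1 square)
print(vote(5))  # -> square   (3 squares vs. 2 triangles)
```

Changing k from 3 to 5 flips the decision exactly as in the figure, which is why k selection (e.g. by cross-validation) matters.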

Flow Chart of Process:

Figure 3.3 2: Block Diagram during Testing Phase

Front Panel:

Figure 3.2 1: Flow chart of Process

Figure 3.3-3: Front Panel during Training

There are two phases in the process. Training: during this phase each fruit odor is sampled from the sensors using the NI cDAQ, and the values of both sensors, together with the type of fruit, are stored in the spreadsheet. Testing: during this phase a new sample (whose type of fruit is to be determined) is acquired. Its values are compared with the trained fruits (stored in the spreadsheet) using the k-NN algorithm, and the type of fruit is shown. Building a VI for Detection and Classification of Fruit Odors and to Implement the k-NN Classifier Algorithm:

Figure 3.3-4: Front Panel during Testing

Figure 3.3-1: Block Diagram during Training Phase

Figure 3.3-5: Experiment photo


As part of this project, we have so far monitored the odors of different fruits using the TGS 2620 and TGS 2602. We tabulated the output voltages corresponding to the fruit odors at the time of training. The type of fruit is then shown as output during the testing phase.

VII. CONCLUSION/RESULTS We have successfully classified different fruits (banana and lemon) with the help of the k-NN algorithm in LabVIEW. So far we have monitored the odors of different fruits using the TGS 2620 and TGS 2602, and tabulated the output voltages corresponding to their odors and the classification of fruits at different stages (days) of training. The type of fruit is then shown as output during the testing phase.

Fruit | Stage | TGS 2620 Voltage (V) | TGS 2602 Voltage (V) | Fruit status LED
Banana | Stage-1 | 1.708 | 1.159 | Red/ON
Banana | Stage-2 | 1.716 | 1.161 | Red/ON
Banana | Stage-3 | 10698 | 1.150 | Red/ON
Lemon | Stage-1 | 1.52 | 0.999 | Green/ON
Lemon | Stage-2 | 1.49 | 1.015 | Green/ON
Lemon | Stage-3 | 1045 | 1.005 | Green/ON

Instead of using a PC and LabVIEW, a microcontroller-based portable electronic nose can be implemented. By improving the algorithm and adding sensors, the system can also be used to check the freshness of food. It can further be made into an automatic system for detecting harmful gases.


VIII. FUTURE SCOPE The algorithm can be modified to use Artificial Neural Networks (ANN) to implement the same project with better accuracy.


SUBSCRIPTION RATE

Annual Subscription | In Rupees | In US$
International Journal on Current Science & Technology | Rs. 5000/- | US$ 100

Payable by Bank Transfer/RTGS to Account No. 32709506720, State Bank of India, Nirjuli (bank code 9535, IFSC: SBIN0009535). Once paid, the subscriber's postal address and a scanned copy of the bank receipt may be sent by e-mail to [email protected]. Authors' Instructions: Papers on the following subjects should reach us in IEEE format only, by e-mail to [email protected] or [email protected], and as a hard copy by post.

To, The Editor, International Journal on Current Science & Technology, National Institute of Technology Arunachal Pradesh, PO-Yupia, P.S.-Doimukh, Dist.-Papum Pare, Pin-791112, Arunachal Pradesh. Acceptance of papers is based on a peer-review process. Subjects: Technical Education, Chemical Sciences, Engineering Sciences, Environmental Sciences, Information and Communication Science & Technology (including Computer Sciences), Material Sciences, Mathematical Sciences (including Statistics), Medical Sciences, New Biology (including Biochemistry, Biophysics & Molecular Biology and Biotechnology) and Physical Sciences.



NORTH-EASTERN REGIONAL SCIENCE CONGRESS Programme Committee: [1] Prof. Dilip Kumar Sinha - Former Vice-Chancellor of Visva-Bharati University [2] Dr. Manoj Kumar Chakrabarti - General Secretary (Membership Affairs), ISCA [3] Dr. (Mrs.) Vijay Laxmi Saxena - General Secretary (Scientific Activities), ISCA [4] Mr. N. B. Basu - Treasurer, ISCA [5] Dr. Amit Krishna De - Executive Secretary, ISCA [6] Prof. S. C. Dutta Roy - IIT Delhi [7] Prof. Sanghamitra Roy - ISI, Kolkata [8] Prof. S. K. Bhattacharyya - Director, NIT Surathkal [9] Prof. S. Sen - University of Calcutta, West Bengal [10] Prof. Surabhi Banerjee - Vice-Chancellor, Central University Orissa [11] Prof. S. R. Bhadrachowdhury - Bengal Engineering & Science University, Howrah [12] Prof. M. L. Das - Dhirubhai Ambani Institute of ICT, Gujarat [13] Prof. M. V. Pitke - Former Professor of TIFR, Mumbai [14] Prof. S. Raha - Bose Institute, Kolkata [15] Prof. Rabindra Nath Bera - Sikkim Manipal University [16] Prof. Binay Singh - NERIST, Arunachal Pradesh [17] Prof. P. P. Sahu - Tezpur University, Assam [18] Dr. Bubu Bhuyan - NEHU, Shillong. Local Organizing Committee: [1] Prof. C. T. Bhunia - Chairman, Conference & Director, NITAP [2] Dr. Pinaki Chakraborty - Convenor, Conference, NITAP [3] Prof. P. D. Kashyap - NITAP [4] Dr. Nabakumar Paramanik - NITAP [5] Dr. U. K. Saha - NITAP [6] Dr. K. R. Singh - NITAP [7] Mr. Swarnendu Chakraborty - NITAP. Working Programme Committee: [1] Dr. Pinaki Chakraborty - Convenor, Conference, NITAP [2] Dr. Nabakumar Paramanik - NITAP [3] Dr. U. K. Saha - NITAP [4] Dr. K. R. Singh - NITAP

PREFACE

"As for the future, your task is not to foresee it, but to enable it." ...Antoine de Saint-Exupéry

The National Institute of Technology is an Institute of National Importance and a unitary university by an Act of Parliament. It is full of never-say-die spirit in implementing its defined objectives of Education, Research, Ethics and Service to Society. Nothing can be more credible for an institute of higher learning than to provide quality teaching and productive research. In its pursuit of quality teaching, and in an attempt to complete the man-making process in a holistic approach, the B.Tech. syllabi of this institute purposefully include unique compulsory courses on Values & Ethics, Entrepreneurship Practices, Historiography of Science & Technology and NCC, among others. In line with that, to lay a solid foundation in research, Ph.D. programmes were introduced in the very third year of its inception. GOD is in favour of doers, and we are highly privileged to get the opportunity to organize the North Eastern Regional Science Congress in this centenary year of the Indian Science Congress Association. On my own behalf and on behalf of the entire NIT family, I put on record our gratitude to the Indian Science Congress Association for showing their confidence and faith in our academic potential to organize the North Eastern Regional Science Congress. We feel all the more honoured that distinguished scientists and promising young researchers of several leading universities, e.g. the University of Calcutta, other National Institutes of Technology, Manipur University, North Eastern Hill University and Tezpur University, among others, have spontaneously and generously contributed their thought-provoking research papers to this conference. I thank and salute the esteemed contributors. We at NIT Arunachal believe in taking up challenges to realize what we think is essential for making NIT par excellence. To us, the sky is the only limit.

Therefore, our initiative to publish a bi-yearly research journal on current science and technology on a regular basis could not find a better moment to see the light of day than the eve of the North Eastern Regional Science Congress. The proceedings of the conference are therefore published as the premier issue of the journal. Accolades to the authors, the editors, the organizers, the readers and all the members of the family of NIT Arunachal for their commitment to "Stop Not Till The Goal Is Reached." I have full confidence that the journal-cum-proceedings published on the occasion of the North Eastern Regional Science Congress will bring scholarship in totality. "There is nothing so practical as a good theory." ...Ludwig Boltzmann

Professor Chandan Tilak Bhunia DIRECTOR National Institute of Technology Arunachal Pradesh

INDEX OF CONTENT

Sl. No. | Title | Page
1. The evaluation of research performance of Indian states by Dr. Gangan Prathap — 11
2. Imbalance of Technical Education in the North East India and its Effects by Sainkupar Marwein Mawiong — 15
3. Reviewing and Suggestions for Revamping Technical Higher Education in India to Meet the Challenges of Future Scenario by A. Bhunia, A. Bhunia, S. K. Chakraborty, P. Chakraborty, R. S. Goswami, N. Pramanik, M. K. De, P. K. Samanta and C. T. Bhunia — 21
4. Imbalance in Technical Education-Regional by Bikash Sah, Nupur, Santosh Shukla, Krishna Kumar — 31
5. A comparative study of fungal diseases of french bean (Phaseolus vulgaris L.) in organic and conventional farming system by G. K. N. Chhetry and H. C. Mangang — 35
6. Arbuscular mycorrhizal fungi associated with the rhizospheric soil of potato plant (Solanum tuberosum) in Barak valley of South Assam, India by Sujata Bhattacharjee & G. D. Sharma — 41
7. Biodiversity and conservation strategies of home garden crops in Manipur by A. Premila and G. K. N. Chhetry — 45
8. Metabolic Pathways: A review by Daizy Deb and Rhythm Upadhyaya — 49
9. Ichthyofaunal Diversity of Simen River in Assam and Arunachal Pradesh, India by Biplab Kumar Das, Aloka Ghosh and Devashish Kar — 55
10. Recent Advances in Papaya Cultivation and Breeding by Aditi Chakraborty and S. K. Sarkar — 59
11. Traditional organic practices with traditional inputs farming for the cultivation of french bean in Manipur by G. K. N. Chhetry and H. C. Mangang — 65
12. Induced breeding of eel-loach Pangio pangia (Hamilton 1822) by Kh. Geetakumari, Ch. Basudha and N. Prakash — 73
13. Fungal Airspora over onion field in Manipur valley by A. Premila — 77
14. Variation in Indoor and Outdoor Aeromycoflora of a Rice Mill in Imphal by A. Premila — 81
15. Biochemical Networks: The Chemistry of Life by Rhythm Upadhyaya and Rhyme Upadhyaya — 85
16. Applications of zeolites for alkylation reactions: catalytic and thermodynamic properties by Dr. V. R. Chumbhale — 91
17. Multichannel Transceiver System Design Using Uncoordinated Direct Sequence Spread Spectrum by S. Kalita, R. Kaushik, M. Jajoo, P. P. Sahu — 97
18. Effect of demyelination on conduction velocity in demyelinating polyneuropathic patients by H. K. Das and P. P. Sahu — 101
19. From Transistor to Medicine: Materials, Devices, and Systems by Tapas Kumar Maiti — 105
20. Enzyme-modified Field Effect Transistors (ENFETs) as Biosensors: A Research Review by Manoj Kumar Sarma and Jiten Ch. Dutta — 109
21. Acetylcholine Gated Spiking Neuron Model by Soumik Roy, Meenakshi Boro, Jiten Ch. Dutta and Reginald H. Vanlalchaka — 115
22. Power Efficient Adiabatic Gray to Binary & Binary to Gray Code Converter Circuits by Reginald H. Vanlalchaka and Soumik Roy — 119
23. Light Induced Plating for Enhanced Efficiency by Improving Fill Factor and Short Circuit Current by Santanu Maity, Avra Kundu, Hiranmay Saha, Utpal Gangopadhyay — 125
24. Image Denoising Using Sparse and Overcomplete Representations - A Study by M. K. Rai Baruah, Bhabesh Deka — 129
25. FOTOFUSION - An Analysis of Image Editing on Android Platform as an Application in Smart Phones by Smita Das, Nitesh Kr. Singh, Mukesh Kumar, Ashok Ajad, Priya Khan — 135
26. Denoising of Speckled Images by Sagarika Das — 141
27. A Study of Randomness and Variable Key in Cryptography by Achinta Kumar Gogoi, Bidyut Kalita — 147
28. Approach towards realizing error propagation effect of AES and studies thereof in the light of Redundancy Based Technique by B. Sarkar, C. T. Bhunia, U. Maulik — 153
29. Cipher Combining Technique to tackle Error Propagation Behavior of AES by Rajat Subhra Goswami, Swarnendu Kumar Chakraborty, Abhinandan Bhinia, C. T. Bhunia — 159
30. Two New Protocols for Improving Performance of Aggressive Packet Combining by Swarnendu Kumar Chakraborty, Rajat Subhra Goswami, Abhinandan Bhinia, C. T. Bhunia — 161
31. Review and Security Analysis of an Efficient Biometric-Based Remote User Authentication Scheme Using Smart Cards by Subhasish Banerjee, Uddalak Chatterjee and Kiran Sankar Das — 167
32. Evolution Strategy for the C-Means Algorithm: Application to Multimodal Image Segmentation by Francesco Masulli, Anna Maria Massone, Andrea Schenone — 171
33. A Deterministic Inventory Model for Deteriorating Items with Time Dependent Demand and Allowable Shortage under Trade Credit by Pinki Majumder and U. K. Bera — 197
34. Development of LabVIEW Based Electronic Nose Using k-NN Algorithm for the Detection and Classification of Fruity Odors by N. Jagadesh Babu — 207

THE EVALUATION OF RESEARCH PERFORMANCE OF INDIAN STATES

Gangan Prathap
CSIR-National Institute of Science Communication and Information Resources, New Delhi 100012
E-mail: [email protected]

ABSTRACT We examine how various states in India have performed in academic research on a per-GDP basis. The scientific output, measured in terms of the number of papers published in a prescribed window (which serves as a quantity proxy), and the GDP in current dollar terms lead to the quality proxy, papers/GDP. The second-order indicator, the product of the square of the quality proxy and the quantity proxy, becomes the most practical single-number scalar indicator of performance that combines quality and quantity of output or outcome. Keywords — Quality; Quantity; Quasity; Exergy; Performance; Bibliometrics.

I. INTRODUCTION As early as 1939, J. D. Bernal made an attempt to measure the amount of scientific activity in a country and relate it to the economic investments made. In The Social Function of Science (1939), Bernal [1] estimated the money devoted to science in the United Kingdom using existing sources of data: government budgets, industrial data (from the Association of Scientific Workers) and University Grants Committee reports. He was also the first to propose an approach that became the main indicator of science and technology: Gross Expenditure on Research and Development (GERD) as a percentage of GDP. He compared the UK's investment (0.1%) with that of the United States (0.6%) and the USSR (0.8%) and suggested that Britain should devote 0.5-1.0% of its national income to research. Since then, research evaluation at the country and regional levels has progressed rapidly, and exercises are now carried out at regular intervals in the United States of America, the European Union, the OECD, UNESCO, Japan, China, etc. Science is a socio-cultural activity that is highly disciplined and easily quantifiable. The output of science can be easily measured in terms of articles published, citations, etc. Inputs are mainly the financial and human resources

invested in science and technology activity. The financial resources invested in research are used to calculate what is called the Gross Domestic Expenditure on R&D (GERD), and the human resources devoted to these activities (FTER for Full Time Equivalent Researcher) are usually computed as a fraction of the workforce or the population. The US science adviser, J R Steelman pointed out in 1947 that “The ceiling on research and development activities is fixed by the availability of trained personnel, rather than by the amounts of money available. The limiting resource at the moment is manpower”.

II. METHODOLOGY In most countries, due to a legacy of poor investment in higher education and research, both GERD and FTER per million of population are sub-optimal. To see how far R&D investment in manpower and funding terms is sub-optimal in India, it is a good exercise to see how output is related to actual GDP. In the present exercise, the scientific output measured in terms of articles published from the various states of India as registered by the Web of Science over a 3-year period (2007-2009), P, is taken as the output term [2]. The GDP of each state, in billions of dollars in 2009 ($Bn), is taken as the proxy for the input term (http://www.economist.com/content/indian-summary accessed on 22 July 2011). A simple and crude measure of the quality of scientific activity will of course be given by the ratio of output to input, q = P/$Bn. This indicator usually favours small states at the expense of larger states, where the law of diminishing returns sets in. Indeed, there will always be cases of high input but low output and therefore low quality, or low input and medium output but of high quality, etc. It is therefore desirable to assess overall performance in terms of a single indicator. The challenge is, when given an output or outcome (O) and an input of size Q, to combine quality q with quantity Q and/or output O to yield a single indicator that is the best proxy for performance. The Quasity-Exergy paradigm [3] proposes that in any general situation where performance needs to be evaluated, given an input Q (for quantity) and an output or outcome O (for quasity), quality is defined as quasity/quantity (q = O/Q), and the simplest and most effective indicator for performance becomes X = qO = q²Q. Thus in this case, where Q = $Bn and O = P, X = P²/$Bn. That is, in Quantity-Quality-Quasity terms, the indicator P/$Bn (papers per billion dollars of GDP) is the "quality" measure, $Bn (billions of dollars of GDP) is the quantity (read size) measure, and P (papers published during 2007-2009) is the quasity measure. The energy-like term X = (P/$Bn) × P is the product of the quality and quasity terms and perhaps best represents the "performance" of each state on a per-GDP basis.
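As a worked check of the X = q²Q prescription, take Delhi's figures from Tables I and II (P = 14157 papers, GDP = $36.1 Bn); the short sketch below just restates the definitions in code:

```python
def performance(P, gdp_bn):
    """Quality q = P/$Bn; exergy X = q*O = q**2 * Q = P**2/$Bn."""
    q = P / gdp_bn
    return q, q * P

q, X = performance(14157, 36.1)  # Delhi: papers 2007-2009, GDP in $Bn
print(round(q, 2), round(X, 2))  # -> 392.16 5551818.53
```

These are exactly the q and X entries that appear against Delhi in Table II.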

III. THE RELATIVE SCIENTIFIC PERFORMANCE OF VARIOUS INDIAN STATES ON A PER-GDP BASIS Table I presents the output from various Indian states registered in the Web of Science during 2007-2009 [2]. Tamil Nadu accounts for the largest number of publications on what we call the quasity basis. Table II sorts the results on a quality basis (papers per billion dollars of GDP). This is obtained by inverting the relationship proposed in Prathap [3], namely quasity = quantity × quality. Here, the GDP of the state in billions of dollars ($Bn) is taken as the quantity term. The Union Territory of Chandigarh, which has many top national research and academic institutions, ranks first among the Indian states for academic scientific research on this basis. Delhi, which has a privileged status as the National Capital Region, ranks second, and the erstwhile Union Territory of Puducherry ranks third. The exergy term, the product of quality and quasity, is offered as the best single-number indicator for performance. On this basis, Delhi emerges first. This is not surprising, as a very large number of premier research and academic institutions are based in Delhi. All this can be easily represented on a Quantity-Quality-Quasity diagram, where the product qO (also q²Q) is the energy-like term (called exergy X) and is a scalar measure of the scientific activity during the window concerned that takes into account both quality and quantity. We see from Table II and Figures 1 and 2 that Delhi's research during this period forges ahead of the rest of the field. Indeed, in exergy terms, Delhi contributes 38% of India's scientific output, while it accounts for only 3.3% of India's GDP.
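The Delhi shares quoted above follow directly from the Table II entries for Delhi and for India as a whole:

```python
delhi_X, india_X = 5551818.53, 14586922.87  # exergy values from Table II
delhi_gdp, india_gdp = 36.1, 1081.8         # GDP in $Bn from Table II

print(f"exergy share: {100 * delhi_X / india_X:.1f}%")   # -> 38.1%
print(f"GDP share: {100 * delhi_gdp / india_gdp:.1f}%")  # -> 3.3%
```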

IV. CONCLUSIONS Reference [3] proposed a practical theory of performance, associating quality with vector properties, input quantity with scalar properties, and an intermediate term, quasity (quantity × quality), also a vector. This trinity of terms helps generate an energy-like term called exergy, which serves as the simplest indicator for performance. We have applied these ideas to the comparative research evaluation of various Indian states on a per-GDP basis.

TABLE I

Tamil Nadu Is Ranked First On The Basis Of The Number Of Papers Published During 2007-09.

State | Number of Papers P
Tamil Nadu | 17507
Maharashtra | 16577
Uttar Pradesh | 15843
Karnataka | 15156
West Bengal | 14471
Delhi | 14157
Andhra Pradesh | 9494
Kerala | 4559
Gujarat | 4094
Madhya Pradesh | 3835
Punjab | 3151
Rajasthan | 2814
Chandigarh | 2640
Haryana | 2555
Assam | 2210
Orissa | 2105
Uttarakhand | 1223
Himachal Pradesh | 1137
Bihar | 1019
Jammu & Kashmir | 988
Pondicherry | 875
Jharkhand | 698
Goa | 626
Meghalaya | 364
Chhattisgarh | 238
Arunachal Pradesh | 195
Manipur | 156
Sikkim | 124
Tripura | 96
Mizoram | 84
Andaman & Nicobar Islands | 77
Nagaland | 68
Lakshadweep | 2
Total | 125619
TABLE II
On A Quality Basis (Papers Per Billion Dollars Of GDP), Chandigarh Ranks First. On The Second-Order Indicator Basis, Delhi Emerges First.

States/UTs | GDP $Billion | q = P/$Bn | Exergy X = P × P/$Bn
Chandigarh | 4.1 | 643.90 | 1699902.44
Delhi | 36.1 | 392.16 | 5551818.53
Puducherry | 2.8 | 312.50 | 273437.50
Karnataka | 62.9 | 240.95 | 3651897.23
Tamil Nadu | 80.0 | 218.84 | 3831188.11
Sikkim | 0.6 | 206.67 | 25626.67
Arunachal Pradesh | 1.0 | 195.00 | 38025.00
West Bengal | 76.9 | 188.18 | 2723144.88
Meghalaya | 2.1 | 173.33 | 63093.33
Andaman & Nicobar Islands | 0.5 | 154.00 | 11858.00
Uttar Pradesh | 103.5 | 153.07 | 2425127.04
Goa | 4.2 | 149.05 | 93303.81
Jammu & Kashmir | 7.6 | 130.00 | 128440.00
Himachal Pradesh | 8.9 | 127.75 | 145254.94
Uttarakhand | 9.9 | 123.54 | 151083.74
Assam | 18.6 | 118.82 | 262586.02
India | 1081.8 | 116.12 | 14586922.87
Manipur | 1.4 | 111.43 | 17382.86
Andhra Pradesh | 85.7 | 110.78 | 1051762.38
Kerala | 41.2 | 110.66 | 504477.69
Mizoram | 0.8 | 105.00 | 8820.00
Madhya Pradesh | 37.3 | 102.82 | 394295.58
Maharashtra | 175.3 | 94.56 | 1567580.88
Punjab | 40.5 | 77.80 | 245155.58
Orissa | 31.8 | 66.19 | 139340.41
Rajasthan | 46.3 | 60.78 | 171027.99
Haryana | 44.2 | 57.81 | 147692.87
Gujarat | 80.1 | 51.11 | 209248.89
Nagaland | 1.5 | 45.33 | 3082.67
Jharkhand | 17.5 | 39.89 | 27840.23
Tripura | 2.6 | 36.92 | 3544.62
Bihar | 32.7 | 31.16 | 31754.16
Chhattisgarh | 22.7 | 10.48 | 2495.33
Lakshadweep | 0.3 | 6.67 | 13.33

Fig. 1 The graphical representation of scientific performance of various Indian states on a quality-quasity map.
P Fig. 2 The graphical representation of scientific performance of various Indian states on a quality-quasity map (zoomed in for X1) copies of the requested packet. Receiver getting i copies, can now make a pair-wise XORed to locate error positions. For example if i=2, we have three copies of the packet (Copy-1=the stored copy in receiver’s buffer, Copy-2=one of the retransmitted copies, Copy-3=another retransmitted copy) and three pairs for XOR operation: Copy-1 and Copy-2 Copy-2 and Copy-3 Comparing pairs


International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

Table (I) Algorithm of MPC

XORed pair           Number of bits in error (x)   Common copy in two consecutive rows
Copy-1 and Copy-2    1                             Copy-1 (common to the first two rows)
Copy-1 and Copy-3    2                             Copy-3 (common to the next two rows)
Copy-3 and Copy-2    3                             Copy-2 (common to the last and first rows)

IV. REVIEW OF AGGRESSIVE PACKET COMBINING SCHEME (APC)

APC is a modification of the majority packet combining scheme [30], adapted so that it can be applied in wireless networks. APC is best illustrated as in [22]:
i. ORIGINAL PACKET = 11111 is sent from the sender; the sender sends three copies of the packet.
ii. All the copies reach the receiver in error: FIRST COPY: 11011, SECOND COPY: 11110, THIRD COPY: 11011.
iii. The receiver applies majority logic bit by bit on the three received erroneous copies (11011, 11110, 11011) and thus gets a generated copy, 11011.
iv. The receiver applies an error-detection scheme to find whether the generated copy is correct or not. As it is not correct, the receiver chooses the least reliable bits from the majority logic; in this case these are the 3rd and 5th bits from the left.
v. The receiver applies brute-force correction, as in PC, to the 3rd and 5th bits, followed by error detection. By this process it may get the correct copy. If it fails, it requests retransmission, whereupon the sender will repeat three copies.

To see how the XOR operation locates errors, assume that an actual packet 10100011 was received as: Copy-1 = 10101011, Copy-2 = 10101111, Copy-3 = 10100001. Under the XOR operation we have: Copy-1 XOR Copy-2 (say, C12) = 00000100 (one bit in error); Copy-2 XOR Copy-3 (C23) = 00001110 (three bits in error); Copy-3 XOR Copy-1 (C31) = 00001010 (two bits in error). Now we have to define with which copy the bit inversion will start and how to proceed thereafter. We define an algorithm for the purpose as below. Make a table (see Table (I)) in ascending order of the number of bits in error as indicated by the XOR operation. The bit inversion and the FCS checking process shall begin with the common copy indicated in the last column of the table so prepared, and proceed down the table if required. If none of the inversions yields a result, the receiver has to request further retransmission. As per Table (I), in this example the detection of error locations and the consequent bit inversion will start with Copy-1 and, if required, will be followed by Copy-3 and then by Copy-2.
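The XOR ordering and brute-force inversion just described can be sketched as follows. This is a minimal illustration (not the authors' implementation), run on the worked example above; fcs_ok() is a hypothetical stand-in for a real FCS/CRC check and here simply compares against the known original packet, which a real receiver would not have.

```python
# Sketch of the XOR-based error location of Table (I) and the
# brute-force bit inversion of PC, on the worked example in the text.

ORIGINAL = 0b10100011                      # actual packet of the example
copies = {1: 0b10101011,                   # Copy-1
          2: 0b10101111,                   # Copy-2
          3: 0b10100001}                   # Copy-3

def fcs_ok(word):
    return word == ORIGINAL                # placeholder for FCS checking

pairs = [(1, 2), (2, 3), (3, 1)]
xors = {frozenset(p): copies[p[0]] ^ copies[p[1]] for p in pairs}

# Table (I): sort the pairs in ascending order of bits in error.
rows = sorted(pairs, key=lambda p: bin(xors[frozenset(p)]).count("1"))

# The copy common to two consecutive rows fixes the inversion order.
order = [(set(rows[i]) & set(rows[(i + 1) % 3])).pop() for i in range(3)]

def recover(width=8):
    for c in order:
        # Suspect positions: bits where copy c disagrees with either
        # of the other two copies.
        mask = 0
        for p in pairs:
            if c in p:
                mask |= xors[frozenset(p)]
        positions = [i for i in range(width) if mask >> i & 1]
        # Brute-force inversion over the suspect positions, testing
        # each candidate with the FCS.
        for subset in range(1 << len(positions)):
            cand = copies[c]
            for j, pos in enumerate(positions):
                if subset >> j & 1:
                    cand ^= 1 << pos
            if fcs_ok(cand):
                return cand
    return None        # all inversions failed: request retransmission

print(order)
print(f"{recover():08b}")
```

The computed order matches the text: inversion starts with Copy-1, is followed by Copy-3, and then by Copy-2.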

V. TWO MODIFICATIONS OF APC

Enhancing throughput. SCHEME I: The APC as proposed by Leung [22] has a very low throughput. One basic parameter for measuring throughput is the average number of times n a packet is transmitted/retransmitted until it is received successfully. In APC, n ≥ 3, making the throughput less than, or at best equal to, (1/3) × 100%. More exactly, if stop-and-wait (S/W) ARQ is employed with APC, n = 3/(1 − p), where p is the probability that a packet is in error; p = 1 − (1 − α)^N, where α is the bit error rate (BER) and N is the number of bits in a packet. For GBN ARQ with APC, n = 3[{1 + (L − 1)p}/(1 − p)], where L is the window size in GBN. Such a low throughput does not support the claim of bandwidth savings in APC. We propose that the normal GBN protocol be applied with the modification that, when a packet is acknowledged negatively, m (m = any odd number ≥ 3) copies of the negatively acknowledged packet and of all the subsequent packets transmitted by that time shall be retransmitted. This will make:

n ≤ 3[{1 + (L − 1)p}/(1 − p)].

This will raise the throughput of the proposed scheme over that of APC. The only issue is the choice of m, which will be the deciding factor for higher throughput in the proposed scheme. The condition under which the proposed scheme provides better throughput is:

(m − 1) ≤ 2/[1 − (1 − α)^N] ……………(1)

For a set of α and N, the variation of the m required for the proposed scheme to have higher throughput than conventional APC is portrayed in the figure below.

Fig.: Variation of the required number of copies m with BER

SCHEME II: In this scheme we propose that, when a packet is acknowledged negatively, the same packet shall be retransmitted together with a bit-wise XOR of the packet with the received correct copy of the immediately preceding packet. Say the first packet, 11001100 (A), is received correctly. Say the second packet, 11110000, is received erroneously as 01110000 (B). When the second packet is acknowledged negatively, the transmitter will transmit the following: 11110000 (a copy of the erroneous packet) and the XOR of the previously received correct packet and the present negatively acknowledged packet, i.e. in this case (11001100 XOR 11110000) = 00111100. Say these copies are both received erroneously, as 11001101 (C) and 10111100 (D). Using A and D, the receiver will reconstitute a second packet as A XOR D = 01110000 (E). Now the receiver has three erroneous copies: B, C and E. The receiver will apply MPC on B, C and E to recover a correct copy of the second packet. The proposed scheme will considerably enhance throughput, as 2 copies in place of 3 copies (as in APC) are transmitted.

VI. CONCLUSION AND FUTURE RESEARCH

We have proposed two suggestions for the modification of APC for performance improvement in terms of throughput. All these modifications need to be compared in simulation studies to arrive at definite conclusions.
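Condition (1) and the throughput expressions above are easy to evaluate numerically. The sketch below uses example values of α, N and L that are assumptions for illustration (they are not taken from the paper):

```python
# For packet length N bits and bit error rate alpha,
# p = 1 - (1 - alpha)^N. Condition (1), (m - 1) <= 2 / p, gives the
# largest odd m for which the proposed scheme still beats APC.

def packet_error_prob(alpha: float, n_bits: int) -> float:
    return 1.0 - (1.0 - alpha) ** n_bits

def n_apc_gbn(p: float, window: int) -> float:
    # Average transmissions per packet for GBN ARQ with APC:
    # n = 3 [1 + (L - 1) p] / (1 - p)
    return 3.0 * (1.0 + (window - 1) * p) / (1.0 - p)

def max_odd_m(alpha: float, n_bits: int) -> int:
    p = packet_error_prob(alpha, n_bits)
    bound = 2.0 / p + 1.0          # from (m - 1) <= 2 / p
    m = int(bound)
    if m % 2 == 0:                 # m must be odd
        m -= 1
    return max(m, 3)               # and at least 3

p = packet_error_prob(1e-4, 1024)  # example values, not from the paper
print(round(p, 4))
print(round(n_apc_gbn(p, window=8), 3))
print(max_odd_m(1e-4, 1024))
```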

VII. REFERENCES

[1] C T Bhunia, A Few Modified ARQ Techniques, Proceedings of the International Conference on Communications, Computers & Devices (ICCCD-2000), 14-16 December 2000, IIT Kharagpur, India, Vol. II, pp. 705-708.

[2] C T Bhunia and A Chowdhury, ARQ Technique with Variable Number of Copies in Retransmission, Proceedings of the Conference on Computer Networking and Multimedia (COMNAM-2000), 21-22 December 2000, Jadavpur University, Calcutta, India, pp. 16-21.
[3] C T Bhunia and A Chowdhury, Performance Analysis of ARQ Techniques used in Computer Communication Using Delay as a Parameter, Proceedings of the Conference on Computer Networking and Multimedia (COMNAM-2000), Jadavpur University, Calcutta, India, pp. 22-24.
C T Bhunia, ARQ with Two-Level Coding with Generalized Parity and i (i>1) Copies of Parts in Retransmission, Proceedings of the National Conference on Data Communications (NCDC-2000), Computer Society of India, Chandigarh, India, 7-8 April 2000, p. 19.
[4] C T Bhunia, ARQ Techniques: Review and Modifications, IETE Technical Review, Vol. 18, No. 5, Sept-Oct 2001, pp. 381-401.
[5] R J Benice and A H Frey Jr, An Analysis of Retransmission Schemes, IEEE Trans Comm Tech, COM-12, pp. 135-145, Dec 1964.
[6] S Lin, D Costello Jr and M J Miller, Automatic Repeat Request Error Control Schemes, IEEE Comm Mag, Vol. 22, pp. 5-17, Dec 1984.
[7] A R K Sastry, Improving Automatic Repeat Request (ARQ) Performance on Satellite Channels Under High Error Rate Conditions, IEEE Trans Comm, April 1977, pp. 436-439.
[8] Joel M Morris, On Another Go-Back-N ARQ Technique for High Error Rate Conditions, IEEE Trans Comm, Vol. 26, No. 1, Jan 1978, pp. 186-189.
[9] E J Weldon Jr, An Improved Selective Repeat ARQ Strategy, IEEE Trans Comm, Vol. 30, No. 3, March 1982, pp. 480-486.
[10] Don Towsley, The Stutter Go Back-N ARQ Protocol, IEEE Trans Comm, Vol. 27, No. 6, June 1979, pp. 869-875.
[11] Dimitri Bertsekas et al, Data Networks, Prentice Hall of India, 1992, Ch. 2.
[12] G E Keiser, Local Area Networks, McGraw-Hill, USA, 1995.
[13] N D Birrell, Pre-emptive Retransmission for Communication over Noisy Channels, IEE Proc Part F, Vol. 128, 1981, pp. 393-400.
[14] H Bruneel and M Moeneclaey, On the Throughput Performance of Some Continuous ARQ Strategies with Repeated Transmissions, IEEE Trans Comm, Vol. COM-34, 1986, pp. 244-249.
[15] Y Wang and S Lin, A Modified Selective-Repeat Type-II Hybrid ARQ System and its Performance Analysis, IEEE Trans Comm, Vol. COM-31, May 1983, pp. 593-608.
[16] S B Wicker and M J Bartz, Type-II Hybrid ARQ Protocols using Punctured MDS Codes, IEEE Trans Comm, Vol. 42, Feb-April 1994, pp. 1431-1440.
[17] O Yuen, Design Trade-offs in Cellular/PCS Systems, IEEE Comm Mag, Vol. 34, No. 9, Sept 1996, pp. 146-152.
[18] H Liu, H Ma, M E Zarki and S Gupta, Error Control Schemes for Networks: An Overview, Mobile Networks and Applications, Vol. 2, 1997, pp. 167-182.
[19] A Pahlavan and A H Levesque, Wireless Data Communication, Proc. IEEE, Vol. 82, No. 9, Sept 1994, pp. 1398-1430.
[20] Dzmitry Kliazovich, Nadhir Ben Halima and Fabrizio Granelli, Context-Aware Receiver-Driven Retransmission Control in Wireless Local Area Networks (available online).
[21] Y Hirayama, H Okada, T Yamazato and M Katayama, Time-Dependent Analysis of the Multiple-Route Packet Combining Scheme in Wireless Multihop Networks, Int J Wireless Information Networks, Vol. 12, No. 1, Jan 2005, pp. 35-44.
[22] Yiu-Wing Leung, Aggressive Packet Combining for Error Control in Wireless Networks, IEICE Trans Comm, Vol. E83-B, No. 2, Feb 2000, pp. 380-385.
[23] Shyam S Chakraborty et al, An ARQ Scheme with Packet Combining, IEEE Comm Letters, Vol. 2, No. 7, July 1998, pp. 200-202.
[24] Shyam S Chakraborty et al, An Exact Analysis of an Adaptive GBN Scheme with Sliding Observation Interval Mechanism, IEEE Comm Letters, Vol. 3, No. 5, May 1999, pp. 151-153.
[25] Shyam S Chakraborty et al, An Adaptive ARQ Scheme with Packet Combining for Time Varying Channels, IEEE Comm Letters, Vol. 3, No. 2, Feb 1999, pp. 52-54.
[26] C T Bhunia, Modified Packet Combining Scheme using Error Forecasting Decoding to Combat Error in Network, Proc. ICITA 2005 (IEEE Computer Soc.), Sydney, Vol. 2, 4-7 July 2005, pp. 641-646.
[27] C T Bhunia, Packet Reversed Packet Combining Scheme, Proc. IEEE Computer Soc., CIT 2007, Aizu University, Japan, pp. 447-451.
[28] C T Bhunia, Error Forecasting Schemes of Error Correction at Receiver, Proc. ITNG 2008, IEEE Computer Society, USA, pp. 332-336.
[29] C T Bhunia, Exact Analyzing Performance of New and Modified GBN Scheme for Noisy Wireless Environment, J Inst Engrs, India, Vol. 89, Jan 2009, pp. 27-31.
[30] S B Wicker, Adaptive Rate Error Control Through the Use of Diverse Combining and Majority Logic Decoding in a Hybrid ARQ Protocol, IEEE Trans Comm, Vol. 39, No. 3, March 1991, pp. 380-385.
[31] C T Bhunia et al, Pre-emptive Dynamic Source Routing: A Repaired Backup Approach and Stability Based DSR with Multiple Routes, J Comp & Information Tech (CIT), Croatia, Vol. 16, No. 2, 2008, pp. 91-99.
[32] C T Bhunia, IT, Network & Internet, New Age International Publishers, India, 2005.
[33] Michele Zorzi and Ramesh R Rao, Lateness Probability of a Retransmission Scheme for Error Control on a Two-State Markov Channel, IEEE Transactions on Communications, Vol. 47, No. 10, October 1999, pp. 1537-1548.


REVIEW AND SECURITY ANALYSIS OF AN EFFICIENT BIOMETRIC-BASED REMOTE USER AUTHENTICATION SCHEME USING SMART CARDS Subhasish Banerjee

Kiran Sankar Das

Department of Computer Science & Informatics, Bengal Institute of Technology & Management, Santiniketan, India. E-mail: [email protected]

M.Tech (CSE), Bengal Institute of Technology & Management, Santiniketan, India. E-mail: [email protected]

Uddalak Chatterjee, Department of Computer Science & Informatics, Bengal Institute of Technology & Management, Santiniketan, India. E-mail: [email protected]

ABSTRACT A path-breaking scheme for biometric-based remote user authentication was proposed by Li and Hwang in 2010. Later, in 2011, A. K. Das showed some shortfalls of the Li-Hwang scheme and proposed an efficient biometric-based remote user authentication scheme using smart cards that overcomes the shortfalls of the original Li-Hwang scheme and provides mutual authentication. In this paper, we review and analyze Das's scheme and point out some existing flaws, mainly based on smart card tampering and the revealing of stored information.

I. INTRODUCTION
In the field of recent e-commerce and m-commerce, remote user authentication has been a great research domain. However, day-by-day progress in technology and network access methods has exposed serious security weaknesses in the remote user authentication process, due to weak password management and advanced attack techniques. Several schemes [1-6] have shown various ways to tamper with user authentication and gain access unethically to various authentication processes. In traditional systems of identity-based user recognition, remote user authentication was based on passwords. But passwords can be guessed easily with some basic dictionary attacks. Later, to overcome these problems, passwords were encrypted with cryptographic secret keys. But long cryptographic keys are difficult to memorize; moreover, they can be lost, forgotten, or easily shared, and are therefore unable to provide non-repudiation. For client-server systems, password-based authentication with smart cards is proposed in [7-8]. A biometric system is basically a pattern recognition system which extracts a pattern set from the user's provided biometry, acquires a feature set, and verifies it against the stored template set in the system's database [9-11]. In recent work [12-14], biometric-based remote user authentication schemes have shown strong protection against password theft and fake-user attacks. Some advantageous features of biometric keys are as follows:
• Biometric keys cannot be lost or forgotten.
• Biometric keys are very difficult to share or copy.
• Biometric keys are extremely hard to forge or distribute.
• Biometric keys cannot be guessed.
• Someone's biometrics are not easy to break by others.
Therefore biometric-key-based authentication is more secure and reliable than traditional password-based authentication schemes. In this report we analyze Das's scheme and show that it is still vulnerable to various attacks and does not provide mutual authentication between the user and the server. In [16-17], researchers revealed that


the secret information stored in a smart card can be revealed by monitoring its power consumption. Therefore an attacker can obtain the information stored in a user's smart card and can also intercept the message packets communicated between the user and the server. This paper is organized as a short review of A. K. Das's scheme [15], followed by its security analysis.

II. REVIEW OF A. K. DAS'S SCHEME
In 2011 Das proposed an improved and efficient biometric-based remote user authentication scheme using smart cards. The scheme is composed of three phases: a. Registration phase, b. Login phase, c. Authentication phase. The notations used in this report are shown in the following table.

Notation   Description
Ci         User i
Ri         Trusted registration centre
Si         Server
PWi        Password shared between user and server
IDi        Identity of the user i
Bi         Biometric template of the user i
h(.)       A secure one-way hash function
Xs         Secret information maintained by the server
Rc         A random number chosen by client Ci
Rs         A random number chosen by server Si
A||B       Data A concatenated with data B
A⊕B        XOR operation of A and B
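The primitives in the table (one-way hash, concatenation, XOR) can be sketched as follows. The choice of SHA-256 for h(.) and the byte-string types are assumptions for illustration only, since the scheme merely requires some secure one-way hash:

```python
# Hypothetical instantiation of the primitives in the notation table;
# SHA-256 stands in for the unspecified one-way hash h(.).
import hashlib

def h(data: bytes) -> bytes:
    """A secure one-way hash function h(.)."""
    return hashlib.sha256(data).digest()

def concat(a: bytes, b: bytes) -> bytes:
    """A || B: data A concatenated with data B."""
    return a + b

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# XOR masking is self-inverting, which is why schemes of this family
# can store a masked secret e = v XOR mask and recover v later.
v = h(b"IDi")
mask = h(b"Xs")
e = xor(v, mask)
assert xor(e, mask) == v
```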

Registration Phase:
A.) Before the remote user Ci can log in to the system, Ci first enters his biometrics on a specific device and offers his/her identification and password to the registration centre Ri.
B.) Ri then computes the registration parameters fi, ei and ri, where Xs is a secret value generated by the server.
C.) Ri stores (IDi, h(.), fi, ei, ri) on the user's smart card and sends it to the user via a secure channel.


Login Phase. The user has to perform the following steps to log in to the system.
A.) Ci inserts his/her smart card into the card reader and provides his/her biometric information Bi on the specific device, which verifies the user's biometrics. If the check holds, Ci passes biometric verification.
B.) Ci inputs the IDi and PWi, and then the smart card computes the login parameters. The complete message flow between Ci and Si is shown in Figure 1.

Figure 1: Login and Authentication in A. K. Das's Scheme

The smart card checks whether the computed value is equal to the stored one, to verify the password. If it holds, the smart card further computes the login parameters M2 and M3.
C.) Ci sends the login request message <IDi, M2, M3> to Si.

Authentication Phase. After receiving the login request message, the server performs the following steps.
A.) Si checks the format of IDi.
B.) If IDi is valid, Si verifies the received login parameters; if the verification satisfies, Si computes the following calculations, where Rs is a random number generated by the server.
C.) Si verifies whether the computed and received values match or not. If they do, Si accepts Ci as a valid user and computes the response values.
D.) Then Si sends the response message to Ci.
E.) After receiving the message sent by Si, Ci verifies it. If it satisfies, Ci computes the next values.
F.) Ci verifies the server's response; if it holds, Ci computes the acknowledgement message.
G.) Then Ci sends the acknowledgement message to Si.
H.) After receiving the message, Si verifies whether the received and computed values are equal. If they are equal, Si accepts the user's login request. The whole process is described in Figure 1.

III. SECURITY ANALYSIS OF A. K. DAS'S SCHEME
In this part we have analyzed the security aspects of Das's scheme. To do so, we assume that an attacker can obtain the secret information stored in the smart card by continuously monitoring and analysing the power consumption of the smart card [16-17], and can also obtain communication messages by intercepting the communication channels between the user and the server. We discuss here various attacks on Das's scheme, such as the User Impersonation Attack and the Server Masquerading Attack, and finally show how it fails to provide mutual authentication.

1. User Impersonation Attack: Suppose an attacker is able to extract the information stored in the smart card, obtains the secret values, and also intercepts the login message from the user. The attacker then performs the following steps:
A. The attacker first computes forged login parameters, using a random number generated by the attacker.
B. The attacker then sends the forged message <IDi, M2, M3> to the server.
C. Upon receiving the forged message, the server checks the format of IDi and then verifies the login parameters. As these are the same as for the real user, the verification passes. The server then thinks of the attacker as a valid user and therefore computes the response calculations.
D. The server then sends the response message to the attacker.
E. The attacker then sends the acknowledgement message to the server in the authentication phase.

2. Server Masquerading Attack: If the attacker can obtain the secret data and intercept the messages exchanged between the server and the real user in the login and authentication phases, it can then act as a server and retrieve messages from the real user.
A.) The attacker performs the forged server calculations, using a random number generated by the attacker.
B.) Then the attacker sends the forged message to the user.
C.) Upon receiving it, the user Ci checks the verification condition, which holds; therefore Ci computes further and verifies the server's response, which also holds, and hence Ci is convinced that the message came from a trusted legal server.

IV. CONCLUSION In this paper we have reviewed and analyzed Das's scheme and shown that it fails to provide security against various attacks. A better biometric-based remote user authentication scheme can therefore be proposed to enhance all the security aspects.

V. REFERENCES

1. Lamport, L., "Password authentication with insecure communication", Communications of the ACM, vol. 24, no. 11, pp. 770-772, 1981.
2. Hwang, M. S., Li, L. H., "A new remote user authentication scheme using smart cards", IEEE Transactions on Consumer Electronics, vol. 46, no. 1, pp. 28-30, 2000.
3. Yoon, E. J., Ryu, E. K., Yoo, K. Y., "Further improvement of an efficient password based remote user authentication scheme using smart cards", IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 612-614, 2004.
4. Das, M. L., Saxena, A., Gulati, V. P., "A dynamic ID-based remote user authentication scheme", IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 629-631, 2004.
5. Lin, C. W., Tsai, C. S., Hwang, M. S., "A new strong password authentication scheme using one-way hash functions", Journal of Computer and Systems Sciences International, vol. 45, no. 4, pp. 623-626, 2006.
6. Bindu, C. S., Reddy, P., Satyanarayana, B., "Improved remote user authentication scheme preserving user anonymity", International Journal of Computer Science and Network Security, vol. 8, no. 3, pp. 62-66, 2008.
7. Fan, L., Li, J. H., Zhu, H. W., "An enhancement of timestamp-based password authentication scheme", Computers & Security, vol. 21, no. 7, pp. 665-667, 2002.
8. Shen, J. J., Lin, C. W., Hwang, M. S., "Security enhancement for the timestamp-based password authentication scheme using smart cards", Computers & Security, vol. 22, no. 7, pp. 591-595, 2003.
9. Jain, A. K., Ross, A., Prabhakar, S., "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, 2004.
10. Maltoni, D., Maio, D., Jain, A. K., Prabhakar, S., "Handbook of Fingerprint Recognition", Springer, New York, 2nd ed., 2009.
11. Prabhakar, S., Pankanti, S., Jain, A. K., "Biometric recognition: security and privacy concerns", IEEE Security and Privacy Magazine, vol. 1, no. 2, pp. 33-42, 2003.
12. Khan, M. K., Zhang, J., Wang, X., "Chaotic hash-based fingerprint biometric remote user authentication scheme on mobile devices", Chaos, Solitons & Fractals, vol. 35, no. 3, pp. 519-524, 2008.
13. Li, C. T., Hwang, M. S., "An efficient biometrics-based remote user authentication scheme using smart cards", Journal of Network and Computer Applications, vol. 33, no. 1, pp. 1-5, 2010.
14. Lin, C. H., Lai, Y. Y., "A flexible biometrics remote user authentication scheme", Computer Standards & Interfaces, vol. 27, no. 1, pp. 19-23, 2004.
15. Das, A. K., "Analysis and improvement on an efficient biometric-based remote user authentication scheme using smart cards", IET Information Security, vol. 5, no. 3, pp. 145-151, 2011.
16. Kocher, P., Jaffe, J., Jun, B., "Differential power analysis", Proceedings of Advances in Cryptology (CRYPTO '99), pp. 388-397, 1999.
17. Messerges, T. S., Dabbish, E. A., Sloan, R. H., "Examining smart-card security under the threat of power analysis attacks", IEEE Transactions on Computers, vol. 51, no. 5, pp. 541-552, 2002.

Evolution Strategy for the C-Means Algorithm: Application to multimodal image segmentation Francesco Masulli DIBRIS - Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi - University of Genoa Via Dodecaneso 35, 16146 Genoa - Italy and SICRMM Temple University, Philadelphia - PA [email protected]

Anna Maria Massone CNR - SPIN via Dodecaneso 33 - I-16146 Genoa - Italy [email protected]

Andrea Schenone DIBRIS - Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi - University of Genoa Via Dodecaneso 35, 16146 Genoa - Italy [email protected]

February 24, 2013

Abstract

Evolution Strategies (ES) are a class of Evolutionary Computation methods for continuous parameter optimization problems founded on the model of organic evolution. In this paper we present a novel clustering algorithm based on the application of an ES to the search for the global minimum of the C-Means (CM) objective function. The new algorithm is then applied to the clustering step of an interactive


system for the segmentation of multimodal medical volumes obtained by different medical imaging diagnostic tools. In order to aggregate voxels with similar properties in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. As a consequence, in this application clustering supports an inference process based on the complementary information carried by each image (e.g. functional or anatomical) in order to extract regions corresponding to the different anatomical and/or pathological tissues. A quantitative comparison of segmentation results obtained by the original CM and by the new algorithm is reported in the paper.

1 Introduction

C-Means (CM) [6] is a widely used clustering method based on a simple and efficient numerical approximation to the maximum likelihood technique for the estimation of the parameters of probability mixtures [6, 3]. CM shows some intrinsic problems. In particular, it is subject to trapping in local optima of its objective function. In the clustering literature, many algorithms based on fuzzy set theory have been proposed in order to overcome this limit of CM, among them the Fuzzy C-Means algorithm [3], Deterministic Annealing [20], and the Possibilistic C-Means [12, 13]. As shown by Miyamoto and Mukaidono in [18], all those methods are different kinds of regularization [26] of the local optima problem of CM. Nevertheless, even with these methods we have no guarantee of finding the optimal solution of the clustering problem. In order to overcome this problem, in this paper we present a novel clustering algorithm based on the application of a global search technique, an Evolution Strategy (ES) [19, 25, 1], to the minimization of the objective function of the C-Means algorithm [6]. Evolution Strategies are a class of methods for continuous parameter optimization problems founded on the model of organic evolution. We present a novel clustering algorithm based on the application of a (µ, λ)-ES to the search for the global minimum of the classical C-Means (CM) objective function [6, 3]. The new Evolution Strategy based C-Means (ESCM) algorithm is applied to the clustering step of an interactive system for the segmentation of multimodal medical volumes [22]. This computer-based system supports the clinical oncologist in the tasks


of delineating the volumes to be treated by radiotherapy and surgery, and of quantitatively assessing (in terms of tumor mass or detection of metastases) the effect of oncological treatments. In order to aggregate voxels with similar properties in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. Clustering algorithms can point out clusters of close voxels in that multidimensional feature space, representing the probability distribution of intensities in the different modalities, and therefore sets of voxels with similar intensity values can be defined within the whole multimodal medical volume. These sets of voxels can then be used to delineate regions of interest, that is, to make a segmentation of the multimodal volumetric image. In this application clustering supports an inference process based on the complementary information carried by each image (e.g. functional or anatomical), each of them considered as an independent dimension of the input space, in order to extract regions corresponding to the different anatomical and/or pathological tissues. A quantitative comparison of segmentation results obtained by the original CM and by the new algorithm is reported in the paper. The paper is organized as follows. The next section introduces the C-Means following the parametric learning framework. In Sects. 3 and 4 we give some material on Evolution Strategies and we present a novel application of them to clustering. In Sect. 5 we set clustering as the basic step of an inference process that, starting from raw data, mines regions of interest in multimodal medical volumes. In Sect. 6 we present an experimental comparison of the application of CM and of the new clustering algorithm to the segmentation of multimodal images. Conclusions are drawn in Sect. 7.

2 Parametric Learning Approach to Clustering

2.1 Maximum Likelihood Estimation of Cluster Parameters

Let X = {x_k | x_k ∈ R^d, k = 1, ..., n} be a set of unlabeled randomly sampled vectors x_k = (x_1k, ..., x_dk) (the training set), and let Y = {y_j | y_j ∈ R^d, j = 1, ..., c}

be the set of centers of the clusters (or classes) ω_j. Following a parametric learning approach, we make the following assumptions:
1. the samples come from a known number c of classes ω_j, j ∈ {1, ..., c};
2. the a priori probabilities P(ω_j) (i.e. the probability of drawing patterns of class ω_j from X) are known;
3. the forms of the class-conditional probability densities p(x | ω_j, Θ_j) (i.e. the probability density of sample x_k inside class ω_j) are known, while the vectors of parameters Θ_j are unknown.
Note that the third assumption reduces the clustering problem to the problem of estimating the vectors Θ_j (parametric learning). In this setting, we assume that samples are obtained by selecting a class ω_j and then selecting a pattern x according to the probability law p(x | ω_j, Θ_j), i.e.:

p(x | Θ) = Σ_{j=1}^{c} p(x | ω_j, Θ_j) P(ω_j)    (1)

where Θ = (Θ_1, ..., Θ_c). A density function of this form is called a mixture density [6], the p(x_k | ω_j, Θ_j) are called the component densities, and the P(ω_j) are called the mixing parameters. A well-known parametric statistics method for estimating the parameter vector Θ is based on maximum likelihood [6]. It assumes that the parameter vector Θ is fixed but unknown. The likelihood of the training set X is the joint density

p(X | Θ) = Π_{k=1}^{n} p(x_k | Θ).    (2)

Then the maximum likelihood estimate Θ̂ is that value of Θ that maximizes the likelihood of the observed training set X. If p(X | Θ) is a differentiable function of Θ, then by maximizing the logarithm of the likelihood we can obtain the following conditions for the maximum-likelihood estimates Θ̂_j:

Σ_{k=1}^{n} P(ω_j | x_k, Θ̂) ∇_{Θ̂_j} log p(x_k | ω_j, Θ̂_j) = 0   ∀ j.    (3)

Moreover, if the a priori class probabilities P(ω_j) are also unknown, the clustering problem can be faced as the constrained maximization of the likelihood p(X | Θ) over Θ and P(ω_j), subject to the constraints:

P(ω_j) ≥ 0   and   Σ_{j=1}^{c} P(ω_j) = 1.    (4)

If p(X | Θ) is differentiable and the a priori probability estimates satisfy P̂(ω_j) ≠ 0 for any j, then P̂(ω_j) and Θ̂_j must satisfy:

P̂(ω_j) = (1/n) Σ_{k=1}^{n} P̂(ω_j | x_k, Θ̂)    (5)

and

Σ_{k=1}^{n} P̂(ω_j | x_k, Θ̂) ∇_{Θ̂_j} log p(x_k | ω_j, Θ̂_j) = 0    (6)

where

P̂(ω_j | x_k, Θ̂) = p(x_k | ω_j, Θ̂_j) P̂(ω_j) / Σ_{h=1}^{c} p(x_k | ω_h, Θ̂_h) P̂(ω_h).    (7)

Let us now assume that the component densities are multivariate normal, i.e.:

p(x_k | ω_j, Θ̂_j) = (2π)^{-d/2} |Σ_j|^{-1/2} exp[-(1/2)(x_k - y_j)^t Σ_j^{-1} (x_k - y_j)]    (8)

where d is the dimensionality of the feature space, y_j is the mean vector, Σ_j is the covariance matrix, (x_k - y_j)^t is the transpose of (x_k - y_j), Σ_j^{-1} is the inverse of Σ_j, and |Σ_j| is the determinant of Σ_j. In the general case (i.e. y_j, Σ_j, and P(ω_j) all unknown) the maximum likelihood principle yields useless singular solutions. As shown by Duda and Hart [6], we can obtain meaningful solutions by considering the largest of the finite local maxima of the likelihood function. The local-maximum-likelihood estimate for P(ω_j) is the same as Eq. 5, while

ŷ_j = Σ_{k=1}^{n} P̂(ω_j | x_k, Θ̂_j) x_k / Σ_{k=1}^{n} P̂(ω_j | x_k, Θ̂_j)    (9)

Table 1: C-Means (CM) Algorithm.

1. assign the number of clusters and the tolerance ε_1 for the stop criterion;
2. initialize the centers of clusters;
3. do until every center changes by less than ε_1:
   (a) assign the samples to the clusters with smallest Euclidean distance using Eqs. 12 and 14;
   (b) recalculate the centers using Eq. 9;
4. end do.

\[ \hat{\Sigma}_j = \frac{\sum_{k=1}^{n} \hat{P}(\omega_j \mid x_k, \hat{\Theta}_j)\, (x_k - \hat{y}_j)(x_k - \hat{y}_j)^t}{\sum_{k=1}^{n} \hat{P}(\omega_j \mid x_k, \hat{\Theta}_j)} \tag{10} \]

where (from Eqs. 7 and 8)

\[ \hat{P}(\omega_j \mid x_k, \hat{\Theta}_j) = \frac{\hat{P}(\omega_j)\, |\hat{\Sigma}_j|^{-1/2} \exp\!\left[-\frac{1}{2} (x_k - \hat{y}_j)^t \hat{\Sigma}_j^{-1} (x_k - \hat{y}_j)\right]}{\sum_{h=1}^{c} \hat{P}(\omega_h)\, |\hat{\Sigma}_h|^{-1/2} \exp\!\left[-\frac{1}{2} (x_k - \hat{y}_h)^t \hat{\Sigma}_h^{-1} (x_k - \hat{y}_h)\right]}. \tag{11} \]

The set of Eqs. 5, 9, 10, and 11 can be interpreted as a gradient ascent or hill-climbing procedure for maximizing the likelihood. A Lloyd-Picard iteration can start with initial estimates, evaluate Eq. 11 for P̂(ω_j | x_k, Θ̂_j), and then use Eqs. 5, 9, and 10 to update the estimates. Like all hill-climbing procedures, the results of this iteration depend upon the starting point; moreover, the inversion of Σ̂_j is quite time consuming, and there is the possibility of multiple solutions.
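The update cycle of Eqs. 5, 9, 10, and 11 can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the implementation used in the paper: to keep it self-contained it assumes diagonal covariance matrices (so Eq. 10 reduces to per-dimension variances and no matrix inversion is needed), and the names `em_step`, `priors`, `means`, and `variances` are our own.

```python
import math

def em_step(X, priors, means, variances):
    """One Lloyd-Picard iteration (Eqs. 5, 9, 10, 11), simplified to
    diagonal covariance matrices so that no matrix inversion is needed.
    X: list of d-dimensional points; means, variances: one d-vector per cluster."""
    n, d, c = len(X), len(X[0]), len(priors)
    post = []  # post[k][j] = P(w_j | x_k, Theta), evaluated via Eq. 11
    for x in X:
        row = []
        for j in range(c):
            norm = math.prod(math.sqrt(2 * math.pi * v) for v in variances[j])
            m2 = sum((x[i] - means[j][i]) ** 2 / variances[j][i] for i in range(d))
            row.append(priors[j] * math.exp(-0.5 * m2) / norm)  # Eq. 8, diagonal case
        s = sum(row)
        post.append([r / s for r in row])
    sizes = [sum(post[k][j] for k in range(n)) for j in range(c)]
    new_priors = [sizes[j] / n for j in range(c)]                      # Eq. 5
    new_means = [[sum(post[k][j] * X[k][i] for k in range(n)) / sizes[j]
                  for i in range(d)] for j in range(c)]                # Eq. 9
    new_vars = [[sum(post[k][j] * (X[k][i] - new_means[j][i]) ** 2
                     for k in range(n)) / sizes[j]
                 for i in range(d)] for j in range(c)]                 # Eq. 10 (diagonal)
    return new_priors, new_means, new_vars
```

Iterating `em_step` from reasonable initial estimates climbs towards a local maximum of the likelihood; as noted above, the result depends on the starting point.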

2.2 C-Means (CM) Algorithm

An efficient implementation of the previous procedure is based on the following approximation of Eq. 11:

\[ P(\omega_j \mid x_k, \hat{\Theta}_j) = \begin{cases} 1 & \text{if } D_j(x_k) = \min_{1 \le h \le c} D_h(x_k) \\ 0 & \text{otherwise} \end{cases} \tag{12} \]

where D_j(x_k) is a local cost function or distortion measure, which in many cases can be taken to be the scaled Mahalanobis distance M_j(x_k):

\[ M_j^2(x_k) \equiv |\Sigma_j|^{1/d}\, (x_k - \hat{y}_j)^t \Sigma_j^{-1} (x_k - \hat{y}_j). \tag{13} \]

This observation is the rationale of the C-Means (CM), also named the Basic Isodata algorithm [6] and Hard C-Means [3]. It is worth noting that the usage of the Mahalanobis distance still involves a heavy computational overhead. In many implementations of CM a stronger approximation of D_j(x_k) is adopted, using the Euclidean distance E_j(x_k):

\[ E_j(x_k) \equiv \| x_k - \hat{y}_j \|. \tag{14} \]

The resulting CM algorithm is an efficient approximate way to obtain the maximum likelihood estimate of the centers of clusters [6]. One implementation of the CM using the Euclidean distance is illustrated in Tab. 1. In this algorithm the initialization of the number of clusters (Step 1) is performed using a priori knowledge of the problem. At Step 2, the positions of the centers of clusters can be initialized either using a priori knowledge or at random in the d-dimensional hyperbox I:

\[ I = \prod_{i=1}^{d} \left[ \min_k (x_{ik}), \max_k (x_{ik}) \right], \quad I \subset \mathbb{R}^d \tag{15} \]
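The procedure of Tab. 1 can be sketched in Python as follows. This is an illustrative sketch under our own naming (`c_means`, `init`): the optional `init` argument stands in for the a priori initialization of Step 2, and empty clusters simply keep their previous center, a detail the table leaves unspecified.

```python
import math
import random

def c_means(X, c, eps=1e-3, init=None, seed=0):
    """C-Means of Tab. 1: hard assignment by smallest Euclidean distance
    (Eqs. 12 and 14), centers recomputed as cluster means (Eq. 9),
    stopping when no center moves by eps or more."""
    d = len(X[0])
    rng = random.Random(seed)
    # Step 2: random initialization in the hyperbox I of Eq. 15 (or a priori)
    lo = [min(x[i] for x in X) for i in range(d)]
    hi = [max(x[i] for x in X) for i in range(d)]
    centers = init or [[rng.uniform(lo[i], hi[i]) for i in range(d)]
                       for _ in range(c)]
    while True:
        # Step 3a: assign each sample to its nearest center
        labels = [min(range(c), key=lambda j: math.dist(x, centers[j]))
                  for x in X]
        # Step 3b: recalculate the centers; empty clusters keep the old center
        new_centers = []
        for j in range(c):
            members = [x for x, l in zip(X, labels) if l == j]
            new_centers.append([sum(m[i] for m in members) / len(members)
                                for i in range(d)] if members else centers[j])
        moved = max(math.dist(a, b) for a, b in zip(centers, new_centers))
        centers = new_centers
        if moved < eps:  # Step 3: stop criterion with tolerance eps
            return centers, labels
```

On two well-separated blobs the loop typically stabilizes in a couple of iterations; as discussed below, the result still depends on the initialization.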

As demonstrated by Bezdek [3], the CM, while maximizing the likelihood of the training set, at the same time minimizes a global error function J_w defined as the expectation of the squared local cost function:

\[ J_w \equiv \langle D^2 \rangle = \sum_{k=1}^{n} \sum_{j=1}^{c} u_{jk} D_j^2(x_k) \tag{16} \]

where u_{jk} ≡ P(ω_j | x_k) or, in general, a membership value of pattern x_k (k = 1, ..., n) to cluster ω_j (j = 1, ..., c). The CM, while being an efficient approximation of the maximum likelihood procedure for estimating the centers of clusters, shows some intrinsic problems. In particular, it is subject to trapping in local minima of J_w (i.e., in local maxima of the likelihood).

This locality in searching for minima is its main limitation, in particular when we try to apply this algorithm as the basis for inference procedures. In order to overcome these problems, many attempts, based on different fuzzy clustering paradigms, have been proposed in the literature. The most popular fuzzy clustering method is the Fuzzy C-Means algorithm by Bezdek [3], which is based on the constrained minimization of a generalization of the CM global error expectation. We cite also the technique proposed by Rose et al. [20], based on the maximum entropy principle [9] and using a Deterministic Annealing technique, and the Possibilistic C-Means algorithm by Krishnapuram and Keller [12, 13]. In [18], Miyamoto and Mukaidono showed that the Fuzzy C-Means [3] and the maximum entropy methods correspond to different applications of regularization theory to the CM in order to reduce the problem of local minima. An alternative approach to the solution of the local minima problem of CM can be based on the application of global search techniques. In [5] we proposed a global search method for the minimization of J_w based on the Simulated Annealing technique [11]. In the next sections we shall present some search techniques based on Evolution Strategies that will be applied to the clustering problem.

3 Evolution Strategies

Evolution Strategies (ES) [19, 25, 1] are a class of Evolutionary Computation methods for continuous parameter optimization problems founded on the model of organic evolution. During each generation (iteration of the ES algorithm) a population of individuals (potential solutions) is evolved to produce new solutions. Only the highest-fit solutions survive to become parents for the next generation. In biological terms, the genetic encoding of an individual is called its genotype. New genotypes are created from existing ones by modifying the genetic material. The interaction of a genotype with its environment induces an observed response called the phenotype. Reproduction takes place at the genotype level, while survival is determined at the phenotype level. Only highly fit individuals survive and reproduce in future generations.


Individuals in the population are composed of object variables and strategy parameters. In a basic ES, an individual is represented as a vector

\[ a = (x_1, ..., x_n, \sigma_1, ..., \sigma_n) \in \mathbb{R}^{2n} \tag{17} \]

consisting of n object variables and their corresponding n standard deviations for individual mutations. There are two variants of ES: the multi-membered ES plus strategies (denoted as (µ + λ)-ES) and the multi-membered ES comma strategies (denoted as (µ, λ)-ES). In a (µ + λ)-ES, µ parents create λ ≥ 1 offspring individuals by means of recombination and mutation, and the µ best of parents and offspring are selected to form the next population. In a (µ, λ)-ES, with λ > µ ≥ 1, the µ best individuals are selected from the offspring only. We shall now discuss the ES operators, i.e. recombination, mutation, and selection.

3.1 Recombination

Recombination (or crossover) in ES is performed on individuals of the population. The most used recombination rules are:

1. no recombination;
2. discrete recombination: the components of the offspring are selected at random from either the first or the second parent;
3. intermediate recombination: offspring components lie between the corresponding components of the parents;
4. global and discrete recombination: one parent is selected and fixed, and for each component a second parent is selected anew from the population to determine the component value using discrete recombination;
5. global and intermediate recombination: one parent is selected and fixed, and for each component a second parent is selected anew from the population to determine the component value using intermediate recombination.

The recombination operator may be different for object variables and strategy parameters.
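The rules above can be sketched as small Python helpers. These are illustrative fragments under our own naming, not the paper's code; the intermediate rule is shown as the average of the parents' components, the variant reported in Sect. 6.2.

```python
import random

def discrete(p1, p2, rng):
    """Rule 2: each offspring component is copied at random from one parent."""
    return [rng.choice(pair) for pair in zip(p1, p2)]

def intermediate(p1, p2):
    """Rule 3: each offspring component is the average of the parents'."""
    return [(a + b) / 2 for a, b in zip(p1, p2)]

def global_discrete(population, first, rng):
    """Rule 4: one parent is fixed; for every component a second parent is
    drawn anew from the population before applying the discrete rule."""
    return [rng.choice((x, rng.choice(population)[i]))
            for i, x in enumerate(first)]
```

Rule 5 (global and intermediate) follows the same pattern as `global_discrete`, with the average in place of the random choice between the two values.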


3.2 Mutation

In a mutation, each x_j is perturbed by adding an individual (0, σ_j)-normally distributed random number. The σ_j themselves are also subject to mutation and recombination (self-adaptation of strategy parameters [24]), and a complete mutation step m(a) = a′ is obtained by the following equations:

\[ s = \exp(N(0, \tau)) \tag{18} \]

\[ \sigma'_j = \sigma_j \cdot \exp(N_j(0, \tau')) \cdot s \tag{19} \]

\[ x'_j = x_j + N_j(0, \sigma'_j) \tag{20} \]

Mutation is performed on the σ_j by multiplication with two log-normally distributed factors: one individual factor, sampled anew for each σ_j (τ′ = 1/\sqrt{2\sqrt{n}}), and one common factor s (τ = 1/\sqrt{2n}), sampled once per individual. In this way a scaling of mutations along the coordinate axes can be learned by the algorithm itself, without exogenous control of the σ_j. More sophisticated ES using so-called correlated mutations are presented in [1].
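The mutation step of Eqs. 18-20 can be sketched as follows; the function name and the explicit `rng` argument are our own, while the τ values are the ones given above.

```python
import math
import random

def mutate(x, sigma, rng):
    """Complete mutation step m(a) = a' of Eqs. 18-20 with self-adaptive
    step sizes: a common log-normal factor s (tau = 1/sqrt(2n)) and one
    individual factor per sigma_j (tau' = 1/sqrt(2*sqrt(n)))."""
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * n)
    tau_prime = 1.0 / math.sqrt(2.0 * math.sqrt(n))
    s = math.exp(rng.gauss(0.0, tau))                                 # Eq. 18
    new_sigma = [sj * math.exp(rng.gauss(0.0, tau_prime)) * s
                 for sj in sigma]                                     # Eq. 19
    new_x = [xj + rng.gauss(0.0, sj)
             for xj, sj in zip(x, new_sigma)]                         # Eq. 20
    return new_x, new_sigma
```

Note that the step sizes are mutated first, and the object variables are then perturbed with the already-mutated σ′_j, which is what couples the quality of an individual to the quality of its strategy parameters.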

3.3 Selection

Selection for survival is completely deterministic, as it is based only on the rank of fitness. It is also called extinctive selection, as the λ − µ worst individuals are definitively excluded from contributing offspring to the next generation. It is worth noting that the (µ + λ)-ES is elitist; therefore, while its performance improves monotonically, the implemented search is local and unable to deal with changing environments. On the contrary, the (µ, λ)-ES enables the search algorithm to escape from local optima, to follow a moving optimum, to deal with noisy objective functions, and to self-adapt strategy parameters effectively. The ratio µ/λ is named the degree of extinctiveness and is linked to the probability of locating the global optimum: if it is large there is high convergence reliability, whereas if it is small there is high convergence velocity. Investigations presented in [24] suggest an optimal ratio of µ/λ = 1/7.


Table 2: Evolution Strategy based C-Means (ESCM) algorithm.

1. assign µ, λ, the number of clusters, and the threshold ε_2;
2. initialize the population;
3. evaluate J_w for each individual (Eq. 16);
4. do while ∆J_w^best / J_w^best is greater than ε_2:
5. count1 = 0;
   (a) while count1 less than µ:
       i. count1++;
       ii. select by rank two individuals for mating;
       iii. order consistently the centers of clusters in both selected individuals using algorithm RI (Tab. 3);
       iv. crossover object variables (discrete recombination);
       v. crossover strategy parameters (intermediate recombination);
       vi. mutate the individual as shown in Sect. 3.2;
   (b) end while;
   (c) evaluate J_w for each individual (Eq. 16);
   (d) select the µ fittest individuals for the next population;
6. end do.


4 Evolution Strategy based C-Means (ESCM) algorithm

In order to overcome the limits of C-Means, a (µ, λ)-ES can be used to find the global optimum of J_w (Eq. 16). Tab. 2 illustrates the Evolution Strategy based C-Means (ESCM) algorithm. Each genotype a is a list containing the object variables (i.e. the centers of clusters) and the strategy parameters:

\[ a = (y_1, ..., y_c, \sigma_1, ..., \sigma_c) \tag{21} \]

where c is the number of clusters. ESCM works in a (c × (d + 1))-dimensional space, where d is the dimension of the pattern space. After the initialization of the parameters (Step 1), the population is initialized (Step 2) in the following way: the centers of clusters (i.e. the object variables) are initialized at random in the hyperbox I (Eq. 15), while the strategy parameters are initialized at random in the range [0, α], where α is of the order of 1/10 of the side of I.

The remaining steps are quite standard for a (µ, λ)-ES, with the exception of Step 5(a)iii. In fact, we must note that, before mixing the object variables of the parents (the centers of clusters) using discrete recombination crossover, they must be re-indexed, in such a way that centers with the same index are likely to correspond to the same cluster. The re-indexing algorithm is described in Tab. 3 and is a modification of the RL algorithm proposed in [27]. Besides, the stop condition (Step 4)

\[ \frac{\Delta J_w^{best}}{J_w^{best}} < \epsilon_2 \tag{22} \]

is based on the normalized difference of the objective function J_w evaluated on the fittest individual of two successive generations.

In principle, ESCM allows us to avoid local minima of J_w and to find the global optimum, improving in this way the reliability of the inferential tasks associated with the clustering procedure. Moreover, it is simple to create variants of the basic ESCM. For instance, if we want to reduce the interference of big blobs on the localization of the centers of small clusters, it is straightforward to replace J_w in the algorithm with the following scaled global error function J_s:


Table 3: Re-indexing (RI) algorithm.

1. compile the matrix of distances M between the centers of clusters of the two individuals;
2. count2 = 0;
3. while count2 less than c:
   (a) count2++;
   (b) find the minimal item of the matrix;
   (c) assign the same index to both centers of clusters in the two individuals;
   (d) delete the corresponding row and column in the matrix of distances M;
4. end do.
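The RI procedure of Tab. 3 can be sketched as a greedy matching in Python. The sketch is ours (names `reindex`, `perm`); it returns, for each center index of the first individual, the index of the matched center in the second one, which is equivalent to assigning the same index to both centers (Step 3c).

```python
import math

def reindex(centers_a, centers_b):
    """RI algorithm of Tab. 3: repeatedly pick the minimal entry of the
    distance matrix M, give the two matched centers the same index, and
    delete the corresponding row and column."""
    c = len(centers_a)
    rows, cols = set(range(c)), set(range(c))
    perm = [None] * c  # perm[i] = index in centers_b matched to centers_a[i]
    for _ in range(c):
        # Step 3b: minimal item of the remaining distance matrix
        i, j = min(((r, s) for r in sorted(rows) for s in sorted(cols)),
                   key=lambda rs: math.dist(centers_a[rs[0]], centers_b[rs[1]]))
        perm[i] = j          # Step 3c: same index in both individuals
        rows.discard(i)      # Step 3d: delete row i and column j of M
        cols.discard(j)
    return perm
```

After re-indexing, component j of both parents refers to (approximately) the same cluster, so discrete recombination no longer mixes centers of unrelated clusters.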

\[ J_s \equiv \sum_{j=1}^{c} \frac{1}{C_j} \sum_{k=1}^{n} u_{jk} D_j^2(x_k), \tag{23} \]

where C_j is the cardinality of cluster ω_j.
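For a hard partition (u_jk ∈ {0, 1}) the two error functions compare as follows; the sketch below uses the Euclidean distance of Eq. 14 and our own naming:

```python
import math

def jw_and_js(X, centers, labels):
    """Global error J_w (Eq. 16) and scaled global error J_s (Eq. 23) for a
    hard partition: each sample contributes the squared Euclidean distance
    (Eq. 14) to its own center; J_s divides by the cluster cardinality C_j."""
    card = [labels.count(j) for j in range(len(centers))]
    jw = sum(math.dist(x, centers[l]) ** 2 for x, l in zip(X, labels))
    js = sum(math.dist(x, centers[l]) ** 2 / card[l] for x, l in zip(X, labels))
    return jw, js
```

Because each cluster's contribution is normalized by C_j, a compact small cluster weighs as much as a large blob, which is exactly the effect exploited in Sect. 6.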

5 Segmentation of multimodal medical volumes

5.1 Multimodal medical volumes (MMV)

Medical images are obtained by different acquisition modalities, including X-ray computed tomography (CT), magnetic resonance imaging (MRI), single photon emission tomography (SPECT), positron emission tomography (PET), ultrasound (US), etc. [15]. Multimodal volumes can be derived from sets of such different diagnostic volumes by spatial co-registration of the volumes, in order to fully correlate complementary information (e.g., structural and functional) about the same patient.


Visual inspection of a large set of such volumetric images allows the physician to exploit the available information only partially. Therefore, computer-assisted approaches may be helpful in the clinical oncological environment as a support to diagnosis, in order to delineate volumes to be treated by radiotherapy and surgery and to assess quantitatively (in terms of tumor mass or detection of metastases) the effect of oncological treatments. The extraction of such volumes or other entities of interest from imaging data is named segmentation and is usually performed, in the image space, by defining sets of voxels with similar features within a whole multimodal volume.

5.2 Clustering-based inference approach to MMV segmentation

It is worth noting that it is very difficult, or impossible, to cast the solution of the multimodal volume segmentation problem in a reliable rule-based framework, as physicians are hardly able, at least for the low-level steps of image analysis, to describe the rationale of their decisions. Moreover, at higher levels of image analysis the rationales of physicians, even if more precise, strongly depend on many factors, such as different clinical frameworks, different anatomical areas, different theoretical approaches, etc. Inference procedures based on learning from data must then be employed to design computer-assisted systems for segmenting multimodal medical volumes. Actually, in such data-based systems, a possible supervised approach has two major drawbacks:

• it is very time-consuming (especially for large volumes), as it requires the labeling of the prototypical samples needed for the generalization process. Even if the number of clusters is predefined, a careful manual labeling of the voxels in the training set belonging with certainty to the different clusters is not trivial, especially for multimodal data sets;

• heavy biases may be introduced by unskilled or fatigued physicians, due to the large inter-user and intra-user variability generally observed when manual labeling is performed.


On the contrary, unsupervised methods may fully exploit the implicit multidimensional structure of the data and, due to their self-organizing approach, make the clustering of the feature space independent of the user's definition of training regions [2, 8].

A multimodal volume may be defined by the spatial registration of a set of d different imaging volumes. As a consequence, its voxels are associated with an array of d values, each representing the intensity of a single modality in a voxel. From another point of view, the d different intensity values related to a voxel of such a multimodal volume can be viewed as the coordinates of the voxel within a d-dimensional feature space where multimodal analysis can be made. For a more complete description of the segmentation problem, both an image space (usually 3D), defined by the spatial coordinates of the data set, and a multidimensional feature space, as described before, must be considered. The interplay between these two spaces turns out to be very important in the task of understanding the data structure. Actually, the definition of clusters within the above described d-dimensional feature space and the classification of all the voxels of the volumes into the resulting classes are the main steps in segmenting multimodal volumes. This approach, where an inference process based on clustering constitutes the principal procedure for MMV segmentation, has been followed in many recent papers [4, 22, 17, 10, 14], and it has been shown to be more robust to noise in the discrimination of different tissues than techniques based on edge detection [4]. Nevertheless, the clustering method itself must be well founded in statistics and must not be limited by intrinsic problems, such as the problem of local optima in CM. Moreover, many bias effects must be taken into account when considering clustering for the segmentation of medical images.
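The construction of the feature space from d co-registered volumes amounts to a simple transposition: each voxel's d modality intensities become one d-dimensional feature vector. A minimal sketch follows; the name `to_feature_space` and the flat-list representation are our own simplification (real volumes are 3D arrays):

```python
def to_feature_space(volumes):
    """Stack d spatially registered volumes, each given as a flat list of
    voxel intensities (one list per modality), into one d-dimensional
    feature vector per voxel."""
    assert len({len(v) for v in volumes}) == 1, "volumes must be co-registered"
    return [list(voxel) for voxel in zip(*volumes)]
```

The resulting list of feature vectors is exactly the input X expected by the clustering procedures of Sects. 2 and 4.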
Actually, very heterogeneous clusters may be found in the feature space, with very different probability densities, and considering the cardinality of the clusters may be necessary in order to include the statistical nature of the data set in the analysis. Furthermore, the partial volume effect during acquisition may produce a truly intrinsic ambiguity of the borders between regions of interest. As a consequence, unsupervised clustering-based segmentation of medical images emerges as a very difficult task, whose usefulness is related to the balance between two conflicting actions, namely the elimination of noise and redundancy from the original images and the preservation of significant information in the segmented image. These constraints may force users to introduce their knowledge into the sequence of analysis, and further refinements are often needed in order to obtain meaningful and reliable results.

5.3 Interactive segmentation system

From all these considerations, a correct architecture for a computer-based system for multimodal medical volume segmentation should include a computational core grounded on unsupervised clustering, together with powerful interactive tools for knowledge-based refinements that physicians can tune and organize for the specific diagnostic tasks to be performed. In this way, as requested in clinical practice, physicians stay in control both of the sequence of choices and of the results of the analysis process, so as to introduce into the segmentation process their theoretical and heuristic knowledge. A system based on these assumptions has been developed by our group and is described in [22]. It is an interactive system with a friendly Graphical User Interface, supporting a full sequence of analysis of multimodal medical images. The main functions performed by this system are: feature extraction, dimensionality reduction, unsupervised clustering, voxel classification, and intra- and post-processing refinements. The main component of this system is the clustering subsystem, which makes it possible to run alternative clustering algorithms in the feature space, including the C-Means [6], the Capture Effect Neural Network [7], the Fuzzy C-Means [3], Deterministic Annealing [20, 21], and the Possibilistic C-Means [12, 13]. In [16, 17] we report some comparisons of the application of these algorithms to clinical images.

6 Experimental analysis

6.1 Data set

We have implemented the Evolution Strategy based C-Means (ESCM) algorithm as a clustering module of the previously described graphical interactive system supporting the full sequence of analysis of multimodal medical volumes.


Figure 1: T1-weighted (a) and T2-weighted (b) MRI images of a patient with glioblastoma multiforme in the right temporal lobe.

In order to illustrate the inference task of clustering-based MMV segmentation in a specific case, and to show the gain in precision and reliability obtained in this task by using ESCM instead of the original CM, let us consider a simple data set consisting of a multimodal transverse slice of the head (Fig. 1), composed of spatially correlated T1-weighted and T2-weighted MRI images from a head acquisition volume of an individual with glioblastoma multiforme. The images are 288 x 362 with 256 gray levels. The tumor is located in the right temporal lobe and appears bright on the T2-weighted image and dark on the T1-weighted image. A large amount of edema surrounds the tumor and appears very bright on the T2-weighted image. The lower signal area within the mass suggests tissue necrosis.

Each pixel in the above defined two-modal slice is associated with an array of two intensity values (T1 and T2). Therefore, each of these pairs of pixel intensities is represented by a point in a 2D feature space (Fig. 2), whose coordinates are the intensity values of that pixel in each modality of the multimodal set.

Figure 2: Feature space (T2 versus T1) obtained from the MRI images in Fig. 1.

The segmentation task consists in finding the main classes in this feature space and in associating each pixel of the image with one of these classes. The main classes in the data set are: white matter, gray matter, cerebro-spinal fluid (CSF), tumor, edema, necrosis, and scalp. A slight mis-registration between the images may be responsible for some mis-classification errors in the final results.

6.2 Methods

We give here some information on the implementation of the clustering algorithms used in the experimental analysis.

• The CM uses 7 clusters and a tolerance for the stop criterion ε_1 = 0.01; the centers of clusters are initialized at random, and convergence is reached in 10-15 fast iterations.

• For the ESCM using J_w, according to the µ/λ = 1/7 rule proposed by Schwefel [24], we selected µ = 10 and λ = 70. Moreover, we initialized c = 7, ε_2 = 0.005, and the centers of clusters at random. We implemented the selection by rank using a linear probability distribution with negative slope, while the intermediate recombination is implemented as the average of the components of the parents.

• The implementation of ESCM using J_s is identical to the previous one, with the obvious exception of the objective function.

A typical plot of J_s^best is presented in Fig. 3. Using ∆J_s^best / J_s^best ≤ ε_2 as the stop condition, the ESCM ends in 15 iterations.

Figure 3: Cost function of best individuals versus iteration of ESCM.

Figure 4: Segmentation obtained by the CM algorithm with 7 clusters.

Figure 5: Segmentation obtained by the ESCM algorithm using J_w and with 7 clusters.

6.3 Results and Discussion

Let us compare the results produced by the ESCM clustering algorithm and by the standard C-Means (CM) algorithm. Fig. 4 shows the results of the unsupervised segmentation with the CM algorithm. CM almost correctly defines scalp and white matter. Nevertheless, it makes mistakes in the classification of gray matter and edema in the left side of the brain, and in particular it is not able to separate tumor, necrosis, and CSF.

Similar results are obtained by the basic ESCM with the standard cost function J_w (Fig. 5). Nevertheless, as an important difference, over a large number of tests ESCM turns out to be much more stable than CM with respect to the positions of the centroids and to the extension of the clusters in the feature space.

Figure 6: Segmentation obtained by the ESCM algorithm using J_s and with 7 clusters.

Eventually, by using the newly defined scaled global error function J_s to take into account the cardinality of the clusters, the results of ESCM (Fig. 6) improve dramatically. Actually, we may notice that, in comparison with CM and with the basic version of ESCM, the final version of ESCM correctly distinguishes between tumor and CSF and, within the tumor region, is able to find the necrosis region. The correct definition of scalp and white matter and the misclassification in the left side of the brain remain as for CM.

7 Conclusions

The C-Means (CM) [6], while being an efficient approximation of the maximum likelihood procedure for estimating the centers of clusters, shows some intrinsic problems. In particular, it is subject to trapping in local minima of its objective function J_w (Eq. 16). This locality in searching for minima is a main limitation, in particular when we try to apply this algorithm as the basis for inference procedures.

In order to overcome the limits of C-Means, we have proposed in this paper a novel clustering algorithm based on the application of an Evolution Strategy (ES) [19, 25, 1] to the search for the global minimum (the Evolution Strategy based C-Means, or ESCM, algorithm). The ESCM is based on a (µ, λ)-ES where the object variables of the genotypes are the centers of clusters. The implementation of the (µ, λ)-ES is quite standard, but before mixing the object variables of the parents using discrete recombination crossover, they are re-indexed in such a way that centers with the same index are likely to correspond to the same cluster. It is worth noting that it is easy to create variants of the basic ESCM. For instance, with the straightforward replacement of J_w by the scaled global error function J_s (Eq. 23) it is possible to reduce the interference of big blobs on the localization of the centers of small clusters.

In this paper we have considered a complex inference process based on clustering, consisting in multimodal medical volume (MMV) segmentation. This approach has been shown to be very robust to noise and able to process the complementary information carried by each image (e.g. functional or anatomical) [4]. In this inference task, devoted to aggregating voxels with similar properties (corresponding to the different anatomical and/or pathological tissues) in the different diagnostic imaging volumes, clustering is performed in a multidimensional space where each independent dimension is a particular volumetric image. Nevertheless, the clustering method itself must be well founded in statistics and must not be limited by intrinsic problems, such as the problem of local optima in CM. Moreover, many bias effects (due, e.g., to heterogeneous clusters and to the partial volume effect during acquisition) must be taken into account when considering clustering for the segmentation of medical images.

We have implemented the ESCM algorithm as a clustering module of the previously described graphical interactive system supporting the physician in the full sequence of analysis of multimodal medical volumes. In the experimental results presented in the paper, we have compared the segmentations obtained by applying CM, ESCM using J_w, and ESCM using J_s to a simple data set consisting of a multimodal transverse slice of the head (Fig. 1) composed of spatially correlated T1-weighted and T2-weighted MRI images from a head acquisition volume of an individual with glioblastoma multiforme.


The two implementations of ESCM give more stable solutions than CM with respect to the positions of the centroids and the extension of the clusters in the feature space. In particular, the ESCM using J_s, as it is able to take into account the cardinality of the clusters, dramatically improves the quality of the segmentation results.

Acknowledgments The images are from the BrighamRAD Teaching Case Database of the Department of Radiology at Brigham and Women’s Hospital in Boston.

References

[1] T. Baeck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, 1996.

[2] A.M. Bensaid, L.O. Hall, L.P. Clarke, and R.P. Velthuizen. MRI segmentation using supervised and unsupervised methods. In Proc. 13th IEEE Eng. Med. Biol. Conf., pages 483-489, Orlando, 1991. IEEE.

[3] J.C. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981.

[4] J.C. Bezdek, L.O. Hall, and L.P. Clarke. Review of MR image segmentation techniques using pattern recognition. Med. Phys., 20:1033-1048, 1993.

[5] P. Bogus, A. Massone, and F. Masulli. A Simulated Annealing C-Means Clustering Algorithm. In F. Masulli and R. Parenti, editors, Proceedings of SOCO'99, ICSC Symposium on Soft Computing, Genova, pages 534-540, Millet, Canada, 1999. ICSC.

[6] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.

[7] F. Firenze and P. Morasso. The capture effect model: a new approach to self-organized clustering. In Sixth International Conference on Neural Networks and their Industrial and Cognitive Applications, NEURO-NIMES 93 Conference Proceedings and Exhibition Catalog, pages 65-54, Nimes, France, 1993.

[8] G. Gerig, J. Martin, R. Kikinis, O. Kubler, M. Shenton, and F.A. Jolesz. Unsupervised tissue type segmentation of 3D dual-echo MR head data. Im. Vis. Comput., 10:349-360, 1992.

[9] E.T. Jaynes. Information theory and statistical mechanics. Physical Review, 106:620-630, 1957.

[10] Z-X. Ji, Q-S. Sun, and D-S. Xia. A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image. Computerized Medical Imaging and Graphics, 35:383-397, 2011.

[11] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.

[12] R. Krishnapuram and J.M. Keller. A possibilistic approach to clustering. IEEE Transactions on Fuzzy Systems, 1:98-110, 1993.

[13] R. Krishnapuram and J.M. Keller. The Possibilistic C-Means algorithm: insights and recommendations. IEEE Transactions on Fuzzy Systems, 4:385-393, 1996.

[14] H. Mahmoud, F. Masulli, and S. Rovetta. A fuzzy clustering segmentation approach for feature-based medical image registration. In Proc. CIBB 2012, 9th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics, Houston, TX, USA, CIBB Proceedings Series, ISBN 978-88-906437-1-2, 2012.

[15] M.N. Maisey et al. Synergistic imaging. Eur. J. Nucl. Med., 19:1002-1005, 1992.

[16] F. Masulli, P. Bogus, A. Schenone, and M. Artuso. Fuzzy clustering methods for the segmentation of multivariate images. In M. Mares, R. Mesiar, V. Novak, J. Ramik, and A. Stupnanova, editors, Proceedings of the 7th International Fuzzy Systems Association World Congress IFSA'97, volume III, pages 123-128, Prague, 1997. Academia.

[17] F. Masulli and A. Schenone. A fuzzy clustering based segmentation system as support to diagnosis in medical imaging. Artificial Intelligence in Medicine, 16:129-147, 1999.

[18] S. Miyamoto and M. Mukaidono. Fuzzy C-Means as a regularization and maximum entropy approach. In M. Mares, R. Mesiar, V. Novak, J. Ramik, and A. Stupnanova, editors, Proceedings of the 7th International Fuzzy Systems Association World Congress IFSA'97, volume III, pages 86-91, Prague, 1997. Academia.

[19] I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.

[20] K. Rose, E. Gurewitz, and G. Fox. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11:589-594, 1990.

[21] K. Rose, E. Gurewitz, and G. Fox. Constrained clustering as an optimization method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:785-794, 1993.

[22] A. Schenone, F. Firenze, F. Acquarone, M. Gambaro, F. Masulli, and L. Andreucci. Segmentation of multivariate medical images via unsupervised clustering with adaptive resolution. Computerized Medical Imaging and Graphics, 20:119-129, 1996.

[23] K.E. Parsopoulos and M.N. Vrahatis. Recent approaches to global optimization problems through Particle Swarm Optimization. Natural Computing, 1(2-3):235-306, 2002.

[24] H.P. Schwefel. Collective phenomena in evolutionary systems. In Preprints of the 31st Annual Meeting of the International Society for General Systems Research, volume 2, pages 1025-1033, Budapest, 1988.

[25] H.P. Schwefel. Evolution and Optimum Seeking. Wiley, 1995.

[26] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Winston and Sons, New York, 1977.

[27] E.C.K. Tsao, J.C. Bezdek, and N.R. Pal. Fuzzy Kohonen clustering networks. Pattern Recognition, 27(5):757-764, 1994.

25

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 195

P 196

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

A DETERMINISTIC INVENTORY MODEL FOR DETERIORATING ITEMS WITH TIME DEPENDENT DEMAND AND ALLOWABLE SHORTAGE UNDER TRADE CREDIT †

PINKI MAJUMDER AND U.K.BERA

Department of Mathematics,National institution of Technology,Agartala,Tripura(west),India, e-mail:[email protected], bera [email protected]

Abstract. In this proposed research we developed a deterministic inventory model of deteriorating items for time dependent demand and trade credit. Here supplier offers a credit limit to the retailer and retailer also offers a credit limit to the customer. This paper develops a model to determine an optimal ordering policy under conditions of allowable shortage and permissible delay in payment.Numerical examples are used to illustrate all results obtained in this paper.Finally the model is solved by Generalised Reduced Gradient(GRG) method and using LINGO software. Key words :Time dependent demand , shortage, deterioration , trade credit,optimization.

1. Introduction In today’s business transactions , it is more and more common to see that the retailers are allowed a fixed time period before they settle their account to the supplier. We term this period as trade credit period.Before the end of the trade credit period, the retailer can sell the goods and accumulate revenue and earn interest.A higher interest is charged if the payment is not settled at the end of the trade credit period. Goyal[6] develops an economic order quantity under the conditions of permissible delay in payments for an inventory system.Jamal et. al consider an ordering policy for deteriorating items with allowable shortage and permissible delay in payment.Funthermore, Sarker et. al[11] address a model to determine an optimal ordering policy for deteriorating items under inflation, permissible delay in payment and allowable shortage.Chen and Ouyang[2] extend Jamal et. al.[7] model by fuzzifying the carrying cost rate,interest paid rate and interest earned rate simultaneously , based on the interval-valued fuzzy numbers and triangular fuzzy number to fit the real world. Kumar M et al. developed an EOQ model for time varying demaqnd rate under trade credits. Chen and Kang[3] proposed an integrated inventory models considering permissible delay in payment and variant pricing strategy,M. Liang et. 
al[4] developed an optimal order quantity under advanced sales and permissible delay in payments.Deterioration is applicable to many inventories in practice like blood,fashion goods, agricultural products and medicine , highly volatile liquids such as gasoline;alcohol,electronic goods , radioactive substances , photographic film, grain etc.So decay or deterioration of physical goods in stock is a very realistic feature and inventory researches felt the necessity to use this factory into consideration.Shah and Jaiswal presented an inventory model for items deteriorating at a constant rate.Covert and philip[1] , Deb and Chaudhuri[5] ,Kumar,M et al.[8]developed an inventory model with time dependent deterioration rate. Recently Meher ,Panda[9] and Sahu[10] developed an inventory model where †

Corresponding Author. 1 International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 197

2

Pinki Majumder and U.K.Bera

demand is a weibull function of time. In the classical inventory models the demand rate is assumed to be a constant . In reality , demand for physical goals may be time dependent and price dependent. Meher , Panda and Sahu[10] develops inventory model where demand is a function of time. In this paper we establish an deterministic inventory model with allowable shortage , time dependent demand , weibull deterioration and two trade credit period. Here we derive the optimal value of cycle time which minimize the total average cost. Lastly numerical examples are set to illustrate all results obtained in this paper. 2. Assumptions and notation The following notations and assumptions are used for the development of proposed model. 2a. Notation (i) D(t)=a(1-bt); the annual demand as a decreasing function of time where a > 0 is fixed demand and b(0 < b < 1) denotes the rate of demand. (ii) C = The unit purchase cost. (iii) S = The unit selling cost with (S > C). (iv) h= The inventory holding cost per year excluding interest charges. (v) A = The ordering cost per order. (vi) P = The unit shortage cost. (vii) Q(t) = The order quantity at time t = 0. (viii) θ(t) = The deteriorating rate which is a weibull function of time as θ(t) =αβtβ−1 where 0 < α 0 and t > 0 (ix) M = Retailer’s trade credit period offered by the supplier in years. (x) N = Customer’s trade period offered by the retailer in years. (xi) Ic = Interest charges payable per $ per year to the supplier. (xii) Ie = Interest earned per $ per year. (xiii) I(t) = Inventory level at time t. (xiv) T1 = Length of the period with positive stock of the item. (xv) T2 = Length of the period with negative stock of the item. (xvi) T = Length of the replenishment cycle . T = T1 + T2 (xvii) Z(T1 , T2 ) : Total Inventory cost when the length of period with positive stock of the item is T1 and the length of the period with negative stock of the item is T2 . 
(xviii) Z1 (T1 , T2 ) : Total relevant cost per unit time when N ≤ M ≤ T1 < T . (xix) Z2 (T1 , T2 ) : Total relevant cost per unit time when N ≤ T1 ≤ M < T . (xx) Z3 (T1 , T2 ) : Total relevant cost per unit time when 0 ≤ T1 ≤ N ≤ M < T . (xxi) T1∗ = Optimal value of T1 . (xxii) T2∗ = Optimal value of T2 . 2b. Assumption (i) (ii) (iii) (iv) (v) (vi)

P 198

The inventory system under consideration deals with the single item. The planning horizon is infinite. The demand of the product is declining function of time. Shortages are allowed. Ic ≥ Ie , S ≥ C, M ≥ N . The supplier offers the full trade credit to the retailer.When T1 ≥ M ,the account is settled at T1 = M ,the retailer pays off all units sold and keeps his/her profits, and starts paying for the interest charges on the items in stock with rate Ic .When T1 ≤ M ,the account is settled at T1 = M and the retailer no need to pay any interest on the stock.

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

A deterministic inventory model.......... allowable shortage under trade credit

3

(vii) The retailer can accumulate revenue and earn interest after his/her customer pays for the amount of purchasing cost to the retailer until the end of the trade credit period offered by the supplier . That is , the retailer can accumulate revenue and earn interest during the period N to M with rate Ie under the condition of trade credit. (viii) The deteriorated units can neither be repaired nor replaced during the cycle time. 3. Mathematical Formulation The inventory level I(t) depletes to meet the demand and deterioration. The rate of change of inventory level is governed by the following differential equation dI(t) + θI(t) = −D(t) ,0 ≤ t ≤ T (1) dt dI(t) β−1 + αβt I(t) = −a(1 − bt) ,0 ≤ t ≤ T (2) which is equivalent to dt with the initial condition I(0) = Q and the boundary condition I(T1 ) = 0 Consequently, the solution of (2) is given by β α bα (T1β+1 −tβ+1 )− β+2 (T1β+2 −tβ+2 )− 2b (T12 −t2 )+(T1 −t)] ,0≤t≤T (3) I(t) = ae−αt [ β+1 The order quantity is

α Q = I(0) = a[ β+1 T1β+1 −

bα T β+2 β+2 1

−

bT12 2

+ T1 ]

(4)

the total cost of inventory system per time unit include the following : A (a) Ordering cost : (T1 +T 2) T β+1

bT β+2

1 (b) Deterioration cost per unit time : (TCaα [ 1 − β+2 ] 1 +T2 ) β+1 (c)Inventory holding cost per unit time: (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 [ T − 2(β+1) − (β+1)(β+3) T1 + 2 T1 (T1 +T2 ) (β+1)(2β+3) 1 T P I(t)dt (d)Shortage cost = − (T1 +T ) 2

(β+2) αβ T (β+1)(β+2) 1

−

bT13 3

+

T12 ] 2

T1

P = − (T1 +T 2)

T

α (1 − αtβ )[ β+1 (T1β+1 − tβ+1 ) −

bα (T1β+2 β+2

− tβ+2 ) − 2b (T12 − t2 ) + (T1 − t)]dt

T1 bT 3 T2 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 = (T1 +T2 ) [ (β+1)(β+2) T1 − (β+1)(β+3) T1 + (β+1)(2β+3) T1 − 2(β+1) − 31 + 21 ]+ 2 T1 β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 ) β+1 β+1 2β+2 β+1 2β+3 2 β+1 β+3 β+1 β+2 β+2 (T1 +T2 ) α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + T2 ) − β+3 β+1 β+2 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )] β+3 3 2

Regarding interest charges and earned three cases may arise based on the length of M, N, T1 . The three cases are as follows Case1 : N ≤ M ≤ T1 < T Case2 : N ≤ T1 ≤ M < T Case3 : T1 ≤ N ≤ M < T

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 199

4

Pinki Majumder and U.K.Bera

p1.png Figure 1. case 1: N≤ M ≤ T1 < T

p2.png Figure 2. case 2: N ≤ T1 ≤ M < T

p3.png Figure 3. case 3: T1 ≤ N ≤ M≤T

P 200

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

A deterministic inventory model.......... allowable shortage under trade credit

5

4. According to given assumption,there are three cases to occur in interest charged for the items kept in stock per year. Case 1. N ≤ M ≤ T1 < T

c Annual interest payable = (T1CI +T2 ) ca = (TCI 1 +T2 )

T1

M

(

−

I(t)dt

M

α (1 − αtβ )[ β+1 (T1β+1 − tβ+1 ) − 2

(2β+3)

α b ca = (TCI [ T 1 +T2 ) (β+1)(2β+3) 1 (β+1) αT1

T1

(β+2) αβT1

(β+1) (β+2) α2 b 2β+3 M (β+2)(2β+3)

bα (T1β+2 β+2

(2β+2) α2 T 2(β+1)2 1

−

−

− tβ+2 ) − 2b (T12 − t2 ) + (T1 − t)]dt

(β+3) bαβ T (β+1)(β+3) 1

+

(β+2) αβ T (β+1)(β+2) 1

−

bT13 3

+

T12 2

−

(β+1) bT12 αβ αβb α2 +T1 )(M − αM )− (β+1)(β+2) M β+2 + 2(β+2)(β+3) M β+3 − (β+1)(2β+2) M 2β+2 + 2 (β+1) 3 2 − bM6 + M2 ]

−

Case2. N ≤ T1 ≤ M < T In this case annual interest payable = 0 Case 3. T1 ≤ N ≤ M < T In this case annual interest payable = 0 5. According to given assumption,three cases will occur in interest earned per year. case 1. N ≤ M ≤ T1 < T M The annual interest earned = (T1SI+Te 2 ) [a(1 − bT2 )T2 (M − N ) + a(1 − bt)tdt] = (T1SI+Te 2 ) [a(1

− bT2 )T2 (M − N ) +

2 a( M2

−

bM 3 3

−

N2 2

+

bN 3 )] 3

N

case 2. N ≤ T1 ≤ M < T

T1 The annual interest earned = (T1SI+Te 2 ) [a(1−bT2 )T2 (M −N )+a(1−bT1 )T1 (M −T1 )+ a(1 − bt)tdt] = (T1SI+Te 2 ) [a(1

− bT2 )T2 (M − N ) +

T2 a( 21

−

bT13 3

N

−

N2 2

+

bN 3 ) 3

+ a(1 − bT1 )T1 (M − T1 )]

case 3. T1 ≤ N ≤ M < T The annual interest earned = (T1SI+Te 2 ) [a(1 − bT2 )T2 (M − N ) + a(1 − bT1 )T1 (M − N )] The annual total cost incurred by the retailer Z(T1 , T2 ) = Setup cost + Holding cost + Purchasing cost + Shortage cost +Interest payable Interest earned T1β+1 bT1β+2 (2β+3) (2β+2) A α2 b α2 + (TCaα [ − ] + (T1ah [ T − 2(β+1) − 2 T1 (T1 +T2 ) +T ) β+1 β+2 +T2 ) (β+1)(2β+3) 1 1 2 3 2 bT T (β+3) (β+2) bαβ αβ T + (β+1)(β+2) T1 − 31 + 21 ]+ (β+1)(β+3) 1 bT13 T12 (2β+3) (2β+2) (β+3) (β+2) bαβ αβ α2 b α2 ca + (TCI [ T − − T + T − + − 2 T1 1 1 1 +T ) (β+1)(2β+3) 2(β+1) (β+1)(β+3) (β+1)(β+2) 3 2 1 2 (β+1) (β+2) (β+1) αT1 αβT1 bT 2 αβ αβb α2 ( (β+1) − (β+2) − 21 +T1 )(M − αM )− (β+1)(β+2) M β+2 + 2(β+2)(β+3) M β+3 − (β+1)(2β+2) M 2β+2 + (β+1) 3 2 α2 b M 2β+3 − bM6 + M2 ] (β+2)(2β+3)

where Z1 (T1 , T2 ) =

(β+2)

αβ P + (T1 +T [ T 2 ) (β+1)(β+2) 1

β+1 2 P 2) [ α (T1β+1 (T1 +T (T1 +T2 ) β+1 β+1

bT13 T12 (β+3) (2β+3) 2β+2 bαβ α2 b α2 T + T − T − + ]+ 2 1 1 1 (β+1)(β+3) (β+1)(2β+3) 2(β+1) 3 2 2β+2 β+1 2β+3 β+1 (T1 +T2 ) α2 b 2) 2) 2) ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − 2β+2 β+1 2β+3 2 β+1

−

−

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 201

6

Pinki Majumder and U.K.Bera

β+2 β+1 β+2 (T1 +T2 )β+3 α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + T2 ) − β+3 β+1 β+2 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )] β+3 3 2 SIe M2 bM 3 N2 bN 3 − (T1 +T2 ) [a(1 − bT2 )T2 (M − N ) + a( 2 − 3 − 2 + 3 )]

T1β+1 bT1β+2 (2β+2) (2β+3) A Caα α2 b α2 − + [ − ]+ (T1ah [ T − 2(β+1) 2 T1 (T1 +T2 ) (T1 +T2 ) β+1 β+2 +T2 ) (β+1)(2β+3) 1 3 2 T bT (β+3) (β+2) bαβ αβ T + (β+1)(β+2) T1 − 31 + 21 ]+ (β+1)(β+3) 1 bT 3 T2 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 [ T − (β+1)(β+3) T1 + (β+1)(2β+3) T1 − 2(β+1) − 31 + 21 ] + 2 T1 (T1 +T2 ) (β+1)(β+2) 1 β+1 β+1 β+1 2β+2 2β+3 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 ) β+1 β+1 2β+2 β+1 2β+3 2 β+1 β+2 (T +T ) (T1 +T2 )β+3 (T1 +T2 )β+2 (T1 +T2 )β+1 β+1 β+2 α bα 2 ) + α(T1 β+1 − β+2 ) − β+1 (T1 (T1 + T2 ) − 1 β+2 ) + β+2 (T1 (T1 + T2 ) − β+3 β+3 3 2 (T1 +T3 ) (T1 +T2 ) (T1 +T2 ) b 2 ) + 2 (T1 (T1 + T2 ) − ) − (T1 (T1 + T2 ) − )] β+3 3 2 T12 bT13 SIe N2 bN 3 − (T1 +T2 ) [a(1 − bT2 )T2 (M − N ) + a( 2 − 3 − 2 + 3 ) + a(1 − bT1 )T1 (M − T1 )]

where Z2 (T1 , T2 ) =

T1β+1 bT1β+2 (2β+3) (2β+2) A Caα α2 b α2 + [ − ] + (T1ah [ T − 2(β+1) − 2 T1 (T1 +T2 ) (T1 +T2 ) β+1 β+2 +T2 ) (β+1)(2β+3) 1 bT13 T12 (β+3) (β+2) bαβ αβ T + (β+1)(β+2) T1 − 3 + 2 ]+ (β+1)(β+3) 1 bT13 T12 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 T − [ T − T + T − + ]+ 2 1 1 1 1 (T1 +T2 ) (β+1)(β+2) (β+1)(β+3) (β+1)(2β+3) 2(β+1) 3 2 β+1 (T (T1 +T2 )2β+2 (T1 +T2 )2β+3 +T ) β+1 (T1 +T2 )β+1 β+2 (T1 +T2 )β+1 P α2 α2 b αb 1 2 [ (T1 − 2β+2 ) − β+2 (T1 − 2β+3 ) − 2 (T12 β+1 − (T1 +T2 ) β+1 β+1 β+1 β+3 β+1 β+2 (T1 +T2 ) (T1 +T2 ) (T1 +T2 )β+2 (T1 +T2 ) β+1 α bα ) + α(T − ) − (T (T + T ) − ) + β+2 (T1β+2 (T1 + T2 ) − 1 1 2 1 β+3 β+1 β+2 β+1 β+2 3 2 (T1 +T2 )β+3 2) 3) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )] β+3 3 2 SIe − (T1 +T2 ) [a(1 − bT2 )T2 (M − N ) + a(1 − bT1 )T1 (M − N )]

Z3 (T1 , T2 ) =

Since Z1 (M, T2 ) = Z2 (M, T2 ) Z2 (N, T2 ) = Z3 (N, T2 ) Therefore Z(T1 , T2 ) is continuous and well definded All Z1 (T1 , T2 ), Z2 (T1 , T2 ), Z3 (T1 , T2 ) are defined on T1 > 0, T2 > 0. 6. The determinations of the optimal solution of Z(T1 , T2 ) The optimal solutions (T1 , T2 ) of Z1 (T1 , T2 )can be determined by equations ∂Z1 (T1 ,T2 ) ∂T1 ∂Z1 (T1 ,T2 ) = ∂T2

=0 (1) 0 (2)

(1) implies

T β+1

A Caα 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

bT1β+2 (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 ]− (T1 +T − 2(β+1) − (β+1)(β+3) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 2)

bT13 T12 (β+2) αβ T − + ] 1 (β+1)(β+2) 3 2 bT 3 T2 (2β+3) (2β+2) (β+3) (β+2) bαβ αβ CIc a α2 b α2 − (T1 +T2 )2 [ (β+1)(2β+3) T1 − 2(β+1) − (β+1)(β+3) T1 + (β+1)(β+2) T1 − 31 + 21 − 2 T1 (β+1) (β+2) (β+1) αT1 αbT1 bT 2 αβ αβb α2 ( (β+1) − (β+2) − 21 +T1 )(M − αM )− (β+1)(β+2) M β+2 + 2(β+2)(β+3) M β+3 − (β+1)(2β+2) M 2β+2 + (β+1) 3 2 α2 b M 2β+3 − bM6 + M2 ] (β+2)(2β+3) e + (T1SI [a(1 − bT2 )T2 (M − N ) +T2 )2

(β+2)

αβ P − (T1 +T 2 [ (β+1)(β+2) T1 2)

β+1 2 P 2) [ α (T1β+1 (T1 +T (T1 +T2 )2 β+1 β+1

P 202

2

+ a( M2 −

(β+3)

bαβ − (β+1)(β+3) T1 2β+2

2) − (T1 +T 2β+2

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

bM 3 3

2

−

N2 2

+

bN 3 )] 3

2

(2β+3)

α b + (β+1)(2β+3) T1

α b 2) ) − β+2 (T1β+2 (T1 +T β+1

β+1

2

bT13 T2 + 21 ] − 3 β+1 αb 2) (T12 (T1 +T − 2 β+1

2β+2 α − 2(β+1) − 2 T1 2β+3

2) − (T1 +T 2β+3

)−

A deterministic inventory model.......... allowable shortage under trade credit

7

β+2 β+1 β+2 (T1 +T2 )β+3 α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + T2 ) − β+3 β+1 β+2 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )]+ β+3 3 2 2 β+1 β+2 2β+2 2β+1 bαβ αβ α2 b α2 P P 2 − (β+1) T1 + (β+1) T1 − (β+1) T1 − bT1 + T1 ] + (T1 +T [ T [ α (T1β+1 (T1 + (T1 +T2 ) (β+1) 1 2 ) (β+1) β+1 α2 b 2) T2 )β + (T1 + T2 )β+1 T1β − (T1 + T2 )2β+1 ) − (β+2) (T1β+2 (T1 + T2 )β + (T1 +T (β + 2)T1β+1 − (T1 + β+1 β+1 +T2 )β+1 2) T2 )2β+2 ) − αb ( (T1 +T 2T1 + (T1 + T2 )β T12 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 + (T1(β+1) − (T1 + 2 β+1 α bα ((β + 1)T1β (T1 + T2 ) + T1β+1 − (T1 + T2 )β+1 ) + β+2 ((β + 2)T1β+1 (T1 + T2 ) + T1β+2 − T2 )β+1 ) − (β+1) (T1 + T2 )β+2 ) + 2b (T12 + (T1 + T2 )2T1 − (T1 + T2 )2 ) − (T1 + (T1 + T2 ) − (T1 + T2 ))] bαβ α2 b α2 + (TCaα (T1β − bT1β+1 ) + (T1ah [ αβ T β+1 − (β+1) T1β+2 + (β+1) T12β+2 − (β+1) T12β+1 − bT12 + T1 ] + +T2 ) (β+1) 1 1 +T2 ) 2 αβb αβ aCIc α2 [ α b T 2β+2 − (β+1) T12β+1 − (β+1) T1β+2 + (β+1) T1β+1 − bT12 + T1 − (αT1β − αbT1β+1 − bT1 + (T1 +T2 ) (β+1) 1

1)(M −

αM β+1 )] (β+1)

=0

(3)

Now (2) implies T β+1

A Caα 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

bT1β+2 (2β+2) (2β+3) (β+3) bαβ α2 b ah α2 − (β+1)(β+3) ]− (T1 +T − 2(β+1) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 2)

bT 3 T2 (β+2) αβ T − 31 + 21 ] (β+1)(β+2) 1 bT 3 T2 (2β+3) (2β+2) (β+3) (β+2) bαβ αβ α2 b α2 ca − (T1CI [ T − 2(β+1) − (β+1)(β+3) T1 + (β+1)(β+2) T1 − 31 + 21 − 2 T1 +T2 )2 (β+1)(2β+3) 1 (β+1) (β+2) (β+1) αT1 αbT1 bT 2 αβ αβb α2 − (β+2) − 21 +T1 )(M − αM )− (β+1)(β+2) M β+2 + 2(β+2)(β+3) M β+3 − (β+1)(2β+2) M 2β+2 + ( (β+1) (β+1) 3 2 α2 b M 2β+3 − bM6 + M2 ] (β+2)(2β+3) 2 3 2 3 e e + (T1SI [a(1 − bT2 )T2 (M − N ) + a( M2 − bM3 − N2 + bN3 )] − (T1sI+T [a(1 − 2bT2 )(M − N )] +T2 )2 2) 2 bT 3 T2 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α b α2 − (T1 +T − (β+1)(β+3) T1 + (β+1)(2β+3) T1 − 2(β+1) − 31 + 21 ] − 2 [ (β+1)(β+2) T1 2 T1 2) β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 )2 β+1 β+1 2β+2 β+1 2β+3 2 β+1 (T1 +T2 )β+3 (T1 +T2 )β+2 (T1 +T2 )β+2 (T1 +T2 )β+1 β+1 β+2 α bα ) + α(T1 β+1 − ) − β+1 (T1 (T1 + T2 ) − ) + β+2 (T1 (T1 + β+3 β+2 β+2 (T1 +T2 )β+3 (T1 +T3 )3 (T1 +T2 )2 b P α2 2 ) − (T1 (T1 + T2 ) − )] + (T1 +T2 ) [ (β+1) (T1β+1 (T1 + T2 ) − β+3 ) + 2 (T1 (T1 + T2 ) − 3 2 α2 b T2 )β − (T1 + T2 )2β+1 ) − (β+2) (T1β+2 (T1 + T2 )β − (T1 + T2 )2β+2 ) − αb ((T1 + T2 )β T12 − (T1 + T2 )β+2 ) + 2 α bα (T1β+1 − (T1 + T2 )β+1 ) + (β+2) (T1β+2 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 − (T1 + T2 )β+1 ) − (β+1) b (T12 − (T1 + T2 )2 ) − (T1 − (T1 + T2 ))] =0 (4) 2

The equation (3) and (4) gives the optimal value T1∗ and T2∗ . The optimal solutions (T1 , T2 ) of Z2 (T1 , T2 ) can be determined by equations ∂Z2 (T1 ,T2 ) =0 (5) ∂T1 ∂Z2 (T1 ,T2 ) =0 (6) ∂T2 (5) implies T β+1

A Caα 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

bT1β+2 (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 ]− (T1 +T − 2(β+1) − (β+1)(β+3) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 2)

bT13 T12 (β+2) αβ T − + ] 1 (β+1)(β+2) 3 2 bT13 T12 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 − (T1 +T − T + T − − + ]− 2 [ (β+1)(β+2) T1 2 T1 1 1 ) (β+1)(β+3) (β+1)(2β+3) 2(β+1) 3 2 2 β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 )2 β+1 β+1 2β+2 β+1 2β+3 2 β+1

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 203

8

Pinki Majumder and U.K.Bera

(T1 +T2 )β+2 (T1 +T2 )β+3 (T1 +T2 )β+2 (T1 +T2 )β+1 β+1 α bα (T + T ) − ) + α(T − ) − (T ) + β+2 (T1β+2 (T1 + T2 ) − 1 2 1 1 β+3 β+1 β+2 β+1 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )]+ β+3 3 2 2 2 2 bαβ α b α P P [ αβ T β+1 − (β+1) T1β+2 + (β+1) T12β+2 − (β+1) T12β+1 − bT12 + T1 ] + (T1 +T [ α (T1β+1 (T1 + (T1 +T2 ) (β+1) 1 2 ) (β+1) β+1 α2 b 2) T2 )β + (T1 + T2 )β+1 T1β − (T1 + T2 )2β+1 ) − (β+2) (T1β+2 (T1 + T2 )β + (T1 +T (β + 2)T1β+1 − (T1 + β+1 β+1 +T2 )β+1 2) T2 )2β+2 ) − αb ( (T1 +T 2T1 + (T1 + T2 )β T12 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 + (T1(β+1) − (T1 + 2 β+1 β β+1 β+1 α bα β+1 β+1 T2 ) ) − (β+1) ((β + 1)T1 (T1 + T2 ) + T1 − (T1 + T2 ) ) + β+2 ((β + 2)T1 (T1 + T2 ) + T1β+2 − (T1 + T2 )β+2 ) + 2b (T12 + (T1 + T2 )2T1 − (T1 + T2 )2 ) − (T1 + (T1 + T2 ) − (T1 + T2 ))] bαβ α2 b α2 + (TCaα (T1β − bT1β+1 ) + (T1ah [ αβ T β+1 − (β+1) T1β+2 + (β+1) T12β+2 − (β+1) T12β+1 − bT12 + T1 ] +T2 ) (β+1) 1 1 +T2 ) 2 3 T2 bT 3 e + (T1SI [a(1−bT2 )T2 (M −N )+a( 21 − 31 − N2 + bN3 )+a(1−bT1 )T1 (M −T1 )]− (T1SI+Te 2 ) [a(T1 − +T2 )2 bT12 ) + a(1 − bT1 )(M − 2T1 ) + T1 (M − T1 )(−b)]=0 (7)

(6) implies

A Caα 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

T β+1

bT1β+2 (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 ]− (T1 +T − 2(β+1) − (β+1)(β+3) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 2)

N )(1 − 2bT2 )]=0

(8)

bT 3 T2 (β+2) αβ T − 31 + 21 ] (β+1)(β+2) 1 bT 3 T2 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 − (T1 +T − (β+1)(β+3) T1 + (β+1)(2β+3) T1 − 2(β+1) − 31 + 21 ] − 2 [ (β+1)(β+2) T1 2 T1 2) β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 )2 β+1 β+1 2β+2 β+1 2β+3 2 β+1 (T1 +T2 )β+3 (T1 +T2 )β+2 (T1 +T2 )β+2 (T1 +T2 )β+1 β+1 β+2 α bα ) + α(T1 β+1 − ) − β+1 (T1 (T1 + T2 ) − ) + β+2 (T1 (T1 + β+3 β+2 β+2 (T1 +T2 )β+3 (T1 +T3 )3 (T1 +T2 )2 b P α2 2 T2 ) − β+3 ) + 2 (T1 (T1 + T2 ) − ) − (T1 (T1 + T2 ) − )] + (T1 +T2 ) [ (β+1) (T1β+1 (T1 + 3 2 α2 b T2 )β − (T1 + T2 )2β+1 ) − (β+2) (T1β+2 (T1 + T2 )β − (T1 + T2 )2β+2 ) − αb ((T1 + T2 )β T12 − (T1 + T2 )β+2 ) + 2 α bα (T1β+1 − (T1 + T2 )β+1 ) + (β+2) (T1β+2 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 − (T1 + T2 )β+1 ) − (β+1) b (T12 − (T1 + T2 )2 ) − (T1 − (T1 + T2 ))] 2 2 3 T12 bT13 e [a(1−bT )T (M −N )+a( − − N2 + bN3 )+a(1−bT1 )T1 (M −T1 )]− (T1SI+Te 2 ) [a(M − + (T1SI 2 2 2 +T2 ) 2 3

The equation (7) and (8) gives the optimal value T1∗ and T2∗ . The optimal solutions (T1 , T2 ) of Z3 (T1 , T2 ) can be determined by equations =0 (9) =0 (10) (9) implies ∂Z3 (T1 ,T2 ) ∂T1 ∂Z3 (T1 ,T2 ) ∂T2

T β+1

A Caα 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

bT1β+2 (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 ]− (T1 +T − 2(β+1) − (β+1)(β+3) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 2)

bT 3 T2 (β+2) αβ T − 31 + 21 ] (β+1)(β+2) 1 bT13 T12 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 − (T1 +T − T + T − − + ]− 2 [ (β+1)(β+2) T1 2 T1 1 1 ) (β+1)(β+3) (β+1)(2β+3) 2(β+1) 3 2 2 β+1 2β+2 β+1 2β+3 β+1 2 P α2 b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 )2 β+1 β+1 2β+2 β+1 2β+3 2 β+1 β+3 β+1 β+2 β+2 (T1 +T2 ) α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + T2 ) − β+3 β+1 β+2 β+2 3 2 (T1 +T2 )β+3 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )]+ β+3 3 2

P 204

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

A deterministic inventory model.......... allowable shortage under trade credit

9

bαβ P α2 b α2 P α2 [ αβ T β+1 − (β+1) [ T1β+2 + (β+1) T12β+2 − (β+1) T12β+1 − bT12 + T1 ] + (T1 +T (T1β+1 (T1 + (T1 +T2 ) (β+1) 1 ) (β+1) 2 β+1 α2 b 2) (T1β+2 (T1 + T2 )β + (T1 +T (β + 2)T1β+1 − (T1 + T2 )β + (T1 + T2 )β+1 T1β − (T1 + T2 )2β+1 ) − (β+2) β+1 2) T2 )2β+2 ) − αb ( (T1 +T 2 β+1

β+1

β+1

+T2 ) 2T1 + (T1 + T2 )β T12 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 + (T1(β+1)

− (T1 + β β+1 β+1 α bα ((β + 1)T1 (T1 + T2 ) + T1 − (T1 + T2 )β+1 ) + β+2 ((β + 2)T1 (T1 + T2 ) + T1β+2 − T2 )β+1 ) − (β+1) (T1 + T2 )β+2 ) + 2b (T12 + (T1 + T2 )2T1 − (T1 + T2 )2 ) − (T1 + (T1 + T2 ) − (T1 + T2 ))] bαβ α2 b α2 + (TCaα (T1β − bT1β+1 ) + (T1ah [ αβ T β+1 − (β+1) T1β+2 + (β+1) T12β+2 − (β+1) T12β+1 − bT12 + T1 ] +T2 ) (β+1) 1 1 +T2 ) e + (T1SI [a(1−bT2 )T2 (M −N )+a(1−bT1 )T1 (M −N )]− (T1SI+Te 2 ) [a(M −N )(1−2bT1 )]=0 (11) +T2 )2 Equation (10) implies T β+1

Caα A 1 − (T1 +T 2 − (T +T )2 [ β+1 − 2) 1 2

bT1β+2 (2β+3) (2β+2) (β+3) bαβ ah α2 b α2 ]− (T1 +T − − (β+1)(β+3) T1 + 2 [ (β+1)(2β+3) T1 2 T1 β+2 ) 2(β+1) 2

bT 3 T2 (β+2) αβ T − 31 + 21 ] (β+1)(β+2) 1 bT13 T12 (β+2) (β+3) (2β+3) 2β+2 αβ bαβ P α2 b α2 − (T1 +T − T + T − − + ]− 2 [ (β+1)(β+2) T1 2 T1 1 1 ) (β+1)(β+3) (β+1)(2β+3) 2(β+1) 3 2 2 β+1 2β+2 β+1 2β+3 β+1 2 2 P α b 2) 2) 2) 2) 2) [ α (T1β+1 (T1 +T − (T1 +T ) − β+2 (T1β+2 (T1 +T − (T1 +T ) − αb (T12 (T1 +T − (T1 +T2 )2 β+1 β+1 2β+2 β+1 2β+3 2 β+1 β+3 β+1 β+2 β+2 (T1 +T2 ) α bα 2) 2) 2) ) + α(T1 (T1 +T − (T1 +T ) − β+1 (T1β+1 (T1 + T2 ) − (T1 +T ) + β+2 (T1β+2 (T1 + β+3 β+1 β+2 β+2 β+3 2 3 2 P 2) 3) 2) ) + 2b (T12 (T1 + T2 ) − (T1 +T ) − (T1 (T1 + T2 ) − (T1 +T )] + (T1 +T [ α (T1β+1 (T1 + T2 ) − (T1 +T β+3 3 2 2 ) (β+1) α2 b T2 )β − (T1 + T2 )2β+1 ) − (β+2) (T1β+2 (T1 + T2 )β − (T1 + T2 )2β+2 ) − αb ((T1 + T2 )β T12 − (T1 + T2 )β+2 ) + 2 α bα (T1β+1 − (T1 + T2 )β+1 ) + (β+2) (T1β+2 − (T1 + T2 )β+2 ) + α((T1 + T2 )β T1 − (T1 + T2 )β+1 ) − (β+1) b (T12 − (T1 + T2 )2 ) − (T1 − (T1 + T2 ))] 2 e [a(1−bT2 )T2 (M −N )+a(1−bT1 )T1 (M −N )]− (T1SI+Te 2 ) [a(M −N )(1−2bT2 )]=0 (12) + (T1SI +T2 )2

The equation (11) and (12) gives the optimal value T1∗ and T2∗ . 7. Numerical Example:To illustrate the results of the proposed model, we solve the following numerical examples. Example 1:- Let C = 60, S = 70, P = 20, Ic = 0.02, Ie = 0.015, A = 350, a = 2900, b = 0.35, α = 0.01, β = 2, M = 0.02, N = 0.01, h = 4 Then we see thatT1∗ = 0.02229108, T2∗ = 8.023180 and the minimum average cost Z1 (T1∗ , T2∗ ) = 103.6384 Example 2:- Let C = 50, S = 80, P = 50, Ic = 0.06, Ie = 0.01, A = 300, a = 1000, b = 0.2, α = 0.01, β = 2, M = 0.10, N = 0.022, h = 8 Then we see thatT1∗ = 0.03180632, T2∗ = 3.611006 and the minimum average cost Z2 (T1∗ , T2∗ ) = 129.9500 Example 3:- Let C = 50, S = 70, P = 30, Ic = 0.070, Ie = 0.030, A = 250, a = 1000, b = 0.4, α = 0.30, β = 2, M = 0.09589041, N = 0.01369863, h = 4 Then we see thatT1∗ = 0.005365123, T2∗ = 2.080469 and the minimum average cost Z1 (T1∗ , T2∗ ) = 100.4811 8. Conclusion:In this paper, an EOQ inventory model is considered for determining the optimal cycle time under weibull deterioration rate and demand declining market where shortages are allowed.Also the proposed model in-cooperates other realistic phenomenon and practical features such as trade credit period.The credit policy in payment has become a very powerful tool to attract

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 205

10

Pinki Majumder and U.K.Bera

new customers and a good incentive policy for the buyers.In keeping with this reality , these factors are incorporated into the present model. Numerical examples are presented to justify the claim of each case of the model analysis by obtaining the optimal inventory length, shortage time period and also calculated the total variable cost. The proposed model can be extended in several ways.For instance,we may extend this model for partial trade credit period, quantity discount,taking selling price, ordering cost , demand as a fuzzy number. 9. References:[1]Covert R.P and Philip G. C(1973),An EOQ model for items with weibull distribution deterioration ,AIIE Transactions,5,323-326. [2]Chen L.H,Ouyang L.Y.,Fuzzy inventory model for deteriorating items with permissible delay in payment,Appl. Math. Comput. 182(2006)711-726. [3] Chen L.H ,Kang F.S (2010),Integrated inventory models considering permissible delay in payment and variant pricing strategy , Appl. Math. Model,34,36-46. [4] Chen M.L and Chang M. C. (2011),Optimal order quantity under advance sales and permissible delays in payment,African Journal of Business Management 5(17),7325-7334. [5] Deb m. and Chaudhuri K.S.(1986), An EOQ Model for items with finite rate of production and variable rate of deterioration, Opsearch,23,175-181. [6]Goyal S.K,Economic order quantity under conditions of permissible delay in payments,J. Operat. Res.Soc. 36(1985) 335-338. [7]Jamal A.M.M , Sarker B.R,Wang S.,An ordering policy for deteriorating items with allowable Shortage and permissible delay in payment .Journal of Operation Research society 48(1997) 826-833. [8] Kumar M., Tripathi R.P. and Singh S.R (2008) , Optimal ordering policy and pricing with variable demand rate under trade credits,Journal of National Academy of Mathematics , 22,111-123. [[9] Meher M. 
K , Panda G.C, Sahu S.K ,An Inventory Model with weibull Deterioration Rate under the Delay in payment in Demand Declining Market , Applied Mathematical sciences, vol.6,2012 no. 23,1121-1133. [10] Shah Y.K and Jaiswal M.C (1977) ,An order-level inventory Model for a system with constant rate of deterioration , Opsearch 14, 174-184. [11] Sarker B. R , Jamal A.M.M ,Wang S., Supply chain models for perishable products under inflation and permissible delay in payment.Computational Operation Research 27 (2000) 59-75.

P 206

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

DEVELOPMENT OF LABVIEW BASED ELECTRONIC NOSE USING K-NN ALGORITHM FOR THE DETECTION AND CLASSIFICATION OF FRUITY ODORS N.Jagadesh babu Assistant professor, EIE Department, Gitam University, Visakhaptanam,A.P,India. E-mail : [email protected]

ABSTRACT The basic objective of this paper is to development of electronic nose system which can able to detect and classify different fruits basing upon their odor with help of LabVIEW. This system consists of two Figaro gas sensors (TGS 2620 and TGS 2602) which is used detection for odor and k-NN Algorithm is used to classify different fruits. Olfaction is one’s sense of smell and a primary human sensory system. The detection of odors has been applied to many industrial applications, including indoor air quality, health care, safety and security, environmental monitoring, quality control of food products, medical diagnosis, psychoanalysis, agriculture, pharmaceuticals, military applications, and detection of hazardous gases, to name but a few. The biological nose is an obvious choice for such applications, but there are some disadvantages to having human beings perform these tasks because they have to face various difficulties such as fatigue, infections, mental state, subjectivity, exposure to hazardous materials etc., due to above reasons machines are preferred to do the above applications which show high accuracy then human beings. Keywords : Electronic nose, Virtual Instrumentation, K-NN algorithm, Fruity Odors

I. NTRODUCTION An electronic nose is a device intended to detect odors or flavors. The expression electronic sensing refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982 research has been conducted to develop technologies[2], commonly referred to as electronic noses that could detect and recognize odors and flavors. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications, including data storage and retrieval. These devices have undergone

much development and are now used to fulfill industrial needs.

Other techniques to analyze odors: In all industries, odor assessment is usually performed by human sensory analysis, by chemosensors, or by gas chromatography. The latter technique gives information about volatile organic compounds, but the correlation between analytical results and actual odor perception is not direct, owing to potential interactions between the odorous components.

Working principle: The electronic nose was developed to mimic human olfaction, which functions as a non-separative mechanism: an odor or flavor is perceived as a global fingerprint. Essentially, the instrument consists of head-space sampling, a sensor array and pattern recognition modules, which together generate the signal patterns used to characterize odors. An electronic nose comprises three major parts: a sample delivery system, a detection system and a computing system.

Detection system: The sensor set is the reactive part of the instrument. When in contact with volatile compounds the sensors react, meaning they experience a change of electrical properties. Each sensor is sensitive to all volatile molecules, but each in its own specific way. Most electronic noses use sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface, which transforms the signal into a digital value. The recorded data are then processed with statistical models.

Computing system: This part combines the responses of all the sensors, which form the input for the data treatment. It performs the global fingerprint analysis and provides results and representations that can be easily

International Journal on Current Science & Technology Vol - I l No- I l January-June’2013

P 207

interpreted. Moreover, the electronic nose results can be correlated with those obtained from other techniques (sensory panel, GC, GC/MS). Many data interpretation systems are used for the analysis of results, including Artificial Neural Networks (ANN), fuzzy logic and pattern recognition modules.

Performing an analysis: As a first step, an electronic nose must be trained with qualified samples so as to build a reference database. The instrument can then recognize new samples by comparing their volatile-compound fingerprints with those contained in its database, performing either qualitative or quantitative analysis. This can, however, also pose a problem: many odors are made up of multiple different molecules, which the device may register as different compounds, resulting in incorrect or inaccurate results depending on the primary function of the nose.

Applications: Electronic nose instruments are used by research and development laboratories, quality control laboratories and process and production departments for various purposes: the detection of lung cancer, by sensing the volatile organic compounds (VOCs) that indicate the disease; the quality control of food products, since a sensor could conveniently be placed in food packaging to indicate clearly when food has started to rot; and possible future applications in crime prevention and security. The ability of the electronic nose to detect odorless chemicals makes it attractive for police use, for example to detect drug odors despite other airborne odors that can confuse police dogs, although this is unlikely for the time being because the cost of the electronic nose remains too high. It may also be used as a bomb detection method in airports; through careful placement of several electronic noses and effective computer systems, the location of a bomb could be triangulated to within a few meters in a few seconds.

II. VIRTUAL INSTRUMENTATION Virtual instrumentation is the use of customizable software and modular measurement hardware to create measurement systems called virtual instruments. Traditional hardware instrumentation systems are made up of predefined hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software replaces a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g. an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrument concept: a synthetic instrument is a purely software-defined virtual instrument that performs a specific synthesis, analysis or measurement function on completely generic, measurement-agnostic hardware.


Figure 2.1: Architecture of VI

Traditional instruments (left) and software-based virtual instruments (right) largely share the same architectural components but embody radically different philosophies. Every virtual instrument consists of two parts: software and hardware. A virtual instrument typically has a sticker price comparable to, and often much less than, that of a similar traditional instrument for the current measurement task. The savings compound over time, because virtual instruments are much more flexible when measurement tasks change. With virtual instrumentation, software based on user requirements defines the functionality of general-purpose measurement and control hardware. Virtual instrumentation combines mainstream commercial technologies, such as the PC, with flexible software and a wide variety of measurement and control hardware, so engineers and scientists can create user-defined systems that meet their exact application needs. With virtual instrumentation, engineers and scientists reduce development time, design higher-quality products and lower their design costs.

III. VIRTUAL INSTRUMENTATION DESIGN The same design engineers who use a wide variety of software design tools must also use hardware to test prototypes.

Commonly there is no good interface between the design phase and the testing/validation phase, which means the design usually must be completed before entering testing and validation; issues discovered in the testing phase then require a design-phase iteration.

Virtual instrumentation is necessary because it delivers instrumentation with the rapid adaptability required for today's concept, product and process design, development and delivery. Only with virtual instrumentation can engineers and scientists create the user-defined instruments required to keep up with the world's demands. To meet the ever-increasing demand to innovate and deliver ideas and products faster, scientists and engineers are turning to advanced electronics, processors and software.

IV. METHODOLOGY
The system connects two gas sensors through an NI cDAQ to a PC, where LabVIEW and the k-NN algorithm process the readings.

Figure 3.1-1: Overview of the process (gas sensor 1, TGS 2620, and gas sensor 2, TGS 2602, feed the NI cDAQ, which passes the data to a PC running LabVIEW and the k-NN algorithm)

Figure: photograph of the overall setup

Gas sensor: Tin dioxide is the inorganic compound with the formula SnO2. The mineral form of SnO2 is called cassiterite, and this is the main ore of tin. This colorless, diamagnetic solid is amphoteric. The wide variety of electronic and chemical properties of metal oxides makes them exciting materials both for basic research and for technological applications. Oxides span a wide range of electrical properties, from wide band-gap insulators to metallic and superconducting materials. Tin dioxide belongs to a class of materials that combines high electrical conductivity with optical transparency and thus constitutes an important component for optoelectronic applications.

The electrical resistance of the sensor is attributed to the potential barrier that adsorbed oxygen forms at the grain boundaries. In the presence of a deoxidizing gas, the surface density of negatively charged oxygen decreases, so the barrier height at the grain boundary is reduced. The reduced barrier height decreases the sensor resistance. Over a certain range of gas concentrations, the relationship between sensor resistance and the concentration of the deoxidizing gas can be expressed by the power law quoted in the Figaro datasheets, Rs = A[C]^(-alpha), where Rs is the sensor resistance, A a constant, [C] the gas concentration and alpha the slope of the Rs curve.

Sensors configuration:

Figure 3.1-5: Sensors with PCB board

V. DATA ACQUISITION
Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems (abbreviated DAS or DAQ) typically convert analog waveforms into digital values for processing. The components of a data acquisition system include: sensors, which convert physical parameters to electrical signals; signal conditioning circuitry, which converts sensor signals into a form suitable for digitization; and analog-to-digital converters, which convert the conditioned sensor signals to digital values.

NI cDAQ-9174:

Figure 3.1-6: NI cDAQ-9174 chassis and modules


The NI cDAQ-9174 is a four-slot NI CompactDAQ USB chassis designed for small, portable, mixed-measurement test systems. Combine the cDAQ-9174 with up to four NI C Series I/O modules for a custom analog input, analog output, digital I/O and counter/timer measurement system. Modules are available for a variety of sensor measurements, including thermocouples, RTDs, strain gages, load and pressure transducers, torque cells, accelerometers, flow meters and microphones.

MATLAB Script Node: This node calls the MATLAB software to execute scripts. You must have a licensed copy of MATLAB version 6.5 or later installed on your computer to use MATLAB script nodes, because the script nodes invoke the MATLAB script server to execute scripts written in the MATLAB language syntax. Because LabVIEW uses ActiveX technology to implement MATLAB script nodes, they are available only on Windows.
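The cDAQ delivers raw voltages; to relate them to sensor resistance, Figaro's datasheets describe a simple load-resistor circuit (sensor in series with a load resistor RL across the circuit voltage VC, with the DAQ reading the voltage across RL). The sketch below assumes that circuit; the VC = 5 V and RL = 10 kOhm defaults are illustrative assumptions, not values stated in this paper.

```python
def sensor_resistance(v_out, v_c=5.0, r_l=10_000.0):
    """Sensor resistance Rs from the measured divider voltage.

    Assumes the standard Figaro load-resistor circuit: the sensor in
    series with load resistor r_l across circuit voltage v_c, with the
    DAQ measuring v_out across r_l.  v_c and r_l here are assumed
    values; substitute the ones from your own wiring.
    """
    if not 0 < v_out < v_c:
        raise ValueError("v_out must lie strictly between 0 and v_c")
    # Voltage divider: v_out = v_c * r_l / (Rs + r_l)  =>  solve for Rs.
    return (v_c - v_out) / v_out * r_l
```

With these assumed values, a mid-scale reading of 2.5 V corresponds to Rs equal to RL, as expected for a balanced divider.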

VI. K-NN ALGORITHM k-NN stands for the k-Nearest Neighbor algorithm. It is a pattern recognition technique for classifying objects based on the closest training examples in the feature space. k-NN is a type of instance-based learning, or lazy learning, in which the function is only approximated locally and all computation is deferred until classification. The k-nearest neighbor algorithm is among the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its nearest neighbor. The same method can be used for regression, by assigning to the object the average of the values of its k nearest neighbors. (A common weighting scheme is to give each neighbor a weight of 1/d, where d is the distance to the neighbor; this scheme is a generalization of linear interpolation.) The neighbors are taken from a set of objects for which the correct classification (or, in the case of regression, the value of the property) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data. Nearest neighbor rules in effect compute the decision boundary implicitly. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary
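The majority-vote classifier and the 1/d-weighted regression just described can be sketched in a few lines of Python; this is a hypothetical helper for illustration, not the LabVIEW VI used in this work.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Assign the class most common among the k nearest training samples.

    train is a list of (feature_tuple, label) pairs; query is a feature
    tuple with the same number of features.
    """
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def knn_regress(train, query, k=3):
    """Average of the k nearest values, each weighted by 1/d."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    # Guard against d == 0 (query coincides with a training sample).
    pairs = [(1.0 / max(math.dist(x, query), 1e-12), y) for x, y in nearest]
    return sum(w * y for w, y in pairs) / sum(w for w, _ in pairs)
```

Because no model is fitted in advance, all the work happens inside these calls at classification time, which is exactly the "lazy learning" behavior described above.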


complexity.

k-value selection: The best choice of k depends on the data; generally, larger values of k reduce the effect of noise on the classification but make the boundaries between classes less distinct. A good k can be selected by various heuristic techniques, for example cross-validation. The special case where the class is predicted to be the class of the closest training sample (i.e. k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or when the feature scales are not consistent with their importance. Much research effort has gone into selecting or scaling features to improve classification. One particularly popular approach uses evolutionary algorithms to optimize feature scaling; another scales features by the mutual information of the training data with the training classes.

Euclidean distance: The k-nearest-neighbor classifier generally uses the Euclidean distance between a test sample and the specified training samples. Let xi be an input sample with p features (xi1, xi2, ..., xip), n the total number of input samples (i = 1, 2, ..., n) and p the total number of features (j = 1, 2, ..., p). The Euclidean distance d(xi, xt) between samples xi and xt (t = 1, 2, ..., n) is defined as

d(xi, xt) = sqrt( (xi1 - xt1)^2 + (xi2 - xt2)^2 + ... + (xip - xtp)^2 )

Equation 3.1-1: Euclidean distance

k-NN example:
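Equation 3.1-1 translates directly into code; the sketch below mirrors the formula term by term (p paired features, squared differences, square root):

```python
import math

def euclidean_distance(x_i, x_t):
    """d(xi, xt) = sqrt((xi1-xt1)^2 + (xi2-xt2)^2 + ... + (xip-xtp)^2)."""
    if len(x_i) != len(x_t):
        raise ValueError("both samples must have the same number of features p")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_t)))
```

(Python 3.8+ provides the equivalent built-in `math.dist`.)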

Figure 3.1-11: K-NN Example

The test sample (green circle) should be classified either into the first class, of blue squares, or into the second class, of red triangles. If k = 3 it is assigned to the second class, because there are two triangles and only one square inside the inner circle. If k = 5 it is assigned to the first class (three squares vs. two triangles inside the outer circle).
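The figure's vote flip can be reproduced numerically. The coordinates below are hypothetical, chosen only so that the neighbor counts match the example (two triangles and one square within the inner circle, three squares and two triangles within the outer one):

```python
import math
from collections import Counter

# Query at the origin; points ordered here by increasing distance.
points = [
    ((1.0, 0.0),  "triangle"),
    ((0.0, 1.2),  "triangle"),
    ((1.5, 0.0),  "square"),
    ((0.0, -1.8), "square"),
    ((-2.0, 0.0), "square"),
    ((3.0, 3.0),  "square"),
]
query = (0.0, 0.0)

def vote(k):
    nearest = sorted(points, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(vote(3))  # triangle: 2 triangles vs 1 square among the 3 nearest
print(vote(5))  # square:   3 squares vs 2 triangles among the 5 nearest
```

This makes the k-selection trade-off concrete: the predicted class of the very same test sample changes when k grows from 3 to 5.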

Flow chart of process:

Figure 3.2-1: Flow chart of the process

There are two phases in the process.
Training: During this phase each fruit odor is sampled from the sensors using the NI cDAQ, and the values of both sensors, together with the type of fruit, are stored in a spreadsheet.
Testing: During this phase a new sample (whose type of fruit is to be determined) is acquired. Its values are compared with those of the trained fruits (stored in the spreadsheet) using the k-NN algorithm, and the type of fruit is shown.

Building the VI for detection and classification of fruit odors and implementing the k-NN classifier algorithm:

Figure 3.3-1: Block diagram during the training phase
Figure 3.3-2: Block diagram during the testing phase

Front panel:

Figure 3.3-3: Front panel during training
Figure 3.3-4: Front panel during testing
Figure 3.3-5: Photograph of the experiment


As part of this project we monitored the odors of different fruits using the TGS 2620 and TGS 2602 and tabulated the output voltages corresponding to each fruit odor at training time; the type of fruit is then shown as output during the testing phase.

VII. CONCLUSION/RESULTS We successfully classified different fruits (banana and lemon) with the help of the k-NN algorithm in LabVIEW. We monitored the odors of the fruits using the TGS 2620 and TGS 2602, tabulated the output voltages corresponding to their odors at different stages (days) of training, and classified the fruits accordingly; the type of fruit is shown as output during the testing phase.

Fruit  | Sample Number | TGS 2620 Voltage (V) | TGS 2602 Voltage (V) | Fruit status LED
Banana | Stage-1       | 1.708                | 1.159                | Red/ON
Banana | Stage-2       | 1.716                | 1.161                | Red/ON
Banana | Stage-3       | 1.698                | 1.150                | Red/ON
Lemon  | Stage-1       | 1.52                 | 0.999                | Green/ON
Lemon  | Stage-2       | 1.49                 | 1.015                | Green/ON
Lemon  | Stage-3       | 1.45                 | 1.005                | Green/ON
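As an illustration, the tabulated training voltages can be fed to a k-NN classifier directly; this is a hypothetical sketch (the query readings below are invented, and the voltages are taken as tabulated):

```python
import math
from collections import Counter

# Training data: (TGS 2620 voltage, TGS 2602 voltage) -> fruit label.
training = [
    ((1.708, 1.159), "Banana"),
    ((1.716, 1.161), "Banana"),
    ((1.698, 1.150), "Banana"),
    ((1.52, 0.999),  "Lemon"),
    ((1.49, 1.015),  "Lemon"),
    ((1.45, 1.005),  "Lemon"),
]

def classify(sample, k=3):
    """Majority vote among the k nearest tabulated samples."""
    nearest = sorted(training, key=lambda t: math.dist(t[0], sample))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(classify((1.70, 1.15)))  # Banana
print(classify((1.50, 1.00)))  # Lemon
```

The two fruits are well separated in this two-dimensional voltage space (the banana readings cluster near 1.7 V on the TGS 2620, the lemon readings near 1.5 V), which is why even k = 3 with only six samples classifies cleanly.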

Instead of using a PC and LabVIEW, a microcontroller-based portable electronic nose could be implemented. By improving the algorithm and adding sensors, this project could also be used to check the freshness of food, or be made into an automatic system for detecting harmful gases.

VIII. FUTURE SCOPE The algorithm can be extended to Artificial Neural Networks (ANN) to implement the same project with better accuracy.

REFERENCES
[1] Kea-Tiong Tang, Shih-Wen Chiu, Chih-Heng-Ti Hsieh, Yao-Sheng Liang and Ssu-Chieh Liu.
[2] Persaud, K.; Dodd, G. H. Analysis of Discrimination Mechanisms in the Mammalian Olfactory System Using a Model Nose. Nature 1982, 299, 352-355.
[3] Alphus D. Wilson, Manuela Baietto.
[4] Pattern Classification, by R. O. Duda, P. E. Hart and D. G. Stork.
[5] Statistical Pattern Recognition, by K. Fukunaga.
[6] Nearest Neighbor Pattern Classification, by T. M. Cover and P. E. Hart.
[7] Handbook of Machine Olfaction: Electronic Nose Technology, by Tim C. Pearce, Susan S. Schiffman, H. Troy Nagle and Julian W. Gardner.
[8] LabVIEW-based Advanced Instrumentation Systems, by S. Sumathi and P. Surekha.
[9] Virtual Instrumentation Using LabVIEW, by Jovitha Jerome.
[10] http://www.scholarpedia.org/article/K-nearest_neighbor
[11] http://www.ni.com/
[12] http://www.howstuffworks.com/environmental/green science/pollution-sniffer.htm

SUBSCRIPTION RATE

Annual subscription, International Journal on Current Science & Technology: Rs. 5000/- (in Rupees) or US$ 100 (in US$).

Payable by Bank Transfer/RTGS to Account No. 32709506720, State Bank of India, Nirjuli branch (bank code 9535, IFS code SBIN0009535). Once paid, the postal address of the subscriber and a scanned copy of the bank receipt may be sent by e-mail to [email protected]

Authors' Instruction
Papers on the subjects below should be submitted in IEEE format only, by e-mail to [email protected] or [email protected], with a hard copy by post to:

The Editor, International Journal on Current Science & Technology, National Institute of Technology - Arunachal Pradesh, PO - Yupia, P.S. - Doimukh, Dist. - Papum Pare, Pin - 791112, Arunachal Pradesh.

Acceptance of papers is based on a peer-review process. Subjects: Technical Education, Chemical Sciences, Engineering Sciences, Environmental Sciences, Information and Communication Science & Technology (including Computer Sciences), Material Sciences, Mathematical Sciences (including Statistics), Medical Sciences, New Biology (including Biochemistry, Biophysics & Molecular Biology and Biotechnology) and Physical Sciences.


For Correspondence

National Institute of Technology Arunachal Pradesh (Estd. by MHRD, Govt. of India), PO-Yupia, P.S.-Doimukh, Dist-Papum Pare, Pin-791112, Arunachal Pradesh. T 0360-2284801, F 0360-2284972. E [email protected]; [email protected]

ISSN : 2320 5636