Lecture Notes in Electrical Engineering

Lecture Notes in Electrical Engineering Volume 348

Board of Series editors
Leopoldo Angrisani, Napoli, Italy
Marco Arteaga, Coyoacán, México
Samarjit Chakraborty, München, Germany
Jiming Chen, Hangzhou, P.R. China
Tan Kay Chen, Singapore, Singapore
Rüdiger Dillmann, Karlsruhe, Germany
Haibin Duan, Beijing, China
Gianluigi Ferrari, Parma, Italy
Manuel Ferre, Madrid, Spain
Sandra Hirche, München, Germany
Faryar Jabbari, Irvine, USA
Janusz Kacprzyk, Warsaw, Poland
Alaa Khamis, New Cairo City, Egypt
Torsten Kroeger, Stanford, USA
Tan Cher Ming, Singapore, Singapore
Wolfgang Minker, Ulm, Germany
Pradeep Misra, Dayton, USA
Sebastian Möller, Berlin, Germany
Subhas Mukhopadyay, Palmerston, New Zealand
Cun-Zheng Ning, Tempe, USA
Toyoaki Nishida, Sakyo-ku, Japan
Bijaya Ketan Panigrahi, New Delhi, India
Federica Pascucci, Roma, Italy
Tariq Samad, Minneapolis, USA
Gan Woon Seng, Nanyang Avenue, Singapore
Germano Veiga, Porto, Portugal
Haitao Wu, Beijing, China
Junjie James Zhang, Charlotte, USA

About this Series

“Lecture Notes in Electrical Engineering (LNEE)” is a book series which reports the latest research and developments in Electrical Engineering, namely:

• Communication, Networks, and Information Theory
• Computer Engineering
• Signal, Image, Speech and Information Processing
• Circuits and Systems
• Bioengineering

LNEE publishes authored monographs and contributed volumes which present cutting edge research information as well as new perspectives on classical fields, while maintaining Springer’s high standards of academic excellence. Also considered for publication are lecture materials, proceedings, and other related materials of exceptionally high quality and interest. The subject matter should be original and timely, reporting the latest research and developments in all areas of electrical engineering. The audience for the books in LNEE consists of advanced level students, researchers, and industry professionals working at the forefront of their fields. Much like Springer’s other Lecture Notes series, LNEE will be distributed through Springer’s print and electronic publishing channels.

More information about this series at http://www.springer.com/series/7818

Qing-An Zeng Editor

Wireless Communications, Networking and Applications Proceedings of WCNA 2014


Editor Qing-An Zeng Department of Computer Systems Technology North Carolina Agricultural and Technical State University Greensboro, NC USA

ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-81-322-2579-9 ISBN 978-81-322-2580-5 (eBook) DOI 10.1007/978-81-322-2580-5 Library of Congress Control Number: 2015944748 Springer New Delhi Heidelberg New York Dordrecht London © Springer India 2016 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper Springer (India) Pvt. Ltd. is part of Springer Science+Business Media (www.springer.com)

Preface

The 2014 International Conference on Wireless Communications, Networking and Applications (WCNA 2014) was held in Shenzhen, China, from December 27 to 28, 2014. WCNA 2014 aims to bring together researchers, engineers, and students and expose them to the latest research results and advanced research methods in the field. WCNA 2014 features several topics of interest, namely emerging technologies in wireless and mobile computing and applications, Internet of things, resource allocation and interference management, long-term evolution engineering, communication architecture, algorithms, modeling and evaluation, smart grid and cloud communication, and security, privacy, and trust. We received a large number of submissions from 18 different countries around the world. The WCNA program committee worked very hard to have all papers reviewed before the review deadline.

The keynote speaker, Prof. Qing-An Zeng, is an internationally recognized leading expert in the area of wireless and mobile networks, ad hoc and sensor networks, heterogeneous networks, handoff, resource management, and mobility management, who has demonstrated outstanding proficiency and achieved distinction in the profession.

We would like to express our sincere gratitude to all the members of the technical program committee and the organizers for their enthusiasm, time, and expertise. Our deepest thanks also go to the many volunteers and staff for the long hours and hard work they have generously given to WCNA 2014. We would also like to thank Aninda Bose and Kamiya Khatter, Editors of Springer (India) Private Limited, for their excellent support and advice. We are very grateful to the WCNA support personnel for their support in making this possible. Finally, we would like to thank all the authors, speakers, and participants of this conference for their contributions to WCNA 2014.


International Conference on Wireless Communications, Networking and Applications [WCNA 2014]

December 27–28, 2014
Shenzhen, Guangdong, China
http://www.wcna2014.org

WCNA 2014 Committee Organization

General Chair
Prof. Qing-An Zeng, North Carolina Agricultural and Technical State University, USA

Technical Program Committee Co-chairs
Dr. Bing He, Cisco Systems Inc., USA
Dr. Yun Wang, Bradley University, USA

Technical Program Committee
Dr. Masaki Bandai, Sophia University, Japan
Dr. Michael Bartolacci, Penn State Berks, USA
Dr. Dewayne Brown, North Carolina Agricultural and Technical State University, USA
Dr. Jian-Nong Cao, Hong Kong Polytechnic University, Hong Kong
Dr. Xiuzhen (Susan) Cheng, Washington University, USA
Dr. Yang Chi, Cisco Systems Inc., USA
Dr. Hiroshi Fujinoki, Southern Illinois University Edwardsville, USA
Dr. Jan Holub, Czech Technical University, Czech Republic
Dr. Vivek Jain, Robert Bosch Research and Technology Center, USA
Dr. Junghyun Jun, IIT-Ropar, India
Dr. Hailong Li, Cincinnati Children’s Hospital Medical Center, USA
Dr. Wei (Wayne) Li, Texas Southern University, USA
Dr. Kangshun Li, South China Agricultural University, China


Dr. Chong Li, Auburn University, USA
Dr. Meirong Liu, University of Oulu, Finland
Dr. Jiangbo Liu, Bradley University, USA
Dr. Don Liu, Louisiana Tech University, USA
Dr. Izabella Lokshina, SUNY Oneonta, USA
Dr. Wenjing Lou, Virginia Tech, USA
Dr. Zory Marantz, New York City College of Technology, USA
Dr. Peter Mueller, IBM Research, Switzerland
Dr. Talmai Oliveira, Philips Research, USA
Dr. Ehsan Sheybani, Virginia State University, USA
Dr. Lei Shu, Guangdong University of Petrochemical Technology, China
Dr. Yi Tang, iDirect Inc., USA
Dr. Alexander Uskov, Bradley University, USA
Dr. Wenye Wang, North Carolina State University, USA
Dr. Demin Wang, Microsoft Inc., USA
Dr. Haitang Wang, Amazon, USA
Dr. Takashi Watanabe, Osaka University, Japan
Dr. Qing Wei, DoCoMo Communications Laboratories Europe GmbH, Germany
Dr. Kui Wu, University of Victoria, Canada
Dr. Yanwei Wu, Western Oregon University, USA
Dr. Zhifeng Xiao, Penn State University at Erie, USA
Dr. Zhuoling Xiao, University of Oxford, UK
Dr. Jingyuan (Alex) Zhang, The University of Alabama, USA
Dr. Zhenjiang Zhang, Beijing Jiaotong University, China
Dr. Yanping (Angie) Zhang, Gonzaga University, USA
Dr. Yuan Zhang, University of Jinan, China
Dr. Bo Zhang, iDirect Inc., USA
Dr. Xinning Zhu, Beijing University of Posts and Telecommunications, China

Contents

Set 1 Part I

Emerging Topics in Wireless and Mobile Computing and Communications

Relay Precoding for Multiuser Cooperative Communication . . . . . . . . Yong Wang, Hao Wu, Liyang Tang and Yue Zhang

3

A Big Slot Scheduling Algorithm for the Reliable Delivery of Real-Time Data Packets in Wireless Sensor Networks . . . . . . . . . . . Hoon Oh and Md Abul Kalam Azad

13

Classification and Comparative Analysis of Resource Management Methods in Ad Hoc Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haitao Wang, Li Yan, Lihua Song and Hui Chen

27

A Secure Model Based on Hypergraph in Multilayer and Multi-domain Intelligent Optical Network . . . . . . . . . . . . . . . . . . Qiwu Wu and Jie Lu

35

Creating a Mobile Phone Short Message Platform Applied in the Continuing Nursing Information System . . . . . . . . . . . . . . . . . . Yujie Guo, Yuanpeng Zhang and Fangfang Zhao

41

Enhancing the Bit Error Rate of Visible Light Communication Systems Using Channel Estimation and Channel Coding . . . . . . . . . . . Tian Zhang, Shuxu Guo and Haipeng Chen

51

An Empirical Examination of Direct and Indirect Network Externalities of the Japanese Handheld Computer Industry: An Empirical Study of the Early Days . . . . . . . . . . . . . . . . . . . . . . . . Michiko Miyamoto

59


Multipath Performance Assessments for Future BeiDou BOC Signal . . . . . . . . . . 73
Di Wu, Wei Chen, Jing Li, Hongyang Lu and Jing Ji

Research of Incremental Dimensionality Reduction Based on Tensor Decomposition Algorithm . . . . . . . . . . 87
Xin Guo, Yang Xiang, Dongdong Lv, Shuhan Yuan, Yinfei Huang, Qi Zhang, Jisheng Wang and Dong Wang

Estimating a Transit Passenger Trip Origin–Destination Matrix Using Simplified Survey Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jangwon Jin

95

An LDPC Coded Adaptive Amplify-and-Forward Scheme Based on the EESM Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiang Chen and Mingxiang Xie

105

An Opportunistic Routing Protocol Based on Link Correlation for Wireless Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Huibin Wang, Yang Liu and Shufang Xu

113

Election of Guard Nodes to Detect Stealthy Attack in MANET . . . . . . R. Kathiroli and D. Arivudainambi

127

GA-LORD: Genetic Algorithm and LTPCL-Oriented Routing Protocol in Delay Tolerant Network . . . . . . . . . . . . . . . . . . . . . . . . . . Rahul Johari and Dhari A. Mahmood

141

Integrated Modeling Environment “Virtual Computational Network” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alexey G. Shishkin, Sergey V. Stepanov and Fedor S. Zaitsev

155

Comparative Study of Different Windowing Techniques on Coupling Modes in a Coaxial Bragg Structure. . . . . . . . . . . . . . . . . . . . . . . . . . Xueyong Ding, Yuan Wang and Lingling Wang

163

Joint Two-Dimensional DOA and Power Estimation Based on DML-ESPRIT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . Zheng Luo and Donghua Liu

171

Comparison Between Operational Capability of PDT and Tetra Technologies: A Summary . . . . . . . . . . . . . . . . . . . . . . . . . Pengfei Sun, Guanyuan Feng, Kai Guan and Yicheng Zhang

179

Development and Analysis of Police Digital Trunking Channel Technology of PDT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pengfei Sun, Run Tian, Hao Xue and Ke Wan

189


MR-LSH: An Efficient Sparsification Algorithm Based on Parallel Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianxi Peng and Zhiyuan Liu

201

Research of Badminton Data Acquisition System Based on Sensors Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Weijiao Song, Zhengang Wei and Bin Peng

215

The Direct Wave Purifying Based on WIFI Signal for Passive Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liubing Jiang, Tao Feng, Wenwu Zhang and Li Che

223

Application of Data Compression Technique in Congested Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lee Chin Kho, Sze Song Ngu, Yasuo Tan and Azman Osman Lim

235

On-Demand Carrier Sense and Hidden Node Interference-Aware Channel Reservation Scheme for Common Traffic in Wireless Mesh Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hassen Mogaibel, Mohamed Othman, Shamala Subramaniam and Nor Asilah Wati Abdul Hamid

251

A Novel Locating Algorithm Based on GPSR Protocol for Mobile Sink Nodes in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . Qiuhui Pan, Yuchen Li, Yibin Kang, Wenbing Hou and Mingfeng He

267

A Hybrid Routing Protocol Based on Load Balancing in Wireless Mesh Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chunfei Zhang and Zhiyi Fang

273

Link Budget and Simulation of Double-Wavelength Full-Duplex Free-Space Laser Communication Based on Modulating Retro-reflector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peng Zhang, Yan Lou, Tianshu Wang, Guowei Yang, Lizhong Zhang, Shoufeng Tong and Huilin Jiang

285

Based on the Wireless Transmission of Pneumatic Seeder Seeding Condition Monitor . . . . . . . . . . 295
Pengxiang Meng, Duanyang Geng, Jiazheng Wang, Yuhuan Li and Chunyan Jiang

Energy-Efficient Mobile Agent Communications for Maximizing Lifetime of Wireless Sensor Networks . . . . . . . . . . 305
Wei Hong, Zhanghui Liu, Yuzhong Chen and Wenzhong Guo


Novel Design and Performance Analysis of Broadband Dual Layer Circular Polarizer Based on Frequency Selective Surface for 60 GHZ Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Farman Ali Mangi, Shaoqiu Xiao, Imran Memon and Deedar Ali Jamro

319

Research on Embedded Vehicle Diagnosis Technology Based on Wireless Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yanhai Wang, Xiaoman Wang and Lianghua Ding

327

Internet of Things in Real-Life—A Great Understanding . . . . . . . . . . Mohammad Derawi and Hao Zhang

337

Urban Water Supply Network Monitoring and Management Platform Based on Wireless Sensor Network. . . . . . . . . . . . . . . . . . . . Liang Cai, Ronghe Wang, Jilong Sun, Shanshan Li and Yanlong Jing

351

Comparative Study on the Performance of TFRC and SCTP Over AODV in MANET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shahrudin Awang Nor and Omar Dakkak

363

Toward an Efficient Integral Multi-agent Sensor Network Generic Simulation System Design . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Filippou and D.A. Karras

371

A Conceptual Multi-agent Modeling of Dynamic Scheduling in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.M. Papazoglou and D.A. Karras

385

Ad Hoc High-Dynamic Routing Protocol Simulation and Research . . . . . . . . . . 399
Li Chen, Ruijuan Yang and Meirong Huang

A Stopping Criterion Based on Check-Sum Variations for Decoding Nonbinary LDPC Codes . . . . . . . . . . 409
Wen Fan, Haiyang Liu, Wei Yang, Junfeng Zhao and Aaron Z. Jia

On Data Transport Issues in Wireless Self-configuring Networks . . . . . . . . . . 419
Iurii Voitenko and Mohammad Derawi

A Dual-Band Cross-Coupled Bandpass Filter with CPW Trapezoid Resonator for WIFI Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . Quanqi Zhang, Shuhui Yan and Hongzhou Tan

439

A Method of Modulation Recognition Based on Symbol Rate Characteristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xintai Gan, Wei Cheng, Ruijuan Yang, Dengpeng Hu and Zejun Zhang

449


Part II


Internet of Things and Long Term Evolution Engineering

Adaptive Modulation and Coding with Channel Estimation/ Equalization for WiMAX Over Multipath Faded Channels . . . . . . . . . B. Siva Kumar Reddy and B. Lakshmi

459

Design and Implementation of Smart-Home Monitoring System with the Internet of Things Technology . . . . . . . . . . . . . . . . . . . . . . . Yuzhe Jiang, Xingcheng Liu and Shixing Lian

473

Internet of Things Laboratory Test Bed . . . . . . . . . . . . . . . . . . . . . . . Ruslan Kirichek and Andrey Koucheryavy

485

The Research of Reactive Spectrum Handoff Algorithm Based on Spectrum Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wanshu Yang, Yunxiao Zu and Bin Hou

495

Feature Analysis and Research on Radar Target Scattering Characteristics of Typical Individual Migratory Bird . . . . . . . . . . . . . Mingkun Wang, Chenxin Zhang and Xiaokuan Zhang

505

Research of IoT Application in Manned Spaceflight Launch Site . . . . . Wei Liu, Qiang Liu and Wensu Li

517

Planar UHF RFID Tag Antenna with Easy Method of Impedance Matching and Large Reading Range . . . . . . . . . . . . . . . . . . . . . . . . . Zhefeng Chen, Bo Xu and Jun Hu

529

Design of a Radio Frequency Identification (RFID)-Based Monitoring and Vehicle Management System . . . . . . . . . . . . . . . . . . . Lixing Wang, W.H. Ip and Jacky S.L. Ting

537

Research and Implementation of Traffic Sign Recognition System . . . . Yishan Gong, Wei Zhang, Zhijia Zhang and Yuanyuan Li

553

Performance Evaluation of LTE Systems in Multi-path Channels . . . . Srinivas Rao Vempati, Habibulla Khan and Tipparti Anil Kumar

561

Evaluate the Performance of Long Term Evolution–LTE Downlink Scheduling Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Patteti Krishna, Tipparti Anil Kumar and Kalithkar Kishan Rao

571

Design of a Low-Power Temperature Sensing Tag Based on RFID . . . Shihua Cao, Qihui Wang, Lidong Wang and Huixi Zhang

583

Distributed PT-Topk Query Algorithm for Uncertain Data in IOT . . . Yingchi Mao, Bicong Jia, Jiulong Wang and Qing Jie

593


Application of Interference Coordination Technology TD-LTE230 Power Wireless Communication System . . . . . . . . . . 607
Yintao Li, Qingsu He, Weiguo Yuan, Wei Song, Lihua Jiang, Jing Zhou, Rui Yang and Dan Su

Research on Terminal Power Control of Power Wireless Communication System Based on Narrow-Band Spectrum . . . . . . . . . . 621
Yintao Li, Qingsu He, Weiguo Yuan, Baoxian Guo, Rui Yang, Wei Song, Lihua Jiang and Dan Su

Electricity Information Collection System Design and Information Security Based on WiMAX Over 230 MHz Dedicated Frequency Band . . . . . . . . . . 633
Baoping Zou

Proposal for Spatial Monitoring Activities Using the Raspberry Pi and LF RFID Technology . . . . . . . . . . 641
Zoltán Balogh, Richard Bízik, Milan Turčáni and Štefan Koprda

Set 2 Part III

Resource Allocation and Interference Management

A Capability-Based Access Control Framework with Delegation Support . . . . . . . . . . 655
Haibo Shen

Performance Analysis and Prototype Experiment for Parallel RoF-MIMO System . . . . . . . . . . 669
Shozo Komaki, Sevia M. Idrus, Toshio Wakabayashi, A.K.M. Muzahidul Islam, Sabariah Baharun and Wan Haslina Hassan

Social-Based Routing for Vehicular Ad Hoc Networks in Fixed-Route Transportation Scenarios . . . . . . . . . . . . . . . . . . . . . . Junling Shi, Xingwei Wang and Min Huang

681

Energy Efficient Sensor Scheduling for Target Coverage in Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D. Arivudainambi, G. Sreekanth and S. Balaji

693

Distributed Closed-Loop QOSTBCs with PIC Detector Under Quasi-Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Linlin Zhang, Ming Gao, Nan Zhang and Peng Yang

707

Novel Slots Synthesis Design for the Harmonic Suppression of Vivaldi Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xingchen Guo, Jing Shen and Zhi Xu

717


The Research on DS-OFDM in Integrated Radar and Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xuchi Shen, Ruijuan Yang, Xiaobai Li and Meirong Huang

729

An Improved Dynamic Resource Allocation in Multi-users OFDM System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xin Guo, Shibing Zhang, Lili Guo and Yonghong Chen

741

The Algorithm for Mining Global Frequent Itemsets Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bo He

749

The Design and Fabrication of High Efficiency Low Beam Divergence 850 nm VCSELs for High Capacity Optical Transmission. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pengcheng Liu, Guojun Liu, Yuan Feng, Bintai He, Ning An, Chao Liu, Jie Yu, Zhipeng Wei and Yongqin Hao

757

A Tactic Research of Eliminating the Conflict of Using Electromagnetic Spectrum in the Battlefield Environment . . . . . . . . . . Jingxue Liu and Guangjun Zeng

765

Triangular Antenna with Novel Techniques for RCS Reduction Applications . . . . . . . . . . 775
Deedar Ali Jamro, Jingsong Hong, Mamadou Hady Bah, Farman Ali Mangi and Imran Memon

On Integrating Natural Computing Based Optimization with Channel Assignment Mining and Decision Making Towards Efficient Spectrum Reuse in Cellular Networks Modelled Through Multi-agent System Schemes . . . . . . . . . . 783
P.M. Papazoglou, D.A. Karras and R.C. Papademetriou

808 nm VCSELs Temperature Characteristic Study . . . . . . . . . . 799
Yuan Feng, Dawei Feng, Pengcheng Liu, Xiaohui Ma, Yongqin Hao, Guojun Liu, Changling Yan, Yong Wang, Zaijin Li and Yang Li

Part IV

Communication Architecture, Algorithms, Modeling and Evaluation

Steady Signal-Based Fractal Method of Specific Communications Emitter Sources Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhiling Tang and Simin Li

809

A Method for Optimization Design of Cognitive Radar Waveform of Non-Gauss Target Classification. . . . . . . . . . . . . . . . . . . . . . . . . . . Bo Zhou, Jing Zhao, Huanyao Dai and Bin Jiao

821


A Novel Dynamic Task Scheduling Algorithm Based on Improved Genetic Algorithm in Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . Juntao Ma, Weitao Li, Tian Fu, Lili Yan and Guojie Hu

829

Real IF Signal Quality Analysis Using General GNSS Software Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li-ping Zhu, Jie Zhen, Xiaobing Zhao and Mingtao Zhou

837

Time Delay Estimation of Ultra-wideband Signals by Calculation of the Cross-Ambiguity Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roman A. Ershov, Oleg A. Morozov and Vladimir R. Fidelman

851

A Study of Route Optimization Support in Distributed Mobility Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ye Wang, Yanming Cheng and Li Yu

861

Data Integrity Checking Protocol Based on Secure Multiparty Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Runhua Shi, Yechi Zhang, Hong Zhong, Jie Cui and Shun Zhang

873

A 60-GHz Millimeter-Wave CMOS SIR Pseudo-interdigital Band-Pass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wencheng Lai, Jhinfang Huang and Pigi Yang

883

Frequency-Domain Turbo Equalization with Iterative Impulsive Noise Mitigation for Single-Carrier Power-Line Communications . . . . Ying Liu, Qinghua Guo, Sheng Tong, Jun Tong, Jiangtao Xi and Yanguang Yu

891

Performance of Multimodal Biometric Systems at Score Level Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Harbi AlMahafzah and Ma’en Zaid AlRawashdeh

903

The Study of Fault Diagnosis for Numerical Controlled Machine Based on Improved Case-Based Reasoning Model. . . . . . . . . . . . . . . . Huijuan Hao, Maoli Wang and Juan Li

915

A Design of Fed-Divider for Slotted Ridge Waveguide Antenna Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xuequan Yan and Yuan Yuan

927

Positive Opinion Influential Node Set Selection for Social Networks: Considering Both Positive and Negative Relationships. . . . . Jing (Selena) He, Harneet Kaur and Manasvi Talluri

935

Outage Probability of Hybrid Duplex Relaying in Cooperative Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xianyi Rui, Pan Xu and Fei Xu

949

Design and Implementation of Automobile Leasing Intelligent Management System Based on Beidou Compass Satellite Navigation . . . . . . . . . . 957
Baoming Shan, Sailong Ji and Qilei Xu

An IVR Service System Based on Adjustable Broadcast Sequence Speech Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shuhao Zhang, Zhiyi Fang and Hongyu Sun

971

Summary Research on Energy-Efficient Technology for Multi-core Computing System Based on Scientometrics . . . . . . . . . Xingwang Wang

983

Sparsity Reconstruction Error-Based Discriminant Analysis Dimensionality Reduction Algorithm . . . . . . . . . . 991
Mingming Qi, Yanqiu Zhang, Dongdong Lv, Cheng Luo, Shuhan Yuan and Hai Lu

Performance Analysis of CFO Estimation for OFDM Systems with Low-Precision Quantization . . . . . . . . . . 1005
Dandan Li, Xingzhong Xiong and Haifeng Wang

Analysis of Sun Outages Influence on GEO to LEO Communication . . . . . . . . . . 1017
Yan Lou, Yi Wu Zhao, Chunyi Chen, Shoufeng Tong and Cheng Han

60-GHz UWB System Performance Analysis for Gigabit M2M Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Suiyan Geng, Linlin Cheng, Xing Li and Xiongwen Zhao

1027

Optimized Context Weighting Based on the Least Square Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Min Chen, Jianhua Chen, Yan Zhang and Meng Tang

1037

General Theory of the Application of Multistep Methods to Calculation of the Energy of Signals. . . . . . . . . . . . . . . . . . . . . . . Galina Mehdiyeva, Vagif Ibrahimov and Mehriban Imanova

1047

Analysis of Influence of Attitude Vibration of Aircraft on the Target Detection Performance . . . . . . . . . . . . . . . . . . . . . . . . Xiufang Wang, Jinye Peng, Bin Chen and Wei Qi

1057

Corner Detection-Based Image Feature Extraction and Description with Application to Target Tracking . . . . . . . . . . . . Lejun Gong, Jiacheng Feng and Ronggen Yang

1069


Part V


Security, Privacy, and Trust

Anonymous Entity Authentication-Mechanisms Based on Signatures Using a Group Public Key . . . . . . . . . . . . . . . . . . . . . Zhaohua Long, Jie Lu and Tangjie Hou

1079

A Comparative Study of Encryption Algorithms in Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zonghu Xi, Li Li, Guozhen Shi and Shuaibing Wang

1087

Survey on Privacy Preserving for Intelligent Business Recommendation in Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yong Xu, Ming Li, Xiaomei Hu, Yougang Wang and Hui Zhang

1099

The Research on PGP Encrypted Email Recovery . . . . . . . . . . 1107
Qingbing Ji, Lijun Zhang and Fei Yu

An Improved Lightweight Pseudonym Identity-Based Authentication Scheme on Multi-server Environment . . . . . . . . . . 1115
Hao Lin, Fengtong Wen and Chunxia Du

Abnormal Situation Detection for Mobile Devices: Feasible Implementation of a Mobile Framework to Detect Abnormal Situations . . . . . . . . . . 1127
German Lancioni and Patricio Maller

Virtual Machine Security Monitoring Method Based on Physical Memory Analysis . . . . . . . . . . 1137
Shumian Yang, Lianhai Wang, Liang Ge, Shuhui Zhang and Guangqi Liu

Password Recovery for WPA/WPA2-PSK Based on Parallel Random Search with GPU . . . . . . . . . . 1149
Liang Ge, Lianhai Wang, Lijuan Xu and Shumian Yang

Part VI

Routing, Position Management and Network Topologies

A Security Mechanism for Detecting Nonfeasance on Inter-domain Routing Forwarding . . . . . . . . . . . . . . . . . . . . . . . . . . Chen Zhao, Hanbing Yan and Wang Tang

1163

A Novel Routing Scheme in Three-Dimensional Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bang Zhang, Xingwei Wang and Min Huang

1175

A Model of Cloud Computing-Based TDOA Location System . . . . . . Bohao Huang, Shuo Gu and Wei Xia

1187

Position-Based Unicast Routing Protocols for Mobile Ad Hoc Networks Using the Concept of Blacklisting . . . . . . . . . . 1195
Muhammad Aman, Asfandyar Khan, Azween Abdullah and Israr Ullah

Cloud Computing: Models, Services, Utility, Advantages, Security Issues, and Prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ikechukwu Nwobodo

1207

Optimization of Logistics Distribution Network Model Based on Random Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Feng Yu, Wei Liu, Liang Bai and Gang Li

1223

Data Forwarding with Selectively Partial Flooding in Opportunistic Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lijun Tang and Wei Wu

1233

Design of Flame End Points Detection System for Refuse Incineration Based on ARM and DSP . . . . . . . . . . . . . . . . . . . . . . . Fengying Cui, Sailong Ji and Qilei Xu

1243

Routing Protocols in Delay Tolerant Networks: Application-Oriented Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rahul Johari and Sakshi Dhama

1255

Survey of Indoor Positioning Systems Based on Ultra-wideband (UWB) Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Guowei Shi and Ying Ming

1269

Nonlinear Attitude Stabilization and Tracking Control Techniques for an Autonomous Hexa-Rotor Vehicle . . . . . . . . . . . . . Hyeon Kim and Deok Jin Lee

1279

The Design and Implementation of Occupational Health Survey System Based on Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . Honger Tian, Lili Cao, Yongguo Zhan and Liuliu Liu

1293

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1307

About the Editor

Dr. Qing-An Zeng received his MS and Ph.D. degrees, both in Electrical Engineering, from Shizuoka University in Japan. In 1997, he joined NEC Corporation, Japan, where he was engaged in the R&D of 3G systems. In 1999, he joined the University of Cincinnati in Ohio and was a faculty member in the Department of Computer Science. In 2010, Dr. Zeng joined North Carolina Agricultural and Technical State University as a faculty member in the Department of Computer Systems Technology. His research interests include Wireless and Mobile Networks, Handoff and Resource Management, Mobility Management, Heterogeneous Networks, Mobile Ad Hoc and Sensor Networks, Wireless Internet, QoS Issues, Security Issues, UWB, NoC, PLC, Smart Grid Communications, Vehicle Communications, Modeling and Performance Analysis, and Queuing Theory. Dr. Zeng has more than 140 publications, including books, book chapters, refereed journal articles, and international conference proceedings papers. He has also served on numerous professional committees. In October 2014, Dr. Zeng published his book entitled “Introduction to Wireless and Mobile Systems, 4th edition” with Cengage Learning. The book has been adopted by many universities around the world and has been translated into Chinese and Korean. Dr. Zeng is a Senior Member of IEEE.


Part I

Emerging Topics in Wireless and Mobile Computing and Communications

Relay Precoding for Multiuser Cooperative Communication Yong Wang, Hao Wu, Liyang Tang and Yue Zhang

Abstract In this paper, we study the design of precoding matrices for the multidirectional two-way relaying channels where multiple pairs of sources wish to exchange information with their partners. Precoding at each user and the relay is carefully constructed to ensure that the signals from the same user pair are grouped together and cross-pair interference can be canceled. Then, analytical results are developed for the proposed protocol. The numerical results are also provided to demonstrate the performance of our proposed scheme. To improve the diversity gain of the proposed scheme, an optimal scheme is also presented.

Keywords Multiple-input multiple-output · Precoding · Relay · Signal space alignment

1 Introduction

The two-way relaying channel might be one of the most interesting communication scenarios and has been studied extensively [1–3]. A typical two-way relaying channel consists of one relay and two sources which exchange information with each other. Such a communication scenario is important not just because it gives the best platform to show the benefit of network coding, but also because it is a common building block of wireless communications. In next-generation communication systems, such as LTE-Advanced systems, it is possible for users to communicate with each other via relays instead of the base stations in the future cellular systems. Due to the low cost of installation, enough relays could be deployed to satisfy the high-speed requirements of the users, which is a difficult problem for base stations. Therefore, we focus on the multi-way relay channel to introduce a realistic spatial multiplexing technique for the future wireless communication system, which shows the advantages of our work.

Two-way relay channels have been generalized to different systems such as bidirectional multi-pair message exchange and multidirectional multi-pair exchange [4–7]. For multi-pair multi-way relay channels where more than two users communicate via a relay in a bidirectional manner, multiple interfering clusters of users, which communicate simultaneously via a relay, were considered in [4], where users within the same cluster exchange messages among themselves. Also, upper and lower bounds for the capacity were investigated with different relaying schemes. While in Gunduz et al. [5] each user in a cluster has a single common message intended for all the other users in the same cluster, Lee et al. [6, 7] take into account more general settings such as multiple independent messages per user. In particular, a new network information flow called the multiple-input multiple-output (MIMO) Y channel was proposed by Lee et al. [6], where each of the three users intends to convey independent unicast messages to the other users via an intermediate relay while receiving two independent messages from the other two users.

As the number of users that wish to communicate in the wireless medium increases, interference becomes a big issue. In order to successfully deal with the interference problem, various signaling methods have been studied. The idea of interference alignment was introduced for MIMO X channels [8, 9] or K-user interference channels. These works have inspired much hope as the savior to the interference problem by aligning multiple interfering signals at each receiver in order to reduce the effective interference. By appropriately exploiting the concepts of interference alignment and network coding, signal space alignment for network coding (SSA-NC) was proposed by Lee et al. [6] to efficiently deal with multiple interference signals in three-user MIMO Y channels. The core concept of SSA-NC is that each user pair that wants to exchange messages cooperatively constructs the transmit beamforming vectors so that the two pair signals for network coding are aligned within the same spatial signal.

In this paper, we propose an encoding and decoding strategy which involves a signal space alignment for a coding message in the multiple access channel (MAC) phase. We then carefully design the precoding matrix at the relay to cancel cross-pair interference in the broadcasting (BC) phase based on Amplify-and-Forward (AF) strategies. Compared to conventional two-way relaying channels, such a scenario is more challenging due to two types of interference: intra-pair interference and inter-pair interference, where the messages from one pair will cause strong interference to other pairs. The key idea of the signal space alignment is that all users cooperatively design precoding vectors for transmitting a message so that the relay can receive a proper message. During the BC phase, different combined schemes with orthogonal projection decoding enable all users to decode the messages from the relay.


2 System Model

In the following mathematical exposition, superscripts $(\cdot)^T$, $(\cdot)^*$, and $(\cdot)^H$ denote transpose, complex conjugate, and conjugate-transpose, respectively. We consider a MIMO channel with $K$ pairs of source nodes and one relay in this section, as shown in Fig. 1. In this channel, the $2K$ users have $M$ antennas each and the relay has $N$ antennas. The users want to exchange messages within each pair with the help of a single relay terminal. User $i$ wants to send message $W_i$ to his partner on the network and intends to decode all other users' messages on the network except his own, i.e., $\{\hat{W}_1, \hat{W}_2, \ldots, \hat{W}_K\} \setminus \hat{W}_i$.

In the first time slot, which is called the MAC phase, all users simultaneously transmit their signals to the relay, which is described by

$$y_r = \sum_{i=1}^{K} \left( H_{r,i}\, x_i + H_{r,K+i}\, x_{K+i} \right) + N_r \qquad (1)$$

where $H_{r,i}$ represents the $N \times M_i$ channel matrix from user $i$ to the relay $r$, $x_i \in \mathbb{C}^{M_i}$ denotes the transmit vector at user $i$, and $N_r \in \mathbb{C}^{N}$ denotes an additive white Gaussian noise (AWGN) vector. Each user has an average power constraint $E\left[\mathrm{Tr}\left(x_i x_i^H\right)\right] \le \mathrm{SNR}$. The channel is assumed to be quasi-static and each entry of the channel matrix is an independently and identically distributed (i.i.d.) zero-mean complex Gaussian random variable with unit variance, i.e., $\mathcal{CN}(0, 1)$.
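The MAC-phase model in Eq. (1) can be exercised in a few lines of NumPy. The following is a minimal sketch, assuming illustrative dimensions (K = 3 pairs, M = 2 antennas per user, N = 3 relay antennas) and random Gaussian transmit vectors; none of these values are prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 3, 2, 3                  # assumed example sizes: K user pairs, M antennas/user, N relay antennas

def cn(*shape):
    """i.i.d. CN(0,1) samples (unit-variance complex Gaussian)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# channel matrices H_{r,i} (user i -> relay) for all 2K users
H = [cn(N, M) for _ in range(2 * K)]

# transmit vectors x_i (random CN(0,1) entries, for illustration only)
x = [cn(M, 1) for _ in range(2 * K)]

# Eq. (1): y_r = sum_i (H_{r,i} x_i + H_{r,K+i} x_{K+i}) + N_r
noise = cn(N, 1)
y_r = sum(H[i] @ x[i] + H[K + i] @ x[K + i] for i in range(K)) + noise
print(y_r.shape)                   # (N, 1)
```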

Fig. 1 System model of multiuser communication scenario (user i and user i + K pairs, i = 1, …, K, communicating via the relay; MAC phase and BC phase)

After receiving, the relay generates new transmit signals and broadcasts them to all users in what is known as the BC phase. The received signal vector at user $i$ is given by

$$y_i = H_{i,r}\, x_r + N_i \qquad (2)$$

where $H_{i,r}$ denotes the $M_i \times N$ channel matrix from the relay $r$ to user $i$, $x_r \in \mathbb{C}^{N}$ is the transmit vector at the relay, and $N_i \in \mathbb{C}^{M}$ denotes the AWGN vector. The transmit signal at the relay is subject to the average power constraint $E\left[\mathrm{Tr}\left(x_r x_r^H\right)\right] \le \mathrm{SNR}$.

3 Signaling Transmission for Amplify-Forward Relay

During the multiple access phase, all $2K$ source nodes transmit simultaneously. We carefully design the precoder vectors at each source, so that the two messages from the same pair are aligned with each other at the relay. During the broadcasting phase, the relay broadcasts the $K$ aligned mixtures, where the precoding matrix at the relay is carefully designed to cancel cross-pair interference.

During the first time slot, all source nodes transmit messages to the relay. The key idea of the proposed protocol is still to ensure that the two messages from the same pair align with each other [10],

$$\mathrm{span}\left(H_{r,i}\, v_{K+i,i}\right) = \mathrm{span}\left(H_{r,K+i}\, v_{i,K+i}\right), \quad i \in \{1, \ldots, K\},$$

where $v$ is the transmit precoding vector. All users carefully choose the precoding vectors, i.e., each user designs its beamforming direction so that the two desired signals for network coding are aligned within the same spatial dimension:

$$u_i = H_{r,i}\, v_{K+i,i} = H_{r,K+i}\, v_{i,K+i} \qquad (3)$$

As long as $N < 2M$, such a $u_i$ always exists. User $i$ sends the message $s_{K+i,i}$ for his partner to the relay along the beamforming vector $v_{K+i,i}$. As a result, the received signal at the relay is as follows:

$$Y_r = \sum_{i=1}^{K} \left( H_{r,i}\, v_{K+i,i}\, s_{K+i,i} + H_{r,K+i}\, v_{i,K+i}\, s_{i,K+i} \right) + N_r
= \left[\, u_1\ u_2\ \cdots\ u_K \,\right]
\begin{bmatrix} s_{1,K+1} + s_{K+1,1} \\ s_{2,K+2} + s_{K+2,2} \\ \vdots \\ s_{K,2K} + s_{2K,K} \end{bmatrix} + N_r
= U_r\, s^r + N_r \qquad (4)$$

where $s^r = \left[\, s_1^r\ \cdots\ s_K^r \,\right]^T$.
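The paper does not spell out how the aligned precoders satisfying Eq. (3) are computed. One standard way, sketched below under that assumption, is to take any vector in the null space of the stacked matrix $[H_{r,i},\ -H_{r,K+i}]$ and split it into the pair $(v_{K+i,i}, v_{i,K+i})$; the dimensions used are illustrative only.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
N, M = 3, 2                                    # needs N < 2M so the aligned direction exists

def cn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_ri, H_rKi = cn(N, M), cn(N, M)               # channels of user i and its partner K+i to the relay

# One way to satisfy Eq. (3): any vector in the null space of [H_{r,i}, -H_{r,K+i}]
# splits into a precoder pair (v_{K+i,i}, v_{i,K+i}) with identical images at the relay.
z = null_space(np.hstack([H_ri, -H_rKi]))[:, 0]
v_Ki_i, v_i_Ki = z[:M].reshape(M, 1), z[M:].reshape(M, 1)

u_i = H_ri @ v_Ki_i                            # aligned direction u_i at the relay
assert np.allclose(u_i, H_rKi @ v_i_Ki)        # Eq. (3) holds up to numerical precision
```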

We design the precoding matrices at the relay so that the desired messages will be obtained and interference will be eliminated. Specifically, for the $i$th user the precoding matrix $Q_i$ can be designed as

$$Q_i = I_N - P_i \left( P_i^H P_i \right)^{-1} P_i^H, \quad i \in \{1, 2, \ldots, K\} \qquad (5)$$

where $P_i$ is the $N \times (K-1)$ submatrix of the channel matrix $U_r = \left[\, u_1 \cdots u_i \cdots u_K \,\right]$ obtained by removing its $i$th column. Apparently $Q_i$ is an orthogonal projection matrix generated from $P_i$, which is a submatrix of the channel matrix. Due to the definition of an orthogonal projection matrix, the null space dimension of $Q_i$ is $(K-1)$. Therefore, the number of relay antennas must satisfy the constraint $N \ge K$, which ensures that the dimension of the signal space of $Q_i$ is larger than or equal to one. Prior to transmission, the relay uses $Q_i$ to suppress the inter-pair interference as follows:

$$Q_i Y_r = Q_i U_r s^r + Q_i N_r = Q_i u_i s_i^r + Q_i N_r \qquad (6)$$

where the fact that $Q_i u_j = 0$ for $i \ne j$ has been used.
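A direct way to check the construction in Eqs. (5)-(6) numerically is to build $Q_i$ from Eq. (5) and verify that it annihilates every aligned direction $u_j$ with $j \ne i$. In the sketch below the matrix U_r is a random stand-in for the aligned directions, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
K = N = 3                                       # the N = K case discussed in the text

def cn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

U_r = cn(N, K)                                  # columns u_1 ... u_K (illustrative stand-ins)

def projector(i):
    """Eq. (5): Q_i = I_N - P_i (P_i^H P_i)^{-1} P_i^H, with P_i = U_r without column i."""
    P = np.delete(U_r, i, axis=1)
    return np.eye(N) - P @ np.linalg.inv(P.conj().T @ P) @ P.conj().T

Q = [projector(i) for i in range(K)]

# Q_i annihilates every u_j with j != i, so only pair i's aligned mixture survives Q_i Y_r
for i in range(K):
    for j in range(K):
        if j != i:
            assert np.allclose(Q[i] @ U_r[:, [j]], 0, atol=1e-10)
```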

During the second time slot, the mixed messages are broadcast to all the users,

$$X_r = \sum_{j=1}^{K} Q_j Y_r \qquad (7)$$

To simplify the computational complexity at each node, we make the precoding vector and the detection vector of each node the same. As a result, user $i$ receives

$$\left( v_{K+i,i} \right)^H Y_i = \left( v_{K+i,i} \right)^H H_{r,i}^H X_r + N_i
= \sum_{j=1}^{K} \left( u_i \right)^H Q_j Y_r + N_i
= \left( u_i \right)^H Q_i u_i s_i^r + \left( u_i \right)^H Q_i N_r + N_i \qquad (8)$$

where $N_i$ is the additive Gaussian noise at user $i$. Due to the conjugate symmetry of the Hermitian matrix $Q_i$, it can be easily shown that $\left( u_i \right)^H Q_j = 0$ for $i \ne j$. Thus, the desired message $s_i^r$ can be detected by user $i$. Similar to physical-layer network coding, each destination can first subtract its own information from the observation and then detect the message from its partner. In a similar way, the other users can obtain their desired messages using the proposed signaling method. Since the relay has access to the global channel information, the required coefficients are calculated at the relay and transmitted to the users.
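The sketch below chains Eqs. (1)-(8) end to end for a noise-free toy example, so that the exact recovery of the partner's symbol after self-interference subtraction becomes visible. It assumes a reciprocal downlink $H_{i,r} = H_{r,i}^H$ (as implied by the $(u_i)^H$ step in Eq. (8)) and uses the simplified indexing s[i] for the message that user i sends to its partner; both are assumptions of this illustration, not statements from the paper.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
K, M, N = 3, 2, 3

def cn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [cn(N, M) for _ in range(2 * K)]                 # uplink channels H_{r,i}
s = cn(2 * K)                                        # s[i]: symbol user i sends to its partner

# MAC phase: aligned precoders per pair (see the sketch after Eq. (3))
V, U = [None] * (2 * K), []
for i in range(K):
    z = null_space(np.hstack([H[i], -H[K + i]]))[:, 0]
    V[i], V[K + i] = z[:M].reshape(M, 1), z[M:].reshape(M, 1)
    U.append(H[i] @ V[i])
U_r = np.hstack(U)
Y_r = sum(H[i] @ V[i] * s[i] + H[K + i] @ V[K + i] * s[K + i] for i in range(K))   # Eq. (4), noise-free

# Relay precoding and broadcast, Eqs. (5)-(7)
def projector(i):
    P = np.delete(U_r, i, axis=1)
    return np.eye(N) - P @ np.linalg.inv(P.conj().T @ P) @ P.conj().T
X_r = sum(projector(j) @ Y_r for j in range(K))      # Eq. (7)

# Detection at user i, Eq. (8), assuming the reciprocal downlink H_{i,r} = H_{r,i}^H,
# followed by self-interference subtraction as in physical-layer network coding
i = 0
r_i = (V[i].conj().T @ H[i].conj().T @ X_r).item()   # = u_i^H X_r = u_i^H Q_i u_i (s_i + s_partner)
gain = (U[i].conj().T @ projector(i) @ U[i]).item()  # coefficient computed at the relay and signalled
s_hat = r_i / gain - s[i]                            # partner's symbol estimate
assert np.isclose(s_hat, s[K + i])
```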


4 Selection Criterion for Diversity Gain

Previously we focused on the case with $N = K$, where $Q_i$ can be viewed as an orthogonal projection matrix of the $N \times (K-1)$ matrix $P_i$. The dimension of the non-null space of the precoding matrix $Q_i$ is 1. Since the precoding matrix is idempotent, i.e., $Q_i Q_i = Q_i$, its only nonzero eigenvalue is one. Therefore, the eigenvalue decomposition of $Q_i$ can be written as $Q_i = q_i q_i^H$, where $q_i$ is the eigenvector of the matrix corresponding to the eigenvalue 1. It is easy to show that the dimension of the null space of $P_i$ is 1 and that $q_i$ is taken from this null space. When the number of relay antennas is $N > K$, the dimension of the null space of $P_i$ becomes $(N-K+1)$. By using the basis vectors of such a subspace, denoted as $q_{i,k}$, $1 \le k \le N-K+1$, we can construct $(N-K+1)$ projection matrices, denoted as

$$Q_{i,k} = q_{i,k}\, q_{i,k}^H, \quad k = 1, \ldots, N-K+1 \qquad (9)$$

To improve the system transmission performance, it is necessary to choose an appropriate precoding matrix $\tilde{Q}_i$ among the $Q_{i,k}$ for the $i$th pair. Since the system performance is largely impacted by the worst user's performance in each pair, the precoding matrix can be selected by using the following rule:

$$\tilde{Q}_i = \arg\max_{\tilde{Q} \in R} \min\left( \mathrm{SNR}_{i,k},\ \mathrm{SNR}_{K+i,k} \right) \qquad (10)$$

where $R = \left\{ Q_{i,1}, \ldots, Q_{i,N-K+1} \right\}$ is the set of candidate precoding matrices and $1 \le k \le N-K+1$. The use of the selection criterion in Eq. (10) ensures that the diversity gain $(N-K+1)$ is achievable for all users. Such a result shows that extra diversity gain can be achieved by increasing the number of relay antennas.
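For $N > K$, the candidate projectors of Eq. (9) come from the left null space of $P_i$, and Eq. (10) keeps the one for which the weaker user of the pair sees the largest SNR. The sketch below illustrates this max-min selection; the per-user SNR expressions are not reproduced in this excerpt, so a simple received-power proxy through an assumed effective downlink direction stands in for them.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(4)
K, N = 3, 5                                     # N > K, so each pair has N - K + 1 candidates

def cn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

U_r = cn(N, K)                                  # aligned directions u_1 ... u_K (illustrative)
i = 0
P_i = np.delete(U_r, i, axis=1)
basis = null_space(P_i.conj().T)                # q_{i,1} ... q_{i,N-K+1}: left null space of P_i
candidates = [basis[:, [k]] @ basis[:, [k]].conj().T for k in range(basis.shape[1])]   # Eq. (9)

# Illustrative SNR proxies for the two users of pair i (the exact expressions are not
# given in this excerpt): |h^H Q u_i|^2 through an assumed effective downlink direction h.
h_i, h_Ki = cn(N, 1), cn(N, 1)
def pair_snrs(Q):
    sig = Q @ U_r[:, [i]]
    return (abs((h_i.conj().T @ sig).item()) ** 2,
            abs((h_Ki.conj().T @ sig).item()) ** 2)

# Eq. (10): max-min rule over the candidate set
best_k = max(range(len(candidates)), key=lambda k: min(pair_snrs(candidates[k])))
print("selected projector index:", best_k)
```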

5 Simulation Results

In this section, we first provide the performance of the proposed scheme through simulations. From the simulation results, we demonstrate the effect of the antenna configuration conditions. For the sake of notational convenience, we denote by $\{N \mid M_1, M_2, \ldots, M_{2K}\}$ the $2K$-user MIMO channel in which the relay has $N$ antennas and user $i$ employs $M_i$ transmit antennas. It is assumed that each user allocates its power equally to each stream.

In Fig. 2, the sum rate of the proposed scheme is shown as a function of SNR, where the number of user pairs is set as K = 3, 4, 5. As shown in the figure, the proposed scheme can achieve a higher sum rate than the PNC scheme.

Fig. 2 The performance for sum rate between different schemes (sum rate in bit/s/Hz versus SNR in dB; curves: proposed AF scheme and conventional PNC for K = 3, 4, 5)

In addition, the slopes of the sum-rate curves increase faster than those of the comparable scheme as K increases, which means that the gap between the two schemes is further enlarged. When the number of user pairs increases, the time-sharing-based approach requires more time slots to accomplish the information exchange among the multiple pairs, whereas the proposed transmission protocol still requires only two time slots. As a result, the proposed scheme achieves a higher throughput gain and more robust performance than the PNC scheme as K increases.

Then we compare the system performance of the proposed scheme with that of other conventional schemes: TDMA and MU-MIMO. In the MIMO channel, the TDMA scheme requires three orthogonal time slots for exchanging the messages with each other via a relay. In contrast to the TDMA scheme, the MU-MIMO scheme demands two orthogonal time slots to interchange the messages with each other via a relay on the MIMO channel. It is shown in Fig. 3 that the proposed scheme exhibits superior performance compared to the other schemes under the same condition {6|(4, 4, 4, 4)}. The performance improvement of the proposed signaling scheme mainly comes from efficient utilization of the signal space, so that interference signals do not affect the network. At the same time, we can see that the modulo-M operation incurs a performance loss, while the degradation of the proposed network coding is due to the reduced minimum DASL. This loss becomes larger with increasing modulation size because more levels are required for ambiguity-free detection.

Fig. 3 The system performance of different schemes (BER versus SNR in dB for the TDMA, MU-MIMO, and proposed schemes)

6 Conclusions

In this paper, we propose a network coding protocol for multi-pair two-way relay channels. Meanwhile, the precoding matrices have been carefully designed to remove inter-pair interference in the system. Analytical results have been developed for the proposed transmission protocol, and performance comparisons between PNC and the proposed scheme are also provided. Both the analytical and numerical results show that the proposed protocol is spectrally efficient and can achieve higher performance gains. To enlarge the diversity gain of the proposed scheme, an optimization is also studied.

Acknowledgments This project was supported by National Natural Science Foundation of China (No. 61101147), The National Basic Research Program of China (973 Program, No. 2012CB316100), Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110203120004), Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2014JZ018), Science Research Plan in Shaanxi Province of China (No. 2013K06-15), Open Project of Key Laboratory of Wireless Sensor Network & Communication, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences (No. 2012004), Xidian-Ningbo Information Technology Institute seed fund, The Fundamental Research Funds for the Central Universities (No. K5051301006), and The 111 Project (No. B08038).


References

1. Wang, R., Tao, M.: Joint source and relay precoding designs for MIMO two-way relaying based on MSE criterion. IEEE Trans. Signal Process. 60(3), 1352–1365 (2012)
2. Xu, Y., Xia, X., Chen, Y.: Symbol error rate of two-way decode-and-forward relaying with co-channel interference. In: IEEE International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), pp. 138–143, London (2013)
3. Narayanan, K., Wilson, M.P., Sprintson, A.: Joint physical layer coding and network coding for bi-directional relaying. In: Proceedings of 45th Allerton Conference on Communication, Control, and Computer. Monticello, USA (2007)
4. Lee, N., Chun, J.: Signal space alignment for an encryption message and successive network code decoding on the MIMO K-way relay channel. In: 2011 IEEE International Conference on Communications (ICC), pp. 1–5. IEEE (2011)
5. Gunduz, D., Yener, A., Goldsmith, A., Poor, H.V.: The multiway relay channel. IEEE Trans. Inf. Theory 59(1), 51–63 (2013)
6. Lee, N., Lim, J.-B., Chun, H.: Degrees of freedom of the MIMO Y channel: signal space alignment for network coding. IEEE Trans. Inf. Theory 56(7), 3332–3342 (2010)
7. Lee, N., Heath, R.W.: Degrees of freedom of completely-connected multi-way interference networks. In: Proceedings of IEEE International Conference on ISIT, pp. 1571–1575 (2013)
8. Jiang, C., Cimini, L.: Energy-efficient transmission for MIMO interference channels. IEEE Trans. Wireless Commun. 12(6), 2988–2999 (2013)
9. Yetis, C., Gou, T., Jafar, S., et al.: On feasibility of interference alignment in MIMO interference networks. IEEE Trans. Signal Process. 58(9), 4771–4782 (2010)
10. Wang, Y., Li, H.: Transmission scheme for a K-way relay multiple-input multiple-output channel. IET Commun. 6(15), 2442–2447 (2012)

A Big Slot Scheduling Algorithm for the Reliable Delivery of Real-Time Data Packets in Wireless Sensor Networks Hoon Oh and Md Abul Kalam Azad

Abstract In wireless sensor networks (WSNs), guaranteeing a reliable data transmission over a time-varying wireless channel is a challenging task. The existing TDMA-based MAC protocols assign time slots to the nodes individually for the transmission of data packets safely. However, these protocols may not be suitable for the industrial WSNs in which the stability of wireless links is threatened by various obstacles. In this paper, we propose a new slot allocation and utilization method that allocates one big slot for all nodes at each depth of a tree, and allows the nodes to share it through contention. The big slot constrains the packet transmission delay of all nodes at the same depth, thereby limiting the packet transmission delay to a sink. We show by simulation that the proposed approach is very dependable against the time-varying channel in WSNs.

Keywords Slot scheduling · Real-time · Safety-critical · TDMA · CSMA

1 Introduction

A safety-critical application [1] requires a timely and reliable data transmission over a communication channel in industrial WSNs. Thus, many TDMA-based MAC protocols [1–3] were proposed to tackle this problem. However, the harsh industrial environment incurs frequent failures of wireless links due to ambient noise and interference, thus obsoleting part of the scheduled slots. It is therefore highly necessary, yet challenging, to design a real-time MAC protocol that is robust against the time-varying wireless links.



TreeMAC [2] assigns nonoverlapping frames to all nodes, where each frame consists of three slots. This protocol is advantageous in terms of channel efficiency by allowing slots to be reused by the nodes every three depths in a tree. However, the slot reuse can incur irregular interference because the interference range is always farther than the transmission range. Furthermore, it does not suggest any measure against link failures.

I-MAC [1], which targets small control and monitoring networks, excludes the spatial reuse of slots by allocating slots to each node in a distinct manner. Moreover, it tries to enhance reliable data transmission in unstable industrial WSNs such that data transmission within a slot is secured by control packets such as RTS, CTS, and ACK. However, it does not respond effectively to the time-varying failures of wireless links, even though it suggests a spare-time utilization scheme to salvage packets.

Z-MAC [3] combines the advantages of TDMA-based and CSMA-based protocols. Z-MAC allocates time slots to every node such that no two nodes within two hops of each other are assigned the same time slot in order to prevent interference. However, due to the slot scheduling overhead, the authors recommend the execution of DRAND [4] only at network initialization time. This protocol does not address the irregular interference problem and does not take any measures against link failures, either.

Since the time-varying noise and interference in industrial fields is dependent on frequency, WirelessHART [5] employs a frequency (i.e., channel) hopping technique in which a sensor node switches randomly to one out of a predefined list of wireless channels every slot in order to reduce the effect of noise and interference. This definitely improves the data transmission reliability. However, WirelessHART reschedules slots network-wide to fix any link failure, similar to the other MAC protocols. On the other hand, GinMAC [6] deterministically calculates a number of additional slots that are reserved for improving reliability against the worst-case channel characteristics. However, GinMAC loses its reliability control when the depth of a tree extends beyond the static topology envelope.

Some other protocols that improve reliability by reducing data collisions have been proposed. The link activity scheduling approach [7] builds and examines a conflict graph, while the S-Web-based MAC [8] uses checkerboard- and dartboard-based slot scheduling to remove data collisions by reducing the number of nodes contending for the medium at the same time. Note that both protocols consider the link failures caused by collisions only, but not those caused by external interference. RNP [9] employs overhearing and piggybacking techniques to cooperatively forward data packets that are missing due to external interference. However, all data packets are broadcast to allow for overhearing and piggybacking, which incurs high collisions.

According to the discussion so far, none of the existing TDMA-based MAC protocols except I-MAC directly addresses a method to ensure the reliability of data delivery or a method to survive link failures in industrial WSNs, and even I-MAC responds only weakly to link failures. Thus, the existing MAC protocols may not be suitable for industrial WSNs with inherent link instability due to various obstacles or high signal noise.


We propose a new slot allocation and utilization method that improves the reliability of data transmission over time-varying unstable links in industrial WSNs. In our approach, one big slot is allocated to all nodes at each depth of a tree and shared by those nodes for data transmission to their respective parents. The proposed approach reduces channel competition because only the nodes at the same depth contend for the channel within the same big slot. Furthermore, the transmission delay at lower depths tends to increase, since nodes at lower depths have a higher probability of collision: they process more packets than nodes at higher depths. Considering this, we develop a formula that generates variable-sized big slots according to tree depth. We show by simulation that the proposed approach is very dependable against time-varying link failures in WSNs.

The rest of the paper is organized as follows. In Sect. 2, we discuss the background of the proposed approach with the necessary definitions and notation. In Sect. 3, we formally describe the proposed approach and analyze its properties. In Sect. 4, a performance evaluation is given by means of simulation. Finally, we make concluding remarks in Sect. 5.

2 Background

2.1 Network Model

Industrial fields are often harsh and unfriendly to wireless communication owing to obstacles, changing structures, and ambient interference, resulting in unstable wireless links. An industrial monitoring and control WSN usually consists of one data collection and control server (hereafter referred to simply as the sink node) and a number of sensor devices (hereafter referred to simply as sensor nodes), each of which includes at least one sensor module for sensing the environment. Each sensor node generates one data packet that must be delivered to the sink node within a specified time bound defined by the application. The sink node is wall-powered, whereas a sensor node is battery-powered. The transmission range of a sensor node is limited for spectrum and battery efficiency. The sink node collects data from the network at regular intervals, thus naturally bounding the data delivery time. The sensor nodes form a tree originating from the sink in which every node except the sink has a parent and may have multiple children. A node is said to be a tree node if it belongs to the tree; otherwise, it is an orphan node. Two nodes that can directly and mutually communicate with each other are said to have a link. A link between a node and its parent is specially called a tree link. A link can be broken because of node failure, battery depletion, interference, or the intervention of some obstacle.


Fig. 1 A network model

Figure 1 shows an example network in which one sink node and 13 sensor nodes form a tree originating from sink node S. The solid lines and the dashed lines indicate tree-links and ordinary-links, respectively.

2.2 Motivations

A TDMA-based protocol eliminates contention among nodes by assigning them time slots for data transmission, giving every node a guaranteed chance to deliver its data packets to the sink. However, if a node loses one of its upstream links, it cannot deliver packets to the sink. There are two approaches to resolving this. One is to reconstruct the tree and then reschedule slots over the whole tree. Tree reconstruction and slot rescheduling can be performed either after every tree-link failure or after some percentage of nodes fails to deliver data packets to the sink; the former suffers from a great deal of overhead, while the latter causes some amount of slots to be wasted. The other approach is to repair the broken tree and reschedule slots locally for the broken part. However, local rescheduling of slots is very difficult, since it affects the amount of slots required by, and the slot start times of, the related nodes. In consequence, tree maintenance under a TDMA scheme is fundamentally unfavorable, since it is accompanied by costly slot rescheduling.

Let us consider some tree-based MACs such as TreeMAC and I-MAC for a simple WSN as shown in Fig. 1. If link (1, 2) is broken, the slots allocated to nodes 2 and 4 will be wasted until tree reconstruction and slot rescheduling are carried out. However, if node 2 could change its parent to node 7 without affecting the scheduled slots, it could continue seamless data transmission via the new parent while avoiding the control overhead and the waste of slots. Nonetheless, such slot reuse is difficult to realize, because it is not easy to determine whether any two nodes can use the same slot without interference, given that the interference range is always larger than the physical transmission range [10]. Slot reuse increases channel efficiency, but impairs the freedom from interference in data transmission.

To achieve seamless data transmission, reduce the control overhead caused by tree maintenance, and improve channel efficiency, it is desirable to assign one unique big slot to all the nodes at the same depth of the tree so that they can share the big slot for data transmission. Even if a node changes its parent, it does not have to change its big slot as long as it keeps the same tree depth; if its depth changes, it can use the big slot allocated to the new depth. One problem is the collision and delay incurred by contention, since the CSMA scheme is employed within a big slot. One favorable aspect is that the contention is confined to the nodes at the same tree depth. Another is that opportunistic parallel transmission becomes possible: if two nodes at the same depth do not interfere with each other in terms of the CSMA operation, they can transmit packets to their respective parents simultaneously; nodes 4 and 13 in Fig. 1 are such a case. In consequence, tree construction and slot scheduling are of great importance in realizing the proposed concept. In this paper, the function that generates big slots in a distributed manner is developed, and the slot scheduling algorithm is described only briefly owing to space constraints.

2.3 Notations and Definitions

For convenience, we use the following notations and definitions.

• depth(i): the depth of node i
• N(i): the set of neighbors of node i
• C(i): the set of children of node i
• P(i): the parent of node i

Definition 1 A big slot (BS) is a time span that all nodes at the same depth share in order to receive data packets from their children and to transmit their own data packets to their respective parents using the CSMA scheme. The big slot allocated to the nodes at depth i is denoted by BS(i). BS(i) is divided into two parts, BS^Rx(i) and BS^Tx(i), which the nodes at depth i use to receive data packets from their children and to transmit data packets to their parents, respectively.

Definition 2 A superframe (SF) is given as the sum of the transmission portions of all the big slots allocated to the nodes at the different depths, as follows.


SF = \sum_{i=2}^{H} BS^{Tx}(i)

where BS^Tx(i) and BS^Tx(j), for i ≠ j, do not overlap.

3 Slot Scheduling

3.1 Tree Construction and Maintenance

During tree construction, every node i builds its neighbor information table NIT(i) = {(x, status, depth(x)) | x ∈ N(i)}, where status ∈ {primary parent, secondary parent, child}. We assume that every link is bidirectional and that time is synchronized over the entire network. The tree construction process is as follows. At initialization, the sink is the only tree node; it initiates tree construction by issuing a tree construction request, TCR = (node ID). Upon receiving the TCR, an orphan node joins the sink by sending a join request, JREQ = (sender, receiver, depth). Upon receiving the JREQ, a tree node sends a join response, JRES = (sender, receiver, depth), and takes the orphan node as its child. When the orphan node receives the JRES, it takes the tree node as its parent. Another orphan node that has overheard the JREQ can follow the same procedure to become a tree node. If an orphan node overhears multiple JREQs associated with different tree nodes, it takes the tree node that provides the shortest distance (depth) to the sink as its primary parent, and the other tree nodes at the same distance as secondary parents.
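The parent-selection rule above can be sketched in a few lines. The following fragment is an illustrative reconstruction, not the authors' implementation: the function names, the candidate-list format, and the extra "neighbor" status are our own assumptions.

```python
# Illustrative sketch of primary/secondary parent selection and NIT update,
# following the rule described above: the shortest depth wins, ties become
# secondary parents, and the joining node sits one hop below its parent.

def select_parents(candidates):
    """candidates: list of (tree_node_id, depth) pairs overheard by an orphan."""
    if not candidates:
        return None, [], None                    # no tree node heard: stay an orphan
    best_depth = min(depth for _, depth in candidates)
    tied = [nid for nid, depth in candidates if depth == best_depth]
    primary, secondaries = tied[0], tied[1:]
    return primary, secondaries, best_depth + 1  # own depth = parent depth + 1

def update_nit(nit, candidates, primary, secondaries):
    """Fill NIT(i) = {x: (status, depth(x))}; 'neighbor' is our placeholder status."""
    for nid, depth in candidates:
        status = ("primary parent" if nid == primary
                  else "secondary parent" if nid in secondaries
                  else "neighbor")
        nit[nid] = (status, depth)
    return nit

# Example: an orphan overhears two depth-2 tree nodes and one depth-3 tree node.
primary, secondaries, depth = select_parents([(7, 2), (1, 2), (3, 3)])
print(primary, secondaries, depth)               # -> 7 [1] 3
```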

3.2 Wait Time Generation Function

The slot scheduling in this paper is based on the wait time distribution function proposed in [11]:

WTime(d) = W_1 \cdot a^{d-1}    (1)

where WTime(d) is the time that a node at depth d has to wait before transmitting data packets to its parent, and the base a lies in (0, 1]. Since the sink does not have to send any packets, WTime(d) lies in (0, W_1], and the wait time of the sink, WTime(1), is equal to SF (= W_1).

The wait time distribution function has two basic principles. The first is that it generates a skewed wait time according to tree depth; this implies that a node waits until all of its children have completed their data transmissions, in favor of data aggregation. The second is that the wait time gap between any two nodes at two consecutive depths increases exponentially as depth decreases. This distribution of the gap is necessary because nodes at lower depths experience higher contention for the channel: they have to process more data packets, in proportion to the number of their descendants, and they have a reduced chance of parallel transmission owing to the reduced distance between them.
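Equation (1) can be written down directly. The sketch below uses the W_1 and a values quoted later in the simulation section (1.6 s and 0.7); the helper name is our own.

```python
# Eq. (1): WTime(d) = W1 * a**(d-1). W1 = 1.6 s and a = 0.7 follow Table 1.

def wtime(d, w1=1.6, a=0.7):
    """Wait time (seconds) of a node at depth d before it may transmit to its parent."""
    return w1 * a ** (d - 1)

# The gap WTime(d-1) - WTime(d) = W1 * a**(d-2) * (1 - a) grows as the depth
# decreases, which is exactly the skew described above.
for d in range(1, 8):
    print(d, round(wtime(d), 4))
```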

3.3 Estimation of Key Parameters

One big issue is how a and W_1 can be determined. In [11], the approximate range of a for a small tree-based network is determined as 0.63 ≤ a ≤ 0.82. Conveniently, values of a within this range are not very sensitive to the number of nodes or the dimensions of the network; a is therefore set to 0.7 in the experiments later in this paper. The size of a superframe, W_1, should be greater than the sum of the transmission times of all packets from every node to the sink, assuming that each node generates only one packet per superframe. Thus, the bounds on W_1 can be given as follows:

\sum_{d=2}^{H} (d-1)\, n_d\, T \;\le\; W_1 \;\le\; \sum_{d=2}^{H} (d-1)\, n_d\, E[D]    (2)

where H is the depth of the tree, n_d is the number of nodes at depth d, T is the one-hop transmission time of a packet, and E[D] denotes the expected delay of a packet when a node sends it to its parent using CSMA. Initial values of the parameters H, n_d, T, and E[D] were studied in other papers [11, 12]. In particular, n_d depends on the number of nodes in the considered network, so this value can be increased adaptively as the number of participating nodes grows. The values of W_1 and a can be included in the TCR, JREQ, and JRES messages so that every node learns them.
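The bound in Eq. (2) is easy to evaluate once the per-depth node counts are known. In the sketch below, the counts n_d are made-up placeholders for a 25-node tree (the paper does not list them); T and E[D] are the values cited later in Sect. 4.1.

```python
# Eq. (2): sum_{d=2..H} (d-1)*n_d*T  <=  W1  <=  sum_{d=2..H} (d-1)*n_d*E[D].
# T = 3.125 ms and E[D] = 30 ms follow the simulation section; n_d is illustrative.

def w1_bounds(n_per_depth, T=3.125e-3, ED=30e-3):
    """n_per_depth: {depth d: n_d}, for depths 2..H."""
    weight = sum((d - 1) * nd for d, nd in n_per_depth.items())
    return weight * T, weight * ED               # (lower bound, upper bound) in seconds

lo, hi = w1_bounds({2: 5, 3: 5, 4: 4, 5: 4, 6: 4, 7: 3})   # hypothetical 25-node tree
print(round(lo, 3), "s <= W1 <=", round(hi, 2), "s")
```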

3.4 Big Slot Calculation and Slot Scheduling

We can calculate the length of a big slot (BS) using Eq. (1). The receiving slot length BS^Rx(i) of a node at depth i is given by

BS^{Rx}(i) = \begin{cases} WTime(i) - WTime(i+1), & \text{if a node at depth } i \text{ has a child} \\ 0, & \text{otherwise} \end{cases}    (3)

The sending slot length BS^Tx(i) of the same node at depth i is

BS^{Tx}(i) = \begin{cases} WTime(i-1) - WTime(i), & \text{if } i > 1 \\ 0, & \text{otherwise} \end{cases}    (4)

Fig. 2 The variable size of big slots at different depths and their relationship

To find the size of the BS of a node at depth i, we sum Eqs. (3) and (4):

BS(i) = BS^{Rx}(i) + BS^{Tx}(i) = \begin{cases} W_1 (a^{-2} - 1)\, a^i, & \text{if the node is an intermediate node} \\ W_1 (a^{-2} - a^{-1})\, a^i, & \text{if the node is a leaf node} \\ W_1 (a^{-1} - 1)\, a^i, & \text{if the node is the sink node} \end{cases}    (5)

According to Eq. (5), the BS size varies with the depth of a node in the tree, such that nodes at lower depths require a larger BS than nodes at higher depths. It is worth noting that the sum of all the BSs is larger than the SF size, since the BSs at adjacent depths overlap with each other. Figure 2 shows the relationship of the variably sized big slots at different depths. BS(i) is divided into BS^Rx(i) and BS^Tx(i); BS^Rx(i) and BS^Tx(i) overlap with BS^Tx(i+1) and BS^Rx(i−1), respectively, so that every node can both receive data packets from its children and transmit data packets to its parent.
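Equations (3)–(5) translate directly into code. The sketch below is ours; the node roles ("intermediate", "leaf", "sink") are used only to select the matching case of Eq. (5), and the final assertion checks that the closed form agrees with the sum of Eqs. (3) and (4).

```python
# Big-slot lengths per depth, using Eq. (1) for WTime and Eqs. (3)-(5) for BS.

def wtime(d, w1=1.6, a=0.7):
    return w1 * a ** (d - 1)                      # Eq. (1)

def bs_rx(i, has_child, w1=1.6, a=0.7):
    return wtime(i, w1, a) - wtime(i + 1, w1, a) if has_child else 0.0   # Eq. (3)

def bs_tx(i, w1=1.6, a=0.7):
    return wtime(i - 1, w1, a) - wtime(i, w1, a) if i > 1 else 0.0       # Eq. (4)

def bs(i, role, w1=1.6, a=0.7):
    """Closed form of Eq. (5) for an intermediate, leaf, or sink node at depth i."""
    factor = {"intermediate": a**-2 - 1, "leaf": a**-2 - a**-1, "sink": a**-1 - 1}[role]
    return w1 * factor * a**i

# Sanity check: the closed form equals BS_Rx + BS_Tx for an intermediate node.
assert abs(bs(3, "intermediate") - (bs_rx(3, True) + bs_tx(3))) < 1e-12
print(round(bs(3, "intermediate"), 4), "s at depth 3")
```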

3.5 Big Slot Assignment Algorithm

Every node can compute its BS in a distributed manner according to Eq. (5), since it knows its own depth. A node must also determine the start time of its BS within the superframe W_1. Suppose the start time of a superframe is sTime. Then a node at depth i determines the start times of BS^Rx(i) and BS^Tx(i), denoted RxTime(i) and TxTime(i) respectively, as well as the time at which it returns to sleep:

RxTime(i) = sTime + WTime(i+1)    (6)
TxTime(i) = sTime + WTime(i)    (7)
SleepTime(i) = sTime + WTime(i−1)    (8)

According to Eqs. (6)–(8), every node at depth i wakes up at RxTime(i) to receive data packets from its children. As soon as it has finished receiving, it enters sleep mode and wakes up again at TxTime(i) to forward the data packets to its parent. It then returns to sleep at SleepTime(i) until its next RxTime(i), which is the current RxTime(i) + W_1, regardless of whether the data forwarding succeeded.
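The resulting duty cycle of a node can be sketched as follows; this is our own illustration of Eqs. (6)–(8) with the W_1 and a values from Table 1.

```python
# Wake/sleep boundaries of a node at depth i within a superframe starting at s_time.

def schedule(i, s_time=0.0, w1=1.6, a=0.7):
    wtime = lambda d: w1 * a ** (d - 1)           # Eq. (1)
    return {
        "RxTime":    s_time + wtime(i + 1),       # Eq. (6): wake up, receive from children
        "TxTime":    s_time + wtime(i),           # Eq. (7): forward to the parent
        "SleepTime": s_time + wtime(i - 1),       # Eq. (8): sleep until RxTime + W1
    }

print(schedule(3))   # depth 3: Rx ~ 0.549 s, Tx ~ 0.784 s, sleep ~ 1.12 s
```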

4 Performance Evaluation

We evaluated the proposed big slot scheduling, named BSSA for convenience, using the QualNet simulator version 5.0.2. We compare our approach with I-MAC [1], which has exhibited better performance than other contemporary real-time MAC protocols.

4.1 Simulation Model

To ease topology dimensioning, we enlarge both the simulation area and the transmission range to the same extent; the experimental results obtained with these modified parameters therefore remain consistent with the network model in Sect. 2.1. Using the mathematical formulas in [13, 14], we obtain an average value of H of 7. Substituting H = 7, T = 3.125 ms [12], and E[D] = 30 ms [11] into Eq. (2), we get a theoretical range for W_1 of 0.25 s ≤ W_1 ≤ 2.4 s. We set W_1 to 1.6 s within this range, the optimum value according to our simulation study. Table 1 shows the key simulation parameters and values.


Table 1 Simulation parameters and values

Parameter                                  Value
Number of nodes, n                         1 sink and 25 sensor nodes
Dimension, d                               100 m × 100 m
Simulation time, T                         600 s
W1 (for BSSA)                              1.6 s
a (for BSSA)                               0.7
Slot size (for I-MAC)                      20 ms
Transmission range, R                      20 m (−25 dBm)
Channel frequency, Fr                      2.4 GHz
Path loss model                            2-ray ground
Sensor energy model                        MicaZ
Battery model                              Linear
Maximum Tx times (MAX_TIMES in I-MAC)      2
Data packet length                         100 bytes

4.2 Simulation Scenarios

All sensor nodes are static. Twenty-five sensor nodes are uniformly distributed within a simulation area of 100 m by 100 m, and the sink is placed at the middle of the top edge of the area. Each sensor node transmits only one packet of 100 bytes per superframe. The following two scenarios are used to evaluate performance.

• Scenario I: Stable link scenario. In this model, all links remain connected until the simulation ends. Here we want to examine the impact of using the BS on network operation; in particular, we are interested in how much the performance deviates because CSMA is used within a BS instead of a fully slotted approach.

• Scenario II: Randomly disconnected link scenario. In this model, a statistical generator breaks links in a purely random manner; the location and number of link breaks within a cycle can be neither predicted nor controlled from the user interface. In every SF, each node picks a value k from a subset of all nodes participating in the simulation; if k is equal to its own ID, the node is considered disconnected from its primary parent and remains so for the current SF. The number of link breaks per SF is controlled by the Link Break Index (LBI), α: a low value of α gives a high link-failure rate, and vice versa.
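One plausible way to realize such a generator is sketched below. The paper does not give the exact distribution, so we simply assume that each node's primary-parent link breaks with probability roughly 1/α per superframe (a small α means frequent failures, matching the description above).

```python
import random

# Scenario II link-break generator (an assumption about the exact mechanism):
# every superframe, each node's link to its primary parent breaks with
# probability ~1/alpha, so a small Link Break Index alpha yields a high
# failure rate. A broken node falls back to a secondary parent at the same
# depth, so in BSSA its big slot, and hence the schedule, is unchanged.

def broken_nodes(node_ids, alpha, rng=random):
    """Return the set of node IDs whose primary-parent link fails in this SF."""
    return {nid for nid in node_ids if rng.randrange(alpha) == 0}

print(broken_nodes(range(1, 26), alpha=5))       # expect about 5 of the 25 nodes
```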

4.2.1 Simulation with Stable Link Scenario

Packet Delivery Ratio (PDR). Figure 3 shows that the PDR of BSSA decreases slightly with increasing depth, by about 1 % at depth 7. This implies that about 1 % of the packets generated at depth 7 are lost while they travel along the upstream tree paths. Since packets travel over multiple hops using CSMA, some of them are lost due to collisions. However, this loss is not significant considering the properties of wireless networks, which supports the claim that data collisions among nodes at the same depth can almost be disregarded.

4.2.2 Simulation with Randomly Disconnected Link Scenario

Packet Delivery Ratio. In this scenario, a random number of links is broken every superframe (SF). Since BSSA allows nodes to maintain multiple parents and to keep using the same big slot, it responds quickly to broken links by changing parents, with no overhead. As shown in Fig. 4, BSSA achieves a higher PDR than I-MAC and remains stable against link breakage. I-MAC, in contrast, must wait for link recovery until the tree is reconstructed at the start of the next superframe. A larger number of link failures demands more frequent tree reconstructions, which sharply lowers the PDR. Nevertheless, the I-MAC PDR reaches an equilibrium point at a specific value of the LBI α, beyond which the tree reconstruction rate does not lower the PDR any further.

Energy Consumption. BSSA has low energy consumption compared with I-MAC, owing to its distributed slot scheduling and its flexibility with respect to topology changes. We measured the energy consumption of each node as the simulation progressed; the average energy consumption over all participating nodes is depicted in Fig. 5. Naturally, both curves increase linearly with simulation time. The figure shows that BSSA consumes much less energy than I-MAC and exhibits only a slightly increasing pattern. This is because, in BSSA, every node determines its big slot independently of the other nodes, and tree reconstruction is unnecessary even when a link is broken.

Fig. 3 Packet delivery ratio with stable links

Fig. 4 Packet delivery ratio in random link failure

Fig. 5 Average power consumption in random link failure

5 Conclusions

We have proposed a new slot scheduling algorithm, BSSA, that exhibits highly robust behavior in an interference-prone environment. BSSA provides a run-time defense against unpredictable link failures by delivering packets through secondary parents. The proposed approach achieves this capability by instantiating a local and controlled CSMA operation within a big slot.

The simulation results show that BSSA provides a level of performance comparable to I-MAC in the stable link scenario, but clearly outperforms I-MAC in the unpredictable link-breaking scenario. Moreover, the proposed protocol consumes a significantly lower amount of energy than I-MAC. We therefore conclude that BSSA is a promising approach for noisy and interference-prone industrial applications.

Acknowledgments This research is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2013R1A1A2013396).

References

1. Oh, H., Vinh, P.V.: Design and implementation of a MAC protocol for timely and reliable delivery of command and data in dynamic wireless sensor networks. Sensors 13, 13228–13257 (2013)
2. Song, W.-Z., Huang, R., Shirazi, B., LaHusen, R.: TreeMAC: localized TDMA MAC protocol for real-time high-data-rate sensor networks. In: IEEE International Conference on Pervasive Computing and Communications, pp. 1–10. IEEE, Texas (2009)
3. Rhee, I., Warrier, A., Aia, M., Min, J., Sichitiu, M.L.: Z-MAC: a hybrid MAC for wireless sensor networks. IEEE/ACM Trans. Netw. 16, 511–524 (2008)
4. Rhee, I., Warrier, A., Min, J., Xu, L.: DRAND: distributed randomized TDMA scheduling for wireless ad-hoc networks. In: 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 190–201. ACM New York, Florence (2006)
5. Petersen, S., Carlsen, S.: WirelessHART versus ISA100.11a: the format war hits the factory floor. IEEE Ind. Electron. Mag. 5, 23–34 (2011)
6. Suriyachai, P., Brown, J., Roedig, U.: Time-critical data delivery in wireless sensor networks. In: 6th IEEE International Conference on Distributed Computing in Sensor Systems, pp. 216–229. Springer, Berlin, Santa Barbara (2010)
7. Cheng, M.X., Gong, X., Xu, Y., Cai, L.: Link activity scheduling for minimum end-to-end latency in multihop wireless sensor network. In: IEEE Global Telecommunications Conference, pp. 1–5. IEEE, Houston (2011)
8. Le, H., Eck, J.V., Takizawa, M.: An efficient hybrid medium access control technique for digital ecosystems. IEEE Trans. Industr. Electron. 60, 1070–1076 (2013)
9. Silvo, J., Eriksson, L.M., Bjorkbom, M., Nethi, S.: Ultra-reliable and real-time communication in local wireless applications. In: 39th Annual Conference of the IEEE Industrial Electronics Society, pp. 5611–5616. IEEE, Vienna (2013)
10. Polastre, J., Hill, J., Culler, D.: Versatile low power media access for wireless sensor networks. In: 2nd International Conference on Embedded Networked Sensor Systems, pp. 95–107. ACM New York, Baltimore (2004)
11. Ngo, C.T., Oh, H.: A tree-based mobility management using message aggregation based on a skewed wait time assignment in infrastructure based MANETs. Wireless Netw. 20, 537–552 (2014)
12. IEEE Standard for Local and metropolitan area networks - Part 15.4: Low-rate wireless personal area networks (LR-WPANs). IEEE Std. 802.15.4-2011 (2011). http://standards.ieee.org/findstds/standard/802.15.4-2011.html
13. Bettstetter, C., Hartenstein, H., Perez-Costa, X.: Stochastic properties of the random waypoint mobility model. Wireless Netw. 10, 555–567 (2004)
14. Dung, L.T., An, B.: A modeling framework for supporting and evaluating performance of multi-hop paths in mobile Ad-Hoc wireless networks. Comput. Math. Appl. 64, 1197–1205 (2012)

Classification and Comparative Analysis of Resource Management Methods in Ad Hoc Network Haitao Wang, Li Yan, Lihua Song and Hui Chen

Abstract Due to the distinct characteristics of Ad hoc networks, such as the constrained energy of nodes, the instability of wireless links, rapid topology changes, and limited communication range, resource management in Ad hoc networks has always been a challenging task. In recent years there have been many achievements in this research field, with good results in terms of enhancing user QoS, prolonging network lifetime, reducing end-to-end delay, and improving network throughput. In this paper, resource management techniques and methods for Ad hoc networks are surveyed. From the perspective of research methods and techniques, these studies are divided into three main categories: cross-layer design-based resource management, game theory-based resource management, and scene awareness-based resource management. The advantages and shortcomings of these methods are then explained from different angles. In general, adaptive scene awareness-based resource management is more suitable for dynamic Ad hoc networks. In conclusion, these three kinds of methods complement each other and can be combined appropriately for different application scenarios in the future, to achieve more efficient resource management for Ad hoc networks.

Keywords Ad hoc network · Resource management · Cross-layer design · Game theory · Context-aware

H. Wang (✉) · L. Yan · H. Chen
Information Management Center, PLA University of Science and Technology, Nanjing 210007, Jiangsu, China
e-mail: [email protected]
L. Song
College of Command Information Systems, PLA University of Science and Technology, Nanjing 210007, Jiangsu, China


1 Introduction

An Ad hoc network is a wireless self-organizing network with a flexible structure that does not rely on infrastructure support; it is used extensively in settings such as military fields, disaster recovery, and remote areas [1]. However, an Ad hoc network is also a resource-constrained network that calls for efficient resource management strategies: most mobile nodes are battery-powered and use wireless links for data transmission. Designing efficient resource management schemes is very challenging in Ad hoc networks owing to their characteristics, including unstable wireless links, dynamic topology, and limited energy and bandwidth. Although it is hard to provide definitive QoS guarantees in Ad hoc networks, reasonable resource management can improve the performance of the related applications.

Resources in Ad hoc networks mainly include two types: node energy and network bandwidth. Rational and effective power management enables balanced energy consumption across nodes and reduces signal interference [2]. In addition, it is necessary to reduce the power consumption of idle nodes, thus extending the network lifespan. On the other hand, efficient bandwidth management can guarantee the harmonious utilization of scarce bandwidth resources, so as to improve QoS to some extent. In general, resources in an Ad hoc network can be considered from multiple aspects and are relevant to the specific application environment. In recent years, many achievements in Ad hoc resource management have drawn on experience from traditional network management while considering the characteristics of Ad hoc networks. Cross-layer design, game theory, and context awareness technology, as strong tools for designing resource management schemes, have been applied extensively in Ad hoc networks.

2 Classification of Resource Management Methods

2.1 Cross-Layer Design-Based Resource Management

Owing to dynamic topology, limited resources, and unpredictable channel conditions, strictly layered design is not flexible enough for wireless networks, and the application of cross-layer design in Ad hoc networks has attracted growing attention. Cross-layer design is a key theoretical innovation in next-generation wireless communication systems: it breaks with the traditional layered design idea and treats all layers of the network as a whole for design, analysis, and optimization control, so as to realize effective allocation of network resources [3]. According to the direction of message flow, cross-layer designs can be classified into top-down, bottom-up, and comprehensive cross-layer design mechanisms. For example, the physical layer is responsible for transmitting data, and it should keep the BER as low as possible while adapting to rapid changes in link characteristics.


The MAC layer must coordinate the access of all nodes to the shared channel so as to avoid access conflicts and hidden-terminal problems. The network layer is responsible for finding routes to destinations while meeting the requirements of high packet delivery ratio, high throughput, low energy consumption, and low delay. All these layers depend on each other, and the exchange of information among them helps to adaptively adjust and control parameters and to improve overall network performance. Cross-layer design can be divided into three categories: layer-trigger mechanisms, joint optimization, and full cross-layer design. A layer trigger refers to signals predefined between different protocols to announce certain events (such as a data transmission failure). In this mechanism, although multiple layers participate in the interaction, normally only one layer is responsible for the optimization process; the other layers just provide the corresponding parameters. In a joint optimization mechanism, two or three layers generally consider multiple optimization targets, including QoS, route scheduling, and power control; for example, the optimized cross-layer framework proposed by Huang realizes joint optimization of MAC scheduling and node power control [4]. In a full cross-layer design, all layers remain logically separated but adjust their behavior by passing network status information throughout the whole protocol stack.

The advantage of cross-layer design lies in its comprehensive consideration of information from different layers as an index for resource scheduling, as illustrated by the sketch below. For example, the QoS-AOMDV protocol introduces a total cost index that combines the MAC-layer queue length with node energy from the network layer, effectively preventing nodes with extremely low energy from forwarding packets, and it can improve network throughput and reduce delay at high packet rates. Similarly, a joint protocol for power control and access scheduling in Ad hoc networks uses the cross-layer design idea to reduce the delay experienced by delay-sensitive applications and to eliminate the Doppler frequency dispersion effect.
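The following fragment illustrates the idea of such a cross-layer cost index; the weights, the energy cut-off, and the exact functional form are our own assumptions rather than the published QoS-AOMDV formula.

```python
# Illustrative cross-layer route cost: MAC-layer queue occupancy and residual
# node energy are fused into one metric so that congested or nearly depleted
# relays are avoided. Weights and the energy cut-off are assumptions.

def node_cost(queue_len, queue_cap, residual_energy,
              w_queue=0.5, w_energy=0.5, min_energy=0.05):
    if residual_energy < min_energy:              # exclude nodes about to die
        return float("inf")
    congestion = queue_len / queue_cap            # MAC-layer information
    depletion = 1.0 - residual_energy             # energy information (0 = full)
    return w_queue * congestion + w_energy * depletion

def route_cost(relays):
    """Total cost of a candidate path; the route with the smallest cost wins."""
    return sum(node_cost(*r) for r in relays)

print(route_cost([(3, 10, 0.9), (7, 10, 0.4)]))   # two relays -> 0.85
```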

2.2 Game Theory-Based Resource Management

Ad hoc networks are distributed, dynamic, and self-organized, which makes game theory a very suitable modeling tool. By modeling the communication and interaction process among nodes, network parameters can be configured rationally so as to reach overall network optimization. Different users may experience different wireless channel qualities and have different processing abilities, battery energy, and QoS demands; applying game theory to optimize resource configuration focuses not only on overall effectiveness but also on fairness among users. Game theory includes cooperative games and non-cooperative games, as well as competition strategies mixing both. A cooperative game adopts a kind of compromise to increase the earnings of at least one party and the overall earnings of the system. Of course, each party has a minimum benefit requirement, which can be called its non-cooperative payoff. One necessary condition for forming a cooperative game is that the parties reach binding agreements. In fact, the increased benefits come from the cooperative surplus, and how this surplus is allocated depends on the abilities and bargaining positions of the two parties. A non-cooperative game is one in which no binding agreement can be reached; each player tries to maximize its own benefit, while its strategy selection is affected by the strategies of the other players. An important concept in non-cooperative games is the Nash equilibrium, an equilibrium in which, once the other players' strategies are fixed, no player has an incentive to change its own strategy. When applying the Nash equilibrium, it should be noted that neither the existence nor the uniqueness of the equilibrium can be guaranteed in general.

Nodes in an Ad hoc network normally have the same characteristics, which is an important reason to apply game theory, because a one-to-one mapping can be established between network elements and game theory factors, as shown in Table 1.

Table 1 Mapping relation between ad hoc network elements and game theory factors

Elements in ad hoc network                                           Factors in game theory
Nodes or the whole network                                           People in the game
Related activities such as power control and bandwidth allocation    Strategy set
Performance index such as throughput and time delay                  Utility function

Game theory can be applied to every layer of an Ad hoc network, such as distributed power control at the physical layer, media access management at the MAC layer, and packet forwarding strategies at the network layer. Formally, the game is normally modeled as G = [n, A_n, U_n], where G is the game, n is the number of players (i.e., the number of nodes in the network), A_n is the strategy set of each node, and U_n is the utility of node n. The utility function of each node is related to the modeled objects and the application scenario, and it is a function of the nodes' actions. Nodes continuously adjust their strategies to maximize their own benefits; the sketch following this subsection illustrates such a best-response process. Quer G studied the issue of sharing network infrastructure to improve the overall performance of two coexisting Ad hoc networks through cooperation [5]. To improve data transmission rates and user QoS, the two networks will sometimes select some of their own nodes to share as relay nodes that carry data for both networks, thereby bridging network partitions and greatly improving the efficiency of shared resources. In a practical environment, however, considering energy consumption and security, a network might be unwilling to share too many nodes to relay data from the other network. Usually, a one-shot game results in an inefficient non-cooperative equilibrium, that is, both networks refuse to share nodes. Therefore, a repeated game is adopted so that a player takes into account the impact of its current action on the other players in the future, punishes players who deviate from the equilibrium point, and eventually reaches a kind of cooperative equilibrium.
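As a concrete (toy) illustration of G = [n, A_n, U_n] and of nodes iteratively adjusting their strategies, the sketch below runs best-response dynamics for a three-node power-control game; the gain matrix, cost weight, and utility form are our own assumptions, not taken from the cited works.

```python
import math

# Toy game G = [n, A_n, U_n]: each node picks a transmit power from POWERS
# (its strategy set) and its utility trades log-throughput against an energy
# cost. Best-response iteration stops when no node can improve unilaterally,
# i.e. at an (approximate) Nash equilibrium.

POWERS = [0.1, 0.2, 0.4, 0.8]                     # strategy set A_n (watts)
GAIN = [[1.00, 0.15, 0.10],                       # GAIN[i][j]: channel gain j -> i
        [0.12, 1.00, 0.20],
        [0.08, 0.18, 1.00]]
NOISE, COST = 0.05, 1.5

def utility(i, p):
    interference = NOISE + sum(GAIN[i][j] * p[j] for j in range(len(p)) if j != i)
    sinr = GAIN[i][i] * p[i] / interference
    return math.log(1.0 + sinr) - COST * p[i]     # throughput minus energy cost

def best_response_dynamics(p, rounds=20):
    for _ in range(rounds):
        changed = False
        for i in range(len(p)):
            best = max(POWERS, key=lambda a: utility(i, p[:i] + [a] + p[i+1:]))
            if best != p[i]:
                p[i], changed = best, True
        if not changed:                           # no unilateral improvement left
            break
    return p

print(best_response_dynamics([0.8, 0.8, 0.8]))
```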

2.3 Scene Awareness-Based Resource Management

With the rapid development of communication and computing technologies, computing modes have become increasingly smart; in particular, the emergence of ubiquitous networks has attracted great attention from academia. An important field within ubiquitous networking is scene (context) awareness computing. A scene-aware system can adaptively adjust its behavior according to the surrounding situation without human intervention, so as to improve usability and effectiveness. According to the definition given by Dey, a scene is any information used to characterize the situation of an entity [6]. An entity can be a person, a location, or an object, including the user and the application itself. A scene is a concept related to its context and must be integrated with the actual application setting. For example, in a smart home, scenes include the hobbies, behavior habits, and activity modes of the inhabitants, as well as physical context such as location, temperature, sunlight, and humidity. In Ad hoc network resource management, scenes include node location, link status, and network topology information. As Petrelli put it, where there is a user there is an application, and where there is an application there is an environment; scenes are exactly such applications and environments. Scene awareness technology possesses good traits of self-management, self-healing, and self-protection, and it has been used in Ad hoc network management and resource allocation to help solve troublesome problems such as node energy restrictions and fast, unpredictable topological changes.

The normal working mode of scene-aware resource management is to guide the network in resource scheduling and behavior control through high-level management strategies, which are triggered when scene data meet certain conditions. For instance, an alarm phase is entered when the residual energy of a node falls below a certain threshold; the node then automatically reduces its chances of acting as a relay for other nodes, or even goes into sleep mode, to reduce energy consumption. As another example, a suitable routing protocol can be selected by perceiving the mobility of the current node (including its speed and the frequency of location changes), such as selecting AODV under strong mobility and OLSR under weak mobility, to guarantee better network performance [7].

Scene awareness-based network management in Ad hoc networks must effectively collect and transfer scene information among nodes. Scene information can be collected by each node, and a certain mechanism is needed to publish the information to the whole network [8]. For the efficient exchange of scene information, Liu Q. proposed a mechanism that not only realizes scene data exchange among nodes effectively but also solves the node-rejoining and information-loop issues [9]. This mechanism mainly includes three models: a scene model, a scene information base (CiB) for scene expression and storage, and a scene communication protocol (CiComm) for scene information exchange. Scene modeling is the foundation for designing a scene-aware system. The mechanism adopts the popular ontology modeling approach, which is defined by the five-dimensional questions Who, When, Where, What, and How and can express entities as well as their relationships. Each node maintains the scene information of itself and its neighbor nodes in its CiB, and neighbor nodes exchange scene information through the CiComm protocol. To track the departure and arrival of nodes in real time, the mechanism adopts a soft-state method that promptly removes the scenes of nodes that have not sent a heartbeat beacon for a certain period of time.
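The soft-state rule can be sketched as follows; class and field names are illustrative assumptions, since the cited mechanism [9] defines its own data formats.

```python
import time

# A minimal soft-state context base: each neighbour's scene entry is stamped
# when its heartbeat beacon arrives, and entries whose beacons stop coming are
# purged after a timeout - the behaviour described for the CiB above.

class ContextBase:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.entries = {}                          # node_id -> (scene dict, last_heard)

    def on_heartbeat(self, node_id, scene):
        self.entries[node_id] = (scene, time.monotonic())

    def purge_stale(self):
        now = time.monotonic()
        stale = [nid for nid, (_, t) in self.entries.items() if now - t > self.timeout]
        for nid in stale:                          # node left or link broke: drop its scene
            del self.entries[nid]
        return stale

cib = ContextBase(timeout=30.0)
cib.on_heartbeat(7, {"location": (12.4, 3.1), "residual_energy": 0.62})
print(cib.purge_stale())                           # -> [] (entry is still fresh)
```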

3 Comparison and Analysis

The three resource management methods described above have their own merits and shortcomings. First, the cross-layer design method realizes information sharing among different layers and achieves network optimization and efficient resource scheduling through inter-layer cooperation. However, cross-layer design inevitably weakens the modularity of the system structure while optimizing network performance, which may make system maintenance difficult; moreover, a single cross-layer design is targeted at a limited set of issues rather than all of them. Secondly, game theory, as a strong mathematical tool, has received recognition and attention in Ad hoc network resource management. It can greatly improve the efficiency of resource management through reasonable modeling and effective utility functions, but attention must be paid to the fact that the Nash equilibrium obtained from the model may be an equilibrium that decreases the overall utility. Thirdly, scene awareness technology has gained a good reputation in ubiquitous networks, yet research achievements in Ad hoc resource management are not particularly rich. The scenes currently considered by scene-aware systems are still inadequate, and excessive exchange of node scene data consumes a large amount of network resources [10]. Moreover, because of node mobility in Ad hoc networks, scene availability is itself dynamic: if scenes cannot be updated in time because of link breakage, the network environment becomes uncertain and conflicting scenes may even arise. The characteristics of Ad hoc networks also dictate that scenes be managed in a distributed manner.

It is worth noting that the three methods above are not isolated and can be combined suitably. For example, at the scene acquisition stage of a scene-aware system, cross-layer design can help obtain the scenes at different layers, and the current network status can then be deduced after integration. In addition, at the scene expression and reasoning stage, a game model can be established from the scenes collected at each node to carry out effective resource scheduling and guarantee optimized performance of the overall network. Conversely, a scene-aware system can normally obtain more network-related information, thus providing a more comprehensive basis for establishing the utility function.


4 Conclusions

In this paper, recent research achievements on Ad hoc network resource management are summarized and classified from the perspective of research methods. It is pointed out that the resource management methods discussed are not isolated; the different methods have their own intrinsic merits and can support and complement each other, so efficient and adaptive resource management mechanisms can be designed by combining them effectively. Future work should address two aspects: enhancing network survivability and improving user satisfaction.

Acknowledgments This paper is supported by NSFC (No. 61072043) and the Pre-research Project of PLAUST.

References

1. Frodigh, M., Johansson, P.: Wireless ad hoc networking—the art of networking without a network. Ericsson Rev. 4, 248–262 (2000)
2. Huang, W.L., Letaief, K.B.: Cross-layer scheduling and power control combined with adaptive modulation for wireless ad hoc networks. IEEE Trans. Commun. 55(4), 728–739 (2007)
3. Chen, J., Li, Z., Liu, J., et al.: QoS multipath routing protocol based on cross layer design for ad hoc networks. In: 2011 International Conference on Internet Computing and Information Services (ICICIS), pp. 261–264, Aug 2011
4. Qu, Q., Milstein, L.B., Vaman, D.R.: Cross-layer distributed joint power control and scheduling for delay-constrained applications over CDMA-based wireless ad-hoc networks. IEEE Trans. Commun. 58(2), 669–680 (2010)
5. Lim, A.O., Kado, Y.: Using game theory for power and rate control in wireless ad hoc networks. In: 2007 Annual Conference of SICE, pp. 1166–1170, Sept 2007
6. Dey, A.K.: Providing architectural support for building context-aware applications. Georgia Institute of Technology (2000)
7. Roy, N., Roy, A., Das, S.: Context-aware resource management in smart homes: a Nash H-learning based approach. IEEE Perv. Comput. Commun. 151–158 (2006)
8. Malatras, A., Pavlou, G.: Exploiting context-awareness for autonomic management of ad hoc networks. J. Netw. Syst. Manage. 15(1), 29–55 (2007)
9. Liu, Q., Linge, N., Wang, J., et al.: A context-aware management and control mechanism in a mobile ad-hoc environment. Int. J. Grid Distrib. Comput. 5(4), 122–127 (2012)
10. Lee, K.W., Cha, S.H.: Ontology-based context-aware management for wireless sensor networks. In: Advances in Computer Science, Environment, Ecoinformatics, and Education, pp. 353–358. Springer, Berlin (2011)

A Secure Model Based on Hypergraph in Multilayer and Multi-domain Intelligent Optical Network Qiwu Wu and Jie Lu

Abstract To resolve the security problems existing in multilayer and multi-domain intelligent optical networks, a multilevel and expansive security architecture is presented, based on a classified description of the threats and the design of syncretic mechanisms. Finally, the security model is described by a mathematical model based on hypergraph theory: the hyperedges represent the security layers and the nodes represent the corresponding security technologies. The proposed model simplifies the complexity of the security architecture and facilitates future quantitative mathematical analysis.

Keywords Intelligent optical network · Multilayer and multi-domain · Hypergraph · Secure model

1 Introduction As the Internet continues to expand the scale, intelligent optical network is faced with multi-domain requirements [1]. Intelligent optical network of multilayer and multiregion includes the transport plane, the control plane, and the management plane. The characteristic of multilayer and multi-domain increased the difficulty of optical network security management. Hence, safety should be considered as the important factor at the beginning of the network planning and construction. Security architecture is the premise and the foundation to guarantee the security of intelligent optical network. Considering that many RFC documents lack of systematic discuss in light of optical network security problems, IETF has released the ninth version of optical network security draft in 2010 April [2]. The draft Q. Wu (&) Armed Police Engineering University, Xi’an 710086, China e-mail: [email protected] J. Lu 68036 PLA Troops, Xi’an 710086, China © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_4

35

36

Q. Wu and J. Lu

describes the security threats and the overall countermeasures of optical networks from the user and the service provider’s point of view, which includes identity authentication technology, data source authentication, access control technology, etc. Although the draft does not give the detailed implementation scheme, it points out the research direction of information security in intelligent optical network. So, we presented a multilevel and expansive security architecture in multilayer and multi-domain optical network. Finally, the security model is described by a mathematical model based on hypergraph theory. The proposed model simplified the complexity of security architecture and facilitated the analysis of future quantitative mathematical.

2 The Proposed Security Architecture 2.1

Analysis of Security Threats

The security problem of multilayer and multi-domain intelligent optical network is divided into two aspects: physical optical layer security and network information security. Among them, security threats existing in the physical optical layer can be divided into two categories: service disruption and eavesdropping [3]. Service disruption refers to stop the normal communication or reduce network quality of services (QoS). Specifically, it includes the attacks of fiber, optical amplifier, wavelength selective switching equipment, and so on. The eavesdropping refers to that the illegal user access to the optical node device or intercept signal in optical fiber. Unlike the security threats of physical light layer, the information security problem in intelligent optical network of multilayer and multi-domain is more complex, the more vulnerable to various attacks [4]. This leads to the more vulnerable to various attacks. These attacks can be divided into two categories: active attacks and passive attacks. The threats include unauthorized optical path establishment of, tamper message, replay message, forgery message, denial of service, false routing topology information of multi-domain, collusion attack, eavesdropping, etc. Hence, general security mechanisms in multilayer and multi-domain intelligent optical network must be provided, such as encryption and decryption, integrity detection, authentication, access control, nonrepudiation mechanism, key management, and so on.

2.2

The Proposed Security Architecture

By considering the security problem in multilayer and multi-domain intelligent optical network and using the syncretic mechanisms, a multilevel and expansive

A Secure Model Based on Hypergraph in Multilayer …

37

security architecture is proposed. In terms of multilevel, the security architecture involves the transport plane, the control plane, and the management plane. Meanwhile, the exchange visits of different entities are considered. Besides, the expansive characteristic means that the pertinent secure mechanisms can be chosen according to the real demands. The multilevel and expansive security architecture is revealed in Fig. 1. Thereinto, it involves the security transport layer, security control layer, and security management layer. At the same time, it includes the security management entity and the security services entity.

security management layer

security transport layer access management

key management

security manageme -nt entity

security secure strategy management entity system security

security control layer

integrity protection

integrity protection secure middlewa -re

authentication access control non-repudiation

secure interface

encryption

encryption

secure codeing

key management

access control

security service entity

security service entity

authentication

non-repudiation

intrusion detection trustworthiness evaluation

database security privacy protection

multi-domain management

key management

security manageme -nt entity

encryption integrity protection multidomain AAA

authentication access control

security service entity

non-repudiation secure routing, secure singling, secure link management

management entity

service entity

Fig. 1 Multilayers and high flexible security architecture

data flow

control flow


(1) Security transport layer. The transport level of an intelligent optical network transmits data for different users in the form of all-optical or photoelectric networks. In the security transport layer, the security management entity implements secure access control management, key management, and so on, while the security service entity provides generic security services such as encryption and decryption and integrity checking. In addition, the management entity must distribute the relevant policies and parameters to the security service entity.

(2) Security control layer. The control level of an intelligent optical network provides control strategies for the transport layer, such as routing, signaling, and link management. In the security control layer, the management entities cover multi-domain security management and key management. The security service entities must execute the generic security services and, in addition, carry out secure routing; authentication, authorization, and accounting (AAA) for the multi-domain optical network; secure signaling; secure link management; and so on.

(3) Security management layer. In an intelligent optical network, the management plane coordinates the transport plane and the control plane. The management entity must supply the security application policies, system security strategies, key management, and equipment interface management for the security management layer. The security service entity not only supplies general security services similar to those of the layers above, but also needs to provide secure middleware, database security, and privacy protection.

3 Secure Model Based on Hypergraph

3.1 Model Parameters

In 1970, C. Berge proposed the concept of the hypergraph and first created undirected hypergraph theory [5]. Research on hypergraph theory and its applications is extremely wide-ranging, covering knowledge organization and representation, topic maps, clustering, cellular mobile communication systems, and so forth. We describe the security architecture of the multilayer and multi-domain optical network using hypergraph theory: each vertex represents a key security technology, and each hyperedge describes the relationship between the key security technologies and a security layer. We use the binary relation H = (V, E) to represent the proposed security architecture in Fig. 1, where the vertex set V = {v1, v2, …, vn} represents the n key technologies and the edge set is E = {e1, e2, e3}, with e1 representing the security transport layer, e2 the security control layer, and e3 the security management layer. The corresponding model parameters are defined in Table 1.


Table 1 Model parameter symbols

Symbol   Meaning                          Symbol   Meaning
e1       Security transport layer         v10      Trustworthiness evaluation
e2       Security control layer           v11      Multi-domain security
e3       Security management layer        v12      Multi-domain AAA
v1       Security access management       v13      Secure routing
v2       Key management                   v14      Secure signaling
v3       Encryption                       v15      Secure link management
v4       Integrity protection             v16      Secure strategy management
v5       Authentication                   v17      System security management
v6       Access control                   v18      Secure interface management
v7       Nonrepudiation                   v19      Database security
v8       Secure coding                    v20      Privacy protection
v9       Intrusion detection              v21      Secure middleware

3.2 Model Description

A secure model based on a hypergraph in a multi-domain intelligent optical network is shown in Fig. 2. Consistent with Table 1, e1 = {v1, v2, v3, v4, v5, v6, v7, v8, v9, v10}, e2 = {v2, v3, v4, v5, v6, v7, v11, v12, v13, v14, v15}, and e3 = {v2, v3, v4, v5, v6, v7, v16, v17, v18, v19, v20, v21}. In addition, the hypergraph model can be represented by its incidence matrix. One can further study the hypergraph structure, such as the bipartite incidence graph and partial hypergraphs, and a weight function can be introduced to represent the impact strength, allowing in-depth studies of the mutual impact of the various security technologies. It is clear that the proposed model simplifies the complexity of the security architecture and facilitates future quantitative mathematical analysis.
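The incidence-matrix representation mentioned above can be built directly from the edge sets. The sketch below is ours, with the e2/e3 membership following Table 1; it attaches no weights (those would be added for the future quantitative analysis).

```python
# Incidence matrix of H = (V, E): rows are the 21 security technologies
# v1..v21 of Table 1, columns are the hyperedges e1 (transport), e2 (control),
# e3 (management). Entry 1 means the technology belongs to that layer.

EDGES = {
    "e1": set(range(1, 11)),                          # v1..v10
    "e2": set(range(2, 8)) | set(range(11, 16)),      # v2..v7, v11..v15
    "e3": set(range(2, 8)) | set(range(16, 22)),      # v2..v7, v16..v21
}

def incidence_matrix(edges, n_vertices=21):
    cols = sorted(edges)                              # e1, e2, e3
    return [[1 if v in edges[e] else 0 for e in cols] for v in range(1, n_vertices + 1)]

M = incidence_matrix(EDGES)
print(M[1])   # row of v2 (key management), present in every layer -> [1, 1, 1]
```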

Fig. 2 A secure model based on hypergraph in multi-domain optical network



4 Conclusion

Security is a key factor that must be considered for every network. We propose a multilevel and expansive security architecture for multilayer and multi-domain optical networks. Finally, the security model is described by a mathematical model based on hypergraph theory. The proposed model simplifies the complexity of the security architecture and facilitates future quantitative mathematical analysis.

Acknowledgments The work is supported by the National Natural Science Foundation (61402529) and the basic research foundation of Armed Police Engineering University (WJY201417, XJY201403).

References

1. Lehman, T., Xi, Y., Guok, C.P., et al.: Control plane architecture and design considerations for multi-service, multi-layer, multi-domain hybrid networks. IEEE Commun. Mag. 11, 67–71 (2008)
2. Fang, L., Behringer, M., Callon, R., et al.: Security Framework for MPLS and GMPLS Networks, pp. 1–30. Draft-ietf (2010)
3. Fok, M.P., Zhe, X.W., Yan, H.D.: Optical layer security in fiber-optic networks. IEEE Trans. Inf. Forensics Secur. 3, 725–736 (2011)
4. Wu, Q.W., Jiang, L.Z.: Research on security key technologies of multi-layer and multi-region intelligent optical network. Opt. Commun. Technol. 12, 1–5 (2012)
5. Berge, C.: Graphs and Hypergraphs. North-Holland, Amsterdam (1973)

Creating a Mobile Phone Short Message Platform Applied in the Continuing Nursing Information System Yujie Guo, Yuanpeng Zhang and Fangfang Zhao

Abstract This study aims to create a mobile phone short message platform that can be used in a continuing nursing information system for chronic disease patients. The platform consists of mobile phones, short message gateway equipment, the continuing nursing system, and heterogeneous data conversion middleware. The mobile phone is the mobile terminal, with the functions of editing and receiving messages; the short message gateway is a wireless device that quickly sends and receives short messages; the continuing nursing information system is a short message platform that can assess, intervene in, and evaluate the health status of discharged chronic disease patients by editing, sending, receiving, and analyzing short messages. The heterogeneous data conversion middleware connects the database of the continuing nursing information system with the electronic medical records databases of third-party hospitals. This research applies modern mobile communication technology to improve the efficiency of continuing nursing and promote the health of chronic disease patients.

Keywords Mobile phone · Message · Information · Continuing nursing

1 Introduction

In recent years, along with the fast development of science and technology, the mobile phone has become a popular communication tool. At the same time, it has provided a large space and a new platform for all walks of life to carry out all kinds of services. The short message service (SMS) of the mobile phone has the advantages of highly accurate interaction, high speed, low cost, wide reach, flexibility, and near-instantaneous delivery compared with traditional mass media [1, 2]. With a large client base, SMS has become the second most popular mobile service after voice calls. With technological connectivity to China Mobile, Unicom, Telecom, and Netcom, SMS has become a convenient service platform for many enterprises and institutions for business operations and other services.

Globally, owing to the high incidence of chronic disease, patients need long-term care after hospitalization. A large number of chronic disease patients, along with their families, suffer from complications, lifestyle changes, lack of health knowledge, heavy financial burdens, and so on. They urgently need professional help with their cognition, behaviors, and attitudes to improve their quality of life. In recent decades, continuing nursing has become popular in Western countries; it means that the hospital and the community are united to provide continuous health services jointly, making the patient's rehabilitation an integrated process. Continuing nursing covers different stages, such as the hospitalization stage, the transfer stage, and the community and family stage, and it makes nursing care a seamless, continuous process. In our country, the most popular forms of continuing nursing for chronic disease are home visits and telephone follow-up. Although these can provide a certain amount of nursing intervention, owing to complicated factors such as limited staff, low employee qualifications, lack of policy support, poor financial support, and so on, domestic continuing nursing remains short in duration, narrow in coverage, and markedly uneven across regions. For the majority of discharged patients, the time spent at home is still a blind spot of nursing care, where their health can be neither monitored nor managed. It is therefore time to develop a new style of continuing nursing to compensate for the current deficiency. In this study, we established a continuing nursing information system based on mobile phone SMS to assess patients' physiological and psychological problems and to evaluate the effect of continuing nursing. Patients' information is gathered into a database, which makes it convenient for community healthcare workers to judge, through data analysis, whether patients need a family visit, hospitalization, or transfer. The database can also be connected with the hospital's electronic medical records, making the patient's health data an integrated whole.

Y. Guo (✉) · F. Zhao
School of Nursing, Nantong University, 19#, Qixiu Road, Nantong 226001, Jiangsu, China
e-mail: [email protected]
Y. Zhang
Department of Medical Information, Nantong University, 19#, Qixiu Road, Nantong 226001, Jiangsu, China

2 The Constitution and Function of the SMS Continuing Nursing Information System

The SMS continuing nursing information system includes mobile phones, message gateway equipment, the continuing nursing information system software, and the heterogeneous data conversion middleware (see Fig. 1). Through the Internet, nurses use the SMS continuing nursing information system to stay connected with discharged chronic disease patients. Patients' health information is gathered in the nursing information database. Through the middleware, patients' important health information can be stored in the hospital's electronic medical records database, so that doctors and nurses can closely supervise discharged patients.

Fig. 1 The SMS continuing nursing information system (clients' mobile phones, the Internet, the short message gateway, the continuing nursing system with its nursing information database, the middleware, and the hospital electronic medical records database; arrows indicate one-way and two-way data flows)

2.1 The Mobile Phone

The mobile phone terminal is any phone capable of sending and receiving short messages.

2.2 The Message Gateway Equipment

The message gateway equipment is a communication gateway device that sends and receives messages wirelessly. It is connected to the computer via an RS232 serial port. When the system is deployed, the equipment is driven through a secondary development package based on AT commands, so that applications can directly use its application programming interface (API) to send and receive short messages.
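As a concrete illustration of this interface, the sketch below drives a GSM-style gateway modem over its serial port with standard AT commands (text mode) using Python and the pyserial package. The port name, baud rate, recipient number, and message text are illustrative assumptions; in practice the vendor's secondary development package would wrap such commands behind its own API.

```python
# Minimal sketch: sending one short message through a gateway modem over RS232
# using standard AT commands in text mode. Port name, baud rate, and phone
# number are illustrative placeholders.
import time

import serial  # pyserial


def send_sms(port: str, number: str, text: str) -> None:
    with serial.Serial(port, baudrate=9600, timeout=5) as ser:
        def cmd(data: bytes, pause: float = 0.5) -> bytes:
            ser.write(data)
            time.sleep(pause)
            return ser.read(ser.in_waiting or 1)

        cmd(b"AT\r")                                  # check that the modem answers
        cmd(b"AT+CMGF=1\r")                           # switch to SMS text mode
        cmd(('AT+CMGS="%s"\r' % number).encode())     # start a message to the recipient
        cmd(text.encode() + b"\x1a", pause=3)         # message body, terminated by Ctrl-Z


if __name__ == "__main__":
    send_sms("/dev/ttyS0", "+8613800000000",
             "Did you take a deep breath today? Reply 1 for Yes, 2 for No.")
```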


The message gateway equipment is mainly composed of the message gateway interface module, the uplink message receiving interface module, the downlink message sending interface module, the uplink message dispatcher, and the gateway database (Fig. 2). The message gateway interface module implements the gateway interconnection protocols, such as the China Mobile peer-to-peer (CMPP) protocol, the Unicom short message gateway interface protocol (SGIP), and the Personal Handyphone System short message gateway protocol (SMGP), and establishes the connection between the Internet short message gateway (ISMG) and the service provider. It acts as a bridge between the service provider and the short message service center (SMSC). When a mobile phone sends a message, the message is collected at the message center, which forwards it to the service provider's message gateway interface and then to the uplink message receiving interface. The uplink message dispatcher reads a business routing Extensible Markup Language (XML) configuration file, which holds the correspondence between each business service number and its "message receiving data table". The dispatcher receives information from the uplink message receiving interface module and, according to the routing information, writes the message into the corresponding business "message receiving data table" of the gateway database. The message is then passed to the message service model.
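A minimal sketch of such an uplink dispatcher is shown below. It assumes a hypothetical routing file routing.xml that maps each business service number to the name of its "message receiving data table", and uses SQLite only for illustration; the real gateway database, interface modules, and protocols are not modeled.

```python
# Sketch of an uplink message dispatcher: read the business routing XML file,
# look up the target "message receiving data table" for the service number,
# and store the incoming short message there. File name, XML layout, and the
# SQLite database are illustrative assumptions.
import sqlite3
import xml.etree.ElementTree as ET

# routing.xml (assumed layout):
# <routes>
#   <route service="1065001" table="msg_recv_nursing"/>
#   <route service="1065002" table="msg_recv_followup"/>
# </routes>


def load_routes(path="routing.xml"):
    tree = ET.parse(path)
    return {r.get("service"): r.get("table") for r in tree.findall("route")}


def dispatch_uplink(db, routes, service_number, sender, content):
    table = routes.get(service_number)
    if table is None:
        raise ValueError("no route configured for service %s" % service_number)
    db.execute(
        "CREATE TABLE IF NOT EXISTS %s "
        "(sender TEXT, content TEXT, received_at TEXT DEFAULT CURRENT_TIMESTAMP)" % table
    )
    db.execute("INSERT INTO %s (sender, content) VALUES (?, ?)" % table, (sender, content))
    db.commit()


if __name__ == "__main__":
    conn = sqlite3.connect("gateway.db")
    routes = load_routes()
    dispatch_uplink(conn, routes, "1065001", "+8613800000000", "1")  # patient replied "1" (Yes)
```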

Fig. 2 The message gateway equipment (operator gateway, message gateway interface module, uplink message receiving interface module, downlink message sending interface module, uplink message dispatcher, gateway database, and message service model; arrows indicate one-way and two-way data flows)

The message service model writes outgoing messages into the "message sending data table" of the gateway database. The downlink message sending interface module queries this table, reads the message data, and passes each message to the message gateway interface module, which finally sends it to the operator gateway.
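The downlink path can be sketched in the same spirit: a small polling loop reads unsent rows from an assumed "message sending data table" and hands each one to the gateway interface, which is stubbed out here. Table and column names are illustrative, not those of the actual system.

```python
# Sketch of the downlink path: the message service model has written rows into
# the "message sending data table"; this loop reads unsent rows and pushes them
# to the operator gateway through the message gateway interface (stubbed here).
import sqlite3
import time


def send_to_gateway(number: str, text: str) -> None:
    # Stand-in for the message gateway interface module (e.g., a CMPP/SGIP/SMGP
    # client or the AT-command helper sketched earlier).
    print("-> gateway:", number, text)


def downlink_loop(db_path="gateway.db", poll_seconds=10):
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS msg_send "
        "(id INTEGER PRIMARY KEY, number TEXT, content TEXT, sent INTEGER DEFAULT 0)"
    )
    while True:
        rows = db.execute("SELECT id, number, content FROM msg_send WHERE sent = 0").fetchall()
        for msg_id, number, content in rows:
            send_to_gateway(number, content)
            db.execute("UPDATE msg_send SET sent = 1 WHERE id = ?", (msg_id,))
        db.commit()
        time.sleep(poll_seconds)
```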

2.3 The Continuing Nursing Information System

The continuing nursing information system includes six modules: the patients' self-assist module, the patients' management module, the short message management module, the advanced query module, the data analysis module, and the print module (see Fig. 3).

Patients' Self-assist Module. After registering, a patient logs in with a user name and password and can then see all of his or her own data. The system sets access limits so that patients cannot see other patients' information without permission. In this module, patients can also query cost information and operation details, and order or cancel services.

Fig. 3 The continuing nursing information system (print, data analysis, advanced query, short message management, patients' management, and patients' self-assist modules; the short message management module contains the patients' self-evaluation information, health education information, assessment information, reminding information, and short message database submodules)

Patients' Management Module. The patients' management module is operated by hospital nurses. It has two functions: the patient list and patient editing. The patient list supports patient inquiries and adding or deleting patients; patient editing supports modifying patients' data.

Short Message Management Module. The short message management module implements the functions of editing, saving, sending, and receiving messages, inquiries, and so on. It is divided into the following submodules.

Short Message Database Module. The continuing nursing messages are edited to cover the dimensions of the patient's environment, psychology, physiology, health behavior, and others.

Reminding Information Module. Hospital nurses select specialized short messages to remind or supervise patients to accomplish self-care and rehabilitation exercises according to each patient's condition. For example, the message for lung cancer patients after an operation is: "Did you take a deep breath today?" Under the message there is an operating area where the patient answers by selecting a number in the display box: 1. Yes, 2. No. If 1 is selected, the automatic reply is "You did it, keep it up!"; if 2 is selected, the automatic reply is "Please keep at it, come on!" (a minimal sketch of this auto-reply logic is given at the end of this section).

Assessment Information Module. By editing and sending short messages, hospital nurses assess the patient's condition in the four areas of environment, psychology, physiology, and health-related behavior. From the patient's replies, nurses can judge whether the patient has problems and how severe they are, and then decide whether the patient needs further service.

Health Education Information Module. Health education messages are sent regularly by the hospital nurses, either in bulk or individually. Bulk information can be sent automatically by the system, while individualized health education is tailored to the patient's needs.

Patients' Self-evaluation Information Module. We designed a continuing nursing evaluation standard covering three aspects: knowledge, behavior, and status. Nurses send self-evaluation messages to patients regularly, and patients are required to evaluate themselves by answering the messages. According to the patient's answers, nurses give each patient a score.

Advanced Query Module. The advanced query module allows patients or nurses to query a patient's information from his or her hospitalization. In view of Chinese cultural customs, nurses may limit part of a patient's query rights according to the opinions of the patient's family.

Data Analysis Module. This module performs classification, sorting, and statistical analysis of the patient information in the SMS continuing nursing information system. The information includes the patient's name, sex, date of birth, age, address, occupation, telephone number, hospitalization time, discharge time, diagnosis, treatment, pathological classification, pathologic stage, clinical stage, and so on. The analysis module automatically lists statistical results for each item, such as the proportion of patients at each clinical stage, the general data structure, and self-reported scores. The analysis results alert nurses when it is necessary to strengthen a patient's health education through the SMS continuing nursing information system, a home visit, or a telephone follow-up. This improves the efficiency of continuing nursing, saves human resources, and strengthens the link between hospital and patients. At the same time, it provides abundant data for clinical research.

Print Module. Patient information can be printed.
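The numeric-answer auto-reply described above for the reminding information module reduces to a small lookup; the sketch below illustrates it, with the reply texts taken from the example in the text and everything else (function names, fallback message) assumed for illustration.

```python
# Sketch of the reminding-module auto-reply: the patient answers a reminder by
# sending "1" (Yes) or "2" (No), and the system replies with an encouraging
# message. Reply texts follow the example in the text; dispatching is stubbed.
AUTO_REPLIES = {
    "1": "You did it, keep it up!",
    "2": "Please keep at it, come on!",
}


def auto_reply(patient_answer: str) -> str:
    answer = patient_answer.strip()
    return AUTO_REPLIES.get(answer, "Sorry, please reply 1 for Yes or 2 for No.")


if __name__ == "__main__":
    print(auto_reply("1"))
    print(auto_reply("2"))
```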

2.4 The Heterogeneous Data Conversion Middleware

Design Philosophy. The heterogeneous data conversion middleware (Fig. 4) realizes data exchange between the continuing nursing information database and the hospital electronic medical records (EMR). The EMR, also called a computerized medical record or computer-based patient record (CPR) system, uses electronic devices to store, manage, transmit, and present digital medical records in place of paper records. In our study, the key problem is how to translate heterogeneous data into a unified form. The heterogeneous data are represented in XML, and the open synchronization standard SyncML (Synchronization Markup Language) is used to complete the data transmission [3].

Fig. 4 The heterogeneous data conversion middleware (basic data query/unified query, XML documents, basic relational data storage, the XML-based integration middleware performing downlink integration and uplink implementation, the database, and the heterogeneous data)

Extensible Markup Language (XML) is an open, structured metadata format. Because an XML document carries its own structure, it provides a shared "understanding" between exchanged documents. XML is therefore an ideal solution for the massive information flow in EMR systems, for the diversity and complexity of their data types, and for data exchange and sharing between heterogeneous systems. The XML middleware layer has two main tasks. The first is to acquire and convert data from the source databases: according to the client's data demand, a select statement is generated, and the data are extracted from the source database and converted into an XML document. The second task is to transform and

update the data: the XML documents are transformed back into database records. According to the transformation rules, insert or update statements are generated to complete the update of new records. The XML exchange rules library contains two kinds of rules, the exchange sequence control documents and the mapping files between database tables.

Specifically, the XML-based integration middleware sits between the basic relational data storage (the heterogeneous database systems) and the basic data query (unified query). On one hand, it collects information from the various heterogeneous databases and coordinates all of them downward; on the other hand, by integrating them under a unified data model, it provides a unified query interface upward for applications that access the heterogeneous data. The XML integration middleware [4] transforms the metadata of each heterogeneous database into a global virtual view through the appropriate mapping files. When a customer submits a job, the middleware analyzes and decomposes the query, transforming the query against the logical virtual view into subqueries against each physical database, and returns the query results to the customer as XML documents. According to the business rules specified by the customer, the various XML sub-documents are filtered and merged; finally, the synthesized XML document is added to the corresponding XML file and returned to the client side through the customer's access interface.

Architecture. The XML middleware adopts a browser/server (B/S) structure and is constructed between the database service system and the applications. The structure contains three layers (Fig. 5): (1) a data layer consisting of the heterogeneous data sources; (2) a middle layer consisting of the business logic, that is, the middleware of the system; and (3) an application layer consisting of application programs and their access interfaces.

Fig. 5 The architecture of the XML middleware (application layer: Web, management, and other interfaces; middle layer: query creating, query analyzing, query decomposing, results synthesis, safety control, data cache, global view, and configuration/connection pool; data layer: wrappers 1…n over databases 1…n)

The Implementation Process of the XML Middleware. The implementation process can be divided into two stages. The first stage is the middleware construction process, which realizes two main functions: heterogeneous data source integration and heterogeneous data format conversion. The second stage is the query execution process, which mainly concerns the conversion from a query on the global virtual view to queries on each physical database. The construction algorithm is as follows: (1) extract the information tables and view information of each heterogeneous database and choose the contents to be integrated; (2) resolve structural and semantic conflicts; (3) call the mode conversion algorithm and generate the corresponding global view file.
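A highly simplified sketch of the query stage of such an XML integration middleware is given below: a query against the global virtual view is decomposed into subqueries on each physical source according to an assumed mapping, each result set is wrapped as an XML sub-document, and the sub-documents are merged into one XML document for the client. The mapping dictionary, the in-memory SQLite sources, and all names are illustrative assumptions; conflict resolution and the SyncML transport are omitted.

```python
# Simplified sketch of the query stage of an XML-based integration middleware:
# decompose a query on the global virtual view into subqueries on each physical
# database, wrap each result as an XML sub-document, and merge them into one
# XML document. The mapping and the in-memory databases are illustrative.
import sqlite3
import xml.etree.ElementTree as ET

# Assumed global-view mapping: global field -> (source name, table, column)
MAPPING = {
    "patient_id": ("nursing_db", "followup", "pid"),
    "blood_pressure": ("nursing_db", "followup", "bp"),
    "diagnosis": ("emr_db", "records", "dx"),
}


def decompose(fields):
    """Group requested global fields into one subquery per physical source."""
    per_source = {}
    for f in fields:
        source, table, column = MAPPING[f]
        per_source.setdefault((source, table), []).append((f, column))
    return per_source


def run_query(sources, fields):
    merged = ET.Element("result")
    for (source, table), cols in decompose(fields).items():
        names = ", ".join(c for _, c in cols)
        rows = sources[source].execute("SELECT %s FROM %s" % (names, table)).fetchall()
        sub = ET.SubElement(merged, "subdocument", source=source)
        for row in rows:
            rec = ET.SubElement(sub, "record")
            for (global_name, _), value in zip(cols, row):
                ET.SubElement(rec, global_name).text = str(value)
    return ET.tostring(merged, encoding="unicode")


if __name__ == "__main__":
    nursing = sqlite3.connect(":memory:")
    nursing.execute("CREATE TABLE followup (pid TEXT, bp TEXT)")
    nursing.execute("INSERT INTO followup VALUES ('P001', '135/85')")
    emr = sqlite3.connect(":memory:")
    emr.execute("CREATE TABLE records (dx TEXT)")
    emr.execute("INSERT INTO records VALUES ('hypertension')")
    print(run_query({"nursing_db": nursing, "emr_db": emr},
                    ["patient_id", "blood_pressure", "diagnosis"]))
```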

3 Discussion

Compared with previous work, our study has the following advantages. We adopted a modern communication technology, the mobile phone short message service, in the continuing nursing information system and thus developed a new style of continuing nursing for chronic disease patients. The SMS continuing nursing information system makes up for the current forms of continuing nursing, family visits and telephone follow-up, which suffer from narrow coverage, heavy consumption of human and material resources, limited content, and so on. Our study enables the information of discharged patients to be collected completely. By screening and uploading through the middleware, patients' continuing nursing information is connected with the hospital's electronic medical records, which ensures the integrity of patient information.

At present, hospital information systems (HIS) use different databases, such as SQL Server and Oracle, and run on different operating platforms, such as Windows, UNIX, and AIX. Middleware technology is therefore needed to integrate the continuing nursing information data with such heterogeneous data. Currently, the mature middleware technologies are the TSIMMIS system and distributed heterogeneous data source integration prototypes based on the common object request broker architecture (CORBA). The former has the disadvantage that data sources are difficult to add dynamically; the latter can only communicate and transfer data within a peer-to-peer architecture. Both therefore have shortcomings. In our study, the XML-based heterogeneous data integration middleware has the advantages of platform independence, scalability, interoperability, semantic description, and convenient data transmission, so it is an ideal medium for data interaction.


4 Conclusion

This study applied modern mobile communication and Web technology to improve the efficiency of continuing nursing. The key technology is the heterogeneous data conversion middleware, which connects the continuing nursing system and the hospital electronic medical records. The integration middleware is based on XML and gathers the large amount of information needed for EMR integration. In the near future, more and more people will benefit from modern communication technologies and live more comfortably.

Acknowledgments This study was sponsored by the project "The development of a continuing nursing system for cancer patients based on the short message system in Nantong," Nantong Science and Technology Bureau (Grant No. BK2013073).

References

1. Qiang, C., Tian, Y.S., Li, H.S., Xu, T.: Study of the SMS platform optimization technology based on sliding window and load balancing mechanism. In: Proceedings 7th Web Information Systems and Applications Conference, WISA (2010)
2. Wang, G.R., Li, D.L., Lü, Z.Q., Duan, Q.L., Wen, J.W.: Design and implementation of SMS-platform system for diagnosis of fish diseases. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 25(3), 130–134 (2009)
3. Bojańczyk, M., Kołodziejczyk, L.A., Murlak, F.: Solutions in XML data exchange. J. Comput. Syst. Sci. 79, 785–815 (2013)
4. Thuy, P.T.T., Lee, Y.-K., Lee, S.: S-Trans: Semantic transformation of XML healthcare data into OWL ontology. Knowl.-Based Syst. 35, 349–356 (2012)

Enhancing the Bit Error Rate of Visible Light Communication Systems Using Channel Estimation and Channel Coding Tian Zhang, Shuxu Guo and Haipeng Chen

Abstract This paper presents an improved scheme to enhance the bit error rate (BER) of indoor visible light communication (VLC) systems. The proposed scheme uses cascaded-code-based channel coding and least squares discrete Fourier transform (LS-DFT)-based channel estimation to improve the robustness of the indoor optical wireless communication link. The simulation results demonstrate that a 3–5 dB signal-to-noise ratio (SNR) gain can be achieved at a BER of 10−3, below the forward error correction (FEC) limit, for a 16-quadrature amplitude modulation (QAM) asymmetrically clipped optical orthogonal frequency-division multiplexing (ACO-OFDM) visible light communication system.

Keywords Light-emitting diodes · Free-space optical communication · Optical data processing

1 Introduction

Visible light communication (VLC) uses white light-emitting diodes (WLEDs) to realize illumination and data transmission at the same time. Because LEDs have a fast response, they can be used to build high-speed wireless communication links. Furthermore, this new wireless technology offers an important supplement to indoor radio frequency (RF) communication [1]. In recent years, VLC has also been widely applied in indoor positioning, underwater communications, and traffic light systems [2].

The primary purpose of a VLC system is to provide high-speed communication with high reliability. In 2012, German scientists achieved a capacity of 1.25 Gbit/s within the FEC limit of 2 × 10−3 using WDM and DMT technology with a red-green-blue tri-color LED transmitter [3]. In the same year, Scuola Superiore Sant'Anna experimentally realized a data rate of 1.5 Gbit/s with a single channel and 3.4 Gbit/s with WDM transmission at standard illumination levels; in both experiments the BERs were below 2 × 10−3 [4]. In 2013, the University of Oxford reported an experimental demonstration of indoor wireless visible light communication at 1 Gbit/s using a four-channel MIMO white LED source, each channel transmitting at 250 Mb/s with OFDM modulation at an average BER of 10−3 [5].

In this paper, we significantly improve the reliability of an ACO-OFDM visible light communication system using channel coding and channel estimation techniques. In the first simulation experiment, we analyze the difference in channel estimation performance between the traditional LS and the LS-DFT algorithms applied to the VLC system, and show that the LS-DFT algorithm improves the BER performance compared with the traditional LS algorithm. In the second simulation experiment, we show that the system performance can be further enhanced by a cascaded code consisting of a convolutional code and a BCH code.

2 System Overview

The block diagram of the complete optical ACO-OFDM system is shown in Fig. 1.

Fig. 1 Block diagram of the VLC system

At the transmitter, a stream of binary data is obtained from the arbitrary waveform generator (AWG). The data are encoded by the cascaded code and mapped into complex values using quadrature amplitude modulation (QAM) [6]. In order to make full use of the limited bandwidth, orthogonal frequency-division multiplexing (OFDM) is adopted to improve the spectrum efficiency and implement high-speed communication. The inverse fast Fourier transform (IFFT) modulates the QAM symbols onto mutually orthogonal subcarriers, and the OFDM scheme effectively suppresses inter-symbol interference (ISI). In practical systems, channel distortions cannot be ignored; therefore, we use comb pilots to estimate the channel state information. Furthermore, a guard interval (GI) and cyclic prefix (CP) are used to eliminate ISI and inter-carrier interference (ICI) in the OFDM signal and for synchronization at the receiver [7].

Unlike RF communication, VLC systems modulate the LED luminance to transmit data, so a real and positive signal is needed to drive the LED. To obtain the required signal, the N information symbols are arranged to satisfy the Hermitian symmetry (mirroring) property

X[k] = X*[2N + 1 − k],  1 ≤ k ≤ N,  (1)

and, to make full use of the dynamic range of the LED, clipping and scaling are used to mitigate the peak-to-average power ratio (PAPR) of the OFDM signal. After the Hermitian mirroring operation, the 2N signals are distributed on the even subcarriers and zeros on the odd subcarriers, giving 4N subcarriers in total. A real and positive transmitted signal is then obtained by a 4N-point IFFT [8]. The input signal of the IFFT is constructed as in Fig. 2.

Fig. 2 Input signals for the IFFT

After parallel/serial (P/S) and digital/analog (D/A) conversion, the real and positive signal is fed to the phosphor-based white LED. At the receiver, a commercial photoelectric detector (PD) with a blue filter detects the intensity of the incident blue light component. After A/D conversion, bit synchronization, parallelization, and removal of the CP, the signal is sent to the FFT block. The channel state information is obtained using the LS-DFT-based channel estimation algorithm, and the received signal is equalized using zero forcing. After compensation and correction, the complex symbols are demapped and decoded, and the binary data stream is recovered.
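The frame construction can be sketched in a few lines of Python/NumPy, as below. The sketch simply demonstrates Hermitian-symmetric subcarrier loading (so that the IFFT output is real) followed by zero-clipping to obtain a non-negative LED drive signal; it places data on the odd subcarriers, as in conventional ACO-OFDM, and its FFT size and QAM mapping are illustrative rather than the exact parameters of the paper.

```python
# Minimal sketch of forming a real-valued, non-negative OFDM drive signal:
# QAM symbols are placed on a subset of subcarriers, the Hermitian-symmetric
# image is added so the IFFT output is real, and negative samples are clipped
# to zero before the signal drives the LED. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)


def qam16_symbols(n):
    # 16-QAM points on the {-3, -1, 1, 3} grid (bit labelling omitted).
    levels = np.array([-3, -1, 1, 3])
    return rng.choice(levels, n) + 1j * rng.choice(levels, n)


def aco_ofdm_frame(n_fft=256):
    data_bins = np.arange(1, n_fft // 2, 2)          # odd subcarriers in the first half
    X = np.zeros(n_fft, dtype=complex)
    X[data_bins] = qam16_symbols(data_bins.size)
    X[n_fft - data_bins] = np.conj(X[data_bins])      # Hermitian symmetry -> real IFFT output
    x = np.fft.ifft(X).real                           # imaginary part is ~0 by construction
    return np.clip(x, 0.0, None)                      # clip negatives: non-negative LED drive


if __name__ == "__main__":
    frame = aco_ofdm_frame()
    print(frame.min(), frame.shape)                   # minimum is 0.0, length 256
```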

3 LS-DFT-Based Channel Estimation

For an OFDM system, the LS-DFT-based channel estimation technique can eliminate the noise outside the maximum channel delay and therefore outperforms the plain LS channel estimation algorithm [9]. In this system, comb pilot tones are inserted periodically over the OFDM subcarriers, and the whole channel frequency response along the frequency axis is estimated by frequency-domain interpolation.

Fig. 3 LS-DFT-based channel estimation (the LS estimates Ĥ[0], …, Ĥ[N−1] are transformed to the time-domain taps ĥ[0], …, ĥ[N−1]; the taps beyond index L−1 are discarded, and the remaining taps are transformed back to obtain Ĥ_DFT[0], …, Ĥ_DFT[N−1])

Through applying the LS channel estimation method, the channel gain of the kth subcarrier in the frequency domain, Ĥ[k], can be obtained. Its inverse DFT gives the time-domain estimate

ĥ[n] = IDFT{Ĥ[k]} = h[n] + z[n],  n = 0, 1, …, N − 1,  (2)

where z[n] denotes the noise component in the time domain. The channel response in the time domain is then expressed as

ĥ_DFT[n] = h[n] + z[n] for n = 0, 1, …, L − 1, and ĥ_DFT[n] = 0 otherwise.  (3)

After reselecting the first L elements of the channel in the time domain, we transform them back to the frequency domain to get the proposed channel frequency response:

Ĥ_DFT[k] = DFT{ĥ_DFT[n]}.  (4)

Figure 3 shows a block diagram of the LS-DFT-based channel estimation. Because the energy of the channel impulse response is concentrated in the first few samples in the time domain, we use the IDFT to obtain the time-domain channel response, retain only the first L elements to eliminate the noise outside the channel length, and transform them back into the frequency domain with the DFT. Note that the maximum channel delay L must be known exactly in advance.
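A compact numerical sketch of the LS-DFT procedure of Eqs. (2)–(4) is shown below; the synthetic channel, training symbols, and noise level are illustrative assumptions, and in a real receiver the LS estimate would come from the comb pilots rather than a full training symbol.

```python
# Sketch of LS followed by DFT-based denoising (Eqs. (2)-(4)): the LS estimate
# is taken to the time domain with an IDFT, samples beyond the maximum channel
# delay L (assumed known) are zeroed, and the result is transformed back with
# a DFT. Channel, noise level, and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, L, snr_db = 256, 32, 15

h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.exp(-np.arange(L) / 8)
H_true = np.fft.fft(h, N)                              # true channel frequency response

X = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))     # unit-modulus training symbols
noise = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise *= np.sqrt(np.mean(np.abs(H_true * X) ** 2) / (2 * 10 ** (snr_db / 10)))
Y = H_true * X + noise

H_ls = Y / X                                           # least-squares estimate
h_time = np.fft.ifft(H_ls)                             # to the time domain, Eq. (2)
h_time[L:] = 0.0                                       # keep only the first L taps, Eq. (3)
H_lsdft = np.fft.fft(h_time)                           # back to frequency domain, Eq. (4)


def mse(a, b):
    return float(np.mean(np.abs(a - b) ** 2))


print("LS MSE     :", mse(H_ls, H_true))
print("LS-DFT MSE :", mse(H_lsdft, H_true))            # typically smaller than plain LS
```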

4 Cascaded Codes

The channel code used in this paper is a cascaded code consisting of a convolutional code and a BCH code. A cascaded code turns short codes into a long code, which satisfies the code-length requirement of the error correction process. Compared with a single long code, it achieves the same error-correcting capability with lower complexity. The cascade separates the coding process into an inner part and an outer part, effectively increasing the code length and improving the error-correction capability. In this paper, we choose the convolutional code as the inner code and the BCH code as the outer code. The block diagram is shown in Fig. 4.

Fig. 4 Block diagram of the cascaded code
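As a small illustration of the inner code of such a cascade, the sketch below implements a generic rate-1/2 feed-forward convolutional encoder. The constraint length and generator polynomials are common textbook choices, not necessarily those used in the paper, and the BCH outer encoder and the decoders are omitted.

```python
# Sketch of a rate-1/2 feed-forward convolutional encoder, the kind of inner
# code used in such a cascade. Generators (7, 5 octal, constraint length 3)
# are illustrative; the BCH outer code and Viterbi decoding are omitted.
import numpy as np


def conv_encode(bits, generators=(0o7, 0o5), constraint_length=3):
    state = 0
    out = []
    padded = list(bits) + [0] * (constraint_length - 1)    # flush the encoder
    for b in padded:
        state = ((state << 1) | int(b)) & ((1 << constraint_length) - 1)
        for g in generators:
            out.append(bin(state & g).count("1") % 2)       # parity of the tapped bits
    return np.array(out, dtype=int)


if __name__ == "__main__":
    msg = np.array([1, 0, 1, 1, 0, 0, 1])
    code = conv_encode(msg)
    print(len(msg), "->", len(code), "coded bits")           # rate 1/2 plus tail bits
```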

5 Simulation Results and Discussions

In our experiments, the accuracy of channel estimation is the primary concern. The simulations of the two algorithms are built for a typical indoor environment, with the parameters given in Table 1. Pilots are used for channel estimation: first, the channel response at the pilot positions is estimated by the least-squares method in the frequency domain; then, the response over all subcarriers is estimated by cubic B-spline interpolation; finally, the LS-DFT channel estimate is obtained by applying the DFT-based processing to these data. The simulation results are shown in Fig. 5. Comparing the three curves in Fig. 5, it is clear that the channel estimation based on the LS-DFT algorithm performs better than the traditional LS channel estimation. The simulation results demonstrate that the LS-DFT-based channel estimation algorithm improves the estimation accuracy and can be used to compensate and correct the received data.
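The pilot-interpolation step can be sketched as follows: least-squares estimates at the comb-pilot positions are interpolated over all subcarriers with a cubic spline (SciPy's CubicSpline stands in for the cubic B-spline used in the paper). The synthetic channel and the noiseless pilots are illustrative simplifications.

```python
# Sketch of the comb-pilot step: LS estimates at the pilot subcarriers are
# interpolated to all subcarriers with a cubic spline. Channel, pilot spacing,
# and sizes are illustrative (cf. Table 1).
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
N, spacing, L = 256, 16, 32

h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.exp(-np.arange(L) / 8)
H_true = np.fft.fft(h, N)

pilot_idx = np.arange(0, N, spacing)
pilots = np.ones(pilot_idx.size, dtype=complex)          # known unit-amplitude pilots
received = H_true[pilot_idx] * pilots                    # noiseless for brevity
H_ls_pilot = received / pilots                           # LS estimate at the pilot positions

# Interpolate real and imaginary parts separately over the full subcarrier grid.
k = np.arange(N)
H_interp = CubicSpline(pilot_idx, H_ls_pilot.real)(k) \
    + 1j * CubicSpline(pilot_idx, H_ls_pilot.imag)(k)
print("interpolation MSE:", float(np.mean(np.abs(H_interp - H_true) ** 2)))
```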

Table 1 Simulation parameters

Modulation format | Signal length | Subcarriers number | Channel delay | Pilot spacing | Iteration number
16-QAM | 52 | 256 | 32 | 16 | 1000

Fig. 5 Channel estimation based on LS-DFT (power in dB versus subcarrier index; curves: true channel, LS, and LS-DFT with spline)

Figure 6a, b shows the received signal constellations before and after channel compensation for the 16-QAM-OFDM VLC system, illustrating that a suitable channel estimation algorithm can effectively alleviate the impact of channel distortion.

Fig. 6 Constellation maps before and after channel compensation. a Before compensation, b after compensation

In order to study the BER performance of the system, the performance with and without channel coding is analyzed, and a comparative analysis has been done for the 16-QAM system. The cascaded code consists of a (4, 7) BCH linear block code and a (7, [171, 133]) convolutional code, both of which are forward error correction (FEC) codes [10]. The results show that the cascaded code yields an SNR gain of about 3 dB at a BER of 10−3, as shown in Fig. 7. Therefore, the cascaded code is chosen as the channel code applied at the transmitter.

Fig. 7 BER performance comparison of the 16-QAM-OFDM systems with and without channel coding (BER versus SNR in dB; curves: cascaded code, without code)

In addition, the DFT-based channel estimation algorithm is also applied to the VLC-OFDM system. The simulation results show that the proposed method improves the SNR gain by about 4 dB in an indoor channel at a BER of 10−3 within the FEC limit, as shown in Fig. 8.

Fig. 8 BER performance comparison of the 16-QAM-OFDM systems with and without channel coding and channel estimation (BER versus SNR in dB; curves: after channel coding and estimation, without coding and estimation)

Figure 8 also shows the improved BER of the 16-QAM system. For the VLC-OFDM system, the advantage of QAM modulation is that the channel capacity increases with the order of the modulation format, which significantly improves the communication rate; a 1024-QAM scheme has recently been reported in a 1-Gb/s DMT-VLC system [11]. The disadvantage is that higher order modulation formats increase the communication rate at the cost of system complexity and higher BER.

6 Conclusions

In this paper, we demonstrated an improved scheme for a VLC-OFDM system. A 3–5 dB signal-to-noise ratio (SNR) gain can be achieved at a BER of 10−3 within the FEC limit. To achieve these results, we adopt the LS-DFT-based channel estimation to compensate and correct the amplitude and phase of the received signal, and the cascaded code to improve the performance of the communication system. The simulation results show that the proposed VLC system achieves good BER performance.

References

1. Dang, J., Zhang, Z., Wu, L.: A novel receiver for ACO-OFDM in visible light communication. J. IEEE Com. Let. 17, 2320–2323 (2013)
2. Wang, Z., Yu, C., Zhong, W.D.: Performance of a novel LED lamp arrangement to reduce SNR fluctuation for multi-user visible light communication systems. J. Opt. Exp. 20, 4564–4573 (2012)
3. Kottke, C., Hilt, J., Habel, K.: 1.25 Gbit/s visible light WDM link based on DMT modulation of a single RGB LED luminary. In: 38th European Conference and Exhibition on Optical Communication, pp. We–3. IEEE Press, Amsterdam (2012)
4. Cossu, G., Khalid, A.M., Choudhury, P.: 3.4 Gbit/s visible optical wireless transmission based on RGB LED. J. Opt. Exp. 20, B501–B506 (2012)
5. Azhar, A.H., Tran, T.A., Brien, D.O.: A Gigabit/s indoor wireless transmission using MIMO-OFDM visible light communications. J. IEEE Phot. Tech. Let. 25, 171–174 (2013)
6. Elgala, H., Mesleh, R., Haas, H.: OFDM visible light wireless communication based on white LEDs. In: 65th Vehicular Technology Conference, pp. 2185–2189. IEEE Press, Dublin (2007)
7. Wang, Y., Shao, Y., Shang, H.: 875-Mb/s asynchronous bi-directional 64QAM-OFDM SCM-WDM transmission over RGB-LED-based visible light communication system. In: Optical Fiber Communication Conference, pp. OTh1G–3. IEEE Press, California (2013)
8. Stefan, I., Elgala, H., Haas, H.: Study of dimming and LED nonlinearity for ACO-OFDM based VLC systems. In: IEEE Wireless Communications and Networking Conference, pp. 990–994. IEEE Press, Paris (2012)
9. Cho, Y.S., Kim, J., Yang, W.Y.: MIMO-OFDM wireless communications with MATLAB. Wiley, New York (2010)
10. Wang, Q., Wang, Z., Chen, S.: Enhancing the decoding performance of optical wireless communication systems using receiver-side predistortion. J. Opt. Exp. 21, 30295–30305 (2013)
11. Khalid, A.M., Cossu, G., Corsini, R.: 1-Gb/s transmission over a phosphorescent white LED by using rate-adaptive discrete multitone modulation. J. IEEE Phot. 4, 1465–1473 (2012)

An Empirical Examination of Direct and Indirect Network Externalities of the Japanese Handheld Computer Industry: An Empirical Study of the Early Days Michiko Miyamoto

Abstract As a mobile tool, the handheld computer provides access to information for users away from their homes or offices. This accessibility is based on technology that exhibits both direct and indirect network externalities. The empirical results suggest that communication technologies (direct network externalities) and PC-compatible technologies (indirect network externalities) are among the most important technical attributes of the handheld computer.

Keywords Direct network effects · Indirect network effects · Handheld computer · Hedonic regression

1 Introduction

In the theory of technological variety [1], each company develops its own technological trajectory, which shapes its strategy of differentiating itself from others in an industry; however, in order to expand and/or lock in its consumer base, a company sometimes pursues a technological trajectory that exploits network externalities. When a network externality exists, the value of a product or a service increases with the number of users. Katz and Shapiro [2] noted the direct and indirect effects of network externalities. An example of a direct physical network externality (the "direct network effect") is the telephone, as the number of

users increases the benefits of the technology. Indirect effects of network externalities (the "indirect network effect") also exist in the diffusion of innovations or compatibilities without a physical network, such as in a hardware–software relationship: in such markets, users of rival products benefit from externalities that depend on the size of the compatible system, or "network," they join [3]. A larger consumer or installed base for a hardware platform provides an increased opportunity for a better variety of software.

Although the theoretical framework and the economic implications of network externalities have been developed in recent years, there have been few empirical studies that test either the direct or the indirect effects. Regarding the direct effect, there are two major studies. Saloner and Shepard [4] used branches as a proxy for expected network size and found that the adoption of automated teller machines (ATMs) by banks is positively affected by the number of branches: banks with many branches adopted ATMs sooner than banks with fewer branches, which shows the presence of network externalities. Wang [5] investigated the adoption and diffusion of IT applications in the presence of network externalities and found that the business value of a network, as perceived or estimated by a potential adopting firm, influences the firm's adoption decision as well as network diffusion and growth. Regarding the indirect effect, Brynjolfsson and Kemerer [6] and Gandal [7] tested the existence of network externalities in the spreadsheet market and found that the Lotus platform received a significant premium.

In this study the author analyzes and tests the direct and indirect effects of network externalities in the Japanese handheld computer industry. When the handheld computer was first introduced in the market in 1993, it was mainly a personal electronic organizer. However, as handheld computer technology has improved, its role as a personal or business mobile tool has been highlighted, and it has begun to develop both direct and indirect network externalities. Direct network effects of a handheld computer involve communication technologies, including fax sending/receiving, e-mail downloading/uploading, and Internet browsing, either directly or through a connected digital cellular telephone. Indirect effects are related to synchronization with other types of computers (PC synchronization), whereby a user of a handheld computer can seamlessly exchange data files with other PCs. In 1996, a smaller version of the notebook computer entered the market with the same software platform as the desktop computer. Most operating systems for handheld computers were proprietary, i.e., not compatible with other manufacturers' products, until the fall of 1996, when Microsoft entered the market with a handheld version of its operating system, Microsoft Windows CE. Windows CE provides regular handheld applications, such as personal information management (PIM), as well as Pocket Excel/Pocket Word and Pocket Internet Explorer, which are compatible with desktop PCs running Windows 95/NT.


Accessibility to desktop and other PCs and to the communication network has increasingly become an important technological aspect of the handheld computer, regardless of its operating system, proprietary or Windows. With these features, consumers can download necessary files from a desktop computer to a handheld computer that can be carried around and used for editing while away from home or the office, and later upload the files back to the desktop PC. They can also send a message by fax or e-mail on the road. The author therefore hypothesizes that a handheld computer with direct and/or indirect network externalities increases the value of its technology for users. In this study, the author tests whether these two different network effects exist in the handheld computer industry and examines consumers' preferences and choices of technology attributes.

The plan of the article is as follows. Section 2 discusses direct and indirect network externalities and their existence in handheld technology. Section 3 describes the data, the estimation procedure over the entire set of explanatory variables, and the results. The final section concludes with a summary and a discussion of implications.

2 Network Externalities and the Handheld Computer

Communication technology is a good example of growing variety in the course of technological development [8]; it shows relatively high levels of aggregation of broadly defined types of services, including several subsets of technology, over long periods of time. In recent years, facsimile, e-mail, and the Internet have joined the variety of communication technologies. Networking is a very important aspect of communication technology, and the communication network was the first industry recognized as providing positive consumption benefits [2]. Saloner and Shepard [9] expressed the theoretical model of network externality as a + b(N), where a represents the "stand-alone" or "network-independent" benefit of the technology and b(N) represents the network effect. Handheld computers contain network-independent technologies, such as miniaturization and handwriting technology; however, among the technology subsets of the handheld computer, communication technology is one of the most significant attributes [1]. Gradually, more and more handheld computers are giving mobile consumers access to the same communication services that they enjoy at home or in the office. A handheld computer therefore exhibits a positive direct consumption externality: the more users there are on a given communication network, such as e-mail, the greater the services provided by that network.

Network externalities are also significant where there is no physical network. The focus of such a network is whether the products of different firms may be used


together. If two firms' systems are interlinked, or compatible, then the aggregate number of subscribers to the two systems constitutes the relevant network; or, if two brands of hardware can use the same software, the relevant network is the set of users who have compatible brands of hardware [10]. Farrell and Saloner [11] noted that installed-base users are somewhat tied to the old technology, which creates a bias against the new technology. For the handheld computer, the older technology comes from precedent computers such as the desktop or laptop PC. By making handheld computer software compatible with other forms of PC, handheld computer manufacturers can extend their customer or installed base to those computer users; at the same time, users of incumbent computers can easily use the handheld computer as a mobile tool. In the study of Katz and Shapiro [12], network externalities exist in the computer software market because users want to transfer files among themselves. Gandal [7] showed that consumers are willing to pay a significant premium for spreadsheets that are compatible and for spreadsheets that offer links to external databases.

Early handheld computer technology limited consumers' benefits to stand-alone, network-independent applications, mainly use as a personal electronic organizer. Gradually, however, firms developed and provided applications with word processing and spreadsheet capabilities, which enable consumers to exchange data files seamlessly with other types of PCs. This seems to be a natural technological trajectory for a firm to take in a competitive environment. Over time, each product establishes an installed base of physical capital in the form of previously sold equipment, and of human capital in the form of users trained to operate those products [2]. There are many different ways to make handheld applications compatible with, or linked to, other PCs, from the RTF format to Windows CE, depending on which technology each firm employs.

Taking into account the benefits of both direct and indirect network effects, I express the total benefits associated with network adoption for the handheld computer in Eq. (1), where N_d is the network effect from direct network externalities and N_i is the network effect from indirect network externalities:

Total benefits = a + b(N_d) + c(N_i).  (1)

If consumers place a significant value on the direct and indirect network effects of handheld computer technology, this is evidence of network externalities. The handheld computer is supplied by multiproduct firms with variable qualities or attributes. Since some handheld computers provide both network externalities, while others provide only one or none, it is possible to test whether consumers place a premium on those externalities. In order to measure what consumers are willing to pay for a given handheld computer, I employ an econometric technique called the hedonic price estimation

method.¹ The hedonic price estimation assumes that a consumer pays for a "bundle of technical attributes," and the total price is made up of the prices of the individual attributes of the bundle [14]. It can estimate quality-adjusted prices by using the price of each attribute as a weight and reducing multidimensional attributes to a unidimensional quality measure. One advantage of the hedonic price model is that the estimation of the quality measure draws on ex post price data, which means that the estimated hedonic prices reveal consumers' preferences and choices of product attributes in equilibrium. The methodology has been applied in several industries that produce heterogeneous products, such as the trucking, tractor, spreadsheet, and PC industries.

3 Stepwise Hedonic Regression Analysis

The data set was compiled from monthly PC magazines, companies' published papers, and websites, and includes the list prices and technical attributes of handheld computers (including PDAs, mini notebooks, and upscale cellular phones) weighing less than 1 kg sold in Japan from 1993 to 1998. Discounted prices were not included. There are 120 model-observations (unbalanced panels) over 5 years used in this analysis. For each observation, the sample includes a set of 50 technical specifications, the list price, the model's name, and its producer. Each variable is described in Table 1, and the list of major attributes, with descriptive statistics, is given in Table 2.

The author ran a Pearson correlation analysis between all pairs of variables with two-tailed significance tests. Most of the variables correlate fairly well and none of the correlation coefficients is particularly large; therefore, multicollinearity is not a problem in these data. The author then ran a stepwise hedonic regression of handheld computer list prices on the entire set of technical attributes, the production year of each model, and company dummies to see whether the handheld computer consists of technologies that are stand-alone or network-independent as well as those with direct and indirect network effects. For each model m, produced by firm i in year t, the pooled hedonic regression is

ln P_mit = β0 + βi + βt + β1(Personal Handwriting Recognition)_mit + ⋯ + βj(Date)_mit + ⋯ + ε_mit,  (2)

where β is the weight of each attribute and ε is the residual. See Table 3 for the stepwise regression results.

¹ See, e.g., Spady and Friedlaender [13], Gibbons et al. [14], and Tremblay [15] for a description and an application of this technique.
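The estimation itself can be sketched as an ordinary least-squares regression of log price on attribute dummies combined with a simple forward stepwise selection, as below. The toy data frame, the adjusted-R² selection criterion, and the use of statsmodels are illustrative assumptions and do not reproduce the author's actual specification or software.

```python
# Sketch of a hedonic regression with forward stepwise selection: regress
# ln(price) on technical-attribute dummies and, at each step, keep the
# attribute that most improves adjusted R-squared. The toy data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "EMAIL": rng.integers(0, 2, n),
    "PCS": rng.integers(0, 2, n),
    "MODEM": rng.integers(0, 2, n),
    "GAME": rng.integers(0, 2, n),
})
df["lprice"] = 10.5 + 0.14 * df.EMAIL + 0.17 * df.PCS + 0.12 * df.MODEM + rng.normal(0, 0.1, n)


def forward_stepwise(data, target, candidates):
    chosen, best = [], -np.inf
    while True:
        scores = {}
        for c in set(candidates) - set(chosen):
            X = sm.add_constant(data[chosen + [c]])
            scores[c] = sm.OLS(data[target], X).fit().rsquared_adj
        if not scores or max(scores.values()) <= best:
            return chosen
        best_c = max(scores, key=scores.get)
        best = scores[best_c]
        chosen.append(best_c)


selected = forward_stepwise(df, "lprice", ["EMAIL", "PCS", "MODEM", "GAME"])
model = sm.OLS(df["lprice"], sm.add_constant(df[selected])).fit()
print(selected)
print(model.params.round(3))
```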

Table 1 Definitions of variables

Tablet technology
1. PA = takes on the value one if the handheld computer can input data using a pen instead of a keyboard, zero otherwise
2. HW = takes on the value one if the handheld computer can input data by handwriting, zero otherwise
3. IK = takes on the value one if the handheld computer can input data through an imaginary keyboard on the LCD display, zero otherwise

LCD technology (display)
4. logLCD = size of LCD display
5. COLOR = takes on the value one if the handheld computer has a color display, zero otherwise

Recognition technology
6. HWR = takes on the value one if the handheld computer can recognize certain handwriting, zero otherwise
7. PHW = takes on the value one if the handheld computer can recognize personal handwriting, including Kanji (Chinese characters), zero otherwise
8. IW = takes on the value one if the handheld computer has advanced handwriting technology and can read or recognize handwriting with personal habits or in cursive style, zero otherwise

Application
9. TS = takes on the value one if the handheld computer has a travel support application, such as train timetables or a travel navigator, zero otherwise
10. CAL = takes on the value one if the handheld computer has a calendar application, zero otherwise
11. SCH = takes on the value one if the handheld computer has a scheduling application, zero otherwise
12. ADD = takes on the value one if the handheld computer has an address organizer application, zero otherwise
13. BC = takes on the value one if the handheld computer has a business card organizer application, zero otherwise
14. TELOR = takes on the value one if the handheld computer has a telephone number organizer application, zero otherwise
15. MEMO = takes on the value one if the handheld computer has a memo (note pad) application, zero otherwise
16. GAME = takes on the value one if the handheld computer has a game application, zero otherwise
17. TODO = takes on the value one if the handheld computer has a to-do list application, zero otherwise
18. CAL = takes on the value one if the handheld computer has a calculator application, zero otherwise
19. SPS = takes on the value one if the handheld computer has a spreadsheet application, zero otherwise
20. WORD = takes on the value one if the handheld computer has a word processing application, zero otherwise
21. WATCH = takes on the value one if the handheld computer has a worldwide watch application, zero otherwise
22. MAP = takes on the value one if the handheld computer has a map application, zero otherwise
23. DICT = takes on the value one if the handheld computer has a dictionary application, zero otherwise

Communication technology
24. FAXS = takes on the value one if the handheld computer can send a facsimile, zero otherwise
25. FAXR = takes on the value one if the handheld computer can receive a facsimile, zero otherwise
26. EMAIL = takes on the value one if the handheld computer can send and receive e-mail, zero otherwise
27. INTERNET = takes on the value one if the handheld computer can browse Internet information, zero otherwise
28. DCL = takes on the value one if the handheld computer can transmit data via a digital cellular phone, zero otherwise
29. PCC = takes on the value one if the handheld computer can interact with PC communication tools, zero otherwise
30. OFC = takes on the value one if the handheld computer can interact with a PC through an optical fiber cable, zero otherwise
31. REC = takes on the value one if the handheld computer can interact with a PC via an infrared port or another remote communication method, zero otherwise
32. PCS = takes on the value one if the handheld computer can be synchronized with a PC, zero otherwise

Multimedia technology
33. VM = takes on the value one if the handheld computer can record voice, zero otherwise
34. DC = takes on the value one if the handheld computer can be used as a digital camera, or can send and receive digital pictures, zero otherwise
35. SOUND = takes on the value one if the handheld computer can play back recorded voice, zero otherwise

Communication port
36. MODEM = takes on the value one if the handheld computer has a modem, zero otherwise
37. TELEPHONE = takes on the value one if the handheld computer can be used as a telephone, zero otherwise
38. PAGER = takes on the value one if the handheld computer can be used as a pager, zero otherwise

Hardware technology
39. RAM = size of memory (RAM)
40. ROM = size of memory (ROM)
41. SLOT = takes on the value one if the handheld computer has a card slot for smart media, compact flash, or any other kind of additional memory, zero otherwise
42. BAT = takes on the value one if the handheld computer can operate on battery, zero otherwise
43. OH = operating hours
44. KEY = takes on the value one if the handheld computer has a keyboard, zero otherwise
45. WIDTH = width of the handheld computer in centimeters
46. LENGTH = length of the handheld computer in centimeters
47. DEPTH = depth of the handheld computer in centimeters
48. WEIGHT = weight of the handheld computer in grams
49. DATE = the date the handheld computer was released
50. PRICE = the list price of the handheld computer
51. LPRICE = defined as the natural log of the price
52. OWNDOS = takes on the value one if the handheld computer carries its own proprietary operating system, zero otherwise
53. WINCE = takes on the value one if the handheld computer carries the Windows CE operating system, zero otherwise
54. WIN3.1/95 = takes on the value one if the handheld computer is compatible with Windows 3.1 or Windows 95, zero otherwise
55. MSDOS = takes on the value one if the handheld computer is compatible with MS-DOS, zero otherwise

Among the important technical attributes of the handheld computer with positive coefficients and significant t-statistics are the communication technologies (fax send, e-mail, Internet) and the PC-compatible technologies (PC communication, data transfer via infrared light, PC synchronization, Windows CE, MS-DOS). Hardware technologies with positive coefficients and significant t-statistics, such as the modem and the card slot for extra memory, are also related to network externalities, since they are used as communication media. Card slots hold extra memory, such as PC cards, smart media, and compact flash memory, which can be inserted directly into a desktop or laptop PC to transfer information from the handheld PC. This shows that the handheld computer consists of technologies that are network-independent as well as those with direct and indirect network externalities, and that the direct and indirect network effects are important technological attributes of the handheld computer.


Table 2 Summary statistics for variables, 1993–1998 (n = 120)

Variables | Type of technologies | Mean | Std. deviation | Min | Max
Pen access (dummy) | Tablet technology | 0.708 | 0.456 | 0 | 1
Handwriting (dummy) | Tablet technology | 0.65 | 0.479 | 0 | 1
Imaginary keyboard (dummy) | Tablet technology | 0.075 | 0.264 | 0 | 1
log LCD | LCD technology (display) | 4.672 | 0.643 | 1.556 | 5.487
Color (dummy) | LCD technology (display) | 0.125 | 0.332 | 0 | 1
Handwriting-recognition (dummy) | Recognition technology | 0.542 | 0.5 | 0 | 1
Personal handwriting recognition (dummy) | Recognition technology | 0.192 | 0.395 | 0 | 1
Ink-wordpro (dummy) | Recognition technology | 0.2 | 0.402 | 0 | 1
Travel support (dummy) | Application | 0.183 | 0.389 | 0 | 1
Calendar (dummy) | Application | 0.6 | 0.492 | 0 | 1
Schedule (dummy) | Application | 0.783 | 0.414 | 0 | 1
Address book organizer (dummy) | Application | 0.583 | 0.495 | 0 | 1
Business card organizer (dummy) | Application | 0.217 | 0.414 | 0 | 1
Telephone number organizer (dummy) | Application | 0.433 | 0.498 | 0 | 1
Memo (dummy) | Application | 0.75 | 0.435 | 0 | 1
Game (dummy) | Application | 0.25 | 0.435 | 0 | 1
To-do list (dummy) | Application | 0.575 | 0.496 | 0 | 1
Calculator (dummy) | Application | 0.717 | 0.453 | 0 | 1
Spreadsheet (dummy) | Application | 0.433 | 0.498 | 0 | 1
Word processing (dummy) | Application | 0.408 | 0.494 | 0 | 1
World Watch (dummy) | Application | 0.608 | 0.49 | 0 | 1
Map (dummy) | Application | 0.125 | 0.332 | 0 | 1
Dictionary (dummy) | Application | 0.567 | 0.498 | 0 | 1
Fax send (dummy) | Communication technology | 0.35 | 0.479 | 0 | 1
Fax receive (dummy) | Communication technology | 0.25 | 0.435 | 0 | 1
E-mail (dummy) | Communication technology | 0.575 | 0.496 | 0 | 1
Internet (dummy) | Communication technology | 0.267 | 0.444 | 0 | 1
Digital cellular phone link (dummy) | Communication technology | 0.258 | 0.44 | 0 | 1
PC communication (dummy) | Communication technology | 0.733 | 0.444 | 0 | 1
Optical fiber communication (dummy) | Communication technology | 0.558 | 0.499 | 0 | 1
Remote communication (dummy) | Communication technology | 0.6 | 0.492 | 0 | 1
PC synchronization (dummy) | Communication technology | 0.608 | 0.49 | 0 | 1
Voice memo (dummy) | Multimedia technology | 0.167 | 0.374 | 0 | 1
Digital camera (dummy) | Multimedia technology | 0.083 | 0.278 | 0 | 1
Sound (dummy) | Multimedia technology | 0.217 | 0.414 | 0 | 1
Modem (dummy) | Communication port | 0.317 | 0.467 | 0 | 1
Telephone (dummy) | Communication port | 0.1 | 0.301 | 0 | 1
Pager (dummy) | Communication port | 0.067 | 0.25 | 0 | 1
Memory (RAM) | Hardware technology | 3.756 | 5.819 | 0.023 | 32
Memory (ROM) | Hardware technology | 0.827 | 2.541 | 0 | 16
Card slot (dummy) | Hardware technology | 0.525 | 0.501 | 0 | 1
Battery (dummy) | Hardware technology | 0.342 | 0.476 | 0 | 1
Operating hours (hours) | Hardware technology | 119.8 | 310.847 | 2 | 2000
Key board (dummy) | Hardware technology | 0.517 | 0.502 | 0 | 1
Width (cm) | Hardware technology | 146.7 | 54.649 | 11.6 | 305
Length (cm) | Hardware technology | 113.37 | 41.151 | 19.5 | 290
Depth (cm) | Hardware technology | 22.72 | 17.119 | 4 | 184.75
Weight (gram) | Hardware technology | 336.56 | 263.259 | 33 | 1800
Date (year) | Date | 96.658 | 1.503 | 93 | 98
Price (yen) | Price | 76,405 | 51,541.329 | 4,500 | 210,000

Table 3 Results of the stepwise regression

Variable | Coefficient | t-value
(Intercept) | 8.5681 | 9.142 **
PA | 0.1012 | 2.098 **
TS | −0.0952 | −2.4154 ***
TELOR | 0.1201 | 3.0173 ***
MEMO | −0.1276 | −2.8234 ***
MAP | 0.1361 | 2.7761 ***
DICT | 0.0978 | 2.6449 ***
FAXS | −0.1925 | −3.7266 ***
EMAIL | 0.1423 | 3.6297 **
INTERNET | −0.1023 | −2.3098 ***
OFC | 0.1275 | 3.3101 **
REC | 0.094 | 2.0965 ***
PCS | 0.1717 | 4.7094 ***
MODEM | 0.1232 | 2.6535 ***
TELEPHONE | 0.3498 | 4.9341 ***
KEY | −0.1488 | −3.4363 **
ROM | 0.0124 | 2.2168 ***
SLOT | 0.1105 | 2.6919 ***
BAT | 0.2138 | 6.1846 ***
OH | −0.0002 | −5.5574 ***
WIDTH | 0.0026 | 7.0426 ***
LENGTH | 0.0012 | 3.1226 ***
MSDOS | 0.1149 | 1.9739 **
DATE | −0.045 | −4.652 *
cp = 2.160591. Residual standard error: 0.1043 on 74 degrees of freedom. Multiple R-squared: 0.9546. F-statistic: 35.34 on 44 and 74 degrees of freedom, p-value = 0.
*** Significant at the 0.01 level or less, ** significant at the 0.05 level or less, * significant at the 0.10 level or less

4 Conclusion The author has tested the handheld computers, and the empirical estimation results of this study suggest a clear presence of both direct and indirect network externalities in the industry. A purpose of mobile product purchase in 1997 is shown in Table 4. As a mobile tool, empirical results suggest those communication technologies (direct network externalities) and compatible technologies with PC (indirect network externalities) are among the most important technical attributes of the handheld computer. As the industry evolves, the author has found a shift in consumers’ preference to those technologies, e.g., telephone, pager, and facsimile to e-mail and Internet. Such

70 Table 4 A purpose of mobile products purchase

M. Miyamoto Email send/receive

2,066

Memo

559

Internet browsing 769 Scheduling 926 PC communication 699 Address 789 FAX send/receive 266 Remote access 604 Presentation 549 Car navigation 38 Spreadsheet 348 No purpose 44 Word processing 802 Others 266 Respondents: 4909 (could be multiple replies) Survey conducted by Keyman’s Net Navigation (September 24, 1997)

a shift in technology may well continue, as handheld computer models with GPS and digital camera capabilities have appeared in the market. A recent survey conducted by Accenture in 2013 [16] suggests that some activities, such as emailing and texting on handheld PCs, mobile phones, or smartphones, are well established among consumers, and that many activities are done on multiple devices. Our study suggests that, with technological evolution, the handheld computer has evolved from a personal organizer into a communication tool. This industry is still in its growth stage, and a different picture might be seen in the future; however, the importance of direct and indirect network effects on this technology will remain for a long time.

References 1. Miyamoto, M.: Technological variety, strategic variations and the persistence of the firms’ market positioning. The Handheld Computer Industry. Technical Paper, University of Tsukuba (1999) 2. Katz, M.L., Shapiro, C.: Product compatibility choice in a market with technological progress. Oxford Econ. Papers 38, 146–165 (1986) 3. David, P.A.: Clio and the economics of QWERTY. Am. Econ. Rev. P and P 75, 332–7 (1985) 4. Saviotti, P.P., Stubbs, C.: Innovation and technical change: a case study of the UK tractor industry. Res. Policy 11, 289–310 (1982) 5. Wang, Y.-M.: Information technology adoption in the presence of network externalities (CIRRUS, PLUS), Unpublished Doctoral Dissertation. New York University (1995) 6. Brynjolfsson, E., Kemerer, C.F.: Network externalities in microcomputer software: an econometric analysis of the spreadsheet market. Manage. Sci. 42(12), 1627–1647 (1996) 7. Gandal, N.: Hedonic price indexes for spreadsheets and an empirical test of network externalities. Rand J. Econ. 25, 160–170 (1994) 8. Saviotti, P.P.: Variety, economic and technological development. innovation in technology, industries, and institutions. In: Yuichi, S., Mark, P. (eds.) Studies in Schumpeterian Perspectives, pp. 27–48 (1994) 9. Saloner, G., Shepard, A.: Adoption of technologies with network effects: an empirical examination of the adoption of automated teller machines. Research Paper, No. 1146, Stanford University(1991) 10. Katz, M.L., Shapiro, C.: Technology adoption in the presence of network externalities. J. Polit. Econ. 94(4), 822–841 (1986)


11. Farrell, J., Saloner, G.: Installed base and compatibility; innovation, product preannouncements and predation. Am. Econ. Rev. 76, 640–655 (1986) 12. Katz, M.L., Shapiro, C.: Product introduction with network externalities. J. Indus. Econ. 40, 55–84 (1992) 13. Spady, R., Friedlaender, A.: Hedonic cost functions for the regulated tracking industry. Bell J. Econ. 9, 1(Spring), 159–179 (1978) 14. Gibbons, M., Coombs, R., Saviotti, P., Stubbs, P.C.: Innovation and technical change; a case study of the U.K. tractor industry, 1957–1977. Res. Policy 11, 289–310 (1982) 15. Tremblay, V.J.: Strategic groups and the demand for beer. J. Indus. Econ. XXXIV, 183–198 (1986) 16. Accenture: It’s Anyone’s Game in the Consumer Electronics Playing Field. The 2013 Accenture Consumer Electronics Products and Services Usage Report (2013)

Multipath Performance Assessments for Future BeiDou BOC Signal Di Wu, Wei Chen, Jing Li, Hongyang Lu and Jing Ji

Abstract With the purpose of overall performance enhancement, binary offset carrier (BOC) modulation is proposed to replace the existing quadrature phase shift keying (QPSK) signal in the BeiDou system. Multipath performance comparisons between these two signals have been performed using error envelopes, which assume ideal signal scenarios. However, this method gives only qualitative statements without considering realistic signal characteristics. In order to produce a quantitative assessment of the multipath performance in different environments, power delay profile (PDP) models are built based on field measurement data to simulate real-world multipath channels. The simulation results show that the BeiDou BOC(14,2) signal outperforms the QPSK(2) signal by 60 % overall error reduction in four typical environments: open, rural, suburban, and urban. The simulation results also show that the double-delta correlator (DDC) has more influence on the BOC(14,2) than the narrower correlator does.



Keywords BeiDou signal · BOC modulation · Multipath performance · Power delay profile · Double-delta correlator

D. Wu (&)  W. Chen School of Automation, Wuhan University of Technology, 122 Luoshi Road, Wuhan 430070, Hubei, China e-mail: [email protected] J. Li  H. Lu China Transport Telecommunications and Information Center, No. 1, An Wai Wai Guan Hou Shen, Beijing 100011, China J. Ji School of Information Engineering, Wuhan University of Technology, 122 Luoshi Road, Wuhan 430070, Hubei, China © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_8


1 Introduction

For the purpose of national security, China is constructing a proprietary national satellite navigation system, the BeiDou system, to replace the existing systems, i.e., the Global Positioning System (GPS) and the Global Navigation Satellite System (GLONASS), in key areas. It is expected that a worldwide navigation service will be launched in 2020 following the complete deployment of the satellite constellation. Multipath is a major contributing factor to ranging errors of the BeiDou navigation service. However, due to its random nature, it is difficult to detect and mitigate at the receiver's end. A very promising method is to mitigate it at the transmitter end by applying optimized signal modulation schemes. Therefore, a BeiDou BOC modulation scheme was proposed in article [1] to replace the existing QPSK(2) signal in the future, due to its presumably better performance [2].

Recent activities in the field of performance assessment of the BeiDou BOC signal have increased in order to estimate the potential performance enhancement. The computation of the multipath error envelope is a traditional method to analyze the multipath performance for a given signal scheme [3]. The envelope illustrates the scale of the errors under the assumption that there is only one multipath signal with a fixed amplitude throughout all considered time delays. The benefit of this method is that differences in multipath performance are directly reflected by the enclosed areas of the envelopes [3]. Therefore, it has become an essential tool for BeiDou signal evaluation in articles [4-7]. However, without consideration of the realistic characterization of multipath signals, the error envelopes allow only general qualitative statements of performance for an ideal multipath scenario. It is difficult to extract meaningful typical multipath errors [8]. Another approach is to use a multipath propagation channel model to determine multipath errors, as introduced in articles [8, 9]. Because the model describes the characteristics of multipath signals based on real-world measurement data, the determined multipath errors can serve as input for quantitative assessments. Furthermore, typical multipath errors obtained during the assessments can be used as a benchmark for assessments of various anti-multipath techniques.

The aim of this paper is to carry out quantitative assessments of the BeiDou BOC signal using multipath propagation channel models, and to give a comprehensive and detailed comparison of the multipath performance of the BeiDou BOC and QPSK signals. The paper is organized as follows. The BeiDou BOC modulation scheme is described in Sect. 2. PDP models and the computation of multipath errors are introduced in Sect. 3. Section 4 presents a comprehensive comparison under various multipath environments and receiver settings. Summary and conclusion are given in Sect. 5.


2 BeiDou BOC Modulation Scheme

BOC modulation is developed from phase shift keying (PSK) modulation [9]. The basic idea of BOC modulation is to use a subcarrier to modulate the PSK signal. The main benefit of the subcarrier is to provide a frequency offset, enabling the transmission of multiple carriers without interference from each other. The signal is expressed as

S_{BOC}(t) = \sqrt{2P}\, a(t)\, s(t) \cos(2\pi f t + \varphi)\, d(t)   (1)

where s(t) is the subcarrier, a(t) is the ranging code, d(t) is the navigation data, \sqrt{2P} is the signal amplitude, f is the carrier frequency, and \varphi is the carrier phase. Since the subcarrier is an essential component, the BOC signal is initially denoted as BOC(f_s, f_c), where f_s is the subcarrier frequency and f_c is the chipping rate. For a simplified notation, BOC(m, n) has become popular, where m = f_s/1.023 MHz and n = f_c/1.023 MHz. BOC(14,2) modulation is chosen for the BeiDou signal, which means the subcarrier frequency is 14.322 MHz and the chipping rate is 2.046 MHz.

The autocorrelation function (ACF) is one of the most important properties of a GNSS signal [10]; it is used by the receiver to synchronize the signal and track the variation of the carrier phase. The normalized ACF of the BOC signal is expressed as

R_{BOC}(\tau) = \begin{cases} (-1)^{k+1}\left[\frac{1}{p}\left(k^{2} + 2kp + k - p\right) - (4p - 2k + 1)|\tau|\right], & |\tau| \le 1\ \text{chip} \\ 0, & |\tau| \ge 1\ \text{chip} \end{cases}   (2)

where p is the BOC modulation order and k = 2p|\tau|. Figure 1 shows the computed ACF of the BOC(14,2) signal with a receiver bandwidth of 30 MHz. Compared with the QPSK(2) signal, the BOC(14,2) signal has multiple autocorrelation peaks, which means it has a smaller correlator spacing and higher tracking performance.

Another important characteristic is the power spectral density (PSD) [10]. The normalized PSD of the sinusoidal modulated signal is expressed as

G_{BOCsin}(f) = \begin{cases} f_c\left[\frac{\tan\left(\frac{\pi f}{2 f_s}\right)\sin\left(\frac{\pi f}{f_c}\right)}{\pi f}\right]^{2}, & n\ \text{even} \\ f_c\left[\frac{\tan\left(\frac{\pi f}{2 f_s}\right)\cos\left(\frac{\pi f}{f_c}\right)}{\pi f}\right]^{2}, & n\ \text{odd} \end{cases}   (3)


Fig. 1 ACF comparison

and the normalized PSD of the cosinusoidal modulated signal is expressed as

G_{BOCcos}(f) = \begin{cases} f_c\left[\frac{2\sin^{2}\left(\frac{\pi f}{4 f_s}\right)\sin\left(\frac{\pi f}{f_c}\right)}{\pi f\cos\left(\frac{\pi f}{2 f_s}\right)}\right]^{2}, & n\ \text{even} \\ f_c\left[\frac{2\sin^{2}\left(\frac{\pi f}{4 f_s}\right)\cos\left(\frac{\pi f}{f_c}\right)}{\pi f\cos\left(\frac{\pi f}{2 f_s}\right)}\right]^{2}, & n\ \text{odd} \end{cases}   (4)

where f is the receiver pre-correlation bandwidth. Since the cosinusoidal modulation is more widely used, in this paper we compute G_{BOCcos}(f), as shown in Fig. 2. In comparison with the QPSK(2) signal, the power is spread to the sidebands rather than concentrated in the center, leading to a higher root-mean-square (RMS) bandwidth. In summary, the BOC modulation has better overall performance than the QPSK modulation.

Fig. 2 PSD comparison
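As a numerical illustration of Eqs. (3)-(4), the minimal Python sketch below evaluates the cosine-phased BOC(14,2) PSD and a sinc-squared PSD standing in for QPSK(2), then compares their RMS bandwidths over a 30 MHz band. The sinc-squared expression for QPSK(2), the frequency grid, and the normalization step are illustrative assumptions, not taken from the paper.

import numpy as np

def psd_boc_cos(f, fs, fc, n_odd=False):
    """Normalized cosine-phased BOC PSD following Eq. (4) (even-n branch by default)."""
    code = np.cos(np.pi * f / fc) if n_odd else np.sin(np.pi * f / fc)
    num = 2.0 * np.sin(np.pi * f / (4.0 * fs)) ** 2 * code
    den = np.pi * f * np.cos(np.pi * f / (2.0 * fs))
    return fc * (num / den) ** 2

def psd_qpsk(f, fc):
    """sinc^2 PSD assumed for a QPSK(fc)-type ranging code, used only for comparison."""
    return (1.0 / fc) * np.sinc(f / fc) ** 2   # np.sinc(x) = sin(pi*x)/(pi*x)

fs, fc = 14 * 1.023e6, 2 * 1.023e6             # BOC(14,2): 14.322 MHz and 2.046 MHz
f = np.linspace(1.0, 15e6, 200_000)            # one-sided grid, avoids f = 0

for name, g in [("BOC(14,2)", psd_boc_cos(f, fs, fc)), ("QPSK(2)", psd_qpsk(f, fc))]:
    g = g / np.trapz(g, f)                     # renormalize within the simulated band
    beta_rms = np.sqrt(np.trapz(f ** 2 * g, f))
    print(f"{name}: RMS bandwidth ~ {beta_rms / 1e6:.2f} MHz")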

3 Multipath Performance Assessment Techniques

3.1 Multipath Error Computation

Similar to the GPS signal, the BeiDou signal is composed of navigation data, a ranging code, and a carrier. In order to track the incoming signal, the ranging code and the carrier phase are required to be matched with the locally generated signal by calculating a correlation function. In a real-world multipath environment, not only the direct path but also multipath signals are received by a BeiDou receiver. When the geometric multipath delay is less than 1 chip plus the correlator spacing [6], the receiver cannot readily distinguish the multipath signals from the direct-path signal and will treat them as a single composite signal. The composite signal is expressed as

S(t) = a P(t)\cos(2\pi f t) + \sum_{i=1}^{L} a_i P(t - \tau_i)\cos(2\pi f t + \varphi_i)   (5)

where L is the number of multipath signals, and a_i, \tau_i, and \varphi_i are the amplitudes, delays, and carrier phases of the i-th multipath signal relative to those of the direct path. Consequently, the code and carrier phase tracking are distorted by the multipath signals, resulting in errors in the pseudo-range and carrier phase measurements. Because the carrier phase tracking errors are trivial compared with the code tracking errors [11], only the code tracking multipath errors are computed in this paper.

The conventional code-tracking technique is early-late processing (ELP) [12]. The incoming signal is correlated with the early, late, and prompt versions of the locally generated signal, and the code phase is synchronized based on the correlation results. There are four types of discriminators, of which the non-coherent early minus late power (NELP) discriminator is adopted in this paper for a simplified implementation. Equation (6) represents the NELP discriminator output when no multipath signals are present:

D(\varepsilon) = \left[R(\varepsilon - d/2)\right]^{2} - \left[R(\varepsilon + d/2)\right]^{2}   (6)

where R(\varepsilon) is the correlation function and d is the correlator spacing. D(\varepsilon) forms a zero-crossing S-curve, i.e., D(\varepsilon) = 0 at \varepsilon = 0. With the existence of multipath signals, D(\varepsilon) is expressed as


D(\varepsilon)_{Composite} = \left[\left(R\left(\varepsilon - \frac{d}{2}\right) + \sum_{i=1}^{L} a_i R\left(\varepsilon - \tau_i - \frac{d}{2}\right)\cos(\varphi_i)\right)^{2} - \left(R\left(\varepsilon + \frac{d}{2}\right) + \sum_{i=1}^{L} a_i R\left(\varepsilon - \tau_i + \frac{d}{2}\right)\cos(\varphi_i)\right)^{2}\right]   (7)

When D(\varepsilon)_{Composite} = 0, \varepsilon equals the multipath error. Based on the first-order Taylor series expansion of D(\varepsilon)_{Composite} in the vicinity of 0, the multipath error is expressed as

\varepsilon \approx \frac{\pm a_i \int_{-b_r/2}^{b_r/2} G_s(f)\sin(\pi f\, nd)\sin(2\pi f \tau_i)\,df}{2\pi \int_{-b_r/2}^{b_r/2} f\, G_s(f)\sin(\pi f\, nd)\left[1 \pm a_i \cos(2\pi f \tau_i)\right]df}   (8)

where b_r is the correlation bandwidth, G_s(f) is the signal PSD, nd is the correlator spacing, \tau_i are the multipath delays, and a_i are the relative amplitudes. "+" corresponds to in-phase multipath signals, while "-" corresponds to out-of-phase multipath signals. The multipath errors can be computed by inputting the variables \tau_i, a_i, and \varphi_i, given that b_r, G_s(f), and nd are constant.
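The sketch below evaluates Eq. (8) numerically for a single reflection. It reuses the sinc-squared QPSK(2) PSD assumed in the earlier sketch; the bandwidth, correlator spacing, echo delay, and echo amplitude are placeholder values chosen for illustration only.

import numpy as np

def code_multipath_error(psd, br, d, tau_i, a_i, in_phase=True):
    """Numerically evaluate Eq. (8) for one echo; units assumed Hz and seconds."""
    sign = 1.0 if in_phase else -1.0
    f = np.linspace(-br / 2.0, br / 2.0, 400_001)
    f = f[f != 0.0]                        # drop the point f = 0 to avoid PSD singularities
    g = psd(f)
    num = sign * a_i * np.trapz(g * np.sin(np.pi * f * d) * np.sin(2.0 * np.pi * f * tau_i), f)
    den = 2.0 * np.pi * np.trapz(f * g * np.sin(np.pi * f * d)
                                 * (1.0 + sign * a_i * np.cos(2.0 * np.pi * f * tau_i)), f)
    return num / den                       # code tracking error in seconds

c = 299_792_458.0
fc = 2 * 1.023e6
qpsk_psd = lambda f: (1.0 / fc) * np.sinc(f / fc) ** 2      # assumed QPSK(2) PSD
err = code_multipath_error(qpsk_psd, br=24e6, d=0.5 / fc, tau_i=50.0 / c, a_i=0.5)
print(f"QPSK(2), 0.5-chip spacing, 50 m in-phase echo: {err * c:.2f} m")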

3.2 Power Delay Profile Model

The most realistic but also the most complex model is the statistical channel model, which provides the general distribution of multipath numbers, delays, relative amplitudes, and phases. To reduce the complexity of the computation, a PDP model is used in this paper, which provides results similar to the statistical channel model as long as the same parameters are used [8]. The complex envelope impulse response of a multipath channel can be described by two variables: the multipath delay \tau and the power at delay \tau. The PDP model is defined as the variation of the mean power in the channel with delay:

P(\tau) = \frac{r_0}{1 + r_0}\,\delta(\tau) + \frac{1 - e^{-\Delta\tau/\tau_0}}{1 + r_0}\sum_{i=1}^{L} e^{-i\Delta\tau/\tau_0}\,\delta(\tau - i\Delta\tau)   (9)

where r_0 is the mean multipath power ratio and \tau_0 is the mean path delay, both of which can be obtained by channel measurements. This model assumes the existence of an arbitrary number of multipath signals separated by the same delay \Delta\tau. The power of the i-th multipath signal can be simply obtained by


P(i) = P(\tau_i)\,\Delta\tau   (10)

The relative amplitude can be computed as

a(i) = \sqrt{2 P(i)}   (11)

Suppose the relative phases \varphi_i are uniformly distributed in [0, 2\pi]; the multipath errors under a realistic environment can then be obtained by applying these parameters to Eq. (8).
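The following sketch draws one realization of the multipath parameters from the PDP model of Eqs. (9)-(11). The parameter values follow the urban 15-degree entry of Table 1 and the text of Sect. 4.1 (250 echoes, 1 m stepping); the dB-to-linear conversion of r0 and the random seed are assumptions made for illustration.

import numpy as np

def sample_pdp(r0_db, tau0_m, n_paths=250, dtau_m=1.0, seed=0):
    """One realization of echo delays, amplitudes and phases from Eqs. (9)-(11)."""
    rng = np.random.default_rng(seed)
    r0 = 10.0 ** (r0_db / 10.0)                       # assumed dB -> linear conversion
    i = np.arange(1, n_paths + 1)
    delays_m = i * dtau_m                             # echo delays i * dtau (metres)
    # Mean power of each echo, i.e. the coefficient of delta(tau - i*dtau) in Eq. (9):
    p_i = (1.0 - np.exp(-dtau_m / tau0_m)) / (1.0 + r0) * np.exp(-i * dtau_m / tau0_m)
    p_i = p_i * dtau_m                                # Eq. (10)
    amps = np.sqrt(2.0 * p_i)                         # Eq. (11)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_paths)   # phases uniform in [0, 2*pi]
    return delays_m, amps, phases

delays, amps, phases = sample_pdp(r0_db=4.5, tau0_m=92.0)   # urban, 15 degrees (Table 1)
print(f"strongest echo: a = {amps.max():.3f} at {delays[amps.argmax()]:.0f} m")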

4 Simulation and Assessment Results

4.1 Simulation Scenarios

Since the 1980s, extensive measurement campaigns have been carried out by the German Aerospace Center (GAC) to characterize satellite multipath propagation channels. Model parameters for different environments and elevation angles have been derived from the massive data collected in these campaigns. In each model, 250 multipath signals are assumed with the same stepping Δτ = 1 m. It must be emphasized that the delays of the contributing multipath signals are smaller than 1 chip distance plus the correlator spacing; the maximum delay τmax is 250 m given the conventional spacing of 0.5 chip. The error is obtained from 500 simulation runs for each τ0. In order to obtain a detailed overview of the BOC(14,2) multipath performance, the simulation scenarios and receiver settings are summarized in Tables 1, 2, 3, 4 and 5. For Scenarios 1, 2, and 3 a pre-correlation bandwidth of 24 MHz is selected to compare the BOC(14,2) and the QPSK(2) under different correlator spacings. 0.1 chip is the typical spacing of a narrower correlator [12], which is a widely used multipath mitigation technique. Double-delta is another popular technique; it uses two correlator pairs with spacings d and 2d [12]. Multipath errors of the BOC(14,2) with these techniques applied will be computed to gain a deeper insight into its potential.

Table 1 Multipath power delay profile parameters at elevation angles 15°

Environment   r0 (dB)   τ0 (m)
Open          27        25
Rural         11        55
Suburban      15.5      58
Urban         4.5       92


Table 2 Multipath power delay profile parameters at elevation angles 25°

Environment   r0 (dB)   τ0 (m)
Open          27.5      26
Rural         13.5      57
Suburban      20.5      56
Urban         6.0       51

Table 3 Simulation settings for Scenario 1

Scenario 1
Signal mode          QPSK(2)      BOC(14,2)
Chipping rate        2.046 MHz    2.046 MHz
Bandwidth            24 MHz       24 MHz
Correlator spacing   0.5 chip     0.5 chip

Table 4 Simulation settings for Scenario 2

Scenario 2
Signal mode          QPSK(2)      BOC(14,2)
Chipping rate        2.046 MHz    2.046 MHz
Bandwidth            24 MHz       24 MHz
Correlator spacing   0.1 chip     0.1 chip

Table 5 Simulation settings for Scenario 3

Scenario 3
Signal mode          QPSK(2)                BOC(14,2)
Chipping rate        2.046 MHz              2.046 MHz
Bandwidth            24 MHz                 24 MHz
Correlator spacing   Double-Δ (0.1 chip)    Double-Δ (0.1 chip)

4.2 Numerical Results

Figure 3 plots the multipath PDP model at elevation angle 15°. Urban has the strongest multipath power due to its most complex environment, while open has the weakest multipath power. Rural has stronger multipath power than suburban because there are more reflected echoes caused by trees and plants. Compared with the PDP model at elevation angle 25° shown in Fig. 4, all environments at the lower angle suffer from more severe multipath influence. The simulation results of Scenario 1 demonstrate that the BOC(14,2) has better overall multipath performance than the QPSK(2) signal in all environments. As shown in Fig. 5, the mean multipath error in urban drops sharply from 11.74 to 4.31 m, a 60 % reduction of the error when switching to the BOC(14,2). In both rural and suburban, the multipath errors of BOC(14,2) account for about one third of the


Fig. 3 Multipath PDP model at 15°

Fig. 4 Multipath PDP model at 25°

counterpart of QPSK(2): 6.03 versus 2.15 m and 4.03 versus 1.34 m, respectively. The error reduction in the open area is 0.5 m, so more than half of the QPSK(2) error is overcome. Figure 6 shows the same trend at 25°. In sum, the multipath performance of the BOC(14,2) outperforms the QPSK(2) by approximately 60 % error reduction. The results of Scenario 2 illustrate the multipath performance of the BOC(14,2) and the QPSK(2) using the narrower correlator with a spacing of 0.1 chip. Both signals benefit from the technique: 8.78 m is reduced for QPSK(2) and 3.15 m is mitigated for BOC(14,2) in urban at 15°. Although the BOC(14,2) error is


Fig. 5 Simulation results of Scenario 1(a)

Fig. 6 Simulation results of Scenario 1(b)

smaller than the QPSK(2) error, the latter undergoes a sharper decline; that is, the narrower correlator has a more distinct influence on the QPSK(2) than on the BOC(14,2) (Figs. 7 and 8). By contrast, within Scenario 3, the DDC delivers a more significant improvement to the BOC(14,2) than to the QPSK(2). The error in urban at 15° declines from 1.16 to 0.16 m, a much greater decrease than that of the QPSK(2) signal. With the help of the DDC, the multipath errors of the BOC(14,2) can be less


Fig. 7 Simulation results of Scenario 2(a)

Fig. 8 Simulation results of Scenario 2(b)

than 0.2 m under most conditions. This contradicts the outcomes in [7], in which the DDC is found to be equivalent to the narrower correlator. The reason is that the error envelopes reflect the performance in the presence of a single multipath signal only, rather than considering the situation in which there are three types of multipath echoes: short-, medium-, and long-delay. As a matter of fact, the DDC has better performance than the narrower correlator in the case of medium- to long-delay multipath [13] (Figs. 9 and 10).


Fig. 9 Simulation results of Scenario 3(a)

Fig. 10 Simulation results of Scenario 3(b)

5 Conclusion

The BeiDou BOC(14,2) signal has been evaluated with respect to its multipath performance under various realistic propagation conditions. Three different simulation scenarios were developed to compare the BOC(14,2) multipath performance with that of the QPSK(2) under different correlator spacings. The simulation results show that the BOC(14,2) offers an overall multipath performance improvement over the QPSK(2) under various multipath environments and receiver conditions. Table 6 summarizes all the comparison results. It is also shown that the DDC has a more significant influence on the BOC(14,2) than the


Table 6 Comparison between the BOC(14,2) and the QPSK(2), BOC/QPSK (%)

                       E = 15°                            E = 25°
BOC/QPSK (%)           Open   Rural   Suburban   Urban    Open   Rural   Suburban   Urban
0.5 Chip               42     33      37         44       34     40      35         35
0.1 Chip               37     39      39         39       40     40      38         39
Double-Δ (0.1 Chip)    12     8       6          7        8      7.5     7          8

narrower correlator. It is highly recommended to apply the DDC technique to the BOC signal for high-precision services, although the correlator requires more complicated circuits. On the other hand, to provide an acceptable multipath performance for low-precision services, it is proposed that the narrower correlator be employed with the QPSK(2). Future work will focus on other signal modulation schemes and multipath mitigation techniques using the method described in this paper.

Acknowledgments This work was financially supported by the National High Technology Research and Development Program of China under Grant No. 2013AA122403, and by the self-determined and innovative research funds of WUT under Grant No. 2013-YB-018.

References 1. Report on the Development of BeiDou/COMPASS Navigation Satellite System (V2.0). http:// www.BeiDou.gov.cn 2. Tan, S.H.S., Zhou, B., Guo, S.H.T., Liu, Z.H.J.: Studies of compass navigation signals design. J.: Scientia Sinica: Phys, Mech. Astron. 40(5), 514–519 (2010) 3. Liu, H.C., Cheng, X., Ni, S.H.J., Wang, F.X.: Evaluation of multipath mitigation performances based on error envelope. J. National Univ. Defense Technol. 33(1), 72–75 (2011) 4. Gao, M., Li, X.D., Wang, H.J.: Performance analysis of multipath mitigation of COMPASS signal. J. Telemetry Track. Command 34(2), 35–40 (2013) 5. Li, B., Xu, J.N., Cao, K.J., Zhu, Y.B.: Analysis and simulation on anti-multipath performance of BeiDou2 navigation. J. Chin. Inertial Technol. 20(3), 339–342 (2012) 6. Tang, Z.P., Zhou, H.W., Hu, X.L., Ran, Y.H., Liu, Y.Q., Zhou, Y.L.: Research on performance evaluation of compass signal. J. Scientia Sinica: Phys. Mech. Astron. 40(5), 592– 602 (2010) 7. Irsigler, M., Hein, G.W., Eissfeller, B.: Multipath performance analysis for future GNSS signals. In: 2004 National Technical Meeting of the Institute of Navigation, San Diego, pp. 225—238 (2004) 8. Irsigler, M., Rodriguez, J.A.A., Hein, G.W.: Criteria for GNSS multipath performance assessment. In: ION GNSS 18th International Technical Meeting of the Satellite Division. Virginia: Institute of Navigation, pp. 2166–2177 (2005) 9. He, Z.M., Hu, Y.H., Wu, J.F.: A comprehensive method for multipath performance analysis of GNSS navigation signals. In: 2011 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pp. 1–6 (2011)


10. Mubarak, O.M .: Performance comparison of multipath detection using early late phase in BPSK and BOC modulated signals. In: 2013 7th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 1–7 (2013) 11. He, Z.M.: Research on Code Tracking Accuracy for Satellite Navigation Signals. D. National Time Service Center, Chinese Academy of Sciences (2012) 12. Li, L., Zhou, W.H., Tan, Sh.S.: Overview of the receiver techniques for BOC modulation signal. In: 4th IET International Conference on Wireless, Mobile and Multimedia Networks (ICWMMN 2011), pp. 99–102 (2011) 13. Bhuiyan, M.Z.H., Zhang, J., Lohan, E.S., Wang, W., Sand, S.: Analysis of multipath mitigation techniques with land mobile satellite channel model. J. Radio Eng. 21(4), 1067– 1077 (2014)

Research of Incremental Dimensionality Reduction Based on Tensor Decomposition Algorithm Xin Guo, Yang Xiang, Dongdong Lv, Shuhan Yuan, Yinfei Huang, Qi Zhang, Jisheng Wang and Dong Wang

Abstract For massive or temporal data, the computation required to reduce the dimensionality all at once is very large, and sometimes infeasible. Based on text feature graph clusters, each text feature graph first serves as a second-order tensor. Then, two or more text feature graphs are combined to form a third-order tensor. Moreover, tensor Tucker decomposition is used to study incremental dimensionality reduction methods for text feature graphs. Finally, experiments on real data sets show that this method is simple and effective for the dimensionality reduction of text feature graphs.

Keywords Tensor · Tucker decomposition · Text feature graphs

1 Introduction In the field of text retrieval and text mining, text data is often expressed by the vector space model. Because of the large number of entries, the dimension of the vector space becomes very high, resulting in very large amount of calculation and X. Guo (&) School of Computer and Information Technology, Shanxi University, Taiyuan, China e-mail: [email protected] Y. Xiang  D. Lv  S. Yuan  Q. Zhang  J. Wang Department of Computer Science and Technology, Tongji University, Shanghai, China Y. Huang Shanghai Stock Exchange, Shanghai, China Q. Zhang  J. Wang Shenhua Helishi Information Technology Co. Ltd., Beijing, China D. Wang School of Computer Science & Information Engineering, Shanghai Institute of Technology, Shanghai, China © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_9


low efficiency. Therefore, it is necessary to reduce the dimension of the text data vector space. Deerwester et al. proposed SVD-based latent semantic indexing (LSI) [1], which achieves dimensionality reduction by an SVD decomposition of the text-word matrix. The non-negative matrix factorization (NMF) method [2] divides a matrix into two non-negative matrices, wherein each vector of the original matrix can be interpreted as a combination of the vectors of the left matrix, with the right matrix giving the corresponding weights. Independent component analysis (ICA) [3] considers the data as a linear or nonlinear mixture of unknown latent variables, namely independent data components, and realizes dimensionality reduction by converting the original data into linearly independent components to the maximum extent. The concept index (CI) method [4] adopts clustering to decompose the matrix into a linear combination space of clustering centroids. It is noteworthy that some researchers have studied data dimensionality reduction methods based on semantics, such as the word-based semantic similarity text representation dimensionality reduction method [5]. These methods consider the semantics and parts of speech of feature words and perform dimensionality reduction by merging highly similar words. In this paper, each feature graph is expressed as a "feature word-feature word" second-order tensor. As time goes on, there are more and more feature graphs. Thus, a feature dimension is added to form a "feature word-feature word-feature" third-order tensor, which is then subjected to Tucker decomposition. After tensor decomposition, the original tensor is expressed as the relationship between the principal components in each dimension, enabling incremental dimension reduction based on text feature graphs.

2 Incremental Dimensionality Reduction Method Based on Tensor Decomposition

2.1 Tensor

The tensor was first proposed by Tullio Levi-Civita and Gregorio Ricci-Curbastro in the absolute differential geometry [6]. The tensor order r (also called rank here) is the number of dimensions, and a tensor is expressed as a multidimensional array. A scalar is a single number and is the simplest tensor, namely a zero-order tensor x. A vector can be represented as a one-dimensional array, that is, a first-order tensor v = {x_i}. A matrix can be expressed as a two-dimensional array, that is, a second-order tensor X = {x_ij}. And so on: a third- or higher-order tensor is written as {x_ij...k}. Assuming that each dimension is of size n, a tensor of order r has n^r components in n-dimensional space. Although a tensor can be represented in a coordinate system, it does not actually depend on the reference system.

2.2 Feature Graphs Tensor Expression

Each text feature graph is expressed as a |V|-row, |V|-column square matrix according to the relationships among feature words. This |V| x |V| square matrix can be seen as a second-order tensor. As time goes on, all text feature graphs can be represented as second-order tensors, and these second-order tensors are combined to form a third-order tensor, wherein the first and second modes index the unique feature words, the components of each second-order slice represent the relationship weights among the feature words, and the third mode represents the features. With the passage of time, the processed feature graphs are numbered in increasing order, and the components of the third-order tensor represent the relationship weight of each feature word pair in each feature graph. This third-order tensor is decomposed to reduce the dimension. The text feature graphs at multiple time points can be merged together to form a third-order tensor, and new text feature graphs can also be combined with the previously reduced text feature graphs to form a new third-order tensor for reconstruction. Each second-order tensor is a frontal slice of the combined third-order tensor along the feature direction. Because the feature word sets and their sizes generally differ, these second-order tensors need to be padded so that the combined third-order tensor shares the same feature words; if a feature word does not appear in a given second-order matrix, the element values of the corresponding row and column are 0, as in the sketch below. Even if the order of the feature graphs or feature words is shuffled, the element values of the third-order tensor are unchanged (only their locations change), and the relationships between feature words and feature graphs, and among feature words, are preserved, so the dimension reduction result is not affected.
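A minimal NumPy sketch of this construction is given below; the two toy graphs, their edge weights, and the helper name are illustrative assumptions, not data from the paper.

import numpy as np

# Two toy "feature word-feature word" graphs (second-order tensors) with different vocabularies.
graph_a = {("price", "market"): 2.0, ("market", "stock"): 1.0}
graph_b = {("price", "stock"): 3.0, ("stock", "bank"): 1.0}

def build_third_order_tensor(graphs):
    """Pad every graph to a shared vocabulary and stack them along a third 'feature' mode."""
    vocab = sorted({w for g in graphs for pair in g for w in pair})
    index = {w: i for i, w in enumerate(vocab)}
    tensor = np.zeros((len(vocab), len(vocab), len(graphs)))
    for k, g in enumerate(graphs):
        for (w1, w2), weight in g.items():
            tensor[index[w1], index[w2], k] = weight   # words missing from a graph stay 0
    return tensor, vocab

X, vocab = build_third_order_tensor([graph_a, graph_b])
print(vocab, X.shape)   # e.g. ['bank', 'market', 'price', 'stock'] (4, 4, 2)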

2.3 Feature Graphs Tensor Tucker Decomposition

Tucker decomposition is also called multilinear subspace learning or multilinear principal component analysis. It decomposes a tensor into a relatively small core tensor and a set of factor matrices. It was originally proposed by Tucker [7], who performed three-dimensional array decomposition as a multidimensional extension of factor analysis, namely third-order tensor decomposition; it was later extended to N-order tensors. The Tucker decomposition of a third-order tensor \mathcal{X} \in \mathbb{R}^{I \times J \times K} is

\mathcal{X} \approx \mathcal{G} \times_1 A \times_2 B \times_3 C = \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{s=1}^{S} g_{pqs}\, a_p \circ b_q \circ c_s = [\![\mathcal{G}; A, B, C]\!]   (1)


Here, \times_n denotes the mode-n product. The mode-n products of a third-order tensor \mathcal{G} \in \mathbb{R}^{P \times Q \times S} with matrices A \in \mathbb{R}^{I \times P}, B \in \mathbb{R}^{J \times Q}, and C \in \mathbb{R}^{K \times S} are defined as

(\mathcal{G} \times_1 A)_{iqs} = \sum_{p=1}^{P} g_{pqs}\, a_{ip}, \qquad (\mathcal{G} \times_2 B)_{pjs} = \sum_{q=1}^{Q} g_{pqs}\, b_{jq}, \qquad (\mathcal{G} \times_3 C)_{pqk} = \sum_{s=1}^{S} g_{pqs}\, c_{ks}   (2)

A \in \mathbb{R}^{I \times P}, B \in \mathbb{R}^{J \times Q}, and C \in \mathbb{R}^{K \times S} in formula (1) may be considered the main components of the tensor \mathcal{X} \in \mathbb{R}^{I \times J \times K}, and P, Q, S are the numbers of main components in the three modes. \mathcal{G} \in \mathbb{R}^{P \times Q \times S} is the core tensor, showing the relationships among the different components; that is, the third-order tensor \mathcal{X} becomes a relatively small third-order tensor \mathcal{G} \in \mathbb{R}^{P \times Q \times S} through Tucker decomposition dimension reduction. Thus, a third-order tensor is expressed as a core tensor multiplied in each mode by the three factor matrices A, B, and C. The symbol \circ represents the outer product; that is, for a tensor \mathcal{X} = a \circ b \circ c,

x_{ijk} = a_i\, b_j\, c_k   (3)

The mode-n product is computed using outer products of vectors. Thus, the element values of the third-order tensor \mathcal{X} can be expressed as

x_{ijk} \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{s=1}^{S} g_{pqs}\, a_{ip}\, b_{jq}\, c_{ks}   (4)

where i = 1, ..., I, j = 1, ..., J, k = 1, ..., K. Before using the alternating least squares method to solve for the matrices A, B, and C, the third-order tensor is unfolded (matricized) along each mode:

X_{(1)} \approx A\, G_{(1)} (C \otimes B)^{T}, \qquad X_{(2)} \approx B\, G_{(2)} (C \otimes A)^{T}, \qquad X_{(3)} \approx C\, G_{(3)} (B \otimes A)^{T}   (5)

where \otimes represents the Kronecker product. Minimizing \lVert \mathcal{X} - [\![\mathcal{G}; A, B, C]\!] \rVert is equivalent to maximizing \lVert \mathcal{X} \times_1 A^{T} \times_2 B^{T} \times_3 C^{T} \rVert, namely maximizing \lVert \mathcal{G} \rVert; that is, G_{(n)} is maximized:

G_{(1)} \approx A^{T} X_{(1)} (C \otimes B), \qquad G_{(2)} \approx B^{T} X_{(2)} (C \otimes A), \qquad G_{(3)} \approx C^{T} X_{(3)} (B \otimes A)   (6)

When the matrices A, B, and C are, respectively, the leading P, Q, and S singular vectors of X_{(1)}(C \otimes B), X_{(2)}(C \otimes A), and X_{(3)}(B \otimes A), the target is solved. When P, Q, and S equal the column ranks of X_{(1)}, X_{(2)}, and X_{(3)}, the decomposition is optimal; when P, Q, and S are less than the column ranks, the decomposition is not optimal, and the alternating least squares method can be used. Therefore, the matrices A, B, and C are first initialized to the leading P, Q, and S eigenvectors of X_{(1)}^{T}X_{(1)}, X_{(2)}^{T}X_{(2)}, and X_{(3)}^{T}X_{(3)}. Then new matrices A, B, and C are solved by alternating least squares, computing Y_{(n)} first:

Y_{(1)} = X_{(1)}(C \otimes B), \qquad Y_{(2)} = X_{(2)}(C \otimes A), \qquad Y_{(3)} = X_{(3)}(B \otimes A)   (7)

The new matrices A, B, and C are then taken as the leading P, Q, and S singular vectors of Y_{(1)}, Y_{(2)}, and Y_{(3)}, and the above process is repeated until convergence. Eventually, the tensor is decomposed into [\![\mathcal{G}; A, B, C]\!], where the core tensor is

\mathcal{G} = \mathcal{X} \times_1 A^{T} \times_2 B^{T} \times_3 C^{T}   (8)
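A minimal NumPy sketch of this alternating procedure (higher-order orthogonal iteration) is given below, assuming a small dense third-order tensor; the function names, the random example tensor, and the fixed iteration count are illustrative assumptions rather than the authors' implementation.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding X_(n) of a third-order tensor (mode in {0, 1, 2})."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def leading_singular_vectors(M, r):
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :r]

def tucker_hooi(X, ranks, n_iter=50):
    """Tucker decomposition by alternating updates in the spirit of Eqs. (5)-(8)."""
    # Initialization: leading singular vectors of each unfolding.
    factors = [leading_singular_vectors(unfold(X, n), r) for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(3):
            # Project onto the other two factor subspaces, then refresh factor n.
            Y = X
            for m in range(3):
                if m != n:
                    Y = np.moveaxis(np.tensordot(factors[m].T, np.moveaxis(Y, m, 0), axes=1), 0, m)
            factors[n] = leading_singular_vectors(unfold(Y, n), ranks[n])
    # Core tensor, Eq. (8): G = X x1 A^T x2 B^T x3 C^T
    G = X
    for m in range(3):
        G = np.moveaxis(np.tensordot(factors[m].T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, factors

X = np.random.default_rng(0).random((6, 6, 4))
G, (A, B, C) = tucker_hooi(X, ranks=(3, 3, 2))
print(G.shape, A.shape, B.shape, C.shape)   # (3, 3, 2) (6, 3) (6, 3) (4, 2)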

2.4 Feature Graphs Incremental Reconstruction

In this paper, Tucker decomposition is performed on the third-order tensor \mathcal{X} \in \mathbb{R}^{|V| \times M \times K} formed by the text feature graphs to obtain the factor matrices, wherein the "feature word-main component" matrix B \in \mathbb{R}^{M \times Q} and the "feature-main component" matrix C \in \mathbb{R}^{K \times S} are used to reconstruct the feature graphs; Q and S are the numbers of main components of the tensor in mode 2 and mode 3, namely the numbers of feature words and features after dimensionality reduction. According to the relationship weight between each feature word and the main components, namely the value of element b_{mq} in matrix B, the most closely related main component is found, and the feature words belonging to the same main component are merged. Likewise, according to the relationship weight between each feature f_n and each main component, namely the value of element c_{ns} in matrix C, the most closely related main component is found and the features of the same main component are merged, achieving incremental dimension reduction.
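The grouping step can be sketched as a simple argmax over the rows of the factor matrices; the random matrices below stand in for the real B and C and are assumptions for illustration only.

import numpy as np

def group_by_main_component(factor):
    """Assign each row (feature word or feature) to its most related main component."""
    assignment = np.argmax(np.abs(factor), axis=1)      # index of the strongest weight per row
    groups = {}
    for row, comp in enumerate(assignment):
        groups.setdefault(int(comp), []).append(row)
    return groups

# B: "feature word-main component" matrix, C: "feature-main component" matrix (toy stand-ins)
rng = np.random.default_rng(1)
B, C = rng.random((10, 3)), rng.random((4, 2))
print(group_by_main_component(B))   # e.g. {0: [...], 1: [...], 2: [...]}
print(group_by_main_component(C))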


3 Experiments

3.1 Data Set

Data sets used in this paper are from China Daily website. In order to obtain the plain text from the website, we use web crawlers tool Heritrix to extract the data in the format of each page to be placed into .txt text. The experimental data is from the processed plain-text data, and there are a total of 53 valid .txt texts about finance.

3.2 Experimental Results

The text set data for 3 months is selected to determine the uniform threshold values and then express each text feature diagram in “feature word–feature word” square matrix. The experiment is launched from two kinds of incremental ways. One is basic way, and the other is an iterative manner. The specific practices are as follows: The basic incremental ways are divided into the following steps: (1) Combining all the feature graphs in the three months into third-order tensor. (2) Performing the third-order tensor into matrix, that is, third-order tensor mode-n is launched to obtain Xð1Þ ; Xð2Þ and Xð3Þ . (3) Initializing the matrix A, B, and C by Xð1Þ ; Xð2Þ , and X ð3Þ . (4) Solving new matrix A, B, C by alternating least squares method. (5) Reconstructing the text features graphs by “feature word-main component” and “feature-main component” matrix. Iterative incremental way includes the following steps: (1) Combining the data of the first two months into third-order tensor. The third-order tensor is subjected to matrix, and initialization is performed on matrix A, B, and C, and solving new matrix A, B, C by alternating least squares method to reconstruct text feature graphs. (2) Forming new data based on the above reconstruction results of text feature graphs. (3) Combining the new data with data of the third month into third-order tensor. The third-order tensor is subjected to matrix, and initialization is performed on matrix A, B, and C, and solving new matrix A, B, C by alternating least squares method to reconstruct text feature graphs. Incremental dimension reduction results based on tensor decomposition in the two methods are as shown in Tables 1 and 2. The order of the present set in this method has no impact on dimension reduction results. In basic incremental method, first-order tensor decomposition can get the dimensionality reduction results of all the data. Because the second-order tensor


Table 1 Incremental dimensionality reduction results based on tensor decomposition in the basic method

Time                                                         T1      T2      T3
Number of features                                           7       6       10
Number of feature words                                      122     81      132
Number of feature relationship                               865     568     1064
Number of features after tensor decomposition                18
Number of feature words after tensor decomposition           268
Number of feature relationship after tensor decomposition    1873

Table 2 Incremental dimensionality reduction results based on tensor decomposition in the iterative method

Iterative round                                              1                2
Time                                                         T1      T2      T1 + T2   T3
Number of features                                           7       6       11        10
Number of feature words                                      122     81      162       132
Number of feature relationship                               865     568     1074      1064
Number of features after tensor decomposition                11              19
Number of feature words after tensor decomposition           162             235
Number of feature relationship after tensor decomposition    1074            1603

combination into third-order tensor needs to extend each second-order tensor, it is improper to have too many feature graphs. Otherwise, it will result in rapid increase of complexity with increase of feature graphs. Thus, in case of a large amount of data, the iterative incremental approach is more effective than the basic incremental approach, but it requires repeated tensor decomposition in order to complete the task.

4 Conclusion In this paper, increment dimensionality reduction method based on tensor decomposition is used to express each feature graph in second-order tensor, which will be combined into the third-order tensor for incremental dimension reduction by Tucker decomposition. According to the obtained factor matrix after decomposition, the relationship between feature words and main components and between features and the main components can be obtained so as to combine the feature words and features of the same main component and reconstruct text feature graphs as well as achieve dimension reduction of feature graphs and feature words by two levels. Experiments show that the incremental dimensionality reduction method based on tensor decomposition is simple and effective.


Acknowledgments This work was supported by the National Natural Science Foundation of China (71171148, 61403238), the National Key Technology R&D Program (2012BAD35B01, 2012BAH13F04) and the National High-Tech Research and Development plan of China (2012AA062203), the National Basic Research Program of China (2014CB340404), and the Natural Science Foundation of Shanxi (2014021022-1).

References 1. Deerwester, S.C., Dumais, S.T., Landauer, T.K., et al.: Indexing by latent semantic analysis. J. Am. Soc. Inform. Sci. 41(6), 391–407 (1990) 2. Li, L., Zhang, Y.J.: A survey on algorithms of non-negative matrix factorization. Acta. Electron. Sin. 36(4), 737–743 (2008) 3. He, H.B., Li, X.F., Zhao, L.L.: Research of text categor ization based on CCIPCA and ICA. Comput. Eng. Appl. 44(29), 150–152 (2008) 4. Gao, M.T., Wang, Z.O.: Comparing dimension reduction methods of text feature matrix. Comput. Eng. Appl. 42(30), 157–159 (2006) 5. Zhao, C.W., Sun, S.H., Li, X.P.: Dimension reduction for text expression based on semantic similarity. (Journal of Henan University of Science and Technology). Nat. Sci. 29(5), 36–39 (2008) 6. Ricci Curbastro, G.: Résumé de quelques travaux sur les systèmes variables de fonctions associés à une forme différentielle quadratique. Bulletin des Sciences Mathématiques, 2(16), 167–189 (1982) 7. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM review, 51(3), 455–500 (2009)

Estimating a Transit Passenger Trip Origin–Destination Matrix Using Simplified Survey Method Jangwon Jin

Abstract In a small/medium-sized city (SMSC), bus is the one and only transit mode for citizens. Thus, better transit planning in SMSC will be even more productive in comparison with a big city. OD data is essential for establishing city transit planning. Recently big city’s transit agencies more often use Automatically Collected Data or transit card data for making OD matrix. However, SMSC still use traditional OD survey method which requires a lot of money and labor expenses, though it does not guarantee an accuracy of OD data. Therefore, this study will argue new OD surveying method for SMSC. This study suggests simplified OD surveying method without a big scale OD surveying studies in SMSC. As a result, this study shows that (1) the cost of obtaining OD matrix is significantly reduced; (2) the resulting matrix is based on significantly larger sample size; and (3) the process is more suitable for simplification that will make it much faster and therefore be able to be updated more frequently.



Keywords Transit passenger · Origin-destination matrix · Simplified survey method · Small/medium-sized city · Frata model · EMME/3

1 Introduction In small/medium-sized cities of Korea, bus is the only transit system for citizens. In case if mega-city transit agency wants to collect origin–destination (OD) data for establishing city transit planning, they can use automatically collected data (ACD) by electronic transit cards. However, small/medium-sized cities (SMSC) of Korea cannot use ACD because there are a lot of elderly people who are not able to use electronic bus card. Thus, SMSC have to use traditional OD survey method if J. Jin (&) Department of Transportation Planning & Management, Korea National University of Transportation, Uiwang, Gyeonggi 135-460, South Korea e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_10


they want to establish city transit planning. However, problem is that an accuracy of OD data cannot be guaranteed even by using great amounts of money and time because of non-population errors caused by monitoring persons. Even with mentioned above issues and risks, establishing OD data is indispensable for improving present bus route problems. Route-level transit passenger OD matrices are important inputs to transit planning. They are used for setting headways, evaluating alternatives (expressing and short turning), and for forecasting revenues [1]. Thus, this study will argue new OD survey method for SMSC. It will be suggested simplified OD survey method without big scale OD surveying studies in SMSC. This method had actually been applied by Chungju transit agency in autumn of 2012–2013. It improved bus route alternatives and received positive reaction from many citizens. Same data that were used in Chungju will be used for this study as well. EMME/3 program, which is one of the most famous simulation tools in the field of transportation planning, will be used for simulation.

2 Origin-Destination Survey Methodologies for Transit

2.1 Traditional Survey (On-Board Survey) Method

In the public transit industry, passenger OD matrices are traditionally estimated by conducting on-board surveys. During the surveys, questionnaires are distributed to passengers on-board transit vehicles, asking them for origin and destination information. The OD matrix is then generated from the responses. This method has several shortcomings [1]. Personnel are needed to distribute and process the questionnaires to passengers on-board. Thus, the on-board survey is expensive to conduct, and requires long processing time. Also, it is very difficult to not only recruit a lot of monitoring persons for boarding on buses simultaneously but also to minimize non-population errors that occur because of a human factor. Because of limited human resources, only a small portion of the on-board passengers can be surveyed; and not all distributed questionnaires will be returned. In addition, the results may be biased. For example, passengers making short trips tend to not respond to the questions [2]. Figure 1 demonstrates how to get raw bus passenger data for making OD. It requires for a lot of monitoring persons to be recruited. For example, if there are 66 operating buses in Chungju, then more than 180 persons should be recruited simultaneously including spare monitoring persons. It means that big scale budget and endeavor are necessary for management. However, accuracy of data cannot be guaranteed because of non-population errors.

Fig. 1 Flow chart of traditional OD survey method

More than two monitoring persons ride on a bus all day long for manual boarding and alighting counting: one person counts boarding and alighting numbers and the other administers the origin-destination questionnaire

Coding of OD survey and arranging of boarding and alighting numbers

Making OD and evaluating

2.2 OD Survey by Automatic Passenger Counter

Unlike on-board surveys, APC systems collect data continuously. Therefore, a much larger sample size can be obtained. The increase in sample size should theoretically reduce the sampling errors and biases. The APC data are collected in an electronic form, and the time requirements for the processing of the raw data would also be much shorter than that involves surveys [2]. The route structure describes the physical layout and the ADC systems installed on the considered route. The elements contained in the route structure usually do not change within a short term, for example 1 year. The key elements of the physical layout include the following: the direction of the considered route; the number of stops on this route; the distances between stop pairs; and the land use characteristics of each stop. Thus, the ADC system will provide data used to estimate an OD matrix for the considered route. If there is no ADC system installed, data required in the OD estimation methods could come from manual collection [3].

2.3 Estimating OD Matrix by Using Transit Card Data

Currently, high-quality transit card data have been gathered due to the increase of the importance of public transit which has stemmed from green transportation policies focused on reducing, and the expansion in the propagation of transit cards, used by more than 90 % of public transportation users in megacities of Korea. By using such card data, Lim et al. [4] try to estimate various parameters of trip distribution model, which minimizes the difference between the observed data from transit cards and estimated data from the trip distribution model. They had provided the optimized algorithm based on the double gravity model, and applied it to analyze the bus and the subway in the south Han-river area in Seoul [4]. However, as mentioned above, this method cannot be used in SMSC where there are very few card users.


Fig. 2 Flow chart of suggested simplified OD survey method

Manual boarding and alighting counting from video recordings taken by the buses' on-board cameras: one person counts boarding and alighting numbers and the other writes them down on sheets

Selection of main bus stops by boarding and alighting numbers

One or two persons per bus stop administer the origin-destination questionnaire

Coding of OD survey and arranging of boarding and alighting numbers

Making OD and evaluating

3 Estimating a Transit Passenger Trip Origin-Destination Matrix Using Simplified Survey Method

3.1 Suggested Simplified OD Survey Method

There are several shortcomings associated with the on-board survey estimation method. Using boarding and alighting counts can reduce some of these shortcomings. (Surveys can still be used in conjunction with the boarding and alighting counts in OD estimation.) Figure 2 shows a suggested method that counts boarding and alighting passenger numbers not by riding on buses but by using video records. This can save a lot of budget and effort, and it also helps to keep the accuracy of the data at a satisfactory level. An on-board survey allows questioning of only a small portion of the on-board passengers (conventionally less than about 10 %). However, a survey at a bus stop can provide a higher return rate of distributed questionnaires (more than 84.3 %) because the OD questionnaire is brief. Compared to the on-board survey, boarding and alighting counts are comparatively less costly to collect. These data are already being collected by APC systems, one of the ADC systems installed on transit vehicles. Therefore, the data already exist, and only the marginal cost of processing and using the data appropriately is incurred. Figure 2 shows the flow of the suggested simplified OD survey method.

Table 1 Top 10 bus stops by passenger volume

Name of bus stop         Total    Boarding   Alighting
Intercity bus terminal   3,605    1,526      2,079
KNUT                     1,320    884        436
Kookmin bank             1,189    597        592
Sincheonji town          1,065    496        569
Moohak market            993      534        453
Judeok                   635      353        282
Umjeong                  632      308        324
Chungju high school      577      312        265
Suanbo                   496      246        250
Public market            485      228        257

3.2 Combined Zoning System and Passenger Volumes by Time Periods

Traditionally when transit planning is established, every bus stop will be considered as an individual zone for making OD matrix in EMME3 program. It means that if there are 300 bus stops, it will be 300 × 300 zonal system. For example, the city of Chungju with its 800 bus stops will have 800 × 800 zonal system. This zonal system is too big; thus it cannot be analyzed by EMME3 program. However, the final purpose of OD matrix is to improve a bus route; thus similar bus stops can be combined together by numbers of passengers. For example, if a bus stop has many passengers, its zone status will remain. But if a bus stop has few passengers, it will be a part of a combined zone together with nearby bus stops. Also when we consider combining zone, GIS analysis will be helpful to judge combined zonal size [5]. Table 1 is observation for top 10 bus stops by passenger volumes in Chungju. There are 6 bus stops in city area and 4 in rural area (KNUT, Judeok, Umjeong, and Suanbo). Figure 3 shows the final zonal system. It combines 99 zonal systems (36 inner city zones and 63 rural area zones). Thus, it can be analyzed by EMME3. Figure 4 shows the amounts of boarding and alighting passengers according to time periods. It can be seen that morning peak time is from 7 to 8 a.m., which totals to 1,618 boarding passengers and 1,634 alighting passengers. Afternoon peak time is from 4 to 5 p.m., which totals to 1,588 boarding passengers and 1,586 alighting passengers. The chart also displays that main peak time is concentrated from 3 to 6 a.m., which are schools and working finish time. Totally in Chungju 15,928 passengers are boarding and 15,873 passengers are alighting in a weekday.


Fig. 3 Combined zonal system. Left chart shows inner-city 36 zones, right chart shows rural area 63 zones

Fig. 4 Boarding and alighting passengers by time periods

3.3 Simplified OD Survey at Bus Stops

As mentioned in Fig. 2, simplified OD survey was applied at 30 main bus stops, chosen according to the passengers amounts, during from 7 to 10 a.m. and from 4 to 7 p.m. Surveyors asked just one line questionnaires per a person: from where, to where, bus stop location for transfer, trip purpose. Simple questionnaires make high


Table 2 Results of transfer passengers' volume by simplified OD survey

Transfer                 1st      2nd      Rate of 1st transfer (%)   Rate of 2nd transfer (%)
Non-transfer             2,947    3,444    85.47                      99.88
Intercity bus terminal   281      2        8.15                       0.06
Kookmin bank             44       0        1.28                       0.00
Public market            34       0        0.99                       0.00
Sincheonji               25       0        0.73                       0.00
Yeseong bridge           16       1        0.46                       0.03
Neomedi clinic           14       0        0.41                       0.00
1st rotary               13       0        0.38                       0.00
Sincheonji town          10       0        0.29                       0.00
Moohak market            8        0        0.23                       0.00
Etc                      56       1        1.62                       0.03
Total                    3,448    3,448    100.00                     100.00

answer return rate, which is 84.3 %. Questionnaires were delivered to 4,091 persons and got effective answer from 3,448 persons, which means OD survey was applied to 21.7 % of total bus passengers in Chungju. Table 2 demonstrates the results of transfer passengers volume. In Chungju 14.5 % of bus passengers usually use single (1 time) transfer and almost none of them use double (2 times) transfer. Biggest transfer bus stop is Intercity bus terminal (8.15 %) and the next is Kookmin bank (1.28 %). It should be mentioned that budget for this survey was less than 10 % of traditional survey budget.

3.4 Estimating of OD by Frata Model

Now we can have two OD tables. One is based upon simplified OD survey (Table 3). Other is real OD matrix made from video counting method (Table 4). However, real full OD does not have each trip volume (tij*) but has real origin– destination volume of each zone. Thus, the final data must be estimated from two OD tables.

Table 3 OD matrix from survey

Zone     1        ...    99        Total
1        t11      ...    t1,99     o1
2        t21      ...    t2,99     o2
⋮        ⋮               ⋮         ⋮
98       t98,1    ...    t98,99    o98
99       t99,1    ...    t99,99    o99
Total    d1       ...    d99       Σ tij


Table 4 OD matrix from real volume

Zone     1        ...    99        Total
1        .        ...    .         O1
2        .        ...    .         O2
⋮        ⋮        tij* = ?         ⋮
98       .        ...    .         O98
99       .        ...    .         O99
Total    D1       ...    D99       Σ tij

Here, the OD matrix (t_{ij}^{*} in Eq. (1)) can be estimated from the surveyed OD matrix (t_{ij}) by the Frata model. The Frata model is a well-known method that scales the surveyed matrix with adjustment factors, Eqs. (3) and (4). The reason for using the Frata model is that the facilities of the bus system do not change while the OD matrix is being estimated from the surveyed OD matrix.

t_{ij}^{*} = t_{ij}\, E_i\, F_j\, \frac{L_i + L_j}{2}   (1)

E_i = \frac{O_i}{o_i}, \qquad F_j = \frac{D_j}{d_j}   (2)

L_i = \frac{\sum_{j=1}^{n} t_{ij}}{\sum_{j=1}^{n} t_{ij} F_j}   (3)

L_j = \frac{\sum_{i=1}^{n} t_{ij}}{\sum_{i=1}^{n} t_{ij} E_i}   (4)
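A minimal Python sketch of one Frata (Fratar) adjustment pass following Eqs. (1)-(4) is given below; the small 3 x 3 example matrix and the target margins are illustrative assumptions, and in practice the pass is repeated until the row and column totals converge to the observed values.

import numpy as np

def frata_iteration(t, O, D):
    """One Frata adjustment of the surveyed OD matrix t toward row totals O and column totals D."""
    o, d = t.sum(axis=1), t.sum(axis=0)                    # surveyed origin / destination totals
    E, F = O / o, D / d                                    # Eq. (2): growth factors
    L_i = t.sum(axis=1) / (t * F[None, :]).sum(axis=1)     # Eq. (3)
    L_j = t.sum(axis=0) / (t * E[:, None]).sum(axis=0)     # Eq. (4)
    return t * np.outer(E, F) * (L_i[:, None] + L_j[None, :]) / 2.0   # Eq. (1)

t = np.array([[10.0, 5.0, 2.0], [4.0, 8.0, 6.0], [3.0, 2.0, 9.0]])    # surveyed OD (toy)
O = np.array([20.0, 22.0, 18.0])                                      # observed boardings by zone
D = np.array([19.0, 18.0, 23.0])                                      # observed alightings by zone
for _ in range(20):
    t = frata_iteration(t, O, D)
print(np.round(t, 2), t.sum(axis=1), t.sum(axis=0))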

4 Results and Discussion

As a result, we can see that, using the trip distribution model in this research, the estimated volumes are similar to the observed volumes. Table 5 shows the final OD matrix estimated from the surveyed OD matrix by using the Frata model. The RMSE of the total volume is 1.15 %, which means the estimated OD can be used for the improvement of bus route problems. The traditional way a transit agency would obtain an OD matrix is as a by-product of occasional on-board passenger surveys, using various techniques to expand the survey results based on manual boarding and alighting counts at the stops. Such passenger counts and surveys are expensive to conduct and thus are extremely infrequent [3]. This study examines the possibility of estimating an OD matrix for city bus route improvement without further monetary investment, by devising a new survey and analysis method. The analysis results indicate that (1) the cost of obtaining the OD


Table 5 Results of estimated final OD matrix

Zone     1         2         3         ...    97        98       99       Total
1        0.83      0         0         ...    0         0        0        81.77
2        0         0         0         ...    0         0        0        196.32
3        0         0         0         ...    0         0        0        199.61
⋮        ⋮         ⋮         ⋮         ⋮      ⋮         ⋮        ⋮        ⋮
97       28.87     0         54.01     ...    14.27     0        0        300.68
98       0         0         0         ...    0         26.97    0        26.97
99       0         0         0         ...    0         0        0        15.89
Total    105.25    170.41    224.53    ...    295.72    27.06    16.04    15717.49

matrix is significantly reduced; (2) the resulting matrix is based on a significantly larger sample size; (3) the process is more suitable for simplification, which makes it much faster and therefore able to be updated more frequently; and (4) this process can be combined with more targeted surveys to obtain a more cost-effective and comprehensive picture of passenger travel behavior. In addition, it is reported that, with the help of the OD data, the reformed bus route system significantly improves the serviceability of the study area, which means more people can receive better services without additional significant cost and/or public investment [6]. However, for a better study, further work is needed on the analysis framework employed here so that it can construct an efficient zonal table, ensuring better performance in selecting bus stops and removing zero-trip zones.
Acknowledgments The research was supported by a grant from the Academic Research Program of Korea National University of Transportation in 2013.

References
1. Ben-Akiva, M., Macke, P., Hsu, P.: Alternative methods to estimate route-level trip tables and expand on-board surveys. Transportation Research Record, vol. 1037, pp. 1–11. Washington (1985)
2. Lu, D.: Route-level bus transit passenger origin-destination flow estimation using APC data: numerical and empirical investigation. M.S. dissertation, Graduate School of the Ohio State University, Ohio (2008)
3. Cui, A.: Master Thesis. Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge (2006)
4. Lim, Y., Park, C., Kim, D., Eom, J., Lee, J.: Estimating trip distribution model by using transit card data. Kyotong Yeonku, vol. 19, pp. 1–11, Seoul (2012)
5. Jin, J., Lee, G.: A GIS-based analysis for examining the effect of serviceability improvement due to reforming the city bus route system. Int. J. Softw. Eng. Its Appl. 7(6), 89–100 (2013). http://dx.doi.org/10.14257/ijseia.2013.7.6.08
6. Jin, J.: Evaluation of bus routes improvement in small medium city: case study of Chungju. Transp. Technol. Policy 10(6), 32–41 (2013) (Seoul)

An LDPC Coded Adaptive Amplify-and-Forward Scheme Based on the EESM Model Xiang Chen and Mingxiang Xie

Abstract Cooperative communication is a promising technology to obtain diversity gain at terminals. According to the traditional amplify-and-forward (AF) scheme, relay node helps transmit source node’s information with fixed transmission power all the time. It is not optimal in view of power efficiency, especially when the channel state is good. This paper introduces the adaptive AF scheme based on the minus exponential effective-SNR mapping (EESM) model. The optimal power to relay is predicted accurately. The reliability requirement is satisfied, and the power is saved for transmitting new information. Theoretical analysis and simulation results indicate that the adaptive AF scheme improves the power efficiency significantly.



Keywords User cooperation · Amplify-and-forward · The minus exponential effective-SNR mapping (EESM) model

1 Introduction

User cooperation was proposed by Sendonaris et al. in 1998 [1]; the idea is that user terminals share their antennas to form a virtual multi-input multi-output (MIMO) system and thereby acquire diversity gain. Laneman et al. proposed the amplify-and-forward (AF) and decode-and-forward (DF) schemes [2] to realize cooperative communication in 2001. Then, coded cooperation was proposed by Hunter et al. in 2002 [3]. Among the three cooperation modes, AF makes the relay node (RN) help transmit the codeword of the source node (SN) to the destination node (DN) whenever RN receives SN's codeword, so it is the scheme with the lowest complexity. When the channel state of the SN-RN link is good, the AF scheme is quite practical.


However, the traditional AF scheme fixes the transmission power of the relayed modulation symbols at RN. Obviously, this is not a good choice for the power efficiency of the whole system. When the channel state of the RN-DN link is good, it is enough for RN to relay with a lower transmission power to meet the reliability requirement of DN. This paper proposes an adaptive AF scheme based on the minus exponential effective-SNR mapping (EESM) model; the optimal relay power is predicted precisely, and the saved energy can be used to transmit new information. The EESM model is a multi-carrier link error prediction model proposed by Ericsson [4]. It has been confirmed to be suitable for turbo/LDPC/convolutional coded multi-state systems [5]. It is promising in multi-carrier, multi-time-slot, and multi-antenna systems. The rest of the paper is organized as follows. Section 2 describes the three-node model of the adaptive AF scheme. Section 3 introduces the EESM model in brief. The proposed adaptive AF scheme is described in detail in Sect. 4. Finally, the error performance and efficiency of the proposed and the traditional schemes are compared and analyzed in Sect. 5.

2 System Model

The system model of the fixed and adaptive AF schemes is the three-node model illustrated in Fig. 1, including SN, RN, and DN. The AF scheme includes two stages. In the first stage, SN broadcasts an LDPC codeword to RN and DN. If RN receives it successfully, it prepares to transmit it in the next stage. If DN fails to decode the codeword, then in the second stage RN relays the received codeword to DN. As long as SN and RN are far enough from each other, the SN-DN and RN-DN links can be viewed as independent of each other. DN combines the independently faded codewords from SN and RN, obtaining diversity gain that improves the performance.

Fig. 1 The system model of the amplify-and-forward user cooperation scheme (SN, RN, DN)


3 The EESM Model

The EESM is derived from the Chernoff union bound of the pair-wise error probability (PEP) [8]. It can be generalized to a multi-state channel in which a coding block experiences different SNR states on each sub-carrier or in each symbol transmission time interval (TTI). As shown in Fig. 2, the EESM model maps the instantaneous channel state set into a single effective-SNR value, which yields a BLEP from a LUT established by the link-level performance of BPSK over the AWGN channel, so it is a simple link error prediction method for multi-carrier systems. In [4], the EESM model is defined as

\mathrm{SINR_{eff}} = -\beta \ln\left\{ \frac{1}{N} \sum_{k=1}^{N} \exp\left(-\frac{\gamma_k}{\beta}\right) \right\},    (1)

where N is the number of used sub-carriers, γk represents the SNR value of the kth sub-carrier, β is an adjusting factor for each MCS, which can be decomposed into two adjusting factors: one is determined by modulation scheme, denoted by rmod, and the other depends on coding rates, represented by rcod; thus the expression (1) is rewritten as

\mathrm{SINR_{eff}} = -r_{mod} \cdot r_{cod} \ln\left\{ \frac{1}{N} \sum_{k=1}^{N} \exp\left(-\frac{\gamma_k}{r_{mod} \cdot r_{cod}}\right) \right\}.    (2)

Fig. 2 The diagram of the EESM link quality model


The received coded bit information rate (RBIR) value is defined as

\mathrm{RBIR} = 1 - \exp\left(-\frac{\mathrm{SINR_{eff}}}{r_{mod} \cdot r_{cod}}\right).    (3)

RBIR represents the average mutual information carried by each coded bit, which is mapped to the BLEP by a common RBIR-BLEP mapping curve. Reference [7] proves that for turbo codes the RBIR-BLEP mapping depends only on the coding rate:

\mathrm{BLEP}(\{\gamma_k\}) \approx \mathrm{BLEP_{AWGN}}(\gamma_{eff}),    (4)

where BLEP({γk}) is the actual BLEP for the instantaneous channel state set {γk} and BLEPAWGN(γeff) represents the BLEP performance over AWGN channel. The mapping between BLEP and γeff for each MCS is conducted through a LUT, which is acquired from link-level simulation over AWGN channel. To optimize the accuracy of the model, another table of rmod and rcod should be pre-established for each MCS. Hence, two look-up tables (LUTs) are needed for the EESM model.
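As a worked illustration of Eqs. (1)–(3), the following minimal Python sketch computes the effective SINR and the RBIR for a set of per-sub-carrier SNRs. The dB-to-linear conversion of the sub-carrier SNRs, and the example β value (rmod · rcod for BPSK with the rate-0.5 code from Table 1), are our assumptions; the BLEP look-up itself is left to the LUTs described in the text.

```python
import math

def eesm_effective_sinr(snr_db_list, beta):
    # Eq. (2): SINR_eff = -beta * ln( (1/N) * sum_k exp(-gamma_k / beta) )
    gammas = [10.0 ** (s / 10.0) for s in snr_db_list]   # assumed dB -> linear
    avg = sum(math.exp(-g / beta) for g in gammas) / len(gammas)
    return -beta * math.log(avg)

def rbir(sinr_eff, beta):
    # Eq. (3): RBIR = 1 - exp(-SINR_eff / (rmod * rcod)), with beta = rmod * rcod
    return 1.0 - math.exp(-sinr_eff / beta)

# Example: BPSK (rmod = 0.8) with the rate-0.5 RC-LDPC code (rcod = 1, Table 1)
beta = 0.8 * 1.0
sinr_eff = eesm_effective_sinr([-2.0, 1.0, 3.5, 0.0], beta)
print(sinr_eff, rbir(sinr_eff, beta))
```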

4 The Adaptive AF Scheme Based on the EESM Model

4.1 The Two Look-up Tables Based on the EESM Model

Our previous work in [5] has confirmed that the EESM model is capable of unifying BLEP performance of various modulation schemes and diverse sub-carrier states, and even different coding block lengths that the IEEE802.16e standard specified. Thus, it will be accurate enough to implement link adaptation with two LUTs; the LUT1 is built according to the RBIR versus BLEP performance of BPSK modulation and different coding rates over AWGN channel. The LUT2 is the Effective SINR-to-RBIR mapping table based on (2) and (3) for the four modulation modes, as illustrated in Fig. 3.

4.2 Adjusting Factors of the EESM Model

The rate-compatible (RC) LDPC code family in [6] is used here. The WiMax rate-0.5 (576, 288) LDPC code is chosen as the mother code. LDPC codes with lower coding rates from 0.1 to 0.4 are obtained by extending the check matrix, and LDPC codes with higher coding rates from 0.6 to 0.9 are obtained by block puncturing. As mentioned before, the EESM model needs well-trained adjusting factors, i.e., rmod and rcod, to achieve good accuracy for a given MCS. Table 1 lists the best adjusting factors for each modulation and coding scheme (MCS) of the RC-LDPC

Fig. 3 The SINR-to-RBIR mapping of the EESM model (x-axis: SNR (dB); y-axis: RBIR; curves: EESM BPSK r = 0.8, QPSK r = 1.55, 16QAM r = 5.4, 64QAM r = 16.5)

Table 1 Adjusting factors for RC-LDPC codes

Coding rate   rcod (BPSK)   rcod (QPSK)   rcod (16QAM)   rcod (64QAM)
0.1           1             1.03          0.87           0.635
0.2           1             1.045         0.93           0.74
0.3           1             1.05          0.92           0.78
0.4           1             1.055         0.99           0.85
0.5           1             1.045         1.03           0.98
0.6           1             1.065         1.15           1.10
2/3           1             1.055         1.17           1.23
0.7           1             1.03          1.23           1.35
3/4           1             1.045         1.23           1.40
0.8           1             1.03          1.27           1.43
5/6           1             1.045         1.30           1.50
0.9           1             1.03          1.35           1.7
rmod          0.8           1.55          5.4            16.5

codes, which we found in simulations by trial and error. Figure 3 presents the mapping between SNR and RBIR for the different modulation modes.

4.3 The Adaptive AF Scheme Based on the EESM Model

Specifically, assuming the required instantaneous block error rate (BLER) is no more than BLERtarget, RBIRtarget is determined for a specific coding rate according to (4) and LUT1. When the broadcasting stage is finished, the EESM model checks the RBIR value received by DN, denoting the instantaneous channel state of


the SN-DN link by SINRSD; then, according to LUT2, RBIR1 is known. If the instantaneous BLER at DN is higher than BLERtarget, RBIR1 must be less than RBIRtarget, and the missing received coded bit information is N · (RBIRtarget − RBIR1); this should be compensated by RN. For the same codeword, RBIR2 = RBIRtarget − RBIR1, and according to LUT2 the required SINRRD can be found; thus the power factor for relaying can be calculated.
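A minimal sketch of this relay power prediction is given below. The LUT look-ups are abstracted; the final step, which maps the required effective SINR on the RN-DN link to a power factor by assuming that the received SINR scales linearly with the relay power, is our reading of the scheme rather than a formula stated in the paper, and the 0.1 lower bound mirrors the minimum relay power factor used in Sect. 5.

```python
import math

def required_sinr_from_rbir(rbir2, beta):
    # invert Eq. (3): SINR = -beta * ln(1 - RBIR)
    return -beta * math.log(max(1.0 - rbir2, 1e-12))

def relay_power_factor(rbir_target, rbir1, beta, sinr_rd_full_power,
                       p_min=0.1, p_max=1.0):
    rbir2 = rbir_target - rbir1                 # information RN must add
    if rbir2 <= 0.0:
        return 0.0                              # DN already meets the target
    sinr_needed = required_sinr_from_rbir(rbir2, beta)
    # assumed: SINR at DN is proportional to the relay power (linear scale)
    factor = sinr_needed / sinr_rd_full_power
    return min(max(factor, p_min), p_max)
```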

5 Simulation Result

To verify the performance of the adaptive AF scheme, a link-level simulation platform is constructed according to the three-node system model. The error-correcting code is the (576, 288) LDPC code specified by the IEEE 802.16e standard. The SN-RN, SN-DN, and RN-DN links are all single Rayleigh block fading channels and are independent of each other. The average signal-to-noise ratio (SNR) of the SN-DN link and the RN-DN link is the same, and RN retransmits the codeword it received from SN whenever DN fails to decode the codeword. In the fixed AF scheme, RN retransmits the codeword with the same power as SN. In the adaptive AF scheme, RN retransmits the codeword with a power factor predicted by the EESM model; this is a relative value, the power used to relay divided by the total power of RN. Assuming the total power of RN is 1, the power factor ranges from 0 to 1: if RN need not relay the codeword, the power factor equals 0; if RN uses the same power as SN to transmit the codeword, the power factor is 1; if RN relays the codeword with part of its power, the power factor is the ratio of the relayed power to the total power, and the rest of the power is saved. The error performance of the adaptive AF scheme is compared with that of the traditional AF scheme, assuming the worst BLER of the system should not be higher than 0.01. Only the BPSK modulation scheme is used. For the (576, 288) WiMax LDPC code, the retransmission power is set to 1 for the traditional AF scheme, while that of the adaptive AF scheme ranges from 0.1 to 1, as predicted by the EESM model. The BLER performance of the adaptive AF scheme, the traditional AF scheme, and the no-cooperation scheme is illustrated in Fig. 4a, and the corresponding relative ratios of relayed modulation symbols are shown in Fig. 4b. It can be observed that when the SNR of the SN-DN link is lower than −5 dB, the BLER performance of the adaptive AF scheme and that of the traditional AF scheme are almost the same, and both are higher than 0.01. That is because the channel state is so bad that even if the entire power of RN is used to retransmit the codeword of SN, the reliability requirement still cannot be satisfied. When the SNR of the SN-DN link is between −5 and −2 dB, the BLER of the adaptive AF scheme remains near 0.01 but always below 0.01, and the corresponding power factor decreases from 1 to 0.23, which implies that the EESM

Fig. 4 The performance comparison of the adaptive AF, the traditional AF, and no cooperation. a The error performance comparison (BLER vs. Es/N0 (dB)). b The power efficiency comparison (transmitting power factor at RN vs. Es/N0 (dB) at SN)

model accurately predicts the minimum power needed for RN to relay: the reliability requirement is met and as much energy as possible is saved. In contrast, although the error performance of the fixed AF scheme is much lower than the target BLER, the waste of energy is obvious because RN always retransmits with full power; its power efficiency is not optimal. When the SNR of the SN-DN link is higher than −1.5 dB, the reliability requirement can be satisfied without cooperation. Since the minimum relay power factor is set to 0.1, the adaptive AF scheme is still better than the no-cooperation scheme.

6 Conclusions

In the traditional AF scheme, RN always relays with fixed power; when the channel state is good, the error performance is far better than the reliability requirement and the waste of radio resources is obvious. An adaptive AF scheme is proposed in which the power for relaying is predicted accurately based on the EESM model. The adaptive AF scheme not only satisfies the target BLER but also minimizes the power needed to relay. Radio resources are saved to transmit new information, and the spectral efficiency is improved.


Acknowledgments This work was sponsored by the National Science Foundation of China (No. 61040007) and the EEI Science Foundation (No. KY14A273).

References
1. Sendonaris, A., Erkip, E., Aazhang, B.: Increasing uplink capacity via user cooperation diversity. In: Proceedings of the IEEE International Symposium on Information Theory, p. 156 (1998)
2. Laneman, J.N., Wornell, G.W.: Exploiting distributed spatial diversity in wireless networks. In: Proceedings of the Allerton Conference (2000)
3. Hunter, T.E., Nosratinia, A.: Cooperation diversity through coding. In: Proceedings of the IEEE International Symposium on Information Theory, p. 220 (2002)
4. Ericsson: System-level evaluation of OFDM—further considerations. 3GPP TSG-RAN WG1 #35, R1-031303, 17–21 Nov 2003
5. Chen, X., et al.: The application of EESM and MI-based link quality models for rate compatible LDPC codes. In: VTC-2007 Fall, IEEE 66th Vehicular Technology Conference (2007)
6. Chen, X., et al.: Link adaptation of rate-compatible LDPC coded OFDM system based on minus exponential effective-SNR mapping link quality model. In: Proceedings of the 4th International Conference on Wireless Communications, Networking and Mobile Computing, WiCOM '08, pp. 1–5, 12–14 Oct 2008
7. Wan, L., Tsai, S., Almgren, M.: A fading-insensitive performance metric for a unified link quality model. IEEE WCNC 4, 2110–2114 (2006)
8. Pauli, M., Wachsmann, U., Tsai, S.: Quality determination for a wireless communications link. U.S. Patent 6 822 80, 9 Oct 2003

An Opportunistic Routing Protocol Based on Link Correlation for Wireless Mesh Networks Huibin Wang, Yang Liu and Shufang Xu

Abstract An opportunistic routing protocol is a wireless multi-hop routing protocol proposed to exploit the broadcast and lossy characteristics of wireless networks. In opportunistic routing, each forwarder is selected by competition among multiple candidates, which results in better transmission reliability compared to traditional fixed routing. This paper proposes a novel opportunistic routing protocol named link correlated opportunistic routing (LCOR), which considers link correlation when selecting candidate forwarders in wireless mesh networks (WMNs). We implement LCOR in simulation scenarios of a 30-node wireless mesh network. The results show that (i) LCOR can achieve opportunistic routing in WMNs whose nodes are limited in number and evenly distributed; (ii) LCOR achieves a higher packet delivery rate than link-correlation-unaware opportunistic routing and the Shortest Path routing protocol; and (iii) the overhead of our protocol is smaller than that of ExOR when the number of candidates in each forwarding candidates set is less than 5.

Keywords Opportunistic routing · Link correlation · Packet loss · WMNs

1 Introduction

In 2005, MIT researchers proposed a multi-hop wireless network routing protocol named opportunistic routing [1], which is based on the broadcast nature of wireless networks. In traditional wireless network routing protocols such as DSR [2]


and AODV [3], a fixed route is chosen as the data transmission path between a source and a destination. Different from traditional wireless routing protocols, an opportunistic routing protocol chooses a sequence of forwarding candidates sets between the source and the destination and assigns a priority to each candidate. If a packet is received by some candidates, the ultimate forwarder will be the one with the highest priority, and the packet is forwarded to the next forwarding candidates set; this continues until the packet reaches the destination. Most opportunistic routing protocols, such as ExOR [1], MORE [4], and SOAR [5], assume that reception at different receiving nodes is independent. However, several works have pointed out that packet reception at different receiving nodes of the same sender is correlated [1, 6–11] and that this correlation influences the performance of opportunistic routing. Thus, link correlation is an important factor in opportunistic routing protocol design. Existing works have proposed the normalized correlation coefficient κ [8], the CPRP [10], and the Hamming Distance [11] to measure the correlation of links that have the same sender. For κ, the packet reception conditions of two nodes are recorded by two variables: a variable equals 1 when a packet is successfully received and 0 otherwise, and κ is the normalized correlation coefficient of the two variables. CPRP is the probability that a high packet reception rate (PRR) node receives the packet from the sender when the packet is received by a low PRR node [10]. In the Hamming Distance metric, a bitmap is used to record a node's packet reception condition, and the Hamming Distance is defined as the number of positions where the corresponding bits of two bitmaps differ [11]. Unfortunately, the metrics above can only measure the correlation between two links, so link correlation cannot be considered in the routing process when the number of candidates in each set is more than 2. In addition, link correlation is essentially caused by interference: multiple receivers within a certain range can be influenced by the same interference source. Thus, we propose the Packet Loss Joint Probability to measure the correlation of multiple links that have the same sender. In order to measure the quality of links in a WMN and then select appropriate forwarding candidates sets, ExOR used the ETX metric [12] and Zifei Zhong proposed the EAX metric [13], but both assume that packet losses are uncorrelated among receivers. A. Basalamah proposed cEAX [6], which considers link correlation to measure the expected transmission count of a packet from source to destination, but the calculation of cEAX cannot be implemented in most wireless mesh networks because the number of candidates in each set can only be 2 and the hop count cannot be more than 2. The subset quality index (SQI) presented in this paper can measure the quality of a node set in WMNs. SQI considers the correlation among multiple links and supports any rational integer number of candidates in each set. The rest of the paper is organized as follows: Sect. 2 introduces the key mechanisms in the protocol design. Section 3 is the basic design of our protocol. In Sect. 4, we implement our protocol in a variety of WMN simulation scenes and then


statistically analyze the performance of the protocol. Our concluding remarks and expectations are given in Sect. 5.

2 Preliminaries

2.1 Link Correlation in Opportunistic Routing

In a WMN, a packet transmitted from source to destination usually needs multi-hop forwarding. As shown in Fig. 1, the source node is N1 and the destination node is N6. Suppose each link between two neighboring nodes has the same PRR p = 0.9 and the same ACK delivery probability (equal to 100 %). N2–N5 are intermediate nodes. Now we compare the traditional Shortest Path routing with opportunistic routing. The traditional Shortest Path routing [14, 15] selects the path N1–N3–N5–N6, which means that source node N1 first sends the packet to N3, then N3 forwards the packet to N5, and finally N5 forwards it to N6. The expected number of transmissions for packet delivery is 1/p^3 = 1.372. On the other hand, the path selection of opportunistic routing results in N1–F1–F2–N6, where F1 and F2 represent the first-hop and second-hop forwarding candidates sets; each set includes two candidates. This paper uses the Packet Loss Joint Probability to measure link correlation; it represents the probability that multiple receivers receive a packet from the same sender simultaneously. Assume that the Packet Loss Joint Probability from N1 to F1, from N2 to F2, and from N3 to F2 is the same. When the Packet Loss Joint Probability equals 0.1, the diversity gain of multipath is zero, and the expected number of transmissions from source to destination equals 1/p^3 = 1.372. When the Packet Loss Joint Probability equals 0, the number is 1/p = 1.111. When the Packet Loss Joint Probability equals 0.05, the number is 1/(0.95 * 0.95 * p) = 1.231. We can see that 1.372 > 1.231 > 1.111; therefore, opportunistic routing can achieve a smaller expected number of transmissions than the Shortest Path routing protocol. In opportunistic routing, the greater the Packet Loss Joint Probability (i.e., the greater the link correlation), the lower the transmission reliability will be.

Fig. 1 A simple WMN model

2.2 Get PRR and Packet Loss Joint Probability

As described in Sect. 2.1, opportunistic routing generally selects multiple cascaded forwarding candidates sets between source and destination in order to complete packet transmission. Before selecting the forwarding candidates, we need to calculate the PRR of each link. In opportunistic routing, each node sends hello messages periodically to confirm neighboring relationships and to calculate the PRR between the sending node and its neighbor nodes. A node that can receive a hello message in one hop is a neighbor node of the hello message's sender. Each node broadcasts a specified number of hello messages to its neighbor nodes in a cycle. In this protocol, a hello message contains the sending node ID and a sequence number; every node maintains its receiving record sequence, represented by a binary sequence in which 1 means the hello message was received and 0 means it was not. The number of sequence digits equals the number of hello message transmissions. The PRR between two nodes equals the percentage of 1s in the corresponding binary sequence. When the hello message transmission has finished, each node broadcasts its known PRRs and its receiving record sequence to the other nodes. Thus, all nodes know the PRR and the receiving record sequence of any node pair. In a one-hop packet transmission process, node x is the sending node and F is a subset of x's neighbor nodes, F = {N1, N2, …, Nk}, where N1 to Nk represent the k nodes in F. The correlation among the nodes in F is represented by the Packet Loss Joint Probability, denoted by pr(x, F); pr(x, F) represents the probability of all nodes in F receiving a packet from sending node x. Performing a bitwise AND on the k nodes' hello message receiving record sequences and then calculating the percentage of 1s in the resulting sequence gives the Packet Loss Joint Probability.
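The following minimal Python sketch follows this description literally: PRR is the fraction of 1s in a node's receiving record, and the joint probability is obtained by a bitwise AND across the candidates' records. The list-of-0/1 representation and the example records are our own illustration.

```python
def prr(record):
    # fraction of hello messages received (1 = received, 0 = lost)
    return sum(record) / len(record)

def joint_probability(records):
    # bitwise AND across all candidates' records, then the fraction of 1s
    joint = [min(bits) for bits in zip(*records)]
    return sum(joint) / len(joint)

# Example: two candidates of the same sender
r1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
r2 = [1, 0, 0, 1, 1, 1, 1, 1, 0, 1]
print(prr(r1), prr(r2), joint_probability([r1, r2]))
```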

2.3 Link Quality Index and Path Quality Index

A complete packet transmission process includes not only the packet transmission from source to destination but also the ACK reply process and the retransmission process after a packet transmission failure. To measure the quality of the link between a node pair, we should consider both the forward link and the reverse link. Let A denote the product of the forward and reverse links' PRRs; our link quality metric then equals −ln A. In Sect. 4 we will show that our design can achieve a higher packet delivery rate than ExOR. We define the Link Quality Index between nodes Ni and Nj as

e(N_i, N_j) = -\ln(p_{ij} \cdot p_{ji}).    (1)


Here, Ni and Nj are node numbers, pij represents the PRR from node Ni to node Nj, and pji represents the PRR from node Nj to node Ni. The smaller e(Ni, Nj) is, the better the link's quality. Moreover, in earlier opportunistic routing protocols such as ExOR and MORE, the quality of a path equals the sum of the ETX values of all links on the path. We think that the sum of the natural logarithms of the links' ETX is more representative of the path quality. Suppose node Nm is a candidate and d is the destination node; we use the minimum sum of Link Quality Index to represent the quality of the path between node Nm and node d. We define the Path Quality Index between node Nm and the destination node d as

E(N_m, d) = \min\left\{ \sum_{N_i, N_j \in \Omega} e(N_i, N_j) \right\},    (2)

where Ω is the path from node Nm to node d that has the minimum sum of Link Quality Index, and Ni and Nj are arbitrary neighboring nodes on Ω with Ni different from Nj. The smaller E(Nm, d) is, the better the path's quality. The Path Quality Index cannot measure the path quality between a node subset and a destination node, so we put forward the Subset Quality Index (denoted by SQI) for the selection of the forwarding candidates set, taking link correlation into account at the same time.
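Because Eq. (2) is a minimum-weight path under the edge weights of Eq. (1), it can be computed with Dijkstra's algorithm. A minimal Python sketch follows; the dictionary-of-PRRs graph representation is our own choice, not something specified in the paper.

```python
import heapq, math

def link_quality(p_ij, p_ji):
    return -math.log(p_ij * p_ji)            # Eq. (1)

def path_quality(prr, source, dest):
    # prr[(i, j)] is the PRR from node i to node j
    nodes = {i for i, _ in prr} | {j for _, j in prr}
    dist = {n: math.inf for n in nodes}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for (i, j), p in prr.items():
            if i == u and (j, i) in prr:      # need both directions for Eq. (1)
                nd = d + link_quality(p, prr[(j, i)])
                if nd < dist[j]:
                    dist[j] = nd
                    heapq.heappush(heap, (nd, j))
    return dist.get(dest, math.inf)           # E(source, dest), Eq. (2)
```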

2.4 Subset Quality Index

In the selection of a forwarding candidates set, x is the sending node (the source node or a forwarder), and x selects a subset of its neighbor nodes as the next-hop forwarding candidates set. Suppose F = {N1, N2, …, Nk}, where N1–Nk represent the k nodes in F and F is a subset of x's neighbor nodes. We define the Subset Quality Index of F as

SQI(x, F, d) = \frac{\sum_{i=1}^{k} p_{xi}}{\sum_{i=1}^{k} p_{xi}\, p_{ix}} \cdot \frac{1}{1 - pr(x, F)} \cdot \frac{1}{k} \sum_{i=1}^{k} E(N_i, d).    (3)

Here, when we are looking for the first (first-hop) forwarding candidates set, the sending node x is the source node; when we are looking for the num-th forwarding candidates set, x is the forwarder of the (num − 1)th forwarding candidates set. The forwarder is the candidate that receives the packet and has the highest priority in the forwarding candidates set (see the priority-setting process in Sect. 3). pxi is the PRR from node x to node Ni. The quality of F is better when SQI(x, F, d) is smaller.
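A minimal Python sketch of Eq. (3), as we reconstruct it from the extracted formula, is given below: the first factor accounts for forward/reverse link asymmetry, the second for the Packet Loss Joint Probability of the set, and the third for the average Path Quality Index of the candidates. The argument names are ours.

```python
def sqi(p_fwd, p_rev, E, pr_xF):
    # p_fwd[i]/p_rev[i]: forward/reverse PRR for candidate i (p_xi, p_ix)
    # E[i]: Path Quality Index E(Ni, d); pr_xF: Packet Loss Joint Probability
    k = len(p_fwd)
    ack_term = sum(p_fwd) / sum(pf * pr for pf, pr in zip(p_fwd, p_rev))
    corr_term = 1.0 / (1.0 - pr_xF)
    path_term = sum(E) / k
    return ack_term * corr_term * path_term   # lower SQI = better set
```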


Table 1 Symbols and annotations

Symbol          Annotation
d               Destination node
e(x, n)         Link quality index between node x and node n
F               A subset of Px
Fnum            num-th forwarding candidates set
num             Hop count
NFnum           An arbitrary node in Fnum
α               The threshold value
Px              A subset of Rx
Rx              Node x's neighbor node set
s               Source node
SQI(x, F, d)    The SQI of F set
x               Sending node
x′              Forwarder

3 The Design of LCOR Protocol

The selection of the forwarding candidates set is the key to an opportunistic routing protocol. The LCOR protocol uses the SQI to implement the forwarding candidates set selection. Symbols and annotations are shown in Table 1. The specific steps of the link correlated opportunistic routing (LCOR) protocol are:
1. Set the initial sending node x as the source node. Set the initial value of the hop count from the source node to the destination node as 0 (num = 0);
2. Add all of node x's neighbor nodes to the set Rx; for each node n belonging to Rx (n ∊ Rx), calculate e(x, n) (Eq. 1);
3. Add any node n to the collection Px if e(x, n) is less than the threshold α. Px is a subset of Rx; Px must exclude the source node and any node that has already been allocated to a former forwarding candidates set;
4. Select k nodes from the collection Px to form the subset F; F has various combinations. Then calculate SQI(x, F, d) (Eq. 3);
5. num = num + 1. Choose the node set F that has the smallest SQI(x, F, d) as the num-th forwarding candidates set Fnum. Reset F, Px, and Rx to the null set;
6. The candidate node in Fnum that receives the packet and has the highest priority is set as the forwarder x′. The smaller e(x, NFnum) is, the higher the priority of node NFnum (NFnum ∊ Fnum);
7. x = x′. Repeat steps 2 to 6 until d ∊ Fnum. Then F1, F2, …, Fnum−1 are the multiple cascaded forwarding candidates sets between source and destination.


Algorithm 1. Selection of forwarding candidates set for LCOR

The algorithm's pseudo-code is shown in Algorithm 1, where s represents the source node, d represents the destination node, and every forwarding candidates set has k members. We initialize the sending node, the forwarding candidates sets, and num in line 1; set F and Px to null sets in line 3; add to Px, in lines 4 to 8, the nodes whose Link Quality Index is less than α; and, in lines 9 to 13, select k members from Px (F has various combinations) and choose the node set F that has the smallest SQI(x, F, d) as the num-th forwarding candidates set Fnum, incrementing num at the same time. The cycle from lines 2 to 14 is repeated until the route selection process is over. The steps and pseudo-code above describe the forwarding candidates set selection process; when the source node sends a packet to the destination node, the packet is forwarded through F1, F2, …, Fnum−1 until it reaches the destination node. A sketch of this selection procedure is given below.
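The following minimal Python sketch mirrors the numbered steps and the prose description of Algorithm 1. The helper functions neighbors, e, and sqi_of (for Eqs. (1) and (3)) and the brute-force search over all size-k subsets are our assumptions; the real protocol would also restrict the forwarder choice to the candidates that actually received the packet.

```python
from itertools import combinations

def select_candidate_sets(s, d, k, alpha, neighbors, e, sqi_of):
    x, used, sets = s, {s}, []
    while True:
        # step 2-3: neighbours of x whose link quality index is below alpha
        Px = [n for n in neighbors(x) if e(x, n) < alpha and n not in used]
        if not Px:
            break
        size = k if len(Px) >= k else len(Px)
        # step 4-5: the size-k subset with the smallest SQI becomes F_num
        F = min(combinations(Px, size), key=lambda c: sqi_of(x, c, d))
        sets.append(F)
        used.update(F)
        if d in F:                      # step 7: stop once d is a candidate
            return sets
        # step 6: smaller e(x, n) means higher priority; at run time only the
        # candidates that received the packet compete to become the forwarder
        x = min(F, key=lambda n: e(x, n))
    return sets
```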


4 Simulation and Evaluation

4.1 Simulation Setup

In order to verify the validity of the LCOR protocol, we need to construct a variety of WMN scenes and know the PRR between every pair of nodes in the WMNs. We also need to simulate the Packet Loss Joint Probability between multiple receivers that have the same sending node during the selection of the forwarding candidates set. We then implement LCOR and other routing protocols on a great number of different WMN scenes and verify that our protocol achieves higher performance than the other routing protocols. First we simulate the WMN spatial topological structure: we specify a horizontal region and assign a specified number of nodes to this region, each node following a two-dimensional uniform distribution, and then assign a node number to each node. Existing works [16–18] derived the relationship between communication distance and PRR; thus, we can simulate the PRR between any two nodes from their distance. The distance-PRR relationship is as follows [17, 18]:

p(D) = \left(1 - 0.5\, e^{-0.78125\,\gamma(D)}\right)^{8f}    (4)

\gamma(D) = P_t - PL(D) - P_n \ \mathrm{(dB)}    (5)

PL(D) = PL(D_0) + 10\, n \log_{10}\!\left(\frac{D}{D_0}\right) + X_\sigma,    (6)

where p(D) is the PRR between sender and receiver, D is the sender-receiver distance, f is the frame size, γ(D) is the signal-to-noise ratio (SNR), Pt is the transmitting power, Pn is the noise floor, PL(D) is the log-normal shadowing path loss, D0 is a reference distance, n is the path loss exponent, and Xσ is a zero-mean Gaussian RV (in dB) with standard deviation σ. For multiple receivers that have the same sending node, correlation exists in their packet losses [9]. In the simulation, we suppose the Packet Loss Joint Probability between the sending node and multiple receivers is selected randomly from [0, 1 − max(PxNF)], where F is the collection of multiple receivers and max(PxNF) is the largest PRR between the sender and a receiver node in F. The main parameters of the WMN simulation scenes are shown in Table 2. Our nodes are MICA2 nodes, the distribution range is 90 * 90 m2, and the number of nodes equals 30. We can obtain the PRR between arbitrary nodes in the WMNs based on the parameters in Table 2 and expressions (4), (5), and (6). We find that at a distance of less than 12 m the PRR approximately equals 100 %, at a distance of more than 12 m and less than 38 m the PRR diminishes with distance, and after the distance is greater than 28 m the PRR equals 0.
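A minimal Python sketch of the distance-PRR model, Eqs. (4)–(6), using the parameter values listed in Table 2, is given below. The paper does not state whether γ(D) enters Eq. (4) in dB or in linear scale; here we plug in the dB value exactly as Eq. (5) defines it, and we clamp the base of Eq. (4) at zero so the result remains a valid probability. Both choices are our assumptions.

```python
import math, random

# Parameter values from Table 2
PT, PN, PL_D0, D0, N_EXP, SIGMA, FRAME_BYTES = 0.0, -115.0, 55.0, 1.0, 2.0, 4.0, 50

def prr_at_distance(D):
    # Eq. (6): log-normal shadowing path loss at distance D
    pl = PL_D0 + 10.0 * N_EXP * math.log10(D / D0) + random.gauss(0.0, SIGMA)
    gamma = PT - pl - PN                      # Eq. (5): gamma(D), in dB as written
    # Eq. (4): PRR of a frame of FRAME_BYTES bytes; clamp to keep it in [0, 1]
    base = max(0.0, 1.0 - 0.5 * math.exp(-0.78125 * gamma))
    return base ** (8 * FRAME_BYTES)

print(prr_at_distance(10.0))
```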

Table 2 Main simulation parameters

Symbol                 Value           Symbol     Value
Distribution range     90 * 90 m2      Pn         −115 dB
The number of nodes    30              Pt         0 dBm
D0                     1               PL(D0)     55 dB
f                      50 bytes        σ          4
n                      2

4.2 Results and Evaluation

We build a variety of different WMN scenes and then run the LCOR protocol, the ExOR protocol, and the Shortest Path protocol in the different network scenarios, collecting and recording the path selections, the packet delivery rate of unicast transmission (with no retransmission permitted), and the protocol runtime. The packet delivery rate is the probability that the destination node receives the packets sent by the source in a unicast communication process. The protocol supports any rational integer as the number of candidates in each forwarding candidates set (k). In the next section, we run LCOR and ExOR with k = 2, 3, 4 to find the effect of k on opportunistic routing. The relationship between the statistical average of the packet delivery rate and the hop count is shown in Figs. 2, 3, and 4. From Figs. 2, 3, and 4 we can see that the more hops there are between source and destination, the smaller the packet delivery rate. LCOR, which takes link correlation into account and uses the new metrics, achieves a higher packet delivery rate than ExOR and the Shortest Path protocol. ExOR is a classical opportunistic routing protocol; Shortest Path is a fixed-path routing protocol.

Fig. 2 Relationship between packet delivery rate and hop, k=2


Fig. 3 Relationship between packet delivery rate and hop, k=3

Fig. 4 Relationship between packet delivery rate and hop, k = 4

In addition, we find that as k increases, the packet delivery rate at the same hop count does not increase. In each hop transmission there are multiple candidates in a forwarding candidates set, and the nodes selected as members of the current forwarding candidates set will not be selected by subsequent forwarding candidates sets. Such a mechanism ensures that traffic flows one way toward the destination, but at the same time it limits the improvement of transmission capacity. Figures 5, 6, and 7 show that the larger the hop count between source and destination, the longer the protocol runtime; in addition, for the same hop count, the greater k is, the longer the protocol runtime. The LCOR protocol spends less runtime than the ExOR protocol. Overall, LCOR achieves a higher packet delivery rate than both Shortest Path and ExOR, and LCOR's runtime is slightly less than ExOR's; thus, LCOR can bring higher transmission reliability than ExOR.

An Opportunistic Routing Protocol Based on Link … Fig. 5 Relationship between runtime and hop, k = 2

Fig. 6 Relationship between runtime and hop, k = 3

Fig. 7 Relationship between runtime and hop, k = 4


5 Conclusion

The experimental results show that LCOR can achieve opportunistic routing selection between any multi-hop node pair in a WMN whose nodes are limited in number and evenly distributed, while taking link correlation into account. The LCOR protocol supports any rational integer k as the number of forwarding candidate set members; in this paper we evaluate LCOR and ExOR for k equal to 2, 3, and 4. Next, we will try to achieve an opportunistic routing in which k is adjusted automatically. We will expand the scale of the WMN and research the communication capacity between two nodes that are multiple hops apart. In further research, we will study the effects of node density on opportunistic routing communication capacity and the factors that affect the Packet Loss Joint Probability and the packet delivery rate.

References
1. Biswas, S., Morris, R.: ExOR: opportunistic multi-hop routing for wireless networks. In: SIGCOMM '05: Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (2005)
2. Intanagonwiwat, C., Govindan, R., Estrin, D.: Directed diffusion: a scalable and robust communication paradigm for sensor networks. In: Proceedings of ACM MobiCom 2000. ACM Press, New York (2000)
3. Perkins, C., Royer, E.: Ad-hoc on-demand distance vector routing. In: Proceedings of IEEE WMCSA '99. IEEE Computer Society Press, Washington (1999)
4. Chachulski, S., Jennings, M., Katti, S., Katabi, D.: Trading structure for randomness in wireless opportunistic routing. In: Proceedings of the 2007 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, Kyoto, Japan, 27–31 Aug 2007
5. Rozner, E., Seshadri, J., Mehta, Y., Qiu, L.: SOAR: simple opportunistic adaptive routing protocol for wireless mesh networks. IEEE Trans. Mob. Comput. 8(12), 1622–1635 (2009)
6. Basalamah, A., Kim, S.M., Guo, S., He, T., Tobe, Y.: Link correlation aware opportunistic routing. In: Proceedings of IEEE INFOCOM, pp. 3036–3040, Florida, USA (2012)
7. Wang, S., Basalamah, A., Kim, S., Guo, S., Tobe, Y., He, T.: Link correlation aware opportunistic routing in wireless networks. IEEE Trans. Wireless Commun. 14, 47–56 (2014)
8. Srinivasan, K., Jain, M., Choi, J., Azim, T., Kim, E., Levis, P., Krishnamachari, B.: The κ factor: inferring protocol performance using inter-link reception correlation. In: IEEE MobiCom, pp. 317–328 (2010)
9. Paris, S., Capone, A.: Correlation of wireless link quality: a distributed approach for computing the reception correlation. IEEE Commun. Lett. 15(12), 1341–1343 (2011)
10. Zhu, T., Zhong, Z., He, T., Zhang, Z.: Achieving efficient flooding by utilizing link correlation in wireless sensor networks. IEEE/ACM Trans. Netw. 21(1), 121–134 (2013)
11. Guo, S., Kim, S.M., Zhu, T., Gu, Y., He, T.: Correlated flooding in low-duty-cycle wireless sensor networks. In: 19th IEEE International Conference on Network Protocols (ICNP), pp. 383–392 (2011)
12. De Couto, D.S.J., Aguayo, D., Bicket, J., Morris, R.: A high-throughput path metric for multi-hop wireless routing. Wireless Netw. 11(4), 419–434 (2005)
13. Zhong, Z., Nelakuditi, S.: On the efficacy of opportunistic routing. In: SECON '07, 4th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, pp. 441–450, 18–21 (2007)
14. Abraham, I., Fiat, A., Goldberg, A.V., Werneck, R.F.: Highway dimension, shortest paths, and provably efficient algorithms. In: ACM-SIAM Symposium on Discrete Algorithms, pp. 782–793 (2010)
15. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numer. Math. 1, 269–271 (1959)
16. Bahceci, I., Al-Regib, G., Altunbasak, Y.: Parallel distributed detection for wireless sensor networks: performance analysis and design. In: Global Telecommunications Conference, GLOBECOM '05, vol. 4, pp. 2420–2424. IEEE (2005)
17. Zuniga, M., Krishnamachari, B.: Analyzing the transitional region in low power wireless links. In: 2004 First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, IEEE SECON 2004, pp. 517–526 (2004)
18. Sohrabi, K., Manriquez, B., Pottie, G.J.: Near ground wideband channel measurement in 800–1000 MHz. In: 1999 IEEE 49th Vehicular Technology Conference, vol. 1, pp. 571–574 (1999)

Election of Guard Nodes to Detect Stealthy Attack in MANET R. Kathiroli and D. Arivudainambi

Abstract MANETs are exploited due to their dynamic formation of wireless networks and their functioning without the help of any infrastructure, which makes them subject to security attacks. Here we deal with the stealthy attack, wherein packets sent from the source are prevented from reaching the destination by the malicious behavior of an intermediate node. It is a suite of packet misrouting, power control, colluding collision, and identity delegation. The stealthy attack is detected using guard nodes, which monitor all their neighboring nodes and formally send a behavior report of their observations. Since the number of neighboring nodes can be high, which could lead to more congestion in exchanging reports, we propose a novel algorithm called DSMG to optimize the selection of the guard nodes, which are empowered with local monitoring. Guard nodes are selected from the common neighbor list of communicating mobile nodes. The selection is further optimized by choosing the node that is more trustworthy. Simulation results show that the overall throughput is increased and the packet delivery ratio of the nodes in the network is improved.



Keywords Local monitoring Guard nodes Power control Colluding collision Trust





 Misrouting  Packet dropping 

1 Introduction A mobile ad hoc network (MANET) is an infrastructure less, self-organized network where the nodes in the network dynamically change its positions. The primary challenge in building a MANET is to attain security in topology features. R. Kathiroli (&) Department of Computer Technology, Anna University, Chennai, India e-mail: [email protected] D. Arivudainambi Department of Mathematics, Anna University, Chennai, India e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_13


Though MANETs are advantageous, they are vulnerable to security attacks; these attacks are categorized into passive and active attacks. A passive attack does not disrupt the normal operation of the network: the attacker overhears the data exchanged in the network without modifying it, violating the requirement of confidentiality. An active attack attempts to modify or destroy the data being exchanged in the network. Active attacks can be either internal or external. An external attack is carried out by nodes that do not belong to the network. Internal attacks are carried out by compromised nodes belonging to the network, making them more difficult and more severe to detect than external attacks. In our work, we concentrate on the stealthy packet dropping attack. The stealthy attack is a kind of packet dropping attack that prevents a packet from reaching the destination because of the malicious behavior of an intermediate node. Stealthy packet dropping attacks include packet misrouting, power control, colluding collision, and identity delegation. The most widely used method for detecting these attacks is the behavior-based detection model, in which a normal node overhears its neighborhood activities. An incarnation of this method is local monitoring, where different types of checks are performed locally on the observed traffic to determine malicious behavior of nodes. To overcome stealthy packet dropping attacks, Detection of Stealthy attacks in MANET using Guard nodes (DSMG) is proposed, in which guard nodes are selected among communicating mobile nodes. The guard nodes perform local monitoring to detect and isolate the malicious nodes. Many mechanisms have been proposed and implemented to ensure the security of control and data traffic in wireless ad hoc networks. Control traffic contains the information needed to set up the network so that data traffic can flow. A widely used technique for mitigating control and data forwarding misbehaviour in multi-hop wireless networks is cooperative local monitoring [1–5], which overhears traffic in the vicinity. Khalil et al. [6] presented the MISPAR protocol to mitigate stealthy attacks. Khalil et al. [7] gave an idea about the MIMI protocol based on local monitoring as a remedy for the misrouting attack. Khalil et al. [8] proposed the MPC protocol to prevent packet dropping by a malicious node. Khalil et al. [9] illustrated the MCC protocol to detect and isolate malicious nodes; here, the malicious node and its colluding partner transmit the packet to the same next-hop node. Bagchi et al. [10] explained a secure one-hop neighbor discovery protocol that prevents two non-neighboring nodes from compromising themselves as well as their other neighbors. Abirami et al. [11] portrayed the Sentinel protocol, which prevents replica attacks and detects malicious nodes. Perkins [12] elucidated that in the AODV protocol each mobile host operates as a specialized router and routes are obtained on demand with little or no dependence on periodic advertisements; it provides loop-free routes even while repairing broken links. Kandah [13] showed that malicious nodes are injected into the network in a node replication phase and a node injection phase, with their identities hidden from other legitimate nodes. In local monitoring, a node monitors the traffic going in and out of its neighbors. In BLM, the nodes in the network check their neighbors for correct packet forwarding to the actual next hop within acceptable delay bounds.
In SADEC [14], a group of nodes called guard nodes perform local monitoring to detect security attacks. The guard

Fig. 1 Guard nodes in SADEC

node for the link A → G is Guard(A, G) = R(A) ∩ R(G) − {G}, where G ∈ R(A) and R(A), R(G) are the transmission ranges of nodes A and G. A, C, E, and H are the guard nodes, as shown in Fig. 1.

2 Stealthy Attack Description

A stealthy attack is a kind of packet dropping attack in which a malicious node prevents the packet from reaching the destination. The malicious node gives its neighbors the impression that it has successfully forwarded the packet to its next hop; moreover, a legal node is accused of dropping the packet. The stealthy attack consists of four attack types, namely packet misrouting, power control, colluding collision, and identity delegation. Figure 2 depicts the misrouting attack: the path S–A–B–M–C–H–D is obtained through the route discovery mechanism for communication between S and D. S forwards the packet to its next hop A, A then forwards the packet to B, and B in turn forwards the packet to M. Node M, being malicious, misroutes the packet to E, which is not part of the route, instead of forwarding the packet to the correct next hop C.

Fig. 2 Misrouting (S: source, D: destination, M: malicious node, E: accused node)
130

R. Kathiroli and D. Arivudainambi

Figure 3 represents the dropping of packets through colluding collision: the path S–A–B–M–K–H–D is obtained through the route discovery mechanism for communication between S and D. S forwards the packet to its next hop A, A then forwards the packet to B, B in turn forwards the packet to M, and M forwards it to K. The colluding partner of M, node C, forwards the packet to K at the same time; therefore, a collision occurs at K and the packet is dropped. Figure 4 represents the power control attack: the path S–A–B–M–C–H–D is obtained through route discovery for communication between S and D. S forwards the packet to its next hop A, A forwards the packet to B, and B in turn forwards the packet to M. Now M, being malicious, controls its transmission power and forwards the packet over a reduced range that does not reach C, so the packet does not reach D. Figure 5 illustrates the identity delegation attack: the path S–A–B–M–C–H–D from S to D is obtained through route discovery. S forwards the packet to its next hop A, A then forwards the packet to B, and B in turn forwards the packet to M. Node I, being compromised, uses the identity of M and forwards the packet to C, but C does not fall within the range of I, so the packet gets dropped.

Fig. 3 Colluding collision (S: source, D: destination, M: malicious node, C: colluding partner, K: accused node)

Fig. 4 Power control (S: source, D: destination, M: malicious node, C: accused node)

Fig. 5 Identity delegation (S: source, D: destination, M: malicious node, I: identity delegator, C: accused node)

3 Proposed Work

We propose a novel algorithm to detect malicious nodes in a mobile network by selecting guard nodes. The computation of the trust value is discussed in Section A. The selection of a guard node that monitors the neighbors is discussed in Section B. Section C presents stealthy attack detection and isolation. Section D explains the mitigation of the misrouting attack, and Section E covers the mitigation of the other three attacks. Trust in entities is based on the fact that the trusted entity will not act maliciously [7]. Trust has the following characteristics: it is subjective (different nodes may have different perceptions of the same node's trustworthiness), asymmetric (two nodes need not have the same trust toward each other), and time dependent (it grows and decays over a period of time and is based on previous similar experiences with the same party). We compute the trust value based upon the information that the cluster head can gather about the other nodes. The direct trust agent performs derivation of trust, quantification, and trust computation. Node X calculates the trust value of node Y by

dt_{xy} = p_s / p_r,    (1)

where dtxy is a direct trust value of node X on node Y. ps is the number of successful packets sent from node X. pr is the number of successful packets received from node Y. Indirect trust or recommendation trust is computed by collecting the trust related information of target node from the neighboring nodes. The algorithm describes the method of obtaining indirect trust on Y.


The source node will broadcast the recommendation request packet to all its neighboring nodes. The node with the maximum trust value is considered for the evaluation of the recommendation trust value.
Guard Node Selection. A group of nodes, called guard nodes, is selected to detect and isolate stealthy attacks. These nodes are normal nodes in the network and perform their basic functionality in addition to monitoring. The guard nodes are selected between communicating mobile nodes. Initially, the distances between nodes are calculated to find the neighbor list of each node. A guard node monitors its neighboring nodes' behavior and records their positive (p) and negative (n) events. Positive events correspond to timely forwarding of packets, generation of successful replies, or generation of successful acknowledgments. On the other hand, negative events include the refusal to forward packets due to selfish or malicious behavior, misrouting packets, and modifying route requests or replies. The guard node for the link A → G is Guard(A, G) = R(A) ∩ R(G), where R(A) and

Fig. 6 Guard nodes in DSMG

R(G) are the transmission ranges of nodes A and G. C, A, E, and H are the nodes in the intersection of ranges of A and G. The energy and trust values of these nodes are calculated. Since C and E have higher values of energy and trust, they are chosen as guard nodes as shown in Fig. 6.
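A minimal Python sketch of this guard-node election follows: take the common neighbors of the two communicating nodes and keep the ones with the highest energy and trust. The combined energy-plus-trust score and the number of guards retained are our assumptions for illustration; the paper only states that the nodes with higher energy and trust values are chosen.

```python
def guard_nodes(A, G, neighbors, energy, trust, num_guards=2):
    # R(A) intersect R(G): candidates that can overhear both endpoints
    common = set(neighbors(A)) & set(neighbors(G))
    common.discard(A)
    common.discard(G)
    # rank the common neighbors by an assumed combined energy + trust score
    ranked = sorted(common, key=lambda n: energy[n] + trust[n], reverse=True)
    return ranked[:num_guards]
```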

Stealthy Attacks Detection and Isolation. We exploit this scheme to obtain behavioral information about the nodes. Each neighboring node broadcasts the number of packets it has transmitted toward or away from the node over a particular period of time. Similarly, guard nodes over the network maintain the count of the number of


messages transmitted by each and every node within its range and also increment the malicious counter of a node if it is misbehaving or involved in suspicious activities. If the counter value exceeds the threshold, the guard node isolates that node from the network.
Mitigating Misrouting Packet Drop. In Fig. 7, the path S–A–B–M–C–H–D from S to D is obtained through route discovery. S forwards the packet to its next hop A, A then forwards the packet to B, and B in turn forwards the packet to M. M misroutes the packet to E instead of forwarding the packet to the correct next hop C. The guard node G2 for the link M → E detects M and increments the malicious counter of the node by 1. In order to avoid a negative count, M forwards the packet to C, C forwards it to H, and H in turn forwards it to D; thus the packets are transmitted successfully from S to D. The guard nodes over the region between the source and the destination maintain a verification table that contains the IDs of all the nodes on the route from S to D for reputation maintenance. In Fig. 8, for the path S–A–B–M–C–H–D from S to D, S forwards the packet to its next hop A, A then forwards the packet to B, and B in turn forwards it to M. To detect the other three types of stealthy attacks, guard nodes check the activities of the nodes in their neighborhood. Each node over the network needs to maintain the

Fig. 7 Overcoming misrouting attack (S: source, D: destination, M: malicious node, G1–G5: guard nodes)

Fig. 8 Overcoming three other stealthy attacks (S: source, D: destination, M: malicious node, G1–G5: guard nodes)

count of the number of messages transmitted by its neighbors and has to announce the number of packets it has transmitted over a particular period of time. Since it is compulsory for a node to broadcast the number of messages it has forwarded over a certain period of time, a malicious node would have difficulty satisfying two sets of neighbors that expect to hear different counts through a single broadcast. The monitoring activities done by guard nodes can be classified into four categories, namely missed detection, detection of a malicious node, false detection of legitimate nodes, and correct (successful) detection. Let P1, P2, P3, and P4 be the probabilities of the four cases. The probability of missed detection, P1, is defined as the square of the probability of missing a packet due to natural channel errors (Pc), as the packets are only misrouted, as in [14]. Therefore,

P_1 = P_c^2.    (2)

In general, the sum of all the probabilities is one. So,

P_{sum} = P_1 + P_2 + P_3 + P_4 = 1    (3)

P_{sum}' = P_2 + P_3 + P_4    (4)

       = 1 - P_1 = 1 - P_c^2,    (5)

where Psum′ is the sum of the probabilities of the above cases excluding the first case. The probability of detection and the probability of isolation are given by

P_{detect} = \sum_{i=mal}^{mis} \binom{mis}{i} (P_{sum}')^{i} (1 - P_{sum}')^{mis - i}    (6)

P_{isolate} = \sum_{i=n}^{g_{opt}} \binom{g_{opt}}{i} (P_{detect})^{i} (1 - P_{detect})^{g_{opt} - i},    (7)

where Pdetect is the Probability of detection, Pisolate is the probability of isolation, mis is the number of packet misroutes, mal is the malicious counter threshold, n is the number of nodes involved in detection, and gopt is the number of optimized guard nodes. The probability for frame detection and isolation is zero.
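A minimal Python sketch of Eqs. (2)–(7) follows. Pc is the per-packet channel error probability, and the remaining symbols follow the definitions above; the numeric values in the example call are arbitrary, chosen only to show how the binomial tails are evaluated.

```python
from math import comb

def p_missed(pc):
    return pc ** 2                                        # Eq. (2)

def p_detect(p_sum_prime, mis, mal):
    # Eq. (6): binomial tail from i = mal to mis
    return sum(comb(mis, i) * p_sum_prime**i * (1 - p_sum_prime)**(mis - i)
               for i in range(mal, mis + 1))

def p_isolate(pd, g_opt, n):
    # Eq. (7): binomial tail from i = n to g_opt
    return sum(comb(g_opt, i) * pd**i * (1 - pd)**(g_opt - i)
               for i in range(n, g_opt + 1))

# Example with arbitrary values: Pc = 0.1, mis = 5, mal = 3, g_opt = 4, n = 2
pd = p_detect(1 - p_missed(0.1), mis=5, mal=3)            # P_sum' = 1 - Pc^2, Eq. (5)
print(pd, p_isolate(pd, g_opt=4, n=2))
```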


4 Simulation Results

The ns-2 simulation environment is used in our work. The comparison of SADEC and DSMG based on the number of guard nodes is shown in Fig. 9. In SADEC, common neighbors are selected as guard nodes, whereas in DSMG their selection is further optimized by choosing nodes from the common neighbor list based on their trust. The probability of accurate isolation of the malicious nodes for SADEC and DSMG is shown in Fig. 10. The isolation probability varies as the number of malicious nodes in the network changes; we observe that SADEC has a poor performance, while DSMG has a higher probability of isolating malicious nodes even as the number of nodes increases, as shown in Fig. 11. The variation in false isolation is shown in Figs. 12 and 13. The following Xgraphs are generated for the PDR before and after the detection of stealthy attacks.

Fig. 9 Number of guard nodes

Fig. 10 True isolation probability


Fig. 11 Isolation probability

Fig. 12 Percentage of false isolation

Fig. 13 PDR during/after detection misrouting attack


Fig. 14 PDR during/after detection of power control attack

Fig. 15 PDR during/after detection of colluding collision attack

Fig. 16 PDR during/after detection of identity delegation attack


5 Conclusion The stealthy attacks include misrouting, power control, colluding collision, and identity delegation, wherein a packet is dropped and prevented from reaching the destination due to malicious behaviour at an intermediate node. In such scenarios, sometimes a legitimate node is accused of dropping the packets and the malicious behavior does not get detected by any behavior-based detection scheme. Our algorithm (DSMG) helps in the accurate detection and isolation of stealthy attacks. Guard nodes are selected and further optimized based on trust. We detect the stealthy attacks by employing a local monitoring scheme that involves the verification of faithful forwarding to the appropriate next hop. In the future we will consider detection techniques for multichannel ad hoc networks, where the monitoring process for detecting malicious behavior is more difficult due to the presence of multiple channels.

References 1. Huang, Y., Lee, W.: A cooperative intrusion detection system for ad hoc networks. In: Proceedings of ACM Workshop Security of Ad Hoc and Sensor Networks (SASN ’03), pp. 135–147, (2003) 2. Khalil, I., Bagchi, S., Shroff, N.: LITEWORP: a lightweight countermeasure for the wormhole attack in multihop wireless networks. In: Proceeidngs of International Conference Dependable Systems and Networks (DSN ’05), pp. 612–621, (2005) 3. Khalil, I., Bagchi, S., Shroff, N.: MOBIWORP: mitigation of the wormhole attack in mobile multihop wireless networks. Ad Hoc Netw. 6(3), 344–362 (2008) 4. Khalil, I., Bagchi, S., Nina-Rotaru, C., Shroff, N.: UNMASK: utilizing neighbour monitoring for attack mitigation in multihop wireless sensor networks. Ad Hoc Netw. 2, 148–164 (2010) 5. Khalil, I., Khreishah, A.: On the analysis of identity delegation attacks. In: Computing Networking and Communications (ICNC), pp. 990–994, (2012) 6. Khalil, I.: MIMI: mitigating packet misrouting in locally-monitored multi-hop wireless ad hoc networks. In: IEEE GLOBECOM’08, pp. 1–5, (2008) 7. Sadamate, S.S., Nandedkar, V.S.: Review paper on calculation, distribution of trust & reputation in MANET. Int. J. Sci. Modern Eng. (IJISME) 1(6), 671–676 (2013) 8. Khalil, I.: MPC: mitigating stealthy power control attacks in wireless ad hoc networks. In: Global Telecommunications Conference, pp. 1087–1096. IEEE GLOBECOM (2009) 9. Khalil, I., Bagchi, S.: MISPAR: mitigating stealthy packet dropping in locally-monitored multi-hop wireless ad hocnetworks. In: Proceedings of ACM International Conference Security and Privacy in Communication. Networks SecureComm’08, (2008) 10. Bagchi, S., Hariharan, S., Shroff, N.: Secure neighbour discovery in wireless sensor networks. In: Technical Report ECE 07–19, Purdue University, pp. 105–119, (2007) 11. Abirami, K.R., Sumithra, M.G., Rajasekaran, J.:An Enhanced Intrusion Detection System for Routing Attacks in MANET. In: Advanced Computing and Communication Systems (ICACCS), pp. 1– 6, (2013) 12. Perkins, C.E., Royer, E.M.: Ad-Hoc on-demand distance vector routing. In: Proceedings of Second IEEE Workshop Mobile Computing Systems and Applications (WMCSA’99), pp. 90– 100, (1999)


13. Kandah, F., Singh, Y., Chonggang, W.: Colluding injected attack in mobile ad-hoc networks. In: Proceedings of Computer Communications Workshops (INFOCOM), IEEE Conference, pp. 235–240, (2011) 14. Khalil, I., Bagchi, S.: Stealthy attacks in wireless ad hoc networks: detection and countermeasure. IEEE Trans. Mob. Comput. 10(8), 1096–1112 (2011)

GA-LORD: Genetic Algorithm and LTPCL-Oriented Routing Protocol in Delay Tolerant Network Rahul Johari and Dhari A. Mahmood

Abstract In communication systems, there are a number of challenges that make the reliable delivery of data difficult to achieve. In the traditional wireless system, if there was any problem such as disconnection between the intermediate nodes or nodes getting drained off because of low energy, then there was high probability of data getting lost. To solve these problems in delay tolerant network (DTN), we propose two new protocols, viz. the Licklider transmission protocol convergence layer (LTPCL) and a protocol formulated by combination of metaheuristic approaches, viz. genetic algorithm and ant colony optimization: GAACO to select the shortest path for transmission of the packets from source to destination by consuming less energy, less delay, less number of hops, but at the same time delivering high throughput. Keywords MANET

 DTN  LTPCL  Buffer memory size  Bundle  GAACO

1 Introduction DTN, which is popularly referred to as disconnected or disrupted network, is the network that addresses the challenge of accomplishing optimized routing in networks where there is no live end-to-end connection. This is in stark contrast to the existing transmission control protocol/internet protocol (TCP/IP)-based Internet protocols deployed in the networks that operate on a principle of providing end-to-end live communication using a concatenation of potentially dissimilar R. Johari University School of Information and Communication Technology, Guru Gobind Singh Indraprastha University, Delhi, India e-mail: [email protected] D.A. Mahmood (&) Department of Computer Engineering, University of Technology, Baghdad, Iraq e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_14


link-layer technologies. The concept of retransmission in TCP/IP-based Internet does not work well in the environment where communication is not reliable and there are many reasons that lead to disconnection in the link (such as the intermediate nodes were getting drained off or the nodes moving out of the transmission range). Some common problems encountered in DTN are: (a) discontinuous connectivity, (b) long or variable delay, (c) asymmetric data rates, and (d) high error rates. The paper is organized as follows: Sect. 1 discusses the merger of DTN-MANET environment, Sect. 2 discusses DTN Routing issues, Sect. 3 discusses motivation, Sect. 4 discusses related work, Sect. 5 discusses the proposed algorithm(s), Sect. 6 discusses the experimental setup and simulation parameters, Sect. 7 discusses comparison of results, Sect. 8 discusses analysis of results, and Sect. 9 discusses the conclusion followed by acknowledgment and references.

1.1

Merger of DTN-MANET Environment

If information about the destination is available in the source routing table, then the message follows the usual approach applicable in MANET, that is, route discovery is performed first and then message transmission to the destination node, as shown in Fig. 1, is achieved. If information about the destination is not present in the source routing table, then the DTN nodes transmit the message to other connected components and ultimately the message reaches the remote destination. The nodes thus follow two modes (MANET and DTN) according to the different problems occurring in different environments. Some common problems are: limited transmission range of nodes, noise leading to disconnection of the communication, power management issues, and the frequent movement of in-between (intermediate) nodes. We have addressed these concerns in our problem statement and solved them using the two newly proposed protocols, viz. LTPCL and GAACO.
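A minimal sketch of this switch-mode decision is given below, assuming a simple dictionary-based routing table; the function and field names are illustrative, not part of the proposed protocols.

```python
# A minimal sketch of the switch-mode idea in Fig. 1, assuming a simple dict-based
# routing table; the function names and fields are illustrative only.
def forward_message(message, destination, routing_table, dtn_buffer):
    if destination in routing_table:
        # MANET mode: a route is known, so deliver over the discovered path.
        next_hop = routing_table[destination]
        return ("MANET", next_hop)
    # DTN mode: no live route, so store the bundle and hand it to any connected
    # component, hoping it eventually reaches the remote destination.
    dtn_buffer.append((destination, message))
    return ("DTN", None)

routing_table = {"D1": "relay-7"}
buffer = []
print(forward_message("hello", "D1", routing_table, buffer))  # ('MANET', 'relay-7')
print(forward_message("hello", "D9", routing_table, buffer))  # ('DTN', None)
```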

Fig. 1 Switch mode scenario


2 DTN Routing Issues The DTN approach provides flexibility for routing and forwarding at the bundle layer of the OSI model for unicast, anycast, and multicast information, as depicted in Fig. 2. When a significant amount of queuing and buffering occurs in the network, the information available at the bundle layer may be significant for taking routing decisions. An essential element of bundle-based forwarding in DTN is that a bundle has a place to wait in a queue until the next-hop link becomes available. It highlights the following assumptions:

• Storage is available and uniformly distributed over the network.
• Storage is sufficiently persistent and robust to store bundles until forwarding.
• The "store-and-forward" model followed in DTN is a better choice than attempting to effect continuous connectivity or other alternatives.

For a network to effectively support the DTN architecture, these assumptions must hold. Node storage in essence represents a new resource that must be managed and protected. If there exists a live connection between the source and destination, the intermediate nodes forward the packets to the next-hop node. But if disconnection arises for some reason, as detailed in Sect. 1.1, then the intermediate node stores and buffers the data packet till the connection is restored or reestablished, or simply chooses a different (alternate) path.
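The store-and-forward assumption can be illustrated by a minimal custody-queue sketch; the buffer capacity and the link-state callback below are hypothetical, and the sketch is not the authors' implementation.

```python
# A minimal sketch of the store-and-forward assumption described above: a bundle
# waits in the node's persistent queue until the next-hop link becomes available.
# The link-state callback and capacity value are hypothetical.
from collections import deque

class BundleNode:
    def __init__(self, buffer_capacity=50):
        self.queue = deque()
        self.buffer_capacity = buffer_capacity

    def receive(self, bundle):
        if len(self.queue) < self.buffer_capacity:
            self.queue.append(bundle)   # custody taken: bundle is stored
            return True
        return False                    # buffer full: custody refused

    def try_forward(self, link_up, send):
        # Forward queued bundles only while the next-hop link is available.
        while self.queue and link_up():
            send(self.queue.popleft())

node = BundleNode()
node.receive({"dst": "D", "payload": b"data"})
node.try_forward(link_up=lambda: True, send=print)
```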

Fig. 2 Broadcast of the packet by the source node


Fig. 3 LIVE connection between source and destination with mobile nodes

Fig. 4 Package storage at the node

In Fig. 3, after successful route discovery the connection is established between the source and the destination. The source sends the package to the destination over this path because of the smaller number of hops on this route and because the next-hop node possesses a large amount of buffer space. In this situation the source delivered the package to the destination, but after some time, while the source was still dispatching packages over this path as shown in Fig. 4, some problem(s) developed in the path. The path was disconnected because of one of the following reasons:

1. The buffer space of one of the intermediate nodes was not enough to store the package, after which the node moved out of the range of the previous node.
2. Noise or signal interference between two nodes caused the disconnection of the path between the source and the destination.


So after the selection of another path, the package stayed there for some time due to some delay, but the packages were buffered/preserved and all the packages were successfully received later by the destination node.

3 Motivation We have studied and analyzed MANET and detected some weak features in MANET that can be improved by the DTN approach. By combining the MANET and DTN approaches, we used biological principle(s) to build two new protocols, LTPCL and GAACO, in MANET. The user density is usually assumed to be high to provide efficient communication between the nodes, but it is not always high in some environments, and in such cases the efficiency of MANET decreases. To improve the network efficiency, we used DTN approaches with our two newly introduced protocols. In Sect. 8 we present a simulation scenario wherein the nodes send packages to the target user in different situations; for example, if any disconnection arises between the source and the destination because of low battery or a small memory buffer, then, according to these parameters, the protocols (LTPCL and GAACO) change the path dynamically and select a new path to complete the remaining transmission of the packet.

4 Related Work In [1–4] the authors demonstrate that the mobility is not uniform and a pattern in encounters is observed. The Probabilistic Routing Schema utilizes the individual probabilities of nodes to successfully delivery a message. The SimBet routing algorithm uses ideas from social networking and contact patterns to predict paths to destinations to improve message delivery ratio in the shortest amount of time. Bubble Rap routing scheme extends their work by allocating nodes into social groups based on direct and indirect contacts. In [5, 6] author(s) describes a utility function for a node to decide whether to forward the message to an opportunistic contact or to a scheduled contact. In [7] author(s) proposes a new approach routing in MANET using cluster-based approach (RIMCA), which consists of mobile wireless nodes moving randomly within boundary of cluster. In [8, 9] author(s) propose a metaheuristic-based search technique termed as volume adaptive search technique (VAST) to determine an optimal path from source node to destination node in densely deployed mobile ad hoc network. In [10] authors propose a new approach which uses genetic algorithm driven routing principles to meet with the routing needs of the DTN nodes in the group and then exhibit the results after carrying out extensive simulations in MATLAB using different membership models. In [11] authors introduce a new metaheuristic oriented routing protocol GAACO that utilizes the optimization techniques of genetic algorithm and ant colony optimization to


find the path between source and destination. In [12] authors apply a metaheuristic(s) method to propose a routing mechanism for Delay Tolerant Networks to find the optimized path with maximum throughput by carrying out the experiments on simulated networks using agent-based modeling tool named NetLogo.

5 Proposed Algorithm(s)

We have designed our own algorithm(s) for optimized routing in DTN, but before detailing them, for the sake of completeness, we reproduce the GAACO algorithms 1–3 of [11] and then present the remaining new algorithms, algorithms 4–7, designed exclusively for the LTPCL protocol.
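Algorithms 1–7 appear as figures in the original paper and are not reproduced here. Purely as a rough illustration of the GA + ACO idea (pheromone-biased path construction combined with generational selection on a cost mixing hops, delay, and energy), a toy sketch is given below; the graph, weights, and parameters are all hypothetical.

```python
# The authors' algorithms 1-7 appear as figures in the original and are not
# reproduced here. The code below is only a rough illustrative sketch of the
# GA + ACO idea (pheromone-biased path construction plus generational selection)
# on a toy graph; every name, weight, and parameter is hypothetical.
import random

GRAPH = {                      # neighbour -> (delay, energy cost) per hop
    "S": {"A": (2, 1), "B": (1, 2)},
    "A": {"C": (2, 1), "D": (4, 1)},
    "B": {"C": (1, 1)},
    "C": {"D": (1, 1)},
    "D": {},
}
PHEROMONE = {u: {v: 1.0 for v in nbrs} for u, nbrs in GRAPH.items()}

def build_path(src, dst, max_hops=10):
    """ACO-style construction: pick the next hop with pheromone-weighted randomness."""
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        choices = [v for v in GRAPH[node] if v not in path]
        if not choices:
            return None
        weights = [PHEROMONE[node][v] for v in choices]
        node = random.choices(choices, weights)[0]
        path.append(node)
    return path if node == dst else None

def cost(path):
    """Lower is better: weighted sum of hop count, delay and energy (toy weights)."""
    delay = sum(GRAPH[u][v][0] for u, v in zip(path, path[1:]))
    energy = sum(GRAPH[u][v][1] for u, v in zip(path, path[1:]))
    return 1.0 * (len(path) - 1) + 0.5 * delay + 0.5 * energy

def ga_aco(src, dst, generations=20, pop_size=8):
    best = None
    for _ in range(generations):
        population = [p for p in (build_path(src, dst) for _ in range(pop_size)) if p]
        if not population:
            continue
        population.sort(key=cost)                 # GA-style selection of the fittest
        if best is None or cost(population[0]) < cost(best):
            best = population[0]
        for u, v in zip(best, best[1:]):          # reinforce the best path's edges
            PHEROMONE[u][v] += 1.0 / cost(best)
    return best

random.seed(1)
print("selected path:", ga_aco("S", "D"))
```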


6 Experimental Setup For effective simulation, we designed a network area of dimension 537 m * 100 m, with 31 mobile nodes. As in MANET, here too the source needs the help of multiple in-between nodes to send the package to the destination node. All mobile nodes moved randomly within the simulated network area. The simulations of the two newly proposed protocols, LTPCL and GAACO, were carried out using ns2 as the simulator. The protocols worked with the DTN agent, not with the TCP agent. In the Analysis section we compare these two protocols on the basis of various parameters (shown in Table 1) such as the number of hops, throughput, energy, end-to-end delay, package size versus delivery ratio, and packet delivery fraction (PDF).

6.1

Simulation Parameters

See Table 1.

7 Comparison of Results See Table 2 and Figs. 5, 6, 7 and 8.


Table 1 Simulation parameters

Simulation used: Network Simulator version 2 (NS2)
Code: TCL, OTCL and C++
Simulation time: 30.0 s
Discovery routing time: from 0.0 to 1.5 s
Time to send package: from 1.5 to 30 s
Simulation area: 537 * 100 m
Application traffic: Voice application
Number of nodes: 31 mobile nodes
Performance parameters: routing discovery time, end-to-end delay, package size versus delivery ratio, package size versus throughput, energy, number of hops, energy of nodes
Routing protocols: LTPCL, GAACO
Network type: Store and forward
Interface priority queue (ifq): Drop tail
Antenna model: Omni antenna
Transmission range of the nodes in cluster: 250 m
Size of the message getting generated: 512 kB
Buffer size of the nodes in the network: maximum 50 packets in ifq

8 Analysis of Results We have implemented the two new protocols (LTPCL and GAACO) in the same network environment with the same number of mobile nodes and compared their parameters (as listed in Table 2) to determine which protocol is more efficient and at what cost. After analysis, we deduced the final result under different scenarios (such as dead in-between nodes, in-between nodes that moved outside the transmission path, or in-between nodes whose memory buffer was full). We observe from the graphs in Figs. 5, 6, 7 and 8 that the GAACO protocol is more efficient than the LTPCL protocol because GAACO consumes fewer resources such as memory buffer and energy. The GAACO protocol responds quickly and changes the path when a disconnection arises between the nodes due to factors such as noise, a weak signal, in-between nodes having low energy, or nodes having a small memory/buffer size.

Table 2 Comparison of results

Protocol | Average channel accessing delay (ms) | Total energy consumption (J) | Energy consumption per node (J) | Overall residual energy (J) | Residual energy per node (J) | Packet delivery ratio | Average throughput (kbps)
LTPCL | 40.7418 | 121.031 (avg.) | 3.90422 | 33.9693 (avg.) | 1.09578 | 0.9953 | 82.04
GAACO | 37.6893 | 98.3506 (avg.) | 3.1726 | 56.6494 (avg.) | 1.8274 | 0.9976 | 82.23

Notes: For LTPCL, 425 packets were sent, 423 received, and 2 lost; for GAACO, 425 packets were sent, 424 received, and 1 lost. Total energy = 31 (nodes) * 5 J = 155 J.


Fig. 5 Packet size versus delay (L)

Fig. 6 Packet size versus delivery ratio (R)


Fig. 7 Comparison of energy consumption

Fig. 8 Packet size versus throughput


9 Conclusion We developed two new protocols (LTPCL and GAACO) that worked well with DTN agent in NS2. In MANET network, the biggest challenge is the battery backup, so we tried and let the mobile nodes move at slow speed. In our paper, we have programmed these two new protocols to change the route when the energy of the node comes near 2 joules to ensure the node(s) store-carry-forward the package if any disconnection occurs in communication between source and destination. The DTN approach is suitable for different environments where the signal is not strong and data can get lost due to reasons such as noise or the low level of battery of some nodes. Acknowledgments We express our sincere gratitude and indebtedness to administration of Guru Gobind Singh Indraprastha University, Delhi and University of Technology, Iraq for providing academic and research oriented environment.

References 1. Lindgren, A., Doria, A., Schelén, O.: Probabilistic routing in intermittently connected networks. ACM SIGMOBILE Mob. Comput. Commun. Rev. 7(3), 19–20 (2003) 2. McNamara, L., Mascolo, C., Capra, L.: Media sharing based on colocation prediction in urban transport. In: Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, pp. 58–69. ACM (2008) 3. Fall, K., Farrell, S.: DTN: an architectural retrospective. IEEE J. Sel. Areas Commun. 26(5), 828–836 (2008) 4. Nelson, S.C., Bakht, M., Kravets, R.: Encounter-based routing in DTNs. In: INFOCOM, pp. 846–854. IEEE (2009) 5. Johari, R., Gupta, N., Aneja, S.: CACBR: context aware community based routing for intermittently connected network. In: Proceedings of the 10th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks. ACM (2013) 6. Johari, R., Gupta, N., Aneja, S.: DSG-PC: dynamic social grouping based routing for non-uniform buffer capacities in dtn supported with periodic carriers. In: Quality, Reliability, Security and Robustness in Heterogeneous Networks, pp. 1–15. Springer, Berlin (2013) 7. Mahmood, D.A., Johari, R.: Routing in MANET using cluster based approach (RIMCA). In: International Conference on Computing for Sustainable Global Development (INDIACom), pp. 30–36. IEEE (2014) 8. Dahiya, P., Johari, R.: VAST: volume adaptive searching technique for optimized routing in mobile ad-hoc networks. In: IEEE International Advance Computing Conference (IACC), pp. 1–6, IEEE (2014) 9. Dahiya, P., Johari, R.: B-VAST: buffer-volume adaptive searching technique for optimized routing in opportunistic networks. In: 4th International Conference on Computer and Communication Technology (ICCCT), pp. 139–144. IEEE (2013) 10. Bhardwaj, P., Johari, R.: Matimo: metaheuristic approach towards implementation of membership models in opportunistic network. In: 4th International Conference—Confluence: The Next Generation Information Technology Summit, pp. 6–14. IET (2013)


11. Johari, R., Dhari Ali, M.: GAACO: metaheuristic driven approach for routing in Oppnet. In: Global Summit on Computer and Information Technology (GSCIT), IEEE, June 2014 12. Bhatia, A., Johari, R.: Genetically optimized ACO inspired PSO algorithm for DTNs. In: 3rd International Conference on Reliability, Infocom Technology and Optimization (ICRITO 2014), IEEE, Oct 2014

Integrated Modeling Environment “Virtual Computational Network” Alexey G. Shishkin, Sergey V. Stepanov and Fedor S. Zaitsev

Abstract Wireless modeling is an extremely important part of wireless networks research and design. Nowadays many advanced numerical (simulation) codes are being developed and widely used in different areas of wireless networks research. The key problem is that such codes are often proprietary and poorly documented. Another problem is that most codes cannot be used together due to different input/output formats. A set of data adapters and converters should be developed and adopted to couple such codes. At the same time, a number of problems with data processing, monitoring and results visualization must be solved. The "Virtual Computational Network" (VCN) modeling environment is designed as a universal, easy-to-use toolbox for creating complicated modeling cases and scenarios for numerical experiments built atop different codes and pre-calculated data. VCN is a powerful tool supporting distributed parallel computing, input/output data transformation, and results visualization "out-of-the-box", allowing end-users to greatly improve the efficiency of numerical modeling studies. Keywords Modeling environment

 Wireless network  Numerical study

1 Introduction Nowadays most problems in different fields of science are studied both experimentally and theoretically. With great overall hardware performance growth, numerical simulations in wireless communications and networking become more A.G. Shishkin  S.V. Stepanov  F.S. Zaitsev Department of Computational Mathematics & Cybernetics, Moscow State University, Vorobjovy Gory, 119992 Moscow, Russia A.G. Shishkin  S.V. Stepanov (&)  F.S. Zaitsev Scientific Research Institute of System Analysis of Russian Academy of Sciences, Nachimovsky Pr., 36-1, Moscow, Russia e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_15


and more important. At the same time, there is a number of difficulties with software solutions for modeling. Today there are hundreds and thousands of codes and tools for different modeling tasks, but, unfortunately, there are still no any defined standard or specifications for such codes integration and coupling. The key problem is that most numerical codes are often proprietary and poorly documented. Another problem is that codes cannot be used together due to different input/output data formats. If some simulation code is not an open source one, then the only way to reuse its numerical results is to write an application or script which will convert the output data to the proper format. “Virtual Computational Network” modeling environment was developed to allow users to build complicated modeling cases only with several mouse clicks by the help of simply-to-use graphical user interface. Having a chance to configure different code calculation chains and to define data convertors, a user can easily reuse third-party simulation codes saving his time. “Virtual Computational Network” is developed with Java, so it is a cross-platform application. Being integrated with ScopeShell data analysis and visualization integrated shell [1] and Tadisys task distribution system [2], it is a powerful tool for complicated simulation cases, which also supports distributed computations. Working on numerical experiments and code implementation, most users encounter the same difficulties—they need to implement custom input–output data convertors, setup and configure visualization software, and increase calculation performance and throughput (Fig. 1). In case of multi-model and multi-parameter experiments, the problem becomes much more complicated as a user has to define input parameter arrays per setup and run all the setups sequentially. The more setups and configurations are defined, the more difficult monitoring and analysis

Fig. 1 User interaction with modeling software


tasks become. It should be outlined that most numerical codes, which may be coupled with user’s self-developed codes, do not provide any graphical user interface (GUI) and monitoring functionality.

2 Modeling Environment To overcome all the problems mentioned above, the “Virtual Computational Network” solution was designed with the following key requirements and principles: • Allow users to couple third-party numerical codes and solutions without custom data convertors development; • Implement an easy-to-use graphical user interface (GUI); • Support data monitoring, analysis, and visualization; • Support distributed parallel computations. The aim is to develop a solution allowing to couple all the components together and make its usage transparent for a user (Fig. 2). At the same time, the described approach should have easy-to-use intuitive UI. “Virtual Computational Network” developed in Java is a platform-independent software that may be used in any environment. Third-party software and codes are integrated via special modules—computational blocks. Once integrated and configured, such blocks can be easily reused and shared with other users with the help of ImpEx module (Import-Export). The main idea is to allow user to setup computational experiment just in “two mouse clicks.” Having predefined computational blocks and built-in modules, one

Fig. 2 Modeling environment coupling as a main principle


can easily setup computational chains with a few mouse clicks in “drag-n-drop” mode via intuitive graphical user interface. A number of presets—a set of parameters and input data—can be defined for each computational block. A user has an ability to simply drag and drop the needed blocks and choose a proper preset. It should be mentioned that a number of computation chains can be defined and grouped in a project. That approach helps user to easily work with multi-modal and multi-parameter tasks. Each computational chain can be easily cloned or exported with the ImpEx module, so all the defined chains can be easily shared with other users or stored in any version control system (VCS) since it is a simple XML file. Calculation blocks sequence is defined in “drag-n-drop” mode just with a couple of mouse clicks connecting blocks with a “one-way” arrow. If a user needs to define an iterative calculation process, then it is necessary to configure a “stop condition” with the help of a predefined block. Each computational block is coupled with ScopeShell—an environment for data analysis, processing, and visualization [1]. Defining file masks, convertors, and formats, it is easy to process a large set of files that may be produced as an output data by any computational code (Fig. 3). At the same time, ScopeShell allows user to preprocess input data with a set of commands if needed. ScopeShell can be used in two modes—embedded or standalone ones. Embedded mode is allowed to use all the functions of the software just in the same environment. At the same time, it requires to store all the configuration files locally and it uses local resources, so such approach may lead to a significant performance degradation in case of a large computation chain with a number of blocks with local ScopeShell instances (Fig. 4). Standalone mode allows user to configure and run a ScopeShell
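The exact chain format of the tool is not documented in this section, so the following is only a hypothetical Python illustration of the concepts described above: computational blocks with presets, one-way edges between them, and an exportable chain definition.

```python
# The chain format used by "Virtual Computational Network" is not documented here,
# so the following is only a hypothetical illustration of the concepts described
# above (computational blocks, presets, and an exportable chain definition).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Block:
    name: str                       # wraps one third-party code or built-in module
    command: str                    # how the code is launched (illustrative)
    presets: Dict[str, dict] = field(default_factory=dict)

@dataclass
class Chain:
    blocks: List[Block] = field(default_factory=list)
    edges: List[tuple] = field(default_factory=list)   # one-way arrows between blocks

    def export(self) -> str:
        # A chain can be serialized (the real tool uses a simple XML file) and
        # shared or stored in a version control system.
        lines = [f"block {b.name}: {b.command}" for b in self.blocks]
        lines += [f"edge {a} -> {b}" for a, b in self.edges]
        return "\n".join(lines)

solver = Block("solver", "run_solver.sh", presets={"default": {"nodes": 20}})
plot = Block("plot", "gnuplot results.plt")
chain = Chain(blocks=[solver, plot], edges=[("solver", "plot")])
print(chain.export())
```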

Fig. 3 “Virtual Computational Network” graphical user interface. Calculation chains


Fig. 4 ScopeShell graphical user interface

instance remotely, not only at a local machine, and communicate with an instance via special protocol. Using such approach, it is possible to significantly decrease hardware resource usage. On the other hand, if a large set of input and output data is processed, then it may take much time to transmit all the data from one machine to another over a network, so a network throughput becomes important. One of the mentioned approaches should be chosen for a better performance and usability depending on the infrastructure configuration and the data being processed. Data visualization is performed with a number of third-party open source packets. By default, “Virtual Computational Network” is integrated with the GnuPlot [3], but other visualization tools can be easily integrated with the “Virtual Computational Network” environment with the help of the integration module. Each computational block can be calculated both locally and remotely. When a calculation chain is large, most of its computational blocks are supposed to be calculated remotely to significantly increase overall computational experiment performance. It should be underlined that remote calculation should be used wisely. For example, if a computational task is simple enough, then it is possible that data transmission and remote setup overhead will be significantly more noticeable than the task calculation time. The best practice to use remote approach is to split computational blocks that can be calculated in a parallel mode (or even concurrently), and run its calculations on different servers. In such case, performance will improve close to a linear mode. Distributed calculations are supported via Tadisys software [2]. Tadisys is a client-server application that implements two key approaches to support distributed calculations. The first one is a custom solution to run computational tasks remotely.


All the data are transformed via sftp protocol to remote servers, after that startup scripts are called. User has an ability to monitor remotely running processes and get results as soon as calculations are done. The second solution is built atop of the Hadoop project [4] and its subprojects. It implements bulk synchronous parallel (BSP) computing paradigm which allows run remote tasks easier and more effectively. When calculation chains are defined and configured, one can start its calculation in GUI mode. While up and running, a user can monitor the ongoing process or processes, extract current results, and visualize them.
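As a minimal illustration of the splitting principle described above, the sketch below runs independent blocks concurrently with a process pool; the block function is a stand-in for launching a real computational code, and the worker count is an arbitrary example.

```python
# A minimal sketch of the splitting principle described above: blocks that do not
# depend on each other are dispatched concurrently, so overall run time scales
# close to linearly with the number of workers. The task function is a stand-in
# for launching a remote computational block.
from concurrent.futures import ProcessPoolExecutor
import time

def run_block(block_id: int) -> str:
    time.sleep(1)                      # placeholder for a real computation
    return f"block {block_id} finished"

if __name__ == "__main__":
    start = time.time()
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_block, range(4)):
            print(result)
    print(f"elapsed: {time.time() - start:.1f} s")   # ~1 s instead of ~4 s serially
```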

3 Modeling Environment Setup and Results Wireless modeling is extremely important part of wireless networks research and design. “Virtual Computational Network” environment allows a researcher to solve a number of modeling tasks in a new, more effective way. For simplicity and conciseness, a general model of wireless interference is observed to demonstrate suggested approach pros and cons. One of the important numerical study parameters is throughput between arbitrary pairs of nodes in the presence of interference from other nodes in the studied network. The detailed model description and all the formulas are clearly described in a number of papers [5–7]. There are a number of input parameters: the key ones are number of nodes, area dimensions, and traffic demands. One of the important aims is to study the dependence of throughput profiles on the number of nodes input parameter. It should be underlined that the key aim of the “Virtual Computational Network” modeling environment is to allow an end-user to obtain not only a set of results for one configuration (that task may be solved with a number of other toolkits, e.g., Matlab), but also to define a number of numerical experiments for different configurations in one scenario. Keeping in mind the fact that distributed calculations are supported, overall performance may be greatly increased. More than that, all the obtained results may be compared just in two ticks in one place. A simple scenario for throughput analysis and a calculation graph should be defined in a “Virtual Computational Network” environment. Area dimensions are supposed to be fixed within one scenario. The first block is responsible for an input data setup. It should be noticed that the input data block is processed only once. For throughput profile dependency on number of nodes parameter study, the latter one must be varied, so the second block, defined in the calculation graph, is for iteration over nodes parameter definition. Iteration setup may be easily defined via a corresponding configuration dialog. The other blocks are defined to obtain a set of necessary values to calculate throughput parameters. Some extra blocks are added to visualize data and dump all the results to a set of text files for further analysis. Overall configuration, scenario setup and intermediate calculation process results are presented in Fig. 5.
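A toy sweep in the spirit of this scenario is sketched below: the number of nodes is varied, a throughput-like profile is computed for each configuration, and the results are dumped to text files. The metric is a placeholder and does not implement the interference model of Refs. [5-7].

```python
# A toy sweep in the spirit of the study described above: vary the number of
# nodes, evaluate a throughput-like metric for each configuration, and dump the
# profiles to text files. The metric below is a placeholder, not the interference
# model of Refs. [5-7].
import random

def throughput_profile(num_nodes: int, area=(500.0, 500.0), demands=10, seed=0):
    random.seed(seed)
    # Placeholder metric: pretend pairwise throughput degrades with node density.
    density = num_nodes / (area[0] * area[1])
    return [max(0.0, 1.0 - density * 1e4 * random.random()) for _ in range(demands)]

for num_nodes in (10, 20, 40, 80):          # the swept input parameter
    profile = throughput_profile(num_nodes)
    with open(f"throughput_{num_nodes}_nodes.txt", "w") as out:
        out.write("\n".join(f"{v:.4f}" for v in profile))
    print(num_nodes, "nodes -> mean throughput", sum(profile) / len(profile))
```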


Fig. 5 Calculation results analysis and visualization

“Virtual Computational Network” toolkit is extremely helpful when working with multi-model and multi-parameter tasks. The presented example allows an end-user to obtain a set of parameters for different number of nodes. All one should do is to define the described scenario, configure input data, define calculation blocks, and launch the calculation process. There is no need to control the process (but still one has an ability to obtain current results if needed while the process is running). When the process is finished, a set of calculated profiles may be visualized and compared. At the same time, some difficulties should be mentioned. As described above, a number of calculation blocks are required. There are a couple of approaches to implement such blocks (e.g., “out-of-the-box” integration with a number third party software toolkits), but still some extra configuration and integration steps may be required. But once configured, it is extremely easy to build any scenario with defined blocks.

4 Conclusion As shown above, “Virtual Computational Network” modeling software may significantly increase overall numerical experiments performance. With the help of an easy-to-use graphical user interface, integration with data analysis, processing and visualization software, and implementation of parallel distribution calculations, it is possible to perform numerical studies of wireless networks in a more effective way.


References 1. Kostomarov, D.P., Zaitsev, F.S., Shishkin, A.G., Stepanov, S.V.: TheScopeShell graphic interface: support for computational experiments and data visualization. Moscow Univ. Comput. Math. Cybern. 34, 191–197 (2010) 2. Kostomarov, D.P., Zaitsev, F.S., Shishkin, A.G., Stepanov, S.V., Suchkov, E.P.: Automating computations in the virtual tokamak software system. Moscow Univ. Comput. Math. Cybern. 36, 165–168 (2012) 3. GnuPlot website. http://www.gnuplot.info 4. Hadoop website. http://hadoop.apache.org 5. Camp, T., Boleng, J., Davies, V.: A survey of mobility models for ad hoc network research. Wireless Commun. Mob. Comput.: Spec. Issue Mob. AdHoc Netw.: Res. Trends Appli. 2(5), 483–502 (2002) 6. Agarwal, S., Padhye, J., Padmanabhan, V.N., Qiu L., Rao A. , Zill, B.: Estimation of link interference in static multi-hop wireless networks. In: Proceedings of Internet Measurement Conference (IMC) (2005) 7. Qiu, L., Zhang, Y., Wang, F., Kyung, N., RatulMahajan, H.: A general model of wireless interference. In: MOBICOM, pp. 171–182 (2007)

Comparative Study of Different Windowing Techniques on Coupling Modes in a Coaxial Bragg Structure Xueyong Ding, Yuan Wang and Lingling Wang

Abstract Based on the mode-coupling method, a comparative numerical simulation study is carried out on the frequency response characteristics of the coaxial Bragg structure with different windowing functions applied to the coupling modes. Results show that when employing the windowing-function techniques, the residual side-lobes of the frequency response can be effectively suppressed. When employing the Bragg structure with the Blackman windowing function, the reflectivity of the working mode and the competing mode is the minimum. When employing the Hamming windowing function, the reflectivity of the working mode and the competing mode is the maximum. These characteristics can improve the performance when the Bragg structure is used as a reflector or filter.





Keywords Coaxial Bragg structure Hanning windowing function Hamming windowing function Blackman windowing function Residual side-lobes





1 Introduction A metallic Bragg structure is considered to be suitable to construct overmoded cavities for high-power cyclotron auto-resonance maser (CARM) and free-electron laser (FEL) [1–10] oscillators in millimeter and sub-millimeter wave ranges. Generally speaking, frequency response curve of the reflectivity of a coaxial Bragg structure has residual side-lobes, and the residual side-lobes are harmful to the performance of the Bragg structure. Fortunately, they can be successfully suppressed using the windowing-functions technique [9–13]. Previously, we only used the Hamming windowing-function technique to suppress the effect of the residual side-lobes [11], and the results demonstrate that Hamming windowing-function technique is useful, no matter if the phase difference X. Ding (&)  Y. Wang  L. Wang Department of Polytechnic, Sanya University Sanya, Hainan 572022, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_16


between the outer and inner corrugations is 0, π/2, or π. The common windowing functions are Hamming, Hanning, and Blackman window, and each of them has their own characteristic. It is necessary to discuss the effects of other windowing-function techniques on the frequency response characteristics of the coaxial Bragg structure. In this paper, comparative study of different windowing-function techniques in coaxial Bragg structure which operates in a multi-mode model and a higher-order mode will be presented.

2 Different Windowing-Function Techniques Figure 1 shows the longitudinal-sectional view of the coaxial Bragg structure with sinusoidal ripple, where $a_0$, $l_{out}$, and $\varphi_{out}$ are the outer-wall average radius, corrugation depth, and phase; $b_0$, $l_{in}$, and $\varphi_{in}$ are the inner-rod average radius, corrugation depth, and phase; $p_b$ is the corrugation period; and $L$ is the structure length, respectively. In this paper, the cylindrical coordinate system $(r, \varphi, z)$ with the unit vectors $(\hat{r}, \hat{\varphi}, \hat{z})$ is employed. The dependence of the outer-wall radius $R_{out}$ and the inner-rod radius $R_{in}$ on the longitudinal position $z$ can be expressed by [9]:

$R_{out}(z) = a_0 - l_{out}\cos(k_{out}z + \varphi_{out})$   (1)

$R_{in}(z) = b_0 - l_{in}\cos(k_{in}z + \varphi_{in})$   (2)

Fig. 1 Profile of a coaxial Bragg structure with sinusoidal ripples (the sketch marks the outer-wall and inner-rod radii $R_{out}$ and $R_{in}$, the corrugation depths $l_{out}$ and $l_{in}$, the period $p_b$, the surface normals $\hat{n}_{out}$ and $\hat{n}_{in}$, the angles $\theta_{out}$ and $\theta_{in}$, the average radii $a_0$ and $b_0$, and the length $L$)


where $k_{out} = 2\pi/p_b$ and $k_{in} = 2\pi/p_b$. The angle $\theta_{out}$ between $\hat{r}$ and $\hat{n}_{out}$, and the angle $\theta_{in}$ between $\hat{r}$ and $\hat{n}_{in}$, are determined by the following equations:

$\tan\theta_{out} = \dfrac{dR_{out}}{dz} = l_{out}k_b\sin(k_b z + \phi_{out})$   (3)

$\tan\theta_{in} = \dfrac{dR_{in}}{dz} = l_{in}k_b\sin(k_b z + \phi_{in})$   (4)

where $\hat{n}_{out}$ and $\hat{n}_{in}$ are the unit vectors normal to the outer-wall and inner-rod surfaces, respectively. Supposing

$R_{out}(z) = a_0 - l_{out}W(z)\cos(k_{out}z + \phi_{out})$   (5)

$R_{in}(z) = b_0 - l_{in}W(z)\cos(k_{in}z + \phi_{in})$   (6)

In formulas (5) and (6), $W(z)$ is the windowing function; when $W(z) = 1$, (5) and (6) reduce to (1) and (2), which means the coaxial Bragg structure with the windowing function returns to the structure without the windowing function. In this paper, the common Hamming, Hanning, and Blackman windowing functions are discussed. The Hanning windowing function is derived from the cosine windowing function, and its expression can be written as

$W(z) = 0.5 - 0.5\cos(2\pi z/L)$   (7)

The Hamming windowing function is an improvement of the Hanning windowing function, and its expression can be written as

$W(z) = 0.54 - 0.46\cos(2\pi z/L)$   (8)

The expression of the Blackman windowing function can be written as

$W(z) = 0.42 - 0.5\cos(2\pi z/L) + 0.08\cos(4\pi z/L)$   (9)
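The three windows of Eqs. (7)-(9) and the windowed ripple term of Eq. (5) can be evaluated directly; the short sketch below does so using the Table 1 values purely as example inputs.

```python
# A short sketch that evaluates the three windows of Eqs. (7)-(9) and the windowed
# corrugation depth of Eq. (5); the structure length and ripple depth are taken
# from Table 1 purely as example inputs.
import numpy as np

L = 110.1e-3          # structure length (m)
l_out = 0.1e-3        # corrugation amplitude (m)
p_b = 1.52e-3         # ripple period (m)
z = np.linspace(0.0, L, 2001)

windows = {
    "Hanning":  0.5 - 0.5 * np.cos(2 * np.pi * z / L),                       # Eq. (7)
    "Hamming":  0.54 - 0.46 * np.cos(2 * np.pi * z / L),                     # Eq. (8)
    "Blackman": 0.42 - 0.5 * np.cos(2 * np.pi * z / L)
                + 0.08 * np.cos(4 * np.pi * z / L),                          # Eq. (9)
}

k_out = 2 * np.pi / p_b
for name, w in windows.items():
    depth = l_out * w * np.cos(k_out * z)       # modulated ripple term of Eq. (5)
    print(f"{name:9s} peak window value = {w.max():.2f}, "
          f"max ripple depth = {depth.max() * 1e3:.3f} mm")
```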

Figure 2 shows the profile of the coaxial Bragg structure with windowing-function technique, and Fig. 3 shows the comparison of the varying ripple amplitude with different windowing-function technique. From this figure, we can clearly see the effect of windowing functions, and we can also see the middle corrugation is the deepest while the depth of other corrugations is gradually decreasing. Since the higher-order mode is required when the device operates at high frequency, in this case multi-mode coupling should be taken into account. A comprehensive coupled mode theory of the coaxial Bragg reflector has been


Fig. 2 Profile of a coaxial Bragg structure with windowing-function technique

Fig. 3 The comparison of the varying ripple amplitude with different windowing-function technique

(Fig. 3 plots the corrugation depth in mm against the longitudinal position in mm for the Hamming, Hanning, and Blackman windows.)

developed in Ref. [9], where a set of coupled differential equations is derived to describe the intercoupling between the forward and backward wave components of each waveguide mode in the reflector:

$\dfrac{df_i^+}{dz} = -(\alpha_i + j\Delta_i)f_i^+ + j\sum_{k=1}^{N} G_{ik} f_k^-$   (10)

$\dfrac{df_i^-}{dz} = (\alpha_i + j\Delta_i)f_i^- - j\sum_{k} G_{ik} f_k^+$   (11)

where $f_i^{\pm} = A_i^{\pm} e^{\mp jk_b z/2}$, $\Delta_i = \beta_i - k_b/2$, and $A_i^{\pm}$ denote the amplitudes of the forward and backward traveling wave components of the $i$th waveguide mode; $\Delta_i$, $\beta_i$, and $\alpha_i$ are the Bragg mismatch, the axial wave number, and the attenuation constant; $G_{ik}$ and its complex conjugate $G_{ik}^{*}$ denote the coupling coefficients between the $i$th mode and the $k$th mode. Taking the matched boundary conditions at both ends of the reflector, one can solve the coupled equations for each mode using the finite difference method or the eigenvector method. Here the reflectivity for the $i$th mode is defined as the ratio of the backward wave power of


the ith mode to the forward wave power of the operating mode at input port of the reflector. The expressions about TEM, TE, and TM modes are discussed in detail, respectively in Ref. [9]. Based on the coupled mode equations [10, 11], a code is performed to evaluate the frequency response characteristics of the different windowing-function techniques.
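One standard way to solve coupled-mode systems such as Eqs. (10)-(11) with matched boundaries is to multiply segment transfer matrices along the structure; the sketch below illustrates this for a two-mode case. It is not the authors' code, and the coupling profile, mismatches, and attenuation values are made-up examples.

```python
# A minimal sketch of one standard way to evaluate Eqs. (10)-(11): slice the
# structure into segments, build the local coupling matrix, multiply the segment
# transfer matrices, and apply matched boundaries (no backward wave at the output).
# The two-mode coupling profile below is a made-up example, not the TE6,1/TM6,2
# coefficients of the paper.
import numpy as np
from scipy.linalg import expm

def reflectivity(z, alpha, delta, G_of_z, input_mode=0):
    """alpha, delta: arrays (n_modes,); G_of_z(z) -> (n_modes, n_modes) coupling."""
    n = len(alpha)
    T = np.eye(2 * n, dtype=complex)
    for z0, z1 in zip(z[:-1], z[1:]):
        zm, dz = 0.5 * (z0 + z1), z1 - z0
        d = np.diag(alpha + 1j * delta)
        G = G_of_z(zm)
        M = np.block([[-d, 1j * G], [-1j * G, d]])     # Eqs. (10)-(11) in matrix form
        T = expm(M * dz) @ T
    T21, T22 = T[n:, :n], T[n:, n:]
    f_plus_0 = np.zeros(n, dtype=complex)
    f_plus_0[input_mode] = 1.0
    f_minus_0 = -np.linalg.solve(T22, T21 @ f_plus_0)  # matched output: f^-(L) = 0
    return np.abs(f_minus_0) ** 2                      # reflectivity per mode

L = 0.1101
z = np.linspace(0.0, L, 400)
alpha = np.array([0.0, 0.0])                # attenuation constants (example)
delta = np.array([5.0, 40.0])               # Bragg mismatches (example, 1/m)
window = lambda s: 0.54 - 0.46 * np.cos(2 * np.pi * s / L)   # Hamming taper
G_of_z = lambda s: window(s) * np.array([[30.0, 10.0], [10.0, 30.0]])
print("reflectivity of modes:", reflectivity(z, alpha, delta, G_of_z))
```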

3 Effect of Different Windowing Techniques on Coupling Modes Since higher-order mode may be potentially applied in the CARM, we will discuss the influence of the different windowing techniques in a coaxial Bragg structure when it operates in higher-order mode at THz frequency. As an example, we assume that the incident mode TE6,1 is the desired working mode with frequency range of 85–115 GHz (the central frequency being 100 GHz), and the other parameters are summarized in Table 1, respectively. As is indicated in Ref. [9], mode coupling should be taken into account for the higher-order mode operation, suppose that, except for the operating mode TE6,1, the neighboring modes TM6,2 may be involved in the coaxial Bragg structure. Therefore, a code is performed to evaluate the reflection characteristics of the considered Bragg structure. In Ref. [10], it has been discussed that band-gap overlap in an overmoded coaxial Bragg structure can be efficiently separated by setting the phase difference between the inner-rod and outer-conductor corrugations to be p, and single-mode operation of TE6,1 can be achieved at the operating frequency of 0.35 THz due to the suppression of reflection of the competing modes. Figure 4 shows the corresponding band gaps of the involved modes TE6,1 and TM6,2. It indicates that there are two main band gaps which, respectively, correspond to the coupling between the forward wave and the backward wave of TE6,1 (TE6;1 $ TE6;1 scattering) at lower frequency range, and to the coupling of the forward wave of TE6,1 with the backward wave of the competing mode TM6,2 (TE6;1 $ TM6;2 scattering) at higher frequency range. From Fig. 4, we can see when the Bragg structure has no windowing-function technique, it has serious residual side-lobes. Evidently, windowing-function technique is quite effective to suppress the residual side-lobes of TE6,1 and TM6,2, and this technique can also improve the center frequency reflectivity of the working mode and the competing mode. Figure 5 shows the frequency response of the reflectivity with different windowing-function techniques. It shows that the reflectivity of the working mode and the competing mode with Blackman windowing function is the minimum, while the reflectivity with hamming windowing function is the maximum, so the hamming windowing function is the best selection. The physical explanation is that as the input mode is injected in the structure, it is decomposed into a forward wave propagating along the positive z-direction and a

168

X. Ding et al.

Table 1 Main parameters of the coaxial Bragg structure

Operating frequency: 100 GHz
Operating mode: TE6,1
Outer-wall radius, a0: 20.0 mm
Inner-rod radius, b0: 14.0 mm
Corrugation amplitude: 0.1 mm
Ripple period pb: 1.52 mm
Length of cavity: 110.1 mm
Initial corrugation phase of the outer wall, φout: 0
Initial corrugation phase of the inner rod, φin: π

(Fig. 4 plots reflectivity versus frequency (THz) over 0.335-0.365 THz, with and without windows; the TE6,1 and TM6,2 band gaps and the residual side-lobes are marked.)

Fig. 4 Frequency response of the reflectivity in a coaxial Bragg structure with and without windows, where the phase difference between the inner-rod and outer-conductor corrugations is $\delta\varphi = |\varphi_{in} - \varphi_{out}| = \pi$ and the other parameters are the same as in Table 1

(Fig. 5 plots reflectivity versus frequency (THz) over 0.335-0.365 THz for the Hamming, Hanning, and Blackman windows; the TE6,1 and TM6,2 band gaps and the residual side-lobes are marked.)

Fig. 5 Frequency response of the reflectivity in a coaxial Bragg structure with different windowing functions, where the phase difference between the inner-rod and outer-conductor corrugations is $\delta\varphi = |\varphi_{in} - \varphi_{out}| = \pi$ and the other parameters are the same as in Table 1


backward wave propagating along the opposite z-direction. These two waves in the Bragg structure exhibit a frequency-selective phenomenon known as stop-bands or pass-bands, which enables the coaxial Bragg structure to act as a filter of potentially infinite length; windowing plays the role of truncating this potentially infinite-length filter, which results in the suppression of the side-lobes of the frequency response. Physically speaking, different windowing functions change the boundary conditions of the coaxial Bragg structure, which affects its electromagnetic characteristics and leads to the different frequency response characteristics.

4 Conclusions In this paper, effects of different windowing functions on coupling modes of the coaxial Bragg structure on the frequency response have been investigated by making use of numerical simulations. Two points can be drawn from the simulation results: (1) Both the residual side-lobes of the frequency response of the working mode and the competing mode can be effectively suppressed by employing the windowing-function technique. (2) When employing the Bragg structure with Blackman windowing function, its reflectivity of the working mode and the competing mode is the minimum. When employing the hamming windowing function, its reflectivity of the working mode and the competing mode is the maximum. Acknowledgments This work was supported mainly by the Project Supported by the Scientific Research Fund of the provincial Natural Science Foundation of Hainan (No. 614252), the Project Supported by the Scientific Research Fund of the provincial Natural Science Foundation of Hainan (No. 114015) and the Key Laboratory Foundation of Sanya (No. L1305).

References 1. Chong, C.K., et al.: Bragg reflectors. IEEE Trans. Plasma Sci. 20, 393–402 (1992) 2. Bratman, V.L., Denisov, G.G., Kol’chugin, B.D., Samsonov, S.V., Volkov, A.B.: Experimental demonstration of high-efficiency cyclotron-autoresonance-maser operation. Phys. Rev. Lett. 75(17), 3102 (1995) 3. Ginzburg, N.S., et al.: The use of a hybrid resonator consisting of one-dimensional and two-dimensional Bragg reflectors for generation of spatially coherent radiation in a coaxial free-electron laser. Phys. Plasmas 9, 2798–2802 (2002) 4. Ginzburg, N.S., Kaminsky, A.A., Kaminsky, A., Yu, N., Peskov, S.N., Sedykh, A.P., Sergeev, A.S.: High-efficiency single-mode free-electron maser oscillator based on a Bragg resonator with step of phase of corrugation. Phys. Rev. Lett. 84, 3574 (2000) 5. Ginzburg, N.S., Yu, N., Peskov, A.S., Sergeev, A.D.R., Phelps. I.V., Konoplev, G.R.M., Robb Cross, A.W., Arzhannikov, A.V., Sinitsky, S.L.: Theory and design of a free-electron maser with two-dimensional feedback driven by a sheet electron beam. Phys. Rev. E 60(1), 935 (1999)


6. Konoplev, I.V., et al.: Progress of the Strathclyde free electron maser experiment using 2D Bragg structure. Nucl. Instr. Meth. A 445, 236–240 (2000) 7. Barroso, J.J., Leite Neto, J.P.: Design of coaxial Bragg reflectors. IEEE Trans. Plasma Sci. 34, 666–672 (2006) 8. Lai, Y.-X., Zhang, S.-C.: Coaxial Bragg reflector with a corrugated inner rod. IEEE Microw. Wirel. Compon. Lett. 17, 328–331 (2007) 9. Lai, Y.-X., Zhang, S.-C.: Multiwave interaction formulation of a coaxial Bragg structure and its experimental verification. Phys. Plasmas 14, 113301 (2007) 10. Lai, Y.-X., Zhang, S.-C.: Separation of band-gap overlap in a coaxial Bragg structure operating in higher-order mode at Terahertz frequency. Phys. Plasmas 15, 033301 (2008) 11. Chen, X.-H., Zhang, S.C.: Suppression of residual side-lobes in a coaxial Bragg reflector. Int. J. Infrared Millimeter Waves 29, 552–557 (2008) 12. Ding, X.-Y., Wang, L.-L.: Comparative study of numerical simulations in coaxial Bragg Reflector with the new tapered ripples. Chin. J. Radio Sci. 26(1), 55–61 (2012) 13. Ding, X.-Y., Li, H.-F., Lv, Z.-S.: Effect of ripple taper on band-gap overlap in a coaxial bragg structure operating at terahertz frequency. Phys. Plasmas 19, 092105 (2012)

Joint Two-Dimensional DOA and Power Estimation Based on DML-ESPRIT Algorithm Zheng Luo and Donghua Liu

Abstract By organically integrating the joint multidimensional parameter estimation ability of deterministic maximum likelihood (DML) estimation criterion and the computation efficiency of estimating signal parameters via rotational invariance techniques (ESPRIT) algorithm, a novel joint two-dimensional DOA (2-D DOA) and power fast estimation algorithm named DML-ESPRIT is proposed. First, based on the special characteristics of double L-shaped array, we introduce the space cone angle to represent the source’s 2-D DOA. This operation successfully converts the multidimensional space searching problem to separate one-dimensional angle estimation. Under the DML estimation criterion, the joint estimation model of parameters such as space cone angle and power is established. Then TLS-ESPRIT method which could avoid the spectrum peak search is used to solve the model. On the premise of keeping estimation precision, the time consumption of DML-ESPRIT algorithm is only about 35 ms on the average. So it has the very good prospect for engineering applications.

 



Keywords Two-dimensional direction-of-arrival estimation Power estimation Space cone angle Maximum likelihood estimation criterion ESPRIT algorithm



1 Introduction 2-D DOA estimation has been a hotspot in the research of communication, radar, navigation, etc. [1–4]. Especially, the rapid joint parameter estimation of 2-D DOA and power [5, 6] is a new research direction in the field of electronic warfare and radar. As is known to all, the Bayesian method is a classical method based on statistical theory, applied to parameter estimation problem. Maximum likelihood estimation Z. Luo (&)  D. Liu Electronic System Engineering Company of China, Beijing 10079, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_17


method is a special case of the Bayesian method, namely the Bayesian optimal estimate under the condition of known white noise [7–9]. The maximum likelihood parameter estimation method was applied to the DOA estimation problem by Ziskind et al. in 1988 [10]. Compared with subspace-decomposition DOA estimation algorithms such as MUSIC (multiple signal classification) [7], the maximum likelihood algorithm offers high estimation precision and good robustness and stability, but its realization process is complicated. The 2-D ESPRIT algorithm [11] performs 2-D DOA estimation without a two-dimensional spectral peak search, thus greatly reducing the amount of calculation; it can meet the requirements of strong real-time performance and has wide application prospects. To address the above problems, this paper explores a fast two-dimensional joint estimation algorithm of DOA and power on the basis of the double L-shaped array. By introducing the space cone angle representation for the 2-D DOA, the estimation of the azimuth and elevation angles is converted into independent one-dimensional estimations. The deterministic maximum likelihood method is used for the joint estimation of the space cone angles and the source power. To overcome the computational complexity of the maximum likelihood estimation model, the TLS-ESPRIT algorithm is applied, through the study of the extended array, to achieve fast solving of the model.

2 Array Structure and Signal Model As shown in Fig. 1, the double L-shaped array consists of 3M + 1 elements: sub-arrays X1 and X2 lie on the x axis, sub-arrays Y1 and Y2 on the y axis, and sub-arrays Z1 and Z2 on the z axis, with the angles between the x, y, and z axes all equal to 90°. To facilitate the description, this paper uses x, y, and z to denote the sub-arrays on the x, y, and z axes, respectively.

Fig. 1 Illustration of double L-shaped array

(The figure marks the sub-arrays X1, X2, Y1, Y2, Z1, and Z2, an incident source s(t), and the angles θ and φ.)


The x, y, and z sub-arrays have the same structure, each being a uniform linear array with M + 1 elements. Suppose there are p far-field narrowband sources impinging on the array, where the kth source has elevation angle $\theta_k$ and azimuth angle $\varphi_k$ ($k = 1, 2, \ldots, p$); the signal model of this array is then given by

$x(t) = A_x(\theta, \varphi, f)s(t) + n_x(t)$
$y(t) = A_y(\theta, \varphi, f)s(t) + n_y(t)$   (1)
$z(t) = A_z(\theta, \varphi, f)s(t) + n_z(t)$

where $A_x(\theta, \varphi, f)$, $A_y(\theta, \varphi, f)$, and $A_z(\theta, \varphi, f)$ are the array manifold matrices formed by the steering vectors $a(\theta_k, \varphi_k)$, $s(t)$ is the signal vector, and $n(t)$ is the noise vector with zero mean and covariance matrix $\sigma_N^2 I_M$. The elements of the signal vector, $s_k(t)$, can be represented as

$s_k(t) = u_k(t)e^{j(\omega_0 t + \Phi(t))}$   (2)

where $u_k(t)$, $\Phi(t)$, and $\omega_0$ represent the amplitude, phase, and frequency information of the signal, respectively.

3 The Proposed Method

3.1 The Dimension Reduction of 2-D DOA Using Space Cone Angle

Following the idea of dimensionality reduction, we put forward the concept of the space cone angle: in the three-dimensional coordinate system shown in Fig. 2, a half-cone surface can be constructed whose vertex is the origin and whose axis is the center line of the corresponding sub-array. The angle between a line on the cone surface and the direction of the axis is called the space cone angle.


Fig. 2 2-D DOA representation by space cone angle


The angle between a direction lying on the cone surface and the cone axis is called the space cone angle; α, β, and γ denote the space cone angles with respect to the x, y, and z axes, respectively, with {α, β, γ} ∈ [0°, 180°]. Clearly, for the double L array, the direction of an incident source is the intersection of the half-cone surfaces associated with its angles to the x and y axes. By geometry, the space cone angles α, β, γ and the 2-D DOA satisfy the conversion

cos α = cos θ · cos φ
cos β = sin θ · cos φ        (3)
cos γ = sin φ

For the multi-source 2-D DOA estimation problem, once the space cone angles measured on the double L array are correctly paired, the 2-D DOA of each incident source can be obtained accurately from its space cone angles.
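To illustrate the conversion in Eq. (3), the short sketch below (illustrative only; the example angles are assumed) maps a 2-D DOA to its space cone angles and recovers the azimuth and elevation from them, under the assumption that cos θ > 0 so the inversion is unambiguous.

```python
import numpy as np

def doa_to_cone_angles(theta_deg, phi_deg):
    """2-D DOA -> space cone angles (alpha, beta, gamma) in degrees, per Eq. (3)."""
    th, ph = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    cosines = [np.cos(th) * np.cos(ph), np.sin(th) * np.cos(ph), np.sin(ph)]
    return np.rad2deg(np.arccos(cosines))

def cone_angles_to_doa(alpha_deg, beta_deg, gamma_deg):
    """Invert Eq. (3), assuming cos(theta) > 0 so the sign of cos(phi) follows cos(alpha)."""
    ca, cb, cg = np.cos(np.deg2rad([alpha_deg, beta_deg, gamma_deg]))
    cos_phi = np.sign(ca) * np.hypot(ca, cb)     # |cos(phi)| = sqrt(ca^2 + cb^2)
    s = 1.0 if cos_phi >= 0 else -1.0
    theta = np.arctan2(s * cb, s * ca)           # tan(theta) = cb/ca (common sign factor cancels)
    phi = np.arctan2(cg, cos_phi)
    return np.rad2deg(theta), np.rad2deg(phi)

alpha, beta, gamma = doa_to_cone_angles(44.0, 113.0)
print(cone_angles_to_doa(alpha, beta, gamma))    # recovers approximately (44.0, 113.0)
```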

3.2 2-D DOA and Power Estimation Algorithm

According to the array, signal, and noise model described above, we take the x-axis sub-array as an example; the other sub-arrays are treated in the same way. For unknown deterministic source signals, the second-order moments of the observed data satisfy

E{x(ti)} = A(θ, φ) s(ti)        (4)

E{[x(ti) − x̄(ti)][x(tj) − x̄(tj)]^H} = σ² I δij        (5)

E{[x(ti) − x̄(ti)][x(tj) − x̄(tj)]^T} = 0        (6)

where Eq. (4) gives the mean used in the deterministic maximum likelihood (DML) criterion and Eq. (5) gives the covariance of the observation vector. The joint probability density function of N snapshots can then be expressed as

F(x1, x2, …, xN) = ∏_{i=1}^{N} (1 / (π^M σ^{2M})) exp(−(1/σ²) (xi − A si)^H (xi − A si))        (7)

Taking the negative logarithm of both sides of Eq. (7) gives

−ln F(x1, x2, …, xN) = MN ln π + MN ln σ² + (1/σ²) ∑_{i=1}^{N} |xi − A si|²        (8)


By Eq. (8), the joint probability density function F(x1, x2, …, xN) is a function of the unknown parameters a, σ², and S. The maximum likelihood estimates are the parameter values that minimize criterion (8); the resulting estimation formulas for S, σ², and a are, respectively,

Ŝ = A† X        (9)

σ̂² = (1/M) tr{P_A^⊥ R̂}        (10)

â_DML = arg max_{a, si} tr{P_A R̂} = arg max_{a, si} tr{A(A^H A)^{−1} A^H R̂}        (11)

where A† is the pseudo-inverse of A, P_A is the projection matrix onto the column space of A, P_A^⊥ is its orthogonal complement, R̂ is the estimate of the covariance matrix R, tr{·} is the trace operator, and arg max{·} selects the maximizing argument. Extending the deterministic maximum likelihood principle, the signal power can be estimated as

P̂ = E[ŝ ŝ^H] = A† R̂ (A†)^H        (12)
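The closed-form steps in Eqs. (9), (10), and (12) are easy to evaluate once a manifold matrix A and the data X are available; the following NumPy sketch (an illustration under assumed variable names, not the authors' implementation) computes them directly.

```python
import numpy as np

def dml_closed_form(A, X):
    """Evaluate Eqs. (9), (10) and (12) for a manifold A (M x p) and snapshots X (M x N)."""
    M, N = X.shape
    R_hat = X @ X.conj().T / N                       # sample covariance
    A_pinv = np.linalg.pinv(A)                       # pseudo-inverse of A
    P_A = A @ A_pinv                                 # projection onto the column space of A
    P_A_perp = np.eye(M) - P_A                       # orthogonal projection
    S_hat = A_pinv @ X                               # Eq. (9): waveform estimates
    sigma2_hat = np.real(np.trace(P_A_perp @ R_hat)) / M   # Eq. (10): noise power
    P_hat = A_pinv @ R_hat @ A_pinv.conj().T         # Eq. (12): diagonal approximates source powers
    return S_hat, sigma2_hat, P_hat
```

In practice A is built from the cone-angle estimates produced by the TLS-ESPRIT step described next.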

The angle estimation in the DML algorithm is a complex multidimensional search whose computation grows exponentially with the number of targets. The alternating projection (AP) algorithm of [12, 13] implements the angle estimation by turning the complex multidimensional grid search into multiple simple one-dimensional searches; however, when the number of sources is large, its convergence is quite slow. We therefore adopt the fast TLS-ESPRIT algorithm for space cone angle estimation, for the following reasons: (1) ESPRIT requires no spectral peak search, so its estimation is highly time-efficient and its running time is not affected by the number of sources, which suits multi-source 2-D DOA estimation. (2) LS-ESPRIT, TLS-ESPRIT, TAM, and real-valued ESPRIT have similar estimation performance, but at low SNR the TLS-ESPRIT algorithm performs best, which meets the demand of localization and detection in complicated environments. In summary, maximum likelihood estimation theory yields the joint estimation model for a, σ², and S, and the TLS-ESPRIT algorithm greatly reduces the computational load of solving it. We therefore name the algorithm the DML-ESPRIT algorithm.
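For reference, the sketch below gives a generic, textbook-style TLS-ESPRIT routine (not the authors' exact implementation; the half-wavelength spacing and function names are assumptions). Applied to the sample covariance of each uniform sub-array, it returns the cosines of the corresponding space cone angles; pairing the results across sub-arrays and inserting them into Eqs. (3) and (12) completes the DML-ESPRIT estimate.

```python
import numpy as np

def tls_esprit_cosines(R_hat, p, d_over_lam=0.5):
    """TLS-ESPRIT on one uniform linear sub-array: cos(space cone angle) of p sources."""
    w, U = np.linalg.eigh(R_hat)
    Es = U[:, np.argsort(w)[::-1][:p]]        # signal subspace (p dominant eigenvectors)
    E1, E2 = Es[:-1, :], Es[1:, :]            # two maximally overlapping sub-arrays
    # total-least-squares solution of E1 * Psi ~= E2
    _, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
    V = Vh.conj().T                           # 2p x 2p right singular vectors
    V12, V22 = V[:p, p:], V[p:, p:]           # blocks tied to the p smallest singular values
    Psi = -V12 @ np.linalg.inv(V22)
    mu = np.angle(np.linalg.eigvals(Psi))     # phase rotation per element, 2*pi*d*cos(angle)/lambda
    return np.sort(mu / (2 * np.pi * d_over_lam))
```

The cosines are returned sorted, so matching cos α, cos β, and cos γ belonging to the same source across sub-arrays remains a separate pairing step.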


4 Simulation Results

To verify the performance of the DML-ESPRIT algorithm, the following experiments were performed. The simulations use the array structure shown in Fig. 1, with 6 elements per sub-array, an element spacing d of λmin/2, additive white Gaussian noise, and 512 snapshots.

4.1 Joint Estimation of 2-D DOA and Power

Assume three far-field, independent signals of different powers impinge on the double L array described above, with azimuth, elevation, and power combinations (θk, φk, Pk) of (39.44°, 114.24°, 61), (44°, 113°, 1), and (52°, 78°, 0.5), respectively. To compare estimation performance, the AP-DML, AP-SSF, DML-ESPRIT, and MUSIC algorithms are applied to estimate the source parameters. The SNR is 10 dB, and 1000 Monte Carlo trials were carried out. Figure 3 shows the histogram of the space cone angle and power estimates of the DML-ESPRIT algorithm, and Fig. 4 shows the 2-D DOA estimates of the four algorithms together with the true 2-D DOA on a constellation plot.

Fig. 3 Joint estimation results of space cone angle and power (estimated power vs. α, β, and γ in degrees; true DOA and power compared with the DML-ESPRIT estimates)

Fig. 4 Constellation for 2-D DOA estimation results (MUSIC, AP-DML, AP-SSF, DML-ESPRIT, and true DOA)

As Fig. 3 shows, the DML-ESPRIT algorithm accurately realizes the joint estimation of the space cone angles and the power, and the estimates are very close to the true values. The simulation results in Fig. 4 show that the space cone angles α and β are paired correctly and that the transformed 2-D DOA estimates coincide with the true values.

4.2 Validation of the Algorithm's Time Consumption

The purpose of this experiment is to compare the efficiency of the four algorithms. The antenna array structure and signal model are the same as in the first experiment. Two incident signals with parameters (θk, φk, Pk) of (30°, 120°, 0.5) and (60°, 240°, 1) are assumed, with SNR = 10 dB and 1000 Monte Carlo trials. Table 1 lists the average running time per run of the MUSIC, AP-DML, AP-SSF, and DML-ESPRIT algorithms. The statistics in Table 1 show that the running time of the DML-ESPRIT algorithm is far less than that of the other three algorithms; its average time, about 34 ms, is only about 1/26 of that of the MUSIC algorithm.

Table 1 Computation time comparison

Algorithm    AP-DML     AP-SSF     MUSIC     DML-ESPRIT
Time (ms)    1224.76    1195.52    870.13    33.62


5 Conclusion

This paper has proposed the DML-ESPRIT algorithm. Based on the spatial characteristics of the double L array, it transforms the azimuth and elevation estimation problem into a space cone angle estimation problem. Using maximum likelihood estimation theory, the mathematical model for joint estimation of the source space cone angles and powers is derived, realizing the joint estimation of DOA and power. Through the judicious use of the TLS-ESPRIT algorithm, fast estimation of the parameters is achieved.

References

1. Mashud, H., Kaushik, M.: Direction-of-arrival estimation using a mixed L2,0 norm approximation. IEEE Trans. Signal Process. 58(9), 4646–4655 (2010)
2. Chen, F.J., Sam, K., Chaiwah, K.: ESPRIT-like two-dimensional DOA estimation for coherent signals. IEEE Trans. Aerosp. Electron. Syst. 46(3), 1477–1484 (2010)
3. Liang, J.L., Liu, D.: Joint elevation and azimuth direction finding using L-shaped array. IEEE Trans. Antennas Propag. 58(6), 2136–2141 (2010)
4. Shannon, D.B., Tszping, C., Karl, G.: Robust DOA estimation: the reiterative super resolution (RISR) algorithm. IEEE Trans. Aerosp. Electron. Syst. 47(1), 332–346 (2011)
5. Gang, D., Yong-liang, W., Yong-shun, Z., et al.: Estimation of both the frequency and 2-D arrival angles of coherent signals. Syst. Eng. Electron. 6(30), 1050–1053 (2008)
6. Cong-feng, L., Gui-sheng, L.: Novel method of narrow band signal frequency and 2D angle estimation for wide-band receiver. Acta Electronica Sinica 3(27), 523–528 (2009)
7. Stoica, P., Nehorai, A.: MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 37(5), 720–740 (1989)
8. Stoica, P., Nehorai, A.: MUSIC, maximum likelihood, and Cramer-Rao bound: further results and comparisons. IEEE Trans. Acoust. Speech Signal Process. 38(12), 2140–2150 (1990)
9. Ziskind, I., Wax, M.: Maximum likelihood localization of multiple sources by alternating projection. IEEE Trans. Acoust. Speech Signal Process. 36(10), 1553–1559 (1988)
10. Roy, R., Kailath, T.: ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989)
11. Wong, K.M., Reilly, J.P., Wu, Q., et al.: Estimation of the directions of arrival of signals in unknown correlated noise, Part I: The MAP approach and its implementation. IEEE Trans. Signal Process. 40(8), 2007–2017 (1992)
12. Wong, K.M., Reilly, J.P., Wu, Q., et al.: Estimation of the directions of arrival of signals in unknown correlated noise, Part II: Asymptotic behavior and performance of the MAP approach. IEEE Trans. Signal Process. 40(8), 2018–2028 (1992)
13. Yang, L., Zhao, Y.-J., Wang, Z.-G.: Polarimetric interferometric SAR data analysis based on TLS-ESPRIT joint estimation of phase and power. Acta Geodaetica Cartogr. Sin. 36(2), 163–168 (2007)

Comparison Between Operational Capability of PDT and Tetra Technologies: A Summary

Pengfei Sun, Guanyuan Feng, Kai Guan and Yicheng Zhang

Abstract The Science and Technology Informatization Bureau of the Ministry of Public Security released PDT, the new generation of police digital trunking standard, in April 2010 in Beijing. Starting from a brief description of the PDT and Tetra standards, this paper comparatively analyzes specific performance indicators of the two standards in terms of voice quality, voice services, data services, and so on. The analysis shows that the advantages of PDT in operational capability are more obvious and that PDT is more in line with the development needs of the Chinese police trunking communication system.

Keywords PDT · Tetra standards · Performance comparison

1 Introduction

Policy support for network construction based on the new Chinese Police Digital Trunking (PDT) standard has gradually expanded. The Ministry of Public Security recently issued a document abolishing 11 public safety industry standards, including GA/T 444-2003, "Public Security Digital Trunking Mobile Communication System Switchboard Technical Specifications." The TETRA trunking communication system, which belongs to the European and American family of technology standards, will gradually be replaced by the new police digital trunking standard (PDT).

P. Sun (&) · G. Feng · K. Guan · Y. Zhang
Harbin Institute of Technology Communication Research Center, Harbin, Heilongjiang Province 150001, China
e-mail: [email protected]

P. Sun
Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security, Harbin, China


In recent years, the relevant departments have been committed to promoting the PDT standard, which carries independent intellectual property rights, and the concrete implementation of this policy will open up a huge market for PDT private networks [1].

China currently has three digital trunking standards: SJ/T 11228-2000 "Digital Trunking Mobile Communication System," TETRA, and PDT. Owing to the lack of an independent technical standard at the time, the relevant departments selected the European TETRA standard and released GA/T 444-2003. In recent years, with information security in mind, the Ministry of Public Security has been developing a private-network digital trunking standard based on its own patented technology. On April 20, 2010, the Ministry's Information and Communication Office released the new Chinese police digital trunking standard, named PDT, in Beijing. After five years of development, experimentation, and progressive commercialization, the GA standard for Police Digital Trunking (PDT) and its products have been officially released [2]. PDT products have begun large-scale commercial deployment and have been put into operation in some parts of China, which shows that PDT technology has entered its mature period.

Before PDT, domestic digital trunking adopted the Tetra standard of the European ETSI. Large-scale Tetra systems were built in Beijing, Shanghai, Guangdong, Shandong, and other provinces, and they have played an important role in securing communications for large events and sports events. Compared with analog trunking, digital trunking shows significant advantages in voice quality and integrated data applications. Digital trunking changes the traditional manner of voice dispatch, so that the digital interphone becomes an important part of daily work.

Tetra is designed for regions with small land area and high population density. Advanced networking technology, encryption technology, and high spectral efficiency are its advantages. Moreover, its protocol architecture is close to public-network 2G technology, so it can easily inherit traditional 2G mobile telecommunications switching equipment. The birth of the Tetra standard in Europe thus rests on deep technical, equipment, and regional environmental factors. Its follow-up technical evolution reached data rates of up to 500 kbps in Tetra Release 2 (TEDS), and Tetra Release 3, which offers high-speed data transmission and combines with LTE technology, is currently under discussion.

The design of PDT fully draws on the technical advantages of Tetra. In its network design, more advanced IP soft-switch technology and the 3GPP IMS architecture make it easier to interconnect with other heterogeneous networks and with the public network. All of this demonstrates an advanced design concept: a smooth transition from analog to digital and the integration of narrowband with broadband and of private networks with public networks. This paper analyzes and compares the Tetra and PDT standards in terms of operational capability.


2 Difference Between Tetra and PDT Voice Quality

Because Tetra and PDT differ in channel transmission performance, the compression ratios of their vocoders differ, and there are also differences in sound characteristics. The PDT vocoder outputs compressed voice at 2.4 kbps; after channel coding, 3.6 kbps of voice data is generated, as shown in Fig. 1. TETRA uses ACELP coding, and its vocoder outputs compressed voice at 4.567 kbps; after channel coding, 7.2 kbps of voice data is generated, as shown in Fig. 2. Judging from the MOS scores, the compression ratio of the Tetra vocoder is lower (128/4.567) than that of PDT (128/2.45), so Tetra restores slightly higher voice quality after compression and decompression, which is reflected in slightly higher MOS scores. The advantage of the PDT vocoder is its stronger noise suppression, which gives it a distinct advantage in communication quality in noisy environments (Table 1). The Tetra vocoder performs no noise reduction, so compared with the PDT vocoder its voice clarity is slightly worse when communicating in strongly noisy environments; on the other hand, its sound is closer to that of analog systems (voice mixed with noise), so users accustomed to analog communication find it easier to accept.

Fig. 1 Schematic of PDT vocoder encoder

Fig. 2 Schematic of TETRA with ACELP coding

Table 1 MOS scores

Vocoder                Scores
PDT                    3.356
AMBE++                 3.348
ACELP                  3.474
GSM (as comparison)    3.7

PDT's noise suppression under a variety of typical background noises is relatively good, but some users accustomed to analog communication may find the sound too clean and blunt, and it takes them some time to adapt. Therefore, under background noise the voice quality of PDT is better than that of Tetra.

Tetra uses ACELP coding with a relatively high bit rate of 4.567 kbps, and its coding principle is insensitive to the low-frequency band of the speech spectrum. PDT uses a low coding rate of only 2.45 kbps, and its coding theory requires the spectrum above 200 Hz. When interconnecting with analog systems, the narrowband filter removes the spectrum below 300 Hz, which degrades the reconstruction capability after compression and decompression, so reduced sound quality and a lower recognition rate appear. Actual tests show that there is some degradation of sound quality and reconstruction performance, but little effect on intelligibility. The narrowband performance of Tetra is thus slightly better than that of PDT.

The FEC error-correction capability of Tetra speech coding is stronger than that of PDT, mainly because the Tetra receiver sensitivity is inferior to PDT's; Tetra therefore needs stronger coding with a lower code rate in exchange for greater error resilience, in order to reduce the effect of complex environments and weak fields on voice communication.

Tetra speech encryption uses frame stealing, commonly stealing half a frame every 500 ms, which is a lossy form of encryption, whereas PDT uses idle bits to transport the encryption-related information, which does not affect speech quality. Consequently, a clear decline in voice quality can be felt after Tetra voice encryption, while no similar degradation occurs in PDT.

3 Differences Between Tetra and PDT Voice Services

Voice service is the most important service of a trunking system, and voice group call is the most frequently used service function. Both Tetra and PDT support rich, refined group-call requirements: participating groups and background groups address the range of participating users; regional restrictions define the geographical area in which calls are set up; priority and emergency calls protect important calls when the channel is busy; handoff keeps calls from dropping when users move across base stations; and mobility management allocates channels only at base stations where group users are present, saving channel resources.


Table 2 Comparison of handoff process during calls Steps

Tetra handoff process during calls

PDT handoff process during calls

1

MS background scan to find available neighboring base Use uplink signaling to inform the neighboring base station which is hoped to switch to System allocates channel for terminal at the neighboring base station (if it is already allocated, then do not need to allocate repeatedly) Base station uses downlink handoff signaling to notify the terminal to switch to the neighboring base station

MS background scan to find available neighboring base Use uplink signaling to inform the neighboring base station which is hoped to switch to System allocates channel for terminal at the neighboring base station (if it is already allocated, then do not need to allocate repeatedly) Base station uses downlink broadcast signaling to notify all terminals of available traffic channels of the neighboring base station Terminal compares the current base station according to the broadcast signaling that system sends, and choose to switch to the traffic channel of neighboring base station to continue the call

2

3

4

5

Terminal switch to the traffic channels of neighboring base station according to switching instructions to continue the call

there is a difference in coverage radius of base stations between Tetra and PDT, there are also some differences in processing refinement needs, especially in the voice group call, which are mainly reflected in handoff during the call and missing probability of group call. Table 2 is the typical steps of group call handoff during the noticed type calls, where you can clearly see the two different methods [3]. In the first three steps, there is no difference between Tetra and PDT, but methods of the following two steps are different. Tetra uses a system-controlled terminal switching mode, but PDT uses a terminal discretionary switching mode according to system broadcasting messages. The method of Tetra is more suitable for a single call handoff, which has relatively small users on the channel, so that normal calls are not affected by the handoff signaling interaction; the method of PDT is more suitable for group calls, which has many users on the channel, so that it is more efficient that an application for a group benefit. Tetra handoff is controlled by the system where terminals switch base station one by one, and PDT is that the terminals make independent handoff using system broadcasting message, which supports an unlimited number of simultaneous handoff, with obvious advantages in the group call handoff involving large number of users. Although Tetra and PDT system both have improved mobility management, including location management of group user, which can ensure that channel allocations are only on the base stations with group users. However, due to the distribution of group call users is random, the numbers of users at each base station are difficult to be averaged, and channel configuration of each base station is usually limited. In most cases, a group call needs to allocate channels simultaneously at several base stations, then the number of base stations within the coverage area has


Fig. 3 Group call and missed call in a multi-station trunking system

The number of base stations within the coverage area therefore affects the probability of missing a group call: the fewer the base stations, the lower the missing-call probability [4]. For example, in the extreme case shown in Fig. 3, one PDT base station covers the same area as nine Tetra base stations; the PDT base station then has no missed-call problem, while Tetra base stations No. 2 and No. 7 happen to miss the group call because all their channels are busy. In a multi-station trunking system, missed group calls due to busy channels cannot be avoided, and call priority is usually used to guarantee resources for important calls; its side effect is that high-priority calls occupy the channel resources of lower-priority calls and interrupt them. For the same coverage area, the more base stations there are, the higher the probability of missing a group call, and vice versa. PDT is therefore superior to Tetra with respect to the multi-station missed-group-call probability.
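The qualitative argument can be made concrete with a deliberately simplified model of our own (not taken from the paper): if every base station involved in a group call is independently blocked with probability p, a call that spans k stations is missed somewhere with probability 1 − (1 − p)^k, so covering the same area with fewer, larger cells lowers the chance of a partial miss.

```python
# simplified illustration: chance that a group call spanning k base stations
# is missed at one or more of them, if each station is all-busy with probability p
def miss_probability(p_busy, k_stations):
    return 1 - (1 - p_busy) ** k_stations

p = 0.05                           # assumed per-station blocking probability
print(miss_probability(p, 1))      # one large PDT cell:  0.05
print(miss_probability(p, 9))      # nine smaller cells:  ~0.37
```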

4 Differences of Data Services Between PDT and Tetra

Both Tetra and PDT are capable of low-speed data transmission, but because the basic channel conditions differ, their data transmission capabilities also differ: Tetra has the stronger data transfer capability, as shown in Table 3. The table also shows that the data-carrying capability per unit of spectrum of Tetra is double that of PDT.

Table 3 Comparison of data transmission capability

Items                                                          Tetra    PDT
Single-slot carrying capability (kbps)                         7.2      3.6
Maximum number of bundled timeslots                            4        2
Maximum data carrying capability of a single carrier (kbps)    28.8     7.2


Table 4 Number of GPS uploads per unit time on control channels

System    GPS uploads per unit time on control channels
Tetra     N × 150/min (N ≤ 4 is the number of control channels)
PDT       About 10/min

There are two types of GPS data uploading: active reporting by the terminal and polling (pulling) by the system. Active reporting uses random access, so collisions can occur over the air interface; when many users report GPS data the collisions lower the success rate. In the polled mode the system schedules the order, so collisions do not occur, and the amount of GPS data per unit time is larger than with random access [5]. Both the Tetra and PDT standards define a mechanism for uploading GPS data with short messages on the control channel; however, because the carrying capabilities of the two systems' control channels differ, the per-unit-time upload capability of Tetra is stronger than that of PDT. Based on the air-interface occupancy time of each GPS message and a 30 % random-access success rate, the calculated results are shown in Table 4.

Because a large amount of GPS data on the control channel can cause congestion and disturb the system's normal call access control, manufacturers also design GPS upload methods based on dedicated data channels. Since these are not supported by the standards, they can only be offered as proprietary features. Table 5 compares the capability of a typical Tetra configuration with several dedicated data channels and active GPS upload against that of PDT with several data channels and system polling; the comparison shows that the GPS upload capability of PDT dedicated data channels is better than Tetra's.

Both Tetra and PDT provide status-message transmission, the most air-interface-efficient mechanism, typically used when terminal states (e.g., busy, idle) are reported frequently and for other delay-critical applications. The status message of Tetra is longer than that of PDT, but their transmission times are basically the same, as shown in Table 6.
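The entries of Tables 4 and 5 follow from a throughput calculation of the form sketched below. The per-message air-interface times used here are assumed values chosen only so that the numbers line up with the tables; the actual occupancy times are not stated in the text.

```python
# per-channel GPS upload rate = messages per minute x random-access success rate
def uploads_per_minute(msg_air_time_s, success_rate=1.0, n_channels=1):
    return n_channels * (60.0 / msg_air_time_s) * success_rate

# random-access reporting on one control channel, 30 % success (assumed 0.12 s per message)
print(uploads_per_minute(0.12, success_rate=0.3))   # 150.0, cf. Table 4 (Tetra, N = 1)
# system-polled reporting on one dedicated data channel (assumed 0.06 s per message)
print(uploads_per_minute(0.06))                     # 1000.0, cf. Table 5 (PDT, N = 1)
```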

Table 5 Number of GPS uploads per unit time on dedicated data channels

System    GPS uploads per unit time on data channels
Tetra     About N × 300/min (N is the number of data channels)
PDT       About N × 1000/min (N is the number of data channels)

Table 6 Comparison of status message transmission capability

Indicators               Tetra      PDT
Length (bits)            16         7
Total number of states   65536      128
Transmission time        56.7 ms    60 ms


Table 7 Comparison of common application functions

Application functions        Technology used in Tetra    Technology used in PDT
Small data inquiries         Packet data/SMS             SMS
Picture transfer             Packet data                 Packet data
Long text messages           Short message splicing      Short message splicing/packet data
Over-the-air provisioning    Undefined in standards      Packet data
Telemetry                    Packet data/SMS             Packet data/SMS

Both Tetra and PDT support packet data and the various applications built on it. Since the single-data-channel carrying capability of Tetra is twice that of PDT (7.2 k vs. 3.6 k), Tetra can support richer small-data applications such as WAP inquiries. PDT packet data can also support WAP inquiries, but because the speed is low the user experience is unsatisfactory, so a simplified short-message inquiry method is used more often. In practice, application functions matched to the data transfer capabilities of the two systems are usually designed so as to exploit their respective strengths. Common application functions are shown in Table 7.

5 Conclusion

Both Tetra and PDT have their own strengths; the two standards are different technical solutions for different application scenarios. No technical standard is perfect, so each shows strengths and weaknesses in different scenarios. China has a large land area, an unevenly distributed population, and unbalanced economic conditions. Under these national conditions, the dedicated communication system of the Chinese public security is being converted from analog MPT1327 to a digital system. PDT's advantages in operational capability are more obvious and more in line with China's national conditions. The establishment of the PDT standard is a good start and will have a far-reaching influence on the development of the public security professional mobile communication system. Rising to the challenge is the obligatory choice when facing challenges. With the development and growth of the PDT standard, we hope it can provide more powerful technical support for public security work over the next 10–20 years.

Acknowledgments This paper is supported by the National Natural Science Foundation of China (61101122 and 61302074), the Major National Science and Technology Project (2012ZX03004-003), and the Municipal Exceptional Academic Leaders Foundation (2014RFXXJ002).


References

1. Zhou, Y.W.: Technological development research strategy of police PDT standard. J. Chin. People's Public Security Univ. 17, 35–39 (2011)
2. Jiang, Q.S., Liu, W.J.: Discussion about broadband evolution in PDT digital trunking system. J. Dig. Commun. World 11, 31–34 (2010)
3. Yan, Q.H., Zheng, T., Yi, J.Z.: A comparative study of the two digital trunking communication systems TETRA and PDT. J. Chin. People's Public Security Univ. 20, 87–89 (2014)
4. Jiang, Q.S., Chen, Y.: The role of PDT in the public security wireless communication "rotation". J. Police Technol. 6, 23–26 (2010)
5. Hu, X.H.: Police communications equipment crossing from "analog" to "digital": Hytera's PDT digital trunking market. J. Inform. Secur. Commun. Priv. 5, 26 (2010)

Development and Analysis of Police Digital Trunking Channel Technology of PDT

Pengfei Sun, Run Tian, Hao Xue and Ke Wan

Abstract This paper analyzes the TETRA standard, from which China's existing digital trunking systems have evolved, and introduces in detail the new requirements and characteristics of the channel technology for China's new generation of broadband wireless trunking systems. On this basis, the channel technologies of the new-generation PDT standard, including synchronization, channel modulation, and channel coding, are analyzed and simulated, and the performance of the two schemes is compared. The analysis shows that PDT, the new generation of police digital trunking standard developed and customized independently by China, is more in line with the national conditions and shows clear advantages. It will gradually replace TETRA and become the mainstream of public security digital trunking systems.

Keywords Digital trunking system · Channel technology · PDT · TETRA

1 Introduction

With the rapid development and spread of wireless access technology, mobile data services have become popular, and the demand for mobile data services from all kinds of mobile applications and mobile users is growing rapidly as well. Compared with the public network, however, the broadband evolution of trunking systems, which represent the development direction of private mobile communication networks, has been slow.

P. Sun (&) · R. Tian · H. Xue · K. Wan
Harbin Institute of Technology Communication Research Center, Harbin 150001, Heilongjiang Province, China
e-mail: [email protected]

P. Sun
Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security, Harbin, China


Amid the current global trend toward digital and broadband private trunking networks, the demand of businesses and government agencies for broadband digital trunking systems is increasingly urgent. In the field of public safety in particular, catastrophes occur worldwide and public safety incidents arise from time to time; public communication networks and technologies cannot meet the needs of emergency communication, so private communication technology has become more and more important. As the demand for public security wireless communication grows steadily, the existing analog trunking systems show disadvantages such as small system capacity, limited spectrum resources, low spectrum utilization, poor security, and single-purpose services. In addition, China is a vast country with unbalanced economic development, and some remote areas have long been unable to solve the problem of wireless coverage, while the demand for data services from public security is increasing rapidly. Solving these problems requires the support of a digital trunking system.

To this end, the Ministry of Public Security has been formulating police digital trunking standards since 2004. It first adopted the European TETRA standard [1] and built large-scale TETRA systems in Beijing, Shanghai, Guangdong, Shandong, and other provinces, which played an important role in supporting communication at major events and sporting events. Compared with analog trunking, digital trunking has far greater advantages in voice quality and integrated data applications, changing users' traditional habit of dispatching by voice only. However, because the TETRA standard borrows GSM technology from the public network and adopts a cellular system, it is technically complex and has higher construction and maintenance costs; systems from different manufacturers cannot be interconnected; the encryption mechanism and system are closed to China; and a smooth transition from analog systems is impossible. It therefore cannot meet the actual needs of the public security departments and limits the widespread application of broadband digital trunking systems. China thus urgently needs a new digital trunking communication standard that conforms to its national conditions and carries full intellectual property rights. Judging from the development law of mobile communication technologies, now is the right moment to start the next-generation digital trunking replacement.

In April 2010, the Science and Technology Information Bureau of the Ministry of Public Security released in Beijing a new generation of police digital trunking standard, the Professional Digital Trunking System (PDT) [2]. The PDT digital trunking standard is the next-generation private-network digital trunking standard, formulated by the Science and Technology Information Bureau with the participation of national professional mobile communication manufacturers and the support of the Radio Management Bureau of the Ministry of


Information Industry. The standard embodies the public security organs' expectations for the next generation of private communication products as well as their support for, and a major contribution to, China's private communication industry. The PDT police digital trunking standard is designed to meet the needs of users at all levels and of actual networks from the national level down to the county level; in natural disasters, social security incidents, and other emergencies it can quickly access the existing command platforms of the public security system, complete networking and dispatch rapidly, achieve efficient data and voice communications, and satisfy the need for a high degree of security and confidentiality. Compared with TETRA, P25, iDEN, and other mature digital trunking systems, the new standard fully considers the uneven development across China's regions and incorporates technologies with independent intellectual property rights, such as voice coding, encryption, and the dialing scheme. It has advantages such as ease of implementation, low networking cost, large coverage area, good interoperability, and compatibility with the existing police conventional analog trunking systems [3]. The appearance of PDT brings more appropriate communication solutions and equipment to the domestic public security system and greatly strengthens the domestic professional communications industry.

2 PDT Standard Techniques and Analysis

The benefits that digital technology brings to mobile communication systems are obvious: significantly improved voice quality, higher spectrum efficiency, enhanced security, and a certain degree of data communication capability. Digital trunking has clear advantages over analog trunking in voice quality and integrated data applications [4]. The Ministry of Public Security has therefore been formulating police digital trunking standards since 2004. It first recommended the TETRA digital trunking standard as one of the public security industry digital trunking standards; however, owing to the inherent problems and defects of the TETRA standard, it was promoted slowly in China and was deployed only in Beijing, Shanghai, and other developed eastern cities. The main problems of TETRA are as follows [5]:

(1) Cellular system, a large number of base stations, and high network construction and maintenance costs. The TETRA standard adopts a cellular system, so the coverage of a single base station is smaller than that of a traditional analog system; one analog base station must be replaced by three digital base stations, which increases both the networking cost and the system maintenance workload.


(2) Systems from different manufacturers cannot be interconnected, so a nationwide network cannot be built. Given China's conditions, a nationwide network cannot be completed by a single company, but the TETRA Inter-System Interface (ISI) cannot meet the interoperability requirements, so TETRA systems from different manufacturers still cannot be interconnected.

(3) A closed security standard that cannot use independent encryption technology. At present the major suppliers of TETRA systems are foreign companies, and for political reasons they do not open the encryption interface to China, so encryption equipment based on China's own algorithms cannot be used. Without encryption, communication security cannot be guaranteed; even though it is a public security wireless private network, it is still not allowed to access the police information network for data communication because of the security risk.

(4) No smooth transition from analog MPT1327 trunking. Although TETRA systems achieve interworking with analog systems, they cannot coexist smoothly with the existing analog systems: the dialing scheme and operation mode are incompatible with the analog systems, and a smooth upgrade from them is impossible.

(5) Too many patents, so the technology and products are owned by foreign parties and are hard to industrialize domestically. The TETRA standard is technically complex and difficult to implement, and its standards and patents are all owned by foreign companies, so intellectual property issues and the development of the industry chain are constrained.

To address these limitations and problems in the practical application of the TETRA digital trunking standard in China, the Science and Technology Information Bureau of the Ministry of Public Security released PDT, a new generation of proprietary police digital trunking standard. The design of PDT draws on the technical advantages of TETRA and P25 and adopts more advanced IP soft-switch technology and the 3GPP IMS architecture, making it easier to interconnect with other heterogeneous networks and even with the public network. It embodies an advanced design philosophy of a smooth transition from analog to digital and the fusion of narrowband with broadband and of private networks with public networks [6].

As shown in Table 1, the main technical characteristics of the PDT digital trunking standard are as follows [7]: a logical channel with two-slot time division multiple access is realized in a 12.5 kHz channel, which is equivalent to the frequency utilization of a 6.25 kHz channel bandwidth and increases frequency utilization by a factor of 4 compared with an analog trunking system. The modulation is 4FSK and the data transfer rate is 9.6 kbps. The constant-envelope modulation allows the RF module to use a non-linear power amplifier, which not only reduces implementation difficulty but also keeps compatibility with analog systems.


Table 1 PDT standard technical indicators

Technical indicator                           PDT
Access mode                                   TDMA
Carrier width                                 12.5 kHz
Carrier rate                                  9.6 kbps
Channels per carrier                          2
Modulation mode                               4-FSK
Voice coding                                  AMBE+2TM/SELP
Voice coding rate                             2.4 kbps
Maximum data rate                             4.8 kbps
Coverage rate compared with analog system     Over 90 %

Compared with the TETRA standard, the PDT standard has the following advantages:

(1) TETRA adopts a cellular system, so the coverage of a single base station is smaller than that of a traditional analog system and the networking cost is higher; PDT is based on a large-zone (regional) system, so fewer base stations are needed for the same coverage area and the cost is lower.

(2) The major suppliers of TETRA systems are foreign manufacturers; for political reasons and out of consideration for the communication and information security of the public security organs, China should not and cannot obtain access to the encryption interface. The PDT standard, by contrast, has China's own encryption technology and supports end-to-end encryption that meets the network security access requirements of the Ministry of Public Security.

(3) TETRA systems have poor mutual compatibility and systems from different vendors cannot be interconnected; PDT uses the unified standard of the Ministry of Public Security, which facilitates interconnection between systems from different vendors and gives good compatibility.

(4) The standards and patents of TETRA systems are owned by foreign parties, so their use is subject to more restrictions and intellectual property issues; the PDT standard is led by the Science and Technology Information Bureau of the Ministry of Public Security and jointly developed by the major domestic professional trunking communication equipment manufacturers, with full intellectual property rights.

Having briefly reviewed what the PDT standard achieves and its advantages over the TETRA standard, the following sections study and analyze its channel technology in more detail (Table 2).


Table 2 Comparison of the TETRA and PDT standards

Comparison indicator             TETRA                                PDT
Modulation                       π/4-DQPSK (non-constant envelope)    4-FSK (constant envelope)
Intercarrier distance            25 kHz                               12.5 kHz
Multi-access mode                TDMA                                 TDMA
Time slots per carrier           4                                    2
Encryption                       Difficult                            Self-encryption (easy)
Manufacturer interconnect        No identical standard                Identical standard of the MPS
Equivalent bandwidth             6.25 kHz                             6.25 kHz (highest)
Symbol rate                      18 kBaud                             4.8 kBaud
Guard interval                   0.4 ms                               2.5 ms (bigger)
Operating mode                   TMO/DMO                              TMO/RMO/DMO, supports base station relaying
Car-set transmit power           3 W                                  25 W
Handset transmit power           1 W                                  4 W
Frequency pool technology        No                                   Yes
Common frequency broadcasting    No                                   Yes
Coverage mode                    Typical cellular system              Regional (large-zone) system
Analog system transition         No                                   Yes
Networking cost                  High                                 Low
Wireless link support            No                                   Yes

3 PDT Standard Channel Technology Research

At present, the core technologies of PDT include coding, modulation, synchronization, and related techniques, all of which belong to the category of channel technology. In-depth study and discussion of the channel technology of the PDT standard is therefore significant for its development and improvement.

PDT and TETRA are both narrowband communication systems, PDT using a 12.5 kHz channel and TETRA a 25 kHz channel. The 350 MHz police band is planned in 12.5 kHz rasters, and all frequency numbers (P94 and P95) follow the 12.5 kHz plan. The channels of both PDT and TETRA are organized in time slots. Besides the traffic payload, the data frame in each time slot must also carry the synchronization word, link control, and training sequences, so the actual traffic-carrying capacity is lower than the physical capacity of the slot. The physical-layer rate of each PDT time slot is 4.8 kbps; the synchronization overhead occupies 1.2 kbps, leaving an actual information-carrying capacity of 3.6 kbps. TETRA uses phase modulation, which needs training sequences to assist synchronization in addition to the frame


synchronization. Thus, although the physical rate of each time slot is 9 kbps, the actual information-carrying capacity is 7.2 kbps. The higher information spectrum efficiency of TETRA therefore has to be supported by a more efficient modulation mode: π/4-DQPSK can carry a higher information rate than PDT in the same equivalent 6.25 kHz channel, but it requires a higher channel signal-to-noise ratio, which in practice is obtained by shortening the communication distance; this greatly restricts the area coverage of a TETRA system.

In a Rayleigh fading channel, Doppler shift causes carrier frequency drift and hence bit errors. Besides frequency-selective fading, the multipath effect also causes phase jitter of the received symbols because of the different propagation paths; after signals with different phases from different paths are superimposed, the eye diagram becomes blurred and bit errors occur. The simulated error-resistance characteristics of the two standards' modulation methods are as follows. The effect of Doppler frequency shift on PDT and TETRA is shown in Fig. 1: at a bit error rate of 1 %, PDT is hardly affected by a 100 Hz frequency drift, while the SNR requirement of TETRA increases by about 4 dB because of the 100 Hz drift; at the same SNR, the bit error rate of TETRA deteriorates by about a factor of 10 relative to PDT under a 100 Hz frequency drift. The simulation results in Fig. 2 show that, to keep a 1 % bit error rate, PDT needs about 2 dB of extra SNR while TETRA requires about 6 dB; at the SNR for which a 1/8 symbol deviation causes a 1 % bit error rate in PDT, the bit error rate of TETRA is about 7 %, clearly higher than that of PDT. Similarly, the simulation results in Fig. 3 show that with 15 μs of multipath jitter the bit error rate of TETRA cannot be reduced below 1 %, whereas PDT still operates within its working range. Since 15 μs corresponds to a radio-wave propagation distance of 4.5 km, this shows that the TETRA modulation technique is not suitable for common-frequency (simulcast) broadcasting.

Fig. 1 BER contrast in 100 Hz Doppler frequency shift


Fig. 2 BER contrast in 1/8 element phase jitter

Fig. 3 BER contrast in 15 μs multipath jitter

Frame synchronization is the process of finding the start position of information in the serial bit stream, using a synchronization word with high autocorrelation and low cross-correlation and a correlation algorithm. Synchronization performance usually depends on the word length, the correlation properties of the synchronization word, and the choice of the false-alarm/missed-detection threshold. A longer synchronization word has better anti-interference ability, while a shorter one has a greater probability of "escaping" a transient deep fade. The time width of the PDT synchronization word is 5 ms, whereas TETRA's is 1.06 ms, only about 1/5 of PDT's; therefore, in complex urban multipath environments, the probability that synchronization is affected by frequency-selective fading is lower for TETRA than for PDT (Fig. 4).
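Frame synchronization of the kind described here amounts to sliding the known synchronization word over the received symbol stream and declaring sync where the normalized correlation exceeds a threshold. The sketch below is a generic illustration with an assumed random ±1 word and threshold, not the actual PDT or TETRA synchronization word.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed 24-symbol +/-1 synchronization word (illustrative only)
sync_word = rng.choice([-1.0, 1.0], size=24)
L = len(sync_word)

# noisy +/-1 symbol stream with the sync word embedded at a known position
offset = 40
stream = rng.choice([-1.0, 1.0], size=300)
stream[offset:offset + L] = sync_word
stream = stream + 0.8 * rng.standard_normal(stream.size)   # additive channel noise

# sliding correlation against the known word, normalized to 1 at a perfect noiseless match
corr = np.correlate(stream, sync_word, mode="valid") / L

threshold = 0.7                              # trades false-alarm against miss probability
print(int(np.argmax(corr)))                  # expected: 40, the embedded position
print(np.flatnonzero(corr > threshold))      # positions declared as frame sync
```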


Fig. 4 Synchronous performance comparison between PDT and TETRA

Table 3 SNR requirements of TETRA and PDT

Protocol standard    SNR at 10 % frame loss rate    SNR at 1 % frame loss rate
TETRA                16 dB                          23 dB
PDT                  8 dB                           17 dB

Taking into account the effects of synchronization word length, time width, and modulation mode, the comprehensive simulation results are shown in Fig. 4. They show that, for typical acceptable frame loss rates, PDT and TETRA place different requirements on the channel signal-to-noise ratio (SNR), as listed in Table 3. In the commonly used frame loss rate range of 1–10 %, TETRA needs at least 6–8 dB more channel SNR than PDT to achieve the same frame loss rate, and at the channel SNR for which TETRA has a 1 % frame loss rate, the frame loss rate of PDT is below 0.2 %. The synchronization performance of PDT is therefore significantly superior to that of TETRA at low SNR. A large number of random trials covering both modulation mode and synchronization word length likewise show that PDT retains an overall synchronization-performance advantage of about 6–8 dB over TETRA.

Channel coding, often called error-correcting coding, protects the useful information by adding a certain amount of redundancy before transmission; it is an effective way to correct errors that occur during transmission and to detect those that cannot be corrected. The main control signaling of PDT uses the BPTC (96, 196) code, an interleaved Hamming code, with a coding efficiency of 0.49. The control signaling of TETRA mainly uses rate-2/3 RCPC coding, with a coding efficiency of 0.67. The lower coding efficiency of PDT signaling gives it the stronger error-correction capability.


Fig. 5 Channel coding performance contrast of PDT and TETRA

Table 4 Channel coding of TETRA and PDT

Protocol standard    Signaling types     Signaling structure
PDT signaling        BPTC                BPTC (96,196)
TETRA signaling      SCH/HU              RCPC (108,168)
                     STCH, BNCH          RCPC (140,216)
                     SCH/HD, SCH/F       RCPC (284,432)

The error resistance of the PDT BPTC (96,196) code and the TETRA rate-2/3 RCPC code was simulated over a Rayleigh channel model. The results are shown in Fig. 5, and the specific channel-coding parameters of each curve are listed in Table 4. The simulation results show that at a bit error rate of 1 %, TETRA requires about 4 dB more SNR than PDT, and at the channel SNR for which PDT has a 1 % bit error rate, the BER of TETRA is about 4–5 %. The channel coding of PDT therefore has the better error-correction capability.

5 Conclusion

The simulations and analysis in this paper show that the PDT standard is superior to the existing TETRA standard in synchronization performance, modulation, and channel coding technology. No technical standard is perfect: the two standards chose different technical solutions for different application scenarios, showing their advantages where they fit and weaknesses where they do not. In China, however, the land area is large, the population is unevenly distributed, and economic conditions are unbalanced. Under such national conditions,


China's public security dedicated communication system is converting from analog MPT1327 to a digital system. The advantages of PDT in nationwide networking, smooth transition, communication security, and network cost are obvious, and it is more in line with China's national conditions. We should therefore strengthen research on PDT, focus on developing and improving the PDT standard, and look forward to its development, improvement, and growth, so that it can support public security work in the future with more powerful technology and push China's public security digital trunking communications up a new step.

Acknowledgments This paper is supported by the National Natural Science Foundation of China (61101122 and 61302074), the Major National Science and Technology Project (2012ZX03004-003), and the Municipal Exceptional Academic Leaders Foundation (2014RFXXJ002).

References

1. Jiang, Q.S., Chen, Y.: The role of PDT in police wireless communication "analog to digital". Police Technol. 6 (2010)
2. PDT Standard Working Group: PDT Standard Basic Technology Requirement V1.0 (2010)
3. GA/T 1056-2013: Police digital trunking (PDT) communication system. Basic Technology Specification
4. Zhou, Y.W.: The research of police PDT standard technology development strategy. J. People's Public Security University (Natural Science Edition) 17(1) (2011)
5. Jiang, Q.S., Liu, W.J.: The discussion of the evolution of broadband technology of PDT digital trunking system. Digital Commun. World 11, 31–34 (2011)
6. Li, J.L.: Discussion of development direction of digital trunking. Commun. Today (2006)
7. Feng, R., Yang, N.B.: Daqing: 350 MHz PDT digital trunking system increases the level of police actual combat information. The People's Public Security Newspaper (2013)

MR-LSH: An Efficient Sparsification Algorithm Based on Parallel Computing

Jianxi Peng and Zhiyuan Liu

Abstract To address the fact that graph clustering analysis in artificial intelligence is poorly adapted to increasingly complex distributed cluster environments, this paper proposes MR-LSH, an efficient sparsification algorithm based on parallel computing. The Minhash algorithm is analyzed and improved within the MapReduce framework, which makes graph clustering efficient in increasingly complex distributed cluster environments. Simulations demonstrate the feasibility and high efficiency of the algorithm in quickly sparsifying graph clustering data.







Keywords Artificial intelligence · Data mining · MapReduce · Graph clustering · Minhash · MR-LSH





1 Introduction

Complicated interactive networked systems, such as social networks, communication networks, and transportation networks, can be modeled as graphs [1]. In such graph models, each node represents an entity and each edge represents an association between entities. A social network, for example, is an undirected graph in which each node represents a social group or individual and each edge represents an association between them (e.g., colleagues, friends) [2]. With the development of network and information technology, and especially of Web 3.0 systems (e.g., Sina Weibo, WeChat), the scale of graph data to be processed has grown rapidly.

J. Peng (&) · Z. Liu
Foshan Polytechnic, Foshan, Guangdong 528137, China
e-mail: [email protected]

Z. Liu
e-mail: [email protected]


The very large volumes of graph data being generated bring great challenges to graph data mining and analysis [3–5]. Graph clustering is an important technique in graph data mining; its aim is to group the nodes of a graph into clusters so that the correlation between nodes (and the entities they represent) within a cluster is strong while the correlation between nodes in different clusters is weak. Graph clustering is widely applied in social network systems and in transportation planning analysis. With the arrival of very-large-scale graph data and processing mechanisms, how to increase the efficiency of graph clustering analysis and how to mine potentially useful data have become hot topics in artificial intelligence and data mining [6].

Data simplification is an efficient approach to graph clustering analysis. Its main idea is to extract and mine a local sample from the whole data set in order to improve the ratio of mining result quality to processing time. For graph clustering, the nodes and edges are first thinned out (graph sparsification), and clustering then proceeds on the sparsified graph, so the efficiency of the analysis is improved. Graph sparsification [7] is an important step that has been applied in many fields. For small-scale graph data there are many sparsification mechanisms, such as L-Spar and the k-nearest-neighbor graph, but these algorithms are not suited to large-scale graph data or to distributed cluster computing environments. As graph-based products develop, application scales grow and data volumes increase, and a single computing node can no longer meet the analysis and processing requirements, which makes such graph sparsification mechanisms impractical. MapReduce parallel computing is the trend, because it can process large-scale data and run on remote servers, and can thus meet the requirements of large-scale data analysis and processing. Building on this advantage, this paper proposes an efficient graph sparsification algorithm based on parallel computing for large-scale graph data analysis and processing.

The conventional Minhash algorithm [8] is used for fast computation of the similarity of data sets and has been applied to text and video processing [9]. Minhash is based on the Jaccard similarity: K hash functions are applied in turn to data sets A and B, giving K Minhash values each, and the similarity of A and B is estimated as the fraction of Minhash values that agree. Researchers analyzing graph clustering structure have obtained a simple heuristic rule: nodes in the same cluster tend to have similar neighbor sets, so nodes whose neighbor sets are similar are likely to lie in the same cluster. According to this heuristic, the edge between two highly correlated nodes should be kept, whereas an edge between two nodes whose neighbor sets have low similarity can be deleted. Based on these ideas, this paper studies the optimization of large-scale graph sparsification in distributed cluster computing [10]. Taking MapReduce as the theoretical basis, Minhash is parallelized to design MR-LSH, an efficient graph sparsification algorithm based on parallel computing.


MR-LSH uses the parallel MapReduce framework [11] to carry out the main tasks of graph sparsification efficiently. The work flow is: (1) estimation of the neighbor node sets, (2) MinHash signature deduction for each node, (3) hash storage of the node signatures, and (4) the sparsification computation on the graph. The performance of MR-LSH is simulated and analyzed in a Hadoop computing environment, and the results show that MR-LSH performs graph sparsification efficiently.

2 Related Work MinHash and the MapReduce parallel computing framework are reviewed in this section.

2.1

Minihash

MinHash computes similarity based on the Jaccard coefficient, a similarity measure used to compare data sets. For two data sets A and B, the Jaccard coefficient is defined in Eq. 1:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad (1)$$

The larger the Jaccard coefficient, the more similar the two data sets. However, when the data sets are large, computing the union and intersection directly becomes expensive, so the efficiency does not scale. MinHash builds on the Jaccard coefficient: a hash function h is applied to every element of A and B, and the minimum hash value of each set, MinHash(A) and MinHash(B), is recorded. The probability that the two minima coincide equals the Jaccard coefficient, as in Eq. 2:

$$\Pr\bigl[\,\mathrm{minhash}(A) = \mathrm{minhash}(B)\,\bigr] = \frac{|A \cap B|}{|A \cup B|} \qquad (2)$$

In this way the similarity computation is converted into estimating the probability that the minimum hash values of the two data sets coincide, which greatly improves the computing efficiency.
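To make this estimation principle concrete, the following minimal Python sketch (our own illustration, not code from the paper) builds K salted hash functions, takes the minimum hash value of each set per function, and estimates the Jaccard similarity as the fraction of matching minima.

```python
import hashlib

def minhash(items, k=64):
    """Return a k-value MinHash signature of a set of hashable items."""
    sig = []
    for m in range(k):
        # One salted hash function per signature slot.
        h = lambda x: int(hashlib.md5(f"{m}:{x}".encode()).hexdigest(), 16)
        sig.append(min(h(x) for x in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching MinHash values approximates |A ∩ B| / |A ∪ B|."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = {"n1", "n2", "n3", "n4"}
B = {"n2", "n3", "n4", "n5"}
print(estimated_jaccard(minhash(A), minhash(B)))  # close to 3/5 for large k
```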

2.2

Parallel Computing Theory

The distributed computing framework was first proposed by Google for analyzing and processing very large scale data sets. MapReduce is an important parallel computing architecture.


Fig. 1 The flow chart of MapReduce parallel computing theory

It allows programmers to focus on the analysis and processing of the application itself without managing the complicated, repetitive issues of distribution, which is a great advantage of the MapReduce model. The work flow is shown in Fig. 1. Any MapReduce distributed job consists of three parts: (1) Mapping: each Map function operates on a split of the data and outputs corresponding key–value pairs. (2) Combine: the key–value pairs are sorted and grouped by key. (3) Reducing: the grouped key–value pairs are traversed, and the Reduce function is invoked once per unique key to produce the output. In this paper MR-LSH is simulated with Hadoop by running a MapReduce application on the Hadoop platform; the application consists of a Mapper class, a Reducer class, a newly built JobConf driver, and an associated Combiner class (inheriting from the Reducer class).
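The map–combine–reduce flow described above can be illustrated with a small self-contained Python sketch (an illustration of the MapReduce idea only, not the Hadoop/Java code used in the paper); it groups each node of an edge list with its neighbours, which is essentially what the first step of MR-LSH does.

```python
from collections import defaultdict

def map_edges(edges):
    # Map: each undirected edge (i, j) emits two key-value pairs.
    for i, j in edges:
        yield i, j
        yield j, i

def shuffle(pairs):
    # Shuffle/combine: group values by key, as the framework would.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_neighbours(groups):
    # Reduce: emit each node together with its neighbour list.
    return {node: sorted(set(neigh)) for node, neigh in groups.items()}

edges = [("v1", "v2"), ("v1", "v3"), ("v2", "v4")]
print(reduce_neighbours(shuffle(map_edges(edges))))
# {'v1': ['v2', 'v3'], 'v2': ['v1', 'v4'], 'v3': ['v1'], 'v4': ['v2']}
```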

2.3

Existing Problem

L-Spar is a local graph sparsification algorithm built on conventional MinHash functions. Its basic principle is that, for a graph edge (i, j), whether the edge is kept or deleted is decided by the Jaccard similarity of nodes i and j. Applying Eq. 1 to the neighbor sets of i and j gives Eq. 3:

$$\mathrm{Sim}(i, j) = \frac{|\mathrm{Adj}(i) \cap \mathrm{Adj}(j)|}{|\mathrm{Adj}(i) \cup \mathrm{Adj}(j)|} \qquad (3)$$


Fig. 2 L-Spar algorithm flow chart

Here Adj(i) and Adj(j) denote the neighbor sets of nodes i and j, respectively, and the efficient MinHash estimate is used to compute Sim(i, j). The L-Spar procedure is shown in Fig. 2. L-Spar works well for single-machine, small-scale graph sparsification, but its advantages cannot be exploited in very large scale distributed computing. To address this, L-Spar is parallelized and improved in this paper on the basis of MapReduce, and an efficient sparsification algorithm based on parallel computing (MR-LSH) is proposed.

3 MR-LSH Algorithm MR-LSH consists of four steps: estimating the neighbor node sets, deducing the MinHash signature of each node, hash storage of the node signatures, and the sparsification computation on the graph.

3.1

Neighbor Node Sets Estimating

The first step is to estimate the neighbor node sets of random edge nodes within one group of Map tasks. The flow is shown in Fig. 3. A Map task obtains a group of key–value pairs; the node information in Fig. 3 is vi and vj.


Fig. 3 Neighbor node data set

After the computation, the output key–value pairs are ⟨vi, list[Ni]⟩, where list[Ni] is the neighbor node set of vi. The final output values are written to HDFS, and the Map task can be formalized accordingly.

3.2

MinHash Signature Deduction

MR-LSH derives the MinHash signatures through a combination of Map and Reduce tasks, as shown in Fig. 4. The input of the Map task comes from the output of the neighbor-set estimation step. The Map task applies k MinHash functions to the input; after hashing, the key–value pairs are ⟨vi, Hm(Ni)⟩ (m = 1, 2, …, k), where Hm(Ni) is the list of MinHash values of vi's neighbor set under the m-th hash function. This output becomes the input of the Reduce task, which produces the key–value pairs ⟨vi, sig[i][m]⟩, where sig[i][m] is the array describing the signature sequence of vi. Finally, sig[i][m] is written to HDFS. This step is formalized below.



Fig. 4 The flow chart of Minihash signature deducting

The process can be divided into the following sub-steps. (1) Map task. Input: ⟨vi, list[Ni]⟩, where the key vi is a graph node and the value list[Ni] is its neighbor node set, together with k different MinHash functions. Output: ⟨vi, list[Hm(Ni)]⟩, where list[Hm(Ni)] is the list of MinHash values of the node.

(2) Reduce task. Input: ⟨vi, list[Hm(Ni)]⟩, the MinHash value lists of the nodes. Output: ⟨vi, sig[i][m]⟩, the signature arrays of the graph nodes.
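A sequential sketch of this Map/Reduce pair is given below (our illustration; the function and variable names are hypothetical): k salted hash functions are applied to each node's neighbour list, and the per-function minima form the signature array sig[i][m].

```python
import hashlib

K = 8  # number of MinHash functions

def h(m, x):
    # m-th salted hash function applied to node identifier x.
    return int(hashlib.sha1(f"{m}|{x}".encode()).hexdigest(), 16)

def signature(neigh):
    # "Map" part: hash the neighbour list with every function;
    # "Reduce" part: keep the minimum per function as the signature entry.
    return [min(h(m, x) for x in neigh) for m in range(K)]

adjacency = {"v1": ["v2", "v4"], "v2": ["v1", "v3", "v4"]}
sig = {node: signature(neigh) for node, neigh in adjacency.items()}
```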


3.3

Hash Storage of Node Signatures

The third step decides whether each edge belongs to the sparsified graph structure or not. The decision is based on a statistical computation carried out, for arbitrary nodes, by a combination of Map and Reduce tasks; the flow is shown in Fig. 5. The input of the Map task is the key–value information obtained in the first step of MR-LSH together with the two-dimensional signature array Sig[i][m]. The intermediate values produced by the Map task become the input of the Reduce task, and a hash function is also part of the input. The final output is ⟨vi, list[SortCij]⟩. The step is described below.


Fig. 5 The flow chart of Hash storage of node signature


This process can also be divided into sub-steps. (1) Map task. Input: ⟨vi, list[Ni]⟩, where the key vi is a graph node, the value list[Ni] is its neighbor node set, and Sig[i][j] is the node signature array. Output: ⟨vi, list[S(Sig[i], Sig[j])]⟩, where list[S(Sig[i], Sig[j])] represents the signature data set of the node's edges.

(2) Reduce task. Input: ⟨vi, list[S(Sig[i], Sig[j])]⟩, the signature data sets of the node's neighbor edges. Output: ⟨vi, list[SortCij]⟩, where list[SortCij] represents the number of matched signature entries between the node and each neighbor, sorted in descending order.


3.4


Sparse Process in Graph Cluster

In the last step of MR-LSH, a Map task keeps, for each node vi, the ⌈di^e⌉ neighbor edges with the largest matched counts SortCij, where di is the degree of vi and e (less than 1) is the sparsification exponent used to control how much of the graph is retained; the smaller e is, the sparser the resulting graph. The flow is shown in Fig. 6. The Map step processes the output of the third step of MR-LSH, with di^e included in the input as well, and the whole set of retained edges is constructed and written to HDFS.

The process is as follows. Input: ⟨vi, list[SortCij]⟩, the key–value pairs output by the previous step, together with the sparsification exponent e and the node degree di appearing in di^e. Output: ⟨vi, list[top]⟩, where list[top] represents the neighbor nodes whose edges are to be retained.
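Under the stated rule — for each node vi of degree di, keep the ⌈di^e⌉ neighbour edges with the highest matched counts SortCij — the retention step can be sketched as follows (our illustration, not the authors' Hadoop code):

```python
import math

def sparsify(sort_counts, e=0.15):
    """sort_counts: {node: [(neighbour, matched_count), ...]} from step 3.
    Keep, for each node, the ceil(d_i ** e) best-matching neighbour edges."""
    kept = {}
    for node, neighbours in sort_counts.items():
        d = len(neighbours)
        keep = max(1, math.ceil(d ** e))
        ranked = sorted(neighbours, key=lambda nc: nc[1], reverse=True)
        kept[node] = [n for n, _ in ranked[:keep]]
    return kept

counts = {"v1": [("v2", 7), ("v3", 2), ("v4", 5)]}
print(sparsify(counts))  # {'v1': ['v2', 'v4']} for e = 0.15
```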


Fig. 6 The flow chart of reserving stored nodes

After the four steps of MR-LSH, each node of the graph retains ⌈di^e⌉ ≥ 1 of its edges, so a connected state of the graph model is maintained.

4 Simulation A simplified simulation is carried out to evaluate the performance and efficiency of the sparsification algorithm when processing large-scale graph clusters.

4.1

Configuration

The open parallel computing architecture MapReduce is applied on a Hadoop distributed cluster computing system composed of several servers and terminals. Six computers are used as an example, one master and five slaves; every node runs at least a 3.20 GHz Intel dual-core CPU with 1 GB of RAM. The Hadoop version is


1.0.5, the operating system is Ubuntu Linux, and the implementation language is Java. The simulation data come from a graph model of the Sina Weibo online social network. The speedup is used as the performance index of the MR-LSH algorithm; it is defined in Eq. 4, where T1 is the sparsification processing time on a single computer and Ti is the processing time on a distributed cluster with i computing nodes:

$$S_{\mathrm{speedup}} = \frac{T_1}{T_i} \qquad (4)$$

4.2

Operation and Analysis

The graph sparsification ratio parameter e varies with the sparsification mechanism, and the optimal e differs for different graph data sets and classes; in the simulation the initial value is set to e = 0.15. To demonstrate the efficiency of MR-LSH when processing large-scale data in a distributed cluster environment, the simulation first executes the MR-LSH Map and Reduce tasks and then analyzes the graph data. The results are shown in Fig. 7. They show that the computing time for large-scale data in the distributed cluster environment is reduced when the Hadoop parallel computing platform is used, so the speedup increases significantly. In line with parallel computing theory, the larger the graph data scale, the higher the achievable speedup ratio, and the relation is roughly linear; however, as communication between nodes becomes more frequent, the processing performance also degrades somewhat. When the data scale is small, the benefit of parallel sparsification is lower and the effective gain decreases accordingly; conversely, as the speedup and the size of the distributed cluster grow, the benefit of the sparsification processing increases.

Fig. 7 Simulation results: speedup versus the number of distributed cluster nodes for data scales of 1G, 2G, 4G, 16G, and 32G

The simulation shows that the new MR-LSH algorithm is well suited to graph data in very large scale distributed cluster computing environments. Because a sorting mechanism is applied in MR-LSH, the communication cost between a node and its neighbors is reduced; in other words, the larger the graph data scale, the better the performance–cost ratio of MR-LSH.

5 Conclusion For very large scale distributed cluster computing environments, the MinHash algorithm is parallelized and improved on the basis of the MapReduce architecture in this paper, and an efficient parallel graph sparsification algorithm, MR-LSH, is proposed. The simulation shows that the algorithm is feasible and performs fast sparsification of graph cluster data efficiently.

References 1. Lin, J., Schataz, M.: Design patterns for efficient graph algorithms in map reduce. MLG 22(3), 78–85 (2010) 2. Lv, Q., Josephson, W., Wang Z., et al.: Multi-probe LSH: efficient indexing for high-dimensional similarity search. In: Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB’07), 10(2), 950–961. VLDB Endowment, Vienna Austria (2007) 3. Wang, H.C., Dasdam, A., et al.: Map-Reduce-Merge: simplified relational data processing. Proceedings of ACM SIGMOD International Conference on Management of Data, 23(12): 1029–1040. ACM, New York (2007) 4. Vrba, Z., Phalvorsen, et al. Kahn process networks are a flexible alternative to mapreduce. In: Proceedings of IEEE International Conference on High Performance Computing and Communications, 9(1): 154–162. IEEE, Piscataway (2009) 5. Sandholm, T., Lai, K.: MapReduce optimization using regulated dynamic prioritization[J]. Perform. Eval. Rev. 37(1), 299–310 (2009) 6. Liu, Q., Todman, T., et al.: Combining optimizations in automated low power design. In: Proceedings of Design, Automation &Test in Europe Conference & Exhibition, 8(3): 1791– 1796. IEEE, Piscataway (2010) 7. Garcia-Pedrajas, N., de Haro-Garcia, A.: Scaling up data mining algorithms: review and taxonomy. Process Artif. Intell. 1(1), 71–87 (2012) 8. Satu Elisa Schaeffer: Scalable uniform graph sampling by local computation. SIAM J. Sci. Comput. 32(5), 2937–2963 (2010) 9. Juping, Wen, Yong, Zhong: Graphi clustering algorithm and its application in social network. Comput. Appl. Softw. 29(2), 161–178 (2012) 10. Maiya, A.S., Berger-Wolf, T.Y.: Sampling community structure, 54(12): 701–710. : WWW, Raleigh, North Carolina, USA (2010) 11. Choi, S.-S., Cha, S.-H., Tappert, A.: A survey of binary similarity and distance measures. Systemics, Cybern. Inform. 8(1), 43–48 (2010)

Research of Badminton Data Acquisition System Based on Sensors Technology Weijiao Song, Zhengang Wei and Bin Peng

Abstract Due to the rapid and complex characteristics of badminton, traditional methods of sports data acquisition have difficulty meeting the needs of the sport. This paper analyzes the current state of sensor-technology applications in badminton at home and abroad and, on this basis, proposes an improved integrated data acquisition method using a variety of micro-sensors. The method collects the racket acceleration and shuttlecock speed, the lower-arm acceleration, and the rotation angular velocities of the upper arm and shoulder when players hit the ball. Analyzing these data can provide scientific reference data for badminton teams to train athletes and reliable data support for assessing athletes' skill levels.

Keywords Sensor technology · Badminton · Data acquisition · Data analysis · Athlete skills

1 Introduction Badminton is a rapid, dynamic sport. Collecting various motion parameters from badminton play gives coaches a better understanding of, and better guidance for, their athletes, and lets athletes keep improving their technical training level. In the past, such rapid dynamic motion was captured with high-speed camera systems, which have obvious limitations. First, the measurements are more or less carried out in a laboratory environment, which does not match the actual training environment. Second, the cost is high. In addition, it takes a long time to prepare the equipment before an experiment and to process the data afterwards, which causes delays and prevents instant results.


With the development of MEMS sensors, micro-sensors such as accelerometer used for measuring the acceleration of object and gyroscope used for measuring the angular velocity of object appear. They overcome not only the above drawbacks but also ensure sufficient accuracy and are widely applied to the motion capture system. The advantages of using inertial sensor method are: (1) they are small size, light weight, and can be fixed in any part of the body without affecting the performance of the athletes; (2) low complexity, short preparation time, and they can be easily fixed in body; (3) MEMS inertial sensor technology is much cheaper than the others; (4) real-time feedback, it will be able to get an result after the experiment, even the experimental results can be obtained in real time when using wireless technology; and (5) the inertial sensor can be used to match the real environment, not only in the laboratory. Therefore it can provide more information.

2 Research at Home and Abroad Foreign researches in collection of athletes real-time parameters are the following: sensors are used to analyze the stroke and smash in badminton movement. A badminton racket placed an accelerometer to measure the smash [1]. Thomas and Wolf [2] designed a mobile measuring equipment to analyze the movement of arms and racket in badminton and to get the racket acceleration which is closely related to the round ball speed. Liu [3] analyzed the impact of arm movement on the implementation of badminton smash. Chang [4] developed a sensor system that will benefit quantitative analysis on badminton smash. Pylvanainen [5] described a system using 3D accelerometer and continuous hidden Markov model for classification to identify the position of the arm. Slyper and Hodgins demonstrated how to control the wearable accelerometer to get real-time picture. Currently domestic research mainly focus on some motion capture technique in obtaining space gesture: Shuang et al. [6] designed lower limb motion gesture recognition system based on two ADXL203 sensors. Haipeng et al. [7] made data acquisition unit using high-precision three-axis accelerometer, three-axis gyroscope, three-axis magnetometer, and solved body movement gesture using quaternion method. Zongyu et al. [8] designed a kind of measurement program for human motion parameters using wireless acceleration sensor that could measure the body movement easily and effectively. They did the human gait measurement experiment with this method.

3 Acquisition and Analysis of Badminton Sport Data Badminton sport data are collected in real time by the data acquisition system, including the racket acceleration and ball speed during training, the acceleration of the lower arm, the angular velocity of the upper arm, and the rotation angular velocity of the


shoulder. These quantitative data can directly reflect the technical level and the advantages and disadvantages of athletes. Analyzing these collected data can provide scientific reference data for coaches to train athletes, and decision support for coaches.

3.1

Racket Acceleration and Speed of the Ball

An accelerometer and an acoustic sensor were installed on the badminton racket, and the racket acceleration was obtained during smashes. A ±18 g two-axis ADXL321 accelerometer was mounted at the root of the racket head, and a BRT1615P-06 acoustic sensor was installed on the racket head to detect the hit, as shown in Fig. 1. The accelerometer alone can only measure acceleration and cannot detect the impact of the ball, while the acoustic sensor can detect the impact but cannot sense the racket speed; therefore the two sensors are combined [4]. The sensor data were collected with National Instruments data acquisition modules NI9233 and NI9201. The required frequency of the ADXL321 accelerometer and the BRT1615P-06 acoustic sensor was 5 kHz, and signals were captured at 10 kHz. On the training court, athletes performed hitting drills with a racket equipped with the accelerometer and acoustic sensor, and the experimental data were read from the instrument waveforms and processed according to the following formulas:

$$\text{Acceleration} = g \times 9.81\ \text{m/s}^2, \qquad \mathrm{RMS} = \sqrt{\frac{\sum_{i=1}^{N} X_i^2}{N}}, \qquad \mathrm{CF} = \frac{\mathrm{Peak}_{AE}}{\mathrm{RMS}},$$

Fig. 1 Installation diagram of accelerometer and acoustic sensor


Table 1 Experimental data

Racket max. forward acceleration (m/s²) | Racket max. lateral acceleration (m/s²) | Peak amplitude | Crest factor | Racket lateral velocity (m/s) | Racket forward velocity (m/s)
104 | 63 | 27  | 7.1 | 2.8  | 7.5
103 | 70 | 68  | 7.2 | 2.58 | 7.7
83  | 95 | 69  | 6   | 3.8  | 8.4
56  | 47 | 20  | 14  | 6    | 8.6
56  | 47 | 14  | 5.3 | 6.3  | 8.6
60  | 51 | 44  | 7.5 | 5    | 8.6
92  | 96 | 544 | 20  | 4.6  | 8.7
108 | 66 | 13  | 7.3 | 4.1  | 8.8
70  | 64 | 41  | 7.6 | 6    | 8.9

whereby g represents the gravitational acceleration, RMS the root-mean-square (standard deviation) of the waveform, N the number of samples, CF the crest factor, and AE the acoustic-emission pulses of the acoustic sensor. From these we obtain the forward and lateral final velocities of the racket at the moment the ball hits it, the peak amplitude, the standard deviation, and the crest factor of the waveform transmitted by the acoustic sensor, and the maximum forward and lateral accelerations of the two-dimensional horizontal racket trajectory during the smash, as listed in Table 1. To fuse these data we used a Fuzzy Inference System (FIS) and an Adaptive Neuro-Fuzzy Inference System (ANFIS) sensor-fusion algorithm to deal with the ill-posed estimation problem and obtain accurate results; the inputs were the racket forward velocity, the racket lateral velocity, the peak amplitude, and the crest factor, and the output was the resulting ball speed.
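As a small illustration of the formulas above (not code from the paper), the RMS and crest factor of a block of acoustic-emission samples can be computed as:

```python
import math

def rms(samples):
    # Root-mean-square of the acoustic-emission waveform samples.
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def crest_factor(samples):
    # Ratio of the peak amplitude to the RMS value.
    return max(abs(x) for x in samples) / rms(samples)

ae = [0.2, -0.1, 0.05, 1.9, -0.3, 0.1]   # hypothetical acoustic samples
print(round(rms(ae), 3), round(crest_factor(ae), 2))
```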

3.2

Acceleration of the Lower Arm

A small accelerometer ADXL321 accelerometer was fitted in the arms of the athlete, the size of it was roughly 15 mm × 8 mm, and the weight of it was 7g including cables. The measurement range was ±50g, and the fault rate was 5 % [2]. Sensors were strictly horizontal and vertical to alignment of the axis, in order to measure the acceleration of the lower arm under the flexion and extension and the lateral (rotational) movement. Accelerometer installation diagram is shown in Fig. 2. Raw sensor data could be transmitted by cable to a portable data logger (COMPAQ IPAQ5440); the logger was placed in a small backpack behind the player and was used to store data. Raw data could also be transmitted to an external


Fig. 2 Accelerometer installation diagram

computer via wireless signals. External computer achieved the pretreatment and visualization demonstration in order to support the feedback training. Athletes did the high smash over the net. According to the data of the instruments, we acquired longitudinal and lateral acceleration of the lower arm, and then we could get the lower arm acceleration.

3.3

Angular Velocity of the Upper Arm and Rotation Angular Velocity of the Shoulder

Two 1D ADXRS300 gyroscope sensors were placed in the athlete’s upper arm and chest; the sampling rate is 100 Hz. The size of the apparatus is 52 mm × 34 mm × 12 mm, and the quality is approximately 22g. It is very small and light to mount on the athletes, as shown in Fig. 3. The sensor placed on the upper arm was used to detect the rotation within the upper arm caused by the upper arm lift when the athletes hit the forward ball. The sensor data for rotation of the upper arm was affected due to the twisting movement of the shoulder. Therefore, the aim of setting the chest sensor was to detect the twisting movement of the shoulder and eliminate the influence of the shoulder twisting movement on the upper arm rotation. When the gyroscopes placed on the upper arm and chest were synchronized and calibrated, we could obtain the internal rotation component by the upper arm gyroscopic action subtracted the chest gyroscopic action. The data collection system of gyroscope sensors included a micro-control platform, memory for recording the session, the radio link for controlling these units from the remote, LCD screens for interacting with the device, USB interface for downloading these sessions that had been collected, and the switch buttons for controlling these data record. Badminton athletes completed the serve actions consecutively. The rotation angular velocity of the upper arm and shoulder could be measured by the gyroscope sensors, respectively. But MEMS gyroscope cannot capture the measurements in


Fig. 3 Gyroscope sensors installation diagram

angular velocity over 300°/s; the experiment chose the slow serve actions [9]. The measurement result is shown in Fig. 4. When athletes served at constant speed, we used virtual gyroscope technology MBVG based on the landmarks to measure angular velocity. MBVG used a marker arrangement method which came from Plug-in-Gait model; selected the shoulders, elbows, and the upper arms as three separate marking point; made the reflective markers; established a plane to be a reference plane to define the position and direction of MBVG; established a method based on vector using signs point trajectory; and determined the rotation relationship using the geometric methods. Errors due to the joints and skin motion influenced the accuracy of the method, and the experiment corrected it by the error curve method. When the speed of the athletes serve exceeded the actual gyroscope measurement range, MBVG method was still possible to get the overall trend and the main features of serve actions.

4 Significance and Prospects The specific quantitative values were measured by the above method. With these data, the coaches do not need to take man-style training for each athlete, or keep a close watch on the athletes’ training video. The coaches can see the training data directly. This will not only liberate the coaches, but also overcome the drawbacks of traditional training methods. The traditional training methods judge qualitatively the training of athletes according to the naked eye and experience of coaches. Using


Fig. 4 Measurement result

inertial sensors resolved these problems from qualitative evaluation of the level of athletes to using quantitative data to reflect the real level of technology and achieved true scientific guidance and teaching. In addition, we establish a huge database based on these data. Athletes can find the difference by comparison of the data with the best players, which replace the previous training that athletes rely on videos to imitate. At the same time, if we introduce the data mining technology, take a detailed analysis and statistics of these data, we can get associated potential discipline. That will provide a better and faster guide for the athletes to increase their badminton technology.

5 Conclusion Badminton data acquisition system uses a variety of inertial sensors, captured motion data during athletes trained, including the measurement of racket acceleration and speed of the ball using an acoustic sensor and a two-axis accelerometer; the acceleration of the lower arm using an accelerometer; the angular velocity of the upper arm and rotation angular velocity of the shoulder using two gyroscope sensors. Badminton data acquisition system can provide professional technical guidance and data support for Chinese badminton team.

References 1. Chang, T.K., Chan, K.Y., Spowage, A.C.: Development of a local sensor system for analysis of a badminton smash. In: Proceedings of the 3rd International Conference on Mechatronics, 2008. Kuala Lumpur:[s.n.], pp. 268–273 2. Thomas, J., Wolf, G.: A mobile measure device for the analysis of highly dynamic movement techniques. Procedia Engineering (2010) 3. Xiang, L.I.U., Wangdo, K.I.M., John, T.A.N.: An analysis of the biomechanics of arm movement during a badminton smash[D]. Nanyang Technological University and National Institute of Education, Singapore (2002)


4. Chang, T., Chan, K.: Local sensor system for badminton smash analysis. In: International Instrumentation and Measurement Technology Conference, 2009. Singapore 5. Pylvanainen, T.: Accelerometer based gesture recognition using continuous hmms. Pattern Recogn. Image Anal. 639–646 (2005) 6. Shuang, Li, Zhizeng, L., Meng, M.: Lower limb movement information obtained method based on acceleration sensor. Electr. Eng. 26(1), 5–7 (2009) 7. Haipeng, Z., Youyun, D.: Human motion capture system based on MEMS sensor. Xi’an Eng. Univ. 26(1), 82–86 (2012) 8. Chang, Y., Zhang, Z., Yao B.: Research of human motion parameter measurement based on wireless acceleration sensor. In: Tianjin Institute of Electronics 2013 Annual Conference Proceedings (2013) 9. Sheng, Z., Zhu, M.: Application of the inertial gyroscope sensors methods and virtual gyroscope sensors approach based on landmarks used in the upper arm speed measurement in the tennis serve. Tianjin Institute of Physical Education, 27(1) (2012)

The Direct Wave Purifying Based on WIFI Signal for Passive Radar Liubing Jiang, Tao Feng, Wenwu Zhang and Li Che

Abstract Using WIFI signals as illuminators of opportunity for passive radar is attracting more and more attention. Based on the IEEE 802.11 standard, the signal structure, ambiguity function, and side peaks of the WIFI signal are analyzed in detail in this paper, and the analysis shows that the ambiguity function has a good thumbtack (pin) shape. Meanwhile, in order to obtain a clean direct-wave signal, a constant modulus algorithm (CMA) is used to suppress the interference caused by multipath clutter, and an adaptive step-size method is adopted to improve the convergence speed of the algorithm, giving a good purification effect. Simulation results show that the method is effective, suppresses multipath clutter interference well, and improves detection performance.

Keywords OFDM · Passive radar · Ambiguity function (AF) · Direct wave purification

1 Introduction Compared with traditional radar systems, passive radar has many advantages. It has attracted broad interest in the radar field because of its anti-stealth, anti-reconnaissance, and anti-jamming characteristics. At present many communication signals can act as transmitters of opportunity, such as digital audio broadcasting (DAB), digital video broadcasting (DVB), FM radio, GSM, and various satellite systems [1–4]. As a widely used signal with wide coverage and easy access, the WIFI signal is commonly available, and it has been proved that it can be used as a transmitter of opportunity in the literature [5], which


had also analyzed the performance of the ambiguity functions of OFDM and DSSS signals. In [6, 7], WIFI sources were used for through-the-wall detection of moving humans. In [8], the direct wave was purified by reference-signal reconstruction, but no specific method was given. In [9–11], the CMA algorithm was used to suppress multipath clutter in the reference channel of an FM-based passive radar, and in [12] a two-dimensional CMA algorithm was used to purify the direct wave of the passive radar reference channel, but without specific signal waveforms. In this paper a new method is proposed that adopts the CMA algorithm to purify the direct wave in the reference channel of a WIFI-based passive radar. First, the ambiguity function of the OFDM-modulated WIFI signal is analyzed in detail; the theoretical analysis determines the basic resolution of the signal and the average side-lobe levels in the delay and Doppler domains, and shows that the ambiguity function has a good thumbtack shape, so the signal can be used as an illuminator. Then the direct-wave signal of the reference channel is purified, improving detection performance. Finally, the paper is summarized.
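Since the paper's exact CMA variant is not reproduced here, the following Python/NumPy sketch shows a generic constant modulus equalizer with a simple error-dependent step size standing in for the adaptive-step rule; the tap length, step constant, and modulus radius are illustrative assumptions.

```python
import numpy as np

def cma_equalize(x, taps=16, mu0=1e-3, r2=1.0):
    """Constant-modulus equalizer with a simple error-dependent step size.

    x  : received reference-channel samples (complex ndarray)
    r2 : constant-modulus radius of the transmitted constellation
    Returns the equalized (purified) sequence y.
    """
    w = np.zeros(taps, dtype=complex)
    w[0] = 1.0                        # centre-spike initialisation
    y = np.zeros_like(x)
    for n in range(taps, len(x)):
        xn = x[n - taps:n][::-1]      # regression vector
        y[n] = np.vdot(w, xn)         # w^H x
        err = y[n] * (abs(y[n]) ** 2 - r2)
        mu = mu0 / (1.0 + abs(err))   # placeholder adaptive step rule
        w = w - mu * np.conj(err) * xn
    return y
```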

2 System Structure A variety of 802.11 standards have been developed, among which 802.11a, 802.11b, and 802.11g are widely used. These standards have defined different data rates and been divided according to different frequency channels. According to the PHY protocol used by each standard, different modes of modulation and coding scheme use different data rates. The main modulation methods includes direct sequence spread spectrum (DSSS) and orthogonal frequency division multiplexing (OFDM), whose data rates range from 1 to 54 Mbit/s. This paper will discuss the WIFI signal modulated by OFDM based on the IEEE 802.11a standard. The physical layer (PHY) frame structure is shown in Fig. 1.

Fig. 1 PPDU frame format: PLCP preamble (12 symbols), SIGNAL field (one OFDM symbol carrying Rate 4 bits, Reserved 1 bit, Length 12 bits, Parity 1 bit, and Tail bits; coded OFDM, BPSK, r = 1/2), and DATA field (Service 16 bits, PSDU, Tail 6 bits, and Pad bits; coded OFDM at the rate indicated in SIGNAL, variable number of OFDM symbols)


Fig. 2 OFDM training structure: ten short training symbols t1–t10 (10 × 0.8 = 8 µs), followed by GI2 and two long training symbols T1, T2 (8 µs), for a total preamble of 16 µs; each subsequent SIGNAL or Data symbol consists of a GI plus the symbol body (0.8 + 3.2 = 4 µs)

In 802.11a OFDM mode, the baseband signal can be represented as

$$r_{\mathrm{OFDM}}(t) = r_{\mathrm{PREAMBLE}}(t) + r_{\mathrm{SIGNAL}}(t - t_{\mathrm{SIGNAL}}) + r_{\mathrm{DATA}}(t - t_{\mathrm{DATA}}) \qquad (1)$$

The PLCP preamble field is used for synchronization and contains the short training sequence and the long training sequence: the short training sequence consists of ten short symbols and the long one of two long symbols. The training structure is shown in Fig. 2, where t1–t10 are the short training symbols, GI2 and GI are guard intervals (cyclic prefixes), T1 and T2 are the long training symbols, the total training time is 16 µs, and the remaining parts are the SIGNAL and DATA fields, respectively. The preamble can be written as

$$r_{\mathrm{PREAMBLE}}(t) = r_{\mathrm{SHORT}}(t) + r_{\mathrm{LONG}}(t - T_{\mathrm{SHORT}}) \qquad (2)$$

where $T_{\mathrm{PREAMBLE}} = T_{\mathrm{SHORT}} + T_{\mathrm{LONG}}$, $T_{\mathrm{SHORT}} = 8\ \mu\text{s}$, $T_{\mathrm{LONG}} = 8\ \mu\text{s}$. The short training sequence can be written as

$$r_{\mathrm{SHORT}}(t) = x_{T_{\mathrm{SHORT}}}(t) \sum_{k=-N_{ST}/2}^{N_{ST}/2} S_k \exp(j 2\pi k \Delta f\, t) \qquad (3)$$

where $N_{ST} = 52$ is the number of sub-carriers, $\Delta f = 20\ \text{MHz}/64 = 0.3125\ \text{MHz}$ is the sub-carrier frequency spacing, and $x_{T_{\mathrm{SHORT}}}(t)$ is a window function of duration $T_{\mathrm{SHORT}}$. The long training sequence is

$$r_{\mathrm{LONG}}(t) = x_{T_{\mathrm{LONG}}}(t) \sum_{k=-N_{ST}/2}^{N_{ST}/2} L_k \exp\bigl(j 2\pi k \Delta f (t - T_{GI2})\bigr) \qquad (4)$$

where $x_{T_{\mathrm{LONG}}}(t)$ is a window function of duration $T_{\mathrm{LONG}}$. The SIGNAL and DATA fields have the same expression:


$$r_n(t) = x_{T_{\mathrm{SYM}}}(t) \sum_{k=-N_{ST}/2}^{N_{ST}/2} d_{k,n} \exp\bigl(j 2\pi k \Delta f (t - T_{GI})\bigr) \qquad (5)$$

where $T_{\mathrm{SYM}} = 4\ \mu\text{s}$. The sub-carriers at positions ±7 and ±21 are dedicated pilot signals, written as $d_{-21,n} = d_{-7,n} = d_{7,n} = d_{21,n} = p_n$, to ensure stable coherent detection under interference; $p_n$ is a 127-bit binary pseudo-random sequence, and $x_{T_{\mathrm{SYM}}}(t)$ is a window function of duration $T_{\mathrm{SYM}}$. The SIGNAL and DATA fields, which contain $N_{\mathrm{SYM}}$ OFDM symbols, are then expressed as

$$r_{\mathrm{DATA}}(t) = \sum_{n=0}^{N_{\mathrm{SYM}}-1} r_n(t - n T_{\mathrm{SYM}}) \qquad (6)$$

In conclusion,

$$r_{\mathrm{OFDM}}(t) = r_{\mathrm{SHORT}}(t) + r_{\mathrm{LONG}}(t - T_{\mathrm{SHORT}}) + r_{\mathrm{DATA}}(t - T_{\mathrm{PREAMBLE}}) \qquad (7)$$

3 Ambiguity Function and Characteristic Analysis of Resolution

The ambiguity function of r(t) is defined as

$$\chi(\tau, f_d) = |\xi(\tau, f_d)|^2 = \left| \int_{-\infty}^{\infty} r(t)\, r^*(t - \tau)\, \exp(j 2\pi f_d t)\, dt \right|^2 \qquad (8)$$

It is a function of the time delay τ and the Doppler frequency f_d, and it completely describes the resolution capability of the radar in both range and velocity. In practice the waveform of the data segment is unpredictable, but its average behaviour determines the remaining part of the ambiguity function: the average ambiguity function equals the squared modulus of the expected output of the two-dimensional matched filter. Therefore this paper uses the average characteristics of the ambiguity function to describe the WIFI signal; the expression for the average ambiguity function can be found in [5–7, 13].
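For reference, Eq. (8) can be evaluated numerically on a sampled signal as in the following sketch (our illustration; a circular shift is used for the delayed conjugate copy for simplicity):

```python
import numpy as np

def ambiguity(r, fs, delays, dopplers):
    """Discrete evaluation of |chi(tau, fd)|^2 from Eq. (8) for a sampled
    signal r (complex ndarray) with sampling rate fs (Hz)."""
    n = np.arange(len(r))
    amb = np.empty((len(delays), len(dopplers)))
    for i, tau in enumerate(delays):            # delay in samples
        shifted = np.roll(np.conj(r), tau)      # r*(t - tau), circular shift
        for j, fd in enumerate(dopplers):       # Doppler in Hz
            phase = np.exp(2j * np.pi * fd * n / fs)
            amb[i, j] = np.abs(np.sum(r * shifted * phase)) ** 2
    return amb
```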

Table 1 60 GHz frequency allocations and the limits on transmit power, antenna gain, and EIRP specified by various regions (including Japan, Australia, Korea, Europe, and China)

blocked by moving persons studied in our previous works [4, 5], 60 GHz UWB link budget and performance are analyzed. Tests are also performed for determining communication ranges and antenna gains. The goal of this study is to provide useful information for the design of 60 GHz UWB systems in gigabit M2M communications and standardization groups. Table 1 shows 60 GHz plan and limits on transmit power, EIRP, and antenna gain for various countries [6] including the parameters specified in China [7].

2 Mm-Wave 60 GHz Propagation Mechanisms In design and optimization of wireless communications systems, channel models featuring the relevant characteristics of radio-wave propagation are required. Ray tracing is a well-established tool for channel modeling; in ray-tracing algorithm, reflection and diffraction are the main physical processes for LOS and NLOS environments. In our previous works [4, 5], mm-wave 60 GHz propagation mechanism is studied from the direction-of-arrival (DOA) measurements. The DOA measurements require the detailed knowledge of the propagation channels. The measured power angle profiles (PAPs) and PDPs can then be connected with site-specific information of the measurement environments to find the origin of the arriving of signals. From [5], mm-wave 60 GHz propagation mechanism can be concluded as follows:


• Direct path and the first-order reflected waves from smooth surfaces form the main contributions in LOS propagation environments.
• Diffraction is a significant propagation mechanism in NLOS cases. Moreover, the signal levels of diffraction and second-order reflection are comparable.
• Transmission loss through concrete or brick walls is very high.

Person blocking effect (PBE) is also measured in our previous work [4], as movement of persons is quite usual in real office rooms. PBE is a major concern for propagation research and system development, and the effects of person blocking at 60 GHz have been studied by many researchers [8, 9]. In [4], PBE is measured by employing the DOA measurement technique as described below.

2.1

Person Block Effect (PBE) Measurements

The PBE measurements were performed in a room with the TX and RX positions fixed 5 m apart. Keeping a clear LOS path and then placing a person in the middle of the LOS path, the power angle profiles (PAPs) of the clear and blocked paths were measured, as shown in Fig. 2a, b, respectively. There is about 18 dB of person attenuation in the blocked path (φ = 0°). However, the PBE can be reduced to 12 dB by using a selection diversity technique, i.e., selecting another stronger path (at φ = 315°), which is considered to be the first-order reflection from the window glass in the room, as reported in [5]. Selection diversity can be explained simply: when the LOS path undergoes a deep fade (person blocking), selecting another independent strong signal mitigates the fading effects. Diversity is a powerful receiver technique that provides link improvement. Therefore an effective PBE = 12 dB is used in the 60 GHz UWB system parameter analysis of this paper.

Fig. 2 PAPs of a clear LOS path and b the LOS path blocked by a person in person block effect (PBE) measurements


2.2


Radio Wave Propagation Mechanisms in the NLOS Case

From [5] we know that in LOS propagation environments the direct path and the first-order reflections from smooth surfaces form the main contributions to the received signal. This is also supported by [10], where a two-ray model (LOS path plus the first-order reflection from the desktop) is proposed for 60 GHz M2M systems. In NLOS cases, diffraction is a significant propagation mechanism, and the signal levels of diffraction and second-order reflection are comparable [5]. This indicates that, in NLOS scenarios in the 60 GHz band, radio links are relayed by diffraction and/or second-order reflections. As an example, Fig. 3 shows how radio waves propagate in an office room in an NLOS scenario; the diffraction and second-order reflection rays are denoted by dotted and solid lines, respectively. It should be noted that in the NLOS case the signal power loss increases greatly with the distance between the TX and RX, so the propagation range is a major concern in NLOS environments in system development.

3 60 GHz UWB System Link Budget Analysis

In wireless communication systems, the upper bound on capacity is given by the Shannon theorem, as a function of the bandwidth B and the signal-to-noise ratio (SNR):

$$C = B \log_2(1 + \mathrm{SNR}) \qquad (1)$$

The system capacity increases with B and with the SNR. However, increasing the bandwidth also increases the system noise power: for example, the noise power of a UWB channel with B = 7 GHz is 18 dB higher than that of a narrowband B = 100 MHz channel (for an antenna noise temperature of T = 290 K).

Fig. 3 Mm-wave radio links are relayed by diffraction and/or double reflection in the NLOS case
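The quoted 18 dB difference follows directly from the kTB scaling of thermal noise power with bandwidth:

$$10\log_{10}\!\left(\frac{7\ \mathrm{GHz}}{100\ \mathrm{MHz}}\right) = 10\log_{10}(70) \approx 18.5\ \mathrm{dB}.$$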



In this study, SNR ¼ 10 dB and B ¼ 1 GHz are considered for performing a basic feasibility study for achieving gigabit capacity of 60 GHz UWB systems.

3.1

Parameters Analysis of 60 GHz UWB System Link Budget

In wireless communication systems, the performance and robustness are often determined by the SNR obtained from the radio link budget:

$$\mathrm{SNR} = P_t + G_t + G_r - PL - N_0 - IL \qquad (2)$$

where P_t is the transmitted power, G_t and G_r are the transmitter (TX) and receiver (RX) antenna gains, PL denotes the propagation path loss, N_0 is the total noise power at the RX, and IL denotes the implementation loss of the system. P_t is often limited by radio regulations; in this work it is chosen as P_t = 10 dBm, as specified by most countries including China. The other system parameters are set to practical values, i.e., IL = 6 dB and noise figure NF = 6 dB, and the total noise power is N_0 = 10 log10(kTB) + NF, where k is Boltzmann's constant and T = 290 K is the standard noise temperature.

3.2

Path Loss Models in 60 GHz UWB Systems

The large-scale fading of the channel is a key factor in the coverage and reliability of the system and is usually characterized by the path loss (PL), which describes the mean signal power loss and obeys a power–distance law. Owing to variations in the propagation environment, the signal power observed at any given point deviates from its mean; this phenomenon is called shadowing, and because of it a fading margin FM is often included in system design. The path loss PL is therefore modeled as the mean path loss plus the fading margin FM:

$$PL = \underbrace{PL_0(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right)}_{\text{mean path loss}} + FM \qquad (3)$$

where the free-space path loss PL_0 is frequency dependent, with PL_0 = 68 dB at the reference distance d_0 = 1 m; the path loss exponent n is environment dependent; and FM is mainly system dependent, with a UWB system naturally enjoying reduced shadow fading relative to narrowband systems. Based on our earlier result that FM decreases with the channel bandwidth B and is less than 4 dB for the minimum UWB bandwidth (B = 500 MHz) at 90 %


link success probability [11], the fading margin is taken as FM = 2 dB for the 60 GHz UWB (B = 1 GHz) system in this work. Studies show that in LOS and NLOS office environments the path loss exponent ranges from 2 to 3.5. In this work three path loss models are considered: LOS (n = 2), LOS blocked by a moving person, and NLOS (n = 3.5); they are, respectively, PL1(dB) = 68 + 20 log(d) + FM, PL2(dB) = 68 + 20 log(d) + FM + PBE, and PL3(dB) = 68 + 35 log(d) + FM. Note that the LOS + PBE model is more realistic than the NLOS model, since the blocking effect is modeled independently of the mobile position, which reflects the fact that movement of persons is quite typical in office rooms, whereas the NLOS model mainly accounts for the high path loss at large distances. The parameters used in the 60 GHz UWB link budget analysis are listed in Table 2. Note that the maximum coverage range is chosen as 5 m, considering gigabit-capacity M2M applications (e.g., computer-to-computer data transfer).
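The link budget of Eq. (2) together with these path loss models can be rearranged to give the combined antenna gain G_t + G_r required at a given distance; the following Python sketch (our illustration under the Table 2 assumptions — small offsets from the paper's Fig. 4 curves may remain) reproduces that calculation.

```python
import math

K_BOLTZMANN = 1.38e-23   # J/K

def required_combined_gain(d, model="LOS", pt_dbm=10.0, snr_db=10.0,
                           bw_hz=1e9, nf_db=6.0, il_db=6.0,
                           fm_db=2.0, pbe_db=12.0, temp_k=290.0):
    """Combined TX+RX antenna gain (dB) needed to reach the target SNR
    at distance d (m), using the link budget and path loss models above."""
    n0_dbm = 10 * math.log10(K_BOLTZMANN * temp_k * bw_hz * 1e3) + nf_db
    if model == "LOS":
        pl = 68 + 20 * math.log10(d) + fm_db
    elif model == "LOS+PBE":
        pl = 68 + 20 * math.log10(d) + fm_db + pbe_db
    else:  # NLOS, n = 3.5
        pl = 68 + 35 * math.log10(d) + fm_db
    # SNR = Pt + Gt + Gr - PL - N0 - IL  =>  Gt + Gr = SNR - Pt + PL + N0 + IL
    return snr_db - pt_dbm + pl + n0_dbm + il_db

for d in (1, 3, 5):
    print(d, [round(required_combined_gain(d, m), 1)
              for m in ("LOS", "LOS+PBE", "NLOS")])
```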

4 60 GHz UWB System Performance Analysis

Since the transmission power of 60 GHz radio systems is restricted by regulations, and since the path loss of the 60 GHz channel is high (for example, the free-space path loss at 60 GHz is 22 dB higher than in the 5 GHz band at d_0 = 1 m), the antenna gains become very important for guaranteeing radio links that achieve gigabit capacity. In the following, tests are performed to determine the achievable ranges and the required combined antenna gains (sum of the TX and RX gains) when using the parameters and the LOS (n = 2), LOS + PBE, and NLOS (n = 3.5) path loss models of Table 2.

Table 2 Radio link budget of the 60 GHz UWB system
Data rate: >Gbit/s
Maximum coverage: 5 m
Bandwidth: 1 GHz
TX power: 10 dBm
SNR: 10 dB
Noise power: −78 dBm
Fading margin: 2 dB
Implementation loss: 6 dB
Effective person block effect: 12 dB
Employed path loss models: LOS: PL1(dB) = 68 + 20 log(d) + FM; LOS + PBE: PL2(dB) = 68 + 20 log(d) + FM + PBE; NLOS: PL3(dB) = 68 + 35 log(d) + FM

Fig. 4 Combined antenna gain (dB) versus distance (1–5 m) in the 60 GHz UWB channel with the link budget of Table 2, for the path loss models PL1 = 68 + 20 log(d) + FM, PL2 = 68 + 20 log(d) + FM + PBE, and PL3 = 68 + 35 log(d) + FM

The combined antenna gain versus distance for the 60 GHz UWB system is shown in Fig. 4. An omni–omni antenna configuration (10 dB combined gain) can reach gigabit capacity under all three path loss models at the short distance d = 1 m. At the larger distance d = 5 m, however, only the LOS model PL1 remains feasible for gigabit capacity with the omni–omni configuration; for the other two models, PL2 and PL3 (LOS + PBE and NLOS with n = 3.5), an omni–directional antenna configuration is required for the 60 GHz UWB system. Note that high-gain directional antennas have very narrow beams, for instance a half-power beamwidth (HPBW) of approximately 6.5° for an antenna with more than 30 dBi gain [6], and their drawbacks are poor flexibility and limited mobility. It should also be noted that the LOS + PBE model in Fig. 4 is more realistic than the NLOS model, since the blocking effect is modeled independently of the mobile position, reflecting the fact that movement of persons is quite typical in multipath indoor channels, whereas the NLOS model mainly accounts for high path loss at large distances. The results show that it is essential to keep a clear LOS path in gigabit M2M applications.

5 Conclusions The feasibility and performance of mm-wave 60 GHz ultra-wide band (UWB) systems for gigabit machine-to-machine (M2M) wireless communications are analyzed in this work. Specifically, based on specifications and experimental channel measurements and models for both LOS and NLOS scenarios, the 60 GHz propagation mechanisms are concluded; 60 GHz UWB radio link budget including person block effect and channel fading margin are provided, and system performance is analyzed further. Tests are also performed for determining communication ranges and antenna configurations. Results show that when having a clear LOS path


gigabit capacity can be achieved when employing omni-omni antenna configuration in office room M2M applications. When the LOS path is blocked by a moving person or radio wave propagation in NLOS situation, omni-directional antenna configuration is required in achieving gigabit capacity for 5 m range between machines of rooms. The drawbacks of high gain antenna systems are that they suffer from poor flexibility and limited mobility. Therefore, it is essential to keep a clear LOS path in gigabit M2M applications like data transfer in office rooms. The goal of this study is to provide useful information for the design of 60 GHz UWB systems in gigabit M2M communications.

References 1. Baykas, T., Chin-Sean, S., Zhou, L., et al.: IEEE 802.15.3c: the first IEEE wireless standard for data rates over 1 Gb/s. IEEE Commun. Mag. 49(7), 114–121 (2011) 2. Peng, X., Zhuo, L.: The 60 GHz band wireless communications standardizations (in Chinese). Inf. Technol. Stand. 49–53 (2012) 3. Geng, S., Liu, S., Zhao, X.: 60-GHz channel characteristic interdependence investigation for M2M networks. In: ChinaCom2014, 14–16 Aug, Maoming, China 4. Geng, S., Kivinen, J., Zhao, X., Vainikainen, P.: Measurements and analysis of wideband indoor radio channels at 60 GHz. In: 3rd ESA Workshop on Millimeter Wave Technology and Applications, pp. 39–44. Espoo, Finland, 21–23 May 2003 5. Geng, S., Kivinen, J., Zhao, X., Vainikainen, P.: Millimeter-wave propagation channel characterization for short-range wireless communications. IEEE Trans. Veh. Technol. 58(1), 3–13 (2009) 6. Yong, S.K., Chong, C.C.: An overview of multigigabit wireless through millimeter wave technology: potentials and technical challenges. EURASIP J. Wirel. Commun. Netw. 2007(1), 1–10 (2007) 7. Chinese specifications of the 60 GHz band transmission power for short-range wireless applications (in Chinese). www.miit.gov.cn 8. Jacob, M., Priebe, S., Maltsev, A., Lomayev, A., Erceg, V., Kurner, T.: A ray tracing based stochastic human blockage model for the IEEE 802.11ad 60 GHz channel model. In: Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 3084–3088, April 2011 9. Dong, K., Liao, X., Zhu, S.: Link blockage analysis for indoor 60ghz radio systems. Electron. Lett. 48(23), 1506–1508 (2012) 10. Shoji, Y., Sawada, H., Chang-Soon, C., Ogawa, H.: A modified SV-model suitable for line-of-sight desktop usage of millimeter-wave WPAN systems. IEEE Trans. Antennas Propagat. 57(10), (2009) 11. Geng, S., Vainikainen, P.: Experimental investigation of the properties of multiband UWB propagation channels. In: IEEE International Symposium on Wireless Personal Multimedia (PIMRC07), Athens, Greek, 3–7 Sept 2007, CD-ROM (1-4244-01144-0), pap337.pdf

Frequency-Domain Turbo Equalization with Iterative Impulsive Noise Mitigation for Single-Carrier Power-Line Communications Ying Liu, Qinghua Guo, Sheng Tong, Jun Tong, Jiangtao Xi and Yanguang Yu Abstract Power-line communication (PLC) has been recognized as a promising alternative communication technology due to the universal existence of power lines. However, signal transmitted through PLC channel suffers from severe inter-symbol interference (ISI) and strong impulsive noise (IN), degrading the reliability of data transmission. Although multicarrier orthogonal frequency-division multiplexing (OFDM) has been investigated to combat ISI in PLC, the inherent large peak-to-average power ratio (PAPR) of OFDM signal makes IN detection very difficult, and the OFDM based PLC system may not be the best choice under hostile PLC channels. To combat both ISI and IN, we propose a low complexity scheme based on single-carrier frequency-domain turbo equalization (SC-FDTE) coupled with iterative IN mitigation. Simulation results indicate that, the proposed scheme is able to efficiently mitigate IN in the PLC channel and outperforms the joint clipping and blanking approach for IN mitigation.



Keywords Iterative impulsive noise mitigation · Single-carrier frequency-domain equalization · Turbo equalization · Power-line communications · Impulsive noise








1 Introduction Power-line communication (PLC) allows transmitting data through the electrical power transmission and distribution cables, aka, power lines [1, 2]. Compared with the extensively used Asymmetrical Digital Subscriber Lines (ADSL) and broadband wireless communication techniques, PLC attracts growing interests due to the ubiquitous power lines. It saves human and material resources from building additional transmission infrastructures. On the other hand, the power points inside modern constructions allow various appliances to be connected whilst accessible to the global network. This is a significant first step for smarthome projects [1]. Therefore, PLC becomes an attractive alternative for modern in-home last-inch communication technology. However, power lines were originally designed for transmitting electrical power rather than signal data. It provides a harsh, noisy, and nonlinear environment for data transmission [1–4]. The multipath propagation turns the PLC channel into a frequency-selective channel, which induces severe intersymbol interference (ISI) during data transmission [1, 2]. Besides, power-line channel suffers from background noise (BN) originating from commercial radio and TV; as well as impulsive noise (IN) caused by switching transients [2–4]. IN occurs randomly with large amplitude which may exceed more than 50 dB over BN [2]. Due to the existence of IN, data transmitted via a PLC channel is affected by bit or burst errors, especially in high-speed data transmissions. Therefore, in previous PLC research [3–6], ISI and IN are considered as main obstacles, which should be overcome to guarantee the reliability of communication.

1.1

Previous Works Coping with Obstacles in PLC

To date, multicarrier modulation, e.g., orthogonal frequency-division multiplexing (OFDM), based PLC systems have been advocated to address the ISI issue. IN mitigation modules have been introduced in OFDM to further cope with the detrimental impact of IN [3–6]. The authors of [3] compared and analyzed three nonlinear receiver-end preprocessors namely clipping, blanking, and joint clipping and blanking to mitigate IN. These kind of IN mitigation schemes have simple structures and brief mathematical representations, which make them easily to be implemented. However, signal symbols with high amplitude will cause wrong trigger of clipping or blanking and further lead to IN detection and mitigation errors. In [4, 6], the authors proposed approaches for IN estimation and mitigation. Specifically, the authors in [4] proposed a threshold based IN mitigation method, which can achieve visible performance improvement. In general, OFDM-based PLC systems are able to handle the hostile ISI channel and provide low-computational complexity by inducing simple one-tap equalizer. However, OFDM signals have large peak-toaverage power ratio (PAPR) (10–12 dB) which makes IN detection difficult, and the systems are sensitive to carrier frequency offsets (CFOs).


In this regard, single-carrier frequency-domain equalization (SC-FDE) system has been advocated as an attractive alternative to OFDM. In wireless communications, SC-FDE systems have been investigated thoroughly from channel estimation to ISI mitigation [7, 8]. In fact, SC-FDE is very suitable for PLC due to the lower PAPR, which can be considered as superiority to meet the Electro Magnetic Compatibility (EMC) regulatory constraints. However, only a few literature [9, 10] focus on the utilization of SC-FDE in PLC. In [9], the authors compared the performance of SC-FDE PLC system with OFDM-based PLC system under practical PLC channels without IN. The results show that, the SC-FDE scheme outperforms the OFDM counterpart in most cases through higher frequency diversity while maintaining almost identical computational complexity. Although different equalizers and different channel coding approaches were considered, bit error rate (BER) performance analysis shows that, there is still 6–7 dB room for BER performance improvement under impulsive PLC channels [10].

1.2

Research Contributions, Paper Organization, and Notations

In comparison to the above works, this paper proposes to use single carrier modulation due to its low PAPR. Specifically, a novel iterative receiver, i.e., single carrier modulation with frequency-domain turbo equalisation (SC-FDTE) coupled with an iterative IN estimation and cancellation (IN-EC) module, is developed for PLC. In the proposed iterative receiver, we first subtract the transmitted signal from the received samples. Then locate IN from the residual signal. Robust turbo equalization is used in this system to iteratively cope with the ISI and provide a priori information for the IN-EC module. Simulation results show that the proposed iterative receiver based PLC system efficiently mitigates IN and achieves a significant BER gain compared with the conventional non-iterative receiver based PLC system [3]. In addition, we observe that, SC-FDE is more suitable than OFDM in IN mitigation for complicated PLC channel. The remainder of the paper is organized as follows. Section 2 presents the PLC channel model as well as the noise model. In Sect. 3, transmitter structure and signal model are introduced thoroughly. Section 4 describes the frequency-domain turbo receiver as well as the proposed iterative IN mitigation approach. Simulation results are presented in Sect. 5. Section 6 concludes this paper. Notations: Boldface letters denote column vectors and matrices. Lowercase and capital letters denote time- and frequency-domain entities, respectively. The superscripts ðÞT , ðÞ1 and ðÞH represent the operation of transpose, inverse, and conjugate transpose, respectively.

894

Y. Liu et al.

2 PLC Channel Model and Noise Model 2.1

PLC Channel Model

This paper considers the in-home last-inch low-voltage PLC channel, which has a tree-like topology with multiple branches. Zimmermann and Dostert proposed a general multipath channel model for PLC [11]. The simplified frequency response of the model for N different paths is given by Hð f Þ ¼

N X i¼1

k si ni  |fflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflffl} eða0 þa1 f Þdi  |fflffl ej2pf ffl{zfflfflffl}; |{z}

weight

attunation

ð1Þ

delay

where i denotes the path index, ni denotes the weighting factor for path i; a0 , and a1 represent the attenuation parameters; k is the exponent of the attenuation factor (0.5–1); di denotes the length of path i; si represents the delay of path i; and the frequency f ranges from 500 kHz to 20 MHz. In practice, a majority of reflections from distant echoes are eliminated due to severe attenuations. Thus, we only consider the dominant reflections, typically N ¼ 3  5.

2.2

Noise Model

The additive noise in PLC channel is composed of the BN wk , which is usually represented as a complex additive white Gaussian noise (AWGN) with a probability density function (PDF) of wk  CN ð0; r2w Þ, and the IN ik , i.e., nk ¼ w k þ i k ;

ð2Þ

We assume using the concise Bernoulli–Gaussian (BG) [12] model to generate IN, which can be represented as i k ¼ bk gk ;

ð3Þ

where fbk g denotes the Bernoulli process, which is an i.i.d. sequence of zeroes and ones with the probability of one being p, and fgk g follows the PDF gk  CN ð0; r2i Þ. The ratio of the variances of IN and BN is l = r2i /r2w .

Frequency-Domain Turbo Equalization …

895

Fig. 1 Block diagram of the frequency-domain turbo equalization with iterative IN mitigation for a single-carrier PLC system

3 Signal Model The transceiver block diagram is shown in Fig. 1. At the transmitter, first, the binary data sequence {ai g passes through an encoder. Then the coded binary sequence is permuted by an inter leaver. The role of the mapper is to convert the permuted bit sequence fci g to symbol sequence x to be modulated onto the signal carrier for transmission. More specifically, the bit sequence fci g is divided into binary subsequences with length Z, and each binary subsequence is mapped to a symbol from the alphabet v ¼ fxk ; k ¼ 1; 2. . .; 2Z g. After cyclic prefix (CP) insertion, the resulting signal x0 is x0 ¼ ½x0 ; . . .; xM1 ; xM ; . . .xMþK1 T ;

ð4Þ

where K and M denote the symbol length and CP length, respectively. The signal x0 is then transmitted through the PLC channel which can be modeled as a length L tapped delay line. After removing the CP, the received signal y is given by

896

Y. Liu et al.

2 2

y1 6 y2 ex þ n ! 6 y¼H 6 . 4 . . yK

h0 h1 .. .

6 6 6 6 6 7 6 7 6 hL1 7¼6 5 6 6 6 0 6 6 6 .. 4 . 0 3

0 h0 .. .

... ... .. .

hL2



hL1 .. . 0



0 .. .

..  .    hL1

hL1 0 .. .

.. . hL2

3 . . . h1    h2 7 7 3 2 3 .. .. 7 72 n0 . . 7 x0 76 7 6 .. 76 x1 7 7 6 n1 7 . 7 7 þ 6 . 7; 76 . 4 . 5 4 . 5 .. 7 . . . 7 7 xK1 nK1 7 .. .. 7 . . 5    h0

ð5Þ where h0 ; h1 ; . . .hL1 denote the channel taps. Note that, the channel is a frequency-selective time-invariant channel. The PLC channel matrix in (5) is a e , which can be diagonalized by normalized Discrete Fourier circulant matrix H Transform (DFT). Then we get the frequency domain received signal e F H  Fx þ Fn: Fy ¼ F H

ð6Þ

Y ¼ DX þ N;

ð7Þ

It can be rewritten as

where D ¼ DiagfD0 ; D1 ; . . .; DK1 g is a diagonal matrix.

4 Proposed SC-FDTE with Iterative IN Mitigation 4.1

Turbo Receiver Structure

At the receiver end, the received signal first passes through the impulsive noise estimation and cancelation (IN-EC) module as shown in Fig. 1. This module aims to mitigate IN from the received signal. Algorithms for the IN-EC module will be detailed in Sect. 4.2. Then, we input the resulting “IN-free” (Note that, the residual IN may exist) signal into the soft-input soft-output (SISO) turbo equalizer, namely, the FD-LMMSE equalizer in Fig. 1. The objective of the SISO equalizer is to compute the extrinsic log-likelihood ratio (LLR) of each interleaved code bit ci;z [13–15] as follows:  0     P ci;z ¼ 0jy    La ci;z ; L ci;z , ln P ci;z ¼ 1jy0 e

ð8Þ

Frequency-Domain Turbo Equalization …

897

  where La La ci;z denotes the a priori LLR of ci which is the soft information Pðci;z ¼0jy0 Þ generated from the SISO decoder in the previous iteration. ln P c ¼1jy0 represents ð i;z Þ the a posteriori LLR calculated based on both ci itself and the “IN-free” signal y0 . In   [15, 16], the authors proposed a concise representation of Le ci;z based on the general LMMSE principle   2 jxk me j Q 0 0 0 exp  ve k z 6¼z Pðci;z ¼ sk;z Þ   k   Le ci;z ¼ ln ; 2 P jxk mek j Q 0 0 0 xk 2v1z exp  z 6¼z Pðci;z ¼ sk;z Þ ve P

xk 2v0z

ð9Þ

k

where sk;z represents the binary subsequence corresponding to symbol xk . v0z and v1z denote two alphabet subsets and their elements correspond to binary subsequences with the Z th bit being 0 and 1, respectively. mek and vek are the extrinsic mean and variance of xk , respectively.     mek ¼ mek mpk mpk  mak mak  mek ¼

1 1  mpk vak

ð10Þ

1 ð11Þ

In (10) and (11), the a priori mean ma and variance va are calculated using the feedback LLR from the decoder. The a posteriori mean mpk and variance mpk are calculated by applying the MMSE principle to (7) as  1 r2 mp ¼ ma þ F H DH DDH þ w I ðY 0  DFma Þ; va vp0

¼

vp1

¼  ¼

vpK1

K 1 1X 1 j di j 2 ¼ þ 2 K k¼0 va rw

ð12Þ

!1 ;

ð13Þ

where va represents the average value of the elements in va . Thus, we can obtain Le ðci Þ by substituting (10) and (11) into (9). Then we import the de-interleaved LLR Lðbi Þ to the SISO decoder. The SISO decoder [17] computes the extrinsic LLR 0 Le ðbi Þ which is used as a priori information for the equalizer in the next iteration. In addition, the a posteriori LLR of the information bit fai g are computed for hard decision in the last iteration.

898

4.2

Y. Liu et al.

Iterative IN-EC

According to (2) and (7), the received frequency domain signal Y is given by Y ¼ DX þ W þ I;

ð14Þ

which comprises three components, namely, the signal DX, the BN W, and the IN I. The aim of the IN-EC is to remove the IN from the received signal. However, IN may be embedded in the signal when useful signal and IN have comparable magnitudes, which makes the detection of IN challenging, especially for large SNRs. To this end, we first subtract the effect of data signal from the observation, resulting in a residual signal in the time domain as e ðx  bx Þ; br ¼ F 1 Y  F 1 DFbx ¼ w þ i þ H

ð15Þ

where bx denotes the estimate of x calculated based on the a posteriori LLR from the decoder. As we can see in Fig. 1, the second feedback loop represents that a posteriori information is used to generate bx since it achieves a better estimate of x. With the residual signal br , we first identify the location of IN using the proposed Sorting-based method. Then we estimate IN in these locations by treating i as the e ðx  bx Þ as the noise n0 . The prodesired information to be estimated and w þ H cedure of the IN-EC algorithm is shown as follows: Step1. IN Location Detection: We sort the residual signals in decreasing order of amplitude, and the first Q ones are assumed to contain IN, where Q is given by Q ¼ total symbol number  p  elimit :

ð16Þ

Here, p represents the probability of IN occurrence.elimit takes value from [0, 1] which is calculated as follows: elimit ¼ 1  Probðjij  jn0 jÞ

ð17Þ

and  0   0  þ1  0  þ1 Z Z Prob jij  n  ¼ Pjij ðjijÞPjn0 j n  d n d jij ¼ 0

jij

r2n0 ; r2i þ r2n0

ð18Þ

where Pjij ðjijÞ and Pjn0 j ðjn0 jÞ both follow Rayleigh distribution. r2i denotes the variance of each vector in i, and r2n0 denotes the variance of each vector in n0 . Due to the fact that r2n0 r2n0 changes with iteration, the values of elimit are iteratively updated as well. Step2. IN Estimation: Two estimation approaches are presented to estimate IN at the Q locations below, namely, LS estimation (19) and MMSE estimation (20).

Frequency-Domain Turbo Equalization …

LS:i0k

¼ (

MMSE :

i0k

¼

899

^rk ; k  IN locations 0; otherwise

r2i r2i þr2n0

 ^rk ;

k  IN locations

0,

ð19Þ

ð20Þ

otherwise

Step3. IN Cancelation: We subtract the estimation of IN from the received signal to get the “IN-free” signal y0k (21). 0

y0k ¼ yk  ik

ð21Þ

0

Finally, the DFT of y0 , i.e., Y is input to the FD-LMMSE equalizer.

5 Simulation Results and Discussion In this section, we investigate the performance of the proposed iterative receiver based PLC system under the presence of IN. Firstly, we compare the BER performance of the SC-FDE PLC system with OFDM PLC system under impulsive environment. In addition, the joint clipping and blanking [3] IN mitigation approach is compared with the proposed iterative IN-EC. BER performance of the proposed schemes under specific conditions is tested as performance benchmarks. The parameters of a typical 4-path in-home reference multipath channel [11] are e The resulting tapped delay line has adopted to construct the PLC channel matrix H 22 taps. We fix the IN occurrence probability to p ¼ 0:01 and the IN variance to 100 times of the BN variance, i.e., l ¼ 100. These parameters are widely used in IN modeling in PLC channel [3–6]. The length of the information bit frame is 12,798.

Fig. 2 BER performance comparison between SC-FDE-based PLC system and OFDM-based PLC system under impulsive environment (p ¼ 0:01; l ¼ 100)

900

Y. Liu et al.

We adopt a rate-1/2 convolutional encoder with generator (5, 7), which is initialized to all-zero state. In addition, Quadrature Phase Shift Keying (QPSK) Gray mapping is used. We set the SC block length to K ¼ 128 and thus we have 100 blocks to transmit for each coded bit sequence. We first compare the BER performance of OFDM-based PLC system between SC-FDE-based PLC systems using the same iterative IN mitigation approaches. Simulation results are shown in Fig. 2. In the legend, OFDM-no IN and SC-FDE-no IN denote BER performance of OFDM and SC-FDE PLC systems without IN, which are the performance bounds of PLC systems with IN mitigation. Also, OFDM and SC-FDE PLC systems with IN but no IN mitigation are represented by OFDM-IN and SC-FDE-IN in Fig. 2. We observe that, no matter with or without IN, the SC-FDE-based PLC system outperforms the OFDM-based PLC system. Thus, the corresponding BER performance of SC-FDE PLC system with IN

Fig. 3 Convergence rate comparison when SNR = 5– 8 dB of different IN-EC algorithms under PLC channel with IN (p ¼ 0:01; l ¼ 100)

Fig. 4 BER performance comparison between different IN mitigation approaches in PLC channel with IN (p ¼ 0:01; l ¼ 100)

Frequency-Domain Turbo Equalization …

901

mitigation achieves significant improvement compared to OFDM-based PLC system with IN mitigation. Second, we compare the proposed IN-EC algorithms in terms of convergence rate in four scenarios: SNR = 5–8 dB, respectively. Figure 3 shows that larger SNR leads to a faster convergence rate in PLC system. Besides, under same SNR, LS and MMSE estimations of the IN achieve similar convergence speeds. We note that the iterative IN mitigation approaches converge within 10 iterations. Thus, we fix the number of iterations to 10 in the following simulations. In Fig. 4, we compare the performance of the proposed iterative IN mitigation approaches with that of the joint clipping and blanking-based IN mitigation approach. The number of iterations is 10. The BER performance of the PLC system without IN (legend FD-LMMSE no IN in Fig. 4) serves as the performance bound of the IN mitigation in PLC system. We notice that, there is a large BER performance gap between FD-LMMSE turbo equalization with and without IN. This indicates that IN greatly degrades the quality of data transmission even with small occurrence probability. Similar to the trends of convergence rates, LS and MMSE estimations of IN achieve similar BER performance. Furthermore, we observe that, the proposed iterative IN mitigation approaches efficiently mitigate IN and achieve an SNR gain of more than 2 dB compared to the iterative FD-LMMSE equalization without IN mitigation (legend FD-LMMSE IN in Fig. 4). Compared with the joint clipping and blanking IN mitigation method, an SNR improvement of more than 1 dB is obtained by the proposed IN mitigation scheme.

6 Conclusion To address the challenging issues in PLC, this paper proposes to use single carrier modulation due to its lower PAPR compared with the conventional multicarrier modulation. Moreover, a novel iterative receiver, i.e., frequency-domain turbo equalisation with iterative IN mitigation, is developed. Frequency-domain equalisation provides lower complexity compared with the time-domain equalisation, which is able to cope with the heavily dispersive PLC channel. On the other hand, turbo equalisation makes a powerful ISI elimination approach with performance approaching the optimal MAP signal detection. By using the a posteriori LLRs from the decoder, IN-EC is combined with turbo equalization to iteratively improve the IN estimation and cancelation, and further iteratively improve the system’s BER performance. For the module of IN-EC, we propose a Sorting-based approach to detect the locations of IN, then use LS or MMSE estimation to estimate IN at these locations. In the simulation results, we observe that, SC-FDE is proved more suitable than OFDM in IN mitigation under complicated PLC channels. In addition, Sorting-based iterative IN-EC outperforms the joint clipping and blanking IN mitigation approach by more than 1 dB, and the BER performance of the proposed IN mitigation-based PLC system is very close to the performance bound of IN mitigation in the PLC system.

902

Y. Liu et al.

References 1. Pavlidou, N., Vinck, A.J.H., Yazdani, J., Honary, B.: Power line communications: state of the art and future trends. IEEE Commun. Magaz. 41(4), 34-40 (2003) 2. Meng, H., Guan, Y.L., Chen, S.: Modeling and analysis of noise effects on broadband power-line communications. IEEE Trans. Power Delivery 20(2), 630–637 (2005) 3. Zhidkov, S.V.: Analysis and comparison of several simple impulsive noise mitigation schemes for OFDM receivers. IEEE Trans. Commun. 56(1), 5–9 (2008) 4. Zhidkov, S.V.: Impulsive noise suppression in OFDM-based communication systems. IEEE Trans. Consum. Electron. 49(4), 944–948 (2003) 5. Juwono, F.H., Guo, Q., Huang, D., Wong, K.P.: Deep clipping for impulsive noise mitigation in OFDM-based power-line communications. IEEE Trans. Power Delivery 29(3), 1335–1343 (2014) 6. Lin, J., Nassar, M., Evans, B.L.: Impulsive noise mitigation in powerline communications using sparse Bayesian learning. IEEE J. Sel. Areas Commun. 31(7), 1172–1183 (2013) 7. Pancaldi, F., Vitetta, G.M., Kalbasi, R., Al-Dhahir, N., Uysal, M., Mheidat, H.: Single-carrier frequency domain equalization. IEEE Signal Process. Mag. 25(5), 37–56 (2008) 8. Siddiqui, F., Danilo-Lemoine, F., Falconer, D.: Iterative Interference cancellation and channel estimation for mobile SC-FDE systems. IEEE Commun. Lett. 12(10), 746–748 (2008) 9. Ng, Y.H., Chuah, T.-C.: Single-carrier cyclic prefix-assisted PLC systems with frequency-domain equalization for high-data-rate transmission. IEEE Trans. Power Delivery 25(3), 1450–1457 (2010) 10. La-Gatta, F.A., Ribeiro, M.V., Legg, A.P., Machado, R.: Coded CP-SC communication scheme for outdoor power line communications. In: IEEE International Symposium on Power Line Communications and Its Applications (ISPLC), pp. 160–165 28–31 March 2010 11. Zimmermann, M., Dostert, K.: A multipath model for the powerline channel. IEEE Trans. Commun. 50(4), 553–559 (2002) 12. Ghosh, M.: Analysis of the effect of impulse noise on multicarrier and single carrier QAM systems. IEEE Trans. Commun. 44(2), 145–147 (1996) 13. Tüchler, M., Singer, A.C., Koetter, R.: Minimum mean squared error equalization using a priori information. IEEE Trans. Signal Process. 50(3), 673–683 (2002) 14. Tüchler, M., Koetter, R., Singer, A.C.: Turbo equalization: principles and new results. IEEE Trans. Commun. 50(5), 754–767 (2002) 15. Guo, Q., Huang, D.D.: a concise representation for the soft-in soft-out LMMSE detector. IEEE Commun. Lett. 15(5), 566–568 (2011) 16. Guo, Q., Huang, D., Nordholm, S., Xi, J., Yu, Y.: Iterative frequency domain equalization with generalized approximate message passing. IEEE Signal Process. Lett. 20(6), 559–562 (2013) 17. Koetter, R., Singer, A.C., Tüxchler, M.: Turbo equalization. IEEE Signal Process. Mag. 21(1), 67–80 (2004)

Performance of Multimodal Biometric Systems at Score Level Fusion Harbi AlMahafzah and Ma’en Zaid AlRawashdeh

Abstract This paper proposed the use of multimodal score-level fusion as a means to improve the performance of multimodal verification. Different algorithms have been used to extract the features: LG for extracting FKP features, LPQ for extracting iris features, and PCA for extracting face features. Results indicate that the multimodal verification approach has gained higher performance than using any single modality. The biometric performance using score-level fusions under “Sum,” “Max,” and “Min” rules have been demonstrated in this paper.









Keywords Score level fusion Multibiometric Multimodal Log-Gabor LPQ PCA



1 Introduction The need for user authentication techniques and concerns about security and vast progression in networking, and communication has increased in the past few decades. Traditional methods are commonly used for authorizing and binding access to different systems even though these systems could be attacked and the security can be overridden. Biometrics technologies have replaced the traditional authentication methods due to their ability to authenticate the right personality of different people requesting a service [1].

H. AlMahafzah (&) Department of Computer Science, Al-Hussein Bin Talal University, Ma’an, Jordan e-mail: [email protected] M.Z. AlRawashdeh (&) Computer Lecturer, Alghad International Colleges for Health Science, Najran, Saudi Arabia e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_82

903

904

H. AlMahafzah and M.Z. AlRawashdeh

Biometric recognition systems aim at the automation of recognition of a person’s identity based on physical or behavioral characteristics (something a person is or produces). Since majority of biometric systems are single modal which rely on the single biometric information of authentication, problems with those biometrics trait information such as (noise in sensed data, intra-class variations, and inter-class similarities, etc), results in problems such as authenticating the unauthorized user as authorized users (FAR) and rejecting the authorized users (FRR). Usually, we are using FAR and FRR to measure the performance of biometric systems. Another measurement is (EER) could also be used, EER is a cross point when drawing FRR verses FAR (i.e., the equal values of FRR with FAR [2]. Nowadays there is more concern of solving some inherited problems of biometric systems (intra-class variations, inter-class similarity etc.). Possible solutions are to use more than one modality to reduce the classification problems which raise the intra-class variety and inter-class. Multiple biometric traits could be used to improve the performance of biometric systems. Combining or fusing of more than one biometric system is referred as Multi-biometric system [3]. The Multi-biometric systems can offer essential improvement in the authentication accuracy of a biometric system, as it depends on more than one biometric data. The term Multi-biometric refers to the fusion of different types of biometrics according to the way of fusing the biometrics data as follows [2]: Multi-sensor: Multiple sensors are used to collect information of the same biometric. Multi-sample: more than one consideration of the same biometric is taken at the time of the enrollment and/or recognizing time, e.g. a number of face readings are taken from different sides for the same person. Multi-algorithms: different algorithms are used for extracting the same biometric features and matching them with the already obtained database. Multi-instance: means the use of the same biometric trait and processing on multiple instances of the similar biometric trait, (such as left and right irises) [1, 4]. Multi-modal: Multiple biometric modalities can be collected from the same person, e.g. fingerprint and face, which require different sensors. Thus this paper evaluates the performance of multi-modal approach by fusing the data at match score level using Sum, Max, and Min rules. The rest of the paper is organized as follows: Sect. 2 presents related works, proposed method is given in Sect. 3, detailed experimental results are given in Sect. 4, fusion strategies in Sect. 5, result and discussion in Sect. 6, and the conclusion is mentioned in Sect. 7.

2 Related Works Meraoumia et al. [5] proposed a personal identification multi-modal biometric system by using palm print and iris modalities. In this work, the authors describe the development of a multi-biometric system based on Minimum Average Correlation Energy Filter (MACE) method (for matching).

Performance of Multimodal Biometric Systems at Score Level Fusion

905

Morizet and Gilles [6] have suggested a new fusion technique to combine scores obtained from face and iris biometric modalities. Based on a statistical analysis of boots trapped match scores elicitation from similarity matrices, the authors show the usefulness of wavelet noise removal by normalizing scores. Toh et al. [7] have proposed a diverse polynomial model increasing the number of parameters longitudinally with model order and the number of inputs. First, the model is subjected to a well-known pattern classification problem to elucidate the classification capability such as the above-mentioned methods and then followed by a biometrics fusion combining fingerprint and voice data. Giot et al. [8] in their paper have proposed a lower cost multimodal biometric system fusing keystroke and 2D face recognition. The suggested multimodal biometric system has improved the recognition rate compared to the individual method. Rodrigues et al. [9] have proposed two schemes that could increase the security of multi-modal biometric systems. Experimental result shows that the suggested methods are more sturdy against spoof attacks compared to classical fusion methods. Shahin et al. [10] have introduced a multi-modal system based on the fusion of entire dorsal hand geometry and fingerprints that achieves right and left near-infra-red hand geometry and right and left index and ring fingerprints. Scores obtained from different biometric modalities matchers were fused using the Min– Max score fusion technique. Wang et al. [11] have proposed a method to combine the face and iris features for developing a multi-modal biometric system. The authors pick out a virtuous feature level fusion plan for fusing iris and face features in sequence, and normalizing the pristine features of iris and face using z-score model to reduce estrange in the unbalance of girth.

3 Proposed Methodology In this paper, different modalities have been used namely: Face modality of AR-Face database, iris modality of CASIA-Iris database, and Finger Knuckle Print (FKP) modality of D. Zhang FKP database. FKP refers to the image pattern of the outer surface around the phalangeal joint of one’s finger.

3.1

Preprocessing

This section describes the extraction of the Region of Interest (ROI). The process involved to extract ROI for FKP is shown in Fig. 1 and the process involved to extract ROI of Iris is shown in Fig. 2.

906

H. AlMahafzah and M.Z. AlRawashdeh

Fig. 1 a Image acquisition device is being used to collect FKP samples; b sample FKP image; c ROI coordinate system, where the rectangle indicates the area d extracted ROI

Fig. 2 Region of interest extraction of Iris: a Original edge image. b Edge image after edge detection. c Edge image after deleting noise and thinning. d Circular Hough transformed is used to detect the iris border

4 Feature Extraction In this paper, the following feature extraction algorithms have been used to extract the features prior to fuse a different modalities combination. (1) To extract the features from finger knuckle print, Log-Gabor filters have been used. Log-Gabor proposed by Field [12], suggests that natural images are better coded by filters that have Gaussian transfer functions as they are seen on logarithmic frequency scale. On the linear frequency, the Log-Gabor function has a transforming function of the form:  GðwÞ ¼ e

    logðw=w0 Þ2 = 2 logðk=w0 Þ2

ð1Þ

where w0 is the filter’s center frequency and k/w0 is a constant for different w0. (2) To extract the iris’s features, the Local Phase Quantization (LPQ) methods have been used. LPQ introduced by Ojansivu et al. [13]. LPQ is based on the blur undisparity property of the Fourier phase spectrum. It uses the local phase information extracted by the 2D DFT which is computed over a rectangular M-byM neighborhood Nx at each pixel of the image f(x) defined by: F ðu; xÞ ¼

X

f ðx  yÞej2pu

T

y

¼ wTu fx

ð2Þ

yeNx

where wu is the basis vector of the 2D DFT at frequency u and fx is another vector containing all M2 image samples from Nx [13].

Performance of Multimodal Biometric Systems at Score Level Fusion

907

(3) Principal Component Analysis (PCA) was invented in 1901 by Karl Pearson. PCA is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values with uncorrelated variables called principal components.

5 Biometric Fusion Strategies A biometric system works in two different modes: enrollment and authentication. The two authentication modes are verification and identification. Combining biometric systems, algorithms, and/or traits is a good solution to improve the authentication performance of biometric systems. A lot of researchers have shown that multi-biometrics enhanced the authentication performance. In biometric systems, fusion can be performed at different levels: sensor level, feature level, score level, and decision level fusions [14].

5.1

Sensor-Level Fusion

It is the integration of testimonials presented by different sources of raw data before throwing in one’s hand for feature extraction. Sensor-level fusion can be availed from multi-sample systems which grip multi-snapshots of the same biometric.

5.2

Feature-Level Fusion

In feature-level fusion, the feature suit constructed from multiple biometric algorithms are conjoined into a single feature set by applying a suitable feature normalization, transformation, and reduction planner [3, 14].

5.3

Score-Level Fusion

The matching scores output by verity of biometric matchers are joined to generate a new scalar. Score level fusion is shown in Fig. 3.

908

H. AlMahafzah and M.Z. AlRawashdeh

Fig. 3 Basic concept of the score-level fusion

5.4

Decision-Level Fusion

Fusion is achieved at the epitomized or decision level the only final decisions are obtainable (e.g. AND, OR, Majority Voting, etc.). In all the experiments, the data have been fused at score level, using ‘‘Sum,’’ “Max,” and ‘‘Min’’ rules for two and three modalities combinations.

6 Results and Discussion This section tackles the inquisition results of joining different biometric modalities at score level fusion with ‘‘Sum,’’ “Max,” and ‘‘Min’’ rules to measure the performance of multimodal system. Sum Rule; Si ¼

M X m S i m¼1

ð3Þ

  1 2 M Max Rule; Si ¼ Max s ; s ;    s i i i

ð4Þ

  1 2 M Min Rule; Si ¼ Max s ; s ; . . .s i i i

ð5Þ

In all the experiments, performance is measured in terms of False Acceptance Rate (FAR in %) and corresponding Genuine Acceptance Rate (GAR in %). To start with the performance of a single modality biometric system is measured. Then,

Performance of Multimodal Biometric Systems at Score Level Fusion

909

Table 1 Performance of single modality GAR (%) FAR (%)

FKP

Face

Iris

0.01 0.10 1.00

85.50 88.50 93.00

25.50 40.00 62.00

45.00 57.00 74.00

Fig. 4 The ROC curve performance of single modality

the results for multimodal biometric system are scrutinized. The results attained from single modality biometric system are shown in Table 1 and illustrated as Receiver Operating Characteristic (ROC) curve in Fig. 4. By examining the data in Table 1 it can be noted that the FKP have the highest performance among the three modalities at all the FAR values and Iris has acceptable performance at FAR = 1 but low performance value at FAR = 0.01 compared to FKP. On the other hand, the face has substantial less performance values than other modalities at all values of FAR. The results attained from two modality biometric systems are shown in Table 2, and illustrated as ROC curve in Fig. 5. By examining the data in Table 2, it can be observed that the fusion of two modalities at score level (Sum Rule) when fusing FKP with any other modality, the performance has achieved good improvement at all FAR’s values compared to FKP as single modality performance and very high improvement compared to face or iris performance as a single modality. Fusing face with iris has achieved good performance improvement at FAR = 1; it could be noticed by comparing that the single modality result is 62.00 % for face and 74.00 % for Iris and the fused result is

910

H. AlMahafzah and M.Z. AlRawashdeh

Table 2 Performance of two modality biometric systems FAR (%)

GAR (%) with “sum” rule FKP+Face FKP+Iris

0.01 88.50 0.10 94.00 1.00 98.00 GAR (%) with “max” rule 0.01 32.50 0.10 49.50 1.00 76.00 GAR (%) with “min” rule 0.01 85.00 0.10 92.50 1.00 94.50

Face+Iris

91.50 94.00 95.50

46.50 74.50 93.50

47.00 60.00 76.50

56.50 66.00 82.50

85.00 88.00 93.00

33.00 44.00 76.00

Fig. 5 The ROC curve performance of two modality systems at score level

93.50 %. At FAR = 0.01, it has less improvement performance than iris as a single modality which is 45.00 % with the fused result which is 46.00 %. The fusion of two modalities at score level (Max Rule) when fusing FKP with other modality performance degrade compare to the performance of FKP as a single modality. On the other hand, the performance of fusing face with iris has a good improvement than a single modality for either Iris or face; we could notice here the Max rule is giving good result for fusing two weak modalities but degrading the performance when there is a strong modality. The fusion of two modalities at score level

Performance of Multimodal Biometric Systems at Score Level Fusion Table 3 Performance of three modality biometric systems

FAR (%)

911

GAR (%) with “sum” rule FKP+Face+Iris

0.01 95.00 0.10 98.00 1.00 99.50 GAR (%) with “max” rule 0.01 58.50 0.10 69.00 1.00 84.50 GAR (%) with “min” rule 0.01 84.50 0.10 92.50 1.00 94.50

(Min Rule) does not exhibit any performance improvement over a single modality. The performance fusion of two modality systems at decision level is illustrated as a ROC curve in Fig. 5. The results attained from three modality biometric systems are shown in Table 3 and illustrated as Receiver Operating Characteristic (ROC) curve in Fig. 6. By analyzing the data in Table 3, it can be observed that the fusion of three modalities at score level (Sum Rule) has good score improvement over the two modalities. It could be noticed by comparing the best two modality results at FAR

Fig. 6 The ROC curve performance of three modality systems at score level

912

H. AlMahafzah and M.Z. AlRawashdeh

values 0.01 and 1 which are 88.50 and 98.50 % with the best of three modality results which are 95.00 and 99.50 % at FAR values 0.01 and 1, respectively. The fusion of three modalities at score level (Max and Min Rules) has no performance improvement over the two modalities. The performance fusion of three modality systems at decision level is illustrated as a ROC curve in Fig. 6.

7 Conclusion By analyzing the experimental results, it can be concluded that the performance fusion of two modalities at score level “Sum Rule” has some score improvement over the single modality. Fusing FKP with either face or iris, the performance has a higher score over the best single modality performance at all FAR values except for fusing face with iris at FAR = 0.01 with less score improvement 1 %. At score level “Max Rule” fusing FKP with either face or iris, the performance has degraded over a single modality at all values of FAR. But fusing face with iris has gained some improvement over single modalities. At score level “Min Rule,” the performance was almost the same as the best single modality. The performance fusion of three modalities at score level “Sum Rule” has a good score improvement over the fusion of two modalities at FAR = 0.01 about 4 % over the highest performance of two modalities, but less improvement at FAR = 0.10 about 1 %. At score level “Max and Min Rules,” the performance has degraded over two modalities.

References 1. AlMahafzah. H., Imran M., Sheshadri, H.S.: Multibiometric: feature level fusion using FKP multi-instance biometric. IJCSI Int. J. Comput. Sci. 9(4(3)) (2012) 2. Teoh, A., Samad, S.A., Hussain, A.: Nearest neighbourhood classifiers in a bimodal biometric verification system fusion decision. J. Res. Pract. Information Technol. 36(1) (2004) 3. AlMahafzah, H., Imran, M., Sheshadri, H.S.: Multi-algorithm decision-level fusion using Finger-Knuckle-Print biometric. In: Kim, T.H. et al. (eds.) FGCN/DCA 2012,CCIS 350, pp. 302–311. © Springer-Verlag Berlin Heidelberg 2012 4. Stan, J., Li, Z., Jain, A.K.: Encyclopedia of Biometrics. Springer 5. Meraoumia, A., Chitroub, S., Bouridane, A.: Multimodal biometric person recognition system based on Iris and Palmprint using correlation filter classifier. In: Proceedings of the ICCIT, pp. 782–787 (2012) 6. Morizet, N., Gilles, J.: A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments. In: Bebis, G. et al. (eds.) ISVC 2008, Part II, LNCS 5359, pp. 661–671. Springer-Verlag Berlin Heidelberg 2008 7. Toh, K.-A., Yau, W.-Y., Jiang, X.: A reduced multivariate polynomial model for multimodal biometrics and classifiers fusion. IEEE Trans. Circuits Systems Video Technol. 14(2) (2000) 8. Giot, R., Hemery, B., Rosenberger, C.: Low cost and usable multimodal biometric system based on keystroke dynamics and 2D face recognition. In: ICPR 2010, 20th International Conference on Pattern Recognition. Istanbul, Turkey, pp. 1128–1131, 23–26 August 2010

Performance of Multimodal Biometric Systems at Score Level Fusion

913

9. Rodrigues, R.N., Ling, L.L.,Govindaraju, V.: Robustness of multimodal biometric fusion methods against spoof attacks. J. Visual Lang. Comput. Elsevier, 20(3), 129–220 (2009) 10. Shahin, M.K., Badawi, A.M., Rasmy, M.E.M.: Multimodal biometric system based on near-infra-red dorsal hand geometry and fingerprints for single and whole hands. World Acad. Sci. Eng. Technol. 56, 1107–1122 August 2011 11. Wang, Z., Yang, J., Wang, E., Liu, Y., Ding, Q.: A novel multimodal biometric system based on iris and face. Int. J. Digital Content Technol. Appl. (JDCTA), 6(2), 111–118 (2012) 12. Field, D.J.: Relation between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12), 2379–2394 (1987) 13. Ojansivu, V., Heikkilӓ, J.: Blur insensitive texture classification using local phase quantization. In: Elmoataz, A., Lezoray, O., Nouboud, F., Mammass, D. (eds.) ICISP 2008 2008, LNCS, vol. 5099, pp. 236–243. Springer, Heidelberg (2008) 14. AlMahafzah, H., Sheshadri, H.S., Imran, M.: Multi-algorithm decision-level fusion using Finger-Knuckle-Print biometric. In: Sridhar, V. et al. (eds.) Emerging research in electronics, computer science and technology. Lecture Notes in Electrical Engineering, p. 248. Springer, India (2014). doi:10.1007/978-81-322-1157-0_5

The Study of Fault Diagnosis for Numerical Controlled Machine Based on Improved Case-Based Reasoning Model Huijuan Hao, Maoli Wang and Juan Li

Abstract If there is no match case in the database, the case-based reasoning method shows the limitation. To enhance self-learning of the method, an improved model is proposed in this paper. The case-based reasoning method is combined with the fuzzy relational model in the new method. Considering the complexity of numerical controlled machine, the fuzzy relationship between the fault phenomenon and fault reason is established. The example analysis shows that the new method is relatively accurate, provides the convenience for actual maintenance, and can be used as a powerful auxiliary tool for fault analysis of numerical controlled machines.



Keywords Fault diagnosis Numerical controlled machine reasoning Fuzzy relational model





Case-based

1 Introduction Modern fault diagnosis technology plays a huge role in the fault diagnosis of numerical controlled machine. Especially, with the application of artificial intelligence, the precision of fault diagnosis becomes higher and the diagnosis rate is also growing rapidly. The most common methods based on artificial intelligence [1–6]

H. Hao  M. Wang (&)  J. Li Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Jinan, Shandong Province, China e-mail: [email protected] H. Hao e-mail: [email protected] J. Li e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_83

915

916

H. Hao et al.

are based on fault tree, gray model, fuzzy theory, artificial neural network, the expert system, and so on. The case-based reasoning technology originated from artificial intelligence [7, 8] and has become the method to solve practical problems. Considering the previous similar cases as basis, the fault reason of current case is obtained through analogy and optimization in the case-based reasoning method. It has strong similarity and repeatability. Therefore, it has unique advantages in fault diagnosis. The case-based reasoning technology is closely related to knowledge representation, and different areas have different approaches. The saving form in database, the feature representation and the database structure are studied in this paper. The study fully considered the characteristic of fault in numerical controlled machine. If there is no match case in the database, the case-based reasoning approach shows the limitation. In order to solve the problem, an improved model is proposed in this paper. It is based on the fuzzy relational model [9, 10]; the fault reason is obtained through fuzzy reasoning. So it enhances the self-learning ability of the case-based reasoning method in fault diagnosis. The study of this paper can provide a powerful auxiliary for actual machine maintenance of numerical controlled machine.

2 The Representation of Fault Case 2.1

The Structure

Fault case is described as the set C ¼ fD; S; R; M; Eg. Where D ¼ fd1 ; d2 ; . . .; dn g is a non-empty finite set, which indicates the occurred time of the fault. S ¼ fs1 ; s2 ; . . .; sn g is a non-empty finite set that describes the fault characteristic (sign). R ¼ fr1 ; r2 ; . . .; rn g is a non-empty finite set that indicates the fault reason. M is a non-empty finite set that is the treatment measures of the fault. E is the evaluation of the solution for the fault. For a specific fault, the symptom may be a few; the set is called as the fault sign of this equipment, which is shown as follows. Sp ¼ fs1 ; s2 ; . . .; sm gfm  ng

ð1Þ

The experts analyze the phenomenon (Sp) and give the fuzzy value (Vdi) of the Si(i > > S ðhÞ ¼ Sða ; b ; w ; h ; a ; b ; w ; h ; h; W Þ > > r 1 1 1 1 2 2 2 2 f > < Zo ðhÞ ¼ 2S11 =ð1  S11 Þ ð6Þ wða; bÞ ¼ wjða; b; h; Zo min ; Zo max ; d; Df ; Lr min ; Lr max ; Pmin Þ > > > > hða; bÞ ¼ hjða; b; w; Zo min ; Zo max ; d; Df ; Lr min ; Lr max ; Pmin Þ > > : multði; jÞ ¼ multgða; b; w; h; vsi ; vsi ; h; dij Þ And Z0 min ≤ Z0(θ) ≤ Z0 max; Z0 max/Z0 min ≥ Pfmax/Pfmin; Lr(θ) ≤ Lr max; w ≤ a, h ≤ b; jS11 ðDf Þ=S11 ðf0 Þj  d1 ; phaseðS21 ðDf ÞÞ  phaseðS21 ðf0 ÞÞj  d2 ; multði; jÞ  1 %; Pmin  P0 ; Pl  Pl0 . In these equations, a is waveguide width, b is waveguide height, w is ridge width, h is ridge height, Lr(θ) is the length of resonant slot, h is the inclined angle of slot, wf is the slot width, Sr(θ) and Z0(θ) are the S parameters and impedance of slot, respectively, d is the distance between radiation waveguides, Δf is working frequency, vsi and dij are voltage and distance of resonant slots, mult(i, j) is the higher order modes coupling among slots, S11(Δf) and phase (S21(Δf)) are the S parameters and phase in working frequency, Pl and Pl0 are the loss and the maximum allowable loss of power, d1 and d2 are error control parameters, Z0min and Z0max are minimum and maximum of impedance, Lr min and Lr max are minimum and maximum length of resonant slot, Pfmax and Pfmin are minimum and maximum coupling power of resonant slot, and Pmin is the minimum power capacity of ridge waveguide.

4 Simulation Results Three types of end-fed slotted waveguide array are designed and simulated which are all composed four radiation waveguide and one fed-waveguide. The structures of these arrays are shown in Fig. 4A–C. A and C are projected with common waveguide with different fed-waveguide size and same radiation ones. B is designed with the method proposed in this paper which is suitable for the slotted ridge waveguide array. All the radiation waveguides in three models have the same width value 17.2 mm and height value 4.4 mm. The fed-waveguide in A is 29.64 mm in width and 4.4 mm in height, while the values of C are 20.04 and 4.4 mm respectively. In B model, the single-ridged fed-waveguide is of 20.04 mm width and 4.4 mm height. The size of ridge in fed-waveguide is 10 mm in width and 2.7 mm in height, while the ridge in double-ridged radiation waveguide of B is 8.6 mm in width and 2 mm in height. The end of each radiation waveguides in three models is matching, the characteristics of fed-divider will be examined. Three models are designed and calculated by software HFSS 12.0. The properties of these arrays, such as VSWR and magnitude-phase insistence, are provided in Table 1 and Figs. 5 and 6. It can be seen that the ridge waveguide model B has

932

X. Yan and Y. Yuan

Fig. 4 Three simulated models of slotted waveguide array

A

Table 1 VSWR result

Fig. 5 Max magnitude difference (dB) S21–S51

B

C

Frequency (GHz)

A

B

C

9.4 9.5 9.6 9.7 9.8 9.9 10.0 10.1

2.85 2.64 2.77 3.35 4.88 9 19 32.33

1.78 1.51 1.271 1.101 1.06 1.21 1.38 1.55

2.92 2.23 1.56 1.15 1.34 1.82 2.51 3.25

A Design of Fed-Divider for Slotted Ridge Waveguide Antenna Array

933

Fig. 6 Max phase difference (°) S21–S51

the excellent matching characteristic and magnitude-phase insistence in working frequency compared the other two common ones. The VSWR of model B is less than 1.8 from 9.4 to 10.1 GHz, the maximum magnitude difference in model B is 1.6 dB while the values in A and C are 7 and 5.4 dB, the maximum phase difference is 21° compared with 201° in A and 82.

5 Conclusion A design method of fed-divider of slotted ridge waveguide antenna is provided. First, the equivalent model of fed-slot is deduced and the fed-slots can be calculated accurately. Then, the design equations are investigated by considering parameters of single-ridged fed-waveguide, double-ridged radiation waveguide and the electrical performances of array in analytical model. Finally, three typical slotted waveguide antenna arrays are designed in which B model is calculated by the method proposed in this paper. After simulation by software HFSS12.0, the results demonstrate the excellent characteristics of B model in VSWR and magnitude-phase insistence, where the work band is from 9.4 to 10.1 GHz and confirm the validity of this method. Furthermore, the fed-divider design is suitable for slotted ridge waveguide antenna in any frequency and dimension.

References 1. Hansen, R.C.: Phased Array Antennas. Wiley, New York (1998) 2. Hamadallah, M.: Frequency limitations on broad-band performance of shunt slot arrays. IEEE Trans. Antennas Propag. 37(7), 817–823 (1989) 3. Coetzee, J.C., Joubert, J., Tan, W.L.: Frequency performance enhancement of resonant slotted waveguide arrays through the use of wideband radiators or subarraying. Microwave Opt. Technol. Lett. 22(1), 35–39 (1999)

Positive Opinion Influential Node Set Selection for Social Networks: Considering Both Positive and Negative Relationships Jing (Selena) He, Harneet Kaur and Manasvi Talluri

Abstract Viral marketing is a cost-effective marketing strategy that promotes products by giving free or discounted items to a selected group of highly influential individuals, in the hope that through the word-of-month effects, a large number of products adoptions will occur. Motivated by viral marketing strategies, lots of researches try to investigate how to find the subset of individuals to maximize the influence spread in the network. However, most of studies focus on friendship relations but ignoring foe relationships. Hence, in the paper, we propose a new influence diffusion mode considering both friend and foe relationships in social networks. Moreover, we propose a novel problem called Positive Opinion Influential Node Set (POINS) selection problem. Subsequently, a greedy algorithm called POINS-GREEDY is presented to address the POINS selection problem. Finally, to validate POINS-GREEDY, simulations are conducted on random graphs.







Keywords Influence maximization Social network Influential node set Positive influence Negative influence Friendship relationship Foe relationship







1 Introduction Social network can be considered as a graph made up of a set of actors (such as individuals) and a set of the interactions between these actors. Recently, the rapidly increasing popularity of Online Social Networking (OSN) sites such as Facebook, Twitter, and Google+opens up great opportunities for large-scale viral marketing J. (Selena) He (&)  H. Kaur  M. Talluri Department of Computer Science, Kennesaw State University, Kennesaw, GA, USA e-mail: [email protected] H. Kaur e-mail: [email protected] M. Talluri e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_85

935

936

J. (Selena) He et al.

campaigns. Because of the emergence of OSN’s in the past decades, lots of researchers start to study the information propagation in OSNs. Motivated by marketing applications, an optimization problem called influence maximization [1–3] received considerable attentions in recent years, where the goal is to identify a seed set (i.e., a small subset of individuals) that can be served as early adopters of a new technology or a new product and trigger a large word-of-mouth cascade in OSNs. Kempe et al. [1] formulated the influence maximization problem as a problem in discrete optimization. To be specific, given a network graph G with pairwise user influence probabilities on edges, and a positive number k, the problem is to find k initially activated users, with the objective of the expected spread of influence is maximized under certain proposed influence diffusion models. Kempe et al. also proposed two influence diffusion models in [1] which are the Independent Cascade (IC) and the Linear Threshold (LT) model. Some researches argued that neither IC nor LT models fully incorporate important temporal aspects that have been well observed in the dynamics of influence diffusion. Hence, another model named voter model is proposed in [4, 5]. The voter model is more suitable for modeling opinion diffusions since people may switch opinions back and forth from time to time due to the interactions with different people in OSNs. These diffusion models have been used to solve the influence maximization problem in [1–8]. Many of the studies are based on the social networks that have friendship relationships. Here in this paper, besides friendship relationships, we also consider the foe relationships. With this consideration, an individual will get positively influenced by a friend and negatively influenced by a foe. Positive influence means the individual considers the idea of the friend and follows it. In the same way the negative influence means the individual takes the opinion of the foe and follows the opposite way. Positive and negative weights represent the fraction of a node by which it can be influenced by other node. In the same way, the positive and negative weights depend on the type of a relationship between both nodes whether there is a friendship or foe relationship [9–14]. Now let us take the real world example of social network (shown in Fig. 1). Andy is the individual who is being influenced and John is the friend of Andy, and Bob is an enemy to John. The viral marketing work in such a way that when John posted that he likes a Starbucks coffee, then by seeing the post of John, Andy would be positively influenced and would like to try that coffee at Starbucks. Similarly, when Bob posted that he likes some restaurant, Andy would not show any interest in that post and in addition there is a chance of Andy hating that restaurant by the principle of “the friend of my enemy is my enemy”. Social network is usually modeled as a graph as shown in Fig. 1. The nodes represent the individuals and the edges represent the relationships between individuals. In this paper, we are going to study the influence propagation in social networks considering both positive and negative relationships. The recent studies on networks with negative and positive relationships gained lot of attention because the social networks in real world have both positive and negative relationships. 
Hence, in this paper, we investigate the Positive Opinion Influential Node Set (POINS) selection problem which maximizes the spread of positive opinion influence through the social network. Specifically, the main contributions of this paper are summarized as follows: (1) Taking into the consideration of both positive and negative

Positive Opinion Influential Node Set Selection for Social …

937

Fig. 1 A online social network with social influences on edges

influences, we define a new influence diffusion model, (2) We introduce a new optimization problem named POINS selection problem, which is to identify the set of positive influential nodes that could maximize the positive opinion in the network, (3) We propose a greedy algorithm called POINS-GREEDY to solve the POINS selection problem, and (4) Finally, we validate the proposed solution by performing simulations on random graphs. The rest of the paper is organized as follows: in Sect. 2, we review some related literatures. In Sect. 3, we first introduce the network model and diffusion model and then formally define the POINS selection problem. The greedy algorithm POINS-GREEDY is presented in Sect. 4. The simulation setting and results are presented in Sect. 5, followed by the conclusion and future work in Sect. 6.

2 Related Work In this section, we summarize the related research works on influence diffusion models and influence maximization problem.

2.1

Influence Diffusion Models

The basic diffusion models that are described in all recent studies are as follows: Linear Threshold (LT) Model: LT model [1] assumes that the sum of the weights of all neighbor nodes must be less than or equal to one. Moreover, each node has a

938

J. (Selena) He et al.

specified preset threshold value. If the total weight of the active neighbors is greater than or equal to the preset threshold, then the node is activated and this node tends to influence all of its neighbor nodes. The important constrain here is that the node can be triggered as active from inactive status but not vice versa. Another model called Independent Cascade (IC) Model [2] describes that an active node influences/activates its neighbor nodes with certain probability. The authors in [15] introduce a new diffusion model named Susceptible/Infected/Susceptible (SIS) model. In SIS model, the nodes become active multiple times, thus increases the complexity of finding the influential nodes. Voter model [9] takes advantage of random walk concept. To be specific, each node in the network first holds one of two opposite opinions, represented by black and white colors. At each step, every node randomly picks one outgoing neighbor node with the probability proportional to the weight of edge between the pair of nodes and changes its color to the outgoing neighboring node’s color.

2.2

Influence Maximization Problem and Its Variations

All of the above diffusion models can be applied to the influence maximization problem. While the common goal is to maximize the spread of influence in a social network, the work also aims at an effective algorithm for selecting the subset of most influential individuals in a given social network. Under the LT and IC models, finding this subset is a discrete optimization problem; the resulting influence function is submodular, and the influence maximization problem is NP-hard for both models. We now discuss algorithms for the influence maximization problem. The authors in [15] reduce the computational complexity by using a layered graph together with strategies such as pruning and burnout; their experiments show that the proposed SIS-based approach gives better results with smaller computational complexity than other models. The work on scalable and robust influence maximization in social networks [16] proposed an algorithm called IRIE, which integrates Influence Ranking (IR) and Influence Estimation (IE) to solve the influence maximization problem under both the Independent Cascade (IC) model and its extension, the IC-N model, which incorporates negative opinion propagation. In IRIE, each selected node is given a ranking; experiments on real social networks show that IRIE gives better results than other algorithms. The authors in [17] mainly discuss how to design different algorithms for the influence maximization problem, and also summarize and compare algorithms designed under the LT, IC, and other models. Standard solutions to influence maximization are not suitable when one wants to maximize influence spread within a given timeline/deadline, so the authors in [3] propose two algorithms to overcome this inefficiency: one is based on a programming procedure that computes the exact influence, and the other converts the problem to one under the original IC model and applies fast heuristics to it. The experiments show that the proposed algorithms give results comparable to the greedy algorithm. This research focuses on time-critical influence, where a product promoted by viral marketing must reach the maximum number of people within a short time frame.

Many variations of the influence maximization problem are now appearing. The authors in [18] focus on competitive influence propagation in OSNs under the competitive linear threshold model, studying the influence blocking maximization problem, in which a node tries to block the influence propagation of a competing node by selecting a set of nodes that initiate its own influence propagation. The proposed competitive LT model for directed acyclic graphs gives better results and accuracy than the greedy algorithm. So far, most research has considered only influences over edges, i.e., influence within the network; however, the authors in [19] performed many experiments on Twitter and found that only 71 % of influence is in-network, while 29 % comes from outside the network, where individuals see other sites and post the tweets they find most interesting, containing URLs of articles, videos, jokes, etc. Taking this parameter into account in the influence maximization problem, they propose a new model that is more accurate than previous ones. The researchers in [20] argue that the smaller the group of influential nodes, the higher the accuracy; the main problem is to identify the initial links that maximize the impact on society. The authors in [21] consider heterogeneous social networks, which contain multiple types of nodes and links. Their method includes two stages: in the first stage, a human-based influence graph is generated, where the weights on edges represent how special the target node is to the source node; in the second stage, entropy-based heuristic algorithms identify the disseminators in the constructed influence graph so as to maximize the influence spread. The authors in [22] study the influence maximization problem in the voter model on unsigned and undirected graphs and apply node degree as the greedy criterion to pick the seed set. In contrast, in [23] the influence propagation problem is investigated on signed graphs, especially weakly connected or disconnected signed graphs.

The aforementioned studies mainly consider social networks with only positive relationships, but in practice the relationship between two individuals can also be negative if one is a foe (enemy) of the other. The influence may depend on many factors, such as who influences the individual and how the two are related: friends can influence an individual positively, while influence coming from a foe may be negative. Thus, in this paper, we propose a new influence diffusion model that incorporates negative relationships and introduce a novel POINS selection problem. The proposed diffusion model is more practical because, in the real world, we have to consider both positive and negative influences, where individuals can positively or negatively influence their neighbors with certain probabilities.

3 Network Model and Problem Definition

3.1 Network Model

We consider a social network as a weighted undirected graph G = (V, E, W), where V is the set of nodes, denoted by v_i with i the node ID. In the model, each node initially holds one of the two opposite opinions A or B, where we further assume that opinion A is positive and opinion B is negative. To be specific, we divide the set of nodes into two categories:

$$
V = \begin{cases} V_A = \{ v_i \mid v_i \text{ has opinion } A \} \\ V_B = \{ v_i \mid v_i \text{ has opinion } B \} \end{cases} \tag{1}
$$

For example, in Fig. 2, V_A = {v_1, v_4, v_6} and V_B = {v_2, v_3, v_5}. E is the set of edges, where (v_i, v_j) ∈ E represents an interaction between the two nodes v_i and v_j. W is the set of edge weights, with w_ij ≠ 0 if and only if (v_i, v_j) ∈ E. An edge weight can be either a positive or a negative entry: a positive entry w^+_ij represents that v_i considers v_j a friend or trusts v_j, while a negative entry w^-_ij represents that v_i considers v_j a foe or distrusts v_j. The absolute value |w_ij| represents the trust or distrust value of the relationship. Denoting the friendship and foe relations by their respective edge sets, the weight set can be represented by the following formula:

$$
W = \begin{cases} W^+ = \{ w^+_{ij} \mid (v_i, v_j) \text{ is a friendship edge} \} \\ W^- = \{ w^-_{ij} \mid (v_i, v_j) \text{ is a foe edge} \} \end{cases} \tag{2}
$$

For example, from Fig. 2, W^+ = {w^+_12, w^+_14, w^+_56} and W^- = {w^-_15, w^-_26, w^-_23, w^-_24, w^-_34}. To be specific, from the figure we can say that v_1 is a friend of v_2 with social influence 0.1, since the weight of their edge is positive, while v_1 and v_5 are in a foe relationship with social influence 0.2, because the weight on their link is negative. The same rule holds for every edge in the network. Before we introduce the proposed diffusion model, we first define some useful terms.

Definition 1 Neighboring Node Set (N_i). For a network G = (V, E, W), the Neighboring Node Set of v_i is defined as follows:


Fig. 2 Demonstration of diffusion model through a social network

$$
N_i = \{ v_j \mid (v_i, v_j) \in E,\; w_{ij} \neq 0 \}. \tag{3}
$$

Definition 2 Neighboring Node Set with Opinion A (N_i^A). For a network G = (V, E, W), the Neighboring Node Set with Opinion A of v_i is defined as follows:

$$
N_i^A = \{ v_j \mid (v_i, v_j) \in E,\; w_{ij} \neq 0,\; v_j \in V_A \}. \tag{4}
$$

Definition 3 Neighboring Node Set with Opinion B (N_i^B). For a network G = (V, E, W), the Neighboring Node Set with Opinion B of v_i is defined as follows:

$$
N_i^B = \{ v_j \mid (v_i, v_j) \in E,\; w_{ij} \neq 0,\; v_j \in V_B \}. \tag{5}
$$

The relation of the above sets can be summarized as N_i = N_i^A ∪ N_i^B. For example, in Fig. 2, the neighboring set of v_2 is N_2 = {v_1, v_3, v_4, v_6}, N_2^A = {v_1, v_4, v_6}, and N_2^B = {v_3}.

3.2 Diffusion Model

In a social network, a node can be in either an active or an inactive status. As mentioned in the network model, every node holds one of the two opposite opinions A or B, so we further divide the active status into opinion-A active and opinion-B active. At time t, every inactive node in the network may change its status to active at time t + 1 according to the status of its neighboring nodes, where t denotes the time slot. Each node in the network belongs to one of the three node sets defined below.

Definition 4 Active Node Set with Opinion A (S^A(t)). For a network G = (V, E, W), the Active Node Set with Opinion A at time slot t is a subset S^A(t) ⊆ V_A such that all nodes in S^A(t) are active at time t and hold opinion A. It is defined as:

$$
S^A(t) = \{ v_i \mid v_i \in V_A \text{ and } v_i \text{ is active at time } t \}. \tag{6}
$$

Definition 5 Active Node Set with Opinion B (S^B(t)). For a network G = (V, E, W), the Active Node Set with Opinion B at time slot t is a subset S^B(t) ⊆ V_B such that all nodes in S^B(t) are active at time t and hold opinion B. It is defined as:

$$
S^B(t) = \{ v_i \mid v_i \in V_B \text{ and } v_i \text{ is active at time } t \}. \tag{7}
$$

Definition 6 Inactive Node Set (S^IN(t)). For a network G = (V, E, W), the Inactive Node Set (holding opinion A or B) at time slot t is a subset S^IN(t) ⊆ V such that all nodes in S^IN(t) are inactive at time t. It is defined as follows:

$$
S^{IN}(t) = \{ v_i \mid v_i \in V \text{ and } v_i \text{ is inactive at time } t \}. \tag{8}
$$

In this model, v_i is influenced positively or negatively by its neighboring node set according to the weights, where Σ_{v_j ∈ N_i} |w_ij| ≤ 1. Each node v_i chooses two thresholds θ_i^A and θ_i^B uniformly at random from the interval [0, 1], where θ_i^A and θ_i^B represent the weighted fraction of v_i's neighbors that must be active in order for v_i to become active with opinion A and opinion B, respectively.


In Fig. 2, we use different colors to differentiate nodes by status: a black node belongs to S^A(t), a blue node to S^B(t), and a white node to S^IN(t). At time t = 0, each node holds one of the two opposite opinions, and the initial active node sets are S^A(0) and S^B(0), respectively. Hence, at time t, the status of a node v_i is decided by the status of its neighboring nodes at time t − 1:

$$
v_i \in \begin{cases}
S^A(t), & \text{if } \sum_{v_j \in S^A(t-1)} w^+_{ij} + \sum_{v_j \in S^B(t-1)} |w^-_{ij}| \ge \theta_i^A \\
S^B(t), & \text{if } \sum_{v_j \in S^B(t-1)} w^+_{ij} + \sum_{v_j \in S^A(t-1)} |w^-_{ij}| \ge \theta_i^B \\
S^{IN}(t), & \text{otherwise}
\end{cases} \tag{9}
$$

In Eq. (9), a node may be influenced by both opinions A and B, i.e., both thresholds θ_i^A and θ_i^B are satisfied; in this case node v_i keeps the same opinion it held at time t − 1 but becomes active. We assume that nodes which are active at time t − 1 remain active at the next time slot t.

For a better understanding of the proposed diffusion model, we use an example network to explain it. In Fig. 2a, at t = 0, S^A(0) = {v_1, v_4, v_6}, S^B(0) = {v_3}, and S^IN(0) = {v_2, v_5}. At time t = 1, the status of v_2 and v_5 changes according to Eq. (9). Suppose the two thresholds θ_2^A and θ_2^B for v_2 are both 0.5, and for v_5 the values of θ_5^A and θ_5^B are both 0.2. For v_2, by Eq. (9), w^+_12 + |w^-_23| = 0.1 + 0.2 = 0.3 ≥ θ_2^A = 0.5 is false, while |w^-_24| + |w^-_26| = 0.4 + 0.2 = 0.6 ≥ θ_2^B = 0.5 is true. Hence, v_2 is influenced to adopt opinion B. For v_5, w^+_56 = 0.2 ≥ θ_5^A = 0.2 is true and |w^-_15| = 0.2 ≥ θ_5^B = 0.2 is also true; in this case both thresholds are satisfied, so v_5 keeps its own opinion at t − 1 but becomes active. Hence, v_5 is activated with its own opinion B. The node sets at time t = 1 are therefore updated to S^A(1) = {v_1, v_4, v_6}, S^B(1) = {v_2, v_3, v_5}, and S^IN(1) = ∅ (empty set), as shown in Fig. 2b.

3.3 Positive Opinion Influential Node Set (POINS) Selection Problem

Now we are ready to define the problem we investigate. The objective of the POINS selection problem is to maximize the spread of positive-opinion influence through the social network by identifying a subset of the most influential nodes as the initial nodes. Before introducing the problem, we first formally define the new terminology and then give the definition of the POINS selection problem.

Consider a scenario where each person holds one of two opposite opinions. For example, consider two political parties A and B with opposite opinions; one party tries to win the maximum number of votes from the public (to become the dominating party) by selecting initial nodes that maximize its influence. The proposed POINS selection problem finds an Active Node Set with Opinion A of size k that influences the maximum number of people to hold opinion A, so that party A gets more votes from the public.

Definition 7 Expected value of S^A(t) (ρ(t)). The expected value of the Active Node Set with opinion A is ρ(t) = Σ_t |S^A(t)|, where t is the time slot.

Definition 8 Positive Opinion Influential Node Set (POINS). For a network G = (V, E, W), S^B(0), and k, the POINS selection problem is to find an initial active node set with opinion A, denoted S^A(0), of size at most k that maximizes ρ(t), i.e., arg max_{|S^A(0)| ≤ k} ρ(t).

4 Solution to the POINS Selection Problem

We propose a greedy algorithm called POINS-GREEDY to solve the POINS selection problem. Before introducing POINS-GREEDY, we first define a useful contribution function as follows:

Definition 9 Contribution Function (f(v_i)). For a network G = (V, E, W), an initial active node set with opinion A, S^A(0), and an initial active node set with opinion B, S^B(0), the contribution function is defined as:

$$
f(v_i) = \frac{|N_i^B|}{\delta} + \sum_{v_j \in S^A(0)} w^+_{ij} + \sum_{v_j \in S^B(0)} |w^-_{ij}|, \tag{10}
$$

where δ denotes the maximum degree in the graph G. Based on this contribution function, we propose a greedy algorithm, shown as Algorithm 1. POINS-GREEDY starts from the empty set (Line 1 in Algorithm 1). We first search the neighboring set of S^B(0); in each iteration, the node with the maximum f(v_i) value (ties are broken by node ID) is added to S^A(0), as shown in Lines 2–5 of Algorithm 1. After searching all neighboring nodes of S^B(0), if the size of S^A(0) is still less than k, we search the remaining nodes that are not in S^A(0) ∪ S^B(0) (Lines 6–9 of Algorithm 1). The algorithm terminates when |S^A(0)| = k. Finally, S^A(0) is returned (Line 10 in Algorithm 1).
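Since the pseudocode of Algorithm 1 is not reproduced here, the following Python sketch restates POINS-GREEDY from the description above; the function names and data structures are our own.

```python
def poins_greedy(V, neigh, n_B, w_pos, w_neg, S_B0, k, delta):
    """Greedy POINS selection following the description of Algorithm 1.

    neigh[v]: set of v's neighbors; n_B[v]: |N_v^B|, the number of v's opinion-B neighbors;
    w_pos / w_neg: positive / absolute negative edge weights; delta: maximum degree of G.
    """
    S_A0 = set()                                              # Line 1: start from the empty set

    def f(v):                                                 # contribution function, Eq. (10)
        return (n_B[v] / delta
                + sum(w_pos.get(v, {}).get(u, 0.0) for u in S_A0)
                + sum(w_neg.get(v, {}).get(u, 0.0) for u in S_B0))

    def pick_from(pool):                                      # highest f first, ties by node ID
        while pool and len(S_A0) < k:
            best = min(pool, key=lambda v: (-f(v), v))
            S_A0.add(best)
            pool.discard(best)

    # Lines 2-5: first search the neighbors of S_B(0)
    pick_from(set().union(*(neigh[u] for u in S_B0)) - S_B0 if S_B0 else set())
    # Lines 6-9: if |S_A(0)| < k, search the remaining nodes
    pick_from(set(V) - S_A0 - S_B0)
    return S_A0                                               # Line 10
```

On the example of Fig. 2a with S^B(0) = {v_2} and k = 2, this sketch should select v_4 first and then v_1, matching the walkthrough below.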


To better understand the proposed greedy algorithm, we use the social network shown in Fig. 2a to illustrate the selection procedure. In this example, S^B(0) = {v_2} and k = 2.

(1) First round: S^A(0) = ∅ (empty set).
(2) Second round: we first search the set ∪_{v_j ∈ S^B(0)} N_j = {v_1, v_4, v_6}. After calculating the contribution function values, we get f(v_1) = 2/4 = 0.5, f(v_4) = 1/4 + 0.3 = 0.55, and f(v_6) = 2/4 = 0.5. Therefore, S^A(0) = {v_4}, since v_4 has the maximum contribution value. The size of S^A(0) is still less than k = 2, so the selection procedure continues.
(3) Third round: we recalculate the contribution function on the updated S^A(0) set and get f(v_1) = 2/4 + 0.5 = 1.0 and f(v_6) = 2/4 = 0.5. Hence S^A(0) = {v_1, v_4}. Since the size of S^A(0) now equals k = 2, the algorithm terminates and S^A(0) = {v_1, v_4} is returned.

The proposed algorithm tries to minimize the number of active nodes with opinion B, hence it searches the neighboring nodes of S^B(0) first. Moreover, the number of neighboring nodes with opinion B is taken into account when calculating the contribution function value, which further increases the opportunity to reduce the number of active nodes with opinion B.

5 Performance Evaluation

Currently, there is no existing work studying the POINS selection problem, so the simulation results of POINS-GREEDY (denoted by POINS) are compared with the optimal solution obtained by exhaustive search (denoted by OPTIMAL).


5.1 Simulation Setting

We built our own simulator to generate random graphs based on the random graph model G(n, p), where n is the number of nodes in the graph and p is the probability of generating an edge between a pair of nodes. The weights on edges, representing social influences, are randomly generated between 0 and 1. Moreover, each node holds opinion A with probability 0.5. To simplify the testing procedure, we set θ_i^A = θ_i^B = 0.5. The simulation results under different scenarios are shown below; a sketch of this setup is given after this paragraph.
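The sketch below shows one way to generate such an instance (our own generator, not the authors' simulator); how edges are split between friend and foe relations is not specified in the text, so the 50/50 split here is an assumption.

```python
import random

def generate_instance(n, p, theta=0.5, seed=None):
    """Random G(n, p) instance matching the simulation setting described above."""
    rng = random.Random(seed)
    w_pos = {i: {} for i in range(n)}
    w_neg = {i: {} for i in range(n)}
    neigh = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                      # edge appears with probability p
                w = rng.random()                      # social influence weight in (0, 1)
                target = w_pos if rng.random() < 0.5 else w_neg  # friend/foe split: assumption
                target[i][j] = target[j][i] = w
                neigh[i].add(j)
                neigh[j].add(i)
    opinion = {i: ('A' if rng.random() < 0.5 else 'B') for i in range(n)}
    theta_A = {i: theta for i in range(n)}            # thresholds fixed to 0.5 as in the setting
    theta_B = {i: theta for i in range(n)}
    return w_pos, w_neg, neigh, opinion, theta_A, theta_B
```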

5.2 Simulation Results

In this subsection, we analyze the number of active nodes with opinion A for POINS and for OPTIMAL under different scenarios in the previously mentioned random graphs. In this simulation, we consider two tunable parameters: the network size n, and the probability p of generating an edge between a pair of nodes in the random graph model G(n, p). To validate the performance of the proposed greedy algorithm, we implemented an exhaustive search to find the optimal solution for the proposed POINS problem. Obviously, the exhaustive search does not work on large-scale networks; hence, we run the simulations on small-scale networks, with the network size n ranging from 10 to 20 nodes. The collected simulation results are shown in Fig. 3. The impacts of n and p on the size of the active node set with opinion A are shown in Fig. 3a, b, respectively. In Fig. 3a, the x-axis represents the network size n (changing from 8 to 20) and the y-axis represents the number of activated nodes with opinion A. From Fig. 3a, we can see that the sizes of the active node sets with opinion A for POINS and OPTIMAL both increase with the network size n. This is because, when the network size increases, each node has more opportunities to be influenced

Fig. 3 Simulation results. The default setting is n = 15, p = 0.5, and θ = 0.5


by its neighbors; hence, more nodes are influenced with opinion A. Additionally, for a specific network size, OPTIMAL produces a larger set of active nodes with opinion A. This is expected, since OPTIMAL guarantees finding the optimal solution by exhaustive search. Nevertheless, our proposed POINS-GREEDY finds near-optimal solutions in small-scale networks: POINS produces only 1.07 fewer active nodes with opinion A than the optimal solution. In Fig. 3b, the x-axis represents the probability p of generating an edge between a pair of nodes (changing from 0.1 to 0.9) and the y-axis represents the number of activated nodes with opinion A. From Fig. 3b, we found no obvious trend in the sizes of the active node sets with opinion A for either algorithm as p increases (note that the y-axis values range from 3 to 6.5). This is because increasing p introduces more edges into the network, so a given node may have more negative or positive neighbors; in a very crowded network it is hard to discern a pattern in the sizes of the active node sets with opinion A. On the other hand, for a specific p, POINS produces a smaller set of active nodes with opinion A; on average, POINS produces only 1.6 fewer active nodes with opinion A than the optimal solution. The simulation results imply that our proposed POINS-GREEDY constructs solutions very close to the optimal ones in small-scale networks. Whether these results remain consistent for large-scale networks still needs to be validated.

6 Conclusion and Future Work

In this paper, we propose a new influence diffusion model that considers both friendship and foe relationships in social networks. A novel problem called the POINS selection problem is proposed and studied, which is useful for eliminating negative influences while promoting products in social networks. A greedy algorithm is proposed to solve the POINS selection problem, followed by simulation-based validation on random graphs. In future work, we will try to analyze the performance ratio of the proposed greedy algorithm and run simulations on large-scale networks or on real social data sets to further validate POINS-GREEDY.

Acknowledgments This research is partly supported by the Kennesaw State University College of Science and Mathematics Interdisciplinary Research Opportunities (IDROP) Program and Computer Science Department Mini-Grant research grant awards.

References 1. Richardson, M., Domingos, P.: Mining knowledge-sharing sites for viral marketing. In: SIGKDD, pp. 61–70 (2002) 2. Kempe, D., Kleinberg, J., Tardos, E.: Maximizing the spread of influence through a social network. In: SIGKDD, pp. 137–146 (2003)


3. Zhang, N., Lu, W., Chen, W.: Time-critical influence maximization in social networks with time-delayed diffusion process. In: AAAI (2012) 4. Clifford, C., Sudbury, A.: A model for spatial conflict. Biometrika 60(3), 581–588 (1973) 5. Holley, R., Liggett, T.: Ergodic theorems for weakly interacting infinite systems and the voter model. Ann. Prob. 3, 643–663, (1975) 6. Chen, W., Yuan, Y., Zhang, L.: Scalable influence maximization in social networks under the linear threshold model. In: ICDM, pp. 88–97 (2010) 7. Even-Dar, E., Shapira, A.: A note on maximizing the spread of influence in social networks, pp. 281–286. Internet and Network, Economics (2007) 8. Goyal, A., Lu, W., Lakshmanan, L.: Simpath: an efficient algorithm for influence maximization under the linear threshold model. In: ICDM, pp. 211–220 (2011) 9. Li, Y., Chen, W., Wang, Y., Zhang, Z.: Influence diffusion dynamics and influence maximization in social networks with friend and foe relationships. In: WSDM, pp. 657–666 (2013) 10. Chen, W., Collins, A., Cummings, R., Ke, T., Liu, Z., Rincn, D., Sun, X., Wang, Y., Wei, W., Yuan, Y.: Influence maximization in social networks when negative opinions may emerge and propagate. In: SDM, pp. 379–390 (2011) 11. Leskovec, J., Huttenlocher D., Kleinberg, J.: Predicting positive and negative links in online social networks. In: WWW, pp. 641–650 (2010) 12. He, J, Ji, S., Liao, X., Haddad, H.M., Beyah, R.: Minimum-sized positive influential node set selection for social networks: considering both positive and negative influences. In: IPCCC, pp. 1–8 (2013) 13. He, J., Ji, S., Beyah, R., Cai, Z.: Minimum-sized influential node set selection for social networks under the independent cascade model. In: MOBIHOC, pp. 93–102 (2014) 14. Leskovec, J., Huttenlocher, D., Kleinberg, J.: Signed networks in social media. In: Human Factors in Computing Systems, pp. 1361–1370 (2010) 15. Saito, K., Kimura, M., Motoda, H.: Discovering Influential Nodes for SIS models in social networks. In: Discovery Science, pp. 302–316 (2009) 16. Jung, K., Heo, W., Chen, W.: IRIE: scalable and robust influence maximization in social networks. In: ICDM, pp. 918–923 (2012) 17. Singer, Y.: How to win friends and influence people, truthfully: influence maximization mechanisms for social networks. In: WSDM, pp. 733–742 (2012) 18. He, X., Song, G., Chen, W., Jiang, Q.: Influence blocking maximization in social networks under the competitive linear threshold model. In: SDM, pp. 463–474 (2011) 19. Myers, S., Zhu, C., Leskovec, J.: Information diffusion and external influence in networks. In: SIGKDD, pp. 33–41 (2012) 20. Taheri, A., Afshar, M., Asadpour, M.: Influence maximization for informed agents in collective behavior. In: Distributed Autonomous Robotic Systems, pp. 389–402 (2010) 21. Galstyan, A., Musoyan, V., Cohen, P.: Influence propagation and maximization for heterogeneous social networks. In: WWW, pp. 559–560 (2012) 22. Even-Dar, E., Shapira, A.: A note on maximizing the spread of influence in social networks. In: Internet and Network Economics, pp. 281–286 (2007) 23. Chen, W., Wang, Y., Yang, S.: Efficient influence maximization in social networks. In: SIGKDD, pp. 199–208 (2009)

Outage Probability of Hybrid Duplex Relaying in Cooperative Communication Xianyi Rui, Pan Xu and Fei Xu

Abstract In this paper, we propose a hybrid duplex mode for cooperative communication, which combines a full-duplex and a half-duplex relay. We derive the theoretical expression of the outage probability under the decode-and-forward mode. We compare the system performance with and without a source–destination link and show that the scheme with the source–destination link is better. Computer simulations show that the proposed hybrid duplex scheme has a lower outage probability and better system performance.

Keywords Full duplex · Half duplex · Self-interference · Outage probability · Rayleigh fading

1 Introduction

Cooperative communication has attracted more and more attention worldwide in recent years. The relay technique is an attractive method to improve communication system capacity and extend base-station coverage, and it is therefore a key technique in the new generation of cooperative communication [1, 2]. Relay systems can be classified into two categories based on whether reception and retransmission at the relay are simultaneous. The first type is the half-duplex relay (HDR) system, in which reception and retransmission are performed in time-orthogonal channels [3, 4]; the half-duplex mode therefore incurs a 50 % spectral-efficiency penalty, because it requires two time slots to send one symbol. To make up for this loss of spectral efficiency, the second type, the full-duplex relay (FDR) system, allows the relay to receive and retransmit at the same time and on the same frequency [5, 6]. However, in the FDR system,

X. Rui (&) · P. Xu · F. Xu
School of Electronic & Information, Soochow University, 215006 Suzhou, China
e-mail: [email protected]


self-interference inevitably occurs owing to signal leakage between retransmission and reception at the relay. Rather than committing to either relaying mode at an early stage of system design, researchers have begun to look for hybrid ways of using FDR and HDR together to improve system performance. Kwon et al. analyzed the outage probability of FDR and HDR over Rayleigh fading channels with decode-and-forward relaying [7]. Riihonen et al. analyzed the system capacity of FDR and HDR over Rayleigh fading channels [8]. In this paper, a hybrid full-duplex and half-duplex mode for cooperative communication is proposed. We derive the theoretical expression of the outage probability, from which performance analysis and comparison can be carried out conveniently.

2 System Model

In this section, we consider the system model for the hybrid duplex cooperative network. As illustrated in Fig. 1, there is one source (S), one full-duplex relay, one half-duplex relay, and one destination (D); there is also a direct link between the source and the destination. In our hybrid duplex relaying system, the source sends signals to the full-duplex relay, the half-duplex relay, and the destination at the same time; the relays receive the signals and retransmit them to the destination; and the destination receives and combines the information from the relays and the source. In the FDR, transmission and reception occur simultaneously at the same frequency, which causes self-interference at the relay.

Let h_sr and h_rd denote the channel impulse responses between the source and the relays and between the relays and the destination, respectively. Let h_sd and h_li denote the channel impulse responses of the source–destination link and of the loop between transmission and reception at the relay. All links are assumed to be Rayleigh distributed. Let γ_sr = γ_0|h_sr|², γ_rd = γ_0|h_rd|², γ_sd = γ_0|h_sd|², and γ_li = γ_0|h_li|² denote the instantaneous received SNRs of the two hops, the direct link, and the interference channel, respectively, where γ_0 is the average SNR. The variables γ_sr, γ_rd, γ_sd, γ_li follow independent exponential distributions with probability density functions (PDFs) f_{γ_sr}(γ) = αe^{−αγ}, f_{γ_rd}(γ) = βe^{−βγ}, f_{γ_sd}(γ) = μe^{−μγ}, and f_{γ_li}(γ) = θe^{−θγ}, where α = 1/(γ_0 Ω_sr), β = 1/(γ_0 Ω_rd), μ = 1/(γ_0 Ω_sd), and θ = 1/(γ_0 Ω_li).

Fig. 1 Hybrid FDR/HDR system model

3 Outage Probability

The outage probability of a communication system is defined as the probability that the mutual information I between the source and the destination is less than the target capacity C, P_OUT = P(I ≤ C). For the system in this paper, the outage probability can be written as

$$
P_{OUT} = P(I \le C) = P\{\max\{I_{FD}, I_{HD}, I_{SD}\} \le C\} = P\{I_{FD} \le C\} \cdot P\{I_{HD} \le C\} \cdot P\{I_{SD} \le C\} \tag{1}
$$

3.1 Outage Probability of HDR

For the half-duplex relay, the system loses 50 % of its spectral efficiency because it receives and transmits in two orthogonal channels:

$$
I_{HD} = \frac{1}{2}\log(1 + \gamma) \tag{2}
$$

For a DF scheme, the end-to-end instantaneous SNR at the destination can be tightly approximated in the high-SNR regime as

$$
\gamma_{eq}^{1} = \min\{\gamma_{SR}, \gamma_{RD}\} \tag{3}
$$

where γ_SR is the first-hop SNR with CDF F_{γ_SR}(γ) = 1 − e^{−αγ}, and γ_RD is the second-hop SNR with CDF F_{γ_RD}(γ) = 1 − e^{−βγ}. The CDF of γ_eq^1 can then be written as

$$
F_1(\gamma) = \Pr\{\gamma_{eq}^{1} < \gamma\} = \Pr\{\min(\gamma_{SR}, \gamma_{RD}) < \gamma\} = 1 - \left(1 - F_{\gamma_{SR}}(\gamma)\right)\left(1 - F_{\gamma_{RD}}(\gamma)\right) = 1 - e^{-(\alpha + \beta)\gamma} \tag{4}
$$


Combining (2) and (4), the outage probability of HDR can be written as

$$
P_{OUT}^{HD} = P\{I < C\} = P\{\gamma < 2^{2C} - 1\} = F_1(2^{2C} - 1) \tag{5}
$$

3.2 Outage Probability of FDR

For the full-duplex relay, there is no loss of spectral efficiency, because the relay receives and transmits in the same channel:

$$
I_{FD} = \log(1 + \gamma) \tag{6}
$$

Considering the self-interference between the two ends of the relay, let γ_S = γ_sr/(γ_li + 1) denote the received SNR at the relay; its CDF can be written as

$$
F_{\gamma_S}(\gamma) = 1 - \frac{\theta}{\theta + \alpha \gamma}\, e^{-\alpha \gamma} \tag{7}
$$

For a DF scheme, the end-to-end instantaneous SNR at the destination can be tightly approximated in the high-SNR regime as

$$
\gamma_{eq}^{2} = \min\{\gamma_{S}, \gamma_{RD}\} \tag{8}
$$

where γ_RD is the second-hop SNR with CDF F_{γ_RD}(γ) = 1 − e^{−βγ}. The CDF of γ_eq^2 can then be written as

$$
F_2(\gamma) = \Pr\{\gamma_{eq}^{2} < \gamma\} = \Pr\{\min(\gamma_{S}, \gamma_{RD}) < \gamma\} = 1 - \left(1 - F_{\gamma_S}(\gamma)\right)\left(1 - F_{\gamma_{RD}}(\gamma)\right) = 1 - \frac{\theta}{\theta + \alpha \gamma}\, e^{-(\alpha + \beta)\gamma} \tag{9}
$$

Combining (6) and (9), the outage probability of FDR can be written as

$$
P_{OUT}^{FD} = P\{I < C\} = P\{\gamma < 2^{C} - 1\} = F_2(2^{C} - 1) \tag{10}
$$

3.3 Outage Probability of SD Link

For the direct link between the source and the destination, the outage probability can be written as

$$
P_{SD} = P\{I < C\} = P\{\gamma < 2^{C} - 1\} = F_3(2^{C} - 1) \tag{11}
$$

Substituting (5), (10), and (11) into (1), the system outage probability can be rewritten as

$$
P_{OUT} = \left\{1 - e^{-(\alpha + \beta)(2^{2C} - 1)}\right\} \cdot \left\{1 - \frac{\theta}{\theta + \alpha (2^{C} - 1)}\, e^{-(\alpha + \beta)(2^{C} - 1)}\right\} \cdot \left\{1 - e^{-\mu (2^{C} - 1)}\right\} \tag{12}
$$

Taking no account of the direct link between the source and the destination, the system outage probability can be written as

$$
P_{OUT}^{noSD} = \left\{1 - e^{-(\alpha + \beta)(2^{2C} - 1)}\right\} \cdot \left\{1 - \frac{\theta}{\theta + \alpha (2^{C} - 1)}\, e^{-(\alpha + \beta)(2^{C} - 1)}\right\} \tag{13}
$$
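As a quick numerical check of Eqs. (12) and (13), the short Python sketch below evaluates the closed-form outage probabilities; the parameter values in the example call are illustrative only and are not the ones used in Sect. 4.

```python
import math

def outage_hybrid(gamma0, omega_sr, omega_rd, omega_sd, omega_li, C, with_sd_link=True):
    """Closed-form outage probability of the hybrid FDR/HDR scheme, Eqs. (12)-(13)."""
    a = 1.0 / (gamma0 * omega_sr)       # alpha
    b = 1.0 / (gamma0 * omega_rd)       # beta
    mu = 1.0 / (gamma0 * omega_sd)
    th = 1.0 / (gamma0 * omega_li)      # theta

    g_hd = 2.0 ** (2 * C) - 1.0         # HDR threshold: two time slots per symbol
    g_fd = 2.0 ** C - 1.0               # FDR and direct-link threshold

    F1 = 1.0 - math.exp(-(a + b) * g_hd)                               # Eq. (5)
    F2 = 1.0 - th / (th + a * g_fd) * math.exp(-(a + b) * g_fd)        # Eq. (10)
    F3 = 1.0 - math.exp(-mu * g_fd)                                    # Eq. (11)
    return F1 * F2 * F3 if with_sd_link else F1 * F2                   # Eq. (12) / Eq. (13)

# Illustrative call (linear channel gains and average SNR chosen arbitrarily):
print(outage_hybrid(gamma0=10.0, omega_sr=2.0, omega_rd=1.25,
                    omega_sd=0.5, omega_li=1.25, C=0.5))
```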

4 Numerical and Simulated Results

In this section, the theoretical derivations presented in the previous sections are verified through computer simulations. We assume the S–R, R–D, S–D, and self-interference channels are Rayleigh fading channels, and the relay protocol is DF. The simulation conditions are as follows: σ_R² = σ_B² = σ², and the target rate R is 0.5 bit/s/Hz. We assume the channel parameters from the source to the FDR and to the HDR are identical, and the parameters from the FDR and the HDR to the destination are identical; we set Ω_sr = 3 dB, Ω_rd = 1 dB, Ω_sd = 3 dB, Ω_rr = 1 dB.

Fig. 2 Comparison of outage probabilities without a source–destination link

Figure 2 illustrates the outage probabilities of the different duplex relay systems without a direct link between the source and the destination. From the figure, we can observe


Fig. 3 Comparison of outage probabilities with a source–destination link

that the outage probability of FDR is lower than that of HDR in the low-SNR region, while HDR becomes better than FDR once the SNR rises to 7 dB; in either case, the outage probability of the hybrid FDR/HDR system is lower than that of both FDR and HDR. Figure 3 illustrates the situation when the direct link between the source and the destination is taken into account. From this figure, we can draw the same conclusion as from Fig. 2: the hybrid FDR/HDR scheme is better than either FDR or HDR alone, and its outage probability is lower.

5 Conclusions

In this paper, we propose a hybrid duplex relaying (FDR/HDR) scheme. Compared with single FDR and single HDR, the hybrid duplex relaying scheme has superior system performance. Through theoretical analysis and computer simulation, we found that the proposed hybrid duplex relaying has a lower outage probability and better system performance.

Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61201213 and 61271360).

References 1. Lioliou, P., Viberg, M., Coldrey, M., Athly, F.: Self-interference suppression in full-duplex MIMO relays. In: Proceedings of IEEE Signals, Systems and Computers, pp. 658–662 (2010) 2. Kang, Y.Y., Cho, J.H.: Capacity of MIMO wireless channel with full-duplex amplify-and-forward relay. In: IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, pp. 117–121 (2009)


3. Kramer, G., Gastpar, M., Gupta, P.: Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory 51, 1244–1248 (2005) 4. Skraparlis, D., Sakarellos, V.K., Panagopoulos, A.D., Kanellopoulos, J.D.: Outage performance analysis of cooperative diversity with MRC and SC in correlated lognormal channels. EURASIP J. Wirel. Commun. Netw. 707–839 (2009) 5. Riihonen, T., Werner, S., Wichman, R., et al.: On the feasibility of full-duplex relaying in the presence of loop interference. In: IEEE Workshop on Signal Processing Advances in Wireless Communications, pp. 275–278 (2009) 6. Riihonen, T., Werner, S., Wichman, R.: Comparison of full-duplex and half-duplex modes with a fixed amplify-and forward relay. In: IEEE Wireless Communications and Networking Conference, pp. 1–5 (2009) 7. Hong, D., Kwon, T., Lim, S., Choi, S.: Optimal duplex mode for DF relay in terms of the outage probability. IEEE Trans. Veh. Technol. 59, 3628–3634 (2010) 8. Riihonen, T., Werner, S., Wichman, R.: Hybrid full-duplex/half-duplex relaying with transmit power adaptation. IEEE Trans. Wirel. Commun. 10, 3074–3085 (2011)

Design and Implementation of Automobile Leasing Intelligent Management System Based on Beidou Compass Satellite Navigation Baoming Shan, Sailong Ji and Qilei Xu

Abstract This paper introduces an intelligent automobile leasing system based on ARM. The system takes an STM32 microprocessor running the μC/OS operating system as the core of the vehicle terminal, and combines Beidou Compass satellite navigation with GPRS data transmission technology. In addition, the system uses a spatial-distance correction algorithm that greatly improves the accuracy of Beidou positioning. Finally, the data-center software is designed with Web GIS platform technology. The test results show that the system ensures reliable and stable data interaction between the vehicle terminal and the server, achieving the desired effect.

Keywords Beidou compass satellite navigation · Automobile leasing · GPRS · Face recognition technology · WebGIS

1 Introduction

For a very long time, people have put forward many innovative vehicle-consumption modes in order to alleviate the negative impacts of cars on the environment, transportation, society, and ecology; automobile leasing is one of them. Sharing cars places higher requirements on vehicle management. In the management of a specific vehicle, the relation between staff and resources must be organized and coordinated reasonably, but the traditional management approach is generally based on statistical information, so real-time information about the vehicle usually cannot be exchanged with the monitoring center in time; this cannot meet the current needs of intelligent management. At the same time, with the gradual increase in the number of vehicles, their

B. Shan (&) · S. Ji · Q. Xu
College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266042, China
e-mail: [email protected]


management has become more and more complex, so an intelligent method for vehicle management has become an urgent demand. Huang et al. [1] proposed a vehicle terminal that combines GPS and GPRS technology and exchanges data with the monitoring center through the Internet and GPRS. The work in [2] introduces a method that achieves vehicle location and monitoring functions through SMS; its disadvantage is the lack of supervision and management. The work in [3] highlights the problem of transmitting data reliably to the monitoring center when the terminal switches between base stations in different network segments. This paper puts forward an overall solution for an automobile leasing management system. The scheme combines the advantages of the Beidou navigation and positioning system, GPRS wireless data transmission technology, and Web GIS platform technology, and gives a detailed description of the key parts of the hardware and software design of the system. It then analyzes and tests the interaction between the vehicle terminal and the monitoring center system, achieving a highly integrated vehicle terminal and a convenient monitoring process; this shows that the system provides a reliable solution for the information management of vehicles.

2 The Overall Design Scheme of the System

The automobile leasing intelligent management system mainly consists of the vehicle terminal, the mobile base station, and the monitoring station, while the Beidou Compass satellite navigation system is composed of the space terminal, the ground terminal, and the user terminal, as shown in Fig. 1.


Fig. 1 Design structure of automobile leasing intelligent management system


The user needs to obtain permission to use the car: the vehicle terminal transmits the user's 3D facial-feature information and the vehicle state information to the mobile base station by means of face recognition and Beidou navigation and positioning technology, and the real-time status information of the leased car is sent to the monitoring center through the GPRS module. The mobile base station includes the MSC base controller, the SGSN serving support node, and the GGSN gateway support node; the public GPRS communication network is provided by the mobile operator and realizes the data exchange between the vehicle terminal and the monitoring station. The monitoring center management system is mainly responsible for data exchange and connection with the vehicle terminal; it includes the Web client and the database server. The Web client displays the online map and vehicle management information through Web GIS technology, while the database server stores the information of vehicles and users.

3 The Hardware Design of the Vehicle Terminal

The vehicle terminal hardware is mainly composed of a processor chip, a Flash memory module, a GPRS module, a Compass module, a face recognition module, a voice module, and other components, as shown in Fig. 2.

3.1 The Design of Processor Chip

The STM32F103 processor chip from ST is selected. It has an ARM Cortex-M3 core with a 32-bit data path, registers, a memory interface, and independent data and instruction buses, so it can fetch instructions and access data in parallel; data accesses no longer occupy the instruction bus, and performance is greatly improved.

Fig. 2 Hardware system diagram of vehicle terminal


3.2 The Design of Beidou Compass Module

In an embedded vehicle terminal system, positioning accuracy, price, volume, power consumption, anti-interference ability, and several other factors must be considered when selecting a Beidou Compass module. According to these principles, this design chooses the UM220-III Beidou module, produced by Unicorecomm with completely independent intellectual property rights, to realize positioning. The module features small size, light weight, ultra-low power consumption (less than 120 mW), and high integration; it supports joint positioning as well as independent single-system positioning, its single-chip design supports a complete receiver scheme, it directly outputs NMEA data, and its operating temperature range is −40 to +85 °C, so it is very suitable for Compass applications with strict requirements on size and power consumption. In this paper, a set of corresponding peripheral circuits is designed after analyzing the electrical, functional, and physical characteristics of the Beidou navigation chip; we mainly extend the serial I/O port and the power supply of the chip. The UM220-III module calculates the current user-terminal location when receiving satellite signals. It mainly consists of a signal input section, a signal processing section, and a signal output section, as shown in Fig. 3.

3.3 The Design of Face Recognition and Voice Information Processing

In the vehicle leasing system, each user must be validated before driving the vehicle, so we need a way to verify each user’s identity. This design uses face

Fig. 3 The design of Beidou hardware structure chart


recognition for identification, and the chosen embedded face recognition module is the ISVIDEO R64/SR64 produced by Isvision in Shanghai. The module consists of a video-processing part, a power supply system, and an interface system, which together complete the face detection and recognition functions. Internally it uses a leading face recognition algorithm on infrared images for off-line recognition, and it provides an RS232 serial port, so it can easily communicate with the processor chip. Before a user uses the vehicle, the user's face is scanned into the database of the monitoring center management system; when the user later uses the vehicle, face recognition compares the captured face with the corresponding face stored in the database to verify the user's identity.

The voice part is used to play real-time commands and prompt information from the monitoring center to the user, so it needs fast transmission and processing. In this design, we store the basic speech prompt information, sampled by a 25K A/D, in Flash memory and wait for commands from the monitoring center; the external memory plays the speech information when the main control chip receives an order. Speech is played back by PWM (pulse-width modulation): the generated PWM signal contains harmonics and a DC component; the harmonics are filtered out by an analog low-pass filter, leaving only the DC component, which is then delivered to the automobile loudspeaker to broadcast the voice information to the user. This approach is superior to the usual D/A method because it does not need a high-precision D/A converter, so the hardware cost is reduced, as shown in Fig. 2.

3.4 GPRS Wireless Transmission Communication Module Design

The GPRS module is the SIM900A from SIMCom, which has built-in TCP/IP and PPP protocol stacks, so it can establish a connection with the terminal and transmit data over GPRS services without porting a TCP/IP stack. It includes the GSM baseband, memory, GSM radio, the antenna interface, and other interfaces; the specific functions are shown in Fig. 4.

4 The Software Design of the Vehicle Terminal

After the design of the hardware circuit, the next step is the software design. The vehicle terminal software is divided into Beidou module positioning data processing, face recognition and voice command transmission and processing, and GPRS wireless data transmission. The Beidou Compass positioning data processing section is responsible for reading the module's positioning data packets and for gross-error processing; face


Fig. 4 The design of SIM900A hardware structure chart

recognition and voice information processing is responsible for identifying the user and interacting with the user; the wireless data transmission section is responsible for transmitting the data acquired by the terminal to the monitoring center system wirelessly.

4.1 Beidou Compass Module Positioning Data Processing

This part is mainly responsible for obtaining the navigation data, parsing it, and extracting the relevant fields. The vehicle terminal is based on ARM, and the Beidou navigation module is connected to the ARM development platform through a serial port; the software of this section runs on the ARM board and completes the receiver configuration and the acquisition of navigation data by reading and writing the serial port. The operating system running on the ARM development platform is μC/OS. In order to operate the serial port on the development board correctly, the serial-port parameters must be configured before the port is opened. The navigation receiver processes the received signal and extracts fields including time, longitude, speed, direction, date, and magnetic declination information. From these data we know the position, the current vehicle speed, and the direction of travel. The flow chart is shown in Fig. 5, and a parsing sketch is given below.
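For illustration, the field extraction could look like the Python sketch below for a standard NMEA 0183 RMC sentence, which carries exactly the fields listed above; the pyserial usage in the trailing comment, the port name, and the baud rate are assumptions, since the actual terminal code runs under μC/OS on the STM32.

```python
def parse_rmc(sentence):
    """Extract time, position, speed, course, date and magnetic variation
    from an NMEA RMC sentence (e.g. $GNRMC,... as output by the Beidou module)."""
    fields = sentence.strip().split('*')[0].split(',')
    if not fields[0].endswith('RMC') or len(fields) < 11 or fields[2] != 'A':
        return None                                  # not an RMC sentence or no valid fix

    def to_deg(value, hemi):                         # ddmm.mmmm / dddmm.mmmm -> decimal degrees
        if not value:
            return None
        head = 2 if hemi in ('N', 'S') else 3
        deg = float(value[:head]) + float(value[head:]) / 60.0
        return -deg if hemi in ('S', 'W') else deg

    return {
        'utc_time':  fields[1],
        'lat':       to_deg(fields[3], fields[4]),
        'lon':       to_deg(fields[5], fields[6]),
        'speed_kmh': float(fields[7]) * 1.852 if fields[7] else None,   # knots -> km/h
        'course':    float(fields[8]) if fields[8] else None,
        'date':      fields[9],
        'mag_var':   fields[10] or None,
    }

# Hypothetical host-side usage with pyserial (port name and baud rate are assumptions):
# import serial
# with serial.Serial('/dev/ttyS1', 9600, timeout=1) as port:
#     fix = parse_rmc(port.readline().decode('ascii', errors='ignore'))
```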


Fig. 5 The compass data processing flow chart

After the necessary processing of the parsed fields is completed, the results are sent to the designated functional module, such as the communication module; this process is then repeated until the end. Beidou is an active, bidirectional-ranging, two-dimensional navigation system: the ground central control system solves for the position and provides the user with three-dimensional location data. The user's position request has to be sent to the central control system, and the central control system sends the user's 3D position back after the calculation. In the meantime, the signal must travel back and forth between the ground and the geostationary satellite, in addition to the processing time of the central control system; these factors lengthen the delay, so for a fast-moving body they may increase the positioning deviation. Consequently, the positioning accuracy of the Beidou navigation system is about 20 m. This paper uses a correction algorithm based on the spatial-distance principle [4]. The Beidou module receives three kinds of data from the satellite: the target speed V, the target position P, and the current time t. The distance the Beidou receiver moves in a fixed time interval is D = vt; considering the actual motion of the target, within a time interval t the distance d between two reported points should be limited to a certain range. Data received from Beidou that exceed this range are identified as jump (outlier) points and need to be discarded or corrected. The ratio of D to d should stay in the vicinity of 1; if the ratio is more than 1, there is reason to regard the Beidou positioning data as a gross error that should be corrected. After applying these two checks to the Beidou positioning data, we use the following three rules to smooth the singular points of the Beidou positioning data:


1. When the speed retrieved from Beidou is less than a certain value (since this is an automotive system, we set this threshold to 2 km/h), we consider the vehicle static and discard the Beidou value taken at that moment.
2. If the speed one second earlier is less than 2 km/h and the speed one second later is also less than 2 km/h, we consider the vehicle static at the current moment regardless of the current speed value.
3. Check the distance between the current and the previous coordinates; if the implied speed is more than 50 km/h, the current point is discarded.

Fig. 6 The compass data before and after processing

Figure 6 compares the data collected directly from Beidou positioning (above) with the data processed by the algorithm (below); the drift is obviously weakened after the algorithm processing, and the positioning accuracy is greatly improved. A sketch of these rules is given below.
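A minimal Python sketch of the smoothing rules follows. The 2 km/h and 50 km/h thresholds are taken from the text; the assumption of one sample per second and the spherical-Earth distance are ours.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points (spherical Earth)."""
    R = 6371000.0
    la1, lo1, la2, lo2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

def smooth_track(samples, static_kmh=2.0, jump_kmh=50.0):
    """Apply rules 1-3 to a list of per-second fixes with 'lat', 'lon', 'speed_kmh' keys."""
    kept, last = [], None
    for i, s in enumerate(samples):
        prev_slow = i > 0 and samples[i - 1]['speed_kmh'] < static_kmh
        next_slow = i + 1 < len(samples) and samples[i + 1]['speed_kmh'] < static_kmh
        # Rules 1 and 2: drop fixes taken while the vehicle is considered static
        if s['speed_kmh'] < static_kmh or (prev_slow and next_slow):
            continue
        # Rule 3: drop the fix if the jump from the last kept point implies an excessive speed
        if last is not None:
            implied_kmh = haversine_m((s['lat'], s['lon']), (last['lat'], last['lon'])) * 3.6
            if implied_kmh > jump_kmh:
                continue
        kept.append(s)
        last = s
    return kept
```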

4.2 Face Recognition and Voice Module Software Processing

3D face recognition means that the three-dimensional shape of the face, obtained from the collected data, is matched as the object to be identified against 3D face shape data of known identities stored in the database, which then yields the identity of the object. What we actually process are two-dimensional images, i.e., a three-dimensional object projected onto a two-dimensional plane. This section uses the Hausdorff distance algorithm [5], as optimized by Lee [6], for face recognition. Lee put forward a depth-value-weighted distance based on the Hausdorff distance, whose essence is to assign different importance to different face-region points. The Hausdorff distance is computed between two aligned face models; the smaller the distance, the more similar the faces. The basic steps of the 3D face recognition system are as follows:


Fig. 7 Face recognition effect chart. Note The source image is from a publicly available database

1. Obtain the 3D face shape information of the object to be identified using the 3D face data acquisition equipment;
2. Automatically apply denoising, cropping, and other pre-processing to the obtained 3D data;
3. Extract features from the 3D data;
4. Classify the extracted features with a classifier and output the final decision.

The face recognition results are shown in Fig. 7; a sketch of the matching step is given after this paragraph. The interaction between the monitoring center and the leasing user is then implemented through the voice module. Figure 8 shows the software flow of the face recognition and voice modules. The first step, when the user gets into the car, is face recognition; after receiving the authentication result, the face recognition program packs the data and sends it to the monitoring center; the control center issues commands based on the received data; and the vehicle terminal calls the voice-processing functions to play a voice prompt after receiving the command.
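To illustrate the matching step, a simplified depth-weighted Hausdorff comparison of two aligned 3D point sets could be written as below; weighting points by their normalized depth is a stand-in for the exact weighting of [6], which the paper does not reproduce.

```python
import numpy as np

def depth_weighted_hausdorff(A, B):
    """Directed depth-weighted Hausdorff distance from point set A to point set B.

    A, B: (N, 3) arrays of aligned face points (x, y, depth). Points with larger depth
    (e.g. the nose region) get larger weights -- a simplifying assumption.
    """
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    # nearest-neighbour distance from every point of A to the set B
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    z = A[:, 2]
    w = (z - z.min()) / (z.max() - z.min() + 1e-9)    # normalized depth used as weight
    return float((w * d).sum() / (w.sum() + 1e-9))

def face_distance(A, B):
    """Symmetric score: the smaller the value, the more similar the two faces."""
    return max(depth_weighted_hausdorff(A, B), depth_weighted_hausdorff(B, A))
```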

4.3 GPRS Wireless Data Transmission Processing

The vehicle terminal sends the data to the control center via the GPRS module after the real-time data acquisition is completed, so that the control center can keep abreast of the dynamic information of the vehicles. The vehicle terminal transmits and receives information mostly in a wireless mobile


Fig. 8 Face recognition and voice module software process flow chart

environment, so the reliability and stability of data communication is a very important part of the vehicle terminal software design. The GPRS module embeds a TCP/IP protocol stack, and the user only needs the AT instruction set to establish a TCP/IP or UDP/IP connection with the monitoring center [7]. Figure 9 describes the GPRS module initialization and the process of establishing a data link; a sketch of this sequence is given below. First, the module performs hardware initialization after power-on, then configures the data transmission baud rate and the line working parameters, and the GPRS module opens and checks the SIM card. The module is now in the ready state and begins to log on to the network; finally, it performs a handshake with the monitoring center. After a successful login, the module obtains a dynamic IP address and begins to receive messages addressed to it from the monitoring center. Once it has obtained the IP address of the server, it first creates a socket and attempts to connect. After a successful connection, the terminal sends its IP message to the control center and waits for a start-command flag header. If it receives no start command within the predetermined waiting time, it resends the vehicle terminal's IP address and waits for a response; if this process is repeated three times with no response, the network is considered disconnected and the data link cannot be established. If the handshake succeeds, the data link between the vehicle terminal and the monitoring center is established and reliable data transmission can begin: the vehicle terminal sends the packed data to the monitoring center, and the monitoring center can also issue commands to the vehicle terminal.
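The link-establishment sequence of Fig. 9 could be exercised from a host computer as in the Python sketch below (using pyserial). The AT commands are the standard SIM900 TCP/IP commands from SIMCom's AT manual; the serial port, APN, server address, and payload format are illustrative assumptions, and the real terminal issues equivalent commands from its STM32 firmware.

```python
import time
import serial  # pyserial

def at(port, cmd, wait=2.0):
    """Send one AT command and return the raw response (simplified: no retry/error handling)."""
    port.write((cmd + '\r\n').encode('ascii'))
    time.sleep(wait)
    return port.read(port.in_waiting or 1).decode('ascii', errors='ignore')

def open_gprs_link(dev='/dev/ttyUSB0', apn='CMNET', host='203.0.113.10', tcp_port=9000):
    """Bring up GPRS on a SIM900A and open a TCP socket to the monitoring center (sketch)."""
    port = serial.Serial(dev, 115200, timeout=1)
    for cmd in ('AT',                       # handshake / autobaud check
                'AT+CPIN?',                 # is the SIM card ready?
                'AT+CGATT=1',               # attach to the GPRS service
                'AT+CSTT="%s"' % apn,       # set the APN
                'AT+CIICR',                 # bring up the wireless connection
                'AT+CIFSR'):                # query the dynamic IP address
        print(cmd, '->', at(port, cmd).strip())
    print(at(port, 'AT+CIPSTART="TCP","%s",%d' % (host, tcp_port), wait=5.0).strip())
    at(port, 'AT+CIPSEND', wait=1.0)        # then send the terminal's identification packet
    port.write(b'HELLO,VEHICLE-001\x1a')    # 0x1A (Ctrl+Z) terminates the payload
    return port
```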


Fig. 9 GPRS module initialization process

4.4 The Software Design of the Monitoring Center

The monitoring center software first analyzes and stores the real-time information received from the terminals, displays the position and movement status of all vehicles on the electronic map, and then sends commands to the terminals according to the situation; this completes the intelligent management of the leased vehicles. The software design adopts Web GIS technology based on the Google Maps API. On the VS2008 development platform, Maps API functions are called via script commands, and GIS component and tool libraries are used to embed GIS into the application and build a practical custom map client interface. The database server uses the C/S framework of Web GIS, which realizes the information interaction between the client program and the database through the network service functions of .NET technology.


Fig. 10 Client interface of monitoring center

5 The Actual Test Results and Analysis

The system was tested at a taxi company. The 3D facial-feature information and identity information of 30 users were stored in the database through the client beforehand, and vehicle terminals were installed on 20 cars. In the test, face recognition succeeds for users whose information is stored in the database; the monitoring center displays the status of the leased vehicle on the electronic map when it receives the information and then issues a command prompting the user that the car leasing has succeeded, while unregistered users are prompted by the monitoring center that they are not yet registered. When the main server connection is dropped, the test terminal re-uploads the data within 2 s; during reconnection, all collected data are uploaded to the server, and places where the network signal is poor may introduce a slight delay. The maximum delay of the 20 cars in the test was less than 5 s, which meets the requirement of real-time vehicle monitoring and management. The real-time status of the leased cars displayed at the monitoring center is shown in Fig. 10.

6 Conclusions

This paper presents an overall design of an ARM-based automobile leasing intelligent management system. Built around the STM32 and integrating the advantages of the Beidou navigation and positioning system, GPRS wireless data transmission technology, and Web GIS platform technology, it achieves data interaction between the monitoring center and the terminal. Applied to automobile leasing management, the system enables fast and convenient car leasing, wide-range management, low cost, and strong expandability. The system combines embedded technology with wireless data transmission technology, providing a user-friendly wireless networking platform for vehicle management units.


References 1. Huang, Z.W., Zhou, M., Zhang, X.M.: Implementation of embedded vehicle terminal under the GPS/GPRS support. J. Comput. Measur. Control. 17, 2205–2208 (2009) 2. Huang, Q., Tao, Z.S., Song, H.: Design of GPRS remote data transmission module based on ARM. Chin. J. Electron Devices 31, 1214–1218 (2008) 3. Liang, S., Liang, Y., Chen, J.N.: Implementation of communication platform of intelligent public transportation system based on GPRS. J. Commun. Technol. 40, 56–58 (2007) 4. Huang, G.L., Wang, H., Xu, H.P.: Research on GPS positioning drift of based on time series. Comput. Eng. Appl. 44, 94–97 (2008) 5. Huttenlocher, D.P., Klanderman, G.A., Rucklidge, W.J.: Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 15, 850–863 (1993) 6. Lee, Y.H., Shim, J.C.: Curvature based human face recognition using depth weighted Hausdorff distance. In: Proceedings of International Conference on Image Processing. Singapore (2004) 7. Li, S.L., Liu, Y.Z.: Embedded web remote control system design based on ARM. Microcomput. Inf. 24, 132–135 (2008)

An IVR Service System Based on Adjustable Broadcast Sequence Speech Recognition Shuhao Zhang, Zhiyi Fang and Hongyu Sun

Abstract At present, IVR service systems are widely used in business. They allow quick, intuitive, and reliable communication between clients and business operations. However, most mature IVR service systems have at least two deficiencies. First, most users listen to the prompts with the handset at their ear and then take it down to key in the information, which is not good for the user experience. Second, current IVR systems broadcast the menu in a fixed, stereotyped sequence rather than dynamically according to the user's habits, which causes long waiting times. In this paper, we discuss the common problems of current IVR systems and propose a new IVR-based system built on speech recognition with an adjustable broadcast sequence. The new system dynamically adjusts the broadcast order by summarizing the user's usage patterns of the IVR system, thereby shortening the IVR flow and reducing the user's waiting time. It reduces the complexity of the user's key operations and, through intelligent voice recognition, meets the user's needs for IVR service efficiency.

Keywords IVR · Adjustable broadcast sequence · Speech recognition

1 Introduction

Interactive voice response (IVR) is an interactive voice response system and one of the core technologies of computer telephony integration (CTI) [1]. It is now widely used in call centers to improve the quality of call service and reduce the workload of attendants, thereby saving cost. This technology is an

S. Zhang (✉) · Z. Fang · H. Sun
College of Computer Science and Technology, Jilin University, No. 2699 Qianjin Rd, Changchun 130012, China
e-mail: [email protected]


important gateway for the interaction between the call center and its human users. In traditional IVR systems, the user interacts with the system through the telephone keypad [2]. With the development of computer technology and artificial intelligence, natural language understanding is making steady progress, and speech recognition is being applied in an increasingly wide range of areas. Because of the popularity of the telephone network, natural language processing over telephone channels has become one of the most important applications. On the other hand, with the development of mobile communication and people's growing demand for mobile information access, the market demand for telephone speech recognition systems has also increased. Therefore, in the new generation of call-center IVR systems, speech recognition has become a new input method: users can interact with the system directly by voice, which greatly improves efficiency.
With the continuous development of smart phones and mobile terminals, dialing information-service numbers from a mobile phone has become more and more popular, and this kind of business offers richer interaction. Well-known examples are the IVR systems of telecommunication information services such as 10086 and 10010. With the prosperity of e-commerce, online shops such as Jingdong, Dangdang, Amazon, and Taobao have gradually built their own IVR services. These services share two characteristics: first, the user's needs are collected through phone keys; second, every user hears the same business process in the same order, such as pressing 1, then 2. This design has at least the following deficiencies: (1) The user holds the handset to the ear to hear the information and then has to take it down to key in the input, which gives a poor experience; in addition, delays during operation may cause part of the broadcast voice to be missed, forcing the user to re-dial. (2) Some users handle one type of business frequently while other users frequently handle different types; the system should treat different businesses according to the user's habitual intent, but current IVR systems broadcast in a fixed, stereotyped order, which leads to long waiting times and low efficiency [3].
In this paper, we propose a new IVR service system based on adjustable broadcast sequence speech recognition. The system consists of a voice receiving module, a speech recognition module, an information-function mapping database, a personal history information updating module, a personal history information query database, a dynamic flow control module, and a voice telephone calling module. The speech recognition module replaces button pushing and thus reduces the waiting delay, while the other modules cooperate to adjust the broadcast order dynamically, further reducing the user's waiting time and improving the experience.
The rest of this paper is organized as follows: Sect. 2 describes the structure and implementation of the IVR service system; Sect. 3 presents the performance analysis; and Sect. 4 concludes the paper.


2 Systematic Design

2.1 Global Systematic Structure

The overall structure of the IVR service system based on adjustable broadcast sequence speech recognition is shown in Fig. 1. The system includes a voice receiving module, a speech recognition module, an information-function mapping database, a personal history information update module, a personal history information query database, a dynamic flow control module, and a voice telephone call module. As shown in Fig. 1, the voice receiving module receives the voice information input by the user, samples it, converts it to text, and passes it to the dynamic flow control module and the personal history information update module. The dynamic flow control module works together with the personal history query database to control the flow dynamically: it arranges the user's IVR process according to the queried history. The speech recognition module works with the information-function mapping database to find the IVR operation corresponding to the recognized speech [4]. The voice telephone call module plays the resulting IVR operation flow back to the user as voice. The techniques used by each module are listed in Table 1.

2.2 Systematic Main Work Process Design

The main work flow of the system consists of three processes: speech recognition and pattern matching of the recognized content, optimization of the broadcast flow by the dynamic flow control module through queries to the personal history database, and updating of the personal history database after the interaction.

Fig. 1 Overall structure of IVR service system based on adjustable broadcast sequence speech recognition


Table 1 Protocols used by each module

| Module name | Protocols | Remarks |
| Voice receiver module | RSTP | RSTP protocol ensures safe and fast delivery and receipt of voice messages |
| Speech recognition module | STT | STT protocol converts voice into text |
| Information-function mapping database | Regular match, JSON | Matches the recognized speech against the broadcast information flow table and transmits the result in JSON form to the dynamic flow control module |
| Dynamic flow control module | Format conversion | Converts the text information into voice information |
| Voice telephone call module | RSTP | Communicates the voice information to the user |

The flow of the speech recognition module for recognizing the speech content and performing pattern matching is shown in Fig. 2. As shown in Fig. 2, after the user dials into the IVR, the voice receiving module prompts the user for the kind of service needed, and the user briefly describes the request by voice. The voice receiving module receives the voice message together with the calling number and passes them to the speech recognition module. As shown in Fig. 3, the speech recognition module identifies the keywords described in the speech; the keywords are then mapped in the information-function database to intelligently match the function the user needs [6, 7]. When matching is finished, the matching result and the user's calling number are sent as parameters to the dynamic flow control module. The dynamic flow control module looks up the personal history of the calling number in the query database and uses it, together with the matching result, to prioritize the menu items. As shown in Fig. 4, the dynamic flow control module sends the re-ordered dynamic process list to the voice call module, which broadcasts it to the user and waits for feedback. The user's feedback passes through the speech recognition module to the history information update module, which updates the user's personal history in the query database according to the feedback [9, 10], so that the next query can provide the user with a more accurate dynamic process list.
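To make the re-ordering step concrete, the sketch below shows one possible way to rank the broadcast menu from a user's selection history, roughly in the spirit of the dynamic flow control module described above. The menu items, the history store, and the function names are illustrative assumptions and are not part of the original system.

```python
from collections import Counter

# Hypothetical full menu in its default (stereotyped) broadcast order.
DEFAULT_MENU = [
    "traffic accident rescue", "vehicle breakdown rescue",
    "vehicle basic information query", "mileage",
    "oil and battery voltage", "vehicle cover status query",
    "doors and windows status query", "tire pressure",
    "complaints", "advice", "manual service",
]

# Per-user selection history, keyed by calling number (assumed storage).
history: dict[str, Counter] = {}

def record_selection(caller: str, item: str) -> None:
    """Update the personal history after the user confirms a menu item."""
    history.setdefault(caller, Counter())[item] += 1

def dynamic_menu(caller: str) -> list[str]:
    """Return the menu sorted by how often this caller used each item;
    unused items keep their default relative order at the tail."""
    counts = history.get(caller, Counter())
    return sorted(DEFAULT_MENU,
                  key=lambda item: (-counts[item], DEFAULT_MENU.index(item)))

# Example: a caller who mostly checks doors/windows hears that item first.
record_selection("13800000000", "doors and windows status query")
record_selection("13800000000", "doors and windows status query")
record_selection("13800000000", "tire pressure")
print(dynamic_menu("13800000000")[:3])
```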

3 Performance Analysis

In order to verify the effectiveness of the system, we designed an IVR broadcast process; its main flow is shown in Fig. 5.

Fig. 2 Voice recognition module used to identify the speech content, and the flow chart of pattern matching

Fig. 3 Flow of the dynamic process control module optimizing the matching process table through the personal history query database

The assumed delay times of the system are listed in Table 2. We tested the system with a representative task, tracking the status of the car's windows and doors. Under the original menu order, the number of operations of each type and the total time are shown in Table 3. If the design proposed in this paper is adopted, the reporting items are rearranged according to the voice input; the resulting arrangement is shown in Fig. 6.


Fig. 4 Personal history query database update process diagram

According to the proposed design, the dynamic reporting process is as shown in Fig. 6, and the corresponding waiting times are given in Table 4. Compared with the former system, when checking the status of the windows and doors, the new system, which sorts the broadcast items after speech recognition, still improves efficiency by nearly a factor of two even in the worst case. As can be seen from Table 4, with the improved system the average query time is reduced by a factor of about 2.5. We can also analyze this more generally: the

Fig. 5 An IVR process example


Table 2 Delay time of each type of operation in the system

| Operation type | Delay time |
| Time for reading the directory (user) | 3 s per record |
| Time for pushing the keys (user) | 2 s per time |
| Time for pushing the keys and rereading (user) | 3 s per time |
| Language recognition time | 4 s per time |
| Broadcast recording time | 1 s per time |

Table 3 Waiting time for tracking the status of the windows and doors before the improvement (user thinking time and error time are not considered)

| Operation type | Times | Delay time (s) |
| Time for reading the directory (user) | 7 | 21 |
| Time for pushing the keys (user) | 2 | 4 |
| Time for pushing the keys and rereading (user) | 2 | 6 |
| Language recognition time | 0 | 0 |
| Broadcast recording time | 0 | 0 |
| Total delay time | | 31 |

former system is shown in Fig. 5. We can see that it is essentially a multi-way tree structure. Assuming the total number of nodes is n and the maximum fan-out of any node is m, the depth of the tree is on the order of log n. The time complexity before the improvement is the sum of the time for reading the directory entries (user), the time for pushing the keys (user), and the time for pushing the keys and re-listening (user):

Time complexity (before improvement) = m · log n + log n + log n.

Figure 6 shows that the improved system turns the tree structure into a linear structure: the items are sorted by keyword, and the item matched by the keyword is placed at the top of the reporting directory. The message is then read immediately, so the time complexity is only a constant C. To sum up, the system designed in this paper uses efficient speech recognition and computer-side sorting to cut the time spent reading the reporting directory, pressing keys, and re-listening to missed messages, and thus improves efficiency.
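As a quick sanity check of the numbers above, the following sketch recomputes the total waiting time of the door-and-window query from the per-operation delays in Table 2 and the operation counts in Table 3, and compares it with the best case of the improved flow (recognition, one directory read, one broadcast). The variable names are only illustrative.

```python
# Per-operation delays from Table 2 (seconds).
READ_ENTRY = 3           # reading one directory entry
PRESS_KEY = 2            # pushing a key
PRESS_AND_RELISTEN = 3   # pushing a key and re-listening
RECOGNITION = 4          # speech recognition
BROADCAST = 1            # broadcasting one recorded item

# Operation counts for the door/window query before the improvement (Table 3).
before = 7 * READ_ENTRY + 2 * PRESS_KEY + 2 * PRESS_AND_RELISTEN
print("before improvement:", before, "s")            # 31 s, matching Table 3

# Best case after the improvement: the recognized keyword is ranked first,
# so the user hears one directory entry and one broadcast item.
after_best = RECOGNITION + READ_ENTRY + BROADCAST
print("after improvement (best case):", after_best, "s")  # 8 s, matching Table 4
```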

Fig. 6 The dynamic reporting process designed in this paper

Table 4 Waiting time of each status check before and after the improvement (assuming the first keyword is correctly recognized and the report list is dynamically sorted)

| Check status | Delay time before improvement (s) | Delay time after improvement (s) |
| Traffic accident rescue | 16 | 8 |
| Vehicle breakdown rescue | 19 | 8 |
| Vehicle basic information query | 19 | 11 |
| Mileage | 22 | 8 |
| Oil and battery voltage | 25 | 8 |
| Vehicle cover (front and rear) status query | 28 | 14 |
| Doors and windows status query | 31 | 17 |
| Tire pressure | 34 | 8 |
| Complaints | 22 | 8 |
| Advice | 25 | 8 |
| Manual service | 22 | 8 |
| Average time | 23.9 | 9.6 |

4 Conclusion

The IVR system is an important part of the call center. In this paper, we design and test a new type of IVR service system based on speech recognition, whose broadcast order can be adjusted dynamically. The system is not tied to any specific business and has strong extensibility. It not only optimizes the traditional IVR system but also reduces the users' waiting time, improves the user experience, and raises the efficiency of communication with customers. Moreover, because the interaction uses natural spoken language, the users' eyes and hands remain free to deal with other things. How to further improve the efficiency of speech recognition, information matching, and voice transmission will be the subject of our future research.

Acknowledgments This work was supported by the project of "New generation of broadband wireless mobile communications network" (National major special project of China, Grant No. 2011ZX03002-002-03).


References
1. Ma, X., Xue, H., Deng, Z.: Design and realization of interactive voice response system based on voice XML. Microelectron. Comput. 3, 028 (2006)
2. Mittal, A.: Manual testing and quality monitoring of interactive voice response (IVR) applications. Int. J. Comput. Appl. 4(6), 0975–8887 (2010)
3. Kähr, C., Steinert, M.: Explaining the (non) adoption and use of interactive voice response (IVR) among small and medium-sized enterprises. Int. J. Speech Technol. 14, 11–18 (2011)
4. Liu, Z.: IVR technology development. J. CTI BBS 12, 31–35 (2000)
5. Azhao, Y.: Speech recognition in the IVR system. Market Experts 12, 44–48 (2000)
6. Doveh, E., Feigin, P., Greig, D., Hyams, L.: Experience with FNN models for medium term power demand predictions. IEEE Trans. Power Syst. 14(2), 538–546 (1999)
7. Quan, X., Ma, X.: The realization of the meeting notice system based on CTI technology. Comput. Eng. Design (3), 269–270 (2005)
8. Liu, Y., Fang, L., Mao, X.: The new interactive voice response system research and implementation of key technologies. Comput. Appl. Res. (2000)
9. McNaughton, B., Frohlich, J., Graham, A., Young, Q.R., et al.: Extended interactive voice response telephony (IVR) for relapse prevention after smoking cessation using varenicline and IVR: a pilot study. BMC Public Health 13, 824 (2013)
10. Anton, J.: The past, present and future of customer access centers. Int. J. Serv. Ind. Manage. 11(2) (2000)

Summary Research on Energy-Efficient Technology for Multi-core Computing System Based on Scientometrics Xingwang Wang

Abstract In order to gain an overall view of the current R & D status of energy-efficient technology for multi-core computing systems and to guide future research and development, this article analyzes and summarizes that status using information analysis and visualization methods based on scientometrics. The R & D trends, technology cycle time, distribution of R & D countries/districts and organizations, and R & D focuses of this technology are described visually through mapping knowledge domains and patent maps. To our knowledge, this is a new attempt at a summary study of energy-efficient technology for multi-core computing systems.





Keywords Energy-efficient · Multi-core · Computing system · Scientometrics · Mapping knowledge domains · Patent mapping

1 Introduction

Energy conservation is a major challenge for humanity in the twenty-first century. At the same time, large-scale data center systems, which mark a country's soft power and its capability to support information services, are becoming ever more complex and larger, and their core, the multi-core computing system, has become a veritable "electricity tiger." Energy-efficient technology has therefore become an urgent research theme accompanying the development of multi-core computing systems: it is a critical, foundational issue for achieving green computing, it affects system performance and scalability, and it is key to building green data centers.

X. Wang (✉)
Shanghai University of Engineering Science, 333 Longteng Road, Shanghai 201620, China
e-mail: [email protected]


This article analyzes and summarizes the current R & D status of energy-efficient technology for multi-core computing systems (abbreviated ETMCS below) from the perspective of scientometrics, which is a new approach in this research field.

2 Research Methods and Data Source

The research methods used in this paper are information analysis and visualization based on scientometrics, including mapping knowledge domains and patent mapping. A knowledge domain map is a graph showing the evolution and structure of scientific knowledge; it is a relatively new scientometric method built on computer science, graphics, information visualization, data mining, mathematics, and other disciplines [1–3]. A patent map is a graph showing patent information and is a comprehensive method for analyzing and visualizing patents [4]. The tools used for data analysis and map drawing are BibExcel [5], Pajek [6], and VOSviewer [7]. The research data come from two kinds of literature databases: research papers are retrieved from Thomson Reuters' Science Citation Index Expanded and Conference Proceedings Citation Index - Science, and patent data are retrieved from the Derwent Innovations Index. With energy-efficient technology of multi-core computing systems as the search theme, 756 papers and 2287 patents were retrieved and selected as the analysis data.

3 Research Result

Through analysis of the data obtained above, the following summary of energy-efficient technology for multi-core computing systems is obtained.

3.1 R & D Trends of ETMCS

The overall R & D trend can be read from the number of research documents published each year. We counted the papers and patents about ETMCS by publication year and drew the R & D trends map (Fig. 1). According to Fig. 1, research on energy-efficient technology for multi-core computing systems appeared as early as the 1970s, developed very slowly until about 2000, and has grown very fast in recent years, especially after 2007.

Fig. 1 R & D trends map of ETMCS (document quantity by publication year)

3.2 Cycle Time of ETMCS

The development of a technology generally follows a rule similar to the human or biological life cycle: it passes through four stages, the bud period, the growth period, the mature period, and the decline period. The technology cycle time reflects the growth and decline of a technology through the numbers of research documents and of organizations. We counted the research documents (papers and patents) and organizations about ETMCS by publication year and drew the cycle time map (Fig. 2). According to Fig. 2, energy-efficient technology for multi-core computing systems is currently in its growth period and will keep up its speed of development over the next several years.

3.3 R & D Countries/Districts of ETMCS

Count the quantity of research documents (including papers and patents) about ETMCS according to the countries/districts where the researchers come from, then select the top ten countries/districts and draw the rank map (shown as Fig. 3).

Fig. 2 Cycle time map of ETMCS (document quantity versus organization quantity, labeled by publication year from 1975 to 2013)

Fig. 3 Rank map of ETMCS R & D countries/districts


According to Fig. 3, the research strength of the USA in energy-efficient technology for multi-core computing systems is far greater than that of the other countries/districts; China and Japan follow with strong research strength of their own. The remaining top-ten R & D countries/districts are South Korea, Germany, Canada, Switzerland, England, Taiwan, and France, in that order.

3.4 R & D Organizations of ETMCS

Count the quantity of research documents (including papers and patents) about ETMCS according to the organizations (paper author affiliations and patent assignees), then select the top ten organizations and draw the rank map (shown as Fig. 4). According to Fig. 4, the research strength of Intel Corp in energy-efficient technology for multi-core computing systems is far greater than that of the other organizations; Int Business Machines Corp and Toshiba KK follow with strong research strength. The remaining top-ten R & D organizations are Hitachi Ltd, Samsung Electronics Co Ltd, Advanced Micro Devices Inc, Fujitsu Ltd, Matsushita Denki Sangyo KK, Texas Instr Inc, and Korea Adv Inst Sci & Technol, in that order.

Fig. 4 Rank map of ETMCS R & D organization


3.5 Research Focuses of ETMCS

Generally, research papers focus on basic study, so the basic research focuses can be found through the hot words appearing in papers. We counted the keywords of the ETMCS papers, selected those appearing at least three times, counted their co-occurrence, and drew the keyword co-occurrence map (Fig. 5). According to Fig. 5, the basic research focuses of energy-efficient technology for multi-core computing systems fall into six groups of related keywords:

(1) Multi-core; Power management; Microprocessor; Single-thread performance; Clock distribution; SoC; SOI; Clocking.
(2) Performance; Many-core; Multicore; Design; Cache; Low-power; Algorithms; System-on-chip; Experimentation; Test scheduling.
(3) Thermal management; Thread migration; Dynamic voltage and frequency scaling; 3D integration; Power; Power gating.
(4) Parallel processing; FPGA; Embedded systems; Network-on-chip; Heterogeneous multi-core; Performance evaluation.
(5) Multi-core processor; Energy efficiency; Single-thread performance; Compilers; Chip multiprocessors.
(6) Low-power; Scheduling; System-on-chip; DVFS; Energy; Real-time; Multicore processor.

A simple keyword co-occurrence count of the kind used to build Fig. 5 is sketched after this list.
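The following sketch illustrates, under assumed toy inputs, how such a keyword co-occurrence count can be computed before visualization in a tool such as VOSviewer or Pajek; the example records and the frequency threshold are illustrative (the paper uses a threshold of three occurrences).

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one list per paper.
papers = [
    ["multi-core", "power management", "DVFS"],
    ["multi-core", "DVFS", "scheduling"],
    ["power management", "multi-core", "thermal management"],
    ["DVFS", "scheduling", "real-time"],
]

# Keep only keywords that appear at least MIN_FREQ times (three in the paper).
MIN_FREQ = 2
freq = Counter(kw for kws in papers for kw in kws)
kept = {kw for kw, n in freq.items() if n >= MIN_FREQ}

# Count co-occurrence of kept keywords within the same paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws) & kept), 2):
        cooc[(a, b)] += 1

for (a, b), n in cooc.most_common():
    print(f"{a} -- {b}: {n}")
```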

3.6 Technology Development Focuses of ETMCS

Fig. 5 ETMCS papers keyword co-occurrence map

Generally, patents focus on applied technology study, and the technology development focuses can be found through the Derwent manual codes, which represent the technology innovation of a patent in the same way that keywords represent a paper. We counted the

Derwent manual codes appearing in the ETMCS patents, selected the codes that appeared at least twenty times, counted their co-occurrence, and drew the manual-code co-occurrence map (Fig. 6). According to Fig. 6, the technology development focuses of energy-efficient technology for multi-core computing systems fall into four groups of related keywords (corresponding to manual codes):

(1) Power supplies, stand-by arrangements; Power supply; Claimed software products; Portable; Clock signal generation/distribution; Power management techniques; Sleeping and waking, power-up/down, halting.
(2) Execution of machine instructions; Multiprocessor systems; Virtual systems; Synchronization; Resource allocation; Parallel/array; Task transfer initiation; Data handling programs and storage management; Cache memory; Virtual memory and hierarchical memory; Memories with capacitor store; Sorting; Single processor computer unit; Semiconductor/solid state memory; For access to memory bus.
(3) Portable, hand-held; Using digital signal processors; System and fault monitoring; Operating systems and virtual systems; Program control arrangements; Subprogram execution; Non-wired connection between peripheral and computer; Data transfer; Network operating system management; Radio link; Stored and forward switching; Serial interface with additional features; Optical fibers.
(4) Cores; Magnetic cores; Power and distribution transformers; Electric vehicle; Low-power systems; Vehicle microprocessor system.

Fig. 6 ETMCS patents manual-code co-occurrence map

4 Conclusions

As energy-efficient technology for multi-core computing systems becomes more important and more visible, international research and development on it keeps increasing. This article analyzes and describes the R & D status of this technology using information analysis and visualization methods based on scientometrics, and visually summarizes its R & D trends, cycle time, distribution of R & D countries/districts and organizations, and R & D focuses through knowledge domain maps and patent maps. The study offers a reference for understanding the overall status of energy-efficient technology for multi-core computing systems and can help direct future research and development work on this technology.

Acknowledgments This work was financially supported by the Teaching Development Project of Shanghai University of Engineering Science (k201426001).

References
1. Chen, C.: Mapping Scientific Frontiers: The Quest for Knowledge Visualization. Springer-Verlag, London (2013)
2. Chen, Y., Liu, Z.: The rise of mapping knowledge domain. Stud. Sci. Sci. 2, 149–154 (2005)
3. Qiu, J., Hu, W., Luo, L.: Research status and fronts analysis of international web search engine based on knowledge mapping. Libr. Inf. Serv. 24, 89–94 (2010)
4. Wang, X., Sun, J.: The comparative research of patent map. J. Inf. 8, 113–115 (2007)
5. BibExcel. http://www8.umu.se/inforsk/Bibexcel
6. Pajek. http://pajek.imfm.si/doku.php
7. VOSviewer. http://www.vosviewer.com

Sparsity Reconstruction Error-Based Discriminant Analysis Dimensionality Reduction Algorithm Mingming Qi, Yanqiu Zhang, Dongdong Lv, Cheng Luo, Shuhan Yuan and Hai Lu

Abstract The inter-class and intra-class information used by existing discriminant analysis methods is sensitive to external disturbances such as image corruption and occlusion. To address this problem, from the viewpoint of local sparsity representation, a Sparsity Reconstruction Error-based Discriminant Analysis (SREDA) dimensionality reduction algorithm is proposed. The algorithm first applies sparsity representation to perform intra-class local sparsity reconstruction of every sample, then uses the mean of each of the other classes to perform inter-class local sparsity reconstruction, and finally preserves the ratio of inter-class to intra-class sparsity reconstruction error during dimensionality reduction. The algorithm improves both the computational efficiency of the sparsity representation and the discriminant analysis performance. Experimental results on the AR and UMIST face databases verify the effectiveness of the proposed algorithm.

Keywords Dimensionality reduction · Discriminant analysis · Sparsity representation · Face identification

1 Introduction

Linear Fisher Discriminant analysis, hereinafter referred to as LDA [1], is a commonly used dimensionality reduction algorithm. As a supervised dimensionality reduction algorithm, LDA selects an optimized projection matrix so as to make the

M. Qi
School of Yuanpei, Shaoxing University, Shaoxing, China
Y. Zhang · D. Lv (✉) · C. Luo · S. Yuan
Department of Computer Science and Technology, Tongji University, Shanghai, China
e-mail: [email protected]
H. Lu
Network Center, Tongji University, Shanghai, China


ratio of inter-class to intra-class scatter of the projected low-dimensional data as large as possible. However, LDA's discriminant performance is optimal only when the data follow a Gaussian distribution, which often does not hold in practical applications. Extended variants of LDA [2–4] overcome the small-sample problem but do not remove the limitation of the Gaussian assumption. The literature [5] introduces spectral graph theory and proposes Graph-based Fisher Analysis (GbFA). GbFA uses a weighting scheme to build graph-based adjacency matrices and adjusts the Euclidean distances between samples of the same and of different classes. Compared with LDA, GbFA only replaces the Euclidean-distance scatter description with a graph-manifold structural description; in essence it is still a distance-based scatter representation. Although GbFA alleviates the two problems above, it, like other graph-optimization dimensionality reduction algorithms, rests on one assumption: the data possess a certain graph-manifold structure. To avoid this assumption, Chen et al. [6] proposed Reconstructive Discriminant Analysis (RDA), which is built on inter-class and intra-class reconstruction errors. Different from the scatter descriptions of LDA and GbFA, RDA uses least-squares reconstruction to describe the intra-class and inter-class divergence information and needs no assumption on the sample structure. However, when the number of classes is large, the computation of RDA's inter-class reconstruction error becomes huge, so this way of computing the inter-class reconstruction error is unsuitable for high-capacity data sets.
Currently, owing to its good classification performance, sparsity representation has been widely applied in machine learning [7, 8]. Inspired by sparsity representation, this paper proposes a Sparsity Reconstruction Error-based Discriminant Analysis, hereinafter referred to as SREDA, dimensionality reduction algorithm. Starting from sparsity representation and from the goal of improving computational efficiency, the algorithm re-defines the intra-class and inter-class reconstruction information so that the ratio between the inter-class and intra-class divergence information is more discriminative and copes better with image corruption and occlusion. Experiments on the real face data sets AR and UMIST verify the dimensionality reduction and classification performance of the proposed algorithm.

2 Sparsity Reconstruction Error-Based Discriminant Analysis (SREDA)

2.1 Objective Function Solving

Intra-class and Inter-class Sparsity Reconstruction Information Description. Assume the sample set $X = \{X^1, X^2, X^3, \ldots, X^c\} \in \mathbb{R}^{d\times n}$ contains $c$ classes and $X^k = \{x_1^k, x_2^k, x_3^k, \ldots, x_{n_k}^k\} \in \mathbb{R}^{d\times n_k}$, where $X^k$ denotes the $k$th class sample set and $n_k$ the number of samples of the $k$th class in $X$.

(1) According to the sparsity representation method, the intra-class sparsity representation of each $x_i^k$ ($1 \le k \le c$, $1 \le i \le n_k$) is obtained from

$$\min_{(\alpha^k)_i} \big\|(\alpha^k)_i\big\|_1 \quad \text{s.t.} \quad x_i^k = \sum_{j=1,\, j\ne i}^{n_k} x_j^k\,(\alpha^k)_i^j,\qquad \mathbf{1}^T(\alpha^k)_i = 1 \qquad (1)$$

where $\mathbf{1}$ is the all-ones vector, $(\alpha^k)_i^j$ is the coefficient of $x_j^k$ in the reconstruction of $x_i^k$, and $(\alpha^k)_i$ is the intra-class sparsity reconstruction weight vector of $x_i^k$. The intra-class sparsity reconstruction error of all samples is expressed by $\sum_{k=1}^{c}\sum_{i=1}^{n_k}\big\|x_i^k - \sum_{j=1,\,j\ne i}^{n_k} x_j^k(\alpha^k)_i^j\big\|^2$.

(2) Different from the intra-class case, which reconstructs a sample from the other single samples of its class, the inter-class sparsity reconstruction of a single sample is defined on the mean values of the other classes, namely

$$\min_{(\beta^k)_i} \big\|(\beta^k)_i\big\|_1 \quad \text{s.t.} \quad x_i^k = \sum_{j=1,\, j\ne k}^{c} \bar{X}^j\,(\beta^k)_i^j,\qquad \mathbf{1}^T(\beta^k)_i = 1 \qquad (2)$$

where $\bar{X}^j = \frac{1}{n_j}\sum_{i=1}^{n_j} x_i^j$ is the mean of the $j$th class sample set, $(\beta^k)_i^j$ is the coefficient of $\bar{X}^j$ in the reconstruction of $x_i^k$, and $(\beta^k)_i$ is the inter-class sparsity reconstruction weight vector of $x_i^k$. The inter-class sparsity reconstruction error of all samples is expressed by $\sum_{k=1}^{c}\sum_{i=1}^{n_k}\big\|x_i^k - \sum_{j=1,\,j\ne k}^{c}\bar{X}^j(\beta^k)_i^j\big\|^2$.

Intra-class and Inter-class Sparsity Reconstruction Error Information Description. Assume the sample set $X = \{x_1, x_2, \ldots, x_n\} \in \mathbb{R}^{d\times n}$, let $X^k$ be the $k$th class sample set with $n_k$ samples, let $W = [w_1, w_2, \ldots, w_d]$ be the projection matrix, let $y_i = W^T x_i$ be the projection of $x_i \in X$, let $Y$ be the projection of $X$, and let $w_l^T x_i$ ($1 \le l \le d$) be the projection of sample $x_i$ onto the $l$th direction. The inter-class sparsity reconstruction error of $Y$ defined in this paper is

$$\sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big\| y_i^k - \sum_{j=1,\,j\ne k}^{c}\bar{Y}^j(\beta^k)_i^j\Big\|^2 = \operatorname{tr}\!\big(W^T S_b^S W\big) \qquad (3)$$

where $\operatorname{tr}(\cdot)$ denotes the matrix trace, $\bar{X}^j$ is the mean of the sub-sample set $X^j$, $\bar{Y}^j$ is its projection, and

$$S_b^S = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(x_i^k - \sum_{j=1,\,j\ne k}^{c}\bar{X}^j(\beta^k)_i^j\Big)\Big(x_i^k - \sum_{j=1,\,j\ne k}^{c}\bar{X}^j(\beta^k)_i^j\Big)^T \qquad (4)$$

Similarly, the intra-class sparsity reconstruction error dispersion matrix is obtained from

$$\sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big\| y_i^k - \sum_{j=1,\,j\ne i}^{n_k} y_j^k(\alpha^k)_i^j\Big\|^2 = \operatorname{tr}\!\big(W^T S_w^S W\big) \qquad (5)$$

where

$$S_w^S = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(x_i^k - \sum_{j=1,\,j\ne i}^{n_k} x_j^k(\alpha^k)_i^j\Big)\Big(x_i^k - \sum_{j=1,\,j\ne i}^{n_k} x_j^k(\alpha^k)_i^j\Big)^T \qquad (6)$$

According to formulas (3)–(6), the target optimization function is

$$\max_{W}\ \frac{\operatorname{tr}\!\big(W^T S_b^S W\big)}{\operatorname{tr}\!\big(W^T S_w^S W\big)} \qquad (7)$$

where $W = [w_1, w_2, \ldots, w_d]$ is the projection matrix. The target function (7) can be solved one direction at a time by converting it to

$$\min_{w}\ -\,w^T S_b^S w \quad \text{s.t.} \quad w^T S_w^S w = 1 \qquad (8)$$

where $w$ is a column of the projection matrix $W = [w_1, w_2, \ldots, w_d]$. By the Lagrange multiplier method, (8) becomes the minimization of

$$L(w) = -\,w^T S_b^S w + \lambda\big(w^T S_w^S w - 1\big) \qquad (9)$$

Taking the derivative with respect to $w$ and setting it to zero gives

$$-2 S_b^S w + 2\lambda S_w^S w = 0 \qquad (10)$$

which leads to the generalized eigenvalue problem

$$S_b^S w = \lambda S_w^S w \qquad (11)$$

According to the corresponding eigenvalues $\lambda$, the eigenvectors of the leading eigenvalues are selected to compose the projection matrix $W = [w_1, w_2, \ldots, w_l]$ ($l < d$).

2.2 Algorithm Process

Input: face database training samples $X = \{x_1, x_2, x_3, \ldots, x_n\}$.
Output: projection matrix $W$.
Steps (a small illustrative sketch follows this list):
(1) Calculate the mean $\bar{X}^j$ of each class subset of the face image data.
(2) Use formula (2) to compute the inter-class sparsity representation of every sample and form the inter-class sparsity reconstruction dispersion matrix $S_b^S$ from the inter-class sparsity reconstruction errors, as in formula (4).
(3) Use formula (1) to compute the intra-class sparsity representation of every sample and form the intra-class sparsity reconstruction dispersion matrix $S_w^S$ from the intra-class sparsity reconstruction errors, as in formula (6).
(4) Following formula (8), solve the generalized eigenvalue problem $S_b^S t_i = \lambda_i S_w^S t_i$ and obtain the projection matrix $W = [w_1, w_2, \ldots, w_d]$.
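A minimal sketch of these steps is given below, assuming small dense data. It uses a Lasso solver as a stand-in for the l1-constrained reconstructions of formulas (1) and (2) (the sum-to-one constraint is omitted for simplicity) and a generalized eigensolver for formula (11). The function name sreda and the regularization values are illustrative, not part of the original algorithm.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sreda(X, y, n_components, alpha=0.01):
    """X: (n_samples, d) data, y: class labels. Returns projection matrix W of shape (d, n_components)."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        other_means = np.array([means[o] for o in classes if o != c])   # (c-1, d)
        for i in range(len(Xc)):
            xi = Xc[i]
            # Intra-class sparse reconstruction from the remaining samples of the class (formula 1).
            Dw = np.delete(Xc, i, axis=0)
            cw = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(Dw.T, xi).coef_
            ew = xi - Dw.T @ cw
            Sw += np.outer(ew, ew)
            # Inter-class sparse reconstruction from the means of the other classes (formula 2).
            cb = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(other_means.T, xi).coef_
            eb = xi - other_means.T @ cb
            Sb += np.outer(eb, eb)
    # Generalized eigenproblem S_b w = lambda S_w w (formula 11); keep the leading eigenvectors.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

# Tiny usage example with random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))
y = np.repeat([0, 1, 2], 10)
W = sreda(X, y, n_components=2)
print(W.shape)  # (10, 2)
```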

2.3 Kernel Sparsity Reconstruction Error-Based Discriminant Analysis (KSREDA)

Kernel SREDA (KSREDA) maps the original samples $X$ with a nonlinear function $\Phi$ into a higher-dimensional feature space $F$ in which the data are more linearly separable, and then performs SREDA on $\Phi(X)$.

(1) The intra-class sparsity reconstruction error sum after the mapping $\Phi$ is
$$\sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big\|\Phi(x_i^k) - \sum_{j=1,\,j\ne i}^{n_k}\Phi(x_j^k)(\alpha^k)_i^j\Big\|^2.$$

(2) The inter-class sparsity reconstruction error sum is
$$\sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big\|\Phi(x_i^k) - \sum_{j=1,\,j\ne k}^{c}\bar{\Phi}(X^j)(\beta^k)_i^j\Big\|^2,$$
where $\bar{\Phi}(X^j) = \frac{1}{n_j}\sum_{i=1}^{n_j}\Phi(x_i^j)$ is the mean of the mapped class $j$.

(3) Analogously to formula (7), the target optimization function after the mapping $\Phi$ is
$$\min_{w_\Phi}\ -\,w_\Phi^T S_b^{\Phi S} w_\Phi \quad \text{s.t.} \quad w_\Phi^T S_w^{\Phi S} w_\Phi = 1 \qquad (12)$$
where
$$S_b^{\Phi S} = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(\Phi(x_i^k) - \sum_{j=1,\,j\ne k}^{c}\bar{\Phi}(X^j)(\beta^k)_i^j\Big)\Big(\Phi(x_i^k) - \sum_{j=1,\,j\ne k}^{c}\bar{\Phi}(X^j)(\beta^k)_i^j\Big)^T \qquad (13)$$
$$S_w^{\Phi S} = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(\Phi(x_i^k) - \sum_{j=1,\,j\ne i}^{n_k}\Phi(x_j^k)(\alpha^k)_i^j\Big)\Big(\Phi(x_i^k) - \sum_{j=1,\,j\ne i}^{n_k}\Phi(x_j^k)(\alpha^k)_i^j\Big)^T \qquad (14)$$

(4) Since $w_\Phi \in F$, it lies in the span of $\Phi(X) = [\Phi(x_1), \ldots, \Phi(x_n)]$, i.e. $w_\Phi \in \operatorname{span}\{\Phi(x_1), \ldots, \Phi(x_n)\}$:
$$w_\Phi = \sum_{i=1}^{n} g_i\,\Phi(x_i) = \Phi(X)\,g \qquad (15)$$
where $g = [g_1, g_2, \ldots, g_n]^T$. Projecting the feature-space samples $\Phi(x_i^k)$ and $\bar{\Phi}(X^j)$ onto $w_\Phi$ gives
$$w_\Phi^T\,\Phi(x_i^k) = \sum_{j=1}^{n} g_j\,\big\langle \Phi(x_j), \Phi(x_i^k)\big\rangle = \sum_{j=1}^{n} g_j\,K(x_j, x_i^k) \qquad (16)$$
$$w_\Phi^T\,\bar{\Phi}(X^j) = \frac{1}{n_j}\sum_{i=1}^{n_j}\sum_{m=1}^{n} g_m\,K(x_m, x_i^j) \qquad (17)$$
where $K(x, y) = \langle \Phi(x), \Phi(y)\rangle$ denotes the inner product in $F$, i.e. the kernel function.

(5) Substituting (16) and (17) into (13) and (14), the two quadratic forms in (12) become quadratic forms in $g$:
$$w_\Phi^T S_b^{\Phi S} w_\Phi = \sum_{m=1}^{n}\sum_{l=1}^{n} g_m\,\Theta^b_{m,l}\,g_l, \quad \Theta^b_{m,l} = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(K(x_m, x_i^k) - \sum_{j\ne k}\Big(\tfrac{1}{n_j}\textstyle\sum_{p=1}^{n_j}K(x_m, x_p^j)\Big)(\beta^k)_i^j\Big)\Big(K(x_l, x_i^k) - \sum_{j\ne k}\Big(\tfrac{1}{n_j}\textstyle\sum_{p=1}^{n_j}K(x_l, x_p^j)\Big)(\beta^k)_i^j\Big) \qquad (18)$$
$$w_\Phi^T S_w^{\Phi S} w_\Phi = \sum_{m=1}^{n}\sum_{l=1}^{n} g_m\,\Theta_{m,l}\,g_l = g^T \Theta\, g \qquad (19)$$
where
$$\Theta_{m,l} = \sum_{k=1}^{c}\sum_{i=1}^{n_k}\Big(K(x_m, x_i^k) - \sum_{j=1,\,j\ne i}^{n_k} K(x_m, x_j^k)(\alpha^k)_i^j\Big)\Big(K(x_l, x_i^k) - \sum_{j=1,\,j\ne i}^{n_k} K(x_l, x_j^k)(\alpha^k)_i^j\Big) \qquad (20)$$

The kernelized forms (18) and (19) of KSREDA therefore have the same structure as the SREDA formulas (5) and (6).

2.4 Relevant Discussions

To make the advantages and disadvantages of the proposed algorithm clear, we compare its underlying idea, advantages, and disadvantages with those of the related algorithms. Table 1 gives the comparison.

3 Experiment

3.1 Face Data Set

This paper selects the relatively large AR data set and the UMIST face set as the experimental data.
(1) AR contains face images of more than 126 subjects, with 26 pictures per subject taken over two weeks under different expressions, illuminations, and occlusions. Figure 1 shows sample AR face images.

Table 1 Comparison of the related algorithms

| Algorithm | Thought | Advantage | Disadvantage |
| GbFA | Uses the sample class relationships to define a penalty graph and an intrinsic graph and turns them, via weighting, into a penalty matrix and an intrinsic matrix | Unified optimization idea and simple calculation | (1) Assumes a graph-based manifold structure; (2) performance is easily affected by external disturbances |
| RDA | Defines the inter-class and intra-class discriminant information through least-squares reconstruction | Simple calculation | (1) Large computation cost; (2) the linear reconstruction and its performance are easily affected by external disturbances |
| SREDA | Describes the inter-class and intra-class discriminant information through the sparsity reconstruction error | Inherits the robustness of the sparsity representation | (1) Each class needs some labeled samples; (2) the inter-class mean cannot fully represent the optimal samples |


Fig. 1 AR face image sample

Fig. 2 UMIST face image sample

(2) UMIST contains 20 subjects and 564 face images in total, including views from different sides. Figure 2 shows a group of UMIST face images.

3.2 Experiment Setting

To evaluate the dimensionality reduction and classification performance of the algorithm, GbFA and RDA are selected as comparison algorithms, with GbFA's heat kernel parameter t set to 1. A fixed number of images is randomly selected from each subject as training samples, and the remaining images of each subject are used as test samples. For convenience of computation, all face images are resized to 30 × 30. The nearest neighbor classifier is used for classification. Every experiment is repeated 20 times, and the average of the highest identification accuracies is reported as the result.
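A rough sketch of this evaluation protocol (random split, projection with the learned W, 1-nearest-neighbor classification, averaging over repetitions) might look as follows; the fit_projection argument is an assumption of this illustration and could be, for example, the sreda sketch given earlier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y, fit_projection, n_train_per_class, n_components, repeats=20, seed=0):
    """Average nearest-neighbor accuracy after dimensionality reduction."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        tr_idx, te_idx = [], []
        for c in np.unique(y):
            idx = rng.permutation(np.where(y == c)[0])
            tr_idx.extend(idx[:n_train_per_class])
            te_idx.extend(idx[n_train_per_class:])
        W = fit_projection(X[tr_idx], y[tr_idx], n_components)   # e.g. the sreda sketch
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[tr_idx] @ W, y[tr_idx])
        accs.append(knn.score(X[te_idx] @ W, y[te_idx]))
    return float(np.mean(accs))
```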

3.3 Experimental Result and Analysis

To evaluate each algorithm accurately, dimensionality reduction is performed for a range of feature dimensionalities, and the nearest neighbor classifier is used to compute the identification accuracy after each reduction. Figures 3 and 4 give the detailed results, from which the following conclusions can be drawn. (1) As the feature dimensionality increases, the identification accuracies of GbFA, RDA, and SREDA all rise quickly; once the dimensionality reaches a certain value, the accuracy levels off.


Fig. 3 Experiment results on AR. a 6 samples, b 8 samples c 10 samples

This shows that, although GbFA, RDA, and SREDA describe the discriminant information differently, all of them learn discriminative features by maximizing the ratio between the between-class and within-class information. (2) GbFA's discriminant information relies on the manifold assumption. Compared with GbFA, RDA and SREDA reach higher identification accuracy at low dimensionalities, and their highest accuracies are clearly better than GbFA's, which indicates that discriminant analysis based on the reconstruction idea outperforms manifold-structure discriminant analysis. (3) Although both RDA and SREDA adopt the reconstruction idea, SREDA's identification accuracy is better than RDA's. The main cause is that their reconstruction methods and inter-class reconstruction information differ: SREDA uses sparsity reconstruction and reconstruction from the means of the other classes, whereas RDA uses least-squares reconstruction. This also shows that SREDA's inter-class error computation reduces the reconstruction cost without harming SREDA's performance.


Fig. 4 Experiment results on UMIST. a 4 samples, b 6 samples, c 8 samples

In the kernel versions, the Gaussian kernel function $K(x, y) = \exp\!\big(-\|x - y\|^2 / t\big)$ is applied with the kernel parameter $t = 1$. Tables 2 and 3 give the experimental results of KGbFA [5], KRDA [6], and KSREDA on AR and UMIST, where the number in parentheses is the feature dimensionality at which the highest identification accuracy is reached and boldface marks the best accuracy for the same number of training samples. The tables show that KSREDA keeps a clear advantage in classification performance over KGbFA and KRDA.

Table 2 Maximum identification accuracy of the kernel algorithms on AR (%)

| Algorithm | 6 samples | 10 samples |
| Kernel GbFA | 88.67 (48) | 93.16 (55) |
| Kernel RDA | 90.58 (56) | 96.25 (60) |
| Kernel SREDA | 93.67 (45) | 98.18 (58) |

Table 3 Maximum identification accuracy of the kernel algorithms on UMIST (%)

| Algorithm | 4 samples | 8 samples |
| Kernel GbFA | 85.18 (50) | 92.15 (55) |
| Kernel RDA | 90.26 (58) | 95.28 (53) |
| Kernel SREDA | 95.35 (52) | 98.36 (50) |

4 Conclusions

The inter-class and intra-class discriminant information is a central topic of discriminant analysis. To address the insufficient robustness of existing discriminant analysis dimensionality reduction algorithms, this paper proposes, on the basis of sparsity representation, the Sparsity Reconstruction Error-based Discriminant Analysis (SREDA) and its kernel version. SREDA first uses sparsity reconstruction to obtain the intra-class sparsity reconstruction error divergence information, then uses sparsity reconstruction from the means of the other classes to obtain the inter-class reconstruction error divergence information, and finally obtains the optimal projection matrix by maximizing the ratio of the two. SREDA inherits the properties of sparsity learning and effectively extracts highly discriminative inter-class and intra-class sparsity reconstruction error information. Experiments on real data show that the algorithm reduces the sensitivity to external interference found in existing discriminant analysis dimensionality reduction algorithms, and the experiments on the AR and UMIST face data show that it achieves better dimensionality reduction and classification performance. The next step is to introduce unsupervised learning into SREDA; semi-supervised dimensionality reduction based on SREDA is the key emphasis of our future work.

Acknowledgments This work was supported by the National Natural Science Foundation of China (71171148, 61103069, 61403238) and the National Basic Research Program of China (2014CB340404).

References
1. Mika, S.: Kernel Fisher Discriminant. University of Technology, Berlin (2002)
2. Ji, S.W., Ye, J.P.: Generalized linear discriminant analysis: a unified framework and efficient model selection. IEEE Trans. Neural Netw. 19(10), 1768–1782 (2008)
3. Xu, B., Huang, K., Liu, C.L.: Maxi-Min discriminant analysis via online learning. Neural Netw. 34(10), 56–64 (2012)
4. Lu, G.F., Zou, J., Wang, Y.: Incremental complete LDA for face recognition. Pattern Recogn. 45(7), 2510–2521 (2012)
5. Yan, C., Fang, L.Y.: A novel supervised dimensionality reduction algorithm: graph-based Fisher analysis. Pattern Recogn. 45(4), 1471–1481 (2012)


6. Chen, Y., Zhong, J.: Reconstructive discriminant analysis: a feature extraction method induced from linear regression classification. Neurocomputing 87(15), 41–50 (2012)
7. Yuan, X.T., Yan, S.C.: Visual classification with multi-task joint sparse representation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3493–3500 (2010)
8. Gui, J., Sun, Z.N., Jia, W., et al.: Discriminant sparse neighborhood preserving embedding for face recognition. Pattern Recogn. 45(2), 2884–2893 (2012)

Performance Analysis of CFO Estimation for OFDM Systems with Low-Precision Quantization Dandan Li, Xingzhong Xiong and Haifeng Wang

Abstract Orthogonal frequency division multiplexing (OFDM) techniques are used intensively in high data rate communication systems. Unfortunately, the analog-to-digital converter (ADC) becomes a limiting factor in implementing such systems because high-speed, high-precision ADCs are costly and power-hungry. In this paper, we address an important problem at the front end of an OFDM receiver, namely carrier frequency offset (CFO) estimation when a low-precision ADC is used, and we analyze three CFO estimation algorithms for OFDM systems with low-precision quantization. The simulation results verify that demodulating a signal with CFO under low-precision quantization causes a higher bit error rate (BER) than full-precision quantization, and that a suitable algorithm can effectively reduce the effect of low-precision quantization. This paper thus provides a reference for exploring optimized CFO estimation algorithms for OFDM systems with low-precision quantization.



Keywords Carrier frequency offset · Orthogonal frequency division multiplexing · Multi-gigabit communications · Low-precision quantization





D. Li · X. Xiong (✉) · H. Wang
School of Automation and Electronic Information, Sichuan University of Science & Engineering, Zigong, Sichuan, China
e-mail: [email protected]
D. Li
e-mail: [email protected]

1 Introduction

Orthogonal frequency division multiplexing (OFDM) has many advantages for digital transmission over frequency-selective fading channels. In particular, OFDM is an important and efficient modulation scheme for designing high data rate wideband systems.


One of the bottlenecks in implementing a high-rate wideband digital system is the high-speed, high-precision analog-to-digital converter (ADC), which is impractical because of its high cost and large power consumption [1]. This motivates the use of lower-precision ADCs to reduce complexity. However, the ADC is the key component that converts the received analog signal into digital form at the very front of the receiver, and it directly affects receiver functions such as synchronization, channel estimation, demodulation, and decoding. A low-precision ADC introduces a larger quantization error, and when this quantization noise is no longer negligible it degrades the overall performance of the receiver [2]. It is therefore important to explore the impact of low-precision ADCs on receiver tasks such as synchronization and equalization. Related work on estimation from low-precision samples includes frequency estimation using 1-bit ADCs [3, 4] and signal parameter estimation using 1-bit dithered quantization [5, 6]. Recent information-theoretic results show that using low-precision ADCs is promising [7–9], which motivates a more detailed investigation into whether advanced signal processing such as channel estimation and frequency synchronization can be accomplished with low-precision ADCs.
A high-rate OFDM system is very sensitive to frequency offset, the phase distortion caused by the mismatch between the local oscillators at the transmitter and the receiver. Frequency offset directly affects channel estimation, demodulation, and decoding, and it degrades the entire system performance significantly if it is not compensated appropriately [10]. Therefore, in this paper we focus on CFO estimation for OFDM systems with low-precision quantization, through theoretical analysis and simulation, so as to provide a reference for exploring optimized CFO estimation algorithms for such systems.

2 System Model

Figure 1 shows the block diagram of an OFDM system with low-precision quantization. The OFDM signal is generated by taking the N-point IFFT of a block of symbols $\{X_k\}$ drawn from a QPSK constellation. The useful part of each OFDM symbol lasts T seconds, and a cyclic prefix (CP) of length $N_g$, longer than the channel delay spread, is prepended to eliminate its influence. Let $f$ and $f'$ denote the carrier frequencies at the transmitter and the receiver, respectively. At the receiver, the data stream is sampled with period $T_s = T/N$. Assuming ideal time synchronization, after passing through an AWGN channel the nth sample of the time-domain received signal can be expressed as

$$x_n = s(n)\exp(j2\pi n\varepsilon/N) + w(n), \qquad n = 0, 1, \ldots, N-1 \qquad (1)$$


Fig. 1 System diagram for OFDM with low-precision quantization

where $w(n)$ is white Gaussian noise with zero mean and variance $\sigma_w^2 = E\{|w(n)|^2\}$, and $s(n)$ is the signal component. Assuming $2K+1$ modulated subcarriers, the signal component can be written as

$$s(n) = \frac{1}{N}\sum_{k=-K}^{K} X_k H_k \exp(j2\pi k n/N), \qquad n = 0, 1, \ldots, N-1 \qquad (2)$$

where $H_k$ is the channel frequency response at the kth subcarrier and $\varepsilon$ is the normalized frequency offset. The signal-to-noise ratio is $\mathrm{SNR} = \sigma_s^2/\sigma_w^2$ with $\sigma_s^2 = E\{|s(n)|^2\}$. A uniform quantizer $Q(\cdot)$ with a small number $l$ of bits of precision is considered. Traditional receivers assume that the available ADC precision is large, and CFO estimation algorithms are designed assuming full-precision samples; once the ADC is restricted to low precision, however, the quantization error is severe and cannot be ignored. Following [12], assume the quantizer is symmetric over the range $[-M, +M]$, so the quantization step is $q = M/2^{l-1}$; the quantization can be modeled as an additive independent noise uniformly distributed on $[-q/2, +q/2]$ with variance $\sigma_{qn}^2 = q^2/12$. After low-precision ADC quantization, the nth sample becomes

$$r(n) = Q(x_n) = Q(\mathrm{Re}(x_n)) + jQ(\mathrm{Im}(x_n)) = s(n)\exp(j2\pi n\varepsilon/N) + w(n) + q(n), \qquad n = 0, 1, \ldots, N-1 \qquad (3)$$

where $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ denote the real and imaginary parts of $x_n$, respectively. The noise terms $w(n)$ and $q(n)$ are assumed independent in the following discussion.
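A small sketch of this front end, assuming arbitrary illustrative parameter values (N, K, the quantizer range M, and the bit width l are not taken from the paper), shows how the quantized samples r(n) of Eq. (3) can be produced for simulation:

```python
import numpy as np

def uniform_quantize(x, n_bits, m):
    """Symmetric uniform quantizer over [-m, m] applied separately to I and Q."""
    q = m / 2 ** (n_bits - 1)                      # quantization step
    quant = lambda v: np.clip(np.round(v / q) * q, -m, m)
    return quant(x.real) + 1j * quant(x.imag)

def quantized_ofdm_symbol(n_fft=64, k=26, eps=0.1, snr_db=20, n_bits=3, m=1.5, rng=None):
    """One OFDM symbol with normalized CFO eps, AWGN, and l-bit quantization (Eqs. 1-3)."""
    if rng is None:
        rng = np.random.default_rng()
    bits = rng.integers(0, 4, size=2 * k + 1)
    X = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))          # QPSK symbols
    Xfull = np.zeros(n_fft, dtype=complex)
    Xfull[np.r_[0:k + 1, n_fft - k:n_fft]] = X                # 2K+1 active subcarriers
    s = np.fft.ifft(Xfull)                                    # time-domain signal s(n)
    n = np.arange(n_fft)
    noise_var = np.mean(np.abs(s) ** 2) / 10 ** (snr_db / 10)
    w = np.sqrt(noise_var / 2) * (rng.normal(size=n_fft) + 1j * rng.normal(size=n_fft))
    x = s * np.exp(1j * 2 * np.pi * n * eps / n_fft) + w      # received samples, Eq. (1)
    return uniform_quantize(x, n_bits, m)                     # r(n), Eq. (3)

r = quantized_ofdm_symbol()
print(r[:4])
```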


3 Performance Analysis of the CFO Estimation with Low-Precision Quantization

In this section, we analyze the effect of low-precision quantization on the performance of three CFO estimation algorithms.

3.1 The Estimation Performance of MLE Algorithm

Two identical training symbols are transmitted consecutively, and the corresponding received signals differ only by the phase rotation caused by the CFO $\varepsilon$. In the absence of noise, the 2N-point received sequence is [13]

$$r_n = \frac{1}{N}\sum_{k=-K}^{K} H_k X_k\, e^{j2\pi n(k+\varepsilon)/N}, \qquad n = 0, 1, \ldots, 2N-1 \qquad (4)$$

The kth element of the N-point FFT of the first N points of (4) is

$$R_{1k} = \sum_{n=0}^{N-1} r_n\, e^{-j2\pi kn/N}, \qquad k = 0, 1, \ldots, N-1 \qquad (5)$$

and that of the second half of the sequence is

$$R_{2k} = \sum_{n=0}^{N-1} r_{n+N}\, e^{-j2\pi kn/N}, \qquad k = 0, 1, \ldots, N-1 \qquad (6)$$

According to (4), $r_{n+N} = r_n e^{j2\pi\varepsilon}$ and hence $R_{2k} = R_{1k} e^{j2\pi\varepsilon}$. Including the AWGN and the quantization noise,

$$Y_{1k} = R_{1k} + W_{1k} + Q_{1k}, \qquad Y_{2k} = R_{1k} e^{j2\pi\varepsilon} + W_{2k} + Q_{2k}, \qquad k = 0, 1, \ldots, N-1 \qquad (7)$$

The offset $\varepsilon$ is then estimated from the observations as

$$\hat{\varepsilon} = \frac{1}{2\pi}\tan^{-1}\!\left\{\sum_{k=-K}^{K}\mathrm{Im}\big[Y_{2k} Y_{1k}^{*}\big] \Big/ \sum_{k=-K}^{K}\mathrm{Re}\big[Y_{2k} Y_{1k}^{*}\big]\right\} \qquad (8)$$


The phase error can then be written as

$$\tan\big[2\pi(\hat{\varepsilon}-\varepsilon)\big] = \sum_{k=-K}^{K}\mathrm{Im}\big[Y_{2k} Y_{1k}^{*} e^{-j2\pi\varepsilon}\big] \Big/ \sum_{k=-K}^{K}\mathrm{Re}\big[Y_{2k} Y_{1k}^{*} e^{-j2\pi\varepsilon}\big] \qquad (9)$$

When $|\hat{\varepsilon}-\varepsilon| \ll 1/2\pi$, (9) is approximately equal to its argument, that is

$$\hat{\varepsilon}-\varepsilon \approx \frac{1}{2\pi}\,\frac{\sum_{k=-K}^{K}\mathrm{Im}\big[(R_{1k}+W_{2k}e^{-j2\pi\varepsilon}+Q_{2k}e^{-j2\pi\varepsilon})(R_{1k}+W_{1k}+Q_{1k})^{*}\big]}{\sum_{k=-K}^{K}\mathrm{Re}\big[(R_{1k}+W_{2k}e^{-j2\pi\varepsilon}+Q_{2k}e^{-j2\pi\varepsilon})(R_{1k}+W_{1k}+Q_{1k})^{*}\big]} \qquad (10)$$

and at high SNR, (10) may be further approximated by

$$\hat{\varepsilon}-\varepsilon \approx \frac{1}{2\pi}\,\frac{\sum_{k=-K}^{K}\mathrm{Im}\big[R_{1k}^{*}(W_{2k}+Q_{2k})e^{-j2\pi\varepsilon} + R_{1k}(W_{1k}+Q_{1k})^{*}\big]}{\sum_{k=-K}^{K}|R_{1k}|^{2}} \qquad (11)$$

Supposing $\varepsilon$ and $\{R_k\}$ are known, it is easy to see that

$$E\big[\hat{\varepsilon}-\varepsilon \,\big|\, \varepsilon, \{R_k\}\big] = 0 \qquad (12)$$

Hence the variance of the estimate is obtained as [13]

$$\operatorname{var}\big(\hat{\varepsilon}\,\big|\,\varepsilon, \{R_k\}\big) = \frac{1}{4\pi^{2}}\left(\frac{1}{\mathrm{SNR}} + \frac{\sigma_q^{2}}{\sigma_s^{2}}\right) \qquad (13)$$

3.2 The Estimation Performance of the SC Algorithm

In order to exploit the whole OFDM bandwidth and improve the estimation precision, we now adopt the SC algorithm [14], which uses a single training symbol for synchronization.


Let there be N/2 complex samples in each half of the training symbol, where the first half is identical to the second half except for a phase shift caused by the carrier frequency offset. After quantization by Q(\cdot), the complex samples are as given in (3), and the sum of the pairwise products is

P(d) = \sum_{n=0}^{N/2-1} r_{d+n}^{*}\, r_{d+n+N/2},   (14)

where d is the time index of the first sample in a window of N samples. Assuming that the effect of the channel cancels out, the main difference between the two halves of the training symbol is a phase difference \phi = \pi\varepsilon. If \phi \in [-\pi, +\pi], the phase can be estimated as

\hat{\phi} = \arg(P(d)),   (15)

and the frequency offset estimate is

\hat{\varepsilon} = \frac{\arg(P(d))}{\pi}.   (16)

If \phi \notin [-\pi, +\pi], the estimate is biased. Otherwise, the estimation variance can be derived with the method of [13]:

\mathrm{var}(\hat{\varepsilon}) = E\left[\left|\frac{\arg(P(d))}{\pi}-\varepsilon\right|^{2}\right]
= E\left[\left|\frac{1}{\pi}\tan^{-1}\left\{\frac{\sum_{n=0}^{N/2-1}\mathrm{Im}\left(r_{d+n+N/2}\,r_{d+n}^{*}\,e^{-j\pi\varepsilon}\right)}{\sum_{n=0}^{N/2-1}\mathrm{Re}\left(r_{d+n+N/2}\,r_{d+n}^{*}\,e^{-j\pi\varepsilon}\right)}\right\}\right|^{2}\right]
\approx \frac{2}{\pi^{2}N}\left(\frac{1}{SNR}+\frac{\sigma_q^{2}}{\sigma_s^{2}}\right).   (17)

As can be seen, the estimation variance decreases as N and the SNR increase, while it is positively related to \sigma_q^{2}. In addition, this algorithm has a low complexity compared with the MLE, and its accurate estimation range is |\hat{\varepsilon}| \le 1.
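A minimal Python sketch of the SC estimation rule (14)-(16) is given below for illustration only; it assumes a training symbol whose two halves are identical, and the helper names and toy signal are not from the original paper.

```python
import numpy as np

def sc_cfo_estimate(r, d, n_fft):
    """Schmidl-Cox style fractional CFO estimate, eqs. (14)-(16).

    r     : received samples (possibly low-precision quantized)
    d     : start index of the N-sample training window
    n_fft : symbol length N (the two halves of the symbol are identical)
    Valid while the half-symbol phase difference stays within (-pi, pi],
    i.e. |eps| <= 1.
    """
    half = n_fft // 2
    window = r[d:d + n_fft]
    p = np.sum(np.conj(window[:half]) * window[half:])  # eq. (14)
    return np.angle(p) / np.pi                           # eq. (16)

# toy check: two identical halves plus a frequency offset of 0.7
n_fft, eps = 256, 0.7
half_part = np.exp(2j * np.pi * 3 * np.arange(n_fft // 2) / (n_fft // 2))
sym = np.tile(half_part, 2)
n = np.arange(n_fft)
r = sym * np.exp(2j * np.pi * eps * n / n_fft)
print(sc_cfo_estimate(r, d=0, n_fft=n_fft))              # close to 0.7
```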


3.3 The Estimation Performance of the ESC Algorithm

In order to achieve a larger estimation range together with better accuracy, consider the ESC algorithm, which extends the SC algorithm by using a training symbol composed of two identical parts generated from the real training sequence C = \{1\,1\,1\ldots1\} of length N [15]. The nth complex sample of the first half of the training symbol can then be expressed as

r(n) = e^{j2\pi n\varepsilon/N} + w(n) + q(n), \quad n = 0,1,\ldots,N/2-1,   (18)

and the second half of the sequence is

r(n+N/2) = e^{j2\pi\varepsilon(n+N/2)/N} + w(n+N/2) + q(n+N/2), \quad n = 0,1,\ldots,N/2-1.   (19)

Define the variable

R(n) = r(n+N/2)\,r^{*}(n) \approx e^{j\pi\varepsilon}.   (20)

The fractional part \varepsilon_F of \varepsilon is obtained by averaging over the N/2 samples:

\hat{\varepsilon}_F = \frac{2}{N\pi}\sum_{n=0}^{N/2-1}\arg(R(n)) = \frac{1}{\pi}\tan^{-1}\left\{\frac{\sum_{n=0}^{N/2-1}\mathrm{Im}\left(r_{n+N/2}\,r_n^{*}\right)}{\sum_{n=0}^{N/2-1}\mathrm{Re}\left(r_{n+N/2}\,r_n^{*}\right)}\right\}.   (21)

The estimation range is |\varepsilon_F| < 1, and the compensation factor is e^{-j2\pi n\hat{\varepsilon}_F/N}, n = 0,1,\ldots,N-1, so that the compensated training sequence can be expressed as

y(n) = e^{j2\pi n\varepsilon_I/N} + w(n) + q(n), \quad n = 0,1,\ldots,N-1,   (22)

where \varepsilon_I is the remaining integer part of the offset. The correlation function

I(n) = y_{n+1}\,y_n^{*} \approx e^{j2\pi\varepsilon_I/N}, \quad n = 0,1,\ldots,N-2,   (23)

then allows the integer part to be computed as

\hat{\varepsilon}_I = \frac{1}{N-1}\sum_{n=0}^{N-2}\frac{N}{2\pi}\arg(I(n)).   (24)


Since |\varepsilon_I| < N/2, the final CFO estimate is

\hat{\varepsilon} = \hat{\varepsilon}_F + \hat{\varepsilon}_I = \frac{1}{\pi}\arg(R(n)) + \frac{N}{2\pi}\arg(I(n)).   (25)

We now analyze the performance of the ESC algorithm. Using the method of [13], the phase error satisfies

\tan\left[\frac{N\pi}{2}(\hat{\varepsilon}_F-\varepsilon_F)\right] = \frac{\sum_{n=0}^{N/2-1}\mathrm{Im}\left(r_{n+N/2}\,r_n^{*}\,e^{-jN\pi\varepsilon_F/2}\right)}{\sum_{n=0}^{N/2-1}\mathrm{Re}\left(r_{n+N/2}\,r_n^{*}\,e^{-jN\pi\varepsilon_F/2}\right)}.   (26)

When |\hat{\varepsilon}_F-\varepsilon_F| \ll 1/(N\pi), (26) is approximately

\hat{\varepsilon}_F-\varepsilon_F \approx \frac{2}{N\pi}\,\frac{\sum_{n=0}^{N/2-1}\mathrm{Im}\left(r_{n+N/2}\,r_n^{*}\,e^{-jN\pi\varepsilon_F/2}\right)}{\sum_{n=0}^{N/2-1}\mathrm{Re}\left(r_{n+N/2}\,r_n^{*}\,e^{-jN\pi\varepsilon_F/2}\right)}.   (27)

According to [13], the variance of the estimator of the fractional part of the frequency offset is then

\mathrm{var}(\hat{\varepsilon}_F \mid \varepsilon_F, c) = \frac{4}{N^{2}\pi^{2}}\left(\frac{1}{SNR}+\frac{\sigma_q^{2}}{\sigma_s^{2}}\right).   (28)

Since the carrier frequency offset estimate is the sum of \hat{\varepsilon}_F and \hat{\varepsilon}_I, the variance of \hat{\varepsilon}_F equals the variance of the final estimate provided that \hat{\varepsilon}_I can be estimated reliably. From (20) it can be seen that the estimation error of the ESC algorithm is positively correlated with \sigma_q^{2}+\sigma_w^{2}, and the variance becomes smaller as N or the SNR grows larger.
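A minimal Python sketch of the ESC estimation steps (21)-(25) is given below for illustration; the function name is an assumption, and the integer part is obtained from the sum of the neighbouring-sample correlations, which is a numerically convenient variant of the averaging in (24) rather than the authors' exact implementation.

```python
import numpy as np

def esc_cfo_estimate(r, n_fft):
    """Extended SC (ESC) estimate: fractional part from the half-symbol
    correlation, integer part from the residual sample-to-sample rotation.

    r : N received samples of the training symbol generated from the
        all-ones sequence, so ideally r[n] = exp(j*2*pi*n*eps/N) + noise.
    """
    half = n_fft // 2
    # fractional part, eqs. (20)-(21)
    rr = r[half:2 * half] * np.conj(r[:half])
    eps_f = np.angle(np.sum(rr)) / np.pi
    # compensate the fractional offset, eq. (22)
    y = r[:n_fft] * np.exp(-2j * np.pi * np.arange(n_fft) * eps_f / n_fft)
    # integer part from neighbouring-sample correlations, eqs. (23)-(24)
    i_corr = y[1:] * np.conj(y[:-1])
    eps_i = np.round(n_fft * np.angle(np.sum(i_corr)) / (2 * np.pi))
    return eps_f + eps_i                                  # eq. (25)

# toy check: a large offset with integer and fractional parts
n_fft, eps = 256, 3.3
n = np.arange(n_fft)
r = np.exp(2j * np.pi * n * eps / n_fft)
print(esc_cfo_estimate(r, n_fft))                         # close to 3.3
```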

4 Simulation Results and Discussion

In Figs. 2, 3, 4 and 5 we provide a simple simulation example to illustrate the BER performance of the three estimation algorithms with a low-precision ADC (2-4 bits). The simulation compares their BER at different SNRs under the following conditions: (i) N = 1024, N_g = 128, \varepsilon = 0.9; (ii) 2K+1 = 861, f = 1 GHz, T_s = 0.2 \mu s; (iii) the information bits \{X_k\} are generated by QPSK modulation and transmitted over an AWGN channel; (iv) perfect OFDM symbol time synchronization is assumed.

Fig. 2 BER versus SNR for the MLE with low-precision quantization

Fig. 3 BER versus SNR for the SC with low-precision quantization

Fig. 4 BER versus SNR for the ESC with low-precision quantization


Fig. 5 BER versus SNR for the different CFO estimation algorithms with 4-bit quantization

As shown in Figs. 2, 3 and 4, for a given algorithm the system BER increases as the ADC precision is reduced at the same SNR. Taking Fig. 4 (the ESC algorithm) as an example, for l = 2, 3 and 4 bits the order of magnitude of the BER at an SNR of 10 dB is close to 10^{-1}, 10^{-2} and 10^{-3}, respectively, whereas with full precision (6 bits) the BER reaches about 10^{-4}. Figure 5 compares the BER performance of the different CFO estimation algorithms with 4-bit quantization. For the same quantization precision, the ESC algorithm performs best, the SC algorithm comes close to the ESC algorithm, and the MLE algorithm is the worst. This is not a surprise, because the estimation ranges of ESC, SC and MLE are successively smaller: the MLE is designed to work only with a very small frequency offset, and the estimation range of SC is only 0.5 larger than that of the MLE, so both fail for larger frequency offsets. It is worth noting that at an SNR of 10 dB the ESC algorithm improves on the MLE algorithm by an order of magnitude. This highlights that, although low-precision quantization degrades the CFO estimation, a well-chosen CFO estimation algorithm can effectively relieve the effect of the low-precision quantization noise.

5 Conclusion

This paper discussed three CFO estimation algorithms and focused on how low-precision quantization affects their performance in an OFDM system. Theoretical analysis and simulations verify that the quantization error introduced by low-precision quantization significantly degrades the performance of the receiver. Fortunately, the simulation results also show that an optimized CFO estimation algorithm can effectively relieve the effect of


the low-precision quantization noise and preserve the performance of the whole system. This provides a reference for further exploration of CFO estimation methods under low-precision quantization. In future work we will analyze the effect of the nonlinear characteristics of low-precision ADCs and address synchronization (frequency, timing and sampling synchronization) for OFDM systems with low-precision ADCs.

Acknowledgments This work is fully supported by the Innovation Group Build Plan for the Universities in Sichuan [No. 13TD0017], the Sichuan Provincial Youth Science and Technology Innovation Team [No. 2015TD0022], the Science Founding of the Artificial Intelligence Key Laboratory of Sichuan Province [No. 2012RYJ05], and the Talents Project of Sichuan University of Science and Engineering (No. 2014RC13).

References

1. Singh, J., Ponnuru, S., Madhow, U.: Multi-gigabit communication: the ADC bottleneck. In: IEEE International Conference on UWB (ICUWB 2009)
2. Dabeer, O., Singh, J., Madhow, U.: On the limits of communication performance with one-bit analog-to-digital conversion. In: Proceedings of the International Conference on Signal Processing Advances in Wireless Communications, Cannes, pp. 1-5, July 2006
3. Host-Madsen, A., Handel, P.: Effects of sampling and quantization on single-tone frequency estimation. IEEE Trans. Signal Process. 48, 650-662, Mar 2000
4. Andersson, T., Unsound, M., Handel, P.: Frequency estimation by 1-bit quantization and table look-up processing. In: Proceedings of the European Signal Processing Conference (EUSIPCO), Tampere (2000)
5. Rousseau, D., Anand, G.V., Chapeau-Blondeau, F.: Nonlinear estimation from quantized signals: quantizer optimization and stochastic resonance. In: Third International Symposium on Physics in Signal and Image Processing (PSIP), Grenoble (2003)
6. Dabeer, O., Karnik, A.: Signal parameter estimation using 1-bit dithered quantization. IEEE Trans. Inf. Theory 52(12), 5389-5405 (2006)
7. Singh, J., Dabeer, O., Madhow, U.: Communication limits with low-precision analog-to-digital conversion at the receiver. In: Proceedings of the IEEE International Conference on Communications, pp. 6269-6274, Glasgow, June 2007
8. Dabeer, O., Masry, E.: Multivariate signal parameter estimation under dependent noise from 1-bit dithered quantized data. IEEE Trans. Inf. Theory 54(4), 1637-1654 (2008)
9. Singh, J., Dabeer, O., Madhow, U.: Transceiver design with low-precision analog-to-digital conversion: an information-theoretic perspective. IEEE Transactions on Communications (2009)
10. Yan, C.: Synchronization in OFDM systems. PhD dissertation, University of Electronic Science and Technology of China, Nov 2004
11. Wang, W.B., Zheng, K.: Broadband Wireless Communication Based on the OFDM Technology (in Chinese). People's Posts and Telecommunications Press, Beijing, pp. 66-67, Aug 2007
12. Lin, Z.W., Peng, X.M., Chin, F.: Joint carrier frequency offset and channel estimation for OFDM based gigabit wireless communication system with low precision ADC. In: IEEE VTC Fall, pp. 1-5, Sept 2011
13. Moose, P.: A technique for orthogonal frequency division multiplexing frequency offset correction. IEEE Trans. Commun. 42, 2908-2914 (1994)


14. Schmidl, T.M., Cox, D.C.: Robust frequency and timing synchronization for OFDM. IEEE Trans. Commun. 1613-1621 (1997)
15. Lin, Y.C., Qian, L.S., Xi, T.Y., et al.: A novel frequency offset estimation method for OFDM systems with large estimation range. IEEE Trans. Broadcast. 52(1), 58-61 (2006)

Analysis of Sun Outages Influence on GEO to LEO Communication Yan Lou, Yi Wu Zhao, Chunyi Chen, Shoufeng Tong and Cheng Han

Abstract A sun outage is an interruption in satellite signals caused by interference from solar radiation. A theoretical model for predicting the impact of sun outages on geostationary earth orbit (GEO) to low earth orbit (LEO) satellite communication systems is presented. The impact is evaluated by theoretical calculation and by satellite tool kit (STK) simulation. The total annual sun-outage time for the LEO is 63 min; the outages occur mostly between 14:00 and 15:00, last no more than 5 min each, and appear in spring and autumn. The total annual sun-outage time for the GEO is 75 min; these outages occur mostly between 2:00 and 3:00, last no more than 30 min each, and also appear in spring and autumn. The definition of the available probability is presented, and its value is calculated to be 99.9737 %.



Keywords GEO satellites · LEO satellites · Direct transit from sun · Available probability · Satellite tool kit (STK)

1 Introduction

This article considers satellite communication links that are affected by solar outages. The sun transit outage greatly degrades the transmission quality and limits the system's availability [1-3]. The rapid growth


of geostationary-to-low-earth-orbit (GEO-LEO) satellite communication systems using higher frequency bands has highlighted the effects of different propagation impairments. Sun transit outage is one of the main sources of link outages. The sun aligns directly with the satellites and earth stations twice a year, once in spring and once in autumn; this phenomenon is called a sun outage. During this time the link between the GEO and LEO satellites may experience interference. In order to provide continuous, high-quality communication, it is necessary to identify, predict and compensate for sun outages along the earth-satellite paths [4]. In this paper, a model for predicting the impact of sun transit outages on GEO-to-LEO satellite communication systems is first presented. Second, the probability of sun transit outage in GEO-to-LEO satellite communication systems is simulated with the STK software. Finally, the results show that the sun transit outage prediction presented here is reliable and that an adaptive modulation scheme is an effective means of compensation.

2 Prediction Model of Sun Outage

We consider occlusion between two satellites caused by the sun. The LEO and GEO satellites move in their own orbits over time, and the overlapping coverage of adjacent satellites leaves no gaps in coverage. The relative positions of the sun, the GEO and the LEO satellite change continuously, and the visual axis of the laser communication system should avoid direct sunlight. The background light is most intense within a 15° dead zone; this paper uses a 1° angle to analyze the geometry among the Sun, the LEO and the GEO. A Sun-LEO-GEO link around the earth will be disrupted if any portion of the sun is within the field of view of the LEO receiver, i.e., when the angle \theta_2 \le \theta_1 (see Fig. 1). The angle subtended by the sun at the two satellites is

Fig. 1 Relative positions of Sun, GEO, and LEO satellite constellation


approximately 1°. The angle between the communicating satellite and the sun can be obtained by standard analysis as shown below [5-8]:

\theta_1 = \arctan\frac{695{,}500}{\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}},   (1)

\theta_2 = \arccos\frac{(x_1-x_2)(x_3-x_2)+(y_1-y_2)(y_3-y_2)+(z_1-z_2)(z_3-z_2)}{\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}\,\sqrt{(x_3-x_2)^2+(y_3-y_2)^2+(z_3-z_2)^2}},   (2)

where 695,500 km is the solar radius and the coordinates used in (1) and (2) are

C_{Sun} = (x_1, y_1, z_1),   (3)

C_{LEO} = (x_2, y_2, z_2),   (4)

C_{GEO} = (x_3, y_3, z_3).   (5)

The ground stations used in the simulation are:
Yunnan station: latitude 2°05′23.79″, longitude 102°39′05.06″, altitude 2014 m;
Kashi station: latitude 39°31′05.08″, longitude 76°01′57.46″, altitude 1486 m;
Ali station: latitude 32°33′57.82″, longitude 80°09′35.14″, altitude 5036 m;
Hainan station: latitude 20°04′55.01″, longitude 11°02′22.90″, altitude 0 m.
The satellites are:
GEO: latitude 77°;
LEO: orbit altitude 500 km, inclination angle 95°.
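For illustration (not part of the original paper), the following Python sketch evaluates (1) and (2) for given Sun, LEO and GEO coordinates and flags the outage condition \theta_2 \le \theta_1; the coordinates in the example are made-up values in a common Cartesian frame.

```python
import math

SUN_RADIUS_KM = 695_500.0

def sun_outage(sun, leo, geo):
    """Return (theta1, theta2, outage) for the Sun-LEO-GEO geometry.

    sun, leo, geo : (x, y, z) coordinates in km in the same reference frame.
    theta1 : angular radius of the Sun seen from the LEO, eq. (1)
    theta2 : angle Sun-LEO-GEO at the LEO receiver, eq. (2)
    The link is considered disrupted when theta2 <= theta1.
    """
    v_sun = [s - l for s, l in zip(sun, leo)]   # LEO -> Sun vector
    v_geo = [g - l for g, l in zip(geo, leo)]   # LEO -> GEO vector
    d_sun = math.sqrt(sum(c * c for c in v_sun))
    d_geo = math.sqrt(sum(c * c for c in v_geo))
    theta1 = math.atan(SUN_RADIUS_KM / d_sun)
    cos_t2 = sum(a * b for a, b in zip(v_sun, v_geo)) / (d_sun * d_geo)
    theta2 = math.acos(max(-1.0, min(1.0, cos_t2)))
    return theta1, theta2, theta2 <= theta1

# made-up example positions (km): Sun far away on +x, GEO almost in the same direction
theta1, theta2, hit = sun_outage(
    sun=(1.496e8, 0.0, 0.0), leo=(6878.0, 0.0, 0.0), geo=(42164.0, 100.0, 0.0))
print(math.degrees(theta1), math.degrees(theta2), hit)
```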

3 Analysis of Solar Outages Influence on GEO to LEO Communication

The following simulation results show the influence of solar outages on the GEO-to-LEO link when the angle subtended by the sun at the two satellites is less than approximately 1°.

3.1 The Influence on LEO

The simulation results are shown in the following figures (Figs. 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11):


Fig. 2 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In October 2012 there are 3 days with solar outages, each lasting 4-5 min; the start and end times of each outage are shown

Fig. 3 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In September 2012 there are 8 days with solar outages, each lasting 1-4 min; the start and end times of each outage are shown


Fig. 4 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In August 2012 there is 1 day with solar outages, each lasting 1-2 min; the start and end times of each outage are shown

Fig. 5 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In April 2012 there are 3 days with solar outages, each lasting 2-6 min; the start and end times of each outage are shown


Fig. 6 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In March 2012 there are 7 days with solar outages, each lasting 2-3 min; the start and end times of each outage are shown

Fig. 7 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In September 2012 there are 6 days with solar outages, each lasting 1-3 min; the start and end times of each outage are shown

3.2 The Influence on GEO

The simulation results are shown in the figures that follow.

4 Analysis of Available Probability

The availability calculation in the present work is based on a time-budget analysis of the sun outages of the GEO and LEO over one year. For the Sun-LEO-GEO link, non-availability is defined as the time during which the angle \theta_2 \le \theta_1


Fig. 8 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In August 2012 there are 3 days with solar outages, each lasting 2-3 min; the start and end times of each outage are shown

Fig. 9 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In April 2012 there are 3 days with solar outages, each lasting 2-3 min; the start and end times of each outage are shown

(see Fig. 1), where \theta_1 is defined as the angle Sun P-LEO-Sun0 and \theta_2 as the angle Sun0-LEO-GEO. The available probability is then

Available probability = (annual minutes - solar outage minutes)/annual minutes x 100 % = (525,600 - 63 - 75)/525,600 x 100 % = 99.9737 % (Tables 1 and 2).
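The availability figure follows directly from the annual outage minutes reported above; the short Python sketch below (an illustration only) reproduces the arithmetic.

```python
# available probability from the annual sun-outage time budget
minutes_per_year = 365 * 24 * 60          # 525,600 min
outage_leo, outage_geo = 63, 75           # annual sun-outage minutes from the simulations
availability = (minutes_per_year - outage_leo - outage_geo) / minutes_per_year * 100
print(f"{availability:.4f} %")            # about 99.9737 %
```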


Fig. 10 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In March 2012 there are 5 days with solar outages, each lasting 2-9 min; the start and end times of each outage are shown

Fig. 11 Simulation analysis of solar outages of GEO-to-LEO satellite communications. In February 2012 there is 1 day with a solar outage of 12 min; the start and end times of the outage are shown

Table 1 Angle of the Sun-LEO-GEO link less than 1°
Solar outage days per year: 23 days
Months: October, September, August, April, March
Duration per outage: 1-5 min
Start and end times: 14:09-15:24
Total time: 63 min

Table 2 Angle of the Sun-GEO-LEO link less than 1°
Solar outage days per year: 22 days
Months: September, August, April, March, February
Duration per outage: 1-12 min
Start and end times: 2:04-3:07
Total time: 75 min


5 Conclusion

First, in order to analyze the effect of direct solar radiation on a satellite optical communication system, a model for predicting the impact of sun transit outages on GEO-to-LEO satellite communication systems was presented. Next, the available probability of the direct sun transit for the GEO and LEO satellite link was analyzed with STK and Matlab. Finally, the simulation results show that the annual duration of direct sun transit for the Sun-LEO-GEO (SLG) link is 63 min at a 1° angle; these transits generally appear between 14:00 and 15:00 and each lasts no more than 5 min. The annual duration of direct sun transit for the Sun-GEO-LEO (SGL) link is 75 min at a 1° angle; these transits generally appear between 2:00 and 3:00 Beijing time and each lasts no more than 30 min. The whole-year available probability is 99.9737 %.

Acknowledgments This work was financially supported by the National Natural Science Foundation (61007046) and the Innovation Program of the Jilin Municipal Education Commission (20140520115JH).

References

1. Jiang, W., Zong, P.: Prediction and compensation of sun transit in LEO satellite systems. In: International Conference on Communications and Mobile Computing, pp. 495-498. IEEE (2010). doi:10.1109/CMC.2010.206
2. Mohamadi, F.: Effects of solar transit on Ku band VSAT systems. Int. J. Satell. Commun. 6(1), 65-71 (1988)
3. Vuong, X.T., Forsey, R.J.: Prediction of sun transit outages in an operational communication satellite system. IEEE Trans. Broadcast. 29(4), 121-126 (1983)
4. Abrishamkar, F., Siveski, Z.: PCS global mobile satellites. IEEE Commun. Mag. 34(9), 132-136 (1996)
5. Gremont, B., Filip, M., Gallois, P., Bate, S.: Comparative analysis and performance of two predictive fade detection schemes for Ka-band fade countermeasures. IEEE J. Sel. Areas Commun. 17(2), 180-192 (1999)
6. Pratt, T., Bostian, C., Allnutt, J.: Satellite Communications, 2nd edn. Wiley (2003)
7. Chang, T., Phillips, T.M., Acuf, P.R.: A comparative study of solar interference on the Iridium and MSS constellations. Int. J. Satell. Commun. (1994)
8. Mohamadi, F., Lyon, D.L.: Effects of solar transit in Ku-band VSAT systems. IEEE Trans. Commun. 36(7), 892-894 (1988)

60-GHz UWB System Performance Analysis for Gigabit M2M Communications Suiyan Geng, Linlin Cheng, Xing Li and Xiongwen Zhao

Abstract In this paper, the feasibility and performance of mm-wave 60 GHz ultra-wideband (UWB) systems for gigabit machine-to-machine (M2M) communications are analyzed. Based on regulatory specifications and on experimental channel measurements and models for both LOS and NLOS scenarios, the 60 GHz propagation mechanisms are summarized and the 60 GHz UWB link budget and performance are analyzed. Tests were performed to determine ranges and antenna configurations. The results show that gigabit capacity can be achieved with an omni-omni antenna configuration under LOS conditions. When the LOS path is blocked by a moving person, or when the radio wave propagates in an NLOS situation, an omni-directional antenna configuration is required to cover larger ranges between machines in office rooms. It is therefore essential to keep a clear LOS path in M2M applications such as gigabit data transfer in office rooms. The goal of this study is to provide useful information for the design of 60 GHz UWB systems for gigabit M2M communications.

Keywords Mm-wave 60 GHz · UWB · Machine-to-machine (M2M) · Gigabit communications

1 Introduction

Wireless data traffic is projected to increase 1000-fold by the year 2020. The mm-wave 60 GHz band is seen as the major candidate wireless interface for gigabit applications, owing to the intrinsically large transmission bandwidth available in the band. This large bandwidth (as a rule of thumb, the available bandwidth B is about 10 % of the center frequency) makes 60 GHz radio particularly interesting for gigabit wireless communications [1]. The band has been


Fig. 1 Examples of the 60 GHz radio for gigabit M2M communications

proposed for gigabit WPAN applications in IEEE 802.15.3c. At the same time, gigabit wireless applications are emerging today, especially gigabit machine-to-machine (M2M) applications such as wireless A/V (audio/video) cable replacement and wireless high-speed file transfer, e.g., downloading an hour-long movie file within 1 min. In high-definition television (HDTV) applications, rates of up to several Gb/s are required to support the uncompressed exchange of information between TVs, cameras, DVD players and other appliances. Figure 1 shows examples of the 60 GHz radio for gigabit wireless applications in M2M networks. Regulation and standardization efforts for the 60 GHz band are currently underway worldwide; the existing international standards are ECMA-387, IEEE 802.15.3c and IEEE 802.11ad. In China, the study of the 60 GHz band has received more and more attention in recent years. In 2010 the 60 GHz wireless network project group (PG4) was established, and PG4 formed a formal partnership with the IEEE 802.11 working group. The IEEE 802.11aj task group was founded in September 2012 for a next-generation WLAN standard in the Chinese millimeter-wave band, and the IMT-2020 (5G) group was founded on February 19, 2013, to promote standard formulation in the 60 GHz band for 5G technology, with a standard plan to be completed in 2014 [2, 3]. However, the output power of 60 GHz devices is limited by regulations, and the free-space path loss of a 60 GHz carrier is much higher than that of a microwave carrier. Although high-gain antennas can compensate for the high path loss at mm-wave frequencies, the drawback of such antenna systems is obvious: they suffer from poor flexibility and limited mobility. In this work, considering office-room machines (e.g., computer-to-computer data transfer) with a 1-5 m range, and employing experimental propagation models for LOS, NLOS and a LOS path


Table 1 60 GHz band plan and limits on transmit power, EIRP and antenna gain for various countries [6, 7]

Region | Freq. band (GHz) | TX power (max) | EIRP | Antenna gain | Comment
USA | 7 GHz (57-64) | 500 mW | 40 dBm (ave), 43 dBm (max) | NS | For B > 100 MHz, translate average PD from 9 to 18 µW at 3 m
Canada | 7 GHz (57-64) | 500 mW | 40 dBm (ave), 43 dBm (max) | NS | For B > 100 MHz, translate average PD from 9 to 18 µW at 3 m
Japan | 7 GHz (59-66), max 2.5 GHz | 10 mW | NS | 47 dBi (max) |
Australia | 3.5 GHz (59.4-62.9) | 10 mW | 150 W (max) | NS | Limited to land and maritime
Korea | 7 GHz (57-64) | 10 mW | TBD | TBD |
Europe | 9 GHz (57-66), min 50 MHz | 20 mW | 57 dBm (max) | 37 dBi (max) | Recommendation by ETSI
China | 5 GHz (59-64) | 10 mW | 44 dBm (ave), 47 dBm (max) | NS |

(NS: not specified; TBD: to be decided)

blocked by a moving person, as studied in our previous works [4, 5], the 60 GHz UWB link budget and performance are analyzed. Tests are also performed to determine communication ranges and antenna gains. The goal of this study is to provide useful information for the design of 60 GHz UWB systems for gigabit M2M communications and for standardization groups. Table 1 shows the 60 GHz band plan and the limits on transmit power, EIRP and antenna gain for various countries [6], including the parameters specified in China [7].

2 Mm-Wave 60 GHz Propagation Mechanisms

In the design and optimization of wireless communication systems, channel models featuring the relevant characteristics of radio-wave propagation are required. Ray tracing is a well-established tool for channel modeling; in a ray-tracing algorithm, reflection and diffraction are the main physical processes for LOS and NLOS environments. In our previous works [4, 5], the mm-wave 60 GHz propagation mechanisms were studied from direction-of-arrival (DOA) measurements, which require detailed knowledge of the propagation channels. The measured power angle profiles (PAPs) and PDPs can then be connected with site-specific information about the measurement environments to find the origin of the arriving signals. From [5], the mm-wave 60 GHz propagation mechanisms can be summarized as follows:


• The direct path and the first-order reflections from smooth surfaces form the main contributions in LOS propagation environments.
• Diffraction is a significant propagation mechanism in NLOS cases; moreover, the signal levels of diffraction and of second-order reflection are comparable.
• The transmission loss through concrete or brick walls is very high.

The person blocking effect (PBE) was also measured in our previous work [4], since the movement of persons is quite common in office rooms. The PBE is a major concern for propagation research and system development, and the effects of person blocking at 60 GHz have been studied by many researchers [8, 9]. In [4], the PBE was measured with the DOA measurement technique as described below.

2.1 Person Block Effect (PBE) Measurements

The PBE measurements were performed in a room with the TX and RX positions fixed 5 m apart. The power angle profiles (PAPs) were measured for a clear LOS path and for the path with a person standing in the middle of the LOS path, as shown in Fig. 2a, b, respectively. There is about 18 dB of person attenuation in the blocked path (\varphi = 0°). However, the PBE can be reduced to 12 dB by using a selection diversity technique, i.e., by selecting another, stronger path (at \varphi = 315°), which is considered to be a first-order reflection from the window glass in the room, as reported in [5]. Selection diversity can be explained simply: when the LOS path undergoes a deep fade (person blocking), the fading effect is mitigated by selecting another independent strong signal. Diversity is a powerful receiver technique that improves the link. Therefore, an effective PBE of 12 dB is used in the 60 GHz UWB system parameter analysis of this paper.
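A small Python sketch of the selection-diversity idea is given below for illustration only: the receiver switches to the strongest remaining path, so the effective blocking loss is the drop of the profile maximum. The 18 dB and 12 dB values come from the text; the profile values in the example are purely illustrative.

```python
import numpy as np

def effective_blocking_loss(pap_clear_db, pap_blocked_db):
    """Effective person-blocking loss with selection diversity.

    pap_clear_db, pap_blocked_db : power-angle profiles in dB (same angle grid).
    Without diversity the loss is read at the LOS direction (the profile peak);
    with selection diversity it is the drop of the profile maximum.
    """
    los_idx = int(np.argmax(pap_clear_db))
    loss_los = pap_clear_db[los_idx] - pap_blocked_db[los_idx]
    loss_div = np.max(pap_clear_db) - np.max(pap_blocked_db)
    return loss_los, loss_div

# illustrative numbers only: LOS at 0 deg (-60 dB), reflection at 315 deg (-72 dB),
# person blocking attenuates the LOS path by 18 dB
angles = np.array([0, 90, 180, 270, 315])
clear = np.array([-60.0, -95.0, -97.0, -96.0, -72.0])
blocked = clear.copy()
blocked[0] -= 18.0
print(effective_blocking_loss(clear, blocked))   # (18.0, 12.0)
```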

Fig. 2 PAPs of a clear LOS path and b the LOS path blocked by a person in person block effect (PBE) measurements

2.2 Radio Wave Propagation Mechanisms in the NLOS Case

From [5] we know that in LOS propagation environments the direct path and the first-order reflections from smooth surfaces form the main contributions to the received signal. This is also confirmed in [10], where a two-ray model (LOS path plus first-order reflection from the desktop) is proposed for 60 GHz M2M systems. In NLOS cases, diffraction is a significant propagation mechanism, and the signal levels of diffraction and second-order reflection are comparable [5]. This indicates that, in the 60 GHz band, NLOS radio links are relayed by diffraction and/or second-order reflections. As an example, Fig. 3 shows how radio waves propagate in an office-room environment in an NLOS scenario; the diffraction and second-order reflection rays are denoted by dotted and solid lines, respectively. It should be noted that in the NLOS case the signal power loss increases greatly as the distance between the TX and RX increases. Thus, the propagation range is a major concern for system development in NLOS environments.

3 60 GHz UWB System Link Budget Analysis

In wireless communication systems, the upper bound on capacity is given by the Shannon theorem as a function of the bandwidth B and the signal-to-noise ratio (SNR):

C = B\log_2(1+SNR).   (1)

The system capacity increases with B and with the SNR. However, increasing the bandwidth also raises the noise power of the system; for example, the noise power of a UWB channel with B = 7 GHz is about 18 dB higher than that of a narrowband B = 100 MHz channel (for an antenna noise temperature of T = 290 K).

Fig. 3 Mm-wave radio links are relayed by diffraction and/or double reflection in the NLOS case


In this study, SNR = 10 dB and B = 1 GHz are considered for a basic feasibility study of achieving gigabit capacity with 60 GHz UWB systems.

3.1 Parameter Analysis of the 60 GHz UWB System Link Budget

In wireless communication systems, the performance and robustness are largely determined by the SNR obtained from the radio link budget:

SNR = P_t + G_t + G_r - PL - N_0 - IL,   (2)

where P_t is the transmitted power, G_t and G_r are the transmitter (TX) and receiver (RX) antenna gains, PL denotes the path loss of the propagation channel, N_0 is the total noise power at the RX, and IL denotes the implementation loss of the system. P_t is limited by the regulations for 60 GHz radio systems; in this work it is chosen as P_t = 10 dBm, as specified by most countries including China. The other system parameters are set to practical values, i.e., IL = 6 dB and a noise figure NF = 6 dB in the total noise power N_0 = 10\log_{10}(kTB) + NF, where k is Boltzmann's constant and T = 290 K is the standard noise temperature.
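A short Python sketch (illustration only, with the parameter values quoted in the text) evaluates the noise power and the link budget of (2); the antenna gains in the example are assumed values giving the 10 dB combined omni-omni gain discussed later.

```python
import math

K_BOLTZ = 1.38064852e-23   # Boltzmann's constant, J/K

def noise_power_dbm(bandwidth_hz, noise_figure_db, temp_k=290.0):
    """Total receiver noise power N0 = 10*log10(k*T*B) + NF, in dBm."""
    return 10 * math.log10(K_BOLTZ * temp_k * bandwidth_hz * 1e3) + noise_figure_db

def link_snr_db(pt_dbm, gt_dbi, gr_dbi, path_loss_db, bandwidth_hz,
                noise_figure_db=6.0, impl_loss_db=6.0):
    """Received SNR from the link budget of eq. (2)."""
    n0 = noise_power_dbm(bandwidth_hz, noise_figure_db)
    return pt_dbm + gt_dbi + gr_dbi - path_loss_db - n0 - impl_loss_db

# example: 5 dBi + 5 dBi antennas (10 dB combined gain), LOS path loss at 1 m (68 dB + 2 dB margin)
print(noise_power_dbm(1e9, 6.0))             # about -78 dBm, as in Table 2
print(link_snr_db(10.0, 5.0, 5.0, 70.0, 1e9))
```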

3.2 Path Loss Models for 60 GHz UWB Systems

Large-scale channel fading has a key impact on the coverage and reliability of the system and is usually characterized by the path loss (PL), which denotes the mean signal power loss and obeys a power-of-distance law. Because of variations in the propagation environment, the signal power observed at any given point deviates from its mean; this phenomenon is called shadowing. To account for shadowing, a fading margin FM is included in the system design, and the path loss is modeled as the sum of the mean path loss and the fading margin FM:

PL = \underbrace{PL_0(d_0) + 10\,n\log\left(\frac{d}{d_0}\right)}_{\text{mean path loss}} + FM,   (3)

where the free-space path loss PL_0 is frequency dependent and equals 68 dB at the reference distance d_0 = 1 m, the path loss exponent n is environment dependent, and FM is mainly system dependent; a UWB system naturally enjoys a shadow-fading improvement relative to narrowband systems. Based on our earlier result that FM decreases with the channel bandwidth B and is less than 4 dB for the minimum UWB bandwidth (B = 500 MHz) at 90 %


link success probability [11], the fading margin is taken as FM = 2 dB for the 60 GHz UWB (B = 1 GHz) system considered in this work. Studies show that in LOS and NLOS office-room environments the path loss exponent ranges from 2 to 3.5. In this work, path loss models for LOS (n = 2), for a LOS path blocked by a moving person, and for NLOS (n = 3.5) are considered; they are, respectively, PL_1(dB) = 68 + 20\log(d) + FM, PL_2(dB) = 68 + 20\log(d) + FM + PBE, and PL_3(dB) = 68 + 35\log(d) + FM. Note that the LOS + PBE path loss model is the more realistic one when compared with the NLOS model, since the blocking effect is modeled independently of the mobile position, which reflects the fact that the movement of persons is quite typical in office rooms, whereas the NLOS model accounts for the high path loss caused by large distances. The parameters used in the 60 GHz UWB system link budget analysis of this work are listed in Table 2. Note that the maximum coverage range is set to 5 m, considering gigabit-capacity M2M applications (e.g., computer-to-computer data transfer).

4 60 GHz UWB System Performance Analysis

Because the transmission power of 60 GHz radio systems is restricted by regulation, and because the path loss of the 60 GHz channel is high (e.g., the free-space path loss at 60 GHz is 22 dB higher than in the 5 GHz band at d_0 = 1 m), the antenna gains become very important for guaranteeing the radio link and achieving gigabit capacity. In the following, the required ranges and combined antenna gains (the sum of the TX and RX gains) are determined using the parameters and the path loss models LOS (n = 2), LOS + PBE and NLOS (n = 3.5) of Table 2.

Table 2 Radio link budget of the 60 GHz UWB system
Data rate: > 1 Gbps
Max. coverage: 5 m
Bandwidth: 1 GHz
TX power: 10 dBm
SNR: 10 dB
Noise power: -78 dBm
Fading margin: 2 dB
Implementation loss: 6 dB
Effective person block effect: 12 dB
Employed path loss models: LOS: PL_1(dB) = 68 + 20\log(d) + FM; LOS + PBE: PL_2(dB) = 68 + 20\log(d) + FM + PBE; NLOS: PL_3(dB) = 68 + 35\log(d) + FM

Fig. 4 Combined antenna gain (dB) versus distance (m) in the 60 GHz UWB channel for the link budget of Table 2, for the three path loss models PL1 = 68 + 20log(d) + FM, PL2 = 68 + 20log(d) + FM + PBE and PL3 = 68 + 35log(d) + FM

The combined antenna gain versus distance for the 60 GHz UWB system is shown in Fig. 4. An omni-omni configuration (10 dB combined gain) reaches gigabit capacity for all three path loss models at the short distance d = 1 m. At the larger distance of d = 5 m, however, only the LOS path loss model PL_1 remains feasible with the omni-omni configuration; for the other two path loss models, PL_2 (LOS + PBE) and PL_3 (NLOS, n = 3.5), an omni-directional antenna configuration is required for the 60 GHz UWB system. Note that a directional antenna with high gain has a narrow beam; for instance, the half-power beam width (HPBW) is approximately 6.5° for an antenna with more than 30 dBi of gain [6]. The drawback of such high-gain antennas is that they suffer from poor flexibility and limited mobility. It should also be noted that the LOS + PBE path loss model in Fig. 4 is the more realistic one compared with the NLOS model, since the blocking effect is modeled independently of the mobile position, reflecting the fact that the movement of persons is quite typical in multipath indoor channels, whereas the NLOS model accounts for the high path loss caused by large distances. The results show that it is essential to keep a clear LOS path in gigabit M2M applications.
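For illustration, the short Python sketch below reproduces the logic behind Fig. 4 under the Table 2 parameters: the required combined antenna gain is the gain that makes the link budget of (2) close at SNR = 10 dB for each path loss model. The exact curve values depend on how the link parameters are rounded, so this is a sketch rather than a reproduction of the published figure.

```python
import math

def path_loss_db(d_m, exponent, fade_margin_db=2.0, pbe_db=0.0):
    """PL = 68 + 10*n*log10(d) + FM (+ PBE), with PL0 = 68 dB at d0 = 1 m."""
    return 68.0 + 10.0 * exponent * math.log10(d_m) + fade_margin_db + pbe_db

def required_combined_gain_db(d_m, exponent, pbe_db=0.0, pt_dbm=10.0,
                              snr_req_db=10.0, n0_dbm=-78.0, impl_loss_db=6.0):
    """Combined TX+RX gain needed to reach snr_req_db, rearranged from eq. (2)."""
    pl = path_loss_db(d_m, exponent, pbe_db=pbe_db)
    return snr_req_db + pl + n0_dbm + impl_loss_db - pt_dbm

for d in (1, 2, 3, 4, 5):
    g_los = required_combined_gain_db(d, 2.0)
    g_pbe = required_combined_gain_db(d, 2.0, pbe_db=12.0)
    g_nlos = required_combined_gain_db(d, 3.5)
    print(f"d={d} m  LOS {g_los:5.1f} dB  LOS+PBE {g_pbe:5.1f} dB  NLOS {g_nlos:5.1f} dB")
```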

5 Conclusions

The feasibility and performance of mm-wave 60 GHz ultra-wideband (UWB) systems for gigabit machine-to-machine (M2M) wireless communications have been analyzed. Based on regulatory specifications and on experimental channel measurements and models for both LOS and NLOS scenarios, the 60 GHz propagation mechanisms were summarized, the 60 GHz UWB radio link budget including the person block effect and the channel fading margin was provided, and the system performance was analyzed. Tests were also performed to determine communication ranges and antenna configurations. The results show that, with a clear LOS path,


gigabit capacity can be achieved with an omni-omni antenna configuration in office-room M2M applications. When the LOS path is blocked by a moving person, or when the radio wave propagates in an NLOS situation, an omni-directional antenna configuration is required to achieve gigabit capacity over a 5 m range between machines in a room. The drawback of high-gain antenna systems is that they suffer from poor flexibility and limited mobility. It is therefore essential to keep a clear LOS path in gigabit M2M applications such as data transfer in office rooms. The goal of this study is to provide useful information for the design of 60 GHz UWB systems for gigabit M2M communications.

References 1. Baykas, T., Chin-Sean, S., Zhou, L., et al.: IEEE 802.15.3c: the first IEEE wireless standard for data rates over 1 Gb/s. IEEE Commun. Mag. 49(7), 114–121 (2011) 2. Peng, X., Zhuo, L.: The 60 GHz band wireless communications standardizations (in Chinese). Inf. Technol. Stand. 49–53 (2012) 3. Geng, S., Liu, S., Zhao, X.: 60-GHz channel characteristic interdependence investigation for M2M networks. In: ChinaCom2014, 14–16 Aug, Maoming, China 4. Geng, S., Kivinen, J., Zhao, X., Vainikainen, P.: Measurements and analysis of wideband indoor radio channels at 60 GHz. In: 3rd ESA Workshop on Millimeter Wave Technology and Applications, pp. 39–44. Espoo, Finland, 21–23 May 2003 5. Geng, S., Kivinen, J., Zhao, X., Vainikainen, P.: Millimeter-wave propagation channel characterization for short-range wireless communications. IEEE Trans. Veh. Technol. 58(1), 3–13 (2009) 6. Yong, S.K., Chong, C.C.: An overview of multigigabit wireless through millimeter wave technology: potentials and technical challenges. EURASIP J. Wirel. Commun. Netw. 2007(1), 1–10 (2007) 7. Chinese specifications of the 60 GHz band transmission power for short-range wireless applications (in Chinese). www.miit.gov.cn 8. Jacob, M., Priebe, S., Maltsev, A., Lomayev, A., Erceg, V., Kurner, T.: A ray tracing based stochastic human blockage model for the IEEE 802.11ad 60 GHz channel model. In: Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP), pp. 3084–3088, April 2011 9. Dong, K., Liao, X., Zhu, S.: Link blockage analysis for indoor 60ghz radio systems. Electron. Lett. 48(23), 1506–1508 (2012) 10. Shoji, Y., Sawada, H., Chang-Soon, C., Ogawa, H.: A modified SV-model suitable for line-of-sight desktop usage of millimeter-wave WPAN systems. IEEE Trans. Antennas Propagat. 57(10), (2009) 11. Geng, S., Vainikainen, P.: Experimental investigation of the properties of multiband UWB propagation channels. In: IEEE International Symposium on Wireless Personal Multimedia (PIMRC07), Athens, Greek, 3–7 Sept 2007, CD-ROM (1-4244-01144-0), pap337.pdf

Optimized Context Weighting Based on the Least Square Algorithm Min Chen, Jianhua Chen, Yan Zhang and Meng Tang

Abstract An optimized context weighting scheme is presented. The relationship between the weighting of context models and the weighting of the description lengths corresponding to those models is discussed first, showing that weighting the context models is equivalent to weighting their description lengths. Based on this, a weight optimization algorithm built on the minimum description length is presented, and the least square algorithm is suggested to implement the optimization of the weights. The proposed optimization algorithm is applied to the compression of genome sequences. The experimental results indicate that, with the proposed weight optimization method, our context-weighting-based algorithm achieves better results than several similar algorithms reported in the literature.

Keywords Genome sequence compression · Context modeling · Context weighting · Least square









1 Introduction

During the past two decades, context-based entropy coding has been widely used in digital signal compression, and several compression algorithms built on it have become signal compression standards, such as MPEG and JPEG2000. Context-based entropy coding relies on the fact that conditioning reduces entropy. In context modeling, the conditional probability distributions P(x_t | x_{t-1}, \ldots, x_{t-K}) are constructed from the past K symbols preceding the current symbol x_t, where K is the order of the context model. In the coding process, these distributions drive the arithmetic encoder that assigns the codeword for x_t. Theoretically,


a larger K may lead to a lower entropy of P(x_t | x_{t-1}, \ldots, x_{t-K}). In practice, however, these conditional probability distributions are estimated from count vectors obtained by counting past observations, with all counts in each vector initialized to one. If K is too large, more observations are needed to obtain good estimates of P(x_t | x_{t-1}, \ldots, x_{t-K}); otherwise the estimated distributions remain close to the uniform distribution, and encoding with such a distribution yields an average codelength near the maximum entropy. Context weighting can balance this conflict: different sets of neighboring symbols are used to construct different context models, which are then weighted into a single coding model. In this way the coding model exploits the correlations among more neighboring symbols while keeping a limited order. In [1], the properties of context weighting are discussed, and the coding performance of the weighted context model is shown to depend directly on the weights; that is, the selection of the weights is crucial for context weighting. In [2], context weighting is employed to compress bi-level images, with the weights obtained with the help of Bayes' theorem, and the coding efficiency is improved. Context weighting is also one of the important techniques for genome sequence compression. In [3], it is concluded that compressing genome sequences with context models of appropriately chosen orders gives better results than using high-order context models. In [4], "expert models" (XMs) are constructed to describe the correlations among the bases of a sequence; each XM is in fact a Markov model (of a different order), and in the coding process the coding model is obtained by weighting all of these XMs. The weight selection in that algorithm relies on a filtering operation, which raises two problems: first, the impulse responses of the filters are chosen manually from experience; second, the filters do not directly produce optimized weights. Likewise, in [5] the weights are related to the average codelengths of the respective models, but no method is given to optimize the weights directly. The optimization of the weights is, in fact, necessary for context weighting, yet none of the context weighting methods mentioned above discusses it. In [6], when a training sequence is coded with an adaptive context model, its codelength is referred to as the description length of the training sequence under the given model. For a given sequence and a given context model, the coding performance of the model can therefore be judged by the description length of the sequence under the model: the conditional probability distribution corresponding to the count vector with the smaller description length is likely to yield the shorter codelength in the coding process. Minimizing the description length can thus be used as the objective of the weight optimization. In this paper, the relationship between the weighting of context models and the weighting of their description lengths is discussed first, showing that the two are equivalent. Then the least square algorithm is used to implement the


weight optimization. The proposed weight optimization method is then applied to the compression of bacterial genome sequences to improve the coding efficiency.

2 Context Weighting

Let x_0, \ldots, x_t, \ldots, x_n denote a source sequence, with x_t \in \{0, 1, \ldots, I-1\} the current symbol to be coded. The conditional probability distributions P(x_t | x_{t-1}, \ldots, x_{t-K}) of a context model are estimated from the past observations x_0, \ldots, x_{t-1}; each combination of x_{t-1}, \ldots, x_{t-K} is a context event, and K is the order of the context model. Let P(x_t | s_{c_i}) denote the conditional probability distribution corresponding to the context event s_{c_i} of the ith context model, let w_i denote its weight, and let N denote the number of context models participating in the weighting. Context weighting can then be written as

P(x_t | S) = \sum_{i=1}^{N} w_i \, P(x_t | s_{c_i}),   (1)

where P(x_t | S) is the conditional probability distribution used to drive the arithmetic encoder. In practice, the weighting of context models is implemented by weighting the count vectors that correspond to the conditional probability distributions P(x_t | s_{c_i}). An example illustrates this procedure. Consider two context models for a 4-ary source and let s_{c_1} and s_{c_2} denote the current context events in the respective models, with corresponding conditional probability distributions P(x_t | s_{c_1}) and P(x_t | s_{c_2}). Let w_1 and w_2 denote the weights for P(x_t | s_{c_1}) and P(x_t | s_{c_2}), with w_1 + w_2 = 1. The two count vectors CV_1 and CV_2 corresponding to P(x_t | s_{c_1}) and P(x_t | s_{c_2}) are listed on the left-hand side of (2); multiplied by the weights, they become the vectors on the right-hand side of (2):

        0(A)  1(T)  2(G)  3(C)                       0        1        2        3
CV_1:   n_0   n_1   n_2   n_3        w_1 CV_1:   w_1 n_0  w_1 n_1  w_1 n_2  w_1 n_3
CV_2:   m_0   m_1   m_2   m_3        w_2 CV_2:   w_2 m_0  w_2 m_1  w_2 m_2  w_2 m_3     (2)

After weighting, the count vector CV corresponding to P(x_t | S) has the form

        0                   1                   2                   3
CV:  w_1 n_0 + w_2 m_0   w_1 n_1 + w_2 m_1   w_1 n_2 + w_2 m_2   w_1 n_3 + w_2 m_3     (3)


Then P(x_t | S) can be estimated as

P(x_t | S):  \frac{w_1 n_0 + w_2 m_0}{w_1 V_1 + w_2 V_2},\; \frac{w_1 n_1 + w_2 m_1}{w_1 V_1 + w_2 V_2},\; \frac{w_1 n_2 + w_2 m_2}{w_1 V_1 + w_2 V_2},\; \frac{w_1 n_3 + w_2 m_3}{w_1 V_1 + w_2 V_2} \quad \text{for symbols } 0, 1, 2, 3,   (4)

where V_1 = n_0 + n_1 + n_2 + n_3 is the total number of training symbols in CV_1 and V_2 = m_0 + m_1 + m_2 + m_3 is the total number of training symbols in CV_2. According to [7], the description length L_1 of the count vector CV_1 can be calculated as

L_1 = \log(V_1 - 1)! - \sum_{i=0}^{3}\log n_i! - \log(4-1)!.   (5)

L_2, the description length of the count vector CV_2, is calculated similarly. When Stirling's formula

n! \approx n^{(n+\frac{1}{2})} e^{-n} \sqrt{2\pi}   (6)

is used to approximate the factorials, and with log denoting the natural logarithm, L_1 and L_2 become

L_1 = V_1\log V_1 - n_0\log n_0 - n_1\log n_1 - n_2\log n_2 - n_3\log n_3 - \frac{1}{2}\log\frac{V_1}{n_0 n_1 n_2 n_3} + r,   (7)

L_2 = V_2\log V_2 - m_0\log m_0 - m_1\log m_1 - m_2\log m_2 - m_3\log m_3 - \frac{1}{2}\log\frac{V_2}{m_0 m_1 m_2 m_3} + r,   (8)
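For illustration (not the authors' implementation), the following Python sketch weights two 4-ary count vectors into the coding distribution of (4) and evaluates the Stirling-approximated description length as reconstructed in (7); the example counts are made up, and all counts are assumed nonzero (in practice they are initialized to one).

```python
import math

def weighted_distribution(cv1, cv2, w1, w2):
    """Coding distribution of eq. (4) from two weighted count vectors."""
    counts = [w1 * a + w2 * b for a, b in zip(cv1, cv2)]
    total = sum(counts)
    return [c / total for c in counts]

def description_length(counts):
    """Stirling-approximated description length, eq. (7), natural log."""
    v = sum(counts)
    r = -math.log(math.factorial(3)) - 3 * math.log(math.sqrt(2 * math.pi))
    body = v * math.log(v) - sum(c * math.log(c) for c in counts)
    prod = 1.0
    for c in counts:
        prod *= c
    return body - 0.5 * math.log(v / prod) + r

cv1 = [12, 3, 4, 9]    # counts of A, T, G, C under context model 1 (illustrative)
cv2 = [5, 6, 7, 2]     # counts under context model 2 (illustrative)
print(weighted_distribution(cv1, cv2, 0.6, 0.4))
print(description_length(cv1), description_length(cv2))
```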

where r = -\log 3! - 3\log\sqrt{2\pi} is a constant. The description length L of the weighted count vector CV can be calculated with a formula similar to (5). To simplify the notation, let c_i denote the ratio of the weighted total number of training symbols in CV_i to the total number of symbols in the weighted count vector CV, and let t_j^{(i)} denote the ratio of the weighted number of training symbols with value j in CV_i to the number of symbols with value j in CV. For instance, the parameters c_1, t_2^{(1)} and t_2^{(2)} are

c_1 = \frac{w_1 V_1}{w_1 V_1 + w_2 V_2}, \quad t_2^{(1)} = \frac{w_1 n_2}{w_1 n_2 + w_2 m_2}, \quad t_2^{(2)} = \frac{w_2 m_2}{w_1 n_2 + w_2 m_2}.


Apparently, once all the weights w_i are given, all the parameters c_i and t_j^{(i)} are determined. The description length L can then be written as in (9), whose first line equals w_1 L_1 + w_2 L_2:

L = w_1\left(V_1\log V_1 - \sum_{j=0}^{3} n_j\log n_j - \frac{1}{2}\log\frac{V_1}{\prod_{j=0}^{3} n_j} + r\right) + w_2\left(V_2\log V_2 - \sum_{j=0}^{3} m_j\log m_j - \frac{1}{2}\log\frac{V_2}{\prod_{j=0}^{3} m_j} + r\right)
\;+\; w_1 V_1\log\frac{w_1}{c_1} - \sum_{j=0}^{3} w_1 n_j\log\frac{w_1}{t_j^{(1)}} - \frac{1}{2}\log\frac{\prod_{j=0}^{3} t_j^{(1)}}{c_1 w_1^{2}} + w_2 V_2\log\frac{w_2}{c_2} - \sum_{j=0}^{3} w_2 m_j\log\frac{w_2}{t_j^{(2)}} - \frac{1}{2}\log\frac{\prod_{j=0}^{3} t_j^{(2)}}{c_2 w_2^{2}}.   (9)

Let Q denote the second line of (9), i.e.,

Q = w_1 V_1\log\frac{w_1}{c_1} - \sum_{j=0}^{3} w_1 n_j\log\frac{w_1}{t_j^{(1)}} - \frac{1}{2}\log\frac{\prod_{j=0}^{3} t_j^{(1)}}{c_1 w_1^{2}} + w_2 V_2\log\frac{w_2}{c_2} - \sum_{j=0}^{3} w_2 m_j\log\frac{w_2}{t_j^{(2)}} - \frac{1}{2}\log\frac{\prod_{j=0}^{3} t_j^{(2)}}{c_2 w_2^{2}},   (10)

j

and L can be represented as L ¼ w1 L1 þ w2 L2 þ Q:

ð11Þ

Q can be viewed as some kind of weighting cost. Thus, L can be represented approximately as (12), which implies that context weighting is equivalent to the weighting of description lengths. L  w1 L1 þ w2 L2

ð12Þ

Moreover, this equivalence also means that the optimization of the weights can be achieved by minimizing the description length L of the weighted count vector CV.

3 Weights Optimization On the basis of this approximation, the least square algorithm can be used to optimize weights w1 and w2 . Let L ¼ ðL1 ; L2 ÞT denote the observing vector consisting of those description lengths which come from all count vectors participated ^ in weighting. Let W ¼ ðw1 ; w2 ÞT denote the corresponding weights vector. L,

1042

M. Chen et al.

which is considered as the estimated value of the description length L, can be described in the vector form as ^ ¼ LT W L

ð13Þ

^ However, in order to minThe objective of our optimization is to minimize L. ^ imize L by using the least square algorithm, its corresponding ideal value L should be given in advance. ^ can also be described as According to [8], the description length L ^ ¼ ðw1 V1 þ w2 V2 ÞHðxt jSÞ þ D L

ð14Þ

where Hðxt jSÞ is the entropy of Pðxt jSÞ and D denotes the model cost. Actually, the value of ðw1 V1 þ w2 V2 ÞHðxt jSÞ represents the ideal codelength for coding these w1 V1 þ w2 V2 symbols. However, this codelength can not be achieved in practice. Thus, we choose L ¼ ðw1 V1 þ w2 V2 ÞHðxt jSÞ as the ideal value of the description ^ in our weights optimization process. Let length L ^  L j e ¼ jL

ð15Þ

^ and L . Minimizing the squared error e2 is equivalent to denote the error between L ^ which can be described as minimize L, ^  L j 2 g minff ðWÞ ¼ e2 ¼ jL

ð16Þ

where f ðWÞ is the cost function related to W. The minimization of f ðWÞ can be obtained by solving the following equations. 8 @f ðWÞ > > > < @wi ¼ 0 2 X > > > wi ¼ 1 :

i ¼ 1; 2 ð17Þ

i¼1

The solution of Eq. (17) can directly be written in the form of vectors as W ¼ R1  d

ð18Þ

where R is the correlation matrix of the observing vector L and d is the correlation vector between L and L . They can be obtained by R ¼ L  LT ;

d ¼ L  L :

ð19Þ

After solving these equations, both optimized weights and the coding distribution can be obtained.

Optimized Context Weighting Based on …

1043

There are still some details needed to be addressed. First, since the entropy Hðxt jSÞ is not known, L can not be calculated directly. In practice, a positive value as small as possible will be used instead. Second, the optimization of weights for the coding of xt is based on count vectors which are obtained by counting the past observations x0 ; . . .; xt1 under the same context events as the current context events in different context models. When xt is coded, these count vectors are actually updated by adding one to the corresponding counts. In this case, the corresponding weights should also be updated after the coding of xt . However, such frequent updating leads to high computational complexity. To simplify our algorithm, the weights in this work are updated until a given number of symbols are coded. In next section, the proposed optimization algorithm is applied to compress bacterial genome sequences.

4 Experiments and Results In our work, the genome sequence NC_020409 is used as the training sequence to initialize the context models. Four genome sequences NC_013131, NC_014318, NC_004691, and NC_004532 are used as test sequences. In experiment 1, compression efficiency with different updating periods is compared. Three context models with orders 2, 4, and 8 are constructed and compression results with different updating periods (40, 50 and 100 bases) are listed in Table 1. Genome sequences NC_004691 and NC_004532 are used as test sequences. From Table 1, it is obvious that both for the long sequence and the short sequence, the updating period 50 can lead to better results than the updating period 100. Meanwhile, compression results with the updating period 40 and 50 are close. It means that excessively decreasing the updating period does not always provide significant compression efficiency improvement. However, the short updating period leads to higher computational burden. Therefore, in our work, the updating period is set to 50. In experiment 2, the proposed weights optimization algorithm is applied to compress bacterial genome sequences. Four context models which are the same as those in [5] with orders 2, 4, 10, and 16 are constructed. Bacterial genome sequences NC_013131 and NC_014318 are used as test sequences. Our compression results are listed in Table 2. For comparison, the compression results obtained by using the algorithm XM [5] and the algorithm FCM [6] are also listed in Table 2. Table 1 Compression results with different updating periods

Sequences     Size        Bpb, updating period 40    50        100
NC_004691     9,267,221   1.7349                     1.7363    1.7422
NC_004532     20,063      1.6374                     1.6417    1.6658


Table 2 The compression results by different algorithms

Sequences     Size         Bpb: XM by Cao    FCM by Pinho    Proposed
NC_013131     10,467,782   1.7922            1.7216          1.779
NC_014318     10,236,715   1.7695            1.7013          1.739

From Table 2, our algorithm produces better compression results than either XM or FCM, since the optimization of the weights leads to higher compression efficiency. This implies that context weighting based on minimizing the description length of past symbols can reduce the final codelength when coding a genome sequence. The proposed algorithm thus produces promising results in compressing genome sequences, and the optimization algorithm also ensures that the optimized weights are obtained. The design objective of optimized context weighting is achieved.

5 Conclusion

A weights optimization algorithm based on the minimum description length is proposed. It is shown that context weighting can be implemented by weighting the description lengths of past observations under their respective models. The least squares algorithm is employed to optimize the weights for context weighting, and the optimized weights are used to improve the compression efficiency of genome sequences. Experimental results indicate that the proposed algorithm leads to better results than the algorithms reported in the literature.

Acknowledgment This work was supported by the Natural Science Foundation of China under Grant 61062005, the Natural Science Foundation of Yunnan Province under Grant 2013FD042, and the Yunnan University Science Foundation for Graduates under Grant ynuy201383.

References

1. Willems, F.M.J., Shtarkov, Y.M., Tjalkens, T.J.: The context-tree weighting method: basic properties. IEEE Trans. Inform. Theor. 41, 653–664 (1995)
2. Xiao, S., Boncelet, C.G.: On the use of context-weighting in lossless bilevel image compression. IEEE Trans. Image Process. 15(11), 3253–3260 (2006)
3. Pinho, A.J., Neves, A.J.R., Bastos, C.A.C., Ferreira, P.J.S.G.: DNA coding using finite-context models and arithmetic coding. In: Proceedings of ICASSP-2009, Taipei, Taiwan, April 2009
4. Pinho, A.J., et al.: Bacteria DNA sequence compression using a mixture of finite-context models. In: IEEE Statistical Signal Processing Workshop, pp. 125–128, Portugal (2011)


5. Cao, M.D., Dix, T.I., Allison, L., Mears, C.: A simple statistical algorithm for biological sequence compression. In: Proceedings of the Data Compression Conference (DCC), Snowbird, Utah (2007)
6. Rissanen, J.: Strong optimality of the normalized ML models as universal codes and information in data. IEEE Trans. Inf. Theory IT-47(5), 1712–1717 (2001)
7. Chen, M., Chen, J., Guo, M.: Affinity propagation for the context quantization. Adv. Mater. Res. 791–793, 1533–1536 (2013)
8. Wu, X., Zhai, G.: Adaptive sequential prediction of multidimensional signals with applications to lossless image coding. IEEE Trans. Image Process. 20(1), 36–42 (2011)

General Theory of the Application of Multistep Methods to Calculation of the Energy of Signals

Galina Mehdiyeva, Vagif Ibrahimov and Mehriban Imanova

Abstract It is known that many scientific and applied problems can be reduced to solving integral equations with variable boundaries. Among all such integral equations, the most popular are those in which one of the boundaries of the integral is fixed. Here, we investigate the particular case in which both boundaries of the integral equation are variable. Assuming that the boundaries of the integral coincide in modulus but have opposite signs, we propose the use of symmetric methods for solving such equations. This paper constructs a general theory of the use of multistep symmetric methods to solve the Volterra integral equation with symmetric variable boundaries and illustrates some results on a model equation.





Keywords Multistep methods · Volterra integral equation · Stability and degree · Symmetrical integral equations



G. Mehdiyeva · M. Imanova (&)
Department of Calculation Mathematics, Baku State University, Z. Khalilov 23, Baku AZ1148, Azerbaijan
e-mail: [email protected]

V. Ibrahimov
Institute of Control Systems named after Academician A. Huseynov, 9 B Vaxabzade Street, Baku AZ1141, Azerbaijan
e-mail: [email protected]

1 Introduction

Since the end of the nineteenth century, scientists have increasingly turned their attention to solving integral equations with variable boundaries. Vito Volterra, founder of the theory of integral and integro-differential equations, fundamentally researched linear integral equations with variable boundaries and successfully applied them to problems of mechanics, biology,


geophysics, etc. [1]. Most of the published works in the scientific literature are dedicated to the study of the problem mentioned above [2, 3]. Consider the following nonlinear integral equation of Volterra type:

$$ y(x) = f(x) + \int_{a(x)}^{b(x)} K(x, s, y(s))\,ds, \quad x \in [x_0, X], \quad a(x) \le s \le b(x) \le X. \qquad (1) $$

Here a(x) and b(x) are some known bounded functions. Among the integral equations of the form (1), one of the most popular is the following:

$$ y(x) = f(x) + \int_{x_0}^{x} K(x, s, y(s))\,ds, \quad x_0 \le s \le x \le X. $$

There are many methods for solving these integral equations, and here we want to apply some of them to equations of the type (1). Stability and accuracy are the characteristics we must consider for this problem, but the available methods are either of higher order of accuracy or of extended stability region. In order to achieve both, hybrid multistep methods with constant coefficients are proposed. As hybrid methods are more accurate than the corresponding multistep methods, here we consider the application of hybrid methods to solve Volterra integral equations; through investigating variants of multistep methods in application, we confirm that hybrid methods are usually more accurate than others. Equation (1) is investigated in the case when a(x) = −b(x) = −x. The theory of symmetric integrals is widely used in the fields of mechanics, biology, communications, nuclear physics, etc. For example, when investigating the problem of the rod we encounter the calculation of the integral (see, for example, [4–6]):

$$ \int_{-x}^{x} g(s)\,ds. \qquad (2) $$

It is known that the energy E of a signal y(t) is (see [4], pp. 7–19):

$$ E = \lim_{x \to \infty} \int_{-x}^{x} |y(t)|^{2}\,dt. \qquad (3) $$

Currently, the transmission of information and signals without interference is one of the important issues in the development of information communication


technology. Therefore, we believe that the study of Eq. (1) will be interesting for a wide range of specialists. Assume that Eq. (1) has a unique continuous solution y(x) defined on an interval [0, X]. To determine the approximate values of the function y(x), the segment [0, X] is divided into N equal parts using the mesh points x_i = ih (i = 0, 1, 2, ..., N), where the parameter h > 0 is the step size. We denote by y_i the approximate and by y(x_i) the exact values of the solution of Eq. (1) at the mesh points x_i (i = 0, 1, 2, ...). There are numerous works by different authors devoted to the study of Eq. (1), both for a(x) = −b(x) = −x and for a(x) ≠ −b(x). Among them the most popular are the quadrature methods (see [7–17]), after application of which to Eq. (1) we have:

$$ y(x_n) = f_n + \int_{-x_n}^{x_n} K(x_n, s, y(s))\,ds \approx f_n + \sum_{i=-n}^{n} A_i K(x_n, x_i, y_i), \qquad (4) $$

(f_m = f(x_m), m ≥ 0); here the A_i are the coefficients of the quadrature method. The determination of the value y_n in the linear part of the obtained formula does not involve members of the type y_{n+j} (−k ≤ j ≤ k). Note that the main shortcoming of method (4) is the growing volume of computational work as the quantity n increases; the method proposed here relieves this shortcoming.

2 Construction and Application of Symmetric Methods to the Solution of Eq. (1) for the Case a(x) = −b(x) = −x

Remark that one of the popular numerical methods is the finite-difference method, which is successfully used to solve various problems of natural science. For this aim, we consider here the application of the following finite-difference method (see, for example, [18–20]):

$$ \sum_{i=0}^{m} \alpha_i y_{n+i} = \sum_{i=0}^{m} \beta_i y'_{n+i} \qquad (5) $$

to solve the Volterra integral equation with symmetric variable boundaries, written in the following form:

$$ y(x) = f(x) + \int_{-x}^{x} K(x, s, y(s))\,ds. \qquad (6) $$


Let us put x = x_{n+i} in Eq. (6). Then we have:

$$ y(x_{n+i}) = f_{n+i} + \int_{-x_{n+i}}^{-x_n} K(x_{n+i}, s, y(s))\,ds + \int_{-x_n}^{x_n} K(x_{n+i}, s, y(s))\,ds + \int_{x_n}^{x_{n+i}} K(x_{n+i}, s, y(s))\,ds. \qquad (7) $$

From (7), if we set

$$ v_n = \int_{-x_n}^{x_n} K(x_{n+i}, s, y(s))\,ds, $$

then the calculation of the approximate values y_{n+i} by formula (7) is not difficult, since the volume of computational work does not increase. But for the application of method (5) to some problems, the values of the quantities y_l (l < n + m) and a rule relating y(x) and y'(x) must be known. For a fixed value of i, the quantity v_n can be determined by using the following formula:

$$ v_n = y(x_n) + h \int_{-x_n}^{x_n} K'_x(\xi_{n+i}, s, y(s))\,ds, \qquad (x_n < \xi_{n+i} < x_{n+i}). \qquad (8) $$

To calculate the integral in Eq. (8) one can use the scheme from the work [21] and the following equation:

$$ y'(x) = f'(x) + K(x, x, y(x)) + K(x, -x, y(-x)) + \int_{-x}^{x} K'_x(x, s, y(s))\,ds. \qquad (9) $$

It is easy to see that the integral participating in Eq. (6) can be written as follows:

$$ y(x) = f(x) + \int_{0}^{x} K(x, s, y(s))\,ds - \int_{0}^{-x} K(x, t, y(t))\,dt \qquad (t = -s). \qquad (10) $$


Note that one or two formulas can be used for the calculation of these integrals; obviously, they can also be united into one integral. Thus, we can use the following method to find the solution of Eq. (6):

$$ \sum_{i=0}^{m} \alpha_i y_{n+i} = \sum_{i=0}^{m} \alpha_i f_{n+i} + h \sum_{j=0}^{m} \sum_{i=-m}^{m} \beta_i^{(j)} K(x_{n+j}, x_{n+i}, y_{n+i}). \qquad (11) $$

As for symmetric methods, some authors suggest the use of the midpoint method, which has the following form:

$$ y_{n+1} = y_n + h\,y'_{n+1/2}. \qquad (12) $$

Generally, the notion of symmetry in this case is used to describe the error of the method (12). However, the notion of symmetry for the method (11) can be defined in different forms. For example, in one variant, the symmetric method of the type (11) can be written as follows:

$$ y_n = y_{n-1} + f_n - f_{n-1} + h \sum_{j=0}^{m} \sum_{i=-m}^{m} \beta_i^{(j)} K(x_{n+j}, x_{n+i}, y_{n+i}). \qquad (13) $$

Here, the concept of symmetry refers to the point at which we can find the value of the solution of the original problem. It is known that methods such as (13) are forward jumping methods [22]. Methods of type (13) were constructed by Cowell (see, e.g., [23]) and are also met in the works of Laplace and Steklov (see, e.g., [24]). But for their application we need the values of the solution of the considered problem at the next mesh points; this is the main difficulty in using the forward jumping methods, and it is eliminated in the work [22] by using predictor-corrector schemes. In the work [25] the author constructs concrete forward jumping methods with the degree p = m + 2 for odd m and thus proves an advantage of the forward jumping methods. To determine the maximum degree of stable forward jumping methods one can use the corresponding theorem of the work [22] or [26]. The concepts of stability and degree of the forward jumping methods are defined here in a way similar to the corresponding concepts of the work [27] (Tables 1 and 2). It is obvious that both symmetric and asymmetric methods can be obtained from the method (11). For example, consider the trapezoidal method, which derives from formula (11) with m = 1; thus, we get from (11) the following method:

$$ y_{n+1} = y_n + f_{n+1} - f_n + h\big(K(x_n, x_n, y_n) + K(x_{n+1}, x_n, y_n) + 2K(x_{n+1}, x_{n+1}, y_{n+1})\big)/4 + h\big(2K(x_n, x_n, y_n) + K(x_{n-1}, x_{n-1}, y_{n-1}) + K(x_{n+1}, x_{n-1}, y_{n-1})\big)/4 \qquad (14) $$


Table 1 Comparison of errors for step size h = 0.1

Step size   Variable x   Example 1   Example 2   Example 3
h = 0.1     0.10         0.36E-06    0.24E-05    0.00E-14
            0.40         0.39E-05    0.69E-04    0.00E-14
            0.70         0.11E-04    0.27E-03    0.02E-14
            1.00         0.17E-04    0.64E-03    0.02E-14

Table 2 Comparison of errors for step size h = 0.05

Step size   Variable x   Example 1   Example 2   Example 3
h = 0.05    0.10         0.35E-07    0.13E-05    0.28E-16
            0.40         0.44E-06    0.34E-04    0.00
            0.70         0.13E-05    0.11E-03    0.11E-15
            1.00         0.23E-05    0.25E-03    0.56E-15

In constructing this method we have used the trapezoidal method, which is asymmetric; the method (14), however, belongs to the group of symmetric methods [28]. To construct symmetric methods, consider the following method:

$$ y_{n+1} = y_n + f_{n+1} - f_n + h\big(4K(x_{n+1}, x_{n+1}, y_{n+1}) + 3K(x_n, x_n, y_n) + 2K(x_{n+1}, x_n, y_n) + 4K(x_{n+2}, x_{n+1}, y_{n+1}) - K(x_{n+2}, x_{n+2}, y_{n+2})\big)/12 \qquad (15) $$

This method is symmetric and belongs to the class of forward jumping methods. The advantage of these methods lies in combining information about the solution of the considered problem at the current mesh point with information about the solution of Eq. (1) at the previous and the next mesh points. To approximate the solution of the integral equation

$$ y(x) = f(x) + \int_{-x}^{x} K(x, s, y(s))\,ds, \quad -x \le s \le x \le X, \quad y(0) = f(0), $$

by using the method (15), the following algorithm is proposed here.

INPUT endpoint [−X, X], X; integer N; initial condition f(x_0).
OUTPUT approximation y_i to y(x_i) at the (N + 1) values of x.


The results of applying the method (15) to the following equations are illustrated here:

1. y(x) = (1/2) ∫_{-x}^{x} cos(s) ds. The exact solution is y(x) = sin x.
2. y(x) = e^{-kx} + k ∫_{-x}^{x} |y(s)| ds. The exact solution is y(x) = exp(kx).
3. y(x) = -x + ∫_{-x}^{x} (1 + y²(s))/(1 + s²) ds. The exact solution is y(x) = x.

Tables 1 and 2 contain the numerical values corresponding to these step sizes. If the step size is taken as h = 0.01, then the maximal error equals 0.2E-4. In order to construct more accurate methods for solving the integral equation, some authors propose using hybrid methods [11, 19, 21]. If we generalize the method (12), then we can write the following:

$$ y_{n+1} = y_n + h\,(y'_{n+l_0} + y'_{n+1+l_1})/2. \qquad (16) $$

Here l_1 = −l_0, l_0 = (3 − √3)/6, 1 + l_1 = (3 + √3)/6. If we take l_1 = −l_0 and l_1 = −1/2, then method (12) follows from formula (16). Remark that for applying the hybrid method (16) to some problems we must know the values y_{n+1/2+√3/6}, y_{n+1/2} and y_{n+1/2−√3/6}. Note that these variables are independent of


y_{n+1}, because method (16) is explicit with respect to it. For example, consider the following method:

$$ y_{n+1} = y_n + h\,(3y'_{n+1/3} + y'_{n+1})/4. $$

This method is implicit and has the degree p = 3. Now, let us consider the construction of the method of type (25) for the case k = 1. In this case, assuming that α_1 = −α_0 = 1, the method corresponding to the method (15) can be written as follows:

$$ y_{n+1} = y_n + f_{n+1} - f_n + h\big(K(x_{n+1}, x_{n+l_0}, y_{n+l_0}) + K(x_{n+l_0}, x_{n+l_0}, y_{n+l_0}) + K(x_{n+1}, x_{n+1-l_0}, y_{n+1-l_0}) + K(x_{n+1-l_0}, x_{n+1-l_0}, y_{n+1-l_0})\big)/4. \qquad (17) $$

To illustrate the results received here, we have considered the application of the method (17) to the solution of the following equations:

1. y(x) = 1 + x²/2 + ∫_0^x y(s) ds, the exact solution is y(x) = 2e^x − x − 1.
2. y(x) = e^{-x} + ∫_0^x e^{-(x-s)} y²(s) ds, the exact solution is y(x) = 1.
3. y(x) = ∫_0^x (1 + y²(s))/(1 + s²) ds, the exact solution is y(x) = x.

The obtained results are shown in Table 3.

Table 3 Comparison of errors for step size h = 0.01

Step size   Variable x   Example 1   Example 2   Example 3
h = 0.01    0.10         0.26E-10    0.15E-06    0.1E-16
            0.20         0.54E-10    0.54E-06    0.1E-16
            0.30         0.83E-10    0.10E-05    0.1E-16
            0.40         0.11E-10    0.17E-05    0.1E-16
            0.50         0.14E-09    0.24E-05    0.1E-16
            0.60         0.18E-09    0.32E-05    0.1E-16
            0.70         0.21E-09    0.40E-05    0.1E-16
            0.80         0.25E-09    0.48E-05    0.1E-16
            0.90         0.30E-09    0.57E-05    0.1E-16
            1.00         0.34E-09    0.66E-05    0.2E-16


3 Conclusions

Every method has its advantages and disadvantages, and the construction of methods with minimal disadvantages is one of the principal problems in the theory of numerical methods; when solving particular problems, however, some of these disadvantages can be eliminated. A classical scheme for problems of the type mentioned above is the application of the Störmer method, which has some advantages in solving the initial value problem for second-order ODEs with a special structure. Here, we construct a numerical method for solving Volterra integral equations with symmetric variable boundaries by investigating some properties of the considered problem. We remark that for the application of these methods we recommend using the forward jumping and hybrid methods within a sequence of schemes from the book [29] for constructing algorithms. We have demonstrated the application of forward jumping and hybrid methods to the solution of Eq. (1) and used some model equations to compare these methods; the results show that hybrid methods are the most promising.

Acknowledgments The authors wish to express their thanks to academician Ali Abbasov for his suggestion to investigate the computational aspects of our problem and for his frequent valuable suggestions. This work was supported by the Ministry of Communications and High Technologies.

References

1. Volterra, V.: Theory of Functionals and of Integral and Integro-Differential Equations. Dover Publications, New York; Nauka, Moscow, p. 304 (1982) (in Russian)
2. Aliev, T.A., Abbasov, A.M., Guluyev, G.A., Pashayev, F.H., Sattarova, U.E.: System of robust noise monitoring of anomalous seismic processes. Soil Dyn. Earthq. Eng. 53, 11–15 (2013)
3. Aliev, T.A.: Digital Noise Monitoring of Defect Origin. Springer, London, p. 235 (2007)
4. Guz, A.N.: About continuum theory of materials with small-scale distortions in the structure. Dokl. ANSSR 268(2), 307–313 (1983)
5. Amazadeh, R.Y., Kiysbeyli, E.T., Fatulayeva, J.F.: The limiting state of a rigidly fixed nonlinearly elastic rod. Mech. Compos. Mater. 42(3), 243–252 (2006)
6. Fitz, M.P.: Analog Communication Theory. The Ohio State University, p. 200 (2001)
7. Linz, P.: Linear multistep methods for Volterra integro-differential equations. J. Assoc. Comput. Mach. 16(2), 295–301 (1969)
8. Brunner, H.: Implicit Runge-Kutta methods of optimal order for Volterra integro-differential equations. Math. Comput. 42(165), 95–109 (1984)
9. Verlan, A.F., Sizikov, V.S.: Integral Equations: Methods, Algorithms, Programs. Naukova Dumka, Kiev (1986)
10. Manzhirov, A.V., Polyanin, A.D.: Handbook of Integral Equations: Methods of Solutions. Factorial Press, Moscow, p. 384 (2000)
11. Makroglou, A.: Block-by-block method for the numerical solution of Volterra delay integro-differential equations. Computing 30(1), 49–62 (1983)
12. Mehdiyeva, G.Yu., Imanova, M.N., Ibrahimov, V.R.: On one application of forward jumping methods. Appl. Numer. Math. 72, 234–245 (2013)


13. Shampine, L.F.: Solving Volterra integral equations with ODE codes. IMA J. Numer. Anal. 8(1), 37–41 (1988)
14. Imanova, M.N.: On the multistep method of numerical solution for Volterra integral equations. Trans. Issue Math. Mech. Series Phys.-Tech. Math. Sci. 26(1), 95–104 (2006)
15. Wolkenfelt, P.H.M.: The construction of reducible quadrature rules for Volterra integral and integro-differential equations. IMA J. Numer. Anal. 2(2), 131–152 (1982)
16. Scott, J., Dixon, N., Mckee, S.: On the exact order of convergence of discrete methods for Volterra-type equations. IMA J. Numer. Anal. 8(4), 511–515 (1988)
17. Mehdiyeva, G., Imanova, M.: On an application of the finite-difference method. Bulletin of the University, vol. 2, pp. 73–78 (2008)
18. Lubich, Ch.: Runge-Kutta theory for Volterra and Abel integral equations of the second kind. Math. Comput. 41(163), 87–102 (1983)
19. Mehdiyeva, G., Imanova, M., Ibrahimov, V.: The application of difference methods to solving Volterra integral equations. Pensee J. Paris 75(111), 393–400 (2013)
20. Vekua, N.P.: Some applications of splines to the solution of integral equations. Transactions, Georgian Tech. Univ. Autom. Cont. Syst. 15(2), 159–163 (2013)
21. Mehdiyeva, G., Imanova, M., Ibrahimov, V.: On a research of hybrid methods. In: Numerical Analysis and Its Applications, pp. 395–402. Springer (2013)
22. Ibrahimov, V.R.: On a relation between order and degree for stable forward jumping formula. Zh. Vychisl. Mat. 7, 1045–1056 (1990)
23. Cowell, P.H., Crommelin, A.C.D.: Investigation of the motion of Halley's comet from 1759 to 1910. Appendix to Greenwich Observations for 1909, Edinburgh, pp. 1–84
24. Mukhin, I.S.: Application of interpolating polynomials for Markov-Hermite numerical integration of ordinary differential equations. Apl. Mat. Mex. 2, 231–238 (1952)
25. Mehdiyeva, G., Ibrahimov, V.: On the Research of Multistep Methods with Constant Coefficients. LAP LAMBERT Academic Publishing, p. 314 (2013) (in Russian)
26. Ibrahimov, V.: On the maximal degree of the k-step Obrechkoff's method. Bull. Iran. Math. Soc. 28(1), 1–28 (2002)
27. Dahlquist, G.: Convergence and stability in the numerical integration of ordinary differential equations. Math. Scand. 4, 33–53 (1956)
28. Bucharskiy, V.L., Kalinchuk, E.M.: Symmetric difference schemes of the joint approximation method for solving the linear transport equation. Math. Mach. Syst. 4, 161–165 (2011)
29. Burden, R.L., Faires, J.D.: Numerical Analysis, 7th edn. Cengage Learning, p. 850 (2001)

Analysis of Influence of Attitude Vibration of Aircraft on the Target Detection Performance

Xiufang Wang, Jinye Peng, Bin Chen and Wei Qi

Abstract Considering the attitude vibration of the aircraft carrying an airborne early warning radar, based on a rigid body model, the models of aircraft yaw and roll are set up, and then the effects of these two kinds of attitude vibration on the clutter power spectrum and the target echo power are analyzed. Finally, the probability of detection under the influence of yaw and roll is obtained. Simulation results show that yaw and roll make the probability of detection decrease, and when the vibration angle exceeds a certain value, the target will not be detected.

Keywords Airborne early warning radar · Attitude vibration · Ground clutter · Detection performance

X. Wang (&) · J. Peng
School of Electronic and Information, Northwestern Polytechnical University, Xi'an 710025, China
e-mail: [email protected]

X. Wang
Xi'an Research Institute of High Technology, Xi'an 710025, China

B. Chen
College of Information System and Management, National University of Defense Technology, Changsha 410073, China

W. Qi
Qinghe Building, Beijing 100085, China

1 Introduction

Generally, the aircraft of an airborne early warning radar flies with uniform velocity in a straight line. However, in the actual environment the aircraft will be affected by wind or high-altitude air, and in some cases it has to fly with maneuvering (turning); these factors make the aircraft yaw, roll, and pitch, and vary the characteristics of the airborne radar clutter, thus affecting the performance of the airborne


radar. The influence of aircraft yaw on the two-dimensional space-time clutter is analyzed in [1–4], but these papers consider neither three-dimensional modeling of the movement of the early warning aircraft nor the influence of the attitude vibration of the aircraft on the target detection performance. Han and Tang [5] analyze only the target detection probability under aircraft yaw, without taking into account other attitude vibrations of the aircraft such as roll and pitch. The attitude vibration of the aircraft of an airborne early warning radar based on a rigid body model is researched in this paper. First, the models of aircraft yaw and roll are built; then the effects of these two kinds of attitude vibration on the clutter power spectrum and the target echo power are analyzed. Finally, the probability of detection under the influence of yaw and roll is obtained.

2 Background Descriptions

2.1 Aircraft of Airborne Early Warning Radar Model

The rigid body model used in this paper refers to the rigid connection of the aircraft and the radar antenna: when the aircraft attitude changes, the aircraft and the antenna undergo the same three-dimensional movement. For a phased array radar system, electronic scanning is used in azimuth and pitch, while there is some dwell time at each beam position to complete incoherent integration of the target signal.

2.2 Clutter Model

An airborne radar looking down faces serious ground clutter interference, which affects the detection performance of the target; if we want to detect the target in the ground clutter, the ground clutter suppression problem must be solved. As the aircraft moves, the characteristics of the ground clutter will change, so it is necessary to study the influence of the attitude vibration of the aircraft on the clutter; this is also the main work of this paper. The clutter modeling and simulation adopt the distance ring method, which produces single-channel clutter data, i.e., distance-pulse matrices. The relative position of the radar and the clutter is given in Fig. 1, where ψ is the angle between the line of sight to the scatterer and the velocity vector, θ is the azimuth angle, and φ is the pitch angle; their relationship can be expressed as follows:

$$ \cos\psi = \cos\theta \cos\varphi. \qquad (1) $$

Suppose the radar transmits linear frequency modulation (FM) signals with FM slope k_r; then the l-th range ring of the k-th pulse of the single-channel clutter data can be expressed as follows:


$$ c_r(k) = \sum_{i=1}^{N_h} \frac{\sqrt{G}\,F(\theta_i, \varphi)\,G(\theta_i, \varphi)}{r_l^{2}}\; a_{il}\; e^{\,j\pi k_r \left(t - 2 r_l(t)/C\right)^{2}}\; X_k\; e^{\,j 2\pi (k-1) f_d T_r}\; e^{\,j\phi}. \qquad (2) $$

Fig. 1 The geometric position relation of radar and clutter (coordinate axes X, Y, Z; platform velocity V; angles ψ, θ, φ; scatter object)

where G stands for the gain related to the radar parameters, F(θ, φ) and G(θ, φ) stand for the transmit and receive antenna pattern gains, respectively, a_{il} and ϕ stand for the amplitude and phase of the clutter amplitude fluctuation component, respectively, X_k stands for the relevant time series, and f_d stands for the Doppler frequency of the clutter scattering unit. Then f_d can be expressed as follows:

$$ f_d = \frac{2V}{\lambda}\cos\psi = \frac{2V}{\lambda}\cos\theta\cos\varphi. \qquad (3) $$

3 Theoretical Analysis of Influence of Aircraft Attitude Vibration

The attitude vibration of the aircraft includes yaw, roll, and pitch in the actual environment. The pitch, however, is just a tiny movement in the Z-axis direction of the aircraft, which has little effect on the radar beam direction and on the speed of the aircraft; therefore, the pitch will not be considered in this paper, and the following analyzes the influence of yaw and roll only.

3.1 Analysis of the Influence of Yaw

The aircraft yaw caused by horizontal wind makes the aircraft produce a horizontal velocity component, which gives the velocity direction an angle offset Δθ, while the airborne radar antenna azimuth does not change. The yaw principle diagram is shown in Fig. 2. Aircraft yaw will change the angle ψ; after adding the yaw motion, the new relationship can be expressed as follows:


Fig. 2 The yaw principle diagram (velocity V rotated to V′ by the yaw angle Δθ; angles ψ, ψ′, θ, φ; scatter object)

$$ \cos\psi' = \cos(\theta + \Delta\theta)\cos\varphi. \qquad (4) $$

Therefore, the clutter Doppler frequency of the scattering object becomes:

$$ f_d = \frac{2V}{\lambda}\cos\psi' = \frac{2V}{\lambda}\cos(\theta + \Delta\theta)\cos\varphi. \qquad (5) $$

The clutter signal of a distance ring is the superposition of the echo signals of the different clutter scattering units in that ring; as a result, the clutter of the distance ring has an offset in frequency. Similarly, the main lobe clutter is shifted, and the main clutter center frequency after aircraft yaw can be expressed as follows:

$$ f_{d0} = \frac{2V}{\lambda}\cos(\theta_0 + \Delta\theta)\cos\varphi_0. \qquad (6) $$

where θ_0 and φ_0 stand for the main beam azimuth and pitch angles. Adaptive moving target indication (AMTI) is a traditional clutter suppression method, so we use this method to analyze the performance. If the AMTI filter notch and the main clutter center frequency are out of alignment, a loss of clutter suppression performance occurs. When the aircraft is in steady flight, the main clutter spectrum width is:

$$ B_d = \frac{2V}{\lambda}\left[\cos\!\left(\theta_0 - \frac{\theta_{3dB}}{2}\right) - \cos\!\left(\theta_0 + \frac{\theta_{3dB}}{2}\right)\right]\cos\varphi_0 \approx \frac{2V}{\lambda}\sin\theta_0\cos\varphi_0\,\theta_{3dB}. \qquad (7) $$

When the aircraft has a yaw angle Δθ, the main clutter spectrum width is:

$$ B_d = \frac{2V}{\lambda}\left[\cos\!\left(\theta_0 - \frac{\theta_{3dB}}{2} + \Delta\theta\right) - \cos\!\left(\theta_0 + \frac{\theta_{3dB}}{2} + \Delta\theta\right)\right]\cos\varphi_0 \approx \frac{2V}{\lambda}\sin(\theta_0 + \Delta\theta)\cos\varphi_0\,\theta_{3dB}. \qquad (8) $$


where θ_{3dB} stands for the 3 dB azimuth beamwidth. Since the yaw angle Δθ is small while the working azimuth θ_0 is large, the yaw has little impact on the main clutter spectrum width and can be ignored.
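The point can be checked numerically. The following short script (ours, not part of the original paper) evaluates the exact bracketed form of Eqs. (7)–(8) with the radar parameters listed later in Table 1 (V = 180 m/s, λ = 0.3, θ_{3dB} = 2°, θ_0 = 90°, φ_0 = 2°), showing that the width changes only marginally for yaw angles of a degree or two.

import numpy as np

V, lam = 180.0, 0.3
theta0, phi0 = np.deg2rad(90.0), np.deg2rad(2.0)
theta_3dB = np.deg2rad(2.0)

def main_clutter_width(dtheta):
    # exact bracketed form of Eqs. (7)-(8) for the main clutter spectrum width (Hz)
    return (2 * V / lam) * (np.cos(theta0 - theta_3dB / 2 + dtheta)
                            - np.cos(theta0 + theta_3dB / 2 + dtheta)) * np.cos(phi0)

for dtheta_deg in (0.0, 1.0, 2.0):
    print(dtheta_deg, main_clutter_width(np.deg2rad(dtheta_deg)))
# the printed widths are nearly identical, confirming that the yaw's effect on
# the main clutter spectrum width can be neglected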

3.2 Analysis of the Influence of Roll

The influence of roll on the clutter spectrum. When there is a roll, the radar antenna rotates around the longitudinal axis of the fuselage, and this axis is in the same direction as the aircraft velocity; in the roll process, the aircraft velocity remains the same. The roll principle diagram is given in Fig. 3, where the roll angle is denoted Δφ. When the aircraft flies smoothly, the antenna coordinate system and the O-XYZ coordinate system coincide; if there is a roll, the two coordinate systems become inconsistent with each other. If the azimuth and pitch angles of a scatterer are θ and φ in the reference coordinate system, then its coordinates become θ_a and φ_a in the coordinate system of the antenna, so the following relationship is obtained:

$$ \begin{bmatrix} \cos\varphi_a \cos\theta_a \\ \cos\varphi_a \sin\theta_a \\ \sin\varphi_a \end{bmatrix} = [\Delta\varphi] \begin{bmatrix} \cos\varphi \cos\theta \\ \cos\varphi \sin\theta \\ \sin\varphi \end{bmatrix}, \qquad (9) $$

$$ [\Delta\varphi] = \begin{bmatrix} \cos\Delta\varphi & \sin\Delta\varphi & 0 \\ -\sin\Delta\varphi & \cos\Delta\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}. \qquad (10) $$

where [Δφ] is the direction cosine matrix of the roll. According to Eqs. (9) and (10), the azimuth and pitch angles of the main beam after the roll can be obtained:

$$ \varphi_0'' = \sin^{-1}\!\big(\cos\Delta\varphi \sin\varphi_0 + \sin\Delta\varphi \cos\varphi_0 \sin\theta_0\big). \qquad (11) $$

Fig. 3 The roll principle diagram (antenna axis rotated by the roll angle Δφ about the velocity direction; angles ψ, θ, φ; scatter object)


$$ \theta_0'' = \sin^{-1}\!\big[\big(\cos\Delta\varphi \cos\varphi_0 \sin\theta_0 - \sin\Delta\varphi \sin\varphi_0\big)/\cos\varphi_0''\big]. \qquad (12) $$

Therefore, the main beam center frequency can be given as follows:

$$ f_{d0} = \frac{2V}{\lambda}\cos\theta_0'' \cos\varphi_0''. \qquad (13) $$

Substituting Eqs. (11) and (12) into Eq. (13), we get Eq. (14):

$$ f_{d0} = \frac{2V}{\lambda}\cos\theta_0'' \cos\varphi_0'' = \frac{2V}{\lambda}\cos\theta_0 \cos\varphi_0. \qquad (14) $$

Therefore, the aircraft roll does not affect the main clutter center frequency. When the beam azimuth direction is 90° and the roll angle is Δφ, the width of the main clutter spectrum becomes:

$$ B_d = \frac{2V}{\lambda}\sin\theta_0 \cos(\varphi_0 + \Delta\varphi)\,\theta_{3dB}. \qquad (15) $$

Since the angle Δφ is very small (approximately within 5°) and the working pitch angle of an early warning aircraft is also small, the roll has little influence on the clutter spectrum width and can be ignored. From what has been discussed above, we may reasonably conclude that the roll has no effect on the main clutter center frequency and clutter spectrum width, but only makes the main clutter power increase.

The influence of roll on echo amplitude. The aircraft roll can cause an antenna pointing error, which causes amplitude modulation of the target echo signal. The relationship between the target echo-signal amplitude and the antenna directional pattern gain can be expressed as:

$$ A \propto F(\theta, \varphi) \cdot G(\theta, \varphi). \qquad (16) $$

In order to facilitate the analysis, we use a Gaussian pattern to model the main beam. Therefore, Eq. (17) is obtained as follows:

$$ F(\theta, \varphi) = G(\theta, \varphi) = e^{-a\left\{\left[(\theta - \theta_0)/\theta_{3dB}\right]^{2} + \left[(\varphi - \varphi_0)/\varphi_{3dB}\right]^{2}\right\}}. \qquad (17) $$

where θ_{3dB} and φ_{3dB} stand for the azimuth and pitch widths of the main beam, respectively, and a is the Gaussian pattern shape parameter. The relative variation of the echo-signal amplitude is:

$$ \frac{\Delta A}{A} = -4a\left(\frac{\theta - \theta_0}{\theta_{3dB}^{2}}\,\Delta\theta + \frac{\varphi - \varphi_0}{\varphi_{3dB}^{2}}\,\Delta\varphi\right). \qquad (18) $$


Equation (18) shows that the antenna pointing error has the least impact on the echo-signal amplitude modulation when the target is located at the beam center (θ = θ_0, φ = φ_0). However, at the edges of the beam (|θ − θ_0| = θ_{3dB}/2, |φ − φ_0| = φ_{3dB}/2), the influence of the antenna pointing error on the echo amplitude modulation increases significantly. Suppose the target is located at the center of the main beam (θ_0, φ_0); after the aircraft rolls by the angle Δφ, the target location in the coordinate system of the antenna can be obtained from Eqs. (9) and (10):

$$ \varphi' = \sin^{-1}\!\big[\cos\Delta\varphi \sin\varphi_0 - \sin\Delta\varphi \cos\varphi_0 \sin\theta_0\big]. \qquad (19) $$

$$ \theta' = \sin^{-1}\!\big[\big(\cos\Delta\varphi \cos\varphi_0 \sin\theta_0 + \sin\Delta\varphi \sin\varphi_0\big)/\cos\varphi'\big]. \qquad (20) $$

Therefore, the aircraft roll produces both a beam pitch error and an azimuth error. But when the radar looks broadside at θ_0 = 90°, there is only a pitch error, equal to the roll angle Δφ, and the relative amplitude variation can be expressed as follows:

$$ \frac{\Delta A}{A} = -4a\,\frac{\varphi - \varphi_0}{\varphi_{3dB}^{2}}\,\Delta\varphi. \qquad (21) $$

4 Simulation and Analysis

In order to analyze the influence of yaw and roll on the target detection performance, the various factors affected by the attitude vibration are simulated in this paper. In the following simulations, we use conventional airborne early warning radar parameters, shown in Table 1. In the simulation we use the medium pulse repetition frequency (MPRF) mode, with repetition frequency f_r = 6000 Hz, and the clutter simulation uses the distance ring method. Figures 4 and 5 show the clutter spectrum of the 50th distance unit for different yaw angles. The simulation results show that the main

Table 1 The radar parameters

Aircraft height H             8 km        Azimuth first sidelobe level     −24 dB (chebwin)
Aircraft speed V              180 m/s     Pitch first sidelobe level       −32 dB (chebwin)
Beam pitch width              10°         Initial distance                 10 km
Beam azimuth width            2°          Range resolution                 150 m
Wavelength λ                  0.3         Maximum distance Rmax            350 km
Array element spacing         λ/2         Coherent pulse number K          34
Beam pointing                 (90°, 2°)   Reflectivity γ                   −10 dB
Transmitted pulse width τ     20 µs       Signal bandwidth B               1 MHz

Fig. 4 The yaw angle 1°

Fig. 5 The yaw angle 2°

Analysis of Influence of Attitude Vibration …

1065

Fig. 6 The influence of roll on target echo power

Fig. 7 The output signal to clutter ratio

speed are 100 m/s, the target distance is 167.8 km. Figure 7 shows the results that the output signal to clutter ratio (OSTCR) goes with the change of roll under the condition of different yaw angles. After the main clutter cancellation, clutter background for target contains main clutter surplus and sidelobe clutter. After adopting three-pulse cancellation, we assume the main clutter cancellation is processed thoroughly, while the sidelobe clutter in each frequency element is basically uniform. Therefore, background clutter in frequency domain after MTI processing could be regarded as an independent identical distribution. At the same time, clutter amplitude obeys Rayleigh distribution, so they can be processed by using cell average-constant false alarm rate (CA-CFAR) method. In view of MRRF used in this paper, the two-dimensional CFAR can be processed in the range-Doppler domain [6]. The process is as follows:

1066

X. Wang et al.

Table 2 The simulation results of the different yaw and roll angles Angles Yaw 0

o

Yaw 1o Yaw 2o

OSTCR/dB DP OSTCR/dB DP OSTCR/dB DP

Roll 0o

Roll 1o

Roll 2o

Roll 3o

Roll 4o

Roll 5o

10.23 92.2 % 7.27 89.2 % 2.23 74.1 %

8.23 72.6 % 5.32 69.3 % 0.21 46.4 %

5.93 45.3 % 3.02 41.8 % −2.1 25.2 %

3.23 18.6 % 0.29 15.0 % −4.81 10.2 %

0.16 9.5 % −2.74 8.8 % −7.84 4.2 %

−3.27 5.0 % −6.12 4.3 % −11.25 2.3 %

perform FFT on each distance unit of AMTI pulse signal, make the distribution of target and clutter in two-dimensional range-Doppler plane, whose role is to make the target signal incoherent integration to complete the moving target detection (MTD) function, and then a certain reference window is selected as clutter estimation unit. In simulation, the Rayleigh distribution parameter r ¼ 1, the reference widow size is 5  9, the target model is Swerling 0, radar parameters are the same as in Table 1, and the Monte Carlo number is 2000. Thus, the target detection probability (DP) can be gained from analysis of influence of yaw and roll, the simulation result is shown in Table 2. It can be discovered from the Table 2, yaw and roll make the target detection probability decrease, but influence of the roll on target detection probability is more evident than the yaw.

5 Conclusions

The yaw offsets the clutter center frequency and at the same time changes the Doppler frequency of the target, which mainly affects the improvement factor of the clutter cancellation. The roll makes the target deviate from the main beam center, which leads to a decline in the target echo power; in addition, the main beam gets closer to the ground, which increases the clutter power. These factors affect the output signal to clutter ratio of the AMTI, which ultimately affects the target detection performance. Simulation results show that yaw and roll make the probability of detection decrease, and when the vibration angle exceeds a certain value, the target will not be detected.

References

1. Yu, H.B., Feng, D.Z., Cao, Y.: Three-dimensional space-time nonadaptive pre-filtering approach in airborne radar. Chinese J. Electron. Inf. Technol. 36(1), 215–219 (2014)
2. Kuang, Y.L., Lu, J., Hu, G.M.: Study on clutter spectrum of airborne distributed coherent MIMO radar. J. China Acad. Electron. Inf. Technol. 9(1), 59–63 (2014)


3. Dang, X.F., Yang, M.L., Chen, B.X.: Calibration technique for bistatic MIMO radar array amplitude-phase via array rotation. Chinese J. Syst. Eng. Electron. 35(12), 2483–2488 (2013)
4. Zhang, B.H., Xie, W.C., Wang, Y.L.: Performance evaluation of typical STAP for airborne bistatic radar. Chinese J. Mod. Radar 32(8), 48–53 (2010)
5. Han, W., Tang, Z.Y.: Analysis of influence of aircraft crabbing on the target detection performance. Chinese J. Air Force Radar Acad. 23(1), 39–41 (2009)
6. Liu, G.H., Niu, L.M., Li, J.Y.: Performance simulation analysis of distributed CFAR detection algorithm. Chinese J. Electron. Technol. 3(6), 12–15 (2013)

Corner Detection-Based Image Feature Extraction and Description with Application to Target Tracking

Lejun Gong, Jiacheng Feng and Ronggen Yang

Abstract Image feature extraction and description is very important for pattern recognition and image analysis. Corners in images are typical feature points and carry a lot of important information; extracting corners accurately is significant for image processing, as it can greatly reduce the amount of computation. In this paper, a target tracking algorithm is developed which is based on local invariant feature point extraction and representation with Harris-Laplace corners. The experimental results show the feasibility of the proposed method, which accurately localizes the target. Finally, it has been used to construct an intelligent transportation system.

Keywords Feature extraction · Local invariant features · Corner detection · Target tracking

L. Gong (&) · J. Feng
School of Computer Science & Technology, School of Software, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
e-mail: [email protected]

R. Yang
Faculty of Computer Science and Technology, Jinling Institute of Technology, Nanjing 211169, China

1 Introduction

Image feature extraction and description is a very important step in pattern recognition and image analysis. A local invariant feature, as opposed to a global feature, is an image region that differs from its immediate neighborhood [1]; such local regions, which may be a corner or an edge point with outstanding features, often have unique structures. Local invariant feature extraction from images has drawn much attention and become a research hotspot in the field of computer vision because it can effectively express the image content; it is widely used in image template matching [2–5], target recognition [6, 7], and image retrieval [8]. Among these image


features, corners are among the most important local features. Generally, they are edge points lying between two image regions of different brightness. Importantly, corners have favorable characteristics such as rotational and illumination invariance, that is, these local features are not affected by illumination and rotation, and such pixels make up only about 0.05 % of a whole image [9]. Therefore, extracting corners can minimize the data to be processed and greatly reduce the computation without losing image information. Corners play an important role in scale space theory [10], building 2D mosaics [11], motion tracking [12, 13], stereo vision [14, 15], the preprocessing phase of contour capturing [16], image content representation [17], and other fields; hence, corner detection is more and more important in image understanding. There are many methods for corner detection: one common class of solutions is based on gray scale, while the other is based on image contours. Gray scale-based methods mainly use the characteristics of gray scale changes in the image scale space; the scale invariant feature transform (SIFT) is an algorithm that describes image local features well, and many gray scale-based corner detection methods are related to it. Moravec [18] first developed an operator to extract feature points using gray scale variance in 1981; later, Harris proposed the Harris corner detection algorithm, which is an improved version of the Moravec operator [19]. Contour-based corner detection methods first search for image contours and then select curvature maxima along those contours; for instance, Masood et al. detected corners by sliding three rectangles along the curve while counting the number of contour points lying in each rectangle [20]. In this paper, we study the Harris-Laplace corner detection method and give a new corner descriptor to accurately represent the feature points. The next section introduces the mathematical foundation of Harris-Laplace corner detection for image feature point extraction as well as the steps of the algorithm. In Sect. 3, we design experiments to show the validity of the algorithm and develop the target tracking method for an intelligent transportation system. In Sect. 4, we conclude the discussion.

2 Harris-Laplace Corner Detection

2.1 Mathematical Foundation of Harris Corners

The basic idea is that a point is considered a corner if the absolute gradient values in two directions are both large. Such points are stable under arbitrary lighting conditions and are representative of an image. The Harris corner detection algorithm is an improvement of Moravec's corner detector: it first calculates each pixel's gradient, then computes the corner response function and finds its local maxima. The Harris corner detector is defined as follows:

$$ R = \det(Hr)/\mathrm{trace}(Hr), \qquad Hr = \begin{bmatrix} \partial^2 I/\partial x^2 & \partial^2 I/\partial x \partial y \\ \partial^2 I/\partial x \partial y & \partial^2 I/\partial y^2 \end{bmatrix}, \qquad (1) $$

where ∂I/∂x and ∂I/∂y are the partial derivatives of the gray values in directions x and y at point (x, y); these first-order directional differentials can be approximately calculated by convolving the gray values with difference operators in directions x and y. Equation (2) is a common operator, and a Gaussian kernel is also frequently used to calculate the derivatives. ∂²I/∂x∂y is the second-order mixed partial derivative, which can be computed by Eq. (3). If R exceeds a certain threshold, the point is taken as a Harris corner.

$$ \partial I/\partial x = I \otimes [-1\ 0\ 1], \qquad \partial I/\partial y = I \otimes [-1\ 0\ 1]^{T} \qquad (2) $$

$$ \partial^2 I/\partial x \partial y = (\partial I/\partial x) \cdot (\partial I/\partial y) \qquad (3) $$

However, first-order directional differentials are sensitive to noise, so it is often essential to convolve them with a Gaussian function h(x, y) before calculating the response function R, in order to reduce the impact of noise:

$$ h(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right). \qquad (4) $$

It should be noted that Harris corners are not affected by illumination and rotation, but they do not have the property of scale invariance, which is necessary for image analysis tasks such as target tracking and pattern recognition.
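The Harris response of Eqs. (1)–(4) is straightforward to compute; the following sketch (our own illustration, with an assumed smoothing scale and a SciPy dependency, not code from the paper) evaluates R for every pixel of a gray image.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def harris_response(img, sigma=1.0, eps=1e-12):
    # difference-operator gradients (Eq. (2)), smoothed products of derivatives
    # (Eqs. (3)-(4)), then R = det(Hr)/trace(Hr) from Eq. (1)
    img = img.astype(float)
    dx = convolve(img, np.array([[-1, 0, 1]]))        # dI/dx
    dy = convolve(img, np.array([[-1], [0], [1]]))    # dI/dy
    Ixx = gaussian_filter(dx * dx, sigma)
    Iyy = gaussian_filter(dy * dy, sigma)
    Ixy = gaussian_filter(dx * dy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det / (trace + eps)

# corners are the local maxima of R above a threshold; here the strongest
# response of a synthetic bright square lies at one of its corners
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))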

2.2 Harris-Laplace Corner Detection

As we know, a human can identify an object whether it is far from or near to the eye, because human vision has a good property of scale invariance, and it would be exciting if computers had the same characteristic. Against this background, Mikolajczyk and Schmid proposed the Harris-Laplace detection method to simulate human vision and make up for the deficiency of the Harris algorithm [21]. From Eq. (5) we can see that the difference is that the matrix M replaces Hr in Eq. (1):

$$ M = \mu(x, \sigma_I, \sigma_D) = \begin{bmatrix} L_x^{2}(x, \sigma_D) & L_x L_y(x, \sigma_D) \\ L_x L_y(x, \sigma_D) & L_y^{2}(x, \sigma_D) \end{bmatrix}, \qquad (5) $$

where σ_I is the integration scale, σ_D is the differentiation scale, and L_x and L_y are the derivatives computed in the x and y directions, respectively. The gradient distribution in a local neighborhood of a point can be described by this matrix. The local derivatives are computed with Gaussian kernels whose parameters are determined


by the local scale σ_D (differentiation scale). The derivatives are then averaged by smoothing with a Gaussian window of size σ_I in the neighborhood of the point. The eigenvalues of this matrix represent the two principal gray value changes along two orthogonal directions in the neighborhood of a point; points can be identified as corners when the gray value change is significant in both orthogonal directions.

2.3 Descriptor for Corners

The main task of Harris-Laplace corner detection is to calculate a multi-scale representation of the Harris interest points and then find the 'characteristic scale' of each feature point (FP) with the normalized Laplace operator; if there is no characteristic scale for an FP, it is rejected. After the characteristic scales are found, the image is blurred at all of these scales. Each FP with a specific characteristic scale is taken with a neighborhood whose size corresponds to its scale in the blurred image. For each FP, derivatives are calculated in this neighborhood and a main orientation (MO) is defined with a weighted histogram; this process is similar to Lowe's SIFT descriptor. An FP without an MO is rejected, and it is entirely possible that several well-supported directions exist. After the MO is calculated, a descriptor needs to be computed for each pair of FP and MO. The descriptor is similar to SIFT: the neighborhood of each FP is divided into 16 patches, each patch containing 16 pixels; the gradient norm and angle are calculated for each patch, and the gradient norms are accumulated into an angle histogram. If the angle bin is 45°, the descriptor is a 128 × 1 column vector, since 360°/45° × 16 = 128.

3 Applications

In this section, we construct a target tracking algorithm for an intelligent transportation system based on feature point extraction and description. We take 52 different targets from 52 frames to test the validity of the algorithm. Figure 1 shows one of the experimental materials from a high-definition traffic surveillance video; the right-hand side is the target to be tracked in the experiments. The steps of the target tracking algorithm are as follows (a small matching sketch is given after the list):

(1) Calculate multi-scale Harris-Laplace corners in the video frame and in the target, and find the characteristic scale for each FP;
(2) Compute the main orientation of each corner;
(3) Generate the descriptor of each FP;
(4) Find the maximum number of matched points according to the minimum distance principle;
(5) Localize the FPs in the corresponding images.
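Step (4) can be realized as a nearest-neighbor search over descriptors; the snippet below (ours; the distance threshold is an illustrative value, not one reported in the paper) pairs each target descriptor with the scene descriptor at minimum Euclidean distance.

import numpy as np

def match_descriptors(desc_target, desc_scene, max_dist=0.6):
    # pair each target descriptor with its nearest scene descriptor, keeping
    # only pairs whose distance falls below the (assumed) threshold
    pairs = []
    for i, d in enumerate(desc_target):
        dists = np.linalg.norm(desc_scene - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            pairs.append((i, j, float(dists[j])))
    return pairs

# toy usage: 5 target descriptors, scene = slightly perturbed copies + distractors
rng = np.random.default_rng(2)
target = rng.random((5, 128))
scene = np.vstack([target + 0.01 * rng.random((5, 128)), rng.random((20, 128))])
print(match_descriptors(target, scene))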


Fig. 1 Transportation video and the vehicle target

Figure 2 shows the FP detection result for the original target vehicle, with no change in scale, rotation, or illumination. We designed three groups of experiments on scale, rotation, and illumination invariance. The mean precision is listed in Table 1; the numbers of correct matched pairs and error matched pairs are average values over the 52 samples. Figure 3 shows the target tracking results, with (a), (b), (c), (d) corresponding to the original image and to scale, rotation, and illumination changes. For the original target image, the tracking algorithm finds 24 matched pairs, but one of them is an error match. Four point pairs are matched in (b), 10 points are matched in (c), and 15 points are matched in (d); all of the FPs extracted in (b), (c), and (d) are correct. From the mean statistics in Table 1, we can see that the target tracking algorithm can in general accurately localize the target in the searched scene. However, the performance of the tracking algorithm varies with scale, rotation, and illumination. It should be noted that the case of scale factor 1, rotation angle 0°, and illumination change 0 is the unchanged original image, so these results are the same as each other; they are listed in the table for convenient comparison. When the target image scale changed from 4 to 1/16, the number of matched points varied but the precision did not change obviously; similarly, when the illumination changed, the precision changed little. Unlike these situations, when the rotation angle changed from 0° to 40°, the precision slightly decreased. This is because the


Fig. 2 FPs in images: the left shows the FPs detected in the target image and the right shows the FPs in the scene

Table 1 The mean precision of corresponding points extracted

Scale                      4       1       1/4     1/16
Correct matched (pair)     25.6    39.3    27.4    17.0
Error matched (pair)       1.8     2.0     1.9     2.0
Relative precision (%)     93.4    95.2    93.5    89.5

Rotation                   0°      10°     20°     40°
Correct matched (pair)     39.3    21.7    17.5    14.8
Error matched (pair)       2.0     1.2     3.7     4.3
Relative precision (%)     95.2    94.7    82.54   77.1

Illumination               +50     0       −20     −50
Correct matched (pair)     31      39.3    38.0    35.0
Error matched (pair)       2.4     2.0     2.1     3.2
Relative precision (%)     92.8    95.2    94.8    91.6

Fig. 3 Target tracking results. a The target vehicle is the original image cut from the transportation video. b The scale factor of the target is 1/4. c The rotation angle of the target image is 20°. d The target image has its lightness reduced by 40


interpolation after the rotation process introduces errors. This can be avoided in a realistic environment, where the target image and the video are acquired directly from the real scene.

4 Conclusions

In this paper, we have studied the Harris-Laplace corner detection method and given a new corner descriptor which can effectively represent the local feature points in an image. The descriptor divides the neighborhood of each FP into 16 patches of 16 pixels each, calculates the gradient norm and angle of each patch, and accumulates the gradient norms into an angle histogram. A target tracking algorithm based on this corner detection and descriptor is developed for the intelligent transportation system. Experiments on realistic transportation video show that the target tracking algorithm can accurately localize the target in the searched scene even when the target image's scale and illumination change. Although the precision slightly decreases with the rotation angle, this is avoidable when the target image is derived directly from realistic video. On the whole, this target tracking algorithm is well suited to the security tasks of an intelligent transportation system.

Acknowledgments This work is supported by the Natural Science Foundation of the Jiangsu Province (Project No. BK20130417), the Scientific Research Foundation for the Introduction of Talent of Nanjing University of Posts and Telecommunications (Project No. NY213088), and NUPTSF (Project No. NY214068).

References

1. Tuytelaars, T., Mikolajczyk, K.: Local invariant feature detectors: a survey. Comput. Graphics Vis. 3(3), 177–280 (2007)
2. Colletto, F., Marcon, M., Sarti, A., Tubaro, S.: A robust method for the estimation of reliable wide baseline correspondences. In: Proceedings of the IEEE International Conference on Image Processing, pp. 1041–1044. Atlanta, GA, USA (2006)
3. Yang, Z., Guo, B.: Image mosaic based on SIFT. In: Proceedings of the International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 1422–1425. Harbin, China (2008)
4. Vincent, E., Laganiere, R.: Detecting and matching feature points. J. Vis. Commun. Image Represent. 16(1), 38–54 (2005)
5. Kim, T., Im, Y.J.: Automatic satellite image registration by combination of matching and random sample consensus. IEEE Trans. Geosci. Remote Sens. 41(5), 1111–1117 (2003)
6. Suga, K., Fukuda, T., Takiguchi, Y.A.: Object recognition and segmentation using SIFT and graph cuts. In: Proceedings of the International Conference on Pattern Recognition, Tampa, Florida, USA (2008)
7. Liu, J., Chen, Z., Guo, R.: A mosaic method for aerial image sequence by R/C model. In: Proceedings of the International Conference on Computer Science and Software Engineering, pp. 58–61. Wuhan, China (2008)


8. Schmid, C., Mohr, R.: Local gray value invariants for image retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 19(5), 530–535 (1997)
9. Chen, J., Zou, L., Zhang, J., et al.: The comparison and application of corner detection algorithms. J. Multimedia 4(6), 435–441 (2009)
10. Mokhtarian, F., Mackworth, A.K.: A theory of multiscale, curvature based shape representation for planar curves. IEEE Trans. Pattern Anal. Mach. Intell. 14, 789–805 (1992)
11. Zoghlami, O., Faugeras, R.D.: Using geometric corners to build a 2D mosaic from a set of images. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 420–425 (1997)
12. Wang, H., Brady, M.: Real-time corner detection algorithm for motion estimation. Image Vis. Comput. 13(9), 695–703 (1995)
13. Yang, W., Dou, L., Zhang, J., Lu, J.: Automatic moving object detection and tracking in video sequences. In: SPIE Fifth International Symposium on Multispectral Image Processing and Pattern Recognition, pp. 676–712 (2007)
14. Vincent, E., Laganire, R.: Matching feature points in stereo pairs: a comparative study of some matching strategies. Mach. Graph. Vis. 10, 237–259 (2001)
15. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000)
16. Sarfraz, M., Asim, M.R., Masood, A.: Capturing outlines using cubic Bezier curves. In: Proceedings of IEEE 1st International Conference on Information and Communication Technologies: from Theory to Applications, pp. 539–540 (2004)
17. Cabrelli, C.A., Molter, U.M.: Automatic representation of binary images. IEEE Trans. Pattern Anal. Mach. Intell. 12(12), 1190–1196 (1990)
18. Moravec, H.: Rover visual obstacle avoidance. In: Proceedings of the International Conference on Artificial Intelligence, pp. 785–790. Vancouver, Canada (1981)
19. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the Alvey Vision Conference, pp. 147–151. Manchester, UK (1988)
20. Masood, A., Sarfraz, M.: Corner detection by sliding rectangles along planar curves. Comput. Graph. 31, 440–448 (2007)
21. Mikolajczyk, K., Schmid, C.: Scale and affine invariant interest point detectors. Int. J. Comput. Vision 60, 63–86 (2004)

Part V

Security, Privacy, and Trust

Anonymous Entity Authentication-Mechanisms Based on Signatures Using a Group Public Key Zhaohua Long, Jie Lu and Tangjie Hou

Abstract The conventional authentication mechanism based on digital signatures identifies an entity by having it send a digital certificate and its own digital signature, so that the verifier can confirm the correctness of the claimed identity. In contrast, the now popular anonymous authentication mechanism based on a group public key identifies an entity by having it send a group public key certificate. This paper proposes a digital signature scheme that enables the verifier to authenticate an applicant and confirm that it is a valid member of a group, without learning the applicant's specific identity. The results show that the proposed scheme not only meets the authentication requirements of conventional digital signature schemes, but also largely avoids information leakage during identity verification. Keywords Information security · Anonymous authentication · Signature using a group public key



1 Introduction With the expanding application scope of the Internet, and especially the rise of e-commerce in recent years, network security authentication technology is receiving increasing attention. Under these conditions, digital signature mechanisms have become highly important, greatly improving the safety of both communicating parties by preventing information leakage, tampering, and forging. However, when information is transmitted to the receiving party, a digital signature [1] that reveals and exposes the sender's own identity is also sent, which creates a considerable risk of a security loophole [2]. Z. Long · J. Lu (&) · T. Hou Institute of Computer Architecture, Chongqing University of Posts & Telecommunications, Chongqing 400065, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_98


Anonymous authentication technology [3] based on a group public key, as a brand-new kind of information security solution, effectively prevents identity leakage. One major difference between a conventional mechanism and an anonymous entity authentication mechanism based on signatures using a group public key lies in the nature of the digital signature scheme used to produce tokens and provide confirmation of the messages generated under the authentication protocol. For an anonymous authentication mechanism, another difference is that the claimant belongs to a group [4], and authentication is conducted with respect to this group; such mechanisms require associated methods to manage the relationship between an entity and a group. An anonymous signature [5] has the following properties (a schematic interface is sketched after this list): (1) Only group members have the ability to correctly sign messages. (2) The verifier can verify that a signature is a valid group signature, but cannot discover which group member generated it. (3) Optionally, the signature can be "linked" or "opened." The anonymous entity authentication mechanisms include the following important operations: (1) An entity (the verifier) that wants to authenticate another entity (the claimant) interacts with the claimant. (2) The claimant sends a token (optionally with a group public key certificate) to the verifier. (3) The verifier confirms the validity of the provided token (and, optionally, of the group public key certificate).
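To make these properties and operations more concrete, the following Python sketch outlines a minimal group-signature interface. It is an illustration only; the names (GroupSignatureScheme, sign, verify, open_signature, link) are hypothetical and do not correspond to any standardized API or to the concrete scheme of this paper.

```python
# Illustrative sketch: a minimal group-signature interface that captures the
# three properties of an anonymous signature listed above. All names here are
# hypothetical placeholders for a concrete algorithm.

class GroupSignatureScheme:
    def sign(self, member_signature_key, message):
        """Property (1): only a holder of a member signature key can sign."""
        raise NotImplementedError

    def verify(self, group_public_key, message, signature):
        """Property (2): anyone can check validity against the *group* public
        key, without learning which member produced the signature."""
        raise NotImplementedError

    def open_signature(self, opening_key, message, signature):
        """Property (3), optional: a designated opener recovers the signer."""
        raise NotImplementedError

    def link(self, linking_key, sig_a, sig_b):
        """Property (3), optional: decide whether two signatures were produced
        by the same (still unnamed) member."""
        raise NotImplementedError
```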

2 Group Membership Issuing and Key Production Process As shown in Fig. 1, the key generation process includes the key generation algorithms that create the group membership issuing key, the group membership opening key, and the group signature linking key (or keys), if these are required by the mechanism. On this basis, the group membership issuing process can be described by steps (a) and (b): (a) The group membership issuer takes the group issuing key, the group public key, the group public parameters, and the distinguishing identifier as input; in this step, the group membership issuer might interact with the group member. (b) The group membership issuing process outputs a group member signature key [6].


Fig. 1 The group membership issuing process (including key production)
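As a schematic view of steps (a) and (b), the sketch below models the issuing process as a single function. The parameter and return names are assumptions made for illustration, and the key derivation itself is deliberately left as a placeholder rather than a real computation.

```python
from dataclasses import dataclass

@dataclass
class MemberSignatureKey:
    """Hypothetical container for the key produced in step (b)."""
    group_identifier: str
    key_material: bytes

def issue_group_membership(group_issuing_key: bytes,
                           group_public_key: bytes,
                           group_public_parameters: dict,
                           distinguishing_identifier: str) -> MemberSignatureKey:
    """Step (a): the issuer consumes the issuing key, the group public key,
    the public parameters, and the member's distinguishing identifier
    (possibly interacting with the member). Step (b): it outputs the member's
    signature key. The body is a placeholder, not a real derivation."""
    key_material = b"derived-by-the-underlying-group-signature-scheme"
    return MemberSignatureKey(
        group_identifier=str(group_public_parameters.get("group", "G")),
        key_material=key_material,
    )
```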

3 Detailed Description of Five-Pass Mutual Anonymous Authentication This section gives a detailed analysis of the most complex mechanism, five-pass mutual anonymous authentication (initiated by A). As shown in Fig. 2, the tokens involved in this mechanism should be created as follows:

TokenAB = RA || [Text9] || gsS_AG(RB || RA || G′ || G || [Text8])
TokenBA = RA || RB || [Text3] || gsS_BG(G′ || RA || RB || A || [Text2])
TokenTA = ReSG || ReSG′ || sS_T(R′A || RB || ReSG || ReSG′ || [Text5])

The values of the fields IG, IG′, ReSG, ReSG′, Status and Failure should have the following forms:


Fig. 2 Five-pass mutual anonymous authentication (initiated by A)

Table 1 Values of IG and ReSG

Field | Choice 1 | Choice 2
IG | G | CertG
ReSG | (G || PG) or Failure | (CertG || Status) or Failure

IG = G or CertG; IG′ = G′ or CertG′. ReSG = (CertG || Status), (G || PG) or Failure; ReSG′ = (CertG′ || Status), (G′ || PG′) or Failure. Status = True or False: the value of this field should be set to False if the group public key certificate is known to have been revoked; otherwise it should be set to True. Failure: ReSG will be set to Failure if neither a group public key nor a group public key certificate of G can be found by TP. In the mechanism, if TP knows the mapping between the identifier G and the group public key PG, then it should set IG = G; otherwise it should set IG = CertG, and G should be set equal to the set of distinguished identity fields in CertG. If either G or CertG is permitted to be used as an identity, there should be a prearranged means that allows TP to distinguish the two types of identity indication. The value of ReSG should be determined according to Table 1. The mechanism is performed as follows:

(1) A sends a random number RA, the identity of G, IG, and, optionally, a text field Text1 to B. (2) B sends the token TokenBA and IG′ to A. (3) A sends a random number R′A, together with RB, IG, IG′ and, optionally, a text field Text4, to TP. (4) When receiving the information of step (3) from A, TP performs the following steps: if IG = G and IG′ = G′, TP retrieves PG and PG′; if IG = CertG and IG′ = CertG′, TP checks the validity of CertG and CertG′.


(5) TP then sends TokenTA and, optionally, a text field Text7 to A. The fields ReSG and ReSG′ in TokenTA should be: the group public key certificates of G and G′ and their status, the identifiers G and G′ and their group public keys, or an indication of Failure. (6) When receiving the information of step (5) from TP, A performs steps (7) and (8). (7) A verifies TokenTA by checking the signature of TP contained in the token, and by checking that the random number R′A sent to TP in step (3) is the same as the random number R′A contained in the message-to-be-signed of TokenTA. (8) A retrieves the group public key of G′ from the message, verifies TokenBA received in step (2) by checking the group signature of B contained in the token and checking that the value of the identifier field (G) in the message-to-be-signed of TokenBA is equal to the identifier of G, and then checks that the random number RA sent to B in step (1) is the same as the random number RA contained in TokenBA. (9) A sends TokenAB to B. (10) When receiving the information of step (9) from A, B performs the following steps: (a) B verifies TokenTA by checking the signature of TP contained in the token, and by checking that the random number RB sent to A in step (2) is the same as the random number RB contained in the message-to-be-signed of TokenTA; (b) B retrieves the group public key of G from the message, verifies TokenAB by checking the group signature of A (made under group G) contained in the token and checking that the value of the identifier field (G′) in the message-to-be-signed of TokenAB is equal to the identifier of G′, and then checks that the random number RB contained in the message-to-be-signed of TokenAB is equal to the random number RB sent to A in step (2). A's checks in steps (7) and (8) are sketched below.
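The sketch below illustrates how A's checks in steps (7) and (8) could be expressed. The verification primitives are passed in as functions, and the token field layout (dictionaries with a signed part and a signature) is an assumption made for illustration, not the encoding defined by the mechanism.

```python
def a_verifies(token_ta, token_ba, verify, group_verify,
               tp_public_key, r_a, r_a_prime, g_identifier):
    """Sketch of A's checks in steps (7) and (8). `verify` and `group_verify`
    are caller-supplied stand-ins for the ordinary-signature and
    group-signature verification primitives."""
    # Step (7): TP's signature over TokenTA, and the echoed nonce R'_A.
    if not verify(tp_public_key, token_ta["signed_part"], token_ta["signature"]):
        return False
    if token_ta["signed_part"]["r_a_prime"] != r_a_prime:
        return False

    # Step (8): recover G''s group public key from ReS_G', check B's group
    # signature in TokenBA, the identifier G, and the nonce R_A from step (1).
    group_pk_g_prime = token_ta["signed_part"]["res_g_prime"]["group_public_key"]
    if not group_verify(group_pk_g_prime, token_ba["signed_part"], token_ba["signature"]):
        return False
    if token_ba["signed_part"]["g"] != g_identifier:
        return False
    return token_ba["signed_part"]["r_a"] == r_a
```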

4 Authentication of the Protocol (Based on BAN Logic) The anonymous entity authentication mechanism is analyzed on the basis of BAN logic. In the notation used below, P |≡ X means "P believes X", P ⊲ X means "P sees X", #(X) means "X is fresh", P |⇒ X means "P has jurisdiction over X", {X}_K means "X encrypted under the key K", and A ↔(ResG, ResG′) B means that the certificate verification result ResG, ResG′ is a valid statement shared between A and B. The mechanism of anonymous entity authentication based on a group public key has the following initial assumptions:

P1: A |≡ A ↔(ResG, ResG′) TP, i.e., A believes that the certificate verification result passed between A and TP is valid.
P2: B |≡ B ↔(ResG, ResG′) TP, i.e., B believes that the certificate verification result passed between B and TP is valid.
P3: TP |≡ A ↔(ResG, ResG′) TP, i.e., TP believes that the certificate verification result passed between A and TP is valid.
P4: A |≡ TP |⇒ (A ↔(ResG, ResG′) TP), i.e., A believes that TP has jurisdiction over the verification result shared between A and TP.
P5: B |≡ TP |⇒ (A ↔(ResG, ResG′) B)
P6: A |≡ TP |⇒ #(A ↔(ResG, ResG′) B)
P7: B |≡ #(RB)
P8: TP |≡ A ↔(ResG, ResG′) B
P9: A |≡ #(RA)
P10: B |≡ #(A ↔(ResG, ResG′) B)
P11: TP |≡ #(A ↔(ResG, ResG′) B)

According to BAN logic, the mechanism has the following idealized messages:

① A ⊲ {R′A, ResG, ResG′, #(ResG, ResG′), {ResG, ResG′}_KBG}_KTA
② B ⊲ {A ↔(ResG, ResG′) B}_KBG
③ A ⊲ {RB, A ↔(ResG, ResG′) B}_KBG
④ B ⊲ {RB, A ↔(ResG, ResG′) B}_KBG

On the basis of P1 and message ①, we obtain formula (1):

A |≡ TP |~ (R′A, ResG, ResG′, #(ResG, ResG′), {ResG, ResG′}_KBG)   (1)

According to the freshness rule and A |≡ #(R′A), we obtain formula (2):

A |≡ #(R′A, ResG, ResG′, #(ResG, ResG′), {ResG, ResG′}_KBG)   (2)

According to (1), (2) and the nonce-verification rule:

A |≡ TP |≡ (R′A, ResG, ResG′, #(ResG, ResG′), {ResG, ResG′}_KBG)   (3)

Based on the belief rule, formula (3) can be decomposed into:

A |≡ TP |≡ (A ↔(ResG, ResG′) B), A |≡ TP |≡ #(A ↔(ResG, ResG′) B)   (4)

According to formula (4), P3 and P5 (the jurisdiction rule), it follows that

A |≡ (A ↔(ResG, ResG′) B), A |≡ #(A ↔(ResG, ResG′) B).

On receipt of message ④, B ⊲ {A ↔(ResG, ResG′) B}_KBG, and by the same reasoning

B |≡ TP |~ (A ↔(ResG, ResG′) B), B |≡ TP |≡ (A ↔(ResG, ResG′) B),

and, with the jurisdiction premise, B |≡ (A ↔(ResG, ResG′) B).

The goals A |≡ #(A ↔(ResG, ResG′) B) and B |≡ (A ↔(ResG, ResG′) B) are thus obtained, which completes the proof.

5 The Analysis of Protocol Security The function of mutual authentication: this mechanism contains a trusted third party and two entities (A and B), each serving as both verifier and claimant, and the mutual authentication result between the two entities relies on the trusted third party. To begin with, entity A sends a random number RA to B; throughout the protocol the random numbers [7] serve as freshness values bound into the signatures. Every time B receives information, it must verify that the received random number is consistent with the one originally sent; if it is consistent, the information has not been tampered with. Mutual authentication thus helps ensure the integrity and effectiveness of the data. Prevention of man-in-the-middle attacks [8]: a man-in-the-middle attack refers to a third-party attacker impersonating the verifier towards the claimant, or capturing, replaying, or modifying messages. In this mechanism, a group member signs its messages with its group member signature key, which prevents any third party that intercepts a message from forging or modifying it undetected.
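A minimal illustration of the nonce check described above is given below, using only Python's standard library; the function names are ours for this sketch and are not part of the mechanism.

```python
import secrets

def issue_nonce() -> bytes:
    """Generate a fresh random challenge (the role of R_A / R_B)."""
    return secrets.token_bytes(16)

def replayed(sent_nonce: bytes, echoed_nonce: bytes) -> bool:
    """True if the nonce echoed inside the signed token does NOT match the
    one we issued, i.e. the token may be a replay of an older exchange."""
    # constant-time comparison avoids leaking information via timing
    return not secrets.compare_digest(sent_nonce, echoed_nonce)

# usage sketch
r_a = issue_nonce()        # sent to B in step (1)
token_nonce = r_a          # nonce recovered from TokenBA's signed part
assert not replayed(r_a, token_nonce)
```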


6 Conclusion In this paper, by analyzing and comparing the disadvantages of the conventional digital signature mechanism, the authors put forward a kind of anonymous authentication mechanism based on a group public key. In a mechanism based on anonymous authentication using a group public key, the claimant belongs to a group, and authentication is conducted with respect to this group; such mechanisms require associated methods to manage the relationship between an entity and a group. By using a group signature or a group public key certificate, the anonymity effect is realized and the claimant's true identity is not revealed. There are two modes for anonymous authentication using a group public key: mechanisms without an online TTP and mechanisms involving an online TTP. In the first mode, the claimant signs with its own group private key and sends its group public key certificate to the verifier for validation. In contrast, in mechanisms involving an online TTP, two entities A in G and/or B in G′ validate each other's group public keys using an online trusted third party (TP). This trusted third party should possess reliable copies of the group public keys of G (the group to which A belongs) [9] and G′ (the group to which B belongs), and the entities A and B should possess a reliable copy of the public key of TP [10]. This mechanism overcomes the disadvantages of the conventional digital signature, and the analysis shows that the mechanism is safe and effective.

References 1. Gu, K.: A digital signature scheme based on identity research under the standard model, pp. 5–7. Central South University, Changsha (2012) 2. Brickell, E., Li, J.: A pairing-based DAA scheme further reducing TPM resources. In: TRUST 2010, LNCS 6101, pp. 181–195 (2010) 3. Walker, J., Li, J.: Key exchange with anonymous authentication using DAA-SIGMA protocol. In: Proceedings of the 2nd International Conference on Trusted Systems, LNCS 6802, pp. 108–127 (2010) 4. Hwang, J., Lee, S., Chung, B., Cho, H., Nyang, D.: Short group signatures with controllable linkability. In: LIGHTSEC 2011, pp. 44–52 (2011) 5. Liu, X.: Design and analysis of anonymous signature scheme, pp. 2–8 (2006) 6. Hwang, J., Eom, S., Chang, K., Lee, P., Nyang, D.: Anonymity-based authenticated key agreement with binding properties. In: WISA 2012, pp. 177–191 (2012) 7. Zhang, M., Yang, B., Yao, J., Zhang, W.: Analysis and design of anonymous identity-based signature schemes in the standard model, 5, 7–12 (2011) 8. Ouyang, J., Fang, Y., Wang, S.: A mechanism of anonymous control authentication based on P2P, 5, 4–6 (2010) 9. Fu, X., Ding, J.: Analysis and improvement of public-key-based signatures, pp. 6–7 (2003) 10. Xue, P.: The public key signature mechanism based on MC, pp. 4–7 (1995)

A Comparative Study of Encryption Algorithms in Wireless Sensor Network Zonghu Xi, Li Li, Guozhen Shi and Shuaibing Wang

Abstract With the development of the Internet of Things, the wireless sensor network has received increasing attention. It can not only perceive the physical world and collect and transmit information, but it also brings new problems in information security. As the nodes of a wireless sensor network are limited in hardware resources and energy, how to select fast and energy-saving encryption algorithms for wireless sensor networks is very important. This paper constructs a test framework for comparing energy consumption, time efficiency, and space efficiency, and then tests these parameters for multiple algorithms in the same application environment. The results provide a basis for choosing suitable algorithms for the wireless sensor network.



Keywords Wireless sensor network · Encryption algorithm · Energy consumption · Time efficiency · Space efficiency

1 Introduction A wireless sensor network (WSN) is a self-organized network composed of micro sensor nodes [1, 2]. With the advantages of low cost, low data rate, and short transmission distance, it is widely used in agricultural, biological, medical, military, and other fields. In recent years, with the rapid development of wireless sensor networks, security issues have become increasingly prominent, especially in the military and commercial fields. Z. Xi (&) · S. Wang School of Computer Science and Technology, Xidian University, Xian 710071, China e-mail: [email protected] L. Li · G. Shi Department of Electronic Engineering, Beijing Electronic Science and Technology Institute, Beijing 100070, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_99


Because the data is transmitted wirelessly, information may be illegally eavesdropped, tampered with, or destroyed at any time during the transmission process, so it is particularly important to ensure the safety of data in wireless transmission. Based on the characteristics of wireless sensor networks, a series of cryptographic algorithms are mainly used to guarantee network security. Although many mature encryption algorithms already exist, this alone is not enough: given the special properties of wireless sensor networks, the question is whether an encryption algorithm can satisfy the hardware's requirements [3]. Thus, the key is how to select the right encryption algorithm. By exploring the application of cryptographic algorithms, analyzing their energy consumption and time and space efficiency, and comparing their performance parameters in wireless sensor networks, we draw conclusions from the data about which algorithms are suitable for the application scenarios of wireless sensor networks. This paper introduces the basis for selecting cryptographic algorithms in several situations and describes the framework of the system used to analyze the cryptographic algorithms in the tests. On this basis, it analyzes the performance parameters of the various algorithms and explains the reasons behind them. Finally, the characteristics of these cryptographic algorithms in wireless sensor networks are summarized.

2 Algorithm Selection Usually, a large number of sensor nodes are deployed to collect information in the monitored region, which strictly limits their cost, size, and power consumption. Therefore, the most obvious characteristics of a wireless sensor network are its limited hardware resources and power supply. As wireless sensor nodes are limited in power and volume, their code space and data space are much smaller than those of an ordinary computer: the storage space for program code and data on a computer has reached the TB and GB grades, respectively, while the equipment of a wireless sensor node remains at the KB grade [4–7], and its data processing ability is even weaker than that of a general embedded system. Thus, in the design of a wireless sensor network system, the cryptographic algorithm should be selected with the standard of small space and fast speed, so that enough hardware resources are spared for other, more important functions such as data acquisition, transmission, and network management. Nodes in a wireless sensor network are usually powered by two batteries. Because of the limited volume, the battery capacity is not large, and during the working process of the nodes the batteries cannot be replaced or recharged; once the battery energy runs out, the node loses its function. In such an environment, power consumption must be strictly controlled at every step of the design, and every technology and protocol must take energy saving as a premise, so power consumption should also be considered in the selection of algorithms. According to the current development trends of cryptography and the characteristics of sensor networks, five common and influential algorithms from the two systems of symmetric and asymmetric cryptography are selected as the study objects; they are given in Table 1.


Table 1 Introduction of the algorithms

Name | Source | Type | Key length | Security
DES | 1972, U.S.A. | Symmetric | 192 bit (Triple-DES) | Brute-force attack
AES | 2001, U.S.A., Rijndael | Symmetric | 128/192/256 bit | Resists differential and linear analysis [8]
SM1 | OSCCA (China) | Symmetric | 256 bit | Unbreakable
SM2 | OSCCA (China) | Asymmetric | 256 bit pri. key, 512 bit pub. key | Unbreakable
RSA | 1977, U.S.A. | Asymmetric | 1024/2048 bit | Depends on the decomposition of large numbers

Asymmetric cryptography has great advantages in key management. With different encryption and decryption keys, the complex key negotiation process of a wireless sensor network can be resolved [9, 12]. Therefore, asymmetric cryptography is suitable for applications with unidirectional data transmission, and it represents the development trend of cryptographic algorithms in wireless sensor networks. However, because of its complexity, this kind of cryptographic algorithm has not been widely used so far.

3 Test Framework To guarantee the integrity and reliability of the test results, the test framework is very important in the early stage of requirements analysis. It standardizes the testing work and improves the efficiency and quality of the tests, so a good test framework is an important basis for the comparison of the algorithms. The test framework used in this paper is shown in Fig. 1.

3.1 The Hardware Component

The test framework consists of three parts: input section, hardware section, and monitor section. The hardware section is the core of the whole framework, a complete hardware system composed of an 8051 core, power management, timer, DES, AES, SM1 algorithm module, SM2/RSA unit, and the SM2/RSA library. The functions of these components are shown as follows:


Fig. 1 The test framework of encryption algorithms (the input, hardware, and monitor components: test data, test program and test power; the 8051 core, power management, timer, DES/AES/SM1 modules, SM2/RSA unit and libraries on the base board; and the energy, time, and error monitors)

(1) The microprocessor executes the test program to control the coordination between the various modules and the data processing. (2) The power management unit controls the energy consumption of each module and supports the subsequent analysis of the algorithms. (3) The timer records time and measures the execution speed of the algorithms. (4) The DES, AES, and SM1 algorithm modules, implemented in hardware, are the symmetric encryption entities under test. (5) The SM2/RSA unit and the SM2/RSA library partly provide the hardware implementation of these two algorithms, and they are also entities under test. Although the degree of optimization of the hardware implementation of a given algorithm affects its execution time and energy consumption, the differences produced by different implementations of the same algorithm are much smaller than those between different algorithms, so one implementation of an algorithm can represent the average level of implementation in the construction of the test framework. We can thus focus on the comparability of the performance of different algorithms rather than on how the comparative results are affected by different optimizations of one algorithm; this effect exists, but it appears to be negligible compared with the differences among different algorithms (a simplified measurement routine is sketched below).
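The following sketch illustrates, under assumed helper names, how one measurement of a single algorithm module could be recorded; run_cipher and read_current_mA are placeholders standing in for the hardware module call and the ammeter reading of the test board, not real APIs of the framework.

```python
import time

def measure_once(run_cipher, data: bytes, read_current_mA,
                 supply_voltage_v: float = 3.0):
    """Run one encryption pass and record elapsed time and average power.
    `run_cipher` and `read_current_mA` are assumptions for this sketch."""
    start = time.perf_counter()        # plays the role of the 16-bit timer
    run_cipher(data)                   # exercise the DES/AES/SM1/SM2/RSA module
    elapsed_us = (time.perf_counter() - start) * 1e6
    current_mA = read_current_mA()     # sampled by the monitor component
    power_mW = supply_voltage_v * current_mA   # P = U * I
    return {"time_us": elapsed_us, "power_mW": power_mW}
```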

3.2 The Input Component

The input component is the front part of the test framework and consists of the test data, the test program, and the test power. It provides the necessary data, energy, and other resources for the hardware component to ensure that the hardware section operates normally. The test data supplies different data for different test programs, which is an important basis for the analysis of time efficiency; the test program provides different programs for different test objects and, when comparing the space efficiency of the algorithms, supplies the information about the stored data space. The test power supplies the hardware component and provides the basis of calculation when comparing the energy consumption of the algorithms.

3.3 The Monitor Component

The monitor component provides the output results of the test framework: it monitors and records the results from the input component and the hardware component and is an important source of data for the comparative analysis. The monitor component is composed of the energy monitor, the time monitor, and the error monitor. The energy monitor mainly measures the energy consumption of the hardware component and provides data for the energy consumption comparison; the time monitor records the execution time of the algorithm and the program and provides data for the efficiency comparison; the error monitor watches the state of the input component and prevents abnormal or wrong test results.

4 Analysis of the Algorithms This paper compares the energy consumption, time efficiency, and space efficiency of five common cipher algorithms: AES, DES, SM1, SM2, and RSA. Among them, AES, DES, and SM1 belong to the symmetric system, while SM2 and RSA belong to the asymmetric system. The process of the algorithm analysis is shown in Fig. 2. In the analysis of time efficiency and energy consumption, the tests run on the same hardware platform and the algorithms are implemented by the same method. The paper treats the algorithm itself as the primary subject, leaving aside the operating system, network protocol, data transmission, and other factors. To make the comparative analysis more convincing, the test platform is measured without an operating system, with all modules except the DMA transfer and the timer disabled.


Fig. 2 The process of algorithm analysis (selecting algorithms under power and resource limits, then analyzing time efficiency, energy consumption, and space efficiency on the WSN hardware system)

4.1 Energy Consumption

As sensor nodes take micro form factors, their battery energy is limited and physical constraints make the batteries difficult to replace. The limited energy is one of the most important constraints in the design of the whole wireless sensor network system, and it directly determines the lifetime of the network. The main energy-consuming modules are the sensor module, the processor module, and the wireless communication module [9]. In order to limit the overall energy consumption, the nodes should stay in a power-saving state, and the energy consumption of the cryptographic algorithm during encryption and decryption of data should be as low as possible, so as to extend the service lifetime of the nodes and of the whole network [9, 10]. This section measures the power of the encryption and decryption operations, and the five algorithms are tested in the same framework. In this test, a 3.0 V constant-voltage source (equivalent to the voltage of two AA batteries), a wattmeter, and an ammeter are used. The governing formula is P = UI, where P is power, U is voltage, and I is current. The model for testing energy consumption (with unrelated modules switched off) is shown in Fig. 3. In this model, the measured energy consumption contains two parts, the consumption of the system and the consumption of the algorithm: the consumption of the system is the sum of the power needed to keep the processor, memory, and other electronic components of the hardware system running, while the consumption of the algorithm is generated by the algorithm module itself [11].

Fig. 3 The model of testing energy consumption (power source, wattmeter, and ammeter connected to the hardware under test)

Fig. 4 The energy consumption of algorithms

Figure 4 shows the comparison of the energy consumption of the encryption and decryption algorithms. From the data, the gap in energy consumption among the cryptographic algorithms is not large. In the extreme cases, RSA encryption draws the largest current, 24.3 mA, and could keep working for about 82 h on two 1.5 V, 2000 mAh batteries, while AES draws the smallest current, 22.3 mA, and could keep working for about 89 h under the same conditions with the same batteries. Therefore, changing the algorithm is hardly an effective way to reduce energy consumption.
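The endurance figures above follow directly from the measured currents and the 2000 mAh capacity; the short check below reproduces them with an idealized calculation that ignores discharge effects.

```python
def battery_life_hours(capacity_mAh: float, current_mA: float) -> float:
    """Idealized endurance: capacity divided by a constant current draw."""
    return capacity_mAh / current_mA

print(int(battery_life_hours(2000, 24.3)))  # RSA, largest draw:  82 h
print(int(battery_life_hours(2000, 22.3)))  # AES, smallest draw: 89 h
```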

4.2 Time Efficiency

At present, wireless sensor networks are based on the IEEE 802.15.4 standard, which provides transmission rates of 20, 40, and 250 kbit/s in the 868 MHz, 915 MHz, and 2450 MHz radio bands, respectively. In such a low-rate communication technology, the time consumed by the cryptographic algorithms should not be excessive [12], a factor that receives little consideration in conventional wired networks. This section focuses on testing the time consumed by the encryption and decryption operations. Because of the small amount of data in wireless sensor networks, the length of a data frame cannot exceed 127 bytes at the MAC layer [13], and large blocks of data are not considered suitable for transmission; 2 K (2048) bytes is defined as the largest transmission length here. With a 16 MHz system clock, a 16-bit timer, 1 μs as the unit of time measurement, and 2 K bytes of test data used in the encryption and decryption operations, the test results are as follows. Table 2 shows that, as expected, the symmetric algorithms run faster than the asymmetric ones. Among the symmetric algorithms, DES runs the fastest, followed by AES,


Table 2 The consumption of running time

Name | Operation | Time (μs) | Key length (bit)
DES | Encryption | 13,297 | 64 × 3
DES | Decryption | 13,297 | 64 × 3
AES | Encryption | 49,781 | 192
AES | Decryption | 49,877 | 192
SM1 | Encryption | 57,856 | 128 × 2
SM2 | Encryption | 1,640,345 | 512 + 256
SM2 | Decryption | 1,345,076 | 512 + 256
RSA | Encryption | 81,173 | 1024 × 2
RSA | Decryption | 731,905 | 1024 × 2

Fig. 5 The costs of running time in classic model

and SM1 is the third. Among the asymmetric algorithms, RSA runs much faster than SM2. As Fig. 5 shows, all of the symmetric algorithms can satisfy the rates specified in IEEE 802.15.4, while the two asymmetric algorithms cannot.
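A rough way to verify this is to convert the encryption times for the 2048-byte test block (Table 2) into an effective throughput and compare it with the 250 kbit/s peak rate; the snippet below is such a back-of-the-envelope check, not part of the original test framework.

```python
BLOCK_BITS = 2048 * 8            # the 2 KB test block
PEAK_RATE_KBPS = 250             # highest IEEE 802.15.4 data rate

encryption_time_us = {           # encryption times from Table 2
    "DES": 13_297, "AES": 49_781, "SM1": 57_856,
    "RSA": 81_173, "SM2": 1_640_345,
}

for name, t_us in encryption_time_us.items():
    throughput_kbps = BLOCK_BITS / (t_us / 1e6) / 1000
    verdict = "meets" if throughput_kbps >= PEAK_RATE_KBPS else "falls below"
    print(f"{name}: {throughput_kbps:,.0f} kbit/s ({verdict} 250 kbit/s)")
# DES ~1232, AES ~329 and SM1 ~283 kbit/s meet the rate;
# RSA ~202 and SM2 ~10 kbit/s fall below it.
```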

4.3 Space Efficiency

Nodes in wireless sensor networks not only monitor and collect data but also act as routers. Both the data and the routing tables need storage

Table 3 The costs of running space

Name | Implementation | RAM (byte) | ROM (byte)
DES | Hardware | 50 | 0
AES | Hardware | 64 | 0
SM1 | Hardware | 65 | 0
SM2 | Firmware | 352 + 2991 | About 31,523
RSA | Firmware | 1024 + 2328 | About 29,755

space, which is scarce on nodes whose RAM and ROM are relatively limited. Thus the selected cryptographic algorithm should be easy to implement within a small storage space. In this section, the symmetric algorithms are implemented in hardware and the asymmetric algorithms are implemented as a hybrid of hardware and software. As mentioned in the hardware section of the testing framework, SM2 and RSA have only the basic operations and a function library, which together provide sufficient functionality to implement the algorithms: the hardware is equivalent to an accelerator, and the software actually carries out the algorithms; in the testing framework this is called hybrid implementation. The statistical results of the running space are shown in Table 3. DES needs three key spaces of 64 bits, two buffer spaces of 64 bits (the input buffer and the output buffer), a mode space of 16 bits, and an initial vector space of 64 bits. AES needs a key space of 256 bits and two buffer spaces of 128 bits. SM1 needs a basic key space of 128 bits, an extended key space of 128 bits, two buffer spaces of 128 bits, and a mode space of 8 bits. SM2 needs a public key space of 512 bits, a private key space of 256 bits, and two buffer spaces of 1024 bits, and additionally a code ROM of about 32 kB and a data ROM of about 3 kB. SM2 is an algorithm based on elliptic curves; because of its large amount of computation, it usually runs on computers in pure software, while in embedded devices and mobile terminals with weak processing power, hardware is generally used for some basic computations, such as modular arithmetic and inverse operations [14], and software is used for the logical parts such as data exchange; this hybrid of hardware and software improves the algorithm's efficiency on embedded devices. RSA needs a public key space of 2048 bits, a private key space of 2048 bits, and two buffer spaces of 2048 bits, and its software part requires an additional code ROM of about 30 kB and a data ROM of about 2 kB; it belongs to the public key cryptosystem and, like SM2, usually runs as a hybrid of hardware and software. The RAM space of the symmetric algorithms is very small and the maximum key length is 256 bits, whereas the asymmetric algorithms not only use a large RAM space but also take up a lot of extra ROM space, and their minimum key length is 256 bits. Although the asymmetric algorithms use a large storage space, this space is used to manage key generation and distribution and the verification of signatures. In a wireless sensor network, information generally flows unidirectionally from the nodes to the gateway: the nodes only need to encrypt the data without decrypting, and the


gateway only needs to decrypt the data without encrypting. Symmetric encryption and decryption use the same key, so the key must be agreed upon before data can be sent and received. Asymmetric algorithms use the public key for encryption and the private key for decryption, and the verification of signatures can ensure data security by preventing the data from being illegally tampered with. This extra space is therefore necessary for sensor networks; at the same time, it also effectively solves the key negotiation problem of asymmetric algorithms in data transmission.
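The symmetric RAM figures in Table 3 can be reproduced from the per-algorithm state enumerated above, as the small check below shows; the field sizes used are exactly those stated in the text.

```python
def ram_bytes(field_bits):
    """Sum register/buffer sizes given in bits and convert to bytes."""
    return sum(field_bits) // 8

# DES: 3 x 64-bit keys, 2 x 64-bit buffers, 16-bit mode, 64-bit IV
print(ram_bytes([64] * 3 + [64] * 2 + [16, 64]))   # -> 50
# AES: 256-bit key, 2 x 128-bit buffers
print(ram_bytes([256, 128, 128]))                  # -> 64
# SM1: 128-bit basic key, 128-bit extended key, 2 x 128-bit buffers, 8-bit mode
print(ram_bytes([128, 128, 128, 128, 8]))          # -> 65
```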

5 Conclusion According to the features of limited hardware resources and energy in wireless sensor networks, this paper selects several cryptographic algorithms for study and comparison. The results show that symmetric encryption achieves high speed, simple implementation, short keys, and small space; therefore, in applications with relatively weak safety requirements, symmetric encryption is more suitable for wireless sensor networks to encrypt bulk data. With their low speed and large space occupation, asymmetric algorithms have difficulty meeting real-time requirements for encryption and decryption, but they have great advantages in key management and security, indicating significant development potential in wireless sensor networks in the future. Acknowledgments This work was supported by the Grand Science Research Foundation of Central University under Grant (No. 2014CLJH07 and No. 2014XKJS01).

References 1. Yi, F., Li, Z., Wang, H.: Energy-efficient data collection in multiple mobile gateways WSN-MCN convergence system. In: IEEE Consumer Communications and Networking Conference (CCNC), pp. 271–276. IEEE (2013) 2. Healy, M., Newe, T., Lewis, E.: Analysis of hardware encryption versus software encryption on wireless sensor network motes. In: Smart Sensors and Sensing Technology, pp. 3–14. Springer, Berlin, Heidelberg (2008) 3. Biswas, K., Muthukkumarasamy, V., Sithirasenan, E., et al.: A simple lightweight encryption scheme for wireless sensor networks. In: Distributed Computing and Networking, pp. 499–504. Springer, Berlin, Heidelberg (2014) 4. Varalakshmi, L.M., Sudha, G.F., Jaikishan, G.: A selective encryption and energy efficient clustering scheme for video streaming in wireless sensor networks. Telecommun. Syst. 1–9 (2013) 5. Kayalvizhi, R., Vijayalakshmi, M., Vaidehi, V.: Energy analysis of RSA and ELGAMAL algorithms for wireless sensor networks. In: Recent Trends in Network Security and Applications, pp. 172–180. Springer, Berlin, Heidelberg (2010) 6. Baek, J., Tan, H.C., Zhou, J., et al.: Realizing stateful public key encryption in wireless sensor network. In: Proceedings of the IFIP TC 11 23rd International Information Security Conference, pp. 95–107. Springer, US (2008)


7. Perrig, A., Stankovic, J., Wagner, D.: Security in wireless sensor networks. Commun. ACM 47(6), 53–57 (2004) 8. Suárez, N., Callicó, G.M., Sarmiento, R., et al.: Processor customization for software implementation of the AES algorithm for wireless sensor networks. In: Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation, pp. 326–335. Springer, Berlin, Heidelberg (2010) 9. Mancill, T., Pilskalns, O.: Combining encryption and compression in wireless sensor networks. Int. J. Wirel. Inf. Netw. 18(1), 39–49 (2011) 10. Mandal, S., Chaki, R.: A novel power balanced encryption scheme for secure information exchange in wireless sensor networks. In: Advances in Computing and Information Technology, pp. 263–271. Springer, Berlin, Heidelberg (2012) 11. Guo, P., Zhang, H., Fu, D.S., et al.: Hybrid and lightweight cryptography for wireless sensor network. J. Comput. Sci. 39(1), 69–72 (2012) 12. Qiu, W., Zhou, Y., Zhu, B., et al.: Key-insulated encryption based group key management for wireless sensor network. J. Central South Univ. 20, 1277–1284 (2013) 13. Jin, N., Zhang, D.Y., Gao, J.Q., et al.: A study on the application of symmetric ciphers and asymmetric ciphers in wireless sensor networks. Chinese J. Sens. Actuators 24(6), 874–878 (2011) 14. Du, X., Chen, H.H.: Security in wireless sensor networks. IEEE Wirel. Commun. 15(4), 60–66 (2008)

Survey on Privacy Preserving for Intelligent Business Recommendation in Cloud Yong Xu, Ming Li, Xiaomei Hu, Yougang Wang and Hui Zhang

Abstract Security is a key issue for intelligent business recommendation services in the cloud computing environment. This paper analyzes access control strategies and cryptographic theory in the cloud environment, privacy-preserving models for reasoning about privacy in outsourcing situations, and protection methods for location privacy and trajectory privacy. It concludes that the following remain active topics: the concept of privacy in the information field, access control strategies for cross-domain cloud servers, and privacy-preserving methods for users' access patterns. Keywords Cloud computing · Recommendation service · Security · Privacy preserving

1 Introduction Information technology has become increasingly important for enterprises in recent years, and the large-scale data held by organizations makes it difficult for them to manage and utilize such data sources. At the same time, cloud computing technology, which can be used to manage massive data, is attracting more and more attention for its convenience, economy, and scalability [1]. Organizations cannot manage data sets beyond their own capacity and maintain the infrastructure themselves; they can focus on their key activities if they make good use of cloud computing technology. In fact, cloud computing technology has already been applied in many areas [2]; location-based recommendation service is a classic scenario [3]. However, managing data on a cloud computing platform can lead to privacy leaks [4–6],

Y. Xu (&)  M. Li  X. Hu  Y. Wang  H. Zhang Department of Computer Science & Technology, Anhui University of Finance & Economics, Anhui, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_100


compared with the traditional computing mode, in which data and information are stored on an organization's own servers. Sensitive information can be exposed in the open environment of the cloud platform when data and information are stored in rented infrastructure. The Cloud Security Alliance has listed the seven most serious security threats in the cloud computing field, one of which is sensitive data loss or leakage. Microsoft has pointed out that among the main security challenges of cloud computing are attacks on cloud computing systems and illegal access to or use of sensitive information. The Chinese Internet Center published a report showing that 66.9 % of the intelligent mobile applications among the top 1400 apps of all kinds in domestic Android markets capture users' private data, and 34.5 % of them exhibit privacy-violating behavior. For example, trajectory data contains useful knowledge that can support various decisions for mobile users; however, aggressive reasoning on trajectory data can expose individuals' interests and hobbies, behavior patterns, and social customs. It is therefore important not only to ensure data quality in location-based services, but also to protect the real-time trajectory privacy of mobile users.

2 Security in the Cloud Computing Scenario Concepts such as data access control, user privacy, confidentiality, and integrity take on new meanings in the scenario of intelligent recommendation services on a cloud computing platform, different from the traditional data management mode, and some new security issues have emerged. New access control modes are needed for such applications in a cloud computing situation, where database services may be provided by several service providers belonging to different security domains and must be agreed upon by each service provider, because each security domain has its own access control for managing its resources and applications. The approach adopted by most researchers is an access control mode based on cryptographic theory in the cloud computing environment [7]. Encryption is a common method to protect sensitive data, but it does not support efficient data manipulation, and this problem strongly constrains its application. Huang et al. [8] designed a computable encryption scheme, CESVMC, based on matrix and vector operations: CESVMC classifies cloud data into strings and numbers and encrypts the data via vector and matrix calculations; it supports fuzzy queries on encrypted strings as well as addition, subtraction, multiplication, and division, ensuring privacy during data storage and computation. In order to prevent the cloud provider from misusing cloud data, it is essential to adopt special encryption and control schemes to protect data privacy in the cloud computing environment. Gentry proposed a fully homomorphic encryption mechanism [9]; Jensen et al. used an


encryption method based on ring and group signatures to achieve anonymous storage of users' data [10]; Itani et al. designed a method that provides security domains for the cloud environment through the tamper-resistant capability of an encryption coprocessor, which can deny unauthorized access to cloud data from both the physical and the logical point of view [11]; and Echeverria et al. proposed an access control method based on attribute encryption to protect users' sensitive data [12]. Focusing on the problem of illegal data theft, the U.S. DOD has put forward clear and specific treatment guidelines [13]. In particular, Kirch et al. proposed an encryption method with self-encryption and full-disk encryption functions to protect encrypted data in virtual machine images and in the files constituting those images [14]. Key management for multiple users in the cloud environment is an important problem because of its particularity: the scheme should not only allow multiple users to share access to the data, but also prevent illegal access to the data, which requires a reasonable key management architecture. For example, Seitz et al. proposed a key management architecture supporting multiple accesses to encrypted data [15], and Huang et al. proposed a key management scheme using the session ID and the user's ID [16].

3 Privacy Preserving for Outsourced Data in Recommendation Services Privacy preserving of outsourced data in business recommendation services concerns how to prevent illegal users from accessing sensitive outsourced data without authorization when the data owner submits its data to a not-fully-trusted third-party outsourcing server. Privacy preserving issues appear at every stage of the life cycle of outsourced data. The main privacy preserving methods for outsourced recommendation data include data models, formal methods, and multiple-instance methods [17, 18]. Since the database outsourcing service provider is located in an untrusted domain, the data owner encrypts sensitive data before submitting it to the server in order to protect the delegated data. Data encryption is a basic means of protecting the privacy of sensitive data, but it does not support effective manipulation [8]. Database encryption mechanisms are generally divided into two types, inner encryption and outer encryption, and the encryption granularity can be divided into four levels: table, attribute, record, and record value. The smaller the encryption granularity, the higher the security, and the larger the influence on the operating efficiency of the system. Since the server is not fully trusted, encryption and decryption are usually implemented on the client side, so outer encryption has attracted more and more researchers' attention in the outsourcing situation. However, all such schemes increase the performance burden on the client, so in most scenarios they are not suitable for the access conditions of handheld and mobile device users.


4 Location and Trajectory Privacy Preserving in Business Recommendation Services With the rapid development of mobile computing technology, applications based on location services have become more and more popular. A location service refers to a user issuing requests related to its current location (e.g., navigation services, tracking services, location-based advertising recommendation services, weather forecast services). Although location-based services bring great convenience, privacy preserving issues must be considered because of not-fully-trusted servers and attackers, such as how to protect sensitive positions, trajectory information, and sensitive location-based queries [19, 20]. The existing research on mobile user trajectory privacy preserving mainly focuses on two aspects: trajectory privacy in the data publishing situation, including the protection of static position and trajectory privacy information, and trajectory privacy protection in LBS [21]. The essence of the former is to protect static sensitive information in the trajectory data of mobile users rather than the trajectory information itself; the latter applies privacy preserving techniques (disturbance, generalization, limitation) to solve trajectory privacy preserving directly, while not taking into account the characteristics related to the temporal nature of mobile trajectory data. The problems brought by location, scale, and high dimensionality cannot be solved directly by existing privacy protection technology, such as the efficiency of computing equivalence classes for high-dimensional data or cutting methods for privacy relationships that emerge in real-time incremental mobile trajectory data. From the viewpoint of the protected object, trajectory privacy preserving methods mainly work from two aspects: cutting off the relationship between trajectory information and individual identity, and dividing a trajectory into several sub-trajectories to cut off the relationships among sub-trajectories within trajectory sequences. Privacy protection technologies designed from the first viewpoint can be divided into three categories: disturbance-based, constraint-based, and anonymity-based privacy preserving [19, 22]. In [23] the scalable mobile location privacy protection method MOBIHIDE is introduced; MOBIHIDE protects users' privacy well by using the Hilbert space-filling curve to map the two-dimensional position coordinates of users into a one-dimensional space, and by using the scalable and highly fault-tolerant distributed hash table structure of Chord to self-organize mobile users into a P2P system. A method based on disturbing and exchanging users' identifiers was proposed in [24] to protect users' privacy when their trajectories are close to each other. Fake identifiers can also be used to protect users' location privacy: the goal of this technique is that the attacker may obtain the exact location information but does not know to whom the location data belongs, or the location information in the user's query is false, because the identification information is disturbed or obfuscated when the user submits a query [25]. A personal location privacy


preserving scheme based on an intermediate server was put forward in [26, 27], which allows the user to specify the k-anonymity degree and the server's response efficiency; the essential idea is to obfuscate the mobile user's location information at the intermediate server, which prevents the attacker from obtaining the relationship between the user's location information and identification. k-anonymity technology has been applied to location privacy preserving in many scenarios [28]: when there are k users in one area and one of them wants to submit a query to the location server, the user's location information is replaced by the area, which means the user's location privacy achieves the k-anonymity criterion. The area can be described by a data structure such as the three-tuple ((x1, x2), (y1, y2), (t1, t2)); k-anonymity based on such a data structure means that at least k users enter or leave the area ((x1, x2), (y1, y2)) during the period (t1, t2). This is a trajectory privacy preserving method designed from the viewpoint of location information anonymity. A trajectory privacy preserving method named the silent period was proposed in [29, 30]; its main idea is that a user's trajectory is divided into two types of periods, mixture periods and application periods: multiple users' identifications are mixed in the mixture area, and a user can only submit queries in the application area, but this method may obviously reduce the quality of service. A silent-cascade-based network structure was proposed in [31], whose core is to anonymize users' location information from both the time and the space view to achieve the privacy preserving purpose. Trajectory privacy preserving methods designed from the viewpoint of cutting the relations between trajectory segments are also a good idea [32]: a user's trajectory is cut into several sub-trajectories, and although the attacker may confirm the user's identification for each sub-trajectory, he cannot recover the whole trajectory from such sub-trajectories or map the whole trajectory to the user's identification. A spatial region method based on Hilbert anonymity was developed in [23], which promises that the user's trajectory privacy can achieve k-anonymity even if the attacker knows the user's identification [23].
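As a minimal illustration of the spatio-temporal k-anonymity criterion described above (the data layout is an assumption made for this sketch), a cloaking region ((x1, x2), (y1, y2), (t1, t2)) can be tested by counting the distinct users observed inside it:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    user_id: str
    x: float
    y: float
    t: float

def satisfies_k_anonymity(region, observations, k):
    """region = ((x1, x2), (y1, y2), (t1, t2)); the region is k-anonymous if
    at least k distinct users are seen inside it during the time window."""
    (x1, x2), (y1, y2), (t1, t2) = region
    users = {o.user_id for o in observations
             if x1 <= o.x <= x2 and y1 <= o.y <= y2 and t1 <= o.t <= t2}
    return len(users) >= k
```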

5 Conclusion and Future Works It can be concluded that privacy disclosure is the foremost issue preventing intelligent business recommendation from being applied extensively in the cloud. Further research can focus on the following three aspects: (1) The definition of "privacy" in the information field. Privacy, in the fields of law and sociology, refers to an individual's name, portrait, and other private information that the user does not want to be known by others. With the rapid growth of electronic data, the privacy embedded in electronic data has become a concern, and the protected object now includes individuals' sensitive information.


So the concept of "information privacy" has attracted more and more attention, and it should be studied extensively so that it can be clarified clearly. (2) Cross-domain access control models with a privacy protection function in the cloud. Location services in the cloud computing environment are an aggregation and sharing service model based on relevant data from large-scale distributed cyber resources. When the user data related to a location service is outsourced to a cloud service provider, data ownership and data management in the cloud computing environment are separated, and the response to a user's request on the cloud platform may be provided by multiple servers. On the other hand, because the virtual machine management strategies of each cloud service provider are different, the virtual machine formats supported by each service provider may also differ, so it is an important issue, from both the theoretical and the technical view, how to decide the cross-domain access control strategy among different cloud service providers when a user submits a query in the cloud scenario. (3) Privacy preserving methods for users' access patterns. Most work in this area studies mechanisms and methods for preventing attackers or the search engine server from obtaining a user's access pattern from access logs, including the data access distribution. In intelligent business recommendation applications, the data on user search behavior is also valuable information: search engine servers may analyze such data for their own interests and exploit information such as which data a user uses and how the data is used. The privacy preserving demand arises because the search engine server may not be trusted, so the user may not want some information about the query to be accessible to the recommendation server. Acknowledgments The work was supported by the Humanity and Social Science Foundation of the Ministry of Education under Grant No. 12YJA630136; the Anhui Provincial Natural Science Foundation under No. 1408085MF127; the Youth Talent Science Foundation of the Anhui Education Department of China under Grant No. 2013SQRL031ZD; the Fund of Anhui University of Finance & Economics under Grant No. ACKY1302ZDB; and the Construction Foundation of the Discipline "Enterprise information management and data mining".

References 1. Fernando, N., Loke, S.W., Rahayu, W.: Mobile cloud computing: A survey. Future Gener. Comput. Syst. 29(1), 84–106 (2013) 2. Dudin, E., Smetanin, Y.: A Review of Cloud Computing. Sci. Tech. Inf. Process. 38(4), 280– 284 (2011) 3. Wang, K., Huang, B.W., Peng, W.C.: An efficient geometry data allocation algorithm in cloud computing environments. In: Proceedings of the International Conference on Parallel and Distributed Systems, pp. 260–267 (2012) 4. Lien, I.T., Lin, Y.H., Shieh, J.R., et al.: A novel privacy preserving location-based service protocol with secret circular shift for k-NN search. IEEE Trans. Inf. Forensics Secur. 8(6), 863–873 (2013)


5. Zhang, Q., Cheng, L., Boutaba, R.: Cloud Computing: State-of-the Art and Research Challenge. J. Internet ServAppl 1, 7–18 (2010) 6. Hamlen, K., Kantarcioglu, M., Khan, L., et al.: Security Issues for Cloud Computing. Int. J. Inf. Secur. Priv. 4(2), 39–51 (2010) 7. Hong, C., Zhang, M., Feng, D.: AB-ACCS: a cryptographic access control scheme for cloud storage. J. Comput. Res. Dev. 47(Suppl.), 259–265 (2010) 8. Huang, R., Gui, X., Yu, Si, et al.: Privacy-preserving computable encryption scheme of cloud computing. Chin. J. Comput. 34(12), 2391–2402 (2011) 9. C. ACentry. Fully Homorphic Encryption Scheme. Stanford University, California, Sept 2009 10. Jensen, M, Schage, S., Schwenk, J.: Towards an anonymous access control and accountability scheme for cloud computing. In: The 3rd International Conference on Cloud Computing. Miami, Florida, pp. 540–541 (2010) 11. Itani, W., Kayss, A., Chehab, A.: Privacy as a service: privacy-aware data storage and processing in cloud computing architectures. In: The 8th IEEE International Conference on Dependable, Autonomic and Secure Computing, pp. 711– 716. Chengdu (2009) 12. Echeverria, V., MLiebrock, L., Dongwan, S.: Permission management system: permission as a service in cloud computing. In: The 34th Annual IEEE Computing Software and Applications Conference Works hops, pp. 371–375. Seoul, July 2010 13. Mather, T., Kumaraswamy, S., Latif, S.: Cloud security and privacy. USA: O’Reilly Media (2009) 14. Kirch, J.: Virtual machine security guidelines. The Center for Internet Security (2007) 15. Seitz, L., Pierson, J.M., Brunie, L.: Key Management for encrypted data storage in distributed systems. In: The 2nd IEEE International Security in Storage Workshop, pp. 20–31. Washington (2003) 16. Huang, J., Xie, C., Cai, B.: Research and implement of an encrypted file system used to NAS. In: The 2nd IEEE International Security in Storage Workshop, pp. 1–5. Washington (2003) 17. Muntés-Mulero, V., Nin, J.: Privacy and anonymization for very large datasets. In: Chen, P., (ed.) Proceedings of the ACM 18th International Conference on Information and Knowledge Management, pp. 2117–2118. CIKM, Association for Computing Machinery, New York (2009) 18. Wong, W.K., Cheung, D.W., Hung, E., et al.: Security in outsourcing of association rule mining. In: Proceedings of 33rd International Conference on Very Large Data Bases (VLDB), Vienna, Sept 2007 19. Chen, R., Fung, B.C.M., Mohammed, N., et al.: Privacy-preserving trajectory data publishing by local suppression. Inf. Sci. 231(5), 83–97 (2013) 20. Mahdavifar, S., Abadi, M., Kahani, M., et al.: A clustering-based approach for personalized privacy preserving publication of moving object trajectory data. Lect. Notes Comput. Sci. 7645, 149–165 (2012) 21. Huo, Z., Meng, X.: A survey of trajectory privacy-preserving techniques. Chin. J. Comput. 34 (10), 1820–1830 (2011) 22. Komishani, E.G., Abadi, M.: A generalization-based approach for personalized privacy preservation in trajectory data publishing. In: Proceedings of 2012 6th International Symposium on Telecommunications, pp. 1129–1135, Nov (2012) 23. Gabriel, G., Panos, K., Spiros, S.: Prive: anonymous location-based queries in distributed mobile systems. In: Proceedings of the 16th International Conference on World Wide Web, pp. 371–380 (2007) 24. Hoh, B., Gruteser, M.: Protecting location privacy through path confusion. In: Proceedings of the First International Conference on Security and Privacy for Emerging Areas in Communications Networks, pp. 192–205 (2005) 25. 
Pfitamann, A., Kohntopp, M.: Anonymity unobservability and pseudonymity-a proposal for terminology. In: Proceedings of the Workshop on Design Issues in Anonymity and Unobservability, pp. l–9. LNCS, Springer, Heidelberg (2001) 26. Thomas, J., Liu, L.: Protecting location privacy with personalized k-anonymity: architecture and algorithms. IEEE Trans. Mobile Comput. 7(l), l–18 (2008)

1106

Y. Xu et al.

27. Mokbel, M.F., Chow, C.-Y., Aref, W.G.: The new casper: query processing for location services without compromising privacy. In: Proceedings of the 32nd International Conference on Very Large DataBases, pp. 763–774 (2006) 28. Gruteser, M., Grunwald, D.: Anonymous usage of location-based services through spatial and temporal cloaking. In: Proceedings of the 1st International Conference on Mobile Systems, Applications, and Services, pp. 31–42 (2003) 29. Huang, L., Matsuura, K., Yamane, H.: Enhancing wireless location privacy using silent period. In: Proceedings of the IEEE Wireless Communications and Networking Conference, pp. 1187–1192 (2005) 30. Beresford, A.R., Stajano, F.: Mix zones: user privacy in location-aware services. In: Proceedings of the 2nd IEEE Annual Conference on Pervasive Computing and Communications Workshops (PERCOMWO4), pp. 127–131 (2004) 31. Chow, C.Y., Mokbel, M.F., Liu, X.: A peer-to-peer spatial cloaking algorithm for anonymous location-based services. In: Proceedings of the 14th annual ACM International Symposium on Advances in Geographic Information Systems, pp. 171–178 (2006) 32. Gabriel, G., Panos, K., Ali, K., et al.: Private queries in location based services: anonymizers are not necessary. In: Proceedings of the 2008 ACMSIGMOD international conference on Management of data, pp. 121–132 (2008)

The Research on PGP Encrypted Email Recovery

Qingbing Ji, Lijun Zhang and Fei Yu

Abstract PGP email encryption technology is a double-edged sword: on the one hand, it helps prevent the user's email content from being monitored or tampered with; on the other hand, some criminals take advantage of PGP encrypted email to engage in illegal activities. In this paper, we propose an effective method to crack a PGP encrypted email message by recovering the corresponding private key password, which provides a feasible approach for judicial investigation, counterterrorism, and the prevention of criminal activity.

Keywords PGP · Private key ring · Signature

1 Introduction

Since the Internet TCP/IP protocol suite is insecure for file transfer, email, and other electronic business affairs, it is urgent to solve the problem of protecting private data from being stolen or tampered with and of ensuring the security of email during transmission. The most common way is to adopt encryption technology, and the most popular email encryption software is Pretty Good Privacy (PGP) [1, 2]. PGP is an encryption software series based on the RSA, SHA-1, and AES algorithms. It is approved by the U.S. National Institute of Standards and Technology (NIST) as one of the two secure email encryption systems, and it has acquired widespread attention and recognition from its users. Nowadays, the PGP source code is free [3] and its personal edition is divided into a free version and a desktop version. The free version can be freely downloaded but is only permitted to be used by home users, students, and nonprofit organizations; moreover, this version can merely encrypt and sign email, while the desktop version is more powerful and can even be applied to commercial purposes [4].

Q. Ji (&) · L. Zhang · F. Yu
Science and Technology on Communication Security Laboratory, Chengdu 610041, China
e-mail: [email protected]


There are currently many versions of PGP, and the final freely available version is PGP 10.0.2 [5]. Owing to the acquisition of PGP by Symantec Corporation [6], PGP is no longer released separately as a stand-alone installation package after version 10.0.2 but exists in the form of integrated plug-ins contained in other commercial security products of Symantec (Norton). Hence, the majority of Internet users employ PGP 10.0.2. Encryption protection is a double-edged sword. On the one hand, it conveniently prevents private information from being leaked. On the other hand, the end-to-end encryption mode provides an opportunity for some criminals to engage in illegal activities through encrypted email, which makes it more difficult to carry out investigation and evidence collection for criminal activities such as property infringement and corrupt transactions, and even for worse cases affecting market economic order, national security, and social stability. Therefore, research on the recovery of PGP encrypted email messages is of great significance for preventing and investigating the leaking of national secrets, terrorism, and other criminal activities.

2 The Working Principles of PGP Email Encryption

2.1 Some Notations

ks        session key;
skrX      user X's private key;
pkrX      user X's public key;
EC        symmetric encryption; optional algorithms are AES (up to 256 bits), CAST, TripleDES, IDEA, Twofish, Blowfish, and Arc4 (128 bits), etc.;
DC        symmetric decryption;
H         a hash function; optional algorithms are SHA-1, SHA-256, SHA-384, SHA-512, MD5, and RIPEMD-160, etc.;
Z         data compression algorithm; optional algorithms are ZIP, ZLIB, etc.;
Z−1       the inverse operation of Z;
||        concatenation operation;
R64       base 64 format of ASCII code;
R64−1     the inverse operation of R64.

2.2 The Principles of PGP Email Encryption

Let A and B be the email sender and receiver, respectively. The email sender A encrypts and signs an email with the following process:


(1) Enters the password of the private key ring and gets his private key skrA used for the email signature;
(2) Uses a hash algorithm H to obtain the message digest of the plaintext (for example, H can be SHA-1 or RIPEMD-160);
(3) Uses the private key skrA to sign the message digest and obtains the signed message digest;
(4) Merges the signed message digest and the plaintext, then compresses them with algorithm Z;
(5) Generates the session encryption key ks randomly;
(6) Uses ks and the symmetric encryption EC to encrypt the compressed message from the last step and gets the corresponding ciphertext;
(7) Uses the public key pkrB to encrypt ks with the RSA algorithm;
(8) Merges the encrypted session key and the ciphertext, then sends this encrypted message to user B after Base64 format conversion.
This PGP email signature and encryption process is shown in Fig. 1. When user B receives the email, he will execute the decryption and verification process as follows:
(1) Implements the R64−1 operation on the received ciphertext, which transforms the email message from the ASCII code format to a hexadecimal string format;

Fig. 1 The email signature and encryption process


(2) Enters B's password to obtain the private key skrB from the private key ring for decrypting the session key;
(3) Decrypts the session key with skrB to get the plaintext of ks;
(4) Executes the decryption algorithm DC with key ks to obtain the compressed concatenation message "signed message digest || plaintext";
(5) Decompresses with the Z−1 algorithm to get the original message "signed message digest || plaintext";
(6) Uses the public key pkrA to decrypt the signed message digest;
(7) Takes the plaintext and computes its message digest with the H algorithm;
(8) Compares the two message digests from step (6) and step (7); if they match, accepts this email as coming from user A, else rejects it.
This PGP email decryption and verification process is shown in Fig. 2.

Fig. 2 The email decryption and verification process
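To make the data flow of Sect. 2.2 concrete, the following Python sketch mirrors the sender-side steps (1)–(8); the RSA signing, RSA encryption, and symmetric encryption primitives are passed in as placeholder callables rather than tied to a specific library, so only the ordering of operations is illustrated, not any particular PGP implementation.

import base64, hashlib, os, zlib

def pgp_style_encrypt(plaintext: bytes, sign_with_skrA, rsa_encrypt_with_pkrB, aes_encrypt) -> bytes:
    """Sketch of the sender-side sign-then-encrypt flow of Sect. 2.2.
    sign_with_skrA, rsa_encrypt_with_pkrB and aes_encrypt are hypothetical
    callables standing in for the RSA/AES primitives."""
    digest = hashlib.sha1(plaintext).digest()               # step (2): H(plaintext)
    signed_digest = sign_with_skrA(digest)                  # step (3): signature with skrA
    compressed = zlib.compress(signed_digest + plaintext)   # step (4): Z(signed digest || plaintext)
    ks = os.urandom(32)                                     # step (5): random session key
    ciphertext = aes_encrypt(ks, compressed)                # step (6): EC(ks, compressed message)
    encrypted_ks = rsa_encrypt_with_pkrB(ks)                # step (7): RSA-encrypt ks with pkrB
    return base64.b64encode(encrypted_ks + ciphertext)      # step (8): R64 of the merged message

The receiver-side process of Fig. 2 simply reverses each of these operations in the opposite order.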

2.3 PGP Key Management

Session Key Generation. The PGP session key is a random number produced by a random number generator based on the ANSI X9.17 algorithm. This generator obtains its random seed from the user's keystroke intervals. The random seed file "randseed.bin" on the hard disk is also encrypted with the same strong cryptographic algorithm used for email encryption, which prevents attackers from analyzing the seed file to obtain information about the session key.

Key Identifier. PGP allows a user to have multiple public/private key pairs. These key pairs change from time to time, and many key pairs are used in different communication groups at the same time, so there is no one-to-one relationship between a user and a key pair. PGP therefore assigns a key ID to every public key of every user, which is almost unique for that user. Now suppose A wants to communicate with B using one of B's public keys; B will know the corresponding private key from the key ID. In fact, a key ID consists of the lower 64 bits of a public key, a length sufficient to guarantee the uniqueness of a key ID for one user.

Key Ring. Keys need to be organized and stored in a systematic way to make effective and efficient use of them. PGP provides a pair of data structures at each node: one stores the node's own public/private key pairs (the private key ring), and the other stores all the other users' public keys known to this node (the public key ring). Every user has both key rings; note that in the private key ring, the private key is stored in encrypted form.

Key Management Features. The key is the most important component in an encryption system, so there is no doubt about the importance of key management. In fact, this is often the most vulnerable part of the entire system and the one most easily attacked by cryptanalysts. Therefore, excellent encryption software is always careful about key management and takes a number of additional measures to protect keys. The core of PGP key management is certification and key distribution. Before any public key is used, it must be identified, and recognizing the reliability of that identification is essentially based on mutual trust relationships between people. The PGP system identifies the authenticity of a public key by imitating the process of building mutual trust in real society. PGP provides two descriptions for every public key in the public ring: authenticity and trust. Authenticity indicates the degree to which the public key is genuine, and the PGP system automatically calculates a value to measure it. Trust is the confidence placed in the received key's owner. PGP prompts the user to input a trust value for the key's owner and, according to a set of rules, calculates the appropriate authenticity parameters. The PGP system uses five levels to measure the trust in a public key's owner, which also measure the credibility of the owner's signatures.

The Use of Password. In the PGP system, a password is needed in many places, mainly to protect the private key. Because the private key is too long and
complicated, it is difficult to remember. PGP therefore encrypts the private key with a password and stores it in the key ring. Hence, a user can use the private key indirectly by remembering an easier password. Note that every PGP private key is encrypted with its own corresponding password.
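As a small illustration of the "lower 64 bits" convention mentioned in Sect. 2.3, the helper below truncates public key material to a 64-bit identifier; it is only a sketch of the rule described in the text, not of any particular OpenPGP key-ID derivation (newer key formats derive the ID from a key fingerprint instead).

def key_id(public_key_material: int) -> int:
    # Keep only the lower 64 bits of the public key material (Sect. 2.3).
    return public_key_material & 0xFFFFFFFFFFFFFFFF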

3 PGP Encrypted Mail Recovery

From Fig. 2, we can see that only the receiver's private key skrB is needed to decrypt a PGP encrypted email. According to PGP key management, skrB is stored in B's private key ring, and in order to obtain skrB we have to find the password. Thus, PGP email recovery can be reduced to cracking the receiver's password that encrypts the private key ring. Taking PGP 10.0.2 as an example, we present the method of cracking the private key password. The cracking process of the PGP private key ring password is as follows:

(1) According to the private key owner's ID, find the starting position of skrB in the private key ring; from this position we can obtain the parameters PGPskr_n, PGPskr_e, PGPskrSALT, HashSaltIterID, PGPskr_IV, and PGPskr_encrypted_key;
(2) Use the SHA-1 algorithm to process PGPskrSALT and the password string PassPhrase with HashSaltIterID iterations to generate a 32-byte key;
(3) Use the AES-256 algorithm with the key from step (2) as the working key to decrypt the string PGPskr_IV || PGPskr_encrypted_key, which outputs the string plain; then exclusive-or PGPskr_encrypted_key with plain byte by byte to obtain the private key PGPskr_key;
(4) Obtain the private key parameters in the following manner:

PGPskr_d[0~255] = PGPskr_key[2~257];
PGPskr_p[0~127] = PGPskr_key[260~387];
PGPskr_q[0~127] = PGPskr_key[390~517];
PGPskr_u[0~127] = PGPskr_key[520~647];

(5) Verify whether the following conditions hold with the parameters PGPskr_n and PGPskr_e:
PGPskr_d < PGPskr_n,
PGPskr_p ≠ 1,
PGPskr_q ≠ 1,
PGPskr_p * PGPskr_q = PGPskr_n,
PGPskr_d * PGPskr_e mod (PGPskr_p − 1) = 1,
PGPskr_d * PGPskr_e mod (PGPskr_q − 1) = 1,
PGPskr_p^(−1) mod PGPskr_q = PGPskr_u,
PGPskr_u < PGPskr_q;


Fig. 3 The process of PGP private key ring password cracking

(6) If all the conditions in step (5) are satisfied, then the password PassPhrase is correct; else go to the next step;
(7) Try another password PassPhrase and go to step (2).
The PGP private key ring password cracking process is shown in Fig. 3.
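A minimal Python sketch of the trial loop described above is given below; derive_key_sha1, aes256_decrypt, and parse_key_params are hypothetical callables standing in for steps (2)–(4), whose exact byte-level formats are not reproduced here, so only the control flow and the step-(5) RSA consistency check are spelled out.

from typing import Callable, Iterable, Optional, Tuple

def crack_private_key_password(candidates: Iterable[bytes],
                               salt: bytes, iterations: int,
                               skr_iv: bytes, skr_encrypted_key: bytes,
                               skr_n: int, skr_e: int,
                               derive_key_sha1: Callable[[bytes, bytes, int], bytes],
                               aes256_decrypt: Callable[[bytes, bytes, bytes], bytes],
                               parse_key_params: Callable[[bytes], Tuple[int, int, int, int]]
                               ) -> Optional[bytes]:
    """Try candidate passphrases until the decrypted key material satisfies the RSA relations."""
    for passphrase in candidates:
        key = derive_key_sha1(salt, passphrase, iterations)        # step (2)
        skr_key = aes256_decrypt(key, skr_iv, skr_encrypted_key)   # step (3)
        d, p, q, u = parse_key_params(skr_key)                     # step (4)
        if (d < skr_n and p != 1 and q != 1 and p * q == skr_n     # step (5)
                and (d * skr_e) % (p - 1) == 1
                and (d * skr_e) % (q - 1) == 1
                and (p * u) % q == 1 and u < q):
            return passphrase                                      # step (6): correct password
    return None                                                    # step (7): candidates exhausted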

4 Conclusion

This paper presented a cracking method for PGP encrypted email messages. We gave the details of the PGP email encryption process and of the private key password recovering procedure, and the parameters that need to be extracted in the cracking operation were also demonstrated explicitly. Password recovery becomes more effective when combined with dictionary-based social engineering techniques applied to the password choice phase. This cracking method is of great significance for the prevention and investigation of the leaking of national secrets, terrorism, and other criminal activities.


References

1. Zhang, L.: Applied research on PGP email encryption algorithm. Master Thesis, Liaoning Engineering Technology (2008)
2. Huiping, H., Yan, R., Lan, Z.: Using PGP software to realize safe sending and receiving email. Comput. Secur. 1, 52–54 (2011)
3. RFC 4880: OpenPGP Message Format. http://tools.ietf.org/html/rfc4880
4. NAI: PGP Windows User's Guide. Networks Associates Technology Inc. (2002)
5. PGP Corporation Home Page. http://www.pgp.com/
6. Symantec Corporation Home Page. http://www.symantec.com/business/theme.jsp?themeid=pgp

An Improved Lightweight Pseudonym Identity-Based Authentication Scheme on Multi-server Environment

Hao Lin, Fengtong Wen and Chunxia Du

Abstract Recently, Xue et al. proposed a lightweight dynamic pseudonym identity-based authentication and key agreement protocol for multi-server architecture (2014). They claimed that their scheme overcomes the security flaws of related schemes. In this paper, we reanalyze the security of Xue et al.'s scheme and show that their scheme cannot resist password guessing attacks. In addition, their scheme cannot achieve user anonymity and untraceability. To conquer these defects, we propose an improved and lightweight pseudonym identity-based authentication scheme for multi-server environments. Compared with Xue et al.'s scheme, our protocol not only maintains the merits, but also overcomes the security flaws.

Keywords Multi-server · Elliptic curve cryptography · Anonymity · Security flaws

1 Introduction

Nowadays, with the rapid development of computer networks, it is very important for a service-providing server to authenticate the legal identities of users. Similarly, the service-providing server also needs to be verified by the users. The multi-server environment is comprised of three parties: the users, the servers, and the registration center. As the trusted third party, the registration center administrates all the users and servers. Registered users acquire information services provided by the servers. The traditional identity-based authentication schemes for single-server architecture are not suitable for multi-server architecture: if users want to acquire services from many servers, they have to register with each server and remember the corresponding identities and passwords in a single-server system, whereas users only need to register with the registration center once in the multi-server architecture.

H. Lin · F. Wen (&) · C. Du
School of Mathematical Science, University of Jinan, Jinan 250022, China
e-mail: [email protected]; [email protected]


Compared with the single-server environment, many protocols have been designed for the multi-server environment [1–4]. In 2001, Li et al. [1] proposed a remote password authentication scheme for multi-server architecture. Following their work, many user authentication schemes [5–8] have been proposed to achieve security and high efficiency. In 2012, Li et al. [9] proposed an efficient and secure identity authentication protocol. Unfortunately, Xue et al. [10] pointed out that Li et al.'s scheme [9] is vulnerable to replay attack, internal attack, smart card forgery attack, eavesdropping attack, and masquerade attack. To fix these defects, they proposed a lightweight dynamic pseudonym identity-based authentication and key agreement protocol for multi-server architecture in 2014 [10]. We found that Xue et al.'s scheme is susceptible to password guessing attacks and cannot achieve user anonymity and untraceability. To solve these problems, we propose a revised pseudonym identity-based authentication scheme for the multi-server environment. Our scheme employs a symmetric encryption/decryption operation, the intractability of the discrete logarithm problem, and the Diffie-Hellman key exchange protocol to enhance security. The remainder of this paper is organized as follows. We first review Xue et al.'s protocol in Sect. 2. In Sect. 3, we show the security weaknesses of Xue et al.'s protocol. Our improved scheme is proposed in Sect. 4. In Sect. 5, we analyze the security of our proposed protocol. The performance of our proposed protocol is evaluated in Sect. 6. Finally, we conclude this paper in Sect. 7.

2 Review of Xue et al.'s Scheme

In this section, we briefly review the initialization and registration phase, the login phase, and the authentication and key agreement phase of Xue et al.'s scheme. We summarize the notations and their corresponding meanings in Table 1 and show the main procedures of the login and authentication phase in Fig. 1.

2.1 Initialization and Registration Phase

Step 1: The control server CS first selects two arbitrary numbers x and y.
Step 2: The user Ui selects a password Pi and a random number b. Ui computes Ai = h(b || Pi) and submits {IDi, b, Ai} to CS via a secure channel.
Step 3: After verifying the validity of Ui, CS computes PIDi = h(IDi || b), Bi = h(PIDi || x) and submits Bi to Ui via a secure channel.
Step 4: After receiving the smart card from CS, Ui computes Ci = h(IDi || Ai), Di = Bi ⊕ h(PIDi ⊕ Ai). Then Ui stores Ci, Di, h(.) and b into the smart card.
Step 5: The service providing server Sj selects an arbitrary number d and uses its identity to register with CS.


Table 1 Notations

Notation           Meaning
Ui                 User
Sj                 Service providing server
CS                 The control server
IDi                The identity of Ui
SIDj               The identity of Sj
x, y               The secret numbers only known to CS
b                  A random number chosen by Ui
d                  A random number chosen by Sj
PIDi               The pseudonym identity of Ui
PSIDj              The pseudonym identity of Sj
SK                 The session key shared among Ui, Sj, and CS
Ni1, Ni2, Ni3      Random numbers chosen by Ui, Sj, and CS
h(.)               A one-way hash function
⊕                  The bitwise XOR operation
||                 The bitwise concatenation operation

Step 6: CS computes PSIDj = h(SIDj || d), BSj = h(PSIDj || y), and submits BSj to Sj via a secure channel.
Step 7: On receiving the message from CS, Sj stores BSj into its database.

2.2 Login Phase

Step 1: Ui inserts the smart card into the terminal and inputs IDi and Pi.
Step 2: The smart card computes Ai′ = h(b || Pi), Ci′ = h(IDi || Ai′), and verifies Ci′ ?= Ci. If the two values are equal, Ui is verified by the smart card. Otherwise, the smart card terminates this session.

2.3 Authentication and Key Agreement Phase

Step 1: The user Ui selects an arbitrary number Ni1 at time TSi and computes PIDi = h(IDi || b); Bi = Di ⊕ h(PIDi ⊕ Ai); Fi = Bi ⊕ Ni1; Pij = h(Bi ⊕ h(Ni1 || SIDj || PIDi || TSi)); CIDi = IDi ⊕ h(Bi || Ni1 || TSi || "00"); Gi = b ⊕ h(Bi || Ni1 || TSi || "11"). Finally, Ui submits the message {Fi, Pij, CIDi, Gi, PIDi, TSi} to Sj.


Fig. 1 Login and authentication phase

Step 2: Upon receiving the message at time TSj, Sj checks whether TSj − TSi > ΔT holds or not. If it holds, Sj terminates this request. Otherwise, Sj chooses an arbitrary number Ni2 and computes Ji = BSj ⊕ Ni2, Li = SIDj ⊕ h(BSj || Ni2 || TSi || "00"), Ki = h(Ni2 || BSj || Pij || TSi), Mi = d ⊕ h(BSj || Ni2 || TSi || "11"). Then, Sj sends the message {Fi, Pij, CIDi, Gi, PIDi, TSi, Ji, Ki, Li, Mi, PSIDj} to CS.
Step 3: After receiving the message at time TSCS, CS checks whether TSCS − TSi > ΔT. If yes, CS terminates this request. Otherwise, CS computes BSj = h(SIDj || y), Ni2 = Ji ⊕ BSj, Ki* = h(Ni2 || BSj || Pij || TSi). If Ki* ≠ Ki, CS terminates
the request. Otherwise, CS computes Bi = h(PIDi || x), Ni1 = Fi ⊕ Bi, IDi = CIDi ⊕ h(Bi || Ni1 || TSi || "00"), SIDj = Li ⊕ h(BSj || Ni2 || TSi || "00"), Pij* = h(Bi ⊕ h(Ni1 || SIDj || PIDi || TSi)). If Pij* ≠ Pij, CS rejects the session. Otherwise, CS computes b = Gi ⊕ h(Bi || Ni1 || TSi || "11"), d = Mi ⊕ h(BSj || Ni2 || TSi || "11"), PIDi* = h(IDi || b), PSIDj* = h(SIDj || d). If PIDi* ≠ PIDi or PSIDj* ≠ PSIDj, CS rejects the session. Otherwise, Ui and Sj are verified by CS. Then CS chooses an arbitrary number Ni3 and computes Qi = h(Ni1 ⊕ Ni3), Pi = Ni1 ⊕ Ni3 ⊕ h(SIDj || Ni2 || BSj), Ri = Ni2 ⊕ Ni3 ⊕ h(IDi || Ni1 || Bi), Vi = h(Ni2 ⊕ Ni3). Finally, CS transmits {Pi, Qi, Ri, Vi} to Sj.
Step 4: On receiving {Pi, Qi, Ri, Vi}, Sj computes Ni1 ⊕ Ni3 = Pi ⊕ h(SIDj || Ni2 || BSj) and Qi* = h(Ni1 ⊕ Ni3). If Qi* ≠ Qi, Sj rejects this session. Otherwise, Ui and CS are verified by Sj. Finally, Sj sends {Ri, Vi} to Ui.
Step 5: Upon receiving the message {Ri, Vi} from Sj, Ui computes Ni2 ⊕ Ni3 = Ri ⊕ h(IDi || Ni1 || Bi) and Vi* = h(Ni2 ⊕ Ni3). If Vi* = Vi, CS and Sj are verified by Ui.

3 Cryptanalysis of Xue et al.'s Scheme

We assume that a malicious adversary has the ability to intercept and eavesdrop on the messages transmitted over the public channel. It is also possible for a malicious adversary to extract the information stored in the smart card [12–14].

3.1 User Anonymity and Traceability

If a malicious adversary has stolen a legal smart card, he/she can extract the information {Ci, Di, h(.), b}. Furthermore, the adversary can also eavesdrop on the information {Fi, Pij, CIDi, Gi, PIDi, TSi} transmitted over the public channel. Because the attacker has acquired the value of b, he/she can guess IDi′ to satisfy the equation PIDi = h(IDi′ || b). Therefore, the identity of the user cannot be protected. Moreover, PIDi is an unchanged element in the login request message {Fi, Pij, CIDi, Gi, PIDi, TSi}, so the attacker can track its origin according to PIDi.

3.2 Password Guessing Attack

The password guessing attack can be executed according to the following steps.
Step 1: After extracting the information stored in the smart card, the adversary guesses IDi′ to satisfy the equation PIDi = h(IDi′ || b).
Step 2: Once the adversary has obtained the correct value of IDi, he guesses a password Pi′ and computes Ai′ = h(b || Pi′).
Step 3: The adversary computes Ci′ = h(IDi || Ai′) = h(IDi || h(b || Pi′)). If Ci′ = Ci, the adversary has found the true value of Pi.
Because the user's identity IDi and password Pi are revealed, the adversary may masquerade as a legal user to communicate with the server.
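The following Python sketch illustrates the offline guessing loop described in Sect. 3.2, using SHA-1 as a stand-in for the scheme's one-way hash function h and byte-string concatenation for ||; the candidate identity and password dictionaries are assumptions of the example.

import hashlib
from typing import Iterable, Optional, Tuple

def h(data: bytes) -> bytes:
    # Stand-in for the one-way hash function h(.) of the scheme.
    return hashlib.sha1(data).digest()

def guess_id_and_password(pid_i: bytes, c_i: bytes, b: bytes,
                          id_candidates: Iterable[bytes],
                          pw_candidates: Iterable[bytes]) -> Optional[Tuple[bytes, bytes]]:
    # Step 1: recover IDi from PIDi = h(IDi || b) using the leaked b.
    id_i = next((cand for cand in id_candidates if h(cand + b) == pid_i), None)
    if id_i is None:
        return None
    # Steps 2-3: recover Pi from Ci = h(IDi || h(b || Pi)).
    for pw in pw_candidates:
        if h(id_i + h(b + pw)) == c_i:
            return id_i, pw
    return None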

4 Our Improved Scheme

In this paper, we propose an improved user authentication scheme to solve the flaws in Xue et al.'s protocol. Compared with Xue et al.'s scheme, our protocol is secure against many attacks and realizes user anonymity and high efficiency. Our protocol includes three phases: the registration phase, the login phase, and the authentication phase. We summarize the notations and their corresponding meanings in Table 2. The main processes of the login and authentication phase are shown in Fig. 2.

Table 2 Notations

Notation      Meaning
Ui            User
Sj            Service providing server
CS            The control server
IDi           The identity of Ui
SIDj          The identity of Sj
PWi           The password of Ui
x             The master secret key of CS
Q             The public key of CS, Q = x · P
r             A random number chosen by Ui
g             A random number chosen by Sj
PIDi          The pseudonym identity of Ui
Ni            A random number chosen by Ui
SK            Session key shared among Ui and Sj
ES(.)         Symmetric key encryption under the key S
DS(.)         Symmetric key decryption under the key S
h(.)          A one-way hash function
⊕             The bitwise XOR operation
||            The bitwise concatenation operation


Fig. 2 Login and authentication phase

In this section, we first present the basic concept of the elliptic curve cryptosystem (ECC) [11]. Let Ep(a, b) be the set of elliptic curve points over the prime field Fp. The elliptic curve equation is defined as y² = x³ + ax + b (mod p) with a, b ∈ Fp, p > 3, and 4a³ + 27b² ≠ 0 (mod p). Given an integer m ∈ Fp and a point P ∈ Ep(a, b), the scalar multiplication m · P ∈ Ep(a, b) is defined as m · P = P + P + … + P (m times). We then rely on the following mathematical problems to enhance the security of the proposed scheme.
The elliptic curve discrete logarithm problem: given P, Q ∈ Ep(a, b), it is hard to find an integer m ∈ Fp such that Q = m · P.
The computational Diffie-Hellman problem: given P, m · P, n · P ∈ Ep(a, b) for m, n ∈ Fp, it is hard to find the point (m · n) · P ∈ Ep(a, b).
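As an illustration of the scalar multiplication and Diffie-Hellman-style key agreement used throughout Sect. 4 (e.g., R = r · P and SK = r · G = g · R), here is a minimal double-and-add sketch over a toy curve; the curve parameters, base point, and scalars are made up for the example and are far too small for real use.

# Toy elliptic curve y^2 = x^3 + ax + b over F_p (parameters chosen only for illustration).
p, a, b = 97, 2, 3
O = None  # point at infinity

def point_add(P, Q):
    """Add two points on the curve (affine coordinates)."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(m, P):
    """Double-and-add computation of m * P."""
    R = O
    while m:
        if m & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        m >>= 1
    return R

# Key agreement as in Sect. 4.2: SK = r * G = g * R, with R = r * P and G = g * P.
P0 = (3, 6)   # a point on the toy curve
r, g = 11, 23
R, G = scalar_mult(r, P0), scalar_mult(g, P0)
assert scalar_mult(g, R) == scalar_mult(r, G)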

4.1 Registration Phase

Step 1: Ui first selects IDi, PWi and a random number Ni. Then Ui computes Ai = h(Ni || PWi || IDi) and sends the information {IDi, Ni, Ai} to CS via a secure channel.
Step 2: After receiving the information from Ui, CS computes PIDi = h(IDi || Ni), Bi = h(PIDi || x) ⊕ h(IDi || Ai || Q). The parameter x is the master secret key of CS and Q is the public key, where Q = x · P. Finally, CS sends {Bi, Q} to Ui via a secure channel.
Step 3: After receiving the smart card, Ui enters Bi, Ni and Q into the smart card.
Step 4: Sj sends the identity SIDj to CS via a secure channel.
Step 5: CS computes BSj = h(SIDj || x) and forwards the information BSj to Sj via a secure channel.
Step 6: On receiving the message BSj, Sj stores it into its database. The identity of Sj does not need to be protected by a pseudonym identity, since Ui already knows the identity of Sj before sending information to Sj.
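A small Python sketch of the values computed in the registration phase is given below; SHA-1 again stands in for h, strings are concatenated as bytes, and xor_bytes is a helper added for the example, so the byte encodings shown are assumptions rather than the scheme's specification.

import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for the scheme's one-way hash function.
    return hashlib.sha1(data).digest()

def xor_bytes(x: bytes, y: bytes) -> bytes:
    # Bitwise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

# User side (Step 1)
id_i, pw_i, n_i = b"alice", b"secret-pw", b"\x01\x02\x03\x04"
a_i = h(n_i + pw_i + id_i)                                    # Ai = h(Ni || PWi || IDi)

# Control server side (Step 2); x_master and q_pub represent the CS key material.
x_master, q_pub = b"cs-master-secret", b"encoded-public-key-Q"
pid_i = h(id_i + n_i)                                         # PIDi = h(IDi || Ni)
b_i = xor_bytes(h(pid_i + x_master), h(id_i + a_i + q_pub))   # Bi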

4.2 Login and Authentication Phase

Step 1: Ui inserts the smart card and inputs IDi, PWi.
Step 2: After computing Ai = h(Ni || PWi || IDi), the smart card selects a random number r and generates the timestamp TSi. Then the smart card computes R = r · P, R′ = r · Q, Ci = Bi ⊕ h(IDi || Ai || Q), PIDi = h(IDi || Ni), Fi = ER′[PIDi, Ci, SIDj, TSi], and forwards the login request message {R, Fi} to Sj.
Step 3: Sj computes Ji = EBSj[R, Fi] and sends the message {Ji, SIDj} to CS.
Step 4: Upon receiving the message from Sj, CS computes BSj = h(SIDj || x), DBSj[Ji] → {R, Fi}, R′ = x · R, DR′[Fi] → {PIDi, Ci, SIDj, TSi}. Then, CS generates the timestamp TSCS and checks whether |TSCS − TSi| < ΔT holds. If it does not hold, CS terminates the session. Otherwise, CS computes Ci* = h(PIDi || x). If Ci* = Ci, the legitimacy of Ui is verified by CS.
Step 5: CS compares the SIDj acquired over the channel with the SIDj decrypted from the information Fi. If they are equal, the validity of Sj is verified by CS. Otherwise, CS rejects the request. Finally, CS computes Ki = h(R′ || PIDi || SIDj), Li = EBSj[SIDj, Ji, Ki, R] and sends the message {Li} to Sj.
Step 6: On receiving the message from CS, Sj computes DBSj[Li] → {SIDj, Ji, Ki, R}. If SIDj and Ji are equal to the SIDj and Ji sent by Sj, Sj authenticates the validity of CS. Otherwise, Sj stops the session.


After that, Sj generates an arbitrary number g and computes G = g · P, SK = g · R, Mi = ESK[Ki, SIDj]. Finally, Sj submits the information {G, Mi} to Ui.
Step 7: Upon receiving the information from Sj, Ui computes SK = r · G, DSK[Mi] → {Ki, SIDj}, Ki* = h(R′ || PIDi || SIDj). If Ki* = Ki, Sj and CS are verified by Ui.

5 Security Analysis

In this section, we analyze the security of our proposed scheme and show that it can resist the following attacks.

5.1 User Anonymity and Untraceability

In our proposed scheme, IDi is protected by the pseudonym identity PIDi = h(IDi || Ni), where Ni is a random number selected by Ui. PIDi is encrypted under the key R′. The attacker would have to know the value of r in order to compute R′ = r · Q, and because R = r · P, recovering r from R faces the discrete logarithm problem. Moreover, because of the random number r, the login request message {R, Fi} is different in each communication. Thus, our scheme achieves user anonymity and untraceability.

5.2 Impersonation Attack

If an attacker has extracted the information {Ai, Bi, Ni, Q} stored in the smart card, he/she cannot get the value of PWi without IDi, and the attacker cannot guess the identity IDi from the equation Ci = Bi ⊕ h(IDi || Ai || Q). Because x is the master secret key of CS, CS verifies the validity of Ui by computing Ci* = h(PIDi || x). It is therefore difficult for the adversary to masquerade as a legal user.

5.3 Servers Spoofing Attack

A vicious adversary does not have the ability to masquerade as Sj to communicate with other users, even if he/she controls a legal server. If the attacker has intercepted the message {R, Fi} over the Internet, he/she uses his/her own BSj′ to encrypt this message and sends the message Ji′ and SIDj′ to CS. Notably, SIDj′ is a legal identity
of the simulated server. After receiving the message, CS will compare the SIDj acquired over the channel with the SIDj′ obtained from the decrypted information Fi. CS will then detect the mismatch and invalidate the communication.

5.4 Off-Line Password Guessing Attack

We assume that a malicious adversary has obtained the message {Ai, Bi, Ni, Q} from a legal smart card and intercepted the information {R, Fi} over the channel. The message Fi = ER′[PIDi, Ci, SIDj, TSi] is encrypted under R′ = r · Q, where r is a random number, so the attacker cannot decrypt Fi to gain the value of PIDi. He also cannot acquire the value of IDi from the equation PIDi = h(IDi || Ni). It is hard for an attacker to guess the two values IDi and PWi simultaneously from the equation Ai = h(Ni || PWi || IDi).

5.5 Replay Attack

A replay attack is a valid data transmission that is maliciously or fraudulently repeated or delayed. The timestamps TSi and TSCS are used in the authentication phase. If the messages {R′, Fi′} and {Ji′, SIDj′} are replayed messages, CS can find that |TSCS − TSi| > ΔT and reject the session.

5.6 Forward Secrecy

Forward secrecy is the assurance that previous session keys will not be leaked even if the master secret key is stolen. Suppose that the adversary has gained the master secret key x; he/she can then compute R′ = x · R. However, because of the intractability of the discrete logarithm problem, he/she cannot get r from the equation R′ = r · Q, and it remains difficult for the adversary to compute the previous session key SK = r · G = g · R.

6 Performance Comparison

We compare the security of our proposed scheme with that of the other schemes in Table 3, and in Table 4 we compare their computation cost. From Table 3, we conclude that the security of our protocol is better than that of the other two protocols.


Table 3 Security comparison

Feature                 Li et al.'s protocol   Xue et al.'s protocol   Our protocol
Untraceability          Yes                    No                      Yes
User anonymity          Yes                    No                      Yes
Replay attack           No                     Yes                     Yes
Impersonation attack    No                     No                      Yes
Internal attack         No                     Yes                     Yes

Table 4 Efficiency comparison

Feature               Li et al.'s protocol   Xue et al.'s protocol   Our protocol
Computation of Ui     12t1                   10t1                    4t1 + 3t2 + 2t3
Computation of Sj     8t1                    6t1                     2t2 + 3t3
Computation of CS     8t1                    16t1                    3t1 + t2 + 3t3

Li et al.'s protocol satisfies two out of the five criteria listed in Table 3, and Xue et al.'s protocol also satisfies two out of the five, whereas our protocol satisfies all of them. In Table 4, t1 denotes the time complexity of a hash computation, t2 the time complexity of a modular multiplication, and t3 the time complexity of a symmetric encryption/decryption operation. Although our scheme requires more operations and the related schemes are therefore more efficient, our scheme is more secure than the other schemes.

7 Conclusion

In this paper, we first reviewed Xue et al.'s scheme for multi-server architecture and enumerated some security vulnerabilities of their protocol. To overcome these defects, we proposed an improved pseudonym identity-based authentication scheme for multi-server environments. Security analysis shows that our scheme has robust security.

Acknowledgments This work is supported by the Natural Science Foundation of Shandong Province (No. ZR2013FM009).

References

1. Li, L., Lin, I.C., Hwang, M.S.: A remote authentication scheme for multi-server architecture using neural networks. IEEE Trans. Neural Netw. 12, 1498–1504 (2001)
2. Lin, I.C., Hwang, M.S., Li, L.H.: A new remote user authentication scheme for multi-server architecture. Future Gener. Comput. Syst. 19, 13–22 (2003)


3. Tsai, J.L.: Efficient multi-server authentication scheme based on one-way hash function without verification table. Comput. Secur. 27, 115–121 (2008)
4. Guo, D.L., Wen, F.T.: Analysis and improvement of a robust smart card based-authentication scheme for multi-server architecture. Wirel. Pers. Commun. 78, 475–490 (2014)
5. Wen, F.T., Li, X.L.: An improved dynamic ID-based remote user authentication with key agreement scheme. Comput. Electr. Eng. 38, 381–387 (2012)
6. He, D.B., Chen, J.H., Zhang, R.: A more secure authentication scheme for telecare medicine information systems. J. Med. Syst. 36, 1989–1995 (2012)
7. Li, C.T., Hwang, M.S.: An efficient biometrics-based remote user authentication scheme using smart cards. J. Netw. Comput. Appl. 33, 1–5 (2010)
8. Wen, F.T., Susilo, W., Yang, G.M.: A robust smart card based anonymous user authentication protocol for wireless communications. Secur. Commun. Netw. 7, 987–993 (2013)
9. Li, X., Xiong, Y.P., Ma, J., Wang, W.D.: An efficient and security dynamic identity based authentication protocol for multi-server architecture using smart cards. J. Netw. Comput. Appl. 35, 763–769 (2012)
10. Xue, K.P., Hong, P.L., Ma, C.S.: A lightweight dynamic pseudonym identity based authentication and key agreement protocol without verification tables for multi-server architecture. J. Comput. Syst. Sci. 80, 195–206 (2014)
11. Hankerson, D., Menezes, A., Vanstone, S.: Guide to Elliptic Curve Cryptography. Springer, New York (2004)
12. Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: 19th Annual International Cryptology Conference, pp. 388–397, vol. 1666 (1999)
13. Messerges, T.S., Dabbish, E.A., Sloan, R.H.: Examining smart-card security under the threat of power analysis attacks. IEEE Trans. Comput. 51, 541–552 (2002)
14. Leng, X.F.: Smart card applications and security. Inf. Secur. Tech. Rep. 14, 36–45 (2009)

Abnormal Situation Detection for Mobile Devices: Feasible Implementation of a Mobile Framework to Detect Abnormal Situations

German Lancioni and Patricio Maller

Abstract Most mobile threats are associated not with normal usage in a standard environment, but with small variations in context, usage, or configuration that make the device more vulnerable. Connecting to an untrusted network or unusual Bluetooth activity can be predecessors of an attack. Awareness of abnormal situations is key to helping users prevent potential security and privacy issues in a mobile environment. This paper presents a feasible implementation of a framework to detect abnormal situations on mobile devices, used to determine at any moment whether a situation is suspicious or unsafe.

Keywords Mobile · Security · Context aware · Android · Framework · Context sensing · Engine · Privacy

1 Introduction

Mobile devices such as smartphones and tablets are nowadays exposed to different threats, both physical and virtual. Unauthorized access and phishing in particular are problems that may harm the user even more than a virus, since with this kind of threat personal information can be stolen and used with bad intentions. Current solutions are centered on the web experience, such as detecting suspicious activity on email accounts [1]; however, there are few solutions centered on mobile applications. This paper presents a proposal to help prevent privacy violations by analyzing the device's context and detecting unusual parameters. This process is referred to as detecting abnormal situations, essentially on mobile devices.

G. Lancioni (&) · P. Maller
Intel Corporation, Intel Security Group, Córdoba, Argentina
e-mail: [email protected]
P. Maller
e-mail: [email protected]


The purpose of detecting an abnormal situation is to give mobile applications the chance to take preventive actions to protect the user's data. For example, a mobile banking application may require an extra step in the authorization process if the user is connecting from a location never visited before. In this example, the abnormal situation is characterized by variations in the expected values of the "location" attribute. In order to detect abnormal situations, it is necessary to slice the situation into specialized verticals. Examples of verticals are: location analysis, Wi-Fi access point analysis, SIM card analysis, etc. Each vertical therefore contributes a different perspective to the final verdict. Context-aware technology, which includes hardware and software sensors, helps feed the equation used to determine suspicious situations. Since the definition of normality of a situation may change from domain to domain, it is important to provide a flexible implementation that can be used by applications with different objectives and requirements. For instance, a mobile banking application might be interested in detecting when the user connects from an improbable location, while a phonebook application might be interested in detecting when the mobile's SIM card has suddenly been changed.

1.1 Scope

This paper presents a solution composed of a conceptual model and a subsequent minimum viable implementation for Android OS. The implementation includes Java APIs and is delivered as a JAR library for application developers. Finally, the implementation is tested in three different applications in order to determine the added value of the solution in terms of security.

2 The Solution

As part of the solution, a conceptual model has been defined as the Abnormal Situation Engine (ASE). An abnormal situation includes two types of analysis: (a) determining whether a situation is suspicious (e.g., the SIM card was changed) and (b) determining whether a situation is unsafe (e.g., Bluetooth active and visible [2]). Figure 1 presents these two analysis dimensions, whose composition is the basis for later determining whether a situation is out of normal parameters. The proposed model can be explained by introducing an analogy with automotive gearboxes. Each host application running the ASE will require a different configuration of the engine to detect different types of suspicious situations, as required by the application domain. Extending the metaphor in the implementation, the ASE provides a component called the Gearbox, which is in charge of attaching and detaching different Gears to meet the configuration required by the application.


Fig. 1 Abnormal situation analysis is based on two dimensions: suspiciousness and safeness

Each Gear represents a vertical (add-on) specialized in the sensing, analysis, and detection of a particular suspiciousness or unsafeness factor, such as detecting when the SIM card has been replaced. That way, the application developer is able to create a custom Gearbox instance with just the Gears that are useful to determine whether a situation is suspicious or unsafe, based on the application objective. Figure 2 exposes a high-level view of the ASE conceptual model. The ASE Analyst is in charge of executing the custom gearbox implementation and computing the results of the execution. Each Gear implements a specific interface in order to return a standard result expressing the degree of suspiciousness or unsafeness for that Gear. Finally, the ASE Analyst computes the total abnormal situation factor (suspiciousness + unsafeness values) based on the results of all the attached Gears. With this factor, the host application may be able to take actions to either protect the user or continue the normal execution of the application flow.
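The published ASE is an Android/Java JAR, but the Gear/Gearbox/Analyst composition described above can be sketched language-agnostically; the Python outline below only illustrates that pattern, and the class and method names are assumptions of the example rather than the library's actual API.

from abc import ABC, abstractmethod

class Gear(ABC):
    """A vertical add-on returning a 0-100 suspiciousness/unsafeness score."""
    @abstractmethod
    def analyze(self) -> float: ...

class SimChangeGear(Gear):
    def __init__(self, current_subscriber_id: str, stored_subscriber_id: str):
        self.current, self.stored = current_subscriber_id, stored_subscriber_id
    def analyze(self) -> float:
        # A replaced SIM card is treated as fully suspicious in this sketch.
        return 100.0 if self.current != self.stored else 0.0

class Gearbox:
    """Holds the gears attached by the host application."""
    def __init__(self):
        self.gears = []
    def attach(self, gear: Gear) -> None:
        self.gears.append(gear)

class Analyst:
    """Combines the per-gear results into the overall abnormal situation factor."""
    def run(self, gearbox: Gearbox) -> float:
        results = [gear.analyze() for gear in gearbox.gears]
        return sum(results) / len(results) if results else 0.0

gearbox = Gearbox()
gearbox.attach(SimChangeGear("8988-new", "8988-old"))
print(Analyst().run(gearbox))   # 100.0 -> the host app should require re-authorization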

3 Outline of Implementation

To prove the feasibility of the proposed model, an Android library (JAR) was created. The objective of this library is to provide situation analysis to application developers so that they can take safe decisions while accomplishing the application tasks. The main entry point of the ASE library is the Conductor class. This class is designed using the singleton design pattern and provides the main APIs that the application developer will use. Conductor provides functionality to attach and detach gears, as well as the capability to run the situation analysis process both on demand and by continuous monitoring. Figure 3 shows an outline of the library's high-level design.


Fig. 2 Abnormal situation engine conceptual view, composed of gears as the add-ons, the gearbox as the coordinator component, and the analyst as the results generator

The attached gears are represented as the Gearbox configuration that is used by the Executor component to run each of the desired gears. Since each gear implements the IGear interface, it is easy to extend the capabilities of the framework when adding new gears. Considering that the execution of the attached gears is an asynchronous process, the Executor has the capability of reporting the task progress through listeners. Finally, when the Executor finishes, the Analyzer component takes the raw results of each gear and determines the overall situation suspiciousness as the average value of all the gears' results. The result is reported through listeners. With this result, the application developer is able to decide which actions to take accordingly. A gear is a dedicated and self-contained component that implements the IGear interface. The gear is specialized in the detection of an abnormal situation under a specific vertical. For example, a Bluetooth gear checks whether the device's Bluetooth is active and how it is configured. Based on this analysis, the gear determines whether the situation (only in terms of Bluetooth) is safe or unsafe. This safeness factor is returned through the OnGearFinishListener as a percentage value. Table 1 shows the different verticals and the potential threats associated with each one, as well as the implemented Gears supported by the ASE. For example, detecting a USB data cable connection is useful to prevent brute-force PIN cracking techniques [3, 4].

Fig. 3 Library high level design

Table 1 Verticals and gears used to evaluate a situation

Vertical | Related threats (may imply) | Gear | Type of detection/analysis
Bluetooth | Bluecasing, Bluesniff, Wardriving, Redfang | Bluetooth gear | Bluetooth active, Bluetooth discoverability
USB Data cable | Unlock pattern bypass/disable, PIN crack, unauthorized data acquisition | Data cable gear | Data cable connection, ADB presence
Wi-Fi | Sniffing, unauthorized data acquisition, identity theft | AP map zone gear | Nearby access points zone analysis, type of AP security, known BSSIDs
OS Boot | Physical theft | Recent boot gear | Recent boot detection
SIM card | Unauthorized access, physical theft | SIM gear | SIM card subscriber ID change detection

3.1 AP Map Zone Gear: Strategy Example

Each gear present in the ASE contains specific logic to analyze one vertical. In the case of the access points map zone gear, it is worth highlighting the implementation details, since it is one of the most valuable gears for applications that want to add security around network transactions, such as banking or email applications. The AP map zone gear analyzes whether the device is connected to a Wi-Fi access point, the number of nearby access points that are already known, and the signal level (dBm) of each one, in order to produce a weighted list that is transformed into the suspiciousness factor result. At first, the gear executes a Wi-Fi scan to retrieve the list of nearby access points. If the Wi-Fi interface is unavailable, the gear immediately returns a zero suspiciousness factor, since the interface is not turned on. Once the list of access points is retrieved, the BSSID is used to compare each access point against a list stored in the local database. With this comparison, the algorithm determines the number of coincidences between the current list of access points and previously visited/stored access points. The comparison is used to generate a map zone composed of all the access points detected at a specific time, which virtually represents a place being visited by the user. After obtaining the current map zone and comparing it against previously visited map zones, the gear checks whether the user is currently connected to one of the access points present in the zone. If the user is connected, the map zone is saved in the local database for further reference; otherwise, the map zone is discarded, since the user is not linked to any access point in the zone. If a previously known access point is recognized again, a weight factor is increased for that access point, so that more frequently visited access points carry more weight than those rarely visited. Figure 4 shows a summary of the gear analysis based on access points.

Fig. 4 Access points map zone gear analysis diagram
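To make the zone-matching idea concrete, here is a small Python sketch of the comparison step; the scoring formula, thresholds, and data structures are assumptions of the example and not the library's actual algorithm, which also weighs signal level and the security type of the connected access point.

from typing import Dict, Optional, Set

def zone_suspiciousness(current_bssids: Set[str],
                        known_weights: Dict[str, int],
                        connected_bssid: Optional[str]) -> float:
    """Return a 0-100 suspiciousness score for the current Wi-Fi map zone."""
    if connected_bssid is None:
        return 0.0                      # Wi-Fi not in use: nothing to evaluate here
    if not current_bssids:
        return 100.0                    # connected but no scan data: treat as suspicious
    matches = current_bssids & known_weights.keys()
    coincidence_ratio = len(matches) / len(current_bssids)
    # Frequently revisited access points lower the score further.
    familiarity = sum(known_weights[b] for b in matches)
    score = max(0.0, 100.0 * (1.0 - coincidence_ratio) - familiarity)
    # Persist the zone only when the user is actually connected to it.
    if connected_bssid in current_bssids:
        for b in current_bssids:
            known_weights[b] = known_weights.get(b, 0) + 1
    return min(score, 100.0)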


Finally, based on the number of access point coincidences, the access points' weight factors, and the security type of the currently connected access point, a final suspiciousness factor is calculated as a percentage and returned to the developer through the ASE APIs.

4 Outline of Usage

The ASE requires an initial setup in order to analyze a situation. This is achieved by attaching the desired gears that will contribute to the situation analysis. The following gears were implemented so far:
• GEAR.RECENT_BOOT
• GEAR.SIM_PRESENCE
• GEAR.NEAR_ACCESS_POINTS
• GEAR.BT_PRESENCE
• GEAR.USB_DATA_CABLE_PRESENCE

The application developer decides which gears are important for the application being developed and uses the attachGears API to set up the ASE, as follows:

ASE.attachGears(gearsList);

Once the ASE is configured and there is at least one gear attached to analyze the situation, the developer can execute an on-demand analysis request by using the runGearboxAnalysis API:

ASE.runGearboxAnalysis(onAnalysisListener);

Both the analysis progress and the final results are received through the onAnalysisListener instance.

5 Results

The ASE implementation was exported as a JAR library and tested with multiple Android applications. These applications did not previously include a mechanism to verify the safeness of the situation before executing sensitive tasks. As part of the test, each application included the ASE library, created a particular gearbox configuration, and executed the engine. The objective of this test is to validate how the ASE helps different kinds of applications. Three types of applications were selected to include the ASE library. "Application 1" represents a banking application that wants to protect access to online banking. "Application 2" is a phonebook application that wants to prevent unauthorized access to private phone numbers. "Application 3" is a photo vault application that wants to protect the user's photos under abnormal conditions.


Table 2 Android applications test results

Application | Gearbox configuration | Situation description | Engine result (suspiciousness + unsafe) (%)
Application 1 | AP Map Zone + Recent Boot + SIM | Device connected to a previously known access point with 80 % of nearby APs matching. Last boot was 2 days ago, no SIM card change | 15.50
Application 2 | SIM + Recent Boot | Rebooted one minute ago and SIM card has been changed | 100
Application 3 | Bluetooth + Data cable | Bluetooth is active but not visible. Data cable is not connected | 33

Table 2 summarizes the gearbox configuration of each application and the situation suspiciousness value obtained from the ASE under the induced situation. Based on the obtained results, we can propose different resolution paths for each application. In "Application 1", the normal execution flow is the right option, since the situation analysis returned a low suspiciousness value. "Application 2", however, is indeed under a suspicious situation, since the SIM card has been replaced and the phone has been recently rebooted. Considering that, "Application 2" should prompt an authorization request in order to validate the user, or take any other action to protect the user's private data. Finally, "Application 3" is under a partially suspicious and/or unsafe situation, so one possible action may be to simply advise the user about the operational conditions before executing the application tasks.

6 Conclusions

The proposed conceptual model has been useful to define the solution design and implementation. After implementing the Android version of the ASE, we found that applications can quickly and easily extend their security-related capabilities by including the ASE library. This adds extra value that applications' users will appreciate, since privacy and sensitive information are protected under suspicious situations. In this way, applications are enabled to request an extra user authorization or verification step in order to confirm information access under abnormal conditions. Further work includes the implementation of more vertical add-ons to detect other kinds of abnormal situations. We are also interested in adapting optimal thresholds for different situations in order to determine when the application should take protective actions.


References

1. Google Support. https://support.google.com/accounts/answer/140921?hl=en
2. Bruce, P., Brian, C.: Bluesniff: The Next Wardriving Frontier. DefCon XI. https://www.defcon.org/images/defcon-11/dc-11-presentations/dc-11-Potter/dc-11-potter.pdf
3. PenTestPartners: Brute Forcing Android PINs. https://www.pentestpartners.com/blog/bruteforcing-android-pins-or-why-you-should-never-enable-adb-part-2/
4. Rehman, A.: http://www.addictivetips.com/android/how-to-bypass-disable-pattern-unlock-onandroid-via-adb-commands/

Virtual Machine Security Monitoring Method Based on Physical Memory Analysis

Shumian Yang, Lianhai Wang, Liang Ge, Shuhui Zhang and Guangqi Liu

Abstract In the cloud computing environment, the security of virtual machine systems becomes increasingly important as virtual machines are widely deployed. A virtual machine security monitoring method based on physical memory analysis is proposed to address security risks and criminal behavior in current cloud computing, computer, and mobile information terminals. A cloud security monitoring forensics system is developed based on this monitoring method, which can acquire the "memory" of each virtual host on a physical host without affecting the user experience or the running state of the virtual machines. The forensic system can quickly access critical information through memory analysis, such as process information, thread information, network information, registry information, and opened file information, and can further analyze and mine the virtual machine hard disk information. It achieves comprehensive monitoring, evidence collection, analysis, and processing, and efficiently obtains evidence of crimes in the cloud. The method has been verified on KVM and VMware Workstation and is proved to be effective and reliable. Finally, the deficiencies of the research work and the next steps are given.

Keywords Cloud computing · Cloud forensics · Security · Virtual machine · C/S · Electronic evidence · Physical memory · Cloud monitoring

S. Yang (&) · L. Wang · L. Ge · S. Zhang · G. Liu
Shandong Provincial Key Laboratory of Computer Network, Jinan 250014, People's Republic of China
e-mail: [email protected]
S. Yang · L. Wang · L. Ge · S. Zhang · G. Liu
Shandong Computer Science Center (National Supercomputer Center in Jinan), Jinan 250014, China


1 Introduction

Cloud computing has huge computing power and abundant computing resources; it is an Internet-based super-computing model characterized by on-demand service, resource virtualization, dynamic resource allocation, etc. The rapid development of cloud computing brings not only more convenience and enormous economic benefits, but is also accompanied by a large number of criminal activities. More and more malicious attackers implement attacks through cloud computing, which brings a great challenge for cloud forensics. The domestic computer forensics industry started late and its core technologies are weak, so it relies on foreign technology and products; foreign products account for more than 90 % of the market share. Especially in cloud security monitoring and forensics, there are still no effective domestic tools and products that can meet the needs of forensic work. Cloud monitoring currently has two main research areas. The first is how to obtain the resources and load conditions of a cloud computing platform; such monitoring can provide the basis for large-scale resource management and scheduling and for the maintenance of application systems in cloud computing. There is much mature research here: UC Berkeley developed Ganglia [1], the University of California, Santa Barbara (UCSB) developed NWS [2], a distributed framework which can monitor resources and loads, and Carnegie Mellon University developed the DSMon [3] system. The second area covers research such as cloud malicious code detection, virtual machine attack and intrusion monitoring, auditing, and other virtual machine security audits. For example, Xiang et al. [4] proposed deploying a monitoring tool in an isolated virtual machine to inspect the target virtual machine. Security monitoring is very helpful for solving a variety of security issues in the cloud computing environment, and has therefore become a research hotspot. Many organizations, research groups, and standardization bodies have started relevant research, and security vendors also pay attention to all kinds of cloud computing products. For example, Sun Microsystems released a series of open-source cloud computing security tools for Amazon's EC2 and S3 and for the security monitoring and auditing of private cloud platforms, providing virtual machines with significantly enhanced security protection and monitoring capabilities. Microsoft's preparatory security plan codenamed "Sydney" was intended to help business users exchange data between their servers and the Azure cloud, addressing the security of virtualization and the multi-tenant environment. In the cloud forensics environment, certain monitoring methods and forensic models already exist, but the main problems in the cloud environment are the following: traditional monitoring technology in cloud computing needs to install a monitoring program on the physical host and virtual hosts, which inevitably impacts data processing speed, bandwidth, and the user experience; moreover, the traditional means of monitoring have limited capability, since they can only monitor the cloud parameters of each hardware server, while the file system and the personalized information of cloud computing security software cannot be monitored.


In this article, the cloud computing service model is introduced into cloud monitoring forensics, and the virtual machine security problem is addressed by combining physical memory analysis technology, virtualization technology, and hard disk analysis technology.

2 Cloud Monitoring Forensics Method Based on Physical Memory Analysis

The cloud monitoring forensic method mainly monitors and obtains evidence from physical hosts and the virtual machines within them, without affecting the virtual machine users or the running state of the virtual machines, by acquiring the physical memory and virtual machine hard disk file information. From the physical memory of a virtual machine it can analyze dynamic data, which mainly refers to processes, threads, process drivers, dynamic link libraries, open network connections, currently opened files, the registry, and other volatile information [5–9], and from this information it can identify suspicious items. If further analysis is required, the hard disk file of the virtual machine is analyzed and mined in depth. Based on the physical memory, the hard disk file, and the running state of the virtual machine, the method can quickly identify network criminals and give preventive advice combined with memory feature analysis of "cloud crime".

2.1 VMware Workstation Virtualization Tools

The client runs on a physical host and can collect the physical memory information and hard disk information of the virtual machines running on that host by analyzing the nvram, vmdk, vmx, vmxf, and other file formats. The forensic flow based on VMware Workstation is shown in Fig. 1.
Hard Disk Information. VMware Workstation stores virtual disks in the .vmdk format, and the client obtains the hard disk information by analyzing the vmdk file, which contains all the data and partition information of the virtual machine disk. The method is to load the vmdk file as a logical drive, open the disk by calling CreateFile(), read the MBR (master boot record), determine the logical file system, and then read the data according to the file system format, such as NTFS. All data stored in an NTFS volume is described in the $MFT file, the Master File Table. The $MFT is composed of an array of file records; the file record size is generally fixed, usually 1 KB. The root directory information is in file record number 5 of the MFT. Using the MFT, the client obtains the specific locations of directories and documents.
Obtain and Analyze Physical Memory. When the virtual machine is running, VMware generates a memory file named after the virtual machine's uuid with the suffix .vmem. The .vmem file is the virtual machine's memory paging file, which backs up the physical memory of the running virtual machine.
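To make the disk-parsing step concrete, the following minimal Python sketch reads the MBR of a flat disk image and lists its primary partitions. The function name and the flat-image assumption are illustrative, not the tool's actual CreateFile-based code; sparse vmdk files would first have to be flattened or mounted.

import struct

SECTOR = 512

def read_mbr_partitions(image_path):
    # Read sector 0 of a flat disk image and parse the four primary
    # partition entries of the MBR partition table.
    with open(image_path, 'rb') as f:
        mbr = f.read(SECTOR)
    if mbr[510:512] != b'\x55\xaa':
        raise ValueError('missing MBR boot signature')
    partitions = []
    for i in range(4):
        entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        part_type = entry[4]                      # 0x07 indicates NTFS
        lba_start, sectors = struct.unpack('<II', entry[8:16])
        if part_type != 0:
            partitions.append({'type': part_type,
                               'offset': lba_start * SECTOR,
                               'size': sectors * SECTOR})
    return partitions

The byte offset of each partition is then the starting point for locating the $MFT and, from record 5, the root directory.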


[Fig. 1 shows the forensic flow: get the VMware version and configuration information; get the number and paths of the running VMs; process each VM in turn; parse each VM's configuration file to get the memory size, uuid, and guest system version; locate the VM's .vmdk/raw and .vmem paths; determine the guest operating system; analyze the VM's memory (process and DLL information, network information, sensitive information) and the vmdk file (files, driver information); dump required documents on demand; finally, summarize the suspicious information and the running state of the virtual machine and give preventive suggestions.]

Fig. 1 System forensics process flow

The VMX configuration file is parsed to obtain the guest operating system version, and the system chooses the corresponding physical memory analysis method based on that version.

2.2 KVM Virtualization Tools

Get Hard Disk Information. Detect the running KVM virtual machines and obtain the absolute path of each KVM hard disk file together with the corresponding VMCS (virtual machine control structure) value, then read the hard disk file information after raising the read permission. Different file formats are generated depending on the format chosen when the virtual machine was created; the raw file format was studied in this project. The raw virtual hard disk uses NTFS, which is determined by analyzing the MBR structure. The implementation process is: first, get the starting address and size of each partition by analyzing the MBR structure, then analyze the corresponding MFT structure based on the starting address of each partition. According to the starting cluster address in the MFT structure, get the virtual address of the root directory, and then traverse the directories and files from the root directory. The raw format analysis was developed with Visual Studio 2010 on Windows; the Linux client calls the executable through wine (a Windows compatibility layer for Linux). A simple flowchart is as follows: MBR → primary partition table → starting address of each partition → starting cluster address of the MFT → MFT structure → root directory → directories and files.
Obtain and Analyze Physical Memory. The model changes the traditional approach, which required installing monitoring agents inside the virtual machine. Starting from physical memory, it gets the virtual machine status information by acquiring and analyzing the virtual machine's physical memory and combines it with the virtual machine hard disk information, so as to realize anti-escape monitoring of the real-time operation of each virtual machine. This is unnoticed by the user and does not affect the running virtual machine, so it offers higher security and reliability.
Match VMCS Structure Characteristic Values. The first variable of the VMCS structure is the 32-bit revision identifier, the second is the 32-bit VMX-abort indicator, followed by the VMCS data, including the VMCS link pointer, host_cr4, host_cr3, and so on. The method can find a running virtual machine by matching these characteristic values in the host's physical memory, depending on the CPU type and the VMCS structure characteristics.
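As an illustration of the matching idea (not the actual client), the sketch below scans a raw physical-memory image for 4 KB-aligned pages whose first 32-bit word equals the CPU's VMCS revision identifier. The revision_id value is an assumption supplied by the caller (it has to be read from the host CPU), and candidate pages would still need the further field checks described above.

def scan_for_vmcs(dump_path, revision_id, page_size=4096):
    # Return the file offsets of pages that start with the given
    # 32-bit VMCS revision identifier (little-endian).
    needle = revision_id.to_bytes(4, 'little')
    hits = []
    with open(dump_path, 'rb') as f:
        offset = 0
        while True:
            chunk = f.read(page_size)
            if len(chunk) < 4:
                break
            if chunk[:4] == needle:
                hits.append(offset)
            offset += page_size
    return hits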

3 Design and Implementation of the Monitoring Forensics System

The design goal of the cloud monitoring forensic system is to analyze the physical memory of every host, the virtual machine hard disk files on the cloud servers, and the status information of the physical machines, in order to quickly identify whether a virtual machine is in an illegal state and to take appropriate warning and protective measures. In the cloud computing era, if effective security monitoring and forensics cannot be implemented for this open information infrastructure, criminal activities can become more serious. It can be said that forensics for cloud computing and information terminals is urgently needed in the field of information security and public safety. Using the above methods, we developed a cloud security monitoring forensics system. It is divided into two parts, the server and the client, whose functions are shown in Fig. 2.

[Fig. 2 shows the C/S functional model: the server side provides case management, in-depth analysis, client configuration, and display, storage, warning, and record functions; the client side polls the physical machines and virtual machines over the network, gets physical memory, parses hard disks, and performs memory analysis.]

Fig. 2 Cloud security monitoring forensics C/S functional model

3.1 Remote Communication Process

The remote communication between the client and the server is based on an agreed communication protocol. The connection is established via TCP, and on top of this an application-level session protocol between client and server is designed, covering both the communication process and the definition of the communication fields. The client-side process includes: connecting to the server, service confirmation, connection authentication, receiving a command string, and sending the summary information, the data information, and the agreed terminator through the tunnel. The server-side process includes: accepting the client connection, service confirmation, connection authentication, transmitting the command string, receiving the summary information, data information, and agreed terminator, and processing the data. The basic flow is shown in Fig. 3.
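A minimal Python sketch of the client side of this session is given below. The field framing (newline-separated fields and an <END> terminator) and the token names are illustrative assumptions, not the system's actual wire format.

import socket

TERMINATOR = b'<END>'     # hypothetical agreed terminator

def client_session(server_ip, port, auth_token, summary, data):
    # Connect, confirm the service, authenticate, receive the command
    # string, then send summary information, data and the terminator.
    with socket.create_connection((server_ip, port)) as sock:
        sock.sendall(b'HELLO\n')              # service confirmation
        sock.sendall(auth_token + b'\n')      # connection authentication
        command = sock.recv(1024)             # command string from the server
        sock.sendall(summary + b'\n')         # summary information
        sock.sendall(data)                    # data information
        sock.sendall(TERMINATOR)              # agreed terminator
    return command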


Fig. 3 System communication flow

3.2 The Cloud Monitoring Forensics System Server

The server provides many functions, including display, case management, storage, in-depth analysis, early warning, and logging. It dynamically monitors the physical and virtual machines and adds or deletes physical machine configuration information in order to monitor and obtain physical and virtual host data. The server polls each client by sending a fixed string every half hour; the client then sends a request containing the latest information on the physical machine and its virtual hosts, and the server updates its records with this data. If there are suspicious circumstances, the server determines whether further in-depth analysis is needed. If there is no warning and nothing suspicious, the system automatically takes a snapshot, which is saved to a server dedicated to evidence storage. For massive data, it supports further analysis and finally provides analysis reports. The function is shown in Fig. 4.


[Fig. 4 flow: scan the state of the VMs; get and analyze the memory of the physical machine and the VMs; send the state information; if nothing is suspicious, end; if suspicious, decide whether hard disk information is needed, send the suspicious information, receive the command, and continue the analysis combined with the hard disk information; if still suspicious, take a snapshot and save the evidence, otherwise end.]

Fig. 4 The cloud monitoring forensics system server

3.3 The Cloud Monitoring Forensics System Client

The client collects data from the cloud server cluster in real time, including the data of the physical and virtual hosts: it scans the virtual host status, acquires and analyzes the physical memory of the physical machine, and accesses the memory of the virtual hosts. If suspicious information is found, the client performs a further in-depth analysis of the virtual machine hard disk information. Finally, the collected data, authenticated with MD5, is sent through the pipeline to the cloud forensics server, which displays the information. The system does not need to be installed and is already used in the security sector.
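The integrity check mentioned above can be sketched as follows; the exact message format exchanged between client and server is not given in the paper, so the helper below only illustrates the MD5 part.

import hashlib

def md5_of_evidence(data: bytes) -> str:
    # The client sends this digest together with the collected data so the
    # server can recompute it and detect any modification in transit.
    return hashlib.md5(data).hexdigest()

# Server-side check (sketch):
# assert md5_of_evidence(received_data) == received_digest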


4 The Cloud Monitoring Forensics System

The cloud monitoring forensic system is divided into two parts, the client and the server. The client can run under Windows and Linux and was developed with Visual Studio 2010 and in C, respectively. The server runs on Windows XP, Vista, Windows 7, Windows Server 2003, and other operating systems and was developed in C# on .NET. The client only needs to run on a physical host; it can then analyze the physical memory of the host itself and of its virtual machines and can further analyze the virtual hard disk files, which effectively guards against virtual hosts running malicious software, virtual machine escape, the provision of illegal services, and other issues, and achieves comprehensive security monitoring and forensics for the entire cloud computing platform and for different information terminals.
For database connectivity, the server uses MySQL middleware as the interface between the server application and the database. MySQL and the server program are independent, and MySQL manages the database that stores the collected evidence information. To ensure the integrity and accuracy of the forensic data, the system uses TCP as the network communication protocol.
The workflow of the system is as follows. First, open the client application and wait for a connection request from the server. Then open the server program and register according to the user type, which is mainly super administrator, single manager, or general manager; different users have different permissions. After successful registration and login, create a new case, enter the transmission interface of the service, enter the server and client IP addresses, the MySQL database account and password, select a data storage location, and test whether the database connects successfully. Once the database is connected, enter the IP address of the client, establish a connection with the server, and bring up the server transmission interface shown in Fig. 5. As can be seen from the figure, the server can handle communication requests from multiple clients at the same time, so the C/S communication is a one-to-many model; in addition, the client has more than one file to transfer, so the communication between the server and a single client uses a single connection with multiple file transfers, which improves the efficiency of the C/S system.

Fig. 5 Cloud server security management interface

The discovery of illegal activity or unusual attacks is based mainly on the dynamic link library information gathered from the processes of the physical and virtual hosts. The approach is to establish a virus signature database that collects as many as possible of the assemblies and dynamic link libraries called by viruses, together with their MD5 values; this information serves as the basis for data analysis. When data needs to be analyzed, the relevant dynamic link library information collected by the client is used as the query keyword to search the virus signature database; if the database has a matching record, the current information may be abnormal and is displayed as an abnormal result. The basic process of data analysis is: (1) the unique key of each record in the target database table is used as the query criterion against the virus signature database; (2) SQL statements are used to query the virus signatures in the corresponding table data; (3) if a query result is returned, the record in the target database is an exception, otherwise the record is normal, and the abnormal results are displayed to the users. Clicking the DLL file analysis button on the left compares the DLL file information collected from multiple virtual machines with the virus signature database; the flow is shown in Fig. 6, and the result is obtained by analyzing and comparing the collected dynamic link library and process information with the virus signature database.
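A sketch of the signature lookup in steps (1)–(3), using SQLite for illustration (the system itself uses MySQL; the table and column names here are hypothetical):

import sqlite3

def is_abnormal(dll_md5, db_path='virus_signatures.db'):
    # Query the virus signature table with the DLL's MD5 value; any match
    # marks the corresponding record of the target database as an exception.
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute('SELECT 1 FROM signatures WHERE md5 = ? LIMIT 1',
                          (dll_md5,))
        return cur.fetchone() is not None
    finally:
        con.close()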

Fig. 6 Cloud monitoring forensic analysis system abnormalities

5 The Next Plan

There are two key issues in cloud computing security monitoring and forensics. First, at the physical host level, effective security monitoring of the physical host and the virtualization management system cannot yet be implemented. An attacker could start from the physical host and mount side-channel attacks on each virtual machine by controlling the physical host; an attacker could also rent a virtual machine and exploit its vulnerabilities to inject malicious code into the physical host system in order to control other hosted machines or the physical host itself. In addition, vulnerabilities in the virtualization management system may lead to incomplete isolation between virtual machines; for example, information can be obtained from other virtual machines by accessing a shared clipboard or through a stack overflow. The best solution to this problem is to run the monitor in the core layer of the physical host to watch the physical machine and the virtualization management system, but there is no research in this area. Second, at the virtual machine level, a monitor running inside the virtual machine cannot effectively block abuse and misuse of the virtual machine. The best solution would be to install the monitoring system in the virtual machine, but the vast majority of users (especially illegal users) do not allow monitoring systems to be deployed in their virtual machines, and when a monitoring system is perceived, unauthorized users may take various measures to defeat it. Such monitoring and management is therefore more difficult.

6 Conclusion

Cloud computing describes a new computing concept: it links a number of different hosts through the Internet or an intranet to provide software services, resource deployment, virtualization platforms, and so on. By studying the cloud computing model, a cloud forensics model was proposed to solve the difficult problem of forensics for the entire cloud computing platform and for different information terminals. In this paper, the cloud monitoring forensic system realizes security monitoring of the virtual machines on a cloud computing platform without affecting the user experience or the operating status of the virtual machines; for the security monitoring and forensics of information terminals it provides a complete solution; and by using physical memory analysis technology it addresses the problem of cloud computing security. The method has been verified on KVM and VMware Workstation and proved to be effective and reliable.


Acknowledgments The work is supported by Shandong Province Outstanding Young Scientists Research Award Fund Project (BS2013DX010) and Shandong Academy of Sciences Youth Fund Project (2013QN007).

References
1. Massie, M.L., Chun, B.N., Culler, D.E.: The ganglia distributed monitoring system: design, implementation, and experience. Parallel Comput. 30(7), 817–840 (2004)
2. Poladian, V., Arlan, A., Shaw, M., et al.: Leveraging resource prediction for anticipatory dynamic configuration. In: First International Conference on Self-Adaptive and Self-Organizing Systems, pp. 214–223 (2007)
3. Bearden, M., Bianchini, R.: Efficient and fault-tolerant distributed host monitoring using system-level diagnosis. In: Proceedings of the IFIP/IEEE International Conference on Distributed Platforms: Client/Server and Beyond: DCE, CORBA, ODP and Advanced Distributed Applications, pp. 159–172 (1996)
4. Xiang, G., Jin, H., Zou, D.: Virtualization-based security monitoring. J. Softw. 23(8), 2173–2187 (2012)
5. Wang, L.-H.: A method on extracting network connection information from 64-bit Windows 7 memory images. China Commun. 7(6), 44–51 (2010)
6. Zhang, S.: Exploratory study on memory analysis of Windows 7 operating system. In: 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), vol. 5 (2010)
7. Xu, L., Wang, L., Zhang, L., et al.: Acquisition of network connection status information from physical memory on Windows Vista operating system. China Commun. 7(6), 71–77 (2010)
8. Wang, L., Zhang, R., Zhang, S.: A model of computer live forensics based on physical memory analysis. In: 2009 1st International Conference on Information Science and Engineering (ICISE), pp. 4647–4649 (2009)
9. Zhang, R., Wang, L., Zhang, S.: Windows memory analysis based on KPCR. In: 2009 Fifth International Conference on Information Assurance and Security, pp. 677–680 (2009)

Password Recovery for WPA/WPA2-PSK Based on Parallel Random Search with GPU Liang Ge, Lianhai Wang, Lijuan Xu and Shumian Yang

Abstract Password recovery for WPA/WPA2-PSK is an important problem in computer forensics. It is difficult to deal with by traditional methods such as brute force, rainbow tables, and dictionaries. We give a new method based on parallel random search to solve this problem. It combines the advantages of random search, which improves the hit rate, and parallel search, which improves operating efficiency. The principle and an implementation of this method on GPU are also given. Finally, the test results show that this method can improve the speed of password search for WPA2-PSK.

Keywords Password recovery · WPA/WPA2-PSK · Parallel random search · GPU

L. Ge (&) · L. Wang · L. Xu · S. Yang
Shandong Provincial Key Laboratory of Computer Network, Shandong Computer Science Center (National Supercomputer Center in Jinan), Jinan 250014, China
e-mail: [email protected]
L. Wang e-mail: [email protected]
L. Xu e-mail: [email protected]
S. Yang e-mail: [email protected]

1 Introduction

Nowadays, with the rapid development of the mobile Internet, the wireless local area network (WLAN) plays an important role in people's lives, but at the same time illegal and criminal activities associated with WLANs emerge in an endless stream. In order to examine crime on a wireless network, we must break through the security scheme of the WLAN before computer forensics can be used to investigate [1]. WEP is the first-generation wireless encryption algorithm, but because of design defects, the WEP key can easily be cracked once enough data has been collected. To overcome this problem, new security protocols named Wi-Fi Protected Access and Wi-Fi Protected Access 2 (WPA/WPA2) were developed by the Wi-Fi Alliance to secure WLANs, and the WPA/WPA2-PSK protocols are used for WPA/WPA2 [2]. In WPA/WPA2-PSK mode, AES is usually used as the encryption algorithm and the key is generated from the password through a complex algorithm. Every password used in a WPA/WPA2-PSK network has to contain from 8 to 63 printable ASCII characters, so it is difficult to crack [3].
At present there are three methods for password recovery in the field of computer forensics: dictionary [4], brute force [5–8], and rainbow table [9]. There is also some work on WPA/WPA2-PSK password recovery. Liu and Jin [10] give a distributed method for cracking WPA/WPA2-PSK on a multi-core CPU and GPU architecture. Visan [11] gave a WPA/WPA2 password cracking method based on GPU. But most of these works, being based on brute force, need a long time to recover a password. A dictionary-based password recovery application [12] performs a large number of password cracking operations. A password dictionary is filled with familiar names, common words, and simple character sequences. It can be used for any password recovery, including WPA/WPA2-PSK, if the correct password is in the dictionary. For sequential search, the success rate of each trial is 1/N (where N is the number of passwords in the dictionary). With a large dictionary or a large number of entries in a password database, password recovery is also computationally expensive. However, random search and parallel computing are two methods to improve the efficiency. Since testing one word from the dictionary is independent of testing another, this type of analysis has a high level of parallelism and suits random search. This makes dictionary-based password recovery well suited for stochastic search [13] or parallel computing, such as MPI [14] and CUDA [15]. In this paper we give a parallel random search method for password recovery of WPA/WPA2-PSK on the CUDA platform and then analyze its efficiency in terms of time cost.
The rest of this paper is organized as follows. Section 2 presents related work on the handshake authentication protocol of WPA/WPA2-PSK, the password recovery process for WPA/WPA2-PSK, and background on GPU with CUDA. In Sect. 3, we show our parallel random search method for password recovery based on GPU. In Sect. 4 an experiment shows the validity of the algorithm. Finally, we give the conclusion of this paper.

2 Related Work

In this section we introduce the handshake authentication protocol of WPA/WPA2-PSK, the password recovery process for WPA/WPA2-PSK, and background on GPU with CUDA.

2.1 Four-Way Handshake Process of WPA/WPA2-PSK

The messages of the four-way handshake process for WPA/WPA2-PSK are encapsulated in IEEE 802.1X EAPOL-Key frames [10]. The process is shown in Fig. 1 and the details are as follows, where AP is the wireless access point, STA is the client, SNonce and ANonce are the random values of STA and AP, STA_MAC and AP_MAC are the MAC addresses of STA and AP, RSN IE is the robust security network information element, MIC is the message integrity code, and GTK is the current group temporal key of the WLAN:
(1) Message 1 (M1): AP generates ANonce and sends it to STA.
(2) Message 2 (M2): STA generates SNonce once ANonce has been received and computes the PTK by the following equation, where "||" is the concatenation operation:
PTK = prf-x(PSK, "pairwise key expansion", min(STA_MAC, AP_MAC) || max(STA_MAC, AP_MAC) || min(ANonce, SNonce) || max(ANonce, SNonce))
The PTK contains three major sections: the EAPOL Key Confirmation Key (KCK), the EAPOL Key Encryption Key (KEK), and the Temporal Key (TK). The RSN IE is sent from STA to AP, and the KCK is used for the MIC check of M2.
(3) Message 3 (M3): Once SNonce has been received by AP, the PTK is computed by the same method and is used for the MIC check of M2. If the verification fails, M2 is discarded; otherwise ANonce, the RSN IE, the MIC, the message indicating whether to install the PTK, and the encrypted GTK are sent to STA.
(4) Message 4 (M4): Once M3 has been received by STA, the PTK and GTK are loaded. Then M4 is sent to AP to indicate that the PTK and GTK have been loaded. Once M4 has been received by AP, the PTK is also loaded by AP.

Fig. 1 Four-way handshake process of WPA/WPA2-PSK

2.2 Password Recovery for WPA/WPA2-PSK

From the above process we can see that the authentication process of WPA/WPA2-PSK is in fact an authentication of the MIC. The MIC is generated from the PTK; the PTK is generated from ANonce, SNonce, SA, AA, and the PMK; and the PMK is generated from the password and the SSID. So if we know the password of the wireless router and can sniff the handshake process, we can compute the MIC. The password recovery process for WPA/WPA2-PSK is therefore as shown in Fig. 2, and the details are as follows:

Fig. 2 Password recovery for WPA/WPA2-PSK


(1) Obtain the four-way handshake packets and the SSID of the AP by monitoring the communication between AP and STA.
(2) Generate a candidate password and compute the PMK by PSK = PMK = pbkdf2_SHA1(password, SSID, length of SSID, 4096).
(3) Obtain ANonce, SNonce, AA, and SPA by analyzing handshake packets 1 and 2, and use them together with the PMK to compute MIC_KEY as follows:
PTK = SHA1-PRF(PMK, 'pairwise key expansion', Min(AA, SPA) || Max(AA, SPA) || Min(ANonce, SNonce) || Max(ANonce, SNonce));
For (i = 0; i < 128; i++) MIC_KEY[i] = PTK[i];
(4) Determine the message authentication algorithm from the key descriptor type in the EAPOL-Key frame, and use the KCK obtained in (3) to compute the MIC: MIC = HMAC_MD5 (or HMAC_SHA1)(MIC_KEY, 16, 802.1X data).
(5) If the MIC equals the MIC from handshake packet 2, terminate the algorithm and output the password; otherwise go to (2) and continue the calculation.
A minimal sketch of steps (2)–(4) is given after this list.
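The sketch below shows steps (2)–(4) in plain Python (the paper's implementation runs this on the GPU with CUDA). It assumes byte-string inputs; MIC_KEY (the KCK) is taken as the first 16 bytes of the PTK, and for the comparison in step (5) the MIC field of the captured EAPOL frame has to be zeroed before hashing.

import hashlib, hmac

def pmk_from_password(password: str, ssid: str) -> bytes:
    # Step (2): PSK = PMK = PBKDF2-HMAC-SHA1(password, SSID, 4096 iterations, 256 bits)
    return hashlib.pbkdf2_hmac('sha1', password.encode(), ssid.encode(), 4096, 32)

def ptk_from_pmk(pmk: bytes, aa: bytes, spa: bytes,
                 anonce: bytes, snonce: bytes) -> bytes:
    # Step (3): SHA1-PRF over the label and the ordered addresses/nonces,
    # iterated until 64 bytes of PTK have been produced.
    data = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    label = b'Pairwise key expansion'
    out, i = b'', 0
    while len(out) < 64:
        out += hmac.new(pmk, label + b'\x00' + data + bytes([i]), hashlib.sha1).digest()
        i += 1
    return out[:64]

def mic_of_frame(ptk: bytes, eapol_frame: bytes, use_sha1: bool = True) -> bytes:
    # Step (4): MIC = HMAC-SHA1 (WPA2) or HMAC-MD5 (WPA) over the 802.1X data,
    # keyed with MIC_KEY (the first 128 bits of the PTK), truncated to 16 bytes.
    mic_key = ptk[:16]
    digestmod = hashlib.sha1 if use_sha1 else hashlib.md5
    return hmac.new(mic_key, eapol_frame, digestmod).digest()[:16]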

2.3 Related Knowledge of GPU with CUDA

The CUDA programming model takes the CPU as the host and the GPU as a co-processor or device [15]. In this heterogeneous model the GPU must work together with the CPU and cannot work independently: logic-heavy, serial work is executed by the CPU, while the GPU focuses on highly threaded parallel processing tasks. Once the parallel part of the program is determined, and if that part is computation-intensive, this computational work can be assigned to the GPU. A kernel function is a CUDA parallel computing function running on the GPU. A kernel function alone is not a complete program; it is just one step that can be executed concurrently within the whole CUDA program. As shown in Fig. 3, the serial processing steps on the host and a series of device-side kernel-function parallel steps together form a complete CUDA program.

3 Parallel Random Search for Password Recovery Based on GPU

In this section we give the parallel random password recovery method based on GPU for WPA/WPA2-PSK. It contains two parts: random search and parallel password cracking based on GPU. First, we give the random search method.


Fig. 3 CUDA programming model

3.1 Password Recovery Based on Random Search

The basic idea of the random password cracking method is to find the key by randomly selecting candidate keys from the key space [13]. First, a random number generated by the random number generator Rand() is mapped into the key space by the map function f() to give a key ki; then the key is used to encrypt the plaintext P to obtain a ciphertext C′. If C′ equals the known ciphertext C, then ki is the sought key; otherwise the next random key ki+1 is generated. The password random search loop is as follows:

do {
    ki = f(Rand());        /* map a random number into the key space */
    C1 = Encrypt(P, ki);   /* encrypt the known plaintext with the candidate key */
} while (C1 != C);         /* stop when the known ciphertext is reproduced */
return ki;


If all the words in the key space had to be examined, the number of steps of random search in the worst case would be the same as for sequential search. Usually, however, we do not need to examine all of the words, so in theory random search is faster than sequential search, since the probability of hitting the proper key is the same for every random trial, whereas for sequential search each trial succeeds with probability 1/N (N is the size of the key space). In practical applications, random search still takes a long time when the key space is big, since generating the random selections also takes time. We therefore give a new parallel random search algorithm using CPU+GPU in the next subsection.

3.2 Parallel Random Search for Password Recovery Based on GPU

The random number generation is an important part of the parallel random search. First, we will give the parallel random number generation method. There have been many random number generation strategies. Based on the study of the existing methods, we find that the existing methods mainly use the linear congruence random number generation algorithm, namely xn+j = (aj × xn) mod m. In order to generate the random number parallel, we will use the double linear congruence method and the CPU+GPU hybrid mode of the parallel computing in this paper. The algorithm is shown as following, where a is the multiplier, M is the remainder, x0 is the initial value and P is the number of GPU cores. 1. In CPU Input: a, x0, M, P Output : Aj, xj, j=1,2,⋯,P Let A0=a; For j=1; j 1), then fill in the relevant Tij. If AS numbers of Ti1 and T1j are the same, which means that the directly linked neighbors are also the two-hop neighbors, use ε to denote it. 6. Return to AS relationship table T. The algorithm is over. Local AS inquires ASes’ relationships in AS relationship table and use it to judge whether the route information should be transmitted by neighbors.

4.4 Processing of TwoReply

First of all, we define three functions: (1) R(ASi, ASj) stands for the AS relationship between ASi and ASj; (2) Sk(m) denotes that ASk signs message m using its private key; (3) Vk(S) denotes checking signature S using the public key of ASk.


Fig. 3 Working example of TwoReply

Therefore, message m is intact if Vk(Sk(m)) = m. A tetrad <Prefix, Time, Next-Hop Receiver, Second-Hop Receiver> is used to record the prefixes of messages with no reply: Next-Hop Receiver and Second-Hop Receiver stand for the direct receiver of the routing information and the two-hop neighbor receiver, and {Prefix, Time} stands for the prefix information awaiting a reply and the waiting time. The Nonfeasance Router Score (NRS) is used to quantify the nonfeasance behavior of neighboring ASes and will be introduced in Sect. 5. The TwoReply scheme mainly consists of five parts; Fig. 3 describes an example of the scheme.
(1) Update: Without loss of generality, assume that source S advertises prefix f toward destination D. On the path, ASi advertises routing prefix f to ASi+1 based on its local strategy. According to the local relationship table, it judges whether ASi+1 needs to further advertise the prefix to the two-hop neighbor ASi+2. The Update algorithm is as follows.
Algorithm 2: Update algorithm


In BGP routing information, an effective AS path should satisfy the valley-free property. So if the relationship between the neighbor and the two-hop neighbor is s2s (or p2c), or the relationship between the local AS and the directly linked neighbor is s2s, return true. If the relationship between the neighbor and the two-hop neighbor is p2p and the relationship between the local AS and the neighbor is c2p, return true. If the relationship between the neighbor and the two-hop neighbor is c2p and the relationship between the local AS and the neighbor is c2p, return true. Otherwise, return false. If the result is true, the local AS records the tetrad, which means the neighbor should further advertise the routing message; if it is false, the local AS does not record the tetrad, which means the directly linked neighbor does not need to advertise any more.
(2) Forwarding: ASi+1 receives the routing message from ASi and judges whether it needs to further transmit it to ASi+2. If it does, it judges the subsequent update process of ASi+2 according to Algorithm 2.
(3) Reply: ASi+2 receives the routing information (f, {ASi+1, ASi, …, S}) advertised by ASi+1. If the path is longer than 2, ASi+2 sends a reply message R to ASi along the reverse route. The replying path is ASi+2 → ASi+1 → ASi, and the signature is Signature = Si+2(ASi+2, ASi+1, ASi, f).
(4) Verify: If ASi receives the reply information within the waiting time, it performs verification by Algorithm 3. If the verification passes, ASi deletes the prefix information from the tetrad; otherwise ASi regards the neighbor as a nonfeasance node.
Algorithm 3: Verify the replying information

We verify the correspondence between the destined AS and the forwarding AS in the reply information, verify the reality of the prefix, where f is the prefix of the reply information, and finally verify the signature signed by the origin AS.
(5) Abnormity determination: If a prefix message receives no relevant reply within the waiting time, the AS checks whether there is redundant route information for the same destination in the Adjacent Routing Information Base, Incoming (Adj-RIB-In) database. Because of the BGP best-route rule, if the local routing message is not the optimal one for a neighbor, the local AS will receive the optimal route sent by that neighbor. If there is such a qualified redundant route, the local AS regards the neighbor as a normal node; otherwise, it considers that there is nonfeasance behavior at the neighbor node.
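The valley-free test used in the Update step can be transcribed directly from the rules above; this is only a sketch of that check (the full Algorithms 2 and 3 are not reproduced here), with relationship labels read in the direction from the first AS to the second:

def should_two_hop_advertise(rel_local_to_neighbor, rel_neighbor_to_twohop):
    # Returns True when the neighbor is expected to advertise the route on
    # to its two-hop neighbor, i.e. when a reply must later be waited for.
    if rel_neighbor_to_twohop in ('s2s', 'p2c') or rel_local_to_neighbor == 's2s':
        return True
    if rel_neighbor_to_twohop == 'p2p' and rel_local_to_neighbor == 'c2p':
        return True
    if rel_neighbor_to_twohop == 'c2p' and rel_local_to_neighbor == 'c2p':
        return True
    return False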

5 Punishment of Nonfeasance Behavior

5.1 Punishment Algorithm

The TwoReply scheme maintains a punishment value NRS for every neighbor to quantify its nonfeasance behavior; its initial value is 100.
(1) Punishment: When a neighbor exhibits nonfeasance behavior at time t0, the NRS value is updated as $NRS_{t_0} \leftarrow NRS_{t_0} + K$, where $NRS_{t_0}$ is the NRS value at t0 and the increment factor K is 100.
(2) Award: If the nonfeasance node shows no subsequent nonfeasance behavior, its NRS value decreases exponentially with time as an award. If the punishment value at time t0 is $NRS_{t_0}$, then the value at time t is $NRS_t = NRS_{t_0} \cdot e^{-\lambda (t - t_0)}$. λ is determined by H = ln 2 / λ, so the value halves after time H.
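A small sketch of the two rules (time in the same unit as the half-life H; the function names are illustrative):

import math

def nrs_after_punishment(nrs, K=100.0):
    # Nonfeasance observed at t0: add the increment factor K.
    return nrs + K

def nrs_after_award(nrs_t0, t0, t, H):
    # No further nonfeasance: exponential decay with half-life H (lambda = ln 2 / H).
    lam = math.log(2) / H
    return nrs_t0 * math.exp(-lam * (t - t0))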

5.2 Route Selection

We extend BGP route selection to take nonfeasance behavior into account: the neighbor with the smaller NRS value is selected as the next hop, so that nonfeasance nodes and routes are avoided. This rule decreases the number of nonfeasance nodes and the amount of nonfeasance behavior. The route selection rules of TwoReply are shown in Table 2, and a small sketch of the resulting selection key follows.
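A sketch of how Table 2 can be applied as a composite sort key; the route record and its field names are hypothetical:

def selection_key(route):
    # Lower tuple wins: lowest NRS, then highest local preference (negated),
    # shortest AS path, lowest origin type, lowest MED.
    return (route['nrs'], -route['local_pref'], len(route['as_path']),
            route['origin'], route['med'])

# best_route = min(candidate_routes, key=selection_key)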

Table 2 Route selection
Step  Rule
1     Lowest NRS value
2     Highest local preference
3     Lowest AS path length
4     Lowest origin type
5     Lowest MED

6 Security Analysis

TwoReply can not only monitor nonfeasance behavior but can also detect deception by neighbors effectively.


(1) Detecting nonfeasance behavior
Theorem 1 The proposed scheme is feasible.
Based on AS certificates, we obtain the two-hop neighbors' information and judge through the AS relationships whether they are supposed to forward the information. If there is no nonfeasance behavior, the local AS should receive the reply information of the two-hop neighbor or there should be redundant route information. So this scheme can effectively detect nonfeasance behavior in BGP route forwarding.
(2) Detecting deception
To avoid being detected, a nonfeasance neighbor can try to deceive the verifier in two ways:
1. Forge reply information. A neighbor can forge the reply information of the two-hop neighbor, but because no AS can obtain another AS's private key, the verifier can detect the deception from the signature.
2. Steal reply information. A nonfeasance node may steal reply information to deceive the verifier, for example by wiretapping. In Fig. 1, if AS4 is a nonfeasance node, it does not transmit the route from AS2 to AS5; to avoid being detected, AS4 steals the information that AS5 replies to AS3. However, the field TransAS in the reply information can effectively detect this deception.

7 Performance Evaluation

TwoReply uses the DSA signature algorithm with a signature length of 320 bits. The experiments show that on a 2 GHz CPU, the DSA signing time is 2.5 ms and the verifying time is 3 ms. We use the NS2 [11] simulation tool to inspect the performance of the mechanism and use the BRITE [12] topology generator to generate topologies. Assuming that every AS has only one BGP router, the simulation parameters are set as follows: the Minimum Route Advertisement Interval (MRAI) M = 30 s, link delay ld = 0.1 ms, maximum waiting time t = 30 s. Every AS advertises two routes.

7.1 Transmission Overload

In our experiment, the size of the new message is 51 bytes, so θTwoReply = 51 × (l − 2), where θ is the increment of the UPDATE message size and l is the length of the AS_PATH. As illustrated in Fig. 4, the increase in transmission overload has little influence on BGP.


Fig. 4 Transmission overload of BGP and TwoReply

Fig. 5 Average convergence time

7.2 Convergence Time

In the TwoReply scheme, waiting for and verifying the reply information does not influence the normal BGP routing process; the most important added factor is the computation time of the reply-information signatures. The simulated convergence time is shown in Fig. 5 and is close to the BGP convergence time. In conclusion, the current inter-domain routing system can afford the extra consumption brought by TwoReply.


8 Conclusion

BGP is a critical component of the Internet's routing infrastructure and is highly vulnerable to a variety of attacks. Different approaches have been taken to address security in BGP, but none of the proposed solutions has resolved the nonfeasance behavior issue. TwoReply provides a new method to address the problem: the local AS waits for signed reply messages from neighbors two hops away and uses them to judge whether nonfeasance behavior exists. Moreover, the increase in performance cost brings little burden to the current routing system. The results show that the TwoReply mechanism improves the overall security of the inter-domain routing system while adding little burden and remaining easy to extend.
Acknowledgments This work was supported by the National High Technology Research and Development Program of China (863 Program) (No. 2013AA014701) and the National Nature Science Foundation of China (No. 61171193).

References
1. Rekhter, Y., Li, T., Hares, S.: A border gateway protocol 4 (BGP-4). RFC 4271 (2006)
2. Butler, K., Farley, T., McDaniel, P.: A survey of BGP security issues and solutions. Proc. IEEE 2010(1), 100–122 (2010)
3. Youtube hijacking: A RIPE NCC RIS case study. http://www.ripe.net/news/study-youtubehijacking.html
4. Huston, G., Rossi, M., Armitage, G.: Securing BGP: a literature survey. IEEE Commun. Surv. Tutorials 13(2), 199–222 (2011)
5. Yu, X.P., Wang, H.J.: Detecting invalid BGP routes based on AS relationships. J. Jilin Univ. 25(4), 461–464 (2007)
6. Wei, Z.H., Chen, M., Zhao, H.H.: AS relationships quick inference algorithm. J. Univ. Electron. Sci. Technol. China 39(2), 266–270 (2010)
7. Kent, S., Lynn, C., Seo, K.: Secure border gateway protocol (S-BGP). IEEE J. Sel. Areas Commun. 18(4), 582–592 (2000)
8. White, R.: Securing BGP through secure origin BGP. Internet Protoc. J. 6(3), 15–22 (2003)
9. Kranakis, E., Wan, T., Oorschot, P.C.: On interdomain routing security and pretty secure BGP (psBGP). ACM Trans. Inf. Syst. Secur. (TISSEC) 10(3), 1–41 (2007)
10. Gao, L.: On inferring autonomous system relationships in the Internet. IEEE/ACM Trans. Networking (2001)
11. The Network Simulator–ns2. http://www.isi.edu/nsnam/ns/
12. BRITE. http://www.cs.bu.edu/brite/

A Novel Routing Scheme in Three-Dimensional Wireless Sensor Networks Bang Zhang, Xingwei Wang and Min Huang

Abstract In this paper, a novel routing scheme for three-dimensional wireless sensor networks is proposed that takes the limited energy of sensor nodes into account. Based on the similarity between biological cell selection and distributed system design, a Biological cell Clustering (BC) algorithm is devised. In order to minimize energy consumption and maximize the network lifetime, an Optimal Distance Routing (ODR) algorithm is further proposed. All nodes are divided into clusters according to the BC algorithm, and the proposed ODR algorithm is used to find intracluster and intercluster routes to transfer data. The proposed BC-ODR scheme is implemented by simulation and its performance is evaluated. Simulation results show that the proposed scheme can reduce the network energy consumption and extend the network lifetime.

Keywords Three-dimensional wireless sensor networks · Routing · Energy saving · Network lifetime

B. Zhang (&) · X. Wang · M. Huang
College of Information Science and Engineering, Northeastern University, Shenyang 110004, China
e-mail: [email protected]

1 Introduction

With the wide use of three-dimensional wireless sensor networks, the study of routing algorithms has become a hot spot. Based on the basic principle of the GFG algorithm, the literature [1, 2] proposed the Greedy-Half-Greedy (GHG) algorithm and the Greedy Anti-Void Routing (GAR) algorithm, which to some extent improve the performance of GFG. Literature [3] proposed the first memoryless local geographic routing algorithm for three-dimensional wireless sensor networks, Greedy-Random-Greedy (GRG), which applies a Random Walk (RW) recovery mechanism to avoid local minima; owing to its memoryless characteristic, GRG also applies to mobile networks. Literature [4] proposed the 3D Included Angles Iteration Routing (3DIAIR) algorithm, which effectively avoids plane and space loops. Literature [5] proposed the Energy-efficient Restricted 3D Greedy Routing (ERGrd) algorithm, which makes integrated use of two parameters to save energy and effectively prevents loops, but it may run into the local minimum problem and cannot guarantee the global optimum. Literature [6] proposed the Circular Sailing Routing (CSR) algorithm, which maps the nodes of a three-dimensional network onto a sphere according to a certain mathematical transformation and uses the spherical distance based on the virtual coordinates of the nodes to establish the data transmission path, in order to reduce network congestion, balance load, and extend the network lifetime. Literature [7] proposed the PAGH, PAGO, and PAGR energy-aware routing algorithms with adjustable transmission radius; studies show that these algorithms significantly improve the data delivery ratio and reduce the influence of the local minimum phenomenon on data delivery in sparse networks. Literature [8] proposed a beacon-less energy-aware routing algorithm that, in addition to the distance to the destination node, also considers the residual energy of the current node; balancing energy consumption improves the network lifetime, but it may cause rapid depletion of some nodes' energy, which shortens the network lifetime. Literature [9] proposed the Geometric Stateless Routing (G-STAR) algorithm, which creates a tree based on node locations and can find paths dynamically.
To meet the needs of practical applications, we propose a BC-ODR scheme to find intracluster and intercluster routes for data transfer, achieving energy saving and prolonging the network lifetime.

2 Description of the Problem

2.1 Node Model

Assume that all the sensor nodes are homogeneous, i.e., they have the same set of parameters. Each sensor node has three parameters: the location coordinates V(x, y, z), the initial energy E, and the node number ID, ID ∊ {0, 1, 2, …, n}, where n is the total number of sensor nodes. All sensor nodes are divided into three categories: Ordinary Nodes V(v1, v2, v3), which sense information and send messages to the cluster head they belong to; Cluster Head Nodes S(s1, s2, s3), which receive and aggregate the data within the cluster and send them to the next cluster head; and the Base Station (or Sink Node) BS, which receives and processes the data sent by the cluster head nodes. The node model includes the perceptual model and the communication model of the sensor nodes.
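For reference, the three node parameters and the node categories can be captured in a simple record like the following; this is an illustrative structure, not part of the original simulation code.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class SensorNode:
    node_id: int                          # ID in {0, 1, ..., n}
    pos: Tuple[float, float, float]       # location coordinates (x, y, z)
    energy: float                         # initial / residual energy E
    role: str = 'ordinary'                # 'ordinary', 'cluster_head' or 'base_station'
    active: bool = True                   # used by the BC clustering sketch in Sect. 3.1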


Perceptual Model. Suppose the perceptual model of a sensor node is a sphere and all nodes have the same sensing range Rs; then the perceptual region is a sphere with O = (x, y, z) as its center and Rs as its radius. We assume a Boolean model for monitoring a target or event of interest: only when the target or event is within the perceptual radius of the sensor node does monitoring succeed, otherwise it fails. Suppose the probability that sensor node u detects the target or event of interest T is p and the position of T is M = (xm, ym, zm); the Boolean perceptual model of the sensor node is:

$p = \begin{cases} 1, & d(O, M) \le R_s \\ 0, & d(O, M) > R_s \end{cases}$   (1)

where O = (x, y, z) is the three-dimensional coordinate of sensor node u, Rs is the perceptual radius of the sensor node, and d(O, M) is the Euclidean distance between sensor node u and the position of T, as shown in Eq. (2):

$d(O, M) = \sqrt{(x - x_m)^2 + (y - y_m)^2 + (z - z_m)^2}$   (2)

Communication Model. The communication model of a sensor node is similar to the perceptual model: all nodes have the same communication radius Rc, and the communication region is a sphere whose center is (x, y, z) and whose radius is Rc. If the Euclidean distance between two nodes is not larger than the communication radius Rc, they can communicate directly, and a sensor node can only communicate with nodes within its communication radius. Suppose the position of a neighbor node v of node u is P = (xv, yv, zv); the communication probability of node u and its neighbor v is expressed as:

$p = \begin{cases} 1, & d(O, P) \le R_c \\ 0, & d(O, P) > R_c \end{cases}$   (3)

where d(O, P) represents the Euclidean distance between node u and node v:

$d(O, P) = \sqrt{(x - x_v)^2 + (y - y_v)^2 + (z - z_v)^2}$   (4)

2.2 Network Model

Assume the sensor nodes are deployed in a certain way within a three-dimensional network space based on a DT3D (3D Delaunay Triangulation) subdivision, and abstract the network model as a Unit Ball Graph (UBG) $G_{DT3D} = (V, E)$, where V is the set of nodes and E is the set of links with $E = \{(u, v) \mid u, v \in V,\ d_{uv} \le R\}$, where $d_{uv}$ is the Euclidean distance between u and v and R is the maximum of the perceptual radius and the communication radius (we assume the perceptual radius equals the communication radius). Two nodes are connected if and only if they can communicate with or sense each other directly, and any two links can intersect only at their endpoints.

2.3 Energy Model

The total energy consumption of a three-dimensional sensor network includes the sensing, computing, and communication consumption of the nodes. Since the sensing and computing consumption of a node is very small compared with its communication consumption, and since they are largely independent of the routing algorithm design [10, 11], the calculation of energy consumption only considers the energy consumption of the wireless communication module. According to the wireless communication model of Heinzelman, Chandrakasan, et al. [12], the energy consumption of a node comprises the consumption of sending and receiving data.

3 Algorithm Design

3.1 Biological Cell Clustering Algorithm

Biological cell selection usually has similarities with distributed system design. The selection of Sensory Organ Precursors (SOP) [13] is a process of cell clustering during the generation of the biological nervous system; from SOP selection one can derive an approach to selecting a Maximal Independent Set (MIS) in a network of connected processors. Suppose the total number of network nodes is n and the upper bound on the number of neighbors of any node is D, where the maximum value of D is n. The algorithm has log D stages, and every stage has M log n steps, where the constant M is defined in the literature [14]. All nodes are active initially, and there are two information exchanges in every step of every stage. In the first exchange, every active node is selected with probability Pi = 1/2^(log D − i); the selection probability increases as i increases, and a selected active node u sends data to its neighbor nodes. In the second exchange, a node that broadcast in the first exchange and did not receive a prevent message from a neighbor is added to A; at the same time it sends prevent messages to its neighbor nodes, which then become inactive. When stage i ends, every active node has at most D/2^i active neighbors. Most neighbor nodes are added to A directly or are connected with nodes in A, and as the number of active neighbors decreases a few more nodes can be added to A successfully, which finally forms the MIS.


becomes inactivity node. When the i stage ends, there are D/2i neighbor nodes of all active nodes at most. Most neighbor nodes add to A directly or connect with nodes in A, few neighbor nodes can add to A successfully with decrease of neighbor node then forms MIS finally. In order to improve the method of MIS set selection aim to actual wireless sensor network application, it proposes Biological cell clustering algorithm (BC). Here is the basic idea of this algorithm: Set threshold value of cluster head energy first and select nodes that overtop the threshold value added to the set of candidate head nodes. And then set threshold value r of node cluster, find neighbor nodes within threshold value for every node in candidate head node set. Ensure that whether the current node is active or not at first information exchange. Send information to neighbor node and modify current node state to V = 1, if the information from neighbor node in the period is received, then current node becomes inactive node and would not continue the second information exchange. At second information exchange, if current node V = 1, then send information to all active neighbor nodes that become inactive nodes; If current node did not receive information from neighbor during the send information period,then current node compose cluster with other active neighbor. In addition, in order to prevent a node belong to different cluster, it should update node information periodically. A node update active information when added to a cluster becomes inactive node, next candidate cluster head will cross it when member node is selected to avoid recurrent selection.

3.2 Optimal Distance Routing Algorithm

Basic Idea. First select the candidate node set from all neighbor nodes, then obtain the optimal next-hop distance based on the energy consumption model under ideal conditions. At the same time, to prevent the next-hop node from deviating from the destination, take into account the vertical distance from the candidate node to the line connecting the source and destination nodes, and assign different weights to the two factors so that the energy consumption of the entire path is minimized.
Main Steps
(1) Determine the candidate node set. When the link bandwidth between the current node and a neighbor node is greater than the user's requirement, that neighbor can be a candidate for the next hop. All nodes that can communicate directly with the current node and satisfy the user's bandwidth requirement compose the next-hop candidate node set.


(2) Calculate the optimal distance. In the data transmission process, the more uniform the energy consumption of the path nodes, the longer the network lifetime. Assume that under the ideal state data are transmitted along the straight path from source node S to destination node D and that the hop nodes on the path are distributed uniformly. Let dSD be the distance between the source and destination nodes, dSC the distance between the source node and the current node, M the projection of node C onto the space line SD, and dL the vertical distance from node C to the line SD. Let d be the distance between two adjacent, equally distributed nodes on the path; then according to the energy consumption model the transmission energy of a single node is:

$E_{Tx}(k, d) = E_{Tx\text{-}elec}(k, d) + E_{Tx\text{-}amp}(k, d) = E_{elec} \cdot k + \varepsilon_{amp} \cdot k \cdot d^{\alpha} = \begin{cases} E_{elec} \cdot k + \varepsilon_{fs} \cdot k \cdot d^{2}, & d < d_0 \\ E_{elec} \cdot k + \varepsilon_{mp} \cdot k \cdot d^{4}, & d \ge d_0 \end{cases}$   (5)

The reception energy of a single node is:

$E_{Rx}(k) = k \cdot E_{elec}$   (6)

The total energy consumption from the source node to the destination node is:

$E_{Path} = \frac{d_{SD}}{d}\,(E_{TX} + E_{RX}) = \frac{d_{SD}}{d}\,(2 \cdot k \cdot E_{elec} + \varepsilon_{amp} \cdot k \cdot d^{\alpha})$   (7)

E_elec, ε_amp, and k in the above equation are all independent of d, and d_SD is a fixed value once the source and destination nodes are determined. Taking the first-order derivative of E_Path with respect to d and setting $\frac{dE_{Path}}{dd} = 0$, we obtain the optimal distance d_optimal under the ideal state as follows:

$d_{optimal} = \begin{cases} \sqrt{\dfrac{2E_{elec}}{\varepsilon_{fs}}}, & d < d_0 \\ \sqrt[4]{\dfrac{2E_{elec}}{3\varepsilon_{mp}}}, & d \ge d_0 \end{cases}$   (8)

where E_elec is the energy consumption of the transmitting and receiving circuits, k is the size of the data packet, and ε_amp is the power amplification energy consumption. When the transmission distance is less than the threshold d0, α = 2 and the free-space model is adopted, i.e., ε_amp = ε_fs; when the transmission distance is greater than the threshold d0, α = 4 and the multipath fading model is adopted, i.e., ε_amp = ε_mp.


Fig. 1 Effective live nodes number

(3) Calculate the vertical distance. According to Fig. 1, let the source node coordinate be S(xs, ys, zs), the current node coordinate C(xc, yc, zc), and the destination node coordinate D(xd, yd, zd). The vector a determined by the source node and the current neighbor node is denoted by:

$\mathbf{a} = (x_s - x_c,\; y_s - y_c,\; z_s - z_c)$   (9)

The vector b determined by the source node and the destination node is denoted by:

$\mathbf{b} = (x_s - x_d,\; y_s - y_d,\; z_s - z_d)$   (10)

The space angle θ formed by the source node, the destination node, and the current neighbor node, with the source node as the vertex, has cosine:

$\cos\theta = \dfrac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|}$   (11)

From the trigonometric identity we can calculate:

$\sin\theta = \sqrt{1 - \cos^2\theta}$   (12)

The space distance d_SC between the source node and the current neighbor node is:

$d_{SC} = \sqrt{(x_s - x_c)^2 + (y_s - y_c)^2 + (z_s - z_c)^2}$   (13)

Finally, the vertical distance d_L from the current neighbor node to the line connecting the source and destination nodes is obtained:

$d_L = d_{SC} \cdot \sin\theta$   (14)


(4) Determine the next-hop node. The ODR algorithm selects the next hop according to the optimal distance with least energy consumption under the ideal state: it calculates the Euclidean distance d from the current node to every candidate node and selects the node whose distance has the minimum difference from the optimal distance d_optimal as the next hop, which guarantees the minimum energy consumption of the total path overall. In addition, the algorithm simultaneously considers the vertical distance from the candidate node to the line connecting the source and destination nodes, in case the next-hop node deviates so far from the destination that energy is wasted. Let d be the distance from the current node to a candidate node; the difference d_δ between this distance and the optimal distance is:

$d_{\delta} = \left| d - d_{optimal} \right|$   (15)

The next hop is selected according to the following value NT:

$NT = w_1 \cdot d_{\delta} + w_2 \cdot d_L$   (16)

Here, w1 and w2 are two weights whose sum is 1, d_δ is the difference between the distance from the current node to the candidate node and the optimal distance, and d_L is the vertical distance from the candidate node to the line connecting the source and destination nodes. In conclusion, the smaller d_δ is, the closer the selected next hop is to the ideal-state path and the less the total energy consumption; the smaller d_L is, the closer the candidate node is to the destination, which avoids wasting energy. Therefore, the smaller the value of NT the better, and the node with the smallest NT value in the candidate set is selected as the next hop to guarantee the minimum energy consumption of the total path globally.
(5) Form the path. The next-hop node selected each time is stored in the path set Rout in order. A selected-node set Selectednodes is maintained to avoid loops: every node can be selected only once during a path-finding process, and once a node has been selected it will not be selected as the next hop again. Moreover, if the current node cannot find a satisfactory next-hop node, the current node is popped from the path set Rout, the node with the next smallest NT value is reselected as the next hop, and the current node is tagged with state = 0 to denote that it is blocked, so that it will not be selected as a next hop later.
(6) Update the node remaining energy. Calculate the remaining energy of every node on the path according to Eq. (17), update the node energy, and execute the ISCA algorithm to reselect the cluster heads. The node energy update function is:

$E_R = E - E_{TX} - E_{RX}$   (17)


Here, E_R is the residual energy of the node, E is the node's current energy, E_TX is the node's transmission energy consumption calculated by Eq. (5), and E_RX is its reception energy consumption calculated by Eq. (6).
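To make the next-hop rule above concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of the vertical-distance computation of Eqs. (9)–(14) and the NT-based selection of Eqs. (15)–(16). The optimal distance d_opt (the ideal least-energy hop distance defined earlier in the paper) and the weights w1, w2 are inputs; the values used here are placeholders.

```python
import math

def vertical_distance(s, c, d):
    """Perpendicular distance from candidate node c to the line through
    source s and destination d, following Eqs. (9)-(14)."""
    a = [s[i] - c[i] for i in range(3)]                  # Eq. (9)
    b = [s[i] - d[i] for i in range(3)]                  # Eq. (10)
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))                # |a| = d_SC, Eq. (13)
    nb = math.sqrt(sum(x * x for x in b))
    cos_t = dot / (na * nb)                              # Eq. (11)
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t ** 2))        # Eq. (12)
    return na * sin_t                                    # d_L = d_SC * sin(theta), Eq. (14)

def select_next_hop(current, candidates, source, dest, d_opt, w1=0.5, w2=0.5):
    """Return the candidate with the smallest NT = w1*d_delta + w2*d_L."""
    def nt(c):
        d_delta = abs(math.dist(current, c) - d_opt)                     # Eq. (15)
        return w1 * d_delta + w2 * vertical_distance(source, c, dest)    # Eq. (16)
    return min(candidates, key=nt)

# Example: pick a next hop among three 3D candidate positions.
src, dst, cur = (0, 0, 0), (100, 0, 0), (20, 5, 0)
cands = [(40, 2, 0), (35, 30, 0), (60, -4, 0)]
print(select_next_hop(cur, cands, src, dst, d_opt=25.0))   # -> (40, 2, 0)
```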

3.3

Biological Cell Clustering—Optimal Distance Routing Scheme

Specific steps are as follows:

Step1: Input the network topology and record the time begin_time at which the network simulation begins.
Step2: Execute BC to cluster all nodes in the network and store the member nodes of each cluster in the set mis_rednodes.
Step3: Store the cluster head nodes in the set mis.
Step4: Intra-cluster communication: member nodes send the collected data directly to their cluster head node.
Step4.1: If the energy of the member nodes in the cluster is not 0, the cluster head counter bi < mis.size(), and the member node counter bj < mis_rednodes[bi].size(), go to Step4.2; otherwise go to Step5.
Step4.2: Each member node in the cluster calculates its distance to the cluster head and sends its data to the cluster head node in its TDMA time slot.
Step4.3: Calculate and update the residual energy of each node according to Eq. (17), and compute the total energy interE as the sum of the energy consumed by every node in the cluster (a rough sketch of this bookkeeping is given after the step list).
Step4.4: bj++; if bj < mis_rednodes[bi].size(), go to Step4.1; otherwise go to Step5.
Step5: Data fusion: the cluster head node processes the received data packets.
Step6: Routing among cluster heads: each cluster head node finds a path to the base station.
Step6.1: If the energy of the cluster head node is not 0, the cluster head counter bi < mis.size(), and the member node counter bj ≥ mis_rednodes[bi].size(), go to Step6.2; otherwise go to Step7.
Step6.2: First, calculate the difference dδ between the distance from the current node to the candidate node and the optimal distance according to Eq. (15). Second, calculate the vertical distance dL from the candidate node to the line connecting the current node and the destination node according to Eq. (14). Then select the node with the minimum NT value as the next-hop cluster head node according to Eq. (16).
Step6.3: Put the selected next-hop cluster head node into the path set Rout41 and, at the same time, into the selected-node set Selectednodes41.
Step6.4: Calculate and update the residual energy of the cluster head node according to Eq. (17); the total energy consumption of the inter-cluster path, outE, is the sum of the energy consumed by every cluster head node in the set Rout41.
Step6.5: bi++; if bi < mis.size(), go to Step6.1; otherwise go to Step7.
Step7: Calculate the residual energy of the nodes and update the information of the cluster head node set.
Step8: Judge whether the residual energy of the nodes is 0; if not, go to Step4; otherwise record the simulation end time end_time and go to Step9.
Step9: End the algorithm.
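As referenced in Step4.3, the following is a rough, runnable sketch of the intra-cluster data collection and energy bookkeeping. The first-order radio constants, packet size, and initial energy are illustrative assumptions standing in for the paper's Eqs. (5)–(6), not the authors' parameter values; the residual-energy update follows Eq. (17).

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

E_ELEC = 50e-9     # J/bit, assumed electronics cost (stand-in for Eqs. (5)-(6))
EPS_AMP = 100e-12  # J/bit/m^2, assumed amplifier cost

@dataclass
class Node:
    pos: Tuple[float, float, float]
    energy: float = 0.5            # J, assumed initial energy

def e_tx(k_bits: int, dist: float) -> float:
    return E_ELEC * k_bits + EPS_AMP * k_bits * dist * dist

def e_rx(k_bits: int) -> float:
    return E_ELEC * k_bits

def intra_cluster_round(head: Node, members: List[Node], k_bits: int = 4000) -> float:
    """Steps 4.1-4.4: every live member sends k_bits to its cluster head in its
    TDMA slot; residual energies are updated as in Eq. (17)."""
    inter_e = 0.0
    for m in members:
        if m.energy <= 0:
            continue                                  # dead members skip their slot
        d = math.dist(m.pos, head.pos)
        tx, rx = e_tx(k_bits, d), e_rx(k_bits)
        m.energy -= tx                                # member pays the transmission cost
        head.energy -= rx                             # head pays the reception cost
        inter_e += tx + rx                            # accumulate interE for this cluster
    return inter_e
```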

4 Simulation and Performance Evaluation In order to evaluate the proposed BC-ODR scheme, we compared it with the IGreedy-PAGA algorithm proposed in the literature [11] and tested them on a random topology and a uniform topology, respectively. We evaluate them using the following performance metrics: Effective Live Node number (ELN), Network Lifetime (NLT), Network Energy Consumption (NEC), and Average Node energy Consumption (ANC). The ELN refers to the number of nodes that can still send data to the base station; the ELN of BC-ODR is better than that of IGreedy, as shown in Fig. 1. The NLT refers to the duration from the moment the network starts transmitting data until it can no longer operate normally; we measure NLT in three terms: the time when the first node dies, when half of the nodes die, and when all nodes die. The NLT of BC-ODR is better than that of IGreedy, as shown in Fig. 2. The NEC refers to the total energy consumption of all data transmission paths in the network during a certain period; the NEC of BC-ODR is better than that of IGreedy, as shown in Fig. 3. The ANC refers to the ratio of the NEC to the amount of transferred data; the ANC of BC-ODR is better than that of IGreedy, as shown in Fig. 4.

Fig. 2 Network lifetime


Fig. 3 Network energy consumption

Fig. 4 Node average energy consumption

5 Conclusion In this paper, a novel routing scheme for 3D-WSNs is proposed. The related network model and energy model are devised and the BC-ODR scheme is presented. A simulated implementation has been completed, and a performance evaluation has been carried out over random and uniform topologies. Simulation results show that the BC-ODR scheme can effectively reduce network energy consumption and prolong network lifetime compared with existing work. Testing its effectiveness in practical environments, and thus improving its practicality, will be the emphasis of our future research and development work. Acknowledgments This work is supported by the National Science Foundation for Distinguished Young Scholars of China under Grant No. 61225012 and No. 71325002; the Specialized Research Fund of the Doctoral Program of Higher Education for the Priority Development Areas under Grant No. 20120042130003; and the Fundamental Research Funds for the Central Universities under Grant No. N110204003 and No. N120104001.

References

1. Liu, C., Wu, J.: Efficient routing in three dimensional ad hoc networks. In: INFOCOM 2009, pp. 2751–2755. IEEE (2009)
2. Liu, W.-J., Feng, K.-T.: Three-dimensional greedy anti-void routing for wireless sensor networks. IEEE Trans. Wireless Commun. 8(12), 5796–5800 (2009)


3. Flury, R., Wattenhofer, R.: Randomized 3D geographic routing. In: The 27th Conference on Computer Communications, INFOCOM 2008, pp. 834–842. IEEE (2008)
4. Duan, J., Li, D., Chen, W.: Geometric precluding loops and dead ends in 3D wireless sensor networks. In: Global Telecommunications Conference (GLOBECOM 2010), pp. 1–5. IEEE (2010)
5. Huang, M., Li, F., Wang, Y.: Energy-efficient restricted greedy routing for three dimensional random wireless networks. In: Lecture Notes in Computer Science, pp. 95–104. Springer, Heidelberg (2010)
6. Li, F., Chen, S., Wang, Y., et al.: Load balancing routing in three dimensional wireless networks. In: ICC '08 IEEE International Conference on Communications, pp. 3073–3077 (2008)
7. Abdallah, A.E., Fevens, T., Opatrny, J., et al.: Power-aware semi-beaconless 3D georouting algorithms using adjustable transmission range for wireless ad hoc and sensor networks. Ad Hoc Netw. 8(1), 15–29 (2010)
8. Jain, M., Mishra, M.K., Gore, M.M.: Energy aware beaconless geographical routing in three dimensional wireless sensor networks. In: ICAC 2009 Advanced Computing, pp. 122–128 (2009)
9. Sun, M.-T., Sakai, K., Benjamin, R., et al.: G-STAR: geometric stateless routing for 3D wireless sensor networks. Ad Hoc Netw. 9(3), 341–354 (2011)
10. Ramanathan, R., Rosales-Hain, R.: Topology control of multihop wireless networks using transmit power adjustment. In: Proceedings of the IEEE INFOCOM, pp. 404–413 (2000)
11. Stojmenovic, I., Lin, X.: Power aware localized routing in ad hoc networks. IEEE Trans. Parallel Distrib. Syst. 12(11), 1122–1133 (2001)
12. Heinzelman, W., Balakrishnan, H., Chandrakasan, A.: Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of the International Conference on System Sciences, pp. 3005–3014, Hawaii (2000)
13. Afek, Y., Alon, N., Barad, O., et al.: A biological solution to a fundamental distributed computing problem. Science 331, 183–185 (2011). www.sciencemag.org
14. Materials and methods are available as supporting material on Science Online

A Model of Cloud Computing-Based TDOA Location System Bohao Huang, Shuo Gu and Wei Xia

Abstract The time difference of arrival (TDOA) location technique places high demands on data analysis and computing capability, so building a robust and powerful server requires extra attention in a traditional TDOA system. Ordinary servers are usually unsatisfactory because of the limited number of receivers and clients they can support simultaneously. This paper therefore proposes a new TDOA system that takes advantage of cloud computing technology to offset this shortage in data processing capability. The experiments in this paper show that implementing cloud computing technology in a TDOA location system is meaningful and promising.

Keywords TDOA · Cloud computing · Client · Android

1 Introduction The time difference of arrival (TDOA) algorithm is a widely used locating algorithm: the difference in the arrival time of a signal at each receiver is measured to resolve the target's position [1]. A traditional TDOA system employs a server to take responsibility for data transmission and computation, but because of the huge quantity of signal data, one server can manage only a limited number of receivers simultaneously, and improving the system's performance under multi-client-request conditions requires extra attention to the server's architecture. Cloud computing is an emerging technology that provides users with access to a shared pool of configurable computing resources with only minimal management effort or service-provider interaction [2].

B. Huang (&)  S. Gu  W. Xia College of Electronic Engineering, University of Electronic Science and Technology of China, Xiyuan Ave, Chengdu 2006, Sichuan, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_108


Its powerful data processing ability and convenient deployment procedures [3] make it well suited to replace the ordinary server in a TDOA system. This paper provides a model of a cloud computing-based TDOA system, and its feasibility and efficiency are discussed.

2 TDOA System Model We assume that a TDOA system consists of N receivers (i = 1, …, N), M clients (j = 1, …, M), and a server [4]. Users have no direct access to the receivers, and data from the receivers cannot be transmitted to users immediately; the server is responsible for relaying data and instructions between them (Fig. 1). During a user's locating request, the client is launched first, and the locating parameters, including the order numbers of the receivers involved in this request (e.g., n = a, b, c with a, b, c ≤ N), are transmitted to the server through the internet after the user confirms the instructions. The server then relays the instructions to the corresponding receivers when it receives the locating parameters from the client. The corresponding receivers start to work as soon as the instructions are received, and the signal data are sent to the server once captured; in this case, the signal data come from receivers a, b, and c. The server uses a built-in algorithm to analyze the signal data and resolve the target's position. The result is then delivered to the original client and is eventually presented to the user by the client. The prototype system in this paper adopts the Chan locating algorithm [5] to resolve the target's position; details of the Chan algorithm can be found in Ref. [5]. One of the problems of this system is the limited number of receivers that can be employed in a single locating request, due to the limited data processing ability of the server when analyzing each receiver's signal data. Moreover, the increase in the server's response time when dealing with simultaneous locating requests from multiple clients is far from satisfactory. Cloud computing is therefore implemented in this system to increase the data processing ability and to improve the response time under multi-client locating requests. Fig. 1 System model of TDOA location


Fig. 2 Architecture of the cloud computing-based system

3 Cloud Computing-Based TDOA System 3.1

Architecture of the System

In the cloud computing-based TDOA system, the cloud replaces the server of the traditional TDOA system and is responsible for managing receivers and for relaying and processing data. In this system, the cloud can deal with signal data from K (1 ≤ K ≤ N) receivers and can process up to P (1 ≤ P ≤ M) locating requests with no significant increase in response time. The purpose of the cloud computing-based TDOA system is to make K and P larger than their counterparts in the traditional TDOA system, so that a better service can be provided to users. Unlike the limited storage ability of an ordinary server, a cloud platform is equipped with hundreds or thousands of gigabytes of storage space [6]. To take full advantage of this mass storage, the signal data are stored in the cloud so that users can carry out further research on them (Fig. 2).

3.2

Design of Cloud

The cloud links users and receivers by relaying data and instructions between clients and the corresponding receivers. In this system, the cloud fulfills all the functions of the server in the traditional TDOA system, with better performance and additional storage ability. The TCP protocol is used in the cloud to connect clients and receivers. A typical single locating request at the cloud starts when it receives the locating parameters (instructions) from the client. The cloud then extracts the parameters from the received package and relays them to the corresponding receivers (e.g., receivers a, b, and c).


Cloud storage is a basic function offered by cloud service providers. The prototype introduced in this paper is built on Sina App Engine (SAE) and adopts its Storage service to provide distributed file storage [7] (Fig. 3). SAE provides APIs for reading and writing files and for retrieving file information and indexes; the APIs used are listed in Table 1. After the cloud receives signal data from a receiver, the program uses the get_bucket API to obtain an instance of the bucket and puts the signal data into a newly created file using the put_object API. The file name is determined by the order number of the corresponding receiver, for example receiver1.dat. Before the cloud launches the TDOA algorithm, the stored signal files are read using the get_object_contents API; the user's instructions sent to the cloud at the beginning determine which signal files to read. Once the computation is finished, the results are sent back to the client immediately, which ends the cloud's duty in a single locating request.

3.3

Design of Android Client

The client is the interface through which users manipulate the system, deploy receivers, and issue locating requests. The form of the client can be diversified, and a cross-platform application serves users' needs best in practice. In this paper, an Android application is discussed as the client.


Fig. 3 Diagram of the cloud

Table 1 API used in SAE Storage service

Name: get_bucket(bucket)
Parameter: bucket: the bucket's name
Return: an instance of the bucket class

Name: put_object(obj, contents, content_type=None, content_encoding=None, metadata=None)
Parameter: obj: the object's name; contents: the object's content; content_type: the object's MIME type; content_encoding: the object's encoding type; metadata: the object's metadata

Name: get_object_contents(obj, chunk_size=None)
Parameter: obj: the object's name; chunk_size: used when iteration over the content in chunk_size-sized pieces is needed
Return: a tuple containing the file's contents
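The storage flow described above can be sketched roughly as follows. Only the three calls named in Table 1 are taken from the paper; the import path, the bucket name, and the assumption that put_object and get_object_contents are methods of the bucket instance returned by get_bucket are guesses that should be checked against the SAE Python documentation [7].

```python
# Hedged sketch of the cloud-side storage flow; see the caveats above.
import sae.storage as storage          # assumed module path for the SAE Storage service

BUCKET_NAME = "tdoa-signals"           # assumed bucket name

def store_receiver_signal(receiver_order, signal_bytes):
    """Store one receiver's captured signal as receiver<N>.dat (cf. Sect. 3.2)."""
    bucket = storage.get_bucket(BUCKET_NAME)
    bucket.put_object("receiver%d.dat" % receiver_order, signal_bytes,
                      content_type="application/octet-stream")

def load_signals_for_request(receiver_orders):
    """Read back the signal files selected by the user's instructions, before
    the TDOA (Chan) algorithm is launched."""
    bucket = storage.get_bucket(BUCKET_NAME)
    return [bucket.get_object_contents("receiver%d.dat" % n) for n in receiver_orders]
```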


Fig. 4 Diagram of the Android application

The major functions of the client are to upload data to the cloud, to present the location result to users, and to give users access to set up locating parameters such as the signal's bandwidth and central frequency and the receiver's sample number. The diagram of the application is shown in Fig. 4. After the application launches, the user is prompted to set up the locating parameters, including bandwidth, central frequency, sample number, and the desired receivers. After the user finishes setting these parameters and submits the request, the application packages the data using the HTTP protocol and sends them to the cloud, provided that all input parameters pass the validity check. The application then waits for the cloud's response and presents the location result to the user. For a friendlier and more complete presentation of the locating result, showing the target's and receivers' positions on a map is necessary; in this prototype, the Baidu SDK [8] is used to present positions on the map. For further development of the system, a user login activity can be added to the client. This would be meaningful because multi-client requests are unavoidable in this system, and adding a user-management function enables more features; for example, a user's locating request records and results could be stored in the cloud.
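As a rough illustration of the client-to-cloud exchange described above (shown in Python rather than Android Java for brevity), the sketch below posts the locating parameters over HTTP and waits for the result. The endpoint URL and the JSON field names are illustrative assumptions, not the prototype's actual interface.

```python
import json
import urllib.request

def send_locating_request(bandwidth_hz, central_freq_hz, sample_number, receivers):
    """POST the locating parameters to the cloud and return its response."""
    params = {
        "bandwidth": bandwidth_hz,
        "central_frequency": central_freq_hz,
        "sample_number": sample_number,
        "receivers": receivers,                      # e.g. [1, 2, 3]: receiver order numbers
    }
    req = urllib.request.Request(
        "http://example-tdoa.sinaapp.com/locate",    # hypothetical cloud endpoint
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))   # target position computed by the cloud
```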

4 Results 4.1

Development Environment of the Experiment

The development environment of the cloud1: Sina App Engine with Python 2.7.3.
The development environment of the Android application: Android ADT v22.0.1-685705, Baidu SDK v3.0.0, minimum SDK 14, target SDK 18.

1 In this experiment, for the convenience of simulation, the signal data are stored in the cell phone in advance and are sent from the client to the cloud, instead of being relayed from the receivers to the cloud.


Fig. 5 User interface of android application

4.2

Result of the Prototype System

The left figure of Fig. 5 is the launcher activity, where the user sets up the locating parameters. Once the confirm button is pressed, the activity checks the completeness and validity of the input values, and the application moves to the second activity once the check is passed. The center figure of Fig. 5 is the second activity: it shows the user the progress of the connection to the cloud in real time and finally presents the locating result. The user can press the "show in map" button to start the next activity and view the target's position on the map. From the right figure of Fig. 5 we can see that the receivers' and target's positions are shown clearly on the map.2 In fact, the locating algorithm can also compute the altitude of the target, so a 3D map would give a better presentation; however, due to lack of time, only a 2D map is realized in this prototype.

4.3

Result of the Multi-client Test

To test the prototype's ability to deal with the multi-client situation, an experiment is designed to simulate the real condition.

2 The result computed by the locating algorithm is a position relative to the reference receiver; a transformation to WGS coordinates needs to be performed before the application shows the target's position on the map.
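As a rough illustration of this transformation step, the sketch below converts a small east/north offset relative to the reference receiver into WGS-84 latitude/longitude using a flat-earth approximation; the axis convention and reference coordinates are assumptions, and for long baselines or altitude a proper geodetic (e.g., ENU-to-geodetic) conversion should be used instead.

```python
import math

R_EARTH = 6378137.0   # WGS-84 equatorial radius in metres

def local_offset_to_wgs84(east_m, north_m, ref_lat_deg, ref_lon_deg):
    """Approximate lat/lon of a point given its local east/north offset from the
    reference receiver; valid only for short baselines."""
    dlat = math.degrees(north_m / R_EARTH)
    dlon = math.degrees(east_m / (R_EARTH * math.cos(math.radians(ref_lat_deg))))
    return ref_lat_deg + dlat, ref_lon_deg + dlon

# Example: a target 500 m east and 1200 m north of a reference receiver near Chengdu.
print(local_offset_to_wgs84(500.0, 1200.0, 30.67, 104.06))
```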


Fig. 6 Comparison of response time between cloud and server for different numbers of simultaneous locating requests

The experiment records the cloud's response time as the number of simultaneous locating requests increases. The signal data are stored in the cloud in advance, instead of being uploaded from the cell phone, to avoid the influence of differences in network upload speed. The time is recorded before and after running the location algorithm, and the difference is used to measure the prototype's ability to respond to multi-client requests. In this experiment, the signal data come from six receivers in a real condition. The experiment is repeated fifty times for each number of locating requests, and the reported response time is the average. The result of the experiment is shown in Fig. 6. It can be observed from Fig. 6 that the cloud's response time shows no significant increase as the number of locating requests grows. In contrast, the response time of the server increases sharply as the number of locating requests grows, and the server cannot respond to more than 8 simultaneous locating requests. Because Python (employed in the cloud) is less efficient than C++ (employed in the server), the server does have an advantage when dealing with fewer than 3 simultaneous locating requests; however, as the number of simultaneous requests increases, the cloud performs better at maintaining its response time.
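A minimal sketch of this measurement procedure is shown below; run_location_algorithm stands for the cloud's built-in (Chan-based) location routine and is passed in as a placeholder.

```python
import time

def average_response_time(run_location_algorithm, signal_files, repetitions=50):
    """Time the location algorithm only (data already stored in the cloud),
    averaged over the given number of repetitions."""
    total = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        run_location_algorithm(signal_files)     # compute the target position
        total += time.perf_counter() - start
    return total / repetitions
```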

5 Conclusion and Further Work This paper proposes a model of a cloud computing-based TDOA location system to offset the shortage of data analysis and processing ability of the traditional TDOA system's server and to spare researchers the effort of building a robust and powerful server. The cloud computing-based TDOA system uses the cloud to replace the server: the cloud connects receivers and clients through the internet, and signal data and instructions are packaged using the HTTP protocol. Users use the client to set up the receivers' parameters, and the cloud adopts a built-in algorithm to compute the target's position from the signal data received from the working receivers. The purpose of this system is to increase the maximum supported number of receivers and clients, and the experiment results show that employing cloud computing technology in a TDOA system indeed improves the system's performance in responding to multi-client requests. In this prototype, the map used in the Android application can only represent a 2D configuration, whereas the receivers' and target's altitudes should also be included; a 3D map is therefore necessary for better presentation. It can also be seen from the experiment results that the response time of a single locating request is around 7 s; to reduce the average response time, the implementation of the location algorithm should be improved. In addition, for practical usage, a user login/logout system is indispensable for managing different users' requests. The prototype is an early vision of the cloud computing-based TDOA system, and further work still needs to be done to improve and perfect it.

References 1. Zhao, H.Z., Wang, G.L., Xie, L.G.: Research on passive localization estimate algorithm of TDOA. Mod. Defense Technol. 35(1), 76–82 (2009) 2. National Institute of Standards and Technology. http://www.nist.gov/itl/cloud/ 3. Li, Q., Zheng, X.: Research survey of cloud computing. J. Comput. Sci. 38(4), 32–37 (2011) 4. Song, J.X.: Research on Time Difference Localization Algorithm and Timing Information Transmission Implementation. Chengdu, University of Electronic Science and Technology of China 5. Chan, Y.T., Ho, K.C.: A simple and efficient estimator for hyperbolic location. IEEE Transactions on Signal Processing 42(8), 1905–1915 (1994) 6. Luo, J.Z., Jin, J.H., Song, A.B., Dong, F.: Cloud computing: architecture and key technologies. J. Commun. 37(7), 4–19 (2011) 7. Sina App Engine’s documents center. http://sae.sina.com.cn/doc/python/storage.html#id1 8. Baidu SDK guide and introduction. http://developer.baidu.com/map/wiki/index.php?title= androidsdk/guide/introduction

Position-Based Unicast Routing Protocols for Mobile Ad Hoc Networks Using the Concept of Blacklisting Muhammad Aman, Asfandyar Khan, Azween Abdullah and Israr Ullah

Abstract Routing is the backbone of any computer network, and mobile ad hoc networks (MANETs) are no exception to this rule. Position-based routing protocols are a special kind of routing protocol that uses position information to route packets from source to destination. These protocols are considered the most scalable solution, as they do not cause flooding in the whole network. Although their performance is better than that of most of their competitors, their recovery procedures create some burden on the network. Taking inspiration from different design principles adopted by the community, we propose a new, simple position-based routing protocol using the concept of blacklisting. The goal of our protocol is to reduce the load involved in the recovery procedure and make it relatively simpler. Simulation results show that our proposed protocol performs much better than its competitors.

Keywords Ad hoc networks · Routing protocol · Blacklisting

1 Introduction Ad hoc Networks are one of the most hot research areas in these days. As they require no fixed centralized infrastructure, so they can easily be deployed at low cost. Ad hoc networks are mainly used in disasters, military, and rescue operations. Soon they will be commonly used in industry. Ad hoc networks have low cost, easy M. Aman  A. Khan (&) University of Science and Technology, Bannu, Pakistan e-mail: [email protected] A. Abdullah Taylors University, Subang jaya, Kuala lumpur, Malaysia I. Ullah National University of Computer and Emerging Sciences, (NUCES), Islamabad, Pakistan © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_109


deployment, and robust nature, but at the same time there are many challenges on the way, which do not allow these networks to be common and popular. Some of the challenges are nodes frequent mobility, limited resources (battery and processing), and low bandwidth. Researchers around the world have designed different solution to cope with these challenges. Of course there is no One-for-All solution, every solution has some pros and cons. Routing protocols are considered as the backbone of the Ad hoc networks, and they have two main categories (a) pro-active protocols in which every node keeps routing information about every other node in the network. They offer instant data forwarding but cause tremendous overhead to keep updated routing information. (b) Reactive routing protocols discover the path on-demand to reduce the overhead but it cause delay in initiating the transmission and route discovery. Position-based routing protocols use location information to avoid un-necessary flooding in the network; rather communication is directed toward the destination. Data is greedily forwarded toward the destination which often get stuck in local minima but their recovery procedures often demands complex planarization and transformation algorithms with local topological information. In our proposed protocol, we try to simplify the recovery procedure and provide the same efficiency with relatively low overhead using the concept of blacklisting. The rest of the paper is organized as Sect. 2 presents a brief discussion about routing protocols, particularly position-based routing protocols, their assumptions and classification. In Sect. 3, working and features of our proposed protocol are discussed. Section 4 is reserved for experiments and results. Finally, this paper is concluded with future direction.

2 Ad Hoc Routing Protocols The design of the routing protocol is a crucial part of ad hoc networks, as routing protocols play a vital role in the efficient utilization of the available resources. These protocols can be categorized mainly into two types: proactive and reactive. Proactive protocols keep route information in advance so that it can readily be used when required for transmission; however, too much network traffic is often generated to maintain these routes under node mobility, even when no data is transmitted. Some popular proactive routing protocols are given in [1–3]. Reactive protocols discover routes from source to destination on demand, so this overhead is only incurred when some data has to be transmitted. The route discovery procedure is considered the most costly part of reactive protocols, often causing undesirable delays and sometimes leading to a search to infinity because of loops or an unreachable destination; this makes reactive protocols a poor choice for real-time transmission and QoS-support scenarios. Some popular reactive routing protocols are [4, 5]. If we could somehow reduce the overhead involved in route maintenance in proactive protocols, they would surely be the best choice for ad hoc routing. This is the main theme of this work, and how it can be done is briefly discussed in the coming sections.


Our goal is to design a hybrid-like protocol that does not require a search for the destination when a node wants to send data (as in proactive protocols), while at the same time not causing too much overhead on the network to maintain routes under node mobility (as in reactive protocols). To achieve this goal, the use of a global positioning system (GPS) device is very helpful. There are several ad hoc routing protocols that use location information for destination route discovery. Most of them are reactive in nature and try to find the estimated current location of the destination on demand with the help of its last known position, speed, direction, and sequence of motion. Some protocols divide the area under observation into grids/zones [9], such that all nodes in a grid/zone can easily be reached once a route to at least one node in that particular grid/zone is available. Some protocols designate one or multiple nodes per grid/zone as core nodes, and all communication with the rest of the nodes of that zone is routed through these core nodes. Some of the most popular location-based protocols are given in [6–8, 10–13]. One overall plus point of these protocols is that they are considered the most scalable; on the other side, as they have no pre-calculated routes and rely on estimation, they often incur delays in route discovery.

2.1

Position-Based Routing Protocols

In this paper, we focus on different unicast protocols designed for ad hoc networks that are based on location information. There may be many different possible solutions to a problem, and each of them can be the best for a particular scenario. These solutions are proposed and designed by different researchers in the form of protocols. For the sake of brevity, we restrict ourselves to a select few of them: this paper presents typical protocols selected from classes of similar approaches that reflect the state of the art of research on location-based routing in mobile ad hoc networks. Our main focus is on the technique and approach used by different authors. Before going into the details, in the next sub-section we present some assumptions for these protocols. Assumptions for position-based protocols: there are three main assumptions for position-based routing protocols. The first and most important one is that all nodes can determine their own position, which can be done through GPS or any other triangulation technique; the position can be real or virtual. The second assumption is that a node can determine the location of its neighbors through packet exchange. The final assumption states that nodes can determine the position of the destination, which can be done through a grid location service or distributed hash tables. We do not focus in this paper on how these assumptions are met in practice, but there is a whole lot of information on how they are possible. In the next section, we discuss the main categories of position-based protocols and some representative protocols.


2.2


Classification of Position-Based Routing Protocols

These protocols can be classified in a number of ways, as done by different researchers in their survey reports. Inspired by those classifications, we define four main classes, into one or other of which every protocol defined so far will fall. The four main classes are as follows:

(1) Basic position-based routing algorithms: Greedy, Compass, etc.
(2) Restricted directional flooding: LAR, DREAM, etc.
(3) Right-hand routing algorithms: FACE-I, FACE-II, AFR, etc.
(4) Hybrid/hierarchical position-based routing algorithms: ZHLS, Grid, GPSR, etc.

3 Proposed Routing Protocol The objective of this work is to design a simple routing protocol that suits the conditions of ad hoc networks well. The greedy protocol is truly the best protocol if there are no local minima: it is very easy to understand and implement and has very low computation overhead. In ad hoc networks it chooses the best available path from source to destination, but all of its goodness is spoiled when it gets stuck in a local minimum. In real-world ad hoc networks, local minima cannot be avoided because of frequent node mobility, so the next best option is to employ some recovery mechanism in the greedy algorithm to make it resilient enough to recover from local minima. GPSR, GOAFR, etc. are examples of such protocols based on greedy routing. Although the performance of these protocols is better than that of most of their competitors, their recovery procedures create some burden on the network. The goal of our protocol is to reduce the load involved in the recovery procedure and make it relatively simpler. In the next section, we present a brief description of our protocol.

3.1

Working of Proposed Protocol

Packet Forwarding As stated earlier, our proposed protocol is based on the greedy technique: every node forwards the data to one of its neighbors that is closer to the destination. It also has its own technique for recovery from local minima, which is discussed in the coming sections. The protocol is on-demand: when a node receives a packet for the first time, it broadcasts a local hello packet to obtain its neighbors' information. After receiving all the neighbor reply packets, the node chooses the neighbor that is closest to the destination. This on-demand hello packet reduces the control overhead on the network, as the exchange happens only along the route from the source to the destination.


Fig. 1 The basic working of our proposed protocol

This phenomenon is illustrated in Fig. 1. Suppose node A wants to send some data to node G. First, A sends hello packets to its neighbor nodes B, C, and D. In reply, these nodes send back their current locations to node A. After obtaining the location information from its neighbors, node A chooses the node that is closest to G, i.e., C, and forwards the packet to it. The same procedure is followed at every node during the first transmission.

3.2

Path Establishment

In our protocol, there is no explicit route-request process for establishing a route from the source to the destination. Packets are immediately forwarded toward the destination, and the path is established during the first transmission. During this first transmission, every node has to query its neighbors for updated location information, and the path followed by the first packet is stored in the packet header until it reaches the destination. The destination passes this packet to the upper layer and sends back a reply packet, which follows the same route in reverse order. The reply packet can be piggybacked on the ACK packet in the case of TCP connections. Nodes receiving the reply packet make a route entry in their routing tables for the same destination. Before the route entry is made, every node applies an optimization procedure in order to eliminate unnecessary nodes from the path.

3.3

Path Optimization.

The path followed by the reply is shown in Fig. 2. In this case, when node H receives the reply packet from node K, it checks whether any node beyond its stored next hop F toward the source/destination is included in the path and is also among its neighbors. Nodes keep the neighbor information until they receive the awaited reply packet. Suppose that the path includes a node E which is also in the neighbor list of H; then the next-hop entry at H for the same source will be E instead of F.

Fig. 2 An optimization scenario and how it will be performed


Thus F is removed from the path even though it was used on the forward path. This optimization covers a limitation of the greedy protocol. Recovery from Dead Ends. Our protocol is not pure greedy; rather, it is more like geographical distance routing: a node forwards the packet to the destination if it is in range, or otherwise to the neighbor closest to the destination, provided that neighbor is closer than the node itself. If a node has no neighbor that is closer to the destination than itself, the node is considered a dead end; for example, in Fig. 3 node F has no neighbor other than its previous node E. In this case, recovery from dead ends is done using the concept of blacklisting. A BLACKLIST is a list of disapproved nodes to which a node will not forward a packet even though they may be closest to the destination. Node F sends the packet back to node E with the BLACK flag set. When node E receives a packet with the BLACK flag set, it puts that node in its BLACKLIST.

Fig. 3 How the proposed protocol recovers from a dead end in Scenario I


Thus, effectively, every node chooses its next-hop node among its neighbors excluding the blacklisted nodes. To the best of our knowledge, this concept is completely new and has not been used by any protocol so far. Moreover, GPSR would initiate FACE routing here, assuming that a path is available on either side of the dead end, and chooses only the left side of the dead end to discover the route; such a purely anti-clockwise traversal might lead the packet into a deeper dead end and hence take too much time to recover. The proposed protocol addresses this scenario and performs traversal in both directions, left and right. This enables our protocol to quickly discover a route from the source to the destination and to recover from dead ends as well.
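A minimal, runnable sketch of this forwarding-with-blacklisting behaviour is given below, assuming static 2D node positions and symmetric links; it omits the hello/reply exchange, the path optimization, and timing details, so it illustrates the idea rather than the full protocol.

```python
import math

def next_hop(current, dest, pos, nbrs, blacklist):
    """Greedy choice: the non-blacklisted neighbor closest to dest, accepted only
    if it is closer than the current node itself; None signals a dead end."""
    best, best_d = None, math.dist(pos[current], pos[dest])
    for n in nbrs[current]:
        if n in blacklist[current]:
            continue
        d = math.dist(pos[n], pos[dest])
        if d < best_d:
            best, best_d = n, d
    return best

def forward(src, dest, pos, nbrs):
    """Forward a packet greedily; a dead-end node bounces the packet back with the
    BLACK flag, so the previous hop blacklists it and tries another neighbor."""
    blacklist = {n: set() for n in pos}
    path, current = [src], src
    while current != dest:
        nxt = next_hop(current, dest, pos, nbrs, blacklist)
        if nxt is None:
            if len(path) == 1:
                return None                      # no route found from the source
            dead = path.pop()                    # packet comes back with BLACK flag set
            blacklist[path[-1]].add(dead)        # previous node blacklists the dead end
            current = path[-1]
            continue
        path.append(nxt)
        current = nxt
    return path

# Tiny example: F looks closest to G but is a dead end, so E blacklists it and uses C.
pos = {"A": (0, 0), "E": (2, 0), "F": (5, -2), "C": (3, -2), "G": (6, -3)}
nbrs = {"A": ["E"], "E": ["A", "F", "C"], "F": ["E"], "C": ["E", "G"], "G": ["C"]}
print(forward("A", "G", pos, nbrs))              # -> ['A', 'E', 'C', 'G']
```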

4 Results and Discussion We evaluated the performance of our proposed protocol in NS-2 by creating ad hoc network scenarios. Results are reported with the TCP protocol used at the transport layer and 200 mobile nodes moving in a rectangular area of 2,400 × 800 m². Mobility files are generated using the BonnMotion tool. Nodes move according to the random waypoint mobility model, with node speed selected from a uniform distribution between 1 m/s (walking speed) and a maximum of 20 m/s (the speed of moving cars). We used the default settings of the MAC and physical layers. The reported results are the simple average of 5 independent simulation runs, to eliminate any stochastic elements present in the environment or protocol. The simulation time is set to 100 s. We compared our proposed protocol with two state-of-the-art routing protocols, AODV and DSR, using their default implementations available in ns-2 without any modifications. In the following sections, we briefly discuss the performance metrics.

4.1

Performance Metrics

Packet delivery ratio (PDR): the ratio of the total number of packets successfully received at the destinations to the total number of packets sent by all sources in the network.
Throughput: the number of bits received at all destinations in the network per unit time (seconds).
Delay: the difference between the time a packet is sent by the source and the time it is received at the destination.
Average energy used: the average amount of energy used by a node in the network during the total simulation time.
Network life: the average percentage of battery remaining at the nodes.
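These metrics reduce to simple ratios over the trace data; the small sketch below shows how they might be computed from per-packet send/receive records (the field names are illustrative, not those of the NS-2 trace format).

```python
def packet_delivery_ratio(received, sent):
    return received / sent

def throughput_bps(bits_received, duration_s):
    return bits_received / duration_s

def average_delay(send_times, recv_times):
    """send_times/recv_times are dicts keyed by packet id; only delivered
    packets (present in recv_times) contribute to the average."""
    delays = [recv_times[p] - send_times[p] for p in recv_times]
    return sum(delays) / len(delays)

# Example: 950 of 1000 packets delivered, 3.8 Mbit received in a 100 s run.
print(packet_delivery_ratio(950, 1000), throughput_bps(3.8e6, 100.0))
```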


4.2


Results with Varying Node Mobility

The purpose of this set of experiments was to evaluate the protocols' ability to adapt to a changing environment and frequent link failures. In these experiments, the CBR was fixed at 64 kbps with zero pause time, i.e., nodes are constantly moving. The PDR of all protocols remains unaffected by the increase in node mobility, as shown in Fig. 4a. This is due to TCP at the transport layer: with increasing node velocity the frequency of path failures increases, but as a result the TCP congestion control algorithm slows down its packet transmission rate, which helps avoid packet loss. The PDR of AODV is comparatively low due to its reactive nature, i.e., on every packet loss a complete new path-rediscovery sequence is initiated, which further results in network congestion and packet collisions. From Fig. 4b it is evident that the throughput of all three protocols decreases with increasing node speed.

Fig. 4 Results w.r.t varying node mobility


This is expected because increased node mobility causes frequent link failures, which activate the TCP congestion control mechanism to reduce the packet rate. Our proposed protocol performs better due to its dynamic path re-establishment feature. Packet delay increases with node velocity, as depicted in Fig. 4c. Our proposed protocol maintains the minimum delay thanks to its path optimization and local path repair features; moreover, data packets are sent immediately and there are no explicit route-discovery packets such as the RREQ in AODV. The energy consumed per KB of transmitted data increases for all protocols as node mobility increases (see Fig. 4d), because throughput decreases at higher mobility, so less data is transmitted for the same amount of energy. Our proposed protocol's energy expenditure is low because of its limited broadcasting; it does not use network-wide broadcasting for route discovery. Unexpectedly, network lifetime increases with node velocity, as shown in Fig. 4e: with higher mobility the throughput decreases, meaning that overall network utilization for data transmission decreases, which results in a longer network life. Our proposed protocol's network life is lower than that of the others because its relatively higher throughput consumes more energy and decreases network life.

4.3

Delay Statistics

Figure 5 shows detailed delay statistics, in tabulated form, for the simulations performed with varying node mobility. We report the average, median, standard deviation, minimum, and maximum values of delay (in milliseconds), along with the 70th, 80th, and 90th percentiles; this information gives an insight into the delay spread. Overall, the standard deviation of DSR's delay is high compared with the other protocols. The respective percentile values of delay for DSR show that only 10–20 percent of the delay values are extremely high, which inflates the standard deviation and average values of delay. The delay values of AODV and our proposed protocol are uniformly distributed.

5 Conclusion and Future Work The proposed protocol performs well under moderate mobility conditions. It follows the best available path and has very low control overhead, which results in an increase in overall network lifetime. It has the ability to escape from local minima and a local route recovery procedure. Its ancestor LAR reduces the control overhead by about 1/4 using directional flooding, while this protocol has even lower overhead, as beaconing is done on demand and only along the route followed by the packet. DREAM is proactive, which provides low latency, but the communication overhead involved is too high and even grows exponentially in high-mobility environments.


Fig. 5 Delay statistics with varying node mobility

GPSR and GOAFR can recover from local minima, but their recovery procedures demand complex planarization and transformation algorithms together with local topological information. Our proposed protocol provides the same efficiency with relatively low overhead. Under high-mobility conditions, techniques such as path optimization and local path recovery backfire and the protocol fails to deliver maximum performance. Currently, we have compared the performance of our proposed protocol with AODV and DSR. In the future, we plan to implement protocols such as LAR and GPSR; it would be interesting to analyze performance against them. Moreover, we currently focus on only one parameter, i.e., the distance to the destination. After successful implementation of this protocol, we intend to include other metrics such as delay, energy, and path stability, which will make it even more robust and scalable.


References 1. Murthy, S., Garcia-Luna-Aceves, J.J.: An efficient routing protocol for wireless networks, ACM Mobile Netw. App. J. 183–97, Special Issue on Routing in Mobile Communication Networks (1996) 2. Perkins, C.E., Bhagwat, P.: Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. ACM Comput. Commun. Rev. 24(4), 234–244 (1994). ACM SIGCOMM’94 3. Pei, G., Gerla, M., Chen, T.-W.: Fisheye state routing in mobile ad hoc networks. In Proceedings of the 2000 ICDCS Workshops, pp. D71-D78. Taipei, Taiwan, Apr. 2000 4. Johnson, D., Maltz, D.A.: Dynamic source routing in ad hoc wireless networks, in mobile computing. In: Imielinski, T., Korth, H. (eds.), Kluwer Acad. Publ. (1996) 5. Perkins, C.E., Royer, E.M.: Ad hoc on demand distance vector routing. In: Proceedings of the Second IEEE Workshop on Mobile Computing Systems and Applications, WMCSA’99, pp. 90–100 (1999) 6. Ko, Y.B., Vaidya, N.H.: Location-aided routing (LAR) in mobile ad hoc networks. In: Proceedings of MOBICOM 1998, pp. 66–75; Wireless Networks 6(4), July 2000, pp. 307–321 7. Basagni, S., Chlamtac, I., Syrotiuk, V.R., Woodward, B.A.: A distance routing effect algorithm for mobility (DREAM). In: Proceedings of MOBICOM, pp. 76–84 (1998) 8. Li, J., Jannotti, J., De Couto, D.S.J., Karger, D., Morris, R.: A scalable location service for geographic ad hoc routing. In: Proceedings of MOBI-COM’2000, Boston, MA, USA (2000) 9. Haas, Z.J., Pearlman, M.R: The zone routing protocol (ZRP) for ad hoc networks. IETF Internet draft, Aug 1998 10. Karp, B., Kung, H.T.: GPSR: Greedy perimeter stateless routing for wireless networks. In: Proceedings of MOBICOM, pp. 243–254, Aug 2000 11. Jain, R., Puri, A., Sengupta, R.: Geographical routing using partial information for wireless ad hoc networks. IEEE Personal Communications, pp. 48–57, Feb 2001 12. Stojmenovic, I., Lin, X.: Loop-free hybrid single-path/flooding routing algorithms with guaranteed delivery for wireless networks. IEEE Trans. Parallel Distrib. Syst. 12(10), 1023– 1032 (2001) 13. Liao, W.-H., Tseng, Y.-C., Sheu, J.-P.: Grid: A fully location-aware routing protocol for mobile ad hoc networks. Telecommun. Syst. 18(1), 37–60 (2001)

Cloud Computing: Models, Services, Utility, Advantages, Security Issues, and Prototype Ikechukwu Nwobodo

Abstract Cloud computing has evolved tremendously from recent advances in virtualization technology, resulting in the utilization, enhancement, and transformation of computing toward on-demand and pay-per-use service models. The advent of the internet in the early 1990s up to the present day has helped facilitate the era of ubiquitous computing, leading to the transition from parallel, distributed, and grid computing to, presently, cloud computing. Cloud computing has ushered in an environment where different computing resources, such as applications, networks, and storage, can be shared via the internet infrastructure by users anytime, anywhere, without constraints on hardware or resource requirements. This paper explores cloud computing deployment models, services, utility classes, advantages, security issues, and a prototype.

Keywords Cloud computing · IaaS · SaaS · PaaS · VM · VMM · SCVMM

1 Introduction Cloud computing is a novel internet-based technology used to deliver data, application, and storage infrastructures to users. It is a recent trend in IT that shifts computing from the traditional client-server network into a data center structure [7]. Cloud computing delivers computation, data access, and storage services without the user needing to know the physical location or system configuration of the cloud data. The term cloud computing has attracted various connotations. In some instances it refers to grid computing, meaning a method of acquiring more computing resources or specialized computing hardware. In most cases it refers to software as a service, where access to third-party software, or to services that run the user's applications, is provided. To a wider audience it means a total computing infrastructure in which users' computing requirements are provided, managed, and monitored by cloud providers or a third party [10]. Cloud computing is a representation of commoditized services delivered much like traditional utilities such as electricity, water, telephone, and gas. Users in such a model access services based only on their needs, regardless of how the services are delivered or where they are hosted [5]. Several computing paradigms, such as cluster, grid, and now cloud computing, have promised to deliver the vision of utility computing. The contemporary model represents an infrastructure in which users and enterprises can access services such as applications, networks, and storage via the internet, on demand, from anywhere in the globe. Consequently, cloud computing innovation is rapidly moving toward developing unlimited software for users or enterprises to utilize or access as a service, independent of the underlying hosting structure, instead of installing and running it on their own computers. Cloud computing infrastructures are made up of large data centers maintained and monitored around the clock by cloud providers. This has made cloud computing an extension of the paradigm in which the capabilities of enterprise applications are exposed as powerful services that can be accessed via the internet infrastructure. The cost associated with the traditional provision of in-house network services has made the adoption of cloud computing an inevitable alternative for eliminating, or where possible reducing, overhead spending. Users need to demand that service delivery guarantees be enshrined in SLAs with cloud providers, since a cloud application may be a core or critical part of the enterprise's business operation [5]. The most important objective of cloud computing is to properly utilize distributed resources, combining them to achieve reasonable output and, in essence, to be able to resolve major computational problems. Clearly understanding cloud computing is a massive challenge: the existence of varying levels of support, with different levels of service from different providers all using the same term cloud to describe and market their products and services, makes the public perception of cloud computing technology extremely cloudy [10]. In simplest terms, cloud computing entails a modern and innovative process of offering a variety of computing services, including application virtualization, presentation software virtualization, server infrastructure virtualization, mobile application virtualization, desktop virtualization, user state virtualization, hardware, and total platforms or computing environments, as depicted in the cloud computing diagram in Fig. 1.

I. Nwobodo (&)
Computing and Engineering, School of Architecture, University of East London, Dockland Campus, London, UK
e-mail: [email protected]
© Springer India 2016
Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_110

1.1

History

There are a number of historical antecedents to cloud computing. The underlying concept dates back to 1960, when John McCarthy expressed the opinion that computation would in future be organized as a public utility [16].


Fig. 1 Cloud computing

The cloud computing characteristic was also brought into the wider domain in 1966 by Douglas Parkhill in his book "The Challenge of the Computer Utility" [7]. Retrospectively, the term cloud was ascribed to the world of telecommunications, where virtual private network services started to be offered with comparable quality of service at a considerably cheaper rate. Recently, cloud implementation has attracted huge interest from large organizations: in 2006 Amazon launched Amazon Web Services (AWS), and IBM and Google commenced huge research projects in the cloud. The first open-source cloud platform for private cloud deployment was Eucalyptus [7].

2 Cloud Computing Models and Services Cloud computing deployment models and services can be categorized based on the type of resource deployment process, architectural integration, and service delivery form, as shown in Table 1. The deployment models are public cloud, private cloud, hybrid cloud, and community cloud, while the cloud services include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).


Table 1 Cloud computing deployment models and services

Deployment models:
• Public cloud
• Private cloud
• Hybrid cloud
• Community cloud

Cloud computing services:

Infrastructure as a service: storage, database, servers.
Examples: Rackspace Cloud, Amazon EC2 and S3, Terremark Enterprise, Microsoft

Platform as a service: application development, reporting, security.
Examples: Facebook, Google AppEngine, Microsoft

Software as a service: tools for production (e.g., office), emails, collaboration.
Examples: Google Mail, SaleForce.com CRM, Google Doc.

2.1

Cloud Computing Model

Public cloud is a cloud infrastructure used to provide various types of cloud services to different categories of users via the internet by third-party providers. Users of public cloud services pay on a utilization basis, popularly termed pay-per-use on demand. The provisioned infrastructure can be utilized simultaneously by multiple users and is managed and operated by cloud service providers or a third party. Good examples of applications offered in the public cloud include web-based email (Gmail, etc.), social media (LinkedIn, Facebook, Twitter, etc.), and online multimedia hosting/storage such as Snapfish, YouTube, Flickr, and iTunes [3, 14]. Private cloud is an infrastructure made exclusively available to enterprise users or explicit customers and can be managed by the organization itself or by a third-party cloud provider. This model is suitable for government departments or SMEs with multiple business branches. Unlike the public cloud, the private cloud is deployed predominantly within the organization's own data center environment, giving enterprises the advantages of full resource control and security [7]. Hybrid cloud is a cloud infrastructure comprising two or more deployment types that remain unique entities but are connected via proprietary or standardized technology that enables application and data portability [16]. Community cloud is a cloud infrastructure used for shared infrastructure provisioning between organizations and can be managed by them or by a third-party provider. The physical infrastructure of the cloud can be local or remote to the businesses it supports, and it can be owned by one or more of the participating enterprises or by a third party. Community cloud represents a potential solution for entities that share common bonds; for instance, different departments within a large organization or different government entities can use a community cloud, irrespective of the fact that the tasks performed by the participating enterprises are completely independent of each other. Other community cloud instances are where a parent enterprise and its subsidiaries share a community cloud, or where a partnership or strategic alliance between independent organizations shares one [3].

2.1.1

Cloud Computing Services

IaaS provides users with remote access to infrastructure resources provisioned in the cloud, such as storage, processing, and network. The client does not need to purchase the required infrastructure but is charged on a pay-per-use basis, and the resources are scalable depending on customer demand; examples are Rack Space Cloud, Amazon EC2 and S3, Terremark Enterprise Cloud, and Microsoft [14]. PaaS offers a complete computing platform as a cloud service, including design, application hosting, and testing. Users can utilize the service platform to develop web applications without having to install software or extra hardware on their local PCs, e.g., Facebook, Google AppEngine, and Microsoft [7]. SaaS is a method in which entire applications are hosted and offered via the internet on demand. Users of SaaS are not required to install or purchase applications or software on their local systems; instead they pay per use or on demand via the cloud provider. This mode of service eliminates software licensing and maintenance issues for users. In addition, it enables rapid deployment of software, thereby reducing the cost associated with software deployment and planning for enterprises and users, e.g., Google Mail, SaleForce.com CRM, and Google Doc. [4].

3 Cloud Computing Characteristics (1) On-demand self-service. On-demand self-service enables cloud users to provision the resources they need automatically, at any time, without requiring human interaction with the cloud service provider. Resources in this category range from servers, networks, and storage to software and development platforms [9, 13]. (2) Resource pooling. A provider's computing resources are pooled to serve multiple users simultaneously in a multi-tenant model, with diverse virtual and physical resources dynamically assigned and reassigned according to user requirements. The motivation behind resource pooling is economies of scale and resource specialization. The model effectively anonymizes the physical computing resources and creates a sense of location independence: cloud users have no knowledge of, or control over, the exact location of the provided resources or of where their data are stored, whether network, memory, or storage [13].


(3) Broad network access. Cloud computing resources are delivered over the network and accessed via the internet through standard mechanisms that support heterogeneous thick and thin client platforms such as workstations, PDAs, mobile phones, laptops, and tablets. Broad network access promotes the utility character of cloud resources and widens the scope of cloud benefits [9]. (4) Metered or measured service. Cloud platforms automatically optimize and control resource usage by metering it, billing the customer according to the type of service consumed, such as network or storage; a simple sketch of this pay-per-use accounting is given below. Resource usage can be tracked, audited, monitored, reported, and controlled, providing a high level of transparency to both users and providers [9, 13]. (5) Rapid elasticity. It is difficult to anticipate how much of a service cloud users will demand. Services should therefore be provisioned and released elastically, and in some cases scale automatically and rapidly with demand. The ability to scale whenever users are added or requirements change can only be achieved by providing elasticity in the cloud [4, 9].
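The following is a toy illustration, not taken from the paper, of how metered usage can be aggregated into a pay-per-use bill in the spirit of the "measured service" property; the resource names and unit rates are invented for the example.

```python
# Illustrative sketch only: aggregate metered usage records into a bill.
# Resource names and unit rates below are hypothetical.

USAGE_RECORDS = [
    {"resource": "storage_gb_hours", "quantity": 1200.0},
    {"resource": "network_gb", "quantity": 35.5},
    {"resource": "vm_cpu_hours", "quantity": 96.0},
]

UNIT_RATES = {  # hypothetical prices per metered unit
    "storage_gb_hours": 0.0001,
    "network_gb": 0.09,
    "vm_cpu_hours": 0.05,
}

def pay_per_use_bill(records, rates):
    """Sum quantity * unit rate over every metered resource."""
    return sum(r["quantity"] * rates[r["resource"]] for r in records)

if __name__ == "__main__":
    print(f"Billed amount: ${pay_per_use_bill(USAGE_RECORDS, UNIT_RATES):.2f}")
```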

4 Enterprise Cloud Implementations Many cloud providers now operate cloud computing successfully at commercial scale; prominent among them are Amazon, Google, IBM, and Microsoft. Amazon Elastic Compute Cloud (EC2) offers a range of services that make up a virtual computing environment. Subscribers use a web-service interface to launch instances with their chosen combination of operating system and custom application environment, and to manage access permissions to their networks. Amazon Simple Storage Service (S3) offers a simple web-service interface with which users can store and retrieve unlimited data, anywhere and at any time, over the internet; according to Amazon's own service description, it gives developers access to the reliable, fast, scalable, low-cost storage infrastructure used to run Amazon's global network of web sites. Amazon also provides other cloud-based services such as Amazon Flexible Payments Service (FPS) and Amazon Simple Queue Service (SQS). SQS provides a hosted queue for storing messages as they travel between computers; developers use it to move data between the distributed components of applications that perform different tasks, without requiring every component to be constantly available, and the service integrates with Amazon EC2 and the other AWS platforms. FPS, in turn, is a set of web services that lets companies bill their users, comparable to Checkout or PayPal.


FPS also supports sending and receiving funds, lets merchants decide how payment instructions are structured, and allows standing orders that remain in place across several transactions. Its main purpose is to make micropayments efficient and economical, helping organizations lower charges by aggregating purchases into cheaper combined transactions [16, 17]. Google offers a number of cloud services for various requirements. Prominent among them is Google Apps, a group of web-based applications and file storage that run in a web browser, including Google Talk, Gmail, Google Calendar, and productivity tools such as Google Docs text documents, presentations, and spreadsheets; these applications enable content sharing and promote collaboration. IBM offers cloud products under its Smart Business portfolio, such as Smart Cube, Smart Market, and Smart Desk. IBM Smart Cube combines an appliance with built-in office software and networked storage. Smart Market is a portal for comparing and managing business applications that run in IBM's cloud environment. Smart Desk is a software dashboard developed to help users manage services and applications from the Market and Cube clouds. A fundamental problem for organizational IT staff is having enough test environments to evaluate new applications before they go to production; IBM addressed this with the Smart Business Test Cloud, a product intended to reduce the cost of pre-production application testing [16, 17]. Microsoft has invested heavily in the service-delivery model by introducing Azure as its main cloud platform offering. The Azure platform has three components: Windows Azure, SQL Azure, and Azure AppFabric (formerly .NET Services). Windows Azure provides on-demand compute and storage services with which developers host, scale, and manage cloud and internet applications. SQL Azure extends the capabilities of SQL Server, Microsoft's database management system (DBMS), into the cloud as a web-based distributed relational database. AppFabric is a collection of integrated technologies that let developers connect applications and services both to Windows Azure and to on-premises deployments [16, 17].
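As a concrete illustration of the S3 and SQS usage patterns described above, the following minimal sketch (not part of the paper) uses the boto3 SDK; the bucket name and queue URL are placeholders, and configured AWS credentials are assumed.

```python
# Minimal sketch of storing an object in S3 and passing a message through SQS.
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "example-report-bucket"  # hypothetical bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical queue

# Store and retrieve an object through the S3 web-service interface.
s3.put_object(Bucket=BUCKET, Key="reports/2014-q2.csv", Body=b"id,value\n1,42\n")
body = s3.get_object(Bucket=BUCKET, Key="reports/2014-q2.csv")["Body"].read()
print(body.decode())

# Pass a message between distributed application components through SQS.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="report 2014-q2 uploaded")
```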

5 Cloud Utility Classes Any application needs a model of computation, a model of communication, and a model of storage. The statistical multiplexing required to achieve elasticity and the appearance of unlimited capacity available on demand demands automatic management and allocation of resources, and in practice the key enabling mechanism is virtualization. Different cloud utility offerings can be distinguished by the abstraction level of the cloud software and by the level at which resources are managed.


Typical examples of the different utility classes are Amazon EC2, Google App Engine, and Microsoft Azure [1]. Amazon EC2 sits at one end of the spectrum: an EC2 instance looks much like physical hardware, and users can control nearly the entire software stack from the kernel upwards. This low level of abstraction makes it inherently difficult for Amazon to offer automatic failover and scalability, because replication semantics and other management issues are highly application dependent [1]. Google App Engine is another important example: it is a domain-specific platform aimed squarely at conventional web applications, enforcing a clean separation between a stateless computation tier and a stateful storage tier. App Engine's automatic scaling and high-availability mechanisms, and the proprietary MegaStore data storage available to App Engine applications, all rely on these constraints [1]. A further example is Microsoft Azure, where applications are written with the .NET libraries and compiled to the Common Language Runtime, a language-independent managed environment. The framework is more flexible than App Engine's, although the user's choices of storage model and application structure are still constrained. Azure can therefore be classified as intermediate between a hardware virtual machine offering such as EC2 and an application framework such as App Engine [1].

6 Advantages of Cloud Computing Cost reduction: The simplicity and economic convenience of the cloud delivery model are major drivers of cloud demand. Many organizations now regard cloud services as a route to substantial reductions in IT cost, since the model eliminates much of the expense and difficulty of installing and maintaining applications locally and thereby frees up SMEs' capacity for financial investment [15, 17]. Opportunity for SMEs: Cloud computing has considerably reduced the cost of entry for SMEs wishing to benefit from compute-intensive business analytics that were previously accessible only to large enterprises. Such workloads need high computational power for comparatively short periods, and cloud computing allows SMEs to provision these services dynamically [2, 11]. Bridging the digital divide: Cloud computing gives growing economies and third-world countries an opportunity to close the IT gap with the developed world by adopting cloud technologies to deploy and deliver a range of IT services.


In practice, these countries would find it difficult, if not impossible, to achieve their IT delivery strategies without cloud computing, because they lack the resources to set up traditional IT environments [2]. Easy access to resources: Cloud computing gives organizations immediate access to IT resources and services without upfront capital investment; services are provisioned in the cloud and paid for per use on demand, allowing business operations to start or resume quickly [2, 11, 17]. Reduced carbon emissions: The cost benefit of cloud computing is not limited to the capital users save by not purchasing hardware or maintaining in-house IT infrastructure; it extends to a considerable reduction in carbon footprint from adopting cloud-based business solutions. Recent research suggests that ICT already contributes about 2 % of global carbon emissions because of the extensive use of computing in organizations, so virtualized cloud services can contribute markedly to reducing emissions [7, 17]. Scalability: Cloud computing lets enterprises scale resources up and down dynamically, in real time, according to requirements, reducing the cost of infrastructure expansion and allowing them to focus on business growth [2]. Reduced over-provisioning: Cloud computing helps businesses cut the cost of over-provisioned IT resources while giving users ubiquitous access to IT services over the internet from any device, anywhere in the world [8].

7 Cloud Computing Security Issues A recent investigation of reported cloud computing vulnerability incidents shows that their number has risen considerably over the past five years. Insecure interfaces, hardware failure, and data loss or leakage together constitute about 25 % of known threat types and account for some 64 % of all cloud computing vulnerability incidents [6]. The security of data stored in, accessed from, or transmitted through the cloud remains a major concern for enterprises and users. Cloud security threats and issues, summarized in Fig. 2, include denial-of-service and distributed denial-of-service attacks (DoS, DDoS), insider and malicious outsider attacks, cross-site scripting, virtual machine rootkits, XML wrapping, service disruption, legal issues, privacy, loss of data control, multi-location, and multi-tenancy, all of which remain critical concerns for cloud adoption. Multi-location: Storing data in multiple locations has serious security implications for enterprise data. The risks include situations in which a third-party cloud provider goes out of business, or data are frozen or seized during an industrial dispute or legal proceedings pending resolution; for cloud subscribers this can mean complete business collapse.

Fig. 2 Cloud computing security issues

Most importantly, such a situation may become a legal nightmare for an enterprise when data are compromised and misused overseas in a country whose jurisdiction does not permit extradition of the culprit, or with which no extradition agreement exists. Another security impact of multi-location data storage is legislation enacted within the borders of certain technology powers, such as the USA PATRIOT Act; the act recently prompted calls in Canada for its government to avoid network-connected systems operated within US borders, because of uncertainty about the confidentiality and privacy of Canadian data stored on those computers [2, 14]. Multi-tenancy: Cloud computing was architected to meet its primary objective of shared computational resources, residing physically or logically in the provider's domain and serving multiple users, which means users access and share the same infrastructure. The multi-tenancy approach used by cloud providers presents serious security risks such as VM-to-VM attacks and information exploitation [14]. Privacy: Data and information privacy remain critical security issues in cloud computing. Legislative protection of privacy and regulation of personal information vary across countries, so storing customer data across multiple sites, particularly across national boundaries, poses serious risks to information confidentiality and raises legal challenges around privacy [7, 14]. DoS, DDoS: These well-known network attacks deny authorized users access to computing resources through techniques such as Smurf, SYN flood, UDP, and ICMP attacks. Part of the attack overwhelms the victim with requests, driving up CPU load until the system slows down or crashes, resulting in loss of availability. DoS and DDoS pose a real security threat to cloud computing infrastructure [14]. Insider and malicious outsider attacks: Cloud users are exposed to these attacks because the cloud is based on a multi-tenant model under a single provider domain, and users do not know what standards providers apply when hiring personnel or how data are stored across multiple cloud locations.


Poor provider practices in this respect may lead to successful insider attacks, with third-party employees stealing sensitive data to sell to competitors of the victim organization or engaging in corporate espionage such as information exposure. Outsider attacks involve defacing systems or openly releasing confidential information; attackers exploit the easy accessibility of the cloud and weaknesses in cloud APIs [14]. Virtual machine rootkit: Cloud computing is susceptible to this form of attack because its most important component is virtualization, in which the operating system and vital components are packaged to be hardware independent by multiplexing the system over a small privileged kernel known as the hypervisor. A virtual machine rootkit, operating at the same level as the hypervisor, is a recent form of malware that can be installed underneath the operating system layer, hoisting the operating system into a VM [14]. XML wrapping and cross-site scripting: Web services are the essential implementation technology for service-oriented architecture, providing interoperability and platform independence, and XML is generally the markup language mediating client-server communication. XML signature wrapping enables unauthorized modification of XML documents while their origin authentication appears intact, which raises a critical security concern for cloud computing. Cross-site scripting exploits vulnerabilities in web applications by injecting malicious code into the client machine; a successful attacker can impersonate user credentials and carry out malicious activities such as session hijacking or crafted phishing [14]. Service disruption: Service disruption or service hijacking is a critical security challenge for cloud infrastructure, mounted through attacks such as software exploits and phishing. With compromised enterprise login credentials, attackers can eavesdrop, replay sessions, or redirect an enterprise's clients to illegitimate sites. The softest targets remain internet-connected machines and IP addresses, information about which is potentially exposed through internet tools such as WHOIS [14]. Loss of control: When users or enterprises move resources to the cloud, they lose not only control of their data but also awareness of where the data are stored, since they can be stored anywhere. From the user's perspective this is a security issue in itself, because users also lose visibility of the security mechanisms used to protect their resources in the cloud [7, 14].

8 Prototype of Cloud Computing Platform Cloud computing models vary, but they share characteristics such as resource pooling, elasticity, multi-tenancy, and on-demand self-service. In this section a prototype private cloud platform is implemented using the Microsoft private cloud solution: Windows Server 2012 with Hyper-V Server 2012 and System Centre 2012, as shown in Fig. 3.

Fig. 3 Cloud platform architecture (Hyper-V virtualization host on Windows Server 2012 with a virtual domain controller, member server and client VMs, and the System Centre 2012 components)

8.1

Overview of Hyper-V and System Centre 2012

Hyper-V Server 2012 and System Centre 2012 together enable the creation and management of a virtualized cloud environment, as shown in Fig. 3. The Hyper-V management tools include the virtualization WMI provider, the Windows hypervisor, the virtual infrastructure driver, the virtualization service provider (VSP), and the VM bus, while the System Centre 2012 components include Virtual Machine Manager (SCVMM), App Controller 2012, Operations Manager 2012 (SCOM), Orchestrator 2012 (SCORCH), Service Manager 2012 (SCSM), Configuration Manager 2012 (SCCM), and Data Protection Manager [12].

8.1.1

Platform Setup

Foundation infrastructure: The first task was to build a foundation infrastructure for the cloud platform. This began by installing Windows Server 2012 on the physical server and enabling the Hyper-V Server 2012 role integrated in Windows Server 2012. Subsequent tasks included creating a virtual machine (VM) named DC01 on Hyper-V, installing Windows Server 2012 on the DC01 VM, and installing Active Directory (AD), the Dynamic Host Configuration Protocol (DHCP), and the Domain Name System (DNS); a scripted sketch of these first provisioning steps is given at the end of this section.


Table 2 Platform components
Hardware: Physical server, Broadband, Network cables, Switches
Operating system: Windows Server 2012 with Hyper-V Server 2012; Windows 7
System Centre 2012 core components: VMM 2012, Application Controller 2012, Operations Manager 2012, Orchestrator 2012, Service Manager 2012, Configuration Manager 2012, Data Protection Manager
Database: SQL Server 2008 R2, SQL Agent
Application: Exchange Server 2010, SharePoint

The server was then promoted to domain controller and domain user accounts were created. Exchange Server, Web Server, Windows Update Server, File, Print, and IIS services, and SharePoint Server were installed to provide the front end for all users, including the dashboard and request portals, using the components listed in Table 2. Two further VMs were created on Hyper-V: one running Windows Server 2012 configured as a domain member server and one running Windows 7 Professional as a client. Additional System Centre 2012 components were then installed: App Controller for cloud and virtual machine requests, Operations Manager 2012 for management and monitoring, Orchestrator 2012 for integration and automation, Configuration Manager 2012 with both central and primary site roles, Endpoint Protection for end-user support, and the data warehousing and CMDB services. Creation of logical networks: Three logical networks were configured on the Hyper-V console, named CloudLAB (VLAN) 001, CloudLAB (VLAN) 002, and CloudLAB (VLAN) 003, to be used for fabric-based communication. Three virtual (VLAN) IP pools were also created and associated with the three logical networks, with their IP address ranges set as required. Creation of a virtual hard disk (VHD): A VHD was created as the storage host using the SMB 3 support integrated in Server 2012 as virtual storage, managed exclusively with the file server. A new file share was then created, its classification configured, and the share linked to the platform host device. Setting up Virtual Machine Manager: The core component of System Centre 2012 is Virtual Machine Manager (VMM), which manages the environment, focusing on Hyper-V and on the users who consume cloud resources. Before setting up VMM, all prerequisites were verified as installed: the IIS role, .NET Framework 3.5.1, the Windows Automated Installation Kit (AIK) used by VMM for image management and bare-metal deployment, the SQL Native Client, and the SQL command-line utilities used by VMM to communicate with SQL Server.


The VMM features were then installed: the VMM management server, the VMM console, and the VMM self-service portal. Further tasks included configuring the database section with the details of the database installed earlier, configuring the VMM service account and distributed key management, and specifying the name of the server hosting SCVMM; the installation completed without error. The console window provides management features such as Fabric, Library, Jobs, Settings, VMs, and views. Adding the Hyper-V host to the VMM console: The Hyper-V host was added to the VMM console to allow efficient management of the platform. In the console, Add Resources was selected and then Hyper-V Hosts and Clusters; on the credentials page the Manually option was checked and an administrative account name and password entered; on the discovery scope page the host server name was entered; on the target resources page the wizard scanned the network and displayed the discovered host, which was selected; on the host settings page the host group was accepted, the option to reassociate the host with this VMM environment was ticked, and the wizard was completed. Creation of the cloud: A cloud was then created in the VMM console, where resources can be positioned for access by cloud tenants. The Create Cloud wizard was started from the Cloud button, the cloud was named, all default hosts were selected as cloud resources, CloudLAB (VLAN) was chosen as the logical network, high bandwidth and the port classification were selected, the Windows Server 2012 ISO image created earlier was added from the library as a cloud resource, capacities for memory and storage were set, and Hyper-V was selected as the capability profile and VM type before finishing the wizard. Setting up tenants on the cloud: Tenants were created in the new cloud using the wizard, configuring the profile, members, quotas, networking, user role, permitted user actions (start, save, stop, shut down, remote connection to the VM, deploy, and local administrator), and the Run As account. Positioning resources in the library: In the VMM console, the MSSCVMMLibrary was opened and explored, the ISO folder saved earlier was located, and the Windows Server 2012 ISO image was loaded into the MSSCVMMLibrary and positioned as a cloud resource; the library was then refreshed to complete the process [12]. Accessing cloud resources: Finally, cloud resources are accessed by selecting Open New Connection from the console window, entering the cloud tenant's user credentials at the logon window, and clicking Connect.


SCVMM then opens a new VMM window with the tenant's user account shown in the upper pane, successfully logged into the cloud environment and able to see the tenant's VM network and all accessible cloud resources. The platform is fully dynamic, with ample scope for resource pooling, extension, clustering and scalability, self-service, and rapid elasticity, and cloud services such as Platform as a Service, Software as a Service, and Infrastructure as a Service can all be delivered within it.
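For readers who prefer a scripted rather than console-driven setup, the following rough sketch (not from the paper) drives the first provisioning steps of Sect. 8.1.1 through the Hyper-V PowerShell module from Python; cmdlet usage is indicative only, and the switch name, VM name, paths, and sizes are placeholder values.

```python
# Rough scripted equivalent of the initial Hyper-V provisioning steps.
import subprocess

def run_ps(command: str) -> None:
    """Run a single PowerShell command and fail loudly if it errors."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Create a virtual switch for the fabric network and the DC01 virtual machine.
run_ps('New-VMSwitch -Name "CloudLAB-Switch" -SwitchType Internal')
run_ps('New-VM -Name "DC01" -MemoryStartupBytes 2GB '
       '-NewVHDPath "C:\\VMs\\DC01.vhdx" -NewVHDSizeBytes 60GB '
       '-SwitchName "CloudLAB-Switch"')
run_ps('Start-VM -Name "DC01"')
```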

9 Conclusion Cloud computing is an internet-based IT technology that is broadly used and has evolved rapidly in recent years. It can help enterprises accomplish more with less expenditure over the long term. This paper has presented cloud computing models, services, characteristics, enterprise cloud implementations, utility classes, advantages, and security issues, together with a prototype cloud computing platform. It should help enterprises and users make informed decisions about the types of services available and the current concerns surrounding cloud computing.

References 1. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Zaharia, M.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010) 2. Avram, M.G.: Advantages and challenges of adopting cloud computing from an enterprise perspective. Procedia Technology 12, 529–534 (2014) 3. Beaty, D.L.: Cloud computing 101. ASHRAE J. 55(10), 88–93 (2013) 4. Bhatt, D.: A revolution in information technology-cloud computing. Walailak J. Sci. Technol. (WJST) 9(2), 107–113 (2011) 5. Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I.: Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 25(6), 599–616 (2009) 6. CSA (2013) Cloud Computing Vulnerability: A Statistical Overview. https://cloudsecurityalliance. org/download/cloud-computing-vulnerability-incidents-astatistical-overview/. Accessed 28th July, 2014 7. Jadeja, Y., Modi, K.: Cloud computing-concepts, architecture and challenges. In: 2012 IEEE International Conference on Computing, Electronics and Electrical Technologies (ICCEET), pp. 877–880, March, 2012 8. Kaur, S.: Cloud Computing is like having an Infinite Credit Line! IETE Technical Review. Medknow Publications & Media Pvt. Ltd., 29(6) (2012) 9. Khalid, A., Shahbaz, M.: Cloud computing technology: services and opportunities. Pak. J. Sci. 65(3) (2013) 10. Lehman, T.J., Vajpayee, S.: We’ve looked at clouds from both sides now. In: Annual SRII Global Conference (SRII), pp. 342–348. IEEE, March 2011 11. Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., Ghalsasi, A.: Cloud computing—the business perspective. Decis. Support Syst. 51(1), 176–189 (2011)


12. Microsoft: System Centre Technical Documentation Library. http://technet.Microsoft.com/enus/library/cc507089.aspx (2012). Accessed 4 June 2014 13. NIST: NIST Cloud Computing Standards Roadmap (2013). http://www.nist.gov/itl/cloud/ upload/NIST_SP-500-291_Version2_2013_June18_FINAL.pdf (accessed 28 July 2014) 14. Nwobodo, I., Jahankhani, H., Edoh, A.: Security challenges in the distributed cloud computing. Int. J. Electron. Secur. Digit. Forensics 6(1), 38–51 (2014) 15. Paul, P.K., Ghose, M.K.: Cloud computing: possibilities, challenges and opportunities with special reference to its emerging need in the academic and working area of information science. Procedia Eng. 38, 2222–2227 (2012) 16. Savu, L.: Cloud computing: deployment models, delivery models, risks and research challenges. In: 2011 International Conference on Computer and Management (CAMAN), pp. 1–4, 19–21 May 2011 17. Sultan, N.: Making use of cloud computing for healthcare provision: opportunities and challenges. Int. J. Inf. Manage. 34(2), 177–184 (2014)

Optimization of Logistics Distribution Network Model Based on Random Demand Feng Yu, Wei Liu, Liang Bai and Gang Li

Abstract Because customer demand in a logistics distribution network is random and uncertain, the distribution paths of commodities must be optimized dynamically. This paper studies the optimization of a chain retailing enterprise's logistics distribution system consisting of one regional distribution center, multiple urban distribution centers, and multiple retail stores under random demand. Independent decision models for retail store location and for logistics distribution network design are established, together with the corresponding joint decision model, and solution methods for the independent decision models are provided. In addition, a coordination mechanism based on Lagrangian relaxation and the sub-gradient algorithm is constructed to realize the joint decision. Finally, the effectiveness of the joint decision model and the coordination mechanism is demonstrated by examples.

Keywords Distribution center · Retailer · Lagrangian slack operator · Joint decision · Sub-gradient

1 Introduction In the logistics distribution network of urban commodities, uncertainty in the demand information of customers at the end of the regional supply chain causes the distribution paths of the logistics distribution center to change dynamically. The dynamic distribution paths of urban commodities must therefore be optimized so that accurate, minimal-cost distribution service can be offered within the customers' required service time [1]. Dedicated studies on dynamic distribution path optimization for modern urban commodities are scarce, and related work has mainly addressed general dynamic vehicle routing.

F. Yu (&)  W. Liu  L. Bai  G. Li Network Center, Shenyang Jianzhu University, Shenyang 110168, China e-mail: [email protected] © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_111


Dror et al. [2] concentrated on modeling the demand of dynamic stochastic vehicle transportation and applied the models in fields including postal express (EMS), product distribution, and production scheduling. Hanshar et al. [3] studied how to find the most appropriate planned routes for vehicles, subject to the relevant constraints, when customer demand is uncertain or traffic information changes. Li Bing et al. transformed the problem into a static vehicle routing problem by introducing virtual task points and solved it with a savings algorithm. Liu Shixin et al. [4] designed a guided neighborhood search algorithm for dynamic environments, and Alan [5] studied dynamic vehicle path optimization from several other perspectives. In the above references, store location and distribution network design are treated as independent decisions; the interaction between them has not been studied, which may reduce total system profit, so the independent decisions must be coordinated to achieve global optimization. This paper establishes independent decision models for store location and for logistics distribution network design and provides the corresponding solution methods. We analyze the two-echelon inventory cost of the chain enterprise in detail and, on this basis, establish a joint decision model for store selection and distribution network design. By constructing a coordination mechanism based on Lagrangian relaxation and the sub-gradient algorithm, exploiting the additively separable structure of the models, and relaxing the coupling constraints between variables, the joint decision model is decomposed into the independent decision models, and the Lagrangian multipliers are updated by the sub-gradient algorithm to coordinate the independent decisions. In the simulation case, the solution of the coordination-based joint decision converges effectively to the exact solution of the joint decision model and is superior to the exact solution of the independent decisions, verifying the effectiveness of the model and the coordination mechanism. Finally, we analyze how variations in the model parameters affect the profit increase achieved by the joint decision model.

2 Independent Decision Model The model makes the following assumptions: (1) The logistics distribution system consists of one regional distribution center, multiple distribution centers, and multiple retail stores; the variety of commodities for sale can be treated as a single commodity, and "distribution center" here refers to an urban distribution center. (2) Each store uses a periodic-review inventory policy with a review period of one day and an order-up-to (maximum stock level) ordering policy, and its daily demand is normally distributed.


The ratio of each store's demand variance to its mean is a constant; the mean μi can be obtained from the store's market share, and the demands of different stores and of different periods are mutually independent. (3) The city distribution center uses a continuous-review inventory policy with an (R, Q) ordering policy, and its inventory capacity is unlimited. (4) The location of the regional distribution center is fixed and its site-selection cost is not considered; its ordering and inventory costs can be approximated within the corresponding costs of the city distribution centers. (5) Shortage costs are ignored, on the assumption that the non-stockout probability of stores and distribution centers reaches or exceeds the target service level; the safety stock factors ki and kj are determined from the given target service level to guarantee the service level of the system.
The main symbols used in the model are:
i    retailer index (i = 1, 2, …, I)
j    distribution center index (j = 1, 2, …, J)
m    demand point index (m = 1, 2, …, M)
hc   holding cost of unit inventory
Mij  ordering lead time of retailer i served by distribution center j
L    ordering lead time
k    safety stock factor
ω    sales income of a unit of commodity
gj   fixed transportation cost from distribution center j to the regional distribution center
aj   unit variable transportation cost from distribution center j to the regional distribution center
OC   ordering cost per order
RCj  unit transportation cost from distribution center j to the regional distribution center
TCij unit transportation cost from retailer i to distribution center j
Q    order quantity
ESL  probability of stockout
TSL  target service level
F    facility (location) cost
P    number of retailer locations

3 Joint Decision Model and Coordination Mechanisms 3.1

Joint Decision Model

The joint decision model aims to minimize the difference between the system cost and the sales income of the chain enterprise per unit time, which is equivalent to maximizing corporate profit [6]. The total cost consists of the operating and location costs of the distribution centers and retailers. Given the sales income ω per unit of commodity and the daily demand of each retailer, the sales income of the enterprise can be


obtained. The daily demand of retailer i follows a normal distribution N(μi, σi²), where μi is calculated from the market share captured by the retailer. To simplify the model further, we assume that the ratio of demand variance to mean is the same constant for every retailer, that is, σi²/μi = c for all 1 ≤ i ≤ I; because the retailer demand process is a Poisson process, this assumption is reasonable. Taking Xi, Yj, and Zij as decision variables, the system optimization model can be written as

\[
\begin{aligned}
\min\;\Big\{ & \sum_{j=1}^{J} F_j Y_j + \sum_{i=1}^{I}\sum_{j=1}^{J} TC_{ji}\,\mu_i Z_{ij}
 + \sum_{j=1}^{J}\Big( hc_j k_j \sigma_j \sqrt{L_j} + a_j \mu_j + \sqrt{2\,hc_j\,(OC_j + g_j)\,\mu_j}\,\Big) \\
 & + \sum_{i=1}^{I}\Big[ hc_i\Big( \frac{\mu_i}{2} + k_i \sigma_i \sqrt{\sum_{j=1}^{J} M_{ij} Z_{ji} + 1}\,\Big) + OC_i \Big] \Big\}
\end{aligned}
\]

\[
\text{s.t.}\qquad
\sum_{j=1}^{J} Z_{ji} = X_i,\qquad
\sum_{i=1}^{I} X_i = P,\qquad
\mu_i = \sum_{m=1}^{M} G_m\, l_{mi} X_i \Big( \sum_{i=1}^{I} l_{mi} X_i + b_m \Big)^{-1},\qquad
\frac{\sigma_i^2}{\mu_i} = c .
\]
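The demand-allocation constraint above is a market-share (Huff-type) split of each demand point's demand among the opened retailers. The following small numeric illustration (values invented, not from the paper) shows how μi and σi² = c μi would be computed for two opened retailers and two demand points, using the paper's value c = 1.2.

```python
# Illustrative computation of mu_i = sum_m G_m * l_mi X_i / (sum_i' l_mi' X_i' + b_m).
G = {1: 120.0, 2: 80.0}                 # demand at demand points (hypothetical)
l = {(1, 1): 0.6, (1, 2): 0.3,          # attraction l_mi of retailer i at point m
     (2, 1): 0.2, (2, 2): 0.7}
b = {1: 0.1, 2: 0.1}                    # constants b_m
X = {1: 1, 2: 1}                        # both candidate retailers opened

def mean_demand(i, retailers):
    total = 0.0
    for m in G:
        denom = sum(l[(m, r)] * X[r] for r in retailers) + b[m]
        total += G[m] * l[(m, i)] * X[i] / denom
    return total

retailers = [1, 2]
mu = {i: mean_demand(i, retailers) for i in retailers}
sigma_sq = {i: 1.2 * mu[i] for i in retailers}   # sigma_i^2 = c * mu_i with c = 1.2
print(mu, sigma_sq)
```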

3.2

Lagrangian Relaxation and Model Decomposition

The joint decision model has a classic additively separable structure. We adopt a coordination mechanism based on Lagrangian relaxation [7, 8] and the sub-gradient algorithm to decompose the model into independent decision sub-models and to coordinate those independent decisions, so that the results of the independent decisions and of the joint decision converge. The coordination mechanism comprises three phases: (1) Lagrangian relaxation and decomposition: the constraints coupling the independent decisions are relaxed and the original problem is decomposed into several sub-problems.


(2) Optimized coordination based on the sub-gradient algorithm: for given slack operators, the Lagrangian relaxation problem is solved to obtain a lower bound on the objective function; the sub-gradient algorithm is then used to update the Lagrangian multipliers and gradually improve the dual objective, yielding new lower bounds for the original problem. (3) Making the solution feasible: the solution of the relaxed problem is repaired into a feasible one, which gives an upper bound on the objective of the original problem. The coupling constraints link the demand and location variables of the distribution centers, retailers, and demand points. Introducing non-negative slack operators βi and λi, these constraints are relaxed into the objective function, giving the Lagrangian relaxation

\[
\begin{aligned}
\min\;\Big\{ & \sum_{i=1}^{I} (F_i - \beta_i) X_i + \sum_{j=1}^{J} F_j Y_j
 + \sum_{i=1}^{I}\sum_{j=1}^{J} \big( \beta_i Z_{ij} + TC_{ij}\,\mu_i Z_{ij} \big) \\
 & + \sum_{j=1}^{J}\Big( hc_j k_j \sigma_j \sqrt{L_j} + a_j \mu_j + \sqrt{2\,hc_j (OC_j + g_j)\,\mu_j}\,\Big)
 + \sum_{i=1}^{I}\Big[ hc_i\Big( \frac{\mu_i}{2} + k_i \sigma_i \sqrt{\sum_{j=1}^{J} M_{ij} Z_{ji} + 1}\,\Big) + OC_i \Big] \\
 & - \sum_{i=1}^{I} (\omega + \lambda_i)\,\mu_i
 + \sum_{i=1}^{I} \lambda_i \sum_{m=1}^{M} G_m\, l_{mi} X_i \Big( \sum_{i=1}^{I} l_{mi} X_i + b_m \Big)^{-1} \Big\} \\
\text{s.t.}\quad & \beta_i \ge 0,\qquad \lambda_i \ge 0 .
\end{aligned}
\]

For given slack operators βi and λi, the Lagrangian relaxation is decomposed into independent dual sub-problems P and D to raise computational efficiency. Sub-problem P:

\[
\min\;\Big\{ \sum_{i=1}^{I} (F_i - \beta_i)\,X_i
 + \sum_{i=1}^{I} \lambda_i \sum_{m=1}^{M} \frac{G_m\, l_{mi} X_i}{\sum_{i=1}^{I} l_{mi} X_i + b_m} \Big\} .
\]


Sub-problem D:

\[
\begin{aligned}
\min\;\Big\{ & \sum_{i=1}^{I}\sum_{j=1}^{J} \big( F_j Y_j + TC_{ij}\,\mu_i Z_{ij} \big)
 + \sum_{i=1}^{I} \beta_i \sum_{j=1}^{J} Z_{ij}
 - \sum_{i=1}^{I} (\omega + \lambda_i)\,\mu_i \\
 & + \sum_{j=1}^{J}\Big( hc_j k_j \sigma_j \sqrt{L_j} + a_j \mu_j + \sqrt{2\,hc_j (OC_j + g_j)\,\mu_j}\,\Big)
 + \sum_{i=1}^{I}\Big[ hc_i\Big( \frac{\mu_i}{2} + k_i \sigma_i \sqrt{\sum_{j=1}^{J} M_{ij} Z_{ji} + 1}\,\Big) + OC_i \Big] \Big\} \\
\text{s.t.}\quad & \sum_{j=1}^{J} Z_{ji} \le 1,\qquad
 \sum_{i=1}^{I} \mu_i \le \sum_{m=1}^{M} G_m \frac{\sum_{i=1}^{I} l_{mi}}{\sum_{i=1}^{I} l_{mi} + b_m} .
\end{aligned}
\]

3.3

Optimized Coordination Based on Sub-gradient Algorithm

The sub-gradient algorithm obtains a lower bound \(Z^k_{\inf}\) on the original problem by solving the Lagrangian dual. Solving each pair of sub-problems is itself an independent decision process: for the given Lagrangian slack operators and constraints, a suitable algorithm is chosen for each sub-problem, while the slack operators are updated iteratively by the sub-gradient algorithm. The slack operators coordinate the independent decisions and steer each decision toward the global optimum. The algorithm parameters are as follows.

(1) Updating rules for the Lagrangian slack operators βi and λi:

\[
\beta_i^{k+1} \leftarrow \max\!\big(0,\; \beta_i^{k} + \alpha_\beta^{k} V_i^{k}\big),\qquad
\lambda_i^{k+1} \leftarrow \max\!\big(0,\; \lambda_i^{k} + \alpha_\lambda^{k} VR_i^{k}\big),
\]

where k denotes the iteration count, \(V_i^{k}\) and \(VR_i^{k}\) denote the ith components of the sub-gradient vectors \(V^{k}\) and \(VR^{k}\), and \(\alpha_\beta^{k}\) and \(\alpha_\lambda^{k}\) are step sizes.

(2) In the kth iteration,

\[
V_i^{k} = \sum_{j=1}^{J} Z_{ij}^{k} - X_i^{k},\qquad
VR_i^{k} = \sum_{m=1}^{M} \frac{G_m\, l_{mi} X_i^{k}}{\sum_{i=1}^{I} l_{mi} X_i^{k} + b_m} - \mu_i^{k},
\]

and the step sizes are

\[
\alpha_\beta^{k} = \rho^{k}\,\frac{Z^{k}_{\sup} - Z^{k}_{\inf}}{|V^{k}|^{2}},\qquad
\alpha_\lambda^{k} = \rho^{k}\,\frac{Z^{k}_{\sup} - Z^{k}_{\inf}}{|VR^{k}|^{2}},
\]

where \(Z^{k}_{\sup}\) is an upper bound on the objective of the original problem, \(Z^{k}_{\inf}\) is the lower bound given by the dual problem in the kth iteration, and \(\rho^{k}\) is a control parameter with \(0 < \rho^{k} < 2\).

(3) Termination rules: the iteration stops when either of the following conditions is satisfied: the number of iterations reaches the given limit, \(k \ge K\); or \(|Z^{k}_{\sup} - Z^{k}_{\inf}| \le \varepsilon\), where ε is a given tolerance.
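The loop below is a schematic sketch of this coordination mechanism, not the authors' code. The callables solve_P, solve_D, upper_bound, and demand_residual stand in for the independent-decision solvers and the model-specific residual VR^k; their interfaces and the returned quantities are assumptions made for the sketch. The defaults K = 100, ε = 1, and ρ = 1.5 follow the case-study parameters.

```python
# Schematic sub-gradient coordination loop for the Lagrangian slack operators.
import numpy as np

def subgradient_coordination(solve_P, solve_D, upper_bound, demand_residual,
                             n_retailers, K=100, eps=1.0, rho=1.5):
    beta = np.zeros(n_retailers)                   # slack operators beta_i
    lam = np.zeros(n_retailers)                    # slack operators lambda_i
    for k in range(K):                             # stop rule: k >= K
        X = solve_P(beta, lam)                     # sub-problem P (store location)
        Y, Z, mu, z_inf = solve_D(beta, lam)       # sub-problem D, dual lower bound
        z_sup = upper_bound(X, Y, Z)               # feasible (primal) upper bound
        if abs(z_sup - z_inf) <= eps:              # stop rule: |Z_sup - Z_inf| <= eps
            break
        V = Z.sum(axis=1) - X                      # V_i^k = sum_j Z_ij^k - X_i^k
        VR = demand_residual(X, mu)                # VR_i^k from the demand constraint
        a_beta = rho * (z_sup - z_inf) / max(np.dot(V, V), 1e-12)
        a_lam = rho * (z_sup - z_inf) / max(np.dot(VR, VR), 1e-12)
        beta = np.maximum(0.0, beta + a_beta * V)  # beta^{k+1} = max(0, beta^k + a_beta V^k)
        lam = np.maximum(0.0, lam + a_lam * VR)    # lambda^{k+1} = max(0, lambda^k + a_lam VR^k)
    return beta, lam
```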

4 Case Study The example contains one regional distribution center, two cities, four candidate distribution centers, ten candidate retailers, and sixteen demand points, which together form the chain enterprise's distribution system. The algorithm parameters are ki = 2, kj = 1.95, ρk = 1.5, K = 100, and ε = 1; the basic model parameters are ω = 50, P = 6, c = 1.2, and Li = 1; the remaining parameters are given in Tables 1 and 2. (1) The joint decision solution obtained with the coordination mechanism converges effectively to the exact solution of the joint decision model, and that solution is superior to the exact solution of the independent decision model. (2) As customer demand increases, the proportional increase in total profit delivered by the joint decision model decreases. The reason is that when customer demand is very large, the joint decision model, in order to maximize enterprise profit, is dominated by the store location decision, so the solutions of the joint and independent decision models become similar and the total profits realized converge.

Table 1 Parameters of the candidate distribution centers

No.   Fj     hcj    OCj   Lj   aj   gj
1     2392   0.24   84    2    2    50
2     1722   0.23   75    2    3    60
3     1925   0.25   81    1    2    55
4     2108   0.30   76    2    2    56


Table 2 Model parameters of the retailers

No.   Fj    hcj   OCj   TCi1   TCi2   TCi3
1     760   1.2   20    4      7      30
2     780   1.3   22    5      6      34
3     770   1.1   23    6      7      35
4     760   1.2   24    7      8      37
5     750   1.5   20    6      7      35
6     770   1.4   21    4      5      32
7     730   1.6   23    35     33     5
8     790   1.3   24    36     32     6
9     810   1.2   25    38     34     7
10    740   1.2   23    40     35     6

(3) As the location cost of the distribution centers increases, the proportional increase in total profit delivered by the joint decision model grows, because when the cost of constructing the distribution network rises, the joint decision saves a larger share of that construction cost. (4) As the unit holding cost of the stores and distribution centers increases, the proportional increase in total profit delivered by the joint decision model falls, because the additional profit the joint decision can realize is then small; even though the absolute total profit decreases, the realized proportional profit increase also decreases (Table 3).

Table 3 Solutions of the decision models (total profit and increase over the independent decision)

No.  Parameter change   Independent decision   Joint decision (Lingo)        Joint decision (Lagrangian)
                        Total profit           Total profit   Increase (%)   Total profit   Increase (%)
0    Basic model        4718.44                4877.86        3.38           4863.77        3.08
1    0.75 Fj            5630.60                5848.54        3.73           5823.73        3.43
2    1.25 Fj            3807.00                3966.10        4.01           6949.00        3.73
3    0.75 Gm            1351.00                1550.40        12.86          1442.33        6.76
4    1.25 Gm            8001.00                8205.30        2.49           8141.82        1.76
5    0.75 hci           7389.00                7548.30        2.11           7496.14        1.45
6    1.25 hci           2049.00                2082.00        1.59           2066.83        0.87
7    0.75 hcj           5223.40                5495.60        4.95           5383.24        3.06
8    1.25 hcj           4213.60                4246.10        1.09           4229.19        0.37


5 Conclusion In an uncertain environment, jointly deciding store location and distribution network design when optimizing the logistics distribution system of a chain retailing enterprise can increase enterprise profit. Having established both the independent decision models and the joint decision model for the logistics distribution system, this paper applies a coordination mechanism based on Lagrangian relaxation and the sub-gradient algorithm to coordinate the independent decisions, so that the joint decision result produced by the coordination mechanism converges effectively to the exact solution of the joint decision model. We verify the effectiveness of the model and the coordination mechanism through a simulated case and analyze how the profit increase provided by the joint decision varies under different cost parameters. The models and coordination mechanism in this paper provide a useful decision tool for constructing and optimizing the logistics distribution systems of chain retailing enterprises.

References 1. Chunguang, Y.I., Songdong, J.U.: Enterprises’ distribution networks based on coordination and decision of agile SC. Logistics Technol. 1, 51–53 (2006) 2. William, H., Ali, E.: Multi-criteria logistics distribution network design using SAS/OR. Expert Syst. Appl. 36, 7288–7298 (2009) 3. Athakorn, K.: Design of a decision support system to evaluate logistics distribution network in Greater Mekong subregion countries. Int. J. Prod. Econ. 115, 388–399 (2008) 4. Huang, J., Zhang, C.: Application of complex network centrality in logistics and distribution networks. Logistics Technol. 22, 108–111 (2013) 5. Shu, J., Wang, G., Zhang, K.: Logistics distribution network design with two commodity categories. J. Oper. Res. Soc. 64, 1400–1408 (2013) 6. Benati, S.: An improved branch & bound method for the uncapacitated competitive location problem. Ann. Oper. Res. 122, 43–58 (2003) 7. Liu, S., Zhang, J., Li, G.: Location-allocation model of logistics distribution network of fast fashion products in mature period. J. Southwest Jiaotong Univ. 47, 333–340, 354 (2012) 8. Xu, C., Ming, Z.: Design and simulation of a logistics network for a telecom products supply application: A case study. Int. J. Ind. Eng.: Theor. Appl. Pract. 17, 80–91 (2010)

Data Forwarding with Selectively Partial Flooding in Opportunistic Networks Lijun Tang and Wei Wu

Abstract An opportunistic network is an evolution of the mobile ad hoc network composed of many mobile nodes. In opportunistic networks, the communication paths between any two nodes are only intermittently connected, and nodes communicate with each other by exploiting node mobility. The frequent movement of nodes also makes the network topology change dynamically, and nodes have no prior information or knowledge about the changing topology. Traditional data forwarding approaches do not address these issues and are therefore unsuitable for opportunistic networks. In this work, a novel data forwarding approach for opportunistic networks, called the selective probability-based flooding (SPF) forwarding scheme, is proposed. It builds on conventional flood forwarding but chooses fewer neighboring nodes to forward data than conventional flooding does; the chosen nodes have more communication chances with the destination node than the other neighbors. With traditional flooding, data are forwarded to all neighboring nodes so that they reach the destination as quickly as possible, giving the minimum end-to-end delay and the maximum delivery ratio, but the large volume of forwarded data consumes excessive network resources. The goal of SPF is to reduce network traffic by selecting only some of the neighboring nodes, based on their likelihood of connecting with the destination. Finally, the network performance of SPF, flooding, and partial flooding is evaluated through simulations. The results show that SPF provides better performance than the other two forwarding schemes while decreasing network traffic.

Keywords Opportunistic networks · Data forwarding · Flood forwarding · Selective probability-based flooding



L. Tang (&) Chongqing Vocational Institute of Engineering, Chongqing 400037, China e-mail: [email protected] W. Wu Chongqing City Management College, Chongqing 401331, China © Springer India 2016 Q.-A. Zeng (ed.), Wireless Communications, Networking and Applications, Lecture Notes in Electrical Engineering 348, DOI 10.1007/978-81-322-2580-5_112


1 Introduction The wireless sensor network (WSN) [1] is a rapidly developing wireless communication technology in which sensor nodes can change their location at any time and form a multi-hop self-organizing network. WSNs are widely used in military, intelligent transportation, environment monitoring, health care, and other fields. Each sensor node can autonomously share the information it has gathered with the other nodes in the network. Compared with traditional systems, a WSN is much cheaper and can be established quickly. Because the energy a node carries is limited, continuous tracking and monitoring is difficult or impossible: some sensor nodes quickly exhaust their limited power or move dynamically, which results in intermittent connectivity. Such an intermittently connected network is called an opportunistic network [2, 3], in which communication opportunities are obtained through the mobility of the nodes. Opportunistic networking has become a hot research topic in wireless communication in recent years. In this new type of network, no fixed infrastructure is necessary, and existing technologies may not be directly applicable. Research on opportunistic networks mainly targets the application areas of mobile ad hoc networks, such as disaster emergency communication, personal communications, and wildlife tracking. For instance, when disasters such as earthquakes, forest fires, or hurricanes occur, the communication infrastructure may be damaged and communication interrupted, greatly hampering rescue work; opportunistic networks can solve these problems effectively, and one of their main goals is to provide best-effort communication for people. Several projects applying opportunistic networks have been reported in the literature, such as ZebraNet [4] and DakNet [5]. The ZebraNet project monitors zebras' living habits in their habitat, using special collars worn by the zebras to collect and transmit data. The DakNet project provides intermittent connectivity between underdeveloped areas and the rest of the world: mobile access points (MAPs) mounted on vehicles such as buses, motorcycles, or bicycles shuttle between regions to exchange, store, and forward information (email, voice mail, etc.) between fixed stations, allowing people in remote areas to communicate with each other. Opportunistic networks are therefore used to establish communication between disconnected nodes in such application environments. To increase the chance of reaching the destination as far as possible, flooding routing [6] has been proposed for opportunistic networks. In flooding, messages generated by source nodes are spread to every encountered node, which creates many message copies, increases network traffic, and consumes excessive resources. Controlled flooding approaches have therefore been considered, such as the single-copy scheme [7, 8] and the multiple-copy scheme [9, 10]; both reduce network overhead by limiting the number of message copies.


Lindgren et al. [11] proposed PROPHET, a probabilistic protocol using the history of encounters and transitivity, which computes each node's delivery predictability for every known destination from its encounter history; when two nodes meet, a message can be forwarded to the peer that has more contact chances with the destination. When a mobile node encounters several neighbors at the same time, partial flooding [12] has been used to reduce the excessive use of network resources in dissemination-based networks without infrastructure: a fixed percentage of the neighboring nodes is chosen at random to act as relays, but the impact of the differing characteristics of network nodes and of limited network resources on performance is not considered. In this paper, inspired by partial flooding, we propose a novel routing scheme for opportunistic networks called selective probability-based flooding (SPF) data forwarding. The proposed scheme strives to deliver packets as quickly as possible under limited network resources. Compared with traditional flooding, it reduces the use of network resources, and it also provides better end-to-end delay and delivery ratio than the p% partial flooding algorithm. The rest of the paper is organized as follows. Section 2 defines the SPF algorithm. Section 3 describes the simulation model and analyzes the results obtained to compare the performance of the algorithms. Section 4 concludes the paper.

2 Selective Probability-Based Flooding Protocol In this section, we present our assumptions and describe the protocol in detail. We assume that each node has a fixed-size buffer and that the contact duration between nodes is limited. With SPF, a node transmits data not to all of its neighboring nodes but to a portion of them, chosen according to a weight value assigned to each neighbor. The weight value is an estimate of delivery probability. To avoid loops, data cannot be forwarded back to the node from which they came.

2.1

Estimating Delivery Probability

Let the set of nodes in the network be S. Each node i ∈ S keeps track of the probability of meeting every peer j ∈ S. We use the weight W_{ji} to denote the degree of contact between node i and node j; a higher weight means a higher delivery probability for messages. The weight is calculated as

\[
W_{ji} = k \sum_{n=1}^{k} T(n), \qquad (1)
\]


where k is the number of times nodes i and j have met and T(n) is the contact duration of the nth meeting. Whenever two nodes encounter each other, they compute their respective weights according to Eq. (1).
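The following minimal sketch (our reading of Eq. (1), not the authors' implementation) shows how a node could maintain the contact weight for each peer it meets: the number of meetings multiplied by the accumulated contact duration.

```python
# Per-node contact table maintaining W = k * sum_n T(n) for each peer.
from collections import defaultdict

class ContactTable:
    def __init__(self):
        self.meetings = defaultdict(int)      # k: number of meetings with each peer
        self.duration = defaultdict(float)    # sum of T(n) over past meetings

    def record_contact(self, peer, contact_duration):
        self.meetings[peer] += 1
        self.duration[peer] += contact_duration

    def weight(self, peer):
        """Higher weight means higher estimated delivery probability."""
        return self.meetings[peer] * self.duration[peer]

table = ContactTable()
table.record_contact("node_7", 12.5)   # met node_7 for 12.5 s
table.record_contact("node_7", 3.0)
print(table.weight("node_7"))          # 2 * 15.5 = 31.0
```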

2.2

Protocol Definition

The difference between SPF and p% partial flooding lies in how neighboring nodes are selected. With p% partial flooding, a node transmits data to a randomly selected subset of its neighbors, whereas SPF forwards data to the neighbors that have a greater chance of contacting the message's destination. We estimate each neighbor's contact probability with the destination using the weight value of Eq. (1); every node records the weights for all nodes it has encountered and maintains them in a weight table. In our scheme, a hop list carried in each packet stores the peers that have already seen it, including the peers to which the current node has sent the packet; to prevent loops, the packet is never sent to the nodes in its hop list. Table 1 details the typical steps of the SPF algorithm when a network node receives a packet, and a corresponding sketch follows the table.

Table 1 The steps of the SPF algorithm

The SPF algorithm
  If the message has been received before
    Drop the message
  End
  If the message has been received for the first time
    If direct communication with the destination is possible
      Send the message to its destination
    End
    If direct communication with the destination is impossible
      Discover all the neighboring nodes of the current node and obtain the weight value list related to the likelihood of forwarding the message to its destination through these neighboring nodes.
      Sort all neighboring nodes in decreasing order of their weight values and select the nodes in the upper half as the next hops of the message.
      Transmit a copy of the message to every selected node
    End
  End
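A minimal sketch of the relay-selection step of Table 1, assuming each message carries a destination and a hop list and each node keeps a weight table toward the destination; the function and field names are illustrative, not from the paper.

```python
def spf_select_relays(message, neighbors, weight_to_dest):
    """Relay selection following the steps of Table 1 (a sketch, not the
    authors' code): direct delivery if possible, otherwise forward to the
    upper half of the neighbors ranked by weight, skipping the hop list."""
    dest = message["dest"]

    if dest in neighbors:                 # direct communication is possible
        return [dest]

    # Loop avoidance: exclude nodes that have already seen the message.
    candidates = [n for n in neighbors if n not in message["hop_list"]]
    if not candidates:
        return []

    # Sort by weight (estimated delivery probability), highest first,
    # and keep the upper half (rounding up) as relays.
    candidates.sort(key=lambda n: weight_to_dest.get(n, 0.0), reverse=True)
    return candidates[: (len(candidates) + 1) // 2]

# Example with illustrative weights
msg = {"dest": "D", "hop_list": {"A"}}
print(spf_select_relays(msg, ["B", "C", "A", "E"],
                        {"B": 0.7, "C": 0.2, "E": 0.5}))  # ['B', 'E']
```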


3 Simulation Model and Results

3.1

Simulation Model

In this section, we present a comparative simulation analysis among flooding, p% (p = 50) partial flooding and our SPF using the Opportunistic Networking Environment (ONE) simulator [13]. For the simulations, various numbers of nodes have been placed in a 1000 m × 1000 m area. One node, marked as the destination node, is located at coordinate (1000, 1000) and has a large buffer. The other nodes move according to the Shortest Path Map-Based Movement mobility model [14]. The parameters are shown in Table 2. A message is forwarded from the sender node to the receiver node using flooding, p% partial flooding and our SPF scheme. With flooding, each node sends a message to all other nodes in its range. Under the p% partial flooding algorithm, each node sends the message to a randomly selected subset of nodes in its range. With our SPF scheme, each node sends the message to the nodes in its range that have high weights. For the simulations, three metrics have been used to compare and evaluate the performance of the algorithms. These metrics are as follows.

Delivery Ratio. It is the ratio of the number of successfully delivered messages to the total number of messages generated. It is calculated by the following formula:

$\text{delivery ratio} = \frac{\text{number of packets received}}{\text{number of packets sent}}$   (2)

End-to-end Delay Between the Source and Destination Nodes. It is calculated by the following formula:

$\text{end-to-end delay} = \frac{\sum_{i=1}^{N} \left(\text{received time}(i) - \text{send time}(i)\right)}{\text{number of messages received}}$,   (3)

where $N$ is the number of packets received.

Total Network Traffic by the Algorithms. It is the total number of copies of all messages in the network.

Table 2 Parameters used in simulation

Parameter                 | Default value
--------------------------|------------------
Network size              | 1000 m × 1000 m
Number of nodes           | 500
Transmission range        | 30 m
Speed of node             | [5, 30] m/s
Pause time                | [0, 120] s
Size of message           | 512 byte
Max buffer size           | 300 messages
Message generation rate   | 0.1/s
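Relating back to the three metrics defined above, a small sketch (with an assumed log format, not the authors' code) of how they could be computed from simulation output:

```python
def evaluate(created, delivered):
    """created: list of (msg_id, send_time); delivered: dict msg_id -> receive_time.
    Returns the delivery ratio (Eq. 2) and the average end-to-end delay (Eq. 3);
    a copy counter per forwarding event would give the total network traffic."""
    send_time = dict(created)
    received = [m for m, _ in created if m in delivered]

    delivery_ratio = len(received) / len(created) if created else 0.0
    avg_delay = (sum(delivered[m] - send_time[m] for m in received) / len(received)
                 if received else float("inf"))
    return delivery_ratio, avg_delay

# Example
created = [("m1", 0.0), ("m2", 5.0), ("m3", 9.0)]
delivered = {"m1": 40.0, "m3": 29.0}
print(evaluate(created, delivered))  # (0.666..., 30.0)
```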

3.2

Simulation Results

Simulation results have been obtained for these metrics by varying the number of nodes and the buffer size of the nodes. Figure 1a–c illustrate the delivery ratio, end-to-end delay and network traffic, respectively, of each algorithm when the number of nodes is varied (N = 100, 250, 500, 750, and 1000) and the buffer size of each node is confined to 300 messages. As the density of nodes increases, the delivery ratio increases slightly for all algorithms because more contact opportunities arise. Our SPF scheme outperforms the other two algorithms in terms of delivery ratio under resource constraints. Figure 1b shows that the end-to-end delay of all algorithms declines as node density increases. When node density is low, our SPF algorithm obtains the minimum delay, while the delay of flooding routing is lowest at higher densities.

Fig. 1 Impact of node density. a Delivery ratio. b Average delivery delay. c Network traffic


This means that the SPF scheme is more appropriate for sparse networks than the flooding algorithm. From Fig. 1c, we can see that the flooding scheme generates the largest number of messages, which easily degrades network performance, while SPF incurs the minimum network traffic because it sends messages only to nodes with better contact chances to the destinations. Figure 2a–c show the delivery ratio, end-to-end delay, and network traffic, respectively, of each algorithm when the buffer size of the nodes is varied (buffer size = 100, 300, 500, 700, and 1000 messages), the number of nodes is 500 and the communication range is 30 m. As the buffer size increases, Fig. 2 shows a tendency similar to Fig. 1. When the buffer size is small, it fills with messages quickly and redundant messages are dropped; thus the delivery ratio is low. As the buffer size increases, the nodes have enough storage for the messages, so the delivery ratio improves accordingly for all three algorithms.

Fig. 2 Impact of buffer size. a Delivery ratio. b Average delivery delay. c Network traffic


It is possible for SPF to deliver messages to the destinations quickly over the fewest hops, so the scheme provides a higher delivery ratio than the other two schemes. As the buffer grows, the delay decreases accordingly, as shown in Fig. 2b. When the buffer is small, SPF outperforms the other two algorithms in terms of delay. Figure 2c shows that flooding generates the most network traffic under the same conditions. In summary, compared to the other two algorithms, our SPF scheme provides better communication with the least consumption of network resources.

4 Conclusions

We have proposed SPF as an effective protocol for data forwarding in opportunistic networks. With SPF, a network node sends messages to the neighboring nodes that have a higher delivery probability to the destinations. In opportunistic networks, the network may be disconnected most of the time, so a path from a source node to a destination node might not exist. Using flooding routing, it is possible to reach a destination with minimum delay; however, flooding requires excessive usage of network resources and generates extreme network traffic that degrades network performance. The proposed SPF decreases the excessive usage of network resources while still improving network performance. With SPF, only the neighbors with a higher delivery likelihood receive the incoming data. The simulations show that the network traffic of SPF is far less than that of flooding and also less than that of p% partial flooding. Our evaluations also show that SPF performs well in terms of delivery ratio and delivery delay for opportunistic networks.

Acknowledgments This work is supported by the National Natural Science Foundation of China (61203321) and the Science and Tech. Research Fund Project of Chongqing Education Commission (KJ1403208).

References

1. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless sensor networks: a survey. Comput. Netw. 38(4), 393–422 (2002)
2. Pelusi, L., Passarella, A., Conti, M.: Opportunistic networking: data forwarding in disconnected mobile ad hoc networks. IEEE Commun. Mag. (2006)
3. Lilien, L., Kamal, Z.H., Bhuse, V., Gupta, A.: Opportunistic networks: the concept and research challenges in privacy and security. In: International Workshop on Research Challenges in Security and Privacy for Mobile and Wireless Networks (2006)
4. Juang, P., Oki, H., Wang, Y., et al.: Energy-efficient computing for wildlife tracking: design trade-offs and early experiences with ZebraNet. ACM SIGPLAN Notices 37, 96–107 (2002)
5. Pentland, A., Fletcher, R., Hasson, A.: DakNet: rethinking connectivity in developing nations. IEEE Comput. 37(1), 78–83 (2004)


6. Vahdat, A., Becker, D.: Epidemic routing for partially-connected ad hoc networks. Technical Report CS-2000-06, Duke University (2000)
7. Groenevelt, R., Nain, P., Koole, G.: The message delay in mobile ad hoc networks. Perform. Eval. 62(1–4), 210–228 (2005)
8. Spyropoulos, T., Psounis, K., Raghavendra, C.S.: Single-copy routing in intermittently connected mobile networks. In: First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, pp. 235–244 (2004)
9. Spyropoulos, T., Psounis, K., Raghavendra, C.S.: Spray and wait: an efficient routing scheme for intermittently connected mobile networks. In: Proceedings of the 2005 ACM SIGCOMM Workshop on Delay-tolerant Networking, pp. 252–259 (2005)
10. Grossglauser, M., Tse, D.N.: Mobility increases the capacity of ad hoc wireless networks. IEEE/ACM Trans. Netw. 10(4), 477–486 (2002)
11. Lindgren, A., Doria, A., Schelen, O.: Probabilistic routing in intermittently connected networks. ACM SIGMOBILE Mob. Comput. Commun. Rev. 7(3), 19–20 (2003)
12. Erdogan, M., Gunel, K., Koc, T., et al.: Routing with (p-percent) partial flooding for opportunistic networks. In: Proceedings of Future Network & Mobile Summit 2010 Conference, pp. 1–8 (2010)
13. Keränen, A., Ott, J., Kärkkäinen, T.: The ONE simulator for DTN protocol evaluation. In: SIMUTools '09: Proceedings of the 2nd International Conference on Simulation Tools and Techniques (2009)
14. Hyytiä, E., Koskinen, H., et al.: Random waypoint model in wireless networks. In: Networks and Algorithms: Complexity in Physics and Computer Science (2005)

Design of Flame End Points Detection System for Refuse Incineration Based on ARM and DSP Fengying Cui, Sailong Ji and Qilei Xu

Abstract In this paper, based on our earlier research on detecting the flame end points of refuse incineration using machine vision technology, a new detection system based on ARM and DSP is proposed. With its powerful signal processing ability and advantages in image processing, the DSP performs image acquisition and image pre-processing, including graying, median filtering, image segmentation based on a DFC clustering algorithm, and phase-based edge detection; it finally obtains the flame end points using the median method and converts the result into a standard 4–20 mA signal. According to this signal, and using the strong control ability of the embedded system, the ARM controls the air volume and the grate velocity and action. The results show that this design can detect the flame end points automatically and quickly and make the controller react rapidly. The system has the advantages of fast processing speed, good real-time performance, low cost and convenient operation. It not only meets the environmental protection requirements of the refuse incineration industry but also greatly improves the efficiency of the related enterprises.

Keywords Refuse incineration · ARM · DSP · Image processing · Flame end points detection

1 Introduction

With the increase of municipal solid waste (MSW), garbage disposal has become an important problem that hampers economic development and environmental governance. How to solve the MSW pollution problem effectively is an important topic we have to face; against this background, garbage disposal technology has emerged.

F. Cui (&) · S. Ji · Q. Xu, College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266042, China. e-mail: [email protected]


Now there are mainly two kinds of refuse treatment technologies. One is landfill, which consumes a large amount of land and is therefore not suitable for large-scale promotion and application. The other is refuse incineration, which has the advantages of volume reduction, harmlessness, and resource recovery [1], so it is worth popularizing.

The traditional flame detection in a refuse incinerator is mainly manual. Experienced staff adjust the grate according to the flame size and brightness, for example deciding when to roll the grate and at what speed to move it forward. This manual regulation requires a lot of manpower, has low efficiency and high labor intensity, can easily cause serious secondary pollutant emissions, and carries high risk. Therefore, the manual approach can no longer meet the designed production capacity or the requirements of environmental protection [2].

At present, image processing technology based on machine vision is widely applied to flame monitoring and is an active international research topic. With the rapid development of computer technology and the continuous decrease in the cost of image acquisition hardware, many new image processing technologies have emerged, and both processing speed and quality have greatly improved. Replacing the manual method with image processing has become a trend in industry [3], for example in power plant boiler flame monitoring and in fire detection and alarm systems [4–6].

Based on this idea, we previously introduced image processing into the flame detection of a refuse incinerator. In that design, a camera took photos of the incinerator flame, the images were input to a PC via an image acquisition card, and software written in Visual C++ on the PC collected and processed the images, calculated the flame end point location, and transformed it into a standard 4–20 mA signal to control the action of the grate. This detection technology not only effectively eliminated subjective human factors and improved equipment safety, but also saved labor and reduced waste disposal costs. It has been applied successfully to flame control in a refuse incinerator with good effect. However, this kind of detection, built on PCI or USB acquisition, requires the CCD images to be output to a PC for software processing; the system's continuity and maneuverability are weak, the image processing methods are more suitable for static tests, and the system is large and complex, with poor real-time performance and high cost.

Therefore, an embedded flame image detection system based on ARM and DSP is proposed. ARM and DSP have been widely applied to flame and image detection, as described in the literature [7–12], but we could not find this combination applied to flame detection for refuse incineration, so in this paper we apply the embedded ARM and DSP structure to this problem. The system uses a sophisticated industrial CCD to obtain the flame burning image in real time and a high-performance digital signal processor (DSP) to process it online in real time; combined with the strong control ability and rich interfaces of the ARM embedded system, the data processing capability and real-time performance are greatly improved.
The system realizes real-time, online flame state detection and diagnosis, and it has the advantages of small volume, low cost, stable operation, and fast processing speed.

2 System Structure

2.1

Principle of System Detection

The disposal process of refuse incineration is shown in Fig. 1. Before feeding, the flue gas fan is started and negative pressure is generated in the furnace; then two diesel burners in the incinerator are ignited one after another to heat the furnace. When the furnace temperature reaches the set value, the feed motor starts automatically and a signal is sent to the storage cabinet. The hydraulic pushing piston then pushes the garbage onto the conveyor and sends it to the combustion chamber. In the combustion chamber the garbage is mixed and the grate rolls up and down, so the garbage is pushed from the inlet toward the ash outlet while being heated and burned gradually. The moving speed of the garbage is mainly adjusted according to the combustion conditions (i.e., the flame end points) to ensure that the waste stays in the furnace long enough to be burned completely. According to the burning process, we adjust the feed volume based on the burning condition (the flame end point position) and adjust the grate flipping and motion. However, owing to the complexity of the waste composition, incinerator operation is not stable, and problems such as coking, ash buildup and corrosion often occur, frequently resulting in shutdowns or pollutant emissions that exceed the standard.

Fig. 1 Disposal process of refuse incineration (labels in the figure: material inlet, piston, air inlet at the bottom, primary combustion chamber, combustor, secondary combustion chamber, smoke ejection, ash)


How to obtain the flame conditions accurately and reliably is therefore an essential means of ensuring the secure and effective operation of the incinerator.

2.2

System Structure

According to the above processing workflow, the basic idea of the designed ARM and DSP system is to control and adjust the grate speed, the combustion air volume, and a series of other control operations under the conditions of variable garbage properties and constant incineration capacity. This ensures that the system achieves optimal combustion and meets the requirements of stability, economy, and environmental protection. The key control task is the speed of each grate. Because the actual flame end point is used as the feedback for adjusting the grate speed in the burning section, flame end point detection is one of the key and difficult points. In this paper, building on image processing technology, the acquired images are uploaded to a DSP real-time signal processor and an ARM controller to complete the detection and control the combustion. The system structure is as follows. As shown in Fig. 2, the system consists of a CCD industrial camera, a DSP real-time signal processing unit and an ARM control unit. The furnace flame images are collected by the CCD camera, decoded, sent into a First-In-First-Out (FIFO) memory, and then passed to the DSP processor. Because the DSP has rich software and hardware resources well suited to image signal processing, it can run complex image processing algorithms. In this system, image acquisition, the image processing algorithms, and the detection of the flame end points are all realized in the DSP, which gives the system high processing speed and real-time detection.

Fig. 2 System structure (flame of refuse incinerator → CCD industrial camera → video decoder → FIFO → DSP processor with SDRAM and HPI interface → ARM processor with SDRAM, FLASH, external I/O interface, display, keyboard and communication interface)


In this design, the control and data processing core adopts an ARM plus DSP structure; the hardware connection and the driver programs are designed around the communication interface between the two units. The ARM unit, running the embedded ARM-Linux real-time operating system, is responsible for the entire system workflow and task scheduling, handles communication with external systems, and realizes the coordinated control of the whole system. The DSP implements the front-end image acquisition and the related detection algorithms, and exchanges data with the ARM microprocessor. The ARM+DSP structure offers major advantages: it is flexible, universal and suitable for modular design, which enables highly efficient real-time control; its development can proceed in parallel and independently for the two units; the system is easy to maintain and expand; and it is well suited to real-time signal processing.

3 Image Processing Algorithm

The flame images from the refuse incinerator are often mixed with a lot of noise; at the same time, the background gray level is graded, the edges are not sharp enough, the transitions are too smooth, and the uneven density of the incineration ashes produces continuous bright-spot noise in the flame images. All of these factors greatly affect the edge judgment. Therefore, in order to detect the flame end points, that is, the edge, the original flame images must be pre-processed. Image pre-processing includes contrast enhancement, noise removal and edge feature enhancement, among others [13, 14]. After processing, the quality of the output image is noticeably improved, which facilitates computer analysis, processing, recognition, and understanding.

3.1

Image Graying

In this system, the images collected by the CCD camera are true-color (RGB) images. In image processing, it is faster to process a single gray signal than the three R, G and B signals. The key to converting an RGB image to a gray image is to derive the pixel gray level from the three components of each pixel. There are three common methods.

The maximum value method:

$R = G = B = \max(R, G, B)$   (1)

The mean value method:

$R = G = B = (R + G + B)/3$   (2)

The weighted average method:

$R = G = B = 0.3R + 0.59G + 0.11B$   (3)

This system uses the weighted average method, i.e., formula (3), to process the flame image of refuse incineration.
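A minimal NumPy sketch of the weighted-average graying of formula (3); an H × W × 3 array in RGB channel order is assumed.

```python
import numpy as np

def rgb_to_gray_weighted(rgb):
    """Weighted-average graying of formula (3): gray = 0.3*R + 0.59*G + 0.11*B.
    The input is assumed to be an H x W x 3 array in RGB channel order."""
    rgb = rgb.astype(np.float32)
    gray = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)

# Example on a synthetic frame
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
gray = rgb_to_gray_weighted(frame)
print(gray.shape, gray.dtype)  # (480, 640) uint8
```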

3.2

Image De-noising

Image de-noising improves image clarity by filtering out the various noise signals produced during image acquisition; in a sense, it is a kind of image enhancement. There are many image filtering methods, such as the mean filter, the neighborhood average method, the median filter, and the Gaussian filter. After comparing the effects of several methods, this paper selects median filtering to pre-process the flame image. The median filter is a nonlinear smoothing technique based on order statistics that suppresses noise effectively. It is convenient in practice because it does not require knowledge of the statistical characteristics of the image. Its principle is to sort the gray values of the pixels in the neighborhood of the center pixel, find the middle value, and replace the center pixel value with it. The processing steps of the median filter are as follows (see the sketch below):

(1) Traverse the template over the whole image, making the template center coincide with each image pixel in turn;
(2) Read the gray values of the pixels under the template;
(3) Sort the read pixel values from largest to smallest;
(4) Find the middle value of the sorted sequence;
(5) Assign this middle value to the pixel at the center point.

This paper uses 3 × 3, 5 × 5, and 7 × 7 templates, respectively, to filter the flame image of refuse incineration. The effects are shown in Fig. 3. From the results, the edges of the central region are very clear after median filtering, and this region is enhanced and highlighted. The actual combustion flame fluctuates in the time domain, and a larger gray gradient is closer to the actual combustion conditions. Median filtering therefore removes the noise while protecting and enhancing the image contours and edges.
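The median filtering steps above can be sketched directly in NumPy (a naive, illustrative implementation; a deployed DSP system would use an optimized routine):

```python
import numpy as np

def median_filter(gray, ksize=5):
    """Naive median filter: replace each pixel with the median of its
    ksize x ksize neighborhood (edges handled by reflection padding)."""
    pad = ksize // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.empty_like(gray)
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + ksize, x:x + ksize]
            out[y, x] = int(np.median(window))
    return out

# Example: compare the template sizes used in Fig. 3
gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
filtered = {k: median_filter(gray, k) for k in (3, 5, 7)}
```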

Fig. 3 The filtering diagrams of different templates (3 × 3, 5 × 5, and 7 × 7)

3.3

Image Segmentation

Image segmentation separates the image of interest from the background, and thresholding is the most commonly used approach. For example, the fixed-threshold method selects a fixed value as the segmentation threshold: if the threshold is T, each pixel gray value is compared with T; when the value is greater than T, the pixel is set to 255, otherwise it is set to 0. This method does not work well for flame images, so a locally adaptive binarization method based on a DFC clustering algorithm is adopted in this paper [15]. This method can segment the refuse incineration image, and the segmentation result meets the requirements. The image is divided evenly into many r × r sub-regions; in each sub-region a sub-threshold is computed by the DFC clustering algorithm, and the sub-region is segmented with that threshold. The segmentation results of the sub-regions are then combined to obtain the segmentation of the whole image [16]. The clustering algorithm generally has three steps: defining the grid, initializing the clusters, and merging sub-classes. Figure 4 shows the segmentation result after the clustering algorithm; the image contour is very clear and retains most of the flame information.
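A sketch of the block-wise adaptive binarization structure described above; the per-block threshold here is simply the block mean, standing in for the paper's DFC clustering criterion, so it illustrates the structure rather than the exact algorithm of [15].

```python
import numpy as np

def blockwise_binarize(gray, r=32, threshold_fn=None):
    """Split the image into r x r sub-regions, compute one threshold per block
    (the block mean here; the paper derives it from a DFC clustering criterion),
    binarize each block independently, and stitch the results back together."""
    if threshold_fn is None:
        threshold_fn = lambda block: block.mean()
    h, w = gray.shape
    out = np.zeros_like(gray)
    for y0 in range(0, h, r):
        for x0 in range(0, w, r):
            block = gray[y0:y0 + r, x0:x0 + r]
            t = threshold_fn(block)
            out[y0:y0 + r, x0:x0 + r] = np.where(block > t, 255, 0)
    return out

# Example
gray = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
binary = blockwise_binarize(gray, r=32)
```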

3.4

Image Corrosion and Expansion

The image after pre-processing usually still has discontinuous edges, residual noise and similar defects; in this paper, morphological operations are used to corrode (erode) and expand (dilate) the image to overcome these deficiencies [17]. Corrosion (erosion) eliminates boundary points so that the object contracts toward its interior; it removes small noise and other insignificant objects and can cut the “bridges” between regions. Expansion (dilation) merges all background points touching an object into the object, expanding the boundary outward; it can fill holes in an object. Figure 5 shows the results of corrosion and expansion with a round 5 × 5 structuring element.
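A compact, illustrative sketch of binary erosion and dilation with a structuring element (the round 5 × 5 element mirrors the one mentioned in the text; this is not the paper's implementation).

```python
import numpy as np

def _morph(binary, se, rule):
    """Shared scan for erosion/dilation over a binary (0/255) image."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(binary, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(binary)
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = 255 if rule(window[se > 0] == 255) else 0
    return out

def erode(binary, se):
    """Pixel stays foreground only if the whole structuring element fits."""
    return _morph(binary, se, np.all)

def dilate(binary, se):
    """Pixel becomes foreground if any part of the structuring element touches it."""
    return _morph(binary, se, np.any)

# Round-ish 5 x 5 structuring element, mirroring the one mentioned in the text
se = np.array([[0, 1, 1, 1, 0],
               [1, 1, 1, 1, 1],
               [1, 1, 1, 1, 1],
               [1, 1, 1, 1, 1],
               [0, 1, 1, 1, 0]], dtype=np.uint8)

binary = np.zeros((40, 40), dtype=np.uint8)
binary[10:30, 10:30] = 255
cleaned = dilate(erode(binary, se), se)   # opening: removes small noise, keeps shape
```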

Fig. 4 Image segmentation (original image and image after segmentation)

Fig. 5 Image of corrosion and expansion

3.5

Edge Extraction

Traditional edge detection algorithms mainly use spatial differential operators applied through convolution. Because the gray value at an edge is discontinuous, derivative operators can detect the gray-level change: the derivative operator is applied to the image to highlight local edges, the derivative values are taken as the boundary strength of the corresponding points, and the boundary is finally extracted by thresholding. There are many such operators, including the Sobel, Prewitt, Roberts, and Canny operators; they share the characteristic of constructing the edge from the neighborhood of a pixel in order to detect local discontinuities. Although they have the advantage of low computational cost, these operators are very sensitive to noise because of the differential operation they rely on. As a result, noise is often mistaken for an edge (a false edge), while the real edge goes undetected because of the noise disturbance.
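For reference, a gradient-operator edge detector of the kind just described can be sketched as follows (illustrative only; the threshold value is an arbitrary assumption):

```python
import numpy as np

def sobel_edges(gray, thresh=120):
    """Gradient-operator edge map: convolve with the Sobel kernels, take the
    gradient magnitude as boundary strength, and threshold it."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    padded = np.pad(gray.astype(np.float32), 1, mode="reflect")
    h, w = gray.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * kx)
            gy[y, x] = np.sum(window * ky)
    magnitude = np.hypot(gx, gy)
    return (magnitude > thresh).astype(np.uint8) * 255

# Example
gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
edges = sobel_edges(gray)
```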

Fig. 6 Simulation results of edge detection operators (original image, phase consistency algorithm, Canny operator)

Consequently, the detected edges often contain gaps, burrs, and similar defects. After the importance and stability of phase information was recognized, edge detection based on phase information became a new research topic [18]. In this paper, the classical Canny algorithm and the phase consistency algorithm were both used to detect edges; the comparison is shown in Fig. 6. Compared with the ordinary gray-based extraction result, the edges extracted by phase consistency have finer lines and better closure, which is more convenient for the subsequent measurement and calculation of parameters. For original images with different contrast, the edge result of the phase consistency method is clearly better than that of the Canny operator. The phase consistency method is therefore selected to extract the edge contour in this paper.

3.6

Calculate the Flame End Points

First, we introduce the method of calculating the flame end point. All pixels are checked; a pixel whose 4-connected neighborhood contains both background pixels and object pixels is judged to be on the boundary. In the flame image after expansion, the last pixel with gray value 255 in each column is taken as that column's flame end point. The end point position of the whole image is then calculated from the end points found in each column; columns that contain no target do not participate in this calculation. In this paper, three ways of computing the final position are provided: the arithmetic mean, the median, and the maximum.

(1) Arithmetic mean: take the arithmetic mean of the end point positions of all columns that contain a target and keep the integer part of the result.
(2) Median: sort the end point positions of all columns that contain a target in non-descending order and take the middle value of the sorted sequence as the result.
(3) Maximum: take the maximum of the end point positions of all columns that contain a target as the result.
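A sketch of the column-wise end-point search and the three aggregation methods, assuming a binary image in which flame pixels have the value 255:

```python
import numpy as np

def flame_end_points(binary):
    """Column-wise end-point search on a binary (0/255) image: for every column
    containing flame pixels, take the last flame pixel in scan order (which pixel
    counts as 'last' depends on the camera orientation -- an assumption here),
    then aggregate with the three methods described in the text."""
    per_column = []
    for x in range(binary.shape[1]):
        rows = np.flatnonzero(binary[:, x] == 255)
        if rows.size:                         # columns with no target are skipped
            per_column.append(int(rows[-1]))
    if not per_column:
        return None
    return {
        "mean":   int(np.mean(per_column)),   # integer part of the arithmetic mean
        "median": int(np.median(per_column)), # the value used as the final result
        "max":    int(np.max(per_column)),
    }

# Example on a synthetic binary flame image
img = np.zeros((100, 50), dtype=np.uint8)
img[10:60, 5:40] = 255
print(flame_end_points(img))  # {'mean': 59, 'median': 59, 'max': 59}
```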


Fig. 7 Diagram of the flame end point position

In this flame detection system, the end point position map is produced by statistically arranging the column-wise end point separatrix map; according to the flame characteristics of refuse incineration, the median is chosen as the final flame end point position. As shown in Fig. 7, the red line is the final flame end point position obtained by the median method, the yellow line is the result of the arithmetic mean, and the black line is the result of the maximum value. Figure 7 shows that the flame burning position can be controlled well when the flame end point detection system is used in the refuse incinerator: it keeps the burning position near the center of the grate, which extends the service life of the incineration equipment and improves the safety of the whole control system.

4 Conclusions

The flame detection system for refuse incineration presented in this paper is based on ARM and DSP. According to the principles of the waste incineration process, we designed a hardware structure consisting of a DSP signal processor and an ARM controller. According to the characteristics of refuse incineration, the acquired images are pre-processed first; on this basis the end points of the image edge are calculated, and the flame end point position is finally obtained with the median method. Image acquisition and processing are realized entirely in the DSP with its strong signal processing capability, and the result is converted into a 4–20 mA current signal for the ARM controller. The ARM controller then regulates the grate speed, the air volume and a series of other control operations according to the signal value. This allows the system to reach optimal combustion and the requirements of environmental protection, and ensures stable and economical operation. The ARM and DSP structure overcomes the shortcomings of the previous method based only on machine vision on a PC: it removes the dependence on the PC and meets the requirements of real-time, fast operation.


Actual flame tests show that the system operates stably and effectively. The system therefore has the advantages of low cost, good real-time performance and stability, and it is worth promoting widely in the refuse incineration industry.

References

1. Song, Z.W., Lv, Y.B., Liang, Y.: Development of refuse incineration technology of municipal solid waste both here and abroad. Eng. Environ. Health 15(1), 21–24 (2007)
2. Zhu, F.G.: Discussion of technology and control characteristics for refuse incineration in Suzhou. New Resour. Renew. Resour. (2009)
3. Hua, X.G.: Research on Fuzzy Control Refuse Combustion Automatically of Refuse Incineration. Chongqing University, Chongqing (2008)
4. Zhen, C.G., Han, P., Niu, Y.G.: Image processing and temperature field reconstruction of furnace flame. Power Eng. (2003)
5. Shen, B.: Flame Detection System of Entire Furnace Based on Digital Image Processing Technology. Huabei University of Electric Power, Baoding (2004)
6. Wang, F.: Flame Temperature Field Measurement and Combustion Diagnosis Based on Image Processing Technology. Zhejiang University, Hangzhou (2009)
7. Xu, G.: Design of embedded video acquisition system based on ARM11. Measur. Control Technol. 32(12), 37–44 (2013)
8. Hong, W.: Research on Video Monitor Plat Based on ARM and DSP. Dalian University of Science and Engineering, Dalian (2009)
9. Wu, J.J.: Design and Realization of Embedded Real-Time Image Processing Plat Based on ARM+FPGA+DSP. Huazhong University of Science and Technology, Wuhan (2012)
10. Lv, L.X., Ding, D.R., Yang, K.Y., Xu, J.T.: Design of fire detection system based on ARM and image recognition. Comput. Eng. Des. 29(10), 2530–2533 (2008)
11. Li, W.L.: Recognition Research and Realization of Barcode Picture Based on ARM. Nanjing University of Science and Engineering, Nanjing (2013)
12. Lang, P.: Research on Real-Time Image Processing System Based on ARM and DSP. Tianjin University of Science and Engineering, Tianjin (2007)
13. Russ, J.C.: The Image Processing Handbook. CRC Press, Inc. (1995)
14. Sun, J.: Image Analysis. Beijing (2005)
15. Schneider, A.: Weighted possibilistic C-means clustering algorithm. In: The 9th IEEE International Conference on Fuzzy Systems, pp. 176–180 (2000)
16. Gao, X.B., Li, J., Ji, H.B.: A multi-threshold image segmentation algorithm based on weighting fuzzy C-means clustering and statistical test. Acta Electronica Sinica 32, 661–665 (2004)
17. Yang, Y., Zheng, M.: Vehicle license plate location based on histogram and mathematical morphology. In: Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 89–94 (2005)
18. Xiao, Z.T., Hou, Z.X., Guo, D.M.: Image features detection algorithm based on phase: phase consistency. Signal Process. (2004)

Routing Protocols in Delay Tolerant Networks: Application-Oriented Survey Rahul Johari and Sakshi Dhama

Abstract In today's world, frequent communication disruptions arise in wireless networks due to various factors such as natural calamities: earthquakes, cyclones, thunderstorms, and other fierce climatic conditions in inhabited areas. Such frequent disturbances result in failure to establish an end-to-end connection path between the nodes of a network. This problem sheds light on the immediate need for the Delay Tolerant Network (DTN). Existing mobile ad hoc routing protocols fail to provide services over a dynamically changing topology that suffers from frequent delays. This paper discusses various routing protocols that provide improved performance and reduced delays over their predecessors. A routing protocol that can deliver a data packet in the minimum amount of time, with high reliability and minimum delay, is considered the best. The protocols surveyed in the current work perform better in sparsely connected, partitioned, and intermittent networks.

Keywords DTN · Epidemic routing · COMFA · DSG · RIMCA

1 Introduction

A mobile ad hoc network consists of mobile devices interconnected over a wireless medium. For example, mobile devices such as hand-held mobile phones, laptops, smart phones, palmtops, personal digital assistants (PDAs), pagers, and personal navigation devices can be interconnected via Bluetooth or a wireless LAN. The mobile nodes in any network follow a particular movement pattern.

R. Johari (&), USICT, GGSIP University, Delhi 110078, India. e-mail: [email protected]
S. Dhama, Indira Gandhi Delhi Technical University for Women, Delhi 110006, India. e-mail: [email protected]


Fig. 1 Working example of DTN application

For example, a pedestrian carrying a smart phone may walk randomly but along a certain path. Similarly, other mobile nodes such as buses move on a specific route and follow a fixed pattern, and some nodes have both a fixed schedule and a fixed route, like the Delhi Metro rail (Fig. 1), whose trains arrive at every station at periodic intervals of 2–4 min. However, if any unusual situation arises, these nodes must adjust their positions dynamically. Due to the mobility of these devices, the network has no fixed topology. Delay tolerant networks are networks characterized by frequent delays incurred through disruption of the end-to-end communication path between sender and receiver. The term "delay tolerant" itself signifies that the ultimate goal of a DTN is the eventual delivery of the message, in spite of the delay incurred. A DTN consists of interconnected mobile devices termed nodes, and its topology changes dynamically due to their constant movement. Intermediate mobile nodes buffer the message until a broken path is re-established. In such a dynamically changing topology, traditional routing protocols for mobile ad hoc networks fail because of the frequent delays. The paper is organized as follows: Sect. 2 discusses general applications of DTN, Sect. 3 discusses the background, Sect. 4 discusses the application of DTN contacts in a village scenario, Sect. 5 discusses the various routing protocols, and Sect. 6 presents the conclusion and future work, followed by the acknowledgement and references.


2 General Applications of DTN

Over the years, researchers have explored several applications of routing in DTN (Fig. 1), such as:

1. The InterPlaNetary (IPN) Internet project [1].
2. The Wizzy digital courier service, which provides asynchronous (disconnected) Internet access to schools in remote villages of South Africa [5].
3. A scenario in which a hypothetical village is served by a digital courier service, a wired dial-up Internet connection, and a store-and-forward LEO satellite. Route selection through any one of them depends upon a variety of factors including message source and destination, size, time of request, available connections, and other factors such as cost and delay [5].
4. Transmission of information/messages during mission-critical operations such as natural disasters: earthquakes, floods, and cyclones.
5. Resource discovery, where people want to find a particular service without knowing its exact location, such as searching for a historical monument to visit in a city, an ATM in a metro town, or a vacant parking slot in a shopping mall/plaza.
6. Long-distance education, where asynchronous Internet access or VSAT can be used to reach schools or colleges in remote villages, with state transport buses acting as ferries carrying the resource material.

3 Background

Over the last decade, DTN has drawn considerable attention from researchers because of the problems caused by loss of end-to-end connectivity, which arises from persistent bad weather conditions or in regions that suffer from natural calamities. DTN is the answer for underwater networks and for other networks with intermittently connected nodes that suffer from delays [5]. Epidemic routing uses pairwise encounters to deliver the message, but this scheme is costly in terms of message traffic and is therefore not well suited to large networks. One of the greatest challenges is the limited buffer space of a node, since it has to store messages continuously. Present-day classes of routing algorithms are based on notions drawn from day-to-day life, such as probability and social grouping. The probability that a node can deliver a message to the destination varies over time, so the strength of its contact frequency with the base station is a determining factor in choosing a node with a higher delivery probability. In social grouping, every node remains in touch with some other nodes, and social groups are formed based on these interactions. Dynamic social grouping makes use of both probabilistic and social group


formation for routing. Since messages are stored in the buffers of intermediate nodes, the security of the message is also an important criterion.

4 Application of DTN Contacts in a Village Scenario

Routing schemes in DTN, both deterministic and stochastic, fail to fully exploit the varied types of DTN contacts [4], viz. scheduled contacts, on-demand contacts, predicted or probabilistic contacts, persistent contacts and opportunistic contacts. While travelling through the long, dry and arid regions of some of the 33 districts of the state of Rajasthan in India, we came across many issues that touch the lives of ordinary villagers, such as hygienic sanitation, the public distribution system for the supply of sugar, wheat and pulses, the mid-day meal scheme, and various sub-issues related to increasing production in horticulture, agriculture, dairies and fisheries, with which villagers have to struggle on a day-to-day basis. The most challenging issue, however, was the inaccessibility of quality drinking water in a number of remote towns and villages. We present how the different types of contact can be applied to a village scenario for distributing water.

4.1

Persistent Contact

In a persistent contact, a direct water pipeline is laid right from the source (the municipal overhead water tank) to the different households of the village.

4.2

Scheduled Contact

A scheduled contact can be visualized as follows: the contact specifies the time of day at which water is going to be supplied by the municipal corporation, so that the villagers can receive the water supply in their households on the designated day and time.

4.3

Opportunistic Contact

An opportunistic contact can be visualized as follows: there is no fixed time at which water is supplied by the municipal corporation to the villages, so the villagers remain on alert throughout the day to receive quality drinking water whenever it arrives.

4.4

On-Demand Contact

In such a contact, if the water pipelines burst due to digging, soil erosion or landslides, the village reels under an acute shortage of water, with no supply of municipal water for days together. To relieve the villagers, the sarpanch of the gram sabha requests the municipal corporation to provide a water tanker to the affected village until the pipelines are repaired and the situation is normalized.

4.5

Probabilistic Contact/Predicted Contact

In such a contact, the water supply to a village is intermittent and erratic, so the villagers are never sure when they will receive water. They observe and remember the timings of the water supplied by the municipal corporation over the last few days, and from this they estimate the probability of the days of the week and times of day at which the corporation will supply water; this estimation is revised almost daily.

5 Routing Protocols

5.1

Epidemic Routing Scheme

Epidemic Routing [10] takes on the task of ultimately delivering the message to the destination without considering the underlying topology. In the epidemic routing protocol, when two nodes enter each other's transmission region, the node with the smaller identifier initiates an anti-entropy session with the node with the larger identifier. All nodes maintain a cache of recently encountered nodes, and anti-entropy is not started with nodes that were encountered recently, that is, those still in the cache. When a node X enters the transmission range of node Y, it starts the anti-entropy session with Y by transmitting compressed information about all the messages contained in X's buffer. Node Y decides independently, while replying to X, which messages it actually needs, and X sends the messages requested by Y. The same procedure continues whenever node Y comes into contact with any other node, and this pairwise exchange mechanism guarantees that the message is ultimately delivered. Optionally, each message carries a unique message identifier, a hop count, and an acknowledgment request; messages with a higher priority are generally given a higher hop count. The limitations of epidemic routing include limited network resources and buffer size: messages are dropped whenever the buffer space is full.
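A minimal sketch of the pairwise anti-entropy exchange: each node offers a summary of its buffer, the peer requests what it lacks, and copies are transferred; buffer limits are modeled crudely and the cache of recently met peers is omitted. Class and method names are assumptions, not the protocol's specification.

```python
class EpidemicNode:
    """Toy epidemic-routing node: the buffer maps message IDs to payloads."""

    def __init__(self, node_id, buffer_limit=50):
        self.node_id = node_id
        self.buffer = {}
        self.buffer_limit = buffer_limit

    def summary_vector(self):
        """Compressed description of the buffer contents (here just the ID set)."""
        return set(self.buffer)

    def request_missing(self, peer_summary):
        """IDs the peer holds that this node does not."""
        return peer_summary - self.summary_vector()

    def receive(self, messages):
        for mid, payload in messages.items():
            if len(self.buffer) < self.buffer_limit:   # messages dropped when full
                self.buffer[mid] = payload

def anti_entropy(a, b):
    """One pairwise exchange in both directions."""
    wanted_by_b = b.request_missing(a.summary_vector())
    b.receive({m: a.buffer[m] for m in wanted_by_b})
    wanted_by_a = a.request_missing(b.summary_vector())
    a.receive({m: b.buffer[m] for m in wanted_by_a})

# Example
x, y = EpidemicNode("X"), EpidemicNode("Y")
x.buffer = {"m1": "hello", "m2": "data"}
y.buffer = {"m3": "ack"}
anti_entropy(x, y)
print(sorted(x.buffer), sorted(y.buffer))  # both now hold m1, m2 and m3
```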

5.2

Spray and Wait Routing Scheme

Spray and Wait [9] was proposed for intermittently connected mobile networks. It performs considerably fewer transmissions than its predecessors such as epidemic routing; it is simple, close to optimal in delay, and does not depend much on the underlying topology. Spray and Wait consists of two phases. Spray Phase. The source node initiates this phase by forwarding a fixed number of copies of the message to distinct relay nodes in the network. Waiting Phase. If the spray phase fails to find the destination, each node carrying a copy of the message waits to transmit its copy directly to the destination. A more balanced variant with reduced residual delay is Binary Spray and Wait. In Binary Spray and Wait, when a node carrying more than one copy of a message meets another node, it transfers half of its copies to that node and keeps the rest; this continues until a node is left with a single copy, at which point it switches to direct transmission. A high number of active nodes ensures that copies are spread quickly and the residual delay is reduced, yielding a balanced binary tree of copy distribution. Although the algorithm is designed for delay tolerant networks, delays inevitably affect its performance in some scenarios. To address this, an expected delay is computed, and the number of copies required in the spray phase to meet that delay is derived. The required number of copies depends on the number of nodes in the network, but the expected delay is independent of the network size. All nodes maintain a list of the nodes to which they are coupled. Spray and Wait produces less contention than flooding schemes. If the number of copies is much smaller than the number of nodes, the delay of the waiting phase dominates the overall delay. If the ratio of the number of copies to the number of nodes is held constant, Spray and Wait exhibits a delay close to that of the optimal scheme, which makes the routing scheme highly scalable. Unlike other multi-copy schemes, Spray and Wait reduces the number of transmissions per node as the number of nodes in the network increases; therefore, it performs better in large networks than other schemes.
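A sketch of the copy-splitting rule of Binary Spray and Wait: a node holding more than one copy hands half of them to the encountered node and keeps the rest, and with a single copy left it waits for direct delivery (illustrative code, not from [9]).

```python
def binary_spray(copies_held, meets_destination):
    """Decision rule for one encounter in Binary Spray and Wait.
    Returns (copies_to_transfer, copies_kept, delivered_directly)."""
    if meets_destination:
        return copies_held, 0, True          # hand everything to the destination
    if copies_held > 1:                      # spray phase: split the copies in half
        transfer = copies_held // 2
        return transfer, copies_held - transfer, False
    return 0, copies_held, False             # wait phase: only direct delivery

# Example: a source starting with L = 8 copies meets relays that hold none
state = 8
for encounter in range(4):
    sent, state, _ = binary_spray(state, meets_destination=False)
    print(f"encounter {encounter}: sent {sent}, kept {state}")
# halves each time until a single copy remains and the node enters the wait phase
```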

5.3

Ferry-Based Intrusion Detection Scheme

The ferry-based intrusion detection scheme [3] (Fig. 2) is meant for sparsely connected networks. The ferries travel on a predetermined route and stop at certain points to broadcast a secret service message to particular nodes that know the ferry's public key. Each node, on receiving the encrypted message from the ferry, also shares its encounter and delivery predictability information with the ferry. All the information collected by the ferry is used to

Fig. 2 Working example of ferry-based scheme on Delhi Metro

identify the malicious nodes. In the ferry-based intrusion detection scheme, each node maintains, besides its current delivery predictability, some previous values of its delivery predictability to other nodes in a delivery encounter table. The ferries look for inconsistencies between the delivery predictability of node A to B and of B to A, as reported by node A and node B, respectively. The delivery predictability is then estimated using most-encountered-node information and compared with the value obtained from the simulated results; if these two values do not match, the node is black-listed. The remaining nodes also update their lists of black-listed nodes, and black-listed nodes are never used as next-hop nodes. This scheme reduces the performance degradation caused by the dropping of data packets. Its limitations are that, since it operates in a sparsely connected scenario, only a limited number of nodes are available for monitoring, and that processing all the overhead packets consumes energy.

5.4

Dynamic Social Grouping Based Routing

The DSG algorithm [2] (Fig. 3) explores patterns of social group formation among the nodes of a network. The group identification phase starts with an estimate of the contact frequency between two nodes. The main steps of the algorithm are summarized as follows:


Fig. 3 Working example of DSG scheme: (the left figure shows the situation in a network before merging of two groups and the right figure shows a smaller group getting killed after being merged into larger group)

Group Formation. When two nodes frequently remain in touch with each other, they become members of the same group. Cluster Head Selection. Upon group formation, the member node with the highest delivery probability is chosen as the cluster head of the group. Usually the nodes closer to the base station (which is fixed and has a high delivery probability) are chosen as cluster heads. Group Merging. After group formation, two groups are merged only if there is a sufficient degree of similarity between them. The similarity factor is calculated as the ratio of the number of common members of the two groups to the total number of members of the two groups. The cluster head takes the decision regarding the merging of two or more groups: when it receives a merge request, it compares the member list contained in the message with its own and then decides independently. Usually the smaller group is merged into the larger group, and the smaller group is then deleted/killed. The groups are updated dynamically because of the changing nature of social groups: over time, the contact frequency of a node with all the members of its group is recalculated, and a low value indicates that the node is no longer in frequent touch with its group members and should withdraw from the group. Any decision regarding the removal, updating or addition of a node is taken solely by the cluster head. Individual Probability. All nodes initially have a default probability value. When two nodes meet, the node with the lower probability value transfers all its messages to the node with the higher probability value for further forwarding, and the probability values of both nodes are updated in the process. All the messages


also maintain a time log from the moment the message was initiated; if a message expires, the individual probability is reduced. Group Probability. Each node of a group calculates its own group probability on the basis of its contact frequency, so every node has a different group probability even though it belongs to the same group. For nodes that have not yet been encountered, the probability is calculated from the average values of the group probability and the individual probability. For example, when node A encounters node B, their combined probability is calculated from the individual probabilities and the group probabilities of all the groups they are members of, taking the maximum over those groups. This makes DSG more efficient than a regular probabilistic routing scheme.
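A sketch of the group-similarity test and merge rule described under Group Merging; the similarity threshold used here is an assumed example, not a value from [2].

```python
def similarity(group_a, group_b):
    """Ratio of common members to the total number of distinct members."""
    total = group_a | group_b
    return len(group_a & group_b) / len(total) if total else 0.0

def maybe_merge(group_a, group_b, threshold=0.5):
    """Merge the smaller group into the larger one when similar enough.
    The 0.5 threshold is an assumed example, not a value taken from [2].
    Returns (merged_group, killed_group) or (None, None) if no merge happens."""
    if similarity(group_a, group_b) < threshold:
        return None, None
    larger, smaller = ((group_a, group_b) if len(group_a) >= len(group_b)
                       else (group_b, group_a))
    return larger | smaller, smaller          # the smaller group is deleted/killed

# Example
g1 = {"n1", "n2", "n3", "n4"}
g2 = {"n3", "n4", "n5"}
print(similarity(g1, g2))        # 2 common / 5 distinct = 0.4
print(maybe_merge(g1, g2, 0.3))  # similar enough: merged, g2 is killed
```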

5.5

COMFA: Exploiting Regularity of People Movement for Message Forwarding in Community-Based Delay Tolerant Networks

In [11], the regular movement pattern of people in a given time slot on a particular day is used to impart knowledge to the nodes of a network. For example, at GGSIP University, New Delhi, there is a group of non-teaching staff who arrive at the university at the same time, meet during lunch and depart from the university at the same time (Fig. 4). Because of the similar nature of their assigned work, these employees interact with each other. Since these employees work for the same

Fig. 4 Working example of COMFA in GGSIP University


duration and on the same days of the week, they form part of the same group within the university campus. Similarly, the COMFA routing protocol exploits the behavior of nodes of the same community which meet regularly on a particular day. Two methods are employed. In the first approach, the contact history is used to estimate contacts for data forwarding in the near future; the social interaction of a node with other nodes updates its contact probability. In the second method, frequent communication among members of the same community is the criterion for identifying the most popular member of the community, and this popular node then acts as the forwarding node for upcoming messages; this second method is only partially dependent on the movement of the nodes. When two nodes with the same delivery probability meet, the contact number is used to choose the forwarding node: the node with the greater contact number forwards the copy of the message. COMFA performs better than epidemic routing in terms of average message delivery, and since COMFA is not a flooding-based technique, its average delivery time is also lower than that of epidemic routing.

5.6

DSG-PC: Dynamic Social Grouping-Based Routing for Non-uniform Buffer Capacities in DTN Supported with Periodic Carriers

In [7], the author(s) propose an approach that combines the advantages of social routing, probabilistic routing, and scheduled contacts for routing a message. For routing, they assume that social groups among opportunistic nodes are formed in the network in a similar way as in DSG. Periodic carriers do not participate in group formation and mergers. The individual probabilities are updated even when two nodes meet and there are no messages to be exchanged. In contrast to DSG, where the initial probabilities of the nodes are uniform, the initial probabilities are assigned in proportion to the buffer capacities of the nodes. A utility function is defined for a node to choose between an opportunistic carrier and a scheduled carrier; the buffer capacity of the scheduled carrier is higher than that of the other nodes. To be able to use groups to forward messages, the contact strength is used to define joint individual probabilities on which routing decisions are made. Simulations show that the message delivery ratio, message delay and traffic ratio improve considerably over DSG when the time period of the carriers is not too large. The impact of the carriers' time period on performance is also examined; it was observed that the delivery ratio increased significantly without an increase in the message traffic ratio or delay.

5.7

CACBR: Context-Aware Community-Based Routing for Intermittently Connected Network

In [6], the author(s) present a source-based routing approach named Context-Aware Community-Based Routing (CACBR), which exploits the existence of social groups and the context awareness of nodes for efficient delivery of messages in DTN. A node maintains a Delivery Probability Vector Table (DPVT) for storing the Delivery Probability Vector (DPV) received from each of its neighboring nodes. A DPV contains the node's delivery probability for each base station in the network. When a node has a message to forward to a destination node, it searches its DPVT for the best neighboring node. Multiple factors are used to compute the delivery probability vector, and it is shown that a node's mobility, which is widely used as one of the factors for determining a node's probability of delivering a message, may not add much to its delivery capability. Through extensive simulation on the Opportunistic Network simulator, the approach is shown to outperform both DSG (routing using dynamic social grouping and opportunistic contacts) and CAR (context-aware routing using a source-based approach) in terms of message delivery ratio, average delay per delivered message and average message traffic per delivered message.

5.8

Routing in MANET Using Cluster-Based Approach (RIMCA)

In [8], the author(s) propose a new cluster-based routing approach, coined RIMCA, for efficient routing of messages from source to destination. In RIMCA, a mobile ad hoc network (MANET) consists of mobile wireless nodes that move randomly within the boundary of a cluster (Fig. 5). Communication between mobile nodes within a cluster is carried out without any centralized control, whereas communication between nodes in different clusters is carried out

Fig. 5 Working example of RIMCA protocol


using a border cluster node (BCN) located at the border of the cluster. To assess the efficiency of the RIMCA approach, the author(s) compared it against two popular MANET protocols, AODV and DSR, using the Network Simulator (NS2). The performance of the routing protocols was analyzed using five metrics: protocol energy, route discovery time, end-to-end delay, packet size versus delivery ratio, and packet size versus throughput.

6 Conclusion and Future Work

The current work compares and contrasts various routing protocols used in delay tolerant networks. We have simulated some of these protocols [6–8] and propose to simulate all the remaining routing protocols discussed here and to show their comparative results on the basis of a number of parameters such as message traffic ratio, message delivery ratio, message delay, and the time to deliver a message.

Acknowledgments The authors are indebted to the administration of Guru Gobind Singh Indraprastha University for financial funding and for providing the academic environment to pursue the research activities.

References 1. Burleigh, S., Hooke, A., Torgerson, L., Fall, K., Cerf, V., Durst, B., Scott, K., Weiss, H.: Delay-tolerant networking: an approach to inter- planetary internet. IEEE Commun. Mag. 41 (6), 128–136 (2003) 2. Cabaniss, R., Madria, S.K., Rush, G., Trotta, A., Vulli, S.S.: Dynamic social grouping based routing in a mobile ad-hoc network. In: High Performance Computing (HiPC), 2010 International Conference on, pp. 1–8. IEEE (2010) 3. Chuah, M., Yang, P., Han, J.: A ferry-based intrusion detection scheme for sparsely connected ad hoc networks. In: Mobile and Ubiquitous Systems: Networking & Services, 2007. MobiQuitous 2007. Fourth Annual International Conference on, pp. 1–8. IEEE (2007) 4. Fall, K., Farrell, S.: Dtn: an architectural retrospective. IEEE J. Sel. Areas Commun. 26(5), 828–836 (2008) 5. Jain, S., Fall, K., Patra, R.: Routing in a Delay Tolerant Network, vol. 34. ACM (2004) 6. Johari, R., Gupta, N., Aneja, S.: Cacbr: context aware community based routing for intermittently connected network. In: Proceedings of the 10th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks, pp. 137– 140. ACM (2013) 7. Johari, R., Gupta, N., Aneja, S.: Dsg-pc: dynamic social grouping based routing for non-uniform buffer capacities in dtn supported with periodic carriers. In: Quality, Reliability, Security and Robustness in Heterogeneous Networks, pp. 01–15. Springer (2013) 8. Mahmood, D.A., Johari, R.: Routing in Manet Using Cluster Based Approach (RIMCA). In: 2014 International Conference on Computing for Sustainable Global Development (INDIACom), pp. 30–36. IEEE (2014)


9. Spyropoulos, T., Psounis, K., Raghavendra, C.S.: Spray and wait: an efficient routing scheme for intermittently connected mobile networks. In: Proceedings of the 2005 ACM SIGCOMM workshop on Delay-tolerant Networking, pp. 252–259. ACM (2005) 10. Vahdat, A., Becker, D., et al.: Epidemic routing for partially connected ad hoc networks. In: Technical Report CS-200006, Duke University (2000) 11. Vu, L., Do, Q., Nahrstedt, K.: Comfa: Exploiting Regularity of People Movement for Message Forwarding in Community-Based Delay Tolerant Networks (2010)

Survey of Indoor Positioning Systems Based on Ultra-wideband (UWB) Technology Guowei Shi and Ying Ming

Abstract Indoor positioning is a challenging research area, and various kinds of indoor positioning systems have been developed based on different technologies. Ultra-wideband (UWB) positioning technology is mainly used for high-accuracy indoor wireless positioning. This paper provides an overview of indoor positioning solutions based on UWB technology. First, the conception, standardization, and advantages of UWB are introduced, and then four location measure techniques based on UWB technology are analyzed. Finally, the applications and future trends of the technology are discussed.

Keywords Indoor positioning systems · Ultra-wideband · Measure technique

1 Introduction

In recent years, location-based services have been in great demand for various applications, such as navigation and tracking [1–3]. Location information has become one of the key factors for analyzing people's behavior. Outdoor location information can be obtained easily using GPS, whose accuracy can reach 10 m for commercial use, and GPS is now a standard feature of mobile devices. Like outdoor location information, indoor location information is very useful for applications such as personalized services in banking, retail, healthcare, and workforce management. Markets and Markets forecasts that the indoor positioning market will reach $2.60 billion by 2018.

G. Shi (&), Academy of Telecommunication Research of MIIT, Beijing 100084, China, e-mail: [email protected]; G. Shi, Network Manage Center of CAPF, Beijing 100089, China; Y. Ming, Political Institute of CAPF, Shanghai 200435, China


Compared with outdoor positioning, indoor location information is more difficult to obtain because of the complex indoor environment, which causes reflection and attenuation of signals and mixes in various interference from walls, furnishings, and even people [4]. GPS, designed for outdoor positioning, performs poorly indoors because the line-of-sight (LOS) transmission between GPS devices and satellites is attenuated in an indoor environment. Therefore, indoor positioning faces more technical challenges than outdoor positioning. Many techniques are used for indoor positioning [5–7]. Figure 1 shows the main methods; they can be classified by the medium used or by the principle used. Based on the medium used, the main indoor positioning techniques can be classified into several types, as illustrated in Table 1. Based on the principle used for positioning, the techniques can be classified as:

• Techniques using angle of arrival
• Techniques using time of arrival
• Techniques using signal strength
• Techniques using proximity

When evaluating a technique for indoor positioning, we should consider its accuracy, deployment complexity, cost, and portability. Taking WiFi as an

Fig. 1 Comparison of indoor positioning system

Table 1 Indoor positioning techniques based on media

Media            Techniques
Optical          Techniques using LED, infrared, laser
Ultrasound       Techniques using ultrasound
Wireless signal  WiFi, ultra-wideband (UWB), RFID, Bluetooth, near field communication (NFC), wireless sensor network (WSN), UHF
Magnetic         Techniques using magnetic fields
Video            Techniques using video


example, we can localize an object using the WiFi signal strength. In general, the accuracy of WiFi positioning is 3–5 m in a typical indoor environment. Since most mobile phones support WiFi, this technique is portable and low cost. However, deploying a WiFi-based positioning system is complex and time consuming: with the fingerprint method, a radio map must be created before the system can run, and the engineering effort grows with the size of the indoor area. That is why Apple's iBeacon solution (Bluetooth 4.0) is widely used in retail instead of WiFi. UWB technology, by contrast, can achieve accuracy as high as 20 cm. Therefore, for applications requiring high accuracy, UWB is a good choice for indoor positioning. In this survey, we focus on UWB IPSs, which can provide high-accuracy positioning.

1.1

The Definition of UWB

The Federal Communications Commission (FCC) and ITU-R define UWB as a transmission from an antenna for which the emitted signal bandwidth exceeds the lesser of 500 MHz or 20 % of the center frequency. In 2002, the FCC opened the 3.1–10.6 and 22–29 GHz bands to UWB and limited the emitted power spectral density of UWB transmitters to −41.3 dBm/MHz [8].
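The bandwidth criterion above is easy to check directly; a tiny sketch (values are illustrative):

# A signal is UWB if its bandwidth exceeds the lesser of 500 MHz
# or 20 % of the center frequency (FCC/ITU-R definition quoted above).
def is_uwb(bandwidth_hz, center_freq_hz):
    return bandwidth_hz > min(500e6, 0.2 * center_freq_hz)

print(is_uwb(600e6, 4e9))   # True: 600 MHz > min(500 MHz, 800 MHz)
print(is_uwb(300e6, 4e9))   # False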

1.2

The Standardization Efforts for UWB

Channel Model. IEEE has standardized the channel models for UWB. For high-throughput applications [9], IEEE defines the channel model of IEEE 802.15.3a, and for low data rate systems it defines the channel model of IEEE 802.15.4a [10]. Both channel models are described by three components: path loss, small-scale fading, and large-scale fading. There are some key differences between the IEEE 802.15.3a and IEEE 802.15.4a channel models. First, IEEE 802.15.4a models the channel impulse response as a complex baseband process, while IEEE 802.15.3a uses a real-valued model. Second, ray arrival times in 802.15.4a are mixed Poisson, while in IEEE 802.15.3a they are modeled as plain Poisson. Third, in 802.15.4a the intra-cluster decay factor depends on the cluster arrival time. Fourth, the distribution of small-scale amplitudes is assumed to be Nakagami in 802.15.4a, but lognormal in 802.15.3a [11]. Physical Layer (PHY). Three UWB PHY layer standards are defined in IEEE 802.15.4a, IEEE 802.15.6, and IEEE 802.15.4f [12, 13]. IEEE 802.15.4a defines the direct-sequence UWB PHY, which is very efficient, supports precision ranging, and is very robust even at low transmit powers. The UWB PHY in IEEE 802.15.4f aims to reduce transmitter complexity and simplify the modulation, without scrambling and dithering, and with simple pulse shaping and a low pulse repetition frequency (PRF). The impulse-radio UWB PHY is


defined in IEEE 802.15.6. It includes IR-UWB and FM-UWB technologies and is mainly used for body area networks (BANs).

1.3

The Advantages of UWB IPSs

UWB IPSs have the following advantages [14]. First, unlike conventional radio systems operating on a specific radio frequency, a UWB IPS transmits a radio signal over an ultra-wide band of frequencies. Second, UWB signals are transmitted for a much shorter duration with very low power spectral density, so they consume less power than conventional systems. Third, because of this very low power spectral density, UWB can be used in close proximity to other RF signals without causing or suffering from interference. Fourth, UWB short-duration pulses are easy to filter in order to determine which signals are correct and which are reflections and diffractions. Fifth, the UWB signal propagates easily inside indoor environments. Finally, UWB can achieve very high indoor location accuracy (about 20 cm) with precise time-of-arrival (TOA) measurement.

2 Measure Techniques of UWB IPSs

2.1 Measure Technique Based on Time of Arrival (TOA)

TOA is a widely used method to measure the distance between the mobile target and the measuring unit. It is based on the fact that the distance is proportional to the propagation time. For two-dimensional positioning, we need at least three reference points for TOA measurements, as shown in Fig. 2 [15]. In TOA-based systems, the distance is calculated by measuring the signal propagation time, and the position of the mobile user is derived using a triangulation algorithm. Direct TOA measurement has two prerequisites. First, all transmitters and receivers of the IPS must be synchronized. Second, a time stamp must be inserted into the transmitted packets for propagation time measurement. Errors in synchronization cause incorrect localization. A direct calculation approach is to compute the intersection points of the TOA circles using a geometric method. The equation of a circle is given by

$$R_i = \sqrt{(x_i - x)^2 + (y_i - y)^2} = c\,(t_i - t), \qquad (1)$$

where $c$ is the speed of light, $t_i$ is the signal arrival time at measuring unit $i$, $(x_i, y_i)$ is the coordinate of beacon unit $i$, and $(x, y)$ is the coordinate of the mobile target. The location of the mobile target can be calculated using Eq. (1).


Fig. 2 Measure technique based on TOA

For a 2-D application, the parameter $i$ can be selected as 1, 2, and 3. Some optimized algorithms for position computation are proposed in [16, 17]. A sketch of the direct approach follows.
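A minimal Python sketch of TOA-based 2-D positioning from Eq. (1): subtracting pairs of circle equations gives a linear system in (x, y) that can be solved by least squares. The beacon coordinates and ranges below are made-up test values.

import numpy as np

def toa_position(beacons, ranges):
    # Linearize by subtracting the first circle equation from the others.
    (x1, y1), r1 = beacons[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        rows.append([2 * (xi - x1), 2 * (yi - y1)])
        rhs.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol  # estimated (x, y)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = np.array([3.0, 4.0])
ranges = [np.linalg.norm(target - np.array(b)) for b in beacons]
print(toa_position(beacons, ranges))  # ~ [3. 4.]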

2.2

Measure Technique Based on Time Difference of Arrival (TDOA)

Unlike the TOA method, which uses the absolute propagation time, TDOA uses differences in time to derive the relative position of the mobile target. The time difference refers to the difference in arrival times at multiple measuring units, which removes part of the problem caused by the measurement error of TOA. Using the measuring units, we obtain a set of hyperbolas, and the mobile target lies at the intersection of the hyperbolas formed through TDOA measurements, as illustrated in Fig. 3. The equation of a hyperbola is given by

$$R_{i,j} = \sqrt{(x_i - x)^2 + (y_i - y)^2} - \sqrt{(x_j - x)^2 + (y_j - y)^2} = c\,(t_i - t_j), \qquad (2)$$

where $(x_i, y_i)$ and $(x_j, y_j)$ are the coordinates of measuring units $i$ and $j$, $(x, y)$ is the coordinate of the mobile target, and $t_i$ and $t_j$ are the signal arrival times at measuring units $i$ and $j$; for a 2-D application, $i, j = 1, 2, 3, 4$. The solution of Eq. (2) can be obtained by nonlinear regression. In [18], the author proposed a linearized iterative algorithm to make this easier, and in [19], Depeng et al. used compressive sensing TDOA for positioning.
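As a rough sketch of the nonlinear-regression route mentioned above, the position can be estimated by least-squares fitting of the hyperbola residuals of Eq. (2). The measuring-unit coordinates and the choice of unit 0 as reference are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

def tdoa_position(units, range_diffs, x0=(1.0, 1.0)):
    """units: list of (x, y); range_diffs[i] = c*(t_{i+1} - t_0) w.r.t. unit 0."""
    ref = np.array(units[0])
    others = [np.array(u) for u in units[1:]]
    def residuals(p):
        d_ref = np.linalg.norm(p - ref)
        return [np.linalg.norm(p - u) - d_ref - rd
                for u, rd in zip(others, range_diffs)]
    return least_squares(residuals, x0).x

units = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = np.array([3.0, 4.0])
dists = [np.linalg.norm(target - np.array(u)) for u in units]
range_diffs = [d - dists[0] for d in dists[1:]]
print(tdoa_position(units, range_diffs))  # ~ [3. 4.]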


Fig. 3 Measure technique based on TDOA

Fig. 4 Measure technique based on AOA

2.3

Measure Technique Based on Angle of Arrival (AOA)

The AOA method uses the spatial information of signals for positioning. From the angle of arrival, we can draw the direction line between a measuring unit and the mobile target, and the location of the mobile target is the intersection of those direction lines. For a mobile target, two measuring units are needed to form two angles for positioning. Figure 4 is an illustration of AOA. The equation of the angle is given by

$$\tan(\theta_i) = \frac{x - x_i}{y - y_i}, \quad i = 1, 2. \qquad (3)$$

We can use an antenna array or a directional antenna to measure the AOA. One of the main advantages of the AOA method is that no synchronization is required for the positioning system, which removes the synchronization issue present in TOA and TDOA. Compared with TOA and TDOA, AOA needs fewer measuring units, which makes


system deployment easier and faster at lower cost (a sketch of the two-unit case follows). The disadvantage comes from the complexity of the hardware used in an AOA-based system, and the performance degradation for mobile targets moving far away is another main shortcoming of the AOA method [20–22].
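A minimal sketch of the two-unit AOA case of Eq. (3): each measured angle defines a bearing line, and the target is the intersection of the two lines. The unit coordinates are made up for illustration.

import numpy as np

def aoa_position(units, thetas):
    # Each line: x - x_i = tan(theta_i) * (y - y_i), rewritten as x - t*y = x_i - t*y_i.
    A, b = [], []
    for (xi, yi), th in zip(units, thetas):
        t = np.tan(th)
        A.append([1.0, -t])
        b.append(xi - t * yi)
    return np.linalg.solve(np.array(A), np.array(b))  # (x, y)

units = [(0.0, 0.0), (10.0, 0.0)]
target = np.array([3.0, 4.0])
thetas = [np.arctan2(target[0] - u[0], target[1] - u[1]) for u in units]
print(aoa_position(units, thetas))  # ~ [3. 4.]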

2.4

Measure Technique Based on Received Signal Strengths (RSS)

The measurement techniques discussed above share a common drawback: their performance degrades in NLOS environments. Most indoor environments contain furniture, people, and building elements that cause signal reflection and diffraction; signals cannot propagate over a LOS channel and suffer from multipath effects, so positioning accuracy decreases. In recent years, positioning methods using signal strength have become more attractive. By adopting a propagation model, we can calculate the distance between the mobile target and the measuring unit from the signal strength received at the mobile target. However, the derived distance depends strongly on the propagation model selected, and an inaccurate model causes large positioning errors. To mitigate this model dependence, researchers have begun to use RSS information directly for positioning, for example with the fingerprint method [23]. For the fingerprint method, a radio map of the indoor environment is created through RSS measurements; a radio map is an RSS distribution map with a pre-defined density of measuring points. With this radio map and updated RSS information, the position of the mobile target can be derived using various machine learning algorithms (a minimal example follows). In general, this achieves good performance for both LOS and NLOS channels.
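The following sketch shows one of the simplest machine-learning options for fingerprinting, a k-nearest-neighbor match in RSS space; the radio-map values are invented for illustration and the averaging rule is only one of many possible estimators.

import numpy as np

def knn_locate(radio_map, rss, k=2):
    """radio_map: list of (position, rss_vector); rss: measured vector."""
    dists = [(np.linalg.norm(np.array(vec) - np.array(rss)), pos)
             for pos, vec in radio_map]
    nearest = sorted(dists, key=lambda d: d[0])[:k]
    # Estimate: average of the k closest reference positions.
    return np.mean([np.array(pos) for _, pos in nearest], axis=0)

radio_map = [((0, 0), [-40, -70, -80]), ((0, 5), [-50, -60, -75]),
             ((5, 0), [-55, -72, -60]), ((5, 5), [-65, -58, -55])]
print(knn_locate(radio_map, [-52, -61, -73]))  # -> [0.  2.5] with k=2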

3 Applications of UWB IPSs

3.1 Commercial Applications of UWB IPSs

Indoor positioning system can be used for many applications. Since UWB system can provide high-accurate positioning, it is more often used in applications needed high accuracy, such as military, health care, and asset tracking. Zebra enterprise solution (ZES) and Ubisense demonstrated UWB IPS for asset tracking [24, 25], which utilize TDOA or a combination of TDOA and AOA. A hybrid system named as Navis™, developed by ZES, integrates UWB, GPS, and WiFi for robust asset tracking. The solution could be used for airport process optimization, marine terminals, defense, and automotive assembly optimization, etc. [26]. The Ubisense also developed a system to track automobiles at different stages in assembly plant. It also provided an indoor and outdoor combination positioning solution for military


usage, which integrated GPS and UWB technology seamlessly [27]. UWB IPS can also be used in autonomous cruise control, driver assistance and safety systems available in the Mercedes-S Class [28].

3.2

Research Applications of UWB IPSs

In UWB IPS research area, there are major achievements in certain topics, such as high-accuracy 3-D positioning for surgical navigation [29], low-power UWB CMOS for biosensors [30], and the development of the IEEE 802.15.4a standard for WSNs [31]. As depicted in article [29], real-time 3-D accuracy of 5– 6 mm was obtained in a range of experiments including tracking a robotic arm, free-form motion, and movement along an optical rail. The WSNs can also be used in personal area networks (PANs) for sensing, communication, and positioning. Recently, the UWB imaging is used for breast cancer detection and through-wall imaging. With integration of UWB positioning and imaging, UWB can be used in a global coordinate system.

4 Conclusion and Future Trends

In this paper, we describe the conception of UWB, introduce the standardization efforts for UWB, and discuss the advantages of UWB for IPSs. We then analyze four location measure techniques based on UWB technology. Finally, we overview the applications of UWB-based IPSs in both commercial and research areas. From this survey, we can see that the UWB IPS is a promising technology for high-accuracy indoor positioning due to its special characteristics. UWB positioning will keep improving in accuracy, toward, for example, 1-mm 3-D accuracy. Compared with other popular positioning technologies based on wireless signals, high accuracy is the key differentiator of UWB IPSs, and UWB IPSs will integrate with other systems to form redundant positioning strategies. Since a UWB IPS needs a special transmitter and receiver, the general applicability of UWB is limited to some degree. We can expect UWB IPSs to coexist with other indoor positioning technologies, each chosen according to different considerations.

References 1. Hightower, J., Borriello, G.: Location systems for ubiquitous computing. Computer 4(8), (2001) 2. Pahlavan, K., Li, X., Makela, J.: Indoor geolocation science and technology. IEEE Commun. Mag. 40(2), 112–118 (2002)


3. Liu, H., Darabi, H., Liu, J.: Survey of wireless indoor positioning techniques and systems. IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 37(6), 1067–1080 (2007) 4. Ladd, J.A.M., Bekris, K.E., Rudys, A.P., Wallach, D.S., Kavraki, L.E.: On the feasibility of using wireless ethernet for indoor localization. IEEE Trans. Wireless Commun. 5(10), 555– 559 (2006) 5. Vossiek, M., Wiebking, L., Gulden, P., Wiehardt, J., Hoffmann, C., Heide, P.: Wireless local positioning. IEEE Microwave Mag. 4(4), 77–86 (2003) 6. Hightower, J., Borriello, G.: Location sensing techniques. In: Technical Report UW CSE 2001-07-30, Department of Computer Science and Engineering, University of Washington (2001) 7. Hightower, J., Borriello, G.: Location systems for ubiquitous computing. IEEE Compu. Soc. Press 34(8), 57–66 (2001) 8. Federal Communications Commission: The first report and order regarding ultra-wideband transmission systems. In: FCC 02-48, ET Docket No. 98-153 (2002) 9. DS-UWB Physical Layer Submission to 802.15 Task Group 3a, IEEE 802.15.3a Working Group, P802.15.03/0137r0 (2004) 10. IEEE standard “wireless medium access control (MAC) and physical layer (PHY) specifications for low-rate wireless personal area networks (WPANS). In: IEEE Std. 802.15.4a (2007) 11. Shahriar, E.: UWB Communication Systems: Conventional and 60 GHz. Springer (2013) 12. IEEE P802.15.6, Feb 2012, Part 15.6: Wireless Body Area Networks 13. IEEE P802.15.4f, April 2012, PART 15.4: low-rate wireless personal area networks (LRWPANs). Amendment 2: active radio frequency identification (RFID) system physical layer (PHY) 14. Gezici, S., Tian, Z., Giannakis, G.V., Kobaysahi, H., Molisch, A.F., Poor, H.V., Sahinoglu, Z.: Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks. IEEE Signal Process. Mag. 22(4), 70–84 (2005) 15. Fang, B.: Simple solution for hyperbolic and related position fixes. IEEE Trans. Aerosp. Electron. Syst. 26(5), 748–753 (1990) 16. Kanaan, M., Pahlavan, K.: A comparison of wireless geolocation algorithms in the indoor environment. Proc. IEEE Wireless Commun. Netw. Conf. 1, 177–182 (2004) 17. Çetin, Ö., Naz, H., Gürcan, R., Öztürk, H., Güneren, H., Yelkovan, Y.: An experimental study of high precision TOA based UWB positioning systems. In: 2012 IEEE International Conference on Ultra-Wideband (ICUWB), pp. 357–361, Sept 2012 18. Torrieri, D.: Statistical theory of passive location systems. IEEE Trans. Aerosp. Electron. Syst. 20(2), 183–197 (1984) 19. Depeng, Y., Husheng, L., Peterson, G., Fathy, A.: Compressive sensing TDOA for UWB positioning system. In: 2011 IEEE Radio and Wireless Symposium (RWS), pp. 194–197, Jan 2011 20. Van Veen, B.D., Buckley, K.M.: Beamforming: a versatile approach to spatial filtering. IEEE ASSP Mag. 5(2), 4–24 (1988) 21. Stoica, P., Moses, R.L.: Introduction to Spectral Analysis. Prentice-Hall, Englewood Cliffs (1997) 22. Ottersten, B., Viberg, M., Stoica, P., Nehorai, A.: Exact and large sample ML techniques for parameter estimation and detection in array processing. In: Haykin, S.S., Litva, J., Shepherd, T.J. (eds.) Radar Array Processing, pp. 99–151. Springer, New York (1993) 23. Zhou, J., Chu, K.M.-K., Ng, J.K.-Y.: Providing location services within a radio cellular network using ellipse propagation model. In: Proceedings of 19th International Conference Advanced Information Networking and Applications, , pp. 559–564, Mar 2005 24. Sapphire DART Product Data Sheet: Zebra Enterprise Solutions, Oakland, CA. http://zes. 
zebra.com/pdf/products-datasheets/ds_sapp_dart.pdf, (2009) 25. Hardware Datasheet: Ubisense, Cambridge, UK. http://www.ubisense.net/media/pdf/Ubisense %20System%20Overview%20V1.1.pdf, (2007)


26. Zebra Enterprise Solutions Fact Sheet: Zebra Enterprise Solutions, Oakland, CA, 2009, http:// zes.zebra.com/pdf/zes_fact_sheet.pdf 27. PLUS®RTLS Data Sheet: Time Domain Corp., Huntsville, AL. http://www.timedomain.com/ datasheets/plus-system.pdf, (2009) 28. Bloecher, H.L., Sailer, A., Rollmann, G., Dickmann, J.: 79 GHz UWB automotive short range radar–spectrum allocation and technology trends. Adv. Radio Sci. URSI Open Access J. 7, 61–65 (2009) 29. Mahfouz, M., Kuhn, M., To, G., Fathy, A.: Integration of UWB and wireless pressure mapping in surgical navigation. IEEE Trans. Microwave Theory Tech. (2009) 30. Zito, F., Zito, D., Pepe, D.: UWB 3.1–10.6 GHz CMOS transmitter for system-on-a-chip nano- power pulse radars. In: Ph.D. Res. Microelectronics Elec. Conf., Bordeaux, France, pp. 189–192, July 2007 31. Zheng, Y., Arusa, M., Wong, K., et al.: A 0.18 µm CMOS 802.15.4a UWB transceiver for communication and localization. In: IEEE Int. Solid State Cir. Conf., San Francisco, CA, 2008, pp. 118–119, p. 600, Feb 2008

Nonlinear Attitude Stabilization and Tracking Control Techniques for an Autonomous Hexa-Rotor Vehicle Hyeon Kim and Deok Jin Lee

Abstract This paper presents nonlinear attitude stabilization and position tracking control techniques for an autonomous hexa-rotor flying vehicle. Due to their stable and robust maneuverability and fault-tolerant capability, hexa-rotor vehicles have received much attention and can be used in various applications such as object delivery and reconnaissance in hostile urban areas. In this work, advanced nonlinear control techniques such as sliding mode control and integral backstepping control are presented, and their performances are compared in terms of stabilization and position tracking accuracy and robustness to disturbances. For verification of the proposed control techniques, various simulation studies are demonstrated along with a realistic nonlinear dynamic model.

Keywords Hexa-rotor robot · Autonomous unmanned vehicle · Attitude stabilization · Nonlinear guidance and control

1 Introduction

The development of sensor, actuator, processor, and communication technology has raised interest in smart autonomous systems, and many studies have been carried out in this field [1]. In particular, unmanned flying vehicles have been used in various applications, from search, rescue, and surveillance to environmental monitoring. Flying aerial vehicles can be divided into two types with

H. Kim · D.J. Lee (&), School of Mechanical & Automotive Engineering, Kunsan National University, 558 Daehak-Ro, Gunsan-Si, Jeollabuk-Do, South Korea, e-mail: [email protected]; H. Kim, e-mail: [email protected]


respect to their wing configuration: fixed-wing aerial vehicles and rotary-wing aerial vehicles. A fixed-wing aerial vehicle has the advantages of fast flight speed and high payload compared with rotary vehicles; however, fixed-wing aerial vehicles need a long take-off distance and are at a disadvantage in small areas. Rotary-wing aerial vehicles, in contrast, have the advantages of vertical take-off and landing (VTOL) and hovering in built-up and confined areas [2]. A multi-rotor vehicle is a kind of rotary-wing aerial vehicle that is controlled by multiple rotors attached to its body. Most multi-rotor systems target outdoor tasks such as surveillance and search, and recently they have become able to operate autonomously in indoor environments as well. To improve the performance of multi-rotors, many studies have addressed control and navigation [3]. Multi-rotors are classified by the number of rotors: tri- [4, 5], quad- [2, 6, 7], hexa- [8], and octo-rotors [9]. In the OS4 project at EPFL, nonlinear controllers were investigated based on a simplified micro quad-rotor model, including LQ optimal control, sliding mode control, backstepping control, and integral backstepping control [10–12]. Ground vehicle navigation techniques were applied to a quad-rotor, a navigation method for small autonomous indoor-flight quad-rotor systems was proposed, and an experimental study of autonomous indoor quad-rotor flight was demonstrated in [7]. A control law robust with respect to the dynamic couplings and adverse torques of triple tilting rotors was verified in simulation and experimental studies [13]. A nonlinear control design for a 4Y octo-rotor was verified by numerical simulations of stabilization and waypoint navigation [9]. This paper presents nonlinear attitude stabilization and position tracking control techniques for an autonomous hexa-rotor flying vehicle. A hexa-rotor is controlled by six rotors attached to its body, which gives it stronger characteristics than quad-rotor vehicles. Due to their stable and robust maneuverability and fault-tolerant capability with extra actuators, hexa-rotor vehicles have received much attention and can be used in various applications such as object delivery and reconnaissance in hostile urban areas. In this work, first, an efficient linear control method, a Proportional-Integral-Derivative (PID) technique with a successive loop closure approach for self-tuning gain calculation, is presented. Then advanced nonlinear control techniques such as sliding mode control and integral backstepping sliding mode control [2, 9] are proposed, and their performances are compared in terms of stabilization and position tracking accuracy and robustness to disturbances. This paper is arranged as follows. First, the dynamic modeling of the hexa-rotor, including force and moment terms, is presented. Second, based on the nonlinear dynamic model, the PID, SMC, and IB control techniques are designed. Third, the effectiveness of the proposed nonlinear controllers for attitude stabilization and position tracking is demonstrated through various simulation studies, and the robustness of the hexa-rotor flying vehicle to external disturbances is investigated as well.


2 Nonlinear Dynamic Modeling of the Hexa-Rotor

For the hexa-rotor 6-DOF (degree-of-freedom) motion, the inertial frame is defined as an earth-fixed coordinate system with the origin at the defined home location and axes pointing north, east, and down. The body frame is a body-fixed coordinate system with the origin at the center of gravity and axes pointing out the nose of the airframe, out the right wing, and out the belly, as shown in Fig. 1. The nonlinear hexa-rotor dynamics include the translational and rotational equations of motion, and the 6-DOF nonlinear dynamics of the hexa-rotor vehicle are summarized by

$$\begin{cases} \ddot{\phi} = \dot{\theta}\dot{\psi}\,\dfrac{I_{yy}-I_{zz}}{I_{xx}} + \dfrac{J_r}{I_{xx}}\dot{\theta}\,\Omega_r + \dfrac{1}{I_{xx}}U_2 \\ \ddot{\theta} = \dot{\phi}\dot{\psi}\,\dfrac{I_{zz}-I_{xx}}{I_{yy}} - \dfrac{J_r}{I_{yy}}\dot{\phi}\,\Omega_r + \dfrac{1}{I_{yy}}U_3 \\ \ddot{\psi} = \dot{\phi}\dot{\theta}\,\dfrac{I_{xx}-I_{yy}}{I_{zz}} + \dfrac{1}{I_{zz}}U_4 \\ \ddot{p}_n = (\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)\,\dfrac{1}{m}U_1 \\ \ddot{p}_e = (\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)\,\dfrac{1}{m}U_1 \\ \ddot{p}_d = g - (\cos\phi\cos\theta)\,\dfrac{1}{m}U_1 \end{cases} \qquad (1)$$

where $J_r$ is the rotor moment of inertia, $\Omega_r$ is the propeller speed, $J_r\dot{\theta}\Omega_r$ and $J_r\dot{\phi}\Omega_r$ are the gyroscopic effects, $F = U_1$ is the total thrust force, and $(\tau_\phi, \tau_\theta, \tau_\psi)^T = (U_2, U_3, U_4)$ is the torque in the body frame. Here $I_{xx} = I_{yy} = 2MR^2/5 + 2l^2 m_r$ and $I_{zz} = 2MR^2/5 + 4l^2 m_r$, where $M$ is the mass of the hexa-rotor, $R$ is the radius of the center body, $l$ is the distance of each rotor from the center, and $m_r$ is the mass of each rotor. The layout of the hexa-rotor is shown in Fig. 1, where $l$ is the length of the arm and the angle between rotor 1 and rotor 2 is 30°. From this layout we can derive the force and moments of the hexa-rotor.

Fig. 1 Layout and coordinate system of a hexa-rotor vehicle


$$\begin{bmatrix} F \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{bmatrix} = A_H \begin{bmatrix} \omega_1^2 \\ \omega_2^2 \\ \omega_3^2 \\ \omega_4^2 \\ \omega_5^2 \\ \omega_6^2 \end{bmatrix}, \qquad A_H = \begin{bmatrix} k_1 & k_1 & k_1 & k_1 & k_1 & k_1 \\ 0 & -c\,l\,k_1 & -c\,l\,k_1 & 0 & c\,l\,k_1 & c\,l\,k_1 \\ l\,k_1 & s\,l\,k_1 & -s\,l\,k_1 & -l\,k_1 & -s\,l\,k_1 & s\,l\,k_1 \\ -k_2 & k_2 & -k_2 & k_2 & -k_2 & k_2 \end{bmatrix} \qquad (2)$$

where $s = \sin 30^\circ$, $c = \cos 30^\circ$, $\omega_i$ is the rotor speed, and $k_1, k_2$ are the proportionality constants relating the squared rotor speed to the propeller thrust and reactive torque, respectively. For the motion control of the hexa-rotor, we need to derive the individual motor speeds, which requires inverting $A_H$. However, $A_H$ is not a square matrix, so the Moore–Penrose pseudo-inverse method is used [8]:

$$P_{\mathrm{right}} = A^{T}(A A^{T})^{-1} \;\; (\text{if } m < n), \qquad P_{\mathrm{left}} = (A^{T} A)^{-1} A^{T} \;\; (\text{if } m > n) \qquad (3)$$

where $P$ is the pseudo-inverse of the $m \times n$ matrix $A$ and $A^{T}$ is the transpose of $A$. The individual motor speeds of the hexa-rotor are given by Eq. (4):

$$\begin{bmatrix} \omega_1^2 & \omega_2^2 & \omega_3^2 & \omega_4^2 & \omega_5^2 & \omega_6^2 \end{bmatrix}^{T} = P_{\mathrm{right}} \begin{bmatrix} F & \tau_\phi & \tau_\theta & \tau_\psi \end{bmatrix}^{T}, \qquad (4)$$

where $P_{\mathrm{right}}$ is the $6 \times 4$ right pseudo-inverse of $A_H$ obtained from Eq. (3); its entries are combinations of $1/(6k_1)$, $1/(4c\,l\,k_1)$, $1/(6s\,l\,k_1)$, $1/(6l\,k_1)$, and $1/(6k_2)$ with signs determined by the rotor layout.

where $F = F_1 + F_2 + \cdots + F_6$ is the total thrust force, $\tau_\phi$ is the rolling torque, $\tau_\theta$ is the pitching torque, and $\tau_\psi$ is the yawing torque. A numerical sketch of this motor-mixing step is given below.
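The following Python sketch computes the pseudo-inverse allocation numerically, using the rotor layout as reconstructed in Eq. (2); the coefficient values k1, k2 and the arm length l are placeholders, not the paper's parameters.

import numpy as np

# A_H maps squared rotor speeds to [F, tau_phi, tau_theta, tau_psi];
# its Moore-Penrose pseudo-inverse (Eq. (3)) recovers the squared speeds.
k1, k2, l = 3.0e-6, 1.0e-7, 0.35
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))

A_H = np.array([
    [ k1,     k1,      k1,      k1,     k1,      k1     ],
    [ 0.0,   -c*l*k1, -c*l*k1,  0.0,    c*l*k1,  c*l*k1 ],
    [ l*k1,   s*l*k1, -s*l*k1, -l*k1,  -s*l*k1,  s*l*k1 ],
    [-k2,     k2,     -k2,      k2,    -k2,      k2     ],
])

P_right = np.linalg.pinv(A_H)            # right pseudo-inverse (m < n case)
u = np.array([20.0, 0.1, -0.05, 0.02])   # commanded [F, tau_phi, tau_theta, tau_psi]
omega_sq = P_right @ u                   # squared rotor speeds
omega = np.sqrt(np.clip(omega_sq, 0.0, None))
print(np.round(omega, 1))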

3 Nonlinear Controller Design of the Hexa-Rotor

In this section, we propose advanced attitude stabilization and position tracking controllers for the hexa-rotor vehicle; the architecture is shown in Fig. 2. The position controller calculates the desired attitude values, $U_1$ is calculated in the altitude controller, and the attitude controller calculates $U_2$, $U_3$, and $U_4$. The control commands used in this work are given by:


Fig. 2 Guidance, attitude and tracking control architecture of the hexa-rotor flying vehicle

$$\begin{cases} U_1 = b\,(\Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2 + \Omega_5^2 + \Omega_6^2) \\ U_2 = b\,l\,c\,(\Omega_5^2 + \Omega_6^2 - \Omega_2^2 - \Omega_3^2) \\ U_3 = b\,l\,\big(s(\Omega_2^2 + \Omega_6^2) + \Omega_1^2 - s(\Omega_3^2 + \Omega_5^2) - \Omega_4^2\big) \\ U_4 = d\,l\,(\Omega_2^2 + \Omega_4^2 + \Omega_6^2 - \Omega_1^2 - \Omega_3^2 - \Omega_5^2) \end{cases} \qquad (5)$$

where $s = \sin 30^\circ$, $c = \cos 30^\circ$, $b$ is the thrust coefficient, and $d$ is the drag coefficient. $U_1$ is the total thrust force input, $U_2$ is the rolling torque input, $U_3$ is the pitching torque input, and $U_4$ is the yawing torque input. Finally, the nonlinear state-space equation is expressed by

$$f(X, U) = \begin{bmatrix} \dot{\phi} \\ \ddot{\phi} \\ \dot{\theta} \\ \ddot{\theta} \\ \dot{\psi} \\ \ddot{\psi} \\ \dot{z} \\ \ddot{z} \\ \dot{x} \\ \ddot{x} \\ \dot{y} \\ \ddot{y} \end{bmatrix} = \begin{bmatrix} x_2 \\ x_4 x_6 a_1 + a_2 x_4 \Omega_r + b_1 U_2 \\ x_4 \\ x_2 x_6 a_3 - a_4 x_2 \Omega_r + b_2 U_3 \\ x_6 \\ x_2 x_4 a_5 + b_3 U_4 \\ x_8 \\ g - (\cos x_1 \cos x_3)\,\frac{1}{m} U_1 \\ x_{10} \\ u_x\,\frac{1}{m} U_1 \\ x_{12} \\ u_y\,\frac{1}{m} U_1 \end{bmatrix}, \qquad (6)$$

where the variables are defined as $a_1 = (I_{yy} - I_{zz})/I_{xx}$, $a_2 = J_r/I_{xx}$, $a_3 = (I_{zz} - I_{xx})/I_{yy}$, $a_4 = J_r/I_{yy}$, $a_5 = (I_{xx} - I_{yy})/I_{zz}$, $b_1 = 1/I_{xx}$, $b_2 = 1/I_{yy}$, $b_3 = 1/I_{zz}$, $u_x = (\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)$, and $u_y = (\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)$.


3.1 Design of Linear Control

In this chapter, a PID controller is designed based on the Successive Loop Closure (SLC) approach, where the control gains are calculated in a systematic, self-tuning way. For the PID control design, the derived dynamic model is first simplified using the following assumptions:

1. Roll, pitch, and heading angles are very small ($\phi, \theta, \psi \approx 0$).
2. Coriolis terms are very small ($pq = pr = qr \approx 0$).
3. The hexa-rotor's altitude is fixed.

The simplified equations are then given by [3, 14, 15]

$$\ddot{\phi} = \frac{1}{I_{xx}} U_2, \quad \ddot{\theta} = \frac{1}{I_{yy}} U_3, \quad \ddot{\psi} = \frac{1}{I_{zz}} U_4, \qquad \ddot{p}_n = \phi\,\frac{1}{m} U_1, \quad \ddot{p}_e = \theta\,\frac{1}{m} U_1, \quad \ddot{p}_d = g - \frac{1}{m} U_1 \qquad (7)$$

Now, using the above simplified model, we apply a roll PID control loop given by

$$U_2 = k_{p\phi}(\phi_{des} - \phi) + k_{d\phi}(\dot{\phi}_{des} - \dot{\phi}) + k_{i\phi} \int_0^t (\phi_{des} - \phi)\, dt, \qquad (8)$$

where $k_{p\phi}$ is the proportional gain, $k_{d\phi}$ is the derivative gain, and $k_{i\phi}$ is the integrator gain. The roll transfer function is given by [15]

$$\frac{\phi(s)}{\phi_{des}(s)} = \frac{K_{p\phi}/I_{xx}}{s^2 + (K_{d\phi}/I_{xx})\,s + K_{p\phi}/I_{xx}} \qquad (9)$$

The proportional gain is calculated as $K_p = M/A$, where $A$ is the amplitude of the command input and $M$ is the maximum of the saturation; that is, the proportional gain is the ratio of the saturation limit to the command input. The derivative gain is calculated as $K_d = 2\zeta\omega_n$, where $\zeta$ is the damping ratio and $\omega_n$ is the natural frequency, obtained by comparing Eq. (9) with a standard second-order transfer function with damping ratio $\zeta \approx 0.9$. Similarly, the altitude control loop and speed control loop are computed by

$$\frac{H(s)}{H_{des}(s)} = \frac{K_{ph} + K_{ih}\,s}{s^3 + K_{dh}\,s^2 + K_{ph}\,s + K_{ih}}, \qquad \frac{V(s)}{V_{des}(s)} = \frac{K_{pv}/g}{s + K_{pv}/g} \qquad (10)$$
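As a minimal sketch of the roll loop of Eq. (8) with the SLC-style gain choice described above, the following Python fragment uses illustrative numbers (saturation limit, command amplitude, damping ratio, and natural frequency are assumptions, not the paper's values):

# Roll PID loop with gains chosen in the successive-loop-closure spirit.
class RollPID:
    def __init__(self, kp, kd, ki, dt):
        self.kp, self.kd, self.ki, self.dt = kp, kd, ki, dt
        self.integral = 0.0

    def update(self, phi_des, phi, phi_rate_des=0.0, phi_rate=0.0):
        err = phi_des - phi
        self.integral += err * self.dt
        return (self.kp * err
                + self.kd * (phi_rate_des - phi_rate)
                + self.ki * self.integral)          # torque command U2

M, A = 0.5, 0.35           # saturation limit and command amplitude (assumed)
zeta, omega_n = 0.9, 4.0   # damping ratio and natural frequency (assumed)
kp = M / A                 # Kp = M/A
kd = 2 * zeta * omega_n    # Kd = 2*zeta*omega_n
ctrl = RollPID(kp, kd, ki=0.05, dt=0.01)
print(ctrl.update(phi_des=0.2, phi=0.0, phi_rate=0.1))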

3.2 Design of Sliding Mode Control

Generally, sliding mode control (SMC) is defined using the sign function [2, 9, 16]:

$$\mathrm{sign}(s_n) = \begin{cases} 1 & s_n > 0 \\ 0 & s_n = 0 \\ -1 & s_n < 0 \end{cases} \qquad (11)$$

The roll tracking error is defined as

$$e_1 = x_{1d} - x_1 \qquad (12)$$

The sliding surface is defined as

$$s_1 = \dot{e}_1 + \alpha_1 e_1 \qquad (13)$$

where $\alpha_1 > 0$. We consider the augmented Lyapunov function

$$V(s_1) = \tfrac{1}{2} s_1^2 \qquad (14)$$

The desired law for the sliding surface is based on the time derivative of the Lyapunov function satisfying $\dot{V} = s\dot{s} < 0$. The time derivative of the sliding surface is computed as

$$\dot{s}_1 = -k_1\,\mathrm{sign}(s_1) - k_2 s_1 = \ddot{e}_1 + \alpha_1\dot{e}_1 = \dot{x}_2 - \ddot{x}_{1d} + \alpha_1\dot{e}_1 = a_1 x_4 x_6 + a_2 x_4 \Omega_r + b_1 U_2 - \ddot{x}_{1d} + \alpha_1\dot{e}_1 \qquad (15)$$

The roll control input $U_2$ is derived using the SMC with constant parameters $k_1, k_2 > 0$:

$$U_2 = -\frac{1}{b_1}\big(a_1 x_4 x_6 + a_2 x_4 \Omega_r + \alpha_1\dot{e}_1 + k_1\,\mathrm{sign}(s_1) + k_2 s_1\big) \qquad (16)$$
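A minimal sketch of the roll law of Eq. (16) follows; the state ordering, parameter names, and numerical values are illustrative assumptions for demonstration only.

import numpy as np

def smc_roll(x, x1d, x1d_dot, params):
    """Sliding-mode roll control per Eq. (16)."""
    a1, a2, b1, alpha1, k1, k2, omega_r = params
    x1, x2, x4, x6 = x           # roll, roll rate, pitch rate, yaw rate
    e1 = x1d - x1
    e1_dot = x1d_dot - x2
    s1 = e1_dot + alpha1 * e1    # sliding surface, Eq. (13)
    return -(1.0 / b1) * (a1 * x4 * x6 + a2 * x4 * omega_r
                          + alpha1 * e1_dot + k1 * np.sign(s1) + k2 * s1)

params = (0.1, 0.01, 50.0, 2.0, 0.5, 3.0, 100.0)  # a1, a2, b1, alpha1, k1, k2, Omega_r
print(smc_roll((0.0, 0.1, 0.05, 0.02), x1d=0.2, x1d_dot=0.0, params=params))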

The pitch, yaw, altitude, and position tracking errors and sliding surfaces are defined similarly: $e_3 = x_{3d} - x_3$, $s_3 = \dot{e}_3 + \alpha_2 e_3$; $e_5 = x_{5d} - x_5$, $s_5 = \dot{e}_5 + \alpha_3 e_5$; $e_7 = x_{7d} - x_7$, $s_7 = \dot{e}_7 + \alpha_4 e_7$; $e_9 = x_{9d} - x_9$, $s_9 = \dot{e}_9 + \alpha_5 e_9$; $e_{11} = x_{11d} - x_{11}$, $s_{11} = \dot{e}_{11} + \alpha_6 e_{11}$. Based on the same approach, the other SMC control inputs are computed as

$$U_3 = -\frac{1}{b_2}\big(a_3 x_2 x_6 - a_4 x_2 \Omega_r + \alpha_2\dot{e}_3 + k_3\,\mathrm{sign}(s_3) + k_4 s_3\big) \qquad (17)$$

$$U_4 = -\frac{1}{b_3}\big(a_5 x_2 x_4 + \alpha_3\dot{e}_5 + k_5\,\mathrm{sign}(s_5) + k_6 s_5\big) \qquad (18)$$


$$U_1 = \frac{m}{\cos x_1 \cos x_3}\big(g + \alpha_4 e_7 + k_7\,\mathrm{sign}(s_7) + k_8 s_7\big) \qquad (19)$$

$$u_x = \frac{m}{U_1}\big(\alpha_5 e_9 + k_9\,\mathrm{sign}(s_9) + k_{10} s_9\big) \qquad (20)$$

$$u_y = \frac{m}{U_1}\big(\alpha_6 e_{11} + k_{11}\,\mathrm{sign}(s_{11}) + k_{12} s_{11}\big) \qquad (21)$$

3.3 Design of Integral Backstepping Control

The roll tracking error is defined in Eq. (12), and its time derivative is given by [2, 12]

$$\frac{de_1}{dt} = \dot{x}_{1d} - x_2 \qquad (22)$$

The angular speed has its own dynamics and is considered as our virtual control:

$$x_{2d} = c_1 e_1 + \dot{x}_{1d} + \lambda_1 \chi_1 \qquad (23)$$

where $c_1, \lambda_1 > 0$ and $\chi_1$ is the integral of the roll tracking error, given by

$$\chi_1 = \int_0^t e_1(\tau)\, d\tau \qquad (24)$$

The angular velocity tracking error is defined by

$$e_2 = x_{2d} - x_2 \qquad (25)$$

and its time derivative is

$$\frac{de_2}{dt} = c_1(\dot{x}_{1d} - \dot{x}_1) + \ddot{x}_{1d} + \lambda_1 e_1 - \ddot{x}_1 \qquad (26)$$

The roll tracking error dynamics can then be rewritten as

$$\frac{de_1}{dt} = -c_1 e_1 - \lambda_1 \chi_1 + e_2 \qquad (27)$$

Replacing $\ddot{x}_1$ by its expression in Eq. (6),

$$\frac{de_2}{dt} = c_1(\dot{x}_{1d} - \dot{x}_1) + \ddot{x}_{1d} + \lambda_1 e_1 - a_1 x_4 x_6 - a_2 x_4 \Omega_r - b_1 U_2 \qquad (28)$$


and we obtain the following equation:

$$\frac{de_2}{dt} = -c_1(c_1 e_1 + \lambda_1 \chi_1 - e_2) + \ddot{x}_{1d} + \lambda_1 e_1 - a_1 x_4 x_6 - a_2 x_4 \Omega_r - b_1 U_2 \qquad (29)$$

The desired dynamics for the angular speed tracking error is

$$\frac{de_2}{dt} = -c_2 e_2 - e_1 \qquad (30)$$

Finally, the control input is computed as

$$U_2 = \frac{1}{b_1}\big\{(1 - c_1^2 + \lambda_1)e_1 + (c_1 + c_2)e_2 - c_1\lambda_1\chi_1 - (a_1 x_4 x_6 + a_2 x_4 \Omega_r)\big\} \qquad (31)$$
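A minimal sketch of the integral-backstepping roll law of Eq. (31) follows; all numerical gains and model coefficients are illustrative assumptions.

def ib_roll(e1, e2, chi1, x4, x6, gains, model):
    """Integral backstepping roll control per Eq. (31)."""
    c1, c2, lam1 = gains
    a1, a2, b1, omega_r = model
    return (1.0 / b1) * ((1.0 - c1**2 + lam1) * e1 + (c1 + c2) * e2
                         - c1 * lam1 * chi1
                         - (a1 * x4 * x6 + a2 * x4 * omega_r))

gains = (3.0, 5.0, 0.2)            # c1, c2, lambda1 (assumed)
model = (0.1, 0.01, 50.0, 100.0)   # a1, a2, b1, Omega_r (assumed)
print(ib_roll(e1=0.2, e2=-0.05, chi1=0.01, x4=0.05, x6=0.02,
              gains=gains, model=model))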

In a similar way, we can derive the pitch, yaw, altitude, and position control inputs as follows:

$$U_3 = \frac{1}{b_2}\big\{(1 - c_3^2 + \lambda_2)e_3 + (c_3 + c_4)e_4 - c_3\lambda_2\chi_2 - (a_3 x_2 x_6 + a_4 x_2 \Omega_r)\big\} \qquad (32)$$

$$U_4 = \frac{1}{b_3}\big\{(1 - c_5^2 + \lambda_3)e_5 + (c_5 + c_6)e_6 - c_5\lambda_3\chi_3\big\} \qquad (33)$$

$$U_1 = \frac{m}{\cos x_1 \cos x_3}\big\{g - (1 - c_7^2 + \lambda_4)e_7 - (c_7 + c_8)e_8 + c_7\lambda_4\chi_4\big\} \qquad (34)$$

$$u_x = -\frac{m}{U_1}\big\{(1 - c_9^2 + \lambda_5)e_9 + (c_9 + c_{10})e_{10} - c_9\lambda_5\chi_5\big\} \qquad (35)$$

$$u_y = -\frac{m}{U_1}\big\{(1 - c_{11}^2 + \lambda_6)e_{11} + (c_{11} + c_{12})e_{12} - c_{11}\lambda_6\chi_6\big\} \qquad (36)$$

The desired roll and pitch angles are calculated from $u_x$ and $u_y$ as

$$\phi = \sin^{-1}(u_x \sin\psi - u_y \cos\psi) \qquad (37)$$

$$\theta = \sin^{-1}\!\left(\frac{u_x \cos\psi + u_y \sin\psi}{\cos\phi}\right) \qquad (38)$$


4 Simulation Results

4.1 Attitude Stabilization and Position Tracking Performance

In this chapter, we compare the effectiveness and performance of the proposed controllers, the PID, SMC, and IB control methods, in terms of attitude stabilization and position tracking accuracy. In the simulation studies, all initial conditions are 0 and the position inputs are set to X = 5 m, Y = 5 m, and Z = 5 m. Figures 3 and 4 show the hexa-rotor altitude control and position tracking control results for each controller. In altitude control, the PID controller stabilized within 2 s; the SMC controller also stabilized within 2 s but with some overshoot, whereas the IB controller stabilized within 2 s without overshoot. In position X tracking control, the PID controller reached the desired value in 3 s, the SMC controller also reached it in 3 s but with some overshoot and oscillation, and the IB controller stabilized within 4 s with slight oscillation. In position Y tracking, a similar result was obtained as in the X position case. Figures 5 and 6 show the hexa-rotor attitude stabilization control results using position and heading inputs. The roll attitude is stabilized in 5 s using PID control, in 5 s with some oscillation using SMC, and in 7 s using IB control. Similar results were obtained for pitch attitude control. In the yaw attitude

Fig. 3 Altitude control simulation result of hexa-rotor

Fig. 4 Position X control simulation result of hexa-rotor


Fig. 5 Roll control simulation result of hexa-rotor

Fig. 6 Yaw control simulation result of hexa-rotor

stabilization, the PID and IB controllers were quickly stabilized, but the SMC showed slight oscillation between 2 and 4 s. In the general case without any disturbance or perturbation, the performance difference among the proposed attitude and position tracking controllers is not significant.

4.2

Robustness Performance to Disturbances

In this section, in order to check the effectiveness and robustness of the proposed nonlinear controllers, the performance of the PID and the nonlinear control methods is compared under external disturbances. First, a disturbance is added to the attitude of the hexa-rotor in the shape of a pulse signal generated from 12 to 14 s, as shown in Fig. 7. Figures 8 and 9 show the hexa-rotor attitude stabilization control results with the disturbance signal added to the nominal input. As can be seen in the figures, the PID controller is very sensitive to the disturbance and its performance degrades. On the other hand, the SMC and IB controllers stabilize the attitude quickly even with the disturbance. Figures 10 and 11 show the hexa-rotor position tracking results with disturbance inputs. In the altitude tracking control result with the disturbance, the PID, SMC, and IB controllers all showed robust behavior, tracking the desired positions with quick stabilization. For the position tracking control results with the disturbance, the SMC's maximum disturbance error is


Fig. 7 Pulse input as attitude and position disturbance signal

Fig. 8 Add into roll disturbance simulation result of hexa-rotor

Fig. 9 Add into yaw disturbance simulation result of hexa-rotor

Fig. 10 Add into altitude disturbance simulation result of hexa-rotor

approximately 20 %, and the IB control has an approximately 25 % disturbance maximum error. The PID control, however, has an almost 60 % maximum error while taking a long time to be stabilized again.


Fig. 11 Add into position X disturbance simulation result of hexa-rotor

5 Conclusion

In this paper, a hexa-rotor flying vehicle, which has the advantages of robust maneuverability and fault-tolerant capability with extra actuators over quad-rotor or tri-rotor vehicles, is investigated. First, the nonlinear dynamics with the force and moment equations acting on the hexa-rotor were derived. For attitude and position tracking control, an effective linear proportional-integral-derivative control technique with a successive loop closure gain computation approach was presented. Then, advanced nonlinear control techniques, sliding mode control and integral backstepping sliding mode control, were designed with the hexa-rotor dynamic model. Their performances were compared in terms of attitude stabilization, position tracking accuracy, and robustness to disturbances. The proposed sliding mode controller and integral backstepping controller show better performance than the PID controller in terms of stabilization and tracking accuracy. Moreover, the advanced nonlinear controllers, the SMC and IBC, showed much faster disturbance rejection and convergence.

Acknowledgments This work was supported by the National Research Foundation of Korea (NRF) (No. 2014-017630 and No. 2014-063396), and also by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (No. 2014-066733).

References 1. Lee, D.J., I, Kaminer, Dobrokhodov, V., Jones, K.: Autonomous feature following for visual surveillance using a small unmanned aerial vehicle with gimbaled camera system. Int. J. Control Autom. Syst. 8(5), 957–966 (2010) 2. Bouabdalla, S.: Design and control of quadrotors with application to autonomous flying. In: EPFL, Ph.D. Dissertation (2007) 3. Kim, H., Jeong, H.S., Chong, K.T., Lee, D.J.: Dynamic modelling and control techniques for multi-rotor flying robots. Trans. KSME. A 38(2), 137–148 (2014) 4. Yoo, D.W., Oh, H.D., Won, D.Y., Tahk, M.J.: Dynamic modeling and stabilization techniques for tri-rotor unmanned aerial vehicles. Int. J. Inst. Control Rob. Syst. 19(2), 164–170 (2010) 5. Alwafi, H., Arıkan, K.B., İrfanoğlu, B.: Attitude and altitude control of two wheel trirotor hybrid robot. Int. J. Sci. Knowl. Comput. Inf. Technol. 2(1), (2013)


6. Baek, S.J., Lee, D.J., Park, S.H., Chong, K.T.: Design of lateral fuzzy-PI controller for unmanned quadrotor robot. J. Inst. Control Rob. Syst. 19(2), 164–170 (2013) 7. Grzonka, S., Grisetti, G., Burgard, W.: A fully autonomous indoor quadrotor. IEEE Trans. Rob. 28(1), (2011) 8. Baránek, R., Šolc, F.: Modeling and control of a hexa-copter. In: IEEE Conference Publications, pp. 19–23 (2012) 9. Victor, G.A., Adrian, M.S., James, F.W.: Sliding mode control of a 4Y Octorotor. U.P.B. Sci. Bull. Ser. D 74, 37–52 (2012) 10. Bouabdalla, S., Noth, A., Siegware, R.: PID vs LQ control techniques applied to an indoor micro quadrotor. Intell. Rob. Syst. 3, 2451–2456 (2004) 11. Bouabdalla, S., Siegware, R.: Backstepping and sliding-mode control techniques applied to an indoor micro quadrotor. In: Robotics and Automation, ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pp. 2247–2252 (2005) 12. Bouabdalla, S., Siegware, R.: Full control of a quadrotor. In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, California (2007) 13. Escareno, J., Sanchez, A., Garcia, O., Lozano R.: Triple tilting rotor mini-UAV: modeling and embedded control of the attitude. In: 2008 American Control Conference, Seattle (2008) 14. Beard, R.W., McLain, T.W.: Small Unmanned Aircraft: Theory and Practice. Princeton University Press, New Jersey (2012) 15. Ansu, M.S., Lee, D.J., Hong, D.P., Chong, K.T.: Successive loop closure based controller design for an autonomous quadrotor vehicle. Adv. Mech. Mater. 483, 361–367 (2013) 16. Liu, J., Wang, X.: Advanced Sliding Mode Control for Mechanical Systems: Design, Analysis and MATLAB Simulation. Springer (2012)

The Design and Implementation of Occupational Health Survey System Based on Internet of Things Honger Tian, Lili Cao, Yongguo Zhan and Liuliu Liu

Abstract The occupational health survey system based on the Internet of things is developed with the PHP+MYSQL+Apache framework in B/S mode. The system can collect and save psychological and physiological information synchronously, realizing online investigation, occupational health archive building, occupational health assessment, occupational health consulting, occupational health education, and other functions.

Keywords Internet of things · Occupational health · PHP · MYSQL · Apache

1 Introduction

Occupational stress is widespread, and the job burnout triggered by it is not only a serious threat to employees' physical and mental health but also directly or indirectly causes economic losses. Currently, the survey of occupational stress is still dominated by paper questionnaires; as a result, data collection and input require a great deal of manpower, material resources, and time. With Internet technology developing rapidly, network globalization is an inevitable trend. Network-based research can not only improve the efficiency of investigation,

H. Tian (&) · L. Cao · Y. Zhan · L. Liu, School of Public Health, Southeast University, Nanjing City 210009, Jiangsu, China, e-mail: [email protected]; Key Laboratory of Environmental Medical Engineering of the Ministry of Education, Nanjing City 210009, Jiangsu, China


reducing money, manpower, and time, but also ensures the quality of information by avoiding the errors caused by manual data entry. This study uses PHP+MYSQL+Apache technology to build an occupational health survey system based on the Internet of things, which can realize effective surveys and fast feedback of occupational health assessment results, and as a consequence put forward countermeasures to relieve occupational stress, avoid burnout, and reduce the occurrence of related diseases.

2 The Function Design of Occupational Health Survey System Based on Internet of Things

2.1 System Environment

This system adopts the combination of Apache+PHP+MYSQL technology, using Dreamweaver 8 and PHP to implement web design and software editing, with Windows Server 2000 as the server platform and the MYSQL database server as the back end.

2.2

System Development Mode

This system adopts a three-layer development architecture consisting of the presentation layer, the business logic layer, and the data access layer, as shown in Fig. 1. The UI (presentation layer) refers mainly to the graphical interface for interaction with the user; it receives data from the user and displays data in the browser to meet users' needs. The business logic layer (BLL) is a bridge between the UI and the DAL and implements the business logic, which includes validation, computation, business rules, and so on. The data access layer (DAL) deals with the database; its main functions are data insertion, deletion, updating, and querying. It submits the data stored in the database to the business layer and, at the same time, saves the data handled by the business layer into the database. (These operations are driven by the UI layer: the UI layer identifies the needs of users and reflects them in the interface, the UI passes the requests to the BLL, and the BLL passes them on to the DAL, which performs the data operations and feeds the required data back to the user.)


Fig. 1 Three-layer system structure


2.3

The Overall Function of System

This system uses the technology of PHP site dynamic exchange, collecting the occupational health questionnaires, physiological and biochemical indexes on the web quickly and effortlessly, and monitoring the collecting progress at any time. It also can store, analyze, download data, and provide feedback of the occupational health evaluation results of each module and put forward the corresponding suggestions. Under this premise, the system guarantees the quality of the questionnaire recovery and significantly reduces the manpower, material resources, and time invested in the survey, which helps realize occupational health online investigation, data recording, results assessment, methods counseling, education, and other functions.

2.4

System Main Function Module Design

As shown in Fig. 2, the system includes 10 main modules, and each module contains a number of sub-modules. The contents of the first four modules are the occupational stress indicator questionnaire of Cooper, introduced by Yu Shanfa and other researchers. The related research shows that most of the correlation coefficients between the total score and the item average are above 0.60, and the Cronbach's alpha coefficients are mostly close


Fig. 2 System main function module design

(Fig. 2 depicts the flow from the system entrance (home page) through user registration and login to the main modules: (1) occupational stress factors, (2) stress response scale, (3) personality scale, (4) mitigating factors scale, (5) job burnout inventory, (6) life event scale, (7) biochemical indexes, (8) assessment, advice, and feedback, (9) online consulting, and (10) staff module, and finally to system exit.)

to or above 0.70, so the questionnaire has high reliability and validity [1, 2]. The fifth module uses the Maslach Burnout Inventory (MBI), which is used in 90 % of job burnout research [3]. The sixth module uses the life events scale (LES) compiled by Yang Desen and Zhang Yalin in 1986; related research shows that the Cronbach's alpha coefficient of the scale is 0.78 [4]. The seventh module covers the general physiological indexes (such as blood pressure, heart rate, blood oxygen, blood routine, blood biochemical examination, routine urine, immune indexes, enzymes, liver function indexes, and hormones). Ordinary users fill in and submit the first six modules one by one; a module can be submitted only when it has been filled in completely, otherwise the system gives a corresponding prompt. The physiological indexes in module 7 are obtained by synchronous measurement with an electronic sphygmomanometer and automatically uploaded to the database, and users enter their biochemical indicators from recent months. After each sub-module has been filled in and the measurements have been completed, users can see their level of occupational health within the general survey population, and corresponding suggestions are given as


feedback. In addition, they also can get online expert advice. Staff module can only be filled out by managers. The main contents the manager fills are progress monitoring, data downloading, and data management.

2.5

The System Database Design

According to the design of the system function modules, we establish corresponding data tables to store the raw data from each module. At the same time, based on the scoring guide, the system assigns scores for each module and stores them separately. In addition, the system can analyze the assigned scores under various conditions (the overall survey population, the same industry, the same gender, age, and length of service) and give feedback of the evaluation results and corresponding suggestions on the page, along the lines of the sketch below.
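A minimal sketch of the score-and-feedback step described above; the scoring data, threshold-free percentile rule, and wording are illustrative assumptions rather than the system's actual PHP implementation.

def percentile_rank(score, population_scores):
    """Percentage of stored respondents whose module score is <= this score."""
    if not population_scores:
        return None
    below = sum(1 for s in population_scores if s <= score)
    return 100.0 * below / len(population_scores)

def feedback(module_score, population_scores):
    pct = percentile_rank(module_score, population_scores)
    if pct is None:
        return "Not enough data for comparison yet."
    return f"Your score is higher than {pct:.0f}% of surveyed respondents."

population = [42, 55, 61, 48, 70, 39, 58]   # stored module scores (illustrative)
print(feedback(52, population))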

2.6

Quality Control Design

In order to ensure the security of system information, only users with administrator rights may enter the staff module for progress monitoring, content verification, data downloading, and other work. To prevent repeated registration and ensure the quality of the information collected, a password retrieval function is provided, and a single IP address may register and log in to at most 3 accounts. To keep users from entering wrong answers, a reminder function and automatic logical jumps are provided. Because of the timeliness requirements for the survey content, the system records the time of first login and the time of complete filling; the total survey time is limited to 1 week, and stored records that have not been completed within 1 week are removed. Two of these rules are sketched below.
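The following sketch illustrates the per-IP registration limit and the one-week expiry rule; the in-memory structures and field names stand in for the system's database tables and are assumptions.

import datetime

MAX_ACCOUNTS_PER_IP = 3
SURVEY_TIME_LIMIT = datetime.timedelta(weeks=1)

def can_register(ip, accounts_by_ip):
    # At most three accounts may be registered from a single IP address.
    return len(accounts_by_ip.get(ip, [])) < MAX_ACCOUNTS_PER_IP

def purge_expired(records, now):
    """Drop records first logged in more than one week ago and still unfinished."""
    return [r for r in records
            if r["completed"] or now - r["first_login"] <= SURVEY_TIME_LIMIT]

now = datetime.datetime(2015, 6, 8)
records = [
    {"user": "u1", "first_login": datetime.datetime(2015, 6, 5), "completed": False},
    {"user": "u2", "first_login": datetime.datetime(2015, 5, 20), "completed": False},
]
print(can_register("10.0.0.5", {"10.0.0.5": ["a", "b"]}))   # True (2 < 3)
print([r["user"] for r in purge_expired(records, now)])      # ['u1']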

3 The Implementation of Occupational Health Information Acquisition System Based on Internet of Things

3.1 System Architecture Implementation

PHP syntax is simple, its programs are stable, its execution efficiency is high, its source code is completely free and open to the public, and it has rich functions that can support the development of all sorts of site features. MYSQL is a free multi-user

1298

H. Tian et al.

database server supporting multithreading and can store more than 50 million records with high security, stability, and expansibility. As currently the fastest database system in the market, its function can be comparable to or even better than a large database. Apache server is one of the world’s most popular WEB server software and it is the best choice when combined with PHP platform and MYSQL platform. B/S structure pattern combines WWW browser technology with a variety of script language (VBScript, JavaScript, etc.) and the ActiveX technology. Users can enter the system at any time and place using the generic browser. It can replace the original complex special-purpose software and has a powerful function, reducing the cost of development. Updating the server software alone is what needed for the maintenance and up gradation of the system, which is simple and convenient. The system of server operating has a variety of choices and is characteristic of cross-platform.
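Under this PHP + MySQL + Apache combination, every page of the B/S system can share one small connection script; the following sketch uses placeholder connection parameters and a hypothetical file name.

<?php
// db.php - hypothetical shared connection script included by every PHP page.
// Apache serves the pages; the browser is the only client software required.
function ohis_db()
{
    $db = new mysqli('localhost', 'ohis_user', 'secret', 'ohis_db');
    if ($db->connect_error) {
        die('Database connection failed: ' . $db->connect_error);
    }
    $db->set_charset('utf8');
    return $db;
}
?>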

3.2 The Realization of the Function

Administrator permission settings
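As an illustrative sketch (not the system's actual code; the role name and redirect target are assumptions), the permission check placed at the top of every staff-module page could be:

<?php
// admin_guard.php - hypothetical check: only administrator accounts may open the
// staff module (progress monitoring, data download, data management).
session_start();
if (!isset($_SESSION['role']) || $_SESSION['role'] !== 'admin') {
    header('Location: login.php');   // ordinary users are sent back to the login page
    exit;
}
?>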

Saving of user submissions. Part of the code:
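A sketch of the save handler, assuming a form that posts module_no and an answers[] array and the hypothetical answer_raw table sketched in Sect. 2.5:

<?php
// save_module.php - hypothetical handler storing one submitted module.
session_start();
$db     = new mysqli('localhost', 'ohis_user', 'secret', 'ohis_db');
$uid    = (int) $_SESSION['user_id'];
$module = (int) $_POST['module_no'];
$stmt = $db->prepare(
    'REPLACE INTO answer_raw (user_id, module_no, item_no, answer) VALUES (?, ?, ?, ?)');
foreach ((array) $_POST['answers'] as $item => $answer) {  // answers[] comes from the form
    $item = (int) $item;
    $stmt->bind_param('iiis', $uid, $module, $item, $answer);
    $stmt->execute();
}
echo 'Module saved.';
?>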


Calculation and feedback of evaluation results. Part of the code:
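An illustrative sketch of the evaluation feedback (the comparison rule used here, one standard deviation around the population mean, is an assumption for illustration only):

<?php
// feedback.php - hypothetical feedback: compare the user's module score with the
// mean and standard deviation of all surveyed users.
session_start();
$db     = new mysqli('localhost', 'ohis_user', 'secret', 'ohis_db');
$uid    = (int) $_SESSION['user_id'];
$module = (int) $_GET['module_no'];

$stmt = $db->prepare('SELECT score FROM module_score WHERE user_id = ? AND module_no = ?');
$stmt->bind_param('ii', $uid, $module);
$stmt->execute();
$stmt->bind_result($score);
$stmt->fetch();
$stmt->close();

$pop = $db->query("SELECT AVG(score) AS m, STD(score) AS s
                     FROM module_score WHERE module_no = $module")->fetch_assoc();

if ($score > $pop['m'] + $pop['s']) {
    $level = 'higher than that of most respondents';
} elseif ($score < $pop['m'] - $pop['s']) {
    $level = 'lower than that of most respondents';
} else {
    $level = 'close to the population average';
}
echo "Your score for this module is $score, $level.";
?>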


Calculation of the database score assignment. Part of the code:
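One way to read the database score assignment is the pass that turns raw answers into an assigned score and writes it back to the score table; a sketch of that interpretation (the summing rule is a placeholder for the real scoring guide):

<?php
// assign_score.php - hypothetical scoring pass: sum the numeric answers of one
// module and store the result in module_score.
$db     = new mysqli('localhost', 'ohis_user', 'secret', 'ohis_db');
$uid    = (int) $_GET['user_id'];
$module = (int) $_GET['module_no'];

$row = $db->query("SELECT SUM(answer + 0) AS total
                     FROM answer_raw
                    WHERE user_id = $uid AND module_no = $module")->fetch_assoc();
$total = (float) $row['total'];

$stmt = $db->prepare('REPLACE INTO module_score (user_id, module_no, score) VALUES (?, ?, ?)');
$stmt->bind_param('iid', $uid, $module, $total);
$stmt->execute();
echo 'Assigned score: ' . $total;
?>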


Quality control
(1) IP restrictions. The main code:
(2) Logical jump. Part of the code:
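Illustrative sketches of both checks (the three-account limit follows the rule in Sect. 2.6; the table, column, and jump-table entries are made-up examples):

<?php
// quality_control.php - hypothetical helpers for two of the quality control rules.
// (1) IP restriction: at most 3 accounts may be registered from one IP address.
function ip_can_register(mysqli $db)
{
    $ip   = $_SERVER['REMOTE_ADDR'];
    $stmt = $db->prepare('SELECT COUNT(*) FROM users WHERE register_ip = ?');
    $stmt->bind_param('s', $ip);
    $stmt->execute();
    $stmt->bind_result($n);
    $stmt->fetch();
    $stmt->close();
    return $n < 3;
}

// (2) Logical jump: the next item depends on the answer just given, so that
//     follow-up questions are skipped automatically when they do not apply.
function next_item($current_item, $answer)
{
    $jump_table = array(             // hypothetical jump rules
        5  => array('no' => 9),      // answering "no" to item 5 jumps to item 9
        12 => array('never' => 20),
    );
    if (isset($jump_table[$current_item][$answer])) {
        return $jump_table[$current_item][$answer];
    }
    return $current_item + 1;        // default: the next item in order
}
?>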


(3) Missing-item reminder. The main code:
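Reading the missing-item reminder as a check for unanswered items before submission, a server-side sketch (item count and field names are assumptions):

<?php
// leak_remind.php - hypothetical check: a module may only be submitted when every
// item is answered; otherwise the missing item numbers are reported to the user.
$expected = range(1, 30);                                   // item numbers of this module (assumed)
$answers  = isset($_POST['answers']) ? (array) $_POST['answers'] : array();
$missing  = array_diff($expected, array_map('intval', array_keys($answers)));

if (count($missing) > 0) {
    echo 'Please complete item(s) ' . implode(', ', $missing) . ' before submitting.';
} else {
    echo 'All items answered - the module can be submitted.';
}
?>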


(4) Survey time limit.
Online consulting. The main code:
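A sketch of the online consulting submission (table and column names are assumptions); the expert's reply would later be read back from the same table:

<?php
// consult.php - hypothetical handler that stores a user's question for the
// occupational health experts.
session_start();
$db  = new mysqli('localhost', 'ohis_user', 'secret', 'ohis_db');
$uid = (int) $_SESSION['user_id'];
$stmt = $db->prepare(
    'INSERT INTO consultation (user_id, question, asked_at) VALUES (?, ?, NOW())');
$stmt->bind_param('is', $uid, $_POST['question']);
$stmt->execute();
echo 'Your question has been sent to the experts.';
?>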

3.3 The Main Interfaces

The main interfaces are as shown in Figs. 3 and 4.

Fig. 3 Occupational health information acquisition system main function interface (1)


Fig. 4 Occupational health information acquisition system main function interface (2)

4 Examples of Application

Using the Internet-based occupational health information acquisition system, we carried out stratified cluster sampling of county and township hospitals in Yangzhou City; a total of 420 medical workers from 1 county-level hospital, 3 township hospitals, 3 institutes, and 3 community health service centers took part in the investigation. After excluding blank and low-quality network questionnaires, 386 valid questionnaires were obtained, giving a valid-sample rate of 91.9 %. This system breaks through the limitations of the traditional paper questionnaire: users can be surveyed over both wired and wireless networks, and the physiological indexes are measured synchronously and uploaded automatically, which reduces the time and energy spent on data collection and entry, avoids errors caused by manual input, and makes the investigation quicker, more convenient, and more accurate. After completing each module, users can view the module evaluation results, obtain corresponding adjustment suggestions, and see their own occupational health level within the surveyed population. They can also learn relevant occupational health knowledge and communicate and consult online with occupational health experts about the survey and about occupational health issues. In addition, all collected occupational health data are stored and occupational health archives are established for the users; only the users themselves and the administrators can consult them.

5 Conclusion

This system combines network investigation with traditional investigation and realizes the functions of assessment, consultation, and education with regard to occupational health. It provides a new, convenient, and effective research tool that can help alleviate occupational stress, prevent job burnout, and improve psychological and physical health, so it is worth popularizing and applying. Although the system has realized network-based surveys, from the viewpoint of information technology development and the varying needs of different surveys, many of its functions still need to be upgraded.

Acknowledgments This work was supported by the Humanities and Social Sciences Research Planning Fund, Ministry of Education, Grant No. 14YJA840012.

References

1. Shanfa, Y., Rui, Z.: Occupational stress testing index test result on the OSI analysis. Chinese Labor Occup. Health Mag. 15(2), 96–97 (1997)
2. Shanfa, Y., Rui, Z., Liangdong, M.: Occupational stress measuring tool research. J. Henan Med. Res. 9(2), 171–174 (2000)
3. Enzmann, D., Schaufeli, W.B., Jassen, P., et al.: Dimensionality and validity of the burnout measure. J. Occup. Organ. Psychol. 71(4), 331–351 (1998)
4. Yanping, Z., Deshen, Y.: Life events survey, China—(a) tonic life events in the general population basic classifiers. Chinese J. Mental Health 4(6), 262–267 (1990)

Author Index

A Abdullah, Azween, 1195 AlMahafzah, Harbi, 903 AlRawashdeh, Ma’en Zaid, 903 Aman, Muhammad, 1195 An, Ning, 757 Arivudainambi, D., 127, 693 Azad, Md Abul Kalam, 13

D Dai, Huanyao, 821 Dakkak, Omar, 363 Derawi, Mohammad, 337, 419 Dhama, Sakshi, 1255 Ding, Lianghua, 327 Ding, Xueyong, 163 Du, Chunxia, 1115

B Bah, Mamadou Hady, 775 Baharun, Sabariah, 669 Bai, Liang, 1223 Balaji, S., 693 Balogh, Zoltán, 641 Bízik, Richard, 641

E Ershov, Roman A., 851

C Cai, Liang, 351 Cao, Lili, 1293 Cao, Shihua, 583 Che, Li, 223 Chen, Bin, 1057 Chen, Chunyi, 1017 Chen, Haipeng, 51 Chen, Hui, 27 Chen, Jianhua, 1037 Chen, Li, 399 Chen, Min, 1037 Chen, Wei, 73 Chen, Xiang, 105 Chen, Yonghong, 741 Chen, Yuzhong, 305 Chen, Zhefeng, 529 Cheng, Linlin, 1027 Cheng, Wei, 449 Cheng, Yanming, 861 Cui, Fengying, 1243 Cui, Jie, 873

F Fan, Wen, 409 Fang, Zhiyi, 273, 971 Feng, Dawei, 799 Feng, Guanyuan, 179 Feng, Jiacheng, 1069 Feng, Tao, 223 Feng, Yuan, 757, 799 Fidelman, Vladimir R., 851 Filippou, A., 371 Fu, Tian, 829 G Gan, Xintai, 449 Gao, Ming, 707 Ge, Liang, 1137, 1149 Geng, Duanyang, 295 Geng, Suiyan, 1027 Gong, Lejun, 1069 Gong, Yishan, 553 Gu, Shuo, 1187 Guan, Kai, 179 Guo, Baoxian, 621 Guo, Lili, 741 Guo, Qinghua, 891 Guo, Shuxu, 51 Guo, Wenzhong, 305



1308 Guo, Xin, 87, 741 Guo, Xingchen, 717 Guo, Yujie, 41 H Hamid, Nor Asilah Wati Abdul, 251 Han, Cheng, 1017 Hao, Huijuan, 915 Hao, Yongqin, 757, 799 Haslina Hassan, Wan, 669 He, Bintai, 757 He, Bo, 749 He, Jing (Selena), 935 He, Mingfeng, 267 He, Qingsu, 607, 621 Hong, Jingsong, 775 Hong, Wei, 305 Hou, Bin, 495 Hou, Tangjie, 1079 Hou, Wenbing, 267 Hu, Dengpeng, 449 Hu, Guojie, 829 Hu, Jun, 529 Hu, Xiaomei, 1099 Huang, Bohao, 1187 Huang, Jhinfang, 883 Huang, Meirong, 399, 729 Huang, Min, 681, 1175 Huang, Yinfei, 87 I Ibrahimov, Vagif, 1047 Idrus, Sevia M., 669 Imanova, Mehriban, 1047 Ip, W.H., 537 J Jamro, Deedar Ali, 319, 775 Ji, Jing, 73 Ji, Qingbing, 1107 Ji, Sailong, 957, 1243 Jia, Aaron Z., 409 Jia, Bicong, 593 Jiang, Chunyan, 295 Jiang, Huilin, 285 Jiang, Lihua, 607, 621 Jiang, Liubing, 223 Jiang, Yuzhe, 473 Jiao, Bin, 821 Jie, Qing, 593 Jin, Jangwon, 95 Jing, Yanlong, 351 Johari, Rahul, 141, 1255

Author Index K Kang, Yibin, 267 Karras, D.A., 371, 385, 783 Kathiroli, R., 127 Kaur, Harneet, 935 Khan, Asfandyar, 1195 Khan, Habibulla, 561 Kho, Lee Chin, 235 Kim, Hyeon, 1279 Kirichek, Ruslan, 485 Komaki, Shozo, 669 Koprda, Štefan, 641 Koucheryavy, Andrey, 485 Krishna, Patteti, 571 Kumar, Tipparti Anil, 561, 571 L Lai, Wencheng, 883 Lakshmi, B., 459 Lancioni, German, 1127 Lee, Deok Jin, 1279 Li, Dandan, 1005 Li, Gang, 1223 Li, Jing, 73 Li, Juan, 915 Li, Li, 1087 Li, Ming, 1099 Li, Shanshan, 351 Li, Simin, 809 Li, Weitao, 829 Li, Wensu, 517 Li, Xiaobai, 729 Li, Xing, 1027 Li, Yang, 799 Li, Yintao, 607, 621 Li, Yuanyuan, 553 Li, Yuchen, 267 Li, Yuhuan, 295 Li, Zaijin, 799 Lian, Shixing, 473 Lim, Azman Osman, 235 Lin, Hao, 1115 Liu, Chao, 757 Liu, Donghua, 171 Liu, Guangqi, 1137 Liu, Guojun, 757, 799 Liu, Haiyang, 409 Liu, Jingxue, 765 Liu, Liuliu, 1293 Liu, Pengcheng, 757, 799 Liu, Qiang, 517 Liu, Wei, 517, 1223 Liu, Xingcheng, 473

Author Index Liu, Yang, 113 Liu, Ying, 891 Liu, Zhanghui, 305 Liu, Zhiyuan, 201 Long, Zhaohua, 1079 Lou, Yan, 285, 1017 Lu, Hai, 991 Lu, Hongyang, 73 Lu, Jie, 35, 1079 Luo, Cheng, 991 Luo, Zheng, 171 Lv, Dongdong, 87, 991 M Ma, Juntao, 829 Ma, Xiaohui, 799 Mahmood, Dhari A., 141 Maller, Patricio, 1127 Mangi, Farman Ali, 319, 775 Mao, Yingchi, 593 Mehdiyeva, Galina, 1047 Memon, Imran, 319, 775 Meng, Pengxiang, 295 Ming, Ying, 1269 Miyamoto, Michiko, 59 Mogaibel, Hassen, 251 Morozov, Oleg A., 851 Muzahidul Islam, A.K.M., 669 N Ngu, Sze Song, 235 Nor, Shahrudin Awang, 363 Nwobodo, Ikechukwu, 1207 O Oh, Hoon, 13 Othman, Mohamed, 251 P Pan, Qiuhui, 267 Papademetriou, R.C., 783 Papazoglou, P.M., 385, 783 Peng, Bin, 215 Peng, Jianxi, 201 Peng, Jinye, 1057 Q Qi, Mingming, 991 Qi, Wei, 1057 R Rao, Kalithkar Kishan, 571 Rui, Xianyi, 949

1309 S Shan, Baoming, 957 Shen, Haibo, 655 Shen, Jing, 717 Shen, Xuchi, 729 Shi, Guowei, 1269 Shi, Guozhen, 1087 Shi, Junling, 681 Shi, Runhua, 873 Shishkin, Alexey G., 155 Siva Kumar Reddy, B., 459 Song, Lihua, 27 Song, Wei, 607, 621 Song, Weijiao, 215 Sreekanth, G., 693 Stepanov, Sergey V., 155 Su, Dan, 607, 621 Subramaniam, Shamala, 251 Sun, Hongyu, 971 Sun, Jilong, 351 Sun, Pengfei, 179, 189 T Talluri, Manasvi, 935 Tan, Hongzhou, 439 Tan, Yasuo, 235 Tang, Lijun, 1233 Tang, Liyang, 3 Tang, Meng, 1037 Tang, Wang, 1163 Tang, Zhiling, 809 Tian, Honger, 1293 Tian, Run, 189 Ting, Jacky S.L., 537 Tong, Jun, 891 Tong, Sheng, 891 Tong, Shoufeng, 285, 1017 Turčáni, Milan, 641 U Ullah, Israr, 1195 V Vempati, Srinivas Rao, 561 Voitenko, Iurii, 419 W Wakabayashi, Toshio, 669 Wan, Ke, 189 Wang, Dong, 87 Wang, Haifeng, 1005 Wang, Haitao, 27 Wang, Huibin, 113



Wang, Jiazheng, 295 Wang, Jisheng, 87 Wang, Jiulong, 593 Wang, Lianhai, 1137, 1149 Wang, Lidong, 583 Wang, Lingling, 163 Wang, Lixing, 537 Wang, Maoli, 915 Wang, Mingkun, 505 Wang, Qihui, 583 Wang, Ronghe, 351 Wang, Shuaibing, 1087 Wang, Tianshu, 285 Wang, Xiaoman, 327 Wang, Xingwang, 983 Wang, Xingwei, 681, 1175 Wang, Xiufang, 1057 Wang, Yanhai, 327 Wang, Ye, 861 Wang, Yong, 3, 799 Wang, Yougang, 1099 Wang, Yuan, 163 Wei, Zhengang, 215 Wei, Zhipeng, 757 Wen, Fengtong, 1115 Wu, Di, 73 Wu, Hao, 3 Wu, Qiwu, 35 Wu, Wei, 1233 X Xi, Jiangtao, 891 Xi, Zonghu, 1087 Xia, Wei, 1187 Xiang, Yang, 87 Xiao, Shaoqiu, 319 Xie, Mingxiang, 105 Xiong, Xingzhong, 1005 Xu, Bo, 529 Xu, Fei, 949 Xu, Lijuan, 1149 Xu, Pan, 949 Xu, Qilei, 957, 1243 Xu, Shufang, 113 Xu, Yong, 1099 Xu, Zhi, 717 Xue, Hao, 189 Y Yan, Yan, Yan, Yan, Yan,

Changling, 799 Hanbing, 1163 Li, 27 Lili, 829 Shuhui, 439

Yan, Xuequan, 927 Yang, Guowei, 285 Yang, Peng, 707 Yang, Pigi, 883 Yang, Ronggen, 1069 Yang, Rui, 607, 621 Yang, Ruijuan, 399, 449, 729 Yang, Shumian, 1137, 1149 Yang, Wanshu, 495 Yang, Wei, 409 Yu, Fei, 1107 Yu, Feng, 1223 Yu, Jie, 757 Yu, Li, 861 Yu, Yanguang, 891 Yuan, Shuhan, 87, 991 Yuan, Weiguo, 607, 621 Yuan, Yuan, 927 Z Zaitsev, Fedor S., 155 Zeng, Guangjun, 765 Zhan, Yongguo, 1293 Zhang, Bang, 1175 Zhang, Chenxin, 505 Zhang, Chunfei, 273 Zhang, Hao, 337 Zhang, Hui, 1099 Zhang, Huixi, 583 Zhang, Lijun, 1107 Zhang, Linlin, 707 Zhang, Lizhong, 285 Zhang, Nan, 707 Zhang, Peng, 285 Zhang, Qi, 87 Zhang, Quanqi, 439 Zhang, Shibing, 741 Zhang, Shuhao, 971 Zhang, Shuhui, 1137 Zhang, Shun, 873 Zhang, Tian, 51 Zhang, Wei, 553 Zhang, Wenwu, 223 Zhang, Xiaokuan, 505 Zhang, Yan, 1037 Zhang, Yanqiu, 991 Zhang, Yechi, 873 Zhang, Yicheng, 179 Zhang, Yuanpeng, 41 Zhang, Yue, 3 Zhang, Zejun, 449 Zhang, Zhijia, 553 Zhao, Chen, 1163 Zhao, Fangfang, 41