2012 IEEE International Conference on Multimedia and Expo Workshops

CLOUD-BASED E-HEALTH MULTIMEDIA FRAMEWORK FOR HETEROGENEOUS NETWORK

Atif Alamri
Chair of Pervasive Mobile Computing, College of Computer and Information Sciences
King Saud University, Saudi Arabia
[email protected]

Abstract—There is a strong need to improve the transmission mechanism for eHealth management and multimedia services over heterogeneous networks. Efficient mechanisms for accessing, communicating and delivering patient health information across diverse devices face several challenges: devices with computation and power-supply limitations, HDTV with its high bandwidth requirement but limited resources, and network limitations characterized by throughput variations, delay and transmission errors. An adaptive video bit rate is therefore a desirable key feature of multimedia health services. In this paper, we propose and demonstrate a cloud computing framework that uses scalable video coding, whose efficient video compression relieves heterogeneous devices when they execute heavy multimedia applications, use database storage, and deliver mobile health services over limited bandwidth resources. Simulation results show that the proposed scheme achieves a significant improvement in PSNR-Y gain compared with the existing scheme.

Keywords—cloud computing; e-health; heterogeneous network; scalable video coding

I. INTRODUCTION

The recent developments in mobile, telecommunication and information technologies, and their impact on interoperability issues in different telemedical and telecare applications, are apparent in the emerging commercial domain of mobile telemedical services. Current examples include mobile ECG transmission; video images and teleradiology; wireless ambulance services to predict emergency and stroke morbidity; and other integrated mobile telemedical monitoring systems [1]-[4]. The concept of carrying high-speed data and other services over heterogeneous networks is emerging as one of the main priorities for future telecommunication and multimedia services, and the benefits to citizen-centered healthcare systems are a particular draw. Telehealth and telemedicine systems utilize telecommunication infrastructure to facilitate the exchange of electronic information, such as medical images and real-time videos for remote monitoring, remote diagnosis, telesurgery and other forms of medical services [5]-[6]. To deploy an effective telehealth system, it is essential to ensure the availability of telecommunication infrastructures and access networks for healthcare professionals such as doctors, nurses, public safety workers and health IT specialists, as well as for patients and related communities [7]-[9]. These healthcare services include high-definition (HD) video-centric and super-resolution image-intensive remote diagnosis applications [8]-[9]. Moreover, real-time delivery of multimedia content is necessary to increase patient reach, which includes extending healthcare to patients in rural areas [10]-[11]. However, medical services such as teleradiology and telepathology require intensive computation, very large database storage, and efficient transmission of high-resolution digital images with huge file sizes. This is a challenging task because of the high costs associated with limited bandwidth resources. Significant improvements in video compression have come with the introduction of the H.264/AVC video coding standard. Its Scalable Video Coding (SVC) extension enables the extraction of partial bit streams that meet a specific target bit rate, with lower temporal or spatial resolution or reduced overall quality, while retaining a reconstruction quality that is high relative to the rate of the partial bit stream [12]-[15]. Cloud computing can be regarded as a virtually unlimited resource that can be accessed anytime and anywhere in the world, and many computationally intensive operations can be offloaded onto the cloud. Hence, the combination of cloud computing technology with the state-of-the-art H.264/AVC compression technique can be beneficial for transmitting healthcare services and related applications, yet this combination has not received much attention in existing research. In this paper, a framework for the efficient transmission of healthcare services, databases and monitoring applications over limited bandwidth resources, based on cloud computing technology and an efficient compression technique, is proposed to enhance the capability of heterogeneous networks and devices for use in healthcare multimedia applications.

II. E-HEALTH MULTIMEDIA SERVICES AND STANDARDS


The World Health Organization (WHO) defines eHealth as “the cost-effective and secure use of information and communications technologies in support of health and health-related fields, including health-care services, health surveillance, health literature, and health education, knowledge and research” [16]. Electronic health (eHealth) systems continue to hold great potential for improving global access to healthcare services and health informatics. Recent improvements in remotely administered medicine increasingly enable the virtual multimedia delivery of medical consultation, remote imaging services, specialized medical diagnostics, and remote medical procedures. Standardized electronic medical records hold the promise of facilitating the digital exchange of patient data between a patient’s primary care physician and other health providers. Another medical development is the use of genomic data (e.g. genetic markers) as part of personalized electronic health records to assist with diagnosis and treatment decisions. Technological obstacles that hinder the promise of eHealth systems include the lack of global interoperability standards for eHealth and technical infrastructure barriers [16]. According to the ITU-T technology reports [16], many of these challenges can be addressed through advancements in technical standards for eHealth. Fig. 1 depicts standardization challenges for (i) eHealth information, such as patient data, images and informatics; (ii) eHealth software systems, such as information, database, process management and mobile application systems; and (iii) eHealth infrastructure, such as mobile systems, remote diagnostics, security systems and systems management.

Figure 1. Challenges in eHealth information, software systems and infrastructure [16]

A. Trends in eHealth Systems

This section describes four emerging trends in eHealth systems that use information and communication technologies for the delivery of healthcare services and for the digital recording, storage, and sharing of medical information [17]-[19]:
• Genomic medicine (e.g. genetic markers) used to assist with disease prevention, diagnosis, and treatment decisions.
• Standardized electronic health records, which create common digital formats and structures for integrating a variety of information about a patient and allow this information to be exchanged among medical information systems developed by different manufacturers.
• Remote healthcare and diagnostics, accessed via telecommunication networks and information technology for many medical purposes, including remote clinical care, diagnostics, electronic patient monitoring, and patient and provider access to medical information.
• Aggregated public health data, which refers to a large body of data obtained by combining some characteristics of standardized digital health records in a way that removes information that would identify any individual patient.

B. ITU-T Multimedia Framework for eHealth Applications

The significance of multimedia applications in eHealth is recognized at the highest level of the ITU-T [20]-[21]. eHealth standardization studies in the ITU-T are addressed by Question 28/16, “Multimedia Framework for eHealth Applications”. This high-level Question, which coordinates the technical standardization of multimedia systems to support eHealth applications, is allocated under ITU-T Study Group 16, the Lead Study Group on ubiquitous applications (e.g. eHealth and eBusiness). Q28/16 focuses on the critical need for global interoperability among fragmented eHealth systems based on different standards, and seeks to provide the necessary coordination among major global players (e.g. medical institutions, governments, inter-governmental organizations, non-profit groups, and private industry). Q28/16 has also produced a roadmap for telemedicine, indicating the major technologies applicable to telemedicine in eHealth as well as the benefits of standardization activities. The ITU-T’s eHealth Question 28/16, via Study Group 16, works with relevant consortia and standardization bodies such as HL7, DICOM, ISO, ETSI, IETF, IEEE, IEC and CEN, as well as the eHealth Group. Hence, there is a strong desire from the eHealth research community for an efficient transmission mechanism for patient data, diagnostic images, videos and other multimedia applications in the software system, along with their infrastructures.

III. SCALABLE VIDEO CODING: A SOLUTION FOR EHEALTH OVER HETEROGENEOUS NETWORKS

The Joint Video Team (JVT) of the VCEG and MPEG has standardized the Scalable Video Coding (SVC) extension of the H.264/AVC standard [22]-[24]. SVC allows partial transmission and decoding of a bit stream. The resulting (decoded) video has lower temporal or spatial resolution, or less fidelity. However, the video retains a reconstruction quality that is high relative to that achieved using the existing single-layer H.264/AVC design with the same quantity of data as in the partial bit stream. Hence, SVC provides adaptation capability for heterogeneous network structures and different receiving devices with the help of three types of scalability, namely temporal scalability, spatial scalability, and quality scalability.

Figure 2. Application scenarios of eHealth services for heterogeneous devices and networks.

The encoding structure of SVC consists of intra-layer coding, inter-layer coding, and a hierarchical prediction structure. It supports a flexible transport interface, including a real-time transport protocol payload format. The scalability of a video bit stream allows for adaptation of the media bit rate, as well as for device capability, without the need for transcoding or re-encoding. The availability of SVC is not only of interest at higher levels; for certain services it is also very attractive to have a rate-scalable extension with a backward-compatible base layer. Furthermore, rate and quality adaptation in an IP network may happen at different points: using the scalable bit stream, rate adaptation may be applied at the streaming server, in intermediate network nodes for device adaptation, in radio link buffers for channel adaptation, or only at the receiver to extract the appropriate resolution for the terminal display. Typical application scenarios for SVC are shown in Fig. 2. Owing to the decoding capability of the partial bit stream at various levels, videos with different spatial resolutions, e.g. for a Standard Definition TV (SDTV) set and a High Definition TV (HDTV) set, can be decoded as shown in scenario (a). Videos with different picture rates, e.g. for a mobile device and a laptop, can be decoded as shown in scenario (b). Healthcare professionals can be in the same network but within different sub-networks or with different connections, e.g. in scenario (c), where they are connected via cable, Local Area Network (LAN), Digital Subscriber Line (DSL) and Wireless LAN (WLAN). Healthcare professionals can also be located in the same network but with a different Quality of Service (QoS), e.g. owing to the different congestion control methods applied by the intermediate nodes. Therefore, the expected bandwidth for each user may differ, which will lead to the received patient videos being delivered with different picture rates, spatial resolutions, and/or quality levels. Even for one client, owing to bandwidth fluctuation, the received video may change at any moment in picture rate, spatial resolution, and quality level. All of these scenarios can easily be handled by SVC [25]-[26]. The inherent characteristics of SVC, namely its high compression, its robustness against transmission errors, and especially its flexibility for interoperability across heterogeneous networks and devices, make it a suitable candidate for the transmission of eHealth images and videos. Its adaptive features, with supporting transport interfaces including a real-time transport protocol payload format, are most appropriate for use in telehealth services.
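To make the adaptation idea concrete, the following is a minimal Python sketch of how a streaming server or intermediate node might choose an SVC operating point for a given client. The layer set and client parameters are hypothetical illustrations, not the configuration used in this paper.

```python
# A minimal sketch (not from the paper): picking an SVC operating point for a client.
from dataclasses import dataclass

@dataclass
class Layer:
    width: int
    height: int
    fps: int
    bitrate_kbps: float  # rate of the partial bit stream up to and including this layer

# Hypothetical operating points, loosely mirroring the base/enhancement setup of Sec. V.
LAYERS = [
    Layer(240, 192, 15, 51.0),     # base layer
    Layer(320, 240, 15, 208.0),    # spatial/quality enhancement
    Layer(1280, 720, 30, 1500.0),  # further enhancement (purely illustrative)
]

def select_layer(max_width, max_height, max_fps, bandwidth_kbps):
    """Return the highest operating point that fits the client's display and link."""
    best = LAYERS[0]  # the base layer is always decodable
    for layer in LAYERS:
        fits_display = layer.width <= max_width and layer.height <= max_height
        fits_link = layer.fps <= max_fps and layer.bitrate_kbps <= bandwidth_kbps
        if fits_display and fits_link:
            best = layer
    return best

# A constrained mobile client receives only the base layer; an HDTV client gets more.
print(select_layer(320, 240, 15, 100.0))     # -> base layer
print(select_layer(1920, 1080, 30, 3000.0))  # -> highest enhancement layer
```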

Figure 3. Illustration of cloud computing layers as a collection of services (XaaS)

IV. USING CLOUD COMPUTING FOR E-HEALTH SERVICES, APPLICATION AND DATABASE

Cloud computing refers to both the applications delivered as services over the Internet and the systems software and hardware in the data centers that provide those services. Cloud computing can be defined as “a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet” [27]. It can be ubiquitously accessed over the Internet from any connected device, such as PCs, laptops, smart phones and PDAs. Emerging cloud applications include, but are not limited to, social networking, eHealth services and databases, gaming portals, business applications, media content delivery and scientific workflows [28].

A. Layers of Cloud Computing

Cloud computing is the combination of different layers of services. These services are delivered to users in a real-time environment via the Internet. There are three types of cloud computing layers, namely Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Fig. 3 illustrates the cloud computing layers as a collection of services (XaaS).


In SaaS, software applications are exposed as services that run on a cloud infrastructure. An application is hosted as a service to clients, who access it via the Internet [29]. For example, a web user can work on Google Docs without needing to install any application for it to function. Other providers, like Amazon, offer cloud services where a subscriber pays only for the amount of service that they use. Infrastructure as a Service (IaaS) refers to computing resources as a service. This includes virtualized computers with guaranteed processing power and reserved bandwidth for storage and Internet access. Therefore, instead of owning, managing or controlling the underlying infrastructure, the infrastructure is rented as a service. Platform as a Service (PaaS) is similar to Infrastructure as a Service, but it also includes the operating systems and the services required for a particular application. PaaS services include application design, development, testing, deployment and hosting [30]-[31]. In this category, in addition to services (application software, etc.), a server, memory and other platform resources are utilized. In PaaS, resources are virtualized and shared among different applications with the objective of achieving better server utilization. Virtualization technologies include virtual machine techniques, such as VMware, and virtual networks, such as virtual private networks (VPNs). The virtual machines provide virtualized IT infrastructure on demand, while the virtual networks provide clients with a customized network environment to access cloud resources.
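As a rough illustration of how eHealth components could be grouped across these layers, the following sketch uses purely illustrative component names; it is not an inventory taken from the paper.

```python
# Illustrative only: one possible grouping of eHealth components by service layer.
XAAS_MAPPING = {
    "SaaS": ["patient portal", "remote diagnosis viewer", "eHealth record access"],
    "PaaS": ["SVC encoding service", "media processing pipeline", "application directory"],
    "IaaS": ["virtual machines for bio-signal processing", "eHealth database storage",
             "reserved bandwidth for image transfer"],
}

for layer, components in XAAS_MAPPING.items():
    print(f"{layer}: {', '.join(components)}")
```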

Figure 4. Illustration of an eHealth multimedia application in cloud computing

B. eHealth Services with Cloud Computing

Healthcare multimedia data consists of images, audio, video, digital signals and text data that require computational processing of bio-signals, appropriate storage, and high bandwidth utilization for transmitting a patient’s data. This intensive processing of bio-signals consumes physical resources, such as CPU and memory, which can be offloaded onto the cloud. Fig. 4 illustrates the functionalities and infrastructure of the media cloud, which consists of eHealth media services, an application directory, and a system manager. The resource allocation manager coordinates among the different VM platforms, which consist of virtual machines and physical resources for processing and for eHealth database storage.
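A simple way to picture the offloading decision is sketched below. This is an assumption for illustration only: the thresholds and resource probes are hypothetical placeholders, not the paper's algorithm, and a real client would measure them through the device operating system.

```python
# Sketch (assumption): a client-side rule for deciding when to offload
# bio-signal processing to the media cloud.

def should_offload(cpu_load, free_memory_mb, battery_pct, task_memory_mb):
    """Offload when the task cannot run locally or local execution is too costly."""
    if free_memory_mb < task_memory_mb:
        return True   # not enough memory on the device
    if cpu_load > 0.8 or battery_pct < 20:
        return True   # CPU saturated or battery too low
    return False

# Example: a loaded handset offloads; an idle laptop processes locally.
print(should_offload(cpu_load=0.9, free_memory_mb=512, battery_pct=60, task_memory_mb=256))   # True
print(should_offload(cpu_load=0.2, free_memory_mb=4096, battery_pct=90, task_memory_mb=256))  # False
```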

Figure 5. Framework for eHealth application in cloud computing functioning over heterogeneous network and devices

C. Framework for eHealth Multimedia Services with Cloud Computing over a Heterogeneous Network

Owing to the transmission and decoding capability of partial bit streams, images and videos, including ECG and telecardiology content with different spatial resolutions ranging from a mobile device to HDTV, can be decoded with SVC. Fig. 5 depicts the framework for eHealth multimedia services and applications in cloud computing functioning over heterogeneous networks. The cloud can be public, for a citizen-centered healthcare system; private, for remote monitoring and diagnostics by healthcare professionals in the hospital; or hybrid, i.e. a combination of public and private clouds, to facilitate access to specialty care in large areas with a shortage of specialists. The application servers perform the intensive computation on patient videos and images, which are stored in the eHealth database. The application servers also encode the SVC bit streams for patient multimedia applications. To encode a patient’s images and videos with improved PSNR gain and low complexity, we derive the MB mode partition using a block-merging technique. Wang et al. in [32] proposed a block-merging scheme in which two smaller partition blocks may be merged into a larger one, as long as they share the same reference frame index and similar motion vectors.




However, a more relaxed criterion may be applied. In general, the motion vectors (MVs) of adjacent blocks are similar as long as they belong to one homogeneous video object [33]. In this case, it is helpful to merge two neighboring 4x4 blocks into one, even if their absolute MV difference is larger than the MV threshold set in JVT-W030. Hence, by assuming that the MVs of adjacent blocks are similar within one homogeneous video object, we recommend the following simplified block-merging criterion (a sketch is given after this paragraph):
• Calculate the square of the l2 norm of the motion vector difference of the neighboring blocks.
• Apply the proposed block merging if the square of the l2 norm of the MV difference is less than the threshold set in the JSVM.
The encoded bit streams can be accessed by authenticated users anytime and anywhere from the cloud via an IP network, as shown in Fig. 5.
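The following Python sketch illustrates the simplified criterion above. The threshold value is a placeholder for the one configured in the JSVM reference software and is not taken from the paper.

```python
import numpy as np

# Sketch of the simplified block-merging criterion; the threshold is a hypothetical
# placeholder for the value configured in the JSVM.
MERGE_THRESHOLD = 4.0  # threshold on the squared l2 norm of the MV difference

def should_merge(mv_a, mv_b, same_ref_idx=True, threshold=MERGE_THRESHOLD):
    """Merge two neighboring 4x4 blocks if they use the same reference frame index and
    the squared l2 norm of their motion-vector difference is below the threshold."""
    if not same_ref_idx:
        return False
    diff = np.asarray(mv_a, dtype=float) - np.asarray(mv_b, dtype=float)
    return float(np.dot(diff, diff)) < threshold

# Two adjacent blocks inside one homogeneous object merge; a strong MV jump does not.
print(should_merge((3, -1), (2, -1)))  # squared difference = 1  -> True
print(should_merge((3, -1), (8, 4)))   # squared difference = 50 -> False
```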

V. SIMULATION RESULTS AND RELATED DISCUSSIONS

To demonstrate the performance of the proposed method, two compressed eHealth videos (an MRI of lung echo and a multi-slice CT scan sequence) were downloaded from Springer Images [34]-[35] and converted into raw video sequences of 100 frames (YUV 4:2:0, progressive, intra period of -1). After generating a bit stream, a packet-loss simulator is used to simulate packet loss during transmission. The error pattern used in the simulator is the one proposed by the Video Coding Experts Group (VCEG) [36]. The other simulation conditions are as follows (a sketch of the standard PSNR-Y measure used for the comparison is given after this list):
• JSVM version 9.8 [37].
• Quantization parameter setting: quantization parameters of QP0 = (28, 32, 36, 40) and QP1 = (30, 34, 38, 42) are used for the base layer and enhancement layer, respectively.
• Resolutions of 240x192 and 320x240 at 15 fps are used for the base layer and enhancement layer, respectively, for each sequence.
• The bit stream is decoded after passing through the simulator with a 3% packet loss rate for the base layer and a 5% packet loss rate for the enhancement layer.
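For reference, the sketch below computes luma PSNR (PSNR-Y) in its standard form, which is the metric reported in Table I; it is not the authors' exact measurement tool, and the synthetic frames are only a usage example.

```python
import numpy as np

# Standard luma PSNR (PSNR-Y) between a reference and a decoded 8-bit Y plane.
def psnr_y(ref, dec):
    ref = ref.astype(np.float64)
    dec = dec.astype(np.float64)
    mse = np.mean((ref - dec) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Example at the enhancement-layer resolution (320x240) with synthetic frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
noise = rng.integers(-3, 4, size=ref.shape)
dec = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(round(psnr_y(ref, dec), 2))
```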

TABLE I. PERFORMANCE COMPARISON OF RATE-DISTORTION EFFICIENCY BETWEEN ANCHOR AND PROPOSED SCHEME

Sequ.   QP0-QP1   Fr. (fps)   Anchor PSNR (dB)   Anchor Bitrate (kbps)   Proposed PSNR (dB)   Proposed Bitrate (kbps)
MRI     40-42     15          27.0553            51.07                   27.1524              56.05
MRI     36-38     15          28.3540            98.87                   28.6850              108.74
MRI     32-34     15          29.3051            192.43                  30.5724              208.46
MRI     28-30     15          30.4885            348.75                  31.3189              378.43
M-CT    40-42     15          25.2713            33.30                   27.2759              35.97
M-CT    36-38     15          26.2665            62.89                   28.6472              67.50
M-CT    32-34     15          27.1272            117.79                  29.8486              124.12
M-CT    28-30     15          27.7230            203.90                  30.8144              217.23

Table I lists the decoded bit rates and PSNR-Y values of the anchor and the proposed algorithm at different QPs for each sequence. From Table I, we can conclude that the proposed scheme outperforms the anchor. Fig. 6 shows the RD curves for the MRI of lung echo sequence (top) and the multi-slice CT scan sequence (bottom) at 320x240 resolution.

Figure 6. Rate-Distortion curve for MRI (top) and multi-slice CT scan (bottom) sequences
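The rate-distortion points can be re-plotted directly from Table I; the sketch below does so for the MRI sequence, using the values copied from the table (it reproduces the style, not the exact figure, of the top panel of Fig. 6).

```python
import matplotlib.pyplot as plt

# MRI rate-distortion points taken from Table I.
anchor_rate   = [51.07, 98.87, 192.43, 348.75]
anchor_psnr   = [27.0553, 28.3540, 29.3051, 30.4885]
proposed_rate = [56.05, 108.74, 208.46, 378.43]
proposed_psnr = [27.1524, 28.6850, 30.5724, 31.3189]

plt.plot(anchor_rate, anchor_psnr, "o-", label="Anchor")
plt.plot(proposed_rate, proposed_psnr, "s-", label="Proposed")
plt.xlabel("Bitrate (kbps)")
plt.ylabel("PSNR-Y (dB)")
plt.title("MRI of lung echo sequence (320x240)")
plt.legend()
plt.show()
```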

VI. CONCLUSIONS

In this paper, the scalable extension of the H.264/AVC bit stream and its potential use in eHealth multimedia applications are presented, together with the benefits of the layered structure of cloud computing. Due to the flexibility and interoperability characteristics of SVC, the layer configuration of a scalable stream of a patient’s videos can be adapted to heterogeneous networks and devices. Experimental results show that the proposed scheme achieves a significant increase in PSNR-Y gain in a packet-loss environment. The combination of cloud computing and multimedia services is an encouraging advancement that is expected to improve healthcare systems and will ideally lead to the next generation of personal healthcare systems.



REFERENCES
[1] R. S. H. Istepanian and J. C. Lacal, “Emerging Mobile Communication Technologies for Health: Some Imperative Notes on m-Health,” Proc. 25th Annual International Conference of the IEEE EMBS, Cancun, Mexico, September 17-21, 2003.
[2] C. S. Pattichis, E. Kyriacou, S. Voskarides, and R. S. H. Istepanian, “Wireless Telemedicine Systems: An Overview,” IEEE Antennas and Propagation Magazine, vol. 44, pp. 143-153, 2002.
[3] W. Yao and R. S. H. Istepanian, “3G Mobile Communications for Wireless Tele-Echography Robotic System,” Proc. World Multiconference on Systemics, Cybernetics and Informatics, Orlando, Florida, USA, pp. 138-142, 14-18 July 2002.
[4] S. Tachakra and R. S. H. Istepanian, “Mobile e-Health: The Unwired Evolution of Telemedicine,” Telemedicine and e-Health Journal, 2003.
[5] A. Chowdhury, H.-C. Chien, S. Khire, S.-H. Fan, X. Tang, N. Jayant, and G.-K. Chang, “Next-Generation E-Health Communication Infrastructure using Converged Super-Broadband Optical and Wireless Access System,” WoWMoM 2010, Montreal, QC, Canada, 14-17 June 2010, pp. 1-5.
[6] D. A. Perednia et al., “Telemedicine technology and clinical applications,” Journal of the American Medical Association, vol. 273, pp. 483-488, 1995.
[7] O. Onguru et al., “Intra-hospital use of telepathology system,” Pathology Oncology Research, vol. 6, no. 3, 2000.
[8] M. J. Su et al., “Application of tele-ultrasound in emergency medical services,” Telemedicine Journal and e-Health, vol. 14, pp. 816-824, Oct. 2008.
[9] D. K. Kim et al., “A mobile telemedicine system for remote consultation in cases of acute stroke,” Journal of Telemedicine and Telecare, vol. 15, pp. 102-107, 2009.
[10] I. Pratap et al., “Comparative technical evaluation of various communication media used for telemedical video-conference,” HealthCom 2008, July 2008, pp. 1-2.
[11] S. Yu et al., “A tele-ophthalmology system based on secure video conferencing and white-board,” HealthCom 2008, July 2008, pp. 51-52.
[12] I. E. Richardson, H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia, John Wiley & Sons Ltd, 2003.
[13] H. Schwarz, D. Marpe, and T. Wiegand, “Overview of the scalable video coding extension of H.264/AVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1103-1120, Sep. 2007.
[14] T. Schierl, T. Stockhammer, and T. Wiegand, “Mobile video transmission using scalable video coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[15] M. Wien, H. Schwarz, and T. Oelbaum, “Performance analysis of SVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[16] ITU-T Technology Watch, http://www.itu.int/ITU-T/techwatch.
[17] W. G. Feero et al., “Genomic Medicine – An Updated Primer,” The New England Journal of Medicine, vol. 362, no. 21, May 27, 2010.
[18] K. Atalag, D. Kingsford, C. Paton, and J. Warren, “Putting Health Record Interoperability Standards to Work,” Electronic Journal of Health Informatics, vol. 5, no. 1, 2010.
[19] http://www.ehiprimarycare.com/news/5588/telehealth_%27to_take_off_by_2012%27.
[20] Question 28/16 – Multimedia Framework for e-Health Applications, http://www.itu.int/ITU-T/studygroups/com16/sg16-q28.html.
[21] http://www.itu.int/publ/T-TUT-EHT-2006-RTM/en.
[22] S. Wenger, Y.-K. Wang, and T. Schierl, “Transport and signaling of SVC in IP networks,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[23] Y.-K. Wang, M. M. Hannuksela, S. Pateux, A. Eleftheriadis, and S. Wenger, “System and transport interface of SVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[24] P. Amon, T. Rathgen, and D. Singer, “File format for scalable video coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[25] T. Schierl, K. Gänger, T. Stockhammer, and T. Wiegand, “SVC-based multi-source streaming for robust video transmission in mobile ad-hoc networks,” IEEE Wireless Commun. Mag., vol. 13, no. 5, pp. 96-103, Oct. 2006.
[26] M. Wien, R. Cazoulat, A. Graffunder, and P. Amon, “Real-time systems for adaptive video streaming based on SVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, Sep. 2007.
[27] Available at http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/ns976/white_paper_c11-543729.html.
[28] M. Nkosi and F. Mekuria, “Cloud Computing for Enhanced Mobile Health Applications,” Proc. IEEE CloudCom 2010, Indiana, USA, Nov. 30 - Dec. 3, 2010.
[29] F. Mekuria et al., “Intelligent Mobile Sensing & Analysis Systems,” Proc. 3rd CSIR Biennial Conference, International Convention Centre, Pretoria, South Africa, August 31 - September 2, 2010.
[30] M. G. Jaatun (ed.), Cloud Computing: First International Conference, CloudCom 2009, Beijing, China, December 1-4, 2009, Proceedings, Springer.
[31] A. Khan and K. K. Ahirwar, “Mobile cloud computing as a future of mobile multimedia database,” International Journal of Computer Science and Communication, vol. 2, no. 1, pp. 219-221, January-June 2011.
[32] X. Wang and J. Ridge, “CE2 report: Improvement of macroblock mode prediction in ESS,” ISO/IEC JTC1/SC29/WG11, Doc. JVT-W030, San Jose, California, USA, 21-27 April 2007.
[33] D. Wu, F. Pan, K. P. Lim, S. Wu, Z. G. Li, X. Lin, S. Rahardja, and C. C. Ko, “Fast intermode decision in H.264/AVC video coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 953-958, Jun. 2005.
[34] Available at http://www.springerimages.com/Images/RSS/110.1007_s00330-010-1918-0-9
[35] Available at http://www.springerimages.com/Images/MedicineAndPublicHealth/1-10.1007_s10554-006-9155-y-v1
[36] S. Wenger, “Error patterns for Internet experiments,” TU Berlin, Doc. VCEG-Q15-I-16r1, New Jersey, Oct. 1999.
[37] JVT, “H.264/SVC reference software (JSVM 9.8) and manual,” CVS server at garcon.ient.rwth-aachen.de, Oct. 2007.
