Multimed Tools Appl https://doi.org/10.1007/s11042-018-6440-4

Improved joint algorithms for reliable wireless transmission of 3D color-plus-depth multi-view video

Walid El-Shafai 1 · EL-Sayed M. El-Rabaie 1 · Mohamed Elhalawany 1 · Fathi E. Abd El-Samie 1

Received: 30 November 2017 / Revised: 9 June 2018 / Accepted: 20 July 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract Error control techniques like Error Resilience (ER) and Error Concealment (EC) are the preferred techniques to recover the lost Macro-Blocks (MBs) in 3D Video (3DV) communication systems. In this paper, we present different enhanced ER-EC algorithms for intra-frame images for 3DV plus Depth (3DV+D) communication through wireless networks. At the encoder, the slice structured coding, explicit flexible macro-block ordering, and context adaptive variable length coding are utilized. At the decoder, a hybrid approach comprising a spatial circular scan order interpolation algorithm and a temporal partitioning motion compensation algorithm is suggested to reconstruct the Disparity Vectors (DVs) and Motion Vectors (MVs) of the erroneous color images. For the corrupted depth images, a depth-assisted EC algorithm is proposed. Then, the optimum concealment MVs and DVs are chosen by employing the weighted overlapping block motion and disparity compensation algorithm. Furthermore, the Bayesian Kalman Filter (BKF) is utilized as an amelioration tool, due to its efficiency in smoothing the remnant inherent corruptions in the formerly chosen optimum color and depth DVs and MVs, to obtain a good video quality. Simulation results on several 3DV streams show that the suggested algorithms achieve superior subjective and objective video quality compared to the traditional methods, particularly at high Packet Loss Rates (PLRs).

* Walid El-Shafai
[email protected]

EL-Sayed M. El-Rabaie
[email protected]

Mohamed Elhalawany
[email protected]

Fathi E. Abd El-Samie
[email protected]

1 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt


Keywords Multi-view image retrieval · Depth estimation · Error resilience and concealment · Kalman filter · Disparity and motion compensation · Wireless networks

1 Introduction

Recently, research efforts in Three-Dimensional Multi-view Video Coding (3D-MVC) have increased and received more attention. The 3DV is expected to rapidly take the place of the 2DV in numerous applications, including entertainment, gaming, medicine, machine learning, and education [25–29, 35, 37]. A 3D video consists of various sequences taken by different cameras for the same object. The 3DV communication over wireless networks and the Internet has increased dramatically [1, 3]. Thus, to transmit the 3DV over resource-limited channels, highly efficient encoding must be implemented while maintaining a good reception performance. Therefore, the 3D-MVC prediction exploits the inter-view correlation among the various 3DV sequences, in addition to the temporal and spatial correlation between frames of the same sequence, to improve the coding performance. Unfortunately, the highly compressed 3DV is more sensitive to transmission channel corruptions. The 3D video streaming over wireless channels is subject to random and bursty packet corruptions because of packet dropping or fading-motivated bit errors [13, 24]. The 3D-MVC predictive framework indicated in Fig. 1 is employed to encode the transmitted 3DV streams [7]. It utilizes inter- (P and B) and intra- (I) coded frames. So, errors might propagate to the neighbouring views or to the subsequent frames, resulting in poor 3DV quality. It is intractable to re-transmit the corrupted or erroneous MBs due to the delay restrictions of real-time video applications. So, there is a need for error control techniques, such as pre-processing ER techniques at the encoder side and post-processing EC techniques at the decoder side. The ER schemes are exploited to make the transmitted 3DV streams more robust to the expected fading-induced bit errors, to mitigate the effect of packet loss, and to enhance the performance of the EC techniques at the decoder side.

Fig. 1 3D-MVC hierarchical prediction coding framework

The EC techniques are recommended as they reduce the effect of channel errors without increasing the delay or requiring any complicated encoder adjustments [10, 20, 39]. Several ER techniques have been suggested to minimize the effect of packet damages prior to 3D video transmission. In the state-of-the-art works [14, 17, 23, 30], the Automatic Repeat Request (ARQ) and Forward Error Correction (FEC) algorithms have been proposed. Unfortunately, they increase the transmission bit rate, and they are not recommended for real-time 3DV applications. Therefore, in this paper, we propose efficient content-adaptive ER tools at the encoder, which neither add redundant data nor increase the transmission bit rate. The proposed ER methods adaptively exploit the available 3DV information to protect the transmitted 3DV bit streams prior to transmission. Moreover, the EC algorithms proposed in this paper enhance the received 3DV performance without modifications to the encoder software or hardware or to the transmitted bit rate. They take advantage of the intra- and inter-view correlation inside the 3DV sequences to conceal the corrupted frames or MBs of the inter- and intra-frames. The 3DV-plus-depth is an important 3DV representation format that has been recently introduced. It contains information about the per-pixel geometry of the objects [3, 13]. The depth data can be used to recognize the boundaries of the objects to assist in the concealment process. The depth information is very important in different 3D implementations. It is important for the coincidence of different 3DV displays. In addition, it saves the amount of stored 3DV bits compared to the conventional 2DV applications. In this paper, the depth data maps are calculated at the decoder instead of being calculated at the encoder.
The traditional depth-based EC works require the depth maps to be encoded and transmitted through the network [2, 5, 19, 22, 31, 32, 36]. Thus, they increase the transmission latency, and they are not suitable for band-limited wireless networks. Yan et al. [36] introduced a Depth-aided Boundary Matching Algorithm (DBMA) for color-plus-depth sequences. In addition to the zero MV and the other MVs of the collocated MBs and neighboring spatial MBs, another MV, estimated through depth map motion calculations, is appended to the group of estimated candidate MVs. Yunqiang et al. [22] proposed a Depth-dependent Time Error Concealment (DTEC) technique for the color streams. In addition to the available MVs of the MBs surrounding a corrupted MB, the corresponding MV of the depth map MB is added to the candidate group of MVs. Furthermore, the depth MB corresponding to a corrupted color MB is classified as a homogeneous or boundary type. A homogeneous MB can be recovered as a whole, while boundary MBs may be divided into background and foreground and then separately recovered. The DTEC method can differentiate between the background and foreground, but it fails to provide adequate outcomes when the depth information is absent at the corresponding positions of the color sequence. In [5], Chung et al. suggested a frame damage concealment scheme, where precise calculations of the corrupted color MVs are carried out with the assistance of depth data. Tai et al. [31] suggested an efficient full-frame algorithm for depth-based 3D videos. This algorithm determines the objects according to temporal and depth information. The TME and depth map were exploited to estimate the MVs of the lost depth MBs by extrapolating each object in the reference frame to recover the corrupted frame. Khattak et al. [19] presented a consistent framework for 3D video-plus-depth concealment that allows high levels of temporal and inter-view consistency. Assuncao et al.
[2] introduced an error concealment method for intra-coded depth maps based on spatial intra- and inter-view methods. This method exploits neighboring depth data and received error-free color images. An efficient three-stage processing algorithm was devised to reconstruct sharp depth transitions, i.e. lost depth contours. Wang et al. [32] presented an MB distinction model for both texture and depth video concealment to relieve the virtual view quality degradation induced by packet loss. It is noticed that most of the above-mentioned depth-assisted EC works extract the depth data at the encoder and require the depth maps to be coded and transported through the network to the decoder. So, they increase the computational complexity and bit rate of the 3DV transmission system, which does not meet the future bandwidth constraints of wireless communication networks. Therefore, to avoid the limitations of the traditional EC algorithms, we propose in this work a depth-assisted EC algorithm that predicts the depth maps at the decoder side rather than at the encoder, by estimating the depth DVs and MVs from the received color and depth 3DV bit streams. So, to transmit the 3DV bit streams efficiently over erroneous wireless channels and to deal efficiently with the case of heavy MB losses, with which the traditional error control algorithms fail to deal [2, 5, 7, 10, 14, 17, 19, 20, 22, 23, 30–32, 36, 39], we suggest multi-stage error control schemes for robust color and depth 3DV transmission over severely corrupted wireless channels. Therefore, we propose improved joint resilience-concealment schemes for the intra-frames of 3D videos transmitted over error-prone channels. At the encoder, we utilize the Slice Structured Coding (SSC) modes, Explicit Flexible Macro-block Ordering (EFMO) mapping, and Context Adaptive Variable Length Coding (CAVLC) entropy coding.
These methods are applied at the encoder side to enhance the performance of the proposed concealment schemes at the decoder side in order to restore the corrupted MBs efficiently. At the decoder, a hybrid approach of the spatial Circular Scan Order Interpolation Algorithm (CSOIA) and the temporal Partitioning Motion Compensation Algorithm (PMCA) is proposed to determine the MVs and DVs of the corrupted color intra-frames. For the corrupted depth intra-frames, an improved Encoder-Independent Decoder-Dependent Depth-Assisted EC (EIDD-DAEC) technique is suggested to recover the lost depth MBs. Furthermore, the Weighted Overlapping Block Motion and Disparity Compensation (WOBMDC) algorithm is exploited to select the optimum concealment depth and color DVs and MVs. After that, for extra improvement of the concealed 3DV bit streams, the BKF is applied to the selected color-plus-depth DVs and MVs to enhance the overall efficiency of the suggested hybrid framework. In summary, the proposed error control algorithms can achieve better objective and subjective quality than prior error control works, while saving the transmission bit rate and presenting minimal computational complexity. The proposed work neither requires complicated encoder operations nor increases the transmission delay. The proposed algorithms can also transmit the 3DV bit streams efficiently over severely erroneous wireless channels and can deal with the case of heavy MB losses ignored in the literature [2, 5, 7, 10, 14, 17, 19, 20, 22, 23, 30–32, 36, 39]. The rest of this paper is organized as follows. Section 2 introduces the 3DV compression prediction framework and the BKF model. In Section 3, we present the proposed ER schemes, the proposed EC schemes for erroneous color intra-frames, the proposed depth-aided technique for the corrupted depth intra-frames, the proposed WOBMDC selection algorithm, and the suggested BKF amelioration technique.
Simulation outcomes and discussions are summarized in Section 4, and Section 5 presents the concluding remarks.


2 Related works

2.1 3D MVC prediction structure (3D MVC-PS)

The two-dimensional H.264/AVC codec [12] differs significantly from the three-dimensional H.264/MVC codec. The 3DV codec performs spatial-temporal matching between frames within the same-view stream, in addition to inter-view matching between different views. Motion Compensation Prediction (MCP) and Disparity Compensation Prediction (DCP) are utilized at the encoder side to obtain efficient 3DV encoding, and furthermore at the decoder side for executing an efficient error concealment process [34]. In the MVC Prediction Structure (MVC-PS) depicted in Fig. 1, the DCP is employed to calculate the DVs among neighboring frames of different views, and the MCP is employed to determine the MVs among neighboring frames in the same 3D video view. Therefore, each frame within the 3D MVC-PS is predicted from different-view frames and/or from temporally adjoining frames. The 3DV MVC prediction structure uses a Group Of Pictures (GOP) that includes eight temporal images. The horizontal axis refers to the temporal direction, whilst the vertical axis refers to the camera views. For instance, the even views (V2, V4, and V6) are compressed depending on the MCP, while their prime key images are encoded through the utilization of the DCP. The V0 view is compressed using only temporal matching based on the MCP. On the other hand, in the odd views (V1, V3, and V5), both the MCP and DCP are jointly utilized for improving the compression efficiency. In this paper, we define the 3D video view depending on its prime 3D video frame. So, as depicted in Fig. 1, the even views are named the P views, the V0 view is called the I view, and the odd views are called the B views. The final view could be odd or even depending on the proposed MVC-PS for 3D video. The 3DV MVC prediction structure introduces two different types of compressed frames, which are the I intra-frames and the B and P inter-frames.
The B-view inter-frames are predicted from the I-view intra-frames as well as from the P-view inter-frames. Hence, if an error occurs in a P or I frame, it propagates to the reference inter-view frames and also into the neighboring temporal pictures in the same 3DV view. So, the intra-frames are the key frames of the 3D MVC-PS, and they control the whole efficiency of the 3DV transmission. Therefore, in this work, we suggest a hybrid approach of improved spatial and temporal EC schemes to conceal the heavily corrupted color-plus-depth intra-frames.

2.2 Bayesian Kalman filter (BKF)

The BKF is a tool used to analyze discrete-time systems [11]. It determines the states s(k) of a discrete-time process governed by a linear stochastic difference equation from a group of measurements y(k). The BKF mathematical model can be described as in Eqs. (1) and (2):

s(k) = A(k−1) s(k−1) + w(k)   (1)

y(k) = H(k) s(k) + r(k)   (2)

In Eqs. (1) and (2), s(k) = [s1(k), s2(k), …, sN(k)]^T, r(k) is an M × 1 measurement noise vector distributed as Normal(0, σr²), y(k) is an M × 1 measurement vector, H(k) is an M × N measurement matrix, A(k−1) is an N × N transition matrix, and w(k) is an N × 1 white noise vector distributed as Normal(0, σw²). In this paper, we assume that the measurement noise r(k) and the process white noise w(k) follow independent normal distributions. We employ the BKF in the DV and MV estimation process, assuming that the prediction of the DVs and MVs of the erroneous MBs from the reference neighbouring MBs follows a Markov model. Therefore, the BKF can work efficiently in the motion and disparity vector prediction process for recovering the erroneous MBs. The BKF prediction framework consists of two steps, the measurement and time updates, as illustrated in Fig. 2, where p_k, ŝ_k, p_k⁻, ŝ_k⁻, and K_k refer to the a posteriori estimated error covariance, the a posteriori state estimate, the a priori estimated error covariance, the a priori state estimate, and the Kalman filter gain, respectively. The BKF determines the filtered states s(k) from a group of noisy measurements y(k) [6]. It exploits the state determined through the previous measurement and time updates to estimate the current state.
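As an illustration, the time/measurement update recursion can be sketched for a scalar state (a minimal sketch; the function name, the random-walk model a = h = 1, and the default noise variances are assumptions for illustration, not the paper's implementation):

```python
def bkf_smooth(measurements, a=1.0, h=1.0, q=0.01, r=0.1):
    """Scalar Bayesian Kalman filter over noisy MV/DV component measurements.

    a: state transition A, h: measurement gain H,
    q: process noise variance (sigma_w^2), r: measurement noise variance (sigma_r^2).
    """
    s_hat = measurements[0]   # initial a posteriori state estimate
    p = 1.0                   # initial a posteriori error covariance
    filtered = [s_hat]
    for y in measurements[1:]:
        # time update (predict): s_k^- = A s_{k-1}, p_k^- = A p_{k-1} A^T + sigma_w^2
        s_pred = a * s_hat
        p_pred = a * p * a + q
        # measurement update (correct): Kalman gain, then state/covariance correction
        k = p_pred * h / (h * p_pred * h + r)
        s_hat = s_pred + k * (y - h * s_pred)
        p = (1.0 - k * h) * p_pred
        filtered.append(s_hat)
    return filtered
```

With a = h = 1 the filter reduces to an adaptive exponential smoother of the selected MV/DV components, which is the role the BKF plays here: attenuating the remnant noise in the chosen concealment vectors.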

Fig. 2 The BKF prediction mechanism. The time update (predict) step computes

ŝ_k⁻ = A ŝ_{k−1},   p_k⁻ = A p_{k−1} Aᵀ + σw²,

and the measurement update (correct) step computes

K_k = p_k⁻ Hᵀ (H p_k⁻ Hᵀ + σr²)⁻¹,   ŝ_k = ŝ_k⁻ + K_k (y_k − H ŝ_k⁻),   p_k = (I − K_k H) p_k⁻.

3 Proposed hybrid error control techniques

The error concealment of 3DV intra-frames is not only essential for improving the recovered intra-frame quality; it also enhances the reconstructed inter-frame quality in the subsequent views and frames. In this section, the suggested color and depth error refinement techniques are presented. We present the proposed pre-processing ER tools at the encoder side, the proposed post-processing EC schemes at the decoder side for the lost color intra-frames, and the proposed depth-assisted technique for the corrupted depth intra-frames. Furthermore, the proposed WOBMDC selection algorithm and the proposed BKF enhancement algorithm are discussed. The schematic diagram of the framework of the proposed hybrid color-plus-depth error control algorithms is given in Fig. 3 and explained in more detail in the next sections.

3.1 Proposed ER algorithms

In this section, the proposed content-adaptive ER algorithms at the encoder side are introduced. They assist the suggested EC schemes at the decoder side in reconstructing the lost depth and color intra-frames efficiently. The proposed ER tools neither add redundant information to the transmitted bit streams nor increase the transmission bit rate. These algorithms are the SSC modes, EFMO mapping, and CAVLC entropy coding. They adaptively exploit the available 3DV content information to protect the transmitted 3DV bit streams prior to transmission. The main entropy encoders available in the 2D H.264/AVC system are the Context Adaptive Binary Arithmetic Coding (CABAC) and the CAVLC [12]. The CAVLC is an efficient entropy encoding scheme that provides low-delay characteristics for unprotected bit stream transmission over wireless channels. In this paper, the CAVLC entropy encoding is adopted for the 3D H.264/MVC transmission due to its efficient performance under heavy wireless error conditions and its limited processing time. With the CAVLC encoder, synchronization is independent of the received codewords, so the CAVLC encoding presents a minimal likelihood of codeword synchronization loss. To obtain better 3DV quality and to limit both the loss of consistency in the transmitted bit streams and the propagation of errors, the SSC tool is employed to prevent spatial error propagation, and the EFMO mapping is deployed to stop both spatial and temporal error propagation. In this work, the EFMO mapping is used with four slice groups within each encoded frame, as shown in Fig. 4. It spreads the corruptly received MBs over the frame structure rather than leaving them to accumulate in certain positions within the corrupted frame. So, it works as an interleaving method that exploits the successfully decoded surrounding MBs to facilitate the concealment process.

Fig. 3 The overall framework of the suggested 3DV transmission system

Fig. 4 Proposed EFMO mapping with four slice groups

In the EFMO mapping, successive MBs are assigned to different slice groups, so that spatially adjacent MBs are unlikely to be lost together. It regularly scatters losses over the entire video frame to avoid error aggregation in certain locations, using an arrangement that is known to both the decoder and the encoder to distribute the 3DV MBs. Therefore, it enhances the performance of the proposed EC algorithms at the decoder.
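A dispersed slice-group map of this kind can be sketched as follows (a minimal sketch using the H.264 type-1 dispersed formula as one concrete interleaving; the paper's exact EFMO map may differ):

```python
def efmo_dispersed_map(mb_cols, mb_rows, num_groups=4):
    """Assign each MB to one of num_groups slice groups in a dispersed
    (interleaved) pattern, so that a lost slice group leaves every missing
    MB surrounded by MBs from other, hopefully intact, groups."""
    return [[(x + (y * num_groups) // 2) % num_groups for x in range(mb_cols)]
            for y in range(mb_rows)]
```

With four groups, horizontally adjacent MBs always land in different groups and vertically adjacent MBs differ by two groups, so losing any single slice group never removes two neighboring MBs.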

3.2 Proposed EC algorithms for color intra frames

The temporal Outer Block Boundary Matching Algorithm (OBBMA) and the spatial Weighted Pixel Averaging Interpolation (WPAI) technique [16, 34] were proposed for 2D H.264 video intra-frame concealment. They can be exploited to recover the DVs and MVs of the corrupted 3DV intra-frames. However, they cannot perform efficiently over severely corrupted channels, which results in deterioration of the EC performance. Thus, in this work, we propose an enhanced joint approach of the temporal PMCA and the spatial CSOIA for concealment of the lost MBs, achieving adequate 3DV intra-frame EC performance. Figure 5 shows the proposed hybrid framework of the spatial and temporal EC schemes for the 3DV color intra-frames. As shown, the spatial CSOIA is utilized to estimate the DVs of the corrupted MBs based on their neighboring and surrounding pixels, while the temporal PMCA is applied to determine the MVs of the lost MBs based on their neighboring and surrounding reference frames. After that, the Sum of Absolute Differences (SAD) is utilized to collect the candidate MVs and DVs for concealment. Then, suitable weighting coefficients are allocated to the collected MBs of the most optimal candidate DVs and MVs (MBopt.(DVs) and MBopt.(MVs)). Finally, the erroneous MBs are replaced with the estimated optimal weighted values of the DVs and MVs of the candidate MBs. In the spatial WPAI technique, each pixel of a corrupted MB is estimated with nearest-neighbor interpolation from four adjoining MB pixels over the MB outlines. Hence, the WPAI technique is not recommended for the restoration of texture characteristics, particularly under severely corrupting channel conditions, for which most of the adjacent data is lost or corrupted. Therefore, in this work, the CSOIA is utilized with an MB size of 4 × 4 to achieve an efficient EC performance.


Fig. 5 Proposed framework of the 3DV color intra-frame EC schemes. The flow is: (1) receive the lost MBs within the I frame of the I view and determine the location of the corrupted MBs; (2) collect all spatial surrounding pixels and neighboring temporal candidate MBs; (3) apply the CSOIA to estimate all spatial candidate MBs and calculate their DVs, and apply the PMCA to estimate all temporal candidate MBs and calculate their MVs; (4) on each branch, keep the candidate with the smallest SAD, i.e. set MB(optimal) = MB(DV(current)) when SAD(DV(current)) < SAD(DV(previous)), and likewise for the MVs; (5) if the optimal MV candidate has the smaller matching error (MBopt.(MV) < MBopt.(DV)), replace the lost MB with the candidate MB = 2/3 MBopt.(MV) + 1/3 MBopt.(DV); otherwise, replace it with the candidate MB = 1/3 MBopt.(MV) + 2/3 MBopt.(DV)
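The final weighted replacement step of this flow can be sketched as (a minimal sketch; the function name and the use of SAD values as the comparison criterion between the two optimal candidates are illustrative assumptions):

```python
def conceal_lost_mb(mb_opt_mv, mb_opt_dv, sad_mv, sad_dv):
    """Blend the best temporal (MV) and spatial (DV) candidate MBs,
    weighting the branch with the smaller matching error by 2/3 and the
    other branch by 1/3; MBs are 2-D lists of pixel values."""
    w_mv = 2.0 / 3.0 if sad_mv < sad_dv else 1.0 / 3.0
    w_dv = 1.0 - w_mv
    return [[w_mv * a + w_dv * b for a, b in zip(row_mv, row_dv)]
            for row_mv, row_dv in zip(mb_opt_mv, mb_opt_dv)]
```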

In the CSOIA, the decoder side can precisely determine the texture direction of the corrupted MB pixels and can perform the best interpolation along this direction. Each corruptly received MB of size 16 × 16 that has the same predicted direction is divided into sixteen blocks of size 4 × 4. Then, the predicted direction of the correctly received MB is used to determine the best direction of the corrupted MB, and the lost MB pixels are interpolated along the predicted direction. Thus, the best estimated direction is calculated for all the lost 4 × 4 MB pixels, which improves the spatial EC performance. Another disadvantage of the spatial WPAI technique is that it utilizes raster scanning to predict the sixteen 4 × 4 MBs from left to right and then from top to bottom. This introduces accumulation of error over the scanning order [12], and thus the estimation errors will propagate to other frames. Therefore, the upper-left 4 × 4 MB scores the smallest estimation error, while the lower-right 4 × 4 MB scores the highest accumulated prediction error.


To minimize the prediction error accumulation and propagation, circular scanning is proposed rather than raster scanning to determine the sixteen 4 × 4 MBs. Thus, the spatial correlations among all 4 × 4 MBs are exploited efficiently. The CSOIA first interpolates the twelve 4 × 4 MBs at the edge, and then processes the internal 4 × 4 MBs, as shown in Fig. 6. The MB number 1 scores the smallest estimation error, while MB number 16 scores the highest, because the prediction error accumulation increases in the direction from MB number 1 to MB number 16. To avert this issue, both the clockwise and anti-clockwise circular interpolation directions are utilized, as shown in Fig. 6, and the average of both interpolation results is used for spatial concealment of intra-frame errors. Since the temporal OBBMA recovers only one MV for the corrupted MB, all pixels in the damaged MB will have the same MV direction, which lowers the reconstruction quality. Since one MV cannot provide adequate reliability to conceal heavy channel errors, as the MB may include many direction details, it is necessary to present a more efficient temporal EC algorithm than the OBBMA to improve the precision of MV estimation and enhance the EC performance. Therefore, to enhance the subjective and objective 3DV quality of the received intra-frames, the temporal PMCA is suggested jointly with the proposed spatial CSOIA. The PMCA efficiently estimates the MVs of the lost MBs via the correctly received MVs of the adjacent MBs, which enhances the MV compensation for efficient concealment of intra-frame errors. In the PMCA, the corrupted MB is divided into four 8 × 8 blocks, as shown in Fig. 7, so that four MVs are available for concealment of the damaged MB. The PMCA estimates the eight MVs of the eight 8 × 8 MBs (T1, T2, R1, R2, B1, B2, L1, L2) surrounding the lost MB.
MVL1 and MVT1 are used to estimate MVBlock1, MVT2 and MVR1 are exploited to evaluate MVBlock2, MVB1 and MVL2 are employed to calculate MVBlock3, and MVR2 and MVB2 are utilized to recover MVBlock4. MVBlock1 is calculated by (3), where MVL1(x, y) and MVT1(x, y) are the estimated MV values of block L1 and block T1, respectively. Similarly, MVBlock2, MVBlock3, and MVBlock4 are calculated by

Fig. 6 Circular scan order interpolation (CSOI) mechanism: (a) clockwise interpolation; (b) anti-clockwise interpolation
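The clockwise edge-first visiting order of the sixteen 4 × 4 sub-blocks can be sketched as a spiral over the 4 × 4 grid (a minimal sketch; the helper name and the spiral formulation are illustrative):

```python
def circular_scan_order(n=4):
    """Clockwise circular visiting order of the n x n grid of 4x4 sub-blocks
    inside a 16x16 MB: the twelve edge sub-blocks first, then the inner four."""
    order, top, bottom, left, right = [], 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        order += [(top, c) for c in range(left, right + 1)]          # top edge
        order += [(r, right) for r in range(top + 1, bottom + 1)]    # right edge
        if top < bottom:
            order += [(bottom, c) for c in range(right - 1, left - 1, -1)]  # bottom edge
        if left < right:
            order += [(r, left) for r in range(bottom - 1, top, -1)]        # left edge
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order
```

Running the same loop with the edge lists reversed yields the anti-clockwise order; averaging the two interpolation passes gives the error-balancing behavior described above.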


Fig. 7 Partitioning motion compensation algorithm (PMCA)

(4) to (6), respectively. For each 8 × 8 block, there are 16 pixels on the border, as illustrated in Fig. 7. Then, the Boundary Matching Error (BME) between the outside and inside pixels of the MB boundary is estimated by (7), where Pinside(j) is the value of the jth pixel inside the MB boundary and Poutside(j) is the value of the corresponding pixel of the neighboring blocks. The smallest BME value among the candidate MVs within each block is selected to determine the optimum MV for the lost 8 × 8 block, so that four MVs rather than one are used to estimate the damaged MB.

MVBlock1(x, y) ∈ [Min(MVL1(x, y), MVT1(x, y)), Max(MVL1(x, y), MVT1(x, y))]   (3)

MVBlock2(x, y) ∈ [Min(MVT2(x, y), MVR1(x, y)), Max(MVT2(x, y), MVR1(x, y))]   (4)

MVBlock3(x, y) ∈ [Min(MVL2(x, y), MVB1(x, y)), Max(MVL2(x, y), MVB1(x, y))]   (5)

MVBlock4(x, y) ∈ [Min(MVB2(x, y), MVR2(x, y)), Max(MVB2(x, y), MVR2(x, y))]   (6)

BME = (1/16) ∑_{j=1}^{16} |Pinside(j) − Poutside(j)|   (7)
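Equation (7) and the per-block MV selection can be sketched as follows (a minimal sketch; `boundary_pairs_for` is a hypothetical caller-supplied helper standing in for the motion-compensation step that gathers the 16 inside/outside boundary pixel pairs obtained with a given candidate MV):

```python
def bme(inside_pixels, outside_pixels):
    """Boundary Matching Error: mean absolute difference over the 16
    boundary-pixel pairs of an 8x8 block (Eq. 7)."""
    assert len(inside_pixels) == len(outside_pixels) == 16
    return sum(abs(a - b) for a, b in zip(inside_pixels, outside_pixels)) / 16.0

def select_block_mv(candidates, boundary_pairs_for):
    """Pick the candidate MV with the smallest BME for one 8x8 block."""
    return min(candidates, key=lambda mv: bme(*boundary_pairs_for(mv)))
```

Applying `select_block_mv` independently to the four 8 × 8 blocks yields the four concealment MVs for the damaged 16 × 16 MB.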


3.3 Proposed EC algorithm for depth intra frames

In this section, the proposed depth-assisted technique is presented for recovering the depth intra-frames of the received 3DV streams. The proposed depth-assisted EC technique at the decoder side is described in the pseudo-code of Algorithm (1).

Algorithm (1) The pseudo-code of the proposed EIDD-DAEC algorithm for the lost depth intra-frames.

input: distorted depth intra-frames of the received 3DV sequence.
- initialize the depth-assisted error concealment process.
- receive the 3DV bit streams.
for all received MBs, do
  if the received MB is correct, then
    - calculate the MV and DV depth map values using equations (8) and (9).
  else if corruptly received MBs in an I-frame within the I-view are obtained, then
    - specify the location of the damaged pixels and MBs inside the corruptly received intra-frames within the I-view.
    - determine the spatial and temporal depth map values of the lost pixels and MBs from the surrounding spatial pixels and neighbouring MBs in the reference frames using equations (10) and (11).
    - employ the DCP and MCP mechanisms on all previously-estimated DV and MV depth maps.
    - use the SAD value to collect all concealment depth DV and MV candidates using equations (12), (13), (14), and (15).
    - collect also all the texture color candidate DVs and MVs as explained in section 3.2 and shown in Fig. 5.
    - determine and select the best concealment color and depth DV and MV values.
    - conceal the lost depth MBs using the best determined color and depth DV and MV values of the selected MBs.
  end if
end for
output: concealed depth intra-frames of the received 3DV sequence.

Since the DVs and MVs are estimated at the encoder side and transmitted to the decoder side, we can use them at the decoder side to estimate the depth data of the received MBs. The depth data maps of the properly-received MBs, based on their properly-received MVs and DVs, respectively, are found by (8) and (9):

d_MV(m, n) = k · √(MV(m, n)_x² + MV(m, n)_y²)   (8)

d_DV(m, n) = k · √(DV(m, n)_x² + DV(m, n)_y²)   (9)

where DV(m, n) and MV(m, n) are the disparity and motion vectors at (m, n), k is a scaling factor, and the subscripts x and y denote the horizontal and vertical MB coordinates. Then, we calculate the inter-view and spatial-temporal (DV and MV) depth map values of the corrupted MBs in their relative adjacent frames by (10) and (11):

D_MV(m, n) = k · √[(MV_{T−1}(m, n)_x² + MV_{T−1}(m, n)_y² + MV_{T+1}(m, n)_x² + MV_{T+1}(m, n)_y²) / 2]   (10)

D_DV(m, n) = k · √[(DV_{S−1}(m, n)_x² + DV_{S−1}(m, n)_y² + DV_{S+1}(m, n)_x² + DV_{S+1}(m, n)_y²) / 2]   (11)

The subscripts T+1 and T−1 refer to the temporal right and left reference frames, and the subscripts S+1 and S−1 refer to the surrounding spatial up and down references. After the estimation of the motion and disparity depth values of the lost MBs from their reference collocated neighbouring frames, the MCP and DCP are applied on the formerly-calculated depth maps estimated by (10) and (11) to calculate the depth data of the motion and disparity vectors. To collect the candidate depth MVs and DVs, the SAD [16] is utilized, as indicated in (12)-(15):

$$SAD_{MV}^{T-1}(m,n) = \sum_{i=-R}^{R-1} \sum_{j=-R}^{R-1} \left| D_M(m,n) - D_{T-1}(m+i,\, n+j) \right| \qquad (12)$$

$$SAD_{MV}^{T+1}(m,n) = \sum_{i=-R}^{R-1} \sum_{j=-R}^{R-1} \left| D_M(m,n) - D_{T+1}(m+i,\, n+j) \right| \qquad (13)$$

$$SAD_{DV}^{S-1}(m,n) = \sum_{i=-R}^{R-1} \sum_{j=-R}^{R-1} \left| D_M(m,n) - D_{S-1}(m+i,\, n+j) \right| \qquad (14)$$

$$SAD_{DV}^{S+1}(m,n) = \sum_{i=-R}^{R-1} \sum_{j=-R}^{R-1} \left| D_M(m,n) - D_{S+1}(m+i,\, n+j) \right| \qquad (15)$$

where R is the search range and Di(m, n) refers to the magnitude of the depth data at (m, n) of the ith MB. In the above formulas, we assume that the pixels inside the Mth MB are lost; thus, DM(m, n) is estimated by (10) or (11), DT−1(m, n) and DT+1(m, n) are estimated by (8), and, similarly, DS−1(m, n) and DS+1(m, n) are estimated by (9). To enhance the EC efficiency, we also collect the candidate group of color DVs and MVs for the corrupted frames in their relative frames, as discussed in section 3.2. The color DVs and MVs are those related to the current lost MBs and to the reference collocated MBs of their relative frames. As explained formerly, the depth motion and disparity vectors of the erroneous MBs are estimated from the depth maps at the same positions as the color motion and disparity vectors, whose depth data are calculated by (10) and (11). We also collect the depth motion and disparity vectors related to the correctly-received MBs, whose depth data values are calculated by (8) and (9). At this step, many estimated color-aided DVs and MVs are available for concealment, besides the calculated depth-aided DVs and MVs. Finally, the best concealment candidate DVs and MVs are selected from the complete group of candidate color-aided and depth-aided DVs and MVs to efficiently restore the values of the corrupted color and depth MBs of the received 3D video intra frames.
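To make the candidate-collection step concrete, the following minimal sketch estimates a depth value from a vector magnitude, as in (8)-(9), and then performs the exhaustive SAD search of (12)-(15) over one reference depth map. The function names, the block size of 8, the search range R = 4, and the scaling factor k = 1 are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def depth_from_vector(vx, vy, k=1.0):
    # Eqs. (8)-(9): depth estimate = k times the vector magnitude.
    return k * np.sqrt(vx ** 2 + vy ** 2)

def best_candidate(d_lost, d_ref, top, left, R=4, block=8):
    # Eqs. (12)-(15): exhaustive SAD search with i, j = -R .. R-1,
    # returning the best offset and its SAD score.
    best_score, best_off = np.inf, (0, 0)
    for i in range(-R, R):
        for j in range(-R, R):
            cand = d_ref[top + i: top + i + block, left + j: left + j + block]
            if cand.shape != d_lost.shape:
                continue  # candidate window falls outside the frame
            score = np.abs(d_lost - cand).sum()
            if score < best_score:
                best_score, best_off = score, (i, j)
    return best_off, best_score

# Toy check: hide the estimated depth block in the reference at offset (2, 1).
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
ref = rng.random((32, 32))
ref[12:20, 11:19] = patch
off, score = best_candidate(patch, ref, top=10, left=10)  # off == (2, 1)
```

In the paper, one such search is run per reference (T−1, T+1, S−1, S+1) and the resulting offsets define the candidate depth MVs and DVs.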


3.4 Proposed WOBMDC selection algorithm

After the incipient EC process, the WOBMDC is utilized as a selection stage to choose the most suitable DVs and MVs among the previously-estimated depth and color DVs and MVs explained in sections 3.2 and 3.3, and to enhance the subjective quality with further reliability. The WOBMDC scheme refines the initially estimated concealment MBs by predicting more accurate candidate DVs and MVs for concealing the corrupted color and depth MBs. It is also exploited to avoid blocking effects after the primary concealment of the corruptly-received color and depth MBs, and thus to preserve the spatial smoothness of the previously-concealed color and depth MBs. It splits the incipient selected candidate MBs estimated in the previous color or depth EC stages into four 8 × 8 blocks, and then each pixel within each block is concealed separately using the surrounding MB pixels and predefined weighting matrices, according to (16) [4]:

$$p(i,j) = \left( p_e(i,j)\,H_e(i,j) + p_{ab}(i,j)\,H_{ab}(i,j) + p_{lr}(i,j)\,H_{lr}(i,j) + 4 \right) \gg 3 \qquad (16)$$

∀(i, j) ∈ the 8 × 8 blocks of the primarily recovered MB, where p_e is the best estimated pixel in the 8 × 8 block of the initially reconstructed MB, p_ab is the estimated pixel of the 8 × 8 block in the neighborhood below or above the present MB, and p_lr corresponds to the predicted pixel of the 8 × 8 block in the right or left neighborhood of the current MB. H_e, H_ab, and H_lr are their weighting matrices, which are given by (17), p refers to the final reconstructed pixels of the corrupted MB, and "≫" is the shift-right bit operation.

$$H_e = \begin{bmatrix}
4 & 5 & 5 & 5 & 5 & 5 & 5 & 4\\
5 & 5 & 5 & 5 & 5 & 5 & 5 & 5\\
5 & 5 & 6 & 6 & 6 & 6 & 5 & 5\\
5 & 5 & 6 & 6 & 6 & 6 & 5 & 5\\
5 & 5 & 6 & 6 & 6 & 6 & 5 & 5\\
5 & 5 & 6 & 6 & 6 & 6 & 5 & 5\\
5 & 5 & 5 & 5 & 5 & 5 & 5 & 5\\
4 & 5 & 5 & 5 & 5 & 5 & 5 & 4
\end{bmatrix}, \quad
H_{ab} = \begin{bmatrix}
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2\\
1 & 1 & 2 & 2 & 2 & 2 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 2 & 2 & 2 & 2 & 1 & 1\\
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2
\end{bmatrix}, \quad
H_{lr} = \begin{bmatrix}
2 & 1 & 1 & 1 & 1 & 1 & 1 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 2 & 1 & 1 & 1 & 1 & 2 & 2\\
2 & 1 & 1 & 1 & 1 & 1 & 1 & 2
\end{bmatrix} \qquad (17)$$
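A small sketch (variable and function names are assumptions) showing that the Eq. (17) weights sum to 8 at every pixel position, so the "+ 4" and "≫ 3" in Eq. (16) implement rounded division by 8:

```python
import numpy as np

# Eq. (17) weighting matrices: He favours the initially recovered block,
# Hab the above/below neighbours, Hlr the left/right neighbours.
He = np.array([[4,5,5,5,5,5,5,4],
               [5,5,5,5,5,5,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,5,5,5,5,5,5],
               [4,5,5,5,5,5,5,4]])
Hab = np.array([[2,2,2,2,2,2,2,2],
                [1,1,2,2,2,2,1,1],
                [1,1,1,1,1,1,1,1],
                [1,1,1,1,1,1,1,1],
                [1,1,1,1,1,1,1,1],
                [1,1,1,1,1,1,1,1],
                [1,1,2,2,2,2,1,1],
                [2,2,2,2,2,2,2,2]])
Hlr = np.array([[2,1,1,1,1,1,1,2],
                [2,2,1,1,1,1,2,2],
                [2,2,1,1,1,1,2,2],
                [2,2,1,1,1,1,2,2],
                [2,2,1,1,1,1,2,2],
                [2,2,1,1,1,1,2,2],
                [2,2,1,1,1,1,2,2],
                [2,1,1,1,1,1,1,2]])

def wobmdc_blend(pe, pab, plr):
    # Eq. (16): weighted average of the three 8x8 integer predictions;
    # "+ 4" rounds before the shift-right by 3 (division by 8).
    return (pe * He + pab * Hab + plr * Hlr + 4) >> 3
```

Since He + Hab + Hlr equals 8 everywhere, blending three identical blocks returns the block unchanged, which confirms that the scheme only redistributes weight between the center and its neighbours.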

3.5 Proposed BKF enhancement algorithm

The suggested BKF enhancement technique is introduced in this section. It is utilized for smoothing the optimum color and depth DVs and MVs formerly determined by the WOBMDC algorithm, and for compensating for the remnant measurement corruptions


among the efficiently chosen candidate DVs and MVs. Thus, the BKF serves as an amelioration stage for enhancing the 3DV visual quality. We suppose that any determined DV or MV is specified by the state vector s = [sn, sm], and that the formerly-reconstructed measurement DV or MV is characterized by y = [yn, ym], where m and n represent the MB height and width, respectively. The desired state s(k) of the kth frame evolves from the state at frame k−1 according to (18), and the measurement y(k) of the state s(k) is given by (19), where r(k) is an M × 1 measurement noise vector distributed as Normal(0, σr²) and w(k) is an N × 1 white process noise vector distributed as Normal(0, σw²). In the suggested BKF technique, we assume that the observation noise r(k) and the process noise w(k) follow zero-mean Gaussian distributions with constant covariance σw² = σr² = 0.50 I2, where I2 is the 2 × 2 identity matrix. Moreover, the measurement matrix H(k) and the transition matrix A(k) are assumed to be 2 × 2 identity matrices, and a strong correlation is assumed between vertically and horizontally adjoining MBs. The suggested BKF optimization technique consists of the prediction and updating stages explained in section 2.2 and shown in Fig. 2, and it proceeds in the following three stages:

Stage (A): Initialization
1- Define the previously-measured noisy DVs and/or determined MVs.
2- Set the initial state and covariance vectors as in (20).

Stage (B): Prediction
1- Estimate the a priori state (DV or MV) ŝ⁻k of frame k from the preceding updated state ŝk−1 of frame k−1, as in (21).
2- Determine the predicted a priori error covariance p⁻k from the process noise covariance σw² and the updated covariance pk−1, as in (22).

Stage (C): Updating
1- Determine the updated state (the finally estimated DV or MV) ŝk from the measurement yk (the recovered DVs and/or MVs) and the observation noise covariance σr², as in (23).
2- Determine the filtered updated error covariance pk from the a priori covariance p⁻k and the observation noise covariance σr², as in (24). The BKF gain kk is estimated by (25).

$$\begin{bmatrix} s_{n_k} \\ s_{m_k} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} s_{n_{k-1}} \\ s_{m_{k-1}} \end{bmatrix} + \begin{bmatrix} w_{n_{k-1}} \\ w_{m_{k-1}} \end{bmatrix}, \qquad w_{n_{k-1}}, w_{m_{k-1}} \sim \mathrm{Normal}(0, \sigma_w^2) \qquad (18)$$

$$\begin{bmatrix} y_{n_k} \\ y_{m_k} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} s_{n_k} \\ s_{m_k} \end{bmatrix} + \begin{bmatrix} r_{n_k} \\ r_{m_k} \end{bmatrix}, \qquad r_{n_k}, r_{m_k} \sim \mathrm{Normal}(0, \sigma_r^2) \qquad (19)$$




$$\begin{bmatrix} s_{n_0} \\ s_{m_0} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} p_{n_0} & 0 \\ 0 & p_{m_0} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (20)$$

$$\begin{bmatrix} \hat{s}^{-}_{n_k} \\ \hat{s}^{-}_{m_k} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \hat{s}_{n_{k-1}} \\ \hat{s}_{m_{k-1}} \end{bmatrix} \qquad (21)$$

$$\begin{bmatrix} p^{-}_{n_k} \\ p^{-}_{m_k} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p_{n_{k-1}} \\ p_{m_{k-1}} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}^{T} + \sigma_w^2 \qquad (22)$$

$$\begin{bmatrix} \hat{s}_{n_k} \\ \hat{s}_{m_k} \end{bmatrix} = \begin{bmatrix} \hat{s}^{-}_{n_k} \\ \hat{s}^{-}_{m_k} \end{bmatrix} + \begin{bmatrix} k_{n_k}\,(y_{n_k} - \hat{s}^{-}_{n_k}) \\ k_{m_k}\,(y_{m_k} - \hat{s}^{-}_{m_k}) \end{bmatrix} \qquad (23)$$

$$\begin{bmatrix} p_{n_k} \\ p_{m_k} \end{bmatrix} = \begin{bmatrix} (I - k_{n_k})\, p^{-}_{n_k} \\ (I - k_{m_k})\, p^{-}_{m_k} \end{bmatrix} \qquad (24)$$

$$\begin{bmatrix} k_{n_k} \\ k_{m_k} \end{bmatrix} = \begin{bmatrix} p^{-}_{n_k} \,/\, \left(p^{-}_{n_k} + \sigma_r^2\right) \\ p^{-}_{m_k} \,/\, \left(p^{-}_{m_k} + \sigma_r^2\right) \end{bmatrix} \qquad (25)$$

So, the above-mentioned prediction and updating stages are employed as a refinement process for each erroneous MB, regardless of its position in the lossy view and frame. The BKF stages are repeated over all the erroneous frames until the best finally-determined DVs and/or MVs are obtained, as given in (23). Therefore, the stages of the proposed scheme are iterated until all the erroneous frames are recovered. Consequently, if the initially-determined DVs and MVs are not sufficient for an optimum EC process because of scarce surrounding pixel information and severe errors, they are enhanced by the DVs and MVs predicted through the suggested BKF technique to achieve better 3D video transmission performance.
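Because A and H are identity matrices and the noises are independent per component, the recursion of (20)-(25) reduces to a scalar Kalman filter applied to each DV/MV component. The following sketch (function name and interface are assumptions) shows that recursion with the paper's noise variances σw² = σr² = 0.5:

```python
def bkf_refine(measurements, s0=0.0, p0=1.0, q=0.5, r=0.5):
    """Scalar Kalman recursion of Eqs. (20)-(25), applied independently
    to one component of a DV or MV. A = H = 1; q and r are the process
    and measurement noise variances (0.5 as assumed in the paper)."""
    s, p, out = s0, p0, []
    for y in measurements:
        # Prediction stage: Eqs. (21)-(22)
        s_prior, p_prior = s, p + q
        # Updating stage: gain (25), state (23), covariance (24)
        k = p_prior / (p_prior + r)
        s = s_prior + k * (y - s_prior)
        p = (1.0 - k) * p_prior
        out.append(s)
    return out
```

Feeding the noisy WOBMDC-selected vector components through this recursion pulls each estimate toward the measurement by the gain k while damping abrupt jumps, which is the smoothing role the BKF plays here.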

4 Simulation results and comparative analysis

4.1 Performance evaluation

To evaluate the efficiency of the suggested techniques, we have carried out different simulation tests on the well-known 3D video streams (Kendo, Ballet, and Balloons) [18]. The three tested 3D videos have different characteristics: Kendo is an intermediate-motion stream, Balloons is a slow-motion stream, and Ballet is a fast-motion stream. The resolution of the tested streams is 1024 × 768. For each 3DV sequence, eight views with 250 images per view are encoded. The utilized frame rate is 20 frames per second (fps). The Quantization Parameter (QP) values used for the tested 3DV streams in the simulations are 27, 32, and 37. In the simulation tests, the I-frame is considered to be the first frame of the


used GOP, and the utilized GOP size is 8. For each stream, the encoded bit streams are obtained by employing the proposed ER tools at the encoder and are then transmitted over the wireless channel with various PLRs of 20%, 30%, 40%, 50%, and 60%, simulated using the Gilbert channel model [10, 36]. The received bit streams are subsequently decoded and concealed by the suggested EC techniques at the decoder side. The reference H.264 JMVC framework [33], implemented in the C++ programming language, is utilized in the simulation work. The compression simulation conditions used in this paper are chosen based on the JVT standards [18]. To assess the efficiency of the suggested EC algorithm, the Peak Signal-to-Noise Ratio (PSNR), the Feature Similarity (FSIM) index, and the Structural Similarity (SSIM) index [38] are used as objective quality metrics to determine the quality of the reconstructed and concealed 3DV MBs; further details about their definitions and equations can be found in [21]. We have performed ten simulation experiments on the utilized 3DV sequences to carefully test the proposed algorithms, and we present the average values of the objective simulation results. To illustrate the effect of applying the proposed techniques for concealment of the corruptly-received color and depth intra frames, we have made different comparisons. We compared the performance of the suggested techniques to the case of employing the ER and color-plus-depth EC techniques without the WOBMDC and BKF schemes (ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF). Also, we compared the proposed algorithms with the case of utilizing the ER, color-plus-depth EC, and WOBMDC algorithms without the BKF technique (ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF), and with the case when none of the error control methods is applied (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF).
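A two-state Gilbert packet-loss channel of the kind cited above can be sketched as follows; the transition probabilities, function name, and long-run loss rate here are illustrative assumptions, not the settings used in the paper:

```python
import random

def gilbert_losses(n_packets, p_gb=0.1, p_bg=0.4, seed=1):
    """Two-state Gilbert model: packets are dropped while the chain is
    in the 'bad' state. p_gb = P(good -> bad), p_bg = P(bad -> good)."""
    rng = random.Random(seed)
    bad, lost = False, []
    for _ in range(n_packets):
        if not bad:
            bad = rng.random() < p_gb        # good -> bad transition
        else:
            bad = rng.random() >= p_bg       # stay bad with prob 1 - p_bg
        lost.append(bad)                     # packet lost in the bad state
    return lost

losses = gilbert_losses(10000)
rate = sum(losses) / len(losses)  # long-run loss rate near p_gb / (p_gb + p_bg)
```

Unlike independent (Bernoulli) drops, this model produces the bursty loss patterns typical of wireless channels, which is why burst-oriented ER/EC tools matter at the PLRs studied here.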
In the introduced simulation results, ER/Color-based EC/Depth-based EC/WOBMDC/BKF refers to the suggested hybrid algorithms. Figures 8, 9 and 10 introduce the subjective color and depth visual comparisons at QPs of 37, 32, and 27 and at PLRs of 40%, 50%, and 60% for the color and depth intra frames of the three chosen 3DV streams (Kendo, Ballet, and Balloons), respectively. For the tested 3DV sequences, different corrupted intra-frame locations have been chosen within the I-view to clarify the efficacy of the proposed techniques in concealing corrupted MBs at different intra-frame positions, for different 3DV characteristics, and at different PLRs. Figure 8 introduces the subjective color and depth results of the Kendo stream over a wireless channel with PLR = 40% and QP = 37 for the selected 41st intra-encoded color and depth frame inside the I-view V0. We recovered the tested 41st intra color and depth frames with the suggested ER and color and depth EC techniques without the WOBMDC and BKF schemes, as introduced in Fig. 8c and h; with the ER, color-plus-depth EC, and WOBMDC algorithms without the BKF algorithm, as shown in Fig. 8d and i; and finally with the proposed ER + Color-based EC + Depth-based EC + WOBMDC + BKF techniques, as given in Fig. 8e and j. We notice that the visual results of the proposed hybrid techniques for the depth and color frames, given in Fig. 8e and j, outperform those of the ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF algorithms and the ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF schemes, which are introduced in Fig. 8c, h, d, and i. Figure 9 shows the color and depth visual results of the Ballet stream over a wireless channel with PLR = 50% and QP = 32. We selected the 73rd intra-coded frame as the tested color and depth I-frame inside the I-view V0.
From the recovered color and depth frame results presented in Fig. 9e and j, compared to those offered in Fig. 9c, d, h, and i, we notice the superiority of exploiting the full suggested hybrid schemes in contrast to the other error


Fig. 8 Simulation results of the selected color and depth 41st intra frame within the I-view (V0) of the "Kendo" sequence: (a) original error-free I41 color frame, (b) corrupted I41 color frame with PLR = 40% and QP = 37, (c) recovered I41 color frame with ER + Color-assisted EC + Depth-assisted EC + No-WOBMDC + No-BKF, (d) recovered I41 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + No-BKF, (e) recovered I41 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + BKF, (f) original error-free I41 depth frame, (g) corrupted I41 depth frame with PLR = 40% and QP = 37, (h) recovered I41 depth frame with ER + Depth-assisted EC + Color-assisted EC + No-WOBMDC + No-BKF, (i) recovered I41 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + No-BKF, (j) recovered I41 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + BKF

control algorithms in reconstructing the erroneous color and depth MBs efficiently under severely-corrupting channel conditions, such as the presented case of PLR = 50%. Figure 10 introduces the visual results of the color and depth intra frames of the Balloons stream over a wireless channel with PLR = 60% and QP = 27 for the selected 81st intra-coded color and depth I-frame within the I-view V0. From the color frame visual results presented in Fig. 10e, compared to those in Fig. 10c and d, and from the depth frame visual results given in Fig. 10j, compared to those in Fig. 10h and i, we notice that the visual simulation results of the suggested


Fig. 9 Simulation results of the selected color and depth 73rd intra frame within the I-view (V0) of the "Ballet" sequence: (a) original error-free I73 color frame, (b) corrupted I73 color frame with PLR = 50% and QP = 32, (c) recovered I73 color frame with ER + Color-assisted EC + Depth-assisted EC + No-WOBMDC + No-BKF, (d) recovered I73 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + No-BKF, (e) recovered I73 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + BKF, (f) original error-free I73 depth frame, (g) corrupted I73 depth frame with PLR = 50% and QP = 32, (h) recovered I73 depth frame with ER + Depth-assisted EC + Color-assisted EC + No-WOBMDC + No-BKF, (i) recovered I73 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + No-BKF, (j) recovered I73 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + BKF

hybrid ER + Color-based EC + Depth-based EC + WOBMDC + BKF techniques outperform those of the ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF and the ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF algorithms. Thus, the significance of exploiting the full proposed joint error control techniques for reconstructing the corrupted color and depth MBs efficiently is clear under severely-corrupting channel conditions, as in the presented case of PLR = 60%. In Figs. 11 and 12 and Table 1, we compare the objective PSNR results for the utilized 3DV streams at various PLR and QP values. Table 2 shows the FSIM and SSIM index values for the


Fig. 10 Simulation results of the selected color and depth 81st intra frame within the I-view (V0) of the "Balloons" sequence: (a) original error-free I81 color frame, (b) corrupted I81 color frame with PLR = 60% and QP = 27, (c) recovered I81 color frame with ER + Color-assisted EC + Depth-assisted EC + No-WOBMDC + No-BKF, (d) recovered I81 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + No-BKF, (e) recovered I81 color frame with ER + Color-assisted EC + Depth-assisted EC + WOBMDC + BKF, (f) original error-free I81 depth frame, (g) corrupted I81 depth frame with PLR = 60% and QP = 27, (h) recovered I81 depth frame with ER + Depth-assisted EC + Color-assisted EC + No-WOBMDC + No-BKF, (i) recovered I81 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + No-BKF, (j) recovered I81 depth frame with ER + Depth-assisted EC + Color-assisted EC + WOBMDC + BKF

used 3DV streams at various PLRs. We compare the PSNR, FSIM, and SSIM results of the suggested error control schemes with those of utilizing the ER and color-plus-depth EC techniques without the WOBMDC and BKF schemes (ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF), with those of utilizing the ER, color-plus-depth EC, and WOBMDC algorithms without the BKF technique (ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF), and with the case of no error control methods (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF).
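For reference, the PSNR used throughout the objective comparisons can be computed as below; the function name is an assumption, and peak = 255 corresponds to 8-bit video samples:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between a reference frame and
    # its reconstruction; infinite when the frames are identical.
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
rec = ref.copy()
rec[0, 0] = 16                 # one pixel off by 16 -> MSE = 16
value = psnr(ref, rec)         # 10*log10(255^2 / 16), about 36.1 dB
```

FSIM and SSIM are perceptual metrics with more involved definitions; their equations are given in the references cited in section 4.1.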

Fig. 11 Objective PSNR (dB) simulation results for the: (a) Kendo, (b) Ballet, and (c) Balloons 3DV sequences at QP = 27 and various PLRs


From all presented simulation results for the tested 3DV sequences, we perceive that the suggested ER + Color-based EC + Depth-based EC + WOBMDC + BKF techniques always achieve superior PSNR, FSIM, and SSIM values. It can be noticed that the suggested hybrid algorithms yield PSNR enhancements for all the utilized 3DV streams of about 0.57 to 0.89 dB and 0.93 to 1.95 dB at various PLRs and QPs compared to the ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF techniques and the ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF techniques, respectively. We can also notice that the proposed hybrid techniques are recommended because they achieve average objective PSNR gains of about 11.35 to 22.55 dB at various PLRs and QPs in contrast to the results obtained without employing any error control techniques (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF). It is also noticed from the subjective and objective comparison results that utilizing the suggested hybrid techniques is highly recommended in the case of severely-corrupting wireless channels. The results demonstrate the significance of employing the proposed WOBMDC selection and BKF refinement algorithms to reinforce the subjective video quality, in addition to achieving a significant gain in the objective PSNR. Moreover, the proposed hybrid techniques provide efficient simulation results for 3D video streams with different characteristics.

4.2 Comparison study and discussions

To demonstrate and confirm the performance efficiency of the suggested hybrid techniques for 3DV communication through severe-packet-loss wireless channels, further experiments have been implemented to compare the simulation results of the suggested techniques with those of state-of-the-art schemes utilizing the JMVC software. The results of the suggested (ER + Color-based EC + Depth-based EC + WOBMDC + BKF) techniques are compared to those of the Hybrid Motion Vector Extrapolation (HMVE) [4], the Disparity Vector Extrapolation (DVE) [9, 15], the hybrid DIECA and DMVEA [7], a joint algorithm of WPAI and OBBMA [13], the four-directional MV/DV extrapolation for whole-frame loss concealment [39], and a hybrid method of the DIECA and DMVEA EC techniques utilizing the Dispersed Flexible Macro-block Ordering (DFMO) error resilience technique [8]. None of these techniques employs the proposed ER tools, depth-aided EC, WOBMDC, or BKF. In contrast, the suggested techniques exploit color- and depth-aided DV and MV concealment in addition to the ER tools, the WOBMDC selection algorithm, and the BKF refinement algorithm. Therefore, all of those state-of-the-art algorithms fail to achieve a concealment performance comparable to that of the proposed algorithms, especially in the case of severely-corrupting transmission channels, because they do not estimate the most appropriate candidate concealment MVs and DVs for recovering the erroneous intra frames of the transmitted 3DV sequences. The comparisons have been carried out on the Balloons, Kendo, and Ballet standard test streams using the same experimental conditions presented in section 4.1 at different PLR and QP values. All simulation comparison tests have been performed on an Intel® Core™ i7-4500U CPU @ 1.80 GHz and 2.40 GHz with 8 GB RAM, running the Windows 10 64-bit operating system. The PSNR and average frame execution time results are introduced to prove the efficiency of the proposed hybrid techniques for different simulation configurations. Table 3 presents the average PSNR and average frame execution time values of the suggested algorithms compared to the HMVE and DVE algorithms and the techniques of [7, 8, 13, 39] at different PLRs and QPs.

Fig. 12 Objective PSNR (dB) simulation results for the: (a) Kendo, (b) Ballet, and (c) Balloons 3DV streams at QP = 37 and various PLRs


We notice that the suggested algorithms outperform the traditional techniques in all simulation experiments. It is observed that with the proposed hybrid error control algorithms, the recovered 3DV quality can be further enhanced by taking advantage of the depth-aided EC in addition to the color-aided EC algorithms, the proposed ER tools, the WOBMDC algorithm, and the BKF algorithm. Also, the proposed algorithms incur only a small additional computational cost compared to the traditional algorithms, so their execution times may be acceptable for online and real-time 3DV communication applications. To further verify the performance improvement and efficiency of the suggested hybrid techniques, we have carried out additional experimental tests on two more 3D video-plus-depth test sequences, Shark and Breakdancer, which have different spatial resolutions and temporal characteristics. Table 4 presents the objective average PSNR results for these 3DV streams at various PLRs and a QP value of 32. We compared the PSNR results of the suggested error control schemes with those of utilizing the ER and color-plus-depth EC techniques without the WOBMDC and BKF schemes (ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF), those of utilizing the ER, color-plus-depth EC, and WOBMDC algorithms without the BKF technique (ER/Color-based EC/

Table 1 Objective PSNR (dB) simulation results for the Kendo, Ballet, and Balloons streams at QP = 32 and various PLRs

| Sequence | Applied Scheme | 20% | 30% | 40% | 50% | 60% |
|----------|----------------|-----|-----|-----|-----|-----|
| Kendo | No Error | 43.03 | 43.03 | 43.03 | 43.03 | 43.03 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/BKF | 41.59 | 40.63 | 38.92 | 37.15 | 35.23 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF | 41.04 | 40.02 | 38.28 | 36.54 | 34.49 |
| | ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF | 40.58 | 39.53 | 37.90 | 36.11 | 34.13 |
| | Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) | 24.41 | 21.19 | 16.83 | 12.26 | 8.11 |
| Ballet | No Error | 38.70 | 38.70 | 38.70 | 38.70 | 38.70 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/BKF | 37.38 | 36.64 | 35.83 | 34.11 | 32.26 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF | 36.62 | 35.97 | 35.02 | 33.49 | 31.57 |
| | ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF | 36.17 | 35.50 | 34.62 | 33.10 | 31.17 |
| | Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) | 21.50 | 18.78 | 14.49 | 9.76 | 5.71 |
| Balloons | No Error | 41.13 | 41.13 | 41.13 | 41.13 | 41.13 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/BKF | 39.71 | 38.65 | 37.42 | 35.97 | 34.23 |
| | ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF | 39.13 | 38.04 | 36.71 | 35.35 | 33.46 |
| | ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF | 38.89 | 37.72 | 36.35 | 35.02 | 33.26 |
| | Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) | 25.82 | 22.19 | 18.93 | 15.44 | 11.39 |

Balloons

Ballet

Sequence Kendo

Applied Scheme ER/Color-based EC/Depth-based EC/WOBMDC/BKF ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF ER/Color-based EC/Depth-based EC/No- WOBMDC/No-BKF Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) ER/Color-based EC/Depth-based EC/WOBMDC/BKF ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF ER/Color-based EC/Depth-based EC/No- WOBMDC/No-BKF Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) ER/Color-based EC/Depth-based EC/WOBMDC/BKF ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF ER/Color-based EC/Depth-based EC/No- WOBMDC/No-BKF Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF) 0.9649/ 0.9550 0.9611/ 0.9532 0.7239/ 0.6998

0.9804/0.9667 0.9791/0.9643 0.7568/0.7295

0.7154/ 0.6876

0.7283/0.7039

0.9672/ 0.9565

0.9626/ 0.9503

0.9810/0.9616

0.9815/0.9684

0.9667/ 0.9516

0.6534/ 0.6198

0.6952/0.6723

0.9821/0.9634

0.9608/ 0.9489

0.9807/0.9586

0.9689/ 0.9531

0.9624/ 0.9508

0.9813/0.9612

0.9833/0.9645

30% 0.9657/ 0.9521

20% 0.9825/0.9634

FSIM/SSIM

Table 2 FSIM/SSIM objective simulation results of the Kendo, Ballet, and Balloons streams at different PLRs

0.7055/ 0.6738

0.9503/ 0.9285

0.9521/ 0.9329

0.9564/ 0.9358

0.6986/ 0.6588

0.9537/ 0.9246

0.9564/ 0.9307

0.9596/ 0.9319

0.6214/ 0.5997

0.9461/ 0.9124

0.9492/ 0.9189

40% 0.9508/ 0.9204

0.6829/ 0.6497

0.9367/ 0.9043

0.9396/ 0.9082

0.9407/ 0.9104

0.6558/ 0.6236

0.9367/ 0.8959

0.9401/ 0.8996

0.9419/ 0.9035

0.5867/ 0.5581

0.9287/ 0.8917

0.9317/ 0.8952

50% 0.9364/ 0.8967

0.6398/ 0.6089

0.9192/ 0.8726

0.9214/ 0.8793

0.9237/ 0.8825

0.5986/ 0.5729

0.9125/ 0.8651

0.9184/ 0.8687

0.9253/ 0.8747

0.4367/ 0.4014

0.9011/ 0.8356

0.9104/ 0.8423

60% 0.9185/ 0.8586


Table 3 The objective average PSNR (dB) and the average execution time comparison between the suggested techniques and the state-of-the-art EC techniques on the tested Ballet, Kendo, and Balloons 3DV sequences at different QPs and PLRs. Each cell gives PSNR (dB) / execution time per frame (sec).

| PLR | QP | Sequence | Origin | DVE | HMVE | [39] | [13] | [7] | [8] | Proposed |
|-----|----|----------|--------|-----|------|------|------|-----|-----|----------|
| 30% | 27 | Ballet | 40.58/0.039 | 28.13/0.041 | 33.58/0.042 | 34.24/0.046 | 34.92/0.041 | 36.58/0.050 | 37.89/0.056 | 38.58/0.070 |
| 30% | 27 | Kendo | 43.72/0.050 | 25.91/0.052 | 27.72/0.052 | 29.68/0.056 | 37.97/0.053 | 39.59/0.061 | 39.92/0.068 | 41.38/0.084 |
| 30% | 27 | Balloons | 41.92/0.043 | 32.78/0.044 | 36.57/0.046 | 37.24/0.052 | 37.64/0.056 | 38.55/0.064 | 39.29/0.071 | 39.68/0.079 |
| 30% | 32 | Ballet | 38.70/0.041 | 27.68/0.043 | 33.04/0.044 | 34.02/0.048 | 33.27/0.047 | 34.32/0.056 | 35.94/0.065 | 36.64/0.072 |
| 30% | 32 | Kendo | 43.03/0.053 | 26.31/0.055 | 27.61/0.056 | 29.05/0.061 | 37.15/0.065 | 38.12/0.068 | 39.23/0.073 | 40.63/0.089 |
| 30% | 32 | Balloons | 41.13/0.044 | 32.87/0.045 | 35.08/0.048 | 35.58/0.053 | 36.07/0.057 | 36.83/0.066 | 37.97/0.074 | 38.65/0.081 |
| 30% | 37 | Ballet | 36.84/0.043 | 27.74/0.047 | 32.23/0.049 | 32.89/0.053 | 31.17/0.058 | 32.18/0.063 | 33.11/0.073 | 34.50/0.076 |
| 30% | 37 | Kendo | 42.29/0.055 | 26.28/0.058 | 27.49/0.061 | 28.58/0.064 | 35.98/0.063 | 36.72/0.071 | 37.62/0.076 | 39.59/0.094 |
| 30% | 37 | Balloons | 40.41/0.047 | 32.75/0.050 | 33.42/0.048 | 33.87/0.053 | 34.96/0.052 | 35.79/0.061 | 36.03/0.072 | 37.51/0.084 |
| 40% | 27 | Ballet | 40.58/0.039 | 26.67/0.043 | 32.68/0.043 | 33.67/0.048 | 34.43/0.042 | 35.85/0.052 | 37.38/0.059 | 37.91/0.073 |
| 40% | 27 | Kendo | 43.72/0.050 | 24.82/0.055 | 26.18/0.054 | 28.37/0.058 | 37.47/0.056 | 38.62/0.064 | 39.95/0.071 | 39.72/0.089 |
| 40% | 27 | Balloons | 41.92/0.043 | 31.27/0.046 | 35.64/0.048 | 36.33/0.054 | 36.81/0.059 | 37.69/0.067 | 38.95/0.074 | 38.53/0.081 |
| 40% | 32 | Ballet | 38.70/0.041 | 26.79/0.044 | 32.29/0.045 | 33.26/0.051 | 32.67/0.049 | 33.92/0.058 | 35.13/0.067 | 35.83/0.075 |
| 40% | 32 | Kendo | 43.03/0.053 | 25.41/0.057 | 26.83/0.058 | 28.23/0.063 | 36.59/0.067 | 37.54/0.070 | 38.72/0.076 | 38.92/0.094 |
| 40% | 32 | Balloons | 41.13/0.044 | 32.08/0.047 | 34.36/0.051 | 34.83/0.056 | 35.53/0.060 | 36.28/0.069 | 37.43/0.078 | 37.42/0.085 |
| 40% | 37 | Ballet | 36.84/0.043 | 27.10/0.049 | 31.83/0.052 | 32.61/0.056 | 30.73/0.061 | 31.62/0.069 | 32.62/0.077 | 33.53/0.078 |
| 40% | 37 | Kendo | 42.29/0.055 | 25.69/0.061 | 26.87/0.063 | 27.92/0.067 | 35.61/0.065 | 36.46/0.073 | 37.35/0.080 | 37.74/0.097 |
| 40% | 37 | Balloons | 40.41/0.047 | 32.17/0.052 | 32.92/0.051 | 33.29/0.056 | 34.73/0.055 | 35.49/0.064 | 35.58/0.074 | 35.89/0.088 |
| 50% | 27 | Ballet | 40.58/0.039 | 25.69/0.046 | 31.57/0.047 | 32.59/0.051 | 33.32/0.046 | 34.49/0.056 | 36.13/0.062 | 36.34/0.076 |
| 50% | 27 | Kendo | 43.72/0.050 | 23.97/0.056 | 25.61/0.058 | 27.24/0.061 | 36.36/0.060 | 37.29/0.067 | 37.79/0.074 | 38.03/0.092 |
| 50% | 27 | Balloons | 41.92/0.043 | 30.34/0.049 | 34.49/0.051 | 35.38/0.057 | 35.57/0.063 | 36.23/0.070 | 37.08/0.078 | 37.31/0.085 |
| 50% | 32 | Ballet | 38.70/0.041 | 25.16/0.047 | 30.82/0.048 | 32.75/0.054 | 31.53/0.053 | 32.23/0.062 | 33.52/0.069 | 34.11/0.077 |
| 50% | 32 | Kendo | 43.03/0.053 | 23.89/0.060 | 25.34/0.061 | 26.83/0.065 | 34.78/0.069 | 35.79/0.073 | 36.71/0.080 | 37.15/0.097 |
| 50% | 32 | Balloons | 41.13/0.044 | 30.49/0.050 | 32.82/0.054 | 33.14/0.058 | 33.69/0.063 | 34.67/0.072 | 35.59/0.081 | 35.97/0.089 |
| 50% | 37 | Ballet | 36.84/0.043 | 25.27/0.052 | 29.92/0.055 | 30.43/0.059 | 28.57/0.064 | 29.59/0.073 | 30.72/0.080 | 31.78/0.084 |
| 50% | 37 | Kendo | 42.29/0.055 | 23.48/0.064 | 24.75/0.067 | 26.16/0.070 | 33.61/0.069 | 34.08/0.076 | 35.27/0.084 | 35.79/0.099 |
| 50% | 37 | Balloons | 40.41/0.047 | 29.93/0.054 | 30.86/0.055 | 31.37/0.060 | 32.53/0.059 | 33.14/0.067 | 33.49/0.079 | 34.26/0.093 |

Table 4 Objective PSNR (dB) comparison results of the Shark and Breakdancer streams at QP = 32 and different PLRs

Sequence: Shark
Applied Scheme                                                              Packet Loss Rate (PLR) %
                                                                            20%    30%    40%    50%    60%
No Error                                                                    51.27  51.27  51.27  51.27  51.27
ER/Color-based EC/Depth-based EC/WOBMDC/BKF                                 49.61  48.52  47.04  44.96  43.11
ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF                              49.07  48.13  46.39  44.56  42.53
ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF                           48.43  47.32  45.86  44.15  42.12
Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF)   32.71  29.23  24.95  20.21  16.44

Sequence: Breakdancer
Applied Scheme                                                              Packet Loss Rate (PLR) %
                                                                            20%    30%    40%    50%    60%
No Error                                                                    37.53  37.53  37.53  37.53  37.53
ER/Color-based EC/Depth-based EC/WOBMDC/BKF                                 36.14  35.29  34.51  32.78  30.84
ER/Color-based EC/Depth-based EC/WOBMDC/No-BKF                              35.39  34.69  33.72  32.26  31.29
ER/Color-based EC/Depth-based EC/No-WOBMDC/No-BKF                           34.92  34.21  33.29  31.82  30.42
Error occurs (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF)   20.24  17.55  13.29   8.66   4.95

Depth-based EC/WOBMDC/No-BKF), and with the no-error-control case (No-ER/No-Color-based EC/No-Depth-based EC/No-WOBMDC/No-BKF). The suggested ER + Color-based EC + Depth-based EC + WOBMDC + BKF technique consistently achieves superior PSNR values compared to all other configurations, and the margin over the no-error-control case widens as the PLR increases.
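The PSNR figures compared throughout Tables 3 and 4 follow the standard definition over the mean squared error between the original and the reconstructed frame. A minimal sketch, assuming 8-bit luma frames held in NumPy arrays (the function name and array shapes are illustrative, not part of the paper's codebase):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-size 8-bit frames."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: distortion-free
    return 10.0 * np.log10(peak ** 2 / mse)
```

A sequence-level score, as reported in the tables, is then the average of per-frame PSNR values over all decoded frames of the stream.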

5 Conclusions

This paper presented efficient error control algorithms for reliable 3D video communication over severely corrupting wireless channels, recovering the lost intra frames of the encoded 3DV. A key strength of the proposed algorithms is that they achieve good objective and subjective quality while optimizing the communication bit rate and minimizing the computational complexity, because they adaptively exploit the available 3DV content to protect the transmitted 3DV bit streams prior to transmission. They are therefore highly reliable under severely corrupting channel conditions and outperform the traditional error control schemes. Another strength is the use of depth-aided DVs and MVs without compressing and transporting depth data, which avoids complicated depth calculations at the encoder side. Moreover, the suggested ER tools neither add redundant data nor increase the transmission bit rate. Furthermore, the proposed WOBMDC and BKF algorithms mitigate the blocking artifacts of the erroneous MBs and thus refine the spatial smoothness of the reconstructed MBs. Experimental results confirmed the effectiveness of the suggested hybrid techniques in recovering severely erroneous 3DV streams at high channel PLRs, and demonstrated the significance of deploying the WOBMDC selection and BKF amelioration algorithms for improving the visual video quality and achieving significant PSNR gains. The proposed schemes can efficiently recover the intra frames of 3DV sequences with different characteristics and with high visual quality.
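The BKF amelioration step described above smooths the residual noise in the optimally selected concealment MVs and DVs. As a rough illustration of the idea, the sketch below applies a scalar random-walk Kalman filter independently to one component of a sequence of recovered motion vectors; the model and the noise variances q and r are illustrative assumptions, not the paper's exact BKF parameters:

```python
def kalman_smooth(measurements, q=0.01, r=1.0):
    """Smooth one noisy MV component with a scalar random-walk Kalman filter.

    measurements: recovered MV component per MB/frame (treated as noisy
    observations of the true motion, observation variance r); the state is
    assumed to drift slowly (process variance q).
    """
    x = measurements[0]   # initial state estimate
    p = 1.0               # initial estimate variance
    out = []
    for z in measurements:
        # predict: random-walk model, uncertainty grows by process noise q
        p += q
        # update: blend the prediction with the new observation z
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        out.append(x)
    return out
```

With these settings an isolated outlier in the recovered MV track (e.g. a mismatched concealment vector) is pulled back toward its temporal neighbors, which is the smoothing behavior the BKF stage exploits.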

References

1. Abreu A, Frossard P, Pereira F (2015) Optimizing multiview video plus depth prediction structures for interactive multiview video streaming. IEEE Journal of Selected Topics in Signal Processing 9(3):487–500
2. Assunçao P, Marcelino S, Soares S, Faria S (2016) Spatial error concealment for intra-coded depth maps in multiview video-plus-depth. Multimedia Tools and Applications 1–24
3. Chakareski J (2013) Adaptive multiview video streaming: challenges and opportunities. IEEE Commun Mag 51(5):94–100
4. Chen M, Chen L, Weng R (1997) Error concealment of lost motion vectors with overlapped motion compensation. IEEE Trans. on Circuits and Systems for Video Technology 7(3):560–563
5. Chung T, Sull S, Kim C (2011) Frame loss concealment for stereoscopic video plus depth sequences. IEEE Trans Consumer Electronics 57(3):1336–1344
6. Cui S, Huijuan C, Kun T (2012) An effective error concealment scheme for heavily corrupted H.264/AVC videos based on Kalman filtering. Journal of Signal, Image and Video Processing 8(8):1533–1542
7. El-Shafai W (2015) Pixel-level matching based multi-hypothesis error concealment modes for wireless 3D H.264/MVC communication. 3D Res 6(3):31
8. El-Shafai W (2015) Joint adaptive pre-processing resilience and post-processing concealment schemes for 3D video transmission. 3D Res 6(1):1–13
9. El-Shafai W, Hrušovský B, El-Khamy M, El-Sharkawy M (2011) Joint space-time-view error concealment algorithms for 3D multi-view video. In: 18th IEEE Int. Conference on Image Processing (ICIP), pp 2201–2204
10. El-Shafai W, El-Rabaie S, El-Halawany M, El-Samie F (2017) Encoder-independent decoder-dependent depth-assisted error concealment algorithm for wireless 3D video communication. Multimedia Tools and Applications 1–28
11. Gao Z, Lie W (2004) Video error concealment by using Kalman-filtering technique. In: IEEE Int. Symposium on Circuits and Systems, pp 69–72
12. H.264/AVC codec. http://iphome.hhi.de/suehring/tml/, accessed 28 December 2017
13. Hewage C, Martini M (2013) Quality of experience for 3D video streaming. IEEE Commun Mag 51(5):101–107
14. Huo Y, El-Hajjar M, Hanzo L (2013) Inter-layer FEC aided unequal error protection for multilayer video transmission in mobile TV. IEEE Trans. on Circuits and Systems for Video Technology 23(9):1622–1634
15. Hwang M, Ko S (2008) Hybrid temporal error concealment methods for block-based compressed video transmission. IEEE Trans on Broadcasting 54(2):198–207
16. Hwang M, Kim J, Duong D, Ko S (2008) Hybrid temporal error concealment methods for block-based compressed video transmission. IEEE Trans. on Broadcasting 54(2):198–207
17. Ibrahim A, Sadka A (2014) Error resilience and concealment for multiview video coding. In: IEEE Int. Symposium on Broadband Multimedia Systems and Broadcasting, pp 1–5
18. ISO/IEC JTC1 Common test conditions for multiview video coding (JVT-U207), pp 1–9
19. Khattak S, Maugey T, Hamzaoui R, Ahmad S, Frossard P (2016) Temporal and inter-view consistent error concealment technique for multiview plus depth video. IEEE Trans. on Circuits and Systems for Video Technology 26(5):829–840
20. Lee P, Kuo K, Chi C (2014) An adaptive error concealment method based on fuzzy reasoning for multiview video coding. J Disp Technol 10(7):560–567
21. Lie W, Lee C, Yeh C, Gao Z (2014) Motion vector recovery for video error concealment by using iterative dynamic-programming optimization. IEEE Transactions on Multimedia 16(1):216–227
22. Liu Y, Wang J, Zhang H (2010) Depth image-based temporal error concealment for 3-D video transmission. IEEE Trans. Circuits and Systems for Video Technology 20(4):600–604
23. Liu J, Zhang Y, Zheng X, Song J (2012) A dynamic hybrid UXP/ARQ method for scalable video transmission. In: IEEE 23rd Int. Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp 2566–2571
24. Liu Z, Cheung G, Ji Y (2013) Optimizing distributed source coding for interactive multiview video streaming over lossy networks. IEEE Trans on Circuits and Systems for Video Technology 23(10):1781–1794
25. Liu Y, Nie L, Han L, Zhang L, Rosenblum D (2015) Action2Activity: recognizing complex activities from sensor data. In: IJCAI, pp 1617–1623
26. Liu Y, Zhang L, Nie L, Yan Y, Rosenblum D (2016) Fortune Teller: predicting your career path. In: AAAI, pp 201–207
27. Liu Y, Nie L, Liu L, Rosenblum D (2016) From action to activity: sensor-based activity recognition. Neurocomputing 181:108–115
28. Liu Y, Zheng Y, Liang Y, Liu S, Rosenblum D (2016) Urban water quality prediction based on multi-task multi-view learning
29. Purica A, Mora E, Pesquet-Popescu B, Cagnazzo M, Ionescu B (2016) Multiview plus depth video coding with temporal prediction view synthesis. IEEE Trans Circuits and Systems for Video Technology 26(2):360–374
30. Salim O, Xiang W, Leis J (2013) An efficient unequal error protection scheme for 3-D video transmission. In: IEEE Wireless Communications and Networking Conf. (WCNC), pp 4077–4082
31. Tai S, Wang C, Hong C, Luo Y (2016) An efficient full frame algorithm for object-based error concealment in 3D depth-based video. Multimedia Tools and Applications 75(16):9927–9947
32. Wang H, Wang X (2016) Important macroblock distinction model for multi-view plus depth video transmission over error-prone network. Multimedia Tools and Applications 1–23
33. WD 4 reference software for multiview video coding (MVC). http://wftp3.itu.int/av-arch/jvt-site/2009_01_Geneva/JVT-AD207.zip, accessed 25 September 2017
34. Xiang X, Zhao D, Wang Q, Ji X, Gao W (2007) A novel error concealment method for stereoscopic video coding. In: IEEE Int. Conf. on Image Processing, pp 101–104
35. Xiang W, Gao P, Peng Q (2015) Robust multiview three-dimensional video communications based on distributed video coding. IEEE Syst J 11(4):2456–2466
36. Yan B, Zhou J (2012) Efficient frame concealment for depth image-based 3-D video transmission. IEEE Trans on Multimedia 14(3):936–941
37. Zeng H, Wang X, Cai C, Chen J, Zhang Y (2015) Fast multiview video coding using adaptive prediction structure and hierarchical mode decision. IEEE Trans. Circuits and Systems for Video Technology 24(9):1566–1578
38. Zhang L, Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20(8):2378–2386
39. Zhou Y, Xiang W, Wang G (2015) Frame loss concealment for multiview video transmission over wireless multimedia sensor networks. IEEE Sensors J 15(3):1892–1901

Walid El-Shafai was born in Alexandria, Egypt, on April 19, 1986. He received the B.Sc. degree in Electronics and Electrical Communication Engineering from the Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt, in 2008 and the M.Sc. degree from Egypt-Japan University of Science and Technology (E-JUST) in 2012. He is currently working as a Teaching Assistant and Ph.D. researcher in the ECE Department, FEE, Menoufia University. His research interests include wireless mobile and multimedia communication systems, image and video signal processing, efficient 2D video and 3D multi-view video coding, multi-view video plus depth coding, 3D multi-view video coding and transmission, quality of service and experience, digital communication techniques, 3D video watermarking and encryption, and error resilience and concealment algorithms for the H.264/AVC, H.264/MVC, and H.265/HEVC video coding standards.


EL-Sayed M. El-Rabaie (SM'92) was born in Sires Elian, Egypt, in 1953. He received the B.Sc. degree (with honors) in radio communications from Tanta University, Tanta, Egypt, in 1976, the M.Sc. degree in communication systems from Menoufia University, Menouf, Egypt, in 1981, and the Ph.D. degree in microwave device engineering from Queen's University of Belfast, Belfast, U.K., in 1986. In his doctoral research, he constructed a Computer-Aided Design (CAD) package used in nonlinear circuit simulations based on harmonic balance techniques. Up to February 1989, he was a Postdoctoral Fellow with the Department of Electronic Engineering, Queen's University of Belfast. He was invited as a Research Fellow to the College of Engineering and Technology, Northern Arizona University, Flagstaff, in 1992, and as a Visiting Professor to Ecole Polytechnique de Montreal, Montreal, QC, Canada, in 1994. He has authored and co-authored more than 300 papers and nineteen textbooks, and has received several awards, including the Salah Amer Award of Electronics in 1993 and the Best Researcher Award on CAD from Menoufia University in 1995. He acts as a reviewer and editorial board member for several scientific journals, and has participated in translating the first part of the Arabic Encyclopedia. Professor El-Rabaie was the Head of the Electronics and Communication Engineering Department, Faculty of Electronic Engineering, Menoufia University, and later the Vice Dean of Postgraduate Studies and Research at the same Faculty. He is now involved in different research areas including CAD of nonlinear microwave circuits, nanotechnology, digital communication systems, biomedical signal processing, acoustics, antennas, and digital image processing, and serves as a reviewer for the quality assurance and accreditation of Egyptian higher education. E-mail: [email protected], Mobile: 0020128498170.

Mohamed Elhalawany received the M.Sc. degree in electrical engineering from Menoufia University in Egypt in 1982. He received the Ph.D. degree from Wroclaw University in 1989. He is a Professor Emeritus in the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University. His areas of interests are signal processing, image processing, and digital communications.


Fathi E. Abd El-Samie received the B.Sc. (Honors), M.Sc., and Ph.D. degrees from Menoufia University, Menouf, Egypt, in 1998, 2001, and 2005, respectively. He is a Professor at the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Egypt. He is a co-author of several papers and textbooks. His current research interests include image enhancement, image restoration, image interpolation, super-resolution reconstruction of images, data hiding, multimedia communications, medical image processing, optical signal processing, and digital communications. Dr. Abd El-Samie was a recipient of the Most Cited Paper Award from the Digital Signal Processing journal in 2008.