Principles of Wireless Communications

Dr. N. Muthukumaran, B.E., M.E., Ph.D., Professor

Mrs. R. Sonya, Associate Professor

Department of Electronics and Communication Engineering, Francis Xavier Engineering College (Affiliated to AICTE and Anna University Chennai, Recognized Under Section 2(F), 12(B) of the UGC Act 1956 & Accredited by NBA), Tirunelveli, Tamilnadu, India. Mobile Number: +91 9952203887, +91 7010408127. E-Mail: [email protected]


ABOUT THE AUTHOR 1
Dr. N. Muthukumaran was born in Kaniyakumari, Tamilnadu, India, in 1984. He received the B.E. degree in Electronics and Communication Engineering from Anna University, Chennai, India, in 2007, the M.E. degree in Applied Electronics from Anna University, Chennai, in 2010, and the Ph.D. degree in Information and Communication Engineering from Anna University, Chennai, in 2015. He is currently working as a Professor in the Department of Electronics and Communication Engineering at Francis Xavier Engineering College, Tirunelveli, Tamilnadu, India. His major research interests are in the fields of digital image and signal processing, multimedia image and video processing and compression, and digital and analog Very Large Scale Integration (VLSI) circuit design. He has guided several Ph.D. and M.E. projects in the areas of image processing, image compression, VLSI circuits, communication systems, and networks. Since 2006, he has published more than 68 papers in international journals (Springer, IEEE, Elsevier) and 82 papers in national/international conferences. He has published 7 books intended for engineering students. He has actively participated in and organized more than 82 research-related events, including national and international workshops, faculty development programs, seminars, symposia, conferences, and short-term courses. Dr. N. Muthukumaran is recognized as a supervisor for guiding Ph.D. and M.S. (by Research) scholars under the Faculty of Information and Communication Engineering of Anna University, Chennai. He has supervised and evaluated 32 candidates for their M.E. and Ph.D. theses in the areas of digital image processing, VLSI design, networks, and communication systems at various universities, including Visvesvaraya Technological University Belagavi, Mysore University, Punjab Technology University, Barkatulla Vishvavidyalaya University Bhopal, Sri Siddhartha Academy of Higher Education University, Gujarat Technological University, Anna University of Technology Tirunelveli, and Anna University Chennai. Currently, he serves as an editorial and reviewer board member of 28 international journals, including Anna University Chennai recognized Annexure I and Annexure II journals. He has submitted more than 7 project proposals to various government organizations such as DRDO, ISRO, and DST, for a total amount of Rs. 1,20,00,000 (one crore twenty lakhs), and received a grant of Rs. 45,00,000 (forty-five lakhs) under the Ministry of Science & Technology (MST), Department of Science & Technology (DST), titled "Fund for Improvement of Science and Technology Infrastructure in Universities and Higher Educational Institutions (FIST) Program - 2016". He has served in various academic and non-academic positions serving faculty and students. He is a life member of more than 19 professional membership bodies and associations, such as IEEE, ISI, WCECS, and UACEE.


ABOUT THE AUTHOR 2
Mrs. R. Sonya was born in Kaniyakumari, Tamilnadu, India, in 1985. She received the B.E. degree in Electronics and Communication Engineering and the M.E. degree in Power Electronics and Drives from Anna University, Chennai, India, in 2007 and 2009 respectively. She is currently working as an Assistant Professor in the Department of Electronics and Communication Engineering at Francis Xavier Engineering College, Tirunelveli, Tamilnadu, India. Her major research interests are in the fields of digital image processing, wireless communication, and VLSI design. Since 2007 she has published more than 15 papers in international journals (Springer, IEEE) and 38 papers in national/international conferences. She has published 1 book intended for engineering students. She has actively participated in and organized more than 22 events, including national and international workshops, faculty development programs, seminars, symposia, conferences, and short-term courses.


PREFACE
This book deals with fundamental cellular radio concepts such as frequency reuse and handoff. It also demonstrates the principle of trunking efficiency and how trunking and interference issues between mobile and base stations combine to affect the overall capacity of cellular systems. It presents different radio propagation models and ways to predict the large-scale effects of radio propagation in many operating environments. It also covers small-scale propagation effects such as fading, time delay spread and Doppler spread, and describes how to measure and model the impact that signal bandwidth and motion have on the instantaneous received signal through the multipath channel. It provides an overview of the analog and digital modulation techniques used in wireless communication, and deals with the different types of equalization techniques and diversity concepts. It provides an introduction to speech coding principles, which have driven the development of adaptive pulse code modulation and linear predictive coding techniques. Finally, it deals with advanced transceiver schemes and second-generation and third-generation wireless networks. The objective is that the student should know the characteristics of the wireless channel, learn the various cellular architectures, understand the concepts behind various digital signalling schemes for fading channels, be familiar with the various multipath mitigation techniques, and understand the various multiple antenna systems. As outcomes of the course, the student should be able to characterize wireless channels; design and implement various signalling schemes for fading channels; design a cellular system; compare multipath mitigation techniques and analyze their performance; and design and implement systems with transmit/receive diversity and MIMO systems and analyze their performance.


Contents
About the Author
Preface
Contents
1 SERVICES AND TECHNICAL CHALLENGES
 1.1 Introduction to Wireless Communication
 1.2 Types of Services
  1.2.1 Broadcast
  1.2.2 Paging
  1.2.3 Cellular Telephony
  1.2.4 Trunking Radio
  1.2.5 Cordless Telephony
  1.2.6 Wireless Local Area Networks
  1.2.7 Personal Area Networks
  1.2.8 Fixed Wireless Access
  1.2.9 Ad hoc Networks and Sensor Networks
  1.2.10 Satellite Cellular Communications
 1.3 Requirements for the Services
  1.3.1 Data Rate
  1.3.2 Range and Number of Users
  1.3.3 Mobility
  1.3.4 Energy Consumption
  1.3.5 Use of Spectrum
  1.3.6 Direction of Transmission
  1.3.7 Service Quality
 1.4 Technical Challenges of Wireless Communications
  1.4.1 Multipath Propagation
  1.4.2 Spectrum Limitations
 1.5 Noise- and Interference-Limited Systems
  1.5.1 Noise-Limited Systems
  1.5.2 Man-made Noise
  1.5.3 Receiver Noise
 1.6 Principles of Cellular Networks
  1.6.1 Reuse Distance
  1.6.2 Cell Shape
  1.6.3 Cell Planning with Hexagonal Cells
  1.6.4 Methods for Increasing Capacity
 1.7 Multiple Access Schemes
  1.7.1 Frequency Division Multiple Access
  1.7.2 Time Division Multiple Access
  1.7.3 Spread Spectrum Multiple Access
  1.7.4 Code Division Multiple Access
  1.7.5 Space Division Multiple Access
  1.7.6 Packet Radio
 1.8 Comparison between Wired and Wireless Communications
 QUESTION BANK
 TWO MARK QUESTIONS WITH ANSWER
2 WIRELESS PROPAGATION CHANNELS
 2.1 Propagation Mechanisms
  2.1.1 Free Space Attenuation
  2.1.2 Reflection
  2.1.3 Diffraction
  2.1.4 Scattering by Rough Surfaces
 2.2 Propagation Effects with Mobile Radio
  2.2.1 Rayleigh Fading
  2.2.2 Rician Fading
  2.2.3 Doppler Shift
 2.3 Channel Classification
  2.3.1 Time-Selective Channels
  2.3.2 Frequency-Selective Channels
  2.3.3 General Channel
  2.3.4 WSSUS Channels
  2.3.5 Coherence Time
  2.3.6 Power-Delay Profile
  2.3.7 Coherence Bandwidth
  2.3.8 Stationary and Non-stationary Channels
 2.4 Link Calculations
 2.5 Narrowband Models
  2.5.1 Modelling of Small-Scale and Large-Scale Fading
  2.5.2 Path Loss Models
 2.6 Wideband Models
  2.6.1 Tapped Delay Line Models
  2.6.2 Models for the Power Delay Profile
  2.6.3 Models for the Arrival Times of Rays and Clusters
  2.6.4 Standardized Channel Model
 QUESTION BANK
 TWO MARK QUESTIONS WITH ANSWER
3 WIRELESS TRANSCEIVERS
 3.1 Structure of a Wireless Communication Link
 3.2 Modulation and Demodulation
 3.3 Quadrature Phase Shift Keying
 3.4 Offset Quadrature Phase Shift Keying
 3.5 π/4 Quadrature Phase Shift Keying
 3.6 Binary Frequency Shift Keying
 3.7 Minimum Shift Keying
 3.8 Gaussian Minimum Shift Keying
 QUESTION BANK
 TWO MARK QUESTIONS WITH ANSWER
4 SIGNAL PROCESSING IN WIRELESS SYSTEMS
 4.1 Principle of Diversity
 4.2 Microdiversity
  4.2.1 Spatial Diversity
  4.2.2 Temporal Diversity
  4.2.3 Frequency Diversity
  4.2.4 Angle Diversity
  4.2.5 Polarization Diversity
 4.3 Macrodiversity and Simulcast
 4.4 Combination of Signals
  4.4.1 Selection Diversity
  4.4.2 Switched Diversity
  4.4.3 Combining Diversity
 4.5 Transmit Diversity
  4.5.1 Transmitter Diversity with Channel State Information
  4.5.2 Transmitter Diversity without Channel State Information
 4.6 Equalisers
  4.6.1 Modeling of Channel and Equalizer
  4.6.2 Channel Estimation
  4.6.3 Linear Equalizers
  4.6.4 Non-linear Equalizers
 4.7 Comparison of Various Algorithms for Adaptive Equalization
 4.8 Channel Coding Techniques
  4.8.1 Block Codes
  4.8.2 Convolutional Codes
  4.8.3 Trellis Coded Modulation
  4.8.4 Turbo Codes
 4.9 Speech Coding Techniques
  4.9.1 Speech Coder Designs
  4.9.2 The Sound of Speech
  4.9.3 Quantization and Coding
 QUESTION BANK
 TWO MARK QUESTIONS WITH ANSWER
5 ADVANCED TRANSCEIVER SCHEMES
 5.1 Spread Spectrum Systems
 5.2 Frequency Hopping Multiple Access
  5.2.1 Principle Behind Frequency Hopping
  5.2.2 Frequency Hopping for Multiple Access
 5.3 Cellular Code-Division-Multiple-Access Systems
  5.3.1 Principle Behind Code Division Multiple Access
  5.3.2 Power Control
  5.3.3 Methods for Capacity Increases
  5.3.4 Effects of Multipath Propagation on Code Division Multiple Access
 5.4 Orthogonal Frequency Division Multiplexing
  5.4.1 Principle of Orthogonal Frequency Division Multiplexing
  5.4.2 Implementation of Transceivers
  5.4.3 Cyclic Prefix
 5.5 GSM – Global System for Mobile Communications
  5.5.1 System Overview
  5.5.2 Logical and Physical Channels
  5.5.3 Synchronization
  5.5.4 Establishing a Connection and Handover
  5.5.5 Services and Billing
 5.6 IS-95
  5.6.1 System Overview
  5.6.2 Air Interface
  5.6.3 Logical and Physical Channels
  5.6.4 Handover
 5.7 WCDMA/UMTS
  5.7.1 Historical Overview
  5.7.2 System Overview
  5.7.3 Hierarchical Cellular Structure
  5.7.4 Data Rates and Service Classes
  5.7.5 Physical and Logical Channels
  5.7.6 Establishing a Connection
  5.7.7 Power Control
 QUESTION BANK
 TWO MARK QUESTIONS WITH ANSWER
AUTHOR PICTURE & BIOGRAPHY

Principles of Wireless Communications

UNIT I SERVICES AND TECHNICAL CHALLENGES

1.1 Introduction to Wireless Communication
Wireless communication is the transfer of information between two or more points that are not connected by an electrical conductor. The most common wireless technologies use radio. With radio waves, distances can be short, such as a few meters for television, or as far as thousands or even millions of kilometres for deep-space radio communications. Other examples of applications of radio wireless technology include GPS units, garage door openers, wireless computer mice, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones.
Characteristics of wireless communications systems:
1. Mobility
2. Reachability
3. Simplicity
4. Maintainability
5. Roaming Services
Advantages:
1. Flexibility
2. Ease of use
3. Planning
4. Place devices
5. Durability
6. Prices
Applications of wireless technology:
1. Mobile telephones
2. Wireless data communications
3. Wi-Fi
4. Cellular data service
5. Mobile Satellite Communications
6. Wireless Sensor Networks
7. Wireless energy transfer
8. Wireless Medical Technologies
9. Computer interface devices
1.2 Types of Services
1.2.1 Broadcast
The first wireless service was broadcast radio. In this application, information is transmitted to different, possibly mobile, users (see Figure 1.1). Four properties differentiate broadcast radio:
1. The information is only sent in one direction. It is only the broadcast station that sends information to the radio or TV receivers; the listeners (or viewers) do not transmit any information back to the broadcast station.
2. The transmitted information is the same for all users.
3. The information is transmitted continuously.


Figure 1.1 Principle of a broadcast transmission.
4. The transmitter does not need to have any knowledge of, or consideration for, the receivers. There is no requirement to provide for duplex channels (i.e., for bringing information from the receiver to the transmitter), and the number of possible users of the service does not influence the transmitter.

1.2.2 Paging
Similar to broadcast, paging systems are unidirectional wireless communications systems. They are characterized by the following properties (see also Figure 1.2):
1. The user can only receive information, but cannot transmit. Consequently, a "call" (message) can only be initiated by the call centre, not by the user.
2. The information is intended for, and received by, only a single user.
3. The amount of transmitted information is very small.
Due to the unidirectional nature of the communications, and the small amount of information, the bandwidth required for this service is small. This in turn allows the service to operate at lower carrier frequencies – e.g., 150 MHz.

Figure 1.2 Principle of a pager.
The main appeal of paging systems, after the year 2000, lies in the better area coverage that they can achieve.

1.2.3 Cellular Telephony
Cellular telephony is the economically most important form of wireless communications. It is characterized by the following properties:
1. The information flow is bidirectional. A user can transmit and receive information at the same time.
2. The user can be anywhere within a (nation-wide or international) network. Neither (s)he nor the calling party needs to know the user's location; it is the network that has to take the mobility of the user into account.
3. A call can originate from either the network or the user. In other words, a cellular customer can be called or can initiate a call.
4. A call is intended only for a single user; other users of the network should not be able to listen in.
5. High mobility of the users. The location of a user can change significantly during a call.
Figure 1.3 shows the block diagram of a cellular system. A mobile user communicates with a BS that has a good radio connection with that user. The BSs are connected to a mobile switching centre, which is connected to the public telephone system.

Figure 1.3 Principle of a cellular system.
Since each user wants to transmit or receive different information, the number of active users in a network is limited. The available bandwidth must be shared between the different users; this can be done via different "multiple access" schemes. This is an important difference from broadcast systems, where the number of users (receivers) is unlimited, since they all receive the same information. In order to increase the number of possible users, the cellular principle is used: the area served by a network provider is divided into a number of subareas called cells. Within each cell, different users have to share the available bandwidth. Each user occupies a different carrier frequency. Even users in neighbouring cells have to use different frequencies, in order to keep co-channel interference low. However, for cells that are sufficiently far apart, the same frequencies can be used, because the signals get weaker with increasing distance from the transmitter. Thus, within one country, there can be hundreds or thousands of cells that are using the same frequencies.

1.2.4 Trunking Radio
Trunking radio systems are an important variant of cellular phones, in which there is no connection between the wireless system and the PSTN; the system therefore serves the communications of closed user groups.

Obvious applications include police departments, fire departments, taxis, and similar services. The closed user group allows implementation of several technical innovations that are not possible (or more difficult) in normal cellular systems:
1. Group calls: a communication can be sent to several users simultaneously, or several users can set up a conference call between multiple users of the system.
2. Call priorities: a normal cellular system operates on a "first-come, first-serve" basis. Once a call is established, it cannot be interrupted. This is reasonable for cell phone systems, where the network operator cannot ascertain the importance or urgency of a call. A trunking radio system thus has to enable the prioritization of calls and has to allow dropping a low-priority call in favour of a high-priority one.
3. Relay networks: the range of the network can be extended by using each Mobile Station (MS) as a relay station for other MSs. Thus, an MS that is out of the coverage region of the BS might send its information to another MS that is within the coverage region, and that MS will forward the message to the BS; the system can even use multiple relays to finally reach the BS. Such an approach increases the effective coverage area and the reliability of the network.
1.2.5 Cordless Telephony
Cordless telephony describes a wireless link between a handset and a BS that is directly connected to the public telephone system. The main difference from a cellphone is that the cordless telephone is associated with, and can communicate with, only a single BS (see Figure 1.4). There is thus no mobile switching centre; rather, the BS is directly connected to the PSTN. This has several important consequences:
1. The BS does not need to have any network functionality. When a call is coming in from the PSTN, there is no need to find out the location of the MS. Similarly, there is no need to provide for handover between different BSs.
2. There is no central system. A user typically has one BS for his/her apartment or business under control, but no influence on any other BSs.
3. The fact that the cordless phone is under the control of the user also implies a different pricing structure: there are no network operators that can charge fees for connections from the MS to the BS; rather, the only occurring fees are the fees from the BS into the PSTN.

Figure 1.4 Principle of a simple cordless phone.
In many other respects, the cordless phone is similar to the cellular phone: it allows mobility within the cell area; the information flow is bidirectional; calls can originate from either the PSTN or the mobile user; and there have to be provisions such that calls cannot be intercepted or listened to by unauthorized users and no unauthorized calls can be made.

Cordless systems have also evolved into wireless Private Automatic Branch exchanges (PABXs) (see Figure 1.5). In its most simple form, a PABX has a single BS that can serve several handsets simultaneously – either connecting them to the PSTN or establishing a connection between them (for calls within the same company or house). In its more advanced form, the PABX contains several BSs that are connected to a central control station. Such a system has essentially the same functionality as a cellular system; it is only the size of the coverage area that distinguishes such a full-functionality wireless PABX from a cellular network. In the U.S.A., digital cordless phones mainly operate in the 2.45-GHz Industrial, Scientific, and Medical (ISM) band, which they share with many other wireless services.

Figure 1.5 Principle of a wireless private automatic branch exchange.
1.2.6 Wireless Local Area Networks
The functionality of Wireless Local Area Networks (WLANs) is very similar to that of cordless phones – connecting a single mobile user device to a public landline system. The "mobile user device" in this case is usually a laptop computer, and the public landline system is the Internet. As in the cordless phone case, the main advantage is convenience for the user, allowing mobility. Wireless LANs can even be useful for connecting fixed-location computers (desktops) to the Internet, as they save the costs for laying cables to the desired location of the computer. A major difference between wireless LANs and cordless phones is the required data rate. While cordless phones need to transmit (digitized) speech, which requires at most 64 kbit/s, wireless LANs should be at least as fast as the Internet that they are connected to. WLAN devices can, in principle, connect to any BS (access point) that uses the same standard. However, the owner of the access point can restrict the access – e.g., by appropriate security settings.
1.2.7 Personal Area Networks
When the coverage area becomes even smaller than that of WLANs, we speak of Personal Area Networks (PANs). Such networks are mostly intended for simple "cable replacement" duties. For example, devices following the Bluetooth standard allow to connect a hands-free headset to a phone without requiring a cable; in that case, the distance between the two devices is less than a meter. In such applications, data rates are fairly low.

UNIT II WIRELESS PROPAGATION CHANNELS
2.1 Propagation Mechanisms
2.1.1 Free Space Attenuation
… for d >> λ and d >> La. In order to better point out the distance dependence, it is advantageous to first compute the received power at 1-m distance. The actual received power at a distance d (in meters) is then: (2.6)
2.1.2 Reflection
Snell's Law
Electromagnetic waves are often reflected at one or more IOs before arriving at the RX. The reflection coefficient of the IO, as well as the direction into which this reflection occurs, determines the power that arrives at the RX position. In this section, we deal with specular

reflections. This type of reflection occurs when waves are incident onto smooth, large objects. A related mechanism is the transmission of waves – i.e., the penetration of waves into and through an IO. Transmission is especially important for wave propagation inside buildings. If the Base Station (BS) is either outside the building, or in a different room, then the waves have to penetrate a wall (dielectric layer) in order to get to the RX. The dielectric constant and conductivity can be merged into a single parameter, the complex dielectric constant: (2.7) where fc is the carrier frequency, and j is the imaginary unit. Though this definition is strictly valid only for a single frequency, it can actually be used for all narrowband systems, where the bandwidth is much smaller than the carrier frequency, as well as much smaller than the bandwidth over which the quantities σe and ε vary significantly. The plane wave is incident on the half space at an angle Θe, which is defined as the angle between the wave vector k and the unit vector that is orthogonal to the dielectric boundary. We have to distinguish between the Transversal Magnetic (TM) case, where the magnetic field component is parallel to the boundary between the two dielectrics, and the Transversal Electric (TE) case, where the electric field component is parallel (see Figure 2.1). The reflection and transmission coefficients can now be computed by postulating incident, reflected, and transmitted plane waves, and enforcing the continuity conditions at the boundary. From these considerations we obtain Snell's law: the angle of incidence is the same as the reflected angle: (2.8)

Figure 2.1 Reflection and transmission.
The angle of the transmitted wave is given by:

(2.9)
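The reflection and transmission coefficient formulas themselves did not survive extraction, so the following minimal sketch uses the standard Fresnel formulas for a plane wave hitting a dielectric half space as a hedged illustration of Eqs. (2.7)–(2.9). It assumes the incidence angle is measured from the normal to the boundary (as in the text) and a simple, possibly complex, relative permittivity; it is not copied from the book.

```python
import numpy as np

def snell_and_fresnel(theta_e_deg, eps_r):
    """Transmitted angle (Snell's law) and TE/TM reflection coefficients for a
    plane wave incident from free space onto a dielectric half space.
    theta_e_deg: incidence angle measured from the surface normal.
    eps_r: relative permittivity (may be complex, cf. Eq. (2.7))."""
    th = np.radians(theta_e_deg)
    # Snell's law of refraction: sin(theta_t) = sin(theta_e) / sqrt(eps_r)
    sin_t = np.sin(th) / np.sqrt(eps_r)
    theta_t = np.degrees(np.arcsin(sin_t))
    root = np.sqrt(eps_r - np.sin(th) ** 2)
    rho_te = (np.cos(th) - root) / (np.cos(th) + root)                   # E-field parallel to boundary
    rho_tm = (eps_r * np.cos(th) - root) / (eps_r * np.cos(th) + root)   # H-field parallel to boundary
    return theta_t, rho_te, rho_tm

# Example: concrete-like wall (eps_r ~ 6, assumed value), 30 degree incidence
print(snell_and_fresnel(30.0, 6.0))
```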

The d−4 Power Law
One of the "folk laws" of wireless communications says that the received signal power is inversely proportional to the fourth power of the distance between TX and RX. This law is often justified by computing the received power for the case that only a direct (Line Of Sight, LOS) wave, plus a ground-reflected wave, exists: (2.10) where hTX and hRX are the heights of the transmit and the receive antenna, respectively; it is valid for distances larger than: (2.11) This equation, which replaces the standard Friis' law, implies that the received power becomes independent of frequency. Furthermore, it follows from Eq. (2.10) that the received power increases with the square of the height of both BS and MS. For distances d < dbreak, Friis' law remains approximately valid. For the link budget it is useful to rewrite the power law on a logarithmic scale: assuming that the power decays as d−2 until a breakpoint dbreak, and from there with d−n, the received power beyond the breakpoint can be written relative to the power at dbreak.
Brewster Angle
The Brewster angle is the angle at which no reflection occurs in the medium of origin. It occurs when the incident angle θB is such that the reflection coefficient is equal to zero. The Brewster angle is given by the value of θB which satisfies

(2.12) For the case when the first medium is free space and the second medium has a relative permittivity εr, Eq. (2.12) can be expressed as:

sin θB = √(εr − 1) / √(εr² − 1)   (2.13)
Note that the Brewster angle occurs only for vertical (i.e., parallel) polarization.
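To make the d−4 behaviour above concrete, the sketch below evaluates the breakpoint distance of Eq. (2.11) in its commonly used form dbreak = 4·hTX·hRX/λ and the piecewise power law described in the text (free-space decay up to the breakpoint, d−4 beyond it). The antenna heights, frequency, and transmit power are illustrative values, not taken from the book.

```python
import math

def breakpoint_distance(h_tx, h_rx, f_hz):
    lam = 3e8 / f_hz
    return 4.0 * h_tx * h_rx / lam            # commonly used form of Eq. (2.11)

def rx_power_dbm(p_tx_dbm, d, h_tx, h_rx, f_hz):
    """Piecewise model: free-space (d^-2) up to d_break, then d^-4 beyond it."""
    lam = 3e8 / f_hz
    d_break = breakpoint_distance(h_tx, h_rx, f_hz)
    if d <= d_break:
        return p_tx_dbm + 20 * math.log10(lam / (4 * math.pi * d))      # Friis, isotropic antennas
    # beyond the breakpoint the power falls off with the fourth power of distance
    p_break = p_tx_dbm + 20 * math.log10(lam / (4 * math.pi * d_break))
    return p_break - 40 * math.log10(d / d_break)

# Illustrative numbers: 30 m base station, 1.5 m mobile, 900 MHz, 40 dBm transmit power
print(round(breakpoint_distance(30, 1.5, 900e6)), "m breakpoint")
for d in (100, 1000, 5000):
    print(d, "m:", round(rx_power_dbm(40, d, 30, 1.5, 900e6), 1), "dBm")
```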

2.1.3 Diffraction
All the equations derived up to now deal with infinitely extended IOs. However, real IOs like buildings, cars, etc. have a finite extent. A finite-sized object does not create sharp shadows (the way geometrical optics would have it); rather, there is diffraction due to the wave nature of electromagnetic radiation. In the following, we first treat two canonical diffraction problems: diffraction of a homogeneous plane wave (i) by a knife edge or screen and (ii) by a wedge, and derive the diffraction coefficients.
Diffraction by a Single Screen or Wedge
The Diffraction Coefficient
The simplest diffraction problem is the diffraction of a homogeneous plane wave by a semi-infinite screen, as sketched in Figure 2.2. Diffraction can be understood from Huygens' principle, according to which each point of a wave front can be considered the source of a spherical wave. For a homogeneous plane wave, the superposition of these spherical waves results in another homogeneous plane wave; see the transition from plane A′ to B′. If, however, the screen eliminates part of the point sources, the resulting wave front is not plane anymore; see the transition from plane B′ to C′. Constructive and destructive interferences occur in different directions.

Figure 2.2 Huygens' principle.
The electric field at any point to the right of the screen (x ≥ 0) can be expressed in a form that involves only a standard integral, the Fresnel integral. With the incident field represented as exp(−j k0 x), the total field becomes (2.14)

Figure 2.3 Fresnel integral.

Figure 2.3 plots this function. It is interesting that the magnitude of F(νF) can become larger than unity for some values of νF. This implies that the received power at a specific location can actually be increased by the presence of the screen. Consider now the more general geometry of Figure 2.4. The TX is at height hTX, the RX at hRX, and the screen extends from −∞ to hs. The diffraction angle θd is thus:

(2.15)

Figure 2.4 Geometry for the computation of the Fresnel parameters.
The Fresnel parameter νF can be obtained from θd as:

(2.16) The field strength can again be computed from Eq. (2.14), just using the Fresnel parameter from Eq. (2.16).
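As a hedged numerical companion to Eqs. (2.14)–(2.16), the sketch below evaluates the classical knife-edge diffraction gain from the Fresnel parameter, using the equivalent formulation νF = h·√(2(d1 + d2)/(λ·d1·d2)), where h is the height of the screen edge above the TX–RX line and d1, d2 are the distances from TX and RX to the screen. It relies on scipy's tabulated Fresnel integrals; the geometry values are made up for illustration.

```python
import numpy as np
from scipy.special import fresnel   # returns (S(x), C(x))

def fresnel_parameter(h, d1, d2, lam):
    """Fresnel parameter nu_F for a knife edge h above (h > 0) or below (h < 0) the LOS line."""
    return h * np.sqrt(2.0 * (d1 + d2) / (lam * d1 * d2))

def knife_edge_gain_db(nu):
    """Diffraction gain 20*log10|F(nu)|, with F(nu) = (1+j)/2 * integral_nu^inf exp(-j*pi*t^2/2) dt."""
    S, C = fresnel(nu)
    F = (1 + 1j) / 2 * ((0.5 - C) - 1j * (0.5 - S))
    return 20 * np.log10(np.abs(F))

# Illustrative geometry: 900 MHz link, screen edge 5 m above the LOS line,
# located 1 km from the TX and 2 km from the RX
lam = 3e8 / 900e6
nu = fresnel_parameter(5.0, 1000.0, 2000.0, lam)
print("nu_F =", round(float(nu), 2), " gain =", round(float(knife_edge_gain_db(nu)), 1), "dB")
```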



Fresnel Zones
The impact of an obstacle can also be assessed qualitatively, and intuitively, by the concept of Fresnel zones. Figure 2.5 shows the basic principle. Draw an ellipsoid whose foci are the BS and the MS locations. According to the definition of an ellipsoid, all rays that are reflected at points on this ellipsoid have the same run length (equivalent to runtime). The eccentricity of the ellipsoid determines the extra run length compared with the LOS – i.e., the direct connection between the two foci. Ellipsoids where this extra distance is an integer multiple of λ/2 are called "Fresnel ellipsoids". Now, extra run length also leads to an additional phase shift, so that the ellipsoids can be described by the phase shift that they cause. More specifically, the ith Fresnel ellipsoid is the one that results in a phase shift of iπ.
Figure 2.5 The principle of Fresnel ellipsoids.
Fresnel zones can also be used to explain the d−4 law. The propagation follows a free space law up to the distance where the first Fresnel ellipsoid touches the ground. At this distance, which is the breakpoint distance, the phase difference between the direct and the reflected ray becomes π.
Diffraction by a Wedge
The semi-infinite absorbing screen is a useful tool for the explanation of diffraction, since it is the simplest possible configuration. However, many obstacles, especially in urban environments, are much better represented by a wedge structure, as sketched in Figure 2.6. The problem of diffraction by a wedge has been treated for some 100 years, and is still an area of active research. Depending on the boundary conditions, solutions can be derived that are either valid at arbitrary observation points or approximate solutions that are only valid in the far field (i.e., far away from the wedge). These latter solutions are usually much simpler, and will thus be the only ones considered here. The part of the field that is created by diffraction can be written as the product of the incident field with a phase factor exp(−j k0 dRX), a geometry factor A(dTX, dRX) that depends only on the distances of TX and RX from the wedge, and a diffraction coefficient D(φTX, φRX) that depends on the diffraction angles: (2.17) The geometry factor is given by:


(2.18) The diffraction coefficient D depends on the boundary conditions – namely, the reflection coefficients ρTX and ρRX.

Figure 2.6 Geometry for wedge diffraction.
Diffraction by Multiple Screens
Diffraction by a single screen is a problem that has been widely studied, because it is amenable to closed-form mathematical treatment, and forms the basis for the treatment of more complex problems. However, in practice, we usually encounter situations where multiple IOs are located between TX and RX. Such a situation occurs, e.g., for propagation over the rooftops of an urban environment. As we see in Figure 2.7, such a situation can be well approximated by diffraction by multiple screens. Unfortunately, diffraction by multiple screens is an extremely challenging mathematical problem.
Bullington's Method
Bullington's method replaces the multiple screens by a single "equivalent" screen. This equivalent screen is derived in the following way: put a tangential straight line from the TX to the real obstacles, and select the steepest one (i.e., the one with the largest elevation angle), so that all obstacles either touch this tangent, or lie below it. Similarly, take the tangents from the RX to the obstacles, and select the steepest one. The equivalent screen is then determined by the intersection of the steepest TX tangent and the steepest RX tangent (see Figure 2.8).

Figure 2.7 Approximation of multiple buildings by a series of screens.

Figure 2.8 Equivalent screen after Bullington.

The major attraction of Bullington's method is its simplicity. However, this simplicity also leads to considerable inaccuracies. Most of the physically existing screens do not impact the location of the equivalent screen. Even the highest obstacle might not have an impact. Consider Figure 2.8: if the highest obstacle lies between screens 01 and 02, it could lie below the tangential lines, and thus not influence the "equivalent" screen, even though it is higher than either screen 01 or screen 02. In reality, these high obstacles do have an effect on propagation loss, and cause an additional attenuation. The Bullington method thus tends to give optimistic predictions of the received power.
The Epstein–Petersen Method
The low accuracy of the Bullington method is due to the fact that only two obstacles determine the equivalent screen, and thus the total diffraction coefficient. This problem can be somewhat mitigated by the Epstein–Petersen method. This approach computes the diffraction losses for each screen separately. The attenuation of a specific screen is computed by putting a virtual "TX" and "RX" on the tips of the screens to the left and right of this considered screen (Figure 2.9). The diffraction coefficient, and the attenuation, of this one screen can then be easily computed. Attenuations by the different screens are then added up (on a logarithmic scale). The method thus includes the effects of all screens. Despite this more refined modeling, the method is still only approximate. It uses the diffraction attenuation (Eq. 2.14) that is based on the assumption that the RX is in the far field of the screen. If, however, two of the screens are close together, this assumption is violated, and significant errors can occur. The inaccuracies caused by this "far-field assumption" can be reduced considerably by the slope diffraction method. In this approach, the field is expanded into a Taylor series. In addition to the zeroth-order term (far field), which enforces continuity of the electrical field at the screen, the first-order term is also taken into account, and used to enforce continuity of the first derivative of the field.

Figure 2.9 The Epstein–Petersen method.
Deygout's Method
The philosophy of Deygout's method is similar to that of the Epstein–Petersen method, as it also adds up the attenuations caused by each screen. However, the diffraction angles are defined in the Deygout method by a different algorithm:
• In the first step, determine the attenuation between TX and RX if only the ith screen is present (for all i).
• The screen that causes the largest attenuation is defined as the "main screen"; its index is defined as ims.
• Compute the attenuation between the TX and the tip of the main screen caused by the jth screen (with j running now from 1 to ims). The screen resulting in the largest attenuation is

called the "subsidiary main screen". Similarly, compute the attenuation between the main screen and the RX, caused by the jth screen (j > ims + 1).
• Optionally, repeat that procedure to create "subsidiary screens", etc.
• Add up the losses (in dB) from all considered screens.
The Deygout method works well if there is actually one dominant screen that creates most of the losses. Otherwise, it can create considerable errors.
Comparison of the Different Methods
• The Bullington method is independent of the number of screens, and thus obviously gives a wrong functional dependence.
• The Epstein–Petersen method adds the attenuations on a logarithmic scale and thus leads to an exponential increase of the total attenuation on a linear scale.
• Similarly, the Deygout method and the ITU-R method predict an exponential increase of the total attenuation as the number of screens increases.

Figure 2.10 Comparison of different computation methods for multiple-screen diffraction.

• For a small number of screens of different height, both the Deygout and the Epstein–Petersen methods can be used successfully.
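The following sketch illustrates the Epstein–Petersen idea in its simplest form: each screen is treated as a single knife edge between the tips of its neighbouring screens (or the real TX/RX), the individual losses are evaluated with the ITU-R P.526 single-knife-edge approximation, and the dB values are added. The terrain profile is invented for illustration, and the approximation J(ν) is an assumption taken from the open literature, not quoted in this book.

```python
import math

def knife_edge_loss_db(nu):
    """ITU-R P.526 approximation of the single knife-edge loss, usable for nu > -0.78."""
    if nu <= -0.78:
        return 0.0
    return 6.9 + 20 * math.log10(math.sqrt((nu - 0.1) ** 2 + 1) + nu - 0.1)

def epstein_petersen_loss_db(points, lam):
    """points: [(x, height), ...] = TX, screen tips..., RX, with x increasing.
    Each inner screen is judged against the straight line joining its two neighbours."""
    total = 0.0
    for i in range(1, len(points) - 1):
        (xa, ha), (xs, hs), (xb, hb) = points[i - 1], points[i], points[i + 1]
        d1, d2 = xs - xa, xb - xs
        # height of the screen tip above the line between the neighbouring tips
        h = hs - (ha + (hb - ha) * d1 / (d1 + d2))
        nu = h * math.sqrt(2.0 * (d1 + d2) / (lam * d1 * d2))
        total += knife_edge_loss_db(nu)
    return total

# Invented profile: TX, three rooftops, RX; positions and heights in metres, 900 MHz carrier
profile = [(0, 30), (300, 25), (600, 28), (900, 24), (1200, 1.5)]
print(round(epstein_petersen_loss_db(profile, 3e8 / 900e6), 1), "dB excess loss")
```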

2.1.4 Scattering by Rough Surfaces Scattering on rough surfaces (Figure 2.11) is a process that is very important for wireless communications. Scattering theory usually assumes roughness to be random.

Figure 2.11 Scattering by a rough surface.

The justifications for this approach are rather heuristic: (i) the errors made are smaller than some other error sources in ray-tracing predictions and (ii) there is no better alternative. This area has been investigated extensively in the last 30 years, mostly due to its great importance in radar technology. Two main theories have evolved: the Kirchhoff theory and the perturbation theory.
The Kirchhoff Theory
The Kirchhoff theory is conceptually very simple and requires only a small amount of information – namely, the probability density function of the surface amplitude (height). The theory assumes that height variations are so small that different scattering points on the surface do not influence each other – in other words, that one point of the surface does not "cast a shadow" onto other points of the surface. This assumption is actually not fulfilled very well in wireless communications. Assuming that the above condition is actually fulfilled, surface roughness leads to a reduction in power of the specularly reflected ray, as radiation is also scattered in other directions (see the right-hand side of Figure 2.11). This power reduction can be described by an effective reflection coefficient ρrough. In the case of a Gaussian height distribution, this reflection factor becomes: (2.19) where σh is the standard deviation of the height distribution, k0 is the wavenumber 2π/λ, and ψ is the angle of incidence (defined as the angle between the wave vector and the surface). The term 2k0σh sin ψ is also known as the Rayleigh roughness. Note that for grazing incidence (ψ ≈ 0), the effect of the roughness vanishes, and the reflection becomes specular again.
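A minimal sketch of the Kirchhoff-theory result: for a Gaussian height distribution the smooth-surface reflection coefficient is scaled by exp(−2(k0·σh·sin ψ)²), which is one common way of writing the factor in Eq. (2.19). The numerical values are illustrative, not taken from the book.

```python
import math

def rough_surface_factor(sigma_h, wavelength, psi_rad):
    """Attenuation of the specular reflection for a Gaussian rough surface (Kirchhoff theory)."""
    k0 = 2 * math.pi / wavelength
    g = 2 * k0 * sigma_h * math.sin(psi_rad)   # the "Rayleigh roughness" term of the text
    return math.exp(-g ** 2 / 2)               # rho_rough / rho_smooth

# Illustrative case: 1 GHz (lambda = 0.3 m), 5 cm height standard deviation, 10 degree grazing angle
print(rough_surface_factor(0.05, 0.3, math.radians(10)))
# For grazing incidence (psi -> 0) the factor tends to 1, i.e. the reflection stays specular.
```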

Perturbation Theory
The perturbation theory generalizes the Kirchhoff theory, using not only the probability density function of the surface height but also its spatial correlation function (see Figure 2.12).
Figure 2.12 Geometry for perturbation theory of rough scattering.
Mathematically, the spatial correlation function is defined as: (2.20)



where r and r′ are (two-dimensional) location vectors, and Er is the expectation with respect to r. We need this information to find out whether one point on the surface can "cast a shadow" onto another point of the surface. If extremely fast amplitude variations are allowed, shadowing situations are much more common. The above definition enforces spatial statistical stationarity – i.e., the correlation is independent of the absolute location r. The correlation length Lc is defined as the distance so that W(Lc) = 0.5·W(0). The effect of surface roughness on the amplitude of a specularly reflected wave can be described by an "effective" (complex) dielectric constant, which in turn gives rise to an "effective" reflection coefficient as computed from Snell's law; for vertical polarization this effective dielectric constant can be computed explicitly. Comparing these results to the Kirchhoff theory, we find that there is good agreement in the case k0Lc >> 1, ψ >> 1/√(k0Lc). This agrees with the above discussion of the limits of the Kirchhoff theory: by assuming that the coherence length is long compared with the wavelength, there cannot be diffraction by a sudden "spike" in the surface; and by fulfilling ψ >> 1/√(k0Lc), it is assured that a wave incident under angle ψ cannot cast a shadow onto other points of the surface.
2.2 Propagation Effects with Mobile Radio
The major difficulties:
• The mobile antenna is below the surrounding buildings.
• Most communication is via scattering of electromagnetic waves from surfaces or diffraction over and around buildings.
• These multipath effects have both slow and fast aspects.
Slow fading
• Arises from large reflectors and diffracting objects along the path, distant from the terminal.
• With slow propagation changes, these factors contribute to the median path losses between a fixed transmitter and receiver.
• The statistical variation of these mean losses was modeled as a lognormal distribution for terrestrial applications.
Fast fading

• Rapid variation of signal levels when the user terminal moves over short distances.
• It is due to reflections off local objects and the motion of the terminal. That is, the received signal is the sum of a number of signals reflected from local surfaces, and these signals sum in a constructive or destructive manner.
• The resulting phase relationships depend on the relative path lengths to the local objects, the speed of motion, and the frequency of transmission.

Figure 2.13 Illustration of constructive or destructive interference.

2.2.1 Rayleigh Fading
In wireless communication a distinction is made between portable and mobile terminals.
• Portable terminals: easily moved, but communications occur when the terminal is stationary.
• Mobile terminals: easily moved, and communication can occur while the terminal is moving.
The complex phasor of the N signal rays is given by (2.23) where En is the electric field strength of the nth path and θn is its relative phase. Since the reflections can arrive from any direction, we assume that the relative phases are independent and uniformly distributed over [0, 2π]. The sum of random variables approaches the Gaussian distribution as the number of random variables gets large, i.e.,

(2.24) Where Zr and Zi are the real Gaussian random variables. Considering one of the components of the sum, the expectation of each component is

(2.25) = 0, where E denotes the statistical expectation operator.

The mean of the complex envelope is given by

(2.26) = 0. The variance (power) of the complex envelope is given by the mean-square value

(2.27)

(2.28) The difference of two random phases is a random phase. By symmetry, the power is equally distributed between the real and imaginary parts of the complex envelope. Since the complex envelope has zero mean, for σ² = P0/2 the probability density function of Zr in Eq. (2.24) is given by the Gaussian density function (2.29). Define the amplitude of the complex envelope as (2.30). The Rayleigh probability density function is (2.31). Integrating the Rayleigh probability density function yields the corresponding cumulative probability distribution function:

(2.32) The mean value of Rayleigh distribution is given by

(2.33)


Mean-square value is given by

(2.34) The root-mean-square (rms) amplitude is (2.35)
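The derivation above can be checked numerically. The sketch below sums N random phasors of equal strength (Eq. (2.23)) with phases uniform on [0, 2π] and compares the sample mean and rms of the resulting envelope with the Rayleigh values √(πP0/4) and √P0 implied by Eqs. (2.33)–(2.35). The number of paths and realizations are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_trials = 20, 200_000
p0 = 1.0                                    # total mean power of the envelope

# Eq. (2.23): sum of equal-amplitude rays with independent uniform phases
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_paths))
z = np.sqrt(p0 / n_paths) * np.exp(1j * phases).sum(axis=1)
r = np.abs(z)                               # envelope samples

print("sample mean:", r.mean(), " theory:", np.sqrt(np.pi * p0 / 4))
print("sample rms :", np.sqrt((r ** 2).mean()), " theory:", np.sqrt(p0))
```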

Figure 2.14 Rayleigh amplitude distributions.
The figure above indicates that there can be a wide variation in received signal strength due to local reflections. For a truly stationary receiver we would choose a location that minimizes the local reflections and provides the maximum received signal strength.
2.2.2 Rician Fading
The Rayleigh fading model assumes that all paths are relatively equal, i.e., that there is no dominant path. A direct line-of-sight path may, however, be present in mobile radio channels and indoor wireless systems; in satellite communications the presence of a direct path is usually required to close the link. In this case the reflected paths tend to be weaker than the direct path, and the complex envelope is (2.36) where the constant term represents the direct path and the summation represents the collection of reflected paths. This model is referred to as the Rician fading model. A key factor in the analysis is the ratio of the power in the direct path to the power in the reflected paths. This ratio is referred to as the Rician K factor and is defined as



(2.37) The Rician K factor is usually expressed in dB. The amplitude density function is (2.38) where I0(·) is the modified Bessel function of zeroth order.
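A short sketch, not from the book, that generates Rician envelope samples for a given K factor (linear ratio of direct-path power to scattered power) and estimates how often the envelope drops 10 dB below its rms value; it illustrates the statement below that deep fades become rarer as K grows. The K values and fade threshold are arbitrary choices.

```python
import numpy as np

def rician_envelope(k_factor, n, rng):
    """Envelope samples with unit mean power: direct-path power K/(K+1), scattered power 1/(K+1)."""
    los = np.sqrt(k_factor / (k_factor + 1.0))
    scatter = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5 / (k_factor + 1.0))
    return np.abs(los + scatter)

rng = np.random.default_rng(1)
for k_db in (0, 6, 12):                     # K factor in dB, as used in the text
    k = 10 ** (k_db / 10)
    r = rician_envelope(k, 500_000, rng)
    deep = np.mean(20 * np.log10(r / np.sqrt(np.mean(r ** 2))) < -10)
    print(f"K = {k_db:2d} dB -> P(fade deeper than 10 dB) ~ {deep:.4f}")
```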

Figure 2.15 Amplitude distribution for the Rician channel.
Deep fades are clearly less probable than with the Rayleigh channel, and the probability of their occurrence decreases as the K factor increases.
2.2.3 Doppler Shift
Doppler shift: when a receiver is moving toward the source, the zero crossings of the signal appear faster and therefore the received frequency is higher. The opposite effect occurs if the receiver is moving away from the source. For example, just as a train whistle or car horn appears to have a different pitch depending on whether it is moving towards or away from one's location, radio waves demonstrate the same phenomenon.

Figure 2.16 Illustration of the Doppler effect.
If the complex envelope emitted by the transmitter is A e^{j2πf0t}, with A(x) the amplitude and c the speed of light, then the signal at a point x along the x-axis is given by


(2.39) If x represents the position of the constant-velocity receiver, then we may write (2.40) where x0 is the receiver's initial position and v is its velocity. Substituting Eq. (2.40) into (2.39), the signal at the receiver is

(2.41) Focusing on the frequency term in the last exponent of this equation,

the received frequency is

(2.42) The Doppler shift is given by (2.43). The relationship between the Doppler frequency and velocity is (2.44). If the terminal motion and the direction of radiation are at an angle ψ, the shift can be expressed as (2.45). For operating frequencies between 100 MHz and 2 GHz and for speeds up to 100 km/hr, the Doppler shift can be as large as 185 Hz.
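The 185 Hz figure quoted above can be reproduced directly from the relationship fD = (v/c)·f0·cos ψ. The sketch below is a trivial check using 2 GHz and 100 km/h, with the terminal assumed to move straight towards the transmitter (ψ = 0).

```python
import math

def doppler_shift_hz(speed_kmh, f_carrier_hz, psi_deg=0.0):
    """Doppler shift f_D = (v/c) * f0 * cos(psi)."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    return (v / 3e8) * f_carrier_hz * math.cos(math.radians(psi_deg))

print(doppler_shift_hz(100, 2e9))            # ~185 Hz, matching the value in the text
print(doppler_shift_hz(100, 100e6))          # ~9 Hz at the lower end of the quoted band
```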

2.3 Channel Classification
• Large-scale effects – due to terrain, density and height of the buildings. They are characterized statistically by median path loss and lognormal shadowing.
• Small-scale effects – due to the local environment and the movement of the radio terminal. They are characterized statistically as fast Rayleigh fading.
• Channels are classified on the basis of the properties of the time-varying impulse response.
Types of Channel Classification
1. Time-Selective Channels
2. Frequency-Selective Channels
3. General Channel
4. WSSUS Channels
5. Coherence Time
6. Power-Delay Profile
7. Coherence Bandwidth
8. Stationary and Non-stationary Channels


2.3.1 Time-Selective Channels
For a time-selective channel, the channel impulse response is (2.46) where δ(τ) is the Dirac delta function or unit-impulse function. Frequency-flat channel: the frequency response of the channel is approximately constant and does not change the spectrum of the transmitted signal.
2.3.2 Frequency-Selective Channels
With large-scale effects, the complex phasor is

(2.47) Channel impulse response

(2.48) This channel is time invariant, but shows a frequency-dependent response.
2.3.3 General Channel
The time-varying impulse response is

(2.49) Received signal

(2.50) The time-varying frequency response is (2.51).
2.3.4 WSSUS Channels
A random process is wide-sense stationary if it has a mean that is time independent and a correlation function that depends only on the time difference. In multipath channels, the gain and phase shift at one delay are uncorrelated with the gain and phase shift at another delay:

(2.52) This is referred to as uncorrelated scattering. The combination of wide-sense stationarity and uncorrelated scattering is called WSSUS.
2.3.5 Coherence Time
Coherence time Tc is the time-domain dual of Doppler spread and is used to characterize the time-varying nature of the frequency dispersiveness of the channel in the time domain. The Doppler power spectrum is


(2.53) The Doppler spread and coherence time are inversely proportional to one another. That is,

(2.54) Coherence time is actually a statistical measure of the time duration over which the channel impulse response is essentially invariant, and quantifies the similarity of the channel response at different times. In other words, coherence time is the time duration over which two received signals have a strong potential for amplitude correlation. If the reciprocal bandwidth of the baseband signal is greater than the coherence time of the channel, then the channel will change during the transmission of the baseband message, thus causing distortion at the receiver. If the coherence time is defined as the time over which the time correlation function is above 0.5, then the coherence time is approximately (2.55) where fD is the maximum Doppler shift, given by fD = v/λ.
2.3.6 Power-Delay Profile
The power-delay profile provides an estimate of the average multipath power as a function of the relative delay τ: (2.56)
2.3.7 Coherence Bandwidth
The delay spread is a natural phenomenon caused by reflected and scattered propagation paths in the radio channel; the coherence bandwidth is a defined relation derived from the rms delay spread. Coherence bandwidth is a statistical measure of the range of frequencies over which the channel can be considered "flat" (i.e., a channel which passes all spectral components with approximately equal gain and linear phase). In other words, coherence bandwidth is the range of frequencies over which two frequency components have a strong potential for amplitude correlation. Two sinusoids with frequency separation greater than this are affected quite differently by the channel. If the coherence bandwidth is defined as the bandwidth over which the frequency correlation function is above 0.9, then the coherence bandwidth is approximately (2.57). The relationship between the time and frequency domains is

(2.58)
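A hedged numerical illustration of the duality expressed in Eqs. (2.54)–(2.58): the rules of thumb Tc ≈ 9/(16π·fD) for time correlation above 0.5 and Bc ≈ 1/(50·στ) for frequency correlation above 0.9 are the forms commonly associated with these definitions in the literature, and the Doppler shift and rms delay spread used here are example values only.

```python
import math

def coherence_time_s(f_doppler_hz):
    """Coherence time for time correlation above 0.5 (common rule of thumb for Eq. (2.55))."""
    return 9.0 / (16.0 * math.pi * f_doppler_hz)

def coherence_bandwidth_hz(rms_delay_spread_s):
    """Coherence bandwidth for frequency correlation above 0.9 (common rule of thumb for Eq. (2.57))."""
    return 1.0 / (50.0 * rms_delay_spread_s)

f_d = 185.0            # maximum Doppler shift in Hz (see the example in Section 2.2.3)
tau_rms = 1e-6         # assumed rms delay spread of 1 microsecond
print(coherence_time_s(f_d) * 1e3, "ms")              # ~1 ms: channel roughly constant over a millisecond
print(coherence_bandwidth_hz(tau_rms) / 1e3, "kHz")   # ~20 kHz: wider signals experience frequency selectivity
```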

2.3.8 Stationary and Non-stationary Channels
• Stationary models for channel characteristics are convenient for analysis but are often, except for short time intervals, not an accurate description of reality.
• Terrestrial mobile channels are highly non-stationary because:
• The propagation path often consists of several discontinuities.
• The environment itself is physically non-stationary.
• The interference caused by other users sharing the same frequency channel varies dynamically.

2.4 Link Calculations
A link budget is an accounting of all of the gains and losses from the transmitter, through the medium (free space, cable, waveguide, fiber, etc.), to the receiver in a telecommunication system. It accounts for the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and miscellaneous losses. Randomly varying channel gains such as fading are taken into account by adding some margin depending on the anticipated severity of their effects. A simple link budget equation looks like this:
Received Power (dBm) = Transmitted Power (dBm) + Gains (dB) − Losses (dB)
For a line-of-sight radio system, the primary source of loss is the decrease of the signal power due to uniform propagation, proportional to the inverse square of the distance.
• Transmitting antennas are for the most part not isotropic (omnidirectional) radiators.
• Completely omnidirectional antennas are rare in telecommunication systems, so almost every link budget equation must consider antenna gain.
• Transmitting antennas typically concentrate the signal power in a favored direction, normally that in which the receiving antenna is placed.
• Transmitter power is effectively increased (in the direction of highest antenna gain). This systemic gain is expressed by including the antenna gain in the link budget.
• The receiving antenna is also typically directional, and when properly oriented collects more power than an isotropic antenna would; as a consequence, the receiving antenna gain (in decibels from isotropic, dBi) adds to the received power.
• The antenna gains (transmitting or receiving) are scaled by the wavelength of the radiation in question. This step may not be required if adequate systemic link budgets are achieved.
A link budget equation including all these effects, expressed logarithmically, might look like this:
PRX = PTX + GTX − LTX − LFS − LM + GRX − LRX   (2.59)
where:
PRX = received power (dBm)
PTX = transmitter output power (dBm)
GTX = transmitter antenna gain (dBi)
LTX = transmitter losses (coax, connectors, ...) (dB)
LFS = free space loss or path loss (dB)
LM = miscellaneous losses (fading margin, body loss, polarization mismatch, other losses, ...) (dB)
GRX = receiver antenna gain (dBi)
LRX = receiver losses (coax, connectors, ...) (dB)
The loss due to propagation between the transmitting and receiving antennas is often called the path loss. Path loss depends on distance and frequency. For the link to work, the total of the transmit contributions plus the propagation contributions plus the receive contributions (all in dB) must be greater than 0.
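The logarithmic budget of Eq. (2.59) translates directly into a few additions and subtractions. The sketch below is a minimal example; the free-space path loss term uses the common 32.44 + 20·log10(d_km) + 20·log10(f_MHz) form, and all numerical inputs (powers, gains, losses, and the receiver sensitivity used for the link margin discussed in the next paragraph) are assumptions, not values from the book.

```python
import math

def free_space_loss_db(d_km, f_mhz):
    """Free-space path loss with distance in km and frequency in MHz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def received_power_dbm(p_tx_dbm, g_tx_dbi, l_tx_db, l_fs_db, l_misc_db, g_rx_dbi, l_rx_db):
    """Eq. (2.59): P_RX = P_TX + G_TX - L_TX - L_FS - L_M + G_RX - L_RX."""
    return p_tx_dbm + g_tx_dbi - l_tx_db - l_fs_db - l_misc_db + g_rx_dbi - l_rx_db

# Assumed example link: 5 km at 900 MHz, 43 dBm TX, 15/2 dBi antennas, modest cable and fade losses
l_fs = free_space_loss_db(5.0, 900.0)
p_rx = received_power_dbm(43.0, 15.0, 2.0, l_fs, 10.0, 2.0, 1.0)
sensitivity_dbm = -104.0                    # assumed minimum received signal level
print(round(p_rx, 1), "dBm received,", round(p_rx - sensitivity_dbm, 1), "dB link margin")
```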

If the transmitted power minus the free space loss of the link path is greater than the minimum received signal level of the receiving radio, then a link is possible. The difference between the minimum received signal level and the actual received power is known as the link margin. The link margin must be positive.
Importance of the link budget:
1. It is used to predict the performance before the link is established.
2. It shows in advance whether the link will be acceptable.
3. It shows whether one option is better than another.
4. It provides a criterion to evaluate actual performance.
Types
1. Free-space link budget: ensures that sufficient power is available at the receiver to close the link and meet the SNR requirement. For free space propagation the basic budget equation is given by

(2.60)
where N0 = kTe, Lp is the path loss, and Te is the absolute equivalent noise temperature of the system.

2. Satellite-to-mobile-terminal link budget: link budgets are important in earth–moon–earth communications, as the albedo of the moon is very low and the path loss over the 770,000 km return distance is extreme. High transmit power and high-gain antennas must be used. Although the deep space network has been able to maintain the necessary technological advances to maintain the link, the received field strength is still many billions of times weaker than that of a battery-powered wristwatch. For satellite applications the basic link budget equation is
C/N0 = EIRP − Lp + (GR/Te) − k   (2.61)
where the overall link carrier-to-noise ratio is (2.62).

Terrestrial link budget: Due to urban conditions the reach of the terrestrial link is limited by the amount of multipath interference. Since the effect of multipath are migrated using a Guard Interval (GI) between two consecutive OFDM symbols, the distance between the gap filler and the farthest position of the receiver is dictated by the time taken by delayed versions of the OFDM symbols to reach the receiver antenna. This is similar to free space link budget but their calculations are different. The received signal strength S and the carrier to noise ratio are related by (2.63) Where F is the noise factor ϱϵ 

Principles of Wireless Communications 2.5

Narrowband models

2.5.1 Modelling of Small-Scale and Large-Scale Fading For a narrowband channel, the impulse response is a delta function with a time-varying attenuation, so that for slowly time-varying channels: K WIJ  Į W į IJ  (2.64) the variations in amplitude over a small area are typically modeled as a random process, with an autocorrelation function that is determined by the Doppler spectrum. The complex amplitude is modeled as a zero-mean, circularly symmetric complex Gaussian random variable. When considering variations in a somewhat larger area, the small-scale averaged amplitude F obeys a ORJQRUPDOGLVWULEXWLRQZLWKVWDQGDUGGHYLDWLRQ ı)W\SLFDOO\YDOXHVRI ı)DUHWRG%7KH spatial autocorrelation function of lognormal shadowing is usually assumed to be a double-sided exponential, with correlation distances between 5 and 100 m, depending on the environment. 2.5.2 Path Loss Models Next, we consider models for the received field strength, averaged over both small-scale and the large-scale fading. This quantity is modeled completely deterministically. The most VLPSOHPRGHOVRIWKDWNLQGDUHWKHIUHHVSDFHSDWKORVVPRGHODQGWKH³EUHDNSRLQW´PRGHO ZLWK n = 2 valid for distances up to d < dbreak, and n = 4 .In more sophisticated models, described below, path loss depends not only on distance but also on some additional external parameters like building height, measurement environment (e.g., suburban environment), etc. The Okumura±Hata Model The Okumura±Hata model is by far the most popular model in that category. Path loss (in dB) is written as PL = A + B log(d) + C (2.65) where A, B, and C are factors that depend on frequency and antenna height. Factor A increases with carrier frequency and decreases with increasing height of the BS and Mobile Station (MS). Also, the path loss exponent (proportional to B) decreases with increasing height of the BS. The model is only intended for large cells, with the BS being placed higher than the surrounding rooftops. Advantages: 1. Accuracy and suitable for LAN mobile radio system 2. For urban and suburban areas 3. Suitable for large cell mobile system. Disadvanges: 1. It is based on measured data and does not provide any analytical explanation 2. Not suitable for rural areas 3. It has not any of the path specific correction. The COST2 231±Walfish±Ikegami Model The COST 231±Walfish±Ikegami model is also suitable for microcells and small macrocells, as it has fewer restrictions on the distance between the BS and MS and the antenna ϲϬ 

height. In this model, total path loss consists of the free space path loss PL0, the multiscreen loss Lmsd along the propagation path, and the attenuation from the last roof edge to the MS, Lrts (rooftop-to-street diffraction and scatter loss) (Figure 2.17). Free space loss depends on carrier frequency and distance, while the rooftop-to-street diffraction loss depends on frequency, the width of the street, and the height of the MS, as well as on the orientation of the street with respect to the connection line BS–MS. Multiscreen loss depends on the distance between buildings and the distance between the BS and MS, as well as on carrier frequency, BS height, and rooftop height.
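To make the Okumura–Hata structure of Eq. (2.65) concrete, the sketch below evaluates the commonly published urban-area Hata formulation, in which A, B, and the mobile-antenna correction are explicit functions of frequency and antenna heights. The coefficient values are the standard Hata ones from the open literature, not parameters given in this text, so treat the sketch as illustrative only.

```python
import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median urban path loss (dB) using the widely cited Hata form of Eq. (2.65)."""
    # Mobile antenna correction for a small/medium city (standard Hata expression)
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    A = 69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m) - a_hm
    B = 44.9 - 6.55 * math.log10(h_base_m)
    return A + B * math.log10(d_km)

# Example: 900 MHz carrier, 30 m base station, 1.5 m mobile, 5 km cell radius
print(f"PL = {hata_urban_path_loss(900, 30, 1.5, 5):.1f} dB")  # roughly 151 dB
```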

Figure 2.17 Parameters in the COST 231–Walfisch–Ikegami model.

The Motley–Keenan Model
For indoor environments, wall attenuation plays an important role. Based on this consideration, the Motley–Keenan model suggests that path loss (expressed in decibel (dB)) can be written as

PL = PL0 + 10n log(d/d0) + Fwall + Ffloor   (2.65)

where Fwall is the sum of attenuations by the walls that a Multi Path Component (MPC) has to penetrate on its way from the transmitter (TX) to the receiver (RX); similarly, Ffloor describes the summed-up attenuation of the floors located between the BS and MS. Depending on the building material, attenuation by one wall can lie between 1 and 20 dB in the 300 MHz–5 GHz range, and can be much higher at higher frequencies. The Motley–Keenan model is a site-specific model, in the sense that it requires knowledge of the location of the BS and MS, and the building plan. It is, however, not very accurate, as it neglects propagation paths that "go around" the walls. For example, propagation between two widely separated offices can occur either through many walls (quasi-Line Of Sight – LOS), or through a corridor (the signal leaves the office, propagates down a corridor, and enters from there into the office of the RX). The latter type of propagation path can often be more efficient, but is not taken into account by the Motley–Keenan model.
Advantages: 1. It is a site-specific model 2. It is also called the BS and MS environmental model 3. It is based on the building plan.
Disadvantages: 1. It is not very accurate. 2. It neglects propagation paths that go around the walls.

2.6 Wideband Models
2.6.1 Tapped Delay Line Models
The most commonly used wideband model is an N-tap Rayleigh-fading model. Adding an LOS component does not pose any difficulties; the impulse response then just becomes


h(t, τ) = a0 δ(τ − τ0) + Σi ci(t) δ(τ − τi)   (2.66)

where the LOS component a0 does not vary with time, while the ci(t) are zero-mean complex Gaussian random processes whose autocorrelation functions are determined by their associated Doppler spectra. In most cases τ0 = τ1, so the amplitude distribution of the first tap is Rician. A widely used special case has just two fading taps (N = 2), where no LOS component is allowed. This is the simplest stochastic fading channel exhibiting delay dispersion, and thus very popular for theoretical analysis; it is alternatively called the two-path channel, two-delay channel, or two-spike channel. Another popular channel model consists of a purely deterministic LOS component plus one fading tap (N = 1), whose delay τ1 can differ from τ0. This model is widely used for satellite channels – in these channels there is almost always an LOS connection, and the reflections from buildings near the RX give rise to a delayed fading component. The channel reduces to a flat-fading Rician channel when τ1 = τ0.

2.6.2 Models for the Power Delay Profile
It has been observed in many measurements that the Power Delay Profile (PDP) can be approximated by a one-sided exponential function:

P(τ) ∝ exp(−τ/Sτ) for τ ≥ 0   (2.67)

In other environments, the PDP is the sum of several delayed exponential functions, corresponding to multiple clusters of Interacting Objects (IOs):

P(τ) = Σl (Pc,l / Sτ,l) exp[−(τ − τ0,l)/Sτ,l]   (2.68)

where Pc,l, τ0,l, and Sτ,l are the power, delay, and delay spread of the lth cluster, respectively. The sum of all cluster powers has to add up to the narrowband power. For a PDP in the form of Eq. (2.67), the rms delay spread characterizes delay dispersion. In the case of multiple clusters, Eq. (2.68), the rms delay spread is defined mathematically, but often has limited physical meaning. Still, the vast majority of measurement campaigns available in the literature use just this parameter for the characterization of delay dispersion. Typical values of the delay spread for different environments are as follows:
• Indoor residential buildings: 5–10 ns, but up to 30 ns has been measured.
• Indoor office environments: between 10 and 100 ns, but even 300 ns has been measured.
• Factories and airport halls: 50 to 200 ns.
• Microcells: 5–100 ns for LOS situations; 100–500 ns for non-LOS.
• Tunnels and mines: 20 ns for empty tunnels; 100 ns for car-filled tunnels.
• Typical urban and suburban environments: between 100 and 800 ns.
• Bad Urban (BU) and Hilly Terrain (HT) environments: delay spreads on the order of several μs occur in mountainous terrain.
The delay spread is a function of the distance BS–MS, increasing with distance approximately as d^ε, where ε = 0.5 in urban and suburban environments, and ε = 1 in

mountainous regions. The delay spread also shows considerable large-scale variations. Several papers find that the delay spread has a lognormal distribution with a variance of typically 2–3 dB in suburban and urban environments.

2.6.3 Models for the Arrival Times of Rays and Clusters
The PDPs modeled so far were continuous functions of the delay; this implies that the RX bandwidth was so small that different discrete MPCs could not be resolved and were "smeared" into a continuous PDP. For systems with higher bandwidth, MPCs can be resolved. In that case, it is advantageous to describe the PDP by the arrival times of the MPCs plus an "envelope" function that describes the power of the MPCs as a function of delay. In order to statistically model the arrival times of MPCs, a first-order approximation assumes that the objects causing reflections in an urban area are located randomly in space, giving rise to a Poisson distribution for the excess delays. Two models have been developed to reflect this fact: the Δ–K model and the Saleh–Valenzuela (SV) model.
The Δ–K model has two states: S1, where the mean arrival rate is λ0(t), and S2, where the mean arrival rate is K·λ0(t). The process starts in S1. If an MPC arrives at time t, a transition is made to S2 for the interval [t, t + Δ]. If no further paths arrive in this interval, a transition is made back to S1 at the end of the interval. Note that for K = 1 or Δ = 0, the above-mentioned process reverts to a standard Poisson process.
The SV model takes a slightly different approach. It assumes a priori the existence of clusters. Within each cluster, the MPCs arrive according to a Poisson process, and the arrival times of the clusters themselves are also Poisson distributed (but with a different interarrival time constant). Furthermore, the powers of the MPCs within a cluster decrease exponentially with delay, and the power of the clusters follows a (different) exponential decay (see Figure 2.18). Mathematically, the following discrete-time impulse response is used:

h(t) = Σl Σk βk,l exp(jθk,l) δ(t − Tl − τk,l)   (2.69)

where Tl is the arrival time of the lth cluster and τk,l is the delay of the kth MPC within that cluster.
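A compact way to see how the SV description works is to generate cluster and ray arrival times as two nested Poisson processes and attach exponentially decaying powers. The rates and decay constants below are arbitrary illustration values, not parameters quoted in this text.

```python
import random
import math

def saleh_valenzuela_taps(cluster_rate, ray_rate, cluster_decay, ray_decay,
                          max_delay, seed=1):
    """Return a list of (delay, mean_power) taps following the SV construction."""
    rng = random.Random(seed)
    taps = []
    t_cluster = 0.0
    while t_cluster < max_delay:
        t_ray = 0.0
        while t_cluster + t_ray < max_delay:
            power = math.exp(-t_cluster / cluster_decay) * math.exp(-t_ray / ray_decay)
            taps.append((t_cluster + t_ray, power))
            t_ray += rng.expovariate(ray_rate)      # Poisson ray arrivals within a cluster
        t_cluster += rng.expovariate(cluster_rate)  # Poisson cluster arrivals
    return taps

# Illustrative parameters (units: ns for delays, 1/ns for rates)
taps = saleh_valenzuela_taps(cluster_rate=0.01, ray_rate=0.2,
                             cluster_decay=30.0, ray_decay=10.0, max_delay=200.0)
print(f"{len(taps)} multipath components generated")
```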

Figure 2.18 The Saleh–Valenzuela model.

2.6.4 Standardized Channel Model
The model distinguishes between four different types of macrocellular environments, namely typical urban (TU), bad urban (BU), rural area (RA), and hilly terrain (HT). Depending on the environment, the PDP has a single-exponential decay, or it consists of two single-exponential functions (clusters) that are delayed with respect to each other. The second cluster corresponds to groups of faraway high-rise buildings or mountains that act as efficient IOs, and thus give rise to a group of delayed MPCs with considerable power. Bandwidth is 200 kHz or less. For simulation of third-generation cellular systems, which have a bandwidth of 5 MHz, the

International Telecommunications Union (ITU) specified another set of models that accounts for the larger bandwidth. This set of models distinguishes between pedestrian, vehicular, and indoor environments.

QUESTIONS BANK
Part A
1. What are the propagation mechanisms of EM waves?
2. What is the significance of a propagation model?
3. What do you mean by small scale fading?
4. What are the factors influencing small scale fading?
5. Define large scale propagation.
6. Differentiate the propagation effects with mobile radio.
7. Define Doppler shift.
8. Differentiate time selective and frequency selective channels.
9. Define coherence time and coherence bandwidth.
10. What do you mean by WSSUS channels?
11. What is the free space propagation model?
12. Define EIRP.
13. Explain path loss.
14. What are intrinsic impedance and Brewster angle?
15. What is scattering?
16. Define radar cross section.
17. Name some of the outdoor propagation models.
18. Define indoor propagation models.
19. Mention some indoor propagation models.
20. What are the merits and demerits of Okumura's model?
21. List the advantages and disadvantages of the Hata model.
22. What is the necessity of a link budget?
23. Distinguish between narrowband and wideband systems. Nov. 2012
24. What is a link budget calculation? Nov. 2012
25. List the different types of wireless channels. May 2012
26. What is frequency selective fading? How to avoid the fading problem? May 2012
27. Compute the Rayleigh distance of a square antenna with 20 dB gain. Nov. 2011
28. List out any two properties of wideband channels. Nov. 2011
29. State the difference between narrowband and wideband systems. Nov./Dec. 2013
30. Find the far field distance for an antenna with maximum dimension of 1 m and operating frequency of 900 MHz. Nov./Dec. 2013

PART B
1. i. Describe any two methods of diffraction by multiple screens. (8) Nov. 2011
   ii. Discuss about the ultra wide band channel. (8)
2. i. Compare coherence bandwidth and coherence time. (8) Nov. 2011
   ii. Discuss the mathematical formulation for narrowband and wideband systems, with relevant figures. (8)
3. i. Explain the free space path loss and derive the gain expression. (8) May 2012
   ii. Describe in detail the two-ray model propagation mechanism. (8)


4. i. Define the following: autocorrelation, cross-correlation, and power spectral density for the narrowband fading model. (8) May 2012
   ii. What is the need for link calculation? Explain with a suitable example. (8)
5. i. How is the received signal strength predicted using the free space propagation model? Explain. (10) Nov. 2012
   ii. Find the far-field distance for an antenna with maximum dimension of 1 m and operating frequency of 900 MHz. (6)
6. i. With a system-theoretic description, explain the characteristics of time-dispersive channels. (8)
   ii. Explain the three basic propagation mechanisms in a mobile communication system. (8)
7. Briefly explain the factors that influence small scale fading. (8) Nov./Dec. 2013
8. Briefly explain the three basic propagation mechanisms which impact propagation in a mobile communication system. (8) Nov./Dec. 2013
9. What is the Brewster angle? Calculate the Brewster angle for a wave impinging on ground having a permittivity of 4. Nov./Dec. 2013

TWO MARK QUESTIONS WITH ANSWER 2.1.

What are the propagation mechanisms of EM waves? The four propagation mechanisms of EM waves are i. Free space propagation ii. Reflection iii. Diffraction iv. Scattering

2.2.

What is the significance of propagation model? The major significance of propagation model are: i. Propagation model predicts the parameter of receiver. ii. It predicts the average received signal strength at a given distance from the transmitter.

2.3.

What do you mean by small scale fading? Rapid fluctuations of the amplitude, phase as multipath delays of a radio signal over a short period of time is called small scale fading. 2.4.

What are the factors influencing small scale fading? The factors which influence small scale fading are: Multipath propagation, Speed of the mobile, Speed of surrounding objects and the transmission bandwidth of the signal. 2.5.

When does large scale propagation occur? Large scale propagation occurs due to the general terrain and the density and height of buildings and vegetation.

2.6.

Differentiate the propagation effects with mobile radio.
Slow fading:
• Slow variations in the signal strength.
• Occurs when the mobile station (MS) moves slowly.
• Occurs when the large reflectors and diffracting objects along the transmission path are distant from the terminal.
Fast fading:
• Rapid variations in the signal strength.
• Caused by reflections of the signal from local objects.
• Occurs when the user terminal (MS) moves over short distances.
• Examples: Rayleigh fading, Rician fading, and Doppler shift.

2.7.

Define Doppler shift. If the receiver is moving towards the source, then the zero crossings of the signal appear faster and the received frequency is higher.The opposite effect occurs if the receiver is moving away from the source. The resulting chance in frequency is known as the Doppler shift (fD).

2.8.

Differentiate time selective and frequency selective channel. The gain and the signal strength of the received signal are time varying means then the channel is described as time selective channel. The frequency response of the time selective channel is constant so that frequency flat channel. The channel is time invariant but the impulse response of the channel show a frequency-dependent response so called frequency selective channel. 2.9.

Define coherence time and coherence bandwidth. Coherence time is the maximum duration for which the channel can be assumed to be approximately constant. It is the time separation of the two time domain samples. Coherence bandwidth is the frequency separation of the two frequency domain samples. 2.10.

What do you mean by WSSUS channels? In multipath channels, if the gain and phase shift at one delay are uncorrelated with those at any other delay, the channel exhibits uncorrelated scattering; combined with wide-sense stationarity, this is known as a WSSUS channel. 2.11. What is the free space propagation model? The free space propagation model is used to predict the received signal strength when there is an unobstructed line-of-sight path between the transmitter and the receiver. The Friis free space equation is given by

Pr(d) = Pt Gt Gr λ² / ((4π)² d² L)
where Pt is the transmitted power, Gt and Gr are the transmitter and receiver antenna gains, d is the T–R separation, L is the system loss factor (L ≥ 1), and λ is the wavelength. The factor (λ/4πd)² is also known as the free space loss factor.
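The Friis equation above can be checked numerically. The sketch below is a minimal illustration with assumed example values (unity-gain antennas, no system loss); it is not tied to any exercise in this text.

```python
import math

def friis_received_power(pt_w, gt, gr, freq_hz, d_m, loss=1.0):
    """Received power (W) from the Friis free space equation."""
    lam = 3.0e8 / freq_hz                      # wavelength in metres
    return pt_w * gt * gr * lam**2 / ((4 * math.pi)**2 * d_m**2 * loss)

# Assumed example: 1 W transmitter, isotropic antennas, 900 MHz, 1 km separation
pr = friis_received_power(pt_w=1.0, gt=1.0, gr=1.0, freq_hz=900e6, d_m=1000.0)
print(f"Received power: {pr:.3e} W ({10*math.log10(pr/1e-3):.1f} dBm)")
```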

Principles of Wireless Communications 2.12. Define EIRP. EIRP (Equivalent Isotropically Radiated Power) of a transmitting system in a given direction is defined as the transmitter power that would be needed, with an isotropic radiator, to produce the same power density in the given direction. EIRP=PtGt Where Pt-transmitted power in W Gt-transmitting antenna gain 2.13.

Explain path loss. The path loss is defined as the difference (in dB) between the effective transmitted power and the received power. Path loss may or may not include the effect of the antenna gains. PL = A + B log(d) + C 2.14. What are intrinsic impedance and Brewster angle? Intrinsic impedance is defined as the ratio of the electric to the magnetic field for a uniform plane wave in the particular medium. The Brewster angle is the angle at which no reflection occurs in the medium of origin. The Brewster angle is denoted by θB and satisfies sin(θB) = sqrt(ε1/(ε1 + ε2)).

2.15. What is scattering? When a radio wave impinges on a rough surface, the reflected energy is spread out in all directions due to scattering. 2.16. Define radar cross section. The Radar Cross Section of a scattering object is defined as the ratio of the power density of the signal scattered in the direction of the receiver to the power density of the radio wave incident upon the scattering object, and it has units of square meters. 2.17. Name some of the outdoor propagation models. Some of the commonly used outdoor propagation models are: i. Longley–Rice model ii. Durkin's model iii. Okumura model. 2.18.

Define indoor propagation models. The indoor propagation models are used to characterizing radio propagation inside the buildings. The distances covered are much smaller, and the variability of the environment is much greater for smaller range of Transmitter and receiver separation distances. Features such as lay-out of the building, the construction materials, and the building type strongly influence the propagation within the building.


2.19. Mention some indoor propagation models. Some of the indoor propagation models are: i. Log-distance path loss model ii. Multiple breakpoint model iii. Attenuation factor model. 2.20. What are the merits and demerits of Okumura's model? Merits: Accuracy in parameter prediction. Suitable for modern land mobile radio systems. Urban and suburban areas are analyzed. Demerits: Rural areas are not analyzed. The analytical explanation is not sufficient. 2.21.

List the advantages and disadvantages of Hata model? Advantages: Suitable for large cell mobile system. Cell radius on the order of 1km is taken for analysis. Disadvantages: Not suitable for PCS model. This model does not have any path specific correction. 2.22. What is the necessity of link budget? The necessities of link budget are: i. A link budget is the clearest and most intuitive way of computing the required Transmitter power. It tabulates all equations that connect the Transmitter power to the received SNR ii. It is reliable for communications. iii. It is used to ensure the sufficient receiver power is available. iv. To meet the SNR requirement link budget is calculated.


Principles of Wireless Communications UNIT III WIRELESS TRANSCEIVERS 3.1 Structure of a Wireless Communication Link Transceiver Block Structure In this section, we describe a block diagram of a wireless communication link, and give a brief description of the different blocks. Figure 3.1 shows a functional block diagram of a communications link. In most cases, the goal of a wireless link is the transmission of information from an analog information source (microphone, video camera) via an analog wireless propagation channel to an analog information sink (loudspeaker, TV screen). The transmitter (TX) can then add redundancy in the form of a forward error correction code, in order to make it more resistant to errors introduced by the channel (note that such encoding is done for most, but not all, wireless systems). The encoded data are then used as input to a modulator, which maps the data to output waveforms that can be transmitted. By transmitting these symbols on specific frequencies or at specific times, different users can be distinguished. The signal is then sent through the propagation channel, which attenuates and distorts it, and adds noise. At the receiver (RX), the signal is received by one or more antennas. The different users are separated by receiving signals only at a single frequency. If the channel is delay dispersive, then an equalizer can be used to reverse that dispersion, and eliminate intersymbol interference. Afterwards, the signal is demodulated, and a channel decoder eliminates most of the errors that are present in the resulting bit stream. A source decoder finally maps this bit stream to an analog information stream that goes to the information sink (loudspeaker, TV monitor, etc.); in the case when the information was originally digital, this last stage is omitted. Figures 3.2 and 3.3 show a more detailed block diagram of a digital TX and RX that concentrate on the hardware aspects and the interfaces between analog and digital components:

Figure 3.1 Block diagram of a transmitter and receiver
• The information source provides an analog source signal and feeds it into the source ADC (Analog to Digital Converter). This ADC first band-limits the signal from the analog information source (if necessary), and then converts the signal into a stream of digital data at a certain sampling rate and resolution (number of bits per sample). For example, speech would typically be sampled at 8 ksamples/s, with 8-bit resolution, resulting in a data stream at 64 kbit/s. For the transmission of digital data, these steps can be omitted, and the digital source directly provides the input to interface "G".
• The source coder uses a priori information on the properties of the source data in order to reduce redundancy in the source signal. This reduces the amount of source data to be transmitted,

and thus the required transmission time and/or bandwidth. For example, the Global System for Mobile communications (GSM) speech coder reduces the source data rate from the 64 kbit/s mentioned above to 13 kbit/s. Similar reductions are possible for music and video (MPEG standards). Also, fax information can be compressed significantly: one thousand subsequent "0" symbols (representing "white" color), which would have to be represented by 2,000 bits, can be replaced by the statement "what follows now are 1,000 symbols '0'", which requires far fewer bits. For a typical fax, compression by a factor of 10 can be achieved. The source coder increases the entropy (information per bit) of the data at interface F; as a consequence, bit errors have a greater impact. For some applications, source data are encrypted in order to prevent unauthorized listening in.
• The channel coder adds redundancy in order to protect data against transmission errors. This increases the data rate that has to be transmitted at interface E – e.g., GSM channel coding increases the data rate from 13 to 22.8 kbit/s. Channel coders often use information about the statistics of error sources in the channel (noise power, interference statistics) to design codes that are especially well suited for certain types of channels. Data can be sorted according to importance; more important bits then get stronger protection. Furthermore, it is possible to use interleaving to break up error bursts; note that interleaving is mainly effective if it is combined with channel coding.
• Signaling adds control information for the establishing and ending of connections, for associating information with the correct users, for synchronization, etc. Signaling information is usually strongly protected by error correction codes.
• The multiplexer combines user data and signaling information, and combines the data from multiple users. If this is done by time multiplexing, the multiplexing requires some time compression. In GSM, multiaccess multiplexing increases the data rate from 22.8 to 182.4 kbit/s (8 · 22.8) for the standard case of eight participants. The addition of signaling information increases the data rate to 271 kbit/s.
• The baseband modulator assigns the gross data bits (user data and signaling at interface D) to complex transmit symbols in the baseband. Spectral properties, intersymbol interference, peak-to-average ratio, and other properties of the transmit signal are determined by this step. The output from the baseband modulator (interface C) provides the transmit symbols in oversampled form, discrete in time and amplitude. Oversampling and quantization determine the aliasing and quantization noise. Therefore, high resolution is desirable, and the data rate at the output of the baseband modulator should be much higher than at the input. For a GSM system, an oversampling factor of 16 and 8-bit amplitude resolution result in a data rate of about 70 Mbit/s.
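The GSM numbers quoted in this description form a simple rate chain that can be verified step by step. The sketch below reproduces that arithmetic; the factor of 2 for separate I and Q sample streams in the final check is our assumption, not something stated in the text.

```python
# Walk through the GSM rate chain quoted in the text (rates in kbit/s).
speech_pcm = 64.0                   # source ADC output
after_source_coding = 13.0          # GSM speech coder output
after_channel_coding = 22.8         # redundancy added at interface E
after_multiplexing = 8 * after_channel_coding   # eight users time-multiplexed
gross_with_signaling = 271.0        # value quoted in the text

print(f"multiplexed rate: {after_multiplexing:.1f} kbit/s")   # 182.4 kbit/s

# Rough check of the ~70 Mbit/s figure at the baseband-modulator output,
# assuming 16x oversampling, 8-bit amplitude resolution and separate I and Q
# sample streams (the I/Q factor of 2 is an assumption).
modulator_output_mbit = gross_with_signaling * 16 * 8 * 2 / 1000
print(f"baseband modulator output: about {modulator_output_mbit:.0f} Mbit/s")
```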



Figure 3.2 Block diagram of a radio link.

Figure 3.3 Block diagram of a digital receiver chain for mobile communications.
• The TX Digital to Analog Converter (DAC) generates a pair of analog, discrete-amplitude voltages corresponding to the real and imaginary parts of the transmit symbols, respectively.

Principles of Wireless Communications ‡The analog low-pass filter in the TX eliminates the (inevitable) spectral components outside the desired transmission bandwidth. These components are created by the out-of-band emission of an (ideal) baseband modulator, which stem from the properties of the chosen modulation format. Furthermore, imperfections of the baseband modulator and imperfections of the DAC lead to additional spurious emissions that have to be suppressed by the TX filter. ‡The TX Local Oscillator (LO) provides an unmodulated sinusoidal signal, corresponding to one of the admissible center frequencies of the considered system. The requirements for frequency stability, phase noise, and switching speed between different frequencies depend on the modulation and multiaccess method. ‡The upconverter converts the analog, filtered baseband signal to a passband signal by mixing it with the LO signal. Upconversion can occur in a single step, or in several steps. Finally, amplification in the Radio Frequency (RF) domain is required. ‡ The RF TX filter eliminates out-of-band emissions in the RF domain. Even if the low-pass filter succeeded in eliminating all out-of-band emissions, upconversion can lead to the creation of additional out-of-band components. Especially, nonlinearities of mixers and amplifiers lead to LQWHUPRGXODWLRQ SURGXFWV DQG ³VSHFWUDO UHJURZWK´ ± i.e., creation of additional out-of-band emissions. ‡ The (analog) propagation channel attenuates the signal, and leads to delay and frequency dispersion. Furthermore, the environment adds noise (Additive White Gaussian Noise ± AWGN) and co-channel interference. ‡ The RX filter performs a rough selection of the received band. The bandwidth of the filter corresponds to the total bandwidth assigned to a specific service, and can thus cover multiple communications channels belonging to the same service. ‡The low-noise amplifier amplifies the signal, so that the noise added by later components of the RX chain has less effect on the Signal-to-Noise Ratio (SNR). Further amplification occurs in the subsequent steps of down conversion. ‡The RX LO provides sinusoidal signals corresponding to possible signals at the TX LO. The frequency of the LO can be fine-tuned by a carrier recovery algorithm (see below), to make sure that the LOs at the TX and the RX produce oscillations with the same frequency and phase. ‡The RX downconverter converts the received signal (in one or several steps) into baseband. In baseband, the signal is thus available as a complex analog signal. ‡The RX low-pass filter provides a selection of desired frequency bands for one specific user (in contrast to the RX bandpass filter that selects the frequency range in which the service operates). It eliminates adjacent channel interference as well as noise. The filter should influence the desired signal as little as possible. ‡The Automatic Gain Control (AGC) amplifies the signal such that its level is well adjusted to the quantization at the subsequent ADC. ‡ The RX ADC converts the analog signal into values that are discrete in time and amplitude. The required resolution of the ADC is determined essentially by the dynamics of the subsequent signal processing. The sampling rate is of limited importance as long as the conditions of the sampling theorem are fulfilled. Oversampling increases the requirements for the ADC, but simplifies subsequent signal processing. 
• Carrier recovery determines the frequency and phase of the carrier of the received signal, and uses it to adjust the RX LO.

Principles of Wireless Communications ‡The baseband demodulator obtains soft-decision data from digitized baseband data, and hands them over to the decoder. The baseband demodulator can be an optimum, coherent demodulator, or a simpler differential or incoherent demodulator. This stage can also include further signal processing like equalization. ‡If there are multiple antennas, then the RX either selects the signal from one of them for further processing or the signals from all of the antennas have to be processed (filtering, amplification, downconversion). In the latter case, those baseband signals are then either combined before EHLQJ IHG LQWR D FRQYHQWLRQDO EDVHEDQG GHPRGXODWRU RU WKH\ DUH IHG GLUHFWO\ LQWR D ³MRLQW´ demodulator that can make use of information from the different antenna elements. ‡ Symbol-timing recovery uses demodulated data to determine an estimate of the duration of symbols, and uses it to fine-tune sampling intervals. ‡The decoder uses soft estimates from the demodulator to find the original (digital) source data. In the simplest case of an uncoded system, the decoder is just a hard-decision (threshold) device. For convolutional codes, Maximum Likelihood Sequence Estimators (MLSEs, such as the Viterbi decoder) are used. Recently, iterative RXs that perform joint demodulation and decoding have been proposed. Remaining errors are either taken care of by repetition of a data packet (Automatic Repeat reQuest ± ARQ) or are ignored. The latter solution is usually resorted to for speech communications, where the delay entailed by retransmission is unacceptable. ‡ Signaling recovery identifies the parts of the data that represent signaling information and controls the subsequent demultiplexer. ‡The demultiplexer separates the user data and signaling information and reverses possible time compression of the TX multiplexer. Note that the demultiplexer can also be placed earlier in the transmission scheme; its optimum placement depends on the specific multiplexing and multiaccess scheme. ‡The source decoder reconstructs the source signal from the rules of source coding. If the source data are digital, the output signal is transferred to the data sink. Otherwise, the data are transferred to the DAC, which converts the transmitted information into an analog signal, and hands it over to the information sink. 3.2 Modulation and demodulation Modulation is the process of encoding information from a message source in a manner suitable for transmission. It generally involves translating a baseband message signal (called the source) to a bandpass signal at frequencies that are very high when compared to the baseband frequency. The bandpass signal is called the modulated signal and the baseband message signal is called the modulating signal. Modulation may be done by varying the amplitude, phase, or frequency of a high frequency carrier in accordance with the amplitude of the message signal. The performance of modulation scheme is measured in terms of power efficiency, and bandwidth efficiency. Demodulation is the process of extracting the baseband message from the carrier so that it may be processed and interpreted by the intended receiver (also called the sink). Various modulation techniques: 1. Analog modulation techniques 2. Digital modulation techniques ϳϯ 

Principles of Wireless Communications Advantages of analog modulation techniques 1. Simple circuits are needed 2. Easy to design Disadvantages of analog modulation techniques 1. Poor maintenance 2. Poor security 3. Low noise immunity 4. Poor SNR Advantages of digital modulation techniques 1. Ruggedness to channel noise and external interference 2. Flexible operation of the system 3. Multiplexing of various sources 4. Security of information 5. Digital circuits are more reliable, lower cost than analog circuits 6. Errors may be detected and corrected by the use of coding 7. Signal quality is achieved while occupying small spectrum Constellation Diagram A constellation diagram is a representation of a signal modulated by a digital modulation scheme such as quadrature amplitude modulation or phase-shift keying. It displays the signal as a two-dimensional scatter diagram in the complex plane at symbol sampling instants. The real and imaginary parts are often called the in phase, or I-axis, and the quadrature, or Q-axis, respectively. Plotting several symbols in a scatter diagram produces the constellation diagram. The points on a constellation diagram are called constellation points. They are a set of modulation symbols which comprise the modulation alphabet. Also a diagram of the ideal positions, signal space diagram, in a modulation scheme can be called a constellation diagram. Digital modulation techniques 1. Linear and 2. Non-linear types Linear modulation: The amplitude of the transmitted signal s(t) varies linearly with the modulating signal m(t). Most popular techniques are 1. Pulse shaped QPSK 2. OQPSK 3. ʌ /4 QPSK Non- linear modulation: Constant envelope modulation is obtained. Eg) MSK, BFSK, GMSK. ϳϰ 

Principles of Wireless Communications 3.3 Quadrature Phase Shift Keying (QPSK) Quadrature phase shift keying (QPSK) has twice the bandwidth efficiency of BPSK, since 2 bits are transmitted in a single modulation symbol. 1 symbol = 2 bits The phase of the carrier takes on 1 of 4 equally spaced values, such as 0, ʌ /2, ʌ, and 3 ʌ /2, where each value of phase corresponds to a unique pair of message bits. Representation of QPSK signal: The QPSK signal for this set of symbol states may be defined as

where i = 1, 2, 3, 4, and Ts is the symbol duration, which is equal to twice the bit period.
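A minimal sketch of the QPSK mapping described above: pairs of bits select one of four equally spaced carrier phases (0, π/2, π, 3π/2 are used here to match the set listed in this section), producing one complex symbol per two bits. The particular bit-pair-to-phase assignment is an assumed Gray-style choice, not one prescribed by this text.

```python
import cmath
import math

# Assumed Gray-coded assignment of bit pairs to the four phases in the text.
PHASES = {(0, 0): 0.0, (0, 1): math.pi / 2, (1, 1): math.pi, (1, 0): 3 * math.pi / 2}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to complex QPSK symbols (1 symbol = 2 bits)."""
    assert len(bits) % 2 == 0
    return [cmath.exp(1j * PHASES[(bits[i], bits[i + 1])])
            for i in range(0, len(bits), 2)]

symbols = qpsk_modulate([0, 0, 0, 1, 1, 1, 1, 0])
print([f"{s.real:+.2f}{s.imag:+.2f}j" for s in symbols])
```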

Constellation diagram


For QPSK: number of message points M = 4, dimension N = 2.

Error probability of QPSK


Principles of Wireless Communications Features of QPSK system x The average probability of bit error of coherent BPSK system = coherent QPSK system. x Bandwidth required for coherent QPSK system = 1/2 coherent BPSK system. x Information transmission rate is higher than BPSK system. Data rate of QPSK = 2 x data rate of BPSK x Carrier power remains constant. So it has less interference. x Spectral efficiency of QPSK = 2 x spectral efficiency of BPSK Spectrum and Bandwidth of QPSK Signals The power spectral density of a QPSK signal can be obtained in a manner similar to that used for BPSK, with the bit periods Tb replaced by symbol periods Ts. Hence, the PSD of a QPSK signal using rectangular pulses can be expressed as

QPSK Transmission Figure 3.4 shows a block diagram of a typical QPSK transmitter. The unipolar binary message stream has bit rate Rb and is first converted into a bipolar non-return-to-zero (NRZ) sequence using a unipolar to bipolar convener. The bit stream m (t) is then split into two bit streams mI (t) and mQ (t) (in-phase and quadrature streams), each having a bit rate of Rs = Rb/2. The bit stream m1 (t) is called the "even" stream and mQ (t) is called the "odd" stream. The two ELQDU\VHTXHQFHVDUHVHSDUDWHO\PRGXODWHGE\WZRFDUULHUVĭ W DQGĭ W ZKLFK are in quadrature. The two modulated signals, each of which can be considered to be a BPSK signal, are summed to produce a QPSK signal. The filter at the output of the modulator confines the power spectrum of the QPSK signal within the allocated band. This prevents spill-over of signal energy into adjacent channels and also removes out-of-band spurious signals generated during the modulation process. In most ϳϳ 

implementations, pulse shaping is done at baseband to provide proper RF filtering at the transmitter output.
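The transmitter structure of Figure 3.4 can be sketched in a few lines: the bipolar NRZ stream is split into even/odd (I and Q) streams at half the bit rate, each stream multiplies one of two quadrature carriers, and the two branches are summed. This is a baseband-sampled illustration with an assumed carrier frequency and rectangular pulses; the pulse-shaping filter of the real transmitter is omitted for brevity.

```python
import math

def qpsk_transmit(bits, fc=4.0, samples_per_symbol=32):
    """Rectangular-pulse QPSK waveform: sum of I and Q BPSK branches (Figure 3.4 idea)."""
    nrz = [1.0 if b else -1.0 for b in bits]        # unipolar -> bipolar NRZ
    i_stream, q_stream = nrz[0::2], nrz[1::2]       # even ("I") and odd ("Q") bits
    waveform = []
    for k, (mi, mq) in enumerate(zip(i_stream, q_stream)):
        for n in range(samples_per_symbol):
            t = k + n / samples_per_symbol          # time in symbol periods
            carrier_i = math.cos(2 * math.pi * fc * t)
            carrier_q = math.sin(2 * math.pi * fc * t)
            waveform.append(mi * carrier_i - mq * carrier_q)
    return waveform

s = qpsk_transmit([1, 0, 0, 1, 1, 1, 0, 0])
print(len(s), "samples generated")
```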

Figure 3.4 Block diagram of a QPSK transmitter. Figure 3.5 shows a block diagram of a coherent QPSK receiver. The frontend bandpass filter removes the out-of-band noise and adjacent channel interference. The filtered output is split into two parts, and each part is coherently demodulated using the in-phase and quadrature carriers. The coherent carriers used for demodulation are recovered from the received signal using carrier recovery circuits. The outputs of the demodulators are passed through decision circuits which generate the in-phase and quadrature binary streams. The two components are then multiplexed to reproduce the original binary sequence.

Figure 3.5 Block diagram of a QPSK receiver. Advantages: 1. Higher data rate. 2. Bandwidth conservation is achieved. Disadvantages: 1. Signals are amplified using linear amplifiers, which are less efficient 2. Only suitable for rectangular data pulses 3. QPSK phase changes by 90° or 180°. This creates abrupt amplitude variations in the waveform 3.4 OFFSET QPSK (OQPSK) In QPSK the amplitude is ideally constant. When QPSK signals are pulse shaped, they lose the constant envelope property. It causes 1. Regeneration of side lobes. 2. Spectral widening ϳϴ 

To prevent regeneration of the side lobes, a modified form of QPSK called offset QPSK (OQPSK), or staggered QPSK, is preferred. OQPSK has more advantages than QPSK: the offset provides an advantage when non-rectangular data pulses are used, and it supports more efficient RF amplification. The OQPSK signal is represented in the same way as the QPSK signal.

Principle of OQPSK
x The time alignment of the even and odd bit streams of OQPSK differs from that of the QPSK signal.
x The even and odd bit streams mI(t) and mQ(t) are offset by one bit period.
Features of OQPSK
x In QPSK, phase transitions occur only once every Ts = 2Tb seconds, and the maximum phase shift is 180°.
x In OQPSK, bit transitions occur every Tb seconds, and the maximum phase shift is ±90°.
x Because of these limited phase transitions, the envelope of an OQPSK signal does not go to zero.
x OQPSK allows nonlinear amplification, so regeneration of side lobes is eliminated.
x The spectrum of an OQPSK signal is identical to that of a QPSK signal; hence both signals occupy the same bandwidth.
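The one-bit offset between the I and Q streams is easy to show directly. The sketch below builds the two staggered streams at sample level; it illustrates only the staggering (no carrier, no pulse shaping), with the offset implemented as a delay of the Q stream by one bit period Tb.

```python
def oqpsk_streams(bits, samples_per_bit=8):
    """Return sampled I and Q NRZ streams with the Q stream delayed by one bit (Tb)."""
    nrz = [1 if b else -1 for b in bits]
    i_bits, q_bits = nrz[0::2], nrz[1::2]
    # Each symbol lasts 2*Tb, i.e. 2*samples_per_bit samples.
    i_stream = [v for v in i_bits for _ in range(2 * samples_per_bit)]
    q_stream = [v for v in q_bits for _ in range(2 * samples_per_bit)]
    q_stream = [0] * samples_per_bit + q_stream[:-samples_per_bit]  # offset by Tb
    return i_stream, q_stream

i_s, q_s = oqpsk_streams([1, 0, 0, 1, 1, 1])
print(len(i_s), len(q_s))  # equal lengths; I and Q transitions never coincide
```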

Applications of OQPSK: 1. Very attractive for mobile communication systems. 2. OQPSK signals perform better than QPSK in the presence of phase jitter.
π/4 QPSK
The π/4 shifted QPSK modulation is a quadrature phase shift keying technique which offers a compromise between OQPSK and QPSK in terms of the allowed maximum phase transitions.

Principles of Wireless Communications It may be demodulated in a coherent or noncoherent fashion. In ʌ /4 QPSK, the maximum phase change is limited to ± 135°. as compared to 180° for QPSK and 90° for OQPSK. Hence, the band limited ʌ /4 QPSK signal preserves the constant envelope property better than band limited QPSK, but is more susceptible to envelope variations than OQPSK. An extremely attractive feature of ʌ /4 QPSK is that it can be noncoherently detected, which greatly simplifies receiver design. Further, it has been found that in the presence of multipath spread and fading, ʌ /4 QPSK performs better than OQPSK. When differentially encoded, ʌ /4 QPSK is called ʌ /4 DQPSK. Constellation diagram of ʌ /4 QPSK

Features of ʌ /4 QPSK x Class C power efficient amplifiers are used x Low out of band radiation of the order of -60 dB to -70 dB can be achieved x Simple limiter, discriminator circuits are used. x Receiver circuits provide high immunity x Constant envelope modulation are power efficient. Disadvantages: 1. They occupy a large bandwidth 2. So, poor bandwidth efficiency. ʌ /4 QPSK Transmission Techniques

Figure 3.6 Block diagram of π/4 QPSK transmission

Principles of Wireless Communications Just as in a QPSK modulator, the in-phase and quadrature bit streams Ik and Qk are then separately modulated by two carriers which are in quadrature with one another, to produce the ʌ /4 QPSK Both Ik and Qk are usually passed through raised cosine rolloff pulse shaping filters before modulation, in order to reduce the bandwidth occupancy. Pulse shaping also reduces the spectral restoration problem.

ʌ /4 QPSK Detection Techniques Due to ease of hardware implementation, differential detection is often employed to demodulate ʌ /4 QPSK signals. In an AWGN channel, the BER performance of a differentially detected ʌ /4 QPSK is about 3 dB inferior to QPSK, while coherently detected ʌ /4 QPSK has the same error performance as QPSK. There are various types of detection techniques that are used for the detection of ʌ /4 QPSK signals. They include 1. Baseband differential detection, 2. IF differential detection,and 3. FM discriminator detection. Baseband Differential Detection

Figure 3.7 block diagram of a baseband differential detector Figure 3.7 shows a block diagram of a baseband differential detector. The incoming ʌ /4 QPSK signal is quadrature demodulated using two local oscillator signals that have the same frequency as the unmodulated carrier at the transmitter, but not necessarily the same phase.In decision device SI and SQ are the detected bits in the in-phase and quadrature terms, respectively. These two bits are multiplexed to get an output. Advantages ϴϭ 

Principles of Wireless Communications 1. Easy to implement 2. Simple hardware circuits Disadvantages 1. Any drift in the carrier frequency will cause a drift in the output phase. 2. It leads to BER degradation IF Differential Detector

Figure 3.8 IF differential detector The IF differential detector shown in Figure 3.8 avoids the need for a local oscillator by using a delay line and two phase detectors. The received signal is converted to IF and is bandpass filtered. The bandpass filter is designed to match the transmitted pulse shape, so that the carrier phase is preserved and noise power is minimized. To minimize the effects of ISI and noise, the bandwidth of the filters is chosen to be 0.57/Ts. The received IF signal is differentially decoded using a delay line and two mixers. The bandwidth of the signal at the output of the differential detector is twice that of the baseband signal at the transmitter end. FM Discriminator

Figure 3.9 block diagram of an FM discriminator detector Figure 3.9 shows a block diagram of an FM discriminator detector for it ʌ /4 QPSK. The input signal is first filtered using a bandpass filter that is matched to the transmitted signal. The filtered signal is then hard limited to remove any envelope fluctuations. Hard limiting preserves the phase changes in the input signal and hence no information is lost. The FM discriminator extracts the instantaneous frequency deviation of the received signal which, when integrated over each symbol period gives the phase difference between two sampling instants. The phase difference is then detected by a four level threshold comparator to obtain the original signal. The phase difference can also be detected using a modulo-2 ʌ phase detector. The modulo-2 ʌ phase detector improves the BER performance and reduces the effect of click noise. ϴϮ 

3.6 Binary Frequency Shift Keying
In binary frequency shift keying (BFSK), the frequency of a constant amplitude carrier signal is switched between two values according to the two possible message states (called high and low tones), corresponding to a binary 1 or 0. Depending on how the frequency variations are imparted into the transmitted waveform, the FSK signal will have either a discontinuous phase or a continuous phase between bits. In general, an FSK signal may be represented as
s(t) = sqrt(2Eb/Tb) cos(2π fc t ± 2π Δf t)
where 2πΔf is a constant offset from the nominal carrier frequency. One obvious way to generate an FSK signal is to switch between two independent oscillators according to whether the data bit is a 0 or a 1. Normally, this form of FSK generation results in a waveform that is discontinuous at the switching times, and for this reason this type of FSK is called discontinuous FSK.
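A minimal sketch of discontinuous BFSK generation as described above: each bit simply selects one of two free-running oscillators, so the phase generally jumps at the switching instants. The frequencies and rates used are assumed example values.

```python
import math

def bfsk_waveform(bits, f0=1000.0, f1=2000.0, bit_rate=500.0, fs=16000.0):
    """Switch between two independent tones per bit (discontinuous-phase BFSK)."""
    samples_per_bit = int(fs / bit_rate)
    out = []
    for k, b in enumerate(bits):
        f = f1 if b else f0
        for n in range(samples_per_bit):
            t = (k * samples_per_bit + n) / fs
            # Each oscillator runs from t = 0, so the phase can jump at bit edges.
            out.append(math.cos(2 * math.pi * f * t))
    return out

w = bfsk_waveform([1, 0, 1, 1, 0])
print(len(w), "samples")
```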

Constellation diagram The coherent BFSK is characterized by a signal space diagram



BFSK Transmission

Figure 3.10 Block diagram of BFSK transmission
On-off level encoder
x The incoming binary data sequence is first applied to an on-off level encoder.
x For symbol '1', the upper channel is switched ON.
x For symbol '0', the lower channel is switched ON by using an inverter.
Modulator
x The symbol '1' is modulated by the upper channel modulator. The modulated frequency is f1.

x The symbol '0' is modulated by the lower channel modulator. The modulated frequency is f2.
x The two frequencies f1 and f2 are chosen to be different integer multiples of the bit rate 1/Tb.

Oscillator The oscillators used are synchronized to satisfy the requirements of the two orthonormal basis functions. VCO is used to modulate the wave continuously. Coherent Detection of Binary FSK A block diagram of a coherent detection scheme for demodulation of binary FSK signals is shown in Figure 3.11. The receiver shown is the optimum detector for coherent binary FSK in the presence of additive white Gaussian noise. It consists of two correlators which are supplied with locally generated coherent reference signals. The difference of the correlator outputs is then compared with a threshold comparator. If the difference signal has a value greater than the threshold, the receiver decides in favor of a 1, otherwise it decides in favor of a 0. It can be shown that the probability of error for a coherent FSK receiver is given by

Figure 3.11 Block diagram of Coherent Detection of Binary FSK Non-coherent Detection of Binary FSK A block diagram of a noncoherent FSK receiver is shown in Figure 3.12. The receiver consists of a pair of matched filters followed by envelope detectors. The filter in the upper path is matched to the FSK signal of frequency fH and the filter in the lower path is matched to the signal of frequency fL. These matched filters function as bandpass filters centered at fH and fL, respectively. The outputs of the envelope detectors are sampled at every t = kTb, where k is an integer, and their values are compared. Depending on the magnitude of the envelope detector outputs, the comparator decides whether the data bit was a 1 or 0.



Figure 3.12 Block diagram of non-coherent detection of binary FSK
Advantages of BFSK:
1. FSK is relatively easy to implement.
2. It has better noise immunity than ASK; therefore the probability of error-free reception of data is high. For these reasons FSK is extensively used in low speed modems having bit rates below 1200 bits/sec.
Disadvantages of BFSK:
1. FSK is not preferred for high speed modems because, as the speed increases, the bit rate increases.
2. This increases the channel bandwidth required to transmit the FSK signal.
3. As telephone lines have a very low bandwidth, it is not possible to satisfy the bandwidth requirement of FSK at higher speeds. Therefore FSK is preferred only for low speed modems.
Error probability of BFSK

3.7 Minimum Shift Keying
Minimum shift keying (MSK) is a special type of continuous phase frequency shift keying (CPFSK) wherein the peak frequency deviation is equal to 1/4 of the bit rate. In other words, MSK is continuous phase FSK with a modulation index of 0.5. A modulation index of 0.5 corresponds to the minimum frequency spacing that allows two FSK signals to be coherently orthogonal, and

the name minimum shift keying implies the minimum frequency separation (i.e., bandwidth) that allows orthogonal detection.

Where

MSK is sometimes referred to as fast FSK, as the frequency spacing used is only half as much as that used in conventional noncoherent FSK.
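The defining numbers in this paragraph can be expressed directly: with modulation index h = 2ΔfTb = 0.5, the peak deviation is a quarter of the bit rate and the spacing between the two tones is half the bit rate. The bit rate below is only an assumed example.

```python
def msk_tone_parameters(bit_rate_hz):
    """Peak deviation and tone spacing for MSK (modulation index h = 0.5)."""
    peak_deviation = bit_rate_hz / 4      # deviation = Rb/4, as stated in the text
    tone_spacing = 2 * peak_deviation     # f_high - f_low = Rb/2
    return peak_deviation, tone_spacing

dev, spacing = msk_tone_parameters(270_833)   # assumed example bit rate in bit/s
print(f"deviation = {dev:.0f} Hz, tone spacing = {spacing:.0f} Hz")
```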

It contains orthogonal basis functions φ1(t) and φ2(t); they form a pair of modulated carriers.

We may express the MSK signal in the expanded form

MSK is a spectrally efficient modulation scheme and is particularly attractive for use in mobile radio communication systems. It possesses properties such as constant envelope, spectral efficiency, good BER performance, and self-synchronizing capability Constellation diagram





Error performance of MSK

Generation of MSK

Figure 3.13 Block diagram of the MSK modulator
Figure 3.13 shows a typical MSK modulator. Multiplying a carrier signal with cos[πt/2T] produces two phase-coherent signals at fc + 1/(4T) and fc − 1/(4T). These two FSK signals are separated using two narrow bandpass filters and appropriately combined to form the in-phase and quadrature carrier components x(t) and y(t), respectively. These carriers are multiplied with the odd and even bit streams, mI(t) and mQ(t), to produce the MSK modulated signal SMSK(t).


Demodulation of MSK The block diagram of an MSK receiver is shown in Figure 3.14. The received signal SMSK(t) (in the absence of noise and interference) is multiplied by the respective in-phase and quadrature carriers x(t) and y(t). The outputs of the multipliers are integrated over two bit periods and dumped to a decision circuit at the end of each two-bit period. Based on the level of the signal at the output of the integrator, the threshold detector decides whether the signal is a 0 or a 1. The output data streams correspond to mI(t) and mQ(t), which are offset-combined to obtain the demodulated signal.

Figure 3.14 block diagram of an MSK receiver Properties x It has constant envelope, smoother waveforms are obtained x Relatively narrow bandwidth x Coherent detection suitable for satellite communication x Side lobes are zero outside the frequency band, so it has resistance to co-channel interference Advantages 1. Smoother waveform than QPSK 2. There is no amplitude variation, constant envelope. 3. Main lobe is wider, contains 99% of signal energy. 4. Less inter channel interference 5. Spectral efficiency is good; BER performance is suitable for mobile radio communication systems. Disadvantages 1. Complex circuits are needed for generation and detection of MSK signal. 2. Main lobe of MSK is wide 3. Slow decay of MSK power spectral density creates adjacent channel interference. 4. Not suitable for multi user communications 5. Bandwidth required is higher than that of QPSK scheme.


3.8 Gaussian Minimum Shift Keying
GMSK is a simple binary modulation scheme. The side lobe levels of the spectrum are much reduced by passing the modulating NRZ data waveform through a pre-modulation Gaussian pulse-shaping filter; "Gaussian" refers to the shape of the filter. GMSK has better power efficiency and spectral efficiency than conventional FSK.
Properties of the Gaussian filter
x It suppresses the high-frequency components of the transmitted signal.
x It avoids excessive deviations in the instantaneous frequency of the FM signal.
x The resulting GMSK signal can be detected coherently or non-coherently.
x The Gaussian filter is used before the modulator to reduce the transmission bandwidth of the signal.
Features
x GMSK has excellent power efficiency due to the constant envelope, and it has excellent spectral efficiency.
x The pre-modulation Gaussian filtering introduces ISI in the transmitted signal, but the degradation is not severe if the 3 dB bandwidth–bit duration product of the filter is greater than 0.5.
GMSK Bit Error Rate
The bit error rate for GMSK was first found for AWGN channels, and was shown to offer performance within 1 dB of optimum MSK when BT = 0.25. The bit error probability is a function of BT, since the pulse shaping impacts ISI. The bit error probability for GMSK is given by



GMSK Transmitter

Figure 3.15 Block diagram of GMSK Transmitter This modulation technique is shown in Figure 3.15 and is currently used in a variety of analog and digital implementations for the U.S. Cellular Digital Packet Data (CDPD) system as well as for the Global System for Mobile (GSM) system. Figure 3.15 may also be implemented digitally using a standard I/Q modulator. GMSK Receiver GMSK signals can be detected using orthogonal coherent detectors as shown in Figure 3.16, or with simple noncoherent detectors such as standard FM discriminators. Carrier recovery is sometimes performed using a method suggested by de Buda where the sum of the two discrete frequency components contained at the output of a frequency doubler is divided by four. De Buda's method is similar to the Costas loop and is equivalent to that of a PLL with a frequency doubler. This type of receiver can be easily implemented using digital logic as shown in Figure 3.17. The two D flip-flops act as a quadrature product demodulator and the XOR gates act as baseband multipliers. The mutually orthogonal reference carriers are generated using two D flipflops, and the VCO center frequency is set equal to four times the carrier center frequency. A non-optimum, but highly effective method of detecting GMSK signal is to simply sample the output of an FM demodulator.
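Since the defining element of GMSK is the premodulation Gaussian low-pass filter characterized by its BT product, a small sketch of that filter's impulse response is given below. The expression used is the standard Gaussian pulse parameterized by the 3 dB bandwidth–bit duration product BT as found in the open literature; the sampling choices are assumed examples, not values fixed by this text.

```python
import math

def gaussian_pulse(bt_product, bit_period=1.0, samples_per_bit=16, span_bits=4):
    """Impulse response of the GMSK premodulation Gaussian filter for a given BT."""
    b = bt_product / bit_period                      # 3 dB bandwidth of the filter
    dt = bit_period / samples_per_bit
    n = span_bits * samples_per_bit
    taps = []
    for k in range(-n, n + 1):
        t = k * dt
        taps.append(b * math.sqrt(2 * math.pi / math.log(2))
                    * math.exp(-2 * (math.pi * b * t) ** 2 / math.log(2)))
    scale = sum(taps) * dt                           # normalize to unit DC gain
    return [tap / scale for tap in taps]

h = gaussian_pulse(bt_product=0.25)                  # BT = 0.25, the value quoted above
print(f"{len(h)} taps, peak = {max(h):.3f}")
```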



Figure 3.16 Block diagram of a GMSK receiver

Figure 3.17 Digital logic circuit for GMSK demodulator

QUESTIONS BANK
PART A
1. List the advantages of digital modulation techniques.
2. What are the factors that influence the choice of digital modulation?
3. Define power efficiency and bandwidth efficiency.
4. What is QPSK?
5. Define offset QPSK and π/4 differential QPSK.
6. What is meant by MSK?
7. List the salient features of the MSK scheme.
8. Why is GMSK preferred for multiuser, cellular communications?
9. How can we improve the performance of digital modulation under fading channels?
10. Write the advantages of MSK over QPSK.

11. Define an M-ary transmission system.
12. What is quadrature modulation?
13. What is QAM?
14. Define QPSK.
15. What is linear modulation?
16. Define non-linear modulation.
17. What is the need for the Gaussian filter?
18. Mention some merits of MSK.
19. Give some examples of linear modulation.
20. What are the techniques used to improve the received signal quality?
21. What is the need for equalization?
22. What is diversity?
23. Define spatial diversity.
24. Define STCM.
25. Define adaptive equalization.
26. Define training mode in an adaptive equalizer.
27. What is tracking mode in an adaptive equalizer?
28. Write a short note on linear equalizers and non-linear equalizers.
29. Why are non-linear equalizers preferred?
30. What are the nonlinear equalization methods used?
31. What are the factors used in adaptive algorithms?
32. Define the diversity concept.
33. Draw the mathematical link model for analysis of modulation schemes. Nov. 2011
34. What is OQPSK? Nov. 2011
35. List the advantages of QPSK. May 2012
36. Differentiate between MSK and GMSK. May 2012
37. Find the 3-dB bandwidth for a Gaussian low pass filter used to produce 0.25 GMSK with a channel data rate of Rb = 270 kbps. What is the 90% power bandwidth in the RF channel? Nov. 2012
38. What is slotted frequency hopping? Nov. 2012
39. Give the expression for the bit error probability of Gaussian minimum shift keying modulation. Nov./Dec. 2013
40. What is fading and Doppler spread? Nov./Dec. 2013

PART B
1. Compute the ratio of signal power to adjacent channel interference when using (i) raised cosine pulses and (ii) root raised cosine pulses with α = 0.5, when the two considered signals have center frequencies 0 and 1.25/T. (16) Nov. 2011
2. i. Discuss in detail any two demodulation techniques of the minimum shift keying method. (8) Nov. 2011
   ii. Explain in detail the optimum receiver structure for non-coherent detection. (8)
3. Explain, with neat signal diagrams, the modulation and demodulation technique of QPSK. (16) May 2012
4. i. Describe, with a block diagram, offset quadrature phase shift keying and its advantages. (8) May 2012
   ii. Explain the concept of GMSK and mention its advantages. (8)

5. i. Briefly explain the structure of a wireless communication link. (6) Nov. 2012
   ii. With a block diagram, explain the MSK transmitter and receiver. Derive an expression for MSK and its power spectrum. (10) Nov. 2012
6. Derive an expression for: i. M-ary phase shift keying (8) and ii. M-ary quadrature amplitude modulation. (8) Also derive expressions for their bit error probability.
7. i. Explain the Nyquist criterion for ISI cancellation. (8) Nov./Dec. 2013
   ii. Explain the performance of digital modulation in slow flat-fading channels. (8) Nov./Dec. 2013
8. i. Explain the QPSK transmission and detection techniques. (8) Nov./Dec. 2013
   ii. With the transfer function, explain the raised cosine roll-off filter. (8) Nov./Dec. 2013

TWO MARK QUESTIONS WITH ANSWERS
3.1. List the advantages of digital modulation techniques.
The advantages of digital modulation techniques are: i. Immunity to channel noise and external interference. ii. Flexible operation of the system. iii. Security of information. iv. Reliability, since digital circuits are used. v. Multiplexing of various sources of information into a common format is possible. vi. Error detection and correction are easy.
3.2. What are the factors that influence the choice of digital modulation?
The factors that influence the choice of digital modulation are: i. Low BER at low received SNR. ii. Better performance in multipath and fading conditions. iii. Minimum bandwidth requirement. iv. Better power efficiency. v. Ease of implementation and low cost.
3.3. Define power efficiency and bandwidth efficiency.
Power efficiency describes the ability of a modulation technique to preserve the fidelity of the digital message at low power levels: ηp = Eb/N0 (bit energy / noise power spectral density). The ability of a modulation scheme to accommodate data within a limited bandwidth is called bandwidth efficiency: ηB = R/B = data rate / bandwidth, in bps/Hz.
3.4. What is QPSK?
Quadrature Phase Shift Keying (QPSK) is a 4-ary PSK signal. The phase of the carrier in QPSK takes 1 of 4 equally spaced shifts. Two successive bits in the data sequence are grouped together: 1 symbol = 2 bits.

This reduces the bit rate and bandwidth of the channel.
Coherent QPSK = 2 × coherent BPSK system.
The phase of the carrier takes on one of four equally spaced values, such as π/4, 3π/4, 5π/4 and 7π/4.

3.5. Define offset QPSK and π/4 differential QPSK.
In offset QPSK the amplitude of the data pulses is kept constant. The time alignment of the even and odd bit streams is offset by one bit period in offset QPSK.
In π/4 QPSK, the signaling points of the modulated signal are selected from two QPSK constellations which are shifted by π/4 with respect to each other. It is differentially encoded and detected, and is therefore called π/4 differential QPSK.

3.6. What is meant by MSK?
A continuous phase FSK signal with a deviation ratio of one half is referred to as MSK. It is a spectrally efficient modulation scheme.

3.7. List the salient features of the MSK scheme.
Salient features of MSK are:
i. It has a constant envelope and smoother waveforms than QPSK.
ii. Relatively narrow bandwidth.
iii. Coherent detection, suitable for satellite communications.
iv. Side lobes are zero outside the frequency band, so it has resistance to co-channel interference.

3.8. Why is GMSK preferred for multiuser, cellular communication?
It is a simple binary modulation scheme. Premodulation filtering is done by a Gaussian pulse shaping filter, so side lobe levels are much reduced. GMSK has better power efficiency and spectral efficiency than FSK. For these reasons GMSK is preferred for multiuser, cellular communication.

3.9. How can we improve the performance of digital modulation under fading channels?
By using diversity techniques, error control coding and equalization techniques, the performance of digital modulation under fading channels is improved.

3.10. Write the advantages of MSK over QPSK.
Advantages of MSK over QPSK:
i. In QPSK the phase changes by 90° or 180°. This creates abrupt amplitude variations in the waveform, so the bandwidth requirement of QPSK is higher. Filtering can reduce these problems, but it has other side effects.
ii. MSK overcomes those problems. In MSK the output waveform is continuous in phase, hence there are no abrupt changes in amplitude.

3.11. Define M-ary transmission system.
In digital modulation, instead of transmitting one bit at a time, two or more bits are transmitted simultaneously. This is called M-ary transmission.

3.12. What is quadrature modulation?
Sometimes two or more quadrature carriers are used for modulation. This is called quadrature modulation.

3.13. What is QAM?
At high bit rates a combination of ASK and PSK is employed in order to minimize the errors in the received data. This method is known as "Quadrature Amplitude Modulation".

3.14. Define QPSK.
QPSK is defined as the multilevel modulation scheme in which four phase shifts are used for representing four different symbols.

3.15. What is linear modulation?
In a linear modulation technique, the amplitude of the transmitted signal varies linearly with the modulating digital signal. In general, linear modulation does not have a constant envelope.

3.16. Define non-linear modulation.
In non-linear modulation the amplitude of the carrier is constant, regardless of the variation in the modulating signal. Non-linear modulations may have either linear or constant envelopes depending on whether or not the baseband waveform is pulse shaped.

3.17. What is the need of a Gaussian filter?
Need for the Gaussian filter:
i. A Gaussian filter is used before the modulator to reduce the transmitted bandwidth of the signal.
ii. It uses less bandwidth than conventional FSK.

3.18. Mention some merits of MSK.
Merits of MSK:
i. Constant envelope.
ii. Spectral efficiency.
iii. Good BER performance.
iv. Self-synchronizing capability.
v. MSK is a spectrally efficient modulation scheme and is particularly attractive for use in mobile radio communication systems.

3.19. Give some examples of linear modulation.
Examples of linear modulation:
i. Pulse shaped QPSK
ii. OQPSK

3.20. What are the techniques used to improve the received signal quality?
Techniques such as:
i. Equalization
ii. Diversity
iii. Channel coding

3.21. What is the need of equalization?
Equalization can be used to compensate for the Inter Symbol Interference created by multipath within a time dispersive channel.

3.22. What is diversity?
Diversity is used to compensate for fading channel impairments and is usually implemented by using two or more receiving antennas. Diversity improves transmission performance by making use of more than one independently faded version of the transmitted signal.

3.23. Define spatial diversity.
The most common diversity technique is spatial diversity, whereby multiple antennas are strategically spaced and connected to a common receiving system. While one antenna sees a signal null, one of the other antennas may see a signal peak, and the receiver is able to select the antenna with the best signal at any time.

3.24. Define STCM.
Channel coding can be combined with diversity in a technique called Space-Time Coded Modulation (STCM). Space-time coding is a bandwidth and power efficient method for wireless communication.

3.25. Define adaptive equalization.
To combat Inter Symbol Interference, the equalizer coefficients should change according to the channel status so as to track the channel variations. Such an equalizer is called an adaptive equalizer, since it adapts to the channel variations.

3.26. Define training mode in an adaptive equalizer.
First, a known fixed-length training sequence is sent by the transmitter, so that the receiver's equalizer may adapt to a proper setting for minimum bit error detection. The training sequence is a pseudo-random binary signal or a fixed and prescribed bit pattern.

3.27. What is tracking mode in an adaptive equalizer?
Immediately following the training sequence, the user data is sent, and the adaptive equalizer at the receiver utilizes a recursive algorithm to evaluate the channel and estimate the filter coefficients that compensate for the distortion created by multipath in the channel.

3.28. Write a short note on linear equalizers and non-linear equalizers.
Linear equalizers: if the output d(t) is not used in the feedback path to adapt the equalizer, the equalizer is called a linear equalizer.
Non-linear equalizers: if the output d(t) is fed back to change the subsequent outputs of the equalizer, it is called a non-linear equalizer.

3.29. Why are non-linear equalizers preferred?
Linear equalizers are very effective in equalizing channels where the ISI is not severe. The severity of the ISI is directly related to the spectral characteristics of the channel. When there are spectral nulls in the transfer function of the effective channel, the additive noise at the receiver input is dramatically enhanced by the linear equalizer. To overcome this problem, non-linear equalizers are used.

3.30. What are the non-linear equalization methods used?
Commonly used non-linear equalization methods are:
i. Decision feedback equalization
ii. Maximum likelihood symbol detection
iii. Maximum likelihood sequence estimation

3.31. What are the factors used in adaptive algorithms?
i. Rate of convergence
ii. Misadjustment
iii. Computational complexity

3.32. Define the diversity concept.
If one radio path undergoes a deep fade, another independent path may have a strong signal. By having more than one path to select from, both the instantaneous and average SNRs at the receiver may be improved, often by as much as 20 dB to 30 dB. The principle of diversity is to ensure that the same information reaches the receiver on statistically independent channels.


UNIT IV SIGNAL PROCESSING IN WIRELESS SYSTEMS

4.1 Principle of Diversity
For Additive White Gaussian Noise (AWGN) channels, such an approach can be quite reasonable: the Bit Error Rate (BER) decreases exponentially as the Signal-to-Noise Ratio (SNR) increases, and a 10-dB SNR leads to BERs on the order of 10⁻⁴. However, in Rayleigh fading the BER decreases only linearly with the SNR. We would thus need an SNR on the order of 40 dB in order to achieve a 10⁻⁴ BER, which is clearly impractical. The reason for this different performance is the fading of the channel: the BER is mostly determined by the probability of the channel attenuation being large, and thus of the instantaneous SNR being low. A way to improve the BER is thus to change the effective channel statistics – i.e., to make sure that the SNR has a smaller probability of being low. Diversity is a way to achieve this.
The principle of diversity is to ensure that the same information reaches the receiver (RX) on statistically independent channels. Consider the simple case of an RX with two antennas. The antennas are assumed to be far enough from each other that small-scale fading is independent at the two antennas. The RX always chooses the antenna that has the instantaneously larger receive power. As the signals are statistically independent, the probability that both antennas are in a fading dip simultaneously is low – certainly lower than the probability that one antenna is in a fading dip. Diversity thus changes the SNR statistics at the detector input. Diversity is divided into microdiversity and macrodiversity.
4.2 Microdiversity
As mentioned in the introduction, the basic principle of diversity is that the RX has multiple copies of the transmit signal, where each of the copies goes through a statistically independent channel. This section describes different ways of obtaining these statistically independent copies. We concentrate on methods that can be used to combat small-scale fading, which are therefore called "microdiversity". The five most common methods are as follows:
1. Spatial diversity: several antenna elements separated in space.
2. Temporal diversity: transmission of the transmit signal at different times.
3. Frequency diversity: transmission of the signal on different frequencies.
4. Angular diversity: multiple antennas (with or without spatial separation) with different antenna patterns.
5. Polarization diversity: multiple antennas with different polarizations (e.g., vertical and horizontal).
When we speak of antenna diversity, we imply that there are multiple antennas at the receiver. Consider the correlation coefficient of two signals that have a temporal separation τ and a frequency separation f1 − f2.

(4.1)
Note that for moving Mobile Stations (MSs), temporal separation can be easily converted into spatial separation, so that temporal and spatial diversity become mathematically equivalent.
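The following small Python sketch is not part of the original text; it uses the standard coherent-BPSK error-rate expressions and illustrative SNR values to make the AWGN-versus-Rayleigh comparison from the start of this section concrete.

```python
# Illustrative sketch: coherent BPSK bit error rate in AWGN versus flat
# Rayleigh fading, showing why ~10 dB suffices for a very low BER in AWGN
# while Rayleigh fading needs roughly 40 dB for a comparable error rate.
import math

def ber_awgn_bpsk(snr_db):
    g = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(g))       # Q(sqrt(2*gamma))

def ber_rayleigh_bpsk(mean_snr_db):
    g = 10 ** (mean_snr_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))  # BER averaged over Rayleigh fading

for snr_db in (10, 20, 30, 40):
    print(snr_db, "dB  AWGN:", f"{ber_awgn_bpsk(snr_db):.1e}",
          " Rayleigh:", f"{ber_rayleigh_bpsk(snr_db):.1e}")
```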

Equation (4.1) is thus quite general in the sense that it can be applied to spatial, temporal, and frequency diversity. However, a number of assumptions were made in the derivation of this equation: (i) validity of the Wide Sense Stationary Uncorrelated Scatterer (WSSUS) model, (ii) no existence of a Line Of Sight (LOS) component, (iii) exponential shape of the Power Delay Profile (PDP), (iv) isotropic distribution of incident power, and (v) use of omnidirectional antennas.
4.2.1 Spatial Diversity
Spatial diversity is the oldest and simplest form of diversity. Despite (or because of) this, it is also the most widely used. The transmit signal is received at several antenna elements, and the signals from these antennas are then further processed according to the principles. But, irrespective of the processing method, performance is influenced by the correlation of the signals between the antenna elements. A large correlation between signals at antenna elements is undesirable, as it decreases the effectiveness of diversity. A first important step in designing diversity antennas is thus to establish a relationship between antenna spacing and the correlation coefficient. This relationship is different for BS antennas and MS antennas, and thus will be treated separately.
1. MS in cellular and cordless systems: it is a standard assumption that waves are incident from all directions at the MS. Thus, points of constructive and destructive interference of Multi Path Components (MPCs) – i.e., points where we have high and low received power, respectively – are spaced approximately λ/4 apart. This is therefore the distance that is required for decorrelation of received signals. This intuitive insight agrees very well with the results from the exact mathematical derivation (Eq. (4.1), with f2 − f1 = 0), given in Figure 4.1: decorrelation, defined as ρ = 0.5, occurs at an antenna separation of about λ/4.

Figure 4.1 Envelope correlation coefficient as a function of antenna separation.
2. BS in cordless systems and WLANs: in a first approximation, the angular distribution of incident radiation at indoor BSs is also uniform – i.e., radiation is incident with equal strength from all directions. Therefore, the same rules apply as for MSs.
3. BSs in cellular systems: for a cellular BS, the assumption of uniform directions of incidence is no longer valid. Interacting Objects (IOs) are typically concentrated around the MS (Figure 4.2). Since all waves are incident essentially from one direction, the correlation coefficient is much higher.
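As a hedged illustration of the spacing–correlation relationship (assuming Clarke's two-dimensional isotropic scattering model, which is one common way of producing a curve like Figure 4.1), the envelope correlation between two antennas spaced a distance d apart can be approximated by J0²(2πd/λ):

```python
# Minimal sketch under Clarke's isotropic scattering assumption: envelope
# correlation versus antenna separation, approximated by J0(2*pi*d/lambda)**2.
import numpy as np
from scipy.special import j0

wavelength = 1.0                       # normalise distances to one wavelength
d = np.linspace(0, 1.0, 201)           # antenna separation in wavelengths
rho = j0(2 * np.pi * d / wavelength) ** 2

# first separation at which the correlation drops below 0.5 (decorrelation)
d_decorr = d[np.argmax(rho < 0.5)]
print(f"correlation falls below 0.5 at roughly {d_decorr:.2f} wavelengths")
```

The computed decorrelation distance comes out slightly below a quarter wavelength, consistent with the approximate λ/4 rule of thumb quoted above.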


Figure 4.2 Scatterers concentrated around the mobile station.
To get an intuitive insight, we start with the simple case where there are only two MPCs whose wave vectors are at an angle α with respect to each other (Figure 4.3). It is obvious that the distance between the maxima and minima of the interference pattern is larger the smaller α is. For very small α, the connection line between antenna elements lies on a "ridge" of the interference pattern, and the antenna elements are completely correlated.

Figure 4.3 Interference pattern of two waves with 45° (a) and 15° (b) angular separation.
4.2.2 Temporal Diversity
As the wireless propagation channel is time variant, signals that are received at different times are uncorrelated. For "sufficient" decorrelation, the temporal distance must be at least 1/(2νmax), where νmax is the maximum Doppler frequency. Temporal diversity can be realized in different ways:
1. Repetition coding: this is the simplest form. The signal is repeated several times, where the repetition intervals are long enough to achieve decorrelation. This obviously achieves diversity, but is also highly bandwidth inefficient. Spectral efficiency decreases by a factor equal to the number of repetitions.
2. Automatic Repeat reQuest (ARQ): here, the RX sends a message to the TX to indicate whether it received the data with sufficient quality. If this is not the case, then the transmission is repeated (after a wait period that achieves decorrelation). The spectral efficiency of ARQ is better than that of repetition coding, since it requires multiple transmissions only when the first transmission occurs in a bad fading state, while for repetition coding, retransmissions always occur. On the downside, ARQ requires a feedback channel.

3. Combination of interleaving and coding: a more advanced version of repetition coding is forward error correction coding with interleaving. The different symbols of a codeword are transmitted at different times, which increases the probability that at least some of them arrive with a good SNR. The transmitted codeword can then be reconstructed.
4.2.3 Frequency Diversity
In frequency diversity, the same signal is transmitted at two (or more) different frequencies. If these frequencies are spaced apart by more than the coherence bandwidth of the channel, then their fading is approximately independent, and the probability is low that the signal is in a deep fade at both frequencies simultaneously. For an exponential PDP, the correlation between two frequencies can be obtained from Eq. (4.1) by setting the numerator to unity, as the signals at the two frequencies occur at the same time. Thus
(4.2)
This again confirms that the two signals have to be at least one coherence bandwidth apart from each other. Figure 4.4 shows ρ as a function of the spacing between the two frequencies.
It is not common to actually repeat the same information at two different frequencies, as this would greatly decrease spectral efficiency. Rather, information is spread over a large bandwidth, so that small parts of the information are conveyed by different frequency components. The RX can then sum over the different frequencies to recover the original information. This spreading can be done by different methods:

Figure 4.4 Correlation coefficient of the envelope as a function of normalized frequency spacing.

• Compressing the information in time – i.e., sending short bursts that each occupy a large bandwidth (TDMA).
• Code Division Multiple Access (CDMA).
• Multicarrier CDMA and coded orthogonal frequency division multiplexing.
• Frequency hopping in conjunction with coding: different parts of a codeword are transmitted on different carrier frequencies.
Advantages of frequency diversity:

1. By using redundant signal transmission, this diversity improves link transmission quality.
2. The newer OFDM modulation uses frequency diversity.
Disadvantages of frequency diversity:
1. It requires a large bandwidth.
2. More receivers are required.
3. High cost.
4.2.4 Angle Diversity
A fading dip is created when MPCs, which usually come from different directions, interfere destructively. If some of these waves are attenuated or eliminated, then the location of the fading dips changes. In other words, two collocated antennas with different patterns "see" differently weighted MPCs, so that the MPCs interfere differently for the two antennas. This is the principle of angle diversity (also known as pattern diversity). Angular diversity is usually used in conjunction with spatial diversity; it enhances the decorrelation of signals at closely spaced antennas.
Different antenna patterns can be achieved very easily. Of course, different types of antennas have different patterns. But even identical antennas can have different patterns when mounted close to each other (see Figure 4.5). This effect is due to mutual coupling: antenna B acts as a reflector for antenna A, whose pattern is therefore skewed to the left. Analogously, the pattern of antenna B is skewed to the right due to reflections from antenna A. Thus, the two patterns are different. The different patterns are even more pronounced when the antennas are located on different parts of the casing. While dipole antennas are usually restricted to the top of the casing, patch antennas and inverted-F antennas can be placed on all parts of the casing (see Figure 4.6). In all of these cases, decorrelation is good even if the antennas are placed very close to each other.

Figure 4.5 Angle diversity for closely spaced antennas.



Figure 4.6 Configurations of diversity antennas at a mobile station.
4.2.5 Polarization Diversity
Horizontally and vertically polarized MPCs propagate differently in a wireless channel, as the reflection and diffraction processes depend on polarization. Even if the transmit antenna only sends signals with a single polarization, the propagation effects in the channel lead to depolarization, so that both polarizations arrive at the RX. The fading of signals with different polarizations is statistically independent. Thus, receiving both polarizations using a dual-polarized antenna, and processing the signals separately, offers diversity.
Let us now consider more closely the situation where the transmit signal is vertically polarized, while the signal is received in both vertical and horizontal polarization. In that case, fading of the two received signals is independent, but the average received signal strength in the two diversity branches is not identical. Depending on the environment, the horizontal (i.e., cross-polarized) component is some 3–20 dB weaker than the vertical (co-polarized) component. Various antenna arrangements have been proposed in order to mitigate this problem. It has also been claimed that the diversity order that can be achieved with polarization diversity is up to six: three possible components of the E-field and three components of the H-field can all be exploited.
Advantage of polarization diversity:
• Multipath delay spread is reduced.
4.3 Macro Diversity and Simulcast
The previous section described diversity methods that combat small-scale fading – i.e., the fading created by interference of MPCs. However, not all of these diversity methods are suitable for combating large-scale fading, which is created by shadowing effects. Shadowing is almost independent of transmit frequency and polarization, so frequency diversity or polarization diversity are not effective. Spatial diversity can be used, but we have to keep in mind that the correlation distances for large-scale fading are on the order of tens or hundreds of meters. In other words, if there is a hill between the TX and RX, adding antennas on either the BS or the MS does not help to eliminate the shadowing caused by this hill. Rather, we should use a separate base station (BS2) that is placed in such a way that the hill is not in the connection line between the MS and BS2. This in turn implies a large distance between BS1 and BS2, which gives rise to the word macrodiversity. The simplest method for macrodiversity is the use of on-frequency repeaters that receive the signal and retransmit an amplified version of it. Simulcast is very similar to this approach; the same signal is transmitted simultaneously from different BSs. Simulcast is also widely used for broadcast applications, especially digital TV. In this case, the exact synchronization of all possible RXs is not possible – each RX would require a different timing advance from the TXs. The use of on-frequency repeaters is simpler than that of

simulcast, as no synchronization is required. On the other hand, delay dispersion is larger, because (i) the runtime from BS to repeater and from repeater to MS is larger (compared with the runtime from a second BS), and (ii) the repeater itself introduces additional delays due to the group delays of electronic components, filters, etc.
Advantages:
1. Macrodiversity compensates for large-scale fading effects.
2. The distance between the BSs is increased.
3. On-frequency repeater (or) simulcast methods are used.
Disadvantages:
1. Simulcast requires a large amount of signaling information that has to be carried on landlines, so it requires large bandwidth.
2. On-frequency repeaters cause delay dispersion.
4.4 Combination of Signals
By combining signals from the different diversity branches at the RX, the total quality of the signal is improved. Signals from the multiple diversity branches can be exploited by:
1. Selection diversity, where the "best" signal copy is selected and processed (demodulated and decoded), while all other copies are discarded. There are different criteria for what constitutes the "best" signal.
2. Combining diversity, where all copies of the signal are combined (before or after the demodulator), and the combined signal is decoded. Again, there are different algorithms for combination of the signals.
Combining diversity leads to better performance, as all available information is exploited. We also have to keep in mind that the gain of multiple antennas is due to two effects:
• diversity gain
• beamforming gain.
Diversity gain reflects the fact that it is improbable that several antenna elements are in a fading dip simultaneously; the probability of very low signal levels is thus decreased by the use of multiple antenna elements. Beamforming gain reflects the fact that (for combining diversity) the combiner performs an averaging over the noise at the different antennas. Thus, even if the signal levels at all antenna elements are identical, the combiner output SNR is larger than the SNR at a single antenna element.
4.4.1 Selection Diversity
Received-Signal-Strength-Indication-Driven Diversity
In this method, the RX selects the signal with the largest instantaneous power (or Received Signal Strength Indication – RSSI), and processes it further. This method requires Nr antenna elements, Nr RSSI sensors, and an Nr-to-1 multiplexer (switch), but only one RF chain (see Figure 4.7). The method allows simple tracking of the selection criterion even in fast-fading channels. Thus, we can switch to a better antenna as soon as the RSSI becomes higher there.



Figure 4.7 Selection diversity principle: (a) Received-signal-strength-indication-controlled diversity. (b) Bit-error-rate-controlled diversity.
1. If the BER is determined by noise, then RSSI-driven diversity is the best of all the selection diversity methods, as maximization of the RSSI also maximizes the SNR.
2. If the BER is determined by co-channel interference, then the RSSI is no longer a good selection criterion. High receive power can be caused by a high level of interference, such that the RSSI criterion makes the system select branches with a low signal-to-interference ratio.
3. Similarly, RSSI-driven diversity is suboptimum if the errors are caused by the frequency selectivity of the channel. RSSI-driven diversity can still be a reasonable approximation, because the errors caused by signal distortion occur mainly in the fading dips of the channel.
For an exact performance analysis it is important to obtain the SNR distribution at the output of the selector. Assume that the instantaneous signal amplitude is Rayleigh distributed, such that the SNR of the nth diversity branch, γn, is distributed as
pdf(γn) = (1/γ̄) exp(−γn/γ̄)   (4.3)
where γ̄ is the mean branch SNR (assumed to be identical for all diversity branches). The cumulative distribution function (cdf) is then
Pr(γn < γ) = 1 − exp(−γ/γ̄)   (4.4)
The cdf is, by definition, the probability that the instantaneous SNR lies below a given level. As the RX selects the branch with the largest SNR, the probability that the chosen signal lies below

the threshold is the product of the probabilities that the SNR at each branch is below the threshold. In other words, the cdf of the selected signal is the product of the cdfs of each branch:

Pr(γselected < γ) = [1 − exp(−γ/γ̄)]^Nr   (4.5)
Advantages of RSSI-driven selection:
1. Only one RF chain is required, since only a single received signal is processed at a time.
2. Easy to implement.
Disadvantages of RSSI-driven selection:
1. It wastes signal energy by discarding (Nr − 1) copies of the received signal.
2. It is not an optimum method.
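A small numeric illustration of Eq. (4.5); the branch count, threshold and mean SNR values below are illustrative assumptions, not values from the text.

```python
# Outage probability of RSSI-driven selection diversity, Eqs. (4.4)-(4.5):
# probability that the selected branch SNR falls below a threshold, assuming
# Rayleigh fading with the same mean SNR on every branch.
import math

def outage_selection(nr, threshold_db, mean_snr_db):
    gamma_s = 10 ** (threshold_db / 10)
    gamma_bar = 10 ** (mean_snr_db / 10)
    p_branch = 1 - math.exp(-gamma_s / gamma_bar)   # cdf of one branch, Eq. (4.4)
    return p_branch ** nr                           # product over Nr branches, Eq. (4.5)

for nr in (1, 2, 3, 4):
    print(f"Nr = {nr}: outage = {outage_selection(nr, threshold_db=0, mean_snr_db=10):.2e}")
```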

Bit-Error-Rate-Driven Diversity
For BER-driven diversity, we first transmit a training sequence – i.e., a bit sequence that is known at the RX. The RX then demodulates the signal from each receive antenna element and compares it with the transmit signal. The antenna whose associated signal results in the smallest BER is judged to be the "best" and is used for the subsequent reception of data signals. A similar approach is the use of the mean square error of the "soft-decision" demodulated signal, or the correlation between transmit and receive signal. BER-driven diversity has several drawbacks:
1. The RX needs either Nr RF chains or demodulators (which makes the RX more complex), or the training sequence has to be repeated Nr times (which decreases spectral efficiency), so that the signal at all antenna elements can be evaluated.
2. If the RX has only one demodulator, then it is not possible to continuously monitor the selection criterion (i.e., the BER) of all diversity branches. This is especially critical if the channel changes quickly.
3. Since the duration of the training sequence is finite, the selection criterion – i.e., the bit error probability – cannot be determined exactly. The variance of the BER around its true mean decreases as the duration of the training sequence increases.
Disadvantages of BER-driven diversity:
1. More receiver chains are needed, which makes the RX more complex.
2. The training sequence has to be repeated Nr times, which decreases spectral efficiency.
3. If the channel changes quickly, more than one demodulator is required.
4. As the duration of the training sequence increases the BER estimate improves, so a trade-off between training duration and BER accuracy must be maintained.
5. If the diversity branches are monitored all the time, the hardware effort increases and the spectral efficiency is reduced.
4.4.2 Switched Diversity
The main drawback of selection diversity is that the selection criteria (power, BER, etc.) of all diversity branches have to be monitored in order to know when to select a different antenna. As we have shown above, this leads to either increased hardware effort or reduced spectral efficiency. An alternative solution, which avoids these drawbacks, is switched diversity.

In this method, the selection criterion of just the active diversity branch is monitored. If it falls below a certain threshold, the RX switches to a different antenna.
Disadvantage:
• Performance is worse than that of selection diversity.
4.4.3 Combining Diversity
Basic Principle
Selection diversity wastes signal energy by discarding (Nr − 1) copies of the received signal. This drawback is avoided by combining diversity, which exploits all available signal copies. Each signal copy is multiplied by a (complex) weight and then added up:
complex weight = phase correction + real weighting of the amplitude
• The phase correction causes the signal amplitudes to add up coherently, while the noise is added incoherently, so that only the noise powers add up.
• For amplitude weighting, two methods are widely used:
  – Maximum Ratio Combining (MRC), which weighs all signal copies by their amplitude.
  – Equal Gain Combining (EGC), where all amplitude weights are the same (in other words, there is no weighting, but just a phase correction).
The two methods are outlined in Figure 4.8.
Maximum Ratio Combining
MRC compensates for the phases, and weights the signals from the different antenna branches according to their SNR. This is the optimum way of combining different diversity branches – if several assumptions are fulfilled. Let us assume a propagation channel that is slow fading and flat fading. The only disturbance is AWGN. Under these assumptions, each channel realization can be written as a time-invariant filter with impulse response
(4.6)
where αn is the (instantaneous) gain of diversity branch n. The signals at the different branches are multiplied with weights w*n and added up, so that the SNR becomes

(4.7) where Pn is the noise power per branch. The SNR is maximized by choosing the weights as (4.8) i.e., the signals are phase-corrected (remember that the received signals are multiplied with w*) and weighted by the amplitude. We can then easily see that in that case the output SNR of the diversity combiner is the sum of the branch SNRs:

γ = γ1 + γ2 + · · · + γNr   (4.9)
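A minimal simulation sketch of maximum ratio combining (assuming flat, slow fading, equal noise power per branch, and invented branch gains; it simply illustrates that conjugate weighting recovers the transmitted symbol while the branch SNRs add up as in Eq. (4.9)):

```python
# Maximum ratio combining sketch: weights are the complex conjugates of the
# branch gains, Eq. (4.8); the combiner output SNR is the sum of the branch
# SNRs, Eq. (4.9).  All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
nr = 4                                                         # receive branches
h = (rng.normal(size=nr) + 1j * rng.normal(size=nr)) / np.sqrt(2)  # branch gains
noise_power = 0.1
s = 1.0 + 0j                                                   # transmitted symbol

noise = np.sqrt(noise_power / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
r = h * s + noise                                              # branch signals

w = np.conj(h)                                                 # MRC weights
y = np.sum(w * r)                                              # combiner output

branch_snr = np.abs(h) ** 2 / noise_power
print("sum of branch SNRs (Eq. 4.9):", round(float(branch_snr.sum()), 2))
print("combined symbol estimate    :", np.round(y / np.sum(np.abs(h) ** 2), 3))
```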



Figure 4.8 Combining diversity principle: (a) maximum ratio combining, (b) equal gain combining.
If the branches are statistically independent, then the moment-generating function of the total SNR can be computed as the product of the characteristic functions of the branch SNRs.

Figure 4.9 compares the statistics of the SNR for RSSI-driven selection diversity and MRC. Naturally, there is no difference between the diversity types for Nr = 1, since there is no diversity. We furthermore see that the slope of the distribution is the same for MRC and selection diversity, but that the difference in the mean values increases with increasing Nr. This is intuitively clear, as selection diversity discards Nr − 1 signal copies – something that increases with Nr. For Nr = 3, the difference between the two types of diversity is only about 2 dB.


Advantages:
1. An output with an acceptable SNR is produced even when none of the individual signals are themselves acceptable.
2. This technique gives the best statistical reduction of fading of any known linear diversity combiner.
Disadvantages:
1. It requires an individual receiver and phasing circuit for each antenna element.
2. High cost.
Equal Gain Combining
For EGC, we find that the SNR of the combiner output is

(4.10)
where we have assumed that the noise levels are the same on all diversity branches. The mean SNR of the combiner output can be found to be
(4.11)
if all branches suffer from Rayleigh fading with the same mean SNR γ̄. Remember that we only assume here that the mean SNR is the same in all branches, while the instantaneous branch SNRs (representing different channel realizations) can be different. It is quite remarkable that EGC performs worse than MRC by only a factor π/4 (in terms of mean SNR). The performance difference between EGC and MRC becomes bigger when the mean branch SNRs are also different.
Advantage of EGC:
• It is superior to selection diversity.
Disadvantages of EGC:
1. EGC is inferior to the maximal ratio combiner, since interference- and noise-corrupted signals may be combined with high quality signals.
2. EGC performs worse than MRC by a factor of π/4.
4.5 Transmit Diversity
Multiple antennas can be installed at just one link end (usually the BS). For the uplink transmission from the MS to the BS, multiple antennas can act as receive diversity branches. For the downlink, any possible diversity originates at the transmitter.
4.5.1 Transmitter Diversity with Channel State Information
Channel State Information (CSI) is available at the TX. This knowledge might be obtained from feedback from the RX, or from reciprocity principles. The optimum transmission scheme linearly weights the signals transmitted from the different antenna elements with the complex conjugates of the channel transfer functions from the transmit antenna elements to the single receive antenna. This approach is known as maximum ratio transmission.
Channel transfer function = 1 / Transmit signal transfer function




4.5.2 Transmitter Diversity Without Channel State Information
In many cases, Channel State Information (CSI) is not available at the TX. We then cannot simply transmit weighted copies of the same signal from different transmit antennas, because we cannot know how they would add up at the RX. It is equally likely for the addition of the different components to be constructive or destructive. We thus cannot gain any diversity (or beamforming). In order to give benefits, transmission of the signals from the different antenna elements has to be done in such a way that the RX can distinguish the different transmitted signal components. One way is delay diversity. In this scheme, the signals transmitted from different antenna elements are delayed copies of the same signal. This makes sure that the effective impulse response is delay dispersive, even if the channel itself is flat fading. So, in a flat-fading channel, we transmit data streams with a delay of 1 symbol duration from each of the transmit antennas. The effective impulse response of the channel then becomes

(4.12)
where the hn are the gains from the nth transmit antenna to the receive antenna, and the impulse response has been normalized so that the total transmit power is independent of the number of antenna elements. The signals from the different transmit antennas to the RX act effectively as delayed MPCs. If the antenna elements are spaced sufficiently far apart, these coefficients fade independently. With an appropriate RX for delay-dispersive channels – e.g., an equalizer or a Rake RX – diversity works effectively. If the channel from a single transmit antenna to the RX is already delay dispersive, then the scheme still works, but care has to be taken in the choice of delays for the different antenna elements. The delay between signals transmitted from different antenna elements should be at least as large as the maximum excess delay of the channel.
An alternative method is phase-sweeping diversity. In this method, which is especially useful if there are only two antenna elements, the same signal is transmitted from both antenna elements. However, one of the antenna signals undergoes a time-varying phase shift. This means that at the RX the received signals add up in a time-varying way; in other words, we are artificially introducing temporal variations into the channel. Even if the TX, RX, and the IOs are stationary, the channel thus appears time variant; if appropriate coding and/or interleaving is done, this transmit diversity improves performance. Yet another possibility for achieving transmit diversity is space–time coding.
4.6 Equalisers
Equalizers are RX structures that work both ways: they reduce or eliminate ISI, and at the same time exploit the delay diversity inherent in the channel. The operational principle of an equalizer can be visualized either in the time domain or the frequency domain. The goal of an equalizer is thus to reverse the distortions introduced by the channel. In other words, the product of the transfer functions of the channel and the equalizer should be constant. This can be expressed mathematically in the following way: let the original signal be s(t); it is sent through a (quasi-static) wireless channel with the impulse response h(t), received, and sent through an equalizer with impulse response e(t).


(4.13)
4.6.1 Modeling of Channel and Equalizer
The following sections, describing different equalizer structures, will require a discrete-time model of the channel and equalizer. We now give such a model, together with the important concept of the noise-whitening filter, which has great importance for optimum RXs. The first stage of the RX consists of a filter that limits the amount of received noise power. This filter should also make sure that all information is contained in the sample values at instances ts + iTS. This is achieved by a filter that is matched to η(t) – i.e., the convolution of the channel impulse response and the basis pulse.

4.6.2 Channel Estimation
A common strategy for data detection with an equalizer is to separate the estimation of f and c. In a first step, a training sequence (i.e., known c) is used to estimate f. During the subsequent transmission of the unknown payload data, we assume that the estimated impulse response is the true one, and solve the above equation for c. In this subsection, we discuss estimation of the channel impulse response by means of a training sequence. Channel estimation shows strong similarities to the "channel-sounding" techniques. A very simple estimate can be obtained by means of Pseudo Noise (PN) sequences with period Nper. The ACF of a PN sequence approximates a Dirac delta function.

Figure 4.10 The principle of channel estimation by correlation. More precisely, periodic continuation of the sequence {bi}, convolved with a time-reversed version of itself, gives a sum of Dirac pulses spaced Nper symbols apart:



(4.14)
Now, if the duration of the channel impulse response is shorter than Nper, the correlator output is simply a periodic repetition of f (see Figure 4.10). Channel estimation by means of a training sequence has several drawbacks:
1. A reduction in spectral efficiency: the training sequence does not convey any payload information. For example, the Global System for Mobile communications (GSM) uses 26 bits in every 148-bit frame for the training sequence.
2. Sensitivity to noise: in order to keep the spectral efficiency reasonable, the training sequence has to be short. However, this implies that the training sequence is sensitive to noise, and also to nonidealities in the sounding sequences. If channel estimation is done by means of iterative algorithms, only algorithms with a fast convergence rate can be used; however, such algorithms lead to a high residual error rate.
3. Outdated estimates: if the channel changes after transmission of the training sequence, the RX cannot detect this variation. Use of an outdated channel estimate leads to decision errors.
Classification of equalizers:
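A brief sketch of the correlation property that Eq. (4.14) relies on; the length-7 m-sequence used here is a standard textbook example and is only illustrative:

```python
# Circular autocorrelation of a maximal-length (PN) sequence: a large peak at
# lag 0 and a small constant (-1) elsewhere, i.e. approximately a Dirac
# impulse, which is why correlating the received training signal with the
# sequence picks out the channel taps one delay at a time.
import numpy as np

b = np.array([1, 1, 1, 0, 0, 1, 0])          # length-7 m-sequence (illustrative)
b = 2.0 * b - 1.0                            # map bits {0,1} -> {-1,+1}

n = len(b)
acf = np.array([np.dot(b, np.roll(b, k)) for k in range(n)])
print(acf)                                   # [ 7. -1. -1. -1. -1. -1. -1.]
```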

4.6.3 Linear Equalizers
Linear equalizers are simple linear filter structures that try to invert the channel, in the sense that the product of the transfer functions of channel and equalizer fulfils a certain criterion. This criterion can either be achieving a completely flat transfer function of the channel–filter concatenation, or minimizing the mean-squared error at the filter output. The basic structure of a linear equalizer is sketched in Figure 4.11: a transmit sequence {ci} is sent over a dispersive, noisy channel, so that the sequence {ui} is available at the equalizer input. We now need to find the coefficients of a Finite Impulse Response (FIR) filter (transversal filter, Figure 4.12) with 2K + 1 taps. This filter should convert the sequence {ui} into a sequence {ĉi}:

(4.15)
that should be "as close as possible" to the sequence {ci}. Defining the deviation εi as
(4.16)
we aim to find a filter so that εi = 0 for all i,
(4.17)
which gives the ZF equalizer, or so that E{|εi|²} is minimized,
(4.18)
which gives the Minimum Mean Square Error (MMSE) equalizer.

Figure 4.11 Linear equalizer in the time domain (a) and time-discrete equivalent system in the z-transform domain (b).

Figure 4.12 Structure of a linear transversal filter. Remember that z⁻¹ represents a delay by one sample.
Advantages of the lattice equalizer:
1. It is simple and easy to implement.
2. It has numerical stability and faster convergence.
3. The unique structure of the lattice filter allows dynamic assignment of the most effective length of the lattice equalizer.
4. When the channel becomes more time dispersive, the length of the equalizer can be increased by the algorithm without stopping the operation of the equalizer.

Disadvantages of the lattice equalizer:
1. The structure of the lattice equalizer is more complicated than that of a linear transversal equalizer.
2. Not suitable for severely distorted channels.
4.6.3.1 Zero-Forcing Equalizer
The ZF equalizer can be interpreted in the frequency domain as enforcing a completely flat (constant) transfer function of the combination of channel and equalizer, by choosing the equalizer transfer function as E(z) = 1/F(z). In the time domain, this can be interpreted as minimizing the maximum ISI (peak distortion criterion). The ZF equalizer is optimum for the elimination of ISI. However, channels also add noise, which is amplified by the equalizer. At frequencies where the transfer function of the channel attains small values, the equalizer has a strong amplification, and thus also amplifies the noise. As a consequence, the noise power at the detector input is larger than for the case without an equalizer (see Figure 4.13).
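A short numeric illustration of this noise enhancement; the channel taps and noise level are invented for illustration, and the MMSE weighting shown for comparison anticipates the criterion of the next subsection:

```python
# Noise enhancement of a zero-forcing equalizer versus an MMSE-style linear
# equalizer, for an example channel with a deep spectral notch.
import numpy as np

h = np.array([1.0, 0.9])                 # example channel taps (deep notch near f = 0.5)
H = np.fft.fft(h, 512)                   # samples of the channel transfer function

E_zf = 1.0 / H                           # zero-forcing equalizer: E = 1/H
noise_gain_zf = np.mean(np.abs(E_zf) ** 2)
print(f"ZF noise power gain  : {noise_gain_zf:.1f}x")

sigma2 = 0.1                             # assumed noise-to-signal ratio
E_mmse = np.conj(H) / (np.abs(H) ** 2 + sigma2)   # MMSE-style weighting
noise_gain_mmse = np.mean(np.abs(E_mmse) ** 2)
print(f"MMSE noise power gain: {noise_gain_mmse:.1f}x")
```

The zero-forcing filter amplifies the noise strongly at the spectral notch, while the MMSE weighting limits this amplification, as Figure 4.13 illustrates.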

Figure 4.13 Illustration of noise enhancement in a zero-forcing equalizer (a), which is mitigated in an MMSE linear equalizer (b).
The Fourier transform Ξ(e^{jωTS}) of the sample ACF ζi is related to the Fourier transform of η(t) as

(4.19) The noise power at the detector is

(4.20)
It is finite only if the spectral density Ξ has no (or only integrable) singularities.

Advantage of the ZF equalizer:
• It performs well for static channels with high SNR.
Disadvantages of the ZF equalizer:
1. At frequencies with high channel attenuation the equalizer excessively amplifies the noise.
2. This noise enhancement makes the ZF equalizer unsuitable for a wireless link.
4.6.3.2 The Mean Square Error Criterion
The ultimate goal of an equalizer is minimization, not of the ISI, but of the bit error probability. Noise enhancement makes the ZF equalizer ill-suited for this purpose. A better criterion is minimization of the Mean Square Error (MSE) between the transmit signal and the output of the equalizer. We are thus searching for a filter that minimizes
(4.21)
This can be achieved with a filter whose coefficients eopt are given by
eopt = R⁻¹ p   (4.22)
where R is the correlation matrix of the received signal, and p is the cross correlation between the received signal and the transmit signal. Considering the frequency domain, the concatenation of the noise-whitening filter with the equalizer E(z) has the transfer function:

(4.23) which is the transfer function of the Wiener filter. The MSE is then

(4.24)
Comparison with Eq. (4.20) shows that the noise power of an MMSE equalizer is smaller than that of a ZF equalizer (as illustrated in Figure 4.13).
Advantages of the mean square error equalizer:
1. The noise power of an MMSE equalizer is smaller than that of a ZF equalizer.
2. The noise variance is lower for the MMSE equalizer than for the ZF equalizer.
4.6.3.3 Adaptation Algorithms for Mean Square Error Equalizers
In order to find the optimum equalizer weights, we can directly solve Eq. (4.22). However, this requires on the order of (2K + 1)³ complex operations. To ease the computational burden, iterative algorithms have been developed. The quality of an iterative algorithm is described by the following criteria:

• Convergence rate: how many iterations are required to "closely approximate" the final result? It is usually assumed that the channel does not change during the iteration period. However, if an algorithm converges too slowly, it will never reach a stable state – the channel has changed before the algorithm has converged.
• Misadjustment: the size of the deviation of the converged state of the iterative algorithm from the exact MSE solution.
• Computational effort per iteration.
In the following, we discuss two algorithms that are widely used – the Least Mean Square (LMS) and the Recursive Least Square (RLS) algorithms.
Least Mean Square Algorithm
The LMS algorithm, also known as the stochastic gradient method, consists of the following steps:
1. Initialize the weights with values e0.
2. With this value, compute an approximation of the gradient of the MSE. The true gradient cannot be computed, because it is an expected value. Rather, we use estimates for R and p – namely, their instantaneous realizations:
(4.25)
(4.26)
where the subscript n indexes the iterations. The gradient is estimated as
(4.27)
3. We next compute an updated estimate of the weight vector e by adjusting the weights in the direction of the negative gradient:
(4.28)
where μ is a user-defined parameter that determines convergence and residual error.
4. If the stop criterion is fulfilled – e.g., the relative change in the weight vector falls below a predefined threshold – the algorithm has converged. Otherwise, we return to step 2.
It can be shown that the LMS algorithm converges if
(4.29)
Here λmax is the largest eigenvalue of the correlation matrix R. The problem is that we do not know this eigenvalue (computing it requires a larger computational effort than inverting the correlation matrix). We thus have to guess values for μ. If μ is too large, we obtain faster convergence, but the algorithm might sometimes diverge. If we choose μ too small, then convergence is very probable, but slow. Generally, the convergence speed depends on the condition number of the correlation matrix (i.e., the ratio of the largest to the smallest eigenvalue): the larger the condition number, the slower the convergence of the LMS algorithm.
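A minimal sketch of these LMS steps applied to an adaptive linear equalizer; the channel taps, noise level, step size μ and filter length are illustrative assumptions, not values from the text:

```python
# LMS (stochastic gradient) adaptive linear equalizer sketch following the
# steps listed above, trained on a known +/-1 training sequence.
import numpy as np

rng = np.random.default_rng(2)
f = np.array([1.0, 0.4, 0.2])                     # example channel impulse response
n_train = 2000
c = rng.choice([-1.0, 1.0], size=n_train)         # known training symbols
u = np.convolve(c, f)[:n_train] + 0.05 * rng.normal(size=n_train)

num_taps, delay, mu = 11, 5, 0.01                 # equalizer length, decision delay, step size
e = np.zeros(num_taps)                            # step 1: initialise the weights
errs = []
for n in range(num_taps, n_train):
    u_vec = u[n - num_taps + 1:n + 1][::-1]       # most recent received samples
    y = e @ u_vec                                 # equalizer output
    err = c[n - delay] - y                        # deviation from the training symbol
    e += mu * err * u_vec                         # steps 2-3: stochastic gradient update
    errs.append(err ** 2)

print("MSE over first 100 updates:", round(float(np.mean(errs[:100])), 3))
print("MSE over last  100 updates:", round(float(np.mean(errs[-100:])), 4))
```

The squared error shrinks as the weights converge, illustrating the convergence behaviour discussed above.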


Advantages of LMS:
1. It maximizes the signal-to-distortion ratio at its output within the constraints of the equalizer filter length.
2. Low computational complexity.
3. Simple program.
Disadvantage of LMS:
• Slow convergence and poor tracking. It converges after about 300 bits.
The Recursive Least Squares Algorithm
In most cases, the LMS algorithm converges very slowly. Furthermore, the use of this algorithm is justified only when the statistical properties of the received signal fulfill certain conditions. The general Least Squares (LS) criterion, on the other hand, does not require such assumptions. It just analyzes the N subsequent errors εi, and chooses the weights such that the sum of the squared errors is minimized. This general LS problem can be solved by a recursive algorithm as well – known as RLS.

Advantages of RLS:
1. Fast convergence. It converges after about 10 bits.
2. Good tracking ability.
Disadvantages of RLS:
1. High computational complexity.
2. Complex program structure.
3. If λ is too small, the equalizer will be unstable.
4. It has a large residual error.
4.6.4 Non-Linear Equalizers
If the channel distortion is too severe, non-linear equalizers are used. Three very effective non-linear methods have been developed which offer improvements over linear equalization techniques and are used in most 2G and 3G systems:
1. Decision feedback equalization
2. Maximum likelihood symbol detection
3. Maximum Likelihood Sequence Estimation (MLSE)
4.6.4.1 Decision Feedback Equalizers
A decision feedback equalizer (DFE) has a simple underlying premise: once we have detected a bit correctly, we can use this knowledge, in conjunction with knowledge of the channel impulse response, to compute the ISI caused by this bit. In other words, we determine the effect this bit will have on subsequent samples of the received signal. The ISI caused by each bit can then be subtracted from these later samples. The block diagram of a DFE is shown in Figure 4.14.



Figure 4.14 Structure of a decision feedback equalizer. The DFE consists of a forward filter with transfer function E(z), which is a conventional linear equalizer, as well as a feedback filter with transfer function D(z). As soon as the RX has decided on a received symbol, its impact on all future samples (postcursor ISI ) can be computed, and (via the feedback) subtracted from the received signal. A key point is the fact that the ISI is computed based on the signal after the hard decision; this eliminates additive noise from the feedback signal. Therefore, a DFE results in a smaller error probability than a linear equalizer. One possible source of problems is error propagation. If the RX decides incorrectly for one bit, then the computed postcursor ISI is also erroneous, so that later signal samples arriving at the decision device are even more afflicted by ISI than the unequalized samples. This leads to a vicious cycle of wrong decisions and wrong subtraction of postcursors.
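A simplified sketch of the decision feedback idea described above; it assumes the channel impulse response is already known or estimated, uses invented taps and noise, and omits the forward filter, so it is an illustration of the principle rather than a complete DFE:

```python
# Decision feedback sketch: subtract the postcursor ISI computed from already
# decided symbols, then make a hard decision on the cleaned sample.
import numpy as np

rng = np.random.default_rng(3)
f = np.array([1.0, 0.5, 0.3])                     # channel: cursor + 2 postcursor taps
n_sym = 20
c = rng.choice([-1.0, 1.0], size=n_sym)           # transmitted +/-1 symbols
u = np.convolve(c, f)[:n_sym] + 0.05 * rng.normal(size=n_sym)

decisions = np.zeros(n_sym)
for n in range(n_sym):
    # feedback branch: postcursor ISI caused by symbols already decided
    isi = sum(f[k] * decisions[n - k] for k in range(1, len(f)) if n - k >= 0)
    decisions[n] = np.sign(u[n] - isi)            # hard decision after ISI removal

print("symbol errors:", int(np.sum(decisions != c)))
```

Note that a single wrong decision would make the computed ISI wrong for the following symbols, which is exactly the error propagation problem mentioned above.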

Advantage of zero forcing DFE 1. FBF can be realized as a lattice structure. 2. RLS lattice algorithm can be used to yield fast convergence. 3. DFE has a smaller error probability than a linear equalizer. Disadvantage of zero forcing DFE 1. The number of taps in the FFF and FBF approach infinity. 2. High computational complexity. 3. Error propagation occurs. MMSE Decision Feedback Equalizer The goal of the MMSE DFE is again minimization of the MSE, by striking a balance between noise enhancement and residual ISI. As noise enhancement is different in the DFE case from that of linear equalizers, the coefficients for the forward filter are different: as postcursor ISI does not contribute to noise enhancement, we now aim to minimize the sum of noise and (average) precursor ISI. The coefficients of the feed forward filter can be computed from the following equation:

(4.30)
where Kff is the number of taps in the feed forward filter. The coefficients of the feedback filter

are then

(4.31) where Kfb is the number of taps in the feedback filter. Assuming some idealizations (the feedback filter must be at least as long as the postcursor ISI; it must have as many taps as required to fulfill Eq. (4.30); there is no error propagation), the MSE at the equalizer output is

(4.32) Zero-Forcing Decision Feedback Equalizer The ZF DFE is conceptually even simpler. The noise-whitening filter eliminates all precursor ISI, such that the resulting effective channel is purely causal. Postcursor ISI is subtracted by the feedback branch. The effective noise power at the decision device is

(4.33) This equation demonstrates that noise power is larger than it is in the unequalized case, but smaller than that for the linear ZF equalizer.

4.7 Comparison of Various Algorithms for Adaptive Equalization

S.No | Algorithm            | No. of multiply operations | Advantages                                                | Disadvantages
1    | LMS Gradient DFE     | 2N + 1                     | Low computational complexity, simple program              | Slow convergence, poor tracking
2    | Kalman RLS           | 2.5N² + 4.5N               | Fast convergence, good tracking ability                   | High computational complexity
3    | Gradient lattice     | 13N − 8                    | Stable, low computational complexity, flexible structure  | Performance not as good as other RLS
4    | Gradient lattice DFE | 13N1 + 33N2 − 36           | Low computational complexity                              | Complex programming
5    | Fast Kalman DFE      | 20N + 5                    | Can be used for DFE, fast convergence and good tracking   | Complex programming, computation not low, unstable


4.8 Channel coding techniques Channel coding protects digital data from errors by selectively introducing redundancies in the transmitted data. Channel codes are used to detect errors are called error detection codes. Codes that are used to detect and correct errors are called as error correction codes. One way of classifying codes is to distinguish between block codes, where the redundance is added to blocks of data and convolutional codes, where redundancy is added continuously. Block codes are well suited for correcting burst errors ± something that frequently occurs in wireless communications; however, error bursts can also be converted into random errors by interleaving techniques. Convolutional codes have the advantage that they are easily decoded by means of a Viterbi decoder. They also offer the possibility of joint decoding and equalization by means of the same algorithm. Turbo codes and LDPC codes easily fit into this categorization. 4.8.1 Block Codes Block codes are codes that group the source data into blocks, and ± from the values of the bits in that block ± compute a longer codeword that is actually transmitted. The smaller the code rate ± i.e., the ratio of the number of bits in the original datablock to that of the transmitted block ± the higher the redundancy, and the higher the probability that errors can be corrected. The most VLPSOH FRGHV DUH UHSHWLWLRQ FRGHV ZLWK EORFNVL]H  IRU DQ LQSXW ³EORFN´ x, the output block is xxx (for a repetition code with rate 1/3). After this intuitive introduction, let us now give a more precise description. First, we define some important terms and notations: ‡Block coding: for block coding, source data are parsed into blocks of K symbols. Each of these uncoded data blocks is then associated with a codeword of length N symbols. ‡Code rate: The ratio K/N is called the code rate Rc (assuming the symbol alphabet of coded and uncoded data is the same). ‡Binary codes: these ocFXUZKHQWKHV\PERODOSKDEHWLVELQDU\XVLQJRQO\³´DQG³´$OPRVW all practical block codes are binary, with the exception of Reed±Solomon (RS) codes (see below). If not stated otherwise, the remainder of the chapter always talks about binary codes. TKHUHIRUH LQ WKH IROORZLQJ ³VXP´ PHDQV ³PRGXOR- VXP´ DQG ³+´ GHQRWHV D PRGXOR-2 addition. ‡Hamming distance: The Hamming distance dH(x, y) between two codewords is the number of different bits: For example, the Hamming distance between the codewords 01001011 and 11101011 is 2. Note that it is common in coding theory to use row vectors instead of the column vectors commonly used in communication theory. ‡Euclidean distance: The squared Euclidean distance between two codewords is the geometric distance between the code vectors x and y is: (4.34) Minimum distance: the minimum distance dmin of a code is the minimum Hamming distance min(dH), where the minimum is taken over all possible combinations of two codewords of the code. Note that this minimum distance is equal to the number of linearly independent columns in the parity check matrix (see below). ‡Weight: the weight of a codeword is the distance from the origin ± LHWKHQXPEHURI¶VLQWKH codeword. For example, the weight of codeword 01001011 is 4. ϭϮϭ 

• Systematic codes: in a systematic code, the original, information-bearing bits occur explicitly in the output of the coder, at fixed locations. The parity check (redundant) bits, which are computed from the information-bearing bits, are at different (also fixed) locations. For transmission over an ideal (noise-free, nondistorting) channel, the codeword could be determined without any information from the parity check bits. As an example, a systematic (7, 4) block code can be created in the following way:

where k represents information symbols and m represents parity check symbols.
• Linear codes (group codes): for these codes, the sum of any two codewords gives another valid codeword. Important properties can be derived from this basic fact:
  – The all-zero word is a valid codeword.
  – All codewords (except the all-zero word) have a weight equal to or larger than dmin.
  – The distribution of distances – i.e., the Hamming distances between valid codewords – is equal to the weight distribution of the code.
  – All codewords can be represented by a linear combination of basic codewords (generator words).
• Cyclic codes: cyclic codes are a special case of linear codes, with the property that any cyclic shift of a codeword results in another valid codeword. Cyclic codes can be interpreted either by codevectors, or by polynomials of degree ≤ N − 1, where N is the length of the codeword. The nonzero coefficients correspond to the nonzero entries of the codevector; the variable x is a dummy variable. As an example, we show both representations of the codeword 011010:

  011010  <->  x + x^2 + x^4     (4.35)

where the ith entry of the codevector (counted from the left, starting with i = 0) is taken as the coefficient of x^i.

• Galois fields: a Galois field GF(p) is a finite field with p elements, where p is a prime integer. A field defines addition and multiplication for operating on its elements, and it is closed under these operations (i.e., the sum of two elements is again a valid element, and similarly for the product); it contains identity and inverse elements for the two operations; and the associative, commutative, and distributive laws apply. The most important example is GF(2). It consists of the elements 0 and 1 and is the smallest finite field. Its addition and multiplication tables are as follows:

  +  | 0  1        .  | 0  1
  ---+------       ---+------
  0  | 0  1        0  | 0  0
  1  | 1  0        1  | 0  1

Codes often use GF(2) because it is easily represented on a computer by a single bit. It is also possible to define extension fields GF(p^m), where again p is a prime integer and m is an arbitrary integer.

• Primitive polynomials: we call a polynomial of degree N irreducible if it is not divisible by any polynomial of degree less than N and greater than 0. A polynomial g(x) of degree m is called primitive if it is irreducible and the smallest integer N for which g(x) divides x^N + 1 is N = 2^m - 1 (see the sketch below).
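Two of the definitions above lend themselves to a quick numerical check. The first Python sketch below (illustrative only) computes the Hamming distance and weight for the codewords used as examples above, and the minimum distance of the rate-1/3 repetition code:

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def weight(x):
    return sum(x)

print(hamming_distance([0, 1, 0, 0, 1, 0, 1, 1], [1, 1, 1, 0, 1, 0, 1, 1]))   # 2, as in the example
print(weight([0, 1, 0, 0, 1, 0, 1, 1]))                                        # 4, as in the example

# minimum distance of the rate-1/3 repetition code {000, 111}: smallest pairwise distance
code = [[0, 0, 0], [1, 1, 1]]
print(min(hamming_distance(c1, c2) for i, c1 in enumerate(code) for c2 in code[i + 1:]))   # 3

The second sketch verifies the primitive-polynomial property for g(x) = x^3 + x + 1 over GF(2); this polynomial is a standard example chosen for illustration, not one taken from the text. The check reduces to repeated polynomial division modulo 2:

def gf2_poly_mod(dividend, divisor):
    # polynomial division over GF(2); index i holds the coefficient of x^i
    dividend = dividend[:]
    while len(dividend) >= len(divisor):
        if dividend[-1]:                                   # reduce the leading term if it is 1
            shift = len(dividend) - len(divisor)
            for i, c in enumerate(divisor):
                dividend[shift + i] ^= c
        dividend.pop()                                     # drop the (now zero) leading term
    return dividend

g = [1, 1, 0, 1]                                           # g(x) = x^3 + x + 1
for N in range(1, 8):
    x_N_plus_1 = [1] + [0] * (N - 1) + [1]                 # x^N + 1
    if not any(gf2_poly_mod(x_N_plus_1, g)):
        print("smallest N with g(x) dividing x^N + 1:", N)  # prints 7 = 2^3 - 1, so g(x) is primitive
        break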


4.8.1.1 Encoding

The most straightforward encoding is a mapping table: any of the 2^K possible information words is associated with an N-bit codeword; the table just looks up the input and reads out the associated codeword. However, this method is highly inefficient: it requires storing 2^K codewords. For linear codes, any codeword can be created as a linear combination of other codewords, so that it is sufficient to store a subset of codewords. For a K-bit information word, only K of the 2^K codewords are linearly independent, and thus have to be stored. It is advantageous to select those codewords that have only a single 1 in the first K positions; this choice automatically leads to a systematic code. The encoding process can then best be described by a matrix multiplication:

  x = u G     (4.36)

Here, x denotes the N-dimensional codevector, u the K-dimensional information vector, and G the K x N generator matrix. For a systematic code, the leftmost K columns of the generator matrix form a K x K identity matrix, while the remaining N - K columns contain the parity check part. The first K bits of x are then identical to u. Note that, as discussed above, we use row vectors to represent codewords, and that the vector-matrix product is obtained by premultiplying the matrix with this row vector.

4.8.1.2 Decoding

In order to decide whether the received word is a valid codeword, we multiply it by a parity check matrix H. This results in an (N - K)-dimensional syndrome vector s_synd. If this vector has all-zero entries, then the received word is a valid codeword. Next, let us determine how to find an H-matrix. The relationship H · G^T = 0 has to hold, as each row of the generator matrix is a valid codeword, whose product with the parity check matrix has to be 0. Representing G as

  G = (I | P)     (4.37)

where I is the K x K identity matrix and P the K x (N - K) parity part, the parity check matrix can be written as

  H = (P^T | I_(N-K))     (4.38)

since then H · G^T = P^T + P^T = 0 (modulo 2).
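Equations (4.36)-(4.38) can be tried out directly. The following Python sketch uses a standard systematic (7, 4) Hamming code as an illustration; the parity part P below is an assumption chosen for the example and is not necessarily the (7, 4) code of the text. It encodes an information word, flips one bit, and shows that the syndrome is nonzero exactly when the received word is invalid:

import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])        # G = (I | P), 4 x 7
H = np.hstack([P.T, np.eye(3, dtype=int)])      # H = (P^T | I), 3 x 7, so H G^T = 0 mod 2

u = np.array([1, 0, 1, 1])                      # information word
x = u @ G % 2                                   # codeword; its first 4 bits equal u (systematic)
r = x.copy()
r[5] ^= 1                                       # flip one bit to emulate a channel error

print((H @ x) % 2)                              # all-zero syndrome: valid codeword
print((H @ r) % 2)                              # nonzero syndrome: error detected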

4.8.2 Convolutional Codes

4.8.2.1 Principle of Convolutional Codes



Figure 4.15 Example of a convolutional encoder.

Convolutional codes do not divide (source) data streams into blocks, but rather add redundancy in a quasi-continuous manner. A convolutional encoder consists of a shift register with L memory cells and N (modulo-2) adders (see Figure 4.15). Let us assume that at the outset we have a clearly defined state in the memory cells, i.e., they all contain 0. When the first data bit enters the encoder, it is put into the first memory cell of the shift register (the other zeros are shifted to the right, and the rightmost zero "falls out" of the register). Then a multiplexer reads out the outputs of all the adders n = 1, 2, 3. We thus get three output bits for one input bit. Then the next source data bit is put into the register (and the contents of all memory cells are shifted to the right by one cell). The adders then have new outputs, which are again read by the multiplexer. The process is continued until the last source data bit is put into the register. Subsequently, zeros are used as register input, until the last source data bit has been pushed out of the register and the memory cells are again in a clearly defined (all-zero) state. A convolutional encoder is thus characterized by the number of its memory cells and adders; the adders are characterized by their connections to the memory cells. In the example of Figure 4.15, only element l = 1 is connected to the output n = 1, so that source data are directly mapped to the coder output. For the second output, the contents of memory cells l = 1, 2 are added. For the third output, the contents of elements l = 1, 2, 3 are combined. This coder structure can be represented in different ways. One possibility is via generator sequences: we generate N vectors of length L each. The lth element of the nth vector has value 1 if the lth shift register element has a connection to the nth adder; otherwise it is 0. Generator sequences can immediately be interpreted for building an encoder; a small encoder sketch based on the connections just described is given below. For the decoder, the trellis diagram is a more useful description method. In this representation, the state of the encoder is characterized by the content of the memory cells. The trellis shows which input bits get the shift register into which state, and which output bits are created as a consequence. As an example, Figure 4.16 shows the trellis of the convolutional encoder of Figure 4.15. Only the states of cells 2, . . ., L need to be described, since the content of cell 1 is identical to the input (information) bit. For that reason, the number of states that need to be distinguished is 2^(L-1) = 4.
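The following sketch implements the encoder just described, with the tap connections of Figure 4.15 taken from the text (output 1 taps cell 1, output 2 taps cells 1 and 2, output 3 taps cells 1 to 3). The number of tail zeros appended to flush the register is an assumption for the example; the sketch is illustrative, not a reference implementation:

GENERATORS = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]             # one tap vector of length L = 3 per adder

def conv_encode(bits, n_tail=2):
    register = [0, 0, 0]                                    # clearly defined all-zero starting state
    out = []
    for b in list(bits) + [0] * n_tail:                     # tail zeros drive the state back to zero
        register = [b] + register[:-1]                      # new bit enters cell 1, contents shift right
        for taps in GENERATORS:
            out.append(sum(r & t for r, t in zip(register, taps)) % 2)
    return out

print(conv_encode([1, 0, 1]))                               # three output bits per input bit (rate 1/3)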

Two lines originate from each state: the upper line represents source data bit 0, and the lower line source data bit 1. It is not possible to get from each state directly into every other state. For example, we can only get from state A to state A or B (but not C or D). This is the redundancy that can be exploited by the decoder.

Figure 4.16 Trellis for the convolutional encoder of the previous figure.

4.8.2.2 Viterbi Decoder – Classical Representation

The Viterbi algorithm is the most popular algorithm for MLSE. The goal of this algorithm is to find the sequence ŝ that was transmitted with the highest likelihood, given that the sequence r was received:

  ŝ = arg max over s of Pr(r | s)     (4.39)

where maximization is done over all possible transmit sequences s. If the perturbations of the received symbols by noise are statistically independent, then the probability Pr(r | s) of the sequence can be decomposed into the product of the probabilities of each symbol:

  Pr(r | s) = Pr(r_1 | s_1) · Pr(r_2 | s_2) · . . .     (4.40)

The Viterbi algorithm greatly decreases storage requirements by eliminating nonsurviving paths, but these requirements are still considerable. It is thus undesirable to wait with the decision as to which sequence was transmitted until the last bits of the source sequence. Rather, the algorithm makes decisions about bits that are "sufficiently" far in the past. A small hard-decision decoding sketch for the 4-state code above is given after the following list.

Improvements of the Viterbi Algorithm
1. Soft Decoding
2. Tail Bits
3. Puncturing
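The sketch below is a minimal hard-decision Viterbi decoder for the 4-state, rate-1/3 code described above, using the same tap assumptions as in the encoder sketch. The branch metric is the Hamming distance, so this is the hard-decision variant rather than the soft decoding mentioned in the list; it illustrates the principle and is not an optimized implementation:

from itertools import product

TAPS = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]                    # same connections as in the encoder sketch

def branch_output(bit, state):
    # encoder output when the register holds (bit, state[0], state[1])
    register = (bit,) + state
    return tuple(sum(r & t for r, t in zip(register, g)) % 2 for g in TAPS)

def viterbi_decode(received, n_data_bits):
    states = list(product((0, 1), repeat=2))                # state = last two input bits
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    paths = {s: [] for s in states}
    for k in range(0, len(received), 3):
        r = received[k:k + 3]
        new_metric, new_paths = {}, {}
        for s in states:                                    # transition prev -> s feeds input bit s[0]
            candidates = []
            for prev in states:
                if prev[0] != s[1]:                         # shift-register consistency
                    continue
                d = metric[prev] + sum(a != b for a, b in zip(branch_output(s[0], prev), r))
                candidates.append((d, paths[prev] + [s[0]]))
            new_metric[s], new_paths[s] = min(candidates, key=lambda c: c[0])
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:n_data_bits]                      # tail bits force the final state to (0, 0)

# noiseless round trip: encode 1 0 1 1 plus two tail zeros, then decode
coded, state = [], (0, 0)
for b in [1, 0, 1, 1, 0, 0]:
    coded += list(branch_output(b, state))
    state = (b, state[0])
print(viterbi_decode(coded, 4))                             # [1, 0, 1, 1]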

4.8.3 Trellis Coded Modulation

A main problem of coding is the reduction in spectral efficiency: since we have to transmit more bits, the bandwidth requirement becomes larger as we add check bits. This problem can be avoided by the use of higher order modulation alphabets, which allow the transmission of more bits within the same bandwidth. In other words, when using a rate 1/3 code while at the same time changing the modulation alphabet from BPSK to 8-Phase Shift Keying (8-PSK), the number of symbols that are transmitted per unit time remains the same. Increasing the symbol alphabet increases the probability of error; on the other hand, introducing coding decreases the error probability. A simplistic approach to solving the spectral efficiency problem would thus be to add parity check bits to the data bits and map the resulting coded data to higher order modulation symbols. However, this usually does not give good results. In contrast, trellis coded modulation adds to the redundancy of the code by increasing the dimension of the signal space, while disallowing some symbol sequences in this enlarged signal space. The important aspect here is that modulation and encoding are designed as a joint process. This allows the design of a modem-plus-codec that shows higher resilience to noise than uncoded systems with the same spectral efficiency.

4.8.4 Turbo Codes

Turbo codes are among the most important developments of coding theory since the field was founded; they were the first practically used codes that came close to the Shannon limit with reasonable effort. The constituent codes are interleaved by a pseudorandom interleaver. The vital trick lies in the decoder: because the code is a combination of several short codes, the decoder can also be broken up into several simple decoders that exchange soft information about the decoded bits and thus iteratively arrive at a solution. The random interleaver in the code approximately realizes the idea of a random code, so that the total codeword has very little structure. A tiny sketch of such an interleaver and its inverse follows the list of advantages below.

Advantages:
• Interleaving increases the effective code length of the combined code. In other words, it is the interleaver (whose operation can easily be reversed), and not the constituent codes, that determines the length of the code. This allows the construction of very long codes with simple encoder structures.
• The special structure of the total code, i.e., the composition of separate constituent codes, makes decoding possible with an effort that is essentially determined by the length of the constituent codes.
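The advantages above lean on the fact that the interleaver permutation is easily reversed. The following sketch shows a pseudorandom block interleaver and its inverse; the block length and seed are arbitrary assumptions for the example, not parameters of any standardized turbo code:

import random

def make_interleaver(n, seed=0):
    perm = list(range(n))
    random.Random(seed).shuffle(perm)      # fixed seed so encoder and decoder agree
    return perm

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

perm = make_interleaver(8)
data = [1, 0, 1, 1, 0, 0, 1, 0]
print(deinterleave(interleave(data, perm), perm) == data)   # True: the operation is reversible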

4.9 Speech coding techniques

Speech coding, or compression, is the process of obtaining a compact representation of speech signals for the purpose of efficient transmission over band-limited wired or wireless channels, and also for efficient storage. In recent years, speech coders have become essential components of telecommunications and multimedia systems, since the utilization of bandwidth affects the cost of transmission. The goal of speech coding is to represent the samples of a speech signal with a minimum number of bits without any reduction in perceptual quality. Speech coding helps a telephone company to carry more voice calls over a single fiber link or cable, and it is very important in mobile and cellular communications.

Source coding with a small but tolerable level of distortion is also known as lossy coding, whereas the limiting case of zero distortion is known as lossless coding. In most cases, a finite rate allows lossless coding only for discrete-amplitude signals, which we might consider for transcoding of PCM speech, i.e., the digital compression of speech signals which have already been digitized with a conventional PCM codec. However, for circuit-switched wireless speech telephony, such lossless coders have two drawbacks: first, they waste the most precious resource, i.e., the allocated radio spectrum, as they invest more bits than necessary to meet the quality expectations of a typical user; second, they often result in a bitstream with a variable rate (e.g., when using a Huffman coder), which cannot be matched efficiently to the fixed rate offered by circuit-switched transmission.

The speech quality must be measured under various conditions, such as:
• Dependency on speaker
• Dependency on language
• Dependency on signal levels
• Background noise
• Tandem coding
• Channel errors
• Non-speech signals

4.9.1 Speech Coder Designs

Source-coding theory teaches us how to use models of source redundancy and of user-defined relevance in the design of speech-coding systems.
1. Waveform coders use source models only implicitly, to design an adaptive dynamical system which maps the original speech waveform onto a processed waveform that can be transmitted with fewer bits over the given digital channel.
2. Model-based coders or vocoders rely on an explicit source model to represent the speech signal using a small set of parameters which the encoder estimates, quantizes, and transmits over the digital channel. The decoder uses the received parameters to control a real-time implementation of the source model that generates the decoded speech signal.
3. Hybrid coders aim at the optimal mix of the two previous designs. They start out with a model-based approach to extract speech signal parameters but still compute the modeling error explicitly at the waveform level. This model error or residual waveform is transmitted using a waveform coder, whereas the model parameters are quantized and transmitted as side information. The two information streams are combined in the decoder to reconstruct a faithful approximation of the waveform, such that hybrid coders share the asymptotically lossless coding property with waveform coders.

4.9.2 The Sound of Speech

While the "sound of music" includes a wide range of signal generation mechanisms, as provided by an orchestra of musical instruments, the instrument for generating speech is fairly unique and constitutes the physical basis for speech modeling, even at the acoustic or perception levels.

4.9.2.1 Speech Production

In a nutshell, speech communication consists of information exchange using a natural language as its code and the human voice as its carrier. Voice is generated by an intricate oscillator, the vocal folds, which is excited by sound pressure from the lungs. In the view of wireless engineering, this oscillator generates a nearly periodic, Discrete Multi Tone (DMT)

signal with a fundamental frequency f0 in the range of 100 to 150 Hz for males, 190 to 250 Hz for females, and 350 to 500 Hz for children. Its spectrum slowly falls off toward higher frequencies and spans a frequency range of several thousand hertz. In terms of its relative bandwidth, it should be considered an ultra-wideband signal, which is one of the reasons why the signal is so robust and power-efficient (opera singers use no amplifiers) in many difficult natural environments.

Sound Generation

Besides the oscillatory voice signal ("phonation"), additional sound sources may contribute to the signal carrier, such as turbulent noise generation at narrow flow constrictions, or impulse-like pressure release after complete flow closures. If the vocal folds do not contribute at all to the sound generation mechanism, the speech signals are called unvoiced; otherwise they are voiced. As an example, compare the words "Mets" and "mess". Their major difference lies in the closure period of the [t], which results in a silence interval between the vowel [ε] and the [s] in "Mets" that is absent in "mess". Otherwise, the two pronunciations are essentially the same, so the information about the [t] is carried entirely by the silence interval.

Articulation

While the sound generation mechanisms provide the speech carrier, the articulation mechanism provides its modulation using a language-specific code. This code is both sequential and hierarchical. However, the articulation process is not organized in a purely sequential way; i.e., the shape of the vocal tract waveguide is not switched from one steady-state pose to another for each speech sound. Rather, the articulatory gestures (lip, tongue, velum, and throat movements) are strongly overlapping, interwoven, mostly asynchronous, and continuously evolving patterns, resulting in a continuous modulation process rather than a discrete shift-keying modulation process.

4.9.2.2 Speech Acoustics

Source Filter Model

The foundations of modern speech acoustics were laid by G. Fant in the 1950s and resulted in the source filter model for speech signals (see Figure 4.17).

Figure 4.17 Source filter model of speech: the excitation signal generator provides the source, which drives a slowly time-varying filter that shapes the spectral envelope according to the formant frequencies to produce a speech waveform.

While this model is still inspired by our understanding of natural speech production, it deviates from its physical basis significantly. In particular, all the natural sound sources (which can be located at many positions along the vocal tract and which are often controlled by the local aerodynamic flow) are collapsed into a single source which drives the filter in an independent way. Furthermore, there is only a single output, whereas the natural production system may switch between or even combine the oral and nasal branches of the vocal tract. Therefore, the

true value of the model does not lie in its accuracy in describing human physiology, but in its flexibility in modeling speech acoustics. In particular, the typical properties of speech signals, as evidenced in their temporal and spectral analyses, are well represented by this structure.

Sound Spectrograms

An example of the typical time-frequency analysis for speech is shown in Figure 4.18.

Figure 4.18 Spectrogram of the phrase "is the clear spring" spoken by a male speaker, analog bandwidth limited to 7 kHz, sampled at fs = 16 kHz; horizontal axis = time, vertical axis = frequency, dark areas indicate high energy density (a). Time domain waveform of the same signal, vertical axis = amplitude (b).

The lower graph shows the time domain waveform for the phrase "is the clear spring" spoken by a male speaker and limited to an analog bandwidth of 7 kHz for 16-kHz sampling. This graph shows the marked alternation between the four excitation source mechanisms, where we can note that the "nearly periodic" source of voiced speech can mean an interval of only three fundamental "periods", which show a highly irregular pattern in the case of the second voiced segment (corresponding to "the"). Furthermore, strong fluctuations of the envelope are visible, which often correspond to 20 to 30 dB.

The upper graph shows a spectrogram of the same signal, which provides the time-frequency distribution of the signal energy, where darker areas correspond to higher energy densities. This representation can be obtained either by means of a filter bank or by short-time Fourier analysis. It illustrates global signal properties (like the anti-aliasing low-pass filtering at 7 kHz) and the observation that a significant amount of speech energy is found above 3.4 kHz (in particular for fricative and plosive sounds), suggesting that conventional telephony is too narrow in its bandwidth and destroys natural speech quality in a significant way.
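For illustration, the short-time Fourier route to such a spectrogram can be sketched in a few lines. The frame length, hop size, and the synthetic two-tone test signal are assumptions made for the example, not parameters taken from the figure:

import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))             # one magnitude spectrum per frame
    return 20 * np.log10(spec + 1e-12)                     # energy density in dB

fs = 16000
t = np.arange(fs) / fs                                      # 1 s of "signal"
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)
S = spectrogram(x)
print(S.shape)                                              # (number of frames, frame_len // 2 + 1 bins)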

4.9.2.3 Speech Perception

The ultimate recipient of human speech is the human hearing system, a remarkable receiver with two broadband, directional antennas (the outer ears) shaped for spatiotemporal filtering in terms of the individual, monaural Head Related Transfer Functions (HRTFs, functions of both azimuth angle and frequency), which, along with interaural delay evaluation, give rise to our spatial hearing ability.

Auditory Speech Modeling

Auditory models in the form of psychoacoustic (i.e., behavioral) models of perception are very popular in audio coding (like those for the ubiquitous MP3 standard) because they allow us to separate relevant from irrelevant parts of the information. For instance, certain signal components may be masked by others to such an extent that they become totally inaudible. In speech coding, the situation is reversed: we have a lot of prior knowledge about the signal source and can base the coder design on a source model and its redundancy, whereas the perceptual quality requirements are somewhat relaxed compared with audio coding. Therefore, perceptual models play a lesser role in speech coder design, although a simple perceptual weighting filter, originally proposed by Schroeder et al. [1979], has wended its way into most speech-coding standards and allows a perceptually favorable amount of noise shaping.

Perceptual Quality Measures

The proof of a speech coder lies in listening. To this day, the best way of evaluating the quality of a speech coder is by controlled listening tests performed with sizable groups of listeners (a couple of dozen or more). The related experimental procedures have been standardized in International Telecommunication Union (ITU-T) Recommendation P.800 and include both absolute category rating and comparative category rating tests. An important example of the former is the so-called Mean Opinion Score (MOS) test, which asks listeners to rate the perceived quality on a scale from 1 = bad to 5 = excellent, where traditional narrowband speech with logarithmic PCM coding at 64 kbit/s is typically rated with an MOS score in the vicinity of 4.0. With a properly designed experimental setup, high reproducibility and discrimination ability can be achieved.

Linear Prediction Analysis

The LPC encoder has to estimate the model parameters for every given frame, and to quantize and code them for transmission. We will only discuss estimation of the LP filter parameters here; estimation of the fundamental period T0 = 1/f0 is treated separately. As a first step, the specification of the minimum-phase filter is narrowed down to an all-pole filter, i.e., a filter where all transfer function zeros are concentrated in the origin of the z-plane and only the poles are used to shape its frequency response (a minimal numerical sketch of this analysis follows the two considerations below). This choice is justified by two considerations:

Figure 4.19 Linear predictive vocoder signal generator as used in decoder.

• The vocal tract is mainly characterized by its resonance frequencies, and our perception is more sensitive to spectral peaks than to valleys.
• For a minimum-phase filter, all the poles and zeros lie inside the unit circle. In this case, a zero at position z0 with |z0| < 1 can be represented by an infinite number of poles, since 1 - z0 z^(-1) = 1 / (1 + z0 z^(-1) + z0^2 z^(-2) + . . .); it can therefore be approximated by a finite, sufficiently large number of poles.
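A minimal sketch of the LP analysis step, using the autocorrelation method to solve the normal equations for the all-pole predictor coefficients, is given below. The frame length, model order, and the synthetic two-tone "voiced" frame are assumptions made for the example, not values from any particular speech-coding standard:

import numpy as np

def lpc(frame, order=10):
    frame = frame * np.hamming(len(frame))                  # mild windowing of the analysis frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation, lags 0, 1, 2, ...
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                  # normal equations: R a = (r1, ..., rp)
    return a                                                # synthesis filter: 1 / (1 - sum_k a_k z^-k)

fs = 8000
t = np.arange(160) / fs                                      # one 20-ms frame at 8 kHz
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)   # toy "voiced" frame
print(np.round(lpc(frame, order=4), 3))                      # predictor coefficients a_1 ... a_4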