
DIGITAL MICROWAVE COMMUNICATION

IEEE Press
445 Hoes Lane
Piscataway, NJ 08854

IEEE Press Editorial Board 2013
John Anderson, Editor in Chief

Linda Shafer, George W. Arnold, Ekram Hossain, Om P. Malik, Saeid Nahavandi, David Jacobson, Mary Lanzerotti, George Zobrist, Tariq Samad, Dmitry Goldgof

Kenneth Moore, Director of IEEE Book and Information Services (BIS)

DIGITAL MICROWAVE COMMUNICATION Engineering Point-to-Point Microwave Systems

GEORGE KIZER

Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Kizer, George M. (George Maurice), 1945–
Digital microwave communication : engineering point-to-point microwave systems / George Kizer.
pages cm
ISBN 978-0-470-12534-2 (hardback)
1. Microwave communication systems. 2. Digital communication. I. Title.
TK7876.K548 2013
621.382–dc23
2012048284

Printed in the United States of America ISBN: 9780470125342 10 9 8 7 6 5 4 3 2 1

CONTENTS

Preface, xv
Acknowledgments, xvii
About the Author, xix

1 A Brief History of Microwave Radio Fixed Point-to-Point (Relay) Communication Systems, 1
  1.1 In the Beginning, 1
  1.2 Microwave Telecommunications Companies, 7
  1.3 Practical Applications, 10
  1.4 The Beat Goes On, 14
  References, 16

2 Regulation of Microwave Radio Transmissions, 20
  2.1 Radio Frequency Management, 21
  2.2 Testing for Interference, 28
  2.3 Radio Paths by FCC Frequency Band in the United States, 29
  2.4 Influences in Frequency Allocation and Utilization Policy within the Western Hemisphere, 30
    2.4.1 United States of America (USA), 30
    2.4.2 Canada, 36
  2.5 FCC Fixed Radio Services, 36
  2.6 Site Data Accuracy Requirements, 41
  2.7 FCC Antenna Registration System (ASR) Registration Requirements, 42
  2.8 Engineering Microwave Paths Near Airports and Heliports, 44
    2.8.1 Airport Guidelines, 46
  References, 47

3 Microwave Radio Overview, 48
  3.1 Introduction, 48
  3.2 Digital Signaling, 50
  3.3 Noise Figure, Noise Factor, Noise Temperature, and Front End Noise, 50
  3.4 Digital Pulse Amplitude Modulation (PAM), 53
  3.5 Radio Transmitters and Receivers, 58
  3.6 Modulation Format, 60
  3.7 QAM Digital Radios, 65
  3.8 Channel Equalization, 68
  3.9 Channel Coding, 70
  3.10 Trellis Coded Modulation (TCM), 72
  3.11 Orthogonal Frequency Division Multiplexing (OFDM), 75
  3.12 Radio Configurations, 76
    3.12.1 Cross-Polarization Interference Cancellation (XPIC), 78
  3.13 Frequency Diversity and Multiline Considerations, 82
  3.14 Transmission Latency, 85
  3.15 Automatic Transmitter Power Control (ATPC), 87
  3.16 Current Trends, 87
    3.16.1 TDM (or ATM) over IP, 87
    3.16.2 TDM Synchronization over IP, 88
    3.16.3 Adaptive Modulation, 89
    3.16.4 Quality of Service (QoS) [Grade of Service (GoS) in Europe], 89
  References, 90

4 Radio Network Performance Objectives, 96
  4.1 Customer Service Objectives, 96
  4.2 Maintenance Objectives, 96
  4.3 Commissioning Objectives, 98
  4.4 Design Objectives, 98
    4.4.1 Quality, 98
    4.4.2 Availability, 98
  4.5 Differences Between North American and European Radio System Objectives, 99
    4.5.1 North American Radio Engineering Standards (Historical Bell System Oriented), 99
    4.5.2 European Radio Engineering Standards (ITU Oriented), 99
  4.6 North American Telecommunications System Design Objectives, 100
  4.7 International Telecommunications System Design Objectives, 100
    4.7.1 Legacy European Microwave Radio Standards, 102
    4.7.2 Modern European Microwave Radio Standards, 102
  4.8 Engineering Microwave Paths to Design Objectives, 102
  4.9 Accuracy of Path Availability Calculations, 106
    4.9.1 Rain Fading, 106
    4.9.2 Multipath Fading, 106
    4.9.3 Dispersive Fading Outage, 107
    4.9.4 Diversity Improvement Factor, 107
  4.10 Impact of Flat Multipath Variability, 108
  4.11 Impact of Outage Measurement Methodology, 108
  4.12 Impact of External Interference, 109
  4.13 Conclusion, 109
  References, 110

5 Radio System Components, 114
  5.1 Microwave Signal Transmission Lines, 115
  5.2 Antenna Support Structures, 121
    5.2.1 Lattice Towers, 122
    5.2.2 Self-Supporting Towers, 122
    5.2.3 Guyed Towers, 122

    5.2.4 Monopoles, 124
    5.2.5 Architecturally Designed Towers, 125
    5.2.6 Building-Mounted Antennas, 126
    5.2.7 Camouflaged Structures, 126
    5.2.8 Temporary Structures, 126
  5.3 Tower Rigidity and Integrity, 127
  5.4 Transmission Line Management, 127
  5.5 Antennas, 127
  5.6 Near Field, 137
  5.7 Fundamental Antenna Limitations, 143
  5.8 Propagation, 143
  5.9 Radio System Performance as a Function of Radio Path Propagation, 145
    5.9.1 Flat Fading, 146
    5.9.2 Dispersive Fading, 148
  5.10 Radio System Performance as a Function of Radio Path Terrain, 149
  5.11 Antenna Placement, 153
  5.12 Frequency Band Characteristics, 155
  5.13 Path Distances, 157
  5.A Appendix, 159
    5.A.1 Antenna Isotropic Gain and Free Space Loss, 159
    5.A.2 Free Space Loss, 163
    5.A.3 Antenna Isotropic Gain, 164
    5.A.4 Circular (Parabolic) Antennas, 166
    5.A.5 Square (Panel) Antennas, 167
    5.A.6 11-GHz Two-foot Antennas, 168
    5.A.7 Tower Rigidity Requirements, 169
  References, 172

6 Designing and Operating Microwave Systems, 175
  6.1 Why Microwave Radio?, 175
  6.2 Radio System Design, 175
  6.3 Designing Low Frequency Radio Networks, 179
  6.4 Designing High Frequency Radio Networks, 182
    6.4.1 Hub and Spoke, 183
    6.4.2 Nested Rings, 184
  6.5 Field Measurements, 185
  6.6 User Data Interfaces, 185
  6.7 Operations and Maintenance, 202
    6.7.1 Fault Management, 203
    6.7.2 Alarms and Status, 206
    6.7.3 Performance Management, 207
  6.8 Maintaining the Network, 210
  References, 217

7 Hypothetical Reference Circuits, 220
  7.1 North American (NA) Availability Objectives, 220
    7.1.1 NA Bell System Hypothetical Reference Circuit-Availability Objectives, 220
    7.1.2 NA Telcordia Hypothetical Reference Circuit-Availability Objectives, 222
  7.2 North American Quality Objectives, 225
    7.2.1 Residual BER, 225
    7.2.2 Burst Errored Seconds, 225
    7.2.3 DS1 Errored Seconds, 225
    7.2.4 DS3 Errored Seconds, 225
  7.3 International Objectives, 225
    7.3.1 International Telecommunication Union Availability Objectives, 228
  7.4 International Telecommunication Union Quality Objectives, 236
    7.4.1 Legacy Quality Objectives, 236
    7.4.2 Current Quality Objectives, 240
  7.5 Error-Performance Relationship Among BER, BBER, and SESs, 245
  References, 247

8 Microwave Antenna Theory, 249
  8.1 Common Parameters, 251
  8.2 Passive Reflectors, 252
    8.2.1 Passive Reflector Far Field Radiation Pattern, 253
    8.2.2 Passive Reflector Near Field Power Density, 255
  8.3 Circular (Parabolic) Antennas, 256
    8.3.1 Circular (Parabolic) Antenna Far Field Radiation Pattern, 256
    8.3.2 Circular (Parabolic) Antenna Efficiency, 260
    8.3.3 Circular (Parabolic) Antenna Beamwidth, 261
    8.3.4 Circular (Parabolic) Antenna Near Field Power Density, 264
    8.3.5 General Near Field Power Density Calculations, 265
    8.3.6 Circular Antenna Near Field Power Density Transitions, 272
    8.3.7 Circular Antenna Far Field Reference Power, 273
  8.4 Square Flat Panel Antennas, 274
    8.4.1 Square Antenna Beamwidth, 276
    8.4.2 Square Near Field Power Density, 279
    8.4.3 Square Antenna Far Field Reference Power, 288
    8.4.4 Square Near Field Power Density Transitions, 289
  8.5 Regulatory Near Field Power Density Limits, 290
  8.6 Practical Near Field Power Calculations, 290
    8.6.1 A Parabolic Antenna Near Field Power Example Calculation, 293
    8.6.2 Safety Limits, 294
  8.7 Near Field Antenna Coupling Loss, 296
    8.7.1 Antenna to Antenna Near Field Coupling Loss, 296
    8.7.2 Coupling Loss between Identical Antennas, 300
    8.7.3 Coupling Loss between Different-Sized Circular Antennas, 300
    8.7.4 Coupling Loss between Different-Sized Square Antennas, 301
    8.7.5 Parabolic Antenna to Passive Reflector Near Field Coupling Loss, 302
    8.7.6 Coupling Loss for Circular Antenna and Square Reflector, 303
    8.7.7 Coupling Loss for Square Antenna and Square Reflector (Both Aligned), 305
    8.7.8 Back-to-Back Square Passive Reflector Near Field Coupling Loss, 306
  8.A Appendix, 307
    8.A.1 Circular Antenna Numerical Power Calculations, 307
    8.A.2 Square Antenna Numerical Power Calculations, 311
    8.A.3 Bessel Functions, 315
  References, 318

9 Multipath Fading, 320
  9.1 Flat and Dispersive Fading, 329
  9.A Appendix, 338
    9.A.1 Fading Statistics, 338
    9.A.2 DFM Equation Derivation, 339
    9.A.3 Characteristics of Receiver Signature Curves and DFM, 342
  References, 344

10 Microwave Radio Diversity, 348
  10.1 Space Diversity, 350
  10.2 Dual-Frequency Diversity, 354
  10.3 Quad (Space and Frequency) Diversity, 357
  10.4 Hybrid Diversity, 358
  10.5 Multiline Frequency Diversity, 358
  10.6 Crossband Multiline, 365
  10.7 Angle Diversity, 366
    10.7.1 Angle Diversity Configurations, 368
    10.7.2 Angle Diversity Performance, 371
  10.A Appendix, 372
    10.A.1 Optimizing Space Diversity Vertical Spacing, 372
    10.A.2 Additional Optimization, 377
  References, 380

11 Rain Fading, 384
  11.1 Point (Single-Location) Rain Loss (Fade) Estimation, 386
  11.2 Path Rain-Fade Estimation, 390
  11.3 Point-to-Path Length Conversion Factor, 398
  11.4 Single-Location Rain Rate R, 398
  11.5 City Rain Rate Data for North America, 407
  11.6 New Rain Zones, 430
  11.7 Worst-Month Rain Rates, 430
  11.8 Point Rain Rate Variability, 439
  11.9 Examples of Rain-Loss-Dominated Path Designs, 441
  11.10 Conclusions, 444
  11.A Appendix, 446
    11.A.1 North American City Rain Data Index, 446
  References, 458

12 Ducting and Obstruction Fading, 461
  12.1 Introduction, 461
    12.1.1 Power Fading, 463
  12.2 Superrefraction (Ducting), 465
  12.3 Subrefraction (Earth Bulge or Obstruction), 469
  12.4 Minimizing Obstruction Fading, 471
    12.4.1 Path Clearance (Antenna Vertical Placement) Criteria, 471
  12.5 Obstruction Fading Model, 477
  12.6 Obstruction Fading Estimation, 479
  12.7 Bell Labs Seasonal Parameter Charts, 483
  12.8 Refractivity Data Limitations, 484
  12.9 Reviewing the Bell Labs Seasonal Parameter Charts, 485
  12.10 Obstruction Fading Parameter Estimation, 486
  12.11 Evaluating Path Clearance Criteria, 487
  12.A Appendix: North American Refractivity Index Charts, 490
  12.B Appendix: Worldwide Obstruction Fading Data, 491
  References, 511

13 Reflections and Obstructions, 514
  13.1 Theoretical Rough Earth Reflection Coefficient, 514
    13.1.1 Gaussian Model, 516
    13.1.2 Uniform Model, 517
  13.2 Scattering from Earth Terrain, 517
  13.3 Practical Earth Reflection Coefficient, 519
  13.4 Reflection Location, 519
  13.5 Smooth Earth Divergence Factor, 522
  13.6 Reflections from Objects Near a Path, 523
  13.7 Fresnel Zones, 525
  13.8 Antenna Launch Angle (Transmit or Receive Antenna Takeoff Angle), 527
  13.9 Grazing Angle, 527
  13.10 Additional Path Distance, 528
  13.11 Estimating the Effect of a Signal Reflected from the Earth, 528
  13.12 Flat Earth Obstruction Path Loss, 529
  13.13 Smooth Earth Obstruction Loss, 529
  13.14 Knife-Edge Obstruction Path Gain, 530
  13.15 Rounded-Edge Obstruction Path Gain, 531
  13.16 Complex Terrain Obstruction Losses, 532
  13.A Appendix, 536
    13.A.1 Smooth Earth Reflection Coefficient, 536
    13.A.2 Procedure for Calculating RH and RV, 536
    13.A.3 Earth Parameters for Frequencies Between 100 kHz and 1 GHz, 538
    13.A.4 Earth Parameters for Frequencies Between 1 GHz and 100 GHz, 540
    13.A.5 Comments on Conductivity and Permittivity, 541
    13.A.6 Reflection Coefficients, 541
  References, 555

14 Digital Receiver Interference, 559
  14.1 Composite Interference (T/T) Criterion, 559
  14.2 Carrier-to-Interference Ratio (C/I) Criterion, 560
  14.3 Measuring C/I, 560
  14.4 Estimating C/I, 561
  14.5 Threshold to Interference (T/I) Criterion, 562
  14.6 Why Estimate T/I, 563
  14.7 T/I Estimation—Method One, 564
  14.8 T/I Estimation—Method Two, 565
  14.9 Conclusion, 569
  14.A Appendix, 569
    14.A.1 Basic 10⁻⁶ Threshold for Gaussian (Radio Front End) Noise Only, 569
    14.A.2 Using a Spectrum Mask as a Default Spectrum Curve, 570
  14.B Appendix: Receiver Parameters, 571
  References, 572

15 Network Reliability Calculations, 573
  15.1 Hardware Reliability, 574
  15.2 System Reliability, 577
    15.2.1 Equipment in Series, 577
    15.2.2 Multiple Equipment in Parallel, 578
    15.2.3 Nested Equipment, 579
    15.2.4 Meshed Duplex Configuration, 579
  15.3 Communication Systems, 579
  15.4 Application to Radio Configurations, 580
  15.5 Spare Unit Requirements, 580
  15.6 BER Estimation, 583
    15.6.1 Time to Transmit N Digits, 585
  References, 585

16 Path Performance Calculations, 587
  16.1 Path Loss, 588
  16.2 Fade Margin, 589
  16.3 Path Performance, 589
  16.4 Allowance for Interference, 590
  16.5 North American (NA) Path Performance Calculations, 590
    16.5.1 Vigants–Barnett Multipath Fading (Barnett, 1972; Vigants, 1975)—NA, 591
    16.5.2 Cross-Polarization Discrimination Degradation Outages—NA, 596
    16.5.3 Space Diversity: Flat-Fading Improvement—NA, 596
    16.5.4 Space Diversity: Dispersive-Fading Improvement—NA, 599
    16.5.5 Dual Frequency Diversity: Flat-Fading Improvement—NA, 599
    16.5.6 Dual Frequency Diversity: Dispersive-Fading Improvement—NA, 600
    16.5.7 Quad (Space and Frequency) Diversity—NA, 601
    16.5.8 Hybrid Diversity—NA, 601
    16.5.9 Multiline Frequency Diversity—NA, 601
    16.5.10 Angle Diversity—NA, 602
    16.5.11 Upfading—NA, 603
    16.5.12 Shallow Flat Fading—NA, 603
  16.6 International Telecommunication Union—Radiocommunication Sector (ITU-R) Path Performance Calculations, 604
    16.6.1 Flat Fading—ITU-R, 605
    16.6.2 Dispersive Fading—ITU-R, 606
    16.6.3 Cross-Polarization Discrimination Degradation Outages—ITU-R, 608
    16.6.4 Upfading—ITU-R, 609
    16.6.5 Shallow Flat Fading—ITU-R, 609
    16.6.6 Space Diversity Improvement—ITU-R, 610
    16.6.7 Dual-Frequency Diversity Improvement—ITU-R, 611
    16.6.8 Quad (Space and Frequency) Diversity—ITU-R, 611
    16.6.9 Angle Diversity Improvement—ITU-R, 613
    16.6.10 Other Diversity Improvements—ITU-R, 614
  16.7 Rain Fading and Obstruction Fading (NA and ITU-R), 614
  16.8 Comparing the North American and the ITU-R Flat-Fading Estimates, 614
    16.8.1 Vigants–Barnett Flat-Fading Estimation for Bell Labs Path, 614
    16.8.2 ITU-R Flat-Fading Estimation for Bell Labs Path, 615
  16.9 Diffraction and Vegetation Attenuation, 621
  16.10 Fog Attenuation, 622
  16.11 Air Attenuation, 624
  16.A Appendix, 631
  References, 649

A Microwave Formulas and Tables, 653
  A.1 General, 653
    Table A.1 General, 653
    Table A.2 Scientific and Engineering Notation, 654
    Table A.3 Emission Designator, 655
    Table A.4 Typical Commercial Parabolic Antenna Gain (dBi), 656
    Table A.5 Typical Rectangular Waveguide, 656
    Table A.6 Typical Rectangular Waveguide Data, 657
    Table A.7 Typical Copper Corrugated Elliptical Waveguide Loss, 657
    Table A.8 Typical Copper Circular Waveguide Loss, 658
    Table A.9 Rectangular Waveguide Attenuation Factors, 659
    Table A.10 CommScope Elliptical Waveguide Attenuation Factors, 659
    Table A.11 RFS Elliptical Waveguide Attenuation Factors, 660
    Table A.12 Elliptical Waveguide Cutoff Frequencies, 660
    Table A.13 Circular Waveguide Cutoff Frequencies, 661
    Table A.14 Typical Coaxial Microwave Connectors, 663
    Table A.15 Coaxial Cable Velocity Factors, 664
    Table A.16 50 Ohm Coaxial Cable Attenuation Factors, 664
    Table A.17 Frequency Bands, General Users, 665
    Table A.18 Frequency Bands, Fixed Point to Point Operators, 665
    Table A.19 Frequency Bands, Radar, Space and Satellite Operators, 666
    Table A.20 Frequency Bands, Electronic Warfare Operators, 666
    Table A.21 Frequency Bands, Great Britain Operators, 666
    Table A.22 Signal-to-Noise Ratio for Demodulator 10⁻⁶ BER, 667
  A.2 Radio Transmission, 668
    A.2.1 Unit Conversions, 668
    A.2.2 Free Space Propagation Absolute Delay, 669
    A.2.3 Waveguide Propagation Absolute Delay, 669
    A.2.4 Coaxial Cable Propagation Absolute Delay, 669
    A.2.5 Free Space Propagation Wavelength, 669
    A.2.6 Dielectric Medium Propagation Wavelength, 669
    A.2.7 Free Space Loss (dB), 670
    A.2.8 Effective Radiated Power (ERP) and Effective Isotropic Radiated Power (EIRP), 670
    A.2.9 Voltage Reflection Coefficient, 670
    A.2.10 Voltage Standing Wave Ratio Maximum, 670
    A.2.11 Voltage Standing Wave Ratio Minimum, 670
    A.2.12 Voltage Standing Wave Ratio, 670
    A.2.13 Power Reflection Coefficient, 671
    A.2.14 Reflection Loss, 671
    A.2.15 Return Loss, 671
    A.2.16 Q (Quality) Factor (Figure of Merit for Resonant Circuits or Cavities), 671
    A.2.17 Q (Quality) Factor (Figure of Merit for Optical Receivers), 672
    A.2.18 Typical Long-Term Interference Objectives, 672
    A.2.19 Frequency Planning Carrier-to-Interference Ratio (C/I), 672
    A.2.20 Noise Figure, Noise Factor, Noise Temperature, and Front End Noise, 672
    A.2.21 Shannon's Formula for Theoretical Limit to Transmission Channel Capacity, 674
  A.3 Antennas (Far Field), 675
    A.3.1 General Microwave Aperture Antenna (Far Field) Gain (dBi), 675
    A.3.2 General Microwave Antenna (Far Field) Relative Gain (dBi), 675
    A.3.3 Parabolic (Circular) Microwave Antenna (Far Field) Gain (dBi), 675
    A.3.4 Parabolic (Circular) Microwave Antenna Illumination Efficiency, 676
    A.3.5 Panel (Square) Microwave Antenna (Far Field) Gain (dBi), 676
    A.3.6 Panel (Square) Microwave Antenna Illumination Efficiency, 676
    A.3.7 Angle Between Incoming and Outgoing Radio Signal Paths, C, for a Passive Reflector, 677
    A.3.8 Signal Polarization Rotation Through a Passive Reflector, φ, 678
    A.3.9 Signal Effects of Polarization Rotation, 678
    A.3.10 Passive Reflector (Far Field) Two-Way (Reception and Retransmission) Gain (dBi), 678
    A.3.11 Rectangular Passive Reflector 3-dB Beamwidth (Degrees, in Horizontal Plane), 678
    A.3.12 Elliptical Passive Reflector 3-dB Beamwidth (Degrees), 679
    A.3.13 Circular Parabolic Antenna 3-dB Beamwidth (Degrees), 679
    A.3.14 Passive Reflector Far Field Radiation Pattern Envelopes, 680
    A.3.15 Inner Radius for the Antenna Far-Field Region, 681
  A.4 Near-Field Power Density, 682
    A.4.1 Circular Antennas, 682
    A.4.2 Square Antennas, 682
  A.5 Antennas (Close Coupled), 683
    A.5.1 Coupling Loss LNF (dB) Between Two Antennas in the Near Field, 683
    A.5.2 Coupling Loss LNF (dB) Between Identical Antennas, 683
    A.5.3 Coupling Loss LNF (dB) Between Different-Sized Circular Antennas, 684
    A.5.4 Coupling Loss LNF (dB) Between Different-Sized Square Antennas (Both Antennas Aligned), 684
    A.5.5 Coupling Loss LNF (dB) for Antenna and Square Reflector in the Near Field, 685
    A.5.6 Coupling Loss LNF (dB) for Circular Antenna and Square Reflector, 685
    A.5.7 Coupling Loss LNF (dB) for Square Antenna and Square Reflector (Both Aligned), 686
    A.5.8 Two Back-to-Back Square Reflectors Combined Gain, 687
  A.6 Path Geometry, 687
    A.6.1 Horizons (Normal Refractivity over Spherical Earth), 687
    A.6.2 Earth Curvature (Height Adjustment Used on Path Profiles), 688
    A.6.3 Reflection Point, 688
    A.6.4 Fresnel Zone Radius (Perpendicular to the Radio Path), 690
    A.6.5 Fresnel Zone Projected onto the Earth's Surface, 690
    A.6.6 Reflection Path Additional Distance, 691
    A.6.7 Reflection Path Additional Delay, 691
    A.6.8 Reflection Path Relative Amplitude, 691
    A.6.9 Antenna Launch Angle, 691
    A.6.10 Antenna Height Difference, 692
    A.6.11 K Factor (From Launch Angles), 692
    A.6.12 Refractive Index and K Factor (From Atmospheric Values), 693
  A.7 Obstruction Loss, 693
    A.7.1 Knife-Edge Obstruction Loss, 693
    A.7.2 Rounded-Edge Obstruction Path Loss, 694
    A.7.3 Smooth-Earth Obstruction Loss, 695
    A.7.4 Infinite Flat Reflective Plane Obstruction Loss, 695
    A.7.5 Reflection (Earth Roughness Scattering) Coefficient, 695
    A.7.6 Divergence Coefficient from Earth, 696
    A.7.7 Divergence Factor for a Cylinder, 697
    A.7.8 Divergence Factor for a Sphere, 697
    A.7.9 Signal Reflected from Flat Earth, 697
    A.7.10 Ducting, 697
  A.8 Mapping, 698
    A.8.1 Path Length and Bearing, 698
  A.9 Towers, 700
    A.9.1 Three-Point Guyed Towers, 700
    A.9.2 Three-Leg Self-Supporting Tower, 701
    A.9.3 Four-Leg Self-Supporting Tower, 701
  A.10 Interpolation, 702
    A.10.1 Two-Dimensional Interpolation, 702
    A.10.2 Three-Dimensional Interpolation, 705

B Personnel and Equipment Safety Considerations, 709
  B.1 General Safety Guidelines, 709
  B.2 Equipment Protection, 711
  B.3 Equipment Considerations, 712
  B.4 Personnel Protective Equipment, 713
  B.5 Accident Prevention Signs, 713
  B.6 Tower Climbing, 713
  B.7 Hand Tools, 715
  B.8 Electrical Powered Tools, 715
  B.9 Soldering Irons, 715
  B.10 Ladders, 716
  B.11 Hoisting or Moving Equipment, 716
  B.12 Batteries, 717
  B.13 Laser Safety Guidelines, 717
  B.14 Safe Use of Lasers and LED in Optical Fiber Communication Systems, 718
  B.15 Optical Fiber Communication System (OFCS) Service Groups (SGs), 718
  B.16 Electrostatic Discharge (ESD), 719
  B.17 Maximum Permissible Microwave Radio RF Exposure, 720
  B.18 Protect Other Radio Users [FCC], 720
  B.19 PAUSE (Prevent all Unplanned Service Events) and Ask Yourself (Verizon and AT&T Operations), 721
  B.20 Protect Yourself (Bell System Operations), 721
  B.21 Parting Comment, 721

Index, 723

PREFACE

As a young engineer, with only one previous significant project as experience, I was tasked with an overwhelming project: expand the existing South Korean intercity microwave network by 140%. I had a copy of Bob White's Engineering Considerations for Microwave Communications Systems, a couple of volumes of the Lenkurt Demodulator, some Collins Engineering Letters, and a couple of Dick Lane's propagation papers. While these were excellent resources, I was totally unprepared for the job ahead of me. As the old cowboy said, "There were a lot of things they didn't tell me when I signed on for my first cattle drive." The Korean project was, as you might imagine, rather exciting for a young, enthusiastic engineer. I was introduced to problems I could never have imagined. With the help of many others, I was successful and learned from the experience. However, my technical preparation could have been better.

It has been several years since that first big project. I have done many others and been involved in numerous technical areas related to microwave transmission. However, I continue to be disappointed in the technical information available for the practicing microwave transmission engineer. If I were a new engineer starting on a project, I don't know where I would go to get in-depth technical knowledge on designing fixed point-to-point microwave communication systems. This book is my attempt to remedy the situation.

When I approach a complicated subject for the first time, I like to grasp the overall concepts before diving into the details. I have always admired Dumas and Sands's little blue book, Microwave System Planning. It covers most of the important considerations of microwave path design in less than 140 pages. To provide similar coverage, I have organized this book so that the first six chapters address general topics of universal interest. Equations have been kept to a minimum. Figures and tables have been used extensively. The other chapters go into detail on a wide range of topics. The depth of coverage varies. If the topic has been covered adequately in the literature, I attempt to summarize. If the topic has not been covered adequately (e.g., path diversity, dispersive fading, or antenna near field), I go into considerably more detail. Appendix A summarizes the important formulas, and Appendix B covers safety, a critical topic ignored in all other books to date.

This book covers universal design principles. While the agencies performing frequency planning and path design are quite different in North America from those in Europe, the methodologies are similar. I address both North American and European (ITU-R) methods. Several other authors have covered the European (ITU-R) methods; for the first time, this book also covers the North American approach.

To augment the text, Internet resources are also available. Understanding multipath (Chapter 9) is critical to path engineering. After you grasp the concept of a spectrum analyzer (a device that displays received power, on the Y-axis, in a narrow bandwidth around a specific frequency, on the X-axis), take a look at the following videos on YouTube: Digital Radio Multipath Experiment (authored by Eddie
Allen) http://youtu.be/AR8Nee-GmTI and Digital Radio Dispersive Fading (authored by Ron Hutchinson) http://youtu.be/ugaz4R3babU. These videos graphically illustrate the received signal distortion caused by multipath propagation.

Wiley has graciously provided a Website for additional data associated with the book: http://booksupport.wiley.com. Enter the ISBN, title, or author's name to access the files. The following folders of information are provided:

Site Index and Book Updates or Corrections. A detailed index of the site folder contents is provided in one document. The other document describes any updates or corrections that may be discovered.

Computer Code. This folder contains actual working code for several of the important algorithms described in the book. The code is Microsoft QuickBasic but can be easily converted to other languages.

Data. This folder contains critical data required to implement many of the algorithms discussed in the book.

Figures. This folder contains detailed color pictures from Chapters 2 and 8. Like the book, they are copyrighted by Wiley.

Public Domain References. This folder contains resources related to the book's topics. Most of the publications are from the US government. A few are from the National Spectrum Management Association (NSMA). While NSMA documents are not public domain, NSMA has granted the right to distribute their documents freely as long as they are attributed to NSMA.

Rain fading is a complex, difficult subject. Defining high frequency microwave path performance in a rain environment is subject to considerable variability between short-term estimates and actual performance in all cases. Spatial and temporal variations of an order of magnitude or more are common. Rain-related documentation (and climatic data in general) is just too extensive to be easily described or provided. To gain an appreciation for the problem, a good start would be to go to the NOAA Website http://www.nws.noaa.gov/oh/hdsc/currentpf.htm#PF_documents_by_state and download the basic documents found there. For more detailed study, you may need to contact NOAA directly for archival support. Be prepared to be surprised by the challenge of this topic.

My goal is to provide you with the technical background to understand and perform the significant tasks in microwave path design. While no book can make you an expert, I believe this book can significantly enhance your knowledge. As you probably know, success is a combination of ability, preparation, and opportunity. I can't help you with the first and last requirements, but I am confident this book can help you with the preparation.

ACKNOWLEDGMENTS

First, I would like to thank Mike and Cathy Newman. Mike suggested this project and was a great supporter and facilitator. Cathy connected me with Wiley. I would also like to thank all my reviewers: Michael Newman (Editorial Coordinator and general whip wielder), Prof. Donald Dudley, Thomas Eckels, Ted Hicks, William Ruck, and last (alphabetically but not technically), Dr. William Rummler. They have given me many great suggestions and corrections. I am in their debt. I especially want to thank the late Dr. Dudley, who convinced Wiley this book needed to be published. Also, Dr. Rummler's many technical and ITU-R-related comments and corrections are very much appreciated.

I would also like to thank my editor, Mary Hatcher, production editor, Stephanie Loh, and project manager, Jayashree Saishankar. Moving a concept from text to book is a daunting task; this book was especially demanding. Without their tireless efforts and creative ideas, the project could not have been completed successfully.

I don't want to forget all my associates at Collins Radio, Rockwell International, and Alcatel-Lucent who have contributed to my day-to-day experiences in microwave radio. I appreciate the friends I have made in many industry associations and government offices I have frequented over the years. I fondly remember the many trips Bob Miller and I made to Washington, D.C. in support of industry regulatory matters. The many customers I have worked with have helped me improve as an engineer; I have enjoyed our mutual experiences.

I have many friends throughout the industry but I would like to single out four: Dick Lane has been a longtime associate. I appreciate his knowledge and advice. Eddie Allen is always helpful with path design advice. He is a world-class microwave propagation expert. Of course, it is hard to say too much good about Bill Rummler. He and I have worked together in FCC, FWCC, ITU-R, and TIA matters, and his political and technical capabilities cannot be overstated. Mike Newman has been a longtime associate. He and I started working together 20 years ago when the industry created the FCC Part 101 rules and regulations. This pleasant association has continued ever since.

Although it took me a couple of years to assemble this book, it is based on decades of projects, courses, and presentations. I would like to thank my wife Anne and our children, Amy and Mark, who over the years have put up with the seemingly endless trips and other interruptions that were a constant part of my professional life—and a source of the material for this book.


ABOUT THE AUTHOR

George Kizer has been a microwave engineer for the US Air Force, Collins Radio, Collins Microwave Radio Division of Rockwell International, and Alcatel (now Alcatel-Lucent). He has been a systems engineer, project manager, and product manager for microwave products. From 1991 to 1996, George served as Chairman of the Fixed Point-to-Point Communications Section of TIA in Washington, D.C. During this time, the Section, in coordination with the National Spectrum Management Association, assisted the FCC in the creation of Part 101, the rules that govern licensed microwave communication in the United States. George retired from Alcatel in 2001 and has been a private consultant since then. He lives in Plano, Texas, with his wife Anne and two dogs, Jax and Zoey.


1 A BRIEF HISTORY OF MICROWAVE RADIO FIXED POINT-TO-POINT (RELAY) COMMUNICATION SYSTEMS

1.1 IN THE BEGINNING

Message relaying and digital transmission seem like recent inventions. Not true—these go way back. The first known message relay system was created by the Egyptian king Sesostris I about 2000 BCE. The earliest recorded digital relay transmission by electromagnetic means was around the same time during the Trojan War. King Agamemnon and his troops used signal fires located on mountaintop repeater stations to communicate with each other. The king even used that method to send a message to his wife Clytemnestra. The binary message was either the war was continuing (no fire) or the war was over and he was returning home (fire). The Greek general Polybius, in 300 BCE, developed a more complex message set to allow greater information transfer per transmitted symbol. One to five torches were placed on top of each of two walls. Since each wall had five independent states, this allowed 24 Greek characters plus a space to be transmitted with each symbol. This basic concept of using two orthogonal channels (walls then, in-phase and quadrature channels today), with each channel transmitting independent multiple digital states, is the basis of the most modern digital microwave radio systems of today (Bennett and Davey, 1965).

Digital transmission systems continued to advance using the basic concept developed by Polybius. Systems used in the eighteenth and early nineteenth centuries were direct descendants of this approach. In 1794, the French government installed a two-arm optical system, developed by the Chappe brothers 2 years earlier, which could signal 196 characters per transmitted symbol. This system used several intermediate repeater sites to cover the 150 miles between Paris and Lille. In 1795, the British Admiralty began using a 64-character dual multiple shutter optical system. Versions of this semaphore system are in use in the military today (Bennett and Davey, 1965).

Synchronous digital transmission began in 1816 when Ronalds installed an 8-mile system invented by the Chappe brothers. Each end of the system had synchronized clocks and a synchronized spinning wheel that exposed each of the letters of the alphabet as it spun. At the transmitting end, the operator signaled when he or she saw the letter of interest. At the receiving end, a sound (caused by an electric spark) signaled when to record the exposed letter (Bennett and Davey, 1965).

Sömmering proposed a telegraphic system in 1809. Wire (cable)-based electromagnetic telegraphic systems began in the early 1800s with the discovery of the relationship between electricity and magnetism by Aepinus, Oersted, Ampère, Arago, Faraday, Henry, Ohm, Pouillet, and Sturgeon and chemical
batteries by Volta, Becquerel, Daniell, Bunsen, and Grove (although a chemical battery from 250 BCE was discovered in Baghdad, Iraq, by König in 1938). In 1886, Heaviside introduced the concept of impedance as the ratio of voltage divided by current. In 1892, he reported that an electrical circuit had four fundamental properties: resistance, inductance, capacity, and leakage. In 1830, Joseph Henry used an electromagnet to strike a bell over 1 mile of wire. In 1834, Gauss and Weber constructed an electromagnetic telegraph in Göttingen, Germany, connecting the Astronomical Observatory, the Physical Cabinet, and the Magnetic Observatory. In 1838 in England, Edward Davy patented an electrical telegraph system. In 1837, Wheatstone and Cooke patented a telegraph and in 1839 constructed the first commercial electrical telegraph. Samuel Morse, following Henry's approach, teamed with Alfred Vail to improve Morse's original impractical electromagnetic system. The Morse system, unlike earlier visual systems, printed a binary signal (up or down ink traces). Vail devised a sequence of dots and dashes that has become known as Morse code. Morse demonstrated this system in 1838 and patented it in 1840. This design was successfully demonstrated over a 40-mile connection between Baltimore and Washington, DC in 1844. About 1850, Vail invented the mechanical sounder, replacing the Morse ink recorder with a device allowing an experienced telegraph operator to receive Morse code by ear at up to 30 words per minute. Morse and Vail formed the Western Union to provide telegram service using their telegraphic system (Carl, 1966; IEEE Communications Society, 2002; Kotel'nikov, 1959; O'Neill, 1985; Salazar-Palma et al., 2011; Sobol, 1984; AT&T Bell Laboratories, 1983).

While the early systems were simple optical or sound systems, printing telegraphs followed in 1846 with a low speed asynchronous system by Royal House. In 1846, David Hughes introduced a high speed (30 words per minute) synchronous system between New York and Philadelphia. Gintl, in 1853, and Stearns, in 1871, invented telegraphic systems able to send messages in opposite directions at the same time. In 1867, Edward Calahan of the American Telegraph Company invented the first stock telegraph printing system. In 1900, the Creed Telegraph System was used for converting Morse code to text.

Soon systems were developed to provide multiple channels (multiplexing) over the same transmission medium. The first practical system was Thomas Edison's 1874 quadruplex system that allowed full duplex (simultaneous transmission and reception) operation of two channels (using separate communications paths). In 1874, Baudot invented a time division multiplex (TDM) system allowing up to six simultaneous channels over the same transmission path. In 1936, Varioplex was using 36 full duplex channels over the same wire line. Pulse code modulation (PCM), the method of sampling, quantizing, and coding analog signals for digital transmission, was patented in 1939 by Sir Alec Reeves, an engineer of International Telephone and Telegraph (ITT) laboratories in France. In the 1960s, PCM telephone signals were time division multiplexed (TDMed) to form digital systems capable of transmitting 24 or 30 telephone channels simultaneously.
These PCM/TDM signals could be further TDMed to form composite digital signals capable of transmitting hundreds or thousands of simultaneous telephone signals using cable, microwave radio, or optical communications systems (Bryant, 1988; Carl, 1966; Fagen, 1975; Welch, 1984).

Wire-based terrestrial systems were used on overseas cables beginning in 1847. These long systems could not use repeaters and were quite slow (about one to two words per minute). The use of Lord Kelvin's mirror galvanometer significantly increased transmission speed to about eight words a minute. Basic transmission limitations were analyzed using the methods of Fourier and Kelvin. In 1887, Oliver Heaviside, by analyzing the long cable as a series of in-line inductances and parallel resistances, developed a method of compensating the cable to permit transmission rates limited only by loss and noise. Distributed inductors (loading coils) were patented by Pupin in 1899 and further developed by Krarup in 1902. By 1924, distributed inductance allowed the New York to Azores submarine cable to operate at 400 words per minute (Bryant, 1988; Carl, 1966; Fagen, 1975).

About 585 BCE, Thales of Miletus discovered both static electricity (attraction of dry light material to a rubbed amber rod) and magnetism (attraction of iron to a lodestone). In 1819, Hans Oersted demonstrated that a wire carrying electric current could deflect a magnetized compass needle. Wireless transmission, utilizing orthogonal electric and magnetic fields, began in 1840 when Joseph Henry observed high frequency electrical oscillations at a distance from their source. James Maxwell, besides making many contributions to optics and developing the first permanent color photograph, predicted electromagnetic radiation mathematically. He first expressed his theory in an 1861 letter to Faraday. He later presented his theory at the Royal Society of London in December 1864 and published the results in 1873. His theory can be expressed as four differential or integral equations expressing how electric charges
produce electric fields (Gauss' law of electric fields), the absence of magnetic monopoles (Gauss' law of magnetism), how changing magnetic fields produce electric fields (Faraday's law of induction), and how currents and changing electric fields produce magnetic fields (Ampère's law). The modern mathematical formulation of Maxwell's equations is a result of the reformulation and simplification by Oliver Heaviside and Willard Gibbs. Heinrich Hertz (an outstanding university student and an associate of Helmholtz) demonstrated the electromagnetic radiation phenomenon in 1887. In 1889, Heinrich Huber, an electric power station employee, asked Hertz whether radio power transmission between two facing parabolic mirrors was possible. Hertz said that radio transmission between parabolic antennas was impractical. In 1892, Tesla delivered a speech before the Institution of Electrical Engineers of London in which he noted, among other things, that intelligence would be transmitted without wires. In 1893, he demonstrated wireless telegraphy (Bryant, 1988; Carl, 1966; Fagen, 1975; Maxwell, 1865; Salazar-Palma et al., 2011; Tarrant, 2001).

In the early 1860s, several people, including Bell, Gray, La Cour, Meucci, Reis, and Varley, demonstrated telephones. In 1876, Alexander Bell patented the telephone (Fagen, 1975) in the United States. In 1880, Bell patented speech over a beam of light, calling this the photophone. This device was improved by the Boston laboratory of the American Bell Telephone Company and patented in 1897. E. J. P. Mercadier renamed the device the "radiophone," the first use of the term radio in the modern sense (Bryant, 1988; Carl, 1966; Fagen, 1975).

It is not often appreciated that before Marconi, several "wireless" approaches were attempted that did not involve radio waves. In the 1840s, Morse developed a method of sending messages across water channels or rivers without wires. He placed a pair of electrodes on opposite sides of a channel of water. As long as the electrodes on the same side of the water were spaced at least three times the distance across the water, practical telegraphic communication was possible. He demonstrated communication over a river a mile wide. In 1894, Rathenau extended Morse's concept to communicate with ships. Using a sensitive earpiece and a 150-Hz carrier current, he was able to communicate with ships 5 km from shore using electrodes 500 m apart. In 1896, Strecker extended the distance to 17 km (Sarkar et al., 2006).

In 1866, Loomis demonstrated the transmission of telegraph signals over a distance of 14 miles between two Blue Ridge Mountains using two kites with 590-ft lines. The two kite lines were conductors. He transmitted a small current through the atmosphere but used the Earth as the return path. This was somewhat like Morse's transmission through water (Sarkar et al., 2006). In 1886, Edison devised an induction telegraph for communicating with moving trains. He induced the telegraphic signals onto the metal roof of the train by wires parallel to the train tracks. The grounded train wheels completed the circuit. While this system worked, it was not a commercial success (Sarkar et al., 2006).

In the 1880s, Hertz experimented with radio waves in the range 50–430 MHz. In 1894, Sir Oliver Lodge demonstrated a wireless transmitter and receiver to the Royal Society. In the early 1890s, Augusto Righi performed experiments at 1.5, 3, and 15 GHz.
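For reference, the four relations summarized at the start of this passage are usually quoted today in Heaviside's differential form. This is the standard modern statement of the equations, not a formulation specific to this book:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

These are, respectively, Gauss' law of electric fields, Gauss' law of magnetism, Faraday's law of induction, and Ampère's law with Maxwell's displacement current term.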
Soon several investigators, including Marconi, Popov, Lebedew, and Pampa, were performing wireless experiments at very high frequencies. In 1895, Bose used 10- to 60-GHz electromagnetic waves to ring a bell. About the same time, he made the first quantitative measurements above 30 GHz. In the 1920s, Czerny, Nichols, Tear, and Glagolewa-Arkadiewa were producing radio signals up to 3.7 THz. Very high frequency research up to 300 GHz is currently underway. Commercial applications are currently being deployed as high as 90 GHz. High frequency microwaves in the 11–40 GHz range are finding applications in wide area networks and backhaul networks in urban areas. Higher frequency systems are being used for high density industrial campus and building applications (Bryant, 1988; Meinel, 1995; Wiltse, 1984).

In 1825, Munk discovered that a glass tube with metal plugs and containing loose zinc and silver filings tended to decrease electrical resistance when small electrical signals were applied. Using this principle to create a "coherer" detector, in 1890, Édouard Branly demonstrated the detection of radio waves at a distance. The coherer detector was improved by Lodge and others. Braun invented the galena crystal ("cat's whisker") diode in 1874 (Bryant, 1988). Crystal detectors were applied to radio receivers by Bose, Pickard, and others between 1894 and 1906 and were a big improvement over the coherer. In 1894, Lodge detected "Hertzian" waves using Branly's coherer. In 1897, Tesla sensed electrical signals 30 miles away and received his basic radio patent. In 1898, Tesla demonstrated a radio controlled boat.

In 1894, Marconi became interested in Hertzian waves after reading an article by Righi. After visiting the classes and laboratory of Professor Righi, Marconi began radio experiments in 1895. His
radio receiver detector was the newly improved coherer. (Later he transitioned to Braun’s crystal diode.) He created a wireless communications system that could ring a bell. Perhaps, he and Bose were the first to use radio for remote control. In 1896, he demonstrated a 1.75-mile 1.2-GHz radio telegraph system to the British Post Office. This was probably the first microwave radio link. In 1897, Marconi installed the first permanent wireless station on the Isle of Wight and communicated with ships. The next year he added a second station at Bournemouth. This was the first permanent point to point wireless link. In 1899, the link was used to send the first paid wireless digital transmission, a telegram. In 1899, Marconi sent messages across the English Channel, and in 1901, he sent signals across the Atlantic between St. Johns, Newfoundland and Poldhu, England. In 1897, Lodge patented a means of tuning wireless transmissions. In 1898, Braun introduced coupling circuits to obtain accurate frequency tuning and reduce interference between radio stations. Marconi and Braun were cowinners of the Nobel Physics Prize for their work (one of the few times the Nobel Prize was awarded to engineers rather than scientists) (Bryant, 1988; Tarrant, 2001). The first audio transmission using radio was by the Canadian Reginald Fessenden in 1900. He also performed the first two-way transatlantic radio transmission in 1906 and the first radio broadcast of entertainment and music in the same year. However, commercial applications awaited the de Forest Audion. Early transmitters were broadband Hertzian types (spark gaps exciting tuned linear radiators). In 1906, the quenched spark transmitter was introduced by Wien. Continuous wave oscillations were introduced by Poulsen in 1906 (using an arc). Alexanderson, Goldschmidt, and von Arco quickly demonstrated continuous waves by other methods (Bryant, 1988). On the basis of the comments by Crookes, in 1892, Hammond Hayes, head of the Boston laboratories of the American Bell Telephone Company, had John Stone and later G. Pickard investigate the possibility of radiotelephony using Hertzian waves. These investigations did not result in practical devices. Further investigation was delayed until 1914. By 1915, one-way transmissions of 250 and 900 miles had been achieved. Later that year speech was successfully transmitted from Arlington, Virginia, to Mare Island, California, Darien, Panama, Pearl Harbor, Hawaii, and Paris, France. In the 1920s, radio research in the Bell System was divided among the American Telephone and Telegraph (AT&T) Development and Research Departments and the Bell Laboratories, all in New York. The Bell Laboratories moved to New Jersey to be less troubled by radio noise and became the primary radio investigation arm of the Bell System. In the 1920s, terrestrial radio propagation was an art, not a science. In 1920, Englund and Friis of the Bell Labs began developing radio field strength measuring equipment. This was followed by field measurements to, as Friis stated, “demystify radio.” By the early 1930s, an interest began to develop for long-distance relaying of telephone service by radio. It was clear that wide-frequency bandwidth was needed. The only spectrum available was above 30 MHz, the ultrashortwave frequencies (later termed very high frequencies, ultrahigh frequencies, and superhigh frequencies). Radio theory was developed and propagation experiments were carried out to validate it. 
By 1933, surface reflection, diffraction, refraction, and K factor (equivalent earth radius) were understood (Bullington, 1950; Burrows et al., 1935; England et al., 1933, 1938; Schelleng et al., 1933). By 1948, the theory and technology (Friis, 1948) had advanced to the point that fixed point-to-point microwave radio relay systems were practical (Bryant, 1988; Fagen, 1975; Friis, 1948).

At least three major technologies have been used for microwave antennas. In 1875, Soret introduced the optical Fresnel zone plate antenna. It was adapted to microwave frequencies in 1936 by Clavier and Darbord of Bell Labs (Wiltse, 1958). Dielectric and metal plate lenses were also tested (Silver, 1949, Chapter 11). However, by far, the most practical antenna was the reflector antenna. The first use of optical parabolic reflectors was by Archimedes during the siege of Syracuse (212–215 BCE). This reflector was used by Gregory (1663), Cassegrain (1672), and Newton (1672) to invent reflector telescopes. Hertz (1888) was the first to use a parabolic reflector at microwave radio frequencies. World War II saw the widespread use of this type of antenna for radio detection and ranging (radar) systems. They remain the most important type of microwave antenna today (Rahmat-Samii and Densmore, 2009).

Radio transmission and reception antennas need to be above path obstructions. Convenient locations for the transmitter and receiver equipment are usually somewhere else. A transmission line that was free from reflecting or absorbing objects was needed to connect antennas and radio equipment. Coaxial cable was patented in Germany by Ernst Werner von Siemens in 1884 and in the United States by Nikola Tesla in 1894. Hertz demonstrated the use of coaxial lines in 1887. Transmission by two parallel wire lines was demonstrated by Ernst Lecher in 1890. While this method had significantly less loss than
coaxial cable, extraneous radiation made it impractical at microwave frequencies. Until the late 1930s, all radio transmission lines were two-conductor lines: two wire balanced line, one conductor (with implied ground plane mirror conductor), and coaxial cable. Two-conductor lines were popular for transmission at radio frequencies below 30 MHz (Bryant, 1984, 1988; Fagen, 1975; Millman, 1984). Stripline- and microstrip-printed circuit technologies developed in the 1950s were used extensively in high frequency radio products. V. H. Rumsey, H. W. Jamieson, J. Ruze, and R. Barrett have been credited with the invention of the stripline. Microstrip was developed at the Federal Telecommunications Laboratories of ITT.

Coaxial cable, while the most complex, tended to be the choice for most long-distance applications because of its low radiation and cross-talk characteristics. However, its relatively high loss was an issue for transmission of high frequency radio signals for long distances. Coaxial cable was patented in England in 1880 by Oliver Heaviside and in Germany in 1884 by Siemens and Halske. The first modern coaxial cable was patented by Espenschied and Affel of Bell Telephone Laboratories in 1929. The first general-use coaxial connector was the UHF (ultrahigh frequency) connector created in the early 1940s. It was suitable for applications up to several hundred megahertz. The N connector, a connector for high frequency applications, was developed by Paul Neill at Bell Labs in 1944. This was followed by several derivative connectors such as the BNC (baby-type N connector) and the TNC (twist-type N connector). The N connector was limited in frequency to about 12 GHz, although precision versions were used up to 18 GHz. The SMA (SubMiniature connector version A) connector was adopted by the military in 1968 and became the industry standard for radio signals up to 18 GHz (precision versions are rated to 26 GHz). By extending the SMA design, the connectors 3.5 mm (rated to 34 GHz), 2.9 mm (or K) (rated to 46 GHz), 2.4 mm (rated to 50 GHz), 1.85 mm (rated to 60 GHz), and 1 mm (rated to 110 GHz) have been developed (Barrett, 1984; Bryant, 1984).

In 1887, Boys described the concept of guiding light through glass fibers. In 1897, Lord Rayleigh published solutions for Maxwell's equations, showing that transmission of electromagnetic waves through hollow conducting tubes or dielectric cylinders was feasible. R. H. Weber, in 1902, observed that the wave velocity of a radio signal in a tube was less than that in free space. He suggested that the wave was equivalent to a plane wave traveling in a zigzag path as it is reflected from the tube walls. DeBye, in 1910, developed the theory of optical waveguides. The first experimental evidence of radio frequency (RF) waveguides was demonstrated by George Southworth at Yale University in 1920. The hollow waveguide was independently researched by W. Barrow of the Massachusetts Institute of Technology (MIT) and George Southworth of Bell Laboratories in the mid-1930s. Southworth discovered the primary modes and characteristics of rectangular and circular waveguides. Southworth (1962) wrote a highly readable history of waveguide, waveguide filters, and related developments at Bell Labs. At nearly the same time, characteristics of various shapes of waveguides were also being developed by Brillouin, Schelkunoff, and Chu, and William Hansen began working on high Q microwave frequency resonant cavity circuits.
Waveguide flanges of various types were invented to provide a cost-effective yet accurate way to attach waveguide components (Bryant, 1988; Fagen, 1975; Millman, 1984; O'Neill, 1985; Packard, 1984; Southworth, 1950). During World War II, most of the critical waveguide and coax elements had been developed. Waveguide flanges (and flange adapters) provided cost-effective coupling of waveguide, directional couplers provided signal monitoring and sampling, filters provided frequency selectivity, and isolators and circulators provided two- and three-port directional routing of signals (Fig. 1.1) (Marcuvitz, 1951; Montgomery et al., 1948; Ragan, 1948; Southworth, 1950).

Figure 1.1 Waveguide coupling for multiple radios. Source: Reprinted with permission of Alcatel-Lucent USA, Inc.

In 1897, Braun invented the Cathode Ray Tube with magnetic deflection. Fleming invented the two-electrode "thermionic valve" vacuum tube rectifier in 1904. de Forest improved on Fleming's rectifier by inventing the three-electrode "Audion" vacuum tube in 1906. In 1912, Colpitts invented the push–pull amplifier using the Audion. Meissner used the three-electrode (triode) tube to generate RF waves in 1913. About the same time, other oscillators were developed by Armstrong, de Forest, Meissner, Franklin, Round, Colpitts, and Hartley. Colpitts developed a modulator circuit in 1914. In 1918, Armstrong invented the superheterodyne receiver that is commonly used today. In 1919, Barkhausen and Kurz used a triode to generate radio frequencies as high as 10 GHz and Transradio, a subsidiary of Telefunken, introduced duplex radio transmission. Oscillators using coaxial line and waveguide (hollow cavities) were introduced in 1932 and 1935, respectively. Armstrong invented frequency modulation (FM) (Armstrong, 1936) in 1935. With the exception of a short period in the early 1970s [when single-sideband amplitude modulation
(AM) was used briefly], FM was the primary modulation used in wideband microwave radios from the 1940s until the beginning of the digital radio era in the mid-1970s (Bryant, 1988; O'Neill, 1985).

Radar was the beginning of widespread applications of microwave radio frequencies. Radar was developed independently in the 1930s by Great Britain, Germany, Canada, Italy, Russia, Japan, the Netherlands, and the United States. In 1940, the United States formed the National Defense Research Committee (NDRC). Shortly thereafter, the NDRC's Microwave Committee began meeting at the private laboratories of Alfred Loomis. Microwave radar and navigation were the primary interests. At this time, the primary microwave research centers were at MIT and Stanford University and at the laboratories of Bell Laboratories, General Electric, Radio Corporation of America (RCA), and Westinghouse. In coordination with Sir Henry Tizard of the British Scientific Mission, the US government selected MIT as the contractor to carry out the radio research needed for the US and British military. This was organized as the Radiation Laboratory (Rad Lab). The 28-volume Radiation Laboratory Series of books detailing the results of the laboratory from 1940 to 1945 is beyond a doubt the most impressive single group of research reports on radio. Volumes 8, 9, 10, 12, and 13 (Kerr, 1951; Marcuvitz, 1951; Montgomery et al., 1948; Ragan, 1948; Silver, 1949) are still useful reading for microwave radio engineers. In roughly the same time period, low noise concepts such as noise figure and noise factor and low noise design concepts were discovered (Bryant, 1988; Fagen, 1975, 1978; Okwit, 1984; Sobol, 1984).

In 1937, Sigurd and Russell Varian demonstrated the first klystron oscillator. It was further developed by General Electric, Stanford University, Sperry Gyroscope, and Varian Associates. Eventually, 3-GHz klystrons were manufactured by Bell Telephone Laboratories (Bell Labs), MIT Radiation Laboratories, Federal Telephone and Radio, General Electric, Westinghouse, Varian Associates, CSF in France, and the Alfred Loomis Laboratory in England. Klystrons have had a long use for low and medium power applications. However, their relatively low power conversion efficiency (30% conversion of DC power input to microwave power output) limited high power applications (Bryant, 1988; Fagen, 1975, 1978; Sobol, 1984).

A 200-MHz two-pole magnetron was first demonstrated by Albert Hull at General Electric in the 1920s. By 1930, both the Americans and the Japanese were using magnetrons to generate microwave signals. In 1935, a 3-GHz multiple cavity magnetron was developed by Hans Hollmann. In 1940, John Randall and Harry Boot produced a high power water-cooled magnetron and a 6-kW version was produced for the US government by GECRL of Wembley, England. During World War II, Percy Spencer, a Raytheon engineer, significantly improved magnetron efficiency and manufacturability. After the war, he invented the first microwave oven (Bryant, 1988; Fagen, 1975, 1978).

In 1942, Rudolf Kompfner invented the traveling wave tube, a medium power microwave amplifier. In 1947, Brattain, Bardeen, and Shockley at Bell Laboratories invented the point-contact transistor. In 1948,
they invented the junction transistor. The first n–p–n transistor was demonstrated in 1950. Townes published the principle of the MASER (microwave amplification by stimulated emission of radiation) in 1951. In 1957, Esaki developed the germanium tunnel diode and Gould described the LASER (light amplification by stimulated emission of radiation). In 1959, Jack Kilby of Texas Instruments and Robert Noyce of Fairchild independently developed the integrated circuit. Kilby received the 2000 Nobel Prize in Physics for this achievement. In 1960, Kahng and Atalla developed silicon–silicon dioxide field-induced surface devices [which led to metal–oxide–semiconductor field-effect transistors (MOS FETs)]. In 1961, Biard and Pittman of Texas Instruments invented gallium arsenide (GaAs) diodes. Today most microwave low and medium power applications use solid-state devices such as gallium arsenide field-effect transistors (GaAs FETs). In 1962, Holonyak invented the first practical visible light-emitting diode (LED). In 1975, Ray Pengelly and James Turner invented the Monolithic Microwave Integrated Circuit (MMIC), although the concept had been mentioned in the early 1960s by Kilby. These devices were later further developed through support from the Defense Advanced Research Projects Agency (DARPA) (Bryant, 1988; Fagen, 1975, 1978; Millman, 1983). In 1963, the Institute of Electrical and Electronics Engineers was formed by the merger of the Institute of Radio Engineers (IRE) and the American Institute of Electrical Engineers (AIEE) (Tarrant, 2001).

In 1909, Sommerfeld (1909) published his theoretical integral equation solution to free space radio wave propagation. This was the beginning of theoretical analysis of radio waves (Oliner, 1984). In the 1920s, Nyquist (1924, 1928) and Hartley (1928) published the first significant papers addressing information theory. In the 1920s, 1930s, and 1940s, Kotel'nikov (1959), Nyquist, and Shannon developed the theoretical concepts of sampled signals and the relationship between the time and frequency domains. In 1939, Philip Smith, at Bell Telephone's Radio Research Lab in New Jersey, developed what is known as the Smith chart (Smits, 1985), a circular chart that shows the entire universe of complex impedances in one convenient circle. From the mid-1930s to the mid-1940s, considerable research was applied to radio wave propagation (Norton, 1962). In 1944, 1945, and 1948, Rice (1944, 1948) published his mathematical analysis of random noise with and without a sine wave. This work has been used extensively in the analysis of microwave fading statistics. In 1943, North (1963) defined what became known as matched filters, and Friis (1944) discovered the concept of noise figure. In the late 1940s, Shannon (1948, 1949, 1950) and Tuller (1949) published significant papers on the information theory of communications. In 1949, Wiener (1949) published his theory of linear filtering of signals in the presence of noise. About this same time he reported what has become known as the Wiener–Hopf equation that defines the relationship between signals in the time and frequency domains. In the 1950s, Cooley and Tukey developed the fast Fourier transform (FFT) algorithm. Blackman and Tukey (1958) introduced the concept of the signal power spectrum. The Hamming (1950), Reed–Muller (Muller, 1954; Reed, 1954), convolutional (Elias, 1955), and cyclic (Prange, 1957) codes were invented in the 1950s. Friis (1946) and Norton (1953) developed the modern radio wave transmission loss formula.
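For reference, two of the results mentioned above can be stated compactly in their standard textbook forms; the following restatement is added here and is not quoted from the original papers. Shannon's channel capacity for a bandlimited channel is

C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second,}

and the Friis free-space transmission loss between isotropic antennas is

L_{\mathrm{fs}} = \left(\frac{4\pi d}{\lambda}\right)^{2}, \qquad L_{\mathrm{fs}}\,(\mathrm{dB}) \approx 32.45 + 20\log_{10} f_{\mathrm{MHz}} + 20\log_{10} d_{\mathrm{km}},

where B is the channel bandwidth in hertz, S/N is the signal-to-noise power ratio, d is the path length, λ is the wavelength, and f is the carrier frequency.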
In the 1950s, Bullington (1947, 1950, 1957, 1977) was developing the fundamental characteristics of practical microwave propagation. About the same time, Norton et al. (1955) were expanding Rice's work on the combination of constant and Rayleigh-distributed signals. In the 1950s and 1960s, Medhurst, Middleton, and Rice were developing the theory of analog FM microwave transmission. In 1960, Kalman (1960) published his theory of linear filtering of signals in the presence of noise, and the Bose–Chaudhuri–Hocquenghem (BCH) (Bose and Ray-Chaudhuri, 1960; Hocquenghem, 1959) and Reed–Solomon (Reed and Solomon, 1960) codes were invented. In 1965, Wozencraft and Jacobs introduced the concept of geometric representation of signals. This is the basis of the "constellations" now popular in modulation theory. In 1967, Viterbi (1967) published the algorithm currently used for most digital radio demodulators. Ungerboeck (1982) invented trellis coded modulation in 1982. In 1993, Berrou et al. (1993) invented turbo coding. In 1996, Gallagher's (Gallagher, 1962) low density parity check (LDPC) codes were rediscovered by MacKay (Kizer, 1990; MacKay and Neal, 1996).

1.2 MICROWAVE TELECOMMUNICATIONS COMPANIES

Ericsson was started in 1876 as a telephone repair workshop in downtown Stockholm. It eventually became the primary supplier of telephones and switchboards to Sweden's first telecommunications operating company, Allmänna Telefonaktiebolag.

In 1897, Guglielmo Marconi formed the Wireless Telegraph and Signal Company (also known as the Marconi Company Limited, as well as the Wireless Telegraph Trading Signal Company). Marconi and his company created the first commercial radio transmission equipment and services. English Electric acquired the Marconi Company in 1946. The company was sold to the General Electric Corporation in 1987 and renamed Marconi Electronic Systems. In 1999, most of the Marconi Electronic Systems assets were sold to British Aerospace (BAE) and it became part of BAE Systems. However, General Electric retained the Marconi name, Marconi Corporation, which it sold to Ericsson in 2006 (Bryant, 1988; Sobol, 1984).

Alcatel-Lucent was created in 2006 when Alcatel acquired Lucent Technologies. Alcatel was started in 1898 as Compagnie Générale d'Electricité. In 1991, it became Alcatel Alsthom. In 1998, it shortened its name to Alcatel. ALCATEL stands for "ALsacienne de Constructions Atomiques, de TELecommunications et d'Electronique" (Alsatian Company for Atomic, Telecommunication, and Electronic Construction). Over several years, Alcatel acquired ITT, SEL, Thomson-CSF, Teletra, the Network Transmission Systems Division (NTSD) of Rockwell International (including the former Collins Microwave Radio Division), Newbridge Networks, DSC Communications, Spatial Wireless, Xylan, Packet Engines, Assured Access, iMagicTV, TiMetra, and eDial. Lucent Technologies was formed in 1996 by AT&T when it spun off its manufacturing and research organizations (primarily Western Electric and Bell Labs). Lucent acquired Ascend Communications in 1999. Alcatel-Lucent has three centers of microwave radio development and marketing: Vélizy (southwest Paris), France; Vimercate (northeastern Milan), Italy (the former Teletra); and Plano (north Dallas), Texas (the former Collins Microwave Division of Rockwell International).

The Alcatel-Lucent North American microwave radio facility in Plano, Texas, traces its roots to the Collins Radio Company, which was founded in 1933 by Arthur A. Collins in Cedar Rapids, Iowa. The Collins Radio Microwave Radio Division was founded in Richardson (north Dallas), Texas, in 1951. The first prototype of Collins commercial microwave equipment was placed in service between Dallas and Irving, Texas, in the spring of 1954. Later that year, the first Collins microwave radio system was sold to the California Interstate Telephone Company. By 1958, Collins was mass-producing microwave equipment and was providing the FAA (Federal Aviation Administration) with microwave systems providing communications and radar signal remoting networks. In 1973, Collins Radio merged into Rockwell International. The Texas-based Collins Microwave Radio Division ultimately became Rockwell's NTSD. During the 1970s, this division was the sole supplier of microwave radio equipment to MCI (Microwave Communications, Inc.), with most of its other sales to the Bell operating companies. In 1976, NTSD introduced its first digital microwave radio, the MDR-11, an 11-GHz multiline system delivered to Wisconsin Bell (Madison to Eau Claire).

In a parallel evolution, the Alcatel Network Systems' Raleigh, North Carolina, facility was originally operated by the ITT Corporation, which opened its first plant in 1958. From this facility, ITT first established its T1 spanline business in 1971, T3 fiber-optic transmission systems in 1979, and the first commercial single-mode fiber-optic transmission system in 1983.
In 1987, ITT and Compagnie Générale d'Electricité (CGE) of France agreed to a joint venture, creating Alcatel N.V.—the largest manufacturer of communications equipment in the world. Alcatel completed a buyout of ITT's 30% interest in the spring of 1992. The company was incorporated in the Netherlands, operated from Paris, and had its technical center in Belgium. This organization also created the Alcatel Network Systems Company, which was headquartered in Raleigh, North Carolina. In 1991, Alcatel purchased NTSD from Rockwell International and combined it with the Alcatel Network Systems Company to form Alcatel Network Systems, Inc., headquartered in Richardson, Texas. After the acquisition of DSC, the headquarters was moved to the former DSC facilities in Plano, Texas. In addition to microwave radios, this facility develops and markets fiber optics and digital cross-connect systems.

Founded in 1899 in Japan as the first US/Japanese joint venture with Western Electric Company, Nippon Electric (now NEC), headquartered in Tokyo, Japan, established itself as a technological leader early in its history by developing Japan's telephone communications system. Recognizing the impact information processing would eventually have on the world community, NEC was one of the earliest entrants into the computer and semiconductor markets in the early 1950s. NEC also supported much microwave research. NEC has manufactured microwave radios since the early 1950s and introduced its first microwave radio product to the US market in the early 1970s. Later, NEC delivered its first digital microwave radio to the United States in the mid-1970s. The NEC Corporation of America's Radio Communications Systems Division (RCSD) is headquartered in Irving, Texas (Morita, 1960).

In 1903, the German wireless company Telefunken was formed. It was the first significant commercial wireless telegraph competitor to Marconi's company.

In 1878, Alexander Graham Bell and his financiers, Gardiner Hubbard and Thomas Sanders, created the Bell Telephone Company. The company name was changed to the National Bell Telephone Company in 1879 and to the American Bell Telephone Company in 1880. By 1881, the company bought a controlling interest in the Western Electric Company from Western Union. In 1880, the AT&T Long Lines was formed. This group became a separate company named the American Telephone and Telegraph Company in 1885. In 1899, the AT&T Company bought the assets of American Bell and became the Bell System. In 1918, the federal government nationalized the entire telecommunications industry, with national security as the stated intent. In 1925, AT&T created Bell Telephone Laboratories ("Bell Labs"). In 1956, the Hush-A-Phone v. United States ruling allowed a third-party device to be attached to rented telephones owned by AT&T. This was followed by the 1968 Carterfone Decision that allowed third-party equipment to be connected to the AT&T telephone network. On January 8, 1982, the 1974 United States Department of Justice antitrust suit against AT&T was settled. Under the settlement AT&T ("Ma Bell") agreed to divest its local exchange service operating companies in return for a chance to go into the computer business. Effective January 1, 1984, AT&T's local operations were split into seven independent Regional Bell Operating Companies (RBOCs), or "Baby Bells." Western Electric was fully absorbed into AT&T as AT&T Technologies. After its own attempt to penetrate the computer marketplace failed, in 1991, AT&T absorbed the NCR (National Cash Register) Corporation. After deregulation of the US telecommunications industry via the Telecommunications Act of 1996, NCR was divested again. At the same time, the majority of AT&T Technologies and Bell Labs was spun off as Lucent Technologies. In 1994, AT&T purchased the largest cellular carrier, McCaw Cellular. In 1999, AT&T purchased IBM's Global Network business, which became AT&T Global Network Services. In 2001, AT&T spun off AT&T Wireless Services, AT&T Broadband, and Liberty Media. AT&T Broadband was acquired by Comcast in 2002. AT&T Wireless merged with Cingular Wireless in 2004 to become Cingular; in 2007, it became AT&T Mobility. In 2005, SBC Communications acquired AT&T Corp. and became AT&T Inc.

General Telephone & Electronics (GTE), founded in Wisconsin in 1918, was started as the Richland Center Telephone Company. It changed names many times: Commonwealth Telephone Company (1920), Associated Telephone Company (1926), General Telephone Corporation (1935), and finally, GTE Corporation (1959, when it merged with Sylvania Electric Products). In 1964, the Western Utilities Corporation merged with GTE. In 1955, GTE acquired Automatic Electric, the largest independent manufacturer of automatic telephone switches. In 1959, it acquired Lenkurt Electric Company, Inc., a manufacturer of microwave radio and analog multiplex equipment. Lenkurt Electric was established in 1933 as a wire-line telephone multiplex manufacturer. It moved from San Francisco to San Carlos in 1947. Its radio product line was terminated in 1982 and a number of employees migrated to Harris Farinon.
At the same time, the company adopted the name GTE Corporation and formed GTE Mobilnet Incorporated to handle the company's entrance into the new cellular telephone business. In 1983, Automatic Electric and Lenkurt were combined as GTE Network Systems. GTE became the third largest long-distance telephone company in 1983 through the acquisition of Southern Pacific Communications Company. At the same time, Southern Pacific Satellite Company was also acquired, and the two firms were renamed GTE Sprint Communications Corporation and GTE Spacenet Corporation, respectively. Through an agreement with the Department of Justice, GTE conceded to keep Sprint Communications separate from its other telephone companies and limit other GTE telephone subsidiaries in certain markets.

In 1997, Bell Atlantic merged with NYNEX but retained the Bell Atlantic name. In 2000, Bell Atlantic merged with GTE and adopted the name Verizon. In 2005, Verizon acquired MCI (formerly WorldCom).

The company that eventually became IBM was incorporated in the state of New York on June 16, 1911, as the Computing-Tabulating-Recording (C-T-R) Company. On February 14, 1924, C-T-R's name was formally changed to International Business Machines Corporation. In 1944, IBM and Harvard introduced the Mark 1 Automatic Sequence Controlled Calculator based on electromechanical switches. In 1952, IBM introduced the 701, a computer based on the vacuum tube. The 701 executed 17,000 instructions per second and was used primarily for government and research work. The IBM 7090, one of the first fully transistorized mainframes, could perform 229,000 calculations per second. In 1964, IBM introduced the System/360, the first large "family" of computers to use interchangeable software and peripheral equipment.

Farinon Electric was established in San Carlos, California, in 1958. Seeing a need for a high quality, light-route radio for telecommunications, Bill Farinon left his job with Lenkurt Electric to begin business in a Redwood City cabinet shop. Farinon Electric was started as a limited partnership with $140,000, of which $90,000 was cash (Farinon and his wife came up with $30,000, Farinon's father invested $20,000, and a friend invested $40,000). With two employees, he started designing and building the first Farinon Electric PT radio offering 36 channels at 450 MHz for the telephone and industrial market. In 1980, Farinon Corporation was sold to Harris Corporation and became known as Harris Farinon. In 1998, the division was renamed the Harris Microwave Communications Division.

Meanwhile, elsewhere in California, in 1984, Michael Friedenbach, Robert Friess, and William Gibson formed the Digital Microwave Corporation (DMC) to serve the short-haul microwave market. In 1998, DMC acquired MAS Technology and Innova Corporation. In 1999, the company name was changed to Stratex Networks. Plessey Broadband was acquired in 2002. In 2007, Stratex Networks and the Microwave Communications Division of Harris Corporation were merged to create Harris Stratex Networks. This independent company was majority-owned by Harris Corporation. In 2010, Harris spun off the company as Aviat Networks.

In 1899, Cleyson Brown formed the Brown Telephone Company in Abilene, Kansas. The company's name was changed to United Utilities in 1938 and to United Telecommunications ("United Telecom") in 1972. In 1980, United Telecom introduced a nationwide X.25 data service, Uninet.

The Southern Pacific Railroad operated its telephone system as an independent company, called the Southern Pacific Communications Corporation (SPCC). In the late 1950s, this primarily wire-based communications system began transitioning into a microwave radio relay network. In 1983, the GTE Corporation, parent company of General Telephone, purchased the network and renamed it GTE Sprint Communications. In 1986, Sprint was merged with US Telecom, the long-distance arm of United Telecom, to form the US Sprint. This partnership was jointly owned by GTE and United Telecom. Between 1989 and 1991, United Telecom purchased controlling interest in US Sprint. In 1991, United Telecom changed its name to Sprint.

In the mid-1980s, Sprint began building a nationwide fiber-optics network. In 1986, a highly successful "dropping pin" ad was used to describe the superiority (clear sound depicted by the pin's "ting" when it hit a hard surface) of the "all-fiber" digital network. In 1988, Sprint ran an ad showing a microwave radio tower being blown up, thereby emphasizing their claim of an "all-fiber" network (in an effort to differentiate the Sprint network from the largely microwave based AT&T and MCI networks). While this marketing approach was very successful, it did not win friends within the microwave community (who were quick to point out that radio was more reliable than multiline fiber—a problem later solved with ring architecture). Some microwave "old timers" are happy to point out that Sprint rebuilt the microwave tower and now has many fixed point to point microwave links in their all-digital network.

1.3 PRACTICAL APPLICATIONS

In the early 1860s, several people, including Bell, Gray, La Cour, Meucci, Reis, and Varley, demonstrated telephones (transmitters and receivers). In 1876, Alexander Bell and Thomas Watson, as well as Elisha Gray, demonstrated the first practical telephones. Both Bell and Gray filed patent documents the same day in 1876. Meucci sued Bell for patent infringement but died in 1889 before the suit could be completed. Western Union, in conjunction with Edison and Gray's company, Gray and Barton, began offering telephone service and entered into litigation with Bell concerning patent rights. Edison had invented a superior carbon telephone transmitter but Bell had the better receiver. About the same time, Western Union formed the Western Electric Manufacturing Company. In 1878, the New England Telephone Company was formed. The same year the Bell Telephone Company was formed by Bell and his financiers. The next year the two companies merged to form the National Bell Telephone Company. In 1880, Bell's company became the American Bell Telephone Company. In 1882, the company bought controlling interest in Western Electric Company from Western Union. About the same time, Western Union and Bell settled their long-standing patent infringement conflict. The same year AT&T was formed to create a nationwide long-distance telephone network. This organization eventually became AT&T Long Lines. In 1899, AT&T purchased the assets of the American Bell Telephone Company and added local service to its long-distance services. In 1925, AT&T created Bell Telephone Laboratories. In the United
States, much of the radio development was conducted by Bell Labs. These laboratories were funded from federally regulated telephone service income. After much legal maneuvering, the US Department of Justice imposed the Consent Decree of 1956 on AT&T. Bell Labs was treated as a national resource. The results of its employees' work were viewed as national property. Until this decree was changed in 1984, AT&T could not receive royalties for any of its inventions (such as the transistor and laser) (Bryant, 1988; IEEE Communications Society, 2002; O'Neill, 1985; Schindler, 1982; AT&T Bell Laboratories, 1983; Thompson, 2000).

The first commercial telephone exchange was opened in New Haven, Connecticut, in January 1878. In 1889, the first coin-operated telephone was installed in a bank in Hartford, Connecticut. In 1892, the Strowger Automatic Telephone Exchange Company (later Automatic Electric Company) installed the first automatic telephone exchange, the Strowger Step by Step. Later other automatic mechanical switches were invented and installed: the Panel in 1930 and the Crossbar in 1938. The cost of mechanical switch upgrades was increasing with each generation. In the 1950s, the concept of a software-defined switch was envisioned as a way to reduce the cost of upgrades and changes to telephone switches. Electronic switches were trialed in the 1960s. The first digital switching toll office system in North America was the Western Electric 4ESS introduced in 1972. Initially, these switches had analog telephone interfaces. Later they evolved to strictly digital DS1 interfaces. The digitization of the national telecommunications transportation network followed the evolution of the 4ESS. Northern Telecom introduced the first digital local office switch, the DMS10, in the late 1970s, followed by Western Electric's 5ESS in early 1989 (Schindler, 1982; Thompson, 2000).

With the proliferation of telephone service, the need for telephone lines increased dramatically. A solution to this problem was the development of frequency division multiplex (FDM) systems that "stacked" multiple telephone channels into one composite wide bandwidth analog signal. At first they connected cities via coaxial cable. Later analog FM microwave radios were used to transport the FDM signals (Fig. 1.2). The use of these telecommunication systems was universal. Compared to long-distance cable systems, the microwave radio systems were relatively inexpensive and could be placed practically anywhere (Fig. 1.3). Many companies evolved worldwide to supply this telecommunication equipment.

In 1916, ship to shore two-way radio communication was demonstrated between the USS New Hampshire and Virginia-based transmitter and receiver locations. Bell Labs developed the CW-936 500-kHz to 1.5-MHz radio telephones for the Navy. About 2000 of these radiotelephones were installed on US and British ships during World War I. Western Electric produced the 600-kHz to 1.5-MHz SCR-68 ground to air radiotelephones for the US Navy (Bryant, 1988; Fagen, 1978).

Figure 1.2 (a, b) Too many telephone lines and the early FDM-FM microwave radio system solution. Source: Photos from Collins Microwave Radio Company archives. Reprinted with permission of Alcatel-Lucent USA, Inc.

Figure 1.3 (a, b) Microwave radio locations. Source: Photos from archives of Collins Microwave Radio Division of Rockwell International. Reprinted with permission of Alcatel-Lucent USA, Inc.

About the same time, commercial applications began. Long-distance telephone service was needed on Catalina Island off the coast of California. However, due to wartime shortages, a cable system could not be provided. Radio was a logical choice. The project was started on April 20, 1920, and placed in service on July 16, 77 days later. Rapid deployment has been a standard radio system feature ever since. Transatlantic telephony experiments were conducted in the 1920s. Radio telephone service between the United States and England began on January 7, 1927. The cost for a 3-min call was $75. By 1939, shortwave radio telephone service was available between most major cities in the world (Fagen, 1978; Sobol, 1984).

Although Guarini had suggested radio relay communications in 1899, this was not attempted for several years. In 1925, the RCA installed an experimental radio link across the English Channel. The first commercial radio telephone service was initiated in 1927 between Great Britain and the United States. In 1931, French and English engineers of Les Laboratoires Standard (later Laboratoire Central de Telecommunication) and Standard Telephone and Cables (later International Telephone and Telegraph), under the direction of André Clavier, experimented with a 40-km microwave radio link across the English Channel (one telephone/telegraph channel between Calais and St. Margarets Bay). It operated with a 1-W 1.7-GHz transmitter with 10-ft (3-m) parabolic antennas. Reports on this project (Armstrong, 1936) first used the terms micro waves (two words). The first commercial microwave radio link was installed between the Vatican and the Italian PTT (Post, Telephone and Telegraph) in 1932. In 1933, a link was installed between Lympne, England, and St. Inglevert, France, which was in continuous operation until 1940. Also, in 1933, at the Chicago World's Fair, Westinghouse demonstrated 3.3-GHz radio links using parabolic antennas. Two of these systems were sold to the US Army Signal Corps for $2500 each. In 1936, the British General Post Office opened a multichannel link between Scotland and Northern Ireland. This 65-km link operated at 65 MHz using AM to carry nine voice channels. In Germany, Lorenz and Telefunken produced a single-channel 500-MHz AM system for the Army in 1937. In 1939, a 1.3-GHz FM 10-channel magnetron-based system was introduced in Stuttgart. These German systems were deployed widely in Europe and North Africa. These networks covered 50,000 route km, with terminals as far apart as 5000 km (Carl, 1966; Fagen, 1978; Sobol, 1984).

The first microwave radio relay system was based on the British Wireless Set No. 10, developed by the UK Signals Research and Development Establishment (SRDE). The Pye Company built the RF section and the TMC Company built the multiplex. It was an eight-telephone-channel TDM pulse width modulation 5-GHz radio system designed to operate in tandem as a radio relay. It was demonstrated to the US Signal Corps Labs and Bell Laboratories in September 1942. This spurred the development of similar systems in the United States: the RCA AN/TRC-5 and the Bell Labs AN/TRC-6 (Carl, 1966; Fagen, 1978; Sobol, 1984).

In 1941, Bell Laboratories tested a 12 voice channel AM system between Cape Charles and Norfolk. In 1943, Western Union installed the first intercity commercial microwave radio system using the RCA
microwave equipment. In 1945, AT&T Corporation was operating a multichannel FM system between New York and Philadelphia. In 1948, Western Union had a 1000-mile 24-hop microwave radio system connecting New York, Washington, DC, and Pittsburgh. This was the first system to use unattended radio repeater locations and was the first use of loop topology to increase system reliability. A 7-hop (tandem radio path) 100 voice channel 4-GHz system between New York and Boston (300 km) was introduced in 1947. This experimental system, named TD-X, was the basis of the widely deployed improved system, named TD-2, which provided 489 telephone channels or one television channel per radio channel. In 1951, AT&T completed the TD-2 107-hop, 12 RF channel per hop, system between New York and San Francisco. This system spanned 4800 km and reached 12,000 km of total hop length. The short-haul 11-GHz TJ system was announced in 1957. In 1955, AT&T began the development of the 1800 voice channel 6-GHz TH system. By 1960, it had been deployed in parallel with the TD-2 system (Fagen, 1978; Friis, 1948; Sobol, 1984; Thayer et al., 1949).

In 1954, the US Air Force SAGE system employed the first (1200-baud analog telephone channel) modems to communicate between computer systems. In the 1960s, digital cable systems began to be deployed worldwide to interconnect telephone operating company switches. In France, TDM using PCM had been studied beginning in 1932. These techniques were used extensively in the United States beginning in 1962 with T1 digital cable spans with 24 voice channel banks connecting the 4ESS tandem switches. The transport was digital but the connections to the 4ESS were analog. Later direct DS1 interfaces were added to the switch. The introduction of the electronic 4ESS switch in 1976 spurred the development of terrestrial digital systems. In Europe, 30 voice channel TDM/PCM E1 links were being deployed (Fagen, 1978; Sobol, 1984; Welch, 1984).

In 1971, Hoff at Intel invented the silicon microprocessor that is used in personal computers. Jobs and Wozniak created the Apple I computer in 1976. The 16-bit microprocessor computer was introduced in 1981 by IBM using the Microsoft disk operating system (MS-DOS) developed by Gates and Allen. Jobs and Wozniak introduced the Macintosh computer in 1984.

Before 1949, the Federal Communications Commission (FCC) assigned microwave spectrum only to telecommunications common carriers. After that date, it began to license private microwave systems on a case by case basis if no common carrier service was available. In 1959, the FCC, in its "Above 890" ruling, decided to allow licensing of private intercity microwave systems for voice or data service at frequencies above 890 MHz. After that ruling, in the United States, microwave frequencies were defined as starting at 890 MHz. This definition is still in common use in the United States today. In 1962, the first telecommunication satellite, Telstar, was placed into orbit. In 1963, the American Standard Code for Information Exchange (ASCII) was defined. In 1968, DARPA began deployment of ARPANET (Advanced Research Projects Agency Network) and placed it in service in 1971. This was the first step in creating the Internet. This was to have a profound impact on telecommunications worldwide. In the United States, the 1968 Carterfone Decision created the opportunity for interconnection of customer-owned telephone equipment. In 1969, the first digital radio relay system went into operation in Japan.
It operated at 2 GHz with a transmission capacity of 17 Mb/s. The FCC's Specialized Common Carrier Decision of 1969 decreed that new microwave companies could compete with the existing regulated telephone companies to sell private network transmission services. This brought a flood of Specialized Common Carriers utilizing microwave radio. In 1968, Western Microwave merged with Community Television cable system and became American Tele-Communications, with Western Tele-Communications (WTCI) and Community Tele-Communications (CTCI) subsidiaries. The same year the parent company's name was changed to Tele-Communications Inc. (TCI) and the headquarters was moved to Denver, Colorado. WTCI used the 1969 ruling to begin building an extensive microwave network used primarily for video distribution. By 1974, WTCI had become a large US microwave common carrier, second only to AT&T.

MCI was founded as Microwave Communications, Inc. on October 3, 1963. Initially it built microwave relay stations between Chicago and St. Louis. In 1969, the umbrella company Microwave Communications of America, Inc. (MICOM) was incorporated and MCI began building its national private long-distance microwave radio network. MCI remains a significant common carrier today.

In 1979, the Times Mirror Company entered the cable network business. Not long after that the company formed Times-Mirror Microwave Communications Company of Austin, Texas. This company operated a large microwave network that provided, on a long-term contractual basis, transmission capacity to Telcos (long-distance resellers, independent telephone companies, and regional Bell companies) to carry their voice calls.

In 1970, Data Transmission Co. (DATRAN) filed for FCC approval of a nationwide system exclusively for data transmission over digital microwave radios. The DATRAN system was a nationwide all-digital switched microwave radio network, which linked subscriber terminals in 35 metropolitan areas. The same year Norman Abramson and Franklin Kuo at the University of Hawaii introduced ALOHANET, the first large-scale deployment of data packets over radio. In 1973, this concept was refined by Robert Metcalfe at Xerox PARC into Ethernet, the technology that led to the IEEE 802.3 Local Area Network data interface standard.

The 1972 FCC "Open Skies" ruling created domestic satellite communications carriers. These systems shared the terrestrial microwave radio 4- and 6-GHz common carrier bands. The same year Southern Pacific Communication (the forerunner of Sprint) received FCC approval for an 11-state common carrier microwave radio network. In 1977, Bell Labs installed the first Advanced Mobile Phone System (AMPS). This was the first cellular radio system. The need for transmission circuits between cell sites would eventually expand the use of microwave radio systems. In the late 1970s, AT&T digitized its network enabling it to carry data traffic. In 1983, Judge Greene approved divestiture of AT&T. The AT&T divestiture (the US Department of Justice's Modified Final Judgment of the 1956 Consent Decree), effective January 1, 1984, separated AT&T from seven new RBOCs (Ginsberg, 1981).

1.4 THE BEAT GOES ON

In the late 1940s, all fixed point to point microwave relay systems used analog FM transmission. These systems carried video and telephony exclusively. FDM was used to aggregate the 4-kHz-wide analog telephone channels for transmission over FM radios. In the 1970s, single-sideband analog radios were used to increase the analog transmission capacity. However, by the late 1970s, digital transmission began to be deployed worldwide. TDM was used to aggregate the PCM-encoded telephone channels. Various standards were developed in different countries to multiplex various levels of TDMed digital signals ("asynchronous" systems in North America and Plesiochronous Digital Hierarchy in Europe) (Gallagher, 1962; AT&T Bell Laboratories, 1983). In 1988, Bellcore's Synchronous Optical Network (SONET) and in 1989, the ITU-T's Synchronous Digital Hierarchy (SDH) were finalized, setting new standards for worldwide digital transport interconnectivity. The SONET and SDH systems were widely deployed in radio networks in the 1990s.

Beginning in the 1970s, while data equipment was being developed, data network architectures were beginning to become standardized. The IBM Systems Network Architecture (SNA) introduced the concept of layered hierarchical peer processes. Its six-layer architecture was very popular. The Digital Equipment Corporation (DEC) also provided a five-layer Digital Network Architecture (DNA). The International Standards Organization (ISO) defined a seven-layer Reference Model [Open Systems Interconnection (OSI) "seven-layer stack"]. ARPANET developed its four-layer architecture that has become the standard for the Internet. There were many other architectures that achieved various levels of popularity. However, today the Internet ("Ethernet and IP") architecture is by far the most popular (Green, 1984; IEEE Communications Society, 2002; Konangi and Dhas, 1983).

J. C. R. Licklider, in his January 1960 paper, Man-Computer Symbiosis, proposed "a network of such [computers], connected to one another by wide-band communication lines [which provide] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions." During the 1960s, Paul Baran and Donald Davies independently proposed data networks based on the principle of breaking down all digital messages into message blocks called packets. AT&T engineers and management discounted the concept as unworkable. Unlike the AT&T approach of circuit-switching networks, the proposed packet networks would store and forward message blocks over different routes based on various criteria. With adequate path redundancy, these networks were inherently highly reliable in the face of localized network outages. Leonard Kleinrock, in 1961, was the first to develop a mathematical theory of this technology (Hafner and Lyon, 1996). In 1962, Licklider was appointed head of the computer research program at DARPA. Licklider created a computer science community associated with DARPA. In 1964, Ivan Sutherland took over as head of that program. Sutherland recruited Robert Taylor, from Dallas, Texas, to manage the DARPA computer networks. Taylor's office had three different communications terminals to three different computers at three different locations. The complexity of interacting with each computer and the inability to transfer information from
one computer to another prompted Taylor to propose a data network to connect all facilities performing research for DARPA using a common interface. Taylor proposed this network and the project was approved for implementation (Hafner and Lyon, 1996). Taylor's network began as a network of four nodes connecting the University of California, Los Angeles (UCLA), Stanford Research Institute, University of Utah, and University of California, Santa Barbara. The nodes were controlled by Interface Message Processors (IMPs), the forerunner of the modern router. The IMPs and the network concept were specified and managed by Bolt Beranek and Newman (BBN), and the IMPs were designed and manufactured by Honeywell. The data connections among the nodes were data modems connected to audio circuits leased from AT&T. The IMP packet switches and their connections were called the ARPANET. The UCLA Network Measurement Center would deliberately stress the network to highlight bugs and degradations. The IMPs reported various quality metrics and statistics to a central Network Control Center (NCC) to facilitate effective management of network transmission quality. The NCC was also the focal point for coordinated software upgrade of all IMPs via remote download; the concept of a data Network Operations Center (NOC) was introduced. Request for Comments (RFC) Number 1 (RFC 1), entitled "Host Software," was written by Steve Crocker in 1969. About the same time, an informal group, which was eventually called the Network Working Group (NWG), was formed to oversee the evolution of the network. This group eventually became the Internet Engineering Task Force (Fial, R., private communication, 2010; Hafner and Lyon, 1996).

The first electronic mail (e-mail) between two machines was sent in 1971 by Ray Tomlinson at BBN. Tomlinson chose the @ symbol as the separator between the user name and the user's computer. In 1972, Robert Metcalfe and others at Xerox PARC adapted the packet techniques from ALOHANET (Norman Abramson and others) to create a coaxial cable network connecting Alto computers. Metcalfe first called the new network Alto Aloha and later Ethernet.

In the early 1970s, AT&T was asked if it wanted to take over ARPANET. AT&T and Bell Labs studied the proposal but declined. About the same time, ITU developed a packet network standard X.25. In 1974, Vint Cerf and Robert Kahn described the end to end routing of packets called datagrams, which encapsulated digital messages. The paper also introduced the concept of gateways. In 1975, Yogen Dalal, using the Cerf and Kahn concepts, developed a specification for transmission control protocol (TCP). The original concept of TCP included both packet protocol and packet routing. In a TCP review meeting in 1978, Vint Cerf, Jon Postel, and Dan Cohen decided to split the packet protocol and routing functions of TCP into two separate functions: Internet Protocol (IP) and TCP. All ARPANET host computers were converted to Transmission Control Protocol/Internet Protocol (TCP/IP) operation in 1983 (Fial, R., private communication, 2010; Hafner and Lyon, 1996). In 1993, Mosaic, the first graphical Internet browser, was released by Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. The next year the Netscape Navigator appeared, which quickly expanded the Web's presence and made it a viable commercial medium. In 1988, the ISO produced the OSI protocol standard.
It was intended to replace TCP/IP. Its complexity and insistence on replacing (rather than supplementing) existing standards made its adoption and implementation very difficult. In 1991, Tim Berners-Lee of CERN published a summary of the World Wide Web. For the first time the Internet was introduced to the concepts of HyperText Transfer Protocol (HTTP), HyperText Markup Language (HTML), the Web browser, and the Web server. HTML is the markup language used for documents served up by a Web server. HTTP is the transfer protocol developed for easy transmission of these hypertext documents by the Web server. A Web browser consumed these documents and drew them on a page. While these sat upon the already existing infrastructure of the Internet, they were only one of several formats used at the time for sharing information. Some of the other formats popular at the time were Gopher and FTP. The HTML format was a little more user friendly, embedding navigation and display together, but it was not until the graphical browser was created that it became the de facto standard. In 1995, the US government formally turned over operation of the Internet to private Internet service providers (ISPs). The world would never be the same (Kizer, M., private communication, 2010; Hafner and Lyon, 1996).

The new millennium has shown a significant increase in adoption of IP technology for interconnecting all forms of digital transmission. For now, the SONET and SDH systems are maintaining a hold on the long-distance transmission market. However, the user community drop and edge connections are rapidly transitioning to IP. IP, with its evolving Quality of Service features, is the new wave of digital transmission. Fixed point to point microwave network evolution mirrors that transition.
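For reference, the line rates of the multiplexing hierarchies mentioned in this section work out from simple arithmetic; the figures below are the standard published values and are added here rather than taken from the chapter text:

\text{DS1 (North America):}\quad 24 \times 64\ \text{kb/s} + 8\ \text{kb/s framing} = 1.544\ \text{Mb/s}

\text{E1 (Europe):}\quad 32 \times 64\ \text{kb/s} = 2.048\ \text{Mb/s} \quad \text{(30 voice channels plus 2 overhead time slots)}

\text{SONET STS-1:}\ 51.84\ \text{Mb/s}, \qquad \text{SDH STM-1} = 3 \times 51.84\ \text{Mb/s} = 155.52\ \text{Mb/s}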

REFERENCES

Armstrong, E., "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation," Proceedings of the IRE, pp. 689–740, May 1936.
AT&T Bell Laboratories, Engineering and Operations in the Bell System, Second Edition. Murray Hill: AT&T Bell Laboratories, 1983.
Barrett, R. M., "Microwave Printed Circuits—The Early Years," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 983–990, September 1984.
Bennett, W. R. and Davey, J. R., Data Transmission. New York: McGraw-Hill, 1965.
Berrou, C., Glavieux, A. and Thitimajshima, P., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," IEEE International Conference on Communications Proceedings, pp. 1064–1070, 1993.
Blackman, R. B. and Tukey, J. W., The Measurement of Power Spectra from the Point of View of Communications Engineering. New York: Dover Publications, 1958.
Bose, R. and Ray-Chaudhuri, D., "On a Class of Error-Correcting Codes," Information and Control, Vol. 3, pp. 68–79, 1960.
Bryant, J. H., "Coaxial Transmission Lines, Related Two-Conductor Transmission Lines, Connectors and Components: A U. S. Historical Perspective," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 970–983, September 1984.
Bryant, J. H., "The First Century of Microwaves—1886 to 1986," IEEE Transactions on Microwave Theory and Techniques, Vol. 36, pp. 830–858, May 1988.
Bullington, K., "Radio Propagation at Frequencies Above 30 Megacycles," Proceedings of the IRE—Waves and Electrons Section, pp. 1122–1136, October 1947.
Bullington, K., "Radio Propagation Variations at VHF and UHF," Proceedings of the IRE, pp. 27–32, January 1950.
Bullington, K., "Radio Propagation Fundamentals," Bell System Technical Journal, Vol. 36, pp. 593–626, May 1957.
Bullington, K., "Radio Propagation for Vehicular Communications," IEEE Transactions on Vehicular Technology, Vol. 26, pp. 295–308, November 1977.
Burrows, C. R., "Propagation Over Spherical Earth," Bell System Technical Journal, Vol. 14, pp. 477–488, July 1935.
Burrows, C. R., Hunt, L. E. and Decino, A., "Ultra-Short Wave Propagation: Mobile Urban Transmission Characteristics," Bell System Technical Journal, Vol. 14, pp. 253–272, April 1935.
Carl, J., Radio Relay Systems. London: MacDonald, 1966.
Elias, P., "Coding for Noisy Channels," IRE Convention Record, Vol. 3, pp. 37–47, 1955.
England, C. R., Crawford, A. B. and Mumford, W. W., "Some Results of a Study of Ultra-Short-Wave Transmission Phenomena," Bell System Technical Journal, Vol. 12, pp. 197–227, April 1933.
England, C. R., Crawford, A. B. and Mumford, W. W., "Ultra-Short-Wave Transmission and Atmospheric Irregularities," Bell System Technical Journal, Vol. 17, pp. 489–519, October 1938.
Fagen, M. D., A History of Engineering & Science in the Bell System, The Early Years (1875–1925). Murray Hill: Bell Telephone Laboratories, 1975.
Fagen, M. D., Editor, A History of Engineering and Science in the Bell System, National Service in War and Peace (1925–1975). Murray Hill: Bell Telephone Laboratories, 1978.
Friis, H. T., "Noise Figures of Radio Receivers," Proceedings of the IRE, pp. 419–422, July 1944.
Friis, H. T., "A Note on a Simple Transmission Formula," Proceedings of the IRE—Waves and Electrons Section, pp. 254–256, May 1946.
Friis, H. T., "Microwave Repeater Research," Bell System Technical Journal, Vol. 27, pp. 183–246, 1948.
Gallagher, R. G., "Low Density Parity Check Codes," IRE Transactions on Information Theory, Vol. 8, pp. 21–28, January 1962.


Ginsberg, W., "Communications in the 80's: The Regulatory Context," IEEE Communications Magazine, Vol. 19, pp. 56–59, September 1981.
Green, P. E., Jr., "Computer Communications: Milestones and Prophecies," IEEE Communications Magazine, Vol. 22, pp. 49–63, May 1984.
Hafner, K. and Lyon, M., Where Wizards Stay Up Late, the Origins of the Internet. New York: Simon & Schuster, 1996.
Hamming, R., "Error Detecting and Error Correcting Codes," Bell System Technical Journal, Vol. 29, pp. 41–56, January 1950.
Hartley, R., "Transmission of Information," Bell System Technical Journal, Vol. 7, pp. 535–563, July 1928.
Hocquenghem, A., "Codes Correcteurs d'Erreurs," Chiffres, Vol. 2, pp. 147–156, 1959.
IEEE Communications Society, A Brief History of Communications. Piscataway: IEEE, 2002.
Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME, Vol. 82, pp. 35–45, January 1960.
Kerr, D., Propagation of Short Radio Waves, Radiation Laboratory Series, Volume 13. New York: McGraw-Hill, 1951.
Kizer, G. M., Microwave Communication. Ames: Iowa State University Press, 1990.
Konangi, V. and Dhas, C. R., "An Introduction to Network Architectures," IEEE Communications Magazine, Vol. 21, pp. 44–50, October 1983.
Kotel'nikov, V. A., The Theory of Optimum Noise Immunity. New York: McGraw-Hill, 1959.
MacKay, D. J. C. and Neal, R. M., "Near Shannon Limit Performance of Low Density Parity Check Codes," Electronics Letters, Vol. 32, pp. 1645–1655, August 1996.
Marcuvitz, N., Waveguide Handbook, Radiation Laboratory Series, Volume 10. New York: McGraw-Hill, 1951.
Maxwell, J. C., "A Dynamical Theory of the Electromagnetic Field," Philosophical Transactions of the Royal Society of London, Vol. 155, pp. 459–512, 1865.
Meinel, H. H., "Commercial Applications of Millimeterwaves History, Present Status and Future Trends," IEEE Transactions on Microwave Theory and Techniques, Vol. 43, pp. 1639–1653, July 1995.
Millman, S., Editor, A History of Engineering and Science in the Bell System, Physical Sciences (1925–1980). Murray Hill: AT&T Bell Laboratories, 1983.
Millman, S., Editor, A History of Engineering and Science in the Bell System, Communications Sciences (1925–1980). Indianapolis: AT&T Technologies, 1984.
Montgomery, C. G., Dicke, R. H. and Purcell, E. M., Editors, Principles of Microwave Circuits, Radiation Laboratory Series, Volume 8. New York: McGraw-Hill, 1948.
Morita, K., "Report of Advances in Microwave Theory and Techniques in Japan - 1959," IRE Transactions on Microwave Theory and Techniques, Vol. 8, pp. 395–397, July 1960.
Muller, D., "Application of Boolean Switching Algebra to Switching Circuit Design," IEEE Transactions on Computers, Vol. 3, pp. 6–12, September 1954.
North, D. O., "An Analysis of the Factors which Determine Signal/Noise Discrimination in Pulse-Carrier Systems," Proceedings of the IEEE, pp. 1016–1027, July 1963.
Norton, K. A., "Transmission Loss in Radio Propagation," Proceedings of the IRE, pp. 146–152, January 1953.
Norton, K. A., "Radio-Wave Propagation During World War II," Proceedings of the IRE, pp. 698–704, May 1962.
Norton, K. A., Vogler, L. E., Mansfield, W. V. and Short, P. J., "The Probability Distribution of the Amplitude of a Constant Vector Plus a Rayleigh-Distributed Vector," Proceedings of the IRE, pp. 1354–1361, October 1955.
Nyquist, H., "Certain Factors Affecting Telegraph Speed," Bell System Technical Journal, Vol. 3, pp. 324–346, April 1924.


Nyquist, H., "Certain Topics in Telegraph Transmission Theory," AIEE Transactions, Vol. 47, pp. 617–644, April 1928.
O'Neill, E. F., A History of Engineering & Science in the Bell System, Transmission Technology (1925–1975). Murray Hill: AT&T Bell Laboratories, 1985.
Okwit, S., "An Historical View of the Evolution of Low-Noise Concepts and Techniques," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 1068–1082, September 1984.
Oliner, A. A., "Historical Perspectives on Microwave Field Theory," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 1022–1045, September 1984.
Packard, K. S., "The Origin of Waveguides: A Case of Multiple Rediscovery," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 961–969, September 1984.
Prange, E., "Cyclic Error-Correcting Codes in Two Symbols," Air Force Cambridge Research Center Technical Report TN-57-103. Cambridge: United States Air Force, 1957.
Ragan, G., Editor, Microwave Transmission Circuits, Radiation Laboratory Series, Volume 9. New York: McGraw-Hill, 1948.
Rahmat-Samii, Y. and Densmore, A., "A History of Reflector Antenna Development: Past, Present and Future," SBMO/IEEE MTT-S International Microwave & Optoelectronics Conference, pp. 17–23, November 2009.
Reed, I., "A Class of Multiple-Error-Correcting Codes and a Decoding Scheme," IEEE Transactions on Information Theory, Vol. 4, pp. 38–49, September 1954.
Reed, I. and Solomon, G., "Polynomial Codes Over Certain Finite Fields," Journal of the Society of Industrial Applied Mathematics, Vol. 8, pp. 300–304, 1960.
Rice, S. O., "Mathematical Analysis of Random Noise," Bell System Technical Journal, Vol. 23, pp. 282–332, July 1944, and Vol. 24, pp. 46–156, January 1945.
Rice, S. O., "Statistical Properties of a Sine Wave Plus Random Noise," Bell System Technical Journal, Vol. 27, pp. 109–157, January 1948.
Salazar-Palma, M., Garcia-Lamperez, A., Sarkar, T. K. and Sengupta, D. L., "The Father of Radio: A Brief Chronology of the Origin and Development of Wireless Communications," IEEE Antennas and Propagation Magazine, Vol. 53, pp. 83–114, December 2011.
Sarkar, T. K., Mailloux, R. J., Oliner, A. A., Salazar-Palma, M. and Sengupta, D. L., History of Wireless. Hoboken: John Wiley & Sons, Inc., 2006.
Schelleng, J. C., Burrows, C. R. and Ferrell, E. B., "Ultra-Short Wave Propagation," Bell System Technical Journal, Vol. 12, pp. 125–161, April 1933.
Schindler, G. E., Jr., Editor, A History of Engineering and Science in the Bell System, Switching Technology (1925–1975). Murray Hill: AT&T Bell Laboratories, 1982.
Shannon, C. E., "A Mathematical Theory of Communication," Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July and October 1948.
Shannon, C. E., "Communication in the Presence of Noise," Proceedings of the IRE, pp. 10–21, January 1949.
Shannon, C. E., "Recent Development in Communication Theory," Electronics, Vol. 21, pp. 80–83, April 1950.
Silver, S., Editor, Microwave Antenna Theory and Design, Radiation Laboratory Series, Volume 12. New York: McGraw-Hill, 1949.
Smits, F. M., Editor, A History of Engineering and Science in the Bell System, Electronics Technology (1925–1975). Indianapolis: AT&T Technologies, 1985.
Sobol, H., "Microwave Communications—An Historical Perspective," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 1170–1181, September 1984.
Sommerfeld, A., "Über die Ausbreitung der Wellen in der drahtlosen Telegraphie," Annals of Physics, Vol. 28, pp. 665–736, 1909.
Southworth, G. C., Principles and Applications of Waveguide Transmission. New York: Van Nostrand, 1950.


Southworth, G. C., Forty Years of Radio Research. New York: Gordon and Breach, 1962.
Tarrant, D. R., Marconi's Miracle. St. John's: Flanker Press, 2001.
Thayer, G. N., Roetken, A. A., Friis, R. W. and Durkee, A. L., "A Broad-Band Microwave Relay System Between New York and Boston," Proceedings of the IRE—Waves and Electrons Section, pp. 183–188, February 1949.
Thompson, R. A., Telephone Switching Systems. Boston: Artech House, 2000.
Tuller, W. G., "Theoretical Limits of the Rate of Transmission of Information," Proceedings of the IRE, pp. 468–478, May 1949.
Ungerboeck, G., "Channel Coding with Multilevel/Phase Signals," IEEE Transactions on Information Theory, Vol. 28, pp. 55–67, January 1982.
Viterbi, A. J., "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Transactions on Information Theory, Vol. 13, pp. 260–269, April 1967.
Welch, H., "Applications of Digital Modulation Techniques to Microwave Radio Systems," Proceedings of the IEEE International Conference on Communications, Vol. 1, June 1978, pp. 1170–1181, September 1984.
Wiener, N., Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications. Cambridge: The MIT Press, 1949.
Wiltse, J. C., "History of Millimeter and Submillimeter Waves," IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 1118–1127, September 1984.
Wiltse, J. C., "History and Evolution of Fresnel Zone Plate Antennas for Microwaves and Millimeter Waves," Antennas and Propagation Society International Symposium, Proceedings, Vol. 2, pp. 722–725, July 1999.

2 REGULATION OF MICROWAVE RADIO TRANSMISSIONS

The International Telecommunication Union—Radiocommunication Sector (ITU-R) is the branch of the United Nations that regulates radio transmissions internationally. The ITU began in 1865 as the International Telegraph Union. In 1903, the first International Radiotelegraph Convention was held in Berlin. The result of that convention was the formation of the International Radiotelegraph Union in 1906. It produced the first international regulations governing wireless telegraphy. A Table of Frequency Allocations was introduced in 1912. In 1927, the International Radio Telegraph Convention established the International Technical Consulting Committee on Radio (CCIR, Comité Consultatif International des Radiocommunications) to study issues pertaining to radio communication. In 1932, the International Telegraph Convention and the International Radiotelegraph Convention were merged to form the International Telecommunication Convention. The name of the merged group was changed to International Telecommunication Union (ITU) in 1944. In 1947, the ITU became an agency of the United Nations. The same year the ITU entered the UN, the International Frequency Registration Board (IFRB) was established, and conformance to the Table of Frequency Allocations became mandatory for all ITU signatory nations. In 1993, the name of the Comité Consultatif International des Télégraphes et Téléphones (CCITT), which dealt with international telecommunications standards, was changed to International Telecommunication Union—Telecommunication Standardization Sector (ITU-T). In 1992, the name of CCIR, the group that dealt with international radio matters, was changed to International Telecommunication Union—Radiocommunication Sector. International frequency allocations and technical rules are listed in the ITU-R Radio Regulations ("Red Books"). Allocations and rules are defined by region:

Region I: Africa, Europe, Middle East, and Russian Asia
Region II: North and South America and Greenland
Region III: South Pacific, Australia, and South and East Asia

ITU-R radio regulations (http://www.itu.int/publ/R-REG-RR/en) are updated and modified during World Radiocommunication Conferences (WRCs) that are held at 2- to 5-year intervals, most often at the ITU headquarters in Geneva. The result of the WRC is considered a treaty obligation by participating countries. By this international treaty, all subscribing nations agree to abide by the ITU-R worldwide regulations contained in the ITU-R Radio Regulations. Those regulations apply to all radio transmissions
that extend beyond a single country. For transmissions completely contained within a single country, the transmissions are considered "internal matters." Country administrations are not bound by the ITU-R regulations for the management of these "internal" transmissions (with the exception of radio transmission near an international border, where such transmission could interfere with another nation's transmission systems, and with international satellite systems that terminate within that nation). For frequency bands that share dissimilar services, the difference in governing regulations between "internal matter" services and international services can be significant.

In the United States, the fixed point to point terrestrial commercial services are governed by Part 101 of CFR 47, Chapter 1 (http://www.fcc.gov/encyclopedia/rules-regulations-title-47). Fixed satellite services (FSSs) are governed by Part 25. Part 101 uses coordination standards developed by the Telecommunications Industry Association (TIA) (Committee TR14-11, 1994). Part 25 uses coordination standards developed by the ITU. Coordination of potential interference into the satellite systems from the fixed radios, and protection of the fixed radios by the satellite systems, as defined in Part 25, differs from coordination within the fixed community, as defined within Part 101. Differences in coordination and licensing methodologies complicate frequency sharing between fixed point to point microwave and satellite services.

Within the United States, national telecommunication law is codified within the Code of Federal Regulations (CFR), Title 47—Telecommunications (Mosley, published yearly). These laws provide for three agencies: Chapter I defines the Federal Communications Commission (FCC), Chapter II defines the Office of Science and Technology Policy and National Security Council, and Chapter III defines the National Telecommunications and Information Administration (NTIA), Department of Commerce.

The FCC has regulatory authority over all non-Federal-Government radio spectrum, as well as all international communications that originate or terminate within the United States, the District of Columbia, and the US possessions. It is the result of a long history of regulatory attempts to manage radio transmission in the United States (Linthicum, 1981). The Wireless Ship Act passed by the US Congress in 1910 required all ships of the United States traveling over 200 miles off the coast and carrying over 50 passengers to be equipped with wireless radio equipment with a range of 100 miles. The Radio Act of 1912 (enacted at least in part because of the Titanic disaster) gave regulatory powers over radio communication to the Secretary of Commerce and Labor and required all seafaring vessels to maintain 24-h radio watch and keep in contact with nearby ships and coastal radio stations. It did not mention broadcasting and limited all private radio communications to what is now the AM band. The Radio Act of 1927, which superseded the Radio Act of 1912, created the Federal Radio Commission (FRC). In 1934, Congress passed the Communications Act, which abolished the FRC and transferred jurisdiction over radio licensing to the new FCC. The Commission was created to regulate radio use "as the public convenience, interest, or necessity requires."

The Office of Science and Technology Policy and National Security Council is responsible for procedures for the use and coordination of the radio spectrum during a wartime emergency.
It establishes emergency restoration priority procedures for telecommunications services. Within an official disaster area, it may preempt frequency allocations and procedures temporarily to provide short-term telecommunication restoration. The NTIA, an agency of the US Department of Commerce, serves the president in an advisory role regarding telecommunications policy. Its director is appointed by the president. The Office of Spectrum Management (OSM) within the NTIA has regulatory authority over all Federal Government radio spectrum within the United States, the District of Columbia, and the US possessions. The NTIA was created in 1978 as a result of Executive Branch reorganization. This reorganization transferred and combined various functions of the White House’s Office of Telecommunications Policy (OTP) and the Commerce Department’s Office of Telecommunications (OT).

2.1 RADIO FREQUENCY MANAGEMENT

Nationally and internationally, frequency management begins with a decision regarding how to manage radio paths. One of three approaches is usually chosen: Individual Licensing. This is conventional link-by-link coordination. It is usually managed under a national administration although the technical tasks may be assigned to private entities or, in the
so-called case of “light licensing,” this responsibility may be assigned to the users. It is usually implemented in such a way that multiple users have access to the spectrum. It does limit the utilization of the frequencies to specified technologies. This is generally regarded as the most efficient method of spectrum usage. Block Assignment. A block of spectrum in a defined geographic area is licensed to an individual user (typically by auction). The user defines the usage within that block. However, the user is responsible for establishing appropriate guard bands or spectrum powers and/or masks to protect other users in other spectrum and/or geographic blocks. Since only one user controls the spectrum, this method does limit user access to the spectrum. This method is generally regarded as a compromise between spectrum usage and user flexibility. License Exempt. In this methodology, a block assignment is made but access is open to any eligible user whose equipment meets defined standards. Frequency assignments are ad hoc and no guarantee of interference protection is provided. This is the most flexible and cost-effective method of radio usage, but quality and availability of service are unpredictable. We limit our discussion to conventional individual licensing. We start with a segregation of compatible radio services into similar contiguous blocks of frequencies (“frequency bands”). Different services are assigned to different frequency bands (Withers, 1999). Rules are then adopted for the implementation of radio service. If the band is licensed, rules for licensing are established in such a way that the introduction of a new service or user has minimal negative impact on the service quality of existing services or users. Owing to the limited number of frequency allocations, most new services attempt to “share” bands of established services. Convincing the regulatory agencies and the incumbent services that successful sharing is possible is an interesting process. A database of existing users and their equipment’s significant characteristics is developed and maintained. New users use this database to design their systems to be compatible with the existing users. The licensing process causes the new user’s equipment to be entered into the database for future use. In the United States, Canada, and Australia, the licensing process is controlled by the national government. In many other countries, the spectrum is controlled by private organizations. Before a commercial satellite system is deployed, it must be coordinated with the ITU, as well as with the countries that could be affected by the deployment of such a satellite. Both the affected countries and the ITU-R maintain records of the satellite systems. In the United States, commercial fixed point to point microwave radio systems are licensed by the FCC of the US government. The FCC maintains its own database. Commercial firms and some private companies also maintain private databases that are used to prepare license applications and provide related services. The commercial databases are the ones usually used to prepare license applications and resolve frequency disputes. A frequency allocation (“frequency band”) for a particular type of radio is typically subdivided into equally spaced subdivisions (“channels”) for use by individual transmitters. The bandwidth of the channels is sized on the basis of the anticipated data transmission requirements. 
For most radio applications, the communication between two sites is duplex (simultaneous transmission in both directions along the radio path). Therefore, each radio path requires a transmit and a receive radio channel. The earliest frequency plan, developed for the 4-GHz Bell System TD-2 multiple channel microwave system, interleaved transmit and receive frequencies consecutively. Transmit and receive frequencies at a station were on opposite polarizations to take advantage of antenna cross-polarization discrimination. Having the transmit and receive frequencies so close together complicated equipment design. All subsequent microwave frequency channel plans arranged the radio frequency (RF) transmission channels into two groups (a high sub-band and a low sub-band) with the intent that one group would be used for transmission and the other for reception. The concept of using opposite polarizations on consecutive channels was retained. A small portion of the spectrum, called a guard band, separates the two sub-bands to reduce the cost of radio filtering. When the transmit and receive channels are grouped into two subbands, the frequency plan is called a two-frequency plan. When the channels are further subdivided into four different sub-bands, the plan is called a four-frequency plan. Four-frequency plans may be required to solve a bucking station issue (see following paragraphs). Since they increase the guard bands required, they are less efficient than the two-frequency plans using the same spectrum (the guard bands block usage of those frequencies by other users in the area).
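The pairing of go and return channels into low and high sub-bands can be illustrated with a minimal sketch. The band start, channel bandwidth, number of channels, and transmit-to-receive spacing used below are illustrative placeholders, not the actual Part 101.147 channelization, and the helper function is hypothetical:

    # Minimal sketch of a two-frequency (low/high sub-band) channel plan.
    # All numeric parameters below are illustrative only.

    def build_two_frequency_plan(band_start_mhz, channel_bw_mhz, num_channels, tr_spacing_mhz):
        """Return (low_subband, high_subband) lists of channel center frequencies in MHz.

        Channel n of the low sub-band pairs with channel n of the high sub-band;
        one end of a path transmits on the low channel and receives on the high
        channel, and the far end does the reverse."""
        low = [band_start_mhz + channel_bw_mhz * (n + 0.5) for n in range(num_channels)]
        high = [f + tr_spacing_mhz for f in low]
        return low, high

    if __name__ == "__main__":
        # Hypothetical example: 8 channels of 30 MHz with a 250 MHz transmit-to-receive spacing.
        low, high = build_two_frequency_plan(5925.0, 30.0, 8, 250.0)
        for n, (f_lo, f_hi) in enumerate(zip(low, high), start=1):
            pol = "V" if n % 2 else "H"   # adjacent channel numbers alternate polarization
            print(f"Channel {n}: low {f_lo:.2f} MHz / high {f_hi:.2f} MHz, {pol} polarization")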


When analog radios employing FM modulation were common, it was common to use two plans: a conventional two-frequency plan and another plan (called an offset or interstitial plan) offset by half a channel bandwidth. Offsetting the analog received signals by half a channel bandwidth significantly reduced the interference due to the interfering signal, as the analog signal had most of its energy concentrated at the carrier frequency. However, modern radios are digital. These radios spread the transmitted energy evenly through the radio channel. The interstitial frequency plans have little advantage in this situation and are no longer used.

Microwave frequency bands for the United States are defined by the FCC in the CFR Title 47 (Telecommunication), Chapter I, Part 101.147 (http://wireless.fcc.gov/index.htm?job=rules_and_regulations) (http://www.access.gpo.gov/nara/cfr/waisidx_10/47cfr101_10.html). In Canada, microwave frequency bands are defined by Industry Canada (http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/h_sf06130.html). Figure 2.1 graphically depicts the frequency channel allocation of the US lower 6-GHz frequency band. The most popular channels are 30 MHz wide, with a smaller number of smaller bandwidth channels. The smaller bandwidth channels are grouped in such a way as to minimize their impact on the higher bandwidth channels. It is expected that, to the extent possible, channels will be used in numbered order, with transmit and receive channels being of the same number (for complicated, dense systems, this is not always feasible). The odd-numbered channels typically operate on one polarization, and the even-numbered channels on the other. This is to take advantage of antenna cross-polarization discrimination to reduce receiver filtering requirements.

Grouping transmit ("go") and receive ("return") channels into two separate groups is the most frequency-efficient method of using the most channels in a geographic area. However, this does place some constraints on the use of the channels. As noted in Figure 2.2, the use of a two-frequency plan assumes that you will transmit using one group of channels and receive using the other group. At one site, the radio transmitters transmit in the high sub-band and receive in the low sub-band. This is termed a High site. The next site receives in the high sub-band and transmits in the low sub-band. This second site is called a Low site. For any given site, the intent is to transmit using the same sub-band.

For junction stations (as illustrated in Figure 2.3) with many radio paths converging at a site, frequencies may be exhausted. While this methodology optimizes the reuse of a frequency spectrum, it is challenging to implement in practice. In congested areas, frequency coordination is especially difficult where many different paths of varying lengths and capacities must share the same area. Since the sites cycle between high and low sub-bands down a line, closed loops or rings need to include an even number of sites. An odd number is difficult to accommodate, as noted in Figure 2.4. Linear networks, as depicted in Figure 2.5, also have constraints. Once a system is installed, adding or eliminating sites may be desirable for any of a number of reasons (e.g., capacity upgrades or path propagation performance improvement). This can cause significant frequency planning problems (it may be difficult to re-coordinate the system with existing users or may require frequency retuning of several sites).
Also, changing routes to cause sites to appear at locations with other radio systems in the same frequency band can also prove challenging (especially if the existing transmitters are transmitting in one sub-band and the proposed new transmitters need to transmit in the other sub-band). If two consecutive stations are both “high” or both “low,” one of the stations is considered a “bucking” or “bumping” station. It needs to transmit high in one direction and receive low in the other. Transmitter to receiver isolation for operation at the same frequency needs to be at least 120 dB. Achieving this in practice is very difficult (because of antenna spillover and unanticipated foreground reflections, as well as inadequate feeder isolation within the station). The only practical solution for this situation is to subdivide the band into a four-frequency plan (use frequencies in each of the two normal sub-bands to both transmit and receive), use another frequency band, or connect the sites by another transmission medium (such as fiber-optic cable). Maintaining the high–low pattern for the many different systems within a geographic area can be challenging. Typically, all sites within approximately one-half mile of each other need to be using the same high or low transmit frequencies to avoid creating a “buck.” Having to use a four-frequency plan to solve a “bucking station” can block channels for all users in the area. In general, frequency planning requires careful consideration of potential interference from and to other radio systems within a coordination area (Fig. 2.6). One of the significant challenges of frequency planners is in maintaining a high/low frequency pattern for new systems once such a plan has been established in an area.
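Maintaining the high/low pattern is equivalent to two-coloring the graph of radio paths: every hop must join a High site to a Low site, and a ring with an odd number of sites cannot be colored without a conflict. The following minimal sketch (the site names and topology are hypothetical) assigns sub-bands with a breadth-first search and reports any hop that is forced to "buck":

    # Minimal sketch: assign "high"/"low" transmit sub-bands to sites by
    # two-coloring the path graph. A ring with an odd number of sites cannot
    # be two-colored, which is exactly the bucking-station situation above.

    from collections import deque

    def assign_high_low(paths):
        """paths: iterable of (site_a, site_b) radio hops.
        Returns (sub-band assignment per site, sorted list of hops that join
        two sites forced onto the same sub-band, i.e. bucking hops)."""
        adjacency = {}
        for a, b in paths:
            adjacency.setdefault(a, []).append(b)
            adjacency.setdefault(b, []).append(a)

        assignment = {}
        bucking = set()
        for start in adjacency:
            if start in assignment:
                continue
            assignment[start] = "high"
            queue = deque([start])
            while queue:
                site = queue.popleft()
                opposite = "low" if assignment[site] == "high" else "high"
                for neighbor in adjacency[site]:
                    if neighbor not in assignment:
                        assignment[neighbor] = opposite
                        queue.append(neighbor)
                    elif assignment[neighbor] == assignment[site]:
                        bucking.add(tuple(sorted((site, neighbor))))
        return assignment, sorted(bucking)

    if __name__ == "__main__":
        ring_of_five = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A")]
        sites, bucks = assign_high_low(ring_of_five)
        print(sites)   # alternating high/low assignments
        print(bucks)   # one hop of the odd ring is unavoidably high-high or low-low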

Figure 2.1 The US lower 6-GHz band channel allocations.

Figure 2.2 Typical two-frequency plan utilization.

Figure 2.3 An example junction station.

Figure 2.4 Closed loop networks. (Options: change bands, change media, or split the band into a four-frequency plan.)

Figure 2.5 Linear networks.

Figure 2.6 Generalized interference situation.

Figure 2.7 The US FCC band coordination area.

Figure 2.8 Typical interference cases to be investigated. (Not all potential interference paths are shown; foreground reflections and overshoot further complicate the situation.)

In North America, coordination is accomplished on the basis of carrier-to-interference (C/I) ratio. This differs from the international C/I methodology described in Chapter 14. The coordination area defines the area within which the effect of proposed transmitters on existing receivers must be evaluated. In the United States, that area is defined by the National Spectrum Managers Association guidelines (Fig. 2.7) (Working Group 3, 1992). A similar but more loosely defined international coordination area is defined by ITU-R Recs. F.1095 and SF.1006 (ITU-R Recommendations). For United States commercial digital systems, coordination is accomplished on the basis of threshold-to-interference (T/I) objectives (degradation to the receiver threshold) (Committee TR14-11, 1994; Working
Group 3, 1986; Working Group 3, 1995; Working Group 3, 1987; Working Group 18, 1992). In the United States, commercial analog systems (and most international administrations) use C/I objectives (Working Group 5, 1992) (See Chapter 14). As illustrated in Figure 2.8, the number of frequency interference cases usually requires computer analysis for practical optimization. Even with a computer, this can be a complicated, time-consuming, and iterative process. Interference is analyzed as both long-term interference, which represents interference that is present most of the time, and short-term interference, which represents high power levels that may occur for short periods. Long-term interference may affect radio performance by degrading the fade margin of the receiver. Short-term interference could cause errors in a receiver even if the received signal is unfaded. Internationally, long-term interference is analyzed in terms of the interference power level that is exceeded no more than 20% of the time. This level is called the 80% interference level. Domestically, long-term interference is analyzed in terms of the median value of the interference power. Short-term interference is a term used in analyses of the effects on a receiver of interference power levels that are exceeded less than 1% of the time. Interference criteria are usually specified as interference power levels that can only be exceeded no more than specified percentages of the time (Rummler, W. D., private communication with George Kizer in 2009). In the United States, in bands where frequency coordination is carried out between fixed service (FS) systems, frequency coordination is usually based only on the long-term interference criteria. Short-term interference criteria are only invoked to clear exceptional cases as needed. For coordination purposes in the United States, short-term interference is defined (Working Group 9, 1985) as a level 10 dB worse than the long-term (median) interference power level. In bands shared with FSS earth stations, both long-term and short-term interference criteria are used in frequency coordination because of the high power used by transmitting earth stations and the extreme sensitivity of earth station receivers (Rummler, W. D., private communication, 2009). Internationally, frequency coordination within the FS is implemented under the rules specified by each administration. The most important international application of short-term interference criteria is in studies of the use or potential use of spectrum shared between services. The short-term interference criteria developed for this purpose are specific to the frequency band and the applications in each of the two services. Because of the widely differing characteristics of some of the other proposed services, the short-term interference criteria vary widely (Rummler, W. D., private communication, 2009). Several principles are employed in developing interference criteria to protect the FS from interference from other services. Recommendation ITU-R F.1094 specifies that shared services are allowed to take 10%


of the (international) performance and availability budgets. Where there is more than one other service in a frequency band, it may be necessary to further subdivide the allowance for the service under consideration. The most vulnerable FS application in the band must be identified, and the interference objective for this application be allocated to long-term and short-term interference. Then, appropriate interference criteria must be developed. This process may require iterations in redefining the characteristics of the other service and the FS criteria (Rummler, W. D., private communication, 2009). In some cases it may not be necessary to develop a long-term interference criterion because of the intermittent presence of the interference sources (see, for example, Report ITU-R M.2119). In other cases, it may be necessary to develop multiple short-term interference criteria to ensure the protection of the FS. Guidance for the development of interference criteria can be found in ITU-R Recommendations F.758, F.1108, and F.1094. The results of some of the sharing studies carried out over recent years may be found in ITU-R Recommendations F.1494, F1495, F1606, F1669, F1706, SF1006, SF1482, SF1483, and SF1650. As might be expected, the specific results vary widely depending on the sharing scenario (Rummler, W. D., private communication, 2009) (ITU-R Recommendations). Several frequency bands are shared between the fixed point to point terrestrial microwave services and the FSSs. The coordination requirements and procedures are different. Terrestrial satellite transmitters are usually of much higher power than terrestrial point to point transmitters. For frequency bands shared with synchronous (stationary) satellite uplinks (e.g., lower 6 GHz), this imposes a couple of locations on the horizon, which must be excluded from transmission to protect satellite receivers. Terrestrial stations must usually stay away from satellite earth station transmitter sites. Satellite earth station transmitters are usually limited to urban areas. This can be an issue in major cities, but is generally not an issue elsewhere. For terrestrial services, the primary issue with sharing frequency bands with satellite service has been the satellite receivers. Satellite earth station antennas do not use shrouds and their sites usually have no shielding (earth hills or RF fences). Without additional shielding mechanisms, satellite earth station receivers and antennas are much more sensitive to interference than fixed terrestrial microwave services (Curtis, 1962). Terrestrial services coordinate specific frequencies actually to be used. FSSs coordinate all frequencies at all azimuths regardless of anticipated need. Satellite earth station receivers can enter an area if they avoid the existing terrestrial users’ frequencies. However, owing to their receiver sensitivity, new terrestrial users are often excluded. The impact of this on fixed microwave deployment can be seen by comparing Figure 2.10 with Figure 2.11 and Figure 2.12. The FSS uses geostationary and nongeostationary orbit satellites. The earth synchronous satellites are located 22,500 miles above the Earth. The synchronous satellite locations are nearing saturation over many areas of the Earth. New satellite services are often required to use lower orbits. Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) systems with satellites approximately 6000 and 480–1000 miles above the Earth, respectively, are being contemplated. 
Since these systems are always moving relative to the Earth, their potential interference into terrestrial services (and vice versa) is always changing. Traditional FSs are coordinated on the basis of median estimated ("long-term") interference limits (as both satellite and terrestrial equipment are stationary). Nonstationary satellites are always moving relative to the Earth. Nonstationary satellite services sharing the spectrum with the FSs impose significant short-term interference limits (higher than long-term limits and often representing a short outage of terrestrial service) for frequency coordination.
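As a rough illustration of how long-term interference erodes fade margin, the sketch below treats a noise-like interferer as an addition to the receiver thermal noise floor and reports the resulting threshold degradation. The power levels are illustrative, and the 1 dB objective mentioned in the comments is a commonly used figure rather than a value taken from this chapter:

    # Minimal sketch of how a long-term (noise-like) interferer degrades a
    # receiver's effective threshold and therefore its fade margin. The
    # interference is treated as additive noise; levels are illustrative only.

    import math

    def threshold_degradation_db(noise_floor_dbm, interference_dbm):
        """Rise of the effective noise floor, in dB, when an interferer of the
        given power is added to the receiver's thermal noise floor."""
        noise_mw = 10 ** (noise_floor_dbm / 10)
        interference_mw = 10 ** (interference_dbm / 10)
        return 10 * math.log10((noise_mw + interference_mw) / noise_mw)

    if __name__ == "__main__":
        noise_floor = -100.0          # dBm, hypothetical receiver noise floor
        for i_level in (-110.0, -106.0, -100.0):
            d = threshold_degradation_db(noise_floor, i_level)
            print(f"I = {i_level:6.1f} dBm -> threshold degrades {d:4.2f} dB")
        # An interferer about 6 dB below the noise floor costs roughly 1 dB of
        # fade margin, a commonly used coordination objective.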

2.2 TESTING FOR INTERFERENCE

Figure 2.9 The fade margin test.

Fixed point to point microwave systems are designed for performance objectives that place minimum fade margin (difference between typical receiver received signal level and receiver out of service received signal level) requirements on path design. Fade margin is directly limited by radio system characteristics and external radio system interference. Frequency coordination is intended to protect the designed path fade margin. In areas with dense deployment of radio systems, unexpected interference can happen. If it occurs, path performance can be significantly impaired. A common test used during the commissioning phase of a microwave radio link is a fade margin test (Fig. 2.9). After the radio link is turned up and optimized, an attenuator is placed between the transmitter and the antenna. The attenuator is increased until the far end receiver threshold is reached. The amount of
attenuation is a measure of path fade margin. Of course, this must be done when path fading is at a minimum. If the measured fade margin is significantly different from the anticipated value, the test is repeated at the receive end. If the measured fade margin at the receiver is the expected value, path interference is probable. If the measured fade margin is similar to that measured using the attenuator at the transmitter, a defective receiver is probable. Placing an attenuator between a transmitter or receiver and the waveguide can be challenging. For high frequency split package radios, this may be impossible (although this test is important for initial testing of urban radios where unanticipated interference is common). Some radios allow Automatic Transmit Power Control (ATPC) to reduce transmitter power by 30 or 40 dB below the nominal level. This feature, if available, can be used to fade the test to validate the flat fade margin. The above interference test is quite powerful for constant interference. However, it is an “out of service” test that is most suitable for use before commissioning the path. Interference can occur intermittently or after the path is installed. Identifying this condition is much more difficult after the path is in service. The impact of interference is to reduce path fade margin. If a path experiences significantly more fading outages in one direction than the other, one should suspect interference (or the receiver front end could have become defective). If interference is probable, one must find the source. The direct approach is to take a standard gain horn antenna, low noise amplifier and spectrum analyzer around the area and try to find the interference. Discussions with a coordinating agency can be helpful to pick probable transmitters. Errors in transmitter polarization and transmitter location (errors in tower location, errors in antenna type and placement, reversal of hop transmitters) do happen. Finding and eliminating this interference can be challenging.
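The decision logic described above (attenuator at the transmit end first, then at the receive end) can be summarized in a short sketch. The 3 dB tolerance used to decide whether a measured margin is "significantly different" from the design value is an assumed, illustrative figure:

    # Minimal sketch of the fade margin test interpretation described above.
    # The 3 dB "significant difference" tolerance is an illustrative choice.

    def diagnose_fade_margin(expected_db, measured_at_tx_db, measured_at_rx_db=None, tol_db=3.0):
        """Compare measured fade margins against the design value.

        measured_at_tx_db: margin measured with the attenuator at the transmitter.
        measured_at_rx_db: margin measured with the attenuator at the receiver
                           (only needed when the first test looks bad)."""
        if abs(measured_at_tx_db - expected_db) <= tol_db:
            return "Fade margin as designed; no interference indicated."
        if measured_at_rx_db is None:
            return "Margin low; repeat the test with the attenuator at the receive end."
        if abs(measured_at_rx_db - expected_db) <= tol_db:
            return "Receiver alone meets the design margin; path interference is probable."
        if abs(measured_at_rx_db - measured_at_tx_db) <= tol_db:
            return "Both measurements are low; a defective receiver is probable."
        return "Results inconsistent; re-check the test setup."

    if __name__ == "__main__":
        print(diagnose_fade_margin(40.0, 33.0))          # prompts the second test
        print(diagnose_fade_margin(40.0, 33.0, 39.5))    # interference suspected
        print(diagnose_fade_margin(40.0, 33.0, 33.5))    # receiver suspected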

2.3 RADIO PATHS BY FCC FREQUENCY BAND IN THE UNITED STATES

Figures 2.10–2.17 graph the paths in the FCC license database by frequency band. Notice that the 4 (3.7–4.2)-GHz band is being significantly underutilized from a fixed point to point microwave radio perspective. Every year a couple hundred satellite earth stations are coordinated. No new fixed point to point paths are coordinated although in the comparable lower 6 (5.925–6.425)-GHz band, thousands of fixed MW paths are coordinated each year. The reason for this disparity is the FCC’s curious policy of licensing earth stations for all possible frequencies and all possible path angles regardless of need. In addition, earth station antenna and location standards are not strict from a MW compatibility perspective. For these reasons, the 4-GHz band is essentially dead for new MW paths except for very isolated locations. Clearly, frequency coordination has been successful in the 6- and 11-GHz bands. Those bands are highly utilized. Notice that 10.5 and 11 GHz are long-distance bands in the western United States where the rain rates are moderate (Fig. 2.13 and Fig. 2.14). All the other higher frequency bands are only applicable for short distance paths and typically are used in large metropolitan areas (Fig. 2.15, Fig. 2.16, and Fig. 2.17).


Figure 2.10 The 4 (3.7–4.2)-GHz band (shared between fixed point to point microwave and satellite earth stations).

Figure 2.11 Lower 6 (5.925–6.425)-GHz band.

Figure 2.12 Upper 6 (6.525–6.875)-GHz band.

Figure 2.13 The 10.5 (10.55–10.68)-GHz band.

Figure 2.14 The 11 (10.7–11.7)-GHz band.

Figure 2.15 The 18 (17.7–19.7)-GHz band.

Figure 2.16 The 23 (21.2–23.6)-GHz band.

Figure 2.17 The 38 (38.6–40.0)-GHz band.

2.4 INFLUENCES IN FREQUENCY ALLOCATION AND UTILIZATION POLICY WITHIN THE WESTERN HEMISPHERE

2.4.1 United States of America (USA)

2.4.1.1 Governmental FCC This governmental agency has regulatory authority over all US non-federal-government wireline and radio communications. Frequency coordination within FCC frequency bands is governed by the FCC Rules and Regulations (Title 47 of the CFR, Parts 0 through 101, http://wireless.fcc.gov/index.htm?job=rules_and_regulations). The FCC is composed of five commissioners appointed by the president and confirmed by the Senate. One of the commissioners is designated the chairman. The commission sets broad frequency policies. Its
members are strongly influenced by high-ranking industry officials and the US Congress. Serious industry policy issues may be discussed personally with the commissioners. However, most commercial activity is coordinated with the various FCC Bureaus. The Office of Engineering and Technology (OET) advises the commission on technical matters. However, the individual bureaus have considerable influence on technical matters also. The OET Laboratory Division Equipment Authorization Branch sets standards for radio transmitters. The OET Policy and Plans Division includes the Technical Rules Branch, the Spectrum Policy Branch, and the Spectrum Coordination Branch. The Common Carrier Bureau, typically, is involved in wire-line regulatory issues and does not get involved directly in frequency issues. The Wireless Telecommunications Bureau is the most significant bureau for most FCC radio issues. However, it seldom gets involved in satellite-related issues. The International Bureau is the group that is involved in WRC, ITU-R Study Groups, various industry


ad hoc committees, and satellite frequency allocation proceedings. This agency is quite influential at the working level in the setting of FCC international and domestic satellite spectrum policy. The FCC attempts to operate through consensus. The recommendations of industry associations are important factors in FCC decision making. A positive working relationship with the various governmental agencies, industry groups, unofficial coalitions and groups of common interest is crucial to success in influencing fixed terrestrial and satellite policy.

NTIA This organization has regulatory authority over all US federal government radio communications. The most important parts of this organization are the Institute for Telecommunication Sciences (ITS) and the OSM. ITS is the federal laboratory that addresses the technical telecommunications issues. OSM develops and implements policies and procedures for the use of the spectrum controlled by the federal government in the United States. The most important group in OSM is the Frequency Assignment & IRAC


(Interdepartmental Radio Advisory Committee) Administrative Support Division. Its IRAC is comprised of members from the 20 most active federal users and the FCC. IRAC is responsible for developing and executing policies and procedures pertaining to frequency management of the federal government spectrum. IRAC is composed of the Frequency Assignment Subcommittee (FAS), the International Notification group, the Radio Conference Subcommittee, Spectrum Planning Subcommittee, Technical Subcommittee, and approximately 20 ad hoc subcommittees. The FAS is the group that coordinates with the FCC to manage FCC/NTIA shared spectrum. Frequency coordination within NTIA frequency bands is governed by the Manual of Regulations and Procedures for Federal Radio Frequency Management (NTIA “Red Book” manual, http://www.ntia.doc.gov/osmhome/redbook/redbook.html). It has little in common with FCC rules and regulations. This complicates coordination with commercial FCC-governed services sharing NTIA frequency bands.


ITU-R National Committee This group, sponsored by the US Department of State, is the official interface of the United States with the ITU. This organization is made up of industry and government members. The major suborganizations are the following: Radiocommunication Advisory Group (RAG) The Radiocommunication Advisory Group (RAG) is the overall steering committee of the ITU. It sets the overall direction regarding strategic planning, work programs, and ongoing activities. The members of the United States from this group provide high level guidance. International Telecommunications Advisory Committee (ITAC) This organization is similar to the RAG. It provides strategic planning recommendations. It is a feeder organization to the ITU Working Groups (WGs). The telecommunications industry participates in WRC preparation through participation in the FCC advisory committees. The WRC Advisory Committee (WAC) takes the consensus views developed by the industry participants in the Informal Working Groups (IWGs). This process is managed by the FCC. Details of structure, activities, and all documents are available on the FCC Web site (Rummler, W. D., private communication, 2009). WRC Preparation Committee This group takes input from the NTIA and the FCC and prepares the US WRC position documents. This is where the pre-WRC Conference Preparatory Meeting (CPM) United States position is set. This organization significantly influences US international frequency policy. The US WRC Delegation This group is the official US delegation at the WRC. The United States’ positions change through the entire WRC process. Contact with this delegation is crucial to monitoring the process of US international frequency policy setting and allocation. 2.4.1.2

Industrial Organizations Many professional organizations lobby Congress and the FCC for policy and rules favorable to their interests. Congress and the FCC attempt to provide rules favorable to the most significant users. The following Washington, DC organizations are among those who have considerable influence on FCC policy and rules.

Fixed Wireless Communications Coalition (FWCC) The Fixed Wireless Communications Coalition (FWCC) is a coalition of companies, associations, and individuals interested in terrestrial fixed microwave communications. It is the single most significant organization representing the interests of both the fixed point to point microwave radio users and the manufacturers. National Spectrum Managers Association (NSMA) This organization represents the microwave (both fixed terrestrial and satellite) coordination organizations within the United States. Its primary focus is to define the implementation methodology to support the reduction of interference among all users. Since it represents all radio interests, it is usually policy neutral. It does not establish rules or policy. It establishes procedures to implement them.

Telecommunications Industry Association (TIA) This organization represents manufacturers of telecommunications equipment in the United States. Satellite and fixed terrestrial interests are represented by different divisions. This organization can be very influential regarding FCC plans and policy for telecommunications users and manufacturers. Utilities Telecommunications Council (UTC) This organization represents US utilities. Association of American Railroads (AAR) This organization represents the railroads. American Petroleum Institute (API) The telecommunications subcommittee of this organization represents the oil companies in telecommunications matters.


Association of Public-Safety Communications Officials (APCO)—International This organization represents the domestic and international police, fire, and local government organizations in telecommunications matters. Cellular Telecommunications Industry Association (CTIA) This organization represents the cellular and some licensed PCS (personal communications services) users. National Association of Broadcasters (NAB) This organization is the most influential private telecommunication group in Washington. It wields enormous influence in all FCC and congressional frequency policy matters. Various Manufacturers Various manufacturers lobby Congress and the FCC for policy and rules favorable to their interests. They typically do this individually and as part of industry groups.

2.4.1.3 Intergovernmental ITU This specialized agency of the United Nations has several support groups within the United States. The most significant are the ITU-R USA Study Groups. The ITU-R USA Study Groups, aligned with the ITU-R groups, are sponsored by the FCC. They are quite influential in developing ITU recommendations.

Study Group 1 (SG 1)   Spectrum management
Study Group 3 (SG 3)   Radiowave propagation
Study Group 4 (SG 4)   Satellite services
Study Group 5 (SG 5)   Terrestrial services
Study Group 6 (SG 6)   Broadcasting service
Study Group 7 (SG 7)   Science services

The cost of participation in these groups at the national and international level is significant. New radio services take an active interest. They develop methodologies of “spectrum sharing,” which they then feed through the ITU study group and WRC Preparation Committee process. If these positions are successfully adopted at the WRC, the new services then lobby the FCC to adopt the new rules within the rules for domestic FSs in the interest of “world telecommunications harmonization.” Mature FSs fail to participate in this process at their own peril.

North American Free Trade Agreement (NAFTA) The Telecommunications Standards Subcommittee (TSSC), established pursuant to the North American Free Trade Agreement (NAFTA) and comprised of governmental representatives from the United States, Mexico, and Canada, is charged with facilitating the implementation of NAFTA's telecommunications-related provisions. The Consultative Committee on Telecommunications (CCT) is comprised of private sector representatives and assists the TSSC. NAFTA is aimed at facilitating telecommunications equipment deployment. Thus, much of the TSSC's work deals with standards and conformity assessment procedures that are often used as a means to limit market access. NAFTA limits the types of standards that can be imposed on telecommunications terminal equipment to those that can be justified under certain criteria. One such criterion is to prevent electromagnetic interference and ensure compatibility with other uses of the electromagnetic spectrum. Thus, the TSSC and the CCT may be indirectly involved in frequency policy. The country most affected by these NAFTA criteria is Mexico, as Canadian and US regulations already meet these criteria.

Inter-American Telecommunications Commission (CITEL) The Inter-American Telecommunications Commission (CITEL) is the advising entity to the Organization of American States (OAS) in telecommunications matters, as a specialized commission for the OAS Inter-American Economic and Social Council. CITEL is formed by an assembly, a permanent executive committee, and three permanent consultative committees (PCCs). Most concrete work of CITEL is carried out in the PCCs. PCC 1 deals with public telecommunication. PCC 2 deals with broadcasting issues. PCC 3 deals with radiocommunications issues. The main goals
of PCC 3 are the harmonization of services, the reduction of harmful interference, and the promotion of ITU regulations and standards. Specific issues are studied in detail by WGs chaired by member nations. PCCs meet once or twice a year in plenary sessions. Each CITEL member (i.e., government of an OAS member country) has one vote during PCC plenary sessions. Over 60 nongovernmental organizations, such as private sector companies and associations, pay annual membership dues and have the status of associate members of CITEL, with a voice but no vote. Nongovernmental groups from the United States participate extensively in CITEL PCC meetings. CITEL PCCs report their findings to the member state telecommunications regulators, and thereby influence the ITU standardization process. CITEL is especially active in WRC matters. It provides the forum for developing WRC inputs that represent the consensus of ITU-R Region 2.

2.4.2 Canada

2.4.2.1 Governmental Industry Canada Industry Canada is similar to a combination of the US FCC and NTIA. This governmental organization (http://www.ic.gc.ca/eic/site/ic1.nsf/eng/h_00006.html) is responsible for managing both private and government spectrum in Canada. Its spectrum policy branch develops frequency policies through the use of gazette notices (the Canadian equivalent of an FCC notice of proposed rulemaking). Its spectrum engineering branch implements these policies and relies heavily on the advice and recommendations of the Radio Advisory Board of Canada (RABC). Preparation for WRCs is the ongoing responsibility of the Canadian preparatory committee. The Industry Canada ad hoc group that handles ITU-R matters is the Canadian National Organization (CNO/ITU-R).

2.4.2.2 Industrial Radio Advisory Board of Canada (RABC) The RABC is an industry advisory group comprised mainly of associations of users and manufacturers of radio equipment, with Industry Canada sitting as an observer. The terrestrial microwave manufacturers sit at the RABC radio relay committee under the umbrella of Electro-Federation Canada (EFC). Industry Canada pays considerable attention to this group. The RABC is the single most powerful body for influencing frequency policy in Canada and implementing it. It operates through consensus and meets three to four times a year.

Frequency Coordination System Association (FCSA) This association is similar to the National Spectrum Managers Association in the United States. It is the umbrella organization for the various RF coordination groups. This group prepares recommendations for coordination methods and procedures but is less politically active than its US equivalent, the National Spectrum Managers Association.

2.5 FCC FIXED RADIO SERVICES

Chapter I of the CFR Title 47—Telecommunications establishes the FCC and the following Fixed Radio Services:

Experimental Radio                                              Part 5
Unlicensed Radio                                                Part 15
Domestic Public Fixed Radio                                     Part 21
International Fixed Public Radiocommunication (Public Fixed)    Part 23
Satellite Communications                                        Part 25
TV Studio-Transmitter Links (STLs)                              Part 74 Subpart F
Fixed Point-to-Point Microwave Services                         Part 101

The frequency bands used by these services are defined in Part 2. However, deployment within these allocations is prohibited until rules are included within the CFR. Regulations are always changing. Frequency bands are reallocated (services are moved, eliminated, or created) and sharing among services
may be allowed. Use of the licensed bands is based on the class of service. A user licensed with a primary status is accorded protection from harmful interference from any other user (whether primary, secondary, or unlicensed). A user licensed with secondary status is allowed to use the band but has no legal recourse if interfered with. Harmful interference is defined as “Any emission, radiation or induction that endangers the functioning of a radio navigation service or of other safety services or seriously degrades, obstructs or repeatedly interrupts a radiocommunications service.” [15.3(m)] For the above and following sentences the citation within brackets [ ] indicates the paragraph (Part) and subparagraph within the CFR 47, Chapter 1, containing the reference. Part 101 defines three FSs: Private Operational Fixed Point-to-Point Microwave Service—Part H. Common Carrier Fixed Point-to-Point Microwave Service—Part I. Local Multipoint Distribution Service (LMDS)—Part L. The following is a summary of the rules that apply to these services: Microwave radio is defined as radio operation above 890 MHz [101.3]. No foreign government can hold a radio license [101.7]. No foreign corporation may operate a common carrier radio service [101.7]. Common carrier services may be concurrently licensed for noncommon carrier communications purposes [101.133(a)]. Private carrier and common carrier transmission facilities may be interconnected [101.135]. Private carriers may offer for-profit private carrier service [101.135]. More than one private carrier may use the same transmission facilities [101.133]. License applications are typically one of the following [1.929]: Application for initial authorization Application for renewal of authorization (typically once every 10 years) Application to change ownership or control (including partitioning and disaggregation) Application requesting authorization for a facility that would have a significant environmental effect Application for an amendment that requires frequency coordination, including adding new frequency or frequencies Application for special temporary authority [1.931], or temporary or conditional authorization [101.31]. Emergency operations are allowed in some cases [101.205]. Licenses normally authorize operation between or among individual stations. Operation at 38.6–40.0 GHz is based on a Partitioned Service Area (PSA) [101.56] and [101.64]. License applications for new authorization must contain the following [101.21(e)]: Applicant’s name and address Transmitting and receiving station name Transmitting and receiving station coordinates (within 1 s) Frequencies and polarizations to be added, deleted, or changed Transmitting equipment, its stability, effective isotropic radiated power (EIRP), emission designator, and type of modulation Transmitting antenna(s), model, gain, and radiation pattern (if required) Transmitting and receiving antenna center line height(s) above ground level and ground elevation above mean sea level [within 1 m (3.3 ft)] Path azimuth and distance. Licensee must file a modification application if major changes are made [1.947(a)].


Major changes are defined as any of the following [1.929(d)(1)]: Any change of transmitter antenna location by more than 5 s in latitude or longitude Any increase in frequency tolerance Any increase in bandwidth Any change in emission type Any increase in EIRP of more than 3 dB Any increase in transmit antenna height of more than 3 m (9.8 ft) Any increase in transmit antenna beamwidth Any change in transmit antenna polarization Any change in transmit antenna azimuth greater than 1◦ Any change since the last major modification that may produce a cumulative effect exceeding any of the above criteria. License applications for any major change to an existing authorization must contain the following [101.21 (e)]: Applicant’s name and address Transmitting and receiving station name Transmitting and receiving station coordinates (within 1 s) Frequencies and polarizations to be added, deleted, or changed Transmitting equipment, its stability, EIRP, emission designator, and type of modulation Transmitting antenna(s), model, gain and radiation pattern (if required) Transmitting and receiving antenna center line height(s) above ground level and ground elevation above mean sea level [within 1 m (3.3 ft)] Path azimuth and distance. To operate channels with a bandwidth of at least 10 MHz and with channel frequency between 3.7 and 11.7 GHz, transmitters must meet the minimum payload capacity requirements [101.141(a)(3)]. This capacity must be loaded (utilized) within 30 months of licensing [101.141(a)(3)]. Attachment of appropriate multiplex equipment meets minimum loading requirements [101.141(a)]. Minimum transmit and receive antenna standards are imposed (these standards do not apply to diversity antennas) [101.115]. Antenna structures (towers, buildings, etc.) higher than 200 ft must be registered with the FCC [17.7] (Fig. 2.18). Antenna structures within the specified glide slope of enumerated airports and heliports must be registered with the FCC [17.4] (Fig. 2.19). See the subsequent section for a detailed discussion regarding path clearance near airports and heliports. Exceptions are provided for minor additions to existing structures [17.7 (a) and 17.14 (b)]. FAA must be notified of proposed antenna structure construction [17.7]. Rules are imposed for painting and lighting these structures [17.21]. Transmitter frequency tolerance [74.661][101.107] and power (EIRP) limitations [74.636][101.113] apply. For transmitters using ATPC, this power limitation applies to the maximum transmit power, not to maximum coordinated power [101.143(b)]. Paths shorter than the following [74.644(a)][101.143(a)] require transmit power reduction: 1.850–7.125 GHz: 17 km (10.6 miles) 10.550–13.250 GHz: 5 km (3.1 miles) The transmitter power reduction formula is

EIRP = Max EIRP − 20 log10 (Limit/Actual)   [74.644(b)]
EIRP = Max EIRP − 40 log10 (Limit/Actual)   [101.143(b)]

where Max EIRP is the usually allowable transmitter power (EIRP) limit, Limit is the above minimum distance limit, and Actual is the actual path length.

Figure 2.18 Locations of FCC-registered antenna structures.

Figure 2.19 Locations of FCC-registered airports and heliports.

Frequency diversity requires at least 1 : 3 channels within 3 years [101.103(c)]. Note that there are at least three exceptions to this rule. First, collapsed rings operation is allowed (Wireless Telecommunications Bureau, 2000). Second, if a frequency diversity radio system provides for the different traffic to be placed on different frequencies (the so-called protect channel access with high priority traffic on one frequency channel and low priority traffic on the other frequency channel) and the higher priority channel preempts operation of the lower priority channel as needed, this type of
operation would be allowed (Knerr, 1998). Third, if the multiple radio channels are connected to an IP router, different traffic will be applied to the different radio channels. If a channel fails, the router will not transmit low priority traffic. Waivers of technical rules may be requested [1.925][101.23]. Common carriers must provide special showing for renewal of systems using frequency diversity [101.705]. Coordination applies for new applications, major amendments, or major modifications to existing licenses [101.103]. License applications must contain evidence of prior coordination of proposed frequency use with existing licensees, permittees, and applicants in the applicable coordination area [101.21 (f)][101.103(d)(1)]. Coordination must use procedures in [25.251][101.103 (d)]. Coordination must include geostationary satellite users [101.145]. Coordination within 35 miles of Canada or Mexico has special requirements and involves the US government [101.31(b)(v)]. “Quiet zones” must be respected [0.121][1.924]. License with prior coordination notice must contain the following [101.103(d)(2)(ii)]: Applicant’s name and address Transmitting and receiving station name Transmitting and receiving station coordinates (within 1 s) Frequencies and polarizations to be added, deleted, or changed Transmitting equipment type, its stability, actual output power, emission designator, and type of modulation Transmitting antenna(s), model, gain, and radiation pattern (if required) Transmitting and receiving antenna center line height(s) above ground level and ground elevation above mean sea level [within 1 m (3.3 ft)] Path azimuth and distance Estimated transmitter and receiver transmission line loss in dB For systems employing ATPC, maximum transmit power, maximum coordinated transmit power, and nominal transmit power. Coordination is a two-part process: Notification Response. The maximum coordination period is 30 days after receipt by the entity being notified. The notifying party must secure positive responses from all notified entities. No response within 30 days is assumed a positive response (most coordinators allow 35 days to provide time for the recipient to receive the coordination notice). Expedited prior coordination may be requested by the notifying party. New applicants are required to technically resolve any potential interference. Both parties are encouraged resolve interference disputes. The Commission can be contacted as a last resort [101.106(e)]. New applicants must make reasonable effort to avoid blocking existing coordinated systems. Any frequency reserved by the current licensee for future use must be released for use by pending applicant on showing that the use of any other frequency cannot be coordinated. Prior coordination is canceled if license application is not filed in 6 months or within 10 days of the end of the 6-month period. When co-pending applicants file, the earliest file date has priority for license. The following is a summary of licensed station operator requirements: All point-to-point services may begin station construction before station authorization at applicant’s risk [101.5].


On filing a properly completed application, successful completion of prior coordination, and tower clearance by the FAA, conditional authorization to operate is granted [101.31(b)] with the following exceptions: Operation of paths within 35 miles (56.3 km) of the Canadian or Mexican border must not begin until the license is granted. Operation of paths in bands shared with the federal government (NTIA) must not begin until the license is granted. Operation should not be within a “quiet zone” or other similarly designated area. Although not stated, if a station license application is modified after submission, operation may not begin until the final license is granted. Operation must begin within 18 months from the granting of the license (except LMDS & 38.6–40.0 GHz service) [1.946] and [101.63 (a)]. Licensee must file a notification of compliance within 15 days of expiration of the 18-month construction period [1.946(d)]. License for station authorization is issued for 10 years [101.67]. Station operator must retain the license application certification form [101.31(b)(viii)(3)]. Station identification (from over the air) is not required [101.212]. Tower lights must be inspected (ideally automatically) at least once every 24 h [17.47]. Light monitoring equipment must be inspected every 3 months [17.47]. Record of light inspections must be maintained [17.49]. Record must be maintained of any known malfunction of the tower lighting system [17.49]. Repairs or modifications must be recorded. Date and time of FAA notification of malfunction must be recorded. Towers (“antenna structures”) must be cleaned or painted as often as necessary to maintain good visibility [17.50]. Transmitters must be installed so as to limit operation to those authorized by the licensee [101.131]. Station operator must post station authorization and transmitter identification [101.215]. Name, address, and telephone number of custodian must be posted at each station [101.215]. Station operator must maintain station records of the following actions and the name and address of the individual performing them [101.217]: Results and dates of transmitter measurements Pertinent details of all transmitter adjustments (Note that monitoring transmitter frequency and power is no longer required. However, it is still a good idea that they be monitored.). Records must be kept in an orderly manner and readily available [101.217]. All records must be retained for at least 1 year [101.217]. Licensee must make the radio station available for inspection by the Commission [101.201]. Operation of an intentional, unintentional, or incidental radiator is subject to the conditions that no harmful interference is caused . . . [15.5(b)]. The operator of an RF device shall be required to cease operating the device on notification by a Commission representative that the device is causing harmful interference. Operation shall not resume until the condition causing the harmful interference has been corrected [15.5(c)]. Harmful interference is defined as “Any emission, radiation or induction that endangers the functioning of a radio navigation service or of other safety services or seriously degrades, obstructs or repeatedly interrupts a radiocommunications service” [15.3(m)].

2.6 SITE DATA ACCURACY REQUIREMENTS

FCC rule 101.21(e) requires the horizontal accuracy of coordinates to be within 1 s and the vertical accuracy to be within 1 m. One second of latitude represents approximately 101 ft. One second of longitude represents approximately 92 ft in the southernmost parts of the United States and approximately 66 ft in the northernmost parts of the United States. USGS 7.5-min maps have an average elevation accuracy of 5 ft (1.52 m). USGS digital seamless NED data have an average elevation accuracy of 8 ft (2.44 m). Neither of these sources is suitable for meeting the 1-m site elevation accuracy requirement. During FAA studies of building structures, the FAA may impose measurement accuracy standards. The following FAA Obstacle Accuracy Codes are defined in FAA Order 8260.19E, Appendix 3:
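As a rough numerical check (an illustration only, not part of the FCC rules), one arc-second of longitude shrinks with the cosine of latitude, which reproduces the 92-ft and 66-ft figures above:

```python
import math

FT_PER_ARCSEC_LAT = 101.0   # approximate length of 1 second of latitude, in feet

def ft_per_arcsec_longitude(latitude_deg):
    """Approximate length of 1 second of longitude at a given latitude (feet)."""
    return FT_PER_ARCSEC_LAT * math.cos(math.radians(latitude_deg))

# Southern (about 25 deg N) versus northern (about 49 deg N) United States
print(round(ft_per_arcsec_longitude(25.0)))   # ~92 ft
print(round(ft_per_arcsec_longitude(49.0)))   # ~66 ft
```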

Horizontal Measurements             Vertical Measurements
Code    Tolerance                   Code    Tolerance, ft
1       ±20 ft                      A       ±3
2       ±50 ft                      B       ±10
3       ±100 ft                     C       ±20
4       ±250 ft                     D       ±50
5       ±500 ft                     E       ±125
6       ±1000 ft                    F       ±250
7       ±1/2 mile (nautical)        G       ±500
8       ±1 mile (nautical)          H       ±1000
9       Unknown                     I       Unknown

Current FCC license requirements (101.21 (e)) dictate at least 2A accuracy.

2.7 FCC ANTENNA REGISTRATION SYSTEM (ASR) REGISTRATION REQUIREMENTS

The Antenna Structure Registration Program (Part 17) is the process under which each antenna structure that requires FAA notification—including new and existing structures—must be registered with the FCC by its owner. The owner is the single point of contact to resolve antenna-related problems and is responsible for the maintenance of those structures that require painting and/or lighting. Note that because the ASR requirements apply only to those antenna structures that may create a hazard to air navigation (either by their height or by their proximity to an airport), the registration files do not contain a comprehensive record of all antenna structures. The ASR does not replace the FAA notification requirement. When the antenna structure is registered, an ASR number is assigned. This number is seven digits long, with the first (leftmost) digit being one. ASR number assignments may be determined at the FCC Web site (http://wireless2.fcc.gov/UlsApp/AsrSearch/asrRegistrationSearch.jsp).

The FCC rules specifically define the term antenna structure as "[T]he radiating and/or receive system, its supporting structures and any appurtenances mounted thereon." In practical terms, an antenna structure could be a free-standing structure, built specifically to support or act as an antenna, or it could be a structure mounted on some other man-made object (such as a building or bridge). Note that in the latter case, the mounted structure must be registered with the FCC, not the building or the bridge. Objects such as buildings, observation towers, bridges, windmills, and water towers, the primary function of which is not to mount antennas, are not antenna structures and should not be registered. Keep in mind that the FCC only has jurisdiction over antenna structures; thus, other objects that do not normally house antennas are not required to be registered with the FCC, regardless of their location or height (but the FAA will have an interest in them).

Per CFR, Title 47, Telecommunication, Chapter 1, FCC, Part 17, Construction, Marking and Lighting of Antenna Structures, Subpart B, FAA Notification Criteria, some antenna structures are required to be registered with the FCC. Antenna structures requiring notification to the FAA are discussed in Sec. 17.7. A notification to the FAA (CFR, Title 14: Aeronautics and Space, Part 77, Safe, Efficient Use and Preservation of the Navigable Airspace) is required, except as set forth in Sec. 17.14, for any of the following constructions or alterations:


(a) Any construction or alteration of height of more than 60.96 m (200 ft) above ground level at its site. Antenna structure heights are recorded both without and with appurtenances (structures added to the antenna structure); the FCC considers the "with appurtenances" height when reviewing the 200-ft limit.

(b) Any construction or alteration of height greater than an imaginary surface extending outward and upward at one of the following slopes (Glide Slope Rules):
(1) 100 to 1 for a horizontal distance of 6.10 km (20,000 ft) from the nearest point of the nearest runway of each airport specified in paragraph (d) of this section with at least one runway of more than 0.98 km (3200 ft) in actual length, excluding heliports
(2) 50 to 1 for a horizontal distance of 3.05 km (10,000 ft) from the nearest point of the nearest runway of each airport specified in paragraph (d) of this section with its longest runway no more than 0.98 km (3200 ft) in actual length, excluding heliports
(3) 25 to 1 for a horizontal distance of 1.52 km (5000 ft) from the nearest point of the nearest landing and takeoff area of each heliport specified in paragraph (d) of this section.

(c) When requested by the FAA, any construction or alteration that would be in an instrument approach area (defined in the FAA standards governing instrument approach procedures) and when available information indicates it might exceed an obstruction standard of the FAA.

(d) Any construction or alteration in any of the following airports (including heliports):
(1) An airport that is available for public use and is listed in the Airport Directory of the current Airman's Information Manual or in either the Alaska or Pacific Airman's Guide and Chart Supplement
(2) An airport under construction that is the subject of a notice or proposal on file with the FAA, and except for military airports, it is clearly indicated that the airport will be available for public use
(3) An airport that is operated by an armed force of the United States.

Sec. 17.14 lists certain antenna structures exempt from notification to the FAA. A notification to the FAA is not required for any of the following constructions or alterations:

(a) Any object that would be shielded by existing structures of a permanent and substantial character or by natural terrain or topographic features of equal or greater height, and that would be located in the congested area of a city, town, or settlement where it is evident beyond all reasonable doubt that the structure so shielded will not adversely affect safety in air navigation. An applicant claiming such exemption under Sec. 17.14(a) shall submit a statement with their application to the FCC explaining the basis in detail for their finding.

(b) Any antenna structure of height 6.10 m (20 ft) or less, except one that would increase the height of another antenna structure (20 Foot Rule).

(c) Any air navigation facility, airport visual approach or landing aid, aircraft arresting device, or meteorological device, of a type approved by the administrator of the FAA, the location and height of which is fixed by its functional purpose [currently navigation aids (i.e., a glideslope, VOR (VHF Omnidirectional Range), or nondirectional beacon) are the main facilities of concern].

FCC Form 601, Schedule I lists the following antenna structure codes as appropriate for filing for a MW path license:

Code        Description
B*          Building (with a side mounted antenna)
BANT        Building with antenna on top
BMAST       Building with mast (and antenna) on top
BPIPE       Building with pipe (and antenna) on top
BPOLE       Building with pole (and antenna) on top
BRIDG*      Bridge
BTWR        Building with tower
GTOWER      Guyed structure used for communication purposes
LTOWER      Lattice tower
MAST        Mast (self-support structure used to mount an antenna)
MTOWER      Monopole
NNGTAMM#    Guyed tower array (grouping of guyed towers)
NNLTAMM#    Lattice tower array (grouping of lattice towers)
NNMTAMM#    Monopole tower array (grouping of monopoles)
PIPE        Any type of pipe
POLE        Any type of pole (used only to mount an antenna)
RIG*        Rig used for oil or water extraction or other purpose
SIGN*       Any type of sign or billboard
SILO*       Any type of silo
STACK*      Smoke stack
TANK*       Any type of tank (e.g., water or gas)
TREE*       Tree when used as a support for an antenna
UPOLE*      Utility pole (or tower) used to provide utility service (e.g., electric or telephone service)

The following codes have been used for years but are obsolete as of June 2012:

NNTAMM#     Antenna tower array
NTOWER      Multiple antenna structures
TOWER       Free-standing or guyed structure used for communications purposes

∗ This structure, as its primary function is not related to antenna support, is not considered an antenna structure (although it may perform that function in addition to its primary function) and, therefore, is exempt from ASR requirements. However, it still must conform to FAA glide slope requirements. # The NN indicates the number of towers in the array. The MM is optional and indicates the position of that tower in the array. The value of MM would be between 1 and NN (inclusive).

The above abbreviations are also used in the FCC ASR database (ULS Downloads/Databases/Database Downloads/Antenna Structure Registration at http://wireless.fcc.gov/uls). If an antenna structure is attached to any of these exempted structures, then only that attached structure is considered an antenna structure (e.g., BANT, BMAST, BPIPE, BPOLE, and BTWR above). If that attached structure exceeds 20 ft above the nonantenna structure, then it is subject to FAA glide slope rules and possibly ASR requirements. Nonexempted structures are considered antenna structures and additions are treated as extensions of that structure. Therefore, if a 19 ft structure is attached to the top of a 190-ft smoke stack, no ASR is required (the smoke stack is not an antenna structure and the antenna structure is less than 20 ft high). However, if a 19 ft structure is added to the top of a 190-ft self-supporting tower, an ASR is required (because the antenna structure height is now 209 ft). The FCC Web site (http://wireless2.fcc.gov/UlsApp/AsrSearch/towairSearch.jsp) has a tool (TOWAIR Determination) that can be used to test whether or not a site passes the FAA glide slope requirements. The FAA Web site (https://oeaaa.faa.gov/oeaaa/external/portal.jsp) has a tool (Notice Criteria Tool on the left border of Web site) that tests whether or not a site passes both the glide slope and navigational facilities requirements.

2.8 ENGINEERING MICROWAVE PATHS NEAR AIRPORTS AND HELIPORTS

When designing microwave networks, airports or heliports occasionally appear under or near paths. The following guidelines are proposed for microwave paths that might be adversely affected by aircraft. Regarding heliports, as helicopters can move up or down vertically without limitation, no microwave path should pass directly over a helipad. Movement laterally around a helipad is unrestricted and unpredictable, so microwave paths within one-quarter mile of the pad should be avoided.


Airports should be investigated for potential helicopter use. Helicopter landing pads are often located in airports. Helicopters can take off or land from any airport ramp. They may take off and land on runways if they have to make an emergency landing. Microwave paths over a runway used by helicopters can be problematic.

Radar transmitters can interfere with fixed point-to-point microwave paths. The National Weather Service operates the NEXRAD weather radar on hills near airports. It operates between 2.70 and 3.00 GHz. Sometimes its second harmonic interferes with nearby 6-GHz receivers; however, this is rare. Of more concern is the high resolution Terminal Doppler Weather Radar (TDWR) that operates between 5.60 and 5.65 GHz and is located near most large airports. The transmitters are often operated without filtering to avoid filter losses (and increase detection range). Microwave paths operating at lower than 6 GHz should avoid passing over or near commercial airports that use this radar. Airports using TDWR may be found at http://www.wunderground.com/radar/map.asp.

Airport runways in the United States are 800 to 18,000 ft in length (Fig. 2.20). For commercial airports, the touchdown area (the area within which the aircraft must land on the airport runway) is a maximum of 3000 ft long (from threshold to last mark). For airports that have runways shorter than 7000 ft, the touchdown area may be shorter. The end of the touchdown area is marked with a large number of hash marks; this group of hash marks is the threshold marker. The part of the runway between the threshold markings is used for landing. Smaller airports may not have a marked touchdown area. However, all pilots are expected to land in the first half of the runway regardless of airport size. Therefore, for unmarked airports, the touchdown area is the first half of the runway that the plane approaches. The airplanes are allowed to use the second half of the runway (including the displaced threshold) to stop.

The short area beyond the landing threshold is the displaced threshold. This is the lineup area for takeoffs and for taxiway entrance and exit. Airplanes can take off from this area but are not permitted to land on it (but can use it to stop). This area is usually marked with large white arrows pointing in the direction of the runway. Large commercial airports have a short portion of the runway extending beyond the displaced threshold called the blast pad. It keeps the jet exhaust from eroding the ground and provides a safety area for troubled aircraft that require additional landing distance. This area, if it exists, is typically marked with yellow chevrons (V shapes). Planes should never be on this area except in an emergency.

Commercial airliners flying the precision glide slope appear 50 ft above the runway threshold marker and, with a 3° glide slope, touch the runway 1000 ft from the beginning of the touchdown area threshold (on the touchdown marker). Private planes are allowed to use glide slopes between 2.7° and 4° and may land anywhere in the touchdown area. When approaching the runway, private pilots at small airports are expected to maintain a minimum of 500 ft above ground level until they begin the glide slope. Pilots of large commercial airliners are expected to maintain 1200 ft of minimum elevation until they begin to land. During takeoff, the commercial airplane climb angle varies from 10° to 20°. Private plane climb angles are more in the 2°–5° range.
Climb angles vary greatly depending on plane type, its weight loading and runway altitude, and temperature and barometric pressure.

Figure 2.20 Airport runway (blast pad, displaced threshold, threshold marking, touchdown area, and aiming point marking).


The launch location (takeoff rotation point) is controlled by speed rather than by location on the runway. However, it usually occurs in the final 40% of the runway for medium and large commercial airplanes. Private planes are unpredictable. Some can take off in as short a distance as 200 ft, and their takeoff point can be anywhere on the runway. Private jets with light loads often take off using much shorter distances than heavily loaded commercial planes. Military planes can take off at very steep angles using very little of the runway. Microwave links over private or military airport runways are highly speculative. If the airplane is not landing or taking off, it can be anywhere on the airport tarmac. Therefore, a basic limitation applied to all runways, taxiways, and parking areas is the maximum vertical height of the tail of the largest aircraft in use at that runway. For commercial airports, that would be the Boeing 747 with a 64-ft tail height. For military airports, the largest plane is the Lockheed C-5 with a tail height of 65 ft. The microwave path height minus the first Fresnel zone distance (below) should exceed these limits.

2.8.1 Airport Guidelines

For microwave paths traversing commercial airports, all microwave path heights minus first Fresnel zone clearance should exceed 65 ft or the expected tail height of planes expected to land at the airport. For private airports, microwave paths should not cross between the landing threshold markers because the planes may take off from anywhere on the runway. For heavily loaded scheduled commercial airplanes, the possible takeoff height YTO is estimated by the following:

YTO = (tan 20° × dTO) + hT = (0.364 dTO) + hT

dTO = distance out (away from the center of the runway) from point D;
hT = worst case airplane tail height ≈ 65 ft;
D = a point 50–75% of the way from one threshold marker to the other threshold marker.

Of course, both directions of takeoff must be considered. For large commercial runways, the value is near 50%. For smaller commercial airports, the value is closer to 60–75%. For microwave paths traversing airports limited to scheduled commercial airplanes, all microwave path heights minus first Fresnel zone clearance should exceed YTO (Fig. 2.21). For microwave paths traversing private or commercial airports, all microwave path heights plus first Fresnel zone clearance should be less than the possible landing height YL:

YL = tan 2.7° × dL = 0.0472 dL

Figure 2.21 Airport exclusion area: 20° takeoff surfaces (use this area only if the runway is limited to scheduled commercial airplanes, otherwise avoid it), 2.7° clear areas, hT ≈ 65 ft, with touchdown areas, displaced thresholds, and blast pads marked.


dL = distance out (away from the center of the runway) from the runway threshold marker.

First Fresnel zone radius F1:

F1 (ft) = 72.1 sqrt[ d1 (miles) × d2 (miles) / ( F (GHz) × D (miles) ) ]
F1 (m) = 17.3 sqrt[ d1 (km) × d2 (km) / ( F (GHz) × D (km) ) ]
Fn = F1 sqrt(n)

sqrt(x) = square root of x;
n = Fresnel zone number (an integer);
d1 = direct distance from one end of the path to the reflection;
d2 = direct distance from the other end of the path to the reflection;
D = total path distance = d1 + d2;
F = frequency of radio wave.
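The clearance rules above are easy to script. The following Python sketch (function names are illustrative, not from the book) evaluates the first Fresnel zone radius in feet and the YTO and YL surfaces using the 0.364 (tan 20°) and 0.0472 (tan 2.7°) slopes:

```python
import math

def fresnel_radius_ft(d1_mi, d2_mi, f_ghz, n=1):
    """nth Fresnel zone radius in feet; d1, d2 in miles, frequency in GHz."""
    d_mi = d1_mi + d2_mi                       # total path length D = d1 + d2
    f1 = 72.1 * math.sqrt(d1_mi * d2_mi / (f_ghz * d_mi))
    return f1 * math.sqrt(n)

def takeoff_height_ft(d_to_ft, tail_ft=65.0):
    """Possible takeoff height YTO at distance d_to_ft from point D (20 deg climb)."""
    return math.tan(math.radians(20.0)) * d_to_ft + tail_ft

def landing_height_ft(d_l_ft):
    """Possible landing height YL at distance d_l_ft from the threshold marker (2.7 deg glide)."""
    return math.tan(math.radians(2.7)) * d_l_ft

# Example: 3 mi and 7 mi from the path ends to the point being checked, 6-GHz path
print(round(fresnel_radius_ft(3, 7, 6.0), 1))    # first Fresnel zone radius, ft
print(round(takeoff_height_ft(2000), 1))         # YTO, ft, 2000 ft out from point D
print(round(landing_height_ft(2000), 1))         # YL, ft, 2000 ft out from the threshold
```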

REFERENCES

Committee TR14-11, TIA/EIA Telecommunications Systems Bulletin 10-F, Interference Criteria for Microwave Systems. Washington, DC: Telecommunications Industry Association, June 1994.
Curtis, H. E., "Interference between Satellite Communications Systems and Common Carrier Surface Systems," Bell System Technical Journal, Vol. 41, pp. 921–943, May 1962.
ITU-R Recommendations. Geneva: International Telecommunications Union—Radiocommunication Sector, available online for a fee or by subscription to biannual DVD, 2013.
Knerr, A., Chief, Technical Analysis Section, Public Safety and Private Wireless Division, private letter to M. Blomstrom, State of Nevada, Reference: PS&PWD-LTAB-647, September 3, 1998.
Linthicum, J. M., "A Guide to the FCC's Rulemaking Procedures," IEEE Communications Magazine, Vol. 19, pp. 34–37, July 1981.
Mosley, R. A., Director, Code of Federal Regulations (CFR), Title 47—Telecommunication, Chapters 1, 2 and 3. Washington: Office of the Federal Register, published yearly.
Wireless Telecommunications Bureau Order, Alcatel USA, Inc. Request for Ruling that Part 101 Frequency Diversity Restrictions are not Applicable to Collapsed Ring Architecture for Microwave Systems, Adopted January 19, 2000.
Withers, D., Radio Spectrum Management. London: Institution of Electrical Engineers, 1999.
Working Group 18, Automatic Transmit Power Control (ATPC), Recommendation WG 18.91.032. Washington: National Spectrum Managers Association, 1992.
Working Group 3, The Contents of Prior Coordination Notifications, Recommendation WG 3.86.002. Washington: National Spectrum Managers Association, 1986.
Working Group 3, Primer on Frequency Coordination Procedures, Recommendation WG 3.87.001. Washington: National Spectrum Managers Association, 1987.
Working Group 3, Coordination Contours for Terrestrial Microwave Systems, Recommendation WG 3.90.026. Washington: National Spectrum Managers Association, 1992.
Working Group 3, Coordination Procedures for Automatic Transmit Power Control (ATPC), Recommendation WG 3.94.041. Washington: National Spectrum Managers Association, 1995.
Working Group 5, Report & Tutorial, Carrier-to-Interference Objectives, Recommendation WG 5.92.008. Washington: National Spectrum Managers Association, 1992.
Working Group 9, Long Term/Short Term Objectives for Terrestrial Microwave Coordination, Recommendation WG 9.85.001. Washington: National Spectrum Managers Association, 1985.

3 MICROWAVE RADIO OVERVIEW

3.1 INTRODUCTION

The purpose of a communication system is to transport information from one location (the transmitter or source) to another (the receiver, destination, or sink). Information in its simplest form is knowledge previously unknown to the receiver. The signal conveying information will have the characteristics of a random process (similar to noise) if it is to convey information most efficiently. However, for the information to be interpreted, the signal must have some predefined nonrandom components to define when the information is being received. By definition, this "framing" information must be known previously and must therefore contain less information than that being conveyed. As Hartley (1928) noted, "The capacity of a system to transmit a particular sequence of [information conveying] symbols depends upon the possibility of distinguishing at the receiving end between the results of the various selections made at the sending end."

The transmitted information is contained in a time-varying electrical signal called the payload or baseband. The fundamental quality of analog payloads, which may have an arbitrary ("infinite") number of states at any given instant of time, is characterized by average signal-to-noise (power) ratio. The fundamental quality of digital payloads, which may have a predefined number of states at any instant of time, is characterized by the probability of message error. Digital radio transmission systems process payload signals in a digital environment but transport those signals between two locations in an analog environment. Depending on where the payload signal is observed, the signal may be regarded as analog or digital. A payload may be described as being sampled at repetitive time intervals T (samples per second) and encompassing a frequency range (bandwidth) F (Hz). If the payload is statistically time invariant ("stationary"), the information content of the signal may be loosely associated with the product FT.

For a digital payload signal, information is delivered at discrete periodic times using a symbol from a predefined symbol set ("alphabet"). For a statistically stationary digital signal from a source with no memory (each symbol is statistically independent of the others), the average information content of a symbol is the entropy of that symbol: the sum, over all possible symbols, of the negative logarithm of the probability of each symbol multiplied by the probability of that symbol. If the logarithm is base 2, the information content unit is a bit. If the base is 10, the content unit is a digit (Hartley, 1928; Shannon, 1948). Information content is directly related to symbol uncertainty and entropy. For the systems we will consider, all baseband symbols will be equally likely.
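As a concrete illustration of the entropy definition above, a short Python sketch (the function name is illustrative, not from the book) computes the average information content of a symbol set in bits:

```python
import math

def entropy_bits(probabilities):
    """Average information per symbol: H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely symbols carry log2(4) = 2 bits per symbol
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))           # 2.0
# A skewed source carries less information per symbol
print(round(entropy_bits([0.7, 0.1, 0.1, 0.1]), 3))     # ~1.357
```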


In 1623, Francis Bacon was the first to notice that information could be completely described using binary symbols: " . . . a man may expresse and signifie the intentions of his minde, at any distance . . . by objects which . . . be capable of a twofold difference onely . . . " (Aschoff, 1983). Bit, a term attributed to J. W. Tukey (Shannon, 1948), represents the state of a single binary symbol. (The term bit was first used in an article by Claude Shannon in 1948.) This state is usually represented as "0" or "1". Digital signals (of uniform probability) are usually described in units of bits. The average transmission capacity of a system is generally expressed as bits per second (b/s). While the bit is the most common measure of symbol information content, other units are also used, but rarely. One bit equals approximately 0.301 decits or 0.693 nepits.

A symbol (single signaling element) represents a single transmission event. The term baud, the unit of signaling speed, is equal to the number of symbols per second. If a symbol of a memoryless system (trellis and partial response transmission systems use memory in their coding and are exceptions) may assume any of m states with equal probability, then the number of bits n that may be transmitted by the symbol is defined by the relationship m = 2^n, or n = log2(m) ≈ 3.32193 log10(m). For example, 64 quadrature amplitude modulation (QAM) may assume any of 64 (constellation) states (m = 64). Each symbol represents 6 bits (n = 6). If the Nyquist bandwidth signaling limitation (discussed below) is considered, 64 QAM is often described as having a spectral efficiency of 6 bits/s/Hz.

If digital communication signals are distorted by noise or channel distortions, the receiver may make mistakes ("errors") in determining the transmitted digital signal. For radio systems, the most common measure of error performance is termed bit error ratio (BER) (Kizer, 1995). This is the ratio of binary errors to the total number of transmitted bits. BER is measured in radios using many different methodologies (Newcombe and Pasupathy, 1982). While BER is a common criterion for radio performance, it is well known that many practical systems have errors in bursts (Johannes, 1984). Measuring error performance using errored-second ratio (ESR, the ratio of seconds with at least one received error to the total number of seconds of data transmission) and background block error ratio (BBER, the ratio of transmitted blocks with errors to the total number of transmitted blocks) is a common method of quantifying error performance. BER is popular in North America and BBER in Europe (ITU-T and ITU-R).

Communication of information requires a previously agreed signal format to encode the information for transmission and the capacity to transmit that signal. Shannon (1950) observed, "The type of communications system that has been most extensively investigated . . . consists of an information source which produces the raw information or message to be transmitted, a transmitter which encodes or modulates this information into a form suitable for the [transmission] channel, and the channel on which the encoded information or signal is transmitted to the receiving point. During transmission the signal may be perturbed by noise . . . .
The received signal goes to the receiver, which decodes or demodulates to recover the original message, and then to the final destination of the information.” Shannon (1950) described this process using the lumber mill analogy: “A basic idea in communication theory is that information can be treated very much like a physical quantity such as mass or energy. . . . The system . . . is roughly analogous to a transportation system; for example, we can imagine a lumber mill producing lumber at a certain point and a conveyor system for transporting the lumber to a second point. In such a situation there are two important quantities, the rate R (in cubic feet per second) at which lumber is produced at the mill and the capacity C (cubic feet per second) of the [lumber] conveyor. If R is greater than C it will certainly be impossible to transport the full output of the lumber mill. If R is less than or equal to C, it may or may not be possible [to transport the full output of the lumber mill], depending on whether the lumber can be packed efficiently in the conveyor. Suppose, however, that we allow ourselves a saw-mill at the source. Then the lumber can be cut up into small pieces in such a way as to fill out the available capacity of the conveyor with 100% efficiency. Naturally in this case we should provide a carpenter shop at the receiving point to glue the pieces back together in their original form before passing them on to the consumer. If this analogy is sound, we should be able to set up a measure R in suitable units telling how much information is produced per second by a given information source, and a second measure C which determines the capability of a channel for transmitting information. Furthermore, it should be possible, by using a suitable coding or modulation system, to transmit the information over the channel if and only if the rate of production R is not greater than the capacity C.” The rest of this chapter discusses digital sawmills and the methods of reliably gluing our digital lumber back together after transportation from the source to the destination.

50

MICROWAVE RADIO OVERVIEW

3.2 DIGITAL SIGNALING

For transmission channels constrained by limited bandwidth and transmission power and corrupted by noise, Shannon (1948, 1949) developed the theoretical limit C for digital transmission channel information capacity:

C ≤ W log2(1 + s/n)
  ≤ 3.322 W log10(1 + s/n)
  ≈ 3.322 W log10(s/n)
  ≈ 0.3322 W [S/N (dB)]          (3.1)

where
C   = channel capacity (Mb/s);
W   = channel bandwidth (MHz);
s/n = channel signal to noise (power ratio);
S/N = channel signal to noise (dB) = 10 log10(s/n).

In Equation 3.1, replacing the power ratio (s/n + 1) with s/n introduces less than 1% dB error for all S/Ns ≥ 10 dB. (All practical systems require S/N > 10 dB for acceptable operation.) Shannon's limit may be rewritten to define the minimum S/N required to achieve a given spectral efficiency:

S/N (dB) ≥ 3 (C/W)
S/N (dB) ≥ 3 [spectral efficiency (bits/s/Hz)]          (3.2)
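As a quick numerical illustration of Equations 3.1 and 3.2 (function names are illustrative, not from the book):

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon channel capacity (Eq. 3.1): C = W * log2(1 + s/n)."""
    snr_linear = 10 ** (snr_db / 10.0)        # convert S/N in dB to a power ratio
    return bandwidth_mhz * math.log2(1.0 + snr_linear)

def min_snr_db(spectral_efficiency_bps_hz):
    """Approximate minimum S/N (Eq. 3.2): about 3 dB per bit/s/Hz."""
    return 3.0 * spectral_efficiency_bps_hz

# A 30-MHz channel at 30 dB S/N supports roughly 299 Mb/s in theory
print(round(shannon_capacity_mbps(30.0, 30.0), 1))
print(min_snr_db(6))   # ~18 dB needed for 6 bits/s/Hz (e.g., 64 QAM) at the Shannon bound
```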

The primary Shannon assumptions are that filtering is rectangular ("brick wall") and noise is Gaussian. While this limit may be approached with appropriate (but undefined) signal processing, Shannon noted, " . . . any system which attempts to use the capacities of a wider band to the full extent possible will suffer from a threshold effect . . . " (Shannon, 1949) and " . . . as one attempts to approach the ideal, the transmitter and receiver required become more complicated and the delays increase." (Shannon, 1950) Shannon set limits on communication channel performance but offered no guidance for approaching those limits. Our first challenge is to find a practical method of signaling through a frequency-band-limited transmission channel with noise and distortion using a power-limited transmitter. We will then improve our transmission capacity by intelligent choice of data coding. However, first let us consider the effect of receiver noise.

3.3 NOISE FIGURE, NOISE FACTOR, NOISE TEMPERATURE, AND FRONT END NOISE

In the absence of external interference, the limiting factor in radio system gain (or transmission distance or transmission speed) is the noise introduced by the first amplifier in the receiver ("front end noise") (Kerr and Randa, 2010). This noise has the effect of degrading the signal-to-noise ratio of the incoming received signal. The minimum noise of an ideal amplifier that is perfectly impedance-matched to its receive antenna would be the noise introduced by a (hypothetical) resistor of the interface impedance (typically 50 ohms) operating at temperature T (usually assumed to be 290 K = 17 °C = 63 °F). In general, the noise n delivered to a matched device by the noise source resistor at temperature T may be shown (Kizer, 1990) to be the following:

n = KTb (W)          (3.3)

n = noise produced by a matched resistor operating at temperature T;
K = Boltzmann's constant = 1.38 × 10^−23 (J/K);
T = noise temperature of the resistor (kelvin = °C + 273);
b = noise bandwidth of the device (Hz).

If the amplifier adds noise to the received signal, that noise is characterized by adding another noise temperature to characterize the added noise. The relationship of amplifier signal-to-noise ratio is

nf = 1 + Te/To = (s/nI)/(s/nO)          (3.4)

nf = noise factor;
To = amplifier operating ("room") temperature (nominally 290 K);
Te = amplifier additional ("excess") noise temperature (K) = device "noise temperature" = To (nf − 1);
s/nI = signal-to-noise power ratio at input to amplifier;
s/nO = signal-to-noise power ratio at output of amplifier.

NF(dB) = 10 log(nf) = S/NI − S/NO          (3.5)

NF(dB) = noise figure;
nf = 10^(NF/10);
S/NI = signal-to-noise ratio at input to amplifier (dB) = 10 log(s/nI);
S/NO = signal-to-noise ratio at output of amplifier (dB) = 10 log(s/nO).

Friis (1944) derived the noise figure for cascaded (series) active amplifiers:

nf = nf1 + (nf2 − 1)/g1 + (nf3 − 1)/(g1 g2) + · · ·          (3.6)

nf = overall noise factor of the cascaded amplifiers;
nf1 = noise factor of the first device;
nf2 = noise factor of the second device;
nf3 = noise factor of the third device;
g1 = gain (power ratio) of the first device;
g2 = gain (power ratio) of the second device.

The implied assumption is that all devices are matched impedances and bandwidth shrinkage of cascaded devices is insignificant. The noise factor of an attenuator is simply the attenuation (1/gain) of the device, since the output signal and noise are the input signal and noise multiplied by the gain g1 of the attenuator:

nf = (s/nI)/(s/nO) = (s/nI)/(g1 × s/nI) = 1/g1          (3.7)

Since the device is an attenuator, attenuator gain g1 is between 0 and 1. If an attenuator (device 1) and an amplifier (device 2) are cascaded, the overall noise figure of the pair is the sum of the attenuator loss (dB) and the noise figure (dB) of the device:

nf = 1/g1 + (nf2 − 1)/g1 = (1/g1)(nf2)          (3.8)

nf = overall noise factor of the cascaded attenuator and amplifier.

NF(dB) = 10 log[(1/g1)(nf2)] = 10 log(1/g1) + 10 log(nf2)          (3.9)

NF(dB) = overall noise figure of the cascaded attenuator and amplifier
       = attenuation (dB) + noise figure of amplifier (dB).

The above equations show that the noise figure of an attenuator is simply the attenuation (dB, > 0) of the attenuator. The noise figure of a cascaded attenuator and an amplifier is the sum of the two (dB values). The front end noise produced by an amplifier may be calculated as follows:

n = K(To + Te) B × 10^6 (W)
  = 1.38 × 10^−17 To (1 + Te/To) B
  = 4.00 × 10^−15 nf B          (3.10)

n = noise produced by a matched "internal" resistor;
B = noise bandwidth of the device (MHz);
N = front end noise = 10 log(n).

N (dBW) = 10 log(n) = −144 + NF(dB) + 10 log(B)
N (dBm) = N (dBW) + 30 = −114 + NF(dB) + 10 log(B)
N (dBW/MHz) = −144 + NF
N (dBW/4 kHz) = −168 + NF

A common problem is to determine the signal associated with a known radio threshold signal-to-noise ratio. Let us assume that the radio receiver is limited by front end noise.

S(dBm) = S/N(dB) + N(dBm) = S/N(dB) − 114 + NF(dB) + 10 log(B)          (3.11)

S = received signal power level (dBm) at threshold;
S/N = receiver threshold signal-to-noise ratio (dB);
NF = receiver noise figure (dB);
B = receiver bandwidth (MHz).
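A minimal Python sketch of Equations 3.6 and 3.11, assuming matched impedances as in the text; the function names and the example stage values (a 2-dB waveguide/filter loss ahead of a 2-dB noise figure amplifier) are illustrative:

```python
import math

def friis_noise_factor(stages):
    """Cascaded noise factor (Eq. 3.6). stages = [(noise_figure_dB, gain_dB), ...]."""
    total_nf = 0.0
    cumulative_gain = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        nf = 10 ** (nf_db / 10.0)              # noise factor as a power ratio
        if i == 0:
            total_nf = nf
        else:
            total_nf += (nf - 1.0) / cumulative_gain
        cumulative_gain *= 10 ** (gain_db / 10.0)
    return total_nf

def receiver_threshold_dbm(snr_db, nf_db, bandwidth_mhz):
    """Threshold RSL (Eq. 3.11): S = S/N - 114 + NF + 10 log(B)."""
    return snr_db - 114.0 + nf_db + 10.0 * math.log10(bandwidth_mhz)

# An attenuator's noise figure equals its loss, so model it as (2 dB NF, -2 dB gain)
stages = [(2.0, -2.0), (2.0, 20.0)]
nf_total_db = 10.0 * math.log10(friis_noise_factor(stages))
print(round(nf_total_db, 2))                                       # ~4 dB overall noise figure
print(round(receiver_threshold_dbm(20.0, nf_total_db, 30.0), 1))   # threshold for 20 dB S/N, 30 MHz
```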


It should be remembered that the receiver noise figure is the noise figure of the front end amplifier plus the loss (dB) between the amplifier and the measurement location. The typical amplifier noise figure for low frequency microwave radios is about 2 dB. The typical waveguide and receiver filter loss in front of a receiver is about 2 dB. Therefore, the typical receiver noise figure is 4 dB. Receiver front end noise, along with channel bandwidth, will be a primary limitation of microwave radio receivers.

3.4 DIGITAL PULSE AMPLITUDE MODULATION (PAM)

The simplest digital symbol is the single bit. It represents a "0" or "1" by one of two voltage levels at a defined repetitive sampling time. This is termed pulse amplitude modulation (PAM) of order two, or 2-level PAM. A more complex symbol may be constructed to signal multiple bits per symbol. In this case, various voltage levels are used at a defined repetitive sampling time (N-level PAM) (Fig. 3.1). The advantage of multilevel coding is that the baud (signaling) rate can be slower than that in the binary case. The disadvantage of multilevel coding is increased susceptibility to noise and sampling time error.

If an oscilloscope is used to view the voltage amplitude versus time of a PAM pulse train, the display looks like Figure 3.2 (QAM is a form of modulation discussed later). At the sampling time, the PAM pattern is relatively clear or has an open "eye." A subjective measure (Breed, 2005) of pulse impairment may be made by measuring the eye. The width is a function of timing accuracy and stability, and the height is a function of noise and channel distortion (primarily noise and intersymbol interference).

If a transmission channel had unlimited bandwidth, digital pulses could be signaled as rectangular pulses (see Chapter 14 for various PAM formats currently in use). However, all radio transmission channels have significantly limited bandwidth, and that limitation must be taken into account.

Figure 3.1 Binary and multilevel digital coding (2-, 4-, and 8-level coding of the logic signal 1001110110001101, with clock and sampling times).

Figure 3.2 PAM eye patterns at the sampling time (2 PAM/4 QAM, 4 PAM/16 QAM, 8 PAM/64 QAM, 16 PAM/256 QAM).

Figure 3.3 "Brick wall" filtering: a rectangular baseband filter B(f) with cutoff f1 converts an infinite bandwidth impulse into a sin(ω1 t)/(ω1 t) output, where ω1 = 2πf1.

Nyquist (1928) determined that (for baseband digital systems) the minimum frequency bandwidth (often called the Nyquist bandwidth) required to pass a PAM signal without distortion was half the PAM sampling rate. The sampling rate is the signaling (baud) rate. For double sideband orthogonal modulation (e.g., QAM), the Nyquist bandwidth and the baud rate are the same (Forney and Ungerboeck, 1998) (Fig. 3.3). If an impulse signal with symbol period T is passed through a transmission channel that has a low pass filter (LPF) strictly limited to the Nyquist bandwidth W = 1/(2T) (a "brick wall" filter), the filter output would be a sin ωt/ωt signal (where t is time) for a single digital pulse input. While this signal could in theory be used for signaling (since it has zero value at all sampling instants except one), it is of no practical significance. A "brick wall" filter would have infinite time delay and is physically unrealizable, and the output pulse becomes unbounded when time sampling is not perfect.

Nyquist defined several criteria (Bennett and Davey, 1965; Nyquist, 1924, 1928) that must be met if the "brick wall" filtering requirement were to be relaxed to allow practical filter implementation. For our application, they may be summarized as the following:

1. The filter impulse response must have zero voltage axis crossings equally spaced in time.
2. The area under the filter impulse response around the signaling time must be proportional to the area of the signal entering the filter and zero for all other signaling times.

Nyquist demonstrated that these two criteria are satisfied if the frequency response of the filter is relaxed symmetrically about the Nyquist frequency W (Fig. 3.4). Nyquist proposed a specific curve for the relaxed filter frequency response, now called raised cosine filtering. This filtering requires more bandwidth than the brick wall filter but is physically realizable (Bayless et al., 1979). The alpha factor defines the excess bandwidth of the filter. The smaller the alpha (α), the narrower the filter. However, the impact of reducing alpha is to increase overshoot between sampling instants (Fig. 3.5). Alphas as small as 0.2 are common in current commercial radios. Since all power amplifiers are peak-power-limited from a distortion perspective, this requires the power amplifier average output power to be reduced ("backed off") as alpha is reduced. The impact of complex multilevel signaling can be several decibels. The relative peak-to-average ratio (PARR) for various alphas (Noguchi et al., 1986) is listed in Table 3.1.

Figure 3.4 "Raised cosine" baseband filtering. The rectangular filter has amplitude A(f) = 1 for f ≤ W and 0 for f > W. The symmetrical (raised cosine) filter has A(f) = 1 for 0 ≤ f < (W − E), A(f) = (1/2)[1 + cos Φ(f)] for (W − E) ≤ f ≤ (W + E), and A(f) = 0 for f > (W + E), with Φ(f) = (π/2)[f/W − (1 − α)]/α and α = E/W.
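Where a numerical check helps, the raised cosine amplitude response sketched in Figure 3.4 can be written directly. This is a minimal Python sketch (the function name is illustrative); note that the response at the Nyquist frequency W is always 0.5 regardless of alpha:

```python
import math

def raised_cosine_amplitude(f, w, alpha):
    """Raised cosine amplitude response A(f) with Nyquist frequency w and rolloff alpha."""
    e = alpha * w                      # excess bandwidth E = alpha * W
    if f < w - e:
        return 1.0
    if f <= w + e:
        phi = (math.pi / 2.0) * (f / w - (1.0 - alpha)) / alpha
        return 0.5 * (1.0 + math.cos(phi))
    return 0.0

# Smaller alpha gives a sharper rolloff just past the Nyquist frequency
for alpha in (1.0, 0.6, 0.2):
    print(alpha, raised_cosine_amplitude(1.0, 1.0, alpha), round(raised_cosine_amplitude(1.1, 1.0, alpha), 3))
```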

Figure 3.5 Impact of alpha (α = 1.0, 0.6, 0.2).

TABLE 3.1 Alpha Peak-to-Average Ratios

α      Peak-to-Average Ratio, dB
1.0    0.00 (reference)
0.9    0.05
0.8    0.20
0.7    0.45
0.6    0.75
0.5    1.10
0.4    1.45
0.3    2.00
0.2    2.80
0.1    4.00

These values are independent of the constellation PARRs noted below. For the range of 0.1 ≤ α ≤ 1.0, the following equation estimates the above values:

Y (PARR, in dB) = e^F          (3.12)

e ≈ 2.7182818285
F = A + Bα + Cα^2 + Dα^3 + Eα^4
A = 1.701169502177906
B = −2.651499059441027
C = −6.772188272114639
D = 19.65951075385066
E = −16.93991626467497
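A minimal Python check of Equation 3.12 against the Table 3.1 values (the helper name is illustrative):

```python
import math

# Polynomial coefficients from Eq. 3.12
A, B, C, D, E = (1.701169502177906, -2.651499059441027, -6.772188272114639,
                 19.65951075385066, -16.93991626467497)

def alpha_parr_db(alpha):
    """Estimated filtering peak-to-average ratio (dB) for 0.1 <= alpha <= 1.0 (Eq. 3.12)."""
    f = A + B * alpha + C * alpha**2 + D * alpha**3 + E * alpha**4
    return math.exp(f)

print(round(alpha_parr_db(0.2), 2))   # ~2.8 dB, matching Table 3.1
print(round(alpha_parr_db(1.0), 2))   # ~0 dB (reference)
```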

The relaxed filtering curve is not unique. Gibby and Smith (1965) more precisely defined Nyquist's conditions for signaling without intersymbol amplitude distortion. This and their later work would suggest that Nyquist's two criteria for signaling without intersymbol interference through a filter could be restated as the following two criteria:

1. f(nT) = 1 for n = 0; f(nT) = 0 for n ≠ 0.
2. The frequency domain transform of f(t) is symmetrical about frequency T/2.

For the above, f(t) is the time domain impulse response signal out of the filter, t is the time, T is the data sampling rate, and n is a positive or negative integer. While Nyquist's raised cosine filter is quite popular, several other curves are "better" (have more open eyes that are less sensitive to timing error). The optimum curves (Franks, 1968; Lee and Beaulieu, 2008) use a two-step ("double jump") discontinuous frequency response curve that must be approximated. Continuous frequency response curves with eyes more open than those of Nyquist have been demonstrated (Assalini and Tonello, 2004; Beaulieu et al., 2001; Scanlan, 1992). Some work (Liveris and Georghiades, 2003; Mazo, 1975; Rusek and Anderson, 2009) has been done using Gaussian shaped sync pulses signaled slightly faster than the Nyquist rate. However, these pulses have intersymbol interference and have not found practical application.

To this point, our discussion has assumed impulses (very narrow pulses) driving the transmission channel filter. We would usually prefer to use rectangular pulses. While the frequency spectrum of an impulse is flat ("white"), the spectrum of a rectangular pulse is sin x/x, where x is the normalized frequency. If we drive the Nyquist filter with rectangular pulses, we must multiply the filter frequency response by the inverse of the rectangular filter (x/sin x) to "whiten" the frequency spectrum of the rectangular pulse (Bennett and Davey, 1965). Also, we usually would like to transfer the digital signal to a limited frequency band in an appropriate portion of the radio spectrum using double sideband quadrature modulation (Fig. 3.6).

The preceding process is the total radio filtering between the transmitter and the receiver. Part of the filtering occurs in the transmitter and the other part in the receiver. Filtering is required in both locations for different reasons; see the discussion below regarding square root raised cosine filtering. For a transmitter, the bandwidth of the digital signal may be specified in many ways (Amoroso, 1980). For modern QAM-derived radios with relatively small α factors, the radio transmitter spectrum 3-dB bandwidth is approximately 80% of the channel bandwidth. The 99% bandwidth is about 90% and the 20-dB bandwidth is essentially the same as the channel bandwidth. The radio baud rate (symbols per second) is typically about 85% of the channel bandwidth. One difficulty is how to quantify the small intermodulation energy produced by the transmitter power amplifier. It has little impact on some definitions of bandwidth but can create excessive adjacent channel interference (Fig. 3.7). Regulatory requirements (Mosley, published yearly) usually sidestep a formal bandwidth definition and address the spectrum issue by requiring the transmitter-modulated spectrum to fit within a "spectrum mask" (Fig. 3.8).

Figure 3.6 Normalized Nyquist baseband (a) and double sideband radio frequency (RF, b) spectra for rectangular pulse signaling (relative amplitude in dB versus normalized frequency for α = 0, 0.5, and 1).

Figure 3.7 Spectrum "skirt" produced by transmitter power amplifier nonlinearity (additional spectrum generated by nonlinearity around the original spectrum).

The receiver needs to reject adjacent potentially interfering signals. The optimum way to split the filtering requirement has been studied extensively (Lucky et al., 1968). The optimum detection performance (Noguchi et al., 1986) for Gaussian noise interference is obtained by making the transmit and the receive filters the same (ignoring the prewhitening (x/sin x) term that occurs at the modulator). Each filter voltage amplitude response is the square root of the raised cosine filter frequency response. Each filter is termed a square root raised cosine filter.

A receiver matched filter is a filter whose frequency domain response matches the frequency domain spectrum of the transmitted signal (actually, the filter is the complex conjugate of the transmitted spectrum, but assuming the spectrum has no complex component, this is an accurate statement). It has the desirable property of being the filter that optimizes the receiver signal-to-noise ratio in the presence of white Gaussian noise. While North (1943) originated this concept, Van Vleck and Middleton independently derived this result and defined the name "matched filter." The matched filter optimizes the detection of pulse amplitude signals being sampled at the center of the sampling window. Using square root raised cosine filtering (assuming a prewhitened transmit spectrum followed by a square root transmit filter) creates a matched filter at the digital radio receiver. These pulse shaping filters reside in the modulator and the demodulator of the radio and are usually digital. They have limited rejection and overload capabilities, so additional physical flat pass-band filters with deep adjacent frequency rejection are added to meet regulatory and adjacent frequency filtering requirements.

Figure 3.8 FCC digital spectrum filtering: the unfiltered digital signal spectrum (sin x/x), the digital signal spectrum after [x/sin x] × raised cosine filtering, and the FCC spectrum mask (power spectral density in dBm/4 kHz versus relative frequency; B = baud rate).

3.5 RADIO TRANSMITTERS AND RECEIVERS

Microwave radio transmitters and receivers are paired to convey information from one location to another. They are subjected to the many potential impairments of this process (Fig. 3.9). The degree of impairment is a function of the external environment and internal design choices (Borgne, 1985; Johnson, 2002a, 2002b; Yin, 2002). Overall performance will be fundamentally limited by the radio BER for very low received signal (power) levels (RSLs) (due to receiver noise) and receiver overload distortion (due to excessive received signal) (Fig. 3.10). The nominal RSL will lie somewhere within the low BER range of the microwave radio. The decibel difference between the nominal RSL and the radio threshold for larger RSL is termed head room. The difference between the nominal RSL and the small RSL radio threshold is termed flat fade margin. These concepts are shown in Figure 5.38. Dispersive fade margin is a concept unrelated to "flat" power fading (see Chapter 9 for a discussion of this concept).

Figure 3.9 Typical radio system impairments (modem and timing/amplitude impairments, intersymbol interference, filters and branching filters, local oscillator phase noise, nonlinearities, thermal noise, fading, and interference). Source: Adapted from Borgne, M. "Comparison of High-Level Modulation Schemes for High-Capacity Digital Radio Systems," IEEE Transactions on Communications, Vol. 33, p. 442, May 1985. Adapted with permission of IEEE.

Figure 3.10 Typical radio receiver dynamic range (BER as a function of RSL, with radio thresholds at low and high received signal levels).


3.6 MODULATION FORMAT

As noted previously, Nyquist's criterion for distortionless data transmission was that digital transmission could not occur faster than twice the frequency bandwidth for baseband (unmodulated) signals or the transmission bandwidth for double sideband modulated signals. This limits how fast we may produce digital symbols without intersymbol interference. (Some early-generation digital radios used oversampling techniques called partial response signaling that used predictable intersymbol interference, but the techniques were limited in their spectral efficiency. They are no longer produced.) Of course, the primary purpose of modulation is to impress a digital data stream on an RF sine wave ("carrier") for transmission over the air to another location. The other purpose of modulation is to increase spectral efficiency by creating symbols that represent multiple bits. This is accomplished by changing (modulating) a carrier sine wave (or waves) in amplitude and/or phase in such a way as to map a set of bits into a unique symbol. PAM is a simple one-dimensional (voltage amplitude as a function of time) signaling method. Microwave radio modulation signaling methods typically operate in at least two dimensions (amplitude and phase as a function of time). The modulated signal can be represented as a unity length phasor multiplied by a time-varying multiplicative scalar M:

C = M e^j(ω + φ)          (3.13)

C = modulated carrier;
M = carrier amplitude modulation;
ω = 2πft;
f = frequency;
t = time;
φ = carrier phase modulation.

For QAM radios, the amplitude and phase modulation is created by modulating two orthogonal signals: a cosine wave and a sine wave. We will express the orthogonal signals as a complex number with the cosine amplitude on the real axis and the sine amplitude on the imaginary (j) axis. Radio carrier phase will be referenced to the cosine wave. The amplitude modulation of the cosine wave will be termed the in-phase modulation. The amplitude modulation of the sine wave will be termed the quadrature modulation.

C = VI cos(ω) + j VQ sin(ω)          (3.14)

C = modulated carrier;
VI = amplitude of in-phase modulation (in phase with carrier);
VQ = amplitude of quadrature modulation (orthogonal to carrier);
j = imaginary number (i) used to create a complex number.

The modulated carrier can be represented as two orthogonal carrier signals (VI and VQ) that are multiplied by modulation signals.

C = M e^j(ω + φ) = M e^jφ e^jω
V = sqrt[(VI)^2 + (VQ)^2]
φ = arctan(VI/VQ)          (3.15)

We wish to focus on the modulation. Euler's formula allows us to represent the modulated carrier as a set of quadrature signals.

VI cos(φ) + j VQ sin(φ) = V e^−jφ = carrier signal modulation          (3.16)


Our modulation is a unity amplitude phasor (vector) e^−jφ multiplied by a time-varying amplitude V. We would like to display the modulation symbol states in two-dimensional Cartesian coordinates (referenced to the carrier with constantly changing phase ω). This display is called a phasor diagram (Fig. 3.11). This phasor representation of all possible signaling states (symbols) is called a constellation. For the typical system with no memory, to transport n bits of information per signaling interval, a constellation of M = 2^n symbols is required. The transmission system is often described as having a spectral efficiency of n bits/s/Hz. Many different constellations are possible (Cahn, 1960; Campopiano and Glazer, 1962; Dong et al., 1999; Foschini et al., 1974; Hancock and Lucky, 1960; Simon and Smith, 1973; Thomas et al., 1974) (Fig. 3.12).

The constellations of Figure 3.12 are referenced to normalized average signal-to-noise ratio (S/N) values for a 10^−6 BER relative to the best case constellation. Most radios are limited by peak S/N values, not average values. As the number of points increases, the optimum constellation converges toward a grouping of equilateral triangles within a circle (Foschini et al., 1974). The most popular constellations are QAM and phase shift keying (PSK). Although they are not optimal, the performance penalty is nominal and they are much easier to implement than the other constellations. In general, the constellation PARR will impact the transmitter amplifier design (Table 3.2). Since PSK uses a circular constellation, the PARR never changes. QAM constellations vary between a square (4, 16, 64, 256, 1024, and 4096 QAM) and a cross pattern (32, 128, 512, and 2048 QAM). The lower PARR of the cross pattern is significant. Xiong (2006) showed that for a square QAM constellation, the PARR (dB) may be calculated using the following formula:

PARR(dB) = 10 log[3 (sqrt(N) − 1)^2 / (N − 1)]          (3.17)

where N is the number of states in the QAM constellation (e.g., 64 for 64 QAM).

The relationship between constellation pattern and required signal-to-noise ratio for a given BER has been established by many sources [including the work by Craig (1991), with the π in Equation 3.13 replaced by 2π (Khabbazian et al., 2009; Szczecinski et al., 2006)]. Proakis and Salehi (2002, Table 7.1) noted the relationship between QAM and PSK (Table 3.3). Proakis and Salehi (2002, Eq. 7.6.69) derived the relationship between signal-to-noise ratio (S/N) and probability of error (BER):

BER = 2 [1 − 1/sqrt(M)] Q{sqrt[3 (s/n)/(M − 1)]}          (3.18)

M = QAM level = 2^η;
η = spectral efficiency (bits/s/Hz) = an integer > 0;
s/n = signal-to-noise ratio (power ratio) = 10^((S/N)/10);
sqrt(x) = square root of x;
Q = tail probability of a Gaussian random variable;
B = channel bandwidth.

S/N(dB) ≈ Eb/No + 10 log10(η)          (3.19)

S/N(dB) = average signal-to-noise power ratio (dB);
Eb/No = energy per bit to noise power spectral density ratio (dB)
= S/N(dB) − 10 log10[(symbols per second)/(B (Hz)) × bits per symbol]
≈ S/N(dB) − 10 log10[spectral efficiency (bits/s/Hz)]
= 10 log[(s/n)/η] (assuming the modulated signal essentially fills the transmission channel).
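Equation 3.17 is easy to verify against the square-constellation entries of Table 3.2 below; a short Python sketch (the function name is illustrative):

```python
import math

def square_qam_parr_db(n_states):
    """Peak-to-average power ratio (dB) of a square QAM constellation (Eq. 3.17)."""
    root = math.sqrt(n_states)
    return 10.0 * math.log10(3.0 * (root - 1.0) ** 2 / (n_states - 1.0))

for n in (16, 64, 256, 1024):
    print(n, round(square_qam_parr_db(n), 1))   # ~2.6, 3.7, 4.2, 4.5 dB (compare Table 3.2)
```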

Figure 3.11 Phasors of different amplitude and phase (V cos(ω + φ) versus time).

Figure 3.12 Typical modulated signal constellations (4, 8, 16, 32, and 64 states), with the average S/N penalty for 10^−6 BER shown relative to the 0-dB reference (optimal) constellation of each size.

TABLE 3.2 M-ary Constellation Peak-to-Average Ratio (dB)

M       QAM     PSK
4       0.0     0.0 (reference)
16      2.6     0.0
32      2.3     0.0
64      3.7     0.0
128     3.2     0.0
256     4.2     0.0
512     3.4     0.0
1024    4.5     0.0
2048    3.6     0.0
4096    4.6     0.0

Abbreviations: M, signaling states; QAM, quadrature amplitude modulation; PSK, phase shift keying.

TABLE 3.3 Average S/N Advantage of M-ary QAM over M-ary PSK

M       S/N Advantage, dB
4       0.0
8       1.7
16      4.2
32      7.0
64      10.0

Abbreviations: M, signaling states; QAM, quadrature amplitude modulation; PSK, phase shift keying.


B = channel bandwidth; sqrt(x) = square root of x; Q = tail probability of a Gaussian random variable. Q may be approximated (Abramowitz and Stegun, 1968) as follows:

Q(X) ≈ Z[(B1 × T) + (B2 × T^2) + (B3 × T^3) + (B4 × T^4) + (B5 × T^5)]     (3.20)

Z = e^(−(X×X)/2)/sqrt(2π) = 1/{sqrt(2π)[e^((X×X)/2)]}

X = sqrt[3(s/n)/(M − 1)]

T = 1/[1 + (R × X)]

π ≈ 3.1415926536; e ≈ 2.7182818285; R = 0.2316419; B1 = 0.319381530; B2 = −0.356563782; B3 = 1.781477937; B4 = −1.821255978; B5 = 1.330274429
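The following minimal Python sketch (illustrative only, not from the text) evaluates Equation 3.18 using the Q approximation of Equation 3.20 and scans for the average S/N that yields a 10−6 BER for several square QAM constellations.

```python
import math

# Constants of the Abramowitz and Stegun approximation (Eq. 3.20)
R = 0.2316419
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def q_approx(x):
    """Tail probability of a unit Gaussian, per Eq. 3.20."""
    z = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    t = 1.0 / (1.0 + R * x)
    return z * sum(b * t ** (k + 1) for k, b in enumerate(B))

def qam_ber(M, sn_db):
    """Approximate BER of square M-QAM at an average S/N in dB (Eq. 3.18)."""
    sn = 10 ** (sn_db / 10)                      # power ratio
    x = math.sqrt(3 * sn / (M - 1))
    return 2 * (1 - 1 / math.sqrt(M)) * q_approx(x)

def sn_for_ber(M, target=1e-6):
    """Scan S/N in 0.1-dB steps until the target BER is reached."""
    sn_db = 0.0
    while qam_ber(M, sn_db) > target:
        sn_db += 0.1
    return round(sn_db, 1)

for M in (4, 16, 64, 256, 1024):
    print(M, sn_for_ber(M), "dB")
```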

Abreu’s tight upper bound (12) (Abreu, 2012) on the Q-function is also an excellent approximation for Q. Using the above relationships, we may calculate the relationship between signal-to-noise ratio (S/N) and BER for QAM (Table 3.4). These values are for coherent demodulation. Most practical receivers use differential demodulation since the original carrier phase is difficult to determine. Differential demodulation is a couple of decibels worse than coherent demodulation, but this is usually made up by adding forward error correction to the modulation/demodulation process (Fig. 3.13). Figure 3.13 graphically displays the carrier-to-noise requirements for a 10−6 BER. For commercial products, the term carrier to noise (C/N) is typically used in lieu of signal-to-noise ratio (S/N); they represent the same quantity. See Table A.22 for signal-to-noise threshold requirements, spectral efficiency, and transmitter PARRs for popular modulation formats.

TABLE 3.4 Average S/N of M-ary QAM for 10−6 BER

  M       S/N, dB    η, bits/s/Hz
  4       13.5       2
  8       17.1       3
  16      20.2       4
  32      23.2       5
  64      26.2       6
  128     29.1       7
  256     32.0       8
  512     34.9       9
  1024    37.7       10

Figure 3.13 Spectral efficiency for QAM and PSK (required C/N in dB for a 10−6 BER versus spectral efficiency in bits/s/Hz, coherent demodulation).

Constant amplitude envelope PSK is popular for satellite systems where maximum transmit power is the primary interest. For most fixed point-to-point radios, spectral efficiency is most important, so variable amplitude envelope QAM constellations are chosen. In Figure 3.14, 8 QAM is not shown. Although it is theoretically possible, there is no industry agreement as to the appropriate constellation and it is not used commercially. (Several logical choices have been proposed, but when eight states are required, 8 PSK is typically used.) As systems transition from lower to higher order QAM, the constellations transition naturally between square and cross patterns. Cross patterns are desirable for peak-power-limited systems because their peak-to-average power ratio is lower than that of square constellations. Cross patterns for 16, 64, and 256 QAM are possible, but they, like 8 QAM, require complicated multiple levels for different constellation symbols and are not used commercially. Figure 3.15 shows the constellation pattern and the digits associated with each symbol for 16 and 64 QAM. The digits are Gray coded in such a way that the symbols closest to any symbol change by only one digit. This minimizes the impact of moderate noise symbol errors on BER. (For moderate noise, one symbol error is one bit error.) Figure 3.16 is a simplified diagram of 16 and 64 QAM modulators. QAM is usually formed by summing the outputs of two PAM modulators operating with carriers in phase quadrature (90° relative phase). Each PAM modulator varies the amplitude of its carrier and switches its phase between two states, in phase or 180° out of phase. The two modulated carriers are usually termed the in-phase (I) and quadrature (Q) signals, and the modulator is called an I–Q modulator. For square patterns, the modulators are independent. For cross constellations, there is dependency.
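A minimal sketch of Gray-coded mapping for square QAM follows (a generic textbook-style bit assignment, not necessarily the exact labeling of Figure 3.15): the first half of the bits sets the I level and the second half sets the Q level, so the two rails are mapped independently and horizontally or vertically adjacent symbols differ in a single bit.

```python
# Generic Gray-coded 16 QAM mapper (illustrative bit assignment only).
GRAY2LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # Gray order

def map_16qam(bits):
    """Map 4 bits to one 16 QAM symbol: first 2 bits -> I, last 2 -> Q."""
    i = GRAY2LEVEL[tuple(bits[0:2])]
    q = GRAY2LEVEL[tuple(bits[2:4])]
    return complex(i, q)

# Neighbouring symbols along the I or Q axis differ in exactly one bit,
# so a moderate-noise symbol error usually causes only a single bit error.
print(map_16qam([0, 0, 0, 0]))   # (-3-3j)
print(map_16qam([0, 1, 0, 0]))   # (-1-3j): one bit changed, adjacent symbol
```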

3.7 QAM DIGITAL RADIOS

The power amplifier for a QAM transmitter is by far the largest consumer of power in the radio. For an I–Q modulator, it must operate in linear mode with some output power reduction due to the constellation and filter alpha PARRs.

Figure 3.14 Typical QAM constellations (annotated with the approximate S/N required for a 10−6 BER: 14 dB for 4 QAM, 21 dB for 16 QAM, 24 dB for 32 QAM, 27 dB for 64 QAM, 30 dB for 128 QAM, and 33 dB for 256 QAM).

Figure 3.15 The (a) 16 and (b) 64 Gray-coded QAM constellations.

Relatively low speed radios are now starting to implement novel approaches to improve transmitter amplifier efficiency (Birafane et al., 2010; Groe, 2007; Kim et al., 2010a, 2010b; Lavrador et al., 2010). These modulators offer the opportunity to reduce amplifier power consumption while maintaining amplitude and phase linearity. These approaches have yet to be implemented in high speed digital radios. QAM radio architecture is fairly standardized (Dinn, 1980; Noguchi et al., 1986) (Fig. 3.17). Serial binary data enters the transmitter. It is scrambled by a self-synchronizing scrambler to remove periodic patterns (such as tributary data stream framing, which would create undesired coherent spectral lines in the modulated signal) from the incoming data. The data is then converted from serial to parallel form (S to P). The parallel data is used by the modulator to create I and Q amplitude states. These I and Q signals are up-converted to the desired transmit frequency and summed to form a constellation state. “Direct” modulation radios typically modulate a low frequency (e.g., 2-GHz) signal and up-convert it to the appropriate transmission frequency. The received signal is amplified to an appropriate level and applied to two down converters driven by a voltage controlled oscillator (VCO) with quadrature outputs. The VCO has been phase and frequency locked to the incoming received signal.

Figure 3.16 The (a) 16 and (b) 64 QAM modulators.

Figure 3.17 Generalized QAM transmitter and receiver.

The quadrature VCO signals extract the incoming VI and VQ modulation signals. The following illustrates this process. The incoming signal C is the modulated carrier:

C = VI cos(ω) + jVQ sin(ω)     (3.21)


The variables are as noted previously. One receive channel multiplies the incoming signal by a cosine wave and the other channel multiplies it by a sine wave, and each LPFs the result:

VI = LPF{2 cos(ω)[VI cos(ω) + jVQ sin(ω)]} = LPF{VI[1 + cos(2ω)] + jVQ sin(2ω)}     (3.22)

VQ = LPF{2 sin(ω)[VI cos(ω) + jVQ sin(ω)]} = LPF{VI sin(2ω) + jVQ[1 − cos(2ω)]}     (3.23)
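A small numerical illustration of Equations 3.21–3.23 follows (not from the text; the two quadrature components are treated as real carriers): multiplying the received carrier by the quadrature local oscillator outputs and low-pass filtering, here by averaging over an integer number of carrier cycles, recovers VI and VQ.

```python
import numpy as np

fc = 10.0                                # carrier frequency, arbitrary units
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
w = 2 * np.pi * fc * t

VI, VQ = +3.0, -1.0                      # transmitted symbol amplitudes
c = VI * np.cos(w) + VQ * np.sin(w)      # modulated carrier (Eq. 3.21)

vi_hat = np.mean(2 * np.cos(w) * c)      # Eq. 3.22: LPF of 2 cos(w) * C
vq_hat = np.mean(2 * np.sin(w) * c)      # Eq. 3.23: LPF of 2 sin(w) * C

print(round(vi_hat, 3), round(vq_hat, 3))    # ~ 3.0 and -1.0
```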

The recovered VI and VQ signals are digitized and demodulated into a sequence of parallel groups of bits. These are converted to a sequential binary data stream and descrambled before being delivered to the data user. The descrambler feedback loops multiply the output error rate by the number of loops (which is kept small). During heavy received signal fading (see Chapter 9), a microwave receiver may lose the received signal repetitively. To minimize transmission outage time, the receiver should regain operation as quickly as possible after the received signal returns to acceptable levels. The receiver should recover the carrier frequency, lock to its phase, and then start synchronizing with the data stream. This process is a function of many factors (Franks, 1980; Mueller and Muller, 1976; Noguchi et al., 1986). Regaining carrier frequency and phase lock usually happens quickly for short-duration receiver outages. It can take much longer if the receiver has been without a received signal for some time. Lockup on the demodulated data is a function of the radio symbol signaling (baud) rate (the slower the baud rate, the longer the lockup time). This can have a significant effect on system availability. Use of differential demodulation means the receiver is insensitive to 180° phase constellation orientation. However, the receiver must still determine whether it has locked up in the left/right (in-phase I) or up/down (quadrature Q) orientation. This is usually done using unique frame sequences on the I and Q channels (“rails”).

3.8 CHANNEL EQUALIZATION

Microwave paths are subject to dispersive fading (see Chapter 9). This is caused by multiple transmission paths in the atmosphere between the transmitter and the receiver. The multiple paths produce a signal at the receiver that is the original digital signal corrupted by time-shifted “echoes” of this signal. These additional signals produce a broadened (“dispersed”) digital signal, and this linear distortion is called dispersion. The digital radio receiver must compensate for this distortion before demodulation can be performed reliably. Linear distortion can, in theory, be compensated for by frequency domain equalizers. However, in this case, that is not always possible. The dominant signal may occur from any of the multiple paths at any given time. If the strongest signal is from a relatively long time-delayed path, the echo (e.g., the signal from the normal main path) may precede the dominant signal in time. This produces a nonminimum phase linear distortion that cannot be compensated for by any realizable frequency domain equalizer. This type of distortion is known to occur approximately half the time when multipath fading occurs. The Wiener–Khinchin theorem implies that for linear, (approximately) time-invariant transmission channels, equalization may be performed in either the frequency domain (as observed on a spectrum analyzer) or the time domain (as observed on an oscilloscope), or both. It has long been known (Aaron and Tufts, 1966) that the optimum linear distortion equalizer is a frequency-domain-matched filter followed by an (infinite tap) time domain transversal equalizer. The radio transmission channel changes over time, and the ability to provide a frequency domain filter to match the channel distortion at any instant is limited. Most microwave radio receivers use a slope equalizer to compensate for out-of-band dispersive notches and leave all other compensation to a time domain transversal equalizer. While linear feedback (infinite impulse response) adaptive filters have been considered, they are not popular because they are difficult to stabilize and lack inputs from the opposite quadrature transmission channel (Qureshi, 1985). Today, use of automatically adaptive (finite impulse response) linear transversal


Figure 3.18 Simplified transversal equalizer.

equalizers (Lucky, 1966) is the standard method of compensating for path intersymbol interference (dispersive fading). The transversal equalizer is a digital tapped delay line, with the output signal composed of weighted delayed samples of the original received signal (Lucky, 1965; Qureshi, 1982, 1985). The equalizer sums weighted samples that bracket the sample of interest. The value of interest is delayed so that samples ahead of it and behind it in time may be used. Typically, there are an equal number of samples (“taps”) before and after the decision circuit to enhance convergence (Brunner and Weaver, 1988) (Fig. 3.18). The weighting coefficients are automatically varied to satisfy a criterion of goodness. The speed at which coefficients are changed is a trade-off between stability and dynamic performance (how fast the equalizer produces a useful signal). The early equalizers took signal samples at the symbol (baud) rate synchronized to the incoming signal (synchronous equalizers) (Lucky, 1965, 1966). They used a zero forcing (ZF) criterion that varied the weighting to achieve a zero composite signal for all samples except the sample of interest. This had disadvantages in that it enhanced high frequency (band edge) noise and could not equalize a signal with a fully collapsed eye pattern. Equalizers using the least mean square (LMS) criterion (Widrow, 1966) were an improvement. They minimized the mean square error over all the taps. Using this criterion, the equalizer could both suppress noise and equalize a severely distorted signal. Later equalizers were designed to take samples faster than the signaling rate (Gitlin and Weinstein, 1981). These were called fractional equalizers because they sampled the received signal between the digital sampling instants. The advantages of high rate (fractional equalizer) sampling were better control of distortion near the frequency edges of the transmission channel (no sampling fold-over distortion) and insensitivity to the sampling time (better dynamic performance since precise signal sampling timing is not required). The fractional equalizer could synthesize the best combination of the characteristics of an adaptive matched filter and a synchronous equalizer within the constraints of its number of taps (Qureshi, 1982). For the same number of taps, fractionally spaced equalizers outperform synchronous ones (Baccetti, Raheli, and Salerno, 1987; Niger and Vandamme, 1988). Figure 3.18 shows a simplified analog equalizer for only one channel. Today all equalizers are digital. For I and Q demodulators, both channels are sampled and both are fed back to produce the output for each channel. The weighting coefficients must be complex (both real and imaginary numbers) since the received signals are in quadrature. The transversal equalizer is a critical element of a modern digital receiver. It (and the receiver filtering) will determine the shape of the dispersive fading W or M curve (see Chapter 9) and thereby determine the dispersive fade margin of the receiver.
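A minimal LMS transversal equalizer sketch follows (illustrative assumptions: a two-ray channel, 4-level PAM, and training on known symbols; real radios switch to decision-directed updates). It shows the tapped-delay-line structure of Figure 3.18 and the least mean square weight update described above.

```python
import numpy as np

rng = np.random.default_rng(0)

channel = np.array([1.0, 0.0, 0.45])                  # main path plus echo
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=5000)
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.05 * rng.standard_normal(len(received))

n_taps, delay, mu = 11, 5, 5e-4
taps = np.zeros(n_taps)
errors = []

for n in range(n_taps - 1, len(received)):
    window = received[n - n_taps + 1 : n + 1][::-1]   # newest sample first
    y = taps @ window                                 # equalizer output
    e = symbols[n - delay] - y                        # error vs. training symbol
    taps += mu * e * window                           # LMS weight update
    errors.append(e * e)

print("MSE, first 500 symbols:", round(float(np.mean(errors[:500])), 3))
print("MSE, last 500 symbols: ", round(float(np.mean(errors[-500:])), 3))
```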


3.9 CHANNEL CODING

We know how to shape digital signals for transmission. We know how to map these signals into efficient constellations. We know how to compensate for transmission channel distortion. The final step is to improve the dynamic range of the receiver by improving error performance. This takes us to the subject of error-correcting coding (Bhargava, 1983; Costello and Forney, 2007; Forney, 1991; Forney et al., 1984; Forney and Ungerboeck, 1998; Goldberg, 1981; Kassam and Poor, 1983; Lucky, 1973; Sklar, 1983a, 1983b; Whalen, 1984). The first codes were block codes because they encoded data in fixed blocks of bits. Hamming (1950) invented the first such code. He took a block of 4 bits and created three check sums to create a 7-bit word. Using that word, a single error could be corrected. Golay (1949) produced two codes that had the ability to correct two or three errors per block. Reed (1954) and Muller (1954) produced even more powerful codes. The Hamming, Golay, and Reed–Muller codes were linear, meaning that the modulo-2 sum of any two code words produced another code word. Cyclic codes were invented by Prange (1957). These block codes had the property that any cyclic shift of a code word produced a code word. Shortly thereafter, Bose and Ray-Chaudhuri (1960) and Hocquenghem (1959) produced the BCH cyclic code. About the same time, Reed and Solomon (1960) created the powerful Reed–Solomon code. This code could correct continuous groups of errors besides individual block errors. This code was more complex than any previous code and initially this limited its use. However, today, it is the most popular block code. It is used in CDs, DVDs, cell phones, and NASA deep space missions (Berlekamp et al., 1987; Liu and Lee, 1984). Forney (1966) introduced concatenated codes by cascading two block encoders (Fig. 3.19). This two-stage coding scheme was capable of correcting a wide variety of error patterns not correctable by individual block codes. While quite useful, block codes have some limitations. Since they are frame oriented, they require an entire frame before an output signal is available, and this introduces considerable delay (latency). They obviously require frame synchronization, which causes start-up delay and framing complexity. These limitations were overcome by convolutional coding, first introduced by Elias (1955) (Forney, 1970, 1971, 1974; Viterbi, 1971). Viterbi (1971) noted that for the same order of complexity, convolutional codes considerably outperform block codes. Rather than segregating data into distinct blocks, convolutional encoders add redundancy to a continuous stream of input data using a linear shift register. Each set of n output bits is a linear combination of the current set of k input bits and the m bits stored in the shift register. The total number of bits on which each output depends is called the constraint length. The rate of the encoder is the number of data bits k taken in by the encoder in one coding interval divided by the number of code bits n output during the same time. While various decoding algorithms for convolutional codes were invented, the optimal solution was not available until the invention of the Viterbi decoding algorithm (Forney, 1973; Viterbi, 1967). This algorithm allowed soft decisions to be modified by the history of the data stream and the constraints of the encoding architecture.
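As an illustration of the Hamming (7,4) block code mentioned above, the sketch below encodes 4 data bits into a 7-bit word with three parity checks and corrects a single bit error from the syndrome (the particular generator and parity-check matrices are one common textbook arrangement, not taken from this book).

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator: codeword = data @ G (mod 2)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data):
    return data @ G % 2

def decode(word):
    syndrome = H @ word % 2
    if syndrome.any():                     # nonzero syndrome: locate the error
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word = word.copy()
        word[col] ^= 1                     # flip the single corrupted bit
    return word[:4]                        # first four bits are the data

data = np.array([1, 0, 1, 1])
cw = encode(data)
cw[6] ^= 1                                 # inject one bit error
print(decode(cw))                          # -> [1 0 1 1]
```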
While convolutional coding with the Viterbi decoder is very powerful, convolutional codes suffer from a serious problem: when the Viterbi decoder fails, it creates long, continuous bursts of errors. This limitation was moderated by Odenwalder [as reported by Liu and Lee (1984)]. He used Forney’s concatenated coding architecture with Reed–Solomon outer coding but replaced the inner block encoder/decoder pair with a convolutional encoder and Viterbi decoder pair. The powerful Reed–Solomon coding was used to “tame the Viterbi.”

Figure 3.19 Concatenated coding.

Sometimes a block interleaver and deinterleaver (Forney, 1971b) are placed between the inner and outer coders to break up the decoder burst errors so that a shorter Reed–Solomon code can be used. Bhargava (1983) evaluated several coding techniques [for binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK)] and concluded that concatenating Reed–Solomon block coding outside a system using a convolutional encoder with a Viterbi inner decoder resulted in the optimum error performance (if the error source was Gaussian noise). This architecture is commonly used in modern QAM and trellis coded modulation (TCM) radios. The next step in coding was to merge constellation mapping and coding. Imai and Hirakawa (1977) realized that if subsets of constellations could be used at any signaling instant, the larger space between symbols would improve error performance (Forney, 1988a, 1988b, 1989; Forney and Wei, 1989). Their approach, called multilevel signaling, used independent binary codes to pick the PSK or QAM constellation subsets. While they mentioned convolutional coding and Viterbi decoding, most actual commercial products used block coding and hard decision decoding. Shortly thereafter, Ungerboeck (1982, 1987a, 1987b) disclosed two-dimensional (2D) TCM for QAM and PSK constellations, which incorporated convolutional encoders and the Viterbi decoder [Wolf and Ungerboeck (1986) even developed trellis coding for partial response signaling]. Since commercial products using multiple levels used hard decisions at each level of the decoding process, multilevel performance was slightly worse than that of products using 2D TCM. Later, Yamaguchi and Imai (1987) improved multilevel performance by explicitly including convolutional encoding and Viterbi decoding to achieve slightly better performance than Ungerboeck’s 2D trellis coding. Wei (1984a, 1984b, 1987) discovered rotationally invariant multidimensional trellis modulation. Wei took Ungerboeck’s two-dimensional trellis constellation decomposition and added multiple consecutive time dependencies to define 4, 8, 16, and even higher dimension trellis coding. As the number of dimensions increased, the complexity of decoding (and latency) increased dramatically, with only incremental coding gain improvement. Four-dimensional trellis coding is the typical choice for current commercial products. Multilevel coding is used only if low latency or limited computational complexity is important. The current state-of-the-art codes are turbo codes (Berrou, Glavieux, and Thitimajshima, 1993; Lodge, Young, Hoeher, and Hagenauer, 1993) and the recently rediscovered low density parity check codes (LDPCs) (Gallager, 1962; MacKay and Neal, 1996). An LDPC has been demonstrated (Chung et al., 2001) to come within 0.04 dB of the Shannon capacity limit. However, this code requires a block length of 10^7 and typically requires about 1000 processing iterations to achieve a 10−6 BER. While these codes are popular in low speed wireless equipment (Brink, 2006), their computational complexity and recursive processing induce latency, and these computational difficulties have delayed their introduction into high speed radio systems. Currently, for QAM, LDPCs achieve up to 7-dB threshold coding gain when compared to uncoded QAM. While these codes dramatically improve 10−3 and 10−6 BER thresholds, they exhibit a relatively high background block error ratio (BBER).
For critical requirements, they are paired with a powerful background error-correcting scheme (such as Reed–Solomon coding) to suppress residual errors. The primary limitation to improving radio S/N performance is processing latency. The 150-Mb/s QAM radios typically have latency of less than 100 μs (transmitter plus receiver). TCM radios may push that to a few hundred microseconds. The primary difference is the degree of the Reed–Solomon coding used in the radios. LDPCs can significantly improve radio S/N performance. For current digital microwave systems, the modulation methods of choice are concatenated LDPCs with QAM (IP baseband) or Reed–Solomon coding with 4D TCM (SONET baseband) (Fig. 3.20).

Figure 3.20 Typical digital microwave radio coding (Reed–Solomon or low density parity outer coding; trellis or QAM inner coding).


Currently, typical 10−6 BER threshold improvements of 4 dB with Reed–Solomon coding and 7 dB with LDPCs are commercially available. For 3 DS3 and 4 E3 transmissions, 64 QAM is typically used. It has the spectral efficiency (6 bits/s/Hz) to transport those signals in the typical nominally 30-MHz radio channel. For the SONET/SDH equivalents (STS-3/OC-3 or STM-1), 4D trellis provides the additional transmission bandwidth (6.5 bits/s/Hz) required without sacrificing system gain or increasing channel bandwidth. In the United States, for radio channels narrower than 10 MHz (transporting multiple DS1s or E1s), 16 QAM or 32 4D trellis is commonly used. Modern adaptive modulation IP radios typically use QAM to facilitate rapid switching between modulation schemes of different transmission speeds. When compared to the Shannon theoretical S/N limit, QAM is a little more than 9 dB away (it requires slightly more than 9 dB more S/N than Shannon’s limit). Compared to 2^n constellation QAM, the 4D TCM 2^(n+1) constellation achieves slightly better system gain and spectral efficiency. Its spectral efficiency (increased transmission bandwidth for a given radio channel bandwidth) is the primary reason it is used with most SONET radio systems.
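The quoted gap to the Shannon limit can be illustrated numerically (this sketch is not from the text; it uses the exact Gaussian tail via erfc rather than the Equation 3.20 approximation): the S/N required by Equation 3.18 for a 10−6 BER is compared with the Shannon requirement s/n = 2^η − 1, and the gap comes out close to 9 dB for each constellation size.

```python
import math

def qam_ber(M, sn):
    """Eq. 3.18 with the exact Gaussian tail probability."""
    x = math.sqrt(3 * sn / (M - 1))
    return 2 * (1 - 1 / math.sqrt(M)) * 0.5 * math.erfc(x / math.sqrt(2))

def qam_sn_db(M, target=1e-6):
    """Geometric bisection on the power ratio for the target BER."""
    lo, hi = 1.0, 1e6
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if qam_ber(M, mid) > target else (lo, mid)
    return 10 * math.log10(hi)

for eta in (2, 4, 6, 8, 10):
    M = 2 ** eta
    shannon_db = 10 * math.log10(2 ** eta - 1)   # s/n = 2**eta - 1
    print(eta, "bits/s/Hz  gap", round(qam_sn_db(M) - shannon_db, 1), "dB")
```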

3.10 TRELLIS CODED MODULATION (TCM)

To transport n bits of information per signaling interval, a two-dimensional (I and Q) constellation of 2^n symbols is required. Trellis coding expands the two-dimensional constellation to 2^(n+1) symbols. The constellation is partitioned into 2^(m+1) subsets (called cosets or subfamilies), which have greater voltage separation between signaling states than the 2^(n+1) constellation. Of the n bits that arrive into the trellis modulator as a group (from the serial to parallel converter), m bits enter an m/(m + 1) convolutional encoder. The m + 1 bits out of the encoder specify which of the constellation subsets will be used for signaling at the next signaling intervals. If the number of consecutive symbol pairs constrained by the convolutional encoder is p, the dimensionality of the TCM is D = 2 × 2^p. If each signaled symbol (using one of the cosets) is independent of the other symbols (no pairs, just a single set of cosets), then p = 0 and the TCM is two-dimensional (it has only the two dimensions of an individual I and Q symbol). If the set of coset symbols is consecutively chosen in single pairs, then p = 1. This defines 4D TCM, the most common form. Higher order TCM is possible (e.g., 8D with two consecutive pairs, p = 2, and 16D with three consecutive pairs, p = 3). The higher order TCMs are much more complex to decode, have much greater latency, and give only marginal error threshold improvement. They are not popular commercially. Each of the subset constellations has significantly improved noise immunity relative to the original 2^(n+1) constellation. However, the receiver must determine which of the 2^(m+1) subsets was used for the signaling interval. The Viterbi decoder is used to make that determination. In North America, the most popular trellis formats are Wei’s original (Wei, 1987) 32 4D or 128 4D implementations (Fig. 3.21 and Fig. 3.22). Trellis 32 and 128 have a 0.3- and 0.5-dB PARR advantage over 16 and 64 QAM, respectively. The trellis 32 4D and 128 4D have a 1-dB average signal-to-noise advantage over 16 and 64 QAM, respectively, while achieving improved spectral efficiency. Comparing QAM and TCM is not straightforward. TCM expands the constellation by 1 bit relative to QAM but spreads signaling out over multiple intervals. Composite transport efficiency is slightly greater than QAM since more composite bits are absorbed by the transmitter per multiple signaling intervals. When compared to QAM, the error threshold improvement is relatively small while the spectral efficiency is slightly greater. For SONET transport radios, this increased spectral efficiency was the main reason TCM was the popular choice (Table 3.5). The challenge for trellis coding demodulation is to determine the subset constellation being used for signaling at any given time. Since the particular subset used at any given signaling instant is not known initially, it must be determined based on the signaling history and a knowledge of the possible signaling states. The Viterbi decoding algorithm is an optimal method for determining the subset. For a typical 4D coder, only four subsets are used (Fig. 3.21). Since they are paired by the convolutional encoder, there are 16 (4 × 4) possible pairs. This can be diagrammed using a trellis diagram. A trellis diagram is a graph that illustrates the transitions between modulation states for a modulation method with memory. For a modulation system with memory, current states are constrained by previous states (Fig. 3.23).

Figure 3.21 Constellation decomposition for trellis (a) 32 4D and (b) 128 4D (subsets A, B, C, and D).

TABLE 3.5 Comparing Typical QAM and TCM

Relative System Gain
  16 QAM = Ref     32 TCM 2D = 0.0 dB     32 TCM 4D = 1.0 dB
  64 QAM = Ref     128 TCM 2D = 0.0 dB    128 TCM 4D = 1.0 dB
  128 QAM = Ref    256 TCM 2D = 0.0 dB    256 TCM 4D = 1.0 dB
  256 QAM = Ref    512 TCM 2D = 0.0 dB    512 TCM 4D = 1.0 dB

Spectral Efficiency, bits/s/Hz
  16 QAM = 4       32 TCM 2D = 4          32 TCM 4D = 4.5
  64 QAM = 6       128 TCM 2D = 6         128 TCM 4D = 6.5
  128 QAM = 7      256 TCM 2D = 7         256 TCM 4D = 7.5
  256 QAM = 8      512 TCM 2D = 8         512 TCM 4D = 8.5

For any set of two subsets, the next possible set of subsets is constrained by the convolutional coder to four (rather than 16) possible pairs. Each of the sets of four pairs is chosen to have the largest space between symbols in consecutive space (I and Q) and time subsets. The constraint of consecutive subsets allows the Viterbi decoder to estimate the subset used (Fig. 3.24). The Viterbi decoder stores the analog received signal I and Q coordinates at each sampling instant. If the Viterbi could store all states, it would be the optimum decoder. Since the storage must be terminated at some defined length, actual implementations are slightly suboptimal.

Figure 3.22 Trellis (a) 32 4D and (b) 128 4D modulators.

Figure 3.23 Trellis 4D subset combinations and allowable states.

Figure 3.24 Trellis (state diagram) of five pairs of consecutive subset choices.

Let us assume that the Viterbi memory length is five times the constraint length of a 4-bit convolutional encoder. The Viterbi must store 40 (5 coset pairs × 4 bits/encoder state × 2 subsets per set of encoder bits) consecutive signal samples. The Viterbi calculates and stores the squared distance between each received signal and the closest symbol in each of the 16 subsets. This squared distance is stored for all 40 signal samples. The Viterbi calculates the cumulative squared error for all possible paths across the 40 sets of subsets and picks the path with the lowest squared error (the “survivor”). This determines the subset most likely to have been used for signaling 40 samples earlier than the current sample. After 40 samples, the digital decoding decision is made based on the minimum distance to a symbol in the just determined constellation subset. Since the Viterbi must wait for many samples (40 in this example) before a signal decision is made, signal latency is significant. Also, since many samples are used, if some of the samples are so significantly flawed that an error is made in the estimated constellation subset, the error is propagated for several signal estimates (the Viterbi exhibits burst errors). At start-up after radio loss of signal, many (40 in this case) samples are required before valid outputs begin to be provided. There is a design trade-off between the Viterbi depth (for improved steady-state error performance) and dynamic performance and latency.
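The survivor search can be illustrated with a toy dynamic program (the transition table and distances below are hypothetical; the real 4D decoder pairs cosets according to the convolutional code): at each interval, the lowest cumulative squared error path into each subset is retained, and the overall minimum at the end identifies the most likely subset sequence.

```python
SUBSETS = "ABCD"
ALLOWED = {"A": "AB", "B": "CD", "C": "AB", "D": "CD"}   # hypothetical
DIST = [                       # per-interval squared distances (made up)
    {"A": 0.1, "B": 0.9, "C": 1.4, "D": 0.8},
    {"A": 1.2, "B": 0.2, "C": 0.7, "D": 1.0},
    {"A": 0.6, "B": 0.5, "C": 0.1, "D": 1.3},
    {"A": 0.3, "B": 1.1, "C": 0.9, "D": 0.4},
]

def survivor(dist, allowed):
    metric = {s: dist[0][s] for s in SUBSETS}       # cumulative squared error
    paths = {s: [s] for s in SUBSETS}
    for d in dist[1:]:
        new_metric, new_paths = {}, {}
        for s in SUBSETS:
            # best predecessor among the subsets allowed to transition into s
            prev = min((p for p in SUBSETS if s in allowed[p]),
                       key=lambda p: metric[p])
            new_metric[s] = metric[prev] + d[s]
            new_paths[s] = paths[prev] + [s]
        metric, paths = new_metric, new_paths
    best = min(SUBSETS, key=lambda s: metric[s])
    return paths[best], metric[best]

print(survivor(DIST, ALLOWED))    # e.g. (['A', 'B', 'C', 'A'], ~0.7)
```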

3.11 ORTHOGONAL FREQUENCY DIVISION MULTIPLEXING (OFDM)

If a digital transmission channel is subjected to noise and intersymbol interference (usually caused by multipath propagation), the conventional approach is to use an appropriately modulated single carrier operating at the Nyquist symbol rate at the transmitter, followed by an adaptive time domain equalizer at the receiver. While this approach is effective for fast changing multipath of low order, its use in very complex multipath environments (as in urban radio propagation in a highly reflective environment or for long-distance cable modems subjected to multiple signal reflections) is more challenging. An alternative approach is to subdivide the available channel bandwidth into a number of equal bandwidth subchannels operating at the same relatively slow symbol rate. The bandwidth of the subchannels is sufficiently narrow that their frequency response characteristics require only simple amplitude compensation, not complicated phase compensation. While this approach has been around for some time (Doeltz, Heald, and Martin, 1957), it has become popular fairly recently (Bingham, 1990; Chow et al., 1995). For example, it is the technology of choice for asymmetric digital subscriber line (ADSL) communication. Unlike previous generation analog frequency division multiplex systems that required complex analog filters, these systems avoid that complexity by separating the individual carriers by the inverse of the symbol rate.


When this separation is used, each carrier is orthogonal to all the others (the time integral of the product of any two carriers over the duration of a single symbol is zero). This allows each carrier to be extracted by digital means (typically FFT techniques). Each carrier may be modulated differently (or may be turned off entirely) if significant noise or attenuation appears at that carrier’s frequency. All of the above modulation and coding techniques are available. This technique is especially well suited to dealing with spectral noise (narrow band interference) and multiple reflection multipath distortions. However, since it uses very narrow (“sluggish”) transmission carriers, it is best suited for transmission channels whose noise and multipath distortions change slowly. It has no advantage over any other approach when subjected to broadband noise (such as spread spectrum interference or flat fading front end noise). It is well suited for obstructed urban paths, which attempt to benefit from reflections from terrain and buildings to achieve a transmission path. However, system performance is difficult to predict for these environments. It can be designed to minimize the effect of narrow band interference. Otherwise, it has no advantage over conventional modulation methods when operated on conventional unobstructed point to point microwave paths. When applied to radio transmission systems, the most significant limitation of this technology is PARR. For conventional high capacity QAM systems, the PARR is roughly 8 dB. For multicarrier systems, the PARR can be tens of decibels more. Since transmitter performance is peak power limited but system gain is average power limited, this issue can dramatically impact reliable transmission distance for multicarrier systems. This issue is the focus of much current research on radio orthogonal frequency division multiplexing (OFDM) systems.
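A minimal OFDM sketch follows (illustrative only, not a radio implementation): an inverse FFT builds the composite symbol from per-subcarrier QAM values spaced at the inverse of the symbol duration, a forward FFT recovers them exactly, and the peak-to-average ratio of the composite waveform, along with a worst-case fully aligned symbol, illustrates the PARR penalty discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                         # subcarriers
qam = (rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N))

tx = np.fft.ifft(qam) * np.sqrt(N)             # composite OFDM symbol
rx = np.fft.fft(tx) / np.sqrt(N)               # receiver FFT

print("max demapping error:", np.abs(rx - qam).max())        # ~1e-15
power = np.abs(tx) ** 2
print("PARR of this composite symbol:",
      round(10 * np.log10(power.max() / power.mean()), 1), "dB")

worst = np.fft.ifft(np.full(N, 3 + 3j)) * np.sqrt(N)         # carriers aligned
wp = np.abs(worst) ** 2
print("PARR, worst-case alignment:",
      round(10 * np.log10(wp.max() / wp.mean()), 1), "dB")   # ~18 dB for N=64
```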

3.12 RADIO CONFIGURATIONS

All the preceding examples describe communication in one direction (simplex transmission). Most actual systems communicate in two directions simultaneously (full duplex). By their very nature, they transfer data to and from other devices, which have a transmit and a receive function. However, the concepts of transmitter and receiver, as well as the transmit and receive functions, are not standardized within the telecommunications community (Fig. 3.25). A transmit signal from one device may be a receive signal on the device to which it is connected. This can be quite confusing when discussing these functions with different people at different locations. The concepts of in and out are less confusing but not as commonly used. For the following discussion, the transmit signal enters the radio transmitter and the receive signal exits the radio receiver. This convention may or may not conform to the definitions of the equipment (e.g., add/drop multiplexers or routers) to which the radio’s baseband signals are connected. Commercial fixed point to point microwave radios have two basic hardware configurations: the integrated radio (all functions in one box) and the split package radio (baseband functions in one box near the other telecommunication equipment and RF functions in another box usually collocated with the antenna). For radios that accept several different signal formats, the integrated radio is typically an all-indoor unit. If the radio supports only an IP interface, the radio may be a single package intended for all-outdoor installation (basically a split package radio with no indoor unit). The radio will have an IP interface that connects directly to a router (Fig. 3.26).

Figure 3.25 Interfacing the radio transmitter and receiver.

Figure 3.26 Two types of installation of (a) integrated and (b) split package radios.

The split package and all-outdoor configurations have advantages in an urban building environment. However, they pose operational constraints in suburban and rural tower installations. Functionally, radios are either terminal or nodal. Terminal radios connect baseband signals between the two ends of a single radio path (point to point). Nodal radios connect several end locations (drop and insert). The nodal radio connects to several antennas, which communicate with multiple far end locations. The radio may allow some traffic to pass through (act as a repeater) and may drop/insert other traffic at its location. Segregation of traffic may be via IP routing or digital cross-connect. Often, both TDM and IP traffic are transported via radio. TDM radios transport IP using TDM data blocks, and IP radios transport TDM using IP packets (pseudowire or circuit emulation). Since IP transport is asynchronous, transport of TDM traffic over IP circuits requires some form of external or encapsulated synchronization. IP radios invariably have more transport latency than TDM radios. IP latency is a function of packet size and transport bandwidth. The following paragraphs diagram the basic structures of microwave radio transmitters and receivers. These may be used in any of the above radio configurations.


Figure 3.27 Nonstandby radios (a) without and (b) with space diversity.

The simplest duplex radio is nonstandby, without or with space diversity (Fig. 3.27). A key component is the circulator (shown as a circle with three numbered ports). The circulator is a passive device that manages the transfer of microwave signals between ports. A transmit signal entering port 1 exits port 2 toward the antenna. A receive signal from the antenna entering port 2 is transferred to port 3. Another popular configuration is monitored hot standby, without or with space diversity (Fig. 3.28). The nondiversity radio receiver has an asymmetrical receive signal power splitter that typically reduces the main receiver input power by about 1 dB and the protection (offline) receiver input by 10 dB. In integrated radios, the transmitter outputs are switched using a relay switch (as shown in the figure). For split package radios, since each radio transmitter is in a separate box, the outputs are combined using a waveguide coupler and the transmit outputs are switched on and off electronically. Figure 3.29 shows simplified diagrams of the major radio configurations. The hot standby space diversity configuration that switches transmit antennas when the transmitters are switched is often used in locations where loss of an antenna (due to snow or wind loading) is a significant concern. This affects the antenna structure design. Most path clearance criteria (see Chapter 12) are between the main transmit and main receive antennas. In this case, the main transmit antenna may be the (typically lower) diversity antenna. This will cause the antennas to be placed higher on the antenna structure than would be the case with a conventional space diversity configuration. Frequency diversity (Fig. 3.30) depends on all frequencies being adequately separated so that the RF filters (on the receiver and transmitter front ends) reflect signals at frequencies other than their own back into the circulator (to propagate on down the waveguide). Frequency and space (“quad”) diversity is sometimes used for difficult paths. The configuration in Figure 3.30b is commonly used. The configuration in Figure 3.30c is the preferred configuration. It is the most powerful radio diversity configuration that is commercially available but is costlier than the former configuration because it requires same-sized antennas for main and diversity and, due to clearance requirements for the lower antenna, it usually requires a taller tower. It is used for the most difficult (e.g., long overwater or mountaintop to mountaintop) paths subjected to multipath and reflection distortions. Hybrid diversity is sometimes used when space diversity is not possible at one end of the radio link (Fig. 3.31). Angle diversity is also sometimes used if space diversity is not practical. Multiline is a method of placing several radios on the same antenna to provide several radio channels on the same radio path (Fig. 3.32). Each radio is nonstandby. Equipment protection is achieved by switching traffic from a failed radio to a separate nonstandby radio reserved for circuit restoration.

3.12.1 Cross-Polarization Interference Cancellation (XPIC)

The flexibility to increase path transmission bandwidth has created a renewed interest in cross-polarization interference cancellation (XPIC). Historically, this methodology was applied to low frequency (lower than 12 GHz) radios to maximize the number of transmission channels on a path. Significant degradation of the radio signal due to multipath or rain usually limited this methodology to relatively short paths. This configuration is a method of placing many radios on the same radio path (Fig. 3.33). Different radios using the same frequencies are operated on the antenna horizontal and vertical polarizations.

Figure 3.28 Hot standby radios (a) without and (b) with space diversity.

The vertical signal receiver must sense and cancel the horizontal signal using its frequency, and vice versa for the horizontal signal receiver. Every received frequency signal must appear at two receivers. This is done by using hybrids before the receiver lineup (a technique in older systems that reduced system gain) or by taking an output from one receiver and connecting it to another (a modern approach impacting reliability). XPIC can only reduce the cross-polarized signal by about 20 dB, and at least 40-dB channel isolation is required for typical high order QAM to avoid impacting performance (inducing “dribbling” errors). This means precisely oriented high performance antennas with high cross-polarization discrimination are required. Maintaining antenna alignment over time is difficult, and some operators assume only 20-dB cross-polarization discrimination for design purposes even when very high polarization discrimination antennas are used. XPIC radios are sometimes used in the multiline configuration for integrated package radios. They are also used for split package or all-outdoor radios to provide two nonstandby radio channels using one antenna. This minimizes tower loading and leasing costs and is popular in dense urban high frequency networks. High frequency radio paths (higher than 10 GHz) are naturally short due to rain and multipath limitations. However, spectrum congestion is becoming greater in urban areas. Cross-polarization-cancelling radios increase the flexibility of radio channel selection while minimizing radio transceiver sparing requirements.
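The cancellation idea can be sketched in a few lines (an illustrative baseband model with an assumed −20-dB coupling; real XPIC uses adaptive filters operating on the demodulated signals): the vertical branch is correlated against the horizontal signal to estimate the leakage coefficient, which is then subtracted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
v = rng.choice([-1.0, 1.0], n)             # vertical-polarization data
h = rng.choice([-1.0, 1.0], n)             # horizontal-polarization data

leak = 0.1                                 # assumed -20 dB cross-polar coupling
rx_v = v + leak * h + 0.01 * rng.standard_normal(n)

c_hat = np.dot(rx_v, h) / np.dot(h, h)     # estimate the leakage coefficient
cleaned = rx_v - c_hat * h                 # cancel the horizontal interference

def residual_db(x):
    return 10 * np.log10(np.mean((x - v) ** 2))

print("residual before XPIC:", round(residual_db(rx_v), 1), "dB")
print("residual after  XPIC:", round(residual_db(cleaned), 1), "dB")
```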

Figure 3.29 Nonstandby and hot standby radios without and with space diversity.

Figure 3.30 (a) Frequency diversity without space diversity and (b,c) quad diversity (frequency diversity with space diversity).

Figure 3.31 Hybrid diversity (frequency diversity with asymmetrical space diversity) and angle diversity.

3.13 FREQUENCY DIVERSITY AND MULTILINE CONSIDERATIONS

Historically, high capacity 6-GHz radio links used multiple transmitter/receiver pairs on the same path to increase overall path transmission capacity on “long-haul” circuits. Interest in increasing the capacity of short-haul high frequency radio paths has led to more designs with multiple transmitters/receivers on the same path. When multiple transmitters and receivers are placed on the same radio path, two degradations (of receiver threshold) are possible. Out-of-band transmitter noise can affect receivers operating on nearby channels; this potential problem is resolved by filtering the transmitter output. Very slight amplitude nonlinearity in waveguide flanges (multiple flanges occur in the long waveguide runs common in low frequency systems), ferrite-combining circulators, or receiver front end amplifiers produces intermodulation products from the multiple transmitter signals; those products may be predicted and eliminated using the procedures described in the following paragraphs. When two or more transmitters share the same antenna as two or more receivers, intermodulation interference can occur (Fig. 3.34). This intermodulation can occur due to nonlinearities between the transmitters and receivers (Fig. 3.35). If a composite signal made of several discrete signals of slightly different frequencies (e.g., A, B, C, . . .) is passed through a nonlinear device, intermodulation products are produced at the output. If the nonlinearity is constant and relatively small compared to the normal linear output components, the input Vi to output Vo relationship may be characterized by a low order polynomial of Vi. The even-order components have frequencies much higher than the original signals. The odd-order products are similar in frequency to the original signals and can cause undesired interfering signals (Table 3.6). If high order products are present, the low order products are also present. Typically, system nonlinearity is so small that only third-order intermodulation products need to be considered. The (2A − B) product (where A and B are any two combinations of transmit center frequencies) applies to all frequency diversity systems. The (A + B − C) product (where A, B, and C are any three combinations of transmit center frequencies) applies only to multiline systems (systems with multiple transmitters on the same waveguide). On the basis of Monte Carlo simulation, the expected spectral density of the intermodulation components is shown in Figure 3.36. When performing intermodulation checks with channel center frequencies, products as far away as one channel frequency should be considered as potential interference. If the intermodulation is likely to occur in the receivers (typical for high frequency radio designs with short or no waveguide runs), a filter can be placed in front of the receivers to eliminate the potentially interfering transmit signals (band-pass filters can be used to filter the receiver so that only the desired signal enters the receiver preamplifier). However, if the nonlinearities are in the waveguide joints or the circulators (common for complex low frequency systems), there are only two choices.

Figure 3.32 Multiline (a) without and (b) with space diversity.

Figure 3.33 Multiline cross-polarization interference cancellation (XPIC).

Figure 3.34 Frequency diversity “2A − B” intermodulation interference.

The simplest is to predict the interfering signal (i.e., perform a “2A − B” calculation under the assumption that the nonlinearity is only third and fifth order) using Table 3.6 and choose transmit or receive channel center frequencies that do not produce intermodulation products near the receiver frequencies (an intermodulation product is typically assumed to be a spurious carrier frequency within the victim receiver’s receive channel). If appropriate frequencies are not available, the intermodulation interference can be eliminated by placing the transmitters on one antenna and the receivers on another. The antenna to antenna isolation is adequate to keep the transmitter products from entering the receivers. For multiline systems with more than four duplex (“go/return”) channels, separate antennas are usually required.
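The screening procedure can be illustrated with a short sketch (the channel plan below is hypothetical, chosen only to show the bookkeeping): every 2A − B and A + B − C combination of transmit centers from Table 3.6 is formed and flagged if it falls within half a channel bandwidth of a receive center.

```python
from itertools import permutations

tx = [6020.0, 6100.0, 6180.0]      # hypothetical transmit centers, MHz
rx = [6260.0, 6340.0]              # hypothetical receive centers, MHz
half_bw = 15.0                     # half of a nominal 30-MHz channel

products = set()
for a, b in permutations(tx, 2):
    products.add(2 * a - b)                    # 2A - B products
for a, b, c in permutations(tx, 3):
    products.add(a + b - c)                    # A + B - C products

for f in sorted(products):
    hits = [r for r in rx if abs(f - r) <= half_bw]
    if hits:
        print(f"{f:.1f} MHz falls in the receive channel at {hits[0]:.1f} MHz")
```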

Figure 3.35 Transfer functions of (a) linear [Vo = C Vi] and (b) nonlinear [Vo = C1 Vi + C2 (Vi)^2 + C3 (Vi)^3 + C4 (Vi)^4 + C5 (Vi)^5 + ...] devices.

Figure 3.36 Low order intermodulation spectrum (normalized spectrum power versus frequency relative to channel center for the transmitter, 2A − B, and A + B − C spectra).

3.14 TRANSMISSION LATENCY

Some radio applications (such as simulcast and transfer trip circuits) are quite sensitive to transmission delay. Radio transmission delay is typically a function of forward error correction and block interleavers, as well as time slot interchange buffers used in cross-connects and first-in first-out (FIFO) buffers used in IP transport. In general, the higher the transmission speed of the radio, the lower the latency. The following are typical “ball park” estimates of single hop (transmitter to receiver) transmission latency:


TABLE 3.6 Low Order Intermodulation Products

Third-Order Products: A + B − C; 2A − B

Fifth-Order Products: A + B + C − D − E; A + B + C − 2D; 2A + B − C − D; 2A + B − 2C; 3A − B − C; 3A − 2B

Seventh-Order Products: A + B + C + D − E − F − G; A + B + C + D − 2E − F; A + B + C + D − 3E; 2A + B + C − D − E − F; 2A + B + C − 2D − E; 2A + B + C − 3D; 2A + 2B − C − D − E; 2A + 2B − 2C − D; 2A + 2B − 3C; 3A + B − C − D − E; 3A + B − 2C − D; 3A + B − 3C; 4A − B − C − D; 4A − 2B − C; 4A − 3B

Air (radio): 5.4 μs/mile (1 ms per 190 miles)
Fiber-optic cable (optical): 8.3 μs/mile (1 ms per 120 miles)
Voice frequency channel bank (FDM looped at group distribution bay): 180–290 μs delay over the range 1000–2600 Hz (relative to 2000 Hz)
FDM master group/super group filters: 75–200 μs absolute delay
DS1 channel bank:
  No buffering, no cross-connect: 250 μs
  Buffering with cross-connect: 35 ms
M13 multiplexer (DS1 to DS3 to DS1): 50 μs
SONET add/drop multiplexer (OC-3 to OC-3 connection):
  VT1.5 cross-connect: 30–50 μs
  STS-1 cross-connect: 10–25 μs
  DS1 to OC-3 to DS1: 25–200 μs
  DS3 to OC-3 to DS3: 140–200 μs
  STS-1 to OC-3 to STS-1: 40–50 μs
Radio (one transceiver):
  2 DS1 to 2 DS1: 120–220 μs
  4 DS1 to 4 DS1: 60–120 μs
  8 DS1 to 8 DS1: 30–60 μs
  12 DS1 to 12 DS1: 20–40 μs
  16 DS1 to 16 DS1: 15–30 μs
  1 DS3 to 1 DS3: 75–200 μs
  2 DS3 to 2 DS3: 45–110 μs
  3 DS3 to 3 DS3: 35–85 μs
  1 STS-1 to 1 STS-1: 85–200 μs
  1 STS-3 to 1 STS-3: 55–85 μs
  OC3 to OC3: 65–95 μs
  Ethernet to Ethernet: 100–400 μs
  Pseudowire to pseudowire: multiple milliseconds

Ethernet test results are highly variable, depending on the block size and the test methodology. Most vendors use Internet Engineering Task Force (IETF) RFC 2544 as the Ethernet testing method. The latency of the emerging Ethernet and pseudowire (TDM over Ethernet) circuits is currently highly variable from manufacturer to manufacturer.
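A small bookkeeping sketch follows (illustrative only; it simply reuses midpoints of the ranges quoted above, and the path length and hop count are assumptions) to estimate end-to-end delay over a multi-hop route.

```python
# Per-hop contributions taken from the "ball park" list above (midpoints).
PER_HOP_US = {
    "air, 25-mile path": 25 * 5.4,              # 5.4 us per mile
    "radio transceiver, 1 DS3": (75 + 200) / 2,
}

hops = 2
total = hops * sum(PER_HOP_US.values())
for name, value in PER_HOP_US.items():
    print(f"{name:28s} {value:7.1f} us per hop")
print(f"estimated total for {hops} hops: {total / 1000:.2f} ms")
```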

3.15 AUTOMATIC TRANSMITTER POWER CONTROL (ATPC)

Unlike analog microwave radios, digital radios exhibit baseband performance that does not change with received signal level (RSL) until the RSL is within a few decibels of the radio BER threshold. Digital radio operators soon took advantage of this fact to facilitate frequency-coordinating microwave radios into areas already densely populated with other microwave radios (Vigants, 1975). Automatic transmit power control (ATPC) is a feature of a digital microwave radio link that adjusts transmitter output power based on the varying signal level at the receiver. ATPC allows the transmitter to be operated at a less than maximum power level (typically 10 dB lower) most of the time. This is the power level that is frequency-coordinated. When received signal fading occurs, the transmit power is increased as needed until the maximum power level is reached. In the United States, there are limits on how much reduced power level can be frequency-coordinated (10 dB), the maximum time for which maximum power can be on (5 min), what is used to monitor receiver performance (RSL, not BER), and the maximum operating frequency at which ATPC can be used (not >11 GHz) (TIA/EIA, 1994; Working Group 18, 1992). Although ATPC was originally conceived as a frequency coordination tool, some operators use it to reduce power consumption and to lengthen the life of transmitter power amplifiers (since they run cooler at the reduced power output). It has even been used to reduce 2A − B frequency diversity interference (since this intermodulation interference is very power sensitive). It is a very popular microwave radio feature.
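The control behavior can be sketched as a simple loop (the power levels, target RSL, and step size below are assumptions for illustration, not values from the text): transmit power normally sits at the reduced, coordinated level and is raised, up to the maximum, only while the far-end RSL reports a fade.

```python
NOMINAL_DBM = 20.0          # assumed coordinated (reduced) transmit power
MAX_DBM = 30.0              # assumed full power, 10 dB above coordinated
TARGET_RSL_DBM = -60.0      # assumed desired far-end RSL
STEP_DB = 1.0               # assumed adjustment per reported measurement

def atpc_step(current_tx_dbm, reported_rsl_dbm):
    """Return the next transmit power given the far-end RSL report."""
    error = TARGET_RSL_DBM - reported_rsl_dbm     # positive during a fade
    tx = current_tx_dbm + max(-STEP_DB, min(STEP_DB, error))
    return max(NOMINAL_DBM, min(MAX_DBM, tx))

tx = NOMINAL_DBM
for rsl in (-60, -62, -68, -75, -70, -61, -58, -59):   # a fade and recovery
    tx = atpc_step(tx, rsl)
    print(f"RSL {rsl:4d} dBm -> transmit power {tx:4.1f} dBm")
```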

3.16 CURRENT TRENDS

Worldwide, digital networks are evolving toward a “converged” all-IP transport solution. SONET and very high speed “SONET-like” fiber systems will remain for the foreseeable future. However, all other legacy TDM systems (DS1, DS3, E1, and E3 interfaces) as well as ATM are going away in favor of IP connectivity [ATM has been reinvented in the IP world as MPLS (Multiprotocol Label Switching)]. Support for the legacy TDM traffic during this transition, as well as maximum transport capacity, is important. This has led to the emphasis on the evolving areas discussed in the following sections.

3.16.1 TDM (or ATM) over IP

Microwave radios are evolving toward all-IP transmission. The TDM/ATM traffic will be packetized and encapsulated for transmission over IP. TDM (DS1, DS3, E1, or E3) or ATM over IP is usually termed pseudowire, Circuit Emulation over Packet (CEP), or TDM over IP (TDMoIP). Several standards are evolving:

IETF pseudowire emulation edge to edge (PWE3): RFC 3985 (architecture) and RFCs 4xxx and 5xxx for specific implementations.
IP/MPLS Forum suite of specifications.
Metro Ethernet Forum (MEF): MEF 3 (service definitions) and MEF 8 (implementation).
ITU-T: Y.1411 (ATM) and Y.1413 and Y.1453 (TDM).
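The core packetization idea can be sketched as follows: the constant-rate TDM byte stream is cut into fixed-size payloads, each prefixed with a sequence number so the far end can detect lost or misordered packets and play the stream out of a jitter buffer. The header layout and payload size below are illustrative only; they are not the exact PWE3, MEF 8, or Y.1413 encapsulations.

```python
# Illustrative pseudowire (circuit emulation) packetizer: segments a constant-rate
# TDM byte stream into fixed-size payloads with a sequence number so the far end
# can detect lost or misordered packets.  The header layout is simplified and is
# not the exact PWE3 / MEF 8 encapsulation.
import struct
from typing import Iterator

PAYLOAD_BYTES = 64   # illustrative payload size per packet

def packetize(tdm_stream: bytes, payload_size: int = PAYLOAD_BYTES) -> Iterator[bytes]:
    seq = 0
    for offset in range(0, len(tdm_stream), payload_size):
        payload = tdm_stream[offset:offset + payload_size]
        header = struct.pack("!HH", seq & 0xFFFF, len(payload))  # sequence, length
        yield header + payload
        seq += 1

if __name__ == "__main__":
    stream = bytes(range(256)) * 4                 # stand-in for a TDM byte stream
    packets = list(packetize(stream))
    print(len(packets), "packets,", len(packets[0]), "bytes each (header + payload)")
```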


TDM signals are inherently synchronous. IP packet transmission is inherently plesiochronous (“asynchronous”). This requires an IP transport product to provide a method of synchronizing the received TDM signal.

3.16.2 TDM Synchronization over IP

Another issue to be dealt with is received TDM signal frequency synchronization. The point to point TDM signal will be transported by a series of separate packets. These packets are routed and received individually. TDM signals have maximum allowable wander and jitter requirements. This requires that the TDM signal be buffered and then clocked out at the same rate as it was received at the packet source. Since the packet network is not synchronous, recovering the TDM signal clock requires additional consideration. Three methods of clock recovery are currently used.

Adaptive Clock Recovery. In this approach, the TDM clock is recovered by averaging the received TDM signal clock. When averaged over a long time, the received clock will have the same average frequency as the transmitted signal. The clock stream format may be proprietary (requiring similar source and sink devices) or a standard pseudowire flow (simplifying interoperability with third-party equipment). Typically, proprietary methods achieve better performance. Sometimes a multicast pseudowire signal is used for general clock distribution. Owing to packet arrival time variation, the short-term frequency will tend to wander. This wander is uncontrolled and may exceed the requirements of the terminating equipment. For small networks this may be satisfactory, whereas for large networks with considerable packet arrival time variation, this approach may not be satisfactory.

Differential Clock Recovery. If the network is closed, some equipment can provide for one transmission device to serve as the master clock and send a clock reference to all appended equipment. The frequency of the incoming TDM signal is differentially referenced to this master clock at the transmit end and recovered at the receive end. The disadvantage of this approach is that all equipment must support the same clock synchronization mode. At present, Sync Messaging to validate synchronization quality is not standardized.

External Synchronization. For large networks composed of different vendors' equipment or for networks composed of different operators, the most practical method may be to provide external synchronization to each transmission device. Clock networks similar to those used in SONET and SDH networks have been created. At present, Sync Messaging to validate synchronization quality is not standardized in IP radio networks.
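A highly simplified sketch of the adaptive clock recovery idea is shown below: the service clock is estimated by low-pass filtering the bit rate implied by packet arrivals, so that packet delay variation is smoothed into slow wander rather than jitter. The filter constant and packet payload size are assumed for illustration; practical implementations normally steer a digital PLL from the jitter-buffer fill level.

```python
# Simplified adaptive clock recovery sketch: estimate the far-end TDM service clock
# by low-pass filtering the rate implied by packet arrivals.  Real implementations
# typically steer a DCO/DPLL from the jitter-buffer fill level; this shows only the idea.

ALPHA = 0.01                      # smoothing factor of the averaging filter (assumed)
BITS_PER_PACKET = 1544            # payload bits carried per packet (illustrative)

def recover_clock(arrival_times_s, nominal_bps=1_544_000.0):
    """Return successive clock estimates (bit/s) from packet arrival timestamps."""
    estimate = nominal_bps
    estimates = []
    for prev, curr in zip(arrival_times_s, arrival_times_s[1:]):
        instantaneous = BITS_PER_PACKET / (curr - prev)   # rate implied by this gap
        estimate += ALPHA * (instantaneous - estimate)    # exponential average
        estimates.append(estimate)
    return estimates

if __name__ == "__main__":
    import random
    random.seed(1)
    t, times = 0.0, []
    for _ in range(2000):
        t += 0.001 + random.gauss(0, 50e-6)   # 1 ms nominal spacing plus delay variation
        times.append(t)
    print(f"final estimate: {recover_clock(times)[-1]:.0f} bit/s (nominal 1544000)")
```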

3.16.2.1 Timing over IP Connections and Transfer of Clock (TICTOC)

An IETF Working Group has been developing standards for distribution of time and frequency over IP and MPLS networks. This working group appears to be transitioning to IEEE 1588 V2 and NTP V4.

3.16.2.2 IEEE Precision Time Protocol 1588 V2 (IEEE 1588-2008 and IEC 61588 Ed. 2)

This is a frequency and time-of-day distribution protocol that is based on time-stamp information exchange in a master–slave hierarchy. Timing information originates at a Grandmaster Clock function that is usually traceable to a Primary Reference Clock (PRC) or Coordinated Universal Time (UTC). It is similar to NTP (Network Time Protocol) but offers better accuracy (fractional nanosecond precision). This standard defines the packet format for timing distribution but does not specify the actual clock recovery algorithm. Although it can be implemented end to end, use of intermediate network elements ("boundary clocks" and "transparent clocks") improves performance. This technology is primarily concerned with very accurate time stamps. Currently, the multiple queues in the process create excessive short-term bias and jitter for use in synchronizing TDM traffic in an Ethernet network. Synchronous Ethernet (ITU-T G.8261/Y.1361 and G.8262/Y.1362) is an evolving methodology whereby methods such as the aforementioned ones are used to synchronize selected nodes. Those nodes then synchronize slave nodes in a traditional hierarchical timing network. These approaches show promise for synchronizing TDM traffic in Ethernet networks. Traditional SONET/SDH (GR-436-CORE, G.803) synchronous clock distribution networks have matured over the past 20 years and are suitable for synchronizing TDM traffic in Ethernet networks.
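The heart of the IEEE 1588 exchange is four timestamps per Sync/Delay_Req round trip. Assuming a symmetric path, the slave's offset from the master and the mean path delay follow from simple arithmetic, sketched below; the numerical example values are hypothetical.

```python
# IEEE 1588 two-step exchange arithmetic.  t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.  Assuming a symmetric path,
# the slave's offset from the master and the one-way delay follow directly.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (offset_of_slave_from_master, mean_path_delay) in the same time units."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

if __name__ == "__main__":
    # Example: slave clock is 1.5 us fast, true one-way delay is 10 us (hypothetical).
    t1 = 100.0e-6
    t2 = t1 + 10.0e-6 + 1.5e-6        # master->slave delay plus slave clock offset
    t3 = 150.0e-6
    t4 = t3 + 10.0e-6 - 1.5e-6        # slave->master delay minus slave clock offset
    off, d = ptp_offset_and_delay(t1, t2, t3, t4)
    print(f"offset = {off*1e6:.2f} us, delay = {d*1e6:.2f} us")
```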

3.16.3 Adaptive Modulation

The need to increase fixed point to point microwave radio bandwidth is well known. For many radio paths, adequate fade margin exists for radio transmission using relatively high modulation formats most of the time. Traditionally, radio transmission bandwidth was fixed and the priority of all baseband signals was considered equal. However, IP traffic priority is not equal. Encapsulated TDM signals require full time operation, but many packet services can sustain short interruptions in service and still provide adequate performance.

Radios are starting to utilize adaptive modulation. When the microwave path is distortion free, the highest capacity modulation format is used. When the path experiences rain- or multipath-induced distortion, the radio transmitter and receiver dynamically reduce the modulation complexity to limit the effect of path distortions. Modulation reduction to 4 QAM or QPSK for short periods is common. Most of the time, high bandwidth is available. For short periods, part of the transmission bandwidth is blocked. The signals that must be transmitted and those that can be blocked are differentiated using one of several possible quality-of-service (QoS) bandwidth identification methods. Adaptive modulation radios are now available that can provide large bandwidth transmission most of the time but revert to low bandwidth (as low as 4 QAM) when propagation anomalies occur.

Some manufacturers use BER or BBER thresholds, and some use RSL thresholds, to trigger transmission rate changes. The error thresholds work for any degradation but require errors before a rate shift occurs. The RSL method does not sense all degradations but switches before any error degradation due to low RSL. Both methods work well and have their supporters. Since switching from high capacity to low capacity is attempted before significant errors occur (at a BER lower than the normal threshold), the high capacity circuit fade margin (and path availability) for an adaptive modulation radio will be less than that for a fixed modulation radio with the same characteristics.

Given the embryonic nature of adaptive modulation, router considerations could be important. How will the routers in the network deal with changing transmission bandwidth? Will they disable the radio path because it is "flapping"? An open issue is whether or not OSPF (Open Shortest Path First) routing information (which is based on path distance and bandwidth) must be updated to account for bandwidth capacity changes. At present, the prevailing opinion is that the rate changes are so infrequent and transient that updates are not needed.
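The switching logic can be illustrated schematically as an RSL-threshold ladder with hysteresis, as in the sketch below. The modulation formats, thresholds, and margins are assumed for illustration and do not represent any particular manufacturer's rate-shift algorithm.

```python
# Schematic adaptive-modulation controller: step down to a more robust modulation when
# the RSL approaches the current format's threshold, and step back up only after the
# signal recovers by a hysteresis margin (to avoid "flapping").  Values are illustrative.

# (format name, approximate RSL threshold in dBm for that format) -- assumed values
LADDER = [("256QAM", -66.0), ("64QAM", -72.0), ("16QAM", -78.0), ("QPSK", -84.0)]
DOWNSHIFT_MARGIN_DB = 3.0    # shift down when within this margin of the threshold
UPSHIFT_HYSTERESIS_DB = 6.0  # require this much extra margin before shifting back up

def select_modulation(current_index: int, rsl_dbm: float) -> int:
    """Return the new index into LADDER given the measured RSL."""
    _, threshold = LADDER[current_index]
    if rsl_dbm < threshold + DOWNSHIFT_MARGIN_DB and current_index < len(LADDER) - 1:
        return current_index + 1                      # degrade to a more robust format
    if current_index > 0:
        _, higher_threshold = LADDER[current_index - 1]
        if rsl_dbm > higher_threshold + DOWNSHIFT_MARGIN_DB + UPSHIFT_HYSTERESIS_DB:
            return current_index - 1                  # recover to a higher-capacity format
    return current_index

if __name__ == "__main__":
    idx = 0
    for rsl in (-50, -64, -70, -76, -70, -60, -50):   # simulated fade and recovery
        idx = select_modulation(idx, rsl)
        print(f"RSL {rsl:4d} dBm -> {LADDER[idx][0]}")
```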

3.16.4 Quality of Service (QoS) [Grade of Service (GoS) in Europe]

QoS refers to resource reservation control, not the achieved service quality. QoS is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. QoS guarantees are important if the network capacity is insufficient. In the absence of network congestion, QoS mechanisms are not required. Standards for quality and availability of IP circuits are still in the early stages of development. The ITU-R Report F2058 outlines design considerations for wireless IP transmission. The most significant factors mentioned are bandwidth, delay, and lost packets. The ITU-T Recommendation Y.1540 defines the QoS/CoS (class of service) parameters. Currently, the parameters of interest include successful packet transfer, errored packets, lost packets, spurious packets, average packet delay, and packet delay variation. As noted in the ITU-R Report F2058 (Chung et al., 2001), “There are two schemes to achieve QoS and Cos. One is the prioritized scheme which offers a priority control among the service classes without specifying service specific parameters. The other is a parameterized scheme to assure required communication quality parameters. Only a parameterized scheme has the possibility to guarantee QoS. . . . CoS control is often explained by an ‘airplane model.’ Service quality is classified into several service classes just like airplane seats which are classified into first, business and economy class. Higher service classes than usual best effort class are used to offer high-level services, e.g. ensuring minimum delay time or available bandwidth. High-quality service is provided if the request from the user is accepted. Admission control or policy control methods are used to determine which service classes are allowed for data transfer. According to the service class, each data transfer is transferred based on that quality. However, the amount of traffic carried in such higher service classes is limited because the available bandwidth is limited.”


TABLE 3.7 IP Classes of Service

Network Performance Parameter | Nature of Network Objective | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 (Unspecified)
IPTD | Upper bound on the mean IPTD | 100 ms | 400 ms | 100 ms | 400 ms | 1 s | Undefined
IPDV, ms | Upper bound on the 1 × 10−3 (0.999) quantile of IPTD minus the minimum IPTD | 50 | 50 | Undefined | Undefined | Undefined | Undefined
IPLR | Upper bound on the packet loss probability | 1 × 10−3 | 1 × 10−3 | 1 × 10−3 | 1 × 10−3 | 1 × 10−3 | Undefined
IPER | Upper bound | 1 × 10−4 | 1 × 10−4 | 1 × 10−4 | 1 × 10−4 | 1 × 10−4 | Undefined

Abbreviations: IPTD, IP Packet Transfer Delay; IPDV, IP Packet Delay Variation; IPLR, IP Packet Loss Ratio; IPER, IP Packet Error Ratio. The suggested measurement time is 1 min.

Currently, there are several internationally established methods of QoS/CoS: the IP Differentiated Services (DiffServ) field in the IP header; IEEE 802.1d Annex H2 tagging and 802.1p (MAC layer 2) (similar to DiffServ); the TCP/IP ToS field (layer 3 or 4); IP Integrated Services (IntServ); the Resource reSerVation Protocol (RSVP); and IETF MPLS. Informal and proprietary methods are also sometimes used. A prospective user of an adaptive modulation radio should confirm whether his or her preferred method of QoS is supported by the radio.

The ITU-T Recommendation Y.1541 defines bounds on network performance between User Network Interfaces (UNIs). Six classes of service are defined. Class 0 is the strictest; it is for real time, jitter sensitive, highly interactive traffic using constrained routing and distance. Class 5 is the least strict (with no defined objectives); it is for traditional applications of IP networks using any route or path. The provisional values in Table 3.7 are defined.

Packet-switched networks (PSNs) operate on a contention basis. If bottlenecks occur in the network, packets are queued or dropped when transmission bandwidth is unavailable. QoS/CoS provides a means of prioritizing traffic when this transmission dilemma occurs. For legacy TDM radios, baseband transmission bandwidth is fixed. With the new generation of IP radios, baseband transmission bandwidth can be variable (although the RF signal bandwidth remains constant). Often, microwave radios have more fade margin than necessary to support spectrally efficient modulation formats. Error-free transmission at relatively high data rates is quite possible much of the time. Multipath and rain outages occur only for a small fraction of the total transmission time. With IP radios, baseband transmission bandwidth can be varied to suit the real-time transmission qualities of the radio path.
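The contention behavior described above can be illustrated with a strict-priority scheduler: when offered traffic momentarily exceeds the link rate, high priority packets are served first and the lower classes absorb the queueing delay or loss. The class names, queue depth, and service pattern below are illustrative only.

```python
# Strict-priority queueing sketch: when offered traffic exceeds the link capacity,
# higher classes are served first and the lowest class absorbs the queueing or loss.
# Class names and queue limits are illustrative, not taken from any standard profile.
from collections import deque

QUEUE_LIMIT = 8   # per-class queue depth (packets) before tail drop

class PriorityScheduler:
    def __init__(self, classes=("EF", "AF", "BE")):        # highest priority first
        self.queues = {c: deque() for c in classes}
        self.dropped = {c: 0 for c in classes}

    def enqueue(self, cls: str, packet: str) -> None:
        q = self.queues[cls]
        if len(q) >= QUEUE_LIMIT:
            self.dropped[cls] += 1                          # congestion: tail drop
        else:
            q.append(packet)

    def dequeue(self):
        for cls, q in self.queues.items():                  # serve strictly by priority
            if q:
                return cls, q.popleft()
        return None

if __name__ == "__main__":
    sched = PriorityScheduler()
    for i in range(10):                                     # oversubscribe two classes
        sched.enqueue("EF", f"voice-{i}")
        sched.enqueue("BE", f"bulk-{i}")
    for _ in range(5):                                      # link can only send 5 packets now
        print(sched.dequeue())
    print("dropped:", sched.dropped)
```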

REFERENCES

Aaron, M. R. and Tufts, D. W., "Intersymbol Interference and Error Probability," IEEE Transactions on Information Theory, Vol. 12, pp. 26–34, December 1966. Abramowitz, M. and Stegun, I., Handbook of Mathematical Functions (NBS AMS 55, seventh printing with corrections). Washington, DC: US Government Printing Office, pp. 931–933, 1968.


Abreu, G., “Very Simple Tight Bounds on the Q-Function,” IEEE Transactions on Communications, Vol. 60, pp. 2415–2420, September 2012. Amoroso, F., “The Bandwidth of Digital Data Signals,” IEEE Communications Magazine, Vol. 18, pp. 13–24, November 1980. Aschoff, V., “The Early History of the Binary Code,” IEEE Communications Magazine, Vol. 21, pp. 4–10, January 1983. Assalini, A. and Tonello, A. M., “Improved Nyquist Pulses,” IEEE Communications Letters, Vol. 8, pp. 87–89, February 2004. Baccetti, B., Raheli, R. and Salerno, M., “Fractionally Spaced Versus T-Spaced Adaptive Equalization for High-Level QAM Radio Systems,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 2, pp. 31.1.1–31.1.5, November 1987. Bayless, J. W., Collins, A. A. and Pedersen, R. D., “The Specification and Design of Bandlimited Digital Radio Systems,” IEEE Transactions on Communications, Vol. 27, pp. 1763–1770, December 1979. Beaulieu, N. C., Tan, C. C. and Damen, M. O., “A Better Than Nyquist Pulse,” IEEE Communications Letters, Vol. 5, pp. 367–368, September 2001. Bennett, W. R. and Davey, J. R., Data Transmission. New York: McGraw-Hill, 1965. Berlekamp, E. R., Peile, R. E. and Pope, S. P., “The Application of Error Control to Communications,” IEEE Communications Magazine, Vol. 25, pp. 44–57, April 1987. Berrou, C., Glavieux, A. and Thitimasjshima, P., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes,” Proceedings, IEEE International Conference on Communication, pp. 1064–1070, May 1993. Bhargava, V. K., “Forward Error Correction Schemes for Digital Communications,” IEEE Communications Magazine, Vol. 21, pp. 11–19, January 1983. Bingham, J. A. C., “Multicarrier Modulation for Data Transmission: An Idea Whose Time Has Come,” IEEE Communications Magazine, Vol. 28, pp. 5–14, May 1990. Birafane, A., Mohamad, E., Kouki, A. B., Helaoui, M. and Ghannouchi, F. M., “Analyzing LINC Systems,” IEEE Communications Magazine, Vol. 11, pp. 59–71, August 2010. Borgne, M., “Comparison of High-Level Modulation Schemes for High-Capacity Digital Radio Systems,” IEEE Transactions on Communications, Vol. 33, pp. 442–449, May 1985. Bose, R. C. and Ray-Chaudhuri, D. K., “On a Class of Error-Correcting Binary Group Codes,” Information and Control , Vol. 3, pp. 68–79, March 1960. Breed, G., “Analyzing Signals Using the Eye Diagram,” High Frequency Electronics, Vol. 4, pp. 50–53, November 2005. Brink, S. T., “Coding over Space and Time for Wireless Systems,” IEEE Communications Magazine, Vol. 13, pp. 18–30, August 2006. Brunner, K. S. and Weaver, C. F., “A Comparison of Synchronous and Fractional-Spaced DFE’s in a Multipath Fading Environment,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 1, pp. 44.4.1–44.4.5, November 1988. Cahn, C. R., “Combined Digital Phase and Amplitude Modulation Communication System,” IRE Transactions on Communications, Vol. 8, pp. 150–155, September 1960. Campopiano, C. N. and Glazer, B. G., “A Coherent Digital Amplitude and Phase Modulation System,” IRE Transactions on Communications, Vol. 10, pp. 90–95, March 1962. Chow, J. S., Cioffi, J. M. and Bingham, J. A. C., “A Practical Discrete Multitone Transceiver Loading Algorithm for Data Transmission over Spectrally Shaped Channels,” IEEE Transactions on Communications, Vol. 43, pp. 357–363, October 1995. Chung, S-Y., Forney, G. D., Jr., Richardson, T. J. and Urbanke, R., “On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit,” IEEE Communications Letters, Vol. 
5, pp. 58–60, February 2001. Costello, D. J. and Forney, G. D., Jr., “Channel Coding: The Road to Channel Capacity,” Proceedings of the IEEE, pp. 1150–1177, June 2007.


Craig, J. W., “A New, Simple and Exact Result for Calculating the Probability of Error for TwoDimensional Signal Constellations,” Military Communications Conference (MILCOM) Record, Vol. 2, pp. 25.5.1–25.5.5, November 1991. Dinn, N. F., “Digital Radio: Its Time Has Come,” IEEE Communications Magazine, Vol. 18, pp. 6–12, November 1980. Doeltz, M. L., Heald, E. T. and Martin, D. L., “Binary Data Transmission Techniques for Linear Systems,” Proceedings of the IRE, pp. 656–661, May 1957. Dong, X., Beaulieu, N. C. and Wittke, P. H., “Signaling Constellations for Fading Channels,” IEEE Transactions on Communications, Vol. 47, pp. 703–714, May 1999. Elias, P., “Coding for Noisy Channels,” IRE Convention Record, Part 4, pp. 37–46, March 1955. Forney, G. D., Jr., Concatenated Codes. Cambridge: MIT Press, 1966. Forney, G. D., Jr., “Convolutional Codes I: Algebraic Structure,” IEEE Transactions on Information Theory, Vol. 16, pp. 720–737, November 1970. Forney, G. D., Jr., “Correction to “Convolutional Codes I: Algebraic Structure”,” IEEE Transactions on Information Theory, Vol. 17, p. 360, May 1971a. Forney, G. D., Jr., “Burst-Correction Codes for the Classic Bursty Channel,” IEEE Transactions on Communications Technology, Vol. 19, pp. 772–781, October 1971b. Forney, G. D., Jr., “The Viterbi Algorithm,” Proceedings of the IEEE, pp. 268–278, March 1973. Forney, G. D., Jr., “Convolutional Codes II. Maximum-Likelihood Decoding,” Information and Control , Vol. 25, pp. 222–266, July 1974. Forney, G. D., Jr., “Coset Codes—Part I: Introduction and Geometrical Classification,” IEEE Transactions on Information Theory, Vol. 34, pp. 1123–1151, September 1988a. Forney, G. D., Jr., “Coset Codes—Part II: Binary Lattices and Related Codes,” IEEE Transactions on Information Theory, Vol. 34, pp. 1152–1187, September 1988b. Forney, G. D., Jr., “Multidimensional Constellations—Part II: Voronoi Constellations,” IEEE Journal on Selected Areas in Communications, Vol. 7, pp. 941–958, August 1989. Forney, G. D., Jr., “Combined Equalization and Coding Using Precoding,” IEEE Communications Magazine, Vol. 29, pp. 25–34, December 1991. Forney, G. D., Jr. and Ungerboeck, G., “Modulation and Coding for Linear Gaussian Channels,” IEEE Transactions on Information Theory, Vol. 44, pp. 2384–2415, October 1998. Forney, G. D., Jr. and Wei, L.-F., “Multidimensional Constellations—Part I: Introduction, Figures of Merit, and Generalized Cross Constellations,” IEEE Journal on Selected Areas in Communications, Vol. 7, pp. 877–892, August 1989. Forney, G., Jr., Gallager, R., Lang, G., Longstaff, F. and Qureshi, S., “Efficient Modulation for BandLimited Channels,” IEEE Journal on Selected Areas in Communications, Vol. 2, pp. 632–647, September 1984. Foschini, G. J., Gitlin, R. D. and Weinstein, S. B., “Optimization of Two-Dimensional Signal Constellations in the Presence of Gaussian Noise,” IEEE Transactions on Communications, Vol. 22, pp. 28–38, January 1974. Franks, L. E., “Further Results on Nyquist’s Problem in Pulse Transmission,” IEEE Transactions on Communications Technology, Vol. 16, pp. 337–340, April 1968. Franks, L. E., “Carrier and Bit Synchronization in Data Communication—A Tutorial Review,” IEEE Transactions on Communications, Vol. 28, pp. 1107–1121, August 1980. Friis, H.T., “Noise Figures of Radio Receivers,” Proceedings of the IRE, pp. 419–422, July 1944. Gallagher, R. G., “Low Density Parity Check Codes,” IRE Transactions on Information Theory, Vol. 8, pp. 21–28, January 1962. Gibby, R. A. and Smith, J. 
W., “Some Extensions of Nyquist’s Telegraph Transmission Theory,” Bell System Technical Journal , Vol. 44, pp. 1487–1510, September 1965. Gitlin, R. D. and Weinstein, S. B., “Fractionally-Spaced Equalization: An Improved Digital Transversal Equalizer,” Bell System Technical Journal , Vol. 60, pp. 275–296, February 1981.


Golay, M. J. E., “Notes on Digital Coding,” Proceedings of the IRE, p. 657, June 1949. Goldberg, B., “Applications of Statistical Communications Theory,” IEEE Communications Magazine, Vol. 19, pp. 26–33, July 1981. Groe, J., “Polar Transmitters for Wireless Communications,” IEEE Communications Magazine, Vol. 45, pp. 58–63, September 2007. Hamming, R. W., “Error Detecting and Error Correcting Codes,” Bell System Technical Journal , Vol. 29, pp. 147–160, April 1950. Hanco*ck, J. C. and Lucky, R. W., “Performance of Combined Amplitude and Phase Modulated Communications System,” IRE Transactions on Communications, Vol. 8, pp. 232–237, December 1960. Hartley, R., “Transmission of Information,” Bell System Technical Journal , Vol. 7, pp. 535–563, July 1928. Hocquenghem, A., “Codes Correcteurs d’Erreurs,” Chiffres, Vol. 2, pp. 147–156, September 1959. Imai, H. and Hirakawa, S., “A New Multilevel Coding Method Using Error-Correcting Codes,” IEEE Transactions on Information Theory, Vol. 23, pp. 371–377, May 1977. Johannes, V. I., “Improving on Bit Error Rate,” IEEE Communications Magazine, Vol. 22, pp. 18–20, December 1984. Johnson, K. K., “Optimizing Link Performance, Cost and Interchangeability by Predicting Residual BER: Part I—Residual BER Overview and Phase Noise,” Microwave Journal , Vol. 45, pp. 20–30, July 2002a. Johnson, K. K., “Optimizing Link Performance, Cost and Interchangeability by Predicting Residual BER: Part II—Nonlinearity and System Budgeting,” Microwave Journal , Vol. 45, pp. 96–131, September 2002b. Kassam, S. A. and Poor, H. V., “Robust Signal Processing for Communication Systems,” IEEE Communications Magazine, Vol. 21, pp. 20–28, January 1983. Kerr, A. R. and Randa, J., “Thermal Noise and Noise Measurements—A 2010 Update,” IEEE Microwave Magazine, Vol. 11, pp. 40–52, October 2010. Khabbazian, M., Hossain, M. J., Alouini, M. and Bhargava, V. K., “Exact Method for the Error Probability Calculation of Three-Dimensional Signal Constellations,” IEEE Transactions on Communications, Vol. 57, pp. 922–925, April 2009. Kim, B., Kim, I. and Moon, J., “Advanced Doherty Architecture,” IEEE Communications Magazine, Vol. 11, pp. 72–86, August 2010a. Kim, B., Moon, J. and Kim, I., “Efficiently Amplified,” IEEE Communications Magazine, Vol. 11, pp. 87–100, August 2010b. Kizer, G. M., Microwave Communication. Ames: Iowa State University Press, pp. 589–602, 1990. Kizer, G. M., “Microwave Radio Communication,” Handbook of Microwave Technology, Volume 2, Applications. Ishii, T. K., Editor. San Diego: Academic Press, pp 449–504, 1995. Lavrador, P. M., Cunha, T. R., Cabral, P. M. and Pedro, J. C., “The Linearity-Efficiency Compromise,” IEEE Communications Magazine, Vol. 11, pp. 44–58, August 2010. Lee, J. S. and Beaulieu, N. C., “A Novel Pulse Designed to Jointly Optimize Symbol Timing Estimation Performance and the Mean Squared Error of Recovered Data,” IEEE Transactions on Wireless Communications, Vol. 7, pp. 4064–4069, November 2008. Liu, K. Y. and Lee, J., “Recent Results on the Use of Concatenated Reed-Solomon/Verterbi Channel Coding and Data Compression for Space Communications,” IEEE Transactions on Communications, Vol. 32, pp. 518–523, May 1984. Liveris, A. D. and Georghiades, C. N., “Exploiting Faster-Than-Nyquist Signaling,” IEEE Transactions on Communications, Vol. 51, pp. 1502–1511, September 2003. Lodge, J., Young, R., Hoeher, P. and Hagenauer, J., “Separable MAP ‘Filters’ for Decoding of Product and Concatenated Codes,” Proceedings, IEEE International Conference on Communication, pp. 
1740–1745, May 1993.


Lucky, R. W., “Automatic Equalization for Digital Communication,” Bell System Technical Journal , Vol. 44, pp. 547–588, April 1965. Lucky, R. W., “Techniques for Adaptive Equalization of Digital Communication,” Bell System Technical Journal , Vol. 45, pp. 255–286, February 1966. Lucky, R. W., “A Survey of the Communication Theory Literature: 1968-1973,” IEEE Transactions on Information Theory, Vol. 19, pp. 725–739, November 1973. Lucky, R. W., Salz, J. and Weldon, E. J., Jr., Principles of Data Communications. New York: McGrawHill, 1968. MacKay, D. J. C. and Neal, R. M., “Near Shannon Limit Performance of Low Density Parity Check Codes,” Electronics Letters, Vol. 32, pp. 1645–1655, August 1996. Mazo, J. E., “Faster-Than-Nyquist Signaling,” Bell System Technical Journal , Vol. 54, pp. 1451–1462, October 1975. Mosley, R. A., Director, Code of Federal Regulations (CFR), Title 47 - Telecommunication, Chapter I, Part 101.111. Washington: Office of the Federal Register, published yearly. Mueller, K. H. and Muller, M. S., “Timing Recovery in Digital Synchronous Data Receivers,” IEEE Transactions on Communications Technology, Vol. 24, pp. 516–531, May 1976. Muller, D. E., “Application of Boolean Algebra to Switching Circuit Design and to Error Detection,” IRE Transactions on Electronic Computers, Vol. 3, pp. 6–12, September 1954. Newcombe, E. A. and Pasupathy, S., “Error Rate Monitoring for Digital Communications,” Proceedings of the IEEE, pp. 805–828, August 1982 and Correction to “Error Rate Monitoring for Digital Communications”, Proceedings of the IEEE, p. 443, March 1983. Niger, Ph. and Vandamme, P., “Outage Performance of High-Level QAM Radio Systems Equipped with Fractionally-Spaced Equalizers,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 1, pp. 8.3.1–8.3.5, November 1988. Noguchi, T., Daido, Y. and Nossek, J. A., “Modulation Techniques for Microwave Digital Radio,” IEEE Communications Magazine, Vol. 24, pp. 21–30, October 1986. North, D. O., “An Analysis of the Factors Which Determine Signal/Noise Discrimination in Pulse-Carrier Systems,” RCA Report PTR-6C, 1943; also Proceedings of the IEEE, pp. 1016–1027, July 1963. Nyquist, H., “Certain Factors Affecting Telegraph Speed,” Bell System Technical Journal , Vol. 3, pp. 324–346, April 1924. Nyquist, H., “Certain Topics in Telegraph Transmission Theory,” AIEE Transactions, Vol. 47, pp. 617–644, April 1928. Prange, E., Cyclic Error-Correcting Codes in Two Symbols, Technical Report TN-57-103. Cambridge, MA: Air Force Cambridge Research Center, September 1957. Proakis, J. G. and Salehi, M., Communications Systems Engineering, Second Edition. Upper Saddle River: Prentice Hall, 2002. Qureshi, S., “Adaptive Equalization,” IEEE Communications Magazine, Vol. 20, pp. 9–16, March 1982. Qureshi, S., “Adaptive Equalization,” Proceedings of the IEEE, pp. 1349–1387, September 1985. Reed, I. S., “A Class of Multiple-Error Correcting Codes and the Decoding Scheme,” IRE Transactions on Information Theory, Vol. 4, pp. 38–49, September 1954. Reed, I. S. and Solomon, G., “Polynomial Codes over Certain Finite Fields,” Journal of SIAM , Vol. 8, pp. 300–304, June 1960. Rusek, F. and Anderson, J. B., “Multistream Faster Than Nyquist Signaling,” IEEE Transactions on Communications, Vol. 57, pp. 1329–1339, May 2009. Scanlan, J. G., “Pulses Satisfying the Nyquist Criterion,” Electronics Letters, Vol. 28, pp. 50–52, January 1992. Shannon, C. 
E., “A Mathematical Theory of Communication, Parts I, II & III,” Bell System Technical Journal , Vol. 27, pp. 379–423, and 623–656, July and October 1948. Shannon, C. E., “Communication in the Presence of Noise,” Proceedings of the IRE, pp. 10–21, January 1949.


Shannon, C. E., “Recent Developments in Communication Theory,” Electronics, Vol. 23, pp. 80–83, April 1950. Simon, M. K. and Smith, J. G., “Hexagonal Multiple Phase-and-Amplitude-Shift-Keyed Signal Sets,” IEEE Transactions on Communications, Vol. 21, pp. 1108–1115, October 1973. Sklar, B., “A Structured Overview of Digital Communications—a Tutorial Review—Part I,” IEEE Communications Magazine, Vol. 21, pp. 4–17, August 1983a. Sklar, B., “A Structured Overview of Digital Communications—a Tutorial Review—Part II,” IEEE Communications Magazine, Vol. 21, pp. 6–21, October 1983b. Szczecinski, L., Gonzalez, C. and Aissa, S., “Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping,” IEEE Transactions on Communications, Vol. 54, pp. 389–392, March 2006. Thomas, C. M., Weidner, M. Y. and Durrani, S. H., “Digital Amplitude-Phase Keying with M-ary Alphabets,” IEEE Transactions on Communications, Vol. 22, pp. 168–180, February 1974. TIA/EIA, Interference Criteria for Microwave Systems, Telecommunications Systems Bulletin TSB10-F, June 1994. Ungerboeck, G., “Channel Coding with Multilevel/Phase Signals,” IEEE Transactions on Information Theory, Vol. 28, pp. 55–67, January 1982. Ungerboeck, G., “Trellis-Coded Modulation with Redundant Signal Sets, Part I: Introduction,” IEEE Communications Magazine, Vol. 25, pp. 5–21, February 1987a. Ungerboeck, G., “Trellis-Coded Modulation with Redundant Signal Sets, Part II: State of the Art,” IEEE Communications Magazine, Vol. 25, pp. 12–21, February 1987b. Vigants, A., “Space-Diversity Engineering,” Bell System Technical Journal , Vol. 54, pp. 103–142, January 1975. Viterbi, A. J., “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm,” IEEE Transactions on Information Theory, Vol. 13, pp. 260–269, April 1967. Viterbi, A. J., “Convolutional Codes and Their Performance in Communication Systems,” IEEE Transactions on Communications Technology, Vol. 19, pp. 751–772, October 1971. Wei, L-F., “Rotationally Invariant Convolutional Channel Coding with Expanded Signal Space—Part I: 180◦ ,” IEEE Journal on Selected Areas in Communications, Vol. 2, pp. 659–671, September 1984a. Wei, L-F., “Rotationally Invariant Convolutional Channel Coding with Expanded Signal Space—Part II: Nonlinear Codes,” IEEE Journal on Selected Areas in Communications, Vol. 2, pp. 672–686, September 1984b. Wei, L-F., “Trellis-Coded Modulation with Multidimensional Constellations,” IEEE Transactions on Information Theory, Vol. 33, pp. 483–501, July 1987. Whalen, A. D., “Statistical Theory of Signal Detection and Parameter Estimation,” IEEE Communications Magazine, Vol. 22, pp. 37–44, June 1984. Widrow, B., Adaptive Filters, I: Fundamentals, Technical Report No. 6764-6. Stanford: Stanford Electronic Laboratories, Stanford University, December 1966. Wolf, J. K. and Ungerboeck, G., “Trellis Coding for Partial-Response Channels,” IEEE Transactions on Communications, Vol. 34, pp. 765–773, August 1986. Working Group 18, Automatic Transmit Power Control (ATPC), National Spectrum Managers Association (NSMA) Recommendation WG 18.91.032, April 1992. Xiong, F., Digital Modulation Techniques, Second Edition. Boston: Artech House, pp. 694–696, 2006. Yamaguchi, K. and Imai, H., “Highly Reliable Multilevel Channel Coding System Using Binary Convolutional Codes,” Electronics Letters, Vol. 23, pp. 939–941, August 1987. Yin, P., “Specifications and Definitions for Quadrature Demodulators and Receiver Design Measurements,” Microwave Journal , Vol. 45, pp. 
22–42, October 2002.

4 RADIO NETWORK PERFORMANCE OBJECTIVES

Telecommunications providers may offer a telecommunications service with a specified end to end quality of service and service availability. This availability objective is expected to cover all causes of outage. Customer service objectives are established to support this offering. Design, commissioning, and maintenance objectives are defined to support overall customer service objectives. Design objectives are the most stringent. They are estimates based on idealized average performance of many individual systems. They ignore any effects of maintenance actions or unpredictable events. They define expected median or average performance. Actual performance will vary above or below these levels. Commissioning objectives are a little more lax, in acknowledgment that actual performance is less than ideal and subject to significant variation. Maintenance objectives are the least stringent. They are intended to ensure that end to end customer objectives are maintained. The ITU concepts of differentiation of objectives by function and by different networks and equipment have been accepted worldwide. This overall concept in network design is outlined in the ITU-T (a sector of the ITU, a United Nations agency) Recommendation G.102, Transmission Performance Objectives and Recommendations, Transmission Systems and Media (ITU-T Recommendation G.102, 1993). This recommendation describes the design, commissioning, and maintenance objectives.

4.1 CUSTOMER SERVICE OBJECTIVES

Customer service objectives are often a service availability (often specified as 98% or 99.8%) end to end. Most of the availability objective is allocated to nontransmission impairments. The transmission circuits are specified in such a way as to make their impairment of the end to end circuit minor. The various components of the transmission system are then allocated a portion of this objective. Since most transmission systems support many customers, this is generally an economical approach. The customer service objective is a "not to exceed" objective.

4.2 MAINTENANCE OBJECTIVES

Maintenance objectives are more stringent than customer service objectives but less stringent than commissioning objectives (ITU-T Recommendation M.35, 1993). The difference in performance between
maintenance and commissioning objectives is usually termed system margin or aging factor. Maintenance objectives are often formalized as levels of alarm and performance with associated maintenance actions. These are "not to exceed" objectives. ITU-T general maintenance quality objectives are contained in M.2100 [Plesiochronous Digital Hierarchy (PDH)] (ITU-T Recommendation M.2100, 2003) and M.2101 (SDH) (ITU-T Recommendation M.2101, 2003). ITU-R radio-specific quality objectives are contained in F.1566-1 (ITU-R Recommendation F.1566-1, 2007). As noted in F.1566-1 (ITU-R Recommendation F.1566-1, 2007), currently there are no ITU-R unavailability (outage) maintenance objectives. The ITU-R maintenance quality objectives are based on a maintenance performance limit (MPL), which is quite similar to the bringing into service performance objective (BISPO). In accordance with M.20 (ITU-T Recommendation M.20, 1992), these objectives must be modified to determine unacceptable, degraded, and acceptable performance limits. These limits are left to the network administrations to determine. Specific objectives vary widely among administrations.

Telcordia defines North American maintenance objectives by defining an alarm type and an alarm level (Telcordia (Bellcore) Staff, 2009a). Alarm types are service affecting (SA) and nonservice affecting (NSA). An SA alarm indicates an equipment failure that causes loss of the transported (baseband) signal. An NSA alarm indicates that an equipment failure has occurred but functionality was automatically restored by backup equipment. Critical alarms are SA alarms indicating failures affecting many users or considerable bandwidth. Telcordia defines (Telcordia (Bellcore) Staff, 2000) critical alarms as failures requiring immediate corrective action independent of the time of day. Major alarms are SA alarms affecting fewer users or less bandwidth. Major alarms are failures requiring immediate attention. Minor alarms are NSA alarms or SA alarms indicating failure of few customers or little bandwidth. Action on minor alarms is usually deferred until normal operating hours. Nonalarmed is a nonalarm condition, status, or event. Typically it is not reported or recorded. Telcordia defines specific alarms and performance levels for TDM circuits. However, operators of other services are expected to define appropriate similar maintenance objectives.

Maintenance performance is directly related to equipment quality [as measured by equipment two-way mean time between failure (MTBF)] and maintenance staff performance.

System availability = Operational time / Total time
                    = Operational time / (Operational time + Outage time)
                    = MTBF / (MTBF + MTTR)
                    = 1 / (1 + MTTR/MTBF)                                        (4.1)

where

MTBF = composite mean time between failure of the cascaded components in the system; MTTR = mean time to restore (mean down time). MTTR includes the whole time from the outage occurrence until the network is restored. This includes the time to detect and diagnose the problem, acquire a working replacement, and travel to and enter the failed site. A fully qualified repair person with working spare unit is assumed available who performs maintenance without error. It is also assumed that each repair makes the system “as good as new.” Sparing philosophy, maintenance personnel training and staffing, site access, and effective fault management systems will significantly influence the actual MTTR achieved. Some authors use the term mean time to replace (MTR) for the MTTR function. MTTR is then defined as only the actual time to perform the repair (after being notified, obtaining the spare unit, and reaching the site). This is not common usage and the use of MTTR to encompass all restoral action time is preferred. In evaluating equipment specifications, it is critical that all specifications use the same MTTR value. If not, the comparisons can be quite misleading. To simplify comparing various vendors, Telcordia has suggested that MTTR be standardized to 2 h for central office equipment with separable modules, 4 h for remote-site equipment with separable modules, and 48 h if the equipment must be replaced as a complete unit or the site has limited access (Telcordia (Bellcore) Staff, 2009b).
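A short numerical illustration of Equation 4.1, including the effect of cascading several elements in series, is given below. The MTBF and MTTR figures are hypothetical and chosen only to show the arithmetic (the 2-h and 4-h MTTR values echo the Telcordia suggestions quoted above).

```python
# Numerical illustration of Equation 4.1.  MTBF and MTTR values are illustrative only.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

def cascade(availabilities):
    """Series (tandem) elements: all must be up for the system to be up."""
    total = 1.0
    for a in availabilities:
        total *= a
    return total

if __name__ == "__main__":
    radio = availability(mtbf_hours=150_000, mttr_hours=4)   # remote site, 4-h MTTR (assumed)
    mux = availability(mtbf_hours=300_000, mttr_hours=2)      # central office, 2-h MTTR (assumed)
    system = cascade([radio, mux, radio])                      # two radio ends plus a multiplexer
    print(f"radio {radio:.6f}, mux {mux:.6f}, system {system:.6f}")
    print(f"expected outage: {(1 - system) * 8760:.2f} h/yr")
```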


4.3 COMMISSIONING OBJECTIVES

Commissioning objectives are always less strict than design objectives (ITU-T Recommendation M.35, 1993). This is an acceptance that actual performance varies from designed performance due to variations in equipment and media performance, software errors in transmission and protection equipment, maintenance and operations personnel actions, and the effects of other equipment such as power and fault alarm systems. These are "not to exceed" objectives. ITU-T general commissioning objectives (bringing into service performance limits) are contained in M.2100 (PDH) (ITU-T Recommendation M.2100, 2003) and M.2101 (SDH) (ITU-T Recommendation M.2101, 2003). ITU-R radio-specific objectives are contained in F.1330-1 (ITU-R Recommendation F.1330-1, 1999). These objectives only apply to quality. Currently there are no unavailability (outage) commissioning recommendations. Telcordia offers no commissioning objectives. These are determined by the individual operator.

4.4 DESIGN OBJECTIVES

Design objectives are always made more stringent than required for actual operation. This is necessary for practical reasons. Design objectives represent average (mean) performance. Individual path performance will vary above or below this value. Design objectives are "typical" or "average" objectives. It is seldom economical to design telecommunications systems to "not-to-exceed" or "worst-case" objectives. Typically, the telecommunications systems design objectives are about 1% of the customer service objectives. The worldwide telecommunication transmission design objectives (sometimes termed engineering standards) focus on two areas: quality and availability.

4.4.1 Quality

Quality is the performance of the end to end telecommunications circuit under normal conditions. This is usually defined as the residual BER, BBER, or a percentage of error-free seconds (EFS). It is measured one-way (each direction of an end to end circuit is evaluated separately) during the time the end to end circuit is considered available. Quality is usually specified over 1 month in the ITU and 1 year for North American systems. This book uses the term quality for the performance of the end to end telecommunications circuit under normal conditions. This concept is also identified as error performance by some North American, some ITU-T, and all ITU-R documents. Worldwide, the use of the terms quality and error performance is inconsistent; usually, the preferred term is quality. Replace the word quality with error performance when appropriate.

4.4.2 Availability

Availability defines the percentage of time over which the end to end circuit achieves a minimum level of performance. It is measured two-way (each direction of an end to end circuit must meet the criterion simultaneously to be considered available). Generally, availability is specified over a 1-year period. All ITU-T and ITU-R sources use the term availability. Some North American sources such as Telcordia use the term reliability (Telcordia (Bellcore) Staff, 2002; Telcordia (Bellcore) Staff, 2005). For the ITU (ITU-T Recommendation G.821, 2002; ITU-T Recommendation G.827, 2003) and most North American telecommunications systems (Telcordia (Bellcore) Staff, 2009b), service is said to become unavailable if the transmission system has experienced severely errored seconds (SESs), loss of frame (LOF), or loss of signal (LOS) for the last 10 consecutive seconds. The 10 s before declaring the circuit unavailable are considered unavailable time once the declaration has been established. Once the service becomes unavailable, it remains so until 10 consecutive non-SESs occur, after which it becomes available. The 10 s before declaring the circuit available are considered available time once the declaration has been established. Availability is defined in both directions simultaneously. The duplex path is considered unavailable when either (or both) simplex paths are unavailable.
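The 10-s entry and exit rule can be expressed as a small state machine over per-second SES flags, as sketched below; the retroactive reclassification of the 10 transition seconds follows the definition given above.

```python
# Availability state machine per the 10-s rule: unavailable time begins at the first of
# 10 consecutive SES (those 10 s are counted as unavailable) and ends at the first of
# 10 consecutive non-SES (those 10 s are counted as available).

def classify_seconds(ses_flags):
    """Return a list of booleans: True = second counted as unavailable time."""
    unavailable = [False] * len(ses_flags)
    state_unavail = False
    run = 0
    for i, ses in enumerate(ses_flags):
        if (not state_unavail and ses) or (state_unavail and not ses):
            run += 1                      # candidate transition run grows
        else:
            run = 0
        if run == 10:                     # transition confirmed; reclassify the run
            state_unavail = not state_unavail
            for j in range(i - 9, i + 1):
                unavailable[j] = state_unavail
            run = 0
        else:
            unavailable[i] = state_unavail
    return unavailable

if __name__ == "__main__":
    # 5 good seconds, 12 SES, then 15 good seconds.
    flags = [False] * 5 + [True] * 12 + [False] * 15
    u = classify_seconds(flags)
    print("unavailable seconds:", sum(u))   # expect 12 (the SES run) counted as unavailable
```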


North American radio objectives (AT&T, 1984; TIA/EIA, 1994) depart from this definition. North American availability objectives are based on “instantaneous” path BER performance. The ITU requirement for waiting 10 s before declaring a circuit available or unavailable is not used. Unlike the ITU SES criterion [originally taken as a 10−3 BER criterion but currently a variable BBER criterion (ITU-T Recommendation G.826, 2002; ITU-T Recommendation G.828, 2000)], North American objectives use a 10−6 BER availability criterion (TIA/EIA, 1994). As with ITU, the duplex path is considered unavailable if either simplex path is unavailable. Using ITU definitions, quality (error performance) objectives include the effects of multipath fading, (short-term) upfading, residual error rate, error bursts, and (short-term) interference for radio systems. Error performance is measured one-way. Availability includes the effects of rain fading, obstruction (bulge) fading, (long-term) upfading, and (long-term) interference. These effects are measured two-way. Since North American objectives use an “instantaneous” availability definition, all performance degradations except residual errors are included in measurements of availability. In North America, quality is used to define residual error performance objectives.

4.5 DIFFERENCES BETWEEN NORTH AMERICAN AND EUROPEAN RADIO SYSTEM OBJECTIVES

Everyone agrees on the following definitions.

Quality: The performance of end to end telecommunications circuits under normal conditions. Quality criteria define residual ("background") performance thresholds (typically BER, ES, SESs, or BBER) to be met over a defined measurement period when the system is available. This is a one-way (simplex) criterion.

Availability: The percentage of time over which end to end telecommunications circuits achieve a minimum level of performance. Availability criteria define limits of path (media) and equipment outages (usually defined as an unavailability objective). This is a two-way (duplex) criterion.

Differences in the concept of availability can lead to misunderstandings of performance expectations.

4.5.1 North American Radio Engineering Standards (Historical Bell System Oriented)

Availability: It is an "on/off" criterion. All equipment and media defects are included. Equipment and media defect apportionments are predefined and individually specified. Usually, the equipment will have one set of objectives (typically MTBF and MTTR oriented) and the media will have another (typically path performance oriented), and the equipment and media (radio path) each typically have half the overall system availability objective. Path availability objectives include all predictable sources of path degradation (multipath, rain, and interference). The on/off threshold is a 10−6 BER. Often path availability objectives are specified as unavailability objectives (when the system does not meet the minimum level of performance).

Quality: It is usually defined as residual ("background") performance thresholds (typically BER, ES, SESs, or BBER) during normal operation (when the system is available).

4.5.2 European Radio Engineering Standards (ITU Oriented)

Availability: It has a 10-s measurement window. It is usually specified as an unavailability objective. This objective only includes defects that last for at least 10 s. The on/off threshold is an SES. These long-term media defects include rain and (long-term) interference, as well as equipment and maintenance outages. The mixing of equipment, maintenance, and media objectives can add confusion. The division of the objective among these three defect groups is not specified in the ITU recommendations. These decisions are left to the system designer.

Quality: Quality defects are limited to short-term defects that individually last no longer than 10 s. The on/off threshold is an SES. For microwave radio systems, this objective is limited to multipath, short-term interference, and residual error performance. Quality objectives are usually named performance objectives in the ITU recommendations.

The above discussion leads to the following differences in understanding.


Availability (two-way objectives): In North America, these objectives include all path defects. These objectives are highly defined. In Europe, these objectives include only long-term media (radio path) defects (typically rain and long-term interference degradations) and maintenance and equipment outages. These objectives vary widely among recommendations because they depend on the class of service and whether or not the system was designed before or after 2002 (the adoption of ITU-T G.826). In Europe, the proration of the objective to the media is not defined. (Some administrations use the entire objective for media defects; others only take a portion.)

Quality (one-way objectives): In North America, these standards are relatively loosely defined. These objectives do not include media (radio path) defects. When quality measurements are being taken, it is assumed that the system is operating normally (radio paths are not fading). In Europe, these objectives (termed performance objectives by the ITU-R) include only short-duration system defects. For microwave systems, the only defects covered by this objective are multipath fading (including fade margin degradation by long-term interference), short-term interference, and residual errors. These European objectives typically are the primary system objectives and are highly defined.

An SES is defined by block errors in ITU but as a BER in most North American performance standards. The ITU-R standards for SESs are formulated for SONET/SDH TDM networks. However, their use in IP networks is not currently defined. The differences in definition lead to some confusion. Given the above definitions, discussions between North American customers and international telecommunications systems suppliers can lead to misunderstandings.

In North America, a two-way availability (or unavailability) objective sets the limits for radio path performance. It includes all sources of path degradation, including multipath, rain outage, and interference. The threshold is a 10−6 BER. In Europe, radio path performance has two specifications. One specification is quality (termed performance in the ITU-R recommendations), which includes multipath and interference and is defined one-way. Two error performance objectives must be met: ESs and SESs. Because of the existence of two error performance objectives, rain fading can cause error performance degradations. For fade levels between the fade margin for ESs and the level for SESs, the error performance is degraded without regard for 10-s constraints. The amount of time the received signal is expected to be between these two levels depends on the particular levels and the statistics of the rain fading. Some designers estimate the time between these two levels to be between 5% and 15% of the time that the SES threshold is exceeded, but there is no known documentation supporting these estimates. The other European path performance specification is availability (or unavailability). It includes rain fading and interference, as well as maintenance and equipment outages. (Some path designers forget to assign a portion of the European unavailability specification to maintenance and equipment outages and erroneously give the entire objective to rain outages.) It is defined two-way. The threshold is SESs.

Notice that the terms quality and availability define different defects in the North American and European radio systems.

4.6 NORTH AMERICAN TELECOMMUNICATIONS SYSTEM DESIGN OBJECTIVES

In North America, the customer service objective is typically a 98% overall system (two-way) availability. This outage objective is not more than 175 h per year. The transmission system is allocated 1% of that outage objective (i.e., 99.98% availability). Half the objective is usually assigned to media and half to hardware and personnel (typically 99.99% each). Individual link objectives are based on a typical percentage objective per link or outage time per kilometer or mile. The objective allocations are summarized in Figure 4.1, Figure 4.2, and Figure 4.3. Details of the derivation of these summaries and the references are provided in Chapter 7.
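For reference, the percentage objectives quoted in this section convert to annual outage time as follows (98% is roughly 175 h/yr and 99.98% roughly 1.75 h/yr); the short sketch below performs the conversion.

```python
# Convert an annual availability objective to allowed outage time.
HOURS_PER_YEAR = 8760.0

def outage_hours_per_year(availability_percent: float) -> float:
    return (1.0 - availability_percent / 100.0) * HOURS_PER_YEAR

if __name__ == "__main__":
    for pct in (98.0, 99.98, 99.99, 99.999):
        hours = outage_hours_per_year(pct)
        print(f"{pct:7.3f}% -> {hours:8.3f} h/yr ({hours * 60:7.1f} min/yr)")
```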

Figure 4.1 Bell System hypothetical reference circuit. Overall objective is 99.98% average annual two-way availability end to end. Path objectives include hop diversity but not multiplex/demultiplex or path-protective switching equipment. (a) Long haul: 4000 mile (6400 km) path consisting of 150 equal-length hops 26.7 miles (42.9 km) long. (b) Short haul: 250 mile (400 km) path consisting of 10 equal-length hops 25 miles (40.2 km) long. Source: Bell System Technical Journal, pp. 2085–2116, September 1971 and pp. 1779–1796, October 1979.

Figure 4.2 Telcordia hypothetical reference circuit, average 250-mile path. The conventional assumption is that each radio hop is 25 miles long. Path objectives do not include multiplex/demultiplex or path-protective switching equipment. Source: Telcordia GR-499-CORE.

Figure 4.3 North American objectives. Availability is an instantaneous measurement. An outage occurs when a BER threshold is exceeded. Quality is only measured during "normal" conditions. Availability is per year. Quality is typically per short term measurement.

4.7 INTERNATIONAL TELECOMMUNICATIONS SYSTEM DESIGN OBJECTIVES

The ITU-T and ITU-R have established various recommendations for the design of international telecommunications systems. These relate to all transmission media. The end to end customer service objective is 98% available for high priority systems and 91% for standard priority systems (ITU-T Recommendation G.826, 2002). The following sections overview these recommendations applicable to fixed point to point microwave paths. See Chapter 7 for the derivation of this overview as well as the applicable references.

Figure 4.4 Legacy ITU-R hypothetical reference digital path for high grade performance: a 2500-km (1600-mile) path of nine 280-km digital radio sections (the number of hops per section is unspecified) between 64 kbit/s inputs/outputs. Reference path does not include multiplex/demultiplex or (path) protective switching equipment. Conventional assumptions are to assume each radio hop is 40.0 or 46.7 km long. Source: ITU-T Rec. G.821 and ITU-R Rec. F.1556-1.

Figure 4.5 Legacy ITU-R objectives. Availability objectives are per year. Quality objectives are per worst month.

4.7.1 Legacy European Microwave Radio Standards

These are for systems designed before December 2002, the adoption of G.826. See Fig. 4.4 and Fig. 4.5.

4.7.2 Modern European Microwave Radio Standards

These are for systems designed after December 2002, the adoption of G.826. See Fig. 4.6 and Fig. 4.7.

4.8 ENGINEERING MICROWAVE PATHS TO DESIGN OBJECTIVES

The process of designing a microwave path begins with the overall system design. After that has been determined, the appropriate end to end and path objectives are established. The transmission engineer's task is to design a system that meets a defined end to end objective.

Figure 4.6 Modern ITU-T hypothetical reference path: a 27,500-km (17,000-mile) path from path termination through local exchange, interexchange, and international gateway facilities in the terminating and intermediate countries. The technology or media of each section is not explicitly defined. The actual number of spans/hops of equipment is not explicitly defined. Path objectives do not include multiplex/demultiplex or protective switching equipment. Source: ITU-T Recs. G.801, G.826, G.827 and G.828 as well as ITU-R Recs. F.1668 and F.1703.

Figure 4.7 Modern ITU-R objectives. Availability objectives are per year. Quality objectives are per worst month.

The traditional approach is to divide the end to end performance objective of the hypothetical reference circuit by the total circuit distance to arrive at a per mile or per kilometer objective for an individual system. Each transmission system's path objective is taken as the per unit distance objective multiplied by the path length. While this is the historical methodology of designing paths, it often leads to uneconomical designs when applied to radio systems composed of paths of varying length. Most radio system degradations are not a linear function of path distance. (Multipath fading, both flat and dispersive, increases as distance cubed. Rain outage increases with distance up to the typical thunderstorm cell size. Obstruction fading tends to remain constant for paths exceeding 25 miles.) Slavish adherence to per mile objectives typically causes most of the system money to be spent on the long paths. The skillful transmission engineer can tailor each path's objective to respect the overall system objective while prorating the outage objective between paths in a nonlinear fashion. By increasing the objective for short paths (which are economical to improve), more performance degradation can be allowed for longer paths (which are much more costly to improve). Use of per mile or kilometer objectives with careful tailoring of objectives to balance short- and long-path requirements leads to more economical, successful designs.

Another approach commonly used is to define a typical path, determine the objectives for that path, and use those as the requirements for all paths in the network. This is commonly expressed as "all paths must meet a 99.999% availability." As with the traditional approach, this can lead to uneconomical

There are three main steps in designing a microwave path. The first step is to size the transmitter power and antennas in such a way that the path performance objectives are met. This is usually done by making path availability calculations based on path fade margin and diversity (if needed). Consideration is given to flat (thermal) fading, dispersive fading, rain fading, and inter- and intrasystem interference. There are many factors (often undefined) that significantly influence the results. The following are among the choices that must be made before objective estimation can begin.

Path engineering methodology: North American or European (ITU-R). (Atmospheric attenuation models vary slightly. Use of receiver hysteresis is rare. Antenna gains (midband or frequency specific) are not used consistently.)

Antenna height (obstruction fading) criteria: ITU-R, Bell Labs/Alcatel-Lucent, or Lenkurt/Aviant.

Terrain data source: USGS Seamless 10 m (one-third arc second), USGS Seamless 30 m (arc second), USGS Space Shuttle [Shuttle Radar Topography Mission (SRTM)], commercial, or private GIS data.

Upfading criteria: Bell Labs, ITU-R, or do not consider.

Multipath objectives: two-way (duplex) or one-way (simplex).

Transmitter output power: peak, average, or guaranteed; measured where (antenna, waveguide, amplifier).

Receiver thresholds: typical or guaranteed; BER threshold (10^−3 or 10^−6); measured where (antenna, waveguide, receiver).

Rain point rate data (>6 GHz): Crane (1980, 1996, or 2003), ITU-R (1978 [530-1], 2001 [530-10]), or other.

Rain point to path fading model (>6 GHz): Crane (which version), ITU-R (which version), or other.

Wet radome loss (>6 GHz): determine the value for a wet radome and apply it at one or both ends, or do not consider it (most engineers ignore this).


Interference: intrasystem, intersystem due to similar services, or intersystem due to dissimilar services such as FSS. (Interference allowances vary from 1 dB (single instance) to 5 dB (multiple instance) depending on the network operator.)

Field margin (additional miscellaneous path loss): used (determine value) or not used.

Choices on the above factors will significantly affect the results. After choosing the appropriate system characteristics, the path designer must pick the appropriate calculation methodologies to estimate path quality and availability. The typical choices for path performance estimates are multipath flat fading, multipath dispersive fading, and rain fading. Both Bell Labs and ITU-R have calculation methods to address these path performance limitations. The different methods provide different estimates. The following path performance limitations are well documented but typically ignored:

Wet radome attenuation (Anderson, 1975; Blevis, 1965; Burgueno et al., 1987; Effenberger and Strickland, 1986; Lin, 1973; Lin, 1975; Rummler, 1987).

Ducting upfading (Anderson and Gossard, 1955; Day and Trolese, 1950; Dougherty, 1968; Dougherty, 1979; Dougherty and Dutton, 1981; Dougherty and Hart, 1976; Dutton, 1982; England et al., 1938; Fruchtenicht, 1974; Hubbard, 1979; Ikegami, 1959; Katzin, Bauchman, and Binnian, 1947; Mahmoud, Boghdady, and El-Sayed, 1987; Schiavone, 1982; Stephansen, 1981) (Bell Laboratories, Upfade Margin and Outage Due to Upfades, unpublished results of experiments on two Palmetto, Georgia, paths, 1981).

Earth bulge (obstruction) fading (Dougherty, 1968; Dougherty and Hart, 1976; Lee, 1985; Lee, 1986; McGavin et al., 1970; Schiavone, 1981; Vigants, 1972; Vigants, 1981; Wheeler, 1977).

Industry standards for interference mitigation are currently available. However, they seldom cover multiple exposure limits; they cover only limits for a single interference case. There is no industry standard defining which methodologies must be used and which path performance limitations must be estimated. These are determined by the designer and the user. It should be remembered that there are unusual weather situations that give rise to “anomalous propagation.” These events, as with other weather-dominated phenomena, cannot be predicted but can adversely affect microwave radio propagation.

The second step is to place the transmit and the receive antennas at appropriate locations on the antenna-supporting structure in such a way that adequate path terrain clearance is achieved. This is done using agreed path clearance guidelines (see Chapter 12). These guidelines are intended to keep obstruction fading at an insignificant level. Other than the Bell Labs obstruction fading estimation procedures discussed in Chapter 12, there is no performance expectation associated with meeting these guidelines.

If the radios are operating in a licensed frequency allocation, the third step is to perform frequency planning (see Chapter 2). In North America, frequency planning is based on the T/I concept (see Chapter 14). If T/I objectives are met, the performance objectives in the first step are not significantly degraded. If the T/I objective is not met, the difference between the T/I objective and the estimated interference is simply a decibel degradation of the radio path thermal fade margin. This simplifies the recalculation of system performance to evaluate the impact of the estimated interference.


4.9 ACCURACY OF PATH AVAILABILITY CALCULATIONS

For microwave paths, the largest variation is due to multipath flat and dispersive fading and rain outage. These phenomena can vary by an order of magnitude (or more) from design estimates (Achariyapaopan, 1986; Babler, 1972; Crane, 1996; Giger, 1991; ITU-R Recommendation P.530-13, 2009; Osborne, 1977; Ranade and Greenfield, 1983; Stephansen, 1981; Vigants, 1971).

4.9.1 Rain Fading

Rain fading will be quite different from path to path and from year to year. Lin (1975) reviewed the statistics of 96 rain gauges located in a grid with 1.3-km spacing. The incidence of 100 mm/h rain was higher by a factor of 5 for the upper 25% of rain rates as opposed to the lowest 25%. In another study of four rain gauges spaced in a square with 1-km sides, Lin noted that for rain rates greater than 80 mm/h (the rates of interest for high frequency path engineering), rain rates varied by a factor of 3. He observed “ . . . on a short-term basis, the relationship between the path rain attenuation distribution and the [point] rain rate distribution measured by a single rain gauge is not unique.” Different paths in the same area will experience different average rain attenuation when averaged over the same period. The actual rain rate measured over any one specific year will be different from the average value. (Rain rates associated with times shorter than the gauge integration time, typically 1 or 5 min, clearly did not occur for each year in the observation period.) Osborne (1977) observed that the worst-case 1-year rain rates can exceed long-term averages by a multiplicative factor of 2–20. Worst-case month or hour rates can exceed long-term averages by extremely large factors. Data taken over a period of less than 10 years are generally unreliable for moderate rain rates. High rain rates are rarely observed.

4.9.2 Multipath Fading

The effect of multipath flat fading has been studied extensively (Babler, 1972; Barnett, 1974; Vigants, 1971) and is fairly well understood. There is general industry agreement as to how to calculate long-term outage associated with multipath flat fading (Vigants, 1975). What is not generally appreciated is the short-term variability of this data. Some engineers and managers assume that yearly outage calculations should be met at all times. This is simply not the case. All radio outage events are statistical and vary widely. First of all, flat fading only occurs during a few months of the year (typically the warm months). When it does occur, the fading usually occurs at night (with the exception of path reflective fading, which is usually most intense during the day). Fading varies considerably even on any given path. Over a 2-month period in the summer, Vigants (1971) observed a wide variation in multipath-influenced received signal power levels on different radio channels (frequencies) on the same 26-mile 6-GHz path. At the 40-dB fade point, on some channels, fades occurred as much as 25% more than expected. In another case, Babler (1972) observed relatively consistent performance of different radio channels (frequencies) on one antenna but significantly greater variation on another antenna. Amazingly, both antennas were 20 ft apart on the same tower of the same 28-mile 6-GHz path. For the horn antenna, for 40-dB fades, an order of magnitude difference in radio channel performance was observed. One channel experienced four times the expected fading outage over the 2-month observation time. Vigants (1975) of Bell Labs devised a multipath estimation methodology that is widely used in North America to estimate fixed point-to-point microwave radio multipath outage. Giger (1991), also of Bell Labs, observed, “It is our experience that a prediction of r [fading occurrence factor, directly related to fading outage time], based on the best available information, may still be off by an order of magnitude either above or below the measured worst month value.” Stephansen (1981) observed, “ . . . it is well known that a considerable year-to-year variability exists in measured data; for example, for deep [multipath] fades, year-to-year variations of more than a factor of 10 in time percentage are often seen.” ITU-R (ITU-R Recommendation F.1093-2, 2006) noted, “Propagation conditions vary from month to month and from year to year, and the probability of occurrence of these conditions may vary by as much as several orders of magnitude. It may therefore take some three to five years before drawing a proper conclusion on the results of a propagation experiment.”
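For orientation, the flat-fading outage model discussed above is commonly quoted in a form similar to the following sketch (a simplified rendering of the Vigants–Barnett approach as it is usually presented in the literature, not a reproduction of the Chapter 16 equations; the climate/terrain factor value is an assumption):

```python
def flat_fade_outage_fraction(fade_margin_db, path_km, freq_ghz, c_factor=1.0):
    """Approximate probability that a flat multipath fade exceeds the fade
    margin (worst-month basis), in the commonly quoted Vigants-Barnett form:
    P = 6e-7 * c * f(GHz) * d(km)^3 * 10^(-F/10)."""
    p0 = 6.0e-7 * c_factor * freq_ghz * path_km ** 3
    return p0 * 10.0 ** (-fade_margin_db / 10.0)

# Example: 42 km, 6.1 GHz path, 40 dB fade margin, average climate/terrain (c = 1).
p = flat_fade_outage_fraction(40.0, 42.0, 6.1, c_factor=1.0)
print(p, p * 30 * 24 * 3600, "seconds in the worst month")
```

As the quotations from Giger, Stephansen, and the ITU-R make clear, the measured outage on any one path and year can differ from this kind of estimate by an order of magnitude.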


4.9.3 Dispersive Fading Outage

Dispersive fading outage estimates are made based on the dispersive fade margin (DFM) concept (Dupuis, Joindot, Leclert, and Rooryck, 1979; Rummler, 1982). The advantage of this approach is that DFM can be used in the path design calculations exactly like the flat fade margin. It also allows similar bandwidth radios to be compared on their ability to discriminate against dispersive fading. One disadvantage is that the current industry practice is to use a 6.3-ns delay to characterize the reference path length (26.4 miles). Rummler (1979) showed that his typical 26.4-mile path was characterized by a median delay of 9.1 ns. The universal use of 6.3 ns (rather than 9.1 ns) as the typical path-dispersive echo delay leads to 2-dB optimistic DFM estimates. The Bellcore (now Telcordia) method of calculating DFM was developed as a method to facilitate the comparison of different radio receivers. It fails to accommodate the characteristics of actual paths. Modifying W curves to accommodate different path delays is well understood. However, there is no general industry agreement for estimating path delay on actual paths. AT&T Bell Labs attempted to introduce the concept of dispersion ratio (Rummler, 1988) to account for differences in path-dispersive fading characteristics. This concept has not been fully developed. The concept of DFM was developed using 6-GHz data. It is not clear how DFM should be modified to be applicable at other frequencies. Currently, a linear frequency dependency is assumed. Limited data do not substantiate this assumption. An issue not currently addressed by industry methodologies is receiver hysteresis. All published W curves are based on “static” measurements (BER after the receiver has recovered from any anomalous performance). It is well known (Lundgren and Rummler, 1979) that dispersive events occur quickly and can cause momentary loss of synchronization for large BERs. “Dynamic” measurements more accurately represent this actual performance. Nevertheless, these “dynamic” measurements, described in Bellcore specifications (Bellcore (Telcordia) Staff, 1989) and typical DFM measurement equipment manuals, are generally not available. The use of static W curves leads to optimistic path outage estimates, ranging from 1 dB to several decibels depending on the particular receiver. Perhaps the most significant issue with dispersive fading estimates is the variability from year to year of measured fading time. Rummler (1981) observed yearly outages as much as twice the long-term average (on different antennas on the same tower of a 26-mile 6-GHz path). Ranade and Greenfield (1983) observed 6-GHz yearly outages as much as three times the long-term average. This year-to-year variability limits the accuracy of total fading time estimates.
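Since DFM is used in path calculations exactly like the flat fade margin, the two are often combined into a composite fade margin. The sketch below shows that combination (a widely used engineering convention, not an equation quoted from this chapter):

```python
import math

def composite_fade_margin_db(flat_fm_db, dispersive_fm_db):
    """Combine flat and dispersive fade margins on a power basis:
    CFM = -10*log10(10^(-FFM/10) + 10^(-DFM/10))."""
    total = 10.0 ** (-flat_fm_db / 10.0) + 10.0 ** (-dispersive_fm_db / 10.0)
    return -10.0 * math.log10(total)

# Example: a 40 dB flat fade margin combined with a 45 dB dispersive fade margin.
print(composite_fade_margin_db(40.0, 45.0))   # about 38.8 dB
```

The composite value is always lower than either margin alone, which is why an optimistic DFM (for example, one based on the 6.3-ns reference delay or on static W curves) directly translates into an optimistic outage estimate.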

4.9.4 Diversity Improvement Factor

Estimation of the diversity improvement factor varies throughout the industry. For space and frequency diversity, engineers typically use the Vigants (Vigants, 1968; Vigants, 1975) diversity improvement factors for flat fading. There is no complete agreement on dispersive fading. Some engineers use space and frequency diversity improvement factors developed for flat fading, whereas others use factors developed by Bell Labs (Lee and Lin, 1985; Lee and Lin, 1986; Lin et al., 1988). The Bell Labs factors are more optimistic than flat fading improvement factors. Some of the flat fading models include a factor for threshold hysteresis; however, these are typically ignored. None of the dispersive diversity improvement models includes a factor for practical considerations such as threshold hysteresis. Experience suggests actual systems do not always achieve calculated diversity improvement. There are very few published statistics of calculated versus achieved radio diversity improvement. Giger (1991) observed that “ . . . the [space diversity] improvement factor I [for dispersive fading] can vary by at least an order of magnitude . . . over the [measurement] period of 1 year . . . .” Angle diversity can be implemented in different ways. There is no industry agreement on how to engineer or install these systems. Likewise, there is no agreement on what angle diversity improvement to expect. AT&T Bell Labs developed an angle diversity improvement estimate model (Giger, 1991), but it has not gained acceptance. Everyone treats angle diversity differently. Rummler and Dhafi (1989) observed, “As yet there are no algebraic formulas which permit the estimation of improvement by using angle diversity. However, recent studies by Lin have shown that improvements in performance over conventional space diversity are possible. It is clear . . . that the treatment of angle diversity lacks completeness, and further work is required. In particular, the dependence of the observed improvement factors on the nature of the path has yet to be determined.”


Everyone agrees that diversity improves microwave radio performance. However, other than for flat multipath fading, the industry is not in agreement on diversity improvement estimation.
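For reference, the Vigants space-diversity improvement factor for flat fading mentioned above is usually quoted in a form like the following sketch (the commonly published expression with its validity limits treated loosely; verify against Vigants, 1975, before relying on it, and note that the example values are assumptions):

```python
def space_diversity_improvement(sep_ft, freq_ghz, path_miles, fade_margin_db):
    """Commonly quoted Vigants space-diversity improvement factor for flat
    fading: I = 7.0e-5 * f * s^2 * 10^(F/10) / d, with s in feet, f in GHz,
    d in miles, and F the fade margin in dB."""
    return 7.0e-5 * freq_ghz * sep_ft ** 2 * 10.0 ** (fade_margin_db / 10.0) / path_miles

# Example: 30 ft vertical separation, 6.1 GHz, 26-mile path, 40 dB fade margin.
improvement = space_diversity_improvement(30.0, 6.1, 26.0, 40.0)
print(improvement)   # diversity divides the nondiversity outage by roughly this factor
```

As the section notes, measured improvement can differ from this kind of estimate by an order of magnitude, and no comparable consensus expression exists for dispersive fading or angle diversity.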

4.10 IMPACT OF FLAT MULTIPATH VARIABILITY

Paths are engineered to design objectives. Each path’s performance will vary above and below the path’s design objectives. Many propagation limitations (including multipath and rain) are believed to have a lognormal distribution. Path calculations attempt to estimate the performance mean (average) value. System or path performance can be expected to vary around the mean value M based on the standard deviation σ.

System or path performance = M ± mσ   (4.1)

About 68% of the systems can be expected to perform within one standard deviation of the mean (m = 1), 95% within two standard deviations (m = 2), and 99% within three standard deviations (m = 3). Although variation from path to path can be significant, end-to-end variation is reduced by the following relationship:

σsystem = σindividual path / sqrt(n)   (4.2)

Each path is assumed to be identical and n is the number of cascaded paths. Bellcore (Achariyapaopan, 1986) studied several microwave radio paths influenced by flat multipath fading and determined that the standard deviation of actual paths from the Vigants model (typical result) was 10.2 dB. ITU-R estimates (International Telecommunication Union—Radiocommunication Sector (ITU-R), 2007) its multipath propagation model to have a standard deviation of 5.2–7.3 dB depending on the type of path. This means that if an engineer wanted a path to be designed so that the Vigants estimated value of outage would not be exceeded for more than 10% of the paths, the path fade margin would have to be increased 13 dB beyond the Vigants fade margin. (Alternatively, it may be expected that annually only 10% of the paths will fade more than 20 times the estimated outage time when averaged over many years.) If an exceedance of only 1% of the paths is desired, the Vigants fade margin must be increased 24 dB. This is generally not practical (or necessary). Engineers use average results and expect the cumulative performance of several cascaded paths to average to the expected result.
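The 13-dB and 24-dB figures follow from treating the prediction error (in decibels) as approximately normal. A minimal Python check using only the standard library (the 10.2-dB standard deviation is the Bellcore value quoted above; the nine-path system is an assumed example):

```python
from statistics import NormalDist

sigma_db = 10.2   # standard deviation of actual paths about the Vigants model
for fraction_exceeding in (0.10, 0.01):
    z = NormalDist().inv_cdf(1.0 - fraction_exceeding)
    print(fraction_exceeding, round(z * sigma_db, 1), "dB of extra fade margin")

# Eq. 4.2: the end-to-end spread shrinks with the number of cascaded identical paths.
n_paths = 9
print("system sigma:", round(sigma_db / n_paths ** 0.5, 1), "dB")
```

The script reproduces the approximately 13 dB (10% exceedance) and 24 dB (1% exceedance) margins cited in the text, and shows why designers rely on averaging over many cascaded paths rather than padding each individual path.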

4.11 IMPACT OF OUTAGE MEASUREMENT METHODOLOGY

Multipath outage time estimates are different from outage time measurements. Multipath outage estimates attempt to predict the total time a receiver’s receive signal level is below its digital threshold (10^−3 or 10^−6 BER). Actual receiver outages are usually measured in cumulative threshold ESs. If a digital test set is used, usually the only threshold measurement available is SESs (currently, in North America, this is equivalent to a threshold of 10^−3 BER). The difference between the digital test set threshold and the threshold of interest (typically 10^−6 BER) will introduce some time measurement error. More importantly, the duration of a fade will not exactly match a second. Barnett (1974) and Vigants (1969; 1971) determined that for a typical 28.5-mile path, the average fade duration at 4, 6, or 11 GHz is given by the following:

T = 410 L for nondiversity receivers
T = 205 L for space or frequency diversity receivers

T = average duration of a multipath fade in seconds; L = square root of P; P = 10^(−FFM/10) = inverse of the flat fade margin expressed as a power ratio; FFM = flat fade margin expressed in positive dB.

For a 40-dB fade margin, the average nondiversity fade lasts for about 4 s. The diversity fade lasts half that time. If the fade margin is less than 40 dB or the path is shorter, the outage will be longer; if

Figure 4.8 Errored-second measurement error—actual outage versus threshold errored-second count (measurement error in percent versus fade margin in dB, for diversity and nondiversity receivers).

the fade margin is greater or the path longer, it will be less. Every time a fade occurs, several outage seconds are counted. If the outage is less than an integer number of seconds, an integer number of outage seconds is still counted. This creates measurement error that can be considerable for deep fades (Fig. 4.8). This “outage stretching” effect causes measurements to be longer than the estimates predict even if the actual outages exactly match the estimates. The “outage stretching” effect can be made worse by receivers that resynchronize poorly. However, measurements of well-functioning radios demonstrate that for fade durations of a few seconds, receivers resynchronize within a few tens of milliseconds. This is a relatively insignificant increase in outage time.
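A short Python illustration of the fade-duration relationship above (the fade margin value is just an example):

```python
def average_fade_duration_s(flat_fade_margin_db, diversity=False):
    """Average multipath fade duration in seconds for a typical path:
    T = 410*L (nondiversity) or 205*L (diversity), with L = sqrt(10^(-FFM/10))."""
    L = (10.0 ** (-flat_fade_margin_db / 10.0)) ** 0.5
    return (205.0 if diversity else 410.0) * L

# 40 dB fade margin: about 4.1 s without diversity, about 2.1 s with diversity.
print(average_fade_duration_s(40.0), average_fade_duration_s(40.0, diversity=True))
```

Because each such multi-second fade is rounded up to an integer number of errored seconds by the test set, the measured outage exceeds the true below-threshold time, which is the “outage stretching” effect plotted in Fig. 4.8.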

4.12 IMPACT OF EXTERNAL INTERFERENCE

In the frequency planning aspect of path engineering, the intent is to reduce external interference due to other radio systems to a nominal influence (typically not more than 1 dB threshold degradation). In many cities today, for a variety of reasons (e.g., installation errors causing reversed paths, undocumented changes in antennas or transmitters), interference can, in fact, be a significant degradation. For low frequency paths primarily influenced by flat multipath fading, a 10-dB loss in flat fade margin increases path outage time by a factor of 10. Performing a path fade margin test before commissioning the path is strongly recommended.

4.13 CONCLUSION

Network engineering has several universally accepted objectives. The most common are customer, maintenance, commissioning, and design. Of these, the relative performance objective (Ivanek, 1989) is quite different. Differentiation of performance limits is universally recognized as critical to the successful and independent operation of the various network-engineering functions. The ITU establishes general international recommendations. Telcordia has further defined these objectives for North American systems. For international connections, ITU standards are universally applied. For connections within the US Public Switched Telephone Network, Telcordia standards are imposed. For all other operations, each operator must establish its own standards for internal use.


Microwave path design objectives are achieved through the use of path calculations of typical path degradations. The choice of the objectives and the methods of estimating them vary widely. The calculations are an attempt to estimate typical path performance. Actual path performance can be expected to vary from those estimates. Those estimates do not attempt to estimate performance in unusual, atypical situations. Since all radio path performance is ultimately limited by weather conditions, unusual propagation conditions occasionally happen. As an independent microwave consultant, Thrower (1977) observed “On the philosophical side of the dB ledger, one doesn’t like to see a path fade but, being practical, you can’t make absolute predictions as to whether a path will fade or not. We live in a terrestrial environment rather than in theoretical free space. Even path testing is not an absolute way of establishing the reliability of a path. To do it properly, one would have to run a propagation test over the path, with the planned tower height and with the planned antenna sizes for a minimum of a year in order to obtain data under all environmental conditions. Even that will vary from year to year; witness, the drought stricken areas of the country for 1976–1977. Tests made, for example, in the Pacific Northwest during that winter would result in totally different results compared to “normal” wet years in that region. A planned 11-GHz radio path would be defective when weather conditions return to their more normal saturated state. It is for these reasons that one cannot and should not guarantee a path. The potential system user should be wary of those who offer to guarantee the path because it just can’t be done. The experienced systems manufacturer and the experienced consultant don’t and won’t and shouldn’t. The user should be cautioned that there is always the possibility of [excessive outages due to] fading although the path was designed using the techniques that have been found to offer best protection against fading.” The microwave propagation researcher Millington (1959) noted “ . . . where practical applications are concerned, we should not try to be too precise. For instance, some of the propagation curves that have been calculated with great accuracy from an idealized theory may give the impression that we can estimate field strengths with much greater precision than is actually feasible, and the engineer should always strive to appreciate the limits of accuracy set by the practical conditions. . . . I wish to make [my] concern [known regarding] the use of statistical results when dealing with specific situations . . . The chief difficulty in applying statistical methods arises when the basic material is very complex . . . field strength [i.e., received signal level] at a given distance from the transmitter may vary greatly from place to place . . . As a result of a measurement survey or of a prediction based on a study of ground profiles, a certain field strength will be obtained at 50% of locations . . . This may be the best scientific way of assessing the problem in general . . . But when it comes to the serving of a particular area where the terrain may be exceptionally rugged or of a particular town that is unfavorably placed, this general picture may be inadequate . . . These [field strength] curves are drawn through a spread of points, each one of which represents a time average at a given location for a specific link. 
This means that, for a point that lies a long way off the curve, the performance over the circuit to which it corresponds will on the average differ considerably from the value given by the curve at the same distance. I wish, therefore, to plead that in applying statistical methods we should keep a sense of perspective.”

REFERENCES

Achariyapaopan, T., “A Model of Geographic Variation of Multipath Fading Probability,” Bellcore National Radio Engineer’s Conference Record, pp. TA1–TA16, 1986. Anderson, I., “Measurements of 20-GHz Transmission Through a Radome in Rain,” IEEE Transactions on Antennas and Propagation, Vol. 23, pp. 619–622, September 1975. Anderson, L. J. and Gossard, E. E., “Prediction of Oceanic Duct Propagation from Climatological Data,” IRE Transactions on Antennas and Propagation, Vol. 3, pp. 163–167, October 1955. AT&T, Microwave Radio, Radio Engineering Standard, Western Electric Practices, Section 940-300-130, Issue 2, March 1984. Babler, G. M., “A Study of Frequency Selective Fading for a Microwave Line-of-Sight Narrowband Radio Channel,” Bell System Technical Journal, Vol. 51, pp. 731–757, March 1972. Barnett, W. T., “Multipath Propagation at 4, 6 and 11 GHz,” Bell System Technical Journal, Vol. 51, pp. 321–361, June 1974.


Bellcore (Telcordia) Staff, Bellcore (Telcordia) Technical Reference TR-TSY-000752, Microwave Digital Radio Systems Criteria, pp. 7–13, October 1989. Blevis, B. C., “Losses Due to Rain on Radomes and Antenna Reflecting Surfaces,” IEEE Transactions on Antennas and Propagation, Vol. 13, pp. 175–176, January 1965. Burgueno, A., Austin, J., Vilar, E. and Puigcerver, M., “Analysis of Moderate and Intense Rainfall Rates Continuously Recorded Over half a Century and Influence on Microwave Communications Planning and Rain-Rate Data Acquisition,” IEEE Transactions on Communications, Vol. 35, pp. 382–395, April 1987. Crane, R. K., Electromagnetic Wave Propagation Through Rain. New York: John Wiley & Sons, Inc., pp. 107–184, 1996. Day, J. P. and Trolese, L. G., “Propagation of Short Radio Waves Over Desert Terrain,” Proceedings of the IRE, pp. 165–175, February 1950. Dougherty, H. T., A Survey of Microwave Fading Mechanisms: Remedies and Applications, Environmental Science Services Administration Technical Report ERL 69-WPL4 . Washington, DC: US Department of Commerce, pp. 4–32, March 1968. Dougherty, H. T., “Recent Progress in Duct Propagation Predictions,” IEEE Transactions on Antennas and Propagation, Vol. 27, pp. 542–548, July 1979. Dougherty, H. T. and Dutton, E. J., The Role of Elevated Ducting for Radio Service and Interference Fields, NTIA Report 81-69 . Washington, DC: US Department of Commerce, March 1981. Dougherty, H. T. and Hart, B. A., Anomalous Propagation and Interference Fields, Office of Telecommunications Report 76-107. Bolder: Institute of Telecommunications Sciences, US Department of Commerce, pp. 20–31, December 1976. Dupuis, P., Joindot, M., Leclert, A. and Rooryck, M., “Fade Margin of High Capacity Digital Radio System,” IEEE International Conference on Communication, Vol. 3, pp. 48.6.1–48.6.5, June 1979. Dutton, E. J., “A Note on the Distribution of Atmospherically Ducted Signal Power Near the Earth’s Surface,” IEEE Transactions on Communications, Vol. 30, pp. 301–303, January 1982. Effenberger, J. A. and Strickland, R. R., “The Effects of Rain on a Radome’s Performance,” Microwave Journal , Vol. 29, pp. 261–272, May 1986. England, C. R., Crawford, A. B. and Mumford, W. W., “Ultra-Short-Wave Transmission and Atmospheric Irregularities,” Bell System Technical Journal , Vol. 17, pp. 489–519, October 1938. Fruchtenicht, H. W., “Notes on Duct Influences on Line-of-Sight Propagation,” IEEE Transactions on Antennas and Propagation, Vol. 22, pp. 295–302, March 1974. Giger, A. J., Low-Angle Microwave Propagation: Physics and Modeling. Boston: Artech House, pp. 214–218, 1991. Hubbard, R. W., Investigation of Digital Microwave Communications in a Strong Meteorological Ducting Environment, NTIA Report 79-24 . Washington, DC: US Department of Commerce, August 1979. Ikegami, F., “Influence of an Atmospheric Duct on Microwave Fading,” IEEE Transactions on Antennas and Propagation, Vol. 7, pp. 252–257, July 1959. ITU-R Recommendation F.530-12, “Propagation Data And Prediction Methods Required for the Design of Terrestrial Line-of-Sight Systems,” 2007. ITU-R Recommendation F.1093-2, “Effects of Multipath Propagation on the Design and Operation of Line-of-Sight Digital Fixed Wireless Systems,” 2006. ITU-R Recommendation F.1330-1, “Performance Limits for Bringing Into Service of the Parts of International Plesiochronous Digital Hierarchy and Synchronous Digital Hierarchy Paths and Sections Implemented by Digital Radio-Relay Systems,” 1999. 
ITU-R Recommendation F.1566-1, “Performance Limits for Maintenance of Digital Fixed Wireless Systems Operating in Plesiochronous and Synchronous Digital Hierarchy-based Paths and Sections,” 2007. ITU-R Recommendation P.530-13, “Propagation Data and Prediction Methods for the Design of Terrestrial Line-of-Sight Systems,” pp. 4–8, 13–14, 2009. ITU-T Recommendation G.102, “Transmission Performance Objectives and Recommendations, Transmission Systems and Media,” pp. 1–4, 1993.


ITU-T Recommendation G.821, “Error Performance of an International Digital Connection Operating at a Bit Rate Below the Primary Rate and Forming Part of an Integrated Services Digital Network,” 2002. ITU-T Recommendation G.826, “End-to-End Error Performance Parameters and Objectives for International, Constant Bit-Rate Digital Paths and Connections,” 2002. ITU-T Recommendation G.827, “Availability Performance Parameters and Objectives for End-to-End International Constant Bit-Rate Digital Paths,” 2003. ITU-T Recommendation G.828, “Error Performance Parameters and Objectives for International, Constant Bit-Rate Synchronous Digital Paths,” 2000. ITU-T Recommendation M.20, “Maintenance Philosophy for Telecommunication Networks,” 1992. ITU-T Recommendation M.2100, “Performance Limits for Bringing-Into-Service and Maintenance of International Multi-Operator PDH Paths and Connections, International Transport Network,” 2003. ITU-T Recommendation M.2101, “Performance Limits for Bringing-Into-Service and Maintenance of International Multi-Operator SDH Paths and Multiplex Sections, International Transport Network,” 2003. ITU-T Recommendation M.35, “Principles Concerning Line-up and Maintenance Limits,” 1993. Ivanek, F., Terrestrial Digital Microwave Communications. Boston: Artech House, pp. 21–71, 1989. Katzin, M., Bauchman, R. W. and Binnian, W., “3- and 9-Centimeter Propagation in Low Ocean Ducts,” Proceedings of the IRE, pp. 891–905, September 1947. Lee, J. L., “Refractivity Gradient and Microwave Fading Observations in Northern Indiana,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 3, pp. 36.8.1–36.8.5, December 1985. Lee, J. L., “Observed Atmospheric Structure Causing Degraded Microwave Propagation in the Great Lakes Area,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 3, pp. 1548–1552, December 1986. Lee, T. C. and Lin, S. H., “More on Frequency Diversity for Digital Radio,” IEEE Global Telecommunications Conference (Globecom) Conference Record, Vol. 3, pp. 36.7.1–36.7.5, December 1985. Lee, T. C. and Lin, S. H., “A Model of Space Diversity Improvement for Digital Radio,” International Union of Radio Science Symposium Proceedings, pp. 7.3.1–7.3.4, July 1986. Lin, S. H., “Statistical Behavior of Rain Attenuation,” Bell System Technical Journal, Vol. 52, pp. 557–581, April 1973. Lin, S. H., “A Method for Calculating Rain Attenuation Distributions on Microwave Paths,” Bell System Technical Journal, Vol. 54, pp. 1051–1086, July-August 1975. Lin, S. H., Lee, T. C. and Gardina, M. F., “Diversity Protections for Digital Radio—Summary of Ten-Year Experiments and Studies,” IEEE Communications Magazine, Vol. 26, pp. 51–64, February 1988. Lundgren, C. W. and Rummler, W. D., “Digital Radio Outage Due to Selective Fading - Observation vs Prediction From Laboratory Simulation,” Bell System Technical Journal, Vol. 58, pp. 1073–1100, May-June 1979. Mahmoud, S. F., Boghdady, H. N. and El-Sayed, O. L., “Analysis of Multipath Fading in the Presence of an Elevated Atmospheric Duct,” Proceedings of the IEE, Vol. 134, pp. 71–76, February 1987. McGavin, R. E., Dougherty, H. T. and Emmanuel, C. B., “Microwave Space and Frequency Diversity Performance Under Adverse Conditions,” IEEE Transactions on Communication Technology, Vol. 18, pp. 261–263, June 1970.
Millington, G., “Random Thoughts of a Propagation Engineer,” Proceedings of the IEE, Vol. 106, Part B, pp. 11–14, January 1959. Osborne, T. L., “Applications of Rain Attenuation Data to 11-GHz Radio Path Engineering,” Bell System Technical Journal , Vol. 56, pp. 1605–1627, November 1977.


Ranade, A. and Greenfield, P. E., “An Improved Method of Digital Radio Characterization from Field Measurements,” IEEE International Conference on Communications, pp. C2.6.1–C2.6.5 (Vol. 2, 659–663), June 1983. Rummler, W. D., “A New Selective Fading Model: Application to Propagation Data,” Bell System Technical Journal , Vol. 58, pp. 1037–1071, May-June 1979. Rummler, W. D., “More on the Multipath Fading Channel Model,” IEEE Transactions on Communications, Vol. 29, pp. 346–352, March 1981. Rummler, W. D., “A Comparison of Calculated and Observed Performance of Digital Radio in the Presence of Interference,” IEEE Transactions on Communications, Vol. 30, pp. 1693–1700, July 1982. Rummler, W. D.,“Advances in Microwave Radio Route Engineering for Rain,” IEEE Conference on Communications (ICC) Proceedings, pp. 10.8.1–10.8.5, June 1987. Rummler, W. D., “Characterizing the Effects of Multipath Dispersion on Digital Radios,” IEEE Global Telecommunications Conference (Globecom) Record, Vol. III, pp. 52.5.1–52.5.7, November 1988. Rummler, W. D. and Dhafi, M., “Route Design Methods,” Terrestrial Digital Microwave Communications. Ivanek, F., Editor. Norwood: Artech House, pp. 326–329, 1989. Schiavone, J. A., “Prediction of Positive Refractivity Gradients for Line-of-Sight Microwave Radio Paths,” Bell System Technical Journal , Vol. 60, pp. 803–822, July-August 1981. Schiavone, J. A., “Microwave Radio Meteorology: Fading by [Duct] Beam Focusing,” IEEE International Conference on Communications (ICC) Conference Record, Vol. 3, pp. 7B.1.1–7B.1.5, June 1982. Stephansen, E. T., “Clear-air Propagation on Line-of-Sight Radio Paths: A Review,” Radio Science, Vol. 16, pp. 609–629, September and October 1981. Telcordia (Bellcore) Staff, Telcordia Special Report SR-2275, Telcordia Notes on the Networks, pp. 8.49–8.52, October 2000. Telcordia (Bellcore) Staff, Telcordia Generic Requirements GR-929-CORE, Reliability and Quality Measurements for Telecommunications Systems (RQMS-Wireline), Issue 8, December 2002. Telcordia (Bellcore) Staff, Telcordia Generic Requirements GR-1929-CORE, Reliability and Quality Measurements for Telecommunications Systems (RQMS-Wireless), Issue 2, February 2005. Telcordia (Bellcore) Staff, Telcordia Generic Requirements GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria, Issue 5, pp. 6.86–6.87, September 2009a. Telcordia (Bellcore) Staff, Telcordia Generic Requirements GR-499-CORE, Transport Systems Generic Requirements (TSGR): Common Requirements, Issue 4, pp. 2-1 to 4-3, November 2009b. Thrower, R. D., “Curing the Fades, Part III,” Telephone Engineer and Management, p. 82, 1 September, 1977. TIA/EIA, Interference Criteria for Microwave Systems, Telecommunications Systems Bulletin TSB10-F, June 1994. Vigants, A., “Space-Diversity Performance as a Function of Antenna Separation,” IEEE Transactions on Communication Technology, Vol. 16, pp. 831–836, December 1968. Vigants, A., “The Number of Fades and Their Durations on Microwave Line-of-Sight Links With and Without Space Diversity,” IEEE International Conference on Communications (ICC) Proceedings, pp. 3.7–3.11, June 1969. Vigants, A., “Number and Duration of Fades at 6 and 4 GHz,” Bell System Technical Journal , Vol. 50, pp. 815–841, March 1971. Vigants, A., “Observations of 4 GHz Obstruction Fading,” IEEE International Conference on Communications (ICC) Conference Record, pp. 28.1–28.2, June 1972. Vigants, A., “Space-Diversity Engineering,” Bell System Technical Journal , Vol. 54, pp. 
103–142, January 1975. Vigants, A., “Microwave Radio Obstruction Fading,” Bell System Technical Journal , Vol. 60, pp. 785–801, July-August 1981. Wheeler, H. A., “Microwave Relay Fading Statistics as a Function of a Terrain Clearance Factor,” IEEE Transactions on Antennas and Propagation, Vol. 25, pp. 269–273, March 1977.

5 RADIO SYSTEM COMPONENTS

Microwave radio signals are generally regarded as those that extend from about 1 GHz to roughly 1000 GHz (1 THz), just below the optical infrared frequency band. In optical terms, 1000 GHz is 300,000 nm (300 μm) in wavelength. The so-called “low frequency” microwave range is generally from 1 to 7 GHz, which is the range of frequencies typically used for “long haul” applications that are relatively unaffected by rain attenuation but highly influenced by multipath fading. The frequency range 7–10 GHz is a transition region where rain attenuation becomes a significant factor. “High frequency” microwave signals are generally those above 10 GHz. High frequency radio paths are limited in length by rain attenuation. Examples of path lengths by frequency band are given in Section 5.13. The ITU-R has allocated frequency bands to various services up to 400 GHz. The FCC currently has rules and regulations that govern operation up to 100 GHz. Most licensed applications in the United States are in the 6–25 GHz range. Unlicensed applications occur above and below these frequencies. See Appendix A, Table A.18, for more details on these frequency allocations.

Fixed point-to-point microwave radio systems (Fig. 5.1) use transmitters and receivers deployed miles apart to transport high speed digital signals. Wireless transmission between a transmitter and a receiver is influenced by several factors. A transmission line [coaxial cable (coax) or waveguide] connects the transmitter or receiver to an antenna; an antenna support structure (typically a building or a tower) holds the antenna at an appropriate height and orientation; and the antenna launches or receives a signal that is propagated between the antennas along the radio wave path. Power loss between the transmit and receive antennas is variable in time or location because of many factors. Path outage time is a function of external interference (Chapter 2, frequency planning section, and Chapter 14), rain attenuation (Chapter 11), and received signal variations due to upfades (Chapter 12, ducting), earth bulge (Chapter 12, obstruction fading), flat multipath fading (frequency-insensitive, time-variable path attenuation), and dispersive multipath fading (frequency-selective, time-variable path distortion). Multipath fading is discussed in Chapter 9.

Microwave path engineering can be rather complicated. At least two tasks are always required. The first is to reduce multipath and rain fading to an acceptable level. This is done by sizing the transmit power and antennas appropriately. This process is accomplished by performing path performance calculations using a spreadsheet or a computer program. The result is the estimated average path outage due to multipath and rain. The calculation process uses the path performance equations discussed in Chapter 16. Performance objectives are reviewed in Chapter 4.


Figure 5.1 Simplified radio transmission model: transmitter, transmission line loss, antenna gain, path loss, antenna gain, transmission line loss, receiver.

The second task is to place the antennas at an appropriate height to reduce obstruction and reflective fading. This is usually done by using a computer-generated path profile (vertical plot of terrain and antenna heights) to place the antenna appropriately. Antenna placement methodology is reviewed in Section 12.4.1. After the path is designed, if the radio operates in a licensed frequency band, a third task must be accomplished: that path must be frequency coordinated with other users and then licensed (Chapter 2). This chapter presents an overview of the building blocks of a microwave radio path between the transmitter and the receiver. The transmitter and receiver are discussed in Chapter 3.

5.1 MICROWAVE SIGNAL TRANSMISSION LINES

Microwave transmission lines are typically waveguide for frequencies above 3 GHz and coaxial cable (“coax”) for lower frequencies. The reasons are primarily their physical size and transmission loss. See Appendix A for typical waveguide and coax cable attenuation. In Figure 5.2, a 1-ft ruler and a sheet of paper the size of a US dollar bill are included for reference. Waveguides are usually connected to other devices using flanges that bolt together. Rectangular waveguides are usually used indoors and are unpressurized. Elliptical waveguides are used outdoors and are usually pressurized to prevent the entry of water. Where rectangular and elliptical waveguides are connected together, a pressure window is used to maintain pressure on the elliptical waveguide. Rectangular waveguides have lower loss than elliptical waveguides. However, they are difficult to adapt to complex installations. The primary advantages of elliptical waveguides are the simplicity of installation and resistance to water penetration. Circular waveguides have the lowest attenuation of all commercial waveguides. However, they require long, straight installation with provision to expand and


Figure 5.2 Rectangular and elliptical waveguide with coaxial cable. Source: Reprinted with permission of Alcatel-Lucent USA, Inc.

Figure 5.3 Typical coaxial connectors: BNC and Type N, in 50 Ω and 75 Ω versions.

contract as a result of temperature variation. With proper couplers (containing mode filters), circular waveguides can be used over a very large frequency range. However, significant installation considerations limit their utilization to situations where very long transmission lines or multiple frequencies are important. Waveguide flanges are quite distinctive. Occasionally, waveguides must be transitioned to coax. Coaxial connectors come in two impedances, 50 and 75 Ω. As shown in Figure 5.3, they appear very similar. Be careful not to mate connectors of different impedances (a common risk with coax adapters). The connection will be loose or one of the connector center pins will be deformed permanently. A waveguide is a hollow metal tube that is rectangular, elliptical, or circular. In free space, Maxwell’s equations (Schelkunoff, 1943) dictate that a traveling electromagnetic wave will have electric and magnetic fields orthogonal to the direction of wave transmission. Inside a metal tube, Maxwell’s equations force either the electric or the magnetic field to be orthogonal to the side of the waveguide and the other field to be parallel to the direction of wave transmission down the guide. If the electric field (Schelkunoff, 1963) is always perpendicular to the direction of propagation, the mode of transmission is termed transverse electric (TE). If the magnetic field is always perpendicular to the direction of propagation, the mode of transmission is termed transverse magnetic (TM). Modes are usually described as TM or TE with a two-number subscript following the mode designation. The first subscript describes the number of half-cycle variations across the wide dimension of the waveguide (when viewed in cross-section). The second subscript describes the number of half-cycle variations across the narrow dimension of the waveguide. A waveguide (Marcuvitz, 1986; Schelkunoff, 1963; Southworth, 1950a, 1950b) operates similarly to a high pass filter. Many modes of operation are possible. Each mode has a cutoff frequency. Below that frequency, the waveguide acts like an attenuator. Above that frequency, the waveguide acts like a low attenuation transmission line. Lower order modes (smaller number subscripts) have lower cutoff frequencies. Usually, a waveguide’s operating frequency is between the cutoff frequency for the lowest order (fundamental) mode and that of the next higher mode (Fig. 5.4). Operation with multiple modes is undesirable because each mode’s velocity of propagation is different. If multiple modes are generated at the transmit end of a transmission line and reconverted into a usable signal at the receive end, significant undesirable signal dispersion (pulse widening) will occur. Figure 5.5 illustrates the fundamental (lowest frequency) mode of propagation for typical waveguide shapes and coax. The electric field is represented by solid lines and the magnetic field is represented by dashed lines. Waveguide propagation in coaxial cable is undesirable. Coax is constructed by surrounding one conductor (center conductor) with a second one (shield). If the shield is solid, radiation outside the transmission line is essentially eliminated. In coax, the normal mode of propagation is by transverse electric and magnetic (TEM) field. However, at high frequencies

Figure 5.4 Waveguide mode cutoff frequencies for rectangular, elliptical, and circular waveguides (mode cutoff frequencies shown relative to the fundamental mode cutoff frequency).

Figure 5.5 Fundamental waveguide modes: rectangular TE10, circular TE11, elliptical eTE11, and coaxial TE11.

the coax will become a waveguide that is able to support both TE- and TM-mode propagation. As with traditional waveguides, coax has a lowest waveguide mode, TE11. The cutoff frequency for the coax TE11 mode is given by the following equation:

FCO (GHz) = (7.50 × VF) / [D(in) + d(in)] = (190 × VF) / [D(mm) + d(mm)]   (5.1)


D = inside diameter of outer conductor; d = outside diameter of inner conductor; VF = velocity factor = 1/sqrt[dielectric constant (relative permittivity)].

Bends and connectors will generate evanescent waveguide modes (highly attenuated modes operating below the cutoff frequency for that mode). As long as the coax is operated at frequencies no higher than 3/4 the lowest mode cutoff frequency, evanescent modes will not affect performance. In waveguide or coax, the velocity of propagation of an electromagnetic wave is slower than in free space. The velocity of propagation in the transmission line is termed group velocity.

VG = group velocity = V0 × VF   (5.2)

V0 = velocity of propagation in free space = 0.9833 ft/ns = 0.2998 m/ns; VF = velocity factor.

The absolute delay, D, of a transmission line is given by the following:

D = L / VG

L = physical length of the transmission line.

The effective length of the transmission line (when compared to an RF signal traveling in free space) is given by the following:

LEFF = effective length = L / VF   (5.3)

For coax, the velocity of propagation is independent of frequency because the velocity of propagation is a function of the dielectric constant (relative permittivity). See Eq. 5.2. For a waveguide, the velocity of propagation is a function of frequency (Ramo et al., 1965).

VF = velocity factor = sqrt[1 − (fc/f)^2]   (5.4)

fc = cutoff frequency for the waveguide mode of interest; f = frequency of interest.

For a rectangular waveguide with a = 2b, the following applies for the fundamental mode, TE10:

fc (MHz) = 5902 / a(inches) = 14,990 / a(cm)   (5.5)

a = larger cross-sectional dimension; b = smaller cross-sectional dimension.

The next higher modes (TE01 and TE20) have cutoff frequencies that are twice the fundamental waveguide cutoff frequency. A smooth, elliptical waveguide has not been studied as deeply as rectangular and circular waveguides. Currently, all commercial elliptical waveguides have corrugations. This type of waveguide has never been studied theoretically. A corrugated waveguide is designed with periodic corrugation spacing set so that reflections caused by the corrugations occur at frequencies below cutoff. Its loss is slightly higher than that of the theoretically smooth elliptical waveguide. Elliptical waveguide cutoff frequencies are generally a function of waveguide ellipticity (Chu, 1938; Stratton et al., 1941).

Ellipticity (e) = sqrt[1 − (b/a)^2]   (5.6)


a = larger cross-sectional dimension; b = smaller cross-sectional dimension.

The parameter is not well defined for corrugated waveguides. Fortunately, the fundamental cutoff frequency is relatively insensitive to ellipticity (e). For elliptical waveguides with a typical ellipticity of 0.5–0.75, the following applies for the fundamental mode, eTE11:

fc (MHz) = 7005 / a(inches) = 17,790 / a(cm)   (5.7)

a = larger cross-sectional dimension.

The next higher mode (eTM01) has a cutoff frequency that is 1.4–1.7 times the fundamental waveguide cutoff frequency for ellipticity between 0.5 and 0.75, respectively. For a circular waveguide, the following defines the cutoff frequency for the fundamental mode, TE11:

fc (MHz) = 6917 / D(inches) = 17,570 / D(cm)   (5.8)

D = inside diameter of the circular waveguide.

The circular waveguide is usually specified as WCXX, where XX is the inside diameter in inches. For instance, WC281 is 2.812 in. in diameter, WC109 is 1.09 in., and WC75 is 0.75 in. The next higher mode (TM01) has a cutoff frequency 1.3 times the fundamental waveguide cutoff frequency. A circular waveguide is sometimes operated over a very wide frequency range. Several higher modes are possible. If this is done, input and output couplers with mode suppressors are necessary. It is also important not to attempt operation near the cutoff frequencies of any of those higher order modes. Significant attenuation and phase distortions occur at those frequencies. The first 31 mode cutoff frequencies (Kizer, 1990) for circular waveguides are listed in Appendix A. Waveguide attenuation is given by the following (Kizer, 1990):

Attn (dB/100 m) = [A (f/fc)^2 + B] / sqrt[(f/fc) × ((f/fc)^2 − 1)]   (5.9)

Attn (dB/100 ft) = 0.3048 × Attn (dB/100 m)   (5.10)

f = frequency of interest (GHz); fc = cutoff frequency (GHz).

A and B are coefficients determined from measured waveguide attenuation versus frequency tables. See Appendix A for the coefficients for common rectangular and elliptical waveguides. A and B may be determined from the following, where DN and EN are the two terms of Eq. 5.9 evaluated at frequency fN (so that the measured attenuation CN = A DN + B EN):

DN = (fN/fC)^2 / sqrt[(fN/fC) × ((fN/fC)^2 − 1)]   (5.11)

EN = 1 / sqrt[(fN/fC) × ((fN/fC)^2 − 1)]   (5.12)


B = [C2 − C1 (D2/D1)] / [E2 − E1 (D2/D1)]   (5.13)

A = (C1 − B E1) / D1   (5.14)

f1 = lowest frequency of interest (GHz); f2 = highest frequency of interest (GHz); N = 1 or 2; C1 = attenuation measured through 100 m of waveguide at frequency f1; C2 = attenuation measured through 100 m of waveguide at frequency f2.
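As a worked illustration of Eqs. 5.9 and 5.11–5.14, the following Python snippet fits A and B from two measured attenuation points and then evaluates Eq. 5.9 at an arbitrary frequency. (This is a minimal sketch: the cutoff frequency is computed from the nominal WR-137 inside width via Eq. 5.5, and the two sample attenuation values are assumptions for illustration, not figures from Appendix A.)

```python
import math

def basis(f_ghz, fc_ghz):
    """D and E basis functions of Eqs. 5.11 and 5.12 at one frequency."""
    x = f_ghz / fc_ghz
    root = math.sqrt(x * (x * x - 1.0))
    return (x * x) / root, 1.0 / root          # (D_N, E_N)

def fit_ab(fc, f1, c1, f2, c2):
    """Solve Eqs. 5.13 and 5.14 for the waveguide loss coefficients A and B."""
    d1, e1 = basis(f1, fc)
    d2, e2 = basis(f2, fc)
    b = (c2 - c1 * d2 / d1) / (e2 - e1 * d2 / d1)
    a = (c1 - b * e1) / d1
    return a, b

def attenuation_db_per_100m(f, fc, a, b):
    """Eq. 5.9: waveguide attenuation in dB per 100 m."""
    x = f / fc
    return (a * x * x + b) / math.sqrt(x * (x * x - 1.0))

fc = 5.902 / 1.372   # Eq. 5.5 with a = 1.372 in. (WR-137), giving about 4.30 GHz
# Assumed measured points for illustration: 8.8 dB/100 m at 5.9 GHz, 7.8 dB/100 m at 7.1 GHz.
A, B = fit_ab(fc, 5.9, 8.8, 7.1, 7.8)
print(attenuation_db_per_100m(6.5, fc, A, B))            # dB/100 m
print(0.3048 * attenuation_db_per_100m(6.5, fc, A, B))   # Eq. 5.10: dB/100 ft
```

The same two functions reproduce the tabulated attenuation at the two fit frequencies exactly, which is a convenient sanity check when transcribing coefficients from a manufacturer's table.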

Coax attenuation is determined from the following formulas:

Attn (dB/100 m) = A sqrt(f) + B f   (5.15)

Attn (dB/100 ft) = 0.3048 × Attn (dB/100 m)   (5.16)

A = [Attn1 f2 − Attn2 f1] / [f2 sqrt(f1) − f1 sqrt(f2)]   (5.17)

B = [Attn2 sqrt(f1) − Attn1 sqrt(f2)] / [f2 sqrt(f1) − f1 sqrt(f2)]   (5.18)

A = conductive (“skin effect”) loss coefficient; B = dielectric loss coefficient; f = frequency of interest (MHz); Attn1 = cable loss (dB/100 m) at f1; Attn2 = cable loss (dB/100 m) at f2; f1 = lowest frequency of interest (MHz); f2 = highest frequency of interest (MHz); f1 < f2 and f1 ≤ f ≤ f2.
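A small Python sketch of Eqs. 5.15–5.18 (the two sample losses at 1000 and 2500 MHz are assumed values for illustration only; take real values from the cable datasheet):

```python
import math

def fit_coax_coefficients(f1_mhz, attn1, f2_mhz, attn2):
    """Solve Eqs. 5.17 and 5.18 for the conductive (A) and dielectric (B)
    loss coefficients from two measured losses in dB/100 m."""
    det = f2_mhz * math.sqrt(f1_mhz) - f1_mhz * math.sqrt(f2_mhz)
    a = (attn1 * f2_mhz - attn2 * f1_mhz) / det
    b = (attn2 * math.sqrt(f1_mhz) - attn1 * math.sqrt(f2_mhz)) / det
    return a, b

def coax_attn_db_per_100m(f_mhz, a, b):
    """Eq. 5.15: skin-effect term plus dielectric term."""
    return a * math.sqrt(f_mhz) + b * f_mhz

# Assumed datasheet points: 12.8 dB/100 m at 1000 MHz, 21.4 dB/100 m at 2500 MHz.
A, B = fit_coax_coefficients(1000.0, 12.8, 2500.0, 21.4)
loss_100m = coax_attn_db_per_100m(1800.0, A, B)
print(loss_100m, 0.3048 * loss_100m)   # dB/100 m and dB/100 ft (Eq. 5.16)
```

Because the skin-effect term grows as sqrt(f) while the dielectric term grows linearly with f, the fitted B coefficient becomes increasingly important at the upper end of a cable's usable range.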

The microwave radio is connected to an antenna by a transmission line (coax for low frequencies and waveguide for higher frequencies). The power match between the transmission line and the radio and antenna can affect performance (Wu and Achariyapaopan, 1985). For low frequency transmission lines, the power match between a transmission line and a terminating element is measured as a voltage standing wave ratio (VSWR). In a waveguide, the concepts of voltage and current do not exist; only electric and magnetic fields are relevant. For this situation, the concept of return loss is used. The return loss of a transmission line and terminating element interface describes the relative amount of incident energy returned toward the source. For example, if a 30-dBm pulse enters a transmission line interface from a radio and 10 dBm of energy is reflected back toward the radio (the pulse source), the transmission line has a return loss of 20 dB (when it is connected to the radio). Reflections from both ends of the transmission line cause a signal echo that can degrade radio receiver performance (Fig. 5.6). This echo represents a secondary signal that, when it appears at the receiver, introduces dispersion (pulse widening) distortion. Radio wave pulses in free space travel about 1 ns/ft. In coax and waveguide, they travel at approximately half that speed (2 ns/ft). If the transmission line is short enough that the echo delay is much shorter than the transmitted symbol, return loss at the transmission line interface does not matter (as long as the interface does not cause a significant power loss). For example, consider a radio operating in a 30-MHz radio channel. Nyquist’s signaling rate limits radio symbols to a maximum rate of 30 × 10^6 symbols per second (for

Figure 5.6 Transmission line echo (radio, transmission line, antenna; primary signal and echo).

distortion-free transmission). This signal would be [1/(3 × 10^7)] × 10^9 ≈ 33 ns wide. If the echo is delayed by less than 4 ns (transmission line […]

Loss (dB) = 92.4 + 20 log F(GHz) + 20 log D(km)   (5.30)

F = frequency of radio wave; D = path length. As we will see in other chapters, atmospheric and terrain factors can significantly change the actual loss experienced on a radio path. Reflections from the terrain and nearby structures are addressed in Chapter 13. Abnormal atmospheric refractive index effects are covered in Chapter 12. Multipath fading


caused by atmospheric layering is covered in Chapter 9. Rain fading is addressed in Chapter 11. However, other factors can also be significant. The atmosphere contains pollutants described as aerosols and hydrometeors. Aerosols are particulate matter suspended in the atmosphere with diameter of 1 micron (1 micron = 1 μm = 1 micrometer = one-millionth of a meter) or less. Examples include smog, smoke, haze, clouds, fog, and soil. Hydrometeors have a diameter greater than 1 μm. They include mist, rain, freezing rain and ice pellets, snow, hail, ocean spray, clouds, fog, dust, and sand. Fog and rain are addressed in Chapter 9. Attenuation by ice (and snow) is significantly less than by rain (Ishimaru, 1978). With the exception of sleet (typically treated as rain), ice is usually limited to an accumulation on antenna radomes and is not deep enough to significantly attenuate the radio signal (as contrasted with rain, which can extend for several miles over a path). For most areas of the world, the climate is relatively humid or arid. Humid areas subject to rain are addressed in Chapter 9. Arid areas subjected to soil, dust, or sand storms are the opposite extreme. Soil, dust, and sand particles are chemically similar and have similar highly irregular shapes. Soil particles have diameters less than 1 μm. Dust is soil particles between 1 and 60 μm in diameter. Fine dust is between 1 and 10 μm. Coarse dust is between 10 and 60 μm. Sand is greater than 60 μm in diameter. All these particles are significantly smaller than the radio wavelength and have similar propagation characteristics. Dust and sand storm attenuation is a function of particle density as well as the particle size distribution (Ahmed, 1987). This is specified as a visual distance inside the storm. Attenuation and cross-polarization performance of circularly polarized signals are much more sensitive to sand and dust storms than are linearly polarized signals. Except for very dense storms (visibility of less than 10 m), linearly polarized signal attenuation is negligible for frequencies below 30 GHz. Figure 5.33 was created from the results of research by Chen and Ku (2012) for vertically polarized signals. Other researchers have used other assumptions and derived results which estimate greater (Ahmed et al., 1987) or less (Dong et al., 2011; Goldhirsh, 2001) attenuation for a given visibility. The results of other researchers vary from Chen and Ku’s by ±40% to ±80%. The Chen and Ku results represent an average of current research. They may be estimated from the following equations.

A1 = 9.286 + 0.2911 F + 0.0001426 F^3 − 102.5/F   (5.31)

log(A) = log(A1) − 1.25 log(V)   (5.32)

A = 10^log(A)   (5.33)

A = path attenuation (dB/km); F = frequency (GHz), 10 ≤ F ≤ 100; V = visibility (m). Small amounts of water increase dust propagation attenuation (5% moisture increases attenuation at 11 GHz by 75% relative to dry dust attenuation). Horizontal polarization has about twice the attenuation of vertically polarized signals. Linear signal cross-polarization discrimination can be significantly affected by dust and sand even for relatively low frequencies (Ahmed et al., 1987; Chen and Ku, 2012; Dong et al., 2011; Ghobrial and Sharief, 1987; Goldhirsh, 2001).
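As a quick illustration of Equations 5.31–5.33, the short Python sketch below (not part of the original text; the frequency and visibility values are arbitrary examples) evaluates the Chen and Ku average attenuation for a vertically polarized signal.

import math

def dust_attenuation_db_per_km(f_ghz, visibility_m):
    """Approximate sand/dust storm attenuation (dB/km) from Eqs. 5.31-5.33.

    Applies for 10 <= f_ghz <= 100 and vertical polarization (Chen and Ku, 2012,
    as averaged in the text). Visibility is in meters.
    """
    a1 = 9.286 + 0.2911 * f_ghz + 0.0001426 * f_ghz**3 - 102.5 / f_ghz   # Eq. 5.31
    log_a = math.log10(a1) - 1.25 * math.log10(visibility_m)             # Eq. 5.32
    return 10.0 ** log_a                                                  # Eq. 5.33

# Example (hypothetical values): a 40-GHz path inside a storm with 100-m visibility.
print(round(dust_attenuation_db_per_km(40.0, 100.0), 3), "dB/km")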

Figure 5.33 Dust storm path attenuation (sand and dust storm attenuation versus visibility, m).

Figure 5.34 Desired and multipath signals (main signal, short echo, long echo).

5.9 RADIO SYSTEM PERFORMANCE AS A FUNCTION OF RADIO PATH PROPAGATION

If the atmosphere is homogeneous (e.g., well stirred by wind or rain), propagation between the transmitter and the receiver is along one path. However, if the atmosphere is allowed to stratify (e.g., during quiet summer nights), invisible vertical microlayers of slightly different temperatures and humidity levels can form. These thin atmospheric layers parallel to the earth provide multiple propagation paths between the transmitter and the receiver. Signals traveling these paths take different times to propagate to the receiver. During times when the atmosphere is quiet, the receive antenna may receive two or three (or more) signals from the transmitter over slightly different paths. These signals are exact replicas of the originally
transmitted signal but delayed in time. For this reason, they are often termed echoes. Typically, the main received signal occurs over a fairly direct path from the transmit antenna. The other signals typically occur over slightly longer paths that are physically slightly above the main path (Fig. 5.34). Most atmospheric multipath occurs because of paths at elevations slightly above the main path. The delayed signal's angle of arrival is typically slightly greater (0.25°–2°) than the angle of arrival of the main signal. The difference in path length between the main signal and a short echo is less than a foot (less than a nanosecond in time) for a typical 26-mile path. The long echo path difference is usually 9 or 10 ft (9 or 10 ns) (Rummler, 1979, 1980) for a typical 26-mile path. When the different echo signals combine, the resultant signal is a distorted received signal (Fig. 5.35).

5.9.1

Flat Fading

If the primary echo is a short delayed echo, the result is an enhancement ("upfade") or reduction ("downfade") of the composite received signal power ("received signal level") that is essentially constant ("flat") in frequency across the radio transmission channel. This type of fading is termed flat fading because it represents an overall depression in received signal level but otherwise causes no distortion of the received signal (Fig. 5.36). Flat fading is also termed scintillation fading. This multipath fading increases as the path length increases. It is the same mechanism that causes starlight to "twinkle" at night (although in the case of light, the multipath is caused by air turbulence rather than layering). The visual effect can be seen by …

[Figure: main signal cos(2πft), echo signal R cos(2πft + φ), and composite signal. Maximum composite signal = 20 log(1 + R); minimum composite signal = 20 log(1 − R).]

Figure 5.39 Flat fade margins in US paths (fade margin, dB).

… measured on this path (Rummler, May-June 1979, Introduction and Table 1). When both signals are demodulated at the receiver, the digital pulse is significantly widened or "dispersed." Dispersive fading is usually produced by relatively long atmospheric echoes. However, it can also be caused by reflections from the terrain for paths with excessive clearance and from off-path structures. Dispersive fading has no relationship to received signal level (Giger and Barnett, 1981). Figure 5.40 shows the relationship between RSL and BER for a 26-mile path dominated by dispersive fading. Since the effective fade margin for dispersive fading is not related to RSL, a statistical concept for fade margin, termed dispersive fade margin, has been developed (Giger and Barnett, 1981). It relates the statistical composite effects of dispersive fading to a particular radio receiver. The effects of dispersive fading can only be reduced by the use of transversal equalizers and diversity techniques.
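The flat fading bounds noted with the multipath figure above (maximum composite signal 20 log(1 + R), minimum 20 log(1 − R)) are easy to check numerically. The sketch below is illustrative only; the echo amplitude ratio R = 0.9 is an arbitrary example, not a value from the text.

import math

def flat_fade_bounds_db(r):
    """Upfade/downfade limits (dB) for a main signal plus one short-delay echo
    of relative amplitude r (0 <= r < 1): 20 log(1 + r) and 20 log(1 - r)."""
    return 20 * math.log10(1 + r), 20 * math.log10(1 - r)

# Example: an echo 0.9 times the main signal amplitude.
up, down = flat_fade_bounds_db(0.9)
print(f"upfade {up:+.1f} dB, downfade {down:+.1f} dB")   # roughly +5.6 dB and -20 dB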

5.10 RADIO SYSTEM PERFORMANCE AS A FUNCTION OF RADIO PATH TERRAIN

Another fundamental transmission limitation is the terrain near a radio path. A primary task of path design is to mitigate the effects of path terrain (which is interrelated with atmospheric refractivity) by vertical antenna placement.

Figure 5.40 Example of dispersive fading (bit error ratio, BER, versus flat fade depth, dB). Adapted from Giger, A. J. and Barnett, W. T., "Effects of Multipath Propagation on Digital Radio," IEEE Transactions on Communications, pp. 1345–1352, September 1981. Reprinted with permission of IEEE.

Figure 5.41 Typical path signal illumination. For a typical 10-ft 6-GHz parabolic transmit antenna on a 30-mile path, the midpath beam width is ±0.58° (1600 ft) between the 3.0-dB points and ±0.2° (550 ft) between the 0.1-dB points.

Microwave radio transmit antennas do not just send a thin beam to the receive antenna. They actually illuminate a wide area of terrain along the microwave radio path (Fig. 5.41). The effect of the terrain in reflecting the transmitted energy toward the receive antenna can significantly influence the received signal (Fig. 5.42). Analysis of terrain reflections is done on the basis of Fresnel zones (Fig. 5.43 and Fig. 5.44). A Fresnel zone is described as the locus of points above or below the direct path from the transmitter to the receiver where the distance from one end of the path to the point and then to the other end of the path is an integer number of 1/2 wavelengths longer than the direct path. The first Fresnel zone, F1, has a total additional path length of 1/2 wavelength. The second Fresnel zone, F2, has 2 × 1/2 wavelengths, the third Fresnel zone, F3, has 3 × 1/2, and so on. A Fresnel zone radius, Fn, is the distance perpendicular to the path from a location of interest to a point on the Fresnel zone.

Fn(ft) = nth Fresnel zone radius
       = 72.1 sqrt { [n × d1(miles) × d2(miles)] / [F(GHz) × D(miles)] }

Fn(m) = 17.3 sqrt { [n × d1(km) × d2(km)] / [F(GHz) × D(km)] }    (5.34)

n = Fresnel zone number (an integer); d1 = distance from one end of the path to the reflection; d2 = distance from the other end of the path to the reflection; D = total path distance = d1 + d2; F = frequency of radio wave.

Figure 5.42 Major terrain reflection models (flat plane, round earth, knife edge).

Figure 5.43 Fresnel zone radii, side view. F1 = 72.1 [(d1 × d2)/(f × D)]^1/2, where d1 and d2 are the distances from the Tx and Rx in miles, D is the path length in miles, and f is the operating frequency in GHz; an + bn = D + n(λ/2); Fn = F1 √n.
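For quick path engineering checks, Equation 5.34 can be evaluated directly. The following sketch is illustrative only; the midpath example uses the 23.6-mile, 6.175-GHz sample path that appears later in the chapter's path profile figure.

import math

def fresnel_radius_ft(n, d1_mi, d2_mi, f_ghz):
    """nth Fresnel zone radius (ft), Eq. 5.34: 72.1 sqrt[n d1 d2 / (F D)]."""
    d_mi = d1_mi + d2_mi
    return 72.1 * math.sqrt(n * d1_mi * d2_mi / (f_ghz * d_mi))

def fresnel_radius_m(n, d1_km, d2_km, f_ghz):
    """nth Fresnel zone radius (m), metric form of Eq. 5.34."""
    d_km = d1_km + d2_km
    return 17.3 * math.sqrt(n * d1_km * d2_km / (f_ghz * d_km))

# Example: first Fresnel zone radius at midpath of a 23.6-mile, 6.175-GHz path.
print(round(fresnel_radius_ft(1, 11.8, 11.8, 6.175), 1), "ft")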

Microwave paths are essentially parallel to the earth. As noted in Chapter 13, virtually all surfaces (since they have a significantly different refractive index than the atmosphere) reflect high frequency radio signals (unless the signal is blocked by terrain or trees or dispersed by rough surfaces). Tangential reflections from the earth have a 180° phase reversal relative to the direct wave (a requirement of, you guessed it, Maxwell's equations). Therefore, all reflected signals with odd Fresnel zone clearance produce signal enhancement at the receive antenna. All reflected signals with even Fresnel zone clearance produce signal cancelation (of the received direct signal). These concepts are important in path engineering. As the receive antenna is raised above the terrain, the composite received signal increases or decreases depending on the type of terrain and the height above the terrain (Fig. 5.45). For paths with significant surface reflections, two vertically spaced receive antennas ("space diversity") can be used to mitigate the effects of the reflected signal (Fig. 5.46).

Figure 5.44 Fresnel zone radii, end view.

Figure 5.45 Received signal variation with antenna height (obstruction gain, dB, versus radio path clearance h/F1 for flat plane, round earth, and knife edge models).

Figure 5.46 Space diversity (space diversity antennas located to decorrelate ground or water reflections; height-gain pattern versus antenna height).

The two antennas are placed so that the reflection effects on the antennas are complementary. Since path clearance for long paths varies with atmospheric refractivity (K factor), this requires an understanding of expected refractivity. Dual antenna spacing will be a compromise for the expected range of refractivity. For long paths with very large path clearance (such as mountain top to mountain top), the even and odd Fresnel zones are so close together that exact placement is not possible. Experimentation or placement of the antennas to avoid reflection may be the only practical choices. For ground-based reflections, another technique is to tilt the antenna up to place the reflected path in the first null of the receive antenna (Hartman and Smith, 1977). This has the disadvantage of degrading the side lobe performance and cross-polarization discrimination of the antenna. However, this is an effective technique for relatively short (i.e., stable propagation) remote area (i.e., few other transmitters) paths.

5.11

ANTENNA PLACEMENT

One of the main tasks of path engineering is the proper placement of the antennas to mitigate the effects of terrain reflection and atmospheric refractivity variations. The refractivity of the atmosphere is a function of atmospheric pressure, temperature, and relative humidity. Under normal conditions, atmospheric humidity and pressure are relatively constant with height. However, atmospheric temperature decreases with increasing height so atmospheric refractivity decreases with height. This causes the radio wave to bend down toward the earth. (Diagrams showing the radio wave bending toward the earth are misleading. The radio wave is bowed slightly down but the earth curvature bows up more. This causes the path clearance between the radio wave and the earth surface to decrease near the center of the path.) If the vertical gradients of temperature and/or humidity change, the vertical direction of the radio wave can change significantly. This change in refractivity gradient is usually described as a change in K factor (see following equations). Under normal atmospheric conditions (Bean and Dutton, 1966; Bullington, 1957; Schelleng et al., 1933), radio and light waves curve toward the earth. If the earth were replaced by a sphere with radius aK (where a is the physical radius of the earth and K is a function of refractivity), a radio or light wave launched parallel to the earth would remain parallel to this modified "earth." For radio waves nearly parallel to the earth, the following equation approximates K (Fig. 5.47):

K ≈ 1 / [1 + a (dn/dh)]
  = 253 / [253 + dN/dh]    (dN/dh in N units per mile)
  = 157 / [157 + dN/dh]    (dN/dh in N units per kilometer)
  = typically 6/5 (average) to 7/5 (midday) for light
  = typically 4/3 (average) for radio waves < 40 GHz    (5.35)

Figure 5.47 Typical microwave radio path (K = 4/3; dN/dh = −39 N-units/km). Drawing courtesy of Eddie Allen. Used with permission.

K = effective earth radius factor; a = physical earth radius.

Microwave path antenna locations are typically designed and analyzed using a plot of the terrain between the transmitter and the receiver with a line representing the maximum power of the radio wave front as it moves from transmitter to receiver. The radio wave path will bend up or down depending on the K factor (atmospheric refractivity). The convention is to always plot the radio wave path as a straight line and to move the earth up or down as a function of K factor to preserve the vertical distance between the radio wave and the earth at any location on the path. In the past, different graphs were plotted with the earth surface predistorted to account for different K factors. Today, path profiles are created by computers, but the "earth bulge" convention remains (Fig. 5.48). The physical heights of the elevation measurements on the path profile are modified by adding the following values (Fig. 5.49):

h(ft) = [d1(miles) × d2(miles)] / (1.500 K)

h(m) = [d1(km) × d2(km)] / (12.74 K)    (5.36)
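As an illustration of Equation 5.36, the sketch below (not part of the original text) computes the earth bulge correction at midpath of a 30-mile path for the two clearance-criteria values of K (4/3 and 1/2) used in the sample path profile later in this chapter.

def earth_bulge_ft(d1_mi, d2_mi, k):
    """Earth bulge h (ft) added to terrain elevations, Eq. 5.36."""
    return (d1_mi * d2_mi) / (1.500 * k)

def earth_bulge_m(d1_km, d2_km, k):
    """Earth bulge h (m), metric form of Eq. 5.36."""
    return (d1_km * d2_km) / (12.74 * k)

# Example: midpath of a 30-mile path at K = 4/3 and at K = 1/2.
for k in (4.0 / 3.0, 1.0 / 2.0):
    print(f"K = {k:.2f}: bulge = {earth_bulge_ft(15, 15, k):.0f} ft")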

Path profiles are used to place the vertical location of antennas (see Chapter 10). The path profile is usually based on digitized path elevation data. However, this data is modified to account for actual path obstructions, such as trees and structures. “Driving the path” is important to make sure potential obstructions and reflections are identified.

Figure 5.48 Typical microwave radio path profile convention.

Figure 5.49 Computer-generated path profile (23.60-mile, 6.175-GHz overwater path between Site 1 and Site 2; the main-to-main path meets the clearance criteria of 1.0 first Fresnel zone at K = 4/3 and grazing at K = 1/2 at the critical point; path roughness is 23 ft from 0 to 23.6 miles).

5.12

FREQUENCY BAND CHARACTERISTICS

For commercial applications, two general types of bands are available: licensed and unlicensed. Unlicensed bands have the advantage of rapid installation but have unpredictable performance because interference is not controlled. Nevertheless, these bands are quite popular. Several unlicensed bands are available:


Unlicensed National Information Infrastructure (UNII) Bands (FCC Part 15.407):

5.2-GHz Band (5150–5250 MHz)
   Transmit power: 50 mW maximum
   Transmit power must be reduced by 1 dB for every decibel of antenna gain that exceeds 6 dBi
5.3-GHz Band (5250–5350 MHz) and 5.6-GHz Band (5470–5725 MHz)
   Transmit power: 250 mW maximum
   Transmit power must be reduced by 1 dB for every decibel of antenna gain that exceeds 6 dBi
   User must sense radar and cease operation if detected
5.8-GHz Band (5725–5825 MHz)
   Transmit power: 1 W
   Transmit power must be reduced by 1 dB for every decibel of antenna gain that exceeds 23 dBi

Because of the significant transmitted power and antenna gain limitations, the UNII bands are not popular for fixed point-to-point applications.

Unlicensed Industrial, Scientific, Medical (ISM) Bands (FCC Part 15.247):

900-MHz Band (902–928 MHz)
   Not useful for wideband fixed point-to-point applications
2.4-GHz Band (2400–2483.5 MHz)
   Transmit power: 1 W maximum
   Transmit power must be reduced by 1 dB for every 3 dB of antenna gain that exceeds 6 dBi
   4-ft parabolic = 27 dBi → transmit power = 200 mW
   8-ft parabolic = 33 dBi → transmit power = 125 mW
5.8-GHz Band (5725–5850 MHz)
   Transmit power: 1 W maximum
   No antenna limitations

The ISM 2.4- and 5.8-GHz bands are quite popular. However, owing to transmit power and antenna limitations at 2.4 GHz, 5.8 GHz is the more popular band for fixed point-to-point applications.

60-GHz Band (57.0–64.0 GHz)
   This band's primary feature is the very high atmospheric attenuation due to oxygen absorption. This band is typically used where frequency reuse within a geographic area and communication security are important.

The licensed bands are quite important as they offer the user a relatively predictable performance:

4 GHz (3.7–4.2 GHz)
   20-MHz channels
   Very good propagation band
   Owing to coordination difficulties with existing satellite receivers, the band is effectively unavailable for new fixed point-to-point applications
Lower 6 GHz (5.9–6.4 GHz)
   0.4, 0.8, 1.25, 2.5, 3.75, 5, 10, 30, and 60 MHz channels
   Very good propagation band
   Highly desirable but congested in many urban areas
Upper 6 GHz (6.5–6.9 GHz)
   0.4, 0.8, 1.25, 2.5, 3.75, 5, 10, and 30 MHz channels


   Very good propagation band
   Highly desirable but congested in many urban areas
10 1/2-GHz Band (10.6–10.7 GHz)
   0.4, 0.8, 1.25, 2.5, 3.75, and 5 MHz channels
   Path length rain limited in most areas except the west coast
11-GHz Band (10.7–11.7 GHz)
   1.25, 2.5, 3.75, 5, 10, 30, and 40 MHz channels
   Path length rain limited in most areas except the west coast
18 GHz (17.7–18.7 GHz)
   1.25, 2.5, 5, 10, 20, 30, 40, 50, and 80 MHz channels
   Path length rain limited
23-GHz Band (21.3–23.6 GHz)
   2.5, 5, 10, 20, 30, 40, and 50 MHz channels
   Path length rain limited
28, 29, and 31 GHz (27.5–28.4, 29.1–29.3, and 31.0–31.3 GHz)
   75–850 MHz channels
   Spectrum auctioned and not available to the general public
   Path length rain limited
38 GHz (38.6–40.0 GHz)
   50-MHz channels
   Spectrum auctioned and not available to the general public
   Path length rain limited
70-GHz Band (71.0–76.0 GHz)
   Spectrum unchannelized
   Often paired with the 80-GHz band to achieve very wide duplex channels
   Path length rain limited
80-GHz Band (81.0–86.0 GHz)
   Spectrum unchannelized
   Often paired with the 70-GHz band to achieve very wide duplex channels
   Path length rain limited
90-GHz Band (92.0–94.0 and 94.1–95.0 GHz)
   Band unchannelized and not currently popular
   Path length rain limited

The above frequency ranges are rounded off. See Appendix A for more detailed frequency ranges. In the following section we will observe the typical path distances for the most popular licensed frequency bands.

5.13

PATH DISTANCES

The distance over which a fixed point-to-point microwave radio system can operate reliably depends heavily on the frequency of operation and the geographic and weather characteristics of the terrain near the path. The details of path characteristics are discussed in later chapters. There are thousands of paths already licensed and in use in the United States today. Looking at their path lengths will give an understanding of the path distances that can be expected for each microwave frequency band. To perform a statistical evaluation of all US licensed frequency bands, the entire FCC microwave point-to-point license database of simplex paths was utilized. The following statistics are based on that data. In these graphs, a duplex path is a pair of matched simplex paths (Fig. 5.50).

Figure 5.50 Number of paths and 4-GHz paths.

The national averages of path lengths give only a part of the picture. As we will learn in Chapter 11, high frequency paths are limited by intense rain. The southeastern United States has very intense rainfall but the western United States has very light rainfall. The southeastern United States is a poor propagation area for low frequency microwave paths but the upper western United States is a good propagation area. When we look at the statistics of paths in those areas, we will have more insight as to what is possible in those areas. For low and high frequency paths, we will define a poor propagation area (SE) as latitudes south of 32° and longitudes east of −92°. For low frequency paths, we will define a good propagation area (UW) as latitudes north of 36° and longitudes between −114° and −105°. For high frequency paths, we will define a good propagation area (W) as anywhere west of −114° longitude. The following statistics are based on these filters (Fig. 5.51, Fig. 5.52, Fig. 5.53, Fig. 5.54, Fig. 5.55, Fig. 5.56, Fig. 5.57, Fig. 5.58, Fig. 5.59 and Table 5.2). The 4-GHz paths were not analyzed because there were not enough paths in the areas of interest to be statistically significant. Obviously not all paths are created equal.
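A minimal sketch of the kind of filtering described above is shown below, assuming the FCC simplex-path records have been exported to a CSV file; the file name and the column names (band_ghz, length_mi, lat, lon) are hypothetical placeholders, not actual FCC database field names.

import pandas as pd

# Hypothetical export of the FCC point-to-point simplex path records.
paths = pd.read_csv("fcc_simplex_paths.csv")

southeast = paths[(paths.lat < 32) & (paths.lon > -92)]                 # poor (SE) area
upper_west = paths[(paths.lat > 36) & (paths.lon.between(-114, -105))]  # good area, low bands
west = paths[paths.lon < -114]                                          # good area, high bands

def length_stats(df, band_ghz):
    """Path length statistics (miles) for one band within a filtered region."""
    lengths = df.loc[df.band_ghz == band_ghz, "length_mi"]
    return {"mean": lengths.mean(), "median": lengths.median(),
            "std": lengths.std(), "skew": lengths.skew(), "kurtosis": lengths.kurt()}

print(length_stats(southeast, 6.0))   # e.g., lower 6-GHz paths in the SE filter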

Figure 5.51 Lower 6-GHz and upper 6-GHz paths (entire United States).

5.A APPENDIX

5.A.1 Antenna Isotropic Gain and Free Space Loss

Consider an isotropic transmit antenna radiating a radio signal that is received at a remote location. The power received by the receiving antenna may be estimated as follows:

Pr = receiving antenna power;
Pt = transmitting antenna power;
Are = receive antenna effective area;
Ate = transmit antenna effective area;
d = distance between the transmitting and receiving antennas;
λ = radio signal free space wavelength.

Figure 5.52 The 10 1/2- and 11-GHz paths (entire United States).

Pr = transmitted power received by the receive antenna
   = (received power density) × (receiver effective area)
   = [Pt / (area of a sphere at distance d)] × Are
   = Pt Are / (4π d²)    (5.A.1)

Pr / Pt = isotropic transmit antenna to receive antenna power loss
        = Are / (4π d²)    (5.A.2)

Figure 5.53 The 18- and 23-GHz paths (entire United States).

Now replace the isotropic transmit antenna with a directional antenna.

Pr / Pt = directional transmit antenna to receive antenna power loss
        = (transmit antenna gain relative to an isotropic radiator) × Are / (4π d²)    (5.A.3)

Silver (1949, Eq. 20, p. 177) showed that the gain of an antenna relative to an isotropic radiator is the following:

Gi = antenna gain relative to an isotropic radiator = 4π Ae / λ²    (5.A.4)

Figure 5.54 Lower 6-GHz poor propagation area and good propagation area path lengths.

Pr / Pt = Ate Are / (λ² d²)

This is the well-known Friis transmission loss formula (Friis, 1946). Now, we will reformat the formula into the more familiar path loss form:

Pr / Pt = Gti Gri / LFS = [4π Ate / λ²] [4π Are / λ²] / [16π² d² / λ²]    (5.A.5)

Figure 5.55 Upper 6-GHz poor propagation area and good propagation area path lengths.

Gti = transmit antenna gain relative to an isotropic radiator; Gri = receive antenna gain relative to an isotropic radiator; LFS = free space loss. Now, we convert these to the popular decibel format:

Pr(dBm) − Pt(dBm) = Gt(dBi) + Gr(dBi) − LFS(dB)    (5.A.6)

5.A.2 Free Space Loss

LFS(dB) = 10 log [16π² d² / λ²]    (5.A.7)

Figure 5.56 The 10 1/2-GHz poor propagation area and good propagation area path lengths.

d = distance between the transmit and receive antennas (ft or m); λ(ft) = 0.98357/F(GHz); λ(m) = 0.29980/F(GHz).

LFS(dB) = 96.58 + 20 log[d(miles)] + 20 log[F(GHz)]
        = 92.45 + 20 log[d(km)] + 20 log[F(GHz)]    (5.A.8)

Figure 5.57 The 11-GHz poor propagation area and good propagation area path lengths.

5.A.3 Antenna Isotropic Gain

G(dBi) = antenna gain (relative to an isotropic radiator)
       = 10 log [4π Ae / λ²]
       = 10 log [4π η A / λ²]    (5.A.9)

A = antenna physical area; η = antenna illumination efficiency (power ratio ≤ 1) = E/100; E = illumination efficiency (percentage) = 100η.

G(dBi) = 10 log(4π) + 10 log(E/100) + 10 log(A) − 20 log λ    (5.A.10)

G(dBi) = 11.14 + 10 log(E/100) + 10 log[A(ft²)] + 20 log[F(GHz)]    (5.A.11)

Figure 5.58 The 18-GHz poor propagation area and good propagation area path lengths.

G(dBi) = 21.46 + 10 log(E/100) + 10 log[A(m²)] + 20 log[F(GHz)]

For parabolic antennas, E ≈ 55% (generally between 45% and 65%). For panel antennas and passive reflectors, E ≈ 100%. For passive reflectors, the area is the area projected onto the path (see Appendix A).

Figure 5.59 The 23-GHz poor propagation area and good propagation area path lengths.

5.A.4 Circular (Parabolic) Antennas

G(dBi) = 10.09 + 10 log(E/100) + 20 log[D(ft)] + 20 log[F(GHz)]

G(dBi) = 20.41 + 10 log(E/100) + 20 log[D(m)] + 20 log[F(GHz)]    (5.A.12)

D = diameter of the antenna.
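The appendix formulas can be combined into a simple unfaded link budget. The sketch below (illustrative only) ties Equations 5.A.6, 5.A.8, and 5.A.12 together; the example transmit power, antenna sizes, and path length are arbitrary but typical of the lower 6-GHz examples in this chapter.

import math

def parabolic_gain_dbi(diameter_ft, f_ghz, efficiency_pct=55.0):
    """Parabolic antenna gain, Eq. 5.A.12 (English units)."""
    return (10.09 + 10 * math.log10(efficiency_pct / 100.0)
            + 20 * math.log10(diameter_ft) + 20 * math.log10(f_ghz))

def free_space_loss_db(d_miles, f_ghz):
    """Free space loss, Eq. 5.A.8 (English units)."""
    return 96.58 + 20 * math.log10(d_miles) + 20 * math.log10(f_ghz)

def received_power_dbm(pt_dbm, d_miles, f_ghz, dia_tx_ft, dia_rx_ft):
    """Unfaded receive level from Eq. 5.A.6 (losses other than free space ignored)."""
    gt = parabolic_gain_dbi(dia_tx_ft, f_ghz)
    gr = parabolic_gain_dbi(dia_rx_ft, f_ghz)
    return pt_dbm + gt + gr - free_space_loss_db(d_miles, f_ghz)

# Example: +30 dBm transmitter, 10-ft dishes, 30-mile path at 6.175 GHz.
print(round(received_power_dbm(30.0, 30.0, 6.175, 10.0, 10.0), 1), "dBm")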

5.A.5

Square (Panel) Antennas

G(dBi) = 11.14 + 10 log(E/100) + 20 log[W(ft)] + 20 log[F(GHz)]

G(dBi) = 21.46 + 10 log(E/100) + 20 log[W(m)] + 20 log[F(GHz)]    (5.A.13)

W = width of the antenna.


TABLE 5.2 Microwave Radio Path Length Statistics (path lengths in miles)

Entire United States
Band          Arithmetic Mean   Median   Mode   Standard Deviation   Skew   Kurtosis
4 GHz         28.2              26.9     26     11.5                 2.2    9.6
Lower 6 GHz   21.0              19.4     14     11.2                 1.6    4.9
Upper 6 GHz   18.9              16.7     11     12.2                 1.5    3.6
10.5 GHz      7.9               6.6      5      5.4                  1.9    6.3
11 GHz        10.1              7.7      5      8.7                  2.7    12.4
18 GHz        4.0               3.0      2.50   3.5                  2.8    22.6
23 GHz        2.3               1.4      0.25   2.5                  2.7    12.5

Poor Propagation Area (Southeast United States)
Band          Arithmetic Mean   Median   Mode   Standard Deviation   Skew   Kurtosis
Lower 6 GHz   15.9              15.3     9      7.7                  1.1    3.0
Upper 6 GHz   13.6              12.8     11     7.6                  0.93   2.3
10.5 GHz      4.7               4.2      4      2.9                  1.2    2.9
11 GHz        5.6               4.7      4      3.7                  1.3    1.9
18 GHz        2.1               1.8      1.00   1.5                  3.7    27.5
23 GHz        1.7               1.0      0.25   1.8                  1.7    2.1

Good Propagation Area (Upper Western United States for 6 GHz, Western United States for Others)
Band          Arithmetic Mean   Median   Mode   Standard Deviation   Skew   Kurtosis
Lower 6 GHz   30.7              29.3     22     16.5                 1.0    1.3
Upper 6 GHz   27.9              25.7     15     16.2                 0.78   0.65
10.5 GHz      9.3               8.2      5      5.8                  1.6    4.4
11 GHz        11.4              9.5      6      8.5                  2.2    9.3
18 GHz        4.5               3.4      2.50   3.9                  3.2    28.4
23 GHz        2.7               1.6      0.25   2.9                  2.2    7.1

The aperture antennas typically used in microwave applications have gains in the 20- to 50-dBi range. For reference, a short (Hertzian) dipole has 1.76-dBi gain (ignoring resistive losses). A half wave dipole has 2.15-dBi gain.

5.A.6 11-GHz Two-foot Antennas

In metropolitan areas, cellular operators use many 11-, 18-, and 23-GHz radio paths. These paths are typically short and installed on leased towers. Lease costs are directly related to the size of the antenna. Most operators limit their antenna size to a maximum of 2 ft. Since essentially everyone uses Class A antennas, this has excluded the use of 11 GHz. However, many operators feel the need to use 11 GHz for long paths or in high rainfall regions. The FCC recently revised their rules (Mosley, 2011) to provide "almost Class A" properties for 2-ft antennas. FCC rule 101.115 (f) states, "In the 10,700-11,700 MHz band, a fixed station may employ transmitting and receiving antennas meeting performance standard B in any area. If a Fixed Service or Fixed Satellite Service licensee or applicant makes a showing that it is likely to receive interference from such fixed station and that such interference would not exist if the fixed station used an antenna meeting performance standard A, the fixed station licensee must modify its use. Specifically, the fixed station licensee must either substitute an antenna meeting performance standard A or operate its system with an EIRP reduced so as not to radiate, in the direction of the other licensee, an EIRP in excess of that which would be radiated by a station using a Category A antenna and operating with the maximum EIRP allowed by the rules." Although 2-ft antennas are still Class B, they may be used similarly to Class A antennas [i.e., once they are licensed, they do not have to be changed in the future unless impacted by a "major change" (see Chapter 2)]. The conditions of "almost Class A" operation may be inferred from the rules (Mosley, published yearly): Per FCC 101.115 (b), 11-GHz Class A antennas must meet the standards listed in Table 5.A.1.

TABLE 5.A.1 FCC Antenna Radiation Pattern Requirements

             Boresight      Gain Relative to Boresight (dB)
Antenna      Gain (dBi)     5°–10°   10°–15°   15°–20°   20°–30°   30°–100°   100°–140°   140°–180°
Class A      38             −25      −29       −33       −36       −42        −55         −55
Class B      33.5           −17      −24       −28       −32       −35        −40         −45
Diff. (dB)   4.5            8        5         5         4         7          15          10

Per FCC 101.113 (a), 11-GHz maximum allowable EIRP is +55 dBW (85 dBm). If the boresight gain of a Class A antenna is +38 dBi, then the maximum allowable transmit power into the antenna is +47 dBm. The worst case difference between Class A and Class B antenna side lobes is 15 dB. Therefore, as long as transmitter power does not exceed +47 dBm − 15 dB = +32 dBm, the antenna may be treated as Class A. Very few 11-GHz transmitters exceed this transmit power. Keep in mind that typical 11-GHz Class B 2-ft antennas meet or exceed Class A antenna side lobe standards between 100° and 180°. For these cases, the worst case difference in side lobe power is 8 dB, and the transmit power level limit becomes +47 dBm − 8 dB = +39 dBm. For all practical purposes, this gives all Class B 2-ft 11-GHz antennas the rights of Class A antennas.
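The transmit power limits above are simple decibel arithmetic; the following sketch (illustrative only, not part of the original text) reproduces the reasoning.

MAX_EIRP_DBM = 85.0                 # FCC 101.113(a): +55 dBW at 11 GHz
CLASS_A_BORESIGHT_DBI = 38.0        # Table 5.A.1
WORST_SIDE_LOBE_DIFF_DB = 15.0      # worst case Class A vs. Class B difference
TYPICAL_SIDE_LOBE_DIFF_DB = 8.0     # worst case when the 100-180 degree region already complies

max_tx_into_class_a = MAX_EIRP_DBM - CLASS_A_BORESIGHT_DBI      # +47 dBm
print(max_tx_into_class_a - WORST_SIDE_LOBE_DIFF_DB)            # +32 dBm limit
print(max_tx_into_class_a - TYPICAL_SIDE_LOBE_DIFF_DB)          # +39 dBm limit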

5.A.7

Tower Rigidity Requirements

Antenna structures must maintain the antenna position within acceptable limits. In the United States, the current standard, ANSI/TIA/EIA-222-G (TIA Subcommittee TR-14.7, 2005), allows 10-dB loss of received signal level due to antenna structure twist or tilt under standardized wind and ice loading. Obviously, the direct approach is to use the antenna pattern of the proposed antenna. If the specific antenna decision has not been made when the tower is being specified, the standard offers the following formula for estimating that limit:

5.A.7.1 Parabolic (Circular) Antenna

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 54 λ/D
  = 53.1 / [D(ft) F(GHz)]
  = 16.2 / [D(m) F(GHz)]    (5.A.14)

D = antenna diameter; F = radio operating frequency. Reflectors and square antennas are not addressed in the current standard. The earlier version of this standard (ANSI/TIA/EIA-222-F (TIA Subcommittee TR-14.7, 1996)) listed the following guidelines for θ :

5.A.7.2 Parabolic (Circular) Antenna

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 60 λ/D
  = 59.0 / [D(ft) F(GHz)]
  = 18.0 / [D(m) F(GHz)]    (5.A.15)

D = antenna diameter; F = radio operating frequency.

5.A.7.3 Rectangular Reflector

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 44 λ/W
  = 43.3 / [W(ft) F(GHz)]
  = 13.2 / [W(m) F(GHz)]    (5.A.16)

W = reflector width (as projected along the path); F = radio operating frequency. Square antennas were not addressed but would be similar to rectangular reflectors. Chapter 8 provides the following limits (for a 10-dB power loss):

5.A.7.4 Circular (Projection) Reflector

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 49.8 λ/D
  = 49.0 / [D(ft) F(GHz)]
  = 14.9 / [D(m) F(GHz)]    (5.A.17)

D = reflector diameter (as projected along the path); F = radio operating frequency.

5.A.7.5 Rectangular Reflector

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 42.3 λ/W
  = 41.6 / [W(ft) F(GHz)]
  = 12.7 / [W(m) F(GHz)]    (5.A.18)

W = reflector width (as projected along the path); F = radio operating frequency.

5.A.7.6 Diamond (Projection) Reflector

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 45.2 λ/W
  = 44.5 / [W(ft) F(GHz)]
  = 13.6 / [W(m) F(GHz)]    (5.A.19)

W = reflector width (as projected along the path, measured along the edge of the reflector); F = radio operating frequency.

5.A.7.7 Parabolic (Circular) Antenna

η (illumination efficiency) = 0.65 (65%, worst case):

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 68.1 λ/D
  = 67.0 / [D(ft) F(GHz)]
  = 20.4 / [D(m) F(GHz)]    (5.A.20)

η (illumination efficiency) = 0.55 (55%, typical):

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 74.2 λ/D
  = 73.0 / [D(ft) F(GHz)]
  = 22.2 / [D(m) F(GHz)]    (5.A.21)

η (illumination efficiency) = 0.45 (45%):

θ = maximum allowable twist or tilt (degrees) relative to normal position
  = 82.1 λ/D
  = 80.8 / [D(ft) F(GHz)]
  = 24.6 / [D(m) F(GHz)]    (5.A.22)

D = antenna diameter; F = radio operating frequency.

5.A.7.8 Square Antenna

η (illumination efficiency) = 1.00 (100%, typical)    (5.A.23)

For the typical square antenna, the twist limits are exactly the same as for the square or diamond reflector (as reflectors in the far field have illumination efficiency of 100%).
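For tower specification work, the twist/tilt limits above reduce to one-line calculations. The sketch below (illustrative; the antenna diameters and frequencies are arbitrary examples) evaluates the current ANSI/TIA/EIA-222-G parabolic antenna limit of Equation 5.A.14.

def max_twist_tilt_deg(diameter_ft, f_ghz):
    """Allowable structure twist or tilt (degrees) for a parabolic antenna,
    Eq. 5.A.14 (10-dB received signal loss criterion, English units)."""
    return 53.1 / (diameter_ft * f_ghz)

# Examples: a 10-ft dish at 6.175 GHz and a 2-ft dish at 11.2 GHz.
print(round(max_twist_tilt_deg(10.0, 6.175), 2), "deg")
print(round(max_twist_tilt_deg(2.0, 11.2), 2), "deg")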

REFERENCES Ahmed, A. S., “Role of Particle-Size Distributions on Millimetre-Wave Propagation in Sand/Duststorms,” IEE Proceedings, pp. 55–59, February 1987. Ahmed, A. S., Ali, A. A. and Alhaider, M. A., “Airborne Wave into Dust Storms,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 25, pp. 593–599, September 1987. Aubin, J. F., “A Brief Tutorial on Antenna Measurements,” Microwave Journal , Vol. 48, pp. 92–108, August 2005. Balanis, C. A., Modern Antenna Handbook . New York: John Wiley & Sons, Inc., 2008. Bean, B. R. and Dutton, E. J., Radio Meteorology. Washington, DC: U. S. Government Printing Office 1966. Bullington, K., “Radio Propagation Fundamentals,” Bell System Technical Journal , Vol. 36, pp. 593–626, May 1957. Chen, H. and Ku, C., “Calculation of Wave Attenuation in Sand and Dust Storms by the FDTD and Turning Bands Methods at 10–100 GHz,” IEEE Transactions on Antennas and Propagation, Vol. 60, pp. 2951–2960, June 2012. Chu, L. J., “Electromagnetic Waves in Elliptic Hollow Pipes of Metal,” Journal of Applied Physics, Vol. 9, pp. 583–591, September 1938. Cogdell, J. R., McCue, J. J. G., Kalachev, P. D., Salomonovich, A. E., Moiseev, I. G., Stacey, J. M., Epstein, E. E., Altshuler, E. E., Feix, G., Day, J. W. B., Hvatum, H., Welch, W. J. and Barath, F. T., “High Resolution Millimeter Reflector Antennas,” IEEE Transactions on Antennas and Propagation, Vol. 18, pp. 515–529, July 1970. Communication Equipment Specialists and Grasis Towers LLC, Tower 101 . Lee’s Summit: Communication Equipment Specialists and Grasis Towers LLC, 2002. Davies, W. S., Hurren, S. J. and Copeland, P. R., “Antenna Pattern Degradation Due to Tower Guy Wires on Microwave Radio Systems,” IEE Proceedings, pp. 181–188, June 1985. Dong, X., Chen, H. and Guo, D., “Microwave and Millimeter-Wave Attenuation in Sand and Dust Storms,” IEEE Antennas and Wireless Propagation Letters, Vol. 10, pp. 469–471, May 2011. Friis, H. T., “A Note on a Simple Transmission Formula,” Proceedings of the I. R. E and Waves and Electrons, pp. 254–256, May 1946.


Friis, H. T., “Microwave Repeater Research,” Bell System Technical Journal , Vol. 27, pp. 183–246, April 1948. Friis, H. T. and Lewis, W. D., “Radar Antennas,” Bell System Technical Journal , Vol. 26, pp. 219–317, April 1947. Ghobrial, S. I. and Sharief, S. M., “Microwave Attenuation and Cross Polarization in Dust Storms,” IEEE Transactions on Antennas and Propagation, Vol. 35, pp. 418–425, April 1987. Giger, A. J. and Barnett, W. T., “Effects of Multipath Propagation on Digital Radio,” IEEE Transactions on Communications, Vol. 29, pp. 1345–1352, September 1981. Goldhirsh, J., “Attenuation and Backscatter from a Derived Two-dimensional Duststorm Model,” IEEE Transactions on Antennas and Propagation, Vol. 49, pp. 1703–1711, December 2001. Hansen, R. C., Fundamental Limitations of Antennas, Proceedings of the IEEE, pp. 170–182, February 1981. Hartman, W. J. and Smith, D., “Tilting Antennas to Reduce Line-of-Sight Microwave Link Fading,” IEEE Transactions on Antennas and Propagation, Vol. 25, pp. 642–645, September 1977. Hollis, J. S., Lyon, T. J. and Clayton, Jr., L., Microwave Antenna Measurements. Atlanta: ScientificAtlanta, 1970. Howell, J. Q., “Microstrip Antennas,” IEEE Transactions on Antennas and Propagation, Vol. 23, pp. 90–93, January 1975. Institute of Electrical and Electronics Engineers, IEEE Std 951–1996, IEEE Guide to the Assembly and Erection of Metal Transmission Structures, Revised 2009. International Telecommunication Union—Radiocommunication Sector (ITU-R), “Report F.2059, Antenna characteristics of point-to-point fixed wireless systems to facilitate coordination in high spectrum use areas”, pp. 1–18, 2005. Ishimaru, A., Wave Propagation and Scattering in Random Media, Volume 1. New York: Academic Press, pp. 41–68 (Fig. 3–5), 1978. Kerr, D. E., Propagation of Short Radio Waves, Volume 13. New York: McGraw-Hill, 1951. Kizer, G. M., Microwave Communication. Ames: Iowa State University Press, 1990. Marcuvitz, N., Waveguide Handbook . London: Peter Peregrinus Ltd., 1986 (reprint of the 1951 McGrawHill Rad Lab Series, Volume 10, with errata). Mosley, R. A., Code of Federal Regulations (CFR), Title 47 - Telecommunication, Chapter1, Parts 101.113 and 101.115 . Washington, DC: Office of the Federal Register, published yearly. Munson, R. E., “Conformal Microstrip Antennas and Microstrip Phased Arrays,” IEEE Transactions on Antennas and Propagation, Vol. 22, pp. 74–78, January 1974. Norton, M. L., “Microwave System Engineering Using Large Passive Reflectors,” IRE Transactions on Communications Systems, pp. 304–311, September 1962. Pozar, D. M. and Schaubert, D., Microstrip Antennas. New York: John Wiley & Sons, Inc., 1995. Ramo, S., Whinnery, J. R. and Van Dozer, T., Fields and Waves in Communication Electronics, First Edition. New York: John Wiley & Sons, Inc., 1965. Ramsay, J., “Highlights of Antenna History,” IEEE Communications Magazine, Vol. 19, pp. 4–16, September 1981. Ramsdale, P. A., “Antennas for Communications,” IEEE Communications Magazine, pp. 28–36, September 1981. Rummler, W. D., “A New Selective Fading Model: Application to Propagation Data,” Bell System Technical Journal , Vol. 58, pp. 1037–1071, May–June 1979. Rummler, W. D., “Time- and Frequency-Domain Representation of Multipath Fading on Line-of-Sight Microwave Paths,” Bell System Technical Journal , Vol. 59, pp. 763–795, May–June 1980. Ruze, J., “Antenna Tolerance Theory—A Review,” Proceedings of the IEEE, pp. 633–640, April 1966. Schelkunoff, S. A., Electromagnetic Waves. New York: Van Nostrand, pp. 
476–479, 1943. Schelkunoff, S. A., Electromagnetic Fields. New York: Blaisdell, pp. 224–238, 1963.


Schelkunoff, S. A. and Friis, H. T., Antennas, Theory and Practice. New York: John Wiley & Sons, Inc., p. 40, 1952. Schelleng, J. C., Burrows, C. R. and Ferrell, E. B., “Ultra-Short Wave Propagation,” Bell System Technical Journal , Vol. 21, pp. 125–161, April 1933. Silver, S., Microwave Antenna Theory and Design, Radiation Laboratory Series, Volume 12. New York: McGraw-Hill, 1949. Slayton, W. T., Design and Calibration of Microwave Antennas Gain Standards, NRL (Final) Report 4433 . Washington, DC: Naval Research Laboratory, November 1954. Southworth, G. C., Principles and Applications of Waveguide Transmission. New York: Van Nostrand, 1950a. Southworth, G. C., “Principles and Applications of Waveguide Transmission,” Bell System Technical Journal , pp. 295–342, July 1950b. Stratton, J. A., Morse, P. M., Chu, L. J. and Hutner, R. A., Elliptic Cylinder and Spheroidal Wavefunctions. New York: John Wiley & Sons, Inc., 1941. Telcordia Technologies, Generic Requirements GR-180-Core, Generic Requirements for Hardware Attachments for Steel, Concrete and Fiberglass Poles, May 2008. Telcordia Technologies, Special Report SR-1421, Blue Book—Manual of Construction Procedures, October 2011. TIA Subcommittee TR-14.7, Structural Standards for Steel Antenna Towers and Antenna Supporting Structures, ANSI/TIA/EIA-222-F , Arlington: Telecommunications Industry Association, 1996. TIA Subcommittee TR-14.7, Structural Standard for Antenna Supporting Structures and Antennas, ANSI/TIA/EIA-222-G, Arlington: Telecommunications Industry Association, 2005 (Addendum 1, 2007). Tilston, W. V., “On Evaluating the Performance of Communications Antennas,” IEEE Communications Magazine, Vol. 19, pp. 18–27, September 1981. Western Electric Company, “Significant Characteristics of Bell System Microwave Antennas,” Engineering Handbook, Systems Equipment & Standards. New York: Western Electric Company, Engineering Division, 1970. Wheeler, H. A., “Small Antennas,” Antenna Engineering Handbook , Third Edition, Johnson, R. C., Editor. New York: McGraw-Hill, pp. 6–1–6–18, 1993. Wu, K. T. and Achariyapaopan, T., ”Effects of Waveguide Echoes on Digital Radio Performance”, IEEE Global Telecommunications Conference (Globecom), Vol. 3, pp. 47.5.1–47.5.5, December 1985.

6 DESIGNING AND OPERATING MICROWAVE SYSTEMS

6.1

WHY MICROWAVE RADIO?

Digital transmission is usually accomplished by cable or wireless methods. For long-distance transmission, fiber optics and microwave radios are the typical choices. Fiber optics has many obvious advantages, the primary advantage being transmission bandwidth. The disadvantages are also obvious: high cost of installation and maintenance, lead time for implementation, and inability to redeploy assets once installed. Fiber optics is also not practical in extreme terrain and climate locations. Microwave radio has a transmission bandwidth limitation due to channel bandwidth restrictions. However, it has several attractive features. The cost of installation and time for implementation can be much less than for fiber optics. Since only the terminal locations need to be maintained, maintenance cost is lower and transmission security is greater than for fiber optics. Radio is much easier to restore after natural disasters and it often survives them with little degradation. Fiber optics is economical for communication between major metropolitan areas and within an urban or suburban environment, but is seldom economical when smaller cities and rural areas are involved. Perhaps, the greatest advantage of radio is its ease of deployment. Microwave radio greatly simplifies system planning. As long as site-to-site path clearance is available, a radio path can be installed almost anywhere (Fig. 6.1). For short paths, even lack of line of sight (LOS) can sometimes be overcome.

6.2

RADIO SYSTEM DESIGN

There are many ways to design microwave networks, nearly as many as there are designers. Although the major tasks are usually the same, their order and importance are not. The following tasks are typical:

Create preliminary network design
Validate the site coordinates and tower information
Determine the system architecture and design objectives (transmission and network management)
Create candidate design based on terrain data and site and fiber hub candidate list
Perform path LOS surveys to confirm path availability and clearance
Perform site surveys
Perform site and tower mapping
Perform tower loading and attachment analysis
Perform inside and outside plant feasibility analysis
Create final path design (based on actual cleared paths and site availability)
Perform regulatory requirements tasks: FAA antenna structure studies (if needed), frequency coordination, prior coordination notice and Form 601 data preparation
Perform final detail engineering
Create site integration plan
Create installation engineering plan
Generate bill of materials
Perform installation, test, and documentation

Figure 6.1 Microwave radio can be placed nearly anywhere. Source: Reprinted with permission of Alcatel-Lucent USA, Inc.

The first order of business is to formalize the need. This usually means defining the end points that must communicate with each other. For a corporate data network or telephone system, this is typically a number of "core" sites and a number of "edge" user locations. For high capacity data networks, all sites may be "core" or all "edge." In any case, the first step is to establish termination points with capacity, reliability (robustness), and QoS (priority and timeliness) needs.

The next step is to establish a hierarchy (meta-architecture). A logical segregation of sites or functions may be necessary. A typical case is a large grouping of radio backhaul paths tied together or to a central office by a broadband fiber network. The actual hierarchy will be heavily influenced by the choice of transport technology [plesiochronous/"asynchronous" TDM, synchronous TDM (SONET/SDH), ATM, or IP packets]. Many, if not most, new microwave networks are converged IP technology.

Next is the consideration of architecture or topology. This will be influenced by transmission needs among users. Clusters may be needed or the network may be relatively uniform. Next, the physical span of the network must be considered. If a large number of nodes are all in a metropolitan area with relatively close spacing (e.g., a cellular network or a local business campus), a high frequency radio network (11 GHz or higher frequency) is usually most practical. If the network is a small number of nodes spaced over a large geographic area (e.g., a railroad or pipeline network), a low frequency (lower or upper 6 GHz) radio network is typically required.

This is illustrated by the following maps of the lower 6- and 18-GHz networks currently in place in the United States (based on the FCC licensed service database) (Fig. 6.2 and Fig. 6.3).

Figure 6.2 Lower 6-GHz band utilization.

Figure 6.3 18-GHz band utilization.

The basic architectures are as shown in Figure 6.4. The decision of architecture is based on perceived needs, such as network length (for coverage), diameter (for delay minimization), scalability (for flexibility to handle changing requirements), and connectedness (for robustness/reliability/survivability). Backbone and spur is the most common for long-distance designs where the transmission requirement is dispersed along a defined route. Concatenated rings are often used to cover a metropolitan area (telecommunications providers use this to advantage in the Los Angeles and San Francisco areas where system redundancy and coverage are critical). Star, hub and spoke, and mesh are common high frequency metropolitan design approaches.

Figure 6.4 Typical architectures: star, backbone-spur, hub-spoke, ring, concatenated rings, mesh/ring, and mesh.

Ring architectures (tied to a fiber aggregation hub), as they typically use nonredundant hardware configurations, provide the highest coverage for the lowest cost. However, they are the least flexible. An entire ring is required regardless of the final coverage requirement. Equipment cannot be redeployed if requirements change, and capacity expansion is awkward (adding a second ring or bifurcating the initial ring with additional hubs) as initial deployment of rings usually requires high bandwidth radios. Rings usually home to a single fiber aggregation hub (fiber point of presence or POP). An alternative is to use two fiber aggregation hubs with a backbone and spur architecture connecting them ("necklace"). The main design characteristics of both approaches are similar.

The hub and spoke provides more flexibility than the ring but at a higher cost. The hub and spoke equipment can be redeployed as coverage requirements change without impacting existing network functionality. Capacity upgrades are usually easier because the initial implementation is relatively low bandwidth radios. With the migration of microwave networks to IP technology, hot standby configurations are being replaced with dual radios connected to routers. With dual IP radios, the path capacity is restricted on failure but, under normal operation, capacity is double that of a hot standby radio. Replacing hot standby with IP-routed radios is simple. However, if both radios are to be used for active traffic, route link aggregation (point-to-point LAG) is required. Currently, not all routers can separate traffic for transmission on multiple physically parallel (collocated) paths and reaggregate it at the far end.

Small mesh networks (diamonds and bifurcated rings) require symmetric loads on all nodes to be effective. Large mesh networks are the most flexible and redundant, but are also the most expensive, owing to the large number of redundant paths. The performance of these networks (transmission capacity, reliability, and latency) is difficult to predict. Gupta and Kumar (2000) note that for two-dimensional meshes, per node traffic throughput capacity degradation with additional nodes is generally proportional to W/[(n log n)^1/2] for randomly located nodes and traffic patterns. W is a single-path (node to adjacent node) traffic capacity in bits per second and n is the total number of nodes in the network. If the nodes are optimally placed in a disk pattern and traffic is optimized, the capacity degradation becomes proportional to W/(n^1/2). Jun and Sichitiu (2003) noted that if gateways to other networks [such as to the Internet or to a high speed fiber optic metropolitan area network (MAN)] are added to a random two-dimensional mesh network, the extra congestion created by the gateways causes the per node traffic throughput capacity degradation to be proportional to W/n. Sometimes, 60-GHz radios are spread throughout a building to create a three-dimensional mesh network. Gupta and Kumar (2000) note that for spherical three-dimensional meshes, per node traffic throughput capacity degradation is generally proportional to W/[(n log² n)^1/3]. Since channels near the center of a mesh tend to experience higher traffic than near the periphery, this result would be effectively the same for square networks.

Anticipated network growth should also be considered in metropolitan designs. Mesh networks are relatively easy to expand. However, consideration of throughput reduction when nodes are added should be kept in mind.
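The cited scaling results are proportionalities only (the constants are not specified), but they can still be used to compare designs of different sizes. The sketch below is an illustrative rendering of those proportionalities, not an implementation from the referenced papers.

import math

def relative_per_node_capacity(n, w=1.0, kind="2d_random"):
    """Proportional per-node throughput for the mesh scaling laws cited above
    (constants of proportionality are not specified by the results)."""
    if kind == "2d_random":      # Gupta and Kumar: W / (n log n)^(1/2)
        return w / math.sqrt(n * math.log(n))
    if kind == "2d_optimal":     # optimally placed nodes: W / n^(1/2)
        return w / math.sqrt(n)
    if kind == "2d_gateway":     # Jun and Sichitiu, with gateways: W / n
        return w / n
    raise ValueError(kind)

# Example: relative per-node capacity when growing from 50 to 500 random nodes.
print(relative_per_node_capacity(500) / relative_per_node_capacity(50))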


Figure 6.5 Metropolitan radio networks.

Integrated radio and fiber optics subnetworks are quite common. The examples in Figure 6.5 show two different approaches for cellular networks. Both network designs use extensive fiber-optic rings to pull the networks together and add survivability to the network. Low frequency networks using all indoor radios for long-distance networks have been engineered since the late 1940s. With the emphasis on metropolitan networks, high frequency radios (often split package radios with RF on the tower behind the antenna and a baseband interface unit in the telecommunications facility or just an IP radio directly behind the antenna) have become more common.

High frequency networks represent unique challenges for the network designer. Typically, they are very large (300 to over 1000 site networks are common). All microwave radio paths are limited by terrain clearance. However, high frequency radios are also limited in performance by rain attenuation ("fading"). Minimizing the impact of rain outages is a dominant concern for the network designer. While necklace (backbone connecting two fiber aggregation hubs) and hub/spoke architectures are the most common, some designers use novel designs to minimize the rain effect. While heavy rain will always impact a node, the architecture is usually designed in such a way that the impact to the network is minimized.

6.3

DESIGNING LOW FREQUENCY RADIO NETWORKS

Low frequency radio paths (frequency 10 GHz) are a few links connecting a business campus. However, more commonly, today these systems comprise hundreds of sites covering an entire city and providing backhaul for new data services. Low frequency and high frequency radio path design has much in common. However, the site volume and churn of many high frequency network designs can be a daunting transmission engineering challenge. Typically, the service provider is attempting to utilize known sites on a rental or lease basis. Site acquisition or evaluation may be going on simultaneously with the path design. Hundreds of paths must be designed (and redesigned) very quickly (on the order of days, not weeks). If time is important (and it usually is for large networks), automating at least part of the design process is highly desirable.

Virtually all modern metropolitan designs have the radio backhaul networks converging at a fiber POP. The fiber POP is usually part of a SONET ring connecting to a central office or Internet access location. The first design step is critical to overall coverage and success of the market design. That step is to estimate which potential POPs will be suitable for radio designs (Aoun et al., 2006; Dutta Kubat Liu, 2003; McGregor and Shen, 1977). If the network uses IP radios, end to end concatenated path latency must be evaluated. If the candidate network is hub–spoke, the criterion might be how many sites can be connected to the POP by paths meeting the maximum path length, minimum path clearance, and the maximum number of cascaded hops. If the candidate network is ring and spoke, the criterion might be the number of paths that meet the maximum path length and minimum path clearance. If fiber is not currently at the potential sites, the process is complicated by finding sites where fiber ring access can be provided at a reasonable price. Currently, this is a semiheuristic process of matching sites to expected coverage areas.

Many urban high frequency designs involve hundreds of sites (and thousands of potential paths). Manual path and system design is costly from a time and labor perspective. To speed up the path design process, software can yield significant labor and time savings. Candidate radios and antennas are chosen. Potential sites are recorded in a database with name(s), site coordinates, site address, and maximum practical height. Sites are also graded by capability to support microwave radios. An example would be high grade (buildings or self-supporting towers capable of supporting many radios), medium (guyed towers and monopoles capable of supporting some radios), and low (small or skinny structures capable
of supporting only one radio, suitable only for end sites). Design rules are established for using those sites (Doshi and Harshavardhana, 1998; Dutta and Mitra, 1993; Gersht and Weihmayer, 1990; Ruston and Sen, 1989; Dijkstra, 1959). Some are common to all designs and some are architecture specific. Most current network design packages do not support sophisticated radio network designs (Bragg, 2000; Kasch et al., 2009). However, new versions of microwave radio path design programs are starting to include some of these features. First, a "spider web" of all possible radio paths is created. This will typically be thousands of candidate paths. These "cleared paths" are then used to synthesize an appropriate system design (Fig. 6.8).

Perhaps the most challenging aspect of semiautomated high frequency design for cellular backhaul networks is determining accurate antenna structure locations, structure types, and potential antenna heights. At this time, another challenging part of this design process is obtaining adequate terrain clutter (trees, buildings, and other structures) data at a reasonable price (as noted earlier for low frequency design). All current radar and laser terrain mapping methodologies have their limitations, as noted earlier. Adding to the difficulty of automating high frequency design is the lack of accurate site geographic coordinates (longitude, latitude, and elevation). For cellular networks, this data is notoriously unreliable. Location inaccuracies of the order of over 100 ft are common. This significantly increases the error in path clearance analysis (since vegetation and building clearance is highly location dependent).

The basic idea is to take a group of sites (typically predefined), look at a subset of all possible paths, and discover a set of paths that provides reasonable coverage while meeting predefined design rules (Chattopadhyay et al., 1989; Ju Rubin, 2006; Ko et al., 1997; Kubat et al., 2000; Luss et al., 1998; Soni et al., 2004). The optimum design is not required (and often would take too long). Since time to market is generally the overriding consideration, an adequate design that meets minimum standards is usually the criterion. Two architectures popular in metropolitan high frequency design are discussed in the following sections.

6.4.1

Hub and Spoke

A critical aspect of this type of design is the proper placement of hub sites. This is a challenging issue because coverage will be a factor of antenna structure height, but usually access to a fiber ring is also required for backhaul.

Figure 6.8 Semiautomated design process.

Figure 6.9 Automated hub–spoke design.

the hub) are usually acceptable. For low capacity systems, four or five may be acceptable. Eventually, the end to end rain outage path availability and/or concatenated path latency become the dominant limitations. When an automated design process is used, it is usually based on a breadth-first (Knuth, 1997) search of paths meeting the site and path length maximums. This process is continued until adequate area coverage is obtained. Figure 6.9 is an example (Dallas/Fort Worth) of the results of this type of automated program (the long dark lines are fiber rings).
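A minimal sketch of the breadth-first coverage search described above, assuming the "spider web" of cleared candidate paths is already available. The site names and the hop limit are illustrative only.

```python
from collections import deque

def hub_spoke_coverage(cleared_paths, hub, max_hops):
    """Breadth-first search outward from a fiber POP ("hub").

    cleared_paths: (site_a, site_b) pairs that already meet the path length
    and clearance rules.  Returns {site: hop count from the hub} for every
    site reachable within max_hops cascaded hops."""
    neighbors = {}
    for a, b in cleared_paths:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    hops = {hub: 0}
    queue = deque([hub])
    while queue:
        site = queue.popleft()
        if hops[site] == max_hops:
            continue                       # tier limit reached; stop cascading
        for nxt in neighbors.get(site, ()):
            if nxt not in hops:
                hops[nxt] = hops[site] + 1
                queue.append(nxt)
    return hops

# Example: up to three tiers of paths in series with the hub.
web = [("POP-1", "A"), ("A", "B"), ("B", "C"), ("C", "D"), ("POP-1", "E")]
print(hub_spoke_coverage(web, "POP-1", max_hops=3))
```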

6.4.2

Nested Rings

Here, the task is to define methods of creating multiple rings from each of several fiber POPs that do not interfere with each other. As with hub and spoke, the hub location is important. However, it is not as challenging because only a few hubs are required. The limitation on ring size is primarily the ring transport capacity (which limits per site capacity). This limitation must be defined before beginning network design. For IP networks, end to end latency may also be a criterion.

In the ring design, paths that begin and end at a node ("hub") are designed. Either spurs or additional ("secondary") rings are added to the primary ring if additional sites need to be reached. The limitation of additional rings and spurs is the primary ring transport capacity. Usually secondary rings are used if significant additional distance coverage is needed (using the loop to minimize the cascaded path unavailability). If only a site or two needs to be added, simple concatenated spurs are used. Once the area is grouped and sectored around the hubs, individual rings must be determined. Typically, this uses a depth-first (Cormen et al., 2001) (to yield a result quickly) or breadth-first (Knuth, 1997) (to yield all possible paths) search method. The search is usually based on meeting a minimum and maximum number of nodes in the ring and, of course, the maximum path distance per frequency band. After primary rings are created, secondary and tertiary rings and spurs are added to complete the coverage. This is challenging to automate primarily because customers typically have many design requirements to be met. Generally, each set of rings (and finally spurs) must be selected from the set of all possible rings ordered by the appropriate criteria (such as overall ring distance). Each network layer is added until the design is complete. Totally automated algorithms have also been utilized but are challenging to implement if customer constraints are significant.

For networks where hub reliability is a concern, a variant of this design method is to use two hubs rather than one ("necklace" design). All paths that were a ring now terminate on two hubs rather than one. While this improves overall reliability, it complicates the design by requiring two hubs for each set of paths. Otherwise, the design methodology is similar to the basic nested ring.
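The ring search can be sketched the same way. The following depth-first example, using hypothetical site names, enumerates simple rings that begin and end at a hub and fall within a minimum and maximum node count; a real tool would also enforce the per-band maximum path distance and the customer's other design rules when building the neighbor map.

```python
def find_rings(neighbors, hub, min_nodes, max_nodes):
    """Depth-first enumeration of simple rings that begin and end at the hub.

    neighbors: {site: set of adjacent sites} built from the cleared paths.
    min_nodes/max_nodes bound the number of sites in a ring (hub included).
    Each ring is reported twice, once per traversal direction."""
    rings = []

    def extend(path):
        for nxt in sorted(neighbors.get(path[-1], ())):
            if nxt == hub and len(path) >= min_nodes:
                rings.append(path + [hub])          # closed a ring back at the hub
            elif nxt not in path and len(path) < max_nodes:
                extend(path + [nxt])

    extend([hub])
    return rings

web = {"Hub": {"A", "D"}, "A": {"Hub", "B"}, "B": {"A", "C"},
       "C": {"B", "D"}, "D": {"C", "Hub"}}
print(find_rings(web, "Hub", min_nodes=4, max_nodes=6))
```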

6.5

FIELD MEASUREMENTS

After an initial tabletop design effort, the sites and paths must be finalized. Site surveys validate the antenna mounting structure and equipment locations. Actually finding the correct location and structure is a surprisingly difficult task. Many locations are hundreds of feet from any street, so street names and numbers can be problematic. Site coordinates are notoriously wrong (for many reasons). Often, structure types have been modified significantly or new structures built since the last drawings were made. Sometimes, owing to faulty records, the site is at one location, the address of record at another, and the site coordinates of record at a third. Even when the correct location is found, there may be several different potential antenna structures at that location. Owing to its criticality to the path design, finalizing site locations and structures is not trivial.

The next significant task is verifying that the potential path (with anticipated antenna heights) meets the minimum path clearance. If the site coordinates and antenna heights are known (and that is a big "if"), in theory, satellite images could be used to validate a path. While this is helpful, lack of current data (and lack of height information) is a significant limitation of this approach. In Canada, some operators use helicopters equipped with terrain tracking radar to map terrain heights. In urban areas, stereo images that yield terrain heights with 1- to 2-m vertical and horizontal resolution are available. However, their cost is currently prohibitive for large urban networks. Although it is time consuming and difficult to meet typical urban high frequency development schedules, using path surveyors to verify path clearance is the current method of final path verification.

6.6

USER DATA INTERFACES

In North America, most national and regional networks are gravitating to a three-layer model (Fig. 6.10). At the highest, fastest layer, very high speed proprietary synchronous systems dominate. At the intermediate level, virtually all systems are standardized synchronous systems. At the lowest user interface layer, there is a wide range of interfaces. Most systems today use TDM technology, but migration to IP interfaces and transport is clearly the future.

Figure 6.10 National networks. (Layers shown: dense optical networks: wavelength services, wavelength division multiplexing, high level restoration; SONET/SDH: high speed protection, time division multiplexing, time slot grooming; local services: delivery of services to the end user.)

When specifying a physical data interface, at least three basic questions must be answered: What defines a bit (signaling format)? Where are the bits located (method of synchronization)? Where are the sequenced continuous bits that represent a message (packet or frame message format and sequencing)?

In the early development of TDM digital transmission, several different signaling techniques were used (Stallings, 1984). Today, there are three basic TDM bit signaling formats (Fig. 6.11):

Nonreturn to Zero (NRZ). Signaling voltage maintains its level until the next signal.

Return to Zero (RZ). Signaling voltage returns to zero level before the next signal.

Bipolar Alternate Mark Inversion (AMI). Binary 0s are represented by zero amplitude. Successive binary 1s are represented by alternating positive and negative levels of the same amplitude.

Figure 6.11 Basic binary coding formats.

The bit coding formats shown in Figure 6.12 are used in telecommunications signals (American National Standards Institute (ANSI), 1996). BPV means bipolar violation (explained later in this chapter). B means normal bipolar signal and V means bipolar violation. Bipolar signals and the above abbreviations will be explained subsequently. Messages are subdivided into frames (or packets) (Fig. 6.13).
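For illustration, a minimal sketch of bipolar AMI coding and the bipolar violation (BPV) concept introduced above; the bit pattern is arbitrary.

```python
def ami_encode(bits):
    """Bipolar AMI: 0 -> zero level; successive 1s alternate +1/-1 (see Fig. 6.11)."""
    level, out = +1, []
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(level)
            level = -level
    return out

def has_bipolar_violation(symbols):
    """A BPV is two consecutive nonzero pulses of the same polarity."""
    marks = [s for s in symbols if s != 0]
    return any(a == b for a, b in zip(marks, marks[1:]))

signal = ami_encode([1, 0, 1, 1, 0, 1])
print(signal)                         # [1, 0, -1, 1, 0, -1]
print(has_bipolar_violation(signal))  # False for properly coded AMI
```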

Figure 6.12 Telecommunications binary coding formats.

Figure 6.13 Message frames.

The choice of user data interface will impact the way data is managed and therefore can influence architecture. Both North American and European TDM plesiochronous systems (American National Standards Institute (ANSI), 1993; American National Standards Institute (ANSI), 1995a; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 1993; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2001; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 1998) have been the primary user data interface since the late 1960s (Fig. 6.14 and Fig. 6.17).

The North American "asynchronous" hierarchy defines a common digital interface location termed a DSX point. This is a location where digital signals of common rate and signal shape can be interconnected (patched or "rolled") and tested. It is a concept only used in North American Digital Signal (DS-N) formats (it is included in the ANSI T1-102 (American National Standards Institute (ANSI), 1993) specification but is not in the ITU-T G.703 (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2001) specification for DS1 signals) (Fig. 6.15 and Fig. 6.16). Equipment line build out (LBO) circuits are required in digital equipment to achieve cross-connect power and waveform requirements. The LBO circuits are typically specified for a reference length of a reference cable. If a different cable is used, the cable reference length changes.

All data sources and sinks must be synchronized (Bregni, 2002; Okimi and Fukinuki, 1981). The early data networks were composed of pairs of point to point channel banks with the master channel bank synchronizing the slave channel bank. All pairs of channel banks were synchronized but plesiochronous ("asynchronous") relative to other pairs. For all these channel banks, all traffic terminated at each bank.

Figure 6.14 North American “asynchronous” (plesiochronous) digital hierarchy.

Figure 6.15 DS1 cross-connect and reference cable lengths.

Figure 6.16 DS3 or STS-1 cross-connect and reference cable lengths.


Figure 6.17 European plesiochronous digital hierarchy.

Figure 6.18 Mutually synchronized DS1/E1 data network.

With the introduction of subrate (n × "DS0" or n × "E0") drop and insert cross-connects, not all traffic dropped at a single digital terminal. In this environment, mutual synchronization was required to avoid frame slips (Fig. 6.18). If a single cross-connect connects all low speed digital terminals, they all slave their synchronization to the cross-connect. If multiple cross-connects are used, the cross-connects must each be mutually synchronized. If the cross-connects are distributed throughout the network in intelligent channel banks containing small cross-connects, each channel bank must be synchronized. Radios transporting the low speed data merely loop time to that signal, transport the data transparently, and need no synchronization to the data.

Plesiochronous hierarchies support subrate (slower than DS0 or E0) signals. For these, the nominal signaling speed is predefined. The short data strings make absolute timing accuracy unnecessary. Standard speeds are 110, 150, 300, 600, 1200, 2400, 3600, 4800, 9600, and 19,200 bits per second (b/s). The typical electrical format is V.24/28 or RS-232. Data is organized into 8- to 11-bit strings. Zero (space) is high voltage and one (mark) is low voltage (inverted data). The first bit is a space start bit followed by 5, 6, 7, or 8 character bits. Usually, the character is 8 bits (7 character bits plus a parity bit, or 8 character bits and no parity bit). The parity bit can be even, odd, mark, space, or none. The character bits usually represent an ASCII character and are sent with the least significant bit first and the most significant bit last. The bit string ends with 1, 2 (and rarely 1 1/2) stop bits.

Higher speed signaling (integer multiples of 64 kb/s) is organized by the data source and sink. For these signals, transmission is usually synchronous with the transmission clock being provided by the


transmission equipment. The data source and sink loop time to this clock reference. Typical electrical format is V.35 or RS-422. For plesiochronous DS1, DS3, E1, and E3 signals, signal timing is determined loosely by a predefined nominal frequency with specified absolute accuracy (ppm or b/s). The data source uses an external or internal reference frequency to transmit the digital signal. Received signal synchronization is achieved by frequency and phase locking to the incoming data stream of bits. Successful receiver synchronization depends on maintaining a minimum of data transition activity. Data activity is maintained through the use of an appropriate line code. The data source encapsulates the baseband signal to be transmitted into a predefined digital frame. The data sink locates the digital frame and retrieves the transported data. For DS1, DS3, E1, and E3 signals, the basic signal is Bipolar AMI RZ with 50% duty cycle. Pulse shape must meet predefined shapes (“pulse mask”) at the data receiver. The signal is bipolar with sequential 1s of opposite polarity. Consecutive nonzero signals of the same polarity represent a BPV. DS1 signals use one of three transmission formats: AMI. This is a bipolar signal with sequential 1s of opposite polarity (AMI). The Mux/Demux equipment ensures no more than eight consecutive 0s. To maintain a minimum pulse density of 12.5%, one bit out of eight must be reserved for pulse density maintenance. This limits a DS0 data channel to 56 kb/s for data transmission. B8ZS. (bipolar with eight-zero suppression). This is a bipolar signal with a sequence of 1s of opposite polarity (AMI). The sequence 000VB0VB is inserted for eight consecutive 0s. Since this requires the signal to be buffered for 8 bits, this format introduces a nominal delay of about 5 μs. Since no in-channel bits are used, it supports 64 kb/s clear channel transmission per DS0. ZBTSI. (zero byte time slot interchange). This format processes the data stream to remove excess zeros. It requires data channel in ESF format (so an overhead channel is available). It introduces approximately 1.5-ms delay. It is rarely used today. DS3 signals use bipolar with three-zero substitution (B3ZS). Each block of three consecutive zero signals is replaced by 00V or B0V (V represents a bipolar violation and B represents normal bipolar signaling). The choice of substitution block is made so that the polarity of consecutive V signal elements alternates to avoid introducing a DC component into the signal (number of B pulses between consecutive V pulses is odd). E1, E2, and E3 signals use high density bipolar of order three (HDB3). Each block of four consecutive zero signals is replaced by 000V or B00V. The choice of substitution block is made so that the polarity of consecutive V signal elements alternates to avoid introducing a DC component into the signal (number of B pulses between consecutive V pulses is odd). E4, SONET, and SDH signals use coded mark inversion (CMI). This is an NRZ 100% duty cycle code with two signaling voltage levels of the same amplitude but opposite polarity. Binary 1 is represented by either voltage level being sustained for one full signaling time interval. Successive binary 1s use alternate voltage levels. For binary 1, there is a positive transition at the start of the binary unit time interval if in the preceding time interval the signal level was low. 
For binary 1, there is a negative transition at the start of the binary unit time interval if the preceding last binary 1 signal level was high. Binary 0 is represented by both voltage levels, each being sustained consecutively for half a signaling time interval. For binary 0, there is always a positive transition at the midpoint of the signaling time interval. DS1 signals come in two formats as shown in Figure 6.19 and Figure 6.20 (American National Standards Institute (ANSI), 1995a). DS3 signals have one general format as shown in Figure 6.21 (American National Standards Institute (ANSI), 1995a). DS3s can be operated in one of four ways: M13. This is the original format. It supports DS2 mapping. It does not support C-bit alarm, status, or loopback features. C-Bit Parity. This mode supports end to end performance management (PM) and control by redefining DS2 stuffing bits (but it is typically implemented differently by different manufactures). Owing to

Figure 6.19 DS1 superframe format. (Frame: 125 μs long; frame = 24 DS0 channels (8 bits each) + one F (frame) bit. Superframe ("D4"): composed of 12 frames; F bits used for framing (sequence 100011011100); two signaling channels, A and B; signaling bits robbed from the 8th bit of each 6th DS0 byte; signaling typically used for E&M signaling.)

proprietary implementations, it often has interoperability issues with different vendors’ equipment. It is incompatible with M13 format. Clear channel. This format usually retains only DS3 framing. It is used for encrypted data or video transmission. Lack of zero suppression and nonstandard pulses can cause compatibility issues. Syntran. This is an obsolete synchronous format rarely used today. SONET and SDH TDM formats (American National Standards Institute (ANSI), 1993; American National Standards Institute (ANSI), 1995b; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2003) are very popular for medium- and long-distance data transport (Fig. 6.22). The North American SONET format is defined (American National Standards Institute (ANSI), 1995c; American National Standards Institute (ANSI), 1997) for four data rates as shown in Figure 6.23. Within the lowest speed format, STS-1 or STM-0, various synchronous virtual tributaries (VT-X) are defined for encapsulating plesiochronous and packet signals. For SONET STS-1, four VTs are defined for DS1 signals (Fig. 6.24). Locked Mode. This is an obsolete mode that locks all VTs together (no VT pointer processing) within the STS-1. The DS1 must be locked to the STS-1 device. Floating Byte Synchronous Mode. This mode pointer processes (moves VT forward or backward) one byte (8 bits) at a time. DS0 grooming (drop and insert) requires slip buffers. The DS1 must be synchronized to the STS-1 network. Floating Bit Synchronous Mode. This mode provides single bit pointer processing. DSO grooming can be performed without slip buffers. The DS1 must be synchronized to the STS-1 network. Floating Asynchronous Mode. This mode provides multiple bit pointer processing. The VT is synchronous with the DS1 but asynchronous with the STS-1. This mode supports DS0 grooming

Figure 6.20 DS1 extended superframe format. (Frame: 125 μs long; frame = 24 DS0 channels (8 bits each) + one F bit. Extended superframe: composed of 24 frames; four signaling modes; F bits used for an 8 kb/s overhead channel: 2 kb/s framing channel (sequence 001011), 2 kb/s CRC-6 word for PM, and 4 kb/s facility data link (FDL); supports end to end PM using the FDL and CRC-6.)

Figure 6.21 DS3 framing format.


Figure 6.22 North American Synchronous Optical Network (SONET) and European Synchronous Digital Hierarchy (SDH).

Figure 6.23 North American SONET framing format.

without slip buffers or network synchronization of DS1. Since this mode completely decouples the operation of DS1s from the SONET network, it is the preferred mode in use today. As with plesiochronous networks employing drop and insert functionality, all DS0s within the DS1 must be mutually synchronized (American National Standards Institute (ANSI), 1999; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2003; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000a).

Figure 6.24 STS-1 locked and floating VT 1.5 framing formats.

Figure 6.25 Simple synchronous networks.

By their very nature, the network elements (NEs) (typically add/drop multiplexers) in SONET or SDH networks must be mutually synchronized (American National Standards Institute (ANSI), 1996). If the network is small and one set of sync clocks can see all NEs, a relatively low cost clock is adequate (Fig. 6.25). If no single pair of clocks can see all NEs (and the interconnects are all synchronous), high quality clocks are required (Fig. 6.26). Interconnects also affect the required sync clock quality. If the interconnects between synchronous networks are plesiochronous, then no mutual synchronization among networks is required and low quality clocks are adequate (Fig. 6.27).

Figure 6.26 Compound synchronous network.


Figure 6.27 Plesiochronously connected synchronous networks.

Figure 6.28 Synchronously connected synchronous networks.

However, if the networks are interconnected synchronously, all networks require a high quality clock (since they cannot all use the same one) (Fig. 6.28). If external synchronization clocks are required, one of two architectures is usually used (Bregni, 2002; Okimi and Fukinuki, 1981) (Fig. 6.29). It is sometimes useful to synchronize a chain of clocks (typically through cascaded NEs or BITS shelves) (Fig. 6.30). The number of clocks in the series (cascaded) should be limited to avoid excessive wander and network breathing (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000d; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000e; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2002b).

Figure 6.29 Hierarchical (root-leaf) and independent (flat) synchronization networks. (Hierarchical: a primary reference source and master clock feed chains of slave clocks. Independent: each network segment has its own GPS receiver, primary reference source, and master clock.)

Figure 6.30 Synchronization chain (an external reference, e.g., stratum 1, a primary clock, and a chain of slave clocks).

G.703 (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2001) suggests that the maximum number of cascaded clocks is 60, although many industry sources suggest informally that this number should be on the order of 20 or 30. For simple junction stations, cross-wiring the synchronization signal between NEs is usually adequate. For larger sites, a building integrated timing supply (BITS) shelf with an appropriate quality clock is usually required to mutually synchronize the various NEs. For channel banks and small cross-connects, the synchronization source is typically a 64-kb/s composite clock. For higher speed devices, a DS1 or E1 synchronization signal is usually used. For large networks, redundant BITS shelf (DS1 or E1) synchronization sources may be used to increase sync reliability. DS1 or E1 sources of synchronization may have Sync Messaging (source synchronization quality signals) (American National Standards Institute (ANSI), 1996; American National Standards Institute (ANSI), 1994; International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2003) embedded within the DS1 or E1 signals. For small networks, sync messaging is implemented in the individual NEs. For large networks, sync messaging is implemented in the BITS shelf.

Figure 6.31 Popular packet formats.

Most SONET or SDH microwave radios are SONET or SDH compatible, not compliant. That means they transport the SONET or SDH signals but probably do not support the synchronization and alarm and performance monitoring standards. They appear as active fiber (i.e., do not interface with SONET or SDH network management). Since they are loop-timed from the incoming synchronous signal, they do not require external synchronization. Today, most user interfaces are based on a PSN. The most popular modern forms are LAN and ATM (Fig. 6.31). Ethernet LAN packet interfaces (Institute of Electrical and Electronics Engineers, 1999–2008) are becoming the de facto interface for many new user data circuits (Fig. 6.32).

Figure 6.32 Local area network (LAN) "Ethernet" physical interfaces.

Figure 6.33 Typical message encapsulation.

The above formats are the direct physical interface to the user. The user message must be encapsulated so it can be routed to the far end destination. Encapsulation and routing occur by layer (Fig. 6.33). Routing of packets among nodes is usually described on the basis of an IP four-layer (Internet Engineering Task Force (IETF), 1989) or OSI seven-layer model (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 1994) (Fig. 6.34).

Figure 6.34 Message encapsulation layers.

The OSI model supports the following layers:
7. Application Layer: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, SMPP, SMTP, SNMP, Telnet, DHCP, Netconf, RTP, SPDY
6. Presentation Layer: MIME, XDR, TLS, SSL
5. Session Layer: NetBIOS, SAP, L2TP, PPTP, Named Pipes
4. Transport Layer: TCP, UDP, SCTP, DCCP, SPX
3. Network Layer: IP (IPv4, IPv6), ICMP, IPsec, IGMP, IPX, AppleTalk
2. Data Link Layer: ATM, SDLC, HDLC, ARP, CSLIP, SLIP, GFP, PLIP, IEEE 802.3, Frame Relay, ITU-T G.hn DLL, PPP, X.25, Network Switch
1. Physical Layer: EIA/TIA-232, EIA/TIA-449, ITU-T V-Series, I.430, I.431, POTS, PDH, SONET/SDH, PON, OTN, DSL, IEEE 802.3, IEEE 802.11, IEEE 802.15, IEEE 802.16, IEEE 1394, ITU-T G.hn PHY, USB, Bluetooth, Hubs

MPLS operates between the traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer) in the OSI model and is often referred to as a Layer 2.5 protocol. The TCP/IP model (RFC 1122) supports four layers:
4. Application Layer: BGP, DHCP, DNS, FTP, HTTP, IMAP, IRC, LDAP, MGCP, NNTP, NTP, POP, RIP, RPC, RTP, SIP, SMTP, SNMP, SSH, Telnet, TLS/SSL, XMPP
3. Transport Layer: TCP, UDP, DCCP, SCTP, RSVP, ECN
2. Internet Layer: IP (IPv4 • IPv6), ICMP, ICMPv6, IGMP, IPsec
1. Link Layer: ARP/InARP, NDP, OSPF, Tunnels (L2TP), PPP, Media Access Control (Ethernet • DSL • ISDN • FDDI)

Routing between nodes is via OSI Layer 1, Layer 2, or Layer 3 (Fig. 6.35). The four- and seven-layer models are historical and neither exactly fits modern packet networks. In practice, the layers are described rather loosely. The latest Ethernet-based transport products (routers and radios) will have native Ethernet interfaces and emulated (pseudowire) interfaces for TDM (DS1, DS3, E1, E3, SONET, SDH, and Frame Relay) and ATM circuits. Many organizations, including the Internet Engineering Task Force (IETF RFCs 3985 through 5287), the International Telecommunication Union—Telecommunication Standardization Sector (ITU-T Y series), and the Metro Ethernet Forum (MEF 3 and 8), have recommendations for pseudowire circuit emulation. Many next generation radios are native Ethernet transport products. They carry IP signals as direct inputs and outputs. They carry TDM signals (e.g., DS1 and DS3) as emulated (pseudowire) circuits. Since the TDM signals must be encapsulated, transmitted as packets, and then reassembled at the receive end, increased delay (relative to a conventional TDM radio) will occur. The point to point TDM signal will have been transported by a series of separate packets. These packets are routed and received individually. TDM signals have maximum allowable wander and jitter requirements.

Figure 6.35 Typical message routing.

This requires that the TDM signal be buffered and then clocked out at the same rate as it was received at the packet source. Since the packet network is not synchronous, recovering the TDM signal clock requires additional consideration. See Section 3.16 for further discussion of this topic.

Standards for quality and availability of IP circuits are still in the early stages of development (Song et al., 2007; ITU-R Report F.2058, International Telecommunication Union—Radiocommunication Sector (ITU-R), 2006). The most significant factors mentioned are bandwidth, delay, and lost packets. ITU-T Recommendation Y.1540 (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2002a) defines QoS/CoS parameters. Currently, parameters of interest include successful packet transfer, errored packets, lost packets, spurious packets, average packet delay, and packet delay variation. See Section 3.16 for further discussion on this topic.

Error performance of the radio network can impact the PSN. During the time that multipath or rain attenuation is occurring, the received signal may have repetitive errors. For typical routers, a single error in the IP header will cause the entire packet to be discarded. A single radio error results in a signal gap and a frame loss for the TDM circuit. Significant repetitive errors from a fading radio will cause some routers to disable the transmission port from that radio. The router views that connection as a defective "flapping" port. It is important that the radio receiver recompute the IP packet CRC error checking block to avoid propagating these error-related issues.
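A minimal illustration of why a single radio bit error discards a whole packet: the frame's CRC no longer matches, so a typical router drops it. The framing and the use of CRC-32 here are illustrative only, not any particular radio's or router's implementation.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a 32-bit CRC, standing in for a packet's error checking block."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_ok(frame: bytes) -> bool:
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == crc

frame = frame_with_crc(b"example IP packet payload")
corrupted = bytearray(frame)
corrupted[0] ^= 0x01                 # a single radio bit error during a fade
print(crc_ok(frame))                 # True
print(crc_ok(bytes(corrupted)))      # False: the entire packet is discarded
```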

6.7 OPERATIONS AND MAINTENANCE Telecommunications operations companies provide services. These services must be initiated, administered, and maintained. The primary goal of network management is to support telecommunications services by maintaining efficient, reliable operation both when the network is under stress because of overload or failure and when it is changed by the introduction of new equipment and services. At the same time, network management must increase the performance of the network in terms of the quality and quantity of service provided to its end users. Network management is a recursive, three-step process, applied to several functions necessary to network operation: • Data analysis and interpretation • Situation assessment • Planning and response generation. The ISO telecommunications management network model describes five functions of network management: fault, configuration, accounting (or administration), performance, and security (FCAPS) (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000b): Fault Management. A set of functions that enables the detection, isolation, and correction of abnormal operation of the network or its elements: • Alarm surveillance • Failure localization • Testing. Configuration Management. A set of functions to exercise control over, identify, collect data from, and provide data to NEs: • Status and Control • Installation • Provisioning • Network overall ongoing operations and management. Accounting (Billing) Management. A set of functions that enables the use of the network service to be measured and the cost for such use to be determined and assigned to an appropriate subscriber.


PM. A set of functions to evaluate and report on the behavior of telecommunications equipment and the effectiveness of the network or its elements: • Status and control • Performance monitoring. Security Management. A set of functions that protects telecommunications networks and systems from denial of service and the unauthorized disclosure of information, modification of information, or access to resources. Over the years, various specialized electronic systems (typically computer-based) have been developed to support these activities. Collectively, these software and hardware functions, as well as their inter- and intracommunication circuits, have been termed the telecommunications management network (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000c) (Fig. 6.36). We now focus on network managers most closely involved with the actual NEs (e.g., radios, add/drop multiplexers, cross-connects, and routers) and the element managers and network management equipment that directly interact with them. Element managers are the computer-based products which directly configure and provision the NEs. They may be simple craft interfaces or sophisticated systems providing end-to-end or system-wide connections, path restoration, or root cause analysis. Typically, they are very tightly coupled to the NE and are vendor-unique. They provide the specialized, evolving, interactive, unique functionality of a particular NE. Element managers and other network managers provide one or more of the basic five TMN functions (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 2000b): performance, fault, configuration, accounting, and security management. Several NMs may operate in parallel or the structure of NMs may be tiered. Of particular interest to network operators are fault management and PM.

6.7.1

Fault Management

The basic function of a fault management system is to allow one or more users to perform commands at and receive alarms or status events from a remote location device or unit. Typical fault management functions include the following:
• Accepting and acting on fault detection notifications
• Tracing and identifying fault locations
• Correcting faults
• Carrying out diagnostic tests
• Maintaining and examining fault history reports.

ITU-T suggests (International Telecommunication Union—Telecommunication Standardization Sector (ITU-T), 1992) that the purpose of fault management is to minimize both the occurrence and impact of failures and to ensure that, in the case of failure, the right notification sends the right personnel to the right place with the right equipment and the right information at the right time to perform the right action. A computer running-specialized management software and connected to a data communication network acts as the local interface for the user. The remote location monitoring unit may be a discrete device or may be included in the function of the remote transmission equipment. The remote equipment is often called a network element (NE). The communication channel between the computer(s) and the monitoring units is called the data communications channel (DCC), data communications network (DCN), embedded communications channel (ECC), embedded operations channel (EOC), or simply the telemetry channel. This channel may be a physically separate channel (over an external network) or a virtual channel within the NE payload. Communication over the telemetry channel is via a predefined language called a protocol. The protocol defines the dictionary and syntax of words appropriate for command and responses. If the words are short

Figure 6.36 Telecommunications management network and its functional management layers.

or they are composed of ASCII characters, communication is generally asynchronous. If the words are long, synchronous communication is required. The source and termination of a data channel is termed data terminal equipment (DTE). The master and remote units are DTE devices. The equipment forming the transmission channel is called data circuit-terminating equipment (DCE). Radios and modems are examples of DCE. If the transmission channel is synchronous, the DCE supplies the transmit and receive data clocks. Proper operation of the fault management system assumes error-free data transmission between the master and the remote units. Loss of communication or corrupted data can cause errors in the telemetry channel. Most protocols have error-checking methods that preclude bad data words from being reported as valid or causing a remote control. Errors can also cause the data connection between the origination point and the termination point to be lost. This condition is the usual cause of “no report” status. Multiple NEs with the same protocol address can create confusing or unpredictable behavior. In some protocols, this can generate a “no report” condition. For OSI-based systems (TL1 in North America and CMISE/CMIP in Europe), loss of data due to buffer overflow or long delays in message transfer due to alarm storms is common. Similarly, IP-based systems (SNMP) have similar problems although the predominant issue is loss of data due to data collisions. For both OSI- and IP-based systems, the network manager must have specialized algorithms to mitigate these issues. Two types of fault management architectures are common. The first is based on a peer–peer relationship between the user computer and the remote units. In this architecture, any remote unit can (in theory) communicate with any fault management master at any time. The computer used to support this architecture is termed a manager. Examples of this type of network are SNMP, CMISE, and TL1 (Fig. 6.37). Although simple in concept, multiple remote units attempting to contact multiple masters at the same time significantly stress the speed and reliability of the telemetry network. The advantages of this architecture are that interaction is possible between all network devices (peer-to-peer communication) and it is easily scaled to large networks. The limitations of this architecture include potential congestion, loss of communication, uncontrolled transmission delays, and unknown reliability. The use of relatively large (verbose) standardized alarm messages makes this system relatively slow. SNMP’s use of UDP protocol on a data channel shared with other communications makes alarm messaging inherently unreliable. Since messages are never expected unless an alarm occurs, the manager must determine the loss of communication to a site or the site’s functional state. Loss of alarm messages and failure to learn of site and equipment failures are common issues. The manager must mitigate these. The second architecture is based on a master–slave relationship between the user computer and the remote units. The computer used to support this architecture is termed a master. Examples of this are vendor proprietary protocols such as MCS-11 and FarScan (Fig. 6.38). In this architecture, the master polls one remote unit at a time. The advantage of this approach is that telemetry channel activity is minimized and data collisions are eliminated (unless multiple NEs share the same name/address). 
Figure 6.37 Peer–peer network management.

Figure 6.38 Master–slave architecture.

The master always knows whether the remote unit or site is present and operating. Use of compacted, predefined packets makes this approach relatively fast. Usually, this architecture uses

a dedicated communications channel with error-detecting protocols. This maximizes alarm reliability as well as fast detection of site or communication channel failure. This architecture is optimum for moderatesized networks. However, for very large networks, the master may take a long time to determine the status of all NEs. An additional limitation is that communication between remotes is not possible. The only allowed communication is between remotes and the master. Use of multiple active masters is difficult. Facilities and equipment are the two basic entities monitored by fault managers. Facilities are the signals carried or supported by the transmission equipment. These are of specific interest to the customer. Performance monitoring and performance thresholds are most commonly associated with facilities. Equipment is the actual hardware that is used to support the transportation of the customer facilities. Equipment is of particular interest to the network operator. Alarms and status (nonfailure conditions or events) are most commonly associated with equipment.

6.7.2

Alarms and Status

Traditionally, alarm or event messages have been classified as binary alarms or status conditions. An alarm is a binary state indicative of equipment failure. A status is a binary state that represents a condition of equipment not associated with failure. Typically, alarms or status conditions are indicated by contact closures. An “off” condition is an open circuit or battery voltage and an “on” condition is closure to ground. If the alarm sensing circuitry receives the binary alarms as contact closures, the alarm (status) points are termed parallel. If the alarm sensing circuitry accepts the alarm (or status) data as a preprocessed serial binary data stream [such as telemetry byte oriented serial (TBOS)], the alarms (status) are termed serial. Alarms are unipolar if they are only reported when an event occurs. Bipolar alarms report the transition from “no event” to “event” status as well as the transition for “event” to “no event” status. If alarms are not released until they have been reported over the fault management telemetry channel, they have been “stretched” or “latched.” Alarm integration is a process by which an alarm is not declared until it has been continuously present for a predefined period of time. The time period is typically 2.5 s in terminal equipment and a few tens of milliseconds in transport equipment. Virtual alarms (derived alarms) are user-defined alarms based on logical combinations of other alarms. The Bell System and Telcordia (previously Bellcore) define three basic alarms. Notification of an upstream facility outage is an alarm indication signal (AIS) or blue signal. It is used to suppress alarms downstream from the failure. A local unprotected facility outage is a red signal. A downstream facility outage is a remote defect indication (RDI) or yellow signal. Telcordia has further defined alarm types and levels (Telcordia Technologies, 2000a). Alarm types are service affecting (SA) and nonservice affecting (NSA). An SA alarm indicates an equipment failure which causes loss of the transported (baseband) signal. An NSA alarm indicates that an equipment failure has occurred but that functionality was automatically restored by backup equipment. Critical alarms are SA alarms that indicate failures that could affect many users or considerable bandwidth. Telcordia defines critical alarms as failures that require immediate corrective action independent of the time of the day


(early Telcordia documents suggested that a critical alarm was a failure of more than 5 DS1s). Major alarms are SA alarms that affect fewer users or less bandwidth. Telcordia defines major alarms as failures that require immediate attention (early Telcordia documents suggested that a major alarm was a failure of 1–5 DS1s). Minor alarms are NSA alarms or SA alarms that indicate a failure that affects few customers or little bandwidth. Telcordia defines minor alarms as NSA failures (early Telcordia documents suggested that a minor alarm was an NSA failure or an SA failure that affected 17 GHz) links 99.9850% free of SES for hop

Quality objectives are defined as error performance during the worst month. Performance is measured only when the circuit is available (using a 10-s on/off window). Typical path degradation is multipath fading and short-term interference.

Figure 7.13 Legacy international path quality objectives.


It defines three types of services:

Type 1. Analog transmission (3.1-kHz audio signals).
Type 2. Low speed digital transmission [data rates below the primary rate (DS1 or E1)].
Type 3. High speed digital transmission [data rates at or above the primary rate (DS1 or E1)].

Quality (error-performance) objectives for Type 2 systems are to be based on G.821 (superseded by G.826) and F.697 (superseded by F.1668). Quality objectives for Type 3 systems are to be based on F.1189 (now withdrawn). Availability objectives for Type 2 systems are block allocations of 99.99% for medium quality applications and 99.999% for high quality applications. Availability objectives for Type 3 systems are not defined. This recommendation, while still in force, has limited scope and is based on obsolete recommendations. It should not be used for new systems.

7.4.2

Current Quality Objectives

Radio quality (error-performance) objectives are described in F.1668-1 (ITU-R, 2005). Error Performance Objectives (EPOs) are defined for ESRs, SESRs, and BBERs. These parameters were defined above in the discussion of G.828. As noted above, when radios are operating under normal conditions, they run essentially error free [Owing to testing time limitations, radios with low speed (1.5 or 2 Mb/s) tributaries are generally tested to confirm that the bit error ratio is 45 Mb/s) tributaries that the bit error ratio is = 0.500

(8.6)

(8.7)

For a diamond reflector (W = H and φ = 45°):

PdB = 10 log {[sin(πuW/√2) / (πuW/√2)]⁴}     (8.8)

W = width of any side of the diamond

For uW ≅ (π/180)(W/λ)θ, with θ in degrees, the following relationships apply:

θ1dB = 30.1 λ/D degrees
θ3dB = 51.7 λ/D degrees
θ10dB = 90.3 λ/D degrees
θ20dB = 120 λ/D degrees

The first side lobe is −26.5 dB below the peak boresight power. The envelope of the diamond reflector radiation pattern is

PdB = 40 log [sin(2.221 uW) / (2.221 uW)],   uW < 0.70     (8.9)
PdB = 6.02 − 40 log(πuW),   uW ≥ 0.700     (8.10)

For the preceding reflector discussions, the shape of the reflector is the shape projected onto the path of radio transmission. For example, a square or circular reflector is physically a rectangle or ellipse, respectively. All dimensions are for the projected shape, not the physical shape. If the passive reflector is rotated in the plane of pattern measurement, the projected width is the physical width multiplied by the cosine of the angle between a line perpendicular to the face of the reflector and the line of radio wave propagation. This angle is half the angle formed at the reflector by the incoming and outgoing radio paths.
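A short sketch evaluating the diamond reflector envelope (Eqs. 8.9 and 8.10) and the projected width rule just described. The 10 ft reflector, 6.7 GHz frequency, and angles are assumed example values; the constant 0.9836 converts frequency in GHz to free-space wavelength in feet.

```python
from math import sin, cos, log10, pi, radians

def diamond_envelope_db(u_w):
    """Radiation pattern envelope of a diamond passive reflector (Eqs. 8.9-8.10)."""
    if u_w < 0.70:
        x = 2.221 * u_w
        return 40 * log10(abs(sin(x) / x)) if x != 0 else 0.0
    return 6.02 - 40 * log10(pi * u_w)

def projected_width(physical_width, included_angle_deg):
    """Projected width of a rotated reflector: physical width times the cosine of
    half the included angle between the incoming and outgoing paths."""
    return physical_width * cos(radians(included_angle_deg / 2.0))

# Assumed example: 10 ft diamond, 6.7 GHz, 0.5 degrees off boresight.
wavelength_ft = 0.9836 / 6.7
u_w = (pi / 180.0) * (10.0 / wavelength_ft) * 0.5    # uW = (pi/180)(W/lambda) * theta
print(round(diamond_envelope_db(u_w), 1))
print(round(projected_width(10.0, 90.0), 2))          # 90 degree included angle
```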

8.2.2

Passive Reflector Near Field Power Density

For circular and square passive reflectors, the maximum (center of the reflector) near field power density is well known (Bickmore and Hansen, 1959) (Fig. 8.2).

8.2.2.1 For the Circular Reflector:

PNNF = 10 log [S(Δ)/S(Δ = 1)] = 10 log {13.14 [1 − cos(π/(8Δ))]}     (8.11)

8.2.2.2 For the Square Reflector:

PNNF = 10 log [S(Δ)/S(Δ = 1)] = 20 log {4.05 [C²(1/(2√Δ)) + S²(1/(2√Δ))]}     (8.12)

Here Δ is the distance from the reflector normalized to the far field reference distance 2D²/λ, and C( ) and S( ) are the Fresnel cosine and sine integrals.

The circular reflector has an infinite number of peaks of 14.2 dB, with Δ = 0.125 (ΔdB = −9.031) being the farthest from the reflector. The square reflector has a peak of 11.2 dB at Δ = 0.1704 (ΔdB = −7.685). The square reflector peaks become smaller as measurements are made closer to the reflector. The limiting value is 6.13 dB.
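The near field expressions in Eqs. 8.11 and 8.12 are easy to evaluate numerically. The sketch below (using SciPy's Fresnel integrals for the square reflector) reproduces the 14.2 dB and approximately 11.2 dB peaks quoted above.

```python
import numpy as np
from scipy.special import fresnel

def circular_reflector_nf_db(delta):
    """Eq. 8.11: on-axis near field power density of a circular passive reflector."""
    return 10 * np.log10(13.14 * (1 - np.cos(np.pi / (8 * delta))))

def square_reflector_nf_db(delta):
    """Eq. 8.12: on-axis near field power density of a square passive reflector."""
    s, c = fresnel(1.0 / (2.0 * np.sqrt(delta)))   # scipy returns (S, C)
    return 20 * np.log10(4.05 * (c**2 + s**2))

delta = np.logspace(-2, 0, 2000)                        # normalized distance, 0.01 to 1
print(round(float(circular_reflector_nf_db(0.125)), 1))  # about 14.2 dB (farthest peak)
sq = square_reflector_nf_db(delta)
print(round(float(sq.max()), 1), round(float(delta[sq.argmax()]), 3))  # about 11.2 dB near 0.17
```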

Figure 8.2 Passive reflector near field power density (10 log10[S(Δ)/S(Δ = 1)] versus ΔdB = 10 log10[Δ], for circular and square reflectors).

The general case for circular and square antennas is numerically derived subsequently. The reflectors are the limiting cases of η = 1.0 (100% illumination efficiency). As will be noted later, the abovementioned formulas are actually limiting cases where D/λ > 10 or W/λ > 10. For moderate to large antennas, they accurately predict the peak power density but do not always predict its exact location. More accurate numerical methods follow.

8.3 CIRCULAR (PARABOLIC) ANTENNAS Commercial microwave radio transmit and receive antennas are usually large, circular-shaped parabolic reflectors using waveguide-driven feedhorns to illuminate the reflecting surface. The radio antennas are characterized by a circular aperture with energy distributed over the aperture in such a way that off-axis (“side lobe”) energy is minimized while energy directly in front of the antenna (“boresight”) remains large. The analytical analysis difficulty is related to the nonuniform distribution of energy across the antenna aperture. Most commercial microwave transmit antennas place most of the energy into the center of the antenna. The energy power is tapered toward a finite (“pedestal”) value at the edge of the antenna. The purpose of this tapered power illumination is to reduce the spurious side lobe responses (reduce spurious radiation and reception). However, the undesired effect is the reduction of the gain of the antenna (relative to a uniformly illuminated antenna). This gain reduction is called a power efficiency factor η.

8.3.1

Circular (Parabolic) Antenna Far Field Radiation Pattern

Aperture antenna analysis requires precise knowledge of the illumination of the aperture. Previously, attempts (Sciambi, 1965; Silver, 1949) to define it required multiple variables with no guidance on how to vary them to achieve a typical antenna illumination. Hansen (1976a) solved this problem by using a pedestal parabolic antenna illumination defined by the following function:

PI(dB) = 20 log [I0(πH √(1 − ψ²)) / I0(πH)]     (8.13)

ψ = normalized radial distance from the center of the antenna (between 0 and 1); I0( ) is the modified Bessel function of the first kind of order zero.

Figure 8.3 Circular antenna far field radiation patterns. (Patterns shown for uniform illumination, aperture efficiency = 100%, and tapered illumination with aperture efficiencies of 75%, 50%, and 25%; 2 ft diameter, 5 GHz unlicensed band (5.5 GHz), D/λ = 11, plotted over +30° to −30°; 10 ft diameter, lower 6 GHz licensed band (6.2 GHz), D/λ = 66, plotted over +5° to −5°.)

This has the distinct advantage of making the illumination a function of only one parameter, H (which can be related to antenna illumination efficiency). Based on this circular antenna illumination, Hansen derived the far field antenna pattern:

PdB = 20 log {H I1(π√(H² − u²)) / [√(H² − u²) I1(πH)]},  for u ≤ H
PdB = 20 log {H J1(π√(u² − H²)) / [√(u² − H²) I1(πH)]},  for u > H     (8.14)

u = (D/λ) sin(θ)

Examples of circular far field antenna patterns are presented in Figure 8.3. For shaped antennas with the same illumination, the far field antenna pattern is similarly a function of u = (D/λ) sin(θ). The function sin(θ) is a nearly linear function of θ for small values of θ. For θ < 0.7854 rad (45°), θ ≈ sin(θ) with accuracy of better than 10%. Therefore, for values of θ up to 45°, the antenna side lobe value is directly related to [D/λ]θ, as shown in Figure 8.3.
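A minimal numerical sketch of Eq. 8.14 using SciPy's Bessel functions. The antenna size, frequency, and the value H = 1.0 are assumed for illustration (larger H means heavier edge taper; a catalog efficiency would first be converted to H through Eqs. 8.16 and 8.17).

```python
import numpy as np
from scipy.special import iv, jv   # modified and ordinary Bessel functions

def hansen_pattern_db(u, H):
    """Far field pattern of a circular aperture with Hansen's one-parameter
    illumination (Eq. 8.14), with u = (D/lambda) * sin(theta)."""
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    inside = u <= H
    a = np.sqrt(H**2 - u[inside] ** 2)
    b = np.sqrt(u[~inside] ** 2 - H**2)
    norm = iv(1, np.pi * H)
    out[inside] = 20 * np.log10(np.abs(H * iv(1, np.pi * a) / (a * norm)))
    out[~inside] = 20 * np.log10(np.abs(H * jv(1, np.pi * b) / (b * norm)))
    return out

# Assumed example: 10 ft antenna at 6.425 GHz (D/lambda about 65).
D_over_lambda = 10 * 6.425 / 0.9836
theta_deg = np.array([0.01, 0.5, 1.0, 2.0])
u = D_over_lambda * np.sin(np.radians(theta_deg))
print(np.round(hansen_pattern_db(u, H=1.0), 1))
```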


Figure 8.4 Circular antenna illumination (illumination in dB relative to the antenna center versus distance from the center of the antenna normalized to the edge, for antenna efficiencies η from 0.1 to 1.0).

For some frequency planning studies, FCC Category A and B antenna envelopes are used to limit off-boresight antenna patterns. These limits begin at −20 or −25 dB below the boresight value. All that is needed to complete them is a near boresight envelope. A simple envelope formula that is accurate (within 1 dB over the range of 0 to −25 dB and 0.35 ≤ η ≤ 0.65) is

PdB = Cfac log [sin(u)/u]     (8.15)

Cfac = −24.6752 + 221.881η − 103.738η² + 96.6907η³

H is a function related to antenna power efficiency, η, defined by the following equation:

η = 4 I1²(πH) / {π² H² [I0²(πH) − I1²(πH)]}     (8.16)

The following curve-fitted equation, accurate for 0.1 ≤ η ≤ 1.0, makes H a function of η (Fig. 8.4):

H = (10789.06678518257 − 21713.73557586721η + 11772.42148035185η² − 847.7526882273281η³) / (1 + 8454.692672442079η − 14801.06001940764η² + 5699.562595386604η³ + 652.2024360393683η⁴)     (8.17)

Hansen's antenna illumination function is similar to the actual illumination of commercial antennas. This is demonstrated by the close match between the calculated antenna pattern and the actual commercial antenna patterns near the boresight (Fig. 8.5). The antenna pattern off-boresight is a function of antenna feedhorn/support structure scattering, phase pattern changes of the feedhorn, and reflector illumination variation. Therefore, deviation of the actual performance from the theoretical for angles away from the boresight is to be expected. It is interesting to note that the side lobe level and null angles of practical antennas are often quite different from those calculated theoretically.

Figure 8.5 Parabolic antenna far field radiation patterns (measured and calculated radiated power in dB relative to maximum versus radiation angle in degrees relative to boresight; 10 ft diameter, 6.425 GHz, 43.88 dBi, η = 0.5801, and 8 ft diameter, 11.20 GHz, 46.98 dBi, η = 0.6096).

Figure 8.6 Typical circular antenna efficiency (efficiency (gain factor) as a power ratio versus frequency in GHz for 2, 4, 6, 8, 10, 12, and 15 ft antenna diameters).

8.3.2

Circular (Parabolic) Antenna Efficiency

While antenna illumination efficiency is seldom provided for commercial microwave antennas, all manufacturers list antenna diameter and isotropic gain for a given frequency. Antenna illumination efficiency, η, for commercial antennas may be estimated from this data. The power gain of an aperture antenna (Kizer, 1990) is given by:

g = 4πAη/λ²     (8.18)

This yields the following equations for antenna efficiency:

10 log(η) = −10.1 + G(dB) − 20 log[f(GHz)] − 20 log[D(ft)]     (8.19)
10 log(η) = −20.4 + G(dB) − 20 log[f(GHz)] − 20 log[D(m)]     (8.20)

η = antenna relative power efficiency (0 to 1)

This formula works because the parabolic antenna itself is nearly lossless and loss of gain is essentially all due to illumination. Typical values of η for various typical commercial antennas were derived using these formulas (Fig. 8.6).

While circular antenna efficiency is easily inferred from size and gain, it can also be inferred from the common 3-dB beamwidth specification, φ3dB, using the following formulas:

η = (N0 + N1β + N2β² + N3β³ + N4β⁴) / (1 + D1β + D2β² + D3β³ + D4β⁴),  0.1 ≤ η ≤ 1.0     (8.21)

β = (D/λ) sin(φ3dB/2),  1.68 ≥ β ≥ 0.515

N0 = 2.30135726199837;
N1 = −9.718502599996146;
N2 = 10.71969385496424;
N3 = 0.2532903862637571;
N4 = 0.0;
D1 = −4.929022134884171;
D2 = 18.21074703351663;
D3 = −44.98768585675341;
D4 = 43.03842941363276.

This formula may be needed if for some reason the antenna is electrically lossy.
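Equation 8.19 is easy to apply to catalog data. Using the 10 ft, 6.425 GHz, 43.88 dBi antenna quoted with Figure 8.5, the sketch below recovers an efficiency of about 0.58, matching the η = 0.5801 noted there.

```python
from math import log10

def efficiency_from_gain(gain_dbi, freq_ghz, diameter_ft):
    """Estimate illumination efficiency from catalog data (Eq. 8.19)."""
    eta_db = -10.1 + gain_dbi - 20 * log10(freq_ghz) - 20 * log10(diameter_ft)
    return 10 ** (eta_db / 10)

# Catalog values quoted with Figure 8.5: 10 ft, 6.425 GHz, 43.88 dBi antenna.
print(round(efficiency_from_gain(43.88, 6.425, 10.0), 3))   # about 0.58
```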

8.3.3

Circular (Parabolic) Antenna Beamwidth

Circular antenna efficiency, η, can be related to antenna beamwidth φn using the following formulas:

(D/λ) sin θn = (N0 + N1η + N2η² + N3η³ + N4η⁴) / (1 + D1η + D2η² + D3η³ + D4η⁴),  0.1 ≤ η ≤ 1.0     (8.22)

φndB = 2 arcsin [(λ/D) ((D/λ) sin θn)]

For θ3dB the following values are used (valid for 1.68 ≥ (D/λ) sin θ3dB ≥ 0.515):

N0 = 7.972180065637231;
N1 = 137.7187514674955;
N2 = −65.72554929123112;
N3 = −169.5462549241622;
N4 = 92.52937419842578;
D1 = 90.30209868556027;
D2 = 299.7227877858257;
D3 = −615.0321426632636;
D4 = 229.7331500526253.

For θ10dB the following values are used (valid for 3.05 ≥ (D/λ) sin θ10dB ≥ 0.869):

N0 = 10.82186120921261;
N1 = 88.12091114204749;
N2 = −134.7746923842901;
N3 = −12.95576841796835;
N4 = 49.0194213930968;
D1 = 48.19536883990477;
D2 = 35.09170209420454;
D3 = −191.3604015533953;
D4 = 107.3398772721432.

For θ20dB the following values are used (valid for 4.28 ≥ (D/λ) sin θ20dB ≥ 1.0):

N0 = 14.57321339163523;
N1 = 90.54344327554624;
N2 = −182.6759854406716;
N3 = 57.45550532477225;
N4 = 20.32003841569955;
D1 = 42.19500334507131;
D2 = −1.350679649101605;
D3 = −108.9283729566957;
D4 = 67.28268578480228.

The above formula calculates the beamwidth of the primary antenna beam (it ignores side lobes). For circular antennas of very high efficiency (1.0 ≥ η > 0.978), the first side lobe can exceed −20 dB relative power (for η = 1.0, the first side lobe power peaks at −17.6 dB). For 1.0 ≥ η > 0.978, θ20dB calculated by the above formula should be expanded by a multiplicative factor of 1.73 to 1.49, respectively, if the first side lobe power is considered.

φ3dB is a commonly used criterion. For (D/λ) ≥ 1 and η = 0.5, the formula may be simplified to

θ3dB = 88.0 λ/D degrees     (8.23)

The actual beamwidth values vary approximately ±10% from the above values over the η range of 0.4–0.6. The 3-dB angle formula may be compared to other commonly used historical values:

θ3dB = 61 λ/D degrees (TIA Subcommittee TR-14.7, 2005), converted to normalized format;
θ3dB = 70 λ/D degrees (TIA Subcommittee TR-14.7, 1996);
θ3dB = 71 λ/D degrees (White, 1970), converted to normalized format;
θ3dB = 71 λ/D degrees (Reintjes and Coate, 1952);
θ3dB = 84 λ/D degrees (Silver, 1949), Table 6.2, 0.56 gain (efficiency) factor.

This difference may be explained by the observation that the antennas studied earlier used significantly higher efficiencies than modern antennas. An emphasis on frequency reuse has caused most modern antennas to be designed with lower efficiency to reduce off-boresight radiation. An exception to this is the recent introduction of very small antennas into unlicensed bands, where high efficiency antennas are used (at the expense of frequency reuse).

Another way to look at parabolic antenna beamwidth is the beamwidth expansion as a function of illumination efficiency. As the efficiency is reduced, the antenna beamwidth expands:

θ1dB = 34.66 EX1 λ/D degrees
θ3dB = 58.90 EX3 λ/D degrees
θ20dB = 124.7 EX20 λ/D degrees

EX( ) = antenna expansion factor; θ = angle of power measurement relative to boresight; D = diameter of the circular reflector.


Note that the θ20dB formula uses a factor of 124.7 rather than the previous 215.2. The previous formula considered side lobe values. This factor only considers main lobe values. For efficiencies greater than 0.978, the parabolic antenna first side lobe peak value will exceed −20 dB. The EX1, EX3, and EX20 represent expansion factors of the full illumination beamwidth. They clearly show the beamwidth penalty of lower antenna efficiency. These factors may be calculated (to at least four significant figures) using the formulas below (Fig. 8.7):

EX1 = (C11 + C12·η + C13·η² + C14·η³ + C15·η⁴) / (1 + C16·η + C17·η² + C18·η³ + C19·η⁴ + C110·η⁵)   (8.24)

C11 = 11.47467471422744;
C12 = 96.4836471787781;
C13 = −149.130810770249;
C14 = −7.726291805073669;
C15 = 49.11001513993868;
C16 = 49.56175002248294;
C17 = 34.99249700379614;
C18 = −207.6119399582662;
C19 = 135.3743821944512;
C110 = −13.10545450684395.

EX3 = (C31 + C32·η + C33·η² + C34·η³ + C35·η⁴) / (1 + C36·η + C37·η² + C38·η³ + C39·η⁴ + C310·η⁵)   (8.25)

C31 = 11.65806521303201;
C32 = 96.51565982372452;
C33 = −152.1423956242208;
C34 = −4.217102204230871;
C35 = 48.37581221497451;
C36 = 49.10391941747248;
C37 = 32.82051399322144;
C38 = −201.8020843245464;
C39 = 130.4562878319081;
C310 = −11.38859718015969.

EX20 = (C201 + C202·η + C203·η² + C204·η³ + C205·η⁴) / (1 + C206·η + C207·η² + C208·η³ + C209·η⁴ + C2010·η⁵)   (8.26)

C201 = 12.68923557512405;
C202 = 61.4965668903022;
C203 = −163.3946591887904;
C204 = 100.3483444646388;
C205 = −11.09614160640891;
C206 = 37.0644387489422;
C207 = −24.55611931753965;
C208 = −69.50012243371823;
C209 = 67.85296330438491;
C2010 = −11.81781396481483.

Figure 8.7 Circular antenna beamwidth expansion. [Plot of beamwidth expansion factor (1-dB, 3-dB, and 20-dB curves) versus antenna efficiency (power ratio), 0.2 to 1.0.]
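The beamwidth expansion fits of Eqs. (8.24) through (8.26) are easy to evaluate numerically. The following sketch is not from the book; the dictionary layout and function name are illustrative, and the coefficients are simply copied from the text:

    # Coefficients copied from Eqs. (8.24)-(8.26): (numerator c1..c5, denominator c6..c10)
    EX_COEFF = {
        "EX1":  ([11.47467471422744, 96.4836471787781, -149.130810770249,
                  -7.726291805073669, 49.11001513993868],
                 [49.56175002248294, 34.99249700379614, -207.6119399582662,
                  135.3743821944512, -13.10545450684395]),
        "EX3":  ([11.65806521303201, 96.51565982372452, -152.1423956242208,
                  -4.217102204230871, 48.37581221497451],
                 [49.10391941747248, 32.82051399322144, -201.8020843245464,
                  130.4562878319081, -11.38859718015969]),
        "EX20": ([12.68923557512405, 61.4965668903022, -163.3946591887904,
                  100.3483444646388, -11.09614160640891],
                 [37.0644387489422, -24.55611931753965, -69.50012243371823,
                  67.85296330438491, -11.81781396481483]),
    }

    def expansion_factor(name, eta):
        """Beamwidth expansion factor at illumination efficiency eta (0.1 <= eta <= 1.0)."""
        num_c, den_c = EX_COEFF[name]
        num = sum(c * eta**i for i, c in enumerate(num_c))               # c1 + c2*eta + ...
        den = 1.0 + sum(c * eta**(i + 1) for i, c in enumerate(den_c))   # 1 + c6*eta + ...
        return num / den

    # At eta = 1.0 all three factors return 1.0 (full illumination);
    # at eta = 0.5, 58.90 * EX3 is about 88, consistent with Eq. (8.23).
    for name in ("EX1", "EX3", "EX20"):
        print(name, round(expansion_factor(name, 0.5), 3))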

8.3.4 Circular (Parabolic) Antenna Near Field Power Density

The near field power density of an antenna is of considerable interest for evaluating potential health hazards near an antenna. Calculating near field energy is relatively difficult. Over the years, several attempts have been made to approximate the near field power density of a circular parabolic antenna. Since it is well known that the maximum near field power is in a straight line in front of the center of the antenna, previous estimations calculated that power (near field on axis power density). One estimate (Saad et al., 1971) of near field on axis power density used Bickmore and Hansen's results (Bickmore and Hansen, 1959) for a circular antenna with power linearly tapered (electric field parabolically tapered) with maximum at the antenna center and zero at the edge (Fig. 8.8).

PNNF = 10 log[S(Δ)/S(Δ = 1)] = 10 log{26.1·[1 − (2/δ)·sin δ + (2/δ²)·(1 − cos δ)]}   (8.27)

δ = π/(8Δ)

where Δ = d/(2D²/λ) is the on-axis distance normalized to the far field transition distance and ΔdB = 10 log10[Δ].

Figure 8.8 Tapered antenna near field power density. [Plot of 10 log10[S(Δ)/S(Δ = 1)] versus ΔdB = 10 log10[Δ].]
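A short numerical sketch (hypothetical, not from the book) of Eq. (8.27) is given below; it reproduces the on-axis peak and limiting values discussed in the next paragraph. The function name and sample distances are illustrative:

    import math

    def pnnf_tapered_db(delta_norm):
        """On-axis near-field power density (dB) of the linearly power-tapered circular
        aperture of Eq. (8.27), at normalized distance delta_norm = d / (2*D**2/lambda)."""
        s = math.pi / (8.0 * delta_norm)
        bracket = 1.0 - (2.0 / s) * math.sin(s) + (2.0 / s**2) * (1.0 - math.cos(s))
        return 10.0 * math.log10(26.1 * bracket)

    print(round(pnnf_tapered_db(0.09612), 2))   # peak value, about 16.2 dB
    print(round(pnnf_tapered_db(1.0), 2))       # about 0 dB at the far-field transition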

Figure 8.9 Hansen's circular antenna near field power density estimate. [Contours of near field power density (dB) versus antenna efficiency (η, power ratio) and normalized distance 10 log[d/(2D²/λ)].]

This approach predicts a power peak (16.17 dB for Δ = 0.09612, with a limiting value of 14.17 dB). This peak is not observed by actual measurements (Medhurst, 1959). With this broad taper, illumination power is spread out more than is typical for a commercial antenna (as evidenced by poorer side lobe performance). This result is not indicative of commercial antennas. Hansen (1976b) derived the near field on axis power density for circular microwave transmit antennas on the basis of his pedestal illumination function H. His result (after correcting typographical errors) is the following:

S = (1/Δ²)·|∫₀¹ I0(πH·sqrt(1 − p²))·exp[jπ(1 − p²)/(8Δ)]·p dp|²   (8.28)

Applying Euler's identity, this equation becomes

S = (1/Δ²)·{[∫₀¹ I0(πH·sqrt(1 − p²))·cos[π(1 − p²)/(8Δ)]·p dp]² + [∫₀¹ I0(πH·sqrt(1 − p²))·sin[π(1 − p²)/(8Δ)]·p dp]²}   (8.29)

Numerical integration provides the results for PNNF = 10 log[S(Δ)/S(Δ = 1)] as shown in Figure 8.9. Near field power density is a function of antenna size (D/λ) and illumination efficiency. Hansen's formula represents the limiting case for a very large antenna. If only the peak value is of interest (but its location is unimportant), Hansen's formula is adequate for D/λ ≥ 10. If null values are of interest, Hansen's approximation requires D/λ ≥ 100.
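Equation (8.29) is straightforward to integrate numerically with standard tools. The sketch below is a hypothetical illustration, not the book's appendix code: it uses SciPy, treats the pedestal parameter H as a direct input (the book relates H to illumination efficiency elsewhere in the chapter), and I0 is the zero-order modified Bessel function of the first kind:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import i0

    def s_on_axis(delta_norm, h):
        """Relative on-axis power density at normalized distance delta_norm = d/(2*D**2/lambda)."""
        def integrand(p, trig):
            phase = np.pi * (1.0 - p**2) / (8.0 * delta_norm)
            return i0(np.pi * h * np.sqrt(1.0 - p**2)) * trig(phase) * p

        re_part, _ = quad(integrand, 0.0, 1.0, args=(np.cos,))
        im_part, _ = quad(integrand, 0.0, 1.0, args=(np.sin,))
        return (re_part**2 + im_part**2) / delta_norm**2

    def pnnf_db(delta_norm, h):
        """Near-field power density normalized to the far-field transition point, in dB."""
        return 10.0 * np.log10(s_on_axis(delta_norm, h) / s_on_axis(1.0, h))

    # Hypothetical example: pedestal parameter H = 1 at one-tenth of the far-field distance
    print(round(pnnf_db(0.1, 1.0), 2))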

8.3.5 General Near Field Power Density Calculations

As one goes closer and closer to an antenna, he or she enters the near field of the antenna. In this region, the measured power density is no longer the far field value. When the observer is very close to the antenna, the measured power is due to evanescent electromagnetic fields orthogonal to the face of the antenna. This is the reactive region of the antenna. The measured energy in this region is complicated to estimate analytically. Fortunately, we seldom need to calculate this energy because it only exists very close to the face of the aperture antenna. As the observer retreats from the face of the antenna, he or she enters a near field region where the energy radiates from the antenna but is still not predicted by the far field assumptions. This is the Fresnel region of the antenna. Fortunately, this energy can be predicted by optical analogy. The Huygens–Fresnel principle states that the amplitude of the electromagnetic wave at any given point equals the superposition of the amplitudes of all secondary wavelets emitted from the entire wave front before that point. For our purposes, the wave front of interest is the aperture of an antenna. Assume an antenna aperture described by points x1 and y1 in an X–Y plane (for circular parabolic antennas, this is a circular planar surface directly in front of the antenna where the wave front from the parabolic reflective surface is totally in phase across the entire virtual aperture). Assume an orthogonal Z-axis centered on the center of the (virtual) antenna aperture. This defines a horizontal X–Z plane intersecting the antenna at z = 0. A point (x, y, z) on the antenna is (x1, y1, 0). A point in space in front of the antenna (but variable in the X–Z plane) is (x2, 0, z2) (Fig. 8.10).

Figure 8.10 Antenna power density calculation geometry. [Aperture point (x1, y1, 0) in the X–Y plane and observation point P(x2, 0, z2); the Z-axis is normal to the aperture.]

Silver (1949) describes the near field energy of an aperture antenna Up at point P(x2, 0, z2) in very general terms (page 170, Eq. 5):

Up = (1/4π) ∫∫A F(x, y)·[(jk + 1/r)(iz • r1) + jk(iz • s)]·(e^(−jkr)/r) dx dy   (8.30)

This is based on Huygens' concept of estimating a field at a point of interest by summing the contributions of all energy radiating from the aperture antenna. The factor iz is a unit vector normal to the aperture. The factor r1 is a unit vector in line with a ray from a point on the aperture (x1, y1, 0) to a point of interest P(x2, 0, z2). To achieve maximum antenna boresight gain, the antenna aperture illumination must have a constant phase factor of unity. We will assume that our antenna has this attribute (which all commercial antennas do). Assuming the antenna (field) illumination function F(x, y) has constant unity phase (iz • s = 1), replacing k with 2π/λ and applying Euler's identity, Silver's results become

Up = (π/λ²) ∫∫A F(x, y)·(1/δ)·(B + jC) dx dy   (8.31)

where the integral is over the entire aperture (A) of the antenna and

F(x, y) = the field (square root of power density) illumination (H) function;
B = (1 + cos φ)·sin δ + (cos φ/δ)·cos δ;
C = (1 + cos φ)·cos δ − (cos φ/δ)·sin δ;
δ = 2πr/λ;
r = distance from a point on the antenna (x1, y1, 0) to a point P(x2, 0, z2) in free space = sqrt[(x2 − x1)² + y1² + z2²] with sqrt[ ] as the square root function;
φ = the angle formed by the Z-axis and a ray from a point on the antenna to a point P in free space;
cos(φ) = iz • r1 = z2/r;

|Up|² = near field power density at free space point (x2, 0, z2).

Since the results will be normalized to the far field transition point, F(x, y) can be taken as simply the Hansen illumination function H redefined as a function of radial distance. Both the real and imaginary components are integrated separately and power summed (sum of squares) to arrive at the composite power. The results of the integration are normalized to the power at the nominal far field transition point. This integral is challenging to integrate numerically owing to the oscillating sine and cosine functions. It requires considerable area granularity, especially for calculations away from antenna boresight. Integrating a rectangular aperture is straightforward. The circular aperture requires a differential sector area (formed from the difference of two sector areas). The area of a sector slice defined by two radii, R/x and (R − 1)/x, and angle α (radians) is α(2R − 1)/(2x²). The x, y, and z distances are normalized by dividing the distances by (2D²/λ), where D is the diameter of the circle or the width of the square and λ is the wavelength. For convenience, the x and y coordinates are normalized to (d/D) so that d/D = ±0.5 represents the edges of the antenna. The (d/D) values are then divided by (2D/λ). See Appendix 8.A.1 for a numerical approximation of this integration.

Silver's formula calculates the radiating power in the Fresnel and Fraunhofer regions. It does not predict the nonradiating reactive fields very near the antenna. Hansen (1964) noted that this energy is restricted to the area no greater than one wavelength (λ) from the face of the antenna (where radial distance is measured to the closest part of the aperture). This result was reconfirmed by Laybros et al. (2005). Silver's formula is also based on discarding high order terms of a series expansion (Fresnel field approximation) involving 1/δ. Comparing Silver's formula with Hansen's results, which include higher order terms (Hansen, 1964), shows that Silver's formula has negligible error.

[…]

Real path designs often require p < 0.001%. For these cases, most designers use these equations for (Ap/A0.01) ≤ 6.489. Ap/A0.01 is limited to a maximum value of 6.489 (p = 0.0000005%). This is a more than adequate range for real designs. For latitudes less than 30° (North or South), the allowable range is 0.06994 ≤ (Ap/A0.01) ≤ 1.443. The formulas fail for (Ap/A0.01) > 1.445. Real path designs often require p < 0.001%. For these cases, the formulas may not be used because they fail. To deal with this, it is suggested that for (Ap/A0.01) > 1.443, the path designer use the above formulas for latitudes equal to or greater than 30° (North or South) but with Ap/A0.01 replaced by 1.482·(Ap/A0.01). Ap/A0.01 is limited to a maximum value of 4.378 (p = 0.0000005%).

The following algorithm may be used to calculate path outage probability p based on a path fade margin M:

Step One: Determine the point rain attenuation for probability p = 0.01%.

M = path fade margin (dB)
Latitude = latitude (decimal) near center of path
Determine K and Alpha for operating frequency using ITU-R formulas
Determine local rain rate R01 (mm/hr) not exceeded more than 0.01% of time
KRAlpha = K * (R01 ^ Alpha)
REMARK A^B means A raised to the power B


Step Two: Find the effective path distance.

PathLenKM = path length (kilometers)
RD0 = R01
IF R01 > 100 THEN RD0 = 100
D0 = 35 / (EXP(0.015 * RD0))
REMARK EXP(x) = e^x
Pt2Path = 1 / (1 + (PathLenKM / D0))
PathEff = Pt2Path * PathLenKM

Step Three: Given path fade margin M, determine the expected probability p (%) that M is not exceeded.

A01 = KRAlpha * PathEff
Ratio = M / A01
IF ABS(Latitude) >= 30 THEN
  REMARK For latitudes equal to or greater than 30 (North or South):
  IF Ratio > 6.48901 THEN Ratio = 6.48901
  PART = 23.26 * (LOG(Ratio / 0.12) / LOG(10))
  REMARK LOG(x) = natural log of x = log_e(x)
  REMARK LOG(x)/LOG(10) = common log of x = log_10(x)
  FP = -6.34901 + SQR(40.31 - PART)
  REMARK SQR(x) = square root of x
  p = 10 ^ FP
END IF
IF ABS(Latitude) < 30 THEN
  REMARK For latitudes less than 30 (North or South):
  IF Ratio > 4.37801 THEN Ratio = 4.37801
  IF Ratio > 1.443 THEN
    PART = 23.26 * (LOG((1.482 * Ratio) / 0.12) / LOG(10))
    FP = -6.349 + SQR(40.31 - PART)
    p = 10 ^ FP
  END IF
END IF

Step Four: Determine the outage time associated with p.

OutagePerCent = p
AvailabilityPerCent = 100 - OutagePerCent
MinutesInYear = 365.25 * 24 * 60
OutageMinutes = MinutesInYear * (OutagePerCent / 100)
OutageSeconds = 60 * OutageMinutes
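For readers who prefer a modern language, the listing above translates directly into Python. The sketch below is an illustrative transcription only (function and variable names are assumptions); it implements just the branch for latitudes of 30° or more, and K, Alpha, and R01 must still be obtained from the ITU-R recommendations as described above:

    import math

    def itu_rain_outage_percent(margin_db, path_km, latitude_deg, k, alpha, r01):
        """Annual outage probability (%) for a rain fade margin, following Steps One-Three."""
        # Step One: point attenuation exceeded 0.01% of the time (dB/km)
        gamma_001 = k * r01**alpha
        # Step Two: effective path length (km)
        rd0 = min(r01, 100.0)
        d0 = 35.0 * math.exp(-0.015 * rd0)
        path_eff = path_km / (1.0 + path_km / d0)
        # Step Three: invert the Ap/A0.01 scaling for probability p
        if abs(latitude_deg) < 30.0:
            raise NotImplementedError("low-latitude branch not reproduced in this sketch")
        ratio = min(margin_db / (gamma_001 * path_eff), 6.48901)
        part = 23.26 * math.log10(ratio / 0.12)
        fp = -6.34901 + math.sqrt(40.31 - part)
        return 10.0**fp

    # Step Four: outage time, using nominal illustrative inputs (not ITU-R table values)
    p = itu_rain_outage_percent(margin_db=40.0, path_km=20.0, latitude_deg=33.0,
                                k=0.05, alpha=1.1, r01=42.0)
    print(round(p, 5), "% unavailability =", round(525960 * p / 100.0, 1), "minutes per year")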

The ITU-R rain attenuation formulas are simple to use but do not conform to the standard path attenuation formula. If we wish to compare the ITU-R method with the Crane method, we must relate the above formulas to the standard formula γR = K·R^α (dB/km):

γR = K·(R0.01)^α·(Ap/A0.01) = K·[(Ap/A0.01)^(1/α)·R0.01]^α (dB/km)

R = (Ap/A0.01)^(1/α)·R0.01 = ITU-R rain rate (mm/h)   (11.9)

We wish to use a standard fade margin formula M = γR·deff with terms as previously defined. As the ITU-R method for calculating fade margin M is not written in the standard γR·deff format, the assignment of pieces to that formula is artificial. As the path unavailability decreases, the rain rate increases but we would expect path attenuation to stay constant or decrease. Therefore, the factor Ap/A0.01 was assigned to the rain rate factor γR rather than the effective path length factor deff, as (Ap/A0.01) matches the action of γR (they both increase with decreasing path unavailability). This leaves us with the curious result that the ITU-R point rain rate is frequency dependent (through the α factor's frequency dependency). For example, a point rain rate of 50 mm/h 0.01% of the time becomes a 0.001% point rain rate of 87 (86) mm/h at 8 GHz, 96 (94) mm/h at 11 GHz, 107 (101) mm/h at 18 GHz, 110 (105) mm/h at 23 GHz, and 122 (119) mm/h at 38 GHz for a vertically (horizontally) polarized signal.

To derive an approximation of the ITU-R rain rates (simplified ITU-R rain rates) to compare with the Crane city data, the approximation α ≅ 1 is used. While this approximation introduces significant error for many frequencies, for the popular US 18- and 23-GHz vertical polarization frequencies, this approximation is surprisingly good. It introduces no more than 22% error for high rain rates (200 mm/h) and significantly less for lower rain rates. As noted above, for moderate rain rates over the frequency range 11–23 GHz, the approximation is accurate within 10%. This allows the following redefinition to facilitate comparisons:

R ≅ (Ap/A0.01)·R0.01 = approximate ("simplified") ITU-R single-location rain rate   (11.10)

This approximate relationship for R is used below for comparison with Crane rain rate measurements. The resulting rain curves are termed the simplified ITU-R rain curves.

The following approach is used for the Crane method (Crane, 1980, 1996) of rain path attenuation estimation: Crane uses actual rain rate measurements to define average rain rates R for a geographic area called a zone (Crane, 1980, 1996) or actual city rain rates (Crane, 2003). The Crane method of rain path attenuation has been defined several times by Crane. The primary versions are the following:

Crane (1980). This paper introduced the concepts of point rain attenuation and point-to-path conversion factor.

End to end path rain attenuation = Point rain attenuation × Point-to-path conversion factor

Point rain attenuation = α·R^β (dB/km), where α and β are factors based on frequency and polarization. Crane's α and β factors ignored polarization and therefore are not used by the industry. The industry universally uses the ITU-R factors (P.838, with α and β relabeled as K and α, respectively). R is rain rate in millimeters per hour at a single location (point rain rate). Crane used various alphabetically labeled land areas (zones), which had predefined rain rates. (This approach was very similar to the ITU-R P.837-1 methodology but the zones and rain rates were different.) The point-to-path conversion factor (also termed the path reduction factor) was a complicated function of rain rate, path length, and RF.

Crane (1996). In this model, the point rain attenuation factor was relabeled as KR^α to conform to the ITU-R definition. Crane did not address how to determine K and α but the industry approach is to use ITU-R factors (initially P.838 and now P.838-3). As before, the definition of R was unchanged. However, Crane updated the zones used to determine R. He introduced two new zones: B1 and B2. He also updated the zone rain rates (typically, they increased relative to the 1980 rates). Although Crane changed the appearance of the formulas of the point-to-path conversion factor, a little mathematical manipulation will show that the 1996 formulas are exactly the same as those in his 1980 model. One typo needs correction: a minus sign must be placed in the exponent of formula (3.3) so it represents a lognormal distribution.

Crane (2003). In this model, only point rain attenuation was addressed. In Appendix 5.2, Crane determined K and α using an obscure 1981 source. The industry ignores this and uses ITU-R


factors (P.838 initially and now P.838-3). The big change was to introduce location-specific rain rates for R. (This is similar to ITU-R P.837-5, which calculates R on a specific longitude and latitude basis.) He listed rates for 109 cities. (The data for two cities, Greeley, CO, and Boise, ID, are flawed and are not used.) Rain data by Segal (Canada) and Kizer [NOAA (National Oceanic and Atmospheric Administration) data for other US cities] extended the city data list to 279 cities in North America (see Appendix 11.A).

The Crane method of rain loss calculation is an iterative approach that uses two factors: point rain attenuation and point-to-path conversion factor. The Crane point-to-path conversion factor has never changed. Other than changing labels to conform to ITU-R usage and relying on ITU-R values, the only difference in the point rain attenuation factor among the three models is the method of calculating R. In the 1980 model, one table of zone R values is used. In the 1996 model, an updated set of zone R values is used. In the 2003 model, the R values are based on specific cities. Basically, the three Crane models define what R (rain rate) values one uses for the calculation. Industry practice is to use the ITU-R 838-X model to determine the values of K and α. The basic methodology of using those values has never changed (although the K and α values have). The values of the older ITU-R 838 are generally smaller than the current ITU-R 838 values. The new factors result in larger rain attenuation for a given rain rate. This, in turn, results in shorter path distances for a given calculated outage probability.

The rain path attenuation equations are defined in Eqs. 3–7 of the work by Crane (1980) and in Equations 4.7 and 4.8 and the following unnumbered equations of the work by Crane (1996). While the two sets of equations appear different, a little manipulation will show that they are exactly the same. They are equivalent to the following equations. In general, the following notation follows Crane 96 but the roles of d and D are reversed.

M = γR·deff   (11.11)

M = radio path fade margin not exceeded with probability p;
γR = K·R(p)^α (dB/km), written αR(p)^β in Crane 80 and KR(p)^α in Crane 96;
R(p) = rain rate (mm/h) not exceeded with probability p;
deff is relatively complex to calculate.

Crane considers path attenuation to be influenced by two factors: one due to intense rain cells and one due to more diffused debris rain. These factors are built into Crane's two-component path attenuation model.

G1 = [e^(Uαd) − 1] / (Uα) = effective path length due to component 1   (11.12)

G1L = [e^(UαD) − 1] / (Uα) = outer limit of effective path length due to component 1   (11.13)

G2 = e^(wα)·[e^(cαd) − e^(cαD)] / (cα) = effective path length due to component 2   (11.14)

G2L = e^(wα)·[e^(cα·22.5) − e^(cαD)] / (cα) = outer limit of effective path length due to component 2   (11.15)

d = path distance (km) = 1.609 × path distance (miles), D in Crane 80 and 96;
b = 2.3/R^0.17;
w = ln(b) = 0.83 − 0.17·ln(R), B in Crane 96;
c = 0.026 − 0.03·ln(R);
D = 3.8 − 0.6·ln(R) = rain cell diameter, R ≤ 550 mm/h, d in Crane 80 and δ(R) in Crane 96;
U = [ln(b·e^(cD))]/D = [ln(b)/D] + c = (w/D) + c;
e^(wα) = b^α;
ln(x) = log_e(x).

Using the above Crane model, effective path distance is defined as follows:

For 0 < d ≤ D, deff = G1.
For D < d ≤ 22.5, deff = G1L + G2.
For 22.5 < d, deff = G1L + G2L.

For d > 22.5, the probability of occurrence p is replaced by a modified probability of occurrence pm, where pm = (22.5/d) × p. The rain rate value R is not changed, only its probability of occurrence. This has the effect of reducing rain outage time as the path length increases beyond 22.5 km. In practice, the fade margin of the path is determined from system parameters. The above formulas are used to iterate to an appropriate rain rate. Then the probability of rain rate occurrence is calculated (usually using a lookup table) to determine the unavailability of the path due to rain. The following algorithm may be used to calculate path outage probability p based on a path fade margin M.

Step One: Determine rain rate R (mm/h) associated with fade margin M (dB)

Determine K and Alpha for operating frequency using ITU-R formulas
PathLenKM = path length in kilometers
L = PathLenKM
IF PathLenKM > 22.5 THEN L = 22.5
R = 0.001
Rstep = 20
10 B = 2.3 / (R ^ (0.17))
REMARK A^B means A raised to the power B
C = 0.026 - (0.03 * LOG(R))
D = 3.8 - (0.6 * LOG(R))
U = (LOG(B) / D) + C
REMARK LOG(x) = natural log of x = log_e(x)
KRAlpha = K * (R ^ Alpha)
IF L <= D THEN
  PathEff = ((EXP(U * Alpha * L)) - 1!) / (U * Alpha)
END IF
IF L > D THEN
  NM1 = ((EXP(U * Alpha * D)) - 1!) / (U * Alpha)
  NM2 = (B ^ Alpha) * ((EXP(C * Alpha * L)) - (EXP(C * Alpha * D))) / (C * Alpha)
  REMARK EXP(x) = e^x
  PathEff = (NM1 + NM2)
END IF
TrialMargin = K * (R ^ Alpha) * PathEff
IF TrialMargin < M THEN GOTO 20
R = R - Rstep
Rstep = Rstep / 2!
20 IF Rstep < .01 THEN GOTO 30


R = R + Rstep
GOTO 10
30 REMARK R has been determined

Step Two: Determine the outage time associated with R

Determine outage probability p (%) associated with R (mm/h) by interpolating a lookup table of rain rates for the given area.
IF PathLenKM > 22.5 THEN p = p * 22.5 / PathLenKM
OutagePerCent = p
AvailabilityPerCent = 100 - OutagePerCent
MinutesInYear = 365.25 * 24 * 60
OutageMinutes = MinutesInYear * (OutagePerCent / 100)
OutageSeconds = 60 * OutageMinutes
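The same iteration can be sketched in Python. The version below is an illustrative transcription only (names and sample inputs are assumptions, and K and α still come from ITU-R P.838 as the text describes); it reproduces the step-halving search of the listing above and, for paths shorter than the rain cell diameter, falls back to the single-component effective path length deff = G1:

    import math

    def crane_rain_rate_for_margin(margin_db, path_km, k, alpha):
        """Rain rate R (mm/h) whose Crane path attenuation just equals the fade margin."""
        L = min(path_km, 22.5)
        r, step = 0.001, 20.0
        while True:
            b = 2.3 / r**0.17
            c = 0.026 - 0.03 * math.log(r)
            d_cell = 3.8 - 0.6 * math.log(r)                 # rain cell diameter (km)
            u = math.log(b) / d_cell + c
            if L <= d_cell:
                path_eff = (math.exp(u * alpha * L) - 1.0) / (u * alpha)
            else:
                g1l = (math.exp(u * alpha * d_cell) - 1.0) / (u * alpha)
                g2 = b**alpha * (math.exp(c * alpha * L) - math.exp(c * alpha * d_cell)) / (c * alpha)
                path_eff = g1l + g2
            if k * r**alpha * path_eff >= margin_db:         # overshot: back off, halve step
                r -= step
                step /= 2.0
            if step < 0.01:                                  # converged to 0.01 mm/h
                return r
            r += step

    # Nominal illustrative inputs only; the resulting R is then looked up in a rain rate table
    print(round(crane_rain_rate_for_margin(40.0, 20.0, k=0.05, alpha=1.1), 1))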

11.3 POINT-TO-PATH LENGTH CONVERSION FACTOR

Two factors differentiate the ITU-R and Crane rain models: point rain rate and point-to-path conversion factor Fpp . First consider the point-to-path conversion factor Fpp . Figure 11.4 shows the point-to-path attenuation factors Fpp based on the results of the above paragraphs. The Crane factor increases with path distance for low point rain rates. Crane observed that if the rain rate at one location is low, it is likely to be higher at another location on the path. However, if the rain rate at a location is high, it is likely that the most intense rain is at that location. The Crane factor also has frequency dependency. This effect has been observed by others (Hodge, 1977; Kheirallah et al., 1980). Obviously, there is a significant difference between the point-to-path conversion factors using the Crane or ITU-R methods. A direct comparison can be a little misleading because the ITU-R conversion factor is calculated for the rain rate at unavailability 0.01% while the Crane conversion is calculated at the unavailability rate of interest. The ITU-R conversion factor is limited to R ≤ 100 mm/h. In general, the Crane conversion factor is greater than the ITU-R factor. The Crane conversion factor is significantly greater than the ITU-R factor for very low rain rates (
