IPTV Testing Book
Service Quality Monitoring, Analysis, and Diagnostics for IP Television Systems and Services
Lawrence Harte
Althos Publishing Fuquay-Varina, NC 27526 USA Telephone: 1-800-227-9681 Fax: 1-919-557-2261 email: info@althos.com web: www.Althos.com
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the authors and publisher, except for the inclusion of brief quotations in a review. Copyright © 2008 by Althos Publishing. First Printing. Printed and bound by Lightning Source, TN. Every effort has been made to make this manual as complete and as accurate as possible. However, there may be mistakes, both typographical and in content. Therefore, this text should be used only as a general guide and not as the ultimate source of information. Furthermore, this manual contains information on telecommunications accurate only up to the printing date. The purpose of this manual is to educate. The authors and Althos Publishing shall have neither liability nor responsibility to any person or entity with respect to any loss or damage caused, or alleged to be caused, directly or indirectly, by the information contained in this book.
Mr. Harte is the president of Althos, an expert information provider which researches, trains, and publishes on technology and business industries. He has over 29 years of technology analysis, development, implementation, and business management experience. Mr. Harte managed a repair and calibration laboratory, created many test and measurement procedures, and is the inventor of several patents on communication systems. Mr. Harte has appeared on television as an industry expert and has been referenced in over 75 communications-related articles in industry magazines. He has been a speaker and moderator at numerous industry seminars and trade shows. Mr. Harte has worked for leading companies including Ericsson/General Electric, Audiovox/Toshiba, and Westinghouse, and has consulted for hundreds of other companies. Mr. Harte continually researches, analyzes, and tests new communication technologies, applications, and services. Mr. Harte has instructed communication courses at The Billing College, Wray Castle, Nokia, MCI, Panasonic, Telcordia, and at many other companies. He has received numerous certificates and diplomas including IPTV, VoIP/Internet telephony, 3G wireless, wireless billing, Bluetooth technology, Internet billing, cryptography, microwave measurement, calibration, radar, nuclear power, Dale Carnegie, 360 leadership, and public speaking. As of 2008, he has authored over 100 books on telecommunications technologies and business systems covering topics such as mobile telephone systems, data communications, voice over data networks, broadband, prepaid services, billing systems, sales, and Internet marketing. Mr. Harte holds many degrees and certificates including an Executive MBA from Wake Forest University (1995) and a BSET from the University of the State of New York (1990).
Table of Contents
IPTV TESTING
    IPTV Connection
    IPTV Layers
    Quality of Service (QoS)
    Quality of Experience (QoE)
WHY TEST FOR IPTV
    Customer Satisfaction
    Network Utilization
    Failure Predictions
    Opportunity Identification
    Service Level Agreement (SLA)
IPTV TESTING CHALLENGES
    Mixed Media
    Content Dependent
    Multiple Conversions
    Content Protection
    Error Concealment
TESTING TYPES
    Operational Testing
    Functional Testing
        Feature Function Testing
    End to End Testing
    Multilayer Testing
    Acceptance Testing
    Field Testing
    Diagnostic Testing
    Loopback Testing
    Laboratory Testing
    Alpha Testing
    Beta Testing
    Performance Testing
    Interoperability Testing
    Load Testing
    Stress Testing
    Service Capacity Testing
CONTENT FLOW
    Media Capturing
    Compression
    Packetization
    Packet Transmission
    Packet Reception
    Decompression
    Decoding
IPTV SYSTEM
    Content Aggregation
    Headend
    Core Network
    Access Network
    Premises Network
    Viewing Devices
AUDIO
    Audio Compression
        Waveform Coding
        Perceptual Coding
VIDEO
    Video Compression
        Spatial Compression (Image Compression)
        Time Compression (Temporal Compression)
        Coding Redundancy (Data Compression)
    Video Elements
        Pixels
        Blocks
        Macroblocks
        Slice
    Frames
        Intra Frames (I-Frames)
        Predicted Frames (P-Frames)
        Bi-Directional Frames (B-Frames)
    Frame Rate
    Groups of Pictures (GOP)
    Quantizer Scaling
MPEG
    Media Stream (MS)
    Elementary Stream (ES)
    Packet Elementary Stream (PES)
    Program Stream (PS)
    Transport Stream (TS)
QUALITY METRICS
    Objective Quality
        Mean Square Error (MSE)
        Peak Signal to Noise Ratio (PSNR)
    Subjective Quality
AUDIO QUALITY
    Audio Fidelity
    Frequency Response (FR)
    Total Harmonic Distortion (THD)
    Noise Level
    Signal to Noise Ratio (SNR)
VIDEO QUALITY
    Tiling
    Error Blocks
    Jerkiness
    Ringing
    Quantization Noise
    Aliasing Effects
    Artifacts
    Object Retention
    Brightness
    Contrast
    Slice Losses
    Blurring
    Color Pixelation
TESTING MODELS
    Full Reference
    Reduced Rate Reference
    Zero Reference (Non Reference)
NETWORK MEASUREMENTS
    Packet Loss Rate (PLR)
    Packet Discard Rate (PDR)
    Packet Latency
    Packet Jitter
        Packet Delay Variation (PDV)
    Out of Order Packets
    Gap Loss
        Packet Gap
    Route Flapping
    Loss of Signal
    Error Free Seconds (EFS)
    Bit Error Rate (BER)
    Connection Success Rate (CSR)
    Line Rate
    Stream Rate
CONTENT QUALITY MEASUREMENTS
    Delay Factor (DF)
    Frame Count
    Frame Loss Rate (FLR)
    Media Loss Rate (MLR)
    Buffer Time
    Rebuffer Events
    Rebuffer Time
    Stream Integrity
    Audio Visual Synchronization Offset
    Transport Stream Rate
    Program Stream Rate
    Clock Rate Jitter
    Jitter Discards
    Compression Ratio
    Protocol Conformance
    Program Transport Stream
        Program Association Table Error (PAT Error)
        Continuity Count Error
        Program Map Table Error (PMT Error)
        Packet Identifier Error (PID Error)
        Transport Stream Synchronization Loss (TS-Sync Loss)
        Transport Error
        Program Clock Rate Error (PCR Error)
        Presentation Time Stamp Error (PTS Error)
        Cyclic Redundancy Check Error (CRC Error)
        Channel Map
    Image Entropy
    Missing Channels
COMMAND AND CONTROL MEASUREMENTS
    Channel Change Time (Zap Time)
        Multicast Join Time
    Set Top Box Initialization Time
    Encoder Initialization Time
    Connect Time
CONTENT QUALITY RATING SYSTEMS
    Moving Picture Quality Metrics (MPQM)
    Media Delivery Index (MDI)
    V Factor
    Video Service Transmission Quality (VSTQ)
    Video Service Picture Quality (VSPQ)
    Video Service Audio Quality (VSAQ)
    Perceptual Evaluation of Video Quality (PEVQ)
    Mean Opinion Score (MOS)
        Video Mean Opinion Score (MOS-V)
        Audio Mean Opinion Score (MOS-A)
        Audiovisual Mean Opinion Score (MOS-AV)
        Gap Video Mean Opinion Score (Gap MOS-V)
        Burst Video Mean Opinion Score (Burst MOS-V)
    Single Stimulus Continuous Quality Evaluation (SSCQE)
TEST EQUIPMENT
    Video Analyzer
    MPEG Generator
    Protocol Analyzer
    Built-In Test Equipment (BITE)
    Impairment Emulator
NETWORK MONITORING
    Mirror Port
        Active Port
        In Line Monitoring
        Hierarchical Monitoring
        Alarm Views
    Network Probes
        Measurement Probe
        Reference Probe
    Test Client
    Heartbeat Generator
FAULT MANAGEMENT
    Fault Predictions
    Fault Finder
    Fault Analysis
APPENDIX I - ACRONYMS
APPENDIX II - IPTV TEST EQUIPMENT MANUFACTURERS
INDEX
IPTV Testing
IPTV testing is the performing of measurements or observations of a device, system, or service that provides television service through data networks in order to validate its successful operation and/or performance. IPTV testing can be complicated because there are many interrelated processes, any of which can reduce the quality of the media or of the processes used to control the media flow. IPTV systems differ from broadcast television systems in that they use transmission systems that provide varying levels of performance. Broadcast systems are designed for controlled, continuous transmission, while IPTV systems use packet transmission that is subject to varying transmission patterns and packet losses (burst errors).
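The burst-error behavior described above can be illustrated with a small simulation. This is a sketch using a simple two-state (Gilbert-style) loss model with invented parameter values, not a measurement of any real network: in the "good" state packets arrive, in the "bad" state they are lost, so losses cluster into bursts rather than spreading evenly.

```python
import random

def simulate_burst_loss(n_packets, p_enter_burst=0.01, p_stay_burst=0.7, seed=42):
    """Simulate bursty packet loss with a simple two-state loss model.

    In the 'good' state packets arrive; in the 'bad' (burst) state they
    are lost, so losses cluster into bursts rather than spreading evenly.
    """
    rng = random.Random(seed)
    lost = []
    in_burst = False
    for i in range(n_packets):
        if in_burst:
            lost.append(i)
            in_burst = rng.random() < p_stay_burst   # remain in the burst?
        else:
            in_burst = rng.random() < p_enter_burst  # start a new burst?
    return lost

losses = simulate_burst_loss(10_000)
rate = len(losses) / 10_000
print(f"packet loss rate: {rate:.3%}, lost packets: {len(losses)}")
```

Adjusting `p_enter_burst` and `p_stay_burst` trades off how often bursts start against how long they last, which is the distinction between random loss and the burst errors that IPTV testing must account for.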
IPTV Connection
IPTV systems use switched video service (SVS), which dynamically sets up (on demand) video signal connections between two or more points. SVS services can range from the setup of data connections that allow video transfer to the organization and management of video content and the delivery of video programs.
Figure 1.1 shows how a basic IP television system can be used to allow a viewer to have access to many different media sources. This diagram shows how a standard television is connected to a set top box (STB) that converts IP video into standard television signals. The STB is the gateway to an IP video switching system. This example shows that the switched video service (SVS) system allows the user to connect to various types of television media sources including broadcast network channels, subscription services, and movies on demand. When the user desires to access these media sources, the control commands (usually entered by the user with a television remote control) are sent to the SVS, and the SVS determines which media source the user desires to connect to. This diagram shows that the user only needs one video channel to the SVS to have access to a virtually unlimited number of video sources.
IPTV Layers
IPTV systems can be divided into multiple layers, ranging from the layer that physically transports data to the layer that presents the media to the viewer. The divisions of the hierarchy are referred to as layers or levels, with each layer performing a specific task. Each protocol layer obtains services from the protocol layer below it and provides services to the protocol layer above it. The physical layer is responsible for converting bits of information into signals that are transferred across the network medium. The MAC layer is responsible for requesting and coordinating access to the physical channel. The Internet protocol (IP) layer is responsible for adding the network address to packets so they can travel through the network to reach their destination. The transport layer (such as UDP/RTP) is responsible for transferring packets between the sender and the receiver. The session layer coordinates and oversees the transfer of the media components for the program channel (such as MPEG). The packet elementary stream (PES) layer maps and coordinates the media components to the transport streams. The application layer coordinates the information interface between the communication device and the end user or the program they are using. Figure 1.2 shows an IPTV system that has been divided into multiple layers. In this figure, the physical layer converts bits of information into signals for transmission, the MAC layer requests access and coordinates the flow of information, the Internet protocol (IP) layer adds the network address to packets, the UDP/RTP (transport) layer transfers packets between the sender and the receiver, the MPEG transport stream layer combines multiple media streams (audio and video) into a single program transport stream, and the PES layer assigns media components (such as audio and video) to specific packet streams.
The application layer presents the media to the viewer.
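The layered encapsulation described above can be sketched in Python. This is a simplified, illustrative model: the 0x47 sync byte, fixed 188-byte transport packets, and the 0x000001 PES start-code prefix are real MPEG-2 conventions, but the header fields here are reduced placeholders, and the `rtp_udp_ip` wrapper merely stands in for the real RTP, UDP, and IP headers added by the lower layers.

```python
# Simplified sketch of layered encapsulation in an IPTV sender.
# Header contents are placeholders, not full RTP/UDP/IP wire formats.

def pes_packetize(es_payload: bytes, stream_id: int) -> bytes:
    """Wrap an elementary stream payload with a PES start-code prefix."""
    return bytes([0x00, 0x00, 0x01, stream_id]) + es_payload

def ts_packetize(pes: bytes, pid: int) -> bytes:
    """Place a PES fragment into one fixed-size 188-byte transport packet."""
    header = bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])  # 0x47 sync + PID
    return (header + pes).ljust(188, b"\xff")[:188]

def rtp_udp_ip(ts_packet: bytes) -> bytes:
    """Placeholder for the RTP, UDP, and IP headers added by lower layers."""
    return b"IP|UDP|RTP|" + ts_packet

frame = b"compressed video data"
packet = rtp_udp_ip(ts_packetize(pes_packetize(frame, stream_id=0xE0), pid=0x100))
print(len(packet))  # 199: one 188-byte TS cell plus the 11-byte placeholder prefix
```

Each function corresponds to one layer handing its output down to the layer below it, which is the service relationship the text describes.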
The operation of IPTV systems is commonly measured by a combination of objective quality of service (QoS) and quality of experience (QoE) evaluation processes.

Why Test for IPTV
Customer Satisfaction
Customer satisfaction is the perceived value a customer has that a product or service fulfills their needs or desires. Customer satisfaction for IPTV systems can be influenced by the content offered, quality of service, features, cost, and other factors.
Network Utilization
Network utilization is a comparison of the network resources that are in use to the total amount of network resources that are available. Testing can be used to determine how network resources are assigned and when additional resources need to be acquired (reducing the need to overbuild).
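As a simple worked example, utilization can be computed as the ratio of traffic in use to total capacity. The stream count, bit rates, and the 80% planning threshold below are illustrative assumptions, not values from the text.

```python
def network_utilization(used_bps: float, capacity_bps: float) -> float:
    """Utilization as the fraction of total capacity currently in use."""
    return used_bps / capacity_bps

# Illustrative numbers: 40 unicast IPTV streams at 4 Mbps on a 1 Gbps link.
u = network_utilization(40 * 4_000_000, 1_000_000_000)
print(f"utilization: {u:.1%}")   # utilization: 16.0%
if u > 0.8:                      # example capacity-planning threshold (assumed)
    print("consider acquiring additional capacity")
```

Tracking this ratio over time is what lets an operator add capacity before it is needed rather than overbuilding up front.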
Failure Predictions
Failure predictions are estimates of unwanted conditions that are likely to occur as a result of measured or observed conditions. Testing can be used to identify parts of a system that may fail, reducing the cost of emergency service.
Opportunity Identification
Opportunity identification is the awareness of services or products that may be provided to earn additional revenue, reduce cost, or increase customer satisfaction. IPTV testing can be used to identify people or customers that have specific types of needs or buying patterns.

IPTV Testing Challenges
Mixed Media
Mixed media is the combining of media of different types. An example of mixed media is the combining of video, audio, and text graphics on a video or television monitor. The challenge that this can cause is in the way each media type is processed as it is distributed through the network. Video and audio processing functions can result in different amounts of delay or quality resulting in acceptable quality on one type of media while another type of media has an unacceptable quality level.
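One way to quantify the mixed-media problem is to compare the end-to-end delay of each media type and report the resulting audio-visual offset. The delay values and the 45 ms tolerance below are illustrative assumptions for this sketch, not figures from the text.

```python
def av_sync_offset_ms(video_delay_ms: float, audio_delay_ms: float) -> float:
    """Positive result: audio arrives ahead of video; negative: audio lags."""
    return video_delay_ms - audio_delay_ms

# Illustrative per-media pipeline delays (encode + network + decode).
offset = av_sync_offset_ms(video_delay_ms=180.0, audio_delay_ms=120.0)
print(f"A/V offset: {offset:+.0f} ms")   # A/V offset: +60 ms

LIP_SYNC_LIMIT_MS = 45.0                  # assumed illustrative tolerance
print("in sync" if abs(offset) <= LIP_SYNC_LIMIT_MS else "sync impairment")
```

Because each media type is processed separately, both delays can individually be acceptable while their difference still produces a visible lip-sync impairment, which is the point the text makes.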
Content Dependent
Content dependency factors are a set of conditions, such as rapid motion graphics, that can influence the display or perception of media. Content dependency causes some types of content to look good while other types of content look bad given the same network performance impairments. This means that the user's perceived quality can vary on the same network depending on the content that is sent through it.
Multiple Conversions
Media conversion is the process of changing information from one format to another. There may be several conversion processes along the content flow path in IPTV systems, and one or more of them may degrade the quality of the media.
IPTV media conversion commonly uses lossy media compression. Lossy compression is a process of reducing an amount of information (usually in digital form) by converting it into another format (such as MPEG) that represents the initial form of information. Each time the media is converted, additional distortion occurs. The content producer (such as a studio) provides the media to a content distribution system (such as a satellite distribution system), usually in high-quality uncompressed form. Content distributors may compress the media and send it to broadcasters (such as IPTV systems). When it is received by the IPTV system, it is decoded and re-encoded for local distribution. The re-encoding process may use another compressed format (such as MPEG-4). The encoder may also change the media format from variable bit rate (VBR) to constant bit rate (CBR). Each of these conversions can add distortion to the media signal. Figure 1.3 shows how content may be converted multiple times between its high-quality format and when the media is received by the viewing device. This example shows that the media is compressed and encoded into MPEG-2 before it is distributed via a satellite system. When the satellite signal is received at the cable head end, it is decoded, switched with other video sources, and re-encoded into MPEG-4 before it is distributed to the viewer.
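The generational loss described above can be modeled in miniature: each lossy encode/decode cycle is approximated here as uniform quantization, and the error relative to the original grows with each conversion. The sample values and quantizer steps are invented for illustration and merely stand in for real codecs such as MPEG-2 and MPEG-4.

```python
def lossy_round_trip(samples, step):
    """One encode/decode generation, modeled as uniform quantization."""
    return [round(s / step) * step for s in samples]

def mse(a, b):
    """Mean square error between two sample sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

original = [0.13, 0.58, 0.91, 0.27, 0.44, 0.76]      # stand-in source samples
gen1 = lossy_round_trip(original, step=0.10)          # first compression pass
gen2 = lossy_round_trip(gen1, step=0.25)              # re-encode with a new quantizer
print(mse(original, gen1), mse(original, gen2))       # error grows per generation
```

Because the second encoder quantizes on a different grid than the first, its errors compound rather than cancel, which is why each conversion stage along the content flow path can add distortion.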
Content Protection
Content protection is an end-to-end encryption system that prevents content from being pirated or tampered with in a communication network (such as a television system). Content protection involves uniquely identifying the content, assigning the usage rights, and scrambling and encrypting the digital assets prior to play-out or storage (both in the network and in end user devices), as well as delivering the accompanying rights that allow legal users to access the content. When content is encrypted or uniquely encoded, it is usually not possible to analyze the underlying media.
Error Concealment
Error concealment is a process that is used by a coding device (such as a speech coder) to create information that replaces data that has been received in error. Error concealment is possible when portions of the signal output of the coder have some relationship to other portions of the signal output, and that relationship can be used to produce an approximated signal that replaces the lost information (lost bits). Error concealment methods (such as repeating the last frame of video when a frame is lost) can influence the ability to accurately measure the effects of distortion (such as packet loss).
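The repeat-last-frame concealment mentioned above can be sketched as follows. Frames are represented as simple labels and lost frames as `None`, which is an illustrative simplification of a real decoder.

```python
def conceal_by_repeat(frames):
    """Replace lost frames (None) with the last successfully decoded frame."""
    last_good = None
    output = []
    for frame in frames:
        if frame is None and last_good is not None:
            output.append(last_good)   # conceal: repeat the previous frame
        else:
            output.append(frame)
            if frame is not None:
                last_good = frame
    return output

received = ["F1", "F2", None, None, "F5"]   # two frames lost in transit
print(conceal_by_repeat(received))          # ['F1', 'F2', 'F2', 'F2', 'F5']
```

Note that the concealed output contains no gaps, which is exactly why a measurement made after the decoder can understate the packet loss that actually occurred on the network.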
Testing Types
There are many types of testing ranging from simple operational testing to multilayer testing. Each testing type may have a set of test procedures associated with it to allow customer support and test personnel to reliably perform the tests.
Operational Testing
Operational testing is the configuring of system equipment, the application of test signals (if required), and the measurement or observation of signals and test responses to ensure a system is operating correctly.
Functional Testing
Functional tests are observations and/or measurements that are performed during normal operating conditions of a device, service or system to determine if it can perform its designed functions.
Multilayer Testing
Multilayer testing is the performing of measurements or observations of a network or system which interact with different functional levels such as physical, link, transport, and session to help understand the operation or performance of a device, system or service.
Acceptance Testing
Acceptance testing is the performing of measurements that determine if the operations and performance of a system, subsystem, or component parts within systems meet the required performance characteristics.
Field Testing
Field testing is the process of testing a device, assembly, or system at a location that typically involves its normal operation. Field testing commonly involves the use of portable test equipment that is used by qualified test technicians.
Diagnostic Testing
Diagnostic testing is the process of gathering information or data that can be used to identify parts of a device or system that are performing undesired processes or functions.
Loopback Testing
Loopback testing is the process of testing the transmission capability and functioning of equipment within a system by transmitting a signal through a loop that returns the signal to its source. The test verifies the capability of the source to transmit and receive signals. Failure of one or more of these tests can be used to isolate and help diagnose problems within the system. Loopback testing can be used to verify the operation or performance of the system. To verify the performance of the network, the error rate, packet loss rate, and other network parameters can be tested in loopback mode. During error testing, error correction processes (such as FEC) may be disabled so that the automatic error correction processes do not interfere with the counting of errors in the test signals. Figure 1.4 shows how loopback testing can be used in an IP television system to progressively test, confirm, and identify failed equipment in portions of a network such as the core network, the access network, and the end user viewing device. To verify the core network, the test signal is sent to the ONU at the access network connection point, where it is returned (looped) back to the headend. If the core network is verified, the test signal can then be sent to the modem at the access connection point, where it is looped back to the headend. If the access network is operating correctly, the test signal can be sent to the viewing device, where it is looped back to the headend. This verifies that all the links in the network are operating correctly.
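A loopback measurement of packet loss can be sketched as a comparison of the sequence numbers sent with those returned from the loop point. The function name, sequence numbers, and the 0.1% pass threshold are assumptions for illustration, not values from the text.

```python
def loopback_check(sent_seq, returned_seq, max_loss_rate=0.001):
    """Compare sent vs looped-back sequence numbers to verify a network segment."""
    lost = set(sent_seq) - set(returned_seq)
    loss_rate = len(lost) / len(sent_seq)
    return {"lost": sorted(lost),
            "loss_rate": loss_rate,
            "pass": loss_rate <= max_loss_rate}

sent = list(range(1000))
returned = [n for n in sent if n not in (17, 404)]  # two packets never came back
result = loopback_check(sent, returned)
print(result["loss_rate"], result["pass"])          # 0.002 False
```

Running the same check against successive loop points (ONU, modem, viewing device) is how the progressive segment-by-segment isolation in Figure 1.4 would be carried out.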
Laboratory Testing
Laboratory testing is the process of measuring the characteristics or operation of a device, assembly, or system at a location that typically involves its design, prototyping or performance certification.
Alpha Testing
Alpha testing is the first stage in testing a new hardware or software product, usually performed by the in-house developers or programmers. It is the initial internal (and possibly limited field) testing process used to confirm the operation and performance of new hardware or software products. The key purpose of Alpha testing is to identify basic problems during typical operating conditions. The typical number of Alpha test participants is 10 to 50.
Beta Testing
Beta testing is the field testing process used to confirm the operation and performance of new hardware or software products before a product is officially released. Beta testing is the second stage for testing a new hardware or software product, usually performed by friendly customers or affiliates of the manufacturer or developer. The key purpose of Beta testing is to identify problems and the reliability of operation during normal field operating conditions. The typical number of Beta test participants is 50 to several hundred.
Performance Testing
Performance tests are measurements of operational parameters during specific modes of operation. Performance tests are used to determine if the device or service is operating within its designed operational parameters. Performance tests can be performed over time to determine if a system is developing operational problems.
Interoperability Testing
Interoperability testing is the performing of measurements or observations of a device, system or service to determine if the device will operate with other devices of a similar type or with devices that have been designed and tested to specifications (e.g. industry standards). Interoperability testing is very important to IPTV systems because different products, models, and software versions may not operate as expected when used with other products, models, and software versions.
Load Testing
Load testing is the setup of a system where the services are consumed or provided at defined rates such as near or at maximum designed capacity limits. Load testing is performed to help ensure that a system will meet or exceed its performance requirements during high-capacity operating conditions.
Stress Testing
Stress tests are observations and/or measurements of devices or services under operational conditions that are near or above their design limitations. Stress tests are performed to determine how a network or system will operate under loaded or failed conditions.
Content Flow
Content flow in an IPTV system is the transfer of media from one functional area to another. Content flow includes capturing media, compression, packetization, transmission, packet reception, decompression, and decoding of the media signal back into its original form.
Media Capturing
Media capturing is the process of gathering and processing signals or information. For IPTV systems, media capturing can involve the conversion from analog video to digital video (A/D conversion). Because the uncompressed digital video data rates are relatively high (270 Mbps for standard definition video and 1.5 Gbps for high definition video), the digital video signal is compressed.
Compression
Compression is the processing of digital information into a form that reduces the space required for storage. There are several types of compression that can be used for video and audio. Some compression techniques replace commonly occurring sequences of characters with tokens that take up less space, and others convert media segments to other formats that approximate the media to dramatically reduce the data rate (lossy compression). The higher the compression level (MPEG video compression can reach approximately 200:1), the more sensitive the media is to distortion (such as corrupted or lost data packets).
Packetization
Packetization is the process of dividing data files or blocks of data into smaller blocks (packets) of data. For IPTV systems, packetization involves converting media into fixed size data packets (MPEG packets). Each MPEG packet only contains a certain type of media such as a video segment, audio segment, or clock reference message. These packets are relatively small so
several MPEG packets fit into the data portion (the payload) of an IP data packet. This means that if one IP packet is lost during transmission, several MPEG packets may be lost (including timing reference information).
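The arithmetic behind "several MPEG packets per IP packet" can be made concrete. The sketch below is illustrative and assumes a 1500-byte Ethernet MTU carried over plain UDP (20-byte IP header, 8-byte UDP header); a real deployment using RTP would subtract a further 12 bytes.

```python
# Rough packetization arithmetic (illustrative): how many fixed-size
# 188-byte MPEG transport packets fit in one IP datagram, assuming a
# 1500-byte Ethernet MTU with 20-byte IP and 8-byte UDP headers.

MTU = 1500
IP_HEADER = 20
UDP_HEADER = 8
MPEG_TS_PACKET = 188

payload = MTU - IP_HEADER - UDP_HEADER   # 1472 bytes available
ts_per_ip = payload // MPEG_TS_PACKET    # whole TS packets per IP packet
print(ts_per_ip)  # -> 7
```

Losing one such IP datagram therefore discards seven transport packets at once, which is why a single network loss event can damage both picture data and timing references.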
Packet Transmission
Packet transmission is the process of addressing, transferring, and controlling packets as they pass through switching points in a packet data network. A destination address is added to the header part of each packet before it is sent into the packet data network. Control information (such as the maximum number of transfers or hops that may occur) is also added to the packet header.
Packet Reception
Packet reception is the process of identifying and gathering packets with the correct destination address and routing them to the appropriate function or service within the receiving device (via the port number on the IP address). Packet reception may involve the requesting of retransmission of missing packets and filtering (elimination) of duplicate packets that are received.
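A minimal sketch of the receive-side filtering just described, assuming packets are modeled as (sequence number, payload) pairs: duplicates are discarded and gaps in the sequence numbers are flagged for retransmission requests. The representation is illustrative, not a real protocol implementation.

```python
# Minimal sketch of packet reception: filter duplicate packets and
# identify missing sequence numbers to request for retransmission.
# The (seq, payload) packet representation is illustrative.

def receive(packets):
    seen = set()
    delivered = []
    for seq, payload in packets:
        if seq in seen:
            continue              # filter (eliminate) duplicates
        seen.add(seq)
        delivered.append((seq, payload))
    expected = range(min(seen), max(seen) + 1)
    missing = [s for s in expected if s not in seen]  # request these again
    return delivered, missing

delivered, missing = receive([(1, "a"), (2, "b"), (2, "b"), (4, "d")])
print(missing)  # -> [3]
```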
Decompression
Decompression is the processing of compressed digital information to convert it into its original uncompressed format. IPTV systems decompress multiple types of media such as video and audio.
Decoding
Decoding is the process of converting encoded data into its original signal format. For IPTV systems, the decoding process may involve converting digital audio and video into forms that can be played or displayed to the user.
Figure 1.5 shows how video can be sent via an IP transmission system. This diagram shows that an IP video system digitizes (A/D) and reformats (codes) the original media (video and audio). The system analyzes and compresses the media. The IP address and transmission control information is added to each packet. The packets travel through a packet data network. The receiver gathers and assembles the packets. The media is decompressed back into its original video and audio data form. The data is then converted into its original video and audio forms.
IPTV System
IPTV systems deliver multiple video and audio channels to viewing devices. IP television networks are primarily constructed of computer servers, gateways, access connections and end user display devices. Servers control the overall system access and the processing of channel connection requests, and gateways convert the IP television network data to signals that can be used by television media viewers.
IPTV system operators link content providers to content consumers. To do this, IPTV systems gather content via a content acquisition network and convert the content to a format that it can use via a headend system. It then manages (e.g. playout) the content via an asset management system and transfers the content via a distribution network. The media is then converted to display on the desired viewing devices.
Content Aggregation
Content aggregation is the process of combining multiple content sources for distribution through other communication channels. Content (such as movies or television programs) may be gathered or provided via communication lines (leased lines), radio systems (satellite), or via stored media (DVDs or VHS tapes).
Headend
A headend is part of a television system that selects and processes video signals for distribution into a television distribution network. A variety of equipment is used at the headend, including antennas and satellite dishes to receive signals, preamplifiers, frequency converters, demodulators and modulators, processors, and scrambling and de-scrambling equipment. A system may interconnect headends in different geographic regions through the use of regional or super headends.
Core Network
The core network is the central network portion of a communication system. The core network primarily provides interconnection and transfer between edge networks. Core networks for IPTV systems can be fiber optic rings that simultaneously distribute (simulcast) live television channels throughout a large geographic area and provide connections to other media sources (such as a direct connection to a television studio). The core network may also be used to provide individual connections to stored media programs (on demand programming).
Access Network
An access network is a portion of a communication network (such as the public switched telephone network) that allows individual subscribers or devices to connect to the core network. IPTV access networks can be DSL, cable modem, wireless broadband, optical lines, or powerline data lines.
Premises Network
A premises distribution network (PDN) consists of the equipment and software that are used to transfer data and other media in a customer's facility or home. A PDN is used to connect terminals (computers) and media devices (such as TV set top boxes) to each other and to wide area network connections. PDN systems may use wired Ethernet, wireless LAN, powerline, coaxial and phone lines to transfer data or media.
Viewing Devices
A viewing device is a combination of hardware and software that can convert media such as video, audio or images into a form that can be experienced by humans. Viewing devices may contain support for servicing different media formats and compression (codec) formats, as well as being able to communicate using multiple types of access networks and streaming protocols.

Figure 1.6 shows a sample IPTV system. This diagram shows how the IPTV system gathers content from a variety of sources including network feeds, stored media, communication links and live studio sources. The headend converts the media sources into a form that can be managed and distributed. The asset management system stores, moves and sends out (playout) the media at scheduled times. The distribution system simultaneously
transfers multiple channels to users who are connected to the IPTV system. Users view IPTV programming on analog televisions that are converted by adapter boxes (IP set top boxes), on multimedia computers or on IP televisions (data only televisions).
Audio
IP audio is the transfer of audio (sound) information in IP packet data format. Transmission of IP audio involves digitizing audio, coding, addressing, transferring, receiving, decoding and converting (rendering) IP audio data into its original audio form.
Figure 1.7 shows how audio can be sent via an IP transmission system. This diagram shows that an IP audio system digitizes and reformats the original audio, codes and/or compresses the data, adds IP address information to each packet, transfers the packets through a packet data network, recombines the packets and extracts the digitized audio, decodes the data and converts the digital audio back into its original audio form.
Audio Compression
Audio compression is a technique for converting or encoding audio (sound) information so that a smaller amount of information elements or reduced bandwidth is required to represent, store or transfer audio signals. Audio
compression coders and decoders (codecs) analyze digital audio signals to remove signal redundancies and sounds that cannot be heard by humans. Some of the basic coding processes include waveform coding, perceptual coding and voice coding.

Audio compression systems can be lossless or lossy. Lossless compression is a coding system that analyzes a data or media signal and produces a new file format that can be converted back to its exact original form. Lossless compression searches the data or file for redundant patterns and converts them to smaller codes or tokens. Lossy compression is the process of reducing an amount of information (usually in digital form) by converting it into another format (such as MP3 or AAC) that represents the initial form of information. However, lossy compression cannot guarantee the exact recreation of the original signal when it is expanded back from its compressed form.

Digital audio data is random in nature, unlike digital video, which has repetitive information that occurs on adjacent image frames. This means that audio signals do not have a high amount of redundancy, making traditional data compression and prediction processes ineffective at compressing digital audio. It is, however, possible to highly compress digital audio by removing sounds that cannot be heard or perceived by listeners through the process of perceptual (lossy) coding. The characteristics and limitations of human hearing can be taken advantage of when selecting, designing and using audio signals. The human ear can hear sounds from very low frequencies (20 Hz) to approximately 20 kHz. However, the ear is most sensitive to sounds in the 1 kHz to 5 kHz range.

Compression ratio is a comparison of the amount of data after compression to the total amount of data before compression. For example, a file compressed to 1/4th its original size can be expressed as 4:1. In telecommunications, compression ratio also refers to the amount of bandwidth reduction achieved.
For example, 4:1 compression of a 64 kbps channel is 16 kbps.
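The compression ratio arithmetic from the example above is simple enough to verify directly. The helper function name below is illustrative.

```python
# Compression ratio arithmetic from the text: an N:1 ratio divides the
# original data rate by N, so 4:1 applied to 64 kbps yields 16 kbps.

def compressed_rate(original_kbps, ratio):
    """ratio is expressed as N for an N:1 compression ratio."""
    return original_kbps / ratio

print(compressed_rate(64, 4))  # -> 16.0
```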
The type of coder (type of analysis and compression) can dramatically vary and different types of coders may perform better for different types of audio sounds (e.g. speech audio as compared to music). Key types of audio coding include waveform coding, perceptual coding and voice coders.
Waveform Coding
Waveform coding consists of an analog to digital converter and a data compression circuit that converts analog waveform signals into digital signals that represent the waveform shapes. Waveform coders are capable of compressing and decompressing voice, audio, music and other complex signals such as fax or modem signals. Because waveform coding processes represent most of the information in an audio signal waveform, waveform coders do not offer much compression. This commonly results in larger media files or higher data transmission rates for waveform coders as compared to perceptual coders or voice coders.
Perceptual Coding
Perceptual coding is the process of converting information into a format that matches the human senses' ability to perceive or capture the information. Perceptual coding can take advantage of the inability of human senses to capture specific types of information. For example, the human ear cannot simultaneously hear loud sounds at one tone (frequency) and soft sounds at another tone (a different frequency). Using perceptual coding, it is not necessary to send signals that cannot be heard, even if the original signal contained multiple audio components. Perceptual coding may remove frequency components (frequency masking) or sequences of sounds (temporal masking) that a listener cannot hear.

Because audio coders compress information into codes that represent tones or other audio attributes, small errors that occur during transmission can produce dramatically different sounds. As a result, errors that occur on some of the audio data bits (e.g. high volume levels or key frequency tones) can be more noticeable to the listener than errors that occur on other data bits. In some cases, error protection bits may be added to the more significant bits of the compressed audio stream to maintain the audio quality when errors occur.
Figure 1.8 shows the basic operation of an audio codec. This diagram shows that the audio coding process begins with digitization of audio signals. The next step is to analyze the signal into key parts or segments and to represent the digital audio signal with a compressed code or set of codes that represent the characteristics of the audio signal. The compressed code is transmitted to the receiving device that converts the code back into its original audio form.
Video
Digital video is a sequence of picture signals (frames) that are represented by binary data (bits) that describe a finite set of color and luminance levels. Sending a digital video picture involves the conversion of an image into digital information that is transferred to a digital video receiver. The digital information contains characteristics of the video signal and the position of the image (bit location) that will be displayed.
IP video is the transfer of video information in IP packet data format. Transmission of IP video involves digitizing video, coding, addressing, transferring, receiving, decoding and converting (rendering) IP video data into its original video form. Figure 1.9 shows how video can be sent via an IP transmission system. This diagram shows that an IP video system digitizes and reformats the original video, codes and/or compresses the data, adds IP address information to each packet, transfers the packets through a packet data network, recombines the packets and extracts the digitized video, decodes the data and converts the digital video back into its original video form.
Video Compression
Video compression is the process of reducing the amount of transmission bandwidth or the data transmission rate by analog processing and/or digital coding techniques. Moving pictures can be compressed by removing redundancy within each image (spatial redundancy) or between successive images over a period of time (temporal redundancy). When compressed, a video signal can be transmitted on circuits with relatively narrow channel bandwidth or using data rates 50 to 200 times lower than their original uncompressed form.
JPEG compression typically works better for photographs and reference video frames (key reference frames) than for line art or cartoon graphics. This is because the compression methods tend to approximate portions of the image, and the approximation of lines or sharp boundaries tends to become blurry with unwanted artifacts.

The JPEG compression process begins by dividing a digital image into groups of blocks. These blocks are then converted from a pixel domain (bit maps) into a frequency domain (a group of images with different detail levels) using a discrete cosine transform (DCT) process. These frequency components are then converted into specific levels. The compression system may choose to remove frequency components that have a limited amount of information (low levels) through a threshold process. The data is then compressed using run length encoding to represent long sequences by shorter codes, and then by variable length encoding to convert repeated sequences of varying lengths into shorter codes.

DCT is a form of frequency analysis that is applied to discrete signals (e.g. binary data) to produce an output that is composed of the frequency components and the levels (coefficients) that represent the original digital signal. A DCT output is composed of a DC component (basic intensity) and a series of increasing frequency components that reflect the complexity of the underlying data.

DCT uses thresholding to vary the amount of compression on an image. Thresholding is the process of modifying numbers or measurements that are within a range or meet some criteria to produce a smaller number of data elements. Thresholding is used in lossy data compression processes (such as image compression) to reduce the amount of data through the loss of accuracy of information that has little impact on the user.
In addition to analyzing and compressing images into its frequency components, the resulting data is then also compressed using run length encoding (RLE) and variable length encoding (VLE) processes. RLE represents repetitive data information by a notation that indicates the data that will be repeated and how many times the data will be repeated (run length). VLE
represents repetitive groups of data information by codes that are used to look up the data sequence along with how many times the data will be repeated (variable length).

Figure 1.10 shows the basic process that can be used for JPEG image compression. This diagram shows that JPEG compression takes a portion (block) of a digital image (lines and column sample points) and analyzes the block of digital information into a new block sequence of frequency components (DCT). The sum of these DCT coefficient components can be processed and added together to reproduce the original block. Optionally, the coefficient levels can be changed a small amount (lossy compression) without significant image differences (thresholding). The new block of coefficients is converted to a sequence of data (serial format) by a zigzag process. The data is then further compressed, first using run length coding (RLC) to reduce repetitive bit patterns and then using variable length coding (VLC) to convert and reduce highly repetitive data sequences.
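The DCT-threshold-RLE pipeline described above can be sketched in miniature. The version below is a deliberately simplified illustration, not the JPEG algorithm itself: it works in one dimension on 8 samples instead of an 8 by 8 block, and the sample values and threshold level are invented for demonstration.

```python
import math

# Simplified, illustrative one-dimensional version of the JPEG-style
# pipeline: DCT -> thresholding -> run length encoding.

def dct(samples):
    """Orthonormal DCT-II of a 1-D sample list."""
    n = len(samples)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(samples))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def threshold(coeffs, level):
    """Zero out low-level frequency components (the lossy step)."""
    return [c if abs(c) >= level else 0.0 for c in coeffs]

def run_length_encode(coeffs):
    """(value, run) pairs; runs of identical values collapse to one token."""
    encoded, i = [], 0
    while i < len(coeffs):
        j = i
        while j < len(coeffs) and coeffs[j] == coeffs[i]:
            j += 1
        encoded.append((coeffs[i], j - i))
        i = j
    return encoded

block = [52, 55, 61, 66, 70, 61, 64, 73]   # illustrative pixel samples
coeffs = threshold(dct(block), level=5.0)
print(run_length_encode(coeffs))
```

After thresholding, most high-frequency coefficients become zero, so run length encoding collapses them into a single short token, which is where the compression gain comes from.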
The first scene of the video clip is of a sailboat that is slowly moving across the horizon on the water, and the second scene is of a house on the shoreline. This example shows that a key frame is sent at the beginning of a scene and only the changes to the key frame are subsequently sent. When a new scene occurs, a new key frame is sent.

Motion estimation is the process of searching a fixed region of a previous frame of video to find a matching block of pixels of the same size under consideration in the current frame. The process involves an exhaustive search of the many blocks surrounding the current block from the previous frame. Motion estimation is a computation-intensive process that is used to achieve high compression ratios. Block matching is the process of matching the images in a block (a portion of an image) to locations in other frames of a digital picture sequence (e.g. digital video).

Figure 1.12 shows how a digital video system can use motion estimation to identify objects and how their positions change in a series of pictures. This diagram shows that a bird in a picture is flying across the picture. In each picture frame, the motion estimation system looks for blocks that approximate other blocks in previous pictures. Over time, the digital video motion
estimation system finds matches and determines the paths (motion vectors) that these objects take.
Video Elements
Video images are composed of pixels. Digital video systems group pixels within each image into small blocks and these blocks are grouped into macroblocks. Macroblocks can be combined into slices and each image may contain several slices. Slices make up frames, which come in several different types. The different types of frames can be combined into a group of pictures.
Pixels
A pixel is the smallest component in an image. Pixels can range in size and shape and are composed of color (possibly only black on white paper) and intensity. The number of pixels per unit of area is called the resolution. More pixels per unit area provide more detail in the image.
Blocks
Blocks are portions of an image within a frame of video usually defined by a number of horizontal and vertical pixels. For the MPEG system, each block is composed of 8 by 8 pixels and each block is processed separately.
Macroblocks
A macroblock is a region of a picture in a digital picture sequence (motion picture) that may be used to determine the motion compensation from a reference frame to other pictures in a sequence of images. Typically, a frame is divided into 16 by 16 pixel macroblocks, each composed of four 8 by 8 pixel blocks.
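The block and macroblock counts for a frame follow directly from these dimensions. The sketch below uses a 720 by 480 standard definition frame as an illustrative example.

```python
# Block/macroblock arithmetic for an illustrative standard definition
# frame of 720 x 480 pixels, using 8x8 blocks and 16x16 macroblocks.

WIDTH, HEIGHT = 720, 480
BLOCK, MACROBLOCK = 8, 16

blocks = (WIDTH // BLOCK) * (HEIGHT // BLOCK)                  # 90 * 60
macroblocks = (WIDTH // MACROBLOCK) * (HEIGHT // MACROBLOCK)   # 45 * 30

print(blocks)       # -> 5400
print(macroblocks)  # -> 1350
```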
Slice
A slice is a part of an image that is used in digital video and is composed of a continuous group of macroblocks. Slices can vary in size and shape.
Frames
A frame is a single still image within the sequence of images that comprise the video. In an interlaced scanning video system, a frame comprises two fields. Each field contains half of the video scan lines that make up the picture, with the first field typically containing the odd numbered scan lines and the second field typically containing the even numbered scan lines.
To compress video signals, the MPEG system categorizes video images (frames) into different formats. These formats vary from frame types that only use spatial compression (independently compressed) to frame types that use both spatial compression and temporal compression (predicted frames). MPEG system frame types include independent reference frames (I-frames), predicted frames that are based on previous reference frames (P-frames), and bidirectionally predicted frames that are based on both preceding and following frames (B-frames).
Frame Rate
Frame rate is the number of images (frames or fields) that are displayed to a viewer over a period of time. Frame rate is typically indicated in frames per second (fps). The common frame rates for television signals range from 25 to 30 frames per second and 50 to 60 fields per second. To reduce the bandwidth of video streams, some frames can be dropped. Frame dropping is the process of discarding or not using all the video frames in a sequence of frames. The process of dropping frames can be prioritized by dropping B frames first (lowest impact on video quality), then P frames, and finally I frames (very high impact on video quality). When a frame is dropped, it may be replaced by an adjacent frame.
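The drop-priority policy just described can be sketched as a simple sort. This is an illustrative model, not an encoder implementation; the frame labels and function names are invented for the example.

```python
# Illustrative frame-dropping policy: B frames are discarded first
# (lowest quality impact), then P frames, then I frames (highest impact).

DROP_ORDER = {"B": 0, "P": 1, "I": 2}

def frames_to_drop(frames, count):
    """Pick `count` frames to discard, lowest-impact types first."""
    by_impact = sorted(frames, key=lambda f: DROP_ORDER[f])
    return by_impact[:count]

gop = ["I", "B", "B", "P", "B", "B", "P"]
print(frames_to_drop(gop, 3))  # -> ['B', 'B', 'B']
```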
When frames are dropped, the viewer may perceive motion judder distortion in the video. Motion judder is the perceived variations in a sequence of images. Viewers are more sensitive to motion judder during motion or high activity scenes.
Group of Pictures

A group of pictures (GOP) is a sequence of video frames that begins with an independent reference frame (I-frame) and contains the predicted frames (P-frames and B-frames) that are decoded relative to it.
Groups of pictures can be independent (closed) GOPs or they can be relative (open) to other GOPs. An open group of pictures is a sequence of image frames that requires information from other GOPs to successfully decode all the frames within its sequence. A closed group of pictures is a sequence of image frames that can successfully decode all the frames within its sequence without using information from other GOPs.
Because P and B frames are created using other frames, when errors occur on previous frames, the error may propagate through additional frames (error retention). To overcome the challenge of error propagation, I frames are sent periodically to refresh the images and remove any existing error blocks. Figure 1.15 shows how errors that occur in an MPEG image may be retained in frames that follow. This example shows how errors in a B-Frame are transferred to frames that follow as the B-Frame images are created from preceding images.
Quantizer Scaling
Quantizer scaling is the process of changing the quantizer threshold levels to adjust the data transmission rates from a media encoder. The use of quantizer scaling allows an MPEG system to provide a fixed data transmission rate by adjusting the amount of media compression.

To perform quantizer scaling, image blocks (macroblocks) are converted into their frequency components through the use of the discrete cosine transform (DCT). The DCT converts an image map into its frequency components (from low detail to high detail). Each frequency component is converted (quantized) into a specific value (coefficient). The accuracy of each of these quantized values determines how closely the image block represents the original image. Because many of the frequency components hold small values (small amounts of detail), it is possible to reduce the amount of data that represents a block of an image by eliminating the fine details through the use of thresholding. Thresholds are values that must be exceeded for an event to occur or for data to be recorded. Quantizer scaling uses an adjustable threshold level that determines whether the level of a frequency component should be included in the data or whether a 0 level (no information) should be transmitted in its place. The higher the quantizer level, the higher the amount of compression. However, as the quantizer level increases, so do the image distortion levels.

Figure 1.16 shows how MPEG systems can use quantizer scaling to control the data rate by varying the amount of detail in an image. This example shows that an image is converted into frequency component levels and that each component has a specific level. This example shows that setting the quantizer level determines whether the coefficient data will be sent or whether a 0 (no data) will be used in its place.
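The trade-off between quantizer level and data rate can be demonstrated with a few lines. The coefficient values below are invented for illustration; real encoders use quantization matrices rather than a single threshold, so this is only a sketch of the principle.

```python
# Sketch of quantizer scaling: raising the quantizer level zeroes more
# low-level coefficients, trading image detail for a lower data rate.
# The DCT coefficient values are illustrative.

def quantize(coeffs, scale):
    """Keep a coefficient only if it exceeds the quantizer threshold."""
    return [c if abs(c) >= scale else 0 for c in coeffs]

def nonzero_count(coeffs):
    return sum(1 for c in coeffs if c != 0)

dct_block = [120, 45, 18, 9, 6, 3, 1, 0]
for scale in (2, 10, 50):
    kept = nonzero_count(quantize(dct_block, scale))
    print(scale, kept)   # higher scale -> fewer coefficients to send
```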
MPEG
Moving Picture Experts Group (MPEG) standards are digital video transmission and control processes that coordinate the transmission of multiple forms of media (multimedia). MPEG is a working committee that defines and develops industry standards for digital video systems. These standards specify the data compression and decompression processes and how they are delivered on digital broadcast systems. MPEG operates under the International Organization for Standardization (ISO).
The MPEG system defines the components (such as a media stream or channel) of a multimedia signal (such as a digital television channel) and how these channels are combined, transmitted, received, separated, synchronized and converted (rendered) back into a multimedia format. The basic components of an MPEG system include elementary streams (the raw audio, data or video media), program streams (a group of elementary streams that make up a program) and transport streams that carry multiple programs. Figure 1.17 shows the basic operation of an MPEG system. This diagram shows that the MPEG system allows multiple media types to be used (voice,
audio and data), codes and compresses each media type, adds timing information and combines (multiplexes) the media channels into an MPEG program stream. This example shows that multiple program streams (e.g. television programs) can be combined into a transport channel. When the MPEG signal is received, the program channels are separated (demultiplexed), individual media channels are decoded and decompressed and they are converted back into their original media form.
The PES packet header includes a packet identification code (PID) that uniquely identifies the packetized elementary stream from all other packetized elementary streams that are transmitted. PES packets are variable length packets whose maximum length is determined by a 16-bit length field in the header of each packet.

PES streams may include decoding time stamps and presentation time stamps that help the receiver to decode and present the media. A decoding time stamp is reference timing information that indicates when the decoding of a packet or stream of data should occur. A presentation time stamp is a reference timing value that is included in MPEG packet media streams (digital audio, video or data) and is used to control the presentation time alignment of media.
MPEG transport streams (MPEG-TS) use a fixed length packet size, and each transport packet within the transport stream is identified by a packet identifier. A packet identifier in an MPEG system identifies the packetized elementary streams (PES) of a program channel. A program (such as a television show) is usually composed of multiple PES channels (e.g. video and audio). Because MPEG-TSs can carry multiple programs, a program association table and program map tables are periodically transmitted to identify the programs carried on an MPEG-TS. These program tables provide a list of programs and the associated PIDs for specific programs, which allows the MPEG receiver/decoder to select and decode the correct packets for a specific program.

MPEG transport packets are a fixed size of 188 bytes with a 4 byte header. The payload portion of the MPEG-TS packet is 184 bytes. The beginning of a transport packet includes a synchronization byte that allows the receiver to determine the exact start time of the packet. This is followed by an error indication (EI) bit that indicates whether there was an error in a previous transmission process. A payload unit start indicator (PUSI) flag alerts the receiver if the packet contains the beginning (start) of a new PES. The transport priority indicator identifies whether the packet has low or high priority. The 13 bit packet identifier (PID) is used to define which PES is contained in the packet. The scrambling control flag identifies whether the data is encrypted. An adaptation field control defines whether an adaptation field is used in the payload of the transport packet, and a continuity counter maintains a count index between sequential packets.

Figure 1.19 shows an MPEG transport stream and a transport packet structure. This diagram shows that the MPEG-TS packet has a fixed size of 188 bytes including a 4 byte header.
The header contains various fields including an initial synchronization (time alignment) field, flow control bits, packet identifier (tells which PES stream is contained in the payload) and additional format and flow control bits.
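The header layout described above can be decoded with a few bit operations. The following sketch (our own field names, not from the MPEG specification) extracts the sync byte, error indicator, PUSI flag, priority, 13-bit PID, scrambling bits, adaptation field control, and continuity counter from a 188-byte transport packet:

```python
import struct

def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG transport stream packet."""
    if len(packet) != 188:
        raise ValueError("MPEG-TS packets are a fixed 188 bytes")
    sync = packet[0]
    if sync != 0x47:
        raise ValueError("missing 0x47 sync byte")
    flags_pid = struct.unpack(">H", packet[1:3])[0]  # bytes 2-3: flags + PID
    ctrl = packet[3]                                  # byte 4: control fields
    return {
        "transport_error": bool(flags_pid & 0x8000),    # EI bit
        "payload_unit_start": bool(flags_pid & 0x4000),  # PUSI flag
        "transport_priority": bool(flags_pid & 0x2000),
        "pid": flags_pid & 0x1FFF,                       # 13-bit packet identifier
        "scrambling": (ctrl >> 6) & 0x3,                 # scrambling control
        "adaptation_field": (ctrl >> 4) & 0x3,           # adaptation field control
        "continuity_counter": ctrl & 0x0F,
    }
```

For example, a packet beginning 0x47 0x41 0x00 0x17 carries PID 0x0100 (256), has its PUSI flag set, and has a continuity counter of 7.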
PES packets tend to be much longer than transport packets. This requires that the PES packets be divided into segments so they can fit into the 184 byte payload of a transport packet. Each packet in the transport stream only contains data from a single PES. Since the division of PES packets into 184 byte segments will likely result in a remainder portion (segment) that is not exactly 184 bytes, an adaptation field is used to fill the transport packet. An adaptation field is a portion of a data packet or block of data that is used to adjust (define) the length or format of data that is located in the packet or block of data. Figure 1.20 shows how PES packets are inserted into an MPEG transport stream. This example shows how a video and an audio packet elementary stream may be combined on an MPEG-TS. This example shows that each of the PES packets is larger than each MPEG transport stream packet. Each PES packet is divided into segments that fit into the transport stream packets.
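The segmentation described above can be illustrated with a minimal sketch. A real multiplexer pads the short remainder segment with adaptation-field stuffing bytes rather than padding the payload directly; this simplified version pads with 0xFF only to show why the remainder segment needs filling:

```python
def segment_pes(pes: bytes, payload_size: int = 184) -> list:
    """Split a PES packet into fixed-size transport packet payloads.

    Simplified: the final short segment is padded with 0xFF bytes; a real
    MPEG multiplexer would carry the stuffing in an adaptation field.
    """
    segments = [pes[i:i + payload_size] for i in range(0, len(pes), payload_size)]
    if segments and len(segments[-1]) < payload_size:
        segments[-1] += b"\xff" * (payload_size - len(segments[-1]))
    return segments
```

A 400-byte PES packet, for instance, yields three 184-byte payloads, the last of which contains 152 bytes of stuffing.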
Quality Metrics
Quality metrics are the gathering and/or use of values that indicate how accurately a system or service can reproduce media or perform actions within desired levels.
Objective Quality
Objective quality is the determination of accuracy or the ability of a system to provide desired results using evaluation criteria and sources that are repeatable. Objective quality of video signals can be calculated by comparing the pixel locations or signal levels in a source (reference) image to the pixel location or signal levels in a received image. These calculations can be in the form of average or peak error between the images or signals.
Subjective Quality
Subjective quality is the determination of accuracy, or the ability of a system to provide desired results, using evaluation sources that can vary (such as the opinions of people) to determine the amount of a quantity or the quality of data or media.
Audio Quality
Audio quality is the ability of a speaker or audio transfer system to recreate the key characteristics of an original digital audio signal. Some of the measures of audio quality include fidelity, frequency response, total harmonic distortion, noise level and signal to noise ratio. The type of audio coder that is used along with its compression parameters influences digital audio quality. Audio compression devices reduce the data transmission rate by approximating the audio signal and this may add distortion.
Packet loss and packet corruption errors will result in the distortion or muting of the audio signal. The compression type influences the amount of distortion that occurs with packet loss or bit errors. Audio coders that have high compression ratios (high efficiency) tend to be more sensitive to packet loss and errors. Even when small amounts of error occur in a speech coder, the result may be very different sounds (a warble) due to the use of codebooks. Warbles are sounds that are produced during the decoding of a compressed digital audio signal that has been corrupted (has errors) during transmission. The warble sound results from the creation of different sounds than those originally sent. Muting is the process of inhibiting audio (squelching). Muting can be automatically performed when packet loss is detected. Figure 1.21 shows some of the causes and effects of audio distortion in IP Television systems. This example shows that audio signals are digitized, compressed and error protection coded prior to transmission. During the transmission process, some packets are lost or corrupted. The loss of packets can result in the temporary muting of the audio signal. Because the data
compression process represents sounds with codes that approximate the original audio signal, packet corruption results in the creation of a different, altered sound than the sound that was originally transmitted. When there is significant data corruption, unusual artifact sounds (warble sounds) can be created.
Audio Fidelity
Audio fidelity is the degree to which a system or a portion of a system accurately reproduces upon its output, the essential characteristics of the signal impressed upon its input. Audio fidelity can be determined by comparing an original (reference) audio signal with a received audio signal to determine the difference levels. The difference in signal levels represents the distortion that occurs between the source and receiver of the audio signal. Figure 1.22 shows how to measure audio fidelity. This diagram shows that fidelity testing can identify the distortion that is added at various places in the recording, transmission and recreation of an audio signal. This example
explains that the same reference test signal is applied to the input of the system and to a comparator. The comparator removes the original reference signal to show the amount of distortion that is added in the transmission and processing of the signal.
Noise Level
Noise level is a measure of the combined energy of unwanted signals. Noise level is commonly specified as a ratio (in decibels) of noise level on a given circuit as compared to decibels above reference noise level for an electrical system or decibels sound pressure level for an acoustical system.
Video Quality
Digital video quality is the ability of a display or video transfer system to recreate the key characteristics of an original digital video signal. Digital video and transmission system impairments include tiling, error blocks, jerkiness, artifacts (edge busyness) and object retention. Distortion indicators are characteristic values of media (such as error blocks on a video display) that can be used to qualify and quantify distortion characteristics. Distortion indicators for video signals can include differences in brightness (luminance), differences in color components (Cr, Cb), and time shifts in object or frame display. Figure 1.23 shows some of the causes and effects of video distortion that may occur in IP Television systems. This example shows that video digitization and compression convert video into packets that can be sent through data networks (such as the Internet). Packet loss and packet corruption result in distorted video signals. This example shows that some types of distortion include tiling, error blocks and retained images.
Tiling
Tiling is the changing of a digital video image into square tiles that are located in positions other than their original positions on the screen. Tiling can occur during a burst of data errors where multiple blocks are incorrectly displayed.
Error Blocks
Block distortion (blockiness) is a variation in graphics display where square or rectangular areas of the display have been changed. Block distortion can occur in compressed image or video signals that use media blocking for compression (areas of the graphics are compressed separately) when some of the blocks are lost or distorted.
Error blocks are groups of image bits (a block of pixels) in a digital video signal that contain error data rather than the original image bits that were supposed to be in that image block. Figure 1.24 shows an example of how error blocks are displayed on a digital video signal. This diagram shows that transmission errors result in the loss of picture blocks. In this example, the error blocks continue to display until a new image is received that does not contain the errors.
Jerkiness
Jerkiness is the holding or skipping of video image frames or fields in a digital video. Jerkiness may occur when a significant number of burst errors occur during transmission resulting in the inability of a receiver to display a new image frame. Instead, the digital video receiver may display the previous frame to minimize the perceived distortion (a jittery image is better than no image).
Ringing
Ringing is the inclusion of repetitive variations (such as ripples in a video image) that result from the conversion (quantization) of a media signal. Ringing can be reduced or eliminated through the use of a capture system that can detect and compensate for ringing variations.
Quantization Noise
Quantization noise (or distortion) is the error that results from the conversion of a continuous analog signal into a finite number of digital samples that cannot accurately reflect every possible analog signal level. Quantization noise is reduced by increasing the number of samples or the number of bits that represent each sample. This term is also known as quantization distortion. Quantization noise in video signals appears as a display of distortions (snow) across an entire image. Quantization noise can be caused by the processes or settings in the capture card or A/D converter assembly.
Aliasing Effects
Aliasing effects are unwanted distortions that result from the conversion of an image where the sampling of the image is at a speed less than half of the most rapid changes in the image. Aliasing effects commonly appear as lines or ripples in the scanned or converted image.
Artifacts
Artifacts are unintended, unwanted aberrations in media or information (such as blocks on a video image or speckles on a picture image around sharp edges). Artifacts may be created during the media compression process. A common artifact that is produced in digital video systems is mosquito noise. Mosquito noise is a blurring effect that occurs around the edges of image shapes that have a high contrast ratio. Mosquito noise can be created through the use of lossy compression when it is applied to objects that have sharp edges (such as text). Figure 1.25 shows an example of mosquito noise artifacts. This diagram shows that the use of lossy compression on images that have sharp edges (such as text) can generate blurry images.
Object Retention
Object retention is the keeping of a portion of a frame or field on a digital video display when the image has changed. Object retention occurs when the data stream that represents the object becomes unusable to the digital video receiver. The digital video receiver decides to keep displaying the existing object in successive frames until an error free frame can be received. Advanced compression systems (such as MPEG-4) can represent components of media as objects rather than video frames. This means that object retention may occur only in parts of IPTV systems that use MPEG-4 compression. Figure 1.26 shows how a compressed digital video signal may have objects retained when errors occur. This example shows an original sequence where
the images have been converted into objects. When the scene change occurs, some of the bits from image objects are received in error, which results in the objects remaining (a bird and the sail of a boat) in the next few images until an error free portion of the image is received.
Brightness
Brightness is an attribute of a video display which is the amount of light that a display appears to emit. The eye is more sensitive to the intensity (brightness) of some colors than others.
Contrast
Contrast is the range of light-to-dark values of an image. For video signals, contrast is proportional to the difference between black and white voltage levels of the video signal. The contrast control adjusts video gain (white bar, white reference).
Slice Losses
Slice losses are sequential blocks of display information that are lost or unable to be displayed.
Blurring
Blurring is the reduction of detail in images or sequences of images near the boundaries of graphics areas on the display. Because the amount of blurring increases as the amount of compression increases, blurring can become more pronounced during video sequences that have high motion or activity.
Color Pixelation
Color pixelation is the changing of small image elements (pixels) on a graphics image or display. Color pixelation can occur when the format of media is changed or sent through a transcoder process.
Testing Models
Testing models are a set of equipment configurations and tests that can be used to identify and quantify the operation or performance of devices, systems, or services. IPTV testing models can use full reference, partial reference, or zero reference testing models.
Full Reference
Full reference testing is the process of verifying the operation or performance of a system using a comparison between the received signal and the complete original signal. Full reference video quality measurement is an objective testing process for video that is defined in ITU-T J.144. The full reference video quality system ranks the quality using a variety of metrics including MOS, blockiness, blur and PSNR. Because full reference testing requires both the original signal and the signal under test, it is commonly used for laboratory testing where both are available at the same location. Figure 1.27 shows how full reference testing compares an original signal (full reference) with a received or test signal to determine the quality or accuracy of the signal. The first step for full reference testing is to time align the signals (image frames) so they can be compared. The components (pixel locations and levels) are then compared to determine the error (e.g. PSNR) between the reference signal and the signal under test.
Network Measurements
Network measurements are the identification and quantity determination of data related to the operation and performance of a network. Some of the key network measurements for IPTV systems include packet loss rate, error rate, latency, delay, and jitter.
IP systems are designed to lose packets during temporary increases in data traffic. If a network were engineered never to discard packets, its capacity would have to be overbuilt. The key to a successful IP data network is to design the system to discard packets that have low priority or low impact on the service they are providing. Figure 1.30 shows how some packets may be lost during transmission through a communications system. This example shows that several packets enter into the Internet. The packets are forwarded toward their destination as usual. Unfortunately, a lightning strike corrupts (distorts) packet 8 and it cannot be forwarded. Packet 6 is lost (discarded) when a router has exceeded its capacity to forward packets because too many were arriving at the same time. This diagram shows that the packets are serialized to allow them to be placed in correct order at the receiving end. When the receiving end determines a packet is missing in the sequence, it can request that the packet be retransmitted. If the time delivery of packets is critical (such as for packetized voice), it is common that packet retransmission requests are not performed and the lost packets simply result in distortion of the received information (such as poor audio quality).
Packet Latency
Packet latency is the amount of time delay between the sending of a packet to the time when the packet is received or decoded. Packet latency is caused by a combination of delays that include transmitter queuing time (waiting for a transmit slot), transmission propagation time (packet travel time) and packet processing time (switching).
Packet Jitter
Packet jitter is the undesirable random change in the arrival rate of packets. Packet jitter can be caused by changes in packet travel paths (route flapping) and changes in how packets are received, processed, and forwarded by routers in the connection path. To overcome packet jitter, packet buffering can be used. Packet buffering is the process of temporarily storing (buffering) packets during the transmission of information to create a reserve of packets that can be used during packet transmission delays or retransmission requests. While a packet buffer is commonly located in the receiving device, a packet buffer may also be used in the sending device to allow the rapid selection and retransmission of packets when they are requested by the receiving device.
The amount of jitter determines the amount of buffer space needed at the receiving side in order to restore the original data transmission pattern. Figure 1.31 shows how packet buffering can be used to reduce the effects of packet delays and packet loss for streaming media systems. This diagram shows that during the transmission of packets from the media server to the viewer, some of the packet transmission times vary (jitter) and some of the packets are lost during transmission. The packet buffer temporarily stores data before providing it to the media player. This provides the time necessary to synchronize the packets and to request and replace packets that have been lost during transmission.
Out of order packets may cause additional distortion because the MPEG packet sequencing from the encoder is not necessarily the same sequence that is required by the decoder. The decoding and presentation times for frames may not be the same because some of the frames may be created from future frames. Out of order packets may be measured by an out of sequence packet rate. Out of sequence packet rate is the ratio of the number of packets that have been received out of sequence to the total number of packets that have been received. The quality of packet delivery can also be indicated by measuring the number of duplicated packets. Duplicate packet rate is the ratio of the number of packets that have been received more than once to the total number of packets that have been received. Figure 1.32 shows how packets may arrive out of order when transmitted through a packet network. Packets may travel through the network over different paths which can result in variable transmission delays. Some protocols add a packet sequence number that allows the received packets to be reassembled in the correct order.
Gap Loss
Gap loss is the number of packets or data that cannot be received or processed due to a transmission delay that exceeds the amount of jitter buffer time. Gap length is the amount of time between the beginning and end of gap losses (when packets arrive after the buffer can accept them). Gap loss rate is the ratio of the number of packets that have not been received during a gap interval to the total number of packets that should have been received.
Packet Gap
Packet gap is the time duration between successive packets. If the packet gap is excessive, the packet may be discarded because the time delay is higher than the jitter buffer time. Figure 1.33 shows how inter-packet gap loss can occur in IPTV systems. This diagram shows 3 media sources (channels). Each of these channels is providing an MPEG transport stream (MPEG-TS) that contains a mix of video (V), audio (A), and program clock reference - PCR (C) packets. Seven MPEG-TS packets are put into the payload of an IP datagram (packet). These packets are combined onto a single transmission channel through a router. The time duration between these packets is the inter-packet gap time. This example shows that when other packets are mixed in with the packets (from other parts of the network), the inter-packet gap time can increase.
Route Flapping
Route flapping is the continual changing of a network connection path, resulting from intermittent congestion or loss of a circuit connection, which indicates to the current router connection path that the connection has been lost or that a better connection path exists. This causes the packet routing path to continually change. These different paths can result in significant variance in transmission delay times (excessive jitter). Route flapping can be mitigated by newer IPv6 protocols and reservation protocols.
Loss of Signal
Loss of signal is the inability of a receiver or device to successfully receive or process a signal.
Line Rate
Line rate is the total amount of information that is transferred over a transmission line during a specific period of time.
Stream Rate
Stream rate is the amount of media information (the stream) that is transferred through a system or to a device over a specific period of time.
Frame Count
Frame count is the number of frames that have been transferred or received over a period of time. Frame count may be divided into categories of frames such as independent frames (I-frames), predictive frames (P-frames), or bidirectional frames (B-frames).
Buffer Time
Buffer time is the duration that occurs between the request to setup a buffer storage area to when a buffer begins to provide data to an application or service. Buffer time should be long enough to ensure enough packets are available to continuously supply the application when packet transmission delays occur. For IP media streaming services, buffer time can be several seconds.
Rebuffer Events
Rebuffer events are processes that initiate the setup of a new buffer. Rebuffer events may be triggered when a buffer runs out of data due to long packet transmission delays or a high number of packet retransmission requests. Rebuffer events usually cause video or audio to stop playing on the last available media frame. Like buffer delays, rebuffer events can last for several seconds. Rebuffer events may be measured over a time period (such as 15 rebuffer events per hour).
Rebuffer Time
Rebuffer time is the time period that occurs from when a request for rebuffering is initiated to when media playing starts again. Rebuffer time can be used to calculate the rebuffer ratio, which is the amount of rebuffer time as compared to the amount of play time.
Stream Integrity
Stream integrity is the accuracy of a sequence of data or information as compared to its original information source. Stream integrity may be verifiable through the use of protocol analysis and error detection codes that are sent along with the original data information.
Jitter Discards
Jitter discards are the number of packets that are eliminated due to excessive fluctuating (jitter) delay time.
Compression Ratio
Compression ratio is a comparison of data that has been compressed to the total amount of data before compression. The higher the compression ratio, the more sensitive the media is to bit errors, delays, and packet loss. Content quality measurement systems such as V Factor may use compression ratio as a factor for the determination or rating of quality level.
Protocol Conformance
Protocol conformance is the ability of a device or system to communicate and process commands according to the syntax (structure) of a protocol specification or its rules.
Synchronization Loss
Synchronization loss is the inability of a receiver to maintain time alignment with the program clock reference (PCR) that is included in a transport stream. Synchronization loss can occur due to the loss or excessive delay of a packet containing MPEG PCR.
Transport Error
Transport error is a variation in the command structure, processing, or alteration of data that occurs on a transmission channel. Transport error can be detected by reviewing the header information of the transport packets.
Channel Map
A channel map is a listing of media components within a transport channel (such as programs within an MPEG transport stream).
Image Entropy
Image entropy is a measure of the amount of information that is contained within an image or display sequence. A high image entropy level usually indicates a higher level of image complexity or image objects. Image entropy may be used as a factor in determining the amount of compression that can be used (less entropy, more compression).
Missing Channels
Missing channels are programs or data streams that are not available or cannot be received by recipients. A test set or network probe may monitor for missing channels to determine where a stream or group of streams are lost (ended) within a network.
Channel change delay can add a significant amount of time to changes in media stream signals (such as IPTV). Identifying the causes of delays in channel change time can be difficult, as some systems use a fast channel change process that increases (bursts) the data transmission rate during a channel change request. This fills the buffer quickly, which reduces the channel change time. Figure 1.34 shows some of the contributors to channel change time in IPTV systems. Channel change time is the sum of the individual delays associated with processing the channel change request in the set top box, sending an IGMP join message to the nearest multicast router that is carrying the channel, channel rights validation, adding the device to the multicast routing table, filling up the channel buffer for the new channel, and presenting the media to the viewer.
Connect Time
Connect time is the time interval that occurs between the initiation of a service request (such as selecting an audio file to play) and when the service begins (when the audio starts playing).
MPQM uses measurements such as timing jitter, image entropy, and network impairments to determine the quality score. The program clock reference is verified to determine timing jitter, and network measurements are used to quantify packet loss. The MPQM system also evaluates the type of content to determine the entropy that can influence how the video is displayed.
V Factor
V factor is a video quality rating score that is based on MPQM, adding measurements that help to evaluate quality based on the content format and processing types. V-factor takes into account the underlying video content processing differences (such as group of pictures and compression amount) to adjust the quality score. The V-factor score ranges from 1 to 5. V-factor uses additional content related information such as compression type, group of pictures (GOP), and quantizer levels to determine the V factor quality score.
Figure 1.37 shows that V-Factor uses an enhanced version of the MPQM quality metrics system to determine the quality of a video signal. V-Factor uses measurements from timing jitter, enhanced image entropy, and network impairments to determine the quality score. The program clock reference is verified to determine timing jitter, and network measurements are used to quantify packet loss. The MPQM system also evaluates the type of content, using the type of coder, the mix of image frames (I, P, and B), and the quantizer level, to determine the entropy that can influence how the video is displayed.
Figure 1.38 shows how a PEVQ score is calculated. This example shows that a reference video source and test video source (received video) are aligned in time and space. The PEVQ system then determines the differences between the video signals (Y, Cr, Cb). These differences are characterized and rated to determine a PEVQ score.
There are variations of MOS that are used to rate video, audio, and audiovisual quality during regular or burst error conditions.
Test Equipment
Test equipment can consist of a device or assembly that can measure or verify that a particular product or system meets specific requirements or if it is capable of performing specific functions or actions. Some of the common types of test equipment that are used by IPTV service providers include video analyzers, MPEG generators, protocol analyzers, built in test equipment, and impairment emulators.
Video Analyzer
A video analyzer is a test instrument that is designed to receive and evaluate video or media signals. Most modern day media analyzers have the capability of evaluating multiple types of media in various formats such as MPEG-1, MPEG-2 (broadcast TV), MPEG-4 (H.264 packet video), VC-1 (Windows Media), and VP6 (Flash video). Video analyzers can usually identify stream rates, bit rates, display motion vectors, quantizer values, frame rates, and frame counts, and can also measure various types of errors such as bit error rate and frame loss rate. Video analyzers may include small video displays that allow technicians to see the channel content.
MPEG Generator
An MPEG generator is an instrument that can create signals which simulate the source (headend) of a broadcast or IPTV system. MPEG generators are usually able to create both single program transport streams (SPTS) and multiple program transport streams (MPTS). MPEG generators may have the capability to insert errors or adjust the error rate to simulate common network impairments.
Protocol Analyzer
A protocol analyzer is a test instrument that is designed to monitor a network and provide analysis of the communication taking place on the network. This allows a technician to monitor a network and provides information for problem determination and resolution. Most modern day protocol analyzers are aware of all commonly used, industry standard protocols. More advanced protocol analyzers sit in-line between two devices, without the devices being aware that the analyzer is present. Other less sophisticated protocol analyzers can be created using standard PCs with network interface cards in promiscuous mode, whereby they copy all packets that appear on the network, regardless of destination address.
Impairment Emulator
An impairment emulator is a system that creates or simulates impairments to the operation or communication with a software program or hardware
device. The purpose of an impairment emulator is to allow developers to simulate the operation of programs or devices under conditions that may happen to their products or services and to determine the changes in performance or operation that may result from these impairments. Some of the impairments the emulator may produce include jitter, latency, burst loss, gap loss, packet loss, out of order packets, route flapping, and link failure. Impairment emulators may be used to evaluate the performance of systems under various conditions such as loaded and in failed conditions.
Network Monitoring
A network monitoring system is a combination of software and hardware that collects and analyzes information on network alarms and performance data and alerts a center if it detects trouble in loop, interoffice, or switching systems. Network monitoring may be passive (does not interfere with the network operation) or active (processes or interacts with media or data). Network monitoring may be set up in devices such as routers by copying or redirecting data to other ports (test ports) or through network probes that are installed at various points in the network.
Mirror Port
A mirror port is a connection point (a port) on a switch that is configured to duplicate the traffic appearing on another one of the switch's ports.
Active Port
An active port is a connection point (a port) on a router or switch that is configured to process traffic appearing on another one of the switch's ports.
In Line Monitoring
In line monitoring is the use of a device or process that receives from a line input, performs measurements, and produces or provides a signal on the line output.
Hierarchical Monitoring
Hierarchical monitoring is a structured measurement and reporting system that provides information from lower layer functions upward to be combined by higher level monitoring functions.
Alarm Views
Alarm views are the presentation of monitoring conditions or events. Alarm views may be grouped into functional processes such as media acquisition, distribution, or reception.
Network Probes
A network probe is a device or process that is inserted into a network to monitor for specific characteristics or conditions. Probes can be passive or active. Passive probes monitor signals or operations without changing or impacting the underlying functions or signals. Active probes process or alter processes to perform their measurement functions.
Measurement Probe
A measurement probe is a device or process that is inserted into a system or network to monitor and measure specific characteristics or conditions. Measurement probes can be non-intrusive, simply monitoring and reporting on information that passes through or by the probe.
Reference Probe
A reference probe is a monitoring device that is used to gather values that serve as a basis of comparison for computing other measurements or values.
Test Client
A test client is a software program and/or associated hardware that is configured to monitor and report information about the operation and service performance within a device or network. Test clients may be installed in network equipment or end user devices (such as set top boxes). Test clients operate under the control of the operating system of the device on which they are installed. The device that performs the test measurements (such as a set top box) may have limited processing power. As a result, the type of monitoring that a test client can perform may be restricted by the device operating system and available performance capability. The test client typically is controlled by and communicates with the system to which it is connected. It can communicate using standard commands such as simple network management protocol (SNMP) or via proprietary messages. Figure 1.39 shows how a software client may be installed in a set top box so that it can monitor and report performance conditions. This example shows a test software module that has been downloaded and installed into the memory of the STB. This test software module can determine packet losses, monitor packet jitter, and analyze their impact on the display of the video.
Heartbeat Generator
A heartbeat generator is a communication test function that repeatedly transmits a signal through a system or network so that it is eventually returned to the sender. If the heartbeat is received back by the sender, it confirms that the network is still operating (it is alive).
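The idea reduces to a round-trip check: send a token, and see whether the same token comes back. In this hypothetical sketch, the network path is represented by a `send` function (invented for illustration) that returns whatever reaches the sender again:

```python
def heartbeat_ok(send):
    """Minimal heartbeat sketch: `send` represents the network path and
    returns whatever comes back to the sender (None if the token is lost)."""
    token = "heartbeat-0001"
    echoed = send(token)
    return echoed == token  # True means the path is alive

# A healthy path loops the token back; a failed path loses it.
print(heartbeat_ok(lambda t: t))     # True
print(heartbeat_ok(lambda t: None))  # False
```

A production heartbeat would run on a timer and raise an alarm after several consecutive misses rather than on a single loss.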
Fault Management
Fault management identifies network problems, failures, and events, and corrects them. Fault management is the reactive form of network management; SNMP traps, syslog, and RMON are typically used to support it. Fault management is one of the five functions defined in the FCAPS model for network management. Fault management systems can be used to predict future failures, find faults, and analyze their causes.
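The reactive side of fault management can be sketched as a comparison of current measurements against alarm thresholds. The metric names and limits below are invented for illustration; a real system would take them from an SLA or a MIB and would typically be driven by SNMP traps rather than polling:

```python
# Hypothetical alarm thresholds (a real system would load these from an SLA).
THRESHOLDS = {"packet_loss_pct": 1.0, "jitter_ms": 50.0}

def check_faults(measurements):
    """Reactive fault check: compare measurements to thresholds and
    return a list of alarm records for any violations."""
    alarms = []
    for metric, limit in THRESHOLDS.items():
        value = measurements.get(metric)
        if value is not None and value > limit:
            alarms.append({"metric": metric, "value": value, "limit": limit})
    return alarms

alarms = check_faults({"packet_loss_pct": 2.5, "jitter_ms": 12.0})
print(alarms)  # one violation: packet_loss_pct exceeds its limit
```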
Fault Predictions
Fault predictions are estimates of unwanted conditions that are likely to occur, based on measured or observed conditions.
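One simple way to form such a prediction is to extrapolate the trend of a measured metric toward an alarm threshold. The sketch below fits a least-squares line to recent samples; the approach and all names are illustrative (it assumes at least two samples and equally spaced measurement intervals):

```python
def predict_threshold_crossing(samples, limit):
    """Naive fault prediction: fit a straight-line trend to recent
    samples and estimate how many more intervals until the metric
    crosses `limit`. Returns None if there is no upward trend."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    slope = slope_num / slope_den
    if slope <= 0:
        return None  # metric is flat or improving; no fault predicted
    intercept = mean_y - slope * mean_x
    # Solve intercept + slope * t = limit, relative to "now" (index n - 1).
    t_cross = (limit - intercept) / slope
    return max(0.0, t_cross - (n - 1))

# Packet loss has been rising by about 1% per interval:
print(predict_threshold_crossing([1.0, 2.0, 3.0], limit=5.0))  # 2.0
```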
Fault Finder
A fault finder is a test set or other type of device that enables faults to be identified and localized.
Fault Analysis
Fault analysis is the evaluation of the failed components or processes in a device or service to determine what caused the fault.
Appendix 1 - Acronyms
AN-Access Network
BER-Bit Error Rate
BITE-Built-In Test Equipment
Blockiness-Block Distortion
CBR-Constant Bit Rate
CCP-Channel Change Performance
CCT-Channel Change Time
CDE-Content Delivery Engine
CDS-Content Delivery System
CN-Core Network
COS-Class of Service
CP-Content Protection
CRC Error-Cyclic Redundancy Check Error
Demarc-Demarcation Point
DEU-Data Extraction Unit
DF-Delay Factor
DPI-Deep Packet Inspection
DPI-Digital Program Insertion
DPU-Data Polling Unit
DUT-Device Under Test
DVQ-Digital Video Quality
EFS-Error Free Seconds
EPSNR-Estimated Peak Signal to Noise Ratio
FEC Effectiveness-Forward Error Correction Effectiveness
FOA-First Office Application
Gap MOS-V-Gap Video Mean Opinion Score
HSS-Home Subscriber Subsystem
HVS-Human Vision System
IPLR-Internet Packet Loss Rate
KQI-Key Quality Indicators
Lab Tests-Laboratory Testing
LOS-Loss of Signal
Luma-Luminance
MAC Address-Medium Access Control Address
MAPDV-Mean Absolute Packet Delay Variation
MDI-Media Delivery Index
MIB-Management Information Base
MLR-Media Loss Rate
MOS-Mean Opinion Score
MOS-A-Mean Opinion Score Audio
MOS-AV-Mean Opinion Score Audiovisual
MOS-V-Mean Opinion Score Video
MPEG TS Analysis-MPEG Transport Stream Analysis
MPLS-Multi-Protocol Label Switching
MPQM-Moving Picture Quality Metrics
MR-DVR-Multi-Room Digital Video Recorder
MSE-Mean Square Error
nDVR-Network Digital Video Recorder
NMS-Network Management Station
NMS-Network Management System
OID-Object Identifier
OS-Operating System
PAQ-Perceived Audio Quality
PCR-Program Clock Reference
PCR Error-Program Clock Reference Error
PDN-Premises Distribution Network
PDR-Packet Discard Rate
PEVQ-Perceptual Evaluation of Video Quality
PID-Packet Identifier
PIP-Picture in Picture
PLR-Packet Loss Rate
PMT-Program Map Table
POC-Proof of Concept
POE-Point of Entry
PQA-Picture Quality Analysis
PS-Program Stream
PSNR-Peak Signal to Noise Ratio
PTS Error-Presentation Time Stamp Error
QoE-Quality of Experience
QoS-Quality of Service
QoS Awareness-Quality of Service Awareness
RMON-Remote network MONitoring
RMON Probe-Remote Monitoring Probe
SDI-Serial Digital Interface
SHE-Super Headend
SI-System Integration
SLA-Service Level Agreement
SLA Violations-Service Level Agreement Violations
SNMP-Simple Network Management Protocol
SNMPv1-Simple Network Management Protocol version 1
SNMPv2-Simple Network Management Protocol version 2
SNMPv3-Simple Network Management Protocol version 3
SOC-System On Chip
SOM-Server Operations and Management
SRD-System Requirements Document
SSCQE-Single Stimulus Continuous Quality Evaluation
SUT-System Under Test
SVS-Switched Video Service
SVT-System Verification Test
Sync Impairment-Synchronization Impairments
TAR-Test Accuracy Ratio
Test Model-Testing Models
TIMS-Transmission Impairment Measurement System
TPU-Threshold Processing Unit
TS Analysis-Transport Stream Analysis
TS-Sync-Transport Stream Synchronization
TS-Sync Loss-Transport Stream Synchronization Loss
TV Server-Television Server
TVQM-Television Video Quality Metrics
VAC-Video Admission Control
VBR-Variable Bit Rate
V-Factor-Video Factor
VPN-Virtual Private Network
VQEG-Video Quality Experts Group
VQS-Video Quality Score
VSAQ-Video Service Audio Quality
VSMQ-Video Service Multimedia Quality
VSPQ-Video Service Picture Quality
VSTQ-Video Service Transmission Quality
VTS-Video Test System
Agilent - IPTV Testing http://www.agilent.com
Anacise Testnology - Triple Play http://www.anacise.com/IPTV.htm
Empirix - IMS http://www.empirix.com/
Azimuth - WiMAX http://www.Azimuth.com
Berkeley Varitronics - WiMAX http://www.bvsystems.com
EXFO - Transport Testing http://www.exfo.com
Hewlett-Packard - IPTV and Communication Testing http://www.HP.com
Inneoquest - IPTV and Network Testing http://www.inneoquest.com
Ixia - IPTV and IP Testing http://www.Ixiacom.com
JDSU - IPTV and Communications Testing http://www.JDSU.com
MiraVid - Video Analyzers http://www.miravid.com
Omnicor - IP Network Testing http://www.omnicor.com
Opticom - Video Testing http://www.opticom.de
Pixelmetrix - IPTV and Broadcast Testing http://www.Pixelmetrix.com
Semaca - Video Quality Testing http://www.semaca.co.uk
Shenick - IPTV and Network Testing http://www.shenick.com
Spirent - IPTV and VoIP Testing http://www.spirent.com
Sunrise Telecom - IPTV and Optical Testing http://www.sunrisetelecom.com
Symmetricom http://www.symmetricom.com
Telchemy - Digital Video http://www.telchemy.com
Tektronix - IPTV and Other Test Equipment http://Tektronix.com
Video Clarity - Video Quality Testing http://www.videoclarity.com
Witbe, Inc. - Website Load Testing http://www.witbe.net
Index
Acceptance Testing, 11
Access Network (AN), 12, 20
Active Port, 89
Alarm Views, 90
Alpha Testing, 13-14
Audio Distortion, 50
Audio Quality, 24, 49, 64, 84, 86
Audio Visual Synchronization Offset, 73
Audiovisual Quality, 86
Beta Testing, 14
Bit Error Rate (BER), 5, 70, 87
Block Distortion (Blockiness), 54, 60
Blurring, 57, 59
Brightness, 53, 59, 84
Built-In Test Equipment (BITE), 88
Burst Errors, 1, 55
Capture, 24, 56
Certification, 13
Channel Change Delay, 78
Channel Change Time (CCT), 77-78
Channel Map, 76
Chrominance, 84
Clock Rate Jitter, 74
Color Pixelation, 60
Compression Ratio, 23, 27, 74
Constant Bit Rate (CBR), 5, 8
Content Acquisition, 19
Content Dependency Factors, 7
Content Protection (CP), 9
Continuity Count Error, 75
Contrast, 57, 59, 84
Conversion, 7-8, 16, 25, 56
Core Network (CN), 12, 19-20, 81
Customer Satisfaction, 5-6
Cyclic Redundancy Check Error (CRC Error), 76
Decoding, 16-17, 21, 26, 45, 50, 67
Decompression, 16-17, 27, 30, 41
Delay Factor (DF), 71, 81
Device Operating System, 91
Diagnostic Testing, 12
Digital Video Quality (DVQ), 53
Distortion Indicators, 53, 84
Dropped Frame, 36
Duplicate Packet Rate, 67
Encoder Initialization Time, 79
Encoding, 7, 22, 28, 32, 37
End To End Testing, 11
Entropy Analysis, 80
Error Concealment, 7, 10
Error Free Seconds (EFS), 70
Error Protection, 24, 50
Fault Analysis, 93
Fault Finder, 93
Fault Management, 92-93
Fault Predictions, 6, 93
Feature Function Testing, 11
Field Testing, 11, 13-14
Frame Count, 71, 87
Frame Dropping, 36
Full Reference Testing, 60-61
Functional Tests, 10
Gap Length, 68
Gap Loss, 68, 89
Gap Loss Rate, 68
Gap Video Mean Opinion Score (Gap MOS-V), 86
Headend, 12, 19-20, 81, 88
Heartbeat, 92
Hierarchical Monitoring, 90
Human Vision System (HVS), 80
Image Entropy, 77, 83
Impairment Emulator, 88
In Line Monitoring, 90
Inter-Packet Gap, 68-69
Interoperability Testing, 14
J.144, 60
Jerkiness, 53, 55
Jitter Discards, 74
Laboratory Testing (Lab Tests), 13, 60
Lightweight Calculations, 81
Line Rate, 70
Load Testing, 15
Loopback Testing, 12-13
Loss of Signal (LOS), 62, 69
Lossy Compression, 8, 16, 23, 27, 29, 57
Luminance (Luma), 25, 53, 84
Mean Opinion Score (MOS), 60, 85-86
Mean Opinion Score Audio (MOS-A), 86
Mean Opinion Score Audiovisual (MOS-AV), 86
Mean Opinion Score Video (MOS-V), 86
Mean Square Error (MSE), 49
Measurement Probe, 90
Media Compression, 8, 16, 40, 57
Media Delivery Index (MDI), 79, 81-82
Media Loss Rate (MLR), 71-72, 81
Media Player, 66
Metrics, 48, 60, 79-80, 83
Mirror Port, 89
Missing Channels, 77
Motion Judder, 37
Moving Picture Quality Metrics (MPQM), 79-83
MPEG Generator, 88
Multicast Join Time, 79
Multilayer Testing, 10-11
Network Impairments, 80, 83, 88
Network Measurements, 63, 81, 83
Network Monitoring System, 89
Network Probe, 77, 90
Network Utilization, 5-6
Objective Quality, 4, 48
Operating System (OS), 91
Operational Testing, 10
Out of Order Packets, 66-67, 89
Out of Sequence Packet Rate, 67
Packet Buffering, 65-66
Packet Corruption, 50-51, 53
Packet Delay, 65, 71, 80, 84
Packet Discard Rate (PDR), 65
Packet Gap, 68
Packet Identifier (PID), 45-46, 75
Packet Jitter, 62, 65, 91
Packet Loss Rate (PLR), 12, 63-64, 80
Packet Reception, 16-17
Packet Transmission, 1, 17, 43, 65-66, 72
Packetization, 16, 44
Peak Signal to Noise Ratio (PSNR), 49, 60-61, 84
Perceived Quality, 7
Perceptual Difference, 84
Perceptual Evaluation of Video Quality (PEVQ), 84-85
Performance Tests, 14
Premises Distribution Network (PDN), 20
Presentation, 45, 67, 73, 76, 90
Presentation Time Stamp Error (PTS Error), 76
Program Clock Reference (PCR), 68, 74, 76, 80-81, 83
Program Clock Reference Error (PCR Error), 76
Program Map Table (PMT), 75
Program Stream (PS), 43-45, 74
Program Stream Rate, 74
Program Transport Stream, 3, 45, 75, 88
Protocol Analyzer, 88
Protocol Conformance, 74
Quality Metrics, 48, 79-80, 83
Quality of Experience (QoE), 4-5
Quality Of Service (QoS), 4-5
Quality Score, 80-83
Quantization Noise, 56
Quantizer Scaling, 40-41
Rebuffer Events, 72
Rebuffering, 71-72
Reference Probe, 91
Reliability, 14
Remote network MONitoring (RMON), 92
Route Flapping, 65, 69, 89
Service Capacity, 15
Service Level Agreement (SLA), 5-6
Set Top Box Initialization Time, 79
Simple Network Management Protocol (SNMP), 91-92
Single Stimulus Continuous Quality Evaluation (SSCQE), 87
Slice Losses, 59
Stream Integrity, 71, 73
Stream Rate, 70, 73-74
Stress Tests, 15
Subjective Quality, 49
Switched Video Service (SVS), 1-2
Synchronization Loss, 75-76
Test Client, 91-92
Test Equipment, 11, 87-88
Test System, 80
Testing, 1-94
Testing Models (Test Model), 60
Transmission Rate, 27, 40, 49, 78
Transport Error, 76
Transport Stream Rate, 73
Transport Stream Synchronization (TS-Sync), 75-76
Transport Stream Synchronization Loss (TS-Sync Loss), 75-76
Variable Bit Rate (VBR), 8
Video Analyzer, 87
Video Compression, 27, 30, 32-33, 37
Video Distortion, 53-54
Video Encoder, 79
Video Factor (V-Factor), 74, 79, 82-83
Video Quality, 36, 53, 60, 80-82, 84, 86
Video Quality Measurement Full Reference, 60
Video Service Audio Quality (VSAQ), 84
Video Service Picture Quality (VSPQ), 84
Video Service Transmission Quality (VSTQ), 84
Viewing Device, 8, 12, 20
IPTV Service Quality explains how to identify, measure, and analyze the operation, performance and quality of IPTV systems and services.
If you need to understand how to monitor, test, and diagnose IPTV systems and services, this book is for you.
This Book Covers:
IPTV Testing Challenges
Testing Methods
Audio, Video, and MPEG Formats
Audio Quality
Video Quality
Network Measurements
Content Quality Measurements
Command and Control Metrics
MDI, MPQM, V-Factor Quality Rating
IPTV Test Equipment
This book explains how to monitor, test, and diagnose IPTV systems and services. Covered are the quantitative (packet loss, error rate) and qualitative (perceptual) quality measurement and control processes. Discover how quality of experience (QoE) can be very different from traditional quality of service (QoS) measurements.

IPTV systems are complex multimedia communication systems, and communicating through them involves multiple layers, which can make testing and diagnostics more difficult. Discover how different layers in an IPTV system can interact and why multilayer testing may be used to evaluate and diagnose operation and performance issues.

Learn about audio, video, and MPEG formats and which parts of them can be measured in IPTV systems. Audio quality characteristics including fidelity, frequency response, and signal to noise ratio are described. Key video quality characteristics such as error blocks, aliasing effects, object retention, and artifacts are explained.

Discover the different types of network measurements such as packet loss, gap loss, and jitter and how they influence quality of service (QoS). The types of content measurements such as frame loss rate (FLR), synchronization loss, and audiovisual synchronization offset are explained. Command and control measurements such as channel change time, encoder initialization time, and connect time are described.

The fundamentals of full reference, partial reference, and zero reference quality processes such as MDI, MPQM, and V-Factor are explained, along with other quality measures covering video, audio, synchronization, and interaction (control).

The different types of testing including laboratory, acceptance, conformance, field, and diagnostic testing are described, along with some of the common types of IPTV test equipment that are used.
You will learn about the different types of network monitoring devices and probes that are used in IPTV systems, what they can do, and how to understand and analyze the information they provide.
www.AlthosBooks.com
Althos Publishing