
UNIT - I

Lesson 1 Introduction to Multimedia


Contents

1.0 Aims and Objectives


1.1 Introduction
1.2 Elements of Multimedia System
1.3 Categories of Multimedia
1.4 Features of Multimedia
1.5 Applications of Multimedia
1.6 Convergence of Multimedia (Virtual Reality)
1.7 Stages of Multimedia Application Development
1.8 Let us sum up
1.9 Lesson-end activities
1.10 Model answers to “Check your progress”
1.11 References

1.0 Aims and Objectives

In this lesson we will learn the preliminary concepts of multimedia. We will discuss the
various benefits and applications of multimedia. After going through this chapter the
reader will be able to:
i) define multimedia
ii) list the elements of multimedia
iii) enumerate the different applications of multimedia
iv) describe the different stages of multimedia software development

1.1 Introduction

Multimedia has become an inevitable part of any presentation. It has found a variety of
applications, ranging from entertainment to education. The evolution of the internet has
also increased the demand for multimedia content.

Definition

Multimedia is the media that uses multiple forms of information content and
information processing (e.g. text, audio, graphics, animation, video, interactivity) to
inform or entertain the user. Multimedia also refers to the use of electronic media to store
and experience multimedia content. Multimedia is similar to traditional mixed media in
fine art, but with a broader scope. The term "rich media" is synonymous with interactive
multimedia.


1.2 Elements of Multimedia System


Multimedia means that computer information can be represented through audio,
graphics, images, video and animation in addition to traditional media (text and graphics).
Hypermedia can be considered one particular type of multimedia application.

Multimedia is a combination of content forms:


Audio


Video


1.3 Categories of Multimedia


Multimedia may be broadly divided into linear and non-linear categories. Linear
content progresses without any navigation control for the viewer, such as a cinema
presentation. Non-linear content offers user interactivity to control progress, as used in
a computer game or in self-paced computer-based training. Non-linear content is
also known as hypermedia content.

Multimedia presentations can be live or recorded. A recorded presentation may allow
interactivity via a navigation system. A live multimedia presentation may allow
interactivity via interaction with the presenter or performer.


1.4 Features of Multimedia


Multimedia presentations may be viewed in person on stage, projected, transmitted, or
played locally with a media player. A broadcast may be a live or recorded multimedia
presentation. Broadcasts and recordings can use either analog or digital electronic media
technology. Digital online multimedia may be downloaded or streamed. Streaming
multimedia may be live or on-demand.

Multimedia games and simulations may be used in a physical environment with special
effects, with multiple users in an online network, or locally with an offline computer,
game system, or simulator.

Enhanced levels of interactivity are made possible by combining multiple forms of
media content, though the degree of interactivity depends on the content involved.
Online multimedia is increasingly becoming object-oriented and data-driven, enabling
applications with collaborative end-user innovation and personalization on multiple
forms of content over time. Examples range from web sites such as photo galleries, with
both images (pictures) and titles (text) updated by users, to simulations whose
coefficients, events, illustrations, animations or videos are modifiable, allowing the
multimedia "experience" to be altered without reprogramming.

1.5 Applications of Multimedia

Multimedia finds its application in various areas including, but not limited to,
advertisements, art, education, entertainment, engineering, medicine, mathematics,
business, scientific research and spatial-temporal applications.

A few application areas of multimedia are listed below:

Creative industries

Creative industries use multimedia for a variety of purposes ranging from fine arts, to
entertainment, to commercial art, to journalism, to media and software services provided
for any of the industries listed below. An individual multimedia designer may cover the
spectrum throughout their career. Demand for their skills ranges from technical, to
analytical, to creative.

Commercial

Much of the electronic old and new media utilized by commercial artists is
multimedia. Exciting presentations are used to grab and keep attention in
advertising. Industrial, business-to-business, and interoffice communications are
often developed by creative services firms into advanced multimedia presentations
that go beyond simple slide shows to sell ideas or liven up training. Commercial
multimedia developers may be hired to design for governmental and nonprofit
services applications as well.


Entertainment and Fine Arts

In addition, multimedia is heavily used in the entertainment industry, especially to
develop special effects in movies and animations. Multimedia games are a popular
pastime and are software programs available either on CD-ROM or online. Some video
games also use multimedia features.

Multimedia applications that allow users to actively participate instead of just sitting by
as passive recipients of information are called interactive multimedia.

Education

In education, multimedia is used to produce computer-based training courses (popularly
called CBTs) and reference books such as encyclopaedias and almanacs. A CBT lets the
user go through a series of presentations, text about a particular topic, and associated
illustrations in various information formats. Edutainment is an informal term used to
describe combining education with entertainment, especially multimedia entertainment.

Engineering

Software engineers may use multimedia in computer simulations for anything from
entertainment to training, such as military or industrial training. Multimedia for software
interfaces is often developed as a collaboration between creative professionals and
software engineers.

Industry

In the industrial sector, multimedia is used as a way to help present information to
shareholders, superiors and coworkers. Multimedia is also helpful for providing
employee training, and for advertising and selling products all over the world via
virtually unlimited web-based technologies.

Mathematical and Scientific Research

In mathematical and scientific research, multimedia is mainly used for modeling and
simulation. For example, a scientist can look at a molecular model of a particular
substance and manipulate it to arrive at a new substance. Representative research can be
found in journals such as the Journal of Multimedia.

Medicine

In medicine, doctors can get trained by looking at a virtual surgery, or they can simulate
how the human body is affected by diseases spread by viruses and bacteria and then
develop techniques to prevent them.


Multimedia in Public Places

In hotels, railway stations, shopping malls, museums, and grocery stores, multimedia
will become available at stand-alone terminals or kiosks to provide information and
help. Such installations reduce demand on traditional information booths and personnel,
add value, and can work around the clock, even in the middle of the night, when live
help is off duty.

A menu screen from a supermarket kiosk may provide services ranging from meal
planning to coupons. Hotel kiosks list nearby restaurants, maps of the city, airline
schedules, and provide guest services such as automated checkout. Printers are often
attached so users can walk away with a printed copy of the information. Museum kiosks
are not only used to guide patrons through the exhibits, but, when installed at each
exhibit, provide great added depth, allowing visitors to browse through richly detailed
information specific to that display.

Check Your Progress 1


List five applications of multimedia
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

1.6 Convergence of Multimedia (Virtual Reality)

At the convergence of technology and creative invention in multimedia is virtual
reality, or VR. Goggles, helmets, special gloves, and bizarre human interfaces attempt to
place you “inside” a lifelike experience. Take a step forward, and the view gets closer;
turn your head, and the view rotates. Reach out and grab an object; your hand moves in
front of you. Maybe the object explodes in a 90-decibel crescendo as you wrap your
fingers around it. Or it slips out from your grip, falls to the floor, and hurriedly escapes
through a mouse hole at the bottom of the wall.

VR requires terrific computing horsepower to be realistic. In VR, your cyberspace is
made up of many thousands of geometric objects plotted in three-dimensional space:
the more objects and the more points that describe the objects, the higher the resolution
and the more realistic your view. As the user moves about, each motion or action
requires the computer to recalculate the position, angle, size, and shape of all the objects
that make up your view, and many thousands of computations must occur as fast as 30
times per second to seem smooth.


On the World Wide Web, standards for transmitting virtual reality worlds or
“scenes” in VRML (Virtual Reality Modeling Language) documents (with the file name
extension .wrl) have been developed.

Using high-speed dedicated computers, multi-million-dollar flight simulators built by
Singer, RediFusion, and others have led the way in commercial applications of VR.
Pilots of F-16s, Boeing 777s, and Rockwell space shuttles have made many dry runs
before doing the real thing. At the California Maritime Academy and other merchant
marine officer training schools, computer-controlled simulators teach the intricate loading
and unloading of oil tankers and container ships.

Specialized public game arcades have been built recently to offer VR combat and
flying experiences for a price. From Virtual World Entertainment in Walnut Creek,
California, and Chicago, for example, BattleTech is a ten-minute interactive video
encounter with hostile robots. You compete against others, perhaps your friends, who
share coaches in the same containment bay. The computer keeps score in a fast and
sweaty firefight. Similar “attractions” will bring VR to the public, particularly a youthful
public, with increasing presence during the 1990s.

The technology and methods for working with three-dimensional images and for
animating them are discussed in later lessons. VR is an extension of multimedia: it uses
the basic multimedia elements of imagery, sound, and animation. Because it requires
instrumented feedback from a wired-up person, VR is perhaps interactive multimedia at
its fullest extension.

1.7 Stages of Multimedia Application Development


A multimedia application is developed in stages, as is all other software. In multimedia
application development, a few stages have to be completed before other stages begin,
and some stages may be skipped or combined with other stages.

Following are the four basic stages of multimedia project development:

1. Planning and Costing : This is the first stage of multimedia application development
and it begins with an idea or need. The idea can be further refined by outlining
its messages and objectives. Before starting to develop the multimedia project, it
is necessary to plan what writing skills, graphic art, music, video and other
multimedia expertise will be required.
It is also necessary to estimate the time needed to prepare all elements of
multimedia and prepare a budget accordingly. After preparing a budget, a
prototype or proof of concept can be developed.
2. Designing and Producing : The next stage is to execute each of the planned
tasks and create a finished product.
3. Testing : Testing a project ensures the product is free from bugs. Apart from
bug elimination, another aspect of testing is to ensure that the multimedia
application meets the objectives of the project. It is also necessary to test whether the
multimedia project works properly on the intended delivery platforms and meets the
needs of the clients.
4. Delivering : The final stage of multimedia application development is to package the
project and deliver the completed project to the end user. This stage has several
steps such as implementation, maintenance, shipping and marketing the product.

1.8 Let us sum up

In this lesson we have discussed the following points


i) Multimedia is a woven combination of text, audio, video, images and
animation.
ii) Multimedia systems find a wide variety of applications in different areas
such as education, entertainment etc.
iii) The categories of multimedia are linear and non-linear.
iv) The stages for multimedia application development are Planning and
costing, designing and producing, testing and delivery.

1.9 Lesson-end Activities

i) Create the credits for an imaginary multimedia production. Include several outside
organizations responsible for tasks such as audio mixing, video production and
text-based dialogues.
ii) Review two educational CD-ROMs and enumerate their features.

1.10 Model answers to “Check your progress”

1. Your answers may include the following


i) Education
ii) Entertainment
iii) Medicine
iv) Engineering
v) Industry
vi) Creative industries
vii) Mathematical and scientific research
viii) Commercial

1.11 References
1. “Multimedia: Making It Work” by Tay Vaughan
2. “Multimedia in Practice – Technology and Applications” by Jeffcoat


Lesson 2 Text
Contents

2.0 Aims and Objectives


2.1 Introduction
2.2 Multimedia Building blocks
2.3 Text in multimedia
2.4 About fonts and typefaces
2.5 Computers and Text
2.6 Character set and alphabets
2.7 Font editing and design tools
2.8 Let us sum up
2.9 Lesson-end activities
2.10 Model answers to “Check your progress”
2.11 References

2.0 Aims and Objectives

In this lesson we will learn the different multimedia building blocks. Later we will
learn the significant features of text. At the end of the lesson you will be able to:
i) List the different multimedia building blocks
ii) Enumerate the importance of text
iii) List the features of different font editing and designing tools

2.1 Introduction

All multimedia content consists of text in some form. Even a menu text is accompanied
by a single action such as a mouse click, keystroke or finger pressed on the monitor (in
the case of a touch screen). The text in multimedia is used to communicate information
to the user. Proper use of text and words in a multimedia presentation will help the
content developer to communicate the idea and message to the user.

2.2 Multimedia Building Blocks

Any multimedia application consists of any or all of the following components:

1. Text : Text and symbols are very important for communication in any medium.
With the recent explosion of the Internet and the World Wide Web, text has become
more important than ever. The Web is built on HTML (HyperText Markup Language),
originally designed to display simple text documents on computer screens, with
occasional graphic images thrown in as illustrations.


2. Audio : Sound is perhaps the most important element of multimedia. It can provide
the listening pleasure of music, the startling accent of special effects or the ambience
of a mood-setting background.
3. Images : Images, whether analog or digital, play a vital role in multimedia. An image
is expressed in the form of a still picture, painting or a photograph taken through a
digital camera.
4. Animation : Animation is the rapid display of a sequence of images of 2-D
artwork or model positions in order to create an illusion of movement. It is an
optical illusion of motion due to the phenomenon of persistence of vision, and can
be created and demonstrated in a number of ways.
5. Video : Digital video has supplanted analog video as the method of choice for
making video for multimedia use. Video in multimedia is used to portray real-time
moving pictures in a multimedia project.

2.3 Text in Multimedia

Words and symbols in any form, spoken or written, are the most common system
of communication. They deliver the most widely understood meaning to the greatest
number of people.

Most academic text, such as journals and e-magazines, is available in web-browser-readable form.

2.4 About Fonts and Faces

A typeface is a family of graphic characters that usually includes many type sizes and
styles. A font is a collection of characters of a single size and style belonging to a
particular typeface family. Typical font styles are boldface and italic. Other style
attributes, such as underlining and outlining of characters, may be added at the user's
choice.

The size of a text is usually measured in points. One point is approximately 1/72 of an
inch, i.e. about 0.0139 inch. The size of a font does not exactly describe the height or
width of its characters. This is because the x-height (the height of the lowercase
character x) of two fonts may differ.

Typefaces can be described in many ways, but the most common characterization of a
typeface is serif and sans serif. The serif is the little decoration at the end of a letter
stroke. Times, Times New Roman and Bookman are some fonts which come under the
serif category. Arial, Optima and Verdana are some examples of sans serif fonts. Serif
fonts are generally used for the body of the text for better readability, and sans serif
fonts are generally used for headings. The following shows the letter F in a serif font
and in a sans serif font.


F (serif font)          F (sans serif font)

Selecting Text fonts

It is a very difficult process to choose the fonts to be used in a multimedia presentation.
Following are a few guidelines which help in choosing a font for a multimedia
presentation.

 Use as few typefaces as possible in a single presentation; the cluttered effect of
using many fonts on a single page is called ransom-note typography.
 For small type, it is advisable to use the most legible font.
 In large-size headlines, the kerning (spacing between the letters) can be adjusted.
 In text blocks, the leading for the most pleasing line spacing can be adjusted.
 Drop caps and initial caps can be used to accent the words.
 The different effects and colors of a font can be chosen in order to make the text
look distinct.
 Anti-aliasing can be used to make text look gentle and blended.
 For special attention to the text, the words can be wrapped onto a sphere or bent
like a wave.
 Meaningful words and phrases can be used for links and menu items.
 In the case of text links (anchors) on web pages, the messages can be accented.

The most important text in a web page, such as a menu, can be put in the top 320 pixels.

Check Your Progress 1


List a few fonts available in your computer.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


2.5 Computers and text:

Fonts :

PostScript fonts are a method of describing an image in terms of mathematical
constructs (Bezier curves), so they are used not only to describe the individual characters
of a font but also to describe illustrations and whole pages of text. Since PostScript
makes use of mathematical formulae, it can be easily scaled bigger or smaller.

Apple and Microsoft announced a joint effort to develop a better and faster outline font
methodology based on quadratic curves, called TrueType. In addition to printing smooth
characters on printers, TrueType can draw characters on low-resolution (72 dpi or 96 dpi)
monitors.
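The scalability of outline fonts comes from evaluating curve equations instead of storing fixed pixels. As a minimal sketch (plain Python, not tied to any particular font engine), a quadratic Bezier segment of the kind used in TrueType outlines can be evaluated and rescaled without losing quality:

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t (0 <= t <= 1).

    p0 and p2 are the on-curve end points, p1 is the off-curve control
    point; each is an (x, y) tuple."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample one curve segment of a glyph outline at 11 points.
outline = [quadratic_bezier((0, 0), (50, 100), (100, 0), t / 10) for t in range(11)]
print(outline[5])   # midpoint of the curve -> (50.0, 50.0)

# Doubling the control points doubles the outline with no loss of quality.
scaled = [quadratic_bezier((0, 0), (100, 200), (200, 0), t / 10) for t in range(11)]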

2.6 Character set and alphabets:

ASCII Character set

The American Standard Code for Information Interchange (ASCII) is the 7-bit
character coding system most commonly used by computer systems in the
United States and abroad. ASCII assigns a numeric value to 128 characters,
including both lower and uppercase letters, punctuation marks, Arabic numerals
and math symbols. 32 control characters are also included. These control
characters are used for device control messages, such as carriage return, line feed,
tab and form feed.

The Extended Character set

A byte, which consists of 8 bits, is the most commonly used building block
for computer processing. ASCII uses only 7 bits to code its 128 characters; the 8th
bit of the byte is unused. This extra bit allows another 128 characters to be
encoded before the byte is used up, and computer systems today use these extra
128 values for an extended character set. The extended character set is commonly
filled with ANSI (American National Standards Institute) standard characters,
including frequently used symbols.
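As a small illustration (a plain Python sketch; nothing here is specific to any particular authoring tool), the numeric codes behind the 7-bit ASCII set and the 8-bit extended range can be inspected directly:

# 7 bits give 2**7 = 128 ASCII values; the 8th bit doubles that to 256.
print(2 ** 7, 2 ** 8)        # 128 256

# ord() returns the code assigned to a character, chr() goes the other way.
print(ord('A'), ord('a'))    # 65 97
print(chr(65), chr(97))      # A a

# Codes 0-31 are the non-printing control characters (tab, line feed, etc.).
print(ord('\t'), ord('\n'))  # 9 10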

Unicode

Unicode makes use of a 16-bit architecture for multilingual text and character
encoding. Unicode uses about 65,000 characters from all known languages and
alphabets in the world.

Several languages share a set of symbols that have a historically related derivation;
the shared symbols of each language are unified into collections of
symbols (called scripts). A single script can work for tens or even hundreds of
languages.

Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell are participating in the
development of this standard, and Microsoft and Apple have incorporated Unicode
into their operating systems.

2.7 Font Editing and Design tools

There are several software packages that can be used to create customized fonts. These
tools help a multimedia developer to communicate his idea or the graphic feeling. Using
these packages, different typefaces can be created.

In some multimedia projects it may be required to create special characters. Using the
font editing tools it is possible to create special symbols and use them in the entire text.

Following is the list of software that can be used for editing and creating fonts:

 Fontographer
 Fontmonger
 Cool 3D text

Special font editing tools can be used to make your own type so you can
communicate an idea or graphic feeling exactly. With these tools professional
typographers create distinct text and display faces.

1. Fontographer:
It is a Macromedia product, a specialized graphics editor for both
Macintosh and Windows platforms. You can use it to create PostScript,
TrueType and bitmapped fonts for Macintosh and Windows.
2. Making Pretty Text:
To make your text look pretty you need a toolbox full of fonts and special
graphics applications that can stretch, shade, color and anti-alias your
words into real artwork. Pretty text can be found in bitmapped drawings
where characters have been tweaked, manipulated and blended into a
graphic image.
3. Hypermedia and Hypertext:
Multimedia – the combination of text, graphic, and audio elements into a
single collection or presentation – becomes interactive multimedia when
you give the user some control over what information is viewed and when
it is viewed.


When a hypermedia project includes large amounts of text or symbolic
content, this content can be indexed and its elements then linked together to
afford rapid electronic retrieval of the associated information.
When text is stored in a computer instead of on printed pages, the
computer's powerful processing capabilities can be applied to make the
text more accessible and meaningful. Such text is called hypertext.
4. Hypermedia Structures:
Two buzzwords used often in hypertext are link and node. Links are
connections between the conceptual elements, that is, the nodes, which may
consist of text, graphics, sounds or related information in the knowledge
base (a small sketch of this structure is given after this list).
5. Searching for Words:
Following are typical methods for word searching in hypermedia
systems: categories, word relationships, adjacency, alternates,
association, negation, truncation, intermediate words, frequency.
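To make the link and node idea concrete, here is a minimal sketch (plain Python dictionaries and tuples, not the data model of any particular authoring tool) of a tiny hypertext knowledge base in which nodes hold content and links connect them:

# Each node holds a piece of content; links connect node identifiers.
nodes = {
    "intro": {"type": "text",  "content": "Multimedia combines text, audio and video."},
    "audio": {"type": "text",  "content": "Sound adds ambience and accent."},
    "logo":  {"type": "image", "content": "compass_logo.gif"},  # hypothetical file name
}

links = [
    ("intro", "audio"),   # from the introduction, jump to the audio node
    ("intro", "logo"),
]

def neighbours(node_id):
    """Return the nodes directly reachable from node_id."""
    return [dst for src, dst in links if src == node_id]

print(neighbours("intro"))   # ['audio', 'logo']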

Check Your Progress 2


List a few font editing tools.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

2.8 Let us sum up.

In this lesson we have learnt the following


i) The multimedia building blocks such as text, audio, video, images,
animation
ii) The importance of text in multimedia
iii) The difference between fonts and typefaces
iv) Character sets used in computers and their significance
v) The font editing software which can be used for creating new fonts and the
features of such software.

2.9 Lesson-end activities

Create a new document in a word processor. Type a line of text in the word
processor and copy it five times and change each line into a different font. Finally
change the size of each line to 10 pt, 12pt, 14pt etc. Now distinguish each font
family and the typeface used in each font.


2.10 Model answers to “Check your progress”

1. Your answers may include the following


Arial
Times New Roman
Garamond
Script
Courier
Georgia
Book Antiqua
Century Gothic

2. Your answer may include the following


a) Fontmonger
b) Cool 3D text

2.11 References

1. "Multimedia:Concepts and Practice" By Stephen McGloughlin


2. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
3. “Multimedia Making it work” By Tay Vaughan
4. “Multimedia in Practice – Technology and applications” By Jeffcoat


Lesson 3 Audio
Contents

3.0 Aims and Objectives


3.1 Introduction
3.2 Power of Sound
3.3 Multimedia Sound Systems
3.4 Digital Audio
3.5 Editing Digital Recordings
3.6 Making MIDI Audio
3.7 Audio File Formats
3.8 Red Book Standard
3.9 Software used for Audio
3.10 Let us sum up
3.11 Lesson-end activities
3.12 Model answers to “Check your progress”
3.13 References

3.0 Aims and Objectives

In this lesson we will learn the basics of audio. We will learn how digital audio is
prepared and embedded in a multimedia system.

At the end of the chapter the learner will be able to:

i) Distinguish between audio and sound
ii) Prepare the audio required for a multimedia system
iii) List the different audio editing software packages
iv) List the different audio file formats

3.1 Introduction

Sound is perhaps the most important element of multimedia. It is meaningful “speech”
in any language, from a whisper to a scream. It can provide the listening pleasure of
music, the startling accent of special effects or the ambience of a mood-setting
background. Sound is the terminology used for the analog form, and the digitized form
of sound is called audio.


3.2 Power of Sound


When something vibrates in the air by moving back and forth, it creates waves of
pressure. These waves spread like the ripples from a pebble tossed into a still pool, and
when they reach the eardrums, the change of pressure, or vibration, is experienced as
sound.

Acoustics is the branch of physics that studies sound. Sound pressure levels are
measured in decibels (dB); a decibel measurement is actually the ratio between a chosen
reference point on a logarithmic scale and the level that is actually experienced.

3.3 Multimedia Sound Systems

The multimedia application user can use sound right off the bat on both the Macintosh
and on a multimedia PC running Windows because beeps and warning sounds are
available as soon as the operating system is installed. On the Macintosh you can choose
one of several sounds for the system alert. In Windows, system sounds are WAV files
and they reside in the windows\Media subdirectory.

There are still more choices of audio if Microsoft Office is installed. Windows makes use
of WAV files as the default file format for audio, and Macintosh systems use SND as the
default file format for audio.

3.4 Digital Audio


Digital audio is created when a sound wave is converted into numbers – a process
referred to as digitizing. It is possible to digitize sound from a microphone, a synthesizer,
existing tape recordings, live radio and television broadcasts, and popular CDs. You can
digitize sound from a natural source or from a prerecorded one.

Digitized sound is sampled sound. Every nth fraction of a second, a sample of sound is
taken and stored as digital information in bits and bytes. The quality of this digital
recording depends upon how often the samples are taken.

3.4.1 Preparing Digital Audio Files

Preparing digital audio files is fairly straightforward. If you have analog source
materials – music or sound effects that you have recorded on analog media such as
cassette tapes:

 The first step is to digitize the analog material by recording it onto computer-
readable digital media.
 It is necessary to focus on two crucial aspects of preparing digital audio files:
o Balancing the need for sound quality against your available RAM and
hard disk resources.
o Setting proper recording levels to get a good, clean recording.


Remember that the sampling rate determines the frequency at which samples will
be drawn for the recording. Sampling at higher rates more accurately captures the high
frequency content of your sound. Audio resolution determines the accuracy with which a
sound can be digitized.

Formula for determining the size of the digital audio

Monophonic = Sampling rate * duration of recording in seconds * (bit resolution / 8) * 1

Stereo = Sampling rate * duration of recording in seconds * (bit resolution / 8) * 2

 The sampling rate is how often the samples are taken.
 The sample size is the amount of information stored per sample. This is called the bit
resolution.
 The number of channels is 2 for stereo and 1 for monophonic.
 The time span of the recording is measured in seconds.
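A minimal sketch of the formula above in plain Python (the durations and rates used below are illustrative values, not requirements from the lesson):

def audio_file_size(sampling_rate, seconds, bit_resolution, channels):
    """Size in bytes of an uncompressed recording.

    channels is 1 for monophonic and 2 for stereo."""
    return sampling_rate * seconds * (bit_resolution / 8) * channels

# One minute of 16-bit stereo sampled at 44.1 kHz:
size = audio_file_size(44100, 60, 16, 2)
print(round(size / (1024 * 1024), 2), "MB")   # about 10.09 MB

# The same minute as 8-bit mono sampled at 11.025 kHz is far smaller:
print(round(audio_file_size(11025, 60, 8, 1) / 1024), "KB")   # about 646 KB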

3.5 Editing Digital Recordings

Once a recording has been made, it will almost certainly need to be edited. The basic
sound editing operations that most multimedia producers need are described in the
paragraphs that follow.

1. Multiple Tracks: Being able to edit and combine multiple tracks and then merge the
tracks and export them in a final mix to a single audio file.
2. Trimming: Removing dead air or blank space from the front of a recording and any
unnecessary extra time off the end is your first sound editing task.
3. Splicing and Assembly: Using the same tools mentioned for trimming, you will
probably want to remove the extraneous noises that inevitably creep into a
recording.
4. Volume Adjustments: If you are trying to assemble ten different recordings into
a single track, there is little chance that all the segments will have the same volume.
5. Format Conversion: In some cases your digital audio editing software might
read a format different from that read by your presentation or authoring program.
6. Resampling or downsampling: If you have recorded and edited your sounds at
16-bit sampling rates but are using lower rates in your project, you must resample or
downsample the file.
7. Equalization: Some programs offer digital equalization capabilities that allow
you to modify a recording's frequency content so that it sounds brighter or darker.
8. Digital Signal Processing: Some programs allow you to process the signal with
reverberation, multitap delay, and other special effects using DSP routines.


9. Reversing Sounds: Another simple manipulation is to reverse all or a portion of a
digital audio recording. Sounds can produce a surreal, otherworldly effect when
played backward.
10. Time Stretching: Advanced programs let you alter the length of a sound file
without changing its pitch. This feature can be very useful, but watch out: most
time-stretching algorithms will severely degrade the audio quality.

Check Your Progress 1


List a few audio editing features
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

3.6 Making MIDI Audio

MIDI (Musical Instrument Digital Interface) is a communication standard developed
for electronic musical instruments and computers. MIDI files allow music and sound
synthesizers from different manufacturers to communicate with each other by sending
messages along cables connected to the devices.

Creating your own original score can be one of the most creative and rewarding
aspects of building a multimedia project, and MIDI (Musical Instrument Digital
Interface) is the quickest, easiest and most flexible tool for this task.

The process of creating MIDI music is quite different from digitizing existing
audio. To make MIDI scores, however you will need sequencer software and a sound
synthesizer.

A MIDI keyboard is also useful for simplifying the creation of musical scores. An
advantage of structured data such as MIDI is the ease with which the music director can
edit the data.

A MIDI file format is used in the following circumstances:

 When digital audio will not work because of memory constraints or because more
processing power would be required
 When there is a high-quality MIDI source
 When there is no requirement for dialogue.


A digital audio file format is preferred in the following circumstances:

 When there is no control over the playback hardware


 When the computing resources and the bandwidth requirements are high.
 When dialogue is required.

3.7 Audio File Formats

A file format determines the application that is to be used for opening a file.
Following is the list of different file formats and the software that can be used for
opening a specific file.

1. *.AIF, *.SDII in Macintosh Systems


2. *.SND for Macintosh Systems
3. *.WAV for Windows Systems
4. MIDI files – used by both Macintosh and Windows
5. *.WMA –windows media player
6. *.MP3 – MP3 audio
7. *.RA – Real Player
8. *.VOC – VOC Sound
9. AIFF sound format for Macintosh sound files
10. *.OGG – Ogg Vorbis

3.8 Red Book Standard

The method for digitally encoding the high-quality stereo of the consumer CD music
market is an industry standard, ISO 10149. This is also called the RED BOOK
standard.

The developers of this standard claim that the digital audio sample size and sample rate
of Red Book audio allow accurate reproduction of all sounds that humans can hear. The
Red Book standard recommends audio recorded at a sample size of 16 bits and a
sampling rate of 44.1 kHz.
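Plugging the Red Book values into the file-size formula given earlier in this lesson gives a rough sense of why CD-quality audio is bulky (a plain Python sketch; real discs also carry error-correction overhead):

# Red Book audio: 44,100 samples per second, 16 bits (2 bytes) per sample, 2 channels.
bytes_per_second = 44100 * 2 * 2
print(bytes_per_second)                        # 176400 bytes every second
print(round(bytes_per_second * 60 / 1e6, 1))   # roughly 10.6 MB per minute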

Check Your Progress 2


Write the specifications used in red book standard
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


3.9 Software used for Audio

Software such as Toast and CD-Creator from Adaptec can translate the digital files of
Red Book audio format on consumer compact discs directly into a digital sound editing
file, or decompress MP3 files into CD-Audio. There are several tools available for
recording audio. Following is a list of different software that can be used for recording
and editing audio:

 Sound Recorder from Microsoft
 Apple's QuickTime Player Pro
 Sonic Foundry's Sound Forge for Windows
 SoundEdit 16

3.10 Let us sum up


Following points have been discussed in this lesson:

 Audio is an important component of multimedia which can be used to provide


liveliness to a multimedia presentation.
 The red book standard recommends audio recorded at a sample size of 16 bits and
sampling rate of 44.1 KHz.
 MIDI is Musical Instrument Digital Interface.
 MIDI is a communication standard developed for electronic musical instruments
and computers.
 To make MIDI scores, however you will need sequencer software and a sound
synthesizer

3.11 Lesson-end activities


Record an audio clip using sound recorder in Microsoft Windows for 1 minute.
Note down the size of the file. Using any audio compression software convert the
recorded file to MP3 format and compare the size of the audio.

3.12 Model answers to “Check your progress”

1. Audio editing includes the following:


 Multiple Tracks
 Trimming
 Splicing and Assembly
 Volume Adjustments
 Format Conversion
 Resampling or downsampling
 Equalization


 Digital Signal Processing


 Reversing Sounds
 Time Stretching

2. The Red Book standard recommends audio recorded at a sample size of 16 bits
and a sampling rate of 44.1 kHz. The recording is done with 2 channels (stereo
mode).

3.13 References

1. “Multimedia: Making It Work” by Tay Vaughan
2. “Multimedia: Computing, Communications and Applications” by Steinmetz and
Klara Nahrstedt


Lesson 4 Images
Contents

4.0 Aims and Objectives


4.1 Introduction
4.2 Digital image
4.3 Bitmaps
4.4 Making still images
4.4.1 Bitmap software
4.4.2 Capturing and editing images
4.5 Vectored drawing
4.6 Color
4.7 Image file formats
4.8 Let us sum up
4.9 Lesson-end activities
4.10 Model answers to “Check your progress”
4.11 References

4.0 Aims and Objectives

In this lesson we will learn how images are captured and incorporated into a
multimedia presentation. Different image file formats and the different color
representations are discussed in this lesson.
At the end of this lesson the learner will be able to
i) Create his own images
ii) Describe the use of colors and palettes in multimedia
iii) Describe the capabilities and limitations of vector images
iv) Use clip art in multimedia presentations

4.1 Introduction

Still images are an important element of a multimedia project or a web site. In order to
make a multimedia presentation look elegant and complete, it is necessary to spend an
ample amount of time designing the graphics and the layouts. Competent, computer-
literate skills in graphic art and design are vital to the success of a multimedia project.

4.2 Digital Image

A digital image is represented by a matrix of numeric values, each representing a
quantized intensity value. When I is a two-dimensional matrix, then I(r,c) is the intensity
value at the position corresponding to row r and column c of the matrix.


The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray scale levels
(we encode here the “color” of the image). The intensity at each pixel is represented by
an integer and is determined from the continuous image by averaging over a small
neighborhood around the pixel location. If there are just two intensity values, for
example black and white, they are represented by the numbers 0 and 1; such images are
called binary-valued images. If 8-bit integers are used to store each pixel value, the gray
levels range from 0 (black) to 255 (white).
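As a small sketch (NumPy is used here only for convenience; any two-dimensional array behaves the same way), an 8-bit grayscale image is exactly such a matrix, and I[r, c] reads one pixel's intensity:

import numpy as np

# A tiny 3x4 8-bit grayscale image: 0 is black, 255 is white.
I = np.array([[  0,  64, 128, 255],
              [ 32,  96, 160, 224],
              [  0, 255,   0, 255]], dtype=np.uint8)

print(I[0, 3])    # intensity at row 0, column 3 -> 255 (white)
print(I.shape)    # (3, 4): 3 rows, 4 columns

# Thresholding produces a binary-valued image of 0s and 1s.
binary = (I > 127).astype(np.uint8)
print(binary)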

4.2.1 Digital Image Format

There are different kinds of image formats in the literature. We shall consider the
image format that comes out of an image frame grabber, i.e., the captured image format,
and the format when images are stored, i.e., the stored image format.

Captured Image Format

The image format is specified by two main parameters: spatial resolution, which
is specified as pixels x pixels (e.g. 640x480), and color encoding, which is specified
in bits per pixel. Both parameter values depend on the hardware and software used
for input/output of images.

Stored Image Format

When we store an image, we are storing a two-dimensional array of values, in which
each value represents the data associated with a pixel in the image. For a bitmap, this
value is a binary digit.

4.3 Bitmaps

A bitmap is a simple information matrix describing the individual dots that are the
smallest elements of resolution on a computer screen or other display or printing device.
A one-dimensional matrix is required for monochrome (black and white); greater depth
(more bits of information) is required to describe the more than 16 million colors the
picture elements may have, as illustrated in the following figure. The state of all the
pixels on a computer screen makes up the image seen by the viewer, whether in
combinations of black and white or colored pixels in a line of text, a photograph-like
picture, or a simple background pattern.


1-bit bitmap: 2 colors     4-bit bitmap: 16 colors     8-bit bitmap: 256 colors
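The arithmetic behind that figure is simple: n bits per pixel give 2^n possible colors, and the uncompressed size is width x height x bits per pixel. A minimal sketch in plain Python (the 640x480 resolution is just the illustrative value used earlier in this lesson):

def bitmap_info(width, height, bits_per_pixel):
    """Number of distinct colors and uncompressed size of a bitmap."""
    colors = 2 ** bits_per_pixel
    size_bytes = width * height * bits_per_pixel // 8
    return colors, size_bytes

for depth in (1, 4, 8, 24):
    colors, size = bitmap_info(640, 480, depth)
    print(f"{depth}-bit: {colors} colors, {size / 1024:.0f} KB uncompressed")

# 1-bit: 2 colors, 38 KB uncompressed
# 4-bit: 16 colors, 150 KB uncompressed
# 8-bit: 256 colors, 300 KB uncompressed
# 24-bit: 16777216 colors, 900 KB uncompressed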

Where do bitmaps come from? How are they made?

 Make a bitmap from scratch with paint or drawing program.


 Grab a bitmap from an active computer screen with a screen capture program, and
then paste into a paint program or your application.
 Capture a bitmap from a photo, artwork, or a television image using a scanner or
video capture device that digitizes the image.

Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many
creative ways.

Clip Art

A clip art collection may contain a random assortment of images, or it may contain a
series of graphics, photographs, sound, and video related to a single topic. For
example, Corel, Micrografx, and Fractal Design bundle extensive clip art collections
with their image-editing software.

Multiple Monitors

When developing multimedia, it is helpful to have more than one monitor, or a single
high-resolution monitor with lots of screen real estate, hooked up to your computer.
In this way, you can display the full-screen working area of your project or
presentation and still have space to put your tools and other menus. This is
particularly important in an authoring system such as Macromedia Director, where the
edits and changes you make in one window are immediately visible in the presentation
window, provided the presentation window is not obscured by your editing tools.

Check Your Progress 1


List a few software that can be used for creating images.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………


4.4 Making Still Images


Still images may be small or large, or even full screen. Whatever their form, still images
are generated by the computer in two ways: as bitmaps (or paint graphics) and as
vector-drawn (or just plain drawn) graphics.

Bitmaps are used for photo-realistic images and for complex drawings requiring fine
detail. Vector-drawn objects are used for lines, boxes, circles, polygons, and other
graphic shapes that can be mathematically expressed in angles, coordinates, and
distances. A drawn object can be filled with color and patterns, and you can select it as a
single object. Typically, image files are compressed to save memory and disk space;
many image formats already use compression within the file itself – for example, GIF,
JPEG, and PNG.

Still images may be the most important element of your multimedia project. If you are
designing multimedia by yourself, put yourself in the role of graphic artist and layout
designer.

4.4.1 Bitmap Software

The abilities and features of image-editing programs for both the Macintosh and
Windows range from simple to complex. The Macintosh does not ship with a painting
tool, and Windows provides only the rudimentary Paint (see the following figure), so you
will need to acquire this very important software separately – often bitmap editing or
painting programs come as part of a bundle when you purchase your computer,
monitor, or scanner.

Figure: The Windows Paint accessory provides rudimentary bitmap editing


4.4.2 Capturing and Editing Images

The image that is seen on a computer monitor is a digital bitmap stored in video
memory, updated about every 1/60 of a second or faster, depending upon the monitor's
scan rate. When images are assembled for a multimedia project, it may often be
necessary to capture and store an image directly from the screen. It is possible to use the
Prt Scr key available on the keyboard to capture an image.

Scanning Images

After scanning through countless clip art collections, you may still not find the unusual
background you want for a screen about gardening. Sometimes when you search for
something too hard, you don't realize that it's right in front of your face. Open a scan in
an image-editing program and experiment with different filters, the contrast, and various
special effects. Be creative, and don't be afraid to try strange combinations – sometimes
mistakes yield the most intriguing results.

4.5 Vector Drawing

Most multimedia authoring systems provide for use of vector-drawn objects such
as lines, rectangles, ovals, polygons, and text.

Computer-aided design (CAD) programs have traditionally used vector-drawn object
systems for creating the highly complex and geometric renderings needed by architects
and engineers.

Graphic artists designing for print media use vector-drawn objects because the
same mathematics that put a rectangle on your screen can also place that rectangle on
paper without jaggies. This requires the higher resolution of the printer, using a page
description language such as PostScript.

Programs for 3-D animation also use vector-drawn graphics; for example, the various
changes of position, rotation, and shading of light required to spin an extruded object
are computed mathematically.

How Vector Drawing Works

Vector-drawn objects are described and drawn to the computer screen using a fraction
of the memory space required to describe and store the same object in bitmap form. A
vector is a line that is described by the location of its two endpoints. A simple
rectangle, for example, might be defined as follows:

RECT 0,0,200,200
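A rough sketch of why that description is so compact (plain Python; the byte counts are illustrative, not those of any real file format): the vector form stores only the rectangle's corner coordinates, while a bitmap of the same rectangle stores every pixel.

# Vector description: just the corner coordinates of the rectangle.
vector_rect = "RECT 0,0,200,200"
print(len(vector_rect), "bytes as a vector description")   # 16 bytes

# Bitmap description: one value per pixel, even at only 1 bit per pixel.
width, height = 200, 200
bitmap_bytes = width * height // 8
print(bitmap_bytes, "bytes as a 1-bit bitmap")              # 5000 bytes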


4.6 Color
Color is a vital component of multimedia. Management of color is both a subjective and
a technical exercise. Picking the right colors and combinations of colors for your project
can involve many tries until you feel the result is right.

Understanding Natural Light and Color

The letters of the mnemonic ROY G. BIV, learned by many of us to remember the
colors of the rainbow, are the ascending frequencies of the visible light spectrum: red,
orange, yellow, green, blue, indigo, and violet. Ultraviolet light, on the other hand, is
beyond the higher end of the visible spectrum and can be damaging to humans.

The color white is a noisy mixture of all the color frequencies in the visible spectrum.
The cornea of the eye acts as a lens to focus light rays onto the retina. The light rays
stimulate many thousands of specialized nerves called rods and cones that cover the
surface of the retina. The eye can differentiate among millions of colors, or hues,
consisting of combinations of red, green, and blue.

Additive Color

In the additive color model, a color is created by combining colored light sources in
three primary colors: red, green and blue (RGB). This is the process used for a TV or
computer monitor.

Subtractive Color

In the subtractive color method, a new color is created by combining colored media such
as paints or inks that absorb (or subtract) some parts of the color spectrum of light and
reflect the others back to the eye. Subtractive color is the process used to create color
in printing. The printed page is made up of tiny halftone dots of three primary colors:
cyan, magenta and yellow (CMY).
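A small sketch of how the two models relate (a simplified conversion in plain Python, ignoring real-world ink behaviour and the black (K) channel used in CMYK printing):

def rgb_to_cmy(r, g, b):
    """Convert an additive RGB color (0-255 per channel) to subtractive CMY."""
    return (255 - r, 255 - g, 255 - b)

print(rgb_to_cmy(255, 0, 0))      # pure red -> (0, 255, 255)
print(rgb_to_cmy(255, 255, 255))  # white    -> (0, 0, 0): no ink on the page
print(rgb_to_cmy(0, 0, 0))        # black    -> (255, 255, 255): full ink coverage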

Check Your Progress 2


Distinguish additive and subtractive colors and write their area of use.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


4.7 Image File Formats


There are many file formats used to store bitmaps and vectored drawing. Following is a
list of few image file formats.

Format                          Extension
Microsoft Windows DIB           .bmp .dib .rle
Microsoft Palette               .pal
AutoCAD format 2D               .dxf
JPEG                            .jpg
Windows Metafile                .wmf
Portable Network Graphics       .png
CompuServe GIF                  .gif
Apple Macintosh                 .pict .pic .pct

4.8 Let us sum up

In this lesson the following points have been discussed.

 Competent, computer literate skills in graphic art and design are vital to the
success of a multimedia project.
 A digital image is represented by a matrix of numeric values each representing
a quantized intensity value.
 A bitmap is a simple information matrix describing the individual dots that are
the smallest elements of resolution on a computer screen or other display or
printing device
 In the additive color model, a color is created by combining colored light sources
in three primary colors: red, green and blue (RGB).
 Subtractive colors are used in printers and additive color concepts are used in
monitors and television.

4.9 Lesson-end activities

1. Discuss the difference between bitmap and vector graphics.


2. Open an image in an image editing program capable of identifying colors.
Select three different pixels in the image. Sample the color and write
down its values in RGB, HSB, CMYK and hexadecimal color.

4.10 Model answers to “Check your progress”

1. Software used for creating images


Coreldraw
MSPaint
Autocad


2. In the additive color model, a color is created by combining colored light sources
in three primary colors: red, green and blue (RGB). This is the process used for a
TV or computer monitor. In the subtractive color method, a new color is created by
combining inks in cyan, magenta and yellow (CMY) that absorb (subtract) parts of
the light spectrum. Subtractive color is the process used to create color in printing.

4.11 References

1. “Multimedia: Making It Work” by Tay Vaughan
2. “Multimedia in Practice – Technology and Applications” by Jeffcoat


Lesson 5 Animation and Video


Contents

5.0 Aims and Objectives


5.1 Introduction
5.2 Principles of Animation
5.3 Animation Techniques
5.4 Animation File formats
5.5 Video
5.6 Broadcast video Standard
5.7 Shooting and editing video
5.8 Video Compression
5.9 Let us sum up
5.10 Lesson-end activities
5.11 Model answers to “Check your progress”
5.12 References

5.0 Aims and Objectives

In this lesson we will learn the basics of animation and video. At the end of this
lesson the learner will be able to
i) List the different animation techniques.
ii) Enumerate the software used for animation.
iii) List the different broadcasting standards.
iv) Describe the basics of video recording and how they relate to multimedia
production.
v) Have knowledge of different video formats.

5.1 Introduction

Animation makes static presentations come alive. It is visual change over time
and can add great power to our multimedia projects. Carefully planned, well-executed
video clips can make a dramatic difference in a multimedia project. Animation is created
from drawn pictures and video is created using real time visuals.

5.2 Principles of Animation

Animation is the rapid display of a sequence of images of 2-D artwork or model
positions in order to create an illusion of movement. It is an optical illusion of motion
due to the phenomenon of persistence of vision, and can be created and demonstrated in
a number of ways. The most common method of presenting animation is as a motion
picture or video program, although several other forms of presenting animation also
exist.


Animation is possible because of a biological phenomenon known as persistence of
vision and a psychological phenomenon called phi. An object seen by the human eye
remains chemically mapped on the eye's retina for a brief time after viewing. Combined
with the human mind's need to conceptually complete a perceived action, this makes it
possible for a series of images that are changed very slightly and very rapidly, one after
the other, to seemingly blend together into a visual illusion of movement. The following
figure shows a few cels, or frames, of a rotating logo. When the images are progressively
and rapidly changed, the arrow of the compass is perceived to be spinning.

Television video builds entire frames or pictures every second; the speed with which each
frame is replaced by the next one makes the images appear to blend smoothly into
movement. To make an object travel across the screen while it changes its shape, just
change the shape and also move or translate it a few pixels for each frame.
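As a minimal sketch of that idea (plain Python, no animation library; the frame rate and step size are arbitrary illustrative values), each frame simply records the object a few pixels further along:

frames = []
x, y = 0, 100     # starting position of the object
dx = 5            # translate the object 5 pixels per frame

for frame_number in range(30):          # 30 frames = one second at 30 fps
    # In a real program the object would be redrawn here; this sketch
    # only records where it would be drawn on each frame.
    frames.append((frame_number, x, y))
    x += dx

print(frames[0])    # (0, 0, 100)
print(frames[-1])   # (29, 145, 100) - the object has moved 145 pixels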

5.3 Animation Techniques

When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is
complicated, you may wish to create a written script with a list of activities and required
objects. Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when
you are experimenting and testing. Finally, post-process your animation, doing any
special rendering and adding sound effects.

5.3.1 Cel Animation

The term cel derives from the clear celluloid sheets that were used for drawing
each frame, which have been replaced today by acetate or plastic. Cels of famous
animated cartoons have become sought-after, suitable-for-framing collector’s
items.

Cel animation artwork begins with keyframes (the first and last frame of an
action). For example, when an animated figure of a man walks across the screen,
he balances the weight of his entire body on one foot and then the other in a series
of falls and recoveries, with the opposite foot and leg catching up to support the
body.

 The animation techniques made famous by Disney use a series of progressively
different graphics on each frame of movie film, which plays at 24 frames per second.


 A minute of animation may thus require as many as 1,440 separate frames.


 The term cel derives from the clear celluloid sheets that were used for drawing
each frame, which have been replaced today by acetate or plastic.
 Cel animation artwork begins with keyframes.

5.3.2 Computer Animation

Computer animation programs typically employ the same logic and procedural
concepts as cel animation, using layer, keyframe, and tweening techniques, and
even borrowing from the vocabulary of classic animators. On the computer, paint
is most often filled or drawn with tools using features such as gradients and anti-
aliasing. The word inks, in computer animation terminology, usually means
special methods for computing RGB pixel values, providing edge detection, and
layering so that images can blend or otherwise mix their colors to produce special
transparencies, inversions, and effects.

 Computer animation uses the same logic and procedural concepts as cel animation
and borrows the vocabulary of classic cel animation – terms such as layer, keyframe,
and tweening.
 The primary difference between animation software programs is in how much must
be drawn by the animator and how much is automatically generated by the software.
 In 2D animation the animator creates an object and describes a path for the
object to follow. The software takes over, actually creating the animation on
the fly as the program is being viewed by your user.
 In 3D animation the animator puts his effort in creating the models of
individual and designing the characteristic of their shapes and surfaces.
 Paint is most often filled or drawn with tools using features such as gradients
and anti- aliasing.
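
The idea of tweening between keyframes can be illustrated with a short sketch. The
following Python fragment is purely illustrative and not tied to any particular animation
package; the coordinates and step count are made-up example values. It linearly
interpolates the position of an object between two keyframes, producing the in-between
frames that animation software would otherwise generate automatically:

    # Minimal tweening sketch: linear interpolation between two keyframes.
    # The keyframe positions and the step count are example figures only.
    def tween(start, end, frames):
        """Return the in-between positions from 'start' to 'end' over 'frames' steps."""
        (x0, y0), (x1, y1) = start, end
        positions = []
        for f in range(frames + 1):
            t = f / frames                  # 0.0 at the first keyframe, 1.0 at the last
            positions.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
        return positions

    # Interpolating over 24 steps corresponds to one second of film at 24 frames per second.
    for frame, pos in enumerate(tween((0, 0), (320, 240), 24)):
        print(frame, pos)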

5.3.3 Kinematics

 Kinematics is the study of the movement and motion of structures that have
joints, such as a walking man.
 Inverse kinematics, available in high-end 3D programs, is the process by which
you link objects such as hands to arms and define their relationships and
limits.
 Once those relationships are set, you can drag these parts around and let the
computer calculate the result (a simple forward-kinematics sketch follows).
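
As a small illustration of the underlying idea (not of any particular 3D package), the
following Python sketch computes the joint positions of a two-segment "arm" from its
joint angles. This is the forward-kinematics calculation such programs perform, while
inverse kinematics works the same problem backwards from a desired hand position. The
segment lengths and angles are arbitrary example values.

    # Forward kinematics for a simple two-segment limb (example figures only).
    import math

    def forward_kinematics(len1, len2, angle1, angle2):
        """Return shoulder, elbow and hand positions for the given joint angles (radians)."""
        shoulder = (0.0, 0.0)
        elbow = (len1 * math.cos(angle1), len1 * math.sin(angle1))
        hand = (elbow[0] + len2 * math.cos(angle1 + angle2),
                elbow[1] + len2 * math.sin(angle1 + angle2))
        return shoulder, elbow, hand

    print(forward_kinematics(10.0, 8.0, math.radians(45), math.radians(30)))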

5.3.4 Morphing

 Morphing is a popular effect in which one image transforms into another.
Morphing applications and other modeling tools that offer this effect can
perform transitions not only between still images but often between moving
images as well.


 The morphed images were built at a rate of 8 frames per second, with each
transition taking a total of 4 seconds, that is, 32 frames in all (a minimal
cross-dissolve sketch follows this list).
 Some products that offer morphing features are as follows:
o Black Belt’s Easy Morph and WinImages
o Human Software’s Squizz
o Valis Group’s Flo, MetaFlo, and MovieFlo.
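
The simplest possible form of this effect is a cross-dissolve, in which corresponding
pixels of two images are blended in progressively changing proportions. The Python
sketch below shows only that blending step, using small lists of grey-level values in place
of real images; a true morphing tool also warps the geometry between matched control
points, which this illustration omits.

    # Cross-dissolve between two equally sized "images" (lists of grey levels).
    def cross_dissolve(image_a, image_b, steps):
        frames = []
        for s in range(steps + 1):
            t = s / steps                  # blend weight: 0.0 = first image, 1.0 = second
            frames.append([round(a * (1 - t) + b * t)
                           for a, b in zip(image_a, image_b)])
        return frames

    # 8 frames per second for a 4-second transition corresponds to the 32 steps mentioned above.
    for frame in cross_dissolve([0, 50, 100, 150], [255, 200, 150, 100], 32):
        print(frame)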

Check Your Progress 1


List the different animation techniques
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

5.4 Animation File Formats

Some file formats are designed specifically to contain animations, and they can be
ported among applications and platforms with the proper translators.

 Director *.dir, *.dcr


 AnimationPro *.fli, *.flc
 3D Studio Max *.max
 SuperCard and Director *.pics
 CompuServe *.gif
 Flash *.fla, *.swf

Following is the list of few Software used for computerized animation:

 3D Studio Max
 Flash
 AnimationPro


5.5 Video

Analog versus Digital

Digital video has supplanted analog video as the method of choice for making
video for multimedia use. While broadcast stations and professional production and post-
production houses remain greatly invested in analog video hardware (according to Sony,
there are more than 350,000 Betacam SP devices in use today), digital video gear
produces excellent finished products at a fraction of the cost of analog. A digital
camcorder directly connected to a computer workstation eliminates the image-degrading
analog-to-digital conversion step typically performed by expensive video capture cards,
and brings the power of nonlinear video editing and production to everyday users.

5.6 Broadcast Video Standards

Four broadcast and video standards and recording formats are commonly in use
around the world: NTSC, PAL, SECAM, and HDTV. Because these standards and
formats are not easily interchangeable, it is important to know where your multimedia
project will be used.

NTSC

The United States, Japan, and many other countries use a system for broadcasting
and displaying video that is based upon the specifications set forth by the 1952
National Television Standards Committee. These standards define a method for
encoding information into the electronic signal that ultimately creates a television
picture. As specified by the NTSC standard, a single frame of video is made up
of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated
picture tube every 1/30th of a second by a fast-moving electron beam.

PAL

The Phase Alternate Line (PAL) system is used in the United Kingdom, Europe,
Australia, and South Africa. PAL is an integrated method of adding color to a
black-and-white television signal that paints 625 lines at a frame rate of 25 frames
per second.

SECAM

The Sequential Color and Memory (SECAM) system is used in France, Russia,
and a few other countries. Although SECAM is a 625-line, 50 Hz system, it differs
greatly from both the NTSC and the PAL color systems in its basic technology
and broadcast method.


HDTV

High Definition Television (HDTV) provides high resolution in a 16:9 aspect


ratio (see following Figure). This aspect ratio allows the viewing of Cinemascope
and Panavision movies. There is contention between the broadcast and computer
industries about whether to use interlacing or progressive-scan technologies.

[Figure: Difference between VGA and HDTV aspect ratios. Safe title area 512 x 384
(4:3); monitor 640 x 480 (4:3); NTSC television overscan approx. 648 x 480 (4:3);
35mm slide / photo 768 x 512 (3:2); HDTV 1280 x 720 (16:9).]

Check Your Progress 2


List the different broadcast video standards and compare their specifications.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

5.7 Shooting and Editing Video

To add full-screen, full-motion video to your multimedia project, you will need to invest
in specialized hardware and software or purchase the services of a professional video
production studio. In many cases, a professional studio will also provide editing tools
and post-production capabilities that you cannot duplicate with your Macintosh or PC.


Video Tips

A useful tool easily implemented in most digital video editing applications is “blue
screen,” “Ultimatte,” or “chroma key” editing. Blue screen is a popular technique for
making multimedia titles because expensive sets are not required. Incredible
backgrounds can be generated using 3-D modeling and graphic software, and one or more
actors, vehicles, or other objects can be neatly layered onto that background.
Applications such as VideoShop, Premiere, Final Cut Pro, and iMovie provide this
capability.

Recording Formats

S-VHS video

In S-VHS video, color and luminance information are kept on two separate tracks.
The result is a definite improvement in picture quality. This standard is also used
in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast
stations, this would not be the best choice.

Component (YUV)

In the early 1980s, Sony began to experiment with a new portable professional
video format based on Betamax. Panasonic has developed their own standard
based on a similar technology, called “MII.” Betacam SP has become the industry
standard for professional video field recording. This format may soon be eclipsed
by a new digital version called “Digital Betacam.”

Digital Video

Full integration of motion video on computers eliminates the analog television form of
video from the multimedia delivery platform. If a video clip is stored as data on a hard
disk, CD-ROM, or other mass-storage device, that clip can be played back on the
computer’s monitor without overlay boards, videodisk players, or second monitors. This
playback of digital video is accomplished using software architecture such as QuickTime
or AVI. As a multimedia producer or developer, you may need to convert video source
material from its still common analog form (videotape) to a digital form manageable by
the end user’s computer system. So an understanding of analog video and some special
hardware must remain in your multimedia toolbox.

Analog to digital conversion of video can be accomplished using the video overlay
hardware described above, or it can be delivered direct to disk using FireWire cables. To
repetitively digitize a full-screen color video image every 1/30 second and store it to disk
or RAM severely taxes both Macintosh and PC processing capabilities–special hardware,
compression firmware, and massive amounts of digital storage space are required.


5.8 Video Compression


To digitize and store a 10-second clip of full-motion video in your computer requires
transfer of an enormous amount of data in a very short amount of time. Reproducing just
one frame of digital component video at 24 bits requires almost 1MB of computer
data; 30 seconds of video will fill a gigabyte hard disk. Full-size, full-motion video
requires that the computer deliver data at about 30MB per second. This overwhelming
technological bottleneck is overcome using digital video compression schemes or codecs
(coders/decoders). A codec is the algorithm used to compress a video for delivery and
then decode it in real-time for fast playback.
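
The figures quoted above can be checked with a little arithmetic. The following sketch
assumes a 640 x 480 frame at 24 bits (3 bytes) per pixel and 30 frames per second; the
exact numbers depend on the frame size chosen, so treat the output as a rough check
rather than a specification.

    # Back-of-the-envelope check of the uncompressed video figures quoted above.
    width, height, bytes_per_pixel, fps = 640, 480, 3, 30   # assumed frame format

    frame_bytes = width * height * bytes_per_pixel
    rate_bytes_per_sec = frame_bytes * fps

    print("One frame:           %.2f MB" % (frame_bytes / 1e6))
    print("Sustained data rate: %.1f MB per second" % (rate_bytes_per_sec / 1e6))
    print("30 seconds of video: %.2f GB" % (rate_bytes_per_sec * 30 / 1e9))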

Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG,


Cinepak, Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress
digital video information. Compression schemes use Discrete Cosine Transform (DCT),
an encoding algorithm that quantifies the human eye’s ability to detect color and image
distortion. All of these codecs employ lossy compression algorithms.

In addition to compressing video data, streaming technologies are being implemented to


provide reasonable quality low-bandwidth video on the Web. Microsoft, RealNetworks,
VXtreme, VDOnet, Xing, Precept, Cubic, Motorola, Viva, Vosaic, and Oracle are
actively pursuing the commercialization of streaming technology on the Web.

QuickTime, Apple’s software-based architecture for seamlessly integrating sound,


animation, text, and video (data that changes over time), is often thought of as a
compression standard, but it is really much more than that.

MPEG

The MPEG standard has been developed by the Moving Picture Experts Group, a
working group convened by the International Standards Organization (ISO) and the
International Electro-technical Commission (IEC) to create standards for digital
representation of moving pictures and associated audio and other data. MPEG1 and
MPEG2 are the current standards. Using MPEG1, you can deliver 1.2 Mbps of video and
250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG2, a
completely different system from MPEG1, requires higher data rates (3 to 15 Mbps) but
delivers higher image resolution, picture quality, interlaced video formats,
multiresolution scalability, and multichannel audio features.
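
A rough calculation shows why these MPEG1 rates suit CD-ROM delivery. The sketch
below simply divides an assumed 650 MB disc capacity by the combined video and audio
bit rate quoted above; file-system and error-correction overheads are ignored, so the
result is only an approximation.

    # Approximate playing time of an MPEG1 stream (1.2 Mbps video + 250 Kbps audio)
    # on a 650 MB CD-ROM, ignoring overheads.
    video_bps = 1.2e6
    audio_bps = 250e3
    disc_bytes = 650e6

    bytes_per_second = (video_bps + audio_bps) / 8
    print("Playing time: about %.0f minutes" % (disc_bytes / bytes_per_second / 60))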

DVI/Indeo

DVI is a proprietary, programmable compression/decompression technology based on the


Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integrated)
chips to separate the image processing and display functions.

Two levels of compression and decompression are provided by DVI: Production Level
Video (PLV) and Real Time Video (RTV). PLV and RTV both use variable compression


rates. DVI’s algorithms can compress video images at ratios between 80:1 and 160:1.
DVI will play back video in full-frame size and in full color at 30 frames per second.

Optimizing Video Files for CD-ROM

CD-ROMs provide an excellent distribution medium for computer-based video: they are
inexpensive to mass produce, and they can store great quantities of information. CD-
ROM players offer slow data transfer rates, but adequate video transfer can be achieved
by taking care to properly prepare your digital video files.

 Limit the amount of synchronization required between the video and audio. With
Microsoft’s AVI files, the audio and video data are already interleaved, so this is
not a necessity, but with QuickTime files, you should “flatten” your movie.
Flattening means you interleave the audio and video segments together.

 Use regularly spaced key frames, 10 to 15 frames apart, and temporal


compression can correct for seek time delays. Seek time is how long it takes the
CD-ROM player to locate specific data on the CD-ROM disc. Even fast 56x
drives must spin up, causing some delay (and occasionally substantial noise).

 The size of the video window and the frame rate you specify dramatically affect
performance. In QuickTime, 20 frames per second played in a 160X120-pixel
window is equivalent to playing 10 frames per second in a 320X240 window.
The more data that has to be decompressed and transferred from the CD-ROM to
the screen, the slower the playback.
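
The trade-off described in the last point above comes down to how many pixels must be
decompressed and transferred to the screen every second. The short sketch below
compares two window sizes at the same frame rate (the settings are illustrative only):

    # Pixel throughput for different playback window sizes and frame rates.
    def pixels_per_second(width, height, fps):
        return width * height * fps

    small = pixels_per_second(160, 120, 20)
    large = pixels_per_second(320, 240, 20)
    print("160x120 window at 20 fps:", small, "pixels/second")
    print("320x240 window at 20 fps:", large, "pixels/second")
    print("Doubling both dimensions multiplies the workload by", large // small)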

5.9 Let us sum up

In this lesson we have learnt the use of animation and video in multimedia presentation.
Following points have been discussed in this lesson :
 Animation is created from drawn pictures and video is created using real time
visuals.
 Animation is possible because of a biological phenomenon known as persistence
of vision
 The different techniques used in animation are cel animation, computer
animation, kinematics and morphing.
 Four broadcast and video standards and recording formats are commonly in use
around the world: NTSC, PAL, SECAM, and HDTV.
 Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG,
Cinepak, Sorenson, ClearVideo, RealVideo, and VDOwave are available to
compress digital video information.


5.10 Lesson-end activities

1. Choose animation software available for windows. List its name and its
capabilities. Find whether the software is capable of handling layers? Keyframes?
Tweening? Morphing? Check whether the software allows cross platform playback
facilities.
2. Locate three web sites that offer streaming video clips. Check the file formats and
the duration, size of video. Make a list of software that can play these video clips.

5.11 Model answers to “Check your progress”

1. The different techniques used in animation are


cel animation, computer animation, kinematics and morphing.
2. Four broadcast and video standards and recording formats are commonly in use
NTSC, PAL, SECAM, and HDTV.

5.12 References

1. “Multimedia Computing, Communication and Application” by Steinmetz and
   Klara Nahrstedt
2. “Multimedia Making it Work” by Tay Vaughan
3. “Multimedia in Practice – Technology and Applications” by Jeffcoat
4. http://en.wikipedia.org/wiki/Animation_software


UNIT - II

Lesson 6 Multimedia Hardware – Connecting Devices

Contents

6.0 Aims and Objectives


6.1 Introduction
6.2 Multimedia Hardware
6.3 Connecting Devices
6.4 SCSI
6.5 MCI
6.6 IDE
6.7 USB
6.8 Let us sum up
6.9 Lesson-end activities
6.10 Model answers to “Check your progress”
6.11 References

6.0 Aims and Objectives

In this lesson we will learn about the multimedia hardware required for
multimedia production. At the end of the lesson the learner will be able to identify the
proper hardware required for connecting various devices.

6.1 Introduction

The hardware required for multimedia PC depends on the personal preference,


budget, project delivery requirements and the type of material and content in the project.
Multimedia production was much smoother and easier on the Macintosh than in Windows,
but multimedia content production in Windows has been made easier with additional
storage and lower computing cost.
The right selection of multimedia hardware results in a good quality multimedia
presentation.

6.2 Multimedia Hardware

The hardware required for multimedia can be classified into five. They are
1. Connecting Devices
2. Input devices
3. Output devices
4. Storage devices and
5. Communicating devices.


6.3 Connecting Devices

Among the many hardware items – computers, monitors, disk drives, video projectors,
light valves, players, VCRs, mixers and loudspeakers – there are plenty of
wires which connect these devices. The data transfer speed the connecting devices
provide will determine how fast the multimedia content is delivered.
The most popularly used connecting devices are:
 SCSI
 MCI
 IDE
 USB

6.4 SCSI
SCSI (Small Computer System Interface) is a set of standards for physically
connecting and transferring data between computers and peripheral devices. The SCSI
standards define commands, protocols, electrical and optical interfaces. SCSI is most
commonly used for hard disks and tape drives, but it can connect a wide range of other
devices, including scanners, and optical drives (CD, DVD, etc.).

SCSI is most commonly pronounced "scuzzy".

Since its standardization in 1986, SCSI has been commonly used in the Apple
Macintosh and Sun Microsystems computer lines and PC server systems. SCSI has never
been popular in the low-priced IBM PC world, owing to the lower cost and adequate
performance of its ATA hard disk standard. SCSI drives and even SCSI RAIDs became
common in PC workstations for video or audio production, but the appearance of large
cheap SATA drives means that SATA is rapidly taking over this market.

Currently, SCSI is popular on high-performance workstations and servers. RAIDs


on servers almost always use SCSI hard disks, though a number of manufacturers offer
SATA-based RAID systems as a cheaper option. Desktop computers and notebooks more
typically use the ATA/IDE or the newer SATA interfaces for hard disks, and USB and
FireWire connections for external devices.

6.4.1 SCSI interfaces

SCSI is available in a variety of interfaces. The first, still very common, was
parallel SCSI (also called SPI). It uses a parallel electrical bus design. The traditional SPI
design is making a transition to Serial Attached SCSI, which switches to a serial point-to-
point design but retains other aspects of the technology. iSCSI drops physical


implementation entirely, and instead uses TCP/IP as a transport mechanism. Finally,


many other interfaces which do not rely on complete SCSI standards still implement the
SCSI command protocol.

The following table compares the different types of SCSI.

Term          Bus Speed (MB/sec)   Bus Width (bits)   Number of Devices Supported
SCSI-1                 5                  8                       8
SCSI-2                10                  8                       8
SCSI-3                20                  8                      16
SCSI-3                20                  8                       4
SCSI-3 1              20                 16                      16
SCSI-3 UW             40                 16                      16
SCSI-3 UW             40                 16                       8
SCSI-3 UW             40                 16                       4
SCSI-3 U2             40                  8                       8
SCSI-3 U2             80                 16                       2
SCSI-3 U2W            80                 16                      16
SCSI-3 U2W            80                 16                       2
SCSI-3 U3            160                 16                      16

6.4.2 SCSI cabling

Internal SCSI cables are usually ribbon cables that have multiple 68 pin or 50 pin
connectors. External cables are shielded and only have connectors on the ends.

iSCSI

iSCSI preserves the basic SCSI paradigm, especially the command set, almost
unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3
over TCP/IP, as displacing Fibre Channel in the long run, arguing that Ethernet
data rates are currently increasing faster than data rates for Fibre Channel and
similar disk-attachment technologies. iSCSI could thus address both the low-end
and high-end markets with a single commodity-based technology.

Serial SCSI

Four recent versions of SCSI, SSA, FC-AL, FireWire, and Serial Attached SCSI
(SAS) break from the traditional parallel SCSI standards and perform data
transfer via serial communications. Although much of the documentation of SCSI
talks about the parallel interface, most contemporary development effort is on
serial SCSI. Serial SCSI has a number of advantages over parallel SCSI—faster
data rates, hot swapping, and improved fault isolation. The primary reason for the
shift to serial interfaces is the clock skew issue of high speed parallel interfaces,
which makes the faster variants of parallel SCSI susceptible to problems caused


by cabling and termination. Serial SCSI devices are more expensive than the
equivalent parallel SCSI devices.

6.4.3 SCSI command protocol

In addition to many different hardware implementations, the SCSI standards also


include a complex set of command protocol definitions. The SCSI command architecture
was originally defined for parallel SCSI buses but has been carried forward with minimal
change for use with iSCSI and serial SCSI. Other technologies which use the SCSI
command set include the ATA Packet Interface, USB Mass Storage class and FireWire
SBP-2.

In SCSI terminology, communication takes place between an initiator and a


target. The initiator sends a command to the target which then responds. SCSI commands
are sent in a Command Descriptor Block (CDB). The CDB consists of a one byte
operation code followed by five or more bytes containing command-specific parameters.

At the end of the command sequence the target returns a Status Code byte which
is usually 00h for success, 02h for an error (called a Check Condition), or 08h for busy.
When the target returns a Check Condition in response to a command, the initiator
usually then issues a SCSI Request Sense command in order to obtain a Key Code
Qualifier (KCQ) from the target. The Check Condition and Request Sense sequence
involves a special SCSI protocol called a Contingent Allegiance Condition.

There are 4 categories of SCSI commands: N (non-data), W (writing data from


initiator to target), R (reading data), and B (bidirectional). There are about 60 different
SCSI commands in total, with the most common being:

 Test unit ready: Queries device to see if it is ready for data transfers (disk spun
up, media loaded, etc.).
 Inquiry: Returns basic device information, also used to "ping" the device since it
does not modify sense data.
 Request sense: Returns any error codes from the previous command that returned
an error status.
 Send diagnostic and Receive diagnostic results: runs a simple self-test or a
specialized test defined in a diagnostic page.
 Start/Stop unit: Spins disks up and down, load/unload media.
 Read capacity: Returns storage capacity.
 Format unit: Sets all sectors to all zeroes, also allocates logical blocks avoiding
defective sectors.
 Read Format Capacities: Read the capacity of the sectors.
 Read (four variants): Reads data from a device.
 Write (four variants): Writes data to a device.
 Log sense: Returns current information from log pages.
 Mode sense: Returns current device parameters from mode pages.
 Mode select: Sets device parameters in a mode page.


Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN).
Simple devices have just one LUN, more complex devices may have multiple LUNs. A
"direct access" (i.e. disk type) storage device consists of a number of logical blocks,
usually referred to by the term Logical Block Address (LBA). A typical LBA equates to
512 bytes of storage. The usage of LBAs has evolved over time and so four different
command variants are provided for reading and writing data. The Read(6) and Write(6)
commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long,
Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus
various other parameter options.
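
To make the CDB idea concrete, the following Python sketch builds the ten-byte
Command Descriptor Block for a READ(10) command (operation code 28h, with a 32-bit
LBA and a 16-bit transfer length, as described above). The LBA and block count used
here are arbitrary example values, and a real driver would hand the block to the host
adapter rather than print it.

    # Build a ten-byte READ(10) Command Descriptor Block (CDB).
    import struct

    def read10_cdb(lba, num_blocks):
        return struct.pack(">BBIBHB",
                           0x28,         # operation code: READ(10)
                           0,            # flags byte (left at zero in this sketch)
                           lba,          # 32-bit Logical Block Address, big-endian
                           0,            # group number
                           num_blocks,   # 16-bit transfer length in blocks
                           0)            # control byte

    cdb = read10_cdb(lba=2048, num_blocks=8)
    print(" ".join("%02X" % b for b in cdb))    # ten bytes ready to send to the target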

A "sequential access" (i.e. tape-type) device does not have a specific capacity because
it typically depends on the length of the tape, which is not known exactly. Reads and
writes on a sequential access device happen at the current position, not at a specific LBA.
The block size on sequential access devices can either be fixed or variable, depending on
the specific device. (Earlier devices, such as 9-track tape, tended to be fixed block, while
later types, such as DAT, almost always supported variable block sizes.)

6.4.4 SCSI device identification

In the modern SCSI transport protocols, there is an automated process of


"discovery" of the IDs. SSA initiators "walk the loop" to determine what devices are
there and then assign each one a 7-bit "hop-count" value. FC-AL initiators use the LIP
(Loop Initialization Protocol) to interrogate each device port for its WWN (World Wide
Name). For iSCSI, because of the unlimited scope of the (IP) network, the process is
quite complicated. These discovery processes occur at power-on/initialization time and
also if the bus topology changes later, for example if an extra device is added.

On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a
"SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range 0–15 on
a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the
initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the
adapter sets the SCSI ID; for example, the adapter often contains a BIOS program that
runs when the computer boots up and that program has menus that let the operator choose
the SCSI ID of the host adapter. Alternatively, the host adapter may come with software
that must be installed on the host computer to configure the SCSI ID. The traditional
SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration
(even on a 16 bit bus).

The SCSI ID of a device in a drive enclosure that has a backplane is set either by
jumpers or by the slot in the enclosure the device is installed into, depending on the
model of the enclosure. In the latter case, each slot on the enclosure's back plane delivers
control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a
backplane often has a switch for each drive to choose the drive's SCSI ID. The enclosure
is packaged with connectors that must be plugged into the drive where the jumpers are
typically located; the switch emulates the necessary jumpers. While there is no standard


that makes this work, drive designers typically set up their jumper headers in a consistent
format that matches the way that these switches are implemented.

Note that a SCSI target device (which can be called a "physical unit") is often
divided into smaller "logical units." For example, a high-end disk subsystem may be a
single SCSI device but contain dozens of individual disk drives, each of which is a
logical unit (more commonly, it is not that simple—virtual disk devices are generated by
the subsystem based on the storage in those physical drives, and each virtual disk device
is a logical unit). The SCSI ID, WWNN, etc. in this case identifies the whole subsystem,
and a second number, the logical unit number (LUN) identifies a disk device within the
subsystem.

It is quite common, though incorrect, to refer to the logical unit itself as a "LUN."
Accordingly, the actual LUN may be called a "LUN number" or "LUN id".

Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community
recommendation. SCSI ID 2 is usually set aside for the Floppy drive while SCSI ID 3 is
typically for a CD ROM.

6.4.5 SCSI enclosure services

In larger SCSI servers, the disk-drive devices are housed in an intelligent


enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate
with the enclosure using a specialized set of SCSI commands to access power, cooling,
and other non-data characteristics.

Check Your Progress 1


List a few types of SCSI.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

6.5 Media Control Interface (MCI)

The Media Control Interface, MCI in short, is an aging API for controlling
multimedia peripherals connected to a Microsoft Windows or OS/2 computer. MCI
makes it very simple to write a program which can play a wide variety of media files and
even to record sound by just passing commands as strings. It uses relations described in
Windows registries or in the [MCI] section of the file SYSTEM.INI.


The MCI interface is a high-level API developed by Microsoft and IBM for
controlling multimedia devices, such as CD-ROM players and audio controllers.

The advantage is that MCI commands can be transmitted both from a programming
language and from a scripting language (OpenScript, Lingo). For a number of years, the
MCI interface has been phased out in favor of the DirectX APIs.

6.5.1 MCI Devices

The Media Control Interface consists of 4 parts:

 AVIVideo
 CDAudio
 Sequencer
 WaveAudio

Each of these so-called MCI devices can play a certain type of file, e.g. AVIVideo plays
.avi files and CDAudio plays CD tracks, among others. Other MCI devices have also been
made available over time.

6.5.2 Playing media through the MCI interface

To play a type of media, it needs to be initialized correctly using MCI commands. These
commands are subdivided into categories (an example of sending such command strings
follows the list):

 System Commands
 Required Commands
 Basic Commands
 Extended Commands
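
Because MCI commands are passed as plain strings, they can be issued from almost any
language that can call into winmm.dll. The following sketch does so from Python on
Windows via ctypes; "sample.wav" is a placeholder file name, and the error handling is
kept to a minimum.

    # Sending MCI command strings through winmm.dll (Microsoft Windows only).
    import ctypes

    winmm = ctypes.windll.winmm

    def mci(command):
        buf = ctypes.create_unicode_buffer(255)          # room for any returned text
        err = winmm.mciSendStringW(command, buf, 254, None)
        if err:
            raise RuntimeError("MCI error %d for: %s" % (err, command))
        return buf.value

    mci('open "sample.wav" type waveaudio alias clip')   # initialise the WaveAudio device
    mci('play clip wait')                                # play the clip and wait for it to end
    mci('close clip')                                    # release the device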

6.6 IDE

Usually storage devices connect to the computer through an Integrated Drive


Electronics (IDE) interface. Essentially, an IDE interface is a standard way for a storage
device to connect to a computer. IDE is actually not the true technical name for the
interface standard. The original name, AT Attachment (ATA), signified that the
interface was initially developed for the IBM AT computer.

IDE was created as a way to standardize the use of hard drives in computers. The
basic concept behind IDE is that the hard drive and the controller should be combined.
The controller is a small circuit board with chips that provide guidance as to exactly how
the hard drive stores and accesses data. Most controllers also include some memory that
acts as a buffer to enhance hard drive performance.

Before IDE, controllers and hard drives were separate and often proprietary. In other
words, a controller from one manufacturer might not work with a hard drive from another


manufacturer. The distance between the controller and the hard drive could result in poor
signal quality and affect performance. Obviously, this caused much frustration for
computer users.

IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of
the wires laid flat next to each other instead of bunched or wrapped together in a bundle.
IDE ribbon cables have either 40 or 80 wires. There is a connector at each end of the
cable and another one about two-thirds of the distance from the motherboard connector.
This cable cannot exceed 18 inches (46 cm) in total length (12 inches from first to second
connector, and 6 inches from second to third) to maintain signal integrity. The three
connectors are typically different colors and attach to specific items:

 The blue connector attaches to the motherboard.


 The black connector attaches to the primary (master) drive.
 The grey connector attaches to the secondary (slave) drive.

Enhanced IDE (EIDE) — an extension to the original ATA standard again


developed by Western Digital — allowed the support of drives having a storage capacity
larger than 504 MiBs (528 MB), up to 7.8 GiBs (8.4 GB). Although these new names
originated in branding convention and not as an official standard, the terms IDE and
EIDE often appear as if interchangeable with ATA. This may be attributed to the two
technologies being introduced with the same consumable devices — these "new" ATA
hard drives.

With the introduction of Serial ATA around 2003, conventional ATA was
retroactively renamed to Parallel ATA (P-ATA), referring to the method in which data
travels over wires in this interface.

6.7 USB
Universal Serial Bus (USB) is a serial bus standard to interface devices. A major
component in the legacy-free PC, USB was designed to allow peripherals to be connected
using a single standardized interface socket and to improve plug-and-play capabilities by
allowing devices to be connected and disconnected without rebooting the computer (hot
swapping). Other convenient features include providing power to low-consumption
devices without the need for an external power supply and allowing many devices to be
used without requiring manufacturer specific, individual device drivers to be installed.

USB is intended to help retire all legacy varieties of serial and parallel ports. USB
can connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads
and joysticks, scanners, digital cameras, printers, personal media players, and flash
drives. For many of those devices USB has become the standard connection method.
USB is also used extensively to connect non-networked printers; USB simplifies
connecting several printers to one computer. USB was originally designed for personal
computers, but it has become commonplace on other devices such as PDAs and video
game consoles.


The design of USB is standardized by the USB Implementers Forum (USB-IF),


an industry standards body incorporating leading companies from the computer and
electronics industries. Notable members have included Apple Inc., Hewlett-Packard,
NEC, Microsoft, Intel, and Agere.

A USB system has an asymmetric design, consisting of a host, a multitude of


downstream USB ports, and multiple peripheral devices connected in a tiered-star
topology. Additional USB hubs may be included in the tiers, allowing branching into a
tree structure, subject to a limit of 5 levels of tiers. A USB host may have multiple host
controllers and each host controller may provide one or more USB ports. Up to 127
devices, including the hub devices, may be connected to a single host controller.

USB devices are linked in series through hubs. There always exists one hub
known as the root hub, which is built-in to the host controller. So-called "sharing hubs"
also exist; allowing multiple computers to access the same peripheral device(s), either
switching access between PCs automatically or manually. They are popular in small-
office environments. In network terms they converge rather than diverge branches.

A single physical USB device may consist of several logical sub-devices that are
referred to as device functions, because each individual device may provide several
functions, such as a webcam (video device function) with a built-in microphone (audio
device function).

Check Your Progress 2


List the connecting devices discussed in this lesson.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

6.8 Let us sum up

In this lesson we have learnt the different hardware required for multimedia
production. We have discussed the following points related with connecting devices used
in a multimedia computer.

 SCSI (Small Computer System Interface) is a set of standards for physically


connecting and transferring data between computers and peripheral devices.
 On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a
"SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range
0–15 on a wide bus.
 The Media Control Interface, MCI in short, is an aging API for controlling
multimedia peripherals connected to a Microsoft Windows


6.9 Lesson-end activities


1. Identify the SYSTEM.INI file present in a computer and find the list of
devices installed in a computer. Try to identify the settings for each
device.

6.10 Model answers to “Check your progress”

1. A few types of SCSI are Ultra SCSI, Wide SCSI, SCSI-1, SCSI-2 and SCSI-3

2. The connecting devices discussed in this lesson are


SCSI, IDE, MCI and USB

6.11 References

1. The Essential Guide to Computer Data Storage: From Floppy to DVD by
   Andrei Khurshudov
2. The SCSI Bus and IDE Interface: Protocols, Applications and Programming by
   Friedhelm Schmidt
3. “Multimedia Computing, Communication and Application” by Steinmetz and
   Klara Nahrstedt
4. “Multimedia Making it Work” by Tay Vaughan


Lesson 7 Multimedia Hardware – Storage Devices

Contents
7.0 Aims and Objectives
7.1 Introduction
7.2 Memory and Storage Devices
7.3 Random Access Memory
7.4 Read Only Memory
7.5 Floppy and Hard Disks
7.6 Zip, jaz, SyQuest, and Optical storage devices
7.7 Digital Versatile Disk
7.8 CD ROM players
7.9 CD Recorders
7.10 Video Disc Players
7.11 Let us sum up
7.12 Model answers to “Check your progress”
7.13 References

7.0 Aims and Objectives

The aim of this lesson is to educate the learners about the second category of
multimedia hardware which is the storage device. At the end of this lesson the learner
will have an in-depth knowledge on the storage devices and their specifications.

7.1 Introduction

A data storage device is a device for recording (storing) information (data).


Recording can be done using virtually any form of energy. A storage device may hold
information, process information, or both. A device that only holds information is a
recording medium. Devices that process information (data storage equipment) may either
access a separate portable (removable) recording medium or a permanent component to
store and retrieve information.

Electronic data storage is storage which requires electrical power to store and
retrieve that data. Most storage devices that do not require visual optics to read data fall
into this category. Electronic data may be stored in either an analog or digital signal
format. This type of data is considered to be electronically encoded data, whether or not it
is electronically stored. Most electronic data storage media (including some forms of
computer storage) are considered permanent (non-volatile) storage, that is, the data will
remain stored when power is removed from the device. In contrast, electronically stored
information is considered volatile memory.


7.2 MEMORY AND STORAGE DEVICES


When more memory and storage space are added to the computer, computing needs
and habits tend to keep pace, quickly filling the new capacity.

To estimate the memory requirements of a multimedia project (the space required
on a floppy disk, hard disk, or CD-ROM, not the random access memory used while the
computer is running), you will need to have a sense of the project’s
content and scope. Color images, sound bites, video clips, and the programming code
that glues it all together require memory; if there are many of these elements, you will
need even more. If you are making multimedia, you will also need to allocate memory for
storing and archiving working files used during production, original audio and video
clips, edited pieces, and final mixed pieces, production paperwork and correspondence,
and at least one backup of your project files, with a second backup stored at another
location.

7.3 Random Access Memory (RAM)

RAM is the main memory where the operating system is initially loaded and the
application programs are loaded at a later stage. RAM is volatile in nature, and every
program that is quit or exited is removed from RAM. The greater the RAM capacity, the
higher the processing speed.

If there is a budget constraint, it is certainly possible to produce a multimedia project on
a slower or limited-memory computer. On the other hand, it is profoundly frustrating to
face memory (RAM) shortages time after time, when you’re attempting to keep multiple
applications and files open simultaneously. It is also frustrating to wait the extra seconds
required for each editing step when working with multimedia material on a slow
processor.

On the Macintosh, the minimum RAM configuration for serious multimedia
production is about 32MB, but even 64MB and 256MB systems are becoming common,
because while digitizing audio or video, you can store much more data much more
quickly in RAM. And when you’re using some software, you can quickly chew up
available RAM – for example, Photoshop (16MB minimum, 20MB recommended); After
Effects (32MB required); Director (8MB minimum, 20MB better); PageMaker (24MB
recommended); Illustrator (16MB recommended); Microsoft Office (12MB
recommended).

In spite of all the marketing hype about processor speed, this speed is ineffective if
not accompanied by sufficient RAM. A fast processor without enough RAM may waste
processor cycles while it swaps needed portions of program code into and out of memory.
In some cases, increasing available RAM may show more performance improvement on
your system than upgrading the processor chip.

On an MPC platform, multimedia authoring can also consume a great deal of


memory. It may be needed to open many large graphics and audio files, as well as your


authoring system, all at the same time to facilitate faster copying/pasting and then testing
in your authoring software. Although 8MB is the minimum under the MPC standard,
much more is required as of now.

7.4 Read-Only Memory (ROM)

Read-only memory is not volatile. Unlike RAM, when you turn off the power to a
ROM chip, it will not forget, or lose its memory. ROM is typically used in computers to
hold the small BIOS program that initially boots up the computer, and it is used in
printers to hold built-in fonts. Programmable ROMs (called EPROMs) allow changes to
be made that are not forgotten. A new and inexpensive technology, optical read-only
memory (OROM), is provided in proprietary data cards using patented holographic
storage. Typically, OROMs offer 128MB of storage, have no moving parts, and use only
about 200 milliwatts of power, making them ideal for handheld, battery-operated devices.

7.5 Floppy and Hard Disks

Adequate storage space for the production environment can be provided by large-
capacity hard disks; a server-mounted disk on a network; Zip, Jaz, or SyQuest removable
cartridges; optical media; CD-R (compact disc-recordable) discs; tape; floppy disks;
banks of special memory devices; or any combination of the above.

Removable media (floppy disks, compact or optical discs, and cartridges) typically
fit into a letter-sized mailer for overnight courier service. One or many disks may be
required for storage and archiving each project, and it is necessary to plan for backups
kept off-site.

Floppy disks and hard disks are mass-storage devices for binary data-data that can
be easily read by a computer. Hard disks can contain much more information than floppy
disks and can operate at far greater data transfer rates. In the scale of things, floppies are,
however, no longer “mass-storage” devices.

A floppy disk is made of flexible Mylar plastic coated with a very thin layer of
special magnetic material. A hard disk is actually a stack of hard metal platters coated
with magnetically sensitive material, with a series of recording heads or sensors that
hover a hairbreadth above the fast-spinning surface, magnetizing or demagnetizing spots
along formatted tracks using technology similar to that used by floppy disks and audio
and video tape recording. Hard disks are the most common mass-storage device used on
computers, and for making multimedia, it is necessary to have one or more large-capacity
hard disk drives.

As multimedia has reached consumer desktops, makers of hard disks have been
challenged to build smaller profile, larger-capacity, faster, and less-expensive hard disks.
In 1994, hard disk manufactures sold nearly 70 million units; in 1995, more than 80
million units. And prices have dropped a full order of magnitude in a matter of months.
By 1998, street prices for 4GB drives (IDE) were less than $200. As network and Internet


servers increase the demand for centralized data storage requiring terabytes (1 trillion
bytes), hard disks will be configured into fail-proof redundant arrays offering built-in
protection against crashes.

7.6 Zip, jaz, SyQuest, and Optical storage devices

SyQuest’s 44MB removable cartridges have been the most widely used portable
medium among multimedia developers and professionals, but Iomega’s inexpensive Zip
drives with their likewise inexpensive 100MB cartridges have significantly penetrated
SyQuest’s market share for removable media. Iomega’s Jaz cartridges provide a gigabyte
of removable storage media and have fast enough transfer rates for audio and video
development. Pinnacle Micro, Yamaha, Sony, Philips, and others offer CD-R “burners”
for making write-once compact discs, and some double as quad-speed players. As blank
CD-R discs become available for less than a dollar each, this write-once media competes
as a distribution vehicle. CD-R is described in greater detail a little later in the chapter.

Magneto-optical (MO) drives use a high-power laser to heat tiny spots on the
metal oxide coating of the disk. While the spot is hot, a magnet aligns the oxides to
provide a 0 or 1 (on or off) orientation. Like SyQuests and other Winchester hard disks,
this is rewritable technology, because the spots can be repeatedly heated and aligned.
Moreover, this media is normally not affected by stray magnetism (it needs both heat and
magnetism to make changes), so these disks are particularly suitable for archiving data.
The data transfer rate is, however, slow compared to Zip, Jaz, and SyQuest technologies.
One of the most popular formats uses a 128MB-capacity disk-about the size of a 3.5-inch
floppy. Larger-format magneto-optical drives with 5.25-inch cartridges offering 650MB
to 1.3GB of storage are also available.

7.7 Digital versatile disc (DVD)

In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony,


Philips, Time Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a
new optical disc technology for distribution of multimedia and feature-length movies
called DVD.

With this new medium capable not only of gigabyte storage capacity but also full-
motion video (MPEG2) and high-quality audio in surround sound, the bar has again
risen for multimedia developers. Commercial multimedia projects will become more
expensive to produce as consumers’ performance expectations rise. There are two types
of DVD, DVD-Video and DVD-ROM; these reflect marketing channels, not the
technology.

DVD can provide 720 pixels per horizontal line, whereas current television
(NTSC) provides 240, so television pictures will be sharper and more detailed. With Dolby
AC-3 Digital surround Sound as part of the specification, six discrete audio channels can
be programmed for digital surround sound, and with a separate subwoofer channel,
developers can program the low-frequency doom and gloom music popular with


Hollywood. DVD also supports Dolby pro-Logic Surround Sound, standard stereo and
mono audio. Users can randomly access any section of the disc and use the slow-motion
and freeze-frame features during movies. Audio tracks can be programmed for as many
as 8 different languages, with graphic subtitles in 32 languages. Some manufactures such
as Toshiba are already providing parental control features in their players (users select
lockout ratings from G to NC-17).

7.8 CD-ROM Players

Compact disc read-only memory (CD-ROM) players have become an integral part
of the multimedia development workstation and are important delivery vehicle for large,
mass-produced projects. A wide variety of developer utilities, graphic backgrounds, stock
photography and sounds, applications, games, reference texts, and educational software
are available only on this medium.

CD-ROM players have typically been very slow to access and transmit data
(150k per second, which is the speed required of consumer Red Book Audio CDs), but
new developments have led to double-, triple-, and quadruple-speed and even 24x drives
designed specifically for computer (not Red Book Audio) use. These faster drives spool
up like washing machines on the spin cycle and can be somewhat noisy, especially if the
inserted compact disc is not evenly balanced.

7.9 CD Recorders

With a compact disc recorder, you can make your own CDs using special CD-
recordable (CD-R) blank optical discs to create a CD in most formats of CD-ROM and
CD-Audio. The machines are made by Sony, Phillips, Ricoh, Kodak, JVC, Yamaha, and
Pinnacle. Software, such as Adaptec’s Toast for Macintosh or Easy CD Creator for
Windows, lets you organize files on your hard disk(s) into a “virtual” structure, then
writes them to the CD in that order. CD-R discs are made differently than normal CDs
but can play in any CD-Audio or CD-ROM player. They are available in either a “63
minute” or “74 minute” capacity; for the former, that means about 560MB, and for the
latter, about 650MB. These write-once CDs make excellent high-capacity file archives
and are used extensively by multimedia developers for premastering and testing CD-
ROM projects and titles.
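
Those “63 minute” and “74 minute” figures follow from the way a CD is read: 75 sectors
pass the laser every second, and a data (Mode 1) sector carries 2,048 bytes. The sketch
below reproduces the approximate capacities; real discs lose a little to lead-in, lead-out
and file-system overhead.

    # Approximate data capacity of 63-minute and 74-minute CD-R discs.
    SECTORS_PER_SECOND = 75
    DATA_BYTES_PER_SECTOR = 2048      # Mode 1 data sector payload

    for minutes in (63, 74):
        capacity = minutes * 60 * SECTORS_PER_SECOND * DATA_BYTES_PER_SECTOR
        print("%d minutes -> about %.0f MB of data" % (minutes, capacity / 2**20))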

7.10 Videodisc Players

Videodisc players (commercial, not consumer quality) can be used in conjunction


with the computer to deliver multimedia applications. You can control the videodisc
player from your authoring software with X-Commands (XCMDs) on the Macintosh and
with MCI commands in Windows. The output of the videodisc player is an analog
television signal, so you must setup a television separate from your computer monitor or
use a video digitizing board to “window” the analog signal on your monitor.


Check Your Progress 1


Specify any five storage devices and list their storage capacity.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

7.11 Let us sum up

In this lesson we have learnt about the storage devices used in a multimedia system.
The following points have been discussed:

 RAM is a temporary storage device which is used to hold all the
application programs under execution.
 Secondary storage devices are used to store data permanently. The
storage capacity of secondary storage is larger compared with RAM.

7.12 Model answers to “Check your progress”


1. A few of the following devices could be listed as storage devices used for
multimedia :
Random Access Memory, Read Only Memory, Floppy and Hard Disks,
Zip, jaz, SyQuest, and Optical storage devices, Digital Versatile Disk, CD
ROM players, CD Recorders, Video Disc Players
7.13 References

1. “Multimedia Making it Work” by Tay Vaughan
2. The Essential Guide to Computer Data Storage: From Floppy to DVD by
   Andrei Khurshudov
3. “Multimedia Computing, Communication and Application” by Steinmetz
   and Klara Nahrstedt
4. “Multimedia and Imaging Databases” by Setrag Khoshafian, A. Brad Baker


Lesson 8 Multimedia Storage – Optical Devices

Contents

8.0 Aims and Objectives


8.1 Introduction
8.2 CD-ROM
8.3 Logical formats of CD-ROM
8.4 DVD
8.4.1 DVD disc capacity
8.4.2 DVD recordable and rewritable
8.4.3 Security in DVD
8.4.4 Competitors and successors to DVD
8.5 Let us sum up
8.6 Lesson-end activities
8.7 Model answers to “Check your progress”
8.8 References

8.0 Aims and Objectives

In this lesson we will learn the different optical storage devices and their
specifications. At the end of this lesson the learner will;
i) Understand optical storage devices.
ii) Find the specifications of different optical storage devices.
iii) Specify the capabilities of DVD

8.1 Introduction

Optical storage devices have become the order of the day. The high storage
capacity available in optical storage devices has made them the preferred storage for
multimedia content. Apart from the high storage capacity, optical storage devices also
offer a higher data transfer rate.

8.2 CD-ROM

A Compact Disc or CD is an optical disc used to store digital data, originally


developed for storing digital audio. The CD, available on the market since late 1982,
remains the standard playback medium for commercial audio recordings to the present
day, though it has lost ground in recent years to MP3 players.

An audio CD consists of one or more stereo tracks stored using 16-bit PCM
coding at a sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can
hold approximately 80 minutes of audio. There are also 80 mm discs, sometimes used for


CD singles, which hold approximately 20 minutes of audio. The technology was later
adapted for use as a data storage device, known as a CD-ROM, and to include record-
once and re-writable media (CD-R and CD-RW respectively). CD-ROMs and CD-Rs
remain widely used technologies in the computer industry as of 2007. The CD and its
extensions have been extremely successful: in 2004, the worldwide sales of CD audio,
CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion CDs had been
sold worldwide.

8.2.1 CD-ROM History

In 1979, Philips and Sony set up a joint task force of engineers to design a new
digital audio disc.

The CD was originally thought of as an evolution of the gramophone record,


rather than primarily as a data storage medium. Only later did the concept of an "audio
file" arise, and the generalizing of this to any data file. From its origins as a music format,
Compact Disc has grown to encompass other applications. In June 1985, the CD-ROM
(read-only memory) and, in 1990, CD-Recordable were introduced, also developed by
Sony and Philips.

8.2.2 Physical details of CD-ROM

A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate
plastic and weighs approximately 16 grams. A thin layer of aluminium (or, more rarely,
gold, used for its longevity, such as in some limited-edition audiophile CDs) is applied to
the surface to make it reflective, and is protected by a film of lacquer. CD data is stored
as a series of tiny indentations (pits), encoded in a tightly packed spiral track molded into
the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit
is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 µm in
length.

The spacing between the tracks, the pitch, is 1.6 µm. A CD is read by focusing a
780 nm wavelength semiconductor laser through the bottom of the polycarbonate layer.

While CDs are significantly more durable than earlier audio formats, they are
susceptible to damage from daily usage and environmental factors. Pits are much closer
to the label side of a disc, so that defects and dirt on the clear side can be out of focus
during playback. Discs consequently suffer more damage because of defects such as
scratches on the label side, whereas clear-side scratches can be repaired by refilling them
with plastic of similar index of refraction, or by careful polishing.

Disc shapes and diameters

The digital data on a CD begins at the center of the disc and proceeds outwards to
the edge, which allows adaptation to the different size formats available. Standard CDs
are available in two sizes. By far the most common is 120 mm in diameter, with a 74 or


80-minute audio capacity and a 650 or 700 MB data capacity. 80 mm discs ("Mini CDs")
were originally designed for CD singles and can hold up to 21 minutes of music or
184 MB of data but never really became popular. Today nearly all singles are released on
120 mm CDs, which are called Maxi singles.

8.3 Logical formats of CD-ROM

Audio CD

The logical format of an audio CD (officially Compact Disc Digital Audio or


CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony
and Philips. The document is known colloquially as the "Red Book" after the color of its
cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate.
Four-channel sound is an allowed option within the Red Book format, but has never been
implemented.

The selection of the sample rate was primarily based on the need to reproduce
the audible frequency range of 20Hz - 20kHz. The Nyquist–Shannon sampling theorem
states that a sampling rate of double the maximum frequency to be recorded is needed,
resulting in a 40 kHz rate. The exact sampling rate of 44.1 kHz was inherited from a
method of converting digital audio into an analog video signal for storage on video tape,
which was the most affordable way to transfer data from the recording studio to the CD
manufacturer at the time the CD specification was being developed. The device that turns
an analog audio signal into PCM audio, which in turn is changed into an analog video
signal is called a PCM adaptor.

Main physical parameters

The main parameters of the CD (taken from the September 1983 issue of
the audio CD specification) are as follows:

 Scanning velocity: 1.2–1.4 m/s (constant linear velocity) – equivalent to


approximately 500 rpm at the inside of the disc, and approximately 200
rpm at the outside edge. (A disc played from beginning to end slows down
during playback.)
 Track pitch: 1.6 µm
 Disc diameter 120 mm
 Disc thickness: 1.2 mm
 Inner radius program area: 25 mm
 Outer radius program area: 58 mm
 Center spindle hole diameter: 15 mm

The program area is 86.05 cm² and the length of the recordable spiral is
86.05 cm² / 1.6 µm = 5.38 km. With a scanning speed of 1.2 m/s, the playing time
is 74 minutes, or around 650 MB of data on a CD-ROM. If the disc diameter were
only 115 mm, the maximum playing time would have been 68 minutes, i.e., six


minutes less. A disc with data packed slightly more densely is tolerated by most
players (though some old ones fail). Using a linear velocity of 1.2 m/s and a track
pitch of 1.5 µm leads to a playing time of 80 minutes, or a capacity of 700 MB.
Even higher capacities on non-standard discs (up to 99 minutes) are available at
least as recordable, but generally the tighter the tracks are squeezed the worse the
compatibility.
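The playing-time and capacity figures quoted above follow directly from the physical parameters. The short Python sketch below simply reproduces that arithmetic; it is an illustration, not part of the specification.

import math

inner_radius_m = 0.025      # program area inner radius: 25 mm
outer_radius_m = 0.058      # program area outer radius: 58 mm
track_pitch_m  = 1.6e-6     # track pitch: 1.6 micrometres
scan_speed_mps = 1.2        # constant linear velocity (lower bound)

program_area_m2  = math.pi * (outer_radius_m ** 2 - inner_radius_m ** 2)
spiral_length_m  = program_area_m2 / track_pitch_m      # area divided by pitch
playing_time_min = spiral_length_m / scan_speed_mps / 60

print(round(program_area_m2 * 1e4, 2))     # ~86.05 (cm^2)
print(round(spiral_length_m / 1000, 2))    # ~5.38 (km)
print(round(playing_time_min, 1))          # ~74.7 (minutes)

Repeating the calculation with the tighter 1.5 µm pitch mentioned above gives roughly 80 minutes, which corresponds to the 700 MB capacity.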

Data structure

The smallest entity in a CD is called a frame. A frame consists of 33


bytes and contains six complete 16-bit stereo samples (2 bytes × 2 channels × six
samples equals 24 bytes). The other nine bytes consist of eight Cross-Interleaved
Reed-Solomon Coding error correction bytes and one subcode byte, used for
control and display. Each byte is translated into a 14-bit word using Eight-to-
Fourteen Modulation, which alternates with 3-bit merging words. In total we have
33 × (14 + 3) = 561 bits. A 27-bit unique synchronization word is added, so that
the number of bits in a frame totals 588 (of which only 192 bits are music).

These 588-bit frames are in turn grouped into sectors. Each sector
contains 98 frames, totaling 98 × 24 = 2352 bytes of music. The CD is played at a
speed of 75 sectors per second, which results in 176,400 bytes per second.
Divided by 2 channels and 2 bytes per sample, this results in a sample rate of
44,100 samples per second.
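The same arithmetic can be followed step by step, from a single frame up to the overall data rate (a minimal sketch of the numbers given above):

audio_bytes_per_frame = 2 * 2 * 6             # 2 bytes x 2 channels x 6 samples = 24
frame_bytes = audio_bytes_per_frame + 8 + 1   # plus CIRC bytes and one subcode byte = 33

channel_bits = 33 * (14 + 3) + 27             # EFM words, merging bits, sync word
print(channel_bits)                           # 588 channel bits per frame

audio_bytes_per_sector = 98 * audio_bytes_per_frame
print(audio_bytes_per_sector)                 # 2352 bytes of music per sector

byte_rate = 75 * audio_bytes_per_sector       # 75 sectors are played per second
print(byte_rate)                              # 176400 bytes per second
print(byte_rate // (2 * 2))                   # 44100 samples per second per channel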

"Frame"

For the Red Book stereo audio CD, the time format is commonly measured in
minutes, seconds and frames (mm:ss:ff), where one frame corresponds to one
sector, or 1/75th of a second of stereo sound. Note that in this context the term
frame is used differently by editing applications and does not denote the
physical frame described above. In editing and extracting, the frame is the
smallest addressable time interval for an audio CD, meaning that track start and
end positions can only be defined in 1/75 second steps.

Logical structure

The largest entity on a CD is called a track. A CD can contain up to 99 tracks


(including a data track for mixed mode discs). Each track can in turn have up to
100 indexes, though players which handle this feature are rarely found outside of
pro audio, particularly radio broadcasting. The vast majority of songs are recorded
under index 1, with the pre-gap being index 0. Sometimes hidden tracks are
placed at the end of the last track of the disc, often using index 2 or 3. This is also
the case with some discs offering "101 sound effects", with 100 and 101 being
index 2 and 3 on track 99. The index, if used, is occasionally put on the track
listing as a decimal part of the track number, such as 99.2 or 99.3.


CD-Text

CD-Text is an extension of the Red Book specification for audio CD that allows
for storage of additional text information (e.g., album name, song name, artist) on a
standards-compliant audio CD. The information is stored either in the lead-in area of the
CD, where there is roughly five kilobytes of space available, or in the subcode channels
R to W on the disc, which can store about 31 megabytes.

CD + Graphics

Compact Disc + Graphics (CD+G) is a special audio compact disc that contains
graphics data in addition to the audio data on the disc. The disc can be played on a
regular audio CD player, but when played on a special CD+G player, can output a
graphics signal (typically, the CD+G player is hooked up to a television set or a computer
monitor); these graphics are almost exclusively used to display lyrics on a television set
for karaoke performers to sing along with.

CD + Extended Graphics

Compact Disc + Extended Graphics (CD+EG, also known as CD+XG) is an


improved variant of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG
utilizes basic CD-ROM features to display text and video information in addition to the
music being played. This extra data is stored in subcode channels R-W.

CD-MIDI

Compact Disc MIDI or CD-MIDI is a type of audio CD where sound is recorded


in MIDI format, rather than the PCM format of Red Book audio CD. This provides much
greater capacity in terms of playback duration, but MIDI playback is typically less
realistic than PCM playback.

Video CD

Video CD (aka VCD, View CD, Compact Disc digital video) is a standard
digital format for storing video on a Compact Disc. VCDs are playable in dedicated VCD
players, most modern DVD-Video players, and some video game consoles.

The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC
and is referred to as the White Book standard.

Overall picture quality is intended to be comparable to VHS video, though VHS


has twice as many scanlines (approximately 480 NTSC and 580 PAL) and therefore
double the vertical resolution. Poorly compressed video in VCD tends to be of lower
quality than VHS video, but VCD exhibits block artifacts rather than analog noise.


Super Video CD

Super Video CD (Super Video Compact Disc or SVCD) is a format used for
storing video on standard compact discs. SVCD was intended as a successor to Video CD
and an alternative to DVD-Video, and falls somewhere between both in terms of
technical capability and picture quality.

SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution
of VCD. One CD-R disc can hold up to 60 minutes of standard quality SVCD-format
video. While no specific limit on SVCD video length is mandated by the specification,
one must lower the video bitrate, and therefore quality, in order to accommodate very
long videos. It is usually difficult to fit much more than 100 minutes of video onto one
SVCD without incurring significant quality loss, and many hardware players are unable
to play video with an instantaneous bitrate lower than 300 to 600 kilobits per second.
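The resolution comparison can be made concrete with a small calculation. The NTSC frame sizes used below (SVCD 480x480, VCD 352x240, DVD 720x480) are assumed values from common practice, not figures given in this lesson:

svcd = (480, 480)    # assumed SVCD NTSC frame size
vcd  = (352, 240)    # assumed VCD NTSC frame size
dvd  = (720, 480)    # assumed DVD NTSC frame size

print(round(svcd[0] / dvd[0], 2))                          # 0.67 -> two-thirds of DVD
print(round((svcd[0] * svcd[1]) / (vcd[0] * vcd[1]), 2))   # 2.73 -> over 2.7x VCD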

Photo CD

Photo CD is a system designed by Kodak for digitizing and storing photos in a


CD. Launched in 1992, the discs were designed to hold nearly 100 high quality images,
scanned prints and slides using special proprietary encoding. Photo CD discs are defined
in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as
well. They are intended to play on CD-i players, Photo CD players and any computer
with the suitable software irrespective of the operating system. The images can also be
printed out on photographic paper with a special Kodak machine.

Picture CD

Picture CD is another photo product by Kodak, following on from the earlier


Photo CD product. It holds photos from a single roll of color film, stored at 1024×1536
resolution using JPEG compression. The product is aimed at consumers.

CD Interactive

The Philips "Green Book" specifies the standard for interactive multimedia Compact
Discs designed for CD-i players. This Compact Disc format is unusual because it hides
the initial tracks which contain the software and data files used by CD-i players by
omitting the tracks from the disc's Table of Contents. This causes audio CD players to
skip the CD-i data tracks. This is different from the CD-i Ready format, which puts CD-i
software and data into the pregap of Track 1.


Enhanced CD

Enhanced CD, also known as CD Extra and CD Plus, is a certification mark of the
Recording Industry Association of America for various technologies that combine audio
and computer data for use in both compact disc and CD-ROM players.

The primary data formats for Enhanced CD disks are mixed mode (Yellow Book/Red
Book), CD-i, hidden track, and multisession (Blue Book).

Recordable CD

Recordable compact discs, CD-Rs, are injection moulded with a "blank" data spiral. A
photosensitive dye is then applied, after which the discs are metalized and lacquer coated.
The write laser of the CD recorder changes the color of the dye to allow the read laser of
a standard CD player to see the data as it would an injection moulded compact disc. The
resulting discs can be read by most (but not all) CD-ROM drives and played in most (but
not all) audio CD players.

CD-R recordings are designed to be permanent. Over time the dye's physical
characteristics may change, however, causing read errors and data loss until the reading
device cannot recover with error correction methods. The design life is from 20 to 100
years depending on the quality of the discs, the quality of the writing drive, and storage
conditions. However, testing has demonstrated such degradation of some discs in as little
as 18 months under normal storage conditions. This process is known as CD rot.

CD-Rs follow the Orange Book standard.

Recordable Audio CD

The Recordable Audio CD is designed to be used in a consumer audio CD recorder,


which won't (without modification) accept standard CD-R discs. These consumer audio
CD recorders use SCMS (Serial Copy Management System), an early form of digital
rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The
Recordable Audio CD is typically somewhat more expensive than CD-R due to (a) lower
volume and (b) a 3% AHRA royalty used to compensate the music industry for the
making of a copy.
ReWritable CD

CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write
laser in this case is used to heat and alter the properties (amorphous vs. crystalline) of the
alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in
reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot


read CD-RW discs, although most later CD audio players and stand-alone DVD players
can. CD-RWs follow the Orange Book standard.

Check Your Progress 1


List the different CD-ROM formats.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

8.4 DVD
DVD (also known as "Digital Versatile Disc" or "Digital Video Disc") is a
popular optical disc storage media format. Its main uses are video and data storage. Most
DVDs are of the same dimensions as compact discs (CDs) but store more than 6 times the
data.

Variations of the term DVD often describe the way data is stored on the discs:
DVD-ROM has data which can only be read and not written, DVD-R can be written once
and then functions as a DVD-ROM, and DVD-RAM or DVD-RW holds data that can be
re-written multiple times.

DVD-Video and DVD-Audio discs respectively refer to properly formatted and


structured video and audio content. Other types of DVD discs, including those with video
content, may be referred to as DVD-Data discs. The term "DVD" is commonly misused
to refer to high density optical disc formats in general, such as Blu-ray and HD DVD.

"DVD" was originally used as an initialism for the unofficial term "digital video
disc". It was reported in 1995, at the time of the specification finalization, that the letters
officially stood for "digital versatile disc" (due to non-video applications), however, the
text of the press release announcing the specification finalization only refers to the
technology as "DVD", making no mention of what (if anything) the letters stood for.
Usage in the present day varies, with "DVD", "Digital Video Disc", and "Digital
Versatile Disc" all being common.


8.4.1 DVD disc capacity

Physical size           Single layer        Dual/Double layer
                        GB      GiB         GB       GiB
12 cm, single sided     4.7     4.37        8.54     7.95
12 cm, double sided     9.4     8.74        17.08    15.90
8 cm, single sided      1.4     1.30        2.6      2.42
8 cm, double sided      2.8     2.61        5.2      4.84

The 12 cm type is a standard DVD, and the 8 cm variety is known as a mini-DVD. These
are the same sizes as a standard CD and a mini-CD.

Note: GB here means gigabyte, equal to 10^9 (or 1,000,000,000) bytes. Many programs
will display gibibyte (GiB), equal to 2^30 (or 1,073,741,824) bytes.

Example: A disc with 8.5 GB capacity is equivalent to: (8.5 × 1,000,000,000) /


1,073,741,824 ≈ 7.92 GiB.

Capacity Note: There is a difference in capacity (storage space) between + and - DL


DVD formats. For example, the 12 cm single sided disc has capacities:

Disc Type Sectors bytes GB GiB

DVD-R SL 2,298,496 4,707,319,808 4.7 4.384

DVD+R SL 2,295,104 4,700,372,992 4.7 4.378

DVD-R DL 4,171,712 8,543,666,176 8.5 7.957

DVD+R DL 4,173,824 8,547,991,552 8.5 7.961
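The byte, GB and GiB columns above can be derived from the sector counts, assuming the usual 2048 data bytes per DVD sector (an assumption stated here for illustration, not taken from the table):

BYTES_PER_SECTOR = 2048     # user data bytes per DVD sector (assumed)

discs = {
    "DVD-R SL": 2_298_496,
    "DVD+R SL": 2_295_104,
    "DVD-R DL": 4_171_712,
    "DVD+R DL": 4_173_824,
}

for name, sectors in discs.items():
    size = sectors * BYTES_PER_SECTOR
    print(name, format(size, ","), round(size / 10**9, 1), "GB,",
          round(size / 2**30, 3), "GiB")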


Technology

DVD uses 650 nm wavelength laser diode light as opposed to 780 nm for CD. This
permits a smaller spot on the media surface that is 1.32 µm for DVD while it was
2.11 µm for CD.

Writing speeds for DVD were 1x, that is 1350 kB/s (1318 KiB/s), in first drives and
media models. More recent models at 18x or 20x have 18 or 20 times that speed. Note
that for CD drives, 1x means 153.6 kB/s (150 KiB/s), about 9 times slower.

8.4.2 DVD recordable and rewritable

HP initially developed recordable DVD media from the need to store data for back-up
and transport.

DVD recordables are now also used for consumer audio and video recording. Three
formats were developed: DVD-R/RW (minus/dash), DVD+R/RW (plus), DVD-RAM.

Dual layer recording

Dual Layer recording allows DVD-R and DVD+R discs to store significantly
more data, up to 8.5 Gigabytes per side, per disc, compared with 4.7 Gigabytes for single-
layer discs. DVD-R DL was developed for the DVD Forum by Pioneer Corporation,
DVD+R DL was developed for the DVD+RW Alliance by Philips and Mitsubishi
Kagaku Media (MKM).

A Dual Layer disc differs from its usual DVD counterpart by employing a second
physical layer within the disc itself. The drive with Dual Layer capability accesses the
second layer by shining the laser through the first semi-transparent layer. The layer
change mechanism in some DVD players can show a noticeable pause, as long as two
seconds by some accounts. This caused more than a few viewers to worry that their dual
layer discs were damaged or defective, with the end result that studios began listing a
standard message explaining the dual layer pausing effect on all dual layer disc
packaging.

DVD recordable discs supporting this technology are backward compatible with
some existing DVD players and DVD-ROM drives. Many current DVD recorders
support dual-layer technology, and the price is now comparable to that of single-layer
drives, though the blank media remain more expensive.

DVD-Video

DVD-Video is a standard for storing video content on DVD media.

Though many resolutions and formats are supported, most consumer DVD-Video
discs use either 4:3 or anamorphic 16:9 aspect ratio MPEG-2 video, stored at a resolution


of 720×480 (NTSC) or 720×576 (PAL) at 24, 25, or 29.97 frames per second. Audio is commonly stored
using the Dolby Digital (AC-3) or Digital Theater System (DTS) formats, ranging from
16-bits/48kHz to 24-bits/96kHz format with monaural to 7.1 channel "Surround Sound"
presentation, and/or MPEG-1 Layer 2. Although the specifications for video and audio
requirements vary by global region and television system, many DVD players support all
possible formats. DVD-Video also supports features like menus, selectable subtitles,
multiple camera angles, and multiple audio tracks.

DVD-Audio

DVD-Audio is a format for delivering high-fidelity audio content on a DVD. It offers


many channel configuration options (from mono to 7.1 surround sound) at various
sampling frequencies (up to 24-bits/192kHz versus CDDA's 16-bits/44.1kHz). Compared
with the CD format, the much higher capacity DVD format enables the inclusion of
considerably more music (with respect to total running time and quantity of songs) and/or
far higher audio quality (reflected by higher sampling rates and greater bit depths, and/or
additional channels for spatial sound reproduction).

Despite DVD-Audio's superior technical specifications, there is debate as to whether the


resulting audio enhancements are distinguishable in typical listening environments.
DVD-Audio currently forms a niche market, probably because it is caught up in the very
sort of format war with the rival SACD standard that DVD-Video avoided.

Check Your Progress 2


Specify the different storage capacities available in a DVD

Notes: a) Write your answers in the space given below.


b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

8.4.3 Security in DVD

DVD-Audio discs employ a robust copy prevention mechanism, called Content


Protection for Prerecorded Media (CPPM) developed by the 4C group (IBM, Intel,
Matsushita, and Toshiba).

To date, CPPM has not been "broken" in the sense that DVD-Video's CSS has been
broken, but ways to circumvent it have been developed. By modifying commercial
DVD(-Audio) playback software to write the decrypted and decoded audio streams to the
hard disk, users can, essentially, extract content from DVD-Audio discs much in the same
way they can from DVD-Video discs.


8.4.4 Competitors and successors to DVD

There are several possible successors to DVD being developed by different


consortia. Sony/Panasonic's Blu-ray Disc (BD) and Toshiba's HD DVD and 3D optical
data storage are being actively developed.

Both Blu-ray Disc and HD DVD are being promoted as the successor to DVD.

8.5 Let us sum up

In this lesson we have discussed the following topics :


 CD-ROM and DVD are optical storage devices.
 A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate
plastic and weighs approximately 16 grams.
 DVD (also known as "Digital Versatile Disc" or "Digital Video Disc") is a
popular optical disc storage media format.
 The different formats of CD-ROM includes
Audio CD, CD-Text, CD + Graphics, CD + Extended Graphics, CD-MIDI,
Video CD, Super Video CD, Photo CD, Picture CD, CD Interactive,
Enhanced CD, Recordable CD, Recordable Audio CD and ReWritable CD.

8.6 Lesson-end activities

1. Identify the different storage capacities and the type of CD-ROM and DVD.
Compare their duration of recording with the memory capacity of the storage
device.

8.7 Model answers to “Check your progress”

1. The different formats of CD-ROM includes


i) ReWritable CD
ii) Recordable Audio CD
iii) Recordable CD
iv) CD Interactive
v) Enhanced CD
vi) Picture CD
vii) Photo CD
viii) Super Video CD
ix) Video CD
x) CD-MIDI
xi) CD + Graphics
xii) CD + Extended Graphics
xiii) CD-Text


2. The storage capacities of DVD are


- DVD-R SL: 4,707,319,808 bytes (4.7 GB)
- DVD+R SL: 4,700,372,992 bytes (4.7 GB)
- DVD-R DL: 8,543,666,176 bytes (8.5 GB)
- DVD+R DL: 8,547,991,552 bytes (8.5 GB)

8.8 References

1. “Multimedia: Making it Work” By Tay Vaughan


2. “Multimedia in Practice – Technology and applications” By Jeffcoat
3. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.


Lesson 9 Multimedia Hardware – Input Devices, Output


Devices, Communication Devices

Contents

9.0 Aims and Objectives


9.1 Introduction
9.2 Input Devices
9.3 Output Hardwares
9.4 Communication Devices
9.5 Let us sum up
9.6 Lesson-end activities
9.7 Model answers to “Check your progress”
9.8 References

9.0 Aims and Objectives

This lesson aims at introducing the multimedia hardware used for providing
interactivity between the user and the multimedia software.

At the end of this lesson the learner will be able to


i. Identify input devices
ii. Identify and select output hardware
iii. List and understand different communication devices

9.1 Introduction

An input device is a hardware mechanism that transforms information in the


external world for consumption by a computer.

An output device is hardware used to communicate the results of data


processing carried out by the user or CPU.

9.2 Input devices

Often, input devices are under direct control by a human user, who uses them to
communicate commands or other information to be processed by the computer, which
may then transmit feedback to the user through an output device. Input and output
devices together make up the hardware interface between a computer and the user or
external world. Typical examples of input devices include keyboards and mice. However,
there are others which provide many more degrees of freedom. In general, any sensor
which monitors, scans for and accepts information from the external world can be
considered an input device, whether or not the information is under the direct control of a
user.


9.2.1 Classification of Input Devices


Input devices can be classified according to:-

 the modality of input (e.g. mechanical motion, audio, visual, sound, etc.)
 whether the input is discrete (e.g. keypresses) or continuous (e.g. a mouse's
position, though digitized into a discrete quantity, is high-resolution enough to be
thought of as continuous)
 the number of degrees of freedom involved (e.g. many mice allow 2D positional
input, but some devices allow 3D input, such as the Logitech Magellan Space
Mouse)

Pointing devices, which are input devices used to specify a position in space, can further
be classified according to

 Whether the input is direct or indirect. With direct input, the input space coincides
with the display space, i.e. pointing is done in the space where visual feedback or
the cursor appears. Touchscreens and light pens involve direct input. Examples
involving indirect input include the mouse and trackball.
 Whether the positional information is absolute (e.g. on a touch screen) or relative
(e.g. with a mouse that can be lifted and repositioned)

Note that direct input is almost necessarily absolute, but indirect input may be either
absolute or relative. For example, digitizing graphics tablets that do not have an
embedded screen involve indirect input, and sense absolute positions and are often run in
an absolute input mode, but they may also be set up to simulate a relative input mode
where the stylus or puck can be lifted and repositioned.

9.2.2 Keyboards

A keyboard is the most common method of interaction with a computer.


Keyboards provide various tactile responses (from firm to mushy) and have various
layouts depending upon your computer system and keyboard model. Keyboards are
typically rated for at least 50 million cycles (the number of times a key can be pressed
before it might suffer breakdown).

The most common keyboard for PCs is the 101 style (which provides 101 keys),
although many styles are available with more or fewer special keys, LEDs, and other
features, such as a plastic membrane cover for industrial or food-service applications or
flexible “ergonomic” styles. Macintosh keyboards connect to the Apple Desktop Bus
(ADB), which manages all forms of user input- from digitizing tablets to mice.

Examples of types of keyboards include

 Computer keyboard
 Keyer


 Chorded keyboard
 LPFK

9.2.3 Pointing devices

A pointing device is any computer hardware component (specifically human


interface device) that allows a user to input spatial (ie, continuous and multi-dimensional)
data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to
control and provide data to the computer using physical gestures - point, click, and drag -
typically by moving a hand-held mouse across the surface of the physical desktop and
activating switches on the mouse.

While the most common pointing device by far is the mouse, many more devices
have been developed. However, the mouse is commonly used as a metaphor for devices that
move the cursor. A mouse is the standard tool for interacting with a graphical user
interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required
but recommended. Even though the Windows environment accepts keyboard entry in lieu
of mouse point-and-click actions, your multimedia project should typically be designed
with the mouse or touchscreen in mind. The buttons on the mouse provide additional user
input, such as pointing and double-clicking to open a document, or the click-and-drag
operation, in which the mouse button is pressed and held down to drag (move) an object,
or to move to and select an item on a pull-down menu, or to access context-sensitive help.
The Apple mouse has one button; PC mice may have as many as three.

Examples of common pointing devices include

 mouse
 trackball
 touchpad
 spaceBall - 6 degrees-of-freedom controller
 touchscreen
 graphics tablets (or digitizing tablet) that use a stylus
 light pen
 light gun
 eye tracking devices
 steering wheel - can be thought of as a 1D pointing device
 yoke (aircraft)
 jog dial - another 1D pointing device
 isotonic joysticks - where the user can freely change the position of the stick,
with more or less constant force
o joystick
o analog stick
 isometric joysticks - where the user controls the stick by varying the amount
of force they push with, and the position of the stick remains more or less
constant
o pointing stick


 discrete pointing devices


o directional pad - a very simple keyboard
o dance pad - used to point at gross locations in space with feet

9.2.4 High-degree of freedom input devices

Some devices allow many continuous degrees of freedom to be input, and could
sometimes be used as pointing devices, but could also be used in other ways that don't
conceptually involve pointing at a location in space.

 Wired glove
 Shape Tape

Composite devices

(Figure: a Wii Remote with an attached strap)

Input devices, such as buttons and joysticks, can be combined on a single physical
device that could be thought of as a composite device. Many gaming devices have
controllers like this.

 Game controller
 Gamepad (or joypad)
 Paddle (game controller)
 Wii Remote

9.2.5 Imaging and Video input devices

Flat-Bed Scanners

A scanner may be the most useful piece of equipment used in the course of
producing a multimedia project; there are flat-bed and handheld scanners. Most
commonly available are gray-scale and color flat-bed scanners that provide a
resolution of 300 or 600 dots per inch (dpi). Professional graphics houses may use
even higher resolution units. Handheld scanners can be useful for scanning small
images and columns of text, but they may prove inadequate for the multimedia
development.


Be aware that scanned images, particularly those at high resolution and in


color, demand an extremely large amount of storage space on the hard disk, no
matter what instrument is used to do the scanning. Also remember that the final
monitor display resolution for your multimedia project will probably be just 72 or
96 dpi; leave the very expensive ultra-high-resolution scanners for the desktop
publishers. Most inexpensive flat-bed scanners offer at least 300 dpi resolution, and
most scanners allow you to set the scanning resolution.
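A quick estimate shows why scanned images are so demanding of disk space. The page size, resolution and colour depth below are illustrative assumptions only:

width_px  = int(8.5 * 300)     # 2550 pixels across an 8.5 inch page at 300 dpi
height_px = 11 * 300           # 3300 pixels down an 11 inch page
bytes_per_pixel = 3            # 24-bit RGB colour

size_bytes = width_px * height_px * bytes_per_pixel
print(round(size_bytes / 2**20, 1))   # roughly 24 MB before any compression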

Scanners help make clear electronic images of existing artwork such as


photos, ads, pen drawings, and cartoons, and can save many hours when you are
incorporating proprietary art into the application. Scanners also give a starting
point for the creative diversions. The devices used for capturing image and video
are:

 Webcam
 Image scanner
 Fingerprint scanner
 Barcode reader
 3D scanner
 medical imaging sensor technology
o Computed tomography
o Magnetic resonance imaging
o Positron emission tomography
o Medical ultrasonography

9.2.6 Audio input devices


The devices used for capturing audio are

 Microphone
 Speech recognition

Note that MIDI allows musical instruments to be used as input devices as well.

9.2.7 Touchscreens

Touchscreens are monitors that usually have a textured coating across the glass
face. This coating is sensitive to pressure and registers the location of the user’s finger
when it touches the screen. The Touch Mate System, which has no coating, actually
measures the pitch, roll, and yaw rotation of the monitor when pressed by a finger, and
determines how much force was exerted and the location where the force was applied.
Other touchscreens use invisible beams of infrared light that crisscross the front of the
monitor to calculate where a finger was pressed. Pressing twice on the screen in quick
succession simulates a double-click, and pressing and dragging the finger, without
lifting it, to another location simulates a mouse click-
and-drag. A keyboard is sometimes simulated using an onscreen representation so users
can input names, numbers, and other text by pressing “keys”.


Touchscreens are not recommended for day-to-day computer work, but are excellent for
multimedia applications in a kiosk, at a trade show, or in a museum delivery system-
anything involving public input and simple tasks. When your project is designed to use a
touchscreen, the monitor is the only input device required, so you can secure all other
system hardware behind locked doors to prevent theft or tampering.

Check Your Progress 1


List a few input devices.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

9.3 OUTPUT DEVICES

Presentation of the audio and visual components of the multimedia project requires
hardware that may or may not be included with the computer itself-speakers, amplifiers,
monitors, motion video devices, and capable storage systems. The better the equipment,
of course, the better the presentation. There is no greater test of the benefits of good
output hardware than to feed the audio output of your computer into an external amplifier
system: suddenly the bass sounds become deeper and richer, and even music sampled at
low quality may seem to be acceptable.

9.3.1.Audio devices

All Macintoshes are equipped with an internal speaker and a dedicated sound chip,
and they are capable of audio output without additional hardware and/or software. To
take advantage of built-in stereo sound, external speakers are required. Digitizing sound
on the Macintosh requires an external microphone and sound editing/recording software
such as SoundEdit 16 from Macromedia, Alchemy from Passport, or Sound Designer
from DigiDesign.

9.3.2 Amplifiers and Speakers


Often the speakers used during a project’s development will not be adequate for its
presentation. Speakers with built-in amplifiers or attached to an external amplifier are
important when the project will be presented to a large audience or in a noisy setting.
9.3.3 Monitors
The monitor needed for development of multimedia projects depends on the type of
multimedia application created, as well as what computer is being used. A wide variety of
monitors is available for both Macintoshes and PCs. High-end, large-screen graphics
monitors are available for both, and they are expensive.


Serious multimedia developers will often attach more than one monitor to their
computers, using add-on graphics boards. This is because many authoring systems allow
you to work with several open windows at a time, so you can dedicate one monitor to
viewing the work you are creating or designing and perform various editing tasks in
windows on other monitors that do not block the view of your work. With a single
monitor, editing windows tend to overlap the work view when developing with
Macromedia’s authoring environment, Director. Developing in Director is best with at
least two monitors, one to view the work and the other to view the “Score”. A third
monitor is often added by Director developers to display the “Cast”.
9.3.4 Video Device
No other contemporary message medium has the visual impact of video. With a
video digitizing board installed in a computer, we can display a television picture on your
monitor. Some boards include a frame-grabber feature for capturing the image and
turning it in to a color bitmap, which can be saved as a PICT or TIFF file and then used
as part of a graphic or a background in your project.
Display of video on any computer platform requires manipulation of an enormous
amount of data. When used in conjunction with videodisc players, which give precise
control over the images being viewed, video cards let you place an image into a window on
the computer monitor; a second television screen dedicated to video is not required. And
video cards typically come with excellent special effects software.
There are many video cards available today. Most of these support various video-
in-a-window sizes, identification of source video, setup of play sequences are segments,
special effects, frame grabbing, digital movie making; and some have built-in television
tuners so you can watch your favorite programs in a window while working on other
things. In windows, video overlay boards are controlled through the Media Control
Interface. On the Macintosh, they are often controlled by external commands and
functions (XCMDs and XFCNs) linked to your authoring software.
Good video greatly enhances your project; poor video will ruin it. Whether you
deliver your video from tape using VISCA controls, from videodisc, or as a QuickTime
or AVI movie, it is important that your source material be of high quality.
9.3.5 Projectors
When it is necessary to show a material to more viewers than can huddle around a
computer monitor, it will be necessary to project it on to large screen or even a white-
painted wall. Cathode-ray tube (CRT) projectors, liquid crystal display (LCD) panels
attached to an overhead projector, stand-alone LCD projectors, and light-valve projectors
are available to splash the work on to big-screen surfaces.
CRT projectors have been around for quite a while- they are the original “big-
screen” televisions. They use three separate projection tubes and lenses (red, green, and
blue), and three color channels of light must “converge” accurately on the screen. Setup,
focusing, and aligning are important to getting a clear and crisp picture. CRT projectors
are compatible with the output of most computers as well as televisions.
LCD panels are portable devices that fit in a briefcase. The panel is placed on the
glass surface of a standard overhead projector available in most schools, conference


rooms, and meeting halls. While the overhead projector does the projection work, the
panel is connected to the computer and provides the image, in thousands of colors and,
with active-matrix technology, at speeds that allow full-motion video and animation.
Because LCD panels are small, they are popular for on-the-road presentations, often
connected to a laptop computer and using a locally available overhead projector.
More complete LCD projection panels contain a projection lamp and lenses and do
not require a separate overhead projector. They typically produce an image brighter and
sharper than the simple panel model, but they are somewhat larger and cannot travel in a
briefcase.
Light-valve projectors compete with high-end CRT projectors and use a liquid crystal
technology in which a low-intensity color image modulates a high-intensity light beam.
These units are expensive, but the image from a light-valve projector is very bright and
color saturated, and can be projected onto screens as wide as 10 meters.
9.3.6 Printers
With the advent of reasonably priced color printers, hard-copy output has entered
the multimedia scene. From storyboards to presentation to production of collateral
marketing material, color printers have become an important part of the multimedia
development environment. Color helps clarify concepts, improve understanding and
retention of information, and organize complex data. As multimedia designers already
know intelligent use of colors is critical to the success of a project. Tektronix offers both
solid ink and laser options, and either Phaser 560 will print more than 10,000 pages at a
rate of 5 color pages or 14 monochrome pages per minute before requiring new toner.
Epson provides lower-cost and lower-performance solutions for home and small business
users; Hewlett Packard’s Color LaserJet line competes with both. Most printer
manufacturers offer a color model; just as all computers once used monochrome monitors
but are now color, all printers will become color printers.
Check Your Progress 2
List a few output devices.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


9.4 COMMUNICATION DEVICES

Many multimedia applications are developed in workgroups comprising


instructional designers, writers, graphic artists, programmers, and musicians located in
the same office space or building. The workgroup members’ computers typically are
connected on a local area network (LAN). The client’s computers, however, may be
thousands of miles distant, requiring other methods for good communication.

Communication among workgroup members and with the client is essential to the
efficient and accurate completion of a project. And when speedy data transfer is needed
immediately, a modem or network is required. If the client and the service provider are
both connected to the Internet, a combination of communication by e-mail and by FTP
(File Transfer Protocol) may be the most cost-effective and efficient solution for both
creative development and project management.

In the workplace, it is necessary to use quality equipment and software for the
communication setup. The cost, in both time and money, of stable and fast networking
will be returned to the content developer.

9.4.1 Modems

Modems can be connected to the computer externally at the port or internally as a


separate board. Internal modems often include fax capability. Be sure your modem is
Hayes-compatible. The Hayes AT standard command set (named for the ATTENTION
command that precedes all other commands) allows to work with most software
communications packages.

Modem speed, measured in baud, is the most important consideration. Because the
multimedia file that contains the graphics, audio resources, video samples, and
progressive versions of your project are usually large, you need to move as much data as
possible in as short a time as possible. Today’s standards dictate at least a V.34 28,800
bps modem. Transmitting at only 2400 bps, a 350KB file may take as long as 45 minutes
to send, but at 28.8 kbps, you can be done in a couple of minutes. Most modems follow
the CCITT V.32 or V.42 standards that provide data compression algorithms when
communicating with another similarly equipped modem. Compression saves significant
transmission time and money, especially over long distance. Be sure the modem uses a
standard compression system (like V.32), not a proprietary one.
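A rough transfer-time estimate illustrates the point. The sketch below ignores start/stop bits, protocol overhead and retransmissions, so real-world times are noticeably longer; the function name is hypothetical and for illustration only:

def transfer_minutes(file_kilobytes, bits_per_second):
    # bits to send divided by the line speed, converted to minutes
    return file_kilobytes * 1024 * 8 / bits_per_second / 60

print(round(transfer_minutes(350, 28_800), 1))    # about 1.7 minutes at 28.8 kbps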

According to the laws of physics, copper telephone lines and the switching
equipment at the phone companies’ central offices can handle modulated analog signals
up to about 28,000 bps on “clean” lines. Modem manufactures that advertise data
transmission speeds higher than that (56 Kbps) are counting on their hardware-based
compression algorithms to crunch the data before sending it, decompressing it upon
arrival at the receiving end. If you have already compressed the data into a .SIT, .SEA,
.ARC, or .ZIP file, you may not reap any benefit from the higher advertised speeds


because it is difficult to compress an already-compressed file. New high-speed, high-
compression schemes for transmission over telephone lines are on the horizon.

9.4.2 ISDN

For higher transmission speeds, you will need to use Integrated Services Digital
Network (ISDN), Switched-56, T1, T3, DSL, ATM, or another of the telephone
companies’ Digital Switched Network Services.

ISDN lines are popular because of their fast 128 Kbps data transfer rate-four to five
times faster than the more common 28.8 Kbps analog modem. ISDN lines (and the
required ISDN hardware, often misnamed “ISDN modems” even though no
modulation/demodulation of the analog signal occurs) are important for Internet access,
networking, and audio and video conferencing. They are more expensive than
conventional analog or POTS (Plain Old Telephone Service) lines, so analyze your costs
and benefits carefully before upgrading to ISDN. Newer and faster Digital Subscriber
Line (DSL) technology using copper lines and promoted by the telephone companies may
overtake ISDN.

9.4.3 Cable Modems

In November 1995, a consortium of cable television industry leaders announced


agreement with key equipment manufacturers to specify some of the technical ways cable
networks and data equipment talk with one another. 3COM, AT&T, COM21, General
Instrument, Hewlett Packard, Hughes, Hybrid, IBM, Intel, LANCity, MicroUnity,
Motorola, Nortel, Panasonic, Scientific Atlanta, Terrayon, Toshiba, and Zenith currently
supply cable modem products. While the cable television networks cross 97 percent of
property lines in North America, each local cable operator may use different equipment,
wires, and software, and cable modems still remain somewhat experimental; hence the
call for interoperability standards.

Cable modems operate at speeds 100 to 1,000 times as fast as a telephone modem,
receiving data at up to 10Mbps and sending data at speeds between 2Mbps and 10 Mbps.
They can provide not only high-bandwidth Internet access but also streaming audio and
video for television viewing. Most will connect to computers with 10baseT Ethernet
connectors.

Cable modems usually send and receive data asymmetrically – they receive more
(faster) than they send (slower). In the downstream direction from provider to user, the
data are modulated and placed on a common 6 MHz television carrier, somewhere
between 42 MHz and 750 MHz. the upstream channel, or reverse path, from the user
back to the provider is more difficult to engineer because cable is a noisy environment-
with interference from HAM radio, CB radio, home appliances, loose connectors, and
poor home installation.


9.5 Let us sum up


In this lesson we have learnt the details about input devices, connecting devices and
output devices. We have discussed the following key points in this lesson :
 Input Devices and Output devices provide interactivity.
 Communication devices enables data transfer.
9.6 Lesson-end activities
1. Most input and output devices have a certain resolution. Locate the
specifications on the web for a CRT monitor, a LCD monitor, a scanner, and a
digital camera. Note the manufacturer, model, and resolution for each one.
Document your findings.
2. List the input devices present in a computer by identifying it from the control
panel.
9.7 Model answers to “Check your progress”
1. A few of the following input devices can be listed :
Keyboard, Mouse, Microphone, Scanner, Touchscreen, Joystick,
Lightpen, tablets.
2. The following is the list of Output devices. A few of them can be listed.
Printer, Speaker, Plotter, Monitor, Projectors
9.8 References
1. “Multimedia: Making it Work” By Tay Vaughan
2. “Multimedia in Practice – Technology and applications” By Jeffcoat
3. http://www.wacona.com/input/input.html
4. www.webopedia.com/TERM/I/input_device.html
5. http://en.wikipedia.org/wiki/Input_device


Lesson 10 Multimedia Workstation


Contents

10.0 Aims and Objectives


10.1 Introduction
10.2 Communication Architecture
10.3 Hybrid Systems
10.4 Digital System
10.5 Multimedia Workstation
10.6 Preference of Operating System for workstation
10.7 Let us sum up
10.8 Lesson-end activities
10.9 Model answers to “Check your progress”
10.10 References

10.0 Aims and Objectives

In this lesson we will learn the different requirements for a computer to become a
multimedia workstation. At the end of this chapter the learner will be able to identify the
requirements for making a computer, a multimedia workstation.

10.1 Introduction
A multimedia workstation is a computer with facilities to handle multimedia objects
such as text, audio, video, animation and images. A multimedia workstation was earlier
identified as MPC (Multimedia Personal Computer). In the current scenario all
computers are prebuilt with multimedia processing facilities. Hence it is not necessary to
identify a computer as MPC.

A multimedia system is comprised of both hardware and software components,


but the major driving force behind a multimedia development is research and
development in hardware capabilities.

Besides the multimedia hardware capabilities of current personal computers (PCs)


and workstations, computer networks with their increasing throughput and speed start to
offer services which support multimedia communication systems. Also in this area,
computer networking technology advances faster than the software.

10.2 Communication Architecture

Local multimedia systems (i.e., multimedia workstations) frequently include a


network interface (e.g., Ethernet card) through which they can communicate with each


other. However, the transmission of audio and video cannot be carried out with only the
conventional communication infrastructure and network adapters.

Until now, the solution was that continuous and discrete media have been
considered in different environments, independently of each other. It means that fully
different systems were built. For example, on the one hand, the analog telephone system
provides audio transmission services using its original dial devices connected by copper
wires to the telephone company’s nearest end office. The end offices are connected to
switching centers, called toll offices, and these centers are connected through high
bandwidth intertoll trunks to intermediate switching offices. This hierarchical structure
allows for reliable audio communication. On the other hand, digital computer networks
provide data transmission services at lower data rates using network adapters connected
by copper wires to switches and routers.

Even today, professional radio and television studios transmit audio and video
streams in the form of analog signals, although most network components (e.g.,
switches), over which these signals are transmitted, work internally in a digital mode.

10.3 Hybrid Systems

By using existing technologies, integration and interaction between analog and


digital environments can be implemented. This integration approach is called the hybrid
approach.

The main advantage of this approach is the high quality of audio and video and all
the necessary devices for input, output, storage and transfer that are available. The
hybrid approach is used for studying application user interfaces, application
programming interfaces or application scenarios.

Integrated Device Control

One possible integration approach is to provide a control of analog input/output


audio-video components in the digital environment. Moreover, the connection between
the sources (e.g., CD player, camera, microphone) and destinations (e.g., video recorder,
write-able CD), or the switching of audio-video signals can be controlled digitally.

Integrated Transmission Control

A second possibility to integrate digital and analog components is to provide a


common transmission control. This approach implies that analog audio-video sources and
destinations are connected to the computer for control purposes to transmit continuous
data over digital networks, such as a cable network.


Integrated Transmission

The next possibility to integrated digital and analog components is to provide a


common transmission network. This implies that external analog audio-video devices are
connected to computers using A/D (D/A) converters outside of the computer, not only for
control, but also for processing purposes. Continuous data are transmitted over shared
data networks.

10.4 Digital Systems

Connection to Workstations

In digital systems, audio-video devices can be connected directly to the


computers (workstations) and digitized audio-video data are transmitted over
shared data networks. Audio-video devices in these systems can be either analog
or digital.

Connection to switches

Another possibility to connect audio-video devices to a digital network is


to connect them directly to the network switches.

10.5 Multimedia Workstation

Current workstations are designed for the manipulation of discrete media information.
The data should be exchanged as quickly as possible between the involved components,
often interconnected by a common bus. Computationally intensive and dedicated
processing requirements lead to dedicated hardware, firmware and additional boards.
Examples of these components are hard disk controllers and FDDI-adapters.

A multimedia workstation is designed for the simultaneous manipulation of discrete


and continuous media information. The main components of a multimedia workstation
are:

 Standard Processor(s) for the processing of discrete media information.


 Main Memory and Secondary Storage with corresponding autonomous
controllers.
 Universal Processor(s) for processing of data in real-time (signal processors).
 Special-Purpose Processors designed for graphics, audio and video media
(containing, for example, a micro code decompression method for DVI
processors).
 Graphics and video Adapters.
 Communications Adapters (for example, the Asynchronous Transfer Mode Host
Interface).
 Further special-purpose adapters.


Bus

Within current workstations, data are transmitted over the traditional


asynchronous bus, meaning that if audio-video devices are connected to a workstation,
continuous data are processed in a workstation, and the data transfer is done over this
bus, which provides low and unpredictable time guarantees. In multimedia workstations,
in addition to this bus, the data will be transmitted over a second bus which can keep time
guarantees. In later technical implementations, a bus may be developed which transmits
two kinds of data according to their requirements (this is known as a multi-bus system).

The notion of a bus has to be divided into system bus and periphery bus. In their
current versions, system busses such as ISA, EISA, Microchannel, Q-bus and VME-bus
support only limited transfer of continuous data. The further development of periphery
busses, such as SCSI, is aimed at the development of data transfer for continuous media.

Multimedia Devices

The main peripheral components are the necessary input and output multimedia
devices. Most of these devices were developed for or by consumer electronics, resulting
in the relative low cost of the devices. Microphones, headphones, as well as passive and
active speakers, are examples. For the most part, active speakers and headphones are
connected to the computer because it, generally, does not contain an amplifier. The
camera for video input is also taken from consumer electronics. Hence, a video interface
in a computer must accommodate the most commonly used video techniques/standards,
i.e., NTSC, PAL, SECAM with FBAS, RGB, YUV and YIQ modes. A monitor serves for
video output. Besides Cathode Ray Tube (CRT) monitors (e.g., current workstation
terminals), more and more terminals use the color-LCD technique (e.g., a projection TV
monitor uses the LCD technique). Further, to display video, monitor characteristics, such
as color, high resolution, and flat and large shape, are important.

Primary Storage

Audio and video data are copied among different system components in a digital
system. An example of tasks, where copying of data is necessary, is a segmentation of the
LDUs or the appending of a Header and Trailer. The copying operation uses system
software-specific memory management designed for continuous media. This kind of
memory management needs sufficient main memory (primary storage). Besides ROMs,
PROMs, EPROMs, and partially static memory elements, main memory consists mostly of
dynamic RAM modules; the low cost of these modules, together with steadily increasing
storage capacities, benefits the multimedia world.

Secondary Storage

The main requirements put on secondary storage and the corresponding controller
are a high storage density and a low access time, respectively.


On the one hand, to achieve a high storage density, for example, a Constant
Linear Velocity (CLV) technique was defined for the CD-DA (Compact Disc Digital
Audio). CLV guarantees that the data density is kept constant for the entire optical disk at
the expense of a higher mean access time. On the other hand, to achieve time guarantees,
i.e., lower mean access time, a Constant Angle Velocity (CAV) technique could be used.
Because the time requirement is more important, the systems with a CAV are more
suitable for multimedia than the systems with a CLV.

Processor

In a multimedia workstation, the necessary work is distributed among different


processors. Although currently, and for the near future, this does not mean that all
multimedia workstations must be multi-processor systems. The processors are designed
for different tasks. For example, a Dedicated Signal Processor (DSP) allows compression
and decompression of audio in real-time. Moreover, there can be special-purpose
processors employed for video. The following Figure shows an example of a multi-
processor for multimedia workstations envisioned for the future.

(Figure: Example of a Multiprocessor System - several CPUs sharing a cache and a bus
interface, with vector units and DVI technology attached.)

Operating System

Another possible variant to provide computation of discrete and continuous data


in a multimedia workstation could be distinguishing between processes for discrete data
computation and for continuous data processing. These processes could run on separate
processors. Given an adequate operating system, perhaps even one processor could be
shared according to the requirements between processes for discrete and continuous data.


Check Your Progress 1


List a few components required for a multimedia workstation.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

10.6 Preference of Operating System for Workstation.

Selection of the proper platform for developing the multimedia project may be
based on your personal preference of computer, your budget constraints, and project
delivery requirements, and the type of material and content in the project. Many
developers believe that multimedia project development is smoother and easier on the
Macintosh than in Windows, even though projects destined to run in Windows must then
be ported across platforms. But hardware and authoring software tools for Windows
have improved; today you can produce many multimedia projects with equal ease in
either the Windows or Macintosh environment.

10.6.1 The Macintosh Platform

All Macintoshes can record and play sound. Many include hardware and software
for digitizing and editing video and producing DVD discs. High-quality graphics
capability is available “out of the box.” Unlike the Windows environment, where users
can operate any application with keyboard input, the Macintosh requires a mouse.

The Macintosh computer you will need for developing a project depends entirely
upon the project’s delivery requirements, its content, and the tools you will need for
production.

10.6.2 The Windows Platform

Unlike the Apple Macintosh computer, a Windows computer is not a computer


per se, but rather a collection of parts that are tied together by the requirements of the
Windows operating system. Power supplies, processors, hard disks, CD-ROM players,
video and audio components, monitors, keyboards, and mice; it doesn’t matter where they
come from or who makes them. Made in Texas, Taiwan, Indonesia, Ireland, Mexico, or
Malaysia by widely known or little-known manufacturers, these components are
assembled and branded by Dell, IBM, Gateway, and others into computers that run
Windows.

In the early days, Microsoft organized the major PC hardware manufacturers into
the Multimedia PC Marketing Council to develop a set of specifications that would allow
Windows to deliver a dependable multimedia experience.


10.6.3 Networking Macintosh and Windows Computers

When you work in a multimedia development environment consisting of a
mixture of Macintosh and Windows computers, you will want them to communicate with
each other. It may also be necessary to share other resources among them, such as
printers.

Local area networks (LANs) and wide area networks (WANs) can connect the
members of a workgroup. In a LAN, workstations are usually located within a short
distance of one another, on the same floor of a building, for example. WANs are
communication systems spanning great distances, typically set up and managed by large
corporations and institutions for their own use, or to share with other users.

LANs allow direct communication and sharing of peripheral resources such as file
servers, printers, scanners, and network modems. They use a variety of proprietary
technologies, most commonly Ethernet or Token Ring, to make the connections. They
can usually be set up with twisted-pair telephone wire, but be sure to use "data-grade
level 5" or "cat-5" wire; it makes a real difference, even if it's a little more expensive.
Bad wiring will give the user never-ending headaches of intermittent and often untraceable
crashes and failures.

10.7 Let us sum up


In this lesson we have learnt the different requirements for a multimedia workstation.
 A multimedia workstation is a computer with facilities to handle multimedia objects
such as text, audio, video, animation and images.
 Macintosh is the pioneer in multimedia operating systems.

10.8 Lesson-end activities

1. Identify the workstation components installed in a computer and list the
multimedia component/object associated with each device.

10.9 Model answers to “Check your progress”

The multimedia workstation may consist of the following: bus, connecting
devices, processor, operating system, primary storage and secondary storage.

10.10 References
1. "Multimedia:Concepts and Practice" By Stephen McGloughlin
2. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
3. Digital Multimedia by Nigel Chapman, Jenny Chapman
4. Video and Image Processing in Multimedia Systems By Stephen W. Smoliar,
HongJiang Zhang


UNIT - III

Lesson 11 Documents, Hypertext, Hypermedia


Contents

11.0 Aims and Objectives


11.1 Introduction
11.2 Documents
11.3 Hypertext
11.4 Hypermedia
11.5 Hypertext and Hypermedia
11.6 Hypertext, Hypermedia and Multimedia
11.7 Hypertext and the World Wide Web
11.8 Let us sum up
11.9 Lesson-end activities
11.10 Model answers to “Check your progress”
11.11 References

11.0 Aims and Objectives

This lesson aims at introducing the concepts of hypertext and hypermedia. At the
end of this chapter the learner will be able to :
i) Understand the concepts of hypertext and hypermedia
ii) Distinguish hypertext and hypermedia

11.1 Introduction

A document consists of a set of structural information that can be in different forms
of media, and that during presentation can be generated or recorded. A document is
aimed at the perception of a human, and is accessible for computer processing.

11.2 Documents

A multimedia document is a document which is comprised of information coded


in at least one continuous (time-dependent) medium and in one discrete (time-
independent) medium. Integration of the different media is given through a close relation
between information units. This is also called synchronization. A multimedia document is
closely related to its environment of tools, data abstractions, basic concepts and document
architecture.

11.2.1 Document Architecture:

Exchanging documents entails exchanging the document content as well as the


document structure. This requires that both documents have the same document


architecture. The architectures that are currently standardized, or in the process of
standardization, are the Standard Generalized Markup Language (SGML) and the Open
Document Architecture (ODA). There are also proprietary document architectures, such
as DEC's Document Content Architecture (DCA) and IBM's Mixed Object Document
Content Architecture (MO:DCA).

Information architectures use their data abstractions and concepts. A document


architecture describes the connections among the individual elements represented as
models (e.g., presentation model, manipulation model). The elements in the document
architecture and their relations are shown in the following Figure. The Figure shows a
multimedia document architecture including relations between individual discrete media
units and continuous media units.

The manipulation model describes all the operations allowed for creation, change
and deletion of multimedia information. The representation model defines: (1) the
protocols for exchanging this information among different computers; and (2) the
formats for storing the data. It includes the relations between the individual information
elements which need to be considered during presentation. It is important to mention that
an architecture may not include all of the described properties or models.

Figure: Document architecture and its elements (structure and content, together with the
presentation model, the manipulation model and the representation model).

11.3 HYPERTEXT

Hypertext most often refers to text on a computer that will lead the user to other,
related information on demand. Hypertext represents a relatively recent innovation to


user interfaces, which overcomes some of the limitations of written text. Rather than
remaining static like traditional text, hypertext makes possible a dynamic organization of
information through links and connections (called hyperlinks).

Hypertext can be designed to perform various tasks; for instance when a user
"clicks" on it or "hovers" over it, a bubble with a word definition may appear, or a web
page on a related subject may load, or a video clip may run, or an application may open.
The prefix hyper ("over" or "beyond") signifies the overcoming of the old linear
constraints of written text.

Types and uses of hypertext

Hypertext documents can either be static (prepared and stored in advance) or


dynamic (continually changing in response to user input). Static hypertext can be used to
cross-reference collections of data in documents, software applications, or books on CD.
A well-constructed system can also incorporate other user-interface conventions, such as
menus and command lines. Hypertext can develop very complex and dynamic systems of
linking and cross-referencing. The most famous implementation of hypertext is the World
Wide Web.
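
The non-linear organization can be pictured as nodes connected by links. The following
Python sketch (node names and texts are invented for illustration) stores a few hypertext
nodes and prints the links a reader could follow from one of them:

    nodes = {
        "home":       {"text": "Welcome page",         "links": ["definition", "history"]},
        "definition": {"text": "What hypertext is",    "links": ["home"]},
        "history":    {"text": "Origins of hypertext", "links": ["home", "definition"]},
    }

    def show_node(node_id):
        node = nodes[node_id]
        print(node_id + ": " + node["text"])
        for target in node["links"]:
            print("  link -> " + target)   # the reader, not the author, picks the path

    show_node("home")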

11.4 Hypermedia

Hypermedia is used as a logical extension of the term hypertext, in which
graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-
linear medium of information. This contrasts with the broader term multimedia, which
may be used to describe non-interactive linear presentations as well as hypermedia.
Hypermedia should not be confused with hypergraphics or super-writing, which are not
related subjects.

The World Wide Web is a classic example of hypermedia, whereas a non-


interactive cinema presentation is an example of standard multimedia due to the absence
of hyperlinks. Most modern hypermedia is delivered via electronic pages from a variety
of systems. Audio hypermedia is emerging with voice command devices and voice
browsing.

11.5 Hypertext and Hypermedia

Communication reproduces knowledge stored in the human brain via several media.
Documents are one method of transmitting information. Reading a document is an act of
reconstructing knowledge. In an ideal case, knowledge transmission starts with an author
and ends with a reconstruction of the same ideas by a reader.

Today's ordinary documents (excluding hypermedia), with their linear form,
neither support the reconstruction of knowledge nor simplify its reproduction.
Knowledge must be artificially serialized before the actual exchange. Hence, it is
transformed into a linear document and the structural information is integrated into the


actual content. In the case of hypertext and hypermedia, a graphical structure is possible
in a document which may simplify the writing and reading processes.

Figure: Problem description. In document systems, a processable document form (e.g.,
MIFF, SGML) is composed and edited, then formatted into a final layout form (e.g.,
Postscript) for printing; in interactive hyper-/multimedia systems, a script (e.g., Script/X,
HyTime) is composed and edited, then formatted, and the presentation is started, with no
comparably established final presentation format.

Check Your Progress 1

Distinguish hypertext and hypermedia.


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


11.6 Hypertext, Hypermedia and multimedia


A book or an article on paper has a given structure and is represented in a
sequential form. Although it is possible to read individual paragraphs without reading
previous paragraphs, authors mostly assume a sequential reading. Therefore many
paragraphs refer to previous parts of the document. Novels, as well as movies, for
example, always assume a purely sequential reception. Scientific literature can consist of
relatively independent chapters, although mostly a sequential reading is assumed. Technical
documentation (e.g., manuals) often consists of a collection of relatively independent
information units. A lexicon, or a reference book about the Airbus, for example, is
written by several authors and usually only parts of it are read sequentially. Such
documentation also contains many cross references, which force the reader to search at
several different places. Here, an electronic help facility, consisting of information
links, can be very useful.

The following figure shows an example of such a link. The arrows point to such a
relation between the information units (Logical Data Units, LDUs). In a text (top left in
the figure), a reference to the landing properties of aircraft is given. These properties are
demonstrated through a video sequence (bottom left in the figure). At another place in the
text, sales of landing rights for the whole USA are shown (this is visualized in the form of
a map, using graphics; bottom right in the figure). Further information about the airlines
with their landing rights can be made visible graphically by selecting a particular city.
Specific information about the number of the different airplanes sold with landing rights
in Washington is shown at the top right in the figure as a bar diagram. Internally, the
diagram information is represented in table form. The left bar points to a plane, which
can be demonstrated with a video clip.

Hypertext Data. An example of linking information of different media


Hypertext System:

A hypertext system is mainly determined through non-linear links of information.


Pointers connect the nodes. The data of different nodes can be represented with one or
several media types. In a pure text system, only text parts are connected. We understand
hypertext as an information object which includes links to several media.

Figure: The hypertext, hypermedia and multimedia relationship (hypermedia lies in the
intersection of multimedia and hypertext).

Multimedia System:

A multimedia system contains information which is coded in at least one continuous
and one discrete medium.

For example, if only links to text data are present, then this is not a multimedia
system; it is a hypertext system. A video conference, with simultaneous transmission of text
and graphics generated by a document processing program, is a multimedia application,
although it does not have any relation to hypertext or hypermedia.

Hypermedia System:

As the above figure shows, a hypermedia system includes the non-linear
information links of hypertext systems and the continuous and discrete media of multimedia
systems. For example, if a non-linear link connects text and video data, then this is a
hypermedia, multimedia and hypertext system.
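
The relationship can be summarized in a small sketch (illustrative only, not part of any
standard): a document that combines a continuous and a discrete medium is multimedia,
a document with non-linear links is hypertext, and a document with both properties is
hypermedia.

    CONTINUOUS = {"audio", "video", "animation"}
    DISCRETE = {"text", "image", "graphics"}

    def classify(media, has_links):
        multimedia = bool(media & CONTINUOUS) and bool(media & DISCRETE)
        if multimedia and has_links:
            return "hypermedia (hence also multimedia and hypertext)"
        if multimedia:
            return "multimedia"
        if has_links:
            return "hypertext"
        return "conventional linear document"

    print(classify({"text", "video"}, has_links=True))    # hypermedia
    print(classify({"text", "image"}, has_links=True))    # hypertext
    print(classify({"text", "audio"}, has_links=False))   # multimedia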

11.7 Hypertext and the World Wide Web


In the late 1980s, Tim Berners-Lee, then a scientist at CERN, invented the World
Wide Web to meet the demand for automatic information-sharing among scientists
working in different universities and institutes all over the world. In 1992, Lynx was
released as an early text-based web browser. Its ability to provide
hypertext links within documents that could reach into documents anywhere on the
Internet began the creation of the web on the Internet.


After the release of web browsers for both the PC and Macintosh environments,
traffic on the World Wide Web quickly exploded from only 500 known web servers in
1993 to over 10,000 in 1994. Thus, all earlier hypertext systems were overshadowed by
the success of the web, even though it originally lacked many features of those earlier
systems, such as an easy way to edit what you were reading, typed links, backlinks,
transclusion, and source tracking.

11.8 Let us sum up

In this lesson we have learnt concepts on hypertext and hypermedia. We have


discussed the following keypoints:
 A multimedia document is a document which is comprised of information coded in at
least one continuous (time-dependent) medium and in one discrete (time-
independent) medium.
 Hypertext most often refers to text on a computer that will lead the user to other,
related information on demand. Hypertext represents a relatively recent innovation to
user interfaces.
 Hypermedia is used as a logical extension of the term hypertext, in which graphics,
audio, video, plain text and hyperlinks intertwine to create a generally non-linear
medium of information.
 A hypermedia system includes the non-linear information links of hypertext systems
and the continuous and discrete media of multimedia systems

11.9 Lesson-end activities

1. Check any multimedia website and identify the contents that have hyperlinks,
hypermedia.
2. Identify the different hypermedia and hypertext contents available in any
educational, E-lesson based CDROMS.

11.10 Model answers to “Check your progress”-1

Hypertext most often refers to text on a computer that will lead the user to other,
related information on demand. Hypertext represents a relatively recent
innovation to user interfaces.

Hypermedia is used as a logical extension of the term hypertext, in which


graphics, audio, video, plain text and hyperlinks intertwine to create a generally
non-linear medium of information.

11.11 References

1. "Multimedia: Concepts and Practice" By Stephen McGloughlin


2. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.


Lesson 12 Document Architecture and MPEG

Contents
12.0 Aims and Objectives
12.1 Introduction
12.2 Document Architecture - SGML
12.3 Open Document Architecture
12.3.1 Details of ODA
12.3.2 Layout Structure and logical structure
12.3.3 ODA and Multimedia
12.4 MHEG
12.4.1 Example of Interactive Multimedia
12.4.2 Derivation of Class Hierarchy
12.5 Let us sum up
12.6 Lesson-end activities
12.7 Model answers to “Check your progress”
12.8 References

12.0 Aims and Objectives

This lesson aims at teaching the different document architectures followed in
multimedia. At the end of this lesson the learner will be able to:
i) learn different document architectures.
ii) enumerate the architecture of MHEG.

12.1 Introduction

Exchanging documents entails exchanging the document content as well as the


document structure. This requires that both documents have the same document
architecture. The current standards in the document architecture are

1. Standard Generalized Markup Language


2. Open Document Architecture

12.2 Document Architecture - SGML

The Standard Generalized Markup Language (SGML) was supported mostly by
American publishers. Authors prepare the text, i.e., the content. They specify in a
uniform way the title, tables, etc., without a description of the actual representation (e.g.,
script type and line distance). The publisher specifies the resulting layout.

The basic idea is that the author uses tags for marking certain text parts. SGML
determines the form of tags. But it does not specify their location or meaning. User


groups agree on the meaning of the tags. SGML provides a framework with which the
user specifies the syntax description in an object-specific system. Here, classes and
objects, hierarchies of classes and objects, inheritance and the link to methods
(processing instructions) can be used by the specification. SGML specifies the syntax,
but not the semantics.

For example,

<title>Multimedia-Systems</title>
<author>Felix Gatou</author>
<side>IBM</side>
<summary>This exceptional paper from Peter…

This example shows an application of SGML in a text document.

The following figure shows the processing of an SGML document. It is divided into two
processes:

SGML : Document processing – from the information to the presentation

Only the formatter knows the meaning of the tags and transforms the document
into a formatted document. The parser uses the tags occurring in the document in
combination with the corresponding document type. Specification of the document
structure is done with tags. Here, parts of the layout are linked together. This is based on
the joint context between the originator of the document and the formatter process, which
is defined through SGML.
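
The separation of roles can be imitated in a few lines of Python. Here the standard
library's XML parser stands in for an SGML parser (real SGML is more permissive than
XML), and a small table plays the role of the formatter that alone knows how each tag is
to be rendered; the tag names follow the example above.

    import xml.etree.ElementTree as ET

    document = ("<article>"
                "<title>Multimedia-Systems</title>"
                "<author>Felix Gatou</author>"
                "</article>")

    # Only the "formatter" assigns a meaning (here: a rendering) to each tag.
    formatter = {
        "title":  lambda text: text.upper(),
        "author": lambda text: "by " + text,
    }

    for element in ET.fromstring(document):
        render = formatter.get(element.tag, lambda text: text)
        print(render(element.text))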

12.2.1 SGML and Multimedia

Multimedia data are supported in the SGML standard only in the form of
graphics. A graphical image as a CGM (Computer Graphics Metafile) is embedded in an
SGML document. The standard does not refer to other media :

<!ATTLIST video id ID #IMPLIED>
<!ATTLIST video synch synch #IMPLIED>
<!ELEMENT video (audio, movpic)>
<!ELEMENT audio (#NDATA)> -- non-text media
<!ELEMENT movpic (#NDATA)> -- non-text media
…..
<!ELEMENT story (preamble, body, postamble)>

A link to concrete data can be specified through #NDATA. The data are stored mostly
externally in a separate file.

The above example shows the definition of video which consists of audio and motion
pictures.

Multimedia information units must be presented properly. The synchronization between


the components is very important here.

12.3 Open Document Architecture ODA

The Open Document Architecture (ODA) was initially called the Office
Document Architecture because it supports mostly office-oriented applications. The
main goal of this document architecture is to support the exchange, processing and
presentation of documents in open systems. ODA has been endorsed mainly by the
computer industry, especially in Europe.

12.3.1 Details of ODA

The main property of ODA is the distinction among content, logical structure and
layout structure. This is in contrast to SGML, where only a logical structure and the
contents are defined. ODA also defines semantics. The following figure shows these three
aspects linked to a document. One can imagine these aspects as three orthogonal views
of the same document. Each of these views represents one aspect; together they give the
actual document.

The content of the document consists of Content Portions. These can be


manipulated according to the corresponding medium.


ODA : Content layout and logical view

A content architecture describes for each medium: (1) the specification of the
elements, (2) the possible access functions and, (3) the data coding. Individual elements
are the Logical Data Units (LDUs), which are determined for each medium. The access
functions serve for the manipulation of individual elements. The coding of the data
determines the mapping with respect to bits and bytes.

ODA has content architectures for media text, geometrical graphics and raster
graphics. Contents of the medium text are defined through the Character Content
Architecture. The Geometric Graphics Content Architecture allows a content description
of still images. It also takes into account individual graphical objects. Pixel-oriented
still images are described through Raster Graphics Content Architecture. It can be a
bitmap as well as a facsimile.

12.3.2 Layout structure and Logical Structure

The structure and presentation models describe, according to the information
architecture, the cooperation of information units. These kinds of meta-information
distinguish the layout and the logical structure.

The layout structure specifies mainly the representation of a document. It is
related to a two-dimensional representation with respect to a screen or paper. The
presentation model is a tree. Using frames, the position and size of individual layout
elements are established. For example, the page size and type style are also determined.

The logical structure includes the partitioning of the content. Here, paragraphs
and individual heading are specified according to the tree structure. Lists with their
entries are defined as:


Paper = preamble body postamble


Body = heading paragraph…picture…
Chapter2 = heading paragraph picture paragraph

The above example describes the logical structure of an article. Each article
consists of a preamble, a body and a postamble. The body includes two chapters, both of
which start with headings. Content is assigned to each element of this logical structure.
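
As a sketch, the logical structure above can be written down as a simple tree in Python
(chapter names and nesting are chosen for illustration; the content portions would be
attached to the leaves):

    paper = ("paper", [
        ("preamble", []),
        ("body", [
            ("chapter1", [("heading", []), ("paragraph", []), ("picture", [])]),
            ("chapter2", [("heading", []), ("paragraph", []), ("picture", []), ("paragraph", [])]),
        ]),
        ("postamble", []),
    ])

    def show(node, depth=0):
        name, children = node
        print("  " * depth + name)
        for child in children:
            show(child, depth + 1)

    show(paper)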

The Information architecture ODA includes the cooperative models shown in the
following figure. The fundamental descriptive means of the structural and presentational
models are linked to the individual nodes which build a document. The document is seen
as a tree. Each node (also a document) is a constituent, or an object. It consists of a set
of attributes, which represent the properties of the nodes. A node itself includes a
concrete value or it defines relations between other nodes. Hereby, relations and
operators, as shown in the following table, are allowed.

Sequence          All child nodes are ordered sequentially
Aggregate         No ordering among the child nodes
Choice            One of the child nodes has a successor
Optional          One or none (operator)
Repeat            One ... any number of times (operator)
Optional Repeat   Zero ... any number of times (operator)

The simplified distinction is between editing, formatting (Document Layout
Process and Content Layout Process) and the actual presentation (Imaging Process). Current
WYSIWYG (What You See Is What You Get) editors combine these in one single step. It
is important to mention that the processing assumes a linear reproduction. Therefore, ODA
is only partially suitable as a document architecture for a hypertext system. Hence, work is
occurring on Hyper-ODA.

 A formatted document includes the specific layout structure, and possibly the
generic layout structure. It can be printed or displayed directly, but it cannot be
changed.
 A processable document consists of the specific logical structure, possibly the
generic logical structure, and possibly the generic layout structure. The document
cannot be printed or displayed directly. Change of content is possible.
 A formatted processable document is a mixed form. It can be printed and displayed,
and the content can be changed.

For the communication of an ODA document, the representation model, shown in the
following figure, is used. This can be either the Open Document Interchange Format
(ODIF), based on ASN.1, or the Open Document Language (ODL), based on SGML.


ODA Information Architecture with structure, content, presentation and


representation model

The manipulation model in ODA, shown in the above figure, makes use of
Document Application Profiles (DAPs). These profiles define subsets of ODA (Text Only;
Text + Raster Graphics + Geometric Graphics; Advanced Level).

Check Your Progress 1

Distinguish the layout structure and the logical structure of a document.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


12.3.3 ODA and Multimedia

Multimedia requires, besides spatial representational dimensions, time as a
main part of a document. If ODA is to include continuous media, further extensions of
the standard are necessary. Currently, multimedia is not part of the standard. The following
paragraphs discuss only possible extensions, which formally may or may not be included
in ODA in this form.
Contents
The content portions will change to timed content portions. Hereby, the duration does not
have to be specified a priori. These types of content portions are called Open Timed
Content Portions. Let us consider an example of an animation which is generated during
the presentation time depending on external events. The information, which can be
included during the presentation time, is images taken from the camera. In the case of a
Closed Timed Content Portion, the duration is fixed. A suitable example is a song.
Structure
Operations between objects must be extended with a time dimension, where the
time relation is specified in the parent (father) node.
Content Architecture
Additional content architectures for audio and video must be defined. Hereby, the
corresponding elements, LDUs, must be specified. For the access functions, a set of
generally valid functions for the control of the media streams needs to be specified. Such
functions are, for example, Start and Stop. Many functions are very often device-
dependent. One of the most important aspects is a compatibility provision among
different systems implementing ODA.

Logical Structures

Extensions for multimedia of the logical structure also need to be considered. For
example, a film can include a logical structure. It could be a tree with the following
components:

1. Prelude
 Introductory movie segment
 Participating actors in the second movie segment
2. Scene 1
3. Scene 2
4. …
5. Postlude

Such a structure would often be desirable for the user. This would allow one to
deterministically skip some areas and to show or play other areas.


Layout Structure

The layout structure needs extensions for multimedia. The time relation between a
motion picture and audio must be included. Further, questions such as: When will
something be played? From which point? With which attributes and dependencies?
must be answered.

The spatial relation can specify, for example, relative and absolute positions of
the audio object. Additionally, the volume and all other attributes and dependencies
should be determined.

12.4 MHEG

The committee ISO/IEC JTC1/SC29 (Coding of Audio, Picture, Multimedia and
Hypermedia Information) works on the standardization of the exchange format for
multimedia systems. The actual standards are developed at the international level in three
working groups cooperating with research and industry. The following figure shows
that the three standards deal with the coding and compressing of individual media. The
results of the working groups, the Joint Photographic Expert Group (JPEG) and the
Motion Picture Expert Group (MPEG), are of special importance in the area of
multimedia systems.

ISO/IEC JTC1/SC29: Coding of Audio, Picture, Multimedia and Hypermedia Information
  WG 1  - Coding of Still Pictures (JBIG/JPEG)
  WG 11 - Coding of Moving Pictures and Associated Audio (MPEG)
  WG 12 - Coding of Multimedia and Hypermedia Information (MHEG)

Figure: Working groups within ISO/IEC JTC1/SC29 (WG 1 and WG 11 address the
content, WG 12 the structure)

In a multimedia presentation, the contents, in the form of individual information
objects, are described with the help of the above named standards. The structure (e.g., the
processing in time) is specified through time and spatial relations between the
information objects. The standardization of this structure description is the subject of the
working group WG 12, which is known as the Multimedia and Hypermedia Information
Coding Expert Group (MHEG).


The developed standard is officially called Information Technology - Coding of
Multimedia and Hypermedia Information (MHEG). The final MHEG standard is
described in three documents. The first part discusses the concepts, as well as the
exchange format. The second part describes an alternative syntax of the exchange format
which is semantically isomorphic to the first part. The third part presents a reference
architecture for a linkage to script languages. The main concepts are covered in the
first document, and the last two documents are still in progress; therefore, we will focus
on the first document with the basic concepts. Further discussions about MHEG are
based mainly on the committee draft version, because: (1) all related experiences have
been gained on this basis; (2) the basic concepts of the final standard and this
committee draft remain the same; and (3) the finalization of this standard is still in
progress.

12.4.1 Example of an Interactive Multimedia Presentation

Before a detailed description of the MHEG objects is given, we will briefly


examine the individual elements of a presentation using a small scenario. The following
figure presents a time diagram of an interactive multimedia presentation. The
presentation starts with some music. As soon as the voice of a news-speaker is heard in
the audio sequence, a graphic should appear on the screen for a couple of seconds. After
the graphic disappears, the viewer carefully reads a text. After the text presentation ends,
a Stop button appears on the screen. With this button the user can abort the audio
sequence. Now, using a displayed input field, the user enters the title of a desired video
sequence. These video data are displayed immediately after the modification.

Figure showing the time diagram of an interactive presentation: the audio starts the
presentation; an image is shown and later a text; a Stop button offers a selection that
aborts the audio; a modification field lets the user enter a title, after which the video is
started.


Content

A presentation consists of a sequence of information representations. For the


representation of this information, media with very different properties are available.
Because of later reuse, it is useful to capture each information LDU as an individual
object. The contents in our example are: the video sequence, the audio sequence, the
graphics and the text.

Behavior

The notion of behavior covers all information which specifies the representation of
the contents as well as defines the run of the presentation. The first part is controlled by
actions such as start, set volume, set position, etc. The second part is generated by the
definition of temporal, spatial and conditional links between individual elements. If the
state of a content's presentation changes, then this may result in further commands on other
objects (e.g., the deletion of the graphic causes the display of the text). The behavior of a
presentation can also be determined by calling external programs or functions (scripts).

User Interaction

In the discussed scenario, the running audio sequence could be aborted by a
corresponding user interaction. There can be two kinds of user interactions. The first one
is the simple selection, which controls the run of the presentation through a pre-specified
choice (e.g., pushing the Stop button). The second kind is the more complex modification,
which gives the user the possibility to enter data during the run of the presentation (e.g.,
editing of a data input field). Merging together several elements as discussed above, a
presentation which progresses in time can be achieved. To be able to exchange this
presentation between the involved systems, a composite element is necessary. This
element is comparable to a container. It links together all the objects into a unit. With
respect to hypertext/hypermedia documents, such containers can be ordered into a complex
structure, if they are linked together through so-called hypertext pointers.
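
A toy Python sketch of this scenario (object and event names are invented; these are not
the MHEG object definitions) shows how link objects tie a condition on one object to an
action on another:

    contents = {
        "audio": "news.wav", "graphic": "logo.gif", "text": "story.txt",
        "stop_button": "button", "video": "clip.mpg",
    }

    # Each link: when the named event occurs, perform an action on a target object.
    links = [
        ("speaker voice starts",   "show",  "graphic"),
        ("graphic disappears",     "show",  "text"),
        ("text presentation ends", "show",  "stop_button"),
        ("stop button selected",   "stop",  "audio"),
        ("title entered",          "start", "video"),
    ]

    def fire(event):
        for condition, action, target in links:
            if condition == event:
                print(action, contents[target])

    fire("graphic disappears")      # -> show story.txt
    fire("stop button selected")    # -> stop news.wav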

12.4.2 Derivation of a Class Hierarchy

The following figure summarizes the individual elements in the MHEG class
hierarchy in the form of a tree. Instances can be created from all leaves (roman printed
classes). All internal nodes, including the root (italic printed classes), are abstract
classes, i.e., no instances can be generated from them. The leaves inherit some attributes
from the root of the tree as an abstract basic class. The internal nodes do not include any
further functions. Their task is to unify individual classes into meaningful groups. The
action, the link and the script classes are grouped under the behavior class, which defines
the behavior in a presentation.

The interaction class includes the user interaction, which is again modeled
through the selection and modification class. All the classes together with the content and


composite classes specify the individual components in the presentation and determine
the component class. Some properties of the particular MHEG engine can be queried by
the descriptor class. The macro class serves to simplify the access to, and the reuse of,
objects. Both classes play a minor role; therefore, they will not be discussed further.

The development of the MHEG standard uses the techniques of object-oriented design.
Although a class hierarchy is considered a kernel of this technique, a closer look shows
that the MHEG class hierarchy does not have the meaning it is often assigned.

MH-Object-Class

All classes derived from the abstract MH-Object class inherit the two data structures
MHEG Identifier and Descriptor.

The MHEG Identifier consists of the attributes MHEG Identifier and Object Number,
and it serves for the addressing of MHEG objects. The first attribute identifies a specific
application. The Object Number is a number which is defined only within the application.
The data structure Descriptor provides the possibility to characterize each MHEG object
more precisely through a number of optional attributes. For example, this can become
meaningful if a presentation is decomposed into individual objects and the individual
MHEG objects are stored in a database. Any author, supported by proper search
functions, can reuse existing MHEG objects.
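
The addressing scheme can be sketched in Python as follows (field names are chosen for
illustration; the standard specifies these structures in its own notation):

    from dataclasses import dataclass, field

    @dataclass
    class MHEGIdentifier:
        application_id: str   # identifies a specific application
        object_number: int    # unique only within that application

    @dataclass
    class MHObject:
        identifier: MHEGIdentifier
        descriptor: dict = field(default_factory=dict)   # optional characterization

    obj = MHObject(MHEGIdentifier("news-demo", 42),
                   {"author": "unknown", "keywords": ["intro", "audio"]})
    print(obj.identifier.application_id, obj.identifier.object_number)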


12.5 Let us sum up


In this lesson we have learnt the different document architectures. We have
discussed the following points:
i) The Standard Generalized Markup Language (SGML) was supported mostly
by American publishers.
ii) The Open Document Architecture (ODA) was initially called the Office
Document Architecture because it supports mostly office-oriented
applications.
iii) The structure of a multimedia presentation (e.g., the processing in time) is
specified through time and spatial relations between the information objects.
The standardization of this structure description is the subject of the working
group WG 12, which is known as the Multimedia and Hypermedia
Information Coding Expert Group (MHEG).

12.6 Lesson-end activities

Consider any multimedia document (for example an E-learning CD-ROM or an
E-learning website). List out the different hypermedia and document structures present in it.

12.7 Model answers to “Check your progress”-1


1) The layout structure specifies mainly the representation of a document.
It is related to a two-dimensional representation with respect to a screen or
paper.

The logical structure includes the partitioning of the content. Here,
paragraphs and individual headings are specified according to the tree
structure.

12.8 References
1. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
2. “Multimedia Making it work” By Tay Vaughan
3. www.w3.org/MarkUp/SGML/
4. http://en.wikipedia.org/wiki/SGML


Lesson 13 Basic Tools for Multimedia Objects


Contents

13.0 Aims and Objectives


13.1 Introduction
13.2 Text Editing and Word Processing Tools
13.3 OCR Software
13.4 Image editing tools
13.5 Painting and drawing tools
13.6 Sound editing tools
13.7 Animation, Video and Digital Movie Tools
13.8 Let us sum up
13.9 Lesson-end activities
13.10 Model answers to “Check your progress”
13.11 References

13.0 Aims and Objectives

This lesson is intended to teach the learner the basic tools (software) used for
creating and capturing multimedia. The second part of this lesson educates the user on
various video file formats.

At the end of the lesson the learner will be able to:


i) identify software for creating multimedia objects
ii) Locate software used for editing multimedia objects
iii) understand different video file formats

13.1 Introduction

The basic tools set for building multimedia project contains one or more authoring
systems and various editing applications for text, images, sound, and motion video. A
few additional applications are also useful for capturing images from the screen,
translating file formats and tools for the making multimedia production easier.

13.2 Text Editing and Word Processing Tools

A word processor is usually the first software tool computer users rely upon for
creating text. The word processor is often bundled with an office suite.

Word processors such as Microsoft Word and WordPerfect are powerful


applications that include spellcheckers, table formatters, thesauruses and prebuilt
templates for letters, resumes, purchase orders and other common documents.


13.3 OCR Software


Often there will be multimedia content and other text to incorporate into a
multimedia project, but no electronic text file. With optical character recognition (OCR)
software, a flat-bed scanner, and a computer, it is possible to save many hours of
rekeying printed words, and get the job done faster and more accurately than a roomful of
typists.

OCR software turns bitmapped characters into electronically recognizable ASCII


text. A scanner is typically used to create the bitmap. Then the software breaks the
bitmap into chunks according to whether it contains text or graphics, by examining the
texture and density of areas of the bitmap and by detecting edges. The text areas of the
image are then converted to ASCII character using probability and expert system
algorithms.
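
From a script, this whole pipeline is usually hidden behind a single call to an OCR
engine. A minimal sketch, assuming the third-party pytesseract package and the
Tesseract engine are installed (the file name is invented):

    from PIL import Image
    import pytesseract

    # Scan -> bitmap -> recognized text; the segmentation and classification
    # described above happen inside the OCR engine.
    text = pytesseract.image_to_string(Image.open("scanned_page.png"))
    print(text)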

13.4 Image-Editing Tools

Image-editing applications are specialized and powerful tools for enhancing and
retouching existing bitmapped images. These applications also provide many of the
features and tools of painting and drawing programs and can be used to create images
from scratch as well as to edit images digitized from scanners, video frame-grabbers, digital
cameras, clip art files, or original artwork files created with a painting or drawing
package.

Here are some features typical of image-editing applications and of interest to
multimedia developers (a small sketch after the list illustrates a few of these operations
in code):

 Multiple windows that provide views of more than one image at a time
 Conversion of major image-data types and industry-standard file formats
 Direct inputs of images from scanner and video sources
 Employment of a virtual memory scheme that uses hard disk space as RAM for
images that require large amounts of memory
 Capable selection tools, such as rectangles, lassos, and magic wands, to select
portions of a bitmap
 Image and balance controls for brightness, contrast, and color balance
 Good masking features
 Multiple undo and restore features
 Anti-aliasing capability, and sharpening and smoothing controls
 Color-mapping controls for precise adjustment of color balance
 Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and
tinting
 Geometric transformation such as flip, skew, rotate, and distort, and perspective
changes
 Ability to resample and resize an image


 24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black-and-white, and
customizable color palettes

 Ability to create images from scratch, using line, rectangle, square, circle, ellipse,
polygon, airbrush, paintbrush, pencil, and eraser tools, with customizable brush
shapes and user-definable bucket and gradient fills
 Multiple typefaces, styles, and sizes, and type manipulation and masking routines
 Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco,
graphic pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl,
watercolor, wave, and wind
 Support for third-party special effect plug-ins
 Ability to design in layers that can be combined, hidden, and reordered
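
A few of the operations listed above can also be scripted. A minimal sketch using the
Pillow imaging library (assumed to be installed; the file names are invented):

    from PIL import Image, ImageEnhance

    img = Image.open("photo.jpg")
    img = ImageEnhance.Brightness(img).enhance(1.2)        # brightness control
    img = ImageEnhance.Contrast(img).enhance(0.9)          # contrast control
    img = img.rotate(90, expand=True)                      # geometric transformation
    img = img.resize((img.width // 2, img.height // 2))    # resample and resize
    img.save("photo_edited.jpg")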

Plug-Ins
Image-editing programs usually support powerful plug-in modules available from
third-party developers that allow you to wrap, twist, shadow, cut, diffuse, and otherwise
"filter" your images for special visual effects.

Check Your Progress 1

List a few image editing features that an image editing tool should possess.

Notes: a) Write your answers in the space given below.


b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

13.5 Painting and Drawing Tools

Painting and drawing tools, as well as 3-D modelers, are perhaps the most
important items in the toolkit because, of all the multimedia elements, the graphical
impact of the project will likely have the greatest influence on the end user. If the
artwork is amateurish, or flat and uninteresting, both the creator and the users will be
disappointed.

Painting software, such as Photoshop, Fireworks, and Painter, is dedicated to


producing crafted bitmap images. Drawing software, such as CorelDraw, FreeHand,
Illustrator, Designer, and Canvas, is dedicated to producing vector-based line art easily
printed to paper at high resolution.


Some software applications combine drawing and painting capabilities, but many
authoring systems can import only bitmapped images. Typically, bitmapped images
provide the greatest choice and power to the artist for rendering fine detail and effects,
and today bitmaps are used in multimedia more often than drawn objects. Some vector-
based packages such as Macromedia’s Flash are aimed at reducing file download times
on the Web, and may contain both bitmaps and drawn art. The anti-aliased character
shown in the bitmap of Color Plate 5 is an example of the fine touches that improve the
look of an image.

Look for these features in a drawing or painting packages:

 An intuitive graphical user interface with pull-down menus, status bars, palette
control, and dialog boxes for quick, logical selection
 Scalable dimensions, so you can resize, stretch, and distort both large and small
bitmaps
 Paint tools to create geometric shapes, from squares to circles and from curves to
complex polygons
 Ability to pour a color, pattern, or gradient into any area
 Ability to paint with patterns and clip art
 Customizable pen and brush shapes and sizes
 Eyedropper tool that samples colors
 Auto trace tool that turns bitmap shapes into vector-based outlines
 Support for scalable text fonts and drop shadows
 Multiple undo capabilities, to let you try again
 Painting features such as smoothing coarse-edged objects into the background
with anti-aliasing, airbrushing in variable sizes, shapes, densities, and patterns;
washing colors in gradients; blending; and masking
 Support for third-party special effect plug-ins
 Object and layering capabilities that allow you to treat separate elements
independently
 Zooming, for magnified pixel editing
 All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and gray-scale
 Good color management and dithering capability among color depths using
various color models such as RGB, HSB, and CMYK
 Good palette management when in 8-bit mode
 Good file importing and exporting capability for image formats such as PIC, GIF,
TGA, TIF, WMF, JPG, PCX, EPS, PTN, and BMP

13.6 Sound Editing Tools

Sound editing tools for both digitized and MIDI sound let you hear music as well as
create it. By drawing a representation of a sound in fine increments, whether a score or a
waveform, it is possible to cut, copy, paste and otherwise edit segments of it with great
precision.


System sounds are shipped with both Macintosh and Windows systems, and they are
available as soon as the operating system is installed. For MIDI sound, a MIDI synthesizer
is required to play and record sounds from musical instruments. For ordinary sound there
is a variety of software, such as SoundEdit, MP3cutter and WaveStudio.
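
Even without a dedicated editor, a digitized sound can be cut with a few lines of Python
using only the standard-library wave module. This is a minimal sketch (file names and
positions are invented) that copies a two-second segment out of a WAV file:

    import wave

    with wave.open("input.wav", "rb") as src:
        rate = src.getframerate()
        src.setpos(5 * rate)                 # jump 5 seconds into the recording
        clip = src.readframes(2 * rate)      # read 2 seconds of sample frames
        params = src.getparams()

    with wave.open("clip.wav", "wb") as dst:
        dst.setparams(params)                # same channels, sample width, rate
        dst.writeframes(clip)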

13.7 Animation, Video and Digital Movie Tools

Animation and digital movies are sequences of bitmapped graphic scenes (frames),
rapidly played back. Most authoring tools adopt either a frame-oriented or an object-oriented
approach to animation.

Moviemaking tools typically take advantage of QuickTime for Macintosh and
Microsoft Video for Windows and let the content developer create, edit and present
digitized motion video segments.

13.7.1 Video formats

A video format describes how one device sends video pictures to another device,
such as the way that a DVD player sends pictures to a television or a computer to a
monitor. More formally, the video format describes the sequence and structure of frames
that create the moving video image.

Video formats are commonly known in the domain of commercial broadcast and
consumer devices; most notably to date, these are the analog video formats of NTSC,
PAL, and SECAM. However, video formats also describe the digital equivalents of the
commercial formats, the aging custom military uses of analog video (such as RS-170 and
RS-343), the increasingly important video formats used with computers, and even such
offbeat formats such as color field sequential.

Video formats were originally designed for display devices such as CRTs.
However, because other kinds of displays have common source material and because
video formats enjoy wide adoption and have convenient organization, video formats are a
common means to describe the structure of displayed visual information for a variety of
graphical output devices.

13.7.2 Common organization of video formats

A video format describes a rectangular image carried within an envelope containing


information about the image. Although video formats vary greatly in organization, there
is a common taxonomy:

 A frame can consist of two or more fields, sent sequentially, that are displayed
over time to form a complete frame. This kind of assembly is known as interlace.
An interlaced video frame is distinguished from a progressive scan frame, where
the entire frame is sent as a single intact entity.


 A frame consists of a series of lines, known as scan lines. Scan lines have a
regular and consistent length in order to produce a rectangular image. This is
because in analog formats, a line lasts for a given period of time; in digital
formats, the line consists of a given number of pixels. When a device sends a
frame, the video format specifies that devices send each line independently from
any others and that all lines are sent in top-to-bottom order.

 As above, a frame may be split into fields: odd and even (by line "numbers") or
upper and lower, respectively. In NTSC, the lower field comes first, then the
upper field, and that is the whole frame. The basics of a format are the aspect ratio,
the frame rate, and interlacing with field order, if applicable. Video formats use a
sequence of frames in a specified order. In some formats, a single frame is
independent of any other (such as those used in computer video formats), so the
sequence is only one frame. In other video formats, frames have an ordered
position. Individual frames within a sequence typically have similar construction.
However, depending on its position in the sequence, a frame may vary small
elements within it to represent additional information. For example, MPEG-2
compression may eliminate the information that is redundant frame-to-frame in
order to reduce the data size, preserving only the information relating to changes
between frames. (The sketch after this list illustrates how two interlaced fields
are woven into a complete frame.)
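
The weaving of two interlaced fields into one frame can be sketched in a few lines of
Python (three scan lines per field here, purely for illustration; which field comes first
depends on the format):

    upper_field = ["line 1", "line 3", "line 5"]   # odd-numbered scan lines
    lower_field = ["line 2", "line 4", "line 6"]   # even-numbered scan lines

    frame = []
    for upper, lower in zip(upper_field, lower_field):
        frame.append(upper)
        frame.append(lower)

    print(frame)   # the six lines of a complete, progressive frame in order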

Analog video formats

 NTSC
 PAL
 SECAM

Digital Video Formats

These are MPEG-2 based terrestrial broadcast video formats:

 ATSC Standards
 DVB
 ISDB

These are strictly the format of the video itself, and not for the modulation used for
transmission.

Broadcast video formats

Analog broadcast
  525 lines: NTSC • NTSC-J • PAL-M
  625 lines: PAL • PAL-N • PALplus • SECAM
  Multichannel audio: BTSC (MTS) • NICAM-728 • Zweiton (A2, IGR)

Digital broadcast
  Interlaced: SDTV (480i, 576i) • HDTV (1080i)
  Progressive: LDTV (240p, 288p, 1seg) • EDTV (480p, 576p) • HDTV (720p, 1080p)
  Digital TV standards (MPEG-2): ATSC, DVB, ISDB, DMB-T/H
  Digital TV standards (MPEG-4 AVC): DMB-T/H, DVB, SBTVD, ISDB (1seg)
  Multichannel audio: AAC (5.1) • Musicam • PCM • LPCM

Digital cinema: UHDV (2540p, 4320p) • DCI

13.7.3 QuickTime

QuickTime is a multimedia framework developed by Apple Inc. capable of


handling various formats of digital video, media clips, sound, text, animation, music, and
several types of interactive panoramic images. Available for Classic Mac OS, Mac OS X
and Microsoft Windows operating systems, it provides essential support for software
packages including iTunes, QuickTime Player (which can also serve as a helper
application for web browsers to play media files that might otherwise fail to open) and
Safari.

The QuickTime technology consists of the following:

1. The QuickTime Player application created by Apple, which is a media player.


2. The QuickTime framework, which provides a common set of APIs for encoding
and decoding audio and video.
3. The QuickTime Movie (.mov) file format, an openly-documented media
container.

QuickTime is integral to Mac OS X, as it was with earlier versions of Mac OS. All Apple
systems ship with QuickTime already installed, as it represents the core media framework
for Mac OS X. QuickTime is optional for Windows systems, although many software
applications require it. Apple bundles it with each iTunes for Windows download, but it
is also available as a stand-alone installation.

QuickTime players

QuickTime is distributed free of charge, and includes the QuickTime Player application.
Some other free player applications that rely on the QuickTime framework provide
features not available in the basic QuickTime Player. For example:

 iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless.


 In Mac OS X, a simple AppleScript can be used to play a movie in full-screen
mode. However, since version 7.2 the QuickTime Player also supports full-screen
viewing in the non-pro version.

QuickTime framework

The QuickTime framework provides the following:

 Encoding and transcoding video and audio from one format to another.
 Decoding video and audio, and then sending the decoded stream to the graphics or
audio subsystem for playback. In Mac OS X, QuickTime sends video playback to
the Quartz Extreme (OpenGL) Compositor.
 A plug-in architecture for supporting additional codecs (such as DivX).

The framework supports the following file types and codecs natively:

Audio

 Apple Lossless
 Audio Interchange (AIFF)
 Digital Audio: Audio CD - 16-bit (CDDA), 24-bit, 32-bit integer & floating
point, and 64-bit floating point
 MIDI
 MPEG-1 Layer 3 Audio (.mp3)
 MPEG-4 AAC Audio (.m4a, .m4b, .m4p)
 Sun AU Audio
 ULAW and ALAW Audio
 Waveform Audio (WAV)

Video

 3GPP & 3GPP2 file formats


 AVI file format
 Bitmap (BMP) codec and file format
 DV file (DV NTSC/PAL and DVC Pro NTSC/PAL codecs)
 Flash & FlashPix files
 GIF and Animated GIF files
 H.261, H.263, and H.264 codecs
 JPEG, Photo JPEG, and JPEG-2000 codecs and file formats
 MPEG-1, MPEG-2, and MPEG-4 Video file formats and associated codecs
(such as AVC)
 QuickTime Movie (.mov) and QTVR movies
 Other video codecs: Apple Video, Cinepak, Component Video, Graphics, and
Planar RGB
 Other still image formats: PNG, TIFF, and TGA


Specification for QuickTime file format

QuickTime Movie
File extensions: .mov, .qt
MIME type: video/quicktime
Type code: MooV
Uniform Type Identifier: com.apple.quicktime-movie
Developed by: Apple Inc.
Type of format: Media container
Container for: Audio, video, text

The QuickTime (.mov) file format functions as a multimedia container file that contains
one or more tracks, each of which stores a particular type of data: audio, video, effects, or
text (for subtitles, for example). Other file formats that QuickTime supports natively (to
varying degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional
QuickTime Extensions, it can also support Ogg, ASF, FLV, MKV, DivX Media Format,
and others.

Check Your Progress 2

List the different file formats supported by Quicktime

Notes: a) Write your answers in the space given below.


b) Check your answers with the one given at the end of this lesson.

……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

13.8 Let us sum up


In this lesson we have learnt the software that can be used for creating and editing a
multimedia content. We have touched upon the following software tools
i) Text editing tools such as Microsoft Word, WordPerfect
ii) Image Editing tools and their features
iii) Sound editing tools and introduction to MIDI
iv) Video and their file formats
v) Quicktime Video format


13.9 Lesson-end activities


1. Find the various file formats that can be played through the Windows Media
player software. Check for the files associated with windows media player.
2. Record sound using SoundRecorder software available in Windows and check for
the file type supported by SoundRecorder.
3. Visit the web sites of three video programs, and locate a page that summarizes
the capabilities of each. List the formats each is able to import from and export or
output to. Do they include libraries of "primitives"? Check whether they support
features such as lighting, cameras, etc.
13.10 Model answers to “Check your progress”

1. A few of these image editing features can be listed:


- Multiple windows
- Conversion of major image-data
- Direct inputs of images from scanner and video sources
- Capable selection tools, such as rectangles, lassos, and magic wands
- Controls for brightness, contrast, and color balance
- Multiple undo and restore features
- Anti-aliasing capability, and sharpening and smoothing controls
- Tools for retouching, blurring, sharpening, lightening, darkening,
smudging
- Geometric transformation such as flip, skew, rotate, and distorter sample
and resize an image
- customizable color palettes
- Ability to create images from scratch
- Multiple typefaces, styles, and sizes, and type manipulation and masking
routines
- Filters for special effects, such as crystallize, dry brush, emboss, facet,
Support for third-party special effect plug-ins

2. A few of the following file formats can be listed as formats supported by


QuickTime:
.mov, .qt, WAV, AIFF, DV, MP3, MPEG-1, Ogg, ASF, FLV, MKV, DivX

13.11 References
1. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
2. “Multimedia Making it work” By Tay Vaughan
3. “Multimedia in Practice – Technology and applications” By Jeffcoat
4. www.apple.com/quicktime/
5. http://en.wikipedia.org/wiki/QuickTime


Lesson 14 Linking Multimedia Objects, Office


Suites

Contents
14.0 Aims and Objectives
14.1 Introduction
14.2 OLE
14.3 DDE
14.4 Office Suites
14.4.1 Components available in Office Suites
14.4.2 Currently available Office Suites
14.4.3 Comparison of Office Suites
14.5 Presentation Tools
14.6 Let us sum up
14.7 Lesson-end activities
14.8 Model answers to “Check your progress”
14.9 References

14.0 Aims and Objectives


This lesson aims at educating the learner about the different techniques used for
linking various multimedia concepts. The second section of this lesson is about office
suites.
At the end of this lesson, the learner will
- Have a detailed knowledge on OLE, DDE
- Be able to identify different office suites available and their features

14.1 Introduction

All the multimedia components are created in different tools. It is necessary to


link all the multimedia objects such as such as audio, video, text, images and animation in
order to make a complete multimedia presentation. There are different tools that provide
mechanisms for linking the multimedia components.

14.2 OLE

Object Linking and Embedding (OLE) is a technology that allows embedding


and linking to documents and other objects. OLE was developed by Microsoft. It is built
on the Component Object Model. For developers, it brought OLE custom controls
(OCX), a way to develop and use custom user interface elements. On a technical level, an


OLE object is any object that implements the IOleObject interface, possibly along with a
wide range of other interfaces, depending on the object's needs.

Overview of OLE

OLE allows an editor to “farm out” part of a
document to another editor and then re-import it. For example, a desktop publishing
system might send some text to a word processor or a picture to a bitmap editor using
OLE. The main benefit of using OLE is to display visualizations of data from other
programs that the host program is not normally able to generate itself (e.g. a pie-chart in a
text document), as well as to create a master file. References to data in this file can be
made, and when the data in the master file later changes, the change takes effect in the
referencing document. This is called "linking" (instead of "embedding").

Its primary use is for managing compound documents, but it is also used for
transferring data between different applications using drag and drop and clipboard
operations. The concept of "embedding" is also central to much use of multimedia in
Web pages, which tend to embed video, animation (including Flash animations), and
audio files within the hypertext markup language (such as HTML or XHTML) or other
structural markup language used (such as XML or SGML) — possibly, but not
necessarily, using a different embedding mechanism than OLE.

History of OLE

OLE 1.0

OLE 1.0, released in 1990, was the evolution of the original dynamic data
exchange, or DDE, concepts that Microsoft developed for earlier versions of
Windows. While DDE was limited to transferring limited amounts of data
between two running applications, OLE was capable of maintaining active links
between two documents or even embedding one type of document within another.

OLE servers and clients communicate with system libraries using virtual
function tables, or VTBLs. The VTBL consists of a structure of function pointers
that the system library can use to communicate with the server or client. The
server and client libraries, OLESVR.DLL and OLECLI.DLL, were originally designed
to communicate between themselves using the WM_DDE_EXECUTE message.

OLE 1.0 later evolved to become architecture for software components


known as the Component Object Model (COM), and later DCOM.

When an OLE object is placed on the clipboard or embedded in a


document, both a visual representation in native Windows formats (such as a
bitmap or metafile) is stored, as well as the underlying data in its own format.


This allows applications to display the object without loading the application used
to create the object, while also allowing the object to be edited, if the appropriate
application is installed. For example, if an OpenOffice.org Writer object is
embedded, both a visual representation as an Enhanced Metafile is stored, as well
as the actual text of the document in the Open Document Format.

OLE 2.0

OLE 2.0 was the next evolution of OLE 1.0, sharing many of the same
goals, but was re-implemented over top of the Component Object Model instead
of using VTBLs. New features were automation, drag-and-drop, in-place
activation and structured storage.

Technical details of OLE

OLE objects and containers are implemented on top of the Component


Object Model; they are objects that can implement interfaces to export their
functionality. Only the IOleObject interface is compulsory, but other interfaces
may need to be implemented as well if the functionality exported by those
interfaces is required.

To ease understanding of what follows, a bit of terminology has to be


explained. The view status of an object is whether the object is transparent,
opaque, or opaque with a solid background, and whether it supports drawing with
a specified aspect. The site of an object is an object representing the location of
the object in its container. A container supports a site object for every object
contained. An undo unit is an action that can be undone by the user, with Ctrl-Z or
using the "Undo" command in the "Edit" menu.

14.3 DDE

Dynamic Data Exchange (DDE) is a technology for communication between


multiple applications under Microsoft Windows and OS/2.

Dynamic Data Exchange was first introduced in 1987 with the release of
Windows 2.0. It utilized the "Windows Messaging Layer" functionality within
Windows. This is the same system used by the "copy and paste" functionality. Therefore,
DDE continues to work even in modern versions of Windows. Newer technology has
been developed that has, to some extent, overshadowed DDE (e.g. OLE, COM, and OLE
Automation), however, it is still used in several places inside Windows, e.g. for Shell file
associations.

The primary function of DDE is to allow Windows applications to share data. For
example, a cell in Microsoft Excel could be linked to a value in another application and
when the value changed, it would be automatically updated in the Excel spreadsheet. The
data communication was established by a simple, three-segment model. Each program


was known to DDE by its "application" name. Each application could further organize
information by groups known as "topic" and each topic could serve up individual pieces
of data as an "item". For example, if a user wanted to pull a value from Microsoft Excel
which was contained in a spreadsheet called "Book1.xls" in the cell in the first row and
first column, the application would be "Excel", the topic "Book1.xls" and the item "r1c1".
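As an illustration of this application/topic/item addressing, the following minimal Python sketch requests the item "r1c1" from the topic "Book1.xls" in the application "Excel" over DDE. It is only a sketch, assuming the third-party pywin32 package (which supplies a dde module) on Windows and an open copy of Excel with that workbook loaded; it is not part of the DDE specification itself.

    # Minimal DDE client sketch (assumes pywin32 on Windows and Excel running
    # with Book1.xls open). The application/topic/item names follow the text.
    import win32ui      # pywin32: must be imported before the dde module
    import dde

    server = dde.CreateServer()
    server.Create("MyDDEClient")                  # register this process as a DDE client

    conversation = dde.CreateConversation(server)
    conversation.ConnectTo("Excel", "Book1.xls")  # application name, topic name
    value = conversation.Request("r1c1")          # item: row 1, column 1
    print("Cell r1c1 =", value)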

A common use of DDE was for custom developed applications to control off-the-
shelf software, e.g. a custom in-house application written in C or some other language
might use DDE to open a Microsoft Excel spreadsheet and fill it with data, by opening a
DDE conversation with Excel and sending it DDE commands. Today, however, one
could also use the Excel object model with OLE Automation (part of COM).
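For comparison, the same kind of task can be done through OLE Automation. The hedged sketch below drives Excel's object model via pywin32's win32com.client; the workbook created and the value written are illustrative assumptions, not taken from the text.

    # OLE Automation sketch using COM (assumes pywin32 and Excel are installed).
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")  # start or attach to Excel
    excel.Visible = True

    workbook = excel.Workbooks.Add()               # create a new workbook
    sheet = workbook.Worksheets(1)
    sheet.Cells(1, 1).Value = 42                   # write into row 1, column 1
    print("Cell (1,1) =", sheet.Cells(1, 1).Value) # read it back

    workbook.Close(False)                          # close without saving
    excel.Quit()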

While newer technologies like COM offer features DDE doesn't have, there are also
issues with regard to configuration that can make COM more difficult to use than DDE.

NetDDE

A California-based company called Wonderware developed an extension for DDE


called NetDDE that could be used to initiate and maintain the network connections
needed for DDE conversations between DDE-aware applications running on different
computers in a network and transparently exchange data. A DDE conversation is the
interaction between client and server applications. NetDDE could be used along with
DDE and the DDE management library (DDEML) in applications.

NetDDE components installed under \Windows\SYSTEM32 include:
DDESHARE.EXE (DDE Share Manager)
NDDEAPIR.EXE (NDDEAPI Server Side)
NDDENB32.DLL (Network DDE NetBIOS Interface)
NETDDE.EXE (Network DDE - DDE Communication)

Microsoft licensed a basic (NetBIOS Frames protocol only) version of the product
for inclusion in various versions of Windows from Windows for Workgroups to
Windows XP. In addition, Wonderware also sold an enhanced version of NetDDE to
their own customers that included support for TCP/IP. Basic Windows applications
using NetDDE are Clipbook Viewer, WinChat and Microsoft Hearts.

NetDDE was still included with Windows Server 2003 and Windows XP Service Pack
2, although it was disabled by default. It has been removed entirely in Windows Vista.
However, this will not prevent existing versions of NetDDE from being installed and
functioning on later versions of Windows.

14.4 Office suite

An office suite, sometimes called an office application suite or productivity


suite is a software suite intended to be used by typical clerical workers and knowledge
workers. The components are generally distributed together, which have a consistent user


interface and usually can interact with each other, sometimes in ways that the operating
system would not normally allow.

14.4.1 Components of Office Suite

Most office application suites include at least a word processor and a spreadsheet
element. In addition to these, the suite may contain a presentation program, database tool,
graphics suite and communications tools. An office suite may also include an email client
and a personal information manager or groupware package.

14.4.2 Currently available Office Suites

The currently dominant office suite is Microsoft Office, which is available for
Microsoft Windows and the Apple Macintosh. It has become a proprietary de-facto
standard in office software.

An alternative is any of the OpenDocument suites, which use the free


OpenDocument file format, defined by ISO/IEC 26300. The most prominent of these is
OpenOffice.org, open-source software that is available for Windows, Linux, Macintosh,
and other platforms. OpenOffice.org, KOffice and Kingsoft Office support many of the
features of Microsoft Office, as well as most of its file formats. OpenOffice.org has
spawned several derivatives such as NeoOffice, a port for Mac OS X that integrates into
its Aqua interface, and StarOffice, a commercial version by Sun Microsystems.

A new category of "online word processors" allows editing of centrally stored documents
using a web browser.

14.4.3 Comparison of office suites

The following table compares general and technical information for a number
of office suites. Please see the individual products' articles for further information. The
table only includes systems that are widely used and currently available.

Product Name                  Developer                     First public release                                                     Operating system

Ability Office                Ability Plus Software         1985                                                                     Windows
AppleWorks                    Apple                         1991                                                                     Mac OS, Mac OS X and Windows
Corel Office                  Corel                         1991                                                                     Windows
GNOME Office                  GNOME Foundation, AbiSource   ?                                                                        Cross-platform
Google Apps / Google Docs     Google                        2006                                                                     Cross-platform
IBM Lotus Symphony            IBM                           2007                                                                     Windows and Linux
iWork                         Apple                         2005                                                                     Mac OS X
KOffice                       KDE Project                   1998                                                                     BSD, Linux, Solaris
Lotus SmartSuite              IBM                           1991                                                                     Windows and OS/2
MarinerPak                    Mariner Software              1996                                                                     Mac OS and Mac OS X
Microsoft Office              Microsoft                     1990 (Office 1, for Macintosh); 1991 (Office 3, first Windows version)   Windows and Macintosh
OpenOffice.org                OpenOffice.org Organization   October 14, 2001                                                         Cross-platform
ShareOffice                   ShareMethods                  May 14, 2007                                                             Cross-platform
SoftMaker Office              SoftMaker                     1989                                                                     FreeBSD, Linux, PocketPC and Windows
StarOffice                    Sun Microsystems              1995                                                                     Cross-platform
StarOffice from Google Pack   Sun Microsystems              1995                                                                     Cross-platform

Check Your Progress 1

List a few available office suites.


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

14.5 Presentation Tools

Presentation is the process of presenting the content of a topic to an audience.


A presentation program is a computer software package used to display
information, normally in the form of a slide show. It typically includes three major
functions: an editor that allows text to be inserted and formatted, a method for
inserting and manipulating graphic images and a slide-show system to display the
content.

There are many different types of presentations including professional


(work-related), education, worship and for general communication.

A presentation program, such as OpenOffice.org Impress, Apple Keynote, i-Ware CD
Technologies' PMOS or Microsoft PowerPoint, is often used to generate the presentation
content.


The most commonly known presentation program is Microsoft PowerPoint,


although there are alternatives such as OpenOffice.org Impress and Apple's
Keynote. In general, the presentation follows a hierarchical tree explored linearly
(like a table of contents), which has the advantage of matching the printed text often
given to participants. Adobe Acrobat is also a popular presentation tool; it can easily link
to other presentations of any kind and adds the ability to zoom without loss of accuracy,
thanks to the vector graphics inherent in PostScript and PDF.

14.5.1 KEYNOTE : Keynote is a presentation software application developed


by Apple Inc. and a part of the iWork productivity suite (which also includes
Pages and Numbers) sold by Apple

14.5.2 Impress : A presentation program similar to Microsoft PowerPoint. It


can export presentations to Adobe Flash (SWF) files allowing them to be
played on any computer with the Flash player installed. It also includes the
ability to create PDF files, and the ability to read Microsoft PowerPoint's
.ppt format. Impress suffers from a lack of ready-made presentation designs.
However, templates are readily available on the Internet.

14.5.3 Microsoft PowerPoint: This is a presentation program developed by


Microsoft. It is part of the Microsoft Office system. Microsoft PowerPoint
runs on Microsoft Windows and the Mac OS computer operating systems.

It is widely used by business people, educators, students, and trainers and is


among the most prevalent forms of persuasion technology.

Other Presentation software includes:

Adobe Persuasion
AppleWorks
Astound by Gold Disk Inc.
Beamer (LaTeX)
Google Docs which now includes presentations
Harvard Graphics
HyperCard
IBM Lotus Freelance Graphics
Macromedia Action!
MagicPoint
Microsoft PowerPoint
OpenOffice.org Impress
Scala Multimedia
Screencast
Umibozu (wiki)
VCN ExecuVision


Worship presentation program


Zoho
Presentations in HTML format: Presentacular, HTML Slidy.


14.6 Let us sum up

In this lesson we have learnt how different multimedia components are linked. The
following points have been discussed in this lesson.
- The use of OLE in linking multimedia objects. Object Linking and Embedding
(OLE) is a technology that allows embedding and linking to documents and
other objects. OLE was developed by Microsoft.
- Dynamic Data Exchange (DDE) is a technology for communication between
multiple applications under Microsoft Windows and OS/2.
- An office suite is a software suite intended to be used by typical clerical
workers and knowledge workers.
- The currently dominant office suite is Microsoft Office, which is available for
Microsoft Windows and the Apple Macintosh.

14.7 Lesson-end activities

1. Check for the different Office Suite available in your local computer
and list the different file formats supported by each office suite.
2. List the different presentation tools available for creating
presentation.

14.8 Model answers to “Check your progress”

A few of the following office suites can be listed :

Google Apps, Google Docs, IBM Lotus Symphony, iWork, KOffice,


Lotus SmartSuite, MarinerPak, Microsoft Office, OpenOffice.org,
ShareOffice, SoftMaker Office, StarOffice, AppleWorks and Corel
Office.

14.9 References

1. “Multimedia Making it work” By Tay Vaughan


2. "Multimedia: Concepts and Practice" By Stephen McGloughlin


Lesson 15 User Interface

Contents
15.0 Aims and Objectives
15.1 Introduction
15.2 User interfaces
15.3 General design issues
15.4 Effective Human Computer Interaction
15.5 Video at the user interface
15.6 Audio at the user interface
15.7 User friendliness as the primary goal
15.8 Let us sum up
15.9 Model answers to “Check your progress”
15.10 References

15.0 Aims and Objectives

In this lesson we will learn how human computer interface is effectively done
when a multimedia presentation is designed.
At the end of the lesson the reader will be able to understand user interface and
the various criteria that are to be satisfied in order to have effective human-computer
interface.

15.1 Introduction

In computer science, we understand the user interface as the interactive input and
output of a computer as it is perceived and operated on by users. Multimedia user
interfaces are used for making the multimedia content active. Without user interface the
multimedia content is considered to be linear or passive.

15.2 User Interfaces

Multimedia user interfaces are computer interfaces that communicate with users
using multiple media modes such as written text together with spoken language.

Multimedia would be without much value without user interfaces. The input
media determines not only how human computer interaction occurs but also how well.
Graphical user interfaces – using the mouse as the main input device – have greatly
simplified human-machine interaction.


15.3 General Design Issues


The main emphasis in the design of multimedia user interface is multimedia
presentation. There are several issues which must be considered.

1. To determine the appropriate information content to be communicated.


2. To represent the essential characteristics of the information.
3. To represent the communicative intent.
4. To chose the proper media for information presentation.
5. To coordinate different media and assembling techniques within a presentation.
6. To provide interactive exploration of the information presented.

15.3.1 Information Characteristics for presentation:

A complete set of information characteristics makes knowledge definitions and


representation easier because it allows for appropriate mapping between information and
presentation techniques. The information characteristics specify:

 Types

Characterization schemes are based on ordering information. There are two types of
ordered data: (1) coordinate versus amount, which signify points in time, space or
other domains; or (2) intervals versus ratio, which suggests the types of comparisons
meaningful among elements of coordinate and amount data types.

 Relational Structures

This group of characteristics refers to the way in which a relation maps among its
domain sets (dependency). There are functional dependencies and non-functional
dependencies. An example of a relational structure which expresses functional
dependency is a bar chart. An example of a relational structure which expresses non-
functional dependency is a student entry in a relational database.

 Multi-domain Relations

Relations can be considered across multiple domains, such as: (1) multiple attributes
of a single object set (e.g., positions, colors, shapes, and/or sizes of a set of objects in a
chart); (2) multiple object sets (e.g., a cluster of text and graphical symbols on a map);
and (3) multiple displays.

 Large Data Sets

Large data sets refer to numerous attributes of collections of heterogeneous objects


(e.g., presentations of semantic networks, databases with numerous object types and
attributes of technical documents for large systems, etc.).


Check Your Progress 1

List a few information characters required for presentation.


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

15.3.2 Presentation Function

Presentation function is a program which displays an object. It is important to


specify the presentation function independent from presentation form, style or the
information it conveys. Several approaches consider the presentation function from
different points of view

15.3.3 Presentation Design Knowledge

To design a presentation, issues like content selection, media and presentation


technique selection and presentation coordinating must be considered.

Content selection is the key to conveying the information to the user. However, we
are not entirely free in this selection because content can be influenced by constraints
imposed by the size and complexity of the presentation.

Media selection determines partly the information characteristics. For selecting


presentation techniques, rules can be used. For example, rules for selection methods, i.e.,
for supporting a user's ability to locate facts in a presentation, may specify a
preference for graphical techniques.

Coordination can be viewed as a process of composition. Coordination needs


mechanisms such as (1) encoding techniques (2) presentation objects that represent facts
(3) multiple displays. Coordination of multimedia employs a set of composition operators
for merging, aligning and synthesizing different objects to construct displays that convey
multiple attributes of one or more data sets.

15.4 Effective Human-Computer Interaction

One of the most important issues regarding multimedia interfaces is effective


human-computer interaction of the interface, i.e., user-friendliness.

The main issues the user interface designer should keep in mind: (1) context; (2)
linkage to the world beyond the presentation display; (3) evaluation of the interface with


respect to other human-computer interfaces; (4) interactive capabilities; and (5)


separability of the user interface from the application.

15.5 Video at the User Interface

A continuous sequence of at least 15 individual images per second gives a rough
perception of a continuous motion picture. At the user interface, video is implemented
through a continuous sequence of individual images. Hence, video can be manipulated at
this interface similar to manipulation of individual still images.

When an individual image consisting of pixels (not graphics, which consist of defined


objects) can be presented and modified, this should also be possible for video (e.g., to
create special effects in a movie). However, the functionalities for video are not as simple
to deliver because the high data transfer rate necessary is not guaranteed by most of the
hardware in current graphics systems.

15.6 Audio at the User Interface

Audio can be implemented at the user interface for application control. Thus,
speech analysis is necessary.

Speech analysis is either speaker-dependent or speaker-independent. Speaker-


dependent solutions allow the input of approximately 25,000 different words with a
relatively low error rate. Here, an intensive learning phase to train the speech analysis
system for speaker-specific characteristics is necessary prior to the speech analysis phase.
A speaker-independent system can recognize only a limited set of words and no training
phase is needed.

Audio Tool User Interface

During audio output, the additional presentation dimension of space can be


introduced using two or more separate channels to give a more natural distribution of
sound. The best-known example of this technique is stereo.

In the case of monophony, all audio sources have the same spatial location. A
listener can only properly understand the loudest audio signal. The same effect can be
simulated by closing one ear. Stereophony allows listeners with bilateral hearing


capabilities to hear lower intensity sounds. It is important to mention that the main
advantage of bilateral hearing is not the spatial localization of audio sources, but the
extraction of less intensive signals in a loud environment.

15.7 User-friendliness as the Primary Goal

User-friendliness is the main property of a good user interface. In a multimedia


environment in office or home the following user friendliness could be implemented:

15.7.1 Easy to Learn Instructions

Application instructions must be easy-to-learn.

15.7.2 Context-sensitive Help Functions

A context-sensitive help function using hypermedia techniques is very helpful,
i.e., according to the state of the application, different help-texts are displayed.

15.7.3 Easy to Remember Instructions

A user-friendly interface must also have the property that the user easily
remembers the application instruction rules. Easily remembered instructions
might be supported by the intuitive association to what the user already knows.

15.7.4 Effective Instructions

The user interface should enable effective use of the application. This means:

 Logically connected functions should be presented together and similarly.


 Graphical symbols or short clips are more effective than textual input and
output. They trigger faster recognition.
 Different media should be able to be exchanged among different
applications.
 Actions should be activated quickly.
 A configuration of a user interface should be usable by both professional
and sporadic users.

15.7.5 Aesthetics

With respect to aesthetics, the color combination, character sets, resolution and
form of the window need to be considered. They determine a user’s first and lasting
impressions.


15.7.6 Entry elements

User interfaces use different ways to specify entries for the user:

 Entries in a menu
In menus there are visible and non-visible entry elements. Entries which are
relevant to the task should be made available for easy menu selection.
 Entries on a graphical interface
 If the interface includes text, the entries can be marked through color
and/or different font
 If the interface includes images, the entries can be written over the
image.

15.7.7 Presentation

The presentation, i.e., the optical image at the user interface, can have the following
variants:

 Full text
 Abbreviated text
 Icons, i.e., graphics
 Micons, i.e., motion video

15.7.8 Dialogue Boxes

Different dialogue boxes should have similar constructions. This requirement


applies to the design of:
(1) The buttons OK and Abort;
(2) Joined windows; and
(3) Other applications in the same window system.

15.7.9 Additional Design Criteria

A few additional hints for designing a user-friendly interface are given below; a minimal
sketch of the first two hints follows the list.

 The form of the cursor can change to visualize the current state of the system.
For example, an hourglass can be shown while a processing task is in progress.
 When time-intensive tasks are performed, the progress of the task should be
presented.
 The selected entry should be immediately highlighted as “work in progress”
before processing actually starts.
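The following minimal tkinter sketch illustrates the first two hints: the cursor is switched to an hourglass-style "watch" cursor while a simulated task runs, and a progress bar reports its progress. The widget layout and the simulated task are assumptions made only for illustration.

    # Busy cursor and progress feedback (a sketch, assuming a standard Python
    # installation with tkinter available).
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    progress = ttk.Progressbar(root, maximum=100)
    progress.pack(fill="x", padx=10, pady=10)

    def long_task(step=0):
        if step == 0:
            root.config(cursor="watch")        # hourglass-style busy cursor
        if step <= 100:
            progress["value"] = step           # show how far the task has come
            root.after(50, long_task, step + 5)
        else:
            root.config(cursor="")             # restore the normal cursor

    tk.Button(root, text="Start task", command=long_task).pack(pady=5)
    root.mainloop()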

The main emphasis has been on video and audio media because they represent live
information. At the user interface, these media become important because they help users
learn by enabling them to choose how to distribute research responsibilities among
applications (e.g., on-line encyclopedias, tutors, simulations), to compose and integrate
results and to share learned material with colleagues (e.g., video conferencing).


Additionally, computer applications can effectively do less reasoning about selection of a


multimedia element (e.g., text, graphics, animation or sound) since alternative media can
be selected by the user.

Check Your Progress 2


List the characteristics that make a multimedia user interface user-friendly.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

15.8 Let us sum up

In this lesson we have learnt how the user interface can be designed effectively. The
general design issues, the use of video and audio at the user interface, and the criteria
for user-friendliness have been discussed in this lesson.

15.9 Model answers to “Check your progress”

1. A few information characteristics used for presentation are


o Types
o Relational Structures
o Multi-domain Relations
o Large Data Sets

2. A few of the following characteristics can be listed


 Easy to Learn Instructions
 Context-sensitive Help Functions
 Easy to Remember Instructions
 Effective Instructions
 Aesthetics
 Entry elements
 Presentation
 Dialogue Boxes

15.10 References

1. “Multimedia Making it work” By Tay Vaughan


2. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
3. Multimedia and Virtual Reality: Designing Multisensory User Interfaces By
Alistair Sutcliffe


UNIT – IV

Lesson 16 Multimedia Communication Systems


Contents

16.0 Aims and Objectives


16.1 Introduction
16.2 Application subsystem
16.2.1 Collaborative computing
16.2.2 Collaborative dimensions
16.2.3 Group communication Architecture
16.3 Application sharing approach
16.4 Conferencing
16.5 Session Management
16.5.1 Architecture
16.5.2 Session Control
16.6 Let us sum up
16.7 Lesson-end activities
16.8 Model answers to “Check your progress”
16.9 References

16.0 Aims and Objectives


In this lesson we discuss the important issues related to multimedia communication
systems which are present above the data link layer. In this lesson application
subsystems, management and service issues for group collaboration and session
orchestration are presented.

At the end of this lesson the learner will be able:


i) to understand the various layers of the communication subsystem
ii) to describe the transport subsystem and its features
iii) to understand the group communication architecture
iv) to explain the concepts behind conferencing
v) to enumerate the concepts of session management

16.1 Introduction

The consideration of multimedia applications supports the view that local systems
expand toward distributed solutions. Applications such as kiosks, multimedia mail,
collaborative work systems, virtual reality applications and others require high-speed
networks with a high transfer rate and communication systems with adaptive, lightweight
transmission protocols on top of the networks.


From the communication perspective, we divide the higher layers of the


Multimedia Communication System (MCS) into two architectural subsystems: an
application subsystem and a transport subsystem.

16.2 Application Subsystem

16.2.1 Collaborative Computing

The current infrastructure of networked workstations and PCs, and the availability
of audio and video at these end-points, makes it easier for people to cooperate and bridge
space and time. In this way, network connectivity and end-point integration of
multimedia provides users with a collaborative computing environment. Collaborative
computing is generally known as Computer-Supported Cooperative Work (CSCW).

There are many tools for collaborative computing, such as electronic mail,
bulletin boards (e.g., Usenet news), screen sharing tools (e.g., ShowMe from Sunsoft),
text-based conferencing systems (e.g., Internet Relay Chat, CompuServe, America
Online), telephone conference systems, conference rooms (e.g., VideoWindow from
Bellcore), and video conference systems (e.g., MBone tools nv, vat). Further, there are
many implemented CSCW systems that unify several tools, such as Rapport from AT&T,
MERMAID from NEC and others.

16.2.2 Collaborative Dimensions

Electronic collaboration can be categorized according to three main parameters:


time, user scale and control. Therefore, the collaboration space can be partitioned into a
three-dimensional space.

 Time

With respect to time, there are two modes of cooperative work: asynchronous and
synchronous. Asynchronous cooperative work specifies processing activities that do not
happen at the same time; the synchronous cooperative work happens at the same time.

 User Scale

The user scale parameter specifies whether a single user collaborates with another user or
a group of more than two users collaborates together. Groups can be further classified as
follows:

 A group may be static or dynamic during its lifetime. A group is static if its
participating members are pre-determined and membership does not change
during the activity. A group is dynamic if the number of group members varies
during the collaborative activity, i.e., group members can join or leave the activity
at any time.


 Group members may have different roles in the CSCW, e.g., a member of a group
(if he or she is listed in the group definition), a participant of a group activity (if
he or she successfully joins the conference), a conference initiator, a conference
chairman, a token holder or an observer.
 Groups may consist of members who have homogeneous or heterogeneous
characteristics and requirements of their collaborative
environment.

 Control

Control during collaboration can be centralized or distributed. Centralized control


means that there is a chairman (e.g., a main manager) who controls the collaborative work
and every group member (e.g., user agent) reports to him or her. Distributed control
means that every group member has control over his/her own tasks in the collaborative
work and distributed control protocols are in place to provide consistent collaboration.

Other partition parameters may include locality and collaboration awareness. Locality
partition means that a collaboration can occur either in the same place (e.g., a group
meeting in an office or conference room) or among users located in different places
through tele-collaboration.
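These dimensions can be captured in a small data structure. The sketch below is purely illustrative (the class and field names are assumptions, not part of the text) and simply classifies a collaboration session along time, user scale and control.

    # Illustrative classification of a CSCW session along the three dimensions.
    from dataclasses import dataclass
    from enum import Enum

    class Time(Enum):
        ASYNCHRONOUS = "asynchronous"
        SYNCHRONOUS = "synchronous"

    class Control(Enum):
        CENTRALIZED = "centralized"
        DISTRIBUTED = "distributed"

    @dataclass
    class CollaborationSession:
        time: Time
        user_scale: int               # 2 for user-to-user, more than 2 for a group
        control: Control
        dynamic_group: bool = False   # members may join or leave during the activity

    # Example: a dynamic, chairman-controlled video conference with five users.
    conference = CollaborationSession(Time.SYNCHRONOUS, 5, Control.CENTRALIZED, True)
    print(conference)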

Group communication systems can be further categorized into computer-


augmented collaboration systems, where collaboration is emphasized, and collaboration-
augmented computing systems, where the concentration is on computing.

16.2.3 Group Communication Architecture

Group communication (GC) involves the communication of multiple users in a


synchronous or an asynchronous mode with centralized or distributed control.

A group communication architecture consists of a support model, system model


and interface model. The GC support model includes group communication agents that
communicate via a multi-point multicast communication network as shown in following
Figure.

Group communication agents may use the following for their collaboration:

 Group Rendezvous
Group rendezvous denotes a method which allows one to organize
meetings, and to get information about the group, ongoing meetings and
other static and dynamic information.
 Shared Applications
Application sharing denotes techniques which allow one to replicate
information to multiple users simultaneously. The remote users may point
to interesting aspects (e.g., via tele-pointing) of the information and
modify it so that all users can immediately see the updated information


(e.g., joint editing). Shared applications mostly belong to collaboration-


transparent applications.
 Conferencing
Conferencing is a simple form of collaborative computing. This service
provides the management of multiple users for communicating with each
other using multiple media. Conferencing applications belong to
collaboration-aware applications.

[Figure: Group communication support model. Several group communication agents, each
containing group rendezvous, application sharing and conferencing components on top of
a communication (transport) support layer, are interconnected through a multicast
communication network.]

The GC system model is based on a client-server model. Clients provide user interfaces
for smooth interaction between group members and the system. Servers supply functions
for accomplishing the group communication work, and each server specializes in its own
function.

Check Your Progress 1


List the collaborations used by group communication agents.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


16.3 Application Sharing Approach


Sharing applications is recognized as a vital mechanism for supporting group
communication activities. Sharing applications means that when a shared application
program (e.g., editor) executes any input from a participant, all execution results
performed on the shared object (e.g., document text) are distributed among all the
participants. Shared objects are displayed, generally, in shared windows.

Application sharing is most often implemented in collaboration-transparent


systems, but can also be developed through collaboration-aware, special-purpose
applications. An example of a software toolkit that assists in development of shared
computer applications is Bellcore’s Rendezvous system (language and architecture).
Shared applications may be used as conversational props in tele-conferencing situations
for collaborative document editing and collaborative software development.

An important issue in application sharing is shared control. The primary design


decision in sharing applications is to determine whether they should be centralized or
replicated:

 Centralized Architecture

In a centralized architecture, a single copy of the shared application runs at one


site. All participants’ input to the application is then distributed to all sites.

The advantage of the centralized approach is easy maintenance because there is


only one copy of the application that updates the shared object. The disadvantage
is high network traffic because the output of the application needs to be
distributed every time.

 Replicated Architecture

In a replicated architecture, a copy of the shared application runs locally at each


site. Input events to each application are distributed to all sites and each copy of
the shared application is executed locally at each site.

The advantages of this architecture are low network traffic, because only input
events are distributed among the sites, and low response times, since all
participants get their output from local copies of the application. The
disadvantages are the requirement of the same execution environment for the
application at each site, and the difficulty in maintaining consistency.
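The data-flow difference between the two architectures can be sketched in a few lines of Python. The Site class and the trivial "editor" below are hypothetical stand-ins for real participants and a real shared application; the sketch only shows what gets distributed in each case.

    # Centralized vs. replicated application sharing (illustrative sketch only).
    class Site:
        def __init__(self, name):
            self.name = name
            self.document = ""                  # the shared object (e.g. document text)

        def run_application(self, input_event):
            self.document += input_event        # trivial "editor": append the keystroke
            return self.document

        def display(self, output):
            print(f"{self.name} shows: {output!r}")

    def centralized_sharing(sites, input_event):
        # A single copy runs at one site; the resulting OUTPUT is distributed
        # to every participant (easy consistency, high network traffic).
        output = sites[0].run_application(input_event)
        for site in sites:
            site.display(output)

    def replicated_sharing(sites, input_event):
        # Every site holds a local copy; only the INPUT event is distributed and
        # executed locally (low traffic and delay, but consistency is harder).
        for site in sites:
            site.display(site.run_application(input_event))

    sites = [Site("A"), Site("B"), Site("C")]
    replicated_sharing(sites, "h")
    replicated_sharing(sites, "i")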

16.4 Conferencing

Conferencing supports collaborative computing and is also called synchronous tele-


collaboration. Conferencing is a management service that controls the communication
among multiple users via multiple media, such as video and audio, to achieve


simultaneous face-to-face communication. More precisely, video and audio have the
following purposes in a tele-conferencing system:

 Video is used in technical discussions to display view-graphs and to indicate how


many users are still physically present at a conference. For visual support,
workstations, PCs or video walls can be used.

For conferences with more than three or four participants, the screen resources on
a PC or workstation run out quickly, particularly if other applications, such as
shared editors or drawing spaces, are used. Hence, mechanisms which quickly
resize individual images should be used.

Conferencing services control a conference (i.e., a collection of shared state


information such as who is participating in the conference, conference name, start of the
conference, policies associated with the conference, etc.). Conference control includes
several functions:

 Establishing a conference, where the conference participants agree upon a


common state, such as identity of a chairman (moderator), access rights (floor
control) and audio encoding. Conference systems may perform registration,
admission, and negotiation services during the conference establishment phase,
but they must be flexible and allow participants to join and leave individual media
sessions or the whole conference. The flexibility depends on the control model.
 Closing a conference.
 Adding new users and removing users who leave the conference.

Conference states can be stored (located) either on a central machine (centralised


control), where a central application acts as the repository for all information related
to the conference, or in a distributed fashion.

16.5 Session Management

Session management is an important part of the multimedia communication


architecture. It is the core part which separates the control, needed during the transport,
from the actual transport. Session management is extensively studied in the collaborative
computing area; therefore we concentrate on architectural and management issues in this
area.

16.5.1 Architecture

A session management architecture is built around an entity, the session manager,


which separates the control from the transport. By creating a reusable session manager,
which is separated from the user-interface, conference-oriented tools avoid a duplication
of their effort.


The session control architecture consists of the following components:

 Session Manager

Session manager includes local and remote functionalities. Local functionalities


may include

(1) Membership control management, such as participant authentication or


presentation of coordinated user interfaces;
(2) Control management for shared workspace, such as floor control
(3) Media control management, such as intercommunication among media
agents or synchronization
(4) Configuration management, such as an exchange of interrelated QoS
parameters of selection of appropriate services according to QoS; and
(5) Conference control management, such as an establishment,
modification and a closing of a conference.

 Media agents

Media agents are separate from the session manager and they are not responsible
for decisions specific to each type of media. The modularity allows replacement
of agents. Each agent performs its own control mechanism over the particular
medium, such as mute, unmute, change video quality, start sending, stop sending,
etc.

 Shared Workspace Agent

The shared workspace agent transmits shared objects (e.g., telepointer co-
ordinate, graphical or textual object) among the shared application.

16.5.2 Session Control

Each session is described through the session state. This state information is
either private or shared among all session participants. Depending on the functions
which an application requires and a session control provides, several control mechanisms
are embedded in session management (a minimal floor-control sketch follows the list):

Floor control: In a shared workspace, the floor control is used to provide access
to the shared workspace. The floor control in shared application is often used to
maintain data consistency.

Conference Control: In conferencing applications, conference control is used to
establish, modify and close the conference.


Media control: This control mainly includes a functionality such as the


synchronization of media streams.

Configuration Control: Configuration control includes a control of media quality,


QOS handling, resource availability and other system components to provide a
session according to user’s requirements.

Membership control: This may include services, for example invitation to a


session, registration into a session, modification of the membership during the
session etc.
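As noted above, a minimal floor-control sketch is given here: the floor is modelled as a single token, and only the current holder may modify the shared workspace. All names are illustrative assumptions, and the sketch ignores distribution, time-outs and queuing of requests.

    # Floor control as a single token (illustrative sketch only).
    class FloorControl:
        def __init__(self):
            self.holder = None                   # current token holder, if any

        def request_floor(self, participant):
            if self.holder is None:
                self.holder = participant        # grant the floor (the token)
                return True
            return False                         # someone else holds the floor

        def release_floor(self, participant):
            if self.holder == participant:
                self.holder = None

    floor = FloorControl()
    print(floor.request_floor("alice"))   # True  - alice may modify the workspace
    print(floor.request_floor("bob"))     # False - bob must wait for the floor
    floor.release_floor("alice")
    print(floor.request_floor("bob"))     # True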

16.6 Let us sum up

In this lesson we have discussed the following points:


 Network connectivity and end-point integration of multimedia provides users
with a collaborative computing environment.
 Group communication (GC) involves the communication of multiple users in
a synchronous or an asynchronous mode with centralized or distributed
control.
 A group communication architecture consists of a support model, system
model and interface model.
 Conferencing is a management service that controls the communication
among multiple users via multiple media, such as video and audio, to achieve
simultaneous face-to-face communication.

16.7 Lesson-end activities

Search the internet for a multimedia network and list the application system
features available in that network.

16.8 Model answers to “Check your progress”


The following are the collaborations used in group communication agents.
o Group Rendezvous
o Shared Applications.
o Conferencing

16.9 References

1.“Multimedia: Concepts and Practice" By Stephen McGloughlin


2.Digital Multimedia by Nigel Chapman, Jenny Chapman
3.Multimedia : Computing, Communications and applications – By Ralf Steinmetz
and Klara Nahrstedt


Lesson 17 Transport Subsystem


Contents

17.0 Aims and Objectives


17.1 Introduction
17.2 Transport Subsystem
17.3 Transport Layer
17.3.1 Internet Transport Protocols
17.3.2 Other protocols
17.4 Network Layer
17.4.1 Internet Services Protocol
17.4.2 Stream Protocol
17.5 Let us sum up
17.6 Lesson-end activities
17.7 Model answers to “Check your progress”
17.8 References

17.0 Aims and Objectives

We present in this section a brief overview of transport and network protocols and
their functionalities, which are used for multimedia transmissions. We evaluate them
with respect to their suitability for this task.

At the end of this lesson the reader will:


i) have a detailed idea on the applications requirements
ii) be able to list the protocols used for multimedia networking

17.1 Introduction

In order to distribute a multimedia product, it is necessary to have a set of


protocols which enable smooth transmission of data between two hosts. The protocols
are the rules and conventions useful for network communication between two computers.

17.2 Transport Subsystem

Requirements

Distributed multimedia applications put new requirements on application


designers, as well as network protocol and system designers. We analyze the most
important enforced requirements with respect to the multimedia transmission.


User and Application Requirements

Network multimedia applications by themselves impose new requirements onto data


handling in computing and communications because they need (1) substantial data
throughput, (2) fast data forwarding, (3) service guarantees, and (4) multicasting.

 Data Throughput

Audio and video data resemble a stream-like behavior, and they demand, even in
a compressed mode, high data throughput. In a workstation or network, several of
those streams may exist concurrently, demanding a high throughput (a rough calculation
follows below). Further, the data movement requirements on the local end-system translate
into the manipulation of large quantities of data in real-time where, for example, data
copying can create a bottleneck in the system.
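The rough calculation below gives an idea of the orders of magnitude involved. The frame size, frame rate, colour depth and compression ratio are illustrative assumptions, not figures from the text.

    # Back-of-the-envelope throughput estimate for one video stream.
    width, height = 640, 480
    frames_per_second = 25
    bits_per_pixel = 24

    uncompressed = width * height * bits_per_pixel * frames_per_second   # bit/s
    print(f"Uncompressed: {uncompressed / 1e6:.1f} Mbit/s")              # about 184 Mbit/s

    # Even at an assumed 30:1 compression ratio the stream still needs several
    # Mbit/s, and an end-system may have to carry several such streams at once.
    print(f"Compressed (30:1): {uncompressed / 30 / 1e6:.1f} Mbit/s")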

 Fast Data Forwarding

Fast data forwarding imposes a problem on end-systems where different


applications exist in the same end-system, and they each require data movement
ranging from normal, error-free data transmission to new time-constraint traffic
types. But generally, the faster a communication system can transfer a data
packet, the fewer packets need to be buffered. The requirement leads to a careful
spatial and temporal resource management in the end-systems and
routers/switches. The application imposes constraints on the total maximal end-to-
end delay. In a retrieval-like application, such as video-on-demand, a delay of up
to one second may be easily tolerated. In an opposite dialogue application, such as
a videophone or videoconference, demand end-to-end delays lower that typically
200 m/sec inhibit a natural communication between the users.

 Service Guarantees

Distributed multimedia applications need service guarantees; otherwise their


acceptance does not come through as these systems, working with continuous
media, compete against radio and television services. To achieve services
guarantees, resource management must be used. Without resource management
in end-systems and switches/routers, multimedia systems cannot provide reliable
QoS to their users because transmission over unreserved resources leads to
dropped or delayed packets.

 Multicasting

Multicast is important for multimedia-distributed applications in terms of sharing


resources like the network bandwidth and the communication protocol processing
at end-systems.


Processing and Protocol Constraints

Communication protocols have, on the contrary, some constraints which need to


be considered when we want to match application requirements to system platforms.

A typical multimedia application does not require processing of audio and video
to be performed by the application itself. Usually the data are obtained from a source
(e.g., microphone, camera, disk, and network) and are forwarded to a sink (e.g.,
speaker, display, and network). In such a case, the requirements of continuous-media
data are satisfied best if they take the “shortest possible path” through the system,
i.e., to copy data directly from adapter-to-adapter, and the program merely sets the
correct switches for the data flow by connecting sources to sinks. Hence, the
application itself never really touches the data as is the case in traditional processing.

A problem with direct copying from adapter-to-adapter is the control and the
change of QoS parameters. In multimedia systems, such an adapter-to-adapter
connection is defined by the capabilities of the two involved adapters and the bus
performance. In today’s systems, this connection is static. This architecture of low-
level data streaming corresponds to proposals for using additional new busses for
audio and video transfer within a computer. It also enables a switch-based rather than
bus-based data transfer architecture. Note, in practice we encounter headers and
trailers surrounding continuous-media data coming from devices and being delivered
to devices. In the case of compressed video data, e.g., MPEG-2, the program stream
contains several layers of headers compared with the actual group of pictures to be
displayed.

Protocols involve a lot of data movement because of the layered structure of the
communication architecture. But copying of data is expensive and has become a
bottleneck, hence other mechanisms for buffer management must be found.

Different layers of the communication system may have different PDU sizes;
therefore, a segmentation and reassembly occur. This phase has to be done fast, and
efficient. Hence, this portion of a protocol stack, at least in the lower layers, is done
in hardware, or through efficient mechanisms in software.

17.3 Transport Layer

Transport protocols, to support multimedia transmission, need to have new


features and provide the following functions: semi-reliability, multicasting, NAK (Negative
Acknowledgment)-based error recovery mechanisms and rate control.

First, we present transport protocols, such as TCP and UDP, which are used in the
Internet protocol stack for multimedia transmission, and secondly we analyze new
emerging transport protocols, such as RTP, XTP and other protocols, which are suitable
for multimedia.


17.3.1 Internet Transport Protocols

The Internet protocol stack includes two types of transport protocols:

 Transmission Control Protocol(TCP)

Early implementations of video conferencing applications were implemented on


top of the TCP protocol. TCP provides a reliable, serial communication path, or
virtual circuit, between processes exchanging a full-duplex stream of bytes. Each
process is assumed to reside in an internet host that is identified by an IP address.
Each process has a number of logical, full-duplex ports through which it can set
up and use full-duplex TCP connections.

Multimedia applications do not always require full-duplex connections for


the transport of continuous media. An example is a TV broadcast over a LAN,
which requires a full-duplex control connection, but often a simplex continuous
media connection is sufficient.

During the data transmission over the TCP connection, TCP must achieve
reliable, sequenced delivery of a stream of bytes by means of an underlying,
unreliable datagram service. To achieve this, TCP makes use of retransmission
on timeouts and positive acknowledgments upon receipt of information. Because
retransmission can cause both out-of-order arrival and duplication of data,
sequence numbering is crucial. Flow control in TCP makes use of a window
technique in which the receiving side of the connection reports to the sending side
the sequence numbers it may transmit at any time and those it has received
contiguously thus far.

For multimedia, the positive acknowledgment causes substantial overhead


as all packets are sent with a fixed rate. Negative acknowledgment would be a
better strategy. Further, TCP is not suitable for real-time video and audio
transmission because its retransmission mechanism may cause a violation of
deadlines which disrupt the continuity of the continuous media streams. TCP was
designed as a transport protocol suitable for non-real-time reliable applications,
such as file transfer, where it performs the best.

 User Datagram Protocol(UDP)

UDP is a simple extension to the Internet network protocol IP that


supports multiplexing of datagrams exchanged between pairs of Internet hosts. It
offers only multiplexing and checksumming, nothing else. Higher-level protocols
using UDP must provide their own retransmission, packetization, reassembly,
flow control, congestion avoidance, etc.

Many multimedia applications use this protocol because it provides to


some degree the real-time transport property, although loss of PDUs may occur.


For experimental purposes, UDP above IP can be used as a simple, unreliable


connection for medium transport.

In general, UDP is not suitable for continuous media streams because it


does not provide the notion of connections, at least at the transport layer;
therefore, different service guarantees cannot be provided.
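The typical "fire and forget" use of UDP for continuous media can be illustrated with a short sketch: packets are paced at a fixed rate, and a lost packet is simply skipped rather than retransmitted after its deadline. The destination address and packet size below are placeholders, not values from the text.

    # Paced UDP sender sketch for audio-like data (illustrative only).
    import socket
    import time

    DEST = ("127.0.0.1", 5004)      # hypothetical receiver address
    PACKET_SIZE = 160               # e.g. 20 ms of 8 kHz, 8-bit audio
    INTERVAL = 0.020                # one packet every 20 ms

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(PACKET_SIZE)    # silence, standing in for real media data

    for seq in range(100):
        sock.sendto(payload, DEST)  # no connection, no acknowledgment, no retransmission
        time.sleep(INTERVAL)

    sock.close()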

 Real-time Transport Protocol(RTP)

RTP is an end-to-end protocol providing network transport function


suitable for applications transmitting real-time data, such as audio, video or
simulation data over multicast or unicast network services. It is specified and still
augmented by the Audio/Video Transport Working Group. RTP is primarily
designed to satisfy the needs of multi-party multimedia conferences, but it is not
limited to that particular application.

RTP has a companion protocol RTCP (RTP-Control Protocol) to convey


information about the participants of a conference. RTP provides functions, such
as determination of media encoding, synchronization, framing, error detection,
encryption, timing and source identification. RTCP is used for the monitoring of
QoS and for conveying information about the participants in an ongoing session.
The first aspect of RTCP, the monitoring is done by an application called a QoS
monitor which receives the RTCP messages.

RTP does not address resource reservation and does not guarantee QoS for real-
time services. This means that it does not provide mechanism to ensure timely
delivery of data or guaranteed delivery, but relies on lower-layer services to do so.

RTP makes use of the network protocol ST-II or UDP/IP for the delivery of data.
It relies on the underlying protocol(s) to provide demultiplexing.

Profiles are used to specify certain parts of the header for particular sets of
applications. This means that particular media information is stored in an
audio/video profile, such as a set of formats (e.g., media encodings) and a default
mapping of those formats.
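To make the framing function more concrete, the sketch below packs the 12-byte fixed RTP header (version, padding, extension, CSRC count, marker, payload type, sequence number, timestamp and synchronization source identifier) with Python's struct module. The payload type and SSRC values are illustrative assumptions.

    # Packing a minimal fixed RTP header (sketch; values are illustrative).
    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
        version, padding, extension, csrc_count = 2, 0, 0, 0
        byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | payload_type
        # network byte order: 2 flag bytes, 16-bit sequence number,
        # 32-bit timestamp, 32-bit SSRC
        return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

    header = rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
    print(len(header), header.hex())    # 12 bytes, ready to prepend to the payload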

 Xpress Transport Protocol (XTP)

XTP was designed to be an efficient protocol, taking into account the low error
ratios and higher speeds of current networks. It is still in the process of
augmentation by the XTP Forum to provide a better platform for the incoming
variety of applications. XTP integrates transport and network protocol
functionalities to have more control over the environment in which it operates.
XTP is intended to be useful in a wide variety of environments, from real-time
control systems to remote procedure calls in distributed operating systems and
distributed databases to bulk data transfer. It defines for this purpose six service


types: connection, transaction, unacknowledged datagram, acknowledged
datagram, isochronous stream and bulk data.

In XTP, the end-user is represented by a context becoming active within an XTP implementation.

17.3.2 Other Transport Protocols

Some other transport protocols designed for multimedia transmission are:

 Tenet Transport Protocols

The Tenet protocol suite for the support of multimedia transmission was developed
by the Tenet Group at the University of California at Berkeley. The transport
protocols in this protocol stack are the Real-time Message Transport Protocol
(RMTP) and the Continuous Media Transport Protocol (CMTP). They run above the
Real-Time Internet Protocol (RTIP).

 Heidelberg Transport System (HeiTS)

The Heidelberg Transport System (HeiTS) is a transport system for multimedia communication. It was developed at the IBM European Networking Center (ENC), Heidelberg. HeiTS provides the raw transport of multimedia over networks.

 METS: A Multimedia Enhanced Transport Service

METS is the multimedia transport service developed at the University of Lancaster. It runs on top of ATM networks.

The transport protocol provides an ordered, but non-assured, connection-oriented communication service and features resource allocation based on the user's QoS specification. It allows the user to select upcalls for the notification of corrupt and lost data at the receiver, and also allows the user to re-negotiate QoS levels.

Check Your Progress 1


Write a brief account on protocols used for multimedia content transmission.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


17.4 Network Layer


The requirements on the network layer for multimedia transmission are the provision of high bandwidth, multicasting, resource reservation and QoS guarantees, new routing protocols with support for streaming capabilities, and new higher-capacity routers with support for integrated services.

17.4.1 Internet Services and Protocols

Internet is currently going through major changes and extensions to meet the growing
need for real-time services from multimedia applications.

There are several protocols which are changing to provide integrated services, such as best-effort service, real-time service and controlled link sharing. The new service, controlled link sharing, is requested by network operators. They need the ability to control the sharing of bandwidth on a particular link among different traffic classes. They also want to divide traffic into a few administrative classes and assign each a minimal percentage of the link bandwidth under conditions of overload, while allowing 'unused' bandwidth to be available at other times.

 Internet Protocol (IP)

IP provides for the unreliable carriage of datagrams from source host to destination host, possibly passing through one or more gateways (routers) and networks in the process. We examine some of the IP properties which are relevant to multimedia transmission requirements:

- Type of Service: IP includes identification of the service quality through the Type of Service (TOS) specification. TOS specifies (1) a precedence relation and (2) services such as minimize delay, maximize throughput, maximize reliability, minimize monetary cost and normal service. Any assertion of TOS can only be used if the network into which an IP packet is injected has a class of service that matches the particular combination of TOS markings selected.

 Internet Group Management Protocol ( IGMP)

The Internet Group Management Protocol (IGMP) is a protocol for managing Internet multicasting groups. It is used by conferencing applications to join and leave a particular multicast group. The basic service permits a source to send datagrams to all members of a multicast group. There are no guarantees of delivery to any or all targets in the group.


Multicast routers periodically send queries (Host Membership Query Messages) to refresh their knowledge of the memberships present on a particular network. If no reports are received for a particular group after some number of queries, the routers assume the group has no local members and that they need not forward remotely originated multicasts for that group onto the local network. Otherwise, hosts respond to a query by generating reports (Host Membership Reports), reporting each host group to which they belong on the network interface from which the query was received. To avoid an "implosion" of concurrent reports there are two possibilities: either a host, rather than sending reports immediately, delays the generation of the report for a D-second interval; or, a report is sent with an IP destination address equal to the host group address being reported. This causes other members of the same group on the network to overhear the report, and only one report per group is presented on the network.
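The join/leave behaviour described above is what a conferencing application triggers when it subscribes to a multicast group. The following minimal Python sketch (the group address and port are arbitrary examples) asks the local IP stack to join a group, which in turn causes IGMP membership reports to be sent on that interface.

import socket, struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group; the operating system generates the IGMP report
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)   # receive one multicast datagram
print(len(data), "bytes from", sender)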

 Resource reservation Protocol (RSVP)

RSVP is a protocol which transfers reservations and keeps state at the intermediate nodes. It does not have a data transfer component. RSVP messages are sent as IP datagrams, and the router keeps "soft state", which is refreshed by periodic reservation messages. In the absence of the refresh messages, the routers delete the reservation after a certain timeout.

This protocol was specified by IETF to provide one of the components for
integrated services on the Internet. To implement integrated services, four
components need to be implemented: the packet scheduler, admission control
routine, classifier, and the reservation setup protocol.

17.4.2 STream protocol, Version 2 (ST-II)

ST-II provides a connection-oriented, guaranteed service for data transport based on the stream model. The connections between the sender and several receivers are set up as uni-directional connections, although duplex connections can also be set up.

ST-II is an extension of the original ST protocol. It consists of two components: the ST Control Message Protocol (SCMP), which is a reliable, connectionless transport for the protocol messages, and the ST protocol itself, which is an unreliable transport for the data.

17.5 Let us sum up

We have discussed the following points in this lesson


 Network communications need (1) substantial data throughput, (2) fast data
forwarding, (3) service guarantees, and (4) multicasting
 Transport protocols, to support multimedia transmission, need new features and
must provide the following functions: semi-reliability, multicasting, a NAK
(Negative Acknowledgment)-based error recovery mechanism and rate control


 Transport protocols such as TCP and UDP are used in the Internet protocol stack
for multimedia transmission; newer transport protocols, such as RTP, XTP and
others, are better suited for multimedia.

17.6 Lesson-end activities

Right-click the My Network Places icon in your computer and choose
Properties. List the protocols installed in your computer and find out the
use of each protocol.

17.7 Model answers to “Check your progress”

The protocols used for multimedia are


TCP, RTP, XTP

17.8 References

1. “Multimedia : Computing, Communications and applications” – By Ralf


Steinmetz and Klara Nahrstedt
2. “Multimedia Networking: Technology, Management and Applications” - by
Syed Mahbubur Rahman
3. “Wireless Multimedia Network Technologies” - by Rajamani Ganesh, Kaveh
Pahlavan, Zoran Zvonar
4. “Wireless Communication Technologies: New Multimedia Systems” - by
Norihiko Morinaga, Seiichi Sampei, Ryuji Kohno
5. “Multimedia Systems, Standards, and Networks” By Atul Puri, Tsuhan Chen


Lesson 18 Quality of Service and Resource Management


Contents

18.0 Aims and Objectives


18.1 Introduction
18.2 Quality of Service and Process Management
18.3 Translation
18.4 Managing resources during multimedia transmission
18.5 Architectural issues
18.6 Let us sum up
18.7 Lesson-end activities
18.8 Model answers to “Check your progress”
18.9 References

18.0 Aims and Objectives

In this chapter we will learn how to measure and improve the quality of service in
multimedia transmission.
At the end of this lesson, the learner will have a clear idea of the parameters used in measuring the quality of service and the various measures that are taken to ensure quality in multimedia content and transmission.

18.1 Introduction

Every product is expected to have a certain quality, apart from merely satisfying its requirements. This quality is measured by various parameters.

Parameterization of the services is defined in ISO (International Standard Organization) standards through the notion of Quality of Service (QoS). The ISO standard defines QoS as a concept for specifying how "good" the offered networking services are. QoS can be characterized by a number of specific parameters, and several important issues need to be considered with respect to QoS, as discussed below.

18.2 Quality of Service and Process Management

The user/application requirements on the Multimedia Communication System (MCS) are mapped into communication services which make the effort to satisfy the requirements. Because of the heterogeneity of the requirements coming from different distributed multimedia applications, the services in multimedia systems need to be parameterized. Parameterization allows for flexibility and customization of the services, so that each application does not require the implementation of a new set of service providers.


QoS Layering

Traditional QoS (ISO standards) was provided by the network layer of the
communication system. An enhancement of QoS was achieved through inducing QoS
into transport services. For MCS, the QoS notion must be extended because many other
services contribute to the end-to-end service quality.

To discuss QoS and resource management further, we need a layered model of the MCS with respect to QoS; we refer throughout this lesson to the model shown in the following figure. The MCS consists of three layers: application, system (including communication services and operating system services), and devices (network and Multimedia (MM) devices). Above the application may or may not reside a human user. This implies the introduction of QoS in the application (application QoS), in the system (system QoS) and in the network (network QoS). In the case of a human user, the MCS may also have a user QoS specification. In the device layer we concentrate on the network device and its QoS, because it is of interest to us in the MCS; the MM devices find their representation (partially) in the application QoS.

[Figure: QoS layered model for the Multimedia Communication System – the User (User QoS) sits above the Application (Application QoS), which sits above the System, i.e., the operating and communication system (System QoS), which in turn sits above the MM Devices (Device QoS) and the Network (Network QoS)]

QoS Description

The set of chosen parameters for the particular service determines what will be
measured as the QoS. Most of the current QoS parameters differ from the parameters
described in ISO because of the variety of applications, media sent and the quality of the
networks and end-systems. This also leads to many different QoS parameterizations in
the literature. We give here one possible set of QoS parameters for each layer of MCS.


The application QoS parameters describe requirements on the communication services and OS services resulting from the application QoS. They may be specified in terms of both quantitative and qualitative criteria. Quantitative criteria are those which can be evaluated in terms of certain measures, such as bits per second, number of errors, task processing time, PDU size, etc. The QoS parameters include throughput, delay, response time, rate, data corruption at the system level, and task and buffer specification.

18.3 Translation

It is widely accepted that different MCS components require different QoS parameters. For example, the mean loss rate, known from packet networks, has no meaning as a QoS parameter for a video capture device. Likewise, frame quality is of little use to a link layer service provider, whereas the frame quality in terms of the number of pixels along both axes is a QoS value used to initialize frame capture buffers.

We always distinguish between user and application, system, and network, each with different QoS parameters. However, in future systems there may be even more "layers", or there may be a hierarchy of layers, where some QoS values are inherited and others are specific to certain components. In any case, it must always be possible to derive all QoS values from the user and application QoS values. This derivation, known as translation, may require "additional knowledge" stored together with the specific component. Hence, translation is an additional service for layer-to-layer communication during the call establishment phase. The split of parameters requires translation functions as follows:

 Human Interface-Application QoS

The service which may implement the translation between human user and application QoS parameters is called a tuning service. A tuning service provides the user with a Graphical User Interface (GUI) for the input of application QoS, as well as the output of the negotiated application QoS. The translation is represented through video and audio clips (in the case of audio-visual media), which will run at the negotiated quality corresponding to, for example, the video frame resolution that the end-system and the network can support.

 Application QoS-System QoS

Here, the translation must map the application requirements into the system QoS parameters, which may lead to translations such as from a "high quality" synchronization user requirement to a small (milliseconds) synchronization skew QoS parameter, or from video frame size to transport packet size. It may also be connected with possible segmentation/reassembly functions (a small sketch of this kind of mapping follows this list).


 System QoS-Network QoS

This translation maps the system QoS (e.g., transport packet end-to-end delay) into the underlying network QoS parameters (e.g., in ATM, the end-to-end delay of cells) and vice versa.
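The following is a minimal sketch of the application-to-system translation step, assuming hypothetical application QoS parameters (frame rate, frame size in bytes) and hypothetical system QoS parameters (packet size, packet rate, throughput, end-to-end delay); a real system would additionally negotiate the derived values with the resource manager.

def translate_app_to_system_qos(frame_rate, frame_bytes,
                                max_packet_bytes=1400, end_to_end_delay_ms=150):
    """Map (assumed) application QoS values to (assumed) system QoS values."""
    packets_per_frame = -(-frame_bytes // max_packet_bytes)   # ceiling division
    return {
        "packet_size_bytes":    max_packet_bytes,
        "packet_rate_per_s":    frame_rate * packets_per_frame,
        "throughput_bit_per_s": frame_rate * frame_bytes * 8,
        "end_to_end_delay_ms":  end_to_end_delay_ms,           # passed down unchanged
    }

# Example: 25 frames/s video with 30 KB per compressed frame
print(translate_app_to_system_qos(25, 30000))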

18.4 Managing Resources during Multimedia Transmission

QoS guarantees must be met in the application, system and network to get the
acceptance of the users of MCS. There are several constraints which must be satisfied to
provide guarantees during multimedia transmission: (1) time constraints which include
delays; (2) space constraints such as system buffers; (3) device constraints such as frame
grabbers allocation; (4) frequency constraints which include network bandwidth and
system bandwidth for data transmission; and, (5) reliability constraints. These constraints
can be specified if proper resource management is available at the end-points, as well as
in the network.

Rate Control

If we assume an MCS to be a tightly coupled system which has a central process managing all system components, then this central instance can impose synchronous data handling over all resources; in effect, we encounter a fixed, imposed data rate. However, an MCS usually comprises loosely coupled end-systems which communicate over networks. In such a setup, rates must be imposed; here, we make use of all available strategies in the communications environment.

A rate-based service discipline is one that provides a client with a minimum service
rate independent of the traffic characteristics of other clients. Such a discipline, operating
at a switch, manages the following resources: bandwidth, service time (priority) and
buffer space. Several rate-based scheduling disciplines have been developed.

 Fair Queuing

If N channels share an output trunk, then each one should get 1/Nth of the bandwidth. If any channel uses less bandwidth than its share, then this portion is shared among the rest equally. This behaviour can be achieved by Bit-by-bit Round Robin (BR) service among the channels. The BR discipline serves the n queues in round robin order, sending one bit from each queue that has a packet in it. Clearly, this scheme is not efficient; hence, fair queuing emulates BR as follows: each packet is given a finish number, which is the round number at which the packet would have received service if the server had been doing BR. The packets are served in the order of the finish number (as sketched below). Channels can be given different fractions of the bandwidth by assigning them weights, where the weight corresponds to the number of bits of service the channel receives per round of BR service.
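A very small sketch of the finish-number idea is given below; it assumes all channels are continuously backlogged (so the round-number bookkeeping of a real fair queuing implementation can be omitted), and the channel names, weights and packet lengths are made-up examples.

import heapq

def fair_queue_order(queues, weights):
    """queues: dict channel -> list of packet lengths (all assumed backlogged).
    Returns the service order by simplified finish numbers."""
    finish = {q: 0.0 for q in queues}
    heap = []
    for q, packets in queues.items():
        for length in packets:
            # the finish number grows by length/weight per packet of this channel
            finish[q] += length / weights[q]
            heapq.heappush(heap, (finish[q], q, length))
    return [heapq.heappop(heap) for _ in range(len(heap))]

order = fair_queue_order({"audio": [200, 200, 200], "data": [1500, 1500]},
                         weights={"audio": 1, "data": 1})
for finish_no, channel, length in order:
    print(round(finish_no), channel, length)

With equal weights, the three short audio packets are served before the two long data packets, which is exactly the bandwidth-sharing behaviour described above.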


 Virtual Clock

This discipline emulates Time Division Multiplexing (TDM). A virtual transmission time is allocated to each packet. It is the time at which the packet would have been transmitted if the server were actually doing TDM.

 Delay Earliest-Due-Date (Delay EDD)

Delay EDD is an extension of EDF (Earliest Deadline First) scheduling where the server negotiates a service contract with each source. The contract states that if a source obeys a peak and average sending rate, then the server provides bounded delay. The key then lies in the assignment of deadlines to packets. The server sets a packet's deadline to the time at which it should be sent had it been received according to the contract; this is the expected arrival time added to the delay bound at the server (a small sketch follows). By reserving bandwidth at the peak rate, Delay EDD can assure each channel a guaranteed delay bound.
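The deadline assignment just described can be sketched as follows, assuming the contract is given simply by an average packet inter-arrival time and a per-node delay bound (both values below are hypothetical).

def delay_edd_deadline(packet_index, contract_interval_ms, delay_bound_ms,
                       connection_start_ms=0):
    # Expected arrival time if the source obeys its contracted average rate
    expected_arrival = connection_start_ms + packet_index * contract_interval_ms
    # Deadline = expected arrival + per-node delay bound, so a source sending
    # faster than its contract gains no scheduling advantage
    return expected_arrival + delay_bound_ms

# Hypothetical example: one packet every 20 ms, 5 ms delay bound at this node
print([delay_edd_deadline(i, 20, 5) for i in range(4)])   # [5, 25, 45, 65]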

 Jitter Earliest-Due-Date (Jitter EDD)

Jitter EDD extends Delay EDD to provide delay-jitter bounds. After a packet has
been served at each server, it is stamped with the difference between its deadline
and actual finishing time. A regulator at the entrance of the next switch holds the
packet for this period before it is made eligible to be scheduled. This provides the
minimum and maximum delay guarantees.

 Stop-and-Go

This discipline preserves the "smoothness" property of the traffic as it traverses the network. The main idea is to treat all traffic as frames of length T bits, meaning that time is divided into frames. At each frame time, only packets that have arrived at the server in the previous frame time are sent. It can be shown that the delay and delay-jitter are bounded, although the jitter bound does not come for free. The reason is that, under Stop-and-Go rules, packets arriving at the start of an incoming frame must be held for a full time T before being forwarded; so, packets that would otherwise arrive quickly are instead delayed. Further, since the delay and delay-jitter bounds are linked to the length of the frame time, an improvement of Stop-and-Go can be achieved by letting it operate with multiple frame sizes.

 Hierarchical Round Robin (HRR)

An HRR server has several service levels where each level provides round robin
service to a fixed number of slots. Some number of slots at a selected level are
allocated to a channel and the server cycles through the slots at each level. The
time a server takes to service all the slots at a level is called the frame time at the
level. The key of HRR is that it gives each level a constant share of the
bandwidth. "Higher" levels get more bandwidth than "lower" levels, so the frame
time at a higher level is smaller than the frame time at a lower level. Since a
server always completes one round through its slots once every frame time, it can
provide a maximum delay bound to the channels allocated to that level.

Check Your Progress 1


Enumerate a few rate-based scheduling disciplines :
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

18.5 Architectural Issues

Networked multimedia systems work in connection-oriented mode, although the Internet is an example of a connectionless network where QoS is introduced on a packet basis (every IP packet carries Type of Service parameters because the Internet does not have a service notion). An MCS based on the Internet protocol stack uses RSVP, the new control reservation protocol, which accompanies the IP protocol and provides some kind of "connection" along the path where resources are allocated.

QoS description, distribution and provision, and the connected resource admission, reservation, allocation and provision, must be embedded in different components of the multimedia communication architecture. This means that proper services and protocols must be provided in the end-points and in the underlying network architectures.

Especially, the system domain needs to have QoS and resource management.
Several important issues, as described in detail in previous sections, must be considered
in the end-point architectures:

(1) QoS specification, negotiation and provision;


(2) resource admission and reservation for end-to-end QoS; and,
(3) QoS configurable transport systems.

Some examples of architectural choices where QoS and resource management are
designed and implemented include the following:

1. The OSI architecture provides QoS in the network layer and some enhancements
in the transport layer. The OSI 95 project considers integrated QoS specification
and negotiation in the transport protocols.
2. Lancaster’s QoS-Architecture (QoS-A) offers a framework to specify and
implement the required performance properties of multimedia applications over
high-performance ATM-based networks. QoS-A incorporates the notions of flow,
service contract and flow management. The Multimedia Enhanced Transport
Service (METS) provides the functionality to contract QoS.
3. The Heidelberg Transport System (HeiTS), based on the ST-II network protocol,
provides continuous-media exchange with QoS guarantees, an upcall structure,
resource management and real-time mechanisms. HeiTS transfers continuous
media data streams from one origin to one or multiple targets via multicast.
HeiTS nodes negotiate QoS values by exchanging flow specifications to determine
the resources required: delay, jitter, throughput and reliability.
4. The UC Berkeley’s Tenet Protocol Suite with protocol set RCAP, RTIP, RMTP
and CMTP provides network QoS negotiation, reservation and resource
administration through the RCAP control and management protocol.
5. The Internet protocol stack, based on IP protocol, provides resource reservation if
the RSVP control protocol is used.
6. QoS handling and management is provided in UPenn’s end-point architecture
(OMEGA Architecture) at the application and transport subsystems, where the
QoS Broker, as the end-to-end control and management protocol, implements
QoS handling over both subsystems and relies on control and management in
ATM networks.
7. The Native-Mode ATM Protocol Stack, developed on the IDLInet (IIT Delhi
Low-cost Integrated Network) testbed at the Indian Institute of Technology,
provides network QoS guarantees.

18.6 Let us sum up

This lesson on Quality of Service in Multimedia Communication Systems has discussed the following points:
 Parameterization of the services is defined in ISO (International Standard
Organization) standards through the notion of Quality of Service (QoS).
 The MCS consists of three layers: application, system (including communication
services and operating system services), and devices (network and Multimedia
(MM) devices).
 The QoS parameters include throughput, delay, response time, rate, data
corruption at the system level, and task and buffer specification.

18.7 Lesson-end activities

1. Consider any software of your choice. List down a few parameters that you
consider for determining QoS in the software
2. Consider any High level language of your own choice. Make a few
observations on how the compiler rectifies the errors on its own.


18.8 Model answers to “Check your progress”

The rate-based scheduling disciplines can include a few of the following :

 Fair Queuing
 Virtual Clock
 Delay Earliest-Due-Date (Delay EDD)
 Jitter Earliest-Due-Date (Jitter EDD)
 Stop-and-Go
 Hierarchical Round Robin (HRR)

18.9 References
1. Multimedia : Computing, Communications and applications – By Ralf
Steinmetz and Klara Nahrstedt
2. "Multimedia:Concepts and Practice" By Stephen McGloughlin


Lesson 19 Synchronisation
Contents

19.0 Aims and Objectives


19.1 Introduction
19.2 Notion of Synchronisation
19.3 Basic Synchronisation issues
19.4 Intra and Inter Object synchronisation
19.5 Lip synchronisation requirements
19.6 Pointer Synchronisation requirements
19.7 Reference model for Multimedia synchronisation
19.8 Synchronisation specification
19.9 Let us sum up
19.10 Lesson-end activities
19.11 Model answers to “Check your progress”
19.12 References

19.0 Aims and Objectives

This lesson aims at introducing the concept of synchronisation. At the end of this
lesson the learner will be able to:
i) know the meaning of synchronisation
ii) understand synchronisation in audio and video
iii) understand a reference model for multimedia synchronisation

19.1 Introduction

Advanced multimedia systems are characterized by the integrated, computer-controlled generation, storage, communication, manipulation and presentation of independent time-dependent and time-independent media. The key issue which provides integration is the digital representation of any data and the synchronization of and between the various kinds of media and data.

The word synchronization refers to time. Synchronization in multimedia systems refers to the temporal relations between media objects in the multimedia system. In a more general and widely used sense, some authors use synchronization in multimedia systems as comprising content, spatial and temporal relations between media objects. We differentiate between time-dependent and time-independent media objects. A time-dependent media object consists of a sequence of presentation units; if the presentation durations of all units are equal, it is called a continuous media object. A video consists of a number of ordered frames; each of these frames has a fixed presentation duration. A time-independent media object is any kind of traditional media like text and images; the semantics of the respective content does not depend upon a presentation according to the time domain.


Synchronization between media objects comprises relations between time-dependent media objects and time-independent media objects. A daily example of synchronization between continuous media is the synchronization between the visual and acoustical information in television. In a multimedia system, a similar synchronization must be provided for audio and moving pictures.

Synchronization is addressed and supported by many system components, including the operating system, communication systems, databases and documents, and often even by applications. Hence, synchronization must be considered at several levels in a multimedia system.

19.2 Notion of Synchronization


Several definitions for the terms multimedia application and multimedia systems
are described in the literature. Three criteria for the classification of a system as a
multimedia system can be distinguished: the number of media, the types of supported
media and the degree of media integration.

The simplest criterion is the number of media used in an application, using only
this criterion, even a document processing application that supports text and graphics can
be regarded as a multimedia system.

The following figure classifies applications according to the three criteria. The
arrows indicate the increasing degree of multimedia capability for each criterion.

[Figure: Classification of media use in multimedia systems]


Integrated digital systems can support all types of media and, due to digital processing, may provide a high degree of media integration. Systems that handle time-dependent analog media objects and time-independent digital media objects are called hybrid systems. The disadvantage of hybrid systems is that they are restricted with regard to the integration of time-dependent and time-independent media because, for example, audio and video are stored on different devices than time-independent media objects, and multimedia workstations must comprise both types of devices.

19.3 Basic Synchronization Issues

Integrated media processing is an important characteristic of a multimedia system. The main reasons for these integration demands are the inherent dependencies between the information coded in the media objects. These dependencies must be reflected in the integrated processing, including storage, manipulation, communication, capturing and, in particular, the presentation of the media objects.

Content Relations

Content relations define a dependency of media objects on some data.

Spatial Relations

The spatial relations that are usually known as layout relationships define the
space used for the presentation of a media object on an output device at a certain point of
time in a multimedia presentation. If the output device is two-dimensional (e.g., monitor
or paper), the layout specifies the two-dimensional area to be used.

In desktop-publishing applications, this is usually expressed using layout frames. A layout frame is placed and content is assigned to this frame. The positioning of a layout frame in a document may be fixed to a position in the document, to a position on a page, or it may be relative to the positioning of other frames.

Temporal Relations

Temporal relations define the temporal dependencies between media objects. They are of interest whenever time-dependent media objects exist.

An example of temporal relations is the relation between a video and an audio object that are recorded during a concert. If these objects are presented, the temporal relation during the presentations of the two media objects must correspond to the temporal relation at the recording moment.


19.4 Intra and Inter Object Synchronization


We distinguish between time relations within the units of one time-dependent
media object itself and time relations between media objects. This separation helps to
clarify the mechanisms supporting both types of relations, which are often very different.

 Intra-object synchronization: Intra-object synchronization refers to the time relation between various presentation units of one time-dependent media object. An example is the time relation between the single frames of a video sequence. For a video with a rate of 25 frames per second, each of the frames must be displayed for 40 ms. The following figure shows this for a video sequence presenting a bouncing ball (a small sketch after these two examples illustrates the timing in code).

[Figure: Intra-object synchronization between the frames of a video sequence showing a jumping ball; each frame is presented for 40 ms]

 Inter-object synchronization: Inter-object synchronization refers to the synchronization between media objects. The following figure shows an example of the time relations of a multimedia synchronization that starts with an audio/video sequence, followed by several pictures and an animation that is commented by an audio sequence.

[Figure: Inter-object synchronization example showing the temporal relations in a multimedia presentation including audio (Audio 1, Audio 2), video, animation and picture objects (P1, P2, P3)]
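As a small illustration of both kinds of relations (a sketch only, with made-up object names and times), the code below first derives the intra-object presentation instants of a 25 frames-per-second video (one frame every 40 ms, as in the figure above) and then lays out a simple inter-object timeline in which pictures and a commented animation follow the audio/video sequence.

FRAME_DURATION_MS = 40          # 25 frames per second

def intra_object_schedule(n_frames, start_ms=0):
    """Presentation instant of every frame of one continuous media object."""
    return [start_ms + i * FRAME_DURATION_MS for i in range(n_frames)]

# Inter-object timeline: (object name, start in ms, duration in ms)
timeline = [
    ("audio1/video", 0,     10000),
    ("picture P1",   10000,  2000),
    ("picture P2",   12000,  2000),
    ("picture P3",   14000,  2000),
    ("animation",    16000,  8000),
    ("audio2",       16000,  8000),   # runs in parallel with the animation
]

print(intra_object_schedule(5))        # [0, 40, 80, 120, 160]
for name, start, dur in sorted(timeline, key=lambda t: t[1]):
    print(start, "ms: start", name, "for", dur, "ms")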


Live and Synthetic Synchronization


The live and synthetic synchronization distinction refers to the type of the
determination of temporal relations. In the case of live synchronization, the goal of the
synchronization is to exactly reproduce at a presentation the temporal relations as they
existed during the capturing process. In the case of synthetic synchronization, the
temporal relations are artificially specified.

Live Synchronization

A typical application of live synchronization is conversational services. In the scope of a source/sink scenario, volatile data streams (i.e., data being captured from the environment) are created at the source and presented at the sink. The common context of several streams on the source site must be preserved at the sink. The source may be comprised of acoustic and optical sensors, as well as media conversion units. The connection offers a data path between source and sink. The sink presents the units to the user. A source and sink may be located at different sites.


Synthetic Synchronization

The emphasis of synthetic synchronization is to support flexible synchronization relations between media. In synthetic synchronization, two phases can be distinguished:

 In the specification phase, temporal relations between the media objects are
defined.
 In the presentation phase, a run-time system presents data in a synchronized
mode.



The following example shows aspects of live synchronization:

Two persons located at different sites of a company discuss a new product. Therefore, they use a video conference application for person-to-person discussion. In addition, they share a blackboard where they can display parts of the product, and they can point with their mouse pointers to details of these parts and discuss issues like: "This part is designed to…"

In the case of synthetic synchronization, temporal relations have been assigned to media
objects that were created independently of each other. The synthetic synchronization is
often used in presentation and retrieval-based systems with stored data objects that are
arranged to provide new combined multimedia objects. A media object may be part of
several multimedia objects.

19.5 Lip synchronization Requirements

Lip synchronization refers to the temporal relationship between an audio and a video stream for the particular case of humans speaking. The time difference between related audio and video LDUs is known as the skew. Streams which are perfectly "in sync" have no skew, i.e., 0 ms. Experiments at the IBM European Networking Center measured skews that were perceived as "out of sync." In these experiments, users often mentioned that something was wrong with the synchronization, but this did not disturb their feeling for the quality of the presentation. Therefore, the experimenters additionally evaluated the tolerance of the users by asking whether data that was out of sync affected the quality of the presentation (a small sketch of skew measurement follows the list below).

Steps of 40 ms were chosen for:

1. The difficulty of human perception to distinguish any lip synchronization skews with a higher resolution.
2. The capability of multimedia software and hardware devices to refresh motion video data every 33 ms/40 ms.
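A minimal sketch of how a presentation component might measure the skew between related audio and video LDUs is given below; the ±80 ms tolerance used here is only an assumed, illustrative value and not a figure taken from this lesson.

def lip_sync_state(audio_pts_ms, video_pts_ms, tolerance_ms=80):
    """Return the skew (video minus audio, in ms) and whether it is acceptable.
    tolerance_ms is an assumed illustrative bound, not a normative value."""
    skew = video_pts_ms - audio_pts_ms
    return skew, abs(skew) <= tolerance_ms

print(lip_sync_state(1000, 1000))   # (0, True)    -> perfectly "in sync"
print(lip_sync_state(1000, 1120))   # (120, False) -> outside the assumed tolerance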

19.6 Pointer synchronization Requirements

In a Computer-Supported Cooperative Work (CSCW) environment, cameras and microphones are usually attached to the users' workstations. In the next experiment, the experimenters looked at a business report that contained some data with accompanying graphics. All participants had a window with these graphics on their desktop, where a shared pointer was used in the discussion. Using this pointer, speakers pointed out individual elements of the graphics which may have been relevant to the discussion taking place. This obviously required synchronization of the audio and the remote telepointer.


19.7 Reference Model for Multimedia Synchronization


A reference model is needed to understand the various requirements for
multimedia synchronization identify and structure run-time mechanisms that support the
execution of the synchronization, identify interface between run-time mechanisms and
compare system solutions for multimedia synchronization systems.

First the existing classification and structuring methods are described and then, a
four-layer reference model is presented and used for the classification of multimedia
synchronization systems in our case studies. As many multimedia synchronization
mechanisms operate in a networked environment, we also discuss special synchronization
issues in a distributed environment and their relation to the reference model.

19.7.1 The Synchronization Reference Model

A four-layer synchronization reference model is shown in the following figure. Each layer implements synchronization mechanisms which are provided by an appropriate interface. These interfaces can be used to specify and/or enforce the temporal relationships. Each interface defines services, i.e., it offers the user a means to define his/her requirements. Each interface can be used by an application directly, or by the next higher layer to implement an interface. Higher layers offer higher programming and Quality of Service (QoS) abstractions.

For each layer, typical objects and operations on these objects are described in the
following. The semantics of the objects and operations are the main criteria for assigning
them to one of the layers.

[Figure: The four-layer synchronization reference model – from low to high abstraction: Media Layer, Stream Layer, Object Layer and Specification Layer, with the multimedia application on top]

Media Layer: At the media layer, an application operates on a single continuous media stream, which is treated as a sequence of LDUs.


Stream Layer: The stream layer operates on continuous media streams, as well as on
groups of media streams. In a group, all streams are presented in parallel by using
mechanisms for interstream synchronization.

The abstraction offered by the stream layer is the notion of streams with timing
parameters concerning the QoS for intrastream synchronization in a stream and
interstream synchronization between streams of a group.

Continuous media is seen in the stream layer as a data flow with implicit time
constraints; individual LDUs are not visible. The streams are executed in a Real-Time
Environment (RTE), where all processing is constrained by well-defined time
specifications.

Object Layer: The object layer operates on all types of media and hides the differences
between discrete and continuous media.

The abstraction offered to the application is that of a complete, synchronized presentation. This layer takes a synchronization specification as input and is responsible for the correct schedule of the overall presentation.

The task of this layer is to close the gap between the needs for the execution of a synchronized presentation and the stream-oriented services. The functions located at the object layer are to compute and execute complete presentation schedules that include the presentation of the non-continuous media objects and the calls to the stream layer.

Specification Layer: The specification layer is an open layer. It does not offer an explicit interface. This layer contains applications and tools that are allowed to create synchronization specifications. Such tools are synchronization editors, multimedia document editors and authoring systems. Also located at the specification layer are tools for converting specifications to an object layer format.

The specification layer is also responsible for mapping QoS requirements of the user
level to the qualities offered at the object layer interface.

Synchronization specification methods can be classified into the following main categories (a small sketch after this list illustrates the event-based form):

 Interval-based specifications, which allow the specification of temporal relations between the time intervals of the presentations of media objects.
 Axes-based specifications, which relate presentation events to axes that are shared
by the objects of the presentation.
 Control flow-based specifications, in which at given synchronization points, the
flow of the presentations is synchronized.
 Event-based specifications, in which events in the presentation of media trigger
presentation actions.
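To make these categories more concrete, here is a small sketch of an event-based specification, in which events raised during the presentation of one media object trigger presentation actions on others; the object names and events are invented for illustration.

# Event-based specification: (object, event) -> list of (target, action)
event_rules = {
    ("video", "end"):     [("picture1", "start")],
    ("picture1", "end"):  [("animation", "start"), ("audio2", "start")],
    ("animation", "end"): [("presentation", "stop")],
}

def handle_event(obj, event):
    for target, action in event_rules.get((obj, event), []):
        print("event", obj + "." + event, "->", action, target)

handle_event("video", "end")       # starts picture1
handle_event("picture1", "end")    # starts animation and audio2 in parallel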


Check Your Progress 1


Write down the four layers present in the synchronisation reference mode.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

19.8 Synchronization Specification

The synchronization specification of a multimedia object describes all temporal dependencies of the objects included in the multimedia object. It is produced using tools at the specification layer and is used at the interface to the object layer. Because the synchronization specification determines the whole presentation, it is a central issue in multimedia systems. In the following, requirements for synchronization specifications are described, and specification methods are presented and evaluated.

A synchronization specification should be comprised of:

 Intra-object synchronization specifications for the media objects of the presentation.
 QoS descriptions for intra-object synchronization.
 Inter-object synchronization specifications for media objects of the presentation.
 QoS descriptions for inter-object synchronization.

The synchronization specification is part of the description of a multimedia object.

19.9 Let us sum up


In this lesson we have learnt the following points :
 Synchronization in multimedia systems refers to the temporal relations
between media objects in the multimedia system.
 Three criteria for the classification of a system as a multimedia system can be
distinguished: the number of media, the types of supported media and the
degree of media integration.
 Content relations define a dependency of media objects on some data.
 The spatial relations are usually known as layout relationships
 Intra-object synchronization refers to the time relation between various
presentation units of one time-dependent media object.
 Inter-object synchronization refers to the synchronization between media
objects.
 The synchronisation reference model consists of the following layers Media
layer, Stream layer, Object layer and Specification layer.


19.10 Lesson-end activities


Consider any video editing software such as Ulead VideoStudio. Create your
subtitles and synchronize with the time and frames of video..

19.11 Model answers to “Check your progress”

The synchronisation reference model consists of the following layers


Media layer
Stream layer
Object layer
Specification layer.

19.12 References

1. Multimedia : Computing, Communications and applications – By Ralf Steinmetz


and Klara Nahrstedt
2. Multimedia Systems, Standards, and Networks By Atul Puri, Tsuhan Chen
3. Multimedia Communications: Directions and Innovations By Jerry D. Gibson


Lesson 20 Multimedia Networking System


Contents

20.0 Aims and Objectives


20.1 Introduction
20.2 Layers, Protocols and Services
20.3 Multimedia on Networks
20.4 FDDI
20.5 Let us sum up
20.6 Lesson-end activities
20.7 Model answers to “Check your progress”
20.8 References

20.0 Aims and Objectives

We will examine in the remainder of this chapter different networks with respect
to their multimedia transmission capabilities.

At the end of this lesson the learner will be able to


i) understand the networking concepts
ii) identify the features available in FDDI

20.1 Introduction
A multimedia networking system allows for the data exchange of discrete and continuous media among computers. This communication requires proper services and protocols for data transmission. Multimedia networking enables the distribution of media to different workstations.

20.2 Layers, Protocols and Services

A service provides a set of operations to the requesting application. Logically related services are grouped into layers according to the OSI reference model. Therefore, each layer is a service provider to the layer lying above. The services describe the behavior of the layer and its service elements (Service Data Units = SDUs). A proper service specification contains no information concerning any aspects of the implementation.

A protocol consists of a set of rules which must be followed by peer layer instances
during any communication between these two peers. It is comprised of the format (syntax) and the meaning (semantics) of the exchanged data units (Protocol Data Units = PDUs). The peer instances on different computers cooperate to provide a service.

Multimedia communication puts several requirements on services and protocols which are independent of the layer in the network architecture. In general, this set of requirements depends to a large extent on the respective application. However, without defining precise values for individual parameters, the following requirements must be taken into account:

 The processing of audio and video data needs to be bounded by deadlines or even defined by
a time interval. The data transmission, both between applications and the transport
layer interfaces of the involved components, must stay within the demands
concerning the time domain.
 End –to-end jitter must be bounded. This is especially important for interactive
applications such as the telephone. Large jitter values would mean large buffers
and higher end-to-end delays.
 All guarantees necessary for achieving the data transfer within the required time
span must be met. This includes the required processor performance, as well as
the data transfer over a bus and the available storage for protocol processing.
 Cooperative work scenarios using multimedia conference systems are the main
application areas of multimedia communication systems. These systems should
support multicast connections to save resources. The sender instance may often
change during a single session. Further, a user should be able to join or leave a
multicast group without having to request a new connection setup, which would need
to be handled by all other members of this group.
 The services should provide mechanisms for synchronizing different data streams,
or alternatively perform the synchronization using available primitives
implemented in another system component.
 The multimedia communication must be compatible with the most widely used
communication protocols and must make use of existing, as well as future
networks. Communication compatibility means that different protocols at least
coexist and run on the same machine simultaneously. The relevance of envisaged
protocols can only be achieved if the same protocols are widely used. Many of the
current multimedia communication systems are, unfortunately, proprietary
experimental systems.
 The communication of discrete data should not starve because of preferred or
guaranteed video/audio transmission. Discrete data must be transmitted without
any penalty.
 The fairness principle among different applications, users and workstations must
be enforced.
 The actual audio/video data rate varies strongly. This leads to fluctuations of the
data rate, which needs to be handled by the services.


20.2.1 Physical Layer

The physical layer defines the transmission method of individual bits over the physical medium, such as fiber optics.

For example, the type of modulation and bit-synchronization are important issues.
With respect to the particular modulation, delays during the data transmission arise
due to the propagation speed of the transmission medium and the electrical circuits
used. They determine the maximal possible bandwidth of this communication
channel. For audio/video data in general, the delays must be minimized and a
relatively high bandwidth should be achieved.

20.2.2 Data Link Layer

The data link layer provides the transmission of information blocks known as data
frames. Further, this layer is responsible for access protocols to the physical medium,
error recognition and correction, flow control and block synchronization.

Access protocols are very much dependent on the network. Networks can be
divided into two categories: those using point-to-point connections and those using
broadcast channels, sometimes called multi-access channels or random access
channels. In a broadcast network, the key issue is how to determine, in the case of
competition, who gets access to the channel. To solve this problem, the Medium
Access Control (MAC) sublayer was introduced and MAC protocols, such as the
Timed Token Rotation Protocol and Carrier Sense Multiple Access with Collision
Detection (CSMA/CD), were developed.

Continuous data streams require reservation and throughput guarantees over a line. To avoid larger delays, the error control for multimedia transmission needs a different mechanism than retransmission, because a late frame is a lost frame.

20.2.3. Network Layer

The network layer transports information blocks, called packets, from one station
to another. The transport may involve several networks. Therefore, this layer
provides services such as addressing, internetworking, error handling, network
management with congestion control and sequencing of packets.

Again, continuous media require resource reservation and guarantees for transmission at this layer. A request for reservation for later resource guarantees is defined through Quality of Service (QoS) parameters, which correspond to the requirements for continuous data stream transmission. The reservation must be done along the path between the communicating stations.


20.2.4. Transport Layer

The transport layer provides a process-to-process connection. At this layer, the QoS which is provided by the network layer is enhanced, meaning that if the network service is poor, the transport layer has to bridge the gap between what the transport users want and what the network layer provides. Large packets are segmented at this layer and reassembled into their original size at the receiver. Error handling is based on process-to-process communication.

20.2.5 Session Layer

In the case of continuous media, multimedia sessions which reside over one or
more transport connections, must be established. This introduces a more complex
view on connection reconstruction in the case of transport problems.

20.2.6 Presentation Layer

The presentation layer abstracts from different formats ( the local syntax) and
provides common formats ( transfer syntax). Therefore, this layer must provide
services for transformation between the application-specific formats and the agreed-
upon format. An example is the different representation of a number for Intel or
Motorola processors.

The multitude of audio and video formats also require conversion between
formats. This problem also comes up outside of the communication components
during exchange between data carriers, such as CD-ROMs, which store continuous
data. Thus, format conversion is often discussed in other contexts.

20.2.7 Application Layer

The application layer considers all application-specific services, such as file transfer
service embedded in the file transfer protocol (FTP) and the electronic mail service.
With respect to audio and video, special services for support of real-time access and
transmission must be provided.

Check Your Progress 1


List the different layers used in networking.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


20.3 Multimedia on Networks

The main goal of distributed multimedia communication systems is to transmit all their media over the same network.

Depending mainly on the distance between the end-points (stations/computers), networks are divided into three categories: Local Area Networks (LANs), Metropolitan Area Networks (MANs) and Wide Area Networks (WANs).

Local Area Networks (LANs)

A LAN is characterized by (1) its extension over a few kilometers at most, (2) a total data rate of at least several Mbps, and (3) its complete ownership by a single organization. Further, the number of stations connected to a LAN is typically limited to 100. However, the interconnection of several LANs allows the number of connected stations to be increased. The basis of LAN communication is broadcasting using a broadcast channel (multi-access channel). Therefore, the MAC sublayer is of crucial importance in these networks.

High-speed Ethernet

Ethernet is the most widely used LAN. Currently available Ethernet offers a bandwidth of at least 10 Mbps, but new fast LAN technologies for Ethernet with bandwidths in the range of 100 Mbps are coming onto the market. This bus-based network uses the CSMA/CD protocol for the resolution of multiple access to the broadcast channel in the MAC sublayer: before data transmission begins, the network state is checked by the sender station. Each station may try to send its data only if, at that moment, no other station is transmitting data. Therefore, each station can simultaneously listen and send.
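The collision handling behind CSMA/CD can be sketched as truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times chosen from 0 to 2^n − 1 (with the exponent capped) before sensing the channel again. The code below only illustrates that rule; it is not part of any real Ethernet driver.

import random

SLOT_TIME_US = 51.2   # slot time of classic 10 Mbps Ethernet, in microseconds

def backoff_us(collision_count):
    # Truncated binary exponential backoff after the n-th collision
    if collision_count > 16:
        raise RuntimeError("excessive collisions: the frame is dropped")
    k = min(collision_count, 10)              # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)     # wait 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US

print(backoff_us(1), backoff_us(3))   # e.g. 51.2 and 358.4 (values are random)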

Dedicated Ethernet

Another possibility for the transmission of audio/video data is to dedicate a separate Ethernet LAN to the transmission of continuous data. This solution requires compliance with a proper additional protocol. Further, end-users need at least two separate networks for their communications: one for continuous data and another for discrete data. This approach makes sense for experimental systems, but means additional expense in the end-systems and cabling.

Hub

A very pragmatic solution can be achieved by exploiting an installed network configuration. Most Ethernet cables are not installed in the form of a bus system; they make up a star (i.e., cables radiate from a central room to each station). In this central room, each cable is attached to its own Ethernet interface.


Instead of configuring a bus, each station is connected via its own Ethernet to a hub. Hence, each station has the full Ethernet bandwidth available, and a new network for multimedia transmission is not necessary.

Fast Ethernet

Fast Ethernet, known as 100Base-T, offers throughput speeds of up to 100 Mbit/s, and it permits users to move gradually into the world of high-speed LANs.

The Fast Ethernet Alliance, an industry group with more than 60 member companies, began work on the 100 Mbit/s 100Base-TX specification in the early 1990s. The alliance submitted the proposed standard to the IEEE and it was approved. During the standardization process, the alliance and the IEEE also defined a Media-Independent Interface (MII) for Fast Ethernet, which enables it to support various cabling types on the same Ethernet network. Therefore, Fast Ethernet offers three media options: 100Base-T4 for half-duplex operation on four pairs of UTP (Unshielded Twisted Pair cable), 100Base-TX for half- or full-duplex operation on two pairs of UTP or STP (Shielded Twisted Pair cable), and 100Base-FX for half- and full-duplex transmission over fiber optic cable.

Token Ring

The Token Ring is a LAN with 4 or 16 Mbits/s throughput. All stations are
connected to a logical ring. In a Token Ring, a special bit pattern (3-byte), called a token,
circulates around the ring whenever all stations are idle. When a station wants to transmit
a frame, it must get the token and remove it from the ring before transmitting. Ring
interfaces have two operating modes: listen and transmit. In the listen mode, input bits are
simply copied to the output. In the transmit mode, which is entered only after the token
has been seized, the interface breaks the connection between the input and the output,
entering its own data onto the ring. As the bits that were inserted and subsequently
propagated around the ring come back, they are removed from the ring by the sender.
After a station has finished transmitting the last bit of its last frame, it must regenerate the
token. When the last bit of the frame has gone around and returned, it must be removed,
and the interface must immediately switch back into the listen mode to avoid a duplicate
transmission of the data.

Each station receives, reads and sends frames circulating in the ring according to the Token Ring MAC sublayer protocol (IEEE standard 802.5). Each frame includes a Sender Address (SA) and a Destination Address (DA). When the sending station drains the frame from the ring, the Frame Status field is updated, i.e., its A and C bits are examined. Three combinations are allowed (a small decision function after the list restates them in code):

 A=0, C=0 : destination not present or not powered up.


 A=1, C=0 : destination present but frame not accepted.
 A=1, C=1 : destination present and frame copied.
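As an illustration, the following minimal sketch shows how a sender could interpret the A and C bits when it drains its frame; the function and the way the bits are passed in are assumptions made for this example, not part of the IEEE 802.5 standard.

# Illustrative sketch: interpreting the A (address-recognized) and C (frame-copied)
# bits of the Frame Status field when the sender drains its own frame.
def interpret_frame_status(a_bit: int, c_bit: int) -> str:
    if a_bit == 0 and c_bit == 0:
        return "destination not present or not powered up"
    if a_bit == 1 and c_bit == 0:
        return "destination present but frame not accepted"
    if a_bit == 1 and c_bit == 1:
        return "destination present and frame copied"
    return "invalid combination"

print(interpret_frame_status(1, 1))   # -> destination present and frame copied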


20.4 FDDI
The Fiber Distributed Data Interface (FDDI) is a high-performance fiber optic
LAN, which is configured as a ring. It is often seen as the successor of the Token Ring
IEEE 802.5 protocol. The standardization began in the American National Standards
Institute (ANSI) in the group X3T9.5 in 1982. Early implementations appeared in 1988.

Compared to the Token Ring, FDDI is more a backbone than only a LAN, because
it runs at 100 Mbps over distances up to 100 km with up to 500 stations. The Token Ring
typically supports between 50 and 250 stations. The distance between neighboring stations
is less than 2 km in FDDI.

The FDDI design specification calls for no more than one error in 2.5*10^10
bits. Many implementations will do much better. The FDDI cabling consists of two fiber
rings, one transmitting clockwise and the other transmitting counter-clockwise. If either
one breaks, the other can be used as backup.

FDDI supports different transmission modes which are important for the
communication of multimedia data. The synchronous mode allows a bandwidth
reservation; the asynchronous mode behaves similar to the Token Ring protocol. Many
current implementations support only the asynchronous mode. Before diving into a
discussion of the different modes, we will briefly describe the topologies and FDDI
system components.

Non-isochronous transmission in FDDI is divided into synchronous and asynchronous
modes; asynchronous transmission is further divided into restricted and non-restricted
transmission, the non-restricted mode using priority levels 0 through 7.

An overview of data transmission in FDDI


20.4.1 Topology of FDDI

The main topology features of FDDI are the two fiber rings, which operate in
opposite directions (dual ring topology). The primary ring provides the data
transmission, the secondary ring improves the fault tolerance. Individual stations can be –
but do not have to be – connected to both rings. FDDI defines two classes of stations, A
and B:

 Any class A station (Dual Attachment Station) connects to both rings. It is


connected either directly to a primary ring and secondary ring or via a
concentrator to a primary and secondary ring.
 The class B station ( Single Attachment Station) only connects to one of the rings.
It is connected via a concentrator to the primary ring.

20.4.2 FDDI Architecture

FDDI includes the following components which are shown in the following Figure:

 PHYsical Layer Protocol (PHY)

Is defined in the standard ISO 9314-1 Information Processing Systems: Fiber
Distributed Data Interface - Part 1: Token Ring Physical Layer Protocol.

 Physical Layer Medium-Dependent (PMD)

Is defined in the standard ISO 9314-3 Information Processing Systems: Fiber
Distributed Data Interface - Part 3: Token Ring Physical Layer, Medium
Dependent.

 Station Management (SMT)

Defines the management functions of the ring according to ANSI Preliminary
Draft Proposal American National Standard X3T9.5/84-49 Revision 6.2, FDDI
Station Management.

 Media Access Control (MAC)

Defines the network access according to ISO 9314-2 Information Processing
Systems: Fiber Distributed Data Interface - Part 2: Token Ring Media Access
Control.


In the FDDI reference model, IEEE 802.2 LLC sits above the MAC (packet interpretation,
token passing) in the data link layer; the physical layer consists of PHY (encoding/decoding,
clocking) and PMD (the optical interface between electrical signals and photons); SMT
(station management, including ring monitoring and connection management) spans both
the data link and physical layers.

FDDI Reference model


Check Your Progress 2
Explain the different components of FDDI.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

20.4.3 Further properties of FDDI

Multicasting : The multicasting service has become one of the most important aspects
of networking. FDDI supports group addressing, which enables multicasting.

Synchronisation : Synchronisation among different data streams is not part of the
network; therefore it must be solved separately.


Packet Size : The size of the packets can directly influence the data delay in
applications.

Implementations: Many FDDI implementations do not support the synchronous


mode, which is very useful for the transmission of continuous media. In
asynchronous mode, the same methods can additionally be used as described for
the Token Ring.

Restricted tokens : If only two stations interact by transmitting continuous media
data, then one can also use the asynchronous mode with restricted tokens.

Several new protocols at the network/transport layers in the Internet and at higher layers in
B-ISDN are currently the focus of research to support more efficient transmission of
multimedia and multiple types of service.

20.5 Let us sum up


In this lesson on Multimedia networks we have discussed the following key points :
o A multimedia networking system allows for the data exchange of discrete
and continuous media among computers.
o A protocol consists of a set of rules which must be followed by peer layer
instances during any communication between two peer computers.
o The protocol layers are Physical Layer, Data Link Layer, Network Layer,
Transport Layer, Session Layer, Presentation Layer, Application Layer.
o The network layer transports information blocks, called packets, from one
station to another.
o The Fiber Distributed Data Interface (FDDI) is a high-performance fiber
optic LAN, which is configured as a ring.

20.6 Lesson-end activities

1. Identify a network of your own choice and find the protocols installed in the
computer.
2. Using the internet search engine list the different network standards available and
differentiate their features.

20.7 Model answers to “Check your progress”

1. The protocol layers are


Physical Layer
Data Link Layer
Network Layer
Transport Layer
Session Layer


Presentation Layer
Application Layer

2. The FDDI architecture includes the following :

 PHYsical Layer Protocol (PHY)


 Physical Layer Medium-Dependent (PMD)
 Station Management (SMT)
 Media Access Control (MAC)

20.8 References
1. Multimedia : Computing, Communications and applications – By Ralf
Steinmetz and Klara Nahrstedt
2. Multimedia Networking: Technology, Management and Applications - by Syed
Mahbubur Rahman -
3. Wireless Multimedia Network Technologies - by Rajamani Ganesh, Kaveh
Pahlavan, Zoran Zvonar
4. Wireless Communication Technologies: New Multimedia Systems - by
Norihiko Morinaga, Seiichi. Sampei, Ryuji Kohno
5. Multimedia Systems, Standards, and Networks By Atul Puri, Tsuhan Chen


UNIT - V

Lesson 21 Multimedia Operating System


Contents
21.0 Aims and Objectives
21.1 Introduction
21.2 Multimedia Operating System
21.3 Real Time Process
21.4 Resource Management
21.4.1 Resources
21.4.2 Requirements
21.4.3 Components of the Resources
21.4.4 Phases of the Resource reservation and Management Process
21.4.5 Resource Allocation Scheme
21.5 Let us sum up
21.6 Lesson-end activities
21.7 Model answers to “Check your progress”
21.8 References

21.0 Aims and Objectives

In this lesson we will learn the concepts behind Multimedia Operating System.
Various issues related with handling of resources are discussed in this lesson.

21.1 Introduction

The operating system is the shield of the computer hardware against all software
components. It provides a comfortable environment for the execution of programs, and it
ensures effective utilization of the computer hardware. The operating system offers
various services, related to the essential resources of a computer: CPU, main memory,
storage and all input and output devices.

21.2 Multimedia Operating System


For the processing of audio and video, multimedia application demands that
humans perceive these media in a natural, error-free way. These continuous media data
originate at sources like microphones, cameras and files. From these sources, the data are
transferred to destinations like loudspeakers, video windows and files located at the same
computer or at a remote station.

The major aspect in this context is real-time processing of continuous media data.
Process management must take into account the timing requirements imposed by the
handling of multimedia data. Appropriate scheduling methods should be applied. In


contrast to the traditional real-time operating systems, multimedia operating systems also
have to consider tasks without hard timing restrictions under the aspect of fairness.

The communication and synchronization between single processes must meet


the restrictions of real-time requirements and timing relations among different media.
The main memory is available as shared resource to single processes.

In multimedia systems, memory management has to provide access to data with a


guaranteed timing delay and efficient data manipulation functions. For instance, physical
data copy operations must be avoided due to their negative impact on performance;
buffer management operations (such as are known from communication systems) should
be used.

Database management is an important component in multimedia systems.


However, database management abstracts the details of storing data on secondary media
storage. Therefore, database management should rely on file management services
provided by the multimedia operating system to access single files and file systems.

Since the operating system shields devices from applications programs, it must
provide services for device management too. In multimedia systems, the important issue
is the integration of audio and video devices in a similar way to any other input/output
device. The addressing of a camera can be performed similar to the addressing of a
keyboard in the same system, although most current systems do not apply this technique.

21.3 Real Time Process


A real-time process is a process which delivers the results of the processing in a
given time-span. Programs for the processing of data must be available during the entire
run-time of the system. The data may require processing at an a priori known point in
time, or it may be demanded without any previous knowledge. The system must enforce
externally-defined time constraints. Internal dependencies and their related time limits
are implicitly considered. External events occur – deterministically (at a predetermined
instant) or stochastically (randomly). The real-time system has the permanent task of
receiving information from the environment, occurring spontaneously or in periodic time
intervals, and/or delivering it to the environment given certain time constraints.

21.3.1 Characteristics of Real Time Systems

The necessity of deterministic and predictable behavior of real-time systems


requires processing guarantees for time-critical tasks. Such guarantees cannot be assured
for events that occur at random intervals with unknown arrival times, processing
requirements or deadlines.

 Predictably fast response to time-critical events and accurate timing information.


 A high degree of schedulability. Schedulability refers to the degree of resource


utilization at which, or below which, the deadline of each time-critical task can be
taken into account.
 Stability under transient overload. Under system overload, the processing of
critical tasks must be ensured.

21.3.2 Real Time and Multimedia

Audio and video data streams consist of single, periodically changing values of
continuous media data, e.g., audio samples or video frames. Each Logical Data Unit
(LDU) must be presented by a well-determined deadline. Jitter is only allowed before the
final presentation to the user. A piece of music, for example, must be played back at a
constant speed. To fulfill the timing requirements of continuous media, the operating
system must use real-time scheduling techniques. These techniques must be applied to
all system resources involved in the continuous media data processing, i.e., the entire
end-to-end data path is involved.

The real-time requirements of traditional real-time scheduling techniques (used for


command and control systems in application areas such as factory automation or aircraft
piloting) have a high demand for security and fault-tolerance.

 The fault-tolerance requirements of multimedia systems are usually less strict


than those of real-time systems that have a direct physical impact. The short time
failure of a continuous media system will not directly lead to the destruction of
technical equipment or constitute a threat to human life. Please note that this is a
general statement which does not always apply. For example, the support of
remote surgery by video and audio has stringent delay and correctness
requirements.

 For many multimedia system applications, missing a deadline is not a severe


failure, although it should be avoided. It may even go unnoticed, e.g., if an
uncompressed video frame (or parts of it) is not available on time it can simply be
omitted. The viewer will hardly notice this omission, assuming it does not
happen for a contiguous sequence of frames.

 A sequence of digital continuous media data is the result of periodically sampling


a sound or image signal. Hence, in processing the data units of such a data
sequence, all time-critical operations are periodic.

 The bandwidth demand of continuous media is not always that stringent; it need
not be fixed a priori, but it may eventually be lowered. As some compression
algorithms are capable of using different compression ratios – leading to different
qualities – the required bandwidth can be negotiated. If not enough bandwidth is
available for full quality, the application may also accept reduced quality (instead
of no service at all).


21.4 Resource Management

Multimedia systems with integrated audio and video processing are at the limit of
their capacity, even with data compression and utilization of new technologies. Current
computers do not allow processing of data according to their deadlines without any
resource reservation and real-time process management.

Resource management in distributed multimedia systems covers several


computers and the involved communication networks. It allocates all resources involved
in the data transfer process between sources and sinks. In an integrated distributed
multimedia system, several applications compete for system resources. This shortage of
resources requires careful allocation. The system management must employ adequate
scheduling algorithms to serve the requirements of the applications. Thereby, the
resource is first allocated and then managed.


21.4.1 Resources

A resource is a system entity required by tasks for manipulating data. Each


resource has a set of distinguishing characteristics classified using the following scheme:

 A resource can be active or passive. An active resource is the CPU or a network


adapter for protocol processing; it provides a service. A passive resource is the
main memory, communication bandwidth or a file system (whenever we do not
take care of the processing of the adapter); it denotes some system capability
required by active resources.
 A resource can be either used exclusively by one process at a time or shared
between various processes. Active resources are often exclusive; passive
resources can usually be shared among processes.
 A resource that exists only once in the system is known as a single, otherwise it is
a multiple resource. In a transputer-based multiprocessor system, the individual
CPU is a multiple resource.

21.4.2 Requirements

The requirements of multimedia applications and data streams must be served for
the single components of a multimedia system. The resource management maps these
requirements on the respective capacity. The transmission and processing requirements
of local and distributed multimedia applications can be specified according to the
following characteristics :


1. The throughput is determined by the needed data rate of a connection to satisfy


the application requirements. It also depends on the size of the data units.
2. It is possible to distinguish between local and global end-to-end delay :
a. The delay at the resource is the maximum time span for the completion of
a certain task at this resource.
b. The end-to-end delay is the total delay for a data unit to be transmitted
from the source to its destination. For example, the source of a video
telephone is the camera, the destination is the video window on the screen
of the partner.
3. The jitter determines the maximum allowed variance in the arrival of data at the
destination.
4. The reliability defines error detection and correction mechanism used for the
transmission and processing of multimedia tasks. Errors can be ignored, indicated
and/or corrected. It is important to notice that error correction through re-
transmission is rarely appropriate for time-critical data because the retransmitted
data will usually arrive late. Forward error correction mechanisms are more
useful.

In accordance with communication systems, these requirements are also known as


Quality of Service parameters (QoS).
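A minimal sketch of how such a QoS specification could be represented in software is given below; the class name, field names and example values are assumptions made for illustration only.

# Illustrative sketch of a QoS specification covering the requirement classes
# described above: throughput, delay, jitter and reliability.
from dataclasses import dataclass

@dataclass
class QoSSpec:
    throughput_kbps: int       # required data rate of the connection
    end_to_end_delay_ms: int   # maximum total delay from source to destination
    jitter_ms: int             # maximum allowed variance in the arrival of data
    reliability: str           # how errors are handled: "ignore", "indicate" or "correct"

# Example: an invented request for a video telephone stream.
video_call = QoSSpec(throughput_kbps=1500, end_to_end_delay_ms=150,
                     jitter_ms=20, reliability="indicate")
print(video_call)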

21.4.3 Components of the Resources

One possible realization of resource allocation and management is based on the


interaction between clients and their respective resource managers. The client selects the
resource and requests a resource allocation by specifying its requirements through a QoS
specification. This is equivalent to a workload request.

First, the resource manager checks its own resource utilization and decides if the
reservation request can be served or not. All existing reservations are stored. This way,
their share in terms of the respective resource capacity is guaranteed. Moreover, this
component negotiates the reservation request with other resource managers, if necessary.

In the following figure two computers are connected over a LAN. The
transmission of video data between a camera connected to a computer server and the
screen of the computer user involves, for all depicted components, a resource manager.


On the server station the involved components are the frame grabber and encoder, the
compression stage, the communication (transport and network) layers, and the data link
layer with its network adapter; on the user station they are the presentation in a window,
the decompression stage, the communication (transport and network) layers, and the data
link layer with its network adapter. Both stations are connected through the network.

Components grouped for the purpose of video data transmission

21.4.4 Phases of the Resource Reservation and Management Process

A resource manager provides components for the different phases of the
allocation and management process (a small illustrative sketch of these phases follows the
list below):

1. Schedulability Test

The resource manager checks with the given QoS parameters (e.g.,
throughput and reliability) to determine if there is enough remaining
resource capacity available to handle this additional request.

2. Quality of Service Calculation

After the schedulability test, the resource manager calculates the best
possible performance (e.g., delay) the resource can guarantee for the new
request.

3. Resource Reservation

The resource manager allocates the required capacity to meet the QoS
guarantees for each request.


4. Resource Scheduling

Incoming messages from connections are scheduled according to the given


QoS guarantees. For process management, for instance, the allocation of
the resource is done by the scheduler at the moment the data arrive for
processing.
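The following sketch walks through the four phases for a single resource managed as a simple bandwidth budget; the capacity model, class and all values are assumptions made only to illustrate the flow described above.

# Illustrative sketch of the four phases of resource reservation for one
# resource with a fixed capacity (here a bandwidth budget in kbit/s).
class ResourceManager:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reservations = {}                  # stream id -> reserved kbit/s

    def schedulability_test(self, needed_kbps):
        # Phase 1: is there enough remaining capacity for the new request?
        return sum(self.reservations.values()) + needed_kbps <= self.capacity_kbps

    def qos_calculation(self, needed_kbps):
        # Phase 2: best performance the resource could still guarantee,
        # modelled here simply as the capacity left after accepting the request.
        return self.capacity_kbps - sum(self.reservations.values()) - needed_kbps

    def reserve(self, stream_id, needed_kbps):
        # Phase 3: allocate the required capacity if the test succeeds.
        if not self.schedulability_test(needed_kbps):
            return False
        self.reservations[stream_id] = needed_kbps
        return True

    def schedule(self):
        # Phase 4: serve incoming data according to the stored guarantees
        # (here simply reported as the list of reservations held).
        return sorted(self.reservations.items(), key=lambda kv: -kv[1])

rm = ResourceManager(capacity_kbps=10000)
print(rm.reserve("video", 6000), rm.reserve("audio", 1500), rm.reserve("bulk", 4000))
print(rm.schedule())   # only video and audio obtained a reservation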

Check Your Progress-1

List the phases available in the resource reservation.

Notes: a) Write your answers in the space given below.


b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

21.4.5 Resource Allocation Scheme

Reservation of resources can be made either in a pessimistic or optimistic way:

 The pessimistic approach avoids resource conflicts by making reservations for


the worst case, i.e., resource bandwidth for the longest processing time and the
highest rate which might ever be needed by a task is reserved. Resource conflicts
are therefore avoided. This leads potentially to an underutilization of resources.
 With the optimistic approach, resources are reserved according to an average
workload only. This means that the CPU is only reserved for the average
processing time. This approach may overbook resources, with the possibility of
unpredictable packet delays. QoS parameters are met as far as possible.
Resources are highly utilized, though an overload situation may result in failure.
To detect an overload situation and to handle it accordingly a monitor can be
implemented. The monitor may, for instance, preempt processes according to
their importance.

The optimistic approach is considered to be an extension of the pessimistic


approach. It requires that additional mechanisms to detect and solve resource conflicts be
implemented.



Check Your Progress 2


What are the two approaches to resource allocation in multimedia systems?
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

21.5 Let us sum up


In this lesson we have discussed the following points :
 Operating System provides a comfortable environment for the execution of
programs, and it ensures effective utilization of the computer hardware.
 The features to be covered in multimedia Operating System are Process
management, communication and synchronization, memory management,
Database management and device management.
 The operating system offers various services related to the essential resources
of a computer: CPU, main memory, storage and all input and output devices.
 The phases available in the resource reservation are :
o Schedulability Test
o Quality of Service Calculation
o Resource Reservation
o Resource Scheduling
 The two approaches to resource allocation are Optimistic Approach and
Pessimistic Approach.

21.6 Lesson-end activities

Browse the internet and list a few Operating Systems that supports the multimedia
processing. Also list the features supported by them.


21.7 Model answers to “Check your progress”

1. The phases available in the resource reservation are :

i) Schedulability Test
ii) Quality of Service Calculation
iii) Resource Reservation
iv) Resource Scheduling

2. The two approaches to resource allocation are Optimistic Approach and


Pessimistic Approach.

21.8 References

1. “Multimedia Systems, Standards, and Networks” By Atul Puri, Tsuhan Chen.


2. “Multimedia Storage and Retrieval: An Algorithmic Approach “ By Jan
Korst, Verus Pronk.
3. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
4. “Multimedia Making it work” By Tay Vaughan.


Lesson 22 Multimedia OS - Process Management


Contents

22.0 Aims and Objectives


22.1 Introduction
22.2 Process Management
22.3 Real Time Process Requirements
22.4 Traditional Real Time Scheduling
22.4.1 Earliest Deadline First Algorithm
22.4.2 Rate Monotonic Algorithm
22.4.3 Other Approaches to Rate Monotonic Algorithm
22.4.4 Other Approaches for In-Time Scheduling
22.5 Let us sum up
22.6 Lesson-end activities
22.7 Model answers to “Check your progress”
22.8 References

22.0 Aims and Objectives

This lesson aims at teaching the concepts of process management. At the end of
this lesson the learner will be able to :
i) Identify the requirements for processor management
ii) Enumerate the different real time processor scheduling algorithms

22.1 Introduction
One of the main activities of an Operating System is managing the multimedia
processes and managing the processor. Effective management of processor is necessary
to enhance the multimedia production and multimedia playback.

22.2 Process Management

Process management deals with administration of the resource main processor.


The capacity of this resource is specified as processor capacity. The process manager
maps single processes onto resources according to a specified scheduling policy such that
all processes meet their requirements.

In most systems, a process under control of the process manager can adopt one of the
following states:

 In the initial state, no process is assigned to the program. The process is in the
idle state.
 If a process is waiting for an event, i.e., the process lacks one of the necessary
resources for processing, it is in the blocked state.


 If all necessary resources are assigned to the process, it is ready to run. The
process only needs the processor for the execution of the program.
 A process is running as long as the system processor is assigned to it.

The process manager is the scheduler. This component transfers a process into
the ready-to-run state by assigning it a position in the respective queue of the dispatcher,
which is the essential part of the operating system kernel. The dispatcher manages the
transition from ready-to-run to run. In most operating systems, the next process to run is
chosen according to priority policy. Between processes with the same priority, the one
with the longest ready time is chosen.

22.3 Real-time Processing Requirements

Continuous media data processing must occur in exactly predetermined – usually


periodic – intervals. Operations on these data recur over and over and must be completed
at certain deadlines. The real-time process manager determines a schedule for the
resource CPU that allows it to make reservations and to give processing guarantees. The
problem is to find a feasible scheduler which schedules all time critical continuous media
tasks in a way that each of them can meet its deadlines. This must be guaranteed for all
tasks in every period for the whole run-time of the system. In a multimedia system,
continuous and discrete media data are processed concurrently.

For scheduling of multimedia tasks, two conflicting goals must be considered :

 An uncritical process should not suffer from starvation because time critical
processes are executed. Multimedia applications rely as much on text and
graphics as on audio and video. Therefore, not all resources should be occupied
by the time-critical processes and their management processes.
 On the other hand, a time-critical process must never be subject to priority inversion.
The scheduler must ensure that any priority inversion (also between time-critical
processes with different priorities) is avoided or reduced as much as possible.

22.4 Traditional Real-time Scheduling

A few real-time scheduling methods are employed in operations research. They


differ from computer science real-time scheduling because they operate in a static
environment where no adaptation to changes of the workload is necessary.

The goal of traditional scheduling on time-sharing computers is optimal


throughput, optimal resource utilization and fair queuing. In contrast, the main goal of
real-time scheduling is to provide a schedule that allows all, or at least as many as possible,
time-critical processes to be processed in time, according to their deadlines. The
scheduling algorithm must map tasks onto resources such that all tasks meet their time
requirements. Therefore, it must be possible to show, or to prove, that a scheduling
algorithm applied to real-time systems fulfills the timing requirements of the task.


There are several attempts to solve real-time scheduling problems. Many of them
are just variations of basic algorithms. To find the best solutions for multimedia systems,
two basic algorithms are analyzed, Earliest Deadline First Algorithm and Rate Monotonic
Scheduling, and their advantages and disadvantages are elaborated.

22.4.1 Earliest Deadline First Algorithm

The Earliest Deadline First (EDF) algorithm is one of the best known algorithms
for real-time processing. At every new ready state, the scheduler selects the task with the
earliest deadline among the tasks that are ready and not fully processed. The requested
resource is assigned to the selected task. At any arrival of a new task EDF must be
computed immediately leading to a new order, i.e., the running task is preempted and the
new task is scheduled according to its deadline. The new task is processed immediately
if its deadline is earlier than that of the interrupted task. The processing of the interrupted
task is continued according to the EDF algorithm later on. EDF is not only an algorithm
for periodic tasks, but also for tasks with arbitrary requests, deadlines and service
execution times. In this case, no guarantee about the processing of any task can be given.

EDF is an optimal, dynamic algorithm, i.e., it produces a valid schedule whenever


one exists. A dynamic algorithm schedules every instance of each incoming task
according to its specific demands. Tasks of periodic processes must be scheduled in each
period again. With n tasks which have arbitrary ready-times and deadlines, the
complexity is Θ(n²).
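The selection rule of EDF can be simulated with a few lines of code. The sketch below uses a simplified, non-preemptive variant in which a task, once started, runs to completion (in the strict algorithm a newly arriving task with an earlier deadline would preempt the running one); all task data are invented.

# Illustrative sketch: Earliest Deadline First on a single processor,
# non-preemptive for simplicity. A task is (name, release, processing, deadline).
import heapq

def edf_schedule(tasks):
    time, order = 0, []
    pending = sorted(tasks, key=lambda t: t[1])       # ordered by release time
    ready = []                                        # heap ordered by deadline
    while pending or ready:
        while pending and pending[0][1] <= time:
            name, release, proc, deadline = pending.pop(0)
            heapq.heappush(ready, (deadline, name, proc))
        if not ready:
            time = pending[0][1]                      # idle until the next release
            continue
        deadline, name, proc = heapq.heappop(ready)   # earliest deadline first
        time += proc                                  # run the task to completion
        order.append((name, time, "met" if time <= deadline else "missed"))
    return order

tasks = [("audio", 0, 2, 4), ("video", 0, 3, 10), ("audio2", 4, 2, 8)]
print(edf_schedule(tasks))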

EDF is used by different models as a basic algorithm. An extension of EDF is the


Time-Driven Scheduler (TDS). Tasks are scheduled according to their deadlines.
Further, the TDS is able to handle overload situations. If an overload situation occurs the
scheduler aborts tasks which cannot meet their deadlines anymore. If there is still an
overload situation, the scheduler removes tasks which have a low “value density”. The
value density corresponds to the importance of a task for the system.

Another priority-driven EDF scheduling algorithm is also introduced. Here, every


task is divided into a mandatory and an optional part. A task is terminated according to
the deadline of the mandatory part, even if it is not completed at this time. Tasks are
scheduled with respect to the deadline of the mandatory parts. A set of tasks is said to be
schedulable if all tasks can meet the deadlines of their mandatory parts.

22.4.2 Rate Monotonic Algorithm

The rate monotonic scheduling principle was introduced by Liu and Layland in
1973. It is an optimal, static, priority-driven algorithm for preemptive, periodic jobs.
Optimal in this context means that there is no other static algorithm that is able to
schedule a task set which cannot be scheduled by the rate monotonic algorithm. A
process is scheduled by a static algorithm at the beginning of the processing.
Subsequently, each task is processed with the priority calculated at the beginning. No
further scheduling is required.


The following five assumptions are necessary prerequisites to apply the rate
monotonic algorithm.

1. The requests for all tasks with deadlines are periodic, i.e., have constant
intervals between consecutive requests.
2. The processing of a single task must be finished before the next task of the
same data stream becomes ready for execution. Deadlines consist of
runability constraints only, i.e., each task must be completed before the next
request occurs.
3. All tasks are independent. This means that the requests for a certain task do
not depend on the initiation or completion of requests for any other task.
4. Run-time for each request of a task is constant. Run-time denotes the
maximum time which is required by a processor to execute the task without
interruption.
5. Any non-periodic task in the system has no required deadline. Typically, they
initiate periodic tasks or are tasks for failure recovery. They usually displace
periodic tasks.

Static priorities are assigned to tasks, once at the connection set-up phase,
according to their request rates. The priority corresponds to the importance of a task
relative to other tasks. Tasks with higher request rates will have higher priorities. The
task with the shortest period gets the highest priority and the task with the longest period
gets the lowest priority.
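The priority assignment itself is easy to sketch in code. The example below also applies the well-known Liu and Layland utilization bound U <= n(2^(1/n) - 1) as a sufficient schedulability test; the task parameters are invented for the example.

# Illustrative sketch: rate monotonic priority assignment and the
# Liu/Layland sufficient schedulability test U <= n * (2**(1/n) - 1).
def rate_monotonic(tasks):
    # tasks: list of (name, period, processing_time); shorter period -> higher priority
    by_priority = sorted(tasks, key=lambda t: t[1])
    utilization = sum(c / p for _, p, c in tasks)
    n = len(tasks)
    bound = n * (2 ** (1 / n) - 1)
    return by_priority, utilization, bound, utilization <= bound

tasks = [("audio", 20, 5), ("video", 40, 15), ("control", 100, 10)]
prio, u, bound, ok = rate_monotonic(tasks)
print("priority order:", [name for name, _, _ in prio])
print("utilization = %.3f, bound = %.3f, schedulable (sufficient test): %s" % (u, bound, ok))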

The rate monotonic algorithm is a simple method to schedule time-critical, periodic


tasks on the respective resource. A task will always meet its deadline, if this can be
proven to be true for the longest response time. The response time is the time span
between the request and the end of processing the task. This time span is maximal when
all processes with a higher priority request to be processed at the same time. This case is
known as the critical instant, shown in the following figure. In this figure, the priority of
A is, according to the rate monotonic algorithm, higher than that of B, and the priority of B
is higher than that of C. The critical time zone is the time interval between the critical
instant and the completion of a task.


The figure shows processes A, B and C with decreasing priorities; the critical instants of B
and C occur whenever all processes of higher priority request the processor at the same time.

An example of critical instants

22.4.3 Other Approaches to Rate Monotonic Algorithm

There are several approaches to this algorithm. One of them divides a task into a
mandatory and an optional part. The processing of the mandatory part delivers a result
which can be accepted by the user. The optional part only refines the result. The
mandatory part is scheduled according to the rate monotonic algorithm. For the scheduling
of the optional part, other, different policies are suggested.

In some systems there are aperiodic tasks next to periodic ones. To meet the
requirements of periodic tasks and the response time requirements of aperiodic requests,
it must be possible to schedule both aperiodic and periodic tasks. If the aperiodic request
is an aperiodic continuous stream (e.g., video images as part of a slide show), we have the
possibility to transform it into a periodic stream. Every timed data item can be
substituted by n items. The new items have the duration of the minimal life span. The
number of streams is increased, but since the life span is decreased, the semantics
remain unchanged. The stream is now periodic because every item has the same life
span.

If the stream is not continuous, we can apply a sporadic server to respond to
aperiodic requests. The server is provided with a computation budget. This budget is
refreshed t units of time after it has been exhausted. Earlier refreshing is also possible.
The budget represents the computation time reserved for aperiodic tasks.


Check Your Progress 1

List the two basic real-time scheduling algorithms analyzed in this lesson.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

22.4.4 Other Approaches for In-Time Scheduling

Apart from the two methods previously discussed, further scheduling algorithms
have been evaluated regarding their suitability for the processing of continuous media
data.

Least Laxity First (LLF). The laxity is the time between the actual time t and
the deadline minus the remaining processing time. The laxity l_k in period k is:

l_k = (s + (k - 1)p + d) - (t + e)

LLF is an optimal, dynamic algorithm for exclusive resources. Furthermore, it is


an optimal algorithm for multiple resources if the ready-times of the real-time
tasks are the same. The laxity is a function of the deadline, the processing time and
the current time. Thereby, the processing time cannot be exactly specified in
advance. When calculating the laxity, the worst case is assumed. Therefore, the
determination of the laxity is inexact. The laxity of waiting processes
dynamically changes over time.
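Reading the formula with s as the start of the first period, p as the period length, d as the deadline relative to the start of a period, t as the current time and e as the remaining processing time (this reading of the symbols is an interpretation, since the text does not spell them out), the laxity can be computed as in the following small sketch with invented numbers.

# Illustrative sketch: laxity of a periodic task in period k,
# following l_k = (s + (k - 1)p + d) - (t + e).
def laxity(s, p, d, k, t, e):
    return (s + (k - 1) * p + d) - (t + e)

# Task started at s = 0 with period p = 40 ms and deadline d = 40 ms after each
# period start; in its third period, at time t = 100 ms with e = 10 ms of work left:
print(laxity(s=0, p=40, d=40, k=3, t=100, e=10))   # -> 10 ms of laxity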

Deadline Monotone Algorithm.

If the deadlines of tasks are less than their period (di < pi), the prerequisites of the
rate monotonic algorithm are violated. In this case, a fixed priority assignment
according to the deadlines of the tasks is optimal. A task Ti gets a higher priority
than a task Tj if di < dj. No effective schedulability test for the deadline monotone
algorithm exists.

To determine the schedulability of a task set, each task must be checked if


it meets its deadline in the worst case. In this case, all tasks require execution to
their critical instant. Tasks with a deadline shorter than their period, for example,
arise during the measurements of temperature or pressure control systems. In
multimedia systems, deadlines equal to period lengths can be assumed.

Shortest Job First (SJF).

The task with the shortest remaining computation time is chosen for execution.
This algorithm guarantees that as many tasks as possible meet their deadlines


under an overload situation if all of them have the same deadline. In multimedia
systems where the resource management allows overload situations this might be
a suitable algorithm.

22.5 Let us sum up

In this lesson we have learnt about the process management present in Multimedia
Operating System. We have touched upon the following points.
 Process management deals with allocation and managing the resource
main processor.
 The process manager transfers a process into the ready-to-run state by
assigning it a position in the respective queue of the dispatcher.
 The real time scheduling algorithms are Earliest Deadline First Algorithm,
Rate Monotonic Algorithm.
 EDF is an optimal, dynamic algorithm.
 Rate monotonic algorithm is optimal, static, priority driven algorithm for
preemptive periodic jobs

22.6 Lesson-end activities

1. Compare the different process scheduling activities present in a normal


operating system with a multimedia operating system.

22.7 Model answers to “Check your progress”

1. The real time scheduling algorithms are


i) Earliest Deadline First Algorithm
ii) Rate Monotonic Algorithm.

22.8 References
1. Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
2. Multimedia Systems, Standards, and Networks By Atul Puri, Tsuhan Chen
3. Multimedia Storage and Retrieval: An Algorithmic Approach By Jan Korst,
Verus Pronk
4. “Multimedia Making it work” By Tay Vaughan


Lesson 23 Multimedia OS – File System


Contents

23.0 Aims and Objectives


23.1 Introduction
23.2 File Systems
23.3 File Structure
23.4 Disk Scheduling
23.5 Multimedia File System
23.6 Additional Operating System Issues
23.7 Let us sum up
23.8 Lesson-end activities
23.9 Model answers to “Check your progress”
23.10 References

23.0 Aims and Objectives

This lesson aims at teaching the learner how file systems are organised in the
Multimedia based Operating Systems. At the end of this chapter the learner will:

i) be able to identify different file systems available for Multimedia OS.


ii) Learn how the files are managed in the Operating System.
iii) Understand the algorithms used for disk scheduling.

23.1 Introduction

The file system is said to be the most visible part of an operating system. Most
programs write or read files. Their program code, as well as user data, is stored in files.
The organization of the file system is an important factor for the usability and
convenience of the operating system. A file is a sequence of information held
as a unit for storage and use in a computer system.

23.2 File Systems

Files are stored in secondary storage, so they can be used by different


applications. The life-span of files is usually longer than the execution of a program. In
traditional file systems, the information types stored in files are sources, objects, libraries
and executables of programs, numeric data, text, payroll records, etc. In multimedia
systems, the stored information also covers digitized video and audio with their related
real-time “read” and “write” demands. Therefore, additional requirements in the design
and implementation of file systems must be considered.


The file system provides access and control functions for the storage and retrieval
of files. From the user’s viewpoint, it is important how the file system allows file
organization and structure. The internals, which are more important in our context, i.e.,
the organization of the file system, deal with the representation of information in files,
their structure and organization in secondary storage.

Traditional File Systems

The two main goals of traditional files systems are: (1) to provide a comfortable
interface for file access to the user and (2) to make efficient use of storage media.

23.3 File Structure

We commonly distinguish between two methods of file organization. In


sequential storage, each file is organized as a simple sequence of bytes or records. Files
are stored consecutively on the secondary storage media as shown in the following figure.

Contiguous and non contiguous storage

They are separated from each other by a well defined “end of file” bit pattern,
character or character sequence. A file descriptor is usually placed at the beginning of
the file and is, in some systems, repeated at the end of the file. Sequential storage is the
only possible way to organize the storage on tape, but it can also be used on disks. The
main advantage is its efficiency for sequential access, as well as for direct access. Disk
access time for reading and writing is minimized.

In non-sequential storage, the data items are stored in a non-contiguous order.


There exist mainly two approaches:

 One way is to use linked blocks, where physical blocks containing


consecutive logical locations are linked using pointers. The file descriptor


must contain the number of blocks occupied by the file, the pointer to the
first block and it may also have the pointer to the last block. A serious
disadvantage of this method is the cost of the implementation for random
access because all prior data must be read. In MS-DOS, a similar method
is applied. A File Allocation Table (FAT) is associated with each disk.
One entry in the table represents one disk block. The directory entry of
each file holds the block number of the first block. The number in the slot
of an entry refers to the next block of a file. The slot of the last block of a
file contains an end-of-file mark (a small sketch of following such a chain is
given below, after the figure).
 Another approach is to store block information in mapping tables. Each
file is associated with a table where, apart from the block numbers
information like owner, file size, creation time, last access time, etc., is
stored. Those tables usually have a fixed size, which means that number
of block references is bounded. Files with more blocks are referenced
indirectly by additional tables assigned to the files. In UNIX, a small table
(on disk) called i-node is associated with each file (See the following
figure ). The indexed sequential approach is an example for multi-level
mapping; here, logical and physical organizations are not clearly
separated.

The UNIX - inode
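The linked-block organization with a File Allocation Table, described in the first approach above, can be illustrated by following a file's chain of block numbers; the table contents and the END marker below are invented for the example.

# Illustrative sketch: following a file's chain of blocks through a
# File Allocation Table (FAT), as in the MS-DOS approach described above.
END = -1                        # invented end-of-file marker for this example

# fat[i] holds the number of the block that follows block i in some file.
fat = {4: 7, 7: 2, 2: 10, 10: END, 5: 9, 9: END}

def file_blocks(first_block):
    blocks, current = [], first_block
    while current != END:
        blocks.append(current)
        current = fat[current]
    return blocks

# The directory entry of a file holds only the number of its first block.
print(file_blocks(4))   # -> [4, 7, 2, 10]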


Directory Structure

Files are usually organized in directories. Most of today’s operating systems


provide tree-structured directories where the user can organize the files according to
his/her personal needs. In multimedia systems, it is important to organize the files in a
way that allows easy, fast, and contiguous data access.

Disk Management

Disk access is a slow and costly transaction. In traditional systems, a common


technique to reduce disk access is block caches. Using block cache, blocks are kept in
memory because it is expected that future read or write operations access these data
again. Thus, performance is enhanced due to shorter access time.

Another way to enhance performance is to reduce disk arm motion. Blocks that
are likely to be accessed in sequence are placed together on one cylinder. To refine this
method, rotational positioning can be taken into account. Consecutive blocks are placed
on the same cylinder, but in an interleaved way as shown in the following figure.

Interleaved and non-interleaved storage

Another important issue is the placement of the mapping tables (e.g., I-nodes in
UNIX) on the disk. If they are placed near the beginning of the disk, the distance
between them and the blocks will be, on average, half the number of cylinders. To
improve this, they can be placed in the middle of the disk. Hence, the average seek time
is roughly reduced by a factor of two. In the same way, consecutive blocks should be
placed on the same cylinder. The use of the same cylinder for the storage of mapping
tables and referred blocks also improves performance.


23.4 Disk Scheduling


Sequential storage devices (e.g., tapes) do not have a scheduling problem; for
random access storage devices, however, every file operation may require movements of the
read/write head. This operation, known as “to seek,” is very time consuming, i.e., seek
times on the order of 250 ms for CDs are still state-of-the-art. The actual time to read or write a disk
block is determined by:

 The seek time (the time required for the movement of the read/write head).
 The latency time or rotational delay (the time during which the transfer cannot
proceed until the right block or sector rotates under the read/write head).
 The actual data transfer time needed for the data to copy from disk into main
memory.

Usually the seek time is the largest factor of the actual transfer time. Most
systems try to keep the cost of seeking low by applying special algorithms to the
scheduling of disk read/write operations. The access of the storage device is a problem
greatly influenced by the file allocation method. Most systems apply one of the
following scheduling algorithms:

First-Come-First-Served (FCFS)

With this algorithm, the disk driver accepts requests one-at-a-time and
serves them in incoming order. This is easy to program and an intrinsically fair
algorithm. However, it is not optimal with respect to head movement because it
does not consider the location of the other queued requests. This results in a
high average seek time.

Shortest-Seek-Time First (SSTF)

At every point in time, when a data transfer is requested, SSTF selects


among all requests the one with the minimum seek time from the current head
position. Therefore, the head is moved to the closest track in the request queue.
This algorithm was developed to minimize seek time and it is in this sense
optimal. SSTF is a modification of Shortest Job First (SJF), and like SJF, it may
cause starvation of some requests. Request targets in the middle of the disk will
get immediate service at the expense of requests in the innermost and outermost
disk areas.

SCAN

Like SSTF, SCAN orders requests to minimize seek time. In contrast to


SSTF, it takes the direction of the current disk movement into account. It first
serves all requests in one direction until it does not have any requests in this
direction anymore. The head movement is then reversed and service is continued.
SCAN provides a very good seek time because the edge tracks get better service


times. Note that middle tracks still get a better service than edge tracks. When
the head movement is reversed, it first serves tracks that have recently been
serviced, where the heaviest density of requests, assuming a uniform distribution,
is at the other end of the disk.

C-SCAN

C-SCAN also moves the head in one direction, but it offers fairer service
with more uniform waiting times. It does not alter the direction, as in SCAN.
Instead, it scans in cycles, always increasing or decreasing, with one idle head
movement from one edge to the other between two consecutive scans. The
performance of C-SCAN is somewhat less than SCAN.
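To make the differences concrete, the following sketch computes the total head movement of FCFS, SSTF and SCAN for one request queue; the track numbers and the starting head position are invented, and SCAN is modelled as reversing at the last request in the current direction, as described above.

# Illustrative sketch: total head movement of FCFS, SSTF and SCAN
# for the same queue of track requests.
def fcfs(requests, head):
    moves, pos = 0, head
    for r in requests:                                 # serve in incoming order
        moves += abs(r - pos); pos = r
    return moves

def sstf(requests, head):
    moves, pos, pending = 0, head, list(requests)
    while pending:
        r = min(pending, key=lambda x: abs(x - pos))   # closest track first
        moves += abs(r - pos); pos = r; pending.remove(r)
    return moves

def scan(requests, head):
    up = sorted(r for r in requests if r >= head)      # serve upwards first,
    down = sorted((r for r in requests if r < head), reverse=True)   # then reverse
    moves, pos = 0, head
    for r in up + down:
        moves += abs(r - pos); pos = r
    return moves

queue, start = [98, 183, 37, 122, 14, 124, 65, 67], 53
print("FCFS:", fcfs(queue, start), "SSTF:", sstf(queue, start), "SCAN:", scan(queue, start))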

23.5 Multimedia File systems


Compared to the increased performance of processors and networks, storage
devices have become only marginally faster. The effect of this increasing speed mismatch
is the search for new storage structures, and storage and retrieval mechanisms with respect
to the file system. Continuous media data are different from discrete data in:

 Real Time Characteristics

As mentioned previously, the retrieval, computation and presentation of


continuous media is time-dependent. The data must be presented (read) before a
well-defined deadline with small jitter only.

 File Size

Compared to text and graphics, video and audio have very large storage space
requirements. Since the file system has to store information ranging from small,
unstructured units like text files to large, highly structured data units like video and
associated audio, it must organize the data on disk in a way that efficiently uses the
limited storage.

 Multiple Data Streams

A multimedia system must support different media at one time. It does not only have
to ensure that all of them get a sufficient share of the resources, it also must consider
tight relations between streams arriving from different sources.

There are different ways to support continuous media in file systems.


Basically there are two approaches. With the first approach, the organization of files
on disk remains as is. The necessary real-time support is provided through special
disk scheduling algorithms and sufficient buffer to avoid jitter. In the second
approach, the organization of audio and video files on disk is optimized for their use


in multimedia systems. Scheduling of multiple data streams still remains an issue of


research.

23.5.1 Disk Scheduling Algorithms in Multimedia File System

The main goals of traditional disk scheduling algorithms are to reduce the cost of
seek operations, to achieve a high throughput and to provide fair disk access for every
process. The additional real-time requirements introduced by multimedia systems make
traditional disk scheduling algorithms, such as described previously, inconvenient for
multimedia systems. Systems without any optimized disk layout for the storage of
continuous media depend far more on reliable and efficient disk scheduling algorithms
than others. In the case of contiguous storage, scheduling is only needed to serve requests
from multiple streams concurrently. A round-robin scheduler is employed that is able to
serve hard real-time tasks. Here, additional optimization is provided through the close
physical placement of streams that are likely to be accessed together.

Earliest Deadline First:

The EDF scheduling strategy, as described for CPU scheduling, is also used for the file
system. Here the block of the stream with the nearest deadline would be
read first. The employment of EDF in the strict sense results in poor throughput and
excessive seek time. Further, as EDF is most often applied as a preemptive scheduling
scheme, the costs for preemption of a task and scheduling of another task are
considerably high. The overhead caused by this is in the same order of magnitude as at
least one disk seek. Hence, EDF must be adapted or combined with file system strategies.

SCAN-Earliest Deadline First

The SCAN-EDF strategy is a combination of the SCAN and EDF mechanisms.


The seek optimization of SCAN and the real-time guarantees of EDF are combined in the
following way: like in EDF, the request with the earliest deadline is always served first;
among requests with the same deadline, the specific one that is first according to the scan
direction is served first; among the remaining requests, this principle is repeated until no
request with this deadline is left.
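The ordering rule of SCAN-EDF can be expressed compactly as a sort key: requests are ordered primarily by deadline and, among equal deadlines, by the track reached first in the scan direction. The sketch below assumes, for simplicity, that all requests lie ahead of the head in the scan direction; the request data are invented.

# Illustrative sketch of the SCAN-EDF ordering: earliest deadline first, and
# among equal deadlines the track that comes first in the scan direction.
def scan_edf_order(requests, direction=+1):
    # requests: list of (stream, deadline, track); direction +1 = increasing tracks
    return sorted(requests, key=lambda r: (r[1], direction * r[2]))

requests = [("a", 20, 90), ("b", 10, 140), ("c", 10, 60), ("d", 20, 130)]
print(scan_edf_order(requests))
# -> deadline 10 served first (tracks 60, then 140), then deadline 20 (90, then 130)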

Group Sweeping Scheduling:

With Group Sweeping Scheduling (GSS), requests are served in cycles, in round-
robin manner. To reduce disk arm movements, the set of n streams is divided into g
groups. Groups are served in fixed order. Individual streams within a group are served
according to SCAN; therefore, it is not fixed at which time or order individual streams
within a group are served. In one cycle, a specific stream may be the first to be served; in
another cycle, it may be the last in the same group. A smoothing buffer which is sized
according to the cycle time and data rate of the stream assures continuity.


Mixed Strategy:

The mixed strategy was introduced based on the shortest seek (also called greedy strategy)
and the balanced strategy. As shown in the following figure, every time data are retrieved from
disk they are transferred into buffer memory allocated for the respective data stream. From
there, the application process removes them one at a time. The goal of the scheduling
algorithm is:

o To maximize transfer efficiency by minimizing seek time and latency.


o To serve process requirements with a limited buffer space.

With shortest seek, the first goal is served, i.e., the process of which data block is
closest is served first. The balanced strategy chooses the process which has the least
amount of buffered data for service because this process is likely to run out of data. The
crucial part of this algorithm is the decision of which of the two strategies must be applied
(shortest seek or balanced strategy). For the employment of shortest seek, two criteria must
be fulfilled: the number of buffers for all processes should be balanced (i.e., all processes
should nearly have the same number of buffered data) and the overall required bandwidth
should be sufficient for the number of active processes, so that none of them will try to
immediately read data out of an empty buffer. The urgency is introduced as an attempt to
measure both. The urgency is the sum of the reciprocals of the current “fullness” (amount
of buffered data). This number measures both the relative balance of all read processes and
the number of read processes. If the urgency is large, the balanced strategy will be used; if it is
small, it is safe to apply the shortest seek algorithm.
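The urgency measure can be sketched as follows; the buffer fullness values and the threshold used to switch between the two strategies are invented for the example.

# Illustrative sketch of the mixed strategy: the urgency is the sum of the
# reciprocals of the current buffer fullness of all read processes.
def urgency(fullness_per_process):
    # fullness values are assumed to be positive (non-empty buffers) in this sketch
    return sum(1.0 / f for f in fullness_per_process)

def choose_strategy(fullness_per_process, threshold=1.0):
    # large urgency -> buffers are low or unbalanced -> use the balanced strategy;
    # small urgency -> safe to optimize seeks with the shortest seek (greedy) strategy
    u = urgency(fullness_per_process)
    return u, ("balanced" if u > threshold else "shortest seek")

print(choose_strategy([8, 10, 9]))    # well-filled, balanced buffers -> shortest seek
print(choose_strategy([1, 10, 9]))    # one buffer nearly empty -> balanced strategy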


Check Your Progress 1

List the disk scheduling algorithms used in multimedia file systems.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

Continuous Media File System:

CMFS Disk scheduling is a non-preemptive disk scheduling scheme designed for


the Continuous Media File System (CMFS) at UC-Berkeley. Different policies can be
applied in this scheme. Here the notion of the slack time H is introduced. The slack time
is the time during which CMFS is free to do non-real-time operations or workahead for
real-time processes, because the current workahead of each process is sufficient so that
no process would starve, even if it would not be served for H seconds.

Several real time scheduling policies such as Static/Minimum policy, Greedy


policy, Cyclical plan policy have been implemented and tested in prototype file system.

23.6 Additional Operating System Issues

Interprocess Communication and Synchronization:

In multimedia systems, interprocess communication refers to the exchange of


different data between processes. This data transfer must be very efficient because
continuous media require the transfer of a large amount of data in a given time span. For
the exchange of discrete media data, the same mechanisms are used as in traditional
operating systems. Data interchange of continuous media is closely related to memory
management and is discussed in the previous section.

Synchronization guarantees timing requirements between different processes. In


the context of multimedia, this is an especially interesting aspect.

Memory Management:

The memory manager assigns physical resource memory to a single process.


Virtual memory is mapped onto memory that is actually available. With paging, less
frequently used data is swapped between main memory and external storage. Pages are
transferred back into the main memory when data on them is required by a process. Note,
continuous media data must not be swapped out of the main memory.

If a page of virtual memory containing code or data required by a real-time process is not in real memory when it is accessed by the process, a page fault occurs, meaning that the page must be read from disk. Page faults affect real-time performance very seriously, so they must be avoided. A possible approach is to lock code and/or data into real memory. However, care should be taken when doing so, because locked pages reduce the memory available to other processes.
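As a concrete, Linux-specific illustration of this approach (not prescribed by the text, and requiring appropriate privileges), a process can ask the operating system to lock all of its current and future pages into RAM via mlockall():

    # Python/ctypes sketch: pin the whole process image into physical memory so
    # that continuous-media processing is not interrupted by page faults (Linux).
    import ctypes, ctypes.util

    MCL_CURRENT = 1   # lock all pages currently mapped
    MCL_FUTURE  = 2   # lock all pages mapped in the future

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        raise OSError(ctypes.get_errno(), "mlockall failed (privileges needed)")

    # ... real-time audio/video processing runs here without paging delays ...

    libc.munlockall()  # release the locks once real-time work is finished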

Device Management

Device management and the actual access to a device allow the operating system
to integrate all hardware components. The physical device is represented by an abstract
device driver. The physical characteristics of devices are hidden. In a conventional
system, such devices include a graphics adapter card, disk, keyboard and mouse. In
multimedia systems, additional devices like cameras, microphones, speakers and
dedicated storage devices for audio and video must be considered. In most existing
multimedia systems, such devices are not often integrated by device management and the
respective device drivers.

Existing operating system extensions for multimedia usually provide one common
system-wide interface for the control and management of data streams and devices.

23.7 Let us sum up


In this lesson we have learnt the different file system. We have discussed how
disk scheduling algorithms are implemented in Multimedia Operating systems.

We have also discussed the following key points.


- The file system provides access and control functions for the storage and
retrieval of files.
- Files are usually organized in directories. Most of today’s operating
systems provide tree-structured directories.
- The disk scheduling algorithms includes First-Come-First-Served
(FCFS), Shortest-Seek-Time First (SSTF), SCAN, C-SCAN
- The Disk Scheduling algorithms used in multimedia OS are
Earliest Deadline First: SCAN-Earliest Deadline First, Group
Sweeping Scheduling, Mixed Strategy.

23.8 Lesson-end activities

1. Compare the traditional algorithms used for Disk scheduling with the
algorithms used for multimedia operating systems.
2. Search the internet for any resource that lists the scheduling algorithms used
in Windows Operating System, Unix Operating System.


23.9 Model answers to “Check your progress”

1) The Disk Scheduling algorithms discussed in this lesson are :


- Earliest Deadline First:
- SCAN-Earliest Deadline First
- Group Sweeping Scheduling:
- Mixed Strategy

23.10 References

1. “Multimedia Systems, Standards, and Networks” By Atul Puri, Tsuhan Chen


2. "Multimedia:Concepts and Practice" By Stephen McGloughlin
3. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.


Lesson 24 Multimedia Databases


Contents

24.0 Aims and Objectives


24.1 Introduction
24.2 Multimedia DataBase Management System
24.3 Characteristics of an MDBMS
24.4 Data Analysis
24.5 Data Structure for Multimedia Databases
24.6 Relational Database Model
24.7 Multimedia Objects storage model
24.8 Different Architectures for Multimedia Databases
24.9 Let us sum up
24.10 Lesson-end activities
24.11 Model answers to “Check your progress”
24.12 References

24.0 Aims and Objectives

In this lesson we will learn the features and characteristics of Multimedia Data Base
Management Systems. At the end of the lesson the reader will be able
i) To understand the concepts behind MDBMS
ii) Characteristics of MDBMS
iii) To understand different representations for MDBMS

24.1 Introduction

Multimedia database systems are database systems where, besides text and other
discrete data, audio and video information will also be stored, manipulated and retrieved.
To provide this functionality, multimedia database systems require a proper storage
technology and file systems.

Current storage technology allows for the possibility of reading, storing and
writing audio and video information in real-time. This can happen either through
dedicated external devices, which have long been available, or through system integrated
secondary storage. The external devices were developed for studio and electronic
entertainment applications; they were not developed for storage of discrete media. An
example is a video recorder controlled through a digital interface.


24.2 Multimedia Data Base systems


Multimedia applications often address file management interfaces at different levels
of abstraction. Consider the following three applications: a hypertext application can
manipulate nodes and edges; an audio editor can read, write and manipulate audio data
(sentences); an audio-video distribution service can distribute stored video information.

At first, it appears that these three applications do not have much in common, but in
all three their functions can uniformly be performed using a Multimedia DataBase
Management System (MDBMS). The reason is that in general, the main task of a
DataBase Management System (DBMS) is to abstract from the details of the storage
access and its management. MDBMS is embedded in the multimedia system domain,
located between the application domain (applications, documents) and the device domain
(storage, compression and computer technology). The MDBMS is integrated into the
system domain through the operating system and communication components.
Therefore, all three applications can be put on the same abstraction level with respect to
DBMS. Further, a DBMS provides other properties in addition to storage abstraction:

 Persistence of Data

The data may outlive processing programs, technologies, etc. For example,
insurance companies must keep data in database for several decades. This implies
that a DBMS should be able to manipulate data even after the changes of the
surrounding programs.

 Consistent View of Data

In multi-user systems it is important to provide, at certain points, a consistent view of data while processing database requests. This property is achieved using time synchronization protocols.

 Security of Data

Security of data and integrity protection of the database in case of system failure are among the most important requirements of a DBMS. This property is provided using the transaction concept.

 Query and Retrieval of Data

Different information (entries) is stored in databases, which later can be retrieved through database queries. Database queries are formulated with query languages such as SQL.
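To make the query and retrieval property concrete, the following small sketch stores an image as a binary object in SQLite and retrieves it again with an SQL query; the table and column names, and the descriptive title used for searching, are invented purely for illustration:

    # Persistence plus query/retrieval of a media entry (illustrative schema).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE picture (
                       id    INTEGER PRIMARY KEY,
                       title TEXT,
                       data  BLOB)""")

    fake_jpeg = b"\xff\xd8" + b"\x00" * 1024          # stand-in for real image data
    con.execute("INSERT INTO picture (title, data) VALUES (?, ?)",
                ("woman with a red scarf", fake_jpeg))
    con.commit()

    # Retrieval through a descriptive attribute rather than a storage detail.
    row = con.execute("SELECT data FROM picture WHERE title LIKE ?",
                      ("%red scarf%",)).fetchone()
    print(len(row[0]), "bytes retrieved")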


24.3 Characteristics of an MDBMS


An MDBMS can be characterized by its objectives when handling multimedia data:

1. Corresponding Storage Media

Multimedia data must be stored and managed according to the specific characteristics of the available storage media. Here, the storage media can be both computer-integrated components and external devices. Additionally, read-only (such as a CD-ROM), write-once and write-many storage media can be used.

2. Descriptive Search Methods

During a search in a database, an entry, given in the form of text or a graphical image, is found using different queries and the corresponding search methods. A query of multimedia data should be based on a descriptive, content-oriented search in the form, for example, of “The picture of the woman with a red scarf”. This kind of search relates to all media, including video and audio.

3. Device-independent Interface

The interface to a database application should be device-independent. For example, a parameter could specify that the following audio and video data will not change in the future.

4. Format-independent Interface

Database queries should be independent from the underlying media format, meaning that the interfaces should be format-independent. The programming itself should also be format-independent, although in some cases, it should be possible to access details of the concrete formats.

5. View-specific and Simultaneous Data Access

The same multimedia data can be accessed (even simultaneously) through different queries by several applications. Hence, consistent access to shared data (e.g., shared editing of a multimedia document among several users) can be implemented.

6. Management of Large Amounts of Data

The DBMS must be capable of handling and managing large amounts of data and
satisfying queries for individual relations among data or attributes of relations.


7. Relational Consistency of Data Management

Relations among data of one or different media must stay consistent corresponding to their specification. The MDBMS manages these relations and can use them for queries and data output. Therefore, for example, navigation through a document is supported by managing relations among individual parts of a document.

8. Real-time Data Transfer


The read and write operations of continuous data must be done in real-time. The
data transfer of continuous data has a higher priority than other database
management actions. Hence, the primitives of a multimedia operating system
should be used to support the real-time transfer of continuous data.
9. Long Transactions
The performance of a transaction in a MDBMS means that transfer of a large
amount of data will take a long time and must be done in a reliable fashion. An
example of a long transaction is the retrieval of a movie.

In the architecture model, the system components around MDBMS and MDBMS itself
have the following functions:

 The operating system provides the management interface for MDBMS to all local
devices.
 The MDBMS provides an abstraction of the stored data and their equivalent
devices, as is the case in DBMS without multimedia.
 The communication system provides for MDBMS abstractions for
communication with entities at remote computers. These communication
abstractions are specified through interfaces according to, for example, the Open
System Interconnection (OSI) architecture.
 A layer above the DBMS, operating system and communication system can unify
all these different abstractions and offer them, for example, in an object-oriented
environment such as a toolkit. Thus, an application should have access to each
abstraction at different levels.

24.4 Data Analysis

Meyer-Wegener performed a detailed analysis of multimedia data storage and manipulation in a multimedia database system. This section describes the analysis of compressed data and other media. In the analysis, two questions are addressed:

1. How are these data structured?

It is important to specify what kind of information is needed in the entry structure to process the multimedia entry in an MDBMS.


2. How can these data be accessed?

That is to say, how are the proper operations defined to access multimedia entries. One can
define media-dependent, as well as media-independent operations. In a next step,
a class hierarchy with respect to object-oriented programming may be
implemented.

Advantages of multimedia database

Following are advantages to store multimedia objects in the database:

 Better security: Multimedia objects are secured in the databases and can be
invoked any time when required. For example, a doctor can see M.R.I. report of
patient whenever he wants and can be preserved as long as the patient takes the
treatment of doctor.
 Greater control (resizing, manipulating): Multimedia objects can be resized
when required and certain changes can be made when required.
 Easy deletion: Images can be deleted without deleting the corresponding data
from the database. One can search for multimedia content in the same fashion as
they search for traditional relational data. For instance, customers can search for
images by a unique image name or key, by photographer, or by category. In
addition, customers can use Oracle interMedia’s powerful image content based
retrieval capability to search for ‘similar’ images recursively through the
database. And once a customer has found an image of interest, they can simply
click on it and see the full resolution image.
 Easy to extract statistics on usage: Oracle interMedia enables Oracle9i to
manage multimedia content (image, audio, and video) in an integrated fashion
with other enterprise information. It extends Oracle9i reliability, availability, and
data management to multimedia content in media-rich Internet applications. As an
integral part of the Oracle9i database server, Oracle interMedia data benefits
from all Oracle9i capabilities, including its speed, efficiency, scalability, security,
and power.

Check Your Progress 1

List a few advantages of Multimedia Databases.


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………


24.5 Data Structure for Multimedia databases


In general, data can be stored in databases either in unformatted (unstructured) form or in formatted (structured) form. Unformatted or unstructured data are presented in a unit whose content cannot be retrieved by accessing any structural detail.

Formatted or structured data are stored in variables, fields or attributes with corresponding values.

Examples of Multimedia Structures

We present examples of raw, registered, and descriptive data for different media such as
text, image, video and audio.

In the case of text, the individual forms are:

1. Characters represent raw data.


2. The registering data describe the coding (e.g., ASCII). Additionally, a length
entry must follow or an end symbol must be defined.
3. The descriptive data may include information for layout and logical structuring of
the text or keywords.

Images can be stored in databases using the following forms:

1. Pixels (pixel matrix) represent raw data. A compressed image may also consist of
a transformed pixel set.
2. The registering data include the height and width of the picture. Additionally, the
details of coding are stored here.
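One possible way to group these three kinds of entries for an image inside a program is sketched below; the field names are assumptions chosen for illustration, not part of Meyer-Wegener's analysis:

    # Raw, registering and descriptive data of an image entry, as one data structure.
    from dataclasses import dataclass, field

    @dataclass
    class ImageEntry:
        pixels: bytes                 # raw data: pixel matrix or transformed pixel set
        width: int                    # registering data: needed to interpret the raw data
        height: int
        coding: str = "JPEG"
        keywords: list = field(default_factory=list)   # descriptive data for content queries

    entry = ImageEntry(pixels=b"\x00" * 300, width=10, height=10,
                       keywords=["woman", "red scarf"])
    print(entry.width, entry.height, entry.keywords)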

24.6 Relational Database Model

The simplest possibility to implement a multimedia database is to use the relational database model because the attributes of different media in relational databases
are defined in advance. Hence, the attributes can specify not only text (as is done in
current database systems), but also, for example, audio or video data types. The main
advantage of this approach is its compatibility with current database applications.

In the following paragraphs, we will analyze different types of the relational model using an example. In this example, a relation “student” is given for the admission
of a sport institute’s students into the obligatory athletics course. A relation’s attributes
(e.g., picture, exercise devices) can be specified through different media types (e.g.,
image, motion video). Further, our example database includes other entries such as:
“athletics”, “swimming”, and “analysis”.


Student   (Admission_Number Integer,
           Name String,
           Picture Image,
           Exercise_Device_1 Video,
           Exercise_Device_2 Video)

Athletics (Admission_Number Integer,
           Qualification Integer,
           The_High_Jump Video,
           The_Mile_Run Video)

Swimming  (Admission_Number Integer,
           Crawl Video)

Analysis  (Qualification Integer,
           Error_Pattern String,
           Comment Audio)
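Because most relational systems do not offer native Image, Video or Audio column types, one hedged way to realize such relations today is to map the media attributes onto BLOB columns; the SQLite sketch below mirrors the “Student” and “Athletics” relations of the example and is only an approximation of the intended typed attributes:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE Student (
        Admission_Number  INTEGER PRIMARY KEY,
        Name              TEXT,
        Picture           BLOB,   -- stands in for the Image type
        Exercise_Device_1 BLOB,   -- stands in for the Video type
        Exercise_Device_2 BLOB
    );
    CREATE TABLE Athletics (
        Admission_Number INTEGER REFERENCES Student(Admission_Number),
        Qualification    INTEGER,
        The_High_Jump    BLOB,
        The_Mile_Run     BLOB
    );
    """)
    print([r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")])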
 Type 1 Relational Model
In the type 1 Relational Model, the value of a certain attribute can be fixed over
the particular set of the corresponding attribute types, e.g., the frame rate of
motion video can be fixed.
 Type 2 Relational Model
A variable number of entries can be defined through the type 2 relational model.
 Type 3 Relational Model
In addition to the fixed values of attributes per relation and the variable number of
entries, an entry can simultaneously belong to several relations. This property is
called the type 3 relational model.

24.7 Multimedia Objects Storage Model

Multimedia object types have a common storage model. The multimedia component of these objects can be stored in the database as a BLOB under transaction control.

The multimedia component can also be stored outside the database, without
transaction control. In this case, a pointer, under transaction control, is stored in the
database, while the multimedia component is stored in an external BFILE (operating
system flat file), at an HTTP server-based URL, on a specialized media server, or at a
user-defined source on other servers.

Multimedia content stored outside the database can provide a convenient mechanism for managing large, or new, multimedia repositories that reside as flat files on erasable or read-only devices. This data can be imported and exported between BLOBs and the external BFILE source at any time.
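A minimal sketch of the two storage variants is given below: the media bytes either go into the database as a BLOB under transaction control, or only a pointer (a file path or URL) is stored while the content stays outside. The table names and the example path are invented for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE clip_internal (id INTEGER PRIMARY KEY, content  BLOB);
    CREATE TABLE clip_external (id INTEGER PRIMARY KEY, location TEXT);
    """)

    # Variant 1: media component inside the database, under transaction control.
    con.execute("INSERT INTO clip_internal (content) VALUES (?)", (b"\x00" * 4096,))

    # Variant 2: only the pointer is stored; the bytes live in a flat file or at a URL.
    con.execute("INSERT INTO clip_external (location) VALUES (?)",
                ("/media/store/lecture01.mpg",))
    con.commit()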


Object metadata and methods are always stored in the database. Whether multimedia content is stored inside or outside the database, the database manages metadata for all the multimedia object types, and automatically extracts that metadata for each type. This
metadata includes the following:

 Data storage information including the source types, location, and name.
 Data update time and format.
 MIME media type (used in Web and mail applications).
 Image height and width, and image content length, format, and compression type.
 Audio encoding type, number of channels, sampling rate, sample size,
compression type, play time (duration), and description.
 Video frame widths and heights, frame resolution and rate, play time(duration),
number of frames, compression type, number of colors, bit rate, and description .
 Select application metadata (for example, singer or studio names).

24.8 Different architectures for multimedia databases

There are various architectures for multimedia databases. Following are three
architectures that we will discuss:

24.8.1 I2RP architecture [Intelligent Information Retrieval and Presentation with Multimedia Databases]

This architecture is a question-answer system. The result of a query given by the user is a multimedia presentation containing the answer. The following figure illustrates this architecture:
[Figure: I2RP architecture – components include a Query Processor, an Ontology Agent and a Semantic Generator working over the ontologies and multimedia stores of databases DB1 and DB2, a Natural Language Generator, a Presentation Generator and a Multimedia Player that renders the resulting multimedia presentation.]


24.8.2 SQL+D architecture

SQL+D is an extension to SQL which allows users to dynamically specify how to display answers to queries posed to multimedia databases. It provides tools to display multimedia data plus other traditional GUI elements such as boxed text, checkboxes, lists, and buttons.

The version of SQL+D includes:

 The full implementation of the Database Interface, allowing users to connect local
and remote ODBC (or JDBC) compliant database, such as ORACLE or Microsoft
Access.
 Simplified display specification syntax and instantiation of display elements.

SQL+D differs from other efforts in that it is specifically designed for querying multimedia databases. It emphasizes, in the query, by-the-user specification of the display of the output data. In contrast, others have focused on specification of the query, data visualization, or data browsing. SQL+D allows all of these, and browsers and visual querying applications have been built as a proof of concept. By proposing SQL+D as a language extension to SQL, its designers intend to maintain the flexibility that allowed SQL to be adopted as the query language of choice by a great number of database management systems and browsers, as well as by many programming languages that allow embedded SQL queries.

24.8.3 Entity-Multimedia-Relationship model (EMRM)

The EMRM model can be designed with traditional entities and their relationship sets along with an additional set called the multimedia set. Multimedia sets are shown in this model as entities and can be connected to the normal entities using relationships. For example, one can design the EMRM with a description of the entities and relationships as given below. The entity “Document” is very important in the system. As a new medium it contains multimedia objects. The cardinality between “Document” and “Media Object” is one to many, which means a Document has at least one media object and there is no blank Document. A “Media Object” can exist without a “Document”, so the cardinality between “Media Object” and “Document” is 0 to many.

A “media object” can belong to multiple Documents. Different media entities “Text”, “Picture”, “Audio” and “Video” have the (t, d) classification hierarchy with the “Media Object”. Each medium is a subclass of “Media Object” and they are disjoint with each other.

In the system two kinds of user groups are anticipated, the “Reader” and “Author” entities. A generic entity “Person” is defined to include the common attributes of “Reader” and “Author”. It does make sense that a person can be the “Author” and “Reader” at the same time, so the classification hierarchy is (p, o), which means a “Person” can be either a “Reader” or an “Author” or both. The ternary relationship between “Reader” and “Document” is defined by the connections “Read” and “Readby”. A “Reader” can exist without any “Document” and a “Document” can have no reader


at all. Also, a “Reader” can read an unlimited number of “Document” entities and an unlimited number of “Reader” entities can read a “Document”, so the cardinalities are from 0 to many.

Another ternary relationship is defined between the “Author” and the “Document” as the “Write” and “Writtenby” relationships. The difference is that a “Document” must have at least one “Author”, while an author can write 0 to many “Document” entities. The last entity is “Publisher”. The ternary relationship addresses the connection between “Publisher” and “Document”. The “Publisher” can be an organization such as ACM, a department or a laboratory. A “Document” can be jointly published by several organizations, so the cardinality is from 1 to n.
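One way to read these cardinalities is as relational tables with junction tables for the many-to-many connections; the sketch below is an assumption-laden translation made for illustration only and is not part of the EMRM itself:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE Document    (doc_id   INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE MediaObject (media_id INTEGER PRIMARY KEY,
                              kind TEXT CHECK (kind IN ('Text','Picture','Audio','Video')));
    -- A Document contains 1..n media objects; a media object may appear in 0..n documents.
    CREATE TABLE Contains  (doc_id   INTEGER REFERENCES Document(doc_id),
                            media_id INTEGER REFERENCES MediaObject(media_id));
    -- 0..n readers per document; at least one author per document (the >=1 author
    -- rule would need an application-level or trigger check).
    CREATE TABLE ReadBy    (doc_id INTEGER, person_id INTEGER);
    CREATE TABLE WrittenBy (doc_id INTEGER, person_id INTEGER);
    """)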

24.9 Let us sum up

In this lesson we have learnt the purpose and specifications of Multimedia Database Management Systems. We have discussed the following key points in this lesson:
 Multimedia database systems are database systems where, besides text and other
discrete data, audio and video information will also be stored, manipulated and
retrieved. Unformatted or unstructured data are presented in a unit whose
content cannot be retrieved by accessing any structural detail.
 Formatted or structured data are stored in variables, fields or attributes with
corresponding values.
 The simplest possibility to implement a multimedia database is to use the relational
database model, because the attributes of different media in relational databases
are defined in advance.

24.10 Lesson-end activities

Check the availability of multimedia handling capacity in any RDBMS. Check for
the availability of multimedia components (text, audio, video, animation) as field
items in Oracle 9i.

24.11 Model answers to “Check your progress”

1. Advantages of storing multimedia databases are :


i) Better security
ii) Greater control (resizing, manipulation)
iii) Easy deletion
iv) Easy to extract statistics on usage


24.12 References
1. “Multimedia Servers: Applications, Environments and Design” By Dinkar
Sitaram, Asit Dan
2. “Semantic Models for Multimedia Database Searching and Browsing “ By Shu-
Ching Chen, Rangasami L. Kashyap, Arif Ghafoor
3. "Multimedia:Concepts and Practice" By Stephen McGloughlin
4. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.


Lesson 25 Case Studies


Contents
25.0 Aims and Objectives
25.1 Introduction
25.2 Case Study in Process Management
25.3 Case Study – Synchronization
25.3.1 Synchronization in MHEG
25.3.2 Synchronization in HyTime
25.3.3 Synchronization in MODE
25.4 Let us sum up
25.5 Lesson-end activities
25.6 Model answers to “Check your progress”
25.7 References

25.0 Aims and Objectives

In this lesson we will learn about different implementations of multimedia systems. Different implementation architectures are discussed in this lesson. At the end of this lesson the reader will be able to understand how multimedia systems implement synchronization and process management.

25.1 Introduction

Synchronization in multimedia systems refers to the temporal relations between media objects in the multimedia system. In a more general and widely used sense, some authors use synchronization in multimedia systems as comprising content, spatial and temporal relations between media objects.
Process management deals with the resource main processor. The capacity of this
resource is specified as processor capacity. The process manager maps single processes
onto resources according to a specified scheduling policy such that all processes meet
their requirements.

25.2 Case Study for Process Management

Real Time Process Management in Conventional Operating Systems: An Example

UNIX and its variants, Microsoft’s Windows-NT, Apple’s System 7 and IBM’s OS/2™ are, and will be, the most widely installed operating systems with multitasking
capabilities on personal computers (including the Power PC) and workstations. Although
some are enhanced with special priority classes for real-time processes, this is not


sufficient for multimedia applications. For example, the UNIX scheduler which provides
a real-time static priority scheduler in addition to a standard UNIX timesharing scheduler
is analyzed. For this investigation three applications have been chosen to run
concurrently; “typing” as an interactive application, “video” as a continuous media
application and a batch program. The result was that only through trial and error a
particular combination of priorities and scheduling class assignment might be found that
works for a specific application set, i.e., additional features must be provided for the
scheduling of multimedia data processing.

Threads

OS/2 was designed as a time-sharing operating system without taking serious real-
time applications into account. An OS/2 thread can be considered as a light-weight
process: it is the dispatchable unit of execution in the operating system. A thread belongs
to exactly one address space (called process in OS/2 terminology). All threads share the
resources allocated by the respective address space. Each thread has its own execution
stack, register values and dispatch state (either executing or ready-to-run). Each thread
belongs to one of the following priority classes:

 The time-critical class is reserved for threads that require immediate attention.
 The fixed-high class is intended for applications that require good responsiveness
without being time-critical.
 The regular class is used for the executing of normal tasks.
 The idle-time class contains threads with the lowest priorities. Any thread in this
class is only dispatched if no thread of any other class is ready to execute.

Priorities

Within each class, 32 different priorities (0, …, 31) exist. Through time-slicing,
threads of equal priority have equal chances to execute. A context switch occurs
whenever a thread tries to access an otherwise allocated resource. The thread with the
highest priority is dispatched and the time-slice is started again.

Threads are preemptive, i.e., if a higher-priority thread becomes ready to execute, the scheduler preempts the lower-priority thread and assigns the CPU to the higher-priority thread. The state of the preempted thread is recorded so that execution can be resumed later.

Physical Device Driver as Process Manager

In OS/2, applications with real-time requirements can run as Physical Device Drivers (PDD) at ring 0 (kernel mode). These PDDs can be made non-preemptable. An
interrupt that occurs on a device (e.g., packets arriving at the network adapter) can be
serviced from the PDD immediately. As soon as an interrupt happens on a device, the
PDD gets control and does all the work to handle the interrupt. This may also include


tasks which could be done by application processes running in ring 3 (user mode). The
task running at ring 0 should (but must not) leave the kernel mode after 4 msec.

Operating system extensions for continuous media processing can be implemented as PDDs. In this approach, a real-time scheduler and the process management run as a PDD activated by a high-resolution timer. In principle, this is the implementation scheme of the OS/2 Multimedia Presentation Manager™, which represents the multimedia extension to OS/2.

25.3 Case Study - Synchronization

25.3.1 Synchronization in MHEG

The generic space in MHEG provides a virtual coordinate system that is used to
specify the layout and relation of content objects in space and time according to the
virtual axes-based specification method. The generic space has one time axis of infinite
length measured in Generic Time Units (GTU). The MHEG run-time environment must
map the GTU to Physical Time Units (PTUs). If no mapping is specified, the default is
one GTU mapped to one millisecond. Three spatial axes (X=latitude, Y=longitude,
Z=altitude) are used in the generic space. Each axis is of finite length in an interval of [-
32768,+32767]. Units are Generic Space Units (GSUs). Also, the MHEG engine must
perform mapping from the virtual to the real coordinate space.
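To illustrate the kind of mapping the MHEG run-time environment has to perform, the sketch below converts generic units to physical ones; the default time scale follows the text, while the screen size is an arbitrary example value:

    # Virtual-to-physical mapping an MHEG engine must perform (illustrative).
    GTU_TO_MS = 1.0            # default mapping: one Generic Time Unit = 1 millisecond

    def gtu_to_ms(gtu: int) -> float:
        return gtu * GTU_TO_MS

    def gsu_to_pixels(gsu: int, screen_size_px: int) -> int:
        """Map a Generic Space Unit from [-32768, +32767] onto [0, screen_size_px)."""
        return int((gsu + 32768) / 65536 * screen_size_px)

    print(gtu_to_ms(1500))            # 1500 GTU -> 1500.0 ms with the default mapping
    print(gsu_to_pixels(0, 1920))     # centre of the generic X axis -> pixel 960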

The presentation of content objects is based on the exchange of action objects sent to an object. Examples of actions are prepare, to set the object into a presentable state, run, to start the presentation, and stop, to end the presentation. Action objects can be combined to form an action list. Parallel action lists are executed in parallel. Each list is composed of a delay followed by delayed sequential actions that are processed serially by the MHEG engine, as shown in the following figure.

By using links it is possible to synchronize presentations based on events. Link conditions may be associated with an event. If the conditions associated with a link are fulfilled, the link is triggered and actions assigned to this link are performed. This is a type of event-based synchronization.


[Figure: Lists of actions – parallel lists of actions, each beginning with a delay followed by delayed sequential actions.]

MHEG Engine

At the European Networking Center in Heidelberg, an MHEG engine has been developed. The MHEG engine is an implementation of the object layer. The architecture of the MHEG engine is shown in the following figure.

The Generic Presentation Services of the engine provide abstractions from the presentation modules used to present the content objects. The Audio/Video-Subsystem is a stream layer implementation. This component is responsible for the presentation of the continuous media streams, e.g., audio/video streams. The User Interface Services provide the presentation of time-independent media, like text and graphics, and the processing of user interactions, e.g., buttons and forms.

The MHEG engine receives the MHEG objects from the application. The Object
Manager manages these objects in the run-time environment. The Interpreter processes
the action objects and events. It is responsible for initiating the preparation and
presentation of the objects. The Link Processor monitors the states of objects and
triggers links, if the trigger conditions of a link are fulfilled.


[Figure: Architecture of an MHEG engine – the application sits on top of the MHEG engine (Object Manager, Link Processor, Interpreter), whose Generic Presentation Services connect to the AV-Subsystem and the User Interface Services, all running above the operating system.]

The run-time system communicates with the presentation services by events. The
User Interface Services provide events that indicate user actions. The Audio/Video-
Subsystem provides events about the status of the presentation streams, like end of the
presentation of a stream or reaching a cuepoint in a stream.

25.3.2 HyTime

HyTime (Hypermedia/Time-based Structuring Language) is an international standard (ISO/IEC 10744) for the structured representation of hypermedia information. HyTime is an application of the Standard Generalized Markup Language (SGML).

SGML is designed for document exchange, whereby the document structure is of great importance, but the layout is a local matter. The logical structure is defined by
markup commands that are inserted in the text. The markups divide the text into SGML
elements. For each SGML document, a Data Type Definition (DTD) exists which
declares the element types of a document, the attributes of the elements and how the
instances are hierarchically related. A typical use of SGML is the publishing industry
where an author is responsible for the layout. As the content of the document is not
restricted by SGML, elements can be of type text, picture or other multimedia data.

HyTime defines how markup and DTDs can be used to describe the structure of
hyperlinked time-based multimedia documents. HyTime does not define the format or


encoding of elements. It provides the framework for defining the relationship between
these elements.

HyTime supports addresses to identify a certain piece of information within an element, linking facilities to establish links between parts of elements, and temporal and spatial alignment specifications to describe the relationships between media objects.

HyTime defines architectural forms that represent SGML element declaration templates with associated attributes. The semantics of these architectural forms are defined by HyTime. A HyTime application designer creates a HyTime-conforming DTD using the architectural forms he/she needs for the HyTime document. In the HyTime DTD, each element type is associated with an architectural form by a special HyTime attribute.

The HyTime architectural forms are grouped into the following modules:

 The Base Module specifies the architectural forms that comprise that document.
 The Measurement Module is used to add dimensions, measurement and counting
to the documents. Media objects in the document can be placed along with the
dimensions.
 The Location Address Module provides the means to address locations in a
document. The following addressing modes are supported:

 Name Space Addressing Schema: Addressing to a name identifying a piece of information.
 Coordinate Location Schema: Addressing by referring to an interval of a
coordinate space if measuring along the coordinate space is possible. An example
is to address to a part of an audio sequence.
 Semantic Location Schema: Addressing by using application-specific
constructs.
 The Scheduling Module places media objects into Finite Coordinate Spaces
(FCSs). These spaces are collections of application-defined axes. To add measures
to the axes, the measurement module is needed. HyTime does not know the
dimensions of its media objects.
 The Hyperlink Module enables building link connections between media objects.
Endpoints can be defined using the location address, measurements and
scheduling modules.
 The Rendition Module is used to specify how the events of a source FCS, that
typically provides a generic presentation description, are transformed to a target
FCS that is used for a particular presentation.


Check Your Progress 1

List the modules available in HyTime Architecture


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

HyTime Engine

The task of a HyTime engine is to take the output of an SGML parser, to recognize architectural forms and to perform the HyTime-specific and application-independent processing. Typical tasks of the HyTime engine are hyperlink resolution, object addressing, parsing of measures and schedules, and transformation of schedules and dimensions. The resulting information is then provided to the HyTime application. The HyTime engine HyOctane, developed at the University of Massachusetts at Lowell, has the following architecture: an SGML parser takes as input the application data type definition that is used for the document and the HyTime document instance. It stores the document object’s markups and contents, as well as the application’s DTD, in the SGML layer of a database. The HyTime engine takes as input the information stored in the SGML layer of the database. It identifies the architectural forms, resolves addresses from the location address module, handles the functions of the scheduling module and performs the mapping specified in the rendition module. It stores the information about elements of the document that are instances of architectural forms in the HyTime layer of the database.

The application layer of the database stores the objects and their attributes, as
defined by the DTD. An application presenter gets the information it needs for the
presentation of the database content, including the links between objects and the
presentation coordinates to use for the presentation, from the database.

25.3.3 MODE

The MODE (Multimedia Objects in a Distributed Environment) system, developed at the University of Karlsruhe, is a comprehensive approach to network-transparent synchronization specification and scheduling in heterogeneous distributed
systems. The heart of MODE is a distributed multimedia presentation service which
shares a customized multimedia object model, synchronization specifications and QoS
requirements with a given application. It also shares knowledge about networks and
workstations with a given run-time environment. The distributed service uses all this
information for synchronization scheduling when the presentation of a compound


multimedia object is requested from the application. Thereby, it adapts the QoS of the
presentation to the available resources, taking into account a cost model and the QoS
requirements given by the application.

The MODE system contains the following synchronization-related components:

 The Synchronization Editor at the specification layer, which is used to create synchronization and layout specifications for multimedia presentations.
 The MODE Server Manager at the object layer, which coordinates the execution
of the presentation service calls. This includes the coordination of the creation of
units of presentation (presentation objects) out of basic units of information
(information objects) and the transport of objects in a distributed environment.
 The Local synchronizer, which receives locally the presentation objects and
initiates their local presentation according to a synchronization specification.
 The Optimizer, part of the MODE Server Manager, which performs the planning
of the distributed synchronization and chooses presentation qualities and
presentation forms depending on user demands, network and workstation
capabilities and presentation performance.

Synchronization Model

In the MODE system, a synchronization model based on synchronization at reference points is used. This model is extended to cover handling of time intervals, objects of unpredictable duration and conditions which may be raised by the underlying distributed heterogeneous environment.

A synchronization specification created with the Synchronization Editor and used by the Synchronizer is stored in textual form. The syntax of this specification is defined
in the context-free grammar of the Synchronization Description Language. This way, a
synchronization specification can be used by MODE components, independent of their
implementation language and environment.

Local Synchronizer

The Local Synchronizer performs synchronized presentations according to the synchronization model introduced above. This comprises both intra-object and inter-object synchronization. For intra-object synchronization, a presentation thread is created which manages the presentation of a dynamic basic object. Threads with different priorities may be used to implement priorities of basic objects. All presentations of static basic objects are managed by a single thread.

Synchronization is performed by a signaling mechanism. Each presentation thread reaching a synchronization point sends a corresponding signal to all other presentation
threads involved in the synchronization point. Having received such a signal, other
presentation threads may perform acceleration actions, if necessary. After the dispatch of
all signals, the presentation thread waits until it receives signals from all other


participating threads of the synchronization point; meanwhile, it may perform a waiting action.
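This signaling at a common synchronization point behaves much like a barrier across the participating presentation threads. The Python sketch below imitates the idea with threading.Barrier; it is an illustration only, not MODE code, and the stream names and timings are made up:

    # Two presentation threads meeting at one reference (synchronization) point.
    import threading, time

    sync_point = threading.Barrier(2)          # e.g., audio and video participate

    def present(name, frame_time):
        for i in range(3):                     # play a few presentation objects
            time.sleep(frame_time)
            print(f"{name}: object {i} presented")
        print(f"{name}: reached synchronization point, waiting")
        sync_point.wait()                      # signal the others, wait for their signals
        print(f"{name}: all participants arrived, continuing")

    threads = [threading.Thread(target=present, args=("audio", 0.01)),
               threading.Thread(target=present, args=("video", 0.03))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()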

Exceptions Caused by the Distributed Environment

The correct temporal execution of the plan depends on the underlying environment, i.e., on whether the workstations and network provide temporal guarantees for the
execution of the operations. Therefore, MODE provides several guarantee levels. If the
underlying distributed environment cannot give full guarantees, MODE considers the
possible error conditions. Three types of actions are used to define a behavior in the case
of exception conditions, which may be raised during a distributed synchronized
presentation: 1) A waiting action can be carried out if a presentation of a dynamic basic
object has reached a synchronization point and waits longer than a specified time at this
synchronization point. Possible waiting actions are, for example, continuing presentation
of the last presentation object (“freezing” a video, etc.), pausing or cancellation of the
synchronization point. 2) When a presentation of a dynamic basic object has reached a
synchronization point and waits for other objects to reach this point, acceleration actions
represent an alternative to waiting actions. They move the delayed dynamic basic objects
to this synchronization point in due time.

Priorities may be used for basic objects to reflect their sensitivity to delays in their
presentation. For example, audio objects will usually be assigned higher priorities than
video objects because a user recognizes jitter in an audio stream earlier than jitter in a
video stream. Presentations with higher priorities are preferred over objects with lower
priorities in both presentation and synchronization.

25.4 Let us sum up

In this lesson we have learnt how concepts such as process management and synchronization are implemented in multimedia systems.

25.5 Lesson-end activities


Browse the internet and search for different implementations of multimedia concepts
in operating systems. List the operating systems which include multimedia
features.

25.6 Model answers to “Check your progress”

The HyTime architectural forms are grouped into the following modules

o The Base Module


o The Measurement Module
o The Location Address Module


25.7 References
1. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
2. “Multimedia Systems, Standards, and Networks” By Atul Puri, Tsuhan Chen
3. “Multimedia Storage and Retrieval: An Algorithmic Approach” By Jan Korst,
Verus Pronk
