Multimedia
UNIT - I
In this lesson we will learn the preliminary concepts of Multimedia. We will discuss the various benefits and applications of multimedia. After going through this chapter the reader will be able to:
i) define multimedia
ii) list the elements of multimedia
iii) enumerate the different applications of multimedia
iv) describe the different stages of multimedia software development
1.1 Introduction
Definition
Multimedia is the media that uses multiple forms of information content and
information processing (e.g. text, audio, graphics, animation, video, interactivity) to
inform or entertain the user. Multimedia also refers to the use of electronic media to store
and experience multimedia content. Multimedia is similar to traditional mixed media in
fine art, but with a broader scope. The term "rich media" is synonymous with interactive multimedia.
Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatial-temporal applications.
Creative industries
Commercial
Much of the electronic old and new media utilized by commercial artists is multimedia. Exciting presentations are used to grab and keep attention in advertising. Industrial, business-to-business, and interoffice communications are often developed by creative services firms as advanced multimedia presentations, going beyond simple slide shows, to sell ideas or liven up training. Commercial multimedia developers may be hired to design for governmental services and nonprofit services applications as well.
Education
Engineering
Industry
Medicine
On the World Wide Web, standards for transmitting virtual reality worlds or
“scenes” in VRML (Virtual Reality Modeling Language) documents (with the file name
extension .wrl) have been developed.
Specialized public game arcades have been built recently to offer VR combat and flying experiences for a price. From Virtual World Entertainment in Walnut Creek, California, and Chicago, for example, BattleTech is a ten-minute interactive video encounter with hostile robots. You compete against others, perhaps your friends, who share couches in the same containment bay. The computer keeps score in a fast and sweaty firefight. Similar “attractions” will bring VR to the public, particularly a youthful public, with increasing presence during the 1990s.
The technology and methods for working with three-dimensional images and for animating them are discussed. VR is an extension of multimedia: it uses the basic multimedia elements of imagery, sound, and animation. Because it requires instrumented feedback from a wired-up person, VR is perhaps interactive multimedia at its fullest extension.
1. Planning and Costing : This first stage of a multimedia project begins with an idea or need. This idea can be further refined by outlining its messages and objectives. Before starting to develop the multimedia project, it is necessary to plan what writing skills, graphic art, music, video and other multimedia expertise will be required.
It is also necessary to estimate the time needed to prepare all elements of
multimedia and prepare a budget accordingly. After preparing a budget, a
prototype or proof of concept can be developed.
2. Designing and Producing : The next stage is to execute each of the planned
tasks and create a finished product.
3. Testing : Testing a project ensures the product is free from bugs. Apart from bug elimination, another aspect of testing is to ensure that the multimedia application meets the objectives of the project. It is also necessary to test whether the multimedia project works properly on the intended delivery platforms and meets the needs of the clients.
4. Delivering : The final stage of the multimedia application development is to pack
the project and deliver the completed project to the end user. This stage has
several steps such as implementation, maintenance, shipping and marketing the
product.
1.11 References
1. “Multimedia: Making It Work” by Tay Vaughan
2. “Multimedia in Practice: Technology and Applications” by Jeffcoate
Lesson 2 Text
Contents
In this lesson we will learn the different multimedia building blocks. Later we will
learn the significant features of text.
At the end of the lesson you will be able to:
i) List the different multimedia building blocks
ii) Enumerate the importance of text
iii) List the features of different font editing and designing tools
2.1 Introduction
All multimedia content consists of text in some form. Even menu text is accompanied by a single action such as a mouse click, keystroke or finger press on the monitor (in the case of a touch screen). Text in multimedia is used to communicate information to the user. Proper use of text and words in a multimedia presentation will help the content developer to communicate the idea and message to the user.
1. Text : Text and symbols are very important for communication in any medium. With the recent explosion of the Internet and World Wide Web, text has become more important than ever. The Web is built on HTML (HyperText Markup Language), originally designed to display simple text documents on computer screens, with occasional graphic images thrown in as illustrations.
2. Audio : Sound is perhaps the most sensuous element of multimedia. It can provide the
listening pleasure of music, the startling accent of special effects or the ambience
of a mood-setting background.
3. Images : Images, whether analog or digital, play a vital role in multimedia. An image may be a still picture, a painting or a photograph taken with a digital camera.
4. Animation : Animation is the rapid display of a sequence of images of 2-D
artwork or model positions in order to create an illusion of movement. It is an
optical illusion of motion due to the phenomenon of persistence of vision, and can
be created and demonstrated in a number of ways.
5. Video : Digital video has supplanted analog video as the method of choice for making video for multimedia use. Video in multimedia is used to portray real-time moving pictures in a multimedia project.
Words and symbols in any form, spoken or written, are the most common system
of communication. They deliver the most widely understood meaning to the greatest
number of people.
Most academic text, such as journals and e-magazines, is available in a web-browser-readable form.
A typeface is a family of graphic characters that usually includes many type sizes and styles. A font is a collection of characters of a single size and style belonging to a particular typeface family. Typical font styles are boldface and italic. Other style attributes, such as underlining and outlining of characters, may be added at the user's choice.
The size of a text is usually measured in points. One point is 1/72 of an inch, i.e. approximately 0.0139 inch. The size of a font does not exactly describe the height or width of its characters. This is because the x-height (the height of the lowercase character x) of two fonts may differ.
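The relationship between points and inches can be checked with a short, illustrative Python calculation (an added example, not part of the original lesson):

    POINTS_PER_INCH = 72

    def points_to_inches(points):
        # One point is 1/72 of an inch, roughly 0.0139 inch.
        return points / POINTS_PER_INCH

    print(points_to_inches(1))    # about 0.0139 inch
    print(points_to_inches(36))   # 0.5 inch
    print(points_to_inches(72))   # 1.0 inch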
Typefaces of fonts can be described in many ways, but the most common
characterization of a typeface is serif and sans serif. The serif is the little decoration at
the end of a letter stroke. Times, Times New Roman and Bookman are some fonts which come under the serif category. Arial, Optima and Verdana are some examples of sans serif fonts. Serif fonts are generally used for the body of the text for better readability, and sans serif fonts are generally used for headings. The following example shows the two categories of serif and sans serif fonts.
(The letter F rendered in a serif font and in a sans serif font.)
Any number of typefaces can be used in a single presentation; the practice of mixing many fonts on a single page is called ransom-note typography.
For small type, it is advisable to use the most legible font.
In large size headlines, the kerning (spacing between the letters) can be adjusted
In text blocks, the leading for the most pleasing line can be adjusted.
Drop caps and initial caps can be used to accent the words.
The different effects and colors of a font can be chosen in order to make the text
look in a distinct manner.
Anti-aliasing can be used to make text look gentle and blended.
For special attention to the text the words can be wrapped onto a sphere or bent
like a wave.
Meaningful words and phrases can be used for links and menu items.
In the case of text links (anchors) on web pages, the messages can be accented.
The most important text in a web page such as menu can be put in the top 320 pixels.
Fonts :
Apple and Microsoft announced a joint effort to develop a better and faster quadratic-curve outline font methodology, called TrueType. In addition to printing smooth characters on printers, TrueType can draw characters to a low-resolution (72 dpi or 96 dpi) monitor.
A byte, which consists of 8 bits, is the most commonly used building block for computer processing. ASCII uses only 7 bits to code its 128 characters; the 8th bit of the byte is unused. This extra bit allows another 128 characters to be encoded before the byte is used up, and computer systems today use these extra 128 values for an extended character set. The extended character set is commonly filled with ANSI (American National Standards Institute) standard characters, including frequently used symbols.
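As a hedged illustration of the 7-bit limit (a small Python sketch added for this lesson, not part of the original text), plain ASCII provides only 128 codes, and the eighth bit of the byte makes a further 128 codes available for the extended character set:

    # ASCII uses 7 bits, giving 2**7 = 128 character codes (0-127).
    print(2 ** 7)          # 128 codes in plain ASCII
    print(ord('A'))        # 65 -> the code for 'A' fits comfortably in 7 bits
    print(chr(65))         # 'A'

    # The eighth bit allows codes 128-255, used for the extended character set.
    print(2 ** 8 - 2 ** 7) # 128 extra codes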
Unicode
Unicode is a character-coding standard that provides a unique code for the characters of virtually all of the world's written languages and symbols (called scripts). A single script can work for tens or even hundreds of languages.
Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell are participating in the development of this standard, and Microsoft and Apple have incorporated Unicode into their operating systems.
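The practical difference between ASCII and Unicode text can be seen in a short, illustrative Python sketch (an added example, not from the original lesson): a character outside the 128 ASCII codes cannot be encoded as ASCII but is handled naturally by a Unicode encoding such as UTF-8.

    text = "café"                  # 'é' is not among the 128 ASCII characters

    print(text.encode("utf-8"))    # b'caf\xc3\xa9' -- Unicode text encoded as UTF-8

    try:
        text.encode("ascii")
    except UnicodeEncodeError as err:
        print("ASCII cannot represent this text:", err)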
There are several software tools that can be used to create customized fonts. These tools help a multimedia developer to communicate an idea or graphic feeling. Using this software, different typefaces can be created.
In some multimedia projects it may be necessary to create special characters. Using font editing tools it is possible to create special symbols and use them throughout the text.
Following is the list of software that can be used for editing and creating fonts:
Fontographer
Fontmonger
Cool 3D text
Special font editing tools can be used to make your own type so you can
communicate an idea or graphic feeling exactly. With these tools professional
typographers create distinct text and display faces.
1. Fontographer:
Fontographer, a Macromedia product, is a specialized graphics editor for both Macintosh and Windows platforms. You can use it to create PostScript, TrueType and bitmapped fonts for Macintosh and Windows.
2. Making Pretty Text:
To make your text look pretty you need a toolbox full of fonts and special
graphics applications that can stretch, shade, color and anti-alias your
words into real artwork. Pretty text can be found in bitmapped drawings
where characters have been tweaked, manipulated and blended into a
graphic image.
3. Hypermedia and Hypertext:
Multimedia – the combination of text, graphic, and audio elements into a single collection or presentation – becomes interactive multimedia when you give the user some control over what information is viewed and when it is viewed.
Create a new document in a word processor. Type a line of text in the word
processor and copy it five times and change each line into a different font. Finally
change the size of each line to 10 pt, 12pt, 14pt etc. Now distinguish each font
family and the typeface used in each font.
2.11 References
Lesson 3 Audio
Contents
In this lesson we will learn the basics of audio. We will learn how digital audio is prepared and embedded in a multimedia system.
3.1 Introduction
Acoustics is the branch of physics that studies sound. Sound pressure levels are measured in decibels (dB); a decibel measurement is actually the ratio between a chosen reference point on a logarithmic scale and the level that is actually experienced.
The multimedia application user can use sound right off the bat on both the
Macintosh and on a multimedia PC running Windows because beeps and warning sounds
are available as soon as the operating system is installed. On the Macintosh you can choose one of several sounds for the system alert. In Windows, system sounds are WAV files and they reside in the Windows\Media subdirectory.
There are still more choices of audio if Microsoft Office is installed. Windows makes use
of WAV files as the default file format for audio and Macintosh systems use SND as
default file format for audio.
Preparing digital audio files is fairly straightforward. If you have analog source materials – music or sound effects that you have recorded on analog media such as cassette tapes – the first step is to digitize the analog material by recording it onto computer-readable digital media.
It is necessary to focus on two crucial aspects of preparing digital audio files:
o Balancing the need for sound quality against your available RAM and
Hard disk resources.
o Setting proper recording levels to get a good, clean recording.
Remember that the sampling rate determines the frequency at which samples will
be drawn for the recording. Sampling at higher rates more accurately captures the high
frequency content of your sound. Audio resolution determines the accuracy with which a
sound can be digitized.
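The balance between quality and storage can be estimated with a simple formula: file size = sampling rate x (resolution in bits / 8) x number of channels x duration in seconds. The following small Python sketch (an illustrative calculation with example figures, not from the original text) compares CD-quality stereo with a low-quality mono recording:

    def audio_file_size(sample_rate_hz, bits_per_sample, channels, seconds):
        # Approximate size in bytes of an uncompressed digital audio recording.
        return sample_rate_hz * (bits_per_sample // 8) * channels * seconds

    # One minute of CD-quality stereo: 44,100 Hz, 16 bits, 2 channels
    print(audio_file_size(44100, 16, 2, 60))   # 10,584,000 bytes, about 10 MB

    # One minute of low-quality mono: 11,025 Hz, 8 bits, 1 channel
    print(audio_file_size(11025, 8, 1, 60))    # 661,500 bytes, about 0.6 MB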
Once a recording has been made, it will almost certainly need to be edited. The basic sound editing operations that most multimedia producers need are described in the paragraphs that follow.
1. Multiple Tasks: Able to edit and combine multiple tracks and then merge the
tracks and export them in a final mix to a single audio file.
2. Trimming: Removing dead air or blank space from the front of a recording and any unnecessary extra time off the end is your first sound editing task.
3. Splicing and Assembly: Using the same tools mentioned for trimming, you will
probably want to remove the extraneous noises that inevitably creep into
recording.
4. Volume Adjustments: If you are trying to assemble ten different recordings into a single track, there is little chance that all the segments have the same volume.
5. Format Conversion: In some cases your digital audio editing software might
read a format different from that read by your presentation or authoring program.
6. Resampling or downsampling: If you have recorded and edited your sounds at 16-bit sampling rates but will deliver them at lower rates, you must resample or downsample the file (see the sketch after this list).
7. Equalization: Some programs offer digital equalization capabilities that allow you to modify a recording's frequency content so that it sounds brighter or darker.
8. Digital Signal Processing: Some programs allow you to process the signal with
reverberation, multitap delay, and other special effects using DSP routines.
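As a rough sketch of the downsampling mentioned in item 6 (illustrative only; real audio tools apply a low-pass filter before discarding samples to avoid aliasing), halving a 44.1 kHz recording to 22.05 kHz can be thought of as keeping every second sample:

    def downsample_by_two(samples):
        # Naive decimation: keep every second sample.
        # Production software low-pass filters the signal first to prevent aliasing.
        return samples[::2]

    recording_44k = [0, 3, 7, 4, -2, -6, -3, 1]   # a tiny pretend 44.1 kHz waveform
    recording_22k = downsample_by_two(recording_44k)
    print(recording_22k)                          # [0, 7, -2, -3]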
Creating your own original score can be one of the most creative and rewarding
aspects of building a multimedia project, and MIDI (Musical Instrument Digital
Interface) is the quickest, easiest and most flexible tool for this task.
The process of creating MIDI music is quite different from digitizing existing
audio. To make MIDI scores, however you will need sequencer software and a sound
synthesizer.
The MIDI keyboard is also useful for simplifying the creation of musical scores. An
advantage of structured data such as MIDI is the ease with which the music director can
edit the data.
MIDI is preferable to digital audio in the following cases:
When digital audio will not work due to memory constraints and higher processing power requirements
When there is a high-quality MIDI source
When there is no requirement for spoken dialogue
A file format determines the application that is to be used for opening a file.
Following is the list of different file formats and the software that can be used for
opening a specific file.
The method for digitally encoding the high-quality stereo of the consumer CD music market is an international standard, ISO 10149. This is also called the Red Book standard.
The developers of this standard claim that the digital audio sample size and sample rate of Red Book audio allow accurate reproduction of all the sounds that humans can hear. The Red Book standard recommends audio recorded at a sample size of 16 bits and a sampling rate of 44.1 kHz.
Software such as Toast and CD-Creator from Adaptec can translate the digital files of Red Book Audio format on consumer compact discs directly into a digital sound editing file, or decompress MP3 files into CD-Audio. There are several tools available for recording audio. Following is a list of different software that can be used for recording and editing audio:
The Red Book standard recommends audio recorded at a sample size of 16 bits and a sampling rate of 44.1 kHz. The recording is done with 2 channels (stereo mode).
3.13 References
Lesson 4 Images
Contents
In this lesson we will learn how images are captured and incorporated into a
multimedia presentation. Different image file formats and the different color
representations have been discussed in this lesson.
At the end of this lesson the learner will be able to
i) Create his own image
ii) Describe the use of colors and palettes in multimedia
iii) Describe the capabilities and limitations of vector images.
iv) Use clip arts in the multimedia presentations
4.1 Introduction
Still images are an important element of a multimedia project or a web site. In order to make a multimedia presentation look elegant and complete, it is necessary to spend ample time designing the graphics and the layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a multimedia project.
There are different kinds of image formats in the literature. We shall consider the
image format that comes out of an image frame grabber, i.e., the captured image format,
and the format when images are stored, i.e., the stored image format.
The image format is specified by two main parameters: spatial resolution, which is specified as pixels x pixels (e.g. 640x480), and color encoding, which is specified in bits per pixel. Both parameter values depend on the hardware and software for input/output of images.
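Together, spatial resolution and color encoding determine how much storage an uncompressed image needs. A short, illustrative Python calculation (example figures only, added for this lesson):

    def bitmap_size_bytes(width_px, height_px, bits_per_pixel):
        # Uncompressed bitmap size = width x height x bits per pixel / 8.
        return width_px * height_px * bits_per_pixel // 8

    print(bitmap_size_bytes(640, 480, 8))    # 307,200 bytes (256-color image)
    print(bitmap_size_bytes(640, 480, 24))   # 921,600 bytes (24-bit true color)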
4.3 Bitmaps
A bitmap is a simple information matrix describing the individual dots that are the
smallest elements of resolution on a computer screen or other display or printing device.
A one-dimensional matrix is required for monochrome (black and white); greater depth (more bits of information) is required to describe the more than 16 million colors the picture elements may have, as illustrated in the following figure. The state of all the pixels on a
computer screen make up the image seen by the viewer, whether in combinations of
black and white or colored pixels in a line of text, a photograph-like picture, or a simple
background pattern.
Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many
creative ways.
Clip Art
A clip art collection may contain a random assortment of images, or it may contain a
series of graphics, photographs, sound, and video related to a single topic. For
example, Corel, Micrografx, and Fractal Design bundle extensive clip art collection
with their image-editing software.
Multiple Monitors
When developing multimedia, it is helpful to have more than one monitor, or a single
high-resolution monitor with lots of screen real estate, hooked up to your computer.
In this way, you can display the full-screen working area of your project or
presentation and still have space to put your tools and other menus. This is
particularly important in an authoring system such as Macromedia Director, where the
edits and changes you make in one window are immediately visible in the presentation
window-provided the presentation window is not obscured by your editing tools.
Bitmaps are used for photo-realistic images and for complex drawing requiring
fine detail. Vector-drawn objects are used for lines, boxes, circles, polygons, and other
graphic shapes that can be mathematically expressed in angles, coordinates, and
distances. A drawn object can be filled with color and patterns, and you can select it as a
single object. Typically, image files are compressed to save memory and disk space;
many image formats already use compression within the file itself – for example, GIF,
JPEG, and PNG.
Still images may be the most important element of your multimedia project. If
you are designing multimedia by yourself, put yourself in the role of graphic artist and
layout designer.
The abilities and features of image-editing programs for both the Macintosh and Windows range from simple to complex. The Macintosh does not ship with a painting tool, and Windows provides only the rudimentary Paint (see the following figure), so you will need to acquire this very important software separately – often bitmap editing or painting programs come as part of a bundle when you purchase your computer, monitor, or scanner.
The image that is seen on a computer monitor is a digital bitmap stored in video memory, updated about every 1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia project, it may often be necessary to capture and store an image directly from the screen. It is possible to use the Prt Scr key on the keyboard to capture an image.
Scanning Images
After scanning through countless clip art collections, you may still not find the unusual background you want for a screen about gardening. Sometimes when you search for something too hard, you don't realize that it's right in front of your face. You can scan an image of your own; open the scan in an image-editing program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be afraid to try strange combinations – sometimes mistakes yield the most intriguing results.
Most multimedia authoring systems provide for use of vector-drawn objects such
as lines, rectangles, ovals, polygons, and text.
Graphic artists designing for print media use vector-drawn objects because the
same mathematics that put a rectangle on your screen can also place that rectangle on
paper without jaggies. This requires the higher resolution of the printer, using a page
description language such as PostScript.
Programs for 3-D animation also use vector-drawn graphics; for example, to compute the various changes of position, rotation, and shading of light required to spin an extruded object.
Vector-drawn objects are described and drawn to the computer screen using a fraction
of the memory space required to describe and store the same object in bitmap form. A
vector is a line that is described by the location of its two endpoints. A simple
rectangle, for example, might be defined as follows:
RECT 0,0,200,200
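The memory saving of a vector description over a bitmap can be illustrated with a small, hedged comparison (example figures only): the RECT description above stores just a few coordinate values, while a bitmap of the same 200 x 200 area must store every pixel.

    # Vector description: only the corner coordinates of the rectangle
    vector_rect = (0, 0, 200, 200)          # a handful of bytes

    # Bitmap description of the same area at 24 bits per pixel
    bitmap_bytes = 200 * 200 * 24 // 8      # 120,000 bytes
    print(bitmap_bytes)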
4.6 Color
Color is a vital component of multimedia. Management of color is both a subjective and
a technical exercise. Picking the right colors and combinations of colors for your project
can involve many tries until you feel the result is right.
The letters of the mnemonic ROY G. BIV, learned by many of us to remember the
colors of the rainbow, are the ascending frequencies of the visible light spectrum: red,
orange, yellow, green, blue, indigo, and violet. Ultraviolet light, on the other hand, is
beyond the higher end of the visible spectrum and can be damaging to humans.
The color white is a noisy mixture of all the color frequencies in the visible spectrum.
The cornea of the eye acts as a lens to focus light rays onto the retina. The light rays
stimulate many thousands of specialized nerves called rods and cones that cover the
surface of the retina. The eye can differentiate among millions of colors, or hues,
consisting of combinations of red, green, and blue.
Additive Color
In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB). This is the process used for a TV or computer monitor.
Subtractive Color
In the subtractive color method, a new color is created by combining colored media such as paints or inks that absorb (or subtract) some parts of the color spectrum of light and reflect the others back to the eye. Subtractive color is the process used to create color in printing. The printed page is made up of tiny halftone dots of three primary colors: cyan, magenta and yellow (CMY).
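The relationship between the additive and subtractive primaries is often approximated by treating cyan, magenta and yellow as the complements of red, green and blue. A minimal Python sketch of this common approximation (real printing workflows use more sophisticated color management):

    def rgb_to_cmy(r, g, b):
        # Each subtractive primary is approximately the complement
        # of the corresponding additive primary (values 0-255).
        return 255 - r, 255 - g, 255 - b

    print(rgb_to_cmy(255, 0, 0))      # pure red -> (0, 255, 255): magenta + yellow inks
    print(rgb_to_cmy(255, 255, 255))  # white    -> (0, 0, 0): no ink at all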
Format                         Extension
Microsoft Windows DIB          .bmp .dib .rle
Microsoft Palette              .pal
AutoCAD format 2D              .dxf
JPEG                           .jpg
Windows Metafile               .wmf
Portable Network Graphics      .png
CompuServe GIF                 .gif
Apple Macintosh PICT           .pict .pic .pct
Competent, computer literate skills in graphic art and design are vital to the
success of a multimedia project.
A digital image is represented by a matrix of numeric values each representing
a quantized intensity value.
A bitmap is a simple information matrix describing the individual dots that are
the smallest elements of resolution on a computer screen or other display or
printing device
In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB).
Subtractive colors are used in printers and additive color concepts are used in
monitors and television.
4.11 References
Lesson 5 Animation and Video
Contents
In this lesson we will learn the basics of animation and video. At the end of this
lesson the learner will be able to
i) List the different animation techniques.
ii) Enumerate the software used for animation.
iii) List the different broadcasting standards.
iv) Describe the basics of video recording and how they relate to multimedia
production.
v) Have a knowledge on different video formats.
5.1 Introduction
Animation makes static presentations come alive. It is visual change over time
and can add great power to our multimedia projects. Carefully planned, well-executed
video clips can make a dramatic difference in a multimedia project. Animation is created
from drawn pictures and video is created using real time visuals.
Television video builds entire frames or pictures many times every second; the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move or translate it a few pixels for each frame.
When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is
complicated, you may wish to create a written script with a list of activities and required
objects. Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when
you are experimenting and testing. Finally, post-process your animation, doing any
special rendering and adding sound effects.
The term cel derives from the clear celluloid sheets that were used for drawing
each frame, which have been replaced today by acetate or plastic. Cels of famous
animated cartoons have become sought-after, suitable-for-framing collector’s
items.
Cel animation artwork begins with keyframes (the first and last frame of an
action). For example, when an animated figure of a man walks across the screen,
he balances the weight of his entire body on one foot and then the other in a series
of falls and recoveries, with the opposite foot and leg catching up to support the
body.
Computer animation programs typically employ the same logic and procedural
concepts as cel animation, using layer, keyframe, and tweening techniques, and
even borrowing from the vocabulary of classic animators. On the computer, paint
is most often filled or drawn with tools using features such as gradients and anti-
aliasing. The word inks, in computer animation terminology, usually means
special methods for computing RGB pixel values, providing edge detection, and
layering so that images can blend or otherwise mix their colors to produce special
transparencies, inversions, and effects.
5.3.3 Kinematics
It is the study of the movement and motion of structures that have joints,
such as a walking man.
Inverse kinematics, available in high-end 3D programs, is the process by which you link objects such as hands to arms and define their relationships and limits.
Once those relationships are set you can drag these parts around and let the
computer calculate the result.
5.3.4 Morphing
Morphing is an effect in which one image gradually transforms into another. In one example, the morphed images were built at a rate of 8 frames per second, with each transition taking a total of 4 seconds.
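At 8 frames per second over a 4-second transition, a morph therefore needs 32 in-between images. A very simplified Python sketch of the idea (a real morph also warps matching key points rather than just cross-fading pixel values):

    def cross_fade(pixel_a, pixel_b, t):
        # Blend two grayscale pixel values; t runs from 0.0 (image A) to 1.0 (image B).
        return round((1 - t) * pixel_a + t * pixel_b)

    frames_per_second = 8
    duration_seconds = 4
    total_frames = frames_per_second * duration_seconds
    print(total_frames)   # 32 in-between frames

    # One pixel's value across the whole transition
    print([cross_fade(10, 200, i / (total_frames - 1)) for i in range(total_frames)])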
Some products that provide morphing features are as follows:
o Black Belt’s Easy Morph and WinImages,
o Human Software’s Squizz
o Valis Group’s Flo , MetaFlo, and MovieFlo.
Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators.
3D Studio Max
Flash
AnimationPro
5.5 Video
Digital video has supplanted analog video as the method of choice for making
video for multimedia use. While broadcast stations and professional production and post-
production houses remain greatly invested in analog video hardware (according to Sony,
there are more than 350,000 Betacam SP devices in use today), digital video gear
produces excellent finished products at a fraction of the cost of analog. A digital
camcorder directly connected to a computer workstation eliminates the image-degrading
analog-to-digital conversion step typically performed by expensive video capture cards,
and brings the power of nonlinear video editing and production to everyday users.
Four broadcast and video standards and recording formats are commonly in use
around the world: NTSC, PAL, SECAM, and HDTV. Because these standards and
formats are not easily interchangeable, it is important to know where your multimedia
project will be used.
NTSC
The United States, Japan, and many other countries use a system for broadcasting
and displaying video that is based upon the specifications set forth by the 1952
National Television Standards Committee. These standards define a method for
encoding information into the electronic signal that ultimately creates a television
picture. As specified by the NTSC standard, a single frame of video is made up
of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated
picture tube every 1/30th of a second by a fast-moving electron beam.
PAL
The Phase Alternate Line (PAL) system is used in the United Kingdom, Europe,
Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a frame rate of 25 frames per second.
SECAM
The Sequential Color and Memory (SECAM) system is used in France, Russia, and a few other countries. Although SECAM is a 625-line, 50 Hz system, it differs
greatly from both the NTSC and the PAL color systems in its basic technology
and broadcast method.
HDTV
To add full-screen, full-motion video to your multimedia project, you will need to invest
in specialized hardware and software or purchase the services of a professional video
production studio. In many cases, a professional studio will also provide editing tools
and post-production capabilities that you cannot duplicate with your Macintosh or PC.
Video Tips
A useful tool easily implemented in most digital video editing applications is “blue screen,” “Ultimatte,” or “chroma key” editing. Blue screen is a popular technique for
making multimedia titles because expensive sets are not required. Incredible
backgrounds can be generated using 3-D modeling and graphic software, and one or more
actors, vehicles, or other objects can be neatly layered onto that background.
Applications such as VideoShop, Premiere, Final Cut Pro, and iMovie provide this
capability.
Recording Formats
S-VHS video
In S-VHS video, color and luminance information are kept on two separate tracks. The result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this would not be the best choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax. Panasonic developed its own standard based on a similar technology, called “MII.” Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called “Digital Betacam.”
Digital Video
Full integration of motion video on computers eliminates the analog television form of
video from the multimedia delivery platform. If a video clip is stored as data on a hard
disk, CD-ROM, or other mass-storage device, that clip can be played back on the
computer’s monitor without overlay boards, videodisk players, or second monitors. This
playback of digital video is accomplished using software architecture such as QuickTime or AVI. As a multimedia producer or developer, you may need to convert video source material from its still-common analog form (videotape) to a digital form manageable by the end user's computer system. So an understanding of analog video and some special hardware must remain in your multimedia toolbox.
Analog to digital conversion of video can be accomplished using the video overlay
hardware described above, or it can be delivered direct to disk using FireWire cables. To
repetitively digitize a full-screen color video image every 1/30 second and store it to disk or RAM severely taxes both Macintosh and PC processing capabilities – special hardware, compression firmware, and massive amounts of digital storage space are required.
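The scale of the problem can be estimated with a back-of-the-envelope Python calculation (illustrative figures only): a full-screen, 24-bit color frame captured 30 times a second produces a very large data stream, which is why special hardware and compression are needed.

    def raw_video_rate(width_px, height_px, bytes_per_pixel, frames_per_second):
        # Uncompressed video data rate = frame size x frames per second.
        return width_px * height_px * bytes_per_pixel * frames_per_second

    rate = raw_video_rate(640, 480, 3, 30)
    print(rate)   # 27,648,000 bytes per second, roughly 26 MB every second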
MPEG
The MPEG standard has been developed by the Moving Picture Experts Group, a
working group convened by the International Standards Organization (ISO) and the
International Electro-technical Commission (IEC) to create standards for digital
representation of moving pictures and associated audio and other data. MPEG1 and
MPEG2 are the current standards. Using MPEG1, you can deliver 1.2 Mbps of video and
250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG2, a
completely different system from MPEG1, requires higher data rates (3 to 15 Mbps) but
delivers higher image resolution, picture quality, interlaced video formats,
multiresolution scalability, and multichannel audio features.
DVI/Indeo
Two levels of compression and decompression are provided by DVI: Production Level
Video (PLV) and Real Time Video (RTV). PLV and RTV both use variable compression
rates. DVI’s algorithms can compress video images at ratios between 80:1 and 160:1.
DVI will play back video in full-frame size and in full color at 30 frames per second.
CD-ROMs provide an excellent distribution medium for computer-based video: they are
inexpensive to mass produce, and they can store great quantities of information. CD-
ROM players offer slow data transfer rates, but adequate video transfer can be achieved
by taking care to properly prepare your digital video files.
Limit the amount of synchronization required between the video and audio. With
Microsoft’s AVI files, the audio and video data are already interleaved, so this is
not a necessity, but with QuickTime files, you should “flatten” your movie.
Flattening means you interleave the audio and video segments together.
The size of the video window and the frame rate you specify dramatically affect
performance. In QuickTime, 20 frames per second played in a 160X120-pixel
window is equivalent to playing 10 frames per second in a 320X240 window.
The more data that has to be decompressed and transferred from the CD-ROM to
the screen, the slower the playback.
In this lesson we have learnt the use of animation and video in multimedia presentation.
Following points have been discussed in this lesson :
Animation is created from drawn pictures and video is created using real time
visuals.
Animation is possible because of a biological phenomenon known as persistence
of vision
The different techniques used in animation are cel animation, computer
animation, kinematics and morphing.
Four broadcast and video standards and recording formats are commonly in use
around the world: NTSC, PAL, SECAM, and HDTV.
Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG,
Cinepak, Sorenson, ClearVideo, RealVideo, and VDOwave are available to
compress digital video information.
1. Choose animation software available for windows. List its name and its
capabilities. Find whether the software is capable of handling layers? Keyframes?
Tweening? Morphing? Check whether the software allows cross platform playback
facilities.
2. Locate three web sites that offer streaming video clips. Check the file formats and
the duration, size of video. Make a list of software that can play these video clips.
5.12 References
UNIT - II
Contents
In this lesson we will learn about the multimedia hardware required for
multimedia production. At the end of the lesson the learner will be able to identify the
proper hardware required for connecting various devices.
6.1 Introduction
The hardware required for multimedia can be classified into five categories. They are:
1. Connecting Devices
2. Input devices
3. Output devices
4. Storage devices and
5. Communicating devices.
Among the many pieces of hardware – computers, monitors, disk drives, video projectors, light valves, players, VCRs, mixers and loudspeakers – there are plenty of wires connecting these devices. The data transfer speed the connecting devices provide determines how quickly the multimedia content can be delivered.
The most popularly used connecting devices are:
SCSI
MCI
IDE
USB
6.4 SCSI
SCSI (Small Computer System Interface) is a set of standards for physically
connecting and transferring data between computers and peripheral devices. The SCSI
standards define commands, protocols, electrical and optical interfaces. SCSI is most
commonly used for hard disks and tape drives, but it can connect a wide range of other
devices, including scanners, and optical drives (CD, DVD, etc.).
Since its standardization in 1986, SCSI has been commonly used in the Apple
Macintosh and Sun Microsystems computer lines and PC server systems. SCSI has never
been popular in the low-priced IBM PC world, owing to the lower cost and adequate
performance of its ATA hard disk standard. SCSI drives and even SCSI RAIDs became
common in PC workstations for video or audio production, but the appearance of large
cheap SATA drives means that SATA is rapidly taking over this market.
SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI (also called SPI), which uses a parallel electrical bus design. The traditional SPI design is making a transition to Serial Attached SCSI, which switches to a serial point-to-point design but retains other aspects of the technology. iSCSI drops the physical transport entirely and instead carries SCSI commands over TCP/IP networks.
Internal SCSI cables are usually ribbon cables that have multiple 68 pin or 50 pin
connectors. External cables are shielded and only have connectors on the ends.
iSCSI
iSCSI preserves the basic SCSI paradigm, especially the command set, almost
unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3
over TCP/IP, as displacing Fibre Channel in the long run, arguing that Ethernet
data rates are currently increasing faster than data rates for Fibre Channel and
similar disk-attachment technologies. iSCSI could thus address both the low-end
and high-end markets with a single commodity-based technology.
Serial SCSI
Four recent versions of SCSI, SSA, FC-AL, FireWire, and Serial Attached SCSI
(SAS) break from the traditional parallel SCSI standards and perform data
transfer via serial communications. Although much of the documentation of SCSI
talks about the parallel interface, most contemporary development effort is on
serial SCSI. Serial SCSI has a number of advantages over parallel SCSI—faster
data rates, hot swapping, and improved fault isolation. The primary reason for the
shift to serial interfaces is the clock skew issue of high speed parallel interfaces,
which makes the faster variants of parallel SCSI susceptible to problems caused
by cabling and termination. Serial SCSI devices are more expensive than the
equivalent parallel SCSI devices.
At the end of the command sequence the target returns a Status Code byte which
is usually 00h for success, 02h for an error (called a Check Condition), or 08h for busy.
When the target returns a Check Condition in response to a command, the initiator
usually then issues a SCSI Request Sense command in order to obtain a Key Code
Qualifier (KCQ) from the target. The Check Condition and Request Sense sequence
involves a special SCSI protocol called a Contingent Allegiance Condition.
Test unit ready: Queries device to see if it is ready for data transfers (disk spun
up, media loaded, etc.).
Inquiry: Returns basic device information, also used to "ping" the device since it
does not modify sense data.
Request sense: Returns any error codes from the previous command that returned
an error status.
Send Diagnostic and Receive Diagnostic Results: runs a simple self-test or a
specialized test defined in a diagnostic page.
Start/Stop unit: Spins disks up and down, load/unload media.
Read capacity: Returns storage capacity.
Format unit: Sets all sectors to all zeroes, also allocates logical blocks avoiding
defective sectors.
Read Format Capacities: Read the capacity of the sectors.
Read (four variants): Reads data from a device.
Write (four variants): Writes data to a device.
Log sense: Returns current information from log pages.
Mode sense: Returns current device parameters from mode pages.
Mode select: Sets device parameters in a mode page.
Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN).
Simple devices have just one LUN, more complex devices may have multiple LUNs. A
"direct access" (i.e. disk type) storage device consists of a number of logical blocks,
usually referred to by the term Logical Block Address (LBA). A typical LBA equates to
512 bytes of storage. The usage of LBAs has evolved over time and so four different
command variants are provided for reading and writing data. The Read(6) and Write(6)
commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long,
Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus
various other parameter options.
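The practical consequence of the different LBA field widths can be checked with a small calculation (assuming the typical 512-byte logical block mentioned above):

    BLOCK_SIZE = 512                       # bytes per logical block (typical)

    max_blocks_read6 = 2 ** 21             # Read(6)/Write(6): 21-bit LBA field
    max_blocks_read10 = 2 ** 32            # Read(10)/Write(10): 32-bit LBA field

    print(max_blocks_read6 * BLOCK_SIZE)   # 1,073,741,824 bytes (1 GiB addressable)
    print(max_blocks_read10 * BLOCK_SIZE)  # 2,199,023,255,552 bytes (2 TiB addressable)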
A "sequential access" (i.e. tape-type) device does not have a specific capacity because
it typically depends on the length of the tape, which is not known exactly. Reads and
writes on a sequential access device happen at the current position, not at a specific LBA.
The block size on sequential access devices can either be fixed or variable, depending on
the specific device. (Earlier devices, such as 9-track tape, tended to be fixed block, while
later types, such as DAT, almost always supported variable block sizes.)
On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a
"SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range 0–15 on
a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the
initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the
adapter sets the SCSI ID; for example, the adapter often contains a BIOS program that
runs when the computer boots up and that program has menus that let the operator choose
the SCSI ID of the host adapter. Alternatively, the host adapter may come with software
that must be installed on the host computer to configure the SCSI ID. The traditional
SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration
(even on a 16 bit bus).
The SCSI ID of a device in a drive enclosure that has a backplane is set either by
jumpers or by the slot in the enclosure the device is installed into, depending on the
model of the enclosure. In the latter case, each slot on the enclosure's back plane delivers
control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a
backplane often has a switch for each drive to choose the drive's SCSI ID. The enclosure
is packaged with connectors that must be plugged into the drive where the jumpers are
typically located; the switch emulates the necessary jumpers. While there is no standard
that makes this work, drive designers typically set up their jumper headers in a consistent
format that matches the way that these switches implement.
Note that a SCSI target device (which can be called a "physical unit") is often
divided into smaller "logical units." For example, a high-end disk subsystem may be a
single SCSI device but contain dozens of individual disk drives, each of which is a
logical unit (more commonly, it is not that simple—virtual disk devices are generated by
the subsystem based on the storage in those physical drives, and each virtual disk device
is a logical unit). The SCSI ID, WWNN, etc. in this case identifies the whole subsystem,
and a second number, the logical unit number (LUN) identifies a disk device within the
subsystem.
It is quite common, though incorrect, to refer to the logical unit itself as a "LUN."
Accordingly, the actual LUN may be called a "LUN number" or "LUN id".
Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside for the floppy drive, while SCSI ID 3 is typically for a CD-ROM drive.
6.5 MCI
The Media Control Interface, MCI in short, is an aging API for controlling multimedia peripherals connected to a Microsoft Windows or OS/2 computer. MCI makes it very simple to write a program which can play a wide variety of media files and even to record sound by just passing commands as strings. It uses relations described in the Windows registry or in the [MCI] section of the file SYSTEM.INI.
The MCI interface is a high-level API developed by Microsoft and IBM for
controlling multimedia devices, such as CD-ROM players and audio controllers.
The advantage is that MCI commands can be transmitted both from the programming language and from a scripting language (OpenScript, Lingo). For a number of years, the MCI interface has been phased out in favor of the DirectX APIs.
AVIVideo
CDAudio
Sequencer
WaveAudio
Each of these so-called MCI devices can play a certain type of file; for example, AVIVideo plays .avi files and CDAudio plays CD tracks, among others. Other MCI devices have also been made available over time.
To play a type of media, it needs to be initialized correctly using MCI commands. These
commands are subdivided into categories:
System Commands
Required Commands
Basic Commands
Extended Commands
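As a hedged illustration of the string-command style (Windows-only; it relies on the mciSendString function exported by winmm.dll and assumes a file named sample.wav exists in the current directory), a short Python sketch:

    import ctypes   # Windows-only example using the MCI string interface in winmm.dll

    def mci(command):
        # Send one MCI command string and return any textual reply.
        reply = ctypes.create_unicode_buffer(255)
        ctypes.windll.winmm.mciSendStringW(command, reply, 254, 0)
        return reply.value

    mci("open sample.wav type waveaudio alias clip")   # initialize the device
    mci("play clip wait")                              # play and wait until finished
    mci("close clip")                                  # release the device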
6.6 IDE
IDE was created as a way to standardize the use of hard drives in computers. The
basic concept behind IDE is that the hard drive and the controller should be combined.
The controller is a small circuit board with chips that provide guidance as to exactly how
the hard drive stores and accesses data. Most controllers also include some memory that
acts as a buffer to enhance hard drive performance.
Before IDE, controllers and hard drives were separate and often proprietary. In other
words, a controller from one manufacturer might not work with a hard drive from another
manufacturer. The distance between the controller and the hard drive could result in poor
signal quality and affect performance. Obviously, this caused much frustration for
computer users.
IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of
the wires laid flat next to each other instead of bunched or wrapped together in a bundle.
IDE ribbon cables have either 40 or 80 wires. There is a connector at each end of the
cable and another one about two-thirds of the distance from the motherboard connector.
This cable cannot exceed 18 inches (46 cm) in total length (12 inches from first to second
connector, and 6 inches from second to third) to maintain signal integrity. The three
connectors are typically different colors and attach to specific items:
With the introduction of Serial ATA around 2003, conventional ATA was
retroactively renamed to Parallel ATA (P-ATA), referring to the method in which data
travels over wires in this interface.
6.7 USB
Universal Serial Bus (USB) is a serial bus standard to interface devices. A major
component in the legacy-free PC, USB was designed to allow peripherals to be connected
using a single standardized interface socket and to improve plug-and-play capabilities by
allowing devices to be connected and disconnected without rebooting the computer (hot
swapping). Other convenient features include providing power to low-consumption
devices without the need for an external power supply and allowing many devices to be
used without requiring manufacturer specific, individual device drivers to be installed.
USB is intended to help retire all legacy varieties of serial and parallel ports. USB
can connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads
and joysticks, scanners, digital cameras, printers, personal media players, and flash
drives. For many of those devices USB has become the standard connection method.
USB is also used extensively to connect non-networked printers; USB simplifies
connecting several printers to one computer. USB was originally designed for personal
computers, but it has become commonplace on other devices such as PDAs and video
game consoles.
USB devices are linked in series through hubs. There always exists one hub
known as the root hub, which is built into the host controller. So-called "sharing hubs" also exist, allowing multiple computers to access the same peripheral device(s), either switching access between PCs automatically or manually. They are popular in small-office environments. In network terms they converge rather than diverge branches.
A single physical USB device may consist of several logical sub-devices that are
referred to as device functions, because each individual device may provide several
functions, such as a webcam (video device function) with a built-in microphone (audio
device function).
In this lesson we have learnt the different hardware required for multimedia
production. We have discussed the following points related with connecting devices used
in a multimedia computer.
1. A few types of SCSI are Ultra SCSI, Wide SCSI, SCSI-1, SCSI-2 and SCSI-3
6.11 References
Contents
7.0 Aims and Objectives
7.1 Introduction
7.2 Memory and Storage Devices
7.3 Random Access Memory
7.4 Read Only Memory
7.5 Floppy and Hard Disks
7.6 Zip, Jaz, SyQuest, and Optical storage devices
7.7 Digital Versatile Disk
7.8 CD ROM players
7.9 CD Recorders
7.10 Video Disc Players
7.11 Let us sum up
7.12 Model answers to “Check your progress”
7.13 References
The aim of this lesson is to educate the learners about the second category of
multimedia hardware which is the storage device. At the end of this lesson the learner
will have an in-depth knowledge on the storage devices and their specifications.
7.1 Introduction
Electronic data storage is storage which requires electrical power to store and
retrieve that data. Most storage devices that do not require visual optics to read data fall
into this category. Electronic data may be stored in either an analog or digital signal
format. This type of data is considered to be electronically encoded data, whether or not it
is electronically stored. Most electronic data storage media (including some forms of
computer storage) are considered permanent (non-volatile) storage, that is, the data will
remain stored when power is removed from the device. In contrast, information held in volatile memory, such as RAM, is lost when power is removed.
RAM is the main memory where the operating system is initially loaded and the application programs are loaded at a later stage. RAM is volatile in nature, and every program that is quit or exited is removed from the RAM. The greater the RAM capacity, the better the overall performance.
In spite of all the marketing hype about processor speed, this speed is ineffective if
not accompanied by sufficient RAM. A fast processor without enough RAM may waste
processor cycles while it swaps needed portions of program code into and out of memory.
In some cases, increasing available RAM may show more performance improvement on your system than upgrading the processor chip.
In multimedia production you may want to open several tools – image editing, sound editing, animation, and your authoring system – all at the same time to facilitate faster copying/pasting and then testing in your authoring software. Although 8MB is the minimum under the MPC standard, much more is required as of now.
Read-only memory is not volatile: unlike RAM, when you turn off the power to a ROM chip, it will not forget, or lose, its memory. ROM is typically used in computers to hold the small BIOS program that initially boots up the computer, and it is used in printers to hold built-in fonts. Programmable ROMs (called EPROMs) allow changes to be made that are not forgotten. A new and inexpensive technology, optical read-only memory (OROM), is provided in proprietary data cards using patented holographic storage. Typically, OROMs offer 128MB of storage, have no moving parts, and use only about 200 milliwatts of power, making them ideal for handheld, battery-operated devices.
Adequate storage space for the production environment can be provided by large-
capacity hard disks; a server-mounted disk on a network; Zip, Jaz, or SyQuest removable
cartridges; optical media; CD-R (compact disc-recordable) discs; tape; floppy disks;
banks of special memory devices; or any combination of the above.
Removable media (floppy disks, compact or optical discs, and cartridges) typically
fit into a letter-sized mailer for overnight courier service. One or many disks may be
required for storage and archiving each project, and it is necessary to plan for backups
kept off-site.
Floppy disks and hard disks are mass-storage devices for binary data – data that can be easily read by a computer. Hard disks can contain much more information than floppy disks and can operate at far greater data transfer rates. In the scale of things, floppies are, however, no longer “mass-storage” devices.
A floppy disk is made of flexible Mylar plastic coated with a very thin layer of
special magnetic material. A hard disk is actually a stack of hard metal platters coated
with magnetically sensitive material, with a series of recording heads or sensors that
hover a hairbreadth above the fast-spinning surface, magnetizing or demagnetizing spots
along formatted tracks using technology similar to that used by floppy disks and audio
and video tape recording. Hard disks are the most common mass-storage device used on
computers, and for making multimedia, it is necessary to have one or more large-capacity
hard disk drives.
As multimedia has reached consumer desktops, makers of hard disks have been
challenged to build smaller profile, larger-capacity, faster, and less-expensive hard disks.
In 1994, hard disk manufacturers sold nearly 70 million units; in 1995, more than 80 million units. And prices have dropped a full order of magnitude in a matter of months.
By 1998, street prices for 4GB drives (IDE) were less than $200. As network and Internet
servers increase the demand for centralized data storage requiring terabytes (1 trillion bytes), hard disks will be configured into fail-proof redundant arrays offering built-in protection against crashes.
SyQuest’s 44MB removable cartridges have been the most widely used portable
medium among multimedia developers and professionals, but Iomega’s inexpensive Zip
drives with their likewise inexpensive 100MB cartridges have significantly penetrated
SyQuest’s market share for removable media. Iomega’s Jaz cartridges provide a gigabyte
of removable storage media and have fast enough transfer rates for audio and video
development. Pinnacle Micro, Yamaha, Sony, Philips, and others offer CD-R “burners”
for making write-once compact discs, and some double as quad-speed players. As blank
CD-R discs become available for less than a dollar each, this write-once media competes
as a distribution vehicle. CD-R is described in greater detail a little later in the chapter.
Magneto-optical (MO) drives use a high-power laser to heat tiny spots on the
metal oxide coating of the disk. While the spot is hot, a magnet aligns the oxides to
provide a 0 or 1 (on or off) orientation. Like SyQuests and other Winchester hard disks,
this is rewritable technology, because the spots can be repeatedly heated and aligned.
Moreover, this media is normally not affected by stray magnetism (it needs both heat and
magnetism to make changes), so these disks are particularly suitable for archiving data.
The data transfer rate is, however, slow compared to Zip, Jaz, and SyQuest technologies.
One of the most popular formats uses a 128MB-capacity disk, about the size of a 3.5-inch
floppy. Larger-format magneto-optical drives with 5.25-inch cartridges offering 650MB
to 1.3GB of storage are also available.
With this new medium (DVD) capable not only of gigabyte storage capacity but also full-
motion video (MPEG2) and high-quality audio in surround sound, the bar has again
risen for multimedia developers. Commercial multimedia projects will become more
expensive to produce as consumers' performance expectations rise. There are two types
of DVD: DVD-Video and DVD-ROM; these reflect marketing channels, not the
technology.
DVD can provide 720 pixels per horizontal line, whereas current television
(NTSC) provides 240, so television pictures will be sharper and more detailed. With Dolby
AC-3 Digital Surround Sound as part of the specification, six discrete audio channels can
be programmed for digital surround sound, and with a separate subwoofer channel,
developers can program the low-frequency doom and gloom music popular with
Hollywood. DVD also supports Dolby Pro Logic Surround Sound, standard stereo and
mono audio. Users can randomly access any section of the disc and use the slow-motion
and freeze-frame features during movies. Audio tracks can be programmed for as many
as 8 different languages, with graphic subtitles in 32 languages. Some manufacturers, such
as Toshiba, are already providing parental control features in their players (users select
lockout ratings from G to NC-17).
Compact disc read-only memory (CD-ROM) players have become an integral part
of the multimedia development workstation and are an important delivery vehicle for large,
mass-produced projects. A wide variety of developer utilities, graphic backgrounds, stock
photography and sounds, applications, games, reference texts, and educational software
are available only on this medium.
CD-ROM players have typically been very slow to access and transmit data
(150KB per second, which is the speed required of consumer Red Book Audio CDs), but
new developments have led to double-, triple-, and quadruple-speed and even 24x drives
designed specifically for computer (not Red Book Audio) use. These faster drives spool
up like washing machines on the spin cycle and can be somewhat noisy, especially if the
inserted compact disc is not evenly balanced.
7.9 CD Recorders
With a compact disc recorder, you can make your own CDs using special CD-
recordable (CD-R) blank optical discs to create a CD in most formats of CD-ROM and
CD-Audio. The machines are made by Sony, Philips, Ricoh, Kodak, JVC, Yamaha, and
Pinnacle. Software, such as Adaptec's Toast for Macintosh or Easy CD Creator for
Windows, lets you organize files on your hard disk(s) into a "virtual" structure, then
writes them to the CD in that order. CD-R discs are made differently than normal CDs
but can play in any CD-Audio or CD-ROM player. They are available in either a "63-
minute" or "74-minute" capacity; the former holds about 560MB and the latter about
650MB. These write-once CDs make excellent high-capacity file archives and are used
extensively by multimedia developers for premastering and testing CD-ROM projects
and titles.
In this lesson we have learnt about the storage devices used in multimedia system.
The following points have been discussed :
RAM is a storage device for temporary storage, which is used to hold the
application programs under execution.
The secondary storage devices are used to store data permanently. The
storage capacity of secondary storage is greater than that of RAM.
Contents
In this lesson we will learn the different optical storage devices and their
specifications. At the end of this lesson the learner will:
i) Understand optical storage devices.
ii) Find the specifications of different optical storage devices.
iii) Specify the capabilities of DVD
8.1 Introduction
Optical storage devices have become the order of the day. Their high storage
capacity has made them the preferred storage for multimedia content. Apart from high
storage capacity, optical storage devices also offer higher data transfer rates.
8.2 CD-ROM
An audio CD consists of one or more stereo tracks stored using 16-bit PCM
coding at a sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can
hold approximately 80 minutes of audio. There are also 80 mm discs, sometimes used for
CD singles, which hold approximately 20 minutes of audio. The technology was later
adapted for use as a data storage device, known as a CD-ROM, and to include record-
once and re-writable media (CD-R and CD-RW respectively). CD-ROMs and CD-Rs
remain widely used technologies in the computer industry as of 2007. The CD and its
extensions have been extremely successful: in 2004, the worldwide sales of CD audio,
CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion CDs had been
sold worldwide.
In 1979, Philips and Sony set up a joint task force of engineers to design a new
digital audio disc.
A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate
plastic and weighs approximately 16 grams. A thin layer of aluminium (or, more rarely,
gold, used for its longevity, such as in some limited-edition audiophile CDs) is applied to
the surface to make it reflective, and is protected by a film of lacquer. CD data is stored
as a series of tiny indentations (pits), encoded in a tightly packed spiral track molded into
the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit
is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 µm in
length.
The spacing between the tracks, the pitch, is 1.6 µm. A CD is read by focusing a
780 nm wavelength semiconductor laser through the bottom of the polycarbonate layer.
While CDs are significantly more durable than earlier audio formats, they are
susceptible to damage from daily usage and environmental factors. Pits are much closer
to the label side of a disc, so that defects and dirt on the clear side can be out of focus
during playback. Discs consequently suffer more damage because of defects such as
scratches on the label side, whereas clear-side scratches can be repaired by refilling them
with plastic of similar index of refraction, or by careful polishing.
The digital data on a CD begins at the center of the disc and proceeds outwards to
the edge, which allows adaptation to the different size formats available. Standard CDs
are available in two sizes. By far the most common is 120 mm in diameter, with a 74 or
80-minute audio capacity and a 650 or 700 MB data capacity. 80 mm discs ("Mini CDs")
were originally designed for CD singles and can hold up to 21 minutes of music or
184 MB of data but never really became popular. Today nearly all singles are released on
120 mm CDs, which are called Maxi singles.
Audio CD
The selection of the sample rate was primarily based on the need to reproduce
the audible frequency range of 20Hz - 20kHz. The Nyquist–Shannon sampling theorem
states that a sampling rate of double the maximum frequency to be recorded is needed,
resulting in a 40 kHz rate. The exact sampling rate of 44.1 kHz was inherited from a
method of converting digital audio into an analog video signal for storage on video tape,
which was the most affordable way to transfer data from the recording studio to the CD
manufacturer at the time the CD specification was being developed. The device that turns
an analog audio signal into PCM audio, which in turn is changed into an analog video
signal is called a PCM adaptor.
The main parameters of the CD (taken from the September 1983 issue of
the audio CD specification) are as follows:
The program area is 86.05 cm² and the length of the recordable spiral is
86.05 cm² / 1.6 µm = 5.38 km. With a scanning speed of 1.2 m/s, the playing time
is 74 minutes, or around 650 MB of data on a CD-ROM. If the disc diameter were
only 115 mm, the maximum playing time would have been 68 minutes, i.e., six
minutes less. A disc with data packed slightly more densely is tolerated by most
players (though some old ones fail). Using a linear velocity of 1.2 m/s and a track
pitch of 1.5 µm leads to a playing time of 80 minutes, or a capacity of 700 MB.
Even higher capacities on non-standard discs (up to 99 minutes) are available at
least as recordable, but generally the tighter the tracks are squeezed the worse the
compatibility.
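These figures can be verified with a short calculation. The following Python sketch simply reproduces the arithmetic above; the 2048 user bytes per sector assumed for the data capacity is the standard CD-ROM Mode 1 payload, which is not quoted in the text.
# Reproduces the CD capacity arithmetic described above (illustrative sketch).
program_area_cm2 = 86.05            # recordable program area in square centimetres
track_pitch_um = 1.6                # spacing between track windings in micrometres
scan_speed_mps = 1.2                # constant linear velocity in metres per second

spiral_length_m = (program_area_cm2 * 1e-4) / (track_pitch_um * 1e-6)
print(spiral_length_m / 1000)       # about 5.38 km of spiral

playing_time_min = spiral_length_m / scan_speed_mps / 60
print(playing_time_min)             # about 74 minutes

data_bytes = playing_time_min * 60 * 75 * 2048   # 75 sectors/s, 2048 user bytes each
print(data_bytes / 2**20)           # about 650 MB (binary megabytes)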
Data structure
These 588-bit frames are in turn grouped into sectors. Each sector
contains 98 frames, totaling 98 × 24 = 2352 bytes of music. The CD is played at a
speed of 75 sectors per second, which results in 176,400 bytes per second.
Divided by 2 channels and 2 bytes per sample, this results in a sample rate of
44,100 samples per second.
"Frame"
For the Red Book stereo audio CD, the time format is commonly measured in
minutes, seconds and frames (mm:ss:ff), where one frame corresponds to one
sector, or 1/75th of a second of stereo sound. Note that in this context, the term
frame is erroneously applied in editing applications and does not denote the
physical frame described above. In editing and extracting, the frame is the
smallest addressable time interval for an audio CD, meaning that track start and
end positions can only be defined in 1/75 second steps.
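The same arithmetic can be checked in a few lines. The Python sketch below recomputes the byte rate and sample rate from the sector layout given above, and converts a Red Book mm:ss:ff time address into a sector count (the helper function is only an illustration, not part of any standard API):
# Red Book audio arithmetic (illustrative sketch of the figures given above).
frames_per_sector = 98
audio_bytes_per_frame = 24          # audio payload per frame, as used above
sectors_per_second = 75

bytes_per_sector = frames_per_sector * audio_bytes_per_frame    # 2352 bytes of music
byte_rate = bytes_per_sector * sectors_per_second               # 176,400 bytes per second
sample_rate = byte_rate // (2 * 2)                              # 2 channels x 2 bytes -> 44,100
print(bytes_per_sector, byte_rate, sample_rate)

def msf_to_sectors(minutes, seconds, frames):
    # One frame (ff) addresses one sector, i.e. 1/75th of a second of stereo sound.
    return (minutes * 60 + seconds) * 75 + frames

print(msf_to_sectors(74, 0, 0))     # a 74-minute programme spans 333,000 sectors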
Logical structure
CD-Text
CD-Text is an extension of the Red Book specification for audio CD that allows
for storage of additional text information (e.g., album name, song name, artist) on a
standards-compliant audio CD. The information is stored either in the lead-in area of the
CD, where there is roughly five kilobytes of space available, or in the subcode channels
R to W on the disc, which can store about 31 megabytes.
CD + Graphics
Compact Disc + Graphics (CD+G) is a special audio compact disc that contains
graphics data in addition to the audio data on the disc. The disc can be played on a
regular audio CD player, but when played on a special CD+G player, can output a
graphics signal (typically, the CD+G player is hooked up to a television set or a computer
monitor); these graphics are almost exclusively used to display lyrics on a television set
for karaoke performers to sing along with.
CD + Extended Graphics
CD-MIDI
Video CD
Video CD (aka VCD, View CD, Compact Disc digital video) is a standard
digital format for storing video on a Compact Disc. VCDs are playable in dedicated VCD
players, most modern DVD-Video players, and some video game consoles.
The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC
and is referred to as the White Book standard.
Super Video CD
Super Video CD (Super Video Compact Disc or SVCD) is a format used for
storing video on standard compact discs. SVCD was intended as a successor to Video CD
and an alternative to DVD-Video, and falls somewhere between both in terms of
technical capability and picture quality.
SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution
of VCD. One CD-R disc can hold up to 60 minutes of standard quality SVCD-format
video. While no specific limit on SVCD video length is mandated by the specification,
one must lower the video bitrate, and therefore quality, in order to accommodate very
long videos. It is usually difficult to fit much more than 100 minutes of video onto one
SVCD without incurring significant quality loss, and many hardware players are unable
to play video with an instantaneous bitrate lower than 300 to 600 kilobits per second.
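The resolution comparison can be made concrete with a quick calculation. The Python sketch below uses the common NTSC frame sizes for these formats; the SVCD (480x480) and VCD (352x240) values are standard figures that do not appear in the text above, while DVD's 720x480 is quoted later in this lesson.
# Pixel-count comparison of disc video formats (illustrative sketch using NTSC frame sizes).
dvd_pixels = 720 * 480              # DVD-Video (NTSC)
svcd_pixels = 480 * 480             # Super Video CD (NTSC)
vcd_pixels = 352 * 240              # Video CD (NTSC)

print(svcd_pixels / dvd_pixels)     # about 0.67 -> two-thirds the resolution of DVD
print(svcd_pixels / vcd_pixels)     # about 2.73 -> over 2.7 times the resolution of VCD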
Photo CD
Picture CD
CD Interactive
The Philips "Green Book" specifies the standard for interactive multimedia Compact
Discs designed for CD-i players. This Compact Disc format is unusual because it hides
the initial tracks which contains the software and data files used by CD-i players by
omitting the tracks from the disc's Table of Contents. This causes audio CD players to
skip the CD-i data tracks. This is different from the CD-i Ready format, which puts CD-i
software and data into the pregap of Track 1.
Enhanced CD
Enhanced CD, also known as CD Extra and CD Plus, is a certification mark of the
Recording Industry Association of America for various technologies that combine audio
and computer data for use in both compact disc and CD-ROM players.
The primary data formats for Enhanced CD disks are mixed mode (Yellow Book/Red
Book), CD-i, hidden track, and multisession (Blue Book).
Recordable CD
Recordable compact discs, CD-Rs, are injection moulded with a "blank" data spiral. A
photosensitive dye is then applied, after which the discs are metalized and lacquer coated.
The write laser of the CD recorder changes the color of the dye to allow the read laser of
a standard CD player to see the data as it would an injection moulded compact disc. The
resulting discs can be read by most (but not all) CD-ROM drives and played in most (but
not all) audio CD players.
CD-R recordings are designed to be permanent. Over time the dye's physical
characteristics may change, however, causing read errors and data loss that the reading
device eventually cannot recover with error correction methods. The design life is from 20 to 100
years depending on the quality of the discs, the quality of the writing drive, and storage
conditions. However, testing has demonstrated such degradation of some discs in as little
as 18 months under normal storage conditions. This process is known as CD rot.
Recordable Audio CD
ReWritable CD
CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write
laser in this case is used to heat and alter the properties (amorphous vs. crystalline) of the
alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in
reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot
read CD-RW discs, although most later CD audio players and stand-alone DVD players
can. CD-RWs follow the Orange Book standard.
8.4 DVD
DVD (also known as "Digital Versatile Disc" or "Digital Video Disc") is a
popular optical disc storage media format. Its main uses are video and data storage. Most
DVDs are of the same dimensions as compact discs (CDs) but store more than 6 times the
data.
Variations of the term DVD often describe the way data is stored on the discs:
DVD-ROM has data which can only be read and not written, DVD-R can be written once
and then functions as a DVD-ROM, and DVD-RAM or DVD-RW holds data that can be
re-written multiple times.
"DVD" was originally used as an initialism for the unofficial term "digital video
disc". It was reported in 1995, at the time of the specification finalization, that the letters
officially stood for "digital versatile disc" (due to non-video applications), however, the
text of the press release announcing the specification finalization only refers to the
technology as "DVD", making no mention of what (if anything) the letters stood for.
Usage in the present day varies, with "DVD", "Digital Video Disc", and "Digital
Versatile Disc" all being common.
The 12 cm type is a standard DVD, and the 8 cm variety is known as a mini-DVD. These
are the same sizes as a standard CD and a mini-CD.
Note: GB here means gigabyte, equal to 10^9 (or 1,000,000,000) bytes. Many programs
will display gibibytes (GiB), equal to 2^30 (or 1,073,741,824) bytes.
Technology
DVD uses 650 nm wavelength laser diode light, as opposed to 780 nm for CD. This
permits a smaller spot on the media surface: 1.32 µm for DVD versus 2.11 µm for CD.
Writing speeds for DVD were 1x, that is 1350 kB/s (1318 KiB/s), in the first drive and
media models. More recent models at 18x or 20x have 18 or 20 times that speed. Note
that for CD drives, 1x means 153.6 kB/s (150 KiB/s), 9 times slower.
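To put these speeds in perspective, the short Python sketch below estimates how long it takes to read a full single-layer disc at the quoted speeds; the 4.7 GB capacity is taken from the dual-layer discussion later in this section, and the figures are only rough estimates.
# Rough transfer-time estimates from the speed figures quoted above (illustrative sketch).
DVD_1X_BPS = 1_350_000              # 1x DVD = 1350 kB/s
CD_1X_BPS = 150 * 1024              # 1x CD = 150 KiB/s (about 153.6 kB/s)
print(DVD_1X_BPS / CD_1X_BPS)       # roughly 8.8, i.e. about 9 times faster

disc_bytes = 4.7e9                  # single-layer DVD capacity (decimal gigabytes)
print(disc_bytes / 2**30)           # about 4.38 GiB, as noted earlier

for speed in (1, 18, 20):
    minutes = disc_bytes / (speed * DVD_1X_BPS) / 60
    print(speed, "x:", round(minutes, 1), "minutes")   # about 58, 3.2 and 2.9 minutes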
HP initially developed recordable DVD media from the need to store data for back-up
and transport.
DVD recordables are now also used for consumer audio and video recording. Three
formats were developed: DVD-R/RW (minus/dash), DVD+R/RW (plus), DVD-RAM.
Dual Layer recording allows DVD-R and DVD+R discs to store significantly
more data, up to 8.5 Gigabytes per side, per disc, compared with 4.7 Gigabytes for single-
layer discs. DVD-R DL was developed for the DVD Forum by Pioneer Corporation,
DVD+R DL was developed for the DVD+RW Alliance by Philips and Mitsubishi
Kagaku Media (MKM).
A Dual Layer disc differs from its usual DVD counterpart by employing a second
physical layer within the disc itself. The drive with Dual Layer capability accesses the
second layer by shining the laser through the first semi-transparent layer. The layer
change mechanism in some DVD players can show a noticeable pause, as long as two
seconds by some accounts. This caused more than a few viewers to worry that their dual
layer discs were damaged or defective, with the end result that studios began listing a
standard message explaining the dual layer pausing effect on all dual layer disc
packaging.
DVD recordable discs supporting this technology are backward compatible with
some existing DVD players and DVD-ROM drives. Many current DVD recorders
support dual-layer technology, and the price is now comparable to that of single-layer
drives, though the blank media remain more expensive.
DVD-Video
Though many resolutions and formats are supported, most consumer DVD-Video
discs use either 4:3 or anamorphic 16:9 aspect ratio MPEG-2 video, stored at a resolution
of 720×480 (NTSC) or 720×576 (PAL) at 24, 30, or 60 FPS. Audio is commonly stored
using the Dolby Digital (AC-3) or Digital Theater System (DTS) formats, ranging from
16-bits/48kHz to 24-bits/96kHz format with monaural to 7.1 channel "Surround Sound"
presentation, and/or MPEG-1 Layer 2. Although the specifications for video and audio
requirements vary by global region and television system, many DVD players support all
possible formats. DVD-Video also supports features like menus, selectable subtitles,
multiple camera angles, and multiple audio tracks.
DVD-Audio
To date, CPPM has not been "broken" in the sense that DVD-Video's CSS has been
broken, but ways to circumvent it have been developed. By modifying commercial
DVD(-Audio) playback software to write the decrypted and decoded audio streams to the
hard disk, users can, essentially, extract content from DVD-Audio discs much in the same
way they can from DVD-Video discs.
1. Identify the different storage capacities and the type of CD-ROM and DVD.
Compare their duration of recording with the memory capacity of the storage
device.
8.8 References
Contents
This lesson aims at introducing the multimedia hardware used for providing
interactivity between the user and the multimedia software.
9.1 Introduction
Often, input devices are under direct control by a human user, who uses them to
communicate commands or other information to be processed by the computer, which
may then transmit feedback to the user through an output device. Input and output
devices together make up the hardware interface between a computer and the user or
external world. Typical examples of input devices include keyboards and mice. However,
there are others which provide many more degrees of freedom. In general, any sensor
which monitors, scans for and accepts information from the external world can be
considered an input device, whether or not the information is under the direct control of a
user.
Input devices can be classified according to:
the modality of input (e.g. mechanical motion, audio, visual, etc.)
whether the input is discrete (e.g. keypresses) or continuous (e.g. a mouse's
position, though digitized into a discrete quantity, is high-resolution enough to be
thought of as continuous)
the number of degrees of freedom involved (e.g. many mice allow 2D positional
input, but some devices allow 3D input, such as the Logitech Magellan Space
Mouse)
Pointing devices, which are input devices used to specify a position in space, can further
be classified according to
Whether the input is direct or indirect. With direct input, the input space coincides
with the display space, i.e. pointing is done in the space where visual feedback or
the cursor appears. Touchscreens and light pens involve direct input. Examples
involving indirect input include the mouse and trackball.
Whether the positional information is absolute (e.g. on a touch screen) or relative
(e.g. with a mouse that can be lifted and repositioned)
Note that direct input is almost necessarily absolute, but indirect input may be either
absolute or relative. For example, digitizing graphics tablets that do not have an
embedded screen involve indirect input; they sense absolute positions and are often run in
an absolute input mode, but they may also be set up to simulate a relative input mode
where the stylus or puck can be lifted and repositioned.
9.2.2 Keyboards
The most common keyboard for PCs is the 101 style (which provides 101 keys),
although many styles are available with more or fewer special keys, LEDs, and other
features, such as a plastic membrane cover for industrial or food-service applications or
flexible "ergonomic" styles. Macintosh keyboards connect to the Apple Desktop Bus
(ADB), which manages all forms of user input, from digitizing tablets to mice.
Computer keyboard
Keyer
Chorded keyboard
LPFK
While the most common pointing device by far is the mouse, many more devices
have been developed; however, the mouse is commonly used as a metaphor for devices that
move the cursor. A mouse is the standard tool for interacting with a graphical user
interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required
but recommended. Even though the Windows environment accepts keyboard entry in lieu
of mouse point-and-click actions, your multimedia project should typically be designed
with the mouse or touchscreen in mind. The buttons on the mouse provide additional user
input, such as pointing and double-clicking to open a document, or the click-and-drag
operation, in which the mouse button is pressed and held down to drag (move) an object,
or to move to and select an item on a pull-down menu, or to access context-sensitive help.
The Apple mouse has one button; PC mice may have as many as three.
mouse
trackball
touchpad
spaceBall - 6 degrees-of-freedom controller
touchscreen
graphics tablets (or digitizing tablet) that use a stylus
light pen
light gun
eye tracking devices
steering wheel - can be thought of as a 1D pointing device
yoke (aircraft)
jog dial - another 1D pointing device
isotonic joysticks - where the user can freely change the position of the stick,
with more or less constant force
o joystick
o analog stick
isometric joysticks - where the user controls the stick by varying the amount
of force they push with, and the position of the stick remains more or less
constant
o pointing stick
Some devices allow many continuous degrees of freedom to be input, and could
sometimes be used as pointing devices, but could also be used in other ways that don't
conceptually involve pointing at a location in space.
Wired glove
Shape Tape
Composite devices
Input devices, such as buttons and joysticks, can be combined on a single physical
device that could be thought of as a composite device. Many gaming devices have
controllers like this.
Game controller
Gamepad (or joypad)
Paddle (game controller)
Wii Remote
Flat-Bed Scanners
A scanner may be the most useful piece of equipment used in the course of
producing a multimedia project; there are flat-bed and handheld scanners. Most
commonly available are gray-scale and color flat-bed scanners that provide a
resolution of 300 or 600 dots per inch (dpi). Professional graphics houses may use
even higher resolution units. Handheld scanners can be useful for scanning small
images and columns of text, but they may prove inadequate for multimedia
development.
Webcam
Image scanner
Fingerprint scanner
Barcode reader
3D scanner
medical imaging sensor technology
o Computed tomography
o Magnetic resonance imaging
o Positron emission tomography
o Medical ultrasonography
Microphone
Speech recognition
Note that MIDI allows musical instruments to be used as input devices as well.
9.2.7 Touchscreens
Touchscreens are monitors that usually have a textured coating across the glass
face. This coating is sensitive to pressure and registers the location of the user’s finger
when it touches the screen. The Touch Mate System, which has no coating, actually
measures the pitch, roll, and yaw rotation of the monitor when pressed by a finger, and
determines how much force was exerted and the location where the force was applied.
Other touchscreens use invisible beams of infrared light that crisscross the front of the
monitor to calculate where a finger was pressed. Pressing twice on the screen in quick
succession simulates the mouse double-click action, and touching the screen and dragging
the finger, without lifting it, to another location simulates a mouse click-and-drag. A
keyboard is sometimes simulated using an onscreen representation so users can input
names, numbers, and other text by pressing "keys".
Touchscreens are not recommended for day-to-day computer work, but are excellent for
multimedia applications in a kiosk, at a trade show, or in a museum delivery system -
anything involving public input and simple tasks. When your project is designed to use a
touchscreen, the monitor is the only input device required, so you can secure all other
system hardware behind locked doors to prevent theft or tampering.
Presentation of the audio and visual components of the multimedia project requires
hardware that may or may not be included with the computer itself: speakers, amplifiers,
monitors, motion video devices, and capable storage systems. The better the equipment,
of course, the better the presentation. There is no greater test of the benefits of good
output hardware than to feed the audio output of your computer into an external amplifier
system: suddenly the bass sounds become deeper and richer, and even music sampled at
low quality may seem to be acceptable.
9.3.1.Audio devices
All Macintoshes are equipped with an internal speaker and a dedicated sound chip,
and they are capable of audio output without additional hardware and/or software. To
take advantage of built-in stereo sound, external speakers are required. Digitizing sound
on the Macintosh requires an external microphone and sound editing/recording software
such as SoundEdit 16 from Macromedia, Alchemy from Passport, or Sound Designer
from DigiDesign.
Serious multimedia developers will often attach more than one monitor to their
computers, using add-on graphics boards. This is because many authoring systems allow
you to work with several open windows at a time, so you can dedicate one monitor to
viewing the work you are creating or designing, and perform various editing tasks in
windows on other monitors that do not block the view of your work. When developing
with Macromedia's authoring environment, Director, on a single monitor, editing windows
can otherwise overlap the work view. Developing in Director is best with at least two
monitors, one to view the work, the other to view the "Score". A third monitor is often
added by Director developers to display the "Cast".
9.3.4 Video Device
No other contemporary message medium has the visual impact of video. With a
video digitizing board installed in a computer, we can display a television picture on your
monitor. Some boards include a frame-grabber feature for capturing the image and
turning it in to a color bitmap, which can be saved as a PICT or TIFF file and then used
as part of a graphic or a background in your project.
Display of video on any computer platform requires manipulation of an enormous
amount of data. When used in conjunction with videodisc players, which give precise
control over the images being viewed, video cards let you place an image into a window on
the computer monitor; a second television screen dedicated to video is not required. And
video cards typically come with excellent special effects software.
There are many video cards available today. Most of these support various video-
in-a-window sizes, identification of source video, setup of play sequences or segments,
special effects, frame grabbing, digital movie making; and some have built-in television
tuners so you can watch your favorite programs in a window while working on other
things. In Windows, video overlay boards are controlled through the Media Control
Interface. On the Macintosh, they are often controlled by external commands and
functions (XCMDs and XFCNs) linked to your authoring software.
Good video greatly enhances your project; poor video will ruin it. Whether you
deliver your video from tape using VISCA controls, from videodisc, or as a QuickTime
or AVI movie, it is important that your source material be of high quality.
9.3.5 Projectors
When it is necessary to show material to more viewers than can huddle around a
computer monitor, it will be necessary to project it onto a large screen or even a white-
painted wall. Cathode-ray tube (CRT) projectors, liquid crystal display (LCD) panels
attached to an overhead projector, stand-alone LCD projectors, and light-valve projectors
are available to splash the work onto big-screen surfaces.
CRT projectors have been around for quite a while- they are the original “big-
screen” televisions. They use three separate projection tubes and lenses (red, green, and
blue), and three color channels of light must “converge” accurately on the screen. Setup,
focusing, and aligning are important to getting a clear and crisp picture. CRT projectors
are compatible with the output of most computers as well as televisions.
LCD panels are portable devices that fit in a briefcase. The panel is placed on the
glass surface of a standard overhead projector available in most schools, conference
rooms, and meeting halls. While the overhead projector does the projection work, the
panel is connected to the computer and provides the image, in thousands of colors and,
with active-matrix technology, at speeds that allow full-motion video and animation.
Because LCD panels are small, they are popular for on-the-road presentations, often
connected to a laptop computer and using a locally available overhead projector.
More complete LCD projection panels contain a projection lamp and lenses and do
not require a separate overhead projector. They typically produce an image brighter and
sharper than the simple panel model, but they are somewhat larger and cannot travel in a
briefcase.
Light-valve projectors compete with high-end CRT projectors and use a liquid crystal
technology in which a low-intensity color image modulates a high-intensity light beam.
These units are expensive, but the image from a light-valve projector is very bright and
color-saturated, and can be projected onto screens as wide as 10 meters.
9.3.6 Printers
With the advent of reasonably priced color printers, hard-copy output has entered
the multimedia scene. From storyboards to presentations to production of collateral
marketing material, color printers have become an important part of the multimedia
development environment. Color helps clarify concepts, improve understanding and
retention of information, and organize complex data. As multimedia designers already
know, intelligent use of color is critical to the success of a project. Tektronix offers both
solid ink and laser options, and either Phaser 560 will print more than 10,000 pages at a
rate of 5 color pages or 14 monochrome pages per minute before requiring new toner.
Epson provides lower-cost and lower-performance solutions for home and small business
users; Hewlett Packard's Color LaserJet line competes with both. Most printer
manufacturers offer a color model; just as all computers once used monochrome monitors
but are now color, all printers will become color printers.
Check Your Progress 2
List a few output devices.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
Communication among workgroup members and with the client is essential to the
efficient and accurate completion of a project. And when speedy data transfer is needed,
a modem or network is required. If the client and the service provider are
both connected to the Internet, a combination of communication by e-mail and by FTP
(File Transfer Protocol) may be the most cost-effective and efficient solution for both
creative development and project management.
In the workplace, it is necessary to use quality equipment and software for the
communication setup. The cost-in both time and money-of stable and fast networking
will be returned to the content developer.
9.4.1 Modems
Modem speed, measured in baud, is the most important consideration. Because the
multimedia files that contain the graphics, audio resources, video samples, and
progressive versions of your project are usually large, you need to move as much data as
possible in as short a time as possible. Today's standards dictate at least a V.34 28,800
bps modem. Transmitting at only 2400 bps, a 350KB file may take as long as 45 minutes
to send, but at 28.8 kbps, you can be done in a couple of minutes. Most modems follow
the CCITT V.32 or V.42 standards, which provide data compression algorithms when
communicating with another similarly equipped modem. Compression saves significant
transmission time and money, especially over long distances. Be sure the modem uses a
standard compression system (like V.32), not a proprietary one.
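As a rough check of these transfer times, the Python sketch below assumes 10 bits on the wire per byte (8 data bits plus start/stop framing, an assumption not stated in the text) and ignores retries and protocol overhead, which is why a real 2400 bps transfer could take closer to the 45 minutes mentioned above:
# Rough file-transfer-time estimates (illustrative sketch, not a model of real modem behaviour).
FILE_BYTES = 350 * 1024             # the 350KB file mentioned above
BITS_PER_BYTE_ON_WIRE = 10          # assumption: 8 data bits plus start/stop framing

def transfer_minutes(bits_per_second):
    return FILE_BYTES * BITS_PER_BYTE_ON_WIRE / bits_per_second / 60

for bps in (2400, 28800, 128000):   # 2400 bps, a V.34 modem, and a 128 Kbps ISDN line
    print(bps, "bps:", round(transfer_minutes(bps), 1), "minutes")
# about 24.9, 2.1 and 0.5 minutes respectively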
According to the laws of physics, copper telephone lines and the switching
equipment at the phone companies’ central offices can handle modulated analog signals
up to about 28,000 bps on "clean" lines. Modem manufacturers that advertise data
transmission speeds higher than that (56 Kbps) are counting on their hardware-based
compression algorithms to crunch the data before sending it, decompressing it upon
arrival at the receiving end. If you have already compressed the data into a .SIT, .SEA,
.ARC, or .ZIP file, you may not reap any benefit from the higher advertised speeds.
9.4.2 ISDN
For higher transmission speeds, you will need to use Integrated Services Digital
Network (ISDN), Switched-56, T1, T3, DSL, ATM, or another of the telephone
companies’ Digital Switched Network Services.
ISDN lines are popular because of their fast 128 Kbps data transfer rate-four to five
times faster than the more common 28.8 Kbps analog modem. ISDN lines (and the
required ISDN hardware, often misnamed “ISDN modems” even though no
modulation/demodulation of the analog signal occurs) are important for Internet access,
networking, and audio and video conferencing. They are more expensive than
conventional analog or POTS (Plain Old Telephone Service) lines, so analyze your costs
and benefits carefully before upgrading to ISDN. Newer and faster Digital Subscriber
Line (DSL) technology using copper lines and promoted by the telephone companies may
overtake ISDN.
9.4.3 Cable Modems
Cable modems operate at speeds 100 to 1,000 times as fast as a telephone modem,
receiving data at up to 10Mbps and sending data at speeds between 2Mbps and 10 Mbps.
They can provide not only high-bandwidth Internet access but also streaming audio and
video for television viewing. Most will connect to computers with 10BaseT Ethernet
connectors.
Cable modems usually send and receive data asymmetrically - they receive more
(faster) than they send (slower). In the downstream direction from provider to user, the
data are modulated and placed on a common 6 MHz television carrier, somewhere
between 42 MHz and 750 MHz. The upstream channel, or reverse path, from the user
back to the provider is more difficult to engineer because cable is a noisy environment,
with interference from HAM radio, CB radio, home appliances, loose connectors, and
poor home installation.
In this lesson we will learn the different requirements for a computer to become a
multimedia workstation. At the end of this chapter the learner will be able to identify the
requirements for making a computer, a multimedia workstation.
10.1 Introduction
A multimedia workstation is a computer with facilities to handle multimedia objects
such as text, audio, video, animation and images. A multimedia workstation was earlier
identified as an MPC (Multimedia Personal Computer). In the current scenario all
computers are prebuilt with multimedia processing facilities. Hence it is no longer
necessary to identify a computer as an MPC.
However, the transmission of audio and video cannot be carried out with only the
conventional communication infrastructure and network adapters.
Until now, the solution was that continuous and discrete media have been
considered in different environments, independently of each other. It means that fully
different systems were built. For example, on the one hand, the analog telephone system
provides audio transmission services using its original dial devices connected by copper
wires to the telephone company’s nearest end office. The end offices are connected to
switching centers, called toll offices, and these centers are connected through high
bandwidth intertoll trunks to intermediate switching offices. This hierarchical structure
allows for reliable audio communication. On the other hand, digital computer networks
provide data transmission services at lower data rates using network adapters connected
by copper wires to switches and routers.
Even today, professional radio and television studios transmit audio and video
streams in the form of analog signals, although most network components (e.g.,
switches), over which these signals are transmitted, work internally in a digital mode.
The main advantage of this approach is the high quality of audio and video, and the
ready availability of all the necessary devices for input, output, storage and transfer. The
hybrid approach is used for studying application user interfaces, application
programming interfaces or application scenarios.
Integrated Transmission
Connection to Workstations
Connection to switches
Current workstations are designed for the manipulation of discrete media information.
The data should be exchanged as quickly as possible between the involved components,
often interconnected by a common bus. Computationally intensive and dedicated
processing requirements lead to dedicated hardware, firmware and additional boards.
Examples of these components are hard disk controllers and FDDI-adapters.
Bus
The notion of a bus has to be divided into system bus and periphery bus. In their
current versions, system busses such as ISA, EISA, Microchannel, Q-bus and VME-bus
support only limited transfer of continuous data. The further development of periphery
busses, such as SCSI, is aimed at the development of data transfer for continuous media.
Multimedia Devices
The main peripheral components are the necessary input and output multimedia
devices. Most of these devices were developed for or by consumer electronics, resulting
in the relatively low cost of the devices. Microphones, headphones, as well as passive and
active speakers, are examples. For the most part, active speakers and headphones are
connected to the computer because it generally does not contain an amplifier. The camera
camera for video input is also taken from consumer electronics. Hence, a video interface
in a computer must accommodate the most commonly used video techniques/standards,
i.e., NTSC, PAL, SECAM with FBAS, RGB, YUV and YIQ modes. A monitor serves for
video output. Besides Cathode Ray Tube (CRT) monitors (e.g., current workstation
terminals), more and more terminals use the color-LCD technique (e.g., a projection TV
monitor uses the LCD technique). Further, to display video, monitor characteristics, such
as color, high resolution, and flat and large shape, are important.
Primary Storage
Audio and video data are copied among different system components in a digital
system. Examples of tasks where copying of data is necessary are the segmentation of
LDUs and the appending of a header and trailer. The copying operation uses system
software-specific memory management designed for continuous media. This kind of
memory management needs sufficient main memory (primary storage). Besides ROMs,
PROMs, EPROMs, and partially static memory elements, main memory consists mostly of
dynamic RAM; the low cost of these modules, together with steadily increasing storage
capacities, benefits the multimedia world.
Secondary Storage
The main requirements put on secondary storage and the corresponding controller
are a high storage density and a low access time, respectively.
On the one hand, to achieve a high storage density, for example, a Constant
Linear Velocity (CLV) technique was defined for the CD-DA (Compact Disc Digital
Audio). CLV guarantees that the data density is kept constant for the entire optical disk at
the expense of a higher mean access time. On the other hand, to achieve time guarantees,
i.e., lower mean access time, a Constant Angular Velocity (CAV) technique could be used.
Because the time requirement is more important, systems with CAV are more
suitable for multimedia than systems with CLV.
Processor
Technology
DVI
Cache
Operating System
Selection of the proper platform for developing the multimedia project may be
based on your personal preference of computer, your budget constraints, project
delivery requirements, and the type of material and content in the project. Many
developers believe that multimedia project development is smoother and easier on the
Macintosh than in Windows, even though projects destined to run in Windows must then
be ported across platforms. But hardware and authoring software tools for Windows
have improved; today you can produce many multimedia projects with equal ease in
either the Windows or Macintosh environment.
All Macintoshes can record and play sound. Many include hardware and software
for digitizing and editing video and producing DVD discs. High-quality graphics
capability is available “out of the box.” Unlike the Windows environment, where users
can operate any application with keyboard input, the Macintosh requires a mouse.
The Macintosh computer you will need for developing a project depends entirely
upon the project’s delivery requirements, its content, and the tools you will need for
production.
In the early days, Microsoft organized the major PC hardware manufacturers into
the Multimedia PC Marketing Council to develop a set of specifications that would allow
Windows to deliver a dependable multimedia experience.
Local area networks (LANs) and wide area networks (WANs) can connect the
members of a workgroup. In a LAN, workstations are usually located within a short
distance of one another, on the same floor of a building, for example. WANs are
communication systems spanning great distances, typically set up and managed by large
corporations and institutions for their own use, or to share with other users.
LANs allow direct communication and sharing of peripheral resources such as file
servers, printers, scanners, and network modems. They use a variety of proprietary
technologies, most commonly Ethernet or Token Ring, to perform the connections. They
can usually be set up with twisted-pair telephone wire, but be sure to use "data-grade
level 5" or "cat-5" wire; it makes a real difference, even if it's a little more expensive!
Bad wiring will give the user never-ending headaches of intermittent and often untraceable
crashes and failures.
10.10 References
1. "Multimedia:Concepts and Practice" By Stephen McGloughlin
2. ”Multimedia Computing, Communication and application” By Steinmetz and
Klara Nahrstedt.
3. Digital Multimedia by Nigel Chapman, Jenny Chapman
4. Video and Image Processing in Multimedia Systems By Stephen W. Smoliar,
HongJiang Zhang
UNIT - III
This lesson aims at introducing the concepts of hypertext and hypermedia. At the
end of this chapter the learner will be able to :
i) Understand the concepts of hypertext and hypermedia
ii) Distinguish hypertext and hypermedia
11.1 Introduction
11.2 Documents
The manipulation model describes all the operations allowed for creation, change
and deletion of multimedia information. The representation model defines: (1) the
protocols for exchanging this information among different computers; and, (2) the
formats for storing the data. It includes the relations between the individual information
elements which need to be considered during presentation. It is important to mention that
an architecture may not include all of the described properties or models.
Figure: document architecture, comprising the presentation model, the manipulation
model and the representation model, together with the document's structure and content.
11.3 HYPERTEXT
Hypertext most often refers to text on a computer that will lead the user to other,
related information on demand. Hypertext represents a relatively recent innovation to
user interfaces, which overcomes some of the limitations of written text. Rather than
remaining static like traditional text, hypertext makes possible a dynamic organization of
information through links and connections (called hyperlinks).
Hypertext can be designed to perform various tasks; for instance when a user
"clicks" on it or "hovers" over it, a bubble with a word definition may appear, or a web
page on a related subject may load, or a video clip may run, or an application may open.
The prefix hyper ("over" or "beyond") signifies the overcoming of the old linear
constraints of written text.
11.4 Hypermedia
Communication reproduces knowledge stored in the human brain via several media.
Documents are one method of transmitting information. Reading a document is an act of
reconstructing knowledge. In an ideal case, knowledge transmission starts with an author
and ends with a reconstruction of the same ideas by a reader.
In the case of hypertext and hypermedia, a graphical structure is possible
in a document, which may simplify the writing and reading processes.
Figure: stages of document processing, from the problem description through a start
format and intermediate formats to the final layout and presentation form (e.g., PostScript)
for printing.
The following figure shows an example of such a link. The arrows point to such a
relation between the information units (Logical Data Units - LDU’s). In a text (top left in
the figure), a reference to the landing properties of aircrafts is given. These properties are
demonstrated through a video sequence (bottom left in the figure). At another place in the
text, sales of landing rights for the whole USA are shown (this is visualized in the form of
a map, using graphics- bottom right in the figure). Further information about the airlines
with their landing rights can be made visible graphically through a selection of a
particular city. Specific information about the number of the different airplanes sold
with landing rights in Washington is shown at the top right in the figure with a bar
diagram. Internally, the diagram information is presented in table form. The left bar
points to the plane, which can be demonstrated with a video clip.
Hypertext System:
Multimedia System:
For example, if only links to text data are present, then this is not a multimedia
system, it is a hypertext system. A video conference, with simultaneous transmission of text and
graphics generated by a document processing program, is a multimedia application,
although it does not have any relation to hypertext and hypermedia.
Hypermedia System:
After the release of web browsers for both the PC and Macintosh environments,
traffic on the World Wide Web quickly exploded from only 500 known web servers in
1993 to over 10,000 in 1994. Thus, all earlier hypertext systems were overshadowed by
the success of the web, even though it originally lacked many features of those earlier
systems, such as an easy way to edit what you were reading, typed links, backlinks,
transclusion, and source tracking.
1. Check any multimedia website and identify the contents that have hyperlinks,
hypermedia.
2. Identify the different hypermedia and hypertext contents available in any
educational, E-lesson based CDROMS.
Hypertext most often refers to text on a computer that will lead the user to other,
related information on demand. Hypertext represents a relatively recent
innovation to user interfaces.
11.11 References
Contents
12.0 Aims and Objectives
12.1 Introduction
12.2 Document Architecture - SGML
12.3 Open Document Architecture
12.3.1 Details of ODA
12.3.2 Layout Structure and logical structure
12.3.3 ODA and Multimedia
12.4 MHEG
12.4.1 Example of Interactive Multimedia
12.4.2 Derivation of Class Hierarchy
12.5 Let us sum up
12.6 Lesson-end activities
12.7 Model answers to “Check your progress”
12.8 References
12.1 Introduction
The basic idea is that the author uses tags for marking certain text parts. SGML
determines the form of tags, but it does not specify their location or meaning. User
groups agree on the meaning of the tags. SGML provides a framework with which the
user specifies the syntax description in an object-specific system. Here, classes and
objects, hierarchies of classes and objects, inheritance and the link to methods
(processing instructions) can be used by the specification. SGML specifies the syntax,
but not the semantics.
For example,
<title>Multimedia-Systems</title>
<author>Felix Gatou</author>
<side>IBM</side>
<summary>This exceptional paper from Peter…
The following figure shows the processing of an SGML document. It is divided into two
processes:
Only the formatter knows the meaning of the tags and transforms the document
into a formatted document. The parser uses the tags occurring in the document in
combination with the corresponding document type. Specification of the document
structure is done with tags. Here, parts of the layout are linked together, based on
the joint context between the originator of the document and the formatter process,
as defined through SGML.
Multimedia data are supported in the SGML standard only in the form of
graphics. A graphical image as a CGM (Computer Graphics Metafile) is embedded in an
SGML document. The standard does not refer to other media.
A link to concrete data can be specified through #NDATA. The data are stored mostly
externally in a separate file.
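By way of illustration, a minimal DTD fragment along these lines might look as follows; this is a hypothetical sketch (the element, notation and file names are invented), not text taken from the standard:
<!NOTATION mpeg SYSTEM "mpeg-player">
<!ELEMENT video - - (audio, motionpicture)>
<!ENTITY landing SYSTEM "landing.mpg" NDATA mpeg>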
The above example shows the definition of video which consists of audio and motion
pictures.
The Open Document Architecture (ODA) was initially called the Office
Document Architecture because it supports mostly office-oriented applications. The
main goal of this document architecture is to support the exchange, processing and
presentation of documents in open systems. ODA has been endorsed mainly by the
computer industry, especially in Europe.
The main property of ODA is the distinction among content, logical structure and
layout structure. This is in contrast to SGML, where only a logical structure and the
contents are defined. ODA also defines semantics. The following figure shows these three
aspects linked to a document. One can imagine these aspects as three orthogonal views
of the same document. Each of these views represents one aspect; together they give the
actual document.
A content architecture describes for each medium: (1) the specification of the
elements, (2) the possible access functions and, (3) the data coding. Individual elements
are the Logical Data Units (LDUs), which are determined for each medium. The access
functions serve for the manipulation of individual elements. The coding of the data
determines the mapping with respect to bits and bytes.
ODA has content architectures for media text, geometrical graphics and raster
graphics. Contents of the medium text are defined through the Character Content
Architecture. The Geometric Graphics Content Architecture allows a content description
of still images. It also takes into account individual graphical objects. Pixel-oriented
still images are described through Raster Graphics Content Architecture. It can be a
bitmap as well as a facsimile.
The logical structure includes the partitioning of the content. Here, paragraphs
and individual headings are specified according to the tree structure; lists with their
entries are defined in the same way. Consider the following example:
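(The outline below is a sketch reconstructed from the description that follows; the exact ODA notation is not reproduced here.)
Article
  Preamble
  Body
    Chapter 1
      Heading
      Paragraphs ...
    Chapter 2
      Heading
      Paragraphs ...
  Postamble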
The above example describes the logical structure of an article. Each article
consists of a preamble, a body and a postamble. The body includes two chapters, both of
them start with headings. Content is assigned to each element of this logical structure.
The information architecture of ODA includes the cooperative models shown in the
following figure. The fundamental descriptive means of the structural and presentational
models are linked to the individual nodes which build a document. The document is seen
as a tree. Each node (also a document) is a constituent, or an object. It consists of a set
of attributes, which represent the properties of the node. A node itself includes a
concrete value or it defines relations between other nodes. Hereby, relations and
operators, as shown in the following table, are allowed.
A formatted document includes the specific layout structure, and eventually the
generic layout structure. It can be printed directly or displayed, but it cannot be
changed.
A processable document consists of the specific logical structure, eventually the
generic logical structure, and later of the generic layout structure. The document
cannot be printed directly or displayed. Change of content is possible.
A formatted processable document is a mixed form. It can be printed, displayed
and the content can be changed.
For the communication of an ODA document, the representation model, show in the
following Figure is used. This can be either the Open Document Interchange Format
(ODIF) (based on ASN.1), or the Open Document Language (ODL) (based on SGML).
The manipulation model in ODA, shown in the above figure, makes use of
Document Application Profiles (DAPs). These profiles define subsets of ODA
functionality (Text Only; Text + Raster Graphics + Geometric Graphics; Advanced Level).
Logical Structures
Extensions for multimedia of the logical structure also need to be considered. For
example, a film can include a logical structure. It could be a tree with the following
components:
1. Prelude
Introductory movie segment
Participating actors in the second movie segment
2. Scene 1
3. Scene 2
4. …
5. Postlude
Such a structure would often be desirable for the user. This would allow one to
deterministically skip some areas and to show or play other areas.
Layout Structure
The layout structure needs extensions for multimedia. The time relation between a
motion picture and audio must be included. Further, questions such as When will
something be played?, From which point?, and With which attributes and dependencies?
must be answered.
The spatial relation can specify, for example, relative and absolute positions of
the audio object. Additionally, the volume and all other attributes and dependencies
should be determined.
12.4 MHEG
Figure: working groups within ISO/IEC JTC1/SC29 (Coding of Audio, Picture,
Multimedia and Hypermedia Information):
WG 1 - Coding of Still Pictures (JBIG/JPEG)
WG 11 - Coding of Moving Pictures and Associated Audio (MPEG)
WG 12 - Coding of Multimedia and Hypermedia Information (MHEG)
Figure: time diagram of an interactive presentation, showing the start of the presentation
along a time axis, the points at which the audio, text, image and video media are started or
stopped, and where user selection and modification take place.
Content
Behavior
The notion of behavior covers all information which specifies the representation of
the contents as well as defining the run of the presentation. The first part is controlled by
actions such as start, set volume, set position, etc. The second part is generated by the
definition of temporal, spatial and conditional links between individual elements. If the
state of the content's presentation changes, then this may result in further commands on
other objects (e.g., the deletion of the graphic causes the display of the text). The behavior
of a presentation can also be determined by calling external programs or functions
(scripts).
User Interaction
The following figure summarizes the individual elements in the MHEG class
hierarchy in the form of a tree. Instances can be created from all leaves (roman printed
classes). All internal nodes, including the root (italic printed classes), are abstract
classes, i.e., no instances can be generated from them. The leaves inherit some attributes
from the root of the tree as an abstract basic class. The internal nodes do not include any
further functions. Their task is to unify individual classes into meaningful groups. The
action, the link and the script classes are grouped under the behavior class, which defines
the behavior in a presentation.
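(The original figure is not reproduced here; the hierarchy it describes can be summarized by the following outline, in which the internal, abstract classes are marked.)
MH-Object (abstract root)
  Behavior (abstract)
    Action
    Link
    Script
  Interaction (abstract)
    Selection
    Modification
  Component (abstract)
    Content
    Composite
  Descriptor
  Macro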
The interaction class includes the user interaction, which is again modeled
through the selection and modification class. All the classes together with the content and
composite classes specify the individual components in the presentation and determine
the component class. Some properties of the particular MHEG engine can be queried by
the descriptor class. The macro class serves to simplify the access to, and reuse of,
objects. Both classes play a minor role; therefore, they will not be discussed further.
The development of the MHEG standard uses the techniques of object-oriented design.
Although a class hierarchy is considered a kernel of this technique, a closer look shows
that the MHEG class hierarchy does not have the meaning it is often assigned.
MH-Object-Class
The abstract MH-Object class passes on to all other classes the two data structures
MHEG Identifier and Descriptor.
MHEG Identifier consists of the attributes Application Identifier and Object Number,
and it serves for addressing MHEG objects. The first attribute identifies a specific
application. The Object Number is a number which is defined only within the application.
The data structure Descriptor provides the possibility to characterize more precisely each
MHEG object through a number of optional attributes. For example, this can become
meaningful if a presentation is decomposed into individual objects and the individual
MHEG objects are stored in a database. Any author, supported by a proper search
function, can reuse existing MHEG objects.
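A rough sketch of this addressing scheme, with field names assumed for illustration rather than taken verbatim from the standard:

    # Sketch of MHEG object addressing: every object carries an identifier made
    # of an application identifier plus an object number that is unique only
    # within that application. Field names are assumptions for illustration.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class MHEGIdentifier:
        application_id: str      # identifies a specific application
        object_number: int       # unique only within that application

    @dataclass
    class MHObject:
        identifier: MHEGIdentifier
        descriptor: dict = field(default_factory=dict)   # optional attributes

    # A database of reusable objects, keyed by their identifier.
    store = {}
    obj = MHObject(MHEGIdentifier("encyclopedia-app", 42), {"keywords": ["intro clip"]})
    store[obj.identifier] = obj

    # An author can later look up and reuse the object.
    reused = store[MHEGIdentifier("encyclopedia-app", 42)]
    print(reused.descriptor["keywords"])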
12.8 References
1. "Multimedia: Computing, Communications and Applications" by Ralf Steinmetz and
Klara Nahrstedt.
2. "Multimedia: Making It Work" by Tay Vaughan.
3. www.w3.org/MarkUp/SGML/
4. http://en.wikipedia.org/wiki/SGML
This lesson is intended to teach the learner the basic tools (software) used for
creating and capturing multimedia. The second part of this lesson educates the user on
various video file formats.
13.1 Introduction
The basic tool set for building a multimedia project contains one or more authoring
systems and various editing applications for text, images, sound, and motion video. A
few additional applications are also useful for capturing images from the screen,
translating file formats, and making multimedia production easier.
A word processor is usually the first software tool computer users rely upon for
creating text. The word processor is often bundled with an office suite.
An image-editing application should typically provide features such as:
Multiple windows that provide views of more than one image at a time
Conversion of major image-data types and industry-standard file formats
Direct inputs of images from scanner and video sources
Employment of a virtual memory scheme that uses hard disk space as RAM for
images that require large amounts of memory
Capable selection tools, such as rectangles, lassos, and magic wands, to select
portions of a bitmap
Image and balance controls for brightness, contrast, and color balance
Good masking features
Multiple undo and restore features
Anti-aliasing capability, and sharpening and smoothing controls
Color-mapping controls for precise adjustment of color balance
Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and
tinting
Geometric transformation such as flip, skew, rotate, and distort, and perspective
changes
Ability to resample and resize an image
Ability to create images from scratch, using line, rectangle, square, circle, ellipse,
polygon, airbrush, paintbrush, pencil, and eraser tools, with customizable brush
shapes and user-definable bucket and gradient fills
Multiple typefaces, styles, and sizes, and type manipulation and masking routines
Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco,
graphic pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl,
watercolor, wave, and wind
Support for third-party special effect plug-ins
Ability to design in layers that can be combined, hidden, and reordered
Plug-Ins
Image-editing programs usually support powerful plug-in modules available from
third-party developers that allow you to wrap, twist, shadow, cut, diffuse, and otherwise
“filter” your images for special visual effects.
List a few image editing features that an image editing tool should possess.
Painting and drawing tools, as well as 3-D modelers, are perhaps the most
important items in the toolkit because, of all the multimedia elements, the graphical
impact of the project will likely have the greatest influence on the end user. If the
artwork is amateurish, or flat and uninteresting, both the creator and the users will be
disappointed.
Some software applications combine drawing and painting capabilities, but many
authoring systems can import only bitmapped images. Typically, bitmapped images
provide the greatest choice and power to the artist for rendering fine detail and effects,
and today bitmaps are used in multimedia more often than drawn objects. Some vector-
based packages such as Macromedia’s Flash are aimed at reducing file download times
on the Web, and may contain both bitmaps and drawn art. The anti-aliased character
shown in the bitmap of Color Plate 5 is an example of the fine touches that improve the
look of an image.
An intuitive graphical user interface with pull-down menus, status bars, palette
control, and dialog boxes for quick, logical selection
Scalable dimensions, so you can resize, stretch, and distort both large and small
bitmaps
Paint tools to create geometric shapes, from squares to circles and from curves to
complex polygons
Ability to pour a color, pattern, or gradient into any area
Ability to paint with patterns and clip art
Customizable pen and brush shapes and sizes
Eyedropper tool that samples colors
Auto trace tool that turns bitmap shapes into vector-based outlines
Support for scalable text fonts and drop shadows
Multiple undo capabilities, to let you try again
Painting features such as smoothing coarse-edged objects into the background
with anti-aliasing, airbrushing in variable sizes, shapes, densities, and patterns;
washing colors in gradients; blending; and masking
Support for third-party special effect plug-ins
Object and layering capabilities that allow you to treat separate elements
independently
Zooming, for magnified pixel editing
All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and gray-scale
Good color management and dithering capability among color depths using
various color models such as RGB, HSB, and CMYK
Good palette management when in 8-bit mode
Good file importing and exporting capability for image formats such as PIC, GIF,
TGA, TIF, WMF, JPG, PCX, EPS, PTN, and BMP
Sound editing tools for both digitized and MIDI sound let you hear music as well as
create it. By drawing a representation of a sound in fine increments, whether a score or a
waveform, it is possible to cut, copy, paste and otherwise edit segments of it with great
precision.
System sounds are shipped with both Macintosh and Windows systems, and they are
available as soon as the operating system is installed. For MIDI sound, a MIDI synthesizer
is required to play and record sounds from musical instruments. For ordinary sound there
is a variety of software such as SoundEdit, MP3cutter and Wavestudio.
Animation and digital movies are sequences of bitmapped graphic scenes (frames),
rapidly played back. Most authoring tools adopt either a frame- or object-oriented
approach to animation.
A video format describes how one device sends video pictures to another device,
such as the way that a DVD player sends pictures to a television or a computer to a
monitor. More formally, the video format describes the sequence and structure of frames
that create the moving video image.
Video formats are commonly known in the domain of commercial broadcast and
consumer devices; most notably to date, these are the analog video formats of NTSC,
PAL, and SECAM. However, video formats also describe the digital equivalents of the
commercial formats, the aging custom military uses of analog video (such as RS-170 and
RS-343), the increasingly important video formats used with computers, and even
offbeat formats such as color field sequential.
Video formats were originally designed for display devices such as CRTs.
However, because other kinds of displays have common source material and because
video formats enjoy wide adoption and have convenient organization, video formats are a
common means to describe the structure of displayed visual information for a variety of
graphical output devices.
A frame can consist of two or more fields, sent sequentially, that are displayed
over time to form a complete frame. This kind of assembly is known as interlace.
An interlaced video frame is distinguished from a progressive scan frame, where
the entire frame is sent as a single intact entity.
A frame consists of a series of lines, known as scan lines. Scan lines have a
regular and consistent length in order to produce a rectangular image. This is
because in analog formats, a line lasts for a given period of time; in digital
formats, the line consists of a given number of pixels. When a device sends a
frame, the video format specifies that devices send each line independently from
any others and that all lines are sent in top-to-bottom order.
As above, a frame may be split into fields – odd and even (by line "numbers") or
upper and lower, respectively. In NTSC, the lower field comes first, then the
upper field, and that's the whole frame. The basics of a format are Aspect Ratio,
Frame Rate, and Interlacing with field order if applicable: Video formats use a
sequence of frames in a specified order. In some formats, a single frame is
independent of any other (such as those used in computer video formats), so the
sequence is only one frame. In other video formats, frames have an ordered
position. Individual frames within a sequence typically have similar construction.
However, depending on its position in the sequence, frames may vary small
elements within them to represent additional information. For example, MPEG-2
compression may eliminate the information that is redundant frame-to-frame in
order to reduce the data size, preserving the information relating to changes
between frames.
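The basic properties named above (aspect ratio, frame rate, interlacing and field order) can be collected in a small record; a sketch, with commonly quoted but illustrative values for NTSC and PAL:

    # Sketch: the basics of a video format are aspect ratio, frame rate and
    # interlacing (with field order where applicable).
    from dataclasses import dataclass

    @dataclass
    class VideoFormat:
        name: str
        aspect_ratio: tuple       # (width, height)
        frame_rate: float         # frames per second
        interlaced: bool
        field_order: str = None   # "lower first" / "upper first" if interlaced

    ntsc = VideoFormat("NTSC", (4, 3), 29.97, True, "lower first")
    pal = VideoFormat("PAL", (4, 3), 25.0, True, "upper first")

    def field_rate(fmt):
        """Two fields per frame when interlaced, otherwise one."""
        return fmt.frame_rate * (2 if fmt.interlaced else 1)

    print(field_rate(ntsc))   # roughly 59.94 fields per second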
Well-known broadcast video standards include:
NTSC
PAL
SECAM
ATSC Standards
DVB
ISDB
These refer strictly to the format of the video itself, and not to the modulation used for
transmission.
13.7.3 QuickTime
QuickTime is integral to Mac OS X, as it was with earlier versions of Mac OS. All Apple
systems ship with QuickTime already installed, as it represents the core media framework
for Mac OS X. QuickTime is optional for Windows systems, although many software
applications require it. Apple bundles it with each iTunes for Windows download, but it
is also available as a stand-alone installation.
QuickTime players
QuickTime is distributed free of charge, and includes the QuickTime Player application.
Some other free player applications that rely on the QuickTime framework provide
features not available in the basic QuickTime Player. For example:
iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless.
QuickTime framework
The QuickTime framework provides the following, among other things:
Encoding and transcoding video and audio from one format to another.
Decoding video and audio, and then sending the decoded stream to the graphics or
audio subsystem for playback. In Mac OS X, QuickTime sends video playback to
the Quartz Extreme (OpenGL) Compositor.
A plug-in architecture for supporting additional codecs (such as DivX).
The framework supports the following file types and codecs natively:
Audio
Apple Lossless
Audio Interchange (AIFF)
Digital Audio: Audio CD - 16-bit (CDDA), 24-bit, 32-bit integer & floating
point, and 64-bit floating point
MIDI
MPEG-1 Layer 3 Audio (.mp3)
MPEG-4 AAC Audio (.m4a, .m4b, .m4p)
Sun AU Audio
ULAW and ALAW Audio
Waveform Audio (WAV)
Video
QuickTime Movie
File extensions: .mov, .qt
MIME type: video/quicktime
The QuickTime (.mov) file format functions as a multimedia container file that contains
one or more tracks, each of which stores a particular type of data: audio, video, effects, or
text (for subtitles, for example). Other file formats that QuickTime supports natively (to
varying degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional
QuickTime Extensions, it can also support Ogg, ASF, FLV, MKV, DivX Media Format,
and others.
13.11 References
1. "Multimedia: Computing, Communications and Applications" by Ralf Steinmetz and
Klara Nahrstedt.
2. "Multimedia: Making It Work" by Tay Vaughan.
3. "Multimedia in Practice: Technology and Applications" by Jeffcoate.
4. www.apple.com/quicktime/
5. http://en.wikipedia.org/wiki/QuickTime
Contents
14.0 Aims and Objectives
14.1 Introduction
14.2 OLE
14.3 DDE
14.4 Office Suites
14.4.1 Components available in Office Suites
14.4.2 Currently available Office Suites
14.4.3 Comparison of Office Suites
14.5 Presentation Tools
14.6 Let us sum up
14.7 Lesson-end activities
14.8 Model answers to “Check your progress”
14.9 References
14.1 Introduction
14.2 OLE
An OLE object is any object that implements the IOleObject interface, possibly along with a
wide range of other interfaces, depending on the object's needs.
Overview of OLE
OLE allows an editor to “farm out” part of a
document to another editor and then re-import it. For example, a desktop publishing
system might send some text to a word processor or a picture to a bitmap editor using
OLE. The main benefit of using OLE is to display visualizations of data from other
programs that the host program is not normally able to generate itself (e.g. a pie-chart in a
text document), as well as to create a master file. References to the data in this file can be
made, and when the data in the master file is later changed, the change takes effect in the
referencing document. This is called "linking" (instead of "embedding").
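A toy sketch of the difference between embedding and linking, with plain Python data structures standing in for the compound document and the master file:

    # Sketch of linking versus embedding. A hypothetical compound document
    # either stores a copy of the data (embedding) or only a reference to a
    # master file (linking).
    master = {"chart": [1, 2, 3]}           # stands in for the master file

    embedded_doc = {"chart": list(master["chart"])}   # embedding: private copy
    linked_doc = {"chart_ref": master}                # linking: reference only

    master["chart"].append(4)               # the master data changes

    print(embedded_doc["chart"])             # [1, 2, 3]    - unchanged copy
    print(linked_doc["chart_ref"]["chart"])  # [1, 2, 3, 4] - change takes effect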
Its primary use is for managing compound documents, but it is also used for
transferring data between different applications using drag and drop and clipboard
operations. The concept of "embedding" is also central to much use of multimedia in
Web pages, which tend to embed video, animation (including Flash animations), and
audio files within the hypertext markup language (such as HTML or XHTML) or other
structural markup language used (such as XML or SGML) — possibly, but not
necessarily, using a different embedding mechanism than OLE.
History of OLE
OLE 1.0
OLE 1.0, released in 1990, was the evolution of the original dynamic data
exchange, or DDE, concepts that Microsoft developed for earlier versions of
Windows. While DDE was limited to transferring limited amounts of data
between two running applications, OLE was capable of maintaining active links
between two documents or even embedding one type of document within another.
OLE servers and clients communicate with system libraries using virtual
function tables, or VTBLs. The VTBL consists of a structure of function pointers
that the system library can use to communicate with the server or client. The
server and client libraries, OLESVR.DLL and OLECLI.DLL, were originally designed
to communicate between themselves using the WM_DDE_EXECUTE message.
This allows applications to display the object without loading the application used
to create the object, while also allowing the object to be edited, if the appropriate
application is installed. For example, if an OpenOffice.org Writer object is
embedded, both a visual representation as an Enhanced Metafile is stored, as well
as the actual text of the document in the Open Document Format.
OLE 2.0
OLE 2.0 was the next evolution of OLE 1.0, sharing many of the same
goals, but was re-implemented over top of the Component Object Model instead
of using VTBLs. New features were automation, drag-and-drop, in-place
activation and structured storage.
14.3 DDE
Dynamic Data Exchange was first introduced in 1987 with the release of
Windows 2.0. It utilized the "Windows Messaging Layer" functionality within
Windows. This is the same system used by the "copy and paste" functionality. Therefore,
DDE continues to work even in modern versions of Windows. Newer technology has
been developed that has, to some extent, overshadowed DDE (e.g. OLE, COM, and OLE
Automation), however, it is still used in several places inside Windows, e.g. for Shell file
associations.
The primary function of DDE is to allow Windows applications to share data. For
example, a cell in Microsoft Excel could be linked to a value in another application and
when the value changed, it would be automatically updated in the Excel spreadsheet. The
data communication was established by a simple, three-segment model. Each program
was known to DDE by its "application" name. Each application could further organize
information by groups known as "topic" and each topic could serve up individual pieces
of data as an "item". For example, if a user wanted to pull a value from Microsoft Excel
which was contained in a spreadsheet called "Book1.xls" in the cell in the first row and
first column, the application would be "Excel", the topic "Book1.xls" and the item "r1c1".
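A sketch of this three-segment addressing, using an ordinary dictionary in place of a real DDE server, purely to illustrate the application/topic/item naming scheme:

    # Sketch of the DDE three-segment addressing model (application, topic, item).
    # The 'servers' dictionary stands in for running applications; it is not a
    # real DDE API, only an illustration of the naming scheme.
    servers = {
        "Excel": {                      # application
            "Book1.xls": {              # topic
                "r1c1": 42,             # item -> current value
            }
        }
    }

    def dde_request(application, topic, item):
        return servers[application][topic][item]

    print(dde_request("Excel", "Book1.xls", "r1c1"))   # 42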
A common use of DDE was for custom developed applications to control off-the-
shelf software, e.g. a custom in-house application written in C or some other language
might use DDE to open a Microsoft Excel spreadsheet and fill it with data, by opening a
DDE conversation with Excel and sending it DDE commands. Today, however, one
could also use the Excel object model with OLE Automation (part of COM).
While newer technologies like COM offer features DDE doesn't have, there are also
issues with regard to configuration that can make COM more difficult to use than DDE.
NetDDE
/Windows/SYSTEM32
DDESHARE.EXE (DDE Share Manager)
NDDEAPIR.EXE (NDDEAPI Server Side)
NDDENB32.DLL (Network DDE NetBIOS Interface)
NETDDE.EXE (Network DDE - DDE Communication)
Microsoft licensed a basic (NetBIOS Frames protocol only) version of the product
for inclusion in various versions of Windows from Windows for Workgroups to
Windows XP. In addition, Wonderware also sold an enhanced version of NetDDE to
their own customers that included support for TCP/IP. Basic Windows applications
using NetDDE are ClipBook Viewer, WinChat and Microsoft Hearts.
NetDDE was still included with Windows Server 2003 and Windows XP Service Pack
2, although it was disabled by default. It has been removed entirely in Windows Vista.
However, this will not prevent existing versions of NetDDE from being installed and
functioning on later versions of Windows.
14.4 Office Suites
An office suite is a software suite intended to be used by typical clerical and knowledge
workers. The applications in a suite generally share a similar user
interface and usually can interact with each other, sometimes in ways that the operating
system would not normally allow.
Most office application suites include at least a word processor and a spreadsheet
element. In addition to these, the suite may contain a presentation program, database tool,
graphics suite and communications tools. An office suite may also include an email client
and a personal information manager or groupware package.
The currently dominant office suite is Microsoft Office, which is available for
Microsoft Windows and the Apple Macintosh. It has become a proprietary de-facto
standard in office software.
A new category of "online word processors" allows editing of centrally stored documents
using a web browser.
A comparison of office suites covers general and technical information for each product;
only systems that are widely used and currently available are normally considered.
14.5 Presentation Tools
Presentation programs include the following:
Adobe Persuasion
AppleWorks
Astound by Gold Disk Inc.
Beamer (LaTeX)
Google Docs which now includes presentations
Harvard Graphics
HyperCard
IBM Lotus Freelance Graphics
Macromedia Action!
MagicPoint
Microsoft PowerPoint
OpenOffice.org Impress
Scala Multimedia
Screencast
Umibozu (wiki)
VCN ExecuVision
In this lesson we have learnt how different multimedia components are linked. The
following points have been discussed in this lesson.
- The use of OLE in linking multimedia objects. Object Linking and Embedding
(OLE) is a technology that allows embedding and linking to documents and
other objects. OLE was developed by Microsoft.
- Dynamic Data Exchange (DDE) is a technology for communication between
multiple applications under Microsoft Windows and OS/2.
- An office suite is a software suite intended to be used by typical clerical
and knowledge workers.
- The currently dominant office suite is Microsoft Office, which is available for
Microsoft Windows and the Apple Macintosh.
1. Check the different office suites available on your local computer
and list the different file formats supported by each office suite.
2. List the different presentation tools available for creating
presentations.
14.9 References
Contents
15.0 Aims and Objectives
15.1 Introduction
15.2 User interfaces
15.3 General design issues
15.4 Effective Human Computer Interaction
15.5 Video at the user interface
15.6 Audio at the user interface
15.7 User friendliness as the primary goal
15.8 Let us sum up
15.9 Model answers to “Check your progress”
15.10 References
In this lesson we will learn how human-computer interaction is handled effectively
when a multimedia presentation is designed.
At the end of the lesson the reader will be able to understand user interface and
the various criteria that are to be satisfied in order to have effective human-computer
interface.
15.1 Introduction
In computer science, we understand the user interface as the interactive input and
output of a computer as it is perceived and operated on by users. Multimedia user
interfaces are used for making the multimedia content active. Without user interface the
multimedia content is considered to be linear or passive.
Multimedia user interfaces are computer interfaces that communicate with users
using multiple media modes such as written text together with spoken language.
Multimedia would be without much value without user interfaces. The input
media determines not only how human computer interaction occurs but also how well.
Graphical user interfaces – using the mouse as the main input device – have greatly
simplified human-machine interaction.
Types
Characterization schemes are based on ordering information. There are two types of
ordered data: (1) coordinate versus amount, which signify points in time, space or
other domains; or (2) intervals versus ratio, which suggests the types of comparisons
meaningful among elements of coordinate and amount data types.
Relational Structures
This group of characteristics refers to the way in which a relation maps among its
domain sets (dependency). There are functional dependencies and non-functional
dependencies. An example of a relational structure which expresses functional
dependency is a bar chart. An example of a relational structure which expresses non-
functional dependency is a student entry in a relational database.
Multi-domain Relations
Relations can be considered across multiple domains, such as: (1) multiple attributes
of a single object set (e.g., positions, colors, shapes, and/or sizes of a set of objects in a
chart); (2) multiple object sets (e.g., a cluster of text and graphical symbols on a map);
and (3) multiple displays.
Control selection is the key to convey the information to the user. However, we
are not free in the selection of it because content can be influenced by constraints
imposed by the size and complexity of the presentation.
The main issues the user interface designer should keep in mind: (1) context; (2)
linkage to the world beyond the presentation display; (3) evaluation of the interface with
15.5 Video at the user interface
A continuous sequence of at least 15 individual images per second gives a rough
perception of a continuous motion picture. At the user interface, video is implemented
through a continuous sequence of individual images. Hence, video can be manipulated at
this interface similarly to the manipulation of individual still images.
15.6 Audio at the user interface
Audio can be implemented at the user interface for application control. Thus,
speech analysis is necessary.
In the case of monophony, all audio sources have the same spatial location. A
listener can only properly understand the loudest audio signal. The same effect can be
simulated by closing one ear. Stereophony allows listeners with bilateral hearing
capabilities to hear lower intensity sounds. It is important to mention that the main
advantage of bilateral hearing is not the spatial localization of audio sources, but the
extraction of less intensive signals in a loud environment.
A user-friendly interface must also have the property that the user easily
remembers the application instruction rules. Easily remembered instructions
might be supported by the intuitive association to what the user already knows.
The user interface should enable effective use of the application. This means:
15.7.5 Aesthetics
With respect to aesthetics, the color combination, character sets, resolution and
form of the window need to be considered. They determine a user’s first and lasting
impressions.
User interfaces use different ways to specify entries for the user:
Entries in a menu
In menus there are visible and non-visible entry elements. Entries which are
relevant to the task are to be made available for easy menu selection.
Entries on a graphical interface
If the interface includes text, the entries can be marked through color
and/or different font
If the interface includes images, the entries can be written over the
image.
15.7.7 Presentation
The presentation, i.e., the optical image at the user interface, can have the following
variants:
Full text
Abbreviated text
Icons, i.e., graphics
Micons, i.e., motion video
The main emphasis has been on video and audio media because they represent live
information. At the user interface, these media become important because they help users
learn by enabling them to choose how to distribute research responsibilities among
applications (e.g., on-line encyclopedias, tutors, simulations), to compose and integrate
results and to share learned material with colleagues (e.g., video conferencing).
In this lesson we have learnt about the effective use of the user interface; the key
points discussed include general design issues, video and audio at the user interface,
and user friendliness as the primary goal.
15.10 References
UNIT – IV
16.1 Introduction
The consideration of multimedia applications supports the view that local systems
expand toward distributed solutions. Applications such as kiosks, multimedia mail,
collaborative work systems, virtual reality applications and others require high-speed
networks with a high transfer rate and communication systems with adaptive, lightweight
transmission protocols on top of the networks.
The current infrastructure of networked workstations and PCs, and the availability
of audio and video at these end-points, makes it easier for people to cooperate and bridge
space and time. In this way, network connectivity and end-point integration of
multimedia provides users with a collaborative computing environment. Collaborative
computing is generally known as Computer-Supported Cooperative Work (CSCW).
There are many tools for collaborative computing, such as electronic mail,
bulletin boards (e.g., Usenet news), screen sharing tools (e.g., ShowMe from SunSoft),
text-based conferencing systems (e.g., Internet Relay Chat, CompuServe, America
Online), telephone conference systems, conference rooms (e.g., VideoWindow from
Bellcore), and video conference systems (e.g., MBone tools nv, vat). Further, there are
many implemented CSCW systems that unify several tools, such as Rapport from AT&T,
MERMAID from NEC and others.
Time
With respect to time, there are two modes of cooperative work: asynchronous and
synchronous. Asynchronous cooperative work specifies processing activities that do not
happen at the same time; the synchronous cooperative work happens at the same time.
User Scale
The user scale parameter specifies whether a single user collaborates with another user or
a group of more than two users collaborates together. Groups can be further classified as
follows:
A group may be static or dynamic during its lifetime. A group is static if its
participating members are pre-determined and membership does not change
during the activity. A group is dynamic if the number of group members varies
during the collaborative activity, i.e., group members can join or leave the activity
at any time.
Group members may have different roles in the CSCW, e.g., a member of a group
(if he or she is listed in the group definition), a participant of a group activity (if
he or she successfully joins the conference), a conference initiator, a conference
chairman, a token holder or an observer.
Groups may consist of members who have homogeneous or heterogeneous
characteristics and requirements for their collaborative
environment.
Control
Other partition parameters may include locality and collaboration awareness. Locality
partition means that a collaboration can occur either in the same place (e.g., a group
meeting in an office or conference room) or among users located in different places
through tele-collaboration.
Group communication agents may use the following for their collaboration:
Group Rendezvous
Group rendezvous denotes a method which allows one to organize
meetings, and to get information about the group, ongoing meetings and
other static and dynamic information.
Shared Applications
Application sharing denotes techniques which allow one to replicate
information to multiple users simultaneously. The remote users may point
to interesting aspects (e.g., via tele-pointing) of the information and
modify it so that all users can immediately see the updated information
The GC system model is based on a client-server model. Clients provide user interfaces
for smooth interaction between group members and the system. Servers supply functions
for accomplishing the group communication work, and each server specializes in its own
function.
Centralized Architecture
Replicated Architecture
The advantages of this architecture are low network traffic, because only input
events are distributed among the sites, and low response times, since all
participants get their output from local copies of the application. The
disadvantages are the requirement of the same execution environment for the
application at each site, and the difficulty in maintaining consistency.
16.4 Conferencing
Conferencing supports collaborative computing by providing video and audio for
simultaneous face-to-face communication. More precisely, video and audio have the
following purposes in a tele-conferencing system:
For conferences with more than three or four participants, the screen resources on
a PC or workstation run out quickly, particularly if other applications, such as
shared editors or drawing spaces, are used. Hence, mechanisms which quickly
resize individual images should be used.
16.5.1 Architecture
Session Manager
Media agents
Media agents are separate from the session manager and they are responsible
for decisions specific to each type of media. The modularity allows replacement
of agents. Each agent performs its own control mechanism over the particular
medium, such as mute, unmute, change video quality, start sending, stop sending,
etc.
The shared workspace agent transmits shared objects (e.g., telepointer co-
ordinates, graphical or textual objects) among the shared applications.
Each session is described through the session state. This state information is
either private or shared among all session participants. Depending on the functions
which an application requires and a session control provides, several control mechanisms
are embedded in session management:
Floor control: In a shared workspace, the floor control is used to provide access
to the shared workspace. The floor control in shared application is often used to
maintain data consistency.
Search the internet for a multimedia network and list the application system
features available in that network.
16.9 References
We present in this section a brief overview of transport and network protocols and
their functionalities, which are used for multimedia transmissions. We evaluate them
with respect to their suitability for this task.
17.1 Introduction
Requirements
Data Throughput
Audio and video data exhibit a stream-like behavior, and they demand, even in
a compressed mode, high data throughput. In a workstation or network, several of
these streams may exist concurrently, demanding a high throughput. Further, the
data movement requirements on the local end-system translate into the
manipulation of large quantities of data in real-time where, for example, data
copying can create a bottleneck in the system.
Service Guarantees
Multicasting
A typical multimedia application does not require processing of audio and video
to be performed by the application itself. Usually the data are obtained from a source
(e.g., microphone, camera, disk, and network) and are forwarded to a sink (e.g.,
speaker, display, and network). In such a case, the requirements of continuous-media
data are satisfied best if they take the “shortest possible path” through the system,
i.e., data are copied directly from adapter to adapter, and the program merely sets the
correct switches for the data flow by connecting sources to sinks. Hence, the
application itself never really touches the data as is the case in traditional processing.
A problem with direct copying from adapter-to-adapter is the control and the
change of QoS parameters. In multimedia systems, such an adapter-to-adapter
connection is defined by the capabilities of the two involved adapters and the bus
performance. In today’s systems, this connection is static. This architecture of low-
level data streaming corresponds to proposals for using additional new busses for
audio and video transfer within a computer. It also enables a switch-based rather than
bus-based data transfer architecture. Note, in practice we encounter headers and
trailers surrounding continuous-media data coming from devices and being delivered
to devices. In the case of compressed video data, e.g., MPEG-2, the program stream
contains several layers of headers compared with the actual group of pictures to be
displayed.
Protocols involve a lot of data movement because of the layered structure of the
communication architecture. But copying of data is expensive and has become a
bottleneck, hence other mechanisms for buffer management must be found.
Different layers of the communication system may have different PDU sizes;
therefore, segmentation and reassembly occur. This phase has to be done fast and
efficiently. Hence, this portion of a protocol stack, at least in the lower layers, is done
in hardware, or through efficient mechanisms in software.
First, we present transport protocols, such as TCP and UDP, which are used in the
Internet protocol stack for multimedia transmission, and secondly we analyze new
emerging transport protocols, such as RTP, XTP and other protocols, which are suitable
for multimedia.
During the data transmission over the TCP connection, TCP must achieve
reliable, sequenced delivery of a stream of bytes by means of an underlying,
unreliable datagram service. To achieve this, TCP makes use of retransmission
on timeouts and positive acknowledgments upon receipt of information. Because
retransmission can cause both out-of-order arrival and duplication of data,
sequence numbering is crucial. Flow control in TCP makes use of a window
technique in which the receiving side of the connection reports to the sending side
the sequence numbers it may transmit at any time and those it has received
contiguously thus far.
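A minimal sketch of this window mechanism, assuming byte-numbered sequence numbers and an advertised window size (the numbers are illustrative only):

    # Sketch of window-based flow control: the receiver reports what it has
    # received contiguously and how many further bytes may be outstanding.
    def sendable(next_seq, last_acked, advertised_window):
        """Return how many more bytes the sender may transmit right now."""
        in_flight = next_seq - last_acked
        return max(advertised_window - in_flight, 0)

    # Receiver has acknowledged everything up to byte 1000 and advertises a
    # 4096-byte window; 1500 bytes are already in flight.
    print(sendable(next_seq=2500, last_acked=1000, advertised_window=4096))  # 2596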
RTP does not address resource reservation and does not guarantee QoS for real-
time services. This means that it does not provide mechanisms to ensure timely
delivery of data or guaranteed delivery, but relies on lower-layer services to do so.
RTP makes use of the network protocol ST-II or UDP/IP for the delivery of data.
It relies on the underlying protocol(s) to provide demultiplexing.
Profiles are used to specify certain parts of the header for particular sets of
applications. This means that particular media information is stored in an
audio/video profile, such as a set of formats (e.g., media encodings) and a default
mapping of those formats.
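As an illustration of what such profile-dependent header information builds on, the following sketch unpacks the fixed 12-byte RTP header defined in RFC 3550; the payload type field is what a profile maps to a concrete media encoding:

    # Sketch: unpacking the fixed 12-byte RTP header (per RFC 3550).
    import struct

    def parse_rtp_header(packet: bytes) -> dict:
        b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
        return {
            "version": b0 >> 6,
            "payload_type": b1 & 0x7F,   # interpreted via the profile's mapping
            "marker": b1 >> 7,
            "sequence": seq,
            "timestamp": ts,
            "ssrc": ssrc,
        }

    # Example 12-byte header: version 2, payload type 0 (PCMU in the AV profile).
    pkt = bytes([0x80, 0x00, 0x00, 0x01, 0x00, 0x00, 0x03, 0xE8,
                 0x12, 0x34, 0x56, 0x78])
    print(parse_rtp_header(pkt))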
XTP was designed to be an efficient protocol, taking into account the low error
ratios and higher speeds of current networks. It is still in the process of
augmentation by the XTP Forum to provide a better platform for the incoming
variety of applications. XTP integrates transport and network protocol
functionalities to have more control over the environment in which it operates.
XTP is intended to be useful in a wide variety of environments, from real-time
control systems to remote procedure calls in distributed operating systems and
distributed databases to bulk data transfer. It defines for this purpose six service types.
Some other designed transport protocols which are used for multimedia transmission
are:
The Tenet protocol suite for support of multimedia transmission was developed
by the Tenet Group at the University of California at Berkeley. The transport
protocols in this protocol stack are the Real-time Message Transport Protocol
(RMTP) and Continuous Media Transport Protocol (CMTP). They run above the
Real-Time Internet Protocol (RTIP).
Internet is currently going through major changes and extensions to meet the growing
need for real-time services from multimedia applications.
There are several protocols which are changing to provide integrated services, such as
best effort service, real-time service and controlled link sharing. The new service,
controlled link sharing, is requested by the network operators. They need the ability to
control the sharing of bandwidth on a particular link among different traffic classes.
They also want to divide traffic into a few administrative classes and assign to each a
minimal percentage of the link bandwidth under conditions of overload, while allowing
‘unused’ bandwidth to be available at other times.
This protocol was specified by IETF to provide one of the components for
integrated services on the Internet. To implement integrated services, four
components need to be implemented: the packet scheduler, admission control
routine, classifier, and the reservation setup protocol.
Transport protocols, such as TCP and UDP, are used in the Internet protocol stack
for multimedia transmission; the new emerging transport protocols, such as RTP,
XTP and others, are also suitable for multimedia.
Right-click the My Network Places icon on your computer and choose
Properties. Write down the list of protocols installed on your computer and find out the
use of each protocol.
17.8 References
In this chapter we will learn how to measure and improve the quality of service in
multimedia transmission.
At the end of this lesson, the learner will have a clear idea of the parameters used
in measuring the quality of service and the various measures that are taken to ensure
quality of the multimedia content and its transmission.
18.1 Introduction
QoS Layering
Traditional QoS (ISO standards) was provided by the network layer of the
communication system. An enhancement of QoS was achieved through inducing QoS
into transport services. For MCS, the QoS notion must be extended because many other
services contribute to the end-to-end service quality.
To discuss QoS and resource management further, we need a layered model of the
MCS with respect to QoS; we refer throughout this lesson to the model shown in the
following figure. The MCS consists of three layers: application, system (including
communication services and operating system services), and devices (network and
Multimedia (MM) devices). Above the application may or may not reside a human user.
This implies the introduction of QoS in the application (application QoS), in the system
(system QoS) and in the network (network QoS). In the case of having a human user, the
MCS may also have a user QoS specification. We concentrate in the network layer on
the network device and its QoS because it is of interest to us in the MCS. The MM
devices find their representation (partially) in application QoS.
Figure showing the QoS-layered model of the MCS: a (possible) human user with user
QoS, the application with application QoS, the system (operating and communication
system) with system QoS, and the MM devices and network with network QoS.
QoS Description
The set of chosen parameters for the particular service determines what will be
measured as the QoS. Most of the current QoS parameters differ from the parameters
described in ISO because of the variety of applications, media sent and the quality of the
networks and end-systems. This also leads to many different QoS parameterizations in
the literature. We give here one possible set of QoS parameters for each layer of MCS.
18.3 Translation
We always distinguish between user and application, system and network with
different QoS parameters. However, in future systems, there may be even more “layers”
or there may be hierarchy of layers, where some QoS values are inherited and others are
specific to certain components. In any case, it must always be possible to derive all QoS
values from the user and application QoS values. This derivation, known as translation,
may require “additional knowledge” stored together with the specific component. Hence,
translation is an additional service for layer-to-layer communication during the call
establishment phase. The split of parameters requires translation functions as follows:
The service which may implement the translation between a human user and
application QoS parameters is called a tuning service. A tuning service provides a
user with a Graphical User Interface (GUI) for input of application QoS, as well as
output of the negotiated application QoS. The translation is represented through
video and audio clips (in the case of audio-visual media), which will run at the
negotiated quality corresponding to, for example, the video frame resolution that the
end-system and the network can support.
Here, the translation must map the application requirements into the system QoS
parameters, which may lead to translation such as from “high quality”
synchronization user requirement to a small (milliseconds) synchronization skew
QoS parameter, or from video frame size to transport packet size. It may also be
connected with possible segmentation/reassembly functions.
This translation maps the system QoS (e.g., transport packet end-to-end delay) into
the underlying network QoS parameters (e.g., in ATM, the end-to-end delay of
cells) and vice versa.
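A simplified sketch of such a translation for uncompressed video, assuming a fixed transport packet size; real translation functions would also account for compression and protocol overhead:

    # Sketch: translating application QoS (frame geometry, color depth, frame
    # rate) into system/network QoS (throughput, packet size, packet rate).
    def application_to_system(width, height, bits_per_pixel, frame_rate, packet_size=1500):
        frame_bytes = width * height * bits_per_pixel // 8
        throughput = frame_bytes * frame_rate               # bytes per second
        packets_per_frame = -(-frame_bytes // packet_size)  # ceiling division
        return {"throughput_bytes_s": throughput,
                "packet_size": packet_size,
                "packet_rate": packets_per_frame * frame_rate}

    # 320x240 video, 16-bit colour, 25 frames per second.
    print(application_to_system(320, 240, 16, 25))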
QoS guarantees must be met in the application, system and network to get the
acceptance of the users of MCS. There are several constraints which must be satisfied to
provide guarantees during multimedia transmission: (1) time constraints which include
delays; (2) space constraints such as system buffers; (3) device constraints such as frame
grabbers allocation; (4) frequency constraints which include network bandwidth and
system bandwidth for data transmission; and, (5) reliability constraints. These constraints
can be specified if proper resource management is available at the end-points, as well as
in the network.
Rate Control
A rate-based service discipline is one that provides a client with a minimum service
rate independent of the traffic characteristics of other clients. Such a discipline, operating
at a switch, manages the following resources: bandwidth, service time (priority) and
buffer space. Several rate-based scheduling disciplines have been developed.
Fair Queuing
If N channels share an output trunk, then each one should get 1/Nth of the
bandwidth. If any channel uses less bandwidth than its share, then this portion is
shared among the rest equally. This mechanism can be achieved by the Bit-by-bit
Round Robin (BR) service among the channels. The BR discipline serves n
queues in round robin service, sending one bit from each queue that has a
packet in it. Clearly, this scheme is not efficient; hence, fair queuing emulates BR
as follows: each packet is given a finish number, which is the round number at
which the packet would have received service, if the server had been doing BR.
The packets are served in the order of the finish number. Channels can be given
different fractions of the bandwidth by assigning them weights, where weight
corresponds to the number of bits of service the channel receives per round of BR
service.
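A sketch of the finish-number computation described above, assuming static weights and a given current round number:

    # Sketch of fair queuing: each arriving packet gets a finish number, the
    # round at which bit-by-bit round robin would have finished serving it, and
    # packets are then sent in increasing finish-number order.
    def finish_number(previous_finish, current_round, packet_bits, weight=1):
        # A packet cannot start before the current round or before the previous
        # packet of the same channel has finished.
        start = max(previous_finish, current_round)
        return start + packet_bits / weight

    # Two channels, equal weights, both backlogged at round 0.
    f_a = finish_number(0, 0, packet_bits=1000)   # 1000.0
    f_b = finish_number(0, 0, packet_bits=500)    # 500.0 -> served first
    print(sorted([("A", f_a), ("B", f_b)], key=lambda x: x[1]))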
Virtual Clock
Delay Earliest-Due-Date (Delay EDD)
Delay EDD is an extension of EDF scheduling (Earliest Deadline First) where the
server negotiates a service contract with each source. The contract states that if a
source obeys a peak and average sending rate, then the server provides bounded
delay. The key then lies in the assignment of deadlines to packets. The server
sets a packet’s deadline to the time at which it should be sent, if it had been
received according to the contract. This actually is the expected arrival time added
to the delay bound at the server. By reserving bandwidth at the peak rate, Delay
EDD can assure each channel a guaranteed delay bound.
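A sketch of the Delay EDD deadline assignment, assuming a contracted inter-packet gap and a fixed per-node delay bound (values are illustrative):

    # Sketch of Delay EDD: the deadline of a packet is its expected arrival time
    # under the negotiated contract plus the delay bound granted by this server.
    # Packets are then served earliest deadline first (EDF).
    def assign_deadline(expected_arrival, delay_bound):
        return expected_arrival + delay_bound

    # A source with a contracted inter-packet gap of 40 ms and a 10 ms bound.
    expected = [0.0, 0.040, 0.080]
    deadlines = [assign_deadline(t, 0.010) for t in expected]
    print(deadlines)   # approximately [0.01, 0.05, 0.09] - served in this order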
Jitter Earliest-Due-Date (Jitter EDD)
Jitter EDD extends Delay EDD to provide delay-jitter bounds. After a packet has
been served at each server, it is stamped with the difference between its deadline
and actual finishing time. A regulator at the entrance of the next switch holds the
packet for this period before it is made eligible to be scheduled. This provides the
minimum and maximum delay guarantees.
Stop-and-Go
Hierarchical Round Robin (HRR)
An HRR server has several service levels where each level provides round robin
service to a fixed number of slots. Some number of slots at a selected level are
allocated to a channel and the server cycles through the slots at each level. The
time a server takes to service all the slots at a level is called the frame time at the
level. The key of HRR is that it gives each level a constant share of the
bandwidth. “Higher” levels get more bandwidth than “lower” levels, so the frame
time at a higher level is smaller than the frame time at a lower level. Since a
server always completes one round through its slots once every frame time, it can
provide a maximum delay bound to the channels allocated to that level.
Especially, the system domain needs to have QoS and resource management.
Several important issues, as described in detail in previous sections, must be considered
in the end-point architectures:
Some examples of architectural choices where QoS and resource management are
designed and implemented include the following:
1. The OSI architecture provides QoS in the network layer and some enhancements
in the transport layer. The OSI 95 project considers integrated QoS specification
and negotiation in the transport protocols.
2. Lancaster’s QoS-Architecture (QoS-A) offers a framework to specify and
implement the required performance properties of multimedia applications over
1. Consider any software of your choice. List down a few parameters that you
consider for determining QoS in the software
2. Consider any High level language of your own choice. Make a few
observations on how the compiler rectifies the errors on its own.
Fair Queuing
Virtual Clock
Delay Earliest-Due-Date (Delay EDD)
Jitter Earliest-Due-Date (Jitter EDD)
Stop-and-Go
Hierarchical Round Robin (HRR)
18.9 References
1. "Multimedia: Computing, Communications and Applications" by Ralf
Steinmetz and Klara Nahrstedt.
2. "Multimedia: Concepts and Practice" by Stephen McGloughlin.
Lesson 19 Synchronisation
Contents
This lesson aims at introducing the concept of synchronisation. At the end of this
lesson the learner will be able to :
i) know the meaning of synchronisation
ii) understand synchronisation of audio and other media
iii) understand the implementation of a reference model for synchronisation
19.1 Introduction
The simplest criterion is the number of media used in an application. Using only
this criterion, even a document processing application that supports text and graphics can
be regarded as a multimedia system.
The following figure classifies applications according to the three criteria. The
arrows indicate the increasing degree of multimedia capability for each criterion.
Integrated digital systems can support all types of media, and due to digital
processing, may provide a high degree of media integration. Systems that handle time
dependent analog media objects and time-independent digital media objects are called
hybrid systems. The disadvantage of hybrid systems is that they are restricted with
regard to the integration of time-dependent and time-independent media, because, for
example, audio and video are stored on different devices than time-independent media
objects, and multimedia workstations must comprise both types of devices.
Content Relations
Spatial Relations
The spatial relations that are usually known as layout relationships define the
space used for the presentation of a media object on an output device at a certain point of
time in a multimedia presentation. If the output device is two-dimensional (e.g., monitor
or paper), the layout specifies the two-dimensional area to be used.
Temporal Relations
Figures showing the temporal relation between an audio stream (40 ms units) and
video/animation objects P1, P2, P3, and the difference between live synchronization
(between a source and a sink) and synthetic synchronization.
In the specification phase, temporal relations between the media objects are
defined.
In the presentation phase, a run-time system presents data in a synchronized
mode.
In the case of synthetic synchronization, temporal relations have been assigned to media
objects that were created independently of each other. The synthetic synchronization is
often used in presentation and retrieval-based systems with stored data objects that are
arranged to provide new combined multimedia objects. A media object may be part of
several multimedia objects.
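A sketch of what a synthetic synchronization specification and the presentation schedule derived from it could look like; the object names and timings are invented:

    # Sketch: temporal relations are attached to independently created media
    # objects, and a run-time system later presents them in a synchronized mode.
    presentation = [
        {"object": "audio_1.wav", "start": 0.0, "duration": 8.0},
        {"object": "video_1.mpg", "start": 0.0, "duration": 8.0},  # parallel to audio_1
        {"object": "slide_1.gif", "start": 2.0, "duration": 6.0},
    ]

    def schedule(spec):
        """Return (time, action, object) events for the presentation phase."""
        events = []
        for item in spec:
            events.append((item["start"], "start", item["object"]))
            events.append((item["start"] + item["duration"], "stop", item["object"]))
        return sorted(events)

    for event in schedule(presentation):
        print(event)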
First the existing classification and structuring methods are described and then, a
four-layer reference model is presented and used for the classification of multimedia
synchronization systems in our case studies. As many multimedia synchronization
mechanisms operate in a networked environment, we also discuss special synchronization
issues in a distributed environment and their relation to the reference model.
For each layer, typical objects and operations on these objects are described in the
following. The semantics of the objects and operations are the main criteria for assigning
them to one of the layers.
Figure showing the four-layer synchronization reference model used by a multimedia
application, from low to high abstraction: media layer, stream layer, object layer and
specification layer.
Media Layer: At the media layer, an application operates on a single continuous media
stream, which is treated as a sequence of LDUs.
Stream Layer: The stream layer operates on continuous media streams, as well as on
groups of media streams. In a group, all streams are presented in parallel by using
mechanisms for interstream synchronization.
The abstraction offered by the stream layer is the notion of streams with timing
parameters concerning the QoS for intrastream synchronization in a stream and
interstream synchronization between streams of a group.
Continuous media is seen in the stream layer as a data flow with implicit time
constraints; individual LDUs are not visible. The streams are executed in a Real-Time
Environment (RTE), where all processing is constrained by well-defined time
specifications.
Object Layer: The object layer operates on all types of media and hides the differences
between discrete and continuous media.
The task of this layer is to close the gap between the needs for the execution of a
synchronized presentation and the stream-oriented services. The functions located at the
object layer are to compute and execute complete presentation schedules that include the
presentation of the non-continuous media objects and the calls to the stream layer.
Specification Layer: The specification layer is an open layer. It does not offer an explicit
interface. This layer contains applications and tools that are allowed to create
synchronization specifications. Such tools are synchronization editors, multimedia
document editors and authoring systems. Also located at the specification layer are tools
for converting specifications to an object layer format.
The specification layer is also responsible for mapping QoS requirements of the user
level to the qualities offered at the object layer interface.
19.12 References
We will examine in the remainder of this chapter different networks with respect
to their multimedia transmission capabilities.
20.1 Introduction
A multimedia networking system allows for the data exchange of discrete and
continuous media among computers. This communication requires proper services and
protocols for data transmission. Multimedia networking enables distribution of media to
different workstations.
A protocol consists of a set of rules which must be followed by peer layer instances
during any communication between these two peers. It comprises the format
(syntax) and the meaning (semantics) of the exchanged data units (Protocol Data Units =
PDUs). The peer instances of different computers cooperate together to provide a service.
Audio and video data processing needs to be bounded by deadlines or even defined by
a time interval. The data transmission, both between applications and between the
transport layer interfaces of the involved components, must comply with the
demands concerning the time domain.
End –to-end jitter must be bounded. This is especially important for interactive
applications such as the telephone. Large jitter values would mean large buffers
and higher end-to-end delays.
All guarantees necessary for achieving the data transfer within the required time
span must be met. This includes the required processor performance, as well as
the data transfer over a bus and the available storage for protocol processing.
Cooperative work scenarios using multimedia conference systems are the main
application areas of multimedia communication systems. These systems should
support multicast connections to save resources. The sender instance may often
change during a single session. Further, a user should be able to join or leave a
multicast group without having to request a new connection setup, which needs to
be handled by all other members of this group.
The services should provide mechanisms for synchronizing different data streams,
or alternatively perform the synchronization using available primitives
implemented in another system component.
The multimedia communication must be compatible with the most widely used
communication protocols and must make use of existing, as well as future
networks. Communication compatibility means that different protocols at least
coexist and run on the same machine simultaneously. The relevance of envisaged
protocols can only be achieved if the same protocols are widely used. Many of the
current multimedia communication systems are, unfortunately, proprietary
experimental systems.
The communication of discrete data should not starve because of preferred or
guaranteed video/audio transmission. Discrete data must be transmitted without
any penalty.
The fairness principle among different applications, users and workstations must
be enforced.
The actual audio/video data rate varies strongly. This leads to fluctuations of the
data rate, which needs to be handled by the services.
The physical layer defines the transmission method of individual bits over the
physical medium, such as fiber optics.
For example, the type of modulation and bit-synchronization are important issues.
With respect to the particular modulation, delays during the data transmission arise
due to the propagation speed of the transmission medium and the electrical circuits
used. They determine the maximal possible bandwidth of this communication
channel. For audio/video data in general, the delays must be minimized and a
relatively high bandwidth should be achieved.
The data link layer provides the transmission of information blocks known as data
frames. Further, this layer is responsible for access protocols to the physical medium,
error recognition and correction, flow control and block synchronization.
Access protocols are very much dependent on the network. Networks can be
divided into two categories: those using point-to-point connections and those using
broadcast channels, sometimes called multi-access channels or random access
channels. In a broadcast network, the key issue is how to determine, in the case of
competition, who gets access to the channel. To solve this problem, the Medium
Access Control (MAC) sublayer was introduced and MAC protocols, such as the
Timed Token Rotation Protocol and Carrier Sense Multiple Access with Collision
Detection (CSMA/CD), were developed.
The network layer transports information blocks, called packets, from one station
to another. The transport may involve several networks. Therefore, this layer
provides services such as addressing, internetworking, error handling, network
management with congestion control and sequencing of packets.
In the case of continuous media, multimedia sessions which reside over one or
more transport connections, must be established. This introduces a more complex
view on connection reconstruction in the case of transport problems.
The presentation layer abstracts from different formats (the local syntax) and
provides common formats (the transfer syntax). Therefore, this layer must provide
services for transformation between the application-specific formats and the agreed-
upon format. An example is the different representation of a number for Intel or
Motorola processors.
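The byte-order problem mentioned above is easy to demonstrate. The short sketch below
(Python is used purely for illustration; it is not part of any particular presentation-layer
implementation) packs the same 32-bit value once in little-endian (Intel-style) and once in
big-endian (Motorola/network-style) order, and shows that a receiver who knows the agreed
transfer syntax can always recover the original value.

    import struct

    value = 0x12345678
    little = struct.pack('<I', value)   # Intel-style little-endian byte order
    big = struct.pack('>I', value)      # Motorola/network-style big-endian byte order
    print(little.hex())                 # 78563412
    print(big.hex())                    # 12345678
    # A receiver that knows the agreed transfer syntax (here big-endian)
    # recovers the original value regardless of its own local syntax:
    print(struct.unpack('>I', big)[0] == value)   # True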
The multitude of audio and video formats also require conversion between
formats. This problem also comes up outside of the communication components
during exchange between data carriers, such as CD-ROMs, which store continuous
data. Thus, format conversion is often discussed in other contexts.
The application layer considers all application-specific services, such as file transfer
service embedded in the file transfer protocol (FTP) and the electronic mail service.
With respect to audio and video, special services for support of real-time access and
transmission must be provided.
A LAN is characterized by (1) its extension over a few kilometers at most, (2) a
total data rate of at least several Mbps, and (3) its complete ownership by a single
organization. Further, the number of stations connected to a LAN is typically limited to
100. However, the interconnection of several LANs allows the number of connected
stations to be increased. The basis of LAN communication is broadcasting using
broadcast channel (multi-access channel). Therefore, the MAC sublayer is of crucial
importance in these networks.
High-speed Ethernet
Ethernet is the most widely used LAN. Currently available Ethernet offers
bandwidth of at least 10 Mbps, but new fast LAN technologies for Ethernet with
bandwidths in the range of 100Mbps are starting to come on the market. This bus-based
network uses the CSMA/CD protocol in the MAC sublayer to resolve multiple access to the
broadcast channel: before data transmission begins, the sender station checks the state of
the network, and a station may try to send its data only if, at that moment, no other
station is transmitting. While transmitting, each station continues to listen to the
channel, which allows collisions to be detected.
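The following toy sketch (Python, for illustration only) captures the decision logic just
described: sense the carrier, transmit only when the medium is idle, keep listening while
sending, and back off after a collision. The medium_idle() and collided() callbacks stand in
for the hardware's carrier-sense and collision-detect signals; the slot time, the backoff cap
of 10 and the limit of 16 attempts follow classic 10 Mbps Ethernet, everything else is
simplified.

    import random

    SLOT_TIME = 51.2e-6   # seconds; the classic 10 Mbps Ethernet slot time (512 bit times)

    def backoff_delay(attempt):
        """Truncated binary exponential backoff after a collision."""
        k = min(attempt, 10)                       # the exponent is capped at 10
        return random.randint(0, 2 ** k - 1) * SLOT_TIME

    def try_to_send(medium_idle, collided, max_attempts=16):
        """Carrier sense, transmit, listen for collisions, back off and retry."""
        waited = 0.0
        for attempt in range(1, max_attempts + 1):
            while not medium_idle():               # carrier sense: wait for an idle medium
                pass
            if not collided():                     # keep listening while transmitting
                return True, waited
            waited += backoff_delay(attempt)       # collision: wait a random number of slots
        return False, waited                       # give up after too many collisions

    # toy run: an always-idle medium on which 30% of transmissions collide
    ok, waited = try_to_send(lambda: True, lambda: random.random() < 0.3)
    print(ok, round(waited * 1e6, 1), "microseconds spent in backoff")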
Dedicated Ethernet
Hub
Instead of being configured as a bus, each station is connected via its own Ethernet to a hub.
Hence, each station has the full Ethernet bandwidth available, and a new network for
multimedia transmission is not necessary.
Fast Ethernet
The Fast Ethernet Alliance, an industry group with more than 60 member
companies, began work on the 100-Mbit/s 100 Base-TX specification in the early 1990s.
The alliance submitted the proposed standard to the IEEE and it was approved. During
the standardization process, the alliance and the IEEE also defined a Media-Independent
Interface (MII) for fast Ethernet, which enables it to support various cabling types on the
same Ethernet network. Therefore, fast Ethernet offers three media options: 100 Base-T4
for half-duplex operation on four pairs of UTP (Unshielded Twisted Pair) cable, 100
Base-TX for half- or full-duplex operation on two pairs of UTP or STP (Shielded
Twisted Pair) cable, and 100 Base-FX for half- and full-duplex transmission over fiber
optic cable.
Token Ring
The Token Ring is a LAN with 4 or 16 Mbits/s throughput. All stations are
connected to a logical ring. In a Token Ring, a special bit pattern (3-byte), called a token,
circulates around the ring whenever all stations are idle. When a station wants to transmit
a frame, it must get the token and remove it from the ring before transmitting. Ring
interfaces have two operating modes: listen and transmit. In the listen mode, input bits are
simply copied to the output. In the transmit mode, which is entered only after the token
has been seized, the interface breaks the connection between the input and the output,
entering its own data onto the ring. As the bits that were inserted and subsequently
propagated around the ring come back, they are removed from the ring by the sender.
After a station has finished transmitting the last bit of its last frame, it must regenerate the
token. When the last bit of the frame has gone around and returned, it must be removed,
and the interface must immediately switch back into the listen mode to avoid a duplicate
transmission of the data.
Each station receives, reads and sends frames circulating in the ring according to the
Token Ring MAC Sublayer Protocol (IEEE standard 802.5). Each frame includes a
Sender Address (SA) and a Destination Address (DA). When the sending station drains
the frame from the ring, the Frame Status field is updated, i.e., the A and C bits of the field
are examined. Three combinations are allowed: A=0, C=0 (the destination is not present or
not powered up), A=1, C=0 (the destination is present but did not accept the frame) and
A=1, C=1 (the destination is present and the frame was copied).
20.4 FDDI
The Fiber Distributed Data Interface (FDDI) is a high-performance fiber optic
LAN, which is configured as a ring. It is often seen as the successor of the Token Ring
IEEE 802.5 protocol. The standardization began in the American National Standards
Institute (ANSI) in the group X3T9.5 in 1982. Early implementations appeared in 1988.
Compared to the Token Ring, FDDI is more of a backbone network than merely a LAN
because it runs at 100 Mbps over distances of up to 100 km with up to 500 stations,
whereas the Token Ring typically supports between 50 and 250 stations. The distance
between neighboring stations is at most 2 km in FDDI.
The FDDI design specification calls for no more than one error in 2.5 * 10^10
bits. Many implementations will do much better. The FDDI cabling consists of two fiber
rings, one transmitting clockwise and the other transmitting counter-clockwise. If either
one breaks, the other can be used as backup.
FDDI supports different transmission modes which are important for the
communication of multimedia data. The synchronous mode allows a bandwidth
reservation; the asynchronous mode behaves similar to the Token Ring protocol. Many
current implementations support only the asynchronous mode. Before diving into a
discussion of the different modes, we will briefly describe the topologies and FDDI
system components.
Figure: FDDI non-isochronous transmission modes: synchronous and asynchronous;
asynchronous traffic is further divided into restricted and non-restricted, the
non-restricted mode using priorities 0 through 7.
The main topology features of FDDI are the two fiber rings, which operate in
opposite directions (dual ring topology). The primary ring provides the data
transmission, the secondary ring improves the fault tolerance. Individual stations can be –
but do not have to be – connected to both rings. FDDI defines two classes of stations, A
and B:
FDDI includes the following components which are shown in the following Figure:
Figure: FDDI components spanning the data link and physical layers: MAC (packet
interpretation, token passing), SMT station management (monitoring and ring management),
PHY (encoding/decoding and clocking) and the optical medium carrying the photons.
Multicasting : The multicasting service became one of the most important aspects
of networking. FDDI supports group addressing, which enables multicasting.
Packet Size : The size of the packets can directly influence the data delay
in applications.
Several new protocols at the network/transport layers in Internet and higher layers in
BISDN are currently centers of research to support more efficient transmission of
multimedia and multiple types of service.
1. Identify a network of your own choice and find the protocols installed in the
computer.
2. Using an Internet search engine, list the different network standards available and
differentiate their features.
20.8 References
1. "Multimedia: Computing, Communications and Applications" by Ralf Steinmetz and Klara Nahrstedt
2. "Multimedia Networking: Technology, Management and Applications" by Syed Mahbubur Rahman
3. "Wireless Multimedia Network Technologies" by Rajamani Ganesh, Kaveh Pahlavan, Zoran Zvonar
4. "Wireless Communication Technologies: New Multimedia Systems" by Norihiko Morinaga, Seiichi Sampei, Ryuji Kohno
5. "Multimedia Systems, Standards, and Networks" by Atul Puri, Tsuhan Chen
UNIT - V
In this lesson we will learn the concepts behind Multimedia Operating System.
Various issues related with handling of resources are discussed in this lesson.
21.1 Introduction
The operating system is the shield of the computer hardware against all software
components. It provides a comfortable environment for the execution of programs, and it
ensures effective utilization of the computer hardware. The operating system offers
various services, related to the essential resources of a computer: CPU, main memory,
storage and all input and output devices.
The major aspect in this context is real-time processing of continuous media data.
Process management must take into account the timing requirements imposed by the
handling of multimedia data. Appropriate scheduling methods should be applied. In
contrast to the traditional real-time operating systems, multimedia operating systems also
have to consider tasks without hard timing restrictions under the aspect of fairness.
Since the operating system shields devices from applications programs, it must
provide services for device management too. In multimedia systems, the important issue
is the integration of audio and video devices in a similar way to any other input/output
device. The addressing of a camera can be performed similarly to the addressing of a
keyboard in the same system, although most current systems do not apply this technique.
Audio and video data streams consist of single, periodically changing values of
continuous media data, e.g., audio samples or video frames. Each Logical Data Unit
(LDU) must be presented by a well-determined deadline. Jitter is only allowed before the
final presentation to the user. A piece of music, for example, must be played back at a
constant speed. To fulfill the timing requirements of continuous media, the operating
system must use real-time scheduling techniques. These techniques must be applied to
all system resources involved in the continuous media data processing, i.e., the entire
end-to-end data path is involved.
The bandwidth demand of continuous media is not always that stringent; it need
not be fixed a priori, but may eventually be lowered. As some compression
algorithms are capable of using different compression ratios – leading to different
qualities – the required bandwidth can be negotiated. If not enough bandwidth is
available for full quality, the application may also accept reduced quality (instead
of no service at all).
Multimedia systems with integrated audio and video processing are at the limit of
their capacity, even with data compression and utilization of new technologies. Current
computers do not allow processing of data according to their deadlines without any
resource reservation and real-time process management.
21.4.1 Resources
21.4.2 Requirements
The requirements of multimedia applications and data streams must be served for
the single components of a multimedia system. The resource management maps these
requirements on the respective capacity. The transmission and processing requirements
of local and distributed multimedia applications can be specified according to the
following characteristics :
First, the resource manager checks its own resource utilization and decides if the
reservation request can be served or not. All existing reservations are stored. This way,
their share in terms of the respective resource capacity is guaranteed. Moreover, this
component negotiates the reservation request with other resource managers, if necessary.
In the following figure two computers are connected over a LAN. The
transmission of video data between a camera connected to a computer server and the
screen of the computer user involves, for all depicted components, a resource manager.
Figure: video flow from the camera through compression, communication, and transport &
network layers on the server, across the LAN, and back up through the transport & network
layers, communication, and decompression to the screen of the user's computer; each of
these components has its own resource manager.
1. Schedulability Test
The resource manager checks with the given QoS parameters (e.g.,
throughput and reliability) to determine if there is enough remaining
resource capacity available to handle this additional request.
2. Quality of Service Calculation
After the schedulability test, the resource manager calculates the best
possible performance (e.g., delay) the resource can guarantee for the new
request.
3. Resource Reservation
The resource manager allocates the required capacity to meet the QoS
guarantees for each request.
4. Resource Scheduling
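As an illustration of the first three phases, the sketch below (Python, with hypothetical
names, and with throughput treated as the only QoS parameter) performs a simple admission
test against a fixed capacity and stores the reservation if it succeeds; a real resource
manager would also negotiate delay, jitter and reliability and would coordinate with the
resource managers of other components.

    def admit(request, reservations, capacity):
        """Schedulability test and reservation for a single resource.
        request and reservations are bandwidth demands (e.g., in Mbit/s);
        capacity is the total bandwidth the resource can guarantee."""
        used = sum(reservations)
        if used + request > capacity:              # schedulability test fails
            return False, reservations             # request rejected, nothing changes
        return True, reservations + [request]      # reservation is stored

    ok, booked = admit(4.0, [2.0, 3.0], capacity=10.0)
    print(ok, booked)        # True  [2.0, 3.0, 4.0]
    ok, booked = admit(2.0, booked, capacity=10.0)
    print(ok, booked)        # False [2.0, 3.0, 4.0] -- not enough remaining capacity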
Browse the Internet and list a few operating systems that support multimedia
processing. Also list the features supported by them.
i) Schedulability Test
ii) Quality of Service Calculation
iii) Resource Reservation
iv) Resource Scheduling
21.8 References
This lesson aims at teaching the concepts of process management. At the end of
this lesson the learner will be able to:
i) Identify the requirements for processor management
ii) Enumerate the different real time processor scheduling algorithms
22.1 Introduction
One of the main activities of an operating system is managing multimedia
processes and the processor. Effective management of the processor is necessary
to enhance multimedia production and multimedia playback.
In most systems, a process under control of the process manager can adopt one of the
following states:
In the initial state, no process is assigned to the program. The process is in the
idle state.
If a process is waiting for an event, i.e., the process lacks one of the necessary
resources for processing, it is in the blocked state.
If all necessary resources are assigned to the process, it is ready to run. The
process only needs the processor for the execution of the program.
A process is running as long as the system processor is assigned to it.
The process manager is the scheduler. This component transfers a process into
the ready-to-run state by assigning it a position in the respective queue of the dispatcher,
which is the essential part of the operating system kernel. The dispatcher manages the
transition from ready-to-run to run. In most operating systems, the next process to run is
chosen according to priority policy. Between processes with the same priority, the one
with the longest ready time is chosen.
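A minimal ready queue with exactly this policy, highest priority first and, among equal
priorities, the longest-waiting process first, can be sketched as follows (Python; the class
and process names are illustrative and not taken from any particular operating system).

    import heapq
    import itertools

    class Dispatcher:
        """Toy ready-to-run queue: highest priority first, FIFO within a priority."""

        def __init__(self):
            self._queue = []
            self._order = itertools.count()        # arrival order models the ready time

        def make_ready(self, priority, process):
            # heapq is a min-heap, so the priority is negated to pop the highest first;
            # the running counter breaks ties in favour of the longest-waiting process
            heapq.heappush(self._queue, (-priority, next(self._order), process))

        def dispatch(self):
            return heapq.heappop(self._queue)[2] if self._queue else None

    d = Dispatcher()
    d.make_ready(5, "audio_decoder")
    d.make_ready(5, "video_decoder")
    d.make_ready(1, "batch_job")
    print(d.dispatch(), d.dispatch(), d.dispatch())   # audio_decoder video_decoder batch_job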
An uncritical process should not suffer from starvation because time critical
processes are executed. Multimedia applications rely as much on text and
graphics as on audio and video. Therefore, not all resources should be occupied
by the time critical processes and their management processes.
Further, a time-critical process must never be subject to priority inversion.
The scheduler must ensure that any priority inversion (also between time-critical
processes with different priorities) is avoided or reduced as much as possible.
There are several attempts to solve real-time scheduling problems. Many of them
are just variations of basic algorithms. To find the best solutions for multimedia systems,
two basic algorithms are analyzed, Earliest Deadline First Algorithm and Rate Monotonic
Scheduling, and their advantages and disadvantages are elaborated.
The Earliest Deadline First (EDF) algorithm is one of the best known algorithms
for real-time processing. At every new ready state, the scheduler selects the task with the
earliest deadline among the tasks that are ready and not fully processed. The requested
resource is assigned to the selected task. At any arrival of a new task EDF must be
computed immediately leading to a new order, i.e., the running task is preempted and the
new task is scheduled according to its deadline. The new task is processed immediately
if its deadline is earlier than that of the interrupted task. The processing of the interrupted
task is continued according to the EDF algorithm later on. EDF is not only an algorithm
for periodic tasks, but also for tasks with arbitrary requests, deadlines and service
execution times. In this case, no guarantee about the processing of any task can be given.
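The behaviour described above can be sketched in a few lines. The following illustrative
Python fragment (not a production scheduler) keeps the ready tasks in a heap ordered by
deadline and always runs the most urgent one for a single time unit, which approximates
preemption at unit boundaries.

    import heapq

    def edf_schedule(tasks):
        """tasks: list of (release_time, deadline, execution_time, name).
        Returns the sequence of (time, task) slices executed under EDF."""
        tasks = sorted(tasks)                      # process releases in time order
        ready, timeline, t, i = [], [], 0, 0
        while i < len(tasks) or ready:
            while i < len(tasks) and tasks[i][0] <= t:
                release, deadline, work, name = tasks[i]
                heapq.heappush(ready, (deadline, name, work))   # ordered by deadline
                i += 1
            if not ready:
                t = tasks[i][0]                    # idle until the next release
                continue
            deadline, name, work = heapq.heappop(ready)
            timeline.append((t, name))             # run the most urgent task for one unit
            t += 1
            if work > 1:
                heapq.heappush(ready, (deadline, name, work - 1))
        return timeline

    # A needs 2 units by deadline 4, B needs 1 unit by deadline 3; B runs first
    print(edf_schedule([(0, 4, 2, "A"), (0, 3, 1, "B")]))
    # [(0, 'B'), (1, 'A'), (2, 'A')]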
The rate monotonic scheduling principle was introduced by Liu and Layland in
1973. It is an optimal, static, priority-driven algorithm for preemptive, periodic jobs.
Optimal in this context means that there is no other static algorithm that is able to
schedule a task set which cannot be scheduled by the rate monotonic algorithm. A
process is scheduled by a static algorithm at the beginning of the processing.
Subsequently, each task is processed with the priority calculated at the beginning. No
further scheduling is required.
The following five assumptions are necessary prerequisites to apply the rate
monotonic algorithm.
1. The requests for all tasks with deadlines are periodic, i.e., have constant
intervals between consecutive requests.
2. The processing of a single task must be finished before the next task of the
same data stream becomes ready for execution. Deadlines consist of
runability constraints only, i.e., each task must be completed before the next
request occurs.
3. All tasks are independent. This means that the requests for a certain task do
not depend on the initiation or completion of requests for any other task.
4. Run-time for each request of a task is constant. Run-time denotes the
maximum time which is required by a processor to execute the task without
interruption.
5. Any non-periodic task in the system has no required deadline. Typically, such
tasks initiate periodic tasks or are tasks for failure recovery. They usually
displace periodic tasks.
Static priorities are assigned to tasks, once at the connection set-up phase,
according to their request rates. The priority corresponds to the importance of a task
relative to other tasks. Tasks with higher request rates will have higher priorities. The
task with the shortest period gets the highest priority and the task with the longest period
gets the lowest priority.
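The priority assignment, together with Liu and Layland's sufficient utilization test (the
task set is schedulable if the total utilization does not exceed n(2^(1/n) - 1)), can be
illustrated as follows; the stream names and numbers are made up for the example.

    def rate_monotonic_priorities(tasks):
        """tasks maps a task name to (period, execution_time);
        the shorter the period, the higher the priority (1 = highest)."""
        ordered = sorted(tasks, key=lambda name: tasks[name][0])
        return {name: prio for prio, name in enumerate(ordered, start=1)}

    def passes_utilization_test(tasks):
        """Liu and Layland's sufficient test: sum(e_i / p_i) <= n * (2**(1/n) - 1)."""
        n = len(tasks)
        utilization = sum(e / p for p, e in tasks.values())
        return utilization <= n * (2 ** (1 / n) - 1)

    streams = {"audio": (20, 5), "video": (40, 15)}   # periods and run-times in ms
    print(rate_monotonic_priorities(streams))          # {'audio': 1, 'video': 2}
    print(passes_utilization_test(streams))            # 0.625 <= 0.828 -> True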
Figure: timing diagram of three processes A, B and C, marking the critical instants of
process B.
There are several approaches to this algorithm. One of them divides a task into a
mandatory and an optional part. The processing of the mandatory part delivers a result
which can be accepted by the user. The optional part only refines the result. The
mandatory part is scheduled according to the rate monotonic algorithm. For the scheduling
of the optional part, other policies are suggested.
In some systems there are aperiodic tasks next to periodic ones. To meet the
requirements of periodic tasks and the response time requirements of aperiodic requests,
it must be possible to schedule both aperiodic and periodic tasks. If the aperiodic request
is an aperiodic continuous stream (e.g., video images as part of a slide show), we have the
possibility of transforming it into a periodic stream. Every timed data item can be
substituted by n items. The new items have the duration of the minimal life span. The
number of streams is increased, but since the life span is decreased, the semantics
remain unchanged. The stream is now periodic because every item has the same life
span.
Apart from the two methods previously discussed, further scheduling algorithms
have been evaluated regarding their suitability for the processing of continuous media
data.
Least Laxity First (LLF). The laxity is the time between the actual time t and
the deadline, minus the remaining processing time. Taking s as the start of the first
period, p as the period length, d as the relative deadline and e as the remaining
processing time, the laxity in period k is:
laxity_k = (s + (k - 1) * p + d) - (t + e)
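A direct computation of this quantity, using the reading of the symbols given above
(stated here as an interpretation, not as a prescribed API), looks like this:

    def laxity(s, p, d, e, t, k):
        """Laxity of the k-th period: the deadline of that period, s + (k - 1)*p + d,
        minus the current time t and the remaining processing time e."""
        return (s + (k - 1) * p + d) - (t + e)

    # task started at t = 0, period 40 ms, relative deadline 40 ms,
    # 10 ms of processing left, current time 50 ms, second period (k = 2)
    print(laxity(s=0, p=40, d=40, e=10, t=50, k=2))    # 20 ms of slack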
If the deadlines of tasks are less than their period (di < pi), the prerequisites of the
rate monotonic algorithm are violated. In this case, a fixed priority assignment
according to the deadlines of the tasks is optimal. A task Ti gets a higher priority
than a task Tj if di < dj. No effective schedulability test for the deadline monotonic
algorithm exists.
The task with the shortest remaining computation time is chosen for execution.
This algorithm guarantees that as many tasks as possible meet their deadlines
under an overload situation if all of them have the same deadline. In multimedia
systems where the resource management allows overload situations this might be
a suitable algorithm.
In this lesson we have learnt about the process management present in Multimedia
Operating System. We have touched upon the following points.
Process management deals with the allocation and management of the main
processor resource.
The process manager transfers a process into the ready-to-run state by
assigning it a position in the respective queue of the dispatcher.
The real time scheduling algorithms are Earliest Deadline First Algorithm,
Rate Monotonic Algorithm.
EDF is an optimal, dynamic algorithm.
Rate monotonic algorithm is optimal, static, priority driven algorithm for
preemptive periodic jobs
22.8 References
1. "Multimedia Computing, Communication and Application" by Steinmetz and Klara Nahrstedt
2. "Multimedia Systems, Standards, and Networks" by Atul Puri, Tsuhan Chen
3. "Multimedia Storage and Retrieval: An Algorithmic Approach" by Jan Korst, Verus Pronk
4. "Multimedia: Making it Work" by Tay Vaughan
This lesson aims at teaching the learner how file systems are organised in the
Multimedia based Operating Systems. At the end of this chapter the learner will:
23.1 Introduction
The file system is said to be the most visible part of an operating system. Most
programs write or read files. Their program code, as well as user data, is stored in files.
The organization of the file system is an important factor for the usability and
convenience of the operating system. A file is a sequence of information held as a
unit for storage and use in a computer system.
The file system provides access and control functions for the storage and retrieval
of files. From the user’s viewpoint, it is important how the file system allows file
organization and structure. The internals, which are more important in our context, i.e.,
the organization of the file system, deal with the representation of information in files,
their structure and organization in secondary storage.
The two main goals of traditional file systems are: (1) to provide a comfortable
interface for file access to the user and (2) to make efficient use of storage media.
In sequential storage, files are separated from each other by a well-defined “end of file” bit pattern,
character or character sequence. A file descriptor is usually placed at the beginning of
the file and is, in some systems, repeated at the end of the file. Sequential storage is the
only possible way to organize the storage on tape, but it can also be used on disks. The
main advantage is its efficiency for sequential access, as well as for direct access. Disk
access time for reading and writing is minimized.
must contain the number of blocks occupied by the file, the pointer to the
first block and it may also have the pointer to the last block. A serious
disadvantage of this method is the cost of the implementation for random
access because all prior data must be read. In MS-DOS, a similar method
is applied. A File Allocation Table (FAT) is associated with each disk.
One entry in the table represents one disk block. The directory entry of
each file holds the block number of the first block. The number in the slot
of an entry refers to the next block of a file. The slot of the last block of a
file contains an end-of-file mark.
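The chaining through the FAT can be followed with a few lines of code. The sketch below
(Python, with a toy dictionary standing in for the table) also makes the stated
disadvantage visible: reaching the i-th block of a file requires walking through all
preceding entries of the chain.

    def file_blocks(fat, first_block, eof=-1):
        """Follow a FAT-style chain: each slot holds the number of the next block of
        the file, and the slot of the last block holds the end-of-file mark."""
        blocks = []
        block = first_block
        while block != eof:
            blocks.append(block)
            block = fat[block]                     # one table lookup per block
        return blocks

    # toy FAT: a file occupying blocks 2 -> 5 -> 3, with -1 as the end-of-file mark
    fat = {2: 5, 5: 3, 3: -1}
    print(file_blocks(fat, first_block=2))          # [2, 5, 3]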
Another approach is to store block information in mapping tables. Each
file is associated with a table where, apart from the block numbers
information like owner, file size, creation time, last access time, etc., is
stored. Those tables usually have a fixed size, which means that the number
of block references is bounded. Files with more blocks are referenced
indirectly by additional tables assigned to the files. In UNIX, a small table
(on disk) called i-node is associated with each file (See the following
figure ). The indexed sequential approach is an example for multi-level
mapping; here, logical and physical organizations are not clearly
separated.
Directory Structure
Disk Management
Another way to enhance performance is to reduce disk arm motion. Blocks that
are likely to be accessed in sequence are placed together on one cylinder. To refine this
method, rotational positioning can be taken into account. Consecutive blocks are placed
on the same cylinder, but in an interleaved way as shown in the following figure.
Another important issue is the placement of the mapping tables (e.g., I-nodes in
UNIX) on the disk. If they are placed near the beginning of the disk, the distance
between them and the blocks will be, on average, half the number of cylinders. To
improve this, they can be placed in the middle of the disk. Hence, the average seek time
is roughly reduced by a factor of two. In the same way, consecutive blocks should be
placed on the same cylinder. The use of the same cylinder for the storage of mapping
tables and referred blocks also improves performance.
The seek time (the time required for the movement of the read/write head).
The latency time or rotational delay (the time during which the transfer cannot
proceed until the right block or sector rotates under the read/write head).
The actual data transfer time needed for the data to copy from disk into main
memory.
Usually the seek time is the largest factor of the actual transfer time. Most
systems try to keep the cost of seeking low by applying special algorithms to the
scheduling of disk read/write operations. The access of the storage device is a problem
greatly influenced by the file allocation method. Most systems apply one of the
following scheduling algorithms:
First-Come-First-Served (FCFS)
With this algorithm, the disk driver accepts requests one-at-a-time and
serves them in incoming order. This is easy to program and an intrinsically fair
algorithm. However, it is not optimal with respect to head movement because it
does not consider the location of the other queued requests. This results in a
high average seek time.
SCAN
times. Note that middle tracks still get better service than edge tracks. When
the head movement is reversed, it first serves tracks that have recently been
serviced, where the heaviest density of requests, assuming a uniform distribution,
is at the other end of the disk.
C-SCAN
C-SCAN also moves the head in one direction, but it offers fairer service
with more uniform waiting times. It does not alter the direction, as in SCAN.
Instead, it scans in cycles, always increasing or decreasing, with one idle head
movement from one edge to the other between two consecutive scans. The
performance of C-SCAN is somewhat lower than that of SCAN.
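The difference between the two strategies is easiest to see on a small request queue. The
sketch below (Python, with illustrative track numbers) orders a queue of track requests
once with SCAN and once with C-SCAN for a head currently positioned at track 53.

    def scan(requests, head):
        """SCAN (elevator): serve requests in the direction of increasing tracks,
        then reverse and serve the remaining ones on the way back."""
        up = sorted(r for r in requests if r >= head)
        down = sorted((r for r in requests if r < head), reverse=True)
        return up + down

    def c_scan(requests, head):
        """C-SCAN: serve in one direction only; after the last request the head makes
        one idle movement back to the other edge and continues in the same direction."""
        up = sorted(r for r in requests if r >= head)
        wrapped = sorted(r for r in requests if r < head)
        return up + wrapped

    queue = [98, 183, 37, 122, 14, 124, 65, 67]
    print(scan(queue, head=53))     # [65, 67, 98, 122, 124, 183, 37, 14]
    print(c_scan(queue, head=53))   # [65, 67, 98, 122, 124, 183, 14, 37]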
File Size
Compared to text and graphics, video and audio have very large storage space
requirements. Since the file system has to store information ranging from small,
unstructured units like text files to large, highly structured data units like video and
associated audio, it must organize the data on disk in a way that efficiently uses the
limited storage.
A multimedia system must support different media at one time. It does not only have
to ensure that all of them get a sufficient share of the resources, it also must consider
tight relations between streams arriving from different sources.
The main goals of traditional disk scheduling algorithms are to reduce the cost of
seek operations, to achieve a high throughput and to provide fair disk access for every
process. The additional real-time requirements introduced by multimedia systems make
traditional disk scheduling algorithms, such as described previously, inconvenient for
multimedia systems. Systems without any optimized disk layout for the storage of
continuous media depend far more on reliable and efficient disk scheduling algorithms
than others. In the case of contiguous storage, scheduling is only needed to serve requests
from multiple streams concurrently. A round-robin scheduler is employed that is able to
serve hard real-time tasks. Here, additional optimization is provided through the close
physical placement of streams that are likely to be accessed together.
The EDF scheduling strategy, as described for CPU scheduling, can also be used for the
file system. Here, the block of the stream with the nearest deadline is
read first. Employing EDF in the strict sense results in poor throughput and
excessive seek time. Further, as EDF is most often applied as a preemptive scheduling
scheme, the costs for preemption of a task and scheduling of another task are
considerably high. The overhead caused by this is in the same order of magnitude as at
least one disk seek. Hence, EDF must be adapted or combined with file system strategies.
With Group Sweeping Scheduling (GSS), requests are served in cycles, in round-
robin manner. To reduce disk arm movements, the set of n streams is divided into g
groups. Groups are served in fixed order. Individual streams within a group are served
according to SCAN; therefore, it is not fixed at which time or order individual streams
within a group are served. In one cycle, a specific stream may be the first to be served; in
another cycle, it may be the last in the same group. A smoothing buffer which is sized
according to the cycle time and data rate of the stream assures continuity.
Mixed Strategy:
The mixed strategy was introduced based on the shortest seek (also called the greedy strategy)
and the balanced strategy. As shown in the following figure, every time data are retrieved from
disk they are transferred into buffer memory allocated for the respective data stream. From
there, the application process removes them one at a time. The goal of the scheduling
algorithm is:
With shortest seek, the first goal is served, i.e., the process of which data block is
closest is served first. The balanced strategy chooses the process which has the least
amount of buffered data for service because this process is likely to run out of data. The
crucial part of this algorithm is the decision of which of the two strategies must be applied
(shortest seek or balanced strategy). For the employment of shortest seek, two criteria must
be fulfilled: the number of buffers for all processes should be balanced (i.e., all processes
should nearly have the same number of buffered data) and the overall required bandwidth
should be sufficient for the number of active processes, so that none of them will try to
immediately read data out of an empty buffer. The urgency is introduced as an attempt to
measure both. The urgency is the sum of the reciprocals of the current “fullness” (amount
of buffered data). This number measures both the relative balance of all read processes and
the number of read processes. If the urgency is large, the balanced strategy will be used; if it
is small, it is safe to apply the shortest seek algorithm.
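The urgency measure itself is simple to compute: it is the sum of the reciprocals of the
current buffer fill levels. The sketch below (Python; the threshold value is an illustrative
tuning parameter, not one prescribed by the strategy) shows how a nearly empty buffer
immediately pushes the decision towards the balanced strategy.

    def urgency(buffer_fill_levels):
        """Sum of the reciprocals of the current 'fullness' of every read process;
        levels of 0 are clamped to 1 to avoid division by zero."""
        return sum(1.0 / max(filled, 1) for filled in buffer_fill_levels)

    def choose_strategy(buffer_fill_levels, threshold=0.5):
        """Large urgency -> balanced strategy (refill the most endangered buffer);
        small urgency -> it is safe to apply shortest seek."""
        return "balanced" if urgency(buffer_fill_levels) > threshold else "shortest seek"

    print(choose_strategy([40, 42, 38]))   # well-filled, balanced buffers -> shortest seek
    print(choose_strategy([40, 2, 38]))    # one stream nearly starved -> balanced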
Memory Management:
meaning that the page must be read from disk. Page faults affect the real-time
performance very seriously, so they must be avoided. A possible approach is to lock code
and/or data into real memory. However, such locking must be applied with care.
Device Management
Device management and the actual access to a device allow the operating system
to integrate all hardware components. The physical device is represented by an abstract
device driver. The physical characteristics of devices are hidden. In a conventional
system, such devices include a graphics adapter card, disk, keyboard and mouse. In
multimedia systems, additional devices like cameras, microphones, speakers and
dedicated storage devices for audio and video must be considered. In most existing
multimedia systems, however, such devices are often not integrated through device
management and the respective device drivers.
Existing operating system extensions for multimedia usually provide one common
system-wide interface for the control and management of data streams and devices.
1. Compare the traditional algorithms used for Disk scheduling with the
algorithms used for multimedia operating systems.
2. Search the Internet for resources that list the scheduling algorithms used
in the Windows and UNIX operating systems.
23.10 References
In this lesson we will learn the features and characteristics of Multimedia Data Base
Management Systems. At the end of the lesson the reader will be able to:
i) understand the concepts behind MDBMS
ii) list the characteristics of MDBMS
iii) understand the different representations used in MDBMS
24.1 Introduction
Multimedia database systems are database systems where, besides text and other
discrete data, audio and video information will also be stored, manipulated and retrieved.
To provide this functionality, multimedia database systems require proper storage
technology and file systems.
Current storage technology allows for the possibility of reading, storing and
writing audio and video information in real-time. This can happen either through
dedicated external devices, which have long been available, or through system integrated
secondary storage. The external devices were developed for studio and electronic
entertainment applications; they were not developed for storage of discrete media. An
example is a video recorder controlled through a digital interface.
At first, it appears that these three applications do not have much in common, but in
all three their functions can uniformly be performed using a Multimedia DataBase
Management System (MDBMS). The reason is that in general, the main task of a
DataBase Management System (DBMS) is to abstract from the details of the storage
access and its management. MDBMS is embedded in the multimedia system domain,
located between the application domain (applications, documents) and the device domain
(storage, compression and computer technology). The MDBMS is integrated into the
system domain through the operating system and communication components.
Therefore, all three applications can be put on the same abstraction level with respect to
DBMS. Further, a DBMS provides other properties in addition to storage abstraction:
Persistence of Data
The data may outlive processing programs, technologies, etc. For example,
insurance companies must keep data in database for several decades. This implies
that a DBMS should be able to manipulate data even after the changes of the
surrounding programs.
Security of Data
3. Device-independent Interface
4. Format-independent Interface
The DBMS must be capable of handling and managing large amounts of data and
satisfying queries for individual relations among data or attributes of relations.
In the architecture model, the system components around MDBMS and MDBMS itself
have the following functions:
The operating system provides the management interface for MDBMS to all local
devices.
The MDBMS provides an abstraction of the stored data and their equivalent
devices, as is the case in DBMS without multimedia.
The communication system provides for MDBMS abstractions for
communication with entities at remote computers. These communication
abstractions are specified through interfaces according to, for example, the Open
System Interconnection (OSI) architecture.
A layer above the DBMS, operating system and communication system can unify
all these different abstractions and offer them, for example, in an object-oriented
environment such as a toolkit. Thus, an application should have access to each
abstraction at different levels.
That is to say, how are the proper operations defined to access multimedia entries? One can
define media-dependent, as well as media-independent, operations. In a next step,
a class hierarchy with respect to object-oriented programming may be
implemented.
Better security: Multimedia objects are secured in the database and can be
retrieved any time they are required. For example, a doctor can see the M.R.I. report of a
patient whenever needed, and it can be preserved for as long as the patient remains under
the doctor's treatment.
Greater control (resizing, manipulating): Multimedia objects can be resized
when required and certain changes can be made when required.
Easy deletion: Images can be deleted without deleting the corresponding data
from the database. One can search for multimedia content in the same fashion as
they search for traditional relational data. For instance, customers can search for
images by a unique image name or key, by photographer, or by category. In
addition, customers can use Oracle interMedia’s powerful image content based
retrieval capability to search for ‘similar’ images recursively through the
database. And once a customer has found an image of interest, they can simply
click on it and see the full resolution image.
Easy to extract statistics on usage: Oracle interMedia enables Oracle9i to
manage multimedia content (image, audio, and video) in an integrated fashion
with other enterprise information. It extends Oracle9i reliability, availability, and
data management to multimedia content in media-rich Internet applications. As an
integral part of the Oracle9i database server, Oracle interMedia data benefits
from all Oracle9i capabilities, including its speed, efficiency, scalability, security,
and power.
We present examples of raw, registered, and descriptive data for different media such as
text, image, video and audio.
1. Pixels (pixel matrix) represent raw data. A compressed image may also consist of
a transformed pixel set.
2. The registering data include the height and width of the picture. Additionally, the
details of coding are stored here.
The multimedia component can also be stored outside the database, without
transaction control. In this case, a pointer, under transaction control, is stored in the
database, while the multimedia component is stored in an external BFILE (operating
system flat file), at an HTTP server-based URL, on a specialized media server, or at a
user-defined source on other servers.
Object metadata and methods are always stored in the database. Whether multimedia
content is stored inside or outside the database, the database manages metadata for all the
multimedia object types, and automatically extracts that metadata for each type. This
metadata includes the following:
Data storage information including the source types, location, and name.
Data update time and format.
MIME media type (used in Web and mail applications).
Image height and width, and image content length, format, and compression type.
Audio encoding type, number of channels, sampling rate, sample size,
compression type, play time (duration), and description.
Video frame widths and heights, frame resolution and rate, play time (duration),
number of frames, compression type, number of colors, bit rate, and description.
Select application metadata (for example, singer or studio names).
There are various architectures for multimedia databases. Following are three
architectures that we will discuss:
Figure: an example multimedia database architecture in which a query processor, a
presentation generator and a multimedia player operate over several underlying databases
(DB1, DB2), each with its own multimedia store and ontology, coordinated by an ontology
agent and a semantic generator.
SQL+D is an extension to SQL which allows users to dynamically specify how to display
answers to queries posed to multimedia databases. It provides tools to display multimedia
data plus other traditional GUI elements such as boxed text, checkbox, list, and button.
The full implementation of the Database Interface, allowing users to connect to local
and remote ODBC (or JDBC) compliant databases, such as ORACLE or Microsoft
Access.
Simplified display specifications syntax and instantiation of display elements.
SQL+D differs from other efforts in that it is specifically designed for querying
multimedia databases. It emphasizes, within the query itself, user specification of the
display of the output data. In contrast, others have focused on specification of the
query, data visualization, or data browsing. SQL+D allows all of these, and we
have built browsers and visual querying applications as a proof of concept. By
proposing SQL+D as a language extension to SQL, we intend to maintain the
flexibility that allowed SQL to be adopted as the query language of choice by a
great number of database management systems and browsers, as well as by many
programming languages that allow embedded SQL queries.
The EMRM model can be designed with traditional entities and their relationship
set along with the additional set called multimedia set. Multimedia sets are shown in this
model as entities and can be connected to the normal entities using relationship. For
example, one can design the EMRM with description of the entities and relationships as
given below: The entity “Document” is very important in the system. As a new medium it
contains multimedia objects. The cardinality between “Document” and “Media Object” is
one to many, which means a Document has at least one media object and there is no
blank Document. A “Media Object” can exist without a “Document”, so the
cardinality between “Media Object” and “Document” is 0 to many.
In the system two kinds of user groups are anticipated which are “Reader” and
“Author” entities. A generic entity “Person” is defined to include the common attributes
of “Reader” and “Author”. It does make sense that a person can be the “Author” and
“Reader” at the same time so the classification hierarchy is (p, o) which means a
“Person” can be either a “Reader” or an “Author” or both. The ternary relationship
between “Reader” and “Document” is defined by the connections “Read” and “Readby”. A
“Reader” can exist without any “Document” and a “Document” can have no reader
at all. Also, a “Reader” can read an unlimited number of documents and an unlimited
number of readers can read a “Document”, so the cardinalities are from 0 to many.
Another ternary relationship is defined between the “Author” and the “Document”
as the “Write” and “Writtenby” relationships. The difference is that a “Document” must have
at least one “Author”, while an author can write 0 to many documents. The last entity is
“Publisher”. The ternary relationship addresses the connection between “Publisher” and
“Document”. The “Publisher” can be an organization such as ACM, a department or
a laboratory. A “Document” can be jointly published by several organizations, so the
cardinality is from 1 to n.
Check the availability of multimedia handling capacity in any RDBMS. Check for
the availability of multimedia components (text, audio, video, animation) as field
items in Oracle 9i.
24.12 References
1. "Multimedia Servers: Applications, Environments and Design" by Dinkar Sitaram, Asit Dan
2. "Semantic Models for Multimedia Database Searching and Browsing" by Shu-Ching Chen, Rangasami L. Kashyap, Arif Ghafoor
3. "Multimedia: Concepts and Practice" by Stephen McGloughlin
4. "Multimedia Computing, Communication and Application" by Steinmetz and Klara Nahrstedt
25.1 Introduction
UNIX and its variants, Microsoft's Windows NT, Apple's System 7 and IBM's
OS/2 are, and will be, the most widely installed operating systems with multitasking
capabilities on personal computers (including the Power PC) and workstations. Although
some are enhanced with special priority classes for real-time processes, this is not
sufficient for multimedia applications. For example, the UNIX scheduler which provides
a real-time static priority scheduler in addition to a standard UNIX timesharing scheduler
is analyzed. For this investigation three applications have been chosen to run
concurrently; “typing” as an interactive application, “video” as a continuous media
application and a batch program. The result was that only through trial and error a
particular combination of priorities and scheduling class assignment might be found that
works for a specific application set, i.e., additional features must be provided for the
scheduling of multimedia data processing.
Threads
OS/2 was designed as a time-sharing operating system without taking serious real-
time applications into account. An OS/2 thread can be considered as a light-weight
process: it is the dispatchable unit of execution in the operating system. A thread belongs
to exactly one address space (called process in OS/2 terminology). All threads share the
resources allocated by the respective address space. Each thread has its own execution
stack, register values and dispatch state (either executing or ready-to-run). Each thread
belongs to one of the following priority classes:
The time-critical class is reserved for threads that require immediate attention.
The fixed-high class is intended for applications that require good responsiveness
without being time-critical.
The regular class is used for the execution of normal tasks.
The idle-time class contains threads with the lowest priorities. Any thread in this
class is only dispatched if no thread of any other class is ready to execute.
Priorities
Within each class, 32 different priorities (0, …, 31) exist. Through time-slicing,
threads of equal priority have equal chances to execute. A context switch occurs
whenever a thread tries to access an otherwise allocated resource. The thread with the
highest priority is dispatched and the time-slice is started again.
tasks which could be done by application processes running in ring 3 (user mode). The
task running at ring 0 should (but does not have to) leave the kernel mode after 4 msec.
The generic space in MHEG provides a virtual coordinate system that is used to
specify the layout and relation of content objects in space and time according to the
virtual axes-based specification method. The generic space has one time axis of infinite
length measured in Generic Time Units (GTU). The MHEG run-time environment must
map the GTU to Physical Time Units (PTUs). If no mapping is specified, the default is
one GTU mapped to one millisecond. Three spatial axes (X=latitude, Y=longitude,
Z=altitude) are used in the generic space. Each axis is of finite length in an interval of [-
32768,+32767]. Units are Generic Space Units (GSUs). Also, the MHEG engine must
perform mapping from the virtual to the real coordinate space.
The presentation of content objects is based on the exchange of action objects sent
to an object. Examples of actions are prepare to set the object into a presentable state, run
to start the presentation and stop to end the presentation. Action objects can be combined
to form an action list. Parallel action lists are executed in parallel. Each list is composed
of a delay followed by delayed sequential actions that are processed serially by the
MHEG engine, as shown in the following figure.
Figure: lists of actions; each parallel action list starts with a delay, followed by delayed
sequential actions.
MHEG Engine
The Generic Presentation Services of the engine provide abstractions from the
presentation modules used to present the content objects. The Audio/Video-Subsystem is
a stream layer implementation. This component is responsible for the presentation of
continuous media streams, e.g., audio/video streams. The User
Interface Services provide the presentation of time-independent media, like text and
graphics, and the processing of user interactions, e.g., buttons and forms.
The MHEG engine receives the MHEG objects from the application. The Object
Manager manages these objects in the run-time environment. The Interpreter processes
the action objects and events. It is responsible for initiating the preparation and
presentation of the objects. The Link Processor monitors the states of objects and
triggers links if the trigger conditions of a link are fulfilled.
Figure: architecture of the MHEG engine; the run-time system (object manager, interpreter,
link processor) sits between the application and the operating system and uses the generic
presentation services (user interface services and audio/video subsystem).
The run-time system communicates with the presentation services by events. The
User Interface Services provide events that indicate user actions. The Audio/Video-
Subsystem provides events about the status of the presentation streams, like end of the
presentation of a stream or reaching a cuepoint in a stream.
25.3.2 HyTime
HyTime defines how markup and DTDs can be used to describe the structure of
hyperlinked time-based multimedia documents. HyTime does not define the format or
encoding of elements. It provides the framework for defining the relationship between
these elements.
The HyTime architectural forms are grouped into the following modules:
The Base Module specifies the architectural forms that comprise that document.
The Measurement Module is used to add dimensions, measurement and counting
to the documents. Media objects in the document can be placed along these
dimensions.
The Location Address Module provides the means to address locations in a
document. The following addressing modes are supported:
HyTime Engine
The application layer of the database stores the objects and their attributes, as
defined by the DTD. An application presenter gets the information it needs for the
presentation of the database content, including the links between objects and the
presentation coordinates to use for the presentation, from the database.
25.3.3 MODE
multimedia object is requested from the application. Thereby, it adapts the QoS of the
presentation to the available resources, taking into account a cost model and the QoS
requirements given by the application.
Synchronization Model
Local Synchronizer
Priorities may be used for basic objects to reflect their sensitivity to delays in their
presentation. For example, audio objects will usually be assigned higher priorities than
video objects because a user recognizes jitter in an audio stream earlier than jitter in a
video stream. Presentations with higher priorities are preferred over objects with lower
priorities in both presentation and synchronization.
The HyTime architectural forms are grouped into the following modules
25.7 References
1. "Multimedia Computing, Communication and Application" by Steinmetz and Klara Nahrstedt
2. "Multimedia Systems, Standards, and Networks" by Atul Puri, Tsuhan Chen
3. "Multimedia Storage and Retrieval: An Algorithmic Approach" by Jan Korst, Verus Pronk