
COMPUTER GRAPHICS AND MULTIMEDIA

SYLLABUS
UNIT-I
Output Primitives: Points and Lines – Line-Drawing Algorithms – Loading the Frame Buffer – Line
Function – Circle-Generating Algorithms – Ellipse-Generating Algorithms.

Attributes of Output Primitives: Line Attributes – Curve attributes – Color and Grayscale
Levels – Area-fill attributes – Character Attributes.

UNIT-II
2D Geometric Transformations: Basic Transformations – Matrix Representations –
Composite Transformations – Other Transformations. 2D Viewing: The Viewing Pipeline –
Viewing Co-ordinate Reference Frame – Window-to-Viewport Co-ordinate Transformation - 2D
Viewing Functions – Clipping Operations.

UNIT-III
Text: Types of Text – Unicode Standard – Font – Insertion of Text – Text compression – File
formats. Image: Image Types – Seeing Color – Color Models – Basic Steps for Image Processing
– Scanner – Digital Camera – Interface Standards – Specification of Digital Images – CMS –
Device Independent Color Models – Image Processing software – File Formats – Image Output
on Monitor and Printer.

UNIT-IV
Audio: Introduction – Acoustics – Nature of Sound Waves – Fundamental Characteristics
of Sound – Microphone – Amplifier – Loudspeaker – Audio Mixer – Digital Audio –
Synthesizers – MIDI – Basics of Staff Notation – Sound Card – Audio Transmission – Audio File
formats and CODECs – Audio Recording Systems – Audio and Multimedia – Voice Recognition
and Response - Audio Processing Software.

UNIT-V
Video: Analog Video Camera – Transmission of Video Signals – Video Signal Formats –
Television Broadcasting Standards – PC Video – Video File Formats and CODECs – Video
Editing – Video Editing Software. Animation: Types of Animation – Computer Assisted
Animation – Creating Movement – Principles of Animation – Some Techniques of Animation –
Animation on the Web – Special Effects – Rendering Algorithms. Compression: MPEG-1 Audio
– MPEG-1 Video – MPEG-2 Audio – MPEG-2 Video.

TEXT BOOKS
1. COMPUTER GRAPHICS – Donald Hearn, M. Pauline Baker, 2nd Edition, PHI
2. PRINCIPLES OF MULTIMEDIA – Ranjan Parekh, 2007, TMH.

Rajeshkanna, Assistant Professor, Dept. of BCA, Dr. N.G.P. Arts & Science College, Coimbatore.


UNIT-I
QUESTIONS
SECTION A
1. The slope-intercept line equation is given by ______.
2. Expand DDA.
3. ______ is used for photorealistic images and ______ for computer drawings.
4. ______ is a faster method for calculating pixel positions.
5. The efficient raster line algorithm was developed by ______.
6. The circle is a frequently used component in ______ and ______.
7. In the midpoint method, the circle function is given by ______.
8. The general equation of an ellipse is stated as ______.
9. The eccentric circle of an ellipse is called ______.
10. For any circle point, the distance relationship is expressed by ______.
SECTION B
1. Explain about different types of line attributes.
2. Describe the various area fill attributes.
3. Write a note on color and gray scale level.
4. Explain about points and lines.
5. Write short notes on DDA algorithm.
6. Explain about line equations.
7. Explain about properties of circles.
8. Explain about properties of ellipse.
9. Write short notes on line width.
10. Write a short note on styles of fill attributes.
SECTION C
1. With its procedure, explain Bresenham's line-drawing algorithm.
2. Explain briefly about circle generating algorithm.
3. Briefly discuss the midpoint ellipse algorithm.
4. Describe about area fill attributes.
5. Explain briefly about DDA line drawing algorithm.
6. Discuss briefly the character attributes.
7. Explain the ellipse generating algorithm.
8. Discuss the midpoint circle algorithm in detail.
9. Explain briefly about properties of circle and ellipse generating algorithm.
10. Explain briefly about line attributes


UNIT-II
QUESTIONS
SECTION A
1. The translation distance pair (tx, ty) is called ______.
2. ______ is a transformation that moves objects without deformation.
3. The rotation point is also called ______.
4. Unequal values of sx and sy result in ______.
5. ______ coordinates are used in mathematics to refer to the effect of Cartesian equations.
6. ______ coordinates are used in mathematics to refer to the effect of Cartesian equations.
7. A two-dimensional scene selected for display is called a ______.
8. The windowing transformation is also called ______.
9. The viewing window is also called ______.
10. ______ is the binary region code based on the clipping rectangle.
SECTION B
1. Describe the reflection transformation.
2. List the types of clipping and explain point clipping.
3. Explain the matrix representation of 2D viewing.
4. Explain the viewing coordinate reference frame.
5. Define linear transformation and define their properties.
6. Give the matrix representation of rotation and scaling.
7. Explain the concept of other transformation.
8. Write short notes on windowing transformation.
9. Explain text clipping briefly.
10. Explain about viewing transformation.
SECTION C
1. Explain briefly about composite transformation.
2. Discuss in detail about window to viewport transformation.
3. Explain in detail about various transformations.
4. Define clipping and describe the clipping operation in detail
5. Explain the Cohen–Sutherland line-clipping algorithm in detail.
6. Explain the 2D transformation in detail.
7. Explain briefly about two-dimensional rotation.
8. Discuss briefly about pivot point rotation and fixed point scaling.
9. Discuss briefly about shear transformation.
10. Explain about two dimensional viewing pipelines.


UNIT-III
Text: Types of Text – Unicode Standard – Font – Insertion of Text – Text compression – File
formats. Image: Image Types – Seeing Color – Color Models – Basic Steps for Image Processing
– Scanner – Digital Camera – Interface Standards – Specification of Digital Images – CMS –
Device Independent Color Models – Image Processing software – File Formats – Image Output
on Monitor and Printer.

TEXT
Ever since the inception of human civilization, ideas have largely been articulated in writing. The flexibility and ease of use of the textual medium make it ideal for learning. Word processors emerged as one of the earliest application programs. In multimedia presentations, text can be combined with other media in a powerful way to present information and express moods. Text can be of various types:

• Plain Text
Consists of fixed-size characters having essentially the same type of appearance.

• Formatted Text
Text whose appearance can be changed using font parameters.

• Hypertext
Text which can serve to link different electronic documents and enable the user to jump from one to the other in a non-linear way.

Internally, text is represented via binary codes as per the ASCII table. The ASCII table is, however, quite limited in scope, and a new standard has been developed to eventually replace it. This standard, called the Unicode standard, is capable of representing international characters from various languages throughout the world. Text can be inserted into an application by various means.

The simplest way is to type text directly into the application using the keyboard; alternatively, text can be copied from a pre-existing file or application and pasted into the application.

Nowadays text can also be generated automatically from a scanned version of a paper document or image using Optical Character Recognition (OCR) software.

TYPES OF TEXT
Essentially there are three types of text that can be used to produce the pages of a document:

• Unformatted text
• Formatted text
• Hypertext

• Unformatted Text

Also known as plain text, this comprises fixed-size characters from a limited character set. The character set is the ASCII table, short for American Standard Code for Information Interchange, one of the most widely used character sets.


It basically consists of a table where each character is represented by a unique 7-bit binary code. This means there are 2^7 = 128 codewords which can be used to identify the characters.

The characters include a to z, A to Z, 0 to 9, and punctuation characters like parentheses, ampersand, single and double quotes, mathematical operators, etc. All the characters are of the same height.

In addition to the normal alphabetic, numeric and punctuation characters, collectively called printable characters, the ASCII character set also includes a number of control characters.

Whenever characters in the table are to be stored or processed by a computer, the corresponding numeric code is retrieved from the ASCII table in binary form and substituted for the actual text for internal processing. This applies to both printable and control characters; for example, each line of text in a document is terminated by a linefeed character.

Later, as requirements grew, an extended version of the ASCII table was introduced, known as the extended ASCII character set, while the original table came to be known as the standard ASCII set.
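
To make this table lookup concrete, here is a minimal Python sketch (an illustration, not part of the original notes) that prints the 7-bit ASCII codes of a few printable characters and shows that code 10 is the linefeed control character:

    for ch in ['A', 'z', '0', '&']:
        code = ord(ch)                          # numeric code from the ASCII table
        print(ch, code, format(code, '07b'))    # decimal and 7-bit binary forms

    print(chr(10) == '\n')   # control character 10 is the linefeed ending a line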

• Formatted Text

Formatted texts are those where, apart from the actual alphanumeric characters, other control characters are used to change the appearance of the characters, for example: bold, underline, italics, varying shapes, sizes and colors. Most text-processing software uses such formatting options to change text appearance.

Formatting is also used extensively in the publishing sector for the preparation of books, papers, magazines, journals and so on. In addition, a variety of document formatting options are supported to enable an author to structure a document into chapters, sections and paragraphs, with tables and graphics inserted at appropriate points. The control characters used to format the text are application-dependent and may vary from one package to another (for example, the code for bold appearance).

• Hypertext

Documents provide a method of transmitting information. Reading a document is an act of reconstructing knowledge. A book or an article on paper has a given structure and is represented in a sequential form. Although it is possible to read individual paragraphs without reading previous paragraphs, authors mostly assume sequential reading. Novels as well as movies always assume a purely sequential reception.

Technical documentation (example: manuals) often consists of a collection of relatively independent information units. There also exist many cross-references in such documentation, which lead the reader to multiple searches at different places. A hypertext document can be used to handle such situations.

The term hyper is usually used to mean something excessive (beyond super), example: hyperactive, hypertension. Here the term is used to mean certain extra capabilities imparted to normal or standard text. Like normal text, a hypertext document can be used to reconstruct knowledge through sequential reading, but additionally it can be used to link multiple documents in such a way that the user can navigate non-sequentially from one document to the other via


cross-references. These links are called hyperlinks. Hyperlinks form one of the core structures in multimedia presentations, because multimedia emphasizes a non-linear mode of presentation.

Example: A multimedia tourist brochure can have text information about various places of interest along with photographic images; it can have voice annotations about how to get to those places and the modes of travel available, and video clips showing how tourists travel and the facilities provided to them. All this information can be hyperlinked, like a list of hotels at each place along with their charges. A tourist can use a search facility to navigate to the information regarding a specific place of interest and then use the hyperlinks provided to view each category of information.

The underlined text string on which the user clicks the mouse is called an anchor, and the document which opens as a result of clicking is called the target document. On the web, target documents are specified by a specific nomenclature for web site addresses, technically known as the Uniform Resource Locator or URL. The elements of hypertext:

• Node or Anchor

The anchor is the actual visual element (text) which not only is associated with some meaning by itself but also provides an entry point to another document. An important factor in designing a user interface is how the anchor can be represented properly.

• Pointer or Link

These provide connections to other information units known as target documents. A link has to be defined at the time of creating the hyperlink, so that when the user clicks on an anchor the appropriate target document can be fetched and displayed.

UNICODE STANDARD
The Unicode standard is a new universal character coding scheme for written characters
and text .It defines a consistent way of encoding multilingual text which enables textual data to
be exchanged universally. The Unicode standard goes far beyond ASCII’S limited more than 1
million characters.

Multilingual support is provided for European, Middle Eastern and Asian languages. The
Unicode consortium was incorporated in 1991 to promote the Unicode standard. The Unicode
Technical Committee (UTC) is the working group within the consortium responsible for the
creation, maintenance, and quality of the Unicode standard
The Unicode standard deals only with character codes, representation of the glyphs are
not part of the standard. These are to be dealt with by the font vendors and hardware vendors. A
font and its rendering process define a mapping from the Unicode values to glyphs. Example:
the Hindi character ‘Pa’ is represented by the Unicode sequence 0000100100101010(U + 092A),
how it will be rendered on the screen will be decided by the font vendor. The first byte represents
the language area while the next byte represents the actual character. Some of the languages and
their corresponding codes are: Latin (00) , Greek(03) , Arabic(06), Devanagiri/ Bengali (09) ,
Oriya / Tami9l (0b) etc.

The Unicode consortium, based in California, oversees the development of the standard. Members include major software and hardware vendors like Apple, Adobe, IBM, Microsoft and HP. The consortium first published the Unicode standard 1.0 in 1991; the latest version (as of these notes) is 4.1, released in 2005. Unicode version 3.0 is identical to the ISO standard 10646. Several methods have been suggested to implement Unicode based on variations in storage space and compatibility. The mapping methods are called Unicode Transformation Formats (UTF) and Universal Character Set (UCS).
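
As an illustrative sketch of these mapping methods, the following Python snippet encodes the Devanagari character 'Pa' (U+092A) mentioned above in UTF-8 and UTF-16; the byte counts differ because each transformation format trades storage space against compatibility differently:

    pa = '\u092A'                        # Devanagari letter 'Pa'
    print(pa.encode('utf-8').hex())      # e0a4aa - three bytes in UTF-8
    print(pa.encode('utf-16-be').hex())  # 092a   - two bytes in UTF-16 (big-endian)
    print(format(ord(pa), '016b'))       # 0000100100101010, as quoted above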

FONT

Font Appearance

The appearance of each character in formatted text is determined by specifying what is known as a font name. A font name refers to a font file which contains the actual description of how the character appears. On the Windows platform, font files are stored in a specific folder called Fonts under the Windows folder. These files are usually in vector format, meaning that character descriptions are stored mathematically.

Windows calls these fonts TrueType fonts because their appearance stays the same on different devices like monitors, printers and plotters; they have a file extension of TTF. An alternative form of font file is the bitmap format, where each character is described as a collection of pixels.

Examples of default (upper row) and downloaded (lower row) fonts:

Times Roman, Arial, Century Gothic, Verdana, Courier New; ABADDON, AERO, AVQUEST, Cassandra, FAKTOS.

Some of the standard font types included with the Windows OS package are shown in the upper row. Other than these, there are thousands of font types made by various organizations, and many of them are freely downloadable over the Internet. Each application has a default font name associated with it. When a specific font requested by the user is not available in the system, the default font is used, which is assumed to be always available.

FONT STYLE AND SIZE

Font characters come in a number of sizes. Size is usually specified in a unit called the point (pt), where 1 point equals 1/72 of an inch. Sometimes the size may also be specified in pixels. Standard characters in textual documents usually range from 10 to 12 pt in size, while the upper limit may go well beyond 100.

A specified font type can be displayed in a variety of styles. Some of the common styles used are: bold, italics, underline, superscript and subscript. Some application packages allow changing the horizontal gap between characters, called kerning, and the vertical gap between two lines of text, called leading.
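
Since 1 pt = 1/72 inch, converting a point size to pixels only needs the device resolution. A minimal sketch (the 96 dpi default is an assumption, typical for monitors):

    def points_to_pixels(points, dpi=96.0):
        # 1 pt = 1/72 inch, so pixel height scales with device resolution.
        return points * dpi / 72.0

    print(points_to_pixels(12))        # 16.0 px for 12 pt text at 96 dpi
    print(points_to_pixels(12, 300))   # 50.0 px on a 300 dpi printer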

INSERTION OF TEXT
Text can be inserted into a document using a variety of methods.

• Using a Keyboard

The most common way of inserting text into a digital document is by typing the text using an input device like the keyboard. Usually a text-editing software, like Microsoft Word, is used to control the appearance of the text, which allows the user to manipulate variables like

the font, size, style, color, etc. Some image-processing and multimedia-authoring software provide a separate text-editing window where the user can type text and integrate it with the rest of the media, like background images.

• Copying and Pasting

Another way of inserting text into a document is by copying text from a pre-existing digital document. The existing document is opened using the corresponding text-processing program and portions of the text are selected using the keyboard or mouse. Using the copy command, the selected text is copied to the clipboard. This text can then be inserted into another existing document, a new document, or even another place in the same document by choosing the paste command, whereupon the text is copied from the clipboard into the target document. The text can then be edited as per the user's requirements.

TEXT COMPRESSION

Large text documents covering a number of pages may take a lot of disk space. We can apply compression algorithms to reduce the size of the text file during storage. A reverse algorithm must be applied to decompress the file before its contents can be displayed on screen. However, to be meaningful, the compression-decompression process must not change the textual content in any way, not even a single character. The following compression methods are applied to text, as explained below.

• Huffman Coding

This type of coding is intended for applications in which the text to be compressed has known characteristics in terms of the frequency of occurrence of characters. Using this information, instead of fixed-length codewords, an optimum set of variable-length codewords is derived such that the shortest codewords represent the most frequently occurring characters. This approach is called Huffman coding.
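
The following Python sketch illustrates the idea (a simplified illustration, not a production encoder): it repeatedly merges the two least-frequent subtrees, prefixing '0' and '1', so the most frequent characters end up with the shortest codewords:

    import heapq, itertools
    from collections import Counter

    def huffman_codes(text):
        # Count character frequencies, then repeatedly merge the two
        # least-frequent subtrees; '0'/'1' prefixes grow the codewords.
        freq = Counter(text)
        tie = itertools.count()          # unique tie-breaker for the heap
        heap = [(f, next(tie), {ch: ''}) for ch, f in freq.items()]
        heapq.heapify(heap)
        if len(heap) == 1:               # degenerate one-symbol text
            return {ch: '0' for _, _, d in heap for ch in d}
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {ch: '0' + code for ch, code in c1.items()}
            merged.update({ch: '1' + code for ch, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        return heap[0][2]

    text = "this is an example of huffman coding"
    codes = huffman_codes(text)
    bits = sum(len(codes[ch]) for ch in text)
    print(bits, "bits with Huffman vs", 7 * len(text), "bits in 7-bit ASCII")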

• Lempel–Ziv (LZ) Coding

In the second approach, followed by the Lempel–Ziv (LZ) method, a string of characters is used as the basis of the coding operation instead of a single character. Example: a table containing all the possible words that occur in a text document is held by both the encoder and the decoder.

• Lempel–Ziv–Welch (LZW) Coding

Most word-processing packages have a dictionary associated with them which is used for both spell checking and compression of text. Typically they contain on the order of 25,000 words, and hence 15 bits (2^15 = 32768) are required to encode an index. To encode the word "compression" with such a scheme would require only 15 bits, instead of the 77 bits needed with 7-bit ASCII codewords. The above method may not, however, produce efficient results for documents in which only a small subset of the words appear in the dictionary.
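
A minimal Python sketch of the LZW idea described above (illustrative only): the dictionary starts with single characters and learns longer strings as it goes, so repeated strings collapse into single indices:

    def lzw_encode(text):
        # Dictionary starts with single characters (here: 7-bit ASCII)
        # and learns a new string on every mismatch.
        dictionary = {chr(i): i for i in range(128)}
        current, output = '', []
        for ch in text:
            if current + ch in dictionary:
                current += ch                       # extend the current match
            else:
                output.append(dictionary[current])  # emit index for the match
                dictionary[current + ch] = len(dictionary)
                current = ch
        if current:
            output.append(dictionary[current])
        return output

    message = "tobeornottobeortobeornot"
    codes = lzw_encode(message)
    print(len(codes), "indices instead of", len(message), "characters")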

FILE FORMATS

The following formats are usually used for textual documents.

• TXT (Text)


An unformatted text document created by an editor like Notepad on the Windows platform. Unformatted text documents can be used to transfer textual information between different platforms like Windows, DOS and UNIX. The data is encoded using ASCII codes, but sometimes Unicode encodings like UTF-8 or UTF-16 may be used.

• DOC (Document)

Developed by Microsoft as the native format for storing documents created by the MS Word package. It contains a rich set of formatting capabilities. Since it requires proprietary software, it is not considered a document exchange format.

• RTF (Rich Text Format)

The WordPad editor earlier created RTF files by default, although it has now switched to the DOC format. RTF control codes are human-readable, similar to HTML code.
Example: this is {\b bold} text; \par a new paragraph begins.

• PDF (Portable Document Format)

Developed by Adobe Systems for cross-platform exchange of documents. In addition to text, the format also supports images and graphics. PDF is an open standard, and anyone may write programs that read and write PDFs without any associated royalty charges. PDF readers can be downloaded for free from the Adobe site, and there are several free open-source readers available.

• PS (PostScript)

PostScript is a page description language used mainly for desktop publishing. A page description language is a high-level language that can describe the contents of a page such that it can be accurately displayed on output devices, usually a printer. A PostScript interpreter inside the printer converts the vectors back into raster dots to be printed. This allows arbitrary scaling, rotation and other transformations.

IMAGE
After text, the next element that comes under the purview of multimedia is the picture. A picture, being 'worth a thousand words', can be made to impart a large amount of information in a compact way. It is a fact that most people dislike going through pages of text, especially on a computer screen, and so it has been the endeavor of most multimedia developers to supplement words with pictures in presentations.

The pictures that we see in our everyday life can be broadly classified into two groups: those that depict some real-world situation, typically captured by a camera, and those that have been drawn or painted and can depict any fictitious scenario.

The first type of pictures are called images and the second type are called graphics. Images can be pure black-and-white, grayscale having a number of grey shades, or color, containing a number of color shades.

The input stage deals with the issues of converting hardcopy paper images into electronic versions. This is usually done via a device called the scanner. A number of electronic sensors within the scanner each convert a small portion of the original image into


pixels and store them as binary numbers within the storage device of a computer. While the scanner is used to digitize documents, another device called the digital camera can convert a real-world scene into a digital image. Digital cameras also contain a number of these electronic sensors, known as Charge-Coupled Devices (CCD), and essentially operate on the same principle as the scanner. Once a digital version of an image is generated, editing software is used to manipulate the image in various ways.

IMAGE TYPES

Images that we see in our everyday lives can be categorized into various types.

• Hardcopy vs Softcopy

The typical images that we usually come across are pictures that have been printed on paper or some other kind of surface like plastic, cloth, wood, etc. These are called hardcopy images because they have been printed on solid surfaces.

Sometimes images are also seen in electronic form on a TV screen or computer monitor. Such images have been transformed from hardcopy images or real objects into electronic form using specialized procedures and are referred to as softcopy images.

• Continuous Tone, Half-tone and Bitone

Photographs are known as continuous-tone images because they are usually composed of a large number of varying tones or shades of colors. Sometimes, due to limitations of the display or printing devices, all the colors of a photograph cannot be represented adequately. In those cases a subset of the total number of colors is displayed. Such images are called partial-tone or half-tone images.

SEEING COLOR
The phenomenon of seeing color depends on a triad of factors: the nature of light, the interaction of light and matter, and the physiology of human vision. Each factor plays a vital part, and the absence of any one would make seeing color impossible. Light is a form of energy known as electromagnetic radiation.

Electromagnetic radiation consists of a large number of waves with varying frequencies and wavelengths. At one extreme are the radio waves, having the longest wavelengths (several kilometers), and at the other extreme are the gamma rays, with the shortest wavelengths (0.1 nanometer). Out of the total electromagnetic spectrum, a small range of waves causes sensations of light in our eyes. This is called the visible spectrum.

The second part of the color triad is human vision. The retina is the light-sensitive part of the eye and its surface is composed of photoreceptors or nerve endings. These receive the light


and pass it along through the optic nerve as a stimulus to the brain.

When light strikes an opaque object (i.e. an object that does not transmit light), the object's surface plays an important role in determining whether the light is fully reflected, fully diffused, or some of both. A smooth or glossy surface is one made up of particles of equal or nearly equal refractive index. These surfaces reflect light at an intensity and angle corresponding to the incident beam.

Scattering or diffusion is another aspect of reflection. When a substance contains particles of a different refractive index, a light beam striking the substance will be scattered. The amount of light scattered depends on the difference between the two refractive indices and also on the size of the particles. Most commonly, light striking an opaque object will be both reflected and scattered. This happens when an object is neither wholly glossy nor wholly rough.

COLOR MODELS

Color models help us in recognizing and expressing information related to color. In our everyday life we see a large variety of colors which we cannot all express by name. Researchers have found that most of the colors we see around us can be derived by mixing a few elementary colors. These elementary colors are known as primary colors. Primary colors mixed in varying proportions produce other colors, called composite colors. This provides us with a way to refer to any arbitrary color: by specifying the names and proportions of the primary colors from which it can be produced.

Two primary colors mixed in equal proportions produce a secondary color. The primary colors, along with the total range of composite colors they can produce, constitute a color model. There can be multiple color models, each with its own set of primary and composite colors.

• RGB Model

The RGB color model is used to describe the behavior of colored lights like those emitted from a TV screen or a computer monitor. This model has three primary colors: red, green and blue, in short RGB.

Inside a CRT, electron beams falling on red, green and blue phosphor dots produce corresponding colored lights, which mix together in different proportions to produce lights of composite colors.

The proportions of the colors are determined by the beam strengths. An electron beam having the maximum intensity falling on a phosphor dot creates 100% of the corresponding color; 50% of the color results from a beam having half the peak strength. Proportions are measured in percentage values.

An arbitrary color, say orange, can be specified as 96% red, 40% green and 14% blue. This means that to produce orange-colored light on the screen, the three electron beams striking the red, green and blue phosphors need to have 96%, 40% and 14% of their maximum intensities respectively. The three primary colors in varying percentages form composite colors.
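
A small illustrative sketch of this percentage specification: frame buffers typically store each channel as an 8-bit value from 0 to 255, so the orange above translates as follows (the rounding convention is an assumption):

    def rgb_percent_to_bytes(r_pct, g_pct, b_pct):
        # Scale 0-100% intensities to the 0-255 range of an 8-bit channel.
        return tuple(round(p * 255 / 100) for p in (r_pct, g_pct, b_pct))

    print(rgb_percent_to_bytes(96, 40, 14))   # the orange above -> (245, 102, 36)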


• CMYK Model

The RGB model is only valid for describing the behavior of colored lights. When specifying colors of ink on paper we require a different model.

Consider a blue spot of ink on paper. The ink looks blue because it reflects only blue light to our eyes while absorbing the other color components from white light. If we now mix a spot of red ink with the blue ink, what result do we expect? If the RGB model were followed, the resultant would be magenta. But analyze the situation: the red ink absorbs the blue light reflected from the blue ink, and similarly the blue ink absorbs the red light from the red ink.

The result is that no light comes from the ink mixture to our eyes and it looks black. Clearly the RGB model is not being followed here, and we need a new model to explain this behavior: the subtractive CMYK model, whose primary colors are cyan, magenta and yellow, with black (K) added for practical printing.

Thus the secondary colors of the CMYK model are the same as the primary colors of the RGB model and vice versa. These two models are therefore known as complementary models.
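
The complementary relationship can be written as a simple conversion. The following sketch uses the common RGB-to-CMYK formula with black (K) extracted as the shared component; it is an illustration, not taken from the notes:

    def rgb_to_cmyk(r, g, b):
        # r, g, b in 0.0-1.0; K is the common (black) part of the three inks.
        k = 1 - max(r, g, b)
        if k == 1:                    # pure black: avoid division by zero
            return 0.0, 0.0, 0.0, 1.0
        c = (1 - r - k) / (1 - k)
        m = (1 - g - k) / (1 - k)
        y = (1 - b - k) / (1 - k)
        return c, m, y, k

    print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red = magenta + yellow ink, no cyan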

BASIC STEPS FOR IMAGE PROCESSING

Image processing is the name given to the entire process involving the input, editing and output of images from a system. When studied in connection with multimedia, it implies digital manipulation of images. There are three basic steps.

• Input

Image input is the first stage of image processing. It is concerned with getting natural images into a computer system for subsequent work. Essentially it deals with the conversion of analog images into digital form. This is mainly done using two devices. The first is the scanner, which can convert a printed image or document into digital form. The second is the digital camera, which digitizes real-world images, similar to how a conventional camera works. Sometimes we can also start with ready-made digital images.

• Editing

After the images have been digitized and stored as files on the hard disk of a computer,

they are changed or manipulated to make them more suitable for specific requirements. This step is called editing, and it usually involves one or more image-editing software packages which provide various tools and functionality for editing the images. Before the actual editing process can begin, an important step called color calibration needs to be performed to ensure that the image looks consistent when viewed on multiple monitors. After editing, the images are usually compressed using mathematical algorithms and then saved in specific file formats.

• Output

Image output is the last stage in image processing, concerned with displaying the edited image to the user. The image can either be displayed in a stand-alone manner or as part of some application like a presentation or web page. In most cases the image needs to be displayed on-screen via a monitor. However, for some applications, like printing a catalog or brochure, the images need to be printed on paper using a printer.

SCANNER

For images, digitization involves physical devices like the scanner or digital camera. The scanner is a device used to convert analog images into digital form. The most common type of scanner for the office environment is the flatbed scanner. It looks like a photocopying machine, with a glass panel and a moving scan head below it. The paper document to be scanned is placed face down on the glass panel, and the scanner is activated using software from a computer to which the scanner is attached.

• Bar-code Scanners

A barcode scanner is designed specifically to read barcodes printed on various surfaces. A barcode is a machine-readable representation of information in a visual format. Traditionally barcodes use a set of parallel vertical lines whose widths, and the spacing between them, encode information. Nowadays they also come in other forms, like dots and concentric circles. Barcodes relieve the operator of typing strings into a computer; the encoded information is read directly by the scanner. They are extensively used to indicate details of products at retail outlets and in other automated environments like baggage routing at airports.

One type of barcode serves only as a key into an external database; the other type holds the complete information and does not need external databases. This led to barcode symbologies that can represent more than digits, typically the entire ASCII character set.

A laser barcode scanner is more expensive than an LED one but is capable of scanning barcodes at a distance of about 25 cm. A barcode scanner can be either hand-held or of the stationary type. Hand-held types are small devices that can be held in the hand and usually contain a switch for triggering the light emission. They can be either wedge-shaped or pen-shaped, both of which are slid over the barcode for scanning.

COLOR SCANNING

Since the CCD elements are sensitive to the brightness of light, the pixels essentially store only the brightness information of the original image. This is also known as luminance (or luma) information. To include the color or chrominance (chroma) information, there are three CCD elements for each pixel of the image. These three elements are sensitive to the red,


blue and green components of light.

White light reflected off the paper document is split into its primary color components by a glass prism and made to fall on the corresponding CCD sub-components. The signal outputs from the sub-components are combined to produce a color scanned image. The pixels in this case contain both the luma and chroma information.

DIGITAL CAMERA

CONSTRUCTION AND WORKING PRINCIPLE

Apart from the scanner, used to digitize paper documents and film, another device used to digitize real-world images is the digital camera. Just like a conventional camera, a digital camera has a lens through which light from real-world objects enters the camera. But instead of falling on film to initiate a chemical reaction, the light falls on a CCD array similar to that inside the scanner. Just as in a scanner, the voltage pulses from the CCD array travel to an ADC, where they are converted to binary representations and stored as a digital image file.

Unlike a scanner, a digital camera is usually not attached to a computer via a cable. The camera has its own storage facility inside it, in early models usually in the form of a floppy drive which could save the images to a floppy disc. Images cannot be stored on floppy discs in their raw form, as they would take too much space; instead they are compressed to reduce their file sizes and stored, usually in the JPEG format. This is a lossy compression technique and results in a slight loss of image quality. Each floppy inserted into the camera could hold about 15 to 25 images, depending on the amount of compression. The floppy could then simply be taken out of the camera, inserted into a PC and the files copied. Users are usually not permitted to set parameters like bit-depth and resolution; the digital camera uses its default set of values. Earlier digital cameras had CCD arrays of about 640x480 elements, but modern high-end cameras have as many as 2048x1536 elements in their CCD arrays. Modern digital cameras contain memory chips for storing images, which can range from 50 MB to 500 MB or beyond.


INTERFACE STANDARDS

Interface standards determine how data from acquisition devices like scanners and digital cameras flows to the computer in an efficient way. Two main interface standards exist: TWAIN and ISIS.

• TWAIN

TWAIN is a very important standard in image acquisition, developed by Hewlett-Packard, Kodak, Aldus and Logitech, which specifies how image acquisition devices such as scanners, digital cameras and other devices transfer data to software applications. It is basically an image-capture API for the Microsoft Windows and Apple Macintosh platforms. The standard was first released in 1992; version 1.9 dates from January 2000.

• Image and Scanner Interface Specification (ISIS)

The second important standard for document scanners is the Image and Scanner Interface Specification (ISIS). It was developed by Pixel Translations, who retain control over its development and licensing. ISIS has a wider set of features than TWAIN and typically uses the SCSI-2 interface, while TWAIN mostly uses the USB interface.

SPECIFICATIONS OF DIGITAL IMAGES

• Pixel Dimensions

The number of pixels along the height and width of a bitmap image is known as the pixel dimensions of the image. The display size of an image on-screen is determined by the pixel dimensions of the image plus the size and settings of the monitor.

• Image Resolution

The number of pixels displayed per unit length of the image is known as the image resolution, usually measured in pixels per inch (ppi). An image with a high resolution contains more, and therefore smaller, pixels than an image with a low resolution.

• File Size

The digital size of an image, measured in kilobytes, megabytes, or gigabytes, is proportional to the pixel dimensions of the image. Images with more pixels may reproduce more detail, but they require more storage space and may be slower to edit and print.

• Color Depth

This defines the number of bits required to store the information of each pixel in the image, and in turn determines the total number of possible colors that can be displayed in the image. Photographic images usually need a depth of 24 bits for true-color representation.
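
These specifications are linked by simple arithmetic: uncompressed file size is pixel count times color depth. A minimal sketch (the example dimensions are illustrative):

    def image_size_bytes(width_px, height_px, bits_per_pixel=24):
        # Uncompressed size: one bits_per_pixel sample per pixel.
        return width_px * height_px * bits_per_pixel // 8

    size = image_size_bytes(2048, 1536, 24)     # a 24-bit, 2048 x 1536 image
    print(size, "bytes, about", round(size / 2**20, 1), "MB")   # ~9.0 MB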

CONTENT MANAGEMENT SYSTEM (CMS)

A Content Management System (CMS) is a software system used for content management. This includes computer files, image media, audio files, electronic documents and web content. The idea behind a CMS is to make these files available inter-office as well as over the web. A content management system is often used for archiving as well. Many companies use a CMS to store files in a non-proprietary form, and to share files with ease; most systems use server-based software, further broadening file availability.

A content management system may support the following features:

• Import and creation of documents and multimedia material.

• Identification of all key users and their content-management roles.

• The ability to assign roles and responsibilities to different content categories or types.

• Definition of content workflow tasks, often coupled with event messaging so that content managers are alerted to changes in content.

• The ability to track and manage multiple versions of a single instance of content.

• The ability to publish the content to a repository to support access to the content. Increasingly, the repository is an inherent part of the system, and incorporates enterprise search and retrieval.

• Some content management systems allow the textual aspect of content to be separated to some extent from formatting. For example, the CMS may automatically set default colors, fonts, or layout.

IMAGE PROCESSING SOFTWARE

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. The acquisition of images (producing the input image in the first place) is referred to as imaging.

FILE FORMATS

Multimedia data and information must be stored in disk files using formats similar to image file formats. Multimedia formats, however, are much more complex than most other file formats because of the wide variety of data they must store. Such data includes text, image data, audio and video data, computer animations, and other forms of binary data such as Musical Instrument Digital Interface (MIDI) control information and graphical fonts. Typical multimedia formats do not define new methods for storing these types of data. Instead, they offer the ability to store data in one or more existing data formats that are already in general use.

For example, a multimedia format may allow text to be stored as PostScript or Rich Text Format (RTF) data rather than in conventional ASCII plain-text format. Still-image bitmap data may be stored as


BMP or TIFF files rather than as raw bitmaps. Similarly, audio, video, and animation data can be stored using industry-recognized formats specified as being supported.

Multimedia formats are also optimized for the types of data they store and the medium on which they are stored. Multimedia information is commonly stored on CD-ROM. Unlike conventional disk files, CD-ROMs are limited in the amount of information they can store. A multimedia format must therefore make the best use of available data-storage techniques to store data efficiently on the CD-ROM medium.

There are many types of CD-ROM devices and standards that may be used by multimedia applications. If you are interested in multimedia, you should become familiar with them.

The original Compact Disc, first introduced in the early 1980s, was used for storing only audio information, using the CD-DA (Compact Disc-Digital Audio) standard produced by Philips and Sony. CD-DA (also called the Red Book) is an optical data storage format that allows the storage of up to 74 minutes of audio (about 747 megabytes of data) on a conventional disc.

The CD-DA standard evolved into the CD-XA (Compact Disc-Extended Architecture) standard, or what we call the CD-ROM (Compact Disc-Read Only Memory). CD-XA (also called the Yellow Book) allows the storage of both digital audio and data on a CD-ROM. Audio may be combined with data, such as text, graphics, and video, so that it may all be read at the same time. An ISO 9660 file system may also be encoded on a CD-ROM, allowing its files to be read by a wide variety of different computer system platforms.

The CD-I (Compact Disc-Interactive) standard defines the storage of interactive multimedia data. CD-I (also called the Green Book) describes a computer system with audio and video playback capabilities designed specifically for the consumer market. CD-I units allow the integration of fully interactive multimedia applications into home computer systems.

A still-evolving standard is CD-R (Compact Disc-Recordable, or Compact Disc-Write Once), which specifies a CD-ROM that may be written to by a personal desktop computer and read by any CD-ROM player.
UNIT-III QUESTIONS
SECTION A
1. TIFF stands for ______.
2. The ______ format is used to exchange files between applications and computer platforms.
3. Adjusting the space between lines is called ______.
4. Define text.
5. Antialiasing is also known as ______.
6. Define image.
7. The little decoration at the end of a letter stroke is called a ______.
8. The number of pixels displayed per unit length is called ______.
9. CMS stands for ______.
10. The quality of a scanned image is determined by its ______.
SECTION B
1. Give a brief account on device independent color model.
2. Describe the use of digital camera in multimedia.


3. Explain briefly the basic image types.


4. Write a short note on image processing software.
5. Explain the file format for text.
6. Write short notes on Scanner.
7. Write short notes on Digital camera.
8. List and explain the specification of digital images.
9. Discuss about various text insertion methods.
10. Explain the types of text.
SECTION C
1. Discuss in detail the various image types and image file formats.
2. Explain the RGB color model in detail.
3. Describe in detail device independent color models.
4. Explain briefly about: a) Text compression b) Scanner.
5. What are the color models? Explain.
6. Explain the basic steps for image processing in detail with neat diagram.
7. Discuss in brief about the major text file formats.
8. List and explain the specification of digital images.
9. Discuss about various text insertion methods.
10. Explain the basic steps of image processing.

UNIT-IV
Audio: Introduction – Acoustics – Nature of Sound Waves – Fundamental Characteristics of Sound –
Microphone – Amplifier – Loudspeaker – Audio Mixer – Digital Audio – Synthesizers – MIDI – Basics of
Staff Notation – Sound Card – Audio Transmission – Audio File Formats and CODECs – Audio Recording
Systems – Audio and Multimedia – Voice Recognition and Response – Audio Processing Software.

AUDIO

After text, images and graphics, the next element to be used extensively in multimedia is sound. Sound is a form of energy capable of flowing from one place to another through a material medium.

ACOUSTICS

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound and infrasound.

A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical or audio engineer. The application of acoustics can be seen in almost all aspects of modern society, the most obvious being the audio and noise-control industries.


NATURE OF SOUND WAVES

Sound is a kind of longitudinal wave, in which the particles oscillate to and fro in the direction of wave propagation. Sound waves cannot be transmitted through a vacuum. The transmission of sound requires a medium, which can be solid, liquid, or gas.

Acoustics is the branch of science dealing with the study of sound and is concerned with the generation, transmission and reception of sound waves. The application of acoustics in technology is called acoustical engineering.

CHARACTERISTICS OF SOUND

Sound waves travel great distances in a very short time, but as the distance increases the waves tend to spread out. As the sound waves spread out, their energy simultaneously spreads over an increasingly larger area. Thus, the wave energy becomes weaker as the distance from the source increases.

Sounds may be broadly classified into two general groups. One group is noise, which includes sounds such as the pounding of a hammer or the slamming of a door. The other group is musical sounds, or tones. The distinction between noise and tone is based on the regularity of the vibrations, the degree of damping, and the ability of the ear to recognize components having a musical sequence.

• Amplitude

The amplitude of a wave is the maximum displacement of a particle in the path of the wave, and is a measure of the peak-to-peak height of the wave. The physical manifestation of amplitude is the intensity or energy of the wave.

• Frequency

Frequency measures the number of vibrations per second of a particle in the path of a wave. The physical manifestation of the frequency of a sound wave is the pitch of the sound. A high-pitched sound, like that of a whistle, has a higher frequency than a dull, flat sound, like that of a drum. Frequency is measured in a unit called the hertz, denoted by Hz.
• Musical Note and Pitch

Sounds pleasant to hear are called musical, and those unpleasant to our ears are called noise. Musical sounds most commonly originate from vibrating strings, as in guitars and violins; vibrating plates, as in drums and tabla; and vibrating air columns, as in pipes and horns. Musicology is the scientific study of music, which attempts to apply methods of systematic investigation and research to understanding the principles of musical art.

MICROPHONE

A microphone (colloquially called a mic or mike) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. In 1876, Emile Berliner invented the first microphone, used as a telephone voice transmitter.

Microphones are used in many applications such as telephones, tape recorders, karaoke systems, hearing aids, motion-picture production, live and recorded audio engineering, FRS radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, and non-acoustic purposes such as ultrasonic checking or knock sensors.

AMPLIFIER

Generally, an amplifier (or simply amp) is any device that changes, usually increases, the amplitude of a signal. The relationship of the input to the output of an amplifier, usually expressed as a function of the input frequency, is called the transfer function of the amplifier, and the magnitude of the transfer function is termed the gain.

In popular use, the term usually describes an electronic amplifier, in which the input "signal" is usually a voltage or a current. In audio applications, amplifiers drive the loudspeakers used in PA systems to make the human voice louder or play recorded music. Amplifiers may be classified according to the input (source) they are designed to amplify (such as a guitar amplifier, to perform with an electric guitar), the device they are intended to drive (such as a headphone amplifier), the frequency range of the signals (audio, IF, RF, and VHF amplifiers, for example), whether they invert the signal (inverting and non-inverting amplifiers), or the type of device used in the amplification (valve or tube amplifiers, FET amplifiers, etc.).
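
As a worked illustration of gain (the decibel convention for voltage signals is standard practice, though not spelled out in the notes above):

    import math

    def voltage_gain_db(v_in, v_out):
        # Standard decibel convention for voltage (amplitude) ratios.
        return 20 * math.log10(v_out / v_in)

    print(voltage_gain_db(0.1, 1.0))   # a 10x voltage amplifier has 20 dB gain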

LOUDSPEAKER

A loudspeaker (or "speaker") is an electroacoustic transducer that converts an electrical signal into sound. The speaker moves in accordance with the variations of the electrical signal and causes sound waves to propagate through a medium such as air or water. After the acoustics of the listening space, loudspeakers (and other electroacoustic transducers) are the most variable elements in a modern audio system and are usually responsible for most distortion and audible differences when comparing sound systems.

AUDIO MIXER

In professional studios, multiple microphones may be used to record multiple tracks of sound at a time, for example when recording the performance of an orchestra. A device called an audio mixer is used to record these individual tracks and edit them separately. Each of these tracks has a number of controls for adjusting the volume, tempo, mute, etc.


DIGITAL AUDIO

Digital audio is created when a sound wave is converted into numbers, a process referred to as digitizing. It is possible to digitize sound from a microphone, a synthesizer, existing tape recordings, live radio and television broadcasts, and popular CDs. Digitized sound is sampled sound: every nth fraction of a second, a sample of the sound is taken and stored as digital information in bits and bytes. The quality of this digital recording depends upon how often the samples are taken.
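
The storage cost of sampling follows directly from the sampling rate, sample size and channel count. A minimal sketch using CD-quality parameters (44100 Hz, 16-bit, stereo):

    def audio_bytes(seconds, rate_hz=44100, bits=16, channels=2):
        # bytes = duration x samples/second x channels x bytes/sample
        return seconds * rate_hz * channels * bits // 8

    one_minute = audio_bytes(60)    # CD-quality stereo
    print(one_minute, "bytes, about", round(one_minute / 2**20, 1), "MB")  # ~10.1 MB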

SYNTHESIZER

• Polyphony

The polyphony of a sound generator refers to its ability to play more than one note at a time. Polyphony is generally measured or specified as a number of notes or voices. Most of the early music synthesizers were monophonic, meaning that they could only play one note at a time. If you pressed five keys simultaneously on the keyboard of a monophonic synthesizer, you would only hear one note.

• Sounds

The different sounds that a synthesizer or sound generator can produce are sometimes called "patches", "programs", "algorithms", or "timbres". Programmable synthesizers commonly assign "program numbers" (or patch numbers) to each sound. For instance, a sound module might use patch number 1 for its acoustic piano sound and patch number 36 for its fretless bass sound.

A synthesizer or sound generator is said to be multitimbral if it is capable of producing two or more different instrument sounds simultaneously. If a synthesizer can play five notes simultaneously and can produce a piano sound and an acoustic bass sound at the same time, then it is multitimbral. With enough notes of polyphony and "parts" (multitimbral capability), a single synthesizer could produce the entire sound of a band or orchestra.

Multitimbral operation will generally require the use of a sequencer to send the various MIDI messages required. For example, a sequencer could send MIDI messages for a piano part on channel 1, bass on channel 2, saxophone on channel 3, drums on channel 10, and so on. A 16-part multitimbral synthesizer could receive a different part on each of MIDI's 16 logical channels.

MIDI

MIDI (Musical Instrument Digital Interface) is a communication standard developed for electronic musical instruments and computers. MIDI files allow music and sound synthesizers from different manufacturers to communicate with each other by sending messages along cables connected to the devices. Creating your own original score can be one of the most creative and rewarding aspects of building a multimedia project, and MIDI is the quickest, easiest and most flexible tool for this task. The process of creating MIDI music is quite different from digitizing existing audio. To make MIDI scores, you will need sequencer software and a sound synthesizer. A MIDI keyboard is also useful to simplify the creation of musical scores. An advantage of structured data such as MIDI is the ease with which the music director can edit the data.

The MIDI file format is preferred in the following circumstances:

• When digital audio will not work, due to memory constraints and higher processing-power requirements.
• When there is a high-quality MIDI source.
• When there is no requirement for dialogue.

A digital audio file format is preferred in the following circumstances:

• When there is no control over the playback hardware.
• When the computing resources and the bandwidth requirements are high.
• When dialogue is required.

AUDIO FILE FORMATS

A file format determines the application that is to be used for opening a file. Following is a description of different file formats and the software that can be used for opening specific files.

• The MIDI Format

MIDI (Musical Instrument Digital Interface) is a format for sending music information between electronic music devices like synthesizers and PC sound cards.

• The MIDI format was developed in 1982 by the music industry.
• The MIDI format is very flexible and can be used for everything from very simple tunes to real professional music making.
• MIDI files do not contain sampled sound, but a set of digital musical instructions (musical notes) that can be interpreted by the PC's sound card.
• The downside of MIDI is that it cannot record sounds, only notes. Or, to put it another way: it cannot store songs, only tunes.
• The upside of the MIDI format is that since it contains only instructions (notes), MIDI files can be extremely small; a file of only 23 KB can play for nearly 5 minutes.
• The MIDI format is supported by many different software systems over a large range of platforms, and MIDI files are supported by all the most popular Internet browsers.

SOUND CARD
A sound card (also known as an audio card) is an internal computer expansion card that facilitates
the input and output of audio signals to and from a computer under control of computer programs.
The term sound card is also applied to external audio interfaces that use software to generate sound,
as opposed to using hardware inside the PC. Typical uses of sound cards include providing the audio
component for multimedia applications such as music composition, editing video or audio, presentation,
education and entertainment (games) and video projection. Many computers have sound capabilities built
in, while others require additional expansion cards to provide for audio capability.
Sound cards usually feature a digital-to-analog converter (DAC), which converts recorded or
generated digital data into an analog format. The output signal is connected to an amplifier, headphones, or
external device using standard interconnects, such as a TRS connector or an RCA connector. If the number
and size of connectors are too large for the space on the backplate, the connectors will be off-board,
typically using a breakout box or an auxiliary backplate.
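
To make the digital-to-analog path concrete, here is a minimal sketch, using only Python's standard wave and struct modules (the file name and tone parameters are illustrative), that generates the kind of 16-bit PCM samples a sound card's DAC would convert to an analog signal:

    # Synthesize one second of a 440 Hz tone as 16-bit PCM samples --
    # the digital data a sound card's DAC turns into an analog signal.
    import math, struct, wave

    RATE = 44100          # samples per second
    FREQ = 440.0          # tone frequency in Hz
    samples = (int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
               for n in range(RATE))

    with wave.open("tone.wav", "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16 bits per sample
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))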
More advanced cards usually include more than one sound chip to provide for higher data rates and
multiple simultaneous functionality, for example digital production of synthesized sounds (usually for
real-time generation of music and sound effects using minimal data and CPU time).

AUDIO TRANSMISSION
 Audio file format
An audio file format is a file format for storing digital audio data on a computer system. This data
can be stored uncompressed, or compressed to reduce the file size. It can be a raw bitstream, but it is usually
a container format or an audio data format with defined storage layer.

 Types of formats

It is important to distinguish between a file format and an audio codec. A codec performs the
encoding and decoding of the raw audio data while the data itself is stored in a file with a specific audio file
format. Although most audio file formats support only one type of audio data (created with an audio coder),
a multimedia container format (such as Matroska or AVI) may support multiple types of audio and video data.

There are three major groups of audio file formats:


 Uncompressed audio formats, such as WAV, AIFF, AU or raw header-less PCM;
 Formats with lossless compression, such as FLAC, Monkey's Audio (filename extension
APE), WavPack (filename extension WV), TTA, ATRAC Advanced Lossless, Apple
Lossless (filename extension m4a), MPEG-4 SLS, MPEG-4 ALS, MPEG-4 DST, Windows
Media Audio Lossless (WMA Lossless), and Shorten (SHN).
 Formats with lossy compression, such as MP3, Vorbis, Musepack, AAC, ATRAC and
Windows Media Audio Lossy (WMA Lossy).

Uncompressed audio formats


There is one major uncompressed audio format, PCM, which is usually stored in a .wav file on
Windows or in a .aiff file on Mac OS. The AIFF format is based on the Interchange File Format (IFF). The
WAV format is based on the Resource Interchange File Format (RIFF), which is similar to IFF. WAV and
AIFF are flexible file formats designed to store more or less any combination of sampling rates or bitrates.
This makes them suitable file formats for storing and archiving an original recording.
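
Because PCM is uncompressed, the size of a WAV or AIFF recording follows directly from the sampling rate, sample width, channel count and duration. A quick back-of-the-envelope sketch (values illustrative):

    # Back-of-the-envelope size of uncompressed PCM audio:
    # bytes = sample_rate * (bits_per_sample / 8) * channels * seconds
    def pcm_bytes(sample_rate, bits, channels, seconds):
        return sample_rate * (bits // 8) * channels * seconds

    # One minute of CD-quality stereo (44.1 kHz, 16-bit, 2 channels):
    size = pcm_bytes(44100, 16, 2, 60)
    print(size / 1_000_000, "MB")   # about 10.6 MB per minute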
The Broadcast Wave Format (BWF), an extension of WAV, adds a timestamp reference which allows for
easy synchronization with a separate picture element. Stand-alone, file-based, multi-track recorders from
Sound Devices, Zaxcom, HHB USA, Fostex, and Aaton all use BWF as their preferred format.
The .cda (Compact Disk Audio Track) is a small file that serves as a shortcut to the audio data for a
track on a music CD. It does not contain audio data and is therefore not considered to be a proper audio file
format.
Lossless compressed audio formats
A lossless compressed format stores data in less space without discarding any information.
Uncompressed audio formats encode both sound and silence with the same number of bits per unit
of time: encoding an uncompressed minute of absolute silence produces a file of the same size as encoding
an uncompressed minute of music.
Lossless compression formats enable the original uncompressed data to be recreated exactly. They
include the common FLAC, WavPack, Monkey's Audio and ALAC (Apple Lossless). They provide a
compression ratio of about 2:1 (i.e. their files take up half the space of the originals). Development in
lossless compression formats aims to reduce processing time while maintaining a good compression
ratio.
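
The silence example above can be made concrete with a small experiment. Here zlib merely stands in for a real lossless audio codec such as FLAC, which uses more specialised prediction, but the principle is the same:

    # Lossless compression exploits redundancy: a stretch of "silence"
    # (all-zero samples) shrinks far more than noisy, unpredictable data.
    import os, zlib

    silence = bytes(44100 * 2)        # one second of 16-bit silence
    noise = os.urandom(44100 * 2)     # incompressible random data

    print(len(zlib.compress(silence)))  # a few hundred bytes
    print(len(zlib.compress(noise)))    # roughly the original 88200 bytes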
Lossy compressed audio formats


Lossy compression enables even greater reductions in file size by removing some of the data. Lossy
compression typically achieves far greater compression than lossless compression, at the cost of somewhat
reduced quality, by simplifying the complexities of the data. A variety of techniques are used, mainly
exploiting psychoacoustics, to remove data with minimal reduction in the quality of reproduction. For many
everyday listening situations, the loss in data (and thus quality) is imperceptible.

CODEC
A codec is a device or computer program capable of encoding or decoding a digital data stream or
signal. The word codec is a portmanteau of "compressor-decompressor" or, more commonly, "coder-
decoder". A codec (the program) should not be confused with a coding or compression format or standard
– a format is a document (the standard), a way of storing data, while a codec is a program (an
implementation) which can read or write such files. In practice "codec" is sometimes used loosely to refer to
formats, however.

A codec encodes a data stream or signal for transmission, storage or encryption, or decodes it for
playback or editing. Codecs are used in videoconferencing, streaming media and
video editing applications. A video camera's analog-to-digital converter (ADC) converts its analog signals
into digital signals, which are then passed through a video compressor for digital transmission or storage.
Media codecs
Codecs are often designed to emphasize certain aspects of the media, or their use, to be encoded. For
example, a digital video (using a DV codec) of a sports event needs to encode motion well but not
necessarily exact colors, while a video of an art exhibit needs to perform well encoding color and surface
texture.
Audio codecs for cell phones need to have very low latency between source encoding and playback;
while audio codecs for recording or broadcast can use high-latency audio compression techniques to
achieve higher fidelity at a lower bit-rate.
AUDIO RECORDING SYSTEMS
A "digital audio recording device" is any machine or device of a type commonly distributed to
individuals for use by individuals, whether or not included with or as part of some other machine or device,
the digital recording function of which is designed or marketed for the primary purpose of, and that is
capable of, making a digital audio copied recording for private use. The definition of "digital audio
recording medium" is similar:

A "digital audio recording medium" is any material object in a form commonly distributed for use
by individuals, that is primarily marketed or most commonly used by consumers for the purpose of making
digital audio copied recordings by use of a digital audio recording device

AUDIO AND MULTIMEDIA


Multimedia content on the Web, by its definition - including or involving the use of several media -
would seem to be inherently accessible or easily made accessible.
However, if the information is audio, such as a RealAudio feed from a news conference or the
proceedings in a courtroom, a person who is deaf or hard of hearing cannot access that content unless
provision is made for a visual presentation of audio content. Similarly, if the content is pure video, a blind
person or a person with severe vision loss will miss the message without the important information in the
video being described.


Some Definitions
A transcript of audio content is a word-for-word textual representation of the audio, including
descriptions of non-text sounds like "laughter" or "thunder." Transcripts of audio content are valuable not
only for persons with disabilities; they also permit searching and indexing of that content, which is not
possible with audio alone.

VOICE RECOGNITION AND RESPONSE


Voice recognition and voice response promise to be the easiest method of providing a user
interface for data entry and conversational computing, since speech is the easiest, most natural means of
human communication. Voice input and output of data have now become technologically and economically
feasible for a variety of applications.
Voice recognition systems analyze and classify speech or vocal tract patterns and convert them into
digital codes for entry into a computer system. Most voice recognition systems require "training" the
computer to recognize a limited vocabulary of standard words for each user.
Voice recognition devices are used in work situations where operators need to perform data entry
without using their hands to key in data or instructions, or where voice input would provide faster
and more accurate input.
Voice response devices range from mainframe audio-response units to voice-messaging
minicomputers to speech synthesizer microprocessors. Speech microprocessors can be found in toys,
calculators, appliances, automobiles, and a variety of other consumer, commercial, and industrial products.
Voice-messaging minicomputers and mainframe audio response units use voice-response software to
verbally guide an operator through the steps of a task in many kinds of activities. They may also allow
computers to respond to verbal and touch-tone input over the telephone.

AUDIO PROCESSING SOFTWARE


There are a variety of reasons to use audio editing software; while some people use it to create and
record files, others use it to edit and restore old recordings, and some use it only to convert and change file
types. Thanks to audio editing software, more people have the opportunity to get their creative juices
flowing by using the affordable and capable applications available on the market today.

If you are curious about what you can do with audio editing software and trying to decide if it’s the
right purchase for you, here are some basic features each application allows you to perform:

 Create – With an easy-to-use basic program, even an amateur can create unique voice and music
mixes for an internet radio station, website, PowerPoint presentation or for personal use.

 Restore – Advanced filters and other tools within the audio editing software applications can restore
the sound of aged LPs or damaged audio recordings. You can also use these filters to filter out
background noises, static or other unwanted noise.

 Edit – Audio editing software applications include several editing tools, including cut and paste
options and the ability to edit tag or media information.

 Record – Through a compatible program, you can record your favorite podcasts, internet radio


stations and other types of streaming audio. You can also pull audio from a video or audio file, CDs,
DVDs or even your sound card, so that you can edit and listen to it on your computer or a portable
device.
 Convert – Most programs can convert file formats, for example from a MIDI to an MP3,
WMA, WAV or OGG file.

 We are continually researching, reviewing and ranking the top audio editing software
choices available so you have the latest and the greatest features, tools and additions at your
fingertips. We have included articles on audio editing software, along with reviews of each
top product, including Magix Music Maker, WavePad and Dexster Audio Editor.

Audio Editing Software Benefits
The top eight programs in our review ranked quite close to each other and are within a similar price
range; however, they all support different formats, editing tools and recording/burning abilities.

There are several factors to consider when looking for suitable applications. Before choosing a
program, determine what you want to do with the software. Are you interested in working with streaming
audio, making arrangements for your iPod or MP3 player, restoring sound files, forensics, creating audio
for your website or burning a CD of your band's music?

Below are the criteria TopTenREVIEWS used to evaluate each application:

Audio Editing
All good programs contain play, record, cut, copy, paste and so on; this criterion looks beyond the
essential editing tools to include tools such as equalizers, processors, mixers, preset effects, filters and
analyzing tools like the waveform or spectrogram.

Recording/Editing Ability
The best programs will capture audio from files, the sound card or from downloaded CDs as well as
from outside sources such as a line-in from a stereo, MIDI device or microphone. As a bonus, it is also
helpful if the product includes burning software so that you can use your CD or DVD burner to save edited
files. To be the most compatible, the product must be able to work with and convert many file formats, like
the various WAV file types, Windows Media Audio (WMA), AIFF (used by Apple) and MP3 files.

UNIT-IV QUESTIONS
SECTION A
1. Sound is in nature.
2. The number of vibrations per second is called .
3. Define sound.
4. Sound pressure levels are measured in .
5. MIDI stands for .
6. Sound is usually represented as .
7. is Dolby's professional sound-generation system.
8. handle low frequencies.
9. is the smallest distinguishable sound in a language.
10. Define use of Audio Processing Software.
SECTION B


1. Explain the fundamental characteristic of sound.


2. List and explain basic internal components of the sound cards.
3. Discuss the types and characteristic of synthesizers.
4. Discuss about any three audio file formats
5. Explain briefly about Audio recording systems.
6. Define acoustics and explain the nature of sound waves.
7. Explain about sound card.
8. Write a note on audio synthesizers.
9. Briefly explain about CODECS.
10. Write a note about Microphone and Amplifier.
SECTION C
1. Discuss in detail about audio transmission and audio processing software.
2. List the characteristics of sound.
3. Explain briefly about audio recording systems and their file formats.
4. Discuss in detail about voice recognition and response.
5. State the features of MIDI and differentiate MIDI from Digital Audio.
6. Discuss the important parameters of digital audio.
7. Discuss in detail about MIDI.
8. Discuss in detail about any four audio file format.
9. Explain various interfaces for audio transmission.
10. Briefly explain about audio file formats and CODECS

UNIT-V
Video: Analog Video Camera – Transmission of Video Signals – Video Signal Formats – Television
Broadcasting Standards – PC Video – Video File Formats and CODECs – Video Editing – Video Editing
Software. Animation: Types of Animation – Computer Assisted Animation – Creating Movement –
Principles of Animation – Some Techniques of Animation – Animation on the Web – Special Effects –
Rendering Algorithms. Compression: MPEG-1 Audio – MPEG-1 Video – MPEG-2 Audio – MPEG-2 Video.

VIDEO
Motion video is a combination of image and audio. It consists of a set of still images called frames,
displayed to the user one after another at a specific speed, known as the frame rate, measured in frames
per second. Video is the technology of electronically capturing, recording, processing, storing,
transmitting, and reconstructing a sequence of still images representing scenes in motion.
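
Frame size and frame rate together determine the raw data rate of uncompressed video, which is why compression (discussed later in this unit) is essential. A quick sketch with illustrative values:

    # Raw (uncompressed) video data rate:
    # bytes/sec = width * height * bytes_per_pixel * frames_per_second
    def raw_rate(width, height, bytes_per_pixel, fps):
        return width * height * bytes_per_pixel * fps

    # Standard-definition 640x480, 24-bit RGB, 25 frames per second:
    rate = raw_rate(640, 480, 3, 25)
    print(rate / 1_000_000, "MB/s")   # about 23 MB/s before compression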

ANALOG VIDEO CAMERA

CAMCORDER
A camcorder (video camera recorder) is an electronic device that combines a video camera and a
video recorder into one unit. Equipment manufacturers do not seem to have strict guidelines for the term
usage. Marketing materials may present a video recording device as a camcorder, but the delivery package
would identify content as video camera recorder. In order to differentiate a camcorder from other
devices that are capable of recording video, like cell
phones and compact digital cameras, a camcorder is generally identified as a portable device having video
capture and recording as its primary function.

The earliest camcorders employed analog recording onto videotape. Since the 1990s digital
recording has become the norm, but tape remained the primary recording medium. Starting from the early
2000s, tape has been gradually replaced by other storage media including optical discs, hard disk drives
and flash memory. All tape-based camcorders use removable media in the form of video cassettes.
Camcorders that permit using more than one type of media, like built-in hard disk drive and memory
card, are often called hybrid camcorders. Camcorders contain 3 major components: lens, imager, and
recorder. The lens gathers and focuses light on the imager. The imager (usually a CCD or CMOS sensor on
modern camcorders; earlier examples often used vidicon tubes) converts incident light into an electrical
signal. Finally, the recorder converts the electric signal into digital video and encodes it into a storable
form. More commonly, the optics and imager are referred to as the camera section.

TRANSMISSION OF VIDEO SIGNALS:


Analog transmission is a transmission method of conveying voice, data, image, signal or video
information using a continuous signal which varies in amplitude, phase, or some other property in
proportion to that of a variable.

It could be the transfer of an analog source signal using an analog modulation method such as FM or
AM, or no modulation at all.

MODES OF TRANSMISSION:
Analog transmission can be conveyed in many different fashions:
 twisted-pair or coax cable
 fiber-optic
 via water
There are two basic kinds of analog transmission, both based on how they modulate data to combine an
input signal with a carrier signal. Usually, this carrier signal is a specific frequency, and data is transmitted
through its variations. The two techniques are amplitude modulation (AM), which varies the amplitude of
the carrier signal, and frequency modulation (FM), which modulates the frequency of the carrier signal.

VIDEO SIGNAL FORMAT


COMPONENT VIDEO

This refers to a video signal which is stored or transmitted as three separate component signals. The
simplest form is the collection of R, G and B signals which usually form the output of analog video
cameras.

COMPOSITE VIDEO
For ease of signal transmission, especially in TV broadcasting, and also to reduce cable/channel
requirements, component signals are often combined into a single signal which is transmitted along a
single wire or channel.
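
A minimal sketch of how component R, G, B values can be matrixed into a luminance signal plus two colour-difference signals before being combined for composite transmission; the weights are the standard ITU-R BT.601 ones used for standard-definition video, while the function name is illustrative:

    # Derive luminance (Y) and colour-difference signals from an R, G, B
    # triple (each in 0.0-1.0), per the ITU-R BT.601 weighting. Composite
    # systems combine Y and the colour-difference components onto one wire.
    def rgb_to_ycc(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
        cb = b - y                              # blue colour difference
        cr = r - y                              # red colour difference
        return y, cb, cr

    print(rgb_to_ycc(1.0, 0.0, 0.0))  # pure red: Y=0.299, Cb=-0.299, Cr=0.701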

VIDEO RECORDING FORMATS:


BETACAM SP
Developed by Sony, perhaps the most popular component format for both field acquisition and post
production today. Betacam uses cassettes and transports similar to the old Betamax home video format, but
the similarities end there. Tape speed is six times higher, and luminance and chrominance are recorded on
two separate tracks. The two color difference signals are compressed in time by a factor of two and
recorded sequentially on a single track.

M-II

M-II (read em-two) was developed by Matsushita for Japan's national broadcasting company NHK.
Today M-II is one of the most popular broadcast quality component formats with quality similar to Betacam
SP. Large users of M-II include NHK, of course, and NBC. Recording technique is similar to Betacam SP
but uses some enhancements which compensate for the lower tape speed.

EBU C format
These machines use 1" tape in open reels. The main advantages are very fast transports and low
recording density, which makes the format rather immune to drop-outs. Tape costs are high. The units can
record single frames, which makes them popular in computer animation. Some units with vacuum capstans
can operate from stop to nominal speed within one video field. The tape makes almost a full circle around
the picture drum, and a single head is able to record and playback the entire video signal.

EBU B format
Similar to C format, but uses segmented helical scan. The diameter of the picture drum is small, and
a single video field is recorded in 6 separate tracks.

D SERIES DIGITAL FORMATS


D-1 was the first practical digital format, introduced by Sony in 1986. Although still considered a
quality reference, D-1 is expensive to buy and use and has been mostly superseded by the more cost
effective later formats.

D-2 was developed by Ampex around the same time as D-1 was introduced and is meant to be a
fully transparent storage for composite video, useful for composing "spot tapes" for programmes such as
news.
D-5 units can use two different sample rate / resolution combinations and are generally capable of
playing back D-3 tapes. While D-5 is still a studio format, D-3 camcorders are available from Panasonic.

D-6 is a digital HDTV recording format by Toshiba/BTS. It stores 600 GB worth of data on a physically
huge 64-minute cassette.

PC VIDEO:

Besides animation there is one more media element, which is known as video. With the latest technology
it is possible to include video clips of any type in any multimedia creation, be it a corporate
presentation, fashion design, an entertainment game, etc.

The video clips may contain some dialogues or sound effects and moving pictures. These video clips
can be combined with the audio, text and graphics for multimedia presentation.
Most available video is in analog format. To make it usable by a computer, the video clips need to be
converted into a computer-understandable format, i.e., digital format. A combination of software and
hardware makes it possible to convert analog video clips into digital format.

However, the latest video compression software makes it possible to compress digitised video clips
considerably, so that they take less storage space. One more advantage of using digital video is that the
quality will not deteriorate from copy to copy, as the digital video signal is made up of digital code and
not an electrical signal. Caution should be taken while digitizing video from an analog source to avoid
frame drops and distortion; a good quality video source should be used for digitization.

Currently, video is good for:

 Promoting television shows, films, or other non-computer media that traditionally have used trailers
in their advertising.
 Giving users an impression of a speaker's personality.
 Showing things that move, for example a clip from a motion picture.
 Product demos of physical products.

VIDEO FILE FORMATS:


MULTIMEDIA CONTAINER FORMATS:
The container file is used to identify and interleave different data types. Simpler container formats
can contain different types of audio codecs, while more advanced container formats can support multiple
audio and video streams, subtitles, chapter-information, and meta-data (tags) — along with the
synchronization information needed to play back the various streams together. In most cases, the file header,
most of the metadata and the synchro chunks are specified by the container format. For example, container
formats exist for optimized, low-quality, internet video streaming which differs from high-quality DVD
streaming requirements.

Container format parts have various names: "chunks" as in RIFF and PNG, "packets" in MPEG-TS
(from the communications term), and "segments" in JPEG. The main content of a chunk is called the "data"
or "payload". Most container formats have chunks in sequence, each with a header, while TIFF instead
stores offsets. Modular chunks make it easy to recover other chunks in case of file corruption or dropped
frames or bit slip, while offsets result in framing errors in cases of bit slip.

CODECs:
A codec is a device or computer program capable of encoding and/or decoding a digital data stream
or signal. The word codec is a portmanteau of 'compressor-decompressor' or, more commonly, 'coder-
decoder'. A codec (the program) should not be confused with a coding or compression format or standard –
a format is a document (the standard), a way of storing data, while a codec is a program (an
implementation) which can read or write such files.In practice "codec" is sometimes used loosely to refer to
formats, however.A codec encodes a data stream or signal for transmission, storage or encryption, or
decodes it for playback or editing. Codecs are used in videoconferencing, streaming media and video
editing applications.
A video camera's analog-to-digital converter (ADC) converts its analog signals into digital signals,
which are then passed through a video compressor for digital transmission or storage. A receiving device
then runs the signal through a video decompressor, then a digital-to-analog converter (DAC) for analog
display. The term codec is also used as a generic name for a video conferencing unit.

Video Editing
Video editing is the process of manipulating and rearranging video shots to create a new work.
Editing is usually considered to be one part of the post-production process — other post-production tasks
include titling, colour correction, sound mixing, etc.

Many people use the term editing to describe all their post-production work, especially in non-
professional situations. Whether or not you choose to be picky about terminology is up to you. In this
tutorial we are reasonably liberal with our terminology and we use the word editing to mean any of the
following:

 Rearranging, adding and/or removing sections of video clips and/or audio clips.
 Applying colour correction, filters and other enhancements.
 Creating transitions between clips.

Goals of Editing
There are many reasons to edit a video and your editing approach will depend on the desired
outcome. Before you begin you must clearly define your editing goals, which could include any of the
following:

Remove unwanted footage

This is the simplest and most common task in editing. Many videos can be dramatically improved
by simply getting rid of the flawed or unwanted bits.

Choose the best footage

It is common to shoot far more footage than you actually need and choose only the best material for
the final edit. Often you will shoot several versions (takes) of a shot and choose the best one when editing.

ANIMATION
Animation is a visual technique that provides the illusion of motion by displaying a collection of
images in rapid sequence. Each image contains a small change, for example a leg moves slightly, or the
wheel of a car turns. When the images are viewed rapidly, your eye fills in the details and the illusion of
movement is complete.

When used appropriately in your application’s user interface, animation can enhance the user
experience while providing a more dynamic look and feel. Moving user interface elements smoothly around
the screen, gradually fading them in and out, and creating new custom controls with special visual effects
can combine to create a cinematic computing experience for your users.
In Time Machine, perceived distance provides an intuitive metaphor for a linear progression in time.
Older file system snapshots are shown further away, allowing you to move through them to find the version
you want to restore.
Using Animation in Your Applications
How you incorporate animation into your own application largely depends on the type of interface
your application provides. Applications that use the Aqua user interface can best integrate animation by
creating custom controls or views. Applications that create their own user interface, such as educational
software, casual games, or full-screen applications such as Front Row, have much greater leeway in
determining how much animation is appropriate for their users.
With judicious use of animation and visual effects, the most mundane system utility can become a
rich visual experience for users, thus providing a compelling competitive advantage for your application.


TYPES OF ANIMATION:
There are many different types of animation that are used nowadays. The three main types are clay
animation, computer animation, and regular animation.
Clay Animation

Clay animation is not really a new technique, as many people might think. Clay animation began
shortly after plasticine (a clay-like substance) was invented in 1897, and one of the first films to use it was
made in 1902. This type of animation was not very popular until Gumby was invented. The invention of
Gumby was a big step in the history of clay animation. Now, clay animation has become more popular and
easier to do.
Computer Animation

Computer animation has also become common. Computer animation began about 40 years ago
when the first computer drawing system was created by General Motors and IBM. It allowed the user to
view a 3D model of a car and change the angles and rotation. Years later, more people helped make
computer animation better.

Cel-Shaded Animation
Cel-shaded animation makes computer graphics appear to be hand-drawn. This type of animation is
most commonly turning up in console video games. Most of the time the cel-shading process starts with a
typical 3D model. The difference occurs when a cel-shaded object is drawn on-screen. The rendering
engine only selects a few shades of each color for the object, making it look flat.

In order to draw black ink lines outlining an object's contours, the back-face culling is inverted to
draw back-faced triangles with black-colored vertices. The vertices must be drawn many times with a slight
change in translation to make the lines thick. This produces a black- shaded silhouette. The back-face
culling is then set back to normal to draw the shading and optional textures of the object. The result is that
the object is drawn with a black outline.
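
The flat-shaded look comes from quantizing a continuous lighting intensity into a few discrete bands. A minimal sketch follows; the band count and names are illustrative, not taken from any particular engine:

    # The heart of cel shading: collapse a continuous lighting intensity
    # (0.0-1.0) into a small number of flat bands.
    def cel_shade(intensity, bands=3):
        band = min(int(intensity * bands), bands - 1)
        return band / (bands - 1)     # e.g. 0.0, 0.5 or 1.0 for 3 bands

    for i in (0.1, 0.4, 0.7, 0.95):
        print(i, "->", cel_shade(i))  # 0.0, 0.5, 1.0, 1.0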

Regular Animation
Animation began with Winsor McCay. He did his animations all by himself, and it took him a long time
(about a year for a five minute cartoon). But for some, it was ridiculous that they would have to wait so
much for so little. Then the modern animation studio came to be. Years later, more people would invent
more cartoon characters. Otto Messmer invented the character 'Felix the Cat'. Later on, the Walt Disney
Studio created 'Steamboat Willie', which introduced the character Mickey Mouse. Other companies started
to make their own cartoons; some of which we can still watch today.

Computer Assisted Animation


Computer-assisted animation is common in modern animated films. Recent films such as "Beowulf"
were created using computer-assisted animation. These techniques enhance modern animated films in ways
not seen in film history.

Definition
Computer-assisted animation is animation that could not be completed without using a computer. Functions like
in-betweening and motion capture are examples of computer-assisted animation.
In-betweening
Tweening is a technique used in two-dimensional animation that blends two animation cels
together. Each is individually drawn. These are played rapidly, which gives the impression of movement.
Between each cel or key frame in the sequence, there is a visual gap in which a transition drawing is
placed. Now computers are used to draw the transition or "in-between" drawings so that the film looks
smooth (see Resources for visual demonstrations).
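
At its simplest, computer in-betweening is linear interpolation between the values of a property at two key frames. A minimal sketch with illustrative numbers:

    # In-betweening as linear interpolation: given a property value at two
    # key frames, the computer fills in the transition frames.
    def tween(start, end, frames):
        """Yield `frames` evenly spaced values from start to end (frames >= 2)."""
        for i in range(frames):
            t = i / (frames - 1)
            yield start + (end - start) * t

    # A figure's x position moving from 10 to 100 over 5 frames:
    print(list(tween(10, 100, 5)))   # [10.0, 32.5, 55.0, 77.5, 100.0]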

Motion Capture
Motion capture uses reflective dots that are placed at an actor's joints. When he moves, a computer
picks up the dots and creates a model of the performance, which is stored in the computer. Animators later
use the sensor points as a "skeleton" to create a three-dimensional character (see Resources for visual
demonstrations).

CREATING MOVEMENT:
If you were wondering how they create movies such as Wallace and Gromit or those groovy Lego
shorts on YouTube, your search is over! Although creating stop motion animation is not difficult, it is time-
consuming, repetitive and requires patience. As long as you're forewarned and keen, this makes a fantastic
hobby and sometimes even grows into a career.

Place your camera in front of the "set" that you are going to take photos of. Check that it can view
the entire frame. It is very important to support the camera or place it so that it is sitting steadily and cannot
shake as you take the photos. Otherwise, the end result will appear chaotic and lack continuity. Keep in
mind that the more photos, the smoother the video results. If you do not have a tripod, good alternatives
include balancing on solid books, poster tack on the surface of the set or a piece of solid furniture at the
same height. In single frame, 24 pictures equals one second of film. It's best to take two pictures of the same
shot, so you only require 12.
Set up a good source of lighting. It might be a lamp or a flashlight. If your light is flickering, you
need to shut off other sources of light. Close the blind, or curtains etc.
Take a single photo of the figure in the selected position. This photo shows the Lego set being
readied for photographing.

Begin the movement sequence. Move the figure bit by bit - very small movements each time. It
may be the entire body if the figure is walking, or it may just be an arm, head or leg. If you are moving only
one body part and you find that the figure is tilting or threatening to fall over, make use of poster tack under
the feet or other area touching part of the set.
Repeat the movement sequence until your action step is completed, or your camera's memory is
full. Save the pictures onto your computer in an easy-to-remember place.

Use your movie-making software as instructed (or see two popular software methods below). The
basics involve:

Import the pictures into the desired program.


Make sure the pictures are at a very small duration so they flow very fast. If you are disappointed by
the speed at which your program can animate, try exporting the project as a video file (before adding
audio), then importing it again, and using a speed effect on it, such as
double speed (these effects only work on video clips). Then, if the resulting speed is sufficient, you may add
your audio.


Add titles and credits if you would like.

Make sure you like the end result of your stop motion animation. Keep going if you need to
complete more actions to create a story.
Save the video. If you plan on having multiple stop motion segments, save each segment as a separate
movie. Once the entire group of segments is completed, you can import all the segments into the final
movie, and it will look much better and be a lot easier to finalize.
Add effects or transitions, or whatever else you feel makes it look good.

Share your movie by burning it into a CD or place it into an iPod. Continue making other ones!

PRINCIPLES OF ANIMATION
The principles are:
1. Timing
2. Ease In and Out (or Slow In and Out)

3. Arcs
4. Anticipation
5. Exaggeration
6. Squash and Stretch
7. Secondary Action
8. Follow Through and Overlapping Action

9. Straight Ahead Action and Pose-To-Pose Action


10. Staging
11. Appeal
12. Slow-out and Slow-in
Simply memorizing these principles isn’t the point. No one will care whether or not you know this list. It’s
whether or not you truly understand and can utilize these ideas that matter. If you do, it will show
automatically in your work.
1. Timing
Timing is the essence of animation. The speed at which something moves gives a sense of what the
object is, the weight of the object, and why it is moving. Something like an eye blink can be fast or slow.
If it's fast, a character will seem alert and awake. If it's slow, the character may seem tired and lethargic.
Consider John Lasseter's example of a head that turns left and right:
 Head turns back and forth really slowly: it may seem as if the character is stretching his neck (lots of
in-between frames).
 A bit faster: it can be seen as saying "no" (a few in-between frames).
 Really fast: the character is reacting to getting hit by a baseball bat (almost no in-between
frames).

2. Ease In and Out (or Slow In and Out)

Ease in and out has to do with gradually causing an object to accelerate, or come to rest, from a pose.
An object or limb may slow down as it approaches a pose (ease in) or gradually start to move from rest
(ease out). For example, a bouncing ball tends to have a lot of ease in and out at the top of its bounce. As
it goes up, gravity slows it down (ease in), then it starts its downward motion more and more rapidly
(ease out), until it hits the ground.
Note that this doesn't mean slow movement. It really means keeping the in-between frames close to each
extreme.
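
One common way to express ease in and out in code is an S-shaped remapping of interpolation time, such as the cubic "smoothstep" curve below. This is a sketch of one popular choice; many other easing curves are used in practice:

    # Ease in and out: bias interpolation time so frames cluster near each
    # extreme pose. Feeding the result into the tween above slows both
    # the start and the end of the motion.
    def ease_in_out(t):
        return t * t * (3 - 2 * t)    # maps 0->0 and 1->1, gentle at both ends

    for i in range(5):
        t = i / 4
        print(round(t, 2), "->", round(ease_in_out(t), 3))
    # 0.0->0.0, 0.25->0.156, 0.5->0.5, 0.75->0.844, 1.0->1.0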
3. Arcs
In the real world almost all action moves in an arc. When creating animation one should
try to have motion follow curved paths rather than linear ones. It is very seldom that a character or part of a
character moves in a straight line. Even gross body movements, such as walking somewhere, tend not to be
perfectly straight. When a hand or arm reaches out for something, it tends to move in an arc.

4. Anticipation
Action in animation usually occurs in three sections: the setup for the motion, the actual action, and
the follow-through of the action. The first part is known as anticipation. In some cases anticipation is
needed physically. For example, before you can throw a ball you must first swing your arm backwards;
the backwards motion is the anticipation, the throw itself is the motion. Anticipation is used to lead the
viewer's eye and prepare them for the action that follows. A longer period of anticipation is needed for
faster actions; for example, a character zips off screen leaving a puff of smoke.

5. Exaggeration
The key is to take something and make it more extreme in order to give it more life, but not so much
that it destroys believability. Example: exaggerating the lamp proportions to give a sense of dad and son.
6. SQUASH AND STRETCH
This action gives the illusion of weight and volume to a character as it moves. Also squash and
stretch is useful in animating dialogue and doing facial expressions. How extreme the use of squash and
stretch is, depends on what is required in animating the scene. Usually it's broader in a short style of picture
and subtler in a feature. It is used in all forms of character animation from a bouncing ball to the body
weight of a person walking. This is the most important element you will be required to master and will be
used often.
7. SECONDARY ACTION
This action adds to and enriches the main action and adds more dimension to the character
animation, supplementing and/or re-enforcing the main action. Example: a character is angrily walking
toward another character. The walk is forceful, aggressive, and forward leaning; the leg action is just
short of a stomping walk.

8. FOLLOW THROUGH AND OVERLAPPING ACTION
Follow through is the termination of an action: parts of the body continue to move after the main action
stops. Overlapping action means starting a second action before the first has fully finished, so that the
character's movement flows naturally.

9. STRAIGHT AHEAD AND POSE TO POSE ANIMATION


Straight ahead animation starts at the first drawing and works drawing to drawing to the end of a
scene. You can lose size, volume, and proportions with this method, but it does have spontaneity and
freshness. Fast, wild action scenes are done this way. Pose to Pose is more planned out and charted with
key drawings done at intervals throughout the scene. Size, volumes, and proportions are controlled better
this way, as is the action.
10. STAGING


A pose or action should clearly communicate to the audience the attitude, mood, reaction or idea of
the character as it relates to the story and continuity of the story line. The effective use of long, medium, or
close up shots, as well as camera angles also helps in telling the story. There is a limited amount of time in a
film, so each sequence, scene and frame of film must relate to the overall story. Do not confuse the audience
with too many actions at once.
11. APPEAL
A live performer has charisma. An animated character has appeal. Appealing animation does not
mean just being cute and cuddly. All characters have to have appeal whether they are heroic, villainous,
comic or cute. Appeal, as you will use it, includes an easy to read design, clear drawing, and personality
development that will capture and involve the audience's interest. Early cartoons were basically a series of
gags strung together on a main theme
12. SLOW-OUT AND SLOW-IN
As action starts, we have more drawings near the starting pose, one or two in the middle, and more
drawings near the next pose. Fewer drawings make the action faster and more drawings make the action
slower. Slow-ins and slow-outs soften the action, making it more life- like. For a gag action, we may omit
some slow-out or slow-ins for shock appeal or the surprise element. This will give more snap to the scene.
Animation techniques
Drawn on film animation: a technique where footage is produced by creating the images directly on film
stock.
Paint-on-glass animation: a technique for making animated films by manipulating slow drying oil paints
on sheets of glass.

Erasure animation: a technique using traditional 2D media, photographed over time as the artist
manipulates the image. For example, William Kentridge is famous for his charcoal erasure films, and Piotr
Dumała for his auteur technique of animating scratches on plaster.

Pinscreen animation: makes use of a screen filled with movable pins, which can be moved in or out by
pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The
technique has been used to create animated films with a range of textural effects difficult to achieve with
traditional animation.

Sand animation: sand is moved around on a back- or front-lighted piece of glass to create each frame for
an animated film. This creates an interesting effect when animated because of the light contrast.
Flip book: A flip book (sometimes, especially in British English, called a flick book) is a book with a series
of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the
pictures appear to animate by simulating motion or some other change. Flipbooks are often illustrated
books for children, but may also be geared towards adults and employ a series of photographs rather than
drawings. Flip books are not always separate books, but may appear as an added feature in ordinary books
or magazines, often in the page corners. Software packages and websites are also available that convert
digital video files into custom-made flip books.

Animation on the web:


Animation on a web page is any form of movement of objects or images. Animations are usually
done in Adobe Flash, although Java and GIF animations are also used on many websites. Streaming video
in Flash is becoming increasingly popular.
Reasons to have motion on a web page are to draw attention to something, to provide a
demonstration or to entertain. The need for movement on a page depends on the purpose and content of the
page. A financial institution would not really need animations on its pages, while an entertainment site
obviously would have such movement.

Computer Animation Special Effects


The words "special effects animation" conjure up images of starships blowing away asteroids in some
distant galaxy light years away. Although blowing up make-believe spaceships is part of special effects
animation, it's actually only a very small part. This kind of technology can be used to create anything that
can be imagined. It is additionally a multifaceted art form requiring animators who have been trained in a
number of disciplines. The use of special effects animation calls for a great deal of planning and offers the
potential to revolutionize a number of industries not related to the production of film and

RENDERING ALGORITHMS
Rendering is the process of generating an image from a model (or models in what collectively
could be called a scene file), by means of computer programs. A scene file contains objects in a strictly
defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading
information as a description of the virtual scene. The data contained in the scene file is then passed to a
rendering program to be processed and output to a
digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's
rendering" of a scene. Though the technical details of rendering methods vary, the general challenges to
overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the
graphics pipeline along a rendering device, such as a GPU.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice always connected
to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and
animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more
distinct subject.

Multimedia Compression
Multimedia compression means employing tools and techniques to reduce the file size of
various media formats. With the development of the World Wide Web the importance of compression
algorithms was highlighted, because compressed files transfer faster across networks owing to their
greatly reduced size. Furthermore, with the popularity of voice and video conferencing over the Internet,
compression methods for multimedia have reached their next generation, providing smooth service even
over unreliable network infrastructure. Although many methods are used for this purpose, in general they
can be divided into two broad categories, named lossless and lossy methods.
MPEG-1 Audio
MPEG-1 Layer I or II Audio is a generic subband coder operating at bit rates in the range of 32 to
448 kb/s and supporting sampling frequencies of 32, 44.1 and 48 kHz. Typical bit rates for Layer II are in
the range of 128-256 kbit/s, and 384 kb/s for professional applications.
MPEG-1 Layers I and II (MP1 or MP2) are perceptual audio coders for 1- or 2-channel audio
content. Layer I has been designed for applications that require both low complexity decoding and
encoding.
MPEG-1 Layer 3 (or MP3) is a 1- or 2-channel perceptual audio coder that provides excellent
compression of music signals. Compared to Layer 1 and Layer 2 it provides a higher compression
efficiency. It can typically compress high quality audio CD data by a factor of 12 while maintaining a high
audio quality
Thanks to its low complexity decoding combined with high robustness against cascaded
encoding/decoding and transmission errors, MPEG-1 Layer II is used in digital audio and video broadcast
applications (DVB and DAB). It is also used in Video CD, as well as in a variety of studio applications.

Layer 3, or as it is mostly called nowadays "mp3", is the most pervasive audio coding format for
storage of music on PC platforms, and transmission of music over the Internet. Mp3 has created a new class
of consumer electronics devices named after it, the mp3 player. It is found on almost all CD and DVD
players and in an increasing number of car stereo systems and new innovative home stereo devices like
networked home music servers. Additionally, Layer 3 finds wide application in satellite digital audio
broadcast and on cellular phones.
MPEG-1 Layer 3 was standardized for the higher sampling rates of 32, 44.1 and 48 kHz in MPEG-1
in 1992.

At a high level, the MPEG-1 Layer I and II coders transform the input signal into 32 subband
signals that are uniformly distributed over frequency by means of a critically sampled QMF filterbank.
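
The "factor of 12" quoted above can be sanity-checked from the raw CD bit rate and a typical Layer 3 bit rate; the 128 kb/s figure below is an assumed common setting, not a fixed value:

    # CD audio: 44100 samples/s x 16 bits x 2 channels.
    cd_rate = 44100 * 16 * 2 / 1000    # = 1411.2 kb/s
    mp3_rate = 128                     # a common Layer 3 bit rate, kb/s
    print(round(cd_rate / mp3_rate, 1))  # about 11:1, close to the quoted factor of 12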

UNIT-V QUESTIONS
SECTION A
1. CODECS stands for .
2. MPEG stands for .
3. is the process of generating an image from a model by means of computer program.
4. Define Animation.
5. What is Tweening?
6. What is Aliasing?
7. A set of still images is called .
8. SECAM is a .
9. animation is also known as sprite animation.
10. coding technique is known as predictive coding technique.
SECTION B
1. Describe the various audio and video standards.
2. Explain the cell animation.
3. How to edit video? Explain.
4. Explain the Rendering algorithm.
5. Explain the method to generate YC signals from RGB.
6. Explain various video signal formats.
7. Discuss the roles of I,P and B frames of MPEG-1 video standard.
8. Explain about television broadcasting standards.
9. Explain the Principles of Animation.
10. Discuss about Special Effects.

SECTION C
1. Describe how rendering algorithms helps to create special effects.
2. Explain briefly about: i) Video Editing ii) Analog Video Camera.
3. Explain the concepts of video editing.
4. Discuss any four rendering algorithms.
5. Discuss the various principles of animation.
6. Discuss about MPEG-1 audio standard.
7. Explain the following: i) Video signal format ii) PC video iii) Video file format.
8. Describe in detail about the following: i) MPEG-1 Audio ii) MPEG-1 Video.
9. State the principle of animation and explain how animation is carried on the web.
10. Describe in detail about the following: i) MPEG-2 Audio ii) MPEG-2 Video
