
Graphics and Multimedia

UNIT 1

2D PRIMITIVES

PART A
1. Define Computer Graphics.
The use of a computer to produce and manipulate pictorial images on a video screen, as
in animation techniques or the production of audio visual aids.

2. Explain any 3 uses of computer graphics applications.


Computer Aided Design, Entertainment, Education and training.

3. What are the advantages of DDA algorithm?


• The DDA algorithm is a faster method for calculating pixel positions than direct use of the line equation y = mx + b.
• It eliminates the multiplication in the line equation: pixel positions are obtained with repeated additions of the increments.

4. What are the disadvantages of DDA algorithm?


• The accumulation of round-off error in successive additions of the floating-point increment can cause the calculated pixel positions to drift away from the true line path for long line segments.
• The rounding operations and floating-point arithmetic in the lineDDA procedure are still time-consuming.
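
The ideas in questions 3 and 4 can be made concrete with a short sketch. Below is a minimal DDA line rasterizer in Python (an illustrative sketch, not from the syllabus text), assuming integer endpoints and a caller-supplied set_pixel(x, y) routine; the floating-point increments and per-step rounding are exactly the operations the disadvantages above refer to.

def line_dda(x0, y0, x1, y1, set_pixel):
    # Sample the line at unit intervals along the major axis.
    dx, dy = x1 - x0, y1 - y0
    steps = int(max(abs(dx), abs(dy)))
    if steps == 0:
        set_pixel(round(x0), round(y0))
        return
    x_inc, y_inc = dx / steps, dy / steps   # floating-point increments
    x, y = float(x0), float(y0)
    for _ in range(steps + 1):
        set_pixel(round(x), round(y))       # rounding at every step
        x += x_inc
        y += y_inc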

5. Define Scan-line Polygon fill algorithm.


For each scan line crossing a polygon, the area-fill algorithm locates the intersection
points of the scan line with the polygon edges. These intersection points are then sorted from left
to right, and the corresponding frame-buffer positions between each intersection pair are set to
the specified fill color.
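
A minimal scan-line fill sketch in Python (illustrative only, assuming integer scan lines, a simple polygon given as a list of (x, y) vertex tuples, and a set_pixel routine): for every scan line the edge intersections are collected, sorted left to right, and filled in pairs.

def scanline_fill(vertices, set_pixel):
    ys = [y for _, y in vertices]
    for y in range(int(min(ys)), int(max(ys)) + 1):
        xs = []
        n = len(vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            if y1 == y2:
                continue                        # ignore horizontal edges
            if min(y1, y2) <= y < max(y1, y2):  # half-open rule handles shared vertices
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[0::2], xs[1::2]):
            for x in range(int(round(left)), int(round(right)) + 1):
                set_pixel(x, y)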

6. What are Inside-Outside tests?


• Even-odd (odd-even) rule
• Nonzero winding number rule
Both tests are applications of the Jordan curve theorem.

7. Define Boundary-Fill algorithm.


Area filling starts at a point inside a region and paints the interior outward toward the boundary. If the boundary is specified in a single color, the fill algorithm proceeds outward pixel by pixel until the boundary color is encountered. This method is called the boundary-fill algorithm.

8. Define Flood-Fill algorithm.


Sometimes we want to fill in (or recolor) an area that is not defined within a single color
boundary. We can paint such areas by replacing a specified interior color instead of
searching for a boundary color value. This approach is called a flood-fill algorithm.
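
The two fills can be sketched in a few lines of Python (illustrative only; get_pixel/set_pixel are assumed frame-buffer accessors and bounds checking is omitted). Both use a 4-connected neighbourhood and an explicit stack instead of recursion.

def boundary_fill(x, y, fill, boundary, get_pixel, set_pixel):
    stack = [(x, y)]
    while stack:
        px, py = stack.pop()
        c = get_pixel(px, py)
        if c != boundary and c != fill:         # stop when the boundary colour is met
            set_pixel(px, py, fill)
            stack += [(px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)]

def flood_fill(x, y, fill, old, get_pixel, set_pixel):
    stack = [(x, y)]
    while stack:
        px, py = stack.pop()
        if get_pixel(px, py) == old:            # replace a specified interior colour
            set_pixel(px, py, fill)
            stack += [(px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)]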

9. Define attribute parameter. Give examples.


A parameter that affects the way a primitive is to be displayed is referred to as an attribute parameter. Some attribute parameters, such as color and size, determine the fundamental characteristics of a primitive.

10. What is the command used to set the thickness of lines?

setLinewidthScaleFactor(lw)

11. What are the three types of thick lines?

(a) butt caps, (b) round caps, and (c) projecting square caps

12. What are the attribute commands for a line color?

setPolylineColourIndex (61 );

13. What is color table? List the color codes.


Color tables are an alternate means for providing extended color capabilities to a user
without requiring large frame buffers.

14. What is a marker symbol and where it is used?


The appearance of displayed characters is controlled by attributes such as font, size,
color, and orientation. Attributes can be set both for entire character strings (text) and for
individual characters defined as marker symbols.

15. Discuss about inquiry functions.


Current settings for attributes and other parameters, such as workstation types and status,
in the system lists can be retrieved with inquiry functions. These functions allow current values
to be copied into specified parameters, which can then be saved for later reuse or used to check
the current state of the system if an error occurs.

16. Define translation and translation vector.


A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. The translation distance pair (tx, ty) is called the translation vector or shift vector.

17. Define window and view port.


A world-coordinate area selected for display is called a window. An area on a display
device to which a window is mapped is called a viewport. The window defines what is to be
viewed; the viewport defines where it is to be displayed.
Window - defines what is to be viewed
View Port defines where it is to be displayed

18. Define viewing transformation.


The mapping of a part of a world-coordinate scene to device coordinates is referred to as
a viewing transformation.

19. Give the equation for window to viewport transformation.

M_WC,VC = R · T, where T translates the viewing origin to the world-coordinate origin and R rotates the viewing axes onto the world axes. The window contents are then mapped to the viewport with the scale factors sx = (xvmax − xvmin) / (xwmax − xwmin) and sy = (yvmax − yvmin) / (ywmax − ywmin), so that xv = xvmin + (xw − xwmin) · sx and yv = yvmin + (yw − ywmin) · sy.
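
A short Python sketch of the window-to-viewport mapping (illustrative; the window and viewport are given as (xmin, ymin, xmax, ymax) tuples):

def window_to_viewport(xw, yw, window, viewport):
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # horizontal scale factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # vertical scale factor
    return xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy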

20. Define view up vector.


First, a viewing-coordinate origin is selected at some world position P0 = (x0, y0). Then we need to establish the orientation, or rotation, of this reference frame. One way to do this is to specify a world vector V that defines the viewing y-axis direction. Vector V is called the view up vector.

21. What is meant by clipping? Where it happens?


Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called a clip window.

22. What is point clipping and what are its inequalities?


Assuming that the clip window is a rectangle in standard position, we save a point P = (x,
y) for display if the following inequalities are satisfied:
xwmin <= x <= xwmax

ywmin <= y <= ywmax
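
The test translates directly into a one-line Python check (a sketch, assuming a rectangular clip window in standard position):

def clip_point(x, y, xwmin, ywmin, xwmax, ywmax):
    # Keep the point only when both inequalities above are satisfied.
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax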

23. What is line clipping and what are their parametric representations?

A line segment with endpoints (x1, y1) and (x2, y2) may have one or both endpoints outside the clipping rectangle. Its parametric representation is x = x1 + u(x2 − x1), y = y1 + u(y2 − y1), with 0 ≤ u ≤ 1.

24. Write down two attributes of a line?


The line type, width, and color are the attributes of a line. Line types include solid, dashed, and dotted lines.

25. Distinguish between window port & view port?


A portion of a picture that is to be displayed by a window is known as window port.
The display area of the part selected or the form in which the selected part is viewed is
known as view port.

26. What is the major difference between symmetric DDA and simple DDA.

"Simple DDA" does not require special skills for implementation.


27. What is Text clipping? and List different types of text clipping methods available?

Text clipping clips character strings (or the components of individual characters) against the clip window. The methods are all-or-none text (string) clipping, all-or-none character clipping, and individual (component) character clipping.

28. Write down the shear transformation matrix.


Shear is a transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been caused to slide over
each other. Two common shearing transformations are those that shift coordinate x values and
those that shift y values. An x direction shear relative to the x axis is produced with the
transformation matrix
1 shx 0
0 1 0
0 0 1
which transforms coordinate positions as x' = x + shx · y, y' = y.
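
The same shear can be written with NumPy homogeneous coordinates (a sketch; points is an N×2 array and shx the shear parameter from the matrix above):

import numpy as np

def shear_x(points, shx):
    m = np.array([[1.0, shx, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    pts = np.hstack([points, np.ones((len(points), 1))])   # append w = 1
    return (pts @ m.T)[:, :2]                               # x' = x + shx*y, y' = y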

29. What is Transformation?


Transformation is the process of introducing changes in the shape, size, position, and orientation of an object using scaling, rotation, reflection, shearing, translation, etc.

30. What is translation?


Translation is the process of changing the position of an object along a straight-line path from one coordinate location to another. Every point (x, y) of the object undergoes a displacement to (x', y'). The transformation is x' = x + tx ; y' = y + ty.

31. What is rotation?


A 2-D rotation repositions the coordinates of an object along a circular path in the x-y plane through an angle θ about a pivot point. For a point at radius r and initial angle φ, the transformation is x' = r cos(φ + θ), y' = r sin(φ + θ), which expands to x' = x cos θ − y sin θ and y' = x sin θ + y cos θ.

32. What is scaling?


A scaling transformation alters the size of an object. This operation can be carried out for
polygons by multiplying the coordinate values (x,y) of each vertex by scaling factors sx and sy
to produce the transformed coordinates (x', y'): x' = x · sx, y' = y · sy
33. What is shearing?
The shearing transformation actually slants the object along the X direction or the Y
direction as required. ie; this transformation slants the shape of an object along a required
plane.

34. What is reflection?


Reflection is a transformation that produces a mirror image of an object relative to a chosen axis (line) of reflection.

35. Define clipping?


Clipping is the method of cutting a graphics display to neatly fit a predefined graphics
region or the view port.
36. What is the need of homogeneous coordinates?
To perform a sequence of transformations as a single composite operation, we use homogeneous coordinates and matrix representations. They avoid computing intermediate coordinate positions at each step, which saves time and memory, and they allow a whole sequence of transformations to be concatenated into one matrix.
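
A small NumPy sketch (illustrative names) of how homogeneous 3×3 matrices let a whole transformation sequence collapse into one matrix multiply per point:

import numpy as np

def translate(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def scale(sx, sy):
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

# Scale, then rotate, then translate -- concatenated once, applied many times.
composite = translate(2, 3) @ rotate(np.pi / 2) @ scale(2, 2)
p = composite @ np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous form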

37. Distinguish between uniform scaling and differential scaling?


When the scaling factors sx and sy are assigned to the same value, a uniform scaling is
produced that maintains relative object proportions. Unequal values for sx and sy result in a
differential scaling that is often used in design applications.

38. What is fixed point scaling?


The location of a scaled object can be controlled by a position called the fixed point
that is to remain unchanged after the scaling transformation.

39. What is Zooming?


Zooming means enlarging a digital image to see it more clearly or make it easier to alter.
It allows the user to see more detail for a specific area of the image.

40. What is Rubber Banding?


Rubber Banding is another form of zooming. The user can use a mouse to define two
corners of a rectangle. The selected area can be copied to a clipboard, cut, moved or zoomed.

41. What is an output primitive?


Graphics programming packages provide functions to describe a scene in terms of these basic geometric structures, referred to as output primitives.

42. List out the graphics applications


• Paint programs: allow you to create rough freehand drawings. The images are stored as bit maps and can easily be edited.
• Illustration/design programs: support more advanced features than paint programs, particularly for drawing curved lines. The images are usually stored in vector-based formats. Illustration/design programs are often called draw programs.
• Presentation graphics software: lets you create bar charts, pie charts, graphs, and other types of images for slide shows and reports. The charts can be based on data imported from spreadsheet applications.
• Animation software: enables you to chain and sequence a series of images to simulate movement. Each image is like a frame in a movie.
• CAD software: enables architects and engineers to draft designs.

43. What is meant by aliasing?


The distortion of information due to low-frequency sampling (undersampling) is called aliasing. We can improve the appearance of displayed raster lines by applying antialiasing methods that compensate for the undersampling process.

44. Digitize a line from (10,12) to (15,15) on a raster screen using Bresenhams straight line
algorithm.
(11,13), (12,13), (13,14), (14,14) and (15,15)
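
A Bresenham sketch in Python (for slopes between 0 and 1) that reproduces the pixel list above, starting pixel included:

def bresenham(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                   # initial decision parameter
    x, y = x0, y0
    points = [(x, y)]
    while x < x1:
        x += 1
        if p < 0:
            p += 2 * dy               # stay on the same scan line
        else:
            y += 1
            p += 2 * dy - 2 * dx      # step up to the next scan line
        points.append((x, y))
    return points

print(bresenham(10, 12, 15, 15))
# [(10, 12), (11, 13), (12, 13), (13, 14), (14, 14), (15, 15)]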

45. State the nature of Line Primitive in Graphics? (NOV/DEC 2015)

The line type, width, and color are the attributes of a line. Line types include solid, dashed, and dotted lines.

46. Define viewing pipeline? (NOV/DEC 2015)

The viewing pipeline is a group of processes common to everything from wireframe display through to near photo-realistic image generation; it is basically concerned with transforming objects to be displayed from a specific viewpoint and removing surfaces that cannot be seen from this viewpoint.

PART B

1. Consider a line from (0,0) to (6,7). Using the simple DDA algorithm, rasterize this line.


2. Apply Bresenham’s algorithm to draw a line with start point (4,4) and end point (-3,0).
3. Plot a circle at origin having centre as (0,0) and radius=8 using Bresenham’s
circle algorithm.
4. Plot a circle using mid point algorithm whose radius=3 and center is at (0,0).
5. The input ellipse parameters are rx=8 and ry=6.Using midpoint ellipse
method, rasterize this ellipse.
6. Explain in detail about line attributes with neat diagram.
7. Explain briefly about curve attributes?(8) (NOV/DEC 2015)
8. Explain three primary color used in graphics and explain how other colors are
achieved?
9. Explain in detail about color and grey scale levels?
10. Explain color and grey scale levels.
12. Explain the area fill attributes and character attributes.
13. Explain character attributes in detail.
14. Briefly discuss about basic 2D transformations.(8) (NOV/DEC 2015)
15. Discuss about composite transformations.
16. Explain about reflection and shear.
17. Explain cohen- sutherland line clipping algorithm with an example.
18. Discuss the logical classifications of input devices.
19 . Explain the details of 2d viewing transformation pipeline.
20 . Explain point, line, curve, text, exterior clipping?
21 . Explain the basic concept of Midpoint ellipse algorithm. Derive the decision parameters
for the algorithm and write down the algorithm steps. (16)
22. Explain two dimensional Translation and Scaling with an example. (8)
23. Obtain a transformation matrix for rotating an object about a specified pivot point. (16)
24. Explain DDA line drawing algorithm. (16)
25. What is polygon clipping? Explain the Sutherland-Hodgeman algorithm for polygon clipping. (16)
26. Consider a triangle ABC whose coordinates are A[4,1], B[5,2], C[4,3]
a. Reflect the given triangle about X axis. (4)
b. Reflect the given triangle about Y-axis. (4)
c. Reflect the given triangle about Y=X axis. (4)
d. Reflect the given triangle about X axis. (4)
27 . Explain Sutherland Hodgeman polygon clipping algorithm. Explain the Disadvantage of
it and how to rectify this disadvantage. (16)
28 . Explain Two Dimensional Viewing. (16)
29. Write down and explain the midpoint circle drawing algorithm. Assume 10cm as the
radius and co-ordinate origin as the centre of the circle (8)(NOV/DEC 2015)
30. Explain about Bresenham’s circle generating algorithm.
31. Calculate the pixel location approximating the first octant of a circle having centre at (4,5)
and radius 4 units using Bresenhams algorithm.
32. Discuss in brief: antialiasing techniques.
33 . Explain the different Graphics systems in detail with neat diagram?
34. Write brief notes on clipping against rectangular boundaries?(8) (NOV/DEC 2015)

UNIT-2
THREE-DIMENSIONAL CONCEPTS
PART A

1. What are spline curves?


A spline curve is a mathematical representation for which it is easy to build an interface
that will allow a user to design and control the shape of complex curves and surfaces. The
general approach is that the user enters a sequence of points, and a curve is constructed whose
shape closely follows this sequence. The points are called control points. A curve that actually
passes through each control point is called an interpolating curve; a curve that passes near to the
control points but not necessarily through them is called an approximating curve.

2. Define polygon or quadric surfaces. (NOV/DEC 2015)


Surfaces represented by second-degree equations are quadric surfaces. Examples: sphere, ellipsoid, torus, and cones.

3. What you mean by parallel projection?


Parallel projection is one in which the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane.
4. What do you mean by Perspective projection?
Perspective projection is one in which the lines of projection are not parallel. Instead,
they all converge at a single point called the center of projection.
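
The contrast between the two projections fits in a couple of lines of Python (a sketch, projecting onto a view plane normal to the z axis, with the center of projection at the origin and the view plane a distance d away):

def parallel_project(x, y, z):
    return x, y                        # orthographic: simply discard z

def perspective_project(x, y, z, d):
    return x * d / z, y * d / z        # points farther away shrink toward the center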

5. Differentiate parallel projection from perspective projection.

Object descriptions are projected to the view plane either

a. along parallel lines (parallel projection), or b. along lines that converge to a projection reference point (perspective projection).

6. Categorize the 3D representations?


Boundary representation (B-reps) and space-partitioning representations.

7. What Boundary representation?

It describes a 3D object as a set of surfaces that separate the object interior from the
environment. e.g. polygon facets and spline patches.

8. What space-partitioning representation?


This is used to describe interior properties, by partitioning the spatial region containing
an object in to a set of small, non-overlapping, contiguous solids. e.g.octree.

9. What is Blobby Object?


Some objects do not maintain a fixed shape, but change their surface characteristics in
certain motions or when in proximity to other objects. Examples in this class of objects include
molecular structures, water droplets and other liquid effects, melting objects and muscle shapes
in the human body. These objects can be described as exhibiting "blobbiness" and are often
simply referred to as blobby objects, since their shapes show a certain degree of fluidity.

10. What is the Surface rendering?


It is used to generate a degree of realism in a displayed scene.

11. What are the different ways of specifying spline curve?


• Using a set of boundary conditions that are imposed on the spline
• Using the matrix that characterizes the spline
• Using a set of blending functions that calculate positions along the curve path by specifying combinations of geometric constraints on the curve

12. Write about depth cueing.


A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance from the viewing position: lines closest to the viewing position are displayed with the highest intensities, and intensities decrease for lines that are farther away.

Applying depth cueing involves choosing the maximum and minimum intensities and the range of distances over which the intensities are to vary; it can also be used to model the effect of the atmosphere.
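
A depth-cueing sketch in Python (illustrative): intensity is interpolated linearly between a chosen maximum at the nearest distance and a chosen minimum at the farthest distance.

def depth_cue(d, d_min, d_max, i_min, i_max):
    t = (d - d_min) / (d_max - d_min)      # 0 at the nearest point, 1 at the farthest
    return i_max - t * (i_max - i_min)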

13. What is projection?


The process of displaying 3D objects on a 2D display is called as Projection.

14. What is meant by view reference coordinate systems?


The view plane is defined by:
• a point on the plane – the View Reference Point (VRP)
• the normal to the plane, pointing towards the center of projection – the View-Plane Normal (VPN)
The view plane can be anywhere in world space, and the center of projection represents the location of the viewer's eye or the camera's lens.
We need to define a 3D Viewing Reference Coordinate system (VRC) with axes u, v, n:

• the origin of the VRC is the VRP
• the n axis of the VRC is the VPN
• the v axis of the VRC is called the View-Up Vector (VUP)
• the u axis of the VRC is defined to form a right-handed coordinate system with n and v

Viewpoint Coordinate System -Also known as the "camera" coordinate system. This
coordinate system is based upon the viewpoint of the observer, and changes as they change their
view. Moving an object "forward" in this coordinate system moves it along the direction that the
viewer happens to be looking at the time.

15. What are the steps involved in 3D transformation?


• Modeling transformation
• Viewing transformation
• Projection transformation
• Workstation transformation

16. What do you mean by view plane?


A view plane is nothing but the film plane in camera which is positioned and oriented for
a particular shot of the scene.

17. What is Projection reference point?


In Perspective projection, the lines of projection are not parallel. Instead, they all
converge at a single point called Projection reference point.
18. What is center of projection? What is the other name of it?
The center of projection represents the location of the viewer's eye or the camera's lens.

19. What is the Surface rendering?


It is used to generate a degree of realism in a displayed scene.

20. What is Composite transformation?


It can be formed by multiplying the matrix representations of the individual operations in the transformation sequence.

21. How are fractals classified?


• A fractal is an object whose shape is irregular at all scales.
• A geometric fractal repeats self-similar patterns over all scales.
• In random fractals the patterns are no longer perfect, and random defects appear at all scales.
• Fractals are accordingly classified by their self-similarity: exact self-similarity, quasi self-similarity, and statistical self-similarity.

22. List any four Animation techniques


a. Squash and stretch
b. Timing
c. Follow-through actions
d. Staging
e. Anticipation

23. What does Y, I, Q represent in YIQ color model?


• Y is the luminance component, the only part picked up by black-and-white televisions
• Y is given the most bandwidth in the signal
• The I and Q channels (or U, V) contain the chromaticity information

24. Define Morphing.


Transformation of object shapes from one form to another is called morphing.

25. Define View port with an example.


An area on a display device to which a window is mapped is called a view port.

26. What is chromaticity?


The term chromaticity is used to refer collectively to the two properties describing color
characteristics: Purity and dominant frequency.

27. Define Color model.


A Color model is a method for explaining the properties or behavior of color within
some particular context.

28. What are the uses of chromaticity diagram?


The chromaticity diagram is useful for the following: Comparing color gamuts for
different sets of primaries. Identifying complementary colors. Determining dominant
wavelength and purity of a given color.

29. Give the transformation matrix for conversion of RGB to YIQ.
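
A commonly quoted answer (NTSC coefficients, values approximate):
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R − 0.274 G − 0.322 B
Q = 0.211 R − 0.523 G + 0.312 B
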

30. What is HSV model?


The HSV(Hue,Saturation,Value) model is a color model which uses color descriptions
that have a more intuitive appeal to a user. To give a color specification, a user selects a spectral
color and the amounts of white and black that are to be added to obtain different shades, tint, and
tones.
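
A quick way to experiment with HSV values is Python's standard-library colorsys module (components are in the 0–1 range):

import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.5, 0.0)   # an orange-ish RGB triple
r, g, b = colorsys.hsv_to_rgb(h, s, v)          # converts back to the same RGB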

31. What for CMY color model used?


A color model defined with the primary colors cyan, magenta, and yellow is useful for
describing color output to hard-copy devices.

32. What are the parameters in the HLS color model?


Hue, Lightness and Saturation.
33. Define Computer animation.
Computer animation refers to any time sequence of visual changes in a scene. In addition
to changing object position with translations or rotations, a computer generated animation could
display time variations in object size, color, transparency, or surface texture.

34. What are the steps in animation sequence?


• Storyboard layout
• Object definitions
• Key-frame specifications
• Generation of in-between frames
35. How frame-by-frame animation works?
Here each frame of the scene is separately generated and stored. Later the frames can be
recorded on film or they can be consecutively displayed in "real-time playback" mode.
36. What is morphing?
Transformation of object shapes from one form to another is called morphing.

37. What are the methods of motion specifications?


• Direct motion specification
• Goal-directed systems
• Kinematics and dynamics

38. Explain vanishing point and principal vanishing point.


Projections of lines that are not parallel to the view plane (i.e. lines that are not perpendicular to the view-plane normal) appear to meet at some point on the view plane. This point is called the vanishing point; a vanishing point corresponds to every set of parallel lines. Vanishing points corresponding to the three principal directions are referred to as principal vanishing points (PVPs), so we can have at most three PVPs. If one or more of these are at infinity (that is, parallel lines in that direction continue to appear parallel on the projection plane), we get a one- or two-point perspective projection.

39. What is called axonometric and isometric projections?


In an axonometric projection the projectors are orthogonal to the projection plane, but the plane can be at an angle to a principal face, which produces foreshortening of distances:
– Isometric – symmetric with all three axes
– Dimetric – symmetric with two axes
– Trimetric – the general case
40. Give the general expression of Bezier Bernstein polynomial.
P(u) = Σ (k = 0 to M) [M! / ((M − k)! k!)] u^k (1 − u)^(M−k) P_k, for 0 ≤ u ≤ 1, with the convention that u^k = 1 when u and k are both zero.
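
A direct Python evaluation of this polynomial (a sketch; control is a list of (x, y) control points):

from math import comb

def bezier_point(control, u):
    m = len(control) - 1
    bez = [comb(m, k) * u**k * (1 - u)**(m - k) for k in range(m + 1)]
    x = sum(b * px for b, (px, _) in zip(bez, control))
    y = sum(b * py for b, (_, py) in zip(bez, control))
    return x, y

curve = [bezier_point([(0, 0), (1, 2), (3, 2), (4, 0)], i / 20) for i in range(21)]
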
41. Give the single point perspective projection transformation matrix when projectors are
placed on the z-axis.
Refer class notes
42. What are the advantages of B spline over Bezier Curve?
B-spline curves can be considered a generalization of Bezier curves; they share many properties (both obey the convex hull property, for one). One advantage of B-splines is affine invariance: the coordinate system in which a curve is represented can change without affecting the relative geometry of the curve, so its geometry remains consistent when it is rotated, scaled, or translated. B-spline curves also provide local control: modifying one control point only affects the part of the curve near that control point, which is very useful when designing shapes. In addition, the degree of a B-spline can be set independently of the number of control points.
43. What is Critical Fusion Frequency?
Frequency of a light stimulation at which it becomes perceived as a stable and continuous
sensation. That frequency depends upon various factors: luminance, colour, contrast, retinal
eccentricity, etc.
44. Define color model.
A color model is a method for explaining the properties or behavior of color within some particular context.

45. Define dominant frequency.


If low frequencies are predominant in the reflected light, the object is described as red. In
this case, the perceived light has the dominant frequency at the red end of the spectrum. The
dominant frequency is also called the hue, or simply the color of the light.

46. Define complementary colors.


If the 2 color sources combine to produce white light, they are called complementary
colors. E.g., Red and Cyan, green and magenta, and blue and yellow.

47. Define colors gamut.


Color models that are used to describe combinations of light in terms of dominant
frequency use 3 colors to obtain a wide range of colors, called the color gamut.

48. CIE Chromaticity Diagram
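
The CIE chromaticity diagram plots the normalized x and y chromaticity values derived from the CIE XYZ primaries. Pure spectral colors lie along the horseshoe-shaped boundary (the spectral locus), white light lies near the center, and complementary colors lie on a straight line through the white point; it is the tool used for the gamut, complementary-color, and dominant-wavelength comparisons mentioned in question 28.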

49. Define about shades, tint and tones.


• A shade is the mixture of a color with black, which reduces lightness.
• A tint is the mixture of a color with white, which increases lightness.
• A tone is produced either by mixing with gray, or by both tinting and shading.

50. Describe about RGB model.


The RGB color model is an additive color model in which red, green, and
blue light are added together in various ways to reproduce a broad array of colors. The
name of the model comes from the initials of the three additive primary colors, red,
green, and blue.
The main purpose of the RGB color model is for the sensing, representation,
and display of images in electronic systems, such as televisions and computers, though it
has also been used in conventional photography. Before the electronic age, the RGB
color model already had a solid theory behind it, based in human perception of colors.
51. Explain about HLS model.
HSL stands for hue, saturation, and lightness, and is often also called HLS.
This HLS model describes colours in the following terms:
• Hue, which varies through magenta – red – yellow – green – cyan – blue – and back to magenta.
• Saturation, which describes how "grey" the colour is.
• Lightness, which varies from black through the colour to white.

With this model it is easier to pick the correct colour. The model also allows you to do some
things in code which you can't really achieve with RGB, such as determining what a lighter
or darker tone of a given RGB colour is.

52. Explain about HSV model.


The HSV (Hue, Saturation, Value) model is a color model which uses color descriptions that have a more intuitive appeal to a user. To give a color specification, a user selects a spectral color and the amounts of white and black that are to be added to obtain different shades, tints, and tones.

53. Define CMY.


Cyan, magenta, and yellow are the secondary colors with respect to the primary colors of
red, green, and blue. However, in this subtractive model, they are the primary colors and
red, green, and blue, are the secondaries. In this model, colors are formed by subtraction,
where adding different pigments causes various colors not to be reflected and thus not to be
seen. Here, white is the absence of colors, and black is the sum of all of them. This is
generally the model used for printing.
Difference between CMY and HSV.

54. Define computer animation.


Computer animation refers to any time sequence of visual changes in a scene. In addition
to changing object position with translations or rotations, a computer generated animation
could display time variations in object size, color, transparency, or surface texture.

55. List out steps involved in computer animation sequence.


• Storyboard layout
• Object definitions
• Key-frame specifications
• Generation of in-between frames

56. List out Computer Animation Languages.


Animation languages and key-frame systems:
• Animation functions include a graphics editor, a key-frame generator, and standard graphics routines.
• The graphics editor allows designing and modifying object shapes, using spline surfaces, constructive solid geometry methods, or other representation schemes.
• Scene description includes the positioning of objects and light sources, defining the photometric parameters, and setting the camera parameters.
• Action specification involves the layout of motion paths for the objects and camera.
• Key-frame systems are specialized animation languages designed simply to generate the in-betweens from the user-specified key frames.
• Parameterized systems allow object-motion characteristics to be specified as part of the object definitions. The adjustable parameters control such object characteristics as degrees of freedom, motion limitations, and allowable shape changes.
• Scripting systems allow object specifications and animation sequences to be defined with a user-input script. From the script, a library of various objects and motions can be constructed.

57. Define Raster Animation? (NOV/DEC 2015)


Raster based animation frames (and all raster images for that matter) are made up of
individual pixels. These pixels each contain information about the color and brightness of
that particular spot on the image. This is somewhat similar to the concept of pointillism in
painting, with the sum of the points making up the totality of the picture or frame.

58. Define Morphing


Transformation of object shapes from one form to another is called morphing.
59. What is Virtual reality?

Virtual reality is an artificial environment that is created with software and presented to
the user in such a way that the user suspends belief and accepts it as a real environment. On
a computer, virtual reality is primarily experienced through two of the five senses: sight and
sound.

PART B

1. Discuss the three-dimensional composite transformation.


2. Differentiate parallel and perspective projection.
3. Explain in detail about Rotations in 3D space?(8) (NOV/DEC 2015)
4. Explain the 3-d transformation for translation, rotation, scaling?
5. Describe how 3D curves are stored in computers? (8)(NOV/DEC 2015)
6. Explain about 3D object representation.
7. Write notes on quadratic surfaces.(8)
8. Discuss the Characteristics of Bezier curves and Bezier surfaces in detail (12).
9. Write a short note on B-spline curves (4)
11. How do you implement Morphing animation technique? Discuss with an example(8)
12. Compare Bezier curve and B splines.

13. Define Animation. Explain in detail about the animation language (8)
14. Describe briefly Modeling and Coordinate transformation in computer graphics.
15. Derive the Outline projection of coordinate position(x,y,z) to position (x0,y0) on the view plane
16. Compare HLS and HSV color models.
17. Discuss about the Properties of light.
18. Explain any one visible surface identification algorithm.
19. Explain a method to rotate an object about an axis that is not parallel to the coordinate
axis with neat diagram and derive the transformation matrix for the same.
20. Discuss on the various visualization techniques.(8)
21. Discuss on Area subdivision method of hidden surface identification algorithm.
22. Explain Virtual reality in detail?
23. Discuss briefly about parallel projections.(8) (NOV/DEC 2015)
24. Write brief notes on color models?(8) (NOV/DEC 2015)

UNIT III

MULTIMEDIA SYSTEMS DESIGN

PART A

1. Define Multimedia?

Multimedia is defined as a computer-based interactive communication process that incorporates text, numeric data, record-based data, graphic art, video and audio elements, animation, etc. The term is used to describe sophisticated systems that support moving images and audio, e.g. a multimedia personal computer.

2. Give the applications of Multimedia?

1. Document Imaging
2. Image Processing and Image Recognition
3. Full Motion Digital Video Applications
4. Electronic messaging
5. Entertainment
6. Corporate Communications

3. What are the data elements of Multimedia? (NOV/DEC 2015)

1. Facsimile
2. Document Images
3. Photographic Images
4. Geographic Information System Maps (GIS)
5. Voice Commands and Voice Synthesis
6. Audio Messages
7. Video Messages
8. Full motion stored and Live Video
9. Holographic Images
10. Fractals

4. State the resolution of Facsimile, Document Images and Photographic Images?

• Facsimile – 100 to 200 dpi (dots/pixels per inch)
• Document images – 300 dpi
• Photographic images – 600 dpi

5. What is the compression technique used in Facsimile and Document Images?

• Facsimile – CCITT Group 3
• Document images – CCITT Group 4

6. What are the applications of Photographic Images?

1. Photographic images are used in Imaging Systems that are used for identification
2. Security Badges
3. Fingerprint Cards
4. Photo Identification Systems
5. Bank Signature Cards
6. Patient Medical Histories

7. What is the use of Document Images?


It is used for storing business documents that must be retained for long periods of time
and accessed by large number of people. It removes the need for making several copies for
storage or distribution.

8. Explain about GIS Systems?

GIS means Geographic Information System maps. It is used for natural resource and wild
life management and urban planning.

9. What are the two technologies used for storage and display of GIS systems?

•Raster Storage
•Raster Image (Raster Image has basic color map, vector overlay and text display)

10. Explain about Voice Synthesis?

This approach breaks down the message completely to a canonical form based on
phonetics. It is used for presenting the results of an action to the user in a synthesized voice. It is
used in Patient Monitoring System in a Surgical Theatre.

11. What is Isochronous Playback?

Isochronous playback is defined as playback at a constant rate. Audio and video systems require isochronous playback.

12. Explain about Full motion and live video?

• Full-motion video refers to a prestored video clip, i.e. video stored on CD. E.g. games, courseware, training manuals, multimedia online manuals, etc.
• Live video refers to a live telecast. It must be processed while the camera is capturing it, i.e. what is occurring is transferred at the same time. E.g. a live cricket show on television.

13. Explain the terms Holography and Hologram?

Holography is defined as the means of creating a unique photographic image without the
use of lens. The photographic recording of the image is called a Hologram.

14. State the use of Holographic images?

It is used in design and manufacturing tasks. Holographs on credit cards are used to
ensure authenticity.

15. State the properties of Holographic images?


Holographic images are
• Not clear diagrams
• 3-dimensional
• can also be recorded on materials other than photographic plates
• Records intensity of light and phase
• created by coherent light using a laser beam

16. Define Fractals?

Fractals are regular objects with a high degree of irregular shapes. It is a lossy
Compression technique but it doesn’t change the shape of the image. Fractals are decompressed
images that result from a compression format

17. Explain Fractal Compression?

Fractal compression is based on image content, i.e. it is based on the similarity of patterns within an image. The steps in fractal compression are:

• A digitized image is broken into segments
• The individual segments are checked against a library of fractals
• The library contains a compact set of numbers called iterated function system (IFS) codes
• These system codes will reproduce the corresponding fractal

18. State the applications of Document Imaging?

Document Imaging is used in organizations such as

a. Insurance agencies
b. Law offices
c. Country and State Governments
d. Federal Government
e. Department of Defence (DOD)

19. Define Compression Efficiency?

Compression efficiency is defined as the ratio of the size in bytes of an uncompressed image to the size of the same image after compression.
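
As an illustrative figure, a 1 MB uncompressed scan that compresses to 100 KB has a compression efficiency of 10:1.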

20. What is Image Processing?

Image Processing refers to processing a digital image using a digital computer.

An image processing system will alter the contents of the image. It involves Image Recognition,
Image Enhancement, Image Synthesis and Image Reconstruction.
21. Explain Image Calibration?

The overall image density is calibrated. In Image calibration the image pixels are adjusted
to a predefined level.

22. What is Grayscale Normalization?

The overall grayscale of an image or picture is evaluated to determine if it is skewed in one direction and if it needs correction.

23. What is Frame Averaging?

The intensity level of the frame is averaged to overcome the effects of very dark or very
light areas by adjusting the middle tones.

24. What is Image Animation?

Images are displayed sequentially at controlled display speeds to provide image animation. Image animation is the basic concept of displaying successive images at short intervals to give the perception of motion.

Image Animation is a technology developed by Walt Disney and brought to every home
in the form of cartoons.

25. How Image Annotation is done?

Image Annotation can be performed in two ways

•As a text file stored along the image.

•As a small image stored with the original image.

26. Explain Optical Character Recognition (OCR)?

Optical character recognition is used for data entry by scanning typed or printed words in a form. OCR technology is now available in software; it has the capability to decipher a large number of printed fonts used in many document image applications. It is used for reading the numbers on an invoice or for capturing entire photographs of text.

27. Explain about Handwriting Recognition?

It is used for recognizing hand written characters. The key consideration of these systems
is the ability to recognize the writer-independent cursive handwriting in real time. It has been
evolved from pen-based systems and it allows the user to write commands on an electronic
tablet.

28. How does a Handwriting engine work?

The Handwriting engines use the following techniques

1. Complex Algorithms - to capture data in real time


2. Shape Recognizer - to determine the geometry and topology of stroke
3. Prototype Character set - the strokes are compared with the predefined Prototypes until a
match is found
4. Context Analyzer - used to check a collection of characters treated as a word
5. Dictionary - the word is checked here and corrections are indicated based on potential matches

29. What is Vector Data?

Vector data is the collection of points and some mathematical functions. It treats an
image as a series of points (or collection of dots) and mathematical functions that describe the
figures such as line, circles, arcs etc.

30. Define Vectorisation?

The process of converting raster (scattered dot) data into vector data is known as vectorisation.

31. What are the properties of Full-motion video clip?

• Full-motion video clips should be sharable


• It is possible to attach Full-motion video clips to other documents such as memos, text,
presentations etc
• Full-motion video clips should be indexed
• Users should be able to place their own indexing
• It should be possible to view the same clip on a variety of display terminal types with varying
resolutions
• It should be possible for users to move and resize the window displaying the video clip
• Users should be able to adjust the contrast, brightness and volume of the video clip
• Users should be able to suppress sound or mix sound from other sources
• When video clips are spliced the sound components are spliced separately

32. Explain the infrastructure required by a multimedia enabled E-mail system?

• Message store-and-forward facility
• Message transfer agents
• Message repositories (servers)
• Repositories
• Electronic hypermedia messages
• Dynamic access and transaction managers
• Local and global directories
• Automatic database synchronization
• Automatic protocol conversions
• Administrative tools

33. State the applications of Non-Textual Image Recognition?

1. Recognition of human faces


2. Interpretation of facial expressions
3. Designing, Manufacturing and Medical fields
4. Security systems
34. What is meant by Multimedia database? (NOV/DEC 2015)

A multimedia database (MMDB) is a collection of related multimedia data. The multimedia data include one or more primary media data types such as text, images, graphic objects (including drawings, sketches and illustrations), animation sequences, audio and video.
PART-B

1. Explain list of Multimedia applications. Explain them briefly.


2. Briefly discuss the history and future of Multimedia.
3. Explain the characteristics of MDBMS.
4. Write short notes on multimedia system architecture. (8) (NOV/DEC 2015)
5. What is multimedia? Explain the properties of multimedia systems.
6. Explain the data stream characteristics for continuous media.

7. Explain the different file formats used in multimedia.


8. Suggests with reasons 5 potential applications of multimedia other than the
applications in the field of entertainment and education.
9. Explain various multimedia interface standards. (8) (NOV/DEC 2015)
10. Describe various building block of multimedia system.
11. Write short notes on MDBMS.
12. Explain Database Organization for Multimedia Applications.
13. Explain 3-D Technology and Holography.
14. (i) Explain hypermedia and its functions. (8)
(ii) Short notes on Multimedia databases. (8)
15. Write brief notes on multimedia storage and retrieval? (8) (NOV/DEC
2015)
16. Explain in detail about full motion digital video applications?
(NOV/DEC2015)

UNIT IV

MULTIMEDIA FILE HANDLING

PART A

1. What is Image Compression?

Image Compression is the process of reducing the size of the image by removing
redundant information in a lossless or lossy manner to conserve storage space and transmission
time.

2. What is the need for Compression?

Compression is needed to manage large multimedia data objects efficiently and to reduce file size for storage and transmission of objects; compression eliminates redundancies in the pattern of data.

3. State the two types of Compression?

1. Lossy Compression
2. Lossless Compression

4. What is Lossy Compression? (NOV/DEC 2015)

Lossy compression causes some information to be lost, but the loss does not noticeably affect the perceived quality of the image. It is used for compressing audio, grayscale or color images, and video objects in which absolute data accuracy is not essential, for example in medical screening systems, video teleconferencing, and multimedia electronic messaging systems.

5. What is Lossless Compression?

Lossless compression preserves the exact image throughout the compression and decompression process. Lossless compression techniques are good for text data and for images with repetitive patterns, such as binary and grayscale images.

6. What are the advantages of Compression?

A compressed data object:
• requires less disk/memory space for storage
• takes less time for transmission over a network
7. State the types of Lossy Compression?

• JPEG (Joint Photographic Experts Group)
• MPEG (Moving Picture Experts Group)
• Intel DVI (Digital Video Interactive)
• CCITT H.261 (P×64)
• Fractal

8. State the types of Lossless Compression?

1. Packbits Encoding
2. CCITT Group3 1D
3. CCITT Group3 2D
4. CCITT Group 4
5. Lempel-Ziv-Welch (LZW) algorithm

9. What is A Binary Image?

Binary images contain black and white pixels and are generated when a document is scanned in binary mode.

10. What are Codecs?

Compression and decompression software or programs are called codecs (coders/decoders).

11. What is Cadence?

Cadence is the term used to define the regular rise and fall in the intensity of sound.
Examples are the beats in music, changes in intensity of sound as a person speaks.

12. Explain about Busy Image and Continuous-tone Images?

In a busy image, adjacent pixels or groups of adjacent pixels change rapidly. Grayscale or color images are known as continuous-tone images.

13. What is Negative or Reverse Compression?

If the number of bytes produced by run-length (or other) encoding is greater than the number of bytes in the original image, the result is called negative (or reverse) compression.
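
A tiny run-length encoding sketch in Python (illustrative) shows how this happens: long runs compress well, but busy data expands because every value costs a (count, value) pair.

def rle_encode(data):
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                        # extend the current run
        out.append((j - i, data[i]))
        i = j
    return out

rle_encode([7, 7, 7, 7, 0, 0])   # [(4, 7), (2, 0)]  -- shorter than the input
rle_encode([1, 2, 3, 4])         # [(1, 1), (1, 2), (1, 3), (1, 4)]  -- longer: negative compression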

14. Give some applications of compression and Decompression Techniques?

1. Facsimile Systems
2. Printer Systems
3. Document Storage and Retrieval Systems
4. Video Teleconferencing Systems
5. Electronic Multimedia Messaging Systems
6. Medical Screening Systems

15. Explain Magnetic Media Technology?

In magnetic media, data is stored on a magnetic medium by magnetization of particles in the medium. Magnetization is caused by passing current through a coil in the read/write head.

16. Mention the advantages of hard drives?

1. High capacity storage


2. Availability of low cost

17. Explain ST506 and MFM hard drives?

It is an interface developed by Seagate. ST506 defines the operation of signals between a hard
disk controller and the hard disk. It is used to control platter speed and the movement of heads
for a drive. ST506 have two ribbon cables i.e., a 36-pin and 20-pin cable. The encoding schemes
used are MFM, FM and RLL (Run Length Limited).

18. What is MFM?

MFM means Modified Frequency Modulation. Parallel data is converted to a series of encoded
pulse by MFM.

19. Explain ESDI hard drive?

ESDI means Enhanced Small Device Interface. It converts the data into serial bit streams. It uses
two ribbon cables, 36-pin cable for control signal and a 20-pin cable for data signal.

20. Explain IDE?

IDE means Integrated Device Electronics. IDE interface supports two drives; one acts as master
and other as slave. A jumper on drive electronics configures the drive as master or a slave.

21. Explain SCSI?

SCSI means Small Computer System Interface. It was developed by X3T9.2 Standard. It defines
both hardware and software interfaces.

22. Explain SCSI 1?

SCSI1 defines an 8-bit parallel data path between a host adapter and a device. The SCSI1
specification calls the host adapter as initiator and the device as target. There can be a
combination of up to eight initiators and targets daisy chained on the bus.
23. State the different phases of a SCSI bus and its uses?

• Arbitration phase – an initiator starts arbitration and tries to acquire the bus
• Selection phase – the initiator selects the target to which it needs to talk
• Command phase – the target requests a command from the initiator
• Data phase – the target requests data transfer with the initiator
• Status phase – indicates the end of the data transfer to the initiator
• Message phase – the target enters this phase to interrupt the initiator, signaling completion of the read command
• Bus-free phase – a phase without any activity on the bus, so the bus can settle down before the next transaction

24. Explain SCSI 2?

SCSI2 has faster data transfer rates. The new command defined for SCSI2 is tagged command.
The tagged command was defined to queue up commands; up to 256 commands can be queued
up for a single device.

25. State the two types of latency?

• Seek latency
• Rotational latency

26. State the types of Seek latency?

1. Mid-transfer seek
2. Elevator seek

27. What is Overlapped seek?

Seek on one drive and then on second drive and then reconnect to first drive when seek is
complete.

28. What is Midtransfer seek?

In midtransfer seek device controller can be set to seek during data transfer through a
separate port provided on the SCSI chip.

29. What is elevator seek?

A track close to the head will be read first and then a more distant track even though the distant
track was requested first.

30. State the two methods used to reduce latency?


• Zero-latency read/write
• Interleave factor

31. Define Transfer rate?

Transfer rate is defined as the rate at which data is transferred from the drive buffer to the host adapter memory.

32. Give the formula for maximum throughput?

Max throughput for I/O = Block transfer size / Total latency


where, Total latency = T1 + T2 + T3 + T4 +T5
T1- Seek latency
T2 - Rotational latency
T3 - Time required to transfer data from disk to system memory
T4 - Firmware latency
T5 - Final action on data

33. Define I/O per second?

I/O per second is a measure of the number of input/output transactions performed in a second. It is defined as: I/O per second = Maximum throughput / Block size
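
As an illustrative calculation: with a 64 KB block transfer size and a total latency of 20 ms, the maximum throughput is 64 KB / 0.02 s = 3,200 KB per second, which corresponds to 3,200 / 64 = 50 I/O transactions per second.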

34. What is Command Queuing?

Command queuing allows execution of multiple sequential commands without system CPU intervention. It helps in minimizing head switching and disk rotational latency.

35. Define Disk spanning?

Disk spanning is a method of attaching multiple drives to a single host adapter. In this
approach all drives appear as a single contiguous logical unit. Data is written to the first drive
first and when the first drive is full the controller switches to second drive and so on.

36. Explain RAID?

RAID stands for Redundant Array of Inexpensive Disks. RAID is a storage subsystem: an array of multiple disks across which data is spread. RAID is used to achieve:
• Large storage capacity
• Fault tolerance
• Performance improvement
• Mass storage systems

37. What are the key objectives of RAID systems?

1. Hot backup of disk systems
2. Large volume storage at lower cost
3. Higher performance at lower cost
4. Ease of data recovery

38. State some applications of RAID systems?

• Mainframe and network systems
• Supercomputers and multimedia systems
• Data server applications

39. State the types of RAID systems?

RAID level 0 – Disk striping
RAID level 1 – Disk mirroring
RAID level 2 – Bit interleaving of data
RAID level 3 – Parallel disk array
RAID level 4 – Sector interleaving
RAID level 5 – Block interleaving

40. What is Disk Striping?

RAID level 0 has multiple drives connected to a single disk controller. Data is striped to
spread segments of data across multiple drives. The data being written to the disk is broken into
segments. The first segment is written to first drive, second segment to second drive and so on. It
is used in database applications.

41. What is Disk Mirroring?

RAID level 1 causes two copies of every file to be written on two separate drives. Each
main drive has a mirror drive. All data written to main drive is written to the mirror drive at the
same time. Complete data redundancy is achieved. It is used in mainframe and network systems.

42. Explain RAID level 2?

RAID level 2 is called as Bit Interleaving of data. It contains arrays of multiple drives
connected to a disk array controller using SCSI channels. Data is written one bit at a time and it
is interleaved across multiple drives. It also contains multiple check disks to detect and correct
errors. It uses Hamming Error Correction Codes to detect and correct errors.

43. Explain On-the-fly parity generation and parity checking?

During data writes a parity bit is generated and written to the parity drive. During data reads
parity checking takes place. This process is called On-the-fly parity generation and parity
checking.
44. Explain Sector Interleaving?

RAID level 4 is called sector interleaving. It writes successive sectors of data on different drives, employing multiple data drives and a single dedicated parity drive. The first sector of data is written to the first drive, the second sector of data to the second drive, and so on. In RAID level 4, data is interleaved at the sector level.

45. Explain Block Interleaving?

RAID level 5 is called block interleaving. Data is block interleaved, and it does not use a dedicated parity drive: parity data is spread across multiple drives in the data stream. Multiple concurrent reads and writes can be performed in RAID 5.

46. What is the use of Optical Media?

Optical media is used for storing large volumes of data. It is indestructible and unaffected
by magnetic field or water. E.g. Optical drives such as CD-ROM, WORM, and Rewriteable
Optical Systems.

47. How Optical media is classified?

Optical media can be classified as follows:
1. CD-ROM – Compact Disc Read Only Memory
2. WORM – Write Once Read Many
3. Rewriteable
4. Multifunction

48. State the reasons for the growth of CD-ROM’s?

• Ease of use and durability of data
• Random access capability
• Very high sound fidelity
• High storage volumes

49. What are the Physical layers in CD-ROM’s?

• Polycarbonate substrate
• Reflective aluminium layer
• Protective coat of lacquer

50. Explain about the Polycarbonate Substrate layer?

CD-ROMs contain a polycarbonate disc, which is 120 mm in diameter, 1.2 mm in thickness, and has a 15 mm spindle hole in the center. The polycarbonate substrate contains lands and pits.
51. Write short notes on MPEG-2? (NOV/DEC 2015)

MPEG-2 (also known as H.222/H.262 as defined by the ITU) is a standard for the generic coding of moving pictures and associated audio information, published as ISO/IEC 13818.

PART-B

1. List the types of fixed and removable storage devices available for multimedia, and
discuss the strength and weakness of each one.
2. Explain the data compression technique used in multimedia.
3. Define MIDI. List its attribute. Compare and contrast the use of MIDI and
digitized audio in multimedia production.
4. List and explain important steps and considerations in recording and editing digital
audio.
5. Describe the capabilities and limitations of bitmap images and vector images.
6. Define animation and describe how it can be used in multimedia.

7. Explain Color, Gray Scale and Still Video Image Compression method.

8. Explain data and file format standards.

9. Explain multimedia input and output Technologies. (8)(NOV/DEC 2015)

10. Uses of magnetic Storage in Multimedia Systems.


11. Discuss briefly about image compression schemes? (8) (NOV/DEC 2015)
12. Explain in detail about types of voice recognition system? (8) (NOV/DEC 2015)
13. Explain in detail about TIFF implementation issues? (8) (NOV/DEC 2015)

UNIT V

HYPERMEDIA

PART A

1. State the applications of Non-Textual Image Recognition?

a. Recognition of human faces


b. Interpretation of facial expressions
c. Designing, Manufacturing and Medical fields
d. Security systems

2 What is Hypermedia?
The linking of media for easy access is called Hypermedia. The media may be of any
type such as text, audio, video etc. A hypermedia document contains a text and any other
sub objects such as images, sound, full-motion video etc

3. What is Hypertext?

The linking of associated data for easy access is called Hypertext. It is an application of
indexing text to provide a rapid search of specific text strings in one or more documents. It is
an integral component of Hypermedia. Hypermedia document is the basic object and text is a
sub object.

4. What is a multimedia PC?

Multimedia PC is a computer that has a CD-ROM or DVD drive and supports 8-bit and
16-bit waveform audio recording and playback, MIDI sound synthesis, and MPEG movie
watching, with a central processor fast enough and a RAM large enough to enable the user to
play and interact with these media in real time, and with a hard disk large enough to store
multimedia works that the user can create.

5. Where to use multimedia?

Multimedia improves information retention. Multimedia applications include the following:


1. Business
2. Schools
3. Home
6. What is meant by Multimedia User Interface?

A multimedia user interface is a computer interface that communicates with users through multiple media.

7. Define Virtual Reality Systems?

Virtual Reality systems are designed to produce the cognitive effect of feeling immersed
in the environment. It is created by the computer using sensory inputs such as vision, hearing,
feeling and sensation of motion.

8. State the key design issues that provide virtual reality functionality?

1. Human factors
2. Multimedia Inputs and Outputs
3. Virtual Reality Modeling
4. Virtual Reality Design considerations

9. What are the human factors involved in Virtual reality?


1. Color, Brightness and Shading
2. Object Recognition
3. Navigation
4. Motion Processing
5. Depth Processing
6. Lag aces and Shared Execution Environment
7. Business Process Workflow Applications

10. Explain about Cable convertor?

A cable converter is a small electronic channel converter connected between a cable or satellite dish and the television. It allows the user to select broadcast stations. A cable converter consists of analog demodulation and switching circuits and can select 60 or more analog channels.

11. What is Set-top system?

A set-top box is the short name for the next generation of digital information processing
systems. The set-top system acts as a cable converter as well as a programmable interface between
the user and the service provider. It allows users to connect a computer system to a television set.

12. State the classifications of Business systems?

1. Dedicated Systems
2. Departmental Systems

13. What is Depth Perception?


Perceiving the change in the distance of an object from the eye is called depth
perception. The three important factors in depth perception are
1. Motion
2. Pictorial Clues
3. Sensory Clues

14. Explain about Pictorial Clues?

Pictorial Clues consist of
1. Changes in shapes and sizes
2. Changes in gradient of surfaces
3. Changes in density of objects
4. Field of vision
5. Change in brightness and light reflection from object surfaces

15. Define Lag?

Lag is defined as the time between a participant's action and the associated application
response. The design factors used to measure lag are
1. Location of the multimedia object server
2. Network bandwidth
3. Capability of the workstation to process multiple streams concurrently

16. State the approaches used for designing concurrent operation of multiple devices and
user feedback?

1. Simulation Loops
2. Multiple Processes
3. Concurrent Objects

17. What is Simulation loop?

A set of objects such as sound clips, video clips, graphics and sensory stimuli participate
in the simulation. A procedure is created and a timestep is allocated for each object. Each procedure is
assigned a slot in the timeline for simulation. It is called a loop because the main process loops
around the simple logic of deciding which object is scheduled next. The simulation rate is bound to the
display rate.
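
A minimal sketch of such a simulation loop is given below (Python is used only for illustration; the object names, timestep, and update procedure are assumptions, not part of any standard multimedia API):

    import time

    # Illustrative objects participating in the simulation; each gets a slot in the timeline.
    objects = ["sound_clip", "video_clip", "graphic", "sensory_stimulus"]
    TIMESTEP = 1.0 / 30        # assume the display (and therefore simulation) rate is 30 frames per second

    def update(obj, t):
        # Placeholder for the procedure created for each object; a real system
        # would advance the object's state for the current timestep here.
        print("t=%.3f  updating %s" % (t, obj))

    t = 0.0
    while t < 1.0:             # run one second of simulated time
        for obj in objects:    # the main process decides which object is scheduled next
            update(obj, t)
        t += TIMESTEP
        time.sleep(TIMESTEP)   # bind the simulation rate to the display rate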

18. What is meant by Multimedia Authoring systems?(NOV/DEC 2015)

An authoring system is a program that has pre-programmed elements for the
development of interactive multimedia software titles. Authoring systems can be defined as
software that allows its user to create multimedia applications for manipulating multimedia
objects.

19. What are the design issues in Gesture recognition?

1. Start and end of gesture


2. Path recognition and velocity of movement
3. Combination effects of multiple related gestures
4. Environmental context in which the gesture was performed

20. State the User Interface design tools?

1. Media Editors
2. Authoring Application
3. Hypermedia Object Creation
4. Multimedia Object Locator and Browser

21. What is navigation?

Navigation refers to the sequence in which the application progresses and objects are created,
searched, and used. It can be done in direct mode or browse mode.

22. State the different Metaphors used for Multimedia applications?


1. Organizer Metaphor
2. Telephone metaphor
3. Aural User Interface(AUI)
4. VCR Metaphor

23. Explain Organizer metaphor?

The organizer metaphor associates the concept of embedding multimedia objects with an
appointment diary or notepad. The Lotus Organizer was the first to use a screen representation of
an office-diary type organizer.

24. What is the use of Telephone metaphor?

The telephone metaphor combines normal windows user interface ideas with the
telephone keypad. The telephone metaphor on a computer screen allows the computer
interface to be used in the same way as a telephone keypad.

25. Explain AUI?

Aural User Interface (AUI) allows computer systems to accept speech as direct input and
provide an oral response to the user actions. The real challenge in AUI systems is to create an
aural desktop that substitutes voice and ear for the keyboard and display.

26. Define Mobile Messaging? (NOV/DEC 2015)

Mobile Messaging (MM) is a presence-enabled messaging service that aims to transpose
the Internet desktop messaging experience, such as ICQ or MSN, to the usage scenario of being
connected via a mobile/cellular device.

27. What is Scaling?

Scaling allows enlarging or shrinking the whole or part of an image. Image scaling is
performed after decompression. The image is scaled to fit in a user defined window.
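
As a rough sketch, scaling after decompression could be done with the Pillow library; the file name and window size below are illustrative assumptions:

    from PIL import Image

    img = Image.open("photo.jpg")      # decompressed image (hypothetical file name)
    window = (640, 480)                # user-defined window size (assumed)
    scaled = img.resize(window)        # enlarge or shrink the image to fit the window
    scaled.save("photo_scaled.jpg")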

28. What is Zooming?

Zooming means enlarging a digital image to see it more clearly or make it easier to alter.
It allows the user to see more detail for a specific area of the image.

29. What is Rubber Banding?


Rubber Banding is another form of zooming. The user can use a mouse to define two
corners of a rectangle. The selected area can be copied to a clipboard, cut, moved or zoomed.
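
A minimal Pillow sketch of rubber banding followed by a zoom of the selected area; the rectangle corners would normally come from mouse events and are hard-coded here as assumed values:

    from PIL import Image

    img = Image.open("photo.jpg")                  # hypothetical source image
    left, top, right, bottom = 100, 50, 300, 200   # two corners defined with the mouse (assumed)
    selection = img.crop((left, top, right, bottom))
    zoomed = selection.resize((selection.width * 2, selection.height * 2))  # zoom the selection
    zoomed.save("selection_zoomed.jpg")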

30. What is Frame Interleaving?

Frame Interleaving defines the structure of the video file in terms of the layout of sound
and video components.

31. What is 1:1 interleaving?

1:1 interleaving means that the storage for every video frame is followed by storage for
sound component of that frame.
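
The layout can be sketched as follows; the frame data and file name are placeholders and do not correspond to any particular video file format:

    # Write frames so that the storage for every video frame is immediately
    # followed by the storage for the sound component of that frame.
    frames = [(("VIDEO%d" % i).encode(), ("AUDIO%d" % i).encode()) for i in range(3)]

    with open("interleaved.dat", "wb") as f:
        for video_chunk, audio_chunk in frames:
            f.write(video_chunk)   # video frame i ...
            f.write(audio_chunk)   # ... followed by its sound component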

32. What is programmed degradation?

Playback control can be exercised at the time of decompression and playback. This is
called programmed degradation. Programmed degradation comes into effect when the client
workstation is unable to keep up with the incoming data.
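
One common way to realize programmed degradation is to drop frames when playback falls behind schedule; the sketch below assumes a simple frame-dropping policy with placeholder decode and display steps:

    import time

    FRAME_INTERVAL = 1.0 / 30          # target playback rate (assumed 30 frames per second)

    def decode(frame):                 # placeholder decompression step
        pass

    def display(frame):                # placeholder display step
        pass

    start = time.monotonic()
    for i in range(300):               # placeholder stream of 300 incoming frames
        due = start + i * FRAME_INTERVAL
        if time.monotonic() > due + FRAME_INTERVAL:
            continue                   # behind schedule: degrade playback by skipping this frame
        decode(i)
        display(i)
        wait = due - time.monotonic()
        if wait > 0:
            time.sleep(wait)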

33. What is the use of Planar Imaging Technique?

Planar Imaging Technique is used in computer-aided tomography (CAT scan) systems. It
displays a two-dimensional cut of X-ray images through multi-dimensional data.

34. Explain user workstation?

User workstation can serve as the input node for voice or video input. It can also serve as
the output node for text, graphics, image, audio/voice or video.

35. What is the use of Gateway nodes?

The gateway node is a standard means of communication with other systems.

36. What is the use of Database server?

The database server supports the database requirements of the application and stores the
attribute information for real-world objects in the application. Database servers are based on the
UNIX, OS/2, or Windows platform.

37. What is the use of Voice mail server?

Voice mail server is connected to a PBX (Private Branch Exchange). It is used for voice
mail messages.

38. What is the use of Audio Server?


Audio Server manages all digitized voice and audio objects. Audio servers should be
capable of maintaining isochronous playback of audio objects.

39. Explain about the Video Server?

Video Server manages video objects. Video servers should be capable of maintaining
constant playback speed.

40. What is the use of Audio/Video Duplication?

Audio/Video Duplication node allows users to create audio or videotapes for
transportation of multimedia documents.

41. What is the use of Duplication station?

Duplication station provides specialized high-speed duplication equipment for media such as
diskettes, CD-ROMs, recordable CDs, optical disks, optical tapes, etc.

PART-B

1. Distinguish between a multimedia system and a hypermedia system.


2. (i) List the main attributes, benefits, and drawbacks of three types of authoring systems. (8)
(ii) Write short notes on following.
a. Mobile messaging. (4)
b. Document management. (4)
4. Explain time-based and object-oriented multimedia authoring tools.
5. What are editing features? Explain them briefly.
6. Briefly explain integrated document management in multimedia. (8) (NOV/DEC 2015)
7. How is a hypermedia message created? Give an example and explain the hypermedia
message components.
8. Explain the components of Distributed multimedia Systems. (8) (NOV/DEC 2015)
9. Explain multimedia Authoring and User Interface design.

10. Explain the types of multimedia authoring systems? (8) (NOV/DEC 2015)

11. Describe the term Hypermedia and its applications. (8) (NOV/DEC 2015)
