AN EXCERPT FROM
© 2007 Autodesk, Inc. All Rights Reserved.
This publication, or parts thereof, may not be reproduced in any form, by any method, for any purpose.
AUTODESK, INC., MAKES NO WARRANTY, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE REGARDING THESE MATERIALS, AND
MAKES SUCH MATERIALS AVAILABLE SOLELY ON AN “AS-IS” BASIS. IN NO EVENT SHALL AUTODESK, INC., BE LIABLE
TO ANYONE FOR SPECIAL, COLLATERAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR
ARISING OUT OF ACQUISITION OR USE OF THESE MATERIALS. THE SOLE AND EXCLUSIVE LIABILITY TO AUTODESK,
INC., REGARDLESS OF THE FORM OF ACTION, SHALL NOT EXCEED THE PURCHASE PRICE, IF ANY, OF THE MATERIALS
DESCRIBED HEREIN.
Autodesk, Inc., reserves the right to revise and improve its products as it sees fit. This publication describes the state of this
product at the time of its publication, and may not reflect the product at all times in the future.
The following are registered trademarks or trademarks of Autodesk, Inc., in the USA and other countries: 3DEC (design/logo),
3December, 3December.com, 3ds Max, ActiveShapes, Actrix, ADI, Alias, Alias (swirl design/logo), AliasStudio, Alias|Wavefront
(design/logo), ATC, AUGI, AutoCAD, AutoCAD Learning Assistance, AutoCAD LT, AutoCAD Simulator, AutoCAD SQL Extension,
AutoCAD SQL Interface, Autodesk, Autodesk Envision, Autodesk Insight, Autodesk Intent, Autodesk Inventor, Autodesk Map,
Autodesk MapGuide, Autodesk Streamline, AutoLISP, AutoSnap, AutoSketch, AutoTrack, Backdraft, Built with ObjectARX
(logo), Burn, Buzzsaw, CAiCE, Can You Imagine, Character Studio, Cinestream, Civil 3D, Cleaner, Cleaner Central, ClearScale,
Colour Warper, Combustion, Communication Specification, Constructware, Content Explorer, Create>what’s>Next> (design/
logo), Dancing Baby (image), DesignCenter, Design Doctor, Designer’s Toolkit, DesignKids, DesignProf, DesignServer,
DesignStudio, Design|Studio (design/logo), Design Your World, Design Your World (design/logo), DWF, DWG, DWG (logo),
DWG TrueConvert, DWG TrueView, DXF, EditDV, Education by Design, Extending the Design Team, FBX, Filmbox, FMDesktop,
GDX Driver, Gmax, Heads-up Design, Heidi, HOOPS, HumanIK, i-drop, iMOUT, Incinerator, IntroDV, Kaydara, Kaydara (design/
logo), LocationLogic, Lustre, Maya, Mechanical Desktop, MotionBuilder, ObjectARX, ObjectDBX, Open Reality, PolarSnap,
PortfolioWall, Powered with Autodesk Technology, Productstream, ProjectPoint, Reactor, RealDWG, Real-time Roto, Render
Queue, Revit, Showcase, SketchBook, StudioTools, Topobase, Toxik, Visual, Visual Bridge, Visual Construction, Visual Drainage,
Visual Hydro, Visual Landscape, Visual Roads, Visual Survey, Visual Syllabus, Visual Toolbox, Visual Tugboat, Visual LISP, Voice
Reality, Volo, and Wiretap.
The following are registered trademarks or trademarks of Autodesk Canada Co. in the USA and/or Canada and other countries: Backburner, Discreet, Fire, Flame, Flint, Frost, Inferno, Multi-Master Editing, River, Smoke, Sparks, Stone, Wire.
All other brand names, product names, or trademarks belong to their respective holders.
Published By: Autodesk, Inc.
111 McInnis Parkway
San Rafael, CA 94903, USA
International Standard Book Number 1-897177-47-X
First printing: October 2000
Second printing: September 2002
Third printing: May 2004
Fourth printing: January 2005
Fifth printing: April 2007
Table Of Contents

3D Computer Graphics  7
    3D Computer Animation  8
    Technical Creativity  10
    The Animation Pipeline  12
Time and Space  15
    3D Space  16
    Time  18
    Bitmap Space  20
Exploring Maya  23
    The Workspace  24
    Objects and Components  28
    Dependency Graph  30
    Transformations  32
Animation  35
    Animation Techniques  36
    Setting Keys  38
Modeling  41
    Geometry  42
    Modeling Techniques  44
    NURBS Surfaces  46
    Polygon Modeling  48
Deformations  51
    Deforming Objects  52
    Deformers  54
Character Animation  57
    3D Characters  58
    Animating Characters  60
Materials and Textures  63
    Shading Your Models  64
    Surface Materials  66
Digital Cinematography  69
    How Light Works  70
    Casting Shadows  72
Rendering  75
    Rendering Scenes  76
    Render Output  78
Interactive 3D  81
    Interactive 3D  82
    Game Creation  84
Maya
An Introduction to 3D Computer Graphics
3D Computer Graphics
Animation is an art form created and cultivated over the last century.
While drawing, painting, sculpting and photography allow artists to represent
shape and form at a single point in time, animation lets artists explore a world
in motion. Through animation, new worlds can be imagined. This modern
art form evokes emotion through the movement of a sequence of drawings,
paintings, photographs or rendered images.
The introduction of 3D computer graphics over the last couple of
decades has had a big impact on the world of animation. Digital characters
and sets can now be built and animated, then presented in different media
formats such as film, video and interactive games. Characters and visual
effects can even be seamlessly integrated into live-action footage.
Autodesk® Maya® is a 3D animation system that lets artists play the roles
of director, actor, set designer and cinematographer.
3D Computer Animation
Bingo © 2000 Autodesk, Inc.
CNN Headline News,© 2000 CNN, Image courtesy of David Price
Animated Short Films
For many years 3D computer graphics were used
primarily in animated short films. The experimental
nature of these films was a good match for this new
computer graphics technology. Smaller teams of
artists, or even individual artists, could explore the
use of computers to generate animation without the
pressures of a larger feature production schedule.
In fact, Chris Landreth’s Bingo, an animated
short film, was created while Maya was still in development. Using Maya, Chris and his team were able
to tell a compelling story about the influences of our
society on the average person.
Short films provide a fertile ground for
experimentation that helps drive innovation in the
computer graphics industry. They are also a great way for
young animators and students to begin using their
animation skills as a vehicle for storytelling.
Broadcast
There is a good chance that anyone involved in the
early years of 3D computer graphics has had to
animate a flying logo. This use of 3D offered a new
and dynamic way of getting the message across –
always important in the world of advertising.
Since then, the use of 3D in broadcast has evolved
and more sophisticated artwork is being produced.
Flying logos are now integrated into more
complete 3D environments where a product is advertised or a corporate message introduced. Character
animation is also used more to bring objects to life
and help sell the message.
Maya has helped open the door to a more
complex use of 3D in the broadcast world. With integrated modeling, animation, characters, visual effects
and rendering, a smaller video production house can
now easily add 3D to its existing 2D workflow.
© Blockbuster Entertainment 2002
The world of 3D computer graphics has grown
from experimental short films to full integration
into the creative process for many types of media.
From flying logos to digital actors, the field of 3D
computer graphics has evolved rapidly over the last
two decades. The use of 3D graphic tools is now an
important part of many television, film and multimedia projects.
What makes 3D such a useful tool is the way
it simulates real objects. The way objects appear in
perspective, the way a surface bends and twists,
or the way a light illuminates a space—all of these
complex 3D effects can now be recreated on the
computer. The resulting digital images can then be
integrated into other media types using familiar
compositing and editing techniques.
Autodesk® Maya® is a 3D animation system
that addresses the needs of a wide variety of digital
content creators. The Maya software tools and techniques have been developed with the artist in mind,
while command-based scripting offers ways to
build customized tools that suit more integrated
production workflows.
Feature Films
© 2000 Nihilistic Software, Image courtesy of Activision.
The last few years have seen a sharp rise in the use of 3D in feature
films. While many films have integrated 3D into existing live-action
scenes, Pixar’s Toy Story® became the first feature-length animation
that used 3D exclusively for characters and sets. Sony Pictures
Imageworks’ Stuart Little® took this one step further and made a digital
mouse the star of a live-action movie. Digital creatures, characters and
sets continue to show up in the movies and even traditional filmmakers
are starting to consider 3D a standard part of the production process.
Feature films tend to use many computer programs to complete a
project, including in-house software and off-the-shelf software such
as Maya. Maya is most often used for modeling, animation, character
animation and dynamics simulations such as cloth. The Maya software
open architecture makes it easy for computer graphics (CG) supervisors
to build custom tools to help streamline production.
Wing Commander © 2000 Digital Anvil
Visual Effects
While CG actors star in movies of their own, 3D computer graphics
is changing how visual effects are used for both film and television.
Smaller productions can now afford to integrate 3D graphics into their
work, while large film productions can now achieve effects only dreamed
of in the past.
Film sets can be partially built and then extended with detailed 3D
digital sets. Also, animated stunt people can be thrown off buildings in
ways not recommended for real people. And smoke, fire and exploding
objects can now be simulated within the safety of
a computer screen.
The Maya software
tools, especially Maya
dynamics, are ideal for
generating visual effects
that can be fully integrated into live-action
shots. The best effects
make it impossible to find
the line between reality
and where computer
graphics are used.
Interactive Video Games
Over the years, video games have developed from
black and white pixels to real-time virtual environments built with 3D characters and sets. The graphics
used in these games have always conformed to the
capabilities of the game console on which they are
delivered. Next-generation game consoles are continually increasing their computing power to be comparable to the workstations used to run Maya. This is
breaking down limitations of the past.
Game artwork is becoming more sophisticated
with complex 3D models, texture maps, lighting and
even dynamics. Maya is an ideal tool for generating
this kind of 3D artwork and includes tools that address
the special requirements of building content for
real time.
Visualization and Web
Digital content creation tools are used in a number
of fields including fine arts, architecture, design,
education and scientific research.
Some of these fields require 3D computer
graphics to produce highly realistic images for the
evaluation of projects or prototypes. With advances in
the web’s ability to present graphic and 3D information, visualization on the internet is emerging as an
important tool for many companies.
Lee Irvine
© 2002 Autodesk, Inc.
Technical Creativity
As an artist working in a new medium, you must
first understand the technical aspects of your
new tools before you can reach your full creative
potential. Just as a painter must learn how a particular
paint mixes and dries on canvas, and a photographer
must learn what film speed works best with a particular lens, a 3D artist must learn the basics of setting
keyframes, working with 3D geometry and setting
up materials and lights for photorealistic rendering.
To fully master computer animation, you must
have a balance of artistic and technical skills. Not
only must you learn how to work with shape, form,
motion, color and texture, but also you must learn
how the computer interprets all of these elements.
While Maya will allow you to go far without understanding all the technical details, you will have
greater creative freedom with more knowledge.
Getting to Know Your Computer
If you are sitting down at the computer for the first
time, you may be intimidated by the many computer-based tasks you must learn, such as opening applications, moving and saving files, and working
over a network. If you work in a larger production
house, you probably have technical assistance onsite to help you get through this part of the learning
process. In a smaller production house, you likely
have less assistance and must learn more on your
own. Luckily, these skills come quickly with experience. The best way to learn is to dive in and start
working.
Getting Started with Maya
There are several steps to getting started with
Maya. This book is designed to give you a conceptual understanding of how Maya works, while the
Learning Maya | Foundation book gives you project-based experience. You can also use the reference
manuals and web tutorials offered at the Autodesk
web site.
While these academic tools are important, they
can't replace true hands-on production experience.
One good way to begin using the software is to
model, render and animate a real object—an object
you can study, document and accurately turn into a
digital scene. Try to build and animate your favorite
old toy, a household appliance or even your
own face.
By using a real object, you will be able to evaluate
your success against the original. By focusing on
creating something specific, you will be able to apply the
knowledge you have gained from this process.
Transferring Traditional Skills
Artists with skills in traditional media will find the transition to
3D computer graphics easier once they get used to working on a
computer. In fact, new 3D artists should take the time to learn one
or more of the following traditional art forms because they can help
enhance 3D skills:
Drawing and Sketching
Drawing is a technique of representing the real world by means of
lines and shapes. This skill requires the ability to observe and record
the three-dimensional world. This skill can also be used to create
storyboards and character sketches—great tools for developing an
idea before proceeding to computer graphics.
Cel Animation
Cel animators create 2D art through motion. Cel animation includes
traditional techniques such as squash and stretch, anticipation,
overlapping action and follow through. Many of these 2D techniques
translate very well into 3D environments.
Painting
Painters learn to work with color, light, shape, form and composition.
On the computer, these skills help create texture maps, position lights
and compose scenes.
Cinematography
Knowledge of traditional cinematography will help artists use real-world techniques when setting up CG lights and cameras. This skill
is very important when working with 3D graphics that are integrated
into live-action plates.
Photography
Still photography requires an understanding of lighting and
camera effects such as key lights, focal length and depth of field.
Photography also teaches good composition techniques that are
useful for framing scenes.
Sculpture
Sculpting with clay, stone and metal requires an intimate understanding of shape and form. Hands-on experience in shaping complex
surfaces is a great asset when working with digital surfaces in Maya.
Architecture
Architects often make good 3D artists because they are trained to
think in plane, section, elevation and perspective. Building models by
hand is another skill they develop that makes it much easier to work in
a digital environment.
Right and Left Brain Thinking with Maya
Maya has a creative and a technical side. The creative side of Maya offers you tools that
make it easy to work in a 3D world with shape and form. These tools free you up to make
creative decisions on your project. The technical side of Maya offers you access to the
inside workings of both your scenes and Maya itself. This access makes it possible to
build your own custom tools and to speed up production where repetitive tasks appear.
By having this dual nature, Maya is able to contribute to different stages of a production
and to different ways of working.
Switching Sides
While working as a 3D artist, you will be
required to be both technical and creative
at the same time. One strength of Maya
is that you can start off with a technical
approach as you rig up your characters
and models with controls. Once this work
is complete, you can focus on the creative
process using a few higher level controls
that let you put aside the technical issues
for a while.
Left Brain Thinking
The Technical Edge
Maya has many editors that give you access to all
parts of a scene. For example, the Attribute Editor
can access the mathematical values assigned to
all objects, shaders and animated sequences in
your scene.
Maya lets you group objects together to build
complex hierarchies. These groups help you organize
your models while offering methods of animating
complex objects.
Maya Embedded Language (MEL) and Python®
are powerful scripting languages used to execute
commands and build custom user interface elements. These are ideal tools for technical leads
who need to create tools that support production
workflows used by their teams.
Maya is built on a complex interconnection of
objects known as the Dependency Graph. This
establishes the connections between objects
and can be viewed and manipulated for incredible control. Understand the Dependency Graph
and you understand the technical side of Maya.
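As a small, hedged illustration of these connections, the following sketch uses the Maya Python commands module (maya.cmds); the node names are simply the defaults Maya tends to assign and are assumed for this example.

import maya.cmds as cmds

# Create two objects; Maya returns the transform and shape node names.
cube = cmds.polyCube(name='pCube1')[0]
sphere = cmds.polySphere(name='pSphere1')[0]

# Wire one attribute into another through the Dependency Graph: the
# sphere's height now follows the cube's height.
cmds.connectAttr(cube + '.translateY', sphere + '.translateY')

# List the connection feeding the sphere's translateY attribute.
print(cmds.listConnections(sphere + '.translateY', source=True, plugs=True))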
Right Brain Thinking
The Creative Edge
Many tools in Maya use Manipulator Handles
to offer visual clues as you edit an object. By
using the manipulator, you are able to make
your decisions visually without relying on the
actual numbers stored in Maya.
Materials and textures are presented visually
using icons and swatches that help you make
decisions. This is one step in the creation
of rendered scenes. Material changes can
also be explored using Maya IPR (Interactive
Photorealistic Rendering).
Maya includes fully shaded
and textured views so
that what you see in your
interactive scene resembles
what your final rendering
will look like.
Animation information is presented in visual graphs that
help you visualize motion.
This makes all the numbers
easier to understand. You
can then easily edit this
graph in the same way you
would edit a curve in 3D.
Mathematics, Scripting and Programming

Mathematics is used by Maya in a number of ways: objects in Maya exist in a 3D coordinate system, colors are stored as RGB values, and animation is created as values that are mapped against time. A Maya scene is basically a database of numbers that is interpreted by the software into geometry, color and texture. In some cases, you may need to do some math outside of Maya to make sure the right numbers are plugged in. Also, you may want to set up a mathematical equation or expression to create more complex motion in your scene.

Maya is built on MEL (Maya Embedded Language), a scripting language that you can use to build custom tools and workflows. This language is fairly easy to learn and more technically minded artists might want to explore its use in their work. Maya also offers the opportunity to use the Python scripting language as an alternative to MEL. If you want the tool integrated into Maya, you can also program plugins using the Maya API. To develop these skills, a foundation in C++ programming is an asset. However, you can get quite far by using existing scripts and source code as inspiration.

Creative Awareness

One of the goals of creating artwork in a 3D graphics application such as Maya is to mimic the real world. This means that the more you are aware of the world around you, the easier it will be to recreate it on the computer.

As you come into contact with people, places and objects, take a closer look and imagine that you have to model, animate and render all of the details that you see. Details such as how a person swings his or her arms while walking, or how light enters a room, offer great reference for the 3D artist to incorporate into their work. Any seasoned animator will tell you the importance of observing the world around you.

You should continue this kind of awareness when you go to the movies. In many ways, your animations will have roots more in movies than in real life. While watching movies, observe camera angles, set lighting, the staging and framing of actors, and performances. An understanding of how people, places, color, shape and form are captured on film can help you become a better animator.
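To tie the mathematics and scripting together, here is a minimal sketch written with the Maya Python commands module (maya.cmds) and meant to be run from the Script Editor; the object name and the exact equation are invented for illustration.

import maya.cmds as cmds

# Create a sphere; Maya returns the transform and shape names.
ball = cmds.polySphere(name='bouncingBall', radius=1)[0]

# A scene is ultimately numbers: set the XYZ position directly.
cmds.setAttr(ball + '.translate', 0, 5, 0, type='double3')

# An expression node evaluates a small equation every frame, so motion
# can be described mathematically instead of with hand-set keys.
cmds.expression(name='bounceExpr',
                string=ball + '.translateY = 5 * abs(sin(time * 2));')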
The Animation Pipeline
A number of different stages lead up to a final
animated 3D sequence. When computers were
first used for 3D graphics, these stages were broken
down into modeling, animating and rendering.
These stages have since been expanded with the
introduction of character animation, effects and
more sophisticated camera and lighting tools.
Each stage of 3D animation is a full area of
study on its own. It is useful to be familiar with all
the stages, even if you find yourself focusing on
only one later on. Knowing how the stages in the
animation pipeline work together will help you make
decisions that benefit everyone down the line.
Modeling, Animating and Rendering
The animation pipeline can be summarized in seven
stages: modeling; characters; animation; materials and
textures; lights and cameras; effects; and rendering
and compositing. These general stages describe the
main tasks required to create an animation.
On a project, you will often work on different parts
of the pipeline at the same time. It is a good idea to
have the teams work closely, using storyboards and
sketches to tie elements together. If you work in a
larger office, you may focus on one of these areas,
although having an understanding of several areas
is beneficial.
1. Modeling
This is the stage where you build geometry to represent objects
and characters. This geometry describes the position and shape of
your models and can be manipulated in the 3D workspace of Maya.
2. Characters
Characters are models that use special controls such as skeleton joints and inverse kinematics for animation. These controls make it possible to create the complex mechanics required by characters.

3. Animation
Once a model has been set up for animation, you can begin to animate it. By changing its position or shape over time, you bring it to life. The timing can then be tweaked to create very specific motion.

4. Materials and Textures
In order for geometry to be rendered, it must be given material attributes that define how it will be shaded by light. Texture can also be added to bring detail and visual richness to the surfaces.

5. Lights and Cameras
As you would on a movie set, you must set up lights and cameras to illuminate and frame objects. You can then animate both the lights and the camera to further mimic Hollywood effects.

6. Effects
There are many effects such as fire, fields of grass and glowing lights that can't be easily represented using models and textures. Tools such as particles and Maya Paint Effects can be used to add effects.

7. Rendering and Compositing
Once all the scene's parts are ready, you can render a single image or a sequence of images. You can also render objects separately, then bring them back together in 2D using a compositing system.
Animating in Maya
Looking at the animation pipeline
from the perspective of a Maya
user, several stages use animation as
their foundation—such as modeling,
characters and effects.
Since almost any attribute in Maya
can be animated, you can begin
preparing for the animation process
at any time.
After setting up and animating
a scene, you can render and
composite the 3D objects and bring
them into a 2D bitmap world. The
rendering and compositing stages
seem to stand on their own at the
end of the pipeline. However, you
can apply test renderings throughout
the animation process and undertake compositing earlier on.
Production Pipelines
The way in which you approach the animation
pipeline will depend on the environment you create
in. From a single artist on his or her own to an
artist in a large corporation, the approach to using
3D graphics may differ. Here are some general
descriptions of the production pipelines you can
expect to encounter.
Single Artist
As a lone artist, you will be in charge of all aspects of the
production process. You will, therefore, work in a more linear
fashion. As technical lead for yourself, you may want to set up a
consistent control strategy for your characters and scenes so that
when you are animating you can think more creatively.
Small Production House
In a smaller production house, the focus is on cutting production time
and making the most of limited resources. You will be called upon
to play a few roles, although some specialization will occur. Custom
tools may be put into place to streamline production.
Large Production House
In a large production house, specialization is more likely. You will
focus on either modeling, texturing, lighting, animating, effects
or rendering. Technical leads will take care of custom tools and
character rigging. Maya will also be part of a larger production tool
kit and MEL scripts and plugins will be required for data transfer
to proprietary tools.
Gaming Company
A game company can work like either a small or large production
house. Here the focus is on modeling with polygons, setting up
texture UVs, painting textures and animating. The exact workflow for
your models and scenes will depend on the game engine and which
custom tools are available for exporting.
School/Student
If you are at school and working on a production, you can either
work alone, which may limit the complexity of your animation, or you
can work with your fellow students to create a production house
scenario. Here you would choose an area of expertise and specialize
in that area with your classmates’ support. The first approach offers
a more general view of the pipeline while the second approach gives
you production-level experience in a particular area.
Pipeline diagram: Modeling, Characters, Materials and Textures, Lights and Cameras, Effects, Animation, Rendering, Compositing.
Technical Leads
In production houses, technical directors (TDs) and computer graphics
(CG) supervisors offer their teams support with scripts, expressions,
plugins, and character rigging. Technical leads set up controls that allow
animators to focus on creating motion.
In building up a character, the technical
lead might also build high-level controls
that create a particular kind of motion for
use by the animators. For example, if many
different animators are working on a bird
character, the technical lead might want to
make sure the wing beat is always animated
the same way. Therefore, a single control can
be created that drives all the components of
the wing beat. The high-level control makes
sure all of the wings beat the same way.
Management of the production workflow
may also involve creating custom tools.
Since many production houses use in-house
software, MEL scripts might also be used to pipe Maya scenes out to a custom file format.

High-level Controls
For example, custom attributes make it easier for an animator to control a character's hand.
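As a hedged sketch of how such a control might be wired up, the following uses maya.cmds with a custom attribute and set driven keys; the control and joint names are invented stand-ins for a real wing rig.

import maya.cmds as cmds

# A control object and two stand-in joints for the wings.
ctrl = cmds.circle(name='wingControl', normal=(0, 1, 0))[0]
cmds.select(clear=True)
left = cmds.joint(name='leftWing_joint', position=(1, 0, 0))
cmds.select(clear=True)
right = cmds.joint(name='rightWing_joint', position=(-1, 0, 0))

# One custom attribute that the animator keys directly.
cmds.addAttr(ctrl, longName='wingBeat', attributeType='double',
             minValue=0, maxValue=1, keyable=True)

# Drive both joints from the single attribute so every animator gets
# the same beat: wingBeat 0 = wings down, wingBeat 1 = wings up.
for joint, up_angle in [(left, 60), (right, -60)]:
    cmds.setDrivenKeyframe(joint + '.rotateZ', currentDriver=ctrl + '.wingBeat',
                           driverValue=0, value=0)
    cmds.setDrivenKeyframe(joint + '.rotateZ', currentDriver=ctrl + '.wingBeat',
                           driverValue=1, value=up_angle)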
Animators
While the setup of scenes and
characters is an important part of
the process, the animation of these
elements is where the art is created.
Animators must tell a story using
motion as the main tool.
A well set up scene gives an
animator space to focus on setting
keys on the various high-level
controls built by the technical lead.
With non-linear animation, a whole
library of motion can be saved and
used in different parts of a project.
Such a library provides an animator
with a sort of animation palette.
Well built controls and skilled
animators are the ideal combination
for creating art through animation.
Trax Editor and Visor
The non-linear workflow of Maya is an ideal tool for
animating a scene. High-level clips and poses can be mixed
and blended in the Trax Editor. Motion can be quickly laid
out, then edited with simple click+drag actions.
Time and Space
With 3D computer animation, artists work in a digital world where space,
color, texture, time, shape and form are tools for creating images and
sequences of images.
All of these physical realities must be translated into a computer
language based on numbers. In fact, Maya scenes and images are really
just databases of numbers that are interpreted by Maya software and
presented on the computer screen in a more visual and artist-friendly manner.
While artists do not have to know how the numbers are interpreted by
the computer, they do need to understand some of the ways in which space,
color and time are quantified and recorded. Learning how the computer
interprets digital information—such as 3D coordinates, frames per second
or the RGB information stored in a bitmap image—can help artists understand how this information relates to their own perception of time and space.
3D Space
Every day, you come into contact with three-dimensional objects
and spaces. You have learned how to recognize and work with
three dimensions in your daily routine and have an intuitive feel
for how it works. If you have ever drawn a sketch, built a model or
sculpted a model, you also have a creative feel for how shape and
form can be described in 3D.
Three-dimensional objects can be measured and quantified. If
you have ever measured the length, width and height of an object,
you have analyzed its three dimensions. You can also determine an
object’s position by measuring it in relation to another object or to a
point in space. In Maya, you can explore three-dimensional objects
and recreate them on screen as rendered images complete with lights
and shadows.
Two Dimensions
When you measure the width and height of an
object, you are analyzing two of its dimensions.
The X and Y axes can be used to find points on
an object, such as the center of the wheel or
the position of the headlight in this two-dimensional space.
XYZ Coordinate Space
In Maya, 3D space is measured using three axes that are defined as
the X-axis, the Y-axis and the Z-axis. If you imagine looking into a
movie screen, the width would be the X-axis, the height would be
the Y-axis and the depth would be the Z-axis. In Maya, these axes are
presented with X and Z on the ground and Y as the height.
You can find any point in this 3D world by defining a coordinate
for each of the axes. To help you visualize these coordinates, a grid
with axis indicators shows you their orientation.
Three Dimensions
When you measure the length, width and height of an object, you must consider a third dimension as defined by the Z-axis when defining points in space.

Origin
Points in a 3D coordinate system are measured against an origin point. This point is assigned a value of 0, 0, 0.

Transformations
When an object is moved, rotated or scaled, the X, Y and Z axes are used for reference. An object is moved along, rotated around, or scaled along the chosen axis line. Values for these transformations are stored for each of the three axes.

Axis indicator
To help you visualize the three axes, each is given a corresponding RGB color: X – red, Y – green, Z – blue. The axis indicators point in the positive direction for X, Y and Z.
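A short sketch of these transformations using maya.cmds follows; pCone1 is simply the kind of default name Maya assigns and the values are arbitrary.

import maya.cmds as cmds

cone = cmds.polyCone(name='pCone1')[0]

# Translate: 3 units along X, 2 up in Y, -1 in Z - a point in XYZ space.
cmds.xform(cone, translation=(3, 2, -1), worldSpace=True)

# Rotate 45 degrees around the Y (up) axis, then scale along all three axes.
cmds.xform(cone, rotation=(0, 45, 0))
cmds.xform(cone, scale=(1, 2, 1))

# The values stored for each axis can be read back as plain numbers.
print(cmds.xform(cone, query=True, translation=True, worldSpace=True))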
The Ground Grid
To create a ground surface to reference your work in XYZ, Maya includes a grid that maps out an area 24 x 24 units. The X and Z axes are on the ground and form the lines of the grid. The Y-axis is the height.

Y-up and Z-up Worlds
By default, Maya is Y-up where the Y-axis
represents the height. Some 3D packages,
especially CAD applications, might use Z as
the height. If you import a model from one of
these packages, you have to either re-orient
the model or set up Maya as a Z-up world.
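The same preference can be changed from a script; this is a minimal maya.cmds sketch of switching the up axis.

import maya.cmds as cmds

print(cmds.upAxis(query=True, axis=True))    # 'y' by default

# Make Z the world up axis, for example to match a CAD package, and
# rotate the view so the ground plane still reads correctly.
cmds.upAxis(axis='z', rotateView=True)

# Switch back to the Maya default.
cmds.upAxis(axis='y', rotateView=True)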
Perspective Space
When you visualize objects in the real world, you do
not usually think about axis lines and 3D coordinates.
Instead, you see the world in perspective where lines
vanish to the horizon and objects get smaller as they
get further away. A Perspective view allows you to
visualize a 3D space in a way similar to how you
view the world through either your eyes or the lens
of a camera.
Most artists have learned to sketch a 3D scene
in perspective or use drafting techniques to create
more accurate perspective drawings. With Maya,
the 3D Perspective view is automatically calculated
for you, based on a camera position and a view
angle that you set.
Orthographic Projections
While a Perspective view can help you compose a
shot, it is not always the ideal method for modeling
and animating objects. Therefore, an Orthographic
view lets you analyze your scene using parallel
projections of only two axes at a time. Using these
views, you can more accurately determine how an
object is positioned.
Most 3D animators find themselves using
perspective views to compose a shot while Orthographic
views offer a place to view the scene in a more
analytical manner. Both views are crucial to working
properly in 3D.
World Space and Local Space
When you build objects in 3D, it is possible
to parent one object to another. This creates a
hierarchy where the parent object determines the
position of the group in world space. The child
objects inherit this positioning and combine this
with their own local space position. This parent-child
relationship is used during the animation of an
object where keyframes can be set on both the
child and the parent.
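The sketch below, written with maya.cmds and using invented names, shows how a child object's local values combine with a parent's world-space move.

import maya.cmds as cmds

body = cmds.polyCube(name='scooterBody')[0]
wheel = cmds.polySphere(name='frontWheel')[0]

# Make the wheel a child of the body.
cmds.parent(wheel, body)

# Give the child its own local offset from the parent.
cmds.setAttr(wheel + '.translate', 2, 0, 0, type='double3')

# Moving the parent moves the whole group in world space ...
cmds.setAttr(body + '.translate', 10, 0, 5, type='double3')

# ... so the child's world position combines both transforms, while its
# local translate values stay unchanged.
print(cmds.xform(wheel, query=True, translation=True, worldSpace=True))
print(cmds.getAttr(wheel + '.translate'))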
The Complete Picture
The different points of view afforded by Orthographic and Perspective views (top, front, side and Perspective) let you build and evaluate your models.
The Camera
Perspective Camera
Perspective views are generated by cameras that simulate real-world camera attributes.

World Space
When the whole object moves in world space, child objects such as the handle bar and the wheels move with it.

Local Space
The handle bar and the front wheel use an angled axis line to set up the local rotations. The wheel rotates around its center using a local rotation axis.

UV Coordinate Space
One of the object types you will build in Maya is surfaces. While surfaces are positioned in 3D space using X, Y, Z coordinates, they also have their own coordinate system that is specific to the topology of the surface. Instead of using X, Y, Z axes, this system uses U, V and N, where U and V represent the two axes that lie on the surface and N is the “surface normal” axis that points out from the front of the surface.
When you create a curve, it has a U direction that lets you measure points along the curve. When a surface is created, it has a U and a V direction that define the surface parameterization. You can draw and manipulate curves in this 2D surface space. The placement of textures can also take place in UV space.

UV Coordinates
The origin of the UV system lies at one of the surface corners. U runs along one axis and V along the other.

Curve-on-Surface
Curves can be drawn in UV space. Later edits will be in 2D along U and V axes.

Normal
The surface Normal always points out from the front of the surface. You can see the Normal lines pointing out from each intersection on the UV grid.

Moving Curves
When you select and move a curve in UV space, you can only move along the two axes of the surface grid. Curve control points can also be edited in UV space.
Time
In the world of 3D animation, time is the fourth
dimension. An object will appear animated if it either
moves, rotates or changes shape from one point in
time to another. Therefore, learning how time works
is crucial to the animation process.
Both live-action and animation use either film
or video to capture motion. Both media formats use
a series of still images that appear animated when
played back as a sequence.
Film and video images are often referred to as
frames and most animation is measured using frames
as the main unit of time. The relationship between
these frames and real time differs depending on
whether you are working with video, film or other
digital media.
Frames per Second
Frames can be played back at different speeds that
are measured in frames per second (fps). This is
known as the frame rate and it is used to set the
timing of an animation. The frame rate is required to
output animation to film or video and to synchronize
that animation with sound and live-action footage.
In Maya, you can set your frame rate by
selecting Window > Settings/Preferences >
Preferences and selecting the Settings category.
By default, the Maya software
frame rate is 24 fps. If you have
a background in animation,
confirm your time units to
ensure you set keys properly.
Because seconds are the
base unit of time, it is possible
to set keys at 24 fps, then
change your frame rate to 30
fps. This will scale the timing
of your animation to match the
timing as measured in seconds.
Playback
When you preview your animations, you will often
use interactive playback. You can set the Playback
Speed by selecting Window > Settings/Preferences
> Preferences and selecting the Timeline category.
The default Playback Speed is Play Every Frame.
At this speed, Maya will play every frame in your
scene one after another. The actual playback speed
will depend on your workstation’s ability to process
the animated elements in your scene. Setting your
Playback Speed to Real-time asks Maya to maintain
your chosen frame rate as accurately as possible.
This means that Maya may skip frames in order to
maintain the frame rate. Note that if you are synchronizing to sound, your Playback Speed should be
set to Real-time, but if you are previewing dynamic
simulations, it should be set to Play Every Frame.
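Both the frame rate and the playback behavior can also be set from a script. The following is a minimal maya.cmds sketch that mirrors the Preferences settings described above.

import maya.cmds as cmds

# Working time unit: 'film' is 24 fps, 'ntsc' is 30 fps, 'pal' is 25 fps.
cmds.currentUnit(time='film')

# Define the animation range and a smaller playback range on the Time Slider.
cmds.playbackOptions(animationStartTime=1, animationEndTime=240,
                     minTime=1, maxTime=120)

# playbackSpeed=1.0 asks for real-time playback (frames may be skipped);
# playbackSpeed=0 plays every frame, which suits dynamics previews.
cmds.playbackOptions(playbackSpeed=1.0)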
Time Code
Time code is a frame numbering system that assigns
a number to each frame of video that indicates
hours, minutes, seconds and frames. This is what
gets burned onto video tape, giving you an accurate representation of time for synchronization.
You can set up time code display in Maya from the
Preferences window.
Fields
The concept of Fields is important if you output your
animation to video. To make video play back smoothly,
Fields are used in place of frames. Each Field uses
alternating rows of pixels—called scanlines—that are
interlaced during playback. Fields are timed at 60 fps
for NTSC, with each Field containing only half the
number of scanlines as a typical frame.
In Maya, you can render directly to Fields that
have the 60 fps (50 fps PAL) timing or you can
output a 30 fps (25 fps PAL) sequence and use a
compositing system to convert it to Fields. For fast-moving objects, rendering directly to Fields offers
smoother playback since each Field displays half-frame
intervals.
Double Time
At a frame rate of 24 fps, a 6 minute animation would require 8640
frames (24 fps x 360 sec). Animators working with either cel animation
or stop-motion sometimes use double time, where only every second
frame is created, then repeated twice. Double animations don’t play
back as smoothly as the full frame rate, but they save you rendering
time. Students, especially, might consider this option when confronted
with a tight deadline. You can set up double time by setting By Frame
to 2 in Maya Render Settings window, under the Common Settings tab.
3:2 Pulldown
When an animation created for film at 24 fps is transferred to NTSC
video, a 3:2 pulldown can be used in place of re-rendering at 30 fps.
This technique spreads every four frames into five frames by remixing
the fields of the first, second and third frames to match the film’s frame
rate. A 3:2 pullup takes NTSC back to film. Both these techniques can be
accomplished in a compositing package such as Autodesk® Combustion®
or Autodesk® Toxik® software. PAL video does not generally require a pulldown because PAL’s frame rate (25 fps) matches film more closely.
How Objects Are Animated Using Keyframes
Keyframe animation is created by capturing values for attributes such
as translation or rotation at key points in time. An animation curve is
then drawn between the keys, defining or interpolating where the object attribute will be at all the in-between frames.
Animation curves can be viewed as a graph where time is mapped
to one axis and the animated attribute is mapped to the other. In Maya,
virtually every attribute can be animated in this manner. The way in
which you set keys and control the in-between motion determines the
quality of an animation. As scenes become more complex, you will
learn to create control attributes that can drive the motion of different
parts of your scene to help simplify the process of setting keys.
One Frame at a Time
When you render an animation, images are created for every frame. The accurate playback of these frames will be based on the Time Units you choose in the Preferences.

Setting Keys
When you know that your object or character needs to be at a certain place at a certain time, you set a key. With characters, you can create poses out of a number of keys set for different parts of the character.

In-between
The position of objects in-between the two keyframes is determined by the shape of the animation curve.

Mapping Against Time
Two keyframes are mapped against time, then an animation curve interpolates the motion between the keys. The shape of the curve determines the quality of the motion. In the graph, the animated attribute (position in X) is plotted against time in frames, with the in-between motion drawn between the keys.

Pivot Points
You animate objects in Maya based on a single point called the pivot point. The pivot for the whole scooter would lie on the ground, while the pivot for a wheel would be at its center. The position of the pivot sets the center of the axes for rotating or scaling objects in your scene.
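A minimal scripted version of this workflow, using maya.cmds with an invented object name, sets two keys and lets Maya interpolate the in-between frames.

import maya.cmds as cmds

ball = cmds.polySphere(name='ball')[0]

# Key translateX at frame 1 and frame 24 - one second at the film rate.
cmds.setKeyframe(ball, attribute='translateX', time=1, value=0)
cmds.setKeyframe(ball, attribute='translateX', time=24, value=10)

# The shape of the curve between keys controls the quality of the motion;
# flat tangents give a gentle ease in and ease out.
cmds.keyTangent(ball, attribute='translateX',
                inTangentType='flat', outTangentType='flat')

# Query the interpolated, in-between value at frame 12.
print(cmds.getAttr(ball + '.translateX', time=12))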
Bitmap Space
A bitmap is a representation of an image, consisting of rows and columns of pixels that store color information. Each pixel
(picture element) contains a color value for a number of channels –
red, green and blue. When you view these channels together at a high
enough resolution, all of the different colors form a complete image.
These images can then be output to video, film or printed on paper.
Bitmap images play a number of roles in an animation system
such as Maya. When Maya renders a scene, the geometry, lights and
materials are calculated from the camera’s point of view and a bitmap
image or a series of images results. Further manipulation of the
image in two dimensions is then possible using compositing or paint
packages. Bitmap images are also used as texture maps to help add
color and detail to the surfaces in scenes.
Pixels
Up close, you can clearly see
the grid of pixels that make up
the bitmap image.
Bitmap Sources
Bitmap images are common in
computer graphics and can be created
and manipulated in paint, compositing
and 3D rendering packages.
Full Resolution
As pixels are presented at a higher
resolution, the grid is no longer
visible and you get a clearer view of
the final image.
Bitmap Channels
Each pixel is made up of at least three
color values – red, green and blue.
These channels combine to create
the visible color.
Image and Display Resolution
Maya uses the term Image resolution to refer to the total pixel size of the bitmap image. Display resolution
refers to how many pixels you will find in one inch on the screen. This resolution is measured in pixels per inch
(ppi) or dots per inch (dpi). Monitors have a display resolution of about 72 dpi, although your graphics card
may offer several settings which will alter this value.
As an animator, you will focus on producing images with a particular Image resolution such as 640 x 480
pixels for video or one of a variety of resolutions for film. The default Display resolution for these images is 72
dpi. If you are taking an image to print, you will need to consider a Display resolution of around 300 dpi. This
value may be higher or lower depending on your printing needs. Below, you can see how different resolutions
look when printed. You can see how the 300 dpi image provides a higher quality image on the printed page.
2” x 2” @ 72 dpi (144 x 144 pixels)
2” x 2” @ 150 dpi (300 x 300 pixels)
2” x 2” @ 300 dpi (600 x 600 pixels)
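The arithmetic behind these figures is simply pixels = inches x dpi. A small stand-alone Python sketch of the calculation:

def pixels_for_print(width_in, height_in, dpi):
    # Return the pixel dimensions needed for a given print size and dpi.
    return int(width_in * dpi), int(height_in * dpi)

print(pixels_for_print(2, 2, 72))     # (144, 144) - screen resolution
print(pixels_for_print(2, 2, 300))    # (600, 600) - print resolution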
Aliasing and Anti-aliasing
The bitmap image grid can create a staircase-like or jagged effect within
an image where lines run diagonally against the pixel grid. To create
realistic bitmap images, you must soften these jagged edges using an
effect called anti-aliasing.
Anti-aliasing modifies the color of pixels at the edges between
objects to blur the line between the object and its background. This
results in a softer look. Anti-aliasing is most important when you are
working with lower display resolutions (72 dpi). Higher display
resolutions (300 dpi) used for printing hide jagged edges better.
Anti-aliasing is important when you render your scenes. You can
set an anti-aliasing value in the Render Settings window. An accurate
calculation of anti-aliasing increases rendering time, but yields better
results. Later, when you learn more about rendering, the issue of antialiasing will be explored in more detail, including the issue of antialiasing an animated sequence.
Image Formats
Over the years, many different
image formats have been
created. You can choose one of
these in Maya Render Settings
window. The Maya Software
default format is called IFF and
it handles RGB, mask, and depth
channels. Maya also has several
movie formats that contain
sequences of bitmap frames.
In the Rendering chapter
of this book, image formats are
discussed in more detail.
Non-square Pixels
While most bitmap images use square pixels, digital
video uses pixels that are slightly taller than they
are wide. Therefore, an image that uses non-square
pixels will appear squashed on a computer monitor
that uses square pixels.
On a video monitor, the image would appear
with its pixels stretched to their proper aspect ratio.
If you are rendering to digital video, you must take
the pixel aspect ratio into account.
No Anti-aliasing
Without anti-aliasing, a bitmap
image will display jagged edges
where one object meets another
and on the interior of a surface
where a texture map can appear
jagged without proper anti-aliasing settings.
With Anti-aliasing
An anti-aliased image removes
the staircase-like effect by
creating in-between pixels
that soften the edge. This
gives results that more closely
resemble a real photograph
that has been scanned into the
computer as a bitmap.
Other Channels
In a typical bitmap, the first three channels
contain color information. You can also create
other channels that offer useful information about
the image. Maya is able to render images with
mask and depth channels for use in compositing
packages. These channels can be used when you
want to layer several images together seamlessly,
including live-action plates created outside Maya.
Mask Channel
A mask (or alpha) channel defines
where an image needs to be solid or
transparent. This channel can be used
to layer images for compositing or to
texture map attributes such as transparency or bump. White is opaque and
black is empty.
Depth Channel
A depth channel can provide actual
3D information about an image. Depth
channels are very useful in a compositing package where you can combine
layers or add effects such as fog or
depth of field. White is close to
camera and black is far.
Bitmap File Textures
Bitmaps can be used to texture objects in Maya. They can be used to
add color, bump, transparency and other effects on a surface. The RGB
and the alpha (mask) channels can all be used to texture the object.
Bitmaps used as textures can add detail to geometry without
requiring any extra modeling. These bitmaps are ideally saved as Maya
IFF files and work best with image resolutions that are powers of two, such as 256 x 256, 512 x 512, or 1024 x 1024. This is because these sizes fit best with the Maya bitmapping algorithm used to filter textures.
When objects are rendered, textures are affected by the lighting and
shading on the object, then output as another composed bitmap image.
In an animation, each frame would be rendered as a different bitmap
that creates motion when played back at the right frame rate. These
images can also be output to video or to film.
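As a hedged maya.cmds sketch of wiring a bitmap into a material, the following creates a file texture and connects it to a surface material; the file path is a placeholder and the node names are invented.

import maya.cmds as cmds

sphere = cmds.polySphere(name='texturedSphere')[0]

# Create a material and a file texture node, then connect the texture's
# output color into the material's color through the Dependency Graph.
material = cmds.shadingNode('lambert', asShader=True, name='brickMtl')
file_node = cmds.shadingNode('file', asTexture=True, name='brickFile')
cmds.setAttr(file_node + '.fileTextureName',
             '/path/to/brick_512x512.iff', type='string')  # placeholder path
cmds.connectAttr(file_node + '.outColor', material + '.color')

# Assign the material to the sphere.
cmds.select(sphere)
cmds.hyperShade(assign=material)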
Exploring Maya
Before exploring modeling and animation concepts, it is a good idea to become
familiar with the Autodesk Maya user interface. The user interface is where 3D
artists display and organize scenes, save and open files, and transform and
animate objects. While developing these skills, 3D artists learn just how they can
make Maya do what they want it to.
Maya has a very clean user interface where many of the elements
share generic editor windows. At first, this may make it difficult to distinguish
different parts of a scene, but with experience, 3D artists learn the power of this
paradigm. The generic way in which Maya presents information makes it very
easy to transfer skills from one area of Maya to another. This lets 3D artists focus
on learning the underlying concepts of Maya software instead of always relearning how the user interface works.
The Workspace
Creating an animation in Maya involves the manipulation of many
graphic elements such as curves, surfaces, colors and textures.
Information about these elements is stored in Maya as numeric values
that can be viewed in a number of different ways. In the workspace
of Maya, you can choose how you want to view a scene and access
different tools to alter its 3D information. Maya offers several ways of
accessing and altering your scene, giving you the flexibility to build
workflows that best suit the way you work.
User Interface Elements
When you first launch Maya, the workspace is presented to you
with a number of user interface (UI) elements. Each is designed to help
you work with your models, access tools and edit object attributes.
Initially, you should learn the locations of the UI elements so you can
easily find them while you work.
Many UI panels can be set up as floating
windows in case you just need them
temporarily. Menus can also be broken off
from menu bars in case you need to focus
on the menu’s contents.
Menu Sets
The first six menus are
always available in
Maya; the remaining
menus change depending on which menu set
you choose. This helps
focus your work on
related tools.
Menus
Menus contain tools and
actions for creating and
editing objects and setting
up scenes. There is a main
menu at the top of the
Maya window and individual menus for the panels
and option windows.
Status Bar
The Status Bar contains
shortcuts for a number of
menu items, as well as
tools for setting up object
selection and snapping.
A Quick Selection field is
also available that can be
set up for numeric input.
Shelf
The Shelf is available for you
to set up customized tool sets
that can be quickly accessed
with a single click. You can
set up shelves to support
different workflows. Press
Shift+Ctrl+Alt when selecting
a menu item to add it to the
Shelf.
View Compass
The View Compass is
a navigation tool that
allows you to quickly
switch between the
perspective and
orthographic views.
Tool Box
The Tool Box is where
you find some of Maya’s
most common tools.
From top to bottom,
they are: Select, Lasso,
Paint Selection, Move,
Rotate, Scale, Universal
Manipulator, Soft
Modification, Show
Manipulators and the last
tool you used.
Layers
Maya has two types of
layers. Display Layers are
used to manage a scene,
while Render Layers are
used to set up render
passes for compositing.
In each case, there is
a default layer where
objects are initially
placed upon creation.
Panels
The workspace can be
divided into
multiple panels that offer different ways of creating and
evaluating your scenes.
Quick Layout Buttons
The Quick Layout Buttons
offer you quick access to
Maya’s predefined panel
layouts.
Playback Controls
The Playback controls
let you move around
time and preview your
animations as defined
by the Time Slider range.
Time Slider
The Time Slider shows
you the time range as
defined by the range
slider, the current time
and the keys on selected
objects or characters.
You can also use it
to “scrub” through
an animation.
Channel Box
The Channel Box lets
you edit and key values
for selected objects.
Characters
The Character Menu
lets you define one
or more characters,
then prepare them for
being animated.
Range Slider
This bar lets you set up the start
and end time of the scene’s animation and
a playback range, if you want to focus on a
smaller portion of the Time Line.
Help Line
The Help Line gives a short description of tools
and menu items as you scroll over them in the
UI. This bar also prompts you with the steps
required to complete a certain tool workflow.
Command Line
This bar has an area to the left for inputting
simple MEL commands and an area to the right for
feedback. You will use these areas if you choose to
become familiar with the MEL scripting language.
Simplifying the User Interface
All of the UI tools that are available when Maya is
first launched can be turned off or on as needed.
In fact, you can turn them all off and focus on one
single view panel if this is how you like to work. In
this case, you would use interface techniques such
as the hotbox, hotkeys or the right mouse button to
access tools and options.
The Hotbox
The hotbox gives you access to all of the menu items and tools right
at your cursor position. When you press and hold down the space bar
on your keyboard, after a short delay, the hotbox appears. The hotbox
is fully customizable and lets you focus on the tools you feel are most
important to your workflow. The Hotbox controls let you turn off the
main menus and the panel menus in the workspace. When the menus
and panels are off, you can focus entirely on using the hotbox.
Common Menus
These are the menus that
are always available on
the main menu bar.
Marking Menus
In the four quadrants
and in the center of the
hotbox, you can access
marking menus that are
designed to help you set
up your workspace.
Panel Menus
These are the menus
that are associated
with the active panel.
Recent Commands
Recent Commands
display a list of the last
few tools and actions
used. This is useful when
commands are repeated
several times.
This is a single shaded view panel with the hotbox being used to
access tools. Most likely, you will configure the workspace somewhere between this minimal setup and the default setup.
Hotkeys
Hotkeys will give you quick access to many of
the tools found in Maya. To set up hotkeys, select
Window > Settings/Preferences > Hotkey Editor.
The Hotkey Editor lets you set up either a single key
or a key and a modifier key such as Ctrl, Shift or Alt,
to access any tool in
Maya that is listed
in the Editor.
It is also possible
to build custom
commands using a
MEL (Maya Embedded
Language) script.
This feature allows
you to set up the UI to
completely reflect your
own workflow.

The Hotkey Editor gives you access to all of the tools in Maya.
Mouse Buttons
Each of the three buttons on your mouse plays a
slightly different role when manipulating objects in the
workspace. Listed here are some of the generic uses
of the mouse buttons. When used with modifiers such
as the Alt key, they also aid in viewing your scene.
Menu Sets
These are the menus associated
with the various menu sets.
Hotbox Controls
With the Hotbox Controls you can turn
on all menus or pick and choose the
menus you want in the hotbox. You can
also turn off the workspace menus.
Marking Menus
Marking menus are accessed by pressing a hotkey and clicking with
your left mouse button. The menu appears in a radial form so that all
your options are simply a stroke away. Once you learn the location of
the menu options, you can quickly stroke in the direction of an option
without having to see the menu itself. Because the menu is radial, it is
very easy to remember the location of each menu option. It will only
take a short time for you to master this way of accessing tools.
You can set up your own marking menus by building them in
Window > Settings/Preferences > Marking Menu Editor, then assigning
the new marking menu to a hotkey.
First Use
When you first use a marking menu, press the hotkey, click and hold with the mouse while you view the options, then drag to the desired location in the menu.

Expert Use
As you become an expert, you can quickly press the hotkey and click+drag to the desired location in the menu. The menu doesn't appear and your selection quickly flashes by.
Left Mouse Button
This button is used to select tools and objects and
access visible manipulators.
Middle Mouse Button
This button is used to edit objects without the
manipulator. This button is also used to click+drag.
If your mouse has a wheel, it can be used to scroll in windows.
Right Mouse Button
This button is used to invoke context-sensitive
menus and marking menus such as the one
shown here.
Viewing 3D Scenes
When building a scene in Maya, you work in
three-dimensional space. Orthographic and
Perspective view cameras offer several ways of
looking at the objects in your scene as you work.
There are also different display options that change
the way objects in your scene are shaded.
Default Views
In Maya, the default views are set as Perspective,
top, front and side.
The Perspective view is a representation of your
object in 3D space, allowing you to move along the
X, Y and Z axes. The top, front and side views are
referred to as Orthographic views and allow you to
move in two dimensions at a time.
In addition to the default views,
you can create your own
cameras. To add a new 2D view,
select Panels > Orthographic >
New > and select Front, Side or
Top. To add a new 3D view, select
Panels > Perspective > New.
Orthographic views
To create new orthographic views, select Panels > Orthographic > New. Make a back view panel by creating a new front view and changing Translate Z to -100 and Rotate Y to 180 in the Channel Box.

Camera views
To create a new camera, select Panels > Perspective > New. You can also select Create > Camera from the menu, which offers some unique camera options.
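The same back view could also be created from the Command Line; a minimal sketch in which the camera name assigned by Maya is captured and renamed for clarity:

    // Make a new camera, then place it behind the scene looking forward.
    string $cam[] = `camera`;       // returns the transform and shape names
    rename $cam[0] "backCam";
    setAttr "backCam.translateZ" -100;
    setAttr "backCam.rotateY" 180;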
Default Top View
Default Perspective View
New Back View
Default Front View
Default Side View
New Perspective View
View Tools
By pressing the Alt key along
with different mouse button
combinations, you can navigate
around the objects in your scene.
While the Tumble Tool is only
used to rotate a 3D Perspective
view, you can track and dolly
in many other views
including the
Orthographic,
Hypergraph,
Hypershade,
Visor and
Render View.
Extra Views
Tumble
Press the Alt key plus the left mouse
button to rotate the camera around a 3D
Perspective view.
Track
Press the Alt key plus the middle mouse
button to pan from left to right and up
and down.
Dolly
Press the Alt key plus the right mouse
button to dolly in and out of your
scene. Using the mouse wheel also has
the same effect. Note that a dolly is
different from a zoom. Dollying moves
your camera closer to or farther from
your subject. Zoom is accomplished by
changing the focal length of a lens.
Shading
The Shading menu in Maya offers several options for displaying
objects in a scene. Shading can be different for each view panel,
allowing geometry to be shown at different levels of complexity.
The more detailed a scene becomes, the greater the need to
simplify the objects in it. Although Maya is very good at processing
complex levels of geometry, it is a good idea to view your objects in
a less complex shading mode until you are ready to render or make
adjustments to those objects.
There are several shading display options to choose from. The
default shading in Maya is Wireframe. Other display options include:
Bounding Box, Points, Flat Shade, Smooth Shade, Hardware Lighting,
Wireframe on Shaded and X-Ray.
Bounding Box
Bounding Box displays objects as boxes. This is useful when working with complex scenes.

Points
Points shading displays objects as a group of points that represent the shape of the object.

Wireframe
Wireframe shading is the default shading quality in Maya.
Flat Shade
Flat Shade displays all or only the selected objects with a lower resolution, faceted display.

Smooth Shade
Smooth Shade displays all or only the selected objects as smooth surfaces with surface color and shading properties.

Hardware Texturing
Hardware Texturing displays smooth-shaded surfaces with textures.
Hardware Lighting
Hardware Lighting displays smooth-shaded surfaces with textures and lighting.

Hardware Fog
Hardware Fog displays surfaces with a user-defined fog. This is useful to simulate a scene fog.
Show Menu
The Show Menu allows you to show and hide different elements of
a scene. You can show all, none or specific items such as NURBS
curves, lights or cameras.
The Show Menu is accessible from all views and can hide items
in one view while displaying them in another.
The Show Menu allows you to edit a NURBS curve in the Side view with NURBS surfaces
turned off while viewing the Perspective view panel with NURBS surfaces visible.
High Quality Rendering
High Quality Rendering displays surfaces with more accurate texturing, bump mapping, transparency and lighting.
Isolate Select
The Isolate Select option allows you to hide surfaces at both the object
and component level, on a per-panel basis. To hide the Control Vertices
(CVs) of an object, choose the CVs you want to modify and select
Show > Isolate Select > View Selected. This will hide all unselected CVs.
Another advantage of using Isolate Select is that it affects hardware
rendering only, allowing hidden objects to be viewed during
software rendering.
Shade Options
In addition to the default shading mode, there are
two shading options for viewing your models:
Wireframe on Shaded and X-Ray modes. Both of
these options can be used at the same time.
Viewing your model in Wireframe on Shaded
mode will allow you to easily view surface isoparms
for all objects in your view panel without viewing
through the object.
Viewing your model in X-Ray mode will allow
you to view through your model using a semi-transparent shading. This is useful when you want to
see a surface that is behind other surfaces or inside
objects such as skeletons within a character.
Using Isolate Select allows you to hide CVs
on a NURBS surface.
X-Ray
X-Ray shading displays objects with a semi-transparent surface.

Wireframe on Shaded
Wireframe on Shaded displays a wireframe on a shaded surface.
Objects and Components
You can transform objects in Maya by selecting objects and their
components. Selection masks allow you the flexibility to select
only the items you want in a scene. These masks are grouped into three
categories: Hierarchy, Object type, and Component type selections.
A simple hierarchy where
middleFingerGroup is the parent of
middleFinger and middleFingerNail.
Object Types
Scene objects are items such as cameras, curves, surfaces, dynamics,
joints, handles and deformers. Objects created in Maya are made up
of two parts: a Transform node and a Shape node. The Transform node
contains basic information about an object’s position, orientation and
scale in space. The Shape node defines what the object looks like.
Hierarchies
A hierarchy consists of at
least two nodes arranged in
a parent-child relationship.
Working with hierarchies can
help keep your modeling
and animations organized.
You can use the Outliner or
Hypergraph windows to see
your hierarchies.
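The hierarchy shown in the figure could be built with the group command; the node names follow the figure and the two objects are assumed to exist already.

    // Parent both objects under a new middleFingerGroup node.
    group -name "middleFingerGroup" middleFinger middleFingerNail;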
Rendering
Scene objects such as lights, cameras and texture placement nodes are rendering object types.

Skeleton Joints
Skeleton joints are used to help control characters.

Curves
Turning off the curve selection means you cannot select the curves in the scene.

Handles
IK handles are applied to joint chains for animation control.

Surfaces
Selecting by surfaces allows you to select the surface geometry of an object.

Deformations
Deformers such as cluster flexors and lattices modify the shape of an object.

Dynamics
Dynamic objects such as particles can be separately selected by toggling the Dynamics button on.

Component Types
In order to change the shape of an
object in Maya, you need to modify
component type information.
There are a variety of
component types such as points,
isoparms, faces, hulls, pivot points
and handles. These components
can be used to interactively modify
and reshape the appearance of
an object.
Points
Points such as CVs and polygonal vertices are used to modify the shape of an object.

Faces
Faces are patches created by intersecting lines.

Hulls
Hulls are guides that connect CVs. They can be used to select and transform rows of CVs at once.

Lines
Lines such as isoparms and trim edges define the shape of an object.

Pivot Points
Pivot points determine the location around which transformations occur.

Param Points
Param points are points that lie directly on a curve or surface.
Hierarchy
Object
Component
Selection Masks
Selection masks allow you to select
the specific items you want to work
on. There are three main groups of
selection masks: Hierarchy, Object
and Component.
Hierarchy mode allows you to
select nodes at different levels. In
this mode, you can select the Root,
Leaf and Template nodes.
Object mode allows you to select
scene elements at the Transform
node level. These include objects
such as surfaces, curves and joints.
Component type selections are
selections made to objects at the
Shape node level, such as isoparms
and CVs.
Hierarchy
Selecting by hierarchy allows you to select objects at either the Root,
Leaf or Template node level. Unlike Object and Component selection
masks, you are not able to turn on more than one mask at a time.
Object
Object selection masks allow you to make selections based on the
object types you specify. Left-clicking on the arrow to the left of the
pick masks displays a menu allowing you to turn all objects on or off.
Component
Component selection masks offer a variety of pick masks to choose
from. Right-clicking on a mask displays more selection options.
Selection Priority
Objects and Components are
selected in order of priority based
on an assumed production workflow. For example, if you want to
select both joints and surfaces, Maya
anticipates that you want to select
joints first. To select more than one
object with different priorities, select
the first object and Shift+click on the
object of different priority.
Right Mouse Button Selections
Clicking the right mouse button over an object will
bring up a marking menu that allows you to choose
from both Object and Component selection types,
while remaining in Component mode. The menu
choices are specific to
the object selected or
the object beneath the
marking menu.
Preferences
You can change the order in which Objects and
Components are selected by choosing Window >
Settings/Preferences > Preferences and
choosing the Selection category.
Root node selection
viewed in Hypergraph.
Hierarchies
When working with a group of
objects that are arranged in a hierarchy, you may want to specifically
work at the Root node or Leaf
node level.
If you choose to work at the Root
node level of a group (also known as
the top node in a hierarchy) you can
toggle on the Select by Hierarchy:
Root mask. In this selection mode,
you can click on
any object in the hierarchy and
only the top node of the object
picked will be selected.
If you want to work at the Leaf node level, toggle on the Select by Hierarchy: Leaf mask. In this mode, only the leaf nodes or children of a hierarchy will be selected.

Toggling on the Select by Hierarchy: Root mask allows you to select any Leaf node in a hierarchy to automatically select the Root node.
Quick Select
Using Quick Select, you can type in the name of an
object in the text field and it will become selected
in your scene.
When there are several objects in a scene with a common name, you can type in the name preceded and/or followed by an asterisk (*) and all objects containing that name will be selected.
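The select command accepts the same wildcard matching, so a Quick Select style selection can also be scripted; the name pattern below is only an example.

    // Select every object whose name contains "wheel".
    select -replace "*wheel*";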
Dependency Graph
Everything in Maya is represented by a node with attributes that can be
connected to other node attributes. This node-based architecture allows
connections to be made between virtually everything in Maya. Node attributes determine such things as the shape, position, construction history
and shading of an object. With this architecture, you can create inter-object
dependencies, shading group dependencies, and make your own node
connections.
Nodes with Attributes that are Connected
The Dependency Graph is a collection of nodes which are connected.
These connections allow information to move from one node to another
and can be viewed in a diagrammatic fashion through the Hypergraph and
Hypershade windows.
Right Mouse Button
Clicking the right mouse button over a
selected object will give you access to an
object’s input and output connections.
Animation Curve
When an animation is produced in Maya, node dependencies are created between the animation curves and the object being animated.

Shading Group Dependencies
When a material is created in Maya, a network of node dependencies is built. This network is referred to as a Shading Network.
The Hypershade window allows you to make and break connections between shading group nodes. The Hypershade displays thumbnail images representing each node. The diagrams below both show the same shading group dependency in the Hypershade and Hypergraph windows.
Node Dependencies
In the diagram below you can see the nodes that are
dependent on each other to make up a chess piece.
Each node plays a part in creating the final rendered
object. Here you see that the Shader node is dependent on the Shape node to render the material, the
Shape node is dependent on the Revolve node for
the chess piece surface and the Revolve node is
dependent on the Curve node to make the revolve.
Hypershade
Using the Hypershade, you can make materials and textures and
view the node dependencies used to create them.
Hypergraph
The Hypergraph window can also be used to view and create
shading group dependencies. However, it does not have swatches as
the Hypershade does.
Viewing Dependencies
Making Connections
Dependencies are relationships created between nodes that are
connected. There are many ways to view and edit dependencies in
Maya including the Hypergraph, Attribute Editor and Channel Box.
By selecting a node and clicking the Up and Downstream
Connections button in the Hypergraph window, you can view node
dependencies on a selected node. This window visually displays the
connection between nodes, with arrows showing the direction of their
dependency to one another.
The Attribute Editor is made up of several tabs allowing you to
view related nodes of a dependency group. In the Attribute Editor,
you can edit the attributes that affect these nodes.
In the Channel Box, the selected node is shown with a listing of any
keyable attributes that belong to it. Depending on the node selected,
it will also show input, output or shape nodes. If you select more than
one node with the same keyable attributes, you can modify them at the
same time using the Channel Box.
Connections made in Maya represent the flow of
information from one node to another. You can
make your own connections between nodes as well
as break connections using the Connection Editor.
The Connection Editor offers a list of node
attributes that can be connected to other node
attributes. For example, you can map the scale
of one object to influence the rotation of another.
This creates a connection between the two nodes
where every time you scale one, the other
automatically rotates.
The Connection Editor can extend the
possibilities of your production by automating
tasks done through the connection of nodes.
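A minimal sketch of the scale-drives-rotation example described above, assuming two existing objects named ball and box:

    // Whenever ball.scaleX changes, box.rotateY now follows it.
    connectAttr -force "ball.scaleX" "box.rotateY";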
Attribute Editor
In the Attribute Editor, you can adjust the attributes on the input and output connections of a selected node.

Channel Box
In the Channel Box, you can edit any keyable attributes on the selected node.
Hypergraph
In the Hypergraph window,
you can see the input and
output connections of a
selected node.
Connection Editor
The Connection Editor allows you to make connections between node attributes.
Hypergraph
In the Hypergraph, you can view the result of connections
made in the Connection Editor.
Construction History
When an object is built in Maya, Input nodes can be viewed in the
Dependency Graph containing information on how the object was
created. These Input nodes allow you to edit an object based on the
geometry used to build it. For example, if you were to create a curve
and use the Revolve Tool to make a surface from it, the curve used to
create the surface would hold information as to how the surface was
created. Using construction history, you can go back to the original
curve and alter the shape of the object.
Construction History On
Construction History Deleted
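A small MEL sketch of this kind of history-based edit; the curve points and CV index are illustrative only.

    // Build a profile curve and revolve it with construction history on.
    string $profile = `curve -degree 3 -p 2 0 0 -p 2 2 0 -p 1 4 0 -p 0.5 6 0`;
    revolve -constructionHistory 1 -axis 0 1 0 $profile;
    // Because history was kept, moving a CV on the original curve
    // reshapes the revolved surface.
    select -replace ($profile + ".cv[2]");
    move -relative 1 0 0;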
Transformations
Transformations are changes made to an object's position,
orientation and scale in space. The Transform node holds all of this
information and the Transformation Tools such as the Move, Scale and
Rotate Tools are used to transform an object along the X, Y and Z axes.
Manipulators
Manipulators are used to move, scale and rotate objects in orthographic and 3D space. Each of the manipulators uses red, green and
blue colored handles. These match the colors of the X, Y, Z locator at
the bottom left corner of the view, making it easier to distinguish the
direction of the transformation. These handles are designed to constrain
the transformation to one, two or three axes at a time, allowing for
complete control.
Move Tool
The Move Tool has a handle for each X, Y and Z axis and a center handle to move relative to the view.

Rotate Tool
The Rotate Tool has a ring for the X, Y and Z axes. One ring moves relative to the view, and a virtual sphere rotates in all directions.

Scale Tool
With the Scale Tool, you can scale non-proportionally in X, Y or Z. You can also scale proportionally by selecting the center handle.
Setting Pivot Points for Transformations
Objects are transformed around their pivot point location. This is important to be aware of because the position of your pivot point affects the
outcome of your transformation. To change the location of your pivot
point, select a Transform manipulator and press the Insert key on a PC or
the Home key on a Mac. Move your pivot point to the desired location
and press the Insert or Home key again to set the pivot point.
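Pivots can also be placed numerically with the xform command; the object name and hinge position here are assumptions.

    // Put the rotate and scale pivots of door at its hinge in world space.
    xform -worldSpace -rotatePivot 0 0 5 -scalePivot 0 0 5 door;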
Move/Rotate/Scale Tool
This tool incorporates the Move, Rotate, and
Scale manipulators into one tool. Select Modify >
Transformation Tools > Move/Rotate/Scale
tool to use this tool.
QWERTY Hotkeys
To work quickly and
efficiently in Maya, the
QWERTY hotkeys offer
a fast way to access the
transformation tools.
To select the tool you
want, simply press its
corresponding key on
the keyboard: Select
(q), Move (w), Rotate
(e), Scale (r), Show
Manipulators (t). In
addition to these tools,
Maya offers access to
the last tool you used by
pressing the y key.
Use the QWERTY shortcut keys on your
keyboard to select and transform the
objects in your scene.
Reset Transformations
Once you have manipulated an
object, you may not be satisfied
with its new transformation. To
reset your object to its original
position, select Modify >
Reset Transformations.
Pivot point is in the wrong location.
Freeze Transformations
Select Modify > Freeze
Transformations to keep your
object’s current position, rotation
and scale as its default position. This
means that your object will now
have values of 0 for its Translate and
Rotate attributes and a value of 1 for
its Scale attributes.
Object is rotating around a properly positioned pivot point.
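Freeze Transformations corresponds to the makeIdentity command; a minimal example on an assumed object named ball:

    // Bake the current translate, rotate and scale of ball into the shape,
    // leaving translate/rotate at 0 and scale at 1.
    makeIdentity -apply true -translate 1 -rotate 1 -scale 1 ball;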
Scale Tool
The Scale Tool allows you to change the size of an object both proportionally and non-proportionally. The default coordinate system for scaling is local. Double-clicking on the Scale Tool icon will open the Scale Tool Settings window, where you can modify the tool's default behavior.

Rotate Options
Use the Rotate Tool to rotate an object around any of the three axes. The Rotate Tool Settings window offers three rotate modes: Local, Global and Gimbal.
Local
Local is the default setting. It allows you to rotate an object in Object space. Note that in this mode, the axes rotate with the object.

Global
Selecting Global mode means the object will rotate within world space. In this mode, the manipulator rings never change direction.

Gimbal
In Gimbal mode you can rotate your object in only one axis at a time.

Move Tool
The Move Tool enables you to move objects through 3D space using one of four coordinate systems: Object, Local, World and Normal. The default system is World, but you can change this in the Move Tool Settings window by double-clicking the Move Tool icon.
Each object has its own coordinate space with the origin located at the object's pivot point. When the object is rotated, the object's axes are also rotated.
Local space uses the pivot point of the parent or root node in a hierarchy as the location for its axes. Thus, all objects are moved in relation to their parent.
World space is the coordinate system for the scene. When objects are moved, they are moved relative to the origin of the scene.
Normal space exists at the component level and all coordinates are relative to the surface you are working on. The axes are U, V and N, where U and V represent the axes that lie on the surface and N the axis that points out from the surface, known as the surface normal.

2D Transformations
When transforming an object using the Move Tool in the top, front and side views, you are constrained to move only in two dimensions. When using the Rotate and Scale Tools in an Orthographic view, you can transform an object in both two and three dimensions.

Universal Manipulator
The Universal Manipulator is similar to the Move/Rotate/Scale Tool, but it lets you transform objects with greater precision. When you click on its manipulator handles, a numerical input field appears for you to enter a precise value by which you can transform an object.
Animation
When 3D artists animate, they paint with motion instead of color. As an object
moves, rotates or changes shape over time, it is being animated. This motion
can be at a constant speed or it can accelerate or decelerate. At times, this
motion will attempt to mimic real-world events such as an object falling off
a table, while at other times, it will take the form of an actor telling an
audience a story.
Models that are animated must be set up with mechanical properties that
define how they work. To have a door open and close or a drawer slide in
and out, 3D artists must understand the mechanics of their models so they
can animate them.
There are a number of tools for creating motion in Maya software. In some
cases, 3D artists will animate all the parts of an object separately. In other cases,
they use higher level controls to help streamline their workflow. Situations can
even be set up where the animation of one object controls that of another.
Animation Techniques
When you animate, you bring to life otherwise static and motionless objects. You take aspects of the object such as its position,
size, shape and color and change these over time. If these changes are
set up properly, you create motion that instills character and life in
the object.
In Maya, there are a number of ways to animate an object.
Using a bouncing ball as a common example, it is possible to explore
the different animation techniques available in Maya. In a real project,
you will most often combine several of these techniques to achieve the
best results.
Setting Keys
Setting keys or keyframing, is the most fundamental technique for
animating in 3D on a computer. This technique involves recording
attribute values as keys for one or more objects at particular points in
time. As you set multiple keys, you can play back the scene to see your
object animated.
Setting keys gives you a great deal of control over timing. When
you animate using keys, you generate animation curves that plot the
key values against time. These curves are great tools for analyzing and
editing the motion of an object. Other animation techniques are usually
combined with some keyframing. Most animation you do in Maya will
involve some form of setting keys.
Hierarchical Animation
You set many of your keys on the Transform
nodes of your object. By grouping an object to
itself, you can set up different nodes with their
own pivot points and orientation, then key them
on their own. This added complexity isn’t really
required for a bouncing ball, but can be very
useful for more complex objects where you
want different levels of animation.
Keying Attributes
By setting keys on attributes at
different times, you define the
motion of an object.
For example, Translate X is
keyed at the beginning and end
of the bounce.
Translate Y is keyed with an up
and down motion that is fast
near the ground and slow near
the peak of the bounce.
Secondary Motion
Rotate Z defines the rolling of the ball; scaling of the ball is used to indicate its impact with the ground.
Path Animation
Path animation involves attaching the object to a curve where points
on the path are used to determine where the object will be at
particular points in time. It is easy to understand the way an object
moves around in 3D space through a path, since its curve clearly
depicts where the object is going.
Animation Curves
By setting keys, you map the attribute's value to time. The keyframes are then connected by an animation curve that helps define the attribute in-between the keys. The
Maya Graph Editor shows you the speed of
the motion and lets you reshape the curve.
Method 1
A curve is used to represent the path of the bouncing ball. This method lets you describe the path of the bounce by shaping the curve, but timing the bounce requires the setting of several motion path keys to lock down the motion.

Method 2
Here, a curve is used to replace the X and Z translation of the ball while the Y translation, rotation and scaling are keyed normally. This method is ideal if you want to animate the ball bouncing along a curved path, which might suit a cartoon-style bounce.
Path Pivot
When building a hierarchy for motion path animations, it is always helpful to reserve a node for the motion path. The bouncing of the ball is keyed on a lower node in the hierarchy.
Set Driven Key
Set Driven Key allows you to control or "drive"
the value of one attribute with another attribute. The
relationship between the two attributes is defined by
an animation curve. The driving attribute can be used
to drive multiple attributes. For example, the rotation
of an elbow joint could drive a bulging bicep muscle
and the wrinkling of a sleeve.
Custom attributes can be added to a control node,
then connected to other attributes in the scene using
Set Driven Key. This creates centralized controls.
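The elbow-and-bicep example could be keyed with the setDrivenKeyframe command; the node names, the custom bulge attribute and the values below are all illustrative.

    // At 0 degrees of elbow rotation the bulge is off ...
    setAttr "elbow.rotateZ" 0;   setAttr "bicep.bulge" 0;
    setDrivenKeyframe -currentDriver "elbow.rotateZ" "bicep.bulge";
    // ... and at 90 degrees the bulge is fully on.
    setAttr "elbow.rotateZ" 90;  setAttr "bicep.bulge" 1;
    setDrivenKeyframe -currentDriver "elbow.rotateZ" "bicep.bulge";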
Non-linear Animation
Non-linear animation uses animation clips that contain keyframed
motion. These clips can be cycled and blended with other clips in the
Maya Trax Editor. For a bouncing ball, a single bounce clip could be
cycled, then blended with a clip of the ball rolling. These clips can be
moved, scaled, cycled and blended. You can also add and subtract clips
from the Trax Editor to quickly explore different animation options.
Bounce Clip
Keys are set for the up,
down and forward motion
of a single bounce. This
one bounce can be cycled
to create a number of
cycles.
Roll Clip
Here, a clip is created for
the rolling of the ball. This
clip contains keys for the
Rotate Z, Translate X and
Translate Y of the ball.
Drivers
Set Driven Key lets you use one attribute to drive other attributes.
Here, the Translate X of the ball drives the Translate Y and roll of the
ball. As the ball moves forward, the bouncing action takes place. The
resulting animation curves map the keyed attributes to Translate X
instead of time.
Blended Clips
The blending of the two
clips has the ball bounce
several times while getting
closer to the ground and
rolling forward.
Expressions
Another way of animating object attributes
is through expressions. Expressions can be
mathematical equations, conditional statements
or MEL commands that define the value of a given
attribute. Expressions are evaluated on every frame.
You can animate using an expression when you
have a mathematical relationship that you want to
achieve. In the case below, the absolute value of a
sine wave creates the bounce of the ball.
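An expression along these lines might look like the following in the Expression Editor; the object name and multipliers are assumptions.

    // The bounce is the absolute value of a sine wave driven by time;
    // forward motion is tied directly to time.
    ball.translateY = 2 * abs( sin( time * 4 ) );
    ball.translateX = time;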
Dynamic Simulation
To animate a ball that is bouncing off a series of objects or against
a non-flat terrain, a dynamics simulation is required for the most
realistic results. The ball can be turned into a rigid body that is
propelled forward using dynamic attributes. Forces such as gravity
or wind can then be applied to the ball to bring it to the ground.
Objects in the scene can also be turned into rigid bodies so the ball
will collide with them. If they are passive, they will not be affected by
the collision. If they are Active Rigid Bodies, they will move as the ball
hits them. In the end, the simulation can be baked to turn the motion
into keys.
Initial Velocity
When an object is set up for dynamics, it can have attributes such as initial velocity and initial spin that give it a starting motion.

Active Rigid Bodies
An active object is affected by forces and by collisions with other Active Rigid Bodies. Active objects will animate during a simulation.

Fields
Objects can be subjected to fields such as gravity, wind or turbulence.

Bounce Expression
A sine wave placed on time creates the bouncing motion. An absolute value function keeps the motion in positive Y and the forward motion is driven directly by time. Other multipliers are used to control the size of the bounce and the phase of the motion. Expressions are evaluated at every frame of the animation.
Passive Rigid Bodies
Passive objects are used as collision
objects by active objects, but
they do not react to either forces
or collisions.
Setting Keys
For an object to be animated, it must change over time. For example,
a car might move forward or a light might blink on and off. To
animate these changes in Maya, you need to set keys for the car’s
Translate X attribute or for the light’s intensity. Keys are used to mark
attribute values at specific times. Then, animation curves are used to
determine the value in-between the keys.
As a 3D artist, setting keys is one of your most important techniques. This animation technique can be easily applied to your objects
and the results can be easily edited. Once you are familiar with this
technique, you will soon find that you spend less time setting keys and
more time editing the motion.
Keying Attributes
When you set keys, you key values for one or more of an object’s
attributes at specific frames in time. These keyframes set the values,
while tangents set at each key determine the interpolation in-between
the keys. This interpolation results in an animation curve that can be
edited in the Graph Editor. This editing feature helps you control the
quality of your motion.
In Maya, virtually every attribute is keyable.
As you learn more about the different nodes
available in Maya, you will begin to discover
unique possibilities for animating your
models. For example, if you keep an object’s
Construction History, you can set keys on
the Input node’s history. You can also set
keys on attributes belonging to lights,
materials, cameras and other node types.
Step 1: Keyframes
Keys are set for at least two points in time. You can set keys for one or more attributes at the same time. The keys are then stored as animation curves.

Step 2: Animation Curve Shape
In the Graph Editor, you can view and edit the animation curves. At each key, there are tangents set that define the shape of the curve.

Step 3: Playback
When you play back an animation, the object uses the keys and the values defined by the animation curve to create the resulting motion.
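Keys can also be set from a script with the setKeyframe command; a small sketch with an assumed object name:

    // Key Translate X of ball at frame 1 and again at frame 24.
    setKeyframe -attribute "translateX" -time 1 -value 0 ball;
    setKeyframe -attribute "translateX" -time 24 -value 10 ball;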
The Time Slider
Timing is one of the most important components when creating an
animation. You must ensure your key poses are timed properly and that
the in-between motion achieves the desired results. The Time Slider lets
you play back or “scrub” your animation to evaluate this timing. You
can also edit the timing of the keys.
Object or Character Keys
Keys show up as red lines in the Time Slider, depending on which object or character you have selected.

Selecting and Modifying Keys
You can click+drag over several keys using the Shift key. This creates an editing bar. You can use this to move keys by dragging on the center arrows and scaling keys by dragging on the end arrows.

Right Mouse Button
With selected keys, you can then click the right mouse button over the Time Slider to access a pop-up menu allowing you to cut, copy and paste keys. You can also change tangents on the selected keys.

Scrubbing
Click+drag in the Time Slider to quickly preview the motion. You can drag with your middle mouse button to change time without updating object values.

Sound
You can import a sound file into Maya, then load it into the Time Slider using the right mouse button. The audio waves will be visible to help you synchronize your keys.

Time Range
The animation's Range and the Playback Range can be set separately. This makes it possible to preview subsections of a larger animation by updating the Range Slider.
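Both ranges can be set with the playbackOptions command; the frame numbers below are only an example.

    // The full animation runs from 1 to 300, but only frames 1-120 play back.
    playbackOptions -animationStartTime 1 -animationEndTime 300
                    -minTime 1 -maxTime 120;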
How To Set Keys
There are a number of ways to set keys in Maya.
Each one offers a different way of recording time
and value information. In some cases, you may want
to set keys for a number of objects and in others,
you may want to set keys for a single attribute. The
results are always the same, as animation curves
are created for any attribute for which you set a
keyframe. The only difference lies in choosing
a workflow that meets your needs.
Hotkeys
A fast and easy method for setting keys is through
hotkeys. Hitting the S key will set a keyframe for all
keyable attributes on the selected object or character
set. Pressing Shift w, Shift e or Shift r, will set keys
for just the translation, rotation or scale of an object.
Set Key
The Animate > Set Key tool is designed to create
keys for all the keyable attributes that exist on
selected objects or characters. If a character is
selected from the character pop-up then it is keyed.
Otherwise the selected object is keyed.
Prompt
One of the Set Key options is to use a prompt
window to let you set the keys for multiple points in
time. For example, you could set a key for frames 5,
10 and 15 all at once using this window. The same
attribute value would be keyed for each of these
times.
Auto Key Off
Auto Key On
Channel Box
The Channel Box always displays the keyable attributes of a selected object. The Channel Box also lets you highlight one or more channels and then select Channels > Key Selected to set keys for the highlighted attributes on all the selected objects.

Attribute Editor
When viewing attributes in the Attribute Editor, you can click the right mouse button over individual attributes and choose Set Key. Since this window shows both keyable and non-keyable attributes, you can use this method if you need to key an attribute that does not appear in the Channel Box.

Auto Key
Auto Key lets you key automatically as you edit objects in your 3D views. Auto Key will set keys whenever the value of an animated attribute is changed. Make sure to turn this option off when you have finished using it.
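Auto Key can also be toggled from a script with the autoKeyframe command, for example:

    autoKeyframe -state on;    // turn Auto Key on
    autoKeyframe -state off;   // turn it off again when you are done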
What is Keyed?
Generally, you set keys on attributes belonging
to selected objects. You can further control which
attributes are keyed using the Set Key options
where you can work based on keyable attributes, the
manipulator handles or the pop-up Character Menu
(where you only key character-specific attributes).
Viewing and Editing Keys
To view and edit keys, you can focus on the animation curve's shape
or its timing. Select Window > Animation Editors > Graph Editor to
access the animation curves and define their shape and timing. Select
Window > Animation Editors > Dope Sheet to focus on timing. In both
windows, you can set the attribute value, edit tangents and cut, copy
and paste keys.
In the Graph Editor, you can edit the weighting of the tangents.
This feature gives you more control over in-between motion. This is
in addition to the various in and out tangents that you can set in both
windows using the menus. As you become more proficient with the
Graph Editor curves, you will begin to appreciate the Dope Sheet where
you can easily make more general edits.
Keyable Attributes
By default, the Set Key command sets keys for all keyable attributes
of a given object or character set. Attributes can be set as either
keyable or non-keyable from Window > General Editors > Channel
Control. The keyable attributes are visible in the Channel Box and
can be keyed with the Animate > Set Key command. Animated
attributes that are non-keyable retain the keys set while keyable.
Manipulator
Keys can also be set using your manipulator as a reference. In the
Set Key options, you can choose to either use the manipulator or
the manipulator handles as the keyable attribute. Therefore, keys
would be set depending on which manipulator you are working with.
This allows you to focus your keyframing on the attributes you are
currently editing.
Character Menu
If a character has been selected from the Character menu found
under the playback controls, only that node will be keyed with
Set Key. The use of this menu assumes you have chosen a character-focused workflow when setting keys. You can set the character to
None to set keys on other objects in the scene.
Graph Editor
This window offers a view of the animation curves themselves. This makes it possible to view the in-between motion and edit curve tangents. You can also move keys around and edit their values.

Dope Sheet
The Dope Sheet focuses on keys. You can select keys hierarchically and edit them using this window. For example, you can use the Dope Sheet Summary to edit keys for all the selected objects.
Modeling
Modeling is the process of creating shape and form on screen. Models in Maya can be hard objects with sharp edges or organic objects with a softer look. Using one of several geometry types, 3D artists can build surfaces, then push and pull points to change their shape.
Modeling on the computer can be a challenge at first because the goal is to mimic 3D objects on a 2D screen. In this chapter, 3D artists will learn how to use manipulators and different view panels to navigate this virtual world so they can focus on sculpting and building their models.
While building good looking models is important, 3D artists must also be aware of how the model will be used down the line. Models might need to bend or twist or simply move around your scene. Also, the way surfaces are texture mapped will depend on how they were built.
Geometry
The mathematics of geometry is used by the
computer to determine what you see on the
screen. The Maya user interface gives you tools to
edit geometry without having to understand the
math behind it.
In order to build complex scenes, you need
to understand how to manipulate geometry and
how the geometry will be animated and texture
mapped down the line. A good looking model is only
complete when it satisfies the needs of all aspects of
the animation process.
Points, Curves and Surfaces
Points, curves and surfaces are the basic geometric elements that
you will use to create and manipulate 3D objects on the computer.
The creation of surfaces from points and curves is the essence of
modeling in Maya. Sometimes, you start with an existing surface and
manipulate its points to define shape and form. Other times, you start
with carefully constructed lines or curves that are then used to build a
surface. Either way, you will work to give a physical presence to these
basic geometric elements.
Building a Scene
Above is a wireframe view of a street scene.
Complex scenes can be redrawn more quickly when
viewed without hardware shading or texturing.
Curves
When two or more points are
connected, you have a curve. Curves
are useful for defining the shape of an
object. They can also be used as paths
for animating objects. Since curves
only have one dimension, they cannot
be rendered. Instead, they can play a
key role in defining how surfaces work
in 3D space.
Points
Points are defined in three dimensions
using X,Y and Z coordinates. In Maya,
control points are used to help define
the shape of object types such as curves
(CVs), surfaces (vertices, edit points)
and lattice deformers (lattice points).
Points are also very useful as references
for snapping.
Surfaces
When a series of lines is connected in two directions, you have a surface. Surfaces can be textured and rendered to create 3D images. When you shine light onto a surface, you can see the shape of the surface as gradations of tone and highlight.

Objects
One surface is often not enough to fully define an object in 3D. When a series of surfaces is positioned in relation to each other, you begin to get more complex models. These models require grouping to bring together the parts into a selectable hierarchy that can work as a single object, while not denying you access to the individual parts.
Geometry Types
One of the first decisions you have to make when you start a project is how you are going to build your models. There are
four types of geometry: polygons, NURBS, Subdivision and Bezier surfaces.
You can use any geometry type to create either simple or complex models. You can use one geometry type as a starting
point for another or you can build models that combine geometry types. In general, if you are building organic shapes, you will
probably use NURBS or Subdivision Surfaces. They will give you smooth surfaces and have the fewest control points, making
edits to the surface easier. Since NURBS are limited to a four-sided patch, there are limitations to the types of organic shapes
you can make from a single surface. This is where it is beneficial to use Subdivision Surfaces because they can represent many
more types of shapes with a single surface. If you are building non-organic shapes, such as a desk or wall, it is easier to use
polygons because they easily make shapes like corners or edges. If you are building a surface that combines hard edges with
an organic shape, Subdivision Surfaces work well. In this chapter, you will learn more about your options so you can decide on
the geometry that best suits the way you want to work.
NURBS
NURBS geometry is spline-based. The geometry is derived from curves and surfaces approximated from the locations of the surface's control vertices (points). NURBS allow you to start with curves that are then used to generate surfaces. This workflow offers precise results that can be easily controlled. All NURBS surfaces are four-sided patches, although this shape can be altered using the Trim Tool.

Polygons
Polygons are shapes defined by vertices that create three, four or n-sided shapes. Polygonal objects are made up of many polygons. Polygons can appear flat when rendered, or the Normals across adjacent faces can be interpolated to appear smooth.

Subdivision Surfaces
To create objects with Subdivision Surfaces, you need some understanding of both NURBS and polygonal modeling. Subdivision Surfaces are mostly built using a polygon mesh as a base and then refined. The advantage of using this geometry type is that detail is added only where needed. It creates smooth surfaces like NURBS but does not have the limitation of four-sided patches.
Scene Hierarchy View
Within the Hypergraph window, you can view
the objects in the scene and any relationships
between them. An object will have a Transform and
a Shape node. The Transform node contains information such as translation, rotation and scale. The
Shape node contains information such as History,
Tessellation, Render Stats and Object Display. When
you select an object, the Channel Box will display
information for both the Transform and Shape
nodes. If you are using the Attribute Editor, the
Transform and Shape nodes will be represented by
different tabs.
Tessellation
The Maya renderer requires polygonal objects in order to be able to execute rendering calculations. Therefore, NURBS and Subdivision Surfaces are broken down into triangles, or tessellated, during the
rendering process. The advantage of letting the renderer tessellate a
spline-based model is that you can set the quality of your tessellation
to match the size and scale of your object in a scene.
Transform node
Shape node
Even though the boot is a NURBS surface, it will be
tessellated into triangles when it is rendered. This is true
for all surface types that are rendered in Maya.
Modeling Techniques
Choosing the geometry type that best suits your model will depend
on several factors, such as: how the model is going to be used,
how complex the model has to be, whether the model will be animated
and deformed and what kind of texture maps will be used. If you are
unsure of what type of geometry to work with, it is possible to begin
with NURBS because it can be converted to polygons or Subdivision
Surfaces later. Polygons, however, cannot be converted to NURBS, but
can be converted to Subdivision Surfaces.
Starting with Primitives
One of the most common ways to create a model is to begin
with a primitive shape. This simple shape is then molded or expanded
to add more detail. This technique using polygons is frequently used
for developing environments and characters for interactive games.
NURBS primitives, such as spheres and cylinders, are commonly used
to begin organic modeling of objects such as body parts. A polygon
cube is a good place to start a Subdivision model by simply converting
it to a Subdivision Surface and then beginning to extrude.
Primitives
Primitives can be made of NURBS or polygons.
All primitives have the option of having
different spans and sections.
The model on the left was created from a NURBS primitive sphere that had several
spans and sections in both directions to have sufficient detail. The Artisan Sculpt
Surfaces Tool was used to create the main shape, which was then tweaked with
CV manipulation. The model above began as a polygon cube. It was then scaled,
had faces extruded and was finally converted to a Subdivision Surface.
Network of Curves
For more precise surfaces, a
network of curves can be used to
control the shape and parameterization of the surface. Surfaces can
be created from curves, trim edges
or isoparms. For industrial types
of modeling, creating a network of
curves is essential for smooth and
precise surfaces. There are several
tools within Maya to create a
network, such as: Snap to curves
and Point Snapping, intersecting
and projecting curves, Animated
Snapshots, curve rebuilding and
surface curve duplication.
The thumb was created using a profile curve for the base of the thumb and
attached to a motion path. The curve was scaled and deformed at the end of
the path to the shape of a thumbnail. Finally, an Animated Snapshot was performed to create the curves to use for a Loft.
Symmetry
Most objects in life, whether they are organic or
industrial, have symmetry. Modeling only half the
object and mirroring it offers an efficient method
for completing the entire object. This technique is
widely used for industrial design, but can also be
used for organic shapes such as heads and bodies.
A helpful tip for viewing a mirrored copy update
interactively while you work on one half, is to use
an Instance duplication with a negative scaling
instead of a regular copy.
Organic Modeling
When the surfacing tools are not sufficient to create the shape you are
looking for, direct control point manipulation sometimes is the only
solution. Artisan is an excellent tool for creating broad shapes but it
can be difficult to use in tight areas where you may need to manipulate
only a few CVs or vertices at a time. Manipulating on such a fine level is
an art in itself that demands patience and skill.
Selecting the points for manipulation can be the first challenge
because it is easy to accidentally select points on the back of the model.
Artisan Paint Selection Tool can be handy for selecting or deselecting
points since it works on the surface under the brush and does not affect
points on the back surface. Also, being able to hide unselected CVs lets
you focus on the surface without the clutter, making it easier to change
your selection. On NURBS models, when hulls are on they offer good
visual clues as to where the CVs are in space. After the selected CVs
have been modified, use the keyboard arrows to pick-walk to the next
row.
Artisan Paint using reflection
Mirroring half of head
The human head is one surface and not mirrored. Instead, it was created using
the Artisan Paint Tool with the Reflection feature that sculpts on both halves
at the same time. The alien head was modeled as one half and mirrored as a -1
scale in the X-axis.
The left model displays only those CVs that are being modified. By pressing F, the view is focused to
center on whatever is selected, making tumbling the camera easier for evaluation of the affected surfaces.
The model on the right uses the Paint Selection Tool to select front surface vertices, avoiding wrong
selections on the opposite side of the model.
Patch Surface Modeling
This method of modeling requires more planning than the others. It creates a surface out
of many smaller NURBS surfaces that have surface
continuity and, typically, the same number and positioning of isoparms.
The planning stage of
patch modeling involves
deciding where the cutlines
are to be positioned and what
the parameterization of the
surfaces will be. The Stitch
and Rebuild Surface Tools are
used extensively to create
surfaces with this method.
Rotoscoping
If the model needs to have exact
proportions or is being developed
from a sketch, you can import reference images as backdrops and rotoscope (or trace) them. Maya Image
Planes are objects in the scene that
can display images or textures. Each
Image Plane is attached to a specific
camera and provides a background
or environment for scenes seen
through that camera.
An Image Plane is used as a guide to model a hand.
Image Planes can use single image files, a numbered
sequence of image files or a movie.
NURBS Surfaces
The foundation of a NURBS surface is the NURBS curve. To create NURBS
surfaces efficiently, you must be proficient in creating good curves. The same
principles behind NURBS curves are applied to NURBS surfaces since the
two are related. There is an obvious difference: a NURBS curve has only
one direction, while the NURBS surface has two directions. The two directions on a NURBS surface have an origin and together they define the
Normals of the surface, which determine the front and back of the
surface. Being aware of these surface properties will help when using
certain modeling and rendering operations, such as attaching surfaces
or texture placement.
Anatomy of a NURBS Surface
The components of the NURBS surface are very similar to those
of the NURBS curve, except the edit points are not moveable.
NURBS surfaces have CVs, hulls and spans which define the shape
of a four-sided surface. NURBS models, whether they are organic or
industrial in nature, are generally made up of several adjoining four-sided patches. As with the NURBS curve, it is desirable to define surfaces
with the fewest evenly spaced isoparms or CVs. As earlier stated, the
quality and type of curve will affect the characteristics of the surface.
However, the surface parameterization can be modified after creation by
duplicating the surface curves at the desired locations and re-lofting.
Hulls
The hull comprises straight lines that
connect CVs. When you select a hull, you
are actually selecting all of its associated
CVs. The hull offers a better visual cue for
the distribution of CVs in a crowded area.
Isoparms
Isoparms are lines that represent cross-sections in the U and
V directions. Isoparms can be inserted, removed, used to make
curves and snapped to. If you select an isoparm that's not a span
or section, it displays as yellow dots. If you select an isoparm
that is a span or section, it displays as a solid, yellow line.
This distinction is important for some modeling actions.
Control Vertices
They do not exist on the actual
surface but are used to manipulate
the shape of the surface.
Surface Point
You can select a Surface
point that represents a measurement of U and V. The values at this point
are dependent on the parameterization
of the surface.
Surface Origin
Turning on this display option highlights
the first U and V isoparms (red and green)
and labels them U and V. It also draws
a line indicating the surface Normal
direction (blue).
Surface Patch
A NURBS surface patch is defined
by an enclosed span square.
Several patches can be selected
and duplicated to create individual
NURBS surfaces.
Spans
A span or segment is the space between isoparms
at edit points. When creating surfaces using
Revolve, Primitives, Loft or rebuilding, you can
specify the number of segments or spans.
Building Surfaces
The majority of the surfacing tools begin with creating curves defining the surface.
In some cases, the curves are used to create simple surfaces that are then rebuilt and
modified by CV manipulation. Other times, the curves are used to create much more
complex surfaces that would be difficult to attain otherwise. To help you understand
the operation of the tools, view the Help
Line as you scroll through the menus.
Fillet Blend Surface
The Fillet Tool creates a seamless blend between two surfaces. The three types are: Circular Fillet, Freeform Fillet and Fillet Blend. These terms are discussed later in this chapter.

Birail Surface
The Birail Tool creates a surface by using two or more profile curves that sweep along two rails. The profile curves must intersect the rail curves to create a surface. Profile and rail curves can be isoparms, Curves-on-Surface, trim boundaries, or boundary curves of an existing surface. The advantage of this tool over the Loft is greater control with the addition of rails.
Loft Surface
A Loft Surface is created when a surface is applied
to a series of profile curves that define a frame.
There must be at least two curves or surface
isoparms and ideally the same parameterization for
each curve, to achieve a clean surface. If the curves
have the same curve degree and parameterization,
the Loft Surface will have the same number of spans
in the U direction.
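A minimal loft in MEL, assuming two existing profile curves named curve1 and curve2 with matching parameterization:

    // Create a surface spanning the two profiles, keeping construction history.
    loft -constructionHistory 1 -uniform 1 curve1 curve2;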
Primitives
NURBS primitives are common geometric
objects such as spheres, cubes and
cylinders. Primitives are often used as
the foundation for other shapes.
Revolve Surface
The Revolve Tool creates a surface defined by a profile curve that revolves around a defined axis. The use of construction history is very useful to tweak the shape after the revolve operation. The front fender began as a revolved surface that was then scaled, deformed and finally trimmed. The tire and rims are simple revolves.

Trim Surface
To create a trimmed surface, a closed Curve-on-Surface is required. There are various ways of creating these curves, which will be discussed later in this chapter.

Extrude Surface
The Extrude Tool creates a surface by sweeping a cross-sectional profile curve along a path curve. The profile curve can be an open or closed curve, a surface isoparm, a Curve-on-Surface or a trim boundary. The extruded surface on this model creates a lip for the scooter surfaces and gives the illusion of depth.
Polygon Modeling
Polygons can be defined as a number of connected points that
create a shape or face. Points are connected by edges that surround
the resulting face. A face can exist as triangles, quadrangles (quads) or
n-gons. Joined together, they create a polygon mesh. A polygon mesh
can be created using the Primitives that come with Maya, but a more
complex shape results from using the Maya polygon editing
operations. A polygon mesh can also be created by a conversion from
NURBS, Subdivision Surfaces, Paint FX, Displacement or Fluids.
Polygon Creation
To create a polygon shape, select Mesh >
Create Polygon Tool. After placing a point, use
the middle mouse button to alter its position.
Polygon Components
Each polygon mesh consists of components that are modifiable to help
create and edit the mesh. These main components are vertices, edges,
faces and UVs. There are polygon editing operations in Maya that allow
you to edit these components. You will need to select the individual components that you wish to modify. By toggling on Convert component selection (Preferences > Modeling > Polygons), Maya will automatically switch to the right component type for any given edit operation and perform the operation as instructed.
After the 3rd click, the polygon can be completed
by pressing Enter. The dashed line represents the
final edge.
Vertices
A vertex is a point in 3D space.
Three or more connected vertices
make a face. Press F9 for
Vertex Selection mode.
Edges
Edges connect vertices by drawing
a straight line between them.
A single edge can be moved,
scaled or rotated. Press F10
for Edge Selection mode.
Faces
A face is made up of three or more connected edges. A face with three sides is a triangle, with four sides is a quad and with more than four sides is an n-gon. A face can be moved, scaled or rotated. Press F11 for Face Selection mode.
UVs
UVs are the two-dimensional coordinates that are required to display or render a texture on a mesh. A UV directly corresponds to a vertex on the mesh. Press F12 for UV Selection mode.
You can continue to place points until the desired
shape is achieved. Press Enter to finish.
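The same result can be sketched with a command, assuming the Python maya.cmds module is available (scripting is not covered in this book); the points below are illustrative and play the role of the clicks described above.

    import maya.cmds as cmds

    # Each point corresponds to one click of the Create Polygon Tool;
    # the face is closed automatically, like pressing Enter.
    face = cmds.polyCreateFacet(p=[(0, 0, 0), (2, 0, 0), (2, 0, 2), (0, 0, 2)])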
Non-planar Polygons
When working with quads and n-gons, you should
be aware that if a vertex lies off the plane from
the other vertices, it creates a non-planar face. To
avoid this when creating or appending polygons,
you can toggle on Mesh > Create Polygon Tool
> Options > Keep new faces planar. If this situation occurs as a result of a modeling operation,
then either triangulate the non-planar faces or
use the cleanup operation to tessellate faces with
four or more sides. While non-planar faces can
be rendered in Maya, they may cause problems
if you are creating a polygon mesh for export to a
game engine.
Use the right mouse button over the model to access a marking menu that will
allow you to change selection modes interactively. This method allows you to
select multiple component types.
Polygon Objects
All of these objects were created from
polygons. Some models (like the scooter
and the gallery) were created with a
specific polygon count target in mind.
This means that these models do not
exceed a certain number of polygon faces.
While these restrictions apply to game
content, models created for software-rendered output often do not fall under
these restrictions.
The first shape has only three sides and by nature is planar. The second and third shapes have four sides each and both appear to be planar.
The third shape, however, has its fourth vertex on a different plane than the other vertices of its face, so the third face is considered non-planar.
Polygon Primitives
Maya includes several polygon primitives that can give
you a starting point for your model. The majority of these
primitives are closed shapes and all primitives are created
with a default set of UV information. These primitives have
construction history which can be modified at any time.
The polygon primitives include the Sphere, Cube, Cylinder, Cone, Plane, Torus, Prism, Pyramid, Pipe, Helix, Soccer Ball and Platonic Solids.
Modeling a Head
Using some of the more common polygon modeling tools, the steps for creating a polygonal head are illustrated below.
Extruding Faces
You can further refine your shape by extruding the face
of a polygon. This extrusion operation inserts faces at the
edges of the face to be extruded and allows the selected
face or faces to be moved, scaled or rotated from their
original position. If you are extruding multiple faces and
want them to maintain a cohesive shape, toggle on Edit
Mesh > Keep Faces Together. This only inserts faces at
the edges on the border of the selected faces. Otherwise,
faces will be inserted at every edge. This tool can be found
under Edit Mesh > Extrude.
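As a rough scripted counterpart (an assumption, since this book works through the menus), the same kind of extrusion can be reproduced with maya.cmds; the cube and face index below are illustrative.

    import maya.cmds as cmds

    cube = cmds.polyCube(w=2, h=2, d=2)[0]
    # Extrude one face along its local normal; keepFacesTogether matches
    # the Edit Mesh > Keep Faces Together toggle described above.
    cmds.polyExtrudeFacet(cube + '.f[1]', localTranslateZ=1.0, keepFacesTogether=True)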
A default cube was created and
then smoothed by selecting Mesh
> Smooth. The two rear bottom
faces were then extruded to create the neck and the front lower
edges were moved down to create
the chin.
Face four is selected. This is the
face that will be extruded.
After the extrude is complete, new
faces were added at the edges of
the extruded face.
A common use of the Split Polygon Tool
is to divide a face in half. This is done
with a tolerance of 100.
However, the face can be split any
way you want as long as the last
vertex ends up on an edge.
The torso and arms are selected and ready to be combined. Select Mesh > Combine to do this.
The three objects have been
combined into one object.
However, there are three separate shells.
Splitting Polygons
The Split Polygon Tool allows you to divide a polygonal
face. You can also use the tool to insert vertices on an
edge. To assist you, the tool has options that allow you
to set how many Snapping Magnets you want and the
Snapping Tolerance for the magnets. By setting Snapping
Magnets to 3, the edge being split will have three equally
spaced division points. Increasing the Snapping Tolerance
increases the influence of the magnets. With a Snapping
Tolerance of 0, the vertex can be added anywhere on the
edge. A tolerance of 100 will force the vertex to snap
directly to the division points.
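A hedged scripted sketch of the same idea, assuming maya.cmds; edge indices depend on the mesh, so the values below are illustrative.

    import maya.cmds as cmds

    plane = cmds.polyPlane(w=2, h=2, sx=1, sy=1)[0]
    cmds.select(plane)
    # Each insertpoint pair is (edge index, parameter along that edge);
    # placing a point halfway along two edges of the face splits it into two new faces.
    cmds.polySplit(insertpoint=[(0, 0.5), (3, 0.5)])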
The lower faces of the front were split for the nose and the middle edges were moved up to create the eyebrow area. The nose and eyes were created by splitting faces in the appropriate areas and then moving vertices to get the desired shape.
Joining Objects
Sometimes you will create the individual parts of a model
and then want to join them together. For objects to be
joined, they must match certain criteria. The objects
must be combined to create a single object and must have
their Normals pointing in the same direction. Combining
objects will create a single object with construction history
relating back to the original objects. The separate pieces
of the new object are called shells. Shells are pieces of an
object that are not connected to the rest of the object by
shared edges. Objects that are combined with opposing
Normals will give an unexpected result when their edges
are merged. This is because their edges run in different
directions. This is true for appending polygons between
shells as well.
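For scripted cleanup, here is a minimal sketch assuming maya.cmds; the two cubes stand in for the arm and torso shells, and the face range in the commented line is illustrative.

    import maya.cmds as cmds

    a = cmds.polyCube()[0]
    b = cmds.polyCube()[0]
    cmds.move(1, 0, 0, b)

    # Mesh > Combine: one object, but still separate shells.
    combined = cmds.polyUnite(a, b, ch=True)[0]

    # If one shell's Normals oppose the rest, reverse them first, e.g.:
    # cmds.polyNormal(combined + '.f[6:11]', normalMode=0)

    # Merge border vertices that lie within a small distance of each other.
    cmds.polyMergeVertex(combined, distance=0.01)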
The mouth was created by splitting
the faces in the mouth area and
then moving vertices.
The border edges of the left arm and left torso
opening are selected and the Edit Mesh >
Merge operation is applied. The arm and torso
now share common edges and the holes in the
left side of the torso and left arm are closed.
The eyes and neck were further
refined and then the edges of
the model were smoothed in
the proper areas.
The same steps as before are applied, but the result is different because the edge directions of the two shells are opposite each other. Maya was able to merge the front
edge of the right arm and the rear edge of
the torso, but could not continue and stopped
merging edges.
Here you can see the problem. The left arm and torso have their Normals pointing out, while the right arm's Normals are pointing in. You could solve this by selecting a face on the right arm and choosing Normals > Reverse > Options. Toggle on All faces in the shell and Reverse user normals, then press Reverse Normals. The right arm's Normals now follow the rest of the object and the edges can be merged properly.
Deformations
In the real world, some objects are hard and some are soft. The surfaces
on the soft objects can be bent and folded into different shapes. This
kind of surface deformation can be set up and animated in Autodesk®
Maya®.
Pushing and pulling the control points on a surface deforms its
shape.
Yet sculpting every surface point by point can be time consuming. Maya offers deformation tools that give 3D artists a higher level of control. A deformer applied to one or more objects can be used to achieve bending and twisting by editing a few control points or attributes. Deformers can also be used as modeling tools because they are great for reshaping surfaces.
Deforming Objects
Many objects in our 3D world are able to change their shape —
a soft chair gives as someone sits in it, a rubber ball squashes
and stretches as it hits the ground, and human skin bends as the elbow
rotates. To achieve these kinds of effects in Maya, surfaces have to
be able to have their shape animated. This means animating the
positions of control points instead of simply translating and rotating
the whole object.
A face with no
deformations.
Types of Deformation
In Maya, there are a number of ways to change or deform the shape
of an object. These deformers can be used to help you model surfaces
or animate organic forms. While there are a set of tools in Maya called
deformers, there are other tools that change the shape of objects. By
becoming familiar with all of these techniques, you can best decide which one to use in your work.
The same face after using
surface deformation to
reshape the nose, add a smile, round out the cheeks,
widen the chin, and make
folds for eyelids.
CV and Vertex Edits
The most rudimentary method of deforming a curve
or geometry is to select component level control
points and translate, rotate or scale them. This is
useful when you need to move a surface point to a
specific location.
Deformers
Maya has a category of tools called deformers that either
perform a specific type of surface deformation such as
twist or bend or make the process of deforming a surface
easier in some way. For example, a lattice is a cage-like
manipulator made of a small number of lattice points.
Each lattice point controls several control points in a
specified region of the surface. Moving one lattice point
can affect many control points on the surface that would
be difficult to select and move individually.
Simulated Deformations
Maya has features for simulating properties of clothing and soft, dynamic moving materials like curtains, flames and flags. A soft body is a geometry object whose control points are controlled by particles and dynamic fields such as turbulence and gravity.
Skeleton Chains
A skeleton chain consists of joint nodes that are
connected visually by bone icons. Skeleton chains
are a continuous hierarchy of joint nodes that are
parented to each other. You can group or bind
geometry (skin) to these joint hierarchies. You
can then animate the joints (usually by rotating
them) and the geometry will be animated. Binding
geometry to a skeleton causes the geometry to be
deformed as the skeleton is animated. For example,
you could rotate a neck joint and the geometry
around the neck joint would rotate as well.
Deformer Sets
When a deformer is created, certain control points of
the surface will be affected by it. The control points
that are affected by a given deformer are said to be
part of that deformer’s membership. Maya keeps
track of which control points are members of which
deformers by using sets. It is possible to add or
remove control points from a deformer’s set membership by selecting Edit Deformers > Edit Membership
Tool.
A non-deformed sphere with its CVs displayed.
The same sphere with a lattice deformer applied.
The top rows of lattice points have been moved up
to deform the sphere.
Alternatively, an explicit list of the
CVs belonging to this set membership can be viewed and edited by
selecting Window > Relationship
Editors > Deformer Sets.
To select the lattice, click on Edit Deformers >
Edit Membership Tool. This highlights the CVs on
the sphere that are affected by the lattice. The yellow CVs are part of the lattice’s membership.
A row of CVs is removed from the lattice’s membership with the Edit Membership Tool. Now the
deformer has no effect on those CVs.
Deformer Order
One of the many powerful and flexible features of deformers is the ability to layer
multiple deformers together. For example, you could apply Bend, Lattice, Blend Shape,
Skeleton and Cluster deformers on the same object and all will interact to produce the
final deformation. However, the order in which Maya evaluates these deformers does
affect the final shape. Fortunately, you have control over the order in which the deformers
are evaluated.
Here, a Squash deformer is added first, then a
Bend deformer. Notice how the deformation looks.
When the order of deformation is reversed so
that the Bend deformer is evaluated before the
Squash deformer, the result is a different look to
the deformation.
To access the List of History Operations
window, place the cursor over the object and
use the right mouse button and select Inputs >
All Inputs. This window displays a list of Input
nodes connected to the text object. Click+drag
the middle mouse button on the list of items to
the left to change the order Maya evaluates these
Input nodes.
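The same layering and reordering can also be sketched from a script. A minimal example assuming maya.cmds; the squash and bend values are illustrative.

    import maya.cmds as cmds

    cyl = cmds.polyCylinder(h=6, sy=12)[0]
    squash = cmds.nonLinear(cyl, type='squash')[0]
    bend = cmds.nonLinear(cyl, type='bend', curvature=60)[0]

    # Change the order in which the two deformers are evaluated on the cylinder.
    cmds.reorderDeformers(squash, bend, cyl)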
Artisan Sculpt Surfaces Tool
Maya Artisan provides you with a set of brushes to sculpt detailed organic models on NURBS surfaces, subdivision surfaces and polygon meshes.
You can push, pull, smooth or erase deformations on
your model and even add a texture map to further
displace it.
Select Edit NURBS/Subdiv Surfaces > Sculpt
Geometry Tool to use the Artisan Sculpt Surfaces
Tool. This is an easy way to paint deformation onto a
surface interactively.
The model after the Sculpt Geometry Tool has
been applied. This second image uses a more
detailed surface, allowing subtle definition and
wrinkles to be added.
Deformers
Lattices, clusters, wires, sculpts and wraps are deformers that can
manipulate a large number of points using simpler objects and,
therefore, offer a mechanism to animate points in a more controlled
and predictable manner. Lattice, cluster, wire, sculpt and wrap
deformers are used in both the modeling and animation process,
with the focus being more heavily on animation. These deformers are
located under the Deform menu and each offers a unique solution for
deforming surfaces.
When any deformer is used as a modeling tool, you can use a
Delete History operation to bake in the deformations. If you are not
satisfied with the resulting deformation, you can simply delete the
deformer and the surface will snap back to its original shape.
Lattice Deformer
A lattice deformer surrounds a deformable object with a cage-like box
that you can manipulate to change the object's shape. Operations such
as translation, rotation or scaling can be applied to the lattice object
or to the components to deform the underlying surface. The lattice can
encompass the entire object or any number of control points. Even a
lattice can be deformed since it is also a deformable object.
When a lattice is created, there are actually two lattices created:
an influence lattice and a base lattice, the latter of which is invisible
by default. When the influence lattice is edited, the resulting deformed
surface is generated by calculating differences between the lattice
points of the influence lattice and the base lattice.
You can specify the lattice's structure in terms of
S, T and U divisions. The greater the number of
divisions, the greater the lattice point resolution.
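A minimal scripted sketch, assuming maya.cmds; the division counts and the moved points are illustrative.

    import maya.cmds as cmds

    sphere = cmds.polySphere()[0]
    # Returns the ffd node, the influence lattice and the (hidden) base lattice.
    ffd, lat, base = cmds.lattice(sphere, divisions=(3, 5, 3), objectCentered=True)

    # Moving the top row of lattice points deforms the sphere beneath it.
    cmds.move(0, 1, 0, lat + '.pt[0:2][4][0:2]', relative=True)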
Pushing Through a Lattice
If you push an object through a lattice, you get the cartoon effect of an object being pushed through a keyhole.
Wire Deformer
Wire deformers are like the armatures used by sculptors to shape objects. With a wire deformer, you use one or more NURBS curves to change the shape of objects.
In character setup, wire deformers can be useful for setting up lip and eyebrow deformations. Wire deformers can also be useful for shaping objects during modeling. To create further wrinkling effects, you can also use the Wrinkle deformer.
Soft Modification Tool
The Soft Modification Tool lets you push and pull geometry as a sculptor would pull and push on a sculpture. The amount of deformation is greatest at the center of the push/pull, and gradually falls off farther away from the center.
The Soft Modification Tool is located in the Toolbox. The corresponding action is Create Deformers > Soft Modification.
History
Every time you use the Soft Modification Tool, a softMod node is created, allowing you to go back to previous nodes and make changes.
These nodes can also be animated just like any other deformers.
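As with other deformers, a softMod can also be created from a script. A minimal sketch assuming maya.cmds; the plane and move values are illustrative.

    import maya.cmds as cmds

    plane = cmds.polyPlane(w=10, h=10, sx=20, sy=20)[0]
    # softMod returns the deformer node and its handle transform.
    softModNode, handle = cmds.softMod(plane)

    # Pulling the handle deforms the surface most at the center of the
    # push/pull, falling off toward the edges.
    cmds.move(0, 2, 0, handle, relative=True)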
Flexors
Flexors are special deformers designed for use with
rigid skinning. They provide various types of deformation effects that improve and enhance the effects
provided by rigid skinning. There are three types of
flexors that can be attached to a skeleton: lattice,
sculpt and joint flexors. Once they are applied, the
joint is usually rotated to see the effect of the flexor
and further adjustments can be made. Set Driven
Key is an ideal tool to set up a relationship between
the flexor and its affecting joint.
Clusters
Using clusters solves some fundamental problems for keyframing
control points. These points can only have their position in space
animated because they don’t have a Transform node. Clusters are
deformers that create a set out of any number of control points from one or more surfaces and provide them with a Transform node.
Once a cluster has been created, you have the ability to keyframe its
scale and rotation based on the cluster’s Transform node’s pivot point.
You can also group clusters into a skeleton hierarchy.
Wrap Deformer
The function of a Wrap deformer is similar to that of
a lattice deformer, with some slight differences. The most obvious is that a Wrap deformer can be made from a NURBS surface or polygon mesh and be any
shape. Just like the lattice, the wrap also creates a
base shape and any difference in position, orientation or shape between the base shape and the wrap
influence object results in a deformation of the
surface. This technique uses an influence object with
fewer points than the object you are deforming. The
primary visibility is turned off for the wrap deformer,
so it does not render.
Six CVs around the top eyelid are clustered and
weighted. The pivot point for the cluster is then placed
at the center of the eye so the surface deforms around
the eyeball. Without a cluster and pivot point, the
eyelid would move straight through the eyeball.
After creating a cluster, you can assign a percentage to the CVs to
control the amount the points will move via the Edit Deformers > Paint
Cluster Weights Tool. For example, if a control point is weighted 1.0, it
will move 100 percent of the transformation. A value of 0.5 will only
transform 50 percent. The top row of CVs for the eyelid are weighted
0.5 and the bottom 1.0. This allows for nicer tucking when the eye is
open. Otherwise, the top of the eyelid would recess too far into the
head with a value of 1.0.
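The cluster and its weights can also be set up from a script. A minimal sketch assuming maya.cmds; the CV ranges and weights are illustrative rather than the eyelid CVs shown here.

    import maya.cmds as cmds

    eye = cmds.sphere(name='eyelidSurface')[0]
    # Cluster a small patch of CVs; the command returns the cluster node
    # and its handle (which carries the Transform node and pivot).
    clusterNode, handle = cmds.cluster(eye + '.cv[0:1][0:3]')

    # Weight one row of CVs at 0.5 so they move only half as far.
    cmds.percent(clusterNode, eye + '.cv[0][0:3]', value=0.5)

    # The handle can be keyframed like any other transform.
    cmds.setKeyframe(handle, attribute='rotateZ', time=1, value=0)
    cmds.setKeyframe(handle, attribute='rotateZ', time=12, value=25)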
Polygon mesh used as Wrap
deformer
The above left image shows how a cluster was
created with the weighting at the default value of
1.0. The adjacent image shows the same cluster,
but with the CVs weighted in a tapering effect.
The selected CVs are shown in the Component
Editor with their assigned weight.
Flow Path Object
The Flow Path Object function creates a lattice around
the object that has been animated on a motion path.
This technique allows the object to deform to the
shape of the curve. There are two options: around the
object or around the path curve. They both achieve the
same look unless you decide to add deformations to
the lattice.
Sculpt Deformer
Sculpt deformers are useful for creating any kind of rounded deformation effect. For example, when setting up a character for animation, you could
use sculpt deformers to control a character's cheeks or to bulge a bicep.
When you create a Sculpt deformer, select components to only affect
that region of the model.
The Sculpt Tool can be very useful for simple
bulging effects, but it also allows for much more
complex deformations. For example, you can use
a NURBS surface that deforms to create a more
organic feel, or you could use a texture map where
you can paint the deformer influence.
Character Animation
One of the most challenging and rewarding forms of computer graphics is
character animation. Here, 3D artists combine the transformation of a digital skeleton
with the deformation of a skinned surface to set up a character that walks, talks and
moves around in 3D space.
Of course, a character doesn’t have to be a human or an animal. Any object that
is animated with expression and tries to speak to the audience through its actions is
considered a character. In fact, the same techniques used to animate a dog might be
used to animate a dancing bottle, a tiger or a tree.
Autodesk Maya allows you to combine all the controls found on different parts
of a character into one or more character sets. This makes it easier to pose characters
and work with them in the Maya non-linear Trax animation system. These techniques
3D Characters
A 3D character is a digital actor. Whether your character is a tin can that bounces with
personality, or a photorealistic human being, the animator will need to control it
easily and interactively. The specific requirements of the character's motion will dictate the
complexity of the character's controls. Maya offers many tools for the creation of these
digital performers.
A Typical Character
The character’s mechanics must be convincing to an audience and the skin and clothing
must also move and bend properly. Maya includes a number of tools that help you
manage the parts that make up a typical character. This process of preparing character
controls is called rigging and is used to let the animator focus on the process of animating.
A fully rigged character can be quite complex as it brings together skeleton joints, surfaces,
deformers, expressions, Set Driven Key, constraints, IK, BlendShapes, etc.
Skeleton Joints
Joints are used to create a framework for a
character’s hierarchy. The rotation of the skeleton
joints defines the motion of the character. You can
use inverse kinematics for even more control.
Facial Animation
To animate facial features, you can use deformers
such as BlendShape to create facial poses that can
be used for talking and for showing emotion.
Character Controls
Using animation techniques such as Set Driven
Key and expressions, you can set up attributes
for controlling different parts of a character.
For example, a hand joint could have attributes
used to control the different finger joints.
Kinematics
To control your skeleton joints, you can choose
from forward or inverse kinematics. Forward
kinematics allows you to set the joint rotations
directly. IK allows you to position IK handles,
which rotates the joints.
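A minimal scripted sketch of the two approaches, assuming maya.cmds; the joint positions are illustrative. With forward kinematics you rotate joints directly; with IK you move the handle.

    import maya.cmds as cmds

    cmds.select(clear=True)
    shoulder = cmds.joint(position=(0, 5, 0))
    elbow = cmds.joint(position=(2, 5, 1))
    wrist = cmds.joint(position=(4, 5, 0))

    # Forward kinematics: rotate a joint directly.
    cmds.setAttr(shoulder + '.rotateY', 20)

    # Inverse kinematics: create a handle from shoulder to wrist and move it;
    # the IK solver rotates the joints to reach the handle.
    handle = cmds.ikHandle(startJoint=shoulder, endEffector=wrist)[0]
    cmds.move(3, 4, 1, handle)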
Constraints
It is possible to constrain the kinematic controls of
a skeleton to objects in your scene or even simple
locators. You can then animate the constraint
weights to make a character pick something up or
grab hold of a fixed object.
Bound Surfaces
Surfaces of a character's skin and clothing can
be either parented or bound to the skeleton
joints to make them move together. Binding
places points from a surface into clusters that
are then associated with particular joints.
Selection Handles
Selection handles give you quick access to parts
of a character’s hierarchy that are to be animated.
This makes it easier to work with a character after
it has been rigged up for animation.
Deformers
To help the surfaces bend realistically at joints,
deformers such as flexors and influence objects
can be used.
Character Resolution
A fully rigged 3D character includes many bound
surfaces and deformers that can slow down the
interactive manipulation and playback of the scene.
Therefore, a low resolution character that has
surfaces parented to the skeleton makes it possible
to work interactively while animating. You can then
switch to the fully rigged character for rendering. A
low resolution version of a character also makes it
possible to begin animating before the entire
character is fully developed.
As you animate, you can use low resolution surfaces parented to your
skeleton to achieve more interactivity while animating.
The development and animation of a 3D character involves a
number of steps. Once you have a design, you must begin to build
the character’s model, lay down skeleton joints and rig the skeleton
so that it is capable of an appropriate range of motion. Character
controls can also be set up to assist the animation process.
While it is possible to work in a linear fashion, starting with
modeling and ending with rendering, most productions require
some form of concurrent work to be done. An animator might need
to begin laying down motion while the model is still being finished.
At the same time, character deformations and texture maps may
each be assigned to different parts of a team. For this reason, you
may use your low resolution character to begin animating and
blocking out scenes while the higher resolution character is refined
and set up for deformations and rendering.
Motion Capture
As an alternative to setting keys, you can use motion
capture to simulate real-life motion on a character.
Generally, motion capture involves recording joint
positions and rotations from an actor that are then
applied to a skeleton. Motion capture works well with
non-linear animation where motion capture clips can
be blended together.
Note that you can import mocap data from
Autodesk® MotionBuilderTM software via the
FBX® file format.
Motion capture offers realistic motion performances that can be imported as animation curves and applied to digital characters.
© 2000, Image courtesy of Ascension Technology Corp.
A Typical Character Animation Workflow
Character Design
In support of the story, the
character is designed using
sketches, storyboards and
in some cases, clay models.
These visual aids give the 3D
artist a clear understanding
of the character and the
character’s range of motion
and emotion.
Modeling
Using the sketches, a detailed
model is built with an awareness of
how it will be bound to the skeleton
later.
Binding Skeleton
The surfaces are bound to the
skeleton and joint rotations
tested. Deformers are used
to enhance the final look.
Skeleton Rigging
Using sketches or the model
as a guide, joints are drawn
and kinematics and character
controls are set up.
Animation
A fully rigged model that uses
low resolution surfaces parented
to the skeleton can be used for
initial animation studies.
Character Sets
On a typical character, you will have
many attributes on many different
nodes that need to be keyed. A
character set allows you to collect
those attributes in one place and build
up a character definition. When this
character is highlighted in the
pop-up Character menu, that character set is active whether or not it is
selected. This feature makes it possible to easily set and edit keys for
that character since it is always active. Character sets are also necessary
to animate using non-linear animation through the Trax Editor since Trax
clips can only be created for a character set.
Select Character > Create Character Set to start a character, then
select Character > Set Current Character Set > Character Set Editor to
add and subtract attributes from the set. The Character menu found near
the Time Slider can also be used to highlight a character.
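Character sets can also be created with the character command. A minimal sketch assuming maya.cmds; the object and attribute names are illustrative.

    import maya.cmds as cmds

    ball = cmds.polySphere(name='ball')[0]
    # Collect a few keyable attributes into a character set.
    charSet = cmds.character(ball + '.translateX', ball + '.translateY', name='ballChar')

    # Character sets are Maya sets, so their contents can be listed.
    print(cmds.sets(charSet, query=True))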
Integration
The animation from the low
resolution character is applied
to a fully rigged skeleton with
bound surfaces and deformers.
Rendering
The character is rendered in its final
setting. You might also render the
character on its own, then use a
compositing package to integrate
it with
the background.
Character Sets
This set is the root of a character
setup. You can have attributes
assigned to the character or you can
assign sub-characters to it. If you
select this set and set keys, you also
key the sub-characters.
Sub-Character Sets
These sets help break down a
character into smaller parts in case
you want to focus on one area. These
sets can be set up to control specific
body parts such as arms, legs and
facial features.
Keyable Attributes
For each character and sub-character set, you can choose which keyable attributes need to be brought
together to effectively animate the
different parts of the character.
Animating Characters
To animate a character, you must deal with a large number of attributes that are scattered around the many joints, IK handles and
Transform nodes that make up the character. A number of tools can
help you consolidate these attributes and make it easier to set keys on
your character.
Character Sets
Character Sets offer you high-level control over your character. These
sets let you collect attributes from different parts of a Character and
edit and set keys on them in a single place. Keys set on Character Sets
are transferred to the associated attributes.
Character Sets are given special treatment when they are highlighted on the Character menu. A highlighted character is keyed by
the Set Key Tool (S key), even if its parts are not selected in the workspace. Highlighted characters also show up in the Graph Editor without
having to be selected.
Sub-characters
Sub-characters can be created for different parts
of a character to give you more control. These
breakdowns should mimic areas that you want to
animate as a group.
Character Set
This is essentially the root set of a character. It
may not contain any attributes if you are using
it with Sub-character sets. Select Character >
Create Character Set to create one of these sets.
Sub-Character Set
A Sub-character set is a
typical character set that has
been assigned to a character.
Select Character > Create
Sub-character to create one
of these sets. It will be assigned
to the highlighted character.
Attributes
Some attributes will be part of a character or
sub-character when they are created. To add more
attributes, you can highlight them in the Channel
Box, then select Character > Add to Character
Set. You can also use the Relationship Editor to
add attributes to characters.
Character Pop-up Menu
This pop-up menu found next to the timeline in the
lower right of the workspace lets you quickly select
and edit characters and sub-characters. The character set highlighted here is the active character.
Relationship Editor
If you select Character > Set Current Character
Set > Character Set Editor or Character Set
Editor from the Range Slider toolbar, you open the
Relationship Editor. Highlight a Character Set on the
left, then click on attributes on the right to add them
to the Character Sets.
Non-linear Animation
To animate using non-linear animation, you must have character sets set up.
Only character and sub-character sets will be recognized when you create clips
and poses and place them into the Trax Editor.
Character Mapper
Use the Character Mapper to establish a relationship between a source character's nodes or attributes and its target. Then you can import and export, or copy and paste, animation clips between the mapped characters in the Trax Editor.
Constraints
Constraints allow you to control a
character using other objects (such
as locators). Constraints let you
control parts of a character like the
position of IK handles with a point
constraint or the rotation of joints
with an orient constraint.
The advantage of constraints
is that they are flexible. If an arm
or leg cannot reach its constraint,
it pulls away from the constraint
gently, rather than being abruptly
stopped. When you see this
pulling, you can quickly adjust
other constraints to minimize the
pulling of the first constraint.
Arm
A character’s hands can be constrained
using point and orient constraints. You
may want to parent these to the shoulders
so they move with the body. If you have
any finger attributes, you may want to
use the locator as a Control node.
Eyes
Aim constraints can be used to control a
character’s eyes. These would be parented
to the head but could still be moved on
their own to offset the gaze. It is a good
idea to use a different locator for each eye
to avoid having a cross-eyed look.
Elbows and Knees
A pole vector constraint can be used to
control the positioning of elbows and knees.
These locators help orient the IK handle’s
pole vector, which helps prevent the IK solution from flipping. Pole vector constraints can
also be parented to parts of the body.
Animated Constraint Weights
It is possible to add more than one constraint to an object. Each of
these constraints is given a weight, and the object will be constrained
based on the average of the constraints’ weights. Therefore, you can
animate an object that switches from one constraint to another by
keying the weights from 0 to 1. If you are animating the weights, make
sure you don’t set all the weights to 0. This would create a confusing
situation for your object when working interactively.
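A minimal scripted sketch of switching between two constraints by keying their weights, assuming maya.cmds; the locators are illustrative stand-ins for the hand and seat constraints used in the scooter steps below.

    import maya.cmds as cmds

    prop = cmds.polyCube(name='prop')[0]
    handLoc = cmds.spaceLocator(name='hand_loc')[0]
    seatLoc = cmds.spaceLocator(name='seat_loc')[0]

    # One constraint node, two targets, each with its own weight attribute.
    constraint = cmds.pointConstraint(handLoc, seatLoc, prop)[0]
    weights = cmds.pointConstraint(constraint, query=True, weightAliasList=True)

    # Hand target in control at frame 1, seat target takes over by frame 24.
    cmds.setKeyframe(constraint + '.' + weights[0], time=1, value=1)
    cmds.setKeyframe(constraint + '.' + weights[1], time=1, value=0)
    cmds.setKeyframe(constraint + '.' + weights[0], time=24, value=0)
    cmds.setKeyframe(constraint + '.' + weights[1], time=24, value=1)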
Control Nodes
In some cases, you will not want to add every
attribute to a character. Instead, you will want
specific Control nodes that have custom attributes linked to other attributes in the scene using reactive animation techniques (such as Set Driven Key or direct connections). This creates an
intermediate level of control that lets you focus on
fewer attributes, while maintaining control over
many attributes.
Step 1: Character Constraints
The character’s pelvis, arms and legs are constrained
to locators. The arm locators are parented to the
shoulder so they move with the body, while the leg and
pelvis locators are in world space. This lets you move
the character freely without it being locked down.
Step 2: Scooter Constraints
The arm locators are parented to the handlebars
so they rotate with the steering mechanism. The
leg locators are parented to the running board and
the pelvis locator to the seat. When the scooter
moves, the character constraints will also move.
Step 3: Key Initial Weighting
While the character walks freely, the character constraints are keyed at 1 and the scooter constraints
are keyed at 0. This puts the character constraints
in control until the character sits down. You will
want to add the Constraint node’s weight attributes
to a Character node to make sure they are keyed
with the character.
Step 4: Key Final Weighting
As the character sits on the scooter, the character's
constraint weights are keyed at 0 and the scooter
constraint weights are keyed at 1. Now you can
animate the scooter and the character will follow.
You might want to animate the scooter constraint
weight for the pelvis to move up and down as the
scooter goes over bumps.
Control Nodes and Manipulators
Dynamic or custom attributes can be added to specific nodes to
let you animate several other joints or attributes using one control.
Some custom attributes could roll a finger while others could control
an eye’s pupil dilation. These attributes would later be included in a
character set for automatic keyframing.
Materials and Textures
While geometry describes the shape of an object, its material describes
how its surface will appear when rendered. In the real world, when light
hits a surface, it reacts to the surface qualities. Some of the light is absorbed
and some is reflected. A shiny object reflects light directly, while a matte
object diffuses the light. While reflected light does not actually illuminate
surfaces in Autodesk Maya, materials and textures can be set up to simulate
the real-world reaction of surfaces to light.
To create realistic images, material qualities such as color, specularity,
reflectivity, transparency and surface detail must all be set. Maya uses
special connected nodes called Shading Networks to set up the material
qualities of your surfaces.
Textures let 3D artists create more complex looks for their surfaces. A texture can be a set of procedures set up in Maya or a bitmap image imported into Maya.
Shading Your Models
While geometry defines the shape of a model, shading defines how
the model’s surfaces react to light and details such as color, transparency and texture.
Maya uses Shading group nodes to tell the renderer which materials, textures and lights will affect the final look of a surface. Shading
networks are made up of nodes that define the final look of a rendered
surface. Learning the proper role of each of these nodes will ensure
that you build shading networks that render successfully.
Material Qualities
Before actually looking at a more complex shading network, it is
useful to consider the various material qualities that you will be trying
to achieve. A basic understanding of how an object is shaded can be
translated into attributes on shading network nodes in Maya.
The geometry shown as a wireframe becomes more
realistic with the addition of shading networks that
add color and texture.
Basic Shading
Shading shows you how the surface appears
when illuminated. As light hits a surface, it defines
a gradation from light to dark that makes the
surface’s 3D qualities apparent.
Highlights and Reflections
As a surface becomes shinier, it begins to show highlights and reflections.
Specular highlights show the hotspots where the light sources are reflected,
while reflections simulate light bounced from surrounding objects.
Surface Relief
Surface relief, such as bumps and scratches, helps
add a realistic look to a surface. This effect can be
achieved with special textures called bump and
Displacement Maps.
Transparency
It is possible to see through transparent areas, such as
the glass on this jar, while opaque areas, such as the
label, cap and paint, remain solid. Transparent surfaces,
such as glass, can also bend light. This is called refraction
and can be achieved in Maya using Raytracing.
Evaluating Shading Networks
To preview shading networks and texture maps,
set up a camera, then illuminate your objects with
lights and render. Hardware rendering can be used
to quickly preview textures and some lights, while
software rendering is required to explore all
shading situations. More in-depth discussion of
rendering types is found in the Rendering chapter.
Lighting and camera information is found in the
Digital Cinematography chapter.
Hardware Rendering
It offers a preview of the color of
textures and up to eight lights.
Software Rendering
It is capable of rendering all shading
effects such as bump, specular, shadows
and all lights.
The Anatomy of a Shading Network
Shading networks are built as nodes that control specific aspects of the shading
effect. These networks define how various color and texture nodes work with associated
lights and surfaces. The placement of textures on surfaces is also controlled by nodes
within the network.
There are several ways to view shading networks in Maya. The Hypershade
window lets you easily connect nodes and view the connected attributes. You can
also double-click on any node to open the Attribute Editor. Along the way, you
can zoom in and out in the Hypershade window to get the complete picture.
You can also view shading networks in the Hypergraph window but this
view does not give you swatch images.
2D Texture Placement Node
A texture is mapped in 2D space when it is
mapped to the UV space of the geometry. This
node is used to define the texture’s positioning
and orientation within the UV space.
File Texture Node
File textures are bitmap
images imported into Maya
that can be used for texture
mapping attributes such as
color, bump or transparency.
Shading Group
This node is the root of the shading network. It sends information
about materials and textures, lighting and geometry to the renderer.
In most cases, you will not have to work directly with the Shading
group node because the Material node is where you will make most
of your texture connections.
3D Texture Placement Node
This node lets you define a position in 3D space
for your texture and makes it easier to texture
multiple surfaces as if they were one. The icon in
the modeling views can be used to interactively
establish the texture’s position in world space.
Environment Texture Node
An environment texture is used to simulate
reflections on the surface. This node might be
shared among several shaders and have an
effect on many surfaces.
Material Node
Material nodes define
how the surfaces will
react to light. The term
shader is often used to
describe the role played
by the Material node.
In general, this node
will be the focus of
your work as you build
up all of your shading
networks.
Shading Network Connections
Shading network nodes have input and output attributes. Texture mapping involves making
connections between these Input and Output attributes. One way to connect them is to
drag one node onto another in the Hypershade window. You are then offered a list of
input attributes to map to. In this case, the Output attribute is a default attribute, such as
outColor. For more complex mapping, the Connection Editor allows you to select input and
output attributes directly.
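The same connections can be made in a script, which is sometimes the quickest way to see how the nodes relate. A minimal sketch assuming maya.cmds; the file path is a placeholder.

    import maya.cmds as cmds

    sphere = cmds.polySphere()[0]

    # Material node plus its Shading group.
    blinn = cmds.shadingNode('blinn', asShader=True)
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=blinn + 'SG')
    cmds.connectAttr(blinn + '.outColor', sg + '.surfaceShader')

    # File texture node with a 2D placement node driving its UV coordinates.
    fileNode = cmds.shadingNode('file', asTexture=True)
    place2d = cmds.shadingNode('place2dTexture', asUtility=True)
    cmds.connectAttr(place2d + '.outUV', fileNode + '.uvCoord')
    cmds.setAttr(fileNode + '.fileTextureName', '/path/to/image.jpg', type='string')

    # Map the texture's output color to the material's color input.
    cmds.connectAttr(fileNode + '.outColor', blinn + '.color')

    # Assign the shading group to the surface.
    cmds.sets(sphere, edit=True, forceElement=sg)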
When a texture node is dragged onto
a Material node, you are given a pop-up menu of possible input attributes.
This makes it easy to connect two nodes together.
You can also drag nodes directly onto attribute names in
the Attribute Editor to connect them.
Placing the cursor over the line that connects two nodes
gives you information about the connected input and
output attributes. You can use this information for
future reference. To break the connection, select this line
and delete it.
The Connection Editor can be used when the desired
attribute is not available in the pop-up menu, or when
you want to make a special connection, such as the
Out Color R of one node to the Diffuse of another.
Surface Materials
Materials in the real world react to light by absorbing or reflecting it.
Polished surfaces are shiny because they reflect light with strong highlights,
while rough surfaces have a softer look because they disperse light. A Material
node is a mathematical shading model that simulates a natural reaction to light.
The Material node contains a number of attributes that let you control how
surfaces are shaded. Maya includes several material types, such as Phong, Blinn
and Lambert, that each define a different shading model. The Material node acts
as a focal point for shading and texturing information. It is then fed into the
Shading group node where it is combined with information about lights and the
geometry to be rendered.
Material Qualities
The behavior of light when it strikes a surface in real life is quite complex.
Surface imperfections can distort the angle at which light rays are reflected,
causing them to scatter, become trapped, or be absorbed. This type of scattered
reflected light appears soft and even and is known as diffuse light. Very smooth
surfaces have little or no surface imperfections, so light is not absorbed and
reflected light is more coherent or focused. When this light reaches our eyes, we
see bright specular highlights. These real world behaviors are simulated in Maya
with the Diffuse and Specular attributes.
Ambient Color
This attribute creates the effect of
even illumination, without requiring
a light source. In this image, the
Ambient color has RGB values of
0.25, 0.25, 0.25 on all objects.
Specular Highlights
Specular shading attributes determine the amount
of light that is reflected at a consistent angle,
resulting in an intensely bright region called a
specular highlight. Perfectly smooth surfaces will
have very bright, tiny highlights because there are
no surface imperfections to distort the reflection
angle. Rougher surfaces like brushed metals will
have a softer highlight.
Diffuse
Diffuse determines how much
light is absorbed and how much
is scattered in all directions by
surface imperfections. Rougher
surfaces tend to have higher
Diffuse values while smooth or
mirror-like surfaces have Diffuse
values that approach 0.
In real life, the proportions of the specular and diffuse
components of the total reflected light will vary,
depending on the characteristics of the surface.
Color
Color is made up of red, green and blue attributes. The color of light and reflections will influence this base color.
Bump
This attribute lets you add surface relief by using a texture map to alter the direction of the surface Normals.
Combined Effect
Reflectivity
This attribute controls the amount a surface
reflects its environment. This environment could
be a 3D texture map connected to the material’s
Reflected Color, or actual Raytraced reflections of
objects in the scene.
The Material nodes have attributes in
two main sections. Common Material
Attributes are found on most Material
nodes, while the Specular Shading
Attributes change, depending on the
chosen material type.
Transparency
White is transparent, black is
opaque and other values are semitransparent. You can also use a
texture map to create the
appearance of holes in a surface.
Incandescence
This attribute can be used to make a surface appear to emit light.
The Incandescence attribute is not actually emitting light and has
no effect on other surfaces.
Glow
This attribute, found in the Special Effects section of the Material
node, can be used to add the appearance of atmospheric noise
to a surface.
Reflected Color
This attribute can be texture mapped to
define a reflected environment without
relying on Raytraced reflections. These
texture maps are positioned in world
space and can be assigned to various
materials to make sure the scene’s
reflections are consistent.
Material and Shader types
Several different material and shader types offer you distinct shading
characteristics. The main difference between materials is how they
handle specular highlights when rendered. Shaders are specialized
materials that render differently and specifically for some objects.
Below are six of the most commonly used material types and
five specialized shaders. Various attributes such as color, bump and
specularity can be mapped with textures and will affect the appearance
of the final render.
Materials and shaders can
be dragged from the Create
section of the Hypershade, where
you can assign them and make
texture connections.
After creating a Material node, you
can change the material type quickly
using the pop-up menu in the Attribute
Editor. This will change the types of
attributes available for shading.
Lambert Material
This material type is the most basic
and does not include any attributes for
specularity. This makes it perfect for
matte surfaces that do not reflect the
surrounding environment. The Lambert
material type can be transparent and will
refract in a Raytrace rendering, but without any specularity, it won’t reflect.
Phong Material
This material adds a sharp specular
highlight to the Lambert material.
The size and intensity of the highlights are controlled by the Cosine
Power attribute. This material can
also have reflections from either
an environment map or Raytraced
reflections. The Phong material is
good for plastics.
PhongE Material
This material type adds a
different kind of specular highlight to the Lambert. The PhongE
material includes attributes such
as Roughness that controls
the softness of the highlight,
Whiteness that controls its
intensity and Highlight Size.
Blinn Material
Many artists use this material type
exclusively because it offers high-quality
specular highlights using attributes such
as Eccentricity and Specular Roll Off.
This material type can be edited to look
like a Phong material, which has sharper
highlights, in cases where you need
better anti-aliasing of highlights during
an animation. This material is good for
glass and metals.
Anisotropic Material
This material type simulates
surfaces which have micro-facet
grooves and the specular highlight
tends to be perpendicular to the
direction of the grooves. Materials
such as hair, satin and CDs all have
anisotropic highlights.
Shading Map Material
This material type allows you
to create custom shading on
surfaces. A ramp texture controls
the positioning and color of the
shading and highlights on the
surface. If you want to emphasize
the dark areas, simply darken the
lower end of the ramp.
Ramp Shader
This shader gives you extra control
over the way color changes with light
and the view angle. You can simulate
a variety of exotic materials and
tweak traditional shading in subtle
ways. All the color-related attributes
in the Ramp Shader are controlled
by ramps.
Ocean Shader
The Ocean shader is a specialized
shader with attributes defining
realistic waves on large bodies of
water. It is usually used through the
Fluid Effects > Ocean > Create
Ocean command, which automatically creates nodes required to
render an ocean.
Hair Tube Shader
The Hair Tube shader simulates a thin
tube, where the width of the tube
is small enough that local shading
effects can be ignored. All shading
derives from the view and the tube
direction. Because the highlights
are spread across the entire tube
width, rendering fine hairs does not
require as high anti-aliasing levels.
Layered Shader
A Layered shader allows you to combine two or more Material nodes
that each have their own qualities. Each material can be designed
separately, then connected to the Layered shader. The top layer’s
transparency can be adjusted or mapped to reveal parts of the
layer below.
Layered shaders render more slowly than other materials. Instead
of using a Layered shader, it may be better to set up a regular Material
node that uses a Layered Texture node mapped to Color. Specular and
Diffuse maps can create the appearance of variations in the material
qualities of the surface.
Tin material (top layer)
Transparency map (top layer)
Rust material (bottom layer)
Use Background Shader
The Use Background material is primarily meant
for combining CG and live-action components. The
material is assigned to stand-in geometry that represents surfaces and objects in the live-action plate.
The material then catches shadows and reflections
from objects in the scene. Many of the diagrams in
this book have been rendered with a white background and a Use Background material assigned to a
plane to catch shadows.
White areas of the map are completely transparent, while black areas are opaque. Shown above are
the two materials and a map that would result in the layered shader.
To create convincing results, you must set up your lighting and position your
camera to match the background image to your model. While compositing, you
would remove the image plane and render with a black background. The Use
Background shadows are available in the rendered mask channel.
Digital Cinematography
When preparing a digital scene, lights and cameras play a very important
role. Both lights and cameras make it possible to view objects in a realistic
context. Artistically, they both allow 3D artists to control the look of their
animation with the same creative control as a live-action cinematographer.
In some ways, the most difficult aspect of using lights and cameras
in Maya is that the possibilities are endless. It is very easy to fly a
camera around without a clear sense of purpose or add too many lights
to a scene. The question is whether or not the creative decisions support
the story being told. Therefore, it is a good idea to consider how live-action movies make use of camera moves and lighting.
How Light Works
Light affects the way in which we see the world around us. Light
defines the shape and form of objects and spaces, while at the same
time, it works on an emotional level by setting mood and atmosphere.
Learning to control light is an important 3D skill.
Cinematographers use light to illuminate the objects in the scene,
in order to support the scene’s emotional context. The quality of light in
a digital shot is equally important, although the rules are different.
Real World vs. Digital Cinematography
In the real world, light bounces. Light starts from a light source, such as
the sun or a lamp, and is either bounced or absorbed by all surfaces.
An object appears red because the green and blue light is absorbed
while the red light is reflected. A cinematographer sets up lights, then
measures the light levels, which include both direct and indirect light.
This information is used to adjust the exposure settings of the camera.
In Maya, surfaces are illuminated directly by lights. There is no
bounced light coming from other surfaces. This is because CG lighting
doesn’t bounce. Here, film isn’t exposed to light and camera controls
don’t need to be adjusted. Instead, light levels are controlled using the
intensity settings of the lights themselves.
Creating Lights
Lights can be created using either the
Create > Lights menu or using the
swatches in the Hypershade. Light
attributes can be edited using the
light’s shape node.
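Lights can also be created from a script. A minimal sketch assuming maya.cmds; the position and attribute values are illustrative.

    import maya.cmds as cmds

    # spotLight returns the light's Shape node; its parent is the Transform node.
    lightShape = cmds.spotLight(coneAngle=45, penumbra=10, intensity=1.5, rgb=(1.0, 0.95, 0.9))
    lightTransform = cmds.listRelatives(lightShape, parent=True)[0]

    # Position and aim the light by editing its Transform node.
    cmds.move(5, 8, 5, lightTransform)
    cmds.rotate(-35, 45, 0, lightTransform)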
In the Real World
The film's exposure to light is controlled by the camera.
1. Light is emitted from a source with a controllable intensity. Direct light is hard, while light bounced from other surfaces is softer.
2. Light levels are measured using a light meter to determine the proper exposure settings for the camera.
3. Camera controls such as F-Stop, shutter angle, exposure time and film speed are set to control how much light is exposed to the film.
On the Computer
The scene's lighting is directly controlled by the lights.
A. The renderer samples a point on a surface from the camera's point of view.
B. A list of associated lights is taken from the surface's shading group and used to determine which lights should be used in the shot.
C. The light's attributes, such as intensity, color and decay, are used to calculate the illumination on the surface.
To simulate a bounced light in Maya, you would need to use a secondary light such as an area light or an ambient light.
Positioning Lights
Lights can be positioned using the Show Manipulator Tool. Each light
is displayed with an eye point that defines the position of the light
source and a look at point that defines where the light is pointing.
Adjusting these points sets the translation and rotation values on the
light’s Transform node.
The line between the eye and look at points defines the light’s
direction. Spot, Area and Directional lights must have their directions
set to work properly, while Ambient and point lights only require an eye
point position.
You can also position a light by selecting the light, then choosing
Panels > Look Through Selected. This lets you use the Alt key to dolly and
pan the view as if it were a camera. This method often makes it
more intuitive to position the light and its look at point.
Looking through a spot light.
Light Types
Maya has several light types, each of which
illuminates a scene differently. A typical scene
combines a number of different light types. You can
switch between light types in the Attribute Editor.
Light Nodes
When a light is created, it is built
with two nodes. The Transform
node holds all the information
about the light’s position and
orientation. For most light types, scaling a light will not change its
shape, or the effect of its illumination, but it will allow you to change
the size of the light icon to make it more visible in the workspace.
The one exception is with area lights, as their intensity is affected by
scaling. The Shape node holds all the information about the light’s illumination. Some of the spot light attributes can be edited when using
the Show Manipulator Tool by clicking on the Cycle Index icon. Each
click allows access to different manipulators that control attributes such
as Cone Angle or Penumbra Angle.
Spot
Spot lights emit light that radiates
from a point within a limited cone
angle. You can use this cone angle
to limit the area receiving light.
Directional
Directional lights use parallel rays of
light to illuminate a scene. Shading is
very uniform without any hotspots.
These rays are similar to the light of
the sun, which hits the earth with
parallel rays.
Spot Light Attributes
The spot light's Shape node
contains attributes that control how
the light will illuminate the scene.
Since the spot light contains the
most attributes, it is used as the
example here. The other light types
contain a subset of the Spot
Light Attributes.
Point
Point lights emit light in all
directions, radiating from a single
point. This creates an effect similar
to a light bulb. This light creates
subtle shading effects with definite
hot spots.
Intensity
This attribute determines how much light is emitted from the light source. As you increase the Decay and Dropoff values, you need a more intense light.

Decay
This attribute determines how much the light intensity diminishes as the light gets further from its source. Therefore, if you choose to use Decay, you need to increase the Intensity.

Cone Angle
This attribute determines the width of the spot light’s cone of influence. The areas outside the cone are not illuminated.
Area
Area lights emit light using a two-dimensional area. The area light’s icon can be used to help define the light’s direction and intensity. A larger area light has a stronger intensity.

Color
You can set RGB values for the light being emitted. This will have an influence on the color of your scene.

Penumbra Angle
This attribute creates an area at the edge of the spot light where the light fades. A larger value here creates a soft look for the light.
Ambient
Ambient lights emit light
uniformly in all directions. The
Ambient Shade attribute adds
positional behavior. Bump maps
are not visible with ambient
light alone.
Hotspot
The point where the light is most intense is
referred to as the hotspot. It is also known as a specular highlight. The look of the
highlight is a result of the intensity of the
light and the shading qualities of the
surface’s Material node.
Dropoff
This attribute determines how
much the light intensity diminishes as it gets to the outer
edge of the light. This puts
more emphasis on the
light’s hotspot.
Volume
Volume lights emit light in all
directions for a finite distance
based on a 3D geometric shape.
The light shape can be a box, a
sphere, a cylinder or a cone.
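The attributes described above can also be set from a script. The sketch below assumes maya.cmds; the attribute names mirror the spot light’s Shape node in the Attribute Editor, but treat them as assumptions to check against your Maya version.

import maya.cmds as cmds

spot = cmds.spotLight(name='stageSpot')

cmds.setAttr(spot + '.intensity', 2.0)       # how much light is emitted
cmds.setAttr(spot + '.decayRate', 2)         # 0 = no decay, 2 = quadratic
cmds.setAttr(spot + '.coneAngle', 50)        # width of the cone of influence
cmds.setAttr(spot + '.penumbraAngle', 8)     # soft fade at the cone edge
cmds.setAttr(spot + '.dropoff', 4)           # emphasis on the hotspot
cmds.setAttr(spot + '.color', 1.0, 0.9, 0.8, type='double3')  # warm tint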
Casting Shadows
One of the most dramatic aspects of lighting occurs in areas of no
light. Shadows add drama to your scene while helping to anchor
characters and props to the ground. If your character leaps into the air,
you know what is happening because the shadow and character no
longer touch each other.
In Maya, there are many factors that affect the look and quality of
your shadows. You can choose from Depth Map and Raytraced shadows
which offer different levels of quality and rendering speed. Sometimes
light attributes, such as Cone Angle, will affect your shadows and must
be taken into account. The more you know about how shadows are cast,
the easier it will be to adjust the appropriate attributes.
Depth Map Shadows
Depth Map shadows are the more efficient of the two shadow types. A
Depth Map shadow can be created by setting Use Depth Map Shadows
to On in the light’s Attribute Editor.
Depth Map shadows work by recording the Z-depth information
from the light’s point of view, then using this information to evaluate
whether or not a point in your scene is in shadow. The diagram below
shows how a spot light evaluates Depth Map information to generate
shadows. You can see that the Depth Map is generated from the light’s
point of view.
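Turning on Depth Map shadows can also be done from a script. The sketch below assumes maya.cmds; the attribute names follow the Depth Map Shadow Attributes section of the light’s Attribute Editor.

import maya.cmds as cmds

spot = cmds.spotLight(coneAngle=45)

cmds.setAttr(spot + '.useDepthMapShadows', 1)   # Use Depth Map Shadows: On
cmds.setAttr(spot + '.dmapResolution', 1024)    # resolution of the Depth Map
cmds.setAttr(spot + '.dmapFilterSize', 3)       # larger values soften the shadow edge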
With and Without Shadows
Here are two shots of a scene. The first does not use shadows and the second one does. You can see how the scooter in the second image is much more grounded, and it is easier to read the scene’s depth. While shadows do require extra work when you set up a scene, they are well worth the effort.

Step 1
When rendering starts, a Depth Map is created that measures how far the various objects are from the light. White is used to show surface points closest to the light, while the various shades of gray show a greater distance from the light.

Step 2
When a point on a surface is being shaded during the rendering process, the distance is measured between the point and the light source.

Step 3
This measurement is then compared to the depth information stored in the Depth Map. If the point’s distance is greater than the distance stored in the Depth Map, the point is in shadow.

Step 4
If the point is in shadow, the light’s illumination does not contribute to the shading.
Note: Another light, such as an ambient light, may illuminate parts of the scene where the spot light does not. That is why you can see the wood texture underneath the chair in this image.
Raytraced Shadows
To calculate Raytraced shadows, Maya sends a ray
from the camera and when this ray hits a surface, it
spawns another ray toward the light. This shadow
ray reports whether or not it hits any shadow-casting objects on its way to the light. If it does hit a
shadow-casting object, then the original surface is
in shadow.
Raytraced shadows have the disadvantage of
being slower to render than Depth Map shadows.
However, depending on the look you are interested
in, there are several reasons why you would use
Raytraced shadows in your scene. These include
transparent shadows, colored transparent shadows
and shadow attenuation.
If you want Raytraced shadows, but not
reflections and refractions, then set Reflections and
Refractions to 0 in the Render Quality section of the
Render Settings.
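Scripted, this setup looks like the sketch below; it assumes maya.cmds, and the defaultRenderQuality attribute names are taken from the Maya software renderer’s quality node, so verify them for your version.

import maya.cmds as cmds

spot = cmds.spotLight(coneAngle=45)
cmds.setAttr(spot + '.useRayTraceShadows', 1)      # Use Ray Trace Shadows: On

# Raytracing must be enabled globally for shadow rays to be traced.
cmds.setAttr('defaultRenderQuality.enableRaytracing', 1)

# Raytraced shadows without reflections or refractions, as described above.
cmds.setAttr('defaultRenderQuality.reflections', 0)
cmds.setAttr('defaultRenderQuality.refractions', 0)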
Transparent Shadows
When casting shadows from transparent objects,
Depth Map shadows do not take into account the
transparent qualities of a surface, while Raytraced
shadows do. This may be a deciding factor when it
comes to choosing which technique you will use to
cast shadows.
Depth Map Shadows
When a Depth Map is generated at the start of a render, it does not take transparency into account. For this reason, the shadow generated by a Depth Map will appear solid.

Raytraced Shadows
Raytraced shadows are computed during the rendering process. Therefore, the transparency of the object is taken into account. As a result, Raytraced shadows clearly represent the details of a transparent or transparency mapped object.
Depth Map Default: 732 x 450 pixels, Render Time - 1x
Depth Map Soft: 732 x 450 pixels, Render Time - 1.05x
Raytrace Default: 732 x 450 pixels, Render Time - 1.38x
Raytrace Soft: 732 x 450 pixels, Render Time - 11.87x
Default Renderings
Using the default settings built into a spot light, the Raytrace rendering offers a sharper shadow than the Depth Map shadows. Rendering time is longer for the Raytraced scooter.

Soft Shadows
By tweaking the Depth Map shadow attributes, you can get much better results. Using Light Radius and Shadow Rays to soften the Raytraced shadows, you can see that the rendering took even longer.
Shadow Attenuation
By default, Raytraced shadows look more accurate and crisp than Depth
Map shadows. This can result in an undesirable computer-generated
look in most cases. To avoid this, the shadows can be softened using a
combination of a non-zero Light Radius and Shadow Rays greater than
1. These controls are found in the Raytrace Shadow Attributes section
of the light's Attribute Editor.
The biggest difference between a Raytraced soft shadow and a
Depth Map shadow is that a Depth Map shadow is evenly soft around
its edges. By contrast, a Raytraced shadow will dissipate or attenuate
with distance from the shadow-casting object. This can be slow to
render but is often used to create beautiful looking shadows in
still renderings.
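A minimal scripted version of this soft-shadow setup is sketched below, assuming maya.cmds; Light Radius and Shadow Rays correspond to the lightRadius and shadowRays attributes on the light’s Shape node.

import maya.cmds as cmds

spot = cmds.spotLight()
cmds.setAttr(spot + '.useRayTraceShadows', 1)
cmds.setAttr(spot + '.lightRadius', 1.0)    # a non-zero radius spreads the shadow rays
cmds.setAttr(spot + '.shadowRays', 8)       # more rays give smoother, slower shadows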
Colored Transparent Shadows
Another feature of Raytraced shadows is that you can
create colored transparent shadows. For example, in
the real world, when light passes through a stained
glass window, you see the colors transmitted by the
light passing through the window onto the floor. In
Maya, Raytraced shadows will automatically create
colored shadows when the transparency channel on
a material is colored or when it is mapped with a
colored texture.
Raytraced Shadows
The color of the transparency automatically casts a colored shadow.
Depth Map Soft Shadows
The light’s Dmap Filter value affects the softness of a Depth Map shadow.

Raytraced Soft Shadows
Light Radius and Shadow Rays define the softness of a Raytraced shadow.
Shadow Limit
When working with Raytraced shadows, you should also set the
Shadow limit attribute. For example, if you have a shadow-casting
object with several transparent surfaces behind it followed by an
opaque surface, you would expect to see a shadow on the opaque
surface. In order to see this shadow, set the Ray Depth Limit on the
light to a value that is the number of transparent surfaces + 1. Be
sure that the Shadow limit in the Raytracing Quality section of the
Render Settings is not set lower than this value, or you will not see
your shadow.
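For example, with three transparent surfaces in front of an opaque floor, the worked numbers are 3 + 1 = 4. A minimal sketch, assuming maya.cmds and the rayDepthLimit and defaultRenderQuality.shadows attribute names:

import maya.cmds as cmds

spot = cmds.spotLight()
cmds.setAttr(spot + '.useRayTraceShadows', 1)
cmds.setAttr(spot + '.rayDepthLimit', 4)             # 3 transparent surfaces + 1

# The global shadows limit must not be lower than the light's value.
cmds.setAttr('defaultRenderQuality.shadows', 4)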
Rendering
A 3D artist’s ultimate goal is to create a sequence of images that can be
synchronized with sound and played back as a movie. The creation of
these images occurs in the renderer where surfaces, materials, lights
and motion
are all taken into account and turned into bitmap images.
The art of rendering involves finding a balance between the visual
complexity needed to tell a story and the rendering speed that determines
how many frames can be rendered in a given time. Simple models
render quickly and complex models take longer.
Rendering Scenes
Rendering is where all of the work in setting up models, textures, lights, cameras
and effects comes together into a final sequence of images. In very simple terms,
rendering is the creation of pixels that are given different colors in order to form a
complete image. A render involves a large number of complex calculations that can keep
your computer busy for quite a while. The key at this stage in the animation process is to
find a way of getting the best image quality and the fastest render times so that you can
meet your deadlines.
Before exploring the specific details of rendering animation sequences, it is important
to realize that there are two different methods to render images: software and hardware
rendering. More specifically, there are three different software renderers shipping with
Maya. They are Maya software, mental ray and vector.
The software renderers are considerably more complex than hardware rendering
and require a great deal more knowledge and understanding to get the best results. The
renderer is where all of your scene data and settings are handed off to the software and
render calculations are performed that result in final bitmap images. To give you some
insight into what Maya actually does with your scene data during the software render
process, an overview is shown throughout this chapter.
Hardware Rendering
Hardware shading, texturing and lighting use the
computer’s graphics hardware to display objects on
the screen. The Maya hardware renderer presents
a seamlessly integrated rendering solution that
leverages the ever-increasing power of next-generation graphics cards to render frames. It is currently
used primarily to render hardware-type particle
effects and previews, which are later enhanced in a
compositing application. However, this is changing
as the future of rendering lies in this type of
renderer, especially due to the performance advantage over the other renderers.
Maya also supports CgFX and ASHLI hardware
shaders. To view these shaders, you must have
a qualified graphics card. They also can only be
rendered with the hardware renderer.
Hardware Shading
You can display hardware shading, texturing and lighting in any view panel using the panel’s Shading menu.

Hardware Renderer
The Hardware renderer can produce broadcast-resolution images in less time than with software rendering.
Software Rendering
All three software renderers use complex algorithms
to combine elements, such as geometry, cameras
and textures, with the physics of light to create final
bitmap images. Because some aspects of light’s true
behavior would be prohibitively slow to calculate,
most renderers let you employ shortcuts, such as
ambient lights, in place of Global Illumination to
make sure that rendering times support the
production cycle.
Software rendering has the advantage of being
more flexible than hardware rendering. Software
companies can add functionality by changing
algorithms in the code without being restricted by
the computer’s hardware. Therefore, while software
rendering is not as fast as hardware rendering,
the added functionality lets you achieve more
sophisticated results.
Software Rendering
You can create a software rendering in the Render view window or by using
Render > Batch Render. The Batch Renderer is always used to render animation and can also be launched from a Command Line.
Software Render Process
LEGEND (render flow diagram): Main render flow; When Raytracing is on; Triggered when needed; When in IPR mode.

1. Geometry Filtering
To start the rendering process, Maya determines which objects in the Maya scene file will be rendered. Any objects that are hidden, templated, or do not belong to a shading group will not be rendered.

2. Light Depth Maps
From the point of view of each shadow-casting light, Maya renders Z-depth files called Depth Maps to be used later to compute the shadows in the scene. Two Depth Maps are created because Use Mid Dist is turned on by default in the light’s Attribute Editor.

3. Tiling/Primary Visibility
By looking at the bounding boxes of the geometry, Maya can determine which objects are visible to the camera and approximately how much memory will be required to render them. Based on these estimates, the image is divided into rectangular regions called tiles. Each tile is a manageable amount of data for the renderer to process at one time. It is possible to explicitly set the maximum tile size (in pixels) from a Command Line.
Tessellation
Maya uses triangles at render time to approximate NURBS surfaces, Subdivision surfaces and displacement mapped or quadrilateral polygonal objects. This process is called tessellation. Tessellation is time-consuming, so the artist sets attributes on a per-object basis to manage the number of triangles the renderer will use. During the render process, tessellation is triggered only when needed.
4. Shading
Maya computes all of the texturing, lighting, shading, anti-aliasing, 3D motion blur, etc. for the visible surfaces in each tile.
Deep Raster Generation
When an IPR render is launched, a
temporary file is created and written
to disk in the iprImages directory.
This deep raster file stores the image
itself and all the data required to
allow interactive tuning of shading
and lighting attributes during an
IPR render.
5. Post Processing
After the frame is rendered, Maya completes the final image by creating and automatically compositing any of the post process effects that you have specified. These effects include Depth of Field, 2D motion blur, Glow, Paint Effects, Fur, etc.

Raytracing
If Raytracing is enabled in the Render Settings window, any secondary rays needed for reflections, refractions or raytraced shadows will be computed and will contribute to the shading process. The hybrid nature of the rendering architecture ensures that primary rays are not raytraced and only specified objects participate in Raytracing. This allows for a highly efficient approach to achieving added realism in a render.
Render Output
Based on your post-production requirements, your final rendered
image or sequence of images will need to suit the medium you are
outputting it to. These image properties, such as size, format and frame
padding, are set from the Render Settings Window.
Image Formats
The Image Formats pop-up list allows you to specify the format you need your rendered frames to be in. The online documentation of Maya has a detailed description of how each of these formats handles image, mask and depth information. The Maya IFF file format is also documented. Some formats are only available on certain platforms.
Rendering Animation
An image file name consists of three components when rendering an
animation: file name, frame number extension and file format extension. A combination of these three components is referred to as the
file name syntax.
The file name is the base name for all images in the animation
sequence. The frame number extension represents the frame in the
Time Slider in which the image is rendered. The file format extension
indicates your chosen file format. You can see these combined as a
preview at the top of the Render Settings window.
You need to tell Maya what frames to render when rendering an
animation. After a Start Frame and an End Frame are specified, Maya
renders all the frames in between by default. However, if you want to
render every 10th frame for test purposes, you can set the By Frame
attribute to 10. In this case, Maya only renders every 10th frame,
beginning with the Start Frame number.
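These settings can also be made from a script or overridden when launching the Batch Renderer. The sketch below assumes maya.cmds; the defaultRenderGlobals attribute names mirror the Render Settings window but should be checked against your Maya version.

import maya.cmds as cmds

cmds.setAttr('defaultRenderGlobals.animation', 1)                # name.#.ext file name syntax
cmds.setAttr('defaultRenderGlobals.imageFilePrefix', 'scooter', type='string')
cmds.setAttr('defaultRenderGlobals.extensionPadding', 4)         # e.g. scooter.0001.iff
cmds.setAttr('defaultRenderGlobals.startFrame', 1)
cmds.setAttr('defaultRenderGlobals.endFrame', 240)
cmds.setAttr('defaultRenderGlobals.byFrameStep', 10)             # every 10th frame for tests

# The frame range can also be given on a command line, for example:
#   Render -s 1 -e 240 -b 10 scooter.mb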
Renderable Camera
The pop-up camera list allows you to choose which camera will be used in the Batch render. It is possible to render from more than one camera in a Batch render. This Render Settings attribute does not affect which camera is used while rendering in the Render View window within Maya.
Render Resolution
The Render Resolution refers to the
dimensions of a rendered frame in
pixels. The list of presets in Maya
allows you to quickly select a resolution from a list of those commonly
used in the industry.
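The resolution can also be set by script; the sketch below assumes maya.cmds and the defaultResolution node that sits behind the Render Settings presets.

import maya.cmds as cmds

cmds.setAttr('defaultResolution.width', 1280)
cmds.setAttr('defaultResolution.height', 720)
cmds.setAttr('defaultResolution.deviceAspectRatio', 1280.0 / 720.0)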
Channels
The color channels of a rendered image are made
up of red, green and blue (RGB). A mask channel,
or alpha channel, stores information about the
coverage and opacity of the objects in your
scene. This channel allows you to work with
your rendered images in a compositing software
application like Autodesk Combustion or
Autodesk Toxik software.
A depth channel records the distance from
the camera to the objects in the scene. This is
often called Z-depth and you can look at it using
the Z-key in FCheck.
Maya can render an image file that contains
RGB color information, a mask channel, a depth
channel or any combination of the three. The flags
in the Render Settings window let you choose
what channels will be rendered. Some image
formats do not support embedded mask or depth
channels; in these cases, Maya generates a separate mask or depth file and puts it in the mask or
depth subdirectory of your current project.

Image Formats
The Maya Render Settings lets you render your
images in the specific format that you need for your
production pipeline. The default image format is the
Maya IFF, but you can choose from a list of many
standards used in computer graphics such as
TIFF, GIF, JPEG, PSD, etc. Most of the formats
are 8 bits per channel, but Maya also renders
10 bit Cineon and several 16 bit formats which are
commonly used for film. Human eyes can perceive
more colors than 8 bits can represent, so high-resolution formats, such as film, sometimes require
better color definition to look realistic.
Maya also lets you render directly to AVI or
QuickTime movies. While this may be convenient,
you might also consider rendering in one of the other
formats that creates separate images, so that you
have more flexibility. An application such as Autodesk
Combustion allows you to create a movie from the
images after any adjustments have been made.
color (RGB)
mask
depth
Rendering Fields
When an NTSC television displays images, it uses a technique
called interlace, where electron beams horizontally scan all of the odd-numbered lines first and then fill in all of the even-numbered lines (for
PAL, even lines are scanned first). Each of these scans is called a field
and they are half a frame apart in time. This means that instead of 30
frames per second, you actually see 60 fields per second (50 fields per
second for PAL) when you watch television.
Rendering fields in Maya takes advantage of this interlace
technique to achieve smoother motion on video. Instead of rendering
whole frames, Maya renders each frame in two fields where
the odd scan lines are rendered separately from the even
scan lines at half-frame time intervals. The two fields
are automatically put back together to form a whole
image at the end of the frame render. The result
is that the motion of an object in-between
frames is captured in the render. In the final
interlaced image, moving objects will look
ghosted when viewed on your computer
monitor because the monitor shows
you both sets of scan lines at the same
time. However, when the same image
is viewed on a television screen, the
objects will look correct.
AVI (.avi)
AVI is the Microsoft Audio Video Interleaved movie
file format. Maya only renders uncompressed AVI
files as these are the most common for reading into
other applications.
QuickTime (.mov)
This is the file extension for Apple QuickTime
movie files. Maya only renders out uncompressed
QuickTime files as these are the most widely readable by other applications.
Animated scooter rendered as
interlaced fields.
Interactive 3D
Through video games, visualization and the world wide web, 3D
computer graphics have moved beyond the movie screen and into our
everyday lives. Today, computer games consist of complex 3D characters and environments produced under aggressive production schedules. As such, game production has shifted away from the traditional
animation pipeline, adopting new ideas such as game engines and blind
data.
While many of the concepts covered in this book can be used by
game developers to help them in their work, gaming environments
have rules that are required to get graphics to play in the most efficient
manner possible. Therefore, the kinds of models and textures that game
developers can
create are limited.
Interactive 3D
The rendering of 3D scenes into animated movies
supports a narrative tradition where a story is
told to an audience. The audience plays a passive
role in relation to the content. In our digital world,
this relationship is changing as audiences begin to
demand more interaction with the content they
are viewing.
A Different Workflow
Interactive 3D has different rules compared
to film and video. Understanding these rules will
help you make the best choices when deciding on
such issues as geometry type, texture mapping and
animation.
The Video Game Generation
Interaction with content is most evident in video
games where the main character is the viewer or
“player” who makes key decisions that shape the
resulting action. Players want to be immersed in
new worlds where they can become the actors and
the decision makers.
The ability to interact in 3D environments
similar to our own serves to further enhance
the experience. For this reason, Maya includes a
number of tools designed to help game developers
create these 3D environments and the actors who
inhabit them.
Modeling
Models built for interactive 3D are primarily polygonal models that use
controlled polygon counts to suit the gaming system. Some next-generation
game systems already have support for Bezier geometry, and NURBS and
Subdivision surfaces may soon follow.
Animation
When playing a game, the player drives the motion of the scene. Therefore,
animation is stored in small sequences that are controlled using the joystick
(the player's control). For character animation, some systems support inverse
kinematics (IK) and some do not.
3D on the Web
While interactive 3D is most prevalent in video
games, similar content is beginning to appear
on the web for more mass-market consumption.
E-commerce sites are beginning to use technologies
such as VRML (virtual reality modeling language)
to let customers preview their products in 3D.
Effects
Generally, effects are added to video games using 2D sprites that are composited
on the fly. With next-generation systems, there will be support for more complex
dynamics and particle effects. The limits are set by the system itself.
Rendering
Whereas an animation is rendered to a series of still frames, games use
hardware rendering to present content to the player. Models and textures
must be designed for real-time playback. Lighting and texturing tricks help
attain faster playback.
A Short History of Gaming
Sprite-based Games
Bitmap graphics have been the driving force for video games from the beginning. These bitmaps were created as “sprites” that would be overlaid during game play in reaction to the player’s moves. To preserve playability, the number of colors was limited to suit the power of the game system. To this day, games are being produced using 24 bit sprites rendered in programs such as Maya. Sprites are sometimes used in real-time games to add effects.

Real-Time 3D
Once graphics engines began supporting the display of polygons to the screen, 3D became interactive. With the creation of true 3D environments, players could now roam around a game and discover its secrets. Every year, the systems become stronger and there are fewer restrictions on the games.
1 bit B&W Pixels (~1980–1990)
These games used simple bitmaps to generate the game play. Graphics could be easily created using simple pixel-based paint systems.

8 bit Color Pixels (~1985–1994)
Adding more colors offered richer environments, but the limited palette meant that the sprites had to be painted by hand.

24 bit Color Pixels (~1993–Present)
With a more complex color space, 3D programs could now be used to generate sprites. This created richer gameplay environments.

Low Polygon 3D (~1995–2000)
These games offer the freedom of motion within a 3D environment, but models maintained a jagged look because of low polygon counts.

Next Generation (~2000–Future)
The polygon counts are now becoming high enough for much smoother artwork and more stunning visual effects.
Setting the Limits
To make a game truly interactive, the game art must not interfere with the game play. Therefore, the speed at which a game can get polygons and other sprite-based graphics to the screen is crucial to this interactivity.
Hardware
Games are run on both
computers and game
consoles. The ability of the
hardware to process graphic
information has a strong impact on the
look of the final game. Hardware rendering
is measured in Polygons Per Second (PPS) and the higher the PPS,
the more complex an interactive 3D scene can be. This rate is affected by textures, animations and in-game effects, which can lower the hardware’s total PPS.
The Rules Are Changing
In the world of interactive 3D, the term
next-generation is a bit of a moving target.
Systems are becoming more powerful, game
engines more open and game developers more
inventive. Also, systems are beginning to accept
different geometry types, and hardware rendering
techniques are becoming more realistic.
As mass market uses of interactive 3D rise,
these techniques will begin to be incorporated into
more open standards, such as Web3D, which will
help make interactive 3D a more integrated part
of how we work and live.
Game Engine
The game engine is the software that handles the user’s input,
calculates animation and dynamics, and renders the graphics to the
screen. The game engine is built by the game developer to create
the most sophisticated game experience possible for players. When
designing a game engine, a developer must decide on a production
pipeline that matches the intended hardware platform.
Game Creation
To create a game environment, a game developer must put together
a wide range of digital content such as 3D characters, levels, motion,
behavior, textures, shaders and lighting. This information is then given
to game programmers, who build it into a real-time game. The coordination of all these parts is a complex undertaking that requires the game
developer to work with concept artists, modelers, animators, texture
artists, level designers, programmers, writers and musicians, to name a
few. With the ability for near photorealistic output from next-generation
systems (game consoles, graphics hardware and PCs), development
teams have grown from small teams of 5 or so people, to larger teams
of 30 or more.
Gaming Workflow
While the exact workflow varies from studio to studio, there is a basic gaming workflow. This workflow, from design document to concept
art, game art, level building and game programming, is not necessarily
a linear one as all of these activities can happen at the same time.
Because of the number of tasks involved, establishing a workflow at the
beginning of a project can mean the difference between making a deadline and missing it.
Design and Planning
Every game starts with a design document that lays the groundwork for the player’s real-time experience. Characters, environments, puzzles and music are laid out along with detailed design sketches that become the game’s storyboards. The plan defines the workings of the game, while the sketches define the look and feel.

Game Art
Starting with the design sketches, 3D characters, props and sets are built using techniques defined by the chosen game engine. Texture maps, lighting and other game-specific data are also added. Models are built for different levels of detail since props and characters in the foreground can have more detail than those in the background.
Level Building
Level building is where the various digital game art pieces are brought together to create the gaming environment. This is where many objects will be placed repetitively to make the most efficient use of game memory. The level builder is designed to work intimately with the actual programming of the game.

Game Programming
This is where most 3D artists tend to be less involved. The artwork and levels are run through a game engine designed for a specific console. Ideally, the game art is built in such a way that the programming runs very smoothly.
Game Art
All the visual content created for a game, whether it be concept drawings, 3D assets or levels, can be referred to as game art. The concept drawings can be used in Maya as a reference to create 3D assets. 3D assets can consist of character models or items for use in the game. These 3D assets can also be rendered to create sprites. Levels often use varied amounts of these 3D assets to help create a feeling of realism in the space and for items like weapons or power-ups.
Level of Detail
One of the ways to keep frame rates consistent in
a game is to use a technique called Level of Detail
(LOD). With this technique, multiple versions of a
model created with varying polygon counts are
switched depending on their proximity to the camera.
When the model is close to the camera, the
higher detailed model will be used. As the object
moves farther away, the higher detail models will be
replaced by lower detail models. Maya will let you
take a group of models and assign them to an LOD
group. This group will switch between the models
based on their distance from the camera in Maya.
Environments
Environments, or levels, comprise the
world that the player sees and interacts
with. These environments can be broken
down into interiors and exteriors. They
can be created in Maya using polygon
manipulation, procedurally with height
maps or a combination of the two.
Props
Props can be items like the pictures and benches
in the gallery or a key that the player must collect.
Props help a game convey its realism or lack of
realism. Maya toolsets allow the creation and
modification of these props.
Characters
Characters help bring a game to life.
A character can be the hero of the
game or its evil villain. Maya
character animation tools allow
for the creation of animation that
can breathe life into your game
characters. Animation that you
create can also be reused on
different characters to speed up
production.
LOD Visualization
When setting up the LOD group, the high-detail model is selected first, with lower detail models selected in descending order.
You need to have two or more objects
selected to create a level of detail group.
Edit > Level Of Detail > Group will assign
them to the LOD node and automatically
set up distances to switch models. You can
select this node and change the distances
that the models will switch. For example,
looking at the images to the left, the top
image shows a model that is closest to
the camera and consists of 1422 polygons.
The second image was set up to switch at
8 units from the camera; it contains 474
polygons. The third and final model of the
LOD group will switch at 20 units from
the camera and consists of 220 polygons.
You can also use a quad with a software
render of your model mapped to it as a
texture map.
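The switching logic itself is simple. The sketch below is a plain Python illustration of the distance thresholds described above, independent of Maya’s LOD node; the model names are hypothetical, while the distances and polygon counts are taken from the example.

def pick_lod(distance_to_camera):
    """Return which version of the model should be displayed."""
    lods = [
        (8.0, 'scooter_high'),    # 1422 polygons, used closer than 8 units
        (20.0, 'scooter_mid'),    # 474 polygons, used between 8 and 20 units
    ]
    for threshold, model in lods:
        if distance_to_camera < threshold:
            return model
    return 'scooter_low'          # 220 polygons, used beyond 20 units

print(pick_lod(5.0))    # scooter_high
print(pick_lod(12.0))   # scooter_mid
print(pick_lod(30.0))   # scooter_low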
Color and Texture
The skillful use of color and texture on a model can help you imply detail. You can apply color to a model and use a grayscale texture map to suggest shading. This technique can also help you save texture memory. Maya not only allows you to paint color information on a model, but also to create textures by using Paint Effects in canvas mode.

Blind Data
Traditionally, data such as item spawn points, environmental conditions (slipperiness, noisy floors) and condition triggers (enemy attacks, door opens, platform moves) fell into the game programmer’s domain. Today, the ability to assign this information visually also allows 3D artists and level designers to control where it’s placed on a model or environment.
Textures
Textures are used to add detail where no geometric
detail exists. Some textures will be on what is
called a decal. A decal can be a single texture
map or multiple textures laid out into a square to
maximize texture space.
Color per Vertex
Color information can be stored at the vertex level
and is derived from prelighting geometry, painting
the color on or setting color values at each vertex.
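A minimal sketch of storing color at the vertex level, assuming maya.cmds and the polyColorPerVertex command:

import maya.cmds as cmds

crate = cmds.polyCube(name='crate')[0]
cmds.select(crate + '.vtx[0:3]')                  # choose some vertices
cmds.polyColorPerVertex(rgb=(0.2, 0.2, 0.25),     # bake a color onto them
                        alpha=1.0,
                        colorDisplayOption=True)  # display vertex colors in the view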
Lightmaps
Lighting information in a scene can be saved to a picture file that can be applied to the base texture as a light map so that lighting for a level does not have to be calculated at runtime.

Painting Blind Data
Blind data can be painted onto components using the Artisan tool sets to help you see where the data is being applied. You can set multiple data types and assign colors to them. Here, the door trigger is being painted. You can see the color applied to the model.
Official Autodesk Training Guide: 8.0 Foundation (www.autodesk.com/store-maya)