LiveMove 2™ User Manual
AiLive — Machine Learning for Games
www.AiLive.net
© 2010 AiLive Inc.
All rights reserved. No part of this manual may be reproduced or transmitted in any form or by any means
without the written consent of AiLive Inc.
AiLive, LiveMove, LiveMove Pro, LiveMove 2 and LiveAI are either registered trademarks or trademarks of
AiLive Inc. in the United States and/or other countries. Other product and company names mentioned
herein may be the trademarks of their respective owners.
Contents
1 Introduction to LiveMove 2 1
1.1 Motion Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motion Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Key concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 LiveMove components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 The structure of the manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Tutorial 5
2.1 Check requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Understand the data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Start lmMaker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Design your moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Create a classifier project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Collect motion examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.7 Build a classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.8 Test the classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.9 Tune the classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.10 Build a buttonless classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.11 Test the buttonless classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.12 What’s next? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 lmMaker 23
3.1 Data organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Project window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Creating a new project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Classification Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.5 Project operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.6 Collecting example motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6.1 Recording control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.6.2 Collecting directly from your game . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7 Building classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7.1 Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7.2 Tunability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.7.3 Capacity and tunability trade-offs . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.7.4 Slack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.7.5 Buttonless classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.8 Testing classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.9 Debugging classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.9.1 Outliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9.2 View motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9.3 Motion Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.10 The move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.1 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.2 How to read the move tree output . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.10.3 Building a move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.10.4 Move tree quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.11 Move label file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.12 Display options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.13 lmMaker command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5 Guidelines 77
5.1 Understand your data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Design your Gold Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3 Collect correct but varied examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.4 Check your data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.5 Give immediate feedback to the player . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.6 Design moves that diverge early . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.7 Build the classifier before the move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.8 Understand buttonless recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.9 Suggested workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.10 Collecting examples for mixed buttonless classification modes . . . . . . . . . . . . . . . . 86
5.11 Integrating LiveMove into your tool chain . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6 Tracking 89
6.1 Orientation Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.2 Position Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.3 LiveMove 2 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.4 Adding Tracking to Your Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.5 Starting Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.6 Dealing with Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.6.1 Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A LiveMove 2 FAQ 93
B Applications 103
B.1 The PC Application lmsConvertMotions . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.2 The PC Application lmHostIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.3 The Console Application lmRecorder . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.4 Running lmHostIO.exe and lmRecorder from lmMaker . . . . . . . . . . . . . . . 104
Chapter 1
Introduction to LiveMove 2
• Use LiveMove 2 to build motion recognizers using example motions. No complex coding required.
• LiveMove 2 can recognize any motions performed with a motion sensing device.
Figure 1.2: A motion is any movement of the Wii Remote (normally half a second to five seconds). In this
example the motion is a counter-clockwise full circle with the Wii Remote pointing forward.
Figure 1.3: A motion set is a collection of motions of the same type. But they are not identical. This motion
set contains four motions, all counter-clockwise circles.
Figure 1.4: A move is a labeled motion set. In this example, the move is ‘circle’. You collect example
motions from different people to define a single move. A move is a player-level concept, like a sword thrust
or cross-court forehand.
Figure 1.5: You define moves and LiveMove 2 automatically builds a classifier to recognize them. A classifier is a motion recognizer. Feed it a motion at run-time and it tells you what move it is.
lmRecorder A Wii application that records motions from the Wii Remote, the Nunchuk, and the MotionPlus.
lmMaker runs it automatically when collecting motion sets and when testing classifiers.
Run-time Library The LiveMove 2 run-time library links with your game. You can create a tracking object
that allows you to track the movement of a Wii Remote with an attached MotionPlus. You can also load
classifier objects and use them to recognize motions.
lmsClassify A simple application that loads a classifier and uses it to classify motions.
lmsClassifyRSO Implements the same function as lmsClassify using libLM.rso instead of libLM.lib.
lmsTrack A simple application that creates a tracker and uses it to track the motion of a MotionPlus-enabled Wii Remote.
lmsClassifyAndTrack A simple application that creates a tracker and loads a classifier. It uses the tracker to track the motion of a MotionPlus-enabled Wii Remote and the classifier to simultaneously classify the tracked motions.
lmsClassifyButtonless A simple application that loads a buttonless classifier (See Section 3.4 for
definitions of classification modes) and uses it to classify a continuous stream of motion data
from the Wii Remote.
lmsRecord A simple application that shows you how to use the LiveMove 2 API to record motions.
lmsPad Device-dependent driver code that wraps your motion device's low-level driver code and marshals raw data samples from your device into the LiveMove 2 data sample format. You should incorporate and use this code for all your game applications that use LiveMove 2.
lmTuner A Wii application that tunes a classifier to a specific game player.
trackerDemoGame A Wii application that allows you to try out motion tracking with a simple game
of skittles.
balloonPop A Wii application that allows you to try out motion recognition with a simple game of
drawing numbers in the air.
lmHostIO A PC application that listens on the Wii USB port and writes files out to the host PC. This program
is required when executing lmRecorder or lmTuner.
LiveMove 2 running on the Wii game console tracks the position and orientation of a Wii Remote with an attached MotionPlus and recognizes motions performed with the Wii Remote, MotionPlus and Nunchuk motion sensors.
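The run-time flow described above (load a classifier built offline by lmMaker, feed it motion data each frame, read back a move label when the motion ends) can be sketched conceptually as follows. This is a pure Python illustration of the data flow only; the actual LiveMove 2 run-time is a C/C++ library, and none of these names belong to its API.

```python
class MockClassifier:
    """Stand-in for a loaded classifier object (illustration only)."""
    def __init__(self, moves):
        self.moves = moves      # move labels the classifier knows
        self.samples = []

    def add_sample(self, sample):
        """Called once per frame with the controller's current reading."""
        self.samples.append(sample)

    def finish_motion(self):
        """Called when the motion ends (e.g. the player releases B)."""
        # A real classifier matches the buffered samples against its
        # learned moves; this mock just requires a minimum-length motion.
        label = self.moves[0] if len(self.samples) >= 3 else "--"
        self.samples = []
        return label

clf = MockClassifier(["circle"])
for sample in [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]:
    clf.add_sample(sample)
print(clf.finish_motion())   # prints "circle": long enough to classify
```

The important shape is the per-frame feed followed by a query: your game never inspects raw samples itself, it just forwards them and reads back a label (or "--" for undetermined).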
Chapter 2
Tutorial
• Record motion examples from the Wii Remote with lmRecorder in lmMaker.
• lmMaker imports the motion files and builds a classifier (Figure 2.2).
Figure 2.1: Collect motion examples as .raw files. lmRecorder records the data from the Wii Remote.
lmHostIO helps transfer this data to the PC.
Figure 2.2: lmMaker receives the motion data and constructs motion sets. These are used to build a classifier.
Figure 2.4: Draw these numerals in the air. The end of your Wii Controller should trace these lines.
The “gold standard” is your definition of a move. It is the ideal form of the move. The gold standard
defines criteria for deciding whether a given motion should count as an example of a move.
Q. What is a gold standard? – See section 5.2 and answer 15 in the FAQ.
• Select File->New Project from the main menu of lmMaker. This will launch the New Project
dialog.
• Select Collect->Collect Motions from the main menu. This launches the Collect Motion
dialog.
• Enter the performer name “me” (Figure 2.7).
• There are two check boxes in the motion collection dialog. By default, both are enabled. Leave them as is. See Section 3.6 for a more detailed description of these two checkboxes.
• Click OK.
A tabbed motion collection window (Figure 2.8) appears on the screen while lmRecorder boots up on the
NDEV. The tabs indicate all the moves you are to collect motions for.
lmMaker automatically runs lmHostIO and lmRecorder when collecting motions from the Wii. Ensure
that you have already set the DVD root using the command setndenv DvdRoot "path". It is usually fine to simply set it to the current directory: "setndenv DvdRoot .". Setting the DVD root only
needs to be done once. (Consult the README.txt for troubleshooting tips if you get a “missing DLL” or
other errors when lmMaker invokes lmHostIO.exe)
After lmRecorder has loaded on the NDEV you will see a motion sensor display on the Wii monitor (Figure 2.9), as well as a text prompt informing you which move to give an example motion for next.
lmRecorder uses the first connected Wii Remote (channel 0) as the source of motion data when the move
uses a single Wii Remote. To record a motion, keep the B button on the Wii Remote depressed while you
perform it.
Figure 2.8: When starting to collect motions you will see this window.
• Pay attention to which move lmRecorder is prompting you to record. Draw the corresponding number in the air with the Wii Remote while holding down the B button. Perform the motion as naturally as possible while conforming to the gold standard.
• Once you finish recording a motion (by releasing the B button), lmRecorder prompts you to either
accept or reject it. This is important in making sure that the newly recorded motion is intended to be
an example of the chosen move.
• If you accepted the motion, the collection window switches to the tab for the corresponding move and a new line appears showing the disk location of that motion. After you have collected some motions for the number0 move, the collection window should look like Figure 2.10.
If you rejected the motion, it will not show up in the collection window.
If you accepted a motion by mistake, you can always select it in the collection window and click
Delete Motion. You can also right-click on the bad motion to bring up a menu which contains a
“delete” option.
• Repeat this process until lmRecorder tells you that you’ve successfully collected the designated
number of motions per move.
• Click Finish Collecting. By default, lmMaker automatically imports all the samples you just
collected into your project (Figure 2.11).
All the motion examples you collected are stored in the folder
ailiveBase/bin/LiveMoveData/raw/numbers/me/.
In our example of the numbers move set, the definition of the moves (the gold standard) is quite narrow in
scope and does not allow differing ways of drawing the same numeral.
Say you are developing a cooking game. One move is "rolling dough", and you want to allow several ways of performing it. In this case, record all the different ways of performing the move, including rolling backwards, forwards, and sideways. All these variations should be presented to lmMaker.
Figure 2.9: lmRecorder screen for recording motions in random collect mode.
You may have some general questions about collecting examples. Please consult Chapter 5 for guidelines.
Q. When do I stop collecting examples? - See section 5.9 and answer 18 in the FAQ.
Q. How many motion examples should I record for one move? - See answer 19 in the FAQ.
Q. How many different ways of performing a single move should I record? - See answer 20 in the FAQ.
• Click Suggest Capacity to get lmMaker to suggest a capacity for your project. (There are more
details about capacity in Section 3.7.1.)
• Set Tunability equal to the current value of Capacity so we can tune this classifier later. (A
tunability value of zero makes the classifier untunable. Do this to minimize classifier size when tuning
is not needed.)
• Click Build on the bottom-left corner of the window. A pop-up window will show a progress bar for the build process.
• When the build finishes, the lmMaker output window states that the classifier is saved to
ailiveBase/bin/LiveMoveData/Projects/numbers/numbers.lmc.
Once the classifier is built the project window changes (see Figure 2.12):
All the classifications are made by the classifier you just built.
This is a small-scale project so all your motion examples should have the correct label. The classification
rates should be 100% for all motion sets.
Congratulations, you have built your first classifier!
• Click Test->Test Classifier from the main menu. A dialog box pops up (Figure 2.13).
lmRecorder will boot on the NDEV in test mode and lmMaker will show an information dialog (Figure
2.14) throughout the testing process. You are now ready to classify motions!
As before, hold the B button on the Wii remote to start performing the motion, and release when done.
lmRecorder classifies your motions using the classifier you just built and displays results on the NDEV
monitor.
• If you draw a zero in the air – then lmRecorder will display “number0”.
• If you draw a two in the air – then lmRecorder will display “number2”.
• If your motion is unlike any of your moves – then lmRecorder will display "--", which means it was classified as "undetermined".
At this point the classifier may not recognize many varied ways of drawing numbers, because the motion examples provided are not yet sufficiently varied. But it should recognize your own motions for gold-standard numbers 0 through 4 with high accuracy. Until you build a classifier with examples collected from a variety of different people, it likely will not work as well for someone else as it does for you.
Try experimenting now. The classifier is data you load into your game. The game can then recognize moves
0, 1, 2, 3 and 4 in real-time.
When you’ve finished testing, click the “Finish Testing” box (Figure 2.14) to quit lmRecorder.
In another case, one of your moves is very simple. During game testing you discover that everyone can
perform it easily (after a period of initial learning). You decide you do not need to tune it.
Q. What does tuning do? – See section 4.17 and answer 34 in the FAQ.
The Wii application lmTuner lets you test tuning with a classifier built by lmMaker. You can find lmTuner in the directory ailiveBase/sample/lmTuner. Note that lmTuner is not present in the evaluation version of LiveMove 2.
Now let’s try to tune the “numbers” classifier we just made:
Figure 2.15: lmTuner Screen while tuning for the “number 1” move.
• Hit the Enter key (or click Add) and enter “number1”.
• Click OK.
You cannot use motion samples collected for a standard project in a buttonless project. So, let’s collect
buttonless samples.
• Select Collect->Collect Motions from the main menu to launch the Collect Motion dialog.
• Next to the Random Collect checkbox, increase the default number of motions to collect per move
from 10 to 20. Buttonless classification usually requires more example motions to work well.
• Click OK.
The motion collection window appears on your PC while lmRecorder boots up on the NDEV.
To record buttonless motion samples with lmRecorder you need two Wii Remotes. Perform motions with
Remote 1. Specify the start and end of a motion with Remote 2. This ensures that any acceleration on the
Wii Remote caused by pressing a button is not recorded.
Normally the operator of lmMaker will use Remote 2 and the person providing examples will use Remote
1. But for this test simply place Remote 1 in your dominant hand and Remote 2 in your other hand.
Go through the same motion collection process as described previously in section 2.6. Make sure that each
example motion starts with reasonably large speed and force. This helps the classifier learn to distinguish
when valid motions start.
As before, upon finishing, lmMaker automatically imports all your examples to your project.
• Click Suggest Threshold to get lmMaker to suggest a reasonable force threshold for detecting
the starts of your moves. (For more information about force threshold see Section 3.7.5.)
• Click Suggest Capacity to get lmMaker to suggest a capacity for your project. The suggested
capacity is based on the threshold setting.
• Click Build.
lmRecorder boots up on the NDEV in test mode. Simply perform a motion. lmRecorder recognizes
your motions using the buttonless classifier you just built.
A buttonless classifier splits the incoming motion stream into segments. A segment starts based on the force
threshold.
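One way to picture this segmentation, as a rough sketch only (the real LiveMove 2 segmentation logic is internal to the library, and all names here are ours): a segment opens when the instantaneous force (acceleration magnitude) exceeds the force threshold, and closes once the force has stayed below it for a short quiet period.

```python
import math

def segment_stream(samples, force_threshold, quiet_frames=5):
    """samples: list of (ax, ay, az) tuples; returns (start, end) index pairs."""
    segments = []
    start = None
    quiet = 0
    for i, (ax, ay, az) in enumerate(samples):
        force = math.sqrt(ax * ax + ay * ay + az * az)
        if start is None:
            if force >= force_threshold:
                start = i                 # motion begins with a forceful sample
        else:
            if force < force_threshold:
                quiet += 1
                if quiet >= quiet_frames:  # motion has died down
                    segments.append((start, i - quiet_frames + 1))
                    start, quiet = None, 0
            else:
                quiet = 0
    if start is not None:
        segments.append((start, len(samples)))
    return segments

# A stream with one burst of movement surrounded by stillness:
stream = [(0.0, 0.0, 0.1)] * 10 + [(2.0, 1.0, 0.5)] * 20 + [(0.0, 0.0, 0.1)] * 10
print(segment_stream(stream, force_threshold=1.0))  # prints [(10, 30)]
```

This is also why buttonless examples should start with reasonably large speed and force: a motion that begins too gently never crosses the threshold, so no segment opens.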
A “--” sign indicates that the motion segment was classified as undetermined. LiveMove 2 will generate
a sequence of “--” results if you move the Wii Remote randomly.
A buttonless classifier needs more examples to produce the same classification accuracy as a standard classifier.
Try experimenting now. Even with just 20 example motions per move you should get a reasonably good recognition rate.
If a move is hard to perform (i.e. you often get the undetermined label), increase its slack in the corresponding column in lmMaker's project window. If a move is too easy to perform (or other valid moves often end up being classified as this move), decrease its slack (see Section 3.7.4). If any random motion tends to be recognized as a particular move, see Answer 26 in the FAQ.
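A toy way to build intuition for slack (this is not AiLive's algorithm; every name here is illustrative): treat each move as a prototype with an acceptance radius, and let slack scale that radius. More slack means motions farther from the prototype still match; less slack makes the move stricter.

```python
def classify(motion, prototypes, base_radius=1.0):
    """prototypes: {move_name: (vector, slack)}; returns a move name or '--'."""
    best, best_dist = "--", None
    for name, (proto, slack) in prototypes.items():
        dist = sum((a - b) ** 2 for a, b in zip(motion, proto)) ** 0.5
        # Slack scales the acceptance radius for this move.
        if dist <= base_radius * slack and (best_dist is None or dist < best_dist):
            best, best_dist = name, dist
    return best

protos = {"circle": ([1.0, 0.0], 1.0), "thrust": ([0.0, 1.0], 1.0)}
print(classify([0.9, 0.1], protos))   # prints "circle": close to its prototype
print(classify([3.0, 3.0], protos))   # prints "--": far from every move
```

Raising a move's slack in this picture widens its radius, so near-miss motions stop coming back undetermined; lowering it shrinks the radius, so stray motions stop being swallowed by that move.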
Q. What is the best way to use buttonless classification in my game? - See section 5.8.
Chapter 3
lmMaker
LiveMoveData
    projects
    raw

These folders are created for your convenience. You can save your project and motion data anywhere on disk. When you create a new project, lmMaker creates a project folder under both projects and raw:

LiveMoveData
    projects
        my project
    raw
        my project
The project folder under projects contains all project related files except motions, such as a project file
(extension: .lmproj), a classifier file (.lmc), move tree files (.lmtree), and an include file (.h). The
project file saves states between lmMaker sessions.
The project folder under raw usually contains all motion files collected for the project and is organized like this, with one subfolder per performer:

my project
    Jim
    ...
When you run lmMaker, two windows appear: the Command Prompt and the project window (Figure 3.1).
The main menu options are:
To create a new project, select File->New Project from the main menu. It will open the New Project
dialog (Figure 3.2).
• Enter the name of the project. It will be used as the name of a project folder as well.
• Specify where you want to create the project folder.
• Choose an input device type from one of the following seven types.
– Wii Remote for single-handed moves using a Wii Remote
– Nunchuk for single-handed moves using a Nunchuk
– Freestyle for coordinated, double-handed moves using a Wii Remote and a Nunchuk
– MotionPlus Device for single-handed moves using a MotionPlus-attached Wii Remote
– MotionPlus Freestyle for coordinated, double-handed moves using a MotionPlus-attached
Wii Remote and a Nunchuk
– Paired Wii Remotes for coordinated, double-handed moves using two Wii Remotes
– Paired MotionPlus Devices for coordinated, double-handed moves using two MotionPlus-
attached Wii Remotes
• Choose a classification mode (see Section 3.4).
• Add moves in the move list. You can also add or remove moves later in the Project Window.
The combination of an input device and a mode determines the type of motion samples collected as well as
the type of motion data the built classifier can handle.
Click OK when you are done, and you will be brought back to the project window. You can start collecting
motions for the project.
Standard : In this mode LiveMove 2 is asked to classify motion segments where each segment is a potential
valid motion.
Buttonless : LiveMove 2 is presented with a continuous stream of motion data and asked to classify possible
valid motions contained within. In this mode the LiveMove 2 classifier is responsible for detecting the
start and end of potential valid motions as well as classifying them.
Each classifier type also implies a distinct recording mode for recording training motions. To build a standard
classifier you will need to record standard motion samples while building a buttonless classifier requires
buttonless motion samples. See Section 4.5.1 for more detail.
• Delete Selected From Disk: Remove selected motion(s) and delete motion files from disk.
This does the same thing as the "Remove Selected" option but also removes the raw motion files from disk. Use this option with care (especially if you share motion files among projects) since you will not be able to recover any deleted raw motion files.
• Select Outliers: Select the motions that are outliers according to the most recent classifier built in the
project.
Any outliers (see Section 3.9.1) that have been identified by the current classifier will be marked in
red in the project window. This menu option gives an easy way of selecting them all (for viewing or
removal).
• Project Info: View the project information, such as its motion device type and how many motions are
contained in each move set.
• View Motions: View motion graphs for the selected motions (see Section 3.9.2).
• Suggest Capacity: Suggests a capacity for the (to-be-built) classifier based on the example motions in the current project.
The suggested capacity is a reasonable trade-off between classification performance and run-time speed. Use it as a guideline if you need to tweak the capacity value to better suit your game. Note that this operation is almost as expensive as building a classifier.
• Set Force Threshold: Set the force threshold (buttonless projects only).
This allows you to change the force threshold (see Section 3.7.5) while showing the per-move percentage of motion data that would be discarded when using that threshold.
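The reported percentage can be thought of roughly like this (our guess at the computation, for intuition only): for each move set, count the fraction of recorded samples whose force falls below the candidate threshold and would therefore be ignored when looking for motion starts.

```python
import math

def discard_percentage(motions, threshold):
    """motions: list of motions, each a list of (ax, ay, az) samples.
    Returns the percentage of samples whose force is below the threshold."""
    total = below = 0
    for motion in motions:
        for ax, ay, az in motion:
            total += 1
            if math.sqrt(ax * ax + ay * ay + az * az) < threshold:
                below += 1
    return 100.0 * below / total

move_set = [[(0.0, 0.0, 0.5), (2.0, 0.0, 0.0), (1.5, 0.5, 0.0)],
            [(0.1, 0.1, 0.1), (1.2, 0.0, 0.0)]]
print(discard_percentage(move_set, 1.0))  # prints 40.0
```

A very high discard percentage for a move suggests the threshold may be set too aggressively for how that move is performed.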
Actions do not update the classifier until you click Build or choose Project->Build Classifier.
For example, if you add a new motion set called "number8", the classifications shown for the new examples in the set will come from the previously built classifier until you rebuild.
• Enter a performer name. A new folder will be created where all samples by the performer are stored.
• Specify where you want to create the performer folder. The default location should work fine for most
cases.
• There are two checkboxes in the motion collection dialog. By default, both are enabled. Unless you
have specific reasons to disable either one of them, we recommend that you leave them as is.
– The first option enables automatic importing of the collected motions into the project window
upon finishing the collection session.
– The second option sets the collection mode to Random Collect. In this mode, lmRecorder prompts you to give an example motion of a randomly chosen move, one at a time, until n examples per move are collected. n equals 10 by default, but you can change it to suit your needs.
Random collection addresses a specific pitfall of sequential collection. In sequential collection, you collect all the example motions of a move in a row and then move on to the next move. This often results in homogeneous example motions for each move that lack the legitimate variations present in normal game play, which in turn hurts the classifier's performance. Random collection is designed to minimize this problem.
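The prompting order random collection produces can be sketched as follows (illustrative only; lmRecorder's actual scheduling may differ): instead of n prompts for one move followed by n for the next, the prompts for all moves are interleaved at random.

```python
import random

def random_collect_order(moves, n, seed=0):
    """Return a randomized prompt sequence with n prompts per move."""
    prompts = [m for m in moves for _ in range(n)]  # n prompts per move
    random.Random(seed).shuffle(prompts)            # interleave at random
    return prompts

order = random_collect_order(["number0", "number1", "number2"], n=3)
print(order)  # nine prompts, three per move, in shuffled order
```

Because the performer never records the same move many times in a row, each example is approached fresh, which preserves the natural variation that sequential collection tends to wash out.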
Click OK, and a tabbed motion collection window (Figure 2.8) appears on the screen while lmRecorder
boots up on the NDEV. The tabs indicate all the moves you are to collect motions for.
After lmRecorder has loaded on the NDEV you will see a motion sensor display on the Wii monitor (Figure 2.9), as well as a text prompt informing you which move to give an example motion for next.
In general, we recommend that the operator of lmMaker (i.e. you) control the recording while a second person (i.e. the performer) provides motion examples.
A typical recording session proceeds like this:
• The lmRecorder screen displays the name of the move it wants the performer to record.
• Let the performer perform the move. Watch closely how they perform it and determine whether the recorded motion conforms to the gold standard of that move.
– Reject the motion if you, or the performer, decide that the motion is not a good example of the
target move.
– Confirm otherwise.
• Once a motion is confirmed, the collection window refreshes and displays the tab for the target move. A new file name is appended to the list of motion file names in the window. If you or the performer made a mistake in accepting the motion, you can delete the new file now (select it in the window and press the Delete key or click Delete Motion).
• Repeat this until lmRecorder informs you that the designated number of motions has been collected for all moves.
• Click Finish Collecting in the collection window when you are done.
All the motion samples collected will be automatically imported into the project if you have checked the "Import collected motions into project automatically" checkbox. If you are collecting from many performers, you may want to uncheck it to speed up the whole process. There is an easy way to import all the samples of all performers later (see Section 3.5).
Sometimes it may be desirable to quickly collect a few motions per move or to collect a few more motions
for a specific move. In this case you can use sequential collect instead. Here is how:
• Uncheck the “Random Collect” box in the collection dialog and click “OK”
• On the Motion Collection window, click the tab of the move you are about to collect motions for.
• Collect some motion samples from the performer, watching their motions closely. If a motion does not conform to the gold standard, delete it immediately (select it and press the Delete key or click Delete Motion).
• Click on another move tab, and collect motion samples for that move.
• Repeat this for all the moves you wish to collect example motions for.
You can switch moves any time during the session by clicking the move tabs. Make sure the correct tab is
active before you start collecting motions.
– Wii Remote, Wii Remote with MotionPlus, Freestyle, or Freestyle with MotionPlus: Hold the B
(trigger) button of the first Wii Remote.
– Paired Wii Remotes or Paired MotionPlus-attached Wii Remotes: Hold the B button of the first
Wii Remote.
– Nunchuk: Hold the Z button of the Nunchuk.
– Wii Remote, Wii Remote with MotionPlus, Freestyle, or Freestyle with MotionPlus: Hold the B
button of the second Wii Remote.
– Paired Wii Remotes or Paired MotionPlus-attached Wii Remotes: Hold the B button of the third
Wii Remote.
– Nunchuk: Hold the B button of the attached Wii Remote.
Normally, the operator of lmMaker should control the extra remote used to mark the start and end of
motions.
The recording controller(s) must be relatively still for a short period of time before recording can begin.
lmRecorder displays RECORDING PREVENTED if this condition is not met when the operator
presses down the trigger button to signal the start of a new motion. This ensures that LiveMove 2 gets
good quality motion data for the start of motion.
The performer must begin performing the motion shortly after the operator starts the recording. The
performer can monitor the start of recording from the motion graphs shown in the lmRecorder
display. Alternatively, the operator can simply issue a voice command to the performer to start the
motion. In either case, the operator releases the recording button after the motion has completed.
• Replicate the functionality of lmRecorder in your game (see Chapter 4 and sample code). Make your game write .raw motion files to the current working directory of lmMaker. Then you can collect directly from the game into lmMaker's collection window.
• Make your game write .raw motion files to a PC directory structure of your choosing. Then import
the motions into lmMaker later.
Ensure that only examples that conform to your gold standard are used to build a classifier (see Section 5.2).
3.7.1 Capacity
A classifier’s capacity is a measure of how many different ways of performing the same move it can
recognize.
The higher (lower) the capacity the more (less) ways of performing the same move can be recognized.
– If there are many ways of performing the same move – then a high capacity is required.
– If there are few ways of performing the same move – then a high capacity is wasteful.
• A value of 1.0 corresponds to using on average 5% of the Wii CPU per 1/60 frame with 8 Wii controllers
generating motions for LiveMove 2.
CPU costs are linear in the number of Wii controllers generating motions for LiveMove 2.
• Without tuning (i.e. without calling lmsTuneClassifier() – see Chapter 4) halving capacity will
cut CPU costs by about half, up to the minimum bounds.
Doubling capacity tends to double CPU requirements.
A value of 0.0 guarantees minimum capacity.
So, if your game supports only 2 Wii Remotes, you can set capacity to 4 but still consume only 5% of
the CPU.
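The scaling rules above can be collected into a rough back-of-envelope estimate. The function below is an illustrative sketch, not part of the LiveMove API; the 5% baseline at capacity 1.0 with 8 controllers is the figure quoted above.

```c
/* Rough per-frame CPU estimate implied by the scaling rules above.
   Illustrative only -- not a LiveMove API call. Baseline: 5% of a 1/60
   frame at capacity 1.0 with 8 controllers; cost scales linearly in both
   capacity and controller count (ignoring the minimum bounds). */
float estimated_cpu_percent(float capacity, int numControllers)
{
    const float baselinePercent = 5.0f; /* capacity 1.0, 8 controllers */
    return baselinePercent * capacity * ((float)numControllers / 8.0f);
}
```

For example, a game supporting only 2 Wii Remotes can raise capacity to 4.0 and still land at the same 5% figure: 5 × 4 × (2/8) = 5.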
3.7.2 Tunability
A classifier’s tunability is a measure of how many different ways of performing the same move it can
recognize during tuning.
Tunability is the capacity that is active during tuning.
• High tunability means higher CPU and memory costs during tuning.
This affects calls to lmsTuneClassifier() but not calls to lmsUpdateClassifier() or
lmsClassifyMotion() (see Chapter 4).
                 Capacity
                High      Low
Recognition     better    worse
CPU cost        worse     better
Memory cost     worse     better
In this case:
• Build the classifier with a low capacity and high tunability. You can use the “Suggested capacity” value
here. For example, you can set capacity to half of the suggested value and set tunability to equal the
suggested value.
• During tuning the classifier can adapt to the varied styles of many different game players.
• Once tuned, the classifier will recognize the particular game player. But it will use less CPU and
memory resources than a high capacity classifier.
Section 5.9 contains guidelines for setting capacity and tunability during development.
A tunable classifier requires additional memory. If you do not need tuning then create an untunable classifier:
3.7.4 Slack
A move’s slack is a measure of how strict its recognition is.
Each motion set name has a slack entry. Click to change it.
The higher (lower) the slack the more (less) imprecision in the recognition of a move.
You can think of slack as fuzziness. Moves with high fuzziness will be recognized more easily compared to
moves with low fuzziness.
– A value < 1.0 means that the classifier will reject more motions as examples of the given move.
Game players must be more precise in their motions to be recognized. For example, in the “num-
bers” project (Chapter 2), if the slack of move “number2” is set low then it becomes harder to
perform the move.
– A value > 1.0 means that the classifier will accept more motions as examples of the given move.
Game players can be less precise in their motions to be recognized. For example, in the “numbers”
project if the slack of move “number2” is set high enough, then some “number3” motions might
start to be recognized as move “number2” instead.
– The maximum value is 2.0. In this case most motions will be recognized as some move.
• You can set slack independently for each move. So you can make some moves precise and others
imprecise.
• Adjust slack when your classifier often rejects but rarely misclassifies a move. (The classification is
either correct or undetermined).
In this case, increasing slack reduces the rejection frequency and increases the correct classifications.
Q. What’s the difference between tunability and slack? – See answer 35 in the FAQ.
Q. How do I control the difficulty level of performing moves? – See answer 36 in the FAQ.
You can set the threshold in the project window (see Figure 3.6).
Click Suggest Threshold to set a suggested force threshold for your project. If your project contains
moves that start with low (high) force then the suggested threshold will tend to be low (high).
A high threshold will discard more data from the start of your recorded examples (which start from rest).
Choose Project->Set Force Threshold to see how much data on average is discarded for each
move (see Figure 3.7).
Figure 3.7: A higher threshold discards more data from the start of your examples.
• A very low threshold means that any movement on the Wii Remote will trigger classification. So your
game will need to handle many “false starts” that terminate with the undetermined label.
• A very high threshold means that most movement on the Wii Remote will not be registered by LiveMove 2.
In addition, lots of data is discarded, which makes it harder for LiveMove 2 to recognize moves.
You must re-build the classifier to check how the overall classification accuracy is changed by a new threshold
setting.
A high threshold may prevent some moves from being registered at all. This means that the motion
example never exceeded the threshold.
Since the force threshold changes where each example motion starts, it alters the training set for the classifier.
As a result, the buttonless classifier’s capacity is dependent on the force threshold. Once a new force
threshold is entered, you should click “Suggest Capacity” to get a new suggested capacity value.
A buttonless classifier normally requires more examples to get the same classification accuracy compared to
standard classification. Section 5.8 contains guidelines for building buttonless classifiers.
For a simple, interactive test session, just click OK. The dialog box shown in Figure 3.9 appears on the
screen while lmRecorder boots up in test mode. lmRecorder classifies your motions using the classifier
and displays the results. Click Finished Testing on the dialog to end the session.
If you wish to save test motions or classify existing motions, check “Save test motions between sessions”.
Enter a session name (defaulted to “test”) and specify where you want a session folder created if you don’t
want to use the default location. Click OK and lmRecorder will boot up on the NDEV in test mode.
If you have checked the box to save test motions then a separate window similar to the project window will
also pop up. Every time you perform a motion, a new line appears in the bottom half of this window showing
the location of the saved motion as well as its classification result.
If you entered an existing session name in the dialog and the session folder already contains motions saved
in a previous test session, they are classified by lmMaker immediately and displayed in gray. Motions
performed during the current session will be added to the same folder and displayed in black.
Q. What do I do if one of my moves is hard to reproduce? – See section 5.3 and answer 38 in the FAQ.
Q. What do I do if two of my moves are getting confused? – See section 5.4 and answer 37 in the FAQ.
Q. How can I tell when I have collected enough data? – See answer 18 in the FAQ.
3.9.1 Outliers
An outlier is an example motion that is unlike other examples in the same motion set. An outlier indicates
either
• you need to provide more examples like the outlier (if the outlier conforms to your gold standard), or
• you need to delete the outlier because it was generated by a recording error.
lmMaker reports all outliers, if any, when it builds a classifier (Figure 3.11). Outliers are highlighted in red
in the project window (Figure 3.12). Project->Select Outliers allows you to quickly select all the
motions that are outliers.
Figure 3.13: Force axes from lmRecorder (KPAD library conventions). The pointer end of the Wii Remote
is the +Z direction. Imagine the accelerometer is a small cube half-full with water. At horizontal rest the
water, under the force of gravity, impacts upon the -Y cube face. So in this state we see (X,Y,Z)=(0,-1,0)
corresponding to 1G gravity in the -Y direction. A forward jab initially generates -Z force as the water
impacts the -Z cube face. Then the water impacts the +Z cube face as the Wii Remote decelerates to rest.
If you think that an outlier represents a bad motion you can view examples of non-outliers and visually
compare them to the outlier. Bad motions often look very different from other examples in the same motion
set.
Viewing motions can also be crucial when debugging problems caused by a high force threshold in
buttonless classifiers:
Q. In my buttonless project, almost any short motion I make with the motion device gets recognized as
one of the moves, what should I do? – See answer 26 in the FAQ.
MotionPlus
If you have a MotionPlus attached, then the raw gyroscope data represents the angular velocity around the
three axes of the Wii Remote. Please check with Nintendo’s documentation for additional details on the data
representation.
In lmMaker when you choose to View Selected Motions, you will also see a plot of the gyroscope
data if the motions were recorded with an attached MotionPlus.
Figure 3.14: A motion view. Acceleration/angular velocity is the vertical axis and motion sensor sample
(each lmsMotionAppendElement call) is the horizontal axis. The X, Y, and Z acceleration and angular
velocity are plotted separately and are color-coded.
• Duration: The duration (in seconds) of the relevant part of the motion (i.e. between motion start and
end).
rotated right up: Rotated to the right in XY, with Z pointing up.
rotated left up: Rotated to the left in XY, with Z pointing up.
rotated right down: Rotated to the right in XY, with Z pointing down.
rotated left down: Rotated to the left in XY, with Z pointing down.
level: Mixed in the XY, with Z level. (Not flat level, and not
rotated left level or rotated right level).
• Initial impulse: Estimated direction of the initial movement of the motion controller with respect to
gravity (i.e. vertical direction).
One of:
Q. How do I design moves that support early classification? – See Section 5.6.
• What kinds of animation sequences will synchronize with your motion controls to give immediate and
correct feedback to the player.
A simple application of the move tree is determining the earliest possible moment when it is safe to classify
a player’s motion.
An advanced application is:
1. At development time designing an animation tree that corresponds to the move tree, and
2. At game time choosing when to commit to intermediate and final animations that synchronize with the
player’s movement.
3.10.1 An example
Let’s look at a simple example move set consisting of the numbers one, two, three, and four. Moves
one and four were designed to generate different motion sensor data very quickly: starting from the same
holding position, one is a vertical slash down (Z force), while four starts to the left (-X force). two and
three, on the other hand, both start to the right, or +X force. So two and three both initially generate
similar data.
The move tree is displayed in Figure 3.15. Take a look at the first row that starts with one. Notice that in
the Possible Moves / Joint Animations column there is only one move: {one}. This means
that – within the first 2.5% of the move – one is completely unconfused. So animation can begin almost
immediately. This is also true for the move four.
Moves two and three, however, remain confused with each other during the first 35% of the move. At
that point, three becomes distinguishable from two.
When 35% of the move has completed, if the result of LiveMove 2’s early classification is:
• one then the player is probably not performing anything else. So it is safe to commit to a one anima-
tion.
• two then the player could be performing either a two or a three, but not one or four. If you had
a joint two-three animation, you could commit to that at this point.
• three then the player is probably not performing anything else. So it is safe to commit to a three
animation.
• four then the player is probably not performing anything else. So it is safe to commit to a four
animation.
After 80% progress no moves are confused with each other. So it is safe to commit to whatever LiveMove 2’s
early classification returns at this point.
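The commitment logic just described can be sketched as a small lookup. This is a hypothetical helper that hard-codes the move names and thresholds from this example; a real game would read them from the lmsMoveTree rather than baking them in.

```c
#include <string.h>

/* Hypothetical hard-coding of the "numbers" move tree discussed above.
   Given the current best guess and its percent progress, return the
   animation it is safe to commit to ("start" while everything is still
   confused, a joint "two-three" animation while those moves overlap). */
const char *safe_animation(const char *bestGuess, float percentProgress)
{
    if (percentProgress < 2.5f)
        return "start";                     /* shared by all four moves */
    if (strcmp(bestGuess, "one") == 0)
        return "one";                       /* unconfused from 2.5% on  */
    if (strcmp(bestGuess, "four") == 0)
        return "four";                      /* unconfused from 2.5% on  */
    if (strcmp(bestGuess, "three") == 0)    /* splits from two at 35%   */
        return percentProgress >= 35.0f ? "three" : "two-three";
    if (strcmp(bestGuess, "two") == 0)      /* splits from three at 80% */
        return percentProgress >= 80.0f ? "two" : "two-three";
    return "start";
}
```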
The move tree also indicates:
• We need a generic “start” animation, lasting for the first 2.5% of the move, that is common to all of
your moves.
For example, this shared animation could be a “get ready” look, a “power glow”, a Wii Remote rumble,
or anything that immediately registers to the player that something is about to happen.
• We need a two-three intermediate animation during 2.5% to 80% of the move that follows the
“start” animation and precedes either a two animation or a three animation.
(If this is not possible then you should rethink your motion design. Moves that are initially confused
should translate to in-game animations that initially share the same start).
• An early classification can always turn out wrong. For example, the player could start a four but then
drop the controller.
So we need different “abort” animations to correspond to the user starting a valid move but failing to
carry through at different stages of completion.
At development time the move tree can help an animator design a corresponding animation tree for the
player’s on-screen avatar.
At game time the move tree tells the game at the earliest moment when it is safe to commit to intermediate
and final animations.
Q. How do I use the move tree for early classification? – See Sections 4.12 and 5.6.
• The Best Guess column has one row for every move in your classifier. At game time, calling
lmsGetClassification (anytime during a motion) returns LiveMove 2’s “best guess”.
So, for example, the three row at 2.5% progress contains all the relevant move
tree information that you’d need when lmsGetClassification returns three and
lmsGetMotionProgressForClass for three is 2.5%.
– Light green text indicates that the move is already safe and you can commit to the final animation.
– Bold green text indicates that the move is safe, and this is the point at which it becomes safe.
Look at the two row at 80% progress. The two is highlighted in bold green to indicate that: if
the “best guess” is two and its percent progress reaches 80%, it becomes unconfused with the
move three, i.e., it is safe to assume that you’ve performed a two, not a three.
– Black text indicates that the move is not completely unconfused.
• The Possible Moves / Joint Animations column lists which moves the “best guess” is
currently confused with.
For example, in the three row at 2.5% progress, the Possible Moves are: {three, two}.
This means that if your best guess is three and it is at 2.5% progress, your final classification is
probably either three or two.
If you have designed a joint two-three animation then it is safe to play it at this time.
• The Animation Transitions column contains the parent animation for the Possible Moves
column. There is an entry only when the Possible Moves / Joint Animations recommen-
dation has changed from the previous stage.
For example, the two row at 35% progress has no animation transition. This means the {two,
three} joint animation is still active.
But the three row at 35% progress requires a transition from {three, two} to the currently rec-
ommended animation of three.
Figure 3.16: You specify a classification accuracy for each move tree.
• Low accuracy means that LiveMove 2 takes “bigger risks” to predict when a move gets unconfused.
• High accuracy implies that LiveMove 2 takes “smaller risks” to predict when a move gets unconfused.
• A low accuracy tree predicts that a move gets unconfused earlier compared to a high accuracy tree.
• Use a low accuracy tree to get earlier commitment to best guess classification.
But the prediction is more likely to be wrong. So you may get more “false positives”. Whether this is
important depends on your game design.
Note that if your move design allows early classification and the tree accuracy is not too high then your moves
will normally be unconfused before 100% completion.
In general a completely unconfused move does not subsequently become confused with other moves. So
four will continue to appear in a group on its own.
Building move trees can take time especially if your project is large.
A move tree depends on your classifier and motion examples. So remember to rebuild the
move tree if these change; otherwise your move tree is out-of-date.
Using an out-of-date move tree for early classification at game time is a potential source of
error.
Select Move Tree->View Move Tree to inspect the built move trees (as shown in figure 3.15).
• An insufficient number of examples. In this case the move tree will not make good predictions, espe-
cially if you have a large number of moves or wide gold standards.
• Imbalanced examples. The move tree will not make good predictions if the number of examples for
some moves is much larger than others.
• collect at least 50 examples for each variation of your gold standard for each move in your move set,
and
• maintain a balanced project.
#ifndef LMMYPROJECT_H /* header guard; the actual macro name is generated by lmMaker */
#define LMMYPROJECT_H
enum lmmyproject
{
lmmyproject_move1 = 0,
lmmyproject_move2,
lmmyproject_move3
};
#endif
Use this header file in your game source code. It has two main advantages.
lmsBeginClassify( classifier );
do
{
// read lmsPad data and call lmsUpdateClassifier on it
...
} while( !endOfMotion );
lmsEndClassify( classifier );
lmsClassLabel l = lmsGetClassification( classifier );
if ( l == lmmyproject_move1 )
{
... did move move1 ! ...
}
• Your game source is robust to changes in the order or contents of the moves in your project (on condi-
tion that the moves are not renamed).
• Turn some columns on and off. Right click on the display to select which information is displayed.
The “Classified as” column is expensive to recompute. To turn it off, right-click on any motion and
uncheck Show Classification. You can also turn Show Summary on to display motion sum-
mary information.
• Sort by column. Click the top of any column. The motions are then sorted according to the clicked
column.
For example, with Show Summary on, click the “Duration” column. The motions are sorted in
ascending order of duration. Click again and the motions are sorted in descending order of duration.
• -dir <filepath>
Set the LiveMoveData directory absolute path. The default is the current working directory.
The parameters available for batch mode operation can be viewed from the command line by typing:
• lmMaker.exe -h
List lmMaker command line parameters.
Chapter 4
Using LiveMove 2 in Your Game
• Record motions.
• Track motions.
• Recognize the player’s motions: Load classifier templates built with lmMaker to the Wii. Use clas-
sifiers to recognize what motion the player is performing with the Wii Remote.
• Recognize the player’s motion early: Use intermediate queries and the move tree to give early feed-
back to the player.
• Tune the classifier to an individual game player.
• All memory requests issued within the LiveMove 2 library will be routed to the memory allocator and
de-allocator passed to lmsOpen.
• On shutdown call lmsClose once. This frees all memory used by the LiveMove 2 library.
• You can reclaim memory used by the LiveMove 2 library without calling lmsClose if no
LiveMove 2 functions, including lmsOpen, are called thereafter.
For example, you can call the following function to initialize LiveMove 2:
#include <stdlib.h>
An lmsMotionDevice object represents a physical motion sensing device or a combination of such de-
vices each player controls.
For example:
...
// Create a motion device object for a Wii Remote with a MotionPlus attached.
lmsMotionDevice* md = lmsNewMotionDevice( lmsMotionDeviceTypeMOTIONPLUS );
...
4.3 lmsPad
Data from the controllers is received and buffered by the lmsPad library. lmsPad encapsulates the rec-
ommended method of interaction with WPAD, KPAD and KMPLS. It simplifies and ensures the correct use of
these libraries. Complete source code of lmsPad is provided and you may freely modify and recompile it to
suit your game should the need arise.
The data type of each motion sample lmsMotionDataSample is typedef’ed to the struct
lmsPadSample. It is the interface between lmsPad and the LiveMove API.
lmsPadSample sample;
// Add to recording
lmsMotionAppendElement( motion, &sample, NULL );
// Update classification
lmsUpdateClassifier( classifier, &sample, NULL );
// Update tracking
if ( lmsHasGyroData( &sample ) )
lmsUpdateTracker( tracker, &sample );
}
}
// Cleanup
lmsPadClose();
lmsPad auto-detects the MotionPlus extension. Data from the MotionPlus extension is required in order
to use LiveMove’s position and orientation tracking functions. lmsPad will also check whether a Nunchuk
is present in the MotionPlus extension. If found, WPAD_DEV_MPLS_FREESTYLE data (MotionPlus and
Nunchuk) will be received; otherwise, WPAD_DEV_MPLS data (MotionPlus only) will be received.
As of this writing, there is no provision to detect whether a Nunchuk was inserted into the MotionPlus after
the initial connection without interrupting the stream of incoming data.
It is essential that you check the result of lmsPadGetOverflowCount in order to identify data loss. Data
loss occurs if the internal buffer of lmsPad fills before the game can read data from it with lmsPadPop.
This is usually the result of very low frame rates (less than 20 fps) seen in development and debug builds.
Data loss will degrade the accuracy of LiveMove. The solution is to either call lmsPadPop more frequently
or increase the internal buffer size of lmsPad and recompile.
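The overflow mechanism can be illustrated with a self-contained stand-in for lmsPad's internal buffer. This is not the lmsPad source, just a sketch of why draining too slowly loses samples — the dropped-sample count here plays the role of what lmsPadGetOverflowCount reports.

```c
/* Illustrative stand-in for lmsPad's internal buffer: a fixed-size FIFO
   that counts samples dropped on overflow. If the game drains it too
   slowly, the overflow count grows and motion data is lost. */
#define PAD_BUFFER_SIZE 8

typedef struct {
    int data[PAD_BUFFER_SIZE];
    int head, tail, count;
    int overflowCount;
} PadBuffer;

void pad_push(PadBuffer *b, int sample)
{
    if (b->count == PAD_BUFFER_SIZE) { /* full: the new sample is lost */
        b->overflowCount++;
        return;
    }
    b->data[b->head] = sample;
    b->head = (b->head + 1) % PAD_BUFFER_SIZE;
    b->count++;
}

int pad_pop(PadBuffer *b, int *out)
{
    if (b->count == 0) return 0;       /* nothing buffered */
    *out = b->data[b->tail];
    b->tail = (b->tail + 1) % PAD_BUFFER_SIZE;
    b->count--;
    return 1;
}
```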
WPAD, KPAD and KMPLS have a number of functions for modifying various parameters that alter the data
produced by these libraries. For the purposes of motion recognition, data collected with one parameter setting
does not look like data collected with a different parameter setting. As a result, the same settings must be
used during data collection and classification in game.
Generally, performance of both tracking and classification may degrade if the data-modifying parameters of
WPAD, KPAD and KMPLS are altered from their default values.
Please also note that, for the purposes of classification,
• WPAD_DEV_MPLS_FREESTYLE Nunchuk accelerometer data is not interchangeable with
WPAD_DEV_FREESTYLE Nunchuk accelerometer data.
• WPAD_DEV_MPLS_FREESTYLE gyro data is not interchangeable with WPAD_DEV_MPLS gyro data.
• Wii Remote accelerometer and Nunchuk accelerometer data are not interchangeable.
• Wii Remote accelerometer data is interchangeable across the WPAD_DEV modes.
The following table summarizes the main API calls associated with motion tracking. More detailed descrip-
tions of each call are given in liveMove2.h.
See Chapter 6 for more details on how to quickly and easily add motion tracking to your game.
The lmsMotionDevice md has to be a “trackable” device. This means that it needs to have
both accelerometer data and gyroscope data. Currently, the only trackable device is of type
lmsMotionDeviceTypeMOTIONPLUS.
You can store motion data for later use; for example, to import into lmMaker to build a classifier.
The mode input argument to the lmsNewMotion() call determines how the motion is to be recorded.
Setting mode to lmsRecordingModeSTANDARD indicates that the motion is to be recorded as a standard
motion. The start and the end, or the duration, of such a motion are signalled by some player or game event,
like holding a button or trigger down on the motion device itself. A button does not have to be used as the
signaling event. But in most cases an on-device button is the easiest and most intuitive way of indicating when
a valid motion starts and ends. As is currently implemented in the lmRecorder application, a designated
button on the recording controller is used for this purpose. If the motions in your game can be effectively
marked by game contexts (e.g., a dance game where a valid motion can only start when a music note begins
and it can only last for a maximal amount of time), then you should use them instead of controller button
events. The advantage is that one doesn’t have to hold a button down while performing a motion.
Keep in mind that the way training motions were recorded determines the kind of classifier you can build
from them and how that classifier should be used in game. Specifically, standard motions should only
be used to build standard classifiers. A standard classifier begins and ends classification at the exact
moments when a valid motion starts and ends. This means that the same signaling events used to mark the
boundaries of training motions at development time should be used at run time for classification. For
example, if you held the X button on the controller to begin and end recording of each training motion, you
should do the same when performing a motion to be recognized in game (the button events would be used to
start and end classification of an in-game motion). Again, the signaling events for recording and classifying
can be “virtual buttons” like game contexts that don’t involve real buttons at all.
It is sometimes desirable to simply let the classifier classify continuously on an incoming stream of motion
data. This way, the classifier decides if and when a possible valid motion just started, and if and when it ends
with or without a positive identification. This means that the player is not required to do anything to indicate
(e.g. use button press) or to monitor (e.g. watch for game events) motion boundaries while playing. This of
course makes it a much harder job for the classifier to do well. LiveMove 2 refers to these types of classifiers as
buttonless classifiers (so as to distinguish them from their standard counterparts). In order for LiveMove 2 to
build buttonless classifiers, the training motions need to be recorded as buttonless motions.
You can specify mode as lmsRecordingModeBUTTONLESS when creating an empty buttonless motion
object to record into. Recording a buttonless motion requires that no on-device button be used to mark the
motion’s boundaries while recording. This is to ensure that the way training motions are performed matches
the way in game motions are to be performed by the player. Since no button presses are needed to inform
LiveMove 2 of motion start and end when using a buttonless classifier in game, the same should be true when
recording training motions.
lmRecorder offers one convenient way to record buttonless motions by using an extra motion device. Its
sole purpose is to provide signaling events that mark the boundaries of each recorded buttonless motion. You
can also use a suitable game event to serve the same purpose. See Section 4.5.3 for more detail on how to
record buttonless motions.
• Every new value read from lmsPad must be passed in temporal order to LiveMove 2 using
lmsMotionAppendElement. Also, make sure that samples are popped from lmsPad fast enough
so that it does not overflow. This is important. Please verify it is happening correctly in your code.
• The lmsMotion object grows in memory size for each lmsMotionAppendElement call.
• The lmsMotion object stops growing in memory size when lmsEndRecording is called.
• Once recording is complete, the lmsMotion object represents the motion that was performed between
the calls lmsBeginRecording and lmsEndRecording.
• This approach generates standard motion data that can be used to build a standard classifier. But this
data should not be used to build a buttonless classifier.
• Issue a small sequence of lmsMotionAppendElement calls before the motion starts and a small
sequence after the motion ends. The start and end of a motion may be marked by a signaling event like
a button on a secondary input device.
Recording a window of 0.25 seconds of motion data before and after the motion should be sufficient.
• The motion sensor must be relatively still for approximately 0.1 second before the motion starts.
“Relatively still” means that the overall force magnitude on the motion sensor, minus gravity, should
be less than about 0.3G.
This ensures that LiveMove 2 does not miss potentially important data at the beginning of a motion.
For example, lmRecorder displays RECORDING PREVENTED if this condition is not met, and
throws the data away.
• Have a rolling buffer that stores the most recent 0.25 seconds of motion data samples.
• Call lmsBeginRecording and lmsSetMotionStart when the motion proper starts. Then make a
sequence of lmsMotionAppendElement calls to push all the buffered data into the motion.
• Call lmsEndRecording when the motion proper ends.
For your convenience, lmRecorder has an option to record buttonless motions that implements all the
above functionalities. Motion boundaries (i.e., the start and end of motion proper) are marked by holding the
trigger button down on a secondary (non-recording) Wii Remote.
To save a motion in storage media, serialize the lmsMotion object into a buffer and write the data to a file
(or something equivalent).
To load the object from storage media, read the file into a buffer and unserialize the object.
For example:
// Clean up
lmsDeleteSerial( serial );
}
// Unserialize motion
lmsMotion* motion = lmsUnserializeMotion( buf, size );
return motion;
}
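The load-from-media half of this sequence can be sketched with standard C file I/O. Only the buffer handling is shown — the unserialize call then follows as in the fragment above — and read_whole_file is a hypothetical helper name, not part of the LiveMove API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Read an entire file into a freshly malloc'ed buffer (caller frees).
   The buffer and size would then be handed to an lms* unserialize call. */
unsigned char *read_whole_file(const char *path, long *outSize)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);          /* total file size in bytes */
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = (unsigned char *)malloc((size_t)size);
    if (buf && fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf);                 /* short read: report failure */
        buf = NULL;
    }
    fclose(f);
    if (buf) *outSize = size;
    return buf;
}
```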
lmMaker builds a classifier template from motion samples and stores it in a “.lmc” file. To create the template
in your game, load the file into a buffer and unserialize an lmsClassifierTemplate object.
A classifier template is a set of static rules for classification and does not change unless tuning is per-
formed. It is used to create an lmsClassifier object that actually handles classification sessions
(lmsClassifier is explained in the next section).
For example:
return ct;
}
Q. Can I use a standard classifier without buttons? – See Section 4.13 and answer 11.
The sample application lmsClassify.cxx in the LiveMove 2 sample code directory
ailiveBase/sample/lmsSamples shows you how to use the LiveMove 2 API to classify motions in game with a standard
classifier. A few things to note:
• The average time cost of simultaneously classifying 3D accelerometer motion sensor data from 8
lmsMotionDevice objects using a capacity 1 classifier running on a Nintendo Wii is 5% of a
1/60-second frame.
The cost is less than 10% of a frame 95% of the time.
• Costs scale linearly. So the average cost of classifying the data from a single lmsMotionDevice
object is under 1% of a frame.
For sample source code on how to use LiveMove 2 API to do standard classification, go to your
LiveMove 2 installation’s sample directory ailiveBase/sample/lmsSamples and take a look at
lmsClassify.cxx.
• Only call lmsBeginClassify and lmsEndClassify once at the start and end of a classification
session that might classify many motions.
For example, you might lmsBeginClassify at the start of a game and lmsEndClassify when
the game is over.
• lmsIsMotionActive tells you which of the following four states the current session is in.
For example, in a Wario Ware™ style game the player is given a countdown to begin a move. In this situation
the game can tell LiveMove 2 when the motion starts; and then LiveMove 2 tells the game when the move has
stopped.
For example:
...
do
{
if ( moveStarts )
{
lmsBeginActiveMotion( classifier );
// The next lmsUpdateClassifier is the start of the active motion
}
} while( !endOfClassificationSession );
...
For example, in a tennis game the player starts swinging before addressing the ball. LiveMove 2 tells the game
when the player starts moving; then the game tells LiveMove 2 the move has stopped when the ball hits the
racket.
It is also possible for the game to determine that a move has stopped via some heuristic that combines game
logic with queries to LiveMove. Queries such as the current best guess label, its likelihood and motion
progress and/or calls to lmsBestGuessIsSafe using a lmsMoveTree built from the classifier.
For example:
...
do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( lmsPadPop( WPAD_CHAN0, &sample ) )
{
// Update classifier with new status
lmsUpdateClassifier( classifier, &sample, NULL );
if ( moveEnds )
{
lmsEndActiveMotion( classifier );
// The next lmsUpdateClassifier is not part of the active motion
}
...
}
} while( !endOfClassificationSession );
...
Q. How do I collect examples for LiveMove-Game mode? – See Section 5.10.
• Intermediate queries incur approximately a 15% increase in CPU costs. So only call
lmsSetDoIntermediateClassification with true if you need it.
Q. How do I know when I should trust the best guess? – The move tree is designed to answer this
question. See the next section.
• Likelihood is a value between 0 (not likely) and 1 (highly likely). The sum of likelihoods for every
move, including lmsUndeterminedClass, is always 1. So you can interpret the likelihood as the
probability that the final classification will be classID.
• LiveMove 2 may guess badly at the start of a move. The best guess will change during the move.
First load the move tree data into Wii memory and unserialize it to create a lmsMoveTree object.
For example:
return theMoveTree;
}
IF
• the motion being performed on classifier ‘*c’ is an example of a move in the tree ‘*t’,
THEN
• the current label returned by lmsGetClassification is a correct early prediction of the true classification lmsGetMoveTreeAccuracy percent of the time.
• Use lmsBestGuessIsSafe to avoid writing code to initiate in-game events based on intermediate
likelihood values.
• You can poll lmsBestGuessIsSafe frequently. But the result does not change in between
lmsUpdateClassifier calls.
For example:
lmsBeginClassify( classifier );
bool endOfMotion = false;
lmsClassLabel earlyGuess = lmsUndeterminedClass;
do
{
    // Collect new status from controller 0
    lmsMotionDataSample sample;
    if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
    {
        continue;
    }
    // Update the classifier with the new sample
    lmsUpdateClassifier( classifier, &sample, NULL );
    // When lmsBestGuessIsSafe reports true, record the current best guess
    // (earlyGuess = lmsGetClassification( classifier )) and initiate the
    // early, move-dependent in-game event.
    endOfMotion = ...
} while( !endOfMotion );
lmsEndClassify( classifier );
lmsClassLabel finalClassification = lmsGetClassification( classifier );
if ( finalClassification != earlyGuess )
{
    // Either abort the early classification event or
    // accept the false positive rate.
    ...
}
...
• lmsGetIntermediateConfusionSet writes a set of class labels. The set represents the moves
that are currently confused with each other. LiveMove 2 hasn’t yet received sufficient motion sensor
data to know exactly which move the player is performing.
IF
• the motion being performed on classifier ‘*c’ is an example of a move in the tree ‘*t’,
THEN
• the confusion set returned by lmsGetIntermediateConfusionSet contains the true classification.
• You can poll lmsGetIntermediateConfusionSet frequently. But the result does not change in between lmsUpdateClassifier calls.
• But it is always possible that an intermediate confusion set may subsequently become unsafe. A con-
fusion set becomes unsafe when lmsGetIntermediateConfusionSet returns a set that is not
a subset of a previously returned set. This sequence corresponds to a traversal of the move tree that is
revoked.
You can use this condition to abort the move. This condition is more stringent than aborting when
lmsGetClassification() returns a sequence of lmsUndeterminedClass labels.
For example:
// include auto-generated header file from lmMaker to get class label enums
#include "boxing.h"
lmsBeginClassify( classifier );
bool endOfMotion = false;
do
{
    // Collect new status from controller 0
    lmsMotionDataSample sample;
    if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
    {
        continue;
    }
    // Update the classifier with the new sample
    lmsUpdateClassifier( classifier, &sample, NULL );
    // Query the current confusion set with lmsGetIntermediateConfusionSet;
    // abort the move if the set is not a subset of the previously returned
    // set. Free each returned set when finished with it:
    // lmsDeleteClassLabelSet( confusedSet ); // cleanup
    endOfMotion = ...
} while( !endOfMotion );
lmsEndClassify( classifier );
lmsClassLabel finalClassification = lmsGetClassification( classifier );
...
Different parameters to KPADSetAccParam reduce the amount of information that LiveMove 2 has
to work with.
The settings above must be replicated in your game; otherwise LiveMove 2 may not operate correctly.
• Watch out for motion sensor data loss at low frame rates. Recording motion sensor data from the
lmsPad at low frame rates (e.g. < 20 frames per second) results in data loss. LiveMove 2 will not
operate correctly on this data. The general principle is:
LiveMove 2 must receive all the data generated by the motion sensor during the perfor-
mance of a motion.
lmRecorder does not drop data. More specifically, it does not save motions if data was dropped.
But your game, especially during development in a debug build, may drop data from the Wii Remote.
LiveMove 2 will not operate correctly under these conditions.
Use lmsPad and make sure samples are popped fast enough so that it does not overflow.
• Be sure your start and stop criteria for motions are the same when recording as in-game.
For example, if you designed your game such that the player starts a motion by pressing the B button
and ends it by pressing it again, do not have players hold the B button to capture example motions.
Implement the same motion start/stop behavior in your data-capture application. Otherwise, small
differences can creep into your data that will result in degraded performance.
Likewise, when collecting examples for a buttonless classifier do not signal motion start/stop with
a button press on the recording controller(s). The force associated with the button press is usually
significant enough to be noticed and recorded. But this force is absent when buttonless classification
is used in game. Furthermore, button pressing can often affect how the user holds the device and how
she performs the motions.
• Use motion progress when you want to know “how far” a player is through the performance of a move.
For example, the progress of a “turn” or “twisting” move can control the rotation of a door handle.
For example, progress can drive the frame-rate for an animation that matches the current move. So you
can animate slow and fast punches.
Here’s an example of just one way of using motion progress in conjunction with lmsBestGuessIsSafe:
#include "myGame.h"
lmsBeginClassify( classifier );
bool endOfMotion = false;
do
{
    // Collect new status from controller 0
    lmsMotionDataSample sample;
    if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
    {
        continue;
    }
    // Update the classifier with the new sample
    lmsUpdateClassifier( classifier, &sample, NULL );
    // Once lmsBestGuessIsSafe reports true, use the motion progress of
    // the best-guess move to drive the animation frame for that move.
    endOfMotion = ...
} while( !endOfMotion );
...
                 Distribution
                 Mean    Variance
Low stability    High    High
High stability   Low     Low
Experiment with different stability parameters for different moves to give better feedback.
• The computation of an accuracy score incurs a negligible computational cost.
For example:
lmsBeginClassify( classifier );
...
...
lmsEndClassify( classifier );
lmsClassLabel const classID = lmsGetClassification( classifier );
• The quality of the accuracy score depends on the quality of the classifier. To get good accuracy scores
build a robust classifier that can correctly classify motions 95% of the time.
• Accuracy scores give better feedback if a move is easy to perform (e.g., above 95% recognition rate).
In this case, the accuracy score is predictable and consistent with how players feel about their motions.
If a move is difficult to perform then accuracy scores tend to be “jumpy” and can be more confusing
than informative.
• Often the accuracy score alone is sufficient for players to perform a trial-and-error “gradient ascent” to
improve their performance and get higher scores.
• Motion accuracy works best in conjunction with standard classification. However if a move is very
short and a button is used to mark the starts and ends of motions, human variability in the timing of
button presses can make the accuracy scores appear unstable.
• LiveMove 2 pays attention to many features of a motion. The accuracy score is a condensed numerical
summary of the player’s attempt. Without higher-level feedback it can be unclear why the player got a
particular score.
• You may want to prevent the recognition of a move. For example, in your game the player cannot
perform the move ‘wave flag’ if they are not holding a flag. But you do not want to create different
classifiers for holding and not holding a flag.
• Use lmsMaskMove to prevent a classifier from recognizing a particular move.
• If the player performs a masked move lmsGetClassification will normally return
lmsUndeterminedClass. But a move that is correctly recognized by an unmasked classifier may
get incorrectly recognized as one of the unmasked moves by a classifier in which it is masked.
• Masking a move reduces the time cost of subsequent lmsUpdateClassifier calls.
• All masks are cleared by calling lmsClearMasks.
• Do not mask moves or clear masks between calls to lmsBeginClassify and lmsEndClassify.
• Check whether a move is masked by calling lmsMoveIsMasked.
if ( gameContext == g0 )
{
    // Mask the moves that cannot occur in this game context
    // (using lmsMaskMove) before beginning the session.
    lmsBeginClassify( classifier );
    ...
    do
    {
        ...
        endOfMotion = ...
    } while( !endOfMotion );
    lmsEndClassify( classifier );
    lmsClassLabel finalClassification = lmsGetClassification( classifier );
}
• Once the classifier has seen examples of an individual player’s motion style then similar examples of
that player’s motions can be recognized more easily.
• Tuning lets you ship classifier templates with lower capacity (and therefore lower CPU costs) that will,
– not work as well compared to higher capacity classifier templates on a large population of players,
– but will work equally well or better for the player that has tuned the classifier template.
• To tune a classifier template ‘*ct’ for the move whose class label is ‘label’, pass a recorded motion ‘*m’ to lmsTuneClassifier.
• lmsTuneClassifier returns false if the classifier deems ‘*m’ not an acceptable example of the
move and rejects it.
• If lmsTuneClassifier returns true then ‘*m’ is accepted. The classifier template ‘*ct’ has been
tuned with it.
For example:
lmsClassifierTemplate* ct = ...;
lmsMotionDevice const* md = lmsGetMotionDeviceFromClassifierTemplate( ct );
lmsMotion* m = lmsNewMotion( md, lmsRecordingModeSTANDARD );
lmsClassLabel label = ...;
// Record the player's motion into ‘*m’, then pass it to lmsTuneClassifier
// together with ‘ct’, ‘label’ and numExamplesForClass (see liveMove2.h
// for the exact signature).
• The CPU cost of each call to lmsTuneClassifier can be expensive, often between 20 to 40 times
more expensive than a call to lmsUpdateClassifier. These calls need to be carefully placed so
they do not harm the pacing of your game. You may want to store tuning motions and tune the classifier
during a pause in the game (such as a loading screen).
• Tuned classifier templates maintain internal histories of the past tuning examples that are still valid. Up
to numExamplesForClass examples are kept for a class.
• More player examples may give better accuracy. In general if there are several ways to perform a mo-
tion (fast, slow, different directions) then let the player tune each of these ways for good performance.
• Increasing numExamplesForClass will increase the CPU costs of the tuned classifier template.
To maintain the CPU performance restrict the number of tuning examples allowed for each move with
the parameter numExamplesForClass passed to lmsTuneClassifier().
In our experience, setting numExamplesForClass to 1 or 2 is sufficient to create a tuned classifier
template that has an exceptional recognition rate for the player who tuned it.
• But tuning is more accepting (i.e. lmsTuneClassifier rejects fewer motions) if:
– more diverse examples of moves are given to lmMaker when creating the classifier template,
– the Tunability setting in lmMaker is increased, or
– the slack on the move is increased (also see Section 3.7).
Guidelines
• The change of orientation of the controller during a motion is important. But sometimes a change
of orientation can get lost in the middle of a forceful motion. The force of rotation is small compared
to the force of translation.
When this happens, two visually distinct moves may generate nearly identical data.
In this case, record the start and/or the end of the motion with the Wii Remote at or near rest. This helps
LiveMove 2 notice the change of orientation. In a few cases you may need to redesign your moves.
• LiveMove 2 motion recognition ignores whatever else the player might be doing.
Only the motion of the controller(s) matters. Secondary movements of your knees, legs, or head do not matter if they don't impact the controllers' movements.
Modifying the standard during collection is OK. For example, half-way through collecting you decide
there is not one but two “correct” ways (the gold standard motions) to perform a tennis serve. All your
subsequent examples must be similar to one of the two standard motions.
Use the slack setting to make a move easier to perform. Do not intentionally collect bad examples.
• The scope of your standard: The gold standard can be narrow or wide.
The move “any square” is a wide gold standard. You will accept varied square motions as valid exam-
ples of your gold standard.
The move “small, fast counter-clockwise square written on the floor starting in the top left-hand corner”
is a narrow gold standard. You will accept only this precise type of motion as valid examples of your
gold standard.
• Exaggerate your standard. LiveMove 2 notices forces. Forces cause changes in speed and direction.
So exaggerate your swoops, corners and wiggles. It gives LiveMove 2 more interesting data to work
with.
For example, you have "square" and "circle" moves. Make your squares have sharp edges.
Communicate your gold standard to people who provide examples at development time.
Communicate your gold standard to players who provide examples at game time.
The more varied your examples that conform to your gold standard, the better players’ initial experience
with an untuned classifier will be.
Tuning at game time ensures an individual player’s style is recognized.
• Collect examples in context. Compare performing a move in lmRecorder to performing the same
move in your game when you are about to defeat the final enemy.
People perform the same move in different ways depending on the context.
So it can help to collect examples directly from your game.
The LiveMove 2 run-time library provides functions that let you record and collect motions from your
game.
• Randomize collection order. People may perform moves in different ways depending on the order
they perform them, or how tired they get.
So getting people to provide examples in random order can help collect better data. This is why by default lmMaker enables random collection mode in the motion collection options dialog.
Warning: pay attention to what move lmRecorder asks the performer to do. It can be easy
to mistakenly perform the wrong move. A single incorrect example can degrade classification
performance.
• Negative examples.
Any of these might indicate a bad recording. If so the example should be deleted from disk.
• Use negative examples. You may have a move that is too easy to perform, but lowering its slack makes
it too hard to perform.
In this case try adding a negative example move.
For example, your move overhandThrow incorrectly accepts motions that are “sideways throw”.
Add a new move sidewaysThrow and collect examples. LiveMove 2 will distinguish between them.
In the game, when LiveMove 2 classifies a motion as a sidewaysThrow you can ignore it, or tell the
player they need to throw overhand.
[Figure 5.1: The four stages of a motion: start of motion; intermediate classification (move-independent feedback); early classification (transitional feedback); final classification (move-dependent feedback). The move can be aborted at any stage.]
The performance of a motion has four stages, as depicted in Figure 5.1. You can give different types of
feedback to the player, from generic to more specific, as the motion progresses through each stage.
• Intermediate classification.
At this stage, lmsGetClassification() != lmsUndeterminedClass but
lmsBestGuessIsSafe() == false. So in all likelihood the player is performing a known
move but LiveMove 2 is not yet sure what it is.
In this case, use lmsGetIntermediateConfusionSet to get LiveMove 2’s guess of which
moves the player might be performing. Transition from move-independent feedback to something
more specific, such as intermediate animations that correspond to the confusion set. For example, if
the confusion set is (shieldRaise,daggerThrow) then play an animation segment that is com-
mon to both raising a shield and throwing a dagger.
• Early classification.
At this stage, lmsBestGuessIsSafe() == true. So LiveMove 2 thinks it likely that the player
is performing lmsGetClassification() given the tree accuracy.
In this case, transition to move-dependent feedback, such as finishing animations for the move. For
example, if the best guess is daggerThrow, and it is safe, then play the final segment of a dagger
throw.
The more separable your move design (see Section 5.6) the earlier the player can get move-dependent feed-
back.
To classify motions as early as possible design your moves to initially generate very different
motion sensor data.
Sometimes moves that look visually different generate similar motion sensor data. So lmMaker provides
two features to check whether your move design supports early classification:
• Compare the raw motion data generated by different moves by selecting Project->View
Motions.
For example, graphs of a “2” and a “3” will normally generate similar X,Y,Z plots during the first half
of the move, which indicates that LiveMove 2 will have a harder job unconfusing them.
• Select Move Tree->Build Move Tree to check at what stage of motion completion
LiveMove 2 can unconfuse your moves.
For example, the moves “2” and “3” will normally group into a confusion set that persists late into the
move.
You may need to redesign your moves if you discover that groups are confused for too long.
• the move tree is dependent on your classifier and your examples: if they change, then the move tree
must be rebuilt; and
• the move tree needs at least 50 examples for each variation of your gold standard for each move in your
move set in order to generate useful predictions.
So before fully committing to an animation design that corresponds to the move tree you should
• spend as much time as needed to collect good examples that perform well for all users; and
• extensively test your classifier, including iteratively adjusting slack values and perhaps capacity value,
so that you have a satisfactory classifier for your game.
• A motion becomes active when M exceeds the specified force threshold. M is the magnitude of acceleration minus gravity and serves as a simple estimate of the controller device's acceleration without knowing its orientation. LiveMove 2 uses this thresholding scheme because the orientation of the device is generally not known in classification mode. 1
• A motion becomes inactive when the classifier either
– aborts a false move, or
– terminates a valid motion.
• In many cases LiveMove 2 supports “chaining” of buttonless moves in an uninterrupted sequence.
But sometimes LiveMove 2 may not abort a false move sufficiently quickly. So the start of a valid move
can get lost in the end of a false move.
For example, if the player initiates a false move then immediately transitions to a valid move, the valid
move can get split across segments. In this case, LiveMove 2 will not recognize the move.
In general, buttonless recognition deteriorates as the pauses between moves get shorter. Therefore:
Buttonless recognition works best when your moves are separated by natural pauses.
gravity reading and less sensitive along axes that are orthogonal to gravity. This is more noticeable with small threshold values and less
so when thresholds are higher. Generally we expect applications to build buttonless classifiers with a reasonably high threshold (> 0.5g
such that the player’s unintentional jitters do not trigger classification). Buttonless classifiers built for moves with strong starting force
also perform better in general.
• A high-force threshold applied to moves with low-force starts delays feedback to the player.
For example, you set the force threshold so high that 70% of a “whipping” move is discarded. So
LiveMove 2 signals that start of a “whipping” move just before the final high-force crack. In this case,
feedback given when lmsIsMotionActive() == true may feel too late.
• Buttonless recognition combined with early classification requires more stringent move design.
In this case, you want LiveMove 2 to quickly recognize the start of motions and quickly unconfuse
them.
In general,
Buttonless recognition works best with early classification when every move starts forcefully in a
different direction.
The different directions do not need to be orthogonal but can be characteristic curves, like clockwise
and counterclockwise motions.
• Avoid grouping moves of low and high force in the same buttonless classifier.
In this case, LiveMove 2 may tend to delay aborting false moves.
• Avoid grouping moves with significantly shared preambles in the same buttonless classifier.
For example, combining the moves “2” and “3” in the same buttonless classifier increases the risk that
LiveMove 2 will recognize examples of “3”s as “2”s.
In practice, however, examples of “2”s are not strict subsets of examples of “3”s. So the significance
of this problem depends on your data.
• A buttonless classifier normally requires more examples to get the same classification accuracy as a
standard classifier.
• If you anticipate that moves are allowed to be executed at varying speeds in game, it is important
to collect training examples that are performed at varying speeds. This will help the classifier better
determine the end of a motion.
• Build a classifier.
Consider the “numbers” project of Chapter 2. Everyone knows what numbers look like. People will perform
numerals in a very similar style. In this case, there is a small number of narrowly scoped moves with easily
understood descriptions. So your interaction with LiveMove 2 may be a few 10-minute iterations with data from 2 or 3 people plus some play testing.
Consider a project that has 20 or 30 different moves that are less well understood, such as “lasso”, “whip”,
“any square”, and “chicken dance”. You define the gold standards to be very wide.
In this case, you must collect more data from more people. Expect more iterations of 20 - 40 minute sessions
with each person, and more testing.
For production-quality classifiers try this workflow:
1. Collect correct but varied examples from the current data provider (i.e. performer);
3. Repeat the previous steps to collect from at least 3-4 people for small move sets (containing <= 5 moves) or from 6-8 people for larger move sets;
4. If the classification accuracy of certain moves in the current classifier falls below 90%, collect data from more people or try to figure out whether there are problems with your data (see Section 5.4);
5. For the next data provider, ask him or her to test the current classifier in lmMaker first.
• If he or she is recognized well by the current classifier, you may decide to skip to the next data
provider or simply collect just a few examples per move from him or her.
• If he or she is not recognized well, repeat steps 1 and 2.
6. You may stop collecting data if new data providers are consistently recognized well by the current
classifier (before their data is collected).
(Remember that tuning at game time can adapt the classifier to an individual’s movement style.)
Once all the examples have been collected and the classifier is built:
• If you opt for run-time tuning then set tunability to the suggested capacity.
This means that a game player’s experience with the classifier during tuning will be identical to the
experience obtained during development time.
Otherwise set tunability to 0 to make an untunable classifier that consumes less memory.
• Reduce the suggested capacity if it exceeds the available CPU budget allocated to LiveMove 2 . The
classification accuracy of the untuned classifier will decrease. But the classifier will perform exactly
as you expect when tuning.
Once tuned to an individual player it will consume fewer resources yet also recognize the individual player very well (see Section 4.17).
• Game-LiveMove mode: The game tells LiveMove 2 when a motion starts but LiveMove 2 tells the game
when it ends. See Section 4.10.1.
• LiveMove-Game mode: LiveMove 2 tells the game when a motion starts but the game tells
LiveMove 2 when it ends. See Section 4.10.2.
To maintain data consistency (see Section 4.13) it is important that recording take place under identical or
similar conditions to those active when classifying. Therefore these mixed modes normally require that
motion examples be collected in-game, not via lmRecorder .
Use lmsBeginRecording() and lmsEndRecording() to record examples directly from your game
(see section 4.5).
For example, if the game tells the player to start moving after a specific event (e.g., a countdown timer, or
when an on-screen character hits a jump etc.) then it is important that examples are recorded when the person
providing examples is presented with the same situation.
For example, if the game stops a move after a specific event (e.g., when the ball hits the bat on-screen) then it
is important that examples are recorded when the person providing examples attempts to time their motions
to hit the ball.
The aim in both cases is to record motion examples that correspond to the precise motions that are performed
under game conditions.
Tracking
LiveMove 2 can track the position and orientation of a Wii Remote with an attached MotionPlus.
First, you need to initialize the LiveMove 2 library, create a motion device with the appropriate motion device
type and then create the tracker object.
lmsOpen();
lmsMotionDevice* md =
lmsNewMotionDevice( lmsMotionDeviceTypeMOTIONPLUS );
lmsTracker* theTracker = lmsNewTracker( md );
Next you need to set up lmsPad. It is AiLive's device-independent driver – a very thin layer on top of Nintendo's libraries that provides some important features like enhanced buffering. The following example sets up lmsPad for a MotionPlus controller.
In your main loop, you must always update the tracker with the latest lmsMotionDataSample obtained from the lmsPadPop(...) driver function.
It is important that you always update the tracking, even when you’re not explicitly going to use the tracker’s
output. This is necessary because the tracker has some internal variables that are cheap to update and allow
the tracker to start tracking in an instant.
The tracker is always tracking the orientation of the controller so you can get orientation estimates at any
time.
You will, however, need to tell the tracker when to start tracking 3D positions and it will reset the starting
position estimate to (0, 0, 0):
lmsBeginPositionTracking( theTracker );
You might want to begin tracking position in response to a button press, an in-game event or some other
criteria. You might also want to take into account the tracker’s readiness to start position tracking. If you start
tracking before the tracker is ready, the output is likely to be less accurate.
In normal operation, the tracker will quickly become ready to track position after being held still for just a
fraction of a second (e.g. 1/5 of a second). You can check if the tracker is ready to track position from this
point forward by calling:
The function returns true if the tracker is ready, and false otherwise. See the liveMove2.h header file for more details.
Use the following calls to get the current estimate of position (during position tracking) and orientation (at
any time) from the tracker:
If you prefer, you can also get the orientation represented as a matrix, check the liveMove2.h header file
for details.
During tracking, it is also important that you check for buffer overflows in lmsPad and take appropriate
action.
if (lmsPadGetOverflowCount( WPAD_CHAN0 ))
{
// You need a bigger buffer or a faster game. Tracking results
// will be poor.
lmsPadResetOverflowCount( WPAD_CHAN0 );
}
When you’ve finished position tracking, you also need to tell the tracker. Otherwise, the position estimate will
never get reset and the drift will swamp the output. Ending position tracking also avoids wasting unnecessary
CPU cycles.
lmsEndPositionTracking( theTracker );
...
// always update the tracker with new data samples
lmsUpdateTracker( theTracker, &sample );
if ( startPositionTrackingEventHappened )
{ // make the tracker return positions relative to ‘‘loc’’
lmsVec3 loc = { 1.0f, -2.0f, -3.45f };
lmsBeginPositionTracking( theTracker );
lmsSetLocation( theTracker, loc );
}
if ( lmsIsTrackingPosition( theTracker ) )
{ // Read position from tracker. p will be offset by ‘‘loc’’
lmsVec3 p = { 0.0f, 0.0f, 0.0f };
lmsGetLocation( theTracker, &p );
}
if ( endPositionTrackingEventHappened )
{
lmsEndPositionTracking( theTracker );
}
Values near 1 indicate high confidence. As values approach 0 you should take appropriate action. Appro-
priate action can include stopping tracking and encouraging players to return to the canonical rest position,
suspending tracking and encouraging players to pause, or imposing some bounding constraint.
For example, imposing a bounding constraint can be done by defining a bounding box. The game object
being controlled by the controller can then never stray outside of the bounding box. This avoids having the
player see objects fly off into the distance if the drift ever becomes too large.
If you do use some bounding constraint, then you need to tell the tracker about it so that it can potentially take advantage of the information. You can do this by telling the tracker to replace its internal estimate of position with the one supplied by you with a call to lmsSetLocation.
6.6.1 Gotchas
• It is possible to use KPADSetAccParam to change the way Nintendo’s KPAD library handles data
from the motion sensors in the Wii Remote. If you have set play_radius or sensitivity to
anything other than their default values of 0 and 1, respectively, then tracking behavior is undefined.
You can check the values with KPADGetAccParam and LiveMove 2 should warn you if they are set
to non-standard values.
• You need to reset the tracker object if data ever stops being sent from the controller (e.g. whenever the
controller is power-cycled).
lmsResetTracker( theTracker );
Appendix A
LiveMove 2 FAQ
1. Q. What is LiveMove 2?
• Use LiveMove 2 to make games that can recognize any motions performed with the Wii remote
(with or without the nunchuk or the MotionPlus ).
• You define moves (e.g. “circle”, “punch”, “lasso”) with example motions, not code.
• LiveMove 2 automatically recognizes the moves you define.
• LiveMove 2 can recognize any variations of a move as the same type.
• LiveMove 2 can recognize any motion that can be performed with a motion-sensitive controller.
For example, LiveMove 2 can recognize when a player hits a ball, draws a star in the air, or even
imitates a monkey.
• The recognition is robust and accurate across different users.
• With the MotionPlus accessory, you can also use LiveMove 2 to track the orientation of the
Wii remote as well as its relative 3D position. Position tracking is most accurate for fast mo-
tions that last 1-2 seconds, such as punches, sword swipes, etc.
LiveMove 2 makes it easy for developers to unlock the potential of motion-sensitive controllers to create
new and exciting game play.
2. Q. What is in LiveMove 2?
LiveMove 2 has 2 major components:
• A run-time library libLM that runs on the Nintendo Wii, links with your game code, and performs
fast motion classification in real-time.
• A development-time GUI application lmMaker that helps you collect and organize motion ex-
amples that are used to build classifiers for your game.
• Your organization.
• The version number of LiveMove 2.
• The version number of relevant Nintendo Wii SDK, firmware, hardware and your host PC OS.
• How to reproduce the bug.
• If the bug was a crash, the exact text that was printed at termination.
5. Q. How do I get started?
Please read the User Manual, in particular the Introduction (Chapter 1) and the Tutorial (Chapter 2).
6. Q. How do I define moves for my game?
You define them by providing examples. You perform the move and record the motion data, and you
get others to do the same.
7. Q. Who can create moves?
Anyone can provide examples to LiveMove 2 .
One of the key benefits of LiveMove 2 is that anyone who can use a Wii Remote can provide motion
examples. So any person in a games studio can design and create moves.
The LiveMove 2 tools have been designed to be easy to use. The task of designing a move set, collecting
examples, and building a classifier for testing can be performed by anyone.
8. Q. What is the difference between a move and a motion?
A move is a type of motion. You define moves by giving example motions.
For example, a ‘square’ is a move that you might define for your game. Examples of a ‘square’ could
be motions such as ‘small square pointing at the floor’, or ‘large square pointing forwards and drawn
anti-clockwise’.
You have complete freedom to define how general or how narrow you want your move to be.
For example, you might decide that you want two square-like moves in your game: ‘small anti-
clockwise square’ and ‘large clockwise-square’. Then the motion ‘small anti-clockwise square pointing
down’ is an example of the first move, whereas the motion ‘large clockwise square pointing forward’
is an example of the second move.
The scope of the definition of a move is controlled by the motion examples you provide.
9. Q. At run-time what are the inputs and outputs of LiveMove 2?
The inputs are motion sensor data from any active Wii Remotes and/or attached Nunchuk and Motion-
Plus.
The outputs are move labels that describe what type of motion each player is performing. You define
the moves at development time.
LiveMove 2's motion recognition can provide zero-lag feedback at any time during a motion. Such
feedback includes the best-guess label, motion progress, the set of possible moves, and more. Please
read the Run-time API (Chapter 4) for more detail.
If orientation and/or position tracking is enabled, you will also get orientation and position data.
10. Q. Does LiveMove 2 require that players press buttons?
No.
LiveMove 2 supports standard and buttonless classification modes.
In standard classification mode, the application tells LiveMove 2 when a motion starts and ends. You
can choose a method by which your application identifies motion boundaries without button presses.
See answer 11.
In buttonless classification mode, LiveMove 2 tells the application when a motion starts and stops. No
button presses are needed at run-time.
• Get the player to press or hold a button (e.g., the B button on the Wii Remote).
• Get the game to tell the player when to perform a move in a given time limit (e.g., ‘READY’,
‘STEADY’, ’GO!’).
• Detect when the force on the Wii Remote satisfies start and stop threshold conditions.
19. Q. How many motion examples should I record for one move?
LiveMove 2 considers all the examples it is given. In general, the more examples the better, on condition
that the examples conform to your gold standard.
A wide gold standard by definition requires more varied examples than a narrow gold standard. In
practice, this usually means that a wide gold standard has more examples.
A narrow standard on a move set that is well defined and understood by most people, like “numbers”,
may require fewer than a hundred examples (i.e. just a few minutes of work).
A wide standard on a vague move set, like “cowboy moves”, may require several hundred per move.
20. Q. How many different ways of performing a single move should I record?
You should collect all the different kinds of examples that conform to your gold standard.
For example, if your gold standard is ‘a big, clockwise square with my arm pointing straight out’ then
collect different examples that conform to this definition.
So you will vary speed, the initial orientation of the Wii Remote, etc. But you will always perform a
big, clockwise square with your arm pointing straight out.
This is an example of a reasonably ‘narrow’ gold standard.
If your gold standard is ‘any square’ then as before collect different examples that conform to this
definition. But, in this case, the gold standard is wider in scope.
So in addition to varying speed, initial orientation etc., also vary the direction your hand is pointing
(up, down, left, right), whether clockwise or anti-clockwise, and the spatial extent of the square.
This is an example of a ‘wide’ gold standard.
LiveMove 2 gives you a lot of flexibility in defining and creating your moves: you can make your gold
standard as narrow or as wide as you wish. See Section 5.2.
If you add examples to your project that do not conform to your gold standard, you are changing the
definition of your move without realizing it.
Chapter 5 of the User Manual contains guidelines for collecting examples.
21. Q. Players will perform the same move in different ways. Do I need to consider all the different
ways of performing the same move in advance and add them as examples to LiveMove 2?
You should think of all possible motions that conform to your gold standard and add them as examples.
For example, if the move is a “tennis serve” but you don’t wish to distinguish between an ‘overarm
serve’ and an ‘underarm serve’ then both kinds of examples must be included in your classifier project.
However, LiveMove 2 will generalize from the examples it is given. So you do not have to anticipate
every detailed variation that players might try.
22. Q. When collecting new examples, how should I decide to accept or reject an example motion?
At collection time, accept or reject examples based solely on whether they conform to your idea of the
gold standard. Watch closely as the performer performs each motion and determine whether that motion
fits your gold standard.
Once the new example motions are imported into the project window, they will be automatically classified
by the current classifier (i.e., the classifier built before the import). It is normal for the new motions
not to be classified correctly: the current classifier has not fully learned your gold standard yet. In fact,
that is why you are giving new examples to LiveMove 2 – to teach it your gold standard.
For example, the current classifier may never have seen an ‘underarm serve’ as an example of the move
‘serve’, so it may classify this example as undetermined, or some other move. You should accept it if
it fits your standard. Likewise, if the classifier accepts a motion as a serve, but that motion does not fit
your gold standard, then you should delete the motion.
The decision on accepting or rejecting an example is crucial because this is how you define a move to
LiveMove 2.
23. Q. How do I choose which individual examples to give to lmMaker to make a classifier with the
highest accuracy?
Do not make this decision. This is LiveMove 2’s job.
Tweaking the composition of your set of examples to optimize the accuracies you see in lmMaker is
a mistake.
Instead, simply design and collect motions that fit your gold standard, then feed them all to LiveMove 2.
The best way to ensure good classification is to ensure all the data you give to lmMaker fits your gold
standard. A few bad examples can cause LiveMove 2 to misfire.
25. Q. I have 3 different ways of performing the same move. One way is a standard way, but the
other two ways are more extreme, and I think are less likely to be tried by game players. Should
I record more standard examples compared to extreme examples?
No. We recommend that you keep reasonably balanced ratios between the different ways of performing
a move.
You must include extreme motions if you want these examples to be included in your definition of the
move (the gold standard).
If the ratio between usual and extreme examples is very imbalanced LiveMove 2 may decide it is not
worth representing the smaller set. In this case, increase the number of examples of the smaller set, or
increase the capacity of your classifier.
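A quick sanity check for this balance can be sketched as follows. The 3:1 limit and the function name are illustrative assumptions, not LiveMove 2 constants.

```c
/* Sketch: flag a move whose variant example counts are badly skewed.
 * The 3:1 limit is an illustrative assumption, not a LiveMove 2 value. */

#define MAX_BALANCE_RATIO 3.0f

/* counts[i] holds the number of examples collected for variant i of a
 * move. Returns 1 if the largest-to-smallest ratio exceeds the limit. */
int isImbalanced(const int counts[], int numVariants)
{
    int i, min = counts[0], max = counts[0];
    for (i = 1; i < numVariants; i++) {
        if (counts[i] < min) min = counts[i];
        if (counts[i] > max) max = counts[i];
    }
    if (min == 0)               /* a variant with no examples at all */
        return max > 0;
    return (float)max / (float)min > MAX_BALANCE_RATIO;
}
```

A helper like this can be run over your collection logs before a build, prompting you to record more examples of whichever variant is under-represented.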
26. Q. In my buttonless project, almost any short motion I make gets recognized as one of the moves,
what should I do?
This can happen when your buttonless project has a high force threshold (i.e. > 1.0) and you have
collected some short or weak example motions for a move.
A high force threshold means that example motions that are short, or weak with weak starts, are
partially cut off by the threshold. Effectively, you end up with some very short “leftover” example
motions for the move, which can match almost any short motion.
To deal with this, you can either:
• View all the motions in the problem move set and find the ones that are cut short by the force
threshold and remove or delete them; or
• Experiment with a lower force threshold.
27. Q. When testing my buttonless classifier, I notice a lag between the end of motion execution and
the final classification result. Is that normal, and how can I eliminate it?
Since a buttonless classifier must determine if and when a motion has ended, it is not unusual that
sometimes it needs to take extra samples in order to be sure that the motion has ended. This is especially
true when you don’t have enough training examples. However there are ways that you can eliminate or
at least minimize the lag.
One way you can try dealing with the lag is to use the “LiveMove-Game” classification mode described
in Section 4.10.2. In this mode, your game can determine when a move ends before LiveMove does.
Otherwise, you can try the following:
(a) Collect more examples, performed at varying speeds, for the move(s) that exhibit lag.
(b) Import the newly collected motions into your project and click ‘suggest capacity’ to get the
lmMaker-recommended capacity value.
(c) Build and test the classifier. If this has fixed your problem, you can skip the remaining steps.
Otherwise,
(d) Double the current capacity value.
(e) Build and test the classifier.
(f) Repeat the previous two steps (d and e) until you no longer notice any lag or are satisfied with the
result.
(g) Experiment with a lower capacity value (a value between the suggested and the current value)
until you find one that is just high enough to eliminate any noticeable lag. This way you will not
incur unnecessary CPU and memory cost.
• Tuning modifies the accept and reject behavior of a classifier at run-time by using the player’s
motions as new examples.
• Once the classifier has seen examples of an individual player’s motion style then similar examples
of that player’s motions can be recognized more easily.
• Tuning lets you ship classifiers with lower capacity (and therefore lower CPU costs) that will
– not work as well compared to higher capacity classifiers on a large population of gamers,
– but will work as nicely as a high capacity classifier for individual game players.
• Tuning allows a classifier to be adapted to work for players that are different from those seen
during internal testing during development time.
Typically, you will create a classifier at development time and then a player will tune it before they play
the game (or during the initial stages of the game).
The tuning process could be presented to the player as a short training session, or it can be hidden
from the player.
Whether a tuning step is necessary for all the moves in your game depends on the game design. For
some moves, like a secret “super-smash” tennis serve, you may want your players to learn how to be
recognized (rather than the game learning how to recognize them by tuning).
Please read Section 4.17 of the User Manual.
Get a pro-tennis player to provide examples of tennis moves. Build a classifier from their data alone.
Keep slack low. Do not allow tuning. Performing these moves will be challenging for the average game
player.
Get lots of average people to provide lots of varied examples of tennis moves. Experiment by increasing
the slack and build the classifier. Stop collecting examples when new players are easily recognized.
Allow tuning with a high tunability setting (equal to or higher than the recommended capacity of the
classifier). Performing these moves would be easy for the average game player.
These are two extremes, and there is a continuum between them. A game can incorporate the full range
of difficulty levels. For example, you can build different classifiers to present different performance
challenges.
You should consider different ways and orders for collecting motions to improve recognition. See
Section 5.3 of the User Manual.
39. Q. I added some more examples but now one of my moves is hard to reproduce. What do I do?
Check the newly added examples and make sure they are correct.
Try collecting more motion examples for the few hardest-to-reproduce classes and re-build the classifier
(Section 5.4 of the User Manual).
41. Q. I have a lot of examples from lots of people. Which examples should I use?
Eliminate all data that does not fit your gold standard. Feed all data that fits your gold standard to
lmMaker . LiveMove 2 will do the rest. See Section 5.4 of the User Manual.
42. Q. I gave an example of a counter-clockwise circle, but my classifier doesn’t recognize it any
more. What do I do?
LiveMove 2 will generalize over the examples it sees to create a classifier that optimizes performance
both within the known examples, and for the unseen data from your as-yet unseen players.
If your ratio between clockwise and counter-clockwise circle examples is very imbalanced,
LiveMove 2 may decide it is not worth representing the smaller of the two sets in the classifier.
In this case, increase the number of examples of the smaller set, or increase the capacity of your
classifier.
In general it is good to keep reasonably balanced ratios between the different ways of executing each
move in your classifier.
43. Q. I built a classifier from lots of different people, but it still doesn’t work well when someone
new tries to perform the moves. What do I do?
Don’t panic. It may take 5 or 6 different body types before your classifier has a broad enough base of
examples to perform well on a new person.
Some things to try:
47. Q. How do I eliminate lag between the player’s movement and on-screen events?
Any noticeable lag between the player’s movement and on-screen events can be entirely eliminated.
LiveMove 2 supports a range of motion-control use cases. Here are two extremes:
• The player presses a button to signal the start of a motion. The motion is performed for 2 seconds.
Then the player releases a button to signal the end of the motion. At this point LiveMove 2 returns
the final classification, which invokes an in-game event.
In this case the lag between the initial movement and in-game event is 2 seconds.
• The player simply moves the Wii Remote in a star shape to cast a magic spell. As soon as the
player moves, the Wii Remote rumbles and a star-spell animation unfolds on-screen, synchronized
with the player’s motion.
In this case there is no noticeable lag.
See Sections 3.10, 4.11, 4.14, 5.5 and 5.6 of the User Manual.
48. Q. Are CPU resources wasted if a single classifier can recognize lots of moves?
No. A classifier’s capacity is the main factor that affects CPU costs because the capacity is shared
across the number of moves that can be recognized. But you may be wasting capacity.
For example, consider a game that includes a fishing mini-game and a cooking mini-game. Capacity is
wasted if a single classifier is used to recognize both fishing and cooking moves because the different
kinds of moves are never performed in the same context. In such cases it is better to “divide and
conquer” and build two classifiers – one for recognizing fishing moves, and one for recognizing cooking
moves.
Appendix B
Applications
1. If there is any chance you will need to continue to use your motions in the previous product, make sure
to create a backup of your motion files before converting. The conversion process is irreversible.
2. Locate lmsConvertMotions.exe in the LiveMove 2 installation’s bin directory. You can view the
help message by typing lmsConvertMotions.exe -h in a command shell window.
3. Run lmsConvertMotions.exe -path <full path> where “full path” specifies the full path
to the top directory holding motions. This will convert all the old motions in the specified directory
to the LiveMove 2 format in-place. You should be able to see some text output while the motions are
being converted.
By default, when Collect->Collect Motions is selected, lmMaker performs the following com-
mands in order:
• If a command is specified in “Environment”:
(a) Start a new process by sending the command in “Environment” to the operating system, to
execute in lmMaker’s working directory.
(b) Append lmRecorder arguments depending on the type of collection specified to the text
entered in “Command Line”.
(c) Send the combined text string, plus “additional arguments”, to the standard input for the
environment process.
• Otherwise:
(a) Append lmRecorder arguments depending on the type of collection specified to the text
entered in “Command Line”.
(b) Start a new process by sending the combined text string, plus “additional arguments”, to the
operating system.
When collection ends lmMaker then performs the following commands in order:
The default values work for the majority of Wii game development systems. If your system differs, un-check
Use Default Options to make all options editable, and either change the settings or manually invoke
lmHostIO.exe and lmRecorder.
Note that the controller should be powered off either manually or by exiting lmRecorder using the Home and
Plus buttons before closing the collection window in lmMaker. Failure to do this will result in the controller
being left powered on.