
LiveMove® 2
User Manual

AiLive™
www.AiLive.net
Machine Learning for Games

© 2010 AiLive Inc.

All rights reserved. No part of this manual may be reproduced or transmitted in any form or by any means
without the written consent of AiLive Inc.

AiLive, LiveMove, LiveMove Pro, LiveMove 2 and LiveAI are either registered trademarks or trademarks of
AiLive Inc. in the United States and/or other countries. Other product and company names mentioned
herein may be the trademarks of their respective owners.
Contents

1 Introduction to LiveMove 2 1
1.1 Motion Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motion Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Key concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 LiveMove components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 The structure of the manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Tutorial 5
2.1 Check requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Understand the data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Start lmMaker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Design your moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Create a classifier project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Collect motion examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.7 Build a classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.8 Test the classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.9 Tune the classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.10 Build a buttonless classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.11 Test the buttonless classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.12 What’s next? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 lmMaker 23
3.1 Data organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Project window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Creating a new project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Classification Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.5 Project operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.6 Collecting example motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6.1 Recording control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.6.2 Collecting directly from your game . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7 Building classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.7.1 Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7.2 Tunability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.7.3 Capacity and tunability trade-offs . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.7.4 Slack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.7.5 Buttonless classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.8 Testing classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.9 Debugging classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.9.1 Outliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9.2 View motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9.3 Motion Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.10 The move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.1 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10.2 How to read the move tree output . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.10.3 Building a move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.10.4 Move tree quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.11 Move label file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.12 Display options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.13 lmMaker command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

4 Using LiveMove 2 in your game 51


4.1 LiveMove 2 initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Creation of motion devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 lmsPad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.1 Sample Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.2 MotionPlus Controller Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.3 Data Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.4 Data Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4 Tracking motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.5 Recording motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.5.1 Recording Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5.2 Recording standard examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.5.3 Recording buttonless examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6 Saving and loading motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.7 Loading classifier templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.8 Performing Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.9 Standard classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.10 Buttonless classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.10.1 Mixed buttonless mode: Game-LiveMove mode . . . . . . . . . . . . . . . . . . . . 62
4.10.2 Mixed buttonless mode: LiveMove-Game mode . . . . . . . . . . . . . . . . . . . . 63
4.11 Early classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.12 Using a move tree with early classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.12.1 Loading a move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.12.2 When the best guess is safe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.12.3 When intermediate groups are safe . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.13 Data capture consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.14 Motion progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.15 Motion accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.16 Masking moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.17 Classifier tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.17.1 Adding tuning examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.17.2 Rejected tunings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.17.3 Saving a tuned classifier template . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.18 Associating metadata with LiveMove 2 objects . . . . . . . . . . . . . . . . . . . . . . . . 76

5 Guidelines 77
5.1 Understand your data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Design your Gold Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3 Collect correct but varied examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.4 Check your data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.5 Give immediate feedback to the player . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.6 Design moves that diverge early . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.7 Build the classifier before the move tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.8 Understand buttonless recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.9 Suggested workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.10 Collecting examples for mixed buttonless classification modes . . . . . . . . . . . . . . . . 86
5.11 Integrating LiveMove into your tool chain . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

6 Tracking 89
6.1 Orientation Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.2 Position Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.3 LiveMove 2 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.4 Adding Tracking to Your Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.5 Starting Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.6 Dealing with Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.6.1 Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

A LiveMove 2 FAQ 93

B Applications 103
B.1 The PC Application lmsConvertMotions . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.2 The PC Application lmHostIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.3 The Console Application lmRecorder . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
B.4 Running lmHostIO.exe and lmRecorder from lmMaker . . . . . . . . . . . . . . . 104
Chapter 1

Introduction to LiveMove 2

1.1 Motion Recognition

Figure 1.1: LiveMove motion recognition.

• Use LiveMove 2 to build motion recognizers using example motions. No complex coding required.


• LiveMove 2 can recognize any motions performed with a motion sensing device.

• LiveMove 2 automatically recognizes the motions you define.

• The recognition is accurate across different users.

1.2 Motion Tracking


LiveMove 2 can track the position and orientation of a Wii Remote with an attached MotionPlus. See Chapter
6 for details on how to quickly and easily add motion tracking to your game.
More advanced motion tracking capabilities are still in development. As new features become available this
manual will be updated to provide documentation on how to use them effectively. In the meantime, the
majority of this manual (aside from Chapter 6) is devoted to motion recognition.

1.3 Key concepts

Figure 1.2: A motion is any movement of the Wii Remote (normally half a second to five seconds). In this
example the motion is a counter-clockwise full circle with the Wii Remote pointing forward.

Figure 1.3: A motion set is a collection of motions of the same type; the motions are similar but not identical. This motion set contains four motions, all counter-clockwise circles.


Figure 1.4: A move is a labeled motion set. In this example, the move is ‘circle’. You collect example
motions from different people to define a single move. A move is a player-level concept, like a sword thrust
or cross-court forehand.

Figure 1.5: You define moves and LiveMove 2 automatically builds a classifier to recognize them. A classi-
fier is a motion recognizer. Feed it a motion at run-time and it tells you what move it is.
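The relationship between these key concepts can be sketched as a toy data model. This is illustrative Python only; the actual LiveMove 2 run-time is a C/C++ library with its own types, and every name below is hypothetical:

```python
from dataclasses import dataclass, field

# A motion: a short recording of controller movement, stored here as a
# list of per-frame sensor samples (e.g. accelerometer x, y, z readings).
Motion = list

@dataclass
class Move:
    """A move is a labeled motion set: a player-level concept like 'circle'."""
    label: str
    motion_set: list = field(default_factory=list)  # similar, not identical, examples

    def add_example(self, motion: Motion) -> None:
        self.motion_set.append(motion)

# A classifier is built from moves; at run time it maps a new motion to
# the label of the move it best matches (or reports it as undetermined).
circle = Move("circle")
circle.add_example([(0.0, 1.0, 0.0), (0.1, 0.9, 0.0)])  # toy samples
```

The point of the sketch is the layering: motions are grouped into motion sets, a labeled motion set defines a move, and the classifier is built from moves.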

1.4 LiveMove components


lmMaker A PC application that builds classifiers from moves. The moves are defined by collecting motion
sets.

lmRecorder A Wii application that records motions from the Wii Remote, the Nunchuk, and the MotionPlus.
lmMaker runs it automatically when collecting motion sets and when testing classifiers.

Run-time Library The LiveMove 2 run-time library links with your game. You can create a tracking object
that allows you to track the movement of a Wii Remote with an attached MotionPlus. You can also load
classifier objects and use them to recognize motions.

Sample Code Shows you how to use LiveMove 2 run-time libraries.



lmsClassify A simple application that loads a classifier and uses it to classify motions.
lmsClassifyRSO Implements the same function as lmsClassify using libLM.rso instead of libLM.lib.
lmsTrack A simple application that creates a tracker and uses it to track the motion of a MotionPlus-
enabled Wii Remote.
lmsClassifyAndTrack A simple application that creates a tracker and loads a classifier. It uses the
tracker to track the motion of a MotionPlus-enabled Wii Remote and the classifier to simultane-
ously classify the tracked motions.
lmsClassifyButtonless A simple application that loads a buttonless classifier (See Section 3.4 for
definitions of classification modes) and uses it to classify a continuous stream of motion data
from the Wii Remote.
lmsRecord A simple application that shows you how to use the LiveMove 2 API to record motions.
lmsPad Device-dependent driver code that wraps your motion device’s low-level driver code and mar-
shals raw data samples from your device into the LiveMove 2 data sample format. You should
incorporate and use this code for all your game applications that use LiveMove 2.
lmTuner A Wii application that tunes a classifier to a specific game player.

Demo Applications Show you some of the capabilities of LiveMove 2.

trackerDemoGame A Wii application that allows you to try out motion tracking with a simple game
of skittles.
balloonPop A Wii application that allows you to try out motion recognition with a simple game of
drawing numbers in the air.

lmHostIO A PC application that listens on the Wii USB port and writes files out to the host PC. This program
is required when executing lmRecorder or lmTuner.

LiveMove 2 running on the Wii game console tracks the position and orientation of a Wii Remote with an at-
tached MotionPlus and recognizes motions performed with the Wii Remote, MotionPlus and Nunchuk motion
sensors.
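The lmsPad component above wraps a device's low-level driver and marshals raw samples into the LiveMove 2 sample format. Its role can be sketched as follows, with entirely hypothetical Python and field names; the real lmsPad is C/C++ and uses the actual LiveMove 2 sample format:

```python
from dataclasses import dataclass

@dataclass
class LMSample:
    """Hypothetical stand-in for the LiveMove 2 data sample format."""
    timestamp: float
    accel: tuple   # (x, y, z) accelerometer reading
    gyro: tuple    # (pitch, yaw, roll) rates from the MotionPlus

def marshal(raw):
    """What marshaling means here: translate one device-specific raw
    record into the common sample format the run-time library expects."""
    return LMSample(timestamp=raw["t"],
                    accel=(raw["ax"], raw["ay"], raw["az"]),
                    gyro=(raw["gx"], raw["gy"], raw["gz"]))

sample = marshal({"t": 0.016, "ax": 0.0, "ay": 1.0, "az": 0.0,
                  "gx": 0.1, "gy": 0.0, "gz": 0.0})
```

Because all device dependence is isolated in this layer, the rest of the run-time library never needs to know which controller produced the data.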

1.5 The structure of the manual


• Chapter 2 is a motion recognition tutorial.
• Chapter 3 explains lmMaker.

• Chapter 4 explains how to recognize motions on the Wii.


• Chapter 5 provides general motion recognition guidelines.
• Chapter 6 explains how to track motions using the MotionPlus.

• Appendix A is a FAQ that answers common questions about LiveMove 2.


Chapter 2

Tutorial

We introduce these activities:

• Create a classifier project with lmMaker.

• Record motion examples from the Wii Remote with lmRecorder in lmMaker.

• Build a classifier from the data with lmMaker.¹

• Test the built classifier in lmMaker.

• Tune a classifier to an individual using lmTuner.

2.1 Check requirements


Before starting the tutorial, ensure that:

• You have installed LiveMove 2.

• You can run programs on the Wii devkit.

See ailiveBase/ReleaseNotes.txt for up-to-date details of system requirements. If you encounter any difficulties installing LiveMove 2, please send email to support@ailive.net.
We refer to the LiveMove 2 installation folder as ailiveBase.

2.2 Understand the data flow


• Example motions are recorded and transferred to the host PC (Figure 2.1). Each motion is stored as a
.raw data file.

• lmMaker imports the motion files and builds a classifier (Figure 2.2).

• The classifier can be tested in lmMaker, lmRecorder, lmTuner, or your game.

¹ For convenience, we often shorten “classifier template” to “classifier” in this document.



Figure 2.1: Collect motion examples as .raw files. lmRecorder records the data from the Wii Remote.
lmHostIO helps transfer this data to the PC.


Figure 2.2: lmMaker receives the motion data and constructs motion sets. These are used to build a classifier.

2.3 Start lmMaker


The PC application lmMaker is the main classifier-development tool of LiveMove 2. It lets you define moves, collect motion examples, build classifiers, and test them on the spot.
Start lmMaker by double-clicking the lmMaker icon on your desktop, or ailiveBase/bin/lmMaker.exe in Windows Explorer.
Two windows appear: the lmMaker project window (Figure 2.3) and a command prompt for text output.
Two windows appear: the lmMaker project window (Figure 2.3) and a command prompt for text output.

Figure 2.3: What you will see after starting lmMaker.

2.4 Design your moves


In this tutorial we build a classifier to recognize the numerals 0, 1, 2, 3, and 4, drawn in the air with the
Wii Remote.
There are different ways to draw the numerals in the air. Before collecting examples we must define which
of these ways we will allow in our “game”.
Examine Figure 2.4. These diagrams represent our definition of the moves we are about to create. We call
these definitions the gold standard.

Figure 2.4: Draw these numerals in the air. The end of your Wii Remote should trace these lines.

The “gold standard” is your definition of a move. It is the ideal form of the move. The gold standard
defines criteria for deciding whether a given motion should count as an example of a move.

For example, let’s define the move “number 0” as a motion that

• begins with the Wii Remote pointing straight ahead,

• starts at the top of the zero,

• moves in an anti-clockwise direction, and

• returns to the starting point.

This is the gold standard for the move “number 0”.


We could have defined the move “number 0” to be more general. But for now, let’s keep things simple. The
concept of the gold standard is important. It is discussed in more detail later.

Q. What is a gold standard? – See section 5.2 and answer 15 in the FAQ.

2.5 Create a classifier project


The first step in creating a classifier is to create a new project with lmMaker.

• Select File->New Project from the main menu of lmMaker. This will launch the New Project
dialog.

• In the dialog, enter the project name “numbers”.


• Click Add, enter “number0”, and hit the Enter key.
• Hit the Enter key and enter “number1”.
• Repeat this for “number2”, “number3”, and “number4”.

• The dialog should look like Figure 2.5.


• Click OK.
• The new project you just created is in the project window (Figure 2.6).

Figure 2.5: New Project Dialog

2.6 Collect motion examples


Let’s collect some example motions.

• Select Collect->Collect Motions from the main menu. This launches the Collect Motion
dialog.
• Enter the performer name “me” (Figure 2.7).
• There are two check boxes in the motion collection dialog. By default, both are enabled. Leave them as is. See Section 3.6 for a more detailed description of these two check boxes.
• Click OK.

A tabbed motion collection window (Figure 2.8) appears on the screen while lmRecorder boots up on the
NDEV. The tabs indicate all the moves you are to collect motions for.
lmMaker automatically runs lmHostIO and lmRecorder when collecting motions from the Wii. Ensure that you have already set the DVD root using the command setndenv DvdRoot "path". It is usually fine to simply set it to the current directory: setndenv DvdRoot . Setting the DVD root only needs to be done once. (Consult the README.txt for troubleshooting tips if you get a “missing DLL” or other error when lmMaker invokes lmHostIO.exe.)
After lmRecorder has loaded on the NDEV you will see a motion sensor display on the Wii monitor (Figure 2.9), as well as a text prompt telling you which move to record an example motion for next.
lmRecorder uses the first connected Wii Remote (channel 0) as the source of motion data when the move
uses a single Wii Remote. To record a motion, keep the B button on the Wii Remote depressed while you
perform it.

Figure 2.6: Empty Project



Figure 2.7: Collect Motion Dialog



Figure 2.8: When starting to collect motions you will see this window.

• Pay attention to which move lmRecorder is prompting you to record a motion for. Draw the corresponding number in the air with the Wii Remote while holding down the B button. Perform the motion as naturally as possible while conforming to the gold standard.
• Once you finish recording a motion (by releasing the B button), lmRecorder prompts you to either
accept or reject it. This is important in making sure that the newly recorded motion is intended to be
an example of the chosen move.
• If you accepted the motion, the moves tab in the collection window gets set to the corresponding move
and a new line appears in the window showing the disk location of that motion. After having collected
some motions for the number0 move, the collection window should look like Figure 2.10.
If you rejected the motion, it will not show up in the collection window.
If you accepted a motion by mistake, you can always select it in the collection window and click
Delete Motion. You can also right-click on the bad motion to bring up a menu which contains a
“delete” option.
• Repeat this process until lmRecorder tells you that you’ve successfully collected the designated
number of motions per move.
• Click Finish Collecting. By default, lmMaker automatically imports all the samples you just
collected into your project (Figure 2.11).
All the motion examples you collected are stored in the folder
ailiveBase/bin/LiveMoveData/raw/numbers/me/.

In our example of the numbers move set, the definition of the moves (the gold standard) is quite narrow in
scope and does not allow differing ways of drawing the same numeral.
Say you are developing a cooking game. One move is “rolling dough”. You want to allow several ways of
doing “rolling dough”. In this case, record all the different ways of performing the move, including rolling
backwards, forwards, and sideways. All these variations should be presented to lmMaker.

Figure 2.9: lmRecorder screen for recording motions in random collect mode.

Figure 2.10: After collecting some number0 motions.

You may have some general questions about collecting examples. Please consult Chapter 5 for guidelines.

Q. When do I stop collecting examples? - See section 5.9 and answer 18 in the FAQ.

Q. How many motion examples should I record for one move? - See answer 19 in the FAQ.

Q. How many different ways of performing a single move should I record? - See answer 20 in the FAQ.

2.7 Build a classifier


Now you are almost ready to build a classifier that will recognize the numbers 0, 1, 2, 3 and 4.
The top half of the project window displays information about the built classifier (currently empty).
The bottom half displays information about the motion sets (Figure 2.11). Click to expand the motion sets.
The motion sets contain the examples you collected. The labeled motion sets define the moves. lmMaker
makes a classifier to recognize the moves you have defined.
Choose File->Save to save the state of your project. You can now quit the project at any time without
losing work.

• Click Suggest Capacity to get lmMaker to suggest a capacity for your project. (There are more
details about capacity in Section 3.7.1.)
• Set Tunability equal to the current value of Capacity so we can tune this classifier later. (A
tunability value of zero makes the classifier untunable. Do this to minimize classifier size when tuning
is not needed.)
• Click Build in the bottom-left corner of the window. A pop-up window shows the progress of the build.

Figure 2.11: All examples imported for each of the numbers



Figure 2.12: The project window after building the classifier



• When the build finishes, the lmMaker output window reports that the classifier has been saved to ailiveBase/bin/LiveMoveData/Projects/numbers/numbers.lmc.

Once the classifier is built the project window changes (see Figure 2.12):

• The top half of the window describes the classifier.

• The bottom half of the window now displays

– The class (i.e. classification label) of each motion example.


It is always possible that examples are classified differently from what you expect. Suppose you collected examples for the move “number 6”, but some of your examples are classified as “number 0”; 6’s and 0’s can quite easily get confused. An unexpected classification indicates that you may have made a mistake when recording the “number 6” example motion, that you need to adjust parameters of the classifier, or that you need to re-design your move set. Section 5.4 provides guidelines on checking your data.
– The majority class of each motion set.
For example, there are 3 motions in the set “number 0”. 2 are classified as “number 0”, and 1 is
classified as “number 6”. The majority class is therefore “number 0”.
– The percentage of motions in the majority class.
For example, in the previous example the majority percentage is 66.6%.
Do not expect to always see 100% classification in your projects. Section 5.4 provides guidelines
on checking your data, and Section 5.9 provides workflow suggestions.

All the classifications are made by the classifier you just built.

This is a small-scale project so all your motion examples should have the correct label. The classification
rates should be 100% for all motion sets.
Congratulations, you have built your first classifier!
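The majority class and majority percentage shown in the project window can be reproduced with a short computation. This is an illustrative Python sketch, not part of lmMaker:

```python
from collections import Counter

def majority_class(labels):
    """Given the classifier's output label for each motion in a motion set,
    return the majority class and the percentage of motions it covers."""
    counts = Counter(labels)
    label, count = counts.most_common(1)[0]
    return label, 100.0 * count / len(labels)

# The example from the text: 3 motions collected for "number 0",
# 2 classified as number0 and 1 as number6.
label, pct = majority_class(["number0", "number0", "number6"])
# label is "number0"; pct is 66.66..., the 66.6% quoted above
```

A majority percentage below 100% is your cue to inspect the minority motions: they are either recording mistakes or genuinely ambiguous examples.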

2.8 Test the classifier


Now, let’s try out the classifier.

• Click Test->Test Classifier from the main menu. A dialog box pops up (Figure 2.13).

• Click OK on the dialog box.

lmRecorder will boot on the NDEV in test mode and lmMaker will show an information dialog (Figure
2.14) throughout the testing process. You are now ready to classify motions!
As before, hold the B button on the Wii remote to start performing the motion, and release when done.
lmRecorder classifies your motions using the classifier you just built and displays results on the NDEV
monitor.

• If you draw a zero in the air – then lmRecorder will display “number0”.

• If you draw a two in the air – then lmRecorder will display “number2”.

• If your motion is unlike any of your moves – then lmRecorder will display “–”, which means it was
classified as “undetermined”.

Figure 2.13: Test Classifier dialog

At this point the classifier may not recognize widely varying ways of drawing the numbers, because it has been given only a few motion examples. It should, however, recognize your own gold-standard numbers 0 through 4 with high accuracy. Until you build a classifier with examples collected from a variety of people, it is unlikely to work as well for someone else as it does for you.
Try experimenting now. The classifier is data you load into your game. The game can then recognize moves
0, 1, 2, 3 and 4 in real-time.
When you’ve finished testing, click the “Finish Testing” box (Figure 2.14) to quit lmRecorder.

Figure 2.14: Test session in progress

2.9 Tune the classifier


Classifiers are made at development time. But different people perform moves differently. Game players
come in all shapes and sizes with different physical abilities.
Tuning happens at game time. Tuning adapts a classifier to an individual player. The player gives examples
of how he or she performs the moves.
Tuning allows a classifier to work well for all players while still keeping good run-time performance of the
classifier. The tuning process can be presented to the player as a short training session. Or it can be hidden
from the player.
Your game design dictates which moves may benefit from tuning. Not all moves need tuning.
For example, suppose you hire a celebrity tennis player to record their actual tennis serves and use their data for the move “super serve”. You may not want a game player to tune this move, but instead present it as a performance challenge to the player.

In another case, one of your moves is very simple. During game testing you discover that everyone can
perform it easily (after a period of initial learning). You decide you do not need to tune it.

Q. What does tuning do? – See section 4.17 and answer 34 in the FAQ.

The Wii application lmTuner lets you test tuning with a classifier built by lmMaker. You can find lmTuner in the directory ailiveBase/sample/lmTuner. Note that lmTuner is not present in the evaluation version of LiveMove 2.
Now let’s try to tune the “numbers” classifier we just made:

• Quit lmMaker (save the changes to your project).


• Copy the classifier template numbers.lmc from
ailiveBase/bin/LiveMoveData/projects/numbers/
to the lmTuner directory ailiveBase/sample/lmTuner/.
• Run lmHostIO.exe from the lmTuner directory in an RVL SDK/RVL NDEV window. Make sure that you have already set the DVD root by running the command setndenv DvdRoot "path". If you get a “missing DLL” or another error while running lmHostIO.exe, consult the README.txt for troubleshooting tips. (Note that lmHostIO.exe must always be run from the same directory as its associated application, lmTuner.elf in this case.)
• Type ndrun lmTuner.elf -a -h. This prints a help message listing the command-line arguments required to run lmTuner. Follow the directions and run it with the appropriate arguments.
• Once lmTuner is running, its display screen will show up. Follow the on-screen instructions to tune or test the numbers.lmc classifier.

When tuning the classifier for a selected move:

• Provide a tuning motion by performing the move in your style.


• Your motion will either be accepted or rejected by the classifier.
– If accepted the classifier is tuned with the new motion to better recognize your style of the move.
– If rejected then your motion is too different from the gold standard of the move (as defined at
development time).
For example, you cannot tune move “number 1” with a motion that looks like move “number 3”.

You will see a tuning screen like Figure 2.15.


By default, lmTuner allows five tuning examples per move (you can change this with a command-line
option). If you keep tuning the same move with new examples, the classifier will only be tuned by the
newest five accepted motions (tuning motions are discarded on a first-in, first-out basis). For more
details see Chapter 4.
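The first-in, first-out replacement described above can be modeled as a small ring buffer. The sketch below is illustrative only; the type and function names are hypothetical and not part of the LiveMove 2 API:

```c
#define MAX_TUNING_EXAMPLES 5   /* lmTuner's default; changeable via a command-line option */

/* Hypothetical store of accepted tuning motions for one move. */
typedef struct {
    int motions[MAX_TUNING_EXAMPLES]; /* stand-ins for motion handles */
    int count;                        /* motions currently stored */
    int next;                         /* slot the next motion overwrites (FIFO) */
} TuningBuffer;

/* Once the buffer is full, accepting a new motion discards the oldest one. */
void tuning_buffer_add(TuningBuffer *b, int motion)
{
    b->motions[b->next] = motion;
    b->next = (b->next + 1) % MAX_TUNING_EXAMPLES;
    if (b->count < MAX_TUNING_EXAMPLES)
        b->count++;
}
```

After seven accepted motions, only the newest five remain; the first two have been overwritten.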
Once you have finished tuning all the moves:

• Switch to test mode.


• Now test your tuned classifier by performing a move. The lmTuner screen will show classification
results from both the tuned and the original, untuned classifier. You should notice that the tuned
classifier recognizes your moves better.
• Repeat the tune and test process if necessary.
• lmTuner writes out the tuned classifier when you quit the application.
• In a game the tuned classifier could be saved to a data storage card (as part of the saved game data).
The game player does not have to tune again.

Figure 2.15: lmTuner Screen while tuning for the “number 1” move.

2.10 Build a buttonless classifier


Now let us create a buttonless classifier that recognizes the numbers 0, 1, 2, 3 and 4.
A buttonless classifier is not told when a player’s move starts and stops by player-generated events (e.g.,
holding a button down for duration of the move). Instead, a buttonless classifier tells the game when a
player’s move starts and stops (i.e., the player simply moves the controller(s)).

• From the main menu select File->New Project.

• In the dialog, enter the project name “buttonlessNumbers”.

• Select “Buttonless” from the mode list.

• Click Add, enter “number0”, and hit the Enter key.

• Hit the Enter key (or click Add) and enter “number1”.

• Repeat this for “number2”, “number3”, and “number4”.

• Click OK.

You cannot use motion samples collected for a standard project in a buttonless project. So, let’s collect
buttonless samples.

• Select Collect->Collect Motions from the main menu to launch the Collect Motion dialog.

• Enter the performer name “me”.

• Next to the Random Collect checkbox, increase the default number of motions to collect per move
from 10 to 20. Buttonless classification usually requires more example motions to work well.

• Click OK.

The motion collection window appears on your PC while lmRecorder boots up on the NDEV.
To record buttonless motion samples with lmRecorder you need two Wii Remotes. Perform motions with
Remote 1. Specify the start and end of a motion with Remote 2. This ensures that any acceleration on the
Wii Remote caused by pressing a button is not recorded.
Normally the operator of lmMaker will use Remote 2 and the person providing examples will use Remote
1. But for this test simply place Remote 1 in your dominant hand and Remote 2 in your other hand.
Go through the same motion collection process as described previously in section 2.6. Make sure that each
example motion starts with reasonably large speed and force. This helps the classifier learn to distinguish
when valid motions start.
As before, upon finishing, lmMaker automatically imports all your examples to your project.

• Click Suggest Threshold to get lmMaker to suggest a reasonable force threshold for detecting
the starts of your moves. (For more information about force threshold see Section 3.7.5.)

• Click Suggest Capacity to get lmMaker to suggest a capacity for your project. The suggested
capacity is based on the threshold setting.

• Click Build.

Congratulations! You have built your first buttonless classifier.



2.11 Test the buttonless classifier


You are now ready to recognize motions performed with the remote (without having to hold the B button
down). The buttonless classifier will decide whether you have started or ended a valid move.

• Select Test->Test Classifier from the main menu.


• Click OK on the dialog.

lmRecorder boots up on the NDEV in test mode. Simply perform a motion. lmRecorder recognizes
your motions using the buttonless classifier you just built.
A buttonless classifier splits the incoming motion stream into segments. A segment starts based on the force
threshold.
A “--” sign indicates that the motion segment was classified as undetermined. LiveMove 2 will generate
a sequence of “--” results if you move the Wii Remote randomly.
A buttonless classifier needs more examples to produce the same classification accuracy as a standard
classifier.
Try experimenting now. Even with just 20 example motions per move you should get a reasonably good
recognition rate.
If a move is hard to perform (i.e. you often get the undetermined label) then increase its slack in the
corresponding column in lmMaker’s project window. If a move is too easy to perform (or other valid moves often
end up being classified as this move) then decrease its slack (see Section 3.7.4). If any random motion tends
to be recognized as a particular move, see Answer 26 in the FAQ.

Q. What is the best way to use buttonless classification in my game? - See section 5.8.

2.12 What’s next?


It is easy to collect examples and build a classifier to recognize motions. You define moves by examples.
All applications of LiveMove 2 , simple or advanced, follow the basic template outlined in this tutorial.

• Chapter 3 explains lmMaker.


• Chapter 4 explains how to recognize motions on the Wii.
• Chapter 5 provides general guidelines for using LiveMove 2 for motion recognition.

• Chapter 6 provides more information specific to LiveMove 2 tracking functionalities.


• Appendix A is a FAQ that answers common questions about LiveMove 2.
Chapter 3

lmMaker

lmMaker is a PC application. At development time it performs three main functions:

• It helps you collect example motions from the Wii.

• It builds a classifier from the collected examples.

• It allows you to debug and test the classifier.

The classifier is used to recognize motions at game time.

3.1 Data organization

By default lmMaker creates the folder LiveMoveData in the working directory


(typically C:\Program Files\LiveMove2\bin). You can change it from the main menu (File->Settings).
lmMaker creates two sub-folders in LiveMoveData: projects and raw.
LiveMoveData
    projects
    raw
These folders are created for your convenience. You can save your project and motion data anywhere on disk.
When you create a new project, lmMaker creates two project folders under projects and raw.
LiveMoveData
    projects
        my project
    raw
        my project
The project folder under projects contains all project related files except motions, such as a project file
(extension: .lmproj), a classifier file (.lmc), move tree files (.lmtree), and an include file (.h). The
project file saves states between lmMaker sessions.
The project folder under raw usually contains all motion files collected for the project and is organized like


this:

my project
    Bob
        chop
        fry
    Kay
        chop
        fry
    Tim
        chop
        fry


During motion collection sessions, lmMaker automatically creates those sub-folders and stores motion files
(.raw) in correct locations so that you can easily find who performed the motion for which move.
To protect your data assets (project files, classifiers and motion examples), we recommend you place the
LiveMoveData folder under the control of a Version Control System.
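Given this layout, a motion file's location follows directly from the project, performer, and move names. A sketch of the path construction (the zero-padded motion file naming is an assumption inferred from file names shown in this manual, not a documented guarantee):

```c
#include <stdio.h>

/* Build the on-disk path of the n-th raw motion for a move, following the
 * LiveMoveData/raw/<project>/<performer>/<move>/ layout described above.
 * The motion000.raw-style numbering is an assumption for illustration. */
int motion_path(char *out, size_t size, const char *project,
                const char *performer, const char *move, int n)
{
    return snprintf(out, size, "LiveMoveData/raw/%s/%s/%s/motion%03d.raw",
                    project, performer, move, n);
}
```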

3.2 Project window

Figure 3.1: Main menu.

When you run lmMaker , two windows appear: the Command Prompt and the project window (Figure 3.1).
The main menu options are:

• File: Create, save, and open projects (see Section 3.3).


• Project: Project operations such as adding moves and importing motions (see Section 3.5).
• Collect: Collect motion sample from the Wii (see Section 3.6).
• Test: Test classifiers (see Section 3.8).
• Move Tree: Create and view move trees (see Section 3.10).
• Help: Has links to LiveMove 2 documentation and AiLive’s website, which contains the latest product
information.

3.3 Creating a new project


The project window is empty when lmMaker starts up. Since most of the operations require a current project
to work on, the first thing you do is to create a new project or open an existing one.

To create a new project, select File->New Project from the main menu. It will open the New Project
dialog (Figure 3.2).

Figure 3.2: New Project Dialog

• Enter the name of the project. It will be used as the name of a project folder as well.
• Specify where you want to create the project folder.
• Choose an input device type from one of the following seven types.
– Wii Remote for single-handed moves using a Wii Remote
– Nunchuk for single-handed moves using a Nunchuk
– Freestyle for coordinated, double-handed moves using a Wii Remote and a Nunchuk
– MotionPlus Device for single-handed moves using a MotionPlus-attached Wii Remote
– MotionPlus Freestyle for coordinated, double-handed moves using a MotionPlus-attached
Wii Remote and a Nunchuk
– Paired Wii Remotes for coordinated, double-handed moves using two Wii Remotes
– Paired MotionPlus Devices for coordinated, double-handed moves using two MotionPlus-
attached Wii Remotes
• Choose a classification mode (see Section 3.4).
• Add moves in the move list. You can also add or remove moves later in the Project Window.

The combination of an input device and a mode determines the type of motion samples collected as well as
the type of motion data the built classifier can handle.
Click OK when you are done, and you will be brought back to the project window. You can start collecting
motions for the project.

3.4 Classification Modes


LiveMove 2 lets you build two types of classifiers. Each type operates in a distinct mode: standard mode or
buttonless mode.

Standard : In this mode LiveMove 2 is asked to classify motion segments where each segment is a potential
valid motion.
Buttonless : LiveMove 2 is presented with a continuous stream of motion data and asked to classify possible
valid motions contained within. In this mode the LiveMove 2 classifier is responsible for detecting the
start and end of potential valid motions as well as classifying them.

Each classifier type also implies a distinct recording mode for recording training motions. To build a standard
classifier you will need to record standard motion samples, while building a buttonless classifier requires
buttonless motion samples. See Section 4.5.1 for more detail.

3.5 Project operations


You can perform the following project operations:

• Add New Move: Add a new move to the project.


Choose a name that is descriptive, e.g. “smash”, “lob”, “smallCircleClockwisePointingToTheFloor”
etc.
• Import Motions: Import motion samples to a motion set.
Select a move by clicking on it in the Project Window. You can import a single file or a collection of
files into the selected move.
• Import Sets: Import motion sets from folders on disk.
You can choose a single folder or collection of folders. lmMaker recursively traverses all sub-folders.
For each folder or sub-folder that contains raw motion files lmMaker either
– creates a new motion set (for a new move), or
– adds the motions to the existing motion set (of the existing move)
depending on whether the folder name matches an existing move name in the project.
For example, if your project contains the single move “smash” and you import the folder “my project”
that has the following sub-folders:
raw
    my project
        ...
        Jim
            smash
                motion000.raw
            lob
                motion001.raw
                motion002.raw
            etc.

then lmMaker will add motion000.raw to the existing motion set for the move “smash” and create
a new motion set for the move “lob” containing motions motion001.raw and motion002.raw.

• Remove Selected: Remove selected motion(s) or motion set(s) from project.


This does not delete any raw motion files from the LiveMoveData folder. It simply removes them
from your project. When a motion set is deleted, the corresponding move is deleted as well.

• Delete Selected From Disk: Remove selected motion(s) and delete motion files from disk.
This does the same thing as the ”Remove Selected” option but also removes the raw motion files from
disk. Use this option with care (especially if you share motion files among projects) since you will not
be able to recover any deleted raw motion files.

• Build Classifier: Build a classifier.


For example, if your project is called my_project the classifier is saved to the file
LiveMoveData\projects\my_project\my_project.lmc.
Once a classifier is built the “Classified as” column is active. Each example motion is classified by the
current classifier.
For large projects (e.g. projects containing thousands of example motions) building can take time (a
few minutes or maybe even ten or more minutes). The progress bar indicates how long you need to
wait.
For large projects recalculating the classification column after building can also take time. You can
avoid recalculation by turning off the classification column. See Section 3.12.

• Select Outliers: Select the motions that are outliers according to the most recent classifier built in the
project.
Any outliers (see Section 3.9.1) that have been identified by the current classifier will be marked in
red in the project window. This menu option gives an easy way of selecting them all (for viewing or
removal).

• Project Info: View the project information, such as its motion device type and how many motions are
contained in each move set.

• View Motions: View motion graphs for the selected motions (see Section 3.9.2).

• Suggest Capacity: Suggests a capacity for the (to-be built) classifier based on example motions in the
current project.
The suggested capacity is a reasonable trade-off between classification performance and run-time
speed. Use it as a guideline in case you need to tweak capacity value to better suit your game. This
operation is almost as expensive as building a classifier.

• Set Force Threshold: Set the force threshold (buttonless projects only).
This allows you to change the force threshold (see Section 3.7.5) while showing the per-move
percentage of motion data that would be discarded when using that threshold.

Actions do not update the classifier until you click Build or choose Project->Build Classifier.
For example, if you add a new motion set called “number8”, then the classification of the new examples in
the set will come from the previously built classifier until you rebuild.

3.6 Collecting example motions


Select Collect->Collect Motion from the main menu. This opens the Collect Motions dialog (Figure
3.3).

• Enter a performer name. A new folder will be created where all samples by the performer are stored.

Figure 3.3: Collect Motions dialog



• Specify where you want to create the performer folder. The default location should work fine for most
cases.

• There are two checkboxes in the motion collection dialog. By default, both are enabled. Unless you
have specific reasons to disable either one of them, we recommend that you leave them as is.

– The first option enables automatic importing of the collected motions into the project window
upon finishing the collection session.
– The second option puts the collection mode to Random Collect. In this mode, lmRecorder
prompts you to give example motion of a randomly chosen move, one at a time, until n examples
per move are collected. n equals 10 by default but you can change it easily to suit your needs.
Random collection addresses a specific pitfall of sequential collection. In sequential collection,
you collect all example motions of a move together in a row and then move on to the next
move. This often results in homogeneous example motions for each move that lack the legitimate
variations present in normal game play. This will in turn negatively impact the classifier’s
performance. Random collection is designed to minimize this problem.
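One way to picture random collection: build a list containing each move index n times, then shuffle it, so examples of the same move are never all recorded back to back. This is only a sketch of the idea; lmRecorder's actual prompting logic is internal to the tool:

```c
#include <stdlib.h>

/* Produce a randomized recording order with exactly n entries per move. */
void random_collection_order(int *order, int num_moves, int n, unsigned seed)
{
    int total = num_moves * n;
    for (int i = 0; i < total; i++)
        order[i] = i % num_moves;          /* n entries of each move index */
    srand(seed);
    for (int i = total - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
        int j = rand() % (i + 1);
        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
}
```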

Click OK, and a tabbed motion collection window (Figure 2.8) appears on the screen while lmRecorder
boots up on the NDEV. The tabs indicate all the moves you are to collect motions for.

Figure 3.4: Motion Collection window

After lmRecorder has loaded on the NDEV you will see a motion sensor display on the Wii monitor, as
well as a text prompt informing you which move you are to give an example motion for next (Figure 2.9).
In general, we recommend that the operator of lmMaker (i.e. you) control the recording while a second
person (i.e. the performer) provides motion examples.
A typical recording session proceeds like this:

• lmRecorder screen displays the name of a move it wants the performer to record.

• Let the performer perform the move. Watch closely how he/she does the move and determine whether
the recorded motion conforms to the gold standard of that move.

• lmRecorder then asks you to either confirm or reject the recorded motion.

– Reject the motion if you, or the performer, decide that the motion is not a good example of the
target move.
– Confirm otherwise.

• Once a motion is confirmed, the collection window will refresh and display the tab for the target move.
A new file name is appended to the list of motion file names in the window. If you or the performer
had made a mistake in accepting the motion, you can delete the new file now (select it in the window
and press the Delete key or click Delete Motion).

• Repeat this until lmRecorder informs you that the designated number of motions for all moves have
been collected.

• Click Finish Collecting in the collection window when you are done.

All the motion samples collected will be automatically imported into the project if you have checked the
“Import collected motions into project automatically” checkbox. If you are collecting from many performers,
you may want to uncheck it to speed up the whole process. There is an easy way to import all the samples of all
performers later (see Section 3.5).
Sometimes it may be desirable to quickly collect a few motions per move or to collect a few more motions
for a specific move. In this case you can use sequential collect instead. Here is how:

• Uncheck the “Random Collect” box in the collection dialog and click “OK”

• On the Motion Collection window, click the tab of the move you are about to collect motions for.

• Collect some motion samples from the performer. Watch his or her motions closely. If they do not
conform to the gold standard, delete them immediately (select the motion and press the Delete key or
click Delete Motion).

• Click on another move tab, and collect motion samples for that move.

• Repeat this for all the moves you wish to collect example motions for.

• Click Finish Collecting when you are done.

You can switch moves any time during the session by clicking the move tabs. Make sure the correct tab is
active before you start collecting motions.

3.6.1 Recording control


How you control the start and end of motion recording depends on the type of classifier you are building.

• Standard collection for standard classifier:

– Wii Remote, Wii Remote with MotionPlus, Freestyle, or Freestyle with MotionPlus: Hold the B
(trigger) button of the first Wii Remote.
– Paired Wii Remotes or Paired MotionPlus-attached Wii Remotes: Hold the B button of the first
Wii Remote.
– Nunchuk: Hold the Z button of the Nunchuk.

• Buttonless collection for buttonless classifier:


To mark the start and end of a motion, it is important to use a second or third Wii Remote that is
separate from the one being used to perform the actual motion. The reason is that button presses on
the Wii Remote used to perform the motion can actually affect the recognition signal and will make the
data inconsistent (See Section 4.5.3 for more detail).

– Wii Remote, Wii Remote with MotionPlus, Freestyle, or Freestyle with MotionPlus: Hold the B
button of the second Wii Remote.
– Paired Wii Remotes or Paired MotionPlus-attached Wii Remotes: Hold the B button of the third
Wii Remote.
– Nunchuk: Hold the B button of the attached Wii Remote.

Normally, the operator of lmMaker should control the extra remote used to mark the start and end of
motions.
The recording controller(s) must be relatively still for a short period of time before recording can begin.
lmRecorder displays RECORDING PREVENTED if this condition is not met when the operator
presses down the trigger button to signal the start of a new motion. This ensures that LiveMove 2 gets
good quality motion data for the start of motion.
The performer must begin performing the motion shortly after the operator starts the recording. The
performer can monitor the start of recording from the motion graphs shown in the lmRecorder
display. Alternatively, the operator can simply issue a voice command to the performer to start the
motion. In either case, the operator releases the recording button after the motion has completed.

3.6.2 Collecting directly from your game


You may wish to collect motion examples directly from your game. That way the motion examples you
collect can better reflect the conditions game players face at game time. You can:

• Replicate the functionality of lmRecorder in your game (see Chapter 4 and sample code). Make
your game write .raw motion files to the current working directory of lmMaker . Then you can
collect directly from the game into lmMaker ’s collection window.
• Make your game write .raw motion files to a PC directory structure of your choosing. Then import
the motions into lmMaker later.

Ensure that only examples that conform to your gold standard are used to build a classifier (see Section 5.2).

3.7 Building classifiers


To build a classifier, click Build on the bottom-left corner of the project window. lmMaker uses the
example motions in the bottom half of the window to create a new classifier, which will be displayed in the
top half of the window.

3.7.1 Capacity
A classifier’s capacity is a measure of how many different ways of performing the same move it
can recognize.
The higher (lower) the capacity the more (less) ways of performing the same move can be recognized.

• High capacity equals higher CPU and memory costs.



Figure 3.5: Project window after building a classifier.



– If there are many ways of performing the same move – then a high capacity is required.
– If there are few ways of performing the same move – then a high capacity is wasteful.

• A value of 1.0 corresponds to using on average 5% of the Wii CPU per 1/60 frame with 8 Wii controllers
generating motions for LiveMove 2.
CPU costs are linear in the number of Wii controllers generating motions for LiveMove 2.

• Without tuning (i.e. without calling lmsTuneClassifier() – see Chapter 4) halving capacity will
cut CPU costs by about half, down to the minimum bound.
Doubling capacity tends to double CPU requirements.
A value of 0.0 guarantees minimum capacity.
So, if your game supports only 2 Wii Remotes, you can set capacity to 4, but still consume 5% of the
CPU.
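The figures above imply a simple planning estimate: about 5% of the Wii CPU at capacity 1.0 with 8 controllers, scaling linearly in both quantities. A back-of-the-envelope helper (not part of the LiveMove 2 API; treat the result as a rough guide only):

```c
/* Rough per-frame CPU estimate implied by the numbers above: capacity 1.0
 * with 8 controllers costs about 5% of the Wii CPU, and the cost scales
 * linearly in both capacity and controller count. */
float estimate_cpu_percent(float capacity, int num_controllers)
{
    return 5.0f * capacity * ((float)num_controllers / 8.0f);
}
```

This reproduces the example in the text: capacity 4 with only 2 Wii Remotes still costs about 5%.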

Section 5.9 contains guidelines for setting capacity during development.

3.7.2 Tunability
A classifier’s tunability is a measure of how many different ways of performing the same move it
can recognize during tuning.
Tunability is the capacity that is active during tuning.

• High tunability equals higher CPU and memory costs during tuning.
This affects calls to lmsTuneClassifier() but not calls to lmsUpdateClassifier() or
lmsClassifyMotion() (see Chapter 4).

Section 5.9 contains guidelines for setting tunability during development.

3.7.3 Capacity and tunability trade-offs


Capacity and tunability control a performance versus resources trade-off.
If your examples contain varied ways of performing the same move then in general:

Capacity
High Low
Recognition better worse
CPU cost worse better
Memory cost worse better

In this case:

• Build the classifier with a low capacity and high tunability. You can use the “Suggested capacity” value
here. For example, you can set capacity to half of the suggested value and set tunability to equal the
suggested value.

• During tuning the classifier can adapt to the varied styles of many different game players.

• Once tuned, the classifier will recognize the particular game player. But it will use less CPU and
memory resources than a high capacity classifier.

Section 5.9 contains guidelines for setting capacity and tunability during development.
A tunable classifier requires additional memory. If you do not need tuning then create an untunable classifier:

• Set tunability to 0.0 at development time.

• Or call lmsMakeUntunableClassifier() (see Chapter 4) at game time.


Previous tunings are maintained in the untunable classifier. So use this method to make a tuned
classifier that uses minimum memory.

3.7.4 Slack
A move’s slack is a measure of how strict its recognition is.
Each motion set name has a slack entry. Click to change it.
The higher (lower) the slack the more (less) imprecision in the recognition of a move.
You can think of slack as fuzziness. Moves with high fuzziness will be recognized more easily compared to
moves with low fuzziness.

• The default value is 1.0.

– A value < 1.0 means that the classifier will reject more motions as examples of the given move.
Game players must be more precise in their motions to be recognized. For example, in the
“numbers” project (Chapter 2), if the slack of move “number2” is set low then it becomes harder to
perform the move.
– A value > 1.0 means that the classifier will accept more motions as examples of the given move.
Game players can be less precise in their motions to be recognized. For example, in the “numbers”
project if the slack of move “number2” is set high enough, then some “number3” motions might
start to be recognized as move “number2” instead.
– The maximum value is 2.0. In this case most motions will be recognized as some move.

• You can set slack independently for each move. So you can make some moves precise and others
imprecise.

• Adjust slack when your classifier often rejects but rarely misclassifies a move. (The classification is
either correct or undetermined).
In this case, increasing slack reduces the rejection frequency and increases the correct classifications.

• High slack increases the chance of false positives.


For example, a player performs a fuzzy or incorrect motion but it is classified as a valid move.
The danger of false positives depends on the game design.
Consider a stealth action game where the wrong move will set off an alarm. In this case, high slack is
dangerous.
Consider a boxing game where it is better to punch than do nothing. In this case, high slack is less
dangerous.
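Conceptually, slack scales how far a motion may deviate from a move's gold standard and still be accepted. The sketch below is only a mental model of that behavior, not LiveMove 2's actual recognition math:

```c
/* Illustrative model: each move has some base tolerance around its gold
 * standard, and slack scales that tolerance. Slack > 1.0 widens the set
 * of accepted motions; slack < 1.0 narrows it. */
int accepted_as_move(float distance_from_gold_standard,
                     float base_tolerance, float slack)
{
    return distance_from_gold_standard <= base_tolerance * slack;
}
```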

Q. What’s the difference between tunability and slack? – See answer 35 in the FAQ.

Q. How do I control the difficulty level of performing moves? – See answer 36 in the FAQ.

3.7.5 Buttonless classifiers


Building a buttonless classifier is essentially the same as building a standard classifier. You provide examples
and LiveMove 2 does the rest.
A buttonless classifier notices the start of a motion once the force on the motion sensor exceeds a threshold.
So compared to standard classification there is an extra parameter.

Figure 3.6: You set a force threshold for a buttonless project.

You can set the threshold in the project window (see Figure 3.6).
Click Suggest Threshold to set a suggested force threshold for your project. If your project contains
moves that start with low (high) force then the suggested threshold will tend to be low (high).
A high threshold will discard more data from the start of your recorded examples (which start from rest).
Choose Project->Set Force Threshold to see how much data on average is discarded for each
move (see Figure 3.7).

Figure 3.7: A higher threshold discards more data from the start of your examples.

Consider these trade-offs when choosing a threshold:

• A very low threshold means that any movement on the Wii Remote will trigger classification. So your
game will need to handle many “false starts” that terminate with the undetermined label.
• A very high threshold means that most movement on the Wii Remote will not be registered by LiveMove 2.
In addition, lots of data is discarded, which makes it harder for LiveMove 2 to recognize moves.

You must re-build the classifier to check how the overall classification accuracy is changed by a new threshold
setting.

A high threshold may cause some motions not to be registered at all. This means that the motion
example never exceeded the threshold.
Since the force threshold changes where each example motion starts, it alters the training set for the classifier.
As a result, the buttonless classifier’s capacity is dependent on the force threshold. Once a new force
threshold is entered, you should click “Suggest Capacity” to get a new suggested capacity value.
A buttonless classifier normally requires more examples to get the same classification accuracy compared to
standard classification. Section 5.8 contains guidelines for building buttonless classifiers.

3.8 Testing classifiers


To test a classifier, choose Test->Test Classifier from the main menu. It will launch the “Test
Classifier” dialog (Figure 3.8).

Figure 3.8: Test Classifier dialog

For a simple, interactive test session, just click OK here. It brings up a dialog box shown in Figure 3.9 on the
screen while lmRecorder boots up in test mode. lmRecorder classifies your motions using the classifier
and displays results. Click Finished Testing on the dialog to end the session.

Figure 3.9: Test session in progress

If you wish to save test motions or classify existing motions, check “Save test motions between sessions”.
Enter a session name (defaulted to “test”) and specify where you want a session folder created if you don’t
want to use the default location. Click OK and lmRecorder will boot up on the NDEV in test mode.
If you have checked the box to save test motions then a separate window similar to the project window will
also pop up. Every time you perform a motion, a new line appears in the bottom half of this window showing
the location of the saved motion as well as its classification result.
If you entered an existing session name in the dialog and the session folder already contains motions saved

in a previous test session, they are classified by lmMaker immediately and displayed in gray. Motions
performed during the current session will be added to the same folder and displayed in black.

Figure 3.10: Test Classifier Window

3.9 Debugging classifiers


Debug information helps you understand your motion data.
Debug information helps:

• spot bad motions;


• understand why a motion is not recognized;
• identify problem areas in your classifier;
• tell you when the collection process has stopped generating new kinds of examples; and
• provide feedback that you can give to the player at game time.

Q. What do I do if one of my moves is hard to reproduce? – See section 5.3 and answer 38 in the FAQ.

Q. What do I do if two of my moves are getting confused? – See section 5.4 and answer 37 in the FAQ.

Q. How can I tell when I have collected enough data? – See answer 18 in the FAQ.

3.9.1 Outliers
An outlier is an example motion that is unlike other examples in the same motion set. An outlier indicates
either

• you need to provide more examples like the outlier (if the outlier conforms to your gold standard), or

• you need to delete the outlier because it was generated by a recording error.

lmMaker reports all outliers, if any, when it builds a classifier (Figure 3.11). Outliers are highlighted in red
in the project window (Figure 3.12). Project->Select Outliers allows you to quickly select all the
motions that are outliers.

Figure 3.11: Outliers detected while building a classifier

3.9.2 View motions


Figure 3.13 explains the meaning of the raw accelerometer data.
You can view a graphical display of the raw motion sensor data in each motion file (see Figure 3.14). Highlight
a motion (or a group of motions), then either choose Project->View Motions or right-click and choose
View Selected Motions.
The duration of a standard motion is identical to the duration of the graph. The duration of a buttonless
motion is embedded in the graph between two additional markers that indicate the recorded start and stop of
the motion. A yellow line marks the current force threshold, showing how much of the beginning of the
motion is cut off by it (everything to the left of the line is cut off).

Figure 3.12: Outliers highlighted in red




Figure 3.13: Force axes from lmRecorder (KPAD library conventions). The pointer end of the Wii Remote
is the +Z direction. Imagine the accelerometer is a small cube half-full with water. At horizontal rest the
water, under the force of gravity, impacts upon the -Y cube face. So in this state we see (X,Y,Z)=(0,-1,0)
corresponding to 1G gravity in the -Y direction. A forward jab initially generates -Z force as the water
impacts the -Z cube face. Then the water impacts the +Z cube face as the Wii Remote decelerates to rest.

If you think that an outlier represents a bad motion you can view examples of non-outliers and visually
compare them to the outlier. Bad motions often look very different from other examples in the same motion
set.
View motions can also be crucial when debugging problems caused by a high force threshold in buttonless
classifiers:
Q. In my buttonless project, almost any short motion I make with the motion device gets recognized as
one of the moves, what should I do? – See answer 26 in the FAQ.

MotionPlus

If you have a MotionPlus attached, then the raw gyroscope data represents the angular velocity around the
three axes of the Wii Remote. Please check with Nintendo’s documentation for additional details on the data
representation.
In lmMaker when you choose to View Selected Motions, you will also see a plot of the gyroscope
data if the motions were recorded with an attached MotionPlus.

3.9.3 Motion Summary


To display summary information, right-click on any motion in the bottom half of the project window
and check “Show Summaries”. This information is also available at game time (see LiveMove2.h);
for example, you can tell the player they are moving too slowly, or that their initial starting position was wrong.
Summarization is based on the 3D accelerometer readings from your controller.
The summary information is:

• Average force: The average force of the motion in units of gravity G.



Figure 3.14: A motion view. Acceleration/angular velocity is the vertical axis and motion sensor sample
(each lmsMotionAppendElement call) is the horizontal axis. The X, Y, and Z acceleration and angular
velocity are plotted separately and are color-coded.

• Duration: The duration (in seconds) of the relevant part of the motion (i.e. between motion start and
end).

• Initial orientation: Estimated initial orientation of the motion controller.


The orientation is measured when you begin recording. It is with respect to a right-handed grip on the
controller, with the index finger on the B button.
One of:
undefined: No reportable initial orientation due to movement.

flat up: Flat on the XY plane, with Z pointing up.



rotated right up: Rotated to the right in XY, with Z pointing up.

rotated left up: Rotated to the left in XY, with Z pointing up.

up: Mixed in the XY, with Z pointing up.

flat down: Flat on the XY plane, with Z pointing down.

rotated right down: Rotated to the right in XY, with Z pointing down.

rotated left down: Rotated to the left in XY, with Z pointing down.

down: Mixed in the XY, with Z pointing down.



flat level: Flat on the XY plane, with Z level.

rotated right level: Rotated to the right in XY, with Z level.

rotated left level: Rotated to the left in XY, with Z level.

level: Mixed in the XY, with Z level. (Not flat level, and not
rotated left level or rotated right level).

• Initial impulse: Estimated direction of the initial movement of the motion controller with respect to
gravity (i.e. vertical direction).
One of:

– going up: Impulse is up, i.e. away from gravity.


– going down: Impulse is down, i.e. towards gravity.
– level: No detectable vertical motion.

The controller’s orientation does not affect the impulse measurement.


If there is not enough data to measure the initial impulse you will see a “low data” result.
Note that if you begin recording a motion while already moving, the initial impulse can be a deceleration. This
can generate an “up” response even when you think you were going down.
Such counter-intuitive results help you understand the properties of the data you give to
LiveMove 2.

Believe the summary information, not always your eyes or memory.


For example, perform a simple rotation about the Z axis.
Most human wrists are structured so that rotating clockwise with your right hand generates a small
vertical-up impulse. Rotating counter-clockwise generates a small vertical-down impulse.

3.10 The move tree


Suppose your game wants to recognize different kinds of numerals drawn in the air, and you want to
synchronize on-screen animations (or other in-game events) with the player’s actual motions. LiveMove 2’s
move tree helps you do this.
The move tree identifies moves that are indistinguishable or confused with each other at different stages over
the duration of a move. This information is very useful at several phases of the development cycle.
At development time the move tree tells you:

• Whether your move design helps or hinders early classification.


For example, different moves that generate similar motion sensor data at their starts will remain
confused for longer than moves that don’t. In this case, accurate early classification is difficult.

Q. How do I design moves that support early classification? – See Section 5.6.

• What kinds of animation sequences will synchronize with your motion controls to give immediate and
correct feedback to the player.

At game time it tells you

• when it is safe to start animations while the player is performing a move.


For example, you can recognize what move the player is performing before the move has completed.

A simple application of the move tree is determining the earliest possible moment when it is safe to classify
a player’s motion.
An advanced application is:

1. At development time designing an animation tree that corresponds to the move tree, and
2. At game time choosing when to commit to intermediate and final animations that synchronize with the
player’s movement.

3.10.1 An example
Let’s look at a simple example move set consisting of the numerals: one, two, three and four. Moves
one and four were designed to generate different motion sensor data very quickly: starting from the same
holding position, one is a vertical slash down (Z force), while four starts to the left (-X force). two and
three, on the other hand, both start to the right (+X force), so two and three both initially generate
similar data.
The move tree is displayed in Figure 3.15. Take a look at the first row that starts with one. Notice that in
the Possible Moves / Joint Animations column there is only one move: {one}. This means
that – within the first 2.5% of the move – one is completely unconfused. So animation can begin almost
immediately. This is also true for the move four.
Moves two and three, however, remain confused with each other during the first 35% of the move; at that
point, three becomes distinguishable from two.
When 35% of the move has completed, if the result of LiveMove 2’s early classification is:

• one then the player is probably not performing anything else. So it is safe to commit to a one animation.
• two then the player could be performing either a two or a three, but not one or four. If you had
a joint two-three animation, you could commit to that at this point.

Figure 3.15: An example move tree.



• three then the player is probably not performing anything else. So it is safe to commit to a three
animation.

• four then the player is probably not performing anything else. So it is safe to commit to a four
animation.

After 80% progress no moves are confused with each other. So it is safe to commit to whatever LiveMove 2’s
early classification returns at this point.
The move tree also indicates:

• We need a generic “start” animation, common to all of your moves, that lasts for the first 2.5% of the
move.
For example, this shared animation could be a “get ready” look, a “power glow”, a Wii Remote rumble,
or anything that immediately registers to the player that something is about to happen.

• We need a two-three intermediate animation during 2.5% to 80% of the move that follows the
“start” animation and precedes either a two animation or a three animation.
(If this is not possible then you should rethink your motion design. Moves that are initially confused
should translate to in-game animations that initially share the same start).

• An early classification can always turn out wrong. For example, the player could start a four but then
drop the controller.
So we need different “abort” animations to correspond to the user starting a valid move but failing to
carry through at different stages of completion.

At development time the move tree can help an animator design a corresponding animation tree for the
player’s on-screen avatar.
At game time the move tree tells the game at the earliest moment when it is safe to commit to intermediate
and final animations.
Q. How do I use the move tree for early classification? – See Sections 4.12 and 5.6.

3.10.2 How to read the move tree output


Look again at the move tree in Figure 3.15.
The move tree describes a time-line that flows from the top to the bottom of the table, measured in units of
motion progress percentage. Progress indicates how much of the move has been performed: 0% when the
move has just begun, and 100% when the move has completed.
As time progresses LiveMove 2 receives more controller data that helps it distinguish moves. At game time,
you get motion progress by calling lmsGetMotionProgressForClass (see Section 4.14).
At each stage of progress there are Best Guess, Possible Moves / Joint Animations, and
Animation Transitions columns.

• The Best Guess column has one row for every move in your classifier. At game time, calling
lmsGetClassification (anytime during a motion) returns LiveMove 2’s “best guess”.
So, for example, the three row at 2.5% progress contains all the relevant move
tree information that you’d need when lmsGetClassification returns three and
lmsGetMotionProgressForClass for three is 2.5%.

– Light green text indicates that the move is already safe and you can commit to the final animation.

– Bold green text indicates that the move is safe, and this is the point at which it becomes safe.
Look at the two row at 80% progress. The two is highlighted in bold green to indicate that: if
the “best guess” is two and its percent progress reaches 80%, it becomes unconfused with the
move three, i.e., it is safe to assume that you’ve performed a two, not a three.
– Black text indicates that the move is not completely unconfused.

• The Possible Moves / Joint Animations column lists which moves the “best guess” is
currently confused with.
For example, in the three row at 2.5% progress, the Possible Moves are: {three, two}.
This means that if your best guess is three and it is at 2.5% progress, your final classification is
probably either three or two.
If you have designed a joint two-three animation then it is safe to play it at this time.
• The Animation Transitions column contains the parent animation for the Possible Moves
column. There is an entry only when the Possible Moves / Joint Animations recommendation
has changed from the previous stage.
For example, the two row at 35% progress has no animation transition. This means the {two,
three} joint animation is still active.
But the three row at 35% progress requires a transition from {three, two} to the currently
recommended animation of three.

3.10.3 Building a move tree


The move tree predicts how moves get unconfused at game time. The accuracy of this prediction is partly
controlled by the tree classification accuracy, which ranges from 0 (lowest accuracy) to 1 (highest accuracy).
Select Move Tree->Build Move Tree to build up to 5 move trees at a time.

Figure 3.16: You specify a classification accuracy for each move tree.

• Low accuracy means that LiveMove 2 takes “bigger risks” to predict when a move gets unconfused.
• High accuracy implies that LiveMove 2 takes “smaller risks” to predict when a move gets unconfused.
• A low accuracy tree predicts that a move gets unconfused earlier compared to a high accuracy tree.

• Use a low accuracy tree to get earlier commitment to best guess classification.
But the prediction is more likely to be wrong. So you may get more “false positives”. Whether this is
important depends on your game design.

Note that if your move design allows early classification, and the tree accuracy is not too high, then your moves
will normally be unconfused before 100% completion.
In general a completely unconfused move does not subsequently become confused with other moves. So
four will continue to appear in a group on its own.
Building move trees can take time, especially if your project is large.

A move tree depends on your classifier and motion examples. So remember to rebuild the
move tree if these change; otherwise your move tree is out-of-date.
Using an out-of-date move tree for early classification at game time is a potential source of
error.

Select Move Tree->View Move Tree to inspect the built move trees (as shown in figure 3.15).

3.10.4 Move tree quality


A move tree is a prediction. A prediction is only as good as the data it is based on.
You should avoid:

• An insufficient number of examples. In this case the move tree will not make good predictions,
especially if you have a large number of moves or wide gold standards.
• Imbalanced examples. The move tree will not make good predictions if the number of examples for
some moves is much larger than for others.

For better prediction

• collect at least 50 examples for each variation of your gold standard for each move in your move set,
and
• maintain a balanced project.

3.11 Move label file


A move label file is a C header file that helps you classify motions at game time.
Clicking Build saves the move label file called myproject.h (in addition to creating the classifier .lmc
file).
The move label file contains an enum declaration that maps lmsClassLabel indices to enumeration labels.
For example, for a project with 3 moves, the move label header file looks like:

/* -- LiveMove 2 classification labels            */
/*                                                */
/* Generated by lmMaker. Do not modify by hand.   */
/*                                                */
/* -- AiLive LiveMove 2 (c) 2000-2008 AiLive Inc. */

#ifndef LM_MYPROJECT
#define LM_MYPROJECT

enum lmmyproject
{
    lmmyproject_move1 = 0,
    lmmyproject_move2,
    lmmyproject_move3
};

#endif

Use this header file in your game source code. It has two main advantages.

• Checking for moves avoids hard-coded magic numbers:

lmsBeginClassify( classifier );
do
{
    // read lmsPad data and call lmsUpdateClassifier on it
    ...
} while( !endOfMotion );
lmsEndClassify( classifier );

lmsClassLabel l = lmsGetClassification( classifier );

if ( l == lmmyproject_move1 )
{
    ... did move move1! ...
}

• Your game source is robust to changes in the order or contents of the moves in your project (provided
the moves are not renamed).

3.12 Display options


You can control how information is displayed in project and collection windows.

• Turn some columns on and off. Right click on the display to select which information is displayed.
The “Classified as” column is expensive to recompute. To turn it off, right-click on any motion and
uncheck Show Classification. You can also turn Show Summary on to display motion sum-
mary information.
• Sort by column. Click the top of any column. The motions are then sorted according to the clicked
column.
For example, with Show Summary on, click the “Duration” column. The motions are sorted in
ascending order of duration. Click again and the motions are sorted in descending order of duration.

3.13 lmMaker command line


lmMaker has two modes: a GUI mode and a batch mode.
Batch mode lets you build a classifier from the command line, so classifier building can be a step in your
build system and you can run regression tests on it.
Default operation is GUI mode. In this mode lmMaker has a single command-line parameter:

• -dir <filepath>
Set the LiveMoveData directory absolute path. The default is the current working directory.

The parameters available for batch mode operation can be viewed from the command line by typing:

• lmMaker.exe -h
List lmMaker command line parameters.
Chapter 4

Using LiveMove 2 in your game

This chapter explains how to:

• Record motions.
• Track motions.
• Recognize the player’s motions: Load classifier templates built with lmMaker onto the Wii. Use
classifiers to recognize what motion the player is performing with the Wii Remote.
• Recognize the player’s motion early: Use intermediate queries and the move tree to give early
feedback to the player.
• Tune the classifier to an individual game player.

The LiveMove 2 runtime API is ailiveBase/include/liveMove2.h.


The LiveMove 2 library is ailiveBase/lib/libLM.lib. During development link to the debug version
of the LiveMove 2 library ailiveBase/lib/libLMd.lib for error trapping and error messages. In
release builds of your game link to the optimized version of the library for faster performance.
In this chapter only the most important functions in liveMove2.h are discussed. Please consult the header
file for additional information.
The sample code in ailiveBase/sample/lmsSamples/ has examples of using the
LiveMove 2 runtime interface.

4.1 LiveMove 2 initialization


liveMove2.h call Description
lmsOpen( lmsMallocFunction mf, Open and initialize LiveMove 2 runtime library.
lmsFreeFunction ff ) Must be the first call to the LiveMove 2 interface.
lmsClose() Shutdown LiveMove 2.
lmsGetVersionString() Return a version string.

• All memory requests issued within the LiveMove 2 library will be routed to the memory allocator and
de-allocator passed to lmsOpen.
• On shutdown call lmsClose once. This frees all memory used by the LiveMove 2 library.
• You can reclaim memory used by the LiveMove 2 library without calling lmsClose if no
LiveMove 2 functions, including lmsOpen, are called thereafter.


For example, you can call the following function to initialize LiveMove 2 :

#include <stdlib.h>

static void setupLM()
{
    // Using standard malloc and free for allocation...
    lmsOpen( malloc, free );
    OSReport( "LM is open..." );
}

4.2 Creation of motion devices


liveMove2.h call Description
lmsNewMotionDevice( lmsMotionDeviceType type ) Create a new motion device
of the specified type.
lmsDeleteMotionDevice( lmsMotionDevice* md ) Delete motion device ‘*md’.
lmsGetMotionDeviceType( Find the type of motion device ‘*md’.
lmsMotionDevice const* md )
lmsGetMotionDeviceIsTrackable( Find if motion device ‘*md’
lmsMotionDevice const* md ) is trackable.
lmsGetSamplingInterval( Find the average time interval between
lmsMotionDevice const* md ) data samples that ‘*md’ outputs.

An lmsMotionDevice object represents a physical motion sensing device, or a combination of such
devices, that each player controls.

• Create an lmsMotionDevice object of the correct type for each player.


• Seven different motion device types are currently supported:
– lmsMotionDeviceTypeWII_REMOTE: Wii Remote only
– lmsMotionDeviceTypeNUNCHUK: Nunchuk only
– lmsMotionDeviceTypeFREESTYLE: Wii Remote + Nunchuk (double-handed motions)
– lmsMotionDeviceTypeMOTIONPLUS: MotionPlus-enabled Wii Remote (trackable)
– lmsMotionDeviceTypeMOTIONPLUS_FREESTYLE: Wii Remote + MotionPlus +
Nunchuk (double-handed motions)
– lmsMotionDeviceTypePAIR_WII: Two Wii Remotes (double-handed motions)
– lmsMotionDeviceTypePAIR_MOTIONPLUS: Two MotionPlus-enabled Wii Remotes
(double-handed motions)

For example:

...
// Create a motion device object for a Wii Remote with a MotionPlus attached.
lmsMotionDevice* md = lmsNewMotionDevice( lmsMotionDeviceTypeMOTIONPLUS );
...

4.3 lmsPad
Data from the controllers is received and buffered by the lmsPad library. lmsPad encapsulates the
recommended method of interaction with WPAD, KPAD and KMPLS; it simplifies and ensures the correct use of
these libraries. Complete source code for lmsPad is provided, and you may freely modify and recompile it to
suit your game should the need arise.

Figure 4.1: lmsPad library interactions. (Diagram: data from the Wii Remote, Nunchuk and MotionPlus
flows through WPAD, KPAD and KMPLS — via KPADRead, KMPLSRead and VISetPostRetraceCallback —
into lmsPad, which delivers lmsPadSample structs, with flags and dt, to libLM through lmsPadPop.)

The data type of each motion sample, lmsMotionDataSample, is typedef’ed to the struct
lmsPadSample. It is the interface between lmsPad and the LiveMove API.

4.3.1 Sample Usage


A sample interaction with lmsPad is as follows (for single-handed motion with one controller):

lmsPadSample sample;

lmsPadOpen( malloc, free );

const int controllerChan = 0;

while ( true )
{
    if ( lmsPadGetOverflowCount( controllerChan ) > 0 )
    {
        error( "Data was lost!" );
        lmsPadResetOverflowCount( controllerChan );
    }

    while ( lmsPadPop( controllerChan, &sample ) )
    {
        // Process new sample

        // Add to recording
        lmsMotionAppendElement( motion, &sample, NULL );

        // Update classification
        lmsUpdateClassifier( classifier, &sample, NULL );

        // Update tracking
        if ( lmsHasGyroData( &sample ) )
            lmsUpdateTracker( tracker, &sample );
    }
}

// Cleanup
lmsPadClose();

4.3.2 MotionPlus Controller Extensions

lmsPad auto-detects the MotionPlus extension. Data from the MotionPlus extension is required in order
to use LiveMove’s position and orientation tracking functions. lmsPad will also check whether a Nunchuk
is present in the MotionPlus extension. If found, WPAD_DEV_MPLS_FREESTYLE data (MotionPlus and
Nunchuk) will be received; otherwise, WPAD_DEV_MPLS data (MotionPlus only) will be received.
As of this writing, there is no provision to detect whether a Nunchuk was inserted into the MotionPlus after
the initial connection without interrupting the stream of incoming data.

4.3.3 Data Loss

It is essential that you check the result of lmsPadGetOverflowCount in order to identify data loss. Data
loss occurs if the internal buffer of lmsPad fills before the game can read data from it with lmsPadPop.
This is usually the result of very low frame rates (less than 20 fps) seen in development and debug builds.
Data loss will degrade the accuracy of LiveMove. The solution is to either call lmsPadPop more frequently
or increase the internal buffer size of lmsPad and recompile.

4.3.4 Data Consistency

WPAD, KPAD and KMPLS have a number of functions for modifying various parameters that alter the data
produced by these libraries. For the purposes of motion recognition, data collected with one parameter setting
does not look like data collected with a different parameter setting. As a result, the same settings must be
used during data collection and classification in game.
Generally, performance of both tracking and classification may degrade if the data-modifying parameters of
WPAD, KPAD and KMPLS are altered from their default values.
Please also note that, for the purposes of classification,

• WPAD_DEV_MPLS_FREESTYLE Nunchuk accelerometer data is not interchangeable with
WPAD_DEV_FREESTYLE Nunchuk accelerometer data.

• WPAD_DEV_MPLS_FREESTYLE gyro data is not interchangeable with WPAD_DEV_MPLS gyro data.

• Wii Remote accelerometer and Nunchuk accelerometer data are not interchangeable.

• Wii Remote accelerometer data is interchangeable across the WPAD_DEV modes.

4.4 Tracking motion

The following table summarizes the main API calls associated with motion tracking. More detailed
descriptions of each call are in liveMove2.h.
See Chapter 6 for more details on how to quickly and easily add motion tracking to your game.

liveMove2.h call Description


lmsNewTracker( lmsMotionDevice const* md ) Create a new tracker object.
lmsResetTracker( lmsTracker* t ) Clear all state data from tracker.
lmsDeleteTracker( lmsTracker* t ) Delete the tracker object.
lmsIsReadyToTrackPosition( lmsTracker const* t ) Ready to start position tracking?
lmsIsReadyToTrackPosition2( lmsTracker const* t, Parameterized form of
float lengthOfTime ) lmsIsReadyToTrackPosition.
lmsBeginPositionTracking( lmsTracker* t ) Start tracking position.
lmsIsTrackingPosition( lmsTracker const* t ) Is the tracker tracking position?
lmsEndPositionTracking( lmsTracker* t ) Stop tracking position.
lmsUpdateTracker( lmsTracker* t, Pass motion sample to the tracker and
lmsMotionDataSample const* s ) update orientation and position estimates.
lmsIsValidForTracker( lmsTracker const* t, True if sample is valid
lmsMotionDataSample const* s ) for the given tracker.
lmsGetLocation( lmsTracker const* t, Get relative location estimate of the
lmsVec3* loc ) motion device.
lmsGetVelocity( lmsTracker const* t, Get the linear velocity estimate of
lmsVec3* vel ) the motion device.
lmsGetForce( lmsTracker const* t, lmsVec3* f ) Get the linear acceleration estimate of
the motion device.
lmsGetDeviceIsStopped( lmsTracker const* t ) Has the tracked controller stopped moving?
lmsGetOrientationQuat( lmsTracker const* t, Get rotation of tracker object in
lmsQuat* q ) quaternion form.
lmsGetOrientationRotMatrix( lmsTracker const* t, Get rotation of tracker object in 3x3
lmsRotMatrix* r ) rotation matrix form.
lmsGetAngularVelocity( lmsTracker const* t, Get angular velocity of tracker object.
lmsVec3* w )
lmsGetTrackerConfidence( lmsTracker const* t ) Confidence in the current estimates.
lmsSetLocation( lmsTracker* t, Set location of tracker object.
lmsVec3 const* loc )
lmsSetTrackerMovementThreshold( lmsTracker* t, Set acceleration threshold for movement.
float acc )

The lmsMotionDevice md has to be a “trackable” device. This means that it needs to have
both accelerometer data and gyroscope data. Currently, the only trackable device is of type
lmsMotionDeviceTypeMOTIONPLUS.

4.5 Recording motion

liveMove2.h call Description


lmsNewMotion( Create an empty motion object to record
lmsMotionDevice* md, a motion from specified motion device ’*md’
lmsRecordingMode mode ) using recording mode ’mode’
lmsDeleteMotion( lmsMotion* mo ) Delete a motion object ’*mo’
lmsBeginRecording( lmsMotion* mo ) Start recording a motion to ’*mo’
lmsIsRecording( lmsMotion const* mo ) Check if a motion is being recorded to ’*mo’
lmsEndRecording( lmsMotion* mo ) Stop recording a motion to ’*mo’
lmsMotionAppendElement( lmsMotion* mo, Append a motion sample
lmsMotionDataSample const* s, to a single-handed motion, or
lmsMotionDataSample const* s2 ) two motion samples to a two-handed motion.

You can store motion data for later use; for example, to import into lmMaker to build a classifier.

A motion can be filled through the lmsBeginRecording, lmsMotionAppendElement, and
lmsEndRecording calls, and saved with lmsSerializeMotion.
Each element of a motion holds a subset of the lmsMotionDataSample data passed into
lmsMotionAppendElement. The data stored depends upon the motion’s lmsMotionDevice parameter
(i.e., md). In other words, md decides what part of the sensory input LiveMove 2 stores in the motion object.
It is important to note that md is not necessarily the same as the full physical device you use
while recording motions. For example, if you are using a Wii Remote with a MotionPlus
attachment (i.e., type lmsMotionDeviceTypeMOTIONPLUS) but only wish to record the
accelerometer data from the Wii Remote (e.g. for building a classifier), then md needs to be of type
lmsMotionDeviceTypeWII_REMOTE, not of type lmsMotionDeviceTypeMOTIONPLUS.
The data stored is sufficient to replay the motion to the LiveMove 2 API. For example, a
lmsMotionDeviceTypeFREESTYLE motion can replay both the Wii Remote and Nunchuk
accelerometer data, while a lmsMotionDeviceTypeWII_REMOTE motion can only replay Wii Remote
accelerometer data, even if a Nunchuk was attached and its data was present during lmsMotionAppendElement.
Data in a lmsMotionDataSample that is not used by LiveMove 2 (such as button press information or
other motion data not specified by md), is not recorded in the motion object.
You normally need to record motions only if you want to collect examples directly from your game rather
than using lmRecorder. If you do this then follow all the data consistency rules described in Section 4.13.

4.5.1 Recording Modes

The mode input argument to the lmsNewMotion() call determines how the motion is to be recorded.
Setting mode to lmsRecordingModeSTANDARD indicates that the motion is to be recorded as a standard
motion. The start and end, or the duration, of such a motion are signalled by some player or game event,
like holding a button or trigger down on the motion device itself. A button does not have to be used as the
signaling event, but in most cases an on-device button is the easiest and most intuitive way of indicating when
a valid motion starts and ends. As currently implemented in the lmRecorder application, a designated
button on the recording controller is used for this purpose. If the motions in your game can be effectively
marked by game contexts (e.g., a dance game where a valid motion can only start when a music note begins
and can only last for a maximal amount of time), then you should use them instead of controller button
events. The advantage is that the player doesn’t have to hold a button down while performing a motion.
Keep in mind that the way training motions were recorded determines the kind of classifier you can build
from them and how that classifier should be used in game. Specifically, standard motions should only
be used to build standard classifiers. A standard classifier begins and ends classification at the exact
moments when a valid motion starts and ends. This means that the same signaling events used to mark the
boundaries of training motions at development time should be used at run time for classification. For
example, if you held the X button on the controller to begin and end recording of each training motion, you
should do the same when performing a motion to be recognized in game (the button events would be used to
start and end classification of an in-game motion). Again, the signaling events for recording and classifying
can be “virtual buttons” like game contexts that don’t involve real buttons at all.
It is sometimes desirable to simply let the classifier classify continuously on an incoming stream of motion
data. This way, the classifier decides if and when a possible valid motion just started, and if and when it ends
with or without a positive identification. This means that the player is not required to do anything to indicate
(e.g. use button press) or to monitor (e.g. watch for game events) motion boundaries while playing. This of
course makes it a much harder job for the classifier to do well. LiveMove 2 refers to these type of classifiers as
buttonless classifiers (so as to distinguish them from their standard counterpart). In order for LiveMove 2 to
build buttonless classifiers, the training motions need to be recorded as buttonless motions.
You can specify mode as lmsRecordingModeBUTTONLESS when creating an empty buttonless motion
object to record into. Recording a buttonless motion requires that no on-device button be used to mark the
motion’s boundaries while recording. This is to ensure that the way training motions are performed matches
the way in game motions are to be performed by the player. Since no button presses are needed to inform
LiveMove 2 of motion start and end when using a buttonless classifier in game, the same should be true when
recording training motions.
lmRecorder offers one convenient way to record buttonless motions by using an extra motion device. Its
sole purpose is to provide signaling events that mark the boundaries of each recorded buttonless motion. You
can also use a suitable game event to serve the same purpose. See Section 4.5.3 for more detail on how to
record buttonless motions.

4.5.2 Recording standard examples


You can use lmRecorder to record standard motions whose boundaries are marked by holding the trigger
(B button) down on the recording Wii Remote.
In case you want to do your own motion recording using the LiveMove 2 API, there is a small sample
program lmsRecord.cxx in the LiveMove 2 sample directory ailiveBase/sample/lmsSamples.
It shows you how to use the LiveMove 2 API to record motions from a Wii Remote. Several things to note:

• Every new value read from lmsPad must be passed in temporal order to LiveMove 2 using
lmsMotionAppendElement. Also, make sure that samples are popped from lmsPad fast enough
so that it does not overflow. This is important. Please verify it is happening correctly in your code.

• An lmsMotion object stores the motion data.

• The lmsMotion object grows in memory size for each lmsMotionAppendElement call.

• The lmsMotion object stops growing in memory size when lmsEndRecording is called.

• Once recording is complete, the lmsMotion object represents the motion that was performed between
the calls lmsBeginRecording and lmsEndRecording.

• This approach generates standard motion data that can be used to build a standard classifier. But this
data should not be used to build a buttonless classifier.

4.5.3 Recording buttonless examples


You must record data both before and after the motion proper to create data that can be used to build a
buttonless classifier. The extra data is used by the classifier to learn to better detect the starts and ends of
valid motions.

• Issue a small sequence of lmsMotionAppendElement calls before the motion starts and a small
sequence after the motion ends. The start and end of a motion may be marked by a signaling event like
a button on a secondary input device.
Recording a window of 0.25 seconds of motion data before and after the motion should be sufficient.

• The motion sensor must be relatively still for approximately 0.1 second before the motion starts.
“Relatively still” means that the overall force magnitude on the motion sensor, minus gravity, should
be less than about 0.3G.
This ensures that LiveMove 2 does not miss potentially important data at the beginning of a motion.
For example, lmRecorder displays RECORDING PREVENTED if this condition is not met, and
throws the data away.
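As a sketch, the stillness condition above could be checked like this. The function name and the exact formula are our own illustration (LiveMove 2 and lmRecorder perform their own check internally); samples are assumed to be in units of G:

```cpp
#include <cmath>

// Returns true if an accelerometer sample (in G) is "relatively still":
// the overall force magnitude, minus 1G of gravity, is below the threshold.
// Illustrative helper, not part of the LiveMove 2 API.
bool isRelativelyStill( float ax, float ay, float az, float threshold = 0.3f )
{
    float magnitude = std::sqrt( ax * ax + ay * ay + az * az );
    return std::fabs( magnitude - 1.0f ) < threshold;
}
```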

liveMove2.h call Description


lmsSetMotionStart( lmsMotion* m ) Declare the start of the active motion.
The next lmsMotionAppendElement
marks the start of the motion.
lmsSetMotionEnd( lmsMotion* m ) Declare the end of the active motion.
The previous lmsMotionAppendElement
marks the end of the motion.

• Have a rolling buffer that stores the most recent 0.25 seconds of motion data samples.

• Call lmsBeginRecording and lmsSetMotionStart when the motion proper starts. Then make a
sequence of calls to lmsMotionAppendElement to push all the buffered data into the motion.

• Call lmsSetMotionEnd when the motion proper stops.

• Call a sequence of lmsMotionAppendElement after the motion proper stops.

• Call lmsEndRecording.

For your convenience, lmRecorder has an option to record buttonless motions that implements all of the
above functionality. Motion boundaries (i.e., the start and end of the motion proper) are marked by holding
the trigger button down on a secondary (non-recording) Wii Remote.
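The recording procedure above relies on a rolling buffer of the most recent samples. A minimal sketch of such a buffer follows; the Sample type and the capacity (e.g. 0.25 s at a 100 Hz sample rate would be 25 samples) are illustrative assumptions, not part of the LiveMove 2 API:

```cpp
#include <cstddef>
#include <deque>

// Illustrative accelerometer sample; in game this would be the data you
// pass to lmsMotionAppendElement.
struct Sample { float x, y, z; };

// Holds the most recent `capacity` samples, discarding the oldest as new
// ones arrive -- the pre-motion window needed for buttonless recording.
class RollingBuffer
{
public:
    explicit RollingBuffer( std::size_t capacity ) : capacity_( capacity ) {}

    void push( Sample const& s )
    {
        if ( samples_.size() == capacity_ )
            samples_.pop_front();   // discard the oldest sample
        samples_.push_back( s );
    }

    std::size_t size() const { return samples_.size(); }
    Sample const& at( std::size_t i ) const { return samples_[ i ]; }

private:
    std::size_t capacity_;
    std::deque<Sample> samples_;
};
```

When the motion proper starts, the buffered samples are what you would push into the motion after calling lmsBeginRecording.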

4.6 Saving and loading motions

liveMove2.h call Description


lmsSerializeMotion( lmsMotion* m, Serialize motion ‘*m’ into a buffer.
char const** buffer, int* size )
lmsUnserializeMotion( char const* buffer, int size ) Unserialize a motion from a buffer.
lmsDeleteSerial( lmsSerial* s ) Delete serial object ‘*s’.

To save a motion in storage media, serialize the lmsMotion object into a buffer and write the data to a file
(or something equivalent).
To load the object from storage media, read the file into a buffer and unserialize the object.
For example:

void saveMotion( char* fileName, lmsMotion* motion )


{
// Serialize motion into buffer
int size;
char* buf;
lmsSerial* serial = lmsSerializeMotion( motion, &buf, &size );

saveFile( fileName, buf, size );

// Clean up
lmsDeleteSerial( serial );
}

lmsMotion* loadMotion( char const* fileName )


{
// Load motion data from a file
int size;
char* buf = loadFile( fileName, &size );

// Unserialize motion
lmsMotion* motion = lmsUnserializeMotion( buf, size );

// Finished with the buffer, delete the memory
delete[] buf;

return motion;
}
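The saveFile/loadFile helpers used in the examples above are not defined by the manual. One possible implementation, offered as an illustrative sketch with minimal error handling (not part of the LiveMove 2 API):

```cpp
#include <cstdio>

// Write `size` bytes from `buf` to a file; returns false on failure.
bool saveFile( char const* fileName, char const* buf, int size )
{
    FILE* f = std::fopen( fileName, "wb" );
    if ( !f )
        return false;
    std::size_t written = std::fwrite( buf, 1, (std::size_t)size, f );
    std::fclose( f );
    return written == (std::size_t)size;
}

// Read a whole file into a new[]-allocated buffer; the caller receives the
// byte count through *size and must delete[] the result.
char* loadFile( char const* fileName, int* size )
{
    FILE* f = std::fopen( fileName, "rb" );
    if ( !f )
        return 0;
    std::fseek( f, 0, SEEK_END );
    long n = std::ftell( f );
    std::fseek( f, 0, SEEK_SET );
    char* buf = new char[ (std::size_t)n ];
    if ( std::fread( buf, 1, (std::size_t)n, f ) != (std::size_t)n )
    {
        std::fclose( f );
        delete[] buf;
        return 0;
    }
    std::fclose( f );
    *size = (int)n;
    return buf;
}
```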

4.7 Loading classifier templates

liveMove2.h call Description


lmsUnserializeClassifierTemplate( Create a classifier template from a buffer.
char* b, int size )
lmsSerializeClassifierTemplate( Create a serialized version
lmsClassifierTemplate const* ct, of a classifier template.
char* b, int size )
lmsDeleteClassifierTemplate( Delete a classifier template.
lmsClassifierTemplate* ct )
lmsGetMotionClassName( Find the name of a motion class
lmsClassifierTemplate const* ct, given its label.
lmsClassLabel label )
lmsGetLabelFromClassName( Find the label of a motion class
lmsClassifierTemplate const* ct, given its name.
char const* name )
lmsGetMotionClassCount( Get the number of motion classes
lmsClassifierTemplate const* ct ) in the classifier template.
lmsIsButtonlessClassifier( Returns true if ‘*ct’ is buttonless.
lmsClassifierTemplate const* ct )

lmMaker builds a classifier template from motion samples and stores it in a “.lmc” file. To create the template
in your game, load the file into a buffer and unserialize an lmsClassifierTemplate object.
A classifier template is a set of static rules for classification and does not change unless tuning is per-
formed. It is used to create an lmsClassifier object that actually handles classification sessions
(lmsClassifier is explained in the next section).
For example:

lmsClassifierTemplate* loadClassifierTemplate( char const* filename )


{
// load classifier template into a buffer
char* buffer;
int sizeOfBuffer;
buffer = loadFile( filename, &sizeOfBuffer );

// instantiate a classifier template from the buffer


lmsClassifierTemplate* ct = lmsUnserializeClassifierTemplate( buffer, sizeOfBuffer );

// finished with the buffer, delete the memory


delete[] buffer;

return ct;
}

4.8 Performing Classification

liveMove2.h call Description


lmsNewClassifier( lmsMotionDevice const* md, Create a new motion classifier and associate
lmsClassifierTemplate const* ct ) it with motion device ‘*md’ and classifier
template ‘*ct’.
lmsDeleteClassifier( lmsClassifier* c ) Delete classifier ‘*c’.
lmsSetDoIntermediateClassification( Switch intermediate classification
lmsClassifier* c, bool flag ) on or off.
lmsBeginClassify( lmsClassifier* c ) Begin classification session.
lmsEndClassify( lmsClassifier* c ) End classification session.
lmsIsClassifying( lmsClassifier const* c ) Return true if classifier ‘*c’ has
begun classifying data.
lmsUpdateClassifier( lmsClassifier* c, Update classifier ‘*c’ with one motion
lmsMotionDataSample const* s, sample for single-handed moves or two
lmsMotionDataSample const* s2 ) motion samples for two-handed moves.
lmsGetClassification( lmsClassifier const* c ) Return the current classification
result of motion data captured.

4.9 Standard classification


In standard classification mode the game tells LiveMove 2 when a motion starts and stops. You must use a
standard classifier in this mode (you can check that lmsIsButtonlessClassifier returns false).

• Create an lmsMotionDevice object for each player.


• Create an lmsClassifierTemplate object by unserializing the “.lmc” file you created with
lmMaker.
• Create an lmsClassifier object for each player.
• An lmsClassifierTemplate object can be associated with multiple lmsClassifier objects.
• The lmsClassifierTemplate object must not be deleted while classifiers associated with
it are still in use.
• You control when to lmsBeginClassify and lmsEndClassify, i.e., you control how to identify
the start and end of a motion.
Keep in mind, however, that the method of identifying motion start and motion end at classification
time should be consistent with that at development time when recording example motions.

Q. Can I use a standard classifier without buttons? – See Section 4.13 and answer 11.
The sample application lmsClassify.cxx in the LiveMove 2 sample code directory
ailiveBase/sample/lmsSamples shows you how to use the LiveMove 2 API to classify motions
in game with a standard classifier. A few things to note:

• Always call lmsBeginClassify at the start of a motion.


• Always call lmsEndClassify once the motion is finished.

• lmsGetClassification returns a final, stable classification result once lmsEndClassify is
called.

• lmsGetClassification returns an intermediate classification result if it is called before
lmsEndClassify is called. Set lmsSetDoIntermediateClassification to true before
calling lmsBeginClassify to enable intermediate classification.

• The average time cost of simultaneously classifying 3D accelerometer motion sensor data from 8
lmsMotionDevice objects using a capacity 1 classifier running on a Nintendo Wii is 5% of a
frame at 60 FPS.
The cost is less than 10% of a frame 95% of the time.

• Costs scale linearly. So the average cost of classifying the data from a single lmsMotionDevice
object is under 1% of a frame.

For sample source code on how to use the LiveMove 2 API to do standard classification, go to your
LiveMove 2 installation’s sample directory ailiveBase/sample/lmsSamples and take a look at
lmsClassify.cxx.

4.10 Buttonless classification


In buttonless classification mode LiveMove 2 tells the game when a motion starts and stops. You must use a
buttonless classifier in this mode (you can check that lmsIsButtonlessClassifier returns true).

liveMove2.h call Description


lmsIsMotionActive( lmsClassifier const* c ) Returns true if classifier ‘*c’ decides that
a valid/active motion is currently being
performed on the motion device

The sample application lmsClassifyButtonless.cxx in the LiveMove 2 sample code directory
ailiveBase/sample/lmsSamples shows you how to use the LiveMove 2 API to classify motions
in-game with a buttonless classifier. A few things to note:

• Only call lmsBeginClassify and lmsEndClassify once, at the start and end of a classification
session that might classify many motions.
For example, you might lmsBeginClassify at the start of a game and lmsEndClassify when
the game is over.

• A buttonless classifier recognizes multiple motions within a lmsBeginClassify and


lmsEndClassify context.

• lmsIsMotionActive tells you which of the following four states the current session is in.

– When lmsIsMotionActive is false, no valid motion is being performed.


– When lmsIsMotionActive changes from false to true, a motion has just started.
– When lmsIsMotionActive is true, the motion is being performed.
– When lmsIsMotionActive changes from true to false, the motion has just ended.

• lmsGetClassification returns the most recent final classification result when
lmsIsMotionActive is false. If no classification has been performed yet, it returns
lmsUndeterminedClass.

• lmsGetClassification returns an intermediate classification result when
lmsIsMotionActive is true. To enable intermediate classification, set
lmsSetDoIntermediateClassification to true before calling lmsBeginClassify.
• The game can force the start of a motion by calling lmsBeginActiveMotion and its end by calling
lmsEndActiveMotion.
• Buttonless classification requires more CPU cycles compared to standard classification.
• The average time cost of simultaneously classifying 3D accelerometer motion sensor data from 8
lmsMotionDevice objects using a capacity 1 buttonless classifier running on a Nintendo Wii is 9%
of a frame at 60 FPS.
The cost is less than 14% of a frame 95% of the time.
• Costs scale linearly. So the average cost of classifying the data from a single lmsMotionDevice
object is under 1.5% of a frame.
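The four session states described above amount to edge detection over successive lmsIsMotionActive readings. This is a generic sketch of that logic; the MotionEvent names are our own illustration, not part of the LiveMove 2 API:

```cpp
// States derived from the previous and current "motion active" flags.
enum MotionEvent
{
    NoMotion,          // was false, still false
    MotionStarted,     // false -> true: a motion has just started
    MotionInProgress,  // was true, still true
    MotionEnded        // true -> false: the motion has just ended
};

MotionEvent classifyEdge( bool wasActive, bool isActive )
{
    if ( !wasActive && isActive )
        return MotionStarted;
    if ( wasActive && isActive )
        return MotionInProgress;
    if ( wasActive && !isActive )
        return MotionEnded;
    return NoMotion;
}
```

In game you would call this once per lmsUpdateClassifier, feeding it the previous and current values of lmsIsMotionActive, and read the final classification on a MotionEnded event.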

Q. How do I handle frequent lmsUndeterminedClass results from buttonless classification? – See
Section 5.8.

4.10.1 Mixed buttonless mode: Game-LiveMove mode


In this mode the game tells LiveMove 2 when a motion starts.

liveMove2.h call Description


lmsBeginActiveMotion( lmsClassifier* c ) Declare the start of an active motion. The next
lmsMotionAppendElement becomes the
start of the motion.

For example, in a Wario Ware™-style game the player is given a countdown to begin a move. In this situation
the game can tell LiveMove 2 when the motion starts, and LiveMove 2 then tells the game when the move has
stopped.
For example:

...
do
{
if ( moveStarts )
{
lmsBeginActiveMotion( classifier );
// The next lmsUpdateClassifier is the start of the active motion
}

// Collect new status from controller 0


lmsMotionDataSample sample;
if ( lmsPadPop( WPAD_CHAN0, &sample ) )
{
// Update classifier with new sample
lmsUpdateClassifier( classifier, &sample, NULL );

bool activeMotion = lmsIsMotionActive( classifier );


...
// When activeMotion returns false after moveStarts==true then the
// move has stopped.
...
}
} while( !endOfClassificationSession );
...

• lmsIsMotionActive returns true after calling lmsBeginActiveMotion.

• When lmsIsMotionActive subsequently returns false the motion has stopped.

Q. How do I collect examples for Game-LiveMove mode? – See Section 5.10.

4.10.2 Mixed buttonless mode: LiveMove-Game mode


In this mode the game tells LiveMove 2 when a motion stops.

liveMove2.h call Description


lmsEndActiveMotion( lmsClassifier *c ) Declare the end of the active motion. The previous
lmsMotionAppendElement is the end of the motion.

For example, in a tennis game the player starts swinging before addressing the ball. LiveMove 2 tells the game
when the player starts moving; then the game tells LiveMove 2 that the move has stopped when the ball hits
the racket.
It is also possible for the game to determine that a move has stopped via some heuristic that combines game
logic with queries to LiveMove 2, such as the current best-guess label, its likelihood, and motion progress,
and/or calls to lmsBestGuessIsSafe using an lmsMoveTree built from the classifier.
For example:

...
do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( lmsPadPop( WPAD_CHAN0, &sample ) )
{
// Update classifier with new status
lmsUpdateClassifier( classifier, &sample, NULL );

bool activeMotion = lmsIsMotionActive( classifier );

if ( moveEnds )
{
lmsEndActiveMotion( classifier );
// The next lmsUpdateClassifier is not part of the active motion
}
...
}
} while( !endOfClassificationSession );
...

• lmsIsMotionActive returns false after calling lmsEndActiveMotion.

• When lmsIsMotionActive subsequently returns true a new motion has started.

• LiveMove 2 can decide that the move has ended before the game does. For example, LiveMove 2 decides
that the move is not a valid tennis swing before the ball hits the racket.

Q. How do I collect examples for LiveMove-Game mode? – See Section 5.10.

4.11 Early classification


LiveMove 2 provides a collection of intermediate queries that guess which move the player is performing
before they complete it. So animations or other in-game events can begin very quickly.
An early classification is a prediction of what will happen. Like any prediction it can turn out to be wrong.
For example, the player drops the Wii Remote half-way through a perfect example of punch1.
How you handle a mismatch between an early classification and a final classification depends on your game
design. For example, you could complete the punch1 event, or you could abort it.
Since early classification is based on incomplete data it is normally less accurate than final classification.

liveMove2.h call Description


lmsSetDoIntermediateClassification( Turn intermediate classification on or off.
lmsClassifier* c, bool turnOn )
lmsGetClassificationLikelihoodForClass( Return how likely the motion data
lmsClassifier* c, received so far will eventually
lmsClassLabel classID ) be classified as classID.

• Intermediate queries incur approximately a 15% increase in CPU costs. So only call
lmsSetDoIntermediateClassification with true if you need it.

• Call lmsGetClassification() when a motion is active to get LiveMove 2’s best guess.
A motion is active in standard classification mode between a call to lmsBeginClassify and
lmsEndClassify. A motion is active in buttonless classification when lmsIsMotionActive
returns true.
So you can get LiveMove 2’s best guess at any time during the performance of a motion. But
LiveMove 2 makes better guesses as more data is received.

Q. How do I know when I should trust the best guess? – The move tree is designed to answer this
question. See the next section.

• Call lmsGetClassificationLikelihoodForClass when a motion is active to get the
likelihood of a given move. Likelihood measures how likely it is that the current motion is an
example of a given move.

• Likelihood is a value between 0 (not likely) and 1 (highly likely). The sum of likelihoods for every
move, including lmsUndeterminedClass, is always 1. So you can interpret the likelihood as the
probability that the final classification will be classID.

• In fact lmsGetClassification() returns the move of highest likelihood.

• LiveMove 2 may guess badly at the start of a move. The best guess will change during the move.

• You can poll lmsGetClassification and lmsGetClassificationLikelihoodForClass
frequently. But the results do not change in between lmsUpdateClassifier calls.
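Since per-class likelihoods sum to 1 and lmsGetClassification returns the move of highest likelihood, the relationship can be sketched as a plain argmax. This helper is our own illustration over an array of likelihoods, not part of the LiveMove 2 API:

```cpp
#include <cstddef>

// Return the index of the class with the highest likelihood.
// Mirrors the documented relationship between per-class likelihood and
// the best guess; illustrative only.
int bestGuessIndex( float const* likelihoods, std::size_t numClasses )
{
    std::size_t best = 0;
    for ( std::size_t i = 1; i < numClasses; ++i )
    {
        if ( likelihoods[ i ] > likelihoods[ best ] )
            best = i;
    }
    return (int)best;
}
```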

4.12 Using a move tree with early classification


Consider that after 0.1 seconds from the start of a motion lmsGetClassification returns punch1 and
the likelihood is 0.65. Should you commit to a punch1 event (such as starting an animation) or should you
wait until the likelihood rises further?
Use the move tree to answer this type of question.

4.12.1 Loading a move tree


Create a move tree with lmMaker at development time (see Section 3.10). You can load and use it at
run-time.

liveMove2.h call Description


lmsUnserializeMoveTree( char* buffer, int s ) Load a move tree from a buffer.
lmsDeleteMoveTree( lmsMoveTree* t ) Delete a move tree to free resources.

First load the move tree data into Wii memory and unserialize it to create a lmsMoveTree object.
For example:

lmsMoveTree* loadMoveTree( char const* filename )


{
lmsMoveTree* theMoveTree = NULL;
char* buffer;
int sizeOfBuffer;

//load the move tree into a buffer


buffer = loadFile( filename, &sizeOfBuffer);

//Instantiate a move tree from the buffer


theMoveTree = lmsUnserializeMoveTree( buffer, sizeOfBuffer );

// Finished with the buffer, delete the memory


delete[] buffer;

return theMoveTree;
}

4.12.2 When the best guess is safe


liveMove2.h call Description
lmsBestGuessIsSafe( lmsClassifier* c, Returns true if it is safe to assume that
lmsMoveTree* t ) the current best guess is a correct
prediction of the true classification.
lmsGetMoveTreeAccuracy( lmsMoveTree* t ) Return the accuracy of tree ‘*t’.

IF

• the motion being performed on classifier ‘*c’ is an example of a move in the tree ‘*t’, and

• lmsBestGuessIsSafe returns true

THEN

• the current label returned by lmsGetClassification is a correct early prediction of the true
classification lmsGetMoveTreeAccuracy percent of the time.

In this precise sense the best guess is safe.


For example, you build a classifier that recognizes 3 moves: punch1, punch2 and kick. You build a
move tree of 95% accuracy. At game time the player performs a motion. After 0.1 seconds the best guess is
punch1 and lmsBestGuessIsSafe returns true. If the player is performing either punch1, punch2
or kick then punch1 will be a correct prediction of the player’s motion 95% of the time. It is therefore safe
to commit to a punch1 event before the player completes the move.

• Call lmsBestGuessIsSafe when a motion is active.

• A motion may subsequently become unsafe. For example, the player drops the Wii Remote, or does
not complete the punch1 move.

• Use lmsBestGuessIsSafe to avoid writing code to initiate in-game events based on intermediate
likelihood values.

• You can poll lmsBestGuessIsSafe frequently. But the result does not change in between
lmsUpdateClassifier calls.

• lmsBestGuessIsSafe tells you when a motion is at a leaf of the move tree.

For example:

lmsClassifier* classifier = ...


lmsMoveTree* tree = ...
lmsClassLabel earlyGuess = lmsUndeterminedClass;

lmsBeginClassify( classifier );
bool endOfMotion = false;
do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
{
continue;
}

lmsUpdateClassifier( classifier, &sample, NULL );


earlyGuess = lmsGetClassification( classifier );
if ( earlyGuess != lmsUndeterminedClass )
{
bool isSafe = lmsBestGuessIsSafe( classifier, tree );
if ( isSafe )
{
// switch on earlyGuess and initiate an early classification event
...
}
}

endOfMotion = ...

} while( !endOfMotion );

lmsEndClassify( classifier );
lmsClassLabel finalClassification = lmsGetClassification( classifier );
if ( finalClassification != earlyGuess )
{
// Either abort the early classification event or
// accept the false positive rate.
...
}
...

4.12.3 When intermediate groups are safe


You can commit to early classification events earlier, even before lmsBestGuessIsSafe returns true.

liveMove2.h call Description


lmsGetIntermediateConfusionSet( lmsClassifier* c, Set ‘*s’ contains the set of moves
lmsMoveTree* t, lmsClassLabelSet* s ) currently confused with each other.

• Call lmsGetIntermediateConfusionSet when a motion is active.

• Call lmsGetIntermediateConfusionSet when lmsGetClassification has a best guess
(i.e. does not return lmsUndeterminedClass).

• lmsGetIntermediateConfusionSet writes a set of class labels. The set represents the moves
that are currently confused with each other. LiveMove 2 hasn’t yet received sufficient motion sensor
data to know exactly which move the player is performing.

• Create a lmsClassLabelSet to store the confusion set.

liveMove2.h call Description


lmsNewClassLabelSet() Create a set that stores class labels.
lmsNumClassLabelSetElements( lmsClassLabelSet* l ) Return the number of labels in a set.
lmsDeleteClassLabelSet( lmsClassLabelSet l ) Delete a class label set to free resources.
lmsGetClassLabelFromSet( lmsClassLabelSet* l, Get the ‘n’th class label in the set.
int n )

IF

• the motion being performed on classifier ‘*c’ is an example of a move in the tree ‘*t’

THEN

• the set returned by lmsGetIntermediateConfusionSet contains a move that is the correct
early prediction of the true classification lmsGetMoveTreeAccuracy percent of the time.

In this precise sense the intermediate confusion set is safe.


Continuing our example: after 0.05 seconds lmsGetIntermediateConfusionSet returns a set con-
taining punch2 and kick. If the player is performing either punch1, punch2 or kick then the final
classification of the player’s motion will either be punch2 or kick 95% of the time. It is therefore safe to
commit to an animation that can eventually transition to either a punch2 or kick.

• lmsGetIntermediateConfusionSet returning a set with a single element is logically
equivalent to lmsBestGuessIsSafe returning true.

• Use lmsGetIntermediateConfusionSet to avoid writing code to initiate transitional in-game
events based on intermediate likelihood values.

• You can poll lmsGetIntermediateConfusionSet frequently. But the result does not change
in between lmsUpdateClassifier calls.

• lmsGetIntermediateConfusionSet tells you when a motion is at a node of the move tree.

• The sequence of sets returned by lmsGetIntermediateConfusionSet is normally a sequence
of subsets that terminates with a set containing a single element. This sequence corresponds to a
traversal of the move tree.

• But it is always possible that an intermediate confusion set may subsequently become unsafe. A
confusion set becomes unsafe when lmsGetIntermediateConfusionSet returns a set that is
not a subset of a previously returned set. This sequence corresponds to a traversal of the move tree
that is revoked.
You can use this condition to abort the move. This condition is more stringent than aborting when
lmsGetClassification() returns a sequence of lmsUndeterminedClass labels.
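The "revoked traversal" condition above reduces to a subset test between the latest confusion set and the previously returned one. A sketch, using plain integers to stand in for lmsClassLabel values (in game you would fill the sets via lmsGetClassLabelFromSet):

```cpp
#include <set>

// Returns true if every label in `current` also appears in `previous`,
// i.e. the confusion set is still narrowing. Illustrative helper, not
// part of the LiveMove 2 API.
bool isSubsetOf( std::set<int> const& current, std::set<int> const& previous )
{
    for ( std::set<int>::const_iterator it = current.begin();
          it != current.end(); ++it )
    {
        if ( previous.find( *it ) == previous.end() )
            return false;  // a new label appeared: the traversal was revoked
    }
    return true;
}
```

Calling this after each lmsGetIntermediateConfusionSet, with the previous set as the second argument, flags the moment a move becomes a candidate for aborting.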

For example:

// include auto-generated header file from lmMaker to get class label enums
#include "boxing.h"

lmsClassifier* classifier = ...


lmsMoveTree* tree = ...
lmsClassLabel earlyGuess = lmsUndeterminedClass;
lmsClassLabelSet* confusedSet = lmsNewClassLabelSet();

lmsBeginClassify( classifier );
bool endOfMotion = false;
do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
{
continue;
}

// Update classifier with new status


lmsUpdateClassifier( classifier, &sample, NULL );

earlyGuess = lmsGetClassification( classifier );


if ( earlyGuess != lmsUndeterminedClass )
{
bool isSafe = lmsBestGuessIsSafe( classifier, tree );
if ( isSafe )
{
// switch on earlyGuess and initiate an early classification event
...
}
else
{
// The best guess is not safe, so examine the confused set

lmsGetIntermediateConfusionSet( classifier, tree, confusedSet );


// Collect members of set into an array of predicates
bool preds[ boxing_num_moves ] = { false };
for ( unsigned int i = 0; i < lmsNumClassLabelSetElements( confusedSet ); i++ )
{
preds[ lmsGetClassLabelFromSet( confusedSet, i ) ] = true;
}
// Commit to any transitional animations or other in-game events
...
if ( preds[ boxing_punch2 ] && preds[ boxing_kick ] && !preds[ boxing_punch1 ] )
{
// Initiate transitional "punch2 or kick" animation
...
}
...
}
}

endOfMotion = ...

} while( !endOfMotion );

lmsEndClassify( classifier );
lmsClassLabel finalClassification = lmsGetClassification( classifier );
if ( finalClassification != earlyGuess )
{
// Either abort the early classification event or
// accept the false positive rate.
...
}
...
lmsDeleteClassLabelSet( confusedSet ); // cleanup

4.13 Data capture consistency


To get good classification accuracy it is very important that the method used to update LiveMove 2 when
recording example motions is identical to the method used to update LiveMove 2 when classifying motions.
The general principle is:

Ensure motion data collection is consistent across recording and recognizing.

Please be aware of the following sources of error:

• Ensure motion sensor data is passed to lmsMotionAppendElement/lmsUpdateClassifier
in temporal order.
• Ensure the motion sensor calibration settings are consistent across both recording and recogniz-
ing. For example, lmRecorder sets the Wii Remote and Nunchuk on channel 0 to

f32 acc_play_radius = 0.00f;
f32 acc_sensitivity = 1.00f;
KPADSetAccParam( 0, acc_play_radius, acc_sensitivity );

Different parameters to KPADSetAccParam reduce the amount of information that LiveMove 2 has
to work with.
The settings above must be replicated in your game otherwise LiveMove 2 may not operate correctly.

• Watch out for motion sensor data loss at low frame rates. Recording motion sensor data from the
lmsPad at low frame rates (e.g. < 20 frames per second) results in data loss. LiveMove 2 will not
operate correctly on this data. The general principle is:

LiveMove 2 must receive all the data generated by the motion sensor during the perfor-
mance of a motion.

lmRecorder does not drop data. More specifically, it does not save motions if data was dropped.
But your game, especially during development in a debug build, may drop data from the Wii Remote.
LiveMove 2 will not operate correctly under these conditions.
Use lmsPad and make sure samples are popped fast enough so that it does not overflow.

• Be sure your start and stop criteria for motions are the same when recording as in-game.
For example, if you designed your game such that the player starts a motion by pressing the B button
and ends it by pressing it again, do not have players hold the B button to capture example motions.
Implement the same motion start/stop behavior in your data-capture application. Otherwise, small
differences can creep into your data that will result in degraded performance.
Likewise, when collecting examples for a buttonless classifier do not signal motion start/stop with
a button press on the recording controller(s). The force associated with the button press is usually
significant enough to be noticed and recorded. But this force is absent when buttonless classification
is used in game. Furthermore, button pressing can often affect how the user holds the device and how
she performs the motions.

• Do not mix and match Wii Remote and Nunchuk data.


The motion sensors in the Wii Remote and Nunchuk behave differently. If Wii Remote motion sen-
sor data is passed to lmsMotionAppendElement at example recording time, but Nunchuk mo-
tion sensor data is passed to lmsUpdateClassifier at classification time (or vice-versa), then
LiveMove 2 will not operate correctly.
Similarly, a Wii Remote classifier cannot be used to recognize Nunchuk data.
Note that lmRecorder and lmMaker ensure that Wii Remote and Nunchuk data is never mixed up.
But you need to watch out for this issue in your game code.

4.14 Motion progress

liveMove2.h call Description


lmsGetMotionProgressForClass( lmsClassifier* c, Returns percent progress of a motion
lmsClassLabel classID ) assuming it is an example of class classID.

• Use motion progress when you want to know “how far” a player is through the performance of a move.
For example, the progress of a “turn” or “twisting” move can control the rotation of a door handle.
For example, progress can drive the frame-rate for an animation that matches the current move. So you
can animate slow and fast punches.

• Call lmsGetMotionProgressForClass when a motion is active to get the motion progress.

• lmsGetMotionProgressForClass returns a value from 0 (no progress at the start of a move)
to 1 (the motion is complete at the end of a move).
For example, a value of 0.45 means that LiveMove 2 estimates that the player is about 45% of the way
through the performance of move classID.
This does not imply that the player is actually performing move classID.

• If lmsGetClassification stabilizes to a particular move prior to completion then
lmsGetMotionProgressForClass is a nondecreasing function of time. But it is not
guaranteed to be smooth.

Here’s an example of just one way of using motion progress in conjunction with lmsBestGuessIsSafe:

#include "myGame.h"

lmsClassifier* classifier = ...


lmsMoveTree* tree = ...
lmsClassLabel earlyGuess = lmsUndeterminedClass;
float initialMotionProgress = 0.0f;

lmsBeginClassify( classifier );
bool endOfMotion = false;
do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
{
continue;
}

lmsUpdateClassifier( classifier, &sample, NULL );


earlyGuess = lmsGetClassification( classifier );
if ( earlyGuess != lmsUndeterminedClass )
{
bool isSafe = lmsBestGuessIsSafe( classifier, tree );
if ( isSafe )
{
// switch on earlyGuess and initiate an early classification event
if ( earlyGuess == myGame__turnDoorHandleClockwise )
{
float motionProgress =
lmsGetMotionProgressForClass( classifier, earlyGuess );
if ( initialMotionProgress == 0.0f )
{
initialMotionProgress = motionProgress;
}
float normalizedProgress =
(motionProgress - initialMotionProgress) / ( 1.0f - initialMotionProgress );
// Drive an on-screen animation
rotateDoorHandleClockwise( normalizedProgress );
}
...
}
}

endOfMotion = ...

} while( !endOfMotion );
...

4.15 Motion accuracy



liveMove2.h call Description


lmsSetCalcMotionAccuracy( lmsClassifier* c, Turn on/off motion accuracy calculation.
bool turnOn )
lmsGetCalcMotionAccuracy( Returns true if accuracy calculation is active.
lmsClassifier const* c )
lmsGetMotionAccuracy( Returns the accuracy of the last classified motion.
lmsClassifier const* c,
lmsClassLabel classID,
float stability )

• Use the accuracy score when you want to


– differentially reward the player,
– control in-game events depending on how well the player performed, and
– provide feedback so the player can optimize their performance.
• A motion is accurate if it is similar to the gold standard.
• The accuracy score ranges from 0 (least similar) to 1 (most similar).
• An accuracy score is available only for successfully classified motions. The accuracy score of lmsUndeterminedClass is 0.
• The stability parameter controls the distribution of the accuracy scores. In general, large values produce
low and stable scores while small values produce high and jumpy scores.

Distribution of accuracy scores:

                 Mean   Variance
Low stability    High   High
High stability   Low    Low

Experiment with different stability parameters for different moves to give better feedback.
• The computation of an accuracy score incurs a negligible computational cost.

For example:

lmsClassifier* classifier = ...


lmsSetCalcMotionAccuracy( classifier, true );

lmsBeginClassify( classifier );
...
...
lmsEndClassify( classifier );
lmsClassLabel const classID = lmsGetClassification( classifier );

// use different stabilities for different moves


float scoreStability;
switch ( classID )
{
case MOVE1: scoreStability = 0.9f; break;
case MOVE2: scoreStability = 2.0f; break;
case MOVE3: scoreStability = 1.2f; break;
default: scoreStability = 1.0f; break;
}
float accuracy = lmsGetMotionAccuracy( classifier, classID, scoreStability );

• In buttonless classification mode call lmsGetMotionAccuracy after lmsIsMotionActive returns false.

• The quality of the accuracy score depends on the quality of the classifier. To get good accuracy scores
build a robust classifier that can correctly classify motions 95% of the time.
• Accuracy scores give better feedback if a move is easy to perform (e.g., above 95% recognition rate).
In this case, the accuracy score is predictable and consistent with how players feel about their motions.
If a move is difficult to perform then accuracy scores tend to be “jumpy” and can be more confusing
than informative.
• Often the accuracy score alone is sufficient for players to perform a trial-and-error “gradient ascent” to
improve their performance and get higher scores.
• Motion accuracy works best in conjunction with standard classification. However if a move is very
short and a button is used to mark the starts and ends of motions, human variability in the timing of
button presses can make the accuracy scores appear unstable.
• LiveMove 2 pays attention to many features of a motion. The accuracy score is a condensed numerical
summary of the player’s attempt. Without higher-level feedback it can be unclear why the player got a
particular score.
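As a sketch of the buttonless case, the fragment below polls lmsIsMotionActive each frame and reads the accuracy score only after the motion has ended. The stub types, the stub function bodies, and the move label MOVE_SLASH are hypothetical stand-ins for liveMove2.h (the real signatures are in the tables above), so treat this as an illustration of the calling pattern, not the SDK itself:

```c
#include <stdbool.h>

/* --- Hypothetical stubs standing in for liveMove2.h (assumptions) --- */
typedef struct lmsClassifier lmsClassifier;
typedef int lmsClassLabel;
enum { lmsUndeterminedClass = -1, MOVE_SLASH = 0 }; /* MOVE_SLASH is a made-up game move */

static int g_frames = 0; /* fake a motion that stays active for a few frames */
static bool lmsIsMotionActive( lmsClassifier const* c ) { (void)c; return ++g_frames < 5; }
static lmsClassLabel lmsGetClassification( lmsClassifier const* c ) { (void)c; return MOVE_SLASH; }
static float lmsGetMotionAccuracy( lmsClassifier const* c, lmsClassLabel id, float stability )
{ (void)c; (void)id; return 0.8f / stability; }
/* -------------------------------------------------------------------- */

/* Poll per frame; in buttonless mode the accuracy is valid only after
   lmsIsMotionActive returns false. */
float accuracyWhenMotionEnds( lmsClassifier* classifier, float stability )
{
    while ( lmsIsMotionActive( classifier ) )
    {
        /* feed new samples with lmsUpdateClassifier here */
    }
    lmsClassLabel classID = lmsGetClassification( classifier );
    if ( classID == lmsUndeterminedClass )
        return 0.0f; /* accuracy of an unclassified motion is 0 */
    return lmsGetMotionAccuracy( classifier, classID, stability );
}
```

In a real game the while loop would be your per-frame update; the key point is the ordering: no accuracy query while the motion is still active.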

4.16 Masking moves

liveMove2.h call Description


lmsMaskMove( lmsClassifier* c, lmsClassLabel id ) Mask out the move with label ‘id’.
lmsMoveIsMasked( lmsClassifier* c, Returns true if move ‘id’ is masked.
lmsClassLabel id )
lmsClearMasks( lmsClassifier* c ) Unmask all previously masked moves.

• You may want to prevent the recognition of a move. For example, in your game the player cannot
perform the move ‘wave flag’ if they are not holding a flag. But you do not want to create different
classifiers for holding and not holding a flag.
• Use lmsMaskMove to prevent a classifier from recognizing a particular move.
• If the player performs a masked move lmsGetClassification will normally return lmsUndeterminedClass. But a motion that would be correctly recognized when its move is unmasked may instead get incorrectly recognized as one of the unmasked moves by a classifier in which that move is masked.
• Masking a move reduces the time cost of subsequent lmsUpdateClassifier calls.
• All masks are cleared by calling lmsClearMasks.
• Do not mask moves or clear masks between calls to lmsBeginClassify and lmsEndClassify.
• Check whether a move is masked by calling lmsMoveIsMasked.

Here is a simple example of using move masks:

lmsClassifier* classifier = ...

if ( gameContext == g0 )
{

// in gameContext g0 mask out moves 0 and 6


lmsMaskMove( classifier, 0 );
lmsMaskMove( classifier, 6 );
...
}

lmsBeginClassify( classifier );

bool endOfMotion = false;


do
{
// Collect new status from controller 0
lmsMotionDataSample sample;
if ( !lmsPadPop( WPAD_CHAN0, &sample ) )
{
continue;
}
lmsUpdateClassifier( classifier, &sample, NULL );

endOfMotion = ...

} while( !endOfMotion );

lmsEndClassify( classifier );
lmsClassLabel finalClassification = lmsGetClassification( classifier );

// clear the masks when context changes


if ( gameContext != g0 )
{
lmsClearMasks( classifier );
}

4.17 Classifier tuning


• Tuning modifies a classifier template during game time by using the player’s own motions as examples
of moves.

• Once the classifier has seen examples of an individual player’s motion style then similar examples of
that player’s motions can be recognized more easily.

• Tuning lets you ship classifier templates with lower capacity (and therefore lower CPU costs) that will:

– not work as well compared to higher capacity classifier templates on a large population of players,
– but will work equally well or better for the player that has tuned the classifier template.

4.17.1 Adding tuning examples


liveMove2.h call Description
lmsTuneClassifier(
lmsClassifierTemplate* ct, Tune a classifier template with an example motion.
lmsMotion const* m, The motion has class label ‘label’ and
lmsClassLabel label, ‘numExamplesForClass’ sets how many
unsigned numExamplesForClass ) tuning motions are allowed for that class label.

• To tune a classifier template ‘*ct’ for the move whose class label is ‘label’

– record a player’s motion ‘*m’ as a tuning example for the move
– pass ‘*m’ (along with ‘*ct’ and ‘label’) to the lmsTuneClassifier function

• lmsTuneClassifier returns false if the classifier deems ‘*m’ not an acceptable example of the
move and rejects it.

• If lmsTuneClassifier returns true then ‘*m’ is accepted. The classifier template ‘*ct’ has been
tuned with it.

For example:

lmsClassifierTemplate* ct = ...;
lmsMotionDevice const* md = lmsGetMotionDeviceFromClassifierTemplate( ct );
lmsMotion* m = lmsNewMotion( md, lmsRecordingModeSTANDARD );
lmsClassLabel label = ...;

// Record a player’s motion of class ’label’ into ’m’


{
...
}

// Tune ct with the motion


bool accepted = lmsTuneClassifier( ct, m, label, 1 );
lmsDeleteMotion( m );

• The CPU cost of each call to lmsTuneClassifier can be expensive, often between 20 and 40 times
more expensive than a call to lmsUpdateClassifier. These calls need to be carefully placed so
they do not harm the pacing of your game. You may want to store tuning motions and tune the classifier
during a pause in the game (such as a loading screen).

• Tuned classifier templates maintain internal histories of the past tuning examples that are still valid. Up
to numExamplesForClass examples are kept for a class.

• Examples are removed from the history on a first-in-first-out basis.

• More player examples may give better accuracy. In general if there are several ways to perform a mo-
tion (fast, slow, different directions) then let the player tune each of these ways for good performance.

• To remove all prior tuning for a class, call: lmsTuneClassifier( classifierTemplate, NULL, label, 0 );

• Increasing numExamplesForClass will increase the CPU costs of the tuned classifier template.
To maintain the CPU performance restrict the number of tuning examples allowed for each move with
the parameter numExamplesForClass passed to lmsTuneClassifier().
In our experience, setting numExamplesForClass to 1 or 2 is sufficient to create a tuned classifier
template that has an exceptional recognition rate for the player who tuned it.
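The advice above to store tuning motions and tune during a pause can be sketched as a small pending-work queue. The stub lmsTuneClassifier and lmsDeleteMotion below are hypothetical stand-ins for the real liveMove2.h calls, and the queue itself (queueTuningExample, flushTuningQueue, MAX_PENDING) is illustrative game code, not part of the SDK:

```c
#include <stdbool.h>
#include <stddef.h>

/* --- Hypothetical stubs standing in for liveMove2.h (assumptions) --- */
typedef struct lmsClassifierTemplate lmsClassifierTemplate;
typedef struct lmsMotion lmsMotion;
typedef int lmsClassLabel;
static bool lmsTuneClassifier( lmsClassifierTemplate* ct, lmsMotion const* m,
                               lmsClassLabel label, unsigned numExamplesForClass )
{ (void)ct; (void)m; (void)label; (void)numExamplesForClass; return true; }
static void lmsDeleteMotion( lmsMotion* m ) { (void)m; }
/* -------------------------------------------------------------------- */

#define MAX_PENDING 8

typedef struct { lmsMotion* motion; lmsClassLabel label; } PendingTune;

static PendingTune g_pending[MAX_PENDING];
static size_t g_numPending = 0;

/* Called during gameplay: store the motion instead of tuning immediately,
   since lmsTuneClassifier can cost far more than an lmsUpdateClassifier call. */
bool queueTuningExample( lmsMotion* m, lmsClassLabel label )
{
    if ( g_numPending >= MAX_PENDING ) return false;
    g_pending[g_numPending].motion = m;
    g_pending[g_numPending].label = label;
    ++g_numPending;
    return true;
}

/* Called during a loading screen or pause: run the expensive tuning calls. */
size_t flushTuningQueue( lmsClassifierTemplate* ct )
{
    size_t accepted = 0;
    for ( size_t i = 0; i < g_numPending; ++i )
    {
        if ( lmsTuneClassifier( ct, g_pending[i].motion, g_pending[i].label, 2 ) )
            ++accepted;
        lmsDeleteMotion( g_pending[i].motion ); /* motion no longer needed */
    }
    g_numPending = 0;
    return accepted;
}
```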

4.17.2 Rejected tunings


• Tuning prevents cheating: players must provide examples that are similar to the moves you defined at
development time.
For example, a player cannot re-tune “number 3” using “number 1” motions.
Your definition of a move cannot be overwritten by a player at game time.

• But tuning is more accepting (i.e. lmsTuneClassifier rejects fewer motions) if:

– more diverse examples of moves are given to lmMaker when creating the classifier template,
– the Tunability setting in lmMaker is increased, or
– the slack on the move is increased (also see Section 3.7).

4.17.3 Saving a tuned classifier template


Use lmsSerializeClassifierTemplate() and lmsUnserializeClassifierTemplate() to save and load tuned classifier templates.
LiveMove 2 classifier templates are small in memory size. You can save them to persistent memory, such as
memory cards, so players do not need to re-tune at the start of every game session.
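The save/load round trip can be sketched as follows. The struct contents and stub function bodies are hypothetical stand-ins for the opaque liveMove2.h types and calls (only the function names and the buffer/size calling pattern come from this manual), so this illustrates the ownership pattern rather than the real implementation:

```c
#include <stdlib.h>
#include <string.h>

/* --- Hypothetical stubs standing in for liveMove2.h (assumptions) --- */
typedef struct lmsClassifierTemplate { int tuningData; } lmsClassifierTemplate;
typedef struct lmsSerial { char* data; } lmsSerial;

static lmsSerial* lmsSerializeClassifierTemplate( lmsClassifierTemplate* ct,
                                                  char** buffer, int* size )
{
    lmsSerial* s = (lmsSerial*)malloc( sizeof *s );
    s->data = (char*)malloc( sizeof *ct );
    memcpy( s->data, ct, sizeof *ct );
    *buffer = s->data;
    *size = (int)sizeof *ct;
    return s;
}
static lmsClassifierTemplate* lmsUnserializeClassifierTemplate( char const* buffer, int size )
{
    lmsClassifierTemplate* ct = (lmsClassifierTemplate*)malloc( sizeof *ct );
    memcpy( ct, buffer, (size_t)size );
    return ct;
}
static void lmsDeleteSerial( lmsSerial* s ) { free( s->data ); free( s ); }
/* -------------------------------------------------------------------- */

/* Serialize a tuned template, copy the bytes to "persistent storage"
   (a heap buffer here, a memory card in a real game), then restore. */
lmsClassifierTemplate* saveAndRestore( lmsClassifierTemplate* tuned )
{
    char* buffer;
    int size;
    lmsSerial* s = lmsSerializeClassifierTemplate( tuned, &buffer, &size );

    char* stored = (char*)malloc( (size_t)size );
    memcpy( stored, buffer, (size_t)size );
    lmsDeleteSerial( s ); /* the buffer is owned by the serial object */

    /* Later, at the start of the next session: */
    lmsClassifierTemplate* restored = lmsUnserializeClassifierTemplate( stored, size );
    free( stored );
    return restored;
}
```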

4.18 Associating metadata with LiveMove 2 objects


You can associate int and string properties with objects of type lmsMotion and
lmsClassifierTemplate. Properties get serialized and unserialized. This is useful to tag
LiveMove 2 data with your own information.

liveMove2.h call Description


lmsSetIntProperty( lmsObject* o, Associate int property *name
char* name, int v ) of value v to object *o.
lmsGetIntProperty( lmsObject* o, char* name ) Get int property *name from object *o.
lmsSetStringProperty( lmsObject* o, Associate string property *name
char* name, char* s ) of value *s to object *o.
lmsGetStringProperty( lmsObject* o, char* name ) Get string property *name from object *o.

For example:

lmsClassifierTemplate* tunedClassifierTemplate = ...


// Mark this tuned classifier as belonging to player 2
lmsSetIntProperty( tunedClassifierTemplate, "playerNum", 2 );
...
// Save tuned classifier template
char* buffer;
int size;
lmsSerial* s = lmsSerializeClassifierTemplate( tunedClassifierTemplate, &buffer, &size );
// Write buffer to storage media
...
lmsDeleteSerial( s );
...
// Later load tuned classifier from media
...
lmsClassifierTemplate* newTunedClassifier = lmsUnserializeClassifierTemplate( buffer, size );
if ( lmsGetIntProperty( newTunedClassifier, "playerNum" ) == 2 )
{
// player 2’s tuning
...
}
Chapter 5

Guidelines

5.1 Understand your data


LiveMove 2’s algorithms operate on motion sensor data. Think of this as the force applied to the
Wii controllers over time.
As a result:

• The initial orientation of the Wii Remote is important.


Pick up the Remote from a table with the DPad facing up. You will get an accelerometer reading indicating force along the Y axis.
Pick up the Remote from a table with the Remote resting on its side. You will get +X or -X force.
Moves that look the same to human eyes generate different force data.

• LiveMove 2 motion recognition assumes all the data is important.


For example, LiveMove 2 can notice the difference between a square that starts from the lower-left corner and a square that starts from the top-right corner.
For example, LiveMove 2 can notice the difference between a square drawn pointing at the floor and a square drawn pointing at the ceiling.
LiveMove 2 will distinguish these kinds of “square” if you group them into different move sets.
LiveMove 2 will consider all these kinds of “square” as variants of the single move “square” if you
group them into the same move set.
So LiveMove 2 can recognize moves based on both specific and general features.
In all cases, LiveMove 2 assumes that all the example data you provided for a given move is important.

• LiveMove 2 motion recognition is sensitive to the forces applied.


Large changes in speed and direction are caused by large accelerations.
So LiveMove 2 will notice:

– How fast (or slow) a motion begins.


– How fast (or slow) a motion ends.
– Dramatic (or the absence of) changes during a motion.

• LiveMove 2 motion recognition is sensitive to the duration of the motion.


Note that when players are excited they reduce the duration of a move. If this reduction fits your gold
standard then collect examples that vary in duration.


• LiveMove 2 motion recognition is sensitive to the overall force applied.


LiveMove 2 can notice the difference between a strong tennis smash and a very powerful tennis smash.

• The change of orientation of the controller during a motion is important. But sometimes a change
of orientation can get lost in the middle of a forceful motion. The force of rotation is small compared
to the force of translation.
When this happens, two visually distinct moves may generate nearly identical data.
In this case, record the start and/or the end of the motion with the Wii Remote at or near rest. This helps
LiveMove 2 notice the change of orientation. In a few cases you may need to redesign your moves.

• LiveMove 2 motion recognition is bound by the sensitivity of the motion sensor.


If the forces are so large that the controller’s motion sensors max out, LiveMove 2 will not get accurate
force data.
When this happens, two visually distinct moves may generate nearly identical data.
For example, if using only accelerometer data, LiveMove 2 will not distinguish a small forceful arc from a large forceful arc that both max out the accelerometers.

• LiveMove 2 motion recognition ignores absolute orientation.


Your orientation with respect to the television, the game console, or any other device is not important.
As long as you are performing the same move according to the gold standard, you will be recognized
as doing the same move, regardless of which way you are facing or how far away you are from the
television.

• LiveMove 2 motion recognition ignores whatever else the player might be doing.
Only the motion of the controller(s) matters. Secondary movements of your knees, legs, or head do not matter if they don’t impact the controllers’ movements.

5.2 Design your Gold Standard


The main goal when using LiveMove 2 motion recognition is to

• recognize the set of moves you want,

• identify moves as early as possible,

• as precisely as you want, but no less and no more.

Your motion examples define the moves in your game.

• Gold Standard: The gold standard is your definition of the move.


Consider a motion. If you want this motion to be classified as move “A” then this motion belongs to
your gold standard definition of move “A”. If you do not want this motion to be classified as move “A”
then this motion does not belong to your gold standard.
Have a clear idea of the gold standard for all your moves before collecting examples.

• You must ensure every example is a good representative of your standard.


LiveMove 2 does not distinguish between good and bad examples. It assumes that all your examples
used for building the classifier are important.
Ensure you apply your gold standard consistently. Don’t use any examples that fall outside your com-
fort zone because LiveMove 2 will generalize – potentially pushing you further from your standard.

Modifying the standard during collection is OK. For example, half-way through collecting you decide
there is not one but two “correct” ways (the gold standard motions) to perform a tennis serve. All your
subsequent examples must be similar to one of the two standard motions.
Use the slack setting to make a move easier to perform. Do not intentionally collect bad examples.

• The scope of your standard: The gold standard can be narrow or wide.
The move “any square” is a wide gold standard. You will accept varied square motions as valid exam-
ples of your gold standard.
The move “small, fast counter-clockwise square written on the floor starting in the top left-hand corner”
is a narrow gold standard. You will accept only this precise type of motion as valid examples of your
gold standard.

• Define the start and end of your standard.


Is the initial and/or final orientation important? Or is it unimportant?
Should the move begin and/or end at rest? Or is it unimportant?
Answering these questions helps define your gold standard.
LiveMove 2 must see examples of all the allowed starts and ends.
For example, the move is a baseball pitch. Does the move end when you release the ball, or when your
body stops moving?

• Define the grip for your standard.


You may restrict the grip to a single standard. Or you may allow different grips.
Answering this question helps define your gold standard.
LiveMove 2 must see examples of all the allowed different grips.
For example, you are drawing numbers in the air. You decide that the standard grip is either DPad facing up or DPad facing left.

• Exaggerate your standard. LiveMove 2 notices forces. Forces cause changes in speed and direction.
So exaggerate your swoops, corners and wiggles. It gives LiveMove 2 more interesting data to work
with.
For example, you have “square” and “circle” moves. Make your squares have sharp edges.

• Design your moves to be different if they should never get confused.


You do not want LiveMove 2 to confuse “CHARGE!!” with “stay down” in a combat game.
In this case, define the moves to be different.
You can pick orthogonal axes for the motions. For example, motion 1 in the X-Y plane, motion 2 along
the Z axis.
You can make the durations differ. You can make the overall force differ. You can make the complexity
differ. These differences will help LiveMove 2 to not confuse the moves.

Communicate your gold standard to people who provide examples at development time.
Communicate your gold standard to players who provide examples at game time.

5.3 Collect correct but varied examples


• Vary your style. When you collect examples vary the style of your motion on condition it conforms to
your gold standard. This helps LiveMove 2 recognize different players at game time.

– Vary your speed.


Remember that players will perform moves quickly when excited.
– Collect left-handed examples.
– Start and stop the motion at slightly different times if you’re building a buttonless classifier.
For example, in lmRecorder sometimes press the B button before your hand moves, sometimes
a little after.
– Perform the motion with a stiff wrist and a loose wrist.
– Perform the motion with and without moving your elbow.
– Perform the motion when you are fresh and when you are tired.
– Perform the motion standing and sitting.
– Combinations of all of the above.

The more varied your examples that conform to your gold standard, the better players’ initial experience
with an untuned classifier will be.
Tuning at game time ensures an individual player’s style is recognized.

• Collect examples in context. Compare performing a move in lmRecorder to performing the same
move in your game when you are about to defeat the final enemy.
People perform the same move in different ways depending on the context.
So it can help to collect examples directly from your game.
The LiveMove 2 run-time library provides functions that let you record and collect motions from your
game.

• Randomize collection order. People may perform moves in different ways depending on the order
they perform them, or how tired they get.
So getting people to provide examples in random order can help collect better data. This is why by
default lmMaker enables random collection mode in the motion collection options dialog.
Warning: pay attention to what move lmRecorder asks the performer to do. It can be easy
to mistakenly perform the wrong move. A single incorrect example can degrade classification
performance.

5.4 Check your data


Once your first classifier is built do not expect to see perfect 100% classification in the project window. Most
classes should be 95%+ with some near 90%. If so then you are ready to test the classifier in your game.
But if your classification rates are worse then you should check your data for problems.
If you find a problem there are four control points:

• The choice of motion examples.

• The slack settings for each move.

• The capacity of the classifier.

• Negative examples.

The most powerful and important is your choice of motion examples.



• Delete bad examples.


First check whether your project has any outliers (highlighted in red when Project->Select
Outliers is selected). Decide whether the outlier is a true outlier (it does not conform to your
gold standard) or whether it represents an acceptable move that lacks similar examples. Viewing the
raw motion data (select Project->View Motion) and comparing to non-outliers can help you
make this decision.
The summary information also helps identify what is different or wrong with individual motions.
Check the summary information in the LiveMove 2 project window (see Chapter 3 for details). Look
for outliers, such as

– very long maximum times


– minimum times of 0
– strange starting orientations or impulses

Any of these might indicate a bad recording. If so the example should be deleted from disk.

• Lower the slack of other moves.


Consider that your move overhandThrow has a poor classification rate because it is incorrectly
classified as other move(s). Try reducing the slack of these other moves.

• Increase the slack of your move.


Consider that your move overhandThrow has a poor classification rate because it is incorrectly
classified as lmsUndeterminedClass. Try increasing its slack.

• Increase the capacity of your classifier.


Consider you are making moves “slow X” and “fast X”. But you have some fairly fast examples in
your “slow X” motion set. LiveMove 2 may confuse them.
In this case, try increasing the capacity of your classifier.

• Use negative examples. You may have a move that is too easy to perform, but lowering its slack makes
it too hard to perform.
In this case try adding a negative example move.
For example, your move overhandThrow incorrectly accepts motions that are “sideways throw”.
Add a new move sidewaysThrow and collect examples. LiveMove 2 will distinguish between them.
In the game, when LiveMove 2 classifies a motion as a sidewaysThrow you can ignore it, or tell the
player they need to throw overhand.

In rare cases, LiveMove 2 may still confuse moves.


In this case, consider whether your moves are distinguishable at the level of the motion sensor data (see
Section 5.1). You may have to redesign your moves to avoid the clash.

5.5 Give immediate feedback to the player


A simple button press can immediately invoke an in-game event. For example, pressing a button to throw a
dagger.
But a motion, unlike a button press, has a non-trivial duration. A move takes place over time. So it is not
possible to know whether a player is, for example, throwing a dagger or raising a shield when they first start
moving.
But your motion controls do not need to feel “laggy”. Lag can be entirely eliminated if appropriate feedback
is given to the player during the performance of the whole move, from start to finish.

[Figure 5.1 is a diagram: a motion proceeds from the start of motion (move-independent feedback), through intermediate classification (transitional feedback) and early classification (move-dependent feedback), to final classification; the motion may be aborted at any stage.]

Figure 5.1: The four stages of motion recognition

The performance of a motion has four stages, as depicted in Figure 5.1. You can give different types of
feedback to the player, from generic to more specific, as the motion progresses through each stage.

• The start of an unknown motion.


At this stage, LiveMove 2 cannot know which move the player is about to perform. It could be any one
of a set of valid moves or it could be nothing (i.e. not one of the defined moves).
In this case, immediately register to the player that the game has detected motion by generating move-
independent feedback. This could be: a sound, a rumble, a glow appearing on the character, or a
generic “get ready” animation, etc.

• Intermediate classification.
At this stage, lmsGetClassification() != lmsUndeterminedClass but
lmsBestGuessIsSafe() == false. So in all likelihood the player is performing a known
move but LiveMove 2 is not yet sure what it is.
In this case, use lmsGetIntermediateConfusionSet to get LiveMove 2’s guess of which
moves the player might be performing. Transition from move-independent feedback to something
more specific, such as intermediate animations that correspond to the confusion set. For example, if
the confusion set is (shieldRaise,daggerThrow) then play an animation segment that is com-
mon to both raising a shield and throwing a dagger.

• Early classification.
At this stage, lmsBestGuessIsSafe() == true. So LiveMove 2 thinks it likely that the player
is performing lmsGetClassification() given the tree accuracy.
In this case, transition to move-dependent feedback, such as finishing animations for the move. For
example, if the best guess is daggerThrow, and it is safe, then play the final segment of a dagger
throw.

• Final classification: the end of a known motion.


At this stage, the motion is classified.
If the final classification does not match the early classification you can either abort or accept the false
positive rate.

• Abort at any stage


In buttonless classification mode LiveMove 2 can decide to abort a motion at any stage if the player is
not performing a valid move.
In standard classification mode it is your job to write abort conditions. The precise conditions that
work best for your game depend on the move design. Normally a motion should be aborted if
lmsGetClassification() returns a sequence of lmsUndeterminedClass labels. Alter-
natively, if you want a more stringent condition, abort if the sequence of confusion sets returned by
lmsGetIntermediateConfusionSet is not a sequence of subsets.

The more separable your move design (see Section 5.6), the earlier the player can get move-dependent feedback.
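The standard-mode abort condition described above ("abort if lmsGetClassification() returns a sequence of lmsUndeterminedClass labels") can be sketched as a small run-length tracker. The AbortTracker type, the shouldAbort helper, and the typedefs are illustrative assumptions, not part of liveMove2.h; only the lmsUndeterminedClass name comes from the manual:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the liveMove2.h label type and constant. */
typedef int lmsClassLabel;
enum { lmsUndeterminedClass = -1 };

typedef struct
{
    int consecutiveUndetermined; /* current run of undetermined results */
    int limit;                   /* abort after this many in a row */
} AbortTracker;

/* Feed the latest lmsGetClassification result once per frame; returns true
   when the motion should be aborted. Any determined result resets the run. */
bool shouldAbort( AbortTracker* t, lmsClassLabel latest )
{
    if ( latest == lmsUndeterminedClass )
        ++t->consecutiveUndetermined;
    else
        t->consecutiveUndetermined = 0;
    return t->consecutiveUndetermined >= t->limit;
}
```

Tune the limit to your move design: short, forceful moves tolerate a smaller limit than long, gentle ones.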

5.6 Design moves that diverge early


Consider drawing a “2” and then a “3” in the air with your controller. During the first half they generate very
similar motion sensor data to each other. So early classification is harder.
Compare stabbing the controller forward with pulling it sharply backwards. These motions immediately
generate very different motion sensor data (+Z force versus -Z force). So early classification is easier.
Your move design can help or hinder early classification. The general rule of thumb is:

To classify motions as early as possible design your moves to initially generate very different
motion sensor data.

Sometimes moves that look visually different generate similar motion sensor data. So lmMaker provides
two features to check whether your move design supports early classification:

• Compare the raw motion data generated by different moves by selecting Project->View
Motions.
For example, graphs of a “2” and a “3” will normally generate similar X,Y,Z plots during the first half
of the move, which indicates that LiveMove 2 will have a harder job unconfusing them.
• Select Move Tree->Build Move Tree to check at what stage of motion completion
LiveMove 2 can unconfuse your moves.
For example, the moves “2” and “3” will normally group into a confusion set that persists late into the
move.
You may need to redesign your moves if you discover that groups are confused for too long.

5.7 Build the classifier before the move tree


During early prototyping you will test classifiers built from small amounts of data. At this stage the move
tree will provide a reasonable indication of the early classification properties of your move design. But be
aware that

• the move tree is dependent on your classifier and your examples: if they change, then the move tree
must be rebuilt; and
• the move tree needs at least 50 examples for each variation of your gold standard for each move in your
move set in order to generate useful predictions.

So before fully committing to an animation design that corresponds to the move tree you should

• spend as much time as needed to collect good examples that performs well for all users; and
• extensively test your classifier, including iteratively adjusting slack values and perhaps capacity value,
so that you have a satisfactory classifier for your game.

5.8 Understand buttonless recognition


• A buttonless classifier continually segments the received motion sensor data into discrete motions with
a definite beginning and end.
• A motion becomes active when

M = | sqrt( accX² + accY² + accZ² ) − 1 |

exceeds the specified force threshold. M is the magnitude of acceleration minus gravity and serves as a simple estimation of the controller device’s acceleration without knowing its orientation. LiveMove 2 uses this thresholding scheme because the orientation of the device is generally not known in classification mode.¹
• A motion becomes inactive when the classifier either
– aborts a false move, or
– terminates a valid motion.
• In many cases LiveMove 2 supports “chaining” of buttonless moves in an uninterrupted sequence.
But sometimes LiveMove 2 may not abort a false move sufficiently quickly. So the start of a valid move
can get lost in the end of a false move.
For example, if the player initiates a false move then immediately transitions to a valid move, the valid
move can get split across segments. In this case, LiveMove 2 will not recognize the move.
In general, buttonless recognition deteriorates as the pauses between moves get shorter. Therefore:

Buttonless recognition works best when your moves are separated by natural pauses.

• You get more false moves with a lower force threshold.


For example, simply changing your grip on the controller can exceed a very low force threshold.
LiveMove 2 will signal the start of an active motion that aborts with lmsUndeterminedClass.
In general,

Buttonless recognition works best at higher force thresholds.

So design your moves to have reasonably high-force starts.


• You may get more false positives with very high force thresholds.
Very high force thresholds discard lots of data. So some of your examples become small fragments of
a motion that simply describe an overall force on the controller. These move fragments can potentially
match multiple parts of any motion of non-trivial duration.
For example, you have punch1 and longWaveGoodbye in the same classifier. You set the force
threshold too high. When the player performs a longWaveGoodbye LiveMove 2 recognizes a se-
quence of punch1s.
1 Using M as a proxy for acceleration causes thresholding to be more sensitive along axes that already have a high component of the
gravity reading and less sensitive along axes that are orthogonal to gravity. This is more noticeable with small threshold values and less
so when thresholds are higher. Generally we expect applications to build buttonless classifiers with a reasonably high threshold (> 0.5g,
such that the player’s unintentional jitters do not trigger classification). Buttonless classifiers built for moves with strong starting force
also perform better in general.

• A high-force threshold applied to moves with low-force starts delays feedback to the player.
For example, you set the force threshold so high that 70% of a “whipping” move is discarded. So
LiveMove 2 signals the start of a “whipping” move just before the final high-force crack. In this case,
feedback given when lmsIsMotionActive() == true may feel too late.

• Buttonless recognition combined with early classification requires more stringent move design.
In this case, you want LiveMove 2 to quickly recognize the start of motions and quickly unconfuse
them.
In general,

Buttonless recognition works best with early classification when every move starts forcefully in a
different direction.

The different directions do not need to be orthogonal but can be characteristic curves, like clockwise
and counterclockwise motions.

• The suggested force threshold is only a suggestion.


LiveMove 2 can do a better job with more data. So often you can improve classification accuracy by
tuning the force threshold lower than the suggested value.

• Avoid grouping moves of low and high force in the same buttonless classifier.
In this case, LiveMove 2 may tend to delay aborting false moves.

• Avoid grouping moves with significantly shared preambles in the same buttonless classifier.
For example, combining the moves “2” and “3” in the same buttonless classifier increases the risk that
LiveMove 2 will recognize examples of “3”s as “2”s.
In practice, however, examples of “2”s are not strict subsets of examples of “3”s. So the significance
of this problem depends on your data.

• A buttonless classifier normally requires more examples to get the same classification accuracy as a
standard classifier.

• If you anticipate that moves are allowed to be executed at varying speeds in game, it is important
to collect training examples that are performed at varying speeds. This will help the classifier better
determine the end of a motion.

5.9 Suggested workflow


Building a classifier is an iterative process that requires you to:

• Define your gold standards for your moves.

• Collect motion examples.

• Build a classifier.

• Test the classifier.

• Play-test the classifier with tuning.

The number of iterations depends on:

• The number of moves.



• The scope of your gold standards (wide or narrow).

• How easily the moves can be communicated to the game player.

Consider the “numbers” project of Chapter 2. Everyone knows what numbers look like. People will perform
numerals in a very similar style. In this case, there is a small number of narrowly scoped moves with easily
understood descriptions. So your interaction with LiveMove 2 may be a few 10-minute iterations with data
from 2 or 3 people plus some play testing.
Consider a project that has 20 or 30 different moves that are less well understood, such as “lasso”, “whip”,
“any square”, and “chicken dance”. You define the gold standards to be very wide.
In this case, you must collect more data from more people. Expect more iterations of 20 - 40 minute sessions
with each person, and more testing.
For production-quality classifiers try this workflow:

1. Collect correct but varied examples from the current data provider (i.e. performer);

2. Click Suggest capacity. Build the classifier;

3. Repeat the previous steps to collect from at least 3-4 people for small move sets (containing <= 5
moves) or from 6-8 people for larger move sets;

4. If the classification accuracy of certain moves in the current classifier falls below 90%, collect data from
more people or try to figure out if there are problems with your data (see Section 5.4);

5. For the next data provider, ask him or her to test the current classifier in lmMaker first.

• If he or she is recognized well by the current classifier, you may decide to skip to the next data
provider or simply collect just a few examples per move from him or her.
• If he or she is not recognized well, repeat steps 1 and 2.

6. You may stop collecting data if new data providers are consistently recognized well by the current
classifier (before their data is collected).
(Remember that tuning at game time can adapt the classifier to an individual’s movement style.)

Once all the examples have been collected and the classifier is built:

• If you opt for run-time tuning then set tunability to the suggested capacity.
This means that a game player’s experience with the classifier during tuning will be identical to the
experience obtained during development time.
Otherwise set tunability to 0 to make an untunable classifier that consumes less memory.

• Reduce the suggested capacity if it exceeds the available CPU budget allocated to LiveMove 2 . The
classification accuracy of the untuned classifier will decrease. But the classifier will perform exactly
as you expect when tuning.
Once tuned to an individual player it will consume less resources yet also recognize the individual
player very well (see Section 4.17).

5.10 Collecting examples for mixed buttonless classification modes


There are two mixed buttonless classification modes using a buttonless classifier:

• Game-LiveMove mode: The game tells LiveMove 2 when a motion starts but LiveMove 2 tells the game
when it ends. See Section 4.10.1.
• LiveMove-Game mode: LiveMove 2 tells the game when a motion starts but the game tells
LiveMove 2 when it ends. See Section 4.10.2.

To maintain data consistency (see Section 4.13) it is important that recording take place under identical or
similar conditions to those active when classifying. Therefore these mixed modes normally require that
motion examples be collected in-game, not via lmRecorder .
Use lmsBeginRecording() and lmsEndRecording() to record examples directly from your game
(see section 4.5).
For example, if the game tells the player to start moving after a specific event (e.g., a countdown timer, or
when an on-screen character hits a jump etc.) then it is important that examples are recorded when the person
providing examples is presented with the same situation.
For example, if the game stops a move after a specific event (e.g., when the ball hits the bat on-screen) then it
is important that examples are recorded when the person providing examples attempts to time their motions
to hit the ball.
The aim in both cases is to record motion examples that correspond to the precise motions that are performed
under game conditions.

5.11 Integrating LiveMove into your tool chain


The liveMove2.h interface lets you write your own utilities for capturing, viewing, summarizing and
manipulating motion data.
The LiveMove 2 command-line lets you transform a collection of raw motion data files into a classifier. So
LiveMove 2 can be a step in build scripts to remake classifiers based on data changes, or to perform regression
tests.
Chapter 6

Tracking

LiveMove 2 can track the position and orientation of a Wii Remote with an attached MotionPlus.

6.1 Orientation Tracking


Nintendo’s SDK provides various API calls for determining the orientation of the controller. Please see their
documentation for details.

6.2 Position Tracking


Nintendo’s SDK does not currently provide any API calls for determining the current location of the con-
troller.
For many uses of the MotionPlus in games, orientation information will be enough. For games that require
positional information as well, you might assume that it is a simple matter of looking up the right differential
equations. Unfortunately, things are not quite so simple. The motion sensors in the Wii Remote and the
MotionPlus have small errors that can quickly accumulate and make your position calculations drift over
time.
You will therefore have to take various steps to reduce the drift, and this turns out to be non-trivial. This
is one of the reasons we propose you use LiveMove 2: we have already done the work required to track
position.

6.3 LiveMove 2 Tracking


LiveMove 2 can provide reliable (relative) position estimates (in meters) for 1-2 seconds. The reason this is
possible is that LiveMove 2 tracking software has an Artificial Intelligence component that has knowledge
about human motion. This enables it to cleverly compensate for the errors that would otherwise accumulate
in the readings from the motion sensors.
For the complete LiveMove 2 tracking API calls and comments, please read the “TRACKING API” section
in the liveMove2.h header file.

6.4 Adding Tracking to Your Game


Adding tracking to your game is straightforward.


First, you need to initialize the LiveMove 2 library, create a motion device with the appropriate motion device
type and then create the tracker object.

lmsOpen();

lmsMotionDevice* md =
lmsNewMotionDevice( lmsMotionDeviceTypeMOTIONPLUS );
lmsTracker* theTracker = lmsNewTracker( md );

Next you need to set up lmsPad . It is AiLive’s device-independent driver – a very thin layer on top of
Nintendo’s libraries that provides some important features like enhanced buffering. The following example
sets up lmsPad for a MotionPlus controller.

lmsPadOpen( malloc, free );


lmsPadSetGyroMode( WPAD_CHAN0, true );

// Start looking for overflows.


lmsPadResetOverflowCount( WPAD_CHAN0 );

In your main loop, you need to always update the tracker with the latest
lmsMotionDataSample sample obtained from the lmsPadPop(...) driver function.
It is important that you always update the tracking, even when you’re not explicitly going to use the tracker’s
output. This is necessary because the tracker has some internal variables that are cheap to update and allow
the tracker to start tracking in an instant.

lmsUpdateTracker( theTracker, &sample );
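Putting the pieces together, a per-frame update might drain the pad buffer and feed every sample to the tracker. This is a sketch: the exact signature of lmsPadPop is assumed here (consult liveMove2.h for the real one):

```c
/* Sketch of a per-frame update; lmsPadPop is assumed to fill a sample
   and return true while buffered data remains. */
lmsMotionDataSample sample;
while (lmsPadPop( WPAD_CHAN0, &sample ))   /* drain all buffered samples */
{
    /* Always update, even when the tracker output is unused this frame. */
    lmsUpdateTracker( theTracker, &sample );
}
```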

The tracker is always tracking the orientation of the controller so you can get orientation estimates at any
time.
You will, however, need to tell the tracker when to start tracking 3D positions and it will reset the starting
position estimate to (0, 0, 0):

lmsBeginPositionTracking( theTracker );

You might want to begin tracking position in response to a button press, an in-game event or some other
criteria. You might also want to take into account the tracker’s readiness to start position tracking. If you start
tracking before the tracker is ready, the output is likely to be less accurate.
In normal operation, the tracker will quickly become ready to track position after the controller is held still
for just a fraction of a second (e.g. 1/5 of a second). You can check if the tracker is ready to track position from this
point forward by calling:

bool lmsIsReadyToTrackPosition( theTracker );

The function returns true if the tracker is ready, otherwise false. See the liveMove2.h header file for
more details.
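For example, you might gate the start of position tracking on both a game event and the tracker’s readiness. This is a sketch; beginTrackingRequested is a hypothetical flag supplied by your game:

```c
/* Sketch: only begin position tracking once the tracker reports
   readiness, so the output is as accurate as possible. */
if (beginTrackingRequested && lmsIsReadyToTrackPosition( theTracker ))
{
    lmsBeginPositionTracking( theTracker );  /* position resets to (0,0,0) */
}
```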
Use the following calls to get the current estimate of position (during position tracking) and orientation (at
any time) from the tracker:

lmsVec3 p = { 0.0f, 0.0f, 0.0f };


lmsQuat q = { 0.0f, 0.0f, 0.0f, 0.0f };

// Read position and orientation from tracker.


lmsGetLocation( theTracker, &p );
lmsGetOrientationQuat( theTracker, &q );

If you prefer, you can also get the orientation represented as a matrix, check the liveMove2.h header file
for details.
During tracking, it is also important that you check for buffer overflows in lmsPad and take appropriate
action.

if (lmsPadGetOverflowCount( WPAD_CHAN0 ))
{
// You need a bigger buffer or a faster game. Tracking results
// will be poor.

lmsPadResetOverflowCount( WPAD_CHAN0 );
}

When you’ve finished position tracking, you also need to tell the tracker. Otherwise, the position estimate will
never get reset and the drift will swamp the output. Ending position tracking also avoids wasting unnecessary
CPU cycles.

lmsEndPositionTracking( theTracker );

6.5 Starting Position


Position tracking typically works well when starting from rest and performing short moves that take 1-2
seconds. For example, in a sword fighting game the player can have some canonical rest position from which
moves start from and to which she is expected to return after the move has finished.
Since tracking only provides relative position, the starting position must be determined by convention or by
using other means, such as the DPD. So long as the player starts from the assumed canonical rest position,
the player should feel her motions are being tracked precisely. Of course, if she starts from a position that is
different from the assumed one, then she might feel that the relative motion displayed on the screen does not
correspond at all.
You can set the starting position by placing a call to lmsSetLocation right after the
lmsBeginPositionTracking call (note that you can only call lmsSetLocation inside the
lmsBeginPositionTracking and lmsEndPositionTracking block):

...
// always update the tracker with new data samples
lmsUpdateTracker( theTracker, &sample );

if ( startPositionTrackingEventHappened )
{ // make the tracker return positions relative to "loc"
lmsVec3 loc = { 1.0f, -2.0f, -3.45f };
lmsBeginPositionTracking( theTracker );
lmsSetLocation( theTracker, loc );
}

if ( lmsIsTrackingPosition( theTracker ) )
{ // Read position from tracker. p will be offset by "loc"
lmsVec3 p = { 0.0f, 0.0f, 0.0f };
lmsGetLocation( theTracker, &p );
}

if ( endPositionTrackingEventHappened )
{
lmsEndPositionTracking( theTracker );
}

6.6 Dealing with Drift


Because the position estimates can drift over time, you can get a measure of the confidence that the tracker
has in its estimates:

float const confidence = lmsGetTrackerConfidence( theTracker );

Values near 1 indicate high confidence. As values approach 0 you should take appropriate action. Appro-
priate action can include stopping tracking and encouraging players to return to the canonical rest position,
suspending tracking and encouraging players to pause, or imposing some bounding constraint.
For example, imposing a bounding constraint can be done by defining a bounding box. The game object
being controlled by the controller can then never stray outside of the bounding box. This avoids having the
player see objects fly off into the distance if the drift ever becomes too large.
If you do use some bounding constraint, then you need to tell the tracker about it so that it can potentially
take advantage of the information. You can do this by telling the tracker to replace its internal estimate of
position with the one supplied by you with a call to:

lmsSetLocation( theTracker, loc );
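A bounding-box constraint of this kind might look like the following sketch. The box extents and the clamp helper are illustrative only, and lmsVec3 member access may differ from what is shown:

```c
/* Sketch: constrain the tracked position to a bounding box, then tell
   the tracker about the correction so game and tracker stay consistent.
   clampToBox, boxMin and boxMax are hypothetical. */
lmsVec3 p = { 0.0f, 0.0f, 0.0f };
lmsGetLocation( theTracker, &p );

lmsVec3 loc = clampToBox( p, boxMin, boxMax );  /* hypothetical helper */

lmsSetLocation( theTracker, loc );  /* replace the internal estimate */
```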

6.6.1 Gotchas
• It is possible to use KPADSetAccParam to change the way Nintendo’s KPAD library handles data
from the motion sensors in the Wii Remote. If you have set play_radius or sensitivity to
anything other than their default values of 0 and 1, respectively, then tracking behavior is undefined.
You can check the values with KPADGetAccParam and LiveMove 2 should warn you if they are set
to non-standard values.
• You need to reset the tracker object if data ever stops being sent from the controller (e.g. whenever the
controller is power-cycled).

lmsResetTracker( theTracker );
Appendix A

LiveMove 2 FAQ

1. Q. What is LiveMove 2?

• Use LiveMove 2 to make games that can recognize any motions performed with the Wii remote
(with or without the nunchuk or the MotionPlus ).
• You define moves (e.g. “circle”, “punch”, “lasso”) with example motions, not code.
• LiveMove 2 automatically recognizes the moves you define.
• LiveMove 2 can recognize any variations of a move as the same type.
• LiveMove 2 can recognize any motion that can be performed with a motion-sensitive controller.
For example, LiveMove 2 can recognize when a player hits a ball, draws a star in the air, or even
imitates a monkey.
• The recognition is robust and accurate across different users.
• With the MotionPlus accessory, you can also use LiveMove 2 to track the orientation of the
Wii remote as well as its relative 3D position. Position tracking is most accurate for fast mo-
tions that last 1-2 seconds, such as punches, sword swipes, etc.

LiveMove 2 makes it easy for developers to unlock the potential of motion-sensitive controllers to create
new and exciting game play.

2. Q. What is in LiveMove 2?
LiveMove 2 has 2 major components:

• A run-time library libLM that runs on the Nintendo Wii, links with your game code, and performs
fast motion classification in real-time.
• A development-time GUI application lmMaker that helps you collect and organize motion ex-
amples that are used to build classifiers for your game.

3. Q. How do I get help with LiveMove 2?


Please use your LiveMove 2 account information to access the latest LiveMove 2 product information
on the AiLive website: www.ailive.net.
Please address support questions to support@ailive.net.

4. Q. How do I report bugs in LiveMove 2?


Please report bugs to support@ailive.net. The more information you give the easier it will be
to fix. Please include

• Your full name.


• Your organization.
• The version number of LiveMove 2.
• The version number of relevant Nintendo Wii SDK, firmware, hardware and your host PC OS.
• How to reproduce the bug.
• If the bug was a crash, the exact text that was printed at termination.
5. Q. How do I get started?
Please read the User Manual, in particular the Introduction (Chapter 1) and the Tutorial (Chapter 2).
6. Q. How do I define moves for my game?
You define them by providing examples. You perform the move and record the motion data, and you
get others to do the same.
7. Q. Who can create moves?
Anyone can provide examples to LiveMove 2 .
One of the key benefits of LiveMove 2 is that anyone who can use a Wii Remote can provide motion
examples. So any person in a games studio can design and create moves.
The LiveMove 2 tools have been designed to be easy to use. The task of designing a move set, collecting
examples, and building a classifier for testing can be performed by anyone.
8. Q. What is the difference between a move and a motion?
A move is a type of motion. You define moves by giving example motions.
For example, a ‘square’ is a move that you might define for your game. Examples of a ‘square’ could
be motions such as ‘small square pointing at the floor’, or ‘large square pointing forwards and drawn
anti-clockwise’.
You have complete freedom to define how general or how narrow you want your move to be.
For example, you might decide that you want two square-like moves in your game: ‘small anti-
clockwise square’ and ‘large clockwise-square’. Then the motion ‘small anti-clockwise square pointing
down’ is an example of the first move, whereas the motion ‘large clockwise square pointing forward’
is an example of the second move.
The scope of the definition of a move is controlled by the motion examples you provide.
9. Q. At run-time what are the inputs and outputs of LiveMove 2?
The inputs are motion sensor data from any active Wii Remotes and/or attached Nunchuk and Motion-
Plus.
The outputs are move labels that describe what type of motion each player is performing. You define
the moves at development time.
LiveMove 2 ’s motion recognition can provide zero-lag feedback during a motion at any time. Such
feedback includes best-guess label, motion progress, possible moves set, and more. Please read the
Run-time API (Chapter 4) for more detail.
If orientation and/or position tracking is enabled, you will also get orientation and position data.
10. Q. Does LiveMove 2 require that players press buttons?
No.
LiveMove 2 supports standard and buttonless classification modes.
In standard classification mode, the application tells LiveMove 2 when a motion starts and ends. You
can choose a method where your application can identify motion boundary without button presses. See
answer 11.
In buttonless classification mode, LiveMove 2 tells the application when a motion starts and stops. No
button presses are needed at run-time.

11. Q. Can I build and use a standard classifier without buttons?


Yes.
A standard classifier simply implies that the application is required to inform LiveMove 2 when a mo-
tion starts and ends at run time. Whether to use a controller button or to use other application events
for that purpose is entirely up to you.
For example, you may want to call lmsBeginClassify when the player presses a button and call
lmsEndClassify when the player releases it.
Or, you may want to call lmsBeginClassify when the game tells the player to move and call
lmsEndClassify after a fixed period of time.
In either case, the method of identifying motion start and motion end at classification time should
be consistent with that at development time when recording example motions using calls to
lmsBeginRecording and lmsEndRecording. You will need to write your own motion recorder
to record motions based on your chosen method.
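The button-driven approach might be sketched like this. The button-edge flags are hypothetical, and the classify-call arguments are elided because their real signatures are defined in liveMove2.h:

```c
/* Sketch: drive a standard classifier from the B button. The
   edge-detection flags and the classify-call arguments are placeholders. */
if (bButtonJustPressed)        /* player starts the move */
    lmsBeginClassify( /* ... see liveMove2.h ... */ );

if (bButtonJustReleased)       /* player ends the move */
    lmsEndClassify( /* ... see liveMove2.h ... */ );
```

Whatever event you choose, use the same event to drive lmsBeginRecording and lmsEndRecording when collecting examples, so the recorded and classified motion boundaries match.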

12. Q. Can LiveMove 2 recognize Nunchuk and Nunchuk style moves?


Yes. LiveMove 2 can recognize motions performed on the Wii Remote or the Nunchuk, either using each
controller individually or both simultaneously. Additionally, LiveMove 2 can recognize two-handed
moves either performed by two Wii Remotes, two Wii Remotes with MotionPlus, or a Wii Remote with
Nunchuk (i.e. freeStyle) or a Wii Remote with MotionPlus and Nunchuk (i.e. MotionPlus freestyle).

13. Q. Can LiveMove 2 recognize how the controller is moving?


Yes. LiveMove 2 provides the ability to track both 3D orientation and position if you use Wii Remote
with the MotionPlus accessory.
If you are only using the motion recognition capabilities of LiveMove 2 , it can tell you: (i) what move
is most likely being performed (‘best-guess’), and (ii) how far the player is through the move (‘motion
progress’), but it does not provide an accurate 1:1 representation of how the move is performed in space
like motion tracking.
However, you can still create similar 1:1 visual feedback to the user while she is performing. This is
done by tracking her motion progress through the corresponding animation of the move, or by using
the ‘best guess’ to synchronize animations with her performance. Read Section 5.5 of the User Manual
for an example on how to achieve this.

14. Q. Is the classification available before the end of the motion?


Yes. You can get the current, ‘best guess’ classification at any time during the motion by calling
lmsGetClassification.
LiveMove 2 ’s move tree analysis gives you the ability to use the ‘best guess’ to closely synchronize
animations with the player’s performance (see Section 5.5 of the User Manual).

15. Q. What is a gold standard?


The gold standard is your idea or definition of the move.
A motion falls within your gold standard if you would be happy to see that motion accepted as an
example of the move in your game.
For example, you are making a wizard spell game and you want a spell to be a ‘star shape’. But what
do you mean by that?
Do you consider a star to have 4 spikes, or 5? Should it be done quickly or slowly? Should it be drawn
very big, or very small? Should it be performed straight ahead, or pointing at the ceiling? Should it be
performed with flicks of the wrist, or by the whole arm? Or are all these possibilities OK?
Defining a ‘gold standard’ requires answers to all these questions. You should have a clear idea of your
gold standard prior to collecting motion examples. The more your examples fully represent your gold
standard, the better your move recognition at game time.

16. Q. How do I know when to start and stop recording a motion?


You must start and stop recording such that all the important motion sensor data is captured.
For example, if the initial orientation of the Wii Remote is an important feature of the gold standard,
then it is vital that you record the initial orientation. This is true both at development time, when
collecting examples, and at game-time, when classifying the motions performed by game players.

17. Q. How do I know when to start and stop classifying a motion?


In buttonless recognition mode LiveMove 2 tells you when a motion starts, stops and its final classifi-
cation.
In standard recognition mode there are various methods to inform LiveMove 2 that a motion has started
or stopped, for instance:

• Get the player to press or hold a button (e.g., the B button on the Wii Remote).
• Get the game to tell the player when to perform a move in a given time limit (e.g., ‘READY’,
‘STEADY’, ‘GO!’).
• Detect when the force on the Wii Remote satisfies start and stop threshold conditions.

18. Q. When do I stop collecting examples?


You know when to stop collecting examples when a new person, who has not provided examples, can
consistently perform all the moves in the classifier (after an initial period of learning).
In our experience, examples from at least 5-6 different people are required before a new person can
either

• learn to consistently perform the moves without entering example motions, or


• have a good experience with tuning, i.e., he or she can easily tune the classifier on all moves.

See Section 3.7 and Section 5.9 for more detail.

19. Q. How many motion examples should I record for one move?
LiveMove 2 considers all the examples it is given. In general, the more examples the better, on condition
that the examples conform to your gold standard.
A wide gold standard by definition requires more varied examples than a narrow gold standard. In
practice, this usually means that a wide gold standard has more examples.
A narrow standard on a move set that is well defined and understood by most people, like “numbers”,
may require less than a hundred examples (i.e. just a few minutes of work).
A wide standard on a vague move set, like “cowboy moves”, may require several hundred per move.

20. Q. How many different ways of performing a single move should I record?
You should collect all the different kinds of examples that conform to your gold standard.
For example, if your gold standard is ‘a big, clockwise square with my arm pointing straight out’ then
collect different examples that conform to this definition.
So you will vary speed, the initial orientation of the Wii Remote, etc. But you will always perform a
big, clockwise square with your arm pointing straight out.
This is an example of a reasonably ‘narrow’ gold standard.
If your gold standard is ‘any square’ then as before collect different examples that conform to this
definition. But, in this case, the gold standard is wider in scope.
So in addition to varying speed, initial orientation etc., also vary the direction your hand is pointing
(up, down, left, right), whether clockwise or anti-clockwise, and the spatial extent of the square.
This is an example of a ‘wide’ gold standard.

LiveMove 2 gives you a lot of flexibility in defining and creating your moves: you can make your gold
standard as narrow or as wide as you wish. See Section 5.2.
If you add examples to your project that do not conform to your gold standard, you are changing the
definition of your move without realizing it.
Chapter 5 of the User Manual contains guidelines for collecting examples.

21. Q. Players will perform the same move in different ways. Do I need to consider all the different
ways of performing the same move in advance and add them as examples to LiveMove 2?
You should think of all possible motions that conform to your gold standard and add them as examples.
For example, if the move is a “tennis serve” but you don’t wish to distinguish between an ‘overarm
serve’ and an ‘underarm serve’ then both kinds of examples must be included in your classifier project.
However, LiveMove 2 will generalize from the examples it is given. So you do not have to anticipate
every detailed variation that players might try.

22. Q. When collecting new examples, how should I decide to accept or reject an example motion?
At collection time you only reject or accept examples depending on whether they conform to your idea
of the gold standard. Watch closely as the performer is performing each motion and determine if that
motion fits your gold standard.
Once the new example motions are imported to the project window, they will be automatically classified
by the current classifier (i.e., classifier built before the import). It is normal to see the new motions not
being correctly classified - the current classifier has not fully learned your gold standard yet. In fact,
that’s why you are giving new examples to LiveMove 2 – to teach it your gold standard.
For example, the current classifier may never have seen an ‘underarm serve’ as an example of the move
‘serve’, so it may classify this example as undetermined, or some other move. You should accept it if
it fits your standard. Likewise, if the classifier accepts a motion as a serve, but that motion does not fit
your gold standard, then you should delete the motion.
The decision on accepting or rejecting an example is crucial because this is how you define a move to
LiveMove 2.

23. Q. How do I choose which individual examples to give to lmMaker to make a classifier with the
highest accuracy?
Do not make this decision. This is LiveMove 2’s job.
Tweaking the composition of your set of examples to optimize the accuracies you see in lmMaker is
a mistake.
Instead, simply design and collect motions that fit your gold standard, then feed them all to LiveMove 2.
The best way to ensure good classification is to ensure all the data you give to lmMaker fits your gold
standard. A few bad examples can cause LiveMove 2 to misfire.

24. Q. How do I know which examples are bad?


When collecting many examples of different moves from multiple people it is always possible that a
bad example, which does not conform to your gold standard, gets collected in error.
lmMaker can display outliers in your project, which are potentially bad examples (see Section 3.9.1).
You can visualize the raw motion data in lmMaker to help search for bad examples (see Section 3.9.2).
The summarization data can also help you detect bad examples (see Section 3.9.3).

25. Q. I have 3 different ways of performing the same move. One way is a standard way, but the
other two ways are more extreme, and I think are less likely to be tried by game players. Should
I record more standard examples compared to extreme examples?
No. We recommend that you keep reasonably balanced ratios between the different ways of performing
a move.

You must include extreme motions if you want these examples to be included in your definition of the
move (the gold standard).
If the ratio between usual and extreme examples is very imbalanced LiveMove 2 may decide it is not
worth representing the smaller set. In this case, increase the number of examples of the smaller set, or
increase the capacity of your classifier.
26. Q. In my buttonless project, almost any short motion I make gets recognized as one of the moves,
what should I do?
This can happen when your buttonless project has a high force threshold (i.e. > 1.0) and you have
collected some short or weak example motions for a move.
A high force threshold means that, for example motions that are short or weak (especially those with
weak starts), a large portion of each motion is cut off by the threshold. Effectively, you end up with
some very short “leftover” example motions for the move, which can match almost any
short motion.
To deal with this, you can either:
• View all the motions in the problem move set and find the ones that are cut short by the force
threshold and remove or delete them; or
• Experiment with a lower force threshold.
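The first option — spotting motions whose above-threshold “leftover” is too short — can be sketched as a simple filter. This is an illustrative sketch only, not a LiveMove 2 API: the motion representation (a list of per-sample force magnitudes) and the threshold values are assumptions you would adapt to your own data.

```python
# Illustrative sketch only -- not a LiveMove 2 API. A "motion" here is
# assumed to be a list of per-sample force magnitudes; force_threshold and
# min_samples are placeholder values you would tune for your project.

def leftover_length(motion, force_threshold):
    """Number of samples whose force survives the threshold."""
    return sum(1 for force in motion if force >= force_threshold)

def flag_short_leftovers(motions, force_threshold=1.0, min_samples=8):
    """Return indices of example motions whose above-threshold portion is
    so short that they could match almost any short motion."""
    return [i for i, motion in enumerate(motions)
            if leftover_length(motion, force_threshold) < min_samples]
```

Motions flagged this way are candidates for removal from the problem move set, per the first bullet above.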

27. Q. When testing my buttonless classifier, I notice a lag between the end of motion execution and
the final classification result. Is that normal and how can I eliminate it?
Since a buttonless classifier must determine if and when a motion has ended, it is not unusual that
sometimes it needs to take extra samples in order to be sure that the motion has ended. This is especially
true when you don’t have enough training examples. However there are ways that you can eliminate or
at least minimize the lag.
One way you can try dealing with the lag is to use the “LiveMove-Game” classification mode described
in Section 4.10.2. In this mode, your game can determine when a move ends before LiveMove does.
Otherwise, you can try the following:

(a) Collect more examples that are performed at varying speeds for the move(s) that exhibit lag.
(b) Import the newly collected motions in your project and click ‘suggest capacity’ to get the
lmMaker recommended capacity value.
(c) Build and test the classifier. If this has fixed your problem, you can skip the remaining steps.
Otherwise,
(d) Double the current capacity value.
(e) Build and test the classifier.
(f) Repeat the previous two steps (d and e) until you no longer notice any lag or are satisfied with the
result.
(g) Experiment with a lower capacity value (a value between the suggested and the current
value) until you find one that is just high enough to eliminate any noticeable lag. This way you
will not incur unnecessary CPU and memory cost.
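The doubling-then-refine procedure above amounts to an exponential search for the smallest lag-free capacity. Here is a hedged sketch of that search logic. `lag_free` is not a LiveMove function — it stands in for your manual build-and-test step — and the sketch assumes monotonicity: once a capacity is large enough to remove the lag, every larger capacity is too.

```python
def find_min_capacity(lag_free, suggested_capacity):
    """Exponential search for the smallest capacity that passes lag_free().

    lag_free(capacity) -> bool is a stand-in for 'build the classifier at
    this capacity and test for lag by hand'. It is assumed monotone: once
    True, it stays True for all larger capacities."""
    hi = suggested_capacity
    while not lag_free(hi):          # steps (d)-(f): keep doubling
        hi *= 2
    if hi == suggested_capacity:     # the suggested value was already enough
        return hi
    lo = hi // 2                     # last capacity known to exhibit lag
    while lo + 1 < hi:               # step (g): refine between lo and hi
        mid = (lo + hi) // 2
        if lag_free(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

In practice each `lag_free` probe is expensive (a full build-and-test cycle), so you may stop refining as soon as the result is “good enough” rather than exactly minimal.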

28. Q. Can LiveMove 2 differentiate strong from weak motions?


Yes. But be careful to distinguish strong vs. weak examples in your project. For example, if you allow
very weak ‘strong saws’ as examples of your strong data, you may get more ‘strong’ results than you
want. The summarization data in lmMaker can help you avoid this problem.
29. Q. Can LiveMove 2 handle repetitive motions?
Yes. For example, LiveMove 2 can recognize a single strong saw stroke as a ‘strong saw’, or 10 repeated
strong saw strokes as a ‘strong saw’, or anything between. But you must collect balanced data for these
variations.

30. Q. What about left-handed people?


You must collect data from left-handed people.
From the point of view of LiveMove 2 a left-handed version of a move poses no special problems.
Adding such examples simply widens the scope of your gold standard to include left-handed variants.

31. Q. Should I design my moves to have inflection points?


LiveMove 2 can distinguish between any moves that generate different motion sensor data. The presence
of an inflection point in the data is not necessary, but it may help to ensure the motion sensor data
is different.
So it can help to accentuate and exaggerate your moves. Exaggerating your sweeps, corners
and wiggles will give LiveMove 2 more interesting data to work with.

32. Q. What kinds of moves is LiveMove 2 good at recognizing?


All kinds of moves. LiveMove 2 can distinguish between any motions that generate different motion
sensor data.

33. Q. What kinds of moves is LiveMove 2 bad at recognizing?


LiveMove 2 has problems distinguishing between any moves that generate similar motion sensor data.
In rare cases, a move can look different visually but present nearly identical data. It is always important
to remember that LiveMove 2 operates on the low-level motion sensor data, not on how the move
appears to our eyes.
For example, a ‘vertical cut’ and a ‘slant cut’, where the Wii Remote is being held like a sword, can
generate nearly identical motion sensor data in the middle of the motions. So from the motion sensor
point of view they look almost identical.
But if you record examples that include the start (from rest) and the end (after stopping) of the vertical
and slant cuts then LiveMove 2 will distinguish between them.
Please read Section 5.1 in the User Manual.

34. Q. What does tuning do?


Players come in all shapes and sizes with wildly different physical abilities. They may perform your
moves in unexpected or quirky ways. So even if your classifier performs very well on most new people
at development time it may still not recognize some individuals.
Tuning is a guarantee. It adapts a classifier at game-time to recognize an individual player. Tuning helps
adjust the classifier to an individual’s quirks and improves classification performance (how accurately
motions are recognized) for that player.

• Tuning modifies the accept and reject behavior of a classifier at run-time by using the player’s
motions as new examples.
• Once the classifier has seen examples of an individual player’s motion style then similar examples
of that player’s motions can be recognized more easily.
• Tuning lets you ship classifiers with lower capacity (and therefore lower CPU costs) that will
– not generalize as well as higher capacity classifiers across a large population of gamers,
– but will work as well as a high capacity classifier for an individual game player once tuned.
• Tuning allows a classifier to be adapted to work for players that are different from those seen
during internal testing during development time.

Typically, you will create a classifier at development time and then a player will tune it before they play
the game (or during the initial stages of the game).
The tuning process could be presented to the player as a short training section. Or it can be hidden
from the player.

Whether a tuning step is necessary for all the moves in your game depends on the game design. For
some moves, like a secret “super-smash” tennis serve, you may want your players to learn how to be
recognized (rather than the game learning how to recognize them by tuning).
Please read Section 4.17 of the User Manual.

35. Q. What’s the difference between tunability and slack?


Tunability applies to a classifier. Slack applies to a move.
A classifier’s tunability is a measure of how many different ways of performing the same move it
can recognize during tuning.
The tunability parameter controls the extent to which a classifier will accept new ways to execute moves
from an individual at game time.
High tunability means that more individual variations will be accepted as new examples of a move.
Low tunability means that fewer will be accepted. Note, however, that a new tuning example is accepted
only if it is related to the gold standard you defined at development time.
A move’s slack is a measure of how strictly it is recognized.
Slack is fixed at development time. A low slack means that the classifier will reject more motions as
being examples of the given move than the default setting. A high slack means that more motions will
be accepted compared to the default setting.

36. Q. How do I control the difficulty level of performing moves?

• By controlling the examples you give to LiveMove 2.


• By changing slack and tunability.

Get a pro-tennis player to provide examples of tennis moves. Build a classifier from their data alone.
Keep slack low. Do not allow tuning. Performing these moves will be challenging for the average game
player.
Get lots of average people to provide lots of varied examples of tennis moves. Experiment by increasing
the slack and build the classifier. Stop collecting examples when new players are easily recognized.
Allow tuning with a high tunability setting (equal to or higher than the recommended capacity of the
classifier). Performing these moves would be easy for the average game player.
These are two extremes. There is a continuum between them. A game can incorporate the full range
of difficulty levels. For example, you can build different classifiers to present different performance
challenges.

37. Q. What do I do if two of my moves are getting confused?


There could be bad example motions. Remove outliers from your project if there are any and re-build
the classifier. If no outliers were reported, you could still have mistakenly collected example motions
for one move that are meant for the other. View the motion graphs of the mis-classified example
motions and determine if this might be the case. If yes, remove those motions and rebuild the classifier.
If the above does not help, double-check that your motions are separable from the data point of view
(Section 5.1 of the User Manual) and that your motion design is adequate (Section 5.2 of the User
Manual). Then try adjusting slack (Section 5.4 of the User Manual).

38. Q. What do I do if one of my moves is hard to reproduce?

• You might have bad motion examples in your set.


• You may not have enough example motions to completely cover how your moves are done, espe-
cially in the context of the game.

You should consider different ways and orders for collecting motions to improve recognition. See
Section 5.3 of the User Manual.

39. Q. I added some more examples but now one of my moves is hard to reproduce. What do I do?
Check the newly added examples and make sure they are correct.
Try collecting more motion examples for the few hardest-to-reproduce classes and re-build the classifier
(Section 5.4 of the User Manual).

40. Q. What do I do if Joe from Accounting can’t do move X?


You should:

• Let Joe add his motion examples to move X; Or


• Let Joe tune the classifier on move X (see Section 4.17 of the User Manual). If the tuning process
does not accept any of Joe’s tuning motions, try increasing the classifier’s tunability; or
• Try increasing the slack on move X.

41. Q. I have a lot of examples from lots of people. Which examples should I use?
Eliminate all data that does not fit your gold standard. Feed all data that fits your gold standard to
lmMaker . LiveMove 2 will do the rest. See Section 5.4 of the User Manual.

42. Q. I gave an example of a counter-clockwise circle, but my classifier doesn’t recognize it any
more. What do I do?
LiveMove 2 will generalize over the examples it sees to create a classifier that optimizes performance
both within the known examples, and for the unseen data from your as-yet unseen players.
If your ratio between clockwise and counter-clockwise circle examples is very imbalanced,
LiveMove 2 may decide it is not worth representing the smaller set of the two in the classifier.
In this case

• increase the number of counter-clockwise examples, or


• increase the capacity setting of the classifier.

In general it is good to keep reasonably balanced ratios between the different ways of executing each
move in your classifier.
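A quick balance check over your per-variation example counts can catch this imbalance before you build. The sketch below is not part of lmMaker; the count dictionary and the acceptable ratio are assumptions for illustration.

```python
def imbalanced_variations(variation_counts, max_ratio=3.0):
    """Given {variation_name: example_count} for one move, return the
    variations whose counts fall too far below the largest set.

    max_ratio is an illustrative cutoff: a variation is flagged when the
    biggest variation has more than max_ratio times as many examples."""
    largest = max(variation_counts.values())
    return sorted(name for name, count in variation_counts.items()
                  if largest > max_ratio * count)
```

Flagged variations are the ones at risk of being dropped by the classifier; collect more examples for them, or raise the classifier's capacity.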

43. Q. I built a classifier from lots of different people, but it still doesn’t work well when someone
new tries to perform the moves. What do I do?
Don’t panic. It may take 5 or 6 different body types before your classifier has a broad enough base of
examples to perform well on a new person.
Some things to try:

• Add more examples from different people.


• Encourage people to give you examples of performing moves in slightly different styles while
still staying in the bounds of your gold standard. Add their data and build a new classifier.
• Ensure you apply your gold standard consistently. Don’t use any examples that fall outside of
your comfort zone because LiveMove 2 will generalize – potentially pushing you further from
your standard.

44. Q. Does reducing slack make my classifier more accurate?


Not necessarily.
A high slack value for a class tends to make that class very “eager” to recognize motions
as its own, which can be inaccurate because it may claim motions that belong to other classes.
A low slack value for that class doesn’t necessarily increase accuracy either: it only
makes that class more “reluctant” or “fussy” about recognizing motions as its own, which can
also be inaccurate because it may reject motions that actually do belong to that class.

45. Q. One of my moves is recognized too much. What do I do?


Reduce the slack on this move from its current value. Conversely, if a move is too hard to perform, try
increasing its slack.
46. Q. Are CPU costs affected by the number of moves that a classifier can recognize?
No. CPU costs are not affected by the number of moves. A classifier’s capacity is the main factor that
affects CPU costs because the capacity is shared across the number of moves that can be recognized.
In practice, however, a classifier that recognizes lots of moves will need a higher capacity, especially if
there are many different ways of performing each move.

47. Q. How do I eliminate lag between the player’s movement and on-screen events?
Any noticeable lag between the player’s movement and on-screen events can be entirely eliminated.
LiveMove 2 supports a range of motion-control use cases. Here are two extremes:

• The player presses a button to signal the start of a motion. The motion is performed for 2 seconds.
Then the player releases a button to signal the end of the motion. At this point LiveMove 2 returns
the final classification, which invokes an in-game event.
In this case the lag between the initial movement and in-game event is 2 seconds.
• The player simply moves the Wii Remote in a star shape to cast a magic spell. As soon as the
player moves, the Wii Remote rumbles and a star-spell animation unfolds on-screen, synchronized
with the player’s motion.
In this case there is no noticeable lag.
See Sections 3.10, 4.11, 4.14, 5.5 and 5.6 of the User Manual.
48. Q. Are CPU resources wasted if a single classifier can recognize lots of moves?
No. A classifier’s capacity is the main factor that affects CPU costs because the capacity is shared
across the number of moves that can be recognized. But you may be wasting capacity.
For example, consider a game that includes a fishing mini-game and a cooking mini-game. Capacity is
wasted if a single classifier is used to recognize both fishing and cooking moves because the different
kinds of moves are never performed in the same context. In such cases it is better to “divide and
conquer” and build two classifiers – one for recognizing fishing moves, and one for recognizing cooking
moves.
Appendix B

Applications

B.1 The PC Application lmsConvertMotions


You can re-use motions from previous LiveMove products (LiveMove1.1, LiveMove Pro or
LiveMove2BetaEval). You need to convert them to the new LiveMove 2 motion format first. The
lmsConvertMotions application is provided for this purpose. You can find it in the ailiveBase/bin
directory.

1. If there is any chance you will need to continue to use your motions in the previous product, make sure
to create a backup of your motion files before converting. The conversion process is irreversible.

2. Locate lmsConvertMotions.exe in LiveMove 2 installation’s bin directory. You can view the
help message by typing lmsConvertMotions.exe -h in a command shell window.

3. Run lmsConvertMotions.exe -path <full path> where “full path” specifies the full path
to the top directory holding motions. This will convert all the old motions in the specified directory
to the LiveMove 2 format in-place. You should be able to see some text output while the motions are
being converted.

B.2 The PC Application lmHostIO


lmHostIO.exe allows applications running on the Wii dev kit to write files to the host PC. It uses the USB
host/dev kit interface.
Run lmHostIO.exe from the ailiveBase/bin directory in an RVL SDK/RVL NDEV window. Ensure
that you have already set your DEV root using the command setndenv DvdRoot "path". It is
usually fine to simply set it to the current directory (i.e. setndenv DvdRoot .).
If you get a “missing DLL” or another error while running lmHostIO.exe, consult the README.txt
for troubleshooting tips. (Note that lmHostIO.exe must always be run from the same directory as its
associated application, lmMaker in this case).

B.3 The Console Application lmRecorder


lmRecorder records motions as data from the specified motion device (e.g. Wii Remote and/or the
nunchuk). It is run automatically by lmMaker when collecting motion examples.
You can run lmRecorder.elf on the Wii dev kit from the ailiveBase/bin directory by hand.


When recording standard motions, a motion is recorded while the B button is held down.


To see the complete list of command line options, run lmRecorder as follows:
ndrun lmRecorder.elf -a -h
The output will be sent to stdout and can be read from the serial port on the Wii development kit. Consult
Nintendo’s documentation on how to do this.

B.4 Running lmHostIO.exe and lmRecorder from lmMaker


By default lmMaker will run lmHostIO.exe then lmRecorder when collecting motions. Edit the
settings for running these two programs by selecting Collect->Preferences. You will see the dialog
shown in Figure B.1.

Figure B.1: Interface options for motion recording.

By default, when Collect->Collect Motions is selected lmMaker performs the following com-
mands in order:

1. If “Launch NDEV file manager” is checked

(a) Start a new process by sending the command specified on “Command Line” to the operating
system to execute in lmMaker’s working directory.

2. If “Launch motion recorder” is checked

• If “Environment” is not empty



(a) Start a new process by sending the command in “Environment” to the operating system.
(b) Append lmRecorder arguments depending on the type of collection specified to the text
entered in “Command Line”.
(c) Send the combined text string, plus “additional arguments”, to the standard input for the
environment process.
• Otherwise
(a) Append lmRecorder arguments depending on the type of collection specified to the text
entered in “Command Line”.
(b) Start a new process by sending the combined text string, plus “additional arguments”, to the
operating system.

When collection ends lmMaker then performs the following commands in order:

1. If “Launch motion recorder” is checked


• If “Environment” is not empty
(a) Send the text in “Stop Command” to the standard input for the environment process.
(b) Wait a few seconds for the process to end.
(c) If the process does not end, terminate the environment process.
• Otherwise
(a) Send the text in “Stop Command” to the standard input for the process started by sending
“Command Line” to the operating system.
(b) Wait a few seconds for the process to end.
(c) If the process does not end, terminate the process.

2. If “Launch NDEV file manager” is checked


(a) Send the text in “Text Input to Quit” to the standard input for the file manager process.
(b) Wait a few seconds for the process to end.
(c) If the process does not end, terminate the process.

The default values work for the majority of Wii game development systems. If your system differs, un-check
Use Default Options to make all options editable and either change the settings or manually invoke
lmHostIO.exe and lmRecorder .
Note that the controller should be powered off either manually or by exiting lmRecorder using the Home and
Plus buttons before closing the collection window in lmMaker. Failure to do this will result in the controller
being left powered on.
