Multimedia Programming Guide
2010-09-01
Apple Inc.
© 2010 Apple Inc.
All rights reserved.
No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in
any form or by any means, mechanical,
electronic, photocopying, recording, or
otherwise, without prior written permission of
Apple Inc., with the following exceptions: Any
person is hereby authorized to store
documentation on a single computer for
personal use only and to print copies of
documentation for personal use provided that
the documentation contains Apple's copyright
notice.
The Apple logo is a trademark of Apple Inc.
No licenses, express or implied, are granted
with respect to any of the technology described
in this document. Apple retains all intellectual
property rights associated with the technology
described in this document. This document is
intended to assist application developers to
develop applications only for Apple-labeled
computers.
Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
408-996-1010
Apple, the Apple logo, iPhone, iPod, iPod touch,
iTunes, Mac, Mac OS, Objective-C, and Xcode
are trademarks of Apple Inc., registered in the
United States and other countries.
iPad is a trademark of Apple Inc.
NeXT is a trademark of NeXT Software, Inc.,
registered in the United States and other
countries.
IOS is a trademark or registered trademark of
Cisco in the U.S. and other countries and is used
under license.
Even though Apple has reviewed this document,
APPLE MAKES NO WARRANTY OR REPRESENTATION,
EITHER EXPRESS OR IMPLIED, WITH RESPECT TO
THIS DOCUMENT, ITS QUALITY, ACCURACY,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR
PURPOSE. AS A RESULT, THIS DOCUMENT IS
PROVIDED AS IS, AND YOU, THE READER, ARE
ASSUMING THE ENTIRE RISK AS TO ITS QUALITY
AND ACCURACY.
IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR
CONSEQUENTIAL DAMAGES RESULTING FROM ANY
DEFECT OR INACCURACY IN THIS DOCUMENT, even
if advised of the possibility of such damages.
THE WARRANTY AND REMEDIES SET FORTH ABOVE
ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL
OR WRITTEN, EXPRESS OR IMPLIED. No Apple
dealer, agent, or employee is authorized to make
any modification, extension, or addition to this
warranty.
Contents
Introduction
Chapter 1
Using Audio 9
The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions 10
iOS Hardware and Software Audio Codecs 10
Audio Sessions 12
Playing Audio 14
Playing Audio Items with iPod Library Access 14
Playing UI Sound Effects or Invoking Vibration Using System Sound Services 15
Playing Sounds Easily with the AVAudioPlayer Class 17
Playing Sounds with Control Using Audio Queue Services 19
Playing Sounds with Positioning Using OpenAL 22
Recording Audio 22
Recording with the AVAudioRecorder Class 22
Recording with Audio Queue Services 24
Parsing Streamed Audio 24
Audio Unit Support in iOS 25
Best Practices for iOS Audio 26
Tips for Using Audio 26
Preferred Audio Formats in iOS 27
Chapter 2
Using Video 29
Recording and Editing Video 29
Playing Video Files 30
INTRODUCTION
Whether multimedia features are central or incidental to your application, iPhone, iPod touch, and iPad users expect high quality. When presenting video content, take advantage of the device's high-resolution screen and high frame rates. When designing the audio portion of your application, keep in mind that compelling sound adds immeasurably to a user's overall experience.
You can take advantage of the iOS multimedia frameworks to add features like these. In iOS 4.0 and later, for example, the AV Foundation framework gives you fine-grained control over inspecting, editing, and presenting audio-visual assets.
"Using Audio" (page 9) shows how to use the system's audio technologies to play and record audio.

"Using Video" (page 29) shows how to use the system's video technologies to play and capture video.
CHAPTER 1
Using Audio
Important: This document contains information that used to be in iOS Application Programming Guide. The
information in this document has not been updated specifically for iOS 4.0.
iOS offers a rich set of tools for working with sound in your application. These tools are arranged into
frameworks according to the features they provide, as follows:
- Use the Media Player framework to play songs, audio books, or audio podcasts from a user's iPod library. For details, see Media Player Framework Reference, iPod Library Access Programming Guide, and the AddMusic sample code project.
- Use the AV Foundation framework to play and record audio using a simple Objective-C interface. For details, see AV Foundation Framework Reference and the avTouch sample code project.
- Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.
- Use the Audio Unit framework to connect to and use audio processing plug-ins. For details, see Audio Unit Hosting Guide for iOS.
- Use the OpenAL framework to provide positional audio playback in games and other applications. iOS supports OpenAL 1.1. For information on OpenAL, see the OpenAL website, OpenAL FAQ for iPhone OS, and the oalTouch sample code project.
To allow your code to use the features of an audio framework, add that framework to your Xcode project, link against it in any relevant targets, and add an appropriate #import statement near the top of relevant source files. For example, to provide access to the AV Foundation framework in a source file, add a #import <AVFoundation/AVFoundation.h> statement near the top of the file. For detailed information on how to add frameworks to your project, see "Files in Projects" in Xcode Project Management Guide.
Important: To use the features of the Audio Unit framework, add the Audio Toolbox framework to your Xcode
project and link against it in any relevant targets. Then add a #import <AudioToolbox/AudioToolbox.h>
statement near the top of relevant source files.
This section on sound provides a quick introduction to implementing iOS audio features, as listed here:

- To play songs, audio podcasts, and audio books from a user's iPod library, see "Playing Audio Items with iPod Library Access" (page 14).
- To play and record audio in the fewest lines of code, use the AV Foundation framework. See "Playing Sounds Easily with the AVAudioPlayer Class" (page 17) and "Recording with the AVAudioRecorder Class" (page 22).
- To provide full-featured audio playback including stereo positioning, level control, and simultaneous sounds, use OpenAL. See "Playing Sounds with Positioning Using OpenAL" (page 22).
- To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit. See "Audio Unit Support in iOS" (page 25).
- To play sounds with the highest degree of control, including support for synchronization, use Audio Queue Services. See "Playing Sounds with Control Using Audio Queue Services" (page 19). Audio Queue Services also supports recording and provides access to incoming audio packets, as described in "Recording with Audio Queue Services" (page 24).
- To parse audio streamed from a network connection, use Audio File Stream Services. See "Parsing Streamed Audio" (page 24).
- To play user-interface sound effects, or to invoke vibration on devices that provide that feature, use System Sound Services. See "Playing UI Sound Effects or Invoking Vibration Using System Sound Services" (page 15).
Be sure to read the next section, "The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions" (page 10), for critical information on how audio works in iOS. Also read "Best Practices for iOS Audio" (page 26), which offers guidelines and lists the audio and file formats to use for best performance and best user experience.

When you're ready to dig deeper, the iOS Dev Center contains guides, reference books, sample code, and more. For tips on how to perform common audio tasks, see Audio & Video Coding How-To's. For in-depth explanations of audio development in iOS, see Core Audio Overview, Audio Session Programming Guide, Audio Queue Services Programming Guide, Audio Unit Hosting Guide for iOS, and iPod Library Access Programming Guide.
The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions

iOS Hardware and Software Audio Codecs
Table 1-1 describes the playback audio codecs available on iOS devices.

Table 1-1  Audio playback formats and codecs

Audio format             Hardware-assisted decoding    Software-based decoding
AAC                      Yes                           Yes
ALAC (Apple Lossless)    Yes                           Yes
HE-AAC                   Yes                           -
iLBC                     -                             Yes
IMA4 (IMA/ADPCM)         -                             Yes
Linear PCM               -                             Yes
MP3                      Yes                           Yes
µ-law and a-law          -                             Yes
When using hardware-assisted decoding, the device can play only a single instance of one of the supported
formats at a time. For example, if you are playing a stereo MP3 sound using the hardware codec, a second
simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and
an ALAC sound using hardware. If the iPod application is playing an AAC or MP3 sound in the background,
it has claimed the hardware codec; your application then plays AAC, ALAC, and MP3 audio using software
decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the
background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check at runtime which hardware and software codecs are available on a device, read the
discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format
Services Reference and read Technical Q&A QA1663, Determining the availability of the AAC hardware encoder
at runtime.
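As a rough sketch of that runtime check, following the pattern in QA1663 (the constants are real, but treat the details here as an assumption and consult the Q&A; this example asks whether a hardware AAC encoder is present):

#import <AudioToolbox/AudioToolbox.h>

// Describe the codec of interest: a hardware AAC encoder.
AudioClassDescription requestedCodecs[1] = {
    {
        kAudioEncoderComponentType,
        kAudioFormatMPEG4AAC,
        kAppleHardwareAudioCodecManufacturer
    }
};

// Ask how many of the requested codecs this device can supply.
UInt32 successfulCodecs = 0;
UInt32 size = sizeof (successfulCodecs);
AudioFormatGetProperty (
    kAudioFormatProperty_HardwareCodecCapabilities,
    sizeof (requestedCodecs),
    requestedCodecs,
    &size,
    &successfulCodecs
);
// successfulCodecs == 0 means the hardware AAC encoder is unavailable.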
To summarize how iOS supports audio formats for single or multiple playback:

- Linear PCM and IMA4 (IMA/ADPCM): You can play multiple linear PCM or IMA4 sounds simultaneously in iOS without incurring CPU resource problems. The same is true for the iLBC speech-quality format, and for the µ-law and a-law compressed formats. When using compressed formats, check the sound quality to ensure it meets your needs.
- AAC, HE-AAC, MP3, and ALAC (Apple Lossless): Playback for AAC, HE-AAC, MP3, and ALAC sounds can use efficient hardware-assisted decoding on iOS devices, but these codecs all share a single hardware path. The device can play only a single instance of one of these formats at a time using hardware-assisted decoding.
The single hardware path for AAC, HE-AAC, MP3, and ALAC playback has implications for play-along-style applications, such as a virtual piano. If the user is playing a song in one of these formats in the iPod application, then your application, to play along over that audio, will employ software decoding.
Table 1-2 describes the recording audio codecs available on iOS devices.

Table 1-2  Audio recording formats and codecs

Audio format             Hardware-assisted encoding    Software-based encoding
AAC                      Yes (on supported devices)    Yes
ALAC (Apple Lossless)    -                             Yes
iLBC                     -                             Yes
IMA4 (IMA/ADPCM)         -                             Yes
Linear PCM               -                             Yes
µ-law and a-law          -                             Yes
Audio Sessions
The iOS audio session APIs let you define your application's general audio behavior and design it to work well within the larger audio context of the device it's running on. These APIs are described in Audio Session Services Reference and AVAudioSession Class Reference. Using these APIs, you can specify such behaviors as:
- Whether or not your audio should be silenced by the Silent switch (on iPhone, this is called the Ring/Silent switch)
- Whether other audio, such as from the iPod, should continue playing or be silenced when your audio starts
The audio session APIs also let you respond to user actions, such as the plugging in or unplugging of headsets, and to events that use the device's sound hardware, such as Clock and Calendar alarms and incoming phone calls.
The audio session APIs provide three programmatic features, described in Table 1-3.
Table 1-3  Features provided by the audio session APIs

Setting categories: A category is a key that identifies a set of audio behaviors for your application. By setting a category, you indicate your audio intentions to iOS, such as whether your audio should continue when the screen locks. There are six categories, described in "Audio Session Categories". You can fine-tune the behavior of some categories, as explained in "Fine-Tuning the Category" in Audio Session Programming Guide.

Handling interruptions and route changes: Your audio session posts messages when your audio is interrupted, when an interruption ends, and when the hardware audio route changes. These messages let you respond gracefully to changes in the larger audio environment, such as an interruption due to an incoming phone call. For details, see "Handling Audio Hardware Route Changes" and "Handling Audio Interruptions".

Optimizing for hardware: You can query the audio session to discover characteristics of the device your application is running on, such as hardware sample rate, number of hardware channels, and whether audio input is available. For details, see "Optimizing for Device Hardware".
There are two interfaces for working with the audio session:

- A streamlined, Objective-C interface that gives you access to the core audio session features, described in AVAudioSession Class Reference and AVAudioSessionDelegate Protocol Reference.
- A C-based interface that provides comprehensive access to all basic and advanced audio session features, described in Audio Session Services Reference.

You can mix and match audio session code from AV Foundation and Audio Session Services; the interfaces are completely compatible.
An audio session comes with some default behavior that you can use to get started in development. However, except for certain special cases, the default behavior is unsuitable for a shipping application that uses audio. For example, when using the default audio session, audio in your application stops when the Auto-Lock period times out and the screen locks. If you want to ensure that playback continues with the screen locked, include the following lines in your application's initialization code:
NSError *setCategoryErr = nil;
NSError *activationErr = nil;
[[AVAudioSession sharedInstance]
    setCategory: AVAudioSessionCategoryPlayback
    error: &setCategoryErr];
[[AVAudioSession sharedInstance]
    setActive: YES
    error: &activationErr];
The AVAudioSessionCategoryPlayback category ensures that playback continues when the screen locks.
Activating the audio session puts the specified category into effect.
How you handle the interruption caused by an incoming phone call or Clock or Calendar alarm depends on
the audio technology you are using, as shown in Table 1-4.
Table 1-4  Handling audio interruptions

AV Foundation framework: The AVAudioPlayer and AVAudioRecorder classes provide delegate methods for interruption start and end. The system pauses playback or recording automatically upon interruption; implement the delegate methods to update your user interface and, optionally, to resume after the interruption ends.

Audio Queue Services, I/O audio unit: These technologies put your application in control of handling interruptions; you are responsible for saving the playback or recording position and reactivating the audio session after the interruption ends.

OpenAL: As with Audio Queue Services, your application handles interruptions itself; in addition, your interruption-handling code must manage the OpenAL context.

System Sound Services: Sounds go silent during an interruption and resume automatically when the interruption ends; applications cannot influence this behavior.
Every iOS application, with rare exceptions, should actively manage its audio session. For a complete explanation of how to do this, read Audio Session Programming Guide. To ensure that your application conforms to Apple's recommendations for audio session behavior, read "Sound" in iOS Human Interface Guidelines.
Playing Audio
This section introduces you to playing sounds in iOS using iPod library access, System Sound Services, Audio
Queue Services, the AV Foundation framework, and OpenAL.
Playing Audio Items with iPod Library Access
As shown in Figure 1-1, your application has two ways to retrieve media items. The media item picker, shown on the left, is an easy-to-use, pre-packaged view controller that behaves like the built-in iPod application's music selection interface. For many applications, this is sufficient. If the picker doesn't provide the specialized access control you want, the media query interface will. It supports predicate-based specification of items from the iPod library.
Figure 1-1  Using iPod library access (your application reaches the iPod library either through the media picker or through a media query, and plays the retrieved items with the music player)
As depicted to the right of your application in the figure, you then play the retrieved media items using the music player provided by this API.
For a complete explanation of how to add media item playback to your application, see iPod Library Access
Programming Guide. For a code example, see the AddMusic sample code project.
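For a sense of the media query path, here is a minimal sketch (illustrative only, without error handling) that queues every song in the user's iPod library using the application music player:

#import <MediaPlayer/MediaPlayer.h>

// Build a query that matches every song in the iPod library.
MPMediaQuery *songsQuery = [MPMediaQuery songsQuery];

// The application music player plays only while your application runs.
MPMusicPlayerController *player =
    [MPMusicPlayerController applicationMusicPlayer];

// Hand the query results to the player and start playback.
[player setQueueWithQuery: songsQuery];
[player play];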
Playing UI Sound Effects or Invoking Vibration Using System Sound Services

To play user-interface sound effects, or to invoke vibration on devices that support it, use System Sound Services. When you play a sound with the AudioServicesPlaySystemSound function:

- Sounds play at the current system audio volume, with no programmatic volume control available
- Simultaneous playback is unavailable: you can play only one sound at a time
The similar AudioServicesPlayAlertSound function plays a short sound as an alert. If a user has configured
their device to vibrate in Ring Settings, calling this function invokes vibration in addition to playing the sound
file.
Note: System-supplied alert sounds and system-supplied user-interface sound effects are not available to
your application. For example, using the kSystemSoundID_UserPreferredAlert constant as a parameter
to the AudioServicesPlayAlertSound function will not play anything.
To play a sound with the AudioServicesPlaySystemSound or AudioServicesPlayAlertSound function, first create a sound ID object, then pass it to a play function, as shown in Listing 1-1.

Listing 1-1  Playing a system sound

- (IBAction) playSystemSound {
    AudioServicesPlaySystemSound (self.soundFileObject);
}
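The sound ID object itself comes from the AudioServicesCreateSystemSoundID function. A minimal sketch, assuming a short tap.aif file in your application bundle (the file name and the soundFileObject property are illustrative):

// Locate the sound file in the application bundle.
NSURL *tapSound = [[NSBundle mainBundle] URLForResource: @"tap"
                                          withExtension: @"aif"];

// Create a system sound ID object that represents the file.
SystemSoundID newSoundID;
AudioServicesCreateSystemSoundID ((CFURLRef) tapSound, &newSoundID);
self.soundFileObject = newSoundID;

// When you are finished with a sound, dispose of its sound ID object:
// AudioServicesDisposeSystemSoundID (self.soundFileObject);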
In typical use, which includes playing a sound occasionally or repeatedly, retain the sound ID object until your application quits. If you know that you will use a sound only once (for example, in the case of a startup sound), you can destroy the sound ID object immediately after playing the sound, freeing memory.
Applications running on iOS devices that support vibration can trigger that feature using System Sound
Services. You specify the vibrate option with the kSystemSoundID_Vibrate identifier. To trigger it, use the
AudioServicesPlaySystemSound function, as shown in Listing 1-3.
Listing 1-3  Triggering vibration

#import <AudioToolbox/AudioToolbox.h>
#import <UIKit/UIKit.h>

- (void) vibratePhone {
    AudioServicesPlaySystemSound (kSystemSoundID_Vibrate);
}
Playing Sounds Easily with the AVAudioPlayer Class

The AVAudioPlayer class provides a simple Objective-C interface for playing sounds. Using an audio player you can:

- Loop sounds
- Control relative playback level for each sound you are playing
- Seek to a particular point in a sound file, which supports application features such as fast forward and rewind
- Obtain audio power data that you can use for audio level metering
The AVAudioPlayer class lets you play sound in any audio format available in iOS, as described in Table 1-1 (page 11). For a complete description of this class's interface, see AVAudioPlayer Class Reference.
To configure an audio player:

1. Assign a sound file to the audio player.
2. Prepare the audio player for playback, which acquires the hardware resources it needs.
3. Designate an audio player delegate object, which handles interruptions as well as the playback-completed event.

The code in Listing 1-4 illustrates these steps. It would typically go into an initialization method of the controller class for your application. (In production code, you'd include appropriate error handling.)
Listing 1-4  Configuring an AVAudioPlayer object
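A minimal sketch of such an initialization, assuming a sound.wav file in the application bundle (the file name and the player property are illustrative), might look like this:

// Point to a sound file in the application bundle.
NSString *soundFilePath =
    [[NSBundle mainBundle] pathForResource: @"sound"
                                    ofType: @"wav"];
NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];

// Step 1: Assign the sound file to a new audio player.
AVAudioPlayer *newPlayer =
    [[AVAudioPlayer alloc] initWithContentsOfURL: fileURL
                                           error: nil];
[fileURL release];
self.player = newPlayer;
[newPlayer release];

// Step 2: Acquire the hardware resources needed for playback.
[self.player prepareToPlay];

// Step 3: Designate a delegate to handle interruptions and completion.
[self.player setDelegate: self];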
The delegate (which can be your controller object) handles interruptions and updates the user interface when a sound has finished playing. The delegate methods for the AVAudioPlayer class are described in AVAudioPlayerDelegate Protocol Reference. Listing 1-5 shows a simple implementation of one delegate method. This code updates the title of a Play/Pause toggle button when a sound has finished playing.
Listing 1-5  Implementing an AVAudioPlayer delegate method
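A sketch of such a delegate method, assuming a button outlet named button (the outlet name and title string are illustrative):

// Invoked by the audio player when a sound finishes; reset the button title.
- (void) audioPlayerDidFinishPlaying: (AVAudioPlayer *) player
                        successfully: (BOOL) completed {
    if (completed == YES) {
        [self.button setTitle: @"Play" forState: UIControlStateNormal];
    }
}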
To play, pause, or stop an AVAudioPlayer object, call one of its playback control methods. You can test
whether or not playback is in progress by using the playing property. Listing 1-6 shows a basic play/pause
toggle method that controls playback and updates the title of a UIButton object.
Listing 1-6  A play/pause toggle method

- (IBAction) playOrPause: (id) sender {
    // If already playing, pause and retitle the button.
    if (self.player.playing) {
        [self.button setTitle: @"Play" forState: UIControlStateHighlighted];
        [self.button setTitle: @"Play" forState: UIControlStateNormal];
        [self.player pause];
    // If stopped or paused, start playing and retitle the button.
    } else {
        [self.button setTitle: @"Pause" forState: UIControlStateHighlighted];
        [self.button setTitle: @"Pause" forState: UIControlStateNormal];
        [self.player play];
    }
}
The AVAudioPlayer class uses the Objective-C declared properties feature for managing information about a sound, such as the playback point within the sound's timeline, and for accessing playback options, such as volume and looping. For example, you can set the playback volume for an audio player as shown here:
[self.player setVolume: 1.0];
For more information on the AVAudioPlayer class, see AVAudioPlayer Class Reference.
Playing Sounds with Control Using Audio Queue Services

Audio Queue Services adds playback capabilities beyond those of the AVAudioPlayer class; for example, you can play audio that you have captured from a stream using Audio File Stream Services. Audio Queue Services lets you play sound in any audio format available in iOS, as described in Table 1-1 (page 11). You can also use this technology for recording, as explained in "Recording Audio" (page 22).

For detailed information on using this technology, see Audio Queue Services Programming Guide and Audio Queue Services Reference. For sample code, see the SpeakHere sample.
To create an audio queue for playback:

1. Create a data structure to manage information needed by the audio queue, such as the audio format for the data you want to play.
2. Define a callback function for managing audio queue buffers. The callback uses Audio File Services to read the file you want to play. (In iOS 2.1 and later, you can also use Extended Audio File Services to read the file.)
3. Instantiate a playback audio queue using the AudioQueueNewOutput function.

Listing 1-7 illustrates these steps using ANSI C. (In production code, you'd include appropriate error handling.) The SpeakHere sample project shows these same steps in the context of a C++ program.
Listing 1-7  Creating an audio queue object

static const int kNumberBuffers = 3;

// A custom structure to manage state for the audio queue. The mAudioFile and
// mDataFormat fields are used by the code that follows.
struct myAQStruct {
    AudioFileID                    mAudioFile;
    AudioStreamBasicDescription    mDataFormat;
    AudioQueueRef                  mQueue;
    AudioQueueBufferRef            mBuffers[kNumberBuffers];
    SInt64                         mCurrentPacket;
    UInt32                         mNumPacketsToRead;
    AudioStreamPacketDescription   *mPacketDescs;
    bool                           mDone;
};
// Define a playback audio queue callback function
static void AQTestBufferCallback (
    void                 *inUserData,
    AudioQueueRef        inAQ,
    AudioQueueBufferRef  inCompleteAQBuffer
) {
    myAQStruct *myInfo = (myAQStruct *) inUserData;
    if (myInfo->mDone) return;

    UInt32 numBytes;
    UInt32 nPackets = myInfo->mNumPacketsToRead;

    AudioFileReadPackets (
        myInfo->mAudioFile,
        false,
        &numBytes,
        myInfo->mPacketDescs,
        myInfo->mCurrentPacket,
        &nPackets,
        inCompleteAQBuffer->mAudioData
    );
    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        AudioQueueEnqueueBuffer (
            inAQ,
            inCompleteAQBuffer,
            (myInfo->mPacketDescs ? nPackets : 0),
            myInfo->mPacketDescs
        );
        myInfo->mCurrentPacket += nPackets;
    } else {
        AudioQueueStop (myInfo->mQueue, false);
        myInfo->mDone = true;
    }
}
// Instantiate an audio queue object
AudioQueueNewOutput (
    &myInfo.mDataFormat,
    AQTestBufferCallback,
    &myInfo,
    CFRunLoopGetCurrent(),
    kCFRunLoopCommonModes,
    0,
    &myInfo.mQueue
);
To set the playback level of an audio queue directly, use the AudioQueueSetParameter function with the kAudioQueueParam_Volume parameter, as shown in Listing 1-8. The level change takes effect immediately.

Listing 1-8  Setting the playback level directly
Float32 volume = 1;    // linear scale, range from 0.0 through 1.0
AudioQueueSetParameter (
    myAQstruct.audioQueueObject,
    kAudioQueueParam_Volume,
    volume
);
You can also set playback level for an audio queue buffer by using the
AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are,
in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the buffer
begins playing.
In both cases, level changes for an audio queue remain in effect until you change them again.
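For instance, a sketch along these lines (the queue and buffer variables are illustrative and would come from code like Listing 1-7) enqueues a buffer that plays at half volume:

// Describe one parameter change to carry along with the buffer.
AudioQueueParameterEvent volumeEvent;
volumeEvent.mID    = kAudioQueueParam_Volume;
volumeEvent.mValue = 0.5;    // linear scale, 0.0 through 1.0

AudioQueueEnqueueBufferWithParameters (
    queue,            // the audio queue (AudioQueueRef)
    buffer,           // the buffer to enqueue (AudioQueueBufferRef)
    0, NULL,          // packet descriptions; 0 and NULL for constant bit rate data
    0, 0,             // no frames trimmed at the start or end
    1, &volumeEvent,  // one parameter event, applied when the buffer starts playing
    NULL,             // start as soon as possible
    NULL              // actual start time not needed
);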
To support audio level metering, first enable metering for the audio queue by setting its kAudioQueueProperty_EnableLevelMetering property to true; then query the queue's kAudioQueueProperty_CurrentLevelMeter property as needed. The value of that property is an array of AudioQueueLevelMeterState structures, one per channel. Listing 1-9 shows this structure as declared in the Audio Toolbox framework:

Listing 1-9  The AudioQueueLevelMeterState structure

typedef struct AudioQueueLevelMeterState {
    Float32    mAveragePower;
    Float32    mPeakPower;
} AudioQueueLevelMeterState;
Recording Audio
iOS supports audio recording using the AVAudioRecorder class and Audio Queue Services. These interfaces
do the work of connecting to the audio hardware, managing memory, and employing codecs as needed.
You can record audio in any of the formats listed in Table 1-2 (page 12).
Recording takes place at a system-defined input level in iOS. The system takes input from the audio source that the user has chosen: the built-in microphone or, if connected, the headset microphone or another input source.
To prepare for recording with the AVAudioRecorder class:

1. Specify a sound file URL.
2. Set up the audio session.
3. Configure the audio recorder's initial state.

Application launch is a good time to do this part of the setup, as shown in Listing 1-10. Variables such as soundFileURL and recording in this example are declared in the class interface. (In production code, you would include appropriate error handling.)
Listing 1-10  Setting up the audio session and the sound file URL

- (void) viewDidLoad {
    [super viewDidLoad];

    NSString *tempDir = NSTemporaryDirectory ();
    NSString *soundFilePath =
        [tempDir stringByAppendingString: @"sound.caf"];
    NSURL *newURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];
    self.soundFileURL = newURL;
    [newURL release];

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    audioSession.delegate = self;
    [audioSession setActive: YES error: nil];

    recording = NO;
    playing = NO;
}
To handle interruptions and the completion of recording, add the AVAudioSessionDelegate and AVAudioRecorderDelegate protocol names to the interface declaration for your implementation. If your application also does playback, also adopt the AVAudioPlayerDelegate protocol.
To implement a record method, you can use code such as that shown in Listing 1-11. (In production code,
you would include appropriate error handling.)
Listing 1-11  A record/stop method using the AVAudioRecorder class
- (IBAction) recordOrStop: (id) sender {
    if (recording) {
        // Already recording: stop and release the recorder. (This sketch keeps
        // only the essential steps; button retitling and session deactivation
        // belong here too.)
        [soundRecorder stop];
        recording = NO;
        self.soundRecorder = nil;
    } else {
        // Not yet recording: create a recorder. The settings shown here are
        // illustrative; see AVAudioRecorder Class Reference for the full set.
        NSDictionary *recordSettings =
            [[NSDictionary alloc] initWithObjectsAndKeys:
                [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
                nil];
        AVAudioRecorder *newRecorder =
            [[AVAudioRecorder alloc] initWithURL: soundFileURL
                                        settings: recordSettings
                                           error: nil];
        [recordSettings release];
        self.soundRecorder = newRecorder;
        [newRecorder release];
        soundRecorder.delegate = self;
        [soundRecorder prepareToRecord];
        [soundRecorder record];
        [recordOrStopButton setTitle: @"Stop" forState: UIControlStateNormal];
        [recordOrStopButton setTitle: @"Stop" forState: UIControlStateHighlighted];
        recording = YES;
    }
}
For more information on the AVAudioRecorder class, see AVAudioRecorder Class Reference.
Parsing Streamed Audio

To play streamed audio content, such as from a network connection, use Audio File Stream Services in concert with Audio Queue Services. Audio File Stream Services parses audio packets and metadata from common audio file container formats, including:

- AIFC
- AIFF
- CAF
- NeXT
- WAVE
Having retrieved audio packets, you can play back the recovered sound in any of the formats supported in
iOS, as listed in Table 1-1 (page 11).
For best performance, network streaming applications should use data from Wi-Fi connections. iOS lets you
determine which networks are reachable and available through its System Configuration framework and its
SCNetworkReachabilityRef opaque type, described in SCNetworkReachability Reference. For sample code,
see the Reachability sample in the iOS Dev Center.
To connect to a network stream, use interfaces from Core Foundation, such as the one described in
CFHTTPMessage Reference. Parse the network packets to recover audio packets using Audio File Stream
Services. Then buffer the audio packets and send them to a playback audio queue object.
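In outline, the parsing flow is: open a parser with two callbacks, then feed it bytes as they arrive from the network. A minimal sketch (the callback bodies and variable names are illustrative):

#import <AudioToolbox/AudioToolbox.h>

// Called when the parser discovers a property, such as the stream's audio format.
static void MyPropertyProc (void *inClientData,
                            AudioFileStreamID inStream,
                            AudioFileStreamPropertyID inPropertyID,
                            UInt32 *ioFlags) {
    // Typically: read kAudioFileStreamProperty_DataFormat, then create the queue.
}

// Called when the parser has recovered complete audio packets.
static void MyPacketsProc (void *inClientData,
                           UInt32 inNumberBytes,
                           UInt32 inNumberPackets,
                           const void *inInputData,
                           AudioStreamPacketDescription *inPacketDescriptions) {
    // Typically: copy the packets into audio queue buffers and enqueue them.
}

// Open a parser, then feed it each chunk of bytes received from the network:
AudioFileStreamID parser;
AudioFileStreamOpen (NULL, MyPropertyProc, MyPacketsProc, 0, &parser);
// For each received chunk:
//     AudioFileStreamParseBytes (parser, numBytesReceived, byteBuffer, 0);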
Audio File Stream Services relies on interfaces from Audio File Services, such as the
AudioFramePacketTranslation structure and the AudioFilePacketTableInfo structure. These are
described in Audio File Services Reference.
For more information on using streams, refer to Audio File Stream Services Reference.
Audio Unit Support in iOS

Table 1-6 describes some of the audio units that iOS provides.

Table 1-6  System-supplied audio units

3D Mixer unit: The 3D Mixer unit, of type kAudioUnitSubType_AU3DMixerEmbedded, lets you locate and mix sounds in 3D space. OpenAL is built on top of this unit and provides a higher-level interface well suited for games.

Multichannel Mixer unit: The Multichannel Mixer unit, of type kAudioUnitSubType_MultiChannelMixer, lets you mix multiple mono or stereo audio streams to a single stereo stream. It also supports left/right panning for each input. For a demonstration of how to use this audio unit, see the sample code project Audio Mixer (MixerHost).
Voice Processing I/O unit: The Voice Processing I/O unit, of type kAudioUnitSubType_VoiceProcessingIO, has the characteristics of the I/O unit and adds echo suppression and other features for two-way communication.

Generic Output unit: The Generic Output unit, of type kAudioUnitSubType_GenericOutput, supports converting to and from linear PCM format and can be used to start and stop an audio processing graph.

Converter unit: The Converter unit, of type kAudioUnitSubType_AUConverter, lets you convert audio data from one format to another.
For more information on using system audio units, see Audio Unit Hosting Guide for iOS. For reference
documentation, see Audio Unit Framework Reference and Audio Unit Processing Graph Services Reference. The
iOS Dev Center provides two sample-code projects that demonstrate use of system audio units: aurioTouch
and iPhoneMultichannelMixerTest.
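To use a system audio unit, you locate and instantiate it at runtime through the audio component interface. Here is a minimal sketch for obtaining the I/O unit (error handling omitted; the variable names are illustrative):

#import <AudioToolbox/AudioToolbox.h>

// Describe the audio unit to find: the I/O unit.
AudioComponentDescription ioDesc = {0};
ioDesc.componentType         = kAudioUnitType_Output;
ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Find the matching component and create an instance of the audio unit.
AudioComponent component = AudioComponentFindNext (NULL, &ioDesc);
AudioUnit ioUnit;
AudioComponentInstanceNew (component, &ioUnit);

// Configure the unit's properties here, then initialize and start it.
AudioUnitInitialize (ioUnit);
AudioOutputUnitStart (ioUnit);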
Best Practices for iOS Audio

Tips for Using Audio

Table 1-5 lists developer tips for using audio in iOS.

Table 1-5  Audio tips

Tip: Use compressed audio appropriately.
Action: For AAC, MP3, and ALAC (Apple Lossless) audio, decoding can take place using hardware-assisted codecs. While efficient, this is limited to one audio stream at a time. If you need to play multiple sounds simultaneously, store those sounds using the IMA4 (compressed) or linear PCM (uncompressed) format.

Tip: Convert to the data format and file format you need.
Action: The afconvert tool in Mac OS X lets you convert to a wide range of audio data formats and file types. See "Preferred Audio Formats in iOS" (page 27) and the afconvert man page.
Tip: Evaluate audio memory issues.
Action: When playing sound with Audio Queue Services, you write a callback that sends short segments of audio data to audio queue buffers. In some cases, loading an entire sound file to memory for playback, which minimizes disk access, is best. In other cases, loading just enough data at a time to keep the buffers full is best. Test and evaluate which strategy works best for your application.
Tip: Reduce audio file sizes by limiting sample rates, bit depths, and channels.
Action: Sample rate and the number of bits per sample have a direct impact on the size of your audio files. If you need to play many such sounds, or long-duration sounds, consider reducing these values to reduce the memory footprint of the audio data. For example, rather than use 44.1 kHz sampling rate for sound effects, you could use a 32 kHz (or possibly lower) sample rate and still provide reasonable quality. Using monophonic (single-channel) audio instead of stereo (two channel) reduces file size. For each sound asset, consider whether mono could suit your needs.
Tip: Pick the appropriate technology.
Action: Use OpenAL when you want a convenient, high-level interface for positioning sounds in a stereo field or when you need low latency playback. To parse audio packets from a file or a network stream, use Audio File Stream Services. For simple playback of single or multiple sounds, use the AVAudioPlayer class. For recording to a file, use the AVAudioRecorder class. For audio chat, use the Voice Processing I/O unit. To play audio resources synced from a user's iTunes library, use iPod Library Access. When your sole audio need is to play alerts and user-interface sound effects, use Core Audio's System Sound Services. For other audio applications, including playback of streamed audio, precise synchronization, and access to packets of incoming audio, use Audio Queue Services.
Tip: Code for low latency.
Action: For the lowest possible playback latency, use OpenAL or use the I/O unit directly.
Preferred Audio Formats in iOS

The afconvert tool lets you convert to a wide range of audio data formats and file types. See the afconvert man page, and enter afconvert -h at a shell prompt, for more information.
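For example, a command along these lines (the file names are illustrative) converts a WAV file to IMA4 data in a CAF file:

/usr/bin/afconvert -f caff -d ima4 sound.wav sound.caf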
- For compressed audio when playing one sound at a time, and when you don't need to play audio simultaneously with the iPod application, use the AAC format packaged in a CAF or m4a file.
- For less memory usage when you need to play multiple sounds simultaneously, use IMA4 (IMA/ADPCM) compression. This reduces file size but entails minimal CPU impact during decompression. As with linear PCM data, package IMA4 data in a CAF file.
CHAPTER 2
Using Video
Important: This document contains information that used to be in iOS Application Programming Guide. The
information in this document has not been updated specifically for iOS 4.0.
Playing Video Files
To initiate video playback, you must know the URL of the file you want to play. For files your application provides, this would typically be a pointer to a file in your application's bundle; however, it can also be a pointer to a file on a remote server. Use this URL to instantiate a new instance of the MPMoviePlayerController class. This class presides over the playback of your video file and manages user interactions, such as user taps in the transport controls (if shown). To start playback, call the play method described in MPMediaPlayback Protocol Reference.
Listing 2-1 shows a sample method that plays back the video at a specified URL. The play method is an asynchronous call that returns control to the caller while the movie plays. The movie controller loads the movie in a full-screen view and animates the movie into place on top of the application's existing content. When playback is finished, the movie controller sends a notification received by the application controller object, which releases the movie controller now that it is no longer needed.
Listing 2-1  Playing full-screen movies

- (void) playMovieAtURL: (NSURL *) theURL {
    // Create a movie player for the specified URL; see MPMoviePlayerController
    // Class Reference for additional playback options.
    MPMoviePlayerController *theMovie =
        [[MPMoviePlayerController alloc] initWithContentURL: theURL];

    // Register to receive a notification when the movie finishes playing.
    [[NSNotificationCenter defaultCenter]
        addObserver: self
           selector: @selector (myMovieFinishedCallback:)
               name: MPMoviePlayerPlaybackDidFinishNotification
             object: theMovie];

    // Movie playback is asynchronous, so this method returns immediately.
    [theMovie play];
}
// When the movie is done, release the controller.
- (void) myMovieFinishedCallback: (NSNotification *) aNotification {
    MPMoviePlayerController *theMovie = [aNotification object];
    [[NSNotificationCenter defaultCenter]
        removeObserver: self
                  name: MPMoviePlayerPlaybackDidFinishNotification
                object: theMovie];

    // Release the movie instance created in playMovieAtURL:
    [theMovie release];
}
REVISION HISTORY
Date          Notes
2010-09-01
2010-05-27    Updated Table 1-2 (page 12) for iOS 4.0 by clarifying support for AAC encoding.