Audio Queue Programming Guide
Contents
Introduction 6
    What Is Audio Queue Services? 6
    Who Should Read This Guide? 7
    Organization of This Document 7
    See Also 7

About Audio Queues 9
Recording Audio 23
    Define a Custom Structure to Manage State 23
    Write a Recording Audio Queue Callback 25
        The Recording Audio Queue Callback Declaration 25
        Writing an Audio Queue Buffer to Disk 26
        Enqueuing an Audio Queue Buffer 27
        A Full Recording Audio Queue Callback 27
    Write a Function to Derive Recording Audio Queue Buffer Size 29
    Set a Magic Cookie for an Audio File 31
    Set Up an Audio Format for Recording 32
    Create a Recording Audio Queue 34
        Creating a Recording Audio Queue 34
        Getting the Full Audio Format from an Audio Queue 35
    Create an Audio File 36
    Set an Audio Queue Buffer Size 37
    Prepare a Set of Audio Queue Buffers 38
    Record Audio 39
    Clean Up After Recording 40
Playing Audio 41
    Define a Custom Structure to Manage State 41
    Write a Playback Audio Queue Callback 43
        The Playback Audio Queue Callback Declaration 43
        Reading From a File into an Audio Queue Buffer 44
        Enqueuing an Audio Queue Buffer 44
        Stopping an Audio Queue 45
        A Full Playback Audio Queue Callback 46
    Write a Function to Derive Playback Audio Queue Buffer Size 47
    Open an Audio File for Playback 49
        Obtaining a CFURL Object for an Audio File 49
        Opening an Audio File 50
        Obtaining a File's Audio Data Format 51
    Create a Playback Audio Queue 52
    Set Sizes for a Playback Audio Queue 53
        Setting Buffer Size and Number of Packets to Read 53
        Allocating Memory for a Packet Descriptions Array 54
    Set a Magic Cookie for a Playback Audio Queue 55
    Allocate and Prime Audio Queue Buffers 57
    Set an Audio Queue's Playback Gain 59
    Start and Run an Audio Queue 59
    Clean Up After Playing 61
Listings

Recording Audio 23
    Listing 2-1   A custom structure for a recording audio queue 24
    Listing 2-2   The recording audio queue callback declaration 25
    Listing 2-3   Writing an audio queue buffer to disk 26
    Listing 2-4   Enqueuing an audio queue buffer after writing to disk 27
    Listing 2-5   A recording audio queue callback function 27
    Listing 2-6   Deriving a recording audio queue buffer size 29
    Listing 2-7   Setting a magic cookie for an audio file 31
    Listing 2-8   Specifying an audio queue's audio data format 33
    Listing 2-9   Creating a recording audio queue 34
    Listing 2-10  Getting the audio format from an audio queue 35
    Listing 2-11  Creating an audio file for recording 36
    Listing 2-12  Setting an audio queue buffer size 37
    Listing 2-13  Preparing a set of audio queue buffers 38
    Listing 2-14  Recording audio 39
    Listing 2-15  Cleaning up after recording 40

Playing Audio 41
    Listing 3-1   A custom structure for a playback audio queue 41
    Listing 3-2   The playback audio queue callback declaration 43
    Listing 3-3   Reading from an audio file into an audio queue buffer 44
    Listing 3-4   Enqueuing an audio queue buffer after reading from disk 45
    Listing 3-5   Stopping an audio queue 45
    Listing 3-6   A playback audio queue callback function 46
    Listing 3-7   Deriving a playback audio queue buffer size 48
    Listing 3-8   Obtaining a CFURL object for an audio file 50
    Listing 3-9   Opening an audio file for playback 50
    Listing 3-10  Obtaining a file's audio data format 51
    Listing 3-11  Creating a playback audio queue 52
    Listing 3-12  Setting playback audio queue buffer size and number of packets to read 53
    Listing 3-13  Allocating memory for a packet descriptions array 55
    Listing 3-14  Setting a magic cookie for a playback audio queue 56
    Listing 3-15  Allocating and priming audio queue buffers for playback 58
    Listing 3-16  Setting an audio queue's playback gain 59
    Listing 3-17  Starting and running an audio queue 59
    Listing 3-18  Cleaning up after playing an audio file 61
Introduction
This document describes how to use Audio Queue Services, a C programming interface in Core Audio's Audio Toolbox framework.
Audio Queue Services lets you record and play audio in any of the following formats:

- Linear PCM.
- Any compressed format supported natively on the Apple platform you are developing for.
- Any other format for which a user has an installed codec.
Audio Queue Services is high level. It lets your application use hardware recording and playback devices (such as microphones and loudspeakers) without knowledge of the hardware interface. It also lets you use sophisticated codecs without knowledge of how the codecs work. At the same time, Audio Queue Services supports some advanced features. It provides fine-grained timing control to support scheduled playback and synchronization. You can use it to synchronize playback of multiple audio queues and to synchronize audio with video.
Note: Audio Queue Services provides features similar to those previously provided by the Sound Manager in Mac OS X, and adds features such as synchronization. The Sound Manager is deprecated in Mac OS X v10.5 and does not work with 64-bit applications. Apple recommends Audio Queue Services for all new development and as a replacement for the Sound Manager in existing Mac OS X applications.
Audio Queue Services is a pure C interface that you can use in Cocoa applications as well as in Mac OS X command-line tools. To help keep the focus on Audio Queue Services, the code examples in this document are sometimes simplified by using C++ classes from the Core Audio SDK. However, neither the SDK nor the C++ language is necessary to use Audio Queue Services.
To use this document, you should be familiar with:

- The C programming language
- Using Xcode to build iOS or Mac OS X applications
- The terminology described in Core Audio Glossary
This document is organized into the following chapters:

- About Audio Queues (page 9) describes the capabilities, architecture, and internal workings of audio queues.
- Recording Audio (page 23) describes how to record audio.
- Playing Audio (page 41) describes how to play audio.
See Also
You may find the following documents helpful:
- The companion document Audio Queue Services Reference provides descriptions of the functions, callbacks, constants, and data types in Audio Queue Services.
- Core Audio Data Types Reference describes data types essential for using Audio Queue Services.
- Core Audio Overview provides a summary of the Core Audio frameworks, and includes an appendix on Supported Audio File and Data Formats in OS X.
- Core Audio Glossary defines key terms used in the Core Audio documentation.
About Audio Queues

In this chapter you learn about the capabilities, architecture, and internal workings of audio queues. You get introduced to audio queues, audio queue buffers, and the callback functions that audio queues use for recording or playback. You also find out about audio queue states and parameters. By the end of this chapter you will have gained the conceptual understanding you need to use this technology effectively.
An audio queue does the work of:

- Connecting to audio hardware
- Managing memory
- Employing codecs, as needed, for compressed audio formats
- Mediating recording or playback
You can use audio queues with other Core Audio interfaces, and a relatively small amount of custom code, to create a complete digital audio recording or playback solution in your application.
An audio queue has:

- A set of audio queue buffers, each of which is a temporary repository for some audio data
- A buffer queue, an ordered list for the audio queue buffers
- An audio queue callback function that you write
The architecture varies depending on whether an audio queue is for recording or playback. The differences are in how the audio queue connects its input and output, and in the role of the callback function.
The input side of a recording audio queue typically connects to external audio hardware, such as a microphone. In iOS, the audio comes from the device connected by the user: a built-in microphone or headset microphone, for example. In the default case for Mac OS X, the audio comes from the system's default audio input device as set by a user in System Preferences.

The output side of a recording audio queue makes use of a callback function that you write. When recording to disk, the callback writes buffers of new audio data, which it receives from its audio queue, to an audio file. However, recording audio queues can be used in other ways. You could also use one, for example, in a realtime audio analyzer. In such a case, your callback would provide audio data directly to your application instead of writing it to disk. You'll learn more about this callback in The Recording Audio Queue Callback Function (page 17).

Every audio queue, whether for recording or playback, has one or more audio queue buffers. These buffers are arranged in a specific sequence called a buffer queue. In the figure, the audio queue buffers are numbered according to the order in which they are filled, which is the same order in which they are handed off to the callback. You'll learn how an audio queue uses its buffers in The Buffer Queue and Enqueuing (page 12).
In a playback audio queue, the callback is on the input side. The callback is responsible for obtaining audio data from disk (or some other source) and handing it off to the audio queue. Playback callbacks also tell their audio queues to stop when there's no more data to play. You'll learn more about this callback in The Playback Audio Queue Callback Function (page 18).

A playback audio queue's output typically connects to external audio hardware, such as a loudspeaker. In iOS, the audio goes to the device chosen by the user: the receiver or the headset, for example. In the default case in Mac OS X, the audio goes to the system's default audio output device as set by a user in System Preferences.
The mAudioData field, highlighted in the code listing, points to the buffer per se: a block of memory that serves as a container for transient blocks of audio data being played or recorded. The information in the other fields helps an audio queue manage the buffer.
An audio queue can use any number of buffers. Your application specifies how many. A typical number is three. This allows one to be busy with, say, writing to disk while another is being filled with fresh audio data. The third buffer is available if needed to compensate for such things as disk I/O delays. Figure 1-3 (page 13) illustrates this. Audio queues perform memory management for their buffers.
An audio queue allocates a buffer when you call the AudioQueueAllocateBuffer function. When you release an audio queue by calling the AudioQueueDispose function, the queue releases its buffers.
This improves the robustness of the recording and playback features you add to your application. It also helps optimize resource usage. For a complete description of the AudioQueueBuffer data structure, see Audio Queue Services Reference.
The audio queue hands off filled buffers of audio data to your callback in the order in which they were acquired. Figure 1-3 illustrates how recording works when using an audio queue.
Figure 1-3 The recording process
In step 1 of Figure 1-3, recording begins. The audio queue fills a buffer with acquired data. In step 2, the first buffer has been filled. The audio queue invokes the callback, handing it the full buffer (buffer 1). The callback (step 3) writes the contents of the buffer to an audio file. At the same time, the audio queue fills another buffer (buffer 2) with freshly acquired data.
In step 4, the callback enqueues the buffer (buffer 1) that it has just written to disk, putting it in line to be filled again. The audio queue again invokes the callback (step 5), handing it the next full buffer (buffer 2). The callback (step 6) writes the contents of this buffer to the audio file. This looping steady state continues until the user stops the recording.
The audio queue hands off played buffers of audio data to your callback in the order in which they were played. The callback reads new audio data into a buffer and then enqueues it. Figure 1-4 illustrates how playback works when using an audio queue.
Figure 1-4 The playback process
In step 1 of Figure 1-4, the application primes the playback audio queue. The application invokes the callback once for each of the audio queue buffers, filling them and adding them to the buffer queue. Priming ensures that playback can start instantly when your application calls the AudioQueueStart function (step 2). In step 3, the audio queue sends the first buffer (buffer 1) to output. As soon as the first buffer has been played, the playback audio queue enters a looping steady state. The audio queue starts playing the next buffer (buffer 2, step 4) and invokes the callback (step 5), handing it the just-played buffer (buffer 1). The callback (step 6) fills the buffer from the audio file and then enqueues it for playback.
The AudioQueueEnqueueBufferWithParameters function gives you additional control when enqueuing a buffer:

- Set the precise playback time for a buffer. This lets you support synchronization.
- Trim frames at the start or end of an audio queue buffer. This lets you remove leading or trailing silence.
- Set the playback gain at the granularity of a buffer.

For more about setting playback gain, see Audio Queue Parameters (page 21). For a complete description of the AudioQueueEnqueueBufferWithParameters function, see Audio Queue Services Reference.
A recording audio queue, in invoking your callback, supplies everything the callback needs to write the next set of audio data to the audio file:

- inUserData is, typically, a custom structure that you've set up to contain state information for the audio queue and its buffers, an audio file object (of type AudioFileID) representing the file you're writing to, and audio data format information for the file.
- inAQ is the audio queue that invoked the callback.
- inBuffer is an audio queue buffer, freshly filled by the audio queue, containing the new data your callback needs to write to disk. The data is already formatted according to the format you specify in the custom structure (passed in the inUserData parameter). For more on this, see Using Codecs and Audio Data Formats (page 18).
- inStartTime is the sample time of the first sample in the buffer. For basic recording, your callback doesn't use this parameter.
- inNumPackets is the number of packets of audio data in the inBuffer parameter. If you are recording to a VBR (variable bitrate) format, the audio queue supplies a value for this parameter to your callback, which in turn passes it on to the AudioFileWritePackets function. CBR (constant bitrate) formats don't use packet descriptions. For a CBR recording, the audio queue sets this parameter and the inPacketDescs parameter to NULL.
- inPacketDescs is the set of packet descriptions corresponding to the samples in the buffer. Again, the audio queue supplies the value for this parameter, if the audio data is in a VBR format, and your callback passes it on to the AudioFileWritePackets function (declared in the AudioFile.h header file).

For more information on the recording callback, see Recording Audio (page 23) in this document, and see Audio Queue Services Reference.
A playback audio queue, in invoking your callback, supplies what the callback needs to read the next set of audio data from the audio file:

- inUserData is, typically, a custom structure that you've set up to contain state information for the audio queue and its buffers, an audio file object (of type AudioFileID) representing the file you're reading from, and audio data format information for the file. In the case of a playback audio queue, your callback keeps track of the current packet index using a field in this structure.
- inAQ is the audio queue that invoked the callback.
- inBuffer is an audio queue buffer, made available by the audio queue, that your callback is to fill with the next set of data read from the file being played.

If your application is playing back VBR data, the callback needs to get the packet information for the audio data it's reading. It does this by calling the AudioFileReadPackets function, declared in the AudioFile.h header file. The callback then places the packet information in the custom data structure to make it available to the playback audio queue.

For more information on the playback callback, see Playing Audio (page 41) in this document, and see Audio Queue Services Reference.
Here's how this works. Each audio queue has an audio data format, represented in an AudioStreamBasicDescription structure. When you specify the format, in the mFormatID field of the structure, the audio queue uses the appropriate codec. You then specify sample rate and channel count, and that's all there is to it. You'll see examples of setting audio data format in Recording Audio (page 23) and Playing Audio (page 41). A recording audio queue makes use of an installed codec as shown in Figure 1-5.
Figure 1-5 Audio format conversion during recording
In step 1 of Figure 1-5, your application tells an audio queue to start recording, and also tells it the data format to use. In step 2, the audio queue obtains new audio data and converts it, using a codec, according to the format you've specified. The audio queue then invokes the callback, handing it a buffer containing appropriately formatted audio data. In step 3, your callback writes the formatted audio data to disk. Again, your callback does not need to know about the data formats.
A playback audio queue makes use of an installed codec as shown in Figure 1-6.
Figure 1-6 Audio format conversion during playback
In step 1 of Figure 1-6, your application tells an audio queue to start playing, and also tells it the data format contained in the audio file to be played. In step 2, the audio queue invokes your callback, which reads data from the audio file. The callback hands off the data, in its original format, to the audio queue. In step 3, the audio queue uses the appropriate codec and then sends the audio along to the destination.

An audio queue can make use of any installed codec, whether native to Mac OS X or provided by a third party. To designate a codec to use, you supply its four-character code ID to an audio queue's AudioStreamBasicDescription structure. You'll see an example of this in Recording Audio (page 23). Mac OS X includes a wide range of audio codecs, as listed in the format IDs enumeration in the CoreAudioTypes.h header file and as documented in Core Audio Data Types Reference. You can determine the codecs available on a system by using the interfaces in the AudioFormat.h header file, in the Audio Toolbox Framework. You can display the codecs on a system using the Fiendishthngs application, available as sample code at http://developer.apple.com/samplecode/Fiendishthngs/.
Audio Queue Services provides functions to control an audio queue:

- Start (AudioQueueStart). Call to initiate recording or playback.
- Prime (AudioQueuePrime). For playback, call before calling AudioQueueStart to ensure that there is data available immediately for the audio queue to play. This function is not relevant to recording.
- Stop (AudioQueueStop). Call to reset the audio queue (see the description below for AudioQueueReset) and to then stop recording or playback. A playback audio queue callback calls this function when there's no more data to play.
- Pause (AudioQueuePause). Call to pause recording or playback without affecting buffers or resetting the audio queue. To resume, call the AudioQueueStart function.
- Flush (AudioQueueFlush). Call after enqueuing the last audio queue buffer to ensure that all buffered data, as well as all audio data in the midst of processing, gets recorded or played.
- Reset (AudioQueueReset). Call to immediately silence an audio queue, remove all buffers from previously scheduled use, and reset all decoder and DSP state.
You can stop an audio queue synchronously or asynchronously:

- Synchronous stopping happens immediately, without regard for previously buffered audio data.
- Asynchronous stopping happens after all queued buffers have been played or recorded.
See Audio Queue Services Reference for a complete description of each of these functions, including more information on synchronous and asynchronous stopping of audio queues.
You can set audio queue parameters in two ways:

- Per audio queue, using the AudioQueueSetParameter function. This lets you change settings for an audio queue directly. Such changes take effect immediately.
- Per audio queue buffer, using the AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are, in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the audio queue buffer begins playing.
In both cases, parameter settings for an audio queue remain in effect until you change them.
You can access an audio queue's current parameter values at any time with the AudioQueueGetParameter function. See Audio Queue Services Reference for complete descriptions of the functions for getting and setting parameter values.
Recording Audio
When you record using Audio Queue Services, the destination can be just about anything: an on-disk file, a network connection, an object in memory, and so on. This chapter describes the most common scenario: basic recording to an on-disk file.

Note: This chapter describes an ANSI-C based implementation for recording, with some use of C++ classes from the Mac OS X Core Audio SDK. For an Objective-C based example, see the SpeakHere sample code in the iOS Dev Center.
To add recording functionality to your application, you typically perform the following steps:

1. Define a custom structure to manage state, format, and path information.
2. Write an audio queue callback function to perform the actual recording.
3. Optionally, write code to determine a good size for the audio queue buffers. Write code to work with magic cookies, if you'll be recording in a format that uses cookies.
4. Fill the fields of the custom structure. This includes specifying the data stream that the audio queue sends to the file it's recording into, as well as the path to that file.
5. Create a recording audio queue and ask it to create a set of audio queue buffers. Also create a file to record into.
6. Tell the audio queue to start recording.
7. When done, tell the audio queue to stop and then dispose of it. The audio queue disposes of its buffers.
Listing 2-1  A custom structure for a recording audio queue

static const int kNumberBuffers = 3;                              // 1
struct AQRecorderState {
    AudioStreamBasicDescription  mDataFormat;                     // 2
    AudioQueueRef                mQueue;                          // 3
    AudioQueueBufferRef          mBuffers[kNumberBuffers];        // 4
    AudioFileID                  mAudioFile;                      // 5
    UInt32                       bufferByteSize;                  // 6
    SInt64                       mCurrentPacket;                  // 7
    bool                         mIsRunning;                      // 8
};

Here's a description of the fields in this structure:

1. Sets the number of audio queue buffers to use.
2. An AudioStreamBasicDescription structure (from CoreAudioTypes.h) representing the audio data format to write to disk. This format gets used by the audio queue specified in the mQueue field. The mDataFormat field gets filled initially by code in your program, as described in Set Up an Audio Format for Recording (page 32). It is good practice to then update the value of this field by querying the audio queue's kAudioQueueProperty_StreamDescription property, as described in Getting the Full Audio Format from an Audio Queue (page 35). On Mac OS X v10.5, use the kAudioConverterCurrentInputStreamDescription property instead. For details on the AudioStreamBasicDescription structure, see Core Audio Data Types Reference.
3. The recording audio queue created by your application.
4. An array holding pointers to the audio queue buffers managed by the audio queue.
5. An audio file object representing the file into which your program records audio data.
6. The size, in bytes, for each audio queue buffer. This value is calculated in these examples in the DeriveBufferSize function, after the audio queue is created and before it is started. See Write a Function to Derive Recording Audio Queue Buffer Size (page 29).
7. The packet index for the first packet to be written from the current audio queue buffer.
8. A Boolean value indicating whether or not the audio queue is running.
A recording audio queue callback performs two main tasks:

- Writes the contents of a newly filled audio queue buffer to the audio file you're recording into
- Enqueues the audio queue buffer (whose contents were just written to disk) to the buffer queue

This section shows an example callback declaration, then describes these two tasks separately, and finally presents an entire recording callback. For an illustration of the role of a recording audio queue callback, you can refer back to Figure 1-3 (page 13).
Listing 2-2  The recording audio queue callback declaration

static void HandleInputBuffer (
    void                                 *aqData,             // 1
    AudioQueueRef                        inAQ,                // 2
    AudioQueueBufferRef                  inBuffer,            // 3
    const AudioTimeStamp                 *inStartTime,        // 4
    UInt32                               inNumPackets,        // 5
    const AudioStreamPacketDescription   *inPacketDesc        // 6
)

Here's how the parameters are used:

1. Typically, aqData is a custom structure that contains state data for the audio queue, as described in Define a Custom Structure to Manage State (page 23).
2. The audio queue that owns this callback.
3. The audio queue buffer containing the incoming audio data to record.
4. The sample time of the first sample in the audio queue buffer (not needed for simple recording).
5. The number of packet descriptions in the inPacketDesc parameter. A value of 0 indicates CBR data.
6. For compressed audio data formats that require packet descriptions, the packet descriptions produced by the encoder for the packets in the buffer.
Listing 2-3  Writing an audio queue buffer to disk

AudioFileWritePackets (                     // 1
    pAqData->mAudioFile,                    // 2
    false,                                  // 3
    inBuffer->mAudioDataByteSize,           // 4
    inPacketDesc,                           // 5
    pAqData->mCurrentPacket,                // 6
    &inNumPackets,                          // 7
    inBuffer->mAudioData                    // 8
);

1. The AudioFileWritePackets function, declared in the AudioFile.h header file, writes the contents of a buffer to an audio data file.
2. The audio file object (of type AudioFileID) that represents the audio file to write to. The pAqData variable is a pointer to the data structure described in Listing 2-1 (page 24).
3. Uses a value of false to indicate that the function should not cache the data when writing.
4. The number of bytes of audio data being written. The inBuffer variable represents the audio queue buffer handed to the callback by the audio queue.
5. An array of packet descriptions for the audio data. A value of NULL indicates no packet descriptions are required (such as for CBR audio data).
6. The packet index for the first packet to be written.
7. On input, the number of packets to write. On output, the number of packets actually written.
8. The new audio data to write to the audio file.
Listing 2-4  Enqueuing an audio queue buffer after writing to disk

AudioQueueEnqueueBuffer (                   // 1
    pAqData->mQueue,                        // 2
    inBuffer,                               // 3
    0,                                      // 4
    NULL                                    // 5
);

1. The AudioQueueEnqueueBuffer function adds an audio queue buffer to an audio queue's buffer queue.
2. The audio queue to add the designated audio queue buffer to. The pAqData variable is a pointer to the data structure described in Listing 2-1.
3. The audio queue buffer to enqueue.
4. The number of packet descriptions in the audio queue buffer's data. Set to 0 because this parameter is unused for recording.
5. The array of packet descriptions describing the audio queue buffer's data. Set to NULL because this parameter is unused for recording.
Listing 2-5  A recording audio queue callback function

static void HandleInputBuffer (
    void                                *aqData,
    AudioQueueRef                       inAQ,
    AudioQueueBufferRef                 inBuffer,
    const AudioTimeStamp                *inStartTime,
    UInt32                              inNumPackets,
    const AudioStreamPacketDescription  *inPacketDesc
) {
    AQRecorderState *pAqData = (AQRecorderState *) aqData;       // 1

    if (inNumPackets == 0 &&                                     // 2
        pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets =
            inBuffer->mAudioDataByteSize /
                pAqData->mDataFormat.mBytesPerPacket;

    if (AudioFileWritePackets (                                  // 3
            pAqData->mAudioFile,
            false,
            inBuffer->mAudioDataByteSize,
            inPacketDesc,
            pAqData->mCurrentPacket,
            &inNumPackets,
            inBuffer->mAudioData
        ) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;                 // 4
    }

    if (pAqData->mIsRunning == 0)                                // 5
        return;

    AudioQueueEnqueueBuffer (                                    // 6
        pAqData->mQueue,
        inBuffer,
        0,
        NULL
    );
}
1. The custom structure supplied to the audio queue object upon instantiation, including an audio file object representing the audio file to record into as well as a variety of state data. See Define a Custom Structure to Manage State (page 23).
2. If the audio queue buffer contains CBR data, calculate the number of packets in the buffer. This number equals the total bytes of data in the buffer divided by the (constant) number of bytes per packet. For VBR data, the audio queue supplies the number of packets in the buffer when it invokes the callback.
3. Writes the contents of the buffer to the audio data file. For a detailed description, see Writing an Audio Queue Buffer to Disk (page 26).
4. If successful in writing the audio data, increment the audio data file's packet index to be ready for writing the next buffer's worth of audio data.
5. If the audio queue has stopped, return.
6. Enqueues the audio queue buffer whose contents have just been written to the audio file. For a detailed description, see Enqueuing an Audio Queue Buffer (page 27).
Write a Function to Derive Recording Audio Queue Buffer Size

Listing 2-6  Deriving a recording audio queue buffer size

void DeriveBufferSize (
    AudioQueueRef                audioQueue,                  // 1
    AudioStreamBasicDescription  &ASBDescription,             // 2
    Float64                      seconds,                     // 3
    UInt32                       *outBufferSize               // 4
) {
    static const int maxBufferSize = 0x50000;                 // 5

    int maxPacketSize = ASBDescription.mBytesPerPacket;       // 6
    if (maxPacketSize == 0) {                                 // 7
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty (
            audioQueue,
            kAudioQueueProperty_MaximumOutputPacketSize,
            &maxPacketSize,
            &maxVBRPacketSize
        );
    }

    Float64 numBytesForTime =
        ASBDescription.mSampleRate * maxPacketSize * seconds; // 8
    *outBufferSize =
        UInt32 (numBytesForTime < maxBufferSize ?
            numBytesForTime : maxBufferSize);                 // 9
}
1. The audio queue that owns the buffers whose size you want to specify.
2. The AudioStreamBasicDescription structure for the audio queue.
3. The size you are specifying for each audio queue buffer, in terms of seconds of audio.
4. On output, the size for each audio queue buffer, in terms of bytes.
5. An upper bound for the audio queue buffer size, in bytes. In this example, the upper bound is set to 320 KB. This corresponds to approximately five seconds of stereo, 24-bit audio at a sample rate of 96 kHz.
6. For CBR audio data, get the (constant) packet size from the AudioStreamBasicDescription structure. Use this value as the maximum packet size. This assignment has the side effect of determining whether the audio data to be recorded is CBR or VBR. If it is VBR, the audio queue's AudioStreamBasicDescription structure lists the value of bytes-per-packet as 0.
7. For VBR audio data, query the audio queue to get the estimated maximum packet size.
8. Derive the buffer size, in bytes.
9. Limit the buffer size, if needed, to the previously set upper bound.
Listing 2-7  Setting a magic cookie for an audio file

OSStatus SetMagicCookieForFile (
    AudioQueueRef inQueue,                                    // 1
    AudioFileID   inFile                                      // 2
) {
    OSStatus result = noErr;                                  // 3
    UInt32 cookieSize;                                        // 4

    if (
        AudioQueueGetPropertySize (                           // 5
            inQueue,
            kAudioQueueProperty_MagicCookie,
            &cookieSize
        ) == noErr
    ) {
        char *magicCookie = (char *) malloc (cookieSize);     // 6
        if (
            AudioQueueGetProperty (                           // 7
                inQueue,
                kAudioQueueProperty_MagicCookie,
                magicCookie,
                &cookieSize
            ) == noErr
        )
            result = AudioFileSetProperty (                   // 8
                inFile,
                kAudioFilePropertyMagicCookieData,
                cookieSize,
                magicCookie
            );
        free (magicCookie);                                   // 9
    }
    return result;
}
1. The audio queue you're using for recording.
2. The audio file you're recording into.
3. A result variable that indicates the success or failure of this function.
4. A variable to hold the magic cookie data size.
5. Gets the data size of the magic cookie from the audio queue and stores it in the cookieSize variable.
6. Allocates an array of bytes to hold the magic cookie information.
7. Gets the magic cookie by querying the audio queue's kAudioQueueProperty_MagicCookie property.
8. Sets the magic cookie for the audio file you're recording into. The AudioFileSetProperty function is declared in the AudioFile.h header file.
9. Frees the memory for the temporary cookie variable.
Set Up an Audio Format for Recording

When setting up the audio data format, you specify:

- Audio data format type (such as linear PCM, AAC, etc.)
- Sample rate (such as 44.1 kHz)
- Number of audio channels (such as 2, for stereo)
- Bit depth (such as 16 bits)
- Frames per packet (linear PCM, for example, uses one frame per packet)
- Audio file type (such as CAF, AIFF, etc.)
- Details of the audio data format required for the file type
Listing 2-8 illustrates setting up an audio format for recording, using a fixed choice for each attribute. In production code, you'd typically allow the user to specify some or all aspects of the audio format. With either approach, the goal is to fill the mDataFormat field of the AQRecorderState custom structure, described in Define a Custom Structure to Manage State (page 23).
Listing 2-8  Specifying an audio queue's audio data format
AQRecorderState aqData;                                       // 1

aqData.mDataFormat.mFormatID         = kAudioFormatLinearPCM; // 2
aqData.mDataFormat.mSampleRate       = 44100.0;               // 3
aqData.mDataFormat.mChannelsPerFrame = 2;                     // 4
aqData.mDataFormat.mBitsPerChannel   = 16;                    // 5
aqData.mDataFormat.mBytesPerPacket   =                        // 6
    aqData.mDataFormat.mBytesPerFrame =
        aqData.mDataFormat.mChannelsPerFrame * sizeof (SInt16);
aqData.mDataFormat.mFramesPerPacket  = 1;                     // 7

AudioFileTypeID fileType             = kAudioFileAIFFType;    // 8
aqData.mDataFormat.mFormatFlags      =                        // 9
    kLinearPCMFormatFlagIsBigEndian
    | kLinearPCMFormatFlagIsSignedInteger
    | kLinearPCMFormatFlagIsPacked;
1. Creates an instance of the AQRecorderState custom structure. The structure's mDataFormat field contains an AudioStreamBasicDescription structure. The values set in the mDataFormat field provide an initial definition of the audio format for the audio queue, which is also the audio format for the file you record into. In Listing 2-10 (page 35), you obtain a more complete specification of the audio format, which Core Audio provides to you based on the format type and file type.
2. Defines the audio data format type as linear PCM. See Core Audio Data Types Reference for a complete listing of the available data formats.
3. Defines the sample rate as 44.1 kHz.
4. Defines the number of channels as 2.
5. Defines the bit depth per channel as 16.
6. Sets the number of bytes per packet, and the number of bytes per frame, to 4 (that is, 2 channels times 2 bytes per sample).
7. Defines the number of frames per packet as 1.
8. Defines the file type as AIFF. See the audio file types enumeration in the AudioFile.h header file for a complete listing of the available file types. You can specify any file type for which there is an installed codec, as described in Using Codecs and Audio Data Formats (page 18).
9. Sets the format flags needed for the specified file type.
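The relationships among these fields can be checked with a little standalone arithmetic. The sketch below is plain C, using a minimal local stand-in struct rather than the real Core Audio AudioStreamBasicDescription type (an assumption made so the sketch is self-contained); it fills in the same values and derives the per-frame and per-packet byte counts:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the AudioStreamBasicDescription fields used here;
   the real type lives in CoreAudioTypes.h. */
typedef struct {
    double   mSampleRate;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mBytesPerFrame;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
} PCMFormatSketch;

/* For interleaved linear PCM: bytes per frame is channels times bytes per
   sample, and one packet holds exactly one frame. */
static PCMFormatSketch MakeStereo16 (void) {
    PCMFormatSketch fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8);
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;
    return fmt;
}
```

As the callouts state, this works out to 4 bytes per frame and 4 bytes per packet for 16-bit stereo.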
1. The AudioQueueNewInput function creates a new recording audio queue.
2. The audio data format to use for the recording. See Set Up an Audio Format for Recording (page 32).
3. The callback function to use with the recording audio queue. See Write a Recording Audio Queue Callback (page 25).
4. The custom data structure for the recording audio queue. See Define a Custom Structure to Manage State (page 23).
5. The run loop on which the callback will be invoked. Use NULL to specify default behavior, in which the callback will be invoked on a thread internal to the audio queue. This is typical use; it allows the audio queue to record while your application's user interface thread waits for user input to stop the recording.
6. The run loop modes in which the callback can be invoked. Normally, use the kCFRunLoopCommonModes constant here.
7. Reserved. Must be 0.
8. On output, the newly allocated recording audio queue.
UInt32 dataFormatSize = sizeof (aqData.mDataFormat);    // 1

AudioQueueGetProperty (                                 // 2
    aqData.mQueue,                                      // 3
    kAudioQueueProperty_StreamDescription,              // 4
    &aqData.mDataFormat,                                // 5
    &dataFormatSize                                     // 6
);
1. Gets an expected property value size to use when querying the audio queue about its audio data format.
2. The AudioQueueGetProperty function obtains the value for a specified property in an audio queue.
3. The audio queue to obtain the audio data format from.
4. The property ID for obtaining the value of the audio queue's data format.
5. On output, the full audio data format, in the form of an AudioStreamBasicDescription structure, obtained from the audio queue.
6. On input, the expected size of the AudioStreamBasicDescription structure. On output, the actual size. Your recording application does not need to make use of this value.
CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation (           // 1
        NULL,                                           // 2
        (const UInt8 *) filePath,                       // 3
        strlen (filePath),                              // 4
        false                                           // 5
    );

AudioFileCreateWithURL (                                // 6
    audioFileURL,                                       // 7
    fileType,                                           // 8
    &aqData.mDataFormat,                                // 9
    kAudioFileFlags_EraseFile,                          // 10
    &aqData.mAudioFile                                  // 11
);
1. The CFURLCreateFromFileSystemRepresentation function, declared in the CFURL.h header file, creates a CFURL object representing a file to record into.
2. Use NULL (or kCFAllocatorDefault) to use the current default memory allocator.
3. The file-system path you want to convert to a CFURL object. In production code, you would typically obtain a value for filePath from the user.
4. The number of bytes in the file-system path.
5. A value of false indicates that filePath represents a file, not a directory.
6. The AudioFileCreateWithURL function, from the AudioFile.h header file, creates a new audio file or initializes an existing file.
7. The URL at which to create the new audio file, or to initialize in the case of an existing file. The URL was derived from the CFURLCreateFromFileSystemRepresentation function in step 1.
8. The file type for the new file. In the example code in this chapter, this was previously set to AIFF by way of the kAudioFileAIFFType file type constant. See Set Up an Audio Format for Recording (page 32).
9. The data format of the audio that will be recorded into the file, specified as an AudioStreamBasicDescription structure. In the example code for this chapter, this was also set in Set Up an Audio Format for Recording (page 32).
10. Erases the file, in the case that the file already exists.
11. On output, an audio file object (of type AudioFileID) representing the audio file to record into.
1. The DeriveBufferSize function, described in Write a Function to Derive Recording Audio Queue Buffer Size (page 29), sets an appropriate audio queue buffer size.
2. The audio queue that you're setting buffer size for.
3. The audio data format for the file you are recording. See Set Up an Audio Format for Recording (page 32).
4. The number of seconds of audio that each audio queue buffer should hold. One half second, as set here, is typically a good choice.
5. On output, the size for each audio queue buffer, in bytes. This value is placed in the custom structure for the audio queue.
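The arithmetic the recording-side DeriveBufferSize function performs can be sketched in plain C. This standalone version assumes a constant-bit-rate format (so mBytesPerPacket is nonzero) and uses a local stand-in struct instead of the Core Audio types; the real function additionally queries the audio queue for its maximum packet size when the format is VBR.

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for the AudioStreamBasicDescription fields this
   calculation needs (an assumption for this sketch). */
typedef struct {
    double   mSampleRate;
    uint32_t mBytesPerPacket;   /* nonzero for CBR formats */
} ASBDSketch;

/* Derive a recording buffer size: enough bytes to hold `seconds` of
   audio, capped at an upper bound to keep allocations reasonable. */
static uint32_t DeriveRecordBufferSize (const ASBDSketch *fmt, double seconds) {
    static const uint32_t maxBufferSize = 0x50000;   /* 320 KB cap */
    double numBytesForTime = fmt->mSampleRate * fmt->mBytesPerPacket * seconds;
    return (uint32_t) (numBytesForTime < maxBufferSize ? numBytesForTime
                                                       : maxBufferSize);
}
```

For the chapter's 16-bit stereo format at 44.1 kHz (4 bytes per packet) and half-second buffers, this yields 88,200 bytes per buffer.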
for (int i = 0; i < kNumberBuffers; ++i) {              // 1
    AudioQueueAllocateBuffer (                          // 2
        aqData.mQueue,                                  // 3
        aqData.bufferByteSize,                          // 4
        &aqData.mBuffers[i]                             // 5
    );

    AudioQueueEnqueueBuffer (                           // 6
        aqData.mQueue,                                  // 7
        aqData.mBuffers[i],                             // 8
        0,                                              // 9
        NULL                                            // 10
    );
}
1. Iterates to allocate and enqueue each audio queue buffer.
2. The AudioQueueAllocateBuffer function asks an audio queue to allocate an audio queue buffer.
3. The audio queue that performs the allocation and that will own the buffer.
4. The size, in bytes, for the new audio queue buffer being allocated. See Write a Function to Derive Recording Audio Queue Buffer Size (page 29).
5. On output, the newly allocated audio queue buffer. The pointer to the buffer is placed in the custom structure you're using with the audio queue.
6. The AudioQueueEnqueueBuffer function adds an audio queue buffer to the end of a buffer queue.
7. The audio queue whose buffer queue you are adding the buffer to.
8. The audio queue buffer you are enqueuing.
9. The number of packet descriptions. This parameter is unused when enqueuing a buffer for recording.
10. The array of packet descriptions. This parameter is likewise unused when enqueuing a buffer for recording.
Record Audio
All of the preceding code has led up to the very simple process of recording, as shown in Listing 2-14.
Listing 2-14 Recording audio
aqData.mCurrentPacket = 0;                              // 1
aqData.mIsRunning = true;                               // 2

AudioQueueStart (                                       // 3
    aqData.mQueue,                                      // 4
    NULL                                                // 5
);

// Wait, on user interface thread, until user stops the recording

AudioQueueStop (                                        // 6
    aqData.mQueue,                                      // 7
    true                                                // 8
);

aqData.mIsRunning = false;                              // 9
1. Initializes the packet index to 0 to begin recording at the start of the audio file.
2. Sets a flag in the custom structure to indicate that the audio queue is running. This flag is used by the recording audio queue callback.
3. The AudioQueueStart function starts the audio queue, on its own thread.
4. The audio queue to start.
5. Uses NULL to indicate that the audio queue should start recording immediately.
6. The AudioQueueStop function stops and resets the recording audio queue.
7. The audio queue to stop.
8. Use true to use synchronous stopping. See Audio Queue Control and State (page 20) for an explanation of synchronous and asynchronous stopping.
9. Sets a flag in the custom structure to indicate that the audio queue is not running.
AudioQueueDispose (                                     // 1
    aqData.mQueue,                                      // 2
    true                                                // 3
);

AudioFileClose (aqData.mAudioFile);                     // 4
1. The AudioQueueDispose function disposes of the audio queue and all of its resources, including its buffers.
2. The audio queue you want to dispose of.
3. Use true to dispose of the audio queue synchronously (that is, immediately).
4. Closes the audio file that was used for recording. The AudioFileClose function is declared in the AudioFile.h header file.
Playing Audio
When you play audio using Audio Queue Services, the source can be just about anything: an on-disk file, a software-based audio synthesizer, an object in memory, and so on. This chapter describes the most common scenario: playing back an on-disk file.

Note: This chapter describes an ANSI-C based implementation for playback, with some use of C++ classes from the Mac OS X Core Audio SDK. For an Objective-C based example, see the SpeakHere sample code in the iOS Dev Center.
To add playback functionality to your application, you typically perform the following steps:
1. Define a custom structure to manage state, format, and path information.
2. Write an audio queue callback function to perform the actual playback.
3. Write code to determine a good size for the audio queue buffers.
4. Open an audio file for playback and determine its audio data format.
5. Create a playback audio queue and configure it for playback.
6. Allocate and enqueue audio queue buffers. Tell the audio queue to start playing. When done, the playback callback tells the audio queue to stop.
7. Dispose of the audio queue. Release resources.
Listing 3-1  A custom structure for a playback audio queue

static const int kNumberBuffers = 3;                    // 1
struct AQPlayerState {
    AudioStreamBasicDescription   mDataFormat;          // 2
    AudioQueueRef                 mQueue;               // 3
    AudioQueueBufferRef           mBuffers[kNumberBuffers]; // 4
    AudioFileID                   mAudioFile;           // 5
    UInt32                        bufferByteSize;       // 6
    SInt64                        mCurrentPacket;       // 7
    UInt32                        mNumPacketsToRead;    // 8
    AudioStreamPacketDescription  *mPacketDescs;        // 9
    bool                          mIsRunning;           // 10
};
Most fields in this structure are identical (or nearly so) to those in the custom structure used for recording, as described in the Recording Audio chapter in Define a Custom Structure to Manage State (page 23). For example, the mDataFormat field is used here to hold the format of the file being played. When recording, the analogous field holds the format of the file being written to disk. Here's a description of the fields in this structure:
1. Sets the number of audio queue buffers to use. Three is typically a good number, as described in Audio Queue Buffers (page 11).
2. An AudioStreamBasicDescription structure (from CoreAudioTypes.h) representing the audio data format of the file being played. This format gets used by the audio queue specified in the mQueue field. The mDataFormat field gets filled by querying an audio file's kAudioFilePropertyDataFormat property, as described in Obtaining a File's Audio Data Format (page 51). For details on the AudioStreamBasicDescription structure, see Core Audio Data Types Reference.
3. The playback audio queue created by your application.
4. An array holding pointers to the audio queue buffers managed by the audio queue.
5. An audio file object that represents the audio file your program plays.
6. The size, in bytes, for each audio queue buffer. This value is calculated in these examples in the DeriveBufferSize function, after the audio queue is created and before it is started. See Write a Function to Derive Playback Audio Queue Buffer Size (page 47).
7. The packet index for the next packet to play from the audio file.
8. The number of packets to read on each invocation of the audio queue's playback callback. Like the bufferByteSize field, this value is calculated in these examples in the DeriveBufferSize function, after the audio queue is created and before it is started.
9. For VBR audio data, the array of packet descriptions for the file being played. For CBR data, the value of this field is NULL.
10. A Boolean value indicating whether or not the audio queue is running.
- Reads a specified amount of data from an audio file and puts it into an audio queue buffer
- Enqueues the audio queue buffer to the buffer queue
- When there's no more data to read from the audio file, tells the audio queue to stop
This section shows an example callback declaration, describes each of these tasks separately, and finally presents an entire playback callback. For an illustration of the role of a playback callback, you can refer back to Figure 1-4 (page 15).
1. Typically, aqData is the custom structure that contains state information for the audio queue, as described in Define a Custom Structure to Manage State (page 41).
2. The audio queue that owns this callback.
3. An audio queue buffer that the callback is to fill with data by reading from an audio file.
1. The AudioFileReadPackets function, declared in the AudioFile.h header file, reads data from an audio file and places it into a buffer.
2. The audio file to read from.
3. Uses a value of false to indicate that the function should not cache the data when reading.
4. On output, the number of bytes of audio data that were read from the audio file.
5. On output, an array of packet descriptions for the data that was read from the audio file. For CBR data, the input value of this parameter is NULL.
6. The packet index for the first packet to read from the audio file.
7. On input, the number of packets to read from the audio file. On output, the number of packets actually read.
8. On output, the filled audio queue buffer containing data that was read from the audio file.
Listing 3-4  Enqueuing an audio queue buffer after reading from disk

AudioQueueEnqueueBuffer (                               // 1
    pAqData->mQueue,                                    // 2
    inBuffer,                                           // 3
    (pAqData->mPacketDescs ? numPackets : 0),           // 4
    pAqData->mPacketDescs                               // 5
);

1. The AudioQueueEnqueueBuffer function adds an audio queue buffer to a buffer queue.
2. The audio queue that owns the buffer queue.
3. The audio queue buffer to enqueue.
4. The number of packets represented in the audio queue buffer's data. For CBR data, which uses no packet descriptions, uses 0.
5. For compressed audio data formats that use packet descriptions, the packet descriptions for the packets in the buffer.
1. Checks if the number of packets read by the AudioFileReadPackets function (invoked earlier by the callback) is 0.
2. The AudioQueueStop function stops the audio queue.
3. The audio queue to stop.
4. Stops the audio queue asynchronously, when all queued buffers have been played. See Audio Queue Control and State (page 20).
5. Sets a flag in the custom structure to indicate that playback is finished.
static void HandleOutputBuffer (
    void                 *aqData,
    AudioQueueRef        inAQ,
    AudioQueueBufferRef  inBuffer
) {
    AQPlayerState *pAqData = (AQPlayerState *) aqData;        // 1
    if (pAqData->mIsRunning == 0) return;                     // 2
    UInt32 numBytesReadFromFile;                              // 3
    UInt32 numPackets = pAqData->mNumPacketsToRead;           // 4
    AudioFileReadPackets (
        pAqData->mAudioFile,
        false,
        &numBytesReadFromFile,
        pAqData->mPacketDescs,
        pAqData->mCurrentPacket,
        &numPackets,
        inBuffer->mAudioData
    );
    if (numPackets > 0) {                                     // 5
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;  // 6
        AudioQueueEnqueueBuffer (
            pAqData->mQueue,
            inBuffer,
            (pAqData->mPacketDescs ? numPackets : 0),
            pAqData->mPacketDescs
        );
        pAqData->mCurrentPacket += numPackets;                // 7
    } else {
        AudioQueueStop (
            pAqData->mQueue,
            false
        );
        pAqData->mIsRunning = false;
    }
}
1. The custom data supplied to the audio queue upon instantiation, including the audio file object (of type AudioFileID) representing the file to play as well as a variety of state data. See Define a Custom Structure to Manage State (page 41).
2. If the audio queue is stopped, returns immediately.
3. A variable to hold the number of bytes of audio data read from the file being played.
4. Initializes the numPackets variable with the number of packets to read from the file being played.
5. Tests whether some audio data was retrieved from the file. If so, enqueues the newly-filled buffer. If not, stops the audio queue.
6. Tells the audio queue buffer structure the number of bytes of data that were read.
7. Increments the packet index according to the number of packets that were read.
- Derive the number of packets to read each time your callback invokes the AudioFileReadPackets function
- Set a lower bound on buffer size, to avoid overly frequent disk access
The calculation here takes into account the audio data format you're reading from disk. The format includes all the factors that might affect buffer size, such as the number of audio channels.
Listing 3-7  Deriving a playback audio queue buffer size
void DeriveBufferSize (
    AudioStreamBasicDescription &ASBDesc,               // 1
    UInt32                      maxPacketSize,          // 2
    Float64                     seconds,                // 3
    UInt32                      *outBufferSize,         // 4
    UInt32                      *outNumPacketsToRead    // 5
) {
    static const int maxBufferSize = 0x50000;           // 6
    static const int minBufferSize = 0x4000;            // 7

    if (ASBDesc.mFramesPerPacket != 0) {                // 8
        Float64 numPacketsForTime =
            ASBDesc.mSampleRate / ASBDesc.mFramesPerPacket * seconds;
        *outBufferSize = numPacketsForTime * maxPacketSize;
    } else {                                            // 9
        *outBufferSize =
            maxBufferSize > maxPacketSize ?
                maxBufferSize : maxPacketSize;
    }

    if (                                                // 10
        *outBufferSize > maxBufferSize &&
        *outBufferSize > maxPacketSize
    )
        *outBufferSize = maxBufferSize;
    else {                                              // 11
        if (*outBufferSize < minBufferSize)
            *outBufferSize = minBufferSize;
    }

    *outNumPacketsToRead = *outBufferSize / maxPacketSize; // 12
}
1. The AudioStreamBasicDescription structure for the audio queue.
2. The estimated maximum packet size for the data in the audio file you're playing. You can determine this value by invoking the AudioFileGetProperty function (declared in the AudioFile.h header file) with a property ID of kAudioFilePropertyPacketSizeUpperBound. See Set Sizes for a Playback Audio Queue (page 53).
3. The size you are specifying for each audio queue buffer, in terms of seconds of audio.
4. On output, the size for each audio queue buffer, in bytes.
5. On output, the number of packets of audio data to read from the file on each invocation of the playback audio queue callback.
6. An upper bound for the audio queue buffer size, in bytes. In this example, the upper bound is set to 320 KB. This corresponds to approximately five seconds of stereo, 24 bit audio at a sample rate of 96 kHz.
7. A lower bound for the audio queue buffer size, in bytes. In this example, the lower bound is set to 16 KB.
8. For audio data formats that define a fixed number of frames per packet, derives the audio queue buffer size.
9. For audio data formats that do not define a fixed number of frames per packet, derives a reasonable audio queue buffer size based on the maximum packet size and the upper bound you've set.
10. If the derived buffer size is above the upper bound you've set, adjusts it to the bound, taking into account the estimated maximum packet size.
11. If the derived buffer size is below the lower bound you've set, adjusts it to the bound.
12. Calculates the number of packets to read from the audio file on each invocation of the callback.
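To see the bounds logic at work with concrete numbers, here is a standalone rendering of the same derivation in plain C, with a local stand-in struct instead of the Core Audio types (an assumption made so the sketch compiles on its own):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for the two AudioStreamBasicDescription fields the
   derivation reads (an assumption for this sketch). */
typedef struct {
    double   mSampleRate;
    uint32_t mFramesPerPacket;   /* 0 for formats with no fixed value */
} PlaybackFormatSketch;

static void DeriveBufferSizeSketch (
    const PlaybackFormatSketch *fmt,
    uint32_t maxPacketSize,
    double   seconds,
    uint32_t *outBufferSize,
    uint32_t *outNumPacketsToRead
) {
    static const uint32_t maxBufferSize = 0x50000;   /* 320 KB upper bound */
    static const uint32_t minBufferSize = 0x4000;    /* 16 KB lower bound  */

    if (fmt->mFramesPerPacket != 0) {
        /* Fixed frames per packet: buffer holds `seconds` worth of packets. */
        double numPacketsForTime =
            fmt->mSampleRate / fmt->mFramesPerPacket * seconds;
        *outBufferSize = (uint32_t) (numPacketsForTime * maxPacketSize);
    } else {
        /* No fixed value: fall back to the larger of the bound and one packet. */
        *outBufferSize =
            maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
    }

    /* Clamp to the upper bound, then raise to the lower bound if needed. */
    if (*outBufferSize > maxBufferSize && *outBufferSize > maxPacketSize)
        *outBufferSize = maxBufferSize;
    else if (*outBufferSize < minBufferSize)
        *outBufferSize = minBufferSize;

    *outNumPacketsToRead = *outBufferSize / maxPacketSize;
}
```

For 16-bit stereo linear PCM at 44.1 kHz (4-byte packets, one frame per packet) and half-second buffers, this yields an 88,200-byte buffer and 22,050 packets per read; a request far smaller than the floor gets pushed up to 16 KB.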
- Obtain a CFURL object representing the audio file you want to play.
- Open the file.
- Obtain the file's audio data format.
Listing 3-8  Obtaining a CFURL object for an audio file

CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation (           // 1
        NULL,                                           // 2
        (const UInt8 *) filePath,                       // 3
        strlen (filePath),                              // 4
        false                                           // 5
    );
1. The CFURLCreateFromFileSystemRepresentation function, declared in the CFURL.h header file, creates a CFURL object representing the file to play.
2. Uses NULL (or kCFAllocatorDefault) to use the current default memory allocator.
3. The file-system path you want to convert to a CFURL object. In production code, you would typically obtain a value for filePath from the user.
4. The number of bytes in the file-system path.
5. A value of false indicates that filePath represents a file, not a directory.
Listing 3-9  Opening an audio file for playback

AQPlayerState aqData;                                   // 1

OSStatus result =
    AudioFileOpenURL (                                  // 2
        audioFileURL,                                   // 3
        fsRdPerm,                                       // 4
        0,                                              // 5
        &aqData.mAudioFile                              // 6
    );

CFRelease (audioFileURL);                               // 7
1. Creates an instance of the AQPlayerState custom structure (see Define a Custom Structure to Manage State (page 41)). You use this instance when you open an audio file for playback, as a place to hold the audio file object (of type AudioFileID) that represents the audio file.
2. The AudioFileOpenURL function, declared in the AudioFile.h header file, opens the file you want to play.
3. A reference to the file to play.
4. The file permissions you want to use with the file you're playing. The available permissions are defined in the File Manager's File Access Permission Constants enumeration. In this example you request permission to read the file.
5. An optional file type hint. A value of 0 here indicates that the example does not use this facility.
6. On output, a reference to the audio file is placed in the custom structure's mAudioFile field.
7. Releases the CFURL object that was created in step 1.
Listing 3-10  Obtaining a file's audio data format

UInt32 dataFormatSize = sizeof (aqData.mDataFormat);    // 1

AudioFileGetProperty (                                  // 2
    aqData.mAudioFile,                                  // 3
    kAudioFilePropertyDataFormat,                       // 4
    &dataFormatSize,                                    // 5
    &aqData.mDataFormat                                 // 6
);
1. Gets an expected property value size to use when querying the audio file about its audio data format.
2. The AudioFileGetProperty function, declared in the AudioFile.h header file, obtains the value for a specified property in an audio file.
3. An audio file object (of type AudioFileID) representing the file whose audio data format you want to obtain.
4. The property ID for obtaining the value of the audio file's data format.
5. On input, the expected size of the AudioStreamBasicDescription structure that describes the audio file's data format. On output, the actual size. Your playback application does not need to make use of this value.
6. On output, the full audio data format, in the form of an AudioStreamBasicDescription structure, obtained from the audio file. This line applies the file's audio data format to the audio queue by storing it in the audio queue's custom structure.
1. The AudioQueueNewOutput function creates a new playback audio queue.
2. The audio data format of the file that the audio queue is being set up to play. See Obtaining a File's Audio Data Format (page 51).
3. The callback function to use with the playback audio queue. See Write a Playback Audio Queue Callback (page 43).
4. The custom data structure for the playback audio queue. See Define a Custom Structure to Manage State (page 41).
5. The current run loop, and the one on which the audio queue playback callback will be invoked.
6. The run loop modes in which the callback can be invoked. Normally, use the kCFRunLoopCommonModes constant here.
7. Reserved. Must be 0.
8. On output, the newly allocated playback audio queue.
- Audio queue buffer size
- Number of packets to read for each invocation of the playback audio queue callback
- Array size for holding the packet descriptions for one buffer's worth of audio data
UInt32 maxPacketSize;
UInt32 propertySize = sizeof (maxPacketSize);
AudioFileGetProperty (                                  // 1
    aqData.mAudioFile,                                  // 2
    kAudioFilePropertyPacketSizeUpperBound,             // 3
    &propertySize,                                      // 4
    &maxPacketSize                                      // 5
);

DeriveBufferSize (                                      // 6
    aqData.mDataFormat,                                 // 7
    maxPacketSize,                                      // 8
    0.5,                                                // 9
    &aqData.bufferByteSize,                             // 10
    &aqData.mNumPacketsToRead                           // 11
);
1. The AudioFileGetProperty function, declared in the AudioFile.h header file, obtains the value of a specified property for an audio file. Here you use it to get a conservative upper bound, in bytes, for the size of the audio data packets in the file you want to play.
2. An audio file object (of type AudioFileID) representing the file you want to play. See Opening an Audio File (page 50).
3. The property ID for obtaining a conservative upper bound for packet size in an audio file.
4. On output, the size, in bytes, for the kAudioFilePropertyPacketSizeUpperBound property.
5. On output, a conservative upper bound for packet size, in bytes, for the file you want to play.
6. The DeriveBufferSize function, described in Write a Function to Derive Playback Audio Queue Buffer Size (page 47), sets a buffer size and a number of packets to read on each invocation of the playback audio queue callback.
7. The audio data format of the file you want to play. See Obtaining a File's Audio Data Format (page 51).
8. The estimated maximum packet size in the audio file, from line 5 of this listing.
9. The number of seconds of audio that each audio queue buffer should hold. One half second, as set here, is typically a good choice.
10. On output, the size for each audio queue buffer, in bytes. This value is placed in the custom structure for the audio queue.
11. On output, the number of packets to read on each invocation of the playback audio queue callback. This value is also placed in the custom structure for the audio queue.
bool isFormatVBR = (                                    // 1
    aqData.mDataFormat.mBytesPerPacket == 0 ||
    aqData.mDataFormat.mFramesPerPacket == 0
);

if (isFormatVBR) {                                      // 2
    aqData.mPacketDescs =
        (AudioStreamPacketDescription*) malloc (
            aqData.mNumPacketsToRead * sizeof (AudioStreamPacketDescription)
        );
} else {                                                // 3
    aqData.mPacketDescs = NULL;
}
1. Determines if the audio file's data format is VBR or CBR. In VBR data, one or both of the bytes-per-packet or frames-per-packet values is variable, and so will be listed as 0 in the audio queue's AudioStreamBasicDescription structure.
2. For an audio file that contains VBR data, allocates memory for the packet descriptions array. Calculates the memory needed based on the number of audio data packets to be read on each invocation of the playback callback. See Setting Buffer Size and Number of Packets to Read (page 53).
3. For an audio file that contains CBR data, such as linear PCM, the audio queue does not use a packet descriptions array.
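The VBR-versus-CBR test reduces to a pure predicate on two format fields. A standalone sketch, with a local stand-in struct in place of the Core Audio AudioStreamBasicDescription type (an assumption for this sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Local stand-in for the two AudioStreamBasicDescription fields the
   test inspects (an assumption for this sketch). */
typedef struct {
    uint32_t mBytesPerPacket;    /* 0 when packet size varies */
    uint32_t mFramesPerPacket;   /* 0 when frames per packet vary */
} FormatSketch;

/* A format is VBR if either per-packet value is variable (listed as 0). */
static bool IsFormatVBR (const FormatSketch *fmt) {
    return fmt->mBytesPerPacket == 0 || fmt->mFramesPerPacket == 0;
}
```

So 16-bit stereo linear PCM (4 bytes per packet, 1 frame per packet) is CBR and needs no packet descriptions array, while a compressed format whose packet size varies (0 bytes per packet) does.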
Listing 3-14  Setting a magic cookie for a playback audio queue

UInt32 cookieDataSize = sizeof (UInt32);                // 1
bool couldNotGetProperty =                              // 2
    AudioFileGetPropertyInfo (                          // 3
        aqData.mAudioFile,                              // 4
        kAudioFilePropertyMagicCookieData,              // 5
        &cookieDataSize,                                // 6
        NULL                                            // 7
    );

if (!couldNotGetProperty && cookieDataSize) {           // 8
    char* magicCookie =
        (char *) malloc (cookieDataSize);

    AudioFileGetProperty (                              // 9
        aqData.mAudioFile,                              // 10
        kAudioFilePropertyMagicCookieData,              // 11
        &cookieDataSize,                                // 12
        magicCookie                                     // 13
    );

    AudioQueueSetProperty (                             // 14
        aqData.mQueue,                                  // 15
        kAudioQueueProperty_MagicCookie,                // 16
        magicCookie,                                    // 17
        cookieDataSize                                  // 18
    );

    free (magicCookie);                                 // 19
}
1. Sets an estimated size for the magic cookie data.
2. Captures the result of the AudioFileGetPropertyInfo function. If successful, this function returns a value of NoErr, equivalent to Boolean false.
3. The AudioFileGetPropertyInfo function, declared in the AudioFile.h header file, gets the size of the value of a specified property. You use this to set the size of the variable that holds the property value.
4. An audio file object (of type AudioFileID) that represents the audio file you want to play.
5. The property ID representing an audio file's magic cookie data.
6. On input, an estimated size for the magic cookie data. On output, the actual size.
7. Uses NULL to indicate that you don't care about the read/write access for the property.
8. If the audio file does contain a magic cookie, allocates memory to hold it.
9. The AudioFileGetProperty function, declared in the AudioFile.h header file, gets the value of a specified property. In this case, it gets the audio file's magic cookie.
10. An audio file object (of type AudioFileID) that represents the audio file you want to play, and whose magic cookie you are getting.
11. The property ID representing the audio file's magic cookie data.
12. On input, the size of the magicCookie variable obtained using the AudioFileGetPropertyInfo function. On output, the actual size of the magic cookie in terms of the number of bytes written to the magicCookie variable.
13. On output, the audio file's magic cookie.
14. The AudioQueueSetProperty function sets a property in an audio queue. In this case, it sets a magic cookie for the audio queue, matching the magic cookie in the audio file to be played.
15. The audio queue that you want to set a magic cookie for.
16. The property ID representing an audio queue's magic cookie.
17. The magic cookie from the audio file that you want to play.
18. The size, in bytes, of the magic cookie.
19. Releases the memory that was allocated for the magic cookie.
Listing 3-15 Allocating and priming audio queue buffers for playback
aqData.mCurrentPacket = 0;                              // 1

for (int i = 0; i < kNumberBuffers; ++i) {              // 2
    AudioQueueAllocateBuffer (                          // 3
        aqData.mQueue,                                  // 4
        aqData.bufferByteSize,                          // 5
        &aqData.mBuffers[i]                             // 6
    );

    HandleOutputBuffer (                                // 7
        &aqData,                                        // 8
        aqData.mQueue,                                  // 9
        aqData.mBuffers[i]                              // 10
    );
}
1. Sets the packet index to 0, so that when the audio queue callback starts filling buffers (step 7) it starts at the beginning of the audio file.
2. Allocates and primes a set of audio queue buffers. (You set this number, kNumberBuffers, to 3 in Define a Custom Structure to Manage State (page 41).)
3. The AudioQueueAllocateBuffer function creates an audio queue buffer by allocating memory for it.
4. The audio queue that is allocating the audio queue buffer.
5. The size, in bytes, for the new audio queue buffer.
6. On output, adds the new audio queue buffer to the mBuffers array in the custom structure.
7. The HandleOutputBuffer function is the playback audio queue callback you wrote. See Write a Playback Audio Queue Callback (page 43).
8. The custom structure for the audio queue.
9. The audio queue whose callback you're invoking.
10. The audio queue buffer that you're passing to the audio queue callback.
Listing 3-16  Setting an audio queue's playback gain

Float32 gain = 1.0;                                     // 1
// Optionally, allow user to override gain setting here

AudioQueueSetParameter (                                // 2
    aqData.mQueue,                                      // 3
    kAudioQueueParam_Volume,                            // 4
    gain                                                // 5
);

1. Sets a gain to use with the audio queue, between 0 (for silence) and 1 (for unity gain).
2. The AudioQueueSetParameter function sets the value of a parameter for an audio queue.
3. The audio queue that you are setting a parameter on.
4. The ID of the parameter you are setting. The kAudioQueueParam_Volume constant lets you set an audio queue's gain.
5. The gain setting that you are applying to the audio queue.
Listing 3-17  Starting and running an audio queue

aqData.mIsRunning = true;                               // 1

AudioQueueStart (                                       // 2
    aqData.mQueue,                                      // 3
    NULL                                                // 4
);

do {                                                    // 5
    CFRunLoopRunInMode (                                // 6
        kCFRunLoopDefaultMode,                          // 7
        0.25,                                           // 8
        false                                           // 9
    );
} while (aqData.mIsRunning);

CFRunLoopRunInMode (                                    // 10
    kCFRunLoopDefaultMode,
    1,
    false
);
1. Sets a flag in the custom structure to indicate that the audio queue is running.
2. The AudioQueueStart function starts the audio queue, on its own thread.
3. The audio queue to start.
4. Uses NULL to indicate that the audio queue should start playing immediately.
5. Polls the custom structure's mIsRunning field regularly to check if the audio queue has stopped.
6. The CFRunLoopRunInMode function runs the run loop that contains the audio queue's thread.
7. Uses the default mode for the run loop.
8. Sets the run loop's running time to 0.25 seconds.
9. Uses false to indicate that the run loop should continue for the full time specified.
10. After the audio queue has stopped, runs the run loop a bit longer to ensure that the audio queue buffer currently playing has time to finish.
AudioQueueDispose (                                     // 1
    aqData.mQueue,                                      // 2
    true                                                // 3
);

AudioFileClose (aqData.mAudioFile);                     // 4

free (aqData.mPacketDescs);                             // 5
1. The AudioQueueDispose function disposes of the audio queue and all of its resources, including its buffers.
2. The audio queue you want to dispose of.
3. Use true to dispose of the audio queue synchronously.
4. Closes the audio file that was played. The AudioFileClose function is declared in the AudioFile.h header file.
5. Releases the memory that was used to hold the packet descriptions.
This table describes the changes to Audio Queue Services Programming Guide .
2007-10-31
New document that describes how to record and play audio using Audio Queue Services.
Apple Inc. Copyright 2013 Apple Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apples copyright notice. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers. Apple Inc. 1 Infinite Loop Cupertino, CA 95014 408-996-1010 Apple, the Apple logo, Cocoa, Mac, Mac OS, Objective-C, OS X, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. .Mac is a service mark of Apple Inc., registered in the U.S. and other countries. iOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used under license.
Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED AS IS, AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state.