iZotope iOS Audio Programming Guide
Code is taken from Apple's sample iOS applications as well as the iZotope audio effect sample application, distributed with each iZotope iOS SDK.
Introduction
Audio processing in iOS is done through Core Audio. Within Core Audio, there are two major strategies that allow an application to process audio data directly and play it back through the iPhone or iPad hardware, and both will be covered in detail in the main body of this document.
1. Audio queues are consecutive buffers of audio that are passed to the hardware and then reused. There are typically three buffers in a queue, and processing is done via a callback function, which provides a buffer of audio that needs to be processed in time for it to be output to the iPhone/iPad hardware.

2. Audio units are processing components that take audio input and produce audio output. In iOS, built-in audio units are connected in audio processing graphs, wired together in the same way hardware components are connected. An audio input is provided to the processing graph, which then processes the audio before handing it off to the hardware.

Both methods do all the work of interacting with hardware and audio drivers so that an app's code can use a simple interface to handle audio, and both methods have several qualities in common.
1. Working with digital audio on any system requires working
knowledge of the basic topics of digital signal processing. For more information on DSP, take a look at the Appendix Introductory DSP Tutorial section of this document, which covers most of the basics.
2. Audio format information is also crucial. The channel count, sampling rate, bit depth, and other formatting information are all contained in audio files along with the audio data itself. When working with audio data in iOS, this format information is critical for processing and playing sound accurately. Apple provides a structure for communicating this information in the AudioStreamBasicDescription struct, and the strategies described in this guide rely heavily on this struct. Much of the work that goes into handling audio in iOS is in designing code that can correctly deal with a variety of audio formats.
3. Processing digital audio requires working with buffers. A buffer is just a short segment of samples whose length is convenient for processing, usually a size that is a power of two. Buffers are time efficient because processing functions are called only when a buffer is needed, and they're space efficient because they don't occupy a huge amount of memory. The strategies for audio processing in iOS involve gathering buffers of audio from audio files or microphone input, and providing individual buffers to objects that will pass them off to the hardware.
4. Buffers of audio are provided to the system using a callback. Callbacks for audio input will have a few arguments, including a pointer to a buffer to be filled and an object or struct containing additional information that is useful for processing. This is where the digital audio is handed off, and it will then be passed to the audio hardware. Because the callback has direct access to a buffer of audio, it is possible to perform sound synthesis or processing at the individual sample level. This is also where any iZotope audio processing will take place.

This document explains audio queues and audio units by detailing the implementation of an audio playback application with both strategies. Other strategies for audio in iOS do not allow direct manipulation of audio at the sample level, so they won't be covered here. These strategies include simple frameworks for playing and recording sounds, like AVAudioPlayer and MPMediaPlayer. There is also a more complex framework for rendering 3D sound environments, called OpenAL, which is also not covered in detail within this document.
Contents

1. Audio Queues
   i. Introduction to Audio Queue Services
   ii. Designing a Helper Class
   iii. Initialization
   iv. Setup and Cleanup
   v. Audio Queue Callback Functionality
   vi. Running the Audio Queue
2. Audio Units
3. Glossary
4. Additional Help and Resources
5. Appendix - Introductory DSP Tutorial
1. Audio Queues
i. Introduction to Audio Queue Services
An audio queue is an object used in Mac OS and iOS for recording and playing audio. It is a short queue of buffers that are constantly being filled with audio, passed to the audio hardware, and then filled again. An audio queue fills its buffers one at a time and then does the work of turning that data into sound. Audio queues handle interaction with codecs and hardware so that an app's code can interact with audio at a high level.

The best resource for understanding audio queue programming is the Audio Queue Services Programming Guide in the Mac OS X Developer Library. Reading that document for a full understanding of Audio Queue Services is highly recommended for anyone planning to use audio in an iOS application. Additionally, documentation of specific functions can be found in the Audio Queue Services Reference, and a list of further references can be found in the last section of this chapter.

Audio queues are defined in AudioQueue.h in the AudioToolbox framework. Most of the functions explained here are defined in AudioQueue.h or in AudioFile.h. Functions in AudioQueue.h have the prefix AudioQueue, and functions in AudioFile.h have the prefix AudioFile.
Audio queues have a few advantages:
1. They handle the low-level details of passing buffers of audio to the hardware.
2. They are well documented and straightforward to use.
3. They allow precise scheduling and synchronized playback of sounds.
However, they have higher latency and are not as versatile as audio units.
Interface
The AQPlayer class will have simple public member functions for interaction with the rest of the application.
1. The class will create and destroy AQPlayer objects with a constructor AQPlayer() and a destructor ~AQPlayer().
2. To initialize an audio queue, there is a function CreateQueueForFile() that takes a path to an audio file in the form of a CFURLRef.
3. To start, stop, and pause audio queues, there are the functions StartQueue(), StopQueue(), and PauseQueue().
4. To dispose of an audio queue, there is the DisposeQueue() function.
5. There will also be various functions to set and return the values of member variables, which keep track of the state of the audio queue.
Member Variables
To keep track of state in AQPlayer, several variables are required. Apple suggests using a custom struct that contains these, but in this example it is easier to just make them member variables of the C++ class AQPlayer. For clarity, all member variables will start with m. Variables of the following types will likely be needed for an app dealing with audio data:
AudioStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kNumberBuffers];
AudioFileID mAudioFile;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsDone;

Here's a brief description of each:

AudioStreamBasicDescription mDataFormat - The audio data format of the file being played. This includes information such as sample rate, number of channels, and so on.

AudioQueueRef mQueue - A reference to the audio queue being used for playback.

AudioQueueBufferRef mBuffers[kNumberBuffers] - The buffers used by the audio queue. The standard value for kNumberBuffers is 3, so that one buffer can be filled as another is in use, with an extra in case there's some lagging. In the callback, the audio queue will pass the individual buffer it wants filled; mBuffers is just used to allocate and prime the buffers. This shows that the audio queue is literally just a queue of buffers that get filled up and wait in line to be output to the hardware.

AudioFileID mAudioFile - The file being played from.

SInt64 mCurrentPacket - The index of the next packet of audio in the audio file. This will be incremented each time the callback is called.

UInt32 mNumPacketsToRead - The number of packets to read each time the callback is called. This is calculated after the audio queue is created.

AudioStreamPacketDescription *mPacketDescs - For variable bit rate (VBR) data, this is an array of packet descriptions for the audio file. For constant bit rate files, this can be NULL.
bool mIsDone - Whether the audio queue has finished playing the file.

It may be helpful to keep track of other information, such as whether the queue is initialized, whether it should loop playback, and so on. The only thing the constructor, AQPlayer(), needs to do is initialize these member variables to appropriate values.
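For reference, a constructor along these lines is all that is needed (a sketch; the member names match the list above, and the initial values are assumptions):

// Sketch of a minimal AQPlayer constructor: give every member a sane starting value.
AQPlayer::AQPlayer() :
    mQueue( NULL ),
    mAudioFile( NULL ),
    mCurrentPacket( 0 ),
    mNumPacketsToRead( 0 ),
    mPacketDescs( NULL ),
    mIsDone( false )
{
    // Zero the format description and the buffer references until they are set up.
    mDataFormat = AudioStreamBasicDescription();
    for( int i = 0; i < kNumberBuffers; ++i )
        mBuffers[i] = NULL;
}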
Setting up the audio queue for a file involves:
- Opening an audio file to determine its format.
- Creating an audio queue.
- Calculating buffer size.
- Allocating buffers.
- Some additional setup for the audio queue, like setting magic cookies, channel layout, and volume.
The details of the audio queue setup are covered in the Initialization section of this document, and other setup is described in the Setup and Cleanup section.
iii. Initialization
Initialization functions are needed to create an audio queue for a given audio le. In this case, initialization can be performed in a member function of the AQPlayer class, CreateQueueForFile().
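1. First, create a CFURLRef that refers to the audio file. The original listing for this step is not reproduced here; based on the argument descriptions that follow, it is a CFURLCreateFromFileSystemRepresentation() call along these lines (filePath is assumed to be a char* holding the full path):

CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation( NULL,                      // default allocator
                                             (const UInt8*) filePath,   // path to the file
                                             strlen( filePath ),        // length of the path
                                             false );                   // not a directory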
Variable descriptions:
NULL - The function will use the default memory allocator.
filePath - A char* with the full path of the audio file.
strlen( filePath ) - The length of the path.
false - The path is not for a directory, but for a single file.

In this example, a CFURLRef is created outside the AQPlayer's member functions, and then passed to CreateQueueForFile().

2. Open the file using AudioFileOpenURL(). Here is a typical call:

OSStatus result = AudioFileOpenURL( audioFileURL, fsRdPerm, 0, &mAudioFile );
Variable descriptions:
audioFileURL - The CFURLRef that was just created.
fsRdPerm - Requests read permission (rather than write, read/write, etc.) for the file.
0 - Don't use a file type hint.
&mAudioFile - A pointer to an AudioFileID that will represent the file.

After finishing with the file's CFURLRef, it is important to release it using CFRelease( audioFileURL ) to prevent a memory leak.
3. Get the file's audio data format using AudioFileGetProperty(). Here is a typical call:

UInt32 dataFormatSize = sizeof( mDataFormat );
AudioFileGetProperty( mAudioFile, kAudioFilePropertyDataFormat, &dataFormatSize, &mDataFormat );

Variable descriptions:
mAudioFile - The AudioFileID for the file, obtained from AudioFileOpenURL().
kAudioFilePropertyDataFormat - The desired property is the data format.
&dataFormatSize - A pointer to an integer containing the size of the data format.
&mDataFormat - A pointer to an AudioStreamBasicDescription. This has information like the sample rate, number of channels, and so on.
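4. Create the audio queue itself. The original listing for this step is not included here; a playback queue is created with AudioQueueNewOutput(), roughly as follows (the callback name AQBufferCallback is illustrative, and the run loop arguments shown are common defaults):

AudioQueueNewOutput( &mDataFormat,            // the file's audio data format
                     AQBufferCallback,        // the playback callback (name assumed)
                     this,                    // user data handed to the callback
                     CFRunLoopGetCurrent(),   // run loop for the callback
                     kCFRunLoopCommonModes,   // run loop mode
                     0,                       // reserved, must be 0
                     &mQueue );               // the new queue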
Setup
After creating the audio queue, the CreateQueueForFile() function will call SetupNewQueue() to perform some additional setup, as explained in the Setup and Cleanup section.
Buffer sizes are calculated in the member function CalculateBytesForTime():

void AQPlayer::CalculateBytesForTime( AudioStreamBasicDescription &inDesc,
                                      UInt32 inMaxPacketSize,
                                      Float64 inSeconds,
                                      UInt32 *outBufferSize,
                                      UInt32 *outNumPacketsToRead )
{
}

Variable descriptions:
inDesc - The file's audio format description (passed by reference).
inMaxPacketSize - An upper bound on the packet size.
inSeconds - The duration in seconds for the buffer size to approximate.
*outBufferSize - The buffer size being returned.
*outNumPacketsToRead - The number of packets to read each time the callback is called, which will also be calculated.

Here's what is done in the body of the function.

1. First, a reasonable maximum and minimum buffer size is chosen. For live audio processing, small values like 2048 and 256 bytes are used:

static const int maxBufferSize = 0x800;
static const int minBufferSize = 0x100;
2. Next, we see if the audio format has a fixed number of frames per packet and set the buffer size appropriately if it does. If it doesn't, just use a default value for the buffer size from the maximum buffer size or maximum packet size:
if( inDesc.mFramesPerPacket != 0 )
{
    Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
    *outBufferSize = numPacketsForTime * inMaxPacketSize;
}
else
{
    *outBufferSize = maxBufferSize > inMaxPacketSize ? maxBufferSize : inMaxPacketSize;
}

3. Then limit the buffer size to our maximum and minimum:

if( *outBufferSize > maxBufferSize && *outBufferSize > inMaxPacketSize )
{
    *outBufferSize = maxBufferSize;
}
else
{
    if( *outBufferSize < minBufferSize )
        *outBufferSize = minBufferSize;
}
4. Finally, calculate the number of packets to read in each callback:

*outNumPacketsToRead = *outBufferSize / inMaxPacketSize;

Next, get the upper bound for packet size using AudioFileGetProperty(). The call looks like this:

AudioFileGetProperty( mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize );

Variable descriptions:
mAudioFile - The AudioFileID for the file.
kAudioFilePropertyPacketSizeUpperBound - The desired property is the packet size upper bound, a conservative estimate for the maximum packet size.
&size - A pointer to an integer with the size of the desired property (set to sizeof( maxPacketSize )).
&maxPacketSize - A pointer to an integer where the maximum packet size will be stored on output.

Use CalculateBytesForTime() to set the buffer size and number of packets to read. Here's what the call looks like:

UInt32 bufferByteSize;
CalculateBytesForTime( mDataFormat, maxPacketSize, kBufferDurationSeconds, &bufferByteSize, &mNumPacketsToRead );

Variable descriptions:
mDataFormat - The audio data format of the file being played.
maxPacketSize - The maximum packet size found with AudioFileGetProperty().
kBufferDurationSeconds - A set duration for the buffer to match, approximately. In this example, the duration of the maximum buffer size is used (the buffer size in bytes divided by the sample rate).
&bufferByteSize - The buffer size in bytes to be set.
&mNumPacketsToRead - The member variable where the number of packets for the callback function to read from the audio file is stored.
Allocating Buffers
The audio queue's buffers are allocated manually. First, it is determined whether the file is in a variable bit rate format, in which case the packet descriptions in the buffers will be needed.
bool isFormatVBR = ( mDataFormat.mBytesPerPacket == 0 || mDataFormat.mFramesPerPacket == 0 );
UInt32 numPacketDescs = isFormatVBR ? mNumPacketsToRead : 0;

Then the audio queue's buffers are looped through and each one is allocated using AudioQueueAllocateBufferWithPacketDescriptions(), as follows:

for( int iBuf = 0; iBuf < kNumberBuffers; ++iBuf )
{
    AudioQueueAllocateBufferWithPacketDescriptions( mQueue, bufferByteSize, numPacketDescs, &mBuffers[iBuf] );
}

Variable descriptions:
mQueue - The audio queue.
bufferByteSize - The buffer size in bytes calculated above.
numPacketDescs - 0 for CBR (constant bit rate) data, otherwise the number of packets per call to the callback.
&mBuffers[iBuf] - A pointer to the current audio queue buffer. kNumberBuffers is the number of audio queue buffers, which has already been set to 3.

If VBR (variable bit rate) data won't be used, it is possible to just use AudioQueueAllocateBuffer(), which takes all the same arguments except for numPacketDescs.
Some compressed audio formats store information about the file in a structure called a magic cookie. If the file being played has a magic cookie, it needs to be provided to the audio queue. The following code will check if there is a magic cookie and give it to the audio queue if there is.

First, check whether the audio file has a magic cookie property:

size = sizeof( UInt32 );
OSStatus result = AudioFileGetPropertyInfo( mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL );

If it does, get the property from the audio file and set it in the audio queue:

if( result == noErr && size > 0 )
{
    char* cookie = new char[size];
    AudioFileGetProperty( mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie );
    AudioQueueSetProperty( mQueue, kAudioQueueProperty_MagicCookie, cookie, size );
    delete [] cookie;
}

Variable descriptions:
mAudioFile - The audio file.
kAudioFilePropertyMagicCookieData - The desired property is magic cookie data.
&size - Pointer to an integer with the size of the desired property.
cookie - Temporary storage for the magic cookie data.
mQueue - The audio queue.
kAudioQueueProperty_MagicCookie - The desired property to set is the queue's magic cookie.
If an audio file has a channel layout property, the channel layout of the audio queue needs to be set. The channel layout property is normally used only for audio with more than two channels.

First, check whether the audio file has a channel layout property:

size = sizeof( UInt32 );
result = AudioFileGetPropertyInfo( mAudioFile, kAudioFilePropertyChannelLayout, &size, NULL );

If it does, get the property from the audio file and set it in the audio queue:

if( result == noErr && size > 0 )
{
    AudioChannelLayout* acl = static_cast<AudioChannelLayout*>( malloc( size ) );
    AudioFileGetProperty( mAudioFile, kAudioFilePropertyChannelLayout, &size, acl );
    AudioQueueSetProperty( mQueue, kAudioQueueProperty_ChannelLayout, acl, size );
    free( acl );
}

Variable descriptions:
mAudioFile - The audio file.
kAudioFilePropertyChannelLayout - The desired property is the channel layout.
&size - Pointer to an integer with the size of the desired property.
acl - Temporary storage for the channel layout data.
mQueue - The audio queue.
kAudioQueueProperty_ChannelLayout - The desired property to set is the queue's channel layout.
Property Listeners
A property listener is a function that the audio queue will call when one of its properties changes. Property listeners can be useful for tracking the state of the audio queue. For example, in this example there is a member function isRunningProc() that is called when the audio queue's IsRunning property changes. The function definition looks like this:

void AQPlayer::isRunningProc( void* inAQObject, AudioQueueRef inAQ, AudioQueuePropertyID inID )
{
}

Variable descriptions:
void* inAQObject - A pointer to the AQPlayer object.
AudioQueueRef inAQ - The audio queue calling the property listener.
AudioQueuePropertyID inID - A value specifying the property that has changed.

In the body of the function, the value of the AQPlayer's mIsRunning property is updated. A property listener can take any action that would be appropriate when a specific property changes. As is done in the callback, cast inAQObject to a pointer to an AQPlayer:
AQPlayer* THIS= static_cast<AQPlayer*>(inAQObject);
Then update the mIsRunning property using AudioQueueGetProperty():

UInt32 size = sizeof( THIS->mIsRunning );
OSStatus result = AudioQueueGetProperty( inAQ, kAudioQueueProperty_IsRunning, &THIS->mIsRunning, &size );
To add the property listener to the audio queue so that it will be called, use AudioQueueAddPropertyListener(). That is done in the SetupNewQueue() function. Here's what the call looks like:

AudioQueueAddPropertyListener( mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this );

Variable descriptions:
mQueue - The audio queue.
kAudioQueueProperty_IsRunning - The desired property to listen for is IsRunning.
isRunningProc - The property listener function created above.
this - The AQPlayer object, which the audio queue will pass as the inAQObject argument to isRunningProc.

To remove a property listener, use AudioQueueRemovePropertyListener(), which takes the same arguments.
Setting Volume
Volume is an adjustable parameter of an audio queue (in fact, it is the only adjustable parameter). To set it, use the AudioQueueSetParameter() function as follows:

AudioQueueSetParameter( mQueue, kAudioQueueParam_Volume, 1.0 );

Variable descriptions:
mQueue - The audio queue.
kAudioQueueParam_Volume - The parameter to set is volume.

The third argument is the volume as a floating point value from 0.0 to 1.0.
Cleanup
When finished with the audio queue, dispose of the queue and close the audio file. Disposing of an audio queue also disposes of its buffers. This is done in the member function DisposeQueue():

AudioQueueDispose( mQueue, true );

Variable descriptions:
mQueue - The audio queue.
true - Dispose immediately, rather than playing all queued buffers first. (This is a synchronous action.)

The DisposeQueue() function takes a boolean inDisposeFile specifying whether to dispose of the audio file along with the queue. To dispose of the audio file use:

AudioFileClose( mAudioFile );

The destructor, ~AQPlayer(), just needs to call DisposeQueue( true ) to dispose of the audio queue and audio file.
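The playback callback itself has the signature of Apple's AudioQueueOutputCallback type. A sketch of its definition (the function name is illustrative; the body is filled in as described below):

// Static member function used as the audio queue's playback callback (sketch).
void AQPlayer::AQBufferCallback( void*               inAQObject,
                                 AudioQueueRef       inAQ,
                                 AudioQueueBufferRef inBuffer )
{
    // Recover the AQPlayer that owns this queue, then read from the file,
    // process the samples, and enqueue the buffer, as described below.
    AQPlayer* THIS = static_cast<AQPlayer*>( inAQObject );
    if( THIS->mIsDone ) return;   // a typical early-out (an assumption here)
}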
Variable descriptions:
void* inAQObject - A pointer to any struct or object used to manage state. Typically this will be either a custom C struct or an Objective-C or C++ object handling the audio queue. In these examples, this will be a pointer to the C++ AQPlayer object, which handles audio queues and stores all the necessary information in member variables.
AudioQueueRef inAQ - The audio queue calling the callback. The callback will place a buffer of audio on this queue.
AudioQueueBufferRef inBuffer - The buffer the audio queue is requesting be filled. Usually the callback will read from a file into the buffer and then enqueue the buffer.
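Inside the callback, the buffer is filled by reading packets from the audio file with AudioFileReadPackets(). The original listing is not reproduced here; a typical call matching the argument descriptions below looks like this (numBytes and nPackets are local variables, with nPackets initialized to the number of packets to read):

UInt32 numBytes;
UInt32 nPackets = THIS->mNumPacketsToRead;
AudioFileReadPackets( THIS->mAudioFile,               // the file to read from
                      false,                          // don't cache data
                      &numBytes,                      // bytes actually read
                      inBuffer->mPacketDescriptions,  // packet descriptions, for VBR data
                      THIS->mCurrentPacket,           // packet index to start from
                      &nPackets,                      // in: packets to read, out: packets read
                      inBuffer->mAudioData );         // destination buffer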
Variable descriptions:
THIS->mAudioFile - The audio file being played from.
false - Don't cache data.
&numBytes - Pointer to an integer that will give the number of bytes read.
inBuffer->mPacketDescriptions - The packet descriptions in the current buffer.
THIS->mCurrentPacket - The index of the next packet of audio.
&nPackets - Pointer to the number of packets to read. On output, this will tell how many were actually read.
inBuffer->mAudioData - The audio buffer that will be filled.
Processing
After filling an audio buffer from a file, it is possible to process the audio directly by manipulating the values in the buffer. And if the application uses synthesis rather than playback, it is possible to fill the buffer directly without using a file. This is also where any iZotope audio processing will take place. Details regarding the implementation of iZotope effects are distributed along with our effect libraries.
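As a trivial illustration (a sketch, not taken from the sample code), a gain change could be applied in place. This assumes the file holds uncompressed 16-bit linear PCM; for compressed formats the buffer holds encoded packets and cannot be manipulated this way:

// Halve the level of every sample in the freshly filled buffer
// (sketch; assumes 16-bit signed linear PCM data).
SInt16* samples = static_cast<SInt16*>( inBuffer->mAudioData );
UInt32 sampleCount = numBytes / sizeof( SInt16 );
for( UInt32 i = 0; i < sampleCount; ++i )
    samples[i] = samples[i] / 2;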
Enqueueing a Buffer
After filling a buffer with audio data, check whether any packets were actually read by looking at the value of nPackets. If no packets were read, the file has ended, so stop the audio queue and set the mIsDone variable to true. Otherwise, place the filled buffer in the audio queue using the AudioQueueEnqueueBuffer() function. A call to this function has the following form:

AudioQueueEnqueueBuffer( inAQ, inBuffer, numPackets, THIS->mPacketDescs );
Variable descriptions:
inAQ - The audio queue to place the buffer on.
inBuffer - The buffer to enqueue.
numPackets - The number of packets in the buffer. For constant bit rate audio formats, which do not use packet descriptions, this value should be 0.
THIS->mPacketDescs - The packet descriptions for the packets in the buffer. For constant bit rate audio formats, which do not use packet descriptions, this value should be NULL.
AudioQueuePause() will pause the audio queue provided to it; this is the only thing the PauseQueue() function in the AQPlayer does. The queue will start again when AudioQueueStart() is called.
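The start and stop calls are similarly thin wrappers around single Audio Queue Services functions. A sketch of what StartQueue() and StopQueue() might look like (the bodies are assumptions; error checking omitted):

void AQPlayer::StartQueue()
{
    mIsDone = false;
    // (In practice the buffers are typically primed by calling the callback
    // once for each of them before starting.)
    AudioQueueStart( mQueue, NULL );   // NULL: start as soon as possible
}

void AQPlayer::StopQueue()
{
    AudioQueueStop( mQueue, true );    // true: stop immediately (synchronous)
}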
2. Audio Units
i. Introduction to Audio Units
Audio units are a service used for audio processing at a lower level in iOS. An individual audio unit takes audio input and produces audio output, and units may be connected into an audio processing graph, in which each unit passes its output to the next. For example, it is possible to pass audio to an EQ unit, the output of which is passed to a mixer unit, whose output is passed to an I/O (input/output) unit, which gives it to the hardware to turn into sound. The graph structure is similar to the layout of an audio hardware setup, which makes it easy to visualize the flow of audio in an application. Alternatively, it is possible to use just a single audio unit, or to connect units without using a processing graph.

In Mac OS, audio unit programming is a much broader topic because it is possible to create audio units. iOS programming, however, simply makes use of audio units that already exist. Since the audio units provide access to the low-level audio data, it is possible to process that audio in any desired way. In iOS 4, there are seven audio units available, which handle EQ, mixing, I/O, voice processing, and format conversion. Normally an app will have a very simple audio processing graph with just a few audio units.

A great resource for understanding audio unit programming for iOS is the Audio Unit Hosting Guide in the iOS Developer Library. This document covers the details of using audio units for iOS. (The Audio Unit Programming Guide in the Mac OS X Developer Library does not have as much useful information for iOS programming.) Documentation of specific functions can be found in the Audio Unit Framework Reference.

Apple's MixerHost sample application is a great example of using audio units. This guide will focus on that sample code along with some discussion of other sample code like aurioTouch. For an example like MixerHost but with a more complex audio processing graph, take a look at the iPhoneMixerEQGraphTest example.

Audio units and other structures necessary for using them are defined in AudioComponent.h, AUGraph.h and other headers included from AudioToolbox.h in the AudioToolbox framework. Similarly, necessary
functions for handling audio sessions are found in AVFoundation.h in the AVFoundation framework (an application can just include AudioToolbox/AudioToolbox.h and AVFoundation/AVFoundation.h). It should be noted that many of the functions and structures used by audio units have the AU prefix.
Interface
The MixerHostAudio class is a subclass of NSObject. So, as expected, it implements the init and dealloc methods, which handle proper initialization and destruction. It also has several instance methods, some for setup and some for interfacing with the rest of the application.

The methods that are important when using a MixerHostAudio object, as can be seen from how it is used by the MixerHostViewController, are for initialization (init), starting and stopping the audio processing graph (startAUGraph and stopAUGraph), and setting the state and parameters of the mixer unit (enableMixerInput:isOn:, setMixerOutputGain:, and setMixerInput:gain:).

The other methods are used by the MixerHostAudio object itself for initialization:
- obtainSoundFileURLs
- setupAudioSession
- setupStereoStreamFormat
- setupMonoStreamFormat
- readAudioFilesIntoMemory
- configureAndInitializeAudioProcessingGraph
and for logging messages conveniently:
- printASBD:
- printErrorMessage:

So the interface for the MixerHostAudio class is simpler than it seems at first glance: initialize it, start and stop its processing graph, and set the parameters for its mixer audio unit.
Member Variables
To keep track of state in MixerHostAudio, there are several member variables. First, there is a custom struct (soundStruct) defined, which contains the necessary information to pass to the callback function:
typedef struct {
    BOOL isStereo;
    UInt32 frameCount;
    UInt32 sampleNumber;
    AudioUnitSampleType *audioDataLeft;
    AudioUnitSampleType *audioDataRight;
} soundStruct, *soundStructPtr;

Variable descriptions:
isStereo - Whether the input data has a right channel.
frameCount - Total frames in the input data.
sampleNumber - The index of the current sample in the input data.
audioDataLeft - The audio data from the left channel (or the only channel, for mono data).
audioDataRight - The audio data from the right channel (or NULL for mono data).

The member variables of MixerHostAudio are:

Float64 graphSampleRate;
CFURLRef sourceURLArray[NUM_FILES];
soundStruct soundStructArray[NUM_FILES];
AudioStreamBasicDescription stereoStreamFormat;
AudioStreamBasicDescription monoStreamFormat;
AUGraph processingGraph;
BOOL playing;
BOOL interruptedDuringPlayback;
AudioUnit mixerUnit;

Here's a brief description of each:
Float64 graphSampleRate - The sample rate in Hz used by the audio processing graph.
CFURLRef sourceURLArray[NUM_FILES] - An array of URLs for the input audio files used for playback.
soundStruct soundStructArray[NUM_FILES] - An array of soundStructs, which are defined above. One soundStruct is kept for each file in order to keep track of all the necessary data separately.
AudioStreamBasicDescription stereoStreamFormat - The audio data format for a stereo file. Along with the number of channels, this includes information like sample rate, bit depth, and so on.
AudioStreamBasicDescription monoStreamFormat - The audio data format for a mono file.
AUGraph processingGraph - The audio processing graph that contains the Multichannel Mixer audio unit and the Remote I/O unit. The graph does all the audio processing.
BOOL playing - Whether audio is currently playing.
BOOL interruptedDuringPlayback - Whether the audio session has been interrupted (by a phone call, for example). This flag allows the audio session to be restarted after returning from an interruption.
AudioUnit mixerUnit - The Multichannel Mixer audio unit. A reference to it is kept as a member variable in order to change its parameters later, unlike the Remote I/O unit, which doesn't need to be referenced after it is added to the processing graph.
The details of the audio unit setup are covered in the Audio Unit Initialization section of this document.
The callback's user data pointer is set to the member variable soundStructArray so the necessary information that's been stored there can be accessed. The details of what the callback does are covered in the Audio Unit Callback Functionality section of this document.
iii. Initialization
To initialize the MixerHostAudio object and its audio processing graph, it is necessary to do several things. Initialization is grouped into several methods in MixerHostAudio to keep things organized. (For brevity, the code shown here leaves out the error checking that the sample code implements.)
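The beginning of setupAudioSession is not reproduced here. It obtains the shared audio session, sets its delegate and category, and requests a preferred sample rate; a sketch under those assumptions (the category and sample rate values are illustrative):

NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];   // the app's singleton session
[mySession setDelegate: self];                                 // receive interruption callbacks
[mySession setCategory: AVAudioSessionCategoryPlayback         // playback-only category
                 error: &audioSessionError];
self.graphSampleRate = 44100.0;                                // preferred sample rate, in Hz
[mySession setPreferredHardwareSampleRate: graphSampleRate
                                    error: &audioSessionError];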
Now activate the audio session and then match the graph sample rate to the one the hardware is actually using.

[mySession setActive: YES error: &audioSessionError];
self.graphSampleRate = [mySession currentHardwareSampleRate];

Finally, register a property listener, just as was done with an audio queue. Here it is possible to detect audio route changes (like plugging in headphones) so that playback can be stopped when that happens.

AudioSessionAddPropertyListener (
    kAudioSessionProperty_AudioRouteChange,
    audioRouteChangeListenerCallback,
    self );

Variable descriptions:
kAudioSessionProperty_AudioRouteChange - The property of interest, in this case the audio route change property.
audioRouteChangeListenerCallback - A function that has been defined and will be called when the property changes.
self - A pointer to any data for the callback to take as an argument, in this case the MixerHostAudio object.
Property listeners
Just as with audio queues, an audio session's property listeners will be called when some property of the session changes. The function that was just registered, audioRouteChangeListenerCallback(), is called when the audio session's AudioRouteChange property changes, e.g. when headphones are plugged into the device. The function definition looks like this:

void audioRouteChangeListenerCallback (
    void *inUserData,
    AudioSessionPropertyID inPropertyID,
    UInt32 inPropertyValueSize,
    const void *inPropertyValue )
{
}
Variable descriptions:
void* inUserData - A pointer to any struct or object containing the desired information, in this case the MixerHostAudio object.
AudioSessionPropertyID inPropertyID - A value specifying the property that has changed.
UInt32 inPropertyValueSize - The size of the property value.
const void *inPropertyValue - A pointer to the property value.

In the body of the function, check whether playback should stop based on what type of audio route change occurred. First, make sure this is a route change message, then get the MixerHostAudio object with a C-style cast of inUserData.

if (inPropertyID != kAudioSessionProperty_AudioRouteChange) return;
MixerHostAudio *audioObject = (MixerHostAudio *) inUserData;

If sound is not playing, do nothing. Otherwise, check the reason for the route change and stop playback if necessary by posting a notification that the view controller will receive. There's some fairly complicated stuff going on with types here, but it won't be explored in detail.
if (NO == audioObject.isPlaying) {
    return;
} else {
    CFDictionaryRef routeChangeDictionary = inPropertyValue;
    CFNumberRef routeChangeReasonRef =
        CFDictionaryGetValue (routeChangeDictionary,
                              CFSTR (kAudioSession_AudioRouteChangeKey_Reason));
    SInt32 routeChangeReason;
    CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);

    if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable) {
        NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
            @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
        [[NSNotificationCenter defaultCenter]
            postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
                          object: self];
    }
}
Then set all the values in the AudioStreamBasicDescription.

stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
stereoStreamFormat.mFramesPerPacket  = 1;
stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
stereoStreamFormat.mSampleRate       = graphSampleRate;

Variable descriptions:
mFormatID - An identifier tag for the format. Linear PCM (pulse-code modulation) is very common; it's the format for .wav and .aif files, for example. Here it is used for the .caf files Apple has included.
mFormatFlags - Flags that hold additional details about the format. In this case, the canonical audio unit sample format is being used to avoid extraneous conversion.
mBytesPerPacket - The number of bytes in each packet of audio. A packet is just a small chunk of data whose size is determined by the format and is the smallest readable piece of data. For uncompressed linear PCM data (which is the data type here) each packet has just one sample, so this is set to the number of bytes per sample.
mFramesPerPacket - The number of frames in each packet of audio. A frame is just a group of samples that happen at the same instant, so in stereo there are two samples in each frame. Again, for uncompressed audio this value is one; frames are read one at a time from the file.
mBytesPerFrame - The number of bytes in each frame. Even though each frame has two samples in stereo, this will again be the number of bytes per sample. This is because the data is noninterleaved: each channel is held in a separate array, rather than having a single array in which samples alternate between the two channels.
mChannelsPerFrame - The number of channels in the audio: one for mono, two for stereo, etc.
mBitsPerChannel - The bit depth of each channel, that is, how many bits represent a single sample. Using AudioUnitSampleType, just convert from bytes to bits by multiplying by 8.
mSampleRate - The sample rate of the format in frames per second (Hz). This is the value obtained from the hardware, which may or may not be the preferred sample rate.
1. First, get the total number of frames in the audio file, so that memory can be properly allocated for the data and so it's possible to keep track of location when reading it. Store the value in the soundStruct that corresponds with the file.
UInt64 totalFramesInFile = 0;
UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);
result = ExtAudioFileGetProperty (
             audioFileObject,
             kExtAudioFileProperty_FileLengthFrames,
             &frameLengthPropertySize,
             &totalFramesInFile);
soundStructArray[audioFile].frameCount = totalFramesInFile;

Variable descriptions:
audioFileObject - The file, the ExtAudioFileRef from above.
kExtAudioFileProperty_FileLengthFrames - The property of interest, in this case the file length in frames.
&frameLengthPropertySize - Pointer to the size of the property of interest.
&totalFramesInFile - Pointer to the variable where this value should be stored.
2. Now get the channel count in the file for correct playback. Store the value in a variable channelCount, which will be used shortly.

AudioStreamBasicDescription fileAudioFormat = {0};
UInt32 formatPropertySize = sizeof (fileAudioFormat);
result = ExtAudioFileGetProperty (
             audioFileObject,
             kExtAudioFileProperty_FileDataFormat,
             &formatPropertySize,
             &fileAudioFormat);
UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
Variable descriptions:
audioFileObject - The file, the ExtAudioFileRef from above.
kExtAudioFileProperty_FileDataFormat - The property of interest, in this case the data format description.
&formatPropertySize - Pointer to the size of the property of interest.
&fileAudioFormat - Pointer to the variable where this value should be stored, here an AudioStreamBasicDescription.
Allocating memory
Now that the channel count is known, memory can be allocated to store the audio data.
1. First, allocate the left channel, whose size is just the number of samples times the size of each sample.

soundStructArray[audioFile].audioDataLeft =
    (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
2. Next, if the file is stereo, allocate the right channel in the same way. Also set the isStereo value in the soundStruct and store the appropriate audio data format in a variable importFormat, which will be used below.

AudioStreamBasicDescription importFormat = {0};
if (2 == channelCount) {
    soundStructArray[audioFile].isStereo = YES;
    soundStructArray[audioFile].audioDataRight =
        (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
    importFormat = stereoStreamFormat;
} else if (1 == channelCount) {
    soundStructArray[audioFile].isStereo = NO;
    importFormat = monoStreamFormat;
}
3. Finally, set the client data format on audioFileObject so that it will use the correct format when the audio data is copied into each file's soundStruct.

result = ExtAudioFileSetProperty (
             audioFileObject,
             kExtAudioFileProperty_ClientDataFormat,
             sizeof (importFormat),
             &importFormat );
Setting up a buffer list
To read the audio files into memory, first set up an AudioBufferList. As Apple's comments explain, this object gives the ExtAudioFileRead function the correct configuration for adding data to the buffer, and it points to the memory that should be written to, which was allocated in the soundStruct.

1. First, allocate the buffer list and set its channel count.

AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc (
    sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1) );
bufferList->mNumberBuffers = channelCount;
2. Create an empty buffer as a placeholder and place it at each index in the buffer array.

AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
    bufferList->mBuffers[arrayIndex] = emptyBuffer;
}
3. Finally, set the properties of each buffer: the channel count, the size of the data, and where the data will actually be stored.

bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

if (2 == channelCount) {
    bufferList->mBuffers[1].mNumberChannels = 1;
    bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
    bufferList->mBuffers[1].mData = soundStructArray[audioFile].audioDataRight;
}
With the buffers all set up and ready to be filled, the audio data can be read in from the audio files using ExtAudioFileRead.

UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;
result = ExtAudioFileRead (
             audioFileObject,
             &numberOfPacketsToRead,
             bufferList );
free (bufferList);

After that, along with error checking, all that needs to be done is to set the sample index to 0 and get rid of audioFileObject.

soundStructArray[audioFile].sampleNumber = 0;
ExtAudioFileDispose (audioFileObject);
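The graph itself has to be created before any nodes can be added. The original listing for that first step is not included here; it is a single NewAUGraph() call, roughly:

// 1. Create a new, empty audio processing graph (sketch; error checking omitted).
OSStatus result = NewAUGraph (&processingGraph);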
2. Next, set up an audio component description for each of the two units, the Remote I/O unit and the Multichannel Mixer unit.

AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;

AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
Variable descriptions:
componentType - The general type of the audio unit. The project has an output unit and a mixer unit.
componentSubType - The specific audio unit subtype. The project has a Remote I/O unit and a Multichannel Mixer unit.
componentManufacturer - The manufacturer of the audio unit. All the audio units usable in iOS are made by Apple.
componentFlags - Always 0.
componentFlagsMask - Always 0.
3. Then add the nodes to the processing graph. AUGraphAddNode takes the processing graph, a pointer to the description of a node, and a pointer to an AUNode in which to store the added node.

AUNode iONode;
AUNode mixerNode;
result = AUGraphAddNode ( processingGraph, &iOUnitDescription, &iONode );
result = AUGraphAddNode ( processingGraph, &MixerUnitDescription, &mixerNode );
4. Open the graph to prepare it for processing. Once it's been opened, the audio units have been instantiated, so it's possible to retrieve the Multichannel Mixer unit using AUGraphNodeInfo.

result = AUGraphOpen (processingGraph);
result = AUGraphNodeInfo ( processingGraph, mixerNode, NULL, &mixerUnit );
Some setup needs to be done with the mixer unit.

1. Set the number of buses and set properties separately for the guitar and beats buses.

UInt32 busCount = 2;
UInt32 guitarBus = 0;
UInt32 beatsBus = 1;
result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_ElementCount,
             kAudioUnitScope_Input,
             0,
             &busCount,
             sizeof (busCount));

Variable descriptions:
mixerUnit - The audio unit a property is being set in.
kAudioUnitProperty_ElementCount - The desired property to set is the number of elements.
kAudioUnitScope_Input - The scope of the property. Audio units normally have input scope, output scope, and global scope. Here there should be two input buses, so input scope is used.
0 - The element to set the property in.
&busCount - Pointer to the value being set.
sizeof (busCount) - Size of the value being set.
2. Now increase the maximum frames per slice so that larger slices can be used when the screen is locked.

UInt32 maximumFramesPerSlice = 4096;
result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_MaximumFramesPerSlice,
             kAudioUnitScope_Global,
             0,
             &maximumFramesPerSlice,
             sizeof (maximumFramesPerSlice) );
3. Next, attach the input callback to each input bus in the mixer node. The input buses on the mixer are the only place where audio will enter the processing graph.

for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
    AURenderCallbackStruct inputCallbackStruct;
    inputCallbackStruct.inputProc = &inputRenderCallback;
    inputCallbackStruct.inputProcRefCon = soundStructArray;
    result = AUGraphSetNodeInputCallback (
                 processingGraph,
                 mixerNode,
                 busNumber,
                 &inputCallbackStruct );
}

The inputProcRefCon variable is what will be passed to the callback; it can be a pointer to anything. Here it is set to soundStructArray because that's where the data the callback will need is being kept. inputRenderCallback is a static C function which will be defined in MixerHostAudio.m.
4. Now set the stream formats for the two input buses.

result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_StreamFormat,
             kAudioUnitScope_Input,
             guitarBus,
             &stereoStreamFormat,
             sizeof (stereoStreamFormat) );
result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_StreamFormat,
             kAudioUnitScope_Input,
             beatsBus,
             &monoStreamFormat,
             sizeof (monoStreamFormat) );
Now that the nodes are set up, it's possible to connect them and initialize the graph.

result = AUGraphConnectNodeInput ( processingGraph, mixerNode, 0, iONode, 0 );
result = AUGraphInitialize (processingGraph);

Variable descriptions:
processingGraph - The audio processing graph.
mixerNode - The mixer unit node, which is the source node.
0 - The output bus number of the source node. The mixer unit has only one output bus, bus 0.
iONode - The I/O unit node, which is the destination node.
0 - The input bus number of the destination node. The I/O unit has only one input bus, bus 0.
static OSStatus inputRenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData )
{
}

Variable descriptions:
void* inRefCon - A pointer to any struct or object used to manage state. Typically this will be either a custom C struct or an Objective-C or C++ object. In this case, it will be a pointer to the soundStruct array, which has the necessary data for the callback.
AudioUnitRenderActionFlags *ioActionFlags - Unused here. This can be used to mark areas of silence when generating sound.
const AudioTimeStamp* inTimeStamp - Unused here.
UInt32 inBusNumber - The number of the bus which is calling the callback.
UInt32 inNumberFrames - The number of frames being requested.
AudioBufferList *ioData - The buffer list which holds the buffers to fill.
Also grab some useful data from the sound struct: the total frame count and whether the data is stereo.

UInt32 frameTotalForSound = soundStructPointerArray[inBusNumber].frameCount;
BOOL isStereo = soundStructPointerArray[inBusNumber].isStereo;
Filling a Buffer
Now to fill the buffer with the audio data stored in the soundStruct array. First, set up pointers to the input data and the output buffers.

AudioUnitSampleType *dataInLeft;
AudioUnitSampleType *dataInRight;
dataInLeft  = soundStructPointerArray[inBusNumber].audioDataLeft;
dataInRight = soundStructPointerArray[inBusNumber].audioDataRight;

AudioUnitSampleType *outSamplesChannelLeft;
AudioUnitSampleType *outSamplesChannelRight;
outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;
Next, get the sample number to start reading from, which is stored in the soundStruct array, and copy the data from the input buffers into the output buffers. When the sample number reaches the end of the file, set it to 0 so that playback will loop.
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    outSamplesChannelLeft[frameNumber] = dataInLeft[sampleNumber];
    if (isStereo) outSamplesChannelRight[frameNumber] = dataInRight[sampleNumber];
    sampleNumber++;
    if (sampleNumber >= frameTotalForSound) sampleNumber = 0;
}
soundStructPointerArray[inBusNumber].sampleNumber = sampleNumber;

That's all the callback needs to do: it just fills a buffer whenever it's called. The processing graph will call the callback at the right times to keep the audio playing smoothly.
Processing
After filling the audio buffer, just as in an audio queue callback, it's possible to process the audio directly by manipulating the values in the buffer. And if the application uses synthesis rather than playback, the buffer can be filled directly without using a file. This is also where any iZotope effects would be applied. Details regarding the implementation of iZotope effects are distributed along with our effect libraries.
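As a trivial illustration (a sketch, not part of the MixerHost sample), the samples just written could be attenuated in place. AudioUnitSampleType is an 8.24 fixed-point value on iOS, so halving the level can be done with a right shift:

// Halve the level of the samples just copied into the output buffers (sketch).
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    outSamplesChannelLeft[frameNumber] = outSamplesChannelLeft[frameNumber] >> 1;
    if (isStereo) outSamplesChannelRight[frameNumber] = outSamplesChannelRight[frameNumber] >> 1;
}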
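Enabling Mixer Input

Mixer parameters are set with AudioUnitSetParameter(). The first such use in MixerHost turns an input bus on or off; the original listing for enableMixerInput:isOn: is not reproduced here, but a sketch of what it does, using the mixer's kMultiChannelMixerParam_Enable parameter, looks like this (inputBus and isOn are the method's arguments):

// Turn the given mixer input bus on or off (sketch; error checking omitted).
OSStatus result = AudioUnitSetParameter (
                      mixerUnit,
                      kMultiChannelMixerParam_Enable,
                      kAudioUnitScope_Input,
                      inputBus,
                      isOn ? 1 : 0,
                      0 );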
Setting Gain
To set the input gain on the two input buses of the mixer, again use AudioUnitSetParameter(). The arguments are identical except for the parameter ID, which is now kMultiChannelMixerParam_Volume, and the parameter value to set, which is an argument to this method, newGain.

OSStatus result = AudioUnitSetParameter (
                      mixerUnit,
                      kMultiChannelMixerParam_Volume,
                      kAudioUnitScope_Input,
                      inputBus,
                      newGain,
                      0 );

AudioUnitSetParameter() is also used to set the output gain of the mixer. The arguments are similar to those for input gain, but the scope is now kAudioUnitScope_Output and the element number is 0, because there is only one output element while there are two input buses.

OSStatus result = AudioUnitSetParameter (
                      mixerUnit,
                      kMultiChannelMixerParam_Volume,
                      kAudioUnitScope_Output,
                      0,
                      newGain,
                      0 );
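Starting and stopping playback is a matter of starting and stopping the processing graph. The startAUGraph method is essentially a single call (a sketch; error checking omitted):

// Begin pulling audio through the processing graph.
OSStatus result = AUGraphStart (processingGraph);
self.playing = YES;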
And to stop it, stopAUGraph first checks whether the graph is running, and stops it if it is.

Boolean isRunning = false;
OSStatus result = AUGraphIsRunning (processingGraph, &isRunning);
if (isRunning) {
    result = AUGraphStop (processingGraph);
    self.playing = NO;
}
Handling Interruptions
Because it is declared as implementing the <AVAudioSessionDelegate> protocol, MixerHostAudio has methods to handle interruptions to the audio stream, like phone calls and alarms. Whether the audio session has been interrupted while playing is tracked in the boolean member variable interruptedDuringPlayback.

In beginInterruption, check whether sound is currently playing. If it is, set interruptedDuringPlayback and post a notification that the view controller will see.

if (playing) {
    self.interruptedDuringPlayback = YES;
    NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
        @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
    [[NSNotificationCenter defaultCenter]
        postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
                      object: self];
}

The MixerHostViewController has registered to receive these notifications in its registerForAudioObjectNotifications method. When it receives such a notification, it will stop playback.

When the audio session resumes from an interruption, endInterruptionWithFlags: is called. The flags value is a series of bits
in which each bit can be a single flag, so it can be checked against AVAudioSessionInterruptionFlags_ShouldResume with a bitwise and, &.

if (flags & AVAudioSessionInterruptionFlags_ShouldResume) {}

If the session should resume, reactivate the audio session:

[[AVAudioSession sharedInstance] setActive: YES error: &endInterruptionError];

Then update the interruptedDuringPlayback value and post a notification to resume playback if necessary.

if (interruptedDuringPlayback) {
    self.interruptedDuringPlayback = NO;
    NSString *MixerHostAudioObjectPlaybackStateDidChangeNotification =
        @"MixerHostAudioObjectPlaybackStateDidChangeNotification";
    [[NSNotificationCenter defaultCenter]
        postNotificationName: MixerHostAudioObjectPlaybackStateDidChangeNotification
                      object: self];
}
vii. aurioTouch
The aurioTouch sample application also uses audio units, but in a different way. First of all, it uses the Remote I/O unit for input as well as output, and this is the only unit that is used. (The aurioTouch project is also not quite as neat and well-commented as MixerHost, so it may be a little harder to understand.) This document will cover the structure of how aurioTouch works but won't dive too deep into its complexities.
Structure
Most of the audio functionality of aurioTouch is in aurioTouchAppDelegate.mm. This is where an interruption listener rioInterruptionListener(), a property listener propListener(), and an audio unit callback PerformThru() can be found. Most audio and graphics initialization takes place in applicationDidFinishLaunching:, while setup for the Remote I/O unit takes place in aurio_helper.cpp, in the SetupRemoteIO function. Most of the other code in the app is for its fast Fourier transform (FFT) functionality and for graphics.
Now it initializes the audio session, sets some desired properties, and activates it. Unlike MixerHost, the session category is play-and-record because input audio is being used. Just as in MixerHost, preferred values can be requested (here a preferred I/O buffer duration), but the actual hardware values, such as the current sample rate, must be retrieved because there's no guarantee the preferences are honored.

AudioSessionInitialize (NULL, NULL, rioInterruptionListener, self);

UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty (
    kAudioSessionProperty_AudioCategory,
    sizeof (audioCategory),
    &audioCategory);

AudioSessionAddPropertyListener (
    kAudioSessionProperty_AudioRouteChange,
    propListener,
    self);

Float32 preferredBufferSize = .005;
AudioSessionSetProperty (
    kAudioSessionProperty_PreferredHardwareIOBufferDuration,
    sizeof (preferredBufferSize),
    &preferredBufferSize);

UInt32 size = sizeof (hwSampleRate);
AudioSessionGetProperty (
    kAudioSessionProperty_CurrentHardwareSampleRate,
    &size,
    &hwSampleRate);

AudioSessionSetActive (true);

Next it initializes the Remote I/O unit using SetupRemoteIO(), which will be covered in the next section. Hand it the audio unit, the callback, and the audio data format.
SetupRemoteIO(rioUnit, inputProc, thruFormat);
Then aurioTouch sets up custom objects for DC filtering and FFT processing, which won't be examined in detail here. Next it starts the Remote I/O unit and sets thruFormat to match the audio unit's output format.
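The earlier part of SetupRemoteIO() is not reproduced here. It finds and instantiates the Remote I/O unit, enables recording on its input element, and installs the render callback; a sketch under those assumptions (the parameter names inRemoteIOUnit and inRenderProc follow the call shown earlier):

// Find and create the Remote I/O audio unit (sketch).
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;

AudioComponent comp = AudioComponentFindNext (NULL, &desc);
AudioComponentInstanceNew (comp, &inRemoteIOUnit);

// Enable input on the unit's input element (element 1); output is enabled by default.
UInt32 one = 1;
AudioUnitSetProperty (inRemoteIOUnit, kAudioOutputUnitProperty_EnableIO,
                      kAudioUnitScope_Input, 1, &one, sizeof (one));

// Install the render callback that will supply and process the audio.
AudioUnitSetProperty (inRemoteIOUnit, kAudioUnitProperty_SetRenderCallback,
                      kAudioUnitScope_Input, 0, &inRenderProc, sizeof (inRenderProc));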
Finally, set the format for both input and output and initialize the unit. SetAUCanonical() is a function from CAStreamBasicDescription.h which creates a canonical AU format with the given number of channels; false means it won't be interleaved.

outFormat.SetAUCanonical (2, false);
outFormat.mSampleRate = 44100;
AudioUnitSetProperty (inRemoteIOUnit,
                      kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Input,
                      0,
                      &outFormat,
                      sizeof (outFormat));
AudioUnitSetProperty (inRemoteIOUnit,
                      kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Output,
                      1,
                      &outFormat,
                      sizeof (outFormat));
AudioUnitInitialize (inRemoteIOUnit);
for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
    THIS->dcFilter[i].InplaceFilter ((SInt32*) (ioData->mBuffers[i].mData), inNumberFrames, 1);

Then pass the buffer of audio to drawing functions depending on which drawing mode is enabled. These functions will not alter the audio data; they only read it to draw the oscilloscope display or the FFT.

if (THIS->displayMode == aurioTouchDisplayModeOscilloscopeWaveform) {
}
else if ((THIS->displayMode == aurioTouchDisplayModeSpectrum) ||
         (THIS->displayMode == aurioTouchDisplayModeOscilloscopeFFT)) {
}

Finally, check whether the mute member variable is on. If it is, silence the data in the buffer using the SilenceData() function, which just zeroes out the buffers. The buffer is passed through the Remote I/O unit and is played, whether it contains silence or normal audio.

if (THIS->mute == YES) {
    SilenceData (ioData);
}
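SilenceData() itself is a small helper. A sketch of the idea, zeroing every buffer in the list (memset is from <string.h>):

// Zero out every buffer in an AudioBufferList (sketch of a SilenceData-style helper).
void SilenceData (AudioBufferList *inData)
{
    for (UInt32 i = 0; i < inData->mNumberBuffers; ++i)
        memset (inData->mBuffers[i].mData, 0, inData->mBuffers[i].mDataByteSize);
}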
3. Glossary
ADC - An analog-to-digital converter, a device that will sample an analog sound at a fixed sampling rate and convert it into a digital signal.

amplitude - The strength of vibration of a sound wave; roughly speaking, the volume of a sound at a particular moment.

analog - A continuous real-world signal rather than discrete numbers. An analog signal must be sampled through an ADC in order to be converted to a digital signal that may be manipulated through processing in a computer.

audio queue - A Core Audio service for working with audio at a high level. Consists of a short list of buffers that are passed to the hardware and then reused. The audio queue object will call the callback whenever it needs a buffer filled.

audio unit - A Core Audio service for working with audio at a low level. An individual audio unit is a component, like a mixer, an equalizer, or an I/O unit, which takes audio input and produces audio output. Audio units are plugged into one another, creating a chain of audio processing components that provide real-time audio processing.

bit - A binary digit. The smallest computational unit provided by a computer. A bit has only two states, on or off (1 or 0), and represents a single piece of binary data.

bit depth - The total possible size in bits of each sample in a signal. For example, an individual sample might consist of a signed 16-bit integer, where 32,767 (2^15 - 1) is the maximum amplitude and -32,768 is the minimum.

buffer - A short list of samples whose length is convenient for processing. For live audio processing, buffers are typically small, a few hundred to a few thousand samples (usually in powers of two), in order to minimize latency. For offline processing, buffers may be larger. Buffers are time efficient because processing functions are only called when a buffer is needed, and they're space efficient because they don't occupy a huge amount of memory. Typical buffer sizes are 256, 512, 1024 and 2048 samples, though they may vary depending on the application.
callback - A function you write that the system calls. When working with audio, callbacks are used to provide buffers whenever they are requested. Callbacks for audio input will have a few arguments, including a pointer to a buffer that should be filled and an object or struct containing additional information. The callback is where an audio buffer is handed off to an iZotope effect for processing, and upon return that audio is passed to the audio hardware for output.

channel - An individual audio signal in a stream. Channels are independent of one another. There may be one or many channels in an audio stream or file; for example, a mono sound has one channel and a stereo sound has two channels.

channel count - The number of individual channels in an input stream.

Core Audio - The general API for audio in Mac OS and iOS, which includes audio queues and audio units along with a large base of other functionality for working with audio.

DAC - A digital-to-analog converter. A device which reconstructs a continuous signal from individual samples by interpolating what should be between the samples.

deinterleaved - See interleaved.

digital - Consisting of discrete numbers rather than a continuous signal.

fixed-point - A system for representing numbers that are essentially integers. This data type has a fixed number of digits after the decimal point (radix point). A typical fixed-point number may look like 12345.

floating-point - A system for representing numbers that include a decimal place. Floating point refers to the fact that the decimal place can be placed anywhere relative to the significant digits of a number. Floating-point numbers can typically represent a larger range of values than their fixed-point counterparts at the expense of additional memory. A floating-point number may look like 123.45 or 1.2345.
frame - A set of samples that occur at the same time. For example, stereo audio will have two samples in each frame. In stereo each frame holds one left (L) and one right (R) sample, and a group of consecutive frames makes up a buffer or packet:

| L R | L R | L R | L R |   <- each L R pair is one frame; together they form a buffer or packet
frequency - How often an event occurs. For sounds, this refers to the frequency of oscillation of the sound wave. Frequency is measured in Hertz.

Hertz or Hz - A unit of frequency: cycles per second.

interleaved and deinterleaved - These terms refer to the way samples are organized by channel in audio data. Interleaved audio data is data in which samples are stored in a single one-dimensional array. It's called interleaved because the audio alternates between channels, providing the first sample of the first channel followed by the first sample of the second channel and so on. So, where L is a sample from the left channel and R is a sample from the right, it's organized as follows:
L0 R0 L1 R1 L2 R2 L3 R3 L4 R4 L5 R5 L6 R6 L7 R7
In contrast, a deinterleaved or noninterleaved audio stream is one in which the channels are separated from one another and each channel is represented as its own contiguous group of samples. A deinterleaved audio stream may be stored in a two-dimensional array with each dimension consisting of a single channel:
L0 L1 L2 L3 L4 L5 L6 L7
R0 R1 R2 R3 R4 R5 R6 R7
latency - Time delay due to processing. This refers to the amount of time that passes between when audio is sent to a processing function and when that audio is output. Any audio processing system will have some amount of inherent delay due to the amount of time it takes to process the audio. Minimizing latency is desirable for real-time audio applications.

noninterleaved - See interleaved.

packet - A small chunk of audio that is read from a file.

sample - A single number indicating the amplitude of a sound at a particular moment. Digital audio is represented as a series of samples at some sampling rate, such as 44,100 or 48,000 Hz, and with some bit depth, such as 16 bits.

sampling rate - The frequency with which samples of a sound are taken. Typical values are 22,050, 44,100 or 48,000 samples per second (Hz). Your trusty old compact disc player hums along at a sampling rate of 44.1 kHz, or 44,100 Hz.
4. Additional Help and Resources

General iOS Help
The iOS Developer Library: http://developer.apple.com/library/ios/navigation/
The Apple Developer Forums: http://developer.apple.com/devforums/
Once the signal is digital, a computer can manipulate it in a variety of ways through digital signal processing. DSP is simply the mathematics used for effects, noise reduction, metering, and a wide array of other audio applications. After the audio is processed through DSP algorithms, it must be converted to sound through a digital to analog converter, or DAC. The DAC will reconstruct a continuous waveform by interpolating the values it expects should be between the samples and playing that audio back through speakers or headphones.