Web Audio API
Copyright 2013-2015 W3C (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply.
Abstract
This specification describes a high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is
of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering. The actual
processing will primarily take place in the underlying implementation (typically optimized Assembly / C / C++ code), but direct JavaScript
processing and synthesis is also supported.
This API is designed to be used in conjunction with other APIs and elements on the web platform, notably: XMLHttpRequest [XHR] (using
the responseType and response attributes). For games and interactive applications, it is anticipated to be used with the canvas 2D
[2dcontext] and WebGL [WEBGL] 3D graphics APIs.
This document was published by the Audio Working Group as a Working Draft. This document is intended to become a W3C
Recommendation. If you wish to make comments regarding this document, please send them to public-audio@w3.org (subscribe,
archives). All comments are welcome.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated,
replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any
patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An
individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in
accordance with section 6 of the W3C Patent Policy.
Table of Contents
Abstract
Status of This Document
Introduction
0.1 Features
0.1.1 Modular Routing
0.2 API Overview
1. Conformance
2. The Audio API
2.1 The BaseAudioContext Interface
2.1.1 Attributes
2.1.2 Methods
2.1.3 Callback DecodeSuccessCallback Parameters
2.1.4 Callback DecodeErrorCallback Parameters
2.1.5 Dictionary AudioContextOptions Members
2.1.6 Lifetime
2.1.7 Lack of introspection or serialization primitives
2.2 The AudioContext Interface
2.2.1 Methods
2.3 The OfflineAudioContext Interface
2.3.1 Attributes
2.3.2 Methods
2.3.3 The OfflineAudioCompletionEvent Interface
2.3.3.1 Attributes
2.4 The AudioNode Interface
2.4.1 Attributes
2.4.2 Methods
2.4.3 Lifetime
2.5 The AudioDestinationNode Interface
2.5.1 Attributes
2.6 The AudioParam Interface
2.6.1 Attributes
2.6.2 Methods
2.6.3 Computation of Value
2.6.4 AudioParam Automation Example
2.7 The GainNode Interface
2.7.1 Attributes
2.8 The DelayNode Interface
2.8.1 Attributes
2.9 The AudioBuffer Interface
2.9.1 Attributes
2.9.2 Methods
2.10 The AudioBufferSourceNode Interface
2.10.1 Attributes
2.10.2 Methods
2.10.3 Looping
2.11 The MediaElementAudioSourceNode Interface
2.11.1 Security with MediaElementAudioSourceNode and cross-origin resources
2.12 The AudioWorker interface
2.12.1 Attributes
2.12.2 Methods
2.12.3 The AudioWorkerNode Interface
2.12.3.1 Attributes
2.12.3.2 Methods
2.12.4 The AudioWorkerParamDescriptor Interface
2.12.4.1 Attributes
2.12.5 The AudioWorkerGlobalScope Interface
2.12.5.1 Attributes
2.12.5.2 Methods
2.12.6 The AudioWorkerNodeProcessor Interface
2.12.6.1 Attributes
2.12.6.2 Methods
2.12.7 Audio Worker Examples
2.12.7.1 A Bitcrusher Node
2.12.7.2 TODO: fix up this example. A Volume Meter and Clip Detector
2.12.7.3 Reimplementing ChannelMerger
2.13 The ScriptProcessorNode Interface - DEPRECATED
2.13.1 Attributes
2.14 The AudioWorkerNodeCreationEvent Interface
2.14.1 Attributes
2.15 The AudioProcessEvent Interface
2.15.1 Attributes
2.16 The AudioProcessingEvent Interface - DEPRECATED
2.16.1 Attributes
2.17 The PannerNode Interface
2.17.1 Attributes
2.17.2 Methods
2.17.3 Channel Limitations
2.18 The AudioListener Interface
2.18.1 Methods
2.19 The SpatialPannerNode Interface
2.19.1 Attributes
2.20 The SpatialListener Interface
2.20.1 Attributes
2.21 The StereoPannerNode Interface
2.21.1 Attributes
2.21.2 Channel Limitations
2.22 The ConvolverNode Interface
2.22.1 Attributes
2.22.2 Channel Configurations for Input, Impulse Response and Output
2.23 The AnalyserNode Interface
2.23.1 Attributes
2.23.2 Methods
2.23.3 FFT Windowing and smoothing over time
2.24 The ChannelSplitterNode Interface
2.25 The ChannelMergerNode Interface
2.26 The DynamicsCompressorNode Interface
2.26.1 Attributes
2.27 The BiquadFilterNode Interface
2.27.1 Attributes
2.27.2 Methods
2.27.3 Filters characteristics
2.28 The IIRFilterNode Interface
2.28.1 Methods
2.28.2 Filter Definition
2.29 The WaveShaperNode Interface
2.29.1 Attributes
2.30 The OscillatorNode Interface
2.30.1 Attributes
2.30.2 Methods
2.30.3 Basic Waveform Phase
2.31 The PeriodicWave Interface
2.31.1 PeriodicWaveConstraints
2.31.1.1 Dictionary PeriodicWaveConstraints Members
2.31.2 Waveform Generation
2.31.3 Waveform Normalization
2.31.4 Oscillator Coefficients
2.32 The MediaStreamAudioSourceNode Interface
2.33 The MediaStreamAudioDestinationNode Interface
2.33.1 Attributes
3. Mixer Gain Structure
3.1 Summing Inputs
3.2 Gain Control
3.3 Example: Mixer with Send Busses
4. Dynamic Lifetime
4.1 Background
4.2 Example
5. Channel up-mixing and down-mixing
5.1 Speaker Channel Layouts
5.2 Channel ordering
5.3 Up Mixing speaker layouts
5.4 Down Mixing speaker layouts
5.5 Channel Rules Examples
6. Audio Signal Values
7. Spatialization / Panning
7.1 Background
7.2 Azimuth and Elevation
7.3 Panning Algorithm
7.3.1 Equal-power panning
7.3.2 HRTF panning (stereo only)
7.4 Distance Effects
7.5 Sound Cones
7.6 Doppler Shift
8. Performance Considerations
8.1 Latency
8.2 Audio Buffer Copying
8.3 AudioParam Transitions
8.4 Audio Glitching
8.5 JavaScript Issues with Real-Time Processing and Synthesis:
9. Security Considerations
10. Privacy Considerations
11. Requirements and Use Cases
12. Acknowledgements
13. Web Audio API Change Log
A. References
A.1 Normative references
A.2 Informative references
Introduction
Audio on the web has been fairly primitive up to this point and until very recently has had to be delivered through plugins such as Flash and
QuickTime. The introduction of the audio element in HTML5 is very important, allowing for basic streaming audio playback. But, it is not
powerful enough to handle more complex audio applications. For sophisticated web-based games or interactive applications, another
solution is required. It is a goal of this specification to include the capabilities found in modern game audio engines as well as some of the
mixing, processing, and filtering tasks that are found in modern desktop audio production applications.
The APIs have been designed with a wide variety of use cases [webaudio-usecases] in mind. Ideally, it should be able to support any use
case which could reasonably be implemented with an optimized C++ engine controlled via JavaScript and run in a browser. That said,
modern desktop audio software can have very advanced capabilities, some of which would be difficult or impossible to build with this
system. Apple's Logic Audio is one such application which has support for external MIDI controllers, arbitrary plugin audio effects and
synthesizers, highly optimized direct-to-disk audio file reading/writing, tightly integrated time-stretching, and so on. Nevertheless, the
proposed system will be quite capable of supporting a large range of reasonably complex games and interactive applications, including
musical ones. And it can be a very good complement to the more advanced graphics features offered by WebGL. The API has been
designed so that more advanced capabilities can be added at a later time.
0.1 Features
Modular routing for simple or complex mixing/effect architectures, including multiple sends and submixes.
High dynamic range, using 32-bit floats for internal processing.
Sample-accurate scheduled sound playback with low latency for musical applications requiring a very high degree of rhythmic
precision such as drum machines and sequencers. This also includes the possibility of dynamic creation of effects.
Automation of audio parameters for envelopes, fade-ins / fade-outs, granular effects, filter sweeps, LFOs etc.
Flexible handling of channels in an audio stream, allowing them to be split and merged.
Processing of audio sources from an audio or video media element.
Processing live audio input using a MediaStream from getUserMedia().
Integration with WebRTC
Processing audio received from a remote peer using a MediaStreamAudioSourceNode and [webrtc].
Sending a generated or processed audio stream to a remote peer using a MediaStreamAudioDestinationNode and [webrtc].
Audio stream synthesis and processing directly in JavaScript.
Spatialized audio supporting a wide range of 3D games and immersive environments:
Panning models: equalpower, HRTF, pass-through
Distance Attenuation
Sound Cones
Obstruction / Occlusion
Doppler Shift
Source / Listener based
A convolution engine for a wide range of linear effects, especially very high-quality room effects. Here are some examples of possible
effects:
Small / large room
Cathedral
Concert hall
Cave
Tunnel
Hallway
Forest
Amphitheater
Sound of a distant room through a doorway
Extreme filters
Strange backwards effects
Extreme comb filter effects
Dynamics compression for overall control and sweetening of the mix
Efficient real-time time-domain and frequency analysis / music visualizer support
Efficient biquad filters for lowpass, highpass, and other common filters.
A Waveshaping effect for distortion and other non-linear effects
Oscillators
Modular routing allows arbitrary connections between different AudioNode objects. Each node can have inputs and/or outputs. A source
node has no inputs and a single output. A destination node has one input and no outputs, the most common example being
AudioDestinationNode, the final destination to the audio hardware. Other nodes such as filters can be placed between the source and
destination nodes. The developer doesn't have to worry about low-level stream format details when two objects are connected together; the
right thing just happens. For example, if a mono audio stream is connected to a stereo input it should just mix to left and right channels
appropriately.
In the simplest case, a single source can be routed directly to the output. All routing occurs within an AudioContext containing a single
AudioDestinationNode:
Illustrating this simple routing, here's an example playing a single sound:
EXAMPLE 1
var context = new AudioContext();
function playSound() {
var source = context.createBufferSource();
source.buffer = dogBarkingBuffer;
source.connect(context.destination);
source.start(0);
}
Here's a more complex example with three sources and a convolution reverb send with a dynamics compressor at the final output stage:
EXAMPLE 2
var context = 0;
var compressor = 0;
var reverb = 0;
var source1 = 0;
var source2 = 0;
var source3 = 0;
var lowpassFilter = 0;
var waveShaper = 0;
var panner = 0;
var dry1 = 0;
var dry2 = 0;
var dry3 = 0;
var wet1 = 0;
var wet2 = 0;
var wet3 = 0;
var masterDry = 0;
var masterWet = 0;
function setupRoutingGraph () {
context = new AudioContext();
// Create the effects and master nodes.
lowpassFilter = context.createBiquadFilter();
waveShaper = context.createWaveShaper();
panner = context.createPanner();
compressor = context.createDynamicsCompressor();
reverb = context.createConvolver();
masterDry = context.createGain();
masterWet = context.createGain();
// Connect the final compressor to the destination; the master dry and wet
// busses feed the compressor, and the reverb feeds the wet bus.
compressor.connect(context.destination);
masterDry.connect(compressor);
masterWet.connect(compressor);
reverb.connect(masterWet);
// Create a few sources.
source1 = context.createBufferSource();
source2 = context.createBufferSource();
source3 = context.createOscillator();
source1.buffer = manTalkingBuffer;
source2.buffer = footstepsBuffer;
source3.frequency.value = 440;
// Connect source1
dry1 = context.createGain();
wet1 = context.createGain();
source1.connect(lowpassFilter);
lowpassFilter.connect(dry1);
lowpassFilter.connect(wet1);
dry1.connect(masterDry);
wet1.connect(reverb);
// Connect source2
dry2 = context.createGain();
wet2 = context.createGain();
source2.connect(waveShaper);
waveShaper.connect(dry2);
waveShaper.connect(wet2);
dry2.connect(masterDry);
wet2.connect(reverb);
// Connect source3
dry3 = context.createGain();
wet3 = context.createGain();
source3.connect(panner);
panner.connect(dry3);
panner.connect(wet3);
dry3.connect(masterDry);
wet3.connect(reverb);
}
Modular routing also permits the output of AudioNodes to be routed to an AudioParam parameter that controls the behavior of a different
AudioNode. In this scenario, the output of a node can act as a modulation signal rather than an input signal.
Fig. 3 Modular routing illustrating one Oscillator modulating the frequency of another.
EXAMPLE 3
function setupRoutingGraph() {
  var context = new AudioContext();
  // Create the low frequency oscillator that supplies the modulation signal
  var lfo = context.createOscillator();
  lfo.frequency.value = 1.0;
  // Create a gain node whose gain determines the amplitude of the modulation signal
  var modulationGain = context.createGain();
  modulationGain.gain.value = 50;
  // Create the oscillator to be modulated (default frequency 440 Hz) and route the modulation into its frequency parameter
  var hfo = context.createOscillator();
  lfo.connect(modulationGain);
  modulationGain.connect(hfo.frequency);
  hfo.connect(context.destination);
  hfo.start(0);
  lfo.start(0);
}
0.2 API Overview
The interfaces defined are:
An AudioContext interface, which contains an audio signal graph representing connections between AudioNodes.
An AudioNode interface, which represents audio sources, audio outputs, and intermediate processing modules. AudioNodes can be
dynamically connected together in a modular fashion. AudioNodes exist in the context of an AudioContext.
An AudioDestinationNode interface, an AudioNode subclass representing the final destination for all rendered audio.
An AudioBuffer interface, for working with memory-resident audio assets. These can represent one-shot sounds, or longer audio
clips.
An AudioBufferSourceNode interface, an AudioNode which generates audio from an AudioBuffer.
A MediaElementAudioSourceNode interface, an AudioNode which is the audio source from an audio, video, or other media element.
A MediaStreamAudioSourceNode interface, an AudioNode which is the audio source from a MediaStream such as live audio input, or
from a remote peer.
A MediaStreamAudioDestinationNode interface, an AudioNode which is the audio destination to a MediaStream sent to a remote peer.
An AudioWorker interface representing a factory for creating custom nodes that can process audio directly in JavaScript.
An AudioWorkerNode interface, an AudioNode representing a node processed in an AudioWorker.
An AudioWorkerGlobalScope interface, the context in which AudioWorker processing scripts run.
An AudioWorkerNodeProcessor interface, representing a single node instance inside an audio worker.
An AudioParam interface, for controlling an individual aspect of an AudioNode's functioning, such as volume.
A GainNode interface, an AudioNode for explicit gain control. Because inputs to AudioNodes support multiple connections (as a unity-
gain summing junction), mixers can be easily built with GainNodes.
A BiquadFilterNode interface, an AudioNode for common low-order filters such as:
Low Pass
High Pass
Band Pass
Low Shelf
High Shelf
Peaking
Notch
Allpass
An IIRFilterNode interface, an AudioNode for a general IIR filter.
A DelayNode interface, an AudioNode which applies a dynamically adjustable variable delay.
A SpatialPannerNode interface, an AudioNode for positioning audio in 3D space.
A SpatialListener interface, which works with a SpatialPannerNode for spatialization.
A StereoPannerNode interface, an AudioNode for equal-power positioning of audio input in a stereo stream.
A ConvolverNode interface, an AudioNode for applying a real-time linear effect (such as the sound of a concert hall).
An AnalyserNode interface, an AudioNode for use with music visualizers, or other visualization applications.
A ChannelSplitterNode interface, an AudioNode for accessing the individual channels of an audio stream in the routing graph.
A ChannelMergerNode interface, an AudioNode for combining channels from multiple audio streams into a single audio stream.
A DynamicsCompressorNode interface, an AudioNode for dynamics compression.
A WaveShaperNode interface, an AudioNode which applies a non-linear waveshaping effect for distortion and other more subtle warming
effects.
An OscillatorNode interface, an AudioNode for generating a periodic waveform.
There are also several features that have been deprecated from the Web Audio API but not yet removed, pending implementation
experience of their replacements:
A PannerNode interface, an AudioNode for spatializing / positioning audio in 3D space. This has been replaced by SpatialPannerNode,
and StereoPannerNode for simpler scenarios.
An AudioListener interface, which works with a PannerNode for spatialization.
A ScriptProcessorNode interface, an AudioNode for generating or processing audio directly in JavaScript.
An AudioProcessingEvent interface, which is an event type used with ScriptProcessorNode objects.
1. Conformance
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-
normative. Everything else in this specification is normative.
The key words MUST, REQUIRED, and SHALL are to be interpreted as described in [RFC2119].
conforming implementation
A user agent is considered to be a conforming implementation if it satisfies all of the MUST-, REQUIRED- and SHALL-level criteria in this
specification that apply to implementations.
User agents that use ECMAScript to implement the APIs defined in this specification must implement them in a manner consistent with the
ECMAScript Bindings defined in the Web IDL specification [WEBIDL] as this specification uses that specification and terminology.
BaseAudioContext is not instantiated directly, but is instead extended by the concrete interfaces AudioContext (for real-time rendering) and
OfflineAudioContext (for offline rendering).
WebIDL
enum AudioContextState {
"suspended",
"running",
"closed"
};
Enumeration description
suspended This context is currently suspended (context time is not proceeding, audio hardware may be powered down/released).
running Audio is being processed.
closed This context has been released, and can no longer be used to process audio. All system audio resources have been released. Attempts to create new Nodes on this context will throw InvalidStateError. (AudioBuffers may still be created, through createBuffer or decodeAudioData.)
WebIDL
enum AudioContextPlaybackCategory {
"balanced",
"interactive",
"playback"
};
Enumeration description
balanced Balance audio output latency and stability/power consumption.
interactive Provide the lowest audio output latency possible without glitching. This is the default.
playback Prioritize sustained playback without interruption over audio output latency. Lowest power consumption.
WebIDL
dictionary AudioContextOptions {
AudioContextPlaybackCategory playbackCategory = "interactive";
};
2.1.1 Attributes
This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently
processed by the context's rendering graph. If the context's rendering graph has not yet processed a block of audio, then
currentTime has a value of zero.
In the time coordinate system of currentTime, the value of zero corresponds to the first sample-frame in the first block processed
by the graph. Elapsed time in this system corresponds to elapsed time in the audio stream generated by the BaseAudioContext,
which may not be synchronized with other clocks in the system. (For an OfflineAudioContext, since the stream is not being
actively played by any device, there is not even an approximation to real time.)
All scheduled times in the Web Audio API are relative to the value of currentTime.
When the BaseAudioContext is in the running state, the value of this attribute is monotonically increasing and is updated by the
rendering thread in uniform increments, corresponding to the audio block size of 128 samples. Thus, for a running context,
currentTime increases steadily as the system processes audio blocks, and always represents the time of the start of the next
audio block to be processed. It is also the earliest possible time when any change scheduled in the current state might take
effect.
An AudioDestinationNode with a single input representing the final destination for all audio. Usually this will represent the actual
audio hardware. All AudioNodes actively rendering audio will directly or indirectly connect to destination.
The sample rate (in sample-frames per second) at which the BaseAudioContext handles audio. It is assumed that all AudioNodes in
the context run at this rate. In making this assumption, sample-rate converters or "varispeed" processors are not supported in
real-time processing.
When the state is "suspended", a call to resume() will cause a transition to "running", or a call to close() will cause a transition to
"closed".
When the state is "running", a call to suspend() will cause a transition to "suspended", or a call to close() will cause a transition to
"closed".
2.1.2 Methods
close
Closes the audio context, releasing any system audio resources used by the BaseAudioContext. This will not automatically release
all BaseAudioContext-created objects, unless other references have been released as well; however, it will forcibly release any
system audio resources that might prevent additional AudioContexts from being created and used, suspend the progression of
the BaseAudioContext's currentTime, and stop processing audio data. The promise resolves when all AudioContext-creation-
blocking resources have been released. If this is called on OfflineAudioContext, then return a promise rejected with a
DOMException whose name is InvalidStateError.
No parameters.
Return type: Promise<void>
createAnalyser
Creates an AnalyserNode.
No parameters.
Return type: AnalyserNode
createAudioWorker
Creates an AudioWorker object and loads the associated script into an AudioWorkerGlobalScope, then resolves the returned
Promise.
Parameter Type Nullable Optional Description
scriptURL DOMString This parameter represents the URL of the script to be loaded as an
AudioWorker node factory. See AudioWorker section for more detail.
createBiquadFilter
Creates a BiquadFilterNode representing a second order filter which can be configured as one of several common filter types.
No parameters.
Return type: BiquadFilterNode
createBuffer
Creates an AudioBuffer of the given size. The audio data in the buffer will be zero-initialized (silent). A NotSupportedError
exception MUST be thrown if any of the arguments is negative, zero, or outside its nominal range.
Parameter Type Nullable Optional Description
numberOfChannels unsigned long Determines how many channels the buffer will have. An implementation must support at least 32 channels.
length unsigned long Determines the size of the buffer in sample-frames.
sampleRate float Describes the sample-rate of the linear PCM audio data in the buffer in sample-frames per second. An implementation must support sample rates in at least the range 8192 to 96000.
Return type: AudioBuffer
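As an informal sketch (assuming an existing AudioContext named context; the two-second length is arbitrary), a buffer created this way can be filled with computed samples through getChannelData:
// Create a two-second stereo buffer at the context's own sample rate.
var buffer = context.createBuffer(2, 2 * context.sampleRate, context.sampleRate);
for (var channel = 0; channel < buffer.numberOfChannels; channel++) {
  var data = buffer.getChannelData(channel);
  for (var i = 0; i < data.length; i++) {
    // Fill with white noise within the nominal [-1, 1] range.
    data[i] = Math.random() * 2 - 1;
  }
}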
createBufferSource
Creates an AudioBufferSourceNode.
No parameters.
Return type: AudioBufferSourceNode
createChannelMerger
Creates a ChannelMergerNode representing a channel merger. An IndexSizeError exception MUST be thrown for invalid parameter
values.
Parameter Type Nullable Optional Description
numberOfInputs unsigned long = 6 The numberOfInputs parameter determines the number of inputs. Values of up to 32 must be supported. If not specified, then 6 will be used.
Return type: ChannelMergerNode
createChannelSplitter
Creates a ChannelSplitterNode representing a channel splitter. An IndexSizeError exception MUST be thrown for invalid
parameter values.
Parameter Type Nullable Optional Description
numberOfOutputs unsigned long = 6 The number of outputs. Values of up to 32 must be supported. If not specified, then 6 will be used.
Return type: ChannelSplitterNode
createConvolver
Creates a ConvolverNode.
No parameters.
Return type: ConvolverNode
createDelay
Creates a DelayNode representing a variable delay line. The initial default delay time will be 0 seconds.
Parameter Type Nullable Optional Description
maxDelayTime double = 1.0 The maxDelayTime parameter is optional and specifies the
maximum delay time in seconds allowed for the delay line. If
specified, this value MUST be greater than zero and less than three
minutes or a NotSupportedError exception MUST be thrown.
Return type: DelayNode
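A minimal sketch of typical usage, assuming an existing AudioContext named context and a source node named source (the half-second delay is an arbitrary choice):
// Allow up to 2 seconds of delay, then use 0.5 seconds of it.
var delay = context.createDelay(2.0);
delay.delayTime.value = 0.5;
source.connect(delay);
delay.connect(context.destination);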
createDynamicsCompressor
Creates a DynamicsCompressorNode
No parameters.
Return type: DynamicsCompressorNode
createGain
Creates a GainNode.
No parameters.
Return type: GainNode
createIIRFilter
Creates an IIRFilterNode representing a general IIR Filter.
Parameter Type Nullable Optional Description
feedforward sequence<double> An array of the feedforward (numerator) coefficients for the transfer
function of the IIR filter. The maximum length of this array is 20. If all
of the values are zero, an InvalidStateError MUST be thrown. A
NotSupportedError MUST be thrown if the array length is 0 or greater
than 20.
feedback sequence<double> An array of the feedback (denominator) coefficients for the transfer
function of the IIR filter. The maximum length of this array is 20. If the
first element of the array is 0, an InvalidStateError MUST be thrown. A
NotSupportedError MUST be thrown if the array length is 0 or greater
than 20.
Return type: IIRFilterNode
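As an illustrative sketch only (the coefficients below describe a simple one-pole lowpass with unity gain at DC, not a recommended filter design), assuming an existing context and source:
// Transfer function H(z) = (0.1 + 0.1 z^-1) / (1 - 0.8 z^-1)
var feedforward = [0.1, 0.1];
var feedback = [1.0, -0.8];
var iirFilter = context.createIIRFilter(feedforward, feedback);
source.connect(iirFilter);
iirFilter.connect(context.destination);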
createOscillator
Creates an OscillatorNode
No parameters.
Return type: OscillatorNode
createPanner
This method is DEPRECATED, as it is intended to be replaced by createSpatialPanner or createStereoPanner, depending on the
scenario. Creates a PannerNode.
No parameters.
Return type: PannerNode
createPeriodicWave
Creates a PeriodicWave representing a waveform containing arbitrary harmonic content. The real and imag parameters must be
of type Float32Array (described in [TYPED-ARRAYS]) of equal lengths greater than zero or an IndexSizeError exception MUST be
thrown. All implementations must support arrays up to at least 8192. These parameters specify the Fourier coefficients of a
Fourier series representing the partials of a periodic waveform. The created PeriodicWave will be used with an OscillatorNode
and, by default, will represent a normalized time-domain waveform having maximum absolute peak value of 1. Another way of
saying this is that the generated waveform of an OscillatorNode will have maximum peak value at 0dBFS. Conveniently, this
corresponds to the full-range of the signal values used by the Web Audio API. Because the PeriodicWave is normalized by
default on creation, the real and imag parameters represent relative values. If normalization is disabled via the
disableNormalization parameter, the time-domain waveform instead has the amplitudes as given by
the Fourier coefficients.
As PeriodicWave objects maintain their own copies of these arrays, any modification of the arrays used as the real and imag
parameters after the call to createPeriodicWave() will have no effect on the PeriodicWave object.
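A minimal sketch, assuming an existing context; the coefficient arrays below describe a waveform containing only the fundamental sine component, purely for illustration:
// Index 0 is the DC term; index 1 scales the fundamental sine component.
var real = new Float32Array([0, 0]);
var imag = new Float32Array([0, 1]);
var wave = context.createPeriodicWave(real, imag);
var osc = context.createOscillator();
osc.setPeriodicWave(wave);
osc.connect(context.destination);
osc.start(0);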
createScriptProcessor
This method is DEPRECATED, as it is intended to be replaced by createAudioWorker. Creates a ScriptProcessorNode for direct
audio processing using JavaScript. An IndexSizeError exception MUST be thrown if bufferSize or numberOfInputChannels or
numberOfOutputChannels are outside the valid range. It is invalid for both numberOfInputChannels and numberOfOutputChannels to
be zero. In this case an IndexSizeError MUST be thrown.
Parameter Type Nullable Optional Description
bufferSize unsigned long = 0 The bufferSize parameter determines the buffer size in units of sample-frames. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node. Otherwise if the author explicitly specifies the bufferSize, it must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the audioprocess event is dispatched and how many sample-frames need to be processed each call. Lower values for bufferSize will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size.
createSpatialPanner
Creates a SpatialPannerNode.
No parameters.
Return type: SpatialPannerNode
createStereoPanner
Creates a StereoPannerNode.
No parameters.
Return type: StereoPannerNode
createWaveShaper
Creates a WaveShaperNode representing a non-linear distortion.
No parameters.
Return type: WaveShaperNode
decodeAudioData
Asynchronously decodes the audio file data contained in the ArrayBuffer. The ArrayBuffer can, for example, be loaded from an
XMLHttpRequest's response attribute after setting the responseType to "arraybuffer". Audio file data can be in any of the formats
supported by the audio or video elements. The buffer passed to decodeAudioData has its content-type determined by sniffing, as
described in [mimesniff].
Although the primary method of interfacing with this function is via its promise return value, the callback parameters are provided
for legacy reasons. The system shall ensure that the AudioContext is not garbage collected before the promise is resolved or
rejected and any callback function is called and completes.
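A common loading pattern, sketched here with a placeholder URL "sound.ogg" and an assumed existing AudioContext named context, combines XMLHttpRequest with the promise form of decodeAudioData:
var request = new XMLHttpRequest();
request.open('GET', 'sound.ogg', true); // placeholder URL
request.responseType = 'arraybuffer';
request.onload = function () {
  context.decodeAudioData(request.response).then(function (decodedBuffer) {
    // Play the decoded AudioBuffer through a one-shot source.
    var source = context.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(context.destination);
    source.start(0);
  }, function (error) {
    console.log('decodeAudioData failed: ' + error);
  });
};
request.send();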
resume
Resumes the progression of the BaseAudioContext's currentTime in an audio context that has been suspended, which may
involve re-priming the frame buffer contents. The promise resolves when the system has re-acquired (if necessary) access to
audio hardware and has begun streaming to the destination, or immediately (with no other effect) if the context is already running.
The promise is rejected if the context has been closed. If the context is not currently suspended, the promise will resolve.
Note that until the first block of audio has been rendered following a call to this method, currentTime remains unchanged.
No parameters.
Return type: Promise<void>
suspend
Suspends the progression of BaseAudioContext's currentTime, allows any current context processing blocks that are already
processed to be played to the destination, and then allows the system to release its claim on audio hardware. This is generally
useful when the application knows it will not need the BaseAudioContext for some time, and wishes to let the audio hardware
power down. The promise resolves when the frame buffer is empty (has been handed off to the hardware), or immediately (with
no other effect) if the context is already suspended. The promise is rejected if the context has been closed.
While the system is suspended, MediaStreams will have their output ignored; that is, data will be lost due to the real-time nature of
media streams. HTMLMediaElements will similarly have their output ignored until the system is resumed. Audio Workers and
ScriptProcessorNodes will simply not fire their onaudioprocess events while suspended, but will resume when resumed. For the
purpose of AnalyserNode window functions, the data is considered as a continuous stream - i.e. the resume()/suspend() does not
cause silence to appear in the AnalyserNode's stream of data.
No parameters.
Return type: Promise<void>
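A hedged sketch of how an application might park and later wake a context (for example while a game sits in a pause menu), assuming an existing AudioContext named context:
function pauseAudio() {
  // Let already-processed blocks play out, then release the audio hardware.
  return context.suspend();
}

function resumeAudio() {
  // Re-acquire the hardware; currentTime resumes advancing once rendering restarts.
  return context.resume().then(function () {
    console.log('context state: ' + context.state); // "running"
  });
}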
2.1.6 Lifetime
Once created, an AudioContext will continue to play sound until it has no more sound to play, or the page goes away.
The Web Audio API takes a fire-and-forget approach to audio source scheduling. That is, source nodes are created for each note during the
lifetime of the AudioContext, and never explicitly removed from the graph. This is incompatible with a serialization API, since there is no
stable set of nodes that could be serialized.
Moreover, having an introspection API would allow content script to be able to observe garbage collections.
WebIDL
2.2.1 Methods
createMediaElementSource
Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio
playback from the HTMLMediaElement will be re-routed into the processing graph of the AudioContext.
Parameter Type Nullable Optional Description
mediaElement HTMLMediaElement The media element that will be re-routed.
Return type: MediaElementAudioSourceNode
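For illustration only, assuming the page contains an audio element with id "player":
var mediaElement = document.getElementById('player');
var mediaSource = context.createMediaElementSource(mediaElement);
// The element's audio now feeds the graph instead of playing directly.
var elementGain = context.createGain();
elementGain.gain.value = 0.5;
mediaSource.connect(elementGain);
elementGain.connect(context.destination);
mediaElement.play();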
createMediaStreamDestination
Creates a MediaStreamAudioDestinationNode
No parameters.
Return type: MediaStreamAudioDestinationNode
createMediaStreamSource
The OfflineAudioContext is constructed with the same arguments as AudioContext.createBuffer. A NotSupportedError exception MUST
be thrown if any of the arguments is negative, zero, or outside its nominal range.
WebIDL
2.3.1 Attributes
2.3.2 Methods
resume
Resumes the progression of time in an audio context that has been suspended. The promise resolves immediately because the
OfflineAudioContext does not require the audio hardware. If the context is not currently suspended or the rendering has not
started, the promise is rejected with InvalidStateError.
In contrast to a live AudioContext, the value of currentTime always reflects the start time of the next block to be rendered by the
audio graph, since the context's audio stream does not advance in time during suspension.
No parameters.
Return type: Promise<void>
startRendering
Given the current connections and scheduled changes, starts rendering audio. The system shall ensure that the
OfflineAudioContext is not garbage collected until either the promise is resolved and any callback function is called and
completes, or until the suspend function is called.
Although the primary method of getting the rendered audio data is via its promise return value, the instance will also fire an event
named complete for legacy reasons.
1. If startRendering has already been called previously, then return a promise rejected with InvalidStateError.
2. Let promise be a new promise.
3. Asynchronously perform the following steps:
1. Let buffer be a new AudioBuffer, with a number of channels, length and sample rate equal respectively to the
numberOfChannels, length and sampleRate parameters used when this instance's constructor was called.
2. Given the current connections and scheduled changes, start rendering length sample-frames of audio into buffer.
3. For every render quantum, check and suspend the rendering if necessary.
4. If a suspended context is resumed, continue to render the buffer.
5. Once the rendering is complete,
1. Resolve promise with buffer.
2. Queue a task to fire an event named complete at this instance, using an instance of
OfflineAudioCompletionEvent whose renderedBuffer property is set to buffer.
4. Return promise.
No parameters.
Return type: Promise<AudioBuffer>
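A sketch of offline rendering, assuming someDecodedBuffer is an existing AudioBuffer; the channel count, length, and sample rate are arbitrary:
// Render 2 channels of 10 seconds of audio at 44.1 kHz.
var offlineContext = new OfflineAudioContext(2, 44100 * 10, 44100);
var source = offlineContext.createBufferSource();
source.buffer = someDecodedBuffer;
source.connect(offlineContext.destination);
source.start(0);
offlineContext.startRendering().then(function (renderedBuffer) {
  // renderedBuffer holds the complete rendered audio.
  console.log('Rendered ' + renderedBuffer.length + ' sample-frames.');
});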
suspend
Schedules a suspension of the time progression in the audio context at the specified time and returns a promise. This is generally
useful when manipulating the audio graph synchronously on OfflineAudioContext.
Note that the maximum precision of suspension is the size of the render quantum and the specified suspension time will be
rounded down to the nearest render quantum boundary. For this reason, it is not allowed to schedule multiple suspends at the
same quantized frame. Also scheduling should be done while the context is not running to ensure the precise suspension.
The promise is rejected with InvalidStateError if the specified suspend time:
1. is negative, or
2. is less than or equal to the current time, or
3. is greater than or equal to the total render duration, or
4. is scheduled by another suspend for the same time.
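A sketch of using a scheduled suspension to change the graph at an exact render time (the one-second suspend point is arbitrary); the suspension is scheduled before rendering begins, as recommended above:
var offlineContext = new OfflineAudioContext(1, 44100 * 2, 44100);
var osc = offlineContext.createOscillator();
osc.connect(offlineContext.destination);
osc.start(0);
// At the 1 second mark, retune the oscillator, then resume rendering.
offlineContext.suspend(1.0).then(function () {
  osc.frequency.value = 880;
  return offlineContext.resume();
});
offlineContext.startRendering().then(function (renderedBuffer) {
  // The first second is at the default 440 Hz, the second at 880 Hz.
});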
WebIDL
[Constructor]
interface AudioContext : BaseAudioContext {
};
WebIDL
2.3.3.1 Attributes
Each output has one or more channels. The exact number of channels depends on the details of the specific AudioNode.
An output may connect to one or more AudioNode inputs, thus fan-out is supported. An input initially has no connections, but may be
connected from one or more AudioNode outputs, thus fan-in is supported. When the connect() method is called to connect an output of an
AudioNode to an input of an AudioNode, we call that a connection to the input.
Each AudioNode input has a specific number of channels at any given time. This number can change depending on the connection(s) made
to the input. If the input has no connections then it has one channel which is silent.
For each input, an AudioNode performs a mixing (usually an up-mixing) of all connections to that input. Please see 3. Mixer Gain Structure
for more informative details, and the 5. Channel up-mixing and down-mixing section for normative requirements.
The processing of inputs and the internal operations of an AudioNode take place continuously with respect to AudioContext time, regardless
of whether the node has connected outputs, and regardless of whether these outputs ultimately reach an AudioContext's
AudioDestinationNode.
For performance reasons, practical implementations will need to use block processing, with each AudioNode processing a fixed number of
sample-frames of size block-size. In order to get uniform behavior across implementations, we will define this value explicitly. block-size is
defined to be 128 sample-frames which corresponds to roughly 3 ms at a sample-rate of 44.1 kHz.
AudioNodes are EventTargets, as described in DOM [DOM]. This means that it is possible to dispatch events to AudioNodes the same way
that other EventTargets accept events.
WebIDL
enum ChannelCountMode {
"max",
"clamped-max",
"explicit"
};
Enumeration description
max computedNumberOfChannels is computed as the maximum of the number of channels of all connections. In this mode channelCount is ignored.
clamped-max Same as max, up to a limit of the channelCount.
explicit computedNumberOfChannels is the exact value as specified in channelCount.
WebIDL
enum ChannelInterpretation {
"speakers",
"discrete"
};
Enumeration description
speakers Use up-down-mix equations for mono/stereo/quad/5.1. In cases where the number of channels do not match any of these basic speaker layouts, revert to "discrete".
discrete Up-mix by filling channels until they run out, then zero out remaining channels. Down-mix by filling as many channels as possible, then drop the remaining channels.
WebIDL
2.4.1 Attributes
The number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2
except for specific nodes where its value is specially determined. This attribute has no effect for nodes with no inputs. If this value
is set to zero or to a value greater than the implementation's maximum number of channels the implementation MUST throw a
NotSupportedError exception.
See the 5. Channel up-mixing and down-mixing section for more information on this attribute.
Determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. This attribute
has no effect for nodes with no inputs.
See the 5. Channel up-mixing and down-mixing section for more information on this attribute.
Determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. This
attribute has no effect for nodes with no inputs.
See the 5. Channel up-mixing and down-mixing section for more information on this attribute.
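For illustration, forcing a node to mix all of its input connections down to a single channel might look like the following sketch (assuming an existing context):
var gain = context.createGain();
gain.channelCount = 1;
gain.channelCountMode = 'explicit';       // honor channelCount exactly
gain.channelInterpretation = 'speakers';  // use the speaker up/down-mix equations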
2.4.2 Methods
connect
There can only be one connection between a given output of one specific node and a given input of another specific node.
Multiple connections with the same termini are ignored. For example:
EXAMPLE 4
nodeA.connect(nodeB);
nodeA.connect(nodeB);
will have the same effect as:
EXAMPLE 5
nodeA.connect(nodeB);
A cycle in the routing graph is allowed only if there is at least one DelayNode in the cycle, or a NotSupportedError exception MUST be
thrown.
Return type: AudioNode
connect
Connects the AudioNode to an AudioParam, controlling the parameter value with an audio-rate signal.
It is possible to connect an AudioNode output to more than one AudioParam with multiple calls to connect(). Thus, "fan-out" is
supported.
It is possible to connect more than one AudioNode output to a single AudioParam with multiple calls to connect(). Thus, "fan-in" is
supported.
An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing
if it is not already mono, then mix it together with other such outputs and finally will mix with the intrinsic parameter value (the
value the AudioParam would normally have without any audio connections), including any timeline changes scheduled for the
parameter.
There can only be one connection between a given output of one specific node and a specific AudioParam. Multiple connections
with the same termini are ignored. For example:
nodeA.connect(param);
nodeA.connect(param);
disconnect
No parameters.
Return type: void
disconnect
Disconnects a single output of the AudioNode from any other AudioNode or AudioParam objects to which it is connected.
disconnect
disconnect
disconnect
Disconnects a specific output of the AudioNode from a specific input of some destination AudioNode.
disconnect
Disconnects all outputs of the AudioNode that go to a specific destination AudioParam. The contribution of this AudioNode to the
computed parameter value goes to 0 when this operation takes effect. The intrinsic parameter value is not affected by this
operation.
disconnect
Disconnects a specific output of the AudioNode from a specific destination AudioParam. The contribution of this AudioNode to the
computed parameter value goes to 0 when this operation takes effect. The intrinsic parameter value is not affected by this
operation.
output unsigned long The output parameter is an index describing which output of the AudioNode to disconnect. If the parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.
Return type: void
2.4.3 Lifetime
An implementation may choose any method to avoid unnecessary resource usage and unbounded memory growth of unused/finished
nodes. The following is a description to help guide the general expectation of how node lifetime would be managed.
An AudioNode will live as long as there are any references to it. There are several types of references:
Any AudioNodes which are connected in a cycle and are directly or indirectly connected to the AudioDestinationNode of the AudioContext will
stay alive as long as the AudioContext is alive.
NOTE
The uninterrupted operation of AudioNodes implies that as long as live references exist to a node, the node will continue processing
its inputs and evolving its internal state even if it is disconnected from the audio graph. Since this processing will consume CPU and
power, developers should carefully consider the resource usage of disconnected nodes. In particular, it is a good idea to minimize
resource consumption by explicitly putting disconnected nodes into a stopped state when possible.
When an AudioNode has no references it will be deleted. Before it is deleted, it will disconnect itself from any other AudioNodes which it is
connected to. In this way it releases all connection references (3) it has to other nodes.
Regardless of any of the above references, it can be assumed that the AudioNode will be deleted when its AudioContext is deleted.
numberOfInputs : 1
numberOfOutputs : 0
channelCount = 2;
channelCountMode = "explicit";
channelInterpretation = "speakers";
WebIDL
2.5.1 Attributes
The maximum number of channels that the channelCount attribute can be set to. An AudioDestinationNode representing the audio
hardware end-point (the normal case) can potentially output more than 2 channels of audio if the audio hardware is multi-channel.
maxChannelCount is the maximum number of channels that this hardware is capable of supporting. If this value is 0, then this
indicates that channelCount may not be changed. This will be the case for an AudioDestinationNode in an OfflineAudioContext
and also for basic implementations with hardware support for stereo output only.
channelCount defaults to 2 for a destination in a normal AudioContext, and may be set to any non-zero value less than or equal to
maxChannelCount. An IndexSizeError exception MUST be thrown if this value is not within the valid range. Giving a concrete
example, if the audio hardware supports 8-channel output, then we may set channelCount to 8, and render 8-channels of output.
For an AudioDestinationNode in an OfflineAudioContext, the channelCount is determined when the offline context is created and
this value may not be changed.
Some synthesis and processing AudioNodes have AudioParams as attributes whose values must be taken into account on a per-audio-
sample basis. For other AudioParams, sample-accuracy is not important and the value changes can be sampled more coarsely. Each
individual AudioParam will specify that it is either an a-rate parameter which means that its values must be taken into account on a per-audio-
sample basis, or it is a k-rate parameter.
Implementations must use block processing, with each AudioNode processing 128 sample-frames in each block.
For each 128 sample-frame block, the value of a k-rate parameter must be sampled at the time of the very first sample-frame, and that
value must be used for the entire block. a-rate parameters must be sampled for each sample-frame of the block.
An AudioParam maintains a time-ordered event list which is initially empty. The times are in the time coordinate system of the AudioContext's
currentTime attribute. The events define a mapping from time to value. The following methods can change the event list by adding a new
event into the list of a type specific to the method. Each event has a time associated with it, and the events will always be kept in time-order
in the list. These methods will be called automation methods:
setValueAtTime() - SetValue
linearRampToValueAtTime() - LinearRampToValue
exponentialRampToValueAtTime() - ExponentialRampToValue
setTargetAtTime() - SetTarget
setValueCurveAtTime() - SetValueCurve
If one of these events is added at a time where there is already an event of the exact same type, then the new event will replace the
old one.
If one of these events is added at a time where there is already one or more events of a different type, then it will be placed in the list
after them, but before events whose times are after the event.
If setValueCurveAtTime() is called for time \(T\) and duration \(D\) and there are any events having a time greater than \(T\), but less
than \(T + D\), then a NotSupportedError exception MUST be thrown. In other words, it's not ok to schedule a value curve during a time
period containing other events.
Similarly a NotSupportedError exception MUST be thrown if any automation method is called at a time which is inside of the time
interval of a SetValueCurve event at time T and duration D.
WebIDL
interface AudioParam {
attribute float value;
readonly attribute float defaultValue;
AudioParam setValueAtTime (float value, double startTime);
AudioParam linearRampToValueAtTime (float value, double endTime);
AudioParam exponentialRampToValueAtTime (float value, double endTime);
AudioParam setTargetAtTime (float target, double startTime, float timeConstant);
AudioParam setValueCurveAtTime (Float32Array values, double startTime, double duration);
AudioParam cancelScheduledValues (double startTime);
};
2.6.1 Attributes
The parameter's floating-point value. This attribute is initialized to the defaultValue. If value is set during a time when there are
any automation events scheduled then it will be ignored and no exception will be thrown.
The effect of setting this attribute is equivalent to calling setValueAtTime() with the current AudioContext's currentTime and the
requested value. Subsequent accesses to this attribute's getter will return the same value.
2.6.2 Methods
cancelScheduledValues
Cancels all scheduled parameter changes with times greater than or equal to startTime. Active setTargetAtTime automations
(those with startTime less than the supplied time value) will also be cancelled.
exponentialRampToValueAtTime
Schedules an exponential continuous change in parameter value from the previous scheduled parameter value to the given
value. Parameters representing filter frequencies and playback rate are best changed exponentially because of the way humans
perceive sound.
The value during the time interval \(T_0 \leq t < T_1\) (where \(T_0\) is the time of the previous event and \(T_1\) is the endTime
parameter passed into this method) will be calculated as:
$$
v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0}
$$
where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the value parameter passed into this method. It is an error if either \(V_0\) or \(V_1\) is not strictly positive.
This also implies an exponential ramp to 0 is not possible. A good approximation can be achieved using setTargetAtTime with an
appropriately chosen time constant.
If there are no more events after this ExponentialRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).
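A hedged sketch of that approximation, assuming a GainNode named gainNode; the divisor below is an illustrative choice that leaves the value within roughly 0.1% of zero at the target time, not a normative constant:
var t0 = context.currentTime;
var t1 = t0 + 1.0;
// exp(-(t1 - t0) / tau) is about 0.001 when tau = (t1 - t0) / 6.91.
gainNode.gain.setTargetAtTime(0, t0, (t1 - t0) / 6.91);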
linearRampToValueAtTime
Schedules a linear continuous change in parameter value from the previous scheduled parameter value to the given value.
The value during the time interval \(T_0 \leq t < T_1\) (where \(T_0\) is the time of the previous event and \(T_1\) is the endTime
parameter passed into this method) will be calculated as:
$$
v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0}
$$
Where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the value parameter passed into this method.
If there are no more events after this LinearRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).
setTargetAtTime
Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses,
this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value
does not immediately change to the target value at the given time, but instead gradually changes to the target value.
During the time interval: \(T_0 \leq t < T_1\), where \(T_0\) is the startTime parameter and \(T_1\) represents the time of the event
following this event (or \(\infty\) if there are no following events):
$$
v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)}
$$
where \(V_0\) is the initial value (the .value attribute) at \(T_0\) (the startTime parameter), \(V_1\) is equal to the target
parameter, and \(\tau\) is the timeConstant parameter.
setValueAtTime
If there are no more events after this SetValue event, then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the startTime parameter
and \(V\) is the value parameter. In other words, the value will remain constant.
If the next event (having time \(T_1\)) after this SetValue event is not of type LinearRampToValue or ExponentialRampToValue,
then, for \(T_0 \leq t < T_1\):
$$
v(t) = V
$$
In other words, the value will remain constant during this time interval, allowing the creation of "step" functions.
If the next event after this SetValue event is of type LinearRampToValue or ExponentialRampToValue then please see
linearRampToValueAtTime or exponentialRampToValueAtTime, respectively.
setValueCurveAtTime
Sets an array of arbitrary parameter values starting at the given time for the given duration. The number of values will be scaled
to fit into the desired duration.
Let \(T_0\) be startTime, \(T_D\) be duration, \(V\) be the values array, and \(N\) be the length of the values array. Then, during
the time interval: \(T_0 \le t < T_0 + T_D\), let
$$
k = \left\lfloor \frac{N - 1}{T_D}(t - T_0) \right\rfloor
$$
Then \(v(t) = V[k]\).
After the end of the curve time interval (\(t \ge T_0 + T_D\)), the value will remain constant at the final curve value, until there is
another automation event (if any).
computedValue is the final value controlling the audio DSP and is computed by the audio rendering thread during each rendering time
quantum. It must be internally computed as follows:
1. An intrinsic parameter value will be calculated at each time, which is either the value set directly to the value attribute, or, if there are
any scheduled parameter changes (automation events) with times before or at this time, the value as calculated from these events. If
the value attribute is set after any automation events have been scheduled, then these events will be removed. When read, the value
attribute always returns the intrinsic value for the current time. If automation events are removed from a given time range, then the
intrinsic value will remain unchanged and stay at its previous value until either the value attribute is directly set, or automation events
are added for the time range.
2. An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing if it
is not already mono, then mix it together with other such outputs. If there are no AudioNodes connected to it, then this value is 0,
having no effect on the computedValue.
3. The computedValue is the sum of the intrinsic value and the value calculated from (2).
EXAMPLE 6
var t0 = 0;
var t1 = 0.1;
var t2 = 0.2;
var t3 = 0.3;
var t4 = 0.325;
var t5 = 0.5;
var t6 = 0.6;
var t7 = 0.7;
var t8 = 1.0;
var timeConstant = 0.1;
param.setValueAtTime(0.2, t0);
param.setValueAtTime(0.3, t1);
param.setValueAtTime(0.4, t2);
param.linearRampToValueAtTime(1, t3);
param.linearRampToValueAtTime(0.8, t4);
param.setTargetAtTime(.5, t4, timeConstant);
// Compute where the setTargetAtTime will be at time t5 so we can make
// the following exponential start at the right point so there's no
// jump discontinuity. From the spec, we have
// v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant)
// Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant)
param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5);
param.exponentialRampToValueAtTime(0.75, t6);
param.exponentialRampToValueAtTime(0.05, t7);
param.setValueCurveAtTime(curve, t7, t8 - t7);
numberOfInputs : 1
numberOfOutputs : 1
channelCountMode = "max";
channelInterpretation = "speakers";
Each sample of each channel of the input data of the GainNode MUST be multiplied by the computedValue of the gain AudioParam.
WebIDL
2.7.1 Attributes
numberOfInputs : 1
numberOfOutputs : 1
channelCountMode = "max";
channelInterpretation = "speakers";
The number of channels of the output always equals the number of channels of the input.
It delays the incoming audio signal by a certain amount. Specifically, at each time t, with input signal input(t), delay time delayTime(t) and output
signal output(t), the output will be output(t) = input(t - delayTime(t)). The default delayTime is 0 seconds (no delay).
When the number of channels in a DelayNode's input changes (thus changing the output channel count also), there may be delayed audio
samples which have not yet been output by the node and are part of its internal state. If these samples were received earlier with a different
channel count, they must be upmixed or downmixed before being combined with newly received input so that all internal delay-line mixing
takes place using the single prevailing channel layout.
WebIDL
2.8.1 Attributes
An AudioParam object representing the amount of delay (in seconds) to apply. Its default value is 0 (no delay). The minimum value
is 0 and the maximum value is determined by the maxDelayTime argument to the AudioContext method createDelay.
If DelayNode is part of a cycle, then the value of the delayTime attribute is clamped to a minimum of 128 frames (one block).
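A sketch of the kind of cycle this rule enables: a simple feedback echo, assuming an existing context and source. The DelayNode makes the cycle legal, and a feedback gain below 1 keeps the loop from growing without bound (the specific values are illustrative):
var delay = context.createDelay(1.0);
delay.delayTime.value = 0.25;
var feedbackGain = context.createGain();
feedbackGain.gain.value = 0.5;        // each repeat is half as loud as the previous one
source.connect(delay);
delay.connect(feedbackGain);
feedbackGain.connect(delay);          // the cycle, permitted because it contains a DelayNode
delay.connect(context.destination);
source.connect(context.destination);  // dry path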
An AudioBuffer may be used by one or more AudioContexts, and can be shared between an OfflineAudioContext and an AudioContext.
WebIDL
interface AudioBuffer {
readonly attribute float sampleRate;
readonly attribute long length;
readonly attribute double duration;
readonly attribute long numberOfChannels;
Float32Array getChannelData (unsigned long channel);
void copyFromChannel (Float32Array destination, unsigned long channelNumber, optional unsigned long startInChannel = 0
);
void copyToChannel (Float32Array source, unsigned long channelNumber, optional unsigned long startInChannel = 0
);
};
2.9.1 Attributes
2.9.2 Methods
copyFromChannel
The copyFromChannel method copies the samples from the specified channel of the AudioBuffer to the destination array.
Parameter Type Nullable Optional Description
destination Float32Array The array the channel data will be copied to.
channelNumber unsigned long The index of the channel to copy the data from. If channelNumber is greater than or equal to the number of channels of the AudioBuffer, an IndexSizeError MUST be thrown.
startInChannel unsigned long = 0 An optional offset to copy the data from. If startInChannel is greater than the length of the AudioBuffer, an IndexSizeError MUST be thrown.
Return type: void
copyToChannel
The copyToChannel method copies the samples to the specified channel of the AudioBuffer, from the source array.
Parameter Type Nullable Optional Description
source Float32Array The array the channel data will be copied from.
channelNumber unsigned long The index of the channel to copy the data to. If channelNumber is greater than or equal to the number of channels of the AudioBuffer, an IndexSizeError MUST be thrown.
startInChannel unsigned long = 0 An optional offset to copy the data to. If startInChannel is greater than the length of the AudioBuffer, an IndexSizeError MUST be thrown.
Return type: void
getChannelData
Returns the Float32Array representing the PCM audio data for the specific channel.
Parameter Type Nullable Optional Description
channel unsigned long This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than numberOfChannels or an IndexSizeError exception MUST be thrown.
Return type: Float32Array
NOTE
The methods copyToChannel and copyFromChannel can be used to fill part of an array by passing in a Float32Array that's a view onto
the larger array. When reading data from an AudioBuffer's channels, and the data can be processed in chunks, copyFromChannel
should be preferred to calling getChannelData and accessing the resulting array, because it may avoid unnecessary memory
allocation and copying.
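For instance, the following non-normative sketch reads a channel in 512-frame chunks, reusing a single scratch array (audioBuffer is assumed to be an existing AudioBuffer):
// Each call fills a view onto the same scratch array, so no new
// Float32Array is allocated per chunk.
var scratch = new Float32Array(512);
for (var offset = 0; offset < audioBuffer.length; offset += 512) {
  var frames = Math.min(512, audioBuffer.length - offset);
  var view = scratch.subarray(0, frames);
  audioBuffer.copyFromChannel(view, 0, offset); // channel 0, starting at 'offset'
  // ... process the 'frames' samples in 'view' here ...
}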
An internal operation, acquire the contents of an AudioBuffer, is invoked when the contents of an AudioBuffer are needed by some API
implementation. This operation returns immutable channel data to the invoker.
When an acquire the contents operation occurs on an AudioBuffer, run the following steps:
1. If any of the AudioBuffer's ArrayBuffers have been neutered, abort these steps and return zero-length channel data buffers to the
invoker.
2. Neuter all ArrayBuffers for arrays previously returned by getChannelData on this AudioBuffer.
3. Retain the underlying data buffers from those ArrayBuffers and return references to them to the invoker.
4. Attach ArrayBuffers containing copies of the data to the AudioBuffer, to be returned by the next call to getChannelData.
The acquire the contents of an AudioBuffer operation is invoked in the following cases:
When AudioBufferSourceNode.start is called, it acquires the contents of the node's buffer. If the operation fails, nothing is played.
When a ConvolverNode's buffer is set to an AudioBuffer while the node is connected to an output node, or a ConvolverNode is
connected to an output node while the ConvolverNode's buffer is set to an AudioBuffer, it acquires the content of the AudioBuffer.
When the dispatch of an AudioProcessingEvent completes, it acquires the contents of its outputBuffer.
NOTE
This means that copyToChannel cannot be used to change the content of an AudioBuffer currently in use by an AudioNode that has
acquired the content of an AudioBuffer, since the AudioNode will continue to use the data previously acquired.
The start() method is used to schedule when sound playback will happen. The start() method may not be issued multiple times. The
playback will stop automatically when the buffer's audio data has been completely played (if the loop attribute is false), or when the stop()
method has been called and the specified time has been reached. Please see more details in the start() and stop() description.
numberOfInputs : 0
numberOfOutputs : 1
The number of channels of the output always equals the number of channels of the AudioBuffer assigned to the .buffer attribute, or is one
channel of silence if .buffer is null.
WebIDL
2.10.1 Attributes
2.10.2 Methods
start
Schedules a sound to playback at an exact time. start may only be called one time and must be called before stop is called or
an InvalidStateError exception MUST be thrown.
Parameter Type Nullable Optional Description
when double = 0 The when parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the AudioContext's currentTime attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A TypeError exception MUST be thrown if when is negative.
offset double = 0 The offset parameter describes the offset time in the buffer (in seconds) where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A TypeError exception MUST be thrown if offset is negative. If offset is greater than loopEnd, playback will begin at loopEnd (and immediately loop to loopStart). This parameter is converted to an exact sample frame offset within the buffer by multiplying by the buffer's sample rate and rounding to the nearest integer value. Thus its behavior is independent of the value of the playbackRate parameter.
duration double The duration parameter describes the duration of the portion (in seconds) to be played. If this parameter is not passed, the duration will be equal to the total duration of the AudioBuffer minus the offset parameter. Thus if neither offset nor duration are specified then the implied duration is the total duration of the AudioBuffer. A TypeError exception MUST be thrown if duration is negative.
Return type: void
stop
Schedules a sound to stop playback at an exact time.
Parameter Type Nullable Optional Description
when double = 0 The when parameter describes at what time (in seconds) the sound should stop playing. It is in the same time coordinate system as the AudioContext's currentTime attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will stop playing immediately. A TypeError exception MUST be thrown if when is negative. If stop is called again after it has already been called, the last invocation will be the only one applied; stop times set by previous calls will not be applied, unless the buffer has already stopped prior to any subsequent calls. If the buffer has already stopped, further calls to stop will have no effect. If a stop time is reached prior to the scheduled start time, the sound will not play.
Return type: void
Both playbackRate and detune are k-rate parameters and are used together to determine a computedPlaybackRate value:
computedPlaybackRate(t) = playbackRate(t) * pow(2, detune(t) / 1200)
The computedPlaybackRate is the effective speed at which the AudioBuffer of this AudioBufferSourceNode MUST be played.
This MUST be implemented by resampling the input data using a resampling ratio of 1 / computedPlaybackRate, hence changing both the pitch
and speed of the audio.
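As a worked illustration (non-normative, with source assumed to be an AudioBufferSourceNode):
source.playbackRate.value = 2.0; // double speed
source.detune.value = -1200;     // detune down by one octave (1200 cents)
// computedPlaybackRate = 2.0 * pow(2, -1200 / 1200) = 1.0,
// so the buffer is resampled with a ratio of 1 / 1.0 and plays at its original rate.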
2.10.3 Looping
If the loop attribute is true when start() is called, then playback will continue indefinitely until stop() is called and the stop time is reached.
We'll call this "loop" mode. Playback always starts at the point in the buffer indicated by the offset argument of start(), and in loop mode
will continue playing until it reaches the actualLoopEnd position in the buffer (or the end of the buffer), at which point it will wrap back
around to the actualLoopStart position in the buffer, and continue playing according to this pattern.
In loop mode, the actual loop points are calculated as follows from the loopStart and loopEnd attributes:
if ((loopStart || loopEnd) && loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
actualLoopStart = loopStart;
actualLoopEnd = min(loopEnd, buffer.duration);
} else {
actualLoopStart = 0;
actualLoopEnd = buffer.duration;
}
Note that the default values for loopStart and loopEnd are both 0, which indicates that looping should occur from the very start to the very
end of the buffer.
Please note that as a low-level implementation detail, the AudioBuffer is at a specific sample-rate (usually the same as the AudioContext
sample-rate), and that the loop times (in seconds) must be converted to the appropriate sample-frame positions in the buffer according to
this sample-rate.
When scheduling the beginning and the end of playback using the start() and stop() methods, the resulting start or stop time MUST be
rounded to the nearest sample-frame in the sample rate of the AudioContext. That is, no sub-sample scheduling is possible.
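The following non-normative sketch illustrates the scheduling and looping behavior described above (context is assumed to be an existing AudioContext and myBuffer a previously decoded AudioBuffer):
var source = context.createBufferSource();
source.buffer = myBuffer;
source.loop = true;
source.loopStart = 1.0;  // loop the region from 1.0 s ...
source.loopEnd = 2.5;    // ... to 2.5 s into the buffer
source.connect(context.destination);
source.start(context.currentTime + 0.5, 1.0); // begin in 0.5 s, at an offset of 1.0 s
source.stop(context.currentTime + 10);        // stop 10 s from now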
numberOfInputs : 0
numberOfOutputs : 1
The number of channels of the output corresponds to the number of channels of the media referenced by the HTMLMediaElement. Thus,
changes to the media element's .src attribute can change the number of channels output by this node. If the .src attribute is not set, then the
number of channels output will be one silent channel.
WebIDL
The HTMLMediaElement must behave in an identical fashion after the MediaElementAudioSourceNode has been created, except that the
rendered audio will no longer be heard directly, but instead will be heard as a consequence of the MediaElementAudioSourceNode being
connected through the routing graph. Thus pausing, seeking, volume, src attribute changes, and other aspects of the HTMLMediaElement
must behave as they normally would if not used with a MediaElementAudioSourceNode.
EXAMPLE 7
var mediaElement = document.getElementById('mediaElementID');
var sourceNode = context.createMediaElementSource(mediaElement);
sourceNode.connect(filterNode);
HTMLMediaElement allows the playback of cross-origin resources. Because Web Audio allows one to inspect the content of the resource
(e.g. using a MediaElementAudioSourceNode, and a ScriptProcessorNode to read the samples), information leakage can occur if scripts from
one origin inspect the content of a resource from another origin.
To prevent this, a MediaElementAudioSourceNode MUST output silence instead of the normal output of the HTMLMediaElement if it has been
created using an HTMLMediaElement for which the execution of the fetch algorithm labeled the resource as CORS-cross-origin.
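For example, a cross-origin resource can still be inspected if it is served with suitable CORS headers and the element opts in. A non-normative sketch (the URL is illustrative):
var mediaElement = new Audio();
mediaElement.crossOrigin = "anonymous"; // request a CORS-enabled fetch
mediaElement.src = "https://other-origin.example/music.mp3"; // hypothetical URL
var sourceNode = context.createMediaElementSource(mediaElement);
sourceNode.connect(context.destination);
// If the other origin does not grant CORS access, this node outputs silence.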
These main thread objects cause the instantiation of a processing context in the audio thread. All audio processing by AudioWorkerNodes
runs in the audio processing thread. This has a few side effects that bear mentioning: blocking the audio worker's thread can cause glitches
in the audio, and if the audio thread is normally elevated in thread priority (to reduce glitching possibility), it must be demoted to normal
thread priority (in order to avoid escalating thread priority of user-supplied script code).
From inside an audio worker script, the Audio Worker factory is represented by an AudioWorkerGlobalScope object representing the node's
contextual information, and individual audio nodes created by the factory are represented by AudioWorkerNodeProcessor objects.
In addition, all AudioWorkerNodes that are created by the same AudioWorker share an AudioWorkerGlobalScope; this can allow them to share
context and data across nodes (for example, loading a single instance of a shared database used by the individual nodes, or sharing
context in order to implement oscillator synchronization).
WebIDL
2.12.1 Attributes
2.12.2 Methods
addParameter
The name parameter is the name used for the read-only AudioParam added to the AudioWorkerNode, and the name used for the
read-only Float32Array that will be present on the parameters object exposed on subsequent AudioProcessEvents.
The defaultValue parameter is the default value for the AudioParam's value attribute, and therefore also the default value that will
appear in the Float32Array in the worker script (if no other parameter changes or connections affect the value).
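A non-normative sketch of how this might be used (factory is assumed to be an AudioWorker obtained from createAudioWorker):
factory.addParameter("cutoff", 350);  // exposes a 'cutoff' AudioParam on each node
var node = factory.createNode();
node.cutoff.setValueAtTime(1000, 0);  // automate it like any other AudioParam
// Inside the worker script, e.parameters.cutoff will be a Float32Array of per-frame values.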
createNode
Creates a node instance in the audio worker.
Parameter Type Nullable Optional Description
numberOfInputs int
numberOfOutputs int
Return type: AudioWorkerNode
postMessage
postMessage may be called to send a message to the AudioWorkerGlobalScope, similar to the algorithm defined by [Workers].
Parameter Type Nullable Optional Description
message any
transfer sequence<Transferable>
Return type: void
removeParameter
Removes a previously-added parameter named name from all AudioWorkerNodes associated with this AudioWorker and its
AudioWorkerGlobalScope. This will also remove the correspondingly-named read-only AudioParam from the AudioWorkerNode, and
will remove the correspondingly-named read-only Float32Arrays from the AudioProcessEvent's parameters member on
subsequent audio processing events. A NotFoundError exception must be thrown if no parameter with that name exists on this
AudioWorker.
terminate
The terminate() method, when invoked, must cause the cessation of any AudioProcessEvents being dispatched inside the
AudioWorker's associated AudioWorkerGlobalScope. It will also cause all associated AudioWorkerNodes to cease processing, and
will cause the destruction of the worker's context. In practical terms, this means all nodes created from this AudioWorker will
disconnect themselves, and will cease performing any useful functions.
No parameters.
Return type: void
Note that AudioWorkerNode objects will also have read-only AudioParam objects for each named parameter added via the addParameter
method. As this is dynamic, it cannot be captured in IDL.
As the AudioWorker interface inherits from Worker, AudioWorkers must implement the Worker interface for communication with the audio
worker script.
This interface represents an AudioNode which interacts with a Worker thread to generate, process, or analyse audio directly. The user
creates a separate audio processing worker script, which is hosted inside the AudioWorkerGlobalScope and runs inside the audio
processing thread, rather than the main UI thread. The AudioWorkerNode represents the processing node in the main processing thread's
node graph; the AudioWorkerGlobalScope represents the context in which the user's audio processing script is run.
Note that if the Web Audio implementation normally runs audio processing at higher than normal thread priority, utilizing
AudioWorkerNodes may cause the audio thread's priority to be demoted (since user scripts cannot be run with higher than normal
priority).
numberOfInputs : variable
numberOfOutputs : variable
channelCount = numberOfInputChannels;
channelCountMode = "explicit";
channelInterpretation = "speakers";
The number of input and output channels specified in the createAudioWorkerNode() call determines the initial number of input and output
channels (and the number of channels present for each input and output in the AudioBuffers passed to the AudioProcess event handler
inside the AudioWorkerGlobalScope). It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.
Example usage:
context.createAudioWorker( "bitcrusher.js" ).then( function(factory) {
  var bitcrusherNode = factory.createNode();
});
WebIDL
2.12.3.1 Attributes
2.12.3.2 Methods
postMessage
postMessage may be called to send a message to the AudioWorkerNodeProcessor, via the algorithm defined by the Worker
specification. Note that this is different from calling postMessage() on the AudioWorker itself, as that would affect the
AudioWorkerGlobalScope.
Parameter Type Nullable Optional Description
message any
transfer sequence<Transferable>
Return type: void
Note that AudioWorkerNode objects will also have read-only AudioParam objects for each named parameter added via the addParameter
method on the AudioWorker. As this is dynamic, it cannot be captured here in IDL.
This interface represents the description of an AudioWorkerNode AudioParam - in short, its name and default value. This enables easy
iteration over the AudioParams from an AudioWorkerGlobalScope (which does not have an instance of those AudioParams).
WebIDL
interface AudioWorkerParamDescriptor {
readonly attribute DOMString name;
readonly attribute float defaultValue;
};
2.12.4.1 Attributes
This interface is a DedicatedWorkerGlobalScope-derived object representing the context in which an audio processing script is run; it is
designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a Worker thread, with shared context
between multiple instances of audio nodes. This facilitates nodes that may have substantial shared data, e.g. a convolution node.
The AudioWorkerGlobalScope handles audioprocess events, dispatched synchronously to process audio frame blocks for nodes created
by this worker. audioprocess events are only dispatched for nodes that have at least one input or one output connected. TODO: should
this be true?
WebIDL
2.12.5.1 Attributes
2.12.5.2 Methods
addParameter
Adds a read-only AudioParam to the AudioWorkerNodes created from this factory, and a correspondingly-named read-only
Float32Array to the parameters object exposed on the AudioProcessEvent on subsequent audio processing events for nodes created from this factory.
It is purposeful that AudioParams can be added (or removed) from an Audio Worker from either the main thread or the worker
script; this enables immediate creation of worker-based nodes and their prototypes, but also enables packaging an entire worker
including its AudioParam configuration into a single script. It is recommended that nodes be used only after the
AudioWorkerNode's oninitialized has been called, in order to allow the worker script to configure the node.
The name parameter is the name used for the read-only AudioParam added to the AudioWorkerNode, and the name used for the
read-only Float32Array that will be present on the parameters object exposed on subsequent AudioProcessEvents.
The defaultValue parameter is the default value for the AudioParam's value attribute, and therefore also the default value that
will appear in the Float32Array in the worker script (if no other parameter changes or connections affect the value).
removeParameter
Removes a previously-added parameter named name from nodes processed by this factory. This will also remove the
correspondingly-named read-only AudioParam from the AudioWorkerNode, and will remove the correspondingly-named read-only
Float32Array from the AudioProcessEvent's parameters member on subsequent audio processing events. A NotFoundError
exception MUST be thrown if no parameter with that name exists on this node.
An object supporting this interface represents each individual node instantiated in an AudioWorkerGlobalScope; it is designed to manage the
data for an individual node. Shared context between multiple instances of audio nodes is accessible from the AudioWorkerGlobalScope; this
object represents the individual node and can be used for data storage or main-thread communication.
WebIDL
2.12.6.1 Attributes
2.12.6.2 Methods
postMessage
postMessage may be called to send a message to the AudioWorkerNode, via the algorithm defined by the Worker specification.
Note that this is different from calling postMessage() on the AudioWorker itself, as that would dispatch to the
AudioWorkerGlobalScope.
Parameter Type Nullable Optional Description
message any
transfer sequence<Transferable>
Return type: void
Bitcrushing is a mechanism by which the audio quality of an audio stream is reduced - both by quantizing the value (simulating lower bit-
depth in integer-based audio), and by quantizing in time (simulating a lower digital sample rate). This example shows how to use
AudioParams (in this case, treated as a-rate) inside an AudioWorker.
audioContext.createAudioWorker("bitcrusher_worker.js").then( function(factory)
{ // cache 'factory' in case you want to create more nodes!
bitcrusherFactory = factory;
var bitcrusherNode = factory.createNode();
bitcrusherNode.bits.setValueAtTime(8,0);
bitcrusherNode.connect(output);
input.connect(bitcrusherNode);
}
);
bitcrusher_worker.js
onnodecreate=function(e) {
e.node.phaser = 0;
e.node.lastDataValue = 0;
}
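A non-normative sketch of the remainder of the worker script; the addParameter call and the onaudioprocess handler shown here are illustrative assumptions:
addParameter( "bits", 8 ); // exposes the 'bits' AudioParam used by the main-thread code

onaudioprocess = function (e) {
  var input = e.inputBuffers[0][0];  // first channel of the first input
  var output = e.outputBuffers[0][0];
  var bits = e.parameters.bits;      // a-rate values: one per sample frame
  var frequencyReduction = 0.5;      // hypothetical fixed sample-rate reduction
  for (var i = 0; i < input.length; i++) {
    var step = Math.pow(0.5, bits[i]);   // quantize the value (lower bit depth)
    e.node.phaser += frequencyReduction;
    if (e.node.phaser >= 1.0) {          // quantize in time (lower sample rate)
      e.node.phaser -= 1.0;
      e.node.lastDataValue = step * Math.floor(input[i] / step + 0.5);
    }
    output[i] = e.node.lastDataValue;
  }
};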
2.12.7.2 TODO: fix up this example. A Volume Meter and Clip Detector
Another common need is a clip-detecting volume meter. This example shows how to communicate basic parameters (that do not need
AudioParam scheduling) across to a Worker, as well as communicating data back to the main thread. This node does not use any output.
function setupNodeMessaging(node) {
// This handles communication back from the volume meter
node.onmessage = function (event) {
if (event.data instanceof Object ) {
if (event.data.hasOwnProperty("clip")
this.clip = event.data.clip;
if (event.data.hasOwnProperty("volume")
this.volume = event.data.volume;
}
}
// Set up volume and clip attributes. These will be updated by our onmessage.
node.volume = 0;
node.clip = false;
}
audioContext.createAudioWorker("vu_meter_worker.js").then( function(factory)
{ // cache 'factory' in case you want to create more nodes!
vuFactory = factory;
vuNode = factory.createNode([1], []); // we don't need an output, and let's force to mono
setupNodeMessaging(vuNode);
}
);
window.requestAnimationFrame( function(timestamp) {
if (vuNode) {
// Draw a bar based on vuNode.volume and vuNode.clip
}
});
vu_meter_worker.js
onnodecreate=function(e) {
e.node.timeToNextUpdate = 0.1 * sampleRate;
e.node.smoothing = 0.5;
e.node.clipLevel = 0.95;
e.node.clipLag = 1;
e.node.updatingInterval = 150 * sampleRate / 1000; // 150 ms, in samples
// This just handles setting attribute values
e.node.onmessage = function ( event ) {
if (event.data instanceof Object ) {
if (event.data.hasOwnProperty("smoothing")
this.smoothing = event.data.smoothing;
if (event.data.hasOwnProperty("clipLevel")
this.clipLevel = event.data.clipLevel;
if (event.data.hasOwnProperty("clipLag")
this.clipLag = event.data.clipLag / 1000; // convert to seconds
if (event.data.hasOwnProperty("updating") // convert to samples
this.updatingInterval = event.data.updating * sampleRate / 1000 ;
}
};
}
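A non-normative sketch of the metering handler itself; the onaudioprocess handler and the exact metering math are illustrative assumptions (clipLag handling is omitted for brevity):
onaudioprocess = function (e) {
  var input = e.inputBuffers[0][0];  // the single mono input channel
  var node = e.node;
  var sum = 0, clipped = false;
  for (var i = 0; i < input.length; i++) {
    var s = Math.abs(input[i]);
    sum += s * s;
    if (s >= node.clipLevel)
      clipped = true;
  }
  var rms = Math.sqrt(sum / input.length);
  // Smooth the meter value so it decays rather than jumping between blocks.
  node.volume = Math.max(rms, (node.volume || 0) * node.smoothing);
  node.timeToNextUpdate -= input.length;
  if (node.timeToNextUpdate <= 0) {  // throttle messages back to the main thread
    node.timeToNextUpdate = node.updatingInterval;
    node.postMessage({ volume: node.volume, clip: clipped });
  }
};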
This worker shows how to merge several mono inputs into a single multi-channel output.
audioContext.createAudioWorker("merger_worker.js").then( function(factory)
{ // cache 'factory' in case you want to create more nodes!
mergerFactory = factory;
var merger6channelNode = factory.createNode( [1,1,1,1,1,1], [6] );
// connect inputs and outputs here
}
);
merger_worker.js
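The worker script itself could look like the following non-normative sketch; the onaudioprocess handler is an illustrative assumption:
onaudioprocess = function (e) {
  var output = e.outputBuffers[0]; // the single 6-channel output
  for (var i = 0; i < e.inputBuffers.length; i++) {
    // Copy the mono channel of input i into channel i of the output.
    output[i].set(e.inputBuffers[i][0]);
  }
};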
This interface is an AudioNode which can generate, process, or analyse audio directly using JavaScript. This node type is deprecated, to be
replaced by the AudioWorkerNode; this text is only here for informative purposes until implementations remove this node type.
numberOfInputs : 1
numberOfOutputs : 1
channelCount = numberOfInputChannels;
channelCountMode = "explicit";
channelInterpretation = "speakers";
The channelCountMode cannot be changed from "explicit" and the channelCount cannot be changed. An attempt to change either of these
MUST throw an InvalidStateError exception.
The ScriptProcessorNode is constructed with a bufferSize which must be one of the following values: 256, 512, 1024, 2048, 4096, 8192,
16384. This value controls how frequently the audioprocess event is dispatched and how many sample-frames need to be processed each
call. audioprocess events are only dispatched if the ScriptProcessorNode has at least one input or one output connected. Lower numbers
for bufferSize will result in a lower (better) latency. Higher numbers will be necessary to avoid audio breakup and glitches. This value will be
picked by the implementation if the bufferSize argument to createScriptProcessor is not passed in, or is set to 0.
numberOfInputChannels and numberOfOutputChannels determine the number of input and output channels. It is invalid for both
numberOfInputChannels and numberOfOutputChannels to be zero.
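For example, a minimal pass-through node might look like the following non-normative sketch (context is assumed to be an existing AudioContext and source a source node):
var processor = context.createScriptProcessor(4096, 1, 1); // bufferSize, 1 input channel, 1 output channel
processor.onaudioprocess = function (e) {
  var input = e.inputBuffer.getChannelData(0);
  var output = e.outputBuffer.getChannelData(0);
  output.set(input); // copy the input samples straight to the output
};
source.connect(processor);
processor.connect(context.destination);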
WebIDL
2.13.1 Attributes
WebIDL
2.14.1 Attributes
The event handler processes audio from the input (if any) by accessing the audio data from the inputBuffers attribute. The audio data
which is the result of the processing (or the synthesized data if there are no inputs) is then placed into the outputBuffers.
WebIDL
2.15.1 Attributes
A readonly Array of Arrays of Float32Arrays. The top-level Array is organized by input; each input may contain multiple channels;
each channel contains a Float32Array of sample data. The initial size of the channel array will be determined by the number of
channels specified for that input in the createAudioWorkerNode() method. However, an onprocess handler may alter this number
of channels in the input dynamically, either by adding a Float32Array of blocksize length (128) or by reducing the Array (by
reducing the Array.length or by using Array.pop() or Array.splice()). The event object, the Array and the Float32Arrays will be reused
by the processing system, in order to minimize memory churn.
Any reordering performed on the Array for an input will not reorganize the connections to the channels for subsequent events.
The node to which this processing event is being dispatched. Any node-local data storage (e.g., the buffer for a delay node)
should be maintained on this object.
A readonly Array of Arrays of Float32Arrays. The top-level Array is organized by output; each output may contain multiple
channels; each channel contains a Float32Array of sample data. The initial size of the channel array will be determined by the
number of channels specified for that output in the createAudioWorkerNode() method. However, an onprocess handler may alter
this number of channels in the output dynamically, either by adding a Float32Array of blocksize length (128) or by reducing the
Array (by reducing the Array.length or by using Array.pop() or Array.splice()). The event object, the Array and the Float32Arrays will
be reused by the processing system, in order to minimize memory churn.
Any reordering performed on the Array for an output will not reorganize the connections to the channels for subsequent events.
This is an Event object which is dispatched to ScriptProcessorNode nodes. It will be removed when the ScriptProcessorNode is removed, as
the replacement AudioWorker uses the AudioProcessEvent.
The event handler processes audio from the input (if any) by accessing the audio data from the inputBuffer attribute. The audio data which
is the result of the processing (or the synthesized data if there are no inputs) is then placed into the outputBuffer.
WebIDL
2.16.1 Attributes
numberOfInputs : 1
numberOfOutputs : 1
channelCount = 2;
channelCountMode = "clamped-max";
channelInterpretation = "speakers";
The input of this node is either mono (1 channel) or stereo (2 channels) and cannot be increased. Connections from nodes with fewer or
more channels will be up-mixed or down-mixed appropriately, but a NotSupportedError MUST be thrown if an attempt is made to set
channelCount to a value greater than 2 or if channelCountMode is set to "max".