
Web Audio API


W3C Working Draft 08 December 2015
This version:
http://www.w3.org/TR/2015/WD-webaudio-20151208/
Latest published version:
http://www.w3.org/TR/webaudio/
Latest editor's draft:
https://webaudio.github.io/web-audio-api/
Previous version:
http://www.w3.org/TR/2013/WD-webaudio-20131010/
Editors:
Paul Adenot, Mozilla, padenot@mozilla.com
Chris Wilson, Google, Inc., cwilso@google.com
Previous editor:
Chris Rogers (Until August 2013)
Repository:
https://github.com/WebAudio/web-audio-api
Bug tracker:
https://github.com/WebAudio/web-audio-api/issues?state=open

Copyright 2013-2015 W3C (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply.

Abstract
This specification describes a high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is
of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering. The actual
processing will primarily take place in the underlying implementation (typically optimized Assembly / C / C++ code), but direct JavaScript
processing and synthesis is also supported.

The introductory section covers the motivation behind this specification.

This API is designed to be used in conjunction with other APIs and elements on the web platform, notably: XMLHttpRequest [XHR] (using
the responseType and response attributes). For games and interactive applications, it is anticipated to be used with the canvas 2D
[2dcontext] and WebGL [WEBGL] 3D graphics APIs.

Status of This Document


This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of
current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at
http://www.w3.org/TR/.

This document was published by the Audio Working Group as a Working Draft. This document is intended to become a W3C
Recommendation. If you wish to make comments regarding this document, please send them to public-audio@w3.org (subscribe,
archives). All comments are welcome.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated,
replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any
patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An
individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in
accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 September 2015 W3C Process Document.

Table of Contents
Abstract
Status of This Document
Introduction
0.1 Features
0.1.1 Modular Routing
0.2 API Overview
1. Conformance
2. The Audio API
2.1 The BaseAudioContext Interface
2.1.1 Attributes
2.1.2 Methods
2.1.3 Callback DecodeSuccessCallback Parameters
2.1.4 Callback DecodeErrorCallback Parameters
2.1.5 Dictionary AudioContextOptions Members
2.1.6 Lifetime
2.1.7 Lack of introspection or serialization primitives
2.2 The AudioContext Interface
2.2.1 Methods
2.3 The OfflineAudioContext Interface
2.3.1 Attributes
2.3.2 Methods
2.3.3 The OfflineAudioCompletionEvent Interface
2.3.3.1 Attributes
2.4 The AudioNode Interface
2.4.1 Attributes
2.4.2 Methods
2.4.3 Lifetime
2.5 The AudioDestinationNode Interface
2.5.1 Attributes
2.6 The AudioParam Interface
2.6.1 Attributes
2.6.2 Methods
2.6.3 Computation of Value
2.6.4 AudioParam Automation Example
2.7 The GainNode Interface
2.7.1 Attributes
2.8 The DelayNode Interface
2.8.1 Attributes
2.9 The AudioBuffer Interface
2.9.1 Attributes
2.9.2 Methods
2.10 The AudioBufferSourceNode Interface
2.10.1 Attributes
2.10.2 Methods
2.10.3 Looping
2.11 The MediaElementAudioSourceNode Interface
2.11.1 Security with MediaElementAudioSourceNode and cross-origin resources
2.12 The AudioWorker interface
2.12.1 Attributes
2.12.2 Methods
2.12.3 The AudioWorkerNode Interface
2.12.3.1 Attributes
2.12.3.2 Methods
2.12.4 The AudioWorkerParamDescriptor Interface
2.12.4.1 Attributes
2.12.5 The AudioWorkerGlobalScope Interface
2.12.5.1 Attributes
2.12.5.2 Methods
2.12.6 The AudioWorkerNodeProcessor Interface
2.12.6.1 Attributes
2.12.6.2 Methods
2.12.7 Audio Worker Examples
2.12.7.1 A Bitcrusher Node
2.12.7.2 TODO: fix up this example. A Volume Meter and Clip Detector
2.12.7.3 Reimplementing ChannelMerger
2.13 The ScriptProcessorNode Interface - DEPRECATED
2.13.1 Attributes
2.14 The AudioWorkerNodeCreationEvent Interface
2.14.1 Attributes
2.15 The AudioProcessEvent Interface
2.15.1 Attributes
2.16 The AudioProcessingEvent Interface - DEPRECATED
2.16.1 Attributes
2.17 The PannerNode Interface
2.17.1 Attributes
2.17.2 Methods
2.17.3 Channel Limitations
2.18 The AudioListener Interface
2.18.1 Methods
2.19 The SpatialPannerNode Interface
2.19.1 Attributes
2.20 The SpatialListener Interface
2.20.1 Attributes
2.21 The StereoPannerNode Interface
2.21.1 Attributes
2.21.2 Channel Limitations
2.22 The ConvolverNode Interface
2.22.1 Attributes
2.22.2 Channel Configurations for Input, Impulse Response and Output
2.23 The AnalyserNode Interface
2.23.1 Attributes
2.23.2 Methods
2.23.3 FFT Windowing and smoothing over time
2.24 The ChannelSplitterNode Interface
2.25 The ChannelMergerNode Interface
2.26 The DynamicsCompressorNode Interface
2.26.1 Attributes
2.27 The BiquadFilterNode Interface
2.27.1 Attributes
2.27.2 Methods
2.27.3 Filters characteristics
2.28 The IIRFilterNode Interface
2.28.1 Methods
2.28.2 Filter Definition
2.29 The WaveShaperNode Interface
2.29.1 Attributes
2.30 The OscillatorNode Interface
2.30.1 Attributes
2.30.2 Methods
2.30.3 Basic Waveform Phase
2.31 The PeriodicWave Interface
2.31.1 PeriodicWaveConstraints
2.31.1.1 Dictionary PeriodicWaveConstraints Members
2.31.2 Waveform Generation
2.31.3 Waveform Normalization
2.31.4 Oscillator Coefficients
2.32 The MediaStreamAudioSourceNode Interface
2.33 The MediaStreamAudioDestinationNode Interface
2.33.1 Attributes
3. Mixer Gain Structure
3.1 Summing Inputs
3.2 Gain Control
3.3 Example: Mixer with Send Busses
4. Dynamic Lifetime
4.1 Background
4.2 Example
5. Channel up-mixing and down-mixing
5.1 Speaker Channel Layouts
5.2 Channel ordering
5.3 Up Mixing speaker layouts
5.4 Down Mixing speaker layouts
5.5 Channel Rules Examples
6. Audio Signal Values
7. Spatialization / Panning
7.1 Background
7.2 Azimuth and Elevation
7.3 Panning Algorithm
7.3.1 Equal-power panning
7.3.2 HRTF panning (stereo only)
7.4 Distance Effects
7.5 Sound Cones
7.6 Doppler Shift
8. Performance Considerations
8.1 Latency
8.2 Audio Buffer Copying
8.3 AudioParam Transitions
8.4 Audio Glitching
8.5 JavaScript Issues with Real-Time Processing and Synthesis:
9. Security Considerations
10. Privacy Considerations
11. Requirements and Use Cases
12. Acknowledgements
13. Web Audio API Change Log
A. References
A.1 Normative references
A.2 Informative references

Introduction
Audio on the web has been fairly primitive up to this point and until very recently has had to be delivered through plugins such as Flash and
QuickTime. The introduction of the audio element in HTML5 is very important, allowing for basic streaming audio playback. But, it is not
powerful enough to handle more complex audio applications. For sophisticated web-based games or interactive applications, another
solution is required. It is a goal of this specification to include the capabilities found in modern game audio engines as well as some of the
mixing, processing, and filtering tasks that are found in modern desktop audio production applications.

The APIs have been designed with a wide variety of use cases [webaudio-usecases] in mind. Ideally, it should be able to support any use
case which could reasonably be implemented with an optimized C++ engine controlled via JavaScript and run in a browser. That said,
modern desktop audio software can have very advanced capabilities, some of which would be difficult or impossible to build with this
system. Apple's Logic Audio is one such application which has support for external MIDI controllers, arbitrary plugin audio effects and
synthesizers, highly optimized direct-to-disk audio file reading/writing, tightly integrated time-stretching, and so on. Nevertheless, the
proposed system will be quite capable of supporting a large range of reasonably complex games and interactive applications, including
musical ones. And it can be a very good complement to the more advanced graphics features offered by WebGL. The API has been
designed so that more advanced capabilities can be added at a later time.

0.1 Features

The API supports these primary features:

Modular routing for simple or complex mixing/effect architectures, including multiple sends and submixes.
High dynamic range, using 32-bit floats for internal processing.
Sample-accurate scheduled sound playback with low latency for musical applications requiring a very high degree of rhythmic
precision such as drum machines and sequencers. This also includes the possibility of dynamic creation of effects.
Automation of audio parameters for envelopes, fade-ins / fade-outs, granular effects, filter sweeps, LFOs etc.
Flexible handling of channels in an audio stream, allowing them to be split and merged.
Processing of audio sources from an audio or video media element.
Processing live audio input using a MediaStream from getUserMedia().
Integration with WebRTC
Processing audio received from a remote peer using a MediaStreamAudioSourceNode and [webrtc].
Sending a generated or processed audio stream to a remote peer using a MediaStreamAudioDestinationNode and [webrtc].
Audio stream synthesis and processing directly in JavaScript.
Spatialized audio supporting a wide range of 3D games and immersive environments:
Panning models: equalpower, HRTF, pass-through
Distance Attenuation
Sound Cones
Obstruction / Occlusion
Doppler Shift
Source / Listener based
A convolution engine for a wide range of linear effects, especially very high-quality room effects. Here are some examples of possible
effects:
Small / large room
Cathedral
Concert hall
Cave
Tunnel
Hallway
Forest
Amphitheater
Sound of a distant room through a doorway
Extreme filters
Strange backwards effects
Extreme comb filter effects
Dynamics compression for overall control and sweetening of the mix
Efficient real-time time-domain and frequency analysis / music visualizer support
Efficient biquad filters for lowpass, highpass, and other common filters.
A Waveshaping effect for distortion and other non-linear effects
Oscillators

0.1.1 Modular Routing

Modular routing allows arbitrary connections between different AudioNode objects. Each node can have inputs and/or outputs. A source
node has no inputs and a single output. A destination node has one input and no outputs, the most common example being
AudioDestinationNode, the final destination to the audio hardware. Other nodes such as filters can be placed between the source and
destination nodes. The developer doesn't have to worry about low-level stream format details when two objects are connected together; the
right thing just happens. For example, if a mono audio stream is connected to a stereo input it should just mix to left and right channels
appropriately.

In the simplest case, a single source can be routed directly to the output. All routing occurs within an AudioContext containing a single
AudioDestinationNode:

Fig. 1 A simple example of modular routing.

Illustrating this simple routing, here's a simple example playing a single sound:

EXAMPLE 1
var context = new AudioContext();

function playSound() {
var source = context.createBufferSource();
source.buffer = dogBarkingBuffer;
source.connect(context.destination);
source.start(0);
}

Here's a more complex example with three sources and a convolution reverb send with a dynamics compressor at the final output stage:


Fig. 2 A more complex example of modular routing.

EXAMPLE 2
var context = 0;
var compressor = 0;
var reverb = 0;

var source1 = 0;
var source2 = 0;
var source3 = 0;

var lowpassFilter = 0;
var waveShaper = 0;
var panner = 0;

var dry1 = 0;
var dry2 = 0;
var dry3 = 0;

var wet1 = 0;
var wet2 = 0;
var wet3 = 0;

var masterDry = 0;
var masterWet = 0;

function setupRoutingGraph () {
context = new AudioContext();

// Create the effects nodes.


lowpassFilter = context.createBiquadFilter();
waveShaper = context.createWaveShaper();
panner = context.createPanner();
compressor = context.createDynamicsCompressor();
reverb = context.createConvolver();

// Create master wet and dry.


masterDry = context.createGain();
masterWet = context.createGain();

// Connect final compressor to final destination.


compressor.connect(context.destination);

// Connect master dry and wet to compressor.


masterDry.connect(compressor);
masterWet.connect(compressor);

// Connect reverb to master wet.


reverb.connect(masterWet);

// Create a few sources.


source1 = context.createBufferSource();
source2 = context.createBufferSource();
source3 = context.createOscillator();

source1.buffer = manTalkingBuffer;
source2.buffer = footstepsBuffer;
source3.frequency.value = 440;

// Connect source1
dry1 = context.createGain();
wet1 = context.createGain();
source1.connect(lowpassFilter);
lowpassFilter.connect(dry1);
lowpassFilter.connect(wet1);
dry1.connect(masterDry);
wet1.connect(reverb);

// Connect source2
dry2 = context.createGain();
wet2 = context.createGain();
source2.connect(waveShaper);
waveShaper.connect(dry2);
waveShaper.connect(wet2);
dry2.connect(masterDry);
wet2.connect(reverb);

// Connect source3
dry3 = context.createGain();
wet3 = context.createGain();
source3.connect(panner);
panner.connect(dry3);
panner.connect(wet3);
dry3.connect(masterDry);
wet3.connect(reverb);

// Start the sources now.


source1.start(0);
source2.start(0);
source3.start(0);
}

Modular routing also permits the output of AudioNodes to be routed to an AudioParam parameter that controls the behavior of a different
AudioNode. In this scenario, the output of a node can act as a modulation signal rather than an input signal.


Fig. 3 Modular routing illustrating one Oscillator modulating the frequency of another.

EXAMPLE 3
function setupRoutingGraph() {
var context = new AudioContext();

// Create the low frequency oscillator that supplies the modulation signal
var lfo = context.createOscillator();
lfo.frequency.value = 1.0;

// Create the high frequency oscillator to be modulated


var hfo = context.createOscillator();
hfo.frequency.value = 440.0;

// Create a gain node whose gain determines the amplitude of the modulation signal
var modulationGain = context.createGain();
modulationGain.gain.value = 50;

// Configure the graph and start the oscillators


lfo.connect(modulationGain);
modulationGain.connect(hfo.detune);
hfo.connect(context.destination);
hfo.start(0);
lfo.start(0);
}

0.2 API Overview


The interfaces defined are:

An AudioContext interface, which contains an audio signal graph representing connections between AudioNodes.
An AudioNode interface, which represents audio sources, audio outputs, and intermediate processing modules. AudioNodes can be dynamically connected together in a modular fashion. AudioNodes exist in the context of an AudioContext.
An AudioDestinationNode interface, an AudioNode subclass representing the final destination for all rendered audio.
An AudioBuffer interface, for working with memory-resident audio assets. These can represent one-shot sounds, or longer audio
clips.
An AudioBufferSourceNode interface, an AudioNode which generates audio from an AudioBuffer.
A MediaElementAudioSourceNode interface, an AudioNode which is the audio source from an audio, video, or other media element.
A MediaStreamAudioSourceNode interface, an AudioNode which is the audio source from a MediaStream such as live audio input, or
from a remote peer.
A MediaStreamAudioDestinationNode interface, an AudioNode which is the audio destination to a MediaStream sent to a remote peer.
An AudioWorker interface representing a factory for creating custom nodes that can process audio directly in JavaScript.
An AudioWorkerNode interface, an AudioNode representing a node processed in an AudioWorker.
An AudioWorkerGlobalScope interface, the context in which AudioWorker processing scripts run.
An AudioWorkerNodeProcessor interface, representing a single node instance inside an audio worker.
An AudioParam interface, for controlling an individual aspect of an AudioNode's functioning, such as volume.
A GainNode interface, an AudioNode for explicit gain control. Because inputs to AudioNodes support multiple connections (as a unity-gain summing junction), mixers can be easily built with GainNodes.
A BiquadFilterNode interface, an AudioNode for common low-order filters such as:
Low Pass
High Pass
Band Pass
Low Shelf
High Shelf
Peaking
Notch
Allpass
An IIRFilterNode interface, an AudioNode for a general IIR filter.
A DelayNode interface, an AudioNode which applies a dynamically adjustable variable delay.
A SpatialPannerNode interface, an AudioNode for positioning audio in 3D space.
A SpatialListener interface, which works with a SpatialPannerNode for spatialization.
A StereoPannerNode interface, an AudioNode for equal-power positioning of audio input in a stereo stream.
A ConvolverNode interface, an AudioNode for applying a real-time linear effect (such as the sound of a concert hall).
An AnalyserNode interface, an AudioNode for use with music visualizers or other visualization applications.
A ChannelSplitterNode interface, an AudioNode for accessing the individual channels of an audio stream in the routing graph.
A ChannelMergerNode interface, an AudioNode for combining channels from multiple audio streams into a single audio stream.
A DynamicsCompressorNode interface, an AudioNode for dynamics compression.
A WaveShaperNode interface, an AudioNode which applies a non-linear waveshaping effect for distortion and other more subtle warming
effects.
An OscillatorNode interface, an AudioNode for generating a periodic waveform.

There are also several features that have been deprecated from the Web Audio API but not yet removed, pending implementation
experience of their replacements:

A PannerNode interface, an AudioNode for spatializing / positioning audio in 3D space. This has been replaced by SpatialPannerNode,
and StereoPannerNode for simpler scenarios.
An AudioListener interface, which works with a PannerNode for spatialization.
A ScriptProcessorNode interface, an AudioNode for generating or processing audio directly in JavaScript.
An AudioProcessingEvent interface, which is an event type used with ScriptProcessorNode objects.

1. Conformance
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-
normative. Everything else in this specification is normative.

The key words MUST, REQUIRED, and SHALL are to be interpreted as described in [RFC2119].

The following conformance classes are defined by this specification:

conforming implementation

A user agent is considered to be a conforming implementation if it satisfies all of the MUST-, REQUIRED- and SHALL-level criteria in this
specification that apply to implementations.

User agents that use ECMAScript to implement the APIs defined in this specification must implement them in a manner consistent with the
ECMAScript Bindings defined in the Web IDL specification [WEBIDL] as this specification uses that specification and terminology.

2. The Audio API


2.1 The BaseAudioContext Interface
This interface represents a set of AudioNode objects and their connections. It allows for arbitrary routing of signals to an
AudioDestinationNode. Nodes are created from the context and are then connected together.

BaseAudioContext is not instantiated directly, but is instead extended by the concrete interfaces AudioContext (for real-time rendering) and
OfflineAudioContext (for offline rendering).

WebIDL

enum AudioContextState {
"suspended",
"running",
"closed"
};

Enumeration description
suspended This context is currently suspended (context time is not proceeding, audio hardware may be powered down/released).
running Audio is being processed.
closed This context has been released, and can no longer be used to process audio. All system audio resources have been released. Attempts to create new Nodes on this context will throw InvalidStateError. (AudioBuffers may still be created, through createBuffer or decodeAudioData.)

WebIDL

enum AudioContextPlaybackCategory {
"balanced",
"interactive",
"playback"
};

Enumeration description
balanced Balance audio output latency and stability/power consumption.
interactive Provide the lowest audio output latency possible without glitching. This is the default.
playback Prioritize sustained playback without interruption over audio output latency. Lowest power consumption.

WebIDL

dictionary AudioContextOptions {
AudioContextPlaybackCategory playbackCategory = "interactive";
};

callback DecodeErrorCallback = void (DOMException error);

callback DecodeSuccessCallback = void (AudioBuffer decodedData);

[Constructor(optional AudioContextOptions contextOptions)]


interface BaseAudioContext : EventTarget {
readonly attribute AudioDestinationNode destination;
readonly attribute float sampleRate;
readonly attribute double currentTime;
readonly attribute AudioListener listener;
readonly attribute AudioContextState state;
Promise<void> suspend ();
Promise<void> resume ();
Promise<void> close ();
attribute EventHandler onstatechange;
AudioBuffer createBuffer (unsigned long numberOfChannels, unsigned long length, float sampleRate);
Promise<AudioBuffer> decodeAudioData (ArrayBuffer audioData, optional DecodeSuccessCallback successCallback, optional DecodeErrorCallback errorCallback);
AudioBufferSourceNode createBufferSource ();
Promise<AudioWorker> createAudioWorker (DOMString scriptURL);
ScriptProcessorNode createScriptProcessor (optional unsigned long bufferSize = 0
, optional unsigned long numberOfInputChannels = 2
, optional unsigned long numberOfOutputChannels = 2
);
AnalyserNode createAnalyser ();
GainNode createGain ();
DelayNode createDelay (optional double maxDelayTime = 1.0
);
BiquadFilterNode createBiquadFilter ();
IIRFilterNode createIIRFilter (sequence<double> feedforward, sequence<double> feedback);
WaveShaperNode createWaveShaper ();
PannerNode createPanner ();
SpatialPannerNode createSpatialPanner ();
StereoPannerNode createStereoPanner ();
ConvolverNode createConvolver ();
ChannelSplitterNode createChannelSplitter (optional unsigned long numberOfOutputs = 6
);
ChannelMergerNode createChannelMerger (optional unsigned long numberOfInputs = 6
);
DynamicsCompressorNode createDynamicsCompressor ();
OscillatorNode createOscillator ();
PeriodicWave createPeriodicWave (Float32Array real, Float32Array imag, optional PeriodicWaveConstraints constraints);
};

2.1.1 Attributes

currentTime of type double, readonly

This is the time in seconds of the sample frame immediately following the last sample-frame in the block of audio most recently
processed by the context's rendering graph. If the context's rendering graph has not yet processed a block of audio, then
currentTime has a value of zero.

In the time coordinate system of currentTime, the value of zero corresponds to the first sample-frame in the first block processed
by the graph. Elapsed time in this system corresponds to elapsed time in the audio stream generated by the BaseAudioContext,
which may not be synchronized with other clocks in the system. (For an OfflineAudioContext, since the stream is not being
actively played by any device, there is not even an approximation to real time.)

All scheduled times in the Web Audio API are relative to the value of currentTime.

When the BaseAudioContext is in the running state, the value of this attribute is monotonically increasing and is updated by the
rendering thread in uniform increments, corresponding to the audio block size of 128 samples. Thus, for a running context,
currentTime increases steadily as the system processes audio blocks, and always represents the time of the start of the next
audio block to be processed. It is also the earliest possible time when any change scheduled in the current state might take
effect.
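
The following non-normative sketch illustrates scheduling relative to currentTime; the buffer variable is assumed to hold an AudioBuffer obtained elsewhere (for example from decodeAudioData).

var context = new AudioContext();

function playInHalfASecond(buffer) {
    // All scheduled times are offsets from the context's currentTime.
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(context.currentTime + 0.5);
}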

destination of type AudioDestinationNode, readonly

An AudioDestinationNode with a single input representing the final destination for all audio. Usually this will represent the actual
audio hardware. All AudioNodes actively rendering audio will directly or indirectly connect to destination.


listener of type AudioListener, readonly

An AudioListener which is used for 3D spatialization.

onstatechange of type EventHandler


A property used to set the EventHandler for an event that is dispatched to BaseAudioContext when the state of the AudioContext
has changed (i.e. when the corresponding promise would have resolved). An event of type Event will be dispatched to the event
handler, which can query the AudioContext's state directly. A newly-created AudioContext will always begin in the "suspended"
state, and a state change event will be fired whenever the state changes to a different state.
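
As a non-normative illustration, a page can observe these transitions by assigning a handler:

var context = new AudioContext();
context.onstatechange = function () {
    // state is one of "suspended", "running" or "closed".
    console.log("AudioContext state is now: " + context.state);
};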

sampleRate of type float, readonly

The sample rate (in sample-frames per second) at which the BaseAudioContext handles audio. It is assumed that all AudioNodes in
the context run at this rate. In making this assumption, sample-rate converters or "varispeed" processors are not supported in
real-time processing.

state of type AudioContextState, readonly


Describes the current state of this BaseAudioContext. The context state MUST begin in "suspended", and transitions to "running"
when system resources are acquired and audio has begun processing. For OfflineAudioContexts, the state will remain in
"suspended" until startRendering() is called, at which point it will transition to "running", and then to "closed" once audio
processing has completed and oncomplete has been fired.

When the state is "suspended", a call to resume() will cause a transition to "running", or a call to close() will cause a transition to
"closed".

When the state is "running", a call to suspend() will cause a transition to "suspended", or a call to close() will cause a transition to
"closed".

When the state is "closed", no further state transitions are possible.

2.1.2 Methods

close
Closes the audio context, releasing any system audio resources used by the BaseAudioContext. This will not automatically release
all BaseAudioContext-created objects, unless other references have been released as well; however, it will forcibly release any
system audio resources that might prevent additional AudioContexts from being created and used, suspend the progression of
the BaseAudioContext's currentTime, and stop processing audio data. The promise resolves when all AudioContext-creation-

blocking resources have been released. If this is called on OfflineAudioContext, then return a promise rejected with a
DOMException whose name is InvalidStateError.
No parameters.
Return type: Promise<void>

createAnalyser
Create an AnalyserNode.
No parameters.
Return type: AnalyserNode

createAudioWorker
Creates an AudioWorker object and loads the associated script into an AudioWorkerGlobalScope, then resolves the returned
Promise.
Parameter Type Nullable Optional Description
scriptURL DOMString This parameter represents the URL of the script to be loaded as an
AudioWorker node factory. See AudioWorker section for more detail.

Return type: Promise<AudioWorker>

createBiquadFilter
Creates a BiquadFilterNode representing a second order filter which can be configured as one of several common filter types.
No parameters.
Return type: BiquadFilterNode

createBuffer
Creates an AudioBuffer of the given size. The audio data in the buffer will be zero-initialized (silent). A NotSupportedError
exception MUST be thrown if any of the arguments is negative, zero, or outside its nominal range.
Parameter Type Nullable Optional Description
numberOfChannels unsigned long Determines how many channels the buffer will have. An
implementation must support at least 32 channels.
length unsigned long Determines the size of the buffer in sample-frames.
sampleRate float Describes the sample-rate of the linear PCM audio data in the
buffer in sample-frames per second. An implementation must
support sample rates in at least the range 8192 to 96000.
Return type: AudioBuffer
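
The following non-normative sketch creates a stereo buffer holding one second of white noise at the context's sample rate:

var context = new AudioContext();
var noiseBuffer = context.createBuffer(2, context.sampleRate, context.sampleRate);

for (var channel = 0; channel < noiseBuffer.numberOfChannels; channel++) {
    var data = noiseBuffer.getChannelData(channel);
    for (var i = 0; i < data.length; i++) {
        data[i] = Math.random() * 2 - 1; // values within the nominal range [-1, 1]
    }
}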

createBufferSource
Creates an AudioBufferSourceNode.
No parameters.
Return type: AudioBufferSourceNode

createChannelMerger
Creates a ChannelMergerNode representing a channel merger. An IndexSizeError exception MUST be thrown for invalid parameter
values.
Parameter Type Nullable Optional Description
numberOfInputs unsigned long = 6 The numberOfInputs parameter determines the number of inputs. Values of up to 32 must be supported. If not specified, then 6 will be used.
Return type: ChannelMergerNode

createChannelSplitter
Creates an ChannelSplitterNode representing a channel splitter. An IndexSizeError exception MUST be thrown for invalid
parameter values.
Parameter Type Nullable Optional Description
numberOfOutputs unsigned long = 6 The number of outputs. Values of up to 32 must be supported. If not specified, then 6 will be used.
Return type: ChannelSplitterNode

createConvolver
Creates a ConvolverNode.
No parameters.
Return type: ConvolverNode

createDelay
Creates a DelayNode representing a variable delay line. The initial default delay time will be 0 seconds.

Parameter Type Nullable Optional Description
maxDelayTime double = 1.0 The maxDelayTime parameter is optional and specifies the
maximum delay time in seconds allowed for the delay line. If
specified, this value MUST be greater than zero and less than three
minutes or a NotSupportedError exception MUST be thrown.
Return type: DelayNode

createDynamicsCompressor
Creates a DynamicsCompressorNode
No parameters.
Return type: DynamicsCompressorNode

createGain
Create an GainNode.
No parameters.
Return type: GainNode

createIIRFilter
Creates an IIRFilterNode representing a general IIR Filter.
Parameter Type Nullable Optional Description
feedforward sequence<double> An array of the feedforward (numerator) coefficients for the transfer
function of the IIR filter. The maximum length of this array is 20. If all
of the values are zero, an InvalidStateError MUST be thrown. A
NotSupportedError MUST be thrown if the array length is 0 or greater
than 20.
feedback sequence<double> An array of the feedback (denominator) coefficients for the transfer
function of the IIR filter. The maximum length of this array is 20. If the
first element of the array is 0, an InvalidStateError MUST be thrown. A
NotSupportedError MUST be thrown if the array length is 0 or greater
than 20.
Return type: IIRFilterNode
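
As a non-normative sketch, the coefficients below describe the simple one-pole lowpass y[n] = 0.1·x[n] + 0.9·y[n-1]; they are illustrative only:

var context = new AudioContext();
var source = context.createOscillator();

// feedforward (numerator) = [0.1], feedback (denominator) = [1, -0.9]
var iir = context.createIIRFilter([0.1], [1, -0.9]);

source.connect(iir);
iir.connect(context.destination);
source.start(0);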

createOscillator
Creates an OscillatorNode
No parameters.
Return type: OscillatorNode

createPanner
This method is DEPRECATED, as it is intended to be replaced by createSpatialPanner or createStereoPanner, depending on the
scenario. Creates a PannerNode.
No parameters.
Return type: PannerNode

createPeriodicWave
Creates a PeriodicWave representing a waveform containing arbitrary harmonic content. The real and imag parameters must be
of type Float32Array (described in [TYPED-ARRAYS]) of equal lengths greater than zero or an IndexSizeError exception MUST be
thrown. All implementations must support arrays up to at least 8192. These parameters specify the Fourier coefficients of a
Fourier series representing the partials of a periodic waveform. The created PeriodicWave will be used with an OscillatorNode
and, by default, will represent a normalized time-domain waveform having maximum absolute peak value of 1. Another way of
saying this is that the generated waveform of an OscillatorNode will have maximum peak value at 0dBFS. Conveniently, this
corresponds to the full-range of the signal values used by the Web Audio API. Because the PeriodicWave is normalized by
default on creation, the real and imag parameters represent relative values. If normalization is disabled via the disableNormalization member of the constraints dictionary, the time-domain waveform instead has the amplitudes as given by the Fourier coefficients.

As PeriodicWave objects maintain their own copies of these arrays, any modification of the arrays used as the real and imag parameters after the call to createPeriodicWave() will have no effect on the PeriodicWave object.

Parameter Type Nullable Optional Description


real Float32Array The real parameter represents an array of cosine terms
(traditionally the A terms). In audio terminology, the first
element (index 0) is the DC-offset of the periodic waveform.
The second element (index 1) represents the fundamental
frequency. The third element represents the first overtone,
and so on. The first element is ignored and implementations
must set it to zero internally.
imag Float32Array The imag parameter represents an array of sine terms
(traditionally the B terms). The first element (index 0) should
be set to zero (and will be ignored) since this term does not
exist in the Fourier series. The second element (index 1)
represents the fundamental frequency. The third element
represents the first overtone, and so on.
constraints PeriodicWaveConstraints If not given, the waveform is normalized. Otherwise, the
waveform is normalized according the value given by
constraints.

Return type: PeriodicWave
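
The following non-normative sketch builds a PeriodicWave approximating a square wave from its odd sine harmonics and plays it with an OscillatorNode:

var context = new AudioContext();
var real = new Float32Array(16); // cosine terms, left at zero
var imag = new Float32Array(16); // sine terms

for (var n = 1; n < imag.length; n += 2) {
    imag[n] = 4 / (Math.PI * n); // odd harmonics of a square wave
}

var wave = context.createPeriodicWave(real, imag);
var osc = context.createOscillator();
osc.setPeriodicWave(wave);
osc.frequency.value = 220;
osc.connect(context.destination);
osc.start(0);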

createScriptProcessor
This method is DEPRECATED, as it is intended to be replaced by createAudioWorker. Creates a ScriptProcessorNode for direct
audio processing using JavaScript. An IndexSizeError exception MUST be thrown if bufferSize or numberOfInputChannels or
numberOfOutputChannels are outside the valid range. It is invalid for both numberOfInputChannels and numberOfOutputChannels to
be zero. In this case an IndexSizeError MUST be thrown.
Parameter Type Nullable Optional Description
bufferSize unsigned long = 0 The bufferSize parameter determines the buffer size in units of sample-frames. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node. Otherwise, if the author explicitly specifies the bufferSize, it must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the audioprocess event is dispatched and how many sample-frames need to be processed each call. Lower values for bufferSize will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality. If the value of this parameter is not one of the allowed power-of-2 values listed above, an IndexSizeError MUST be thrown.
numberOfInputChannels unsigned long = 2 This parameter determines the number of channels for this node's input. Values of up to 32 must be supported.
numberOfOutputChannels unsigned long = 2 This parameter determines the number of channels for this node's output. Values of up to 32 must be supported.
Return type: ScriptProcessorNode

createSpatialPanner
Creates a SpatialPannerNode.
No parameters.
Return type: SpatialPannerNode

createStereoPanner
Creates a StereoPannerNode.
No parameters.
Return type: StereoPannerNode

createWaveShaper
Creates a WaveShaperNode representing a non-linear distortion.
No parameters.
Return type: WaveShaperNode

decodeAudioData
Asynchronously decodes the audio file data contained in the ArrayBuffer. The ArrayBuffer can, for example, be loaded from an
XMLHttpRequest's response attribute after setting the responseType to "arraybuffer". Audio file data can be in any of the formats
supported by the audio or video elements. The buffer passed to decodeAudioData has its content-type determined by sniffing, as
described in [mimesniff].

Although the primary method of interfacing with this function is via its promise return value, the callback parameters are provided
for legacy reasons. The system shall ensure that the AudioContext is not garbage collected before the promise is resolved or
rejected and any callback function is called and completes.

The following steps must be performed:

1. Let promise be a new promise.


2. If audioData is null or not a valid ArrayBuffer:
1. Let error be a DOMException whose name is NotSupportedError.
2. Reject promise with error.
3. If errorCallback is not missing, invoke errorCallback with error.
4. Terminate this algorithm.
3. Neuter the audioData ArrayBuffer in such a way that JavaScript code may not access or modify the data anymore.
4. Queue a decoding operation to be performed on another thread.
5. Return promise.
6. In the decoding thread:
1. Attempt to decode the encoded audioData into linear PCM.
2. If a decoding error is encountered due to the audio format not being recognized or supported, or because of
corrupted/unexpected/inconsistent data, then, on the main thread's event loop:
1. Let error be a DOMException whose name is "EncodingError".
2. Reject promise with error.
3. If errorCallback is not missing, invoke errorCallback with error.
3. Otherwise:
1. Take the result, representing the decoded linear PCM audio data, and resample it to the sample-rate of the
AudioContext if it is different from the sample-rate of audioData.
2. On the main thread's event loop:
1. Let buffer be an AudioBuffer containing the final result (after possibly sample-rate converting).
2. Resolve promise with buffer.
3. If successCallback is not missing, invoke successCallback with buffer.

Parameter Type Nullable Optional Description


audioData ArrayBuffer An ArrayBuffer containing compressed audio data
successCallback DecodeSuccessCallback A callback function which will be invoked when the decoding
is finished. The single argument to this callback is an
AudioBuffer representing the decoded PCM audio data.
errorCallback DecodeErrorCallback A callback function which will be invoked if there is an error
decoding the audio file.
Return type: Promise<AudioBuffer>
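
The following non-normative sketch loads an encoded file over XMLHttpRequest and decodes it via the returned promise; "sound.ogg" is a placeholder URL:

var context = new AudioContext();
var xhr = new XMLHttpRequest();
xhr.open("GET", "sound.ogg");
xhr.responseType = "arraybuffer";
xhr.onload = function () {
    context.decodeAudioData(xhr.response).then(function (decodedData) {
        var source = context.createBufferSource();
        source.buffer = decodedData;
        source.connect(context.destination);
        source.start(0);
    }, function (error) {
        console.error("decodeAudioData failed: " + error.name);
    });
};
xhr.send();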

resume

Resumes the progression of the BaseAudioContext's currentTime in an audio context that has been suspended, which may
involve re-priming the frame buffer contents. The promise resolves when the system has re-acquired (if necessary) access to
audio hardware and has begun streaming to the destination, or immediately (with no other effect) if the context is already running.
The promise is rejected if the context has been closed. If the context is not currently suspended, the promise will resolve.

Note that until the first block of audio has been rendered following a call to this method, currentTime remains unchanged.

No parameters.
Return type: Promise<void>

suspend

Suspends the progression of BaseAudioContext's currentTime, allows any current context processing blocks that are already
processed to be played to the destination, and then allows the system to release its claim on audio hardware. This is generally
useful when the application knows it will not need the BaseAudioContext for some time, and wishes to let the audio hardware
power down. The promise resolves when the frame buffer is empty (has been handed off to the hardware), or immediately (with
no other effect) if the context is already suspended. The promise is rejected if the context has been closed.

While the system is suspended, MediaStreams will have their output ignored; that is, data will be lost due to the real-time nature of
media streams. HTMLMediaElements will similarly have their output ignored until the system is resumed. Audio Workers and
ScriptProcessorNodes will simply not fire their onaudioprocess events while suspended, but will resume when resumed. For the
purpose of AnalyserNode window functions, the data is considered as a continuous stream - i.e. the resume()/suspend() does not
cause silence to appear in the AnalyserNode's stream of data.

No parameters.
Return type: Promise<void>
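
As a non-normative sketch, an application might suspend the context while it is paused and resume it afterwards; onGamePaused and onGameResumed are assumed application callbacks:

var context = new AudioContext();

function onGamePaused() {
    context.suspend().then(function () {
        console.log("context state: " + context.state); // "suspended"
    });
}

function onGameResumed() {
    context.resume().then(function () {
        console.log("context state: " + context.state); // "running"
    });
}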

2.1.3 Callback DecodeSuccessCallback Parameters

decodedData of type AudioBuffer


The AudioBuffer containing the decoded audio data.

2.1.4 Callback DecodeErrorCallback Parameters

error of type DOMException


The error that occurred while decoding.

2.1.5 Dictionary AudioContextOptions Members

playbackCategory of type AudioContextPlaybackCategory, defaulting to "interactive"


Identify the type of playback, which affects tradeoffs between audio output latency and power consumption.
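
As a non-normative sketch, assuming the contextOptions constructor argument shown in the IDL above is exposed on AudioContext, a context intended for sustained music playback could be created as:

var context = new AudioContext({ playbackCategory: "playback" });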

2.1.6 Lifetime

Once created, an AudioContext will continue to play sound until it has no more sound to play, or the page goes away.

2.1.7 Lack of introspection or serialization primitives

This section is non-normative.

The Web Audio API takes a fire-and-forget approach to audio source scheduling. That is, source nodes are created for each note during the
lifetime of the AudioContext, and never explicitly removed from the graph. This is incompatible with a serialization API, since there is no
stable set of nodes that could be serialized.

Moreover, having an introspection API would allow content script to be able to observe garbage collections.

2.2 The AudioContext Interface


This interface represents an audio graph whose AudioDestinationNode is routed to a real-time output device that produces a signal directed
at the user. In most use cases, only a single AudioContext is used per document.

WebIDL

interface AudioContext : BaseAudioContext {


MediaElementAudioSourceNode createMediaElementSource (HTMLMediaElement mediaElement);
MediaStreamAudioSourceNode createMediaStreamSource (MediaStream mediaStream);
MediaStreamAudioDestinationNode createMediaStreamDestination ();
};

2.2.1 Methods

createMediaElementSource
Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio
playback from the HTMLMediaElement will be re-routed into the processing graph of the AudioContext.
Parameter Type Nullable Optional Description
mediaElement HTMLMediaElement The media element that will be re-routed.
Return type: MediaElementAudioSourceNode
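
The following non-normative sketch re-routes an existing audio element (assumed to have the id "player") through a GainNode:

var context = new AudioContext();
var mediaElement = document.getElementById("player");
var source = context.createMediaElementSource(mediaElement);
var gain = context.createGain();
gain.gain.value = 0.5;
source.connect(gain);
gain.connect(context.destination);
mediaElement.play();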

createMediaStreamDestination
Creates a MediaStreamAudioDestinationNode
No parameters.
Return type: MediaStreamAudioDestinationNode

createMediaStreamSource
Creates a MediaStreamAudioSourceNode given a MediaStream.

Parameter Type Nullable Optional Description


mediaStream MediaStream The media stream that will act as source.
Return type: MediaStreamAudioSourceNode
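
The following non-normative sketch processes live microphone input, assuming the user agent implements navigator.mediaDevices.getUserMedia:

var context = new AudioContext();
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var source = context.createMediaStreamSource(stream);
    var filter = context.createBiquadFilter();
    source.connect(filter);
    filter.connect(context.destination);
});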

2.3 The OfflineAudioContext Interface


OfflineAudioContext is a particular type of AudioContext for rendering/mixing-down (potentially) faster than real-time. It does not render to
the audio hardware, but instead renders as quickly as possible, fulfilling the returned promise with the rendered result as an AudioBuffer.

The OfflineAudioContext is constructed with the same arguments as AudioContext.createBuffer. A NotSupportedError exception MUST be thrown if any of the arguments is negative, zero, or outside its nominal range.

unsigned long numberOfChannels


Determines how many channels the buffer will have. See createBuffer for the supported number of channels.
unsigned long length
Determines the size of the buffer in sample-frames.
float sampleRate
Describes the sample-rate of the linear PCM audio data in the buffer in sample-frames per second. See createBuffer for valid sample
rates.

WebIDL

[Constructor(unsigned long numberOfChannels, unsigned long length, float sampleRate)]


interface OfflineAudioContext : BaseAudioContext {
Promise<AudioBuffer> startRendering ();
Promise<void> resume ();
Promise<void> suspend (double suspendTime);
attribute EventHandler oncomplete;
};

2.3.1 Attributes

oncomplete of type EventHandler

An EventHandler of type OfflineAudioCompletionEvent.

2.3.2 Methods

resume

Resumes the progression of time in an audio context that has been suspended. The promise resolves immediately because the
OfflineAudioContext does not require the audio hardware. If the context is not currently suspended or the rendering has not
started, the promise is rejected with InvalidStateError.

In contrast to a live AudioContext, the value of currentTime always reflects the start time of the next block to be rendered by the
audio graph, since the context's audio stream does not advance in time during suspension.

No parameters.
Return type: Promise<void>

startRendering

Given the current connections and scheduled changes, starts rendering audio. The system shall ensure that the
OfflineAudioContext is not garbage collected until either the promise is resolved and any callback function is called and
completes, or until the suspend function is called.

Although the primary method of getting the rendered audio data is via its promise return value, the instance will also fire an event
named complete for legacy reasons.

The following steps must be performed:

1. If startRendering has already been called previously, then return a promise rejected with InvalidStateError.
2. Let promise be a new promise.
3. Asynchronously perform the following steps:
1. Let buffer be a new AudioBuffer, with a number of channels, length and sample rate equal respectively to the
numberOfChannels, length and sampleRate parameters used when this instance's constructor was called.
2. Given the current connections and scheduled changes, start rendering length sample-frames of audio into buffer.
3. For every render quantum, check and suspend the rendering if necessary.
4. If a suspended context is resumed, continue to render the buffer.
5. Once the rendering is complete,
1. Resolve promise with buffer.
2. Queue a task to fire an event named complete at this instance, using an instance of
OfflineAudioCompletionEvent whose renderedBuffer property is set to buffer.
4. Return promise.

No parameters.
Return type: Promise<AudioBuffer>
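
The following non-normative sketch renders one second of a 440 Hz oscillator offline and receives the result as an AudioBuffer:

var offline = new OfflineAudioContext(2, 44100, 44100);
var osc = offline.createOscillator();
osc.frequency.value = 440;
osc.connect(offline.destination);
osc.start(0);

offline.startRendering().then(function (renderedBuffer) {
    console.log("Rendered " + renderedBuffer.length + " sample-frames.");
});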

suspend

Schedules a suspension of the time progression in the audio context at the specified time and returns a promise. This is generally
useful when manipulating the audio graph synchronously on OfflineAudioContext.

Note that the maximum precision of suspension is the size of the render quantum and the specified suspension time will be
rounded down to the nearest render quantum boundary. For this reason, it is not allowed to schedule multiple suspends at the
same quantized frame. Scheduling should also be done while the context is not running to ensure precise suspension.

Parameter Type Nullable Optional Description


suspendTime double Schedules a suspension of the rendering at the specified time, which
is quantized and rounded down to the render quantum size. If the
quantized frame number

1. is negative or
2. is less than or equal to the current time or
3. is greater than or equal to the total render duration or
4. is scheduled by another suspend for the same time,

then the promise is rejected with InvalidStateError.


Return type: Promise<void>


2.3.3 The OfflineAudioCompletionEvent Interface

This is an Event object which is dispatched to OfflineAudioContext for legacy reasons.

WebIDL

interface OfflineAudioCompletionEvent : Event {


readonly attribute AudioBuffer renderedBuffer;
};

2.3.3.1 Attributes

renderedBuffer of type AudioBuffer, readonly

An AudioBuffer containing the rendered audio data.

2.4 The AudioNode Interface


AudioNodes are the building blocks of an AudioContext. This interface represents audio sources, the audio destination, and intermediate
processing modules. These modules can be connected together to form processing graphs for rendering audio to the audio hardware. Each
node can have inputs and/or outputs. A source node has no inputs and a single output. An AudioDestinationNode has one input and no
outputs and represents the final destination to the audio hardware. Most processing nodes such as filters will have one input and one
output. Each type of AudioNode differs in the details of how it processes or synthesizes audio. But, in general, an AudioNode will process its
inputs (if it has any), and generate audio for its outputs (if it has any).

Each output has one or more channels. The exact number of channels depends on the details of the specific AudioNode.

An output may connect to one or more AudioNode inputs, thus fan-out is supported. An input initially has no connections, but may be
connected from one or more AudioNode outputs, thus fan-in is supported. When the connect() method is called to connect an output of an
AudioNode to an input of an AudioNode, we call that a connection to the input.

Each AudioNode input has a specific number of channels at any given time. This number can change depending on the connection(s) made
to the input. If the input has no connections then it has one channel which is silent.

For each input, an AudioNode performs a mixing (usually an up-mixing) of all connections to that input. Please see 3. Mixer Gain Structure
for more informative details, and the 5. Channel up-mixing and down-mixing section for normative requirements.

The processing of inputs and the internal operations of an AudioNode take place continuously with respect to AudioContext time, regardless
of whether the node has connected outputs, and regardless of whether these outputs ultimately reach an AudioContext's
AudioDestinationNode.

For performance reasons, practical implementations will need to use block processing, with each AudioNode processing a fixed number of
sample-frames of size block-size. In order to get uniform behavior across implementations, we will define this value explicitly. block-size is
defined to be 128 sample-frames, which corresponds to roughly 3 ms at a sample rate of 44.1 kHz.

AudioNodes are EventTargets, as described in DOM [DOM]. This means that it is possible to dispatch events to AudioNodes the same way
that other EventTargets accept events.

WebIDL

enum ChannelCountMode {
"max",
"clamped-max",
"explicit"
};

Enumeration description
max computedNumberOfChannels is computed as the maximum of the number of channels of all connections. In this mode channelCount is ignored.
clamped-max Same as max up to a limit of the channelCount.
explicit computedNumberOfChannels is the exact value as specified in channelCount.


WebIDL

enum ChannelInterpretation {
"speakers",
"discrete"
};

Enumeration description
speakers Use up-down-mix equations for mono/stereo/quad/5.1. In cases where the number of channels does not match any of these basic speaker layouts, revert to "discrete".
discrete Up-mix by filling channels until they run out, then zero out remaining channels. Down-mix by filling as many channels as possible, then dropping remaining channels.

WebIDL

interface AudioNode : EventTarget {


AudioNode connect (AudioNode destination, optional unsigned long output = 0
, optional unsigned long input = 0
);
void connect (AudioParam destination, optional unsigned long output = 0
);
void disconnect ();
void disconnect (unsigned long output);
void disconnect (AudioNode destination);
void disconnect (AudioNode destination, unsigned long output);
void disconnect (AudioNode destination, unsigned long output, unsigned long input);
void disconnect (AudioParam destination);
void disconnect (AudioParam destination, unsigned long output);
readonly attribute AudioContext context;
readonly attribute unsigned long numberOfInputs;
readonly attribute unsigned long numberOfOutputs;
attribute unsigned long channelCount;
attribute ChannelCountMode channelCountMode;
attribute ChannelInterpretation channelInterpretation;
};

2.4.1 Attributes

channelCount of type unsigned long

The number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2
except for specific nodes where its value is specially determined. This attribute has no effect for nodes with no inputs. If this value
is set to zero or to a value greater than the implementation's maximum number of channels the implementation MUST throw a
NotSupportedError exception.

See the 5. Channel up-mixing and down-mixing section for more information on this attribute.

channelCountMode of type ChannelCountMode

Determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. This attribute
has no effect for nodes with no inputs.

See the 5. Channel up-mixing and down-mixing section for more information on this attribute.

channelInterpretation of type ChannelInterpretation

Determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. This
attribute has no effect for nodes with no inputs.

See the 5. Channel up-mixing and down-mixing section for more information on this attribute.

contextof type AudioContext, readonly


The AudioContext which owns this AudioNode.

numberOfInputs of type unsigned long, readonly


The number of inputs feeding into the AudioNode. For source nodes, this will be 0. This attribute is predetermined for many
AudioNode types, but some AudioNode, like the ChannelMergerNode and the AudioWorkerNode have variable number of inputs.

numberOfOutputs of type unsigned long, readonly


The number of outputs coming out of the AudioNode. This attribute is predetermined for some AudioNode types, but can be
variable, like for the ChannelSplitterNode and the AudioWorkerNode.
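
As a non-normative sketch of the channelCount, channelCountMode and channelInterpretation attributes described above, the following forces a GainNode's input to be mixed to exactly one channel regardless of its incoming connections:

var context = new AudioContext();
var gain = context.createGain();
gain.channelCount = 1;
gain.channelCountMode = "explicit";
gain.channelInterpretation = "speakers";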

2.4.2 Methods

connect

There can only be one connection between a given output of one specific node and a given input of another specific node.
Multiple connections with the same termini are ignored. For example:

EXAMPLE 4
nodeA.connect(nodeB);
nodeA.connect(nodeB);

will have the same effect as

EXAMPLE 5
nodeA.connect(nodeB);

Parameter Type Nullable Optional Description

destination AudioNode The destination parameter is the AudioNode to connect to. If the destination parameter is an AudioNode that has been created using another AudioContext, an InvalidAccessError MUST be thrown. That is, AudioNodes cannot be shared between AudioContexts.
output unsigned long = 0 The output parameter is an index describing which output of the AudioNode to connect from. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown. It is possible to connect an AudioNode output to more than one input with multiple calls to connect(). Thus, "fan-out" is supported.
input unsigned long = 0 The input parameter is an index describing which input of the destination AudioNode to connect to. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown. It is possible to connect an AudioNode to another AudioNode which creates a cycle: an AudioNode may connect to another AudioNode, which in turn connects back to the first AudioNode. This is allowed only if there is at least one DelayNode in the cycle, or a NotSupportedError exception MUST be thrown.
Return type: AudioNode
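As a non-normative illustration of fan-out and of ignored duplicate connections (assuming an AudioContext named context):

var source = context.createBufferSource();
var dry = context.createGain();
var wet = context.createConvolver();
source.connect(dry);                 // output 0 of source to input 0 of dry
source.connect(wet);                 // fan-out: the same output feeds a second node
source.connect(dry);                 // same termini as the first call, so it is ignored
dry.connect(context.destination);
wet.connect(context.destination);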

connect
Connects the AudioNode to an AudioParam, controlling the parameter value with an audio-rate signal.

It is possible to connect an AudioNode output to more than one AudioParam with multiple calls to connect(). Thus, "fan-out" is
supported.

It is possible to connect more than one AudioNode output to a single AudioParam with multiple calls to connect(). Thus, "fan-in" is
supported.

An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing
if it is not already mono, then mix it together with other such outputs and finally will mix with the intrinsic parameter value (the
value the AudioParam would normally have without any audio connections), including any timeline changes scheduled for the
parameter.

There can only be one connection between a given output of one specific node and a specific AudioParam. Multiple connections
with the same termini are ignored. For example:

nodeA.connect(param);
nodeA.connect(param);

will have the same effect as


nodeA.connect(param);

Parameters:

destination (AudioParam): The destination parameter is the AudioParam to connect to. This method does not return the destination AudioParam object.

output (unsigned long, optional, defaults to 0): The output parameter is an index describing which output of the AudioNode from which to connect. If the parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

Return type: void
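A non-normative sketch of audio-rate parameter modulation (an LFO on a GainNode's gain, assuming an AudioContext named context); the oscillator's output is down-mixed to mono and summed with the intrinsic value 0.5:

var osc = context.createOscillator();
var depth = context.createGain();
var amp = context.createGain();
osc.frequency.value = 5;          // 5 Hz modulation
depth.gain.value = 0.4;           // modulation depth
amp.gain.value = 0.5;             // intrinsic parameter value
osc.connect(depth);
depth.connect(amp.gain);          // connect an AudioNode output to an AudioParam
osc.start();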

disconnect

Disconnects all outgoing connections from the AudioNode.

No parameters.
Return type: void

disconnect

Disconnects a single output of the AudioNode from any other AudioNode or AudioParam objects to which it is connected.

Parameters:

output (unsigned long): This parameter is an index describing which output of the AudioNode to disconnect. It disconnects all outgoing connections from the given output. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

Return type: void

disconnect

Disconnects all outputs of the AudioNode that go to a specific destination AudioNode.

Parameters:

destination (AudioNode): The destination parameter is the AudioNode to disconnect. It disconnects all outgoing connections to the given destination. If there is no connection to the destination, an InvalidAccessError exception MUST be thrown.

Return type: void

disconnect

Disconnects a specific output of the AudioNode from a specific destination AudioNode.

Parameters:

destination (AudioNode): The destination parameter is the AudioNode to disconnect. If there is no connection to the destination from the given output, an InvalidAccessError exception MUST be thrown.

output (unsigned long): The output parameter is an index describing which output of the AudioNode from which to disconnect. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

Return type: void

disconnect

Disconnects a specific output of the AudioNode from a specific input of some destination AudioNode.

Parameters:

destination (AudioNode): The destination parameter is the AudioNode to disconnect. If there is no connection to the destination from the given output to the given input, an InvalidAccessError exception MUST be thrown.

output (unsigned long): The output parameter is an index describing which output of the AudioNode from which to disconnect. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

input (unsigned long): The input parameter is an index describing which input of the destination AudioNode to disconnect. If this parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

Return type: void

disconnect

Disconnects all outputs of the AudioNode that go to a specific destination AudioParam. The contribution of this AudioNode to the
computed parameter value goes to 0 when this operation takes effect. The intrinsic parameter value is not affected by this
operation.

Parameters:

destination (AudioParam): The destination parameter is the AudioParam to disconnect. If there is no connection to the destination, an InvalidAccessError exception MUST be thrown.
Return type: void

disconnect

Disconnects a specific output of the AudioNode from a specific destination AudioParam. The contribution of this AudioNode to the
computed parameter value goes to 0 when this operation takes effect. The intrinsic parameter value is not affected by this
operation.

Parameters:

destination (AudioParam): The destination parameter is the AudioParam to disconnect. If there is no connection to the destination, an InvalidAccessError exception MUST be thrown.

output (unsigned long): The output parameter is an index describing which output of the AudioNode from which to disconnect. If the parameter is out-of-bounds, an IndexSizeError exception MUST be thrown.

Return type: void

2.4.3 Lifetime

This section is informative.

An implementation may choose any method to avoid unnecessary resource usage and unbounded memory growth of unused/finished
nodes. The following is a description to help guide the general expectation of how node lifetime would be managed.

An AudioNode will live as long as there are any references to it. There are several types of references:

1. A normal JavaScript reference obeying normal garbage collection rules.


2. A playing reference for both AudioBufferSourceNodes and OscillatorNodes. These nodes maintain a playing reference to themselves
while they are currently playing.
3. A connection reference which occurs if another AudioNode is connected to it.
4. A tail-time reference which an AudioNode maintains on itself as long as it has any internal processing state which has not yet been
emitted. For example, a ConvolverNode has a tail which continues to play even after receiving silent input (think about clapping your
hands in a large concert hall and continuing to hear the sound reverberate throughout the hall). Some AudioNodes have this property.
Please see details for specific nodes.

Any AudioNodes which are connected in a cycle and are directly or indirectly connected to the AudioDestinationNode of the AudioContext will
stay alive as long as the AudioContext is alive.

NOTE

The uninterrupted operation of AudioNodes implies that as long as live references exist to a node, the node will continue processing
its inputs and evolving its internal state even if it is disconnected from the audio graph. Since this processing will consume CPU and
power, developers should carefully consider the resource usage of disconnected nodes. In particular, it is a good idea to minimize
resource consumption by explicitly putting disconnected nodes into a stopped state when possible.

When an AudioNode has no references it will be deleted. Before it is deleted, it will disconnect itself from any other AudioNodes which it is
connected to. In this way it releases all connection references (3) it has to other nodes.

Regardless of any of the above references, it can be assumed that the AudioNode will be deleted when its AudioContext is deleted.

2.5 The AudioDestinationNode Interface


This is an AudioNode representing the final audio destination and is what the user will ultimately hear. It can often be considered as an audio
output device which is connected to speakers. All rendered audio to be heard will be routed to this node, a "terminal" node in the
AudioContext's routing graph. There is only a single AudioDestinationNode per AudioContext, provided through the destination attribute of
AudioContext.

numberOfInputs : 1
numberOfOutputs : 0

channelCount = 2;
channelCountMode = "explicit";
channelInterpretation = "speakers";

WebIDL

interface AudioDestinationNode : AudioNode {


readonly attribute unsigned long maxChannelCount;
};

2.5.1 Attributes

maxChannelCount of type unsigned long, readonly

The maximum number of channels that the channelCount attribute can be set to. An AudioDestinationNode representing the audio
hardware end-point (the normal case) can potentially output more than 2 channels of audio if the audio hardware is multi-channel.
maxChannelCount is the maximum number of channels that this hardware is capable of supporting. If this value is 0, then this
indicates that channelCount may not be changed. This will be the case for an AudioDestinationNode in an OfflineAudioContext
and also for basic implementations with hardware support for stereo output only.

channelCount defaults to 2 for a destination in a normal AudioContext, and may be set to any non-zero value less than or equal to
maxChannelCount. An IndexSizeError exception MUST be thrown if this value is not within the valid range. Giving a concrete
example, if the audio hardware supports 8-channel output, then we may set channelCount to 8, and render 8-channels of output.

For an AudioDestinationNode in an OfflineAudioContext, the channelCount is determined when the offline context is created and
this value may not be changed.
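A non-normative sketch of enabling multi-channel output when the hardware supports it (assuming an AudioContext named context):

var destinationNode = context.destination;
if (destinationNode.maxChannelCount >= 6) {
  // Multi-channel hardware: render 5.1 output instead of the default stereo.
  destinationNode.channelCount = 6;
}
// Assigning 0, or a value greater than maxChannelCount, would throw an IndexSizeError.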

2.6 The AudioParam Interface


AudioParam controls an individual aspect of an AudioNode's functioning, such as volume. The parameter can be set immediately to a
particular value using the value attribute. Or, value changes can be scheduled to happen at very precise times (in the coordinate system of
AudioContext's currentTime attribute), for envelopes, volume fades, LFOs, filter sweeps, grain windows, etc. In this way, arbitrary timeline-
based automation curves can be set on any AudioParam. Additionally, audio signals from the outputs of AudioNodes can be connected to an
AudioParam, summing with the intrinsic parameter value.

Some synthesis and processing AudioNodes have AudioParams as attributes whose values must be taken into account on a per-audio-
sample basis. For other AudioParams, sample-accuracy is not important and the value changes can be sampled more coarsely. Each
individual AudioParam will specify that it is either an a-rate parameter which means that its values must be taken into account on a per-audio-
sample basis, or it is a k-rate parameter.

Implementations must use block processing, with each AudioNode processing 128 sample-frames in each block.

For each 128 sample-frame block, the value of a k-rate parameter must be sampled at the time of the very first sample-frame, and that
value must be used for the entire block. a-rate parameters must be sampled for each sample-frame of the block.

An AudioParam maintains a time-ordered event list which is initially empty. The times are in the time coordinate system of the AudioContext's
currentTime attribute. The events define a mapping from time to value. The following methods can change the event list by adding a new
event into the list of a type specific to the method. Each event has a time associated with it, and the events will always be kept in time-order
in the list. These methods will be called automation methods:

setValueAtTime() - SetValue
linearRampToValueAtTime() - LinearRampToValue
exponentialRampToValueAtTime() - ExponentialRampToValue
setTargetAtTime() - SetTarget
setValueCurveAtTime() - SetValueCurve

The following rules will apply when calling these methods:

If one of these events is added at a time where there is already an event of the exact same type, then the new event will replace the
old one.
If one of these events is added at a time where there is already one or more events of a different type, then it will be placed in the list
after them, but before events whose times are after the event.
If setValueCurveAtTime() is called for time \(T\) and duration \(D\) and there are any events having a time greater than \(T\), but less
than \(T + D\), then a NotSupportedError exception MUST be thrown. In other words, it's not ok to schedule a value curve during a time
period containing other events.
Similarly a NotSupportedError exception MUST be thrown if any automation method is called at a time which is inside of the time
interval of a SetValueCurve event at time T and duration D.

WebIDL

interface AudioParam {
attribute float value;
readonly attribute float defaultValue;
AudioParam setValueAtTime (float value, double startTime);
AudioParam linearRampToValueAtTime (float value, double endTime);
AudioParam exponentialRampToValueAtTime (float value, double endTime);
AudioParam setTargetAtTime (float target, double startTime, float timeConstant);
AudioParam setValueCurveAtTime (Float32Array values, double startTime, double duration);
AudioParam cancelScheduledValues (double startTime);
};

2.6.1 Attributes

defaultValue of type float, readonly


Initial value for the value attribute.

value of type float

The parameter's floating-point value. This attribute is initialized to the defaultValue. If value is set during a time when there are
any automation events scheduled then it will be ignored and no exception will be thrown.

The effect of setting this attribute is equivalent to calling setValueAtTime() with the current AudioContext's currentTime and the
requested value. Subsequent accesses to this attribute's getter will return the same value.

2.6.2 Methods

cancelScheduledValues

Cancels all scheduled parameter changes with times greater than or equal to startTime. Active setTargetAtTime automations
(those with startTime less than the supplied time value) will also be cancelled.

Parameters:

startTime (double): The starting time at and after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the AudioContext's currentTime attribute. A TypeError exception MUST be thrown if startTime is negative or is not a finite number.

Return type: AudioParam

exponentialRampToValueAtTime

Schedules an exponential continuous change in parameter value from the previous scheduled parameter value to the given
value. Parameters representing filter frequencies and playback rate are best changed exponentially because of the way humans
perceive sound.

The value during the time interval \(T_0 \leq t < T_1\) (where \(T_0\) is the time of the previous event and \(T_1\) is the endTime
parameter passed into this method) will be calculated as:

$$
v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0}
$$

where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the value parameter passed into this method. It is an error if either \
(V_0\) or \(V_1\) is not strictly positive.

This also implies an exponential ramp to 0 is not possible. A good approximation can be achieved using setTargetAtTime with an
appropriately chosen time constant.

If there are no more events after this ExponentialRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).

Parameters:

value (float): The value the parameter will exponentially ramp to at the given time. A NotSupportedError exception MUST be thrown if this value is less than or equal to 0, or if the value at the time of the previous event is less than or equal to 0.

endTime (double): The time, in the same time coordinate system as the AudioContext's currentTime attribute, where the exponential ramp ends. A TypeError exception MUST be thrown if endTime is negative or is not a finite number.

Return type: AudioParam

linearRampToValueAtTime

Schedules a linear continuous change in parameter value from the previous scheduled parameter value to the given value.

The value during the time interval \(T_0 \leq t < T_1\) (where \(T_0\) is the time of the previous event and \(T_1\) is the endTime
parameter passed into this method) will be calculated as:

$$
v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0}
$$

Where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the value parameter passed into this method.

If there are no more events after this LinearRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).

Parameters:

value (float): The value the parameter will linearly ramp to at the given time.

endTime (double): The time, in the same time coordinate system as the AudioContext's currentTime attribute, at which the automation ends. A TypeError exception MUST be thrown if endTime is negative or is not a finite number.

Return type: AudioParam

setTargetAtTime

Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses,
this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value
does not immediately change to the target value at the given time, but instead gradually changes to the target value.

During the time interval: \(T_0 \leq t < T_1\), where \(T_0\) is the startTime parameter and \(T_1\) represents the time of the event
following this event (or \(\infty\) if there are no following events):
$$
v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)}
$$

where \(V_0\) is the initial value (the .value attribute) at \(T_0\) (the startTime parameter), \(V_1\) is equal to the target
parameter, and \(\tau\) is the timeConstant parameter.

Parameters:

target (float): The value the parameter will start changing to at the given time.

startTime (double): The time at which the exponential approach will begin, in the same time coordinate system as the AudioContext's currentTime attribute. A TypeError exception MUST be thrown if startTime is negative or is not a finite number.

timeConstant (float): The time-constant value of the first-order filter (exponential) approach to the target value. The larger this value is, the slower the transition will be. The value must be strictly positive or a TypeError exception MUST be thrown.

More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value \(1 - 1/e\) (around 63.2%) given a step input response (transition from 0 to 1 value).

Return type: AudioParam

setValueAtTime

Schedules a parameter value change at the given time.

If there are no more events after this SetValue event, then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the startTime parameter
and \(V\) is the value parameter. In other words, the value will remain constant.

If the next event (having time \(T_1\)) after this SetValue event is not of type LinearRampToValue or ExponentialRampToValue,
then, for \(T_0 \leq t < T_1\):

$$
v(t) = V
$$

In other words, the value will remain constant during this time interval, allowing the creation of "step" functions.

If the next event after this SetValue event is of type LinearRampToValue or ExponentialRampToValue then please see
linearRampToValueAtTime or exponentialRampToValueAtTime, respectively.

Parameters:

value (float): The value the parameter will change to at the given time.

startTime (double): The time, in the same time coordinate system as the AudioContext's currentTime attribute, at which the parameter changes to the given value. A TypeError exception MUST be thrown if startTime is negative or is not a finite number.

Return type: AudioParam

setValueCurveAtTime

Sets an array of arbitrary parameter values starting at the given time for the given duration. The number of values will be scaled
to fit into the desired duration.

Let \(T_0\) be startTime, \(T_D\) be duration, \(V\) be the values array, and \(N\) be the length of the values array. Then, during
the time interval: \(T_0 \le t < T_0 + T_D\), let

$$
k = \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor
$$

Then \(v(t)\) is computed by linearly interpolating between \(V[k]\) and \(V[k+1]\).

After the end of the curve time interval (\(t \ge T_0 + T_D\)), the value will remain constant at the final curve value, until there is
another automation event (if any).

Parameters:

values (Float32Array): A Float32Array representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the AudioParam.

startTime (double): The start time, in the same time coordinate system as the AudioContext's currentTime attribute, at which the value curve will be applied. A TypeError exception MUST be thrown if startTime is negative or is not a finite number.

duration (double): The amount of time in seconds (after the startTime parameter) during which values will be calculated according to the values parameter.

Return type: AudioParam

2.6.3 Computation of Value



computedValue is the final value controlling the audio DSP and is computed by the audio rendering thread during each rendering time
quantum. It must be internally computed as follows:

1. An intrinsic parameter value will be calculated at each time, which is either the value set directly to the value attribute, or, if there are
any scheduled parameter changes (automation events) with times before or at this time, the value as calculated from these events. If
the value attribute is set after any automation events have been scheduled, then these events will be removed. When read, the value
attribute always returns the intrinsic value for the current time. If automation events are removed from a given time range, then the
intrinsic value will remain unchanged and stay at its previous value until either the value attribute is directly set, or automation events
are added for the time range.
2. An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing if it
is not already mono, then mix it together with other such outputs. If there are no AudioNodes connected to it, then this value is 0,
having no effect on the computedValue.
3. The computedValue is the sum of the intrinsic value and the value calculated from (2).

2.6.4 AudioParam Automation Example

Fig. 4 An example of parameter automation.

EXAMPLE 6

var curveLength = 44100;


var curve = new Float32Array(curveLength);
for (var i = 0; i < curveLength; ++i)
curve[i] = Math.sin(Math.PI * i / curveLength);

var t0 = 0;
var t1 = 0.1;
var t2 = 0.2;
var t3 = 0.3;
var t4 = 0.325;
var t5 = 0.5;
var t6 = 0.6;
var t7 = 0.7;
var t8 = 1.0;
var timeConstant = 0.1;

param.setValueAtTime(0.2, t0);
param.setValueAtTime(0.3, t1);
param.setValueAtTime(0.4, t2);
param.linearRampToValueAtTime(1, t3);
param.linearRampToValueAtTime(0.8, t4);
param.setTargetAtTime(.5, t4, timeConstant);
// Compute where the setTargetAtTime will be at time t5 so we can make
// the following exponential start at the right point so there's no
// jump discontinuity. From the spec, we have
// v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant)
// Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant)
param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5);
param.exponentialRampToValueAtTime(0.75, t6);
param.exponentialRampToValueAtTime(0.05, t7);
param.setValueCurveAtTime(curve, t7, t8 - t7);

2.7 The GainNode Interface


Changing the gain of an audio signal is a fundamental operation in audio applications. The GainNode is one of the building blocks for
creating mixers. This interface is an AudioNode with a single input and single output:

numberOfInputs : 1
numberOfOutputs : 1

channelCountMode = "max";
channelInterpretation = "speakers";

Each sample of each channel of the input data of the GainNode MUST be multiplied by the computedValue of the gain AudioParam.

WebIDL

interface GainNode : AudioNode {


readonly attribute AudioParam gain;
};

2.7.1 Attributes

gain of type AudioParam, readonly


Represents the amount of gain to apply. Its default value is 1 (no gain change). The nominal minValue is 0, but may be set
negative for phase inversion. The nominal maxValue is 1, but higher values are allowed (no exception thrown). This parameter is
a-rate.
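A non-normative sketch of a two-second fade-out using the gain parameter (assuming an AudioContext named context and an existing source node named source):

var gainNode = context.createGain();
source.connect(gainNode);
gainNode.connect(context.destination);

var now = context.currentTime;
gainNode.gain.setValueAtTime(1, now);                // start at unity gain
gainNode.gain.linearRampToValueAtTime(0, now + 2);   // silent two seconds later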

2.8 The DelayNode Interface


A delay-line is a fundamental building block in audio applications. This interface is an AudioNode with a single input and single output:

numberOfInputs : 1
numberOfOutputs : 1

channelCountMode = "max";
channelInterpretation = "speakers";

The number of channels of the output always equals the number of channels of the input.

It delays the incoming audio signal by a certain amount. Specifically, for input signal input(t) and delay time delayTime(t), the output at each time t is
output(t) = input(t - delayTime(t)). The default delayTime is 0 seconds (no delay).

When the number of channels in a DelayNode's input changes (thus changing the output channel count also), there may be delayed audio
samples which have not yet been output by the node and are part of its internal state. If these samples were received earlier with a different
channel count, they must be upmixed or downmixed before being combined with newly received input so that all internal delay-line mixing
takes place using the single prevailing channel layout.

WebIDL

interface DelayNode : AudioNode {


readonly attribute AudioParam delayTime;
};

2.8.1 Attributes

delayTime of type AudioParam, readonly

An AudioParam object representing the amount of delay (in seconds) to apply. Its default value is 0 (no delay). The minimum value
is 0 and the maximum value is determined by the maxDelayTime argument to the AudioContext method createDelay.

If DelayNode is part of a cycle, then the value of the delayTime attribute is clamped to a minimum of 128 frames (one block).

This parameter is a-rate.
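A non-normative sketch of a feedback echo built around a DelayNode; the cycle is permitted because it contains a DelayNode (an AudioContext named context and a source node named source are assumed):

var delay = context.createDelay(1.0);   // maxDelayTime of one second
var feedback = context.createGain();
delay.delayTime.value = 0.25;           // 250 ms between repeats
feedback.gain.value = 0.5;              // each repeat at half the level
source.connect(delay);
delay.connect(feedback);
feedback.connect(delay);                // the feedback cycle
delay.connect(context.destination);
source.connect(context.destination);    // dry signal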

2.9 The AudioBuffer Interface


This interface represents a memory-resident audio asset (for one-shot sounds and other short audio clips). Its format is non-interleaved
IEEE 32-bit linear PCM with a nominal range of -1 -> +1. It can contain one or more channels. Typically, it would be expected that the length
of the PCM data would be fairly short (usually somewhat less than a minute). For longer sounds, such as music soundtracks, streaming
should be used with the audio element and MediaElementAudioSourceNode.

An AudioBuffer may be used by one or more AudioContexts, and can be shared between an OfflineAudioContext and an AudioContext.

WebIDL

interface AudioBuffer {
readonly attribute float sampleRate;
readonly attribute long length;
readonly attribute double duration;
readonly attribute long numberOfChannels;
Float32Array getChannelData (unsigned long channel);
void copyFromChannel (Float32Array destination, unsigned long channelNumber, optional unsigned long startInChannel = 0);
void copyToChannel (Float32Array source, unsigned long channelNumber, optional unsigned long startInChannel = 0);
};

2.9.1 Attributes

duration of type double, readonly


Duration of the PCM audio data in seconds.

length of type long, readonly


Length of the PCM audio data in sample-frames.

numberOfChannels of type long, readonly


The number of discrete audio channels.

sampleRate of type float, readonly


The sample-rate for the PCM audio data in samples per second.

2.9.2 Methods

copyFromChannel
The copyFromChannel method copies the samples from the specified channel of the AudioBuffer to the destination array.
Parameters:

destination (Float32Array): The array the channel data will be copied to.

channelNumber (unsigned long): The index of the channel to copy the data from. If channelNumber is greater than or equal to the number of channels of the AudioBuffer, an IndexSizeError MUST be thrown.

startInChannel (unsigned long, optional, defaults to 0): An optional offset to copy the data from. If startInChannel is greater than the length of the AudioBuffer, an IndexSizeError MUST be thrown.

Return type: void

copyToChannel
The copyToChannel method copies the samples to the specified channel of the AudioBuffer, from the source array.
Parameters:

source (Float32Array): The array the channel data will be copied from.

channelNumber (unsigned long): The index of the channel to copy the data to. If channelNumber is greater than or equal to the number of channels of the AudioBuffer, an IndexSizeError MUST be thrown.

startInChannel (unsigned long, optional, defaults to 0): An optional offset to copy the data to. If startInChannel is greater than the length of the AudioBuffer, an IndexSizeError MUST be thrown.

Return type: void

getChannelData
Returns the Float32Array representing the PCM audio data for the specific channel.
Parameters:

channel (unsigned long): This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than numberOfChannels or an IndexSizeError exception MUST be thrown.

Return type: Float32Array

NOTE

The methods copyToChannel and copyFromChannel can be used to fill part of an array by passing in a Float32Array that's a view onto
the larger array. When reading data from an AudioBuffer's channels, and the data can be processed in chunks, copyFromChannel
should be preferred to calling getChannelData and accessing the resulting array, because it may avoid unnecessary memory
allocation and copying.
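A non-normative sketch of chunked reading with copyFromChannel, reusing one Float32Array (and a subarray view for the final partial chunk); audioBuffer is assumed to be an existing AudioBuffer:

var chunk = new Float32Array(1024);
for (var offset = 0; offset < audioBuffer.length; offset += 1024) {
  var frames = Math.min(1024, audioBuffer.length - offset);
  audioBuffer.copyFromChannel(chunk.subarray(0, frames), 0, offset);
  // ... process `frames` samples of channel 0 from `chunk` here ...
}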

An internal operation acquire the contents of an AudioBuffer is invoked when the contents of an AudioBuffer are needed by some API
implementation. This operation returns immutable channel data to the invoker.

When an acquire the content operation occurs on an AudioBuffer, run the following steps:

1. If any of the AudioBuffer's ArrayBuffers have been neutered, abort these steps, and return zero-length channel data buffers to the
invoker.
2. Neuter all ArrayBuffers for arrays previously returned by getChannelData on this AudioBuffer.
3. Retain the underlying data buffers from those ArrayBuffers and return references to them to the invoker.
4. Attach ArrayBuffers containing copies of the data to the AudioBuffer, to be returned by the next call to getChannelData.

The acquire the contents of an AudioBuffer operation is invoked in the following cases:

When AudioBufferSourceNode.start is called, it acquires the contents of the node's buffer. If the operation fails, nothing is played.
When a ConvolverNode's buffer is set to an AudioBuffer while the node is connected to an output node, or a ConvolverNode is
connected to an output node while the ConvolverNode's buffer is set to an AudioBuffer, it acquires the content of the AudioBuffer.
When the dispatch of an AudioProcessingEvent completes, it acquires the contents of its outputBuffer.

NOTE

This means that copyToChannel cannot be used to change the content of an AudioBuffer currently in use by an AudioNode that has
acquired the content of an AudioBuffer, since the AudioNode will continue to use the data previously acquired.

2.10 The AudioBufferSourceNode Interface


This interface represents an audio source from an in-memory audio asset in an AudioBuffer. It is useful for playing audio assets which
require a high degree of scheduling flexibility, for instance, playing back in rhythmically-perfect ways. If sample-accurate playback of
network- or disk-backed assets is required, an implementer should use AudioWorker to implement playback.

The start() method is used to schedule when sound playback will happen. The start() method may not be issued multiple times. The
playback will stop automatically when the buffer's audio data has been completely played (if the loop attribute is false), or when the stop()
method has been called and the specified time has been reached. Please see more details in the start() and stop() description.

numberOfInputs : 0
numberOfOutputs : 1

The number of channels of the output always equals the number of channels of the AudioBuffer assigned to the .buffer attribute, or is one
channel of silence if .buffer is NULL.
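A non-normative sketch of fetching, decoding, and scheduling a clip (the URL "clip.wav" is a placeholder; context is an existing AudioContext):

var request = new XMLHttpRequest();
request.open("GET", "clip.wav", true);
request.responseType = "arraybuffer";
request.onload = function () {
  context.decodeAudioData(request.response, function (buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    // when, offset and duration: start 100 ms from now, 1 s into the buffer,
    // and play 2 s of audio.
    source.start(context.currentTime + 0.1, 1.0, 2.0);
  });
};
request.send();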

WebIDL

interface AudioBufferSourceNode : AudioNode {


attribute AudioBuffer? buffer;
readonly attribute AudioParam playbackRate;
readonly attribute AudioParam detune;
attribute boolean loop;
attribute double loopStart;
attribute double loopEnd;
void start (optional double when = 0, optional double offset = 0, optional double duration);
void stop (optional double when = 0);
attribute EventHandler onended;
};

2.10.1 Attributes

buffer of type AudioBuffer, nullable


Represents the audio asset to be played. This attribute can only be set once, or an InvalidStateError MUST be thrown.

detune of type AudioParam, readonly


An additional parameter to modulate the speed at which the audio stream is rendered. Its default value is 0. Its nominal range is
[-1200; 1200]. This parameter is k-rate.

loop of type boolean


Indicates if the audio data should play in a loop. The default value is false. If loop is dynamically modified during playback, the
new value will take effect on the next processing block of audio.

loopEnd of type double


An optional value in seconds where looping should end if the loop attribute is true. Its value is exclusive of the content of the loop:
the sample frames comprising the loop run from the values loopStart to loopEnd-(1.0/sampleRate). Its default value is 0, and it
may usefully be set to any value between 0 and the duration of the buffer. If loopEnd is less than 0, looping will end at 0. If loopEnd
is greater than the duration of the buffer, looping will end at the end of the buffer. This attribute is converted to an exact sample
frame offset within the buffer by multiplying by the buffer's sample rate and rounding to the nearest integer value. Thus its
behavior is independent of the value of the playbackRate parameter.

loopStart of type double


An optional value in seconds where looping should begin if the loop attribute is true. Its default value is 0, and it may usefully be
set to any value between 0 and the duration of the buffer. If loopStart is less than 0, looping will begin at 0. If loopStart is greater
than the duration of the buffer, looping will begin at the end of the buffer. This attribute is converted to an exact sample frame
offset within the buffer by multiplying by the buffer's sample rate and rounding to the nearest integer value. Thus its behavior is
independent of the value of the playbackRate parameter.

onended of type EventHandler


A property used to set the EventHandler (described in HTML[HTML]) for the ended event that is dispatched to
AudioBufferSourceNode node types. When the playback of the buffer for an AudioBufferSourceNode is finished, an event of type
Event (described in HTML [HTML]) will be dispatched to the event handler.

playbackRate of type AudioParam, readonly


The speed at which to render the audio stream. Its default value is 1. This parameter is k-rate.

2.10.2 Methods

start
Schedules a sound to playback at an exact time. start may only be called one time and must be called before stop is called or
an InvalidStateError exception MUST be thrown.
Parameters:

when (double, optional, defaults to 0): The when parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the AudioContext's currentTime attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A TypeError exception MUST be thrown if when is negative.

offset (double, optional, defaults to 0): The offset parameter describes the offset time in the buffer (in seconds) where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A TypeError exception MUST be thrown if offset is negative. If offset is greater than loopEnd, playback will begin at loopEnd (and immediately loop to loopStart). This parameter is converted to an exact sample frame offset within the buffer by multiplying by the buffer's sample rate and rounding to the nearest integer value. Thus its behavior is independent of the value of the playbackRate parameter.

duration (double, optional): The duration parameter describes the duration of the portion (in seconds) to be played. If this parameter is not passed, the duration will be equal to the total duration of the AudioBuffer minus the offset parameter. Thus if neither offset nor duration are specified then the implied duration is the total duration of the AudioBuffer. A TypeError exception MUST be thrown if duration is negative.

Return type: void

stop
Schedules a sound to stop playback at an exact time.
Parameters:

when (double, optional, defaults to 0): The when parameter describes at what time (in seconds) the sound should stop playing. It is in the same time coordinate system as the AudioContext's currentTime attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will stop playing immediately. A TypeError exception MUST be thrown if when is negative. If stop is called again after it has already been called, the last invocation will be the only one applied; stop times set by previous calls will not be applied, unless the buffer has already stopped prior to any subsequent calls. If the buffer has already stopped, further calls to stop will have no effect. If a stop time is reached prior to the scheduled start time, the sound will not play.

Return type: void

Both playbackRate and detune are k-rate parameters and are used together to determine a computedPlaybackRate value:

computedPlaybackRate(t) = playbackRate(t) * pow(2, detune(t) / 1200)

The computedPlaybackRate is the effective speed at which the AudioBuffer of this AudioBufferSourceNode MUST be played.

This MUST be implemented by resampling the input data using a resampling ratio of 1 / computedPlaybackRate, hence changing both the pitch
and speed of the audio.
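For example (non-normative), raising detune by 700 cents plays the buffer roughly a perfect fifth higher and correspondingly faster (an AudioContext named context and a decoded AudioBuffer named buffer are assumed):

var source = context.createBufferSource();
source.buffer = buffer;
source.playbackRate.value = 1;   // unchanged
source.detune.value = 700;       // computedPlaybackRate = 1 * 2^(700/1200), about 1.498
source.connect(context.destination);
source.start();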

2.10.3 Looping

If the loop attribute is true when start() is called, then playback will continue indefinitely until stop() is called and the stop time is reached.
We'll call this "loop" mode. Playback always starts at the point in the buffer indicated by the offset argument of start(), and in loop mode
will continue playing until it reaches the actualLoopEnd position in the buffer (or the end of the buffer), at which point it will wrap back
around to the actualLoopStart position in the buffer, and continue playing according to this pattern.

In loop mode then the actual loop points are calculated as follows from the loopStart and loopEnd attributes:

if ((loopStart || loopEnd) && loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
actualLoopStart = loopStart;
actualLoopEnd = min(loopEnd, buffer.duration);
} else {
actualLoopStart = 0;
actualLoopEnd = buffer.duration;
}

Note that the default values for loopStart and loopEnd are both 0, which indicates that looping should occur from the very start to the very
end of the buffer.

Please note that as a low-level implementation detail, the AudioBuffer is at a specific sample-rate (usually the same as the AudioContext
sample-rate), and that the loop times (in seconds) must be converted to the appropriate sample-frame positions in the buffer according to
this sample-rate.

When scheduling the beginning and the end of playback using the start() and stop() methods, the resulting start or stop time MUST be
rounded to the nearest sample-frame in the sample rate of the AudioContext. That is, no sub-sample scheduling is possible.
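A non-normative sketch of looping only the middle of a buffer (an AudioContext named context and a decoded AudioBuffer named buffer are assumed):

var source = context.createBufferSource();
source.buffer = buffer;
source.loop = true;
source.loopStart = 0.5;                  // seconds
source.loopEnd = 1.5;                    // seconds
source.connect(context.destination);
source.start(0, 0.25);                   // begin 0.25 s into the buffer...
source.stop(context.currentTime + 10);   // ...then cycle 0.5 s - 1.5 s until stopped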

2.11 The MediaElementAudioSourceNode Interface


This interface represents an audio source from an audio or video element.

numberOfInputs : 0
numberOfOutputs : 1

The number of channels of the output corresponds to the number of channels of the media referenced by the HTMLMediaElement. Thus,
changes to the media element's .src attribute can change the number of channels output by this node. If the .src attribute is not set, then the
number of channels output will be one silent channel.

WebIDL

interface MediaElementAudioSourceNode : AudioNode {


};

A MediaElementAudioSourceNode is created given an HTMLMediaElement using the AudioContext createMediaElementSource() method.


The number of channels of the single output equals the number of channels of the audio referenced by the HTMLMediaElement passed in as
the argument to createMediaElementSource(), or is 1 if the HTMLMediaElement has no audio.

The HTMLMediaElement must behave in an identical fashion after the MediaElementAudioSourceNode has been created, except that the
rendered audio will no longer be heard directly, but instead will be heard as a consequence of the MediaElementAudioSourceNode being
connected through the routing graph. Thus pausing, seeking, volume, src attribute changes, and other aspects of the HTMLMediaElement
must behave as they normally would if not used with a MediaElementAudioSourceNode.

EXAMPLE 7
var mediaElement = document.getElementById('mediaElementID');
var sourceNode = context.createMediaElementSource(mediaElement);
sourceNode.connect(filterNode);

2.11.1 Security with MediaElementAudioSourceNode and cross-origin resources



HTMLMediaElement allows the playback of cross-origin resources. Because Web Audio allows one to inspect the content of the resource
(e.g. using a MediaElementAudioSourceNode and a ScriptProcessorNode to read the samples), information leakage can occur if scripts from
one origin inspect the content of a resource from another origin.

To prevent this, a MediaElementAudioSourceNode MUST output silence instead of the normal output of the HTMLMediaElement if it has been
created using an HTMLMediaElement for which the execution of the fetch algorithm labeled the resource as CORS-cross-origin.

2.12 The AudioWorker interface


An AudioWorker object is the main-thread representation of a worker "thread" that supports processing of audio in Javascript. This
AudioWorker object is a factory that is used to create multiple audio nodes of the same type; this enables easy sharing of code, program
data and global state across nodes. An AudioWorker can then be used to create instances of AudioWorkerNode, which is the main-thread
representation of an individual node processed by that AudioWorker.

These main thread objects cause the instantiation of a processing context in the audio thread. All audio processing by AudioWorkerNodes
runs in the audio processing thread. This has a few side effects that bear mentioning: blocking the audio worker's thread can cause glitches
in the audio, and if the audio thread is normally elevated in thread priority (to reduce glitching possibility), it must be demoted to normal
thread priority (in order to avoid escalating thread priority of user-supplied script code).

From inside an audio worker script, the Audio Worker factory is represented by an AudioWorkerGlobalScope object representing the node's
contextual information, and individual audio nodes created by the factory are represented by AudioWorkerNodeProcessor objects.

In addition, all AudioWorkerNodes that are created by the same AudioWorker share an AudioWorkerGlobalScope; this can allow them to share
context and data across nodes (for example, loading a single instance of a shared database used by the individual nodes, or sharing
context in order to implement oscillator synchronization).

WebIDL

interface AudioWorker : Worker {


void terminate ();
void postMessage (any message, optional sequence<Transferable> transfer);
readonly attribute AudioWorkerParamDescriptor[] parameters;
attribute EventHandler onmessage;
attribute EventHandler onloaded;
AudioWorkerNode createNode (int numberOfInputs, int numberOfOutputs);
AudioParam addParameter (DOMString name, float defaultValue);
void removeParameter (DOMString name);
};

2.12.1 Attributes

onloaded of type EventHandler


The onloaded handler is called after the script is successfully loaded and its global scope code is run to initialize the
AudioWorkerGlobalScope.

onmessage of type EventHandler


The onmessage handler is called whenever the AudioWorkerGlobalScope posts a message back to the main thread.

parameters of type array of AudioWorkerParamDescriptor, readonly


This array contains descriptors for each of the current parameters on nodes created by this AudioWorker. This enables users of
the AudioWorker to easily iterate over the AudioParam names and default values.

2.12.2 Methods

addParameter

Causes a correspondingly-named read-only AudioParam to be present on any AudioWorkerNodes created (previously or


subsequently) by this AudioWorker, and a correspondingly-named read-only Float32Array to be present on the parameters object
exposed on the AudioProcessEvent on subsequent audio processing events for such nodes. The AudioParam may immediately
have its scheduling methods called, its .value set, or AudioNodes connected to it.

The name parameter is the name used for the read-only AudioParam added to the AudioWorkerNode, and the name used for the
read-only Float32Array that will be present on the parameters object exposed on subsequent AudioProcessEvents.

The defaultValue parameter is the default value for the AudioParam's value attribute, as well as therefore the default value that will
appear in the Float32Array in the worker script (if no other parameter changes or connections affect the value).

Parameter Type Nullable Optional Description


name DOMString
defaultValue float
Return type: AudioParam

createNode
Creates a node instance in the audio worker.
Parameter Type Nullable Optional Description
numberOfInputs int
numberOfOutputs int
Return type: AudioWorkerNode

postMessage
postMessage may be called to send a message to the AudioWorkerGlobalScope, similar to the algorithm defined by [Workers].
Parameter Type Nullable Optional Description
message any
transfer sequence<Transferable>
Return type: void

removeParameter

Removes a previously-added parameter named name from all AudioWorkerNodes associated with this AudioWorker and its
AudioWorkerGlobalScope. This will also remove the correspondingly-named read-only AudioParam from the AudioWorkerNode, and
will remove the correspondingly-named read-only Float32Arrays from the AudioProcessEvent's parameters member on
subsequent audio processing events. A NotFoundError exception must be thrown if no parameter with that name exists on this
AudioWorker.

The name parameter identifies the parameter to be removed.

Parameter Type Nullable Optional Description


name DOMString
Return type: void

terminate
The terminate() method, when invoked, must cause the cessation of any AudioProcessEvents being dispatched inside the
AudioWorker's associated AudioWorkerGlobalScope. It will also cause all associated AudioWorkerNodes to cease processing, and
will cause the destruction of the worker's context. In practical terms, this means all nodes created from this AudioWorker will
disconnect themselves, and will cease performing any useful functions.
No parameters.
Return type: void

Note that AudioWorkerNode objects will also have read-only AudioParam objects for each named parameter added via the addParameter
method. As this is dynamic, it cannot be captured in IDL.

As the AudioWorker interface inherits from Worker, AudioWorkers must implement the Worker interface for communication with the audio
worker script.

2.12.3 The AudioWorkerNode Interface

This interface represents an AudioNode which interacts with a Worker thread to generate, process, or analyse audio directly. The user
creates a separate audio processing worker script, which is hosted inside the AudioWorkerGlobalScope and runs inside the audio
processing thread, rather than the main UI thread. The AudioWorkerNode represents the processing node in the main processing thread's
node graph; the AudioWorkerGlobalScope represents the context in which the user's audio processing script is run.

Note that if the Web Audio implementation normally runs audio processing at higher than normal thread priority, utilizing
AudioWorkerNodes may cause demotion of the priority of the audio thread (since user scripts cannot be run with higher than normal
priority).

numberOfInputs : variable
numberOfOutputs : variable

channelCount = numberOfInputChannels;
channelCountMode = "explicit";
channelInterpretation = "speakers";

The number of input and output channels specified in the createAudioWorkerNode() call determines the initial number of input and output
channels (and the number of channels present for each input and output in the AudioBuffers passed to the AudioProcess event handler
inside the AudioWorkerGlobalScope). It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.

Example usage:
var bitcrusherFactory = context.createAudioWorker( "bitcrusher.js" );
var bitcrusherNode = bitcrusherFactory.createNode();

WebIDL

interface AudioWorkerNode : AudioNode {


void postMessage (any message, optional sequence<Transferable> transfer);
attribute EventHandler onmessage;
};

2.12.3.1 Attributes

onmessage of type EventHandler


The onmessage handler is called whenever the AudioWorkerNodeProcessor posts a node message back to the main thread.
2.12.3.2 Methods

postMessage
postMessage may be called to send a message to the AudioWorkerNodeProcessor, via the algorithm defined by the Worker
specification. Note that this is different from calling postMessage() on the AudioWorker itself, as that would affect the
AudioWorkerGlobalScope.
Parameter Type Nullable Optional Description
message any
transfer sequence<Transferable>
Return type: void

Note that AudioWorkerNode objects will also have read-only AudioParam objects for each named parameter added via the addParameter
method on the AudioWorker. As this is dynamic, it cannot be captured here in IDL.

2.12.4 The AudioWorkerParamDescriptor Interface

This interface represents the description of an AudioWorkerNode AudioParam - in short, its name and default value. This enables easy
iteration over the AudioParams from an AudioWorkerGlobalScope (which does not have an instance of those AudioParams).

WebIDL

interface AudioWorkerParamDescriptor {
readonly attribute DOMString name;
readonly attribute float defaultValue;
};

2.12.4.1 Attributes

defaultValue of type float, readonly


The default value of the AudioParam.

name of type DOMString, readonly


The name of the AudioParam.

2.12.5 The AudioWorkerGlobalScope Interface

This interface is a DedicatedWorkerGlobalScope-derived object representing the context in which an audio processing script is run; it is
designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a Worker thread, with shared context
between multiple instances of audio nodes. This facilitates nodes that may have substantial shared data, e.g. a convolution node.

The AudioWorkerGlobalScope handles audioprocess events, dispatched synchronously to process audio frame blocks for nodes created
by this worker. audioprocess events are only dispatched for nodes that have at least one input or one output connected. TODO: should
this be true?

WebIDL

interface AudioWorkerGlobalScope : DedicatedWorkerGlobalScope {


readonly attribute float sampleRate;
AudioParam addParameter (DOMString name, float defaultValue);
void removeParameter (DOMString name);
attribute EventHandler onaudioprocess;
attribute EventHandler onnodecreate;
readonly attribute AudioWorkerParamDescriptor[] parameters;
};

2.12.5.1 Attributes

onaudioprocess of type EventHandler


A property used to set the EventHandler (described in [HTML]) for the audioprocess event that is dispatched to
AudioWorkerGlobalScope to process audio while the associated nodes are connected (to at least one input or output). An event of
type AudioProcessEvent will be dispatched to the event handler.

onnodecreate of type EventHandler


A property used to set the EventHandler (described in [HTML]) for the nodecreate event that is dispatched to
AudioWorkerGlobalScope when a new AudioWorkerNode has been created. This enables the scope to do node-level initialization of
the AudioNodeProcessor object. An event of type AudioWorkerNodeCreationEvent will be dispatched to the event handler.

parameters of type array of AudioWorkerParamDescriptor, readonly


This array contains descriptors for each of the current parameters on nodes created in this AudioWorkerGlobalScope. This
enables audio worker implementations to easily iterate over the AudioParam names and default values.

sampleRate of type float, readonly


The sample rate of the host AudioContext (since inside the Worker scope, the user will not have direct access to the AudioContext).

2.12.5.2 Methods

addParameter

Causes a correspondingly-named read-only AudioParam to be present on previously-created and subsequently-created


AudioWorkerNodes created by this factory, and a correspondingly-named read-only Float32Array to be present on the parameters

object exposed on the AudioProcessEvent on subsequent audio processing events for nodes created from this factory.

It is purposeful that AudioParams can be added (or removed) from an Audio Worker from either the main thread or the worker
script; this enables immediate creation of worker-based nodes and their prototypes, but also enables packaging an entire worker
including its AudioParam configuration into a single script. It is recommended that nodes be used only after the
AudioWorkerNode's oninitialized has been called, in order to allow the worker script to configure the node.

The name parameter is the name used for the read-only AudioParam added to the AudioWorkerNode, and the name used for the
read-only Float32Array that will be present on the parameters object exposed on subsequent AudioProcessEvents.

The defaultValue parameter is the default value for the AudioParam's value attribute, as well as therefore the default value that
will appear in the Float32Array in the worker script (if no other parameter changes or connections affect the value).

Parameter Type Nullable Optional Description


name DOMString
defaultValue float
Return type: AudioParam

removeParameter

Removes a previously-added parameter named name from nodes processed by this factory. This will also remove the
correspondingly-named read-only AudioParam from the AudioWorkerNode, and will remove the correspondingly-named read-only
Float32Array from the AudioProcessEvent's parameters member on subsequent audio processing events. A NotFoundError
exception MUST be thrown if no parameter with that name exists on this node.

The name parameter identifies the parameter to be removed.

Parameter Type Nullable Optional Description


name DOMString
Return type: void

2.12.6 The AudioWorkerNodeProcessor Interface

An object supporting this interface represents each individual node instantiated in an AudioWorkerGlobalScope; it is designed to manage the
data for an individual node. Shared context between multiple instances of audio nodes is accessible from the AudioWorkerGlobalScope; this
object represents the individual node and can be used for data storage or main-thread communication.

WebIDL

interface AudioWorkerNodeProcessor : EventTarget {


void postMessage (any message, optional sequence<Transferable> transfer);
attribute EventHandler onmessage;
};

2.12.6.1 Attributes

onmessage of type EventHandler


The onmessage handler is called whenever the AudioWorkerNode posts a node message back to the audio thread.

2.12.6.2 Methods

postMessage
postMessage may be called to send a message to the AudioWorkerNode, via the algorithm defined by the Worker specification.
Note that this is different from calling postMessage() on the AudioWorker itself, as that would dispatch to the
AudioWorkerGlobalScope.
Parameter  Type                    Nullable  Optional  Description
message    any                     No        No        The message to send.
transfer   sequence<Transferable>  No        Yes       Objects to transfer rather than copy.
Return type: void
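As a non-normative sketch (the message payload is purely illustrative), a processor can report analysis results back to the main thread from within its processing handler:

// In the worker script: post a per-node message back to the main thread.
onaudioprocess = function (e) {
  // ... analyse e.inputs here ...
  e.node.postMessage({ "peak": 0.25 }); // arrives at the AudioWorkerNode's onmessage
};

// On the main thread:
//   node.onmessage = function (event) { console.log(event.data.peak); };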

2.12.7 Audio Worker Examples

This section is non-normative.

2.12.7.1 A Bitcrusher Node

Bitcrushing is a mechanism by which the audio quality of an audio stream is reduced - both by quantizing the value (simulating lower bit-
depth in integer-based audio), and by quantizing in time (simulating a lower digital sample rate). This example shows how to use
AudioParams (in this case, treated as a-rate) inside an AudioWorker.

Main file javascript

var bitcrusherFactory = null;


audioContext.createAudioWorker("bitcrusher_worker.js").then( function(factory)
{ // cache 'factory' in case you want to create more nodes!
bitcrusherFactory = factory;
var bitcrusherNode = factory.createNode();
bitcrusherNode.bits.setValueAtTime(8,0);
bitcrusherNode.connect(output);
input.connect(bitcrusherNode);
}
);

bitcrusher_worker.js

// Custom parameter - number of bits to crush down to - default 8


this.addParameter( "bits", 8 );

// Custom parameter - frequency reduction, 0-1, default 0.5


this.addParameter( "frequencyReduction", 0.5 );

onnodecreate=function(e) {
e.node.phaser = 0;
e.node.lastDataValue = 0;
}

onaudioprocess = function (e) {
  for (var channel = 0; channel < e.inputs[0].length; channel++) {
    var inputBuffer = e.inputs[0][channel];
    var outputBuffer = e.outputs[0][channel];
    var bufferLength = inputBuffer.length;
    var bitsArray = e.parameters.bits;
    var frequencyReductionArray = e.parameters.frequencyReduction;

    for (var i = 0; i < bufferLength; i++) {
      var bits = bitsArray ? bitsArray[i] : 8;
      var frequencyReduction = frequencyReductionArray ? frequencyReductionArray[i] : 0.5;

      var step = Math.pow(1 / 2, bits);

      e.node.phaser += frequencyReduction;
      if (e.node.phaser >= 1.0) {
        e.node.phaser -= 1.0;
        e.node.lastDataValue = step * Math.floor(inputBuffer[i] / step + 0.5);
      }
      outputBuffer[i] = e.node.lastDataValue;
    }
  }
};

2.12.7.2 A Volume Meter and Clip Detector

Another common need is a clip-detecting volume meter. This example shows how to communicate basic parameters (that do not need
AudioParam scheduling) across to a Worker, as well as communicating data back to the main thread. This node does not use any output.

Main file javascript

function setupNodeMessaging(node) {
  // This handles communication back from the volume meter
  node.onmessage = function (event) {
    if (event.data instanceof Object) {
      if (event.data.hasOwnProperty("clip"))
        this.clip = event.data.clip;
      if (event.data.hasOwnProperty("volume"))
        this.volume = event.data.volume;
    }
  }

  // Set up some default configuration parameters
  node.postMessage(
    { "smoothing": 0.9,  // Smoothing parameter
      "clipLevel": 0.9,  // Level to consider "clipping"
      "clipLag": 750,    // How long to keep "clipping" lit up after clip (ms)
      "updating": 100    // How frequently to update volume and clip param (ms)
    });

  // Set up volume and clip attributes. These will be updated by our onmessage.
  node.volume = 0;
  node.clip = false;
}

var vuFactory = null;
var vuNode = null;

audioContext.createAudioWorker("vu_meter_worker.js").then( function(factory)
  { // cache 'factory' in case you want to create more nodes!
    vuFactory = factory;
    vuNode = factory.createNode([1], []); // we don't need an output, and let's force to mono
    setupNodeMessaging(vuNode);
  }
);

window.requestAnimationFrame( function(timestamp) {
  if (vuNode) {
    // Draw a bar based on vuNode.volume and vuNode.clip
  }
});

vu_meter_worker.js

// This node needs no AudioParams; configuration arrives via postMessage
// (handled in onnodecreate below).

onnodecreate = function (e) {
  e.node.timeToNextUpdate = 0.1 * sampleRate;
  e.node.smoothing = 0.5;
  e.node.clipLevel = 0.95;
  e.node.clipLag = 1;
  e.node.updatingInterval = 150;
  e.node.volume = 0;
  e.node.clipping = false;
  e.node.unsentClip = false;
  // This just handles setting attribute values
  e.node.onmessage = function (event) {
    if (event.data instanceof Object) {
      if (event.data.hasOwnProperty("smoothing"))
        this.smoothing = event.data.smoothing;
      if (event.data.hasOwnProperty("clipLevel"))
        this.clipLevel = event.data.clipLevel;
      if (event.data.hasOwnProperty("clipLag"))
        this.clipLag = event.data.clipLag / 1000; // convert to seconds
      if (event.data.hasOwnProperty("updating"))
        this.updatingInterval = event.data.updating * sampleRate / 1000; // convert to samples
    }
  };
}

onaudioprocess = function (event) {
  var buf = event.inputs[0][0]; // Node forces mono
  var bufLength = buf.length;
  var sum = 0;
  var x;

  // Do a root-mean-square on the samples: sum up the squares...
  for (var i = 0; i < bufLength; i++) {
    x = buf[i];
    if (Math.abs(x) >= event.node.clipLevel) {
      event.node.clipping = true;
      event.node.unsentClip = true; // Make sure, for every clip, we send a message.
      event.node.lastClip = event.playbackTime + (i / sampleRate);
    }
    sum += x * x;
  }

  // ... then take the square root of the sum.
  var rms = Math.sqrt(sum / bufLength);

  // Now smooth this out with the smoothing factor applied
  // to the previous sample - take the max here because we
  // want "fast attack, slow release."
  event.node.volume = Math.max(rms, event.node.volume * event.node.smoothing);
  if (event.node.clipping && (!event.node.unsentClip) &&
      (event.playbackTime > (event.node.lastClip + event.node.clipLag)))
    event.node.clipping = false;

  // How long has it been since our last update?
  event.node.timeToNextUpdate -= bufLength; // count down in sample-frames
  if (event.node.timeToNextUpdate < 0) {
    event.node.timeToNextUpdate = event.node.updatingInterval;
    event.node.postMessage(
      { "volume": event.node.volume,
        "clip": event.node.clipping });
    event.node.unsentClip = false;
  }
};

2.12.7.3 Reimplementing ChannelMerger

This worker shows how to merge multiple mono inputs into a single multi-channel output.

Main file javascript

var mergerFactory = null;

audioContext.createAudioWorker("merger_worker.js").then( function(factory)
  { // cache 'factory' in case you want to create more nodes!
    mergerFactory = factory;
    // Six mono inputs, one 6-channel output
    var merger6channelNode = factory.createNode( [1,1,1,1,1,1], [6] );
    // connect inputs and outputs here
  }
);

merger_worker.js

onaudioprocess = function (e) {
  // Copy channel 0 of each input into the corresponding channel of output 0.
  for (var input = 0; input < e.inputs.length; input++)
    e.outputs[0][input].set(e.inputs[input][0]);
};

2.13 The ScriptProcessorNode Interface - DEPRECATED


This section is non-normative.

This interface is an AudioNode which can generate, process, or analyse audio directly using JavaScript. This node type is deprecated, to be
replaced by the AudioWorkerNode; this text is only here for informative purposes until implementations remove this node type.

numberOfInputs : 1
numberOfOutputs : 1

channelCount = numberOfInputChannels;
channelCountMode = "explicit";
channelInterpretation = "speakers";

The channelCountMode cannot be changed from "explicit" and the channelCount cannot be changed. An attempt to change either of these
MUST throw an InvalidStateError exception.

The ScriptProcessorNode is constructed with a bufferSize which must be one of the following values: 256, 512, 1024, 2048, 4096, 8192,
16384. This value controls how frequently the audioprocess event is dispatched and how many sample-frames need to be processed each
call. audioprocess events are only dispatched if the ScriptProcessorNode has at least one input or one output connected. Lower numbers
for bufferSize will result in a lower (better) latency. Higher numbers will be necessary to avoid audio breakup and glitches. This value will be
picked by the implementation if the bufferSize argument to createScriptProcessor is not passed in, or is set to 0.

numberOfInputChannels and numberOfOutputChannels determine the number of input and output channels. It is invalid for both
numberOfInputChannels and numberOfOutputChannels to be zero.

var node = context.createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels);
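For example (non-normative), passing 0 for bufferSize leaves the choice to the implementation:

// Let the implementation pick an appropriate buffer size; one mono input,
// one mono output.
var node = context.createScriptProcessor(0, 1, 1);
console.log(node.bufferSize); // the implementation-chosen value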

WebIDL

interface ScriptProcessorNode : AudioNode {


attribute EventHandler onaudioprocess;
readonly attribute long bufferSize;
};

2.13.1 Attributes

bufferSize of type long, readonly


The size of the buffer (in sample-frames) which needs to be processed each time onaudioprocess is called. Legal values are (256,
512, 1024, 2048, 4096, 8192, 16384).

onaudioprocess of type EventHandler


A property used to set the EventHandler (described in [HTML]) for the audioprocess event that is dispatched to
ScriptProcessorNode node types. An event of type AudioProcessingEvent will be dispatched to the event handler.

2.14 The AudioWorkerNodeCreationEvent Interface


This is an Event object which is dispatched to AudioWorkerGlobalScope objects when a new node instance is created. This allows
AudioWorkers to initialize any node-local data (e.g. allocating a delay or initializing local variables).

WebIDL

interface AudioWorkerNodeCreationEvent : Event {


readonly attribute AudioWorkerNodeProcessor node;
readonly attribute Array inputs;
readonly attribute Array outputs;
};

2.14.1 Attributes

inputs of type Array, readonly


An array of channelCounts for the inputs.

node of type AudioWorkerNodeProcessor, readonly


The new node being created. Any node-local data storage (e.g., the buffer for a delay node) should be created on this object.

outputs of type Array, readonly


An array of channelCounts for the outputs.
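For example (non-normative), an onnodecreate handler in the worker script might use these channel counts to allocate per-node storage, such as a one-block scratch buffer per channel of the first input:

onnodecreate = function (e) {
  // e.inputs[0] is the channel count of the new node's first input.
  e.node.scratch = [];
  for (var ch = 0; ch < e.inputs[0]; ch++)
    e.node.scratch.push(new Float32Array(128)); // one render block per channel
};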

2.15 The AudioProcessEvent Interface


This is an Event object which is dispatched to AudioWorkerGlobalScope objects to perform processing.

The event handler processes audio from the input (if any) by accessing the audio data from the inputs attribute. The audio data
which is the result of the processing (or the synthesized data if there are no inputs) is then placed into the outputs attribute.

WebIDL

interface AudioProcessEvent : Event {


readonly attribute double playbackTime;
readonly attribute AudioWorkerNodeProcessor node;
readonly attribute Float32Array[][] inputs;
readonly attribute Float32Array[][] outputs;
readonly attribute object parameters;
};

2.15.1 Attributes

inputs of type array of array of Float32Array, readonly

A readonly Array of Arrays of Float32Arrays. The top-level Array is organized by input; each input may contain multiple channels; each channel contains a Float32Array of sample data. The initial size of the channel array will be determined by the number of channels specified for that input in the createAudioWorkerNode() method. However, an onaudioprocess handler may alter this number of channels in the input dynamically, either by adding a Float32Array of blocksize length (128) or by reducing the Array (by reducing the Array.length or by using Array.pop() or Array.slice()). The event object, the Array and the Float32Arrays will be reused by the processing system, in order to minimize memory churn.

Any reordering performed on the Array for an input will not reorganize the connections to the channels for subsequent events.

node of type AudioWorkerNodeProcessor, readonly

The node to which this processing event is being dispatched. Any node-local data storage (e.g., the buffer for a delay node)
should be maintained on this object.

outputs of type array of array of Float32Array, readonly

A readonly Array of Arrays of Float32Arrays. The top-level Array is organized by output; each output may contain multiple channels; each channel contains a Float32Array of sample data. The initial size of the channel array will be determined by the number of channels specified for that output in the createAudioWorkerNode() method. However, an onaudioprocess handler may alter this number of channels in the output dynamically, either by adding a Float32Array of blocksize length (128) or by reducing the Array (by reducing the Array.length or by using Array.pop() or Array.slice()). The event object, the Array and the Float32Arrays will be reused by the processing system, in order to minimize memory churn.

Any reordering performed on the Array for an output will not reorganize the connections to the channels for subsequent events.

parameters of type object, readonly


This object attribute exposes a correspondingly-named read-only Float32Array for each parameter that has been added via
addParameter. As this is dynamic, this cannot be captured in IDL. The length of this Float32Array will correspond to the length of
the inputBuffer. The contents of this Float32Array will be the values to be used for the AudioParam at the corresponding points in
time. It is expected that this Float32Array will be reused by the audio engine.

playbackTime of type double, readonly


The starting time of the block of audio being processed in response to this event. By definition this will be equal to the value of
BaseAudioContext's currentTime attribute that was most recently observable in the control thread.
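As a non-normative sketch of the dynamic channel behaviour described for inputs and outputs above, a handler could widen its first output from mono to stereo by pushing one additional block-sized Float32Array:

onaudioprocess = function (e) {
  // Widen the first output to stereo if it currently has a single channel.
  if (e.outputs[0].length === 1)
    e.outputs[0].push(new Float32Array(128)); // one block of sample-frames
  // ... fill e.outputs[0][0] and e.outputs[0][1] here ...
};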

2.16 The AudioProcessingEvent Interface - DEPRECATED


This section is non-normative.

This is an Event object which is dispatched to ScriptProcessorNode nodes. It will be removed when the ScriptProcessorNode is removed, as
the replacement AudioWorker uses the AudioProcessEvent.

The event handler processes audio from the input (if any) by accessing the audio data from the inputBuffer attribute. The audio data which
is the result of the processing (or the synthesized data if there are no inputs) is then placed into the outputBuffer.

WebIDL

interface AudioProcessingEvent : Event {


readonly attribute double playbackTime;
readonly attribute AudioBuffer inputBuffer;
readonly attribute AudioBuffer outputBuffer;
};

2.16.1 Attributes

inputBuffer of type AudioBuffer, readonly


An AudioBuffer containing the input audio data. It will have a number of channels equal to the numberOfInputChannels parameter
of the createScriptProcessor() method. This AudioBuffer is only valid while in the scope of the onaudioprocess function. Its values
will be meaningless outside of this scope.

outputBuffer of type AudioBuffer, readonly


An AudioBuffer where the output audio data should be written. It will have a number of channels equal to the
numberOfOutputChannels parameter of the createScriptProcessor() method. Script code within the scope of the onaudioprocess
function is expected to modify the Float32Array arrays representing channel data in this AudioBuffer. Any script modifications to
this AudioBuffer outside of this scope will not produce any audible effects.

playbackTime of type double, readonly


The time when the audio will be played in the same time coordinate system as the AudioContext's currentTime.
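A non-normative sketch of a pass-through handler using these attributes (deprecated API; 'source' stands for an existing AudioNode providing input and is an assumption of this sketch):

var node = context.createScriptProcessor(4096, 1, 1);
node.onaudioprocess = function (e) {
  var input = e.inputBuffer.getChannelData(0);
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < input.length; i++)
    output[i] = input[i]; // copy while the buffers are still in scope
};
source.connect(node);           // 'source' is assumed to exist
node.connect(context.destination);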

2.17 The PannerNode Interface


This interface represents a processing node which positions / spatializes an incoming audio stream in three-dimensional space. The
spatialization is in relation to the AudioContext's AudioListener (listener attribute).

numberOfInputs : 1
numberOfOutputs : 1

channelCount = 2;
channelCountMode = "clamped-max";
channelInterpretation = "speakers";

The input of this node is either mono (1 channel) or stereo (2 channels) and cannot be increased. Connections from nodes with fewer or more channels will be up-mixed or down-mixed appropriately, but a NotSupportedError MUST be thrown if an attempt is made to set channelCount to a value greater than 2 or if channelCountMode is set to "max".
