GStreamer Plugin Writer's Guide (1.0.8)
This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0 or later (the latest version
is presently available at http://www.opencontent.org/openpub/).
Table of Contents
I. Introduction
1. Preface
1.1. What is GStreamer?
1.2. Who Should Read This Guide?
1.3. Preliminary Reading
1.4. Structure of This Guide
2. Foundations
2.1. Elements and Plugins
2.2. Pads
2.3. GstMiniObject, Buffers and Events
2.4. Media types and Properties
II. Building a Plugin
3. Constructing the Boilerplate
3.1. Getting the GStreamer Plugin Templates
3.2. Using the Project Stamp
3.3. Examining the Basic Code
3.4. Element metadata
3.5. GstStaticPadTemplate
3.6. Constructor Functions
3.7. The plugin_init function
4. Specifying the pads
5. The chain function
6. The event function
7. The query function
8. What are states?
8.1. Managing filter state
9. Adding Properties
10. Signals
11. Building a Test Application
III. Advanced Filter Concepts
12. Request and Sometimes pads
12.1. Sometimes pads
12.2. Request pads
13. Different scheduling modes
13.1. The pad activation stage
13.2. Pads driving the pipeline
13.3. Providing random access
14. Caps negotiation
14.1. Caps negotiation basics
14.2. Caps negotiation use cases
14.3. Upstream caps (re)negotiation
14.4. Implementing a CAPS query function
14.5. Pull-mode Caps negotiation
15. Memory allocation
15.1. GstMemory
V. Appendices
27. Things to check when writing an element
27.1. About states
27.2. Debugging
27.3. Querying, events and the like
27.4. Testing your element
28. Porting 0.8 plug-ins to 0.10
28.1. List of changes
29. Porting 0.10 plug-ins to 1.0
30. GStreamer licensing
30.1. How to license the code you write for GStreamer
List of Tables
2-1. Table of Example Types
16-1. Table of Audio Types
16-2. Table of Video Types
16-3. Table of Container Types
16-4. Table of Subtitle Types
16-5. Table of Other Types
I. Introduction
GStreamer is an extremely powerful and versatile framework for creating streaming media applications.
Many of the virtues of the GStreamer framework come from its modularity: GStreamer can seamlessly
incorporate new plugin modules. But because modularity and power often come at a cost of greater
complexity (consider, for example, CORBA (http://www.omg.org/)), writing new plugins is not always
easy.
This guide is intended to help you understand the GStreamer framework (version 1.0.8) so you can
develop new plugins to extend the existing functionality. The guide addresses most issues by following
the development of an example plugin - an audio filter plugin - written in C. However, the later parts of
the guide also present some issues involved in writing other types of plugins, and the end of the guide
describes some of the Python bindings for GStreamer.
Chapter 1. Preface
1.1. What is GStreamer?
GStreamer is a framework for creating streaming media applications. The fundamental design comes
from the video pipeline at Oregon Graduate Institute, as well as some ideas from DirectShow.
GStreamer's development framework makes it possible to write any type of streaming multimedia
application. The GStreamer framework is designed to make it easy to write applications that handle
audio or video or both. It isn't restricted to audio and video, and can process any kind of data flow. The
pipeline design is made to have little overhead above what the applied filters induce. This makes
GStreamer a good framework for designing even high-end audio applications which put high demands
on latency or performance.
One of the most obvious uses of GStreamer is using it to build a media player. GStreamer already
includes components for building a media player that can support a very wide variety of formats,
including MP3, Ogg/Vorbis, MPEG-1/2, AVI, Quicktime, mod, and more. GStreamer, however, is much
more than just another media player. Its main advantages are that the pluggable components can be
mixed and matched into arbitrary pipelines so that it's possible to write a full-fledged video or audio
editing application.
The framework is based on plugins that will provide the various codec and other functionality. The
plugins can be linked and arranged in a pipeline. This pipeline defines the flow of the data.
The GStreamer core function is to provide a framework for plugins, data flow, synchronization and
media type handling/negotiation. It also provides an API to write applications using the various plugins.
1.2. Who Should Read This Guide?
Several groups of people may find this guide relevant:
Anyone who wants to add support for new ways of processing data in GStreamer. For example, a
person in this group might want to create a new data format converter, a new visualization tool, or a
new decoder or encoder.
Anyone who wants to add support for new input and output devices. For example, people in this group
might want to add the ability to write to a new video output system or read data from a digital camera
or special microphone.
Anyone who wants to extend GStreamer in any way. You need to have an understanding of how the
plugin system works before you can understand the constraints that the plugin system places on the
rest of the code. Also, you might be surprised after reading this at how much can be done with plugins.
This guide is not relevant to you if you only want to use the existing functionality of GStreamer, or if you
just want to use an application that uses GStreamer. If you are only interested in using existing plugins to
write a new application - and there are quite a lot of plugins already - you might want to check the
GStreamer Application Development Manual. If you are just trying to get help with a GStreamer
application, then you should check with the user manual for that particular application.
Building a Plugin - Introduction to the structure of a plugin, using an example audio filter for
illustration.
This part covers all the basic steps you generally need to perform to build a plugin, such as registering
the element with GStreamer and setting up the basics so it can receive data from and send data to
neighbour elements. The discussion begins by giving examples of generating the basic structures and
registering an element in Constructing the Boilerplate. Then, you will learn how to write the code to
get a basic filter plugin working in Chapter 4, Chapter 5 and Chapter 8.
After that, we will show some of the GObject concepts on how to make an element configurable for
applications and how to do application-element interaction in Adding Properties and Chapter 10. Next,
you will learn to build a quick test application to test all that you've just learned in Chapter 11. We will
just touch upon basics here. For full-blown application development, you should look at the
Application Development Manual
(http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html).
written using a base-class (Pre-made base classes), and later also goes into writing special types of
elements in Writing a Demuxer or Parser, Writing a N-to-1 Element or Muxer and Writing a Manager.
The remainder of this introductory part of the guide presents a short overview of the basic concepts
involved in GStreamer plugin development. Topics covered include Elements and Plugins, Pads,
Data, Buffers and Events and Types and Properties. If you are already familiar with this information, you
can use this short overview to refresh your memory, or you can skip to Building a Plugin.
As you can see, there is a lot to learn, so let's get started!
Creating compound and complex elements by extending from a GstBin. This will allow you to create
plugins that have other plugins embedded in them.
Adding new media types to the registry along with typedetect functions. This will allow your plugin to
operate on a completely new media type.
Chapter 2. Foundations
This chapter of the guide introduces the basic concepts of GStreamer. Understanding these concepts will
help you grok the issues involved in extending GStreamer. Many of these concepts are explained in
greater detail in the GStreamer Application Development Manual; the basic concepts presented here
serve mainly to refresh your memory.
2.2. Pads
Pads are used to negotiate links and data flow between elements in GStreamer. A pad can be viewed as a
place or port on an element where links may be made with other elements, and through which data
can flow to or from those elements. Pads have specific data handling capabilities: A pad can restrict the
type of data that flows through it. Links are only allowed between two pads when the allowed data types
of the two pads are compatible.
An analogy may be helpful here. A pad is similar to a plug or jack on a physical device. Consider, for
example, a home theater system consisting of an amplifier, a DVD player, and a (silent) video projector.
Linking the DVD player to the amplifier is allowed because both devices have audio jacks, and linking
the projector to the DVD player is allowed because both devices have compatible video jacks. Links
between the projector and the amplifier may not be made because the projector and amplifier have
different types of jacks. Pads in GStreamer serve the same purpose as the jacks in the home theater
system.
For the most part, all data in GStreamer flows one way through a link between elements. Data flows out
of one element through one or more source pads, and elements accept incoming data through one or
more sink pads. Source and sink elements have only source and sink pads, respectively.
See the GStreamer Library Reference for the current implementation details of a GstPad
(../../gstreamer/html/GstPad.html).
2.3. GstMiniObject, Buffers and Events
An exact type indicating what type of data (event, buffer, ...) this GstMiniObject is.
A reference count indicating the number of elements currently holding a reference to the miniobject.
When the reference count falls to zero, the miniobject will be disposed, and its memory will be freed
in some sense (see below for more details).
For data transport, there are two types of GstMiniObject defined: events (control) and buffers (content).
Buffers may contain any sort of data that the two linked pads know how to handle. Normally, a buffer
contains a chunk of some sort of audio or video data that flows from one element to another.
Buffers also contain metadata describing the buffer's contents. Some of the important types of metadata
are:
Pointers to one or more GstMemory objects. GstMemory objects are refcounted objects that
encapsulate a region of memory.
A timestamp indicating the preferred display timestamp of the content in the buffer.
Events contain information on the state of the stream flowing between the two linked pads. Events will
only be sent if the element explicitly supports them, else the core will (try to) handle the events
automatically. Events are used to indicate, for example, a media type, the end of a media stream or that
the cache should be flushed.
Events may contain several of the following items:
The other contents of the event depend on the specific event type.
Events will be discussed extensively in Chapter 17. Until then, the only event that will be used is the
EOS event, which is used to indicate the end-of-stream (usually end-of-file).
See the GStreamer Library Reference for the current implementation details of a GstMiniObject
(../../gstreamer/html/gstreamer-GstMiniObject.html), GstBuffer
(../../gstreamer/html/gstreamer-GstBuffer.html) and GstEvent
(../../gstreamer/html/gstreamer-GstEvent.html).
Many sink elements have accelerated methods for copying data to hardware, or have direct access to
hardware. It is common for these elements to be able to create a GstBufferPool or GstAllocator for their
upstream peers. One such example is ximagesink. It creates buffers that contain XImages. Thus, when an
upstream peer copies data into the buffer, it is copying directly into the XImage, enabling ximagesink to
draw the image directly to the screen instead of having to copy data into an XImage first.
Filter elements often have the opportunity to either work on a buffer in-place, or work while copying
from a source buffer to a destination buffer. It is optimal to implement both algorithms, since the
GStreamer framework can choose the fastest algorithm as appropriate. Naturally, this only makes sense
for strict filters -- elements that have exactly the same format on source and sink pads.
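The in-place case can be sketched as follows. This is an illustrative fragment, not the guide's literal code, and it assumes a GstMyFilter element like the one developed later in this guide (stand-in type definitions are included so the fragment is self-contained):

```c
#include <gst/gst.h>

/* Stand-ins for the element type the guide builds up elsewhere. */
typedef struct {
  GstElement element;
  GstPad *sinkpad, *srcpad;
} GstMyFilter;
#define GST_MY_FILTER(obj) ((GstMyFilter *) (obj))

/* Work in-place when possible: gst_buffer_make_writable () takes
 * ownership of buf and returns a writable buffer, copying it only if
 * someone else still holds a reference to it. */
static GstFlowReturn
gst_my_filter_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstMyFilter *filter = GST_MY_FILTER (parent);
  GstMapInfo info;

  buf = gst_buffer_make_writable (buf);

  if (gst_buffer_map (buf, &info, GST_MAP_READWRITE)) {
    /* ... transform info.data in place ... */
    gst_buffer_unmap (buf, &info);
  }

  return gst_pad_push (filter->srcpad, buf);
}
```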
2.4. Media types and Properties

Table 2-1. Table of Example Types

Media Type: audio/*
  Property: channels (integer, greater than 0)
    The number of channels of audio data.

Media Type: audio/x-raw
  Description: Unstructured and uncompressed raw integer audio data.
  Property: format (string)

Media Type: audio/mpeg
  Description: Audio data compressed using the MPEG audio encoding scheme.
  Property: mpegversion (integer, 1, 2 or 4)
    The MPEG-version used for encoding the data. The value 1 refers to MPEG-1, -2 and
    -2.5 layer 1, 2 or 3. The values 2 and 4 refer to the MPEG-AAC audio encoding schemes.
  Property: framed (boolean, 0 or 1)
    A true value indicates that each buffer contains exactly one frame. A false value
    indicates that frames and buffers do not necessarily match up.
  Property: layer (integer, 1, 2, or 3)
  Property: bitrate (integer, greater than 0)
    The bitrate, in bits per second. For VBR (variable bitrate) MPEG data, this is the
    average bitrate.

Media Type: audio/x-vorbis
  There are currently no specific properties defined for this type.
This command will check out a series of files and directories into gst-template. The template you will
be using is in the gst-template/gst-plugin/ directory. You should look over the files in that
directory to get a general idea of the structure of a source tree for a plugin.
If for some reason you can't access the git repository, you can also download a snapshot of the latest
revision (http://cgit.freedesktop.org/gstreamer/gst-template/commit/) via the cgit web interface.
Note: Capitalization is important for the name of the plugin. Keep in mind that under some operating
systems, capitalization is also important when specifying directory and file names in general.
Now one needs to adjust the Makefile.am to use the new filenames and run autogen.sh from the
parent directory to bootstrap the build environment. After that, the project can be built and installed using
the well-known make && sudo make install commands.
Note: Be aware that by default autogen.sh and configure would choose /usr/local as the default
location. One would need to add /usr/local/lib/gstreamer-1.0 to GST_PLUGIN_PATH in order
to make the new plugin show up in a GStreamer that's been installed from packages.
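Concretely, the note above might translate into the following commands; the prefix is assumed to be the default /usr/local, and "myfilter" is a placeholder for your element's name:

```shell
# Assuming the default /usr/local prefix; adjust to your installation.
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0

# If GStreamer can find the plugin, gst-inspect-1.0 prints the
# element's metadata and pad templates.
gst-inspect-1.0 myfilter
```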
Note: FIXME: this section is slightly outdated. gst-template is still useful as an example for a minimal
plugin build system skeleton. However, for creating elements the tool gst-element-maker from
gst-plugins-bad is recommended these days.
} GstMyFilter;
/* Standard definition defining a class for this element. */
typedef struct _GstMyFilterClass {
GstElementClass parent_class;
} GstMyFilterClass;
/* Standard macros for defining types for this element. */
#define GST_TYPE_MY_FILTER (gst_my_filter_get_type())
#define GST_MY_FILTER(obj) \
(G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_MY_FILTER,GstMyFilter))
#define GST_MY_FILTER_CLASS(klass) \
(G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_MY_FILTER,GstMyFilterClass))
#define GST_IS_MY_FILTER(obj) \
(G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_MY_FILTER))
#define GST_IS_MY_FILTER_CLASS(klass) \
(G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_MY_FILTER))
/* Standard function returning type information. */
GType gst_my_filter_get_type (void);
Using this header file, you can use the following macro to set up the GObject basics in your source file
so that all functions will be called appropriately:
#include "filter.h"
G_DEFINE_TYPE (GstMyFilter, gst_my_filter, GST_TYPE_ELEMENT);
The type of the element, see the docs/design/draft-klass.txt document in the GStreamer core source
tree for details and examples.
The name of the author of the element, optionally followed by a contact email address in angle
brackets.
For example:
gst_element_class_set_static_metadata (klass,
"An example plugin",
"Example/FirstExample",
"Shows the basic structure of a plugin",
"your name <your.name@your.isp>");
The element details are registered with the plugin during the _class_init () function, which is part
of the GObject system. The _class_init () function should be set for this GObject in the function
where you register the type with GLib.
static void
gst_my_filter_class_init (GstMyFilterClass * klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);

  [..]
  gst_element_class_set_static_metadata (element_class,
    "An example plugin",
    "Example/FirstExample",
    "Shows the basic structure of a plugin",
    "your name <your.name@your.isp>");
}
3.5. GstStaticPadTemplate
A GstStaticPadTemplate is a description of a pad that the element will (or might) create and use. It
contains:
Pad direction.
Existence property. This indicates whether the pad exists always (an "always" pad), only in some
cases (a "sometimes" pad) or only if the application requested such a pad (a "request" pad).
For example:
static GstStaticPadTemplate sink_factory =
GST_STATIC_PAD_TEMPLATE (
"sink",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS ("ANY")
);
Those pad templates are registered during the _class_init () function with
gst_element_class_add_pad_template (). For this function you need a handle to the
GstPadTemplate, which you can create from the static pad template with
gst_static_pad_template_get (). See below for more details on this.
Pads are created from these static templates in the element's _init () function using
gst_pad_new_from_static_template (). In order to create a new pad from this template using
gst_pad_new_from_static_template (), you will need to declare the pad template as a global
variable. More on this subject in Chapter 4.
static GstStaticPadTemplate sink_factory = [..],
src_factory = [..];
static void
gst_my_filter_class_init (GstMyFilterClass * klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);

  [..]
  gst_element_class_add_pad_template (element_class,
      gst_static_pad_template_get (&src_factory));
  gst_element_class_add_pad_template (element_class,
      gst_static_pad_template_get (&sink_factory));
}
The last argument in a template is its type or list of supported types. In this example, we use ANY,
which means that this element will accept all input. In real-life situations, you would set a media type
Values surrounded by curly brackets ({ and }) are lists, values surrounded by square brackets ([
and ]) are ranges. Multiple sets of types are supported too, and should be separated by a semicolon
(;). Later, in the chapter on pads, we will see how to use types to know the exact format of a stream:
Chapter 4.
static gboolean
plugin_init (GstPlugin *plugin)
{
  return gst_element_register (plugin, "my_filter",
      GST_RANK_NONE, GST_TYPE_MY_FILTER);
}
Note that the information returned by the plugin_init() function will be cached in a central registry. For
this reason, it is important that the same information is always returned by the function: for example, it
must not make element factories available based on runtime conditions. If an element can only work in
certain conditions (for example, if the soundcard is not being used by some other process), this must be
reflected by the element being unable to enter the READY state if unavailable, rather than the plugin
attempting to deny its existence.
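For completeness, the plugin_init function is exported to the GStreamer core with the GST_PLUGIN_DEFINE macro. This is a sketch: the version, license, package and origin strings below are placeholders, not values from the guide, and the element registration is stubbed out:

```c
#include <gst/gst.h>

static gboolean
plugin_init (GstPlugin * plugin)
{
  /* register the element factories here, as described above */
  return TRUE;
}

/* GST_PLUGIN_DEFINE exports the entry point that the GStreamer core
 * looks for when it loads the plugin's shared object. */
GST_PLUGIN_DEFINE (
  GST_VERSION_MAJOR,
  GST_VERSION_MINOR,
  my_filter,
  "An example plugin",
  plugin_init,
  "1.0.0",
  "LGPL",
  "my-filter-package",
  "http://example.org/"
)
```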
static void
gst_my_filter_init (GstMyFilter *filter)
{
/* pad through which data comes in to the element */
filter->sinkpad = gst_pad_new_from_static_template (
&sink_template, "sink");
/* pads are configured here with gst_pad_set_*_function () */
Obviously, the above doesn't do anything useful. Instead of printing that the data is in, you would
normally process the data there. Remember, however, that buffers are not always writable.
In more advanced elements (the ones that do event processing), you may want to additionally specify an
event handling function, which will be called when stream-events are sent (such as caps, end-of-stream,
newsegment, tags, etc.).
static void
gst_my_filter_init (GstMyFilter * filter)
{
static gboolean
gst_my_filter_sink_event (GstPad *pad, GstObject *parent, GstEvent *event)
{
  GstMyFilter *filter = GST_MY_FILTER (parent);

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_CAPS:
      /* we should handle the format here */
      break;
    case GST_EVENT_EOS:
      /* end-of-stream, we should close down all stream leftovers here */
      gst_my_filter_stop_processing (filter);
      break;
    default:
      break;
  }

  return gst_pad_event_default (pad, parent, event);
}
static GstFlowReturn
gst_my_filter_chain (GstPad *pad, GstObject *parent, GstBuffer *buf)
{
  GstMyFilter *filter = GST_MY_FILTER (parent);
  GstBuffer *outbuf;

  outbuf = gst_my_filter_process_data (filter, buf);
  gst_buffer_unref (buf);
  if (!outbuf) {
    /* something went wrong - signal an error */
    GST_ELEMENT_ERROR (GST_ELEMENT (filter), STREAM, FAILED, (NULL), (NULL));
    return GST_FLOW_ERROR;
  }

  return gst_pad_push (filter->srcpad, outbuf);
}
In some cases, it might be useful for an element to have control over the input data rate, too. In that case,
you probably want to write a so-called loop-based element. Source elements (with only source pads) can
It is a good idea to call the default event handler gst_pad_event_default () for unknown events.
Depending on the event type, the default handler will forward the event or simply unref it. The CAPS
event is by default not forwarded so we need to do this in the event handler ourselves.
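For instance, the CAPS branch of an event handler could be filled in as follows. This is a sketch under the same GstMyFilter assumptions as earlier (stand-in type definitions included); gst_event_parse_caps () returns a caps pointer that remains owned by the event:

```c
#include <gst/gst.h>

/* Stand-ins for the element type built up elsewhere in the guide. */
typedef struct {
  GstElement element;
  GstPad *sinkpad, *srcpad;
} GstMyFilter;
#define GST_MY_FILTER(obj) ((GstMyFilter *) (obj))

static gboolean
gst_my_filter_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  GstMyFilter *filter = GST_MY_FILTER (parent);

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_CAPS:{
      GstCaps *caps;

      gst_event_parse_caps (event, &caps);
      /* ... configure the element for this format ... */

      /* forward the CAPS event downstream ourselves, since the
       * default handler does not do it for us */
      return gst_pad_push_event (filter->srcpad, event);
    }
    default:
      return gst_pad_event_default (pad, parent, event);
  }
}
```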
It is a good idea to call the default query handler gst_pad_query_default () for unknown queries.
Depending on the query type, the default handler will forward the query or simply unref it.
GST_STATE_NULL
GST_STATE_READY
GST_STATE_PAUSED
GST_STATE_PLAYING
which will from now on be referred to simply as NULL, READY, PAUSED and PLAYING.
GST_STATE_NULL is the default state of an element. In this state, it has not allocated any runtime
resources, it has not loaded any runtime libraries and it can obviously not handle data.
GST_STATE_READY is the next state that an element can be in. In the READY state, an element has all
default resources (runtime-libraries, runtime-memory) allocated. However, it has not yet allocated or
defined anything that is stream-specific. When going from NULL to READY state
(GST_STATE_CHANGE_NULL_TO_READY), an element should allocate any non-stream-specific
resources and should load runtime-loadable libraries (if any). When going the other way around (from
READY to NULL, GST_STATE_CHANGE_READY_TO_NULL), an element should unload these
libraries and free all allocated resources. Examples of such resources are hardware devices. Note that
files are generally streams, and these should thus be considered as stream-specific resources; therefore,
they should not be allocated in this state.
GST_STATE_PAUSED is the state in which an element is ready to accept and handle data. For most
elements this state is the same as PLAYING. The only exceptions to this rule are sink elements. Sink
elements only accept one single buffer of data and then block. At this point the pipeline is prerolled and
ready to render data immediately.
GST_STATE_PLAYING is the highest state that an element can be in. For most elements this state is
exactly the same as PAUSED, they accept and process events and buffers with data. Only sink elements
need to differentiate between PAUSED and PLAYING state. In PLAYING state, sink elements actually
render incoming data, e.g. output audio to a sound card or render video pictures to an image sink.
static GstStateChangeReturn
gst_my_filter_change_state (GstElement *element, GstStateChange transition)
{
  GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS;
  GstMyFilter *filter = GST_MY_FILTER (element);

  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      if (!gst_my_filter_allocate_memory (filter))
        return GST_STATE_CHANGE_FAILURE;
      break;
    default:
      break;
  }

  ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
  if (ret == GST_STATE_CHANGE_FAILURE)
    return ret;

  switch (transition) {
    case GST_STATE_CHANGE_READY_TO_NULL:
      gst_my_filter_free_memory (filter);
      break;
    default:
      break;
  }

  return ret;
}
/* properties */
enum {
PROP_0,
PROP_SILENT
/* FILL ME */
};
static void gst_my_filter_set_property (GObject *object,
    guint prop_id, const GValue *value, GParamSpec *pspec);
static void gst_my_filter_get_property (GObject *object,
    guint prop_id, GValue *value, GParamSpec *pspec);
static void
gst_my_filter_class_init (GstMyFilterClass *klass)
{
  GObjectClass *object_class = G_OBJECT_CLASS (klass);

  /* define virtual function pointers */
  object_class->set_property = gst_my_filter_set_property;
  object_class->get_property = gst_my_filter_get_property;

  /* define properties */
  g_object_class_install_property (object_class, PROP_SILENT,
      g_param_spec_boolean ("silent", "Silent",
          "Whether to be very verbose or not",
          FALSE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
}
static void
gst_my_filter_set_property (GObject *object, guint prop_id,
    const GValue *value, GParamSpec *pspec)
{
  GstMyFilter *filter = GST_MY_FILTER (object);

  switch (prop_id) {
    case PROP_SILENT:
      filter->silent = g_value_get_boolean (value);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
  }
}
The above is a very simple example of how properties are used. Graphical applications will use these
properties and will display a user-controllable widget with which these properties can be changed. This
means that - for the property to be as user-friendly as possible - you should be as exact as possible in the
definition of the property. Not only in defining ranges in between which valid properties can be located
(for integers, floats, etc.), but also in using very descriptive (better yet: internationalized) strings in the
definition of the property, and if possible using enums and flags instead of integers. The GObject
documentation describes these in a very complete way, but below, we'll give a short example of where
this is useful. Note that using integers here would probably completely confuse the user, because they
make no sense in this context. The example is stolen from videotestsrc.
typedef enum {
GST_VIDEOTESTSRC_SMPTE,
GST_VIDEOTESTSRC_SNOW,
GST_VIDEOTESTSRC_BLACK
} GstVideotestsrcPattern;
[..]
directory that GStreamer searches, then you will need to set the plugin path. Either set
GST_PLUGIN_PATH to the directory containing your plugin, or use the command-line option
--gst-plugin-path. If you based your plugin off of the gst-plugin template, then this will look something
like gst-launch-1.0 --gst-plugin-path=$HOME/gst-template/gst-plugin/src/.libs TESTPIPELINE
However, you will often need more testing features than gst-launch-1.0 can provide, such as seeking,
events, interactivity and more. Writing your own small testing program is the easiest way to accomplish
this. This section explains - in a few words - how to do that. For a complete application development
guide, see the Application Development Manual (../../manual/html/index.html).
At the start, you need to initialize the GStreamer core library by calling gst_init (). You can
alternatively call gst_init_get_option_group (), which will return a pointer to a GOptionGroup.
You can then use GOption to handle the initialization, and this will finish the GStreamer initialization.
You can create elements using gst_element_factory_make (), where the first argument is the
element type that you want to create, and the second argument is a free-form name. The example at the
end uses a simple filesource - decoder - soundcard output pipeline, but you can use specific debugging
elements if that's necessary. For example, an identity element can be used in the middle of the
pipeline to act as a data-to-application transmitter. This can be used to check the data for misbehaviours
or correctness in your test application. Also, you can use a fakesink element at the end of the pipeline
to dump your data to stdout (in order to do this, set the dump property to TRUE). Lastly, you can use
valgrind to check for memory errors.
During linking, your test application can use filtered caps as a way to drive a specific type of data to or
from your element. This is a very simple and effective way of checking multiple types of input and
output in your element.
Note that during running, you should listen for at least the error and eos messages on the bus and/or
your plugin/element to check for correct handling of this. Also, you should add events into the pipeline
and make sure your plugin handles these correctly (with respect to clocking, internal caching, etc.).
Never forget to clean up memory in your plugin or your test application. When going to the NULL state,
your element should clean up allocated memory and caches. Also, it should close down any references
held to possible support libraries. Your application should unref () the pipeline and make sure it
doesn't crash.
#include <gst/gst.h>
static gboolean
bus_call (GstBus     *bus,
          GstMessage *msg,
          gpointer    data)
{
GMainLoop *loop = data;
switch (GST_MESSAGE_TYPE (msg)) {
case GST_MESSAGE_EOS:
g_print ("End-of-stream\n");
g_main_loop_quit (loop);
break;
case GST_MESSAGE_ERROR: {
gchar *debug = NULL;
GError *err = NULL;
gst_message_parse_error (msg, &err, &debug);
g_print ("Error: %s\n", err->message);
g_error_free (err);
if (debug) {
g_print ("Debug details: %s\n", debug);
g_free (debug);
}
g_main_loop_quit (loop);
break;
}
default:
break;
}
return TRUE;
}
gint
main (gint   argc,
      gchar *argv[])
{
GstStateChangeReturn ret;
GstElement *pipeline, *filesrc, *decoder, *filter, *sink;
GstElement *convert1, *convert2, *resample;
GMainLoop *loop;
GstBus *bus;
guint watch_id;
/* initialization */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
if (argc != 2) {
g_print ("Usage: %s <mp3 filename>\n", argv[0]);
return -1;
}
/* create elements */
[...]
/* run */
ret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE) {
GstMessage *msg;
g_print ("Failed to start up pipeline!\n");
/* check if there is an error message with details on the bus */
msg = gst_bus_poll (bus, GST_MESSAGE_ERROR, 0);
if (msg) {
GError *err = NULL;
gst_message_parse_error (msg, &err, NULL);
g_print ("ERROR: %s\n", err->message);
g_error_free (err);
gst_message_unref (msg);
}
return -1;
}
g_main_loop_run (loop);
/* clean up */
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
g_source_remove (watch_id);
g_main_loop_unref (loop);
return 0;
}
foo
bar
boo
bye
The code to parse this file and create the dynamic sometimes pads looks like this:
Note that we use a lot of checks everywhere to make sure that the content in the file is valid. This serves
two purposes: first, the file could be erroneous, in which case we prevent a crash. The second and most
important reason is that - in extreme cases - the file could be used maliciously to cause undefined
behaviour in the plugin, which might lead to security issues. Always assume that the file could be used to
do bad things.
If all pads of an element are activated in push-mode scheduling, the element as a whole is operating in
push-mode. For source elements this means that they will have to start a task that pushes out buffers on
the source pad to the downstream elements. Downstream elements will have data pushed to them by
upstream elements using the sinkpad's _chain ()-function, which in turn will push out buffers on the
source pads. Prerequisites for this scheduling mode are that a chain-function was set for each sinkpad
using gst_pad_set_chain_function () and that all downstream elements operate in the same mode.
Alternatively, sinkpads can be the driving force behind a pipeline by operating in pull-mode, while the
sourcepads of the element still operate in push-mode. In order to be the driving force, those pads start
a GstTask when they are activated. This task is a thread, which will call a function specified by the
element. When called, this function will have random data access (through gst_pad_pull_range
()) over all sinkpads, and can push data over the sourcepads, which effectively means that this
element controls data flow in the pipeline. Prerequisites for this mode are that all downstream
elements can act in push mode, and that all upstream elements operate in pull-mode (see below).
Lastly, all pads in an element can be activated in PULL-mode. However, contrary to the above, this
does not mean that they start a task on their own. Rather, it means that they are pull slave for the
downstream element, and have to provide random data access to it from their _get_range
()-function. Requirements are that a _get_range ()-function was set on this pad using the
function gst_pad_set_getrange_function (). Also, if the element has any sinkpads, all those
pads (and thereby their peers) need to operate in PULL access mode, too.
When a sink element is activated in PULL mode, it should start a task that calls
gst_pad_pull_range () on its sinkpad. It can only do this when the upstream SCHEDULING
query returns support for the GST_PAD_MODE_PULL scheduling mode.
In the next two sections, we will go closer into pull-mode scheduling (elements/pads driving the pipeline,
and elements/pads providing random access), and some specific use cases will be given.
Demuxers, parsers and certain kinds of decoders where data comes in unparsed (such as MPEG-audio
or video streams), since those will prefer byte-exact (random) access from their input. If possible,
however, such elements should be prepared to operate in push mode, too.
Certain kind of audio outputs, which require control over their input data flow, such as the Jack sound
server.
First you need to perform a SCHEDULING query to check if the upstream element(s) support pull-mode
scheduling. If that is possible, you can activate the sinkpad in pull-mode. Inside the activate_mode
function you can then start the task.
#include "filter.h"
#include <string.h>
static gboolean gst_my_filter_activate      (GstPad      * pad,
                                             GstObject   * parent);
static gboolean gst_my_filter_activate_mode (GstPad      * pad,
                                             GstObject   * parent,
                                             GstPadMode    mode,
                                             gboolean      active);
static void     gst_my_filter_loop          (GstMyFilter * filter);
static void
gst_my_filter_init (GstMyFilter * filter)
{
[..]
gst_pad_set_activate_function (filter->sinkpad, gst_my_filter_activate);
gst_pad_set_activatemode_function (filter->sinkpad,
gst_my_filter_activate_mode);
[..]
}
[..]
static gboolean
gst_my_filter_activate (GstPad * pad, GstObject * parent)
{
GstQuery *query;
gboolean pull_mode;
/* first check what upstream scheduling is supported */
query = gst_query_new_scheduling ();
if (!gst_pad_peer_query (pad, query)) {
gst_query_unref (query);
goto activate_push;
}
/* see if pull-mode is supported */
pull_mode = gst_query_has_scheduling_mode_with_flags (query,
GST_PAD_MODE_PULL, GST_SCHEDULING_FLAG_SEEKABLE);
gst_query_unref (query);
if (!pull_mode)
goto activate_push;
/* now we can activate in pull-mode. GStreamer will also
* activate the upstream peer in pull-mode */
return gst_pad_activate_mode (pad, GST_PAD_MODE_PULL, TRUE);
activate_push:
{
/* something not right, we fallback to push-mode */
return gst_pad_activate_mode (pad, GST_PAD_MODE_PUSH, TRUE);
  }
}
Once started, your task has full control over input and output. The most simple case of a task function is
one that reads input and pushes that over its source pad. It's not all that useful, but provides some more
flexibility than the old push-mode case that we've been looking at so far.
#define BLOCKSIZE 2048
static void
gst_my_filter_loop (GstMyFilter * filter)
{
GstFlowReturn ret;
gint64 len;
GstFormat fmt = GST_FORMAT_BYTES;
GstBuffer *buf = NULL;
if (!gst_pad_query_duration (filter->sinkpad, fmt, &len)) {
GST_DEBUG_OBJECT (filter, "failed to query duration, pausing");
goto stop;
}
[...]
Data sources, such as a file source, that can provide data from any offset with reasonably low latency.
Filters that would like to provide pull-mode scheduling over the whole pipeline.
Parsers that can easily provide this by skipping a small part of their input and are thus essentially
"forwarding" getrange requests literally without any own processing involved. Examples include tag
readers (e.g. ID3) or single output parsers, such as a WAVE parser.
The following example will show how a _get_range ()-function can be implemented in a source
element:
#include "filter.h"
static GstFlowReturn gst_my_filter_get_range (GstPad     * pad,
                                              GstObject  * parent,
                                              guint64      offset,
                                              guint        length,
                                              GstBuffer ** buf);
static void
gst_my_filter_init (GstMyFilter * filter)
{
[..]
gst_pad_set_getrange_function (filter->srcpad,
gst_my_filter_get_range);
[..]
}
static GstFlowReturn
gst_my_filter_get_range (GstPad     * pad,
                         GstObject  * parent,
                         guint64      offset,
                         guint        length,
                         GstBuffer ** buf)
{
  [..]
}
In practice, many elements that could theoretically do random access may often be activated
in push-mode scheduling anyway, since there is no downstream element able to start its own task.
Therefore, in practice, those elements should implement both a _get_range ()-function and a _chain
()-function.
A downstream element suggests a format on its sinkpad and places the suggestion in the result of the
CAPS query performed on the sinkpad. See also Implementing a CAPS query function.
An upstream element decides on a format. It sends the selected media format downstream on its
source pad with a CAPS event. Downstream elements reconfigure themselves to handle the media type
in the CAPS event on the sinkpad.
An upstream element can inform downstream that it would like to suggest a new format by sending a
RECONFIGURE event upstream. The RECONFIGURE event simply instructs an upstream element
to restart the negotiation phase. Because the element that sent out the RECONFIGURE event is now
suggesting another format, the format in the pipeline might change.
In addition to the CAPS and RECONFIGURE event and the CAPS query, there is an ACCEPT_CAPS
query to quickly check if a certain caps can be accepted by an element.
All negotiation follows these simple rules. Let's take a look at some typical use cases and how
negotiation happens.
Fixed negotiation. An element can output one format only. See Section 14.2.1.
Transform negotiation. There is a (fixed) transform between the input and output format of the
element, usually based on some element property. The caps that the element will produce depend on
the upstream caps and the caps that the element can accept depend on the downstream caps. See
Section 14.2.2.
Dynamic negotiation. An element can output many formats. See Section 14.2.3.
A typefinder, since the type found is part of the actual data stream and can thus not be re-negotiated.
The typefinder will look at the stream of bytes, figure out the type, send a CAPS event with the caps
and then push buffers of the type.
Pretty much all demuxers, since the contained elementary data streams are defined in the file headers,
and thus not renegotiable.
Some decoders, where the format is embedded in the data stream and not part of the peercaps and
where the decoder itself is not reconfigurable either.
gst_pad_use_fixed_caps() is used on the source pad with fixed caps. As long as the pad is not
negotiated, the default CAPS query will return the caps presented in the padtemplate. As soon as the pad
is negotiated, the CAPS query will return the negotiated caps (and nothing else). These are the relevant
code snippets for fixed caps source pads.
[..]
pad = gst_pad_new_from_static_template (..);
gst_pad_use_fixed_caps (pad);
[..]
The fixed caps can then be set on the pad by calling gst_pad_set_caps ().
[..]
caps = gst_caps_new_simple ("audio/x-raw",
"format", G_TYPE_STRING, GST_AUDIO_NE(F32),
"rate", G_TYPE_INT, <samplerate>,
"channels", G_TYPE_INT, <num-channels>, NULL);
if (!gst_pad_set_caps (pad, caps)) {
GST_ELEMENT_ERROR (element, CORE, NEGOTIATION, (NULL),
("Some debug information here"));
return GST_FLOW_ERROR;
}
[..]
Videobox. It adds a configurable border around a video frame depending on object properties.
Identity elements. All elements that don't change the format of the data, only the content. Video and
audio effects are an example. Other examples include elements that inspect the stream.
Some decoders and encoders, where the output format is defined by input format, like mulawdec and
mulawenc. These decoders usually have no headers that define the content of the stream. They are
usually more like conversion elements.
Below is an example of the negotiation steps of a typical transform element. In the sink pad CAPS event
handler, we compute the caps for the source pad and set those.
[...]
static gboolean
gst_my_filter_setcaps (GstMyFilter *filter,
GstCaps *caps)
{
GstStructure *structure;
int rate, channels;
gboolean ret;
GstCaps *outcaps;
structure = gst_caps_get_structure (caps, 0);
ret = gst_structure_get_int (structure, "rate", &rate);
ret = ret && gst_structure_get_int (structure, "channels", &channels);
if (!ret)
    return FALSE;

  outcaps = gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, GST_AUDIO_NE (S16),
      "rate", G_TYPE_INT, rate,
      "channels", G_TYPE_INT, channels, NULL);
  ret = gst_pad_set_caps (filter->srcpad, outcaps);
  gst_caps_unref (outcaps);

  return ret;
}
If the element prefers to operate in passthrough mode, check if downstream accepts the caps with the
ACCEPT_CAPS query. If it does, we can complete negotiation and we can operate in passthrough
mode.
Query the downstream peer pad for the list of possible caps.
Select from the downstream list the first caps that you can transform to and set this as the output caps.
You might have to fixate the caps to some reasonable defaults to construct fixed caps.
Let's look at the example of an element that can convert between samplerates, so where input and output
samplerate don't have to be the same:
static gboolean
gst_my_filter_setcaps (GstMyFilter *filter,
GstCaps *caps)
{
if (gst_pad_set_caps (filter->sinkpad, caps)) {
filter->passthrough = TRUE;
} else {
GstCaps *othercaps, *newcaps;
GstStructure *s = gst_caps_get_structure (caps, 0), *others;
/* no passthrough, setup internal conversion */
gst_structure_get_int (s, "channels", &filter->channels);
othercaps = gst_pad_get_allowed_caps (filter->srcpad);
others = gst_caps_get_structure (othercaps, 0);
gst_structure_set (others,
"channels", G_TYPE_INT, filter->channels, NULL);
/* now, the samplerate value can optionally have multiple values, so
* we "fixate" it, which means that one fixed value is chosen */
newcaps = gst_caps_copy_nth (othercaps, 0);
gst_caps_unref (othercaps);
gst_pad_fixate_caps (filter->srcpad, newcaps);
if (!gst_pad_set_caps (filter->srcpad, newcaps))
return FALSE;
  }
  return TRUE;
}
Elements that want to propose a new format upstream need to first check if the new caps are
acceptable upstream with an ACCEPT_CAPS query. Then they would send a RECONFIGURE event
and be prepared to answer the CAPS query with the new preferred format. It should be noted that when
there is no upstream element that can (or wants to) renegotiate, the element needs to deal with the
currently configured format.
Elements that operate in transform negotiation according to Section 14.2.2 pass the RECONFIGURE
event upstream. Because these elements simply do a fixed transform based on the upstream caps, they
need to send the event upstream so that it can select a new format.
Elements that operate in fixed negotiation (Section 14.2.1) drop the RECONFIGURE event. These
elements can't reconfigure and their output caps don't depend on the upstream caps, so the event can be
dropped.
Elements that can be reconfigured on the source pad (source pads implementing dynamic negotiation
in Section 14.2.3) should check their NEED_RECONFIGURE flag with
gst_pad_check_reconfigure () and should start renegotiation when the function returns
TRUE.
static gboolean
gst_my_filter_query (GstPad *pad, GstObject * parent, GstQuery * query)
{
gboolean ret;
GstMyFilter *filter = GST_MY_FILTER (parent);
switch (GST_QUERY_TYPE (query)) {
case GST_QUERY_CAPS:
{
GstPad *otherpad;
GstCaps *temp, *caps, *filt, *tcaps;
gint i;
otherpad = (pad == filter->srcpad) ? filter->sinkpad :
filter->srcpad;
caps = gst_pad_get_allowed_caps (otherpad);
gst_query_parse_caps (query, &filt);
/* We support *any* samplerate, indifferent from the samplerate
* supported by the linked elements on both sides. */
for (i = 0; i < gst_caps_get_size (caps); i++) {
GstStructure *structure = gst_caps_get_structure (caps, i);
gst_structure_remove_field (structure, "rate");
}
/* make sure we only return results that intersect our
* padtemplate */
tcaps = gst_pad_get_pad_template_caps (pad);
if (tcaps) {
temp = gst_caps_intersect (caps, tcaps);
gst_caps_unref (caps);
gst_caps_unref (tcaps);
caps = temp;
}
/* filter against the query filter when needed */
if (filt) {
temp = gst_caps_intersect (caps, filt);
gst_caps_unref (caps);
caps = temp;
}
gst_query_set_caps_result (query, caps);
gst_caps_unref (caps);
ret = TRUE;
break;
}
default:
ret = gst_pad_query_default (pad, parent, query);
break;
}
return ret;
}
15.1. GstMemory
GstMemory is an object that manages a region of memory. The memory object points to a region of
memory of maxsize. The area in this memory starting at offset and extending for size bytes is the
accessible region in the memory. The maxsize of the memory can never be changed after the object is
created; however, the offset and size can be changed.
GstMemory objects are created by a GstAllocator object. To implement support for a new kind of
[...]
15.2. GstBuffer
A GstBuffer is a lightweight object that is passed from an upstream to a downstream element and
contains memory and metadata. It represents the multimedia content that is pushed or pulled downstream
by elements.
The buffer contains one or more GstMemory objects that represent the data in the buffer.
Metadata in the buffer consists of:
DTS and PTS timestamps. These represent the decoding and presentation timestamps of the buffer
content and are used by synchronizing elements to schedule buffers. Both of these timestamps can be
GST_CLOCK_TIME_NONE when unknown/undefined.
The duration of the buffer contents. This duration can be GST_CLOCK_TIME_NONE when
unknown/undefined.
Media specific offsets and offset_end. For video this is the frame number in the stream and for audio
the sample number. Other definitions for other media exist.
Arbitrary structures via GstMeta, see below.
[...]
GstBuffer *buffer;
GstMemory *mem;
GstMapInfo info;
/* make empty buffer */
buffer = gst_buffer_new ();
/* make memory holding 100 bytes */
mem = gst_allocator_alloc (NULL, 100, NULL);
/* add the memory to the buffer */
gst_buffer_append_memory (buffer, mem);
[...]
/* get WRITE access to the memory and fill with 0xff */
gst_buffer_map (buffer, &info, GST_MAP_WRITE);
memset (info.data, 0xff, info.size);
gst_buffer_unmap (buffer, &info);
[...]
/* free the buffer */
gst_buffer_unref (buffer);
15.3. GstMeta
With the GstMeta system you can add arbitrary structures on buffers. These structures describe extra
properties of the buffer such as cropping, stride, region of interest, etc.
Metadata is also used to store, for example, the X image that is backing up the memory of the buffer.
This makes it easier for elements to locate the X image from the buffer.
The metadata system separates API specification (what the metadata and its API look like) and the
implementation (how it works). This makes it possible to make different implementations of the same
API, for example, depending on the hardware you are running on.
#include <gst/video/gstvideometa.h>
[...]
GstVideoCropMeta *meta;
/* buffer points to a video frame, add some cropping metadata */
meta = gst_buffer_add_video_crop_meta (buffer);
/* configure the cropping metadata */
meta->x = 8;
meta->y = 8;
meta->width = 120;
[...]
An element can then use the metadata on the buffer when rendering the frame like this:
#include <gst/video/gstvideometa.h>
[...]
GstVideoCropMeta *meta;
/* buffer points to a video frame, get the cropping metadata */
meta = gst_buffer_get_video_crop_meta (buffer);
if (meta) {
/* render frame with cropping */
_render_frame_cropped (buffer, meta->x, meta->y, meta->width, meta->height);
} else {
/* render frame */
_render_frame (buffer);
}
[...]
#include <gst/gst.h>
typedef struct _MyExampleMeta MyExampleMeta;
struct _MyExampleMeta {
  GstMeta  meta;

  gint     age;
  gchar   *name;
};
GType my_example_meta_api_get_type (void);
#define MY_EXAMPLE_META_API_TYPE (my_example_meta_api_get_type())
#define gst_buffer_get_my_example_meta(b) \
((MyExampleMeta*)gst_buffer_get_meta((b),MY_EXAMPLE_META_API_TYPE))
The metadata API definition consists of the definition of the structure that holds a gint and a string. The
first field in the structure must be GstMeta.
We also define a my_example_meta_api_get_type () function that will register our metadata API
definition. We also define a convenience macro gst_buffer_get_my_example_meta () that simply
finds and returns the metadata with our new API.
Next, let's have a look at how the my_example_meta_api_get_type () function is implemented in
the my-example-meta.c file.
#include "my-example-meta.h"
GType
my_example_meta_api_get_type (void)
{
static volatile GType type;
static const gchar *tags[] = { "foo", "bar", NULL };
if (g_once_init_enter (&type)) {
GType _type = gst_meta_api_type_register ("MyExampleMetaAPI", tags);
g_once_init_leave (&type, _type);
}
return type;
}
[...]
/* implementation */
const GstMetaInfo *my_example_meta_get_info (void);
#define MY_EXAMPLE_META_INFO (my_example_meta_get_info())

MyExampleMeta * gst_buffer_add_my_example_meta (GstBuffer   *buffer,
                                                gint         age,
                                                const gchar *name);
Let's have a look at how these functions are implemented in the my-example-meta.c file.
[...]
static gboolean
my_example_meta_init (GstMeta * meta, gpointer params, GstBuffer * buffer)
{
MyExampleMeta *emeta = (MyExampleMeta *) meta;
emeta->age = 0;
emeta->name = NULL;
return TRUE;
}
static gboolean
my_example_meta_transform (GstBuffer * transbuf, GstMeta * meta,
GstBuffer * buffer, GQuark type, gpointer data)
{
MyExampleMeta *emeta = (MyExampleMeta *) meta;
/* we always copy no matter what transform */
gst_buffer_add_my_example_meta (transbuf, emeta->age, emeta->name);
return TRUE;
}
gst_meta_register () registers the implementation details, like the API that you implement and the
size of the metadata structure along with methods to initialize and free the memory area. You can also
implement a transform function that will be called when a certain transformation (identified by the quark
and quark specific data) is performed on a buffer.
15.4. GstBufferPool
The GstBufferPool object provides a convenient base class for managing lists of reusable buffers.
Essential for this object is that all the buffers have the same properties such as size, padding, metadata
and alignment.
A bufferpool object can be configured to manage a minimum and maximum amount of buffers of a
specific size. A bufferpool can also be configured to use a specific GstAllocator for the memory of the
buffers. There is support in the bufferpool to enable bufferpool-specific options, such as adding GstMeta
to the buffers in the pool or enabling specific padding on the memory in the buffers.
A bufferpool can be inactive or active. In the inactive state, you can configure the pool. In the active
state, you can't change the configuration anymore, but you can acquire and release buffers from/to the
pool.
In the following sections we take a look at how you can use a bufferpool.
GstStructure *config;
[...]
/* get config structure */
config = gst_buffer_pool_get_config (pool);
/* set caps, size, minimum and maximum buffers in the pool */
gst_buffer_pool_config_set_params (config, caps, size, min, max);
The configuration of the bufferpool is maintained in a generic GstStructure that can be obtained with
gst_buffer_pool_get_config(). Convenience methods exist to get and set the configuration
options in this structure. After updating the structure, it is set as the current configuration in the
bufferpool again with gst_buffer_pool_set_config().
The following options can be configured on a bufferpool:
The caps of the buffers to allocate.
The size of the buffers. This is the suggested size of the buffers in the pool. The pool might decide to
allocate larger buffers to add padding.
The minimum and maximum amount of buffers in the pool. When minimum is set to > 0, the
bufferpool will pre-allocate this amount of buffers. When maximum is not 0, the bufferpool will
allocate up to maximum amount of buffers.
The allocator and parameters to use. Some bufferpools might ignore the allocator and use their
internal one.
Other arbitrary bufferpool options identified with a string. A bufferpool lists the supported options with
gst_buffer_pool_get_options() and you can ask if an option is supported with
gst_buffer_pool_has_option(). An option can be enabled by adding it to the configuration
structure with gst_buffer_pool_config_add_option (). These options are used to enable
things like letting the pool set metadata on the buffers or adding extra configuration options for
padding, for example.
After the configuration is set on the bufferpool, the pool can be activated with
gst_buffer_pool_set_active (pool, TRUE). From that point on you can use
gst_buffer_pool_acquire_buffer () to retrieve a buffer from the pool, like this:
[...]
GstFlowReturn ret;
GstBuffer *buffer;
ret = gst_buffer_pool_acquire_buffer (pool, &buffer, NULL);
if (G_UNLIKELY (ret != GST_FLOW_OK))
goto pool_failed;
[...]
It is important to check the return value of the acquire function because it is possible that it fails: when
your element shuts down, it will deactivate the bufferpool and then all calls to acquire will return
GST_FLOW_FLUSHING.
All buffers that are acquired from the pool will have their pool member set to the original pool. When the
last ref is decremented on the buffer, GStreamer will automatically call
gst_buffer_pool_release_buffer() to release the buffer back to the pool. You (or any other
downstream element) don't need to know if a buffer came from a pool; you can just unref it.
15.5. GST_QUERY_ALLOCATION
The ALLOCATION query is used to negotiate GstMeta, GstBufferPool and GstAllocator between
elements. Negotiation of the allocation strategy is always initiated and decided by a srcpad after it has
negotiated a format and before it decides to push buffers. A sinkpad can suggest an allocation strategy
but it is ultimately the source pad that will decide based on the suggestions of the downstream sink pad.
The source pad will do a GST_QUERY_ALLOCATION with the negotiated caps as a parameter. This is
needed so that the downstream element knows what media type is being handled. A downstream sink
pad can answer the allocation query with the following results:
An array of possible GstBufferPool suggestions with suggested size, minimum and maximum
amount of buffers.
An array of GstAllocator objects along with suggested allocation parameters such as flags, prefix,
alignment and padding. These allocators can also be configured in a bufferpool when this is supported
by the bufferpool.
An array of supported GstMeta implementations along with metadata specific parameters. It is
important that the upstream element knows what kind of metadata is supported downstream before it
places that metadata on buffers.
When the GST_QUERY_ALLOCATION returns, the source pad will select from the available
bufferpools, allocators and metadata how it will allocate buffers.
#include <gst/video/video.h>
#include <gst/video/gstvideometa.h>
#include <gst/video/gstvideopool.h>
GstCaps *caps;
GstQuery *query;
GstStructure *structure;
GstBufferPool *pool;
GstStructure *config;
guint size, min, max;
[...]
/* find a pool for the negotiated caps now */
query = gst_query_new_allocation (caps, TRUE);
if (!gst_pad_peer_query (scope->srcpad, query)) {
/* query failed, not a problem, we use the query defaults */
}
if (gst_query_get_n_allocation_pools (query) > 0) {
/* we got configuration from our peer, parse them */
gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
} else {
pool = NULL;
size = 0;
min = max = 0;
}
if (pool == NULL) {
/* we did not get a pool, make one ourselves then */
pool = gst_video_buffer_pool_new ();
}
config = gst_buffer_pool_get_config (pool);
gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
gst_buffer_pool_config_set_params (config, caps, size, min, max);
gst_buffer_pool_set_config (pool, config);
/* and activate */
gst_buffer_pool_set_active (pool, TRUE);
[...]
Implementors of these methods should modify the given GstQuery object by updating the pool options
and allocation options.
- Do not create a new type if you could use one which already exists.
- If creating a new type, discuss it first with the other GStreamer developers, on at least one of: IRC, mailing lists.
- Try to ensure that the name for a new format is as unlikely to conflict with anything else created already, and is not a more generalised name than it should be. For example: "audio/compressed" would be too generalised a name to represent audio data compressed with an mp3 codec. Instead "audio/mp3" might be an appropriate name, or "audio/compressed" could exist and have a property indicating the type of compression used.
- Ensure that, when you do create a new type, you specify it clearly, and get it added to the list of known types so that other developers can use the type correctly when writing their elements.
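The "general name plus property" pattern above can be sketched with a toy caps structure in plain C (illustrative only; real code would use GstCaps and gst_caps_new_simple (), and the compatibility rule here is deliberately simplified):

```c
#include <string.h>

/* A toy "caps" with one media type and one optional property, mimicking
 * gst_caps_new_simple ("audio/compressed",
 *                      "compression", G_TYPE_STRING, "mp3", NULL). */
typedef struct {
  const char *media_type;
  const char *prop_name;   /* NULL if no property is set */
  const char *prop_value;
} ToyCaps;

/* Two caps are compatible when the media types match and any property
 * present in both has the same value; a missing property acts as a
 * wildcard, just like an absent field in caps negotiation. */
static int
toy_caps_compatible (const ToyCaps *a, const ToyCaps *b)
{
  if (strcmp (a->media_type, b->media_type) != 0)
    return 0;
  if (a->prop_name && b->prop_name) {
    if (strcmp (a->prop_name, b->prop_name) != 0)
      return 0;
    return strcmp (a->prop_value, b->prop_value) == 0;
  }
  return 1;
}
```

This is why "audio/compressed" with a compression property can distinguish mp3 from other codecs without a flood of over-general type names.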
library dependencies) to put it elsewhere. The reason for this centralization is to reduce the number of plugins that need to be loaded in order to detect a stream's type. Below is an example that will recognize AVI files, which start with a RIFF tag, then the size of the file and then an AVI tag:
static void
gst_my_typefind_function (GstTypeFind *tf,
                          gpointer     user_data)
{
  guint8 *data = gst_type_find_peek (tf, 0, 12);

  if (data &&
      GUINT32_FROM_LE (((guint32 *) data)[0]) == GST_MAKE_FOURCC ('R','I','F','F') &&
      GUINT32_FROM_LE (((guint32 *) data)[2]) == GST_MAKE_FOURCC ('A','V','I',' ')) {
    gst_type_find_suggest (tf, GST_TYPE_FIND_MAXIMUM,
        gst_caps_new_simple ("video/x-msvideo", NULL));
  }
}

static gboolean
plugin_init (GstPlugin *plugin)
{
  static gchar *exts[] = { "avi", NULL };

  if (!gst_type_find_register (plugin, "", GST_RANK_PRIMARY,
                               gst_my_typefind_function, exts,
                               gst_caps_new_simple ("video/x-msvideo", NULL),
                               NULL))
    return FALSE;

  return TRUE;
}
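The FOURCC check in the typefind function can be sketched in plain, GStreamer-free C; MAKE_FOURCC and looks_like_avi below are stand-ins for GST_MAKE_FOURCC and the peek-and-compare logic (assuming a little-endian host, where GUINT32_FROM_LE is a no-op):

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for GST_MAKE_FOURCC: packs four characters little-endian. */
#define MAKE_FOURCC(a,b,c,d) \
  ((uint32_t)(a) | ((uint32_t)(b) << 8) | ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Returns 1 if the first 12 bytes look like an AVI file:
 * a RIFF tag, a 4-byte size field, then an AVI tag. */
static int
looks_like_avi (const uint8_t *data, size_t len)
{
  uint32_t tag0, tag2;

  if (len < 12)
    return 0;
  memcpy (&tag0, data, 4);      /* bytes 0-3: "RIFF" */
  memcpy (&tag2, data + 8, 4);  /* bytes 8-11: "AVI " */
  /* On a big-endian host GUINT32_FROM_LE would byte-swap here. */
  return tag0 == MAKE_FOURCC ('R','I','F','F') &&
         tag2 == MAKE_FOURCC ('A','V','I',' ');
}
```

Note that, like gst_type_find_peek (), the check refuses to decide when fewer than 12 bytes are available.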
Note that many of the properties are not required, but rather optional. This means that most of these properties can be extracted from the container header, but that - in case the container header does not provide these - they can also be extracted by parsing the stream header or the stream content. The policy is that your element should provide the data that it knows about by only parsing its own content, not another element's content. Example: the AVI header provides the samplerate of the contained audio stream in the header. MPEG system streams don't. This means that an AVI stream demuxer would provide samplerate as a property for MPEG audio streams, whereas an MPEG demuxer would not. A decoder needing this data would require a stream parser in between to extract it from the header or calculate it from the stream.
Table 16-1. Table of Audio Types

all audio types
    Property: channels (integer; greater than 0) - The number of channels of audio data.
    Property: rate (integer; greater than 0) - The sample rate of the audio data.
    Property: block_align (integer; any) - Chunk buffer size.
    Property: (string; "quicktime", "dvi", "microsoft" or "4xm")
audio/x-cinepak - Audio as provided in a Cinepak (Quicktime) stream.
audio/x-dv - Audio as provided in a Digital Video stream.
audio/x-flac - Free Lossless Audio codec (FLAC).
audio/x-gsm - Data encoded by the GSM codec.
audio/x-alaw - A-Law Audio.
audio/x-mulaw - Mu-Law Audio.
audio/mpeg - Audio data compressed using the MPEG audio encoding scheme.
    Property: mpegversion (integer)
(audio/mpeg, continued)
    Property: (boolean; 0 or 1) - A true value indicates that each buffer contains exactly one frame. A false value indicates that frames and buffers do not necessarily match up.
    Property: layer (integer; 1, 2 or 3) - The compression scheme layer used to compress the data (only if mpegversion=1).
    Property: bitrate (integer; greater than 0) - The bitrate, in bits per second. For VBR (variable bitrate) MPEG data, this is the average bitrate.
audio/x-qdm2 - Data encoded by the QDM version 2 codec.
audio/x-pn-realaudio - Realmedia audio data.
    Property: raversion (integer; 1 or 2)
audio/x-speex - Data encoded by the Speex audio codec.
audio/x-vorbis - Vorbis audio data.
audio/x-paris - Ensoniq PARIS audio.
audio/x-svx - Amiga IFF / SVX8 / SV16 audio.
audio/x-nist - Sphere NIST audio.
audio/x-voc - Sound Blaster VOC audio.
audio/x-ircam - Berkeley/IRCAM/CARL audio.
audio/x-w64 - Sonic Foundry's 64 bit RIFF/WAV.

Table 16-2. Table of Video Types

all video types
    Property: width (integer; greater than 0) - The width of the video image.
    Property: height (integer; greater than 0) - The height of the video image.
video/x-raw - Unstructured and uncompressed raw video data.
    Property: format (string; I420, YV12, YUY2, UYVY, AYUV, RGBx, BGRx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, Y41B, Y42B, YVYU, Y444, v210, v216, NV12, NV21, GRAY8, GRAY16_BE, GRAY16_LE, v308, RGB16, BGR16, RGB15, BGR15, UYVP, A420, RGB8P, YUV9, YVU9, IYU1, ARGB64, AYUV64, r210, I420_10LE, I420_10BE, I422_10LE, I422_10BE) - The layout of the video. See the FourCC definition site (http://www.fourcc.org/) for references and definitions. YUY2, YVYU and UYVY are 4:2:2 packed-pixel, Y41P is 4:1:1 packed-pixel and IYU2 is 4:4:4 packed-pixel. Y42B is 4:2:2 planar, YV12 and I420 are 4:2:0 planar, Y41B is 4:1:1 planar and YUV9 and YVU9 are 4:1:0 planar. Y800 contains Y-samples only (black/white).
video/x-divx - DivX video.
    Property: divxversion (integer)
video/x-dv - Digital Video.
    Property: systemstream (boolean; FALSE)
video/x-h263 - H-263 video.
    Property: variant (string)
    Property: h263version (string; h263, h263p, h263pp) - Enhanced versions of the h263 codec.
video/x-h264 - H-264 video.
    Property: variant (string; itu, videosoft) - Vendor specific variant of the format. itu is the standard.
video/x-huffyuv - Huffyuv video.
video/x-indeo - Indeo video.
    Property: indeoversion (integer)
video/x-intel-h263 - H-263 video (Intel variant).
    Property: variant (string; intel)
video/x-jpeg - Motion-JPEG video.
video/mpeg - MPEG video.
    Property: mpegversion (integer)
    Property: systemstream (boolean; FALSE) - Indicates that this stream is not a system container stream.
video/x-msmpeg - Microsoft MPEG-4 video deviations.
    Property: msmpegversion (integer)
video/x-msvideocodec - Microsoft Video 1 (oldish codec).
    Property: msvideoversion (integer)
video/x-pn-realvideo - Realmedia video.
    Property: rmversion (integer)
(media type missing)
    Property: (string; "microsoft" or "quicktime") - The RLE format inside the Microsoft AVI container has a different byte layout than the RLE format inside Apple's Quicktime container; this property keeps track of the layout.
    Property: depth (integer; 1 to 64) - Bit depth of the used palette. This means that the palette that belongs to this format defines 2^depth colors.
    Property: palette_data (GstBuffer) - Buffer containing a color palette (in native-endian RGBA) used by this format. The buffer is of size 4*2^depth.
video/x-tarkin - Tarkin video.
video/x-theora - Theora video.
video/x-vp3 - VP-3 video.
video/x-xvid - XviD video.
image/jpeg - Joint Picture Expert Group Image.
image/png - Portable Network Graphics Image.
image/tiff - Tagged Image File Format.

Table 16-3. Table of Container Types

video/x-ms-asf - Advanced Streaming Format (ASF).
video/x-msvideo - AVI.
video/x-dv - Digital Video.
    Property: systemstream (boolean; TRUE)
video/x-matroska - Matroska.
video/mpeg - Motion Pictures Expert Group System Stream.
    Property: systemstream (boolean; TRUE)
application/ogg - Ogg.
video/quicktime - Quicktime.
application/vnd.rn-realmedia - RealMedia.
audio/x-wav - WAV.

Table of Subtitle Types

None defined yet.

Table of Other Types

None defined yet.
If your element is chain-based, you will almost always have to implement a sink event function, since that is how you are notified about segments, caps and the end of the stream.

If your element is exclusively loop-based, you may or may not want a sink event function (since the element is driving the pipeline it will know the length of the stream in advance or be notified by the flow return value of gst_pad_pull_range ()). In some cases even loop-based elements may receive events from upstream (for example audio decoders with an id3demux or apedemux element in front of them, or demuxers that are being fed input from sources that send additional information about the stream in custom events, as DVD sources do).
- Stream Start
- Caps
- Segment
- Tag (metadata)
- Table Of Contents
- Gap
- Flush Start
- Flush Stop
- Seek Request
- Navigation
For more comprehensive information about events and how they should be used correctly in various
circumstances please consult the GStreamer design documentation. This section only gives a general
overview.
17.3.2. Caps
The CAPS event contains the format description of the following buffers. See Caps negotiation for more
information about negotiation.
17.3.3. Segment
A segment event is sent downstream to announce the range of valid timestamps in the stream and how
they should be transformed into running-time and stream-time. A segment event must always be sent
before the first buffer of data and after a flush (see above).
The first segment event is created by the element driving the pipeline, like a source operating in
push-mode or a demuxer/decoder operating pull-based. This segment event then travels down the
pipeline and may be transformed on the way (a decoder, for example, might receive a segment event in
BYTES format and might transform this into a segment event in TIMES format based on the average
bitrate).
Depending on the element type, the event can simply be forwarded using gst_pad_event_default (), or it should be parsed and a modified event should be sent on. The latter is true for demuxers, which generally have a byte-to-time conversion concept. Their input is usually byte-based, so the incoming event will have an offset in byte units (GST_FORMAT_BYTES), too. Elements downstream, however, expect segment events in time units, so that they can be used to synchronize against the pipeline clock. Therefore, demuxers and similar elements should not forward the event, but parse it, free it and send a segment event (in time units, GST_FORMAT_TIME) further downstream.
The segment event is created using the function gst_event_new_segment (). See the API reference
and design document for details about its parameters.
Elements parsing this event can use gst_event_parse_segment() to extract the event details. Elements
may find the GstSegment API useful to keep track of the current segment (if they want to use it for
output clipping, for example).
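The byte-to-time conversion a demuxer or decoder performs on such a segment boils down to scaling offsets by the average bitrate. A minimal sketch (GST_SECOND matches GStreamer's nanosecond time unit; the overflow caveat is why real code uses gst_util_uint64_scale ()):

```c
#include <stdint.h>

#define GST_SECOND 1000000000ULL  /* nanoseconds per second, as in GStreamer */

/* Convert a byte offset into a timestamp in nanoseconds, given an average
 * bitrate in bits per second. This mirrors what an element does when it
 * turns a BYTES-format segment into a TIME-format segment. */
static uint64_t
bytes_to_time (uint64_t bytes, uint64_t bitrate)
{
  /* bytes * 8 = bits; bits / bitrate = seconds; scale to nanoseconds.
   * Multiply before dividing to keep precision (this can overflow for
   * very large offsets, which gst_util_uint64_scale () avoids). */
  return bytes * 8 * GST_SECOND / bitrate;
}
```

For example, at an average bitrate of 128 kbit/s, a byte offset of 16000 corresponds to exactly one second of stream time.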
17.3.7. Gap
WRITEME
17.3.12. Navigation
Navigation events are sent upstream by video sinks to inform upstream elements of where the mouse
pointer is, if and where mouse pointer clicks have happened, or if keys have been pressed or released.
All this information is contained in the event structure which can be obtained with
gst_event_get_structure ().
Check out the navigationtest element in gst-plugins-good for an idea of how to extract navigation information from this event.
18.1. Clocks
Time in GStreamer is defined as the value returned from a particular GstClock object from the method
gst_clock_get_time ().
In a typical computer, there are many sources that can be used as a time source, e.g., the system time, soundcards, CPU performance counters, etc. For this reason, there are many GstClock implementations available in GStreamer. The clock time doesn't always start from 0 or from some known value. Some clocks start counting from some known start date, others start counting from the last reboot, and so on.
As clocks return an absolute measure of time, they are not usually used directly. Instead, differences
between two clock times are used to measure elapsed time according to a clock.
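That idea can be sketched in a few lines of plain C; fake_clock_get_time is a hypothetical clock whose epoch is arbitrary, showing that only differences between readings are meaningful:

```c
#include <stdint.h>

typedef uint64_t ClockTime;  /* nanoseconds, like GstClockTime */

/* A hypothetical clock: returns an absolute time whose epoch
 * (boot time, a calendar date, ...) is arbitrary. */
static ClockTime
fake_clock_get_time (ClockTime epoch, ClockTime ticks)
{
  return epoch + ticks;
}

/* Elapsed time between two absolute readings of the same clock.
 * The absolute values are meaningless on their own; only the
 * difference between two readings measures elapsed time. */
static ClockTime
elapsed (ClockTime earlier, ClockTime later)
{
  return later - earlier;
}
```

Whatever the epoch, two readings taken 250 ns of "ticks" apart always yield an elapsed time of 250 ns.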
- High CPU load: there is not enough CPU power to handle the stream, causing buffers to arrive late in the sink.
- Network problems.

The measurements result in QOS events that aim to adjust the datarate in one or more upstream elements. Two types of adjustments can be made:

It is also possible for the application to artificially introduce delay between synchronized buffers; this is called throttling. It can be used to limit or reduce the framerate, for example.
[...]
    case GST_EVENT_QOS:
    {
      GstQOSType type;
      gdouble proportion;
      GstClockTimeDiff diff;
      GstClockTime timestamp;

      gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);

      GST_OBJECT_LOCK (decoder);
      priv->qos_proportion = proportion;
      priv->qos_timestamp = timestamp;
      priv->qos_diff = diff;
      GST_OBJECT_UNLOCK (decoder);

      res = gst_pad_push_event (decoder->sinkpad, event);
      break;
    }
[...]
With the QoS values, there are two types of corrections that an element can do:
[...]
  GST_OBJECT_LOCK (dec);
  qos_proportion = priv->qos_proportion;
  qos_timestamp = priv->qos_timestamp;
  qos_diff = priv->qos_diff;
  GST_OBJECT_UNLOCK (dec);

  /* calculate the earliest valid timestamp */
  if (G_LIKELY (GST_CLOCK_TIME_IS_VALID (qos_timestamp))) {
    if (G_UNLIKELY (qos_diff > 0)) {
      earliest_time = qos_timestamp + 2 * qos_diff + frame_duration;
    } else {
      earliest_time = qos_timestamp + qos_diff;
    }
  } else {
    earliest_time = GST_CLOCK_TIME_NONE;
  }

  /* compare earliest_time to running-time of next buffer */
  if (earliest_time > timestamp)
    goto drop_buffer;
[...]
- Permanently dropping frames or reducing the CPU or bandwidth requirements of the element. Some decoders might be able to skip decoding of B frames.
- Switching to lower quality processing or reducing the algorithmic complexity. Care should be taken that this doesn't introduce disturbing visual or audible glitches.
- Assigning more CPU cycles to critical parts of the pipeline. This could, for example, be done by increasing the thread priority.

In all cases, elements should be prepared to go back to their normal processing rate when the proportion member in the QOS event approaches the ideal proportion of 1.0 again.
19.3. Throttling
Elements synchronizing to the clock should expose a property to configure them in throttle mode. In
throttle mode, the time distance between buffers is kept to a configurable throttle interval. This means
that effectively the buffer rate is limited to 1 buffer per throttle interval. This can be used to limit the
framerate, for example.
When an element is configured in throttling mode (this is usually only implemented on sinks) it should
produce QoS events upstream with the jitter field set to the throttle interval. This should instruct
upstream elements to skip or drop the remaining buffers in the configured throttle interval.
The proportion field is set to the desired slowdown needed to get the desired throttle interval.
Implementations can use the QoS Throttle type, the proportion and the jitter member to tune their
implementations.
The default sink base class has a throttle-time property for this feature. You can test this with:
gst-launch-1.0 videotestsrc ! xvimagesink throttle-time=500000000
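As a sketch of the arithmetic behind that setting: a throttle interval of 500000000 ns allows at most 2 buffers per second (plain C, not a GStreamer API):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Maximum buffers per second a sink in throttle mode will render,
 * given the throttle interval in nanoseconds. */
static uint64_t
max_buffers_per_second (uint64_t throttle_interval_ns)
{
  return NSEC_PER_SEC / throttle_interval_ns;
}
```

So throttle-time=500000000 effectively caps the displayed framerate at 2 frames per second, regardless of the source framerate.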
An element changes its processing strategy because of QoS reasons (quality). This could include a
decoder that decides to drop every B frame to increase its processing speed or an effect element
switching to a lower quality algorithm.
Even though the gstcontroller library may be linked into the host application, you should make sure
it is initialized in your plugin_init function:
static gboolean
plugin_init (GstPlugin *plugin)
{
...
/* initialize library */
gst_controller_init (NULL, NULL);
...
}
It makes no sense for every GObject parameter to be real-time controlled. Therefore, the next step is to mark controllable parameters. This is done by using the special flag GST_PARAM_CONTROLLABLE when setting up GObject params in the _class_init method.
g_object_class_install_property (gobject_class, PROP_FREQ,
g_param_spec_double ("freq", "Frequency", "Frequency of test signal",
0.0, 20000.0, 440.0,
G_PARAM_READWRITE | GST_PARAM_CONTROLLABLE | G_PARAM_STATIC_STRINGS));
This call makes all parameter-changes for the given timestamp active by adjusting the GObject properties of the element. It's up to the element to determine the synchronisation rate.
Or more conveniently:

static void gst_my_filter_some_interface_init (GstSomeInterface *iface);

G_DEFINE_TYPE_WITH_CODE (GstMyFilter, gst_my_filter, GST_TYPE_ELEMENT,
    G_IMPLEMENT_INTERFACE (GST_TYPE_SOME_INTERFACE,
        gst_my_filter_some_interface_init));
To get a grab on the Window where the video sink element is going to render. This is achieved by either being informed about the Window identifier that the video sink element generated, or by forcing the video sink element to use a specific Window identifier for rendering.

To force a redrawing of the latest video frame the video sink element displayed on the Window. Indeed if the #GstPipeline is in #GST_STATE_PAUSED state, moving the Window around will damage its content. Application developers will want to handle the Expose events themselves and force the video sink element to refresh the Window's content.

A plugin drawing video output in a video window will need to have that window at one stage or another. Passive mode simply means that no window has been given to the plugin before that stage, so the plugin created the window by itself. In that case the plugin is responsible for destroying that window when it's not needed any more and it has to tell the application that a window has been created so that the application can use it. This is done using the have-window-handle message that can be posted from the plugin with the gst_video_overlay_got_window_handle method.

As you probably guessed already, active mode just means sending a video window to the plugin so that video output goes there. This is done using the gst_video_overlay_set_window_handle method.

It is possible to switch from one mode to another at any moment, so the plugin implementing this interface has to handle all cases. There are only 2 methods that plugin writers have to implement and they most probably look like this:
static void
gst_my_filter_set_window_handle (GstVideoOverlay *overlay, guintptr handle)
{
  GstMyFilter *my_filter = GST_MY_FILTER (overlay);

  if (my_filter->window)
    gst_my_filter_destroy_window (my_filter->window);

  my_filter->window = handle;
}
You will also need to use the interface methods to post messages when needed such as when receiving a
CAPS event where you will know the video geometry and maybe create the window.
static MyFilterWindow *
gst_my_filter_window_create (GstMyFilter *my_filter, gint width, gint height)
{
  MyFilterWindow *window = g_new (MyFilterWindow, 1);
  ...
  gst_video_overlay_got_window_handle (GST_VIDEO_OVERLAY (my_filter), window->win);
}

/* called from the event handler for CAPS events */
static gboolean
gst_my_filter_sink_set_caps (GstMyFilter *my_filter, GstCaps *caps)
{
  gint width, height;
  gboolean ret;
  ...
  ret = gst_structure_get_int (structure, "width", &width);
  ret &= gst_structure_get_int (structure, "height", &height);
  if (!ret)
    return FALSE;

  gst_video_overlay_prepare_window_handle (GST_VIDEO_OVERLAY (my_filter));

  if (!my_filter->window)
    my_filter->window = gst_my_filter_window_create (my_filter, width, height);
  ...
}
static void
gst_my_filter_class_init (GstMyFilterClass *klass)
{
  [..]
  gst_tag_register ("my_tag_name", GST_TAG_FLAG_META,
      G_TYPE_STRING,
      _("my own tag"),
      _("a tag that is specific to my own element"),
      NULL);
  [..]
}
GType
gst_my_filter_get_type (void)
{
static void
gst_my_filter_task_func (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);
  GstTagSetter *tagsetter = GST_TAG_SETTER (element);
  GstData *data;
  GstEvent *event;
  gboolean eos = FALSE;
  GstTagList *taglist = gst_tag_list_new ();

  while (!eos) {
    data = gst_pad_pull (filter->sinkpad);

    /* We're not very much interested in data right now */
    if (GST_IS_BUFFER (data))
      gst_buffer_unref (GST_BUFFER (data));
    event = GST_EVENT (data);

    switch (GST_EVENT_TYPE (event)) {
      case GST_EVENT_TAG:
        gst_tag_list_insert (taglist, gst_event_tag_get_list (event),
            GST_TAG_MERGE_PREPEND);
        gst_event_unref (event);
        break;
      case GST_EVENT_EOS:
        eos = TRUE;
        gst_event_unref (event);
        break;
      default:
        gst_pad_event_default (filter->sinkpad, event);
        break;
    }
  }

  /* merge tags with the ones retrieved from the application */
  if (gst_tag_setter_get_tag_list (tagsetter)) {
    gst_tag_list_insert (taglist,
        gst_tag_setter_get_tag_list (tagsetter),
        gst_tag_setter_get_tag_merge_mode (tagsetter));
  }

  /* write tags */
  gst_tag_list_foreach (taglist, gst_my_filter_write_tag, filter);

  /* signal EOS */
  gst_pad_push (filter->srcpad, gst_event_new (GST_EVENT_EOS));
}
It requires that the sink only has one sinkpad. Sink elements that need more than one sinkpad must make a manager element with multiple GstBaseSink elements inside.
Sink elements can derive from GstBaseSink using the usual GObject convenience macro
G_DEFINE_TYPE ():
G_DEFINE_TYPE (GstMySink, gst_my_sink, GST_TYPE_BASE_SINK);
[..]
static void
gst_my_sink_class_init (GstMySinkClass * klass)
{
klass->set_caps = [..];
klass->render = [..];
[..]
}
- Derived implementations barely need to be aware of preroll, and do not need to know anything about the technical implementation requirements of preroll. The base-class does all the hard work.
- Less code to write in the derived class, shared code (and thus shared bugfixes).

There are also specialized base classes for audio and video, let's look at those a bit.

- Also automatically provides a clock, so that other sinks (e.g. in case of audio/video playback) are synchronized.
- Features can be added to all audiosinks by making a change in the base class, which makes maintenance easy.
- Derived classes require only three small functions, plus some GObject boilerplate code.

In addition to implementing the audio base-class virtual functions, derived classes can (should) also implement the GstBaseSink set_caps () and get_caps () virtual functions for negotiation.
- Because of preroll (and the preroll () virtual function), it is possible to display a video frame already when going into the GST_STATE_PAUSED state.
- By adding new features to GstVideoSink, it will be possible to add extensions to videosinks that affect all of them, but only need to be coded once, which is a huge maintenance benefit.
- Automatic pad activation handling, and task-wrapping in case we get assigned to start a task ourselves.

The GstBaseSrc may not be suitable for all cases, though; it has limitations:

- There is one and only one sourcepad. Source elements requiring multiple sourcepads must implement a manager bin and use multiple source elements internally or make a manager element that uses a source element and a demuxer inside.

It is possible to use special memory, such as X server memory pointers or mmap ()ed memory areas, as data pointers in buffers returned from the create() virtual function.
New features can be added to it and will apply to all derived classes automatically.
- They can be the driving force of the pipeline, by running their own task. This works particularly well for elements that need random access, for example an AVI demuxer.
- They can also run in push-based mode, which means that an upstream element drives the pipeline. This works particularly well for streams that may come from a network, such as Ogg.

In addition, audio parsers with one output can, in theory, also be written in random access mode. Although simple playback will mostly work if your element only accepts one mode, it may be required to implement multiple modes to work in combination with all sorts of applications, such as editing. Also, performance may become better if you implement multiple modes. See Different scheduling modes to see how an element can accept multiple scheduling modes.
- To add support for private events with custom event handling to another element.
- To add support for custom pad _query () or _convert () handling to another element.
- To add custom data handling before or after another element's data handler function (generally its _chain () function).
- To embed an element, or a series of elements, into something that looks and works like a simple element to the outside world. This is particularly handy for implementing sources and sink elements with multiple pads.

Making a manager is about as simple as it gets. You can derive from a GstBin, and in most cases, you can embed the required elements in the _init () already, including setup of ghostpads. If you need any custom data handlers, you can connect signals or embed a second element which you control.
V. Appendices

This chapter contains things that don't belong anywhere else.

Make sure the state of an element gets reset when going to NULL. Ideally, this should set all object properties to their original state. This function should also be called from _init.

Make sure an element forgets everything about its contained stream when going from PAUSED to READY. In READY, all stream states are reset. An element that goes from PAUSED to READY and back to PAUSED should start reading the stream from the start again.

People that use gst-launch for testing have the tendency to not care about cleaning up. This is wrong. An element should be tested using various applications, where testing not only means to make sure it doesn't crash, but also to test for memory leaks using tools such as valgrind. Elements have to be reusable in a pipeline after having been reset.
27.2. Debugging
Elements should never use their standard output for debugging (using functions such as printf ()
or g_print ()). Instead, elements should use the logging functions provided by GStreamer, named
GST_DEBUG (), GST_LOG (), GST_INFO (), GST_WARNING () and GST_ERROR (). The various
logging levels can be turned on and off at runtime and can thus be used for solving issues as they turn
up. Instead of GST_LOG () (as an example), you can also use GST_LOG_OBJECT () to print the object that you're logging output for.
Ideally, elements should use their own debugging category. Most elements use the following code to
do that:
GST_DEBUG_CATEGORY_STATIC (myelement_debug);
#define GST_CAT_DEFAULT myelement_debug
[..]
static void
gst_myelement_class_init (GstMyelementClass *klass)
{
[..]
GST_DEBUG_CATEGORY_INIT (myelement_debug, "myelement",
0, "My own element");
}
At runtime, you can turn on debugging using the commandline option --gst-debug=myelement:5.
Elements should use GST_DEBUG_FUNCPTR when setting pad functions or overriding element
class methods, for example:
gst_pad_set_event_func (myelement->srcpad,
GST_DEBUG_FUNCPTR (my_element_src_event));
Elements that are aimed for inclusion into one of the GStreamer modules should ensure consistent naming of the element name, structures and function names. For example, if the element type is GstYellowFooDec, functions should be prefixed with gst_yellow_foo_dec_ and the element should be registered as yellowfoodec. Separate words should be separated in this scheme, so it should be GstFooDec and gst_foo_dec, and not GstFoodec and gst_foodec.
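The CamelCase-to-underscore rule can be sketched as a small helper (purely illustrative; GStreamer has no such public function):

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Derive the function prefix from a GObject type name:
 * "GstYellowFooDec" -> "gst_yellow_foo_dec".
 * Inserts '_' before each interior uppercase letter and lowercases all. */
static void
type_name_to_prefix (const char *type_name, char *out, size_t out_len)
{
  size_t o = 0;

  for (size_t i = 0; type_name[i] != '\0' && o + 2 < out_len; i++) {
    if (isupper ((unsigned char) type_name[i])) {
      if (i > 0)
        out[o++] = '_';
      out[o++] = (char) tolower ((unsigned char) type_name[i]);
    } else {
      out[o++] = type_name[i];
    }
  }
  out[o] = '\0';
}
```

Note that the registered element name (yellowfoodec) additionally drops the underscores and the gst prefix; this helper only covers the function-prefix half of the convention.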
All elements to which it applies (sources, sinks, demuxers) should implement query functions on their
pads, so that applications and neighbour elements can request the current position, the stream length
(if known) and so on.
Elements should make sure they forward events they do not handle with gst_pad_event_default (pad,
parent, event) instead of just dropping them. Events should never be dropped unless specifically
intended.
Elements should make sure they forward queries they do not handle with gst_pad_query_default (pad,
parent, query) instead of just dropping them.
gst-launch is not a good tool to show that your element is finished. Applications such as Rhythmbox and Totem (for GNOME) or AmaroK (for KDE) are. gst-launch will not test various things such as proper clean-up on reset, event handling, querying and so on.

Parsers and demuxers should make sure to check their input. Input cannot be trusted. Prevent possible buffer overflows and the like. Feel free to error out on unrecoverable stream errors. Test your demuxer using stream corruption elements such as breakmydata (included in gst-plugins). It will randomly insert, delete and modify bytes in a stream, and is therefore a good test for robustness. If your element crashes when adding this element, your element needs fixing. If it errors out properly, it's good enough. Ideally, it'd just continue to work and forward data as much as possible.

Demuxers should not assume that seeking works. Be prepared to work with unseekable input streams (e.g. network sources) as well.
Sources and sinks should be prepared to be assigned another clock than the one they expose themselves. Always use the provided clock for synchronization, else you'll get A/V sync issues.
Discont events have been replaced by newsegment events. In 0.10, it is essential that you send a
newsegment event downstream before you send your first buffer (in 0.8 the scheduler would invent
discont events if you forgot them, in 0.10 this is no longer the case).
In 0.10, buffers have caps attached to them. Elements should allocate new buffers with
gst_pad_alloc_buffer (). See Caps negotiation for more details.
Most functions returning an object or an object property have been changed to return their own reference rather than a constant reference to the one owned by the object itself. The reason for this change is primarily thread-safety. This means effectively that return values of functions such as gst_element_get_pad (), gst_pad_get_name (), gst_pad_get_parent (), gst_object_get_parent (), and many more like these have to be freed or unreferenced after use. Check the API references of each function to know for sure whether return values should be freed or not.
In 0.8, scheduling could happen in any way. Source elements could be _get ()-based or _loop
()-based, and any other element could be _chain ()-based or _loop ()-based, with no limitations.
Scheduling in 0.10 is simpler for the scheduler, and the element is expected to do some more work.
Pads get assigned a scheduling mode, based on which they can either operate in random access-mode, in pipeline driving mode or in push-mode. All this is documented in detail in Different scheduling modes. As a result of this, the bytestream object no longer exists. Elements requiring byte-level access should now use random access on their sinkpads.
Negotiation is asynchronous. This means that downstream negotiation is done as data comes in and
upstream negotiation is done whenever renegotiation is required. All details are described in
Caps negotiation.
For as far as possible, elements should try to use existing base classes in 0.10. Sink and source
elements, for example, could derive from GstBaseSrc and GstBaseSink. Audio sinks or sources
could even derive from audio-specific base classes. All existing base classes have been discussed in
Pre-made base classes and the next few chapters.
In 0.10, event handling and buffers are separated once again. This means that in order to receive
events, one no longer has to set the GST_FLAG_EVENT_AWARE flag, but can simply set an event
handling function on the elements sinkpad(s), using the function gst_pad_set_event_function
(). The _chain ()-function will only receive buffers.
Although core will wrap most threading-related locking for you (e.g. it takes the stream lock before calling your data handling functions), you are still responsible for locking around certain functions, e.g. object properties. Be sure to lock properly here, since applications will change those properties in a different thread than the thread which does the actual data passing! You can use the GST_OBJECT_LOCK () and GST_OBJECT_UNLOCK () helpers in most cases, fortunately, which grab the default property lock of the element.
GstValueFixedList and
If your plugin's state change function hasn't been superseded by virtual start() and stop() methods of one of the new base classes, then your plugin's state change functions may need to be changed in order to safely handle concurrent access by multiple threads. Your typical state change function will now first handle upwards state changes, then chain up to the state change function of the parent class (usually GstElementClass in these cases), and only then handle downwards state changes. See the vorbis decoder plugin in gst-plugins-base for an example.

The reason for this is that in the case of downwards state changes you don't want to destroy allocated resources while your plugin's chain function (for example) is still accessing those resources in another thread. Whether your chain function might be running or not depends on the state of your plugin's pads, and the state of those pads is closely linked to the state of the element. Pad states are handled in the GstElement class's state change function, including proper locking, and that's why it is essential to chain up before destroying allocated resources.

As already mentioned above, you should really rewrite your plugin to derive from one of the new base classes though, so you don't have to worry about these things, as the base class will handle it for you. There are no base classes for decoders and encoders yet, so the above paragraphs about state changes definitely apply if your plugin is a decoder or an encoder.
gst_pad_set_link_function (): If the element is derived from a GstBase class, then override the set_caps ().

gst_pad_use_explicit_caps () has been replaced by gst_pad_use_fixed_caps (). You can then set the fixed caps to use on a pad with gst_pad_set_caps ().