Using Multiple Graphs in DirectShow
Geraint Davies
Introduction & Overview
DirectShow uses a graph of connected filters. Each filter in the graph has the same state
(stopped, paused or running) and the same clock, and the data flows directly from filter to
filter. While this works well for many tasks, there are a number of tasks for which it is not
suited. Applications need to start and stop some filters independently of others, and to
switch connections between filters dynamically. In this article, and the accompanying
code, I want to demonstrate how to solve some of these problems by using multiple,
related graphs of filters.
First, some background. Why is it so complicated to change a graph while it’s running?
The design of DirectShow is intended to make filters as lightweight as possible, to
encourage developers to break down large monolithic processing engines into several
separate components with no loss of efficiency. This meant allowing each filter to call
directly to interfaces on other filters and not insisting on a thread switch for each filter.
The result is that a single thread created by one filter can call down through several other
filters and block in a number of places. This has created a low-overhead design, but sadly
it has also resulted in much of the complexity of the DirectShow architecture. Only by
following the strict rules of DirectShow behaviour can deadlocks be avoided.
All the filters in a graph must be in the same state. But the graph itself is a very
lightweight object: there is very little additional cost to having the same filters divided
between several different graphs. A set of custom filters can then feed the data between
graphs, outside of the DirectShow framework, and the graphs can be started and stopped
independently. You can do this very straightforwardly without sharing allocators or
threads between the graphs: the data is copied into queues in one graph, and then picked
up and delivered on a separate thread in a separate graph. However, as well as the copy
and thread switch, this scheme can often mean conversion between formats since dynamic
format changes cannot be passed on easily.
For my examples, I’ve used a framework that allows data to be passed directly between
graphs. The GMFBridge tool (“GDCL Multigraph Framework”) allows you to build
separate source and render graphs that can be connected together, and when connected,
data is passed efficiently from source to render. I want to show how this can be used to
solve a few common DirectShow tasks. I’m hoping that in the process of explaining it, I
will also be able to shed some light on some of the darker corners of DirectShow.
Full source code to the GMFBridge tool and the two example applications is available
from http://www.gdcl.co.uk/articles. This version is based on version 1.0.0.8, which
includes dynamic changes to video dimensions, and revised input locking.
I’m extremely grateful for the reviews, changes and suggestions provided by Iain Downs,
Phil Webster, Thore Karlsen and Stephen Estrop.
Using GMFBridge
To use the GMFBridge architecture, you create a BridgeController, specifying whether
you have a video and/or audio stream. You can then create a number of graphs; the
controller will insert either sink filters or feeder filters into your graphs. The sink filters
consume data at the end of a graph: these are source graphs. The feeder filters feed that data into another graph: these are render graphs. The controller can then connect one sink filter to one feeder filter at a time. The controller can operate in either suspend mode or
discard mode: this controls what happens when a source graph is running but not
connected: the data received at the sink can be held, causing the graph to suspend, or it
can be discarded so that the graph can continue running.
You will get confused about the names; sorry about that! At the downstream end of a
source graph, there is a sink filter (class BridgeSink). At the beginning of a render graph,
there is a feeder filter that acts as a source filter for that graph; this is the BridgeSource
class. The diagram below shows two source graphs and a render graph joined by the
Bridge Controller (in green) and the Bridge Sink and Bridge Source filters (coloured red).
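To make this concrete, here is a rough sketch of the setup sequence in C++. The method names come from the IGMFBridgeController description later in this article; the CLSID, the eUncompressed option value and the exact parameter lists are assumptions made for the example, so check them against the GMFBridge headers.

#include <dshow.h>
#include <atlbase.h>
#include "GMFBridge.h"   // declares IGMFBridgeController (signatures below are assumptions)

HRESULT BuildBridgedGraphs()
{
    // Create the bridge controller (CLSID name assumed).
    CComPtr<IGMFBridgeController> pController;
    HRESULT hr = pController.CoCreateInstance(CLSID_GMFBridgeController);
    if (FAILED(hr)) return hr;

    // One video stream; accept only uncompressed types at the sink input
    // (option value assumed) and discard data while no graph is bridged.
    hr = pController->AddStream(TRUE, eUncompressed, TRUE);
    if (FAILED(hr)) return hr;

    // Build the source graph: the controller inserts a sink filter at its tail.
    CComPtr<IGraphBuilder> pSourceGraph, pRenderGraph;
    pSourceGraph.CoCreateInstance(CLSID_FilterGraph);
    pRenderGraph.CoCreateInstance(CLSID_FilterGraph);

    CComPtr<IUnknown> pSink;
    hr = pController->InsertSinkFilter(pSourceGraph, &pSink);
    if (FAILED(hr)) return hr;
    // ... add the capture or playback filters and connect them to the sink's input pins ...

    // Build the render graph: the controller inserts a feeder (source) filter
    // whose output pins mirror the sink's input pins.
    CComPtr<IUnknown> pFeeder;
    hr = pController->InsertSourceFilter(pSink, pRenderGraph, &pFeeder);
    if (FAILED(hr)) return hr;
    // ... render the feeder's output pins, then run both graphs ...

    // Join the graphs: data now flows from the sink to the feeder.
    return pController->BridgeGraphs(pSink, pFeeder);
}

For file playback, the CreateSourceGraph and CreateRenderGraph helpers described later take care of most of this graph-building work.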
Continuous Preview During Capture
I want to solve two different tasks using the GMFBridge toolkit. The simpler of my two
tasks is this: how can I keep showing a preview stream from my video capture device,
while starting and stopping capture into different capture files?
If you are capturing video to an AVI file using the AVI Multiplexor and File Writer
filters, the file is not finalized until the graph is stopped. So if you want to close one file
and start another, you need to stop and restart the graph – but since the preview is
performed by the same graph, that will cause a visible glitch in the preview window.
So why not put the AVI Mux and File Writer in a separate graph? We can then stop,
rebuild and restart those two components whenever we need a new file, without affecting
the preview graph. All we need is a way to get the data from the capture output into the
new graph. This is done using the GMFBridge custom sink and feeder filters.
The GMFPreview sample app creates a normal video preview graph: a video input device
is selected and its preview pin is rendered. Then a BridgeController is created, and it
inserts a sink filter into the graph, and the capture pin is fed to the sink filter. Once this is
set up, the preview graph is started. Whenever the user wants to capture a segment of
video, a capture graph can be built and the controller will bridge the preview graph to the
capture graph. When the file is to be closed, the bridge is disconnected and the capture
graph is cleaned up. The controller is set in discard mode: this means that when the bridge is not active, data received from the capture pin is discarded so that the preview graph continues to run. As soon as the bridge is connected, data from the capture pin will be fed into the capture graph, to the AVI Mux and then the File Writer.
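In code, the start/stop cycle for one capture segment might look roughly like the sketch below. It assumes the controller and the running preview graph (with its sink filter) were set up as in the earlier sketch; BuildAviCaptureGraph is a hypothetical helper that creates a graph, has the controller insert a feeder filter, and connects it through the AVI Mux to the File Writer.

// Same headers as the earlier sketch (dshow.h, atlbase.h, GMFBridge.h).

// Hypothetical helper, implemented elsewhere: builds a capture graph containing
// a feeder filter, the AVI Mux and the File Writer for the given file.
HRESULT BuildAviCaptureGraph(IGMFBridgeController* pController, IUnknown* pSink,
                             LPCWSTR pszFile, IGraphBuilder** ppGraph, IUnknown** ppFeeder);

HRESULT StartSegment(IGMFBridgeController* pController,
                     IUnknown* pSink,                 // sink filter in the preview graph
                     IGraphBuilder** ppCaptureGraph,
                     IUnknown** ppFeeder)
{
    HRESULT hr = BuildAviCaptureGraph(pController, pSink, L"capture01.avi",
                                      ppCaptureGraph, ppFeeder);
    if (FAILED(hr)) return hr;

    // Run the capture graph first, then connect the bridge so that data
    // from the capture pin starts flowing into the new file.
    CComQIPtr<IMediaControl> pMC(*ppCaptureGraph);
    if (!pMC) return E_NOINTERFACE;
    hr = pMC->Run();
    if (FAILED(hr)) return hr;

    return pController->BridgeGraphs(pSink, *ppFeeder);
}

HRESULT StopSegment(IGMFBridgeController* pController, IGraphBuilder* pCaptureGraph)
{
    // Disconnect the bridge; in discard mode the preview graph keeps running
    // and simply drops the capture data again.
    HRESULT hr = pController->BridgeGraphs(NULL, NULL);
    if (FAILED(hr)) return hr;

    // Stopping the capture graph finalises the AVI file; the graph can then
    // be discarded or rebuilt for the next segment.
    CComQIPtr<IMediaControl> pMC(pCaptureGraph);
    if (!pMC) return E_NOINTERFACE;
    return pMC->Stop();
}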
Seamless Clip Player
My second task is to play a sequence of separate video clips as a single movie, allowing
the user to seek about within the movie, so that the breaks between clips are not apparent.
The GMFPlay application does this by having a separate graph for each source clip and a
single graph containing the renderers. It can then switch seamlessly between source clips
as necessary.
This requires a few extra features: the controller signals the app when the source graph
reaches the end, so that the app can specify which source clip should come next. Dynamic
type changes are passed upstream, so that type changes requested by the video renderer
can be implemented in the decoder in another graph. And the allocator is shared, so that
buffers produced by the decoders can be fed directly to the renderers when connected.
The controller is in suspend mode; this means that when a source graph is not connected,
the data is blocked at the sink, ready to play back as soon as it is bridged to the render
graph.
The sink filter input pins are configured to accept only uncompressed data. This is to ensure that the decompressor is always in the source graph, so clips in different formats can be linked, provided that the same decompressed format can be output. If the clips have
different dimensions, the player will attempt to switch the video media type at the video
renderer using ReceiveConnection. In the current player, there is somewhat less flexibility
in changing audio properties between clips, particularly as regards buffering
requirements.
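A much-simplified sketch of that clip-switching logic is shown below. Only the controller method names and their behaviour are taken from this article (they are described in the next section); the message plumbing, the PlayerState structure and the exact parameter lists are assumptions for illustration.

#include <string>
#include <vector>
// Plus the DirectShow / GMFBridge headers from the earlier sketches.

#define WM_GRAPH_NOTIFY (WM_APP + 1)    // application-defined notification message

struct PlayerState                       // hypothetical application state
{
    CComPtr<IGMFBridgeController> pController;
    CComPtr<IUnknown>             pFeeder;       // feeder filter in the render graph
    CComPtr<IGraphBuilder>        pSourceGraph;  // currently bridged source graph
    std::vector<std::wstring>     clips;         // clip file names to play
    size_t                        nextClip = 0;
};

void OnPlayerCreated(HWND hwnd, PlayerState& state)
{
    // Ask the controller to post a message to our window whenever the
    // current source graph reaches its end (parameter list assumed).
    state.pController->SetNotify((LONG_PTR)hwnd, WM_GRAPH_NOTIFY);
}

// Called from the window procedure when WM_GRAPH_NOTIFY arrives.
// Error handling is omitted for brevity.
void OnEndOfSegment(PlayerState& state)
{
    if (state.nextClip >= state.clips.size())
    {
        // No clips left: let EndOfStream flow into the render graph so the
        // app can wait for EC_COMPLETE before stopping playback.
        state.pController->NoMoreSegments();
        return;
    }

    // Build and start the next source graph, then switch the bridge to it.
    CComPtr<IGraphBuilder> pNextGraph;
    pNextGraph.CoCreateInstance(CLSID_FilterGraph);

    CComPtr<IUnknown> pNextSink;
    state.pController->CreateSourceGraph(
        CComBSTR(state.clips[state.nextClip].c_str()), pNextGraph, &pNextSink);

    CComQIPtr<IMediaControl> pMC(pNextGraph);
    if (pMC) pMC->Run();

    state.pController->BridgeGraphs(pNextSink, state.pFeeder);

    // Stop the previous source graph and keep a reference to the new one.
    if (state.pSourceGraph)
    {
        CComQIPtr<IMediaControl> pOld(state.pSourceGraph);
        if (pOld) pOld->Stop();
    }
    state.pSourceGraph = pNextGraph;
    state.nextClip++;
}

The buffering requested with SetBufferMinimum gives a handler like this time to build and start the next source graph before the render graph runs dry.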
Basic Operations
The Controller
The central component of the bridge is the BridgeController. This COM object is created
by the application to manage the connection between graphs, and it in turn creates and
manages the other components: the sink and feeder filters, and a BridgeStream object for
each pin-to-pin connection.
The controller’s IGMFBridgeController interface provides the application with the
following methods:
AddStream: Creates a BridgeStream object for one audio or video stream, and specifies options for that stream. The sink and feeder filters will have a pin for each stream.

SetBufferMinimum: The application can gain more time to perform the change-over from one graph to another by increasing the buffering in the source filter using this method, at a cost of increased latency.

SetNotify: Allows the application to receive end-of-segment notifications via a posted Windows message. The end of segment condition is received on worker threads in the sink filter’s pins, and this method transfers the processing to the application’s main thread.

InsertSinkFilter: Creates a sink filter and inserts it into a graph. The sink filter has an input pin linked to the video BridgeStream, and/or an input pin linked to the audio BridgeStream.

InsertSourceFilter: Creates a feeder filter and inserts it into a graph. The filter has output pins corresponding to each BridgeStream that the controller manages (audio and/or video). The media type negotiation of the output pins needs to refer back (via the BridgeStreams) to the sink filter’s pins, so this method requires a graph containing a sink filter to be specified as well.

CreateSourceGraph: Inserts a sink filter into the graph, and then builds a playback graph for the file specified, connecting the decoder outputs to the sink filter’s pins. The graph is built using AddSourceFilter and Connect: this is to ensure that only the streams that are required are rendered.

CreateRenderGraph: This method is given a fully-built source graph, and an empty render graph. It inserts a source filter with an output pin for each of the sink filter’s input pins, and it renders each of these streams.

BridgeGraphs: Makes a connection between a sink filter (in a source graph) and a feeder filter (in a render graph), disconnecting any previous connection. It can be called with two NULL parameters to disconnect the current bridge. It can also be called with just a sink filter parameter, to enable media type negotiation when building a render graph.

NoMoreSegments: Indicates that the app has no further source graphs to attach, so the EndOfStream signals are passed on to the render graph. The end of segment Notify calls are passed to the application when the EndOfStream calls arrive at the sink filter. But at this point, there will still be some unplayed data queued in the render graph. The NoMoreSegments call passes the EndOfStream onto the render graph, so the app can wait for EC_COMPLETE from the render graph before stopping playback.

GetSegmentTime: Returns the current time in the playback graph, as an offset from the beginning of the current source graph. Essentially, returns the current position of the renderers, given in terms of the source graph. The demo application uses this to update its slider position.
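As a small example, the slider update mentioned above could be driven from a UI timer along these lines; the out-parameter of GetSegmentTime is assumed to be seconds as a double, since the article only says it returns an offset from the start of the current source graph.

#include <commctrl.h>   // trackbar messages; DirectShow/GMFBridge headers as before

// Hedged sketch: poll GetSegmentTime on a timer and move a trackbar whose
// range has been set in whole seconds.
void OnUpdateTimer(HWND hwndSlider, IGMFBridgeController* pController)
{
    double secs = 0;
    if (SUCCEEDED(pController->GetSegmentTime(&secs)))
    {
        SendMessage(hwndSlider, TBM_SETPOS, TRUE, (LPARAM)(LONG)secs);
    }
}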
The controller has a BridgeStream for each stream to be connected – video and/or audio.
The BridgeStream object manages the connection between an input pin in one graph, and
an output pin in another graph. It passes calls in both directions to manage media type and
allocator negotiation and data delivery.
Media Type Negotiation
When building the graphs, the first thing the pins need to do is negotiate a suitable
media type. The input pins agree a media type when the source graph is built. But this
media type needs to be acceptable to the filters used in the render graph, and to other
source graphs that are to be used with the same bridge, and of course the source graph
building must be completed before these other graphs have been created.
So the sink input pin negotiates a media type, and when the first render graph is built this
can be renegotiated; once that render graph is built, the type is fixed and subsequent
source graphs must be able to negotiate the same type.
The steps in the basic type-negotiation process are as follows:
1. When configuring the controller, the application can specify the range of media types to be accepted in the source graph. This can be uncompressed, AVI Mux-compatible or any. The SinkInputPin accepts any types that meet this specification. For example, if uncompressed is specified, the sink input pin will reject any media types that it does not recognise as RGB, YUV or PCM audio.

2. When a render graph is being built, the GetMediaType enumeration of preferred output types comes from the output pin upstream of the Sink, and this same output pin is called to QueryAccept a type before connection.

3. If the filter is in suspend mode, then a custom redirecting allocator is used at the sink input pin (this is discussed further below). In this mode, a dynamic media type change can be initiated by the sink to switch to a new media type. So the render output pin is allowed to negotiate a different type provided the upstream output pin’s QueryAccept accepts it. This is essential for negotiating efficient surface formats with a video renderer.

4. In discard mode, this type of dynamic change is not possible, so the feeder output pin will only support the type that the sink input pin has already negotiated. This mode is typically used with a capture graph, where the bridged stream is being fed to a multiplexor, and hence the AVI Mux-compatible option is sufficient to ensure that the correct type is chosen. It is possible to change the sink input’s media type with a Reconnect call, but the code currently does not do this.

5. Once the render graph has been built, the bridge media type is fixed. Further graphs after this point will only be allowed to negotiate the type that is set in the bridge. However, in suspend mode, dynamic media type changes originated by (for example) the video renderer are still possible.
Starting with version 1.0.0.5, the GMFBridge code supports clips in different formats. We
allow dynamic changes between audio PCM formats, since the renderer can normally
handle a dynamic switch like this. If the clips have different video dimensions, we may be
able to switch dynamically to that type. This adds an additional complexity that is
discussed below.
To switch media types, we use dynamic type changes in the same way that the video
renderer does. This means attaching the new media type to a sample returned from
GetBuffer. We can only do this if we implement the allocator ourselves, so that the
GetBuffer method is implemented in our code. In discard mode, we don’t use a custom allocator, so dynamic type changes are not available there. Discard mode is
normally used in a video capture environment where downstream-originated dynamic
type changes are not particularly important.
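As a rough illustration of the mechanism (not the GMFBridge implementation itself), an allocator derived from the base-class CMemAllocator can attach a pending type in GetBuffer like this; the class and member names are invented for the example.

#include <streams.h>    // DirectShow base classes

class CSwitchingAllocator : public CMemAllocator
{
public:
    CSwitchingAllocator(HRESULT* phr)
        : CMemAllocator(NAME("Switching allocator"), NULL, phr) {}

    // Called when the downstream graph needs a different format.
    void SetPendingType(const CMediaType& mt)
    {
        CAutoLock lock(&m_csType);
        m_NewType = mt;
        m_bTypePending = true;
    }

    // The upstream filter calls GetBuffer to obtain an output sample. By
    // attaching the new media type to that sample, the change is delivered
    // upstream in the same way the video renderer signals format changes.
    STDMETHODIMP GetBuffer(IMediaSample** ppSample, REFERENCE_TIME* pStart,
                           REFERENCE_TIME* pEnd, DWORD dwFlags)
    {
        HRESULT hr = CMemAllocator::GetBuffer(ppSample, pStart, pEnd, dwFlags);
        if (SUCCEEDED(hr))
        {
            CAutoLock lock(&m_csType);
            if (m_bTypePending)
            {
                (*ppSample)->SetMediaType(&m_NewType);
                m_bTypePending = false;
            }
        }
        return hr;
    }

private:
    CCritSec   m_csType;
    CMediaType m_NewType;
    bool       m_bTypePending = false;
};

The upstream filter that called GetBuffer checks the sample with IMediaSample::GetMediaType when it fills the buffer and, if a new type is attached, switches its output format accordingly.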
When the render graphs are used for video and audio rendering, it’s best to build the
render graph immediately after the first source graph is built. Then subsequent source
graphs can be verified for compatibility with the source and render graphs already built.
This is how the GMFPlay demo application behaves.
The GMFPreview application, on the other hand, runs in discard mode and the render
graph contains the multiplexor and file writer. In this case the correct media type is
ensured by the eMuxInputs option, which means that the sink input only accepts types
that are known to be acceptable to the AVI Multiplexor filter.
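That option is chosen when the stream is added to the controller; roughly as below, where the parameter order and the BOOL flags are assumptions and only the eMuxInputs option name comes from the text above.

// Hedged sketch: the sink input accepts only types the AVI Mux can take,
// and data is discarded while no capture graph is bridged.
hr = pController->AddStream(TRUE /* video */, eMuxInputs, TRUE /* discard when unbridged */);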
Allocators and Type Changes
The simplest way to build a bridge like this would be to copy the data received at the sink
input into some buffer outside the DirectShow architecture, and then copy it back into
IMediaSample
objects in the feeder source filter on a separate thread. Obviously this