Top |
struct | GstVideoAlignment |
#define | GST_META_TAG_VIDEO_STR |
#define | GST_META_TAG_VIDEO_ORIENTATION_STR |
#define | GST_META_TAG_VIDEO_SIZE_STR |
#define | GST_META_TAG_VIDEO_COLORSPACE_STR |
enum | GstVideoFormat |
#define | GST_VIDEO_MAX_PLANES |
#define | GST_VIDEO_MAX_COMPONENTS |
struct | GstVideoFormatInfo |
enum | GstVideoChromaSite |
enum | GstVideoFormatFlags |
enum | GstVideoPackFlags |
#define | GST_VIDEO_SIZE_RANGE |
#define | GST_VIDEO_FPS_RANGE |
#define | GST_VIDEO_FORMATS_ALL |
enum | GstVideoColorRange |
enum | GstVideoColorMatrix |
enum | GstVideoTransferFunction |
enum | GstVideoColorPrimaries |
GstVideoColorimetry | |
struct | GstVideoInfo |
enum | GstVideoInterlaceMode |
enum | GstVideoFlags |
struct | GstVideoFrame |
enum | GstVideoFrameFlags |
enum | GstVideoBufferFlags |
enum | GstVideoTileType |
enum | GstVideoTileMode |
gboolean gst_video_calculate_display_ratio (guint *dar_n
,guint *dar_d
,guint video_width
,guint video_height
,guint video_par_n
,guint video_par_d
,guint display_par_n
,guint display_par_d
);
Given the pixel aspect ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.
dar_n |
Numerator of the calculated display_ratio |
|
dar_d |
Denominator of the calculated display_ratio |
|
video_width |
Width of the video frame in pixels |
|
video_height |
Height of the video frame in pixels |
|
video_par_n |
Numerator of the pixel aspect ratio of the input video. |
|
video_par_d |
Denominator of the pixel aspect ratio of the input video. |
|
display_par_n |
Numerator of the pixel aspect ratio of the display device |
|
display_par_d |
Denominator of the pixel aspect ratio of the display device |
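The display ratio follows directly from the frame size and the two pixel aspect ratios. A minimal plain-C sketch of that arithmetic (the helper names are ours, not library API; the real function additionally guards against overflow and returns FALSE on error):

```c
static unsigned long long
gcd_u64 (unsigned long long a, unsigned long long b)
{
  /* Euclid's algorithm, used to reduce the fraction to lowest terms */
  while (b != 0) {
    unsigned long long t = a % b;
    a = b;
    b = t;
  }
  return a;
}

/* DAR = (video_width * video_par_n * display_par_d)
 *     / (video_height * video_par_d * display_par_n),
 * reduced to lowest terms. Assumes the products fit in 64 bits. */
static void
calculate_display_ratio (unsigned *dar_n, unsigned *dar_d,
    unsigned video_width, unsigned video_height,
    unsigned video_par_n, unsigned video_par_d,
    unsigned display_par_n, unsigned display_par_d)
{
  unsigned long long n =
      (unsigned long long) video_width * video_par_n * display_par_d;
  unsigned long long d =
      (unsigned long long) video_height * video_par_d * display_par_n;
  unsigned long long g = gcd_u64 (n, d);

  *dar_n = (unsigned) (n / g);
  *dar_d = (unsigned) (d / g);
}
```

For example, a 720x576 frame with a 16:15 pixel aspect ratio shown on square pixels yields a 4:3 display ratio.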
void (*GstVideoConvertSampleCallback) (GstSample *sample
,GError *error
,gpointer user_data
);
GstSample * gst_video_convert_sample (GstSample *sample
,const GstCaps *to_caps
,GstClockTime timeout
,GError **error
);
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video format or any image format (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
void gst_video_convert_sample_async (GstSample *sample
,const GstCaps *to_caps
,GstClockTime timeout
,GstVideoConvertSampleCallback callback
,gpointer user_data
,GDestroyNotify destroy_notify
);
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video format or any image format (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
callback will be called after conversion, when an error occurred, or if conversion didn't finish after timeout. callback will always be called from the thread default GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used, this will always be the global default main context.
destroy_notify will be called after the callback was called and user_data is not needed anymore.
sample |
||
to_caps |
the GstCaps to convert to |
|
timeout |
the maximum amount of time allowed for the processing. |
|
callback |
|
|
user_data |
extra data that will be passed to the callback |
|
destroy_notify |
|
void
gst_video_alignment_reset (GstVideoAlignment *align
);
Set align
to its default values with no padding and no alignment.
GstEvent *
gst_video_event_new_still_frame (gboolean in_still
);
Creates a new Still Frame event. If in_still
is TRUE
, then the event
represents the start of a still frame sequence. If it is FALSE
, then
the event ends a still frame sequence.
To parse an event created by gst_video_event_new_still_frame()
use
gst_video_event_parse_still_frame()
.
gboolean gst_video_event_parse_still_frame (GstEvent *event
,gboolean *in_still
);
Parse a GstEvent, identify if it is a Still Frame event, and return the still-frame state from the event if it is. If the event represents the start of a still frame, the in_still variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for the in_still variable in order to just check whether the event is a valid still-frame event.
Create a still frame event using gst_video_event_new_still_frame()
event |
A GstEvent to parse |
|
in_still |
A boolean to receive the still-frame status from the event, or NULL |
GstEvent * gst_video_event_new_downstream_force_key_unit (GstClockTime timestamp
,GstClockTime stream_time
,GstClockTime running_time
,gboolean all_headers
,guint count
);
Creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.
To parse an event created by gst_video_event_new_downstream_force_key_unit()
use
gst_video_event_parse_downstream_force_key_unit()
.
timestamp |
the timestamp of the buffer that starts a new key unit |
|
stream_time |
the stream_time of the buffer that starts a new key unit |
|
running_time |
the running_time of the buffer that starts a new key unit |
|
all_headers |
|
|
count |
integer that can be used to number key units |
gboolean gst_video_event_parse_downstream_force_key_unit (GstEvent *event
,GstClockTime *timestamp
,GstClockTime *stream_time
,GstClockTime *running_time
,gboolean *all_headers
,guint *count
);
Get timestamp, stream-time, running-time, all-headers and count in the force
key unit event. See gst_video_event_new_downstream_force_key_unit()
for a
full description of the downstream force key unit event.
running_time
will be adjusted for any pad offsets of pads it was passing through.
event |
A GstEvent to parse |
|
timestamp |
A pointer to the timestamp in the event. |
[out] |
stream_time |
A pointer to the stream-time in the event. |
[out] |
running_time |
A pointer to the running-time in the event. |
[out] |
all_headers |
A pointer to the all_headers flag in the event. |
[out] |
count |
A pointer to the count field of the event. |
[out] |
GstEvent * gst_video_event_new_upstream_force_key_unit (GstClockTime running_time
,gboolean all_headers
,guint count
);
Creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.
running_time
can be set to request a new key unit at a specific
running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a
new key unit as soon as possible.
To parse an event created by gst_video_event_new_upstream_force_key_unit() use gst_video_event_parse_upstream_force_key_unit().
running_time |
the running_time at which a new key unit should be produced |
|
all_headers |
|
|
count |
integer that can be used to number key units |
gboolean gst_video_event_parse_upstream_force_key_unit (GstEvent *event
,GstClockTime *running_time
,gboolean *all_headers
,guint *count
);
Get running-time, all-headers and count in the force key unit event. See
gst_video_event_new_upstream_force_key_unit()
for a full description of the
upstream force key unit event.
Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()
running_time
will be adjusted for any pad offsets of pads it was passing through.
event |
A GstEvent to parse |
|
running_time |
A pointer to the running_time in the event. |
[out] |
all_headers |
A pointer to the all_headers flag in the event. |
[out] |
count |
A pointer to the count field in the event. |
[out] |
gboolean
gst_video_event_is_force_key_unit (GstEvent *event
);
Checks if an event is a force key unit event. Returns TRUE for both upstream and downstream force key unit events.
GstVideoChromaSite
gst_video_chroma_from_string (const gchar *s
);
Convert s to a GstVideoChromaSite.
Returns a GstVideoChromaSite, or GST_VIDEO_CHROMA_SITE_UNKNOWN when s does not contain a valid chroma description.
const gchar *
gst_video_chroma_to_string (GstVideoChromaSite site
);
Converts site
to its string representation.
void (*GstVideoFormatUnpack) (const GstVideoFormatInfo *info
,GstVideoPackFlags flags
,gpointer dest
,const gpointer data[GST_VIDEO_MAX_PLANES]
,const gint stride[GST_VIDEO_MAX_PLANES]
,gint x
,gint y
,gint width
);
Unpacks width pixels from the given planes and strides containing data of format info. The pixels will be unpacked into dest with each component interleaved. dest should be at least big enough to hold width * n_components * size(unpack_format) bytes.
For subsampled formats, the components will be duplicated in the destination array. Reconstruction of the missing components can be performed in a separate step after unpacking.
void (*GstVideoFormatPack) (const GstVideoFormatInfo *info
,GstVideoPackFlags flags
,const gpointer src
,gint sstride
,gpointer data[GST_VIDEO_MAX_PLANES]
,const gint stride[GST_VIDEO_MAX_PLANES]
,GstVideoChromaSite chroma_site
,gint y
,gint width
);
Packs width pixels from src to the given planes and strides in the format info. The pixels from the source have each component interleaved and will be packed into the planes in data.
This function operates on pack_lines lines, meaning that src should contain at least pack_lines lines with a stride of sstride, and y should be a multiple of pack_lines.
Subsampled formats will use the horizontally cosited component in the destination. Subsampling should be performed before packing.
Because this function does not have an x coordinate, it is not possible to pack pixels starting from an unaligned position. For tiled images this means that packing should start from a tile coordinate. For subsampled formats this means that complete pixels need to be packed.
info |
||
flags |
flags to control the packing |
|
src |
a source array |
|
sstride |
the source array stride |
|
data |
pointers to the destination data planes |
|
stride |
strides of the destination planes |
|
chroma_site |
the chroma siting of the target when subsampled (not used) |
|
y |
the y position in the image to pack to |
|
width |
the amount of pixels to pack. |
#define GST_VIDEO_FORMAT_INFO_IS_YUV(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_YUV)
#define GST_VIDEO_FORMAT_INFO_IS_RGB(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_RGB)
#define GST_VIDEO_FORMAT_INFO_IS_GRAY(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_GRAY)
#define GST_VIDEO_FORMAT_INFO_HAS_ALPHA(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_ALPHA)
#define GST_VIDEO_FORMAT_INFO_IS_LE(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_LE)
#define GST_VIDEO_FORMAT_INFO_HAS_PALETTE(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_PALETTE)
#define GST_VIDEO_FORMAT_INFO_IS_COMPLEX(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_COMPLEX)
#define GST_VIDEO_FORMAT_INFO_N_COMPONENTS(info) ((info)->n_components)
#define GST_VIDEO_FORMAT_INFO_PSTRIDE(info,c) ((info)->pixel_stride[c])
Pixel stride for the given component. This is the number of bytes to the pixel immediately to the right, that is, the number of bytes from one pixel to the next. When bits < 8, the stride is expressed in bits.
Examples: for 24-bit RGB, the pixel stride would be 3 bytes, while it would be 4 bytes for RGBx or ARGB, and 8 bytes for ARGB64 or AYUV64. For planar formats such as I420 the pixel stride is usually 1 byte. For YUY2 it would be 2 bytes.
#define GST_VIDEO_FORMAT_INFO_N_PLANES(info) ((info)->n_planes)
Number of planes. This is the number of memory planes the pixel layout is organized in. The number of planes can be less than the number of components (e.g. Y, U, V, A or R, G, B, A) when multiple components are packed into one plane.
Examples: RGB/RGBx/RGBA: 1 plane, 3/3/4 components; I420: 3 planes, 3 components; NV21/NV12: 2 planes, 3 components.
#define GST_VIDEO_FORMAT_INFO_PLANE(info,c) ((info)->plane[c])
Plane number where the given component can be found. A plane may contain data for multiple components.
#define GST_VIDEO_FORMAT_INFO_SCALE_WIDTH(info,c,w) GST_VIDEO_SUB_SCALE ((info)->w_sub[c],(w))
#define GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT(info,c,h) GST_VIDEO_SUB_SCALE ((info)->h_sub[c],(h))
#define GST_VIDEO_FORMAT_INFO_STRIDE(info,strides,comp) ((strides)[(info)->plane[comp]])
Row stride in bytes, that is number of bytes from the first pixel component of a row to the first pixel component in the next row. This might include some row padding (memory not actually used for anything, to make sure the beginning of the next row is aligned in a particular way).
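Taken together, plane, poffset, pixel stride and row stride give a simple addressing rule: offset = y * stride[plane] + x * pixel_stride + poffset. A sketch for NV12 (2 planes, 3 components: 0=Y, 1=U, 2=V); the tables hardcode the well-known NV12 layout, and the function and array names are illustrative, not library API:

```c
#include <stddef.h>

static const int nv12_plane[3]   = { 0, 1, 1 }; /* U and V share plane 1 */
static const int nv12_poffset[3] = { 0, 0, 1 }; /* V starts 1 byte after U */
static const int nv12_pstride[3] = { 1, 2, 2 }; /* UV plane interleaves U,V */

/* Byte offset of component c at pixel (x, y), given per-plane row
 * strides. For the chroma components of NV12, x and y must first be
 * scaled down by the 2x2 subsampling. */
static size_t
component_offset (int c, int x, int y, const int stride[2])
{
  if (c > 0) {  /* chroma: 4:2:0 subsampling */
    x /= 2;
    y /= 2;
  }
  return (size_t) y * stride[nv12_plane[c]]
      + (size_t) x * nv12_pstride[c] + nv12_poffset[c];
}
```

With a row stride of 320 bytes for both planes, the Y sample at (10, 4) lives at byte 1290 of plane 0, and the corresponding U and V samples at bytes 650 and 651 of plane 1.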
GstVideoFormat gst_video_format_from_masks (gint depth
,gint bpp
,gint endianness
,guint red_mask
,guint green_mask
,guint blue_mask
,guint alpha_mask
);
Find the GstVideoFormat for the given parameters.
depth |
the amount of bits used for a pixel |
|
bpp |
the amount of bits used to store a pixel. This value is bigger than depth |
|
endianness |
the endianness of the masks, G_LITTLE_ENDIAN or G_BIG_ENDIAN |
|
red_mask |
the red mask |
|
green_mask |
the green mask |
|
blue_mask |
the blue mask |
|
alpha_mask |
the alpha mask, or 0 if no alpha mask |
a GstVideoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do not specify a known format.
GstVideoFormat
gst_video_format_from_fourcc (guint32 fourcc
);
Converts a FOURCC value into the corresponding GstVideoFormat. If the FOURCC cannot be represented by GstVideoFormat, GST_VIDEO_FORMAT_UNKNOWN is returned.
guint32
gst_video_format_to_fourcc (GstVideoFormat format
);
Converts a GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If format
has
no corresponding FOURCC value, 0 is returned.
GstVideoFormat
gst_video_format_from_string (const gchar *format
);
Convert the format string to its GstVideoFormat.
Returns the GstVideoFormat for format, or GST_VIDEO_FORMAT_UNKNOWN when the string is not a known format.
const GstVideoFormatInfo *
gst_video_format_get_info (GstVideoFormat format
);
Get the GstVideoFormatInfo for format.
#define GST_VIDEO_CAPS_MAKE(format)
Generic caps string for video, for use in pad templates.
gboolean gst_video_colorimetry_matches (GstVideoColorimetry *cinfo
,const gchar *color
);
Check if the colorimetry information in cinfo matches that of the string color.
gboolean gst_video_colorimetry_from_string (GstVideoColorimetry *cinfo
,const gchar *color
);
Parse the colorimetry string and update cinfo
with the parsed
values.
gchar *
gst_video_colorimetry_to_string (GstVideoColorimetry *cinfo
);
Make a string representation of cinfo
.
void gst_video_color_range_offsets (GstVideoColorRange range
,const GstVideoFormatInfo *info
,gint offset[GST_VIDEO_MAX_COMPONENTS]
,gint scale[GST_VIDEO_MAX_COMPONENTS]
);
Compute the offset and scale values for each component of info
. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
info
and range
.
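A sketch of the normalization described above, using the standard 8-bit limited-range luma values (valid samples span [16, 235], so offset = 16 and scale = 219); gst_video_color_range_offsets() derives such values per component from range and info, and the helper names here are ours:

```c
/* (c - offset) / scale maps a component value into [0.0 .. 1.0] */
static double
component_to_unit (int c, int offset, int scale)
{
  return (double) (c - offset) / scale;
}

/* (v * scale) + offset is the reverse operation, rounded to nearest */
static int
unit_to_component (double v, int offset, int scale)
{
  return (int) (v * scale + 0.5) + offset;
}
```

With these values, peak white (235) normalizes to 1.0 and black (16) to 0.0.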
#define GST_VIDEO_INFO_IS_GRAY(i) (GST_VIDEO_FORMAT_INFO_IS_GRAY((i)->finfo))
#define GST_VIDEO_INFO_HAS_ALPHA(i) (GST_VIDEO_FORMAT_INFO_HAS_ALPHA((i)->finfo))
#define GST_VIDEO_INFO_IS_INTERLACED(i) ((i)->interlace_mode != GST_VIDEO_INTERLACE_MODE_PROGRESSIVE)
#define GST_VIDEO_INFO_FLAG_IS_SET(i,flag) ((GST_VIDEO_INFO_FLAGS(i) & (flag)) == (flag))
#define GST_VIDEO_INFO_FLAG_SET(i,flag) (GST_VIDEO_INFO_FLAGS(i) |= (flag))
#define GST_VIDEO_INFO_FLAG_UNSET(i,flag) (GST_VIDEO_INFO_FLAGS(i) &= ~(flag))
#define GST_VIDEO_INFO_N_PLANES(i) (GST_VIDEO_FORMAT_INFO_N_PLANES((i)->finfo))
#define GST_VIDEO_INFO_N_COMPONENTS(i) GST_VIDEO_FORMAT_INFO_N_COMPONENTS((i)->finfo)
#define GST_VIDEO_INFO_COMP_DEPTH(i,c) GST_VIDEO_FORMAT_INFO_DEPTH((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_DATA(i,d,c) GST_VIDEO_FORMAT_INFO_DATA((i)->finfo,d,(c))
#define GST_VIDEO_INFO_COMP_OFFSET(i,c) GST_VIDEO_FORMAT_INFO_OFFSET((i)->finfo,(i)->offset,(c))
#define GST_VIDEO_INFO_COMP_STRIDE(i,c) GST_VIDEO_FORMAT_INFO_STRIDE((i)->finfo,(i)->stride,(c))
#define GST_VIDEO_INFO_COMP_WIDTH(i,c) GST_VIDEO_FORMAT_INFO_SCALE_WIDTH((i)->finfo,(c),(i)->width)
#define GST_VIDEO_INFO_COMP_HEIGHT(i,c) GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT((i)->finfo,(c),(i)->height)
#define GST_VIDEO_INFO_COMP_PLANE(i,c) GST_VIDEO_FORMAT_INFO_PLANE((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_PSTRIDE(i,c) GST_VIDEO_FORMAT_INFO_PSTRIDE((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_POFFSET(i,c) GST_VIDEO_FORMAT_INFO_POFFSET((i)->finfo,(c))
void
gst_video_info_init (GstVideoInfo *info
);
Initialize info
with default values.
void gst_video_info_set_format (GstVideoInfo *info
,GstVideoFormat format
,guint width
,guint height
);
Set the default info for a video frame of format
and width
and height
.
Note: This initializes info
first, no values are preserved.
gboolean gst_video_info_from_caps (GstVideoInfo *info
,const GstCaps *caps
);
Parse caps
and update info
.
GstCaps *
gst_video_info_to_caps (GstVideoInfo *info
);
Convert the values of info
into a GstCaps.
gboolean gst_video_info_convert (GstVideoInfo *info
,GstFormat src_format
,gint64 src_value
,GstFormat dest_format
,gint64 *dest_value
);
Converts among various GstFormat types. This function handles GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For raw video, GST_FORMAT_DEFAULT corresponds to video frames. This function can be used to handle pad queries of the type GST_QUERY_CONVERT.
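The conversions reduce to simple arithmetic once the frame size (GstVideoInfo.size) and framerate are known. A sketch of the bytes/frames/time relationships (plain C, our helper names; the real function uses overflow-safe scaling helpers):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL  /* same value as GST_SECOND */

/* GST_FORMAT_BYTES -> GST_FORMAT_DEFAULT: frame_size is the size of
 * one raw video frame in bytes */
static uint64_t
bytes_to_frames (uint64_t bytes, uint64_t frame_size)
{
  return bytes / frame_size;
}

/* GST_FORMAT_DEFAULT -> GST_FORMAT_TIME for a framerate of
 * fps_n / fps_d frames per second */
static uint64_t
frames_to_time (uint64_t frames, unsigned fps_n, unsigned fps_d)
{
  return frames * NSEC_PER_SEC * fps_d / fps_n;
}
```

For example, 25 frames at 25/1 fps correspond to exactly one second of GST_FORMAT_TIME.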
gboolean gst_video_info_is_equal (const GstVideoInfo *info
,const GstVideoInfo *other
);
Compares two GstVideoInfo and returns whether they are equal or not.
void gst_video_info_align (GstVideoInfo *info
,GstVideoAlignment *align
);
Adjust the offset and stride fields in info
so that the padding and
stride alignment in align
is respected.
Extra padding will be added to the right side when stride alignment padding
is required and align
will be updated with the new padding values.
gboolean gst_video_frame_map_id (GstVideoFrame *frame
,GstVideoInfo *info
,GstBuffer *buffer
,gint id
,GstMapFlags flags
);
Use info
and buffer
to fill in the values of frame
with the video frame
information of frame id
.
When id
is -1, the default frame is mapped. When id
!= -1, this function
will return FALSE
when there is no GstVideoMeta with that id.
All video planes of buffer
will be mapped and the pointers will be set in
frame->data
.
frame |
pointer to GstVideoFrame |
|
info |
||
buffer |
the buffer to map |
|
id |
the frame id to map |
|
flags |
gboolean gst_video_frame_map (GstVideoFrame *frame
,GstVideoInfo *info
,GstBuffer *buffer
,GstMapFlags flags
);
Use info
and buffer
to fill in the values of frame
.
All video planes of buffer
will be mapped and the pointers will be set in
frame->data
.
void
gst_video_frame_unmap (GstVideoFrame *frame
);
Unmap the memory previously mapped with gst_video_frame_map.
gboolean gst_video_frame_copy (GstVideoFrame *dest
,const GstVideoFrame *src
);
Copy the contents from src
to dest
.
gboolean gst_video_frame_copy_plane (GstVideoFrame *dest
,const GstVideoFrame *src
,guint plane
);
Copy the plane with index plane
from src
to dest
.
#define GST_VIDEO_FRAME_FLAG_IS_SET(f,fl) ((GST_VIDEO_FRAME_FLAGS(f) & (fl)) == (fl))
#define GST_VIDEO_FRAME_IS_INTERLACED(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_INTERLACED))
#define GST_VIDEO_FRAME_IS_TFF(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_TFF))
#define GST_VIDEO_FRAME_IS_RFF(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_RFF))
#define GST_VIDEO_FRAME_IS_ONEFIELD(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_ONEFIELD))
#define GST_VIDEO_FRAME_N_PLANES(f) (GST_VIDEO_INFO_N_PLANES(&(f)->info))
#define GST_VIDEO_FRAME_PLANE_OFFSET(f,p) (GST_VIDEO_INFO_PLANE_OFFSET(&(f)->info,(p)))
#define GST_VIDEO_FRAME_PLANE_STRIDE(f,p) (GST_VIDEO_INFO_PLANE_STRIDE(&(f)->info,(p)))
#define GST_VIDEO_FRAME_N_COMPONENTS(f) GST_VIDEO_INFO_N_COMPONENTS(&(f)->info)
#define GST_VIDEO_FRAME_COMP_DEPTH(f,c) GST_VIDEO_INFO_COMP_DEPTH(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_DATA(f,c) GST_VIDEO_INFO_COMP_DATA(&(f)->info,(f)->data,(c))
#define GST_VIDEO_FRAME_COMP_STRIDE(f,c) GST_VIDEO_INFO_COMP_STRIDE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_OFFSET(f,c) GST_VIDEO_INFO_COMP_OFFSET(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_WIDTH(f,c) GST_VIDEO_INFO_COMP_WIDTH(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_HEIGHT(f,c) GST_VIDEO_INFO_COMP_HEIGHT(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_PLANE(f,c) GST_VIDEO_INFO_COMP_PLANE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_PSTRIDE(f,c) GST_VIDEO_INFO_COMP_PSTRIDE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_POFFSET(f,c) GST_VIDEO_INFO_COMP_POFFSET(&(f)->info,(c))
guint gst_video_tile_get_index (GstVideoTileMode mode
,gint x
,gint y
,gint x_tiles
,gint y_tiles
);
Get the tile index of the tile at coordinates x
and y
in the tiled
image of x_tiles
by y_tiles
.
Use this method when mode
is of type GST_VIDEO_TILE_MODE_INDEXED.
#define GST_VIDEO_TILE_MAKE_MODE(num, type)
Use this macro to create new tile modes.
#define GST_VIDEO_TILE_MODE_TYPE(mode) ((mode) & GST_VIDEO_TILE_TYPE_MASK)
Get the tile mode type of mode
#define GST_VIDEO_TILE_MODE_IS_INDEXED(mode) (GST_VIDEO_TILE_MODE_TYPE(mode) == GST_VIDEO_TILE_TYPE_INDEXED)
Check if mode
is an indexed tile type
#define GST_VIDEO_TILE_MAKE_STRIDE(x_tiles, y_tiles)
Encode the number of tiles in X and Y into the stride.
#define GST_VIDEO_TILE_X_TILES(stride) ((stride) & GST_VIDEO_TILE_X_TILES_MASK)
Extract the number of tiles in X from the stride value.
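A sketch of the encode/extract pair, assuming the usual split of the lower 16 bits for the X tile count and the upper bits for Y, matching the library's GST_VIDEO_TILE_X_TILES_MASK and Y-tiles shift; verify against the header before relying on the exact layout:

```c
#define TILE_Y_SHIFT 16      /* assumed GST_VIDEO_TILE_Y_TILES shift */
#define TILE_X_MASK  0xffffu /* assumed GST_VIDEO_TILE_X_TILES_MASK */

/* Pack the tile counts into one stride value */
static unsigned
make_tile_stride (unsigned x_tiles, unsigned y_tiles)
{
  return (y_tiles << TILE_Y_SHIFT) | x_tiles;
}

/* Extract the counts back out of the stride */
static unsigned
tile_x_tiles (unsigned stride)
{
  return stride & TILE_X_MASK;
}

static unsigned
tile_y_tiles (unsigned stride)
{
  return stride >> TILE_Y_SHIFT;
}
```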
struct GstVideoAlignment { guint padding_top; guint padding_bottom; guint padding_left; guint padding_right; guint stride_align[GST_VIDEO_MAX_PLANES]; };
Extra alignment parameters for the memory of video buffers. This structure is usually used to configure the bufferpool if it supports the GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT option.
#define GST_META_TAG_VIDEO_STR "video"
This metadata is relevant for video streams.
Since 1.2
#define GST_META_TAG_VIDEO_ORIENTATION_STR "orientation"
This metadata stays relevant as long as video orientation is unchanged.
Since 1.2
#define GST_META_TAG_VIDEO_SIZE_STR "size"
This metadata stays relevant as long as video size is unchanged.
Since 1.2
#define GST_META_TAG_VIDEO_COLORSPACE_STR "colorspace"
This metadata stays relevant as long as video colorspace is unchanged.
Since 1.2
Enum value describing the most common video formats.
Unknown or unset video format id |
||
Encoded video format. Only ever use that in caps for special video formats in combination with non-system memory GstCapsFeatures where it does not make sense to specify a real video format. |
||
planar 4:2:0 YUV |
||
planar 4:2:0 YVU (like I420 but UV planes swapped) |
||
packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...) |
||
packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...) |
||
packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...) |
||
sparse rgb packed into 32 bit, space last |
||
sparse reverse rgb packed into 32 bit, space last |
||
sparse rgb packed into 32 bit, space first |
||
sparse reverse rgb packed into 32 bit, space first |
||
rgb with alpha channel last |
||
reverse rgb with alpha channel last |
||
rgb with alpha channel first |
||
reverse rgb with alpha channel first |
||
rgb |
||
reverse rgb |
||
planar 4:1:1 YUV |
||
planar 4:2:2 YUV |
||
packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...) |
||
planar 4:4:4 YUV |
||
packed 4:2:2 10-bit YUV, complex format |
||
packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order |
||
planar 4:2:0 YUV with interleaved UV plane |
||
planar 4:2:0 YUV with interleaved VU plane |
||
8-bit grayscale |
||
16-bit grayscale, most significant byte first |
||
16-bit grayscale, least significant byte first |
||
packed 4:4:4 YUV |
||
rgb 5-6-5 bits per component |
||
reverse rgb 5-6-5 bits per component |
||
rgb 5-5-5 bits per component |
||
reverse rgb 5-5-5 bits per component |
||
packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...) |
||
planar 4:4:2:0 AYUV |
||
8-bit paletted RGB |
||
planar 4:1:0 YUV |
||
planar 4:1:0 YUV (like YUV9 but UV planes swapped) |
||
packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...) |
||
rgb with alpha channel first, 16 bits per channel |
||
packed 4:4:4 YUV with alpha channel, 16 bits per channel (A0-Y0-U0-V0 ...) |
||
packed 4:4:4 RGB, 10 bits per channel |
||
planar 4:2:0 YUV, 10 bits per channel |
||
planar 4:2:0 YUV, 10 bits per channel |
||
planar 4:2:2 YUV, 10 bits per channel |
||
planar 4:2:2 YUV, 10 bits per channel |
||
planar 4:4:4 YUV, 10 bits per channel |
||
planar 4:4:4 YUV, 10 bits per channel |
||
planar 4:4:4 RGB, 8 bits per channel |
||
planar 4:4:4 RGB, 10 bits per channel |
||
planar 4:4:4 RGB, 10 bits per channel |
||
planar 4:2:2 YUV with interleaved UV plane |
||
planar 4:4:4 YUV with interleaved UV plane |
||
NV12 with 64x32 tiling in zigzag pattern |
struct GstVideoFormatInfo { GstVideoFormat format; const gchar *name; const gchar *description; GstVideoFormatFlags flags; guint bits; guint n_components; guint shift[GST_VIDEO_MAX_COMPONENTS]; guint depth[GST_VIDEO_MAX_COMPONENTS]; gint pixel_stride[GST_VIDEO_MAX_COMPONENTS]; guint n_planes; guint plane[GST_VIDEO_MAX_COMPONENTS]; guint poffset[GST_VIDEO_MAX_COMPONENTS]; guint w_sub[GST_VIDEO_MAX_COMPONENTS]; guint h_sub[GST_VIDEO_MAX_COMPONENTS]; GstVideoFormat unpack_format; GstVideoFormatUnpack unpack_func; gint pack_lines; GstVideoFormatPack pack_func; GstVideoTileMode tile_mode; guint tile_ws; guint tile_hs; gpointer _gst_reserved[GST_PADDING]; };
Information for a video format.
GstVideoFormat |
||
const gchar * |
string representation of the format |
|
const gchar * |
user readable description of the format |
|
GstVideoFormatFlags |
||
guint |
The number of bits used to pack data items. This can be less than 8 when multiple pixels are stored in a byte. For values > 8, multiple bytes should be read according to the endianness flag before applying the shift and mask. |
|
guint |
the number of components in the video format. |
|
guint |
the number of bits to shift away to get the component data |
|
guint |
the depth in bits for each component |
|
gint |
the pixel stride of each component. This is the amount of bytes to the pixel immediately to the right. When bits < 8, the stride is expressed in bits. For 24-bit RGB, this would be 3 bytes, for example, while it would be 4 bytes for RGBx or ARGB. |
|
guint |
the number of planes for this format. The number of planes can be less than the amount of components when multiple components are packed into one plane. |
|
guint |
the plane number where a component can be found |
|
guint |
the offset in the plane where the first pixel of the components can be found. |
|
guint |
subsampling factor of the width for the component. Use GST_VIDEO_SUB_SCALE to scale a width. |
|
guint |
subsampling factor of the height for the component. Use GST_VIDEO_SUB_SCALE to scale a height. |
|
GstVideoFormat |
the format of the unpacked pixels. This format must have the GST_VIDEO_FORMAT_FLAG_UNPACK flag set. |
|
GstVideoFormatUnpack |
an unpack function for this format |
|
gint |
the amount of lines that will be packed |
|
GstVideoFormatPack |
a pack function for this format |
|
GstVideoTileMode |
The tiling mode
|
|
guint |
||
guint |
||
gpointer |
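The w_sub/h_sub fields above are shift counts, and GST_VIDEO_SUB_SCALE divides by 2^shift with round-up semantics so that odd sizes still cover every pixel. A sketch of that rounding (our helper name; like the library's macro, it relies on arithmetic right shift of negative values, which common compilers provide):

```c
/* ceil(val / 2^shift) via negate-shift-negate */
static int
sub_scale (int shift, int val)
{
  return -((-val) >> shift);
}
```

For example, the chroma plane of a 7-pixel-wide I420 row (shift 1) still needs 4 samples.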
Various chroma sitings.
The different video flags that a format info can have.
The video format is YUV, components are numbered 0=Y, 1=U, 2=V. |
||
The video format is RGB, components are numbered 0=R, 1=G, 2=B. |
||
The video is gray, there is one gray component with index 0. |
||
The video format has an alpha component with index 3. |
||
The video format has data stored in little endianness. |
||
The video format has a palette. The palette is stored in the second plane and indexes are stored in the first plane. |
||
The video format has a complex layout that can't be described with the usual information in the GstVideoFormatInfo. |
||
This format can be used in a GstVideoFormatUnpack and GstVideoFormatPack function. |
||
The format is tiled, there is tiling information in the last plane. |
The different flags that can be used when packing and unpacking.
No flag |
||
When the source has a smaller depth than the target format, set the least significant bits of the target to 0. This is likely slightly faster but less accurate. When this flag is not specified, the most significant bits of the source are duplicated in the least significant bits of the destination. |
||
The source is interlaced. The unpacked format will be interlaced as well with each line containing information from alternating fields. (Since 1.2) |
Possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.
The color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B')
The video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB
unknown transfer function |
||
linear RGB, gamma 1.0 curve |
||
Gamma 1.8 curve |
||
Gamma 2.0 curve |
||
Gamma 2.2 curve |
||
Gamma 2.2 curve with a linear segment in the lower range |
||
Gamma 2.2 curve with a linear segment in the lower range |
||
Gamma 2.4 curve with a linear segment in the lower range |
||
Gamma 2.8 curve |
||
Logarithmic transfer characteristic 100:1 range |
||
Logarithmic transfer characteristic 316.22777:1 range |
The color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.
typedef struct { GstVideoColorRange range; GstVideoColorMatrix matrix; GstVideoTransferFunction transfer; GstVideoColorPrimaries primaries; } GstVideoColorimetry;
Structure describing the color info.
GstVideoColorRange |
the color range. This is the valid range for the samples. It is used to convert the samples to Y'PbPr values. |
|
GstVideoColorMatrix |
the color matrix. Used to convert between Y'PbPr and non-linear RGB (R'G'B') |
|
GstVideoTransferFunction |
the transfer function. used to convert between R'G'B' and RGB |
|
GstVideoColorPrimaries |
color primaries. used to convert between R'G'B' and CIE XYZ |
struct GstVideoInfo { const GstVideoFormatInfo *finfo; GstVideoInterlaceMode interlace_mode; GstVideoFlags flags; gint width; gint height; gsize size; gint views; GstVideoChromaSite chroma_site; GstVideoColorimetry colorimetry; gint par_n; gint par_d; gint fps_n; gint fps_d; gsize offset[GST_VIDEO_MAX_PLANES]; gint stride[GST_VIDEO_MAX_PLANES]; };
Information describing image properties. This information can be filled
in from GstCaps with gst_video_info_from_caps()
. The information is also used
to store the specific video info when mapping a video frame with
gst_video_frame_map()
.
Use the provided macros to access the info in this structure.
const GstVideoFormatInfo * |
the format info of the video |
|
GstVideoInterlaceMode |
the interlace mode |
|
GstVideoFlags |
additional video flags |
|
gint |
the width of the video |
|
gint |
the height of the video |
|
the default size of one frame |
||
gint |
the number of views for multiview video |
|
GstVideoChromaSite |
||
GstVideoColorimetry |
the colorimetry info |
|
gint |
the pixel-aspect-ratio numerator |
|
gint |
the pixel-aspect-ratio denominator |
|
gint |
the framerate numerator |
|
gint |
the framerate denominator |
|
offsets of the planes |
||
gint |
strides of the planes |
The possible values of the GstVideoInterlaceMode describing the interlace mode of the stream.
all frames are progressive |
||
2 fields are interleaved in one video frame. Extra buffer flags describe the field order. |
||
frames contain both interlaced and progressive video, the buffer flags describe the frame and fields. |
||
2 fields are stored in one buffer, use the frame ID to get access to the required field. For multiview (the 'views' property > 1) the fields of view N can be found at frame ID (N * 2) and (N * 2) + 1. Each field has only half the amount of lines as noted in the height property. This mode requires multiple GstVideoMeta metadata to describe the fields. |
struct GstVideoFrame { GstVideoInfo info; GstVideoFrameFlags flags; GstBuffer *buffer; gpointer meta; gint id; gpointer data[GST_VIDEO_MAX_PLANES]; GstMapInfo map[GST_VIDEO_MAX_PLANES]; };
A video frame obtained from gst_video_frame_map()
GstVideoInfo |
the GstVideoInfo |
|
GstVideoFrameFlags |
||
GstBuffer * |
the mapped buffer |
|
gpointer |
pointer to metadata if any |
|
gint |
id of the mapped frame. The id can, for example, be used to identify the frame in case of multiview video. |
|
gpointer |
pointers to the plane data |
|
GstMapInfo |
mappings of the planes |
Extra video frame flags
Additional video buffer flags.
If the GstBuffer is interlaced. In mixed interlace-mode, this flag specifies if the frame is interlaced or progressive. |
||
If the GstBuffer is interlaced, then the first field in the video frame is the top field. If unset, the bottom field is first. |
||
If the GstBuffer is interlaced, then the first field (as defined by the GST_VIDEO_BUFFER_FLAG_TFF flag setting) is repeated. |
||
If the GstBuffer is interlaced, then only the first field (as defined by the GST_VIDEO_BUFFER_FLAG_TFF flag setting) is to be displayed. |
||
Enum value describing the most common tiling types.
Tiles are indexed. Use gst_video_tile_get_index() to retrieve the tile at the requested coordinates. |