Merge commit '559a400ac36e75a8d73ba263fd7fa6736df1c2da' into wm4-commits--merge-edition

This bumps libmpv version to 1.103
Anton Kindestam 2018-12-05 19:02:03 +01:00
commit 8b83c89966
98 changed files with 3972 additions and 3183 deletions


@ -32,6 +32,10 @@ API changes
::
--- mpv 0.30.0 ---
1.103 - redo handling of async commands
- add mpv_event_command and make it possible to return values from
commands issued with mpv_command_async() or mpv_command_node_async()
- add mpv_abort_async_command()
1.102 - rename struct mpv_opengl_drm_osd_size to mpv_opengl_drm_draw_surface_size
- rename MPV_RENDER_PARAM_DRM_OSD_SIZE to MPV_RENDER_PARAM_DRM_DRAW_SURFACE_SIZE


@ -47,6 +47,25 @@ Interface changes
- support for `--spirv-compiler=nvidia` has been removed, leaving `shaderc`
as the only option. The `--spirv-compiler` option itself has been marked
as deprecated, and may be removed in the future.
- ipc: require that "request_id" fields are integers. Other types are still
accepted for compatibility, but this will stop in the future. Also, if no
request_id is provided, 0 will be assumed.
- mpv_command_node() and mp.command_native() now support named arguments
(see manpage). If you want to use them, use a new version of the manpage
as reference, which lists the definitive names.
- edition and disc title switching will now fully reload playback (may have
consequences for scripts, client API, or when using file-local options)
- remove async playback abort hack. This breaks aborting playback in the
following cases, iff the current stream is a network stream that
completely stopped responding:
- setting "program" property
- setting "cache-size" property
In earlier versions of mpv, the player core froze as well in these cases,
but could still be aborted with the quit, stop, playlist-prev,
playlist-next commands. If these properties are not accessed, frozen
network streams should not freeze the player core (only playback in
uncached regions), and differing behavior should be reported as a bug.
If --demuxer-thread=no is used, there are no guarantees.
--- mpv 0.29.0 ---
- drop --opensles-sample-rate, as --audio-samplerate should be used if desired
- drop deprecated --videotoolbox-format, --ff-aid, --ff-vid, --ff-sid,
@ -131,8 +150,6 @@ Interface changes
of 3D content doesn't justify such an option anyway.
- change cycle-values command to use the current value, instead of an
internal counter that remembered the current position.
- edition and disc title switching will now fully reload playback (may have
consequences for scripts, client API, or when using file-local options)
- remove deprecated ao/vo auto profiles. Consider using scripts like
auto-profiles.lua instead.
--- mpv 0.28.0 ---


@ -41,10 +41,10 @@ commands they're bound to on the OSD, instead of executing the commands::
(Only closing the window will make **mpv** exit, pressing normal keys will
merely display the binding, even if mapped to quit.)
General Input Command Syntax
----------------------------
input.conf syntax
-----------------
``[Shift+][Ctrl+][Alt+][Meta+]<key> [{<section>}] [<prefixes>] <command> (<argument>)* [; <command>]``
``[Shift+][Ctrl+][Alt+][Meta+]<key> [{<section>}] <command> ( ; <command> )*``
Note that by default, the right Alt key can be used to create special
characters, and thus does not register as a modifier. The option
@ -59,9 +59,9 @@ character), or a symbolic name (as printed by ``--input-keylist``).
``<section>`` (braced with ``{`` and ``}``) is the input section for this
command.
Arguments are separated by whitespace. This applies even to string arguments.
For this reason, string arguments should be quoted with ``"``. Inside quotes,
C-style escaping can be used.
``<command>`` is the command itself. It consists of the command name and
multiple (or none) arguments, all separated by whitespace. String arguments
need to be quoted with ``"``. For details, see `Flat command syntax`_.
You can bind multiple commands to one key. For example:
@ -78,15 +78,89 @@ that matches, and the multi-key command will never be called. Intermediate keys
can be remapped to ``ignore`` in order to avoid this issue. The maximum number
of (non-modifier) keys for combinations is currently 4.
Flat command syntax
-------------------
This is the syntax used in input.conf, and referred to as "input.conf syntax" in
a number of other places.
``<command> ::= [<prefixes>] <command_name> (<argument>)*``
``<argument> ::= (<string> | " <quoted_string> " )``
``command_name`` is an unquoted string with the command name itself. See
`List of Input Commands`_ for a list.
Arguments are separated by whitespace. This applies even to string arguments.
For this reason, string arguments should be quoted with ``"``. If a string
argument contains spaces or certain special characters, quoting and possibly
escaping is mandatory, or the command cannot be parsed correctly.
Inside quotes, C-style escaping can be used. JSON escapes according to RFC 8259,
minus surrogate pair escapes, should be a safe subset that can be used.
Commands specified as arrays
----------------------------
This applies to certain APIs, such as ``mp.commandv()`` or
``mp.command_native()`` (with array parameters) in Lua scripting, or
``mpv_command()`` or ``mpv_command_node()`` (with MPV_FORMAT_NODE_ARRAY) in the
C libmpv client API.
The command as well as all arguments are passed as a single array. Similar to
the `Flat command syntax`_, you can first pass prefixes as strings (each as
a separate array item), then the command name as a string, and then each
argument as a string or a native value.
Since these APIs pass arguments as separate strings or native values, they do
not expect quotes, and do not support escaping. Technically, there is the input.conf
parser, which first splits the command string into arguments, and then invokes
argument parsers for each argument. The input.conf parser normally handles
quotes and escaping. The array command APIs mentioned above pass strings
directly to the argument parsers, or can sidestep them by the ability to pass
non-string values.
Sometimes commands have string arguments that in turn are actually parsed by
other components (e.g. filter strings with ``vf add``) - in these cases, you
would have to double-escape in input.conf, but not with the array APIs.
For complex commands, consider using `Named arguments`_ instead, which should
give slightly more compatibility. Some commands do not support named arguments
and inherently take an array, though.
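For illustration, a minimal Lua sketch of the array form (the seek offset and
the file path are just placeholder values)::
    -- prefixes (none here), then the command name, then each argument as its own item
    mp.commandv("seek", "10", "relative")
    -- same idea via mp.command_native() with an array table; string arguments
    -- are passed as-is, so no input.conf-style quoting or escaping is needed
    mp.command_native({"loadfile", "/tmp/video.mkv", "append"})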
Named arguments
---------------
This applies to certain APIs, such as ``mp.command_native()`` (with tables that
have string keys) in Lua scripting, or ``mpv_command_node()`` (with
MPV_FORMAT_NODE_MAP) in the C libmpv client API.
Like with array commands, quoting and escaping is inherently not needed in the
normal case.
The name of each command is defined in each command description in the
`List of Input Commands`_. ``--input-cmdlist`` also lists them.
Some commands do not support named arguments (e.g. ``run`` command). You need
to use APIs that pass arguments as arrays.
Named arguments are not supported in the "flat" input.conf syntax, which means
you cannot use them for key bindings in input.conf at all.
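As a hedged Lua sketch, a named-argument call might look like this (the
filename is a placeholder; argument names are the ones listed in the command's
description in `List of Input Commands`_)::
    mp.command_native({
        name = "screenshot-to-file",   -- selects the command
        filename = "/tmp/shot.png",    -- placeholder path
        flags = "video",
    })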
List of Input Commands
----------------------
Commands with parameters have the parameter name enclosed in ``<`` / ``>``.
Don't add those to the actual command. Optional arguments are enclosed in
``[`` / ``]``. If you don't pass them, they will be set to a default value.
Remember to quote string arguments in input.conf (see `Flat command syntax`_).
``ignore``
Use this to "block" keys that should be unbound, and do nothing. Useful for
disabling default bindings, without disabling all bindings with
``--no-input-default-bindings``.
``seek <seconds> [relative|absolute|absolute-percent|relative-percent|exact|keyframes]``
``seek <target> [<flags>]``
Change the playback position. By default, seeks by a relative amount of
seconds.
@ -114,7 +188,7 @@ List of Input Commands
3rd parameter (essentially using a space instead of ``+``). The 3rd
parameter is still parsed, but is considered deprecated.
``revert-seek [mode]``
``revert-seek [<flags>]``
Undoes the ``seek`` command, and some other commands that seek (but not
necessarily all of them). Calling this command once will jump to the
playback position before the seek. Calling it a second time undoes the
@ -144,22 +218,24 @@ List of Input Commands
This does not work with audio-only playback.
``set <property> "<value>"``
Set the given property to the given value.
``set <name> <value>``
Set the given property or option to the given value.
``add <property> [<value>]``
Add the given value to the property. On overflow or underflow, clamp the
property to the maximum. If ``<value>`` is omitted, assume ``1``.
``add <name> [<value>]``
Add the given value to the property or option. On overflow or underflow,
clamp the property to the maximum. If ``<value>`` is omitted, assume ``1``.
``cycle <property> [up|down]``
Cycle the given property. ``up`` and ``down`` set the cycle direction. On
overflow, set the property back to the minimum, on underflow set it to the
maximum. If ``up`` or ``down`` is omitted, assume ``up``.
``cycle <name> [<value>]``
Cycle the given property or option. The second argument can be ``up`` or
``down`` to set the cycle direction. On overflow, set the property back to
the minimum, on underflow set it to the maximum. If ``up`` or ``down`` is
omitted, assume ``up``.
``multiply <property> <factor>``
Multiplies the value of a property with the numeric factor.
``multiply <name> <value>``
Similar to ``add``, but multiplies the property or option with the numeric
value.
``screenshot [subtitles|video|window|single|each-frame]``
``screenshot <flags>``
Take a screenshot.
Multiple flags are available (some can be combined with ``+``):
@ -186,45 +262,46 @@ List of Input Commands
second argument (and did not have flags). This syntax is still understood,
but deprecated and might be removed in the future.
Setting the ``async`` flag will make encoding and writing the actual image
file asynchronous in most cases. (``each-frame`` mode ignores this flag
currently.) Requesting async screenshots too early or too often could lead
to the same filenames being chosen, and overwriting each other in an undefined
order.
If you combine this command with another one using ``;``, you can use the
``async`` flag to make encoding/writing the image file asynchronous. For
normal standalone commands, this is always asynchronous, and the flag has
no effect. (This behavior changed with mpv 0.29.0.)
``screenshot-to-file "<filename>" [subtitles|video|window]``
``screenshot-to-file <filename> <flags>``
Take a screenshot and save it to a given file. The format of the file will
be guessed by the extension (and ``--screenshot-format`` is ignored - the
behavior when the extension is missing or unknown is arbitrary).
The second argument is like the first argument to ``screenshot``.
The second argument is like the first argument to ``screenshot`` and
supports ``subtitles``, ``video``, ``window``.
If the file already exists, it's overwritten.
Like all input command parameters, the filename is subject to property
expansion as described in `Property Expansion`_.
The ``async`` flag has an effect on this command (see ``screenshot``
command).
``playlist-next [weak|force]``
``playlist-next <flags>``
Go to the next entry on the playlist.
First argument:
weak (default)
If the last file on the playlist is currently played, do nothing.
force
Terminate playback if there are no more files on the playlist.
``playlist-prev [weak|force]``
``playlist-prev <flags>``
Go to the previous entry on the playlist.
First argument:
weak (default)
If the first file on the playlist is currently played, do nothing.
force
Terminate playback if the first file is being played.
``loadfile "<file>" [replace|append|append-play [options]]``
Load the given file and play it.
``loadfile <url> [<flags> [<options>]]``
Load the given file or URL and play it.
Second argument:
@ -242,13 +319,20 @@ List of Input Commands
Not all options can be changed this way. Some options require a restart
of the player.
``loadlist "<playlist>" [replace|append]``
Load the given playlist file (like ``--playlist``).
``loadlist <url> [<flags>]``
Load the given playlist file or URL (like ``--playlist``).
Second argument:
<replace> (default)
Stop playback and replace the internal playlist with the new one.
<append>
Append the new playlist at the end of the current internal playlist.
``playlist-clear``
Clear the playlist, except the currently played file.
``playlist-remove current|<index>``
``playlist-remove <index>``
Remove the playlist entry at the given index. Index values start counting
with 0. The special value ``current`` removes the current entry. Note that
removing the current entry also stops playback and starts playing the next
@ -265,12 +349,14 @@ List of Input Commands
Shuffle the playlist. This is similar to what is done on start if the
``--shuffle`` option is used.
``run "command" "arg1" "arg2" ...``
``run <command> [<arg1> [<arg2> [...]]]``
Run the given command. Unlike in MPlayer/mplayer2 and earlier versions of
mpv (0.2.x and older), this doesn't call the shell. Instead, the command
is run directly, with each argument passed separately. Each argument is
expanded like in `Property Expansion`_. Note that there is a static limit
of (as of this writing) 9 arguments (this limit could be raised on demand).
expanded like in `Property Expansion`_.
This command has a variable number of arguments, and cannot be used with
named arguments.
The program is run in a detached way. mpv doesn't wait until the command
is completed, but continues playback right after spawning it.
@ -287,6 +373,81 @@ List of Input Commands
execute arbitrary shell commands. It is recommended to write a small
shell script, and call that with ``run``.
``subprocess``
Similar to ``run``, but gives more control over process execution to the
caller, and does not detach the process.
This command has the following named arguments. Their order is not guaranteed,
so you should always call it with named arguments, see `Named arguments`_.
``args`` (``MPV_FORMAT_NODE_ARRAY[MPV_FORMAT_STRING]``)
Array of strings with the command as first argument, and subsequent
command line arguments following. This is just like the ``run`` command
argument list.
The first array entry is either an absolute path to the executable, or
a filename with no path components, in which case the ``PATH``
environment variable is used to locate the executable. On Unix, this is
equivalent to ``posix_spawnp``
and ``execvp`` behavior.
``playback_only`` (``MPV_FORMAT_FLAG``)
Boolean indicating whether the process should be killed when playback
terminates (optional, default: yes). If enabled, stopping playback
will automatically kill the process, and you can't start it outside of
playback.
``capture_size`` (``MPV_FORMAT_INT64``)
Integer setting the maximum number of stdout plus stderr bytes that can
be captured (optional, default: 64MB). If the number of bytes exceeds
this, capturing is stopped. The limit is per captured stream.
``capture_stdout`` (``MPV_FORMAT_FLAG``)
Capture all data the process outputs to stdout and return it once the
process ends (optional, default: no).
``capture_stderr`` (``MPV_FORMAT_FLAG``)
Same as ``capture_stdout``, but for stderr.
The command returns the following result (as ``MPV_FORMAT_NODE_MAP``):
``status`` (``MPV_FORMAT_INT64``)
The raw exit status of the process. It will be negative on error. The
meaning of negative values is undefined, other than meaning error (and
does not necessarily correspond to OS low level exit status values).
On Windows, it can happen that a negative return value is returned
even if the process exits gracefully, because the win32 ``UINT`` exit
code is assigned to an ``int`` variable before being set as ``int64_t``
field in the result map. This might be fixed later.
``stdout`` (``MPV_FORMAT_BYTE_ARRAY``)
Captured stdout stream, limited to ``capture_size``.
``stderr`` (``MPV_FORMAT_BYTE_ARRAY``)
Same as ``stdout``, but for stderr.
``error_string`` (``MPV_FORMAT_STRING``)
Empty string if the process exited gracefully. The string ``killed`` if
the process was terminated in an unusual way. The string ``init`` if the
process could not be started.
On Windows, ``killed`` is only returned when the process has been
killed by mpv as a result of ``playback_only`` being set to ``yes``.
``killed_by_us`` (``MPV_FORMAT_FLAG``)
Set to ``yes`` if the process has been killed by mpv as a result
of ``playback_only`` being set to ``yes``.
Note that the command itself will always return success as long as the
parameters are correct. Whether the process could be spawned or whether
it was somehow killed or returned an error status has to be queried from
the result value.
This command can be asynchronously aborted via API.
In all cases, the subprocess will be terminated on player exit. Only the
``run`` command can start processes in a truly detached way.
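A minimal Lua sketch of invoking it (the command line is a placeholder)::
    local r = mp.command_native({
        name = "subprocess",
        args = {"cat", "/etc/hostname"},  -- placeholder command line
        capture_stdout = true,
        playback_only = false,
    })
    if r and r.status == 0 then
        print(r.stdout)
    end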
``quit [<code>]``
Exit the player. If an argument is given, it's used as process exit code.
@ -295,15 +456,15 @@ List of Input Commands
will seek to the previous position on start. The (optional) argument is
exactly as in the ``quit`` command.
``sub-add "<file>" [<flags> [<title> [<lang>]]]``
Load the given subtitle file. It is selected as current subtitle after
loading.
``sub-add <url> [<flags> [<title> [<lang>]]]``
Load the given subtitle file or stream. By default, it is selected as
current subtitle after loading.
The ``flags`` args is one of the following values:
The ``flags`` argument is one of the following values:
<select>
Select the subtitle immediately.
Select the subtitle immediately (default).
<auto>
@ -346,11 +507,11 @@ List of Input Commands
events that have already been displayed, or are within a short prefetch
range.
``print-text "<string>"``
``print-text <text>``
Print text to stdout. The string can contain properties (see
`Property Expansion`_).
`Property Expansion`_). Take care to put the argument in quotes.
``show-text "<string>" [<duration>|-1 [<level>]]``
``show-text <text> [<duration>|-1 [<level>]]``
Show text on the OSD. The string can contain properties, which are expanded
as described in `Property Expansion`_. This can be used to show playback
time, filename, and so on.
@ -362,7 +523,7 @@ List of Input Commands
<level>
The minimum OSD level to show the text at (see ``--osd-level``).
``expand-text "<string>"``
``expand-text <string>``
Property-expand the argument and return the expanded string. This can be
used only through the client API or from a script using
``mp.command_native``. (see `Property Expansion`_).
@ -380,7 +541,7 @@ List of Input Commands
essentially like ``quit``. Useful for the client API: playback can be
stopped without terminating the player.
``mouse <x> <y> [<button> [single|double]]``
``mouse <x> <y> [<button> [<mode>]]``
Send a mouse event with given coordinate (``<x>``, ``<y>``).
Second argument:
@ -397,24 +558,24 @@ List of Input Commands
<double>
The mouse event represents double-click.
``keypress <key_name>``
``keypress <name>``
Send a key event through mpv's input handler, triggering whatever
behavior is configured to that key. ``key_name`` uses the ``input.conf``
behavior is configured to that key. ``name`` uses the ``input.conf``
naming scheme for keys and modifiers. Useful for the client API: key events
can be sent to libmpv to handle internally.
``keydown <key_name>``
``keydown <name>``
Similar to ``keypress``, but sets the ``KEYDOWN`` flag so that if the key is
bound to a repeatable command, it will be run repeatedly with mpv's key
repeat timing until the ``keyup`` command is called.
``keyup [<key_name>]``
``keyup [<name>]``
Set the ``KEYUP`` flag, stopping any repeated behavior that had been
triggered. ``key_name`` is optional. If ``key_name`` is not given or is an
triggered. ``name`` is optional. If ``name`` is not given or is an
empty string, ``KEYUP`` will be set on all keys. Otherwise, ``KEYUP`` will
only be set on the key specified by ``key_name``.
only be set on the key specified by ``name``.
``audio-add "<file>" [<flags> [<title> [<lang>]]]``
``audio-add <url> [<flags> [<title> [<lang>]]]``
Load the given audio file. See ``sub-add`` command.
``audio-remove [<id>]``
@ -442,21 +603,21 @@ List of Input Commands
Input Commands that are Possibly Subject to Change
--------------------------------------------------
``af set|add|toggle|del|clr "filter1=params,filter2,..."``
``af <operation> <value>``
Change audio filter chain. See ``vf`` command.
``vf set|add|toggle|del|clr "filter1=params,filter2,..."``
``vf <operation> <value>``
Change video filter chain.
The first argument decides what happens:
set
<set>
Overwrite the previous filter chain with the new one.
add
<add>
Append the new filter chain to the previous one.
toggle
<toggle>
Check if the given filter (with the exact parameters) is already
in the video chain. If yes, remove the filter. If no, add the filter.
(If several filters are passed to the command, this is done for
@ -466,14 +627,14 @@ Input Commands that are Possibly Subject to Change
without filter name and parameters as filter entry. This toggles the
enable/disable flag.
del
<del>
Remove the given filters from the video chain. Unlike in the other
cases, the second parameter is a comma separated list of filter names
or integer indexes. ``0`` would denote the first filter. Negative
indexes start from the last filter, and ``-1`` denotes the last
filter.
clr
<clr>
Remove all filters. Note that like the other sub-commands, this does
not control automatically inserted filters.
@ -512,18 +673,21 @@ Input Commands that are Possibly Subject to Change
"disabled" flag for the filter with the label ``deband`` when the
``a`` key is hit.
``cycle-values ["!reverse"] <property> "<value1>" "<value2>" ...``
``cycle-values [<"!reverse">] <property> <value1> [<value2> [...]]``
Cycle through a list of values. Each invocation of the command will set the
given property to the next value in the list. The command will use the
current value of the property/option, and use it to determine the current
position in the list of values. Once it has found it, it will set the
next value in the list (wrapping around to the first item if needed).
This command has a variable number of arguments, and cannot be used with
named arguments.
The special argument ``!reverse`` can be used to cycle the value list in
reverse. The only advantage is that you don't need to reverse the value
list yourself when adding a second key binding for cycling backwards.
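For example, from Lua (the property and values are placeholders)::
    -- cycles osd-level between 3 and 1, starting from the current value
    mp.commandv("cycle-values", "osd-level", "3", "1")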
``enable-section "<section>" [flags]``
``enable-section <name> [<flags>]``
Enable all key bindings in the named input section.
The enabled input sections form a stack. Bindings in sections on the top of
@ -545,10 +709,10 @@ Input Commands that are Possibly Subject to Change
<allow-vo-dragging>
Same.
``disable-section "<section>"``
``disable-section <name>``
Disable the named input section. Undoes ``enable-section``.
``define-section "<section>" "<contents>" [default|force]``
``define-section <name> <contents> [<flags>]``
Create a named input section, or replace the contents of an already existing
input section. The ``contents`` parameter uses the same syntax as the
``input.conf`` file (except that using the section syntax in it is not
@ -576,7 +740,7 @@ Input Commands that are Possibly Subject to Change
information about the key state. The special key name ``unmapped`` can be
used to match any unmapped key.
``overlay-add <id> <x> <y> "<file>" <offset> "<fmt>" <w> <h> <stride>``
``overlay-add <id> <x> <y> <file> <offset> <fmt> <w> <h> <stride>``
Add an OSD overlay sourced from raw data. This might be useful for scripts
and applications controlling mpv, and which want to display things on top
of the video window.
@ -587,6 +751,9 @@ Input Commands that are Possibly Subject to Change
anamorphic video (such as DVD), ``osd-par`` should be read as well, and the
overlay should be aspect-compensated.
This command has the following named arguments. Their order is not guaranteed,
so you should always call it with named arguments, see `Named arguments`_.
``id`` is an integer between 0 and 63 identifying the overlay element. The
ID can be used to add multiple overlay parts, update a part by using this
command with an already existing ID, or to remove a part with
@ -644,18 +811,24 @@ Input Commands that are Possibly Subject to Change
Remove an overlay added with ``overlay-add`` and the same ID. Does nothing
if no overlay with this ID exists.
``script-message "<arg1>" "<arg2>" ...``
``script-message [<arg1> [<arg2> [...]]]``
Send a message to all clients, and pass it the following list of arguments.
What this message means, how many arguments it takes, and what the arguments
mean is fully up to the receiver and the sender. Every client receives the
message, so be careful about name clashes (or use ``script-message-to``).
``script-message-to "<target>" "<arg1>" "<arg2>" ...``
This command has a variable number of arguments, and cannot be used with
named arguments.
``script-message-to <target> [<arg1> [<arg2> [...]]]``
Same as ``script-message``, but send it only to the client named
``<target>``. Each client (scripts etc.) has a unique name. For example,
Lua scripts can get their name via ``mp.get_script_name()``.
``script-binding "<name>"``
This command has a variable number of arguments, and cannot be used with
named arguments.
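A hedged Lua sketch of both sides (the message name is a placeholder)::
    -- receiving script: handle "hello" messages sent via script-message[-to]
    mp.register_script_message("hello", function(...)
        print("got:", ...)
    end)
    -- sending side (from any client, another script, or the JSON IPC interface)
    mp.commandv("script-message", "hello", "arg1", "arg2")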
``script-binding <name>``
Invoke a script-provided key binding. This can be used to remap key
bindings provided by external Lua scripts.
@ -692,7 +865,7 @@ Input Commands that are Possibly Subject to Change
unseekable streams that are going out of sync.
This command might be changed or removed in the future.
``screenshot-raw [subtitles|video|window]``
``screenshot-raw [<flags>]``
Return a screenshot in memory. This can be used only through the client
API. The MPV_FORMAT_NODE_MAP returned by this command has the ``w``, ``h``,
``stride`` fields set to obvious contents. The ``format`` field is set to
@ -702,7 +875,10 @@ Input Commands that are Possibly Subject to Change
is freed as soon as the result mpv_node is freed. As usual with client API
semantics, you are not allowed to write to the image data.
``vf-command "<label>" "<cmd>" "<args>"``
The ``flags`` argument is like the first argument to ``screenshot`` and
supports ``subtitles``, ``video``, ``window``.
``vf-command <label> <command> <argument>``
Send a command to the filter with the given ``<label>``. Use ``all`` to send
it to all filters at once. The command and argument string is filter
specific. Currently, this only works with the ``lavfi`` filter - see
@ -711,10 +887,10 @@ Input Commands that are Possibly Subject to Change
Note that the ``<label>`` is a mpv filter label, not a libavfilter filter
name.
``af-command "<label>" "<cmd>" "<args>"``
``af-command <label> <command> <argument>``
Same as ``vf-command``, but for audio filters.
``apply-profile "<name>"``
``apply-profile <name>``
Apply the contents of a named profile. This is like using ``profile=name``
in a config file, except you can map it to a key binding to change it at
runtime.
@ -722,14 +898,14 @@ Input Commands that are Possibly Subject to Change
There is no such thing as "unapplying" a profile - applying a profile
merely sets all option values listed within the profile.
``load-script "<path>"``
``load-script <filename>``
Load a script, similar to the ``--script`` option. Whether this waits for
the script to finish initialization or not has changed multiple times, and the
future behavior is left undefined.
``change-list "<option>" "<operation>" "<value>"``
``change-list <name> <operation> <value>``
This command changes list options as described in `List Options`_. The
``<option>`` parameter is the normal option name, while ``<operation>`` is
``<name>`` parameter is the normal option name, while ``<operation>`` is
the suffix or action used on the option.
Some operations take no value, but the command still requires the value
@ -870,11 +1046,49 @@ prefixes can be specified. They are separated by whitespace.
are asynchronous by default (or rather, their effects might manifest
after completion of the command). The semantics of this flag might change
in the future. Set it only if you don't rely on the effects of this command
being fully realized when it returns.
being fully realized when it returns. See `Synchronous vs. Asynchronous`_.
``sync``
Allow synchronous execution (if possible). Normally, all commands are
synchronous by default, but some are asynchronous by default for
compatibility with older behavior.
All of the osd prefixes are still overridden by the global ``--osd-level``
settings.
Synchronous vs. Asynchronous
----------------------------
The ``async`` and ``sync`` prefixes matter only for how the issuer of the command
waits on the completion of the command. Normally they do not affect how the
command behaves by itself. There are the following cases:
- Normal input.conf commands are always run asynchronously. Slow running
commands are queued up or run in parallel.
- "Multi" input.conf commands (1 key binding, concatenated with ``;``) will be
executed in order, except for commands that are async (either prefixed with
``async``, or async by default for some commands). The async commands are
run in a detached manner, possibly in parallel to the remaining sync commands
in the list.
- Normal Lua and libmpv commands (e.g. ``mpv_command()``) are run in a blocking
manner, unless the ``async`` prefix is used, or the command is async by
default. This means in the sync case the caller will block, even if the core
continues playback. Async mode runs the command in a detached manner.
- Async libmpv command APIs (e.g. ``mpv_command_async()``) never block the
caller, and always notify completion with a message. The ``sync`` and
``async`` prefixes make no difference.
- In all cases, async mode can still run commands in a synchronous manner, even
in detached mode. This can for example happen in cases when a command does not
have an asynchronous implementation. The async libmpv API still never blocks
the caller in these cases.
Before mpv 0.29.0, the ``async`` prefix was only used by screenshot commands,
and made them run the file saving code in a detached manner. This is the
default now, and ``async`` changes behavior only in the ways mentioned above.
Currently the following commands have different waiting characteristics with
sync vs. async: sub-add, audio-add, sub-reload, audio-reload,
rescan-external-files, screenshot, screenshot-to-file.
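From Lua, a prefix can be attached via the ``_flags`` field accepted by
``mp.command_native()`` (a sketch; see the Lua docs for details)::
    -- runs the command detached; the script is not blocked until it completes
    mp.command_native({_flags = {"async"}, name = "screenshot"})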
Input Sections
--------------
@ -1241,38 +1455,6 @@ Property list
playing at all. In other words, it's only ``no`` if there's actually
video playing. (Behavior since mpv 0.7.0.)
``cache``
Network cache fill state (0-100.0).
``cache-size`` (RW)
Network cache size in KB. This is similar to ``--cache``. This allows
setting the cache size at runtime. Currently, it's not possible to enable
or disable the cache at runtime using this property, just to resize an
existing cache.
This does not include the backbuffer size (changed after mpv 0.10.0).
Note that this tries to keep the cache contents as far as possible. To make
this easier, the cache resizing code will allocate the new cache while the
old cache is still allocated.
Don't use this when playing DVD or Blu-ray.
``cache-free`` (R)
Total free cache size in KB.
``cache-used`` (R)
Total used cache size in KB.
``cache-speed`` (R)
Current I/O read speed between the cache and the lower layer (like network).
This gives the number of bytes per second over a 1 second window (using
the type ``MPV_FORMAT_INT64`` for the client API).
``cache-idle`` (R)
Returns ``yes`` if the cache is idle, which means the cache is filled as
much as possible, and is currently not reading more data.
``demuxer-cache-duration``
Approximate duration of video buffered in the demuxer, in seconds. The
guess is very unreliable, and often the property will not be available


@ -74,6 +74,12 @@ some wrapper like .NET's NamedPipeClientStream.)
Protocol
--------
The protocol uses UTF-8-only JSON as defined by RFC-8259. Unlike standard JSON,
"\u" escape sequences are not allowed to construct surrogate pairs. To avoid
getting conflicts, encode all text characters including and above codepoint
U+0020 as UTF-8. mpv might output broken UTF-8 in corner cases (see "UTF-8"
section below).
Clients can execute commands on the player by sending JSON messages of the
following form:
@ -108,7 +114,10 @@ Because events can occur at any time, it may be difficult at times to determine
which response goes with which command. Commands may optionally include a
``request_id`` which, if provided in the command request, will be copied
verbatim into the response. mpv does not interpret the ``request_id`` in any
way; it is solely for the use of the requester.
way; it is solely for the use of the requester. The only requirement is that
the ``request_id`` field must be an integer (a number without fractional parts
in the range ``-2^63..2^63-1``). Using other types is deprecated and will
currently show a warning. In the future, this will raise an error.
For example, this request:
@ -122,6 +131,11 @@ Would generate this response:
{ "error": "success", "data": 1.468135, "request_id": 100 }
If you don't specify a ``request_id``, command replies will set it to 0.
Commands may run asynchronously in the future, instead of blocking the socket
until a reply is sent.
All commands, replies, and events are separated from each other with a line
break character (``\n``).
@ -258,4 +272,28 @@ sometimes sends invalid JSON. If that is a problem for the client application's
parser, it should filter the raw data for invalid UTF-8 sequences and perform
the desired replacement, before feeding the data to its JSON parser.
mpv will not attempt to construct invalid UTF-8 with broken escape sequences.
mpv will not attempt to construct invalid UTF-8 with broken "\u" escape
sequences. This includes surrogate pairs.
JSON extensions
---------------
The following non-standard extensions are supported:
- a list or object item can have a trailing ","
- object syntax accepts "=" in addition to ":"
- object keys can be unquoted, if they start with a character in "A-Za-z\_"
and contain only characters in "A-Za-z0-9\_"
- byte escapes with "\xAB" are allowed (with AB being a 2 digit hex number)
Example:
::
{ objkey = "value\x0A" }
Is equivalent to:
::
{ "objkey": "value\n" }


@ -109,12 +109,43 @@ The ``mp`` module is preloaded, although it can be loaded manually with
``mp.command_native(table [,def])``
Similar to ``mp.commandv``, but pass the argument list as table. This has
the advantage that in at least some cases, arguments can be passed as
native types.
native types. It also allows you to use named arguments.
If the table is an array, each array item is like an argument in
``mp.commandv()`` (but can be a native type instead of a string).
If the table contains string keys, it's interpreted as command with named
arguments. This requires at least an entry with the key ``name`` to be
present, which must be a string, and contains the command name. The special
entry ``_flags`` is optional, and if present, must be an array of
`Input Command Prefixes`_ to apply. All other entries are interpreted as
arguments.
Returns a result table on success (usually empty), or ``def, error`` on
error. ``def`` is the second parameter provided to the function, and is
nil if it's missing.
``mp.command_native_async(table [,fn])``
Like ``mp.command_native()``, but the command is run asynchronously (as far
as possible), and upon completion, fn is called. fn has three arguments:
``fn(success, result, error)``. ``success`` is always a Boolean and is true
if the command was successful, otherwise false. The second parameter is
the result value (can be nil) in case of success, nil otherwise (as returned
by ``mp.command_native()``). The third parameter is the error string in case
of an error, nil otherwise.
Returns a table with undefined contents, which can be used as argument for
``mp.abort_async_command``.
If starting the command failed for some reason, ``nil, error`` is returned,
and ``fn`` is called indicating failure, using the same error value.
``mp.abort_async_command(t)``
Abort a ``mp.command_native_async`` call. The argument is the return value
of that command (which starts asynchronous execution of the command).
Whether this works and how long it takes depends on the command and the
situation. The abort call itself is asynchronous. Does not return anything.
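A hedged sketch of the async flow (the ``sleep`` command line is a placeholder)::
    local t = mp.command_native_async({name = "subprocess", args = {"sleep", "5"}},
        function(success, result, err)
            print("finished:", success, err)
        end)
    -- later, if the result is no longer of interest:
    mp.abort_async_command(t)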
``mp.get_property(name [,def])``
Return the value of the given property as string. These are the same
properties as used in input.conf. See `Properties`_ for a list of
@ -634,45 +665,23 @@ strictly part of the guaranteed API.
``utils.subprocess(t)``
Runs an external process and waits until it exits. Returns process status
and the captured output.
and the captured output. This is a legacy wrapper around calling the
``subprocess`` command with ``mp.command_native``. It does the following
things:
The parameter ``t`` is a table. The function reads the following entries:
- copy the table ``t``
- rename ``cancellable`` field to ``playback_only``
- rename ``max_size`` to ``capture_size``
- set ``capture_stdout`` field to ``true`` if unset
- set ``name`` field to ``subprocess``
- call ``mp.command_native(copied_t)``
- if the command failed, create a dummy result table
- copy ``error_string`` to ``error`` field if the string is non-empty
- return the result table
``args``
Array of strings. The first array entry is the executable. This
can be either an absolute path, or a filename with no path
components, in which case the ``PATH`` environment variable is
used to resolve the executable. The other array elements are
passed as command line arguments.
``cancellable``
Optional. If set to ``true`` (default), then if the user stops
playback or goes to the next file while the process is running,
the process will be killed.
``max_size``
Optional. The maximum size in bytes of the data that can be captured
from stdout. (Default: 16 MB.)
The function returns a table as result with the following entries:
``status``
The raw exit status of the process. It will be negative on error.
``stdout``
Captured output stream as string, limited to ``max_size``.
``error``
``nil`` on success. The string ``killed`` if the process was
terminated in an unusual way. The string ``init`` if the process
could not be started.
On Windows, ``killed`` is only returned when the process has been
killed by mpv as a result of ``cancellable`` being set to ``true``.
``killed_by_us``
Set to ``true`` if the process has been killed by mpv as a result
of ``cancellable`` being set to ``true``.
It is recommended to use ``mp.command_native`` or ``mp.command_native_async``
directly, instead of calling this legacy wrapper. It is for compatibility
only.
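A rough Lua sketch of these steps (``subprocess_compat`` is a hypothetical name;
the actual wrapper bundled with mpv may differ in details)::
    local function subprocess_compat(t)
        local copy = {}
        for k, v in pairs(t) do copy[k] = v end
        copy.name = "subprocess"
        -- rename legacy fields to the subprocess command's named arguments
        copy.playback_only = copy.cancellable
        copy.cancellable = nil
        copy.capture_size = copy.max_size
        copy.max_size = nil
        if copy.capture_stdout == nil then
            copy.capture_stdout = true
        end
        local res, err = mp.command_native(copy)
        if res == nil then
            res = {error_string = err, status = -1}  -- dummy result on failure
        end
        if res.error_string and res.error_string ~= "" then
            res.error = res.error_string
        end
        return res
    end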
``utils.subprocess_detached(t)``
Runs an external process and detaches it from mpv's control.
@ -685,6 +694,9 @@ strictly part of the guaranteed API.
The function returns ``nil``.
This is a legacy wrapper around calling the ``run`` command with
``mp.commandv`` and other functions.
``utils.getpid()``
Returns the process ID of the running mpv process. This can be used to identify
the calling mpv when launching (detached) subprocesses.


@ -2952,6 +2952,19 @@ Demuxer
Disabling this option is not recommended. Use it for debugging only.
``--demuxer-termination-timeout=<seconds>``
Number of seconds the player should wait to shut down the demuxer (default:
0.1). The player will wait up to this much time before it closes the
stream layer forcefully. Forceful closing usually means the network I/O is
given no chance to close its connections gracefully (of course the OS can
still close TCP connections properly), and might result in annoying messages
being logged, and in some cases, confused remote servers.
This timeout is usually only applied when loading has finished properly. If
loading is aborted by the user, or in some corner cases like removing
external tracks sourced from network during playback, forceful closing is
always used.
``--demuxer-readahead-secs=<seconds>``
If ``--demuxer-thread`` is enabled, this controls how much the demuxer
should buffer ahead in seconds (default: 1). As long as no packet has
@ -3830,97 +3843,15 @@ TV
Cache
-----
``--cache=<kBytes|yes|no|auto>``
Set the size of the cache in kilobytes, disable it with ``no``, or
automatically enable it if needed with ``auto`` (default: ``auto``).
With ``auto``, the cache will usually be enabled for network streams,
using the size set by ``--cache-default``. With ``yes``, the cache will
always be enabled with the size set by ``--cache-default`` (unless the
stream cannot be cached, or ``--cache-default`` disables caching).
``--cache=<yes|no|auto>``
Decide whether to use network cache settings (default: auto).
May be useful when playing files from slow media, but can also have
negative effects, especially with file formats that require a lot of
seeking, such as MP4.
If enabled, use the maximum of ``--cache-secs`` and ``--demuxer-max-bytes``
for the cache size. If disabled, ``--cache-pause`` and related are
implicitly disabled.
Note that half the cache size will be used to allow fast seeking back. This
is also the reason why a full cache is usually not reported as 100% full.
The cache fill display does not include the part of the cache reserved for
seeking back. The actual maximum percentage will usually be the ratio
between readahead and backbuffer sizes.
``--cache-default=<kBytes|no>``
Set the size of the cache in kilobytes (default: 10000 KB). Using ``no``
will not automatically enable the cache e.g. when playing from a network
stream. Note that using ``--cache`` will always override this option.
``--cache-initial=<kBytes>``
Playback will start when the cache has been filled up with this many
kilobytes of data (default: 0).
``--cache-seek-min=<kBytes>``
If a seek is to be made to a position within ``<kBytes>`` of the cache
size from the current position, mpv will wait for the cache to be
filled to this position rather than performing a stream seek (default:
500).
This matters for small forward seeks. With slow streams (especially HTTP
streams) there is a tradeoff between skipping the data between current
position and seek destination, or performing an actual seek. Depending
on the situation, either of these might be slower than the other method.
This option allows control over this.
``--cache-backbuffer=<kBytes>``
Size of the cache back buffer (default: 10000 KB). This will add to the total
cache size, and reserves that amount for seeking back. The reserved amount
will not be used for readahead, and instead preserves already read data to
enable fast seeking back.
``--cache-file=<TMP|path>``
Create a cache file on the filesystem.
There are two ways of using this:
1. Passing a path (a filename). The file will always be overwritten. When
the general cache is enabled, this file cache will be used to store
whatever is read from the source stream.
This will always overwrite the cache file, and you can't use an existing
cache file to resume playback of a stream. (Technically, mpv wouldn't
even know which blocks in the file are valid and which not.)
The resulting file will not necessarily contain all data of the source
stream. For example, if you seek, the parts that were skipped over are
never read and consequently are not written to the cache. The skipped over
parts are filled with zeros. This means that the cache file doesn't
necessarily correspond to a full download of the source stream.
Both of these issues could be improved if there is any user interest.
.. warning:: Causes random corruption when used with ordered chapters or
with ``--audio-file``.
2. Passing the string ``TMP``. This will not be interpreted as filename.
Instead, an invisible temporary file is created. It depends on your
C library where this file is created (usually ``/tmp/``), and whether
filename is visible (the ``tmpfile()`` function is used). On some
systems, automatic deletion of the cache file might not be guaranteed.
If you want to use a file cache, this mode is recommended, because it
doesn't break ordered chapters or ``--audio-file``. These modes open
multiple cache streams, and using the same file for them obviously
clashes.
See also: ``--cache-file-size``.
``--cache-file-size=<kBytes>``
Maximum size of the file created with ``--cache-file``. For read accesses
above this size, the cache is simply not used.
Keep in mind that some use-cases, like playing ordered chapters with cache
enabled, will actually create multiple cache files, each of which will
use up to this much disk space.
(Default: 1048576, 1 GB.)
The ``auto`` choice sets this depending on whether the stream is thought to
involve network accesses (this is an imperfect heuristic).
``--no-cache``
Turn off input stream caching. See ``--cache``.


@ -0,0 +1,95 @@
-- Test script for some command API details.
local utils = require("mp.utils")
function join(sep, arr, count)
local r = ""
if count == nil then
count = #arr
end
for i = 1, count do
if i > 1 then
r = r .. sep
end
r = r .. utils.to_string(arr[i])
end
return r
end
mp.observe_property("vo-configured", "bool", function(_, v)
if v ~= true then
return
end
print("async expand-text")
mp.command_native_async({"expand-text", "hello ${path}!"},
function(res, val, err)
print("done async expand-text: " .. join(" ", {res, val, err}))
end)
-- make screenshot writing very slow
mp.set_property("screenshot-format", "png")
mp.set_property("screenshot-png-compression", "9")
timer = mp.add_periodic_timer(0.1, function() print("I'm alive") end)
timer:resume()
print("Slow screenshot command...")
res, err = mp.command_native({"screenshot"})
print("done, res: " .. utils.to_string(res))
print("Slow screenshot async command...")
res, err = mp.command_native_async({"screenshot"}, function(res)
print("done (async), res: " .. utils.to_string(res))
timer:kill()
end)
print("done (sending), res: " .. utils.to_string(res))
print("Broken screenshot async command...")
mp.command_native_async({"screenshot-to-file", "/nonexistent/bogus.png"},
function(res, val, err)
print("done err scr.: " .. join(" ", {res, val, err}))
end)
mp.command_native_async({name = "subprocess", args = {"sh", "-c", "echo hi && sleep 10s"}, capture_stdout = true},
function(res, val, err)
print("done subprocess: " .. join(" ", {res, val, err}))
end)
local x = mp.command_native_async({name = "subprocess", args = {"sleep", "inf"}},
function(res, val, err)
print("done sleep inf subprocess: " .. join(" ", {res, val, err}))
end)
mp.add_timeout(15, function()
print("aborting sleep inf subprocess after timeout")
mp.abort_async_command(x)
end)
-- (assuming this "freezes")
local y = mp.command_native_async({name = "sub-add", url = "-"},
function(res, val, err)
print("done sub-add stdin: " .. join(" ", {res, val, err}))
end)
mp.add_timeout(20, function()
print("aborting sub-add stdin after timeout")
mp.abort_async_command(y)
end)
-- This should get killed on script exit.
mp.command_native_async({name = "subprocess", playback_only = false,
args = {"sleep", "inf"}}, function()end)
-- Runs detached; should be killed on player exit (forces timeout)
mp.command_native({_flags={"async"}, name = "subprocess",
playback_only = false, args = {"sleep", "inf"}})
end)
mp.register_event("shutdown", function()
-- This "freezes" the script, should be killed via timeout.
print("freeze!")
local x = mp.command_native({name = "subprocess", playback_only = false,
args = {"sleep", "inf"}})
print("done, killed=" .. utils.to_string(x.killed_by_us))
end)


@ -37,6 +37,7 @@
#include "demux/stheader.h"
#include "filters/f_decoder_wrapper.h"
#include "filters/filter_internal.h"
#include "options/m_config.h"
#include "options/options.h"
struct priv {
@ -80,8 +81,9 @@ static bool init(struct mp_filter *da, struct mp_codec_params *codec,
const char *decoder)
{
struct priv *ctx = da->priv;
struct MPOpts *mpopts = da->global->opts;
struct ad_lavc_params *opts = mpopts->ad_lavc_params;
struct MPOpts *mpopts = mp_get_config_group(ctx, da->global, GLOBAL_CONFIG);
struct ad_lavc_params *opts =
mp_get_config_group(ctx, da->global, &ad_lavc_conf);
AVCodecContext *lavc_context;
AVCodec *lavc_codec;


@ -120,7 +120,7 @@ static bool get_desc(struct m_obj_desc *dst, int index)
}
// For the ao option
const struct m_obj_list ao_obj_list = {
static const struct m_obj_list ao_obj_list = {
.get_desc = get_desc,
.description = "audio outputs",
.allow_unknown_entries = true,
@ -129,13 +129,30 @@ const struct m_obj_list ao_obj_list = {
.use_global_options = true,
};
#define OPT_BASE_STRUCT struct ao_opts
const struct m_sub_options ao_conf = {
.opts = (const struct m_option[]) {
OPT_SETTINGSLIST("ao", audio_driver_list, 0, &ao_obj_list, ),
OPT_STRING("audio-device", audio_device, UPDATE_AUDIO),
OPT_STRING("audio-client-name", audio_client_name, UPDATE_AUDIO),
OPT_DOUBLE("audio-buffer", audio_buffer, M_OPT_MIN | M_OPT_MAX,
.min = 0, .max = 10),
{0}
},
.size = sizeof(OPT_BASE_STRUCT),
.defaults = &(const OPT_BASE_STRUCT){
.audio_buffer = 0.2,
.audio_device = "auto",
.audio_client_name = "mpv",
},
};
static struct ao *ao_alloc(bool probing, struct mpv_global *global,
void (*wakeup_cb)(void *ctx), void *wakeup_ctx,
char *name)
{
assert(wakeup_cb);
struct MPOpts *opts = global->opts;
struct mp_log *log = mp_log_new(NULL, global->log, "ao");
struct m_obj_desc desc;
if (!m_obj_list_find(&desc, &ao_obj_list, bstr0(name))) {
@ -143,6 +160,7 @@ static struct ao *ao_alloc(bool probing, struct mpv_global *global,
talloc_free(log);
return NULL;
};
struct ao_opts *opts = mp_get_config_group(NULL, global, &ao_conf);
struct ao *ao = talloc_ptrtype(NULL, ao);
talloc_steal(ao, log);
*ao = (struct ao) {
@ -155,6 +173,7 @@ static struct ao *ao_alloc(bool probing, struct mpv_global *global,
.def_buffer = opts->audio_buffer,
.client_name = talloc_strdup(ao, opts->audio_client_name),
};
talloc_free(opts);
ao->priv = m_config_group_from_desc(ao, ao->log, global, &desc, name);
if (!ao->priv)
goto error;
@ -267,8 +286,8 @@ struct ao *ao_init_best(struct mpv_global *global,
struct encode_lavc_context *encode_lavc_ctx,
int samplerate, int format, struct mp_chmap channels)
{
struct MPOpts *opts = global->opts;
void *tmp = talloc_new(NULL);
struct ao_opts *opts = mp_get_config_group(tmp, global, &ao_conf);
struct mp_log *log = mp_log_new(tmp, global->log, "ao");
struct ao *ao = NULL;
struct m_obj_settings *ao_list = NULL;


@ -83,6 +83,13 @@ struct mpv_global;
struct input_ctx;
struct encode_lavc_context;
struct ao_opts {
struct m_obj_settings *audio_driver_list;
char *audio_device;
char *audio_client_name;
double audio_buffer;
};
struct ao *ao_init_best(struct mpv_global *global,
int init_flags,
void (*wakeup_cb)(void *ctx), void *wakeup_ctx,


@ -8,10 +8,7 @@ struct mpv_global {
struct mp_log *log;
struct m_config_shadow *config;
struct mp_client_api *client_api;
// Using this is deprecated and should be avoided (missing synchronization).
// Use m_config_cache to access mpv_global.config instead.
struct MPOpts *opts;
char *configdir;
};
#endif


@ -460,8 +460,6 @@ void mp_msg_init(struct mpv_global *global)
struct mp_log *log = mp_log_new(root, &dummy, "");
global->log = log;
mp_msg_update_msglevels(global);
}
// If opt is different from *current_path, reopen *file and update *current_path.
@ -501,13 +499,9 @@ static void reopen_file(char *opt, char **current_path, FILE **file,
talloc_free(tmp);
}
void mp_msg_update_msglevels(struct mpv_global *global)
void mp_msg_update_msglevels(struct mpv_global *global, struct MPOpts *opts)
{
struct mp_log_root *root = global->log->root;
struct MPOpts *opts = global->opts;
if (!opts)
return;
pthread_mutex_lock(&mp_msg_lock);
@ -522,8 +516,7 @@ void mp_msg_update_msglevels(struct mpv_global *global)
}
m_option_type_msglevels.free(&root->msg_levels);
m_option_type_msglevels.copy(NULL, &root->msg_levels,
&global->opts->msg_levels);
m_option_type_msglevels.copy(NULL, &root->msg_levels, &opts->msg_levels);
atomic_fetch_add(&root->reload_counter, 1);
pthread_mutex_unlock(&mp_msg_lock);


@ -4,9 +4,10 @@
#include <stdbool.h>
struct mpv_global;
struct MPOpts;
void mp_msg_init(struct mpv_global *global);
void mp_msg_uninit(struct mpv_global *global);
void mp_msg_update_msglevels(struct mpv_global *global);
void mp_msg_update_msglevels(struct mpv_global *global, struct MPOpts *opts);
void mp_msg_force_stderr(struct mpv_global *global, bool force_stderr);
bool mp_msg_has_status_line(struct mpv_global *global);
bool mp_msg_has_log_file(struct mpv_global *global);


@ -275,13 +275,14 @@ struct playlist_entry *playlist_entry_from_index(struct playlist *pl, int index)
}
}
struct playlist *playlist_parse_file(const char *file, struct mpv_global *global)
struct playlist *playlist_parse_file(const char *file, struct mp_cancel *cancel,
struct mpv_global *global)
{
struct mp_log *log = mp_log_new(NULL, global->log, "!playlist_parser");
mp_verbose(log, "Parsing playlist file %s...\n", file);
struct demuxer_params p = {.force_format = "playlist"};
struct demuxer *d = demux_open_url(file, &p, NULL, global);
struct demuxer *d = demux_open_url(file, &p, cancel, global);
if (!d) {
talloc_free(log);
return NULL;
@ -296,7 +297,7 @@ struct playlist *playlist_parse_file(const char *file, struct mpv_global *global
"pass it to the player\ndirectly. Don't use --playlist.\n");
}
}
free_demuxer_and_stream(d);
demux_free(d);
if (ret) {
mp_verbose(log, "Playlist successfully parsed\n");


@ -101,8 +101,10 @@ int playlist_entry_to_index(struct playlist *pl, struct playlist_entry *e);
int playlist_entry_count(struct playlist *pl);
struct playlist_entry *playlist_entry_from_index(struct playlist *pl, int index);
struct mp_cancel;
struct mpv_global;
struct playlist *playlist_parse_file(const char *file, struct mpv_global *global);
struct playlist *playlist_parse_file(const char *file, struct mp_cancel *cancel,
struct mpv_global *global);
void playlist_entry_unref(struct playlist_entry *e);


@ -35,6 +35,7 @@
#include "mpv_talloc.h"
#include "common/msg.h"
#include "common/global.h"
#include "misc/thread_tools.h"
#include "osdep/atomic.h"
#include "osdep/threads.h"
@ -86,6 +87,7 @@ const demuxer_desc_t *const demuxer_list[] = {
};
struct demux_opts {
int enable_cache;
int64_t max_bytes;
int64_t max_bytes_bw;
double min_secs;
@ -102,6 +104,8 @@ struct demux_opts {
const struct m_sub_options demux_conf = {
.opts = (const struct m_option[]){
OPT_CHOICE("cache", enable_cache, 0,
({"no", 0}, {"auto", -1}, {"yes", 1})),
OPT_DOUBLE("demuxer-readahead-secs", min_secs, M_OPT_MIN, .min = 0),
// (The MAX_BYTES sizes may not be accurate because the max field is
// of double type.)
@ -117,6 +121,7 @@ const struct m_sub_options demux_conf = {
},
.size = sizeof(struct demux_opts),
.defaults = &(const struct demux_opts){
.enable_cache = -1, // auto
.max_bytes = 150 * 1024 * 1024,
.max_bytes_bw = 50 * 1024 * 1024,
.min_secs = 1.0,
@ -129,11 +134,15 @@ const struct m_sub_options demux_conf = {
struct demux_internal {
struct mp_log *log;
struct demux_opts *opts;
// The demuxer runs potentially in another thread, so we keep two demuxer
// structs; the real demuxer can access the shadow struct only.
struct demuxer *d_thread; // accessed by demuxer impl. (producer)
struct demuxer *d_user; // accessed by player (consumer)
bool owns_stream;
// The lock protects the packet queues (struct demux_stream),
// and the fields below.
pthread_mutex_t lock;
@ -144,6 +153,7 @@ struct demux_internal {
bool thread_terminate;
bool threading;
bool shutdown_async;
void (*wakeup_cb)(void *ctx);
void *wakeup_cb_ctx;
@ -212,8 +222,6 @@ struct demux_internal {
// Transient state.
double duration;
// Cached state.
bool force_cache_update;
struct stream_cache_info stream_cache_info;
int64_t stream_size;
// Updated during init only.
char *stream_base_filename;
@ -513,23 +521,40 @@ static void update_seek_ranges(struct demux_cached_range *range)
range->is_bof = true;
range->is_eof = true;
double min_start_pts = MP_NOPTS_VALUE;
double max_end_pts = MP_NOPTS_VALUE;
for (int n = 0; n < range->num_streams; n++) {
struct demux_queue *queue = range->streams[n];
if (queue->ds->selected && queue->ds->eager) {
range->seek_start = MP_PTS_MAX(range->seek_start, queue->seek_start);
range->seek_end = MP_PTS_MIN(range->seek_end, queue->seek_end);
if (queue->is_bof) {
min_start_pts = MP_PTS_MIN(min_start_pts, queue->seek_start);
} else {
range->seek_start =
MP_PTS_MAX(range->seek_start, queue->seek_start);
}
if (queue->is_eof) {
max_end_pts = MP_PTS_MAX(max_end_pts, queue->seek_end);
} else {
range->seek_end = MP_PTS_MIN(range->seek_end, queue->seek_end);
}
range->is_eof &= queue->is_eof;
range->is_bof &= queue->is_bof;
if (queue->seek_start >= queue->seek_end) {
range->seek_start = range->seek_end = MP_NOPTS_VALUE;
break;
}
bool empty = queue->is_eof && !queue->head;
if (queue->seek_start >= queue->seek_end && !empty)
goto broken;
}
}
if (range->is_eof)
range->seek_end = max_end_pts;
if (range->is_bof)
range->seek_start = min_start_pts;
// Sparse stream behavior is not very clearly defined, but usually we don't
// want it to restrict the range of other streams, unless
// This is incorrect in any of these cases:
@ -557,7 +582,12 @@ static void update_seek_ranges(struct demux_cached_range *range)
}
if (range->seek_start >= range->seek_end)
range->seek_start = range->seek_end = MP_NOPTS_VALUE;
goto broken;
return;
broken:
range->seek_start = range->seek_end = MP_NOPTS_VALUE;
}
// Remove queue->head from the queue. Does not update in->fw_bytes/in->fw_packs.
@ -925,7 +955,33 @@ int demux_get_num_stream(struct demuxer *demuxer)
return r;
}
void free_demuxer(demuxer_t *demuxer)
static void demux_shutdown(struct demux_internal *in)
{
struct demuxer *demuxer = in->d_user;
if (demuxer->desc->close)
demuxer->desc->close(in->d_thread);
demuxer->priv = NULL;
in->d_thread->priv = NULL;
demux_flush(demuxer);
assert(in->total_bytes == 0);
if (in->owns_stream)
free_stream(demuxer->stream);
demuxer->stream = NULL;
}
static void demux_dealloc(struct demux_internal *in)
{
for (int n = 0; n < in->num_streams; n++)
talloc_free(in->streams[n]);
pthread_mutex_destroy(&in->lock);
pthread_cond_destroy(&in->wakeup);
talloc_free(in->d_user);
}
void demux_free(struct demuxer *demuxer)
{
if (!demuxer)
return;
@ -933,27 +989,74 @@ void free_demuxer(demuxer_t *demuxer)
assert(demuxer == in->d_user);
demux_stop_thread(demuxer);
if (demuxer->desc->close)
demuxer->desc->close(in->d_thread);
demux_flush(demuxer);
assert(in->total_bytes == 0);
for (int n = 0; n < in->num_streams; n++)
talloc_free(in->streams[n]);
pthread_mutex_destroy(&in->lock);
pthread_cond_destroy(&in->wakeup);
talloc_free(demuxer);
demux_shutdown(in);
demux_dealloc(in);
}
void free_demuxer_and_stream(struct demuxer *demuxer)
// Start closing the demuxer and eventually freeing the demuxer asynchronously.
// You must not access the demuxer once this has been started. Once the demuxer
// is shutdown, the wakeup callback is invoked. Then you need to call
// demux_free_async_finish() to end the operation (it must not be called from
// the wakeup callback).
// This can return NULL. Then the demuxer cannot be free'd asynchronously, and
// you need to call demux_free() instead.
struct demux_free_async_state *demux_free_async(struct demuxer *demuxer)
{
struct demux_internal *in = demuxer->in;
assert(demuxer == in->d_user);
if (!in->threading)
return NULL;
pthread_mutex_lock(&in->lock);
in->thread_terminate = true;
in->shutdown_async = true;
pthread_cond_signal(&in->wakeup);
pthread_mutex_unlock(&in->lock);
return (struct demux_free_async_state *)demuxer->in; // lies
}
// As long as state is valid, you can call this to request immediate abort.
// Roughly behaves as demux_cancel_and_free(), except you still need to wait
// for the result.
void demux_free_async_force(struct demux_free_async_state *state)
{
struct demux_internal *in = (struct demux_internal *)state; // reverse lies
mp_cancel_trigger(in->d_user->cancel);
}
// Check whether the demuxer is shut down yet. If not, return false, and you
// need to call this again in the future (preferably after you were notified by
// the wakeup callback). If yes, deallocate all state, and return true (in
// particular, the state ptr becomes invalid, and the wakeup callback will never
// be called again).
bool demux_free_async_finish(struct demux_free_async_state *state)
{
struct demux_internal *in = (struct demux_internal *)state; // reverse lies
pthread_mutex_lock(&in->lock);
bool busy = in->shutdown_async;
pthread_mutex_unlock(&in->lock);
if (busy)
return false;
demux_stop_thread(in->d_user);
demux_dealloc(in);
return true;
}
// Like demux_free(), but trigger an abort, which will force the demuxer to
// terminate immediately. If this wasn't opened with demux_open_url(), there is
// some chance this will accidentally abort other things via demuxer->cancel.
void demux_cancel_and_free(struct demuxer *demuxer)
{
if (!demuxer)
return;
struct stream *s = demuxer->stream;
free_demuxer(demuxer);
free_stream(s);
mp_cancel_trigger(demuxer->cancel);
demux_free(demuxer);
}
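
A minimal sketch of how a caller might drive the asynchronous shutdown path documented above. The polling helper wait_for_wakeup() is hypothetical; a real caller blocks in its own event loop until the demuxer's wakeup callback fires, and must not call demux_free_async_finish() from inside that callback.

    // Sketch only: tear down a demuxer without blocking this thread.
    struct demux_free_async_state *st = demux_free_async(demuxer);
    if (!st) {
        demux_free(demuxer);            // no demuxer thread; free synchronously
    } else {
        // demux_free_async_force(st);  // optional: abort pending network I/O
        while (!demux_free_async_finish(st))
            wait_for_wakeup();          // hypothetical helper; see note above
    }
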
// Start the demuxer thread, which reads ahead packets on its own.
@ -1597,9 +1700,6 @@ static void execute_trackswitch(struct demux_internal *in)
if (in->d_thread->desc->control)
in->d_thread->desc->control(in->d_thread, DEMUXER_CTRL_SWITCHED_TRACKS, 0);
stream_control(in->d_thread->stream, STREAM_CTRL_SET_READAHEAD,
&(int){any_selected});
pthread_mutex_lock(&in->lock);
}
@ -1648,13 +1748,6 @@ static bool thread_work(struct demux_internal *in)
if (read_packet(in))
return true; // read_packet unlocked, so recheck conditions
}
if (in->force_cache_update) {
pthread_mutex_unlock(&in->lock);
update_cache(in);
pthread_mutex_lock(&in->lock);
in->force_cache_update = false;
return true;
}
return false;
}
@ -1663,12 +1756,23 @@ static void *demux_thread(void *pctx)
struct demux_internal *in = pctx;
mpthread_set_name("demux");
pthread_mutex_lock(&in->lock);
while (!in->thread_terminate) {
if (thread_work(in))
continue;
pthread_cond_signal(&in->wakeup);
pthread_cond_wait(&in->wakeup, &in->lock);
}
if (in->shutdown_async) {
pthread_mutex_unlock(&in->lock);
demux_shutdown(in);
pthread_mutex_lock(&in->lock);
in->shutdown_async = false;
if (in->wakeup_cb)
in->wakeup_cb(in->wakeup_cb_ctx);
}
pthread_mutex_unlock(&in->lock);
return NULL;
}
@ -2167,6 +2271,19 @@ static void fixup_metadata(struct demux_internal *in)
}
}
// Return whether "heavy" caching on this stream is enabled. By default, this
// corresponds to whether the source stream is considered in the network. The
// only effect should be adjusting display behavior (of cache stats etc.), and
// possibly switching between which set of options influence cache settings.
bool demux_is_network_cached(demuxer_t *demuxer)
{
struct demux_internal *in = demuxer->in;
bool use_cache = demuxer->is_network;
if (in->opts->enable_cache >= 0)
use_cache = in->opts->enable_cache == 1;
return use_cache;
}
static struct demuxer *open_given_type(struct mpv_global *global,
struct mp_log *log,
const struct demuxer_desc *desc,
@ -2182,6 +2299,7 @@ static struct demuxer *open_given_type(struct mpv_global *global,
*demuxer = (struct demuxer) {
.desc = desc,
.stream = stream,
.cancel = stream->cancel,
.seekable = stream->seekable,
.filepos = -1,
.global = global,
@ -2192,14 +2310,14 @@ static struct demuxer *open_given_type(struct mpv_global *global,
.access_references = opts->access_references,
.events = DEMUX_EVENT_ALL,
.duration = -1,
.extended_ctrls = stream->extended_ctrls,
};
demuxer->seekable = stream->seekable;
if (demuxer->stream->underlying && !demuxer->stream->underlying->seekable)
demuxer->seekable = false;
struct demux_internal *in = demuxer->in = talloc_ptrtype(demuxer, in);
*in = (struct demux_internal){
.log = demuxer->log,
.opts = opts,
.d_thread = talloc(demuxer, struct demuxer),
.d_user = demuxer,
.min_secs = opts->min_secs,
@ -2260,10 +2378,8 @@ static struct demuxer *open_given_type(struct mpv_global *global,
fixup_metadata(in);
in->events = DEMUX_EVENT_ALL;
demux_update(demuxer);
stream_control(demuxer->stream, STREAM_CTRL_SET_READAHEAD,
&(int){params ? params->initial_readahead : false});
int seekable = opts->seekable_cache;
if (demuxer->is_network || stream->caching) {
if (demux_is_network_cached(demuxer)) {
in->min_secs = MPMAX(in->min_secs, opts->min_secs_cache);
if (seekable < 0)
seekable = 1;
@ -2287,7 +2403,7 @@ static struct demuxer *open_given_type(struct mpv_global *global,
return demuxer;
}
free_demuxer(demuxer);
demux_free(demuxer);
return NULL;
}
@ -2296,6 +2412,9 @@ static const int d_request[] = {DEMUX_CHECK_REQUEST, -1};
static const int d_force[] = {DEMUX_CHECK_FORCE, -1};
// params can be NULL
// If params->does_not_own_stream==false, this does _not_ free the stream if
// opening fails. But if it succeeds, a later demux_free() call will free the
// stream.
struct demuxer *demux_open(struct stream *stream, struct demuxer_params *params,
struct mpv_global *global)
{
@ -2335,6 +2454,8 @@ struct demuxer *demux_open(struct stream *stream, struct demuxer_params *params,
if (demuxer) {
talloc_steal(demuxer, log);
log = NULL;
demuxer->in->owns_stream =
params ? !params->does_not_own_stream : false;
goto done;
}
}
@ -2348,28 +2469,36 @@ done:
// Convenience function: open the stream, enable the cache (according to params
// and global opts.), open the demuxer.
// (use free_demuxer_and_stream() to free the underlying stream too)
// Also for some reason may close the opened stream if it's not needed.
// demuxer->cancel is not the cancel parameter, but is its own object that will
// be a slave (mp_cancel_set_parent()) to provided cancel object.
// demuxer->cancel is automatically freed.
struct demuxer *demux_open_url(const char *url,
struct demuxer_params *params,
struct mp_cancel *cancel,
struct mpv_global *global)
struct demuxer_params *params,
struct mp_cancel *cancel,
struct mpv_global *global)
{
struct demuxer_params dummy = {0};
if (!params)
params = &dummy;
assert(!params->does_not_own_stream); // API user error
struct mp_cancel *priv_cancel = mp_cancel_new(NULL);
if (cancel)
mp_cancel_set_parent(priv_cancel, cancel);
struct stream *s = stream_create(url, STREAM_READ | params->stream_flags,
cancel, global);
if (!s)
priv_cancel, global);
if (!s) {
talloc_free(priv_cancel);
return NULL;
if (!params->disable_cache)
stream_enable_cache_defaults(&s);
}
struct demuxer *d = demux_open(s, params, global);
if (d) {
talloc_steal(d->in, priv_cancel);
demux_maybe_replace_stream(d);
} else {
params->demuxer_failed = true;
free_stream(s);
talloc_free(priv_cancel);
}
return d;
}
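
For illustration, a hedged sketch of the calling convention described in the comment above; the URL and the error handling are placeholders.

    // Sketch: demuxer->cancel becomes a slave of `cancel`, so triggering
    // `cancel` (e.g. from another thread) aborts both opening and demuxing.
    struct mp_cancel *cancel = mp_cancel_new(NULL);
    struct demuxer *d = demux_open_url("http://example.com/a.mkv", NULL,
                                       cancel, global);
    if (d) {
        // ... use the demuxer ...
        demux_free(d);                  // also frees the stream and the slave cancel
    }
    talloc_free(cancel);
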
@ -2916,15 +3045,12 @@ static void update_cache(struct demux_internal *in)
// Don't lock while querying the stream.
struct mp_tags *stream_metadata = NULL;
struct stream_cache_info stream_cache_info = {.size = -1};
int64_t stream_size = stream_get_size(stream);
stream_control(stream, STREAM_CTRL_GET_METADATA, &stream_metadata);
stream_control(stream, STREAM_CTRL_GET_CACHE_INFO, &stream_cache_info);
pthread_mutex_lock(&in->lock);
in->stream_size = stream_size;
in->stream_cache_info = stream_cache_info;
if (stream_metadata) {
for (int n = 0; n < in->num_streams; n++) {
struct demux_stream *ds = in->streams[n]->ds;
@ -2939,18 +3065,7 @@ static void update_cache(struct demux_internal *in)
// must be called locked
static int cached_stream_control(struct demux_internal *in, int cmd, void *arg)
{
// If the cache is active, wake up the thread to possibly update cache state.
if (in->stream_cache_info.size >= 0) {
in->force_cache_update = true;
pthread_cond_signal(&in->wakeup);
}
switch (cmd) {
case STREAM_CTRL_GET_CACHE_INFO:
if (in->stream_cache_info.size < 0)
return STREAM_UNSUPPORTED;
*(struct stream_cache_info *)arg = in->stream_cache_info;
return STREAM_OK;
case STREAM_CTRL_GET_SIZE:
if (in->stream_size < 0)
return STREAM_UNSUPPORTED;
@ -3118,7 +3233,7 @@ int demux_stream_control(demuxer_t *demuxer, int ctrl, void *arg)
bool demux_cancel_test(struct demuxer *demuxer)
{
return mp_cancel_test(demuxer->stream->cancel);
return mp_cancel_test(demuxer->cancel);
}
struct demux_chapter *demux_copy_chapter_data(struct demux_chapter *c, int num)

View File

@ -174,12 +174,11 @@ struct demuxer_params {
bool *matroska_was_valid;
struct timeline *timeline;
bool disable_timeline;
bool initial_readahead;
bstr init_fragment;
bool skip_lavf_probing;
bool does_not_own_stream; // if false, stream is free'd on demux_free()
// -- demux_open_url() only
int stream_flags;
bool disable_cache;
// result
bool demuxer_failed;
};
@ -202,6 +201,7 @@ typedef struct demuxer {
bool fully_read;
bool is_network; // opened directly from a network stream
bool access_references; // allow opening other files/URLs
bool extended_ctrls; // supports some of BD/DVD/DVB/TV controls
// Bitmask of DEMUX_EVENT_*
int events;
@ -233,6 +233,9 @@ typedef struct demuxer {
struct mp_tags **update_stream_tags;
int num_update_stream_tags;
// Triggered when ending demuxing forcefully. Usually bound to the stream too.
struct mp_cancel *cancel;
// Since the demuxer can run in its own thread, and the stream is not
// thread-safe, only the demuxer is allowed to access the stream directly.
// You can freely use demux_stream_control() to send STREAM_CTRLs.
@ -245,8 +248,13 @@ typedef struct {
int aid, vid, sid; //audio, video and subtitle id
} demux_program_t;
void free_demuxer(struct demuxer *demuxer);
void free_demuxer_and_stream(struct demuxer *demuxer);
void demux_free(struct demuxer *demuxer);
void demux_cancel_and_free(struct demuxer *demuxer);
struct demux_free_async_state;
struct demux_free_async_state *demux_free_async(struct demuxer *demuxer);
void demux_free_async_force(struct demux_free_async_state *state);
bool demux_free_async_finish(struct demux_free_async_state *state);
void demux_add_packet(struct sh_stream *stream, demux_packet_t *dp);
void demuxer_feed_caption(struct sh_stream *stream, demux_packet_t *dp);
@ -307,6 +315,7 @@ void demux_metadata_changed(demuxer_t *demuxer);
void demux_update(demuxer_t *demuxer);
void demux_disable_cache(demuxer_t *demuxer);
bool demux_is_network_cached(demuxer_t *demuxer);
struct sh_stream *demuxer_stream_by_demuxer_id(struct demuxer *d,
enum stream_type t, int id);

View File

@ -285,15 +285,15 @@ static int d_open(demuxer_t *demuxer, enum demux_check check)
if (check != DEMUX_CHECK_FORCE)
return -1;
struct demuxer_params params = {.force_format = "+lavf"};
struct demuxer_params params = {
.force_format = "+lavf",
.does_not_own_stream = true,
};
struct stream *cur = demuxer->stream;
const char *sname = "";
while (cur) {
if (cur->info)
sname = cur->info->name;
cur = cur->underlying; // down the caching chain
}
if (cur->info)
sname = cur->info->name;
p->is_cdda = strcmp(sname, "cdda") == 0;
p->is_dvd = strcmp(sname, "dvd") == 0 ||
@ -342,13 +342,15 @@ static int d_open(demuxer_t *demuxer, enum demux_check check)
if (stream_control(demuxer->stream, STREAM_CTRL_GET_TIME_LENGTH, &len) >= 1)
demuxer->duration = len;
demuxer->extended_ctrls = true;
return 0;
}
static void d_close(demuxer_t *demuxer)
{
struct priv *p = demuxer->priv;
free_demuxer(p->slave);
demux_free(p->slave);
}
static int d_control(demuxer_t *demuxer, int cmd, void *arg)

View File

@ -42,6 +42,7 @@
#include "common/av_common.h"
#include "misc/bstr.h"
#include "misc/charset_conv.h"
#include "misc/thread_tools.h"
#include "stream/stream.h"
#include "demux.h"
@ -781,8 +782,7 @@ static void update_metadata(demuxer_t *demuxer)
static int interrupt_cb(void *ctx)
{
struct demuxer *demuxer = ctx;
lavf_priv_t *priv = demuxer->priv;
return mp_cancel_test(priv->stream->cancel);
return mp_cancel_test(demuxer->cancel);
}
static int block_io_open(struct AVFormatContext *s, AVIOContext **pb,

View File

@ -39,6 +39,7 @@
#include "options/options.h"
#include "options/path.h"
#include "misc/bstr.h"
#include "misc/thread_tools.h"
#include "common/common.h"
#include "common/playlist.h"
#include "stream/stream.h"
@ -171,7 +172,6 @@ static bool check_file_seg(struct tl_ctx *ctx, char *filename, int segment)
.matroska_wanted_segment = segment,
.matroska_was_valid = &was_valid,
.disable_timeline = true,
.disable_cache = true,
};
struct mp_cancel *cancel = ctx->tl->cancel;
if (mp_cancel_test(cancel))
@ -215,21 +215,12 @@ static bool check_file_seg(struct tl_ctx *ctx, char *filename, int segment)
}
}
if (stream_wants_cache(d->stream, ctx->opts->stream_cache)) {
free_demuxer_and_stream(d);
params.disable_cache = false;
params.matroska_wanted_uids = ctx->uids; // potentially reallocated, same data
d = demux_open_url(filename, &params, cancel, ctx->global);
if (!d)
return false;
}
ctx->sources[i] = d;
return true;
}
}
free_demuxer_and_stream(d);
demux_free(d);
return was_valid;
}
@ -263,7 +254,8 @@ static void find_ordered_chapter_sources(struct tl_ctx *ctx)
MP_INFO(ctx, "Loading references from '%s'.\n",
opts->ordered_chapters_files);
struct playlist *pl =
playlist_parse_file(opts->ordered_chapters_files, ctx->global);
playlist_parse_file(opts->ordered_chapters_files,
ctx->tl->cancel, ctx->global);
talloc_steal(tmp, pl);
for (struct playlist_entry *e = pl ? pl->first : NULL; e; e = e->next)
MP_TARRAY_APPEND(tmp, filenames, num_filenames, e->filename);
@ -515,7 +507,7 @@ void build_ordered_chapter_timeline(struct timeline *tl)
.global = tl->global,
.tl = tl,
.demuxer = demuxer,
.opts = mp_get_config_group(ctx, tl->global, NULL),
.opts = mp_get_config_group(ctx, tl->global, GLOBAL_CONFIG),
};
if (!ctx->opts->ordered_chapters || !demuxer->access_references) {

View File

@ -25,6 +25,7 @@
#include "options/options.h"
#include "common/msg.h"
#include "common/playlist.h"
#include "misc/thread_tools.h"
#include "options/path.h"
#include "stream/stream.h"
#include "osdep/io.h"

View File

@ -147,7 +147,7 @@ static void close_lazy_segments(struct demuxer *demuxer)
for (int n = 0; n < p->num_segments; n++) {
struct segment *seg = p->segments[n];
if (seg != p->current && seg->d && seg->lazy) {
free_demuxer_and_stream(seg->d);
demux_free(seg->d);
seg->d = NULL;
}
}
@ -167,7 +167,7 @@ static void reopen_lazy_segments(struct demuxer *demuxer)
.skip_lavf_probing = true,
};
p->current->d = demux_open_url(p->current->url, &params,
demuxer->stream->cancel, demuxer->global);
demuxer->cancel, demuxer->global);
if (!p->current->d && !demux_cancel_test(demuxer))
MP_ERR(demuxer, "failed to load segment\n");
if (p->current->d)
@ -431,7 +431,7 @@ static void d_close(struct demuxer *demuxer)
p->current = NULL;
close_lazy_segments(demuxer);
timeline_destroy(p->tl);
free_demuxer(master);
demux_free(master);
}
static int d_control(struct demuxer *demuxer, int cmd, void *arg)

View File

@ -181,6 +181,8 @@ no_audio:
if(funcs->control(tvh->priv,TVI_CONTROL_VID_SET_GAIN,&tvh->tv_param->gain)!=TVI_CONTROL_TRUE)
MP_WARN(tvh, "Unable to set gain control!\n");
demuxer->extended_ctrls = true;
return 0;
}

View File

@ -14,7 +14,7 @@ struct timeline *timeline_load(struct mpv_global *global, struct mp_log *log,
*tl = (struct timeline){
.global = global,
.log = log,
.cancel = demuxer->stream->cancel,
.cancel = demuxer->cancel,
.demuxer = demuxer,
.track_layout = demuxer,
};
@ -34,9 +34,9 @@ void timeline_destroy(struct timeline *tl)
for (int n = 0; n < tl->num_sources; n++) {
struct demuxer *d = tl->sources[n];
if (d != tl->demuxer && d != tl->track_layout)
free_demuxer_and_stream(d);
demux_free(d);
}
if (tl->track_layout && tl->track_layout != tl->demuxer)
free_demuxer_and_stream(tl->track_layout);
demux_free(tl->track_layout);
talloc_free(tl);
}

View File

@ -133,9 +133,9 @@
#_ cycle video
#T cycle ontop # toggle video window ontop of other windows
#f cycle fullscreen # toggle fullscreen
#s async screenshot # take a screenshot
#S async screenshot video # ...without subtitles
#Ctrl+s async screenshot window # ...with subtitles and OSD, and scaled
#s screenshot # take a screenshot
#S screenshot video # ...without subtitles
#Ctrl+s screenshot window # ...with subtitles and OSD, and scaled
#Alt+s screenshot each-frame # automatically screenshot every frame
#w add panscan -0.1 # zoom out with -panscan 0 -fs
#W add panscan +0.1 # in

View File

@ -27,7 +27,7 @@
#include "config.h"
#include "options/options.h"
#include "common/msg.h"
#include "options/m_config.h"
#include "osdep/timer.h"
#include "demux/demux.h"
@ -50,7 +50,7 @@
struct priv {
struct mp_filter *f;
struct mp_log *log;
struct MPOpts *opts;
struct m_config_cache *opt_cache;
struct sh_stream *header;
struct mp_codec_params *codec;
@ -162,7 +162,8 @@ struct mp_decoder_list *audio_decoder_list(void)
bool mp_decoder_wrapper_reinit(struct mp_decoder_wrapper *d)
{
struct priv *p = d->f->priv;
struct MPOpts *opts = p->opts;
struct MPOpts *opts = p->opt_cache->opts;
m_config_cache_update(p->opt_cache);
if (p->decoder)
talloc_free(p->decoder->f);
@ -236,9 +237,10 @@ static bool is_valid_peak(float sig_peak)
static void fix_image_params(struct priv *p,
struct mp_image_params *params)
{
struct MPOpts *opts = p->opts;
struct mp_image_params m = *params;
struct mp_codec_params *c = p->codec;
struct MPOpts *opts = p->opt_cache->opts;
m_config_cache_update(p->opt_cache);
MP_VERBOSE(p, "Decoder format: %s\n", mp_image_params_to_str(params));
p->dec_format = *params;
@ -302,7 +304,8 @@ static void fix_image_params(struct priv *p,
static void process_video_frame(struct priv *p, struct mp_image *mpi)
{
struct MPOpts *opts = p->opts;
struct MPOpts *opts = p->opt_cache->opts;
m_config_cache_update(p->opt_cache);
// Note: the PTS is reordered, but the DTS is not. Both should be monotonic.
double pts = mpi->pts;
@ -645,13 +648,15 @@ struct mp_decoder_wrapper *mp_decoder_wrapper_create(struct mp_filter *parent,
struct priv *p = f->priv;
struct mp_decoder_wrapper *w = &p->public;
p->opts = f->global->opts;
p->opt_cache = m_config_cache_alloc(p, f->global, GLOBAL_CONFIG);
p->log = f->log;
p->f = f;
p->header = src;
p->codec = p->header->codec;
w->f = f;
struct MPOpts *opts = p->opt_cache->opts;
mp_filter_add_pin(f, MP_PIN_OUT, "out");
if (p->header->type == STREAM_VIDEO) {
@ -661,8 +666,8 @@ struct mp_decoder_wrapper *mp_decoder_wrapper_create(struct mp_filter *parent,
MP_VERBOSE(p, "Container reported FPS: %f\n", p->public.fps);
if (p->opts->force_fps) {
p->public.fps = p->opts->force_fps;
if (opts->force_fps) {
p->public.fps = opts->force_fps;
MP_INFO(p, "FPS forced to %5.3f.\n", p->public.fps);
MP_INFO(p, "Use --no-correct-pts to force FPS based timing.\n");
}

View File

@ -18,6 +18,7 @@
#include <stddef.h>
#include "misc/bstr.h"
#include "misc/node.h"
#include "common/common.h"
#include "common/msg.h"
#include "options/m_option.h"
@ -27,15 +28,13 @@
#include "libmpv/client.h"
const struct mp_cmd_def mp_cmd_list = {
.name = "list",
};
static void destroy_cmd(void *ptr)
{
struct mp_cmd *cmd = ptr;
for (int n = 0; n < cmd->nargs; n++)
m_option_free(cmd->args[n].type, &cmd->args[n].v);
for (int n = 0; n < cmd->nargs; n++) {
if (cmd->args[n].type)
m_option_free(cmd->args[n].type, &cmd->args[n].v);
}
}
struct flag {
@ -52,7 +51,8 @@ static const struct flag cmd_flags[] = {
{"expand-properties", 0, MP_EXPAND_PROPERTIES},
{"raw", MP_EXPAND_PROPERTIES, 0},
{"repeatable", 0, MP_ALLOW_REPEAT},
{"async", 0, MP_ASYNC_CMD},
{"async", MP_SYNC_CMD, MP_ASYNC_CMD},
{"sync", MP_ASYNC_CMD, MP_SYNC_CMD},
{0}
};
@ -114,34 +114,93 @@ static const struct m_option *get_arg_type(const struct mp_cmd_def *cmd, int i)
return opt && opt->type ? opt : NULL;
}
// Verify that there are missing args, fill in missing optional args.
// Return the name of the argument, possibly as stack allocated string (which is
// why this is a macro, and out of laziness). Otherwise as get_arg_type().
#define get_arg_name(cmd, i) \
((i) < MP_CMD_DEF_MAX_ARGS && (cmd)->args[(i)].name && \
(cmd)->args[(i)].name[0] \
? (cmd)->args[(i)].name : mp_tprintf(10, "%d", (i) + 1))
// Verify that there are no missing args, fill in missing optional args.
static bool finish_cmd(struct mp_log *log, struct mp_cmd *cmd)
{
for (int i = cmd->nargs; i < MP_CMD_DEF_MAX_ARGS; i++) {
for (int i = 0; i < MP_CMD_DEF_MAX_ARGS; i++) {
// (type==NULL is used for yet unset arguments)
if (i < cmd->nargs && cmd->args[i].type)
continue;
const struct m_option *opt = get_arg_type(cmd->def, i);
if (!opt || is_vararg(cmd->def, i))
if (i >= cmd->nargs && (!opt || is_vararg(cmd->def, i)))
break;
if (!opt->defval && !(opt->flags & MP_CMD_OPT_ARG)) {
mp_err(log, "Command %s: more than %d arguments required.\n",
cmd->name, cmd->nargs);
mp_err(log, "Command %s: required argument %s not set.\n",
cmd->name, get_arg_name(cmd->def, i));
return false;
}
struct mp_cmd_arg arg = {.type = opt};
if (opt->defval)
m_option_copy(opt, &arg.v, opt->defval);
MP_TARRAY_APPEND(cmd, cmd->args, cmd->nargs, arg);
assert(i <= cmd->nargs);
if (i == cmd->nargs) {
MP_TARRAY_APPEND(cmd, cmd->args, cmd->nargs, arg);
} else {
cmd->args[i] = arg;
}
}
if (!(cmd->flags & (MP_ASYNC_CMD | MP_SYNC_CMD)))
cmd->flags |= cmd->def->default_async ? MP_ASYNC_CMD : MP_SYNC_CMD;
return true;
}
struct mp_cmd *mp_input_parse_cmd_node(struct mp_log *log, mpv_node *node)
static bool set_node_arg(struct mp_log *log, struct mp_cmd *cmd, int i,
mpv_node *val)
{
struct mp_cmd *cmd = talloc_ptrtype(NULL, cmd);
talloc_set_destructor(cmd, destroy_cmd);
*cmd = (struct mp_cmd) { .scale = 1, .scale_units = 1 };
const char *name = get_arg_name(cmd->def, i);
if (node->format != MPV_FORMAT_NODE_ARRAY)
goto error;
const struct m_option *opt = get_arg_type(cmd->def, i);
if (!opt) {
mp_err(log, "Command %s: has only %d arguments.\n", cmd->name, i);
return false;
}
if (i < cmd->nargs && cmd->args[i].type) {
mp_err(log, "Command %s: argument %s was already set.\n", cmd->name, name);
return false;
}
struct mp_cmd_arg arg = {.type = opt};
void *dst = &arg.v;
if (val->format == MPV_FORMAT_STRING) {
int r = m_option_parse(log, opt, bstr0(cmd->name),
bstr0(val->u.string), dst);
if (r < 0) {
mp_err(log, "Command %s: argument %s can't be parsed: %s.\n",
cmd->name, name, m_option_strerror(r));
return false;
}
} else {
int r = m_option_set_node(opt, dst, val);
if (r < 0) {
mp_err(log, "Command %s: argument %s has incompatible type.\n",
cmd->name, name);
return false;
}
}
// (leave unset arguments blank, to be set later or checked by finish_cmd())
while (i >= cmd->nargs) {
struct mp_cmd_arg t = {0};
MP_TARRAY_APPEND(cmd, cmd->args, cmd->nargs, t);
}
cmd->args[i] = arg;
return true;
}
static bool cmd_node_array(struct mp_log *log, struct mp_cmd *cmd, mpv_node *node)
{
assert(node->format == MPV_FORMAT_NODE_ARRAY);
mpv_node_list *args = node->u.list;
int cur = 0;
@ -157,46 +216,96 @@ struct mp_cmd *mp_input_parse_cmd_node(struct mp_log *log, mpv_node *node)
if (cur < args->num && args->values[cur].format == MPV_FORMAT_STRING)
cmd_name = bstr0(args->values[cur++].u.string);
if (!find_cmd(log, cmd, cmd_name))
goto error;
return false;
int first = cur;
for (int i = 0; i < args->num - first; i++) {
const struct m_option *opt = get_arg_type(cmd->def, i);
if (!opt) {
mp_err(log, "Command %s: has only %d arguments.\n", cmd->name, i);
goto error;
}
mpv_node *val = &args->values[cur++];
struct mp_cmd_arg arg = {.type = opt};
void *dst = &arg.v;
if (val->format == MPV_FORMAT_STRING) {
int r = m_option_parse(log, opt, bstr0(cmd->name),
bstr0(val->u.string), dst);
if (r < 0) {
mp_err(log, "Command %s: argument %d can't be parsed: %s.\n",
cmd->name, i + 1, m_option_strerror(r));
goto error;
}
} else {
int r = m_option_set_node(opt, dst, val);
if (r < 0) {
mp_err(log, "Command %s: argument %d has incompatible type.\n",
cmd->name, i + 1);
goto error;
}
}
MP_TARRAY_APPEND(cmd, cmd->args, cmd->nargs, arg);
if (!set_node_arg(log, cmd, cmd->nargs, &args->values[cur++]))
return false;
}
if (!finish_cmd(log, cmd))
goto error;
return cmd;
error:
talloc_free(cmd);
return NULL;
return true;
}
static bool cmd_node_map(struct mp_log *log, struct mp_cmd *cmd, mpv_node *node)
{
assert(node->format == MPV_FORMAT_NODE_MAP);
mpv_node_list *args = node->u.list;
mpv_node *name = node_map_get(node, "name");
if (!name || name->format != MPV_FORMAT_STRING)
return false;
if (!find_cmd(log, cmd, bstr0(name->u.string)))
return false;
if (cmd->def->vararg) {
mp_err(log, "Command %s: this command uses a variable number of "
"arguments, which does not work with named arguments.\n",
cmd->name);
return false;
}
for (int n = 0; n < args->num; n++) {
const char *key = args->keys[n];
mpv_node *val = &args->values[n];
if (strcmp(key, "name") == 0) {
// already handled above
} else if (strcmp(key, "_flags") == 0) {
if (val->format != MPV_FORMAT_NODE_ARRAY)
return false;
mpv_node_list *flags = val->u.list;
for (int i = 0; i < flags->num; i++) {
if (flags->values[i].format != MPV_FORMAT_STRING)
return false;
if (!apply_flag(cmd, bstr0(flags->values[i].u.string)))
return false;
}
} else {
int arg = -1;
for (int i = 0; i < MP_CMD_DEF_MAX_ARGS; i++) {
const char *arg_name = cmd->def->args[i].name;
if (arg_name && arg_name[0] && strcmp(key, arg_name) == 0) {
arg = i;
break;
}
}
if (arg < 0) {
mp_err(log, "Command %s: no argument %s.\n", cmd->name, key);
return false;
}
if (!set_node_arg(log, cmd, arg, val))
return false;
}
}
return true;
}
struct mp_cmd *mp_input_parse_cmd_node(struct mp_log *log, mpv_node *node)
{
struct mp_cmd *cmd = talloc_ptrtype(NULL, cmd);
talloc_set_destructor(cmd, destroy_cmd);
*cmd = (struct mp_cmd) { .scale = 1, .scale_units = 1 };
bool res = false;
if (node->format == MPV_FORMAT_NODE_ARRAY) {
res = cmd_node_array(log, cmd, node);
} else if (node->format == MPV_FORMAT_NODE_MAP) {
res = cmd_node_map(log, cmd, node);
}
res = res && finish_cmd(log, cmd);
if (!res)
TA_FREEP(&cmd);
return cmd;
}
static bool read_token(bstr str, bstr *out_rest, bstr *out_token)
{
@ -443,34 +552,6 @@ void mp_cmd_dump(struct mp_log *log, int msgl, char *header, struct mp_cmd *cmd)
mp_msg(log, msgl, "]\n");
}
// 0: no, 1: maybe, 2: sure
static int is_abort_cmd(struct mp_cmd *cmd)
{
if (cmd->def->is_abort)
return 2;
if (cmd->def->is_soft_abort)
return 1;
if (cmd->def == &mp_cmd_list) {
int r = 0;
for (struct mp_cmd *sub = cmd->args[0].v.p; sub; sub = sub->queue_next) {
int x = is_abort_cmd(sub);
r = MPMAX(r, x);
}
return r;
}
return 0;
}
bool mp_input_is_maybe_abort_cmd(struct mp_cmd *cmd)
{
return is_abort_cmd(cmd) >= 1;
}
bool mp_input_is_abort_cmd(struct mp_cmd *cmd)
{
return is_abort_cmd(cmd) >= 2;
}
bool mp_input_is_repeatable_cmd(struct mp_cmd *cmd)
{
return (cmd->def->allow_auto_repeat) || cmd->def == &mp_cmd_list ||
@ -488,12 +569,13 @@ void mp_print_cmd_list(struct mp_log *out)
const struct mp_cmd_def *def = &mp_cmds[i];
mp_info(out, "%-20.20s", def->name);
for (int j = 0; j < MP_CMD_DEF_MAX_ARGS && def->args[j].type; j++) {
const char *type = def->args[j].type->name;
if (def->args[j].defval)
mp_info(out, " [%s]", type);
else
mp_info(out, " %s", type);
const struct m_option *arg = &def->args[j];
bool is_opt = arg->defval || (arg->flags & MP_CMD_OPT_ARG);
mp_info(out, " %s%s=%s%s", is_opt ? "[" : "", arg->name,
arg->type->name, is_opt ? "]" : "");
}
if (def->vararg)
mp_info(out, "..."); // essentially append to last argument
mp_info(out, "\n");
}
}

View File

@ -39,9 +39,26 @@ struct mp_cmd_def {
bool on_updown; // always emit it on both up and down key events
bool vararg; // last argument can be given 0 to multiple times
bool scalable;
bool is_abort;
bool is_soft_abort;
bool is_ignore;
bool default_async; // default to MP_ASYNC flag if none set by user
// If you set this, handler() must ensure mp_cmd_ctx_complete() is called
// at some point (can be after handler() returns). If you don't set it, the
// common code will call mp_cmd_ctx_complete() when handler() returns.
// You must make sure that the core cannot disappear while you do work. The
// common code keeps the core referenced only until handler() returns.
bool exec_async;
// If set, handler() is run on a separate worker thread. This means you can
// use mp_core_[un]lock() to temporarily unlock and re-lock the core (while
// unlocked, you have no synchronized access to mpctx, but you can do long
// running operations without blocking playback or input handling).
bool spawn_thread;
// If this is set, mp_cmd_ctx.abort is set. Set this if handler() can do
// asynchronous abort of the command, and explicitly uses mp_cmd_ctx.abort.
// (Not setting it when it's not needed can save resources.)
bool can_abort;
// If playback ends, and the command is still running, an abort is
// automatically triggered.
bool abort_on_playback_end;
};
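
To make the new fields concrete, a hypothetical table entry might opt in like this (the command name is made up; a real entry in player/command.c also supplies its handler and argument list):

    { .name = "my-slow-cmd",
      .spawn_thread = true,             // run handler() on a worker thread
      .can_abort = true,                // handler() honors mp_cmd_ctx.abort
      .abort_on_playback_end = true },  // auto-abort when playback ends
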
enum mp_cmd_flags {
@ -51,7 +68,11 @@ enum mp_cmd_flags {
MP_ON_OSD_MSG = 4, // force a message, if applicable
MP_EXPAND_PROPERTIES = 8, // expand strings as properties
MP_ALLOW_REPEAT = 16, // if used as keybinding, allow key repeat
MP_ASYNC_CMD = 32,
// Exactly one of the following 2 bits is set. Which one is used depends on
// the command parser (prefixes and mp_cmd_def.default_async).
MP_ASYNC_CMD = 32, // do not wait for command to complete
MP_SYNC_CMD = 64, // block on command completion
MP_ON_OSD_FLAGS = MP_ON_OSD_NO | MP_ON_OSD_AUTO |
MP_ON_OSD_BAR | MP_ON_OSD_MSG,
@ -64,6 +85,7 @@ struct mp_cmd_arg {
const struct m_option *type;
union {
int i;
int64_t i64;
float f;
double d;
char *s;
@ -73,7 +95,6 @@ struct mp_cmd_arg {
};
typedef struct mp_cmd {
int id;
char *name;
struct mp_cmd_arg *args;
int nargs;
@ -98,11 +119,6 @@ typedef struct mp_cmd {
extern const struct mp_cmd_def mp_cmds[];
extern const struct mp_cmd_def mp_cmd_list;
// Executing this command will maybe abort playback (play something else, or quit).
bool mp_input_is_maybe_abort_cmd(struct mp_cmd *cmd);
// This command will definitely abort playback.
bool mp_input_is_abort_cmd(struct mp_cmd *cmd);
bool mp_input_is_repeatable_cmd(struct mp_cmd *cmd);
bool mp_input_is_scalable_cmd(struct mp_cmd *cmd);

View File

@ -153,9 +153,6 @@ struct input_ctx {
struct cmd_queue cmd_queue;
void (*cancel)(void *cancel_ctx);
void *cancel_ctx;
void (*wakeup_cb)(void *ctx);
void *wakeup_ctx;
};
@ -531,13 +528,11 @@ static void release_down_cmd(struct input_ctx *ictx, bool drop_current)
}
// We don't want to append to the command queue indefinitely, because that
// could lead to situations where recovery would take too long. On the other
// hand, don't drop commands that will abort playback.
// could lead to situations where recovery would take too long.
static bool should_drop_cmd(struct input_ctx *ictx, struct mp_cmd *cmd)
{
struct cmd_queue *queue = &ictx->cmd_queue;
return queue_count_cmds(queue) >= ictx->opts->key_fifo_size &&
!mp_input_is_abort_cmd(cmd);
return queue_count_cmds(queue) >= ictx->opts->key_fifo_size;
}
static struct mp_cmd *resolve_key(struct input_ctx *ictx, int code)
@ -883,26 +878,10 @@ static void adjust_max_wait_time(struct input_ctx *ictx, double *time)
}
}
static bool test_abort_cmd(struct input_ctx *ictx, struct mp_cmd *new)
{
if (!mp_input_is_maybe_abort_cmd(new))
return false;
if (mp_input_is_abort_cmd(new))
return true;
// Abort only if there are going to be at least 2 commands in the queue.
for (struct mp_cmd *cmd = ictx->cmd_queue.first; cmd; cmd = cmd->queue_next) {
if (mp_input_is_maybe_abort_cmd(cmd))
return true;
}
return false;
}
int mp_input_queue_cmd(struct input_ctx *ictx, mp_cmd_t *cmd)
{
input_lock(ictx);
if (cmd) {
if (ictx->cancel && test_abort_cmd(ictx, cmd))
ictx->cancel(ictx->cancel_ctx);
queue_add_tail(&ictx->cmd_queue, cmd);
mp_input_wakeup(ictx);
}
@ -1391,8 +1370,11 @@ void mp_input_load_config(struct input_ctx *ictx)
}
#if HAVE_WIN32_PIPES
if (ictx->global->opts->input_file && *ictx->global->opts->input_file)
mp_input_pipe_add(ictx, ictx->global->opts->input_file);
char *ifile;
mp_read_option_raw(ictx->global, "input-file", &m_option_type_string, &ifile);
if (ifile && ifile[0])
mp_input_pipe_add(ictx, ifile);
talloc_free(ifile);
#endif
input_unlock(ictx);
@ -1423,14 +1405,6 @@ void mp_input_uninit(struct input_ctx *ictx)
talloc_free(ictx);
}
void mp_input_set_cancel(struct input_ctx *ictx, void (*cb)(void *c), void *c)
{
input_lock(ictx);
ictx->cancel = cb;
ictx->cancel_ctx = c;
input_unlock(ictx);
}
bool mp_input_use_alt_gr(struct input_ctx *ictx)
{
input_lock(ictx);

View File

@ -193,10 +193,6 @@ double mp_input_get_delay(struct input_ctx *ictx);
// Wake up sleeping input loop from another thread.
void mp_input_wakeup(struct input_ctx *ictx);
// Used to asynchronously abort playback. Needed because the core still can
// block on network in some situations.
void mp_input_set_cancel(struct input_ctx *ictx, void (*cb)(void *c), void *c);
// If this returns true, use Right Alt key as Alt Gr to produce special
// characters. If false, count Right Alt as the modifier Alt key.
bool mp_input_use_alt_gr(struct input_ctx *ictx);

View File

@ -36,6 +36,7 @@
#include "common/msg.h"
#include "input/input.h"
#include "libmpv/client.h"
#include "options/m_config.h"
#include "options/options.h"
#include "options/path.h"
#include "player/client.h"
@ -386,7 +387,7 @@ done:
struct mp_ipc_ctx *mp_init_ipc(struct mp_client_api *client_api,
struct mpv_global *global)
{
struct MPOpts *opts = global->opts;
struct MPOpts *opts = mp_get_config_group(NULL, global, GLOBAL_CONFIG);
struct mp_ipc_ctx *arg = talloc_ptrtype(NULL, arg);
*arg = (struct mp_ipc_ctx){
@ -397,10 +398,12 @@ struct mp_ipc_ctx *mp_init_ipc(struct mp_client_api *client_api,
};
char *input_file = mp_get_user_path(arg, global, opts->input_file);
talloc_free(opts);
if (input_file && *input_file)
ipc_start_client_text(arg, input_file);
if (!opts->ipc_path || !*opts->ipc_path)
if (!arg->path || !arg->path[0])
goto out;
if (mp_make_wakeup_pipe(arg->death_pipe) < 0)

View File

@ -29,6 +29,7 @@
#include "common/msg.h"
#include "input/input.h"
#include "libmpv/client.h"
#include "options/m_config.h"
#include "options/options.h"
#include "player/client.h"
@ -449,7 +450,7 @@ done:
struct mp_ipc_ctx *mp_init_ipc(struct mp_client_api *client_api,
struct mpv_global *global)
{
struct MPOpts *opts = global->opts;
struct MPOpts *opts = mp_get_config_group(NULL, global, GLOBAL_CONFIG);
struct mp_ipc_ctx *arg = talloc_ptrtype(NULL, arg);
*arg = (struct mp_ipc_ctx){
@ -478,12 +479,14 @@ struct mp_ipc_ctx *mp_init_ipc(struct mp_client_api *client_api,
if (pthread_create(&arg->thread, NULL, ipc_thread, arg))
goto out;
talloc_free(opts);
return arg;
out:
if (arg->death_event)
CloseHandle(arg->death_event);
talloc_free(arg);
talloc_free(opts);
return NULL;
}

View File

@ -20,23 +20,12 @@
#include "common/msg.h"
#include "input/input.h"
#include "misc/json.h"
#include "misc/node.h"
#include "options/m_option.h"
#include "options/options.h"
#include "options/path.h"
#include "player/client.h"
static mpv_node *mpv_node_map_get(mpv_node *src, const char *key)
{
if (src->format != MPV_FORMAT_NODE_MAP)
return NULL;
for (int i = 0; i < src->u.list->num; i++)
if (!strcmp(key, src->u.list->keys[i]))
return &src->u.list->values[i];
return NULL;
}
static mpv_node *mpv_node_array_get(mpv_node *src, int index)
{
if (src->format != MPV_FORMAT_NODE_ARRAY)
@ -217,9 +206,13 @@ static char *json_execute_command(struct mpv_handle *client, void *ta_parent,
goto error;
}
reqid_node = mpv_node_map_get(&msg_node, "request_id");
reqid_node = node_map_get(&msg_node, "request_id");
if (reqid_node && reqid_node->format != MPV_FORMAT_INT64) {
mp_warn(log, "'request_id' must be an integer. Using other types is "
"deprecated and will trigger an error in the future!\n");
}
mpv_node *cmd_node = mpv_node_map_get(&msg_node, "command");
mpv_node *cmd_node = node_map_get(&msg_node, "command");
if (!cmd_node ||
(cmd_node->format != MPV_FORMAT_NODE_ARRAY) ||
!cmd_node->u.list->num)
@ -415,6 +408,8 @@ error:
*/
if (reqid_node) {
mpv_node_map_add(ta_parent, &reply_node, "request_id", reqid_node);
} else {
mpv_node_map_add_int64(ta_parent, &reply_node, "request_id", 0);
}
mpv_node_map_add_string(ta_parent, &reply_node, "error", mpv_error_string(rc));

View File

@ -107,8 +107,9 @@ extern "C" {
* careful not to accidentally interpret the mpv_event->reply_userdata if an
* event is not a reply. (For non-replies, this field is set to 0.)
*
* Currently, asynchronous calls are always strictly ordered (even with
* synchronous calls) for each client, although that may change in the future.
* Asynchronous calls may be reordered arbitrarily with other synchronous
* and asynchronous calls. If you want a guaranteed order, you need to wait
* until asynchronous calls report completion before doing the next call.
*
* Multithreading
* --------------
@ -195,6 +196,18 @@ extern "C" {
* or change the underlying datatypes. It might be a good idea to prefer
* MPV_FORMAT_STRING over other types to decouple your code from potential
* mpv changes.
*
* Future changes
* --------------
*
* These are the planned changes that will most likely be done on the next major
* bump of the library:
*
* - remove all symbols and include files that are marked as deprecated
* - reassign enum numerical values to remove gaps
* - remove the mpv_opengl_init_params.extra_exts field
* - change the type of mpv_event_end_file.reason
* - disable all events by default
*/
/**
@ -210,7 +223,7 @@ extern "C" {
* relational operators (<, >, <=, >=).
*/
#define MPV_MAKE_VERSION(major, minor) (((major) << 16) | (minor) | 0UL)
#define MPV_CLIENT_API_VERSION MPV_MAKE_VERSION(1, 102)
#define MPV_CLIENT_API_VERSION MPV_MAKE_VERSION(1, 103)
/**
* The API user is allowed to "#define MPV_ENABLE_DEPRECATED 0" before
@ -928,10 +941,25 @@ int mpv_command(mpv_handle *ctx, const char **args);
*
* Does not use OSD and string expansion by default.
*
* @param[in] args mpv_node with format set to MPV_FORMAT_NODE_ARRAY; each entry
* is an argument using an arbitrary format (the format must be
* compatible to the used command). Usually, the first item is
* the command name (as MPV_FORMAT_STRING).
* The args argument can have one of the following formats:
*
* MPV_FORMAT_NODE_ARRAY:
* Positional arguments. Each entry is an argument using an arbitrary
* format (the format must be compatible to the used command). Usually,
* the first item is the command name (as MPV_FORMAT_STRING). The order
* of arguments is as documented in each command description.
*
* MPV_FORMAT_NODE_MAP:
* Named arguments. This requires at least an entry with the key "name"
* to be present, which must be a string, and contains the command name.
* The special entry "_flags" is optional, and if present, must be an
* array of strings, each being a command prefix to apply. All other
* entries are interpreted as arguments. They must use the argument names
* as documented in each command description. Some commands do not
* support named arguments at all, and must use MPV_FORMAT_NODE_ARRAY.
*
* @param[in] args mpv_node with format set to one of the values documented
* above (see there for details)
* @param[out] result Optional, pass NULL if unused. If not NULL, and if the
* function succeeds, this is set to command-specific return
* data. You must call mpv_free_node_contents() to free it
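
A hedged C sketch of the MPV_FORMAT_NODE_MAP form described above; the command and its argument name are illustrative, and the manpage lists the definitive names per command.

    // Build {"name": "loadfile", "url": "test.mkv"} and run it synchronously.
    char *keys[] = {"name", "url"};
    mpv_node values[] = {
        {.format = MPV_FORMAT_STRING, .u.string = "loadfile"},
        {.format = MPV_FORMAT_STRING, .u.string = "test.mkv"},
    };
    mpv_node_list list = {.num = 2, .keys = keys, .values = values};
    mpv_node args = {.format = MPV_FORMAT_NODE_MAP, .u.list = &list};
    mpv_node result;
    if (mpv_command_node(ctx, &args, &result) >= 0)
        mpv_free_node_contents(&result);
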
@ -954,14 +982,11 @@ int mpv_command_string(mpv_handle *ctx, const char *args);
* Same as mpv_command, but run the command asynchronously.
*
* Commands are executed asynchronously. You will receive a
* MPV_EVENT_COMMAND_REPLY event. (This event will also have an
* error code set if running the command failed.)
* MPV_EVENT_COMMAND_REPLY event. This event will also have an
* error code set if running the command failed. For commands that
* return data, the data is put into mpv_event_command.result.
*
* This has nothing to do with the "async" command prefix, although they might
* be unified in the future. For now, calling this API means that the command
* will be synchronously executed on the core, without blocking the API user.
*
* * Safe to be called from mpv render API threads.
* Safe to be called from mpv render API threads.
*
* @param reply_userdata the value mpv_event.reply_userdata of the reply will
* be set to (see section about asynchronous calls)
@ -976,8 +1001,7 @@ int mpv_command_async(mpv_handle *ctx, uint64_t reply_userdata,
* function is to mpv_command_node() what mpv_command_async() is to
* mpv_command().
*
* See mpv_command_async() for details. Retrieving the result is not
* supported yet.
* See mpv_command_async() for details.
*
* Safe to be called from mpv render API threads.
*
@ -989,6 +1013,38 @@ int mpv_command_async(mpv_handle *ctx, uint64_t reply_userdata,
int mpv_command_node_async(mpv_handle *ctx, uint64_t reply_userdata,
mpv_node *args);
/**
* Signal to all async requests with the matching ID to abort. This affects
* the following API calls:
*
* mpv_command_async
* mpv_command_node_async
*
* All of these functions take a reply_userdata parameter. This API function
* tells all requests with the matching reply_userdata value to try to return
* as soon as possible. If there are multiple requests with matching ID, it
* aborts all of them.
*
* This API function is mostly asynchronous itself. It will not wait until the
* command is aborted. Instead, the command will terminate as usual, but with
* some work not done. How this is signaled depends on the specific command (for
* example, the "subprocess" command will indicate it by "killed_by_us" set to
* true in the result). How long it takes also depends on the situation. The
* aborting process is completely asynchronous.
*
* Not all commands may support this functionality. In this case, this function
* will have no effect. The same is true if the request using the passed
* reply_userdata has already terminated, has not been started yet, or was
* never in use at all.
*
* You have to be careful of race conditions: the time during which the abort
* request will be effective is _after_ e.g. mpv_command_async() has returned,
* and before the command has signaled completion with MPV_EVENT_COMMAND_REPLY.
*
* @param reply_userdata ID of the request to be aborted (see above)
*/
void mpv_abort_async_command(mpv_handle *ctx, uint64_t reply_userdata);
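
A small usage sketch (the command is a placeholder; whether an abort has any visible effect is command-specific, as explained above):

    uint64_t token = 1;                                  /* our reply_userdata */
    const char *cmd[] = {"loadfile", "http://example.com/stream", NULL};
    mpv_command_async(ctx, token, cmd);
    /* ... later, if the result is no longer wanted: */
    mpv_abort_async_command(ctx, token);
    /* The MPV_EVENT_COMMAND_REPLY with reply_userdata == token still arrives;
     * check mpv_event.error as usual. */
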
/**
* Set a property to a given value. Properties are essentially variables which
* can be queried or set at runtime. For example, writing to the pause property
@ -1202,7 +1258,8 @@ typedef enum mpv_event_id {
*/
MPV_EVENT_SET_PROPERTY_REPLY = 4,
/**
* Reply to a mpv_command_async() request.
* Reply to a mpv_command_async() or mpv_command_node_async() request.
* See also mpv_event and mpv_event_command.
*/
MPV_EVENT_COMMAND_REPLY = 5,
/**
@ -1549,6 +1606,17 @@ typedef struct mpv_event_hook {
uint64_t id;
} mpv_event_hook;
// Since API version 1.103.
typedef struct mpv_event_command {
/**
* Result data of the command. Note that success/failure is signaled
* separately via mpv_event.error. This field is only for result data
* in case of success. Most commands leave it at MPV_FORMAT_NONE. Set
* to MPV_FORMAT_NONE on failure.
*/
mpv_node result;
} mpv_event_command;
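
A sketch of consuming the result in a standard event loop (nothing beyond the fields declared above is assumed):

    mpv_event *ev = mpv_wait_event(ctx, -1);
    if (ev->event_id == MPV_EVENT_COMMAND_REPLY) {
        mpv_event_command *reply = ev->data;
        if (ev->error >= 0 && reply->result.format != MPV_FORMAT_NONE) {
            /* use reply->result; it is owned by libmpv and stays valid
             * until the next mpv_wait_event() call */
        }
    }
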
typedef struct mpv_event {
/**
* One of mpv_event. Keep in mind that later ABI compatible releases might
@ -1575,6 +1643,7 @@ typedef struct mpv_event {
* MPV_EVENT_SET_PROPERTY_REPLY
* MPV_EVENT_COMMAND_REPLY
* MPV_EVENT_PROPERTY_CHANGE
* MPV_EVENT_HOOK
*/
uint64_t reply_userdata;
/**
@ -1584,6 +1653,8 @@ typedef struct mpv_event {
* MPV_EVENT_LOG_MESSAGE: mpv_event_log_message*
* MPV_EVENT_CLIENT_MESSAGE: mpv_event_client_message*
* MPV_EVENT_END_FILE: mpv_event_end_file*
* MPV_EVENT_HOOK: mpv_event_hook*
* MPV_EVENT_COMMAND_REPLY: mpv_event_command*
* other: NULL
*
* Note: future enhancements might add new event structs for existing or new

View File

@ -1,3 +1,4 @@
mpv_abort_async_command
mpv_client_api_version
mpv_client_name
mpv_command

View File

@ -107,11 +107,13 @@ typedef struct mpv_opengl_init_params {
/**
* This retrieves OpenGL function pointers, and will use them in subsequent
* operation.
* Usually, GL context APIs do this for you (e.g. with glXGetProcAddressARB
* or wglGetProcAddress), but some APIs do not always return pointers for
* all standard functions (even if present); in this case you have to
* compensate by looking up these functions yourself and returning them
* from this callback.
* Usually, you can simply call the GL context APIs from this callback (e.g.
* glXGetProcAddressARB or wglGetProcAddress), but some APIs do not always
* return pointers for all standard functions (even if present); in this
* case you have to compensate by looking up these functions yourself when
* libmpv wants to resolve them through this callback.
* libmpv will not normally attempt to resolve GL functions on its own, nor
* does it link to GL libraries directly.
*/
void *(*get_proc_address)(void *ctx, const char *name);
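
As an illustration only, an embedder that created its GL context with SDL2 could implement the callback as a thin wrapper (SDL is an assumption of this sketch, not a requirement of the API):

    static void *get_proc_address_sdl(void *ctx, const char *name)
    {
        return SDL_GL_GetProcAddress(name);  // forward to the toolkit's resolver
    }
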
/**
@ -147,6 +149,9 @@ typedef struct mpv_opengl_fbo {
int internal_format;
} mpv_opengl_fbo;
/**
* For MPV_RENDER_PARAM_DRM_DISPLAY.
*/
typedef struct mpv_opengl_drm_params {
/**
* DRM fd (int). Set to a negative number if invalid.
@ -177,6 +182,9 @@ typedef struct mpv_opengl_drm_params {
int render_fd;
} mpv_opengl_drm_params;
/**
* For MPV_RENDER_PARAM_DRM_DRAW_SURFACE_SIZE.
*/
typedef struct mpv_opengl_drm_draw_surface_size {
/**
* size of the draw plane surface in pixels.

View File

@ -112,7 +112,7 @@ typedef int64_t (*mpv_stream_cb_read_fn)(void *cookie, char *buf, uint64_t nbyte
* is used to test whether the stream is seekable (since seekability might
* depend on the URI contents, not just the protocol). Return
* MPV_ERROR_UNSUPPORTED if seeking is not implemented for this stream. This
* seek also servies to establish the fact that streams start at position 0.
* seek also serves to establish the fact that streams start at position 0.
*
* This callback can be NULL, in which it behaves as if always returning
* MPV_ERROR_UNSUPPORTED.

View File

@ -22,9 +22,14 @@
* doesn't verify what's passed to strtod(), and also prefers parsing numbers
* as integers with strtoll() if possible).
*
* Does not support extensions like unquoted string literals.
* It has some non-standard extensions which shouldn't conflict with JSON:
* - a list or object item can have a trailing ","
* - object syntax accepts "=" in addition to ":"
* - object keys can be unquoted, if they start with a character in [A-Za-z_]
* and contain only characters in [A-Za-z0-9_]
* - byte escapes with "\xAB" are allowed (with AB being a 2 digit hex number)
*
* Also see: http://tools.ietf.org/html/rfc4627
* Also see: http://tools.ietf.org/html/rfc8259
*
* JSON writer:
*
@ -34,9 +39,6 @@
* to deal with somehow: either by using byte-strings for JSON, or by running
* a "fixup" pass on the input data. The latter could for example change
* invalid UTF-8 sequences to replacement characters.
*
* Currently, will insert \u literals for characters 0-31, '"', '\', and write
* everything else literally.
*/
#include <stdlib.h>
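
A small sketch exercising the extensions listed above (the buffer contents are made up; json_parse() mutates its input, so it must be writable):

    char buf[] = "{ key = \"value\", list = [1, 2, 3,], hex = \"\\x41\" }";
    char *s = buf;
    struct mpv_node root;
    void *tmp = talloc_new(NULL);           // parsed strings are allocated here
    if (json_parse(tmp, &root, &s, 32) >= 0) {
        // root.format == MPV_FORMAT_NODE_MAP with keys "key", "list", "hex"
    }
    talloc_free(tmp);
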
@ -48,6 +50,7 @@
#include "common/common.h"
#include "misc/bstr.h"
#include "misc/ctype.h"
#include "json.h"
@ -75,6 +78,24 @@ void json_skip_whitespace(char **src)
eat_ws(src);
}
static int read_id(void *ta_parent, struct mpv_node *dst, char **src)
{
char *start = *src;
if (!mp_isalpha(**src) && **src != '_')
return -1;
while (mp_isalnum(**src) || **src == '_')
*src += 1;
if (**src == ' ') {
**src = '\0'; // we're allowed to mutate it => can avoid the strndup
*src += 1;
} else {
start = talloc_strndup(ta_parent, start, *src - start);
}
dst->format = MPV_FORMAT_STRING;
dst->u.string = start;
return 0;
}
static int read_str(void *ta_parent, struct mpv_node *dst, char **src)
{
if (!eat_c(src, '"'))
@ -125,12 +146,18 @@ static int read_sub(void *ta_parent, struct mpv_node *dst, char **src,
if (list->num > 0 && !eat_c(src, ','))
return -1; // missing ','
eat_ws(src);
// non-standard extension: allow a trailing ","
if (eat_c(src, term))
break;
if (is_obj) {
struct mpv_node keynode;
if (read_str(list, &keynode, src) < 0)
// non-standard extension: allow unquoted strings as keys
if (read_id(list, &keynode, src) < 0 &&
read_str(list, &keynode, src) < 0)
return -1; // key is not a string
eat_ws(src);
if (!eat_c(src, ':'))
// non-standard extension: allow "=" instead of ":"
if (!eat_c(src, ':') && !eat_c(src, '='))
return -1; // ':' missing
eat_ws(src);
MP_TARRAY_GROW(list, list->keys, list->num);
@ -218,6 +245,14 @@ int json_parse(void *ta_parent, struct mpv_node *dst, char **src, int max_depth)
#define APPEND(b, s) bstr_xappend(NULL, (b), bstr0(s))
static const char special_escape[] = {
['\b'] = 'b',
['\f'] = 'f',
['\n'] = 'n',
['\r'] = 'r',
['\t'] = 't',
};
static void write_json_str(bstr *b, unsigned char *str)
{
APPEND(b, "\"");
@ -228,7 +263,15 @@ static void write_json_str(bstr *b, unsigned char *str)
if (!cur[0])
break;
bstr_xappend(NULL, b, (bstr){str, cur - str});
bstr_xappend_asprintf(NULL, b, "\\u%04x", (unsigned char)cur[0]);
if (cur[0] == '\"') {
bstr_xappend(NULL, b, (bstr){"\\\"", 2});
} else if (cur[0] == '\\') {
bstr_xappend(NULL, b, (bstr){"\\\\", 2});
} else if (cur[0] < sizeof(special_escape) && special_escape[cur[0]]) {
bstr_xappend_asprintf(NULL, b, "\\%c", special_escape[cur[0]]);
} else {
bstr_xappend_asprintf(NULL, b, "\\u%04x", (unsigned char)cur[0]);
}
str = cur + 1;
}
APPEND(b, str);

107
misc/linked_list.h Normal file
View File

@ -0,0 +1,107 @@
#pragma once
#include <stddef.h>
/*
* Doubly linked list macros. All of these require that each list item is a
* struct that contains a field, which is another struct with prev/next fields:
*
* struct example_item {
* struct {
* struct example_item *prev, *next;
* } mylist;
* };
*
* And a struct somewhere that represents the "list" and has head/tail fields:
*
* struct {
* struct example_item *head, *tail;
* } mylist_var;
*
* Then you can e.g. insert elements like this:
*
* struct example_item item;
* LL_APPEND(mylist, &mylist_var, &item);
*
* The first macro argument is always the name of the field in the item that
* contains the prev/next pointers, in this case struct example_item.mylist.
* This was done so that a single item can be in multiple lists.
*
* The list is started/terminated with NULL. Nothing ever points _to_ the
* list head, so the list head memory location can be safely moved.
*
* General rules are:
* - list head is initialized by setting head/tail to NULL
* - list items do not need to be initialized before inserting them
* - next/prev fields of list items are not cleared when they are removed
* - there's no way to know whether an item is in the list or not (unless
* you clear prev/next on init/removal, _and_ check whether items with
* prev/next==NULL are referenced by head/tail)
*/
// Insert item at the end of the list (list->tail == item).
// Undefined behavior if item is already in the list.
#define LL_APPEND(field, list, item) do { \
(item)->field.prev = (list)->tail; \
(item)->field.next = NULL; \
LL_RELINK_(field, list, item) \
} while (0)
// Insert item enew after eprev (i.e. eprev->next == enew). If eprev is NULL,
// then insert it as head (list->head == enew).
// Undefined behavior if enew is already in the list, or eprev isn't.
#define LL_INSERT_AFTER(field, list, eprev, enew) do { \
(enew)->field.prev = (eprev); \
(enew)->field.next = (eprev) ? (eprev)->field.next : (list)->head; \
LL_RELINK_(field, list, enew) \
} while (0)
// Insert item at the start of the list (list->head == item).
// Undefined behavior if item is already in the list.
#define LL_PREPEND(field, list, item) do { \
(item)->field.prev = NULL; \
(item)->field.next = (list)->head; \
LL_RELINK_(field, list, item) \
} while (0)
// Insert item enew before enext (i.e. enew->next == enext). If enext is NULL,
// then insert it as tail (list->tail == enew).
// Undefined behavior if enew is already in the list, or enext isn't.
#define LL_INSERT_BEFORE(field, list, enext, enew) do { \
(enew)->field.prev = (enext) ? (enext)->field.prev : (list)->tail; \
(enew)->field.next = (enext); \
LL_RELINK_(field, list, enew) \
} while (0)
// Remove the item from the list.
// Undefined behavior if item is not in the list.
#define LL_REMOVE(field, list, item) do { \
if ((item)->field.prev) { \
(item)->field.prev->field.next = (item)->field.next; \
} else { \
(list)->head = (item)->field.next; \
} \
if ((item)->field.next) { \
(item)->field.next->field.prev = (item)->field.prev; \
} else { \
(list)->tail = (item)->field.prev; \
} \
} while (0)
// Remove all items from the list.
#define LL_CLEAR(field, list) do { \
(list)->head = (list)->tail = NULL; \
} while (0)
// Internal helper.
#define LL_RELINK_(field, list, item) \
if ((item)->field.prev) { \
(item)->field.prev->field.next = (item); \
} else { \
(list)->head = (item); \
} \
if ((item)->field.next) { \
(item)->field.next->field.prev = (item); \
} else { \
(list)->tail = (item); \
}
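
A self-contained usage sketch (item type and values are invented for illustration):

    #include <stdio.h>
    #include "misc/linked_list.h"

    struct item { int v; struct { struct item *prev, *next; } link; };
    static struct { struct item *head, *tail; } q;   // zero-init: empty list

    int main(void)
    {
        struct item a = {.v = 1}, b = {.v = 2};
        LL_APPEND(link, &q, &a);             // list: a
        LL_INSERT_BEFORE(link, &q, &a, &b);  // list: b, a
        for (struct item *it = q.head; it; it = it->link.next)
            printf("%d\n", it->v);           // prints 2, then 1
        LL_REMOVE(link, &q, &a);             // list: b
        return 0;
    }
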

View File

@ -81,3 +81,68 @@ void node_map_add_flag(struct mpv_node *dst, const char *key, bool v)
{
node_map_add(dst, key, MPV_FORMAT_FLAG)->u.flag = v;
}
mpv_node *node_map_get(mpv_node *src, const char *key)
{
if (src->format != MPV_FORMAT_NODE_MAP)
return NULL;
for (int i = 0; i < src->u.list->num; i++) {
if (strcmp(key, src->u.list->keys[i]) == 0)
return &src->u.list->values[i];
}
return NULL;
}
// Note: for MPV_FORMAT_NODE_MAP, this (incorrectly) takes the order into
// account, instead of treating it as a set.
bool equal_mpv_value(const void *a, const void *b, mpv_format format)
{
switch (format) {
case MPV_FORMAT_NONE:
return true;
case MPV_FORMAT_STRING:
case MPV_FORMAT_OSD_STRING:
return strcmp(*(char **)a, *(char **)b) == 0;
case MPV_FORMAT_FLAG:
return *(int *)a == *(int *)b;
case MPV_FORMAT_INT64:
return *(int64_t *)a == *(int64_t *)b;
case MPV_FORMAT_DOUBLE:
return *(double *)a == *(double *)b;
case MPV_FORMAT_NODE:
return equal_mpv_node(a, b);
case MPV_FORMAT_BYTE_ARRAY: {
const struct mpv_byte_array *a_r = a, *b_r = b;
if (a_r->size != b_r->size)
return false;
return memcmp(a_r->data, b_r->data, a_r->size) == 0;
}
case MPV_FORMAT_NODE_ARRAY:
case MPV_FORMAT_NODE_MAP:
{
mpv_node_list *l_a = *(mpv_node_list **)a, *l_b = *(mpv_node_list **)b;
if (l_a->num != l_b->num)
return false;
for (int n = 0; n < l_a->num; n++) {
if (format == MPV_FORMAT_NODE_MAP) {
if (strcmp(l_a->keys[n], l_b->keys[n]) != 0)
return false;
}
if (!equal_mpv_node(&l_a->values[n], &l_b->values[n]))
return false;
}
return true;
}
}
abort(); // supposed to be able to handle all defined types
}
// For remarks, see equal_mpv_value().
bool equal_mpv_node(const struct mpv_node *a, const struct mpv_node *b)
{
if (a->format != b->format)
return false;
return equal_mpv_value(&a->u, &b->u, a->format);
}
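// Usage sketch (editor's illustration, not part of this diff): check whether a
// MPV_FORMAT_NODE_MAP argument contains a flag set to true, and compare two
// nodes for equality. The function names here are hypothetical.
static bool example_flag_is_set(struct mpv_node *args, const char *key)
{
    struct mpv_node *v = node_map_get(args, key);
    return v && v->format == MPV_FORMAT_FLAG && v->u.flag;
}
static bool example_nodes_differ(struct mpv_node *a, struct mpv_node *b)
{
    return !equal_mpv_node(a, b);
}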

View File

@ -10,5 +10,8 @@ void node_map_add_string(struct mpv_node *dst, const char *key, const char *val)
void node_map_add_int64(struct mpv_node *dst, const char *key, int64_t v);
void node_map_add_double(struct mpv_node *dst, const char *key, double v);
void node_map_add_flag(struct mpv_node *dst, const char *key, bool v);
mpv_node *node_map_get(mpv_node *src, const char *key);
bool equal_mpv_value(const void *a, const void *b, mpv_format format);
bool equal_mpv_node(const struct mpv_node *a, const struct mpv_node *b);
#endif

View File

@ -1,40 +1,51 @@
/*
* This file is part of mpv.
/* Copyright (C) 2018 the mpv developers
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <pthread.h>
#include "common/common.h"
#include "osdep/threads.h"
#include "osdep/timer.h"
#include "thread_pool.h"
// Threads destroy themselves after this many seconds, if there's no new work
// and the thread count is above the configured minimum.
#define DESTROY_TIMEOUT 10
struct work {
void (*fn)(void *ctx);
void *fn_ctx;
};
struct mp_thread_pool {
pthread_t *threads;
int num_threads;
int min_threads, max_threads;
pthread_mutex_t lock;
pthread_cond_t wakeup;
// --- the following fields are protected by lock
pthread_t *threads;
int num_threads;
// Number of threads which have taken up work and are still processing it.
int busy_threads;
bool terminate;
struct work *work;
int num_work;
};
@ -43,25 +54,61 @@ static void *worker_thread(void *arg)
{
struct mp_thread_pool *pool = arg;
mpthread_set_name("worker");
pthread_mutex_lock(&pool->lock);
struct timespec ts = {0};
bool got_timeout = false;
while (1) {
while (!pool->num_work && !pool->terminate)
pthread_cond_wait(&pool->wakeup, &pool->lock);
struct work work = {0};
if (pool->num_work > 0) {
work = pool->work[pool->num_work - 1];
pool->num_work -= 1;
}
if (!pool->num_work && pool->terminate)
break;
if (!work.fn) {
if (got_timeout || pool->terminate)
break;
assert(pool->num_work > 0);
struct work work = pool->work[pool->num_work - 1];
pool->num_work -= 1;
if (pool->num_threads > pool->min_threads) {
if (!ts.tv_sec && !ts.tv_nsec)
ts = mp_rel_time_to_timespec(DESTROY_TIMEOUT);
if (pthread_cond_timedwait(&pool->wakeup, &pool->lock, &ts))
got_timeout = pool->num_threads > pool->min_threads;
} else {
pthread_cond_wait(&pool->wakeup, &pool->lock);
}
continue;
}
pool->busy_threads += 1;
pthread_mutex_unlock(&pool->lock);
work.fn(work.fn_ctx);
pthread_mutex_lock(&pool->lock);
}
assert(pool->num_work == 0);
pthread_mutex_unlock(&pool->lock);
work.fn(work.fn_ctx);
pthread_mutex_lock(&pool->lock);
pool->busy_threads -= 1;
ts = (struct timespec){0};
got_timeout = false;
}
// If no termination signal was given, it must mean we died because of a
// timeout, and nobody is waiting for us. We have to remove ourselves.
if (!pool->terminate) {
for (int n = 0; n < pool->num_threads; n++) {
if (pthread_equal(pool->threads[n], pthread_self())) {
pthread_detach(pthread_self());
MP_TARRAY_REMOVE_AT(pool->threads, pool->num_threads, n);
pthread_mutex_unlock(&pool->lock);
return NULL;
}
}
assert(0);
}
pthread_mutex_unlock(&pool->lock);
return NULL;
}
@ -69,27 +116,46 @@ static void thread_pool_dtor(void *ctx)
{
struct mp_thread_pool *pool = ctx;
pthread_mutex_lock(&pool->lock);
pool->terminate = true;
pthread_cond_broadcast(&pool->wakeup);
pthread_t *threads = pool->threads;
int num_threads = pool->num_threads;
pool->threads = NULL;
pool->num_threads = 0;
pthread_mutex_unlock(&pool->lock);
for (int n = 0; n < pool->num_threads; n++)
pthread_join(pool->threads[n], NULL);
for (int n = 0; n < num_threads; n++)
pthread_join(threads[n], NULL);
assert(pool->num_work == 0);
assert(pool->num_threads == 0);
pthread_cond_destroy(&pool->wakeup);
pthread_mutex_destroy(&pool->lock);
}
// Create a thread pool with the given number of worker threads. This can return
// NULL if the worker threads could not be created. The thread pool can be
// destroyed with talloc_free(pool), or indirectly with talloc_free(ta_parent).
// If there are still work items on freeing, it will block until all work items
// are done, and the threads terminate.
struct mp_thread_pool *mp_thread_pool_create(void *ta_parent, int threads)
static bool add_thread(struct mp_thread_pool *pool)
{
assert(threads > 0);
pthread_t thread;
if (pthread_create(&thread, NULL, worker_thread, pool) != 0)
return false;
MP_TARRAY_APPEND(pool, pool->threads, pool->num_threads, thread);
return true;
}
struct mp_thread_pool *mp_thread_pool_create(void *ta_parent, int init_threads,
int min_threads, int max_threads)
{
assert(min_threads >= 0);
assert(init_threads <= min_threads);
assert(max_threads > 0 && max_threads >= min_threads);
struct mp_thread_pool *pool = talloc_zero(ta_parent, struct mp_thread_pool);
talloc_set_destructor(pool, thread_pool_dtor);
@ -97,29 +163,61 @@ struct mp_thread_pool *mp_thread_pool_create(void *ta_parent, int threads)
pthread_mutex_init(&pool->lock, NULL);
pthread_cond_init(&pool->wakeup, NULL);
for (int n = 0; n < threads; n++) {
pthread_t thread;
if (pthread_create(&thread, NULL, worker_thread, pool)) {
talloc_free(pool);
return NULL;
}
MP_TARRAY_APPEND(pool, pool->threads, pool->num_threads, thread);
}
pool->min_threads = min_threads;
pool->max_threads = max_threads;
pthread_mutex_lock(&pool->lock);
for (int n = 0; n < init_threads; n++)
add_thread(pool);
bool ok = pool->num_threads >= init_threads;
pthread_mutex_unlock(&pool->lock);
if (!ok)
TA_FREEP(&pool);
return pool;
}
// Queue a function to be run on a worker thread: fn(fn_ctx)
// If no worker thread is currently available, it's appended to a list in memory
// with unbounded size. This function always returns immediately.
// Concurrent queue calls are allowed, as long as it does not overlap with
// pool destruction.
void mp_thread_pool_queue(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx)
static bool thread_pool_add(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx, bool allow_queue)
{
bool ok = true;
assert(fn);
pthread_mutex_lock(&pool->lock);
struct work work = {fn, fn_ctx};
MP_TARRAY_INSERT_AT(pool, pool->work, pool->num_work, 0, work);
pthread_cond_signal(&pool->wakeup);
// If there are not enough threads to process all at once, but we can
// create a new thread, then do so. If work is queued quickly, it can
// happen that not all available threads have picked up work yet (up to
// num_threads - busy_threads threads), which has to be accounted for.
if (pool->busy_threads + pool->num_work + 1 > pool->num_threads &&
pool->num_threads < pool->max_threads)
{
if (!add_thread(pool)) {
// If we can queue it, it'll get done as long as there is 1 thread.
ok = allow_queue && pool->num_threads > 0;
}
}
if (ok) {
MP_TARRAY_INSERT_AT(pool, pool->work, pool->num_work, 0, work);
pthread_cond_signal(&pool->wakeup);
}
pthread_mutex_unlock(&pool->lock);
return ok;
}
bool mp_thread_pool_queue(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx)
{
return thread_pool_add(pool, fn, fn_ctx, true);
}
bool mp_thread_pool_run(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx)
{
return thread_pool_add(pool, fn, fn_ctx, false);
}

View File

@ -3,8 +3,32 @@
struct mp_thread_pool;
struct mp_thread_pool *mp_thread_pool_create(void *ta_parent, int threads);
void mp_thread_pool_queue(struct mp_thread_pool *pool, void (*fn)(void *ctx),
// Create a thread pool with the given number of worker threads. This can return
// NULL if the worker threads could not be created. The thread pool can be
// destroyed with talloc_free(pool), or indirectly with talloc_free(ta_parent).
// If there are still work items on freeing, it will block until all work items
// are done, and the threads terminate.
// init_threads is the number of threads created in this function (and it fails
// if it could not be done). min_threads must be >= init_threads; if it is
// greater, the remaining threads will be created on demand, but never destroyed.
// If init_threads > 0, then mp_thread_pool_queue() can never fail.
// If init_threads == 0, mp_thread_pool_create() itself can never fail.
struct mp_thread_pool *mp_thread_pool_create(void *ta_parent, int init_threads,
int min_threads, int max_threads);
// Queue a function to be run on a worker thread: fn(fn_ctx)
// If no worker thread is currently available, it's appended to a list in memory
// with unbounded size. This function always returns immediately.
// Concurrent queue calls are allowed, as long as they do not overlap with
// pool destruction.
// This function is explicitly thread-safe.
// Cannot fail if thread pool was created with at least 1 thread.
bool mp_thread_pool_queue(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx);
// Like mp_thread_pool_queue(), but only queue the item and succeed if a thread
// can be reserved for the item (i.e. minimal wait time instead of unbounded).
bool mp_thread_pool_run(struct mp_thread_pool *pool, void (*fn)(void *ctx),
void *fn_ctx);
#endif
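// Usage sketch (editor's illustration, not part of this diff): a pool that
// starts empty, grows on demand up to 4 threads, and runs one work item. The
// function names are hypothetical.
static void example_work(void *ctx)
{
    int *result = ctx;
    *result = 42;
}
static void example_pool_usage(void *ta_parent)
{
    struct mp_thread_pool *pool = mp_thread_pool_create(ta_parent, 0, 0, 4);
    if (!pool)
        return;
    int result = 0;
    if (!mp_thread_pool_queue(pool, example_work, &result)) {
        // With init_threads == 0, queuing can fail if no thread could be
        // created; fall back to running the work inline.
        example_work(&result);
    }
    talloc_free(pool); // blocks until all queued work has completed
}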

269
misc/thread_tools.c Normal file
View File

@ -0,0 +1,269 @@
/* Copyright (C) 2018 the mpv developers
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#ifdef __MINGW32__
#include <windows.h>
#else
#include <poll.h>
#endif
#include "common/common.h"
#include "misc/linked_list.h"
#include "osdep/atomic.h"
#include "osdep/io.h"
#include "osdep/timer.h"
#include "thread_tools.h"
uintptr_t mp_waiter_wait(struct mp_waiter *waiter)
{
pthread_mutex_lock(&waiter->lock);
while (!waiter->done)
pthread_cond_wait(&waiter->wakeup, &waiter->lock);
pthread_mutex_unlock(&waiter->lock);
uintptr_t ret = waiter->value;
// We document that after mp_waiter_wait() the waiter object becomes
// invalid. (It strictly returns only after mp_waiter_wakeup() has returned,
// and the object is "single-shot".) So destroy it here.
// Normally, we expect that the system uses futexes, in which case the
// following functions will do nearly nothing. This is true for Windows
// and Linux. But some lesser OSes still might allocate kernel objects
// when initializing mutexes, so destroy them here.
pthread_mutex_destroy(&waiter->lock);
pthread_cond_destroy(&waiter->wakeup);
memset(waiter, 0xCA, sizeof(*waiter)); // for debugging
return ret;
}
void mp_waiter_wakeup(struct mp_waiter *waiter, uintptr_t value)
{
pthread_mutex_lock(&waiter->lock);
assert(!waiter->done);
waiter->done = true;
waiter->value = value;
pthread_cond_signal(&waiter->wakeup);
pthread_mutex_unlock(&waiter->lock);
}
bool mp_waiter_poll(struct mp_waiter *waiter)
{
pthread_mutex_lock(&waiter->lock);
bool r = waiter->done;
pthread_mutex_unlock(&waiter->lock);
return r;
}
struct mp_cancel {
pthread_mutex_t lock;
pthread_cond_t wakeup;
// Semaphore state and "mirrors".
atomic_bool triggered;
void (*cb)(void *ctx);
void *cb_ctx;
int wakeup_pipe[2];
void *win32_event; // actually HANDLE
// Slave list. These are automatically notified as well.
struct {
struct mp_cancel *head, *tail;
} slaves;
// For slaves. Synchronization is managed by parent.lock!
struct mp_cancel *parent;
struct {
struct mp_cancel *next, *prev;
} siblings;
};
static void cancel_destroy(void *p)
{
struct mp_cancel *c = p;
assert(!c->slaves.head); // API user error
mp_cancel_set_parent(c, NULL);
if (c->wakeup_pipe[0] >= 0) {
close(c->wakeup_pipe[0]);
close(c->wakeup_pipe[1]);
}
#ifdef __MINGW32__
if (c->win32_event)
CloseHandle(c->win32_event);
#endif
pthread_mutex_destroy(&c->lock);
pthread_cond_destroy(&c->wakeup);
}
struct mp_cancel *mp_cancel_new(void *talloc_ctx)
{
struct mp_cancel *c = talloc_ptrtype(talloc_ctx, c);
talloc_set_destructor(c, cancel_destroy);
*c = (struct mp_cancel){
.triggered = ATOMIC_VAR_INIT(false),
.wakeup_pipe = {-1, -1},
};
pthread_mutex_init(&c->lock, NULL);
pthread_cond_init(&c->wakeup, NULL);
return c;
}
static void trigger_locked(struct mp_cancel *c)
{
atomic_store(&c->triggered, true);
pthread_cond_broadcast(&c->wakeup); // condition bound to c->triggered
if (c->cb)
c->cb(c->cb_ctx);
for (struct mp_cancel *sub = c->slaves.head; sub; sub = sub->siblings.next)
mp_cancel_trigger(sub);
if (c->wakeup_pipe[1] >= 0)
(void)write(c->wakeup_pipe[1], &(char){0}, 1);
#ifdef __MINGW32__
if (c->win32_event)
SetEvent(c->win32_event);
#endif
}
void mp_cancel_trigger(struct mp_cancel *c)
{
pthread_mutex_lock(&c->lock);
trigger_locked(c);
pthread_mutex_unlock(&c->lock);
}
void mp_cancel_reset(struct mp_cancel *c)
{
pthread_mutex_lock(&c->lock);
atomic_store(&c->triggered, false);
if (c->wakeup_pipe[0] >= 0) {
// Flush it fully.
while (1) {
int r = read(c->wakeup_pipe[0], &(char[256]){0}, 256);
if (r <= 0 && !(r < 0 && errno == EINTR))
break;
}
}
#ifdef __MINGW32__
if (c->win32_event)
ResetEvent(c->win32_event);
#endif
pthread_mutex_unlock(&c->lock);
}
bool mp_cancel_test(struct mp_cancel *c)
{
return c ? atomic_load_explicit(&c->triggered, memory_order_relaxed) : false;
}
bool mp_cancel_wait(struct mp_cancel *c, double timeout)
{
struct timespec ts = mp_rel_time_to_timespec(timeout);
pthread_mutex_lock(&c->lock);
while (!mp_cancel_test(c)) {
if (pthread_cond_timedwait(&c->wakeup, &c->lock, &ts))
break;
}
pthread_mutex_unlock(&c->lock);
return mp_cancel_test(c);
}
// If a new notification mechanism was added, and the mp_cancel state was
// already triggered, make sure the newly added mechanism is also triggered.
static void retrigger_locked(struct mp_cancel *c)
{
if (mp_cancel_test(c))
trigger_locked(c);
}
void mp_cancel_set_cb(struct mp_cancel *c, void (*cb)(void *ctx), void *ctx)
{
pthread_mutex_lock(&c->lock);
c->cb = cb;
c->cb_ctx = ctx;
retrigger_locked(c);
pthread_mutex_unlock(&c->lock);
}
void mp_cancel_set_parent(struct mp_cancel *slave, struct mp_cancel *parent)
{
// We can access c->parent without synchronization, because:
// - concurrent mp_cancel_set_parent() calls to slave are not allowed
// - slave->parent needs to stay valid as long as the slave exists
if (slave->parent == parent)
return;
if (slave->parent) {
pthread_mutex_lock(&slave->parent->lock);
LL_REMOVE(siblings, &slave->parent->slaves, slave);
pthread_mutex_unlock(&slave->parent->lock);
}
slave->parent = parent;
if (slave->parent) {
pthread_mutex_lock(&slave->parent->lock);
LL_APPEND(siblings, &slave->parent->slaves, slave);
retrigger_locked(slave->parent);
pthread_mutex_unlock(&slave->parent->lock);
}
}
int mp_cancel_get_fd(struct mp_cancel *c)
{
pthread_mutex_lock(&c->lock);
if (c->wakeup_pipe[0] < 0) {
mp_make_wakeup_pipe(c->wakeup_pipe);
retrigger_locked(c);
}
pthread_mutex_unlock(&c->lock);
return c->wakeup_pipe[0];
}
#ifdef __MINGW32__
void *mp_cancel_get_event(struct mp_cancel *c)
{
pthread_mutex_lock(&c->lock);
if (!c->win32_event) {
c->win32_event = CreateEventW(NULL, TRUE, FALSE, NULL);
retrigger_locked(c);
}
pthread_mutex_unlock(&c->lock);
return c->win32_event;
}
#endif

82
misc/thread_tools.h Normal file
View File

@ -0,0 +1,82 @@
#pragma once
#include <stdint.h>
#include <stdbool.h>
#include <pthread.h>
// This is basically a single-shot semaphore, intended as a light-weight solution
// for just making a thread wait for another thread.
struct mp_waiter {
// All fields are considered private. Use MP_WAITER_INITIALIZER to init.
pthread_mutex_t lock;
pthread_cond_t wakeup;
bool done;
uintptr_t value;
};
// Initialize a mp_waiter object for use with mp_waiter_*().
#define MP_WAITER_INITIALIZER { \
.lock = PTHREAD_MUTEX_INITIALIZER, \
.wakeup = PTHREAD_COND_INITIALIZER, \
}
// Block until some other thread calls mp_waiter_wakeup(). The function returns
// the value argument of that wakeup call. After this, the waiter object must
// not be used anymore. Although you can reinit it with MP_WAITER_INITIALIZER
// (then you must make sure nothing calls mp_waiter_wakeup() before this).
uintptr_t mp_waiter_wait(struct mp_waiter *waiter);
// Unblock the thread waiting with mp_waiter_wait(), and make it return the
// provided value. If the other thread did not enter that call yet, it will
// return immediately once it does (mp_waiter_wakeup() always returns
// immediately). Calling this more than once is not allowed.
void mp_waiter_wakeup(struct mp_waiter *waiter, uintptr_t value);
// Query whether the waiter was woken up. If true, mp_waiter_wait() will return
// immediately. This is useful if you want to use another way to block and
// wakeup (in parallel to mp_waiter).
// You still need to call mp_waiter_wait() to free resources.
bool mp_waiter_poll(struct mp_waiter *waiter);
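// Usage sketch (editor's illustration, not part of this diff): one thread
// blocks on a stack-allocated waiter while another thread delivers a value.
// The function names are hypothetical.
static void *example_wakeup_thread(void *arg)
{
    mp_waiter_wakeup(arg, 123);
    return NULL;
}
static uintptr_t example_waiter_usage(void)
{
    struct mp_waiter waiter = MP_WAITER_INITIALIZER;
    pthread_t t;
    if (pthread_create(&t, NULL, example_wakeup_thread, &waiter) != 0)
        return 0;
    uintptr_t v = mp_waiter_wait(&waiter); // returns 123; waiter is now dead
    pthread_join(t, NULL);
    return v;
}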
// Basically a binary semaphore that supports signaling the semaphore value to
// a bunch of other complicated mechanisms (such as wakeup pipes). It was made
// for aborting I/O and thus has according naming.
struct mp_cancel;
struct mp_cancel *mp_cancel_new(void *talloc_ctx);
// Request abort.
void mp_cancel_trigger(struct mp_cancel *c);
// Return whether the caller should abort.
// For convenience, c==NULL is allowed.
bool mp_cancel_test(struct mp_cancel *c);
// Wait until the event is signaled. If the timeout (in seconds) expires, return
// false. timeout==0 polls, timeout<0 waits forever.
bool mp_cancel_wait(struct mp_cancel *c, double timeout);
// Restore original state. (Allows reusing a mp_cancel.)
void mp_cancel_reset(struct mp_cancel *c);
// Add a callback to invoke when mp_cancel gets triggered. If it's already
// triggered, call it from mp_cancel_set_cb() directly. May be called multiple
// times even if the trigger state changes; not called if it resets. In all
// cases, this may be called with internal locks held (either in mp_cancel, or
// other locks held by whoever calls mp_cancel_trigger()).
// There is only one callback. Create a slave mp_cancel to get a private one.
void mp_cancel_set_cb(struct mp_cancel *c, void (*cb)(void *ctx), void *ctx);
// If parent gets triggered, automatically trigger slave. There is only 1
// parent; setting NULL clears the parent. Freeing slave also automatically
// ends the parent link, but the parent mp_cancel must remain valid until the
// slave is manually removed or destroyed. Destroying a mp_cancel that still
// has slaves is an error.
void mp_cancel_set_parent(struct mp_cancel *slave, struct mp_cancel *parent);
// win32 "Event" HANDLE that indicates the current mp_cancel state.
void *mp_cancel_get_event(struct mp_cancel *c);
// The FD becomes readable if mp_cancel_test() would return true.
// Don't actually read from it, just use it for poll().
int mp_cancel_get_fd(struct mp_cancel *c);
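// Usage sketch (editor's illustration, not part of this diff): a slave
// mp_cancel linked to a parent, so triggering the parent aborts the slave too;
// the slave is freed before the parent, which also removes the link. The
// function name is hypothetical.
static void example_cancel_usage(void)
{
    struct mp_cancel *parent = mp_cancel_new(NULL);
    struct mp_cancel *slave = mp_cancel_new(NULL);
    mp_cancel_set_parent(slave, parent);
    mp_cancel_trigger(parent);
    bool aborted = mp_cancel_test(slave); // true: propagated from the parent
    (void)aborted;
    talloc_free(slave);   // unlinks from the parent as part of destruction
    talloc_free(parent);
}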

View File

@ -55,19 +55,42 @@ static const union m_option_value default_value;
// For use with m_config_cache.
struct m_config_shadow {
pthread_mutex_t lock;
struct m_config *root;
char *data;
pthread_mutex_t lock;
// -- protected by lock
struct m_config_data *data; // protected shadow copy of the option data
struct m_config_cache **listeners;
int num_listeners;
};
// Represents a sub-struct (OPT_SUBSTRUCT()).
struct m_config_group {
const struct m_sub_options *group; // or NULL for top-level options
int parent_group; // index of parent group in m_config.groups
void *opts; // pointer to group user option struct
atomic_llong ts; // incremented on every write access
const struct m_sub_options *group;
int group_count; // 1 + number of all sub groups owned by this (so
// m_config.groups[idx..idx+group_count] is used by the
// entire tree of sub groups included by this group)
int parent_group; // index of parent group into m_config.groups[], or
// -1 for group 0
int parent_ptr; // ptr offset in the parent group's data, or -1 if
// none
int co_index; // index of the first group opt into m_config.opts[]
int co_end_index; // index of the last group opt + 1 (i.e. exclusive)
};
// A copy of option data. Used for the main option struct, the shadow data,
// and copies for m_config_cache.
struct m_config_data {
struct m_config *root; // root config (with up-to-date data)
int group_index; // start index into m_config.groups[]
struct m_group_data *gdata; // user struct allocation (our copy of data)
int num_gdata; // (group_index+num_gdata = end index)
atomic_llong ts; // last change timestamp we've seen
};
// Per m_config_data state for each m_config_group.
struct m_group_data {
char *udata; // pointer to group user option struct
long long ts; // incremented on every write access
};
struct m_profile {
@ -86,6 +109,20 @@ struct m_opt_backup {
void *backup;
};
static void add_sub_group(struct m_config *config, const char *name_prefix,
int parent_group_index, int parent_ptr,
const struct m_sub_options *subopts);
static struct m_group_data *m_config_gdata(struct m_config_data *data,
int group_index)
{
if (group_index < data->group_index ||
group_index >= data->group_index + data->num_gdata)
return NULL;
return &data->gdata[group_index - data->group_index];
}
static int show_profile(struct m_config *config, bstr param)
{
struct m_profile *p;
@ -140,30 +177,129 @@ static void substruct_write_ptr(void *ptr, void *val)
memcpy(ptr, &src, sizeof(src));
}
static void add_options(struct m_config *config,
struct m_config_option *parent,
void *optstruct,
const void *optstruct_def,
const struct m_option *defs);
// Initialize a field with a given value. In case this is dynamic data, it has
// to be allocated and copied. src can alias dst.
static void init_opt_inplace(const struct m_option *opt, void *dst,
const void *src)
{
// The option will use dynamic memory allocation iff it has a free callback.
if (opt->type->free) {
union m_option_value temp;
memcpy(&temp, src, opt->type->size);
memset(dst, 0, opt->type->size);
m_option_copy(opt, dst, &temp);
} else if (src != dst) {
memcpy(dst, src, opt->type->size);
}
}
static void alloc_group(struct m_config_data *data, int group_index,
struct m_config_data *copy)
{
assert(group_index == data->group_index + data->num_gdata);
assert(group_index < data->root->num_groups);
struct m_config_group *group = &data->root->groups[group_index];
const struct m_sub_options *opts = group->group;
MP_TARRAY_GROW(data, data->gdata, data->num_gdata);
struct m_group_data *gdata = &data->gdata[data->num_gdata++];
struct m_group_data *copy_gdata =
copy ? m_config_gdata(copy, group_index) : NULL;
*gdata = (struct m_group_data){
.udata = talloc_zero_size(data, opts->size),
.ts = copy_gdata ? copy_gdata->ts : 0,
};
if (opts->defaults)
memcpy(gdata->udata, opts->defaults, opts->size);
char *copy_src = copy_gdata ? copy_gdata->udata : NULL;
for (int n = group->co_index; n < group->co_end_index; n++) {
assert(n >= 0 && n < data->root->num_opts);
struct m_config_option *co = &data->root->opts[n];
if (co->opt->offset < 0 || co->opt->type->size == 0)
continue;
void *dst = gdata->udata + co->opt->offset;
const void *defptr = co->opt->defval ? co->opt->defval : dst;
if (copy_src)
defptr = copy_src + co->opt->offset;
init_opt_inplace(co->opt, dst, defptr);
}
// If there's a parent, update its pointer to the new struct.
if (group->parent_group >= data->group_index && group->parent_ptr >= 0) {
struct m_group_data *parent_gdata =
m_config_gdata(data, group->parent_group);
assert(parent_gdata);
substruct_write_ptr(parent_gdata->udata + group->parent_ptr, gdata->udata);
}
}
static void free_option_data(void *p)
{
struct m_config_data *data = p;
for (int i = 0; i < data->num_gdata; i++) {
struct m_group_data *gdata = &data->gdata[i];
struct m_config_group *group = &data->root->groups[data->group_index + i];
for (int n = group->co_index; n < group->co_end_index; n++) {
struct m_config_option *co = &data->root->opts[n];
if (co->opt->offset >= 0 && co->opt->type->size > 0)
m_option_free(co->opt, gdata->udata + co->opt->offset);
}
}
}
// Allocate data using the option description in root, starting at group_index
// (index into m_config.groups[]).
// If copy is not NULL, copy all data from there (for groups which are in both
// m_config_data instances), in all other cases init the data with the defaults.
static struct m_config_data *allocate_option_data(void *ta_parent,
struct m_config *root,
int group_index,
struct m_config_data *copy)
{
assert(group_index >= 0 && group_index < root->num_groups);
struct m_config_data *data = talloc_zero(ta_parent, struct m_config_data);
talloc_set_destructor(data, free_option_data);
data->root = root;
data->group_index = group_index;
struct m_config_group *root_group = &root->groups[group_index];
assert(root_group->group_count > 0);
for (int n = group_index; n < group_index + root_group->group_count; n++)
alloc_group(data, n, copy);
if (copy)
data->ts = copy->ts;
return data;
}
static void config_destroy(void *p)
{
struct m_config *config = p;
m_config_restore_backups(config);
for (int n = 0; n < config->num_opts; n++) {
struct m_config_option *co = &config->opts[n];
m_option_free(co->opt, co->data);
if (config->shadow && co->shadow_offset >= 0)
m_option_free(co->opt, config->shadow->data + co->shadow_offset);
}
if (config->shadow) {
// must all have been unregistered
assert(config->shadow->num_listeners == 0);
pthread_mutex_destroy(&config->shadow->lock);
talloc_free(config->shadow);
}
talloc_free(config->data);
}
struct m_config *m_config_new(void *talloc_ctx, struct mp_log *log,
@ -172,25 +308,29 @@ struct m_config *m_config_new(void *talloc_ctx, struct mp_log *log,
{
struct m_config *config = talloc(talloc_ctx, struct m_config);
talloc_set_destructor(config, config_destroy);
*config = (struct m_config)
{.log = log, .size = size, .defaults = defaults, .options = options};
*config = (struct m_config){.log = log,};
// size==0 means a dummy object is created
if (size) {
config->optstruct = talloc_zero_size(config, size);
if (defaults)
memcpy(config->optstruct, defaults, size);
struct m_sub_options *subopts = talloc_ptrtype(config, subopts);
*subopts = (struct m_sub_options){
.opts = options,
.size = size,
.defaults = defaults,
};
add_sub_group(config, NULL, -1, -1, subopts);
if (!size)
return config;
config->data = allocate_option_data(config, config, 0, NULL);
config->optstruct = config->data->gdata[0].udata;
for (int n = 0; n < config->num_opts; n++) {
struct m_config_option *co = &config->opts[n];
struct m_group_data *gdata = m_config_gdata(config->data, co->group_index);
if (gdata && co->opt->offset >= 0)
co->data = gdata->udata + co->opt->offset;
}
config->num_groups = 1;
MP_TARRAY_GROW(config, config->groups, 1);
config->groups[0] = (struct m_config_group){
.parent_group = -1,
.opts = config->optstruct,
};
if (options)
add_options(config, NULL, config->optstruct, defaults, options);
return config;
}
@ -216,14 +356,14 @@ struct m_config *m_config_from_obj_desc_noalloc(void *talloc_ctx,
return m_config_new(talloc_ctx, log, 0, desc->priv_defaults, desc->options);
}
static struct m_config_group *find_group(struct mpv_global *global,
const struct m_option *cfg)
static const struct m_config_group *find_group(struct mpv_global *global,
const struct m_option *cfg)
{
struct m_config_shadow *shadow = global->config;
struct m_config *root = shadow->root;
for (int n = 0; n < root->num_groups; n++) {
if (cfg && root->groups[n].group && root->groups[n].group->opts == cfg)
if (root->groups[n].group->opts == cfg)
return &root->groups[n];
}
@ -238,7 +378,7 @@ static struct m_config_group *find_group(struct mpv_global *global,
void *m_config_group_from_desc(void *ta_parent, struct mp_log *log,
struct mpv_global *global, struct m_obj_desc *desc, const char *name)
{
struct m_config_group *group = find_group(global, desc->options);
const struct m_config_group *group = find_group(global, desc->options);
if (group) {
return mp_get_config_group(ta_parent, global, group->group);
} else {
@ -335,211 +475,109 @@ void m_config_backup_all_opts(struct m_config *config)
ensure_backup(config, &config->opts[n]);
}
static void m_config_add_option(struct m_config *config,
struct m_config_option *parent,
void *optstruct,
const void *optstruct_def,
const struct m_option *arg);
static void add_options(struct m_config *config,
struct m_config_option *parent,
void *optstruct,
const void *optstruct_def,
const struct m_option *defs)
static void init_obj_settings_list(struct m_config *config,
int parent_group_index,
const struct m_obj_list *list)
{
for (int i = 0; defs && defs[i].name; i++)
m_config_add_option(config, parent, optstruct, optstruct_def, &defs[i]);
struct m_obj_desc desc;
for (int n = 0; ; n++) {
if (!list->get_desc(&desc, n))
break;
if (desc.global_opts) {
add_sub_group(config, NULL, parent_group_index, -1,
desc.global_opts);
}
if (list->use_global_options && desc.options) {
struct m_sub_options *conf = talloc_ptrtype(config, conf);
*conf = (struct m_sub_options){
.prefix = desc.options_prefix,
.opts = desc.options,
.defaults = desc.priv_defaults,
.size = desc.priv_size,
};
add_sub_group(config, NULL, parent_group_index, -1, conf);
}
}
}
static void add_sub_options(struct m_config *config,
struct m_config_option *parent,
const struct m_sub_options *subopts)
static const char *concat_name(void *ta_parent, const char *a, const char *b)
{
// Can't be used multiple times.
assert(a);
assert(b);
if (!a[0])
return b;
if (!b[0])
return a;
return talloc_asprintf(ta_parent, "%s-%s", a, b);
}
static void add_sub_group(struct m_config *config, const char *name_prefix,
int parent_group_index, int parent_ptr,
const struct m_sub_options *subopts)
{
// Can't be used multiple times.
for (int n = 0; n < config->num_groups; n++)
assert(config->groups[n].group != subopts);
// You can only use UPDATE_ flags here.
assert(!(subopts->change_flags & ~(unsigned)UPDATE_OPTS_MASK));
void *new_optstruct = NULL;
if (config->optstruct) { // only if not noalloc
new_optstruct = talloc_zero_size(config, subopts->size);
if (subopts->defaults)
memcpy(new_optstruct, subopts->defaults, subopts->size);
}
if (parent && parent->data)
substruct_write_ptr(parent->data, new_optstruct);
assert(parent_group_index >= -1 && parent_group_index < config->num_groups);
const void *new_optstruct_def = NULL;
if (parent && parent->default_data)
new_optstruct_def = substruct_read_ptr(parent->default_data);
if (!new_optstruct_def)
new_optstruct_def = subopts->defaults;
int group = config->num_groups++;
MP_TARRAY_GROW(config, config->groups, group);
config->groups[group] = (struct m_config_group){
int group_index = config->num_groups++;
MP_TARRAY_GROW(config, config->groups, group_index);
config->groups[group_index] = (struct m_config_group){
.group = subopts,
.parent_group = parent ? parent->group : 0,
.opts = new_optstruct,
.parent_group = parent_group_index,
.parent_ptr = parent_ptr,
.co_index = config->num_opts,
};
struct m_config_option next = {
.name = "",
.group = group,
};
if (parent && parent->name && parent->name[0])
next.name = parent->name;
if (subopts->prefix && subopts->prefix[0]) {
assert(next.name);
next.name = subopts->prefix;
}
add_options(config, &next, new_optstruct, new_optstruct_def, subopts->opts);
}
if (subopts->prefix && subopts->prefix[0])
name_prefix = subopts->prefix;
if (!name_prefix)
name_prefix = "";
#define MAX_VO_AO 16
for (int i = 0; subopts->opts && subopts->opts[i].name; i++) {
const struct m_option *opt = &subopts->opts[i];
struct group_entry {
const struct m_obj_list *entry;
struct m_sub_options subs[MAX_VO_AO];
bool initialized;
};
static struct group_entry g_groups[2]; // limited by max. m_obj_list overall
static int g_num_groups = 0;
static pthread_mutex_t g_group_mutex = PTHREAD_MUTEX_INITIALIZER;
static const struct m_sub_options *get_cached_group(const struct m_obj_list *list,
int n, struct m_sub_options *v)
{
pthread_mutex_lock(&g_group_mutex);
struct group_entry *group = NULL;
for (int i = 0; i < g_num_groups; i++) {
if (g_groups[i].entry == list) {
group = &g_groups[i];
break;
}
}
if (!group) {
assert(g_num_groups < MP_ARRAY_SIZE(g_groups));
group = &g_groups[g_num_groups++];
group->entry = list;
}
if (!group->initialized) {
if (!v) {
n = -1;
group->initialized = true;
} else {
assert(n < MAX_VO_AO); // simply increase this if it fails
group->subs[n] = *v;
}
}
pthread_mutex_unlock(&g_group_mutex);
return n >= 0 ? &group->subs[n] : NULL;
}
static void init_obj_settings_list(struct m_config *config,
const struct m_obj_list *list)
{
struct m_obj_desc desc;
for (int n = 0; ; n++) {
if (!list->get_desc(&desc, n)) {
if (list->use_global_options)
get_cached_group(list, n, NULL);
break;
}
if (desc.global_opts)
add_sub_options(config, NULL, desc.global_opts);
if (list->use_global_options && desc.options) {
struct m_sub_options conf = {
.prefix = desc.options_prefix,
.opts = desc.options,
.defaults = desc.priv_defaults,
.size = desc.priv_size,
};
add_sub_options(config, NULL, get_cached_group(list, n, &conf));
}
}
}
// Initialize a field with a given value. In case this is dynamic data, it has
// to be allocated and copied. src can alias dst, also can be NULL.
static void init_opt_inplace(const struct m_option *opt, void *dst,
const void *src)
{
union m_option_value temp = {0};
if (src)
memcpy(&temp, src, opt->type->size);
memset(dst, 0, opt->type->size);
m_option_copy(opt, dst, &temp);
}
static void m_config_add_option(struct m_config *config,
struct m_config_option *parent,
void *optstruct,
const void *optstruct_def,
const struct m_option *arg)
{
assert(config != NULL);
assert(arg != NULL);
const char *parent_name = parent ? parent->name : "";
struct m_config_option co = {
.opt = arg,
.name = arg->name,
.shadow_offset = -1,
.group = parent ? parent->group : 0,
.default_data = &default_value,
.is_hidden = !!arg->deprecation_message,
};
if (arg->offset >= 0) {
if (optstruct)
co.data = (char *)optstruct + arg->offset;
if (optstruct_def)
co.default_data = (char *)optstruct_def + arg->offset;
}
if (arg->defval)
co.default_data = arg->defval;
// Fill in the full name
if (!co.name[0]) {
co.name = parent_name;
} else if (parent_name[0]) {
co.name = talloc_asprintf(config, "%s-%s", parent_name, co.name);
}
if (arg->type == &m_option_type_subconfig) {
const struct m_sub_options *subopts = arg->priv;
add_sub_options(config, &co, subopts);
} else {
int size = arg->type->size;
if (optstruct && size) {
// The required alignment is unknown, so go with the maximum C
// could require. Slightly wasteful, but not that much.
int align = (size - config->shadow_size % size) % size;
int offset = config->shadow_size + align;
assert(offset <= INT16_MAX);
co.shadow_offset = offset;
config->shadow_size = co.shadow_offset + size;
}
// Initialize options
if (co.data && co.default_data)
init_opt_inplace(arg, co.data, co.default_data);
if (opt->type == &m_option_type_subconfig)
continue;
struct m_config_option co = {
.name = concat_name(config, name_prefix, opt->name),
.opt = opt,
.group_index = group_index,
.is_hidden = !!opt->deprecation_message,
};
MP_TARRAY_APPEND(config, config->opts, config->num_opts, co);
if (arg->type == &m_option_type_obj_settings_list)
init_obj_settings_list(config, (const struct m_obj_list *)arg->priv);
}
config->groups[group_index].co_end_index = config->num_opts;
// Initialize sub-structs. These have to come after, because co_index and
// co_end_index must strictly be for a single struct only.
for (int i = 0; subopts->opts && subopts->opts[i].name; i++) {
const struct m_option *opt = &subopts->opts[i];
if (opt->type == &m_option_type_subconfig) {
const struct m_sub_options *new_subopts = opt->priv;
// Providing default structs in-place is not allowed.
if (opt->offset >= 0 && subopts->defaults) {
void *ptr = (char *)subopts->defaults + opt->offset;
assert(!substruct_read_ptr(ptr));
}
const char *prefix = concat_name(config, name_prefix, opt->name);
add_sub_group(config, prefix, group_index, opt->offset, new_subopts);
} else if (opt->type == &m_option_type_obj_settings_list) {
const struct m_obj_list *objlist = opt->priv;
init_obj_settings_list(config, group_index, objlist);
}
}
config->groups[group_index].group_count = config->num_groups - group_index;
}
struct m_config_option *m_config_get_co_raw(const struct m_config *config,
@ -627,6 +665,19 @@ struct m_config_option *m_config_get_co_index(struct m_config *config, int index
return &config->opts[index];
}
const void *m_config_get_co_default(const struct m_config *config,
struct m_config_option *co)
{
if (co->opt->defval)
return co->opt->defval;
const struct m_sub_options *subopt = config->groups[co->group_index].group;
if (co->opt->offset >= 0 && subopt->defaults)
return (char *)subopt->defaults + co->opt->offset;
return NULL;
}
const char *m_config_get_positional_option(const struct m_config *config, int p)
{
int pos = 0;
@ -753,7 +804,6 @@ static int m_config_handle_special_options(struct m_config *config,
return M_OPT_UNKNOWN;
}
// Unlike m_config_set_option_raw() this does not go through the property layer
// via config.option_set_callback.
int m_config_set_option_raw_direct(struct m_config *config,
@ -1033,8 +1083,11 @@ void m_config_print_option_list(const struct m_config *config, const char *name)
MP_INFO(config, " (%s to %s)", min, max);
}
char *def = NULL;
if (co->default_data)
def = m_option_pretty_print(opt, co->default_data);
const void *defptr = m_config_get_co_default(config, co);
if (!defptr)
defptr = &default_value;
if (defptr)
def = m_option_pretty_print(opt, defptr);
if (def) {
MP_INFO(config, " (default: %s)", def);
talloc_free(def);
@ -1188,36 +1241,16 @@ struct mpv_node m_config_get_profiles(struct m_config *config)
void m_config_create_shadow(struct m_config *config)
{
assert(config->global && config->options && config->size);
assert(config->global);
assert(!config->shadow && !config->global->config);
config->shadow = talloc_zero(config, struct m_config_shadow);
config->shadow->data = talloc_zero_size(config->shadow, config->shadow_size);
config->shadow = talloc_zero(NULL, struct m_config_shadow);
config->shadow->data =
allocate_option_data(config->shadow, config, 0, config->data);
config->shadow->root = config;
pthread_mutex_init(&config->shadow->lock, NULL);
config->global->config = config->shadow;
for (int n = 0; n < config->num_opts; n++) {
struct m_config_option *co = &config->opts[n];
if (co->shadow_offset < 0)
continue;
m_option_copy(co->opt, config->shadow->data + co->shadow_offset, co->data);
}
}
// Return whether parent is a parent of group. Also returns true if they're equal.
static bool is_group_included(struct m_config *config, int group, int parent)
{
for (;;) {
if (group == parent)
return true;
if (group < 0)
break;
group = config->groups[group].parent_group;
}
return false;
}
static void cache_destroy(void *p)
@ -1236,58 +1269,64 @@ struct m_config_cache *m_config_cache_alloc(void *ta_parent,
{
struct m_config_shadow *shadow = global->config;
struct m_config *root = shadow->root;
int group_index = -1;
struct m_config_cache *cache = talloc_zero(ta_parent, struct m_config_cache);
talloc_set_destructor(cache, cache_destroy);
cache->shadow = shadow;
cache->shadow_config = m_config_new(cache, mp_null_log, root->size,
root->defaults, root->options);
struct m_config *config = cache->shadow_config;
assert(config->num_opts == root->num_opts);
for (int n = 0; n < root->num_opts; n++) {
assert(config->opts[n].opt->type == root->opts[n].opt->type);
assert(config->opts[n].shadow_offset == root->opts[n].shadow_offset);
}
cache->ts = -1;
cache->group = -1;
for (int n = 0; n < config->num_groups; n++) {
if (config->groups[n].group == group) {
cache->opts = config->groups[n].opts;
cache->group = n;
for (int n = 0; n < root->num_groups; n++) {
// group==NULL is special cased to root group.
if (root->groups[n].group == group || (!group && !n)) {
group_index = n;
break;
}
}
assert(cache->group >= 0);
assert(cache->opts);
assert(group_index >= 0); // invalid group (or not in option tree)
// If we're not on the top-level, restrict set of options to the sub-group
// to reduce update costs. (It would be better not to add them in the first
// place.)
if (cache->group > 0) {
int num_opts = config->num_opts;
config->num_opts = 0;
for (int n = 0; n < num_opts; n++) {
struct m_config_option *co = &config->opts[n];
if (is_group_included(config, co->group, cache->group)) {
config->opts[config->num_opts++] = *co;
} else {
m_option_free(co->opt, co->data);
struct m_config_cache *cache = talloc_zero(ta_parent, struct m_config_cache);
talloc_set_destructor(cache, cache_destroy);
cache->shadow = shadow;
pthread_mutex_lock(&shadow->lock);
cache->data = allocate_option_data(cache, root, group_index, shadow->data);
pthread_mutex_unlock(&shadow->lock);
cache->opts = cache->data->gdata[0].udata;
return cache;
}
static bool update_options(struct m_config_data *dst, struct m_config_data *src)
{
assert(dst->root == src->root);
bool res = false;
dst->ts = src->ts;
// Must be from same root, but they can have arbitrary overlap.
int group_s = MPMAX(dst->group_index, src->group_index);
int group_e = MPMIN(dst->group_index + dst->num_gdata,
src->group_index + src->num_gdata);
assert(group_s >= 0 && group_e <= dst->root->num_groups);
for (int n = group_s; n < group_e; n++) {
struct m_config_group *g = &dst->root->groups[n];
struct m_group_data *gsrc = m_config_gdata(src, n);
struct m_group_data *gdst = m_config_gdata(dst, n);
assert(gsrc && gdst);
if (gdst->ts >= gsrc->ts)
continue;
gdst->ts = gsrc->ts;
res = true;
for (int i = g->co_index; i < g->co_end_index; i++) {
struct m_config_option *co = &dst->root->opts[i];
if (co->opt->offset >= 0 && co->opt->type->size) {
m_option_copy(co->opt, gdst->udata + co->opt->offset,
gsrc->udata + co->opt->offset);
}
}
for (int n = 0; n < config->num_groups; n++) {
if (!is_group_included(config, n, cache->group))
TA_FREEP(&config->groups[n].opts);
}
}
m_config_cache_update(cache);
return cache;
return res;
}
bool m_config_cache_update(struct m_config_cache *cache)
@ -1296,51 +1335,48 @@ bool m_config_cache_update(struct m_config_cache *cache)
// Using atomics and checking outside of the lock - it's unknown whether
// this makes it faster or slower. Just cargo culting it.
if (atomic_load(&shadow->root->groups[cache->group].ts) <= cache->ts)
if (atomic_load_explicit(&cache->data->ts, memory_order_relaxed) >=
atomic_load(&shadow->data->ts))
return false;
pthread_mutex_lock(&shadow->lock);
cache->ts = atomic_load(&shadow->root->groups[cache->group].ts);
for (int n = 0; n < cache->shadow_config->num_opts; n++) {
struct m_config_option *co = &cache->shadow_config->opts[n];
if (co->shadow_offset >= 0)
m_option_copy(co->opt, co->data, shadow->data + co->shadow_offset);
}
bool res = update_options(cache->data, shadow->data);
pthread_mutex_unlock(&shadow->lock);
return true;
return res;
}
void m_config_notify_change_co(struct m_config *config,
struct m_config_option *co)
{
struct m_config_shadow *shadow = config->shadow;
assert(co->data);
if (shadow) {
pthread_mutex_lock(&shadow->lock);
if (co->shadow_offset >= 0)
m_option_copy(co->opt, shadow->data + co->shadow_offset, co->data);
struct m_config_data *data = shadow->data;
struct m_group_data *gdata = m_config_gdata(data, co->group_index);
assert(gdata);
gdata->ts = atomic_fetch_add(&data->ts, 1) + 1;
m_option_copy(co->opt, gdata->udata + co->opt->offset, co->data);
for (int n = 0; n < shadow->num_listeners; n++) {
struct m_config_cache *cache = shadow->listeners[n];
if (cache->wakeup_cb && m_config_gdata(cache->data, co->group_index))
cache->wakeup_cb(cache->wakeup_cb_ctx);
}
pthread_mutex_unlock(&shadow->lock);
}
int changed = co->opt->flags & UPDATE_OPTS_MASK;
int group = co->group;
while (group >= 0) {
struct m_config_group *g = &config->groups[group];
atomic_fetch_add(&g->ts, 1);
if (g->group)
changed |= g->group->change_flags;
group = g->parent_group;
}
if (shadow) {
pthread_mutex_lock(&shadow->lock);
for (int n = 0; n < shadow->num_listeners; n++) {
struct m_config_cache *cache = shadow->listeners[n];
if (cache->wakeup_cb)
cache->wakeup_cb(cache->wakeup_cb_ctx);
}
pthread_mutex_unlock(&shadow->lock);
int group_index = co->group_index;
while (group_index >= 0) {
struct m_config_group *g = &config->groups[group_index];
changed |= g->group->change_flags;
group_index = g->parent_group;
}
if (config->option_change_callback) {
@ -1441,11 +1477,14 @@ void mp_read_option_raw(struct mpv_global *global, const char *name,
struct m_config_shadow *shadow = global->config;
struct m_config_option *co = m_config_get_co_raw(shadow->root, bstr0(name));
assert(co);
assert(co->shadow_offset >= 0);
assert(co->opt->offset >= 0);
assert(co->opt->type == type);
struct m_group_data *gdata = m_config_gdata(shadow->data, co->group_index);
assert(gdata);
memset(dst, 0, co->opt->type->size);
m_option_copy(co->opt, dst, shadow->data + co->shadow_offset);
m_option_copy(co->opt, dst, gdata->udata + co->opt->offset);
}
struct m_config *mp_get_root_config(struct mpv_global *global)

View File

@ -43,12 +43,10 @@ struct m_config_option {
bool is_set_from_config : 1; // Set by a config file
bool is_set_locally : 1; // Has a backup entry
bool warning_was_printed : 1;
int16_t shadow_offset; // Offset into m_config_shadow.data
int16_t group; // Index into m_config.groups
int16_t group_index; // Index into m_config.groups
const char *name; // Full name (ie option-subopt)
const struct m_option *opt; // Option description
void *data; // Raw value of the option
const void *default_data; // Raw default value
};
// Config object
@ -61,11 +59,6 @@ typedef struct m_config {
struct m_config_option *opts; // all options, even suboptions
int num_opts;
// Creation parameters
size_t size;
const void *defaults;
const struct m_option *options;
// List of defined profiles.
struct m_profile *profiles;
// Depth when recursively including profiles.
@ -94,14 +87,17 @@ typedef struct m_config {
void *optstruct; // struct mpopts or other
int shadow_size;
// List of m_sub_options instances.
// Private. List of m_sub_options instances.
// Index 0 is the top-level and is always present.
// Immutable after init.
// Invariant: a parent is always at a lower index than any of its children.
struct m_config_group *groups;
int num_groups;
// Thread-safe shadow memory; only set for the main m_config.
// Private. Non-NULL if data was allocated. m_config_option.data uses it.
struct m_config_data *data;
// Private. Thread-safe shadow memory; only set for the main m_config.
struct m_config_shadow *shadow;
} m_config_t;
@ -182,6 +178,8 @@ struct m_config_option *m_config_get_co(const struct m_config *config,
int m_config_get_co_count(struct m_config *config);
struct m_config_option *m_config_get_co_index(struct m_config *config, int index);
const void *m_config_get_co_default(const struct m_config *config,
struct m_config_option *co);
// Return the n-th option by position. n==0 is the first option. If there are
// fewer than (n + 1) options, return NULL.
@ -264,14 +262,13 @@ struct mpv_node m_config_get_profiles(struct m_config *config);
// the cache itself is allowed.
struct m_config_cache {
// The struct as indicated by m_config_cache_alloc's group parameter.
// (Internally the same as data->gdata[0]->udata.)
void *opts;
// Internal.
struct m_config_shadow *shadow;
struct m_config *shadow_config;
long long ts;
int group;
bool in_list;
struct m_config_shadow *shadow; // real data
struct m_config_data *data; // copy for the cache user
bool in_list; // registered as listener with root config
// --- Implicitly synchronized by setting/unsetting wakeup_cb.
struct mp_dispatch_queue *wakeup_dispatch_queue;
void (*wakeup_dispatch_cb)(void *ctx);
@ -281,15 +278,17 @@ struct m_config_cache {
void *wakeup_cb_ctx;
};
#define GLOBAL_CONFIG NULL
// Create a mirror copy from the global options.
// Keep in mind that a m_config_cache object is not thread-safe; it merely
// provides thread-safe access to the global options. All API functions for
// the same m_config_cache object must be synchronized, unless otherwise noted.
// ta_parent: parent for the returned allocation
// global: option data source
// group: the option group to return. This can be NULL for the global option
// struct (MPOpts), or m_sub_options used in a certain OPT_SUBSTRUCT()
// item.
// group: the option group to return. This can be GLOBAL_CONFIG for the global
// option struct (MPOpts), or m_sub_options used in a certain
// OPT_SUBSTRUCT() item.
struct m_config_cache *m_config_cache_alloc(void *ta_parent,
struct mpv_global *global,
const struct m_sub_options *group);
@ -320,6 +319,7 @@ bool m_config_cache_update(struct m_config_cache *cache);
// Like m_config_cache_alloc(), but return the struct (m_config_cache->opts)
// directly, with no way to update the config. Basically this returns a copy
// with a snapshot of the current option values.
// group==GLOBAL_CONFIG is a special case, and always returns the root group.
void *mp_get_config_group(void *ta_parent, struct mpv_global *global,
const struct m_sub_options *group);
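// Usage sketch (editor's illustration, not part of this diff): mirror the root
// option group (GLOBAL_CONFIG) in a cache and poll it for changes; in this
// case cache->opts points at a private copy of struct MPOpts (assumes
// options/options.h is included for that type). The function name is
// hypothetical.
static void example_options_poll(void *ta_parent, struct mpv_global *global)
{
    struct m_config_cache *cache =
        m_config_cache_alloc(ta_parent, global, GLOBAL_CONFIG);
    struct MPOpts *opts = cache->opts;
    if (m_config_cache_update(cache)) {
        // *opts now holds the most recently written option values.
    }
    (void)opts;
}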

View File

@ -564,6 +564,7 @@ extern const char m_option_path_separator;
#define OPTDEF_STR(s) .defval = (void *)&(char * const){s}
#define OPTDEF_INT(i) .defval = (void *)&(const int){i}
#define OPTDEF_INT64(i) .defval = (void *)&(const int64_t){i}
#define OPTDEF_FLOAT(f) .defval = (void *)&(const float){f}
#define OPTDEF_DOUBLE(d) .defval = (void *)&(const double){d}

View File

@ -59,7 +59,6 @@ extern const struct m_sub_options tv_params_conf;
extern const struct m_sub_options stream_cdda_conf;
extern const struct m_sub_options stream_dvb_conf;
extern const struct m_sub_options stream_lavf_conf;
extern const struct m_sub_options stream_cache_conf;
extern const struct m_sub_options sws_conf;
extern const struct m_sub_options drm_conf;
extern const struct m_sub_options demux_rawaudio_conf;
@ -78,7 +77,8 @@ extern const struct m_sub_options demux_conf;
extern const struct m_obj_list vf_obj_list;
extern const struct m_obj_list af_obj_list;
extern const struct m_obj_list vo_obj_list;
extern const struct m_obj_list ao_obj_list;
extern const struct m_sub_options ao_conf;
extern const struct m_sub_options opengl_conf;
extern const struct m_sub_options vulkan_conf;
@ -386,8 +386,6 @@ const m_option_t mp_opts[] = {
// ------------------------- stream options --------------------
OPT_SUBSTRUCT("", stream_cache, stream_cache_conf, 0),
#if HAVE_DVDREAD || HAVE_DVDNAV
OPT_SUBSTRUCT("", dvd_opts, dvd_conf, 0),
#endif /* HAVE_DVDREAD */
@ -461,6 +459,7 @@ const m_option_t mp_opts[] = {
OPT_STRING("audio-demuxer", audio_demuxer_name, 0),
OPT_STRING("sub-demuxer", sub_demuxer_name, 0),
OPT_FLAG("demuxer-thread", demuxer_thread, 0),
OPT_DOUBLE("demuxer-termination-timeout", demux_termination_timeout, 0),
OPT_FLAG("prefetch-playlist", prefetch_open, 0),
OPT_FLAG("cache-pause", cache_pause, 0),
OPT_FLAG("cache-pause-initial", cache_pause_initial, 0),
@ -509,18 +508,13 @@ const m_option_t mp_opts[] = {
OPT_STRING("audio-spdif", audio_spdif, 0),
OPT_STRING_VALIDATE("hwdec", hwdec_api, M_OPT_OPTIONAL_PARAM,
hwdec_validate_opt),
OPT_STRING("hwdec-codecs", hwdec_codecs, 0),
OPT_IMAGEFORMAT("hwdec-image-format", hwdec_image_format, 0, .min = -1),
// -1 means auto aspect (prefer container size until aspect change)
// 0 means square pixels
OPT_ASPECT("video-aspect", movie_aspect, UPDATE_IMGPAR, -1.0, 10.0),
OPT_CHOICE("video-aspect-method", aspect_method, UPDATE_IMGPAR,
({"bitstream", 1}, {"container", 2})),
OPT_SUBSTRUCT("vd-lavc", vd_lavc_params, vd_lavc_conf, 0),
OPT_SUBSTRUCT("", vd_lavc_params, vd_lavc_conf, 0),
OPT_SUBSTRUCT("ad-lavc", ad_lavc_params, ad_lavc_conf, 0),
OPT_SUBSTRUCT("", demux_lavf, demux_lavf_conf, 0),
@ -548,10 +542,8 @@ const m_option_t mp_opts[] = {
OPT_FLAG("osd-bar", osd_bar_visible, UPDATE_OSD),
//---------------------- libao/libvo options ------------------------
OPT_SETTINGSLIST("ao", audio_driver_list, 0, &ao_obj_list, ),
OPT_STRING("audio-device", audio_device, UPDATE_AUDIO),
OPT_SUBSTRUCT("", ao_opts, ao_conf, 0),
OPT_FLAG("audio-exclusive", audio_exclusive, UPDATE_AUDIO),
OPT_STRING("audio-client-name", audio_client_name, UPDATE_AUDIO),
OPT_FLAG("audio-fallback-to-null", ao_null_fallback, 0),
OPT_FLAG("audio-stream-silence", audio_stream_silence, 0),
OPT_FLOATRANGE("audio-wait-open", audio_wait_open, 0, 0, 60),
@ -576,8 +568,6 @@ const m_option_t mp_opts[] = {
({"no", 0},
{"yes", 1},
{"weak", -1})),
OPT_DOUBLE("audio-buffer", audio_buffer, M_OPT_MIN | M_OPT_MAX,
.min = 0, .max = 10),
OPT_STRING("title", wintitle, 0),
OPT_STRING("force-media-title", media_title, 0),
@ -878,16 +868,12 @@ const m_option_t mp_opts[] = {
const struct MPOpts mp_default_opts = {
.use_terminal = 1,
.msg_color = 1,
.audio_driver_list = NULL,
.audio_decoders = NULL,
.video_decoders = NULL,
.softvol_max = 130,
.softvol_volume = 100,
.softvol_mute = 0,
.gapless_audio = -1,
.audio_buffer = 0.2,
.audio_device = "auto",
.audio_client_name = "mpv",
.wintitle = "${?media-title:${media-title}}${!media-title:No file} - mpv",
.stop_screensaver = 1,
.cursor_autohide_delay = 1000,
@ -915,6 +901,7 @@ const struct MPOpts mp_default_opts = {
.position_resume = 1,
.autoload_files = 1,
.demuxer_thread = 1,
.demux_termination_timeout = 0.1,
.hls_bitrate = INT_MAX,
.cache_pause = 1,
.cache_pause_wait = 1.0,
@ -952,9 +939,6 @@ const struct MPOpts mp_default_opts = {
.osd_bar_visible = 1,
.screenshot_template = "mpv-shot%n",
.hwdec_api = HAVE_RPI ? "mmal" : "no",
.hwdec_codecs = "h264,vc1,wmv3,hevc,mpeg2video,vp9",
.audio_output_channels = {
.set = 1,
.auto_safe = 1,

View File

@ -59,16 +59,6 @@ typedef struct mp_vo_opts {
struct drm_opts *drm_opts;
} mp_vo_opts;
struct mp_cache_opts {
int size;
int def_size;
int initial;
int seek_min;
int back_buffer;
char *file;
int file_max;
};
// Subtitle options needed by the subtitle decoders/renderers.
struct mp_subtitle_opts {
int sub_visibility;
@ -144,10 +134,7 @@ typedef struct MPOpts {
int auto_load_scripts;
struct m_obj_settings *audio_driver_list;
char *audio_device;
int audio_exclusive;
char *audio_client_name;
int ao_null_fallback;
int audio_stream_silence;
float audio_wait_open;
@ -160,9 +147,9 @@ typedef struct MPOpts {
int softvol_mute;
float softvol_max;
int gapless_audio;
double audio_buffer;
mp_vo_opts *vo;
struct ao_opts *ao_opts;
char *wintitle;
char *media_title;
@ -207,7 +194,6 @@ typedef struct MPOpts {
char *force_configdir;
int use_filedir_conf;
int hls_bitrate;
struct mp_cache_opts *stream_cache;
int chapterrange[2];
int edition_id;
int correct_pts;
@ -261,6 +247,7 @@ typedef struct MPOpts {
char **audio_files;
char *demuxer_name;
int demuxer_thread;
double demux_termination_timeout;
int prefetch_open;
char *audio_demuxer_name;
char *sub_demuxer_name;
@ -295,10 +282,6 @@ typedef struct MPOpts {
int audiofile_auto;
int osd_bar_visible;
char *hwdec_api;
char *hwdec_codecs;
int hwdec_image_format;
int w32_priority;
struct tv_params *tv_params;
@ -364,14 +347,10 @@ struct filter_opts {
extern const m_option_t mp_opts[];
extern const struct MPOpts mp_default_opts;
extern const struct m_sub_options vo_sub_opts;
extern const struct m_sub_options stream_cache_conf;
extern const struct m_sub_options dvd_conf;
extern const struct m_sub_options mp_subtitle_sub_opts;
extern const struct m_sub_options mp_osd_render_sub_opts;
extern const struct m_sub_options filter_conf;
extern const struct m_sub_options resample_conf;
int hwdec_validate_opt(struct mp_log *log, const m_option_t *opt,
struct bstr name, struct bstr param);
#endif

View File

@ -199,7 +199,7 @@ int m_config_parse_mp_command_line(m_config_t *config, struct playlist *files,
if (bstrcmp0(p.arg, "playlist") == 0) {
// append the playlist to the local args
char *param0 = bstrdup0(NULL, p.param);
struct playlist *pl = playlist_parse_file(param0, global);
struct playlist *pl = playlist_parse_file(param0, NULL, global);
talloc_free(param0);
if (!pl) {
MP_FATAL(config, "Error reading playlist '%.*s'\n",
@ -281,10 +281,8 @@ err_out:
* during normal options parsing.
*/
void m_config_preparse_command_line(m_config_t *config, struct mpv_global *global,
char **argv)
int *verbose, char **argv)
{
struct MPOpts *opts = global->opts;
struct parse_state p = {config, argv};
while (split_opt_silent(&p) == 0) {
if (p.is_opt) {
@ -293,7 +291,7 @@ void m_config_preparse_command_line(m_config_t *config, struct mpv_global *globa
int flags = M_SETOPT_FROM_CMDLINE | M_SETOPT_PRE_PARSE_ONLY;
m_config_set_option_cli(config, p.arg, p.param, flags);
if (bstrcmp0(p.arg, "v") == 0)
opts->verbose++;
(*verbose)++;
}
}

View File

@ -27,6 +27,6 @@ struct mpv_global;
int m_config_parse_mp_command_line(m_config_t *config, struct playlist *files,
struct mpv_global *global, char **argv);
void m_config_preparse_command_line(m_config_t *config, struct mpv_global *global,
char **argv);
int *verbose, char **argv);
#endif /* MPLAYER_PARSER_MPCMD_H */

View File

@ -61,6 +61,19 @@ static const char *const config_dirs[] = {
"global",
};
void mp_init_paths(struct mpv_global *global, struct MPOpts *opts)
{
TA_FREEP(&global->configdir);
const char *force_configdir = getenv("MPV_HOME");
if (opts->force_configdir && opts->force_configdir[0])
force_configdir = opts->force_configdir;
if (!opts->load_config)
force_configdir = "";
global->configdir = talloc_strdup(global, force_configdir);
}
// Return a platform specific path using a path type as defined in osdep/path.h.
// Keep in mind that the only way to free the return value is freeing talloc_ctx
// (or its children), as this function can return a statically allocated string.
@ -70,15 +83,10 @@ static const char *mp_get_platform_path(void *talloc_ctx,
{
assert(talloc_ctx);
const char *force_configdir = getenv("MPV_HOME");
if (global->opts->force_configdir && global->opts->force_configdir[0])
force_configdir = global->opts->force_configdir;
if (!global->opts->load_config)
force_configdir = "";
if (force_configdir) {
if (global->configdir) {
for (int n = 0; n < MP_ARRAY_SIZE(config_dirs); n++) {
if (strcmp(config_dirs[n], type) == 0)
return (n == 0 && force_configdir[0]) ? force_configdir : NULL;
return (n == 0 && global->configdir[0]) ? global->configdir : NULL;
}
}

View File

@ -24,6 +24,9 @@
#include "misc/bstr.h"
struct mpv_global;
struct MPOpts;
void mp_init_paths(struct mpv_global *global, struct MPOpts *opts);
// Search for the input filename in several paths. These include user and global
// config locations by default. Some platforms may implement additional platform

View File

@ -17,5 +17,10 @@
#define PRINTF_ATTRIBUTE(a1, a2) __attribute__ ((format (gnu_printf, a1, a2)))
#endif
#if __STDC_VERSION__ >= 201112L
#include <stdalign.h>
#else
#define alignof(x) (offsetof(struct {char unalign_; x u;}, u))
#endif
#endif
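A small sketch of what the offsetof-based fallback yields (same value as C11 _Alignof for complete types, since the member u is placed right after a single char):

    size_t a = alignof(int64_t);   // with the fallback this is offsetof-based; typically 8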

View File

@ -26,8 +26,9 @@
#include "osdep/subprocess.h"
#include "osdep/io.h"
#include "common/common.h"
#include "misc/thread_tools.h"
#include "osdep/io.h"
#include "stream/stream.h"
extern char **environ;

View File

@ -27,6 +27,7 @@
#include "common/common.h"
#include "stream/stream.h"
#include "misc/bstr.h"
#include "misc/thread_tools.h"
static void write_arg(bstr *cmdline, char *arg)
{

View File

@ -59,16 +59,6 @@ double mp_time_sec(void)
return mp_time_us() / (double)(1000 * 1000);
}
int64_t mp_time_relative_us(int64_t *t)
{
int64_t r = 0;
int64_t now = mp_time_us();
if (*t)
r = now - *t;
*t = now;
return r;
}
int64_t mp_add_timeout(int64_t time_us, double timeout_sec)
{
assert(time_us > 0); // mp_time_us() returns strictly positive values

View File

@ -39,12 +39,6 @@ void mp_sleep_us(int64_t us);
#define MP_START_TIME 10000000
// Return the amount of time that has passed since the last call, in
// microseconds. *t is used to calculate the time that has passed by storing
// the current time in it. If *t is 0, the call will return 0. (So that the
// first call will return 0, instead of the absolute current time.)
int64_t mp_time_relative_us(int64_t *t);
// Add a time in seconds to the given time in microseconds, and return it.
// Takes care of possible overflows. Never returns a negative or 0 time.
int64_t mp_add_timeout(int64_t time_us, double timeout_sec);

View File

@ -31,7 +31,9 @@
#include "input/cmd.h"
#include "misc/ctype.h"
#include "misc/dispatch.h"
#include "misc/node.h"
#include "misc/rendezvous.h"
#include "misc/thread_tools.h"
#include "options/m_config.h"
#include "options/m_option.h"
#include "options/m_property.h"
@ -370,6 +372,30 @@ void mpv_wait_async_requests(mpv_handle *ctx)
pthread_mutex_unlock(&ctx->lock);
}
// Send an abort signal to all matching work items.
// If type==0, abort all work items belonging to the given ctx.
// If ctx==NULL, abort all work items.
static void abort_async(struct MPContext *mpctx, mpv_handle *ctx,
int type, uint64_t id)
{
pthread_mutex_lock(&mpctx->abort_lock);
// Destroy all => ensure any newly appearing work is aborted immediately.
if (ctx == NULL)
mpctx->abort_all = true;
for (int n = 0; n < mpctx->num_abort_list; n++) {
struct mp_abort_entry *abort = mpctx->abort_list[n];
if (!ctx || (abort->client == ctx && (!type ||
(abort->client_work_type == type && abort->client_work_id == id))))
{
mp_abort_trigger_locked(mpctx, abort);
}
}
pthread_mutex_unlock(&mpctx->abort_lock);
}
static void get_thread(void *ptr)
{
*(pthread_t *)ptr = pthread_self();
@ -388,6 +414,8 @@ static void mp_destroy_client(mpv_handle *ctx, bool terminate)
if (terminate)
mpv_command(ctx, (const char*[]){"quit", NULL});
abort_async(mpctx, ctx, 0, 0);
// reserved_events equals the number of asynchronous requests that have not
// been replied to yet. To avoid a crash from trying to reply to a removed
// client, block until all asynchronous requests have been served.
@ -483,32 +511,52 @@ void mpv_terminate_destroy(mpv_handle *ctx)
mp_destroy_client(ctx, true);
}
static bool can_terminate(struct MPContext *mpctx)
{
struct mp_client_api *clients = mpctx->clients;
pthread_mutex_lock(&clients->lock);
bool ok = clients->num_clients == 0 && mpctx->outstanding_async == 0 &&
(mpctx->is_cli || clients->terminate_core_thread);
pthread_mutex_unlock(&clients->lock);
return ok;
}
// Can be called on the core thread only. Idempotent.
// Also happens to take care of shutting down any async work.
void mp_shutdown_clients(struct MPContext *mpctx)
{
struct mp_client_api *clients = mpctx->clients;
// Prevent new clients from appearing.
pthread_mutex_lock(&clients->lock);
clients->shutting_down = true;
pthread_mutex_unlock(&clients->lock);
// Forcefully abort async work after 2 seconds of waiting.
double abort_time = mp_time_sec() + 2;
pthread_mutex_lock(&clients->lock);
// Prevent new clients from appearing.
clients->shutting_down = true;
// Wait until we can terminate.
while (clients->num_clients || mpctx->outstanding_async ||
!(mpctx->is_cli || clients->terminate_core_thread))
{
pthread_mutex_unlock(&clients->lock);
double left = abort_time - mp_time_sec();
if (left >= 0) {
mp_set_timeout(mpctx, left);
} else {
// Forcefully abort any ongoing async work. This is quite rude and
// probably not what everyone wants, so it happens only after a
// timeout.
abort_async(mpctx, NULL, 0, 0);
}
while (!can_terminate(mpctx)) {
mp_client_broadcast_event(mpctx, MPV_EVENT_SHUTDOWN, NULL);
mp_wait_events(mpctx);
pthread_mutex_lock(&clients->lock);
}
pthread_mutex_unlock(&clients->lock);
}
bool mp_is_shutting_down(struct MPContext *mpctx)
{
struct mp_client_api *clients = mpctx->clients;
pthread_mutex_lock(&clients->lock);
bool res = clients->shutting_down;
pthread_mutex_unlock(&clients->lock);
return res;
}
static void *core_thread(void *p)
@ -677,16 +725,6 @@ static void send_reply(struct mpv_handle *ctx, uint64_t userdata,
pthread_mutex_unlock(&ctx->lock);
}
static void status_reply(struct mpv_handle *ctx, int event,
uint64_t userdata, int status)
{
struct mpv_event reply = {
.event_id = event,
.error = status,
};
send_reply(ctx, userdata, &reply);
}
// Return whether there's any client listening to this event.
// If false is returned, the core doesn't need to send it.
bool mp_client_event_is_registered(struct MPContext *mpctx, int event)
@ -905,54 +943,6 @@ static bool conv_node_to_format(void *dst, mpv_format dst_fmt, mpv_node *src)
return false;
}
// Note: for MPV_FORMAT_NODE_MAP, this (incorrectly) takes the order into
// account, instead of treating it as a set.
static bool compare_value(void *a, void *b, mpv_format format)
{
switch (format) {
case MPV_FORMAT_NONE:
return true;
case MPV_FORMAT_STRING:
case MPV_FORMAT_OSD_STRING:
return strcmp(*(char **)a, *(char **)b) == 0;
case MPV_FORMAT_FLAG:
return *(int *)a == *(int *)b;
case MPV_FORMAT_INT64:
return *(int64_t *)a == *(int64_t *)b;
case MPV_FORMAT_DOUBLE:
return *(double *)a == *(double *)b;
case MPV_FORMAT_NODE: {
struct mpv_node *a_n = a, *b_n = b;
if (a_n->format != b_n->format)
return false;
return compare_value(&a_n->u, &b_n->u, a_n->format);
}
case MPV_FORMAT_BYTE_ARRAY: {
struct mpv_byte_array *a_r = a, *b_r = b;
if (a_r->size != b_r->size)
return false;
return memcmp(a_r->data, b_r->data, a_r->size) == 0;
}
case MPV_FORMAT_NODE_ARRAY:
case MPV_FORMAT_NODE_MAP:
{
mpv_node_list *l_a = *(mpv_node_list **)a, *l_b = *(mpv_node_list **)b;
if (l_a->num != l_b->num)
return false;
for (int n = 0; n < l_a->num; n++) {
if (!compare_value(&l_a->values[n], &l_b->values[n], MPV_FORMAT_NODE))
return false;
if (format == MPV_FORMAT_NODE_MAP) {
if (strcmp(l_a->keys[n], l_b->keys[n]) != 0)
return false;
}
}
return true;
}
}
abort();
}
void mpv_free_node_contents(mpv_node *node)
{
static const struct m_option type = { .type = CONF_TYPE_NODE };
@ -1017,29 +1007,30 @@ static int run_async(mpv_handle *ctx, void (*fn)(void *fn_data), void *fn_data)
talloc_free(fn_data);
return err;
}
mp_dispatch_enqueue_autofree(ctx->mpctx->dispatch, fn, fn_data);
mp_dispatch_enqueue(ctx->mpctx->dispatch, fn, fn_data);
return 0;
}
struct cmd_request {
struct MPContext *mpctx;
struct mp_cmd *cmd;
struct mpv_node *res;
int status;
struct mpv_handle *reply_ctx;
uint64_t userdata;
struct mpv_node *res;
struct mp_waiter completion;
};
static void cmd_fn(void *data)
static void cmd_complete(struct mp_cmd_ctx *cmd)
{
struct cmd_request *req = data;
int r = run_command(req->mpctx, req->cmd, req->res);
req->status = r >= 0 ? 0 : MPV_ERROR_COMMAND;
talloc_free(req->cmd);
if (req->reply_ctx) {
status_reply(req->reply_ctx, MPV_EVENT_COMMAND_REPLY,
req->userdata, req->status);
struct cmd_request *req = cmd->on_completion_priv;
req->status = cmd->success ? 0 : MPV_ERROR_COMMAND;
if (req->res) {
*req->res = cmd->result;
cmd->result = (mpv_node){0};
}
// Unblock the waiting thread (especially for async commands).
mp_waiter_wakeup(&req->completion, 0);
}
static int run_client_command(mpv_handle *ctx, struct mp_cmd *cmd, mpv_node *res)
@ -1049,17 +1040,33 @@ static int run_client_command(mpv_handle *ctx, struct mp_cmd *cmd, mpv_node *res
if (!cmd)
return MPV_ERROR_INVALID_PARAMETER;
if (mp_input_is_abort_cmd(cmd))
mp_abort_playback_async(ctx->mpctx);
cmd->sender = ctx->name;
struct cmd_request req = {
.mpctx = ctx->mpctx,
.cmd = cmd,
.res = res,
.completion = MP_WAITER_INITIALIZER,
};
run_locked(ctx, cmd_fn, &req);
bool async = cmd->flags & MP_ASYNC_CMD;
lock_core(ctx);
if (async) {
run_command(ctx->mpctx, cmd, NULL, NULL, NULL);
} else {
struct mp_abort_entry *abort = NULL;
if (cmd->def->can_abort) {
abort = talloc_zero(NULL, struct mp_abort_entry);
abort->client = ctx;
}
run_command(ctx->mpctx, cmd, abort, cmd_complete, &req);
}
unlock_core(ctx);
if (!async)
mp_waiter_wait(&req.completion);
return req.status;
}
@ -1083,7 +1090,54 @@ int mpv_command_string(mpv_handle *ctx, const char *args)
mp_input_parse_cmd(ctx->mpctx->input, bstr0((char*)args), ctx->name), NULL);
}
static int run_cmd_async(mpv_handle *ctx, uint64_t ud, struct mp_cmd *cmd)
struct async_cmd_request {
struct MPContext *mpctx;
struct mp_cmd *cmd;
struct mpv_handle *reply_ctx;
uint64_t userdata;
};
static void async_cmd_complete(struct mp_cmd_ctx *cmd)
{
struct async_cmd_request *req = cmd->on_completion_priv;
struct mpv_event_command *data = talloc_zero(NULL, struct mpv_event_command);
data->result = cmd->result;
cmd->result = (mpv_node){0};
talloc_steal(data, node_get_alloc(&data->result));
struct mpv_event reply = {
.event_id = MPV_EVENT_COMMAND_REPLY,
.data = data,
.error = cmd->success ? 0 : MPV_ERROR_COMMAND,
};
send_reply(req->reply_ctx, req->userdata, &reply);
talloc_free(req);
}
static void async_cmd_fn(void *data)
{
struct async_cmd_request *req = data;
struct mp_cmd *cmd = req->cmd;
ta_xset_parent(cmd, NULL);
req->cmd = NULL;
struct mp_abort_entry *abort = NULL;
if (cmd->def->can_abort) {
abort = talloc_zero(NULL, struct mp_abort_entry);
abort->client = req->reply_ctx;
abort->client_work_type = MPV_EVENT_COMMAND_REPLY;
abort->client_work_id = req->userdata;
}
// This will synchronously or asynchronously call async_cmd_complete()
// (depending on the command).
run_command(req->mpctx, cmd, abort, async_cmd_complete, req);
}
static int run_async_cmd(mpv_handle *ctx, uint64_t ud, struct mp_cmd *cmd)
{
if (!ctx->mpctx->initialized)
return MPV_ERROR_UNINITIALIZED;
@ -1092,24 +1146,29 @@ static int run_cmd_async(mpv_handle *ctx, uint64_t ud, struct mp_cmd *cmd)
cmd->sender = ctx->name;
struct cmd_request *req = talloc_ptrtype(NULL, req);
*req = (struct cmd_request){
struct async_cmd_request *req = talloc_ptrtype(NULL, req);
*req = (struct async_cmd_request){
.mpctx = ctx->mpctx,
.cmd = cmd,
.cmd = talloc_steal(req, cmd),
.reply_ctx = ctx,
.userdata = ud,
};
return run_async(ctx, cmd_fn, req);
return run_async(ctx, async_cmd_fn, req);
}
int mpv_command_async(mpv_handle *ctx, uint64_t ud, const char **args)
{
return run_cmd_async(ctx, ud, mp_input_parse_cmd_strv(ctx->log, args));
return run_async_cmd(ctx, ud, mp_input_parse_cmd_strv(ctx->log, args));
}
int mpv_command_node_async(mpv_handle *ctx, uint64_t ud, mpv_node *args)
{
return run_cmd_async(ctx, ud, mp_input_parse_cmd_node(ctx->log, args));
return run_async_cmd(ctx, ud, mp_input_parse_cmd_node(ctx->log, args));
}
void mpv_abort_async_command(mpv_handle *ctx, uint64_t reply_userdata)
{
abort_async(ctx->mpctx, ctx, MPV_EVENT_COMMAND_REPLY, reply_userdata);
}
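A minimal sketch of how a libmpv client might drive the new asynchronous command path, including mpv_abort_async_command(). The handle setup, URL, and reply_userdata value are assumptions for illustration only (a real client would loop over events):

    #include <stdio.h>
    #include <mpv/client.h>

    static void example_async_load(mpv_handle *ctx)
    {
        const char *args[] = {"loadfile", "http://example.com/stream.mkv", NULL};
        mpv_command_async(ctx, 1234, args);     // 1234 == reply_userdata

        // Later (e.g. on user input), request a best-effort abort:
        mpv_abort_async_command(ctx, 1234);

        // A MPV_EVENT_COMMAND_REPLY is delivered in any case:
        mpv_event *ev = mpv_wait_event(ctx, -1.0);
        if (ev->event_id == MPV_EVENT_COMMAND_REPLY && ev->reply_userdata == 1234) {
            mpv_event_command *cmd = ev->data;  // command return value (mpv_node)
            if (ev->error < 0)
                printf("command failed or was aborted: %d\n", ev->error);
            (void)cmd;
        }
    }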
static int translate_property_error(int errc)
@ -1156,8 +1215,12 @@ static void setproperty_fn(void *arg)
req->status = translate_property_error(err);
if (req->reply_ctx) {
status_reply(req->reply_ctx, MPV_EVENT_SET_PROPERTY_REPLY,
req->userdata, req->status);
struct mpv_event reply = {
.event_id = MPV_EVENT_SET_PROPERTY_REPLY,
.error = req->status,
};
send_reply(req->reply_ctx, req->userdata, &reply);
talloc_free(req);
}
}
@ -1313,6 +1376,7 @@ static void getproperty_fn(void *arg)
.error = req->status,
};
send_reply(req->reply_ctx, req->userdata, &reply);
talloc_free(req);
}
}
@ -1508,7 +1572,7 @@ static void update_prop(void *p)
if (prop->user_value_valid != prop->new_value_valid) {
prop->changed = true;
} else if (prop->user_value_valid && prop->new_value_valid) {
if (!compare_value(&prop->user_value, &prop->new_value, prop->format))
if (!equal_mpv_value(&prop->user_value, &prop->new_value, prop->format))
prop->changed = true;
}
if (prop->dead)

View File

@ -20,6 +20,7 @@ struct mpv_global;
void mp_clients_init(struct MPContext *mpctx);
void mp_clients_destroy(struct MPContext *mpctx);
void mp_shutdown_clients(struct MPContext *mpctx);
bool mp_is_shutting_down(struct MPContext *mpctx);
bool mp_clients_all_initialized(struct MPContext *mpctx);
bool mp_client_exists(struct MPContext *mpctx, const char *client_name);

File diff suppressed because it is too large

View File

@ -20,6 +20,8 @@
#include <stdbool.h>
#include "libmpv/client.h"
struct MPContext;
struct mp_cmd;
struct mp_log;
@ -43,12 +45,31 @@ struct mp_cmd_ctx {
bool bar_osd; // OSD bar requested
bool seek_msg_osd; // same as above, but for seek commands
bool seek_bar_osd;
// Return values
// If mp_cmd_def.can_abort is set, this will be set.
struct mp_abort_entry *abort;
// Return values (to be set by command implementation, read by the
// completion callback).
bool success; // true by default
struct mpv_node *result;
struct mpv_node result;
// Command handlers can set this to false if returning from the command
// handler does not complete the command. It stops the common command code
// from signaling the completion automatically, and you can call
// mp_cmd_ctx_complete() to invoke on_completion() properly (including all
// the bookkeeping).
// (Note that you must never call mp_cmd_ctx_complete() from within the
// command handler, because it frees the mp_cmd_ctx.)
bool completed; // true by default
// This is managed by the common command code. For rules about how and where
// this is called see run_command() comments.
void (*on_completion)(struct mp_cmd_ctx *cmd);
void *on_completion_priv; // for free use by on_completion callback
};
int run_command(struct MPContext *mpctx, struct mp_cmd *cmd, struct mpv_node *res);
void run_command(struct MPContext *mpctx, struct mp_cmd *cmd,
struct mp_abort_entry *abort,
void (*on_completion)(struct mp_cmd_ctx *cmd),
void *on_completion_priv);
void mp_cmd_ctx_complete(struct mp_cmd_ctx *cmd);
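A sketch of the deferred-completion pattern these comments describe. Only cmd->completed, cmd->success, and mp_cmd_ctx_complete() come from this header; the handler and dispatch helper names are made up:

    static void slow_work_done(void *p)
    {
        struct mp_cmd_ctx *cmd = p;
        cmd->success = true;        // set the result fields first
        mp_cmd_ctx_complete(cmd);   // runs on_completion() and frees cmd
    }

    static void cmd_do_slow_thing(void *p)
    {
        struct mp_cmd_ctx *cmd = p;
        cmd->completed = false;     // suppress automatic completion on return
        queue_work_somewhere(slow_work_done, cmd);  // hypothetical dispatcher
        // returning here no longer completes the command
    }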
char *mp_property_expand_string(struct MPContext *mpctx, const char *str);
char *mp_property_expand_escaped_string(struct MPContext *mpctx, const char *str);
void property_print_help(struct MPContext *mpctx);

View File

@ -37,12 +37,13 @@
// definitions used internally by the core player code
enum stop_play_reason {
KEEP_PLAYING = 0, // must be 0, numeric values of others do not matter
KEEP_PLAYING = 0, // playback of a file is actually going on
// must be 0, numeric values of others do not matter
AT_END_OF_FILE, // file has ended, prepare to play next
// also returned on unrecoverable playback errors
PT_NEXT_ENTRY, // prepare to play next entry in playlist
PT_CURRENT_ENTRY, // prepare to play mpctx->playlist->current
PT_STOP, // stop playback, clear playlist
PT_STOP, // stop playback, or transient state when going to next
PT_QUIT, // stop playback, quit player
PT_ERROR, // play next playlist entry (due to an error)
};
@ -243,6 +244,8 @@ typedef struct MPContext {
// mp_dispatch_lock must be called to change it.
int64_t outstanding_async;
struct mp_thread_pool *thread_pool; // for coarse I/O, often during loading
struct mp_log *statusline;
struct osd_state *osd;
char *term_osd_text;
@ -294,6 +297,8 @@ typedef struct MPContext {
struct track **tracks;
int num_tracks;
int64_t death_hack; // don't fucking ask, just don't
char *track_layout_hash;
// Selected tracks. NULL if no track selected.
@ -434,10 +439,12 @@ typedef struct MPContext {
struct mp_ipc_ctx *ipc_ctx;
pthread_mutex_t lock;
pthread_mutex_t abort_lock;
// --- The following fields are protected by lock
struct mp_cancel *demuxer_cancel; // cancel handle for MPContext.demuxer
// --- The following fields are protected by abort_lock
struct mp_abort_entry **abort_list;
int num_abort_list;
bool abort_all; // during final termination
// --- Owned by MPContext
pthread_t open_thread;
@ -455,6 +462,20 @@ typedef struct MPContext {
int open_res_error;
} MPContext;
// Contains information about an asynchronous work item, how it can be aborted,
// and when. All fields are protected by MPContext.abort_lock.
struct mp_abort_entry {
// General conditions.
bool coupled_to_playback; // trigger when playback is terminated
// Actual trigger to abort the work.
struct mp_cancel *cancel;
// For client API.
struct mpv_handle *client; // non-NULL if done by a client API user
int client_work_type; // client API type, e.g. MPV_EVENT_COMMAND_REPLY
uint64_t client_work_id; // client API user reply_userdata value
// (only valid if client_work_type set)
};
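A sketch of how a blocking work item is meant to use this struct (mp_abort_add()/mp_abort_remove() are declared below; the blocking helper is hypothetical):

    struct mp_abort_entry *abort = talloc_zero(NULL, struct mp_abort_entry);
    abort->coupled_to_playback = true;      // also trigger when playback is stopped
    mp_abort_add(mpctx, abort);             // registers it and creates abort->cancel

    do_blocking_network_io(abort->cancel);  // hypothetical; checks mp_cancel_test()

    mp_abort_remove(mpctx, abort);          // unregisters and frees abort->cancel
    talloc_free(abort);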
// audio.c
void reset_audio_state(struct MPContext *mpctx);
void reinit_audio_chain(struct MPContext *mpctx);
@ -484,9 +505,15 @@ struct playlist_entry *mp_check_playlist_resume(struct MPContext *mpctx,
// loadfile.c
void mp_abort_playback_async(struct MPContext *mpctx);
void mp_abort_add(struct MPContext *mpctx, struct mp_abort_entry *abort);
void mp_abort_remove(struct MPContext *mpctx, struct mp_abort_entry *abort);
void mp_abort_recheck_locked(struct MPContext *mpctx,
struct mp_abort_entry *abort);
void mp_abort_trigger_locked(struct MPContext *mpctx,
struct mp_abort_entry *abort);
void uninit_player(struct MPContext *mpctx, unsigned int mask);
int mp_add_external_file(struct MPContext *mpctx, char *filename,
enum stream_type filter);
enum stream_type filter, struct mp_cancel *cancel);
#define FLAG_MARK_SELECTION 1
void mp_switch_track(struct MPContext *mpctx, enum stream_type type,
struct track *track, int flags);
@ -505,7 +532,7 @@ void update_demuxer_properties(struct MPContext *mpctx);
void print_track_list(struct MPContext *mpctx, const char *msg);
void reselect_demux_stream(struct MPContext *mpctx, struct track *track);
void prepare_playlist(struct MPContext *mpctx, struct playlist *pl);
void autoload_external_files(struct MPContext *mpctx);
void autoload_external_files(struct MPContext *mpctx, struct mp_cancel *cancel);
struct track *select_default_track(struct MPContext *mpctx, int order,
enum stream_type type);
void prefetch_next(struct MPContext *mpctx);
@ -528,8 +555,6 @@ double get_play_end_pts(struct MPContext *mpctx);
double get_play_start_pts(struct MPContext *mpctx);
double get_ab_loop_start_time(struct MPContext *mpctx);
void merge_playlist_files(struct playlist *pl);
float mp_get_cache_percent(struct MPContext *mpctx);
bool mp_get_cache_idle(struct MPContext *mpctx);
void update_vo_playback_state(struct MPContext *mpctx);
void update_window_title(struct MPContext *mpctx, bool force);
void error_on_track(struct MPContext *mpctx, struct track *track);
@ -551,6 +576,8 @@ void mp_wait_events(struct MPContext *mpctx);
void mp_set_timeout(struct MPContext *mpctx, double sleeptime);
void mp_wakeup_core(struct MPContext *mpctx);
void mp_wakeup_core_cb(void *ctx);
void mp_core_lock(struct MPContext *mpctx);
void mp_core_unlock(struct MPContext *mpctx);
void mp_process_input(struct MPContext *mpctx);
double get_relative_time(struct MPContext *mpctx);
void reset_playback_state(struct MPContext *mpctx);

View File

@ -104,13 +104,12 @@ static struct bstr guess_lang_from_filename(struct bstr name)
return (struct bstr){name.start + i + 1, n};
}
static void append_dir_subtitles(struct mpv_global *global,
static void append_dir_subtitles(struct mpv_global *global, struct MPOpts *opts,
struct subfn **slist, int *nsub,
struct bstr path, const char *fname,
int limit_fuzziness, int limit_type)
{
void *tmpmem = talloc_new(NULL);
struct MPOpts *opts = global->opts;
struct mp_log *log = mp_log_new(tmpmem, global->log, "find_files");
struct bstr f_fbname = bstr0(mp_basename(fname));
@ -253,16 +252,16 @@ static void filter_subidx(struct subfn **slist, int *nsub)
}
}
static void load_paths(struct mpv_global *global, struct subfn **slist,
int *nsubs, const char *fname, char **paths,
char *cfg_path, int type)
static void load_paths(struct mpv_global *global, struct MPOpts *opts,
struct subfn **slist, int *nsubs, const char *fname,
char **paths, char *cfg_path, int type)
{
for (int i = 0; paths && paths[i]; i++) {
char *expanded_path = mp_get_user_path(NULL, global, paths[i]);
char *path = mp_path_join_bstr(
*slist, mp_dirname(fname),
bstr0(expanded_path ? expanded_path : paths[i]));
append_dir_subtitles(global, slist, nsubs, bstr0(path),
append_dir_subtitles(global, opts, slist, nsubs, bstr0(path),
fname, 0, type);
talloc_free(expanded_path);
}
@ -270,32 +269,32 @@ static void load_paths(struct mpv_global *global, struct subfn **slist,
// Load subtitles in ~/.mpv/sub (or similar) limiting sub fuzziness
char *mp_subdir = mp_find_config_file(NULL, global, cfg_path);
if (mp_subdir) {
append_dir_subtitles(global, slist, nsubs, bstr0(mp_subdir), fname, 1,
type);
append_dir_subtitles(global, opts, slist, nsubs, bstr0(mp_subdir),
fname, 1, type);
}
talloc_free(mp_subdir);
}
// Return a list of subtitles and audio files found, sorted by priority.
// Last element is terminated with a fname==NULL entry.
struct subfn *find_external_files(struct mpv_global *global, const char *fname)
struct subfn *find_external_files(struct mpv_global *global, const char *fname,
struct MPOpts *opts)
{
struct MPOpts *opts = global->opts;
struct subfn *slist = talloc_array_ptrtype(NULL, slist, 1);
int n = 0;
// Load subtitles from current media directory
append_dir_subtitles(global, &slist, &n, mp_dirname(fname), fname, 0, -1);
append_dir_subtitles(global, opts, &slist, &n, mp_dirname(fname), fname, 0, -1);
// Load subtitles in dirs specified by sub-paths option
if (opts->sub_auto >= 0) {
load_paths(global, &slist, &n, fname, opts->sub_paths, "sub",
load_paths(global, opts, &slist, &n, fname, opts->sub_paths, "sub",
STREAM_SUB);
}
if (opts->audiofile_auto >= 0) {
load_paths(global, &slist, &n, fname, opts->audiofile_paths, "audio",
STREAM_AUDIO);
load_paths(global, opts, &slist, &n, fname, opts->audiofile_paths,
"audio", STREAM_AUDIO);
}
// Sort by name for filter_subidx()

View File

@ -28,7 +28,9 @@ struct subfn {
};
struct mpv_global;
struct subfn *find_external_files(struct mpv_global *global, const char *fname);
struct MPOpts;
struct subfn *find_external_files(struct mpv_global *global, const char *fname,
struct MPOpts *opts);
bool mp_might_be_subtitle_file(const char *filename);

View File

@ -26,6 +26,8 @@
#include "config.h"
#include "mpv_talloc.h"
#include "misc/thread_pool.h"
#include "misc/thread_tools.h"
#include "osdep/io.h"
#include "osdep/terminal.h"
#include "osdep/threads.h"
@ -59,15 +61,132 @@
#include "command.h"
#include "libmpv/client.h"
// Called from the demuxer thread when a new packet is available, or when
// other state changes occur.
static void wakeup_demux(void *pctx)
{
struct MPContext *mpctx = pctx;
mp_wakeup_core(mpctx);
}
// Called by foreign threads when playback should be stopped and such.
void mp_abort_playback_async(struct MPContext *mpctx)
{
mp_cancel_trigger(mpctx->playback_abort);
pthread_mutex_lock(&mpctx->lock);
if (mpctx->demuxer_cancel)
mp_cancel_trigger(mpctx->demuxer_cancel);
pthread_mutex_unlock(&mpctx->lock);
pthread_mutex_lock(&mpctx->abort_lock);
for (int n = 0; n < mpctx->num_abort_list; n++) {
struct mp_abort_entry *abort = mpctx->abort_list[n];
if (abort->coupled_to_playback)
mp_abort_trigger_locked(mpctx, abort);
}
pthread_mutex_unlock(&mpctx->abort_lock);
}
// Add it to the global list, and allocate required data structures.
void mp_abort_add(struct MPContext *mpctx, struct mp_abort_entry *abort)
{
pthread_mutex_lock(&mpctx->abort_lock);
assert(!abort->cancel);
abort->cancel = mp_cancel_new(NULL);
MP_TARRAY_APPEND(NULL, mpctx->abort_list, mpctx->num_abort_list, abort);
mp_abort_recheck_locked(mpctx, abort);
pthread_mutex_unlock(&mpctx->abort_lock);
}
// Remove it from the global list, and free/clear required data structures.
// Does not deallocate the abort value itself.
void mp_abort_remove(struct MPContext *mpctx, struct mp_abort_entry *abort)
{
pthread_mutex_lock(&mpctx->abort_lock);
for (int n = 0; n < mpctx->num_abort_list; n++) {
if (mpctx->abort_list[n] == abort) {
MP_TARRAY_REMOVE_AT(mpctx->abort_list, mpctx->num_abort_list, n);
TA_FREEP(&abort->cancel);
abort = NULL; // it's not free'd, just clear for the assert below
break;
}
}
assert(!abort); // should have been in the list
pthread_mutex_unlock(&mpctx->abort_lock);
}
// Verify whether the abort needs to be signaled after changing certain fields
// in abort.
void mp_abort_recheck_locked(struct MPContext *mpctx,
struct mp_abort_entry *abort)
{
if ((abort->coupled_to_playback && mp_cancel_test(mpctx->playback_abort)) ||
mpctx->abort_all)
{
mp_abort_trigger_locked(mpctx, abort);
}
}
void mp_abort_trigger_locked(struct MPContext *mpctx,
struct mp_abort_entry *abort)
{
mp_cancel_trigger(abort->cancel);
}
static void kill_demuxers_reentrant(struct MPContext *mpctx,
struct demuxer **demuxers, int num_demuxers)
{
struct demux_free_async_state **items = NULL;
int num_items = 0;
for (int n = 0; n < num_demuxers; n++) {
struct demuxer *d = demuxers[n];
if (!demux_cancel_test(d)) {
// Make sure the wakeup callback is set if it wasn't yet.
demux_set_wakeup_cb(d, wakeup_demux, mpctx);
struct demux_free_async_state *item = demux_free_async(d);
if (item) {
MP_TARRAY_APPEND(NULL, items, num_items, item);
d = NULL;
}
}
demux_cancel_and_free(d);
}
if (!num_items)
return;
MP_DBG(mpctx, "Terminating demuxers...\n");
double end = mp_time_sec() + mpctx->opts->demux_termination_timeout;
bool force = false;
while (num_items) {
double wait = end - mp_time_sec();
for (int n = 0; n < num_items; n++) {
struct demux_free_async_state *item = items[n];
if (demux_free_async_finish(item)) {
items[n] = items[num_items - 1];
num_items -= 1;
n--;
goto repeat;
} else if (wait < 0) {
demux_free_async_force(item);
if (!force)
MP_VERBOSE(mpctx, "Forcefully terminating demuxers...\n");
force = true;
}
}
if (wait >= 0)
mp_set_timeout(mpctx, wait);
mp_idle(mpctx);
repeat:;
}
talloc_free(items);
MP_DBG(mpctx, "Done terminating demuxers.\n");
}
static void uninit_demuxer(struct MPContext *mpctx)
@ -76,28 +195,44 @@ static void uninit_demuxer(struct MPContext *mpctx)
for (int t = 0; t < STREAM_TYPE_COUNT; t++)
mpctx->current_track[r][t] = NULL;
}
mpctx->seek_slave = NULL;
talloc_free(mpctx->chapters);
mpctx->chapters = NULL;
mpctx->num_chapters = 0;
// close demuxers for external tracks
for (int n = mpctx->num_tracks - 1; n >= 0; n--) {
mpctx->tracks[n]->selected = false;
mp_remove_track(mpctx, mpctx->tracks[n]);
}
struct demuxer **demuxers = NULL;
int num_demuxers = 0;
if (mpctx->demuxer)
MP_TARRAY_APPEND(NULL, demuxers, num_demuxers, mpctx->demuxer);
mpctx->demuxer = NULL;
for (int i = 0; i < mpctx->num_tracks; i++) {
sub_destroy(mpctx->tracks[i]->d_sub);
talloc_free(mpctx->tracks[i]);
struct track *track = mpctx->tracks[i];
assert(!track->dec && !track->d_sub);
assert(!track->vo_c && !track->ao_c);
assert(!track->sink);
assert(!track->remux_sink);
// Demuxers can be added in any order (if they appear mid-stream), and
// we can't know which track uses which, so here's some O(n^2) trash.
for (int n = 0; n < num_demuxers; n++) {
if (demuxers[n] == track->demuxer) {
track->demuxer = NULL;
break;
}
}
if (track->demuxer)
MP_TARRAY_APPEND(NULL, demuxers, num_demuxers, track->demuxer);
talloc_free(track);
}
mpctx->num_tracks = 0;
free_demuxer_and_stream(mpctx->demuxer);
mpctx->demuxer = NULL;
pthread_mutex_lock(&mpctx->lock);
talloc_free(mpctx->demuxer_cancel);
mpctx->demuxer_cancel = NULL;
pthread_mutex_unlock(&mpctx->lock);
kill_demuxers_reentrant(mpctx, demuxers, num_demuxers);
talloc_free(demuxers);
}
#define APPEND(s, ...) mp_snprintf_cat(s, sizeof(s), __VA_ARGS__)
@ -227,20 +362,16 @@ void reselect_demux_stream(struct MPContext *mpctx, struct track *track)
if (!track->stream)
return;
double pts = get_current_time(mpctx);
if (pts != MP_NOPTS_VALUE)
if (pts != MP_NOPTS_VALUE) {
pts += get_track_seek_offset(mpctx, track);
if (track->type == STREAM_SUB)
pts -= 10.0;
}
demuxer_select_track(track->demuxer, track->stream, pts, track->selected);
if (track == mpctx->seek_slave)
mpctx->seek_slave = NULL;
}
// Called from the demuxer thread if a new packet is available.
static void wakeup_demux(void *pctx)
{
struct MPContext *mpctx = pctx;
mp_wakeup_core(mpctx);
}
static void enable_demux_thread(struct MPContext *mpctx, struct demuxer *demux)
{
if (mpctx->opts->demuxer_thread && !demux->fully_read) {
@ -554,8 +685,6 @@ bool mp_remove_track(struct MPContext *mpctx, struct track *track)
struct demuxer *d = track->demuxer;
sub_destroy(track->d_sub);
if (mpctx->seek_slave == track)
mpctx->seek_slave = NULL;
@ -572,7 +701,7 @@ bool mp_remove_track(struct MPContext *mpctx, struct track *track)
in_use |= mpctx->tracks[n]->demuxer == d;
if (!in_use)
free_demuxer_and_stream(d);
demux_cancel_and_free(d);
mp_notify(mpctx, MPV_EVENT_TRACKS_CHANGED, NULL);
@ -581,11 +710,14 @@ bool mp_remove_track(struct MPContext *mpctx, struct track *track)
// Add the given file as additional track. The filter argument controls how or
// if tracks are auto-selected at any point.
// To be run on a worker thread, locked (temporarily unlocks core).
// cancel will generally be used to abort the loading process, but on success
// the demuxer is changed to be slaved to mpctx->playback_abort instead.
int mp_add_external_file(struct MPContext *mpctx, char *filename,
enum stream_type filter)
enum stream_type filter, struct mp_cancel *cancel)
{
struct MPOpts *opts = mpctx->opts;
if (!filename)
if (!filename || mp_cancel_test(cancel))
return -1;
char *disp_filename = filename;
@ -603,11 +735,22 @@ int mp_add_external_file(struct MPContext *mpctx, char *filename,
break;
}
mp_core_unlock(mpctx);
struct demuxer *demuxer =
demux_open_url(filename, &params, mpctx->playback_abort, mpctx->global);
demux_open_url(filename, &params, cancel, mpctx->global);
if (demuxer)
enable_demux_thread(mpctx, demuxer);
mp_core_lock(mpctx);
// The command could have overlapped with playback exiting. (We don't care
// if playback has started again meanwhile - weird, but not a problem.)
if (mpctx->stop_play)
goto err_out;
if (!demuxer)
goto err_out;
enable_demux_thread(mpctx, demuxer);
if (opts->rebase_start_time)
demux_set_ts_offset(demuxer, -demuxer->start_time);
@ -622,12 +765,11 @@ int mp_add_external_file(struct MPContext *mpctx, char *filename,
}
if (!has_any) {
free_demuxer_and_stream(demuxer);
char *tname = mp_tprintf(20, "%s ", stream_type_name(filter));
if (filter == STREAM_TYPE_COUNT)
tname = "";
MP_ERR(mpctx, "No %sstreams in file %s.\n", tname, disp_filename);
return -1;
goto err_out;
}
int first_num = -1;
@ -643,22 +785,33 @@ int mp_add_external_file(struct MPContext *mpctx, char *filename,
first_num = mpctx->num_tracks - 1;
}
mp_cancel_set_parent(demuxer->cancel, mpctx->playback_abort);
return first_num;
err_out:
if (!mp_cancel_test(mpctx->playback_abort))
demux_cancel_and_free(demuxer);
if (!mp_cancel_test(cancel))
MP_ERR(mpctx, "Can not open external file %s.\n", disp_filename);
return -1;
}
// to be run on a worker thread, locked (temporarily unlocks core)
static void open_external_files(struct MPContext *mpctx, char **files,
enum stream_type filter)
{
// Need a copy, because the option value could be mutated during iteration.
void *tmp = talloc_new(NULL);
files = mp_dup_str_array(tmp, files);
for (int n = 0; files && files[n]; n++)
mp_add_external_file(mpctx, files[n], filter);
mp_add_external_file(mpctx, files[n], filter, mpctx->playback_abort);
talloc_free(tmp);
}
void autoload_external_files(struct MPContext *mpctx)
// See mp_add_external_file() for the meaning of the cancel parameter.
void autoload_external_files(struct MPContext *mpctx, struct mp_cancel *cancel)
{
if (mpctx->opts->sub_auto < 0 && mpctx->opts->audiofile_auto < 0)
return;
@ -673,7 +826,8 @@ void autoload_external_files(struct MPContext *mpctx)
&stream_filename) > 0)
base_filename = talloc_steal(tmp, stream_filename);
}
struct subfn *list = find_external_files(mpctx->global, base_filename);
struct subfn *list = find_external_files(mpctx->global, base_filename,
mpctx->opts);
talloc_steal(tmp, list);
int sc[STREAM_TYPE_COUNT] = {0};
@ -694,7 +848,7 @@ void autoload_external_files(struct MPContext *mpctx)
goto skip;
if (list[i].type == STREAM_AUDIO && !sc[STREAM_VIDEO])
goto skip;
int first = mp_add_external_file(mpctx, filename, list[i].type);
int first = mp_add_external_file(mpctx, filename, list[i].type, cancel);
if (first < 0)
goto skip;
@ -758,24 +912,35 @@ static void process_hooks(struct MPContext *mpctx, char *name)
{
mp_hook_start(mpctx, name);
while (!mp_hook_test_completion(mpctx, name))
while (!mp_hook_test_completion(mpctx, name)) {
mp_idle(mpctx);
// We have no idea what blocks a hook, so just do a full abort.
if (mpctx->stop_play)
mp_abort_playback_async(mpctx);
}
}
// to be run on a worker thread, locked (temporarily unlocks core)
static void load_chapters(struct MPContext *mpctx)
{
struct demuxer *src = mpctx->demuxer;
bool free_src = false;
char *chapter_file = mpctx->opts->chapter_file;
if (chapter_file && chapter_file[0]) {
chapter_file = talloc_strdup(NULL, chapter_file);
mp_core_unlock(mpctx);
struct demuxer *demux = demux_open_url(chapter_file, NULL,
mpctx->playback_abort, mpctx->global);
mpctx->playback_abort,
mpctx->global);
mp_core_lock(mpctx);
if (demux) {
src = demux;
free_src = true;
}
talloc_free(mpctx->chapters);
mpctx->chapters = NULL;
talloc_free(chapter_file);
}
if (src && !mpctx->chapters) {
talloc_free(mpctx->chapters);
@ -787,7 +952,7 @@ static void load_chapters(struct MPContext *mpctx)
}
}
if (free_src)
free_demuxer_and_stream(src);
demux_cancel_and_free(src);
}
static void load_per_file_options(m_config_t *conf,
@ -809,7 +974,6 @@ static void *open_demux_thread(void *ctx)
struct demuxer_params p = {
.force_format = mpctx->open_format,
.stream_flags = mpctx->open_url_flags,
.initial_readahead = true,
};
mpctx->open_res_demuxer =
demux_open_url(mpctx->open_url, &p, mpctx->open_cancel, mpctx->global);
@ -840,14 +1004,14 @@ static void cancel_open(struct MPContext *mpctx)
pthread_join(mpctx->open_thread, NULL);
mpctx->open_active = false;
if (mpctx->open_res_demuxer)
demux_cancel_and_free(mpctx->open_res_demuxer);
mpctx->open_res_demuxer = NULL;
TA_FREEP(&mpctx->open_cancel);
TA_FREEP(&mpctx->open_url);
TA_FREEP(&mpctx->open_format);
if (mpctx->open_res_demuxer)
free_demuxer_and_stream(mpctx->open_res_demuxer);
mpctx->open_res_demuxer = NULL;
atomic_store(&mpctx->open_done, false);
}
@ -904,9 +1068,7 @@ static void open_demux_reentrant(struct MPContext *mpctx)
start_open(mpctx, url, mpctx->playing->stream_flags);
// User abort should cancel the opener now.
pthread_mutex_lock(&mpctx->lock);
mpctx->demuxer_cancel = mpctx->open_cancel;
pthread_mutex_unlock(&mpctx->lock);
mp_cancel_set_parent(mpctx->open_cancel, mpctx->playback_abort);
while (!atomic_load(&mpctx->open_done)) {
mp_idle(mpctx);
@ -916,15 +1078,11 @@ static void open_demux_reentrant(struct MPContext *mpctx)
}
if (mpctx->open_res_demuxer) {
assert(mpctx->demuxer_cancel == mpctx->open_cancel);
mpctx->demuxer = mpctx->open_res_demuxer;
mpctx->open_res_demuxer = NULL;
mpctx->open_cancel = NULL;
mp_cancel_set_parent(mpctx->demuxer->cancel, mpctx->playback_abort);
} else {
mpctx->error_playing = mpctx->open_res_error;
pthread_mutex_lock(&mpctx->lock);
mpctx->demuxer_cancel = NULL;
pthread_mutex_unlock(&mpctx->lock);
}
cancel_open(mpctx); // cleanup
@ -1134,6 +1292,48 @@ void update_lavfi_complex(struct MPContext *mpctx)
}
}
// Worker thread for loading external files and such. This is needed to avoid
// freezing the core when waiting for network while loading these.
static void load_external_opts_thread(void *p)
{
void **a = p;
struct MPContext *mpctx = a[0];
struct mp_waiter *waiter = a[1];
mp_core_lock(mpctx);
load_chapters(mpctx);
open_external_files(mpctx, mpctx->opts->audio_files, STREAM_AUDIO);
open_external_files(mpctx, mpctx->opts->sub_name, STREAM_SUB);
open_external_files(mpctx, mpctx->opts->external_files, STREAM_TYPE_COUNT);
autoload_external_files(mpctx, mpctx->playback_abort);
mp_waiter_wakeup(waiter, 0);
mp_wakeup_core(mpctx);
mp_core_unlock(mpctx);
}
static void load_external_opts(struct MPContext *mpctx)
{
struct mp_waiter wait = MP_WAITER_INITIALIZER;
void *a[] = {mpctx, &wait};
if (!mp_thread_pool_queue(mpctx->thread_pool, load_external_opts_thread, a)) {
mpctx->stop_play = PT_ERROR;
return;
}
while (!mp_waiter_poll(&wait)) {
mp_idle(mpctx);
if (mpctx->stop_play)
mp_abort_playback_async(mpctx);
}
mp_waiter_wait(&wait);
}
// Start playing the current playlist entry.
// Handle initialization and deinitialization.
static void play_current_file(struct MPContext *mpctx)
@ -1141,6 +1341,8 @@ static void play_current_file(struct MPContext *mpctx)
struct MPOpts *opts = mpctx->opts;
double playback_start = -1e100;
assert(mpctx->stop_play);
mp_notify(mpctx, MPV_EVENT_START_FILE, NULL);
mp_cancel_reset(mpctx->playback_abort);
@ -1161,15 +1363,14 @@ static void play_current_file(struct MPContext *mpctx)
mpctx->speed_factor_a = mpctx->speed_factor_v = 1.0;
mpctx->display_sync_error = 0.0;
mpctx->display_sync_active = false;
// let get_current_time() show 0 as start time (before playback_pts is set)
mpctx->last_seek_pts = 0.0;
mpctx->seek = (struct seek_params){ 0 };
mpctx->filter_root = mp_filter_create_root(mpctx->global);
mp_filter_root_set_wakeup_cb(mpctx->filter_root, mp_wakeup_core_cb, mpctx);
reset_playback_state(mpctx);
// let get_current_time() show 0 as start time (before playback_pts is set)
mpctx->last_seek_pts = 0.0;
mpctx->playing = mpctx->playlist->current;
if (!mpctx->playing || !mpctx->playing->filename)
goto terminate_playback;
@ -1251,13 +1452,11 @@ static void play_current_file(struct MPContext *mpctx)
demux_set_ts_offset(mpctx->demuxer, -mpctx->demuxer->start_time);
enable_demux_thread(mpctx, mpctx->demuxer);
load_chapters(mpctx);
add_demuxer_tracks(mpctx, mpctx->demuxer);
open_external_files(mpctx, opts->audio_files, STREAM_AUDIO);
open_external_files(mpctx, opts->sub_name, STREAM_SUB);
open_external_files(mpctx, opts->external_files, STREAM_TYPE_COUNT);
autoload_external_files(mpctx);
load_external_opts(mpctx);
if (mpctx->stop_play)
goto terminate_playback;
check_previous_track_selection(mpctx);
@ -1369,21 +1568,19 @@ static void play_current_file(struct MPContext *mpctx)
terminate_playback:
update_core_idle_state(mpctx);
process_hooks(mpctx, "on_unload");
if (mpctx->stop_play == KEEP_PLAYING)
mpctx->stop_play = AT_END_OF_FILE;
if (!mpctx->stop_play)
mpctx->stop_play = PT_ERROR;
if (mpctx->stop_play != AT_END_OF_FILE)
clear_audio_output_buffers(mpctx);
update_core_idle_state(mpctx);
process_hooks(mpctx, "on_unload");
if (mpctx->step_frames)
opts->pause = 1;
mp_abort_playback_async(mpctx);
close_recorder(mpctx);
// time to uninit all, except global stuff:
@ -1391,12 +1588,16 @@ terminate_playback:
uninit_audio_chain(mpctx);
uninit_video_chain(mpctx);
uninit_sub_all(mpctx);
uninit_demuxer(mpctx);
if (!opts->gapless_audio && !mpctx->encode_lavc_ctx)
uninit_audio_out(mpctx);
mpctx->playback_initialized = false;
uninit_demuxer(mpctx);
// Possibly stop ongoing async commands.
mp_abort_playback_async(mpctx);
m_config_restore_backups(mpctx->mconfig);
TA_FREEP(&mpctx->filter_root);
@ -1458,6 +1659,8 @@ terminate_playback:
} else {
mpctx->files_played++;
}
assert(mpctx->stop_play);
}
// Determine the next file to play. Note that if this function returns non-NULL,
@ -1523,6 +1726,7 @@ void mp_play_files(struct MPContext *mpctx)
prepare_playlist(mpctx, mpctx->playlist);
for (;;) {
assert(mpctx->stop_play);
idle_loop(mpctx);
if (mpctx->stop_play == PT_QUIT)
break;
@ -1533,14 +1737,14 @@ void mp_play_files(struct MPContext *mpctx)
struct playlist_entry *new_entry = mpctx->playlist->current;
if (mpctx->stop_play == PT_NEXT_ENTRY || mpctx->stop_play == PT_ERROR ||
mpctx->stop_play == AT_END_OF_FILE || !mpctx->stop_play)
mpctx->stop_play == AT_END_OF_FILE || mpctx->stop_play == PT_STOP)
{
new_entry = mp_next_file(mpctx, +1, false, true);
}
mpctx->playlist->current = new_entry;
mpctx->playlist->current_was_replaced = false;
mpctx->stop_play = 0;
mpctx->stop_play = PT_STOP;
if (!mpctx->playlist->current && mpctx->opts->player_idle_mode < 2)
break;
@ -1567,6 +1771,7 @@ void mp_set_playlist_entry(struct MPContext *mpctx, struct playlist_entry *e)
assert(!e || playlist_entry_to_index(mpctx->playlist, e) >= 0);
mpctx->playlist->current = e;
mpctx->playlist->current_was_replaced = false;
// Make it pick up the new entry.
if (!mpctx->stop_play)
mpctx->stop_play = PT_CURRENT_ENTRY;
mp_wakeup_core(mpctx);

View File

@ -562,6 +562,12 @@ static int script_wait_event(lua_State *L)
lua_setfield(L, -2, "hook_id");
break;
}
case MPV_EVENT_COMMAND_REPLY: {
mpv_event_command *cmd = event->data;
pushnode(L, &cmd->result);
lua_setfield(L, -2, "result");
break;
}
default: ;
}
@ -967,6 +973,26 @@ static int script_command_native(lua_State *L)
return 2;
}
static int script_raw_command_native_async(lua_State *L)
{
struct script_ctx *ctx = get_ctx(L);
uint64_t id = luaL_checknumber(L, 1);
struct mpv_node node;
void *tmp = mp_lua_PITA(L);
makenode(tmp, &node, L, 2);
int res = mpv_command_node_async(ctx->client, id, &node);
talloc_free_children(tmp);
return check_error(L, res);
}
static int script_raw_abort_async_command(lua_State *L)
{
struct script_ctx *ctx = get_ctx(L);
uint64_t id = luaL_checknumber(L, 1);
mpv_abort_async_command(ctx->client, id);
return 0;
}
static int script_set_osd_ass(lua_State *L)
{
struct script_ctx *ctx = get_ctx(L);
@ -1169,112 +1195,6 @@ static int script_join_path(lua_State *L)
return 1;
}
struct subprocess_cb_ctx {
struct mp_log *log;
void* talloc_ctx;
int64_t max_size;
bstr output;
};
static void subprocess_stdout(void *p, char *data, size_t size)
{
struct subprocess_cb_ctx *ctx = p;
if (ctx->output.len < ctx->max_size)
bstr_xappend(ctx->talloc_ctx, &ctx->output, (bstr){data, size});
}
static void subprocess_stderr(void *p, char *data, size_t size)
{
struct subprocess_cb_ctx *ctx = p;
MP_INFO(ctx, "%.*s", (int)size, data);
}
static int script_subprocess(lua_State *L)
{
struct script_ctx *ctx = get_ctx(L);
luaL_checktype(L, 1, LUA_TTABLE);
void *tmp = mp_lua_PITA(L);
lua_getfield(L, 1, "args"); // args
int num_args = mp_lua_len(L, -1);
char *args[256];
if (num_args > MP_ARRAY_SIZE(args) - 1) // last needs to be NULL
luaL_error(L, "too many arguments");
if (num_args < 1)
luaL_error(L, "program name missing");
for (int n = 0; n < num_args; n++) {
lua_pushinteger(L, n + 1); // args n
lua_gettable(L, -2); // args arg
args[n] = talloc_strdup(tmp, lua_tostring(L, -1));
if (!args[n])
luaL_error(L, "program arguments must be strings");
lua_pop(L, 1); // args
}
args[num_args] = NULL;
lua_pop(L, 1); // -
lua_getfield(L, 1, "cancellable"); // c
struct mp_cancel *cancel = NULL;
if (lua_isnil(L, -1) ? true : lua_toboolean(L, -1))
cancel = ctx->mpctx->playback_abort;
lua_pop(L, 1); // -
lua_getfield(L, 1, "max_size"); // m
int64_t max_size = lua_isnil(L, -1) ? 64 * 1024 * 1024 : lua_tointeger(L, -1);
struct subprocess_cb_ctx cb_ctx = {
.log = ctx->log,
.talloc_ctx = tmp,
.max_size = max_size,
};
char *error = NULL;
int status = mp_subprocess(args, cancel, &cb_ctx, subprocess_stdout,
subprocess_stderr, &error);
lua_newtable(L); // res
if (error) {
lua_pushstring(L, error); // res e
lua_setfield(L, -2, "error"); // res
}
lua_pushinteger(L, status); // res s
lua_setfield(L, -2, "status"); // res
lua_pushlstring(L, cb_ctx.output.start, cb_ctx.output.len); // res d
lua_setfield(L, -2, "stdout"); // res
lua_pushboolean(L, status == MP_SUBPROCESS_EKILLED_BY_US); // res b
lua_setfield(L, -2, "killed_by_us"); // res
return 1;
}
static int script_subprocess_detached(lua_State *L)
{
struct script_ctx *ctx = get_ctx(L);
luaL_checktype(L, 1, LUA_TTABLE);
void *tmp = mp_lua_PITA(L);
lua_getfield(L, 1, "args"); // args
int num_args = mp_lua_len(L, -1);
char *args[256];
if (num_args > MP_ARRAY_SIZE(args) - 1) // last needs to be NULL
luaL_error(L, "too many arguments");
if (num_args < 1)
luaL_error(L, "program name missing");
for (int n = 0; n < num_args; n++) {
lua_pushinteger(L, n + 1); // args n
lua_gettable(L, -2); // args arg
args[n] = talloc_strdup(tmp, lua_tostring(L, -1));
if (!args[n])
luaL_error(L, "program arguments must be strings");
lua_pop(L, 1); // args
}
args[num_args] = NULL;
lua_pop(L, 1); // -
mp_subprocess_detached(ctx->log, args);
lua_pushnil(L);
return 1;
}
static int script_getpid(lua_State *L)
{
lua_pushnumber(L, mp_getpid());
@ -1339,6 +1259,8 @@ static const struct fn_entry main_fns[] = {
FN_ENTRY(command),
FN_ENTRY(commandv),
FN_ENTRY(command_native),
FN_ENTRY(raw_command_native_async),
FN_ENTRY(raw_abort_async_command),
FN_ENTRY(get_property_bool),
FN_ENTRY(get_property_number),
FN_ENTRY(get_property_native),
@ -1367,8 +1289,6 @@ static const struct fn_entry utils_fns[] = {
FN_ENTRY(file_info),
FN_ENTRY(split_path),
FN_ENTRY(join_path),
FN_ENTRY(subprocess),
FN_ENTRY(subprocess_detached),
FN_ENTRY(getpid),
FN_ENTRY(parse_json),
FN_ENTRY(format_json),

View File

@ -528,6 +528,41 @@ function mp.add_hook(name, pri, cb)
mp.raw_hook_add(id, name, pri - 50)
end
local async_call_table = {}
local async_next_id = 1
function mp.command_native_async(node, cb)
local id = async_next_id
async_next_id = async_next_id + 1
local res, err = mp.raw_command_native_async(id, node)
if not res then
cb(false, nil, err)
return res, err
end
local t = {cb = cb, id = id}
async_call_table[id] = t
return t
end
mp.register_event("command-reply", function(ev)
local id = tonumber(ev.id)
local t = async_call_table[id]
local cb = t.cb
t.id = nil
async_call_table[id] = nil
if ev.error then
cb(false, nil, ev.error)
else
cb(true, ev.result, nil)
end
end)
function mp.abort_async_command(t)
if t.id ~= nil then
mp.raw_abort_async_command(t.id)
end
end
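A short usage sketch of the new scripting API (the URL and key binding are made up):

    local pending = mp.command_native_async(
        {"loadfile", "http://example.com/a.mkv", "append-play"},
        function(success, result, error)
            if success then
                print("loadfile finished")
            else
                print("loadfile failed or was aborted: " .. tostring(error))
            end
        end)

    mp.add_key_binding("x", "abort-load", function()
        mp.abort_async_command(pending)
    end)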
local mp_utils = package.loaded["mp.utils"]
function mp_utils.format_table(t, set)
@ -596,4 +631,31 @@ function mp_utils.format_bytes_humanized(b)
return string.format("%0.2f %s", b, d[i] and d[i] or "*1024^" .. (i-1))
end
function mp_utils.subprocess(t)
local cmd = {}
cmd.name = "subprocess"
cmd.capture_stdout = true
for k, v in pairs(t) do
if k == "cancellable" then
k = "playback_only"
elseif k == "max_size" then
k = "capture_size"
end
cmd[k] = v
end
local res, err = mp.command_native(cmd)
if res == nil then
-- an error usually happens only if parsing failed (or no args passed)
res = {error_string = err, status = -1}
end
if res.error_string ~= "" then
res.error = res.error_string
end
return res
end
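A sketch of what the shim does for existing scripts: the old-style call below ends up as the native "subprocess" command with capture_stdout=true, playback_only (formerly "cancellable"), and capture_size (formerly "max_size"); the program run is just an example:

    local utils = require 'mp.utils'
    local res = utils.subprocess({
        args = {"date", "+%s"},
        cancellable = false,
        max_size = 1024,
    })
    if res.status == 0 then
        print("stdout: " .. res.stdout)
    elseif res.error then
        print("subprocess error: " .. res.error)
    end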
function mp_utils.subprocess_detached(t)
mp.commandv("run", unpack(t.args))
end
return {}

View File

@ -28,6 +28,7 @@
#include "mpv_talloc.h"
#include "misc/dispatch.h"
#include "misc/thread_pool.h"
#include "osdep/io.h"
#include "osdep/terminal.h"
#include "osdep/timer.h"
@ -53,7 +54,7 @@
#include "audio/out/ao.h"
#include "demux/demux.h"
#include "stream/stream.h"
#include "misc/thread_tools.h"
#include "sub/osd.h"
#include "video/out/vo.h"
@ -116,7 +117,7 @@ void mp_update_logging(struct MPContext *mpctx, bool preinit)
{
bool had_log_file = mp_msg_has_log_file(mpctx->global);
mp_msg_update_msglevels(mpctx->global);
mp_msg_update_msglevels(mpctx->global, mpctx->opts);
bool enable = mpctx->opts->use_terminal;
bool enabled = cas_terminal_owner(mpctx, mpctx);
@ -188,7 +189,9 @@ void mp_destroy(struct MPContext *mpctx)
uninit_libav(mpctx->global);
mp_msg_uninit(mpctx->global);
pthread_mutex_destroy(&mpctx->lock);
assert(!mpctx->num_abort_list);
talloc_free(mpctx->abort_list);
pthread_mutex_destroy(&mpctx->abort_lock);
talloc_free(mpctx);
}
@ -219,7 +222,9 @@ static bool handle_help_options(struct MPContext *mpctx)
MP_INFO(mpctx, "\n");
return true;
}
if (opts->audio_device && strcmp(opts->audio_device, "help") == 0) {
if (opts->ao_opts->audio_device &&
strcmp(opts->ao_opts->audio_device, "help") == 0)
{
ao_print_devices(mpctx->global, log);
return true;
}
@ -241,12 +246,6 @@ static int cfg_include(void *ctx, char *filename, int flags)
return r;
}
static void abort_playback_cb(void *ctx)
{
struct MPContext *mpctx = ctx;
mp_abort_playback_async(mpctx);
}
// We mostly care about LC_NUMERIC, and how "." vs. "," is treated,
// Other locale stuff might break too, but probably isn't too bad.
static bool check_locale(void)
@ -279,9 +278,11 @@ struct MPContext *mp_create(void)
.playlist = talloc_struct(mpctx, struct playlist, {0}),
.dispatch = mp_dispatch_create(mpctx),
.playback_abort = mp_cancel_new(mpctx),
.thread_pool = mp_thread_pool_create(mpctx, 0, 1, 30),
.stop_play = PT_STOP,
};
pthread_mutex_init(&mpctx->lock, NULL);
pthread_mutex_init(&mpctx->abort_lock, NULL);
mpctx->global = talloc_zero(mpctx, struct mpv_global);
@ -302,8 +303,6 @@ struct MPContext *mp_create(void)
m_config_parse(mpctx->mconfig, "", bstr0(def_config), NULL, 0);
m_config_create_shadow(mpctx->mconfig);
mpctx->global->opts = mpctx->opts;
mpctx->input = mp_input_init(mpctx->global, mp_wakeup_core_cb, mpctx);
screenshot_init(mpctx);
command_init(mpctx);
@ -315,8 +314,6 @@ struct MPContext *mp_create(void)
cocoa_set_input_context(mpctx->input);
#endif
mp_input_set_cancel(mpctx->input, abort_playback_cb, mpctx);
char *verbose_env = getenv("MPV_VERBOSE");
if (verbose_env)
mpctx->opts->verbose = atoi(verbose_env);
@ -336,9 +333,12 @@ int mp_initialize(struct MPContext *mpctx, char **options)
assert(!mpctx->initialized);
// Preparse the command line, so we can init the terminal early.
if (options)
m_config_preparse_command_line(mpctx->mconfig, mpctx->global, options);
if (options) {
m_config_preparse_command_line(mpctx->mconfig, mpctx->global,
&opts->verbose, options);
}
mp_init_paths(mpctx->global, opts);
mp_update_logging(mpctx, true);
if (options) {

View File

@ -203,24 +203,6 @@ void issue_refresh_seek(struct MPContext *mpctx, enum seek_precision min_prec)
queue_seek(mpctx, MPSEEK_ABSOLUTE, get_current_time(mpctx), min_prec, 0);
}
float mp_get_cache_percent(struct MPContext *mpctx)
{
struct stream_cache_info info = {0};
if (mpctx->demuxer)
demux_stream_control(mpctx->demuxer, STREAM_CTRL_GET_CACHE_INFO, &info);
if (info.size > 0 && info.fill >= 0)
return info.fill / (info.size / 100.0);
return -1;
}
bool mp_get_cache_idle(struct MPContext *mpctx)
{
struct stream_cache_info info = {0};
if (mpctx->demuxer)
demux_stream_control(mpctx->demuxer, STREAM_CTRL_GET_CACHE_INFO, &info);
return info.idle;
}
void update_vo_playback_state(struct MPContext *mpctx)
{
if (mpctx->video_out && mpctx->video_out->config_ok) {

View File

@ -229,27 +229,23 @@ static char *get_term_status_msg(struct MPContext *mpctx)
}
}
if (mpctx->demuxer) {
struct stream_cache_info info = {0};
demux_stream_control(mpctx->demuxer, STREAM_CTRL_GET_CACHE_INFO, &info);
if (info.size > 0 || mpctx->demuxer->is_network) {
saddf(&line, " Cache: ");
if (mpctx->demuxer && demux_is_network_cached(mpctx->demuxer)) {
saddf(&line, " Cache: ");
struct demux_ctrl_reader_state s = {.ts_duration = -1};
demux_control(mpctx->demuxer, DEMUXER_CTRL_GET_READER_STATE, &s);
struct demux_ctrl_reader_state s = {.ts_duration = -1};
demux_control(mpctx->demuxer, DEMUXER_CTRL_GET_READER_STATE, &s);
if (s.ts_duration < 0) {
saddf(&line, "???");
if (s.ts_duration < 0) {
saddf(&line, "???");
} else {
saddf(&line, "%2ds", (int)s.ts_duration);
}
int64_t cache_size = s.fw_bytes;
if (cache_size > 0) {
if (cache_size >= 1024 * 1024) {
saddf(&line, "+%lldMB", (long long)(cache_size / 1024 / 1024));
} else {
saddf(&line, "%2ds", (int)s.ts_duration);
}
int64_t cache_size = s.fw_bytes + info.fill;
if (cache_size > 0) {
if (cache_size >= 1024 * 1024) {
saddf(&line, "+%lldMB", (long long)(cache_size / 1024 / 1024));
} else {
saddf(&line, "+%lldKB", (long long)(cache_size / 1024));
}
saddf(&line, "+%lldKB", (long long)(cache_size / 1024));
}
}
}
@ -267,9 +263,13 @@ static void term_osd_print_status_lazy(struct MPContext *mpctx)
if (!opts->use_terminal)
return;
if (opts->quiet || !mpctx->playback_initialized || !mpctx->playing_msg_shown)
if (opts->quiet || !mpctx->playback_initialized ||
!mpctx->playing_msg_shown || mpctx->stop_play)
{
term_osd_set_status_lazy(mpctx, "");
if (!mpctx->playing || mpctx->stop_play) {
mp_msg_flush_status_line(mpctx->log);
term_osd_set_status_lazy(mpctx, "");
}
return;
}

View File

@ -92,6 +92,16 @@ void mp_wakeup_core_cb(void *ctx)
mp_wakeup_core(mpctx);
}
void mp_core_lock(struct MPContext *mpctx)
{
mp_dispatch_lock(mpctx->dispatch);
}
void mp_core_unlock(struct MPContext *mpctx)
{
mp_dispatch_unlock(mpctx->dispatch);
}
// Process any queued input, whether it's user input, or requests from client
// API threads. This also resets the "wakeup" flag used with mp_wait_events().
void mp_process_input(struct MPContext *mpctx)
@ -100,8 +110,7 @@ void mp_process_input(struct MPContext *mpctx)
mp_cmd_t *cmd = mp_input_read_cmd(mpctx->input);
if (!cmd)
break;
run_command(mpctx, cmd, NULL);
mp_cmd_free(cmd);
run_command(mpctx, cmd, NULL, NULL, NULL);
}
mp_set_timeout(mpctx, mp_input_get_delay(mpctx->input));
}
@ -118,8 +127,8 @@ void update_core_idle_state(struct MPContext *mpctx)
{
bool eof = mpctx->video_status == STATUS_EOF &&
mpctx->audio_status == STATUS_EOF;
bool active = !mpctx->paused && mpctx->restart_complete && mpctx->playing &&
mpctx->in_playloop && !eof;
bool active = !mpctx->paused && mpctx->restart_complete &&
!mpctx->stop_play && mpctx->in_playloop && !eof;
if (mpctx->playback_active != active) {
mpctx->playback_active = active;
@ -219,7 +228,6 @@ void reset_playback_state(struct MPContext *mpctx)
mpctx->hrseek_backstep = false;
mpctx->current_seek = (struct seek_params){0};
mpctx->playback_pts = MP_NOPTS_VALUE;
mpctx->last_seek_pts = MP_NOPTS_VALUE;
mpctx->step_frames = 0;
mpctx->ab_loop_clip = true;
mpctx->restart_complete = false;
@ -619,14 +627,11 @@ static void handle_pause_on_low_cache(struct MPContext *mpctx)
double now = mp_time_sec();
struct stream_cache_info c = {.idle = true};
demux_stream_control(mpctx->demuxer, STREAM_CTRL_GET_CACHE_INFO, &c);
struct demux_ctrl_reader_state s = {.idle = true, .ts_duration = -1};
demux_control(mpctx->demuxer, DEMUXER_CTRL_GET_READER_STATE, &s);
int cache_buffer = 100;
bool use_pause_on_low_cache = (c.size > 0 || mpctx->demuxer->is_network) &&
bool use_pause_on_low_cache = demux_is_network_cached(mpctx->demuxer) &&
opts->cache_pause;
if (!mpctx->restart_complete) {
@ -661,7 +666,7 @@ static void handle_pause_on_low_cache(struct MPContext *mpctx)
}
// Also update cache properties.
bool busy = !s.idle || !c.idle;
bool busy = !s.idle;
if (busy || mpctx->next_cache_update > 0) {
if (mpctx->next_cache_update <= now) {
mpctx->next_cache_update = busy ? now + 0.25 : 0;
@ -865,7 +870,7 @@ int handle_force_window(struct MPContext *mpctx, bool force)
{
// True if we're either in idle mode, or loading of the file has finished.
// It's also set via force in some stages during file loading.
bool act = !mpctx->playing || mpctx->playback_initialized || force;
bool act = mpctx->stop_play || mpctx->playback_initialized || force;
// On the other hand, if a video track is selected, but no video is ever
// decoded on it, then create the window.

View File

@ -27,9 +27,11 @@
#include "screenshot.h"
#include "core.h"
#include "command.h"
#include "input/cmd.h"
#include "misc/bstr.h"
#include "misc/dispatch.h"
#include "misc/thread_pool.h"
#include "misc/node.h"
#include "misc/thread_tools.h"
#include "common/msg.h"
#include "options/path.h"
#include "video/mp_image.h"
@ -46,13 +48,12 @@
typedef struct screenshot_ctx {
struct MPContext *mpctx;
int mode;
bool each_frame;
bool osd;
int frameno;
// Command to repeat in each-frame mode.
struct mp_cmd *each_frame;
struct mp_thread_pool *thread_pool;
int frameno;
} screenshot_ctx;
void screenshot_init(struct MPContext *mpctx)
@ -92,73 +93,27 @@ static char *stripext(void *talloc_ctx, const char *s)
return talloc_asprintf(talloc_ctx, "%.*s", (int)(end - s), s);
}
struct screenshot_item {
bool on_thread;
struct MPContext *mpctx;
const char *filename;
struct mp_image *img;
struct image_writer_opts opts;
};
#define LOCK(item) if (item->on_thread) mp_dispatch_lock(item->mpctx->dispatch);
#define UNLOCK(item) if (item->on_thread) mp_dispatch_unlock(item->mpctx->dispatch);
static void write_screenshot_thread(void *arg)
{
struct screenshot_item *item = arg;
screenshot_ctx *ctx = item->mpctx->screenshot_ctx;
LOCK(item)
screenshot_msg(ctx, MSGL_INFO, "Screenshot: '%s'", item->filename);
UNLOCK(item)
if (!item->img || !write_image(item->img, &item->opts, item->filename,
item->mpctx->log))
{
LOCK(item)
screenshot_msg(ctx, MSGL_ERR, "Error writing screenshot!");
UNLOCK(item)
}
if (item->on_thread) {
mp_dispatch_lock(item->mpctx->dispatch);
screenshot_msg(ctx, MSGL_V, "Screenshot writing done.");
item->mpctx->outstanding_async -= 1;
mp_wakeup_core(item->mpctx);
mp_dispatch_unlock(item->mpctx->dispatch);
}
talloc_free(item);
}
static void write_screenshot(struct MPContext *mpctx, struct mp_image *img,
const char *filename, struct image_writer_opts *opts,
bool async)
static bool write_screenshot(struct MPContext *mpctx, struct mp_image *img,
const char *filename, struct image_writer_opts *opts)
{
screenshot_ctx *ctx = mpctx->screenshot_ctx;
struct image_writer_opts *gopts = mpctx->opts->screenshot_image_opts;
struct image_writer_opts opts_copy = opts ? *opts : *gopts;
struct screenshot_item *item = talloc_zero(NULL, struct screenshot_item);
*item = (struct screenshot_item){
.mpctx = mpctx,
.filename = talloc_strdup(item, filename),
.img = talloc_steal(item, mp_image_new_ref(img)),
.opts = opts ? *opts : *gopts,
};
screenshot_msg(ctx, MSGL_V, "Starting screenshot: '%s'", filename);
if (async) {
if (!ctx->thread_pool)
ctx->thread_pool = mp_thread_pool_create(ctx, 1);
if (ctx->thread_pool) {
item->on_thread = true;
mpctx->outstanding_async += 1;
mp_thread_pool_queue(ctx->thread_pool, write_screenshot_thread, item);
item = NULL;
}
mp_core_unlock(mpctx);
bool ok = img && write_image(img, &opts_copy, filename, mpctx->log);
mp_core_lock(mpctx);
if (ok) {
screenshot_msg(ctx, MSGL_INFO, "Screenshot: '%s'", filename);
} else {
screenshot_msg(ctx, MSGL_ERR, "Error writing screenshot!");
}
if (item)
write_screenshot_thread(item);
return ok;
}
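An editorial aside, not part of the commit: the rewritten write_screenshot() above drops the dedicated writer thread and instead releases the core lock around the blocking write_image() call via mp_core_unlock()/mp_core_lock(); asynchronous execution is presumably left to the generic async command machinery this merge introduces. A minimal standalone sketch of the same unlock-around-blocking-I/O pattern, using a plain pthread mutex as a stand-in for the core dispatch lock and an invented slow_write() helper:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t core_lock = PTHREAD_MUTEX_INITIALIZER;

    // Stand-in for write_image(): slow, must not run with the core lock held.
    static int slow_write(const char *filename)
    {
        printf("writing %s\n", filename);
        return 1;
    }

    int main(void)
    {
        pthread_mutex_lock(&core_lock);      // playloop owns the core
        // ... prepare the image while locked ...
        pthread_mutex_unlock(&core_lock);    // cf. mp_core_unlock(mpctx)
        int ok = slow_write("shot0001.jpg"); // blocking I/O without the lock
        pthread_mutex_lock(&core_lock);      // cf. mp_core_lock(mpctx)
        // ... report success/failure while locked ...
        pthread_mutex_unlock(&core_lock);
        return ok ? 0 : 1;
    }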
#ifdef _WIN32
@ -432,7 +387,8 @@ static struct mp_image *screenshot_get(struct MPContext *mpctx, int mode,
return image;
}
struct mp_image *screenshot_get_rgb(struct MPContext *mpctx, int mode)
// mode is the same as in screenshot_get()
static struct mp_image *screenshot_get_rgb(struct MPContext *mpctx, int mode)
{
struct mp_image *mpi = screenshot_get(mpctx, mode, false);
if (!mpi)
@ -442,9 +398,13 @@ struct mp_image *screenshot_get_rgb(struct MPContext *mpctx, int mode)
return res;
}
void screenshot_to_file(struct MPContext *mpctx, const char *filename, int mode,
bool osd, bool async)
void cmd_screenshot_to_file(void *p)
{
struct mp_cmd_ctx *cmd = p;
struct MPContext *mpctx = cmd->mpctx;
const char *filename = cmd->args[0].v.s;
int mode = cmd->args[1].v.i;
bool osd = cmd->msg_osd;
screenshot_ctx *ctx = mpctx->screenshot_ctx;
struct image_writer_opts opts = *mpctx->opts->screenshot_image_opts;
bool old_osd = ctx->osd;
@ -456,34 +416,45 @@ void screenshot_to_file(struct MPContext *mpctx, const char *filename, int mode,
opts.format = format;
bool high_depth = image_writer_high_depth(&opts);
struct mp_image *image = screenshot_get(mpctx, mode, high_depth);
ctx->osd = old_osd;
if (!image) {
screenshot_msg(ctx, MSGL_ERR, "Taking screenshot failed.");
goto end;
cmd->success = false;
return;
}
write_screenshot(mpctx, image, filename, &opts, async);
cmd->success = write_screenshot(mpctx, image, filename, &opts);
talloc_free(image);
end:
ctx->osd = old_osd;
}
void screenshot_request(struct MPContext *mpctx, int mode, bool each_frame,
bool osd, bool async)
void cmd_screenshot(void *p)
{
struct mp_cmd_ctx *cmd = p;
struct MPContext *mpctx = cmd->mpctx;
int mode = cmd->args[0].v.i & 3;
bool each_frame_toggle = (cmd->args[0].v.i | cmd->args[1].v.i) & 8;
bool each_frame_mode = cmd->args[0].v.i & 16;
bool osd = cmd->msg_osd;
screenshot_ctx *ctx = mpctx->screenshot_ctx;
if (mode == MODE_SUBTITLES && osd_get_render_subs_in_filter(mpctx->osd))
mode = 0;
if (each_frame) {
ctx->each_frame = !ctx->each_frame;
if (!ctx->each_frame)
return;
} else {
ctx->each_frame = false;
if (!each_frame_mode) {
if (each_frame_toggle) {
if (ctx->each_frame) {
TA_FREEP(&ctx->each_frame);
return;
}
ctx->each_frame = talloc_steal(ctx, mp_cmd_clone(cmd->cmd));
ctx->each_frame->args[0].v.i |= 16;
} else {
TA_FREEP(&ctx->each_frame);
}
}
ctx->mode = mode;
cmd->success = false;
ctx->osd = osd;
struct image_writer_opts *opts = mpctx->opts->screenshot_image_opts;
@ -494,7 +465,7 @@ void screenshot_request(struct MPContext *mpctx, int mode, bool each_frame,
if (image) {
char *filename = gen_fname(ctx, image_writer_file_ext(opts));
if (filename)
write_screenshot(mpctx, image, filename, NULL, async);
cmd->success = write_screenshot(mpctx, image, filename, NULL);
talloc_free(filename);
} else {
screenshot_msg(ctx, MSGL_ERR, "Taking screenshot failed.");
@ -503,6 +474,42 @@ void screenshot_request(struct MPContext *mpctx, int mode, bool each_frame,
talloc_free(image);
}
void cmd_screenshot_raw(void *p)
{
struct mp_cmd_ctx *cmd = p;
struct MPContext *mpctx = cmd->mpctx;
struct mpv_node *res = &cmd->result;
struct mp_image *img = screenshot_get_rgb(mpctx, cmd->args[0].v.i);
if (!img) {
cmd->success = false;
return;
}
node_init(res, MPV_FORMAT_NODE_MAP, NULL);
node_map_add_int64(res, "w", img->w);
node_map_add_int64(res, "h", img->h);
node_map_add_int64(res, "stride", img->stride[0]);
node_map_add_string(res, "format", "bgr0");
struct mpv_byte_array *ba =
node_map_add(res, "data", MPV_FORMAT_BYTE_ARRAY)->u.ba;
*ba = (struct mpv_byte_array){
.data = img->planes[0],
.size = img->stride[0] * img->h,
};
talloc_steal(ba, img);
}
static void screenshot_fin(struct mp_cmd_ctx *cmd)
{
void **a = cmd->on_completion_priv;
struct MPContext *mpctx = a[0];
struct mp_waiter *waiter = a[1];
mp_waiter_wakeup(waiter, 0);
mp_wakeup_core(mpctx);
}
void screenshot_flip(struct MPContext *mpctx)
{
screenshot_ctx *ctx = mpctx->screenshot_ctx;
@ -510,6 +517,14 @@ void screenshot_flip(struct MPContext *mpctx)
if (!ctx->each_frame)
return;
ctx->each_frame = false;
screenshot_request(mpctx, ctx->mode, true, ctx->osd, false);
struct mp_waiter wait = MP_WAITER_INITIALIZER;
void *a[] = {mpctx, &wait};
run_command(mpctx, mp_cmd_clone(ctx->each_frame), NULL, screenshot_fin, a);
// Block (in a reentrant way) until the screenshot was written. Otherwise,
// we could pile up screenshot requests forever.
while (!mp_waiter_poll(&wait))
mp_idle(mpctx);
mp_waiter_wait(&wait);
}

View File

@ -25,24 +25,12 @@ struct MPContext;
// One time initialization at program start.
void screenshot_init(struct MPContext *mpctx);
// Request a taking & saving a screenshot of the currently displayed frame.
// mode: 0: -, 1: save the actual output window contents, 2: with subtitles.
// each_frame: If set, this toggles per-frame screenshots, exactly like the
// screenshot slave command (MP_CMD_SCREENSHOT).
// osd: show status on OSD
void screenshot_request(struct MPContext *mpctx, int mode, bool each_frame,
bool osd, bool async);
// filename: where to store the screenshot; doesn't try to find an alternate
// name if the file already exists
// mode, osd: same as in screenshot_request()
void screenshot_to_file(struct MPContext *mpctx, const char *filename, int mode,
bool osd, bool async);
// mode is the same as in screenshot_request()
struct mp_image *screenshot_get_rgb(struct MPContext *mpctx, int mode);
// Called by the playback core code when a new frame is displayed.
void screenshot_flip(struct MPContext *mpctx);
// Handlers for the user-facing commands.
void cmd_screenshot(void *p);
void cmd_screenshot_to_file(void *p);
void cmd_screenshot_raw(void *p);
#endif /* MPLAYER_SCREENSHOT_H */

View File

@ -70,6 +70,8 @@ void uninit_sub(struct MPContext *mpctx, struct track *track)
sub_select(track->d_sub, false);
int order = get_order(mpctx, track);
osd_set_sub(mpctx->osd, order, NULL);
sub_destroy(track->d_sub);
track->d_sub = NULL;
}
}
@ -182,7 +184,9 @@ void reinit_sub(struct MPContext *mpctx, struct track *track)
if (!track || !track->stream || track->stream->type != STREAM_SUB)
return;
if (!track->d_sub && !init_subdec(mpctx, track)) {
assert(!track->d_sub);
if (!init_subdec(mpctx, track)) {
error_on_track(mpctx, track);
return;
}

View File

@ -1169,7 +1169,6 @@ void write_video(struct MPContext *mpctx)
MP_VERBOSE(mpctx, "first video frame after restart shown\n");
}
}
screenshot_flip(mpctx);
mp_notify(mpctx, MPV_EVENT_TICK, NULL);
@ -1188,6 +1187,8 @@ void write_video(struct MPContext *mpctx)
mpctx->max_frames--;
}
screenshot_flip(mpctx);
mp_wakeup_core(mpctx);
return;

View File

@ -1,808 +0,0 @@
/*
* This file is part of mpv.
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
*/
// Time in seconds the main thread waits for the cache thread. On wakeups, the
// code checks for user requested aborts and also prints warnings that the
// cache is being slow.
#define CACHE_WAIT_TIME 1.0
// The time the cache sleeps in idle mode. This controls how often the cache
// retries reading from the stream after EOF has been reached (in case the stream is
// actually readable again, for example if data has been appended to a file).
// Note that if this timeout is too low, the player will waste too much CPU
// when the player is paused.
#define CACHE_IDLE_SLEEP_TIME 1.0
// Time in seconds the cache updates "cached" controls. Note that idle mode
// will block the cache from doing this, and this timeout is honored only if
// the cache is active.
#define CACHE_UPDATE_CONTROLS_TIME 2.0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <assert.h>
#include <pthread.h>
#include <time.h>
#include <math.h>
#include <sys/time.h>
#include <libavutil/common.h>
#include "config.h"
#include "osdep/timer.h"
#include "osdep/threads.h"
#include "common/msg.h"
#include "common/tags.h"
#include "options/options.h"
#include "stream.h"
#include "common/common.h"
#define OPT_BASE_STRUCT struct mp_cache_opts
const struct m_sub_options stream_cache_conf = {
.opts = (const struct m_option[]){
OPT_CHOICE_OR_INT("cache", size, 0, 32, 0x7fffffff,
({"no", 0},
{"auto", -1},
{"yes", -2})),
OPT_CHOICE_OR_INT("cache-default", def_size, 0, 32, 0x7fffffff,
({"no", 0})),
OPT_INTRANGE("cache-initial", initial, 0, 0, 0x7fffffff),
OPT_INTRANGE("cache-seek-min", seek_min, 0, 0, 0x7fffffff),
OPT_INTRANGE("cache-backbuffer", back_buffer, 0, 0, 0x7fffffff),
OPT_STRING("cache-file", file, M_OPT_FILE),
OPT_INTRANGE("cache-file-size", file_max, 0, 0, 0x7fffffff),
{0}
},
.size = sizeof(struct mp_cache_opts),
.defaults = &(const struct mp_cache_opts){
.size = -1,
.def_size = 10000,
.initial = 0,
.seek_min = 500,
.back_buffer = 10000,
.file_max = 1024 * 1024,
},
};
// Note: (struct priv*)(cache->priv)->cache == cache
struct priv {
pthread_t cache_thread;
bool cache_thread_running;
pthread_mutex_t mutex;
pthread_cond_t wakeup;
// Constants (as long as cache thread is running)
// Some of these might actually be changed by a synced cache resize.
unsigned char *buffer; // base pointer of the allocated buffer memory
int64_t buffer_size; // size of the allocated buffer memory
int64_t back_size; // keep back_size amount of old bytes for backward seek
int64_t seek_limit; // keep filling cache if distance is less than seek limit
bool seekable; // underlying stream is seekable
struct mp_log *log;
// Owned by the main thread
stream_t *cache; // wrapper stream, used by demuxer etc.
// Owned by the cache thread
stream_t *stream; // "real" stream, used to read from the source media
int64_t bytes_until_wakeup; // wakeup cache thread after this many bytes
// All the following members are shared between the threads.
// You must lock the mutex to access them.
// Ringbuffer
int64_t min_filepos; // range of file that is cached in the buffer
int64_t max_filepos; // ... max_filepos being the last read position
bool eof; // true if max_filepos = EOF
int64_t offset; // buffer[WRAP(s->max_filepos - offset)] corresponds
// to the byte at max_filepos (must be wrapped by
// buffer_size)
bool idle; // cache thread has stopped reading
int64_t reads; // number of actual read attempts performed
int64_t speed_start; // start time (us) for calculating download speed
int64_t speed_amount; // bytes read since speed_start
double speed;
bool enable_readahead; // actively read beyond read() position
int64_t read_filepos; // client read position (mirrors cache->pos)
int64_t read_min; // file position until which the thread should
// read even if readahead is disabled
int64_t eof_pos;
bool read_seek_failed; // let a read fail because an async seek failed
int control; // requested STREAM_CTRL_... or CACHE_CTRL_...
void *control_arg; // temporary for executing STREAM_CTRLs
int control_res;
bool control_flush;
// Cached STREAM_CTRLs
double stream_time_length;
int64_t stream_size;
struct mp_tags *stream_metadata;
double start_pts;
bool has_avseek;
};
enum {
CACHE_CTRL_NONE = 0,
CACHE_CTRL_QUIT = -1,
CACHE_CTRL_PING = -2,
CACHE_CTRL_SEEK = -3,
// we should fill buffer only if space>=FILL_LIMIT
FILL_LIMIT = 16 * 1024,
};
// Used by the main thread to wakeup the cache thread, and to wait for the
// cache thread. The cache mutex has to be locked when calling this function.
// *retry_time should be set to 0 on the first call.
// Return false if the stream has been aborted.
static bool cache_wakeup_and_wait(struct priv *s, double *retry_time)
{
double start = mp_time_sec();
if (*retry_time >= CACHE_WAIT_TIME) {
MP_VERBOSE(s, "Cache is not responding - slow/stuck network connection?\n");
*retry_time = -1; // do not warn again for this call
}
pthread_cond_signal(&s->wakeup);
struct timespec ts = mp_rel_time_to_timespec(CACHE_WAIT_TIME);
pthread_cond_timedwait(&s->wakeup, &s->mutex, &ts);
if (*retry_time >= 0)
*retry_time += mp_time_sec() - start;
return !mp_cancel_test(s->cache->cancel);
}
// Runs in the cache thread
static void cache_drop_contents(struct priv *s)
{
s->offset = s->min_filepos = s->max_filepos = s->read_filepos;
s->eof = false;
s->start_pts = MP_NOPTS_VALUE;
}
static void update_speed(struct priv *s)
{
int64_t now = mp_time_us();
if (s->speed_start + 1000000 <= now) {
s->speed = s->speed_amount * 1e6 / (now - s->speed_start);
s->speed_amount = 0;
s->speed_start = now;
}
}
// Copy at most dst_size bytes from the cache at the given absolute file position pos.
// Return number of bytes that could actually be read.
// Does not advance the file position, or change anything else.
// Can be called from anywhere, as long as the mutex is held.
static size_t read_buffer(struct priv *s, unsigned char *dst,
size_t dst_size, int64_t pos)
{
size_t read = 0;
while (read < dst_size) {
if (pos >= s->max_filepos || pos < s->min_filepos)
break;
int64_t newb = s->max_filepos - pos; // new bytes in the buffer
int64_t bpos = pos - s->offset; // file pos to buffer memory pos
if (bpos < 0) {
bpos += s->buffer_size;
} else if (bpos >= s->buffer_size) {
bpos -= s->buffer_size;
}
if (newb > s->buffer_size - bpos)
newb = s->buffer_size - bpos; // handle wrap...
newb = MPMIN(newb, dst_size - read);
assert(newb >= 0 && read + newb <= dst_size);
assert(bpos >= 0 && bpos + newb <= s->buffer_size);
memcpy(&dst[read], &s->buffer[bpos], newb);
read += newb;
pos += newb;
}
return read;
}
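An illustrative aside, not part of the original file: the position-to-offset mapping above is plain modular arithmetic. A self-contained sketch with a made-up 8-byte buffer and file range, applying the same translation as read_buffer():

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        // Hypothetical miniature cache: an 8-byte ring buffer holding file bytes
        // 6..13, where buffer index 0 corresponds to file position 8 (offset = 8).
        int64_t buffer_size = 8, offset = 8;
        int64_t min_filepos = 6, max_filepos = 14;

        for (int64_t pos = min_filepos; pos < max_filepos; pos++) {
            int64_t bpos = pos - offset;    // file pos -> buffer memory pos
            if (bpos < 0) {
                bpos += buffer_size;
            } else if (bpos >= buffer_size) {
                bpos -= buffer_size;
            }
            assert(bpos >= 0 && bpos < buffer_size);
            // positions 6,7 land at indices 6,7; positions 8..13 wrap to 0..5
        }
        return 0;
    }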
// Whether a seek will be needed to get to the position. This honors seek_limit,
// which is a heuristic to prevent dropping the cache with small forward seeks.
// This helps in situations where waiting for network a bit longer would quickly
// reach the target position. Especially if the demuxer seeks back and forth,
// not dropping the backwards cache will be a major performance win.
static bool needs_seek(struct priv *s, int64_t pos)
{
return pos < s->min_filepos || pos > s->max_filepos + s->seek_limit;
}
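An aside, not part of the original file: the heuristic is easiest to see with numbers. A self-contained sketch with made-up values mirroring the check above — forward seeks within seek_limit keep the cache, everything else drops it:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool needs_seek(int64_t min_filepos, int64_t max_filepos,
                           int64_t seek_limit, int64_t pos)
    {
        return pos < min_filepos || pos > max_filepos + seek_limit;
    }

    int main(void)
    {
        // Made-up state: bytes 0..1 MiB cached, seek_limit = 500 KiB.
        int64_t min = 0, max = 1 << 20, limit = 500 * 1024;
        assert(!needs_seek(min, max, limit, max + 300 * 1024)); // wait it out
        assert(needs_seek(min, max, limit, max + 600 * 1024));  // real seek
        assert(needs_seek(min, max, limit, -1));                // before cache
        return 0;
    }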
static bool cache_update_stream_position(struct priv *s)
{
int64_t read = s->read_filepos;
s->read_seek_failed = false;
if (needs_seek(s, read)) {
MP_VERBOSE(s, "Dropping cache at pos %"PRId64", "
"cached range: %"PRId64"-%"PRId64".\n", read,
s->min_filepos, s->max_filepos);
cache_drop_contents(s);
}
if (stream_tell(s->stream) != s->max_filepos && s->seekable) {
MP_VERBOSE(s, "Seeking underlying stream: %"PRId64" -> %"PRId64"\n",
stream_tell(s->stream), s->max_filepos);
if (!stream_seek(s->stream, s->max_filepos)) {
s->read_seek_failed = true;
return false;
}
}
return stream_tell(s->stream) == s->max_filepos;
}
// Runs in the cache thread.
static void cache_fill(struct priv *s)
{
int64_t read = s->read_filepos;
bool read_attempted = false;
int len = 0;
if (!cache_update_stream_position(s))
goto done;
if (!s->enable_readahead && s->read_min <= s->max_filepos)
goto done;
if (mp_cancel_test(s->cache->cancel))
goto done;
// number of buffer bytes which should be preserved in backwards direction
int64_t back = MPCLAMP(read - s->min_filepos, 0, s->back_size);
// limit maximum readahead so that the backbuffer space is reserved, even
// if the backbuffer is not used. limit it to ensure that we don't stall the
// network when starting a file, or we wouldn't download new data until we
// get new free space again. (unless everything fits in the cache.)
if (s->stream_size > s->buffer_size)
back = MPMAX(back, s->back_size);
// number of buffer bytes that are valid and can be read
int64_t newb = FFMAX(s->max_filepos - read, 0);
// max. number of bytes that can be written (starting from max_filepos)
int64_t space = s->buffer_size - (newb + back);
// offset into the buffer that maps to max_filepos
int64_t pos = s->max_filepos - s->offset;
if (pos >= s->buffer_size)
pos -= s->buffer_size; // wrap-around
if (space < FILL_LIMIT)
goto done;
// limit to end of buffer (without wrapping)
if (pos + space >= s->buffer_size)
space = s->buffer_size - pos;
// limit read size (or else would block and read the entire buffer in 1 call)
space = FFMIN(space, s->stream->read_chunk);
// back+newb+space <= buffer_size
int64_t back2 = s->buffer_size - (space + newb); // max back size
if (s->min_filepos < (read - back2))
s->min_filepos = read - back2;
// The read call might take a long time and block, so drop the lock.
pthread_mutex_unlock(&s->mutex);
len = stream_read_partial(s->stream, &s->buffer[pos], space);
pthread_mutex_lock(&s->mutex);
// Do this after reading a block, because at least libdvdnav updates the
// stream position only after actually reading something after a seek.
if (s->start_pts == MP_NOPTS_VALUE) {
double pts;
if (stream_control(s->stream, STREAM_CTRL_GET_CURRENT_TIME, &pts) > 0)
s->start_pts = pts;
}
s->max_filepos += len;
if (pos + len == s->buffer_size)
s->offset += s->buffer_size; // wrap...
s->speed_amount += len;
read_attempted = true;
done: ;
bool prev_eof = s->eof;
if (read_attempted)
s->eof = len <= 0;
if (!prev_eof && s->eof) {
s->eof_pos = stream_tell(s->stream);
MP_VERBOSE(s, "EOF reached.\n");
}
s->idle = s->eof || !read_attempted;
s->reads++;
update_speed(s);
pthread_cond_signal(&s->wakeup);
}
// This is called both during init and at runtime.
// The size argument is the readahead half only; s->back_size is the backbuffer.
static int resize_cache(struct priv *s, int64_t size)
{
int64_t min_size = FILL_LIMIT * 2;
int64_t max_size = ((size_t)-1) / 8;
if (s->stream_size > 0) {
size = MPMIN(size, s->stream_size);
if (size >= s->stream_size) {
MP_VERBOSE(s, "no backbuffer needed\n");
s->back_size = 0;
}
}
int64_t buffer_size = MPCLAMP(size, min_size, max_size);
s->back_size = MPCLAMP(s->back_size, min_size, max_size);
buffer_size += s->back_size;
unsigned char *buffer = malloc(buffer_size);
if (!buffer)
return STREAM_ERROR;
if (s->buffer) {
// Copy & free the old ringbuffer data.
// If the buffer is too small, prefer to copy these regions:
// 1. Data starting from read_filepos, until cache end
size_t read_1 = read_buffer(s, buffer, buffer_size, s->read_filepos);
// 2. then data from before read_filepos until cache start
// (this one needs to be copied to the end of the ringbuffer)
size_t read_2 = 0;
if (s->min_filepos < s->read_filepos) {
size_t copy_len = buffer_size - read_1;
copy_len = MPMIN(copy_len, s->read_filepos - s->min_filepos);
assert(copy_len + read_1 <= buffer_size);
read_2 = read_buffer(s, buffer + buffer_size - copy_len, copy_len,
s->read_filepos - copy_len);
// This shouldn't happen, unless copy_len was computed incorrectly.
assert(read_2 == copy_len);
}
// Set it up such that read_1 is at buffer pos 0, and read_2 wraps
// around below it, so that it is located at the end of the buffer.
s->min_filepos = s->read_filepos - read_2;
s->max_filepos = s->read_filepos + read_1;
s->offset = s->max_filepos - read_1;
} else {
cache_drop_contents(s);
}
free(s->buffer);
s->buffer_size = buffer_size;
s->buffer = buffer;
s->idle = false;
s->eof = false;
// make sure that we won't wait for more data from cache_fill
// than it is allowed to fill
if (s->seek_limit > s->buffer_size - FILL_LIMIT)
s->seek_limit = s->buffer_size - FILL_LIMIT;
MP_VERBOSE(s, "Cache size set to %lld KiB (%lld KiB backbuffer)\n",
(long long)(s->buffer_size / 1024),
(long long)(s->back_size / 1024));
assert(s->back_size < s->buffer_size);
return STREAM_OK;
}
static void update_cached_controls(struct priv *s)
{
int64_t i64;
double d;
struct mp_tags *tags;
s->stream_time_length = 0;
if (stream_control(s->stream, STREAM_CTRL_GET_TIME_LENGTH, &d) == STREAM_OK)
s->stream_time_length = d;
if (stream_control(s->stream, STREAM_CTRL_GET_METADATA, &tags) == STREAM_OK) {
talloc_free(s->stream_metadata);
s->stream_metadata = talloc_steal(s, tags);
}
s->stream_size = s->eof_pos;
i64 = stream_get_size(s->stream);
if (i64 >= 0)
s->stream_size = i64;
s->has_avseek = stream_control(s->stream, STREAM_CTRL_HAS_AVSEEK, NULL) > 0;
}
// the core might call these every frame, so cache them...
static int cache_get_cached_control(stream_t *cache, int cmd, void *arg)
{
struct priv *s = cache->priv;
switch (cmd) {
case STREAM_CTRL_GET_CACHE_INFO:
*(struct stream_cache_info *)arg = (struct stream_cache_info) {
.size = s->buffer_size - s->back_size,
.fill = s->max_filepos - s->read_filepos,
.idle = s->idle,
.speed = llrint(s->speed),
};
return STREAM_OK;
case STREAM_CTRL_SET_READAHEAD:
s->enable_readahead = *(int *)arg;
pthread_cond_signal(&s->wakeup);
return STREAM_OK;
case STREAM_CTRL_GET_TIME_LENGTH:
*(double *)arg = s->stream_time_length;
return s->stream_time_length ? STREAM_OK : STREAM_UNSUPPORTED;
case STREAM_CTRL_GET_SIZE:
if (s->stream_size < 0)
return STREAM_UNSUPPORTED;
*(int64_t *)arg = s->stream_size;
return STREAM_OK;
case STREAM_CTRL_GET_CURRENT_TIME: {
if (s->start_pts == MP_NOPTS_VALUE)
return STREAM_UNSUPPORTED;
*(double *)arg = s->start_pts;
return STREAM_OK;
}
case STREAM_CTRL_HAS_AVSEEK:
return s->has_avseek ? STREAM_OK : STREAM_UNSUPPORTED;
case STREAM_CTRL_GET_METADATA: {
if (s->stream_metadata) {
ta_set_parent(s->stream_metadata, NULL);
*(struct mp_tags **)arg = s->stream_metadata;
s->stream_metadata = NULL;
return STREAM_OK;
}
return STREAM_UNSUPPORTED;
}
case STREAM_CTRL_AVSEEK:
if (!s->has_avseek)
return STREAM_UNSUPPORTED;
break;
}
return STREAM_ERROR;
}
static bool control_needs_flush(int stream_ctrl)
{
switch (stream_ctrl) {
case STREAM_CTRL_SEEK_TO_TIME:
case STREAM_CTRL_AVSEEK:
case STREAM_CTRL_SET_ANGLE:
case STREAM_CTRL_SET_CURRENT_TITLE:
case STREAM_CTRL_DVB_SET_CHANNEL:
case STREAM_CTRL_DVB_SET_CHANNEL_NAME:
case STREAM_CTRL_DVB_STEP_CHANNEL:
return true;
}
return false;
}
// Runs in the cache thread
static void cache_execute_control(struct priv *s)
{
uint64_t old_pos = stream_tell(s->stream);
s->control_flush = false;
switch (s->control) {
case STREAM_CTRL_SET_CACHE_SIZE:
s->control_res = resize_cache(s, *(int64_t *)s->control_arg);
break;
default:
s->control_res = stream_control(s->stream, s->control, s->control_arg);
}
bool pos_changed = old_pos != stream_tell(s->stream);
bool ok = s->control_res == STREAM_OK;
if (pos_changed && !ok) {
MP_ERR(s, "STREAM_CTRL changed stream pos but "
"returned error, this is not allowed!\n");
} else if (pos_changed || (ok && control_needs_flush(s->control))) {
MP_VERBOSE(s, "Dropping cache due to control()\n");
s->read_filepos = stream_tell(s->stream);
s->read_min = s->read_filepos;
s->control_flush = true;
cache_drop_contents(s);
}
update_cached_controls(s);
s->control = CACHE_CTRL_NONE;
pthread_cond_signal(&s->wakeup);
}
static void *cache_thread(void *arg)
{
struct priv *s = arg;
mpthread_set_name("cache");
pthread_mutex_lock(&s->mutex);
update_cached_controls(s);
double last = mp_time_sec();
while (s->control != CACHE_CTRL_QUIT) {
if (mp_time_sec() - last > CACHE_UPDATE_CONTROLS_TIME) {
update_cached_controls(s);
last = mp_time_sec();
}
if (s->control > 0) {
cache_execute_control(s);
} else if (s->control == CACHE_CTRL_SEEK) {
s->control_res = cache_update_stream_position(s);
s->control = CACHE_CTRL_NONE;
pthread_cond_signal(&s->wakeup);
} else {
cache_fill(s);
}
if (s->control == CACHE_CTRL_PING) {
pthread_cond_signal(&s->wakeup);
s->control = CACHE_CTRL_NONE;
}
if (s->idle && s->control == CACHE_CTRL_NONE) {
struct timespec ts = mp_rel_time_to_timespec(CACHE_IDLE_SLEEP_TIME);
pthread_cond_timedwait(&s->wakeup, &s->mutex, &ts);
}
}
pthread_cond_signal(&s->wakeup);
pthread_mutex_unlock(&s->mutex);
MP_VERBOSE(s, "Cache exiting...\n");
return NULL;
}
static int cache_fill_buffer(struct stream *cache, char *buffer, int max_len)
{
struct priv *s = cache->priv;
assert(s->cache_thread_running);
pthread_mutex_lock(&s->mutex);
if (cache->pos != s->read_filepos)
MP_ERR(s, "!!! read_filepos differs !!! report this bug...\n");
int readb = 0;
if (max_len > 0) {
double retry_time = 0;
int64_t retry = s->reads - 1; // try at least 1 read on EOF
while (1) {
s->read_min = s->read_filepos + max_len + 64 * 1024;
readb = read_buffer(s, buffer, max_len, s->read_filepos);
s->read_filepos += readb;
if (readb > 0)
break;
if (s->eof && s->read_filepos >= s->max_filepos && s->reads >= retry)
break;
s->idle = false;
if (!cache_wakeup_and_wait(s, &retry_time))
break;
if (s->read_seek_failed) {
MP_ERR(s, "error reading after async seek failed\n");
s->read_seek_failed = false;
break;
}
}
}
if (!s->eof) {
// wakeup the cache thread, possibly make it read more data ahead
// this is throttled to reduce excessive wakeups during normal reading
// (using the amount of bytes after which the cache thread most likely
// can actually read new data)
s->bytes_until_wakeup -= readb;
if (s->bytes_until_wakeup <= 0) {
s->bytes_until_wakeup = MPMAX(FILL_LIMIT, s->stream->read_chunk);
pthread_cond_signal(&s->wakeup);
}
}
pthread_mutex_unlock(&s->mutex);
return readb;
}
static int cache_seek(stream_t *cache, int64_t pos)
{
struct priv *s = cache->priv;
assert(s->cache_thread_running);
int r = 1;
pthread_mutex_lock(&s->mutex);
MP_DBG(s, "request seek: %" PRId64 " <= to=%" PRId64
" (cur=%" PRId64 ") <= %" PRId64 " \n",
s->min_filepos, pos, s->read_filepos, s->max_filepos);
if (!s->seekable && pos > s->max_filepos) {
MP_ERR(s, "Attempting to seek past cached data in unseekable stream.\n");
r = 0;
} else if (!s->seekable && pos < s->min_filepos) {
MP_ERR(s, "Attempting to seek before cached data in unseekable stream.\n");
r = 0;
} else {
cache->pos = s->read_filepos = s->read_min = pos;
// Is this seek likely to cause a stream-level seek?
// If it is, wait until that is complete and return its result.
// This check is not quite exact - if the reader thread is blocked in
// a read, the read might advance file position enough that a seek
// forward is no longer needed.
if (needs_seek(s, pos)) {
s->eof = false;
s->control = CACHE_CTRL_SEEK;
s->control_res = 0;
double retry = 0;
while (s->control != CACHE_CTRL_NONE) {
if (!cache_wakeup_and_wait(s, &retry))
break;
}
r = s->control_res;
} else {
pthread_cond_signal(&s->wakeup);
r = 1;
}
}
s->bytes_until_wakeup = 0;
pthread_mutex_unlock(&s->mutex);
return r;
}
static int cache_control(stream_t *cache, int cmd, void *arg)
{
struct priv *s = cache->priv;
int r = STREAM_ERROR;
assert(cmd > 0);
pthread_mutex_lock(&s->mutex);
r = cache_get_cached_control(cache, cmd, arg);
if (r != STREAM_ERROR)
goto done;
MP_VERBOSE(s, "blocking for STREAM_CTRL %d\n", cmd);
s->control = cmd;
s->control_arg = arg;
double retry = 0;
while (s->control != CACHE_CTRL_NONE) {
if (!cache_wakeup_and_wait(s, &retry)) {
s->eof = 1;
r = STREAM_UNSUPPORTED;
goto done;
}
}
r = s->control_res;
if (s->control_flush) {
stream_drop_buffers(cache);
cache->pos = s->read_filepos;
}
done:
pthread_mutex_unlock(&s->mutex);
return r;
}
static void cache_uninit(stream_t *cache)
{
struct priv *s = cache->priv;
if (s->cache_thread_running) {
MP_VERBOSE(s, "Terminating cache...\n");
pthread_mutex_lock(&s->mutex);
s->control = CACHE_CTRL_QUIT;
pthread_cond_signal(&s->wakeup);
pthread_mutex_unlock(&s->mutex);
pthread_join(s->cache_thread, NULL);
}
pthread_mutex_destroy(&s->mutex);
pthread_cond_destroy(&s->wakeup);
free(s->buffer);
talloc_free(s);
}
// return 1 on success, 0 if the cache is disabled/not needed, and -1 on error
int stream_cache_init(stream_t *cache, stream_t *stream,
struct mp_cache_opts *opts)
{
if (opts->size < 1)
return 0;
struct priv *s = talloc_zero(NULL, struct priv);
s->log = cache->log;
s->eof_pos = -1;
s->enable_readahead = true;
cache_drop_contents(s);
s->speed_start = mp_time_us();
s->seek_limit = opts->seek_min * 1024ULL;
s->back_size = opts->back_buffer * 1024ULL;
s->stream_size = stream_get_size(stream);
if (resize_cache(s, opts->size * 1024ULL) != STREAM_OK) {
MP_ERR(s, "Failed to allocate cache buffer.\n");
talloc_free(s);
return -1;
}
pthread_mutex_init(&s->mutex, NULL);
pthread_cond_init(&s->wakeup, NULL);
cache->priv = s;
s->cache = cache;
s->stream = stream;
cache->seek = cache_seek;
cache->fill_buffer = cache_fill_buffer;
cache->control = cache_control;
cache->close = cache_uninit;
int64_t min = opts->initial * 1024ULL;
if (min > s->buffer_size - FILL_LIMIT)
min = s->buffer_size - FILL_LIMIT;
s->seekable = stream->seekable;
if (pthread_create(&s->cache_thread, NULL, cache_thread, s) != 0) {
MP_ERR(s, "Starting cache thread failed.\n");
return -1;
}
s->cache_thread_running = true;
// wait until cache is filled with at least min bytes
if (min < 1)
return 1;
for (;;) {
if (mp_cancel_test(cache->cancel))
return -1;
struct stream_cache_info info;
if (stream_control(s->cache, STREAM_CTRL_GET_CACHE_INFO, &info) < 0)
break;
mp_msg(s->log, MSGL_STATUS, "Cache fill: %5.2f%% "
"(%" PRId64 " bytes)", 100.0 * info.fill / s->buffer_size,
info.fill);
if (info.fill >= min)
break;
if (info.idle)
break; // file is smaller than prefill size
// Wake up if the cache is done reading some data (or on timeout/abort)
pthread_mutex_lock(&s->mutex);
s->control = CACHE_CTRL_PING;
pthread_cond_signal(&s->wakeup);
cache_wakeup_and_wait(s, &(double){0});
pthread_mutex_unlock(&s->mutex);
}
return 1;
}

View File

@ -1,158 +0,0 @@
/*
* This file is part of mpv.
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdint.h>
#include "osdep/io.h"
#include "common/common.h"
#include "common/msg.h"
#include "options/options.h"
#include "stream.h"
#define BLOCK_SIZE 1024LL
#define BLOCK_ALIGN(p) ((p) & ~(BLOCK_SIZE - 1))
struct priv {
struct stream *original;
FILE *cache_file;
uint8_t *block_bits; // 1 bit for each BLOCK_SIZE, whether block was read
int64_t size; // currently known size
int64_t max_size; // max. size for block_bits and cache_file
};
static bool test_bit(struct priv *p, int64_t pos)
{
if (pos < 0 || pos >= p->size)
return false;
size_t block = pos / BLOCK_SIZE;
return p->block_bits[block / 8] & (1 << (block % 8));
}
static void set_bit(struct priv *p, int64_t pos, bool bit)
{
if (pos < 0 || pos >= p->size)
return;
size_t block = pos / BLOCK_SIZE;
unsigned int m = (1 << (block % 8));
p->block_bits[block / 8] = (p->block_bits[block / 8] & ~m) | (bit ? m : 0);
}
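An aside, not part of the original file: one bit tracks each 1 KiB block, so the bookkeeping above is just shift/mask arithmetic. A self-contained sketch with a made-up 8 KiB file and byte position:

    #include <assert.h>
    #include <stdint.h>

    #define BLOCK_SIZE 1024LL
    #define BLOCK_ALIGN(p) ((p) & ~(BLOCK_SIZE - 1))

    int main(void)
    {
        uint8_t bits[1] = {0};          // 8 KiB file -> 8 blocks -> 1 bitmap byte
        int64_t pos = 3000;             // some byte inside the file
        size_t block = BLOCK_ALIGN(pos) / BLOCK_SIZE;
        assert(block == 2);             // bytes 2048..3071 form block #2
        bits[block / 8] |= 1 << (block % 8);          // set_bit() equivalent
        assert(bits[block / 8] & (1 << (block % 8))); // test_bit() equivalent
        return 0;
    }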
static int fill_buffer(stream_t *s, char *buffer, int max_len)
{
struct priv *p = s->priv;
if (s->pos < 0)
return -1;
if (s->pos >= p->max_size) {
if (stream_seek(p->original, s->pos) < 1)
return -1;
return stream_read(p->original, buffer, max_len);
}
// Size of file changes -> invalidate last block
if (s->pos >= p->size - BLOCK_SIZE) {
int64_t new_size = stream_get_size(s);
if (p->size >= 0 && new_size != p->size)
set_bit(p, BLOCK_ALIGN(p->size), 0);
p->size = MPMIN(p->max_size, new_size);
}
int64_t aligned = BLOCK_ALIGN(s->pos);
if (!test_bit(p, aligned)) {
char tmp[BLOCK_SIZE];
stream_seek(p->original, aligned);
int r = stream_read(p->original, tmp, BLOCK_SIZE);
if (r < BLOCK_SIZE) {
if (p->size < 0) {
MP_WARN(s, "suspected EOF\n");
} else if (aligned + r < p->size) {
MP_ERR(s, "unexpected EOF\n");
return -1;
}
}
if (fseeko(p->cache_file, aligned, SEEK_SET))
return -1;
if (fwrite(tmp, r, 1, p->cache_file) != 1)
return -1;
set_bit(p, aligned, 1);
}
if (fseeko(p->cache_file, s->pos, SEEK_SET))
return -1;
// align/limit to blocks
max_len = MPMIN(max_len, BLOCK_SIZE - (s->pos % BLOCK_SIZE));
// Limit to max. known file size
if (p->size >= 0)
max_len = MPMIN(max_len, p->size - s->pos);
return fread(buffer, 1, max_len, p->cache_file);
}
static int seek(stream_t *s, int64_t newpos)
{
return 1;
}
static int control(stream_t *s, int cmd, void *arg)
{
struct priv *p = s->priv;
return stream_control(p->original, cmd, arg);
}
static void s_close(stream_t *s)
{
struct priv *p = s->priv;
if (p->cache_file)
fclose(p->cache_file);
talloc_free(p);
}
// return 1 on success, 0 if disabled, -1 on error
int stream_file_cache_init(stream_t *cache, stream_t *stream,
struct mp_cache_opts *opts)
{
if (!opts->file || !opts->file[0] || opts->file_max < 1)
return 0;
if (!stream->seekable) {
MP_ERR(cache, "can't cache unseekable stream\n");
return -1;
}
bool use_anon_file = strcmp(opts->file, "TMP") == 0;
FILE *file = use_anon_file ? tmpfile() : fopen(opts->file, "wb+");
if (!file) {
MP_ERR(cache, "can't open cache file '%s'\n", opts->file);
return -1;
}
struct priv *p = talloc_zero(NULL, struct priv);
cache->priv = p;
p->original = stream;
p->cache_file = file;
p->max_size = opts->file_max * 1024LL;
// file_max can be INT_MAX, so this is at most about 256MB
p->block_bits = talloc_zero_size(p, (p->max_size / BLOCK_SIZE + 1) / 8 + 1);
cache->seek = seek;
cache->fill_buffer = fill_buffer;
cache->control = control;
cache->close = s_close;
return 1;
}

View File

@ -17,16 +17,12 @@
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <limits.h>
#include <errno.h>
#include <strings.h>
#include <assert.h>
#include <libavutil/common.h>
#include "osdep/atomic.h"
#include "osdep/io.h"
#include "mpv_talloc.h"
@ -36,6 +32,7 @@
#include "common/common.h"
#include "common/global.h"
#include "misc/bstr.h"
#include "misc/thread_tools.h"
#include "common/msg.h"
#include "options/options.h"
#include "options/path.h"
@ -45,12 +42,6 @@
#include "options/m_option.h"
#include "options/m_config.h"
#ifdef __MINGW32__
#include <windows.h>
#else
#include <poll.h>
#endif
// Includes additional padding in case sizes get rounded up by sector size.
#define TOTAL_BUFFER_SIZE (STREAM_MAX_BUFFER_SIZE + STREAM_MAX_SECTOR_SIZE)
@ -238,7 +229,6 @@ static int open_internal(const stream_info_t *sinfo, const char *url, int flags,
s->global = global;
s->url = talloc_strdup(s, url);
s->path = talloc_strdup(s, path);
s->allow_caching = true;
s->is_network = sinfo->is_network;
s->mode = flags & (STREAM_READ | STREAM_WRITE);
@ -265,9 +255,6 @@ static int open_internal(const stream_info_t *sinfo, const char *url, int flags,
if (!s->read_chunk)
s->read_chunk = 4 * (s->sector_size ? s->sector_size : STREAM_BUFFER_SIZE);
if (!s->fill_buffer)
s->allow_caching = false;
assert(s->seekable == !!s->seek);
if (s->mime_type)
@ -590,7 +577,6 @@ void free_stream(stream_t *s)
if (s->close)
s->close(s);
free_stream(s->underlying);
talloc_free(s);
}
@ -606,98 +592,6 @@ stream_t *open_memory_stream(void *data, int len)
return s;
}
static stream_t *open_cache(stream_t *orig, const char *name)
{
stream_t *cache = new_stream();
cache->underlying = orig;
cache->caching = true;
cache->seekable = true;
cache->mode = STREAM_READ;
cache->read_chunk = 4 * STREAM_BUFFER_SIZE;
cache->url = talloc_strdup(cache, orig->url);
cache->mime_type = talloc_strdup(cache, orig->mime_type);
cache->demuxer = talloc_strdup(cache, orig->demuxer);
cache->lavf_type = talloc_strdup(cache, orig->lavf_type);
cache->streaming = orig->streaming,
cache->is_network = orig->is_network;
cache->is_local_file = orig->is_local_file;
cache->is_directory = orig->is_directory;
cache->cancel = orig->cancel;
cache->global = orig->global;
cache->log = mp_log_new(cache, cache->global->log, name);
return cache;
}
static struct mp_cache_opts check_cache_opts(stream_t *stream,
struct mp_cache_opts *opts)
{
struct mp_cache_opts use_opts = *opts;
if (use_opts.size == -1)
use_opts.size = stream->streaming ? use_opts.def_size : 0;
if (use_opts.size == -2)
use_opts.size = use_opts.def_size;
if (stream->mode != STREAM_READ || !stream->allow_caching || use_opts.size < 1)
use_opts.size = 0;
return use_opts;
}
bool stream_wants_cache(stream_t *stream, struct mp_cache_opts *opts)
{
struct mp_cache_opts use_opts = check_cache_opts(stream, opts);
return use_opts.size > 0;
}
// return 1 on success, 0 if the cache is disabled/not needed, and -1 on error
static int stream_enable_cache(stream_t **stream, struct mp_cache_opts *opts)
{
stream_t *orig = *stream;
struct mp_cache_opts use_opts = check_cache_opts(*stream, opts);
if (use_opts.size < 1)
return 0;
stream_t *fcache = open_cache(orig, "file-cache");
if (stream_file_cache_init(fcache, orig, &use_opts) <= 0) {
fcache->underlying = NULL; // don't free original stream
free_stream(fcache);
fcache = orig;
}
stream_t *cache = open_cache(fcache, "cache");
int res = stream_cache_init(cache, fcache, &use_opts);
if (res <= 0) {
cache->underlying = NULL; // don't free original stream
free_stream(cache);
if (fcache != orig) {
fcache->underlying = NULL;
free_stream(fcache);
}
} else {
*stream = cache;
}
return res;
}
// Do some crazy stuff to call stream_enable_cache() with the global options.
int stream_enable_cache_defaults(stream_t **stream)
{
struct mpv_global *global = (*stream)->global;
if (!global)
return 0;
void *tmp = talloc_new(NULL);
struct mp_cache_opts *opts =
mp_get_config_group(tmp, global, &stream_cache_conf);
int r = stream_enable_cache(stream, opts);
talloc_free(tmp);
return r;
}
static uint16_t stream_read_word_endian(stream_t *s, bool big_endian)
{
unsigned int y = stream_read_char(s);
@ -842,131 +736,6 @@ struct bstr stream_read_file(const char *filename, void *talloc_ctx,
return res;
}
#ifndef __MINGW32__
struct mp_cancel {
atomic_bool triggered;
int wakeup_pipe[2];
};
static void cancel_destroy(void *p)
{
struct mp_cancel *c = p;
if (c->wakeup_pipe[0] >= 0) {
close(c->wakeup_pipe[0]);
close(c->wakeup_pipe[1]);
}
}
struct mp_cancel *mp_cancel_new(void *talloc_ctx)
{
struct mp_cancel *c = talloc_ptrtype(talloc_ctx, c);
talloc_set_destructor(c, cancel_destroy);
*c = (struct mp_cancel){.triggered = ATOMIC_VAR_INIT(false)};
mp_make_wakeup_pipe(c->wakeup_pipe);
return c;
}
// Request abort.
void mp_cancel_trigger(struct mp_cancel *c)
{
atomic_store(&c->triggered, true);
(void)write(c->wakeup_pipe[1], &(char){0}, 1);
}
// Restore original state. (Allows reusing a mp_cancel.)
void mp_cancel_reset(struct mp_cancel *c)
{
atomic_store(&c->triggered, false);
// Flush it fully.
while (1) {
int r = read(c->wakeup_pipe[0], &(char[256]){0}, 256);
if (r < 0 && errno == EINTR)
continue;
if (r <= 0)
break;
}
}
// Return whether the caller should abort.
// For convenience, c==NULL is allowed.
bool mp_cancel_test(struct mp_cancel *c)
{
return c ? atomic_load_explicit(&c->triggered, memory_order_relaxed) : false;
}
// Wait until the event is signaled. If the timeout (in seconds) expires, return
// false. timeout==0 polls, timeout<0 waits forever.
bool mp_cancel_wait(struct mp_cancel *c, double timeout)
{
struct pollfd fd = { .fd = c->wakeup_pipe[0], .events = POLLIN };
poll(&fd, 1, timeout * 1000);
return fd.revents & POLLIN;
}
// The FD becomes readable if mp_cancel_test() would return true.
// Don't actually read from it, just use it for poll().
int mp_cancel_get_fd(struct mp_cancel *c)
{
return c->wakeup_pipe[0];
}
#else
struct mp_cancel {
atomic_bool triggered;
HANDLE event;
};
static void cancel_destroy(void *p)
{
struct mp_cancel *c = p;
CloseHandle(c->event);
}
struct mp_cancel *mp_cancel_new(void *talloc_ctx)
{
struct mp_cancel *c = talloc_ptrtype(talloc_ctx, c);
talloc_set_destructor(c, cancel_destroy);
*c = (struct mp_cancel){.triggered = ATOMIC_VAR_INIT(false)};
c->event = CreateEventW(NULL, TRUE, FALSE, NULL);
return c;
}
void mp_cancel_trigger(struct mp_cancel *c)
{
atomic_store(&c->triggered, true);
SetEvent(c->event);
}
void mp_cancel_reset(struct mp_cancel *c)
{
atomic_store(&c->triggered, false);
ResetEvent(c->event);
}
bool mp_cancel_test(struct mp_cancel *c)
{
return c ? atomic_load_explicit(&c->triggered, memory_order_relaxed) : false;
}
bool mp_cancel_wait(struct mp_cancel *c, double timeout)
{
return WaitForSingleObject(c->event, timeout < 0 ? INFINITE : timeout * 1000)
== WAIT_OBJECT_0;
}
void *mp_cancel_get_event(struct mp_cancel *c)
{
return c->event;
}
int mp_cancel_get_fd(struct mp_cancel *c)
{
return -1;
}
#endif
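An aside, not part of the commit: the POSIX variant above is the classic self-pipe idiom — triggering writes one byte, waiting polls the read end, so the cancel state can be multiplexed with other file descriptors. A minimal standalone sketch of the same trigger/wait flow (error handling mostly omitted):

    #include <poll.h>
    #include <stdbool.h>
    #include <unistd.h>

    int main(void)
    {
        int wakeup_pipe[2];
        if (pipe(wakeup_pipe) < 0)
            return 1;

        (void)write(wakeup_pipe[1], &(char){0}, 1);  // cf. mp_cancel_trigger()

        struct pollfd fd = { .fd = wakeup_pipe[0], .events = POLLIN };
        poll(&fd, 1, 1000);                          // cf. mp_cancel_wait(c, 1.0)
        bool canceled = fd.revents & POLLIN;

        close(wakeup_pipe[0]);
        close(wakeup_pipe[1]);
        return canceled ? 0 : 1;
    }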
char **stream_get_proto_list(void)
{
char **list = NULL;

View File

@ -52,11 +52,6 @@
enum stream_ctrl {
STREAM_CTRL_GET_SIZE = 1,
// Cache
STREAM_CTRL_GET_CACHE_INFO,
STREAM_CTRL_SET_CACHE_SIZE,
STREAM_CTRL_SET_READAHEAD,
// stream_memory.c
STREAM_CTRL_SET_CONTENTS,
@ -104,14 +99,6 @@ enum stream_ctrl {
STREAM_CTRL_SET_CURRENT_TITLE,
};
// for STREAM_CTRL_GET_CACHE_INFO
struct stream_cache_info {
int64_t size;
int64_t fill;
bool idle;
int64_t speed;
};
struct stream_lang_req {
int type; // STREAM_AUDIO, STREAM_SUB
int id;
@ -179,34 +166,21 @@ typedef struct stream {
bool seekable : 1; // presence of general byte seeking support
bool fast_skip : 1; // consider stream fast enough to fw-seek by skipping
bool is_network : 1; // original stream_info_t.is_network flag
bool allow_caching : 1; // stream cache makes sense
bool caching : 1; // is a cache, or accesses a cache
bool is_local_file : 1; // from the filesystem
bool is_directory : 1; // directory on the filesystem
bool access_references : 1; // open other streams
bool extended_ctrls : 1; // supports some of BD/DVD/DVB/TV controls
struct mp_log *log;
struct mpv_global *global;
struct mp_cancel *cancel; // cancellation notification
struct stream *underlying; // e.g. cache wrapper
// Includes additional padding in case sizes get rounded up by sector size.
unsigned char buffer[];
} stream_t;
int stream_fill_buffer(stream_t *s);
struct mp_cache_opts;
bool stream_wants_cache(stream_t *stream, struct mp_cache_opts *opts);
int stream_enable_cache_defaults(stream_t **stream);
// Internal
int stream_cache_init(stream_t *cache, stream_t *stream,
struct mp_cache_opts *opts);
int stream_file_cache_init(stream_t *cache, stream_t *stream,
struct mp_cache_opts *opts);
int stream_write_buffer(stream_t *s, unsigned char *buf, int len);
inline static int stream_read_char(stream_t *s)
@ -254,14 +228,6 @@ stream_t *open_memory_stream(void *data, int len);
void mp_url_unescape_inplace(char *buf);
char *mp_url_escape(void *talloc_ctx, const char *s, const char *ok);
struct mp_cancel *mp_cancel_new(void *talloc_ctx);
void mp_cancel_trigger(struct mp_cancel *c);
bool mp_cancel_test(struct mp_cancel *c);
bool mp_cancel_wait(struct mp_cancel *c, double timeout);
void mp_cancel_reset(struct mp_cancel *c);
void *mp_cancel_get_event(struct mp_cancel *c); // win32 HANDLE
int mp_cancel_get_fd(struct mp_cancel *c);
// stream_file.c
char *mp_file_url_to_filename(void *talloc_ctx, bstr url);
char *mp_file_get_path(void *talloc_ctx, bstr url);

View File

@ -22,7 +22,6 @@
static int open_f(stream_t *stream)
{
stream->demuxer = "lavf";
stream->allow_caching = false;
return STREAM_OK;
}

View File

@ -1117,9 +1117,9 @@ static int dvb_open(stream_t *stream)
stream->close = dvbin_close;
stream->control = dvbin_stream_control;
stream->streaming = true;
stream->allow_caching = true;
stream->demuxer = "lavf";
stream->lavf_type = "mpegts";
stream->extended_ctrls = true;
return STREAM_OK;

View File

@ -520,7 +520,6 @@ static int open_s_internal(stream_t *stream)
stream->close = stream_dvdnav_close;
stream->demuxer = "+disc";
stream->lavf_type = "mpeg";
stream->allow_caching = false;
return STREAM_OK;
}

View File

@ -6,7 +6,6 @@
static int s_open (struct stream *stream)
{
stream->demuxer = "edl";
stream->allow_caching = false;
return STREAM_OK;
}

View File

@ -34,6 +34,7 @@
#include "common/common.h"
#include "common/msg.h"
#include "misc/thread_tools.h"
#include "stream.h"
#include "options/m_option.h"
#include "options/path.h"
@ -64,6 +65,7 @@ struct priv {
bool regular_file;
bool appending;
int64_t orig_size;
struct mp_cancel *cancel;
};
// Total timeout = RETRY_TIMEOUT * MAX_RETRIES
@ -84,7 +86,7 @@ static int fill_buffer(stream_t *s, char *buffer, int max_len)
#ifndef __MINGW32__
if (p->use_poll) {
int c = s->cancel ? mp_cancel_get_fd(s->cancel) : -1;
int c = mp_cancel_get_fd(p->cancel);
struct pollfd fds[2] = {
{.fd = p->fd, .events = POLLIN},
{.fd = c, .events = POLLIN},
@ -111,7 +113,7 @@ static int fill_buffer(stream_t *s, char *buffer, int max_len)
if (!p->appending || p->use_poll)
break;
if (mp_cancel_wait(s->cancel, RETRY_TIMEOUT))
if (mp_cancel_wait(p->cancel, RETRY_TIMEOUT))
break;
}
@ -159,6 +161,7 @@ static void s_close(stream_t *s)
struct priv *p = s->priv;
if (p->close)
close(p->fd);
talloc_free(p->cancel);
}
// If url is a file:// URL, return the local filename, otherwise return NULL.
@ -323,7 +326,6 @@ static int open_f(stream_t *stream)
if (fstat(p->fd, &st) == 0) {
if (S_ISDIR(st.st_mode)) {
stream->is_directory = true;
stream->allow_caching = false;
MP_INFO(stream, "This is a directory - adding to playlist.\n");
} else if (S_ISREG(st.st_mode)) {
p->regular_file = true;
@ -360,6 +362,10 @@ static int open_f(stream_t *stream)
p->orig_size = get_size(stream);
p->cancel = mp_cancel_new(p);
if (stream->cancel)
mp_cancel_set_parent(p->cancel, stream->cancel);
return STREAM_OK;
}

View File

@ -24,6 +24,7 @@
#include "common/msg.h"
#include "common/tags.h"
#include "common/av_common.h"
#include "misc/thread_tools.h"
#include "stream.h"
#include "options/m_config.h"
#include "options/m_option.h"

View File

@ -20,6 +20,7 @@
#include "misc/bstr.h"
#include "common/common.h"
#include "misc/thread_tools.h"
#include "stream.h"
#include "stream_libarchive.h"

View File

@ -62,7 +62,6 @@ static int open_f(stream_t *stream)
stream->seekable = true;
stream->control = control;
stream->read_chunk = 1024 * 1024;
stream->allow_caching = false;
struct priv *p = talloc_zero(stream, struct priv);
stream->priv = p;

View File

@ -31,7 +31,6 @@ static int
mf_stream_open (stream_t *stream)
{
stream->demuxer = "mf";
stream->allow_caching = false;
return STREAM_OK;
}

View File

@ -41,7 +41,6 @@ tv_stream_open (stream_t *stream)
stream->close=tv_stream_close;
stream->demuxer = "tv";
stream->allow_caching = false;
return STREAM_OK;
}

test/json.c Normal file
View File

@ -0,0 +1,97 @@
#include "test_helpers.h"
#include "common/common.h"
#include "misc/json.h"
#include "misc/node.h"
struct entry {
const char *src;
const char *out_txt;
struct mpv_node out_data;
bool expect_fail;
};
#define TEXT(...) #__VA_ARGS__
#define VAL_LIST(...) (struct mpv_node[]){__VA_ARGS__}
#define L(...) __VA_ARGS__
#define NODE_INT64(v) {.format = MPV_FORMAT_INT64, .u = { .int64 = (v) }}
#define NODE_STR(v) {.format = MPV_FORMAT_STRING, .u = { .string = (v) }}
#define NODE_BOOL(v) {.format = MPV_FORMAT_FLAG, .u = { .flag = (bool)(v) }}
#define NODE_FLOAT(v) {.format = MPV_FORMAT_DOUBLE, .u = { .double_ = (v) }}
#define NODE_NONE() {.format = MPV_FORMAT_NONE }
#define NODE_ARRAY(...) {.format = MPV_FORMAT_NODE_ARRAY, .u = { .list = \
&(struct mpv_node_list) { \
.num = sizeof(VAL_LIST(__VA_ARGS__)) / sizeof(struct mpv_node), \
.values = VAL_LIST(__VA_ARGS__)}}}
#define NODE_MAP(k, v) {.format = MPV_FORMAT_NODE_MAP, .u = { .list = \
&(struct mpv_node_list) { \
.num = sizeof(VAL_LIST(v)) / sizeof(struct mpv_node), \
.values = VAL_LIST(v), \
.keys = (char**)(const char *[]){k}}}}
static const struct entry entries[] = {
{ "null", "null", NODE_NONE()},
{ "true", "true", NODE_BOOL(true)},
{ "false", "false", NODE_BOOL(false)},
{ "", .expect_fail = true},
{ "abc", .expect_fail = true},
{ " 123 ", "123", NODE_INT64(123)},
{ "123.25", "123.250000", NODE_FLOAT(123.25)},
{ TEXT("a\n\\\/\\\""), TEXT("a\n\\/\\\""), NODE_STR("a\n\\/\\\"")},
{ TEXT("a\u2c29"), TEXT("aⰩ"), NODE_STR("a\342\260\251")},
{ "[1,2,3]", "[1,2,3]",
NODE_ARRAY(NODE_INT64(1), NODE_INT64(2), NODE_INT64(3))},
{ "[ ]", "[]", NODE_ARRAY()},
{ "[1,,2]", .expect_fail = true},
{ "[,]", .expect_fail = true},
{ TEXT({"a":1, "b":2}), TEXT({"a":1,"b":2}),
NODE_MAP(L("a", "b"), L(NODE_INT64(1), NODE_INT64(2)))},
{ "{ }", "{}", NODE_MAP(L(), L())},
{ TEXT({"a":b}), .expect_fail = true},
{ TEXT({1a:"b"}), .expect_fail = true},
// non-standard extensions
{ "[1,2,]", "[1,2]", NODE_ARRAY(NODE_INT64(1), NODE_INT64(2))},
{ TEXT({a:"b"}), TEXT({"a":"b"}),
NODE_MAP(L("a"), L(NODE_STR("b")))},
{ TEXT({a="b"}), TEXT({"a":"b"}),
NODE_MAP(L("a"), L(NODE_STR("b")))},
{ TEXT({a ="b"}), TEXT({"a":"b"}),
NODE_MAP(L("a"), L(NODE_STR("b")))},
{ TEXT({_a12="b"}), TEXT({"_a12":"b"}),
NODE_MAP(L("_a12"), L(NODE_STR("b")))},
};
#define MAX_DEPTH 10
static void test_json(void **state)
{
for (int n = 0; n < MP_ARRAY_SIZE(entries); n++) {
const struct entry *e = &entries[n];
print_message("%d: %s\n", n, e->src);
void *tmp = talloc_new(NULL);
char *s = talloc_strdup(tmp, e->src);
json_skip_whitespace(&s);
struct mpv_node res;
bool ok = json_parse(tmp, &res, &s, MAX_DEPTH) >= 0;
assert_true(ok != e->expect_fail);
if (!ok)
continue;
char *d = talloc_strdup(tmp, "");
assert_true(json_write(&d, &res) >= 0);
assert_string_equal(e->out_txt, d);
assert_true(equal_mpv_node(&e->out_data, &res));
talloc_free(tmp);
}
}
int main(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test(test_json),
};
return cmocka_run_group_tests(tests, NULL, NULL);
}

test/linked_list.c Normal file
View File

@ -0,0 +1,162 @@
#include "test_helpers.h"
#include "common/common.h"
#include "misc/linked_list.h"
struct list_item {
int v;
struct {
struct list_item *prev, *next;
} list_node;
};
struct the_list {
struct list_item *head, *tail;
};
static bool do_check_list(struct the_list *lst, int *c, int num_c)
{
if (!lst->head)
assert_true(!lst->tail);
if (!lst->tail)
assert_true(!lst->head);
for (struct list_item *cur = lst->head; cur; cur = cur->list_node.next) {
if (cur->list_node.prev) {
assert_true(cur->list_node.prev->list_node.next == cur);
assert_true(lst->head != cur);
} else {
assert_true(lst->head == cur);
}
if (cur->list_node.next) {
assert_true(cur->list_node.next->list_node.prev == cur);
assert_true(lst->tail != cur);
} else {
assert_true(lst->tail == cur);
}
if (num_c < 1)
return false;
if (c[0] != cur->v)
return false;
num_c--;
c++;
}
if (num_c)
return false;
return true;
}
static void test_linked_list(void **state)
{
struct the_list lst = {0};
struct list_item e1 = {1};
struct list_item e2 = {2};
struct list_item e3 = {3};
struct list_item e4 = {4};
struct list_item e5 = {5};
struct list_item e6 = {6};
#define check_list(...) \
assert_true(do_check_list(&lst, (int[]){__VA_ARGS__}, \
sizeof((int[]){__VA_ARGS__}) / sizeof(int)));
#define check_list_empty() \
assert_true(do_check_list(&lst, NULL, 0));
check_list_empty();
LL_APPEND(list_node, &lst, &e1);
check_list(1);
LL_APPEND(list_node, &lst, &e2);
check_list(1, 2);
LL_APPEND(list_node, &lst, &e4);
check_list(1, 2, 4);
LL_CLEAR(list_node, &lst);
check_list_empty();
LL_PREPEND(list_node, &lst, &e4);
check_list(4);
LL_PREPEND(list_node, &lst, &e2);
check_list(2, 4);
LL_PREPEND(list_node, &lst, &e1);
check_list(1, 2, 4);
LL_CLEAR(list_node, &lst);
check_list_empty();
LL_INSERT_BEFORE(list_node, &lst, (struct list_item *)NULL, &e6);
check_list(6);
LL_INSERT_BEFORE(list_node, &lst, (struct list_item *)NULL, &e1);
check_list(6, 1);
LL_INSERT_BEFORE(list_node, &lst, (struct list_item *)NULL, &e2);
check_list(6, 1, 2);
LL_INSERT_BEFORE(list_node, &lst, &e6, &e3);
check_list(3, 6, 1, 2);
LL_INSERT_BEFORE(list_node, &lst, &e6, &e5);
check_list(3, 5, 6, 1, 2);
LL_INSERT_BEFORE(list_node, &lst, &e2, &e4);
check_list(3, 5, 6, 1, 4, 2);
LL_REMOVE(list_node, &lst, &e6);
check_list(3, 5, 1, 4, 2);
LL_REMOVE(list_node, &lst, &e3);
check_list(5, 1, 4, 2);
LL_REMOVE(list_node, &lst, &e2);
check_list(5, 1, 4);
LL_REMOVE(list_node, &lst, &e4);
check_list(5, 1);
LL_REMOVE(list_node, &lst, &e5);
check_list(1);
LL_REMOVE(list_node, &lst, &e1);
check_list_empty();
LL_APPEND(list_node, &lst, &e2);
check_list(2);
LL_REMOVE(list_node, &lst, &e2);
check_list_empty();
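// Inserting after NULL prepends at the head.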
LL_INSERT_AFTER(list_node, &lst, (struct list_item *)NULL, &e1);
check_list(1);
LL_INSERT_AFTER(list_node, &lst, (struct list_item *)NULL, &e2);
check_list(2, 1);
LL_INSERT_AFTER(list_node, &lst, (struct list_item *)NULL, &e3);
check_list(3, 2, 1);
LL_INSERT_AFTER(list_node, &lst, &e3, &e4);
check_list(3, 4, 2, 1);
LL_INSERT_AFTER(list_node, &lst, &e4, &e5);
check_list(3, 4, 5, 2, 1);
LL_INSERT_AFTER(list_node, &lst, &e1, &e6);
check_list(3, 4, 5, 2, 1, 6);
}
int main(void)
{
const struct CMUnitTest tests[] = {
cmocka_unit_test(test_linked_list),
};
return cmocka_run_group_tests(tests, NULL, NULL);
}

View File

@ -28,9 +28,12 @@
#include <libavutil/intreadwrite.h>
#include <libavutil/pixdesc.h>
#include "config.h"
#include "mpv_talloc.h"
#include "common/global.h"
#include "common/msg.h"
#include "options/m_config.h"
#include "options/options.h"
#include "misc/bstr.h"
#include "common/av_common.h"
@ -59,6 +62,8 @@ static void uninit_avctx(struct mp_filter *vd);
static int get_buffer2_direct(AVCodecContext *avctx, AVFrame *pic, int flags);
static enum AVPixelFormat get_format_hwdec(struct AVCodecContext *avctx,
const enum AVPixelFormat *pix_fmt);
static int hwdec_validate_opt(struct mp_log *log, const m_option_t *opt,
struct bstr name, struct bstr param);
#define HWDEC_DELAY_QUEUE_COUNT 2
@ -84,6 +89,9 @@ struct vd_lavc_params {
int software_fallback;
char **avopts;
int dr;
char *hwdec_api;
char *hwdec_codecs;
int hwdec_image_format;
};
static const struct m_opt_choice_alternatives discard_names[] = {
@ -101,20 +109,24 @@ static const struct m_opt_choice_alternatives discard_names[] = {
const struct m_sub_options vd_lavc_conf = {
.opts = (const m_option_t[]){
OPT_FLAG("fast", fast, 0),
OPT_FLAG("show-all", show_all, 0),
OPT_DISCARD("skiploopfilter", skip_loop_filter, 0),
OPT_DISCARD("skipidct", skip_idct, 0),
OPT_DISCARD("skipframe", skip_frame, 0),
OPT_DISCARD("framedrop", framedrop, 0),
OPT_INT("threads", threads, M_OPT_MIN, .min = 0),
OPT_FLAG("bitexact", bitexact, 0),
OPT_FLAG("assume-old-x264", old_x264, 0),
OPT_FLAG("check-hw-profile", check_hw_profile, 0),
OPT_CHOICE_OR_INT("software-fallback", software_fallback, 0, 1, INT_MAX,
({"no", INT_MAX}, {"yes", 1})),
OPT_KEYVALUELIST("o", avopts, 0),
OPT_FLAG("dr", dr, 0),
OPT_FLAG("vd-lavc-fast", fast, 0),
OPT_FLAG("vd-lavc-show-all", show_all, 0),
OPT_DISCARD("vd-lavc-skiploopfilter", skip_loop_filter, 0),
OPT_DISCARD("vd-lavc-skipidct", skip_idct, 0),
OPT_DISCARD("vd-lavc-skipframe", skip_frame, 0),
OPT_DISCARD("vd-lavc-framedrop", framedrop, 0),
OPT_INT("vd-lavc-threads", threads, M_OPT_MIN, .min = 0),
OPT_FLAG("vd-lavc-bitexact", bitexact, 0),
OPT_FLAG("vd-lavc-assume-old-x264", old_x264, 0),
OPT_FLAG("vd-lavc-check-hw-profile", check_hw_profile, 0),
OPT_CHOICE_OR_INT("vd-lavc-software-fallback", software_fallback,
0, 1, INT_MAX, ({"no", INT_MAX}, {"yes", 1})),
OPT_KEYVALUELIST("vd-lavc-o", avopts, 0),
OPT_FLAG("vd-lavc-dr", dr, 0),
OPT_STRING_VALIDATE("hwdec", hwdec_api, M_OPT_OPTIONAL_PARAM,
hwdec_validate_opt),
OPT_STRING("hwdec-codecs", hwdec_codecs, 0),
OPT_IMAGEFORMAT("hwdec-image-format", hwdec_image_format, 0, .min = -1),
{0}
},
.size = sizeof(struct vd_lavc_params),
@ -127,6 +139,8 @@ const struct m_sub_options vd_lavc_conf = {
.skip_frame = AVDISCARD_DEFAULT,
.framedrop = AVDISCARD_NONREF,
.dr = 1,
.hwdec_api = HAVE_RPI ? "mmal" : "no",
.hwdec_codecs = "h264,vc1,wmv3,hevc,mpeg2video,vp9",
},
};
@ -147,7 +161,8 @@ struct hwdec_info {
typedef struct lavc_ctx {
struct mp_log *log;
struct MPOpts *opts;
struct m_config_cache *opts_cache;
struct vd_lavc_params *opts;
struct mp_codec_params *codec;
AVCodecContext *avctx;
AVFrame *pic;
@ -409,6 +424,8 @@ static void select_and_set_hwdec(struct mp_filter *vd)
vd_ffmpeg_ctx *ctx = vd->priv;
const char *codec = ctx->codec->codec;
m_config_cache_update(ctx->opts_cache);
bstr opt = bstr0(ctx->opts->hwdec_api);
bool hwdec_requested = !bstr_equals0(opt, "no");
@ -493,8 +510,8 @@ static void select_and_set_hwdec(struct mp_filter *vd)
}
}
int hwdec_validate_opt(struct mp_log *log, const m_option_t *opt,
struct bstr name, struct bstr param)
static int hwdec_validate_opt(struct mp_log *log, const m_option_t *opt,
struct bstr name, struct bstr param)
{
if (bstr_equals0(param, "help")) {
struct hwdec_info *hwdecs = NULL;
@ -543,9 +560,11 @@ static void reinit(struct mp_filter *vd)
static void init_avctx(struct mp_filter *vd)
{
vd_ffmpeg_ctx *ctx = vd->priv;
struct vd_lavc_params *lavc_param = ctx->opts->vd_lavc_params;
struct vd_lavc_params *lavc_param = ctx->opts;
struct mp_codec_params *c = ctx->codec;
m_config_cache_update(ctx->opts_cache);
assert(!ctx->avctx);
const AVCodec *lavc_codec = NULL;
@ -911,7 +930,7 @@ static bool prepare_decoding(struct mp_filter *vd)
{
vd_ffmpeg_ctx *ctx = vd->priv;
AVCodecContext *avctx = ctx->avctx;
struct vd_lavc_params *opts = ctx->opts->vd_lavc_params;
struct vd_lavc_params *opts = ctx->opts;
if (!avctx || ctx->hwdec_failed)
return false;
@ -937,7 +956,7 @@ static bool prepare_decoding(struct mp_filter *vd)
static void handle_err(struct mp_filter *vd)
{
vd_ffmpeg_ctx *ctx = vd->priv;
struct vd_lavc_params *opts = ctx->opts->vd_lavc_params;
struct vd_lavc_params *opts = ctx->opts;
MP_WARN(vd, "Error while decoding frame!\n");
@ -1194,7 +1213,8 @@ static struct mp_decoder *create(struct mp_filter *parent,
vd_ffmpeg_ctx *ctx = vd->priv;
ctx->log = vd->log;
ctx->opts = vd->global->opts;
ctx->opts_cache = m_config_cache_alloc(ctx, vd->global, &vd_lavc_conf);
ctx->opts = ctx->opts_cache->opts;
ctx->codec = codec;
ctx->decoder = talloc_strdup(ctx, decoder);
ctx->hwdec_swpool = mp_image_pool_new(ctx);

View File

@ -3871,7 +3871,9 @@ static void reinit_from_options(struct gl_video *p)
gl_video_setup_hooks(p);
reinit_osd(p);
if (p->opts.interpolation && !p->global->opts->video_sync && !p->dsi_warned) {
int vs;
mp_read_option_raw(p->global, "video-sync", &m_option_type_choice, &vs);
if (p->opts.interpolation && !vs && !p->dsi_warned) {
MP_WARN(p, "Interpolation now requires enabling display-sync mode.\n"
"E.g.: --video-sync=display-resample\n");
p->dsi_warned = true;

View File

@ -300,11 +300,9 @@ static struct vo *vo_create(bool probing, struct mpv_global *global,
m_config_cache_set_dispatch_change_cb(vo->opts_cache, vo->in->dispatch,
update_opts, vo);
#if HAVE_GL
vo->gl_opts_cache = m_config_cache_alloc(NULL, global, &gl_video_conf);
m_config_cache_set_dispatch_change_cb(vo->gl_opts_cache, vo->in->dispatch,
update_opts, vo);
#endif
vo->eq_opts_cache = m_config_cache_alloc(NULL, global, &mp_csp_equalizer_conf);
m_config_cache_set_dispatch_change_cb(vo->eq_opts_cache, vo->in->dispatch,
@ -332,7 +330,9 @@ error:
struct vo *init_best_video_out(struct mpv_global *global, struct vo_extra *ex)
{
struct m_obj_settings *vo_list = global->opts->vo->video_driver_list;
struct mp_vo_opts *opts = mp_get_config_group(NULL, global, &vo_sub_opts);
struct m_obj_settings *vo_list = opts->video_driver_list;
struct vo *vo = NULL;
// first try the preferred drivers, with their optional subdevice param:
if (vo_list && vo_list[0].name) {
for (int n = 0; vo_list[n].name; n++) {
@ -340,11 +340,11 @@ struct vo *init_best_video_out(struct mpv_global *global, struct vo_extra *ex)
if (strlen(vo_list[n].name) == 0)
goto autoprobe;
bool p = !!vo_list[n + 1].name;
struct vo *vo = vo_create(p, global, ex, vo_list[n].name);
vo = vo_create(p, global, ex, vo_list[n].name);
if (vo)
return vo;
goto done;
}
return NULL;
goto done;
}
autoprobe:
// now try the rest...
@ -352,11 +352,13 @@ autoprobe:
const struct vo_driver *driver = video_out_drivers[i];
if (driver == &video_out_null)
break;
struct vo *vo = vo_create(true, global, ex, (char *)driver->name);
vo = vo_create(true, global, ex, (char *)driver->name);
if (vo)
return vo;
goto done;
}
return NULL;
done:
talloc_free(opts);
return vo;
}
static void terminate_vo(void *p)

View File

@ -20,7 +20,12 @@ def __add_generic_flags__(ctx):
ctx.env.CFLAGS += ["-D_ISOC99_SOURCE", "-D_GNU_SOURCE",
"-D_LARGEFILE_SOURCE", "-D_FILE_OFFSET_BITS=64",
"-D_LARGEFILE64_SOURCE",
"-std=c99", "-Wall"]
"-Wall"]
if ctx.check_cc(cflags="-std=c11", mandatory=False):
ctx.env.CFLAGS += ["-std=c11"]
else:
ctx.env.CFLAGS += ["-std=c99"]
if ctx.is_optimization():
ctx.env.CFLAGS += ['-O2']

View File

@ -323,6 +323,7 @@ def build(ctx):
( "misc/rendezvous.c" ),
( "misc/ring.c" ),
( "misc/thread_pool.c" ),
( "misc/thread_tools.c" ),
## Options
( "options/m_config.c" ),
@ -356,8 +357,6 @@ def build(ctx):
( "stream/ai_oss.c", "oss-audio && audio-input" ),
( "stream/ai_sndio.c", "sndio && audio-input" ),
( "stream/audio_in.c", "audio-input" ),
( "stream/cache.c" ),
( "stream/cache_file.c" ),
( "stream/cookies.c" ),
( "stream/dvb_tune.c", "dvbin" ),
( "stream/frequencies.c", "tv" ),