We support all preset transitions in the xfade filter,
as well as a handful of "custom" ones we define.
We only support an audio cross-fade; we may want to support J and L audio cuts (switching audio
before/after the transition) later.
This allows full cuts to support multiple ranges in the same way fast cuts do,
by passing multiple inputs to ffmpeg and joining them with concat filters.
It will be easy to add transitions later, as this is "just" a matter of replacing a concat filter
with an xfade + afade filter.
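As a rough sketch (the stream labels are real ffmpeg filter_complex syntax, but the helper is illustrative and not the actual cutter code), joining N ranges comes down to a single concat filter over all the inputs:

```python
def concat_filtergraph(num_ranges):
    # Join each input's video and audio streams with one concat filter,
    # e.g. for two ranges: [0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]
    pairs = "".join("[{i}:v][{i}:a]".format(i=i) for i in range(num_ranges))
    return "{}concat=n={}:v=1:a=1[v][a]".format(pairs, num_ranges)
```

Adding transitions later would then mean swapping that single concat for xfade (video) plus afade (audio) filters between each consecutive pair of ranges.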
This happens when we are live viewing a stream, and the last available segment
is at the end of an hour.
We generate the end timestamp as the end of the last available hour,
which might fall within the range covered by the last available segment. When this happens,
we stream the last segment and then report that we have reached the requested end point.
This makes the player stop asking for more segments.
The fix is to pad the end time by an extra hour so we're asking for 1 hour more than the
last available hour.
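A minimal sketch of the fix, with illustrative names rather than the actual restreamer code:

```python
from datetime import timedelta

def playlist_end(last_available_hour):
    # last_available_hour is the start of the newest hour we have segments for.
    # Its end is +1h; pad by one more hour so a segment that runs up to the hour
    # boundary doesn't make us report a premature end of stream.
    return last_available_hour + timedelta(hours=2)
```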
Flask sends a chunked response with one chunk per item yielded.
This adds a lot of overhead per yielded item.
We avoid this by collecting the lines of the media playlist into larger chunks
and only flushing once every 1000 segments.
For small playlists this means they'll be emitted as one chunk,
but for large playlists we still get the streaming behaviour.
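Roughly, the generator ends up looking like this (a sketch; render_segment is a hypothetical stand-in for the real playlist rendering):

```python
def generate_playlist(segments):
    # Buffer rendered playlist lines and yield them in large chunks, so Flask
    # doesn't emit one HTTP chunk (with its framing overhead) per line.
    lines = []
    for i, segment in enumerate(segments):
        lines.append(render_segment(segment))
        if (i + 1) % 1000 == 0:
            yield "".join(lines)
            lines = []
    if lines:
        yield "".join(lines)
```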
Seeing the following error on latest versions of gevent:
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.9/site-packages/zulip_bots/schedulebot.py", line 2, in <module>
import gevent.monkey
File "/usr/lib/python3.9/site-packages/gevent/__init__.py", line 72, in <module>
from gevent._hub_local import get_hub
File "/usr/lib/python3.9/site-packages/gevent/_hub_local.py", line 150, in <module>
import_c_accel(globals(), 'gevent.__hub_local')
File "/usr/lib/python3.9/site-packages/gevent/_util.py", line 148, in import_c_accel
mod = importlib.import_module(cname)
File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'gevent._gevent_c_hub_local'
This should hopefully result in frames on the edge of timestamps being extracted
from a combination of the neighboring segment and the naive one,
so that we don't get errors extracting a frame.
Each template now has two files, a `.png` and a `.json`. This is currently making them show up twice.
To fix this, we only consider files which end in `.png`.
We do this in the backend so the frontend doesn't need to know about it.
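In the endpoint that lists templates, this is roughly (a sketch with illustrative names):

```python
import os

def list_templates(template_dir):
    # Each template is a .png plus a .json of metadata; only list the .png
    # files so each template shows up once.
    return sorted(
        os.path.splitext(name)[0]
        for name in os.listdir(template_dir)
        if name.endswith(".png")
    )
```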
Rarely, we find ourselves needing to explicitly delete some data, eg. something that shouldn't
have been public and should be removed from all records.
It would also be nice if we could "clean up" bad versions of the same segment,
which occasionally come up when downloaders have issues.
With our distributed segment database, this is actually rather difficult as deleting the data
from any one server would cause it to be restored from the others. It was only possible
by stopping all backfill, deleting the data on all servers, then starting backfill again.
Here we introduce a more practical approach. An operator creates an empty flag file
with the same name as the segment to be deleted, but with a `.tombstone` extension.
eg. to delete a file `/segments/desertbus/source/2019-11-13T02/45:51.608000-2.0-full-7IS92rssMzoSBQDIevHStbTNy-URRV3Vw-jzZ6pwOZM.ts`,
you would create a tombstone `/segments/desertbus/source/2019-11-13T02/45:51.608000-2.0-full-7IS92rssMzoSBQDIevHStbTNy-URRV3Vw-jzZ6pwOZM.tombstone`.
These tombstone files do two important things:
* They hide the segment from being listed, which both means:
  * It can't be restreamed or put into a video
  * It can't be backfilled to other nodes
* The tombstone files themselves do get backfilled to other nodes, so you only need to mark them on one server.
Once the tombstone has propagated to all nodes, the segment file can be deleted independently on each one.
We chose not to have a tombstone automatically trigger a segment deletion for safety reasons.
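As a sketch of how listing honours tombstones (illustrative, not the exact implementation):

```python
import os

def list_hour(hour_dir):
    # A segment is hidden if a file with the same name but a .tombstone
    # extension sits alongside it; tombstone files themselves are never
    # returned as segments.
    names = set(os.listdir(hour_dir))
    visible = []
    for name in names:
        base, ext = os.path.splitext(name)
        if ext == ".tombstone":
            continue
        if base + ".tombstone" in names:
            continue  # segment has been tombstoned, hide it
        visible.append(name)
    return sorted(visible)
```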
Previously, both restreamer and thrimshim had complex logic for handling graceful shutdown,
each in a different way, and both were still prone to race conditions.
We replace this with a common method that does it properly.
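For illustration, assuming a gevent WSGIServer, the shared approach boils down to something like:

```python
import signal
import gevent

def serve_with_graceful_shutdown(server):
    # On SIGTERM, stop accepting new connections and wait for in-flight
    # requests to finish, rather than each service rolling its own handling.
    gevent.signal_handler(signal.SIGTERM, server.stop)
    server.serve_forever()
```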
Fixes #226
When pushed, this tells github to associate the ghcr.io repo that was pushed to
with the github repo specified (the owner needs to match).
This does a few things.
Most importantly, this automatically gives github actions credentials to push to these
repositories when run in the context of the wubloader repo.
This is the simplest case as we can just cut each range like we already do,
then concat the results.
We still allow for the full design in the database and cutter, but error out if the transitions
are ever anything but hard cuts, or if it's a full cut.
We also update the restreamer to accept multiple ranges; however, for usability we still allow
the old "just one start and end" args.
Note this changes the thrimshim API to give and take the new "video_ranges" and "video_transitions" columns.
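On the restreamer side, accepting both forms can be as simple as normalising the args up front (a sketch; names are illustrative):

```python
def get_ranges(args):
    # Accept the new multi-range form, but fall back to the old single
    # start/end args if no ranges were given.
    if "ranges" in args:
        return parse_ranges(args["ranges"])  # hypothetical parser
    return [(args["start"], args["end"])]
```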
Check that open() calls for reading and writing use binary modes
Use alpine version with py3-pip package
Use python3 in Dockerfile CMD
Remove sys.setdefaultencoding() "hack"
Simplify ensure_directory() in common.common package
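For example, the simplified ensure_directory() can be little more than (a sketch, assuming it only needs the directory to exist):

```python
import os

def ensure_directory(path):
    # makedirs with exist_ok replaces the old try/except-EEXIST dance.
    os.makedirs(path, exist_ok=True)
```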