By restarting the update immediately instead of waiting for the next one.
We only try this up to 3 times to prevent excessive quota usage if it keeps happening.
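A minimal sketch of the shape this takes (all names hypothetical):

    MAX_ATTEMPTS = 3  # bound the retries so a persistent failure can't burn API quota

    def update_with_retries(playlist):
        for attempt in range(MAX_ATTEMPTS):
            try:
                update_playlist(playlist)  # hypothetical single update pass
                return
            except StaleStateError:  # hypothetical "our view was out of date" error
                continue  # restart the update immediately instead of waiting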
Previously, we would only do reorderings if we were refreshing the playlist
for some other reason (e.g. a video was inserted).
We want to refresh the playlist before attempting reorderings,
so we split the routine into two parts:
- A part that finds out-of-order videos and returns a list of moves to make.
- A part that executes those moves.
We do the former before AND after refreshing, and the latter only with the result from after.
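Sketched out, with hypothetical names:

    def find_reorderings(playlist_id, videos):
        # Compare the actual order to the desired order and return a list of
        # (entry, new_index) moves, without performing any of them.
        ...

    def execute_reorderings(playlist_id, moves):
        for entry, new_index in moves:
            move_entry(playlist_id, entry, new_index)  # hypothetical API call

    if find_reorderings(playlist_id, videos):
        videos = refresh_playlist(playlist_id)  # only refresh when needed
        execute_reorderings(playlist_id, find_reorderings(playlist_id, videos))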
In addition to sorting the videos themselves into the correct spots,
we need to special-case them when scanning for the correct place to insert other videos.
Note this only places them correctly on insert,
which requires they be set in the playlist config BEFORE being inserted.
A follow-up commit will handle the case of needing to re-order them post insert.
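For illustration only, the insert scan might treat pinned entries like this (all names hypothetical):

    def find_insert_index(entries, video, first_event_id, last_event_id):
        index = 0
        for entry in entries:
            if entry.event_id == first_event_id:
                index += 1  # always insert after the pinned first video
                continue
            if entry.event_id == last_event_id:
                break  # never insert after the pinned last video
            if sort_key(video) < sort_key(entry):
                break  # normal ordered insert among the unpinned entries
            index += 1
        return index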
And switch to passing around a namedtuple of these + tags instead of just tags.
To avoid confusion with the list of videos in the playlist, we refer to this data as the "playlist config".
These represent a pinned first/last video in a playlist.
On the choice of a video id vs an event id:
- Event ids are known before video ids, so we can "set and forget" before a video is uploaded
- No need to re-set if an event's video is re-edited or changed
- In cases where an external video is desired, we can use manual link to associate an event with it
Since we're referencing a primary key, we might as well also make it a proper foreign key
with sensible delete behaviour, though in practice we never delete events.
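As a sketch, the config might be represented as (field names assumed):

    from collections import namedtuple

    # The matching tags plus the pinned first/last videos, referenced by event id.
    PlaylistConfig = namedtuple("PlaylistConfig", ["tags", "first_event_id", "last_event_id"])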
This is required in order to be able to move entries later.
Note that our view of entry IDs may be out of date at any time, so whenever you use one
you have to handle it no longer existing.
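In other words, every use needs existence handling along these lines (names hypothetical):

    try:
        move_entry(playlist_id, entry_id, new_index)
    except EntryNotFound:  # hypothetical error for an entry deleted out from under us
        # Re-fetch the playlist to get fresh entry ids, then retry or give up.
        refresh_playlist(playlist_id)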
In theory there should be no change in actual output for no-transition cuts,
even though we're handling the logic in a very different way.
This doesn't actually allow transitions, but sets up most of what is needed.
We support all preset transitions in the xfade filter,
as well as a handful of "custom" ones we define.
We only support an audio cross-fade. We may want to support J and L audio cuts (switch audio
before/after the transition) later.
This allows full cuts to support multiple ranges in the same way fast cuts do,
by using multiple inputs to ffmpeg and concat filters joining them.
This will make it easy to add transitions later, as doing so is "just" replacing a concat filter
with an xfade + afade filter.
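For example, joining two ranges might build an argument list like this (a hand-written sketch, not the actual generated command):

    # Two input ranges joined by the concat filter; a transition would replace
    # the concat at each join point with an xfade plus an audio fade.
    args = [
        "ffmpeg", "-i", "range1.ts", "-i", "range2.ts",
        "-filter_complex", "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]",
        "-map", "[v]", "-map", "[a]",
        "output.mp4",
    ]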
This is a more featureful wrapper around ffmpeg with notable differences:
- It's used as a context manager, and so can manage its own cleanup
- It takes care of input feeding
- It can handle multiple inputs (via pipes), instead of one (via stdin)
This drastically reduces the setup and cleanup code required for most basic usage,
and the multi-input support will be used in follow-up changes.
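Typical usage then looks something like this (the interface shown is hypothetical):

    with FFmpeg(encode_args, inputs=[range1_segments, range2_segments]) as ffmpeg:
        # The wrapper spawns the process and feeds each input via its own pipe;
        # on exit it waits for the process and cleans up the pipes.
        for chunk in ffmpeg.stdout:
            upload_chunk(chunk)  # hypothetical consumer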
Of the 4 callers of this function, all but one set them to None.
We're about to replace that one usage with something else, so it makes more sense
to not have them as options at all and just have the user add to the encode args manually.
- Move sheets API into common dir, since it now has multiple users
- Live download from Google Sheets using Config
- Falls back on the old schedule if the new one can't be downloaded for some reason (sketched below)
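The fallback is roughly (names hypothetical):

    import logging

    try:
        schedule = download_schedule(config.sheet_id)
    except Exception:
        logging.exception("Failed to download schedule, falling back to previous one")
        schedule = last_good_schedule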
We are not sure what characters are allowed in chapter titles.
Emoji seem to be disallowed. It is unknown whether things like accents or smart quotes are allowed.
To be conservative, we warn if there are any non-ascii characters in the chapter title.
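The check itself is simple; a sketch:

    import logging

    def check_chapter_title(title):
        # We don't know YouTube's exact rules, so just flag anything non-ascii.
        if not title.isascii():
            logging.warning("Chapter title contains non-ascii characters: %r", title)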
This happens when we are live viewing a stream, and the last available segment
is at the end of an hour.
We generate the end timestamp as the end of the last available hour, which might fall
within the range of the last available segment. When this happens,
we stream the last segment, then report that we've reached the requested end point.
This makes the player stop asking for more segments.
The fix is to pad the end time by an extra hour so we're asking for 1 hour more than the
last available hour.
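That is, roughly (a sketch, names assumed):

    from datetime import timedelta

    # Ask for one hour more than the last available hour, so the player keeps
    # requesting segments until the stream has actually ended.
    end = end_of_last_hour + timedelta(hours=1)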
Flask sends a chunked response with one chunk per item yielded.
This adds a lot of overhead per yielded item.
We avoid this by collecting the lines of the media playlist into larger chunks
and only flushing once every 1000 segments.
For small playlists this means they'll be emitted as one chunk,
but for large playlists we still get the streaming behaviour.
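A sketch of the batching:

    def stream_playlist(segments):
        lines = []
        for i, segment in enumerate(segments):
            lines.append(render_segment(segment))  # hypothetical line formatter
            if (i + 1) % 1000 == 0:
                yield "".join(lines)  # one Flask chunk per 1000 segments
                lines = []
        if lines:
            yield "".join(lines)  # final partial chunk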
Sometimes in the wild (particularly on YouTube) segments may not be timed perfectly, so we allow
up to 10ms of gap or overlap to count as "equal" for purposes of finding the best segment.
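That is, two segments are treated as contiguous when (a sketch, attribute names assumed):

    ALLOWED_GAP = 0.01  # seconds; 10ms of gap or overlap still counts as "equal"

    def is_contiguous(prev_segment, segment):
        gap = (segment.start - prev_segment.end).total_seconds()
        return abs(gap) <= ALLOWED_GAP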
Seeing the following error on latest versions of gevent:
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.9/site-packages/zulip_bots/schedulebot.py", line 2, in <module>
    import gevent.monkey
  File "/usr/lib/python3.9/site-packages/gevent/__init__.py", line 72, in <module>
    from gevent._hub_local import get_hub
  File "/usr/lib/python3.9/site-packages/gevent/_hub_local.py", line 150, in <module>
    import_c_accel(globals(), 'gevent.__hub_local')
  File "/usr/lib/python3.9/site-packages/gevent/_util.py", line 148, in import_c_accel
    mod = importlib.import_module(cname)
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'gevent._gevent_c_hub_local'