This is the simplest case as we can just cut each range like we already do,
then concat the results.
We still allow for the full design in the database and cutter, but error out if the transitions
are ever anything but hard cuts, or if it's a full cut.
We also update the restreamer to accept ranges, though for usability we still allow
the old "just one start and end" args.
Note this changes the thrimshim API to give and take the new "video_ranges" and "video_transitions" columns.
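Very roughly, the validation this implies looks something like the sketch below (all names are
illustrative, not the actual thrimshim or cutter code, and we assume a hard cut is represented
as a None transition):

```python
def check_cuttable(video_ranges, video_transitions, is_full_cut):
    """Sketch only: reject anything the fast multi-range path can't handle yet."""
    if len(video_ranges) > 1:
        # Multiple ranges are only supported when joined by hard cuts...
        if any(t is not None for t in video_transitions):
            raise ValueError("only hard cut transitions are supported for now")
        # ...and only for fast cuts, not full cuts.
        if is_full_cut:
            raise ValueError("full cuts don't support multiple ranges yet")
```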
Check that open() calls for reading and writing use binary modes
Use alpine version with py3-pip package
Use python3 in Dockerfile CMD
Remove sys.setdefaultencoding() "hack"
Simplify ensure_directory() in common.common package
By carefully ensuring most of our dockerfiles are identical in their first few layers,
we only need to build those layers once instead of every time.
In particular, we move installing gevent to before installing common,
so that even when common changes gevent doesn't need to be reinstalled.
This is important because gevent takes ages to install.
Also fixes segment_coverage, which wasn't being installed.
In a fast cut, we edit the first and last segments then concatenate them all.
However, this leads to some tiny but perceptible artifacting around the border
of the first and second (and second-last and last) segments.
A full cut is much slower, but re-encodes the video into the desired format
and is more reliable.
We want both options to be available.
With this commit we only add the option; we don't use it in restreamer or cutter.
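To illustrate the shape of the option (function and parameter names here are hypothetical, and
fast_cut_segments / full_cut_segments just stand in for the two implementations):

```python
def cut(segments, start, end, style="fast"):
    # Sketch only: dispatch between the two cut styles described above.
    if style == "full":
        # Slower: re-encode the whole video into the desired format.
        return full_cut_segments(segments, start, end)
    # Default: only re-encode the first and last segments, then concatenate.
    return fast_cut_segments(segments, start, end)
```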
We wrap direct dateutil calls to handle two distinct cases:
* `common.dateutil.parse()`: We want to handle arbitrary timestamps including tz info,
then convert them to UTC.
This is used in HLS parsing, and for command-line input for the backfiller.
* `common.dateutil.parse_utc_only()`: We want to only handle UTC timestamps,
but datetime.strptime isn't flexible enough (eg. can't handle missing fractional component).
This is used for restreamer request params.
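A sketch of what these two wrappers might look like (the real module may differ in details):

```python
import dateutil.parser
import dateutil.tz

def parse(timestamp):
    """Parse an arbitrary timestamp, tz-aware or not, and return a naive UTC datetime."""
    dt = dateutil.parser.parse(timestamp)
    if dt.tzinfo is not None:
        dt = dt.astimezone(dateutil.tz.tzutc()).replace(tzinfo=None)
    return dt

def parse_utc_only(timestamp):
    """Parse a timestamp that's already in UTC, tolerating things strptime can't,
    eg. a missing fractional component."""
    return dateutil.parser.parse(timestamp, ignoretz=True)
```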
Note this moves over the 'experimental' cutter and deletes the original cutter
that concatenates entire videos before cutting.
We may eventually want to revive that method if the experimental cutter turns out
to introduce too many issues.
We move most of the code over verbatim, but adjust it such that it acts
as a generic iterator that can be used in a variety of contexts.
Some other changes made during the move include telling ffmpeg to be quieter
(don't output version info and junk, only log if something goes wrong),
and avoiding errors during cleanup.
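The "generic iterator" shape is roughly the following sketch (command and chunk size are
illustrative; -hide_banner and -loglevel are the standard ffmpeg flags for suppressing the
version banner and non-error output):

```python
import subprocess

def read_chunks(cmd, chunk_size=16 * 1024):
    """Run a command (eg. ffmpeg with -hide_banner -loglevel error) and yield
    its stdout in chunks, swallowing errors during cleanup. Sketch only."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    try:
        for chunk in iter(lambda: proc.stdout.read(chunk_size), b''):
            yield chunk
    finally:
        try:
            proc.kill()
            proc.wait()
        except OSError:
            # Cleanup errors (eg. the process already exited) are ignored.
            pass
```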
Backdoor allows the operator to telnet into the given port, and get a python shell
running inside the process, from which you can debug, modify state (eg. set the log level),
or whatever. This is extremely useful for debugging weird states that you encounter randomly
but can't easily reproduce, without needing to restart the process and wait until it happens again.
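gevent ships a BackdoorServer that does exactly this, so wiring it up is likely along these
lines (a sketch, not the actual module):

```python
import logging
import gevent.backdoor

def install_backdoor(port, locals=None):
    """Start a telnet-accessible Python shell inside the process.
    Bind to localhost only, since anyone who can connect gets full control."""
    server = gevent.backdoor.BackdoorServer(('127.0.0.1', port), locals=locals)
    gevent.spawn(server.serve_forever)

# Example: install_backdoor(1234, locals={'logging': logging}),
# then `telnet localhost 1234` and eg. logging.getLogger().setLevel(logging.DEBUG)
```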
Prom client doesn't like you creating two stats with the same name,
even though they have different labels and this makes perfect sense.
I feel like I just need to re-write the prom client at some point - it doesn't actually
do all that much except get in your way, apart from the actual text encoding which I
can steal.
Anyway, in the meantime, we get around this by breaking up metrics into two names,
a "foo_all" and a "foo_ENDPOINT". The foo_all lacks the detailed labels,
but is still labelled by endpoint and can be used more easily.
The foo_ENDPOINT metrics carry more detailed labels, but require messier PromQL, as you need to
match on a name regex if you want to look at more than one specific endpoint.
I had to go to some effort to get nice labelling,
which also meant none of the existing libs for this were any good,
but this works well enough.
Exposes the metrics on /metrics.
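The naming split looks roughly like this with prometheus_client (the label sets here are
illustrative, not the real ones):

```python
import prometheus_client as prom

# One coarse metric shared by all endpoints, with only the common labels...
latency_all = prom.Histogram(
    'http_request_latency_all', 'Request latency across all endpoints',
    ['endpoint', 'method', 'status'],
)

# ...plus one metric per endpoint, carrying any endpoint-specific labels.
# The endpoint name must be sanitised into a valid metric name first.
def latency_for(endpoint, extra_labels):
    return prom.Histogram(
        'http_request_latency_{}'.format(endpoint),
        'Request latency for {}, with endpoint-specific labels'.format(endpoint),
        ['method', 'status'] + list(extra_labels),
    )
```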
The calculations were backwards, so instead of cutting a video by, say, 2 seconds,
it would cut by -2 seconds, which was clamped to 0. So it would never actually cut;
it would always use the closest segment.
Also, once we were actually cutting, we hit an issue where ffmpeg would finish and close
its input early, because we'd reached the end of the cut video, but not all input had been written yet.
This resulted in an EPIPE error (write to closed pipe) in the input feeder. We now ignore that.
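The EPIPE handling in the input feeder is roughly the following (proc being the ffmpeg
subprocess; names are illustrative):

```python
import errno

def feed(proc, chunks):
    """Write chunks to ffmpeg's stdin, tolerating ffmpeg closing its input early."""
    try:
        for chunk in chunks:
            proc.stdin.write(chunk)
        proc.stdin.close()
    except IOError as e:
        if e.errno != errno.EPIPE:
            raise
        # ffmpeg already had everything it needed for the cut; just stop writing.
```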
This cutter works by only cutting the first and last segments to size,
then concatting them with the other segments, so we only ever process a few seconds
of video instead of the entire video duration.
However, to make this work, care must be taken that the cut segments use the same codecs
as the other segments.
The reason it's experimental is that we are not yet confident in its ability
to cut accurately and without sync issues. We have seen some minor issues when trying to play
back the raw output files, but youtube's re-encoding has consistently smoothed out those issues
and they seem to be highly player-specific. Vigorous testing is needed.
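Very roughly, the approach is the following (paths, offsets and codec choices here are
placeholders; in practice the re-encoded edges must match whatever the untouched middle
segments already use):

```python
import subprocess

def fast_cut(first_file, middle_files, last_file, start_offset, end_offset, output):
    quiet = ['ffmpeg', '-hide_banner', '-loglevel', 'error']
    # Re-encode only the first and last segments, trimmed to the cut points.
    subprocess.check_call(quiet + [
        '-i', first_file, '-ss', str(start_offset),
        '-c:v', 'libx264', '-c:a', 'aac', 'first_trimmed.ts',
    ])
    subprocess.check_call(quiet + [
        '-i', last_file, '-to', str(end_offset),
        '-c:v', 'libx264', '-c:a', 'aac', 'last_trimmed.ts',
    ])
    # Concatenate everything, copying the middle segments untouched.
    with open('concat_list.txt', 'w') as listfile:
        for path in ['first_trimmed.ts'] + list(middle_files) + ['last_trimmed.ts']:
            listfile.write("file '{}'\n".format(path))
    subprocess.check_call(quiet + [
        '-f', 'concat', '-safe', '0', '-i', 'concat_list.txt', '-c', 'copy', output,
    ])
```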
Also note that both methods right now (cat then cut, and cut then cat) only work if all the segments
are cattable, that is, they all use the same codecs, have the same resolution, etc.
If a stream were to change its encoding settings, and we were cutting over that change,
neither approach would work. We should add checks for that scenario (which can only happen
over a stream drop), and if so fall back to a slow method using ffmpeg's concat filter,
which will work even for disparate codecs, though reconciling mismatched resolutions or frame rates
may require further work.
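A compatibility check along those lines could look something like this (the exact set of
ffprobe fields worth comparing is a guess):

```python
import json
import subprocess

def stream_properties(path):
    """Pull the per-stream properties that have to match for segments to be
    safely concatenated. Sketch only."""
    info = json.loads(subprocess.check_output([
        'ffprobe', '-hide_banner', '-loglevel', 'error',
        '-show_streams', '-of', 'json', path,
    ]))
    return [
        (s.get('codec_name'), s.get('width'), s.get('height'), s.get('r_frame_rate'))
        for s in info['streams']
    ]

def cattable(segment_paths):
    """True if all segments share codecs, resolution and frame rate."""
    props = [stream_properties(path) for path in segment_paths]
    return all(p == props[0] for p in props)
```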
This is mainly just for testing until we get the database and proper cutter up,
but it might prove useful to have in the long run too.
This code will probably end up being totally rewritten,
as it uses the most naive form of cutting and reencoding,
and it has a whole bunch of http-serving specifics intertwined with the cutting logic.
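For reference, the naive approach amounts to something like this sketch (names, flags and
container choice are illustrative, and the real code mixes this in with the HTTP handling):

```python
import subprocess
import threading

def naive_cut(segment_paths, start_offset, duration):
    """Pipe every segment into one ffmpeg process that re-encodes and cuts,
    and yield its output so the HTTP layer can stream it. Sketch only."""
    proc = subprocess.Popen([
        'ffmpeg', '-hide_banner', '-loglevel', 'error',
        '-i', '-',                      # concatenated segments arrive on stdin
        '-ss', str(start_offset), '-t', str(duration),
        '-f', 'mpegts', '-',            # streamable output leaves on stdout
    ], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def feed():
        for path in segment_paths:
            with open(path, 'rb') as f:  # binary mode
                proc.stdin.write(f.read())
        proc.stdin.close()

    threading.Thread(target=feed).start()
    for chunk in iter(lambda: proc.stdout.read(16 * 1024), b''):
        yield chunk
```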