Sheetsync now loads playlist IDs and tags into a new `playlists` table.
The playlist manager reads this table and merges it with the playlists given on the command line.
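As a rough sketch of that merge, assuming a psycopg2 connection and a hypothetical dict representation of the command-line playlists (this is illustrative, not the actual wubloader schema or CLI):

```python
def get_playlists(conn, cli_playlists):
    # cli_playlists: dict of playlist_id -> list of tags, as parsed from the
    # command line (hypothetical representation for this sketch).
    playlists = dict(cli_playlists)
    with conn.cursor() as cursor:
        cursor.execute("SELECT playlist_id, tags FROM playlists")
        for playlist_id, tags in cursor.fetchall():
            # Rows from the playlists table are merged over the CLI entries.
            playlists[playlist_id] = tags
    return playlists
```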
When an image is pushed, this tells GitHub to associate the ghcr.io repository it was pushed to
with the specified GitHub repository (the owner needs to match).
This does a few things.
Most importantly, it automatically gives GitHub Actions credentials to push to these
repositories when run in the context of the wubloader repo.
This is the simplest case, as we can just cut each range like we already do,
then concatenate the results.
We still allow for the full design in the database and cutter, but error out if the transitions
are ever anything but hard cuts, or if it's a full cut.
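A minimal sketch of the cut-then-concatenate idea, using hypothetical cut_range() and concat() helpers rather than the actual cutter code:

```python
def cut_video(ranges, transitions, cut_range, concat):
    # ranges: list of (start, end) pairs.
    # transitions: list of len(ranges) - 1 entries describing how consecutive
    # ranges are joined; only hard cuts (represented here as None) are supported.
    if any(t is not None for t in transitions):
        raise ValueError("Only hard cut transitions are supported")
    # Cut each range individually, exactly as for a single-range cut,
    # then concatenate the parts in order.
    parts = [cut_range(start, end) for start, end in ranges]
    return concat(parts)
```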
We also update the restreamer to accept ranges; however, for usability we still allow
the old "just one start and end" args.
Note this changes the thrimshim API to give and take the new "video_ranges" and "video_transitions" columns.
Check that open() calls for reading and writing use binary modes
Use alpine version with py3-pip package
Use python3 in Dockerfile CMD
Remove sys.setdefaultencoding() "hack"
Simplify ensure_directory() in common.common package
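For that last item, the simplification presumably comes down to leaning on Python 3's os.makedirs(exist_ok=True) instead of hand-rolled existence/race checks; a sketch of what the simplified helper could look like (an assumption, not the exact code):

```python
import os

def ensure_directory(path):
    """Ensure the directory containing the given file path exists.
    In Python 3, os.makedirs(exist_ok=True) handles the already-exists
    race that previously needed an explicit EEXIST check."""
    dir_path = os.path.dirname(path)
    if dir_path:
        os.makedirs(dir_path, exist_ok=True)
```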
In order for the upcoming playlist manager to be able to use the DB `tags` column to know
what tags a video has, all the tags it cares about need to be present there.
Previously, this was a problem because the day and category tags were only added in the cutter
and so wouldn't be listed.
This moves them so they are added when parsing the row in sheetsync.
It also adds the poster moment tag if poster moment is checked.
Note that fully static tags that go on all videos are still only added in the cutter,
but the playlist manager doesn't need to care about those (since by definition
they will match every video).
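A rough sketch of the idea (field names, tag formats and the function name are illustrative guesses, not the actual sheetsync code):

```python
def parse_row_tags(row):
    # Start from the tags the sheet editor entered, then add the tags the
    # playlist manager needs to see in the DB: day and category, plus the
    # poster moment tag when that checkbox is ticked.
    tags = list(row["tags"])
    tags.append(row["category"])              # category tag
    tags.append("Day {}".format(row["day"]))  # day tag; exact format is a guess
    if row["poster_moment"]:
        tags.append("Poster Moment")
    return tags
```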
The new tags column shunts all columns after it right by one.
Note that we parse tags by splitting on commas, then discarding whitespace.
If this would create an empty string tag, it is ignored.
Example: "foo, bar baz,a,,bc " -> ["foo", "bar baz", "a", "bc"]
This lets us view a number of useful graphs in dashboards, e.g. rows by state,
errored rows, rows by day, rows by category, meltdowns per day, and the fraction of
events that are poster moments by category.
Sheetsync was the natural place to do this since it was already periodically scanning
the entire events table.
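A minimal sketch of how such metrics could be exported with prometheus_client (the metric name and label set are illustrative, not necessarily what sheetsync actually exposes):

```python
import prometheus_client as prom

# Refreshed on every sheetsync pass over the events table; dashboards can
# aggregate over the labels to get graphs like "rows by state" or
# "poster moment fraction by category".
event_counts = prom.Gauge(
    "event_counts",
    "Number of rows in the events table, by category, day, state and poster moment",
    ["category", "day", "state", "poster_moment"],
)

def update_event_counts(rows):
    counts = {}
    for row in rows:
        key = (row["category"], str(row["day"]), row["state"], str(row["poster_moment"]))
        counts[key] = counts.get(key, 0) + 1
    for (category, day, state, poster_moment), count in counts.items():
        event_counts.labels(
            category=category, day=day, state=state, poster_moment=poster_moment,
        ).set(count)
```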
Instead of always waiting 5 seconds between runs,
wait until 5 seconds after the previous run started.
This ensures we actually run every 5 seconds, not every 5 seconds plus however long a run takes.
Only check the other sheets every 4th run (every 20 seconds instead of every 5).
This eliminates a huge source of unnecessary reads, which keeps us from going over
our API limit.
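A sketch of the loop timing (the 5-second interval and the every-4th-run check come from the description above; the function names and the main/other sheet split are illustrative):

```python
import time

RUN_INTERVAL = 5  # seconds between run *starts*

def run_forever(sync_main_sheet, sync_other_sheets):
    run_count = 0
    while True:
        start = time.monotonic()
        sync_main_sheet()
        # Only hit the other sheets every 4th run, i.e. every 20 seconds.
        if run_count % 4 == 0:
            sync_other_sheets()
        run_count += 1
        # Sleep until 5 seconds after this run started, not after it finished.
        remaining = RUN_INTERVAL - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```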
By carefully ensuring most of our Dockerfiles are identical in their first few layers,
we only need to build those layers once instead of every time.
In particular, we move installing gevent to before installing common,
so that even when common changes gevent doesn't need to be reinstalled.
This is important because gevent takes ages to install.
Also fixes segment_coverage, which wasn't being installed.
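The layer-ordering idea, as a hedged Dockerfile sketch (base image, packages and paths are illustrative, not the exact wubloader Dockerfiles):

```dockerfile
FROM alpine:latest
# Shared, rarely-changing layers come first and are identical across images,
# so Docker's build cache only has to produce them once.
RUN apk --update add python3 py3-pip g++ python3-dev musl-dev
# Install gevent before copying in common: gevent takes ages to build, and
# this way a change to common doesn't invalidate the gevent layer.
RUN pip install gevent
COPY common /tmp/common
RUN pip install /tmp/common && rm -r /tmp/common
```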
This is a nicer error than crashing in the depths of some error handler
(which is what will happen if the DB becomes unavailable while they're running),
and it's a far more common case (e.g. the DB is misconfigured) than having it fail
halfway through.
Neither of these services can do anything meaningful without the DB,
so crashing without it is acceptable behaviour.
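A sketch of a fail-fast connection check at startup, assuming psycopg2 (the actual connection handling in these services may differ):

```python
import logging
import sys

import psycopg2

def check_db(connect_string):
    # Try to connect once at startup so a missing or misconfigured database
    # produces a clear error immediately, instead of a crash deep inside an
    # error handler later on.
    try:
        conn = psycopg2.connect(connect_string)
        conn.close()
    except psycopg2.Error:
        logging.exception("Failed to connect to database")
        sys.exit(1)
```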
This tells us which sheet a row came from
(so we don't need to scan every sheet to find it if we're trying to do
lookups in that direction).
It is also needed in order to tag the videos with the Day number.
Exposes a way to read all rows, and write a single cell.
We need to read all columns of each row so we know what would be modified,
which lets us restrict updates to single cells that don't already hold the correct value.
This keeps us from impacting the sheet too heavily with constant writes,
which I suspect might be an issue even if the written values are the same.
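A sketch of the write-only-when-different behaviour, using hypothetical client methods (get_rows() / write_value()) rather than the real Sheets client API:

```python
def update_row(client, sheet_name, row_index, current_row, desired_row):
    # current_row was read as part of fetching all rows, so we already know
    # every column's existing value. Only touch cells whose value actually
    # needs to change, with one single-cell write per differing column.
    for column_index, (old, new) in enumerate(zip(current_row, desired_row)):
        if old != new:
            client.write_value(sheet_name, row_index, column_index, new)
```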