instead of proxying through restreamer.
This should improve performance when serving the (large) segment files,
and free up restreamer for things like generating the playlist.
This accomplishes two things:
1. It allows thrimshim to properly validate length restrictions (not implemented yet)
2. It means that the database has a record of the values actually written for each of these rows,
instead of that information depending on how the cutter was configured at the time.
NOTE: This is only safe if we're only running one sheetsync instance.
Later we should make this controllable in config, so that one node runs
the allocating sheetsync while the others run non-allocating ones.
This deals with the problem where multiple youtube locations that refer
to the same actual account (but with different settings) would all check
whether videos have finished transcoding, when only one such check is needed.
Cutter now takes a 'config' arg which is a JSON blob with details
on each upload location. This is a bit nasty if you're trying to run it manually,
but it was the easiest way to transfer the config data from docker-compose.jsonnet
to the actual application.
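For illustration, the blob could be assembled in docker-compose.jsonnet along
the lines below; the location names and fields are assumptions for the sketch,
not the real schema:

```jsonnet
// Hypothetical shape only -- one entry per upload location, keyed by name.
local upload_locations = {
  youtube_main: {
    type: "youtube",                              // which uploader backend to use
    creds_file: "/etc/wubloader/youtube-creds.json",  // hypothetical credentials path
    check_transcoding: true,                      // hypothetical flag, per the note above
  },
  backup_archive: {
    type: "local",
    path: "/mnt/archive",                         // hypothetical output directory
  },
};

// Serialize the object to a JSON string, which could then be passed
// to the cutter as its single 'config' argument.
std.manifestJson(upload_locations)
```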
nginx tries to resolve everything at startup, which doesn't work
if some of the services aren't present.
We instead generate the config file from a passed-in env var, so that only
enabled services are present.
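A sketch of the compose side of that wiring, assuming a SERVICES env var and
an arbitrary service list (both names are illustrative): the nginx container's
entrypoint would read the variable and emit proxy config only for the services
named in it.

```jsonnet
// Assumed names throughout -- not the real compose file.
local enabled_services = ["restreamer", "thrimshim", "cutter"];

{
  services: {
    nginx: {
      environment: {
        // The entrypoint reads this and generates location blocks only for
        // these services, so nginx never tries to resolve hostnames of
        // services that aren't running.
        SERVICES: std.join(",", enabled_services),
      },
    },
  },
}
```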
For ease-of-use, we use a jsonnet file to generate the yaml.
Jsonnet is a language for generating JSON documents.
In this case it's useful to us because it lets us have comments,
references to settings defined at the top, and some basic logic
like converting qualities from a list of strings to a comma-separated string.
To avoid requiring jsonnet to be installed, we use the official jsonnet docker image
in the generate script.
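A minimal sketch of what that buys us (the setting names and the --qualities
flag below are illustrative, not the real file):

```jsonnet
// Settings live in one place at the top, and comments are allowed,
// unlike plain JSON.
local config = {
  qualities: ["source", "480p"],
};

{
  // Elsewhere in the file we can refer back to those settings, and apply
  // small transformations like joining the list into the comma-separated
  // form a service expects on its command line.
  downloader_command: ["--qualities", std.join(",", config.qualities)],
}
```

Evaluating this with jsonnet yields plain JSON (with the qualities joined as
"source,480p"), and since JSON is itself valid YAML, the generate script can
write the result out for docker-compose more or less directly.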