Any endpoints that don't need a DB conn will still work fine.
Notably, this includes /defaults, which is needed for
thrimbletrimmer to work in a non-specific-row mode.
This is a nicer error than crashing in the depths of some error handler
(which is what will happen if the DB becomes unavailable while they're running),
and it's a far more common case (e.g. the DB is misconfigured)
than the DB failing halfway through.
Neither of these services can do anything meaningful without the DB,
so crashing without it is acceptable behaviour.
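A minimal sketch of that startup check, assuming psycopg2; the helper name
is ours, not the actual code:

    import logging
    import sys

    import psycopg2

    def connect_or_die(dsn):
        # Connect once at startup so a missing or misconfigured DB
        # produces one clear error instead of a crash deep inside a
        # request handler later on.
        try:
            return psycopg2.connect(dsn)
        except psycopg2.OperationalError as e:
            logging.error("Could not connect to database: %s", e)
            sys.exit(1)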
We move all connection handling into get_nodes().
This means that connection problems won't propagate as unhandled errors
and crash the entire application.
In turn, this means that the behaviour if the database goes down becomes
"continue backfilling from the nodes we know about" instead of crashing.
Segment files are now served directly, instead of being proxied through restreamer.
This should improve performance when serving the (large) segment files,
and free up restreamer for things like generating the playlist.
This accomplishes two things:
1. It allows thrimshim to properly validate length restrictions (not implemented yet)
2. It means that the database has a record of the values actually written for each of these rows,
instead of that information depending on how the cutter was configured at the time.
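For point 2, the write-back might look like this sketch; the events table
and column names are assumed, not the real schema:

    def record_cut_values(conn, row_id, title, description):
        # Persist the values the cutter actually used, so the row no
        # longer depends on how the cutter happened to be configured.
        with conn.cursor() as cursor:
            cursor.execute(
                "UPDATE events SET video_title = %s, video_description = %s"
                " WHERE id = %s",
                (title, description, row_id),
            )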
NOTE: This is only safe if we're only running one sheetsync.
Later we should add a config option so that one node runs the allocating
sheetsync while the others run non-allocating ones.
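A hedged sketch of what that could look like; the allocate flag and the
create_row/merge helpers are hypothetical:

    class SheetSync:
        def __init__(self, allocate=False):
            # Exactly one running instance may have allocate=True,
            # otherwise two nodes could both create rows for the same
            # sheet entry.
            self.allocate = allocate

        def sync_row(self, sheet_row, db_row):
            if db_row is None:
                if not self.allocate:
                    return  # leave new rows for the allocating node
                db_row = self.create_row(sheet_row)
            self.merge(sheet_row, db_row)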
Instead of handling each error condition separately,
we raise an UploadError which includes whether it's retryable.
The advantage of this is that upload backends can also raise an UploadError
to indicate two conditions they currently cannot:
1. That an error is unretryable
2. That an error is retryable, even if the row was already in finalizing
Under this scheme, errors while cutting become unretryable UploadErrors,
and unhandled exceptions in uploading become retryable UploadErrors
only if the row is not yet finalizing.
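A minimal sketch of the scheme; row.state and the 'FINALIZING' value are
illustrative, not the actual names:

    class UploadError(Exception):
        def __init__(self, message, retryable=False):
            super().__init__(message)
            self.retryable = retryable

    def cut_and_upload_row(row, cut, upload):
        # Errors while cutting are always safe to mark unretryable.
        try:
            video = cut(row)
        except Exception as e:
            raise UploadError("Error while cutting: {}".format(e))
        try:
            upload(video)
        except UploadError:
            raise  # the backend has already decided retryability
        except Exception as e:
            # Unhandled upload errors are retryable only if the row
            # hasn't reached finalizing, since after that the upload
            # may have partially gone through.
            raise UploadError(str(e), retryable=(row.state != 'FINALIZING'))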
Most formats, like mp4, require ffmpeg to seek back and make changes
at the start of the file throughout writing.
Unfortunately, this prevents us from streaming the upload as we cut it.
Instead, we spool to a temporary file until ffmpeg exits,
then upload that all at once.
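A sketch of that flow using only the standard library; the upload callable
is a stand-in for the backend:

    import subprocess
    import tempfile

    def cut_and_upload(input_path, upload):
        with tempfile.NamedTemporaryFile(suffix='.mp4') as spool:
            # ffmpeg must be able to seek back and rewrite the start
            # of the file when it finishes, so it writes to a real
            # file rather than a pipe.
            subprocess.check_call(
                ['ffmpeg', '-i', input_path, '-c', 'copy', '-y', spool.name]
            )
            spool.seek(0)
            upload(spool)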