The old "docker build" no longer does caching the way it used to, so our existing cache logic doesn't work.
The new cache logic uses buildah, an alternative image-building tool.
Buildah comes pre-installed on GitHub Actions (GHA) runners.
When building, it pushes each layer to the cache repo as it goes,
and queries the repo for layers that have already been built, so we don't need to
explicitly pull any specific tags and cache from them.
If caching is not enabled, we still use docker as normal, so local development is not affected;
docker's automatic local caching still applies.
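As a rough illustration of that flow, here is a minimal sketch of a build wrapper: when a cache repo is configured it builds with buildah using its remote layer-cache flags, otherwise it falls back to a plain docker build. The CACHE_REPO variable, the image names, and the exact buildah flags (--layers / --cache-from / --cache-to) are assumptions for illustration, not the project's actual build script.

```python
# Hedged sketch of the caching-aware build wrapper described above.
import os
import subprocess

# Hypothetical cache repository, e.g. "ghcr.io/example/wubloader-cache".
CACHE_REPO = os.environ.get("CACHE_REPO")

def build_image(component, tag):
    dockerfile = "{}/Dockerfile".format(component)
    if CACHE_REPO:
        # buildah pushes each completed layer to the cache repo as it builds,
        # and checks the repo for already-built layers before building one itself.
        cmd = [
            "buildah", "build",
            "--layers",
            "--cache-from", CACHE_REPO,
            "--cache-to", CACHE_REPO,
            "-f", dockerfile,
            "-t", tag,
            ".",
        ]
    else:
        # Caching not enabled: plain docker build, so local development is
        # unaffected and docker's own local layer cache still applies.
        cmd = ["docker", "build", "-f", dockerfile, "-t", tag, "."]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    build_image("cutter", "wubloader-cutter:latest")
```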
This is intended mainly for Travis CI, which by default doesn't cache any layers
between builds.
By pulling likely-reusable builds (all parents of the current commit),
we take a fixed-cost slowdown, but in many cases should see a dramatic overall
speed increase, since we won't need to rebuild anything that hasn't changed.
This isn't needed for local builds, where docker will do this on its own
with any previously-built images.
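Something like the following sketch captures the parent-pulling idea: enumerate the parents of the current commit and pull the images that were built for them, so their layers land in the local cache before the new build starts. The registry path and the image naming scheme here are hypothetical.

```python
# Hedged sketch of pulling likely-reusable builds for all parents of HEAD.
import subprocess

REGISTRY = "registry.example.com/wubloader"  # hypothetical image naming scheme

def parent_commits():
    # HEAD^@ expands to all parents of the current commit (both, for a merge).
    return subprocess.check_output(["git", "rev-parse", "HEAD^@"], text=True).split()

def pull_likely_reusable(component):
    # Fixed-cost pulls up front; any layers these images share with the new
    # build can then be reused instead of rebuilt.
    for commit in parent_commits():
        image = "{}-{}:{}".format(REGISTRY, component, commit)
        # Ignore failures, e.g. if a parent commit was never built and pushed.
        subprocess.call(["docker", "pull", image])

if __name__ == "__main__":
    pull_likely_reusable("cutter")
```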
By carefully ensuring that most of our dockerfiles are identical in their first few layers,
we only need to build those layers once instead of once per image.
In particular, we move installing gevent to before installing common,
so that even when common changes, gevent doesn't need to be reinstalled.
This is important because gevent takes ages to install.
Also fixes segment_coverage, which wasn't being installed.
The cutter has two jobs:
* To cut videos, taking them through states EDITED -> TRANSCODING
* To monitor TRANSCODING videos for when they're complete
We run these as separate greenlets with their own DB connections,
and if either dies we gracefully shut down the other.
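A minimal sketch of that structure, assuming gevent; the job bodies, the connect() helper and the names are hypothetical stand-ins for the real cutter code:

```python
import gevent
from gevent.event import Event

def cut_videos(dbconn, stop):
    # Cuts videos, taking them through EDITED -> TRANSCODING, until asked to stop.
    while not stop.is_set():
        stop.wait(5)

def monitor_transcoding(dbconn, stop):
    # Polls TRANSCODING videos to detect when they're complete, until asked to stop.
    while not stop.is_set():
        stop.wait(5)

def connect():
    return object()  # placeholder for opening a dedicated DB connection

def main():
    stop = Event()
    # Each job runs as its own greenlet with its own DB connection.
    jobs = [
        gevent.spawn(cut_videos, connect(), stop),
        gevent.spawn(monitor_transcoding, connect(), stop),
    ]
    # If either greenlet exits (including by dying with an error),
    # signal the other to shut down gracefully, then wait for both.
    gevent.wait(jobs, count=1)
    stop.set()
    gevent.wait(jobs)

if __name__ == "__main__":
    main()
```

Sharing a stop event rather than killing the other greenlet outright lets each job finish its current unit of work before exiting.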