In order for the upcoming playlist manager to use the DB `tags` column to know
what tags a video has, all the tags it needs must be present.
Previously this was a problem because the day and category tags were only added by the cutter
and so wouldn't be listed.
This moves them so they are added when parsing the row in sheetsync.
It also adds the poster moment tag if poster moment is checked.
Note that fully static tags that go on all videos are still only added in the cutter,
but the playlist manager doesn't need to care about those (since by definition
they will match every video).
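As a rough sketch of the sheetsync side of this change (function, field, and tag names here are illustrative assumptions, not the actual sheetsync code):

```python
def default_tags_for_row(row, day_tag, category_tag):
    """Build the default tag list while parsing a sheet row.

    Sketch only: the day and category tags (previously added by the cutter)
    are now included here, plus the poster moment tag when that box is checked.
    The "Poster Moment" literal and field names are assumptions.
    """
    tags = [day_tag, category_tag]
    if row.get("poster_moment"):
        tags.append("Poster Moment")
    return tags
```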
Tags default to tags given on the sheet, but can be modified by the editor.
Tags are represented as a comma-separated string, and are round-tripped on loss of focus
as a way to validate.
The new tags column shifts all columns after it right by one.
Note we parse tags by splitting on commas then discarding whitespace.
If this would create an empty string tag, it is ignored.
Example: "foo, bar baz,a,,bc " -> ["foo", "bar baz", "a", "bc"]
tags is a sheet input which provides a default list of tags for the editor.
video_tags is set upon the video being edited and is used by the cutter to set video tags on supported upload locations.
Use jsonnet computed field names to optionally add TLS configuration to generated Ingress object
In this way, one can easily let the Kubernetes Ingress handle TLS, with or without a secretName.
Additional configuration would be required to tie into cert-manager for automated cert generation.
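The actual config is jsonnet, using a computed field name so the tls field only exists when enabled; the same "omit the key unless configured" idea, sketched in Python with illustrative names and values:

```python
def ingress_manifest(host, tls=False, tls_secret_name=None):
    """Build a minimal Ingress manifest, only including a tls section when asked.

    Illustrative only: metadata and values are assumptions, not the generated manifest.
    """
    manifest = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": "example"},
        "spec": {"rules": [{"host": host}]},
    }
    if tls:
        tls_entry = {"hosts": [host]}
        if tls_secret_name:
            # With a secretName the ingress controller serves the cert from that secret;
            # without one it typically falls back to its default certificate.
            tls_entry["secretName"] = tls_secret_name
        manifest["spec"]["tls"] = [tls_entry]
    return manifest
```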
The database changes rarely, and re-deploying it is disruptive,
so we track its image version separately so it only needs to be upgraded
when there's actually a change.
Note this version is very simplified compared to the docker-compose
and has some major limitations:
* It relies on hostPath and a nodeSelector to put all the components on a shared storage node
* It only supports use as a replication node (downloader, restreamer, backfiller, segment_coverage)
* It uses the k8s Ingress instead of the built-in nginx for HTTP routing.
We've noticed that when nodes have connection problems, they get full segments
with different hashes. Inspection of these segments shows that
they all have identical data up to a point.
Segments that were fetched normally then contain the remainder of the data.
Segments that had issues will have a slightly corrupted end.
The data is still valid, and no errors are raised. It just doesn't have all the data.
We noticed that these corrupted segments were all cut off exactly 60 seconds after their requests
began. We believe this is a server-side timeout on the request that returns whatever data
it has, then closes the container file cleanly before returning successfully.
We detect segments that take > 59 seconds to receive, and label them as "suspect".
Suspect segments are treated identically to partial segments, except they are always preferred
over partials.
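A sketch of the detection (names, return values, and exact bookkeeping are assumptions; the threshold matches the one described above):

```python
import time

SUSPECT_THRESHOLD = 59  # seconds; downloads hitting the ~60s server-side timeout exceed this

def classify_segment(request_start, completed_normally):
    """Decide what to label a finished segment download (illustrative helper).

    Downloads that ran longer than the threshold are labelled "suspect": they
    parse cleanly and raise no errors, but may have been cut short by the
    server-side timeout.
    """
    if not completed_normally:
        return "partial"
    if time.monotonic() - request_start > SUSPECT_THRESHOLD:
        return "suspect"
    return "full"
```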
We occasionally see corrupted segments that are slightly shorter in size
but report the same metadata as the full segments. Prefer the largest version
as it's likely the least corrupt.
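Putting the two preferences together, selecting among duplicate copies of a segment might look something like this (hedged sketch; names and structure are assumptions, not the real selection code):

```python
import os

# Higher rank is preferred: full beats suspect, which beats partial.
TYPE_RANK = {"full": 2, "suspect": 1, "partial": 0}

def best_version(candidates):
    """Pick the preferred copy among files that claim the same segment metadata.

    `candidates` is a list of (segment_type, path) pairs. Ties on type go to
    the largest file, since it's likely the least truncated.
    """
    return max(candidates, key=lambda c: (TYPE_RANK[c[0]], os.stat(c[1]).st_size))
```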