We save all racing + caw streams for races that start within the next 5min
and have not yet finished.
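A hypothetical sketch of that rule (the race fields used here, start_time and finished, are assumptions rather than the real schema):

    from datetime import datetime, timedelta

    def should_record(race, now=None):
        # Keep a race's streams if it has not finished and it starts within
        # the next 5 minutes (or has already started).
        now = now or datetime.utcnow()
        return not race.finished and race.start_time <= now + timedelta(minutes=5)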
Removed the follow_game stuff completely because the twitch API it used is dead.
Twitch removed their old access token endpoint and now use a GraphQL endpoint.
The old endpoint would just always return 404, which we sadly interpreted as "stream not up".
Thankfully streamlink has already done the reverse engineering work, so I was able to
update it to work again fairly easily; the new flow is just a bit more convoluted.
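Roughly, the new flow POSTs a GraphQL query to gql.twitch.tv to fetch the playback access token. This is only a sketch following streamlink's reverse engineering; the query shape, field names and public Client-ID are assumptions and may drift as Twitch changes its private API:

    import requests

    GQL_URL = "https://gql.twitch.tv/gql"
    # Public web client id used by streamlink's twitch plugin (assumption).
    CLIENT_ID = "kimne78kx3ncx6brgo4mv6wki5h1ko"

    def get_access_token(channel):
        query = """
        query PlaybackAccessToken($login: String!) {
            streamPlaybackAccessToken(
                channelName: $login,
                params: {platform: "web", playerBackend: "mediaplayer", playerType: "site"}
            ) { value signature }
        }
        """
        resp = requests.post(
            GQL_URL,
            headers={"Client-ID": CLIENT_ID},
            json={"query": query, "variables": {"login": channel}},
        )
        resp.raise_for_status()
        token = resp.json()["data"]["streamPlaybackAccessToken"]
        return token["value"], token["signature"]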
An edit has been submitted for the video, but the video itself hasn't been uploaded yet; thrimbletrimmer just informs the other components how it wants the edit to be done.
Adds a built-in "youtube-manual" location which is like "manual" except that it only works
with youtube URLs and populates the video_id column.
The intent is to let playlist_manager manage videos we upload manually,
while still being able to distinguish between that and other manual links that shouldn't
be included (eg. links to third party youtube videos).
This is set via a new checkbox (default off) when setting a manual link in thrimbletrimmer.
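For illustration, validating the link and extracting the video id could look something like this (the accepted URL forms and the helper name are made up, not the actual thrimbletrimmer code):

    import re

    YOUTUBE_URL_RE = re.compile(
        r"^https?://(?:www\.)?(?:youtube\.com/watch\?v=|youtu\.be/)([A-Za-z0-9_-]{11})"
    )

    def parse_youtube_url(url):
        # youtube-manual only accepts youtube URLs; anything else is rejected.
        match = YOUTUBE_URL_RE.match(url)
        if match is None:
            raise ValueError("youtube-manual links must be youtube video URLs")
        return match.group(1)  # goes into the video_id column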
When comparing old and new video tags, the comparison errored because tags are a list, not a string.
We change it to apply the transforms to every tag in the list, and to ignore changes in list ordering.
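A sketch of the corrected comparison, with normalise_tag standing in for whatever per-tag transform is actually applied:

    def tags_changed(old_tags, new_tags, normalise_tag=str.lower):
        # Apply the transform to every tag, and compare as sets so that
        # a mere reordering of the list doesn't count as a change.
        return {normalise_tag(t) for t in old_tags} != {normalise_tag(t) for t in new_tags}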
float() is inaccurate and Decimal() is very slow (~3x the CPU usage),
so instead we right-pad the fractional part with 0s (eg. 1.2345 -> 1.234500) and then convert to integer microseconds directly.
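The trick in isolation (a minimal sketch, not the actual parsing code):

    def fraction_to_microseconds(fraction):
        # "2345" -> "234500" -> 234500; no float rounding, no Decimal overhead.
        return int(fraction.ljust(6, "0"))

    assert fraction_to_microseconds("2345") == 234500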
Floating point error leads to 1us differences in parsed times,
which causes false positives in the overlapping segments check.
By using a Decimal, we get the exact digits from the filepath.
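For example (made-up value; the exact inputs that trip this up vary):

    from decimal import Decimal

    raw = "0.5003"  # fractional seconds as written in a segment filename
    print(int(float(raw) * 1_000_000))    # can come out as 500299 due to binary rounding
    print(int(Decimal(raw) * 1_000_000))  # 500300, the exact digits from the path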
Because the checking process is entirely CPU-bound, it does not give any other
greenlets a chance to run while it is processing. This prevents us from responding
to metrics queries, and prometheus then times out.
By pausing between each hour processed to handle all other pending traffic, we ensure metrics
remain responsive while a check is in progress.
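A minimal sketch of the approach, assuming a gevent-based process (check_hour here is a stand-in for the real per-hour check):

    import gevent

    def run_checks(hours, check_hour):
        for hour in hours:
            check_hour(hour)  # entirely CPU-bound
            # Wait until the event loop has nothing else pending, so metrics
            # requests and other greenlets get served between hours.
            gevent.idle()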
strptime is very slow. In terms of pure get_best_segments() speed, this change
more than doubles the throughput.
In particular for segment_coverage, this halves the run time for each check.
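The general idea, assuming a fixed ISO-like timestamp layout such as "2023-01-02T03:04:05.678900" (the real format and function names may differ):

    from datetime import datetime

    def fast_parse(ts):
        # Slicing a known, fixed layout is far cheaper than strptime(),
        # which re-interprets the format string on every call.
        return datetime(
            int(ts[0:4]), int(ts[5:7]), int(ts[8:10]),       # date
            int(ts[11:13]), int(ts[14:16]), int(ts[17:19]),  # time
            int(ts[20:26].ljust(6, "0")),                    # microseconds, padded
        )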