When we're under CPU or disk contention, other work can become very slow.
We want to avoid spurious errors in this situation, since they trigger retries
and cause further contention.
One easy way to do this is to increase the time allowed to finish fetching headers.
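As a sketch of the idea (the constant name and call site here are illustrative, not the actual code), this just means a more generous timeout on the request that fetches the headers:
```
import requests

# Generous timeout (value illustrative) so header fetches survive periods of
# heavy CPU/disk contention instead of erroring out and triggering retries.
FETCH_HEADERS_TIMEOUT = 60

def fetch_with_headers_timeout(session, url):
    # With stream=True, get() returns as soon as the response headers arrive,
    # so the timeout bounds exactly the part we're trying to be lenient about.
    return session.get(url, stream=True, timeout=FETCH_HEADERS_TIMEOUT)
```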
To preserve independence between workers and to ensure that a
retry (a worker re-create) actually starts from scratch, we only pool connections
on a per-worker basis.
For the same reason, SegmentGetters only use the worker's pool on their first
attempt; after that, they create a new pool so the retry starts clean.
Despite this, the result should be that we're almost always reusing an existing connection
when getting segments or media playlists, unless something goes wrong.
SSL connection setup was measured at almost half the CPU time used by the process,
so this change should result in a significant reduction in CPU usage.
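A minimal sketch of the scheme, assuming requests-style sessions (class and method names are illustrative):
```
import requests

class StreamWorker(object):
    def __init__(self):
        # Each worker owns its own connection pool, so re-creating the
        # worker really does start from scratch.
        self.session = requests.Session()

class SegmentGetter(object):
    def __init__(self, worker):
        # First attempt: share the worker's pool to reuse connections.
        self.session = worker.session

    def on_retry(self):
        # Subsequent attempts: abandon the shared pool so a bad cached
        # connection can't poison the retry.
        self.session = requests.Session()
```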
Backdoor allows the operator to telnet into the given port and get a python shell
running inside the process, from which you can debug, modify state (eg. set the log level),
or whatever. This is extremely useful for debugging weird states that you encounter randomly
but can't easily reproduce, without restarting the process and having to wait until it happens again.
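gevent ships a BackdoorServer that provides exactly this; a minimal sketch of wiring it up (the helper name is ours):
```
import gevent
from gevent.backdoor import BackdoorServer

def install_backdoor(port, locals=None):
    # Bind to localhost only: anyone who can connect gets a full python
    # shell inside the process, so never expose this port publicly.
    server = BackdoorServer(('127.0.0.1', port), locals=locals)
    gevent.spawn(server.serve_forever)
```
Once it's running, `telnet localhost PORT` drops you into the shell.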
This meant that old workers would never shut down, causing us to fetch the same media playlist
and the same segments multiple times for no reason, and to never give up in the face of
(non-403/404) errors even once we had something else working.
Also fix some logging.
When we're out of touch with twitch for long enough, our segment URL will get
so old that twitch stops returning 403 (because our token is expired)
and starts returning 404, presumably because the underlying resource has gone away.
We want to treat these the same.
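That is, both codes mean "this url is dead, go get a fresh one"; a sketch of the check (the name is illustrative):
```
def url_is_dead(response):
    # 403: our access token has expired.
    # 404: the underlying resource is gone entirely.
    # Either way, retrying this url is pointless; fetch a fresh playlist.
    return response.status_code in (403, 404)
```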
I ran `pyflakes` on the repo and found these bugs:
```
./common/common.py:289: undefined name 'random'
./downloader/downloader/main.py:7: 'random' imported but unused
./backfiller/backfiller/main.py:150: undefined name 'variant'
./backfiller/backfiller/main.py:158: undefined name 'timedelta'
./backfiller/backfiller/main.py:171: undefined name 'sort'
./backfiller/backfiller/main.py:173: undefined name 'sort'
```
(ok, the "imported but unused" one isn't a bug, but the rest are)
This fixes those, as well as a further issue I saw with sorting of hours.
Iterables are not, in general, sortable. As an obvious example, what if your iterable was infinite?
As a result, any attempt to sort an iterable that is not already a friendly type like a list
will result in an error. We avoid this by coercing to list, fully realising the iterable
and putting it into a form that python will let us sort. It also avoids the nasty side-effect
of mutating the list that gets passed in to us, which the caller may not expect. Consider this example:
```
>>> my_hours = ["one", "two", "three"]
>>> print my_hours
["one", "two", "three"]
>>> backfill_node(base_dir, node, stream, variants, hours=my_hours, order='forward')
>>> print my_hours
["one", "three", "two"]
```
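The fix is to take our own copy before sorting; a sketch based on the call above (the rest of the function is elided, and the order values are assumed from the example):
```
def backfill_node(base_dir, node, stream, variants, hours=None, order='forward'):
    # list() fully realises the iterable and gives us our own copy, so
    # .sort() is guaranteed to work and the caller's list is never mutated.
    hours = list(hours)
    hours.sort(reverse=(order != 'forward'))
    # ...rest of the function unchanged...
```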
Also, one of the linter errors was non-trivial to fix: we were trying to get a list of hours
(which is an api call for a particular variant) at a point where we weren't dealing with a single
variant. My solution was to get a list of hours for ALL variants, and take the union.
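Sketched out, with list_hours standing in for the actual per-variant api call:
```
def list_hours_all_variants(node, stream, variants):
    hours = set()
    for variant in variants:
        # list_hours is a stand-in for the real per-variant api call.
        hours.update(list_hours(node, stream, variant))
    return sorted(hours)
```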
Previously, downloader would put files under BASE_DIR/VARIANT/HOUR/FILE.ts;
now it puts them under BASE_DIR/STREAM/VARIANT/HOUR/FILE.ts.
This brings downloader in line with restreamer's concept of base_dir.
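Concretely, segment paths are now built something like this (names illustrative):
```
import os

def segment_path(base_dir, stream, variant, hour, filename):
    # New layout: BASE_DIR/STREAM/VARIANT/HOUR/FILE.ts
    return os.path.join(base_dir, stream, variant, hour, filename)
```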
* Fix bug where soft timeout is not cancelled if an exception occurs
* Various logging tweaks
* Prevent master playlist wait time from going negative
* Stop gracefully if stream worker detects end of stream
* Don't treat master playlist 404 as an error, it just means the stream isn't up
* Set a reasonable log format
* Make soft timeouts not fire unconditionally
* Change soft_hard_timeout signature slightly for ease-of-use (a sketch of such a helper follows this list)
* Make renames not fail if file already exists
* Misc typos
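For reference, a gevent-based sketch of what such a soft/hard timeout helper can look like; the signature here is a guess, not the actual one:
```
import gevent
from contextlib import contextmanager

@contextmanager
def soft_hard_timeout(soft_timeout, hard_timeout, on_soft_timeout):
    # After soft_timeout seconds, call on_soft_timeout() (eg. to log a
    # warning) but let the operation continue. After hard_timeout seconds,
    # abort the block by raising gevent.Timeout inside it.
    soft_timer = gevent.spawn_later(soft_timeout, on_soft_timeout)
    try:
        with gevent.Timeout(hard_timeout):
            yield
    finally:
        # Cancel the soft timer even if the block raised, otherwise it
        # fires later during unrelated work (the bug fixed above).
        soft_timer.kill(block=False)
```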
This makes the code crazy complicated and messy, but it means we can be persistent about
not giving up, while still retrying, and can try multiple urls at once
until we find one that works.
See docstrings for a full discussion on some of the failures we're trying to work around.
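The core trick is something like the following sketch: race all candidate urls and take the first success (fetch and the failure handling are simplified here):
```
import gevent

def fetch_first_working(urls, fetch):
    # Try every candidate url at once; return the first successful result
    # and kill the rest, so one bad url never blocks the others.
    jobs = [gevent.spawn(fetch, url) for url in urls]
    try:
        for job in gevent.iwait(jobs):
            if job.successful():
                return job.value
        raise Exception("all urls failed: {}".format(urls))
    finally:
        gevent.killall(jobs, block=False)
```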
This is a useful library and we might as well use it.
Copying it over and slightly modifying it to work was easier than importing all of streamlink.
The original version may be found at 30043408c7/src/streamlink/stream/hls_playlist.py