Because load functions are called recursively (whenever load statements are
found in a tabby file), threading an encoding parameter through every call
would be too much hassle - better to store it as `self._encoding`.
When an encoding is explicitly specified, it will be used.
Otherwise, the default encoding used by `Path.open()` will be tried, and
charset_normalizer will be used to guess the encoding if that fails.
If reading a TSV file with the default encoding fails, roll out a
cannon (charset-normalizer) and try to guess the encoding to use.
By default, `Path.open()` uses the locale's preferred encoding when reading
a file (which means we implicitly use UTF-8, at least on
Linux). This would fail when reading files with non-ASCII characters
prepared (with not-uncommon settings) on Windows. There is no perfect
way to learn the encoding of a plain text file, but existing tools
seem to do a good job.
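The fallback described above could be sketched roughly as follows. The
function name `read_tsv_text` is hypothetical and not part of the actual
loader; only `charset_normalizer.from_path()` and its `.best()` result are
real library API.

```python
from pathlib import Path
from typing import Optional


def read_tsv_text(path: Path, encoding: Optional[str] = None) -> str:
    """Read a text file, guessing the encoding only if the default fails.

    A minimal sketch of the described fallback, not the actual implementation.
    """
    if encoding is not None:
        # An explicitly specified encoding is always used as-is.
        return path.read_text(encoding=encoding)
    try:
        # Try the platform default first (locale-dependent, UTF-8 on Linux).
        return path.read_text()
    except UnicodeDecodeError:
        # Roll out the cannon: let charset-normalizer guess the encoding.
        from charset_normalizer import from_path

        best = from_path(path).best()
        if best is None:
            # No plausible guess; re-raise the original decoding error.
            raise
        return path.read_text(encoding=best.encoding)
```

Guessing only after the default fails keeps the common case (correctly
encoded local files) fast and deterministic.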
This commit refactors the tabby loader, makes it use the guessed encoding
(but only after the default fails), and closes #112.

https://charset-normalizer.readthedocs.io