From 0406cdc79d53763ab55c3027e3398aa21a5181ad Mon Sep 17 00:00:00 2001
From: McCloudS <64094529+McCloudS@users.noreply.github.com>
Date: Sun, 11 Feb 2024 10:10:13 -0700
Subject: [PATCH] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 7b3296b..d11df66 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,8 @@
 Updates:
 
+11 Feb 2024: Added a 'launcher.py' file for Docker to prevent huge image downloads. If you want to maintain an older version of subgen.py, you need to mount it as a path (i.e. "${APPDATA}/subgen/subgen.py:/subgen/subgen.py"). Set UPDATE to True if you want to pull the latest version; otherwise it will default to what was in the image at build time.
+
 10 Feb 2024: Added some features from JaiZed's branch, such as skipping if SDH subtitles are detected, updating functions to also transcribe audio files, allowing individual files to be manually transcribed, and a better implementation of forceLanguage. Added /batch endpoint (thanks JaiZed). Allows you to navigate in a browser to http://subgen_ip:8090/docs and call the batch endpoint, which can take a file or a folder to manually transcribe files. Added CLEAR_VRAM_ON_COMPLETE, HF_TRANSFORMERS, HF_BATCH_SIZE. Hugging Face Transformers boast a '9x increase', but my limited testing shows it's comparable to faster-whisper or slightly slower. I also have an older 8gb GPU. The simplest way to persist HF Transformers models is to set "HF_HUB_CACHE" to "/subgen/models" for Docker (assuming you have the matching volume).
 
 8 Feb 2024: Added FORCE_DETECTED_LANGUAGE_TO to force a wrongly detected language. Fixed asr to actually use the language passed to it.
 
@@ -145,6 +147,7 @@ The following environment variables are available in Docker. They will default
 | CLEAR_VRAM_ON_COMPLETE | 'True' | This will delete the model and do garbage collection when queue is empty. Good if you need to use the VRAM for something else. |
 | HF_TRANSFORMERS | 'False' | Uses Hugging Face Transformers models that should be faster, not tested as of now because HF is down. |
 | HF_BATCH_SIZE | 24 | Batch size to be used with above. Batch size has a correlation to VRAM, not sure what it is yet and may require tinkering. |
+| UPDATE | 'False' | Will pull the latest subgen.py from the repository if True. False will use the original subgen.py built into the image. |
 
 ### Images:
 mccloud/subgen:latest ~~or mccloud/subgen:cpu is CPU only (smaller)~~
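
The 11 Feb entry above (mounting subgen.py, the new UPDATE variable) and the 10 Feb note about HF_HUB_CACHE both describe container configuration. A minimal `docker run` sketch combining them is below; the host paths, port mapping, and the choice to enable HF_TRANSFORMERS are illustrative assumptions, not part of the patch.

```bash
# Sketch only: host paths, port, and enabled features are assumptions.
# UPDATE=True pulls the latest subgen.py at container start (per the
# 11 Feb change); HF_HUB_CACHE persists Hugging Face models in the
# mounted /subgen/models volume (per the 10 Feb note).
docker run -d --name subgen \
  -p 8090:8090 \
  -e UPDATE=True \
  -e HF_TRANSFORMERS=True \
  -e HF_HUB_CACHE=/subgen/models \
  -v "${APPDATA}/subgen/models:/subgen/models" \
  mccloud/subgen:latest

# To pin an older subgen.py instead, leave UPDATE at its default
# (False) and mount your own copy over the one baked into the image:
#   -v "${APPDATA}/subgen/subgen.py:/subgen/subgen.py"
```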
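For the /batch endpoint, the patch only says it accepts a file or a folder and is documented at http://subgen_ip:8090/docs. The curl sketch below is a guess at the call shape; the parameter name `directory` is hypothetical, so confirm the real schema on the interactive /docs page before use.

```bash
# Hypothetical invocation -- "directory" is an assumed parameter name;
# the FastAPI page at http://subgen_ip:8090/docs shows the real schema.
curl -X POST "http://subgen_ip:8090/batch?directory=/media/tv/MyShow"
```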