Update README.md
@@ -2,6 +2,8 @@
<details>
<summary>Updates:</summary>

10 Feb 2024: Added some features from JaiZed's branch, such as skipping files when SDH subtitles are detected, transcribing audio files, manually transcribing individual files, and a better implementation of forceLanguage. Added the /batch endpoint (thanks JaiZed): navigate in a browser to http://subgen_ip:8090/docs and call it with a file or a folder to manually transcribe files (see the example after these updates). Added CLEAR_VRAM_ON_COMPLETE, HF_TRANSFORMERS, and HF_BATCH_SIZE. Hugging Face Transformers boast a '9x increase' in speed, but I have been unable to test that at this point.

8 Feb 2024: Added FORCE_DETECTED_LANGUAGE_TO to override a wrongly detected language. Fixed asr so it actually uses the language passed to it.

5 Feb 2024: General housekeeping and minor tweaks to the TRANSCRIBE_FOLDERS function. Added a GHCR repo in addition to Docker Hub.
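
For illustration only, a manual call to the /batch endpoint could look like the sketch below. The HTTP method and the `path` query parameter name are assumptions for this example; open http://subgen_ip:8090/docs to see the parameters the endpoint actually accepts.

```bash
# Hypothetical call to the /batch endpoint. The POST method and the 'path'
# parameter name are assumptions for illustration only; check
# http://subgen_ip:8090/docs for the real signature.
curl -X POST "http://subgen_ip:8090/batch?path=/media/tv/Some%20Show/Season%2001"
```
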
@@ -140,6 +142,9 @@ The following environment variables are available in Docker. They will default
| COMPUTE_TYPE | 'auto' | Set compute-type using the following information: https://github.com/OpenNMT/CTranslate2/blob/master/docs/quantization.md |
| DEBUG | False | Provides some debug data that can be helpful for troubleshooting path mapping and other issues. Fun fact: if this is set to True, any modification to the script will auto-reload it (as long as it isn't actively transcoding). Useful for making small tweaks without re-downloading the whole file. |
| FORCE_DETECTED_LANGUAGE_TO | '' | Forces the model to use a specified language instead of the detected one; takes a two-letter language code. For example, if your audio is French but keeps being detected as English, set this to 'fr'. |
| CLEAR_VRAM_ON_COMPLETE | 'True' | Deletes the model and runs garbage collection when the queue is empty. Good if you need the VRAM for something else. |
| HF_TRANSFORMERS | 'False' | Uses Hugging Face Transformers models, which should be faster; untested so far because Hugging Face was down at the time of writing. |
| HF_BATCH_SIZE | 24 | Batch size used with HF_TRANSFORMERS. Batch size correlates with VRAM usage, but the exact relationship is not yet clear and may require tinkering. |
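
As a rough sketch (not an official configuration), these variables are passed to the container like any other environment variable; the values and the volume path below are placeholders:

```bash
# Placeholder values for illustration only; adjust settings and paths for your setup.
docker run -d \
  -p 8090:8090 \
  -e DEBUG=True \
  -e FORCE_DETECTED_LANGUAGE_TO=fr \
  -e CLEAR_VRAM_ON_COMPLETE=True \
  -e HF_TRANSFORMERS=False \
  -e HF_BATCH_SIZE=24 \
  -v /path/to/media:/media \
  mccloud/subgen:latest
```
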
### Images:
mccloud/subgen:latest ~~or mccloud/subgen:cpu is CPU only (smaller)~~<br>