Releases: daswer123/xtts-api-server

0.8.6

02 Jan 16:22

  1. Improved cache handling
  2. Fixed a bug where identical responses were returned if requests were made at the same time

Thanks to @chanis2 for the changes, more info in #62

0.8.5

  1. Fixed a potential security issue, thanks to #61

0.8.4

  1. Updated requirements for the transformers library

0.8.3

  1. Reworked Docker, it now works
    UPD 19.01.24 - thanks @mickdekkers for improving the Docker setup

0.8.2

  1. Increased the maximum chunk size for streaming
  2. Corrected the README

0.8.1

  1. Added the ability to customize the chunk size for streaming

0.8.0

  1. Added new endpoints that let you change the folder where models are stored, switch the model without restarting the server, and change the generation parameters (see the sketch after this list).
  2. An endpoint for streaming was added back in version 0.7.6, I advise you to take a look at it, see #37
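
As a rough illustration, the new endpoints might be called like this. This is only a sketch: the route names /switch_model and /set_tts_settings, the port 8020, and the payload fields are assumptions not spelled out in these notes, so check the server's API docs for the actual routes and parameters.

```python
import requests

BASE_URL = "http://localhost:8020"  # assumed default host and port

# Assumed route for switching the model without restarting the server
requests.post(f"{BASE_URL}/switch_model", json={"model_name": "v2.0.2"}).raise_for_status()

# Assumed route and fields for changing the generation parameters
requests.post(
    f"{BASE_URL}/set_tts_settings",
    json={"temperature": 0.75, "top_k": 50, "top_p": 0.85, "speed": 1.0},
).raise_for_status()
```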

0.7.6

21 Dec 00:37

  1. A new endpoint has been added that allows you to get streaming audio (a sketch follows below).
  2. Streaming integration in SillyTavern: SillyTavern/SillyTavern#1623.

Thanks to @Cohee1207 for all the changes in this update
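
A minimal sketch of pulling audio from the streaming endpoint; the route name /tts_stream, the query parameters, the port, and the speaker name are assumptions used for illustration, not details confirmed by these notes.

```python
import requests

BASE_URL = "http://localhost:8020"  # assumed default host and port

# Assumed route and query parameters for the streaming endpoint
params = {
    "text": "Hello from the streaming endpoint",
    "speaker_wav": "calm_female",   # hypothetical speaker sample name
    "language": "en",
}

with requests.get(f"{BASE_URL}/tts_stream", params=params, stream=True) as resp:
    resp.raise_for_status()
    with open("stream_output.wav", "wb") as out:
        for chunk in resp.iter_content(chunk_size=4096):
            out.write(chunk)
```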

0.7.5

  1. Simplified loading of custom models: create a models folder in the root and put your model folder inside it; the model folder must contain the three files model.pth, vocab.json, and config.json

Specify the name of your model folder with a flag, for example -v warcraft3 (see the sketch below)

  2. Removed an unnecessary warning
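
A small pre-flight check for the layout described above; the folder name warcraft3 is taken from the example flag, the rest is just the three required files.

```python
from pathlib import Path

# Layout described above: models/<folder passed via -v>/{model.pth, vocab.json, config.json}
model_dir = Path("models") / "warcraft3"            # folder name from the -v warcraft3 example
required = ("model.pth", "vocab.json", "config.json")

missing = [name for name in required if not (model_dir / name).is_file()]
if missing:
    raise FileNotFoundError(f"{model_dir} is missing: {', '.join(missing)}")
print(f"Custom model folder {model_dir} looks complete")
```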

0.7.4

  1. Connected PyPI and GitHub: https://pypi.org/project/xtts-api-server/

0.7.3

  1. Added a --listen flag, which makes it easier to use the server outside the local PC
  2. You no longer have to use -t; the script will try to detect your local IP address automatically and use it for the preview

0.7.2

  1. Fixed the error that occurred when specifying a path to the speaker file

0.7.1

  1. Added a check for the old model name format, e.g. 2.0.2; such names are automatically converted to the correct form, e.g. from 2.0.2 to v2.0.2

0.7.0

  1. You can load a custom model: create a folder inside the models folder, put the three files vocab.json, config.json, and model.pth in it, then specify the name of that folder with the -v "Model Name" flag.
  2. You can pass the path to a text file as input: pass a path ending in .txt in the text field.
  3. Results can now be cached so you don't have to wait for a new generation when you repeat a request; enable this with the new --use-cache flag. A sketch of points 2 and 3 is shown after this list.
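
A sketch of points 2 and 3: passing a .txt path in the text field of a generation request. The endpoint name /tts_to_audio/, the port, the payload fields, and the file paths are assumptions used for illustration.

```python
import requests

BASE_URL = "http://localhost:8020"          # assumed default host and port
ENDPOINT = f"{BASE_URL}/tts_to_audio/"      # assumed name of the generation endpoint

# A path ending in .txt in the text field should make the server read the file
payload = {
    "text": "inputs/chapter1.txt",          # hypothetical path to a text file
    "speaker_wav": "calm_female",           # hypothetical speaker sample name
    "language": "en",
}

resp = requests.post(ENDPOINT, json=payload)
resp.raise_for_status()
with open("chapter1.wav", "wb") as out:
    out.write(resp.content)

# With --use-cache enabled, repeating the same request should return the cached audio
```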

0.6.8

04 Dec 00:41

  1. Added support for multiple speaker samples as input, thanks @lendot for the update

0.6.7

  1. Added a check for the correct source
  2. Fixed a bug with deepspeed

0.6.6

  1. Fixed a bug with --streaming-mode-improve
  2. Added support for DeepSpeed; it can be enabled via the --deepspeed flag, which will automatically download the necessary libraries

0.6.5

  1. Added the ability to play audio synchronously in streaming mode via the new --stream-play-sync flag

0.6.4

  1. Fixed an error related to multiprocessing, thanks @lendot

0.6.3

  1. Updated the RealtimeTTS library to 3.3.32
  2. Fixed an issue with dependencies caused by the TTS update

0.6.1

  1. Updated the RealtimeTTS library to 3.3.32
  2. Added info on a webui for fine-tuning XTTS

0.6.0

  1. Added the ability to select the device to use, via the -d flag

0.5.9

30 Nov 22:54

  1. Updated the TTS version requirement to >=0.21.2

0.5.8

  1. Fixed low generation quality on -ms local

0.5.7

  1. Thanks @sharockys for helping me set up auto-deploy on Docker
  2. After 10 commits, I think I got it set up on PyPI.

0.5.6

29 Nov 03:25

0.5.0

  1. Streaming mode has been added, more details here: https://github.com/daswer123/xtts-api-server#about-streaming-mode #10
  2. Docker updated

0.5.1

  1. Changed the model folder so that it works together with RealtimeTTS
  2. Added information at startup about streaming mode

0.5.2

  1. Fixed a bug where models were downloaded twice

0.5.3

  1. Updated the RealtimeTTS library to version 0.3

0.5.4

  1. Updated the RealtimeTTS library to version 0.31
  2. Now in streaming mode you can interrupt the current stream and start a new one

0.5.5

  1. Reduced the wait time after interrupting a response to 0.1 seconds
    Some delay is still needed to avoid stuttering.

0.5.6

  1. Added a new flag --streaming-mode-improve which enables an improved version of streaming, more details in the README
  2. Removed the timer used when interrupting a stream and starting a new one; since update 0.32.0 this is handled automatically without errors, thanks @KoljaB for the quick fix in RealtimeTTS
  3. Updated the RealtimeTTS library to version 0.3.2
  4. Added a stream2sentence check

0.4.5

27 Nov 08:21

  1. Added changelog information

0.4.4

  1. Slightly improved the operation of --lowvram
  2. Added the possibility to select the model version via the -v flag

0.4.3

  1. Added the --lowvram flag, which keeps the model in RAM and loads it into VRAM only during conversion

0.4.2

  1. Changed the default model-source option to apiManual
  2. Added the apiManual option, which works like the normal api option but with model 2.0.2.

0.4.1

  1. Changed the names of the model_source flag options to make them clearer. They are now local and api

0.4.0

  1. Docker support has been added
  2. Added a model loading method
  3. Added a TTS version check
  4. Changed the GET request from speakers_default to speakers_list for clarity (see the sketch after this list)
  5. Updated the note on creating samples for cloning
  6. Fixed a bug with code 307

For contributions and code samples, I'd like to thank @sharockys and @erew123
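
A minimal example of the renamed GET request; speakers_list comes from the note above, while the host and port are assumptions.

```python
import requests

BASE_URL = "http://localhost:8020"  # assumed default host and port

# speakers_list is the GET route named in the notes above
resp = requests.get(f"{BASE_URL}/speakers_list")
resp.raise_for_status()
print(resp.json())  # inspect the returned speaker data
```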

0.3.2

23 Nov 04:47

0.3.0
Fixed Japanese support
Fixed some silly comments in the code
The file is now returned instead of a stream, which seems to have improved playback speed in SillyTavern.

0.3.1
Fixed the voice preview display when using Google Colab

0.3.2
Added Hindi support

0.2.5

21 Nov 10:39

Fixed model loading

0.2

21 Nov 08:56

Added a GET endpoint to get the speakers for SillyTavern
Adapted for SillyTavern

0.1

21 Nov 05:40

Packaged for PyPI, server configured