
Issue with the streaming mode #47

Open
Haurrus opened this issue Jan 19, 2024 · 5 comments

Haurrus commented Jan 19, 2024

Here's my issue when I try to use streaming mode; I'm on Windows:

2024-01-19 13:33:23.733 | WARNING | __mp_main__:<module>:78 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2024-01-19 13:33:23.733 | INFO | __mp_main__:<module>:81 - You launched an improved version of streaming, this version features an improved tokenizer and more context when processing sentences, which can be good for complex languages like Chinese
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 122, in spawn_mai exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 131, in _main
prepare(preparation_data)
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 291, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\server.py", line 85, in
engine = CoquiEngine(specific_model=MODEL_VERSION,use_deepspeed=DEEPSPEED,local_models_path=str(model_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\base_engine.py", line 11, in call
instance = super().call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 83, in init
set_start_method('spawn')
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
Traceback (most recent call last):
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\server.py", line 85, in
engine = CoquiEngine(specific_model=MODEL_VERSION,use_deepspeed=DEEPSPEED,local_models_path=str(model_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\base_engine.py", line 11, in call
instance = super().call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 113, in init
self.main_synthesize_ready_event.wait()
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\synchronize.py", line 356, in wait
self._cond.wait(timeout)
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\synchronize.py", line 268, in wait
return self._wait_semaphore.acquire(True, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
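
For reference, the RuntimeError above comes from `set_start_method('spawn')` being called after the multiprocessing context has already been fixed: on Windows the spawned child re-imports server.py, which constructs CoquiEngine at module level and hits `set_start_method` a second time. A minimal sketch of the usual guard (illustrative only, not the project's actual code):

```python
import multiprocessing

def build_engine():
    # set_start_method() raises "context has already been set" on a second
    # call, so only set it if no start method has been chosen yet.
    if multiprocessing.get_start_method(allow_none=True) is None:
        multiprocessing.set_start_method("spawn")
    # ... construct the engine here ...

if __name__ == "__main__":
    # Guarding the entry point keeps spawned children from re-running
    # module-level engine construction when they re-import this file.
    build_engine()
```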

@FatalErrorVXD

Same:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 259, in call
await wrap(partial(self.listen_for_disconnect, receive))
File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 255, in wrap
await func()
File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 232, in listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 538, in receive
await self.message_event.wait()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\asyncio\locks.py", line 213, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 1d857217bd0

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
    | result = await app( # type: ignore[func-returns-value]
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in call
    | return await self.app(scope, receive, send)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\fastapi\applications.py", line 1054, in call
    | await super().call(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\applications.py", line 116, in call
    | await self.middleware_stack(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in call
    | raise exc
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in call
    | await self.app(scope, receive, _send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\middleware\cors.py", line 83, in call
    | await self.app(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in call
    | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette_exception_handler.py", line 55, in wrapped_app
    | raise exc
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette_exception_handler.py", line 44, in wrapped_app
    | await app(scope, receive, sender)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\routing.py", line 746, in call
    | await route.handle(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\routing.py", line 288, in handle
    | await self.app(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\routing.py", line 75, in app
    | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette_exception_handler.py", line 55, in wrapped_app
    | raise exc
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette_exception_handler.py", line 44, in wrapped_app
    | await app(scope, receive, sender)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\routing.py", line 73, in app
    | await response(scope, receive, send)
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 252, in call
    | async with anyio.create_task_group() as task_group:
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\anyio_backends_asyncio.py", line 678, in aexit
    | raise BaseExceptionGroup(
    | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
    +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 255, in wrap
    | await func()
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\starlette\responses.py", line 244, in stream_response
    | async for chunk in self.body_iterator:
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\xtts_api_server\server.py", line 239, in generator
    | async for chunk in chunks:
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\xtts_api_server\tts_funcs.py", line 591, in stream_fn
    | async for chunk in self.stream_generation(clear_text,speaker_name_or_path,speaker_wav,language,output_file):
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\xtts_api_server\tts_funcs.py", line 456, in stream_generation
    | gpt_cond_latent, speaker_embedding = self.get_or_create_latents(speaker_name, speaker_wav)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "C:\ai\SillyTavern\xtts2\venv\Lib\site-packages\xtts_api_server\tts_funcs.py", line 260, in get_or_create_latents
    | gpt_cond_latent, speaker_embedding = self.model.get_conditioning_latents(speaker_wav)
    | ^^^^^^^^^^
    | AttributeError: 'TTSWrapper' object has no attribute 'model'
    +------------------------------------
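
The AttributeError at the bottom suggests the streaming request reaches the latent-computation path before any model has been attached to the TTSWrapper (in streaming mode the model presumably lives in the RealtimeTTS engine process instead). A hypothetical guard that would at least fail with a clearer message (the names mirror the traceback, but this is a sketch, not the project's actual code):

```python
class TTSWrapper:
    def __init__(self):
        # In streaming mode the XTTS model is owned by the engine process,
        # so this attribute may legitimately never be populated here.
        self.model = None

    def get_or_create_latents(self, speaker_name, speaker_wav):
        if self.model is None:
            raise RuntimeError(
                "No local TTS model loaded; streaming requests should be "
                "routed to the RealtimeTTS engine, not this code path."
            )
        return self.model.get_conditioning_latents(speaker_wav)
```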

clockworkwhale commented Feb 26, 2024

I'm also getting the exact same error as the two users above when attempting to generate with --streaming-mode or --streaming-mode-improve enabled. Everything works normally if the server is launched without one of those flags.

@theobjectivedad

+1, same issue; tried with and without DeepSpeed.

@scalar27

+1, on Mac M1

@Kirinxxx

Had the same issue. Found the solution hidden in the instructions: when you activate streaming mode for the server, you need to uncheck streaming within SillyTavern itself.
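
In other words, when the server does the streaming, the client should consume the HTTP response as a plain byte stream rather than layering its own streaming on top. A rough illustration with requests (the endpoint path, port, and parameter names below are assumptions, not taken from the repository; adjust them to your setup):

```python
import requests

# Hypothetical endpoint and defaults; check your xtts-api-server config.
URL = "http://localhost:8020/tts_stream"

with requests.get(
    URL,
    params={"text": "Hello there", "speaker_wav": "female", "language": "en"},
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    with open("out.wav", "wb") as f:
        # Write raw chunks as the server produces them instead of
        # buffering the whole response in memory.
        for chunk in resp.iter_content(chunk_size=4096):
            f.write(chunk)
```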
