Multiprocess manager does not restart worker if there is only 1 #2390
Replies: 4 comments 3 replies
---
Is anyone able to shed some light on this? The FastAPI documentation suggests using a single worker process when running in a container cluster, but this approach is incompatible with also using `--limit-max-requests`. Is this a bug, or is there some other way to accomplish this that I am missing?
---

I think your understanding is on point. In the main entry point:

```python
if config.should_reload:
    sock = config.bind_socket()
    ChangeReload(config, target=server.run, sockets=[sock]).run()
elif config.workers > 1:
    sock = config.bind_socket()
    Multiprocess(config, target=server.run, sockets=[sock]).run()
else:
    server.run()
```

You are in the case `config.workers > 1`. Now look at `keep_subprocess_alive`. In this function, you can see that when a child process dies, another process is started:

```python
def keep_subprocess_alive(self) -> None:
    if self.should_exit.is_set():
        return  # parent process is exiting, no need to keep subprocess alive
    for idx, process in enumerate(self.processes):
        if process.is_alive():
            continue
        process.kill()  # process is hung, kill it
        process.join()
        if self.should_exit.is_set():
            return  # pragma: full coverage
        logger.info(f"Child process [{process.pid}] died")
        process = Process(self.config, self.target, self.sockets)
        process.start()
        self.processes[idx] = process
```

Now in the `Server` class, the max-requests check is:

```python
if self.config.limit_max_requests is not None:
    return self.server_state.total_requests >= self.config.limit_max_requests
```

So from my understanding, `limit_max_requests` is actually respected, in the sense that it kills your child process when the number of requests is exceeded. However, the implementation in `Multiprocess` is such that if a process is killed, another is created.
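The control flow above can be condensed into a small, self-contained simulation. This is purely illustrative (it is not uvicorn's actual code): it just models the fact that a supervisor exists only when `workers > 1`, which is why the two startup modes behave differently once the request limit is hit.

```python
def run_server(workers: int, limit: int, requests: int) -> str:
    """Toy model of uvicorn's dispatch (illustrative, not real uvicorn code).

    With workers > 1, a supervisor replaces any child that exits after
    serving `limit` requests, so the app keeps running. With 1 worker
    there is no supervisor: hitting the limit ends the application.
    """
    if workers > 1:
        # Supervisor loop: a dead child is replaced by a fresh one,
        # mirroring keep_subprocess_alive's behaviour.
        handled = 0
        while handled < requests:
            served = min(limit, requests - handled)  # child exits at `limit`
            handled += served
            # ...a new child process would be started here...
        return "still running"
    # Single worker, no supervisor: reaching the limit shuts the app down.
    return "shut down" if requests >= limit else "still running"
```

For example, `run_server(workers=2, limit=3, requests=6)` keeps running, while `run_server(workers=1, limit=3, requests=3)` shuts down, matching the behaviour described in this thread.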
---
I realise it may be bad etiquette to bump my own question, but this has become relevant for me again, and the behavior is still the same with fastapi 0.120.3 and uvicorn 0.38.0. It would be helpful to know whether there is a workaround for this, or whether it is a bug that could be fixed.
---
This seems to be by design: using 1 worker means not using Uvicorn's process manager at all. Edit: I thought otherwise at first; I thought reload needed the process manager too.

Probably 1 worker is meant for container systems, which can restart it themselves. IMHO, what I find strange in the Uvicorn options is this behaviour: that's more of a hindrance to me than a feature. You may be able to bypass this with a worker count greater than 1.
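A sketch of that bypass, started programmatically via `uvicorn.run` (the module path `main:app` and the limit value here are placeholders, not anything from this thread):

```python
# Workaround sketch (assumptions: the app lives in main.py as `app`,
# and 100 requests per worker is an acceptable recycling interval).
# Asking for more than one worker engages the Multiprocess manager,
# so exhausted workers are replaced instead of the whole application
# shutting down.
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "main:app",
        workers=2,               # > 1 engages the process manager
        limit_max_requests=100,  # recycle each worker after 100 requests
    )
```

The trade-off is that you run two workers even if one would have been enough for your container's sizing.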
---
If I start uvicorn with just 1 worker and set a limit for the maximum number of requests, then the application shuts down after that number of requests. If I instead use 2 workers, the workers are restarted after handling the specified number of requests.

I can tell from the logging that when I use 2 workers a parent process is started (`INFO: Started parent process`). I assume this parent process is responsible for starting new workers and that it does not run when using 1 worker. So in a sense I understand what is happening, but I find it quite unintuitive. Is there some reason not to let a solitary worker be restarted?

main.py
Start the application with `uvicorn --limit-max-requests 3 --workers 1 main:app`, send three requests with `curl localhost:8000`, and the entire application shuts down. Start with `uvicorn --limit-max-requests 3 --workers 2 main:app` instead, send 3 requests, and only the worker that handled them is shut down and a new one is started.

I'm using python 3.12.1, fastapi 0.111.0, and uvicorn 0.30.1.