Hi @agunapal, I'm writing a Python app using PyQt5, similar to a surveillance camera app. My pipeline looks like this:
The program reads frames from an RTSP link or a video file ---> sends each frame to the inference API using the Python requests library ---> gets results from the TorchServe API response ---> processes the results (draws bounding boxes on the frame) ---> converts the processed frame from a NumPy array to a QPixmap for display in the app.
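For reference, here is a minimal sketch of one iteration of that pipeline. The model name `detector`, the local TorchServe address, and the response schema (`[{"bbox": [x1, y1, x2, y2], "score": ...}, ...]`) are placeholders, not your actual setup:

```python
import cv2
import numpy as np
import requests
from PyQt5.QtGui import QImage, QPixmap

# Assumed model name and address; adjust to your deployment.
TORCHSERVE_URL = "http://localhost:8080/predictions/detector"

def infer_and_draw(frame: np.ndarray) -> np.ndarray:
    """Send one BGR frame to TorchServe and draw the returned boxes on it."""
    ok, buf = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")

    # Blocking HTTP call to the TorchServe inference API.
    resp = requests.post(TORCHSERVE_URL, data=buf.tobytes(), timeout=5)
    resp.raise_for_status()

    # Assumed response schema: [{"bbox": [x1, y1, x2, y2], "score": 0.9}, ...]
    for det in resp.json():
        x1, y1, x2, y2 = map(int, det["bbox"])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return frame

def to_pixmap(frame: np.ndarray) -> QPixmap:
    """Convert a BGR NumPy frame to a QPixmap (call this on the GUI thread)."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    h, w, ch = rgb.shape
    qimg = QImage(rgb.data, w, h, ch * w, QImage.Format_RGB888)
    return QPixmap.fromImage(qimg.copy())  # copy() detaches from the NumPy buffer
```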
For each camera, I have 2 queues: one that stores the original frames and one that stores the processed frames. But it is too slow: each frame takes about 4-5 seconds to process. What strategy can I use in this situation?
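One common cause of multi-second latency is making the blocking `requests.post` call on the GUI thread, so the interface stalls on every frame. Below is a minimal sketch of the two-queue layout you describe, with capture and inference moved to background threads; `infer_and_draw` is the hypothetical helper from the sketch above, and the queue sizes and frame-dropping policy are assumptions:

```python
import queue
import threading
import cv2

def start_camera_pipeline(rtsp_url: str) -> queue.Queue:
    raw_frames = queue.Queue(maxsize=2)        # original frames
    processed_frames = queue.Queue(maxsize=2)  # frames with boxes drawn

    def capture() -> None:
        cap = cv2.VideoCapture(rtsp_url)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Drop the oldest frame instead of blocking: the camera produces
            # frames faster than inference can consume them.
            if raw_frames.full():
                try:
                    raw_frames.get_nowait()
                except queue.Empty:
                    pass
            raw_frames.put(frame)
        cap.release()

    def infer() -> None:
        while True:
            frame = raw_frames.get()
            # infer_and_draw is the hypothetical helper sketched above
            # (HTTP call to TorchServe + drawing boxes).
            processed_frames.put(infer_and_draw(frame))

    threading.Thread(target=capture, daemon=True).start()
    threading.Thread(target=infer, daemon=True).start()
    return processed_frames
```

The GUI thread can then poll `processed_frames` with a `QTimer` and call `to_pixmap` there, since QPixmaps should only be created on the GUI thread.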
Hi guys, I have a question: can I serve several models (about 5-6 models) using both CPU and GPU inference?
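TorchServe can serve multiple models from a single instance. As a hedged sketch, you can register each model archive through the management API (default port 8081); whether workers land on CPU or GPU is governed by the server configuration (e.g. `number_of_gpu` in `config.properties`), and recent TorchServe releases also allow per-model device settings in the archive's `model-config.yaml`, so check the docs for your version. The `.mar` names below are placeholders:

```python
import requests

MANAGEMENT = "http://localhost:8081"  # default TorchServe management port

# Placeholder archives; replace with the models in your model store.
models = ["detector.mar", "classifier.mar", "segmenter.mar"]

for mar in models:
    resp = requests.post(
        f"{MANAGEMENT}/models",
        params={
            "url": mar,             # archive name in the model store
            "initial_workers": 1,   # spin up one worker per model
            "synchronous": "true",  # block until the workers are ready
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(mar, "->", resp.json())

# List everything the server is currently serving.
print(requests.get(f"{MANAGEMENT}/models", timeout=10).json())
```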