[Issue]: 10.9.0 - RAM keeps steadily increasing #11551

Open · 1 task done
nothing2obvi opened this issue May 12, 2024 · 20 comments
Labels
bug Something isn't working

Comments

@nothing2obvi

nothing2obvi commented May 12, 2024

Please describe your bug

I don't have the exact numbers, but I don't think 10.8.13 ever used more than 750 MB of RAM for me. I think it was even lower than that. It definitely never reached 1 GB, though, even with multiple streams. So far, with one 1080p stream and no transcoding on Jellyfin 10.9.0, RAM usage is already at 1.88 GB and it just keeps increasing. I have no resource limits set on Docker.

Is this expected behavior? I expected it to stay at its usual levels from 10.8.13. I'm concerned because my server doesn't have a lot of RAM but 10.8.13 worked perfectly on it.

Reproduction Steps

  1. Upgrade to Jellyfin 10.9.0
  2. Play a 1080p stream with no transcoding.
  3. RAM spikes past the levels previously seen in 10.8.13.

Jellyfin Version

10.9.0

if other:

No response

Environment

- OS: Mac 14.4.1
- Linux Kernel: none
- Virtualization: Docker Desktop 4.30.0
- Clients: AndroidTV
- Browser: Opera 109
- FFmpeg Version: whatever is in Jellyfin 10.9.0
- Playback Method: Direct Play
- Hardware Acceleration: none
- GPU Model: none
- Plugins: AniDB, AudioDB, Bookshelf, Cover Art Archive, Fanart, IMVDb, Kodi Sync Queue, MusicBrainz, OMDb, Playback Reporting, SSO Auth, Studio Images, TMDb, TMDb Box Sets, TMDb Trailers, TVmaze, TheTVDB, Trakt, Webhook, YoutubeMetadata
- Reverse Proxy: SWAG
- Base URL: none (I think)
- Networking: host (I think)
- Storage: local

Jellyfin logs

https://pastebin.com/WGsquD5E

FFmpeg logs

No response

Please attach any browser or client logs here

No response

Please attach any screenshots here

Screen Shot 2024-05-11 at 9 52 39 PM

Code of Conduct

  • I agree to follow this project's Code of Conduct
@nothing2obvi nothing2obvi added the bug Something isn't working label May 12, 2024
@jellyfin-bot
Contributor

Hi, it seems like your issue report has the following item(s) that need to be addressed:

  • You are not running an up-to-date version of Jellyfin. Please update to the latest release.

This is an automated message, currently under testing. Please file an issue here if you encounter any problems.

@nothing2obvi
Author

Hello, bot, I am on the latest release.

@felix920506
Member

@nothing2obvi Sorry, the script wasn't updated until a few minutes ago.

As for the memory problem, please just keep an eye on it, see where it stops increasing, and then report back.
There are memory leaks in some upstream projects that we use, and we have put in some measures to mitigate them.
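
If it helps to gather that data, here is a minimal monitoring sketch, assuming a Docker container named jellyfin (purely illustrative, not an official procedure):

# Append a timestamped memory snapshot for the container once a minute
while true; do
  echo "$(date -u +%FT%TZ) $(docker stats --no-stream --format '{{.MemUsage}}' jellyfin)" >> jellyfin-mem.log
  sleep 60
done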

@jellyfin-bot jellyfin-bot added this to Needs triage in Issue Triage for Main Repo May 12, 2024
@nothing2obvi
Author

Hi, thanks for your response. I really would just let it sit, but my server has only 8 GB of RAM, so it was already starting to affect my other services. I've had to restore my 10.8.13 files from backup and will be using that for a while.

When I have time I'll boot up a 10.9.0 instance on another computer and see how it goes.

@sjorge

sjorge commented May 12, 2024

I also observe higher memory usage, but it seems to cap out around 4G. Using the restart button on the dashboard does not free any memory; however, restarting via systemctl knocks it back down to normal levels, in the 512M range.

I wonder if it got bloated because of the initial rescan and the audio normalization run. Will keep an eye out.

Edit: a "scan all" seems to make it increase to around 4G, gaining about 1-2M per second until the scan finishes. It takes roughly 3 scans for it to cap out around 4G for me.
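
A rough way to compare the two restart paths, assuming a systemd install where the service is named jellyfin (adjust to your setup):

# Resident memory of the main Jellyfin process; run before and after a restart
systemctl show -p MainPID --value jellyfin | xargs -I{} ps -o rss=,cmd= -p {}
sudo systemctl restart jellyfin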

@sandervankasteel

I've done a bit of investigating into this issue. On Docker at least (with the official jellyfin image), not all ffmpeg sub-processes appear to be cleaned up after playback (or anything else that uses ffmpeg), and it appears to get worse with the number of streams started.

Screenshots:

After playing one file:
image

After playing 3 files:
image

After restarting the container:
image
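
For anyone who wants to check the same thing on their own setup, a quick sketch; it assumes the container is named jellyfin, and uses docker top so it works even if ps isn't installed inside the image:

# ffmpeg entries that linger after playback has stopped match the screenshots above
docker top jellyfin aux | grep -i ffmpeg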

@nothing2obvi
Author

After restarting the container it seems to settle down to around 1 GB, which I'm fine with. However, I'll leave this issue open as others seem to be having the same problem.

@nothing2obvi
Author

nothing2obvi commented May 15, 2024

I spoke too soon. Even with just one stream (and no transcoding) the RAM climbs to around 1.8 GB and then my Jellyfin instance crashes and restarts. I never had this behavior on 10.8.13; I could play at least 5 streams with ease.

@EVOTk

EVOTk commented May 17, 2024

I also have a lot of problems with memory consumption, whereas on 10.8.x Jellyfin never used more than 4GB.

Now it requires more than 13GB for a single stream ... and goes up to 18GB. 10.9.2 doesn't solve this problem :(

image

Edit :
OMV 7 - Debian 12
32GB

Jellyfin official docker image 10.9.3
plugins : AudioDB - Intro Skipper - MusicBrainz - OMDb - Session Cleaner - Studio Images - TMDb - TheTVDB

@xd003

xd003 commented May 20, 2024

Wondering if this issue gets resolved with #11670?

@phyzical

@EVOTk OOC, when is your Intro Skipper scan running?

@phyzical

Also, just thought I would add that I was seeing memory ballooning in 10.8.

@EVOTk

EVOTk commented May 21, 2024

Hello,
image
I waited several days for the scan to finish (and unlocked the container's RAM limit).

Under 10.8 , the container was limited to 4GB of RAM without any problems.

Under 10.9, the container crashes if I limit the RAM, even at 12GB! Currently, without any playback, scanning, etc., the container consumes 14GB.
image

image
Currently I have no playback or synchronization in progress and the CPU is idle, yet there are many Jellyfin processes consuming memory.
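
For context, the limit being discussed is a per-container memory cap. A hedged sketch with the docker CLI (paths and values are illustrative, not EVOTk's actual setup):

# Run the official image with a hard 12g cap
docker run -d --name jellyfin --memory 12g \
  -v /srv/jellyfin/config:/config -v /srv/media:/media \
  jellyfin/jellyfin:10.9.3

# After a crash, confirm whether the kernel OOM-killed the container
docker inspect -f '{{.State.OOMKilled}}' jellyfin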

@gnattu
Member

gnattu commented May 21, 2024

@EVOTk Have you tried disabling the Intro Skipper plugin, restarting the container, and seeing if it happens again?

@EVOTk

EVOTk commented May 21, 2024

Yes, I've done several tests. Intro Skipper makes RAM consumption rise faster, but it's not the source of the problem. Even with Intro Skipper removed, Jellyfin still consumes a lot of RAM.

It's as if memory is never completely released after playback.

image

image

@nothing2obvi
Author

@EVOTk May I ask what program you're using to get those memory graphs? I'd like to try this with my own Jellyfin instance.

@EVOTk

EVOTk commented May 22, 2024

Hello, it's Portainer: https://docs.portainer.io/start/install-ce
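
If you only need the raw per-container numbers rather than graphs, the Docker CLI gives a similar view (just an alternative, not what the screenshots above come from):

# Live memory/CPU table for the container, similar to Portainer's stats page
docker stats jellyfin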

@phyzical

It's something ffmpeg-related, as that's what Intro Skipper uses to do its thing.

@EVOTk That drop caches command, what is it? Is it a scheduled task that runs?

@EVOTk

EVOTk commented May 22, 2024

@EVOTk That drop caches command, what is it? Is it a scheduled task that runs?

It's a command I've been told to run when Jellyfin uses too much RAM, so that Jellyfin stays usable.
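
For anyone else following along: the command in the graph label is presumably the standard Linux page-cache drop, run on the host. It only clears kernel caches; it does not reclaim memory held by the Jellyfin process itself:

# Flush dirty pages, then drop the page cache, dentries and inodes
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches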

@EVOTk

EVOTk commented May 23, 2024

This morning I received many alerts about RAM consumption.

image

image

Under 10.8, total server RAM consumption never exceeded 14/15GB. Today, it consumes over 90% of 32GB!

The first alerts seem to correspond to the following log entries:

[2024-05-23 02:00:19.934 +00:00] [INF] [20] Emby.Server.Implementations.MediaEncoder.EncodingManager: Stopping chapter extraction for "À la croisée des chemins" because a chapter was found with a position greater than the runtime.
[2024-05-23 02:00:19.935 +00:00] [INF] [20] Emby.Server.Implementations.MediaEncoder.EncodingManager: Stopping chapter extraction for "L'Explosion de Cardiff" because a chapter was found with a position greater than the runtime.
[2024-05-23 02:00:19.935 +00:00] [INF] [20] Emby.Server.Implementations.MediaEncoder.EncodingManager: Stopping chapter extraction for "La Fin du monde" because a chapter was found with a position greater than the runtime.
[2024-05-23 02:00:19.935 +00:00] [INF] [20] Emby.Server.Implementations.MediaEncoder.EncodingManager: Stopping chapter extraction for "Fête des pères" because a chapter was found with a position greater than the runtime.

Currently the Jellyfin server is doing nothing, but the RAM is not released.

Edit: I restarted Jellyfin; after 2 hours I started playback with transcoding:
image

Projects
Status: Team Review

9 participants