
Kernel: Stop allocating physical pages for mapped MMIO regions #24279

Merged
merged 2 commits into SerenityOS:master from fix_mmio_ppage_usage on May 17, 2024

Conversation

@IdanHo (Member) commented May 10, 2024

Since MMIO is placed at fixed physical addresses and does not need to be backed by real RAM physical pages, there's no need to use PhysicalPage instances to track its pages.
This results in slightly fewer allocations, but more importantly it makes MMIO addresses that end up after the normal RAM ranges (as 64-bit PCI BARs usually are) work.

Fixes #22720.
If #24254 is merged before this one, do not merge this until I add a commit reverting the commit in that PR that temporarily disables usage of 64-bit PCI MMIO addresses.
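
To illustrate the shape of the change, here is a minimal, self-contained C++ sketch (not the actual SerenityOS implementation; the type and member names are illustrative assumptions): a RAM-backed VM object tracks individually allocated, ref-counted pages, while an MMIO-backed one only needs to remember its fixed physical base address and length.

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

using PhysicalAddress = std::uint64_t;
static constexpr std::size_t PAGE_SIZE = 4096;

// Stand-in for a ref-counted handle to an allocatable RAM page.
struct PhysicalRAMPage {
    PhysicalAddress paddr { 0 };
};

// RAM-backed object: every page is a real physical page that had to be allocated
// and is tracked through its own handle object.
struct AnonymousVMObjectSketch {
    std::vector<std::shared_ptr<PhysicalRAMPage>> pages;
};

// MMIO-backed object: the "pages" live at fixed physical addresses defined by the
// hardware, so no per-page tracking objects (and no allocations) are needed.
struct MMIOVMObjectSketch {
    PhysicalAddress base { 0 };
    std::size_t length { 0 };

    PhysicalAddress physical_address_of_page(std::size_t page_index) const
    {
        return base + page_index * PAGE_SIZE;
    }
};
```

Because nothing in the MMIO case depends on RAM page tracking, physical addresses that lie above the end of RAM (such as 64-bit PCI BARs) need no special handling.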

@github-actions github-actions bot added the 👀 pr-needs-review label May 10, 2024
@supercomputer7 (Member) commented:

FWIW, to me it was always quite clear that PhysicalPage could back a non-RAM physical memory region.
Maybe we should rename this class to PhysicalRAMPage then?

@IdanHo (Member, Author) commented May 11, 2024

> FWIW, to me it was always quite clear that PhysicalPage could back a non-RAM physical memory region.

AFAICT you don't actually gain anything by backing non-RAM pages with these, since the only two features the class supports are irrelevant or outright contradictory for non-RAM pages:

  • Reference counting to allow for dynamic allocation (which we disable for non-RAM pages via the added m_may_return_to_freelist flag)
  • Allowing usage of sentinel values like the shared zero page and lazy committed pages

So there's no point in using them to back non-RAM regions. We can rename them if you think that helps.
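
As a rough illustration of the two features listed above, here is a self-contained sketch (not the real PhysicalPage class; names such as PhysicalPageSketch and return_page_to_freelist are placeholders): refcounting that can hand a page back to the allocator, gated by a may-return-to-freelist flag, plus the possibility of never-freed sentinel pages.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative sketch of the two features above (not the real PhysicalPage class):
// ref-counting that may hand the page back to the allocator's freelist, and
// never-freed "sentinel" pages such as the shared zero page.
class PhysicalPageSketch {
public:
    enum class MayReturnToFreeList { No, Yes };

    explicit PhysicalPageSketch(MayReturnToFreeList may_return)
        : m_may_return_to_freelist(may_return == MayReturnToFreeList::Yes)
    {
    }

    void ref() { m_ref_count.fetch_add(1, std::memory_order_relaxed); }

    void unref()
    {
        if (m_ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            // Sentinel pages (and, before this PR, non-RAM pages) set
            // m_may_return_to_freelist to false so they are never freed.
            if (m_may_return_to_freelist)
                return_page_to_freelist();
        }
    }

private:
    void return_page_to_freelist() { /* would hand the frame back to the physical allocator */ }

    std::atomic<std::uint32_t> m_ref_count { 1 };
    bool m_may_return_to_freelist { true };
};
```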

@supercomputer7 (Member) commented May 11, 2024

> […] So there's no point in using them to back non-RAM regions. We can rename them if you think that helps.

Sure, I see your point here. I probably thought that someday we would use them as a way to expose physical ranges (and their usage) to userland in /sys, a bit like /proc/iomem on Linux.
Anyway, I'd still like the class to be renamed: PhysicalRAMPage isn't overly verbose, and it makes the purpose of the class self-explanatory.

@@ -80,7 +80,7 @@ ErrorOr<void> DisplayConnector::allocate_framebuffer_resources(size_t rounded_si
if (!m_framebuffer_at_arbitrary_physical_range) {
VERIFY(m_framebuffer_address.value().page_base() == m_framebuffer_address.value());
m_shared_framebuffer_vmobject = TRY(Memory::SharedFramebufferVMObject::try_create_for_physical_range(m_framebuffer_address.value(), rounded_size));

@spholz (Contributor) commented May 11, 2024

SharedFramebufferVMObject::try_create_for_physical_range will currently create an AnonymousVMObject.
Should it use an MMIOVMObject now when this constructor is used?

@IdanHo (Member, Author) replied:

It's not obvious to me whether it should; it does a bit too much trickery with switching between regions/VMObjects etc. for me to be sure. @supercomputer7 what do you think?

@spholz (Contributor) commented May 12, 2024

The Raspberry Pi framebuffer is in the VideoCore memory, which is after RAM.
So if we want to use the devicetree for the memory map on aarch64, we would have to make the SharedFramebufferVMObject work with memory after RAM.
That is the reason why I brought this up.

@IdanHo (Member, Author) replied:

Generally, I would agree, but there's a bunch of logic in SharedFramebufferVMObject that deals with mapping and remapping the backing regions live (using the PhysicalPages), which I don't think would currently work with MMIOVMObject.
TBH, I'm not sure all of this logic has to be this complicated, but that's an issue for another time.

@supercomputer7 (Member) replied:

If we want to support both virtual consoles and a graphical desktop, we have to ensure that when you move from a graphical TTY to a non-graphical one (or the other way around), either the VT code or the WindowManager gets access to print pixels. This is why it's called a shared buffer, even though only one client has access to it at a time.

As for the actual implementation, it could be MMIO-oriented, although some GPUs don't allow MMIO access to the framebuffer at all and instead require initiating a DMA transfer somehow.
But for most of our implementations we should back the shared-buffer VMObject with an anonymous VMObject for the shadow copy, and an MMIO one for actual access.
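
A minimal sketch of that two-backing idea (illustrative only; these are not the actual SharedFramebufferVMObject types or APIs): the shared buffer keeps both a RAM-backed shadow copy and an MMIO mapping of the real framebuffer, and only the currently active client is pointed at the real one.

```cpp
#include <memory>

// Hypothetical stand-ins for the two kinds of backing objects.
struct AnonymousBackingSketch { /* RAM-backed shadow copy of the framebuffer contents */ };
struct MMIOBackingSketch { /* maps the real framebuffer at its fixed physical address */ };

// Illustrative sketch only: the shared framebuffer owns both backings and decides
// which client (graphical desktop or text-mode virtual console) currently sees the
// real framebuffer; the inactive client writes into the shadow copy instead.
class SharedFramebufferSketch {
public:
    enum class ActiveClient { GraphicalDesktop, VirtualConsole };

    void set_active_client(ActiveClient client) { m_active_client = client; }

    bool client_sees_real_framebuffer(ActiveClient client) const
    {
        return client == m_active_client;
    }

private:
    ActiveClient m_active_client { ActiveClient::GraphicalDesktop };
    std::unique_ptr<AnonymousBackingSketch> m_shadow_copy { std::make_unique<AnonymousBackingSketch>() };
    std::unique_ptr<MMIOBackingSketch> m_real_framebuffer { std::make_unique<MMIOBackingSketch>() };
};
```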

@spholz (Contributor) commented May 11, 2024

If we for some reason get page faults in a high MMIO region, Region::handle_fault will cause a panic, as it will try to get the PhysicalPage for the faulting address: https://github.com/SerenityOS/serenity/blob/master/Kernel/Memory/Region.cpp#L378. (same for the RISC-V handle_fault version)
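
Roughly, the failing path looks like this (a simplified sketch with assumed names, not the actual Region::handle_fault code): the handler computes the page index for the faulting address and expects a backing RAM page to exist there, which an MMIO-backed region no longer has.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using VirtualAddress = std::uintptr_t;
static constexpr std::size_t PAGE_SIZE = 4096;

struct PhysicalRAMPage { }; // only RAM-backed pages have one of these

// Simplified sketch with assumed names (not the actual Region::handle_fault code):
// the handler turns the faulting address into a page index within the region and
// expects a backing RAM page there. An MMIO-backed region has no such pages, so the
// lookup fails and the kernel panics (VERIFY in the real code, assert() here).
struct RegionSketch {
    VirtualAddress base { 0 };
    std::vector<PhysicalRAMPage*> ram_pages; // left empty for MMIO-backed regions

    void handle_fault(VirtualAddress fault_address) const
    {
        std::size_t page_index = (fault_address - base) / PAGE_SIZE;
        assert(page_index < ram_pages.size() && ram_pages[page_index] != nullptr);
        // ... would resolve the fault using ram_pages[page_index] here ...
    }
};
```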

@IdanHo (Member, Author) commented May 12, 2024

> If we for some reason get page faults in a high MMIO region, Region::handle_fault will cause a panic, as it will try to get the PhysicalPage for the faulting address

Sure, but there's no legitimate way to get a page fault on a high MMIO region; if you did, it must be a kernel bug, so I don't think that's an issue. (If we did want to do something about it, it would only be to print a slightly nicer PANIC message.)

As MMIO is placed at fixed physical addresses, and does not need to be
backed by real RAM physical pages, there's no need to use PhysicalPage
instances to track their pages.
This results in slightly reduced allocations, but more importantly
makes MMIO addresses which end up after the normal RAM ranges work,
like 64-bit PCI BARs usually are.

Since these are now only used to represent RAM pages (and not MMIO
pages), rename them to make their purpose more obvious.
@ADKaster ADKaster merged commit 26cff62 into SerenityOS:master May 17, 2024
12 checks passed
@github-actions github-actions bot removed the 👀 pr-needs-review label May 17, 2024
@IdanHo IdanHo deleted the fix_mmio_ppage_usage branch May 17, 2024 22:35
Successfully merging this pull request may close these issues.

Kernel: Running with >= 4G RAM causes panic when using SeaBIOS 1.16.3
4 participants