Message ID | 20200622092719.1380968-1-ardb@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | [RFC] arm64/acpi: disallow AML memory opregions to access kernel memory |
On Mon, Jun 22, 2020 at 3:27 AM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> ACPI provides support for SystemMemory opregions, to allow AML methods
> to access MMIO registers of, e.g., GPIO controllers, or access reserved
> regions of memory that are owned by the firmware.
>
> Currently, we also permit AML methods to access memory that is owned by
> the kernel and mapped via the linear region, which does not seem to be
> supported by a valid use case, and exposes the kernel's internal state
> to AML methods that may be buggy and exploitable.
>
> So close the door on this, and simply reject AML remapping requests for
> any memory that has a valid mapping in the linear region.
>
> Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/acpi.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index a45366c3909b..18dcef4e6764 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
>  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
>                                              acpi_size size)
>  {
> -       /* For normal memory we already have a cacheable mapping. */
> +       /* Don't allow access to kernel memory from AML code */
>         if (memblock_is_map_memory(phys))
> -               return (void __iomem *)__phys_to_virt(phys);
> +               return NULL;

I'm happy to see that implementation-wise it's so easy. Take my
Acked-by, but I'd really prefer somebody with some ACPI experience, who
has looked at tons of DSDTs over the years, to say whether or not this
will break hardware.

[As an aside, the current implementation is actually "wrong", since it
will trap when an ASL method tries to write to regions mapped as
read-only, which shouldn't happen when selecting physical addresses. I
learned this the ~hard way when writing those exploits last week. :-P]
Hmm, actually...

> >         if (memblock_is_map_memory(phys))
> > -               return (void __iomem *)__phys_to_virt(phys);
> > +               return NULL;

It might be prudent to have this check take into account the size of
the region being mapped. I realize ACPI considers it undefined if you
cross region boundaries, but I could imagine actual system behavior
being somewhat more complicated, and a clever bypass being possible.
Hypothetically: KASLR starts the kernel at phys_base+offset,
[phys_base, rounddownpage(offset)) doesn't get mapped, a malicious ACPI
table then maps phys_base+rounddownpage(offset)-1, and then this check
doesn't get hit.
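For illustration, a range-aware version of the check might look like the
sketch below. It only reuses what is already visible in the header
(memblock_is_map_memory() and the existing __ioremap() tail); the
page-by-page walk is an assumption made for the sketch and is not the
actual v2 patch.

```c
/*
 * Sketch only: refuse the AML mapping if *any* page of the requested
 * window [phys, phys + size) is memory the kernel maps via the linear
 * region, so a window straddling the edge of mapped memory is caught
 * as well as one starting inside it.
 */
static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
					    acpi_size size)
{
	phys_addr_t p, end = phys + size - 1;

	for (p = phys & PAGE_MASK; p <= end; p += PAGE_SIZE) {
		/* Don't allow access to kernel memory from AML code */
		if (memblock_is_map_memory(p))
			return NULL;
	}

	/* Otherwise fall back to the existing attribute-honouring mapping. */
	return __ioremap(phys, size, __acpi_get_mem_attribute(phys));
}
```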
On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> ACPI provides support for SystemMemory opregions, to allow AML methods
> to access MMIO registers of, e.g., GPIO controllers, or access reserved
> regions of memory that are owned by the firmware.
>
> Currently, we also permit AML methods to access memory that is owned by
> the kernel and mapped via the linear region, which does not seem to be
> supported by a valid use case, and exposes the kernel's internal state
> to AML methods that may be buggy and exploitable.
>
> So close the door on this, and simply reject AML remapping requests for
> any memory that has a valid mapping in the linear region.
>
> Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/acpi.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index a45366c3909b..18dcef4e6764 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
>  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
>                                              acpi_size size)
>  {
> -       /* For normal memory we already have a cacheable mapping. */
> +       /* Don't allow access to kernel memory from AML code */
>         if (memblock_is_map_memory(phys))
> -               return (void __iomem *)__phys_to_virt(phys);
> +               return NULL;

I wonder if it would be better to poison this so that if we do see
reports of AML crashes we'll know straight away that it tried to access
memory mapped by the linear region, as opposed to some other NULL
dereference.

Anyway, no objections to the idea. Be good for some of the usual ACPI
suspects to check this doesn't blow up immediately, though.

Will
On Tue, 23 Jun 2020 at 10:13, Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> > ACPI provides support for SystemMemory opregions, to allow AML methods
> > to access MMIO registers of, e.g., GPIO controllers, or access reserved
> > regions of memory that are owned by the firmware.
> >
> > Currently, we also permit AML methods to access memory that is owned by
> > the kernel and mapped via the linear region, which does not seem to be
> > supported by a valid use case, and exposes the kernel's internal state
> > to AML methods that may be buggy and exploitable.
> >
> > So close the door on this, and simply reject AML remapping requests for
> > any memory that has a valid mapping in the linear region.
> >
> > Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/include/asm/acpi.h | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> > index a45366c3909b..18dcef4e6764 100644
> > --- a/arch/arm64/include/asm/acpi.h
> > +++ b/arch/arm64/include/asm/acpi.h
> > @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
> >  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
> >                                              acpi_size size)
> >  {
> > -       /* For normal memory we already have a cacheable mapping. */
> > +       /* Don't allow access to kernel memory from AML code */
> >         if (memblock_is_map_memory(phys))
> > -               return (void __iomem *)__phys_to_virt(phys);
> > +               return NULL;
>
> I wonder if it would be better to poison this so that if we do see reports
> of AML crashes we'll know straight away that it tried to access memory
> mapped by the linear region, as opposed to some other NULL dereference.
>

We could just add a WARN_ONCE() here, no?

> Anyway, no objections to the idea. Be good for some of the usual ACPI
> suspects to check this doesn't blow up immediately, though.
>

Indeed, hence the RFC. Jason does have a point regarding the range
check, so I will try to do something about that and send a v2.
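As an illustration of the WARN_ONCE() idea, the rejection path could
look roughly like the sketch below; the warning text is invented for
this sketch and does not come from any posted patch.

```c
	/* Don't allow access to kernel memory from AML code */
	if (memblock_is_map_memory(phys)) {
		/* Make AML accesses stand out from other NULL dereferences. */
		WARN_ONCE(1, "ACPI: AML attempted to map kernel memory at 0x%llx\n",
			  (unsigned long long)phys);
		return NULL;
	}
```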
On Tue, Jun 23, 2020 at 10:16:19AM +0200, Ard Biesheuvel wrote:
> On Tue, 23 Jun 2020 at 10:13, Will Deacon <will@kernel.org> wrote:
> > On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> > > diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> > > index a45366c3909b..18dcef4e6764 100644
> > > --- a/arch/arm64/include/asm/acpi.h
> > > +++ b/arch/arm64/include/asm/acpi.h
> > > @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
> > >  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
> > >                                              acpi_size size)
> > >  {
> > > -       /* For normal memory we already have a cacheable mapping. */
> > > +       /* Don't allow access to kernel memory from AML code */
> > >         if (memblock_is_map_memory(phys))
> > > -               return (void __iomem *)__phys_to_virt(phys);
> > > +               return NULL;
> >
> > I wonder if it would be better to poison this so that if we do see reports
> > of AML crashes we'll know straight away that it tried to access memory
> > mapped by the linear region, as opposed to some other NULL dereference.
> >
>
> We could just add a WARN_ONCE() here, no?

Yeah, or that, or a firmware taint. Just something to distinguish this
from other NULL pointer derefs.

> > Anyway, no objections to the idea. Be good for some of the usual ACPI
> > suspects to check this doesn't blow up immediately, though.
> >
>
> Indeed, hence the RFC. Jason does have a point regarding the range
> check, so I will try to do something about that and send a v2.

Ok, I'll keep an eye out for it.

Will
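The firmware-taint alternative could, as a rough sketch, reuse the
kernel's existing taint machinery. TAINT_FIRMWARE_WORKAROUND is the
nearest existing flag; whether it is the appropriate one for this case
is only an assumption.

```c
	/* Don't allow access to kernel memory from AML code */
	if (memblock_is_map_memory(phys)) {
		/* Sketch: record misbehaving firmware so that later crash
		 * reports carry a visible taint flag. */
		add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
		return NULL;
	}
```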
```diff
diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index a45366c3909b..18dcef4e6764 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
 static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
                                             acpi_size size)
 {
-	/* For normal memory we already have a cacheable mapping. */
+	/* Don't allow access to kernel memory from AML code */
 	if (memblock_is_map_memory(phys))
-		return (void __iomem *)__phys_to_virt(phys);
+		return NULL;
 
 	/*
 	 * We should still honor the memory's attribute here because
```
ACPI provides support for SystemMemory opregions, to allow AML methods
to access MMIO registers of, e.g., GPIO controllers, or access reserved
regions of memory that are owned by the firmware.

Currently, we also permit AML methods to access memory that is owned by
the kernel and mapped via the linear region, which does not seem to be
supported by a valid use case, and exposes the kernel's internal state
to AML methods that may be buggy and exploitable.

So close the door on this, and simply reject AML remapping requests for
any memory that has a valid mapping in the linear region.

Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/acpi.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)