
[RFC] arm64/acpi: disallow AML memory opregions to access kernel memory

Message ID 20200622092719.1380968-1-ardb@kernel.org (mailing list archive)

Commit Message

Ard Biesheuvel June 22, 2020, 9:27 a.m. UTC
ACPI provides support for SystemMemory opregions, to allow AML methods
to access MMIO registers of, e.g., GPIO controllers, or access reserved
regions of memory that are owned by the firmware.

Currently, we also permit AML methods to access memory that is owned by
the kernel and mapped via the linear region, which does not seem to be
supported by a valid use case, and exposes the kernel's internal state
to AML methods that may be buggy and exploitable.

So close the door on this, and simply reject AML remapping requests for
any memory that has a valid mapping in the linear region.

Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/acpi.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Jason A. Donenfeld June 22, 2020, 9:09 p.m. UTC | #1
On Mon, Jun 22, 2020 at 3:27 AM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> ACPI provides support for SystemMemory opregions, to allow AML methods
> to access MMIO registers of, e.g., GPIO controllers, or access reserved
> regions of memory that are owned by the firmware.
>
> Currently, we also permit AML methods to access memory that is owned by
> the kernel and mapped via the linear region, which does not seem to be
> supported by a valid use case, and exposes the kernel's internal state
> to AML methods that may be buggy and exploitable.
>
> So close the door on this, and simply reject AML remapping requests for
> any memory that has a valid mapping in the linear region.
>
> Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/acpi.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index a45366c3909b..18dcef4e6764 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
>  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
>                                             acpi_size size)
>  {
> -       /* For normal memory we already have a cacheable mapping. */
> +       /* Don't allow access to kernel memory from AML code */
>         if (memblock_is_map_memory(phys))
> -               return (void __iomem *)__phys_to_virt(phys);
> +               return NULL;

I'm happy to see that implementation-wise it's so easy. Take my
Acked-by, but I'd really prefer somebody with some ACPI experience who
has looked at tons of DSDTs over the years to say whether or not this
will break hardware.

[As an aside, the current implementation is actually "wrong", since
returning the linear-map alias means a write from AML to a region the
kernel maps read-only will trap, which shouldn't happen when the
access targets a physical address directly. I learned this the ~hard
way when writing those exploits last week. :-P]
Jason A. Donenfeld June 22, 2020, 9:15 p.m. UTC | #2
Hmm, actually...

> >         if (memblock_is_map_memory(phys))
> > -               return (void __iomem *)__phys_to_virt(phys);
> > +               return NULL;

It might be prudent to have this check take into account the size of
the region being mapped. I realize ACPI considers it undefined
behaviour if an opregion crosses region boundaries, but I could
imagine actual system behavior being somewhat complicated, and a
clever bypass being possible. Hypothetically: KASLR starts the kernel
at phys_base + offset, so [phys_base, phys_base +
rounddownpage(offset)) never gets added to the memblock map; a
malicious ACPI table then maps an opregion based at phys_base +
rounddownpage(offset) - 1, the base address passes this check, and
the resulting mapping still extends into kernel memory.
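
A size-aware variant of the check could walk every page the opregion
would cover, rather than testing only the base address. The sketch
below is illustrative only: the helper name is made up, and it assumes
memblock_is_map_memory() gives a meaningful answer for any page-aligned
physical address.

#include <linux/acpi.h>
#include <linux/kernel.h>
#include <linux/memblock.h>

/*
 * Hypothetical helper: return true if any page in [phys, phys + size)
 * is covered by the kernel's linear mapping, so that an opregion whose
 * base address sits just below mapped memory is still caught.
 */
static bool aml_opregion_overlaps_kernel_memory(acpi_physical_address phys,
						acpi_size size)
{
	phys_addr_t p;

	for (p = ALIGN_DOWN(phys, PAGE_SIZE); p < phys + size; p += PAGE_SIZE)
		if (memblock_is_map_memory(p))
			return true;

	return false;
}

acpi_os_ioremap() would then reject the request whenever this helper
returns true, closing the one-byte-below-the-linear-map loophole
described above.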
Will Deacon June 23, 2020, 8:13 a.m. UTC | #3
On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> ACPI provides support for SystemMemory opregions, to allow AML methods
> to access MMIO registers of, e.g., GPIO controllers, or access reserved
> regions of memory that are owned by the firmware.
> 
> Currently, we also permit AML methods to access memory that is owned by
> the kernel and mapped via the linear region, which does not seem to be
> supported by a valid use case, and exposes the kernel's internal state
> to AML methods that may be buggy and exploitable.
> 
> So close the door on this, and simply reject AML remapping requests for
> any memory that has a valid mapping in the linear region.
> 
> Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/acpi.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index a45366c3909b..18dcef4e6764 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
>  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
>  					    acpi_size size)
>  {
> -	/* For normal memory we already have a cacheable mapping. */
> +	/* Don't allow access to kernel memory from AML code */
>  	if (memblock_is_map_memory(phys))
> -		return (void __iomem *)__phys_to_virt(phys);
> +		return NULL;

I wonder if it would be better to poison this so that if we do see reports
of AML crashes we'll know straight away that it tried to access memory
mapped by the linear region, as opposed to some other NULL dereference.

Anyway, no objections to the idea. Be good for some of the usual ACPI
suspects to check this doesn't blow up immediately, though.

Will
Ard Biesheuvel June 23, 2020, 8:16 a.m. UTC | #4
On Tue, 23 Jun 2020 at 10:13, Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> > ACPI provides support for SystemMemory opregions, to allow AML methods
> > to access MMIO registers of, e.g., GPIO controllers, or access reserved
> > regions of memory that are owned by the firmware.
> >
> > Currently, we also permit AML methods to access memory that is owned by
> > the kernel and mapped via the linear region, which does not seem to be
> > supported by a valid use case, and exposes the kernel's internal state
> > to AML methods that may be buggy and exploitable.
> >
> > So close the door on this, and simply reject AML remapping requests for
> > any memory that has a valid mapping in the linear region.
> >
> > Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/arm64/include/asm/acpi.h | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> > index a45366c3909b..18dcef4e6764 100644
> > --- a/arch/arm64/include/asm/acpi.h
> > +++ b/arch/arm64/include/asm/acpi.h
> > @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
> >  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
> >                                           acpi_size size)
> >  {
> > -     /* For normal memory we already have a cacheable mapping. */
> > +     /* Don't allow access to kernel memory from AML code */
> >       if (memblock_is_map_memory(phys))
> > -             return (void __iomem *)__phys_to_virt(phys);
> > +             return NULL;
>
> I wonder if it would be better to poison this so that if we do see reports
> of AML crashes we'll know straight away that it tried to access memory
> mapped by the linear region, as opposed to some other NULL dereference.
>

We could just add a WARN_ONCE() here, no?

> Anyway, no objections to the idea. Be good for some of the usual ACPI
> suspects to check this doesn't blow up immediately, though.
>

Indeed, hence the RFC. Jason does have a point regarding the range
check, so I will try to do something about that and send a v2.
Will Deacon June 23, 2020, 9:14 a.m. UTC | #5
On Tue, Jun 23, 2020 at 10:16:19AM +0200, Ard Biesheuvel wrote:
> On Tue, 23 Jun 2020 at 10:13, Will Deacon <will@kernel.org> wrote:
> > On Mon, Jun 22, 2020 at 11:27:19AM +0200, Ard Biesheuvel wrote:
> > > diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> > > index a45366c3909b..18dcef4e6764 100644
> > > --- a/arch/arm64/include/asm/acpi.h
> > > +++ b/arch/arm64/include/asm/acpi.h
> > > @@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
> > >  static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
> > >                                           acpi_size size)
> > >  {
> > > -     /* For normal memory we already have a cacheable mapping. */
> > > +     /* Don't allow access to kernel memory from AML code */
> > >       if (memblock_is_map_memory(phys))
> > > -             return (void __iomem *)__phys_to_virt(phys);
> > > +             return NULL;
> >
> > I wonder if it would be better to poison this so that if we do see reports
> > of AML crashes we'll know straight away that it tried to access memory
> > mapped by the linear region, as opposed to some other NULL dereference.
> >
> 
> We could just add a WARN_ONCE() here, no?

Yeah, or that, or a firmware taint. Just something to distinguish this
from other NULL pointer derefs.
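
As a sketch of what that rejection branch in acpi_os_ioremap() could
look like (illustrative only: reusing TAINT_FIRMWARE_WORKAROUND is an
assumption rather than a settled choice, and the message text is made
up):

	/* Don't allow access to kernel memory from AML code */
	if (memblock_is_map_memory(phys)) {
		/*
		 * Make the rejection self-identifying: warn once and
		 * taint, so a later AML crash can be traced back here
		 * rather than read as an ordinary NULL dereference.
		 */
		WARN_ONCE(1,
			  "ACPI: refusing AML access to kernel memory at 0x%llx\n",
			  (unsigned long long)phys);
		add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
		return NULL;
	}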

> > Anyway, no objections to the idea. Be good for some of the usual ACPI
> > suspects to check this doesn't blow up immediately, though.
> >
> 
> Indeed, hence the RFC. Jason does have a point regarding the range
> check, so I will try to do something about that and send a v2.

Ok, I'll keep an eye out for it.

Will

Patch

diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index a45366c3909b..18dcef4e6764 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -50,9 +50,9 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
 static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
 					    acpi_size size)
 {
-	/* For normal memory we already have a cacheable mapping. */
+	/* Don't allow access to kernel memory from AML code */
 	if (memblock_is_map_memory(phys))
-		return (void __iomem *)__phys_to_virt(phys);
+		return NULL;
 
 	/*
 	 * We should still honor the memory's attribute here because