Message ID | 20210122155642.23187-2-vincenzo.frascino@arm.com (mailing list archive) |
---|---|
State | New, archived |
Series | kasan: Fix metadata detection for KASAN_HW_TAGS |
Hi Vincenzo,

On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote:
> Currently, the __is_lm_address() check just masks out the top 12 bits
> of the address, but if they are 0, it still yields a true result.
> This has as a side effect that virt_addr_valid() returns true even for
> invalid virtual addresses (e.g. 0x0).
>
> Improve the detection checking that it's actually a kernel address
> starting at PAGE_OFFSET.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

Looking around, it seems that there are some existing uses of
virt_addr_valid() that expect it to reject addresses outside of the
TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c.

Given that, I think we need something that's easy to backport to stable.

This patch itself looks fine, but it's not going to backport very far,
so I suspect we might need to write a preparatory patch that adds an
explicit range check to virt_addr_valid() which can be trivially
backported.

For this patch:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> ---
>  arch/arm64/include/asm/memory.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 18fce223b67b..99d7e1494aaa 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>
>
>  /*
> - * The linear kernel range starts at the bottom of the virtual address space.
> + * Check whether an arbitrary address is within the linear map, which
> + * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
> + * kernel's TTBR1 address range.
>  */
> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>
>  #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> --
> 2.30.0
>
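To make the failure mode discussed above concrete, here is a minimal user-space sketch (not kernel code; the PAGE_OFFSET/PAGE_END values are hard-coded assumptions for a 48-bit VA layout) contrasting the old masked check with the XOR-based check from the patch:

```c
/*
 * Stand-alone illustration of why the masked check accepts a NULL
 * pointer while the XOR-based check rejects it. The constants below
 * are assumed 48-bit VA values, used purely for demonstration.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_OFFSET 0xffff000000000000ULL	/* assumed start of the linear map */
#define PAGE_END    0xffff800000000000ULL	/* assumed end of the linear map   */

static int is_lm_old(uint64_t addr)	/* pre-fix: mask out the top bits */
{
	return (addr & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

static int is_lm_new(uint64_t addr)	/* post-fix: XOR against PAGE_OFFSET */
{
	return (addr ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

int main(void)
{
	uint64_t null_ptr = 0x0;
	uint64_t lm_addr  = PAGE_OFFSET + 0x1000;

	printf("old check: NULL=%d linear-map=%d\n",
	       is_lm_old(null_ptr), is_lm_old(lm_addr));	/* prints 1 1 */
	printf("new check: NULL=%d linear-map=%d\n",
	       is_lm_new(null_ptr), is_lm_new(lm_addr));	/* prints 0 1 */
	return 0;
}
```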
Hi Mark,

On 1/25/21 1:02 PM, Mark Rutland wrote:
> Hi Vincenzo,
>
> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote:
>> Currently, the __is_lm_address() check just masks out the top 12 bits
>> of the address, but if they are 0, it still yields a true result.
>> This has as a side effect that virt_addr_valid() returns true even for
>> invalid virtual addresses (e.g. 0x0).
>>
>> Improve the detection checking that it's actually a kernel address
>> starting at PAGE_OFFSET.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>
> Looking around, it seems that there are some existing uses of
> virt_addr_valid() that expect it to reject addresses outside of the
> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c.
>
> Given that, I think we need something that's easy to backport to stable.
>

I agree. I started looking at it this morning and I found cases other than
the one you mentioned, even in the main allocators (slub and page_alloc).

> This patch itself looks fine, but it's not going to backport very far,
> so I suspect we might need to write a preparatory patch that adds an
> explicit range check to virt_addr_valid() which can be trivially
> backported.
>

I checked the old releases and I agree this is not back-portable as it stands.
I propose therefore to add a preparatory patch with the check below:

#define __is_ttbr1_address(addr)	((u64)(addr) >= PAGE_OFFSET && \
					 (u64)(addr) < PAGE_END)

If it works for you I am happy to take care of it and post a new version of my
patches.

Thanks!

> For this patch:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> Thanks,
> Mark.
>
>> ---
>>  arch/arm64/include/asm/memory.h | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 18fce223b67b..99d7e1494aaa 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>>
>>
>>  /*
>> - * The linear kernel range starts at the bottom of the virtual address space.
>> + * Check whether an arbitrary address is within the linear map, which
>> + * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
>> + * kernel's TTBR1 address range.
>>  */
>> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>>
>>  #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
>>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
>> --
>> 2.30.0
>>
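As a quick sanity check of the proposal above, a user-space sketch (illustrative only; the PAGE_OFFSET/PAGE_END values are assumed 48-bit VA constants, and the agreement relies on PAGE_OFFSET having all of its low bits clear) comparing the proposed explicit range test with the XOR-based form used in the patch:

```c
/*
 * Spot check that the explicit two-comparison range test and the
 * XOR-based __is_lm_address() form agree on representative addresses.
 * Constants are assumed 48-bit VA values, not taken from a real build.
 */
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xffff000000000000ULL
#define PAGE_END    0xffff800000000000ULL

static int range_check(uint64_t addr)	/* the proposed explicit form */
{
	return addr >= PAGE_OFFSET && addr < PAGE_END;
}

static int xor_check(uint64_t addr)	/* the form used in the patch */
{
	return (addr ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

int main(void)
{
	/* NULL, a user pointer, a TTBR1 address above PAGE_END, two linear-map addresses */
	uint64_t samples[] = { 0x0, 0x0000aaaabbbb0000ULL, 0xffff800010000000ULL,
			       PAGE_OFFSET, PAGE_END - 1 };

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		assert(range_check(samples[i]) == xor_check(samples[i]));

	return 0;
}
```

Both forms reject NULL; the attraction of the two-comparison form for -stable, as discussed above, is that it does not depend on how __is_lm_address() happens to be written in older trees.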
On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote:
> On 1/25/21 1:02 PM, Mark Rutland wrote:
> > On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote:
> > > Currently, the __is_lm_address() check just masks out the top 12 bits
> > > of the address, but if they are 0, it still yields a true result.
> > > This has as a side effect that virt_addr_valid() returns true even for
> > > invalid virtual addresses (e.g. 0x0).
> > >
> > > Improve the detection checking that it's actually a kernel address
> > > starting at PAGE_OFFSET.
> > >
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Will Deacon <will@kernel.org>
> > > Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> > > Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> > > Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> >
> > Looking around, it seems that there are some existing uses of
> > virt_addr_valid() that expect it to reject addresses outside of the
> > TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c.
> >
> > Given that, I think we need something that's easy to backport to stable.
>
> I agree. I started looking at it this morning and I found cases other than
> the one you mentioned, even in the main allocators (slub and page_alloc).
>
> > This patch itself looks fine, but it's not going to backport very far,
> > so I suspect we might need to write a preparatory patch that adds an
> > explicit range check to virt_addr_valid() which can be trivially
> > backported.
>
> I checked the old releases and I agree this is not back-portable as it stands.
> I propose therefore to add a preparatory patch with the check below:
>
> #define __is_ttbr1_address(addr)	((u64)(addr) >= PAGE_OFFSET && \
> 					 (u64)(addr) < PAGE_END)
>
> If it works for you I am happy to take care of it and post a new version of my
> patches.

I'm not entirely sure we need a preparatory patch. IIUC (it needs
checking), virt_addr_valid() was fine until 5.4, broken by commit
14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the
flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
__is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a
NULL address is considered valid.

Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit
VA configurations") changed the test to no longer rely on va_bits but
did not change the broken semantics.

If Ard's change plus the fix proposed in this patch works on 5.4, I'd say
we just merge this patch with the corresponding Cc stable and Fixes tags
and tweak it slightly when doing the backports as it wouldn't apply
cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we
did not need one prior to 5.4.
On 1/25/21 2:59 PM, Catalin Marinas wrote:
> On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote:
>> On 1/25/21 1:02 PM, Mark Rutland wrote:
>>> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote:
>>>> Currently, the __is_lm_address() check just masks out the top 12 bits
>>>> of the address, but if they are 0, it still yields a true result.
>>>> This has as a side effect that virt_addr_valid() returns true even for
>>>> invalid virtual addresses (e.g. 0x0).
>>>>
>>>> Improve the detection checking that it's actually a kernel address
>>>> starting at PAGE_OFFSET.
>>>>
>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>> Cc: Will Deacon <will@kernel.org>
>>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>>>
>>> Looking around, it seems that there are some existing uses of
>>> virt_addr_valid() that expect it to reject addresses outside of the
>>> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c.
>>>
>>> Given that, I think we need something that's easy to backport to stable.
>>
>> I agree. I started looking at it this morning and I found cases other than
>> the one you mentioned, even in the main allocators (slub and page_alloc).
>>
>>> This patch itself looks fine, but it's not going to backport very far,
>>> so I suspect we might need to write a preparatory patch that adds an
>>> explicit range check to virt_addr_valid() which can be trivially
>>> backported.
>>
>> I checked the old releases and I agree this is not back-portable as it stands.
>> I propose therefore to add a preparatory patch with the check below:
>>
>> #define __is_ttbr1_address(addr)	((u64)(addr) >= PAGE_OFFSET && \
>> 					 (u64)(addr) < PAGE_END)
>>
>> If it works for you I am happy to take care of it and post a new version of my
>> patches.
>
> I'm not entirely sure we need a preparatory patch. IIUC (it needs
> checking), virt_addr_valid() was fine until 5.4, broken by commit
> 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the
> flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
> __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a
> NULL address is considered valid.
>
> Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit
> VA configurations") changed the test to no longer rely on va_bits but
> did not change the broken semantics.
>
> If Ard's change plus the fix proposed in this patch works on 5.4, I'd say
> we just merge this patch with the corresponding Cc stable and Fixes tags
> and tweak it slightly when doing the backports as it wouldn't apply
> cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we
> did not need one prior to 5.4.

Thank you for the detailed analysis. I checked on 5.4 and it seems that Ard's
patch (not a clean backport) plus my proposed fix works correctly and solves
the issue.

Tomorrow I will post a new version of the series that includes what you are
suggesting.
On Mon, Jan 25, 2021 at 02:59:12PM +0000, Catalin Marinas wrote: > On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote: > > On 1/25/21 1:02 PM, Mark Rutland wrote: > > > On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote: > > > This patch itself looks fine, but it's not going to backport very far, > > > so I suspect we might need to write a preparatory patch that adds an > > > explicit range check to virt_addr_valid() which can be trivially > > > backported. > > > > I checked the old releases and I agree this is not back-portable as it stands. > > I propose therefore to add a preparatory patch with the check below: > > > > #define __is_ttrb1_address(addr) ((u64)(addr) >= PAGE_OFFSET && \ > > > > If it works for you I am happy to take care of it and post a new version of my > > patches. > > I'm not entirely sure we need a preparatory patch. IIUC (it needs > checking), virt_addr_valid() was fine until 5.4, broken by commit > 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Ah, so it was; thanks for digging into the history! > Will addressed the > flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using > __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a > NULL address is considered valid. > > Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit > VA configurations") changed the test to no longer rely on va_bits but > did not change the broken semantics. > > If Ard's change plus the fix proposed in this test works on 5.4, I'd say > we just merge this patch with the corresponding Cc stable and Fixes tags > and tweak it slightly when doing the backports as it wouldn't apply > cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we > did not need one prior to 5.4. That makes sense to me; sorry for the noise! Thanks, Mark.
On Mon, Jan 25, 2021 at 04:09:57PM +0000, Vincenzo Frascino wrote: > On 1/25/21 2:59 PM, Catalin Marinas wrote: > > On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote: > >> On 1/25/21 1:02 PM, Mark Rutland wrote: > >>> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote: > >>>> Currently, the __is_lm_address() check just masks out the top 12 bits > >>>> of the address, but if they are 0, it still yields a true result. > >>>> This has as a side effect that virt_addr_valid() returns true even for > >>>> invalid virtual addresses (e.g. 0x0). > >>>> > >>>> Improve the detection checking that it's actually a kernel address > >>>> starting at PAGE_OFFSET. > >>>> > >>>> Cc: Catalin Marinas <catalin.marinas@arm.com> > >>>> Cc: Will Deacon <will@kernel.org> > >>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> > >>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> > >>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> > >>> > >>> Looking around, it seems that there are some existing uses of > >>> virt_addr_valid() that expect it to reject addresses outside of the > >>> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c. > >>> > >>> Given that, I think we need something that's easy to backport to stable. > >>> > >> > >> I agree, I started looking at it this morning and I found cases even in the main > >> allocators (slub and page_alloc) either then the one you mentioned. > >> > >>> This patch itself looks fine, but it's not going to backport very far, > >>> so I suspect we might need to write a preparatory patch that adds an > >>> explicit range check to virt_addr_valid() which can be trivially > >>> backported. > >>> > >> > >> I checked the old releases and I agree this is not back-portable as it stands. > >> I propose therefore to add a preparatory patch with the check below: > >> > >> #define __is_ttrb1_address(addr) ((u64)(addr) >= PAGE_OFFSET && \ > >> (u64)(addr) < PAGE_END) > >> > >> If it works for you I am happy to take care of it and post a new version of my > >> patches. > > > > I'm not entirely sure we need a preparatory patch. IIUC (it needs > > checking), virt_addr_valid() was fine until 5.4, broken by commit > > 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the > > flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using > > __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a > > NULL address is considered valid. > > > > Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit > > VA configurations") changed the test to no longer rely on va_bits but > > did not change the broken semantics. > > > > If Ard's change plus the fix proposed in this test works on 5.4, I'd say > > we just merge this patch with the corresponding Cc stable and Fixes tags > > and tweak it slightly when doing the backports as it wouldn't apply > > cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we > > did not need one prior to 5.4. > > Thank you for the detailed analysis. I checked on 5.4 and it seems that Ard > patch (not a clean backport) plus my proposed fix works correctly and solves the > issue. I didn't mean the backport of the whole commit f4693c2716b3 as it probably has other dependencies, just the __is_lm_address() change in that patch. > Tomorrow I will post a new version of the series that includes what you are > suggesting. Please post the __is_lm_address() fix separately from the kasan patches. I'll pick it up as a fix via the arm64 tree. 
The kasan change can go in 5.12 since it's not currently broken but I'll leave the decision with Andrey.
On 1/25/21 5:56 PM, Catalin Marinas wrote: > On Mon, Jan 25, 2021 at 04:09:57PM +0000, Vincenzo Frascino wrote: >> On 1/25/21 2:59 PM, Catalin Marinas wrote: >>> On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote: >>>> On 1/25/21 1:02 PM, Mark Rutland wrote: >>>>> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote: >>>>>> Currently, the __is_lm_address() check just masks out the top 12 bits >>>>>> of the address, but if they are 0, it still yields a true result. >>>>>> This has as a side effect that virt_addr_valid() returns true even for >>>>>> invalid virtual addresses (e.g. 0x0). >>>>>> >>>>>> Improve the detection checking that it's actually a kernel address >>>>>> starting at PAGE_OFFSET. >>>>>> >>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com> >>>>>> Cc: Will Deacon <will@kernel.org> >>>>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> >>>>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> >>>>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> >>>>> >>>>> Looking around, it seems that there are some existing uses of >>>>> virt_addr_valid() that expect it to reject addresses outside of the >>>>> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c. >>>>> >>>>> Given that, I think we need something that's easy to backport to stable. >>>>> >>>> >>>> I agree, I started looking at it this morning and I found cases even in the main >>>> allocators (slub and page_alloc) either then the one you mentioned. >>>> >>>>> This patch itself looks fine, but it's not going to backport very far, >>>>> so I suspect we might need to write a preparatory patch that adds an >>>>> explicit range check to virt_addr_valid() which can be trivially >>>>> backported. >>>>> >>>> >>>> I checked the old releases and I agree this is not back-portable as it stands. >>>> I propose therefore to add a preparatory patch with the check below: >>>> >>>> #define __is_ttrb1_address(addr) ((u64)(addr) >= PAGE_OFFSET && \ >>>> (u64)(addr) < PAGE_END) >>>> >>>> If it works for you I am happy to take care of it and post a new version of my >>>> patches. >>> >>> I'm not entirely sure we need a preparatory patch. IIUC (it needs >>> checking), virt_addr_valid() was fine until 5.4, broken by commit >>> 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the >>> flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using >>> __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a >>> NULL address is considered valid. >>> >>> Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit >>> VA configurations") changed the test to no longer rely on va_bits but >>> did not change the broken semantics. >>> >>> If Ard's change plus the fix proposed in this test works on 5.4, I'd say >>> we just merge this patch with the corresponding Cc stable and Fixes tags >>> and tweak it slightly when doing the backports as it wouldn't apply >>> cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we >>> did not need one prior to 5.4. >> >> Thank you for the detailed analysis. I checked on 5.4 and it seems that Ard >> patch (not a clean backport) plus my proposed fix works correctly and solves the >> issue. > > I didn't mean the backport of the whole commit f4693c2716b3 as it > probably has other dependencies, just the __is_lm_address() change in > that patch. > Then call it preparatory patch ;) >> Tomorrow I will post a new version of the series that includes what you are >> suggesting. 
> > Please post the __is_lm_address() fix separately from the kasan patches. > I'll pick it up as a fix via the arm64 tree. The kasan change can go in > 5.12 since it's not currently broken but I'll leave the decision with > Andrey. > Ok, will do.
On Tue, Jan 26, 2021 at 11:58:13AM +0000, Vincenzo Frascino wrote: > On 1/25/21 5:56 PM, Catalin Marinas wrote: > > On Mon, Jan 25, 2021 at 04:09:57PM +0000, Vincenzo Frascino wrote: > >> On 1/25/21 2:59 PM, Catalin Marinas wrote: > >>> On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote: > >>>> On 1/25/21 1:02 PM, Mark Rutland wrote: > >>>>> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote: > >>>>>> Currently, the __is_lm_address() check just masks out the top 12 bits > >>>>>> of the address, but if they are 0, it still yields a true result. > >>>>>> This has as a side effect that virt_addr_valid() returns true even for > >>>>>> invalid virtual addresses (e.g. 0x0). > >>>>>> > >>>>>> Improve the detection checking that it's actually a kernel address > >>>>>> starting at PAGE_OFFSET. > >>>>>> > >>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com> > >>>>>> Cc: Will Deacon <will@kernel.org> > >>>>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> > >>>>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> > >>>>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> > >>>>> > >>>>> Looking around, it seems that there are some existing uses of > >>>>> virt_addr_valid() that expect it to reject addresses outside of the > >>>>> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c. > >>>>> > >>>>> Given that, I think we need something that's easy to backport to stable. > >>>>> > >>>> > >>>> I agree, I started looking at it this morning and I found cases even in the main > >>>> allocators (slub and page_alloc) either then the one you mentioned. > >>>> > >>>>> This patch itself looks fine, but it's not going to backport very far, > >>>>> so I suspect we might need to write a preparatory patch that adds an > >>>>> explicit range check to virt_addr_valid() which can be trivially > >>>>> backported. > >>>>> > >>>> > >>>> I checked the old releases and I agree this is not back-portable as it stands. > >>>> I propose therefore to add a preparatory patch with the check below: > >>>> > >>>> #define __is_ttrb1_address(addr) ((u64)(addr) >= PAGE_OFFSET && \ > >>>> (u64)(addr) < PAGE_END) > >>>> > >>>> If it works for you I am happy to take care of it and post a new version of my > >>>> patches. > >>> > >>> I'm not entirely sure we need a preparatory patch. IIUC (it needs > >>> checking), virt_addr_valid() was fine until 5.4, broken by commit > >>> 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the > >>> flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using > >>> __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a > >>> NULL address is considered valid. > >>> > >>> Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit > >>> VA configurations") changed the test to no longer rely on va_bits but > >>> did not change the broken semantics. > >>> > >>> If Ard's change plus the fix proposed in this test works on 5.4, I'd say > >>> we just merge this patch with the corresponding Cc stable and Fixes tags > >>> and tweak it slightly when doing the backports as it wouldn't apply > >>> cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we > >>> did not need one prior to 5.4. > >> > >> Thank you for the detailed analysis. I checked on 5.4 and it seems that Ard > >> patch (not a clean backport) plus my proposed fix works correctly and solves the > >> issue. 
> > > > I didn't mean the backport of the whole commit f4693c2716b3 as it > > probably has other dependencies, just the __is_lm_address() change in > > that patch. > > Then call it preparatory patch ;) It's preparatory only for the stable backports, not for current mainline. But I'd rather change the upstream patch when backporting to apply cleanly, no need for a preparatory stable patch.
On 1/26/21 12:07 PM, Catalin Marinas wrote: > On Tue, Jan 26, 2021 at 11:58:13AM +0000, Vincenzo Frascino wrote: >> On 1/25/21 5:56 PM, Catalin Marinas wrote: >>> On Mon, Jan 25, 2021 at 04:09:57PM +0000, Vincenzo Frascino wrote: >>>> On 1/25/21 2:59 PM, Catalin Marinas wrote: >>>>> On Mon, Jan 25, 2021 at 02:36:34PM +0000, Vincenzo Frascino wrote: >>>>>> On 1/25/21 1:02 PM, Mark Rutland wrote: >>>>>>> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote: >>>>>>>> Currently, the __is_lm_address() check just masks out the top 12 bits >>>>>>>> of the address, but if they are 0, it still yields a true result. >>>>>>>> This has as a side effect that virt_addr_valid() returns true even for >>>>>>>> invalid virtual addresses (e.g. 0x0). >>>>>>>> >>>>>>>> Improve the detection checking that it's actually a kernel address >>>>>>>> starting at PAGE_OFFSET. >>>>>>>> >>>>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com> >>>>>>>> Cc: Will Deacon <will@kernel.org> >>>>>>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> >>>>>>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> >>>>>>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> >>>>>>> >>>>>>> Looking around, it seems that there are some existing uses of >>>>>>> virt_addr_valid() that expect it to reject addresses outside of the >>>>>>> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c. >>>>>>> >>>>>>> Given that, I think we need something that's easy to backport to stable. >>>>>>> >>>>>> >>>>>> I agree, I started looking at it this morning and I found cases even in the main >>>>>> allocators (slub and page_alloc) either then the one you mentioned. >>>>>> >>>>>>> This patch itself looks fine, but it's not going to backport very far, >>>>>>> so I suspect we might need to write a preparatory patch that adds an >>>>>>> explicit range check to virt_addr_valid() which can be trivially >>>>>>> backported. >>>>>>> >>>>>> >>>>>> I checked the old releases and I agree this is not back-portable as it stands. >>>>>> I propose therefore to add a preparatory patch with the check below: >>>>>> >>>>>> #define __is_ttrb1_address(addr) ((u64)(addr) >= PAGE_OFFSET && \ >>>>>> (u64)(addr) < PAGE_END) >>>>>> >>>>>> If it works for you I am happy to take care of it and post a new version of my >>>>>> patches. >>>>> >>>>> I'm not entirely sure we need a preparatory patch. IIUC (it needs >>>>> checking), virt_addr_valid() was fine until 5.4, broken by commit >>>>> 14c127c957c1 ("arm64: mm: Flip kernel VA space"). Will addressed the >>>>> flip case in 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using >>>>> __is_lm_address()") but this broke the <PAGE_OFFSET case. So in 5.4 a >>>>> NULL address is considered valid. >>>>> >>>>> Ard's commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit >>>>> VA configurations") changed the test to no longer rely on va_bits but >>>>> did not change the broken semantics. >>>>> >>>>> If Ard's change plus the fix proposed in this test works on 5.4, I'd say >>>>> we just merge this patch with the corresponding Cc stable and Fixes tags >>>>> and tweak it slightly when doing the backports as it wouldn't apply >>>>> cleanly. IOW, I wouldn't add another check to virt_addr_valid() as we >>>>> did not need one prior to 5.4. >>>> >>>> Thank you for the detailed analysis. I checked on 5.4 and it seems that Ard >>>> patch (not a clean backport) plus my proposed fix works correctly and solves the >>>> issue. 
>>> >>> I didn't mean the backport of the whole commit f4693c2716b3 as it >>> probably has other dependencies, just the __is_lm_address() change in >>> that patch. >> >> Then call it preparatory patch ;) > > It's preparatory only for the stable backports, not for current > mainline. But I'd rather change the upstream patch when backporting to > apply cleanly, no need for a preparatory stable patch. > Thanks for the clarification.
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 18fce223b67b..99d7e1494aaa 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 
 
 /*
- * The linear kernel range starts at the bottom of the virtual address space.
+ * Check whether an arbitrary address is within the linear map, which
+ * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
+ * kernel's TTBR1 address range.
  */
-#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
+#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
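For reference, a stand-alone sketch (user space, with PAGE_OFFSET/PAGE_END hard-coded for an assumed 48-bit VA configuration rather than taken from a real kernel build) of how the fixed macro classifies a few representative addresses:

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed 48-bit VA layout, hard-coded for illustration only. */
#define PAGE_OFFSET 0xffff000000000000ULL
#define PAGE_END    0xffff800000000000ULL

/* The fixed check from the patch, transcribed for user space. */
#define __is_lm_address(addr)	(((uint64_t)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))

int main(void)
{
	struct { const char *what; uint64_t addr; } samples[] = {
		{ "NULL",                   0x0 },
		{ "user-space pointer",     0x0000aaaacafe0000ULL },
		{ "linear-map address",     PAGE_OFFSET + 0x1000 },
		{ "TTBR1, above PAGE_END",  0xffff800010000000ULL },
	};

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("%-22s -> %s\n", samples[i].what,
		       __is_lm_address(samples[i].addr) ? "linear map" : "rejected");

	return 0;
}
```

Only the linear-map sample is accepted; NULL, user pointers and TTBR1 addresses above PAGE_END are all rejected, which is the behaviour the virt_addr_valid() callers discussed in this thread rely on.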