From patchwork Wed Jun 2 23:52:27 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12295837
Date: Wed, 2 Jun 2021 16:52:27 -0700
In-Reply-To: <20210602235230.3928842-1-pcc@google.com>
Message-Id: <20210602235230.3928842-2-pcc@google.com>
References: <20210602235230.3928842-1-pcc@google.com>
Subject: [PATCH v6 1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

In an upcoming change we would like to add a flag to GFP_HIGHUSER_MOVABLE so
that it would no longer be an OR of GFP_HIGHUSER and __GFP_MOVABLE. This poses
a problem for alloc_zeroed_user_highpage_movable() which passes __GFP_MOVABLE
into an arch-specific __alloc_zeroed_user_highpage() hook which ORs in
GFP_HIGHUSER.

Since __alloc_zeroed_user_highpage() is only ever called from
alloc_zeroed_user_highpage_movable(), we can remove one level of indirection
here. Remove __alloc_zeroed_user_highpage(), make
alloc_zeroed_user_highpage_movable() the hook, and use GFP_HIGHUSER_MOVABLE in
the hook implementations so that they will pick up the new flag that we are
going to add.
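In outline, the two-level hook collapses into a single one. A condensed
before/after sketch of the macro shape (the full diff follows below):

  /* Before: each arch defines the inner hook and the generic wrapper
   * supplies __GFP_MOVABLE on top of it. */
  #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
          alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
  /* ...generic wrapper: */
  return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);

  /* After: the movable variant itself is the arch hook and uses the combined
   * constant, so a flag later added to GFP_HIGHUSER_MOVABLE is picked up
   * automatically. */
  #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
          alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)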
Signed-off-by: Peter Collingbourne Link: https://linux-review.googlesource.com/id/Ic6361c657b2cdcd896adbe0cf7cb5a7fbb1ed7bf Acked-by: Catalin Marinas --- v6: - fix intermediate build error on arm64 v5: - fix s390 build error arch/alpha/include/asm/page.h | 6 +++--- arch/arm64/include/asm/page.h | 6 +++--- arch/ia64/include/asm/page.h | 6 +++--- arch/m68k/include/asm/page_no.h | 6 +++--- arch/s390/include/asm/page.h | 6 +++--- arch/x86/include/asm/page.h | 6 +++--- include/linux/highmem.h | 35 ++++++++------------------------- 7 files changed, 26 insertions(+), 45 deletions(-) diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h index 268f99b4602b..18f48a6f2ff6 100644 --- a/arch/alpha/include/asm/page.h +++ b/arch/alpha/include/asm/page.h @@ -17,9 +17,9 @@ extern void clear_page(void *page); #define clear_user_page(page, vaddr, pg) clear_page(page) -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ + alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr) +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE extern void copy_page(void * _to, void * _from); #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 012cffc574e8..e1fc0f60e79f 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -28,9 +28,9 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ + alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr) +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE #define clear_user_page(page, vaddr, pg) clear_page(page) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h index f4dc81fa7146..1b990466d540 100644 --- a/arch/ia64/include/asm/page.h +++ b/arch/ia64/include/asm/page.h @@ -82,16 +82,16 @@ do { \ } while (0) -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ ({ \ struct page *page = alloc_page_vma( \ - GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr); \ + GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr); \ if (page) \ flush_dcache_page(page); \ page; \ }) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h index 8d0f862ee9d7..c9d0d84158a4 100644 --- a/arch/m68k/include/asm/page_no.h +++ b/arch/m68k/include/asm/page_no.h @@ -13,9 +13,9 @@ extern unsigned long memory_end; #define clear_user_page(page, vaddr, pg) clear_page(page) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ + 
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr) +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE #define __pa(vaddr) ((unsigned long)(vaddr)) #define __va(paddr) ((void *)((unsigned long)(paddr))) diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h index cc98f9b78fd4..479dc76e0eca 100644 --- a/arch/s390/include/asm/page.h +++ b/arch/s390/include/asm/page.h @@ -68,9 +68,9 @@ static inline void copy_page(void *to, void *from) #define clear_user_page(page, vaddr, pg) clear_page(page) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ + alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr) +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE /* * These are used to make use of C type-checking.. diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h index 7555b48803a8..4d5810c8fab7 100644 --- a/arch/x86/include/asm/page.h +++ b/arch/x86/include/asm/page.h @@ -34,9 +34,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr, copy_page(to, from); } -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) -#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ + alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr) +#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE #ifndef __pa #define __pa(x) __phys_addr((unsigned long)(x)) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 832b49b50c7b..54d0643b8fcf 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -152,28 +152,24 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr) } #endif -#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE /** - * __alloc_zeroed_user_highpage - Allocate a zeroed HIGHMEM page for a VMA with caller-specified movable GFP flags - * @movableflags: The GFP flags related to the pages future ability to move like __GFP_MOVABLE + * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move * @vma: The VMA the page is to be allocated for * @vaddr: The virtual address the page will be inserted into * - * This function will allocate a page for a VMA but the caller is expected - * to specify via movableflags whether the page will be movable in the - * future or not + * This function will allocate a page for a VMA that the caller knows will + * be able to migrate in the future using move_pages() or reclaimed * * An architecture may override this function by defining - * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and providing their own + * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE and providing their own * implementation. 
*/ static inline struct page * -__alloc_zeroed_user_highpage(gfp_t movableflags, - struct vm_area_struct *vma, - unsigned long vaddr) +alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, + unsigned long vaddr) { - struct page *page = alloc_page_vma(GFP_HIGHUSER | movableflags, - vma, vaddr); + struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr); if (page) clear_user_highpage(page, vaddr); @@ -182,21 +178,6 @@ __alloc_zeroed_user_highpage(gfp_t movableflags, } #endif -/** - * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move - * @vma: The VMA the page is to be allocated for - * @vaddr: The virtual address the page will be inserted into - * - * This function will allocate a page for a VMA that the caller knows will - * be able to migrate in the future using move_pages() or reclaimed - */ -static inline struct page * -alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, - unsigned long vaddr) -{ - return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr); -} - static inline void clear_highpage(struct page *page) { void *kaddr = kmap_atomic(page);
From patchwork Wed Jun 2 23:52:28 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12295843
Date: Wed, 2 Jun 2021 16:52:28 -0700
In-Reply-To: <20210602235230.3928842-1-pcc@google.com>
Message-Id: <20210602235230.3928842-3-pcc@google.com>
References: <20210602235230.3928842-1-pcc@google.com>
Subject: [PATCH v6 2/4] kasan: use separate (un)poison implementation for integrated init
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently with integrated init page_alloc.c needs to know whether
kasan_alloc_pages() will zero initialize memory, but this will start becoming
more complicated once we start adding tag initialization support for user
pages.

To avoid page_alloc.c needing to know more details of what integrated init
will do, move the unpoisoning logic for integrated init into the HW tags
implementation. Currently the logic is identical but it will diverge in
subsequent patches.
For symmetry do the same for poisoning although this logic will be unaffected by subsequent patches. Signed-off-by: Peter Collingbourne Reviewed-by: Andrey Konovalov Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056 --- v4: - use IS_ENABLED(CONFIG_KASAN) - add comments to kasan_alloc_pages and kasan_free_pages - remove line break v3: - use BUILD_BUG() v2: - fix build with KASAN disabled include/linux/kasan.h | 64 +++++++++++++++++++++++++------------------ mm/kasan/common.c | 4 +-- mm/kasan/hw_tags.c | 22 +++++++++++++++ mm/mempool.c | 6 ++-- mm/page_alloc.c | 55 +++++++++++++++++++------------------ 5 files changed, 95 insertions(+), 56 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index b1678a61e6a7..a1c7ce5f3e4f 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -2,6 +2,7 @@ #ifndef _LINUX_KASAN_H #define _LINUX_KASAN_H +#include #include #include @@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {} #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ -#ifdef CONFIG_KASAN - -struct kasan_cache { - int alloc_meta_offset; - int free_meta_offset; - bool is_kmalloc; -}; - #ifdef CONFIG_KASAN_HW_TAGS DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled); @@ -101,11 +94,14 @@ static inline bool kasan_has_integrated_init(void) return kasan_enabled(); } +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags); +void kasan_free_pages(struct page *page, unsigned int order); + #else /* CONFIG_KASAN_HW_TAGS */ static inline bool kasan_enabled(void) { - return true; + return IS_ENABLED(CONFIG_KASAN); } static inline bool kasan_has_integrated_init(void) @@ -113,8 +109,30 @@ static inline bool kasan_has_integrated_init(void) return false; } +static __always_inline void kasan_alloc_pages(struct page *page, + unsigned int order, gfp_t flags) +{ + /* Only available for integrated init. */ + BUILD_BUG(); +} + +static __always_inline void kasan_free_pages(struct page *page, + unsigned int order) +{ + /* Only available for integrated init. 
*/ + BUILD_BUG(); +} + #endif /* CONFIG_KASAN_HW_TAGS */ +#ifdef CONFIG_KASAN + +struct kasan_cache { + int alloc_meta_offset; + int free_meta_offset; + bool is_kmalloc; +}; + slab_flags_t __kasan_never_merge(void); static __always_inline slab_flags_t kasan_never_merge(void) { @@ -130,20 +148,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size) __kasan_unpoison_range(addr, size); } -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init); -static __always_inline void kasan_alloc_pages(struct page *page, +void __kasan_poison_pages(struct page *page, unsigned int order, bool init); +static __always_inline void kasan_poison_pages(struct page *page, unsigned int order, bool init) { if (kasan_enabled()) - __kasan_alloc_pages(page, order, init); + __kasan_poison_pages(page, order, init); } -void __kasan_free_pages(struct page *page, unsigned int order, bool init); -static __always_inline void kasan_free_pages(struct page *page, - unsigned int order, bool init) +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init); +static __always_inline void kasan_unpoison_pages(struct page *page, + unsigned int order, bool init) { if (kasan_enabled()) - __kasan_free_pages(page, order, init); + __kasan_unpoison_pages(page, order, init); } void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size, @@ -285,21 +303,15 @@ void kasan_restore_multi_shot(bool enabled); #else /* CONFIG_KASAN */ -static inline bool kasan_enabled(void) -{ - return false; -} -static inline bool kasan_has_integrated_init(void) -{ - return false; -} static inline slab_flags_t kasan_never_merge(void) { return 0; } static inline void kasan_unpoison_range(const void *address, size_t size) {} -static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {} -static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {} +static inline void kasan_poison_pages(struct page *page, unsigned int order, + bool init) {} +static inline void kasan_unpoison_pages(struct page *page, unsigned int order, + bool init) {} static inline void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) {} diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 6bb87f2acd4e..0ecd293af344 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void) return 0; } -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init) +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init) { u8 tag; unsigned long i; @@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init) kasan_unpoison(page_address(page), PAGE_SIZE << order, init); } -void __kasan_free_pages(struct page *page, unsigned int order, bool init) +void __kasan_poison_pages(struct page *page, unsigned int order, bool init) { if (likely(!PageHighMem(page))) kasan_poison(page_address(page), PAGE_SIZE << order, diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 4004388b4e4b..9d0f6f934016 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -238,6 +238,28 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, return &alloc_meta->free_track[0]; } +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) +{ + /* + * This condition should match the one in post_alloc_hook() in + * page_alloc.c. 
+ */ + bool init = !want_init_on_free() && want_init_on_alloc(flags); + + kasan_unpoison_pages(page, order, init); +} + +void kasan_free_pages(struct page *page, unsigned int order) +{ + /* + * This condition should match the one in free_pages_prepare() in + * page_alloc.c. + */ + bool init = want_init_on_free(); + + kasan_poison_pages(page, order, init); +} + #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST) void kasan_set_tagging_report_once(bool state) diff --git a/mm/mempool.c b/mm/mempool.c index a258cf4de575..0b8afbec3e35 100644 --- a/mm/mempool.c +++ b/mm/mempool.c @@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element) if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) kasan_slab_free_mempool(element); else if (pool->alloc == mempool_alloc_pages) - kasan_free_pages(element, (unsigned long)pool->pool_data, false); + kasan_poison_pages(element, (unsigned long)pool->pool_data, + false); } static void kasan_unpoison_element(mempool_t *pool, void *element) @@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element) if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) kasan_unpoison_range(element, __ksize(element)); else if (pool->alloc == mempool_alloc_pages) - kasan_alloc_pages(element, (unsigned long)pool->pool_data, false); + kasan_unpoison_pages(element, (unsigned long)pool->pool_data, + false); } static __always_inline void add_element(mempool_t *pool, void *element) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index aaa1655cf682..4fddb7cac3c6 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly; static DEFINE_STATIC_KEY_TRUE(deferred_pages); /* - * Calling kasan_free_pages() only after deferred memory initialization + * Calling kasan_poison_pages() only after deferred memory initialization * has completed. Poisoning pages during deferred memory init will greatly * lengthen the process and cause problem in large memory systems as the * deferred pages initialization is done with interrupt disabled. @@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages); * on-demand allocation and then freed again before the deferred pages * initialization is done, but this is not likely to happen. 
*/ -static inline void kasan_free_nondeferred_pages(struct page *page, int order, - bool init, fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(fpi_t fpi_flags) { - if (static_branch_unlikely(&deferred_pages)) - return; - if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)) - return; - kasan_free_pages(page, order, init); + return static_branch_unlikely(&deferred_pages) || + (!IS_ENABLED(CONFIG_KASAN_GENERIC) && + (fpi_flags & FPI_SKIP_KASAN_POISON)); } /* Returns true if the struct page for the pfn is uninitialised */ @@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn) return false; } #else -static inline void kasan_free_nondeferred_pages(struct page *page, int order, - bool init, fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(fpi_t fpi_flags) { - if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)) - return; - kasan_free_pages(page, order, init); + return (!IS_ENABLED(CONFIG_KASAN_GENERIC) && + (fpi_flags & FPI_SKIP_KASAN_POISON)); } static inline bool early_page_uninitialised(unsigned long pfn) @@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page, unsigned int order, bool check_free, fpi_t fpi_flags) { int bad = 0; - bool init; + bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags); VM_BUG_ON_PAGE(PageTail(page), page); @@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page, * With hardware tag-based KASAN, memory tags must be set before the * page becomes unavailable via debug_pagealloc or arch_free_page. */ - init = want_init_on_free(); - if (init && !kasan_has_integrated_init()) - kernel_init_free_pages(page, 1 << order); - kasan_free_nondeferred_pages(page, order, init, fpi_flags); + if (kasan_has_integrated_init()) { + if (!skip_kasan_poison) + kasan_free_pages(page, order); + } else { + bool init = want_init_on_free(); + + if (init) + kernel_init_free_pages(page, 1 << order); + if (!skip_kasan_poison) + kasan_poison_pages(page, order, init); + } /* * arch_free_page() can make the page's contents inaccessible. s390 @@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order) inline void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags) { - bool init; - set_page_private(page, 0); set_page_refcounted(page); @@ -2344,10 +2342,15 @@ inline void post_alloc_hook(struct page *page, unsigned int order, * kasan_alloc_pages and kernel_init_free_pages must be * kept together to avoid discrepancies in behavior. 
*/ - init = !want_init_on_free() && want_init_on_alloc(gfp_flags); - kasan_alloc_pages(page, order, init); - if (init && !kasan_has_integrated_init()) - kernel_init_free_pages(page, 1 << order); + if (kasan_has_integrated_init()) { + kasan_alloc_pages(page, order, gfp_flags); + } else { + bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags); + + kasan_unpoison_pages(page, order, init); + if (init) + kernel_init_free_pages(page, 1 << order); + } set_page_owner(page, order, gfp_flags); }
From patchwork Wed Jun 2 23:52:29 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12295845
Date: Wed, 2 Jun 2021 16:52:29 -0700
In-Reply-To: <20210602235230.3928842-1-pcc@google.com>
Message-Id: <20210602235230.3928842-4-pcc@google.com>
References: <20210602235230.3928842-1-pcc@google.com>
Subject: [PATCH v6 3/4] arm64: mte: handle tags zeroing at page allocation time
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently, on an anonymous page fault, the kernel allocates a zeroed page and
maps it in user space. If the mapping is tagged (PROT_MTE), set_pte_at()
additionally clears the tags. It is, however, more efficient to clear the tags
at the same time as zeroing the data on allocation. To avoid clearing the tags
on any page (which may not be mapped as tagged), only do this if the vma flags
contain VM_MTE. This requires introducing a new GFP flag that is used to
determine whether to clear the tags.

The DC GZVA instruction with a 0 top byte (and 0 tag) requires top-byte-ignore.
Set the TCR_EL1.{TBI1,TBID1} bits irrespective of whether KASAN_HW is enabled.
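The allocation-path side of the change is small. A condensed sketch of the new
arm64 alloc_zeroed_user_highpage_movable() implementation, extracted from the
arch/arm64/mm/fault.c hunk in the diff below:

  struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
                                                  unsigned long vaddr)
  {
          gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;

          /*
           * For PROT_MTE (VM_MTE) mappings, initialise the tags at the same
           * time as zeroing the data instead of clearing them separately in
           * set_pte_at().
           */
          if (vma->vm_flags & VM_MTE)
                  flags |= __GFP_ZEROTAGS;

          return alloc_page_vma(flags, vma, vaddr);
  }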
Signed-off-by: Peter Collingbourne Co-developed-by: Catalin Marinas Signed-off-by: Catalin Marinas Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26 Reviewed-by: Catalin Marinas Reviewed-by: Andrey Konovalov --- v2: - remove want_zero_tags_on_free() arch/arm64/include/asm/mte.h | 4 ++++ arch/arm64/include/asm/page.h | 8 ++++++-- arch/arm64/lib/mte.S | 20 ++++++++++++++++++++ arch/arm64/mm/fault.c | 26 ++++++++++++++++++++++++++ arch/arm64/mm/proc.S | 10 +++++++--- include/linux/gfp.h | 9 +++++++-- include/linux/highmem.h | 8 ++++++++ mm/kasan/hw_tags.c | 9 ++++++++- mm/page_alloc.c | 13 ++++++++++--- 9 files changed, 96 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index bc88a1ced0d7..67bf259ae768 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -37,6 +37,7 @@ void mte_free_tag_storage(char *storage); /* track which pages have valid allocation tags */ #define PG_mte_tagged PG_arch_2 +void mte_zero_clear_page_tags(void *addr); void mte_sync_tags(pte_t *ptep, pte_t pte); void mte_copy_page_tags(void *kto, const void *kfrom); void mte_thread_init_user(void); @@ -53,6 +54,9 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request, /* unused if !CONFIG_ARM64_MTE, silence the compiler */ #define PG_mte_tagged 0 +static inline void mte_zero_clear_page_tags(void *addr) +{ +} static inline void mte_sync_tags(pte_t *ptep, pte_t pte) { } diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index e1fc0f60e79f..ed1b9dcf12b2 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -13,6 +13,7 @@ #ifndef __ASSEMBLY__ #include /* for READ_IMPLIES_EXEC */ +#include /* for gfp_t */ #include struct page; @@ -28,10 +29,13 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE -#define alloc_zeroed_user_highpage_movable(vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr) +struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, + unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE +void tag_clear_highpage(struct page *to); +#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE + #define clear_user_page(page, vaddr, pg) clear_page(page) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 351537c12f36..e83643b3995f 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -36,6 +36,26 @@ SYM_FUNC_START(mte_clear_page_tags) ret SYM_FUNC_END(mte_clear_page_tags) +/* + * Zero the page and tags at the same time + * + * Parameters: + * x0 - address to the beginning of the page + */ +SYM_FUNC_START(mte_zero_clear_page_tags) + mrs x1, dczid_el0 + and w1, w1, #0xf + mov x2, #4 + lsl x1, x2, x1 + and x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag + +1: dc gzva, x0 + add x0, x0, x1 + tst x0, #(PAGE_SIZE - 1) + b.ne 1b + ret +SYM_FUNC_END(mte_zero_clear_page_tags) + /* * Copy the tags from the source page to the destination one * x0 - address of the destination page diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 871c82ab0a30..180c0343d82a 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -921,3 +921,29 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr, debug_exception_exit(regs); } NOKPROBE_SYMBOL(do_debug_exception); + +/* + * Used during 
anonymous page fault handling. + */ +struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, + unsigned long vaddr) +{ + gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO; + + /* + * If the page is mapped with PROT_MTE, initialise the tags at the + * point of allocation and page zeroing as this is usually faster than + * separate DC ZVA and STGM. + */ + if (vma->vm_flags & VM_MTE) + flags |= __GFP_ZEROTAGS; + + return alloc_page_vma(flags, vma, vaddr); +} + +void tag_clear_highpage(struct page *page) +{ + mte_zero_clear_page_tags(page_address(page)); + page_kasan_tag_reset(page); + set_bit(PG_mte_tagged, &page->flags); +} diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 97d7bcd8d4f2..48fd1df3d05a 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -46,9 +46,13 @@ #endif #ifdef CONFIG_KASAN_HW_TAGS -#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1 +#define TCR_MTE_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1 #else -#define TCR_KASAN_HW_FLAGS 0 +/* + * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on + * TBI being enabled at EL1. + */ +#define TCR_MTE_FLAGS TCR_TBI1 | TCR_TBID1 #endif /* @@ -464,7 +468,7 @@ SYM_FUNC_START(__cpu_setup) msr_s SYS_TFSRE0_EL1, xzr /* set the TCR_EL1 bits */ - mov_q x10, TCR_KASAN_HW_FLAGS + mov_q x10, TCR_MTE_FLAGS orr tcr, tcr, x10 1: #endif diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 11da8af06704..68ba237365dc 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -53,8 +53,9 @@ struct vm_area_struct; #define ___GFP_HARDWALL 0x100000u #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u +#define ___GFP_ZEROTAGS 0x800000u #ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x800000u +#define ___GFP_NOLOCKDEP 0x1000000u #else #define ___GFP_NOLOCKDEP 0 #endif @@ -229,16 +230,20 @@ struct vm_area_struct; * %__GFP_COMP address compound page metadata. * * %__GFP_ZERO returns a zeroed page on success. + * + * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if + * __GFP_ZERO is set. */ #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) +#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) /* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP)) #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /** diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 54d0643b8fcf..8c6e8e996c87 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -185,6 +185,14 @@ static inline void clear_highpage(struct page *page) kunmap_atomic(kaddr); } +#ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE + +static inline void tag_clear_highpage(struct page *page) +{ +} + +#endif + /* * If we pass in a base or tail page, we can zero up to PAGE_SIZE. * If we pass in a head page, we can zero up to the size of the compound page. 
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 9d0f6f934016..41fd5326ee0a 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -246,7 +246,14 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) */ bool init = !want_init_on_free() && want_init_on_alloc(flags); - kasan_unpoison_pages(page, order, init); + if (flags & __GFP_ZEROTAGS) { + int i; + + for (i = 0; i != 1 << order; ++i) + tag_clear_highpage(page + i); + } else { + kasan_unpoison_pages(page, order, init); + } } void kasan_free_pages(struct page *page, unsigned int order) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4fddb7cac3c6..13937e793fda 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1219,10 +1219,16 @@ static int free_tail_pages_check(struct page *head_page, struct page *page) return ret; } -static void kernel_init_free_pages(struct page *page, int numpages) +static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags) { int i; + if (zero_tags) { + for (i = 0; i < numpages; i++) + tag_clear_highpage(page + i); + return; + } + /* s390's use of memset() could override KASAN redzones. */ kasan_disable_current(); for (i = 0; i < numpages; i++) { @@ -1314,7 +1320,7 @@ static __always_inline bool free_pages_prepare(struct page *page, bool init = want_init_on_free(); if (init) - kernel_init_free_pages(page, 1 << order); + kernel_init_free_pages(page, 1 << order, false); if (!skip_kasan_poison) kasan_poison_pages(page, order, init); } @@ -2349,7 +2355,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order, kasan_unpoison_pages(page, order, init); if (init) - kernel_init_free_pages(page, 1 << order); + kernel_init_free_pages(page, 1 << order, + gfp_flags & __GFP_ZEROTAGS); } set_page_owner(page, order, gfp_flags);
From patchwork Wed Jun 2 23:52:30 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12295839
Date: Wed, 2 Jun 2021 16:52:30 -0700
In-Reply-To: <20210602235230.3928842-1-pcc@google.com>
Message-Id: <20210602235230.3928842-5-pcc@google.com>
References: <20210602235230.3928842-1-pcc@google.com>
Subject: [PATCH v6 4/4] kasan: disable freed user page poisoning with HW tags
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Poisoning freed pages protects against kernel use-after-free. The likelihood
of such a bug involving kernel pages is significantly higher than that for
user pages. At the same time, poisoning freed pages can impose a significant
performance cost, which cannot always be justified for user pages given the
lower probability of finding a bug. Therefore, disable freed user page
poisoning when using HW tags.

We identify "user" pages via the flag set GFP_HIGHUSER_MOVABLE, which
indicates a strong likelihood of not being directly accessible to the kernel.

Signed-off-by: Peter Collingbourne Reviewed-by: Andrey Konovalov Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9 --- v4: - move flag to GFP_HIGHUSER_MOVABLE - remove command line flag include/linux/gfp.h | 13 ++++++++++--- include/linux/page-flags.h | 9 +++++++++ include/trace/events/mmflags.h | 9 ++++++++- mm/kasan/hw_tags.c | 3 +++ mm/page_alloc.c | 12 +++++++----- 5 files changed, 37 insertions(+), 9 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 68ba237365dc..e6102dfa4faa 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -54,8 +54,9 @@ struct vm_area_struct; #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u #define ___GFP_ZEROTAGS 0x800000u +#define ___GFP_SKIP_KASAN_POISON 0x1000000u #ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x1000000u +#define ___GFP_NOLOCKDEP 0x2000000u #else #define ___GFP_NOLOCKDEP 0 #endif @@ -233,17 +234,22 @@ struct vm_area_struct; * * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if * __GFP_ZERO is set. + * + * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned + * on deallocation. Typically used for userspace pages. Currently only has an + * effect in HW tags mode.
*/ #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) #define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) +#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON) /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) /* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP)) #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /** @@ -324,7 +330,8 @@ struct vm_area_struct; #define GFP_DMA __GFP_DMA #define GFP_DMA32 __GFP_DMA32 #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM) -#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE) +#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | \ + __GFP_SKIP_KASAN_POISON) #define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \ __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM) #define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 04a34c08e0a6..40e2c5000585 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -137,6 +137,9 @@ enum pageflags { #endif #ifdef CONFIG_64BIT PG_arch_2, +#endif +#ifdef CONFIG_KASAN_HW_TAGS + PG_skip_kasan_poison, #endif __NR_PAGEFLAGS, @@ -443,6 +446,12 @@ TESTCLEARFLAG(Young, young, PF_ANY) PAGEFLAG(Idle, idle, PF_ANY) #endif +#ifdef CONFIG_KASAN_HW_TAGS +PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD) +#else +PAGEFLAG_FALSE(SkipKASanPoison) +#endif + /* * PageReported() is used to track reported free pages within the Buddy * allocator. We can use the non-atomic version of the test and set diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 629c7a0eaff2..390270e00a1d 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -85,6 +85,12 @@ #define IF_HAVE_PG_ARCH_2(flag,string) #endif +#ifdef CONFIG_KASAN_HW_TAGS +#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) ,{1UL << flag, string} +#else +#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) +#endif + #define __def_pageflag_names \ {1UL << PG_locked, "locked" }, \ {1UL << PG_waiters, "waiters" }, \ @@ -112,7 +118,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \ IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \ IF_HAVE_PG_IDLE(PG_young, "young" ) \ IF_HAVE_PG_IDLE(PG_idle, "idle" ) \ -IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) +IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \ +IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison") #define show_page_flags(flags) \ (flags) ? __print_flags(flags, "|", \ diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 41fd5326ee0a..ed5e5b833d61 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -246,6 +246,9 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) */ bool init = !want_init_on_free() && want_init_on_alloc(flags); + if (flags & __GFP_SKIP_KASAN_POISON) + SetPageSkipKASanPoison(page); + if (flags & __GFP_ZEROTAGS) { int i; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 13937e793fda..5ad76e540a22 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -394,11 +394,12 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages); * on-demand allocation and then freed again before the deferred pages * initialization is done, but this is not likely to happen. 
*/ -static inline bool should_skip_kasan_poison(fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags) { return static_branch_unlikely(&deferred_pages) || (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)); + (fpi_flags & FPI_SKIP_KASAN_POISON)) || + PageSkipKASanPoison(page); } /* Returns true if the struct page for the pfn is uninitialised */ @@ -449,10 +450,11 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn) return false; } #else -static inline bool should_skip_kasan_poison(fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags) { return (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)); + (fpi_flags & FPI_SKIP_KASAN_POISON)) || + PageSkipKASanPoison(page); } static inline bool early_page_uninitialised(unsigned long pfn) @@ -1244,7 +1246,7 @@ static __always_inline bool free_pages_prepare(struct page *page, unsigned int order, bool check_free, fpi_t fpi_flags) { int bad = 0; - bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags); + bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags); VM_BUG_ON_PAGE(PageTail(page), page);
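Taken together, the new flag travels from allocation to free time via a page
flag. A condensed sketch assembled from the hunks above (not a complete
listing):

  /* GFP_HIGHUSER_MOVABLE now implies skipping KASAN poisoning on free: */
  #define GFP_HIGHUSER_MOVABLE  (GFP_HIGHUSER | __GFP_MOVABLE | \
                                 __GFP_SKIP_KASAN_POISON)

  /* kasan_alloc_pages() (HW tags mode) records the request on the page: */
  if (flags & __GFP_SKIP_KASAN_POISON)
          SetPageSkipKASanPoison(page);

  /* free_pages_prepare() then has one more reason to skip poisoning: */
  static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
  {
          return static_branch_unlikely(&deferred_pages) ||
                 (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
                  (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
                 PageSkipKASanPoison(page);
  }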