From patchwork Fri Dec 18 22:01:28 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983025
Date: Fri, 18 Dec 2020 14:01:28 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com, guro@fb.com,
 hannes@cmpxchg.org, hughd@google.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 vdavydov.dev@gmail.com
Subject: [patch 01/78] mm/memcg: bail early from swap accounting if memcg disabled
Message-ID: <20201218220128.vmNwvyXdi%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Alex Shi
Subject: mm/memcg: bail early from swap accounting if memcg disabled

Patch series "bail out early for memcg disable".

These two patches are independent of the per-memcg lru_lock work and may
trigger an unexpected warning, so let's move them out of the per-memcg
lru_lock patchset.

This patch (of 2):

We can bail out early when memcg is not enabled.

Link: https://lkml.kernel.org/r/1604283436-18880-1-git-send-email-alex.shi@linux.alibaba.com
Link: https://lkml.kernel.org/r/1604283436-18880-2-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi
Reviewed-by: Roman Gushchin
Acked-by: Michal Hocko
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Vladimir Davydov
Signed-off-by: Andrew Morton
---

 mm/memcontrol.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/memcontrol.c~mm-memcg-bail-early-from-swap-accounting-if-memcg-disabled
+++ a/mm/memcontrol.c
@@ -7178,6 +7178,9 @@ void mem_cgroup_swapout(struct page *pag
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
@@ -7242,6 +7245,9 @@ int mem_cgroup_try_charge_swap(struct pa
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
+	if (mem_cgroup_disabled())
+		return 0;
+
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
From patchwork Fri Dec 18 22:01:31 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983027
Date: Fri, 18 Dec 2020 14:01:31 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com, hannes@cmpxchg.org,
 hughd@google.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 vdavydov.dev@gmail.com
Subject: [patch 02/78] mm/memcg: warning on !memcg after readahead page charged
Message-ID: <20201218220131.5zsD3yneg%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Alex Shi
Subject: mm/memcg: warning on !memcg after readahead page charged

Add a VM_WARN_ON_ONCE_PAGE() macro.

Since readahead pages are now charged to a memcg as well, in theory we no
longer need the !memcg exception checks.  Before removing them all, add a
warning for the unexpected !memcg case.

Link: https://lkml.kernel.org/r/1604283436-18880-3-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi
Acked-by: Michal Hocko
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Vladimir Davydov
Signed-off-by: Andrew Morton
---

 include/linux/mmdebug.h |   13 +++++++++++++
 mm/memcontrol.c         |   10 ++++------
 2 files changed, 17 insertions(+), 6 deletions(-)

--- a/include/linux/mmdebug.h~mm-memcg-warning-on-memcg-after-readahead-page-charged
+++ a/include/linux/mmdebug.h
@@ -37,6 +37,18 @@ void dump_mm(const struct mm_struct *mm)
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)	({			\
+	static bool __section(".data.once") __warned;			\
+	int __ret_warn_once = !!(cond);					\
+									\
+	if (unlikely(__ret_warn_once && !__warned)) {			\
+		dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
+
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
@@ -48,6 +60,7 @@ void dump_mm(const struct mm_struct *mm)
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
--- a/mm/memcontrol.c~mm-memcg-warning-on-memcg-after-readahead-page-charged
+++ a/mm/memcontrol.c
@@ -1362,10 +1362,7 @@ struct lruvec *mem_cgroup_page_lruvec(st
 	}
 
 	memcg = page_memcg(page);
-	/*
-	 * Swapcache readahead pages are added to the LRU - and
-	 * possibly migrated - before they are charged.
-	 */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		memcg = root_mem_cgroup;
 
@@ -6987,6 +6984,7 @@ void mem_cgroup_migrate(struct page *old
 		return;
 
 	memcg = page_memcg(oldpage);
+	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
 	if (!memcg)
 		return;
 
@@ -7186,7 +7184,7 @@ void mem_cgroup_swapout(struct page *pag
 
 	memcg = page_memcg(page);
 
-	/* Readahead page, never charged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return;
 
@@ -7253,7 +7251,7 @@ int mem_cgroup_try_charge_swap(struct pa
 
 	memcg = page_memcg(page);
 
-	/* Readahead page, never charged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return 0;
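For reference, the warn-once pattern that VM_WARN_ON_ONCE_PAGE() follows can
be sketched in plain userspace C roughly as follows.  This is illustrative
only, not kernel code and not part of the patch above; like the kernel macro
it relies on the GCC statement-expression extension, and all names here are
made up for the demo.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative userspace sketch of a "warn once" check: a static flag makes
 * the diagnostic fire only the first time the condition is seen true, while
 * the whole expression still evaluates to the condition every time. */
#define WARN_ON_ONCE_DEMO(cond) ({				\
	static bool __warned;					\
	bool __ret = (cond);					\
	if (__ret && !__warned) {				\
		__warned = true;				\
		fprintf(stderr, "warning (once): %s\n", #cond);	\
	}							\
	__ret;							\
})

int main(void)
{
	for (int i = 0; i < 3; i++)
		if (WARN_ON_ONCE_DEMO(i >= 0))
			fprintf(stderr, "condition true on pass %d\n", i);
	return 0;
}

The warning is printed once, but the condition's value is still usable on
every evaluation, which is what lets the kernel macro sit directly inside an
if () statement.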
From patchwork Fri Dec 18 22:01:35 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983029
Date: Fri, 18 Dec 2020 14:01:35 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, guro@fb.com, hannes@cmpxchg.org,
 linux-mm@kvack.org, mhocko@suse.com, mm-commits@vger.kernel.org,
 richard.weiyang@gmail.com, shakeelb@google.com, torvalds@linux-foundation.org
Subject: [patch 03/78] mm/memcg: remove unused definitions
Message-ID: <20201218220135.8ZtqsqPmA%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Wei Yang
Subject: mm/memcg: remove unused definitions

Some definitions are left unused; just remove them.

Link: https://lkml.kernel.org/r/20201108003834.12669-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang
Acked-by: Michal Hocko
Reviewed-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Cc: Johannes Weiner
Signed-off-by: Andrew Morton
---

 include/linux/memcontrol.h |  118 -----------------------------------
 1 file changed, 118 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcg-remove-unused-definitions
+++ a/include/linux/memcontrol.h
@@ -913,41 +913,6 @@ static inline void mod_memcg_state(struc
 	local_irq_restore(flags);
 }
 
-/**
- * mod_memcg_page_state - update page state statistics
- * @page: the page
- * @idx: page state item to account
- * @val: number of pages (positive or negative)
- *
- * The @page must be locked or the caller must use lock_page_memcg()
- * to prevent double accounting when the page is concurrently being
- * moved to another memcg:
- *
- *   lock_page(page) or lock_page_memcg(page)
- *   if (TestClearPageState(page))
- *     mod_memcg_page_state(page, state, -1);
- *   unlock_page(page) or unlock_page_memcg(page)
- *
- * Kernel pages are an exception to this, since they'll never move.
- */
-static inline void __mod_memcg_page_state(struct page *page,
-					  int idx, int val)
-{
-	struct mem_cgroup *memcg = page_memcg(page);
-
-	if (memcg)
-		__mod_memcg_state(memcg, idx, val);
-}
-
-static inline void mod_memcg_page_state(struct page *page,
-					int idx, int val)
-{
-	struct mem_cgroup *memcg = page_memcg(page);
-
-	if (memcg)
-		mod_memcg_state(memcg, idx, val);
-}
-
 static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
 					      enum node_stat_item idx)
 {
@@ -1395,18 +1360,6 @@ static inline void mod_memcg_state(struc
 {
 }
 
-static inline void __mod_memcg_page_state(struct page *page,
-					  int idx,
-					  int nr)
-{
-}
-
-static inline void mod_memcg_page_state(struct page *page,
-					int idx,
-					int nr)
-{
-}
-
 static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
 					      enum node_stat_item idx)
 {
@@ -1479,34 +1432,6 @@ static inline void lruvec_memcg_debug(st
 }
 #endif /* CONFIG_MEMCG */
 
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void __inc_memcg_state(struct mem_cgroup *memcg,
-				     int idx)
-{
-	__mod_memcg_state(memcg, idx, 1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void __dec_memcg_state(struct mem_cgroup *memcg,
-				     int idx)
-{
-	__mod_memcg_state(memcg, idx, -1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void __inc_memcg_page_state(struct page *page,
-					  int idx)
-{
-	__mod_memcg_page_state(page, idx, 1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void __dec_memcg_page_state(struct page *page,
-					  int idx)
-{
-	__mod_memcg_page_state(page, idx, -1);
-}
-
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
@@ -1517,34 +1442,6 @@ static inline void __dec_lruvec_kmem_sta
 	__mod_lruvec_kmem_state(p, idx, -1);
 }
 
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void inc_memcg_state(struct mem_cgroup *memcg,
-				   int idx)
-{
-	mod_memcg_state(memcg, idx, 1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void dec_memcg_state(struct mem_cgroup *memcg,
-				   int idx)
-{
-	mod_memcg_state(memcg, idx, -1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void inc_memcg_page_state(struct page *page,
-					int idx)
-{
-	mod_memcg_page_state(page, idx, 1);
-}
-
-/* idx can be of type enum memcg_stat_item or node_stat_item */
-static inline void dec_memcg_page_state(struct page *page,
-					int idx)
-{
-	mod_memcg_page_state(page, idx, -1);
-}
-
 static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 {
 	struct mem_cgroup *memcg;
@@ -1733,21 +1630,6 @@ static inline void memcg_kmem_uncharge_p
 		__memcg_kmem_uncharge_page(page, order);
 }
 
-static inline int memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
-				    unsigned int nr_pages)
-{
-	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge(memcg, gfp, nr_pages);
-	return 0;
-}
-
-static inline void memcg_kmem_uncharge(struct mem_cgroup *memcg,
-				       unsigned int nr_pages)
-{
-	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge(memcg, nr_pages);
-}
-
 /*
  * A helper for accessing memcg's kmem_id, used for getting
  * corresponding LRU lists.
From patchwork Fri Dec 18 22:01:38 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983031
Date: Fri, 18 Dec 2020 14:01:38 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, guro@fb.com, hannes@cmpxchg.org,
 linux-mm@kvack.org, mhocko@suse.com, mm-commits@vger.kernel.org,
 pbonzini@redhat.com, shakeelb@google.com, torvalds@linux-foundation.org
Subject: [patch 04/78] mm, kvm: account kvm_vcpu_mmap to kmemcg
Message-ID: <20201218220138.ZaFbo4Y29%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Shakeel Butt
Subject: mm, kvm: account kvm_vcpu_mmap to kmemcg

A VCPU of a VM can allocate a couple of pages which can be mmap'ed by the
user space application.  At the moment this memory is not charged to the
memcg of the VMM.  On a large machine running a large number of VMs, or a
small number of VMs each with many VCPUs, this unaccounted memory can be
very significant.  So, charge this memory to the memcg of the VMM.  Please
note that the lifetime of these allocations corresponds to the lifetime of
the VMM.

Link: https://lkml.kernel.org/r/20201106202923.2087414-1-shakeelb@google.com
Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Paolo Bonzini
Cc: Johannes Weiner
Cc: Michal Hocko
Signed-off-by: Andrew Morton
---

 arch/x86/kvm/x86.c        |    2 +-
 virt/kvm/coalesced_mmio.c |    2 +-
 virt/kvm/kvm_main.c       |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/kvm/x86.c~mm-kvm-account-kvm_vcpu_mmap-to-kmemcg
+++ a/arch/x86/kvm/x86.c
@@ -9869,7 +9869,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
 
 	r = -ENOMEM;
 
-	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!page)
 		goto fail_free_lapic;
 	vcpu->arch.pio_data = page_address(page);
--- a/virt/kvm/coalesced_mmio.c~mm-kvm-account-kvm_vcpu_mmap-to-kmemcg
+++ a/virt/kvm/coalesced_mmio.c
@@ -111,7 +111,7 @@ int kvm_coalesced_mmio_init(struct kvm *
 {
 	struct page *page;
 
-	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!page)
 		return -ENOMEM;
 
--- a/virt/kvm/kvm_main.c~mm-kvm-account-kvm_vcpu_mmap-to-kmemcg
+++ a/virt/kvm/kvm_main.c
@@ -3116,7 +3116,7 @@ static int kvm_vm_ioctl_create_vcpu(stru
 	}
 	BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE);
 
-	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!page) {
 		r = -ENOMEM;
 		goto vcpu_free;
From patchwork Fri Dec 18 22:01:41 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983033
Date: Fri, 18 Dec 2020 14:01:41 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com, chris@chrisdown.name,
 guro@fb.com, hannes@cmpxchg.org, laoar.shao@gmail.com, linux-mm@kvack.org,
 lstoakes@gmail.com, mhocko@suse.com, mm-commits@vger.kernel.org,
 sh_def@163.com, shakeelb@google.com, torvalds@linux-foundation.org,
 vdavydov.dev@gmail.com
Subject: [patch 05/78] mm/memcontrol:rewrite mem_cgroup_page_lruvec()
Message-ID: <20201218220141.bF5Js2IGm%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Hui Su
Subject: mm/memcontrol:rewrite mem_cgroup_page_lruvec()

mem_cgroup_page_lruvec() in memcontrol.c and mem_cgroup_lruvec() in
memcontrol.h are very similar: they differ only in their parameter (a page
vs. a memcg), and one can be converted to the other.  So rewrite
mem_cgroup_page_lruvec() in terms of mem_cgroup_lruvec().

[alex.shi@linux.alibaba.com: add missed warning in mem_cgroup_lruvec]
  Link: https://lkml.kernel.org/r/94f17bb7-ec61-5b72-3555-fabeb5a4d73b@linux.alibaba.com
[lstoakes@gmail.com: warn on missing memcg on mem_cgroup_page_lruvec()]
  Link: https://lkml.kernel.org/r/20201125112202.387009-1-lstoakes@gmail.com
Link: https://lkml.kernel.org/r/20201108143731.GA74138@rlk
Signed-off-by: Hui Su
Signed-off-by: Alex Shi
Signed-off-by: Lorenzo Stoakes
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Cc: Vladimir Davydov
Cc: Yafang Shao
Cc: Chris Down
Signed-off-by: Andrew Morton
---

 include/linux/memcontrol.h |   19 ++++++++++++++++-
 mm/memcontrol.c            |   37 -----------------------------------
 2 files changed, 17 insertions(+), 39 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-rewrite-mem_cgroup_page_lruvec
+++ a/include/linux/memcontrol.h
@@ -620,9 +620,10 @@ mem_cgroup_nodeinfo(struct mem_cgroup *m
 /**
  * mem_cgroup_lruvec - get the lru list vector for a memcg & node
  * @memcg: memcg of the wanted lruvec
+ * @pgdat: pglist_data
  *
  * Returns the lru list vector holding pages for a given @memcg &
- * @node combination. This can be the node lruvec, if the memory
+ * @pgdat combination. This can be the node lruvec, if the memory
  * controller is disabled.
  */
 static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
@@ -652,7 +653,21 @@ out:
 	return lruvec;
 }
 
-struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
+/**
+ * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
+ * @page: the page
+ * @pgdat: pgdat of the page
+ *
+ * This function relies on page->mem_cgroup being stable.
+ */
+static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
+						struct pglist_data *pgdat)
+{
+	struct mem_cgroup *memcg = page_memcg(page);
+
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
+	return mem_cgroup_lruvec(memcg, pgdat);
+}
 
 static inline bool lruvec_holds_page_lru_lock(struct page *page,
 					      struct lruvec *lruvec)
--- a/mm/memcontrol.c~mm-memcontrol-rewrite-mem_cgroup_page_lruvec
+++ a/mm/memcontrol.c
@@ -1343,43 +1343,6 @@ void lruvec_memcg_debug(struct lruvec *l
 #endif
 
 /**
- * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
- * @page: the page
- * @pgdat: pgdat of the page
- *
- * This function relies on page's memcg being stable - see the
- * access rules in commit_charge().
- */
-struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgdat)
-{
-	struct mem_cgroup_per_node *mz;
-	struct mem_cgroup *memcg;
-	struct lruvec *lruvec;
-
-	if (mem_cgroup_disabled()) {
-		lruvec = &pgdat->__lruvec;
-		goto out;
-	}
-
-	memcg = page_memcg(page);
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
-	if (!memcg)
-		memcg = root_mem_cgroup;
-
-	mz = mem_cgroup_page_nodeinfo(memcg, page);
-	lruvec = &mz->lruvec;
-out:
-	/*
-	 * Since a node can be onlined after the mem_cgroup was created,
-	 * we have to be prepared to initialize lruvec->zone here;
-	 * and if offlined then reonlined, we need to reinitialize it.
-	 */
-	if (unlikely(lruvec->pgdat != pgdat))
-		lruvec->pgdat = pgdat;
-	return lruvec;
-}
-
-/**
  * lock_page_lruvec - lock and return lruvec for a given page.
  * @page: the page
 *
From patchwork Fri Dec 18 22:01:44 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983035
Date: Fri, 18 Dec 2020 14:01:44 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, dbueso@suse.de, edumazet@google.com,
 guantaol@google.com, khazhy@google.com, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, soheil@google.com, torvalds@linux-foundation.org,
 willemb@google.com
Subject: [patch 06/78] epoll: check for events when removing a timed out thread from the wait queue
Message-ID: <20201218220144.2NepJeilo%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Soheil Hassas Yeganeh
Subject: epoll: check for events when removing a timed out thread from the wait queue

Patch series "simplify ep_poll".

This patch series is a followup based on the suggestions and feedback by
Linus:
https://lkml.kernel.org/r/CAHk-=wizk=OxUyQPbO8MS41w2Pag1kniUV5WdD5qWL-gq1kjDA@mail.gmail.com

The first patch in the series is a fix for the epoll race in the presence
of timeouts, so that it can be cleanly backported to all affected stable
kernels.

The rest of the patch series simplifies the ep_poll() implementation.
Some of these simplifications result in minor performance enhancements as
well.  We have kept these changes under self tests and internal benchmarks
for a few days, and there are minor (1-2%) performance enhancements as a
result.

This patch (of 8):

After abc610e01c66 ("fs/epoll: avoid barrier after an epoll_wait(2)
timeout"), we break out of the ep_poll loop upon timeout, without checking
whether there are any new events available.  Prior to that patch series we
always called ep_events_available() after exiting the loop.  This can
cause races and missed wakeups.

For example, consider the following scenario reported by Guantao Liu:

Suppose we have an eventfd added using EPOLLET to an epollfd.

Thread 1: Sleeps for just below 5 ms and then writes to the eventfd.
Thread 2: Calls epoll_wait with a timeout of 5 ms.  If it sees an event of
          the eventfd, it will write back on that fd.
Thread 3: Calls epoll_wait with a negative timeout.

Prior to abc610e01c66, it is guaranteed that Thread 3 will be woken up by
either Thread 1 or Thread 2.  After abc610e01c66, Thread 3 can be blocked
indefinitely if Thread 2 sees a timeout right before the write to the
eventfd by Thread 1.  Thread 2 will be woken up from
schedule_hrtimeout_range and, with eavail 0, it will not call
ep_send_events().

To fix this issue:
1) Simplify the timed_out case as suggested by Linus.
2) While holding the lock, recheck whether the thread was woken up after
   its timeout expired.

Note that (2) is different from Linus' original suggestion: it does not
set "eavail = ep_events_available(ep)", to avoid unnecessary contention
(when there are too many timed-out threads and a small number of events),
as well as the races mentioned in the discussion thread.

This is the first patch in the series so that the backport to stable
releases is straightforward.

Link: https://lkml.kernel.org/r/20201106231635.3528496-1-soheil.kdev@gmail.com
Link: https://lkml.kernel.org/r/CAHk-=wizk=OxUyQPbO8MS41w2Pag1kniUV5WdD5qWL-gq1kjDA@mail.gmail.com
Link: https://lkml.kernel.org/r/20201106231635.3528496-2-soheil.kdev@gmail.com
Fixes: abc610e01c66 ("fs/epoll: avoid barrier after an epoll_wait(2) timeout")
Signed-off-by: Soheil Hassas Yeganeh
Tested-by: Guantao Liu
Suggested-by: Linus Torvalds
Reported-by: Guantao Liu
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
Reviewed-by: Khazhismel Kumykov
Reviewed-by: Davidlohr Bueso
Signed-off-by: Andrew Morton
---

 fs/eventpoll.c |   27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

--- a/fs/eventpoll.c~epoll-check-for-events-when-removing-a-timed-out-thread-from-the-wait-queue
+++ a/fs/eventpoll.c
@@ -1817,23 +1817,30 @@ fetch_events:
 		}
 		write_unlock_irq(&ep->lock);
 
-		if (eavail || res)
-			break;
-
-		if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) {
-			timed_out = 1;
-			break;
-		}
-
-		/* We were woken up, thus go and try to harvest some events */
+		if (!eavail && !res)
+			timed_out = !schedule_hrtimeout_range(to, slack,
+							      HRTIMER_MODE_ABS);
+
+		/*
+		 * We were woken up, thus go and try to harvest some events.
+		 * If timed out and still on the wait queue, recheck eavail
+		 * carefully under lock, below.
+		 */
 		eavail = 1;
-
 	} while (0);
 
 	__set_current_state(TASK_RUNNING);
 
 	if (!list_empty_careful(&wait.entry)) {
 		write_lock_irq(&ep->lock);
+		/*
+		 * If the thread timed out and is not on the wait queue, it
+		 * means that the thread was woken up after its timeout expired
+		 * before it could reacquire the lock. Thus, when wait.entry is
+		 * empty, it needs to harvest events.
+		 */
+		if (timed_out)
+			eavail = list_empty(&wait.entry);
 		__remove_wait_queue(&ep->wq, &wait);
 		write_unlock_irq(&ep->lock);
 	}
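To make the Thread 1/2/3 scenario from the changelog above concrete, here is
a minimal userspace reproducer sketch.  It is illustrative only and not part
of the patch: it assumes Linux eventfd/epoll, uses the timings from the
report, and only hangs on kernels affected by the race.

/* Sketch of the reported scenario: an eventfd registered with EPOLLET,
 * one writer, one epoll_wait() with a 5 ms timeout that echoes the event,
 * and one epoll_wait() blocking forever.  Build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

static int epfd, efd;

static void *writer(void *arg)            /* "Thread 1" */
{
	uint64_t one = 1;

	usleep(4900);                     /* just below the 5 ms timeout */
	if (write(efd, &one, sizeof(one)) != sizeof(one))
		perror("write");
	return NULL;
}

static void *short_waiter(void *arg)      /* "Thread 2" */
{
	struct epoll_event ev;
	uint64_t one = 1;

	if (epoll_wait(epfd, &ev, 1, 5) > 0)      /* 5 ms timeout */
		if (write(efd, &one, sizeof(one)) != sizeof(one))
			perror("write");
	return NULL;
}

int main(void)                            /* "Thread 3" */
{
	struct epoll_event ev = { .events = EPOLLIN | EPOLLET };
	pthread_t t1, t2;

	epfd = epoll_create1(0);
	efd = eventfd(0, EFD_NONBLOCK);
	ev.data.fd = efd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);

	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, short_waiter, NULL);

	/* Negative timeout: should always be woken by thread 1 or thread 2,
	 * but can block forever on an affected kernel. */
	epoll_wait(epfd, &ev, 1, -1);
	puts("woken up");

	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}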
From patchwork Fri Dec 18 22:01:48 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983037
Date: Fri, 18 Dec 2020 14:01:48 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, edumazet@google.com, guantaol@google.com,
 khazhy@google.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 soheil@google.com, torvalds@linux-foundation.org, willemb@google.com
Subject: [patch 07/78] epoll: simplify signal handling
Message-ID: <20201218220148.LntJd0Nd3%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Soheil Hassas Yeganeh
Subject: epoll: simplify signal handling

Check signals before locking ep->lock, and immediately return -EINTR if
there is any signal pending.

This saves a few loads, stores, and branches from the hot path and
simplifies the loop structure for follow up patches.

Link: https://lkml.kernel.org/r/20201106231635.3528496-3-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh
Suggested-by: Linus Torvalds
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
Reviewed-by: Khazhismel Kumykov
Cc: Guantao Liu
Signed-off-by: Andrew Morton
---

 fs/eventpoll.c |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- a/fs/eventpoll.c~epoll-simplify-signal-handling
+++ a/fs/eventpoll.c
@@ -1733,7 +1733,7 @@ static inline struct timespec64 ep_set_m
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		   int maxevents, long timeout)
 {
-	int res = 0, eavail, timed_out = 0;
+	int res, eavail, timed_out = 0;
 	u64 slack = 0;
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
@@ -1780,6 +1780,9 @@ fetch_events:
 	ep_reset_busy_poll_napi_id(ep);
 
 	do {
+		if (signal_pending(current))
+			return -EINTR;
+
 		/*
 		 * Internally init_wait() uses autoremove_wake_function(),
 		 * thus wait entry is removed from the wait queue on each
@@ -1809,15 +1812,12 @@ fetch_events:
 		 * important.
 		 */
 		eavail = ep_events_available(ep);
-		if (!eavail) {
-			if (signal_pending(current))
-				res = -EINTR;
-			else
-				__add_wait_queue_exclusive(&ep->wq, &wait);
-		}
+		if (!eavail)
+			__add_wait_queue_exclusive(&ep->wq, &wait);
+
 		write_unlock_irq(&ep->lock);
 
-		if (!eavail && !res)
+		if (!eavail)
 			timed_out = !schedule_hrtimeout_range(to, slack,
 							      HRTIMER_MODE_ABS);
 
@@ -1853,14 +1853,14 @@ send_events:
 			 * finding more events available and fetching
 			 * repeatedly.
 			 */
-			res = -EINTR;
+			return -EINTR;
 		}
 	/*
 	 * Try to transfer events to user space. In case we get 0 events and
 	 * there's still timeout left over, we go trying again in search of
 	 * more luck.
 	 */
-	if (!res && eavail &&
+	if (eavail &&
 	    !(res = ep_send_events(ep, events, maxevents)) &&
 	    !timed_out)
 		goto fetch_events;
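One caller-facing note on the -EINTR path that this patch touches:
applications are expected to retry epoll_wait() when it is interrupted by a
signal.  An illustrative userspace helper follows; it is not part of the
patch, and for simplicity the timeout is deliberately not recomputed on
retry.

#include <errno.h>
#include <sys/epoll.h>

/* Retry epoll_wait() when it is interrupted by a signal, which is the
 * EINTR case the kernel code above returns early for. */
static int epoll_wait_retry(int epfd, struct epoll_event *events,
			    int maxevents, int timeout)
{
	int n;

	do {
		n = epoll_wait(epfd, events, maxevents, timeout);
	} while (n < 0 && errno == EINTR);

	return n;
}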
From: Soheil Hassas Yeganeh
Subject: epoll: pull fatal signal checks into ep_send_events()

To simplify the code, pull the fatal-signal check into ep_send_events().
ep_send_events() is called only from ep_poll().

Note that, previously, we always checked for fatal signals, whereas now
the check is performed only if eavail is true.  This should be fine
because the goal of that check is to quickly return from epoll_wait()
when there is a pending fatal signal.

Link: https://lkml.kernel.org/r/20201106231635.3528496-4-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh
Suggested-by: Willem de Bruijn
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
Reviewed-by: Khazhismel Kumykov
Cc: Guantao Liu
Cc: Linus Torvalds
Signed-off-by: Andrew Morton
---

 fs/eventpoll.c |   17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

--- a/fs/eventpoll.c~epoll-pull-fatal-signal-checks-into-ep_send_events
+++ a/fs/eventpoll.c
@@ -1625,6 +1625,14 @@ static int ep_send_events(struct eventpo
 	poll_table pt;
 	int res = 0;
 
+	/*
+	 * Always short-circuit for fatal signals to allow threads to make a
+	 * timely exit without the chance of finding more events available and
+	 * fetching repeatedly.
+	 */
+	if (fatal_signal_pending(current))
+		return -EINTR;
+
 	init_poll_funcptr(&pt, NULL);
 
 	mutex_lock(&ep->mtx);
@@ -1846,15 +1854,6 @@ fetch_events:
 	}
 
 send_events:
-	if (fatal_signal_pending(current)) {
-		/*
-		 * Always short-circuit for fatal signals to allow
-		 * threads to make a timely exit without the chance of
-		 * finding more events available and fetching
-		 * repeatedly.
-		 */
-		return -EINTR;
-	}
 	/*
 	 * Try to transfer events to user space. In case we get 0 events and
 	 * there's still timeout left over, we go trying again in search of

From patchwork Fri Dec 18 22:01:54 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983041
Date: Fri, 18 Dec 2020 14:01:54 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, edumazet@google.com, guantaol@google.com,
 khazhy@google.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 soheil@google.com, torvalds@linux-foundation.org, willemb@google.com
Subject: [patch 09/78] epoll: move eavail next to the list_empty_careful check
Message-ID: <20201218220154.x7I97m8oa%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Soheil Hassas Yeganeh
Subject: epoll: move eavail next to the list_empty_careful check

This is a no-op change, made simply to keep the code more coherent.

Link: https://lkml.kernel.org/r/20201106231635.3528496-5-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh
Suggested-by: Linus Torvalds
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
Reviewed-by: Khazhismel Kumykov
Cc: Guantao Liu
Signed-off-by: Andrew Morton
---

 fs/eventpoll.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/fs/eventpoll.c~epoll-move-eavail-next-to-the-list_empty_careful-check
+++ a/fs/eventpoll.c
@@ -1828,6 +1828,7 @@ fetch_events:
 		if (!eavail)
 			timed_out = !schedule_hrtimeout_range(to, slack,
 							      HRTIMER_MODE_ABS);
+		__set_current_state(TASK_RUNNING);
 
 		/*
 		 * We were woken up, thus go and try to harvest some events.
@@ -1837,8 +1838,6 @@ fetch_events:
 		eavail = 1;
 	} while (0);
 
-	__set_current_state(TASK_RUNNING);
-
 	if (!list_empty_careful(&wait.entry)) {
 		write_lock_irq(&ep->lock);
 		/*
owner-majordomo@kvack.org List-ID: From: Soheil Hassas Yeganeh Subject: epoll: simplify and optimize busy loop logic ep_events_available() is called multiple times around the busy loop logic, even though the logic is generally not used. ep_reset_busy_poll_napi_id() is similarly always called, even when busy loop is not used. Eliminate ep_reset_busy_poll_napi_id() and inline it inside ep_busy_loop(). Make ep_busy_loop() return whether there are any events available after the busy loop. This will eliminate unnecessary loads and branches, and simplifies the loop. Link: https://lkml.kernel.org/r/20201106231635.3528496-6-soheil.kdev@gmail.com Signed-off-by: Soheil Hassas Yeganeh Reviewed-by: Eric Dumazet Reviewed-by: Willem de Bruijn Reviewed-by: Khazhismel Kumykov Cc: Guantao Liu Cc: Linus Torvalds Signed-off-by: Andrew Morton --- fs/eventpoll.c | 40 +++++++++++++++++----------------------- 1 file changed, 17 insertions(+), 23 deletions(-) --- a/fs/eventpoll.c~epoll-simplify-and-optimize-busy-loop-logic +++ a/fs/eventpoll.c @@ -389,19 +389,24 @@ static bool ep_busy_loop_end(void *p, un * * we must do our busy polling with irqs enabled */ -static void ep_busy_loop(struct eventpoll *ep, int nonblock) +static bool ep_busy_loop(struct eventpoll *ep, int nonblock) { unsigned int napi_id = READ_ONCE(ep->napi_id); - if ((napi_id >= MIN_NAPI_ID) && net_busy_loop_on()) + if ((napi_id >= MIN_NAPI_ID) && net_busy_loop_on()) { napi_busy_loop(napi_id, nonblock ? NULL : ep_busy_loop_end, ep, false, BUSY_POLL_BUDGET); -} - -static inline void ep_reset_busy_poll_napi_id(struct eventpoll *ep) -{ - if (ep->napi_id) + if (ep_events_available(ep)) + return true; + /* + * Busy poll timed out. Drop NAPI ID for now, we can add + * it back in when we have moved a socket with a valid NAPI + * ID onto the ready list. + */ ep->napi_id = 0; + return false; + } + return false; } /* @@ -441,12 +446,9 @@ static inline void ep_set_busy_poll_napi #else -static inline void ep_busy_loop(struct eventpoll *ep, int nonblock) -{ -} - -static inline void ep_reset_busy_poll_napi_id(struct eventpoll *ep) +static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock) { + return false; } static inline void ep_set_busy_poll_napi_id(struct epitem *epi) @@ -1772,21 +1774,13 @@ static int ep_poll(struct eventpoll *ep, } fetch_events: - - if (!ep_events_available(ep)) - ep_busy_loop(ep, timed_out); - eavail = ep_events_available(ep); + if (!eavail) + eavail = ep_busy_loop(ep, timed_out); + if (eavail) goto send_events; - /* - * Busy poll timed out. Drop NAPI ID for now, we can add - * it back in when we have moved a socket with a valid NAPI - * ID onto the ready list. 
- */ - ep_reset_busy_poll_napi_id(ep); - do { if (signal_pending(current)) return -EINTR; From patchwork Fri Dec 18 22:02:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEAEFC4361B for ; Fri, 18 Dec 2020 22:02:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9A43823BA7 for ; Fri, 18 Dec 2020 22:02:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9A43823BA7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2EF726B0075; Fri, 18 Dec 2020 17:02:03 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 29EB96B0078; Fri, 18 Dec 2020 17:02:03 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1DEAF6B007B; Fri, 18 Dec 2020 17:02:03 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0102.hostedemail.com [216.40.44.102]) by kanga.kvack.org (Postfix) with ESMTP id 0C7166B0075 for ; Fri, 18 Dec 2020 17:02:03 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B8B6052AD for ; Fri, 18 Dec 2020 22:02:02 +0000 (UTC) X-FDA: 77607776484.08.price86_2a092ec27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin08.hostedemail.com (Postfix) with ESMTP id 9CD6B1819E62B for ; Fri, 18 Dec 2020 22:02:02 +0000 (UTC) X-HE-Tag: price86_2a092ec27440 X-Filterd-Recvd-Size: 3575 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:02 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328921; bh=fAnk2UmSqMrq+GnQnRGNlskacHwWWjkcjsuD/OdyKZs=; h=From:To:Subject:In-Reply-To:From; b=vIrez+v0hCXNfWsSnXQwoDHbtMoFROZH4hleKEWRHsGNJ0qUQkSOKWOTU8yRVAjeZ AvoTu7v/jlZe9RZSXFsujMONRzWRXDJ2M/NPUi0QrWUlWNSkmhFHcNoO8v1w61BiNI b9Gjbgar4HJc3F9MSoe4OHkbpT2Kd6cMvrlsSD28= From: Andrew Morton To: akpm@linux-foundation.org, edumazet@google.com, guantaol@google.com, khazhy@google.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, soheil@google.com, torvalds@linux-foundation.org, willemb@google.com Subject: [patch 11/78] epoll: pull all code between fetch_events and send_event into the loop Message-ID: <20201218220200.vfpDV7gWz%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Soheil Hassas 
Yeganeh Subject: epoll: pull all code between fetch_events and send_event into the loop This is a no-op change which simplifies the follow up patches. Link: https://lkml.kernel.org/r/20201106231635.3528496-7-soheil.kdev@gmail.com Signed-off-by: Soheil Hassas Yeganeh Suggested-by: Linus Torvalds Reviewed-by: Eric Dumazet Reviewed-by: Willem de Bruijn Reviewed-by: Khazhismel Kumykov Cc: Guantao Liu Signed-off-by: Andrew Morton --- fs/eventpoll.c | 41 +++++++++++++++++++++-------------------- 1 file changed, 21 insertions(+), 20 deletions(-) --- a/fs/eventpoll.c~epoll-pull-all-code-between-fetch_events-and-send_event-into-the-loop +++ a/fs/eventpoll.c @@ -1774,14 +1774,14 @@ static int ep_poll(struct eventpoll *ep, } fetch_events: - eavail = ep_events_available(ep); - if (!eavail) - eavail = ep_busy_loop(ep, timed_out); + do { + eavail = ep_events_available(ep); + if (!eavail) + eavail = ep_busy_loop(ep, timed_out); - if (eavail) - goto send_events; + if (eavail) + goto send_events; - do { if (signal_pending(current)) return -EINTR; @@ -1830,21 +1830,22 @@ fetch_events: * carefully under lock, below. */ eavail = 1; - } while (0); - if (!list_empty_careful(&wait.entry)) { - write_lock_irq(&ep->lock); - /* - * If the thread timed out and is not on the wait queue, it - * means that the thread was woken up after its timeout expired - * before it could reacquire the lock. Thus, when wait.entry is - * empty, it needs to harvest events. - */ - if (timed_out) - eavail = list_empty(&wait.entry); - __remove_wait_queue(&ep->wq, &wait); - write_unlock_irq(&ep->lock); - } + if (!list_empty_careful(&wait.entry)) { + write_lock_irq(&ep->lock); + /* + * If the thread timed out and is not on the wait queue, + * it means that the thread was woken up after its + * timeout expired before it could reacquire the lock. + * Thus, when wait.entry is empty, it needs to harvest + * events. 
+ */ + if (timed_out) + eavail = list_empty(&wait.entry); + __remove_wait_queue(&ep->wq, &wait); + write_unlock_irq(&ep->lock); + } + } while (0); send_events: /* From patchwork Fri Dec 18 22:02:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED238C4361B for ; Fri, 18 Dec 2020 22:02:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8E17B23BAC for ; Fri, 18 Dec 2020 22:02:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8E17B23BAC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2A2686B0078; Fri, 18 Dec 2020 17:02:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 251056B007B; Fri, 18 Dec 2020 17:02:06 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 18F8B6B007D; Fri, 18 Dec 2020 17:02:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0250.hostedemail.com [216.40.44.250]) by kanga.kvack.org (Postfix) with ESMTP id 071246B0078 for ; Fri, 18 Dec 2020 17:02:06 -0500 (EST) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id C8D1752AD for ; Fri, 18 Dec 2020 22:02:05 +0000 (UTC) X-FDA: 77607776610.11.slave68_2e1639627440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin11.hostedemail.com (Postfix) with ESMTP id 958A6181C43DC for ; Fri, 18 Dec 2020 22:02:05 +0000 (UTC) X-HE-Tag: slave68_2e1639627440 X-Filterd-Recvd-Size: 3760 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf37.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:05 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:03 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328924; bh=1wsdQJcMxaNSD6uONJ2u9GA0UyVe8mkcshgkBarFMVE=; h=From:To:Subject:In-Reply-To:From; b=daFverbZ1W7xmDt6cjN5oNNd7axr9utjoDEd91iON7Y0Z1IB0Y7EBtmR0taD8YuJg GXb4O9FO6bNub5PhkfhUbV4oO9a9Od24xtfIQx/ur1dkx43GjckkDkYeLRyvCJjqAb fBbqgmNSreTKn8pi2diQxo98YgBmnaCU0Bi22Bj0= From: Andrew Morton To: akpm@linux-foundation.org, edumazet@google.com, guantaol@google.com, khazhy@google.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, soheil@google.com, torvalds@linux-foundation.org, willemb@google.com Subject: [patch 12/78] epoll: replace gotos with a proper loop Message-ID: <20201218220203.uBS-rg8Is%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
owner-majordomo@kvack.org List-ID: From: Soheil Hassas Yeganeh Subject: epoll: replace gotos with a proper loop The existing loop is pointless, and the labels make it really hard to follow the structure. Replace that control structure with a simple loop that returns when there are new events, there is a signal, or the thread has timed out. Link: https://lkml.kernel.org/r/20201106231635.3528496-8-soheil.kdev@gmail.com Signed-off-by: Soheil Hassas Yeganeh Suggested-by: Linus Torvalds Reviewed-by: Eric Dumazet Reviewed-by: Willem de Bruijn Reviewed-by: Khazhismel Kumykov Cc: Guantao Liu Signed-off-by: Andrew Morton --- fs/eventpoll.c | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) --- a/fs/eventpoll.c~epoll-replace-gotos-with-a-proper-loop +++ a/fs/eventpoll.c @@ -1743,7 +1743,7 @@ static inline struct timespec64 ep_set_m static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events, int maxevents, long timeout) { - int res, eavail, timed_out = 0; + int res, eavail = 0, timed_out = 0; u64 slack = 0; wait_queue_entry_t wait; ktime_t expires, *to = NULL; @@ -1769,18 +1769,30 @@ static int ep_poll(struct eventpoll *ep, write_lock_irq(&ep->lock); eavail = ep_events_available(ep); write_unlock_irq(&ep->lock); - - goto send_events; } -fetch_events: - do { + while (1) { + if (eavail) { + /* + * Try to transfer events to user space. In case we get + * 0 events and there's still timeout left over, we go + * trying again in search of more luck. + */ + res = ep_send_events(ep, events, maxevents); + if (res) + return res; + } + + if (timed_out) + return 0; + eavail = ep_events_available(ep); - if (!eavail) - eavail = ep_busy_loop(ep, timed_out); + if (eavail) + continue; + eavail = ep_busy_loop(ep, timed_out); if (eavail) - goto send_events; + continue; if (signal_pending(current)) return -EINTR; @@ -1845,19 +1857,7 @@ fetch_events: __remove_wait_queue(&ep->wq, &wait); write_unlock_irq(&ep->lock); } - } while (0); - -send_events: - /* - * Try to transfer events to user space. In case we get 0 events and - * there's still timeout left over, we go trying again in search of - * more luck. 
- */ - if (eavail && - !(res = ep_send_events(ep, events, maxevents)) && !timed_out) - goto fetch_events; - - return res; + } } /** From patchwork Fri Dec 18 22:02:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983049 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 520D1C4361B for ; Fri, 18 Dec 2020 22:02:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id ED19223B7E for ; Fri, 18 Dec 2020 22:02:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ED19223B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8BB236B007B; Fri, 18 Dec 2020 17:02:09 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 843DD6B007D; Fri, 18 Dec 2020 17:02:09 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 781676B007E; Fri, 18 Dec 2020 17:02:09 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0050.hostedemail.com [216.40.44.50]) by kanga.kvack.org (Postfix) with ESMTP id 5F2196B007B for ; Fri, 18 Dec 2020 17:02:09 -0500 (EST) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 288BF8249980 for ; Fri, 18 Dec 2020 22:02:09 +0000 (UTC) X-FDA: 77607776778.27.moon40_4606bdd27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin27.hostedemail.com (Postfix) with ESMTP id 0CDED3D663 for ; Fri, 18 Dec 2020 22:02:09 +0000 (UTC) X-HE-Tag: moon40_4606bdd27440 X-Filterd-Recvd-Size: 4151 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf03.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:08 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:06 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328927; bh=/w5Chu1RojjOulIY4C553qsl8NTMpu2u/eUurC+C05c=; h=From:To:Subject:In-Reply-To:From; b=AInEbVLnqgOW6UQx+Wz5148vG500CI8PvZFKpkGe5J7G/W72CLUrVgjqsxy6zVDCn Hr7em/D7udu/bqIENwfgpkWKHXeOTXh5E6PsZn5kg2Y3UN8dbQ39sWLC2VX0mTA93l HpP8zETon8DcS8kkUrx1Kk5272NS3m9WtBrjMECE= From: Andrew Morton To: akpm@linux-foundation.org, edumazet@google.com, guantaol@google.com, khazhy@google.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, soheil@google.com, torvalds@linux-foundation.org, willemb@google.com Subject: [patch 13/78] epoll: eliminate unnecessary lock for zero timeout Message-ID: <20201218220206.CTX0JqXyM%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: 
From: Soheil Hassas Yeganeh Subject: epoll: eliminate unnecessary lock for zero timeout We call ep_events_available() under lock when timeout is 0, and then call it without locks in the loop for the other cases. Instead, call ep_events_available() without lock for all cases. For non-zero timeouts, we will recheck after adding the thread to the wait queue. For zero timeout cases, by definition, user is opportunistically polling and will have to call epoll_wait again in the future. Note that this lock was kept in c5a282e9635e9 because the whole loop was historically under lock. This patch results in a 1% CPU/RPC reduction in RPC benchmarks. Link: https://lkml.kernel.org/r/20201106231635.3528496-9-soheil.kdev@gmail.com Signed-off-by: Soheil Hassas Yeganeh Suggested-by: Eric Dumazet Reviewed-by: Eric Dumazet Reviewed-by: Willem de Bruijn Reviewed-by: Khazhismel Kumykov Cc: Guantao Liu Cc: Linus Torvalds Signed-off-by: Andrew Morton --- fs/eventpoll.c | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) --- a/fs/eventpoll.c~epoll-eliminate-unnecessary-lock-for-zero-timeout +++ a/fs/eventpoll.c @@ -1743,7 +1743,7 @@ static inline struct timespec64 ep_set_m static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events, int maxevents, long timeout) { - int res, eavail = 0, timed_out = 0; + int res, eavail, timed_out = 0; u64 slack = 0; wait_queue_entry_t wait; ktime_t expires, *to = NULL; @@ -1759,18 +1759,21 @@ static int ep_poll(struct eventpoll *ep, } else if (timeout == 0) { /* * Avoid the unnecessary trip to the wait queue loop, if the - * caller specified a non blocking operation. We still need - * lock because we could race and not see an epi being added - * to the ready list while in irq callback. Thus incorrectly - * returning 0 back to userspace. + * caller specified a non blocking operation. */ timed_out = 1; - - write_lock_irq(&ep->lock); - eavail = ep_events_available(ep); - write_unlock_irq(&ep->lock); } + /* + * This call is racy: We may or may not see events that are being added + * to the ready list under the lock (e.g., in IRQ callbacks). For, cases + * with a non-zero timeout, this thread will check the ready list under + * lock and will added to the wait queue. For, cases with a zero + * timeout, the user by definition should not care and will have to + * recheck again. 
+ */ + eavail = ep_events_available(ep); + while (1) { if (eavail) { /* @@ -1786,10 +1789,6 @@ static int ep_poll(struct eventpoll *ep, if (timed_out) return 0; - eavail = ep_events_available(ep); - if (eavail) - continue; - eavail = ep_busy_loop(ep, timed_out); if (eavail) continue; From patchwork Fri Dec 18 22:02:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983051 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-20.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBE53C4361B for ; Fri, 18 Dec 2020 22:02:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5F6B323BAC for ; Fri, 18 Dec 2020 22:02:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5F6B323BAC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id F23E66B007D; Fri, 18 Dec 2020 17:02:12 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id ED49D6B007E; Fri, 18 Dec 2020 17:02:12 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DED236B0080; Fri, 18 Dec 2020 17:02:12 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0212.hostedemail.com [216.40.44.212]) by kanga.kvack.org (Postfix) with ESMTP id C6D7D6B007D for ; Fri, 18 Dec 2020 17:02:12 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 97F442497 for ; Fri, 18 Dec 2020 22:02:12 +0000 (UTC) X-FDA: 77607776904.04.wish30_3705c8827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 786AA801A3B6 for ; Fri, 18 Dec 2020 22:02:12 +0000 (UTC) X-HE-Tag: wish30_3705c8827440 X-Filterd-Recvd-Size: 11362 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:11 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:10 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328931; bh=AexJ2vMYIopIJOlUAiEGDR3hvR4BxmRpff6tzMYmp+c=; h=From:To:Subject:In-Reply-To:From; b=JUA+tpJ66NOP3BB9s/C96gQ8V99yRNuZu532uDQe4rQOf6AGeQ/lYAPprNqJtRUwL Y0wlURho2rrtugThzI032gJJIj9C13St+6t9iaB+NcOREmF3zHUyn2j/t4OjPjjcRG ZhyMBRG36bEDk4lQm9ZX5YwdIlchy3WSIkz8XA1M= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 14/78] kasan: drop unnecessary GPL text from 
comment headers Message-ID: <20201218220210.Dw1XK0QbY%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: drop unnecessary GPL text from comment headers

Patch series "kasan: add hardware tag-based mode for arm64", v11.

This patchset adds a new hardware tag-based mode to KASAN [1]. The new mode is similar to the existing software tag-based KASAN, but relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory and pointer tagging (instead of shadow memory and compiler instrumentation). This patchset is co-developed and tested by Vincenzo Frascino.

This patchset is available here: https://github.com/xairy/linux/tree/up-kasan-mte-v11

For testing in QEMU, hardware tag-based KASAN requires:
1. QEMU built from master [4] (use "-machine virt,mte=on -cpu max" arguments to run).
2. GCC version 10.

[1] https://www.kernel.org/doc/html/latest/dev-tools/kasan.html
[2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
[3] git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
[4] https://github.com/qemu/qemu

====== Overview

The underlying ideas of the approach used by hardware tag-based KASAN are:
1. By relying on the Top Byte Ignore (TBI) arm64 CPU feature, pointer tags are stored in the top byte of each kernel pointer.
2. With the Memory Tagging Extension (MTE) arm64 CPU feature, memory tags for kernel memory allocations are stored in dedicated memory not accessible via normal instructions.
3. On each memory allocation, a random tag is generated, embedded into the returned pointer, and the corresponding memory is tagged with the same tag value.
4. With MTE the CPU performs a check on each memory access to make sure that the pointer tag matches the memory tag.
5. On a tag mismatch the CPU generates a tag fault, and a KASAN report is printed.

As with other KASAN modes, hardware tag-based KASAN is intended as a debugging feature at this point.

====== Rationale

There are two main reasons for this new hardware tag-based mode:
1. Previously implemented software tag-based KASAN is being successfully used on dogfood testing devices due to its low memory overhead (as initially planned). The new hardware mode keeps the same low memory overhead, and is expected to have significantly lower performance impact, due to the tag checks being performed by the hardware. Therefore the new mode can be used as a better alternative in dogfood testing for hardware that supports MTE.
2. The new mode lays the groundwork for the planned in-kernel MTE-based memory corruption mitigation to be used in production.

====== Technical details

From an implementation perspective, hardware tag-based KASAN is almost identical to the software mode. The key difference is using MTE for assigning and checking tags.

Compared to the software mode, the hardware mode uses 4 bits per tag, as dictated by MTE. Pointer tags are stored in bits [56:60); the top 4 bits have the normal value 0xF. Having fewer distinct tags increases the probability of false negatives (from ~1/256 to ~1/16) in certain cases.

Only synchronous exceptions are set up and used by hardware tag-based KASAN.
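
To make the scheme above concrete, here is a toy user-space C model of the tagging idea. It is an illustration only, not kernel code: tag_store, GRANULE, NGRANULES, tagged_alloc and check_access are invented for this sketch, and on real MTE hardware the memory tags live in dedicated tag storage while the tag comparison is performed by the CPU on every load and store, raising a synchronous tag fault that KASAN turns into a report.

/*
 * Toy model of the tag-based idea described above.  All names here are
 * invented for this sketch; with MTE the memory tags live in dedicated
 * tag storage and the check is performed by the CPU on every access.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT       56                      /* pointer tag in bits [56:60) */
#define TAG_MASK        0xfULL
#define GRANULE         16                      /* MTE tags memory in 16-byte granules */
#define NGRANULES       (1UL << 20)

static uint8_t tag_store[NGRANULES];            /* one 4-bit tag per granule (toy model) */

static inline uintptr_t set_tag(uintptr_t addr, uint8_t tag)
{
        return (addr & ~(uintptr_t)(TAG_MASK << TAG_SHIFT)) |
               ((uintptr_t)tag << TAG_SHIFT);
}

static inline uint8_t ptr_tag(uintptr_t addr)
{
        return (addr >> TAG_SHIFT) & TAG_MASK;
}

static inline size_t granule(uintptr_t addr)
{
        /* Strip the tag byte, then index the per-granule tag table. */
        return (size_t)(((addr & ((1ULL << TAG_SHIFT) - 1)) / GRANULE) % NGRANULES);
}

static void *tagged_alloc(size_t size)
{
        size = (size + GRANULE - 1) & ~(size_t)(GRANULE - 1);
        uintptr_t addr = (uintptr_t)aligned_alloc(GRANULE, size);  /* error handling omitted */
        uint8_t tag = rand() & TAG_MASK;        /* random tag per allocation */

        for (size_t i = 0; i < size; i += GRANULE)
                tag_store[granule(addr + i)] = tag;     /* tag the backing memory */
        return (void *)set_tag(addr, tag);      /* embed the same tag in the pointer */
}

static void check_access(const void *ptr)
{
        uintptr_t addr = (uintptr_t)ptr;

        /* On MTE this comparison is done by the CPU; a mismatch is a tag fault. */
        if (ptr_tag(addr) != tag_store[granule(addr)])
                fprintf(stderr, "tag mismatch at %p\n", ptr);
}

int main(void)
{
        char *p = tagged_alloc(32);

        check_access(p);                /* tags match: silent */
        check_access(p + 64);           /* outside the allocation: likely mismatch */
        free((void *)((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1)));  /* strip tag before free */
        return 0;
}

In this model, dereferencing a pointer whose embedded tag no longer matches the per-granule memory tag, as happens after an out-of-bounds or use-after-free style mistake, corresponds to the tag fault and KASAN report in step 5 above.
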
====== Benchmarks Note: all measurements have been performed with software emulation of Memory Tagging Extension, performance numbers for hardware tag-based KASAN on the actual hardware are expected to be better. Boot time [1]: * 2.8 sec for clean kernel * 5.7 sec for hardware tag-based KASAN * 11.8 sec for software tag-based KASAN * 11.6 sec for generic KASAN Slab memory usage after boot [2]: * 7.0 kb for clean kernel * 9.7 kb for hardware tag-based KASAN * 9.7 kb for software tag-based KASAN * 41.3 kb for generic KASAN Measurements have been performed with: * defconfig-based configs * Manually built QEMU master * QEMU arguments: -machine virt,mte=on -cpu max * CONFIG_KASAN_STACK_ENABLE disabled * CONFIG_KASAN_INLINE enabled * clang-10 as the compiler and gcc-10 as the assembler [1] Time before the ext4 driver is initialized. [2] Measured as `cat /proc/meminfo | grep Slab`. ====== Notes The cover letter for software tag-based KASAN patchset can be found here: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0116523cfffa62aeb5aa3b85ce7419f3dae0c1b8 ===== Tags Tested-by: Vincenzo Frascino This patch (of 41): Don't mention "GNU General Public License version 2" text explicitly, as it's already covered by the SPDX-License-Identifier. Link: https://lkml.kernel.org/r/cover.1606161801.git.andreyknvl@google.com Link: https://lkml.kernel.org/r/6ea9f5f4aa9dbbffa0d0c0a780b37699a4531034.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Catalin Marinas Cc: Will Deacon Cc: Dmitry Vyukov Cc: Andrey Ryabinin Cc: Evgenii Stepanov Cc: Branislav Rankov Cc: Kevin Brodsky Cc: Vasily Gorbik Signed-off-by: Andrew Morton --- mm/kasan/common.c | 5 ----- mm/kasan/generic.c | 5 ----- mm/kasan/generic_report.c | 5 ----- mm/kasan/init.c | 5 ----- mm/kasan/quarantine.c | 10 ---------- mm/kasan/report.c | 5 ----- mm/kasan/tags.c | 5 ----- mm/kasan/tags_report.c | 5 ----- 8 files changed, 45 deletions(-) --- a/mm/kasan/common.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/common.c @@ -7,11 +7,6 @@ * * Some code borrowed from https://github.com/xairy/kasan-prototype by * Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #include --- a/mm/kasan/generic.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/generic.c @@ -7,11 +7,6 @@ * * Some code borrowed from https://github.com/xairy/kasan-prototype by * Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt --- a/mm/kasan/generic_report.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/generic_report.c @@ -7,11 +7,6 @@ * * Some code borrowed from https://github.com/xairy/kasan-prototype by * Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * */ #include --- a/mm/kasan/init.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/init.c @@ -4,11 +4,6 @@ * * Copyright (c) 2015 Samsung Electronics Co., Ltd. * Author: Andrey Ryabinin - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #include --- a/mm/kasan/quarantine.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/quarantine.c @@ -6,16 +6,6 @@ * Copyright (C) 2016 Google, Inc. * * Based on code by Dmitry Chernenkov. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * version 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * */ #include --- a/mm/kasan/report.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/report.c @@ -7,11 +7,6 @@ * * Some code borrowed from https://github.com/xairy/kasan-prototype by * Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #include --- a/mm/kasan/tags.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/tags.c @@ -4,11 +4,6 @@ * * Copyright (c) 2018 Google, Inc. * Author: Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt --- a/mm/kasan/tags_report.c~kasan-drop-unnecessary-gpl-text-from-comment-headers +++ a/mm/kasan/tags_report.c @@ -7,11 +7,6 @@ * * Some code borrowed from https://github.com/xairy/kasan-prototype by * Andrey Konovalov - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * */ #include From patchwork Fri Dec 18 22:02:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983053 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01D6DC4361B for ; Fri, 18 Dec 2020 22:02:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id ACBCA23BAC for ; Fri, 18 Dec 2020 22:02:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ACBCA23BAC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 510776B007E; Fri, 18 Dec 2020 17:02:16 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 475086B0080; Fri, 18 Dec 2020 17:02:16 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 33BC06B0081; Fri, 18 Dec 2020 17:02:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0070.hostedemail.com [216.40.44.70]) by kanga.kvack.org (Postfix) with ESMTP id 1CFB46B007E for ; Fri, 18 Dec 2020 17:02:16 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id DA36C181B9DA9 for ; Fri, 18 Dec 2020 22:02:15 +0000 (UTC) X-FDA: 77607777030.08.spark38_4c12a6227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin08.hostedemail.com (Postfix) with ESMTP id B89DC1819E62B for ; Fri, 18 Dec 2020 22:02:15 +0000 (UTC) X-HE-Tag: spark38_4c12a6227440 X-Filterd-Recvd-Size: 2828 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf37.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:15 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:13 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328934; bh=iEJgPSTIOZrOrqUbae8/LF4D4oibSp4dMwOwMLPsWWM=; h=From:To:Subject:In-Reply-To:From; b=0+4MQ3e2UZGsD4o5p+EcZLAcyANhtqfl1dstOm9XYNMKMPfuszlY7YQzRGQLBCwVJ 2pj1h+17WZbcNV8FacCCuH+UKlgZWORAe1y8PFGIvw2b7MP9gGU9viADKfH975maD9 p5yHm2J8odJcJsT5u0nTWYNdWLmkABOgXRkQ6U34= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 15/78] kasan: KASAN_VMALLOC depends on KASAN_GENERIC Message-ID: <20201218220213.fFmTw-LzM%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: 
bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: KASAN_VMALLOC depends on KASAN_GENERIC Currently only generic KASAN mode supports vmalloc, reflect that in the config. Link: https://lkml.kernel.org/r/0c493d3a065ad95b04313d00244e884a7e2498ff.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- lib/Kconfig.kasan | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/lib/Kconfig.kasan~kasan-kasan_vmalloc-depends-on-kasan_generic +++ a/lib/Kconfig.kasan @@ -146,7 +146,7 @@ config KASAN_SW_TAGS_IDENTIFY config KASAN_VMALLOC bool "Back mappings in vmalloc space with real shadow memory" - depends on HAVE_ARCH_KASAN_VMALLOC + depends on KASAN_GENERIC && HAVE_ARCH_KASAN_VMALLOC help By default, the shadow region for vmalloc space is the read-only zero page. This means that KASAN cannot detect errors involving From patchwork Fri Dec 18 22:02:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983055 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4089CC4361B for ; Fri, 18 Dec 2020 22:02:20 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DE7B023B54 for ; Fri, 18 Dec 2020 22:02:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DE7B023B54 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 86E366B0080; Fri, 18 Dec 2020 17:02:19 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7D00B6B0081; Fri, 18 Dec 2020 17:02:19 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 670A06B0082; Fri, 18 Dec 2020 17:02:19 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0132.hostedemail.com [216.40.44.132]) by kanga.kvack.org (Postfix) with ESMTP id 4E9CC6B0080 for ; Fri, 18 Dec 2020 17:02:19 -0500 (EST) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 195CC1803B592 for ; Fri, 18 Dec 2020 22:02:19 +0000 (UTC) X-FDA: 77607777198.19.lake89_4f1426c27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with ESMTP id EC1371ACEA2 for ; Fri, 18 Dec 2020 22:02:18 +0000 (UTC) X-HE-Tag: lake89_4f1426c27440 X-Filterd-Recvd-Size: 7854 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:18 
+0000 (UTC) Date: Fri, 18 Dec 2020 14:02:16 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328937; bh=xVl2M5glN7HCbaxCh+UhQW7KpJYvRw/xwxfZzL4AlqA=; h=From:To:Subject:In-Reply-To:From; b=r4NhIUBMRXdvws5veRhgHuohdJY5bdwG842t79BCEdBIxR6hENoxHnliNLcykJrdo Ys+W4H1Nm8oy556Fjt4HhywJVNbh1p/z3UB3xjvKdOjLJVPNWNc1DE7LibHoM8+s03 D2YmThInnj3qR1JoZHeSkWWJ3mRJFXpsTV3OK+rM= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 16/78] kasan: group vmalloc code Message-ID: <20201218220216.YxdPT5Dtv%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: group vmalloc code This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Group all vmalloc-related function declarations in include/linux/kasan.h, and their implementations in mm/kasan/common.c. No functional changes. Link: https://lkml.kernel.org/r/80a6fdd29b039962843bd6cf22ce2643a7c8904e.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 41 +++++++++++---------- mm/kasan/common.c | 78 ++++++++++++++++++++-------------------- 2 files changed, 63 insertions(+), 56 deletions(-) --- a/include/linux/kasan.h~kasan-group-vmalloc-code +++ a/include/linux/kasan.h @@ -75,19 +75,6 @@ struct kasan_cache { int free_meta_offset; }; -/* - * These functions provide a special case to support backing module - * allocations with real shadow memory. With KASAN vmalloc, the special - * case is unnecessary, as the work is handled in the generic case. 
- */ -#ifndef CONFIG_KASAN_VMALLOC -int kasan_module_alloc(void *addr, size_t size); -void kasan_free_shadow(const struct vm_struct *vm); -#else -static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } -static inline void kasan_free_shadow(const struct vm_struct *vm) {} -#endif - int kasan_add_zero_shadow(void *start, unsigned long size); void kasan_remove_zero_shadow(void *start, unsigned long size); @@ -156,9 +143,6 @@ static inline bool kasan_slab_free(struc return false; } -static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } -static inline void kasan_free_shadow(const struct vm_struct *vm) {} - static inline int kasan_add_zero_shadow(void *start, unsigned long size) { return 0; @@ -211,13 +195,16 @@ static inline void *kasan_reset_tag(cons #endif /* CONFIG_KASAN_SW_TAGS */ #ifdef CONFIG_KASAN_VMALLOC + int kasan_populate_vmalloc(unsigned long addr, unsigned long size); void kasan_poison_vmalloc(const void *start, unsigned long size); void kasan_unpoison_vmalloc(const void *start, unsigned long size); void kasan_release_vmalloc(unsigned long start, unsigned long end, unsigned long free_region_start, unsigned long free_region_end); -#else + +#else /* CONFIG_KASAN_VMALLOC */ + static inline int kasan_populate_vmalloc(unsigned long start, unsigned long size) { @@ -232,7 +219,25 @@ static inline void kasan_release_vmalloc unsigned long end, unsigned long free_region_start, unsigned long free_region_end) {} -#endif + +#endif /* CONFIG_KASAN_VMALLOC */ + +#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) + +/* + * These functions provide a special case to support backing module + * allocations with real shadow memory. With KASAN vmalloc, the special + * case is unnecessary, as the work is handled in the generic case. + */ +int kasan_module_alloc(void *addr, size_t size); +void kasan_free_shadow(const struct vm_struct *vm); + +#else /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ + +static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } +static inline void kasan_free_shadow(const struct vm_struct *vm) {} + +#endif /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ #ifdef CONFIG_KASAN_INLINE void kasan_non_canonical_hook(unsigned long addr); --- a/mm/kasan/common.c~kasan-group-vmalloc-code +++ a/mm/kasan/common.c @@ -536,44 +536,6 @@ void kasan_kfree_large(void *ptr, unsign /* The object will be poisoned by page_alloc. 
*/ } -#ifndef CONFIG_KASAN_VMALLOC -int kasan_module_alloc(void *addr, size_t size) -{ - void *ret; - size_t scaled_size; - size_t shadow_size; - unsigned long shadow_start; - - shadow_start = (unsigned long)kasan_mem_to_shadow(addr); - scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; - shadow_size = round_up(scaled_size, PAGE_SIZE); - - if (WARN_ON(!PAGE_ALIGNED(shadow_start))) - return -EINVAL; - - ret = __vmalloc_node_range(shadow_size, 1, shadow_start, - shadow_start + shadow_size, - GFP_KERNEL, - PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, - __builtin_return_address(0)); - - if (ret) { - __memset(ret, KASAN_SHADOW_INIT, shadow_size); - find_vm_area(addr)->flags |= VM_KASAN; - kmemleak_ignore(ret); - return 0; - } - - return -ENOMEM; -} - -void kasan_free_shadow(const struct vm_struct *vm) -{ - if (vm->flags & VM_KASAN) - vfree(kasan_mem_to_shadow(vm->addr)); -} -#endif - #ifdef CONFIG_MEMORY_HOTPLUG static bool shadow_mapped(unsigned long addr) { @@ -685,6 +647,7 @@ core_initcall(kasan_memhotplug_init); #endif #ifdef CONFIG_KASAN_VMALLOC + static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, void *unused) { @@ -923,4 +886,43 @@ void kasan_release_vmalloc(unsigned long (unsigned long)shadow_end); } } + +#else /* CONFIG_KASAN_VMALLOC */ + +int kasan_module_alloc(void *addr, size_t size) +{ + void *ret; + size_t scaled_size; + size_t shadow_size; + unsigned long shadow_start; + + shadow_start = (unsigned long)kasan_mem_to_shadow(addr); + scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; + shadow_size = round_up(scaled_size, PAGE_SIZE); + + if (WARN_ON(!PAGE_ALIGNED(shadow_start))) + return -EINVAL; + + ret = __vmalloc_node_range(shadow_size, 1, shadow_start, + shadow_start + shadow_size, + GFP_KERNEL, + PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, + __builtin_return_address(0)); + + if (ret) { + __memset(ret, KASAN_SHADOW_INIT, shadow_size); + find_vm_area(addr)->flags |= VM_KASAN; + kmemleak_ignore(ret); + return 0; + } + + return -ENOMEM; +} + +void kasan_free_shadow(const struct vm_struct *vm) +{ + if (vm->flags & VM_KASAN) + vfree(kasan_mem_to_shadow(vm->addr)); +} + #endif From patchwork Fri Dec 18 22:02:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983057 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C43A8C4361B for ; Fri, 18 Dec 2020 22:02:23 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5AAD623B87 for ; Fri, 18 Dec 2020 22:02:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5AAD623B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id EAEF96B0081; Fri, 18 Dec 2020 17:02:22 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E11FA6B0082; Fri, 18 Dec 2020 17:02:22 -0500 (EST) 
X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D00816B0083; Fri, 18 Dec 2020 17:02:22 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0038.hostedemail.com [216.40.44.38]) by kanga.kvack.org (Postfix) with ESMTP id B76B66B0081 for ; Fri, 18 Dec 2020 17:02:22 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 7CB1D181AEF3E for ; Fri, 18 Dec 2020 22:02:22 +0000 (UTC) X-FDA: 77607777324.09.stove57_500fd2627440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 6433E181C2203 for ; Fri, 18 Dec 2020 22:02:22 +0000 (UTC) X-HE-Tag: stove57_500fd2627440 X-Filterd-Recvd-Size: 5644 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:21 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:20 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328941; bh=Tbjr1ls2W/E9ex/nUEg5tpPEJIGUedvscg041utElv8=; h=From:To:Subject:In-Reply-To:From; b=aSy4tZvNQqeyuBLmmVMP8NXdLn/B4VfGVqh3toLx6XuYkujJwW7rwGZfrJdbaEbtn qUo3B+QjOtRTvPTqUw3p11591U346RBd/eN30iIMJp36fEeSMh7sl8WIw8Z0jTZiKF oLMaSrob8BTKou8qsVff0rAgyAd0VjvrXuLcz81g= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, valdis.kletnieks@vt.edu, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 17/78] kasan: shadow declarations only for software modes Message-ID: <20201218220220.F14pMRefZ%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: shadow declarations only for software modes This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Group shadow-related KASAN function declarations and only define them for the two existing software modes. No functional changes for software modes. 
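
For reference, what the two software modes share is a linear address-to-shadow translation plus a mode-specific value that freshly mapped shadow is initialised with, which is what the KASAN_SHADOW_INIT definitions consolidated here express. A compressed sketch follows; the scale shift and offset below are illustrative stand-ins for the architecture-specific values, and the hardware tag-based mode introduced later in this series uses none of this:

/*
 * Sketch only: KASAN_SHADOW_SCALE_SHIFT and KASAN_SHADOW_OFFSET are example
 * values here; the real ones come from the architecture headers.
 */
#define KASAN_SHADOW_SCALE_SHIFT        3       /* one shadow byte covers 8 bytes */
#define KASAN_SHADOW_OFFSET             0xdffffc0000000000UL    /* example only */

#ifdef CONFIG_KASAN_SW_TAGS
#define KASAN_SHADOW_INIT       0xFF    /* shadow bytes hold memory tags */
#else
#define KASAN_SHADOW_INIT       0       /* generic mode: 0 means fully accessible */
#endif

static inline void *kasan_mem_to_shadow(const void *addr)
{
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                + KASAN_SHADOW_OFFSET;
}

Grouping the shadow declarations behind the software-mode config checks, as this patch does, is what lets the upcoming hardware mode include the KASAN header without pulling in any shadow-memory assumptions.
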
Link: https://lkml.kernel.org/r/e88d94eff94db883a65dca52e1736d80d28dd9bc.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Signed-off-by: Valdis Kletnieks Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon [valdis.kletnieks@vt.edu: fix build issue with asmlinkage] Link: https://lkml.kernel.org/r/35126.1606402815@turing-police Link: https://lore.kernel.org/linux-arm-kernel/24105.1606397102@turing-police/ Signed-off-by: Andrew Morton --- include/linux/kasan.h | 48 ++++++++++++++++++++++++++-------------- 1 file changed, 32 insertions(+), 16 deletions(-) --- a/include/linux/kasan.h~kasan-shadow-declarations-only-for-software-modes +++ a/include/linux/kasan.h @@ -11,7 +11,7 @@ struct task_struct; #ifdef CONFIG_KASAN -#include +#include #include /* kasan_data struct is used in KUnit tests for KASAN expected failures */ @@ -20,6 +20,20 @@ struct kunit_kasan_expectation { bool report_found; }; +#endif + +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) + +#include + +/* Software KASAN implementations use shadow memory. */ + +#ifdef CONFIG_KASAN_SW_TAGS +#define KASAN_SHADOW_INIT 0xFF +#else +#define KASAN_SHADOW_INIT 0 +#endif + extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; @@ -35,6 +49,23 @@ static inline void *kasan_mem_to_shadow( + KASAN_SHADOW_OFFSET; } +int kasan_add_zero_shadow(void *start, unsigned long size); +void kasan_remove_zero_shadow(void *start, unsigned long size); + +#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ + +static inline int kasan_add_zero_shadow(void *start, unsigned long size) +{ + return 0; +} +static inline void kasan_remove_zero_shadow(void *start, + unsigned long size) +{} + +#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ + +#ifdef CONFIG_KASAN + /* Enable reporting bugs after kasan_disable_current() */ extern void kasan_enable_current(void); @@ -75,9 +106,6 @@ struct kasan_cache { int free_meta_offset; }; -int kasan_add_zero_shadow(void *start, unsigned long size); -void kasan_remove_zero_shadow(void *start, unsigned long size); - size_t __ksize(const void *); static inline void kasan_unpoison_slab(const void *ptr) { @@ -143,14 +171,6 @@ static inline bool kasan_slab_free(struc return false; } -static inline int kasan_add_zero_shadow(void *start, unsigned long size) -{ - return 0; -} -static inline void kasan_remove_zero_shadow(void *start, - unsigned long size) -{} - static inline void kasan_unpoison_slab(const void *ptr) { } static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } @@ -158,8 +178,6 @@ static inline size_t kasan_metadata_size #ifdef CONFIG_KASAN_GENERIC -#define KASAN_SHADOW_INIT 0 - void kasan_cache_shrink(struct kmem_cache *cache); void kasan_cache_shutdown(struct kmem_cache *cache); void kasan_record_aux_stack(void *ptr); @@ -174,8 +192,6 @@ static inline void kasan_record_aux_stac #ifdef CONFIG_KASAN_SW_TAGS -#define KASAN_SHADOW_INIT 0xFF - void kasan_init_tags(void); void *kasan_reset_tag(const void *addr); From patchwork Fri Dec 18 22:02:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983059 Return-Path: X-Spam-Checker-Version: 
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 424E9C3526F for ; Fri, 18 Dec 2020 22:02:27 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DA99E23B87 for ; Fri, 18 Dec 2020 22:02:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DA99E23B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 75CD16B0082; Fri, 18 Dec 2020 17:02:26 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 70C096B0083; Fri, 18 Dec 2020 17:02:26 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5FB186B0085; Fri, 18 Dec 2020 17:02:26 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0113.hostedemail.com [216.40.44.113]) by kanga.kvack.org (Postfix) with ESMTP id 42D256B0082 for ; Fri, 18 Dec 2020 17:02:26 -0500 (EST) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 051788249980 for ; Fri, 18 Dec 2020 22:02:26 +0000 (UTC) X-FDA: 77607777492.12.crush07_300a23a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id D9547180C6201 for ; Fri, 18 Dec 2020 22:02:25 +0000 (UTC) X-HE-Tag: crush07_300a23a27440 X-Filterd-Recvd-Size: 13221 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf12.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:25 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:23 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328944; bh=w9jd/ejKZA/3A39DIjQfsxAp9CDDf0bS76vUXyt2lvg=; h=From:To:Subject:In-Reply-To:From; b=yyzvYwC7F/AGlb1Xu1HXN2dWcpqhJMPg1dbUUp6DUO1PBJsKAtNehGGwOw2+B83GS al8ZrXPYL0qcdy0hy22tepln+39jD6NaUhHG72Sbk7DBn53bN+fzI8OpNGo/gllK19 ue7hHBJLDTTM4Nte+2wfjwfMFJw4hdFMWixwy5hk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 18/78] kasan: rename (un)poison_shadow to (un)poison_range Message-ID: <20201218220223.QbwkH2neg%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename (un)poison_shadow to (un)poison_range This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. 
The new mode won't be using shadow memory. Rename external annotation kasan_unpoison_shadow() to kasan_unpoison_range(), and introduce internal functions (un)poison_range() (without kasan_ prefix). Co-developed-by: Marco Elver Link: https://lkml.kernel.org/r/fccdcaa13dc6b2211bf363d6c6d499279a54fe3a.1606161801.git.andreyknvl@google.com Signed-off-by: Marco Elver Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 6 ++-- kernel/fork.c | 4 +-- mm/kasan/common.c | 49 ++++++++++++++++++++++------------------ mm/kasan/generic.c | 23 ++++++++---------- mm/kasan/kasan.h | 3 +- mm/kasan/tags.c | 2 - mm/slab_common.c | 2 - 7 files changed, 47 insertions(+), 42 deletions(-) --- a/include/linux/kasan.h~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/include/linux/kasan.h @@ -72,7 +72,7 @@ extern void kasan_enable_current(void); /* Disable reporting bugs for current task */ extern void kasan_disable_current(void); -void kasan_unpoison_shadow(const void *address, size_t size); +void kasan_unpoison_range(const void *address, size_t size); void kasan_unpoison_task_stack(struct task_struct *task); @@ -109,7 +109,7 @@ struct kasan_cache { size_t __ksize(const void *); static inline void kasan_unpoison_slab(const void *ptr) { - kasan_unpoison_shadow(ptr, __ksize(ptr)); + kasan_unpoison_range(ptr, __ksize(ptr)); } size_t kasan_metadata_size(struct kmem_cache *cache); @@ -118,7 +118,7 @@ void kasan_restore_multi_shot(bool enabl #else /* CONFIG_KASAN */ -static inline void kasan_unpoison_shadow(const void *address, size_t size) {} +static inline void kasan_unpoison_range(const void *address, size_t size) {} static inline void kasan_unpoison_task_stack(struct task_struct *task) {} --- a/kernel/fork.c~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/kernel/fork.c @@ -225,8 +225,8 @@ static unsigned long *alloc_thread_stack if (!s) continue; - /* Clear the KASAN shadow of the stack. */ - kasan_unpoison_shadow(s->addr, THREAD_SIZE); + /* Mark stack accessible for KASAN. */ + kasan_unpoison_range(s->addr, THREAD_SIZE); /* Clear stale pointers from reused stack. */ memset(s->addr, 0, THREAD_SIZE); --- a/mm/kasan/common.c~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/mm/kasan/common.c @@ -108,7 +108,7 @@ void *memcpy(void *dest, const void *src * Poisons the shadow memory for 'size' bytes starting from 'addr'. * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE. 
*/ -void kasan_poison_shadow(const void *address, size_t size, u8 value) +void poison_range(const void *address, size_t size, u8 value) { void *shadow_start, *shadow_end; @@ -125,7 +125,7 @@ void kasan_poison_shadow(const void *add __memset(shadow_start, value, shadow_end - shadow_start); } -void kasan_unpoison_shadow(const void *address, size_t size) +void unpoison_range(const void *address, size_t size) { u8 tag = get_tag(address); @@ -136,7 +136,7 @@ void kasan_unpoison_shadow(const void *a */ address = reset_tag(address); - kasan_poison_shadow(address, size, tag); + poison_range(address, size, tag); if (size & KASAN_SHADOW_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); @@ -148,12 +148,17 @@ void kasan_unpoison_shadow(const void *a } } +void kasan_unpoison_range(const void *address, size_t size) +{ + unpoison_range(address, size); +} + static void __kasan_unpoison_stack(struct task_struct *task, const void *sp) { void *base = task_stack_page(task); size_t size = sp - base; - kasan_unpoison_shadow(base, size); + unpoison_range(base, size); } /* Unpoison the entire stack for a task. */ @@ -172,7 +177,7 @@ asmlinkage void kasan_unpoison_task_stac */ void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1)); - kasan_unpoison_shadow(base, watermark - base); + unpoison_range(base, watermark - base); } void kasan_alloc_pages(struct page *page, unsigned int order) @@ -186,13 +191,13 @@ void kasan_alloc_pages(struct page *page tag = random_tag(); for (i = 0; i < (1 << order); i++) page_kasan_tag_set(page + i, tag); - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); + unpoison_range(page_address(page), PAGE_SIZE << order); } void kasan_free_pages(struct page *page, unsigned int order) { if (likely(!PageHighMem(page))) - kasan_poison_shadow(page_address(page), + poison_range(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE); } @@ -284,18 +289,18 @@ void kasan_poison_slab(struct page *page for (i = 0; i < compound_nr(page); i++) page_kasan_tag_reset(page + i); - kasan_poison_shadow(page_address(page), page_size(page), - KASAN_KMALLOC_REDZONE); + poison_range(page_address(page), page_size(page), + KASAN_KMALLOC_REDZONE); } void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) { - kasan_unpoison_shadow(object, cache->object_size); + unpoison_range(object, cache->object_size); } void kasan_poison_object_data(struct kmem_cache *cache, void *object) { - kasan_poison_shadow(object, + poison_range(object, round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), KASAN_KMALLOC_REDZONE); } @@ -408,7 +413,7 @@ static bool __kasan_slab_free(struct kme } rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE); - kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE); + poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE); if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || unlikely(!(cache->flags & SLAB_KASAN))) @@ -448,9 +453,9 @@ static void *__kasan_kmalloc(struct kmem tag = assign_tag(cache, object, false, keep_tag); /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */ - kasan_unpoison_shadow(set_tag(object, tag), size); - kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, - KASAN_KMALLOC_REDZONE); + unpoison_range(set_tag(object, tag), size); + poison_range((void *)redzone_start, redzone_end - redzone_start, + KASAN_KMALLOC_REDZONE); if (cache->flags & SLAB_KASAN) kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags); @@ -489,9 +494,9 @@ void * __must_check 
kasan_kmalloc_large( KASAN_SHADOW_SCALE_SIZE); redzone_end = (unsigned long)ptr + page_size(page); - kasan_unpoison_shadow(ptr, size); - kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, - KASAN_PAGE_REDZONE); + unpoison_range(ptr, size); + poison_range((void *)redzone_start, redzone_end - redzone_start, + KASAN_PAGE_REDZONE); return (void *)ptr; } @@ -523,7 +528,7 @@ void kasan_poison_kfree(void *ptr, unsig kasan_report_invalid_free(ptr, ip); return; } - kasan_poison_shadow(ptr, page_size(page), KASAN_FREE_PAGE); + poison_range(ptr, page_size(page), KASAN_FREE_PAGE); } else { __kasan_slab_free(page->slab_cache, ptr, ip, false); } @@ -709,7 +714,7 @@ int kasan_populate_vmalloc(unsigned long * // vmalloc() allocates memory * // let a = area->addr * // we reach kasan_populate_vmalloc - * // and call kasan_unpoison_shadow: + * // and call unpoison_range: * STORE shadow(a), unpoison_val * ... * STORE shadow(a+99), unpoison_val x = LOAD p @@ -744,7 +749,7 @@ void kasan_poison_vmalloc(const void *st return; size = round_up(size, KASAN_SHADOW_SCALE_SIZE); - kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID); + poison_range(start, size, KASAN_VMALLOC_INVALID); } void kasan_unpoison_vmalloc(const void *start, unsigned long size) @@ -752,7 +757,7 @@ void kasan_unpoison_vmalloc(const void * if (!is_vmalloc_or_module_addr(start)) return; - kasan_unpoison_shadow(start, size); + unpoison_range(start, size); } static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, --- a/mm/kasan/generic.c~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/mm/kasan/generic.c @@ -202,11 +202,11 @@ static void register_global(struct kasan { size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE); - kasan_unpoison_shadow(global->beg, global->size); + unpoison_range(global->beg, global->size); - kasan_poison_shadow(global->beg + aligned_size, - global->size_with_redzone - aligned_size, - KASAN_GLOBAL_REDZONE); + poison_range(global->beg + aligned_size, + global->size_with_redzone - aligned_size, + KASAN_GLOBAL_REDZONE); } void __asan_register_globals(struct kasan_global *globals, size_t size) @@ -285,13 +285,12 @@ void __asan_alloca_poison(unsigned long WARN_ON(!IS_ALIGNED(addr, KASAN_ALLOCA_REDZONE_SIZE)); - kasan_unpoison_shadow((const void *)(addr + rounded_down_size), - size - rounded_down_size); - kasan_poison_shadow(left_redzone, KASAN_ALLOCA_REDZONE_SIZE, - KASAN_ALLOCA_LEFT); - kasan_poison_shadow(right_redzone, - padding_size + KASAN_ALLOCA_REDZONE_SIZE, - KASAN_ALLOCA_RIGHT); + unpoison_range((const void *)(addr + rounded_down_size), + size - rounded_down_size); + poison_range(left_redzone, KASAN_ALLOCA_REDZONE_SIZE, + KASAN_ALLOCA_LEFT); + poison_range(right_redzone, padding_size + KASAN_ALLOCA_REDZONE_SIZE, + KASAN_ALLOCA_RIGHT); } EXPORT_SYMBOL(__asan_alloca_poison); @@ -301,7 +300,7 @@ void __asan_allocas_unpoison(const void if (unlikely(!stack_top || stack_top > stack_bottom)) return; - kasan_unpoison_shadow(stack_top, stack_bottom - stack_top); + unpoison_range(stack_top, stack_bottom - stack_top); } EXPORT_SYMBOL(__asan_allocas_unpoison); --- a/mm/kasan/kasan.h~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/mm/kasan/kasan.h @@ -150,7 +150,8 @@ static inline bool addr_has_shadow(const return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START)); } -void kasan_poison_shadow(const void *address, size_t size, u8 value); +void poison_range(const void *address, size_t size, u8 value); +void unpoison_range(const void *address, size_t size); /** * 
check_memory_region - Check memory region, and report if invalid access. --- a/mm/kasan/tags.c~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/mm/kasan/tags.c @@ -153,7 +153,7 @@ EXPORT_SYMBOL(__hwasan_storeN_noabort); void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size) { - kasan_poison_shadow((void *)addr, size, tag); + poison_range((void *)addr, size, tag); } EXPORT_SYMBOL(__hwasan_tag_memory); --- a/mm/slab_common.c~kasan-rename-unpoison_shadow-to-unpoison_range +++ a/mm/slab_common.c @@ -1176,7 +1176,7 @@ size_t ksize(const void *objp) * We assume that ksize callers could use whole allocated area, * so we need to unpoison this area. */ - kasan_unpoison_shadow(objp, size); + kasan_unpoison_range(objp, size); return size; } EXPORT_SYMBOL(ksize); From patchwork Fri Dec 18 22:02:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8DF7C3526B for ; Fri, 18 Dec 2020 22:02:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4738A23B87 for ; Fri, 18 Dec 2020 22:02:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4738A23B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D76936B0083; Fri, 18 Dec 2020 17:02:29 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D26546B0085; Fri, 18 Dec 2020 17:02:29 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C14ED6B0087; Fri, 18 Dec 2020 17:02:29 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0046.hostedemail.com [216.40.44.46]) by kanga.kvack.org (Postfix) with ESMTP id A7B426B0083 for ; Fri, 18 Dec 2020 17:02:29 -0500 (EST) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 704533650 for ; Fri, 18 Dec 2020 22:02:29 +0000 (UTC) X-FDA: 77607777618.19.nose06_3f0630b27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with ESMTP id 4B50C1ACEA2 for ; Fri, 18 Dec 2020 22:02:29 +0000 (UTC) X-HE-Tag: nose06_3f0630b27440 X-Filterd-Recvd-Size: 16683 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:28 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:26 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328947; bh=NfBmqfB1pNKkD1p9gAgulCyTNRF42GVopbp9die7ir8=; h=From:To:Subject:In-Reply-To:From; b=1XAYGIWN4pyo3hOpQbtjqfL1t2vLYWLiozvtHivjU1gopmW+RTydYHnqJlIe19JKl 9n7W8C0XtZaM9HL9vUePZHYOjRoYyiW3X74Hxzg6vM3q8lBKW8cfh4XOhQyMzeMb1o 
HyxomUc6wmI+dJfbj66Lri9h90lLXHz/p8PB+ikA= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 19/78] kasan: rename KASAN_SHADOW_* to KASAN_GRANULE_* Message-ID: <20201218220226.WO2R6adY2%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename KASAN_SHADOW_* to KASAN_GRANULE_* This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. The new mode won't be using shadow memory, but will still use the concept of memory granules. Each memory granule maps to a single metadata entry: 8 bytes per one shadow byte for generic mode, 16 bytes per one shadow byte for software tag-based mode, and 16 bytes per one allocation tag for hardware tag-based mode. Rename KASAN_SHADOW_SCALE_SIZE to KASAN_GRANULE_SIZE, and KASAN_SHADOW_MASK to KASAN_GRANULE_MASK. Also use MASK when used as a mask, otherwise use SIZE. No functional changes. Link: https://lkml.kernel.org/r/939b5754e47f528a6e6a6f28ffc5815d8d128033.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- Documentation/dev-tools/kasan.rst | 2 - lib/test_kasan.c | 2 - lib/test_kasan_module.c | 2 - mm/kasan/common.c | 39 ++++++++++++++-------------- mm/kasan/generic.c | 14 +++++----- mm/kasan/generic_report.c | 8 ++--- mm/kasan/init.c | 8 ++--- mm/kasan/kasan.h | 4 +- mm/kasan/report.c | 10 +++---- mm/kasan/tags_report.c | 2 - 10 files changed, 46 insertions(+), 45 deletions(-) --- a/Documentation/dev-tools/kasan.rst~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/Documentation/dev-tools/kasan.rst @@ -265,7 +265,7 @@ Most mappings in vmalloc space are small page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to -``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``. +``KASAN_GRANULE_SIZE * PAGE_SIZE``. Instead, we share backing space across multiple mappings. We allocate a backing page when a mapping in vmalloc space uses a particular page --- a/lib/test_kasan.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/lib/test_kasan.c @@ -25,7 +25,7 @@ #include "../mm/kasan/kasan.h" -#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_SHADOW_SCALE_SIZE) +#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE) /* * We assign some test results to these globals to make sure the tests --- a/lib/test_kasan_module.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/lib/test_kasan_module.c @@ -15,7 +15,7 @@ #include "../mm/kasan/kasan.h" -#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 
0 : KASAN_SHADOW_SCALE_SIZE) +#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE) static noinline void __init copy_user_test(void) { --- a/mm/kasan/common.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/common.c @@ -106,7 +106,7 @@ void *memcpy(void *dest, const void *src /* * Poisons the shadow memory for 'size' bytes starting from 'addr'. - * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE. + * Memory addresses should be aligned to KASAN_GRANULE_SIZE. */ void poison_range(const void *address, size_t size, u8 value) { @@ -138,13 +138,13 @@ void unpoison_range(const void *address, poison_range(address, size, tag); - if (size & KASAN_SHADOW_MASK) { + if (size & KASAN_GRANULE_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) *shadow = tag; else - *shadow = size & KASAN_SHADOW_MASK; + *shadow = size & KASAN_GRANULE_MASK; } } @@ -301,7 +301,7 @@ void kasan_unpoison_object_data(struct k void kasan_poison_object_data(struct kmem_cache *cache, void *object) { poison_range(object, - round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), + round_up(cache->object_size, KASAN_GRANULE_SIZE), KASAN_KMALLOC_REDZONE); } @@ -373,7 +373,7 @@ static inline bool shadow_invalid(u8 tag { if (IS_ENABLED(CONFIG_KASAN_GENERIC)) return shadow_byte < 0 || - shadow_byte >= KASAN_SHADOW_SCALE_SIZE; + shadow_byte >= KASAN_GRANULE_SIZE; /* else CONFIG_KASAN_SW_TAGS: */ if ((u8)shadow_byte == KASAN_TAG_INVALID) @@ -412,7 +412,7 @@ static bool __kasan_slab_free(struct kme return true; } - rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE); + rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE); poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE); if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || @@ -445,9 +445,9 @@ static void *__kasan_kmalloc(struct kmem return NULL; redzone_start = round_up((unsigned long)(object + size), - KASAN_SHADOW_SCALE_SIZE); + KASAN_GRANULE_SIZE); redzone_end = round_up((unsigned long)object + cache->object_size, - KASAN_SHADOW_SCALE_SIZE); + KASAN_GRANULE_SIZE); if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) tag = assign_tag(cache, object, false, keep_tag); @@ -491,7 +491,7 @@ void * __must_check kasan_kmalloc_large( page = virt_to_page(ptr); redzone_start = round_up((unsigned long)(ptr + size), - KASAN_SHADOW_SCALE_SIZE); + KASAN_GRANULE_SIZE); redzone_end = (unsigned long)ptr + page_size(page); unpoison_range(ptr, size); @@ -589,8 +589,8 @@ static int __meminit kasan_mem_notifier( shadow_size = nr_shadow_pages << PAGE_SHIFT; shadow_end = shadow_start + shadow_size; - if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) || - WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT))) + if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) || + WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT))) return NOTIFY_BAD; switch (action) { @@ -748,7 +748,7 @@ void kasan_poison_vmalloc(const void *st if (!is_vmalloc_or_module_addr(start)) return; - size = round_up(size, KASAN_SHADOW_SCALE_SIZE); + size = round_up(size, KASAN_GRANULE_SIZE); poison_range(start, size, KASAN_VMALLOC_INVALID); } @@ -861,22 +861,22 @@ void kasan_release_vmalloc(unsigned long unsigned long region_start, region_end; unsigned long size; - region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE); - region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE); + region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE); + region_end = ALIGN_DOWN(end, 
PAGE_SIZE * KASAN_GRANULE_SIZE); free_region_start = ALIGN(free_region_start, - PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE); + PAGE_SIZE * KASAN_GRANULE_SIZE); if (start != region_start && free_region_start < region_start) - region_start -= PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE; + region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE; free_region_end = ALIGN_DOWN(free_region_end, - PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE); + PAGE_SIZE * KASAN_GRANULE_SIZE); if (end != region_end && free_region_end > region_end) - region_end += PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE; + region_end += PAGE_SIZE * KASAN_GRANULE_SIZE; shadow_start = kasan_mem_to_shadow((void *)region_start); shadow_end = kasan_mem_to_shadow((void *)region_end); @@ -902,7 +902,8 @@ int kasan_module_alloc(void *addr, size_ unsigned long shadow_start; shadow_start = (unsigned long)kasan_mem_to_shadow(addr); - scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; + scaled_size = (size + KASAN_GRANULE_SIZE - 1) >> + KASAN_SHADOW_SCALE_SHIFT; shadow_size = round_up(scaled_size, PAGE_SIZE); if (WARN_ON(!PAGE_ALIGNED(shadow_start))) --- a/mm/kasan/generic.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/generic.c @@ -46,7 +46,7 @@ static __always_inline bool memory_is_po s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr); if (unlikely(shadow_value)) { - s8 last_accessible_byte = addr & KASAN_SHADOW_MASK; + s8 last_accessible_byte = addr & KASAN_GRANULE_MASK; return unlikely(last_accessible_byte >= shadow_value); } @@ -62,7 +62,7 @@ static __always_inline bool memory_is_po * Access crosses 8(shadow size)-byte boundary. Such access maps * into 2 shadow bytes, so we need to check them both. */ - if (unlikely(((addr + size - 1) & KASAN_SHADOW_MASK) < size - 1)) + if (unlikely(((addr + size - 1) & KASAN_GRANULE_MASK) < size - 1)) return *shadow_addr || memory_is_poisoned_1(addr + size - 1); return memory_is_poisoned_1(addr + size - 1); @@ -73,7 +73,7 @@ static __always_inline bool memory_is_po u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); /* Unaligned 16-bytes access maps into 3 shadow bytes. */ - if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE))) + if (unlikely(!IS_ALIGNED(addr, KASAN_GRANULE_SIZE))) return *shadow_addr || memory_is_poisoned_1(addr + 15); return *shadow_addr; @@ -134,7 +134,7 @@ static __always_inline bool memory_is_po s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte); if (unlikely(ret != (unsigned long)last_shadow || - ((long)(last_byte & KASAN_SHADOW_MASK) >= *last_shadow))) + ((long)(last_byte & KASAN_GRANULE_MASK) >= *last_shadow))) return true; } return false; @@ -200,7 +200,7 @@ void kasan_cache_shutdown(struct kmem_ca static void register_global(struct kasan_global *global) { - size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE); + size_t aligned_size = round_up(global->size, KASAN_GRANULE_SIZE); unpoison_range(global->beg, global->size); @@ -274,10 +274,10 @@ EXPORT_SYMBOL(__asan_handle_no_return); /* Emitted by compiler to poison alloca()ed objects. 
*/ void __asan_alloca_poison(unsigned long addr, size_t size) { - size_t rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE); + size_t rounded_up_size = round_up(size, KASAN_GRANULE_SIZE); size_t padding_size = round_up(size, KASAN_ALLOCA_REDZONE_SIZE) - rounded_up_size; - size_t rounded_down_size = round_down(size, KASAN_SHADOW_SCALE_SIZE); + size_t rounded_down_size = round_down(size, KASAN_GRANULE_SIZE); const void *left_redzone = (const void *)(addr - KASAN_ALLOCA_REDZONE_SIZE); --- a/mm/kasan/generic_report.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/generic_report.c @@ -34,7 +34,7 @@ void *find_first_bad_addr(void *addr, si void *p = addr; while (p < addr + size && !(*(u8 *)kasan_mem_to_shadow(p))) - p += KASAN_SHADOW_SCALE_SIZE; + p += KASAN_GRANULE_SIZE; return p; } @@ -46,14 +46,14 @@ static const char *get_shadow_bug_type(s shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr); /* - * If shadow byte value is in [0, KASAN_SHADOW_SCALE_SIZE) we can look + * If shadow byte value is in [0, KASAN_GRANULE_SIZE) we can look * at the next shadow byte to determine the type of the bad access. */ - if (*shadow_addr > 0 && *shadow_addr <= KASAN_SHADOW_SCALE_SIZE - 1) + if (*shadow_addr > 0 && *shadow_addr <= KASAN_GRANULE_SIZE - 1) shadow_addr++; switch (*shadow_addr) { - case 0 ... KASAN_SHADOW_SCALE_SIZE - 1: + case 0 ... KASAN_GRANULE_SIZE - 1: /* * In theory it's still possible to see these shadow values * due to a data race in the kernel code. --- a/mm/kasan/init.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/init.c @@ -442,8 +442,8 @@ void kasan_remove_zero_shadow(void *star end = addr + (size >> KASAN_SHADOW_SCALE_SHIFT); if (WARN_ON((unsigned long)start % - (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)) || - WARN_ON(size % (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE))) + (KASAN_GRANULE_SIZE * PAGE_SIZE)) || + WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE))) return; for (; addr < end; addr = next) { @@ -477,8 +477,8 @@ int kasan_add_zero_shadow(void *start, u shadow_end = shadow_start + (size >> KASAN_SHADOW_SCALE_SHIFT); if (WARN_ON((unsigned long)start % - (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)) || - WARN_ON(size % (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE))) + (KASAN_GRANULE_SIZE * PAGE_SIZE)) || + WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE))) return -EINVAL; ret = kasan_populate_early_shadow(shadow_start, shadow_end); --- a/mm/kasan/kasan.h~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/kasan.h @@ -5,8 +5,8 @@ #include #include -#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) -#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) +#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) +#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1) #define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */ #define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ --- a/mm/kasan/report.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/report.c @@ -314,24 +314,24 @@ static bool __must_check get_address_sta return false; aligned_addr = round_down((unsigned long)addr, sizeof(long)); - mem_ptr = round_down(aligned_addr, KASAN_SHADOW_SCALE_SIZE); + mem_ptr = round_down(aligned_addr, KASAN_GRANULE_SIZE); shadow_ptr = kasan_mem_to_shadow((void *)aligned_addr); shadow_bottom = kasan_mem_to_shadow(end_of_stack(current)); while (shadow_ptr >= shadow_bottom && *shadow_ptr != KASAN_STACK_LEFT) { shadow_ptr--; - mem_ptr -= KASAN_SHADOW_SCALE_SIZE; + mem_ptr -= KASAN_GRANULE_SIZE; } while (shadow_ptr >= shadow_bottom && 
*shadow_ptr == KASAN_STACK_LEFT) { shadow_ptr--; - mem_ptr -= KASAN_SHADOW_SCALE_SIZE; + mem_ptr -= KASAN_GRANULE_SIZE; } if (shadow_ptr < shadow_bottom) return false; - frame = (const unsigned long *)(mem_ptr + KASAN_SHADOW_SCALE_SIZE); + frame = (const unsigned long *)(mem_ptr + KASAN_GRANULE_SIZE); if (frame[0] != KASAN_CURRENT_STACK_FRAME_MAGIC) { pr_err("KASAN internal error: frame info validation failed; invalid marker: %lu\n", frame[0]); @@ -599,6 +599,6 @@ void kasan_non_canonical_hook(unsigned l else bug_type = "maybe wild-memory-access"; pr_alert("KASAN: %s in range [0x%016lx-0x%016lx]\n", bug_type, - orig_addr, orig_addr + KASAN_SHADOW_MASK); + orig_addr, orig_addr + KASAN_GRANULE_SIZE - 1); } #endif --- a/mm/kasan/tags_report.c~kasan-rename-kasan_shadow_-to-kasan_granule_ +++ a/mm/kasan/tags_report.c @@ -76,7 +76,7 @@ void *find_first_bad_addr(void *addr, si void *end = p + size; while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p)) - p += KASAN_SHADOW_SCALE_SIZE; + p += KASAN_GRANULE_SIZE; return p; } From patchwork Fri Dec 18 22:02:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AFC20C4361B for ; Fri, 18 Dec 2020 22:02:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4B20A23B87 for ; Fri, 18 Dec 2020 22:02:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4B20A23B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E13846B0085; Fri, 18 Dec 2020 17:02:32 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D9B8B6B0087; Fri, 18 Dec 2020 17:02:32 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C15716B0088; Fri, 18 Dec 2020 17:02:32 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0135.hostedemail.com [216.40.44.135]) by kanga.kvack.org (Postfix) with ESMTP id A7B4F6B0085 for ; Fri, 18 Dec 2020 17:02:32 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 7464318163987 for ; Fri, 18 Dec 2020 22:02:32 +0000 (UTC) X-FDA: 77607777744.23.frog87_4c04c4527440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 5853537604 for ; Fri, 18 Dec 2020 22:02:32 +0000 (UTC) X-HE-Tag: frog87_4c04c4527440 X-Filterd-Recvd-Size: 3571 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf19.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:31 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:30 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; 
t=1608328951; bh=FFOGr2o/XL8jMtH5M70PWuYqpSKtfQGlvgjA8O4gOJs=; h=From:To:Subject:In-Reply-To:From; b=ifkdUmhIG0pJQGtXvUwvSJGO3JSgYKFWFCaHQAZrmJhx+R+cpxEfAwdQDy9Fyin1E Wv4sGM/kDnF262ld7eVbBDK86sm3kRXFGuVD120+7mUlIRiZJfAjKj+TcStJqAzLSR ++dbIScXDsGGdqDBrT8zlxeQUTYYXfQXXcelMQ8Q= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 20/78] kasan: only build init.c for software modes Message-ID: <20201218220230.nc-wsPDCP%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000072, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: only build init.c for software modes This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. The new mode won't be using shadow memory, so only build init.c that contains shadow initialization code for software modes. No functional changes for software modes. Link: https://lkml.kernel.org/r/bae0a6a35b7a9b1a443803c1a55e6e3fecc311c9.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/Makefile | 6 +++--- mm/kasan/init.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) --- a/mm/kasan/init.c~kasan-only-build-initc-for-software-modes +++ a/mm/kasan/init.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains some kasan initialization code. + * This file contains KASAN shadow initialization code. * * Copyright (c) 2015 Samsung Electronics Co., Ltd. 
* Author: Andrey Ryabinin --- a/mm/kasan/Makefile~kasan-only-build-initc-for-software-modes +++ a/mm/kasan/Makefile @@ -29,6 +29,6 @@ CFLAGS_report.o := $(CC_FLAGS_KASAN_RUNT CFLAGS_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME) -obj-$(CONFIG_KASAN) := common.o init.o report.o -obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o -obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o +obj-$(CONFIG_KASAN) := common.o report.o +obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o generic_report.o quarantine.o +obj-$(CONFIG_KASAN_SW_TAGS) += init.o tags.o tags_report.o From patchwork Fri Dec 18 22:02:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-20.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBFB2C4361B for ; Fri, 18 Dec 2020 22:02:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7209F23BB5 for ; Fri, 18 Dec 2020 22:02:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7209F23BB5 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1401E6B005D; Fri, 18 Dec 2020 17:02:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 0CAF96B0087; Fri, 18 Dec 2020 17:02:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EB0A86B0088; Fri, 18 Dec 2020 17:02:36 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0179.hostedemail.com [216.40.44.179]) by kanga.kvack.org (Postfix) with ESMTP id CAC7B6B005D for ; Fri, 18 Dec 2020 17:02:36 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 90BD8181C967F for ; Fri, 18 Dec 2020 22:02:36 +0000 (UTC) X-FDA: 77607777912.09.yard35_0f0b79927440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 717F1181B9DD7 for ; Fri, 18 Dec 2020 22:02:36 +0000 (UTC) X-HE-Tag: yard35_0f0b79927440 X-Filterd-Recvd-Size: 36437 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf47.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:35 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:33 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328954; bh=GRy2QdEt9iI8igBGgzjeMb8/HE/2eChOP2yKpP/HwZE=; h=From:To:Subject:In-Reply-To:From; b=ocF5rGLGLeDv7kF3sRHf3gJ0uq6maO8juYoshQMObQE3LoHOXayGpejJ8hMc4Vft+ SB9pU1ovuQSScIOj4+8TukQzJ4EIHyuUUfg9Fhn40TtRXsGo9LYl4tP6PTh1aPNE3g DmWs02vRuqzy+1jeNYRjgynuPkuAK8RjkADtqwLY= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, 
Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 21/78] kasan: split out shadow.c from common.c Message-ID: <20201218220233.pgX0nYYVt%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: split out shadow.c from common.c This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. The new mode won't be using shadow memory. Move all shadow-related code to shadow.c, which is only enabled for software KASAN modes that use shadow memory. No functional changes for software modes. Link: https://lkml.kernel.org/r/17d95cfa7d5cf9c4fcd9bf415f2a8dea911668df.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton Reported-by: kernel test robot --- mm/kasan/Makefile | 6 mm/kasan/common.c | 486 ----------------------------------------- mm/kasan/shadow.c | 518 ++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 523 insertions(+), 487 deletions(-) --- a/mm/kasan/common.c~kasan-split-out-shadowc-from-commonc +++ a/mm/kasan/common.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains common generic and tag-based KASAN code. + * This file contains common KASAN code. * * Copyright (c) 2014 Samsung Electronics Co., Ltd. 
* Author: Andrey Ryabinin @@ -13,7 +13,6 @@ #include #include #include -#include #include #include #include @@ -26,12 +25,8 @@ #include #include #include -#include #include -#include -#include - #include "kasan.h" #include "../slab.h" @@ -61,93 +56,6 @@ void kasan_disable_current(void) current->kasan_depth--; } -bool __kasan_check_read(const volatile void *p, unsigned int size) -{ - return check_memory_region((unsigned long)p, size, false, _RET_IP_); -} -EXPORT_SYMBOL(__kasan_check_read); - -bool __kasan_check_write(const volatile void *p, unsigned int size) -{ - return check_memory_region((unsigned long)p, size, true, _RET_IP_); -} -EXPORT_SYMBOL(__kasan_check_write); - -#undef memset -void *memset(void *addr, int c, size_t len) -{ - if (!check_memory_region((unsigned long)addr, len, true, _RET_IP_)) - return NULL; - - return __memset(addr, c, len); -} - -#ifdef __HAVE_ARCH_MEMMOVE -#undef memmove -void *memmove(void *dest, const void *src, size_t len) -{ - if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) || - !check_memory_region((unsigned long)dest, len, true, _RET_IP_)) - return NULL; - - return __memmove(dest, src, len); -} -#endif - -#undef memcpy -void *memcpy(void *dest, const void *src, size_t len) -{ - if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) || - !check_memory_region((unsigned long)dest, len, true, _RET_IP_)) - return NULL; - - return __memcpy(dest, src, len); -} - -/* - * Poisons the shadow memory for 'size' bytes starting from 'addr'. - * Memory addresses should be aligned to KASAN_GRANULE_SIZE. - */ -void poison_range(const void *address, size_t size, u8 value) -{ - void *shadow_start, *shadow_end; - - /* - * Perform shadow offset calculation based on untagged address, as - * some of the callers (e.g. kasan_poison_object_data) pass tagged - * addresses to this function. - */ - address = reset_tag(address); - - shadow_start = kasan_mem_to_shadow(address); - shadow_end = kasan_mem_to_shadow(address + size); - - __memset(shadow_start, value, shadow_end - shadow_start); -} - -void unpoison_range(const void *address, size_t size) -{ - u8 tag = get_tag(address); - - /* - * Perform shadow offset calculation based on untagged address, as - * some of the callers (e.g. kasan_unpoison_object_data) pass tagged - * addresses to this function. - */ - address = reset_tag(address); - - poison_range(address, size, tag); - - if (size & KASAN_GRANULE_MASK) { - u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); - - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) - *shadow = tag; - else - *shadow = size & KASAN_GRANULE_MASK; - } -} - void kasan_unpoison_range(const void *address, size_t size) { unpoison_range(address, size); @@ -540,395 +448,3 @@ void kasan_kfree_large(void *ptr, unsign kasan_report_invalid_free(ptr, ip); /* The object will be poisoned by page_alloc. */ } - -#ifdef CONFIG_MEMORY_HOTPLUG -static bool shadow_mapped(unsigned long addr) -{ - pgd_t *pgd = pgd_offset_k(addr); - p4d_t *p4d; - pud_t *pud; - pmd_t *pmd; - pte_t *pte; - - if (pgd_none(*pgd)) - return false; - p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) - return false; - pud = pud_offset(p4d, addr); - if (pud_none(*pud)) - return false; - - /* - * We can't use pud_large() or pud_huge(), the first one is - * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse - * pud_bad(), if pud is bad then it's bad because it's huge. 
- */ - if (pud_bad(*pud)) - return true; - pmd = pmd_offset(pud, addr); - if (pmd_none(*pmd)) - return false; - - if (pmd_bad(*pmd)) - return true; - pte = pte_offset_kernel(pmd, addr); - return !pte_none(*pte); -} - -static int __meminit kasan_mem_notifier(struct notifier_block *nb, - unsigned long action, void *data) -{ - struct memory_notify *mem_data = data; - unsigned long nr_shadow_pages, start_kaddr, shadow_start; - unsigned long shadow_end, shadow_size; - - nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT; - start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn); - shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr); - shadow_size = nr_shadow_pages << PAGE_SHIFT; - shadow_end = shadow_start + shadow_size; - - if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) || - WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT))) - return NOTIFY_BAD; - - switch (action) { - case MEM_GOING_ONLINE: { - void *ret; - - /* - * If shadow is mapped already than it must have been mapped - * during the boot. This could happen if we onlining previously - * offlined memory. - */ - if (shadow_mapped(shadow_start)) - return NOTIFY_OK; - - ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start, - shadow_end, GFP_KERNEL, - PAGE_KERNEL, VM_NO_GUARD, - pfn_to_nid(mem_data->start_pfn), - __builtin_return_address(0)); - if (!ret) - return NOTIFY_BAD; - - kmemleak_ignore(ret); - return NOTIFY_OK; - } - case MEM_CANCEL_ONLINE: - case MEM_OFFLINE: { - struct vm_struct *vm; - - /* - * shadow_start was either mapped during boot by kasan_init() - * or during memory online by __vmalloc_node_range(). - * In the latter case we can use vfree() to free shadow. - * Non-NULL result of the find_vm_area() will tell us if - * that was the second case. - * - * Currently it's not possible to free shadow mapped - * during boot by kasan_init(). It's because the code - * to do that hasn't been written yet. So we'll just - * leak the memory. 
- */ - vm = find_vm_area((void *)shadow_start); - if (vm) - vfree((void *)shadow_start); - } - } - - return NOTIFY_OK; -} - -static int __init kasan_memhotplug_init(void) -{ - hotplug_memory_notifier(kasan_mem_notifier, 0); - - return 0; -} - -core_initcall(kasan_memhotplug_init); -#endif - -#ifdef CONFIG_KASAN_VMALLOC - -static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, - void *unused) -{ - unsigned long page; - pte_t pte; - - if (likely(!pte_none(*ptep))) - return 0; - - page = __get_free_page(GFP_KERNEL); - if (!page) - return -ENOMEM; - - memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE); - pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL); - - spin_lock(&init_mm.page_table_lock); - if (likely(pte_none(*ptep))) { - set_pte_at(&init_mm, addr, ptep, pte); - page = 0; - } - spin_unlock(&init_mm.page_table_lock); - if (page) - free_page(page); - return 0; -} - -int kasan_populate_vmalloc(unsigned long addr, unsigned long size) -{ - unsigned long shadow_start, shadow_end; - int ret; - - if (!is_vmalloc_or_module_addr((void *)addr)) - return 0; - - shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr); - shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE); - shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size); - shadow_end = ALIGN(shadow_end, PAGE_SIZE); - - ret = apply_to_page_range(&init_mm, shadow_start, - shadow_end - shadow_start, - kasan_populate_vmalloc_pte, NULL); - if (ret) - return ret; - - flush_cache_vmap(shadow_start, shadow_end); - - /* - * We need to be careful about inter-cpu effects here. Consider: - * - * CPU#0 CPU#1 - * WRITE_ONCE(p, vmalloc(100)); while (x = READ_ONCE(p)) ; - * p[99] = 1; - * - * With compiler instrumentation, that ends up looking like this: - * - * CPU#0 CPU#1 - * // vmalloc() allocates memory - * // let a = area->addr - * // we reach kasan_populate_vmalloc - * // and call unpoison_range: - * STORE shadow(a), unpoison_val - * ... - * STORE shadow(a+99), unpoison_val x = LOAD p - * // rest of vmalloc process - * STORE p, a LOAD shadow(x+99) - * - * If there is no barrier between the end of unpoisioning the shadow - * and the store of the result to p, the stores could be committed - * in a different order by CPU#0, and CPU#1 could erroneously observe - * poison in the shadow. - * - * We need some sort of barrier between the stores. - * - * In the vmalloc() case, this is provided by a smp_wmb() in - * clear_vm_uninitialized_flag(). In the per-cpu allocator and in - * get_vm_area() and friends, the caller gets shadow allocated but - * doesn't have any pages mapped into the virtual address space that - * has been reserved. Mapping those pages in will involve taking and - * releasing a page-table lock, which will provide the barrier. - */ - - return 0; -} - -/* - * Poison the shadow for a vmalloc region. Called as part of the - * freeing process at the time the region is freed. 
- */ -void kasan_poison_vmalloc(const void *start, unsigned long size) -{ - if (!is_vmalloc_or_module_addr(start)) - return; - - size = round_up(size, KASAN_GRANULE_SIZE); - poison_range(start, size, KASAN_VMALLOC_INVALID); -} - -void kasan_unpoison_vmalloc(const void *start, unsigned long size) -{ - if (!is_vmalloc_or_module_addr(start)) - return; - - unpoison_range(start, size); -} - -static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, - void *unused) -{ - unsigned long page; - - page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT); - - spin_lock(&init_mm.page_table_lock); - - if (likely(!pte_none(*ptep))) { - pte_clear(&init_mm, addr, ptep); - free_page(page); - } - spin_unlock(&init_mm.page_table_lock); - - return 0; -} - -/* - * Release the backing for the vmalloc region [start, end), which - * lies within the free region [free_region_start, free_region_end). - * - * This can be run lazily, long after the region was freed. It runs - * under vmap_area_lock, so it's not safe to interact with the vmalloc/vmap - * infrastructure. - * - * How does this work? - * ------------------- - * - * We have a region that is page aligned, labelled as A. - * That might not map onto the shadow in a way that is page-aligned: - * - * start end - * v v - * |????????|????????|AAAAAAAA|AA....AA|AAAAAAAA|????????| < vmalloc - * -------- -------- -------- -------- -------- - * | | | | | - * | | | /-------/ | - * \-------\|/------/ |/---------------/ - * ||| || - * |??AAAAAA|AAAAAAAA|AA??????| < shadow - * (1) (2) (3) - * - * First we align the start upwards and the end downwards, so that the - * shadow of the region aligns with shadow page boundaries. In the - * example, this gives us the shadow page (2). This is the shadow entirely - * covered by this allocation. - * - * Then we have the tricky bits. We want to know if we can free the - * partially covered shadow pages - (1) and (3) in the example. For this, - * we are given the start and end of the free region that contains this - * allocation. Extending our previous example, we could have: - * - * free_region_start free_region_end - * | start end | - * v v v v - * |FFFFFFFF|FFFFFFFF|AAAAAAAA|AA....AA|AAAAAAAA|FFFFFFFF| < vmalloc - * -------- -------- -------- -------- -------- - * | | | | | - * | | | /-------/ | - * \-------\|/------/ |/---------------/ - * ||| || - * |FFAAAAAA|AAAAAAAA|AAF?????| < shadow - * (1) (2) (3) - * - * Once again, we align the start of the free region up, and the end of - * the free region down so that the shadow is page aligned. So we can free - * page (1) - we know no allocation currently uses anything in that page, - * because all of it is in the vmalloc free region. But we cannot free - * page (3), because we can't be sure that the rest of it is unused. - * - * We only consider pages that contain part of the original region for - * freeing: we don't try to free other pages from the free region or we'd - * end up trying to free huge chunks of virtual address space. - * - * Concurrency - * ----------- - * - * How do we know that we're not freeing a page that is simultaneously - * being used for a fresh allocation in kasan_populate_vmalloc(_pte)? - * - * We _can_ have kasan_release_vmalloc and kasan_populate_vmalloc running - * at the same time. While we run under free_vmap_area_lock, the population - * code does not. 
- * - * free_vmap_area_lock instead operates to ensure that the larger range - * [free_region_start, free_region_end) is safe: because __alloc_vmap_area and - * the per-cpu region-finding algorithm both run under free_vmap_area_lock, - * no space identified as free will become used while we are running. This - * means that so long as we are careful with alignment and only free shadow - * pages entirely covered by the free region, we will not run in to any - * trouble - any simultaneous allocations will be for disjoint regions. - */ -void kasan_release_vmalloc(unsigned long start, unsigned long end, - unsigned long free_region_start, - unsigned long free_region_end) -{ - void *shadow_start, *shadow_end; - unsigned long region_start, region_end; - unsigned long size; - - region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE); - region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_GRANULE_SIZE); - - free_region_start = ALIGN(free_region_start, - PAGE_SIZE * KASAN_GRANULE_SIZE); - - if (start != region_start && - free_region_start < region_start) - region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE; - - free_region_end = ALIGN_DOWN(free_region_end, - PAGE_SIZE * KASAN_GRANULE_SIZE); - - if (end != region_end && - free_region_end > region_end) - region_end += PAGE_SIZE * KASAN_GRANULE_SIZE; - - shadow_start = kasan_mem_to_shadow((void *)region_start); - shadow_end = kasan_mem_to_shadow((void *)region_end); - - if (shadow_end > shadow_start) { - size = shadow_end - shadow_start; - apply_to_existing_page_range(&init_mm, - (unsigned long)shadow_start, - size, kasan_depopulate_vmalloc_pte, - NULL); - flush_tlb_kernel_range((unsigned long)shadow_start, - (unsigned long)shadow_end); - } -} - -#else /* CONFIG_KASAN_VMALLOC */ - -int kasan_module_alloc(void *addr, size_t size) -{ - void *ret; - size_t scaled_size; - size_t shadow_size; - unsigned long shadow_start; - - shadow_start = (unsigned long)kasan_mem_to_shadow(addr); - scaled_size = (size + KASAN_GRANULE_SIZE - 1) >> - KASAN_SHADOW_SCALE_SHIFT; - shadow_size = round_up(scaled_size, PAGE_SIZE); - - if (WARN_ON(!PAGE_ALIGNED(shadow_start))) - return -EINVAL; - - ret = __vmalloc_node_range(shadow_size, 1, shadow_start, - shadow_start + shadow_size, - GFP_KERNEL, - PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, - __builtin_return_address(0)); - - if (ret) { - __memset(ret, KASAN_SHADOW_INIT, shadow_size); - find_vm_area(addr)->flags |= VM_KASAN; - kmemleak_ignore(ret); - return 0; - } - - return -ENOMEM; -} - -void kasan_free_shadow(const struct vm_struct *vm) -{ - if (vm->flags & VM_KASAN) - vfree(kasan_mem_to_shadow(vm->addr)); -} - -#endif --- a/mm/kasan/Makefile~kasan-split-out-shadowc-from-commonc +++ a/mm/kasan/Makefile @@ -10,6 +10,7 @@ CFLAGS_REMOVE_generic_report.o = $(CC_FL CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_quarantine.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_shadow.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_tags.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_tags_report.o = $(CC_FLAGS_FTRACE) @@ -26,9 +27,10 @@ CFLAGS_generic_report.o := $(CC_FLAGS_KA CFLAGS_init.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_quarantine.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_shadow.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME) obj-$(CONFIG_KASAN) := common.o report.o -obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o generic_report.o quarantine.o -obj-$(CONFIG_KASAN_SW_TAGS) += init.o tags.o tags_report.o 
+obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o generic_report.o shadow.o quarantine.o +obj-$(CONFIG_KASAN_SW_TAGS) += init.o shadow.o tags.o tags_report.o --- /dev/null +++ a/mm/kasan/shadow.c @@ -0,0 +1,518 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This file contains KASAN runtime code that manages shadow memory for + * generic and software tag-based KASAN modes. + * + * Copyright (c) 2014 Samsung Electronics Co., Ltd. + * Author: Andrey Ryabinin + * + * Some code borrowed from https://github.com/xairy/kasan-prototype by + * Andrey Konovalov + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "kasan.h" + +bool __kasan_check_read(const volatile void *p, unsigned int size) +{ + return check_memory_region((unsigned long)p, size, false, _RET_IP_); +} +EXPORT_SYMBOL(__kasan_check_read); + +bool __kasan_check_write(const volatile void *p, unsigned int size) +{ + return check_memory_region((unsigned long)p, size, true, _RET_IP_); +} +EXPORT_SYMBOL(__kasan_check_write); + +#undef memset +void *memset(void *addr, int c, size_t len) +{ + if (!check_memory_region((unsigned long)addr, len, true, _RET_IP_)) + return NULL; + + return __memset(addr, c, len); +} + +#ifdef __HAVE_ARCH_MEMMOVE +#undef memmove +void *memmove(void *dest, const void *src, size_t len) +{ + if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) || + !check_memory_region((unsigned long)dest, len, true, _RET_IP_)) + return NULL; + + return __memmove(dest, src, len); +} +#endif + +#undef memcpy +void *memcpy(void *dest, const void *src, size_t len) +{ + if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) || + !check_memory_region((unsigned long)dest, len, true, _RET_IP_)) + return NULL; + + return __memcpy(dest, src, len); +} + +/* + * Poisons the shadow memory for 'size' bytes starting from 'addr'. + * Memory addresses should be aligned to KASAN_GRANULE_SIZE. + */ +void poison_range(const void *address, size_t size, u8 value) +{ + void *shadow_start, *shadow_end; + + /* + * Perform shadow offset calculation based on untagged address, as + * some of the callers (e.g. kasan_poison_object_data) pass tagged + * addresses to this function. + */ + address = reset_tag(address); + + /* Skip KFENCE memory if called explicitly outside of sl*b. */ + if (is_kfence_address(address)) + return; + + shadow_start = kasan_mem_to_shadow(address); + shadow_end = kasan_mem_to_shadow(address + size); + + __memset(shadow_start, value, shadow_end - shadow_start); +} + +void unpoison_range(const void *address, size_t size) +{ + u8 tag = get_tag(address); + + /* + * Perform shadow offset calculation based on untagged address, as + * some of the callers (e.g. kasan_unpoison_object_data) pass tagged + * addresses to this function. + */ + address = reset_tag(address); + + /* + * Skip KFENCE memory if called explicitly outside of sl*b. Also note + * that calls to ksize(), where size is not a multiple of machine-word + * size, would otherwise poison the invalid portion of the word. 
+ */ + if (is_kfence_address(address)) + return; + + poison_range(address, size, tag); + + if (size & KASAN_GRANULE_MASK) { + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); + + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) + *shadow = tag; + else + *shadow = size & KASAN_GRANULE_MASK; + } +} + +#ifdef CONFIG_MEMORY_HOTPLUG +static bool shadow_mapped(unsigned long addr) +{ + pgd_t *pgd = pgd_offset_k(addr); + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + + if (pgd_none(*pgd)) + return false; + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + return false; + pud = pud_offset(p4d, addr); + if (pud_none(*pud)) + return false; + + /* + * We can't use pud_large() or pud_huge(), the first one is + * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse + * pud_bad(), if pud is bad then it's bad because it's huge. + */ + if (pud_bad(*pud)) + return true; + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd)) + return false; + + if (pmd_bad(*pmd)) + return true; + pte = pte_offset_kernel(pmd, addr); + return !pte_none(*pte); +} + +static int __meminit kasan_mem_notifier(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct memory_notify *mem_data = data; + unsigned long nr_shadow_pages, start_kaddr, shadow_start; + unsigned long shadow_end, shadow_size; + + nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT; + start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn); + shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr); + shadow_size = nr_shadow_pages << PAGE_SHIFT; + shadow_end = shadow_start + shadow_size; + + if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) || + WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT))) + return NOTIFY_BAD; + + switch (action) { + case MEM_GOING_ONLINE: { + void *ret; + + /* + * If shadow is mapped already than it must have been mapped + * during the boot. This could happen if we onlining previously + * offlined memory. + */ + if (shadow_mapped(shadow_start)) + return NOTIFY_OK; + + ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start, + shadow_end, GFP_KERNEL, + PAGE_KERNEL, VM_NO_GUARD, + pfn_to_nid(mem_data->start_pfn), + __builtin_return_address(0)); + if (!ret) + return NOTIFY_BAD; + + kmemleak_ignore(ret); + return NOTIFY_OK; + } + case MEM_CANCEL_ONLINE: + case MEM_OFFLINE: { + struct vm_struct *vm; + + /* + * shadow_start was either mapped during boot by kasan_init() + * or during memory online by __vmalloc_node_range(). + * In the latter case we can use vfree() to free shadow. + * Non-NULL result of the find_vm_area() will tell us if + * that was the second case. + * + * Currently it's not possible to free shadow mapped + * during boot by kasan_init(). It's because the code + * to do that hasn't been written yet. So we'll just + * leak the memory. 
+ */ + vm = find_vm_area((void *)shadow_start); + if (vm) + vfree((void *)shadow_start); + } + } + + return NOTIFY_OK; +} + +static int __init kasan_memhotplug_init(void) +{ + hotplug_memory_notifier(kasan_mem_notifier, 0); + + return 0; +} + +core_initcall(kasan_memhotplug_init); +#endif + +#ifdef CONFIG_KASAN_VMALLOC + +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, + void *unused) +{ + unsigned long page; + pte_t pte; + + if (likely(!pte_none(*ptep))) + return 0; + + page = __get_free_page(GFP_KERNEL); + if (!page) + return -ENOMEM; + + memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE); + pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL); + + spin_lock(&init_mm.page_table_lock); + if (likely(pte_none(*ptep))) { + set_pte_at(&init_mm, addr, ptep, pte); + page = 0; + } + spin_unlock(&init_mm.page_table_lock); + if (page) + free_page(page); + return 0; +} + +int kasan_populate_vmalloc(unsigned long addr, unsigned long size) +{ + unsigned long shadow_start, shadow_end; + int ret; + + if (!is_vmalloc_or_module_addr((void *)addr)) + return 0; + + shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr); + shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE); + shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size); + shadow_end = ALIGN(shadow_end, PAGE_SIZE); + + ret = apply_to_page_range(&init_mm, shadow_start, + shadow_end - shadow_start, + kasan_populate_vmalloc_pte, NULL); + if (ret) + return ret; + + flush_cache_vmap(shadow_start, shadow_end); + + /* + * We need to be careful about inter-cpu effects here. Consider: + * + * CPU#0 CPU#1 + * WRITE_ONCE(p, vmalloc(100)); while (x = READ_ONCE(p)) ; + * p[99] = 1; + * + * With compiler instrumentation, that ends up looking like this: + * + * CPU#0 CPU#1 + * // vmalloc() allocates memory + * // let a = area->addr + * // we reach kasan_populate_vmalloc + * // and call unpoison_range: + * STORE shadow(a), unpoison_val + * ... + * STORE shadow(a+99), unpoison_val x = LOAD p + * // rest of vmalloc process + * STORE p, a LOAD shadow(x+99) + * + * If there is no barrier between the end of unpoisioning the shadow + * and the store of the result to p, the stores could be committed + * in a different order by CPU#0, and CPU#1 could erroneously observe + * poison in the shadow. + * + * We need some sort of barrier between the stores. + * + * In the vmalloc() case, this is provided by a smp_wmb() in + * clear_vm_uninitialized_flag(). In the per-cpu allocator and in + * get_vm_area() and friends, the caller gets shadow allocated but + * doesn't have any pages mapped into the virtual address space that + * has been reserved. Mapping those pages in will involve taking and + * releasing a page-table lock, which will provide the barrier. + */ + + return 0; +} + +/* + * Poison the shadow for a vmalloc region. Called as part of the + * freeing process at the time the region is freed. 
+ */ +void kasan_poison_vmalloc(const void *start, unsigned long size) +{ + if (!is_vmalloc_or_module_addr(start)) + return; + + size = round_up(size, KASAN_GRANULE_SIZE); + poison_range(start, size, KASAN_VMALLOC_INVALID); +} + +void kasan_unpoison_vmalloc(const void *start, unsigned long size) +{ + if (!is_vmalloc_or_module_addr(start)) + return; + + unpoison_range(start, size); +} + +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, + void *unused) +{ + unsigned long page; + + page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT); + + spin_lock(&init_mm.page_table_lock); + + if (likely(!pte_none(*ptep))) { + pte_clear(&init_mm, addr, ptep); + free_page(page); + } + spin_unlock(&init_mm.page_table_lock); + + return 0; +} + +/* + * Release the backing for the vmalloc region [start, end), which + * lies within the free region [free_region_start, free_region_end). + * + * This can be run lazily, long after the region was freed. It runs + * under vmap_area_lock, so it's not safe to interact with the vmalloc/vmap + * infrastructure. + * + * How does this work? + * ------------------- + * + * We have a region that is page aligned, labelled as A. + * That might not map onto the shadow in a way that is page-aligned: + * + * start end + * v v + * |????????|????????|AAAAAAAA|AA....AA|AAAAAAAA|????????| < vmalloc + * -------- -------- -------- -------- -------- + * | | | | | + * | | | /-------/ | + * \-------\|/------/ |/---------------/ + * ||| || + * |??AAAAAA|AAAAAAAA|AA??????| < shadow + * (1) (2) (3) + * + * First we align the start upwards and the end downwards, so that the + * shadow of the region aligns with shadow page boundaries. In the + * example, this gives us the shadow page (2). This is the shadow entirely + * covered by this allocation. + * + * Then we have the tricky bits. We want to know if we can free the + * partially covered shadow pages - (1) and (3) in the example. For this, + * we are given the start and end of the free region that contains this + * allocation. Extending our previous example, we could have: + * + * free_region_start free_region_end + * | start end | + * v v v v + * |FFFFFFFF|FFFFFFFF|AAAAAAAA|AA....AA|AAAAAAAA|FFFFFFFF| < vmalloc + * -------- -------- -------- -------- -------- + * | | | | | + * | | | /-------/ | + * \-------\|/------/ |/---------------/ + * ||| || + * |FFAAAAAA|AAAAAAAA|AAF?????| < shadow + * (1) (2) (3) + * + * Once again, we align the start of the free region up, and the end of + * the free region down so that the shadow is page aligned. So we can free + * page (1) - we know no allocation currently uses anything in that page, + * because all of it is in the vmalloc free region. But we cannot free + * page (3), because we can't be sure that the rest of it is unused. + * + * We only consider pages that contain part of the original region for + * freeing: we don't try to free other pages from the free region or we'd + * end up trying to free huge chunks of virtual address space. + * + * Concurrency + * ----------- + * + * How do we know that we're not freeing a page that is simultaneously + * being used for a fresh allocation in kasan_populate_vmalloc(_pte)? + * + * We _can_ have kasan_release_vmalloc and kasan_populate_vmalloc running + * at the same time. While we run under free_vmap_area_lock, the population + * code does not. 
+ * + * free_vmap_area_lock instead operates to ensure that the larger range + * [free_region_start, free_region_end) is safe: because __alloc_vmap_area and + * the per-cpu region-finding algorithm both run under free_vmap_area_lock, + * no space identified as free will become used while we are running. This + * means that so long as we are careful with alignment and only free shadow + * pages entirely covered by the free region, we will not run in to any + * trouble - any simultaneous allocations will be for disjoint regions. + */ +void kasan_release_vmalloc(unsigned long start, unsigned long end, + unsigned long free_region_start, + unsigned long free_region_end) +{ + void *shadow_start, *shadow_end; + unsigned long region_start, region_end; + unsigned long size; + + region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE); + region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_GRANULE_SIZE); + + free_region_start = ALIGN(free_region_start, + PAGE_SIZE * KASAN_GRANULE_SIZE); + + if (start != region_start && + free_region_start < region_start) + region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE; + + free_region_end = ALIGN_DOWN(free_region_end, + PAGE_SIZE * KASAN_GRANULE_SIZE); + + if (end != region_end && + free_region_end > region_end) + region_end += PAGE_SIZE * KASAN_GRANULE_SIZE; + + shadow_start = kasan_mem_to_shadow((void *)region_start); + shadow_end = kasan_mem_to_shadow((void *)region_end); + + if (shadow_end > shadow_start) { + size = shadow_end - shadow_start; + apply_to_existing_page_range(&init_mm, + (unsigned long)shadow_start, + size, kasan_depopulate_vmalloc_pte, + NULL); + flush_tlb_kernel_range((unsigned long)shadow_start, + (unsigned long)shadow_end); + } +} + +#else /* CONFIG_KASAN_VMALLOC */ + +int kasan_module_alloc(void *addr, size_t size) +{ + void *ret; + size_t scaled_size; + size_t shadow_size; + unsigned long shadow_start; + + shadow_start = (unsigned long)kasan_mem_to_shadow(addr); + scaled_size = (size + KASAN_GRANULE_SIZE - 1) >> + KASAN_SHADOW_SCALE_SHIFT; + shadow_size = round_up(scaled_size, PAGE_SIZE); + + if (WARN_ON(!PAGE_ALIGNED(shadow_start))) + return -EINVAL; + + ret = __vmalloc_node_range(shadow_size, 1, shadow_start, + shadow_start + shadow_size, + GFP_KERNEL, + PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, + __builtin_return_address(0)); + + if (ret) { + __memset(ret, KASAN_SHADOW_INIT, shadow_size); + find_vm_area(addr)->flags |= VM_KASAN; + kmemleak_ignore(ret); + return 0; + } + + return -ENOMEM; +} + +void kasan_free_shadow(const struct vm_struct *vm) +{ + if (vm->flags & VM_KASAN) + vfree(kasan_mem_to_shadow(vm->addr)); +} + +#endif From patchwork Fri Dec 18 22:02:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5586FC2D0E4 for ; Fri, 18 Dec 2020 22:02:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0985923BB5 for ; Fri, 18 Dec 2020 22:02:40 +0000 (UTC) 
DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0985923BB5 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A04186B0068; Fri, 18 Dec 2020 17:02:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 97D4B6B0087; Fri, 18 Dec 2020 17:02:39 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 844BF6B0088; Fri, 18 Dec 2020 17:02:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0227.hostedemail.com [216.40.44.227]) by kanga.kvack.org (Postfix) with ESMTP id 6C1586B0068 for ; Fri, 18 Dec 2020 17:02:39 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 345218249980 for ; Fri, 18 Dec 2020 22:02:39 +0000 (UTC) X-FDA: 77607778038.07.lamp07_3d025b327440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin07.hostedemail.com (Postfix) with ESMTP id D8804180D8FB7 for ; Fri, 18 Dec 2020 22:02:38 +0000 (UTC) X-HE-Tag: lamp07_3d025b327440 X-Filterd-Recvd-Size: 5789 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf11.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:38 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:36 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328957; bh=3sfx/e9BVfHzKtPV+SD6U8fRuN5aQYIP2gdZrh3nRQ0=; h=From:To:Subject:In-Reply-To:From; b=woi1rQ4ZFStBSbxQtyhBqHJK8dBd/bxKfpILLxRhy3/jTEI8TbrE839h7gsevEGso C5NlnXV8jckw4o+oKEnDtWS1MTOH5gUgwl7FfHs3m2Xn80SQUo29u2xKNmeAlNcTWo J3DBZladxwORADjTa4DWJg6bZaGl3MTfBbc7XVTw= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 22/78] kasan: define KASAN_MEMORY_PER_SHADOW_PAGE Message-ID: <20201218220236.6c_3UIn-G%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: define KASAN_MEMORY_PER_SHADOW_PAGE Define KASAN_MEMORY_PER_SHADOW_PAGE as (KASAN_GRANULE_SIZE << PAGE_SHIFT), which is the same as (KASAN_GRANULE_SIZE * PAGE_SIZE) for software modes that use shadow memory, and use it across KASAN code to simplify it. 
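To make the new constant concrete, here is a small standalone sketch of the arithmetic (illustrative only, not kernel code; it assumes the usual KASAN_SHADOW_SCALE_SHIFT of 3 and 4 KiB pages, i.e. PAGE_SHIFT of 12):

/*
 * Userspace model of KASAN_MEMORY_PER_SHADOW_PAGE. Values are
 * assumptions for illustration, not taken from a particular arch.
 */
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT	3
#define PAGE_SHIFT			12
#define KASAN_GRANULE_SIZE		(1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_GRANULE_SIZE << PAGE_SHIFT)

int main(void)
{
	/*
	 * Each shadow byte tracks one 8-byte granule, so one 4 KiB page
	 * of shadow covers 4096 * 8 = 32768 bytes of kernel memory.
	 * That 32 KiB unit is what the alignment checks in
	 * kasan_remove_zero_shadow(), kasan_add_zero_shadow() and
	 * kasan_release_vmalloc() now round to.
	 */
	printf("granule size: %lu bytes\n", KASAN_GRANULE_SIZE);
	printf("memory covered per shadow page: %lu bytes\n",
	       KASAN_MEMORY_PER_SHADOW_PAGE);
	return 0;
}

With these assumed values the program prints 8 and 32768, matching (KASAN_GRANULE_SIZE * PAGE_SIZE) as stated above.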
Link: https://lkml.kernel.org/r/8329391cfe14b5cffd3decf3b5c535b6ce21eef6.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/init.c | 10 ++++------ mm/kasan/kasan.h | 2 ++ mm/kasan/shadow.c | 16 +++++++--------- 3 files changed, 13 insertions(+), 15 deletions(-) --- a/mm/kasan/init.c~kasan-define-kasan_memory_per_shadow_page +++ a/mm/kasan/init.c @@ -441,9 +441,8 @@ void kasan_remove_zero_shadow(void *star addr = (unsigned long)kasan_mem_to_shadow(start); end = addr + (size >> KASAN_SHADOW_SCALE_SHIFT); - if (WARN_ON((unsigned long)start % - (KASAN_GRANULE_SIZE * PAGE_SIZE)) || - WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE))) + if (WARN_ON((unsigned long)start % KASAN_MEMORY_PER_SHADOW_PAGE) || + WARN_ON(size % KASAN_MEMORY_PER_SHADOW_PAGE)) return; for (; addr < end; addr = next) { @@ -476,9 +475,8 @@ int kasan_add_zero_shadow(void *start, u shadow_start = kasan_mem_to_shadow(start); shadow_end = shadow_start + (size >> KASAN_SHADOW_SCALE_SHIFT); - if (WARN_ON((unsigned long)start % - (KASAN_GRANULE_SIZE * PAGE_SIZE)) || - WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE))) + if (WARN_ON((unsigned long)start % KASAN_MEMORY_PER_SHADOW_PAGE) || + WARN_ON(size % KASAN_MEMORY_PER_SHADOW_PAGE)) return -EINVAL; ret = kasan_populate_early_shadow(shadow_start, shadow_end); --- a/mm/kasan/kasan.h~kasan-define-kasan_memory_per_shadow_page +++ a/mm/kasan/kasan.h @@ -8,6 +8,8 @@ #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) #define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1) +#define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_GRANULE_SIZE << PAGE_SHIFT) + #define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */ #define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ #define KASAN_TAG_MAX 0xFD /* maximum value for random tags */ --- a/mm/kasan/shadow.c~kasan-define-kasan_memory_per_shadow_page +++ a/mm/kasan/shadow.c @@ -174,7 +174,7 @@ static int __meminit kasan_mem_notifier( shadow_end = shadow_start + shadow_size; if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) || - WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT))) + WARN_ON(start_kaddr % KASAN_MEMORY_PER_SHADOW_PAGE)) return NOTIFY_BAD; switch (action) { @@ -445,22 +445,20 @@ void kasan_release_vmalloc(unsigned long unsigned long region_start, region_end; unsigned long size; - region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE); - region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_GRANULE_SIZE); + region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE); + region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE); - free_region_start = ALIGN(free_region_start, - PAGE_SIZE * KASAN_GRANULE_SIZE); + free_region_start = ALIGN(free_region_start, KASAN_MEMORY_PER_SHADOW_PAGE); if (start != region_start && free_region_start < region_start) - region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE; + region_start -= KASAN_MEMORY_PER_SHADOW_PAGE; - free_region_end = ALIGN_DOWN(free_region_end, - PAGE_SIZE * KASAN_GRANULE_SIZE); + free_region_end = ALIGN_DOWN(free_region_end, KASAN_MEMORY_PER_SHADOW_PAGE); if (end != region_end && free_region_end > region_end) - region_end += PAGE_SIZE * KASAN_GRANULE_SIZE; + region_end += KASAN_MEMORY_PER_SHADOW_PAGE; shadow_start = kasan_mem_to_shadow((void 
*)region_start); shadow_end = kasan_mem_to_shadow((void *)region_end); From patchwork Fri Dec 18 22:02:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983073 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2BB6CC2D0E4 for ; Fri, 18 Dec 2020 22:02:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D370423B8C for ; Fri, 18 Dec 2020 22:02:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D370423B8C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 788E76B0087; Fri, 18 Dec 2020 17:02:50 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 6993B6B0088; Fri, 18 Dec 2020 17:02:50 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4C1BB6B0089; Fri, 18 Dec 2020 17:02:50 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0118.hostedemail.com [216.40.44.118]) by kanga.kvack.org (Postfix) with ESMTP id 315BB6B0087 for ; Fri, 18 Dec 2020 17:02:50 -0500 (EST) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id F0718181C43DC for ; Fri, 18 Dec 2020 22:02:49 +0000 (UTC) X-FDA: 77607778458.20.north16_5e1276827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id 2359F180C9F5C for ; Fri, 18 Dec 2020 22:02:42 +0000 (UTC) X-HE-Tag: north16_5e1276827440 X-Filterd-Recvd-Size: 5724 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:41 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:39 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328960; bh=WrCp/mYoc7bkusflRSZG45RSf8Huec1dFaE455zpBKc=; h=From:To:Subject:In-Reply-To:From; b=M7xn7VR2W5jQk//s64R1X4LUYh4CBjC0hpI+2TfeSud/OM4wndO5rB1aRhCYvxaKl 5F2C6guqmjiIn8z92uL3QLg9ogTraPhgZ86CncwiWDaVBzUNDXS3TT75BSGPKLeCuu EKJi4FVVQSDdE54Pkd5Vg8dTOSTGxt+V/ubZ926s= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 23/78] kasan: rename report and tags files Message-ID: <20201218220239.vkaslEJnn%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, 
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename report and tags files Rename generic_report.c to report_generic.c and tags_report.c to report_sw_tags.c, as their content is more relevant to report.c file. Also rename tags.c to sw_tags.c to better reflect that this file contains code for software tag-based mode. No functional changes. Link: https://lkml.kernel.org/r/a6105d416da97d389580015afed66c4c3cfd4c08.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/Makefile | 16 +- mm/kasan/generic_report.c | 160 ----------------------------- mm/kasan/report.c | 2 mm/kasan/report_generic.c | 160 +++++++++++++++++++++++++++++ mm/kasan/report_sw_tags.c | 88 ++++++++++++++++ mm/kasan/sw_tags.c | 195 ++++++++++++++++++++++++++++++++++++ mm/kasan/tags.c | 195 ------------------------------------ mm/kasan/tags_report.c | 88 ---------------- 8 files changed, 452 insertions(+), 452 deletions(-) --- a/mm/kasan/Makefile~kasan-rename-report-and-tags-files +++ a/mm/kasan/Makefile @@ -6,13 +6,13 @@ KCOV_INSTRUMENT := n # Disable ftrace to avoid recursion. CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_generic.o = $(CC_FLAGS_FTRACE) -CFLAGS_REMOVE_generic_report.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_quarantine.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_report_generic.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_report_sw_tags.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_shadow.o = $(CC_FLAGS_FTRACE) -CFLAGS_REMOVE_tags.o = $(CC_FLAGS_FTRACE) -CFLAGS_REMOVE_tags_report.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_sw_tags.o = $(CC_FLAGS_FTRACE) # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 @@ -23,14 +23,14 @@ CC_FLAGS_KASAN_RUNTIME += -DDISABLE_BRAN CFLAGS_common.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_generic.o := $(CC_FLAGS_KASAN_RUNTIME) -CFLAGS_generic_report.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_init.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_quarantine.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_report_generic.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_report_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KASAN_RUNTIME) -CFLAGS_tags.o := $(CC_FLAGS_KASAN_RUNTIME) -CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) obj-$(CONFIG_KASAN) := common.o report.o -obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o generic_report.o shadow.o quarantine.o -obj-$(CONFIG_KASAN_SW_TAGS) += init.o shadow.o tags.o tags_report.o +obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o report_generic.o shadow.o quarantine.o +obj-$(CONFIG_KASAN_SW_TAGS) += init.o report_sw_tags.o shadow.o sw_tags.o --- a/mm/kasan/report.c~kasan-rename-report-and-tags-files +++ a/mm/kasan/report.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains common generic and tag-based KASAN error reporting code. + * This file contains common KASAN error reporting code. * * Copyright (c) 2014 Samsung Electronics Co., Ltd. 
* Author: Andrey Ryabinin diff --git a/mm/kasan/generic_report.c b/mm/kasan/report_generic.c similarity index 100% rename from mm/kasan/generic_report.c rename to mm/kasan/report_generic.c diff --git a/mm/kasan/tags_report.c b/mm/kasan/report_sw_tags.c similarity index 100% rename from mm/kasan/tags_report.c rename to mm/kasan/report_sw_tags.c diff --git a/mm/kasan/tags.c b/mm/kasan/sw_tags.c similarity index 100% rename from mm/kasan/tags.c rename to mm/kasan/sw_tags.c From patchwork Fri Dec 18 22:02:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983069 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0461C4361B for ; Fri, 18 Dec 2020 22:02:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 580D323BC7 for ; Fri, 18 Dec 2020 22:02:46 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 580D323BC7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id ED42C6B006C; Fri, 18 Dec 2020 17:02:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E5F026B006E; Fri, 18 Dec 2020 17:02:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D26E36B0087; Fri, 18 Dec 2020 17:02:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id B97136B006C for ; Fri, 18 Dec 2020 17:02:45 -0500 (EST) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 891231EFF for ; Fri, 18 Dec 2020 22:02:45 +0000 (UTC) X-FDA: 77607778290.10.desk18_000bb9a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin10.hostedemail.com (Postfix) with ESMTP id 7B67116A4B7 for ; Fri, 18 Dec 2020 22:02:45 +0000 (UTC) X-HE-Tag: desk18_000bb9a27440 X-Filterd-Recvd-Size: 3477 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf40.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:44 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:43 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328964; bh=BnKABBFXehNVsAf/DxE7GIkSrsUgxtLfAbUVA2DgJtY=; h=From:To:Subject:In-Reply-To:From; b=2U3axS9pxfj+kaaip2gNYLf5ylIQobHnS3mhelRPT3JcDoQ4KHgbtnrLITZLGVqPl TplsxqxwjO7HNZGZFU/f9NS1uL496n5LgV5JsqlKDRI8qbeCfJBYjdkWuO1emZBd4J rbPsRHmr2SQ9d+qAb3TeILBS4TnyoMDlFjUqOzOw= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, 
mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 24/78] kasan: don't duplicate config dependencies Message-ID: <20201218220243.1tkrm8XiL%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: don't duplicate config dependencies Both KASAN_GENERIC and KASAN_SW_TAGS have common dependencies, move those to KASAN. Link: https://lkml.kernel.org/r/c1cc0d562608a318c607afe22db5ec2a7af72e47.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- lib/Kconfig.kasan | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) --- a/lib/Kconfig.kasan~kasan-dont-duplicate-config-dependencies +++ a/lib/Kconfig.kasan @@ -24,6 +24,8 @@ menuconfig KASAN (HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS) depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS + select CONSTRUCTORS + select STACKDEPOT help Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, designed to find out-of-bounds accesses and use-after-free bugs. @@ -46,10 +48,7 @@ choice config KASAN_GENERIC bool "Generic mode" depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC - depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) select SLUB_DEBUG if SLUB - select CONSTRUCTORS - select STACKDEPOT help Enables generic KASAN mode. @@ -70,10 +69,7 @@ config KASAN_GENERIC config KASAN_SW_TAGS bool "Software tag-based mode" depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS - depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) select SLUB_DEBUG if SLUB - select CONSTRUCTORS - select STACKDEPOT help Enables software tag-based KASAN mode. 
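As a purely illustrative aside on the effect of this change: a .config that enables either software mode now gets the shared selections via the top-level CONFIG_KASAN symbol rather than from each per-mode option. A minimal generic-mode fragment might end up looking like the sketch below; the exact symbols depend on the architecture and the rest of the configuration, so treat it as an assumption-laden example rather than a reference config:

# Illustrative .config fragment only (not part of the patch).
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
# CONSTRUCTORS and STACKDEPOT are now pulled in by CONFIG_KASAN itself
CONFIG_CONSTRUCTORS=y
CONFIG_STACKDEPOT=y
# still selected per-mode when the SLUB allocator is in use
CONFIG_SLUB_DEBUG=y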
From patchwork Fri Dec 18 22:02:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63181C4361B for ; Fri, 18 Dec 2020 22:02:50 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 046D423BC7 for ; Fri, 18 Dec 2020 22:02:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 046D423BC7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9CD896B006E; Fri, 18 Dec 2020 17:02:49 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 97C236B0087; Fri, 18 Dec 2020 17:02:49 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 86BF26B0088; Fri, 18 Dec 2020 17:02:49 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0035.hostedemail.com [216.40.44.35]) by kanga.kvack.org (Postfix) with ESMTP id 708656B006E for ; Fri, 18 Dec 2020 17:02:49 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 39294181B9DA9 for ; Fri, 18 Dec 2020 22:02:49 +0000 (UTC) X-FDA: 77607778458.04.pet41_5f0186227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 117B7801D1C0 for ; Fri, 18 Dec 2020 22:02:49 +0000 (UTC) X-HE-Tag: pet41_5f0186227440 X-Filterd-Recvd-Size: 5430 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf11.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:48 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:46 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328967; bh=l3lg1QHfWNiKhxkb3gVgUjhwfevdtIV2NAFdxq0ggH0=; h=From:To:Subject:In-Reply-To:From; b=gqKfATHpJqd4Zv/DV63pWO7Ts5yIuR9ABFkhF0mMKzhQqGFfmxBegmD1lYCRPx5Ja Y03mTjlNQe/k5dJiviu22rJREcwEA2M9zX4QEWTcN//2bqSmnWYFsmFc3FpyFp6Prx fbjLAxJYWQfoMHdZ5CloCtmcTXlSccz1q0HMR1wo= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 25/78] kasan: hide invalid free check implementation Message-ID: <20201218220246.mupH-NTEA%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000003, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: hide invalid free check implementation This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. For software KASAN modes the check is based on the value in the shadow memory. Hardware tag-based KASAN won't be using shadow, so hide the implementation of the check in check_invalid_free(). Also simplify the code for software tag-based mode. No functional changes for software modes. Link: https://lkml.kernel.org/r/d01534a4b977f97d87515dc590e6348e1406de81.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 19 +------------------ mm/kasan/generic.c | 7 +++++++ mm/kasan/kasan.h | 2 ++ mm/kasan/sw_tags.c | 9 +++++++++ 4 files changed, 19 insertions(+), 18 deletions(-) --- a/mm/kasan/common.c~kasan-hide-invalid-free-check-implementation +++ a/mm/kasan/common.c @@ -277,25 +277,9 @@ void * __must_check kasan_init_slab_obj( return (void *)object; } -static inline bool shadow_invalid(u8 tag, s8 shadow_byte) -{ - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) - return shadow_byte < 0 || - shadow_byte >= KASAN_GRANULE_SIZE; - - /* else CONFIG_KASAN_SW_TAGS: */ - if ((u8)shadow_byte == KASAN_TAG_INVALID) - return true; - if ((tag != KASAN_TAG_KERNEL) && (tag != (u8)shadow_byte)) - return true; - - return false; -} - static bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip, bool quarantine) { - s8 shadow_byte; u8 tag; void *tagged_object; unsigned long rounded_up_size; @@ -314,8 +298,7 @@ static bool __kasan_slab_free(struct kme if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU)) return false; - shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object)); - if (shadow_invalid(tag, shadow_byte)) { + if (check_invalid_free(tagged_object)) { kasan_report_invalid_free(tagged_object, ip); return true; } --- a/mm/kasan/generic.c~kasan-hide-invalid-free-check-implementation +++ a/mm/kasan/generic.c @@ -187,6 +187,13 @@ bool check_memory_region(unsigned long a return check_memory_region_inline(addr, size, write, ret_ip); } +bool check_invalid_free(void *addr) +{ + s8 shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr)); + + return shadow_byte < 0 || shadow_byte >= KASAN_GRANULE_SIZE; +} + void kasan_cache_shrink(struct kmem_cache *cache) { quarantine_remove_cache(cache); --- a/mm/kasan/kasan.h~kasan-hide-invalid-free-check-implementation +++ a/mm/kasan/kasan.h @@ -166,6 +166,8 @@ void unpoison_range(const void *address, bool check_memory_region(unsigned long addr, size_t size, bool write, unsigned long ret_ip); +bool check_invalid_free(void *addr); + void *find_first_bad_addr(void *addr, size_t size); const char *get_bug_type(struct kasan_access_info *info); --- a/mm/kasan/sw_tags.c~kasan-hide-invalid-free-check-implementation +++ a/mm/kasan/sw_tags.c @@ -121,6 +121,15 @@ bool check_memory_region(unsigned long a return true; } +bool check_invalid_free(void *addr) +{ + u8 tag = get_tag(addr); + u8 shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(reset_tag(addr))); + + return (shadow_byte == KASAN_TAG_INVALID) || + (tag != KASAN_TAG_KERNEL && tag != shadow_byte); +} + #define DEFINE_HWASAN_LOAD_STORE(size) \ void 
__hwasan_load##size##_noabort(unsigned long addr) \ { \ From patchwork Fri Dec 18 22:02:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983075 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A651CC4361B for ; Fri, 18 Dec 2020 22:02:54 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 367D023B9B for ; Fri, 18 Dec 2020 22:02:54 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 367D023B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2A26A6B0088; Fri, 18 Dec 2020 17:02:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1DBE96B0089; Fri, 18 Dec 2020 17:02:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 030006B008A; Fri, 18 Dec 2020 17:02:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0149.hostedemail.com [216.40.44.149]) by kanga.kvack.org (Postfix) with ESMTP id DC1266B0088 for ; Fri, 18 Dec 2020 17:02:52 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id A89911813F3C5 for ; Fri, 18 Dec 2020 22:02:52 +0000 (UTC) X-FDA: 77607778584.23.kitty21_4316b2627440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 8014E37604 for ; Fri, 18 Dec 2020 22:02:52 +0000 (UTC) X-HE-Tag: kitty21_4316b2627440 X-Filterd-Recvd-Size: 13124 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:51 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:50 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328971; bh=756GrEw9riVx6R5mXVZ+zUzRgjk0NVBevAxEpXN7LqI=; h=From:To:Subject:In-Reply-To:From; b=PkCJhpcxgdKDu3AN0Kkvz/LP4Z9wnhB2yw4nn4kjoR7FE1UPFcE8NwNr85tJRVBEK Q28cvGAVPZ+cevzns4lKCixG7ZbyLdTkl471R1G70awbThDAJgZB3+Y8kLKWSDf5Ek fS8eRY7g2oiwBIR45imSc2mQSReTyMySvxaoFbxE= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 26/78] kasan: decode stack frame only with KASAN_STACK_ENABLE Message-ID: <20201218220250.kUbArAE8a%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, 
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: decode stack frame only with KASAN_STACK_ENABLE Decoding routines aren't needed when CONFIG_KASAN_STACK_ENABLE is not enabled. Currently only generic KASAN mode implements stack error reporting. No functional changes for software modes. Link: https://lkml.kernel.org/r/05a24db36f5ec876af876a299bbea98c29468ebd.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/kasan.h | 6 + mm/kasan/report.c | 162 ------------------------------------ mm/kasan/report_generic.c | 162 ++++++++++++++++++++++++++++++++++++ 3 files changed, 168 insertions(+), 162 deletions(-) --- a/mm/kasan/kasan.h~kasan-decode-stack-frame-only-with-kasan_stack_enable +++ a/mm/kasan/kasan.h @@ -171,6 +171,12 @@ bool check_invalid_free(void *addr); void *find_first_bad_addr(void *addr, size_t size); const char *get_bug_type(struct kasan_access_info *info); +#if defined(CONFIG_KASAN_GENERIC) && CONFIG_KASAN_STACK +void print_address_stack_frame(const void *addr); +#else +static inline void print_address_stack_frame(const void *addr) { } +#endif + bool kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); void kasan_report_invalid_free(void *object, unsigned long ip); --- a/mm/kasan/report.c~kasan-decode-stack-frame-only-with-kasan_stack_enable +++ a/mm/kasan/report.c @@ -211,168 +211,6 @@ static inline bool init_task_stack_addr( sizeof(init_thread_union.stack)); } -static bool __must_check tokenize_frame_descr(const char **frame_descr, - char *token, size_t max_tok_len, - unsigned long *value) -{ - const char *sep = strchr(*frame_descr, ' '); - - if (sep == NULL) - sep = *frame_descr + strlen(*frame_descr); - - if (token != NULL) { - const size_t tok_len = sep - *frame_descr; - - if (tok_len + 1 > max_tok_len) { - pr_err("KASAN internal error: frame description too long: %s\n", - *frame_descr); - return false; - } - - /* Copy token (+ 1 byte for '\0'). */ - strlcpy(token, *frame_descr, tok_len + 1); - } - - /* Advance frame_descr past separator. */ - *frame_descr = sep + 1; - - if (value != NULL && kstrtoul(token, 10, value)) { - pr_err("KASAN internal error: not a valid number: %s\n", token); - return false; - } - - return true; -} - -static void print_decoded_frame_descr(const char *frame_descr) -{ - /* - * We need to parse the following string: - * "n alloc_1 alloc_2 ... alloc_n" - * where alloc_i looks like - * "offset size len name" - * or "offset size len name:line". - */ - - char token[64]; - unsigned long num_objects; - - if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), - &num_objects)) - return; - - pr_err("\n"); - pr_err("this frame has %lu %s:\n", num_objects, - num_objects == 1 ? 
"object" : "objects"); - - while (num_objects--) { - unsigned long offset; - unsigned long size; - - /* access offset */ - if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), - &offset)) - return; - /* access size */ - if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), - &size)) - return; - /* name length (unused) */ - if (!tokenize_frame_descr(&frame_descr, NULL, 0, NULL)) - return; - /* object name */ - if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), - NULL)) - return; - - /* Strip line number; without filename it's not very helpful. */ - strreplace(token, ':', '\0'); - - /* Finally, print object information. */ - pr_err(" [%lu, %lu) '%s'", offset, offset + size, token); - } -} - -static bool __must_check get_address_stack_frame_info(const void *addr, - unsigned long *offset, - const char **frame_descr, - const void **frame_pc) -{ - unsigned long aligned_addr; - unsigned long mem_ptr; - const u8 *shadow_bottom; - const u8 *shadow_ptr; - const unsigned long *frame; - - BUILD_BUG_ON(IS_ENABLED(CONFIG_STACK_GROWSUP)); - - /* - * NOTE: We currently only support printing frame information for - * accesses to the task's own stack. - */ - if (!object_is_on_stack(addr)) - return false; - - aligned_addr = round_down((unsigned long)addr, sizeof(long)); - mem_ptr = round_down(aligned_addr, KASAN_GRANULE_SIZE); - shadow_ptr = kasan_mem_to_shadow((void *)aligned_addr); - shadow_bottom = kasan_mem_to_shadow(end_of_stack(current)); - - while (shadow_ptr >= shadow_bottom && *shadow_ptr != KASAN_STACK_LEFT) { - shadow_ptr--; - mem_ptr -= KASAN_GRANULE_SIZE; - } - - while (shadow_ptr >= shadow_bottom && *shadow_ptr == KASAN_STACK_LEFT) { - shadow_ptr--; - mem_ptr -= KASAN_GRANULE_SIZE; - } - - if (shadow_ptr < shadow_bottom) - return false; - - frame = (const unsigned long *)(mem_ptr + KASAN_GRANULE_SIZE); - if (frame[0] != KASAN_CURRENT_STACK_FRAME_MAGIC) { - pr_err("KASAN internal error: frame info validation failed; invalid marker: %lu\n", - frame[0]); - return false; - } - - *offset = (unsigned long)addr - (unsigned long)frame; - *frame_descr = (const char *)frame[1]; - *frame_pc = (void *)frame[2]; - - return true; -} - -static void print_address_stack_frame(const void *addr) -{ - unsigned long offset; - const char *frame_descr; - const void *frame_pc; - - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) - return; - - if (!get_address_stack_frame_info(addr, &offset, &frame_descr, - &frame_pc)) - return; - - /* - * get_address_stack_frame_info only returns true if the given addr is - * on the current task's stack. 
- */ - pr_err("\n"); - pr_err("addr %px is located in stack of task %s/%d at offset %lu in frame:\n", - addr, current->comm, task_pid_nr(current), offset); - pr_err(" %pS\n", frame_pc); - - if (!frame_descr) - return; - - print_decoded_frame_descr(frame_descr); -} - static void print_address_description(void *addr, u8 tag) { struct page *page = kasan_addr_to_page(addr); --- a/mm/kasan/report_generic.c~kasan-decode-stack-frame-only-with-kasan_stack_enable +++ a/mm/kasan/report_generic.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include #include @@ -122,6 +123,167 @@ const char *get_bug_type(struct kasan_ac return get_wild_bug_type(info); } +#if CONFIG_KASAN_STACK +static bool __must_check tokenize_frame_descr(const char **frame_descr, + char *token, size_t max_tok_len, + unsigned long *value) +{ + const char *sep = strchr(*frame_descr, ' '); + + if (sep == NULL) + sep = *frame_descr + strlen(*frame_descr); + + if (token != NULL) { + const size_t tok_len = sep - *frame_descr; + + if (tok_len + 1 > max_tok_len) { + pr_err("KASAN internal error: frame description too long: %s\n", + *frame_descr); + return false; + } + + /* Copy token (+ 1 byte for '\0'). */ + strlcpy(token, *frame_descr, tok_len + 1); + } + + /* Advance frame_descr past separator. */ + *frame_descr = sep + 1; + + if (value != NULL && kstrtoul(token, 10, value)) { + pr_err("KASAN internal error: not a valid number: %s\n", token); + return false; + } + + return true; +} + +static void print_decoded_frame_descr(const char *frame_descr) +{ + /* + * We need to parse the following string: + * "n alloc_1 alloc_2 ... alloc_n" + * where alloc_i looks like + * "offset size len name" + * or "offset size len name:line". + */ + + char token[64]; + unsigned long num_objects; + + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &num_objects)) + return; + + pr_err("\n"); + pr_err("this frame has %lu %s:\n", num_objects, + num_objects == 1 ? "object" : "objects"); + + while (num_objects--) { + unsigned long offset; + unsigned long size; + + /* access offset */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &offset)) + return; + /* access size */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &size)) + return; + /* name length (unused) */ + if (!tokenize_frame_descr(&frame_descr, NULL, 0, NULL)) + return; + /* object name */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + NULL)) + return; + + /* Strip line number; without filename it's not very helpful. */ + strreplace(token, ':', '\0'); + + /* Finally, print object information. */ + pr_err(" [%lu, %lu) '%s'", offset, offset + size, token); + } +} + +static bool __must_check get_address_stack_frame_info(const void *addr, + unsigned long *offset, + const char **frame_descr, + const void **frame_pc) +{ + unsigned long aligned_addr; + unsigned long mem_ptr; + const u8 *shadow_bottom; + const u8 *shadow_ptr; + const unsigned long *frame; + + BUILD_BUG_ON(IS_ENABLED(CONFIG_STACK_GROWSUP)); + + /* + * NOTE: We currently only support printing frame information for + * accesses to the task's own stack. 
+ */ + if (!object_is_on_stack(addr)) + return false; + + aligned_addr = round_down((unsigned long)addr, sizeof(long)); + mem_ptr = round_down(aligned_addr, KASAN_GRANULE_SIZE); + shadow_ptr = kasan_mem_to_shadow((void *)aligned_addr); + shadow_bottom = kasan_mem_to_shadow(end_of_stack(current)); + + while (shadow_ptr >= shadow_bottom && *shadow_ptr != KASAN_STACK_LEFT) { + shadow_ptr--; + mem_ptr -= KASAN_GRANULE_SIZE; + } + + while (shadow_ptr >= shadow_bottom && *shadow_ptr == KASAN_STACK_LEFT) { + shadow_ptr--; + mem_ptr -= KASAN_GRANULE_SIZE; + } + + if (shadow_ptr < shadow_bottom) + return false; + + frame = (const unsigned long *)(mem_ptr + KASAN_GRANULE_SIZE); + if (frame[0] != KASAN_CURRENT_STACK_FRAME_MAGIC) { + pr_err("KASAN internal error: frame info validation failed; invalid marker: %lu\n", + frame[0]); + return false; + } + + *offset = (unsigned long)addr - (unsigned long)frame; + *frame_descr = (const char *)frame[1]; + *frame_pc = (void *)frame[2]; + + return true; +} + +void print_address_stack_frame(const void *addr) +{ + unsigned long offset; + const char *frame_descr; + const void *frame_pc; + + if (!get_address_stack_frame_info(addr, &offset, &frame_descr, + &frame_pc)) + return; + + /* + * get_address_stack_frame_info only returns true if the given addr is + * on the current task's stack. + */ + pr_err("\n"); + pr_err("addr %px is located in stack of task %s/%d at offset %lu in frame:\n", + addr, current->comm, task_pid_nr(current), offset); + pr_err(" %pS\n", frame_pc); + + if (!frame_descr) + return; + + print_decoded_frame_descr(frame_descr); +} +#endif /* CONFIG_KASAN_STACK */ + #define DEFINE_ASAN_REPORT_LOAD(size) \ void __asan_report_load##size##_noabort(unsigned long addr) \ { \ From patchwork Fri Dec 18 22:02:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983077 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37E65C4361B for ; Fri, 18 Dec 2020 22:02:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DB9C323B9B for ; Fri, 18 Dec 2020 22:02:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DB9C323B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 318AD6B0071; Fri, 18 Dec 2020 17:02:56 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 279406B0072; Fri, 18 Dec 2020 17:02:56 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 11D956B0089; Fri, 18 Dec 2020 17:02:56 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0093.hostedemail.com [216.40.44.93]) by kanga.kvack.org (Postfix) with ESMTP id EC4EF6B0071 for ; Fri, 18 Dec 2020 17:02:55 -0500 (EST) Received: from smtpin28.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B19874DC8 for ; Fri, 18 Dec 2020 22:02:55 +0000 (UTC) X-FDA: 77607778710.28.burn70_130986e27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin28.hostedemail.com (Postfix) with ESMTP id 8B82E6D71 for ; Fri, 18 Dec 2020 22:02:55 +0000 (UTC) X-HE-Tag: burn70_130986e27440 X-Filterd-Recvd-Size: 4659 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:55 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:53 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328974; bh=wdKlI9yL+xh6v3JC0ySPdCRxNpCE+sflhhU2MWYOChE=; h=From:To:Subject:In-Reply-To:From; b=cr0GcaJod2HSNR4tjLMA8lyh9cgDTGDO0m10RJl00cw72pKUAoVn+olIoc8ZkuuNt W1K4tVh41oyfHt+jTFAiHavA648PPOMfESAC65NewXzN3L+FdCV/c/P+7Q1f5224pu seI9tS5khchT+hppUCM+eA+uLLZxHzKBaIWzyNII= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 27/78] kasan, arm64: only init shadow for software modes Message-ID: <20201218220253.TNKMoKDGf%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: only init shadow for software modes This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't be using shadow memory. Only initialize it when one of the software KASAN modes are enabled. No functional changes for software modes. Link: https://lkml.kernel.org/r/d1742eea2cd728d150d49b144e49b6433405c7ba.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/kasan.h | 8 ++++++-- arch/arm64/mm/kasan_init.c | 15 ++++++++++++++- 2 files changed, 20 insertions(+), 3 deletions(-) --- a/arch/arm64/include/asm/kasan.h~kasan-arm64-only-init-shadow-for-software-modes +++ a/arch/arm64/include/asm/kasan.h @@ -13,6 +13,12 @@ #define arch_kasan_get_tag(addr) __tag_get(addr) #ifdef CONFIG_KASAN +void kasan_init(void); +#else +static inline void kasan_init(void) { } +#endif + +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) /* * KASAN_SHADOW_START: beginning of the kernel virtual addresses. 
@@ -33,12 +39,10 @@ #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT))) #define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual) -void kasan_init(void); void kasan_copy_shadow(pgd_t *pgdir); asmlinkage void kasan_early_init(void); #else -static inline void kasan_init(void) { } static inline void kasan_copy_shadow(pgd_t *pgdir) { } #endif --- a/arch/arm64/mm/kasan_init.c~kasan-arm64-only-init-shadow-for-software-modes +++ a/arch/arm64/mm/kasan_init.c @@ -21,6 +21,8 @@ #include #include +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) + static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE); /* @@ -208,7 +210,7 @@ static void __init clear_pgds(unsigned l set_pgd(pgd_offset_k(start), __pgd(0)); } -void __init kasan_init(void) +static void __init kasan_init_shadow(void) { u64 kimg_shadow_start, kimg_shadow_end; u64 mod_shadow_start, mod_shadow_end; @@ -269,6 +271,17 @@ void __init kasan_init(void) memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); +} + +#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) */ + +static inline void __init kasan_init_shadow(void) { } + +#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ + +void __init kasan_init(void) +{ + kasan_init_shadow(); /* At this point kasan is fully initialized. Enable error messages */ init_task.kasan_depth = 0; From patchwork Fri Dec 18 22:02:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E04DC4361B for ; Fri, 18 Dec 2020 22:03:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E35EA23B9B for ; Fri, 18 Dec 2020 22:02:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E35EA23B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7D02C6B0072; Fri, 18 Dec 2020 17:02:59 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 75D756B0074; Fri, 18 Dec 2020 17:02:59 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 600736B0080; Fri, 18 Dec 2020 17:02:59 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0144.hostedemail.com [216.40.44.144]) by kanga.kvack.org (Postfix) with ESMTP id 441526B0072 for ; Fri, 18 Dec 2020 17:02:59 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 10BE4181AEF3E for ; Fri, 18 Dec 2020 22:02:59 +0000 (UTC) X-FDA: 77607778878.22.dock25_251310127440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP 
id E0564180C9A6E for ; Fri, 18 Dec 2020 22:02:58 +0000 (UTC) X-HE-Tag: dock25_251310127440 X-Filterd-Recvd-Size: 7004 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf32.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:02:58 +0000 (UTC) Date: Fri, 18 Dec 2020 14:02:56 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328977; bh=rO//3/1UfidKt13OxnpGFo0kBSvNUoU6eix40zc5yKE=; h=From:To:Subject:In-Reply-To:From; b=kFZeRwRsiI9QuKxXH9ypluZPKHDzcIs9XV2+qzrRca0iemfQM9SKDa00VO6oSmIB3 JOcjnJu00F/TZ15HIYLRj8sXCm+3iH84DfzNI6JNtrTzWbTi33rMjdJNuZSR0cCQat tHtnjTN2RKy2blxFM5Yqz/z52mjfXQrxIGbF3v8Q= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 28/78] kasan, arm64: only use kasan_depth for software modes Message-ID: <20201218220256.oHKji6pux%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: only use kasan_depth for software modes This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't use kasan_depth. Only define and use it when one of the software KASAN modes are enabled. No functional changes for software modes. Link: https://lkml.kernel.org/r/e16f15aeda90bc7fb4dfc2e243a14b74cc5c8219.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/mm/kasan_init.c | 11 ++++++++--- include/linux/kasan.h | 18 +++++++++--------- include/linux/sched.h | 2 +- init/init_task.c | 2 +- mm/kasan/common.c | 2 ++ mm/kasan/report.c | 2 ++ 6 files changed, 23 insertions(+), 14 deletions(-) --- a/arch/arm64/mm/kasan_init.c~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/arch/arm64/mm/kasan_init.c @@ -273,17 +273,22 @@ static void __init kasan_init_shadow(voi cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); } +static void __init kasan_init_depth(void) +{ + init_task.kasan_depth = 0; +} + #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) */ static inline void __init kasan_init_shadow(void) { } +static inline void __init kasan_init_depth(void) { } + #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ void __init kasan_init(void) { kasan_init_shadow(); - - /* At this point kasan is fully initialized. 
Enable error messages */ - init_task.kasan_depth = 0; + kasan_init_depth(); pr_info("KernelAddressSanitizer initialized\n"); } --- a/include/linux/kasan.h~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/include/linux/kasan.h @@ -52,6 +52,12 @@ static inline void *kasan_mem_to_shadow( int kasan_add_zero_shadow(void *start, unsigned long size); void kasan_remove_zero_shadow(void *start, unsigned long size); +/* Enable reporting bugs after kasan_disable_current() */ +extern void kasan_enable_current(void); + +/* Disable reporting bugs for current task */ +extern void kasan_disable_current(void); + #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ static inline int kasan_add_zero_shadow(void *start, unsigned long size) @@ -62,16 +68,13 @@ static inline void kasan_remove_zero_sha unsigned long size) {} +static inline void kasan_enable_current(void) {} +static inline void kasan_disable_current(void) {} + #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ #ifdef CONFIG_KASAN -/* Enable reporting bugs after kasan_disable_current() */ -extern void kasan_enable_current(void); - -/* Disable reporting bugs for current task */ -extern void kasan_disable_current(void); - void kasan_unpoison_range(const void *address, size_t size); void kasan_unpoison_task_stack(struct task_struct *task); @@ -122,9 +125,6 @@ static inline void kasan_unpoison_range( static inline void kasan_unpoison_task_stack(struct task_struct *task) {} -static inline void kasan_enable_current(void) {} -static inline void kasan_disable_current(void) {} - static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} static inline void kasan_free_pages(struct page *page, unsigned int order) {} --- a/include/linux/sched.h~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/include/linux/sched.h @@ -1234,7 +1234,7 @@ struct task_struct { u64 timer_slack_ns; u64 default_timer_slack_ns; -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) unsigned int kasan_depth; #endif --- a/init/init_task.c~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/init/init_task.c @@ -176,7 +176,7 @@ struct task_struct init_task .numa_group = NULL, .numa_faults = NULL, #endif -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) .kasan_depth = 1, #endif #ifdef CONFIG_KCSAN --- a/mm/kasan/common.c~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/mm/kasan/common.c @@ -46,6 +46,7 @@ void kasan_set_track(struct kasan_track track->stack = kasan_save_stack(flags); } +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) void kasan_enable_current(void) { current->kasan_depth++; @@ -55,6 +56,7 @@ void kasan_disable_current(void) { current->kasan_depth--; } +#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ void kasan_unpoison_range(const void *address, size_t size) { --- a/mm/kasan/report.c~kasan-arm64-only-use-kasan_depth-for-software-modes +++ a/mm/kasan/report.c @@ -292,8 +292,10 @@ static void print_shadow_for_address(con static bool report_enabled(void) { +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) if (current->kasan_depth) return false; +#endif if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags)) return true; return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags); From patchwork Fri Dec 18 22:03:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983081 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB305C4361B for ; Fri, 18 Dec 2020 22:03:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 832FE23B7E for ; Fri, 18 Dec 2020 22:03:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 832FE23B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2028F6B006E; Fri, 18 Dec 2020 17:03:03 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 18C4F6B0080; Fri, 18 Dec 2020 17:03:03 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 053856B0089; Fri, 18 Dec 2020 17:03:02 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id DF2226B0074 for ; Fri, 18 Dec 2020 17:03:02 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id A3DF81802151D for ; Fri, 18 Dec 2020 22:03:02 +0000 (UTC) X-FDA: 77607779004.23.color88_300032e27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 7C23137606 for ; Fri, 18 Dec 2020 22:03:02 +0000 (UTC) X-HE-Tag: color88_300032e27440 X-Filterd-Recvd-Size: 5176 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:01 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328981; bh=0o31pG6AdEpFva+9QxweYjoptW0XURaqpwcvMaqTi7c=; h=From:To:Subject:In-Reply-To:From; b=AvgUbR8rIzxf0x72aacKf0ebNnbyM5NezphlR2m9+582xdc2zrWid/cyYE6zKPN4I mAwNYv+tq5e0AGq20XFU+eC4zK+aSXJIevoJdcs0rNllTnvOhwFUNakHK/mmRU+Q2i u0S7fF1of7DyUEiWJSKRpkw+JzCghj6twvTFomgs= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 29/78] kasan, arm64: move initialization message Message-ID: <20201218220300.qqC_WE-uM%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: move initialization message Software tag-based KASAN mode is fully initialized with kasan_init_tags(), while the generic mode only 
requires kasan_init(). Move the initialization message for tag-based mode into kasan_init_tags(). Also fix pr_fmt() usage for KASAN code: generic.c doesn't need it as it doesn't use any printing functions; tag-based mode should use "kasan:" instead of KBUILD_MODNAME (which stands for file name). Link: https://lkml.kernel.org/r/29a30ea4e1750450dd1f693d25b7b6cb05913ecf.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/kasan.h | 9 +++------ arch/arm64/mm/kasan_init.c | 13 +++++-------- mm/kasan/generic.c | 2 -- mm/kasan/sw_tags.c | 4 +++- 4 files changed, 11 insertions(+), 17 deletions(-) --- a/arch/arm64/include/asm/kasan.h~kasan-arm64-move-initialization-message +++ a/arch/arm64/include/asm/kasan.h @@ -12,14 +12,10 @@ #define arch_kasan_reset_tag(addr) __tag_reset(addr) #define arch_kasan_get_tag(addr) __tag_get(addr) -#ifdef CONFIG_KASAN -void kasan_init(void); -#else -static inline void kasan_init(void) { } -#endif - #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) +void kasan_init(void); + /* * KASAN_SHADOW_START: beginning of the kernel virtual addresses. * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses, @@ -43,6 +39,7 @@ void kasan_copy_shadow(pgd_t *pgdir); asmlinkage void kasan_early_init(void); #else +static inline void kasan_init(void) { } static inline void kasan_copy_shadow(pgd_t *pgdir) { } #endif --- a/arch/arm64/mm/kasan_init.c~kasan-arm64-move-initialization-message +++ a/arch/arm64/mm/kasan_init.c @@ -278,17 +278,14 @@ static void __init kasan_init_depth(void init_task.kasan_depth = 0; } -#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) */ - -static inline void __init kasan_init_shadow(void) { } - -static inline void __init kasan_init_depth(void) { } - -#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ - void __init kasan_init(void) { kasan_init_shadow(); kasan_init_depth(); +#if defined(CONFIG_KASAN_GENERIC) + /* CONFIG_KASAN_SW_TAGS also requires kasan_init_tags(). 
*/ pr_info("KernelAddressSanitizer initialized\n"); +#endif } + +#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ --- a/mm/kasan/generic.c~kasan-arm64-move-initialization-message +++ a/mm/kasan/generic.c @@ -9,8 +9,6 @@ * Andrey Konovalov */ -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - #include #include #include --- a/mm/kasan/sw_tags.c~kasan-arm64-move-initialization-message +++ a/mm/kasan/sw_tags.c @@ -6,7 +6,7 @@ * Author: Andrey Konovalov */ -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#define pr_fmt(fmt) "kasan: " fmt #include #include @@ -41,6 +41,8 @@ void kasan_init_tags(void) for_each_possible_cpu(cpu) per_cpu(prng_state, cpu) = (u32)get_cycles(); + + pr_info("KernelAddressSanitizer initialized\n"); } /* From patchwork Fri Dec 18 22:03:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983083 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8FC2C4361B for ; Fri, 18 Dec 2020 22:03:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9EA6723B7E for ; Fri, 18 Dec 2020 22:03:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9EA6723B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3FFF06B0074; Fri, 18 Dec 2020 17:03:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3B3106B0080; Fri, 18 Dec 2020 17:03:06 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 277196B0089; Fri, 18 Dec 2020 17:03:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0159.hostedemail.com [216.40.44.159]) by kanga.kvack.org (Postfix) with ESMTP id 0AD396B0074 for ; Fri, 18 Dec 2020 17:03:06 -0500 (EST) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C91981EFF for ; Fri, 18 Dec 2020 22:03:05 +0000 (UTC) X-FDA: 77607779130.26.bikes53_081610227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with ESMTP id 9C06A180C3A19 for ; Fri, 18 Dec 2020 22:03:05 +0000 (UTC) X-HE-Tag: bikes53_081610227440 X-Filterd-Recvd-Size: 4234 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:05 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:03 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328984; bh=IV8/Ek5M+HJU0AOVakQNsPSidNEzQjIvcDfJcOLMfh4=; h=From:To:Subject:In-Reply-To:From; b=zx7RkBaDgiHSlPxw0wsS6E5lP2b4Dlus0oD51znakw3NsvtiIXDpO+MDAb9ye7+pQ TUmM3WSF/tOBNb+suCyeQIeXq1y81GxZbZOjOX3u9WIXmvNL6TyzKd38Vj9lLSRi3R 7BxfUH5aCvktaRkhvF2WzjZdciqBKea4luB5st5M= From: Andrew Morton To: 
akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 30/78] kasan, arm64: rename kasan_init_tags and mark as __init Message-ID: <20201218220303.pv_SIRaL7%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: rename kasan_init_tags and mark as __init Rename kasan_init_tags() to kasan_init_sw_tags() as the upcoming hardware tag-based KASAN mode will have its own initialization routine. Also similarly to kasan_init() mark kasan_init_tags() as __init. Link: https://lkml.kernel.org/r/71e52af72a09f4b50c8042f16101c60e50649fbb.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/kernel/setup.c | 2 +- arch/arm64/mm/kasan_init.c | 2 +- include/linux/kasan.h | 4 ++-- mm/kasan/sw_tags.c | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) --- a/arch/arm64/kernel/setup.c~kasan-arm64-rename-kasan_init_tags-and-mark-as-__init +++ a/arch/arm64/kernel/setup.c @@ -358,7 +358,7 @@ void __init __no_sanitize_address setup_ smp_build_mpidr_hash(); /* Init percpu seeds for random tags after cpus are set up. */ - kasan_init_tags(); + kasan_init_sw_tags(); #ifdef CONFIG_ARM64_SW_TTBR0_PAN /* --- a/arch/arm64/mm/kasan_init.c~kasan-arm64-rename-kasan_init_tags-and-mark-as-__init +++ a/arch/arm64/mm/kasan_init.c @@ -283,7 +283,7 @@ void __init kasan_init(void) kasan_init_shadow(); kasan_init_depth(); #if defined(CONFIG_KASAN_GENERIC) - /* CONFIG_KASAN_SW_TAGS also requires kasan_init_tags(). */ + /* CONFIG_KASAN_SW_TAGS also requires kasan_init_sw_tags(). 
*/ pr_info("KernelAddressSanitizer initialized\n"); #endif } --- a/include/linux/kasan.h~kasan-arm64-rename-kasan_init_tags-and-mark-as-__init +++ a/include/linux/kasan.h @@ -192,7 +192,7 @@ static inline void kasan_record_aux_stac #ifdef CONFIG_KASAN_SW_TAGS -void kasan_init_tags(void); +void __init kasan_init_sw_tags(void); void *kasan_reset_tag(const void *addr); @@ -201,7 +201,7 @@ bool kasan_report(unsigned long addr, si #else /* CONFIG_KASAN_SW_TAGS */ -static inline void kasan_init_tags(void) { } +static inline void kasan_init_sw_tags(void) { } static inline void *kasan_reset_tag(const void *addr) { --- a/mm/kasan/sw_tags.c~kasan-arm64-rename-kasan_init_tags-and-mark-as-__init +++ a/mm/kasan/sw_tags.c @@ -35,7 +35,7 @@ static DEFINE_PER_CPU(u32, prng_state); -void kasan_init_tags(void) +void __init kasan_init_sw_tags(void) { int cpu; From patchwork Fri Dec 18 22:03:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983085 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07942C4361B for ; Fri, 18 Dec 2020 22:03:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AA96F23B7E for ; Fri, 18 Dec 2020 22:03:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AA96F23B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 44D266B0080; Fri, 18 Dec 2020 17:03:09 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3DA8C6B0087; Fri, 18 Dec 2020 17:03:09 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 251506B0089; Fri, 18 Dec 2020 17:03:09 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0168.hostedemail.com [216.40.44.168]) by kanga.kvack.org (Postfix) with ESMTP id 0F9946B0080 for ; Fri, 18 Dec 2020 17:03:09 -0500 (EST) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id CF8CE1802BACB for ; Fri, 18 Dec 2020 22:03:08 +0000 (UTC) X-FDA: 77607779256.10.care33_2b13b1827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin10.hostedemail.com (Postfix) with ESMTP id AEFD616A4B1 for ; Fri, 18 Dec 2020 22:03:08 +0000 (UTC) X-HE-Tag: care33_2b13b1827440 X-Filterd-Recvd-Size: 4424 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf03.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:08 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:06 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328987; bh=CntEdFeEhpEy8nANGl/CKXpvCpvnknYL6NlgBt4vUtk=; h=From:To:Subject:In-Reply-To:From; b=w3gZY4TdqDyVzUo2t4j7vdKwlRKqv6bZ+nioM10K82eHicI3+Wbg0HlYczJUtcntK 
idTyQnod3WFcn+eQ3gqXKqjNQoiuCe3pVJDUgmWPpNnHGVaVppddnZgC/jH3q09key MEIc0EwYlbn4+0YSlwBBa3CBNqQv398dEHeyJBj8= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 31/78] kasan: rename addr_has_shadow to addr_has_metadata Message-ID: <20201218220306.DFMEgkikH%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000016, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename addr_has_shadow to addr_has_metadata This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't be using shadow memory, but will reuse this function. Rename "shadow" to implementation-neutral "metadata". No functional changes. Link: https://lkml.kernel.org/r/370466fba590a4596b55ffd38adfd990f8886db4.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/kasan.h | 2 +- mm/kasan/report.c | 6 +++--- mm/kasan/report_generic.c | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) --- a/mm/kasan/kasan.h~kasan-rename-addr_has_shadow-to-addr_has_metadata +++ a/mm/kasan/kasan.h @@ -147,7 +147,7 @@ static inline const void *kasan_shadow_t << KASAN_SHADOW_SCALE_SHIFT); } -static inline bool addr_has_shadow(const void *addr) +static inline bool addr_has_metadata(const void *addr) { return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START)); } --- a/mm/kasan/report.c~kasan-rename-addr_has_shadow-to-addr_has_metadata +++ a/mm/kasan/report.c @@ -361,7 +361,7 @@ static void __kasan_report(unsigned long untagged_addr = reset_tag(tagged_addr); info.access_addr = tagged_addr; - if (addr_has_shadow(untagged_addr)) + if (addr_has_metadata(untagged_addr)) info.first_bad_addr = find_first_bad_addr(tagged_addr, size); else info.first_bad_addr = untagged_addr; @@ -372,11 +372,11 @@ static void __kasan_report(unsigned long start_report(&flags); print_error_description(&info); - if (addr_has_shadow(untagged_addr)) + if (addr_has_metadata(untagged_addr)) print_tags(get_tag(tagged_addr), info.first_bad_addr); pr_err("\n"); - if (addr_has_shadow(untagged_addr)) { + if (addr_has_metadata(untagged_addr)) { print_address_description(untagged_addr, get_tag(tagged_addr)); pr_err("\n"); print_shadow_for_address(info.first_bad_addr); --- a/mm/kasan/report_generic.c~kasan-rename-addr_has_shadow-to-addr_has_metadata +++ a/mm/kasan/report_generic.c @@ -118,7 +118,7 @@ const char *get_bug_type(struct kasan_ac if (info->access_addr + info->access_size < info->access_addr) return "out-of-bounds"; - if (addr_has_shadow(info->access_addr)) + if (addr_has_metadata(info->access_addr)) return get_shadow_bug_type(info); return get_wild_bug_type(info); } From patchwork Fri Dec 18 22:03:09 2020 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983087 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 939FFC4361B for ; Fri, 18 Dec 2020 22:03:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3A8CD23BA7 for ; Fri, 18 Dec 2020 22:03:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3A8CD23BA7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C65376B0087; Fri, 18 Dec 2020 17:03:12 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id BC5DD6B0089; Fri, 18 Dec 2020 17:03:12 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AB3A96B008A; Fri, 18 Dec 2020 17:03:12 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0175.hostedemail.com [216.40.44.175]) by kanga.kvack.org (Postfix) with ESMTP id 9059E6B0087 for ; Fri, 18 Dec 2020 17:03:12 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 584F4181AEF1E for ; Fri, 18 Dec 2020 22:03:12 +0000 (UTC) X-FDA: 77607779424.17.crime13_6101d5a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 3D87B1801E306 for ; Fri, 18 Dec 2020 22:03:12 +0000 (UTC) X-HE-Tag: crime13_6101d5a27440 X-Filterd-Recvd-Size: 3562 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf05.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:11 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:09 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328990; bh=2B/NNjgKynQJZv4Y6anRy8CVgpGnS6XKSXDRJna02cY=; h=From:To:Subject:In-Reply-To:From; b=y/lFHzEsXJHf7jyeONwycsuU7a7uQZsL5/QZ2Gvu1jJyXmdPOj8NG1ruEaR7B8NRR ksiRNi9RdkytU6DSW29VsewGa7OiDmIv2tuA/GTLPY9gzr4Pz9koC6ustGcI2lpYbN Rq6LCI3XyNRFI/myvzHg94fA8GjyUkA2Sej45CYk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 32/78] kasan: rename print_shadow_for_address to print_memory_metadata Message-ID: <20201218220309.N8c1ZkijW%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000001, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename print_shadow_for_address to print_memory_metadata This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't be using shadow memory, but will reuse this function. Rename "shadow" to implementation-neutral "metadata". No functional changes. Link: https://lkml.kernel.org/r/dd955c5aadaee16aef451a6189d19172166a23f5.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/report.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) --- a/mm/kasan/report.c~kasan-rename-print_shadow_for_address-to-print_memory_metadata +++ a/mm/kasan/report.c @@ -252,7 +252,7 @@ static int shadow_pointer_offset(const v (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1; } -static void print_shadow_for_address(const void *addr) +static void print_memory_metadata(const void *addr) { int i; const void *shadow = kasan_mem_to_shadow(addr); @@ -338,7 +338,7 @@ void kasan_report_invalid_free(void *obj pr_err("\n"); print_address_description(object, tag); pr_err("\n"); - print_shadow_for_address(object); + print_memory_metadata(object); end_report(&flags); } @@ -379,7 +379,7 @@ static void __kasan_report(unsigned long if (addr_has_metadata(untagged_addr)) { print_address_description(untagged_addr, get_tag(tagged_addr)); pr_err("\n"); - print_shadow_for_address(info.first_bad_addr); + print_memory_metadata(info.first_bad_addr); } else { dump_stack(); } From patchwork Fri Dec 18 22:03:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983089 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2F73C4361B for ; Fri, 18 Dec 2020 22:03:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9A6CD23B8C for ; Fri, 18 Dec 2020 22:03:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9A6CD23B8C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3635A6B0089; Fri, 18 Dec 2020 17:03:16 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3126D6B008A; Fri, 18 Dec 2020 17:03:16 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 18C3D6B008C; Fri, 18 Dec 2020 17:03:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0078.hostedemail.com [216.40.44.78]) by kanga.kvack.org (Postfix) with ESMTP id F0BBC6B0089 for ; Fri, 
18 Dec 2020 17:03:15 -0500 (EST) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B4A245010 for ; Fri, 18 Dec 2020 22:03:15 +0000 (UTC) X-FDA: 77607779550.02.coal71_54177ac27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin02.hostedemail.com (Postfix) with ESMTP id 9BBDE10097AA0 for ; Fri, 18 Dec 2020 22:03:15 +0000 (UTC) X-HE-Tag: coal71_54177ac27440 X-Filterd-Recvd-Size: 5386 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf48.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:15 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:13 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328994; bh=WkJP5ueegwzdiDDMvDzqZf2U3rSH1/aHsOmLVGy5lhQ=; h=From:To:Subject:In-Reply-To:From; b=tqxOx1ITslRFc3SaxyO/q6vTjEtsh6k/udRbi4vlGzTjwK4KGqOvDN0loChNJjq9l qU4L/WLtIbJEseiDd7RvkLEcJkz+zBT6dAyEOIHf28Bi0T9qplYkRlyPtu+G2VhfYZ c9gmGbEFBRm1ESigqEwx5gUsxOKUmey5jZuDqPT0= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 33/78] kasan: rename SHADOW layout macros to META Message-ID: <20201218220313.vJuJ2g_R2%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename SHADOW layout macros to META This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't be using shadow memory, but will reuse these macros. Rename "SHADOW" to implementation-neutral "META". No functional changes. Link: https://lkml.kernel.org/r/f96244ec59dc17db35173ec352c5592b14aefaf8.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/report.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) --- a/mm/kasan/report.c~kasan-rename-shadow-layout-macros-to-meta +++ a/mm/kasan/report.c @@ -33,11 +33,11 @@ #include "kasan.h" #include "../slab.h" -/* Shadow layout customization. */ -#define SHADOW_BYTES_PER_BLOCK 1 -#define SHADOW_BLOCKS_PER_ROW 16 -#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK) -#define SHADOW_ROWS_AROUND_ADDR 2 +/* Metadata layout customization. 
*/ +#define META_BYTES_PER_BLOCK 1 +#define META_BLOCKS_PER_ROW 16 +#define META_BYTES_PER_ROW (META_BLOCKS_PER_ROW * META_BYTES_PER_BLOCK) +#define META_ROWS_AROUND_ADDR 2 static unsigned long kasan_flags; @@ -240,7 +240,7 @@ static void print_address_description(vo static bool row_is_guilty(const void *row, const void *guilty) { - return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW); + return (row <= guilty) && (guilty < row + META_BYTES_PER_ROW); } static int shadow_pointer_offset(const void *row, const void *shadow) @@ -249,7 +249,7 @@ static int shadow_pointer_offset(const v * 3 + (BITS_PER_LONG/8)*2 chars. */ return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 + - (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1; + (shadow - row) / META_BYTES_PER_BLOCK + 1; } static void print_memory_metadata(const void *addr) @@ -259,15 +259,15 @@ static void print_memory_metadata(const const void *shadow_row; shadow_row = (void *)round_down((unsigned long)shadow, - SHADOW_BYTES_PER_ROW) - - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW; + META_BYTES_PER_ROW) + - META_ROWS_AROUND_ADDR * META_BYTES_PER_ROW; pr_err("Memory state around the buggy address:\n"); - for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) { + for (i = -META_ROWS_AROUND_ADDR; i <= META_ROWS_AROUND_ADDR; i++) { const void *kaddr = kasan_shadow_to_mem(shadow_row); char buffer[4 + (BITS_PER_LONG/8)*2]; - char shadow_buf[SHADOW_BYTES_PER_ROW]; + char shadow_buf[META_BYTES_PER_ROW]; snprintf(buffer, sizeof(buffer), (i == 0) ? ">%px: " : " %px: ", kaddr); @@ -276,17 +276,17 @@ static void print_memory_metadata(const * function, because generic functions may try to * access kasan mapping for the passed address. */ - memcpy(shadow_buf, shadow_row, SHADOW_BYTES_PER_ROW); + memcpy(shadow_buf, shadow_row, META_BYTES_PER_ROW); print_hex_dump(KERN_ERR, buffer, - DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1, - shadow_buf, SHADOW_BYTES_PER_ROW, 0); + DUMP_PREFIX_NONE, META_BYTES_PER_ROW, 1, + shadow_buf, META_BYTES_PER_ROW, 0); if (row_is_guilty(shadow_row, shadow)) pr_err("%*c\n", shadow_pointer_offset(shadow_row, shadow), '^'); - shadow_row += SHADOW_BYTES_PER_ROW; + shadow_row += META_BYTES_PER_ROW; } } From patchwork Fri Dec 18 22:03:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983091 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D864C3527E for ; Fri, 18 Dec 2020 22:03:20 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3E22323BA9 for ; Fri, 18 Dec 2020 22:03:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3E22323BA9 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CB1B66B008A; Fri, 18 Dec 2020 17:03:19 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C3A146B008C; 
Fri, 18 Dec 2020 17:03:19 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B29226B0092; Fri, 18 Dec 2020 17:03:19 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0104.hostedemail.com [216.40.44.104]) by kanga.kvack.org (Postfix) with ESMTP id 9B7206B008A for ; Fri, 18 Dec 2020 17:03:19 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 636211802BACB for ; Fri, 18 Dec 2020 22:03:19 +0000 (UTC) X-FDA: 77607779718.17.straw17_060fbf227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 37A9018022016 for ; Fri, 18 Dec 2020 22:03:19 +0000 (UTC) X-HE-Tag: straw17_060fbf227440 X-Filterd-Recvd-Size: 7916 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf07.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:18 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:16 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608328997; bh=SZTQn9fjqbA0ifLQitsdafMEg0vq3YSpL7vQ+CatsAs=; h=From:To:Subject:In-Reply-To:From; b=OlnsXsXcZvGqWhce3jvpPvQGfltC0/lsn6j15H/jDflYrJK62yi8YfKIVj045+RZj 6sLPxcwvfQ9Ulaotl2grM5qoi5bZ08T/2+qOnpJSaHm+4dV9s4BOD4Ciy7N3u9mmRJ K2MrZ5Pwem3Xc2hYRZVt2jt13gOSYYsp4OdtEhP8= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 34/78] kasan: separate metadata_fetch_row for each mode Message-ID: <20201218220316.GpDRmn3sx%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: separate metadata_fetch_row for each mode This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Rework print_memory_metadata() to make it agnostic with regard to the way metadata is stored. Allow providing a separate metadata_fetch_row() implementation for each KASAN mode. Hardware tag-based KASAN will provide its own implementation that doesn't use shadow memory. No functional changes for software modes. 
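To make the new hook concrete, a minimal sketch of how each mode can back metadata_fetch_row(): the software-mode body is the one this patch adds to report_generic.c and report_sw_tags.c, while the hardware-mode variant is only an assumption about how a later patch in this series could use the in-kernel MTE helpers (mte_get_mem_tag() is not defined by this patch).

	/* Software modes (report_generic.c / report_sw_tags.c, this patch):
	 * metadata is shadow memory, one byte per granule.
	 */
	void metadata_fetch_row(char *buffer, void *row)
	{
		memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
	}

	/* Hypothetical hardware tag-based mode: read one MTE tag per granule
	 * instead of copying shadow bytes (helper name assumed, not upstream
	 * API as of this patch).
	 */
	void metadata_fetch_row(char *buffer, void *row)
	{
		int i;

		for (i = 0; i < META_BYTES_PER_ROW; i++)
			buffer[i] = mte_get_mem_tag(row + i * KASAN_GRANULE_SIZE);
	}

Only one definition is built for any given configuration; print_memory_metadata() itself no longer touches shadow pointers, which is what lets a mode without shadow memory reuse the same report layout.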
Link: https://lkml.kernel.org/r/5fb1ec0152bb1f521505017800387ec3e36ffe18.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/kasan.h | 8 +++++ mm/kasan/report.c | 56 +++++++++++++++++------------------- mm/kasan/report_generic.c | 5 +++ mm/kasan/report_sw_tags.c | 5 +++ 4 files changed, 45 insertions(+), 29 deletions(-) --- a/mm/kasan/kasan.h~kasan-separate-metadata_fetch_row-for-each-mode +++ a/mm/kasan/kasan.h @@ -58,6 +58,13 @@ #define KASAN_ABI_VERSION 1 #endif +/* Metadata layout customization. */ +#define META_BYTES_PER_BLOCK 1 +#define META_BLOCKS_PER_ROW 16 +#define META_BYTES_PER_ROW (META_BLOCKS_PER_ROW * META_BYTES_PER_BLOCK) +#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE) +#define META_ROWS_AROUND_ADDR 2 + struct kasan_access_info { const void *access_addr; const void *first_bad_addr; @@ -170,6 +177,7 @@ bool check_invalid_free(void *addr); void *find_first_bad_addr(void *addr, size_t size); const char *get_bug_type(struct kasan_access_info *info); +void metadata_fetch_row(char *buffer, void *row); #if defined(CONFIG_KASAN_GENERIC) && CONFIG_KASAN_STACK void print_address_stack_frame(const void *addr); --- a/mm/kasan/report.c~kasan-separate-metadata_fetch_row-for-each-mode +++ a/mm/kasan/report.c @@ -33,12 +33,6 @@ #include "kasan.h" #include "../slab.h" -/* Metadata layout customization. */ -#define META_BYTES_PER_BLOCK 1 -#define META_BLOCKS_PER_ROW 16 -#define META_BYTES_PER_ROW (META_BLOCKS_PER_ROW * META_BYTES_PER_BLOCK) -#define META_ROWS_AROUND_ADDR 2 - static unsigned long kasan_flags; #define KASAN_BIT_REPORTED 0 @@ -238,55 +232,59 @@ static void print_address_description(vo print_address_stack_frame(addr); } -static bool row_is_guilty(const void *row, const void *guilty) +static bool meta_row_is_guilty(const void *row, const void *addr) { - return (row <= guilty) && (guilty < row + META_BYTES_PER_ROW); + return (row <= addr) && (addr < row + META_MEM_BYTES_PER_ROW); } -static int shadow_pointer_offset(const void *row, const void *shadow) +static int meta_pointer_offset(const void *row, const void *addr) { - /* The length of ">ff00ff00ff00ff00: " is - * 3 + (BITS_PER_LONG/8)*2 chars. + /* + * Memory state around the buggy address: + * ff00ff00ff00ff00: 00 00 00 05 fe fe fe fe fe fe fe fe fe fe fe fe + * ... + * + * The length of ">ff00ff00ff00ff00: " is + * 3 + (BITS_PER_LONG / 8) * 2 chars. + * The length of each granule metadata is 2 bytes + * plus 1 byte for space. 
*/ - return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 + - (shadow - row) / META_BYTES_PER_BLOCK + 1; + return 3 + (BITS_PER_LONG / 8) * 2 + + (addr - row) / KASAN_GRANULE_SIZE * 3 + 1; } static void print_memory_metadata(const void *addr) { int i; - const void *shadow = kasan_mem_to_shadow(addr); - const void *shadow_row; + void *row; - shadow_row = (void *)round_down((unsigned long)shadow, - META_BYTES_PER_ROW) - - META_ROWS_AROUND_ADDR * META_BYTES_PER_ROW; + row = (void *)round_down((unsigned long)addr, META_MEM_BYTES_PER_ROW) + - META_ROWS_AROUND_ADDR * META_MEM_BYTES_PER_ROW; pr_err("Memory state around the buggy address:\n"); for (i = -META_ROWS_AROUND_ADDR; i <= META_ROWS_AROUND_ADDR; i++) { - const void *kaddr = kasan_shadow_to_mem(shadow_row); - char buffer[4 + (BITS_PER_LONG/8)*2]; - char shadow_buf[META_BYTES_PER_ROW]; + char buffer[4 + (BITS_PER_LONG / 8) * 2]; + char metadata[META_BYTES_PER_ROW]; snprintf(buffer, sizeof(buffer), - (i == 0) ? ">%px: " : " %px: ", kaddr); + (i == 0) ? ">%px: " : " %px: ", row); + /* * We should not pass a shadow pointer to generic * function, because generic functions may try to * access kasan mapping for the passed address. */ - memcpy(shadow_buf, shadow_row, META_BYTES_PER_ROW); + metadata_fetch_row(&metadata[0], row); + print_hex_dump(KERN_ERR, buffer, DUMP_PREFIX_NONE, META_BYTES_PER_ROW, 1, - shadow_buf, META_BYTES_PER_ROW, 0); + metadata, META_BYTES_PER_ROW, 0); - if (row_is_guilty(shadow_row, shadow)) - pr_err("%*c\n", - shadow_pointer_offset(shadow_row, shadow), - '^'); + if (meta_row_is_guilty(row, addr)) + pr_err("%*c\n", meta_pointer_offset(row, addr), '^'); - shadow_row += META_BYTES_PER_ROW; + row += META_MEM_BYTES_PER_ROW; } } --- a/mm/kasan/report_generic.c~kasan-separate-metadata_fetch_row-for-each-mode +++ a/mm/kasan/report_generic.c @@ -123,6 +123,11 @@ const char *get_bug_type(struct kasan_ac return get_wild_bug_type(info); } +void metadata_fetch_row(char *buffer, void *row) +{ + memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW); +} + #if CONFIG_KASAN_STACK static bool __must_check tokenize_frame_descr(const char **frame_descr, char *token, size_t max_tok_len, --- a/mm/kasan/report_sw_tags.c~kasan-separate-metadata_fetch_row-for-each-mode +++ a/mm/kasan/report_sw_tags.c @@ -80,6 +80,11 @@ void *find_first_bad_addr(void *addr, si return p; } +void metadata_fetch_row(char *buffer, void *row) +{ + memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW); +} + void print_tags(u8 addr_tag, const void *addr) { u8 *shadow = (u8 *)kasan_mem_to_shadow(addr); From patchwork Fri Dec 18 22:03:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A28FC4361B for ; Fri, 18 Dec 2020 22:03:24 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D24E823B9B for ; Fri, 18 Dec 2020 22:03:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 
D24E823B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7825A6B008C; Fri, 18 Dec 2020 17:03:23 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 709A76B0092; Fri, 18 Dec 2020 17:03:23 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5AB036B0093; Fri, 18 Dec 2020 17:03:23 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 44BEA6B008C for ; Fri, 18 Dec 2020 17:03:23 -0500 (EST) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 00E6A8249980 for ; Fri, 18 Dec 2020 22:03:22 +0000 (UTC) X-FDA: 77607779886.01.point25_2000a6d27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin01.hostedemail.com (Postfix) with ESMTP id D601810045EE0 for ; Fri, 18 Dec 2020 22:03:22 +0000 (UTC) X-HE-Tag: point25_2000a6d27440 X-Filterd-Recvd-Size: 7512 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf50.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:21 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:20 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329001; bh=aZ20VFyvvs6LomSn07Oc4uZtBKQwry8f4qJ1pOVDoBk=; h=From:To:Subject:In-Reply-To:From; b=WaMdTGpb5GBV2ImxWFRd8yfUg7D6+1VwrvG42LVMqbZzgkKUcqa3Y9MDBB9ZGAq+V +ab+pFol+txVSeUztpPwkxGU8Nrq//GeglmrHgmsbVy9LO8RE25iZWccUlAmUb1rt/ jodXYSOpI93UC1PDhAhF3Ns/m90NXWib4iagV99Q= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 35/78] kasan: introduce CONFIG_KASAN_HW_TAGS Message-ID: <20201218220320.AJqL9i43U%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: introduce CONFIG_KASAN_HW_TAGS This patch adds a configuration option for a new KASAN mode called hardware tag-based KASAN. This mode uses the memory tagging approach like the software tag-based mode, but relies on arm64 Memory Tagging Extension feature for tag management and access checking. 
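To make the difference between the resulting configurations concrete, a minimal hedged sketch in C: the function name is hypothetical, and only the CONFIG_* symbols and the mode descriptions come from the Kconfig text in this patch.

	#include <linux/init.h>
	#include <linux/printk.h>

	/* Illustrative only: which kind of KASAN backend a build ends up with. */
	static void __init kasan_print_mode(void)
	{
	#if defined(CONFIG_KASAN_GENERIC)
		pr_info("KASAN: generic mode (shadow memory, compiler instrumentation)\n");
	#elif defined(CONFIG_KASAN_SW_TAGS)
		pr_info("KASAN: software tag-based mode (shadow memory, TBI, compiler instrumentation)\n");
	#elif defined(CONFIG_KASAN_HW_TAGS)
		pr_info("KASAN: hardware tag-based mode (MTE, no shadow memory, no instrumentation)\n");
	#endif
	}

Because the three options live in a single Kconfig choice, at most one branch is ever active. The same reasoning explains why the "Instrumentation type" choice and KASAN_STACK_ENABLE in the hunk below gain a dependency on KASAN_GENERIC || KASAN_SW_TAGS: compiler instrumentation settings have no effect in the hardware mode.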
Link: https://lkml.kernel.org/r/44906a209d3a44f9c6f5a21841e90988e365601e.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- lib/Kconfig.kasan | 61 +++++++++++++++++++++++++++++++------------- 1 file changed, 44 insertions(+), 17 deletions(-) --- a/lib/Kconfig.kasan~kasan-introduce-config_kasan_hw_tags +++ a/lib/Kconfig.kasan @@ -6,7 +6,10 @@ config HAVE_ARCH_KASAN config HAVE_ARCH_KASAN_SW_TAGS bool -config HAVE_ARCH_KASAN_VMALLOC +config HAVE_ARCH_KASAN_HW_TAGS + bool + +config HAVE_ARCH_KASAN_VMALLOC bool config CC_HAS_KASAN_GENERIC @@ -15,16 +18,19 @@ config CC_HAS_KASAN_GENERIC config CC_HAS_KASAN_SW_TAGS def_bool $(cc-option, -fsanitize=kernel-hwaddress) +# This option is only required for software KASAN modes. +# Old GCC versions don't have proper support for no_sanitize_address. +# See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89124 for details. config CC_HAS_WORKING_NOSANITIZE_ADDRESS def_bool !CC_IS_GCC || GCC_VERSION >= 80300 menuconfig KASAN bool "KASAN: runtime memory debugger" - depends on (HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC) || \ - (HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS) + depends on (((HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC) || \ + (HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS)) && \ + CC_HAS_WORKING_NOSANITIZE_ADDRESS) || \ + HAVE_ARCH_KASAN_HW_TAGS depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) - depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS - select CONSTRUCTORS select STACKDEPOT help Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, @@ -37,18 +43,24 @@ choice prompt "KASAN mode" default KASAN_GENERIC help - KASAN has two modes: generic KASAN (similar to userspace ASan, - x86_64/arm64/xtensa, enabled with CONFIG_KASAN_GENERIC) and - software tag-based KASAN (a version based on software memory - tagging, arm64 only, similar to userspace HWASan, enabled with - CONFIG_KASAN_SW_TAGS). + KASAN has three modes: + 1. generic KASAN (similar to userspace ASan, + x86_64/arm64/xtensa, enabled with CONFIG_KASAN_GENERIC), + 2. software tag-based KASAN (arm64 only, based on software + memory tagging (similar to userspace HWASan), enabled with + CONFIG_KASAN_SW_TAGS), and + 3. hardware tag-based KASAN (arm64 only, based on hardware + memory tagging, enabled with CONFIG_KASAN_HW_TAGS). + + All KASAN modes are strictly debugging features. - Both generic and tag-based KASAN are strictly debugging features. + For better error reports enable CONFIG_STACKTRACE. config KASAN_GENERIC bool "Generic mode" depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC select SLUB_DEBUG if SLUB + select CONSTRUCTORS help Enables generic KASAN mode. @@ -61,8 +73,6 @@ config KASAN_GENERIC and introduces an overhead of ~x1.5 for the rest of the allocations. The performance slowdown is ~x3. - For better error detection enable CONFIG_STACKTRACE. - Currently CONFIG_KASAN_GENERIC doesn't work with CONFIG_DEBUG_SLAB (the resulting kernel does not boot). @@ -70,11 +80,15 @@ config KASAN_SW_TAGS bool "Software tag-based mode" depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS select SLUB_DEBUG if SLUB + select CONSTRUCTORS help Enables software tag-based KASAN mode. 
- This mode requires Top Byte Ignore support by the CPU and therefore - is only supported for arm64. This mode requires Clang. + This mode require software memory tagging support in the form of + HWASan-like compiler instrumentation. + + Currently this mode is only implemented for arm64 CPUs and relies on + Top Byte Ignore. This mode requires Clang. This mode consumes about 1/16th of available memory at kernel start and introduces an overhead of ~20% for the rest of the allocations. @@ -82,15 +96,27 @@ config KASAN_SW_TAGS casting and comparison, as it embeds tags into the top byte of each pointer. - For better error detection enable CONFIG_STACKTRACE. - Currently CONFIG_KASAN_SW_TAGS doesn't work with CONFIG_DEBUG_SLAB (the resulting kernel does not boot). +config KASAN_HW_TAGS + bool "Hardware tag-based mode" + depends on HAVE_ARCH_KASAN_HW_TAGS + depends on SLUB + help + Enables hardware tag-based KASAN mode. + + This mode requires hardware memory tagging support, and can be used + by any architecture that provides it. + + Currently this mode is only implemented for arm64 CPUs starting from + ARMv8.5 and relies on Memory Tagging Extension and Top Byte Ignore. + endchoice choice prompt "Instrumentation type" + depends on KASAN_GENERIC || KASAN_SW_TAGS default KASAN_OUTLINE config KASAN_OUTLINE @@ -114,6 +140,7 @@ endchoice config KASAN_STACK_ENABLE bool "Enable stack instrumentation (unsafe)" if CC_IS_CLANG && !COMPILE_TEST + depends on KASAN_GENERIC || KASAN_SW_TAGS help The LLVM stack address sanitizer has a know problem that causes excessive stack usage in a lot of functions, see From patchwork Fri Dec 18 22:03:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983095 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F4200C2D0E4 for ; Fri, 18 Dec 2020 22:03:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A47A623B9B for ; Fri, 18 Dec 2020 22:03:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A47A623B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3DD306B0068; Fri, 18 Dec 2020 17:03:26 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 338EB6B0092; Fri, 18 Dec 2020 17:03:26 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 206476B0093; Fri, 18 Dec 2020 17:03:26 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0033.hostedemail.com [216.40.44.33]) by kanga.kvack.org (Postfix) with ESMTP id 0427A6B0068 for ; Fri, 18 Dec 2020 17:03:26 -0500 (EST) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id BE67C8249980 for ; Fri, 18 Dec 2020 22:03:25 +0000 
(UTC) X-FDA: 77607779970.16.wren43_290edd327440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin16.hostedemail.com (Postfix) with ESMTP id 98C64100E690B for ; Fri, 18 Dec 2020 22:03:25 +0000 (UTC) X-HE-Tag: wren43_290edd327440 X-Filterd-Recvd-Size: 3564 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf20.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:24 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:23 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329004; bh=o4LWwQlSOpED0V2os9lzQIgkMFv6BxZ2jMv9cz3gbxo=; h=From:To:Subject:In-Reply-To:From; b=ONAP4qDQ4KV0R61G4/3hd8m557uK7oHCno3obmvtsO69EaVdKu19tz0H1KWsrgeKu 8lzEHMrIDv84Ex7RTiSeU944uDTGfL7d02LvmA5yV7YmWQ/q3+TsaEYpbXigdxa+jo n8c1z1hOrIgRL88edBrZu0OPgdxcshMlXOipD0Cs= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 36/78] arm64: enable armv8.5-a asm-arch option Message-ID: <20201218220323.FBOTcmeWO%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: enable armv8.5-a asm-arch option Hardware tag-based KASAN relies on Memory Tagging Extension (MTE) which is an armv8.5-a architecture extension. Enable the correct asm option when the compiler supports it in order to allow the usage of ALTERNATIVE()s with MTE instructions. Link: https://lkml.kernel.org/r/d03d1157124ea3532eaeb77507988733f5734986.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/Kconfig | 4 ++++ arch/arm64/Makefile | 5 +++++ 2 files changed, 9 insertions(+) --- a/arch/arm64/Kconfig~arm64-enable-armv85-a-asm-arch-option +++ a/arch/arm64/Kconfig @@ -1571,6 +1571,9 @@ endmenu menu "ARMv8.5 architectural features" +config AS_HAS_ARMV8_5 + def_bool $(cc-option,-Wa$(comma)-march=armv8.5-a) + config ARM64_BTI bool "Branch Target Identification support" default y @@ -1645,6 +1648,7 @@ config ARM64_MTE bool "Memory Tagging Extension support" default y depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI + depends on AS_HAS_ARMV8_5 select ARCH_USES_HIGH_VMA_FLAGS help Memory Tagging (part of the ARMv8.5 Extensions) provides --- a/arch/arm64/Makefile~arm64-enable-armv85-a-asm-arch-option +++ a/arch/arm64/Makefile @@ -96,6 +96,11 @@ ifeq ($(CONFIG_AS_HAS_ARMV8_4), y) asm-arch := armv8.4-a endif +ifeq ($(CONFIG_AS_HAS_ARMV8_5), y) +# make sure to pass the newest target architecture to -march. 
+asm-arch := armv8.5-a +endif + ifdef asm-arch KBUILD_CFLAGS += -Wa,-march=$(asm-arch) \ -DARM64_ASM_ARCH='"$(asm-arch)"' From patchwork Fri Dec 18 22:03:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983097 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34950C4361B for ; Fri, 18 Dec 2020 22:03:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D8E5F23BA9 for ; Fri, 18 Dec 2020 22:03:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D8E5F23BA9 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 824FB6B0092; Fri, 18 Dec 2020 17:03:29 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7857A6B0093; Fri, 18 Dec 2020 17:03:29 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 626196B0095; Fri, 18 Dec 2020 17:03:29 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0215.hostedemail.com [216.40.44.215]) by kanga.kvack.org (Postfix) with ESMTP id 44B1E6B0092 for ; Fri, 18 Dec 2020 17:03:29 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 0D8A352AC for ; Fri, 18 Dec 2020 22:03:29 +0000 (UTC) X-FDA: 77607780138.04.rub24_170ab3a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id E49A9801D1CB for ; Fri, 18 Dec 2020 22:03:28 +0000 (UTC) X-HE-Tag: rub24_170ab3a27440 X-Filterd-Recvd-Size: 8669 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf26.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:28 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:26 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329007; bh=76oWDf5f4w1BlWSph6gBIkwhthhPKhw/sXWIAqJa58Y=; h=From:To:Subject:In-Reply-To:From; b=wflsU9xZULkd8PO57ZZv2QYJMa+lCYBLDvSgTiD+oQOHbqiFif7aVnL1EnOqPw/Id nPcRiLHFEVxYzK9XEzRYh1Z4sOU2Z+nJq2lgFIEHAKKDHtZ3ac5UeDAJ3lWqrvACE8 V//KC7KzR+btvXoCKUeE2IS0VPxRzq1U48y7DP58= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 37/78] arm64: mte: add in-kernel MTE helpers Message-ID: <20201218220326.MNvex-upo%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, 
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: mte: add in-kernel MTE helpers Provide helper functions to manipulate allocation and pointer tags for kernel addresses. Low-level helper functions (mte_assign_*, written in assembly) operate tag values from the [0x0, 0xF] range. High-level helper functions (mte_get/set_*) use the [0xF0, 0xFF] range to preserve compatibility with normal kernel pointers that have 0xFF in their top byte. MTE_GRANULE_SIZE and related definitions are moved to mte-def.h header that doesn't have any dependencies and is safe to include into any low-level header. Link: https://lkml.kernel.org/r/c31bf759b4411b2d98cdd801eb928e241584fd1f.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Co-developed-by: Andrey Konovalov Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/esr.h | 1 arch/arm64/include/asm/mte-def.h | 15 +++++++ arch/arm64/include/asm/mte-kasan.h | 56 +++++++++++++++++++++++++++ arch/arm64/include/asm/mte.h | 20 ++++++--- arch/arm64/kernel/mte.c | 48 +++++++++++++++++++++++ arch/arm64/lib/mte.S | 16 +++++++ 6 files changed, 150 insertions(+), 6 deletions(-) --- a/arch/arm64/include/asm/esr.h~arm64-mte-add-in-kernel-mte-helpers +++ a/arch/arm64/include/asm/esr.h @@ -106,6 +106,7 @@ #define ESR_ELx_FSC_TYPE (0x3C) #define ESR_ELx_FSC_LEVEL (0x03) #define ESR_ELx_FSC_EXTABT (0x10) +#define ESR_ELx_FSC_MTE (0x11) #define ESR_ELx_FSC_SERROR (0x11) #define ESR_ELx_FSC_ACCESS (0x08) #define ESR_ELx_FSC_FAULT (0x04) --- /dev/null +++ a/arch/arm64/include/asm/mte-def.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020 ARM Ltd. 
+ */ +#ifndef __ASM_MTE_DEF_H +#define __ASM_MTE_DEF_H + +#define MTE_GRANULE_SIZE UL(16) +#define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1)) +#define MTE_TAG_SHIFT 56 +#define MTE_TAG_SIZE 4 +#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT) +#define MTE_TAG_MAX (MTE_TAG_MASK >> MTE_TAG_SHIFT) + +#endif /* __ASM_MTE_DEF_H */ --- a/arch/arm64/include/asm/mte.h~arm64-mte-add-in-kernel-mte-helpers +++ a/arch/arm64/include/asm/mte.h @@ -5,14 +5,16 @@ #ifndef __ASM_MTE_H #define __ASM_MTE_H -#define MTE_GRANULE_SIZE UL(16) -#define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1)) -#define MTE_TAG_SHIFT 56 -#define MTE_TAG_SIZE 4 +#include +#include + +#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n" #ifndef __ASSEMBLY__ +#include #include +#include #include @@ -45,7 +47,9 @@ long get_mte_ctrl(struct task_struct *ta int mte_ptrace_copy_tags(struct task_struct *child, long request, unsigned long addr, unsigned long data); -#else +void mte_assign_mem_tag_range(void *addr, size_t size); + +#else /* CONFIG_ARM64_MTE */ /* unused if !CONFIG_ARM64_MTE, silence the compiler */ #define PG_mte_tagged 0 @@ -80,7 +84,11 @@ static inline int mte_ptrace_copy_tags(s return -EIO; } -#endif +static inline void mte_assign_mem_tag_range(void *addr, size_t size) +{ +} + +#endif /* CONFIG_ARM64_MTE */ #endif /* __ASSEMBLY__ */ #endif /* __ASM_MTE_H */ --- /dev/null +++ a/arch/arm64/include/asm/mte-kasan.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020 ARM Ltd. + */ +#ifndef __ASM_MTE_KASAN_H +#define __ASM_MTE_KASAN_H + +#include + +#ifndef __ASSEMBLY__ + +#include + +/* + * The functions below are meant to be used only for the + * KASAN_HW_TAGS interface defined in asm/memory.h. + */ +#ifdef CONFIG_ARM64_MTE + +static inline u8 mte_get_ptr_tag(void *ptr) +{ + /* Note: The format of KASAN tags is 0xF */ + u8 tag = 0xF0 | (u8)(((u64)(ptr)) >> MTE_TAG_SHIFT); + + return tag; +} + +u8 mte_get_mem_tag(void *addr); +u8 mte_get_random_tag(void); +void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag); + +#else /* CONFIG_ARM64_MTE */ + +static inline u8 mte_get_ptr_tag(void *ptr) +{ + return 0xFF; +} + +static inline u8 mte_get_mem_tag(void *addr) +{ + return 0xFF; +} +static inline u8 mte_get_random_tag(void) +{ + return 0xFF; +} +static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag) +{ + return addr; +} + +#endif /* CONFIG_ARM64_MTE */ + +#endif /* __ASSEMBLY__ */ + +#endif /* __ASM_MTE_KASAN_H */ --- a/arch/arm64/kernel/mte.c~arm64-mte-add-in-kernel-mte-helpers +++ a/arch/arm64/kernel/mte.c @@ -13,10 +13,13 @@ #include #include #include +#include #include +#include #include #include +#include #include #include @@ -72,6 +75,51 @@ int memcmp_pages(struct page *page1, str return ret; } +u8 mte_get_mem_tag(void *addr) +{ + if (!system_supports_mte()) + return 0xFF; + + asm(__MTE_PREAMBLE "ldg %0, [%0]" + : "+r" (addr)); + + return mte_get_ptr_tag(addr); +} + +u8 mte_get_random_tag(void) +{ + void *addr; + + if (!system_supports_mte()) + return 0xFF; + + asm(__MTE_PREAMBLE "irg %0, %0" + : "+r" (addr)); + + return mte_get_ptr_tag(addr); +} + +void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag) +{ + void *ptr = addr; + + if ((!system_supports_mte()) || (size == 0)) + return addr; + + /* Make sure that size is MTE granule aligned. */ + WARN_ON(size & (MTE_GRANULE_SIZE - 1)); + + /* Make sure that the address is MTE granule aligned. 
*/ + WARN_ON((u64)addr & (MTE_GRANULE_SIZE - 1)); + + tag = 0xF0 | tag; + ptr = (void *)__tag_set(ptr, tag); + + mte_assign_mem_tag_range(ptr, size); + + return ptr; +} + static void update_sctlr_el1_tcf0(u64 tcf0) { /* ISB required for the kernel uaccess routines */ --- a/arch/arm64/lib/mte.S~arm64-mte-add-in-kernel-mte-helpers +++ a/arch/arm64/lib/mte.S @@ -149,3 +149,19 @@ SYM_FUNC_START(mte_restore_page_tags) ret SYM_FUNC_END(mte_restore_page_tags) + +/* + * Assign allocation tags for a region of memory based on the pointer tag + * x0 - source pointer + * x1 - size + * + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and + * size must be non-zero and MTE_GRANULE_SIZE aligned. + */ +SYM_FUNC_START(mte_assign_mem_tag_range) +1: stg x0, [x0] + add x0, x0, #MTE_GRANULE_SIZE + subs x1, x1, #MTE_GRANULE_SIZE + b.gt 1b + ret +SYM_FUNC_END(mte_assign_mem_tag_range) From patchwork Fri Dec 18 22:03:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983099 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D91EC4361B for ; Fri, 18 Dec 2020 22:03:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2182823B87 for ; Fri, 18 Dec 2020 22:03:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2182823B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id ADF796B0093; Fri, 18 Dec 2020 17:03:32 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A1D526B0095; Fri, 18 Dec 2020 17:03:32 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8BEE46B0096; Fri, 18 Dec 2020 17:03:32 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0093.hostedemail.com [216.40.44.93]) by kanga.kvack.org (Postfix) with ESMTP id 6482C6B0093 for ; Fri, 18 Dec 2020 17:03:32 -0500 (EST) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 362352C37 for ; Fri, 18 Dec 2020 22:03:32 +0000 (UTC) X-FDA: 77607780264.29.stick35_1f16f6e27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id 121631801E316 for ; Fri, 18 Dec 2020 22:03:32 +0000 (UTC) X-HE-Tag: stick35_1f16f6e27440 X-Filterd-Recvd-Size: 5671 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf47.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:31 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:29 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329010; bh=ah5TPYDRLqIHq32JpImdGs7kOi1DMjEaUHb3vestQn4=; h=From:To:Subject:In-Reply-To:From; 
b=C8vc3oa8lfU1zrLZXYK5CMS4oiuWizZ4w9BcitB5pJv4y/JgA36o2+8xUvNtzhfhq pnXRWfmboSHBRo+Rw2V2lt4RVpx1HGtpQkN5mo9onSrEfJTpExkT8qp2NxIAgozsgm NML7B1h2vQlbXveX9ZJ8S53VoJvGEGbXBRS9bfYU= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 38/78] arm64: mte: reset the page tag in page->flags Message-ID: <20201218220329.YlRLmNiSU%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: mte: reset the page tag in page->flags For compatibility with the other modes, hardware tag-based KASAN stores the tag associated with a page in page->flags. Because of this, the kernel faults on access when it allocates a page with an initial tag and the user later changes the tags. Reset the tag associated with a page by the kernel in all the meaningful places to prevent kernel faults on access. Note: An alternative to this approach could be to modify page_to_virt(). This, though, could end up being racy: if one CPU checks the PG_mte_tagged bit and decides that the page is not tagged, but another CPU maps the same page with PROT_MTE so that it becomes tagged, the subsequent kernel access would fail. Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/kernel/hibernate.c | 5 +++++ arch/arm64/kernel/mte.c | 9 +++++++++ arch/arm64/mm/copypage.c | 9 +++++++++ arch/arm64/mm/mteswap.c | 9 +++++++++ 4 files changed, 32 insertions(+) --- a/arch/arm64/kernel/hibernate.c~arm64-mte-reset-the-page-tag-in-page-flags +++ a/arch/arm64/kernel/hibernate.c @@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void unsigned long pfn = xa_state.xa_index; struct page *page = pfn_to_online_page(pfn); + /* + * It is not required to invoke page_kasan_tag_reset(page) + * at this point since the tags stored in page->flags are + * already restored. + */ mte_restore_page_tags(page_address(page), tags); mte_free_tag_storage(tags); --- a/arch/arm64/kernel/mte.c~arm64-mte-reset-the-page-tag-in-page-flags +++ a/arch/arm64/kernel/mte.c @@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct pa return; } + page_kasan_tag_reset(page); + /* + * We need smp_wmb() in between setting the flags and clearing the + * tags because if another thread reads page->flags and builds a + * tagged address out of it, there is an actual dependency to the + * memory access, but on the current thread we do not guarantee that + * the new page->flags are visible before the tags were updated.
+ */ + smp_wmb(); mte_clear_page_tags(page_address(page)); } --- a/arch/arm64/mm/copypage.c~arm64-mte-reset-the-page-tag-in-page-flags +++ a/arch/arm64/mm/copypage.c @@ -23,6 +23,15 @@ void copy_highpage(struct page *to, stru if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) { set_bit(PG_mte_tagged, &to->flags); + page_kasan_tag_reset(to); + /* + * We need smp_wmb() in between setting the flags and clearing the + * tags because if another thread reads page->flags and builds a + * tagged address out of it, there is an actual dependency to the + * memory access, but on the current thread we do not guarantee that + * the new page->flags are visible before the tags were updated. + */ + smp_wmb(); mte_copy_page_tags(kto, kfrom); } } --- a/arch/arm64/mm/mteswap.c~arm64-mte-reset-the-page-tag-in-page-flags +++ a/arch/arm64/mm/mteswap.c @@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, if (!tags) return false; + page_kasan_tag_reset(page); + /* + * We need smp_wmb() in between setting the flags and clearing the + * tags because if another thread reads page->flags and builds a + * tagged address out of it, there is an actual dependency to the + * memory access, but on the current thread we do not guarantee that + * the new page->flags are visible before the tags were updated. + */ + smp_wmb(); mte_restore_page_tags(page_address(page), tags); return true; From patchwork Fri Dec 18 22:03:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983101 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A25D2C4361B for ; Fri, 18 Dec 2020 22:03:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 36D4F23BAE for ; Fri, 18 Dec 2020 22:03:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 36D4F23BAE Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CD91B6B005C; Fri, 18 Dec 2020 17:03:35 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C65856B005D; Fri, 18 Dec 2020 17:03:35 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AB8616B0073; Fri, 18 Dec 2020 17:03:35 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0002.hostedemail.com [216.40.44.2]) by kanga.kvack.org (Postfix) with ESMTP id 877466B005C for ; Fri, 18 Dec 2020 17:03:35 -0500 (EST) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 4DB32181AEF1D for ; Fri, 18 Dec 2020 22:03:35 +0000 (UTC) X-FDA: 77607780390.01.rifle25_150961827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin01.hostedemail.com (Postfix) with ESMTP id 1AA4210047BFF for ; Fri, 18 Dec 2020 
22:03:35 +0000 (UTC) X-HE-Tag: rifle25_150961827440 X-Filterd-Recvd-Size: 6598 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:34 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:33 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329013; bh=qpMIML1/gII246ISYK6mXLCd8OVwFtW20z8VkUZtJjY=; h=From:To:Subject:In-Reply-To:From; b=nzIH8Bzw6W1TZE6QLZ7KALeB1jqEZw/SDei2dyRlbz3YbwpFL5DAN1GCQ7sogHr1H eXbFMN1j2JwdmraxdgXkJ32mPi32PYwdMDH/L5Z+qMNarBMYAn5wMc5scDrJzLzjQB Z4GVGelKM/uHqT2rDDe9ljMTOEAyKXQ4wCaH40/o= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 39/78] arm64: mte: add in-kernel tag fault handler Message-ID: <20201218220333.9v99rO2V0%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: mte: add in-kernel tag fault handler Add the implementation of the in-kernel fault handler. When a tag fault happens on a kernel address: * MTE is disabled on the current CPU, * the execution continues. When a tag fault happens on a user address: * the kernel executes do_bad_area() and panics. The tag fault handler for kernel addresses is currently empty and will be filled in by a future commit. Link: https://lkml.kernel.org/r/ad31529b073e22840b7a2246172c2b67747ed7c4.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Co-developed-by: Andrey Konovalov Signed-off-by: Andrey Konovalov Signed-off-by: Catalin Marinas Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon [catalin.marinas@arm.com: ensure CONFIG_ARM64_PAN is enabled with MTE] Link: https://lkml.kernel.org/r/20201203102628.GB2224@gaia Signed-off-by: Andrew Morton --- arch/arm64/Kconfig | 2 + arch/arm64/include/asm/uaccess.h | 23 ++++++++++++++ arch/arm64/mm/fault.c | 45 +++++++++++++++++++++++++++++ 3 files changed, 70 insertions(+) --- a/arch/arm64/include/asm/uaccess.h~arm64-mte-add-in-kernel-tag-fault-handler +++ a/arch/arm64/include/asm/uaccess.h @@ -159,8 +159,28 @@ static inline void __uaccess_enable_hw_p CONFIG_ARM64_PAN)); } +/* + * The Tag Check Flag (TCF) mode for MTE is per EL, hence TCF0 + * affects EL0 and TCF affects EL1 irrespective of which TTBR is + * used. + * The kernel accesses TTBR0 usually with LDTR/STTR instructions + * when UAO is available, so these would act as EL0 accesses using + * TCF0. + * However futex.h code uses exclusives which would be executed as + * EL1, this can potentially cause a tag check fault even if the + * user disables TCF0. + * + * To address the problem we set the PSTATE.TCO bit in uaccess_enable() + * and reset it in uaccess_disable(). + * + * The Tag check override (TCO) bit disables temporarily the tag checking + * preventing the issue. 
+ */ static inline void uaccess_disable_privileged(void) { + asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(0), + ARM64_MTE, CONFIG_KASAN_HW_TAGS)); + if (uaccess_ttbr0_disable()) return; @@ -169,6 +189,9 @@ static inline void uaccess_disable_privi static inline void uaccess_enable_privileged(void) { + asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1), + ARM64_MTE, CONFIG_KASAN_HW_TAGS)); + if (uaccess_ttbr0_enable()) return; --- a/arch/arm64/Kconfig~arm64-mte-add-in-kernel-tag-fault-handler +++ a/arch/arm64/Kconfig @@ -1649,6 +1649,8 @@ config ARM64_MTE default y depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI depends on AS_HAS_ARMV8_5 + # Required for tag checking in the uaccess routines + depends on ARM64_PAN select ARCH_USES_HIGH_VMA_FLAGS help Memory Tagging (part of the ARMv8.5 Extensions) provides --- a/arch/arm64/mm/fault.c~arm64-mte-add-in-kernel-tag-fault-handler +++ a/arch/arm64/mm/fault.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include #include @@ -296,6 +297,44 @@ static void die_kernel_fault(const char do_exit(SIGKILL); } +static void report_tag_fault(unsigned long addr, unsigned int esr, + struct pt_regs *regs) +{ +} + +static void do_tag_recovery(unsigned long addr, unsigned int esr, + struct pt_regs *regs) +{ + static bool reported; + + if (!READ_ONCE(reported)) { + report_tag_fault(addr, esr, regs); + WRITE_ONCE(reported, true); + } + + /* + * Disable MTE Tag Checking on the local CPU for the current EL. + * It will be done lazily on the other CPUs when they will hit a + * tag fault. + */ + sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_NONE); + isb(); +} + +static bool is_el1_mte_sync_tag_check_fault(unsigned int esr) +{ + unsigned int ec = ESR_ELx_EC(esr); + unsigned int fsc = esr & ESR_ELx_FSC; + + if (ec != ESR_ELx_EC_DABT_CUR) + return false; + + if (fsc == ESR_ELx_FSC_MTE) + return true; + + return false; +} + static void __do_kernel_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs) { @@ -312,6 +351,12 @@ static void __do_kernel_fault(unsigned l "Ignoring spurious kernel translation fault at virtual address %016lx\n", addr)) return; + if (is_el1_mte_sync_tag_check_fault(esr)) { + do_tag_recovery(addr, esr, regs); + + return; + } + if (is_el1_permission_fault(addr, esr, regs)) { if (esr & ESR_ELx_WNR) msg = "write to read-only memory"; From patchwork Fri Dec 18 22:03:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983103 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BD18C35273 for ; Fri, 18 Dec 2020 22:03:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DA9FA23B8C for ; Fri, 18 Dec 2020 22:03:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DA9FA23B8C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix) id 70D796B005D; Fri, 18 Dec 2020 17:03:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 6BB7E6B0073; Fri, 18 Dec 2020 17:03:39 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5872E6B0075; Fri, 18 Dec 2020 17:03:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0127.hostedemail.com [216.40.44.127]) by kanga.kvack.org (Postfix) with ESMTP id 431AE6B005D for ; Fri, 18 Dec 2020 17:03:39 -0500 (EST) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id F18465830 for ; Fri, 18 Dec 2020 22:03:38 +0000 (UTC) X-FDA: 77607780516.11.duck27_3d0e55f27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin11.hostedemail.com (Postfix) with ESMTP id C8E82180FA141 for ; Fri, 18 Dec 2020 22:03:38 +0000 (UTC) X-HE-Tag: duck27_3d0e55f27440 X-Filterd-Recvd-Size: 5849 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf43.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:37 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:36 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329017; bh=gh4y1Xk5AtcuXlMWZAg1eDaRrWbiCuy7OvwHp+REC6g=; h=From:To:Subject:In-Reply-To:From; b=mywKPSYJorLe+I9EG93iuK3MtjxPs6RMQgOClw4YlAdqhnChDll1Xk41vyZ8xelUT 9lLQ8rdyqnhH+3SIc1tLCDSHdKIh/267nPhf9YODRJrualG1v9GDnzvjd/un1n4nFk 1QnrcT/jiozUPrv9e5zBwDzn5IXlXrwyFqifC8ac= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 40/78] arm64: kasan: allow enabling in-kernel MTE Message-ID: <20201218220336.YDA6D1lzU%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: kasan: allow enabling in-kernel MTE Hardware tag-based KASAN relies on the Memory Tagging Extension (MTE) feature and requires it to be enabled. This patch adds a new mte_enable_kernel() helper that enables MTE in Synchronous mode in EL1 and is intended to be called from the KASAN runtime during initialization. When MTE is configured in synchronous mode, a tag check fault causes a synchronous data abort. As part of this change, enable the match-all tag for EL1 to allow the kernel to access user pages without faulting. This is required because the kernel does not have knowledge of the tags set by the user in a page. Note: For MTE, the TCF bit field in SCTLR_EL1 affects only EL1, in a similar way as TCF0 affects EL0. MTE is built on top of the Top Byte Ignore (TBI) feature, hence TBI is enabled as part of this patch as well.
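As an aside, here is a minimal sketch (not part of this patch) of how a KASAN_HW_TAGS runtime might consume the new helper during per-CPU initialization; the wrapper name below is hypothetical and the real call site is only wired up by a later patch in this series:

	#include <asm/cpufeature.h>
	#include <asm/mte-kasan.h>

	/* Hypothetical per-CPU init hook; illustration only. */
	static void kasan_hw_tags_cpu_init_sketch(void)
	{
		/* Assumption: do nothing on CPUs without MTE. */
		if (!system_supports_mte())
			return;

		/* Switch EL1 tag check faults to synchronous reporting. */
		mte_enable_kernel();
	}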
Link: https://lkml.kernel.org/r/7352b0a0899af65c2785416c8ca6bf3845b66fa1.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Co-developed-by: Andrey Konovalov Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/mte-kasan.h | 6 ++++++ arch/arm64/kernel/mte.c | 7 +++++++ arch/arm64/mm/proc.S | 23 ++++++++++++++++++++--- 3 files changed, 33 insertions(+), 3 deletions(-) --- a/arch/arm64/include/asm/mte-kasan.h~arm64-kasan-allow-enabling-in-kernel-mte +++ a/arch/arm64/include/asm/mte-kasan.h @@ -29,6 +29,8 @@ u8 mte_get_mem_tag(void *addr); u8 mte_get_random_tag(void); void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag); +void mte_enable_kernel(void); + #else /* CONFIG_ARM64_MTE */ static inline u8 mte_get_ptr_tag(void *ptr) @@ -49,6 +51,10 @@ static inline void *mte_set_mem_tag_rang return addr; } +static inline void mte_enable_kernel(void) +{ +} + #endif /* CONFIG_ARM64_MTE */ #endif /* __ASSEMBLY__ */ --- a/arch/arm64/kernel/mte.c~arm64-kasan-allow-enabling-in-kernel-mte +++ a/arch/arm64/kernel/mte.c @@ -129,6 +129,13 @@ void *mte_set_mem_tag_range(void *addr, return ptr; } +void mte_enable_kernel(void) +{ + /* Enable MTE Sync Mode for EL1. */ + sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_SYNC); + isb(); +} + static void update_sctlr_el1_tcf0(u64 tcf0) { /* ISB required for the kernel uaccess routines */ --- a/arch/arm64/mm/proc.S~arm64-kasan-allow-enabling-in-kernel-mte +++ a/arch/arm64/mm/proc.S @@ -40,9 +40,15 @@ #define TCR_CACHE_FLAGS TCR_IRGN_WBWA | TCR_ORGN_WBWA #ifdef CONFIG_KASAN_SW_TAGS -#define TCR_KASAN_FLAGS TCR_TBI1 | TCR_TBID1 +#define TCR_KASAN_SW_FLAGS TCR_TBI1 | TCR_TBID1 #else -#define TCR_KASAN_FLAGS 0 +#define TCR_KASAN_SW_FLAGS 0 +#endif + +#ifdef CONFIG_KASAN_HW_TAGS +#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 +#else +#define TCR_KASAN_HW_FLAGS 0 #endif /* @@ -427,6 +433,10 @@ SYM_FUNC_START(__cpu_setup) */ mov_q x5, MAIR_EL1_SET #ifdef CONFIG_ARM64_MTE + mte_tcr .req x20 + + mov mte_tcr, #0 + /* * Update MAIR_EL1, GCR_EL1 and TFSR*_EL1 if MTE is supported * (ID_AA64PFR1_EL1[11:8] > 1). 
@@ -447,6 +457,9 @@ SYM_FUNC_START(__cpu_setup) /* clear any pending tag check faults in TFSR*_EL1 */ msr_s SYS_TFSR_EL1, xzr msr_s SYS_TFSRE0_EL1, xzr + + /* set the TCR_EL1 bits */ + mov_q mte_tcr, TCR_KASAN_HW_FLAGS 1: #endif msr mair_el1, x5 @@ -456,7 +469,11 @@ SYM_FUNC_START(__cpu_setup) */ mov_q x10, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ - TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS + TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS +#ifdef CONFIG_ARM64_MTE + orr x10, x10, mte_tcr + .unreq mte_tcr +#endif tcr_clear_errata_bits x10, x9, x5 #ifdef CONFIG_ARM64_VA_BITS_52 From patchwork Fri Dec 18 22:03:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983105 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36B45C4361B for ; Fri, 18 Dec 2020 22:03:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D923123B8C for ; Fri, 18 Dec 2020 22:03:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D923123B8C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 783336B0073; Fri, 18 Dec 2020 17:03:42 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 735EC6B0075; Fri, 18 Dec 2020 17:03:42 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5AADE6B0078; Fri, 18 Dec 2020 17:03:42 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 451706B0073 for ; Fri, 18 Dec 2020 17:03:42 -0500 (EST) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 0BFF51802BACB for ; Fri, 18 Dec 2020 22:03:42 +0000 (UTC) X-FDA: 77607780684.20.pan55_480d5aa27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id D440F180CC34B for ; Fri, 18 Dec 2020 22:03:41 +0000 (UTC) X-HE-Tag: pan55_480d5aa27440 X-Filterd-Recvd-Size: 6146 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf32.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:41 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:39 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329020; bh=DDDH2RVDluwp8itwATWBD8/EvB5Hgdjo8/Xdk/NWnkg=; h=From:To:Subject:In-Reply-To:From; b=Ht5e/4GCZA3MsYjivYDn37fGpIHcHGW2I5mJXvVaAKKhCg8Mz/EfMMOOpbtmPM0Cs z4WqEd2SUZo16j1dS4jlm38h9PkkAM4+Kov7ly51LEZPmnOrEvEjTGJ4ZlAP1QTIJt 3Md3MuSORzF6l9NDLfS7LwQrG9LHpkhvS5eeRDOU= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, 
catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 41/78] arm64: mte: convert gcr_user into an exclude mask Message-ID: <20201218220339.DTCIzvVAQ%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: mte: convert gcr_user into an exclude mask The gcr_user mask is a per-thread mask that represents the tags that are excluded from random generation when the Memory Tagging Extension is present and an 'irg' instruction is invoked. gcr_user affects the behavior on EL0 only. Currently that mask is an include mask and it is controlled by the user via prctl() while GCR_EL1 accepts an exclude mask. Convert the include mask into an exclude one to make the register setting easier. Note: This change will affect gcr_kernel (for EL1) introduced with a future patch. Link: https://lkml.kernel.org/r/946dd31be833b660334c4f93410acf6d6c4cf3c4.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/processor.h | 2 - arch/arm64/kernel/mte.c | 29 +++++++++++++-------------- 2 files changed, 16 insertions(+), 15 deletions(-) --- a/arch/arm64/include/asm/processor.h~arm64-mte-convert-gcr_user-into-an-exclude-mask +++ a/arch/arm64/include/asm/processor.h @@ -152,7 +152,7 @@ struct thread_struct { #endif #ifdef CONFIG_ARM64_MTE u64 sctlr_tcf0; - u64 gcr_user_incl; + u64 gcr_user_excl; #endif }; --- a/arch/arm64/kernel/mte.c~arm64-mte-convert-gcr_user-into-an-exclude-mask +++ a/arch/arm64/kernel/mte.c @@ -156,23 +156,22 @@ static void set_sctlr_el1_tcf0(u64 tcf0) preempt_enable(); } -static void update_gcr_el1_excl(u64 incl) +static void update_gcr_el1_excl(u64 excl) { - u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK; /* - * Note that 'incl' is an include mask (controlled by the user via - * prctl()) while GCR_EL1 accepts an exclude mask. + * Note that the mask controlled by the user via prctl() is an + * include while GCR_EL1 accepts an exclude mask. * No need for ISB since this only affects EL0 currently, implicit * with ERET.
*/ sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl); } -static void set_gcr_el1_excl(u64 incl) +static void set_gcr_el1_excl(u64 excl) { - current->thread.gcr_user_incl = incl; - update_gcr_el1_excl(incl); + current->thread.gcr_user_excl = excl; + update_gcr_el1_excl(excl); } void flush_mte_state(void) @@ -187,7 +186,7 @@ void flush_mte_state(void) /* disable tag checking */ set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE); /* reset tag generation mask */ - set_gcr_el1_excl(0); + set_gcr_el1_excl(SYS_GCR_EL1_EXCL_MASK); } void mte_thread_switch(struct task_struct *next) @@ -198,7 +197,7 @@ void mte_thread_switch(struct task_struc /* avoid expensive SCTLR_EL1 accesses if no change */ if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0) update_sctlr_el1_tcf0(next->thread.sctlr_tcf0); - update_gcr_el1_excl(next->thread.gcr_user_incl); + update_gcr_el1_excl(next->thread.gcr_user_excl); } void mte_suspend_exit(void) @@ -206,13 +205,14 @@ void mte_suspend_exit(void) if (!system_supports_mte()) return; - update_gcr_el1_excl(current->thread.gcr_user_incl); + update_gcr_el1_excl(current->thread.gcr_user_excl); } long set_mte_ctrl(struct task_struct *task, unsigned long arg) { u64 tcf0; - u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT; + u64 gcr_excl = ~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) & + SYS_GCR_EL1_EXCL_MASK; if (!system_supports_mte()) return 0; @@ -233,10 +233,10 @@ long set_mte_ctrl(struct task_struct *ta if (task != current) { task->thread.sctlr_tcf0 = tcf0; - task->thread.gcr_user_incl = gcr_incl; + task->thread.gcr_user_excl = gcr_excl; } else { set_sctlr_el1_tcf0(tcf0); - set_gcr_el1_excl(gcr_incl); + set_gcr_el1_excl(gcr_excl); } return 0; @@ -245,11 +245,12 @@ long set_mte_ctrl(struct task_struct *ta long get_mte_ctrl(struct task_struct *task) { unsigned long ret; + u64 incl = ~task->thread.gcr_user_excl & SYS_GCR_EL1_EXCL_MASK; if (!system_supports_mte()) return 0; - ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT; + ret = incl << PR_MTE_TAG_SHIFT; switch (task->thread.sctlr_tcf0) { case SCTLR_EL1_TCF0_NONE: From patchwork Fri Dec 18 22:03:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983107 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B200C4361B for ; Fri, 18 Dec 2020 22:03:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1BF7923B8C for ; Fri, 18 Dec 2020 22:03:46 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1BF7923B8C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id ADCAC6B0075; Fri, 18 Dec 2020 17:03:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A64FB6B0078; Fri, 18 Dec 2020 17:03:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, 
from userid 63042) id 92D8E6B007D; Fri, 18 Dec 2020 17:03:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0241.hostedemail.com [216.40.44.241]) by kanga.kvack.org (Postfix) with ESMTP id 7C1CE6B0075 for ; Fri, 18 Dec 2020 17:03:45 -0500 (EST) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 42D585010 for ; Fri, 18 Dec 2020 22:03:45 +0000 (UTC) X-FDA: 77607780810.29.food31_4b06a4f27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id 2100B1802BACB for ; Fri, 18 Dec 2020 22:03:45 +0000 (UTC) X-HE-Tag: food31_4b06a4f27440 X-Filterd-Recvd-Size: 8471 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf24.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:44 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:43 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329023; bh=K55WA5i+MZqHFYpmOdSCc9fuL4mhAA32VNirWDBvmWY=; h=From:To:Subject:In-Reply-To:From; b=HGM1J2Y6bCPYpNIpXnJRT3ooQz6MeRtgHqGDghovrB5M+4qdYsW+isndbXSyjiLwO c/DiedEgl3bdF3XnESVJEQhsJ2CpEAPPbp38bml4Roq3KMe/ccWMQVgQHboD874z0H PIsAm9ZDaPdJKSwCcunn9LCPQ8BqYxpn7coDBRJk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 42/78] arm64: mte: switch GCR_EL1 in kernel entry and exit Message-ID: <20201218220343.7OU5VdVs1%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: arm64: mte: switch GCR_EL1 in kernel entry and exit When MTE is present, the GCR_EL1 register contains the tag mask that allows tags to be excluded from random generation via the IRG instruction. With the introduction of the new Tag-Based KASAN API that provides a mechanism to reserve tags for special reasons, the MTE implementation has to make sure that the GCR_EL1 setting for the kernel does not affect the userspace processes and vice versa. Save and restore the kernel/user mask in GCR_EL1 in kernel entry and exit.
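To illustrate the mask handling described above, a small standalone sketch (not code from this patch) of the include-to-exclude conversion applied before a mask is written to GCR_EL1; GCR_EXCL_MASK_SKETCH stands in for the kernel's SYS_GCR_EL1_EXCL_MASK and assumes the architectural 16-bit Exclude field, one bit per tag value 0..15:

	/* A set bit excludes that tag value from IRG random generation. */
	#define GCR_EXCL_MASK_SKETCH	0xffffUL

	static unsigned long gcr_excl_from_incl(unsigned long incl)
	{
		return ~incl & GCR_EXCL_MASK_SKETCH;
	}

	/*
	 * Example: if only tags 0..2 may be generated, incl == 0x0007 and
	 * the exclude mask loaded into GCR_EL1 on kernel entry is 0xfff8;
	 * the saved per-thread gcr_user_excl is restored on return to
	 * userspace.
	 */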
Link: https://lkml.kernel.org/r/578b03294708cc7258fad0dc9c2a2e809e5a8214.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Co-developed-by: Andrey Konovalov Signed-off-by: Andrey Konovalov Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/mte-def.h | 1 arch/arm64/include/asm/mte-kasan.h | 5 +++ arch/arm64/include/asm/mte.h | 2 + arch/arm64/kernel/asm-offsets.c | 3 + arch/arm64/kernel/entry.S | 41 +++++++++++++++++++++++++++ arch/arm64/kernel/mte.c | 31 ++++++++++++++++++-- 6 files changed, 79 insertions(+), 4 deletions(-) --- a/arch/arm64/include/asm/mte-def.h~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/include/asm/mte-def.h @@ -10,6 +10,5 @@ #define MTE_TAG_SHIFT 56 #define MTE_TAG_SIZE 4 #define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT) -#define MTE_TAG_MAX (MTE_TAG_MASK >> MTE_TAG_SHIFT) #endif /* __ASM_MTE_DEF_H */ --- a/arch/arm64/include/asm/mte.h~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/include/asm/mte.h @@ -18,6 +18,8 @@ #include +extern u64 gcr_kernel_excl; + void mte_clear_page_tags(void *addr); unsigned long mte_copy_tags_from_user(void *to, const void __user *from, unsigned long n); --- a/arch/arm64/include/asm/mte-kasan.h~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/include/asm/mte-kasan.h @@ -30,6 +30,7 @@ u8 mte_get_random_tag(void); void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag); void mte_enable_kernel(void); +void mte_init_tags(u64 max_tag); #else /* CONFIG_ARM64_MTE */ @@ -55,6 +56,10 @@ static inline void mte_enable_kernel(voi { } +static inline void mte_init_tags(u64 max_tag) +{ +} + #endif /* CONFIG_ARM64_MTE */ #endif /* __ASSEMBLY__ */ --- a/arch/arm64/kernel/asm-offsets.c~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/kernel/asm-offsets.c @@ -47,6 +47,9 @@ int main(void) DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel)); #endif +#ifdef CONFIG_ARM64_MTE + DEFINE(THREAD_GCR_EL1_USER, offsetof(struct task_struct, thread.gcr_user_excl)); +#endif BLANK(); DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); DEFINE(S_X2, offsetof(struct pt_regs, regs[2])); --- a/arch/arm64/kernel/entry.S~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/kernel/entry.S @@ -173,6 +173,43 @@ alternative_else_nop_endif #endif .endm + .macro mte_set_gcr, tmp, tmp2 +#ifdef CONFIG_ARM64_MTE + /* + * Calculate and set the exclude mask preserving + * the RRND (bit[16]) setting. 
+ */ + mrs_s \tmp2, SYS_GCR_EL1 + bfi \tmp2, \tmp, #0, #16 + msr_s SYS_GCR_EL1, \tmp2 + isb +#endif + .endm + + .macro mte_set_kernel_gcr, tmp, tmp2 +#ifdef CONFIG_KASAN_HW_TAGS +alternative_if_not ARM64_MTE + b 1f +alternative_else_nop_endif + ldr_l \tmp, gcr_kernel_excl + + mte_set_gcr \tmp, \tmp2 +1: +#endif + .endm + + .macro mte_set_user_gcr, tsk, tmp, tmp2 +#ifdef CONFIG_ARM64_MTE +alternative_if_not ARM64_MTE + b 1f +alternative_else_nop_endif + ldr \tmp, [\tsk, #THREAD_GCR_EL1_USER] + + mte_set_gcr \tmp, \tmp2 +1: +#endif + .endm + .macro kernel_entry, el, regsize = 64 .if \regsize == 32 mov w0, w0 // zero upper 32 bits of x0 @@ -212,6 +249,8 @@ alternative_else_nop_endif ptrauth_keys_install_kernel tsk, x20, x22, x23 + mte_set_kernel_gcr x22, x23 + scs_load tsk, x20 .else add x21, sp, #S_FRAME_SIZE @@ -315,6 +354,8 @@ alternative_else_nop_endif /* No kernel C function calls after this as user keys are set. */ ptrauth_keys_install_user tsk, x0, x1, x2 + mte_set_user_gcr tsk, x0, x1 + apply_ssbd 0, x0, x1 .endif --- a/arch/arm64/kernel/mte.c~arm64-mte-switch-gcr_el1-in-kernel-entry-and-exit +++ a/arch/arm64/kernel/mte.c @@ -23,6 +23,8 @@ #include #include +u64 gcr_kernel_excl __ro_after_init; + static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap) { pte_t old_pte = READ_ONCE(*ptep); @@ -129,6 +131,26 @@ void *mte_set_mem_tag_range(void *addr, return ptr; } +void mte_init_tags(u64 max_tag) +{ + static bool gcr_kernel_excl_initialized; + + if (!gcr_kernel_excl_initialized) { + /* + * The format of the tags in KASAN is 0xFF and in MTE is 0xF. + * This conversion extracts an MTE tag from a KASAN tag. + */ + u64 incl = GENMASK(FIELD_GET(MTE_TAG_MASK >> MTE_TAG_SHIFT, + max_tag), 0); + + gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK; + gcr_kernel_excl_initialized = true; + } + + /* Enable the kernel exclude mask for random tags generation. */ + write_sysreg_s(SYS_GCR_EL1_RRND | gcr_kernel_excl, SYS_GCR_EL1); +} + void mte_enable_kernel(void) { /* Enable MTE Sync Mode for EL1. 
*/ @@ -171,7 +193,11 @@ static void update_gcr_el1_excl(u64 excl static void set_gcr_el1_excl(u64 excl) { current->thread.gcr_user_excl = excl; - update_gcr_el1_excl(excl); + + /* + * SYS_GCR_EL1 will be set to current->thread.gcr_user_excl value + * by mte_set_user_gcr() in kernel_exit, + */ } void flush_mte_state(void) @@ -197,7 +223,6 @@ void mte_thread_switch(struct task_struc /* avoid expensive SCTLR_EL1 accesses if no change */ if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0) update_sctlr_el1_tcf0(next->thread.sctlr_tcf0); - update_gcr_el1_excl(next->thread.gcr_user_excl); } void mte_suspend_exit(void) @@ -205,7 +230,7 @@ void mte_suspend_exit(void) if (!system_supports_mte()) return; - update_gcr_el1_excl(current->thread.gcr_user_excl); + update_gcr_el1_excl(gcr_kernel_excl); } long set_mte_ctrl(struct task_struct *task, unsigned long arg) From patchwork Fri Dec 18 22:03:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983109 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56A12C4361B for ; Fri, 18 Dec 2020 22:03:49 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EC89723B5D for ; Fri, 18 Dec 2020 22:03:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EC89723B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8BEB76B0078; Fri, 18 Dec 2020 17:03:48 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 81B886B007D; Fri, 18 Dec 2020 17:03:48 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6E9F16B007E; Fri, 18 Dec 2020 17:03:48 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0023.hostedemail.com [216.40.44.23]) by kanga.kvack.org (Postfix) with ESMTP id 53A286B0078 for ; Fri, 18 Dec 2020 17:03:48 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 211AF18030082 for ; Fri, 18 Dec 2020 22:03:48 +0000 (UTC) X-FDA: 77607780936.18.apple08_0a0407a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin18.hostedemail.com (Postfix) with ESMTP id 00CCE100EC665 for ; Fri, 18 Dec 2020 22:03:47 +0000 (UTC) X-HE-Tag: apple08_0a0407a27440 X-Filterd-Recvd-Size: 3090 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf40.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:47 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:46 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329026; bh=rXPig3Y2BiXNEZi+1w0PI3jdh8Ii07tQ40TtSpXIZlQ=; h=From:To:Subject:In-Reply-To:From; b=wNCQAJwejZM/SAPn+/uDmP5lDFUh5Q+aeIDFD3F9NQB0xCmYpZ2F+lhMFdFlUrd7Q 
ZIMyf03KrKFB3MgWLfR1dPnAoyNeouvOJ4mUvEMmgv0XHXdEbojr/xU7fr0EZzh6ib qJN5E8vCR8GU6NNCmvRbhAlsoUp1EklORh9vPs6o= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 43/78] kasan, mm: untag page address in free_reserved_area Message-ID: <20201218220346.OSOCAeHhM%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: kasan, mm: untag page address in free_reserved_area free_reserved_area() memsets the pages belonging to a given memory area. As that memory hasn't been allocated via page_alloc, the KASAN tags that those pages have are 0x00. As the result the memset might result in a tag mismatch. Untag the address to avoid spurious faults. Link: https://lkml.kernel.org/r/ebef6425f4468d063e2f09c1b62ccbb2236b71d3.1606161801.git.andreyknvl@google.com Cc: Andrew Morton Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/page_alloc.c | 5 +++++ 1 file changed, 5 insertions(+) --- a/mm/page_alloc.c~kasan-mm-untag-page-address-in-free_reserved_area +++ a/mm/page_alloc.c @@ -7671,6 +7671,11 @@ unsigned long free_reserved_area(void *s * alias for the memset(). */ direct_map_addr = page_address(page); + /* + * Perform a kasan-unchecked memset() since this memory + * has not been initialized. 
+ */ + direct_map_addr = kasan_reset_tag(direct_map_addr); if ((unsigned int)poison <= 0xFF) memset(direct_map_addr, poison, PAGE_SIZE); From patchwork Fri Dec 18 22:03:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983111 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 931DFC4361B for ; Fri, 18 Dec 2020 22:03:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3BA8B23406 for ; Fri, 18 Dec 2020 22:03:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3BA8B23406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CFFE46B005C; Fri, 18 Dec 2020 17:03:51 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C608C6B0068; Fri, 18 Dec 2020 17:03:51 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B4EFF6B007D; Fri, 18 Dec 2020 17:03:51 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0031.hostedemail.com [216.40.44.31]) by kanga.kvack.org (Postfix) with ESMTP id 991B16B005C for ; Fri, 18 Dec 2020 17:03:51 -0500 (EST) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 624DF8249980 for ; Fri, 18 Dec 2020 22:03:51 +0000 (UTC) X-FDA: 77607781062.06.cakes50_620802227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin06.hostedemail.com (Postfix) with ESMTP id 40599100584F1 for ; Fri, 18 Dec 2020 22:03:51 +0000 (UTC) X-HE-Tag: cakes50_620802227440 X-Filterd-Recvd-Size: 3015 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf42.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:50 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:49 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329030; bh=2lgIsQoKW+rr7/1buXqUkMofgxltHlPWM9zrtbujye4=; h=From:To:Subject:In-Reply-To:From; b=a2vt7LKsSz7uZ9KWoi1Z1bzSHWdv+X/vgHwLGb+bi3dfQ66Mti0UN0c4iDcU0wF+z j/NFefRuXLDsOM8l9vzqgrg7Hm+HCgvswyo+TVC9+in7Y74X2qjknUgUMKM/c7+d/v vDUzxMFz/Hm7cnQ+h/dbTmw2vAH2fanRoHRemvNk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 44/78] arm64: kasan: align allocations for HW_TAGS Message-ID: <20201218220349.LMfphKdBC%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail 
v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: arm64: kasan: align allocations for HW_TAGS Hardware tag-based KASAN uses the memory tagging approach, which requires all allocations to be aligned to the memory granule size. Align the allocations to MTE_GRANULE_SIZE via ARCH_SLAB_MINALIGN when CONFIG_KASAN_HW_TAGS is enabled. Link: https://lkml.kernel.org/r/fe64131606b1c2aabfd34ae99554c0d9df18eb19.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/cache.h | 3 +++ 1 file changed, 3 insertions(+) --- a/arch/arm64/include/asm/cache.h~arm64-kasan-align-allocations-for-hw_tags +++ a/arch/arm64/include/asm/cache.h @@ -6,6 +6,7 @@ #define __ASM_CACHE_H #include +#include #define CTR_L1IP_SHIFT 14 #define CTR_L1IP_MASK 3 @@ -51,6 +52,8 @@ #ifdef CONFIG_KASAN_SW_TAGS #define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT) +#elif defined(CONFIG_KASAN_HW_TAGS) +#define ARCH_SLAB_MINALIGN MTE_GRANULE_SIZE #endif #ifndef __ASSEMBLY__ From patchwork Fri Dec 18 22:03:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983113 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 00CE7C3526B for ; Fri, 18 Dec 2020 22:03:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A4AE023406 for ; Fri, 18 Dec 2020 22:03:55 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A4AE023406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4247E6B0068; Fri, 18 Dec 2020 17:03:55 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3AE656B007D; Fri, 18 Dec 2020 17:03:55 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 29D726B007E; Fri, 18 Dec 2020 17:03:55 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0176.hostedemail.com [216.40.44.176]) by kanga.kvack.org (Postfix) with ESMTP id 138E36B0068 for ; Fri, 18 Dec 2020 17:03:55 -0500 (EST) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id D894E52AC for ; Fri, 18 Dec 2020 22:03:54 +0000 (UTC) X-FDA: 77607781188.26.skirt89_021362a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with 
ESMTP id B9D98180D511C for ; Fri, 18 Dec 2020 22:03:54 +0000 (UTC) X-HE-Tag: skirt89_021362a27440 X-Filterd-Recvd-Size: 4855 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf47.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:54 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:52 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329033; bh=8O226vQPibDszPVjnip5QVA2WJlOWY1esr9ASN8tjZQ=; h=From:To:Subject:In-Reply-To:From; b=l2J2lEKJjRIJ5HX1HThunsnts3CQXTB09LjRFhc9tJbb9yNarmxpgGw0wu+0MWRaQ aOWhTQghkktH4BPIV++aIdBuIO77hWBr572M6Y8BK0VGsnB0OT/7dxmO+/YlXo4d5J DZSsrKpOYQ+zHoMKik8USXT3pn7dl/JE403TJFZM= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 45/78] arm64: kasan: add arch layer for memory tagging helpers Message-ID: <20201218220352.EvO6Tyjws%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: arm64: kasan: add arch layer for memory tagging helpers This patch add a set of arch_*() memory tagging helpers currently only defined for arm64 when hardware tag-based KASAN is enabled. These helpers will be used by KASAN runtime to implement the hardware tag-based mode. The arch-level indirection level is introduced to simplify adding hardware tag-based KASAN support for other architectures in the future by defining the appropriate arch_*() macros. Link: https://lkml.kernel.org/r/fc9e5bb71201c03131a2fc00a74125723568dda9.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/memory.h | 9 +++++++++ mm/kasan/kasan.h | 26 ++++++++++++++++++++++++++ 2 files changed, 35 insertions(+) --- a/arch/arm64/include/asm/memory.h~arm64-kasan-add-arch-layer-for-memory-tagging-helpers +++ a/arch/arm64/include/asm/memory.h @@ -230,6 +230,15 @@ static inline const void *__tag_set(cons return (const void *)(__addr | __tag_shifted(tag)); } +#ifdef CONFIG_KASAN_HW_TAGS +#define arch_enable_tagging() mte_enable_kernel() +#define arch_init_tags(max_tag) mte_init_tags(max_tag) +#define arch_get_random_tag() mte_get_random_tag() +#define arch_get_mem_tag(addr) mte_get_mem_tag(addr) +#define arch_set_mem_tag_range(addr, size, tag) \ + mte_set_mem_tag_range((addr), (size), (tag)) +#endif /* CONFIG_KASAN_HW_TAGS */ + /* * Physical vs virtual RAM address space conversion. 
These are * private definitions which should NOT be used outside memory.h --- a/mm/kasan/kasan.h~arm64-kasan-add-arch-layer-for-memory-tagging-helpers +++ a/mm/kasan/kasan.h @@ -243,6 +243,32 @@ static inline const void *arch_kasan_set #define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr)) #define get_tag(addr) arch_kasan_get_tag(addr) +#ifdef CONFIG_KASAN_HW_TAGS + +#ifndef arch_enable_tagging +#define arch_enable_tagging() +#endif +#ifndef arch_init_tags +#define arch_init_tags(max_tag) +#endif +#ifndef arch_get_random_tag +#define arch_get_random_tag() (0xFF) +#endif +#ifndef arch_get_mem_tag +#define arch_get_mem_tag(addr) (0xFF) +#endif +#ifndef arch_set_mem_tag_range +#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr)) +#endif + +#define hw_enable_tagging() arch_enable_tagging() +#define hw_init_tags(max_tag) arch_init_tags(max_tag) +#define hw_get_random_tag() arch_get_random_tag() +#define hw_get_mem_tag(addr) arch_get_mem_tag(addr) +#define hw_set_mem_tag_range(addr, size, tag) arch_set_mem_tag_range((addr), (size), (tag)) + +#endif /* CONFIG_KASAN_HW_TAGS */ + /* * Exported functions for interfaces called from assembly or from generated * code. Declarations here to avoid warning about missing declarations. From patchwork Fri Dec 18 22:03:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983115 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22DB3C3526B for ; Fri, 18 Dec 2020 22:03:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C887623B87 for ; Fri, 18 Dec 2020 22:03:58 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C887623B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6E37E6B006C; Fri, 18 Dec 2020 17:03:58 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 66C616B007B; Fri, 18 Dec 2020 17:03:58 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 543786B007E; Fri, 18 Dec 2020 17:03:58 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0224.hostedemail.com [216.40.44.224]) by kanga.kvack.org (Postfix) with ESMTP id 3B1CC6B006C for ; Fri, 18 Dec 2020 17:03:58 -0500 (EST) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 06EEF4DC8 for ; Fri, 18 Dec 2020 22:03:58 +0000 (UTC) X-FDA: 77607781356.28.geese96_1e0312527440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin28.hostedemail.com (Postfix) with ESMTP id DF31D6D6D for ; Fri, 18 Dec 2020 22:03:57 +0000 (UTC) X-HE-Tag: geese96_1e0312527440 X-Filterd-Recvd-Size: 2920 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by 
imf50.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:03:57 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:55 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329036; bh=QMYVZBUjFli/H6X7hOz80AwmmUTQTOwCo6dvVlQUpWY=; h=From:To:Subject:In-Reply-To:From; b=1jkwA/khmn5vUwITa7pL5MPIFv3HjH//ZUgxEmiK217xdIao3+CwJ9Co2v7stOiZ7 DBsA3FITCZonlHMSI4hrMNJAvAaQKMBTKveSk6mv88gCzcH2T8opOWyuLRfysr9Uff Xmbw6qYshXyKXoy2GRWIoatr7pZfVi0otyEiVO7A= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 46/78] kasan: define KASAN_GRANULE_SIZE for HW_TAGS Message-ID: <20201218220355.eTvuK7ok6%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: define KASAN_GRANULE_SIZE for HW_TAGS Hardware tag-based KASAN has granules of MTE_GRANULE_SIZE. Define KASAN_GRANULE_SIZE to MTE_GRANULE_SIZE for CONFIG_KASAN_HW_TAGS. Link: https://lkml.kernel.org/r/3d15794b3d1b27447fd7fdf862c073192ba657bd.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/kasan.h | 6 ++++++ 1 file changed, 6 insertions(+) --- a/mm/kasan/kasan.h~kasan-define-kasan_granule_size-for-hw_tags +++ a/mm/kasan/kasan.h @@ -5,7 +5,13 @@ #include #include +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) +#else +#include +#define KASAN_GRANULE_SIZE MTE_GRANULE_SIZE +#endif + #define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1) #define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_GRANULE_SIZE << PAGE_SHIFT) From patchwork Fri Dec 18 22:03:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983117 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C37BC4361B for ; Fri, 18 Dec 2020 22:04:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3A8A523B87 for ; Fri, 18 Dec 2020 22:04:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3A8A523B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org 
Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C14306B0070; Fri, 18 Dec 2020 17:04:01 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id BD2526B007B; Fri, 18 Dec 2020 17:04:01 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A8C206B007D; Fri, 18 Dec 2020 17:04:01 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0073.hostedemail.com [216.40.44.73]) by kanga.kvack.org (Postfix) with ESMTP id 913346B0070 for ; Fri, 18 Dec 2020 17:04:01 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 5A394181AF5FD for ; Fri, 18 Dec 2020 22:04:01 +0000 (UTC) X-FDA: 77607781482.05.frog13_3d024b527440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin05.hostedemail.com (Postfix) with ESMTP id 3B5C9180B2639 for ; Fri, 18 Dec 2020 22:04:01 +0000 (UTC) X-HE-Tag: frog13_3d024b527440 X-Filterd-Recvd-Size: 3385 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf16.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:00 +0000 (UTC) Date: Fri, 18 Dec 2020 14:03:59 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329040; bh=bPiKkZ6Z/+1py3lpgzBMnzQUWTR7th/BFIrQJlJkvTc=; h=From:To:Subject:In-Reply-To:From; b=dAa4h4/BgI3AeZ0V//6rmlJAKQqQ9EWV9VJaw0sJHlMxs0FoEWP0HQvnfvCpQ/MUw cL54GY0vTjfLl4eMaWH6f4ucb9x4MT2yBnKx0ZINRM1a/h6wP6UIuEIeqXjLbKRrql heN4uIOFVbsrkodhRgpXABokZKaVbIsJ/gdWHwwI= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 47/78] kasan, x86, s390: update undef CONFIG_KASAN Message-ID: <20201218220359.0AVN6LBhe%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, x86, s390: update undef CONFIG_KASAN With the intoduction of hardware tag-based KASAN some kernel checks of this kind: ifdef CONFIG_KASAN will be updated to: if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) x86 and s390 use a trick to #undef CONFIG_KASAN for some of the code that isn't linked with KASAN runtime and shouldn't have any KASAN annotations. Also #undef CONFIG_KASAN_GENERIC with CONFIG_KASAN. 
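For illustration, a minimal preprocessor sketch of why the extra #undef is needed (a hypothetical uninstrumented early-boot file, not part of the diff below):

#undef CONFIG_KASAN		/* existing trick: compile without KASAN annotations */
#undef CONFIG_KASAN_GENERIC	/* new: keep the expanded checks below false as well */

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
/*
 * Without the second #undef this branch would still be taken and would
 * pull KASAN-instrumented helpers into code that is not linked against
 * the KASAN runtime.
 */
#endif

The two hunks below apply exactly this pattern to arch/s390/boot/string.c and arch/x86/boot/compressed/misc.h.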
Link: https://lkml.kernel.org/r/9d84bfaaf8fabe0fc89f913c9e420a30bd31a260.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Acked-by: Vasily Gorbik Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/s390/boot/string.c | 1 + arch/x86/boot/compressed/misc.h | 1 + 2 files changed, 2 insertions(+) --- a/arch/s390/boot/string.c~kasan-x86-s390-update-undef-config_kasan +++ a/arch/s390/boot/string.c @@ -3,6 +3,7 @@ #include #include #undef CONFIG_KASAN +#undef CONFIG_KASAN_GENERIC #include "../lib/string.c" int strncmp(const char *cs, const char *ct, size_t count) --- a/arch/x86/boot/compressed/misc.h~kasan-x86-s390-update-undef-config_kasan +++ a/arch/x86/boot/compressed/misc.h @@ -12,6 +12,7 @@ #undef CONFIG_PARAVIRT_XXL #undef CONFIG_PARAVIRT_SPINLOCKS #undef CONFIG_KASAN +#undef CONFIG_KASAN_GENERIC /* cpu_feature_enabled() cannot be used this early */ #define USE_EARLY_PGTABLE_L5 From patchwork Fri Dec 18 22:04:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983119 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41146C4361B for ; Fri, 18 Dec 2020 22:04:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C62C723B5D for ; Fri, 18 Dec 2020 22:04:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C62C723B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5863E6B0072; Fri, 18 Dec 2020 17:04:05 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 50C636B007B; Fri, 18 Dec 2020 17:04:05 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3FC796B007D; Fri, 18 Dec 2020 17:04:05 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0115.hostedemail.com [216.40.44.115]) by kanga.kvack.org (Postfix) with ESMTP id 235416B0072 for ; Fri, 18 Dec 2020 17:04:05 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id E2FF053B3 for ; Fri, 18 Dec 2020 22:04:04 +0000 (UTC) X-FDA: 77607781608.17.twig16_280c6e327440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id C03271814B0D0 for ; Fri, 18 Dec 2020 22:04:04 +0000 (UTC) X-HE-Tag: twig16_280c6e327440 X-Filterd-Recvd-Size: 14437 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:04 +0000 (UTC) Date: Fri, 18 
Dec 2020 14:04:02 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329043; bh=mphpU/kKZiL4hPerdJpms8BqxL07kb338OrSwnPQhrQ=; h=From:To:Subject:In-Reply-To:From; b=x+hfal4WzSPuAAVZUo+gHtoF1SmTAowbMA0l+1NmLVEl2aiLBySV6Bp4jVbcOY/Fx rDmvbB87yASdOpkA/0CW1z+mTJTKlsGEpiBGorAEjyTrYz0P7FrvCBBk1/cfIKBW8P D3wIwdRr4Gsu14IuugC36mDwpKj0T61YG2X8INnU= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 48/78] kasan, arm64: expand CONFIG_KASAN checks Message-ID: <20201218220402.uyNyJPQzQ%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: expand CONFIG_KASAN checks Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS. Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/Kconfig | 2 +- arch/arm64/Makefile | 2 +- arch/arm64/include/asm/assembler.h | 2 +- arch/arm64/include/asm/memory.h | 2 +- arch/arm64/include/asm/string.h | 5 +++-- arch/arm64/kernel/head.S | 2 +- arch/arm64/kernel/image-vars.h | 2 +- arch/arm64/kernel/kaslr.c | 3 ++- arch/arm64/kernel/module.c | 6 ++++-- arch/arm64/mm/ptdump.c | 6 +++--- include/linux/kasan-checks.h | 2 +- include/linux/kasan.h | 7 ++++--- include/linux/moduleloader.h | 3 ++- include/linux/string.h | 2 +- mm/ptdump.c | 13 ++++++++----- scripts/Makefile.lib | 2 ++ 16 files changed, 36 insertions(+), 25 deletions(-) --- a/arch/arm64/include/asm/assembler.h~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/include/asm/assembler.h @@ -473,7 +473,7 @@ USER(\label, ic ivau, \tmp2) // invali #define NOKPROBE(x) #endif -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define EXPORT_SYMBOL_NOKASAN(name) #else #define EXPORT_SYMBOL_NOKASAN(name) EXPORT_SYMBOL(name) --- a/arch/arm64/include/asm/memory.h~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/include/asm/memory.h @@ -72,7 +72,7 @@ * address space for the shadow region respectively. They can bloat the stack * significantly, so double the (minimum) stack size when they are in use. 
*/ -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) #define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \ + KASAN_SHADOW_OFFSET) --- a/arch/arm64/include/asm/string.h~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/include/asm/string.h @@ -5,7 +5,7 @@ #ifndef __ASM_STRING_H #define __ASM_STRING_H -#ifndef CONFIG_KASAN +#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) #define __HAVE_ARCH_STRRCHR extern char *strrchr(const char *, int c); @@ -48,7 +48,8 @@ extern void *__memset(void *, int, __ker void memcpy_flushcache(void *dst, const void *src, size_t cnt); #endif -#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(__SANITIZE_ADDRESS__) /* * For files that are not instrumented (e.g. mm/slub.c) we --- a/arch/arm64/Kconfig~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/Kconfig @@ -334,7 +334,7 @@ config BROKEN_GAS_INST config KASAN_SHADOW_OFFSET hex - depends on KASAN + depends on KASAN_GENERIC || KASAN_SW_TAGS default 0xdfff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS default 0xdfffc00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS --- a/arch/arm64/kernel/head.S~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/kernel/head.S @@ -433,7 +433,7 @@ SYM_FUNC_START_LOCAL(__primary_switched) bl __pi_memset dsb ishst // Make zero page visible to PTW -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif #ifdef CONFIG_RANDOMIZE_BASE --- a/arch/arm64/kernel/image-vars.h~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/kernel/image-vars.h @@ -37,7 +37,7 @@ __efistub_strncmp = __pi_strncmp; __efistub_strrchr = __pi_strrchr; __efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc; -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) __efistub___memcpy = __pi_memcpy; __efistub___memmove = __pi_memmove; __efistub___memset = __pi_memset; --- a/arch/arm64/kernel/kaslr.c~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/kernel/kaslr.c @@ -161,7 +161,8 @@ u64 __init kaslr_early_init(u64 dt_phys) /* use the top 16 bits to randomize the linear region */ memstart_offset_seed = seed >> 48; - if (IS_ENABLED(CONFIG_KASAN)) + if (IS_ENABLED(CONFIG_KASAN_GENERIC) || + IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* * KASAN does not expect the module region to intersect the * vmalloc region, since shadow memory is allocated for each --- a/arch/arm64/kernel/module.c~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/kernel/module.c @@ -30,7 +30,8 @@ void *module_alloc(unsigned long size) if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)) gfp_mask |= __GFP_NOWARN; - if (IS_ENABLED(CONFIG_KASAN)) + if (IS_ENABLED(CONFIG_KASAN_GENERIC) || + IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* don't exceed the static module region - see below */ module_alloc_end = MODULES_END; @@ -39,7 +40,8 @@ void *module_alloc(unsigned long size) NUMA_NO_NODE, __builtin_return_address(0)); if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && - !IS_ENABLED(CONFIG_KASAN)) + !IS_ENABLED(CONFIG_KASAN_GENERIC) && + !IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* * KASAN can only deal with module allocations being served * from the reserved module region, since the remainder of --- 
a/arch/arm64/Makefile~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/Makefile @@ -137,7 +137,7 @@ head-y := arch/arm64/kernel/head.o ifeq ($(CONFIG_KASAN_SW_TAGS), y) KASAN_SHADOW_SCALE_SHIFT := 4 -else +else ifeq ($(CONFIG_KASAN_GENERIC), y) KASAN_SHADOW_SCALE_SHIFT := 3 endif --- a/arch/arm64/mm/ptdump.c~kasan-arm64-expand-config_kasan-checks +++ a/arch/arm64/mm/ptdump.c @@ -29,7 +29,7 @@ enum address_markers_idx { PAGE_OFFSET_NR = 0, PAGE_END_NR, -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) KASAN_START_NR, #endif }; @@ -37,7 +37,7 @@ enum address_markers_idx { static struct addr_marker address_markers[] = { { PAGE_OFFSET, "Linear Mapping start" }, { 0 /* PAGE_END */, "Linear Mapping end" }, -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) { 0 /* KASAN_SHADOW_START */, "Kasan shadow start" }, { KASAN_SHADOW_END, "Kasan shadow end" }, #endif @@ -383,7 +383,7 @@ void ptdump_check_wx(void) static int ptdump_init(void) { address_markers[PAGE_END_NR].start_address = PAGE_END; -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START; #endif ptdump_initialize(); --- a/include/linux/kasan-checks.h~kasan-arm64-expand-config_kasan-checks +++ a/include/linux/kasan-checks.h @@ -9,7 +9,7 @@ * even in compilation units that selectively disable KASAN, but must use KASAN * to validate access to an address. Never use these in header files! */ -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bool __kasan_check_read(const volatile void *p, unsigned int size); bool __kasan_check_write(const volatile void *p, unsigned int size); #else --- a/include/linux/kasan.h~kasan-arm64-expand-config_kasan-checks +++ a/include/linux/kasan.h @@ -238,7 +238,8 @@ static inline void kasan_release_vmalloc #endif /* CONFIG_KASAN_VMALLOC */ -#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) /* * These functions provide a special case to support backing module @@ -248,12 +249,12 @@ static inline void kasan_release_vmalloc int kasan_module_alloc(void *addr, size_t size); void kasan_free_shadow(const struct vm_struct *vm); -#else /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ +#else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */ static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } static inline void kasan_free_shadow(const struct vm_struct *vm) {} -#endif /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ +#endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */ #ifdef CONFIG_KASAN_INLINE void kasan_non_canonical_hook(unsigned long addr); --- a/include/linux/moduleloader.h~kasan-arm64-expand-config_kasan-checks +++ a/include/linux/moduleloader.h @@ -96,7 +96,8 @@ void module_arch_cleanup(struct module * /* Any cleanup before freeing mod->module_init */ void module_arch_freeing_init(struct module *mod); -#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) #include #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) #else --- a/include/linux/string.h~kasan-arm64-expand-config_kasan-checks +++ a/include/linux/string.h @@ -267,7 +267,7 @@ void __write_overflow(void) __compiletim #if 
!defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE) -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr); extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp); extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy); --- a/mm/ptdump.c~kasan-arm64-expand-config_kasan-checks +++ a/mm/ptdump.c @@ -4,7 +4,7 @@ #include #include -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) /* * This is an optimization for KASAN=y case. Since all kasan page tables * eventually point to the kasan_early_shadow_page we could call note_page() @@ -31,7 +31,8 @@ static int ptdump_pgd_entry(pgd_t *pgd, struct ptdump_state *st = walk->private; pgd_t val = READ_ONCE(*pgd); -#if CONFIG_PGTABLE_LEVELS > 4 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 4 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (pgd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_p4d))) return note_kasan_page_table(walk, addr); #endif @@ -51,7 +52,8 @@ static int ptdump_p4d_entry(p4d_t *p4d, struct ptdump_state *st = walk->private; p4d_t val = READ_ONCE(*p4d); -#if CONFIG_PGTABLE_LEVELS > 3 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 3 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (p4d_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pud))) return note_kasan_page_table(walk, addr); #endif @@ -71,7 +73,8 @@ static int ptdump_pud_entry(pud_t *pud, struct ptdump_state *st = walk->private; pud_t val = READ_ONCE(*pud); -#if CONFIG_PGTABLE_LEVELS > 2 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 2 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (pud_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pmd))) return note_kasan_page_table(walk, addr); #endif @@ -91,7 +94,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, struct ptdump_state *st = walk->private; pmd_t val = READ_ONCE(*pmd); -#if defined(CONFIG_KASAN) +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) if (pmd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pte))) return note_kasan_page_table(walk, addr); #endif --- a/scripts/Makefile.lib~kasan-arm64-expand-config_kasan-checks +++ a/scripts/Makefile.lib @@ -148,10 +148,12 @@ endif # we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE) # ifeq ($(CONFIG_KASAN),y) +ifneq ($(CONFIG_KASAN_HW_TAGS),y) _c_flags += $(if $(patsubst n%,, \ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \ $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE)) endif +endif ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ From patchwork Fri Dec 18 22:04:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983121 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
9F18EC4361B for ; Fri, 18 Dec 2020 22:04:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1889923B5D for ; Fri, 18 Dec 2020 22:04:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1889923B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A17016B007B; Fri, 18 Dec 2020 17:04:08 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9CA056B007D; Fri, 18 Dec 2020 17:04:08 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8906E6B007E; Fri, 18 Dec 2020 17:04:08 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0204.hostedemail.com [216.40.44.204]) by kanga.kvack.org (Postfix) with ESMTP id 6F2DA6B007B for ; Fri, 18 Dec 2020 17:04:08 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 3C4C18249980 for ; Fri, 18 Dec 2020 22:04:08 +0000 (UTC) X-FDA: 77607781776.23.skin48_081264c27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 1EF7537604 for ; Fri, 18 Dec 2020 22:04:08 +0000 (UTC) X-HE-Tag: skin48_081264c27440 X-Filterd-Recvd-Size: 16465 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:07 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:05 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329046; bh=TosaJ+INYQPH4GRmvvdGf3IYkx1tUP+mduePYB+lGww=; h=From:To:Subject:In-Reply-To:From; b=2q3PhA2eWQ9ijoEQBaRJ2LNggoZbdKnP/0i38aj3cwI2ncTJtrXSA6QZAFAvuOOJf jQd0AlhIuoz+JKmKwOYKDN6jZy12QwQv1HTxauXYhD1Q7zy/c3E0cL0SU5j9NfbUTz jPjyOTNuVg7qb81DuUpInYxstogCKYOBx94PPpcg= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 49/78] kasan, arm64: implement HW_TAGS runtime Message-ID: <20201218220405.TDzOdRjiz%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: implement HW_TAGS runtime Provide implementation of KASAN functions required for the hardware tag-based mode. Those include core functions for memory and pointer tagging (tags_hw.c) and bug reporting (report_tags_hw.c). Also adapt common KASAN code to support the new mode. 
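For orientation, here is a minimal sketch of the tagging flow that the new hw_tags.c code implements. example_tag_object() is a made-up helper for illustration only; the real allocation paths live in mm/kasan/common.c and use the hw_*() wrappers added below, with KFENCE and redzone handling omitted here:

static void *example_tag_object(void *object, size_t size)
{
	u8 tag = hw_get_random_tag();		/* random tag generated via MTE */
	void *addr = kasan_reset_tag(object);	/* drop any previous pointer tag */

	/* Tag the backing memory, granule by granule... */
	hw_set_mem_tag_range(addr, round_up(size, KASAN_GRANULE_SIZE), tag);

	/* ...and return a pointer whose top byte carries the matching tag. */
	return set_tag(addr, tag);
}

A later access through a pointer whose tag does not match the memory tag faults, and the mismatch is reported via the report_hw_tags.c code added below.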
Link: https://lkml.kernel.org/r/cfd0fbede579a6b66755c98c88c108e54f9c56bf.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Acked-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/include/asm/memory.h | 4 - arch/arm64/kernel/cpufeature.c | 3 arch/arm64/kernel/smp.c | 2 include/linux/kasan.h | 24 +++++-- include/linux/mm.h | 2 include/linux/page-flags-layout.h | 2 mm/kasan/Makefile | 5 + mm/kasan/common.c | 15 ++-- mm/kasan/hw_tags.c | 89 ++++++++++++++++++++++++++++ mm/kasan/kasan.h | 19 ++++- mm/kasan/report_hw_tags.c | 42 +++++++++++++ mm/kasan/report_sw_tags.c | 2 mm/kasan/shadow.c | 2 mm/kasan/sw_tags.c | 2 14 files changed, 187 insertions(+), 26 deletions(-) --- a/arch/arm64/include/asm/memory.h~kasan-arm64-implement-hw_tags-runtime +++ a/arch/arm64/include/asm/memory.h @@ -214,7 +214,7 @@ static inline unsigned long kaslr_offset (__force __typeof__(addr))__addr; \ }) -#ifdef CONFIG_KASAN_SW_TAGS +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) #define __tag_shifted(tag) ((u64)(tag) << 56) #define __tag_reset(addr) __untagged_addr(addr) #define __tag_get(addr) (__u8)((u64)(addr) >> 56) @@ -222,7 +222,7 @@ static inline unsigned long kaslr_offset #define __tag_shifted(tag) 0UL #define __tag_reset(addr) (addr) #define __tag_get(addr) 0 -#endif /* CONFIG_KASAN_SW_TAGS */ +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */ static inline const void *__tag_set(const void *addr, u8 tag) { --- a/arch/arm64/kernel/cpufeature.c~kasan-arm64-implement-hw_tags-runtime +++ a/arch/arm64/kernel/cpufeature.c @@ -70,6 +70,7 @@ #include #include #include +#include #include #include #include @@ -1709,6 +1710,8 @@ static void cpu_enable_mte(struct arm64_ cleared_zero_page = true; mte_clear_page_tags(lm_alias(empty_zero_page)); } + + kasan_init_hw_tags_cpu(); } #endif /* CONFIG_ARM64_MTE */ --- a/arch/arm64/kernel/smp.c~kasan-arm64-implement-hw_tags-runtime +++ a/arch/arm64/kernel/smp.c @@ -462,6 +462,8 @@ void __init smp_prepare_boot_cpu(void) /* Conditionally switch to GIC PMR for interrupt masking */ if (system_uses_irq_prio_masking()) init_gic_priority_masking(); + + kasan_init_hw_tags(); } static u64 __init of_get_cpu_mpidr(struct device_node *dn) --- a/include/linux/kasan.h~kasan-arm64-implement-hw_tags-runtime +++ a/include/linux/kasan.h @@ -190,25 +190,35 @@ static inline void kasan_record_aux_stac #endif /* CONFIG_KASAN_GENERIC */ -#ifdef CONFIG_KASAN_SW_TAGS - -void __init kasan_init_sw_tags(void); +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) void *kasan_reset_tag(const void *addr); bool kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); -#else /* CONFIG_KASAN_SW_TAGS */ - -static inline void kasan_init_sw_tags(void) { } +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */ static inline void *kasan_reset_tag(const void *addr) { return (void *)addr; } -#endif /* CONFIG_KASAN_SW_TAGS */ +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS*/ + +#ifdef CONFIG_KASAN_SW_TAGS +void __init kasan_init_sw_tags(void); +#else +static inline void kasan_init_sw_tags(void) { } +#endif + +#ifdef CONFIG_KASAN_HW_TAGS +void kasan_init_hw_tags_cpu(void); +void __init kasan_init_hw_tags(void); +#else +static inline void kasan_init_hw_tags_cpu(void) { } +static 
inline void kasan_init_hw_tags(void) { } +#endif #ifdef CONFIG_KASAN_VMALLOC --- a/include/linux/mm.h~kasan-arm64-implement-hw_tags-runtime +++ a/include/linux/mm.h @@ -1421,7 +1421,7 @@ static inline bool cpupid_match_pid(stru } #endif /* CONFIG_NUMA_BALANCING */ -#ifdef CONFIG_KASAN_SW_TAGS +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) static inline u8 page_kasan_tag(const struct page *page) { return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK; --- a/include/linux/page-flags-layout.h~kasan-arm64-implement-hw_tags-runtime +++ a/include/linux/page-flags-layout.h @@ -77,7 +77,7 @@ #define LAST_CPUPID_SHIFT 0 #endif -#ifdef CONFIG_KASAN_SW_TAGS +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) #define KASAN_TAG_WIDTH 8 #else #define KASAN_TAG_WIDTH 0 --- a/mm/kasan/common.c~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/common.c @@ -118,7 +118,7 @@ void kasan_free_pages(struct page *page, */ static inline unsigned int optimal_redzone(unsigned int object_size) { - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) return 0; return @@ -183,14 +183,14 @@ size_t kasan_metadata_size(struct kmem_c struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, const void *object) { - return (void *)object + cache->kasan_info.alloc_meta_offset; + return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset; } struct kasan_free_meta *get_free_info(struct kmem_cache *cache, const void *object) { BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); - return (void *)object + cache->kasan_info.free_meta_offset; + return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset; } void kasan_poison_slab(struct page *page) @@ -272,9 +272,8 @@ void * __must_check kasan_init_slab_obj( alloc_info = get_alloc_info(cache, object); __memset(alloc_info, 0, sizeof(*alloc_info)); - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) - object = set_tag(object, - assign_tag(cache, object, true, false)); + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) + object = set_tag(object, assign_tag(cache, object, true, false)); return (void *)object; } @@ -342,10 +341,10 @@ static void *__kasan_kmalloc(struct kmem redzone_end = round_up((unsigned long)object + cache->object_size, KASAN_GRANULE_SIZE); - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) tag = assign_tag(cache, object, false, keep_tag); - /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */ + /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */ unpoison_range(set_tag(object, tag), size); poison_range((void *)redzone_start, redzone_end - redzone_start, KASAN_KMALLOC_REDZONE); --- /dev/null +++ a/mm/kasan/hw_tags.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This file contains core hardware tag-based KASAN code. + * + * Copyright (c) 2020 Google, Inc. + * Author: Andrey Konovalov + */ + +#define pr_fmt(fmt) "kasan: " fmt + +#include +#include +#include +#include +#include +#include +#include + +#include "kasan.h" + +/* kasan_init_hw_tags_cpu() is called for each CPU. */ +void kasan_init_hw_tags_cpu(void) +{ + hw_init_tags(KASAN_TAG_MAX); + hw_enable_tagging(); +} + +/* kasan_init_hw_tags() is called once on boot CPU. 
*/ +void __init kasan_init_hw_tags(void) +{ + pr_info("KernelAddressSanitizer initialized\n"); +} + +void *kasan_reset_tag(const void *addr) +{ + return reset_tag(addr); +} + +void poison_range(const void *address, size_t size, u8 value) +{ + /* Skip KFENCE memory if called explicitly outside of sl*b. */ + if (is_kfence_address(address)) + return; + + hw_set_mem_tag_range(reset_tag(address), + round_up(size, KASAN_GRANULE_SIZE), value); +} + +void unpoison_range(const void *address, size_t size) +{ + /* Skip KFENCE memory if called explicitly outside of sl*b. */ + if (is_kfence_address(address)) + return; + + hw_set_mem_tag_range(reset_tag(address), + round_up(size, KASAN_GRANULE_SIZE), get_tag(address)); +} + +u8 random_tag(void) +{ + return hw_get_random_tag(); +} + +bool check_invalid_free(void *addr) +{ + u8 ptr_tag = get_tag(addr); + u8 mem_tag = hw_get_mem_tag(addr); + + return (mem_tag == KASAN_TAG_INVALID) || + (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag); +} + +void kasan_set_free_info(struct kmem_cache *cache, + void *object, u8 tag) +{ + struct kasan_alloc_meta *alloc_meta; + + alloc_meta = get_alloc_info(cache, object); + kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT); +} + +struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, + void *object, u8 tag) +{ + struct kasan_alloc_meta *alloc_meta; + + alloc_meta = get_alloc_info(cache, object); + return &alloc_meta->free_track[0]; +} --- a/mm/kasan/kasan.h~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/kasan.h @@ -154,6 +154,11 @@ struct kasan_alloc_meta *get_alloc_info( struct kasan_free_meta *get_free_info(struct kmem_cache *cache, const void *object); +void poison_range(const void *address, size_t size, u8 value); +void unpoison_range(const void *address, size_t size); + +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) + static inline const void *kasan_shadow_to_mem(const void *shadow_addr) { return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET) @@ -165,9 +170,6 @@ static inline bool addr_has_metadata(con return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START)); } -void poison_range(const void *address, size_t size, u8 value); -void unpoison_range(const void *address, size_t size); - /** * check_memory_region - Check memory region, and report if invalid access. 
* @addr: the accessed address @@ -179,6 +181,15 @@ void unpoison_range(const void *address, bool check_memory_region(unsigned long addr, size_t size, bool write, unsigned long ret_ip); +#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ + +static inline bool addr_has_metadata(const void *addr) +{ + return true; +} + +#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ + bool check_invalid_free(void *addr); void *find_first_bad_addr(void *addr, size_t size); @@ -215,7 +226,7 @@ static inline void quarantine_reduce(voi static inline void quarantine_remove_cache(struct kmem_cache *cache) { } #endif -#ifdef CONFIG_KASAN_SW_TAGS +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) void print_tags(u8 addr_tag, const void *addr); --- a/mm/kasan/Makefile~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/Makefile @@ -10,8 +10,10 @@ CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE CFLAGS_REMOVE_quarantine.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_report_generic.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_report_hw_tags.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_report_sw_tags.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_shadow.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_hw_tags.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_sw_tags.o = $(CC_FLAGS_FTRACE) # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 @@ -27,10 +29,13 @@ CFLAGS_init.o := $(CC_FLAGS_KASAN_RUNTIM CFLAGS_quarantine.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_report_generic.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_report_hw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_report_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KASAN_RUNTIME) +CFLAGS_hw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) obj-$(CONFIG_KASAN) := common.o report.o obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o report_generic.o shadow.o quarantine.o +obj-$(CONFIG_KASAN_HW_TAGS) += hw_tags.o report_hw_tags.o obj-$(CONFIG_KASAN_SW_TAGS) += init.o report_sw_tags.o shadow.o sw_tags.o --- /dev/null +++ a/mm/kasan/report_hw_tags.c @@ -0,0 +1,42 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This file contains hardware tag-based KASAN specific error reporting code. + * + * Copyright (c) 2020 Google, Inc. + * Author: Andrey Konovalov + */ + +#include +#include +#include +#include +#include +#include + +#include "kasan.h" + +const char *get_bug_type(struct kasan_access_info *info) +{ + return "invalid-access"; +} + +void *find_first_bad_addr(void *addr, size_t size) +{ + return reset_tag(addr); +} + +void metadata_fetch_row(char *buffer, void *row) +{ + int i; + + for (i = 0; i < META_BYTES_PER_ROW; i++) + buffer[i] = hw_get_mem_tag(row + i * KASAN_GRANULE_SIZE); +} + +void print_tags(u8 addr_tag, const void *addr) +{ + u8 memory_tag = hw_get_mem_tag((void *)addr); + + pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", + addr_tag, memory_tag); +} --- a/mm/kasan/report_sw_tags.c~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/report_sw_tags.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains tag-based KASAN specific error reporting code. + * This file contains software tag-based KASAN specific error reporting code. * * Copyright (c) 2014 Samsung Electronics Co., Ltd. 
* Author: Andrey Ryabinin --- a/mm/kasan/shadow.c~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/shadow.c @@ -120,7 +120,7 @@ void unpoison_range(const void *address, if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) *shadow = tag; - else + else /* CONFIG_KASAN_GENERIC */ *shadow = size & KASAN_GRANULE_MASK; } } --- a/mm/kasan/sw_tags.c~kasan-arm64-implement-hw_tags-runtime +++ a/mm/kasan/sw_tags.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains core tag-based KASAN code. + * This file contains core software tag-based KASAN code. * * Copyright (c) 2018 Google, Inc. * Author: Andrey Konovalov From patchwork Fri Dec 18 22:04:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983123 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB6A9C4361B for ; Fri, 18 Dec 2020 22:04:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A0E4B23406 for ; Fri, 18 Dec 2020 22:04:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A0E4B23406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3AF3C6B007D; Fri, 18 Dec 2020 17:04:12 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 338F16B007E; Fri, 18 Dec 2020 17:04:12 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 226E46B0082; Fri, 18 Dec 2020 17:04:12 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0002.hostedemail.com [216.40.44.2]) by kanga.kvack.org (Postfix) with ESMTP id 05CB56B007D for ; Fri, 18 Dec 2020 17:04:12 -0500 (EST) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C80D4181AF5FD for ; Fri, 18 Dec 2020 22:04:11 +0000 (UTC) X-FDA: 77607781902.03.park41_510f34f27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin03.hostedemail.com (Postfix) with ESMTP id 6B49E28A4EA for ; Fri, 18 Dec 2020 22:04:11 +0000 (UTC) X-HE-Tag: park41_510f34f27440 X-Filterd-Recvd-Size: 4635 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf20.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:10 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:09 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329050; bh=PrQJM9rlJRzM+cN/hE0wVm8FU6/mdbHuXXko4hMlZVg=; h=From:To:Subject:In-Reply-To:From; b=fex+flua/a79D+vvCRQ+7mFhyvxbdijDaURj5Trkb5cukPSDgy9TgouCXlLH80Lpj cS2FVEUgkC4Lzjsl7ofkr6y3FROniU+BBS2BLkQo6v7HMQAm/O+0WnOuaqsfWP/TxX HqgdGSRqCk7V8++reBy8kcotNKGqs0ovi0kqQWDo= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, 
catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 50/78] kasan, arm64: print report from tag fault handler Message-ID: <20201218220409.dEPzly-u5%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: print report from tag fault handler Add error reporting for hardware tag-based KASAN. When CONFIG_KASAN_HW_TAGS is enabled, print KASAN report from the arm64 tag fault handler. SAS bits aren't set in ESR for all faults reported in EL1, so it's impossible to find out the size of the access the caused the fault. Adapt KASAN reporting code to handle this case. Link: https://lkml.kernel.org/r/b559c82b6a969afedf53b4694b475f0234067a1a.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/mm/fault.c | 14 ++++++++++++++ mm/kasan/report.c | 11 ++++++++--- 2 files changed, 22 insertions(+), 3 deletions(-) --- a/arch/arm64/mm/fault.c~kasan-arm64-print-report-from-tag-fault-handler +++ a/arch/arm64/mm/fault.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -297,10 +298,23 @@ static void die_kernel_fault(const char do_exit(SIGKILL); } +#ifdef CONFIG_KASAN_HW_TAGS static void report_tag_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs) { + bool is_write = ((esr & ESR_ELx_WNR) >> ESR_ELx_WNR_SHIFT) != 0; + + /* + * SAS bits aren't set for all faults reported in EL1, so we can't + * find out access size. + */ + kasan_report(addr, 0, is_write, regs->pc); } +#else +/* Tag faults aren't enabled without CONFIG_KASAN_HW_TAGS. */ +static inline void report_tag_fault(unsigned long addr, unsigned int esr, + struct pt_regs *regs) { } +#endif static void do_tag_recovery(unsigned long addr, unsigned int esr, struct pt_regs *regs) --- a/mm/kasan/report.c~kasan-arm64-print-report-from-tag-fault-handler +++ a/mm/kasan/report.c @@ -62,9 +62,14 @@ static void print_error_description(stru { pr_err("BUG: KASAN: %s in %pS\n", get_bug_type(info), (void *)info->ip); - pr_err("%s of size %zu at addr %px by task %s/%d\n", - info->is_write ? "Write" : "Read", info->access_size, - info->access_addr, current->comm, task_pid_nr(current)); + if (info->access_size) + pr_err("%s of size %zu at addr %px by task %s/%d\n", + info->is_write ? "Write" : "Read", info->access_size, + info->access_addr, current->comm, task_pid_nr(current)); + else + pr_err("%s at addr %px by task %s/%d\n", + info->is_write ? 
"Write" : "Read", + info->access_addr, current->comm, task_pid_nr(current)); } static DEFINE_SPINLOCK(report_lock); From patchwork Fri Dec 18 22:04:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983125 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF575C4361B for ; Fri, 18 Dec 2020 22:04:15 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 87AB523406 for ; Fri, 18 Dec 2020 22:04:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 87AB523406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 204876B0080; Fri, 18 Dec 2020 17:04:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 18E686B0082; Fri, 18 Dec 2020 17:04:15 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0572D6B0083; Fri, 18 Dec 2020 17:04:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0021.hostedemail.com [216.40.44.21]) by kanga.kvack.org (Postfix) with ESMTP id DEF166B0080 for ; Fri, 18 Dec 2020 17:04:14 -0500 (EST) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id AE3638249980 for ; Fri, 18 Dec 2020 22:04:14 +0000 (UTC) X-FDA: 77607782028.21.car89_041032727440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com (Postfix) with ESMTP id 8499018016C95 for ; Fri, 18 Dec 2020 22:04:14 +0000 (UTC) X-HE-Tag: car89_041032727440 X-Filterd-Recvd-Size: 7899 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf19.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:13 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:12 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329053; bh=SVHQFaEU4AA89s82OpI6oHUPgqGsTJGJuIRydzHoxG8=; h=From:To:Subject:In-Reply-To:From; b=kmM+WIuPHbYf6m+omTasdcq0WW4i3jgp9J4bUVJIdQJ3kBO27gC0LZuY0rfIF461R 3PmZPqre7qr3UZe9hkqS9S8HJ1eDPQTA/PLdlxBBLeuR08ud8zGUEXYuUV45eP/+gy 4S+c26bLbkE598pHB3X5RGe2qrIXvS/bdINVCqZE= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 51/78] kasan, mm: reset tags when accessing metadata Message-ID: <20201218220412.UPzvik1wp%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, 
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, mm: reset tags when accessing metadata Kernel allocator code accesses metadata for slab objects, that may lie out-of-bounds of the object itself, or be accessed when an object is freed. Such accesses trigger tag faults and lead to false-positive reports with hardware tag-based KASAN. Software KASAN modes disable instrumentation for allocator code via KASAN_SANITIZE Makefile macro, and rely on kasan_enable/disable_current() annotations which are used to ignore KASAN reports. With hardware tag-based KASAN neither of those options are available, as it doesn't use compiler instrumetation, no tag faults are ignored, and MTE is disabled after the first one. Instead, reset tags when accessing metadata (currently only for SLUB). Link: https://lkml.kernel.org/r/a0f3cefbc49f34c843b664110842de4db28179d0.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Acked-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/page_alloc.c | 4 +++- mm/page_poison.c | 2 +- mm/slub.c | 29 ++++++++++++++++------------- 3 files changed, 20 insertions(+), 15 deletions(-) --- a/mm/page_alloc.c~kasan-mm-reset-tags-when-accessing-metadata +++ a/mm/page_alloc.c @@ -1204,8 +1204,10 @@ static void kernel_init_free_pages(struc /* s390's use of memset() could override KASAN redzones. */ kasan_disable_current(); - for (i = 0; i < numpages; i++) + for (i = 0; i < numpages; i++) { + page_kasan_tag_reset(page + i); clear_highpage(page + i); + } kasan_enable_current(); } --- a/mm/page_poison.c~kasan-mm-reset-tags-when-accessing-metadata +++ a/mm/page_poison.c @@ -25,7 +25,7 @@ static void poison_page(struct page *pag /* KASAN still think the page is in-use, so skip it. */ kasan_disable_current(); - memset(addr, PAGE_POISON, PAGE_SIZE); + memset(kasan_reset_tag(addr), PAGE_POISON, PAGE_SIZE); kasan_enable_current(); kunmap_atomic(addr); } --- a/mm/slub.c~kasan-mm-reset-tags-when-accessing-metadata +++ a/mm/slub.c @@ -249,7 +249,7 @@ static inline void *freelist_ptr(const s { #ifdef CONFIG_SLAB_FREELIST_HARDENED /* - * When CONFIG_KASAN_SW_TAGS is enabled, ptr_addr might be tagged. + * When CONFIG_KASAN_SW/HW_TAGS is enabled, ptr_addr might be tagged. * Normally, this doesn't cause any issues, as both set_freepointer() * and get_freepointer() are called with a pointer with the same tag. * However, there are some issues with CONFIG_SLUB_DEBUG code. 
For @@ -275,6 +275,7 @@ static inline void *freelist_dereference static inline void *get_freepointer(struct kmem_cache *s, void *object) { + object = kasan_reset_tag(object); return freelist_dereference(s, object + s->offset); } @@ -304,6 +305,7 @@ static inline void set_freepointer(struc BUG_ON(object == fp); /* naive detection of double free or corruption */ #endif + freeptr_addr = (unsigned long)kasan_reset_tag((void *)freeptr_addr); *(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr); } @@ -538,8 +540,8 @@ static void print_section(char *level, c unsigned int length) { metadata_access_enable(); - print_hex_dump(level, text, DUMP_PREFIX_ADDRESS, 16, 1, addr, - length, 1); + print_hex_dump(level, kasan_reset_tag(text), DUMP_PREFIX_ADDRESS, + 16, 1, addr, length, 1); metadata_access_disable(); } @@ -570,7 +572,7 @@ static struct track *get_track(struct km p = object + get_info_end(s); - return p + alloc; + return kasan_reset_tag(p + alloc); } static void set_track(struct kmem_cache *s, void *object, @@ -583,7 +585,8 @@ static void set_track(struct kmem_cache unsigned int nr_entries; metadata_access_enable(); - nr_entries = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3); + nr_entries = stack_trace_save(kasan_reset_tag(p->addrs), + TRACK_ADDRS_COUNT, 3); metadata_access_disable(); if (nr_entries < TRACK_ADDRS_COUNT) @@ -747,7 +750,7 @@ static __printf(3, 4) void slab_err(stru static void init_object(struct kmem_cache *s, void *object, u8 val) { - u8 *p = object; + u8 *p = kasan_reset_tag(object); if (s->flags & SLAB_RED_ZONE) memset(p - s->red_left_pad, val, s->red_left_pad); @@ -777,7 +780,7 @@ static int check_bytes_and_report(struct u8 *addr = page_address(page); metadata_access_enable(); - fault = memchr_inv(start, value, bytes); + fault = memchr_inv(kasan_reset_tag(start), value, bytes); metadata_access_disable(); if (!fault) return 1; @@ -873,7 +876,7 @@ static int slab_pad_check(struct kmem_ca pad = end - remainder; metadata_access_enable(); - fault = memchr_inv(pad, POISON_INUSE, remainder); + fault = memchr_inv(kasan_reset_tag(pad), POISON_INUSE, remainder); metadata_access_disable(); if (!fault) return 1; @@ -1118,7 +1121,7 @@ void setup_page_debug(struct kmem_cache return; metadata_access_enable(); - memset(addr, POISON_INUSE, page_size(page)); + memset(kasan_reset_tag(addr), POISON_INUSE, page_size(page)); metadata_access_disable(); } @@ -1566,10 +1569,10 @@ static inline bool slab_free_freelist_ho * Clear the object and the metadata, but don't touch * the redzone. */ - memset(object, 0, s->object_size); + memset(kasan_reset_tag(object), 0, s->object_size); rsize = (s->flags & SLAB_RED_ZONE) ? 
s->red_left_pad : 0; - memset((char *)object + s->inuse, 0, + memset((char *)kasan_reset_tag(object) + s->inuse, 0, s->size - s->inuse - rsize); } @@ -2881,10 +2884,10 @@ redo: stat(s, ALLOC_FASTPATH); } - maybe_wipe_obj_freeptr(s, object); + maybe_wipe_obj_freeptr(s, kasan_reset_tag(object)); if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object) - memset(object, 0, s->object_size); + memset(kasan_reset_tag(object), 0, s->object_size); slab_post_alloc_hook(s, objcg, gfpflags, 1, &object); From patchwork Fri Dec 18 22:04:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F263EC3526E for ; Fri, 18 Dec 2020 22:04:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9CD7B23B97 for ; Fri, 18 Dec 2020 22:04:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9CD7B23B97 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3FCEA6B0083; Fri, 18 Dec 2020 17:04:18 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 385516B0085; Fri, 18 Dec 2020 17:04:18 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 275246B0087; Fri, 18 Dec 2020 17:04:18 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0200.hostedemail.com [216.40.44.200]) by kanga.kvack.org (Postfix) with ESMTP id 0AB176B0083 for ; Fri, 18 Dec 2020 17:04:18 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C7489181AF5FD for ; Fri, 18 Dec 2020 22:04:17 +0000 (UTC) X-FDA: 77607782154.23.river45_21047d227440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id A985637606 for ; Fri, 18 Dec 2020 22:04:17 +0000 (UTC) X-HE-Tag: river45_21047d227440 X-Filterd-Recvd-Size: 2779 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:17 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:15 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329056; bh=24kg3PNyKlXnB6Xe61RW6pY/C7fqdxm5pJlEpurIZYw=; h=From:To:Subject:In-Reply-To:From; b=xcsgY2vWMd9CX2x/dVgqNXOfPwXFJpHjDtiL2wdlsFj5UQh/jRDsUDdmDEV38E6R/ Xudd19gPGTZh62gHeyvadHXHKrExlG7HaGw2eGHd7gBgo4nfRm+SYT8i2Xglmo8vy7 Cqx1cxVBZ1S7fmJ4cWrLKIwH1R7H2xXuT2ibs6og= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, 
kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 52/78] kasan, arm64: enable CONFIG_KASAN_HW_TAGS Message-ID: <20201218220415.jdplxZa3y%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: enable CONFIG_KASAN_HW_TAGS Hardware tag-based KASAN is now ready, enable the configuration option. Link: https://lkml.kernel.org/r/a6fa50d3bb6b318e05c6389a44095be96442b8b0.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Acked-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) --- a/arch/arm64/Kconfig~kasan-arm64-enable-config_kasan_hw_tags +++ a/arch/arm64/Kconfig @@ -137,6 +137,7 @@ config ARM64 select HAVE_ARCH_JUMP_LABEL_RELATIVE select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN + select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE) select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT From patchwork Fri Dec 18 22:04:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983129 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9A69C4361B for ; Fri, 18 Dec 2020 22:04:22 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4D5E323B54 for ; Fri, 18 Dec 2020 22:04:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4D5E323B54 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D732E6B0087; Fri, 18 Dec 2020 17:04:21 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CFB8F6B0088; Fri, 18 Dec 2020 17:04:21 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BEB0E6B0089; Fri, 18 Dec 2020 17:04:21 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0173.hostedemail.com [216.40.44.173]) by kanga.kvack.org (Postfix) with ESMTP id A5AEF6B0087 for ; Fri, 18 Dec 2020 17:04:21 -0500 (EST) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 6BABA181AF5FD for ; Fri, 18 
Dec 2020 22:04:21 +0000 (UTC) X-FDA: 77607782322.13.rain60_4a0589f27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin13.hostedemail.com (Postfix) with ESMTP id 4C498181355F5 for ; Fri, 18 Dec 2020 22:04:21 +0000 (UTC) X-HE-Tag: rain60_4a0589f27440 X-Filterd-Recvd-Size: 8979 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:20 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:18 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329059; bh=tAxHNT7XQoQvZrsc0GxwdWXao08bTrbBlGwiVEcT74w=; h=From:To:Subject:In-Reply-To:From; b=jplPaZctkxW58w0C6k/cHJi7Quk9CFft6WiJpPGmpiQYIfH4QVbIt4aCT29s/nQ/B 8Q+amj0GGMtOIG+sPhTElX2NTrfjNf7N2yk0tP5Wlb+MmlTE+S+dr6EOZKs2OF5xct vZLRqB1+QnKVb64p7A9cmRklIif6aJ6MsH9QgOdk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 53/78] kasan: add documentation for hardware tag-based mode Message-ID: <20201218220418.em96prJN1%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: add documentation for hardware tag-based mode Add documentation for hardware tag-based KASAN mode and also add some clarifications for software tag-based mode. Link: https://lkml.kernel.org/r/20ed1d387685e89fc31be068f890f070ef9fd5d5.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Marco Elver Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- Documentation/dev-tools/kasan.rst | 84 ++++++++++++++++++++-------- 1 file changed, 61 insertions(+), 23 deletions(-) --- a/Documentation/dev-tools/kasan.rst~kasan-add-documentation-for-hardware-tag-based-mode +++ a/Documentation/dev-tools/kasan.rst @@ -5,12 +5,14 @@ Overview -------- KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to -find out-of-bound and use-after-free bugs. KASAN has two modes: generic KASAN -(similar to userspace ASan) and software tag-based KASAN (similar to userspace -HWASan). - -KASAN uses compile-time instrumentation to insert validity checks before every -memory access, and therefore requires a compiler version that supports that. +find out-of-bound and use-after-free bugs. KASAN has three modes: +1. generic KASAN (similar to userspace ASan), +2. software tag-based KASAN (similar to userspace HWASan), +3. hardware tag-based KASAN (based on hardware memory tagging). + +Software KASAN modes (1 and 2) use compile-time instrumentation to insert +validity checks before every memory access, and therefore require a compiler +version that supports that. Generic KASAN is supported in both GCC and Clang. With GCC it requires version 8.3.0 or later. 
Any supported Clang version is compatible, but detection of @@ -19,7 +21,7 @@ out-of-bounds accesses for global variab Tag-based KASAN is only supported in Clang. Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and -riscv architectures, and tag-based KASAN is supported only for arm64. +and riscv architectures, and tag-based KASAN modes are supported only for arm64. Usage ----- @@ -28,14 +30,16 @@ To enable KASAN configure kernel with:: CONFIG_KASAN = y -and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN) and -CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN). - -You also need to choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. -Outline and inline are compiler instrumentation types. The former produces -smaller binary while the latter is 1.1 - 2 times faster. +and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN), +CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN), and +CONFIG_KASAN_HW_TAGS (to enable hardware tag-based KASAN). + +For software modes, you also need to choose between CONFIG_KASAN_OUTLINE and +CONFIG_KASAN_INLINE. Outline and inline are compiler instrumentation types. +The former produces smaller binary while the latter is 1.1 - 2 times faster. -Both KASAN modes work with both SLUB and SLAB memory allocators. +Both software KASAN modes work with both SLUB and SLAB memory allocators, +hardware tag-based KASAN currently only support SLUB. For better bug detection and nicer reporting, enable CONFIG_STACKTRACE. To augment reports with last allocation and freeing stack of the physical page, @@ -197,17 +201,24 @@ call_rcu() and workqueue queuing. Software tag-based KASAN ~~~~~~~~~~~~~~~~~~~~~~~~ -Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to -store a pointer tag in the top byte of kernel pointers. Like generic KASAN it -uses shadow memory to store memory tags associated with each 16-byte memory +Software tag-based KASAN requires software memory tagging support in the form +of HWASan-like compiler instrumentation (see HWASan documentation for details). + +Software tag-based KASAN is currently only implemented for arm64 architecture. + +Software tag-based KASAN uses the Top Byte Ignore (TBI) feature of arm64 CPUs +to store a pointer tag in the top byte of kernel pointers. Like generic KASAN +it uses shadow memory to store memory tags associated with each 16-byte memory cell (therefore it dedicates 1/16th of the kernel memory for shadow memory). -On each memory allocation tag-based KASAN generates a random tag, tags the -allocated memory with this tag, and embeds this tag into the returned pointer. +On each memory allocation software tag-based KASAN generates a random tag, tags +the allocated memory with this tag, and embeds this tag into the returned +pointer. + Software tag-based KASAN uses compile-time instrumentation to insert checks before each memory access. These checks make sure that tag of the memory that is being accessed is equal to tag of the pointer that is used to access this -memory. In case of a tag mismatch tag-based KASAN prints a bug report. +memory. In case of a tag mismatch software tag-based KASAN prints a bug report. Software tag-based KASAN also has two instrumentation modes (outline, that emits callbacks to check memory accesses; and inline, that performs the shadow @@ -216,9 +227,36 @@ simply printed from the function that pe instrumentation a brk instruction is emitted by the compiler, and a dedicated brk handler is used to print bug reports. 
-A potential expansion of this mode is a hardware tag-based mode, which would -use hardware memory tagging support instead of compiler instrumentation and -manual shadow memory manipulation. +Software tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through +pointers with 0xFF pointer tag aren't checked). The value 0xFE is currently +reserved to tag freed memory regions. + +Software tag-based KASAN currently only supports tagging of +kmem_cache_alloc/kmalloc and page_alloc memory. + +Hardware tag-based KASAN +~~~~~~~~~~~~~~~~~~~~~~~~ + +Hardware tag-based KASAN is similar to the software mode in concept, but uses +hardware memory tagging support instead of compiler instrumentation and +shadow memory. + +Hardware tag-based KASAN is currently only implemented for arm64 architecture +and based on both arm64 Memory Tagging Extension (MTE) introduced in ARMv8.5 +Instruction Set Architecture, and Top Byte Ignore (TBI). + +Special arm64 instructions are used to assign memory tags for each allocation. +Same tags are assigned to pointers to those allocations. On every memory +access, hardware makes sure that tag of the memory that is being accessed is +equal to tag of the pointer that is used to access this memory. In case of a +tag mismatch a fault is generated and a report is printed. + +Hardware tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through +pointers with 0xFF pointer tag aren't checked). The value 0xFE is currently +reserved to tag freed memory regions. + +Hardware tag-based KASAN currently only supports tagging of +kmem_cache_alloc/kmalloc and page_alloc memory. What memory accesses are sanitised by KASAN? -------------------------------------------- From patchwork Fri Dec 18 22:04:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983131 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 058F2C2D0E4 for ; Fri, 18 Dec 2020 22:04:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B988723B87 for ; Fri, 18 Dec 2020 22:04:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B988723B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 278D56B0089; Fri, 18 Dec 2020 17:04:25 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1FF226B008A; Fri, 18 Dec 2020 17:04:25 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0F19C6B0092; Fri, 18 Dec 2020 17:04:25 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0167.hostedemail.com [216.40.44.167]) by kanga.kvack.org (Postfix) with ESMTP id E69876B0089 for ; Fri, 18 Dec 2020 17:04:24 -0500 (EST) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) 
by forelay05.hostedemail.com (Postfix) with ESMTP id AF8DD181AF5FD for ; Fri, 18 Dec 2020 22:04:24 +0000 (UTC) X-FDA: 77607782448.15.alarm38_620bf7e27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin15.hostedemail.com (Postfix) with ESMTP id 819D7180E1CEE for ; Fri, 18 Dec 2020 22:04:24 +0000 (UTC) X-HE-Tag: alarm38_620bf7e27440 X-Filterd-Recvd-Size: 6779 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf17.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:23 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:22 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329063; bh=+xLkThKDUWSr3xAxWlQfGBf1+rjemeW56IA8qmmEWa0=; h=From:To:Subject:In-Reply-To:From; b=EXrGjLGUmz/h8SLuDrvvSO17zxHwDpLl4VMhiPrsA9mNxU4q7QOWCepm0pmPNeogq D7wuYKIP3QdApgEBjVpCjwJ7YC9FO3ckIx3VMFj7y6UTEYx8OeKxm4HZPNK+6b7grc 0JU6wuqU2xqadiaSyBQgUD446E+cNuFo5bFyHBeY= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 54/78] kselftest/arm64: check GCR_EL1 after context switch Message-ID: <20201218220422.Rkf6gwsUD%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000012, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Vincenzo Frascino Subject: kselftest/arm64: check GCR_EL1 after context switch This test is specific to MTE and verifies that the GCR_EL1 register is context switched correctly. It spawns 1024 processes and each process spawns 5 threads. Each thread writes a random setting of GCR_EL1 through the prctl() system call and reads it back verifying that it is the same. If the values are not the same it reports a failure. Note: The test has been extended to verify that even SYNC and ASYNC mode setting is preserved correctly over context switching. 
Link: https://lkml.kernel.org/r/b51a165426e906e7ec8a68d806ef3f8cd92581a6.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Acked-by: Catalin Marinas Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- tools/testing/selftests/arm64/mte/Makefile | 2 tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c | 155 ++++++++++ 2 files changed, 156 insertions(+), 1 deletion(-) --- /dev/null +++ a/tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c @@ -0,0 +1,155 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2020 ARM Limited + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kselftest.h" +#include "mte_common_util.h" + +#define PR_SET_TAGGED_ADDR_CTRL 55 +#define PR_GET_TAGGED_ADDR_CTRL 56 +# define PR_TAGGED_ADDR_ENABLE (1UL << 0) +# define PR_MTE_TCF_SHIFT 1 +# define PR_MTE_TCF_NONE (0UL << PR_MTE_TCF_SHIFT) +# define PR_MTE_TCF_SYNC (1UL << PR_MTE_TCF_SHIFT) +# define PR_MTE_TCF_ASYNC (2UL << PR_MTE_TCF_SHIFT) +# define PR_MTE_TCF_MASK (3UL << PR_MTE_TCF_SHIFT) +# define PR_MTE_TAG_SHIFT 3 +# define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT) + +#include "mte_def.h" + +#define NUM_ITERATIONS 1024 +#define MAX_THREADS 5 +#define THREAD_ITERATIONS 1000 + +void *execute_thread(void *x) +{ + pid_t pid = *((pid_t *)x); + pid_t tid = gettid(); + uint64_t prctl_tag_mask; + uint64_t prctl_set; + uint64_t prctl_get; + uint64_t prctl_tcf; + + srand(time(NULL) ^ (pid << 16) ^ (tid << 16)); + + prctl_tag_mask = rand() & 0xffff; + + if (prctl_tag_mask % 2) + prctl_tcf = PR_MTE_TCF_SYNC; + else + prctl_tcf = PR_MTE_TCF_ASYNC; + + prctl_set = PR_TAGGED_ADDR_ENABLE | prctl_tcf | (prctl_tag_mask << PR_MTE_TAG_SHIFT); + + for (int j = 0; j < THREAD_ITERATIONS; j++) { + if (prctl(PR_SET_TAGGED_ADDR_CTRL, prctl_set, 0, 0, 0)) { + perror("prctl() failed"); + goto fail; + } + + prctl_get = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0); + + if (prctl_set != prctl_get) { + ksft_print_msg("Error: prctl_set: 0x%lx != prctl_get: 0x%lx\n", + prctl_set, prctl_get); + goto fail; + } + } + + return (void *)KSFT_PASS; + +fail: + return (void *)KSFT_FAIL; +} + +int execute_test(pid_t pid) +{ + pthread_t thread_id[MAX_THREADS]; + int thread_data[MAX_THREADS]; + + for (int i = 0; i < MAX_THREADS; i++) + pthread_create(&thread_id[i], NULL, + execute_thread, (void *)&pid); + + for (int i = 0; i < MAX_THREADS; i++) + pthread_join(thread_id[i], (void *)&thread_data[i]); + + for (int i = 0; i < MAX_THREADS; i++) + if (thread_data[i] == KSFT_FAIL) + return KSFT_FAIL; + + return KSFT_PASS; +} + +int mte_gcr_fork_test(void) +{ + pid_t pid; + int results[NUM_ITERATIONS]; + pid_t cpid; + int res; + + for (int i = 0; i < NUM_ITERATIONS; i++) { + pid = fork(); + + if (pid < 0) + return KSFT_FAIL; + + if (pid == 0) { + cpid = getpid(); + + res = execute_test(cpid); + + exit(res); + } + } + + for (int i = 0; i < NUM_ITERATIONS; i++) { + wait(&res); + + if (WIFEXITED(res)) + results[i] = WEXITSTATUS(res); + else + --i; + } + + for (int i = 0; i < NUM_ITERATIONS; i++) + if (results[i] == KSFT_FAIL) + return KSFT_FAIL; + + return KSFT_PASS; +} + +int main(int argc, char *argv[]) +{ + int err; + + err = mte_default_setup(); + if (err) + return err; + + ksft_set_plan(1); + + 
evaluate_test(mte_gcr_fork_test(), + "Verify that GCR_EL1 is set correctly on context switch\n"); + + mte_restore_setup(); + ksft_print_cnts(); + + return ksft_get_fail_cnt() == 0 ? KSFT_PASS : KSFT_FAIL; +} + --- a/tools/testing/selftests/arm64/mte/Makefile~kselftest-arm64-check-gcr_el1-after-context-switch +++ a/tools/testing/selftests/arm64/mte/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 # Copyright (C) 2020 ARM Limited -CFLAGS += -std=gnu99 -I. +CFLAGS += -std=gnu99 -I. -lpthread SRCS := $(filter-out mte_common_util.c,$(wildcard *.c)) PROGS := $(patsubst %.c,%,$(SRCS)) From patchwork Fri Dec 18 22:04:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983133 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-20.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4D76C3526B for ; Fri, 18 Dec 2020 22:04:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2EF1B23B98 for ; Fri, 18 Dec 2020 22:04:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2EF1B23B98 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B52146B0092; Fri, 18 Dec 2020 17:04:28 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AD91E6B0093; Fri, 18 Dec 2020 17:04:28 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9EE986B0095; Fri, 18 Dec 2020 17:04:28 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0162.hostedemail.com [216.40.44.162]) by kanga.kvack.org (Postfix) with ESMTP id 85A3F6B0092 for ; Fri, 18 Dec 2020 17:04:28 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 4A29E180E1CEE for ; Fri, 18 Dec 2020 22:04:28 +0000 (UTC) X-FDA: 77607782616.09.slope22_4b17eb827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 3123B18173968 for ; Fri, 18 Dec 2020 22:04:28 +0000 (UTC) X-HE-Tag: slope22_4b17eb827440 X-Filterd-Recvd-Size: 8322 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf03.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:27 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:26 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329066; bh=Qa/lJeHYl8b4KzgVDEcPnrNSL3TndyoXcPsKu0X9a9U=; h=From:To:Subject:In-Reply-To:From; b=VWayIH3VsNQ0otj7vzyi/dXtsGirtaw0Q03B2Za+R+3emElE9YVg2YmlZjS38lBEw 9iDf0muVGmAEpuLrvweu55/1Zy5QDtOA3+e9qCuhW9eNk0kjt4EJwkN4XMpEsmgd/4 eq7zLQo4EvsNkEpPFZ3IHDLGM3WKxAHFzCuktaIA= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, 
catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 55/78] kasan: simplify quarantine_put call site Message-ID: <20201218220426.ZSJB8Aga-%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: simplify quarantine_put call site Patch series "kasan: boot parameters for hardware tag-based mode", v4. === Overview Hardware tag-based KASAN mode [1] is intended to eventually be used in production as a security mitigation. Therefore there's a need for finer control over KASAN features and for an existence of a kill switch. This patchset adds a few boot parameters for hardware tag-based KASAN that allow to disable or otherwise control particular KASAN features, as well as provides some initial optimizations for running KASAN in production. There's another planned patchset what will further optimize hardware tag-based KASAN, provide proper benchmarking and tests, and will fully enable tag-based KASAN for production use. Hardware tag-based KASAN relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory and pointer tagging. Please see [3] and [4] for detailed analysis of how MTE helps to fight memory safety problems. The features that can be controlled are: 1. Whether KASAN is enabled at all. 2. Whether KASAN collects and saves alloc/free stacks. 3. Whether KASAN panics on a detected bug or not. The patch titled "kasan: add and integrate kasan boot parameters" of this series adds a few new boot parameters. kasan.mode allows to choose one of three main modes: - kasan.mode=off - KASAN is disabled, no tag checks are performed - kasan.mode=prod - only essential production features are enabled - kasan.mode=full - all KASAN features are enabled The chosen mode provides default control values for the features mentioned above. However it's also possible to override the default values by providing: - kasan.stacktrace=off/on - enable stacks collection (default: on for mode=full, otherwise off) - kasan.fault=report/panic - only report tag fault or also panic (default: report) If kasan.mode parameter is not provided, it defaults to full when CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise. It is essential that switching between these modes doesn't require rebuilding the kernel with different configs, as this is required by the Android GKI (Generic Kernel Image) initiative. === Benchmarks For now I've only performed a few simple benchmarks such as measuring kernel boot time and slab memory usage after boot. There's an upcoming patchset which will optimize KASAN further and include more detailed benchmarking results. The benchmarks were performed in QEMU and the results below exclude the slowdown caused by QEMU memory tagging emulation (as it's different from the slowdown that will be introduced by hardware and is therefore irrelevant). KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory impact compared to KASAN_HW_TAGS=n. 
kasan.mode=prod (manually excluding tagging) introduces 3% of performance and no memory impact (except memory used by hardware to store tags) compared to kasan.mode=off. kasan.mode=full has about 40% performance and 30% memory impact over kasan.mode=prod. Both come from alloc/free stack collection. === Notes This patchset is available here: https://github.com/xairy/linux/tree/up-boot-mte-v4 This patchset is based on v11 of "kasan: add hardware tag-based mode for arm64" patchset [1]. For testing in QEMU hardware tag-based KASAN requires: 1. QEMU built from master [6] (use "-machine virt,mte=on -cpu max" arguments to run). 2. GCC version 10. [1] https://lore.kernel.org/linux-arm-kernel/cover.1606161801.git.andreyknvl@google.com/T/#t [2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety [3] https://arxiv.org/pdf/1802.09517.pdf [4] https://github.com/microsoft/MSRC-Security-Research/blob/master/papers/2020/Security%20analysis%20of%20memory%20tagging.pdf [5] https://source.android.com/devices/architecture/kernel/generic-kernel-image [6] https://github.com/qemu/qemu === Tags Tested-by: Vincenzo Frascino This patch (of 19): Move get_free_info() call into quarantine_put() to simplify the call site. No functional changes. Link: https://lkml.kernel.org/r/cover.1606162397.git.andreyknvl@google.com Link: https://lkml.kernel.org/r/312d0a3ef92cc6dc4fa5452cbc1714f9393ca239.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Iab0f04e7ebf8d83247024b7190c67c3c34c7940f Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Catalin Marinas Cc: Will Deacon Cc: Andrey Ryabinin Cc: Alexander Potapenko Cc: Evgenii Stepanov Cc: Branislav Rankov Cc: Kevin Brodsky Cc: Vasily Gorbik Signed-off-by: Andrew Morton --- mm/kasan/common.c | 2 +- mm/kasan/kasan.h | 5 ++--- mm/kasan/quarantine.c | 3 ++- 3 files changed, 5 insertions(+), 5 deletions(-) --- a/mm/kasan/common.c~kasan-simplify-quarantine_put-call-site +++ a/mm/kasan/common.c @@ -313,7 +313,7 @@ static bool __kasan_slab_free(struct kme kasan_set_free_info(cache, object, tag); - quarantine_put(get_free_info(cache, object), cache); + quarantine_put(cache, object); return IS_ENABLED(CONFIG_KASAN_GENERIC); } --- a/mm/kasan/kasan.h~kasan-simplify-quarantine_put-call-site +++ a/mm/kasan/kasan.h @@ -216,12 +216,11 @@ struct kasan_track *kasan_get_free_track #if defined(CONFIG_KASAN_GENERIC) && \ (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) -void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); +void quarantine_put(struct kmem_cache *cache, void *object); void quarantine_reduce(void); void quarantine_remove_cache(struct kmem_cache *cache); #else -static inline void quarantine_put(struct kasan_free_meta *info, - struct kmem_cache *cache) { } +static inline void quarantine_put(struct kmem_cache *cache, void *object) { } static inline void quarantine_reduce(void) { } static inline void quarantine_remove_cache(struct kmem_cache *cache) { } #endif --- a/mm/kasan/quarantine.c~kasan-simplify-quarantine_put-call-site +++ a/mm/kasan/quarantine.c @@ -163,11 +163,12 @@ static void qlist_free_all(struct qlist_ qlist_init(q); } -void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) +void quarantine_put(struct kmem_cache *cache, void *object) { unsigned long flags; struct qlist_head *q; struct qlist_head temp = QLIST_INIT; + struct kasan_free_meta *info = get_free_info(cache, object); /* 
* Note: irq must be disabled until after we move the batch to the From patchwork Fri Dec 18 22:04:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1240EC3526B for ; Fri, 18 Dec 2020 22:04:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A522B23406 for ; Fri, 18 Dec 2020 22:04:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A522B23406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 42A3D6B0095; Fri, 18 Dec 2020 17:04:32 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 385F76B0096; Fri, 18 Dec 2020 17:04:32 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 24DC56B0098; Fri, 18 Dec 2020 17:04:32 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0201.hostedemail.com [216.40.44.201]) by kanga.kvack.org (Postfix) with ESMTP id 0C7DE6B0095 for ; Fri, 18 Dec 2020 17:04:32 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id C4E664DCA for ; Fri, 18 Dec 2020 22:04:31 +0000 (UTC) X-FDA: 77607782742.17.wren40_501682727440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id AC2481815EB48 for ; Fri, 18 Dec 2020 22:04:31 +0000 (UTC) X-HE-Tag: wren40_501682727440 X-Filterd-Recvd-Size: 10535 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf12.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:31 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:29 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329070; bh=ZIKFG3s2HbABl1ycLayTzh7a5MiOp7SIfFIxwSMWR18=; h=From:To:Subject:In-Reply-To:From; b=BxrpZtktucvkthhZ7D5wWUg+OgG+vmRBsryn7cn2hzlnz4C1F70qT36d2bWaDIT69 l83YNTPJqg3YBh1Hi5U9XtdMQ8IVfQfAhJKhJ77v1ceF9TdY4TAnhqsNuC0m7QhNci oAeAE5j3B/ZY/HJ1iYF9c9os7G3NCHdw4/K/pgHs= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 56/78] kasan: rename get_alloc/free_info Message-ID: <20201218220429.Frcjp4aKF%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: 
owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: rename get_alloc/free_info Rename get_alloc_info() and get_free_info() to kasan_get_alloc_meta() and kasan_get_free_meta() to better reflect what those do and avoid confusion with kasan_set_free_info(). No functional changes. Link: https://lkml.kernel.org/r/27b7c036b754af15a2839e945f6d8bfce32b4c2f.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ib6e4ba61c8b12112b403d3479a9799ac8fff8de1 Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 16 ++++++++-------- mm/kasan/generic.c | 12 ++++++------ mm/kasan/hw_tags.c | 4 ++-- mm/kasan/kasan.h | 8 ++++---- mm/kasan/quarantine.c | 4 ++-- mm/kasan/report.c | 12 ++++++------ mm/kasan/report_sw_tags.c | 2 +- mm/kasan/sw_tags.c | 4 ++-- 8 files changed, 31 insertions(+), 31 deletions(-) --- a/mm/kasan/common.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/common.c @@ -180,14 +180,14 @@ size_t kasan_metadata_size(struct kmem_c sizeof(struct kasan_free_meta) : 0); } -struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, - const void *object) +struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache, + const void *object) { return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset; } -struct kasan_free_meta *get_free_info(struct kmem_cache *cache, - const void *object) +struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, + const void *object) { BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset; @@ -264,13 +264,13 @@ static u8 assign_tag(struct kmem_cache * void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, const void *object) { - struct kasan_alloc_meta *alloc_info; + struct kasan_alloc_meta *alloc_meta; if (!(cache->flags & SLAB_KASAN)) return (void *)object; - alloc_info = get_alloc_info(cache, object); - __memset(alloc_info, 0, sizeof(*alloc_info)); + alloc_meta = kasan_get_alloc_meta(cache, object); + __memset(alloc_meta, 0, sizeof(*alloc_meta)); if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) object = set_tag(object, assign_tag(cache, object, true, false)); @@ -350,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem KASAN_KMALLOC_REDZONE); if (cache->flags & SLAB_KASAN) - kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags); + kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); return set_tag(object, tag); } --- a/mm/kasan/generic.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/generic.c @@ -328,7 +328,7 @@ void kasan_record_aux_stack(void *addr) { struct page *page = kasan_addr_to_page(addr); struct kmem_cache *cache; - struct kasan_alloc_meta *alloc_info; + struct kasan_alloc_meta *alloc_meta; void *object; if (!(page && PageSlab(page))) @@ -336,10 +336,10 @@ void kasan_record_aux_stack(void *addr) cache = page->slab_cache; object = nearest_obj(cache, page, addr); - alloc_info = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); - alloc_info->aux_stack[1] = alloc_info->aux_stack[0]; - alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT); + alloc_meta->aux_stack[1] = 
alloc_meta->aux_stack[0]; + alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT); } void kasan_set_free_info(struct kmem_cache *cache, @@ -347,7 +347,7 @@ void kasan_set_free_info(struct kmem_cac { struct kasan_free_meta *free_meta; - free_meta = get_free_info(cache, object); + free_meta = kasan_get_free_meta(cache, object); kasan_set_track(&free_meta->free_track, GFP_NOWAIT); /* @@ -361,5 +361,5 @@ struct kasan_track *kasan_get_free_track { if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK) return NULL; - return &get_free_info(cache, object)->free_track; + return &kasan_get_free_meta(cache, object)->free_track; } --- a/mm/kasan/hw_tags.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/hw_tags.c @@ -75,7 +75,7 @@ void kasan_set_free_info(struct kmem_cac { struct kasan_alloc_meta *alloc_meta; - alloc_meta = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT); } @@ -84,6 +84,6 @@ struct kasan_track *kasan_get_free_track { struct kasan_alloc_meta *alloc_meta; - alloc_meta = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); return &alloc_meta->free_track[0]; } --- a/mm/kasan/kasan.h~kasan-rename-get_alloc-free_info +++ a/mm/kasan/kasan.h @@ -149,10 +149,10 @@ struct kasan_free_meta { #endif }; -struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, - const void *object); -struct kasan_free_meta *get_free_info(struct kmem_cache *cache, - const void *object); +struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache, + const void *object); +struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, + const void *object); void poison_range(const void *address, size_t size, u8 value); void unpoison_range(const void *address, size_t size); --- a/mm/kasan/quarantine.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/quarantine.c @@ -168,7 +168,7 @@ void quarantine_put(struct kmem_cache *c unsigned long flags; struct qlist_head *q; struct qlist_head temp = QLIST_INIT; - struct kasan_free_meta *info = get_free_info(cache, object); + struct kasan_free_meta *meta = kasan_get_free_meta(cache, object); /* * Note: irq must be disabled until after we move the batch to the @@ -185,7 +185,7 @@ void quarantine_put(struct kmem_cache *c local_irq_restore(flags); return; } - qlist_put(q, &info->quarantine_link, cache->size); + qlist_put(q, &meta->quarantine_link, cache->size); if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) { qlist_move_all(q, &temp); --- a/mm/kasan/report.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/report.c @@ -164,12 +164,12 @@ static void describe_object_addr(struct static void describe_object(struct kmem_cache *cache, void *object, const void *addr, u8 tag) { - struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object); + struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object); if (cache->flags & SLAB_KASAN) { struct kasan_track *free_track; - print_track(&alloc_info->alloc_track, "Allocated"); + print_track(&alloc_meta->alloc_track, "Allocated"); pr_err("\n"); free_track = kasan_get_free_track(cache, object, tag); if (free_track) { @@ -178,14 +178,14 @@ static void describe_object(struct kmem_ } #ifdef CONFIG_KASAN_GENERIC - if (alloc_info->aux_stack[0]) { + if (alloc_meta->aux_stack[0]) { pr_err("Last potentially related work creation:\n"); - print_stack(alloc_info->aux_stack[0]); + print_stack(alloc_meta->aux_stack[0]); pr_err("\n"); } - if (alloc_info->aux_stack[1]) { + if 
(alloc_meta->aux_stack[1]) { pr_err("Second to last potentially related work creation:\n"); - print_stack(alloc_info->aux_stack[1]); + print_stack(alloc_meta->aux_stack[1]); pr_err("\n"); } #endif --- a/mm/kasan/report_sw_tags.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/report_sw_tags.c @@ -46,7 +46,7 @@ const char *get_bug_type(struct kasan_ac if (page && PageSlab(page)) { cache = page->slab_cache; object = nearest_obj(cache, page, (void *)addr); - alloc_meta = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); for (i = 0; i < KASAN_NR_FREE_STACKS; i++) if (alloc_meta->free_pointer_tag[i] == tag) --- a/mm/kasan/sw_tags.c~kasan-rename-get_alloc-free_info +++ a/mm/kasan/sw_tags.c @@ -174,7 +174,7 @@ void kasan_set_free_info(struct kmem_cac struct kasan_alloc_meta *alloc_meta; u8 idx = 0; - alloc_meta = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY idx = alloc_meta->free_track_idx; @@ -191,7 +191,7 @@ struct kasan_track *kasan_get_free_track struct kasan_alloc_meta *alloc_meta; int i = 0; - alloc_meta = get_alloc_info(cache, object); + alloc_meta = kasan_get_alloc_meta(cache, object); #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY for (i = 0; i < KASAN_NR_FREE_STACKS; i++) { From patchwork Fri Dec 18 22:04:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983137 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30216C3526B for ; Fri, 18 Dec 2020 22:04:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D589B23406 for ; Fri, 18 Dec 2020 22:04:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D589B23406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7CEB36B0098; Fri, 18 Dec 2020 17:04:35 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 755A96B0099; Fri, 18 Dec 2020 17:04:35 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 646FD6B009A; Fri, 18 Dec 2020 17:04:35 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0240.hostedemail.com [216.40.44.240]) by kanga.kvack.org (Postfix) with ESMTP id 466DA6B0098 for ; Fri, 18 Dec 2020 17:04:35 -0500 (EST) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 1601B181AF5FD for ; Fri, 18 Dec 2020 22:04:35 +0000 (UTC) X-FDA: 77607782910.06.idea20_4d0fecf27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin06.hostedemail.com (Postfix) with ESMTP id E404810059C22 for ; Fri, 18 Dec 2020 22:04:34 +0000 (UTC) X-HE-Tag: idea20_4d0fecf27440 X-Filterd-Recvd-Size: 3230 Received: from 
mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:34 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:32 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329073; bh=t0cd2hZ+0IGmC6YZOXAl7iIEiMiy2IvhFrjitqSK3sE=; h=From:To:Subject:In-Reply-To:From; b=JD2nX6kf2s7J+qvRqRVc3K/A4yGGM1m/+FHdiIeXL5hQ1YV3hLIXxa3goCnz63117 k5QWLx6fcQMJIBIfWoljggdOgMKC95ZxCNhx/WrynG4JPfV2l4GtBKXrV0cakyJJ/C BdOpPHEWOBh1iNNrkJ36+4mcizFJdI4sUfLboNcA= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 57/78] kasan: introduce set_alloc_info Message-ID: <20201218220432.y4LBUB0SU%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: introduce set_alloc_info Add set_alloc_info() helper and move kasan_set_track() into it. This will simplify the code for one of the upcoming changes. No functional changes. Link: https://lkml.kernel.org/r/b2393e8f1e311a70fc3aaa2196461b6acdee7d21.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) --- a/mm/kasan/common.c~kasan-introduce-set_alloc_info +++ a/mm/kasan/common.c @@ -323,6 +323,11 @@ bool kasan_slab_free(struct kmem_cache * return __kasan_slab_free(cache, object, ip, true); } +static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags) +{ + kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); +} + static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, gfp_t flags, bool keep_tag) { @@ -350,7 +355,7 @@ static void *__kasan_kmalloc(struct kmem KASAN_KMALLOC_REDZONE); if (cache->flags & SLAB_KASAN) - kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); + set_alloc_info(cache, (void *)object, flags); return set_tag(object, tag); } From patchwork Fri Dec 18 22:04:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983139 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) 
with ESMTP id A3D99C4361B for ; Fri, 18 Dec 2020 22:04:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3746823406 for ; Fri, 18 Dec 2020 22:04:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3746823406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D240D6B0099; Fri, 18 Dec 2020 17:04:38 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CADD86B009A; Fri, 18 Dec 2020 17:04:38 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BC34C6B009B; Fri, 18 Dec 2020 17:04:38 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0108.hostedemail.com [216.40.44.108]) by kanga.kvack.org (Postfix) with ESMTP id A20556B0099 for ; Fri, 18 Dec 2020 17:04:38 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 672CA1814C6F1 for ; Fri, 18 Dec 2020 22:04:38 +0000 (UTC) X-FDA: 77607783036.17.stamp56_36020aa27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 42BCA181BC43F for ; Fri, 18 Dec 2020 22:04:38 +0000 (UTC) X-HE-Tag: stamp56_36020aa27440 X-Filterd-Recvd-Size: 5545 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:37 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:36 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329076; bh=WNKVRpkZq69N5UEbck7KrA2+8iIlHIEGJXiayP/E030=; h=From:To:Subject:In-Reply-To:From; b=LDkxqGjwRuyHbYLMr6LHZZ7G3hl3FhPuS58CzKRx/KVRfzoEQDLVzsr4IxWVZgtUz +aGure57z3R8lc3EKJPijTVinGvfITyt7yb34ouEk4+kr31xqp4Yh7YyIAZj3K/+vh qX0sZDOFX3SrBXxSuSx/GQP2+FdwXMyiXqkkZwTU= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 58/78] kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK Message-ID: <20201218220436.9KojDJlYY%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK There's a config option CONFIG_KASAN_STACK that has to be enabled for KASAN to use stack instrumentation and perform validity checks for stack variables. There's no need to unpoison stack when CONFIG_KASAN_STACK is not enabled. Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is enabled. Note, that CONFIG_KASAN_STACK is an option that is currently always defined when CONFIG_KASAN is enabled, and therefore has to be tested with #if instead of #ifdef. 
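As an illustrative aside before the diff: Kconfig leaves CONFIG_KASAN_STACK defined (as either 0 or 1) whenever CONFIG_KASAN is on, so only a value test can tell the two configurations apart. A minimal sketch of the distinction, assuming that behaviour; the declaration/stub pair mirrors the include/linux/kasan.h hunk below:

struct task_struct;

/*
 * Sketch only (not taken from the patch): CONFIG_KASAN_STACK is assumed to
 * always be defined, as 0 or 1, once CONFIG_KASAN is enabled.
 */
#ifdef CONFIG_KASAN_STACK	/* wrong: the macro is defined even when it is 0 */
#endif

#if CONFIG_KASAN_STACK		/* right: taken only when the value is 1 */
void kasan_unpoison_task_stack(struct task_struct *task);
#else
static inline void kasan_unpoison_task_stack(struct task_struct *task) { }
#endif
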
Link: https://lkml.kernel.org/r/d09dd3f8abb388da397fd11598c5edeaa83fe559.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3 Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Acked-by: Catalin Marinas Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/arm64/kernel/sleep.S | 2 +- arch/x86/kernel/acpi/wakeup_64.S | 2 +- include/linux/kasan.h | 10 ++++++---- mm/kasan/common.c | 2 ++ 4 files changed, 10 insertions(+), 6 deletions(-) --- a/arch/arm64/kernel/sleep.S~kasan-arm64-unpoison-stack-only-with-config_kasan_stack +++ a/arch/arm64/kernel/sleep.S @@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume) */ bl cpu_do_resume -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK mov x0, sp bl kasan_unpoison_task_stack_below #endif --- a/arch/x86/kernel/acpi/wakeup_64.S~kasan-arm64-unpoison-stack-only-with-config_kasan_stack +++ a/arch/x86/kernel/acpi/wakeup_64.S @@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel) movq pt_regs_r14(%rax), %r14 movq pt_regs_r15(%rax), %r15 -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK /* * The suspend path may have poisoned some areas deeper in the stack, * which we now need to unpoison. --- a/include/linux/kasan.h~kasan-arm64-unpoison-stack-only-with-config_kasan_stack +++ a/include/linux/kasan.h @@ -77,8 +77,6 @@ static inline void kasan_disable_current void kasan_unpoison_range(const void *address, size_t size); -void kasan_unpoison_task_stack(struct task_struct *task); - void kasan_alloc_pages(struct page *page, unsigned int order); void kasan_free_pages(struct page *page, unsigned int order); @@ -123,8 +121,6 @@ void kasan_restore_multi_shot(bool enabl static inline void kasan_unpoison_range(const void *address, size_t size) {} -static inline void kasan_unpoison_task_stack(struct task_struct *task) {} - static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} static inline void kasan_free_pages(struct page *page, unsigned int order) {} @@ -176,6 +172,12 @@ static inline size_t kasan_metadata_size #endif /* CONFIG_KASAN */ +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK +void kasan_unpoison_task_stack(struct task_struct *task); +#else +static inline void kasan_unpoison_task_stack(struct task_struct *task) {} +#endif + #ifdef CONFIG_KASAN_GENERIC void kasan_cache_shrink(struct kmem_cache *cache); --- a/mm/kasan/common.c~kasan-arm64-unpoison-stack-only-with-config_kasan_stack +++ a/mm/kasan/common.c @@ -63,6 +63,7 @@ void kasan_unpoison_range(const void *ad unpoison_range(address, size); } +#if CONFIG_KASAN_STACK static void __kasan_unpoison_stack(struct task_struct *task, const void *sp) { void *base = task_stack_page(task); @@ -89,6 +90,7 @@ asmlinkage void kasan_unpoison_task_stac unpoison_range(base, watermark - base); } +#endif /* CONFIG_KASAN_STACK */ void kasan_alloc_pages(struct page *page, unsigned int order) { From patchwork Fri Dec 18 22:04:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983141 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, 
DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBA1EC4361B for ; Fri, 18 Dec 2020 22:04:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A028423B54 for ; Fri, 18 Dec 2020 22:04:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A028423B54 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3FC7A6B009B; Fri, 18 Dec 2020 17:04:42 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3ACC26B009C; Fri, 18 Dec 2020 17:04:42 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 29B786B009D; Fri, 18 Dec 2020 17:04:42 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0227.hostedemail.com [216.40.44.227]) by kanga.kvack.org (Postfix) with ESMTP id 105006B009B for ; Fri, 18 Dec 2020 17:04:42 -0500 (EST) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C9C4252C0 for ; Fri, 18 Dec 2020 22:04:41 +0000 (UTC) X-FDA: 77607783162.01.arch54_431038f27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin01.hostedemail.com (Postfix) with ESMTP id A880910059824 for ; Fri, 18 Dec 2020 22:04:41 +0000 (UTC) X-HE-Tag: arch54_431038f27440 X-Filterd-Recvd-Size: 3395 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf05.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:41 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:39 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329080; bh=rn5Neti983/U06+GwPVruLgh6wBz38/ROfpXbbmR6Qk=; h=From:To:Subject:In-Reply-To:From; b=yGJ5+VQ+7sgfgeXo9sDSsuMY3nluW1QKpbj0F+cT7gUuaoj7HSTcPOutLmybjp6f8 fY89pOhtJHMgVzBTHR6MawKvdK0A/+G/zdOBIQV//TZbiEdEIljx4T0OQPTMv33yij TjFp47eSO30ougd/1RuEhU3lFMQABbmZuGCdvP/4= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 59/78] kasan: allow VMAP_STACK for HW_TAGS mode Message-ID: <20201218220439.vuqXQp-7X%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: allow VMAP_STACK for HW_TAGS mode Even though hardware tag-based mode currently doesn't support checking vmalloc allocations, it doesn't use shadow memory and works with VMAP_STACK as is. Change VMAP_STACK definition accordingly. 
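To make the Kconfig change below easier to read as logic, here is a hypothetical C helper; vmap_stack_allowed() is not kernel code, only a sketch of the new dependency expression:

#include <linux/types.h>

/*
 * Hypothetical sketch of "depends on !KASAN || KASAN_HW_TAGS ||
 * KASAN_VMALLOC": software KASAN modes back virtually-mapped stacks with
 * shadow memory and therefore need KASAN_VMALLOC, while the hardware
 * tag-based mode has no shadow and tolerates VMAP_STACK as is.
 */
static inline bool vmap_stack_allowed(bool kasan, bool kasan_hw_tags,
				      bool kasan_vmalloc)
{
	return !kasan || kasan_hw_tags || kasan_vmalloc;
}
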
Link: https://lkml.kernel.org/r/ecdb2a1658ebd88eb276dee2493518ac0e82de41.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I3552cbc12321dec82cd7372676e9372a2eb452ac Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Acked-by: Catalin Marinas Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- arch/Kconfig | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/arch/Kconfig~kasan-allow-vmap_stack-for-hw_tags-mode +++ a/arch/Kconfig @@ -976,16 +976,16 @@ config VMAP_STACK default y bool "Use a virtually-mapped stack" depends on HAVE_ARCH_VMAP_STACK - depends on !KASAN || KASAN_VMALLOC + depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC help Enable this if you want the use virtually-mapped kernel stacks with guard pages. This causes kernel stack overflows to be caught immediately rather than causing difficult-to-diagnose corruption. - To use this with KASAN, the architecture must support backing - virtual mappings with real shadow memory, and KASAN_VMALLOC must - be enabled. + To use this with software KASAN modes, the architecture must support + backing virtual mappings with real shadow memory, and KASAN_VMALLOC + must be enabled. config ARCH_OPTIONAL_KERNEL_RWX def_bool n From patchwork Fri Dec 18 22:04:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983143 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46D62C4361B for ; Fri, 18 Dec 2020 22:04:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D018423B54 for ; Fri, 18 Dec 2020 22:04:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D018423B54 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 693A46B009C; Fri, 18 Dec 2020 17:04:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 642876B009D; Fri, 18 Dec 2020 17:04:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 558B66B009E; Fri, 18 Dec 2020 17:04:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0056.hostedemail.com [216.40.44.56]) by kanga.kvack.org (Postfix) with ESMTP id 3D2C56B009C for ; Fri, 18 Dec 2020 17:04:45 -0500 (EST) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 07AA58249980 for ; Fri, 18 Dec 2020 22:04:45 +0000 (UTC) X-FDA: 77607783330.21.hope20_130034027440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com (Postfix) with ESMTP id DD6321815EB48 
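For clarity, the arithmetic being dropped, as a before/after sketch (unpoison_stack_before/after are hypothetical names; task_stack_page(), THREAD_SIZE and unpoison_range() are the real symbols used in the hunk below):

/* Before (sketch): the watermark passed in is base + THREAD_SIZE, so the
 * helper's size computation always comes back to THREAD_SIZE. */
static void unpoison_stack_before(struct task_struct *task)
{
	void *base = task_stack_page(task);
	void *sp = base + THREAD_SIZE;

	unpoison_range(base, sp - base);	/* sp - base == THREAD_SIZE */
}

/* After (sketch): pass THREAD_SIZE directly and drop the helper. */
static void unpoison_stack_after(struct task_struct *task)
{
	unpoison_range(task_stack_page(task), THREAD_SIZE);
}
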
for ; Fri, 18 Dec 2020 22:04:44 +0000 (UTC) X-HE-Tag: hope20_130034027440 X-Filterd-Recvd-Size: 3178 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:44 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:42 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329083; bh=qTjQu+aAiKh3WeHDrcy3Ad4bAKWypLWyNRllSXK4W20=; h=From:To:Subject:In-Reply-To:From; b=PwJlgHek4is7XozRRfKSjFcOq0gkuqJC9rrAFGsgLLuB68rtFy1DbeAvydbmolRye l09j6l24hDMo4zXkMLyyr/2C37+eNHyPgAnABgZvkGlAco8ZnNeGEWh4pRsWxiPwFu Nkk6jWofWzw59+PUOWC5zq0JCA9/w3YwDWwSwpOk= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 60/78] kasan: remove __kasan_unpoison_stack Message-ID: <20201218220442.8aFrp07ly%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: remove __kasan_unpoison_stack There's no need for __kasan_unpoison_stack() helper, as it's only currently used in a single place. Removing it also removes unneeded arithmetic. No functional changes. Link: https://lkml.kernel.org/r/93e78948704a42ea92f6248ff8a725613d721161.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ie5ba549d445292fe629b4a96735e4034957bcc50 Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) --- a/mm/kasan/common.c~kasan-remove-__kasan_unpoison_stack +++ a/mm/kasan/common.c @@ -64,18 +64,12 @@ void kasan_unpoison_range(const void *ad } #if CONFIG_KASAN_STACK -static void __kasan_unpoison_stack(struct task_struct *task, const void *sp) -{ - void *base = task_stack_page(task); - size_t size = sp - base; - - unpoison_range(base, size); -} - /* Unpoison the entire stack for a task. */ void kasan_unpoison_task_stack(struct task_struct *task) { - __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE); + void *base = task_stack_page(task); + + unpoison_range(base, THREAD_SIZE); } /* Unpoison the stack for the current task beyond a watermark sp value. 
*/ From patchwork Fri Dec 18 22:04:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983145 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0A06C4361B for ; Fri, 18 Dec 2020 22:04:49 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 80D1923B9B for ; Fri, 18 Dec 2020 22:04:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 80D1923B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1D7386B0075; Fri, 18 Dec 2020 17:04:49 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1627E6B009E; Fri, 18 Dec 2020 17:04:49 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 09DDE6B009F; Fri, 18 Dec 2020 17:04:49 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0116.hostedemail.com [216.40.44.116]) by kanga.kvack.org (Postfix) with ESMTP id E3E136B0075 for ; Fri, 18 Dec 2020 17:04:48 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id B088C181C2203 for ; Fri, 18 Dec 2020 22:04:48 +0000 (UTC) X-FDA: 77607783456.04.fear48_2510fc127440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 9507B8010F81 for ; Fri, 18 Dec 2020 22:04:48 +0000 (UTC) X-HE-Tag: fear48_2510fc127440 X-Filterd-Recvd-Size: 9636 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf22.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:47 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:46 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329087; bh=UXzZsrC3gKABaDAdhnglQRlbopWcHxAyb+rRH9q5vXk=; h=From:To:Subject:In-Reply-To:From; b=AqlQFadd9B6r/AXTYWuDlyTN/Q6JPNore2Lp+vUAxQ5P6UxgNvFOufGvJA8OjrobU yybsm+MEE+lUwBTGBGtTKiVnuZi/MmrPUxD7qLtyseWhAR7WN2CWcciHKJsosD3YxR nbgp1tO2O9dZZsIgAoLvnP4wxClUEL9+/I3GxPlU= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 61/78] kasan: inline kasan_reset_tag for tag-based modes Message-ID: <20201218220446.KN_H-rtDs%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk 
X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: inline kasan_reset_tag for tag-based modes Using kasan_reset_tag() currently results in a function call. As it's called quite often from the allocator code, this leads to a noticeable slowdown. Move it to include/linux/kasan.h and turn it into a static inline function. Also remove the now unneeded reset_tag() internal KASAN macro and use kasan_reset_tag() instead. Link: https://lkml.kernel.org/r/6940383a3a9dfb416134d338d8fac97a9ebb8686.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I4d2061acfe91d480a75df00b07c22d8494ef14b5 Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 5 ++++- mm/kasan/common.c | 6 +++--- mm/kasan/hw_tags.c | 9 ++------- mm/kasan/kasan.h | 4 ---- mm/kasan/report.c | 4 ++-- mm/kasan/report_hw_tags.c | 2 +- mm/kasan/report_sw_tags.c | 4 ++-- mm/kasan/shadow.c | 4 ++-- mm/kasan/sw_tags.c | 9 ++------- 9 files changed, 18 insertions(+), 29 deletions(-) --- a/include/linux/kasan.h~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/include/linux/kasan.h @@ -194,7 +194,10 @@ static inline void kasan_record_aux_stac #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) -void *kasan_reset_tag(const void *addr); +static inline void *kasan_reset_tag(const void *addr) +{ + return (void *)arch_kasan_reset_tag(addr); +} bool kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); --- a/mm/kasan/common.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/common.c @@ -179,14 +179,14 @@ size_t kasan_metadata_size(struct kmem_c struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache, const void *object) { - return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset; + return kasan_reset_tag(object) + cache->kasan_info.alloc_meta_offset; } struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, const void *object) { BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); - return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset; + return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset; } void kasan_poison_slab(struct page *page) @@ -283,7 +283,7 @@ static bool __kasan_slab_free(struct kme tag = get_tag(object); tagged_object = object; - object = reset_tag(object); + object = kasan_reset_tag(object); if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != object)) { --- a/mm/kasan/hw_tags.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/hw_tags.c @@ -31,18 +31,13 @@ void __init kasan_init_hw_tags(void) pr_info("KernelAddressSanitizer initialized\n"); } -void *kasan_reset_tag(const void *addr) -{ - return reset_tag(addr); -} - void poison_range(const void *address, size_t size, u8 value) { /* Skip KFENCE memory if called explicitly outside of sl*b. 
*/ if (is_kfence_address(address)) return; - hw_set_mem_tag_range(reset_tag(address), + hw_set_mem_tag_range(kasan_reset_tag(address), round_up(size, KASAN_GRANULE_SIZE), value); } @@ -52,7 +47,7 @@ void unpoison_range(const void *address, if (is_kfence_address(address)) return; - hw_set_mem_tag_range(reset_tag(address), + hw_set_mem_tag_range(kasan_reset_tag(address), round_up(size, KASAN_GRANULE_SIZE), get_tag(address)); } --- a/mm/kasan/kasan.h~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/kasan.h @@ -248,15 +248,11 @@ static inline const void *arch_kasan_set return addr; } #endif -#ifndef arch_kasan_reset_tag -#define arch_kasan_reset_tag(addr) ((void *)(addr)) -#endif #ifndef arch_kasan_get_tag #define arch_kasan_get_tag(addr) 0 #endif #define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag))) -#define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr)) #define get_tag(addr) arch_kasan_get_tag(addr) #ifdef CONFIG_KASAN_HW_TAGS --- a/mm/kasan/report.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/report.c @@ -328,7 +328,7 @@ void kasan_report_invalid_free(void *obj unsigned long flags; u8 tag = get_tag(object); - object = reset_tag(object); + object = kasan_reset_tag(object); #if IS_ENABLED(CONFIG_KUNIT) if (current->kunit_test) @@ -361,7 +361,7 @@ static void __kasan_report(unsigned long disable_trace_on_warning(); tagged_addr = (void *)addr; - untagged_addr = reset_tag(tagged_addr); + untagged_addr = kasan_reset_tag(tagged_addr); info.access_addr = tagged_addr; if (addr_has_metadata(untagged_addr)) --- a/mm/kasan/report_hw_tags.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/report_hw_tags.c @@ -22,7 +22,7 @@ const char *get_bug_type(struct kasan_ac void *find_first_bad_addr(void *addr, size_t size) { - return reset_tag(addr); + return kasan_reset_tag(addr); } void metadata_fetch_row(char *buffer, void *row) --- a/mm/kasan/report_sw_tags.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/report_sw_tags.c @@ -41,7 +41,7 @@ const char *get_bug_type(struct kasan_ac int i; tag = get_tag(info->access_addr); - addr = reset_tag(info->access_addr); + addr = kasan_reset_tag(info->access_addr); page = kasan_addr_to_page(addr); if (page && PageSlab(page)) { cache = page->slab_cache; @@ -72,7 +72,7 @@ const char *get_bug_type(struct kasan_ac void *find_first_bad_addr(void *addr, size_t size) { u8 tag = get_tag(addr); - void *p = reset_tag(addr); + void *p = kasan_reset_tag(addr); void *end = p + size; while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p)) --- a/mm/kasan/shadow.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/shadow.c @@ -82,7 +82,7 @@ void poison_range(const void *address, s * some of the callers (e.g. kasan_poison_object_data) pass tagged * addresses to this function. */ - address = reset_tag(address); + address = kasan_reset_tag(address); /* Skip KFENCE memory if called explicitly outside of sl*b. */ if (is_kfence_address(address)) @@ -103,7 +103,7 @@ void unpoison_range(const void *address, * some of the callers (e.g. kasan_unpoison_object_data) pass tagged * addresses to this function. */ - address = reset_tag(address); + address = kasan_reset_tag(address); /* * Skip KFENCE memory if called explicitly outside of sl*b. 
Also note --- a/mm/kasan/sw_tags.c~kasan-inline-kasan_reset_tag-for-tag-based-modes +++ a/mm/kasan/sw_tags.c @@ -67,11 +67,6 @@ u8 random_tag(void) return (u8)(state % (KASAN_TAG_MAX + 1)); } -void *kasan_reset_tag(const void *addr) -{ - return reset_tag(addr); -} - bool check_memory_region(unsigned long addr, size_t size, bool write, unsigned long ret_ip) { @@ -107,7 +102,7 @@ bool check_memory_region(unsigned long a if (tag == KASAN_TAG_KERNEL) return true; - untagged_addr = reset_tag((const void *)addr); + untagged_addr = kasan_reset_tag((const void *)addr); if (unlikely(untagged_addr < kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) { return !kasan_report(addr, size, write, ret_ip); @@ -126,7 +121,7 @@ bool check_memory_region(unsigned long a bool check_invalid_free(void *addr) { u8 tag = get_tag(addr); - u8 shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(reset_tag(addr))); + u8 shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(kasan_reset_tag(addr))); return (shadow_byte == KASAN_TAG_INVALID) || (tag != KASAN_TAG_KERNEL && tag != shadow_byte); From patchwork Fri Dec 18 22:04:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983147 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DC28C4361B for ; Fri, 18 Dec 2020 22:04:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CE6E423BA7 for ; Fri, 18 Dec 2020 22:04:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CE6E423BA7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6E1CD6B0078; Fri, 18 Dec 2020 17:04:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 6B87E6B009F; Fri, 18 Dec 2020 17:04:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5F6DD6B00A0; Fri, 18 Dec 2020 17:04:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0145.hostedemail.com [216.40.44.145]) by kanga.kvack.org (Postfix) with ESMTP id 45C816B0078 for ; Fri, 18 Dec 2020 17:04:52 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 0E1E852D9 for ; Fri, 18 Dec 2020 22:04:52 +0000 (UTC) X-FDA: 77607783624.22.straw38_480df3927440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id D761D1808F288 for ; Fri, 18 Dec 2020 22:04:51 +0000 (UTC) X-HE-Tag: straw38_480df3927440 X-Filterd-Recvd-Size: 4468 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:51 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:49 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=linux-foundation.org; s=korg; t=1608329090; bh=fDxMtLRiE1u+3tBgyEKdRzjZEs7YzQkCtht17+8a05U=; h=From:To:Subject:In-Reply-To:From; b=HUVIbVH4pI4QVv3KNwXFOvjvBow1Ucu+QLrOUoYyXVABx6aRY6nMnbZCGCJlz54i4 igJz4msifx6uW9AC8h5k1Hz28fLKhwZSqqYXmKGzSvShyYJXKnoCCxoLoVkakds8Wg WI7qBUnzNl10XCPr8BnLzbuI2KK3rAv0Kkg9/Qa0= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 62/78] kasan: inline random_tag for HW_TAGS Message-ID: <20201218220449.Ziq-X8NkT%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: inline random_tag for HW_TAGS Using random_tag() currently results in a function call. Move its definition to mm/kasan/kasan.h and turn it into a static inline function for hardware tag-based mode to avoid uneeded function calls. Link: https://lkml.kernel.org/r/be438471690e351e1d792e6bb432e8c03ccb15d3.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Iac5b2faf9a912900e16cca6834d621f5d4abf427 Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/hw_tags.c | 5 ----- mm/kasan/kasan.h | 31 ++++++++++++++----------------- 2 files changed, 14 insertions(+), 22 deletions(-) --- a/mm/kasan/hw_tags.c~kasan-inline-random_tag-for-hw_tags +++ a/mm/kasan/hw_tags.c @@ -51,11 +51,6 @@ void unpoison_range(const void *address, round_up(size, KASAN_GRANULE_SIZE), get_tag(address)); } -u8 random_tag(void) -{ - return hw_get_random_tag(); -} - bool check_invalid_free(void *addr) { u8 ptr_tag = get_tag(addr); --- a/mm/kasan/kasan.h~kasan-inline-random_tag-for-hw_tags +++ a/mm/kasan/kasan.h @@ -190,6 +190,12 @@ static inline bool addr_has_metadata(con #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) +void print_tags(u8 addr_tag, const void *addr); +#else +static inline void print_tags(u8 addr_tag, const void *addr) { } +#endif + bool check_invalid_free(void *addr); void *find_first_bad_addr(void *addr, size_t size); @@ -225,23 +231,6 @@ static inline void quarantine_reduce(voi static inline void quarantine_remove_cache(struct kmem_cache *cache) { } #endif -#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) - -void print_tags(u8 addr_tag, const void *addr); - -u8 random_tag(void); - -#else - -static inline void print_tags(u8 addr_tag, const void *addr) { } - -static inline u8 random_tag(void) -{ - return 0; -} - -#endif - #ifndef arch_kasan_set_tag static inline const void *arch_kasan_set_tag(const void *addr, u8 tag) { @@ -281,6 +270,14 @@ static inline const void *arch_kasan_set #endif /* CONFIG_KASAN_HW_TAGS */ +#ifdef CONFIG_KASAN_SW_TAGS +u8 random_tag(void); +#elif defined(CONFIG_KASAN_HW_TAGS) +static inline u8 
random_tag(void) { return hw_get_random_tag(); } +#else +static inline u8 random_tag(void) { return 0; } +#endif + /* * Exported functions for interfaces called from assembly or from generated * code. Declarations here to avoid warning about missing declarations. From patchwork Fri Dec 18 22:04:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983149 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BCC96C3526B for ; Fri, 18 Dec 2020 22:04:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 59AFF23BAB for ; Fri, 18 Dec 2020 22:04:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 59AFF23BAB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id EF9196B005C; Fri, 18 Dec 2020 17:04:55 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id EA7736B00A0; Fri, 18 Dec 2020 17:04:55 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DE48B6B00A1; Fri, 18 Dec 2020 17:04:55 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0043.hostedemail.com [216.40.44.43]) by kanga.kvack.org (Postfix) with ESMTP id C4A296B005C for ; Fri, 18 Dec 2020 17:04:55 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 8E6A28249980 for ; Fri, 18 Dec 2020 22:04:55 +0000 (UTC) X-FDA: 77607783750.18.cows77_2a0e62127440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin18.hostedemail.com (Postfix) with ESMTP id 6DBFD100ED0F5 for ; Fri, 18 Dec 2020 22:04:55 +0000 (UTC) X-HE-Tag: cows77_2a0e62127440 X-Filterd-Recvd-Size: 3743 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf07.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:54 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:53 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329094; bh=uwEfFpsDitRQNrxaDV+NeBYhlLTFGRyRjUl+0c90tqg=; h=From:To:Subject:In-Reply-To:From; b=agAxwA3cBzwTQBHQbxn7DbSXigSZcvLYTnwQbM2I2tugg37OoUUpkEsdETD/uMUOU onmiVOJ8IQD8dJk3OAgcyXj61kj4p5nX3JUl9Oj0S6Bz5O5HlTOK8A4xVl1HtZvqLP PEjqdaIpEychkQ4tcZATwGVW1Fo3doJa9fANWTh4= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 63/78] kasan: open-code kasan_unpoison_slab Message-ID: 
<20201218220453.Dsv2ErHqO%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: open-code kasan_unpoison_slab There's the external annotation kasan_unpoison_slab() that is currently defined as static inline and uses kasan_unpoison_range(). Open-code this function in mempool.c. Otherwise with an upcoming change this function will result in an unnecessary function call. Link: https://lkml.kernel.org/r/131a6694a978a9a8b150187e539eecc8bcbf759b.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ia7c8b659f79209935cbaab3913bf7f082cc43a0e Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 6 ------ mm/mempool.c | 2 +- 2 files changed, 1 insertion(+), 7 deletions(-) --- a/include/linux/kasan.h~kasan-open-code-kasan_unpoison_slab +++ a/include/linux/kasan.h @@ -107,11 +107,6 @@ struct kasan_cache { int free_meta_offset; }; -size_t __ksize(const void *); -static inline void kasan_unpoison_slab(const void *ptr) -{ - kasan_unpoison_range(ptr, __ksize(ptr)); -} size_t kasan_metadata_size(struct kmem_cache *cache); bool kasan_save_enable_multi_shot(void); @@ -167,7 +162,6 @@ static inline bool kasan_slab_free(struc return false; } -static inline void kasan_unpoison_slab(const void *ptr) { } static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } #endif /* CONFIG_KASAN */ --- a/mm/mempool.c~kasan-open-code-kasan_unpoison_slab +++ a/mm/mempool.c @@ -112,7 +112,7 @@ static __always_inline void kasan_poison static void kasan_unpoison_element(mempool_t *pool, void *element) { if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) - kasan_unpoison_slab(element); + kasan_unpoison_range(element, __ksize(element)); else if (pool->alloc == mempool_alloc_pages) kasan_alloc_pages(element, (unsigned long)pool->pool_data); } From patchwork Fri Dec 18 22:04:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983151 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85F85C2D0E4 for ; Fri, 18 Dec 2020 22:05:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0A67823406 for ; Fri, 18 Dec 2020 22:04:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0A67823406 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix) id 8DC296B0068; Fri, 18 Dec 2020 17:04:59 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 88C8C6B00A1; Fri, 18 Dec 2020 17:04:59 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7A1796B00A2; Fri, 18 Dec 2020 17:04:59 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0180.hostedemail.com [216.40.44.180]) by kanga.kvack.org (Postfix) with ESMTP id 5F71B6B0068 for ; Fri, 18 Dec 2020 17:04:59 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 29C358249980 for ; Fri, 18 Dec 2020 22:04:59 +0000 (UTC) X-FDA: 77607783918.18.toys26_0c00afa27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin18.hostedemail.com (Postfix) with ESMTP id 06F6E100EC665 for ; Fri, 18 Dec 2020 22:04:58 +0000 (UTC) X-HE-Tag: toys26_0c00afa27440 X-Filterd-Recvd-Size: 6434 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:04:58 +0000 (UTC) Date: Fri, 18 Dec 2020 14:04:56 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329097; bh=7LQXGIx6RDAeofE5iVyO/yTxLjk97FSR2IehAPp21CE=; h=From:To:Subject:In-Reply-To:From; b=ja7tR8kmuSpGQ9GOhqKS2O2DcX15vlm22X708eqsZQ1CJnCy8fgze7FgiifPycf7y BaGw9AcRdqx6r4EnXKt5XnngOWmVRrb1fTXzVilDPFVp4eFf+puHKZR0OaDZVuJQlw kaYgV9NtGyIwZbtezyVO5qoGo4QBNvy+CzVnHu/k= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 64/78] kasan: inline (un)poison_range and check_invalid_free Message-ID: <20201218220456.ewCaZp1VN%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: inline (un)poison_range and check_invalid_free Using (un)poison_range() or check_invalid_free() currently results in function calls. Move their definitions to mm/kasan/kasan.h and turn them into static inline functions for hardware tag-based mode to avoid unneeded function calls. 
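The header-side pattern used by this and the neighbouring patches is sketched below; example_check() is a hypothetical name, while get_tag() and hw_get_mem_tag() are the same internal helpers that appear in check_invalid_free() in the diff that follows:

#ifdef CONFIG_KASAN_HW_TAGS
/*
 * Sketch only: for the hardware tag-based mode the hot-path helper lives in
 * mm/kasan/kasan.h as a static inline, so allocator call sites pay no
 * function-call overhead.
 */
static inline bool example_check(void *addr)
{
	return get_tag(addr) == hw_get_mem_tag(addr);
}
#else
/* Other modes keep an out-of-line definition in a .c file. */
bool example_check(void *addr);
#endif
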
Link: https://lkml.kernel.org/r/7007955b69eb31b5376a7dc1e0f4ac49138504f2.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ia9d8191024a12d1374675b3d27197f10193f50bb Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton Reported-by: kernel test robot --- mm/kasan/hw_tags.c | 30 ---------------------------- mm/kasan/kasan.h | 45 ++++++++++++++++++++++++++++++++++++++----- 2 files changed, 40 insertions(+), 35 deletions(-) --- a/mm/kasan/hw_tags.c~kasan-inline-unpoison_range-and-check_invalid_free +++ a/mm/kasan/hw_tags.c @@ -10,7 +10,6 @@ #include #include -#include #include #include #include @@ -31,35 +30,6 @@ void __init kasan_init_hw_tags(void) pr_info("KernelAddressSanitizer initialized\n"); } -void poison_range(const void *address, size_t size, u8 value) -{ - /* Skip KFENCE memory if called explicitly outside of sl*b. */ - if (is_kfence_address(address)) - return; - - hw_set_mem_tag_range(kasan_reset_tag(address), - round_up(size, KASAN_GRANULE_SIZE), value); -} - -void unpoison_range(const void *address, size_t size) -{ - /* Skip KFENCE memory if called explicitly outside of sl*b. */ - if (is_kfence_address(address)) - return; - - hw_set_mem_tag_range(kasan_reset_tag(address), - round_up(size, KASAN_GRANULE_SIZE), get_tag(address)); -} - -bool check_invalid_free(void *addr) -{ - u8 ptr_tag = get_tag(addr); - u8 mem_tag = hw_get_mem_tag(addr); - - return (mem_tag == KASAN_TAG_INVALID) || - (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag); -} - void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag) { --- a/mm/kasan/kasan.h~kasan-inline-unpoison_range-and-check_invalid_free +++ a/mm/kasan/kasan.h @@ -3,6 +3,7 @@ #define __MM_KASAN_KASAN_H #include +#include #include #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) @@ -154,9 +155,6 @@ struct kasan_alloc_meta *kasan_get_alloc struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, const void *object); -void poison_range(const void *address, size_t size, u8 value); -void unpoison_range(const void *address, size_t size); - #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) static inline const void *kasan_shadow_to_mem(const void *shadow_addr) @@ -196,8 +194,6 @@ void print_tags(u8 addr_tag, const void static inline void print_tags(u8 addr_tag, const void *addr) { } #endif -bool check_invalid_free(void *addr); - void *find_first_bad_addr(void *addr, size_t size); const char *get_bug_type(struct kasan_access_info *info); void metadata_fetch_row(char *buffer, void *row); @@ -278,6 +274,45 @@ static inline u8 random_tag(void) { retu static inline u8 random_tag(void) { return 0; } #endif +#ifdef CONFIG_KASAN_HW_TAGS + +static inline void poison_range(const void *address, size_t size, u8 value) +{ + /* Skip KFENCE memory if called explicitly outside of sl*b. */ + if (is_kfence_address(address)) + return; + + hw_set_mem_tag_range(kasan_reset_tag(address), + round_up(size, KASAN_GRANULE_SIZE), value); +} + +static inline void unpoison_range(const void *address, size_t size) +{ + /* Skip KFENCE memory if called explicitly outside of sl*b. 
*/ + if (is_kfence_address(address)) + return; + + hw_set_mem_tag_range(kasan_reset_tag(address), + round_up(size, KASAN_GRANULE_SIZE), get_tag(address)); +} + +static inline bool check_invalid_free(void *addr) +{ + u8 ptr_tag = get_tag(addr); + u8 mem_tag = hw_get_mem_tag(addr); + + return (mem_tag == KASAN_TAG_INVALID) || + (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag); +} + +#else /* CONFIG_KASAN_HW_TAGS */ + +void poison_range(const void *address, size_t size, u8 value); +void unpoison_range(const void *address, size_t size); +bool check_invalid_free(void *addr); + +#endif /* CONFIG_KASAN_HW_TAGS */ + /* * Exported functions for interfaces called from assembly or from generated * code. Declarations here to avoid warning about missing declarations. From patchwork Fri Dec 18 22:05:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983153 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDC9EC4361B for ; Fri, 18 Dec 2020 22:05:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7586123B7E for ; Fri, 18 Dec 2020 22:05:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7586123B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1D76E6B006C; Fri, 18 Dec 2020 17:05:03 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 187BC6B00A2; Fri, 18 Dec 2020 17:05:03 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 09FFC6B00A3; Fri, 18 Dec 2020 17:05:03 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0223.hostedemail.com [216.40.44.223]) by kanga.kvack.org (Postfix) with ESMTP id E42A36B006C for ; Fri, 18 Dec 2020 17:05:02 -0500 (EST) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id AE00052C1 for ; Fri, 18 Dec 2020 22:05:02 +0000 (UTC) X-FDA: 77607784044.12.plane51_3c0afbf27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id 83827180CC5DE for ; Fri, 18 Dec 2020 22:05:02 +0000 (UTC) X-HE-Tag: plane51_3c0afbf27440 X-Filterd-Recvd-Size: 12799 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf12.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:01 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329101; bh=ZiS8j4yJG5lVPU9Sslr3jP15X5it0IqDuKtXrDDf8Kw=; h=From:To:Subject:In-Reply-To:From; b=dmk6wyFF9L3RMYNkj3E+tgTIjPhB9egT8Ul84PZR507BVkYoMt6kP7JlimPQ5R6KL dGCqcNuzVn9YqmJNdZeKVzh3t8lyodP3rrX0C5iPuOg2HikTL6rLXdQ8sTK4ZkgIYE 
K9hnhogSAIcdn3cA0MDrYhnNbOrPK0bDiNbelVSc= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 65/78] kasan: add and integrate kasan boot parameters Message-ID: <20201218220500.YhsoRDCs0%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: add and integrate kasan boot parameters Hardware tag-based KASAN mode is intended to eventually be used in production as a security mitigation. Therefore there is a need for finer control over KASAN features and for a kill switch. This change adds a few boot parameters for hardware tag-based KASAN that allow particular KASAN features to be disabled or otherwise controlled. The features that can be controlled are:
1. Whether KASAN is enabled at all.
2. Whether KASAN collects and saves alloc/free stacks.
3. Whether KASAN panics on a detected bug or not.
With this change a new boot parameter kasan.mode allows choosing one of three main modes:
- kasan.mode=off - KASAN is disabled, no tag checks are performed
- kasan.mode=prod - only essential production features are enabled
- kasan.mode=full - all KASAN features are enabled
The chosen mode provides default control values for the features mentioned above. However, it is also possible to override the defaults by providing:
- kasan.stacktrace=off/on - enable alloc/free stack collection (default: on for mode=full, otherwise off)
- kasan.fault=report/panic - only report a tag fault or also panic (default: report)
If the kasan.mode parameter is not provided, it defaults to full when CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise. It is essential that switching between these modes does not require rebuilding the kernel with different configs, as required by the Android GKI (Generic Kernel Image) initiative [1].
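As a usage sketch (not part of the patch itself; the exact values are only illustrative), a production kernel that should still bring the system down on any detected tag fault could boot with:

    kasan.mode=prod kasan.fault=panic

while a debug kernel that merely wants to skip alloc/free stack collection could pass kasan.stacktrace=off on its own and rely on the defaults described above.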
[1] https://source.android.com/devices/architecture/kernel/generic-kernel-image [andreyknvl@google.com: don't use read-only static keys] Link: https://lkml.kernel.org/r/f2ded589eba1597f7360a972226083de9afd86e2.1607537948.git.andreyknvl@google.com Link: https://lkml.kernel.org/r/cb093613879d8d8841173f090133eddeb4c35f1f.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4 Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 22 ++++-- mm/kasan/hw_tags.c | 151 +++++++++++++++++++++++++++++++++++++++++++ mm/kasan/kasan.h | 16 ++++ mm/kasan/report.c | 14 +++ 4 files changed, 196 insertions(+), 7 deletions(-) --- a/mm/kasan/common.c~kasan-add-and-integrate-kasan-boot-parameters +++ a/mm/kasan/common.c @@ -134,6 +134,11 @@ void kasan_cache_create(struct kmem_cach unsigned int redzone_size; int redzone_adjust; + if (!kasan_stack_collection_enabled()) { + *flags |= SLAB_KASAN; + return; + } + /* Add alloc meta. */ cache->kasan_info.alloc_meta_offset = *size; *size += sizeof(struct kasan_alloc_meta); @@ -170,6 +175,8 @@ void kasan_cache_create(struct kmem_cach size_t kasan_metadata_size(struct kmem_cache *cache) { + if (!kasan_stack_collection_enabled()) + return 0; return (cache->kasan_info.alloc_meta_offset ? sizeof(struct kasan_alloc_meta) : 0) + (cache->kasan_info.free_meta_offset ? @@ -262,11 +269,13 @@ void * __must_check kasan_init_slab_obj( { struct kasan_alloc_meta *alloc_meta; - if (!(cache->flags & SLAB_KASAN)) - return (void *)object; + if (kasan_stack_collection_enabled()) { + if (!(cache->flags & SLAB_KASAN)) + return (void *)object; - alloc_meta = kasan_get_alloc_meta(cache, object); - __memset(alloc_meta, 0, sizeof(*alloc_meta)); + alloc_meta = kasan_get_alloc_meta(cache, object); + __memset(alloc_meta, 0, sizeof(*alloc_meta)); + } if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) object = set_tag(object, assign_tag(cache, object, true, false)); @@ -303,6 +312,9 @@ static bool __kasan_slab_free(struct kme rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE); poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE); + if (!kasan_stack_collection_enabled()) + return false; + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || unlikely(!(cache->flags & SLAB_KASAN))) return false; @@ -350,7 +362,7 @@ static void *__kasan_kmalloc(struct kmem poison_range((void *)redzone_start, redzone_end - redzone_start, KASAN_KMALLOC_REDZONE); - if (cache->flags & SLAB_KASAN) + if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN)) set_alloc_info(cache, (void *)object, flags); return set_tag(object, tag); --- a/mm/kasan/hw_tags.c~kasan-add-and-integrate-kasan-boot-parameters +++ a/mm/kasan/hw_tags.c @@ -8,18 +8,115 @@ #define pr_fmt(fmt) "kasan: " fmt +#include #include #include #include #include +#include #include #include #include "kasan.h" +enum kasan_arg_mode { + KASAN_ARG_MODE_DEFAULT, + KASAN_ARG_MODE_OFF, + KASAN_ARG_MODE_PROD, + KASAN_ARG_MODE_FULL, +}; + +enum kasan_arg_stacktrace { + KASAN_ARG_STACKTRACE_DEFAULT, + KASAN_ARG_STACKTRACE_OFF, + KASAN_ARG_STACKTRACE_ON, +}; + +enum kasan_arg_fault { + KASAN_ARG_FAULT_DEFAULT, + KASAN_ARG_FAULT_REPORT, + KASAN_ARG_FAULT_PANIC, +}; + +static enum 
kasan_arg_mode kasan_arg_mode __ro_after_init; +static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init; +static enum kasan_arg_fault kasan_arg_fault __ro_after_init; + +/* Whether KASAN is enabled at all. */ +DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled); +EXPORT_SYMBOL(kasan_flag_enabled); + +/* Whether to collect alloc/free stack traces. */ +DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace); + +/* Whether panic or disable tag checking on fault. */ +bool kasan_flag_panic __ro_after_init; + +/* kasan.mode=off/prod/full */ +static int __init early_kasan_mode(char *arg) +{ + if (!arg) + return -EINVAL; + + if (!strcmp(arg, "off")) + kasan_arg_mode = KASAN_ARG_MODE_OFF; + else if (!strcmp(arg, "prod")) + kasan_arg_mode = KASAN_ARG_MODE_PROD; + else if (!strcmp(arg, "full")) + kasan_arg_mode = KASAN_ARG_MODE_FULL; + else + return -EINVAL; + + return 0; +} +early_param("kasan.mode", early_kasan_mode); + +/* kasan.stack=off/on */ +static int __init early_kasan_flag_stacktrace(char *arg) +{ + if (!arg) + return -EINVAL; + + if (!strcmp(arg, "off")) + kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_OFF; + else if (!strcmp(arg, "on")) + kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_ON; + else + return -EINVAL; + + return 0; +} +early_param("kasan.stacktrace", early_kasan_flag_stacktrace); + +/* kasan.fault=report/panic */ +static int __init early_kasan_fault(char *arg) +{ + if (!arg) + return -EINVAL; + + if (!strcmp(arg, "report")) + kasan_arg_fault = KASAN_ARG_FAULT_REPORT; + else if (!strcmp(arg, "panic")) + kasan_arg_fault = KASAN_ARG_FAULT_PANIC; + else + return -EINVAL; + + return 0; +} +early_param("kasan.fault", early_kasan_fault); + /* kasan_init_hw_tags_cpu() is called for each CPU. */ void kasan_init_hw_tags_cpu(void) { + /* + * There's no need to check that the hardware is MTE-capable here, + * as this function is only called for MTE-capable hardware. + */ + + /* If KASAN is disabled, do nothing. */ + if (kasan_arg_mode == KASAN_ARG_MODE_OFF) + return; + hw_init_tags(KASAN_TAG_MAX); hw_enable_tagging(); } @@ -27,6 +124,60 @@ void kasan_init_hw_tags_cpu(void) /* kasan_init_hw_tags() is called once on boot CPU. */ void __init kasan_init_hw_tags(void) { + /* If hardware doesn't support MTE, do nothing. */ + if (!system_supports_mte()) + return; + + /* Choose KASAN mode if kasan boot parameter is not provided. */ + if (kasan_arg_mode == KASAN_ARG_MODE_DEFAULT) { + if (IS_ENABLED(CONFIG_DEBUG_KERNEL)) + kasan_arg_mode = KASAN_ARG_MODE_FULL; + else + kasan_arg_mode = KASAN_ARG_MODE_PROD; + } + + /* Preset parameter values based on the mode. */ + switch (kasan_arg_mode) { + case KASAN_ARG_MODE_DEFAULT: + /* Shouldn't happen as per the check above. */ + WARN_ON(1); + return; + case KASAN_ARG_MODE_OFF: + /* If KASAN is disabled, do nothing. */ + return; + case KASAN_ARG_MODE_PROD: + static_branch_enable(&kasan_flag_enabled); + break; + case KASAN_ARG_MODE_FULL: + static_branch_enable(&kasan_flag_enabled); + static_branch_enable(&kasan_flag_stacktrace); + break; + } + + /* Now, optionally override the presets. 
*/ + + switch (kasan_arg_stacktrace) { + case KASAN_ARG_STACKTRACE_DEFAULT: + break; + case KASAN_ARG_STACKTRACE_OFF: + static_branch_disable(&kasan_flag_stacktrace); + break; + case KASAN_ARG_STACKTRACE_ON: + static_branch_enable(&kasan_flag_stacktrace); + break; + } + + switch (kasan_arg_fault) { + case KASAN_ARG_FAULT_DEFAULT: + break; + case KASAN_ARG_FAULT_REPORT: + kasan_flag_panic = false; + break; + case KASAN_ARG_FAULT_PANIC: + kasan_flag_panic = true; + break; + } + pr_info("KernelAddressSanitizer initialized\n"); } --- a/mm/kasan/kasan.h~kasan-add-and-integrate-kasan-boot-parameters +++ a/mm/kasan/kasan.h @@ -6,6 +6,22 @@ #include #include +#ifdef CONFIG_KASAN_HW_TAGS +#include +DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace); +static inline bool kasan_stack_collection_enabled(void) +{ + return static_branch_unlikely(&kasan_flag_stacktrace); +} +#else +static inline bool kasan_stack_collection_enabled(void) +{ + return true; +} +#endif + +extern bool kasan_flag_panic __ro_after_init; + #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) #else --- a/mm/kasan/report.c~kasan-add-and-integrate-kasan-boot-parameters +++ a/mm/kasan/report.c @@ -99,6 +99,10 @@ static void end_report(unsigned long *fl panic_on_warn = 0; panic("panic_on_warn set ...\n"); } +#ifdef CONFIG_KASAN_HW_TAGS + if (kasan_flag_panic) + panic("kasan.fault=panic set ...\n"); +#endif kasan_enable_current(); } @@ -161,8 +165,8 @@ static void describe_object_addr(struct (void *)(object_addr + cache->object_size)); } -static void describe_object(struct kmem_cache *cache, void *object, - const void *addr, u8 tag) +static void describe_object_stacks(struct kmem_cache *cache, void *object, + const void *addr, u8 tag) { struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object); @@ -190,7 +194,13 @@ static void describe_object(struct kmem_ } #endif } +} +static void describe_object(struct kmem_cache *cache, void *object, + const void *addr, u8 tag) +{ + if (kasan_stack_collection_enabled()) + describe_object_stacks(cache, object, addr, tag); describe_object_addr(cache, object, addr); } From patchwork Fri Dec 18 22:05:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983155 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E258C4361B for ; Fri, 18 Dec 2020 22:05:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3A0EC23B7E for ; Fri, 18 Dec 2020 22:05:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3A0EC23B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CD4A16B00A3; Fri, 18 Dec 2020 17:05:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C5CFF6B00A4; Fri, 18 Dec 2020 17:05:06 
-0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B4B8E6B00A5; Fri, 18 Dec 2020 17:05:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0154.hostedemail.com [216.40.44.154]) by kanga.kvack.org (Postfix) with ESMTP id 96C426B00A3 for ; Fri, 18 Dec 2020 17:05:06 -0500 (EST) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 619CB52AD for ; Fri, 18 Dec 2020 22:05:06 +0000 (UTC) X-FDA: 77607784212.16.guide02_0e11bce27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin16.hostedemail.com (Postfix) with ESMTP id 4249D100E6930 for ; Fri, 18 Dec 2020 22:05:06 +0000 (UTC) X-HE-Tag: guide02_0e11bce27440 X-Filterd-Recvd-Size: 19297 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:05 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:03 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329104; bh=HeMNj/mDEtimD9mVrilCzSQB2upiy3W+vAHdTEgIQRQ=; h=From:To:Subject:In-Reply-To:From; b=q46iUlAzqxFb3EIr0U2adVm0cG9UN6rXIX3qcvT1+3conTrk/aptvBMXPJSoU88MG 0hUzaMmo/y7gI296dk9rX/QvXQhyN4t0jQf4VulsB6vKBF+EJ46J0/LSTxiMuyj/JJ U+qqOSYWSWOQGX8VkZhfwLxxdbX6S55Voc0t/Unw= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, Vincenzo.Frascino@arm.com, will.deacon@arm.com Subject: [patch 66/78] kasan, mm: check kasan_enabled in annotations Message-ID: <20201218220503.bg9ovjDIo%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, mm: check kasan_enabled in annotations Declare the kasan_enabled static key in include/linux/kasan.h and in include/linux/mm.h and check it in all kasan annotations. This allows to avoid any slowdown caused by function calls when kasan_enabled is disabled. 
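The diff below applies this pattern to every annotation; condensed into a standalone sketch (names taken from the patch, stubs simplified for illustration, not a complete header), each public kasan_*() call becomes an __always_inline wrapper that tests the kasan_flag_enabled static key before calling the out-of-line __kasan_*() implementation:

#include <linux/jump_label.h>
#include <linux/types.h>

DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);

/* Out-of-line slow path, only reached when the static key is enabled. */
void __kasan_unpoison_range(const void *addr, size_t size);

static __always_inline bool kasan_enabled(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}

static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
{
	/* With jump labels this is a patched NOP/branch, not a memory load. */
	if (kasan_enabled())
		__kasan_unpoison_range(addr, size);
}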
Link: https://lkml.kernel.org/r/9f90e3c0aa840dbb4833367c2335193299f69023.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Reviewed-by: Dmitry Vyukov Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 217 ++++++++++++++++++++++++++++++---------- include/linux/mm.h | 22 ++-- mm/kasan/common.c | 56 +++++----- 3 files changed, 212 insertions(+), 83 deletions(-) --- a/include/linux/kasan.h~kasan-mm-check-kasan_enabled-in-annotations +++ a/include/linux/kasan.h @@ -2,6 +2,7 @@ #ifndef _LINUX_KASAN_H #define _LINUX_KASAN_H +#include #include struct kmem_cache; @@ -75,54 +76,176 @@ static inline void kasan_disable_current #ifdef CONFIG_KASAN -void kasan_unpoison_range(const void *address, size_t size); - -void kasan_alloc_pages(struct page *page, unsigned int order); -void kasan_free_pages(struct page *page, unsigned int order); - -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, - slab_flags_t *flags); - -void kasan_poison_slab(struct page *page); -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); -void kasan_poison_object_data(struct kmem_cache *cache, void *object); -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, - const void *object); - -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, - gfp_t flags); -void kasan_kfree_large(void *ptr, unsigned long ip); -void kasan_poison_kfree(void *ptr, unsigned long ip); -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object, - size_t size, gfp_t flags); -void * __must_check kasan_krealloc(const void *object, size_t new_size, - gfp_t flags); - -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object, - gfp_t flags); -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); - struct kasan_cache { int alloc_meta_offset; int free_meta_offset; }; -size_t kasan_metadata_size(struct kmem_cache *cache); +#ifdef CONFIG_KASAN_HW_TAGS +DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled); +static __always_inline bool kasan_enabled(void) +{ + return static_branch_likely(&kasan_flag_enabled); +} +#else +static inline bool kasan_enabled(void) +{ + return true; +} +#endif + +void __kasan_unpoison_range(const void *addr, size_t size); +static __always_inline void kasan_unpoison_range(const void *addr, size_t size) +{ + if (kasan_enabled()) + __kasan_unpoison_range(addr, size); +} + +void __kasan_alloc_pages(struct page *page, unsigned int order); +static __always_inline void kasan_alloc_pages(struct page *page, + unsigned int order) +{ + if (kasan_enabled()) + __kasan_alloc_pages(page, order); +} + +void __kasan_free_pages(struct page *page, unsigned int order); +static __always_inline void kasan_free_pages(struct page *page, + unsigned int order) +{ + if (kasan_enabled()) + __kasan_free_pages(page, order); +} + +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size, + slab_flags_t *flags); +static __always_inline void kasan_cache_create(struct kmem_cache *cache, + unsigned int *size, slab_flags_t *flags) +{ + if (kasan_enabled()) + __kasan_cache_create(cache, size, flags); +} + +size_t __kasan_metadata_size(struct kmem_cache *cache); 
+static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache) +{ + if (kasan_enabled()) + return __kasan_metadata_size(cache); + return 0; +} + +void __kasan_poison_slab(struct page *page); +static __always_inline void kasan_poison_slab(struct page *page) +{ + if (kasan_enabled()) + __kasan_poison_slab(page); +} + +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object); +static __always_inline void kasan_unpoison_object_data(struct kmem_cache *cache, + void *object) +{ + if (kasan_enabled()) + __kasan_unpoison_object_data(cache, object); +} + +void __kasan_poison_object_data(struct kmem_cache *cache, void *object); +static __always_inline void kasan_poison_object_data(struct kmem_cache *cache, + void *object) +{ + if (kasan_enabled()) + __kasan_poison_object_data(cache, object); +} + +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache, + const void *object); +static __always_inline void * __must_check kasan_init_slab_obj( + struct kmem_cache *cache, const void *object) +{ + if (kasan_enabled()) + return __kasan_init_slab_obj(cache, object); + return (void *)object; +} + +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); +static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object, + unsigned long ip) +{ + if (kasan_enabled()) + return __kasan_slab_free(s, object, ip); + return false; +} + +void * __must_check __kasan_slab_alloc(struct kmem_cache *s, + void *object, gfp_t flags); +static __always_inline void * __must_check kasan_slab_alloc( + struct kmem_cache *s, void *object, gfp_t flags) +{ + if (kasan_enabled()) + return __kasan_slab_alloc(s, object, flags); + return object; +} + +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object, + size_t size, gfp_t flags); +static __always_inline void * __must_check kasan_kmalloc(struct kmem_cache *s, + const void *object, size_t size, gfp_t flags) +{ + if (kasan_enabled()) + return __kasan_kmalloc(s, object, size, flags); + return (void *)object; +} + +void * __must_check __kasan_kmalloc_large(const void *ptr, + size_t size, gfp_t flags); +static __always_inline void * __must_check kasan_kmalloc_large(const void *ptr, + size_t size, gfp_t flags) +{ + if (kasan_enabled()) + return __kasan_kmalloc_large(ptr, size, flags); + return (void *)ptr; +} + +void * __must_check __kasan_krealloc(const void *object, + size_t new_size, gfp_t flags); +static __always_inline void * __must_check kasan_krealloc(const void *object, + size_t new_size, gfp_t flags) +{ + if (kasan_enabled()) + return __kasan_krealloc(object, new_size, flags); + return (void *)object; +} + +void __kasan_poison_kfree(void *ptr, unsigned long ip); +static __always_inline void kasan_poison_kfree(void *ptr, unsigned long ip) +{ + if (kasan_enabled()) + __kasan_poison_kfree(ptr, ip); +} + +void __kasan_kfree_large(void *ptr, unsigned long ip); +static __always_inline void kasan_kfree_large(void *ptr, unsigned long ip) +{ + if (kasan_enabled()) + __kasan_kfree_large(ptr, ip); +} bool kasan_save_enable_multi_shot(void); void kasan_restore_multi_shot(bool enabled); #else /* CONFIG_KASAN */ +static inline bool kasan_enabled(void) +{ + return false; +} static inline void kasan_unpoison_range(const void *address, size_t size) {} - static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} static inline void kasan_free_pages(struct page *page, unsigned int order) {} - static inline void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, 
slab_flags_t *flags) {} - +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } static inline void kasan_poison_slab(struct page *page) {} static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) {} @@ -133,36 +256,32 @@ static inline void *kasan_init_slab_obj( { return (void *)object; } - -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, + unsigned long ip) { - return ptr; + return false; +} +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ + return object; } -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size, gfp_t flags) { return (void *)object; } +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + return (void *)ptr; +} static inline void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags) { return (void *)object; } - -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, - gfp_t flags) -{ - return object; -} -static inline bool kasan_slab_free(struct kmem_cache *s, void *object, - unsigned long ip) -{ - return false; -} - -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} #endif /* CONFIG_KASAN */ --- a/include/linux/mm.h~kasan-mm-check-kasan_enabled-in-annotations +++ a/include/linux/mm.h @@ -31,6 +31,7 @@ #include #include #include +#include struct mempolicy; struct anon_vma; @@ -1422,22 +1423,30 @@ static inline bool cpupid_match_pid(stru #endif /* CONFIG_NUMA_BALANCING */ #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS) + static inline u8 page_kasan_tag(const struct page *page) { - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK; + if (kasan_enabled()) + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK; + return 0xff; } static inline void page_kasan_tag_set(struct page *page, u8 tag) { - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT); - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT; + if (kasan_enabled()) { + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT); + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT; + } } static inline void page_kasan_tag_reset(struct page *page) { - page_kasan_tag_set(page, 0xff); + if (kasan_enabled()) + page_kasan_tag_set(page, 0xff); } -#else + +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */ + static inline u8 page_kasan_tag(const struct page *page) { return 0xff; @@ -1445,7 +1454,8 @@ static inline u8 page_kasan_tag(const st static inline void page_kasan_tag_set(struct page *page, u8 tag) { } static inline void page_kasan_tag_reset(struct page *page) { } -#endif + +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */ static inline struct zone *page_zone(const struct page *page) { --- a/mm/kasan/common.c~kasan-mm-check-kasan_enabled-in-annotations +++ a/mm/kasan/common.c @@ -58,7 +58,7 @@ void kasan_disable_current(void) } #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ -void kasan_unpoison_range(const void *address, size_t size) +void __kasan_unpoison_range(const void *address, size_t size) { unpoison_range(address, size); } @@ -86,7 +86,7 @@ 
asmlinkage void kasan_unpoison_task_stac } #endif /* CONFIG_KASAN_STACK */ -void kasan_alloc_pages(struct page *page, unsigned int order) +void __kasan_alloc_pages(struct page *page, unsigned int order) { u8 tag; unsigned long i; @@ -100,7 +100,7 @@ void kasan_alloc_pages(struct page *page unpoison_range(page_address(page), PAGE_SIZE << order); } -void kasan_free_pages(struct page *page, unsigned int order) +void __kasan_free_pages(struct page *page, unsigned int order) { if (likely(!PageHighMem(page))) poison_range(page_address(page), @@ -127,8 +127,8 @@ static inline unsigned int optimal_redzo object_size <= (1 << 16) - 1024 ? 1024 : 2048; } -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, - slab_flags_t *flags) +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size, + slab_flags_t *flags) { unsigned int orig_size = *size; unsigned int redzone_size; @@ -173,7 +173,7 @@ void kasan_cache_create(struct kmem_cach *flags |= SLAB_KASAN; } -size_t kasan_metadata_size(struct kmem_cache *cache) +size_t __kasan_metadata_size(struct kmem_cache *cache) { if (!kasan_stack_collection_enabled()) return 0; @@ -196,7 +196,7 @@ struct kasan_free_meta *kasan_get_free_m return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset; } -void kasan_poison_slab(struct page *page) +void __kasan_poison_slab(struct page *page) { unsigned long i; @@ -206,12 +206,12 @@ void kasan_poison_slab(struct page *page KASAN_KMALLOC_REDZONE); } -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object) { unpoison_range(object, cache->object_size); } -void kasan_poison_object_data(struct kmem_cache *cache, void *object) +void __kasan_poison_object_data(struct kmem_cache *cache, void *object) { poison_range(object, round_up(cache->object_size, KASAN_GRANULE_SIZE), @@ -264,7 +264,7 @@ static u8 assign_tag(struct kmem_cache * #endif } -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache, const void *object) { struct kasan_alloc_meta *alloc_meta; @@ -283,7 +283,7 @@ void * __must_check kasan_init_slab_obj( return (void *)object; } -static bool __kasan_slab_free(struct kmem_cache *cache, void *object, +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip, bool quarantine) { u8 tag; @@ -326,9 +326,9 @@ static bool __kasan_slab_free(struct kme return IS_ENABLED(CONFIG_KASAN_GENERIC); } -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) { - return __kasan_slab_free(cache, object, ip, true); + return ____kasan_slab_free(cache, object, ip, true); } static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags) @@ -336,7 +336,7 @@ static void set_alloc_info(struct kmem_c kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); } -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object, +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, gfp_t flags, bool keep_tag) { unsigned long redzone_start; @@ -368,20 +368,20 @@ static void *__kasan_kmalloc(struct kmem return set_tag(object, tag); } -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object, - gfp_t flags) +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache, + void *object, gfp_t flags) { - return 
__kasan_kmalloc(cache, object, cache->object_size, flags, false); + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false); } -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, - size_t size, gfp_t flags) +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object, + size_t size, gfp_t flags) { - return __kasan_kmalloc(cache, object, size, flags, true); + return ____kasan_kmalloc(cache, object, size, flags, true); } -EXPORT_SYMBOL(kasan_kmalloc); +EXPORT_SYMBOL(__kasan_kmalloc); -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) { struct page *page; @@ -406,7 +406,7 @@ void * __must_check kasan_kmalloc_large( return (void *)ptr; } -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags) +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags) { struct page *page; @@ -416,13 +416,13 @@ void * __must_check kasan_krealloc(const page = virt_to_head_page(object); if (unlikely(!PageSlab(page))) - return kasan_kmalloc_large(object, size, flags); + return __kasan_kmalloc_large(object, size, flags); else - return __kasan_kmalloc(page->slab_cache, object, size, + return ____kasan_kmalloc(page->slab_cache, object, size, flags, true); } -void kasan_poison_kfree(void *ptr, unsigned long ip) +void __kasan_poison_kfree(void *ptr, unsigned long ip) { struct page *page; @@ -435,11 +435,11 @@ void kasan_poison_kfree(void *ptr, unsig } poison_range(ptr, page_size(page), KASAN_FREE_PAGE); } else { - __kasan_slab_free(page->slab_cache, ptr, ip, false); + ____kasan_slab_free(page->slab_cache, ptr, ip, false); } } -void kasan_kfree_large(void *ptr, unsigned long ip) +void __kasan_kfree_large(void *ptr, unsigned long ip) { if (ptr != page_address(virt_to_head_page(ptr))) kasan_report_invalid_free(ptr, ip); From patchwork Fri Dec 18 22:05:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983157 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 691FEC4361B for ; Fri, 18 Dec 2020 22:05:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 100ED23BAD for ; Fri, 18 Dec 2020 22:05:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 100ED23BAD Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9A0286B00A5; Fri, 18 Dec 2020 17:05:09 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 950826B00A6; Fri, 18 Dec 2020 17:05:09 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 866E56B00A7; Fri, 18 Dec 2020 17:05:09 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com 
(smtprelay0035.hostedemail.com [216.40.44.35]) by kanga.kvack.org (Postfix) with ESMTP id 6AB406B00A5 for ; Fri, 18 Dec 2020 17:05:09 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 3835A181D0288 for ; Fri, 18 Dec 2020 22:05:09 +0000 (UTC) X-FDA: 77607784338.04.swim72_0012ae827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 1CD5F801BF03 for ; Fri, 18 Dec 2020 22:05:09 +0000 (UTC) X-HE-Tag: swim72_0012ae827440 X-Filterd-Recvd-Size: 6301 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:08 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:06 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329107; bh=9TsHQ6uDKwIEBcEbiKo3oU2d3k0N/gw6C3j6ssode7M=; h=From:To:Subject:In-Reply-To:From; b=BKlb4Kt3rU4Ltg5MmBjvuVN+vJrGVHIuXOPLY7jo/5yX7CzETvoTLuCmncf4xDsku ahhv2euNqTfYYkDM86YK3O0b7ULKqSrDsK/JDDUTpU95e92tzwTnlteIlqLQL50wg3 LWdnebOVPjqGka72s0Y/y0DCyJxVrTGx81SRUc/0= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 67/78] kasan, mm: rename kasan_poison_kfree Message-ID: <20201218220506.CAXpvj66H%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, mm: rename kasan_poison_kfree Rename kasan_poison_kfree() to kasan_slab_free_mempool() as it better reflects what this annotation does. Also add a comment that explains the PageSlab() check. No functional changes. 
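For context, a hedged illustration (not code from this patch) of why the renamed kasan_slab_free_mempool() in the diff below must handle !PageSlab() pages: a kmalloc-backed mempool whose elements are larger than the biggest kmalloc slab cache is silently served by the page allocator. The pool setup and element size here are made up for the example:

#include <linux/mempool.h>
#include <linux/gfp.h>
#include <linux/errno.h>

static mempool_t *example_pool;

static int example_init(void)
{
	/*
	 * 2 MiB elements: kmalloc() falls back to the page allocator for
	 * this size, so freed elements are !PageSlab() despite coming
	 * from a "kmalloc" pool.
	 */
	example_pool = mempool_create_kmalloc_pool(4, 2 << 20);
	return example_pool ? 0 : -ENOMEM;
}

static void example_use(void)
{
	void *elem = mempool_alloc(example_pool, GFP_KERNEL);

	if (elem)
		/* Poisoned via kasan_slab_free_mempool() when kept by the pool. */
		mempool_free(elem, example_pool);
}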
Link: https://lkml.kernel.org/r/141675fb493555e984c5dca555e9d9f768c7bbaa.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I5026f87364e556b506ef1baee725144bb04b8810 Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 16 ++++++++-------- mm/kasan/common.c | 40 +++++++++++++++++++++++----------------- mm/mempool.c | 2 +- 3 files changed, 32 insertions(+), 26 deletions(-) --- a/include/linux/kasan.h~kasan-mm-rename-kasan_poison_kfree +++ a/include/linux/kasan.h @@ -176,6 +176,13 @@ static __always_inline bool kasan_slab_f return false; } +void __kasan_slab_free_mempool(void *ptr, unsigned long ip); +static __always_inline void kasan_slab_free_mempool(void *ptr, unsigned long ip) +{ + if (kasan_enabled()) + __kasan_slab_free_mempool(ptr, ip); +} + void * __must_check __kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); static __always_inline void * __must_check kasan_slab_alloc( @@ -216,13 +223,6 @@ static __always_inline void * __must_che return (void *)object; } -void __kasan_poison_kfree(void *ptr, unsigned long ip); -static __always_inline void kasan_poison_kfree(void *ptr, unsigned long ip) -{ - if (kasan_enabled()) - __kasan_poison_kfree(ptr, ip); -} - void __kasan_kfree_large(void *ptr, unsigned long ip); static __always_inline void kasan_kfree_large(void *ptr, unsigned long ip) { @@ -261,6 +261,7 @@ static inline bool kasan_slab_free(struc { return false; } +static inline void kasan_slab_free_mempool(void *ptr, unsigned long ip) {} static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) { @@ -280,7 +281,6 @@ static inline void *kasan_krealloc(const { return (void *)object; } -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} #endif /* CONFIG_KASAN */ --- a/mm/kasan/common.c~kasan-mm-rename-kasan_poison_kfree +++ a/mm/kasan/common.c @@ -331,6 +331,29 @@ bool __kasan_slab_free(struct kmem_cache return ____kasan_slab_free(cache, object, ip, true); } +void __kasan_slab_free_mempool(void *ptr, unsigned long ip) +{ + struct page *page; + + page = virt_to_head_page(ptr); + + /* + * Even though this function is only called for kmem_cache_alloc and + * kmalloc backed mempool allocations, those allocations can still be + * !PageSlab() when the size provided to kmalloc is larger than + * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc. 
+ */ + if (unlikely(!PageSlab(page))) { + if (ptr != page_address(page)) { + kasan_report_invalid_free(ptr, ip); + return; + } + poison_range(ptr, page_size(page), KASAN_FREE_PAGE); + } else { + ____kasan_slab_free(page->slab_cache, ptr, ip, false); + } +} + static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags) { kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); @@ -422,23 +445,6 @@ void * __must_check __kasan_krealloc(con flags, true); } -void __kasan_poison_kfree(void *ptr, unsigned long ip) -{ - struct page *page; - - page = virt_to_head_page(ptr); - - if (unlikely(!PageSlab(page))) { - if (ptr != page_address(page)) { - kasan_report_invalid_free(ptr, ip); - return; - } - poison_range(ptr, page_size(page), KASAN_FREE_PAGE); - } else { - ____kasan_slab_free(page->slab_cache, ptr, ip, false); - } -} - void __kasan_kfree_large(void *ptr, unsigned long ip) { if (ptr != page_address(virt_to_head_page(ptr))) --- a/mm/mempool.c~kasan-mm-rename-kasan_poison_kfree +++ a/mm/mempool.c @@ -104,7 +104,7 @@ static inline void poison_element(mempoo static __always_inline void kasan_poison_element(mempool_t *pool, void *element) { if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) - kasan_poison_kfree(element, _RET_IP_); + kasan_slab_free_mempool(element, _RET_IP_); else if (pool->alloc == mempool_alloc_pages) kasan_free_pages(element, (unsigned long)pool->pool_data); } From patchwork Fri Dec 18 22:05:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983159 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BA28C3526B for ; Fri, 18 Dec 2020 22:05:14 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AC4C523B7E for ; Fri, 18 Dec 2020 22:05:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AC4C523B7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4966E6B0070; Fri, 18 Dec 2020 17:05:13 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 448CD6B00A7; Fri, 18 Dec 2020 17:05:13 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 383BE6B00A8; Fri, 18 Dec 2020 17:05:13 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0133.hostedemail.com [216.40.44.133]) by kanga.kvack.org (Postfix) with ESMTP id 1F67A6B0070 for ; Fri, 18 Dec 2020 17:05:13 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id D71E8181BECD9 for ; Fri, 18 Dec 2020 22:05:12 +0000 (UTC) X-FDA: 77607784464.04.magic50_06100ca27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com 
(Postfix) with ESMTP id B818E801D254 for ; Fri, 18 Dec 2020 22:05:12 +0000 (UTC) X-HE-Tag: magic50_06100ca27440 X-Filterd-Recvd-Size: 3777 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:12 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:10 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329111; bh=DHvmhn7Z9vTJP2KW1DOx07RPN/cMcwd6mWA3WswqlMQ=; h=From:To:Subject:In-Reply-To:From; b=GMFDQOkosCB9Wvu/VEi/44DJZRyXSi2pHf4YN8KiQhrJSv8gO7IYukf7NNmoZahql DBikFuGQlR9SG5NLnc0lrcRjvrcl2wfJYC7y13FOEnBORQzu7iJMy/S0EeJsQf2pBP t/I14R+i5IdDPIKQ5Fp8ilAO3uVqaSxbxhHvzR5Y= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 68/78] kasan: don't round_up too much Message-ID: <20201218220510.q8idAv5xN%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: don't round_up too much For hardware tag-based mode kasan_poison_memory() already rounds up the size. Do the same for software modes and remove round_up() from the common code. Link: https://lkml.kernel.org/r/47b232474f1f89dc072aeda0fa58daa6efade377.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ib397128fac6eba874008662b4964d65352db4aa4 Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 8 ++------ mm/kasan/shadow.c | 1 + 2 files changed, 3 insertions(+), 6 deletions(-) --- a/mm/kasan/common.c~kasan-dont-round_up-too-much +++ a/mm/kasan/common.c @@ -213,9 +213,7 @@ void __kasan_unpoison_object_data(struct void __kasan_poison_object_data(struct kmem_cache *cache, void *object) { - poison_range(object, - round_up(cache->object_size, KASAN_GRANULE_SIZE), - KASAN_KMALLOC_REDZONE); + poison_range(object, cache->object_size, KASAN_KMALLOC_REDZONE); } /* @@ -288,7 +286,6 @@ static bool ____kasan_slab_free(struct k { u8 tag; void *tagged_object; - unsigned long rounded_up_size; tag = get_tag(object); tagged_object = object; @@ -309,8 +306,7 @@ static bool ____kasan_slab_free(struct k return true; } - rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE); - poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE); + poison_range(object, cache->object_size, KASAN_KMALLOC_FREE); if (!kasan_stack_collection_enabled()) return false; --- a/mm/kasan/shadow.c~kasan-dont-round_up-too-much +++ a/mm/kasan/shadow.c @@ -83,6 +83,7 @@ void poison_range(const void *address, s * addresses to this function. */ address = kasan_reset_tag(address); + size = round_up(size, KASAN_GRANULE_SIZE); /* Skip KFENCE memory if called explicitly outside of sl*b. 
*/ if (is_kfence_address(address)) From patchwork Fri Dec 18 22:05:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983161 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DA7DC4361B for ; Fri, 18 Dec 2020 22:05:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0DB1223B5D for ; Fri, 18 Dec 2020 22:05:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0DB1223B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9A05D6B00A7; Fri, 18 Dec 2020 17:05:16 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 976486B00A8; Fri, 18 Dec 2020 17:05:16 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 88D626B00A9; Fri, 18 Dec 2020 17:05:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0241.hostedemail.com [216.40.44.241]) by kanga.kvack.org (Postfix) with ESMTP id 70BC76B00A7 for ; Fri, 18 Dec 2020 17:05:16 -0500 (EST) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 3CF26181C5568 for ; Fri, 18 Dec 2020 22:05:16 +0000 (UTC) X-FDA: 77607784632.13.song34_3005e7527440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin13.hostedemail.com (Postfix) with ESMTP id 1D8C8181BD7A9 for ; Fri, 18 Dec 2020 22:05:16 +0000 (UTC) X-HE-Tag: song34_3005e7527440 X-Filterd-Recvd-Size: 4091 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:15 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:13 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329114; bh=ly+Js3FOuGOqMb/jC+AuE0wJvoWr61MuGVX2dNpum2g=; h=From:To:Subject:In-Reply-To:From; b=UMzW1yO789T5sQvneFYxPZIAug9YmN+9kI/XAZLiC2cAaeHhP5jl78l1bxzAyNfZ/ kk4m4hAeXTn2eEr9YbIEivMDnmhfn+0+SWjKOcI4MG+GEz3y4gB0RnqgSZIgqeboFX gYtDEjPUo1yXJxsiE4MFB3i+3hMbpbiAcPEtRW2A= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 69/78] kasan: simplify assign_tag and set_tag calls Message-ID: <20201218220513.PdVgvgvya%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: 
owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: simplify assign_tag and set_tag calls set_tag() already ignores the tag for the generic mode, so just call it as is. Add a check for the generic mode to assign_tag(), and simplify its call in ____kasan_kmalloc(). Link: https://lkml.kernel.org/r/121eeab245f98555862b289d2ba9269c868fbbcf.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I18905ca78fb4a3d60e1a34a4ca00247272480438 Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) --- a/mm/kasan/common.c~kasan-simplify-assign_tag-and-set_tag-calls +++ a/mm/kasan/common.c @@ -233,6 +233,9 @@ void __kasan_poison_object_data(struct k static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init, bool keep_tag) { + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + return 0xff; + /* * 1. When an object is kmalloc()'ed, two hooks are called: * kasan_slab_alloc() and kasan_kmalloc(). We assign the @@ -275,8 +278,8 @@ void * __must_check __kasan_init_slab_ob __memset(alloc_meta, 0, sizeof(*alloc_meta)); } - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) - object = set_tag(object, assign_tag(cache, object, true, false)); + /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */ + object = set_tag(object, assign_tag(cache, object, true, false)); return (void *)object; } @@ -360,7 +363,7 @@ static void *____kasan_kmalloc(struct km { unsigned long redzone_start; unsigned long redzone_end; - u8 tag = 0xff; + u8 tag; if (gfpflags_allow_blocking(flags)) quarantine_reduce(); @@ -372,9 +375,7 @@ static void *____kasan_kmalloc(struct km KASAN_GRANULE_SIZE); redzone_end = round_up((unsigned long)object + cache->object_size, KASAN_GRANULE_SIZE); - - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS)) - tag = assign_tag(cache, object, false, keep_tag); + tag = assign_tag(cache, object, false, keep_tag); /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */ unpoison_range(set_tag(object, tag), size); From patchwork Fri Dec 18 22:05:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C5DFC4361B for ; Fri, 18 Dec 2020 22:05:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B96C623B5D for ; Fri, 18 Dec 2020 22:05:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B96C623B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: 
mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5AA0A6B00A9; Fri, 18 Dec 2020 17:05:20 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 57E576B00AA; Fri, 18 Dec 2020 17:05:20 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 495546B00AB; Fri, 18 Dec 2020 17:05:20 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0047.hostedemail.com [216.40.44.47]) by kanga.kvack.org (Postfix) with ESMTP id 2D46B6B00A9 for ; Fri, 18 Dec 2020 17:05:20 -0500 (EST) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id EBA1252AD for ; Fri, 18 Dec 2020 22:05:19 +0000 (UTC) X-FDA: 77607784758.30.screw11_2a1133c27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin30.hostedemail.com (Postfix) with ESMTP id A289D180D76A2 for ; Fri, 18 Dec 2020 22:05:19 +0000 (UTC) X-HE-Tag: screw11_2a1133c27440 X-Filterd-Recvd-Size: 2811 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf41.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:19 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:17 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329118; bh=JM8QWrDg7sBMI5LsEMe0za0IAKlgysbCmniUFjiFTvI=; h=From:To:Subject:In-Reply-To:From; b=q9ioADmNzqBwQDaJU/ofokFcjyfo7Xe+TfB4cEARK6L1AXzhNDASU7PvPe7r9lu0n 4SxX2zd8ywq+3q7vVYFQ93i8051KTfW3wjZYBUZ/nAYrA2kBoBEZZ/fw5E3jW9MQOL jNJ0KUNIjhQjHouhzy9QJqyDQwUekqSXNd3G27FQ= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 70/78] kasan: clarify comment in __kasan_kfree_large Message-ID: <20201218220517.7DMt2WUs1%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: clarify comment in __kasan_kfree_large Currently it says that the memory gets poisoned by page_alloc code. Clarify this by mentioning the specific callback that poisons the memory. Link: https://lkml.kernel.org/r/1c8380fe0332a3bcc720fe29f1e0bef2e2974416.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I1334dffb69b87d7986fab88a1a039cc3ea764725 Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/kasan/common.c~kasan-clarify-comment-in-__kasan_kfree_large +++ a/mm/kasan/common.c @@ -446,5 +446,5 @@ void __kasan_kfree_large(void *ptr, unsi { if (ptr != page_address(virt_to_head_page(ptr))) kasan_report_invalid_free(ptr, ip); - /* The object will be poisoned by page_alloc. 
*/ + /* The object will be poisoned by kasan_free_pages(). */ } From patchwork Fri Dec 18 22:05:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D615C4361B for ; Fri, 18 Dec 2020 22:05:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B047123B5D for ; Fri, 18 Dec 2020 22:05:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B047123B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5331E6B0081; Fri, 18 Dec 2020 17:05:25 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5090D6B00AA; Fri, 18 Dec 2020 17:05:25 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3F9496B00AB; Fri, 18 Dec 2020 17:05:25 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0138.hostedemail.com [216.40.44.138]) by kanga.kvack.org (Postfix) with ESMTP id 216D86B00AA for ; Fri, 18 Dec 2020 17:05:25 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id D9EC6181CBDDD for ; Fri, 18 Dec 2020 22:05:24 +0000 (UTC) X-FDA: 77607784968.05.north59_300bf3a27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin05.hostedemail.com (Postfix) with ESMTP id BCC00181C5568 for ; Fri, 18 Dec 2020 22:05:24 +0000 (UTC) X-HE-Tag: north59_300bf3a27440 X-Filterd-Recvd-Size: 18666 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf16.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:23 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:20 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329123; bh=32xp9Is504o+Td5UvjodxrqWKxOwW6xqTveoBkAomMM=; h=From:To:Subject:In-Reply-To:From; b=w+cbbTptlaxQeA0BQn9SVEYcsVAb7IEMactX1NIrkCE2Runbg7UNCs5Yrvdr+Iu8g VPTD6D3IzEX5eV3ySwUqrt8/X0d8N9hGqyPX8tlfONEmpDWeiC6GC+T98PmtGyL3rd NDnJb+WGUiQMVjPSk6N0x0ZPq9k1NTss4z5Toda8= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, Vincenzo.Frascino@arm.com, will.deacon@arm.com Subject: [patch 71/78] kasan: sanitize objects when metadata doesn't fit Message-ID: <20201218220520.v3sUnHVDg%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, 
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: sanitize objects when metadata doesn't fit KASAN marks caches that are sanitized with the SLAB_KASAN cache flag. Currently if the metadata that is appended after the object (stores e.g. stack trace ids) doesn't fit into KMALLOC_MAX_SIZE (can only happen with SLAB, see the comment in the patch), KASAN turns off sanitization completely. With this change sanitization of the object data is always enabled. However, the metadata is only stored when it fits. Instead of checking for SLAB_KASAN flag across the code to find out whether the metadata is there, use cache->kasan_info.alloc/free_meta_offset. As 0 can be a valid value for free_meta_offset, introduce KASAN_NO_FREE_META as an indicator that the free metadata is missing. Without this change all sanitized KASAN objects would be put into quarantine with generic KASAN. With this change, only the objects that have metadata (i.e. when it fits) are put into quarantine, the rest is freed right away. Along the way rework __kasan_cache_create() and add clarifying comments. Link: https://lkml.kernel.org/r/aee34b87a5e4afe586c2ac6a0b32db8dc4dcc2dc.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Icd947e2bea054cb5cfbdc6cf6652227d97032dcb Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- mm/kasan/common.c | 116 ++++++++++++++++++++++-------------- mm/kasan/generic.c | 9 +- mm/kasan/hw_tags.c | 6 + mm/kasan/kasan.h | 17 ++++- mm/kasan/quarantine.c | 18 ++++- mm/kasan/report.c | 43 +++++++------ mm/kasan/report_sw_tags.c | 9 +- mm/kasan/sw_tags.c | 4 + 8 files changed, 147 insertions(+), 75 deletions(-) --- a/mm/kasan/common.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/common.c @@ -114,9 +114,6 @@ void __kasan_free_pages(struct page *pag */ static inline unsigned int optimal_redzone(unsigned int object_size) { - if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) - return 0; - return object_size <= 64 - 16 ? 16 : object_size <= 128 - 32 ? 32 : @@ -130,47 +127,77 @@ static inline unsigned int optimal_redzo void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) { - unsigned int orig_size = *size; - unsigned int redzone_size; - int redzone_adjust; + unsigned int ok_size; + unsigned int optimal_size; + + /* + * SLAB_KASAN is used to mark caches as ones that are sanitized by + * KASAN. Currently this flag is used in two places: + * 1. In slab_ksize() when calculating the size of the accessible + * memory within the object. + * 2. In slab_common.c to prevent merging of sanitized caches. + */ + *flags |= SLAB_KASAN; - if (!kasan_stack_collection_enabled()) { - *flags |= SLAB_KASAN; + if (!kasan_stack_collection_enabled()) return; - } - /* Add alloc meta. */ + ok_size = *size; + + /* Add alloc meta into redzone. */ cache->kasan_info.alloc_meta_offset = *size; *size += sizeof(struct kasan_alloc_meta); - /* Add free meta.
*/ - if (IS_ENABLED(CONFIG_KASAN_GENERIC) && - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || - cache->object_size < sizeof(struct kasan_free_meta))) { - cache->kasan_info.free_meta_offset = *size; - *size += sizeof(struct kasan_free_meta); + /* + * If alloc meta doesn't fit, don't add it. + * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal + * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for + * larger sizes. + */ + if (*size > KMALLOC_MAX_SIZE) { + cache->kasan_info.alloc_meta_offset = 0; + *size = ok_size; + /* Continue, since free meta might still fit. */ } - redzone_size = optimal_redzone(cache->object_size); - redzone_adjust = redzone_size - (*size - cache->object_size); - if (redzone_adjust > 0) - *size += redzone_adjust; - - *size = min_t(unsigned int, KMALLOC_MAX_SIZE, - max(*size, cache->object_size + redzone_size)); + /* Only the generic mode uses free meta or flexible redzones. */ + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { + cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META; + return; + } /* - * If the metadata doesn't fit, don't enable KASAN at all. + * Add free meta into redzone when it's not possible to store + * it in the object. This is the case when: + * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can + * be touched after it was freed, or + * 2. Object has a constructor, which means it's expected to + * retain its content until the next allocation, or + * 3. Object is too small. + * Otherwise cache->kasan_info.free_meta_offset = 0 is implied. */ - if (*size <= cache->kasan_info.alloc_meta_offset || - *size <= cache->kasan_info.free_meta_offset) { - cache->kasan_info.alloc_meta_offset = 0; - cache->kasan_info.free_meta_offset = 0; - *size = orig_size; - return; + if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor || + cache->object_size < sizeof(struct kasan_free_meta)) { + ok_size = *size; + + cache->kasan_info.free_meta_offset = *size; + *size += sizeof(struct kasan_free_meta); + + /* If free meta doesn't fit, don't add it. */ + if (*size > KMALLOC_MAX_SIZE) { + cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META; + *size = ok_size; + } } - *flags |= SLAB_KASAN; + /* Calculate size with optimal redzone. */ + optimal_size = cache->object_size + optimal_redzone(cache->object_size); + /* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */ + if (optimal_size > KMALLOC_MAX_SIZE) + optimal_size = KMALLOC_MAX_SIZE; + /* Use optimal size if the size with added metas is not large enough. 
*/ + if (*size < optimal_size) + *size = optimal_size; } size_t __kasan_metadata_size(struct kmem_cache *cache) @@ -186,15 +213,21 @@ size_t __kasan_metadata_size(struct kmem struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache, const void *object) { + if (!cache->kasan_info.alloc_meta_offset) + return NULL; return kasan_reset_tag(object) + cache->kasan_info.alloc_meta_offset; } +#ifdef CONFIG_KASAN_GENERIC struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, const void *object) { BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); + if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META) + return NULL; return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset; } +#endif void __kasan_poison_slab(struct page *page) { @@ -271,11 +304,9 @@ void * __must_check __kasan_init_slab_ob struct kasan_alloc_meta *alloc_meta; if (kasan_stack_collection_enabled()) { - if (!(cache->flags & SLAB_KASAN)) - return (void *)object; - alloc_meta = kasan_get_alloc_meta(cache, object); - __memset(alloc_meta, 0, sizeof(*alloc_meta)); + if (alloc_meta) + __memset(alloc_meta, 0, sizeof(*alloc_meta)); } /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */ @@ -314,15 +345,12 @@ static bool ____kasan_slab_free(struct k if (!kasan_stack_collection_enabled()) return false; - if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || - unlikely(!(cache->flags & SLAB_KASAN))) + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine)) return false; kasan_set_free_info(cache, object, tag); - quarantine_put(cache, object); - - return IS_ENABLED(CONFIG_KASAN_GENERIC); + return quarantine_put(cache, object); } bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) @@ -355,7 +383,11 @@ void __kasan_slab_free_mempool(void *ptr static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags) { - kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags); + struct kasan_alloc_meta *alloc_meta; + + alloc_meta = kasan_get_alloc_meta(cache, object); + if (alloc_meta) + kasan_set_track(&alloc_meta->alloc_track, flags); } static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object, @@ -382,7 +414,7 @@ static void *____kasan_kmalloc(struct km poison_range((void *)redzone_start, redzone_end - redzone_start, KASAN_KMALLOC_REDZONE); - if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN)) + if (kasan_stack_collection_enabled()) set_alloc_info(cache, (void *)object, flags); return set_tag(object, tag); --- a/mm/kasan/generic.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/generic.c @@ -348,11 +348,11 @@ void kasan_set_free_info(struct kmem_cac struct kasan_free_meta *free_meta; free_meta = kasan_get_free_meta(cache, object); - kasan_set_track(&free_meta->free_track, GFP_NOWAIT); + if (!free_meta) + return; - /* - * the object was freed and has free track set - */ + kasan_set_track(&free_meta->free_track, GFP_NOWAIT); + /* The object was freed and has free track set. */ *(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREETRACK; } @@ -361,5 +361,6 @@ struct kasan_track *kasan_get_free_track { if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK) return NULL; + /* Free meta must be present with KASAN_KMALLOC_FREETRACK. 
*/ return &kasan_get_free_meta(cache, object)->free_track; } --- a/mm/kasan/hw_tags.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/hw_tags.c @@ -187,7 +187,8 @@ void kasan_set_free_info(struct kmem_cac struct kasan_alloc_meta *alloc_meta; alloc_meta = kasan_get_alloc_meta(cache, object); - kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT); + if (alloc_meta) + kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT); } struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, @@ -196,5 +197,8 @@ struct kasan_track *kasan_get_free_track struct kasan_alloc_meta *alloc_meta; alloc_meta = kasan_get_alloc_meta(cache, object); + if (!alloc_meta) + return NULL; + return &alloc_meta->free_track[0]; } --- a/mm/kasan/kasan.h~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/kasan.h @@ -156,20 +156,31 @@ struct kasan_alloc_meta { struct qlist_node { struct qlist_node *next; }; + +/* + * Generic mode either stores free meta in the object itself or in the redzone + * after the object. In the former case free meta offset is 0, in the latter + * case it has some sane value smaller than INT_MAX. Use INT_MAX as free meta + * offset when free meta isn't present. + */ +#define KASAN_NO_FREE_META INT_MAX + struct kasan_free_meta { +#ifdef CONFIG_KASAN_GENERIC /* This field is used while the object is in the quarantine. * Otherwise it might be used for the allocator freelist. */ struct qlist_node quarantine_link; -#ifdef CONFIG_KASAN_GENERIC struct kasan_track free_track; #endif }; struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache, const void *object); +#ifdef CONFIG_KASAN_GENERIC struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache, const void *object); +#endif #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) @@ -234,11 +245,11 @@ struct kasan_track *kasan_get_free_track #if defined(CONFIG_KASAN_GENERIC) && \ (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) -void quarantine_put(struct kmem_cache *cache, void *object); +bool quarantine_put(struct kmem_cache *cache, void *object); void quarantine_reduce(void); void quarantine_remove_cache(struct kmem_cache *cache); #else -static inline void quarantine_put(struct kmem_cache *cache, void *object) { } +static inline bool quarantine_put(struct kmem_cache *cache, void *object) { return false; } static inline void quarantine_reduce(void) { } static inline void quarantine_remove_cache(struct kmem_cache *cache) { } #endif --- a/mm/kasan/quarantine.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/quarantine.c @@ -137,7 +137,12 @@ static void qlink_free(struct qlist_node if (IS_ENABLED(CONFIG_SLAB)) local_irq_save(flags); + /* + * As the object now gets freed from the quaratine, assume that its + * free track is no longer valid. + */ *(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE; + ___cache_free(cache, object, _THIS_IP_); if (IS_ENABLED(CONFIG_SLAB)) @@ -163,7 +168,7 @@ static void qlist_free_all(struct qlist_ qlist_init(q); } -void quarantine_put(struct kmem_cache *cache, void *object) +bool quarantine_put(struct kmem_cache *cache, void *object) { unsigned long flags; struct qlist_head *q; @@ -171,6 +176,13 @@ void quarantine_put(struct kmem_cache *c struct kasan_free_meta *meta = kasan_get_free_meta(cache, object); /* + * If there's no metadata for this object, don't put it into + * quarantine. + */ + if (!meta) + return false; + + /* * Note: irq must be disabled until after we move the batch to the * global quarantine. 
Otherwise quarantine_remove_cache() can miss * some objects belonging to the cache if they are in our local temp @@ -183,7 +195,7 @@ void quarantine_put(struct kmem_cache *c q = this_cpu_ptr(&cpu_quarantine); if (q->offline) { local_irq_restore(flags); - return; + return false; } qlist_put(q, &meta->quarantine_link, cache->size); if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) { @@ -206,6 +218,8 @@ void quarantine_put(struct kmem_cache *c } local_irq_restore(flags); + + return true; } void quarantine_reduce(void) --- a/mm/kasan/report.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/report.c @@ -168,32 +168,35 @@ static void describe_object_addr(struct static void describe_object_stacks(struct kmem_cache *cache, void *object, const void *addr, u8 tag) { - struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object); - - if (cache->flags & SLAB_KASAN) { - struct kasan_track *free_track; + struct kasan_alloc_meta *alloc_meta; + struct kasan_track *free_track; + alloc_meta = kasan_get_alloc_meta(cache, object); + if (alloc_meta) { print_track(&alloc_meta->alloc_track, "Allocated"); pr_err("\n"); - free_track = kasan_get_free_track(cache, object, tag); - if (free_track) { - print_track(free_track, "Freed"); - pr_err("\n"); - } + } + + free_track = kasan_get_free_track(cache, object, tag); + if (free_track) { + print_track(free_track, "Freed"); + pr_err("\n"); + } #ifdef CONFIG_KASAN_GENERIC - if (alloc_meta->aux_stack[0]) { - pr_err("Last potentially related work creation:\n"); - print_stack(alloc_meta->aux_stack[0]); - pr_err("\n"); - } - if (alloc_meta->aux_stack[1]) { - pr_err("Second to last potentially related work creation:\n"); - print_stack(alloc_meta->aux_stack[1]); - pr_err("\n"); - } -#endif + if (!alloc_meta) + return; + if (alloc_meta->aux_stack[0]) { + pr_err("Last potentially related work creation:\n"); + print_stack(alloc_meta->aux_stack[0]); + pr_err("\n"); } + if (alloc_meta->aux_stack[1]) { + pr_err("Second to last potentially related work creation:\n"); + print_stack(alloc_meta->aux_stack[1]); + pr_err("\n"); + } +#endif } static void describe_object(struct kmem_cache *cache, void *object, --- a/mm/kasan/report_sw_tags.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/report_sw_tags.c @@ -48,9 +48,12 @@ const char *get_bug_type(struct kasan_ac object = nearest_obj(cache, page, (void *)addr); alloc_meta = kasan_get_alloc_meta(cache, object); - for (i = 0; i < KASAN_NR_FREE_STACKS; i++) - if (alloc_meta->free_pointer_tag[i] == tag) - return "use-after-free"; + if (alloc_meta) { + for (i = 0; i < KASAN_NR_FREE_STACKS; i++) { + if (alloc_meta->free_pointer_tag[i] == tag) + return "use-after-free"; + } + } return "out-of-bounds"; } --- a/mm/kasan/sw_tags.c~kasan-sanitize-objects-when-metadata-doesnt-fit +++ a/mm/kasan/sw_tags.c @@ -170,6 +170,8 @@ void kasan_set_free_info(struct kmem_cac u8 idx = 0; alloc_meta = kasan_get_alloc_meta(cache, object); + if (!alloc_meta) + return; #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY idx = alloc_meta->free_track_idx; @@ -187,6 +189,8 @@ struct kasan_track *kasan_get_free_track int i = 0; alloc_meta = kasan_get_alloc_meta(cache, object); + if (!alloc_meta) + return NULL; #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY for (i = 0; i < KASAN_NR_FREE_STACKS; i++) { From patchwork Fri Dec 18 22:05:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D206C4361B for ; Fri, 18 Dec 2020 22:05:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BC92C23B5D for ; Fri, 18 Dec 2020 22:05:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BC92C23B5D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5A80B6B00AB; Fri, 18 Dec 2020 17:05:28 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 569996B00AC; Fri, 18 Dec 2020 17:05:28 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 445D26B00AD; Fri, 18 Dec 2020 17:05:28 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0082.hostedemail.com [216.40.44.82]) by kanga.kvack.org (Postfix) with ESMTP id 2AB906B00AB for ; Fri, 18 Dec 2020 17:05:28 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id E06E04DA4 for ; Fri, 18 Dec 2020 22:05:27 +0000 (UTC) X-FDA: 77607785094.22.pen30_5515edf27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id B567B180909E9 for ; Fri, 18 Dec 2020 22:05:27 +0000 (UTC) X-HE-Tag: pen30_5515edf27440 X-Filterd-Recvd-Size: 5525 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf43.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:27 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:25 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329126; bh=uJ4An1wnldMhr2Wy6GlXvJSV0e7Q73eZYXMEC7AsUTo=; h=From:To:Subject:In-Reply-To:From; b=TcwgyenqxT0bIEBtCIPpismGwVzJ4mVOIBUYqqhSG7hqwdJjOyU8y2u3vprWO4GJ9 EL6I+Z6ZObVGzsdfg0kvYyNvGJrWq96k+cn+Ocux1jFvV9RcmNqzfiiUgS4yOwzZP/ qVf8N8xpH6oS10Vc/HYiyCr6qtQDJaJ/q15RMbmo= From: Andrew Morton To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, Vincenzo.Frascino@arm.com, will.deacon@arm.com Subject: [patch 72/78] kasan, mm: allow cache merging with no metadata Message-ID: <20201218220525.X5G3jCEts%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan, mm: allow cache merging with no metadata The reason cache merging is disabled with KASAN is because KASAN puts its metadata right after the allocated object. 
When the merged caches have slightly different sizes, the metadata ends up in different places, which KASAN doesn't support. It might be possible to adjust the metadata allocation algorithm and make it friendly to the cache merging code. Instead this change takes a simpler approach and allows merging caches when no metadata is present. Which is the case for hardware tag-based KASAN with kasan.mode=prod. Link: https://lkml.kernel.org/r/37497e940bfd4b32c0a93a702a9ae4cf061d5392.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba Co-developed-by: Vincenzo Frascino Signed-off-by: Vincenzo Frascino Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- include/linux/kasan.h | 21 +++++++++++++++++++-- mm/kasan/common.c | 11 +++++++++++ mm/slab_common.c | 3 ++- 3 files changed, 32 insertions(+), 3 deletions(-) --- a/include/linux/kasan.h~kasan-mm-allow-cache-merging-with-no-metadata +++ a/include/linux/kasan.h @@ -82,17 +82,30 @@ struct kasan_cache { }; #ifdef CONFIG_KASAN_HW_TAGS + DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled); + static __always_inline bool kasan_enabled(void) { return static_branch_likely(&kasan_flag_enabled); } -#else + +#else /* CONFIG_KASAN_HW_TAGS */ + static inline bool kasan_enabled(void) { return true; } -#endif + +#endif /* CONFIG_KASAN_HW_TAGS */ + +slab_flags_t __kasan_never_merge(void); +static __always_inline slab_flags_t kasan_never_merge(void) +{ + if (kasan_enabled()) + return __kasan_never_merge(); + return 0; +} void __kasan_unpoison_range(const void *addr, size_t size); static __always_inline void kasan_unpoison_range(const void *addr, size_t size) @@ -239,6 +252,10 @@ static inline bool kasan_enabled(void) { return false; } +static inline slab_flags_t kasan_never_merge(void) +{ + return 0; +} static inline void kasan_unpoison_range(const void *address, size_t size) {} static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} static inline void kasan_free_pages(struct page *page, unsigned int order) {} --- a/mm/kasan/common.c~kasan-mm-allow-cache-merging-with-no-metadata +++ a/mm/kasan/common.c @@ -86,6 +86,17 @@ asmlinkage void kasan_unpoison_task_stac } #endif /* CONFIG_KASAN_STACK */ +/* + * Only allow cache merging when stack collection is disabled and no metadata + * is present. 
+ */ +slab_flags_t __kasan_never_merge(void) +{ + if (kasan_stack_collection_enabled()) + return SLAB_KASAN; + return 0; +} + void __kasan_alloc_pages(struct page *page, unsigned int order) { u8 tag; --- a/mm/slab_common.c~kasan-mm-allow-cache-merging-with-no-metadata +++ a/mm/slab_common.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -53,7 +54,7 @@ static DECLARE_WORK(slab_caches_to_rcu_d */ #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \ SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \ - SLAB_FAILSLAB | SLAB_KASAN) + SLAB_FAILSLAB | kasan_never_merge()) #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \ SLAB_CACHE_DMA32 | SLAB_ACCOUNT) From patchwork Fri Dec 18 22:05:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE483C3526C for ; Fri, 18 Dec 2020 22:05:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 535FF23B87 for ; Fri, 18 Dec 2020 22:05:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 535FF23B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D707D6B00AD; Fri, 18 Dec 2020 17:05:31 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D20B76B00AE; Fri, 18 Dec 2020 17:05:31 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C35DF6B00AF; Fri, 18 Dec 2020 17:05:31 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0123.hostedemail.com [216.40.44.123]) by kanga.kvack.org (Postfix) with ESMTP id AAB596B00AD for ; Fri, 18 Dec 2020 17:05:31 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 72EEF824999B for ; Fri, 18 Dec 2020 22:05:31 +0000 (UTC) X-FDA: 77607785262.17.cakes69_0e0359527440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 5246D1816392B for ; Fri, 18 Dec 2020 22:05:31 +0000 (UTC) X-HE-Tag: cakes69_0e0359527440 X-Filterd-Recvd-Size: 15155 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf20.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:30 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:28 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329129; bh=8DcJNR4WmHnL0Hv0fKmZFOYV6gb4BwyMrwvUViDgsMg=; h=From:To:Subject:In-Reply-To:From; b=BEpwRlOUax5N9jns6SFi8qxB0bZYKAAxZck4YI6iOMXgHJ1P5CGO9OFaIPfe1WdEp QMJQCYrd1/ncfA9k1f7lHS+U3HQXXHf8v+XF7WiaHNkQ2bhsB8WjqpCaJpokQO5qUp llD/pOi9j9x53c0upBUrc5ULvbArRuBXTRVdLbsA= From: Andrew Morton To: 
akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com, Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com, elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com, kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com Subject: [patch 73/78] kasan: update documentation Message-ID: <20201218220528.rMPAjuQZl%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Andrey Konovalov Subject: kasan: update documentation This change updates KASAN documentation to reflect the addition of boot parameters and also reworks and clarifies some of the existing sections, in particular: defines what a memory granule is, mentions quarantine, makes the KUnit section more readable. Link: https://lkml.kernel.org/r/748daf013e17d925b0fe00c1c3b5dce726dd2430.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/Ib1f83e91be273264b25f42b04448ac96b858849f Signed-off-by: Andrey Konovalov Reviewed-by: Dmitry Vyukov Reviewed-by: Marco Elver Tested-by: Vincenzo Frascino Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Catalin Marinas Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton --- Documentation/dev-tools/kasan.rst | 188 +++++++++++++++++----------- 1 file changed, 117 insertions(+), 71 deletions(-) --- a/Documentation/dev-tools/kasan.rst~kasan-update-documentation +++ a/Documentation/dev-tools/kasan.rst @@ -4,8 +4,9 @@ The Kernel Address Sanitizer (KASAN) Overview -------- -KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to -find out-of-bound and use-after-free bugs. KASAN has three modes: +KernelAddressSANitizer (KASAN) is a dynamic memory safety error detector +designed to find out-of-bound and use-after-free bugs. KASAN has three modes: + 1. generic KASAN (similar to userspace ASan), 2. software tag-based KASAN (similar to userspace HWASan), 3. hardware tag-based KASAN (based on hardware memory tagging). @@ -39,23 +40,13 @@ CONFIG_KASAN_INLINE. Outline and inline The former produces smaller binary while the latter is 1.1 - 2 times faster. Both software KASAN modes work with both SLUB and SLAB memory allocators, -hardware tag-based KASAN currently only support SLUB. -For better bug detection and nicer reporting, enable CONFIG_STACKTRACE. +while the hardware tag-based KASAN currently only supports SLUB. + +For better error reports that include stack traces, enable CONFIG_STACKTRACE. To augment reports with last allocation and freeing stack of the physical page, it is recommended to enable also CONFIG_PAGE_OWNER and boot with page_owner=on. -To disable instrumentation for specific files or directories, add a line -similar to the following to the respective kernel Makefile: - -- For a single file (e.g. main.o):: - - KASAN_SANITIZE_main.o := n - -- For all files in one directory:: - - KASAN_SANITIZE := n - Error reports ~~~~~~~~~~~~~ @@ -140,22 +131,75 @@ freed (in case of a use-after-free bug r the accessed slab object and information about the accessed memory page. In the last section the report shows memory state around the accessed address.
-Reading this part requires some understanding of how KASAN works. - -The state of each 8 aligned bytes of memory is encoded in one shadow byte. -Those 8 bytes can be accessible, partially accessible, freed or be a redzone. -We use the following encoding for each shadow byte: 0 means that all 8 bytes -of the corresponding memory region are accessible; number N (1 <= N <= 7) means -that the first N bytes are accessible, and other (8 - N) bytes are not; -any negative value indicates that the entire 8-byte word is inaccessible. -We use different negative values to distinguish between different kinds of -inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h). +Internally KASAN tracks memory state separately for each memory granule, which +is either 8 or 16 aligned bytes depending on KASAN mode. Each number in the +memory state section of the report shows the state of one of the memory +granules that surround the accessed address. + +For generic KASAN the size of each memory granule is 8. The state of each +granule is encoded in one shadow byte. Those 8 bytes can be accessible, +partially accessible, freed or be a part of a redzone. KASAN uses the following +encoding for each shadow byte: 0 means that all 8 bytes of the corresponding +memory region are accessible; number N (1 <= N <= 7) means that the first N +bytes are accessible, and other (8 - N) bytes are not; any negative value +indicates that the entire 8-byte word is inaccessible. KASAN uses different +negative values to distinguish between different kinds of inaccessible memory +like redzones or freed memory (see mm/kasan/kasan.h). In the report above the arrows point to the shadow byte 03, which means that the accessed address is partially accessible. For tag-based KASAN this last report section shows the memory tags around the -accessed address (see Implementation details section). +accessed address (see `Implementation details`_ section). + +Boot parameters +~~~~~~~~~~~~~~~ + +Hardware tag-based KASAN mode (see the section about different modes below) is +intended for use in production as a security mitigation. Therefore it supports +boot parameters that allow disabling KASAN completely or otherwise control +particular KASAN features. + +The things that can be controlled are: + +1. Whether KASAN is enabled at all. +2. Whether KASAN collects and saves alloc/free stacks. +3. Whether KASAN panics on a detected bug or not. + +The ``kasan.mode`` boot parameter allows choosing one of three main modes: + +- ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed +- ``kasan.mode=prod`` - only essential production features are enabled +- ``kasan.mode=full`` - all KASAN features are enabled + +The chosen mode provides default control values for the features mentioned +above. However, it's also possible to override the default values by providing: + +- ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection + (default: ``on`` for ``mode=full``, + otherwise ``off``) +- ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic + (default: ``report``) + +If the ``kasan.mode`` parameter is not provided, it defaults to ``full`` when +``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise. + +For developers +~~~~~~~~~~~~~~ + +Software KASAN modes use compiler instrumentation to insert validity checks. +Such instrumentation might be incompatible with some part of the kernel, and +therefore needs to be disabled.
To disable instrumentation for specific files +or directories, add a line similar to the following to the respective kernel +Makefile: + +- For a single file (e.g. main.o):: + + KASAN_SANITIZE_main.o := n + +- For all files in one directory:: + + KASAN_SANITIZE := n Implementation details @@ -164,10 +208,10 @@ Implementation details Generic KASAN ~~~~~~~~~~~~~ -From a high level, our approach to memory error detection is similar to that -of kmemcheck: use shadow memory to record whether each byte of memory is safe -to access, and use compile-time instrumentation to insert checks of shadow -memory on each memory access. +From a high level perspective, KASAN's approach to memory error detection is +similar to that of kmemcheck: use shadow memory to record whether each byte of +memory is safe to access, and use compile-time instrumentation to insert checks +of shadow memory on each memory access. Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and offset to @@ -198,6 +242,9 @@ Generic KASAN also reports the last 2 ca potentially has access to an object. Call stacks for the following are shown: call_rcu() and workqueue queuing. +Generic KASAN is the only mode that delays the reuse of freed objects via +quarantine (see mm/kasan/quarantine.c for implementation). + Software tag-based KASAN ~~~~~~~~~~~~~~~~~~~~~~~~ @@ -305,15 +352,15 @@ therefore be wasteful. Furthermore, to e use different shadow pages, mappings would have to be aligned to ``KASAN_GRANULE_SIZE * PAGE_SIZE``. -Instead, we share backing space across multiple mappings. We allocate +Instead, KASAN shares backing space across multiple mappings. It allocates a backing page when a mapping in vmalloc space uses a particular page of the shadow region. This page can be shared by other vmalloc mappings later on. -We hook in to the vmap infrastructure to lazily clean up unused shadow +KASAN hooks into the vmap infrastructure to lazily clean up unused shadow memory. -To avoid the difficulties around swapping mappings around, we expect +To avoid the difficulties around swapping mappings around, KASAN expects that the part of the shadow region that covers the vmalloc space will not be covered by the early shadow page, but will be left unmapped. This will require changes in arch-specific code. @@ -324,24 +371,31 @@ architectures that do not have a fixed m CONFIG_KASAN_KUNIT_TEST & CONFIG_TEST_KASAN_MODULE -------------------------------------------------- -``CONFIG_KASAN_KUNIT_TEST`` utilizes the KUnit Test Framework for testing. -This means each test focuses on a small unit of functionality and -there are a few ways these tests can be run. +KASAN tests consist of two parts: + +1. Tests that are integrated with the KUnit Test Framework. Enabled with +``CONFIG_KASAN_KUNIT_TEST``. These tests can be run and partially verified +automatically in a few different ways; see the instructions below. + +2. Tests that are currently incompatible with KUnit. Enabled with +``CONFIG_TEST_KASAN_MODULE`` and can only be run as a module. These tests can +only be verified manually, by loading the kernel module and inspecting the +kernel log for KASAN reports. -Each test will print the KASAN report if an error is detected and then -print the number of the test and the status of the test: +Each KUnit-compatible KASAN test prints a KASAN report if an error is detected. +Then the test prints its number and status.
-pass:: +When a test passes:: ok 28 - kmalloc_double_kzfree -or, if kmalloc failed:: +When a test fails due to a failed ``kmalloc``:: # kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163 Expected ptr is not null, but is not ok 4 - kmalloc_large_oob_right -or, if a KASAN report was expected, but not found:: +When a test fails due to a missing KASAN report:: # kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629 Expected kasan_data->report_expected == kasan_data->report_found, but @@ -349,46 +403,38 @@ or, if a KASAN report was expected, but kasan_data->report_found == 0 not ok 28 - kmalloc_double_kzfree -All test statuses are tracked as they run and an overall status will -be printed at the end:: +At the end the cumulative status of all KASAN tests is printed. On success:: ok 1 - kasan -or:: +Or, if one of the tests failed:: not ok 1 - kasan -(1) Loadable Module -~~~~~~~~~~~~~~~~~~~~ + +There are a few ways to run KUnit-compatible KASAN tests. + +1. Loadable module +~~~~~~~~~~~~~~~~~~ With ``CONFIG_KUNIT`` enabled, ``CONFIG_KASAN_KUNIT_TEST`` can be built as -a loadable module and run on any architecture that supports KASAN -using something like insmod or modprobe. The module is called ``test_kasan``. +a loadable module and run on any architecture that supports KASAN by loading +the module with insmod or modprobe. The module is called ``test_kasan``. -(2) Built-In -~~~~~~~~~~~~~ +2. Built-In +~~~~~~~~~~~ With ``CONFIG_KUNIT`` built-in, ``CONFIG_KASAN_KUNIT_TEST`` can be built-in -on any architecture that supports KASAN. These and any other KUnit -tests enabled will run and print the results at boot as a late-init -call. - -(3) Using kunit_tool -~~~~~~~~~~~~~~~~~~~~~ - -With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, we can also -use kunit_tool to see the results of these along with other KUnit -tests in a more readable way. This will not print the KASAN reports -of tests that passed. Use `KUnit documentation `_ for more up-to-date -information on kunit_tool. +on any architecture that supports KASAN. These and any other KUnit tests enabled +will run and print the results at boot as a late-init call. -.. _KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html +3. Using kunit_tool +~~~~~~~~~~~~~~~~~~~ -``CONFIG_TEST_KASAN_MODULE`` is a set of KASAN tests that could not be -converted to KUnit. These tests can be run only as a module with -``CONFIG_TEST_KASAN_MODULE`` built as a loadable module and -``CONFIG_KASAN`` built-in. The type of error expected and the -function being run is printed before the expression expected to give -an error. Then the error is printed, if found, and that test -should be interpreted to pass only if the error was the one expected -by the test. +With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, it's also +possible to use ``kunit_tool`` to see the results of these and other KUnit tests +in a more readable way. This will not print the KASAN reports of the tests that +passed. Use `KUnit documentation `_ +for more up-to-date information on ``kunit_tool``. + +..
_KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html From patchwork Fri Dec 18 22:05:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7AE0C2D0E4 for ; Fri, 18 Dec 2020 22:05:35 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5836423B87 for ; Fri, 18 Dec 2020 22:05:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5836423B87 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E80746B00AF; Fri, 18 Dec 2020 17:05:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E318F6B00B0; Fri, 18 Dec 2020 17:05:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D47456B00B1; Fri, 18 Dec 2020 17:05:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0200.hostedemail.com [216.40.44.200]) by kanga.kvack.org (Postfix) with ESMTP id B5E3F6B00AF for ; Fri, 18 Dec 2020 17:05:34 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 85D8C181D19E6 for ; Fri, 18 Dec 2020 22:05:34 +0000 (UTC) X-FDA: 77607785388.04.cent49_55160ee27440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 6AC66801D232 for ; Fri, 18 Dec 2020 22:05:34 +0000 (UTC) X-HE-Tag: cent49_55160ee27440 X-Filterd-Recvd-Size: 1918 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf26.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:33 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:32 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329132; bh=u3y600jJ/ZseY1CtgA2tFCx7R9Smf+pM46eTlF7Qd4Q=; h=From:To:Subject:In-Reply-To:From; b=v9VpoY5+qmSl5nW7V5YHQLX26Wwdh6XzyMfZcyQGuRJ8QWikzXBGviYhVHrOQD3g4 UFtJI5FYrP6uAtHoJrz4GsHCaocrT9N0ETIH2Arf2WwgHiIvvYjCqvW8tOxx9GosaV qkc10Nz9KP/bP9OTP+VCV/tlxQQNfapKgskn+HJo= From: Andrew Morton To: akpm@linux-foundation.org, colin.king@canonical.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org Subject: [patch 74/78] mm/Kconfig: fix spelling mistake "whats" -> "what's" Message-ID: <20201218220532.m4psI7_CA%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Colin Ian King Subject: mm/Kconfig: fix spelling mistake "whats" -> "what's" There is a spelling mistake in the Kconfig 
help text. Fix it. Link: https://lkml.kernel.org/r/20201217172717.58203-1-colin.king@canonical.com Signed-off-by: Colin Ian King Signed-off-by: Andrew Morton --- mm/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/Kconfig~mm-fix-spelling-mistake-in-kconfig-whats-whats +++ a/mm/Kconfig @@ -713,7 +713,7 @@ config ZSMALLOC_STAT select DEBUG_FS help This option enables code in the zsmalloc to collect various - statistics about whats happening in zsmalloc and exports that + statistics about what's happening in zsmalloc and exports that information to userspace via debugfs. If unsure, say N. From patchwork Fri Dec 18 22:05:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983173 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE596C2D0E4 for ; Fri, 18 Dec 2020 22:05:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8A4DC23B9B for ; Fri, 18 Dec 2020 22:05:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8A4DC23B9B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 272766B00B1; Fri, 18 Dec 2020 17:05:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 224496B00B2; Fri, 18 Dec 2020 17:05:39 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1619F8D0001; Fri, 18 Dec 2020 17:05:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0228.hostedemail.com [216.40.44.228]) by kanga.kvack.org (Postfix) with ESMTP id EC6C76B00B1 for ; Fri, 18 Dec 2020 17:05:38 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C3DC152AC for ; Fri, 18 Dec 2020 22:05:37 +0000 (UTC) X-FDA: 77607785514.09.roof32_620947827440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 9B64F181C5568 for ; Fri, 18 Dec 2020 22:05:37 +0000 (UTC) X-HE-Tag: roof32_620947827440 X-Filterd-Recvd-Size: 6731 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf41.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:36 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:35 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329136; bh=Ph3QeDiZ8JX+xdjTpn7oNtGbJ6OTQTRUkX7366TxEWg=; h=From:To:Subject:In-Reply-To:From; b=DX7HPNZ/CZ+845MEKN2ej8CAoxx8doDfDy/3uG4TIznvSWCsQNkyc1F1gOTMbKhhx xkATJcHDua6IwTsoZOH3SFOFyy2fsoXQI4aYb+IVlvcYoVKjIBFU3mtEqUg1lL/g30 YdH4oGQLnF75aBr4oqftYYEvE4bkRfrJOqlptvL0= From: Andrew Morton To: akpm@linux-foundation.org, arnd@arndb.de, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, 
viro@zeniv.linux.org.uk, willemb@google.com, willy@infradead.org Subject: [patch 75/78] epoll: convert internal api to timespec64 Message-ID: <20201218220535.mUwZSJWEg%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Willem de Bruijn Subject: epoll: convert internal api to timespec64 Patch series "add epoll_pwait2 syscall", v4. Enable nanosecond timeouts for epoll. Analogous to pselect and ppoll, introduce an epoll_wait syscall variant that takes a struct timespec instead of int timeout. This patch (of 4): Make epoll more consistent with select/poll: pass along the timeout as timespec64 pointer. In anticipation of additional changes affecting all three polling mechanisms: - add epoll_pwait2 syscall with timespec semantics, and share poll_select_set_timeout implementation. - compute slack before conversion to absolute time, to save one ktime_get_ts64 call. Link: https://lkml.kernel.org/r/20201121144401.3727659-1-willemdebruijn.kernel@gmail.com Link: https://lkml.kernel.org/r/20201121144401.3727659-2-willemdebruijn.kernel@gmail.com Signed-off-by: Willem de Bruijn Cc: Al Viro Cc: Matthew Wilcox (Oracle) Cc: Arnd Bergmann Signed-off-by: Andrew Morton --- fs/eventpoll.c | 57 ++++++++++++++++++++++++++++++----------------- 1 file changed, 37 insertions(+), 20 deletions(-) --- a/fs/eventpoll.c~epoll-convert-internal-api-to-timespec64 +++ a/fs/eventpoll.c @@ -1712,15 +1712,25 @@ static int ep_send_events(struct eventpo return res; } -static inline struct timespec64 ep_set_mstimeout(long ms) +static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms) { - struct timespec64 now, ts = { - .tv_sec = ms / MSEC_PER_SEC, - .tv_nsec = NSEC_PER_MSEC * (ms % MSEC_PER_SEC), - }; + struct timespec64 now; + + if (ms < 0) + return NULL; + + if (!ms) { + to->tv_sec = 0; + to->tv_nsec = 0; + return to; + } + + to->tv_sec = ms / MSEC_PER_SEC; + to->tv_nsec = NSEC_PER_MSEC * (ms % MSEC_PER_SEC); ktime_get_ts64(&now); - return timespec64_add_safe(now, ts); + *to = timespec64_add_safe(now, *to); + return to; } /** @@ -1732,8 +1742,8 @@ static inline struct timespec64 ep_set_m * stored. * @maxevents: Size (in terms of number of events) of the caller event buffer. * @timeout: Maximum timeout for the ready events fetch operation, in - * milliseconds. If the @timeout is zero, the function will not block, - * while if the @timeout is less than zero, the function will block + * timespec. If the timeout is zero, the function will not block, + * while if the @timeout ptr is NULL, the function will block * until at least one event has been retrieved (or an error * occurred). * @@ -1741,7 +1751,7 @@ static inline struct timespec64 ep_set_m * error code, in case of error. 
*/ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events, - int maxevents, long timeout) + int maxevents, struct timespec64 *timeout) { int res, eavail, timed_out = 0; u64 slack = 0; @@ -1750,13 +1760,11 @@ static int ep_poll(struct eventpoll *ep, lockdep_assert_irqs_enabled(); - if (timeout > 0) { - struct timespec64 end_time = ep_set_mstimeout(timeout); - - slack = select_estimate_accuracy(&end_time); + if (timeout && (timeout->tv_sec | timeout->tv_nsec)) { + slack = select_estimate_accuracy(timeout); to = &expires; - *to = timespec64_to_ktime(end_time); - } else if (timeout == 0) { + *to = timespec64_to_ktime(*timeout); + } else if (timeout) { /* * Avoid the unnecessary trip to the wait queue loop, if the * caller specified a non blocking operation. @@ -2175,7 +2183,7 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, in * part of the user space epoll_wait(2). */ static int do_epoll_wait(int epfd, struct epoll_event __user *events, - int maxevents, int timeout) + int maxevents, struct timespec64 *to) { int error; struct fd f; @@ -2209,7 +2217,7 @@ static int do_epoll_wait(int epfd, struc ep = f.file->private_data; /* Time to fish for events ... */ - error = ep_poll(ep, events, maxevents, timeout); + error = ep_poll(ep, events, maxevents, to); error_fput: fdput(f); @@ -2219,7 +2227,10 @@ error_fput: SYSCALL_DEFINE4(epoll_wait, int, epfd, struct epoll_event __user *, events, int, maxevents, int, timeout) { - return do_epoll_wait(epfd, events, maxevents, timeout); + struct timespec64 to; + + return do_epoll_wait(epfd, events, maxevents, + ep_timeout_to_timespec(&to, timeout)); } /* @@ -2230,6 +2241,7 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd, int, maxevents, int, timeout, const sigset_t __user *, sigmask, size_t, sigsetsize) { + struct timespec64 to; int error; /* @@ -2240,7 +2252,9 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd, if (error) return error; - error = do_epoll_wait(epfd, events, maxevents, timeout); + error = do_epoll_wait(epfd, events, maxevents, + ep_timeout_to_timespec(&to, timeout)); + restore_saved_sigmask_unless(error == -EINTR); return error; @@ -2253,6 +2267,7 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, const compat_sigset_t __user *, sigmask, compat_size_t, sigsetsize) { + struct timespec64 to; long err; /* @@ -2263,7 +2278,9 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, if (err) return err; - err = do_epoll_wait(epfd, events, maxevents, timeout); + err = do_epoll_wait(epfd, events, maxevents, + ep_timeout_to_timespec(&to, timeout)); + restore_saved_sigmask_unless(err == -EINTR); return err; From patchwork Fri Dec 18 22:05:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Morton X-Patchwork-Id: 11983175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DEC9C4361B for ; Fri, 18 Dec 2020 22:05:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 078A123BA7 for ; Fri, 18 Dec 2020 22:05:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 078A123BA7 
Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=linux-foundation.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 755328D0001; Fri, 18 Dec 2020 17:05:41 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 706546B00B4; Fri, 18 Dec 2020 17:05:41 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5CF0D8D0001; Fri, 18 Dec 2020 17:05:41 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0213.hostedemail.com [216.40.44.213]) by kanga.kvack.org (Postfix) with ESMTP id 411B36B00B3 for ; Fri, 18 Dec 2020 17:05:41 -0500 (EST) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 036AD52AD for ; Fri, 18 Dec 2020 22:05:41 +0000 (UTC) X-FDA: 77607785682.13.bat56_3a0498927440 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin13.hostedemail.com (Postfix) with ESMTP id 7203A1817B06F for ; Fri, 18 Dec 2020 22:05:40 +0000 (UTC) X-HE-Tag: bat56_3a0498927440 X-Filterd-Recvd-Size: 6076 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Fri, 18 Dec 2020 22:05:39 +0000 (UTC) Date: Fri, 18 Dec 2020 14:05:38 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg; t=1608329139; bh=cScJVjf2e9RhdV5mgebB1/uKz6YeNP6orIKwJoQk6rw=; h=From:To:Subject:In-Reply-To:From; b=Fm6PjUPCHkkRGnrSqTn2GHOJLaE143rVYJU7BOraeNQxveBHlcgwYZkn8WrpEMPpZ dMbGO2EPwgIoQWMv6yLIk+VCPXDypBeJVZOM6kv0Qsz587x+Sb0Ce9/gMl2xVe9Y9c fnWnfkym7I3WgN5xdVITZkJ6NB/4ZdHcPC9TSb14= From: Andrew Morton To: akpm@linux-foundation.org, arnd@arndb.de, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, viro@zeniv.linux.org.uk, willemb@google.com, willy@infradead.org Subject: [patch 76/78] epoll: add syscall epoll_pwait2 Message-ID: <20201218220538.j23gQP2if%akpm@linux-foundation.org> In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org> User-Agent: s-nail v14.8.16 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Willem de Bruijn Subject: epoll: add syscall epoll_pwait2 Add syscall epoll_pwait2, an epoll_wait variant with nsec resolution that replaces int timeout with struct timespec. It is equivalent otherwise. int epoll_pwait2(int fd, struct epoll_event *events, int maxevents, const struct timespec *timeout, const sigset_t *sigset); The underlying hrtimer is already programmed with nsec resolution. pselect and ppoll also set nsec resolution timeout with timespec. The sigset_t in epoll_pwait has a compat variant. epoll_pwait2 needs the same. For timespec, only support this new interface on 2038 aware platforms that define __kernel_timespec_t. So no CONFIG_COMPAT_32BIT_TIME. 
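A minimal userspace sketch of the new interface (illustrative only, not part of this patch): it assumes no libc wrapper is available yet, so it goes through syscall(2), and it assumes the syscall number 441 that the follow-up wire-up patch assigns on most architectures; note that the raw kernel entry point also takes a trailing sigsetsize argument in addition to the prototype quoted above.

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/syscall.h>

#ifndef __NR_epoll_pwait2
#define __NR_epoll_pwait2 441	/* assumed; taken from the syscall-table wire-up patch */
#endif

int main(void)
{
	struct epoll_event events[8];
	/* a 1.5 ms timeout, which the old int-milliseconds API cannot express */
	struct timespec timeout = { .tv_sec = 0, .tv_nsec = 1500000 };
	int epfd = epoll_create1(0);
	long n;

	if (epfd < 0) {
		perror("epoll_create1");
		return 1;
	}
	/* NULL sigmask (sigsetsize then ignored): plain epoll_wait semantics with nsec timeout */
	n = syscall(__NR_epoll_pwait2, epfd, events, 8, &timeout, NULL, (size_t)0);
	if (n < 0)
		perror("epoll_pwait2");
	else
		printf("%ld events ready\n", n);
	return 0;
}

Passing a NULL timeout pointer instead makes the call block until an event arrives, mirroring epoll_wait() with a timeout of -1.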
Link: https://lkml.kernel.org/r/20201121144401.3727659-3-willemdebruijn.kernel@gmail.com
Signed-off-by: Willem de Bruijn
Cc: Al Viro
Cc: Arnd Bergmann
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
---

 fs/eventpoll.c | 87 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 73 insertions(+), 14 deletions(-)

--- a/fs/eventpoll.c~epoll-add-syscall-epoll_pwait2
+++ a/fs/eventpoll.c
@@ -2237,11 +2237,10 @@ SYSCALL_DEFINE4(epoll_wait, int, epfd, s
  * Implement the event wait interface for the eventpoll file. It is the kernel
  * part of the user space epoll_pwait(2).
  */
-SYSCALL_DEFINE6(epoll_pwait, int, epfd, struct epoll_event __user *, events,
-		int, maxevents, int, timeout, const sigset_t __user *, sigmask,
-		size_t, sigsetsize)
+static int do_epoll_pwait(int epfd, struct epoll_event __user *events,
+			  int maxevents, struct timespec64 *to,
+			  const sigset_t __user *sigmask, size_t sigsetsize)
 {
-	struct timespec64 to;
 	int error;
 
 	/*
@@ -2252,22 +2251,48 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd,
 	if (error)
 		return error;
 
-	error = do_epoll_wait(epfd, events, maxevents,
-			      ep_timeout_to_timespec(&to, timeout));
+	error = do_epoll_wait(epfd, events, maxevents, to);
 
 	restore_saved_sigmask_unless(error == -EINTR);
 
 	return error;
 }
 
-#ifdef CONFIG_COMPAT
-COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, epfd,
-			struct epoll_event __user *, events,
-			int, maxevents, int, timeout,
-			const compat_sigset_t __user *, sigmask,
-			compat_size_t, sigsetsize)
+SYSCALL_DEFINE6(epoll_pwait, int, epfd, struct epoll_event __user *, events,
+		int, maxevents, int, timeout, const sigset_t __user *, sigmask,
+		size_t, sigsetsize)
 {
 	struct timespec64 to;
+
+	return do_epoll_pwait(epfd, events, maxevents,
+			      ep_timeout_to_timespec(&to, timeout),
+			      sigmask, sigsetsize);
+}
+
+SYSCALL_DEFINE6(epoll_pwait2, int, epfd, struct epoll_event __user *, events,
+		int, maxevents, const struct __kernel_timespec __user *, timeout,
+		const sigset_t __user *, sigmask, size_t, sigsetsize)
+{
+	struct timespec64 ts, *to = NULL;
+
+	if (timeout) {
+		if (get_timespec64(&ts, timeout))
+			return -EFAULT;
+		to = &ts;
+		if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec))
+			return -EINVAL;
+	}
+
+	return do_epoll_pwait(epfd, events, maxevents, to,
+			      sigmask, sigsetsize);
+}
+
+#ifdef CONFIG_COMPAT
+static int do_compat_epoll_pwait(int epfd, struct epoll_event __user *events,
+				 int maxevents, struct timespec64 *timeout,
+				 const compat_sigset_t __user *sigmask,
+				 compat_size_t sigsetsize)
+{
 	long err;
 
 	/*
@@ -2278,13 +2303,47 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int,
 	if (err)
 		return err;
 
-	err = do_epoll_wait(epfd, events, maxevents,
-			    ep_timeout_to_timespec(&to, timeout));
+	err = do_epoll_wait(epfd, events, maxevents, timeout);
 
 	restore_saved_sigmask_unless(err == -EINTR);
 
 	return err;
 }
+
+COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, epfd,
+			struct epoll_event __user *, events,
+			int, maxevents, int, timeout,
+			const compat_sigset_t __user *, sigmask,
+			compat_size_t, sigsetsize)
+{
+	struct timespec64 to;
+
+	return do_compat_epoll_pwait(epfd, events, maxevents,
+				     ep_timeout_to_timespec(&to, timeout),
+				     sigmask, sigsetsize);
+}
+
+COMPAT_SYSCALL_DEFINE6(epoll_pwait2, int, epfd,
+			struct epoll_event __user *, events,
+			int, maxevents,
+			const struct __kernel_timespec __user *, timeout,
+			const compat_sigset_t __user *, sigmask,
+			compat_size_t, sigsetsize)
+{
+	struct timespec64 ts, *to = NULL;
+
+	if (timeout) {
+		if (get_timespec64(&ts, timeout))
+			return -EFAULT;
+		to = &ts;
+		if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec))
+			return -EINVAL;
+	}
+
+	return do_compat_epoll_pwait(epfd, events, maxevents, to,
+				     sigmask, sigsetsize);
+}
+
 #endif
 
 static int __init eventpoll_init(void)

From patchwork Fri Dec 18 22:05:41 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983177
Date: Fri, 18 Dec 2020 14:05:41 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, arnd@arndb.de, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, viro@zeniv.linux.org.uk, willemb@google.com, willy@infradead.org
Subject: [patch 77/78] epoll: wire up syscall epoll_pwait2
Message-ID: <20201218220541.HmQ5hLrPx%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>
From: Willem de Bruijn
Subject: epoll: wire up syscall epoll_pwait2

Split off from the previous patch in the series, which implements the
syscall.

Link: https://lkml.kernel.org/r/20201121144401.3727659-4-willemdebruijn.kernel@gmail.com
Signed-off-by: Willem de Bruijn
Cc: Al Viro
Cc: Arnd Bergmann
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
---

 arch/alpha/kernel/syscalls/syscall.tbl      |    1 +
 arch/arm/tools/syscall.tbl                  |    1 +
 arch/arm64/include/asm/unistd.h             |    2 +-
 arch/arm64/include/asm/unistd32.h           |    2 ++
 arch/ia64/kernel/syscalls/syscall.tbl       |    1 +
 arch/m68k/kernel/syscalls/syscall.tbl       |    1 +
 arch/microblaze/kernel/syscalls/syscall.tbl |    1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   |    1 +
 arch/parisc/kernel/syscalls/syscall.tbl     |    1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    |    1 +
 arch/s390/kernel/syscalls/syscall.tbl       |    1 +
 arch/sh/kernel/syscalls/syscall.tbl         |    1 +
 arch/sparc/kernel/syscalls/syscall.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_32.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_64.tbl      |    1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     |    1 +
 include/linux/compat.h                      |    6 ++++++
 include/linux/syscalls.h                    |    5 +++++
 include/uapi/asm-generic/unistd.h           |    4 +++-
 kernel/sys_ni.c                             |    2 ++
 22 files changed, 35 insertions(+), 2 deletions(-)

--- a/arch/alpha/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/alpha/kernel/syscalls/syscall.tbl
@@ -480,3 +480,4 @@
 548	common	pidfd_getfd		sys_pidfd_getfd
 549	common	faccessat2		sys_faccessat2
 550	common	process_madvise		sys_process_madvise
+551	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/arm64/include/asm/unistd32.h~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/arm64/include/asm/unistd32.h
@@ -889,6 +889,8 @@ __SYSCALL(__NR_pidfd_getfd, sys_pidfd_ge
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
 #define __NR_process_madvise 440
 __SYSCALL(__NR_process_madvise, sys_process_madvise)
+#define __NR_epoll_pwait2 441
+__SYSCALL(__NR_epoll_pwait2, sys_epoll_pwait2)
 
 /*
  * Please add new compat syscalls above this comment and update
--- a/arch/arm64/include/asm/unistd.h~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/arm64/include/asm/unistd.h
@@ -38,7 +38,7 @@
 #define __ARM_NR_compat_set_tls	(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		441
+#define __NR_compat_syscalls		442
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
--- a/arch/arm/tools/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/arm/tools/syscall.tbl
@@ -454,3 +454,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/ia64/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/ia64/kernel/syscalls/syscall.tbl
@@ -361,3 +361,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/m68k/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/m68k/kernel/syscalls/syscall.tbl
@@ -440,3 +440,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/microblaze/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -446,3 +446,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -379,3 +379,4 @@
 438	n32	pidfd_getfd		sys_pidfd_getfd
 439	n32	faccessat2		sys_faccessat2
 440	n32	process_madvise		sys_process_madvise
+441	n32	epoll_pwait2		sys_epoll_pwait2
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -355,3 +355,4 @@
 438	n64	pidfd_getfd		sys_pidfd_getfd
 439	n64	faccessat2		sys_faccessat2
 440	n64	process_madvise		sys_process_madvise
+441	n64	epoll_pwait2		sys_epoll_pwait2
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -428,3 +428,4 @@
 438	o32	pidfd_getfd		sys_pidfd_getfd
 439	o32	faccessat2		sys_faccessat2
 440	o32	process_madvise		sys_process_madvise
+441	o32	epoll_pwait2		sys_epoll_pwait2	compat_sys_epoll_pwait2
--- a/arch/parisc/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/parisc/kernel/syscalls/syscall.tbl
@@ -438,3 +438,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2	compat_sys_epoll_pwait2
--- a/arch/powerpc/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -530,3 +530,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2	compat_sys_epoll_pwait2
--- a/arch/s390/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/s390/kernel/syscalls/syscall.tbl
@@ -443,3 +443,4 @@
 438	common	pidfd_getfd	sys_pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2	sys_faccessat2		sys_faccessat2
 440	common	process_madvise	sys_process_madvise	sys_process_madvise
+441	common	epoll_pwait2	sys_epoll_pwait2	sys_epoll_pwait2
--- a/arch/sh/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/sh/kernel/syscalls/syscall.tbl
@@ -443,3 +443,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/sparc/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/sparc/kernel/syscalls/syscall.tbl
@@ -486,3 +486,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/arch/x86/entry/syscalls/syscall_32.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/x86/entry/syscalls/syscall_32.tbl
@@ -445,3 +445,4 @@
 438	i386	pidfd_getfd		sys_pidfd_getfd
 439	i386	faccessat2		sys_faccessat2
 440	i386	process_madvise		sys_process_madvise
+441	i386	epoll_pwait2		sys_epoll_pwait2	compat_sys_epoll_pwait2
--- a/arch/x86/entry/syscalls/syscall_64.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/x86/entry/syscalls/syscall_64.tbl
@@ -362,6 +362,7 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
--- a/arch/xtensa/kernel/syscalls/syscall.tbl~epoll-wire-up-syscall-epoll_pwait2
+++ a/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -411,3 +411,4 @@
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2
--- a/include/linux/compat.h~epoll-wire-up-syscall-epoll_pwait2
+++ a/include/linux/compat.h
@@ -537,6 +537,12 @@ asmlinkage long compat_sys_epoll_pwait(i
 				int maxevents, int timeout,
 				const compat_sigset_t __user *sigmask,
 				compat_size_t sigsetsize);
+asmlinkage long compat_sys_epoll_pwait2(int epfd,
+			struct epoll_event __user *events,
+			int maxevents,
+			const struct __kernel_timespec __user *timeout,
+			const compat_sigset_t __user *sigmask,
+			compat_size_t sigsetsize);
 
 /* fs/fcntl.c */
 asmlinkage long compat_sys_fcntl(unsigned int fd, unsigned int cmd,
--- a/include/linux/syscalls.h~epoll-wire-up-syscall-epoll_pwait2
+++ a/include/linux/syscalls.h
@@ -362,6 +362,11 @@ asmlinkage long sys_epoll_pwait(int epfd
 				int maxevents, int timeout,
 				const sigset_t __user *sigmask,
 				size_t sigsetsize);
+asmlinkage long sys_epoll_pwait2(int epfd, struct epoll_event __user *events,
+				 int maxevents,
+				 const struct __kernel_timespec __user *timeout,
+				 const sigset_t __user *sigmask,
+				 size_t sigsetsize);
 
 /* fs/fcntl.c */
 asmlinkage long sys_dup(unsigned int fildes);
--- a/include/uapi/asm-generic/unistd.h~epoll-wire-up-syscall-epoll_pwait2
+++ a/include/uapi/asm-generic/unistd.h
@@ -859,9 +859,11 @@ __SYSCALL(__NR_pidfd_getfd, sys_pidfd_ge
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
 #define __NR_process_madvise 440
 __SYSCALL(__NR_process_madvise, sys_process_madvise)
+#define __NR_epoll_pwait2 441
+__SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2)
 
 #undef __NR_syscalls
-#define __NR_syscalls 441
+#define __NR_syscalls 442
 
 /*
  * 32 bit systems traditionally used different
--- a/kernel/sys_ni.c~epoll-wire-up-syscall-epoll_pwait2
+++ a/kernel/sys_ni.c
@@ -68,6 +68,8 @@ COND_SYSCALL(epoll_create1);
 COND_SYSCALL(epoll_ctl);
 COND_SYSCALL(epoll_pwait);
 COND_SYSCALL_COMPAT(epoll_pwait);
+COND_SYSCALL(epoll_pwait2);
+COND_SYSCALL_COMPAT(epoll_pwait2);
 
 /* fs/fcntl.c */

From patchwork Fri Dec 18 22:05:44 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11983179
Date: Fri, 18 Dec 2020 14:05:44 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, arnd@arndb.de, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, viro@zeniv.linux.org.uk, willemb@google.com, willy@infradead.org
Subject: [patch 78/78] selftests/filesystems: expand epoll with epoll_pwait2
Message-ID: <20201218220544.-qfLR98pC%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>

From: Willem de Bruijn
Subject: selftests/filesystems: expand epoll with epoll_pwait2

Code coverage for the epoll_pwait2 syscall.

epoll62: Repeat basic test epoll1, but exercising the new syscall.
epoll63: Pass a timespec and exercise the timeout wakeup path.

Link: https://lkml.kernel.org/r/20201121144401.3727659-5-willemdebruijn.kernel@gmail.com
Signed-off-by: Willem de Bruijn
Cc: Al Viro
Cc: Arnd Bergmann
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
---

 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 72 ++++++++++
 1 file changed, 72 insertions(+)

--- a/tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c~selftests-filesystems-expand-epoll-with-epoll_pwait2
+++ a/tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #define _GNU_SOURCE
+#include
+#include
 #include
 #include
 #include
@@ -21,6 +23,19 @@ struct epoll_mtcontext
 	pthread_t waiter;
 };
 
+#ifndef __NR_epoll_pwait2
+#define __NR_epoll_pwait2 -1
+#endif
+
+static inline int sys_epoll_pwait2(int fd, struct epoll_event *events,
+				   int maxevents,
+				   const struct __kernel_timespec *timeout,
+				   const sigset_t *sigset, size_t sigsetsize)
+{
+	return syscall(__NR_epoll_pwait2, fd, events, maxevents, timeout,
+		       sigset, sigsetsize);
+}
+
 static void signal_handler(int signum)
 {
 }
@@ -3377,4 +3392,61 @@ TEST(epoll61)
 	close(ctx.evfd);
 }
 
+/* Equivalent to basic test epoll1, but exercising epoll_pwait2. */
+TEST(epoll62)
+{
+	int efd;
+	int sfd[2];
+	struct epoll_event e;
+
+	ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sfd), 0);
+
+	efd = epoll_create(1);
+	ASSERT_GE(efd, 0);
+
+	e.events = EPOLLIN;
+	ASSERT_EQ(epoll_ctl(efd, EPOLL_CTL_ADD, sfd[0], &e), 0);
+
+	ASSERT_EQ(write(sfd[1], "w", 1), 1);
+
+	EXPECT_EQ(sys_epoll_pwait2(efd, &e, 1, NULL, NULL, 0), 1);
+	EXPECT_EQ(sys_epoll_pwait2(efd, &e, 1, NULL, NULL, 0), 1);
+
+	close(efd);
+	close(sfd[0]);
+	close(sfd[1]);
+}
+
+/* Epoll_pwait2 basic timeout test. */
+TEST(epoll63)
+{
+	const int cfg_delay_ms = 10;
+	unsigned long long tdiff;
+	struct __kernel_timespec ts;
+	int efd;
+	int sfd[2];
+	struct epoll_event e;
+
+	ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sfd), 0);
+
+	efd = epoll_create(1);
+	ASSERT_GE(efd, 0);
+
+	e.events = EPOLLIN;
+	ASSERT_EQ(epoll_ctl(efd, EPOLL_CTL_ADD, sfd[0], &e), 0);
+
+	ts.tv_sec = 0;
+	ts.tv_nsec = cfg_delay_ms * 1000 * 1000;
+
+	tdiff = msecs();
+	EXPECT_EQ(sys_epoll_pwait2(efd, &e, 1, &ts, NULL, 0), 0);
+	tdiff = msecs() - tdiff;
+
+	EXPECT_GE(tdiff, cfg_delay_ms);
+
+	close(efd);
+	close(sfd[0]);
+	close(sfd[1]);
+}
+
 TEST_HARNESS_MAIN
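A note on epoll63: it measures elapsed time with the msecs() helper that
already exists in epoll_wakeup_test.c, so the helper does not appear in this
diff.  A rough sketch of such a helper, assuming CLOCK_MONOTONIC (the in-tree
version may differ in detail):

/*
 * Sketch only: millisecond timestamp helper of the kind epoll63 relies on.
 * Assumes CLOCK_MONOTONIC; not taken from the patch itself.
 */
#include <time.h>

static inline unsigned long long msecs(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000ULL + ts.tv_nsec / 1000000ULL;
}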