From patchwork Mon Oct 3 04:04:27 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12997039
From: ira.weiny@intel.com
To: Andrew Morton
De Francesco" , Thomas Gleixner , Christoph Hellwig , Al Viro , Linus Walleij , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH] highmem: Fix kmap_to_page() for kmap_local_page() addresses Date: Sun, 2 Oct 2022 21:04:27 -0700 Message-Id: <20221003040427.1082050-1-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 MIME-Version: 1.0 ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1664769873; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:references:dkim-signature; bh=f0BBE6VWTuMCQC5yfIg/MCrYS8O2cIZ7kmjp0Q/q5bE=; b=j1ZKXbIC1oIIDBMxKeU3nbhmUWBsrMFG7ok8vpqydd1BKKcWFVWCW7blXevGVzA9Xb1oTE yxq95B6QXlRP2JVgm4jBdhkSg+S6iNiOKC5+ypvGfLOcT7Hi3L8Cc2rLpkz3GK2KKR6YKP Fv2CeuSoK822LtvnTrAIkrt2KFjnUBs= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=lcw7u+QE; spf=pass (imf21.hostedemail.com: domain of ira.weiny@intel.com designates 134.134.136.65 as permitted sender) smtp.mailfrom=ira.weiny@intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1664769873; a=rsa-sha256; cv=none; b=5b00Lrk7GmRURxXbJUS3ONVNY/7A0sdO+r0LRnEO7cF6n91V1xlKnw26M8qOX99HxxvmEg KKQNO9htNl7Xr3rPNBHYvLExWu5nLTIRRdpzzUVgXXO7ax08F88k6aiZwek8QmcJK5Gm8m HoMUlbpS5ie0Vcllv8e1nAwaW7Yy4J4= X-Stat-Signature: hmwn98x5ww9g1cgnozobmj11aqcz3a5g X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: C17661C0005 Authentication-Results: imf21.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=lcw7u+QE; spf=pass (imf21.hostedemail.com: domain of ira.weiny@intel.com designates 134.134.136.65 as permitted sender) smtp.mailfrom=ira.weiny@intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-HE-Tag: 1664769872-402108 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Ira Weiny kmap_to_page() is used to get the page for a virtual address which may be kmap'ed. Unfortunately, kmap_local_page() stores mappings in a thread local array separate from kmap(). These mappings were not checked by the call. Check the kmap_local_page() mappings and return the page if found. Because it is intended to remove kmap_to_page() add a warn on once to the kmap checks to flag potential issues early. Cc: "Fabio M. De Francesco" Cc: Thomas Gleixner Cc: Christoph Hellwig Cc: Andrew Morton Reported-by: Al Viro Signed-off-by: Ira Weiny --- I'm still working toward getting rid of kmap_to_page.[1] But until then this fix should be applied. 
---
 mm/highmem.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

base-commit: 274d7803837da78dfc911bcda0d593412676fc20

diff --git a/mm/highmem.c b/mm/highmem.c
index c707d7202d5f..29423c1afb3e 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -140,16 +140,45 @@ pte_t *pkmap_page_table;
 		do { spin_unlock(&kmap_lock); (void)(flags); }	while (0)
 #endif
 
+static inline int kmap_local_calc_idx(int idx)
+{
+	return idx + KM_MAX_IDX * smp_processor_id();
+}
+
+#ifndef arch_kmap_local_map_idx
+#define arch_kmap_local_map_idx(idx, pfn)	kmap_local_calc_idx(idx)
+#endif
+
 struct page *__kmap_to_page(void *vaddr)
 {
+	unsigned long base = (unsigned long) vaddr & PAGE_MASK;
+	struct kmap_ctrl *kctrl = &current->kmap_ctrl;
 	unsigned long addr = (unsigned long)vaddr;
+	int i;
 
-	if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) {
+	/* kmap() mappings */
+	if (WARN_ON_ONCE(addr >= PKMAP_ADDR(0) &&
+			 addr < PKMAP_ADDR(LAST_PKMAP))) {
 		int i = PKMAP_NR(addr);
 
 		return pte_page(pkmap_page_table[i]);
 	}
 
+	/* kmap_local_page() mappings */
+	if (WARN_ON_ONCE(base >= __fix_to_virt(FIX_KMAP_END) &&
+			 base < __fix_to_virt(FIX_KMAP_BEGIN))) {
+		for (i = 0; i < kctrl->idx; i++) {
+			unsigned long base_addr;
+			int idx;
+
+			idx = arch_kmap_local_map_idx(i, pte_pfn(kctrl->pteval[i]));
+			base_addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+
+			if (base_addr == base)
+				return pte_page(kctrl->pteval[i]);
+		}
+	}
+
 	return virt_to_page(vaddr);
 }
 EXPORT_SYMBOL(__kmap_to_page);
@@ -462,10 +491,6 @@ static inline void kmap_local_idx_pop(void)
 # define arch_kmap_local_post_unmap(vaddr)		do { } while (0)
 #endif
 
-#ifndef arch_kmap_local_map_idx
-#define arch_kmap_local_map_idx(idx, pfn)	kmap_local_calc_idx(idx)
-#endif
-
 #ifndef arch_kmap_local_unmap_idx
 #define arch_kmap_local_unmap_idx(idx, vaddr)	kmap_local_calc_idx(idx)
 #endif
@@ -494,11 +519,6 @@ static inline bool kmap_high_unmap_local(unsigned long vaddr)
 	return false;
 }
 
-static inline int kmap_local_calc_idx(int idx)
-{
-	return idx + KM_MAX_IDX * smp_processor_id();
-}
-
 static pte_t *__kmap_pte;
 
 static pte_t *kmap_get_pte(unsigned long vaddr, int idx)
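For reference, the address match in the new loop works because each entry i
on the task's kmap_ctrl stack occupies a per-CPU fixmap slot.  A sketch of
that relationship (illustrative only, not part of the patch; the helper name
is hypothetical, the symbols are the ones used above):

/*
 * Recompute the page-aligned base address of kmap_local stack entry i on
 * the current CPU, as the lookup loop in __kmap_to_page() does on
 * architectures using the default arch_kmap_local_map_idx().
 */
static unsigned long kmap_local_slot_base(int i)
{
	/* kmap_local_calc_idx(): KM_MAX_IDX slots are reserved per CPU */
	int idx = i + KM_MAX_IDX * smp_processor_id();

	return __fix_to_virt(FIX_KMAP_BEGIN + idx);
}

If this computed base equals the page-aligned address passed to
kmap_to_page(), kctrl->pteval[i] is the matching mapping and pte_page()
yields the backing page.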