From patchwork Fri Feb 4 17:56:48 2022
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 12735409
Date: Fri, 04 Feb 2022 09:56:48 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: ziy@nvidia.com, will@kernel.org, weixugc@google.com,
 songmuchun@bytedance.com, rppt@kernel.org, rientjes@google.com,
 pjt@google.com, mingo@redhat.com, jirislaby@kernel.org, hughd@google.com,
 hpa@zytor.com, gthelen@google.com, dave.hansen@linux.intel.com,
 anshuman.khandual@arm.com, aneesh.kumar@linux.ibm.com,
 pasha.tatashin@soleen.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org
In-Reply-To: <20220203204836.88dcebe504f440686cc63a60@linux-foundation.org>
Subject: [patch 03/10] mm/page_table_check: use unsigned long for page counters and cleanup
Message-Id: <20220204175648.DABF7C004E1@smtp.kernel.org>
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: mm/page_table_check: use unsigned long for page counters and cleanup

For consistency, use "unsigned long" for all page counters.

Also, reduce code duplication by calling __page_table_check_*_clear()
from the __page_table_check_*_set() functions.

Link: https://lkml.kernel.org/r/20220131203249.2832273-3-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Wei Xu <weixugc@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Paul Turner <pjt@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/page_table_check.c |   35 +++++++----------------------------
 1 file changed, 7 insertions(+), 28 deletions(-)

--- a/mm/page_table_check.c~mm-page_table_check-use-unsigned-long-for-page-counters-and-cleanup
+++ a/mm/page_table_check.c
@@ -86,8 +86,8 @@ static void page_table_check_clear(struc
 {
 	struct page_ext *page_ext;
 	struct page *page;
+	unsigned long i;
 	bool anon;
-	int i;
 
 	if (!pfn_valid(pfn))
 		return;
@@ -121,8 +121,8 @@ static void page_table_check_set(struct
 {
 	struct page_ext *page_ext;
 	struct page *page;
+	unsigned long i;
 	bool anon;
-	int i;
 
 	if (!pfn_valid(pfn))
 		return;
@@ -152,10 +152,10 @@ static void page_table_check_set(struct
 void __page_table_check_zero(struct page *page, unsigned int order)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
-	int i;
+	unsigned long i;
 
 	BUG_ON(!page_ext);
-	for (i = 0; i < (1 << order); i++) {
+	for (i = 0; i < (1ul << order); i++) {
 		struct page_table_check *ptc = get_page_table_check(page_ext);
 
 		BUG_ON(atomic_read(&ptc->anon_map_count));
@@ -206,17 +206,10 @@ EXPORT_SYMBOL(__page_table_check_pud_cle
 void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte)
 {
-	pte_t old_pte;
-
 	if (&init_mm == mm)
 		return;
 
-	old_pte = *ptep;
-	if (pte_user_accessible_page(old_pte)) {
-		page_table_check_clear(mm, addr, pte_pfn(old_pte),
-				       PAGE_SIZE >> PAGE_SHIFT);
-	}
-
+	__page_table_check_pte_clear(mm, addr, *ptep);
 	if (pte_user_accessible_page(pte)) {
 		page_table_check_set(mm, addr, pte_pfn(pte),
 				     PAGE_SIZE >> PAGE_SHIFT,
@@ -228,17 +221,10 @@ EXPORT_SYMBOL(__page_table_check_pte_set
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
 {
-	pmd_t old_pmd;
-
 	if (&init_mm == mm)
 		return;
 
-	old_pmd = *pmdp;
-	if (pmd_user_accessible_page(old_pmd)) {
-		page_table_check_clear(mm, addr, pmd_pfn(old_pmd),
-				       PMD_PAGE_SIZE >> PAGE_SHIFT);
-	}
-
+	__page_table_check_pmd_clear(mm, addr, *pmdp);
 	if (pmd_user_accessible_page(pmd)) {
 		page_table_check_set(mm, addr, pmd_pfn(pmd),
 				     PMD_PAGE_SIZE >> PAGE_SHIFT,
@@ -250,17 +236,10 @@ EXPORT_SYMBOL(__page_table_check_pmd_set
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
 				pud_t *pudp, pud_t pud)
 {
-	pud_t old_pud;
-
 	if (&init_mm == mm)
 		return;
 
-	old_pud = *pudp;
-	if (pud_user_accessible_page(old_pud)) {
-		page_table_check_clear(mm, addr, pud_pfn(old_pud),
-				       PUD_PAGE_SIZE >> PAGE_SHIFT);
-	}
-
+	__page_table_check_pud_clear(mm, addr, *pudp);
 	if (pud_user_accessible_page(pud)) {
 		page_table_check_set(mm, addr, pud_pfn(pud),
 				     PUD_PAGE_SIZE >> PAGE_SHIFT,
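
For readers following the series, below is a minimal userspace sketch of the
two cleanups above. The names here (struct entry, check_clear(), check_set(),
set_entry()) are simplified stand-ins invented for illustration, not the
kernel's page_table_check internals; only the shape of the refactoring is
carried over.

	/*
	 * Sketch of the patch's two cleanups, with stand-in types:
	 * 1) page counters and loop bounds are "unsigned long", so
	 *    (1ul << order) cannot overflow the way an int-typed
	 *    (1 << order) can for large orders;
	 * 2) the set path reuses the clear helper on the old entry
	 *    instead of open-coding the same accounting.
	 */
	#include <stdio.h>

	struct entry {
		unsigned long pfn;
		int user_accessible;
	};

	/* stand-in for the __page_table_check_*_clear() accounting */
	static void check_clear(const struct entry *e)
	{
		if (e->user_accessible)
			printf("clear accounting for pfn %lu\n", e->pfn);
	}

	/* stand-in for the page_table_check_set() accounting */
	static void check_set(const struct entry *e)
	{
		if (e->user_accessible)
			printf("set accounting for pfn %lu\n", e->pfn);
	}

	/*
	 * Before the patch the set path duplicated the "undo the old
	 * entry" logic; after it, the clear helper handles the old
	 * entry and only the "account for the new entry" step remains.
	 */
	static void set_entry(struct entry *slot, struct entry new_entry)
	{
		check_clear(slot);	/* one shared path for the old entry */
		*slot = new_entry;
		check_set(slot);
	}

	int main(void)
	{
		struct entry slot = { .pfn = 100, .user_accessible = 1 };
		unsigned int order = 4;
		unsigned long i, pages = 0;

		set_entry(&slot, (struct entry){ .pfn = 200,
						 .user_accessible = 1 });

		/* unsigned long counter, mirroring the (1ul << order) fix */
		for (i = 0; i < (1ul << order); i++)
			pages++;
		printf("walked %lu pages\n", pages);
		return 0;
	}

As the deleted hunks suggest, the clear helpers already perform the
*_user_accessible_page() test on the old entry internally, which is why the
set functions can now call __page_table_check_*_clear() unconditionally
after the &init_mm check.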