From patchwork Fri Nov 20 14:35:52 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, stable@vger.kernel.org, linux-mm@kvack.org, Minchan Kim,
 Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/6] arm64: pgtable: Fix pte_accessible()
Date: Fri, 20 Nov 2020 14:35:52 +0000
Message-Id: <20201120143557.6715-2-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

pte_accessible() is used by ptep_clear_flush() to figure out whether
TLB invalidation is necessary when unmapping pages for reclaim.
Although our implementation is correct according to the architecture,
returning true only for valid, young ptes in the absence of racing
page-table modifications, this is in fact flawed due to lazy
invalidation of old ptes in ptep_clear_flush_young(), where we elide
the expensive DSB instruction for completing the TLB invalidation.

Rather than penalise the aging path, adjust pte_accessible() to return
true for any valid pte, even if the access flag is cleared.

Cc: stable@vger.kernel.org
Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()")
Reported-by: Yu Zhao
Signed-off-by: Will Deacon
Reviewed-by: Minchan Kim
Acked-by: Yu Zhao
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4ff12a7adcfd..1bdf51f01e73 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -115,8 +115,6 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
 #define pte_valid_not_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
-#define pte_valid_young(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
 
@@ -126,7 +124,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
  * remapped as PROT_NONE but are yet to be flushed from the TLB.
  */
 #define pte_accessible(mm, pte)	\
-	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))
+	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
 
 /*
  * p??_access_permitted() is true for valid user mappings (subject to the
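For readers following along, here is a minimal user-space sketch of the
predicate change above. It is illustrative only: the bit positions mirror
the arm64 descriptor format, but these are not the kernel definitions.

#include <stdbool.h>
#include <stdint.h>

#define PTE_VALID	(UINT64_C(1) << 0)
#define PTE_AF		(UINT64_C(1) << 10)	/* hardware Access Flag */

typedef struct { uint64_t val; } pte_t;

/*
 * Old predicate: valid *and* young. This misses valid ptes whose AF was
 * lazily cleared by ptep_clear_flush_young() without a completed TLB
 * invalidation, so reclaim could skip a flush that is still needed.
 */
static bool pte_accessible_old(pte_t pte)
{
	return (pte.val & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF);
}

/* New predicate: any valid pte may still be cached in the TLB. */
static bool pte_accessible_new(pte_t pte)
{
	return (pte.val & PTE_VALID) != 0;
}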
From patchwork Fri Nov 20 14:35:53 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, stable@vger.kernel.org, linux-mm@kvack.org, Minchan Kim,
 Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/6] arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()
Date: Fri, 20 Nov 2020 14:35:53 +0000
Message-Id: <20201120143557.6715-3-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

With hardware dirty bit management, calling pte_wrprotect() on a
writable, dirty PTE will lose the dirty state and return a read-only,
clean entry.

Move the logic from ptep_set_wrprotect() into pte_wrprotect() to ensure
that the dirty bit is preserved for writable entries, as this is
required for soft-dirty bit management if we enable it in the future.

Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdf51f01e73..a155551863c9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -162,13 +162,6 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
 	return pmd;
 }
 
-static inline pte_t pte_wrprotect(pte_t pte)
-{
-	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
-	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
-	return pte;
-}
-
 static inline pte_t pte_mkwrite(pte_t pte)
 {
 	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
@@ -194,6 +187,20 @@ static inline pte_t pte_mkdirty(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	/*
+	 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
+	 * clear), set the PTE_DIRTY bit.
+	 */
+	if (pte_hw_dirty(pte))
+		pte = pte_mkdirty(pte);
+
+	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
+	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
+	return pte;
+}
+
 static inline pte_t pte_mkold(pte_t pte)
 {
 	return clear_pte_bit(pte, __pgprot(PTE_AF));
@@ -843,12 +850,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address,
 	pte = READ_ONCE(*ptep);
 	do {
 		old_pte = pte;
-		/*
-		 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
-		 * clear), set the PTE_DIRTY bit.
-		 */
-		if (pte_hw_dirty(pte))
-			pte = pte_mkdirty(pte);
 		pte = pte_wrprotect(pte);
 		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
 					       pte_val(old_pte), pte_val(pte));
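To see why the ordering matters, here is an illustrative sketch of the
hardware dirty-bit (DBM) encoding that pte_wrprotect() must now preserve.
It is simplified: bit positions follow the arm64 descriptor layout, but
these are not the kernel headers.

#include <stdbool.h>
#include <stdint.h>

#define PTE_RDONLY	(UINT64_C(1) << 7)
#define PTE_DBM		(UINT64_C(1) << 51)	/* == PTE_WRITE on arm64 */
#define PTE_DIRTY	(UINT64_C(1) << 55)	/* software dirty bit */

typedef struct { uint64_t val; } pte_t;

/*
 * Hardware signals "dirty" by clearing PTE_RDONLY on the first write
 * when the DBM bit is set, so dirty == (DBM set, RDONLY clear).
 */
static bool pte_hw_dirty(pte_t pte)
{
	return (pte.val & (PTE_DBM | PTE_RDONLY)) == PTE_DBM;
}

static pte_t pte_wrprotect_sketch(pte_t pte)
{
	if (pte_hw_dirty(pte))
		pte.val |= PTE_DIRTY;	/* carry the dirty state over */
	pte.val &= ~PTE_DBM;		/* clear PTE_WRITE/DBM ... */
	pte.val |= PTE_RDONLY;		/* ... and make the entry read-only */
	return pte;
}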
From patchwork Fri Nov 20 14:35:54 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, linux-mm@kvack.org, Minchan Kim, Catalin Marinas,
 Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/6] tlb: mmu_gather: Remove unused start/end arguments from tlb_finish_mmu()
Date: Fri, 20 Nov 2020 14:35:54 +0000
Message-Id: <20201120143557.6715-4-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

tlb_finish_mmu() takes two confusing and unused 'start'/'end' address
arguments. Remove them.

Signed-off-by: Will Deacon
---
 arch/ia64/include/asm/tlb.h | 2 +-
 arch/x86/kernel/ldt.c       | 2 +-
 fs/exec.c                   | 2 +-
 fs/proc/task_mmu.c          | 2 +-
 include/linux/mm_types.h    | 3 +--
 mm/hugetlb.c                | 2 +-
 mm/madvise.c                | 6 +++---
 mm/memory.c                 | 4 ++--
 mm/mmap.c                   | 4 ++--
 mm/mmu_gather.c             | 5 +----
 mm/oom_kill.c               | 4 ++--
 11 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/arch/ia64/include/asm/tlb.h b/arch/ia64/include/asm/tlb.h
index 8d9da6f08a62..7059eb2e867a 100644
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -36,7 +36,7 @@
  *		tlb_end_vma(tlb, vma);
  *	}
  * }
- * tlb_finish_mmu(tlb, start, end);	// finish unmap for address space MM
+ * tlb_finish_mmu(tlb);			// finish unmap for address space MM
  */
 #include
 #include
diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index b8aee71840ae..0d4e1253c9c9 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -400,7 +400,7 @@ static void free_ldt_pgtables(struct mm_struct *mm)
 
 	tlb_gather_mmu(&tlb, mm, start, end);
 	free_pgd_range(&tlb, start, end, start, end);
-	tlb_finish_mmu(&tlb, start, end);
+	tlb_finish_mmu(&tlb);
 #endif
 }
diff --git a/fs/exec.c b/fs/exec.c
index 547a2390baf5..aa846c6ec2f0 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -724,7 +724,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 		free_pgd_range(&tlb, old_start, old_end, new_end,
 			vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
 	}
-	tlb_finish_mmu(&tlb, old_start, old_end);
+	tlb_finish_mmu(&tlb);
 
 	/*
 	 * Shrink the vma to just the new range.  Always succeeds.
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 217aa2705d5d..cd03ab9087b0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1260,7 +1260,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 							&cp);
 		if (type == CLEAR_REFS_SOFT_DIRTY)
 			mmu_notifier_invalidate_range_end(&range);
-		tlb_finish_mmu(&tlb, 0, -1);
+		tlb_finish_mmu(&tlb);
 		mmap_read_unlock(mm);
 out_mm:
 		mmput(mm);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5a9238f6caad..7b90058a62be 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,8 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
-extern void tlb_finish_mmu(struct mmu_gather *tlb,
-				unsigned long start, unsigned long end);
+extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 37f15c3c24dc..4c0235122464 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3985,7 +3985,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 
 	tlb_gather_mmu(&tlb, mm, tlb_start, tlb_end);
 	__unmap_hugepage_range(&tlb, vma, start, end, ref_page);
-	tlb_finish_mmu(&tlb, tlb_start, tlb_end);
+	tlb_finish_mmu(&tlb);
 }
 
 /*
diff --git a/mm/madvise.c b/mm/madvise.c
index 416a56b8e757..29cd3d4172f5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -508,7 +508,7 @@ static long madvise_cold(struct vm_area_struct *vma,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
 	madvise_cold_page_range(&tlb, vma, start_addr, end_addr);
-	tlb_finish_mmu(&tlb, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
@@ -560,7 +560,7 @@ static long madvise_pageout(struct vm_area_struct *vma,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
 	madvise_pageout_page_range(&tlb, vma, start_addr, end_addr);
-	tlb_finish_mmu(&tlb, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
@@ -732,7 +732,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 			&madvise_free_walk_ops, &tlb);
 	tlb_end_vma(&tlb, vma);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, range.start, range.end);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..04a88c15e076 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1529,7 +1529,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
 		unmap_single_vma(&tlb, vma, start, range.end, NULL);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, start, range.end);
+	tlb_finish_mmu(&tlb);
 }
 
 /**
@@ -1555,7 +1555,7 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	mmu_notifier_invalidate_range_start(&range);
 	unmap_single_vma(&tlb, vma, address, range.end, details);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, address, range.end);
+	tlb_finish_mmu(&tlb);
 }
 
 /**
diff --git a/mm/mmap.c b/mm/mmap.c
index d91ecb00d38c..6d94b2ee9c45 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2678,7 +2678,7 @@ static void unmap_region(struct mm_struct *mm,
 	unmap_vmas(&tlb, vma, start, end);
 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
 				 next ? next->vm_start : USER_PGTABLES_CEILING);
-	tlb_finish_mmu(&tlb, start, end);
+	tlb_finish_mmu(&tlb);
 }
 
 /*
@@ -3221,7 +3221,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
-	tlb_finish_mmu(&tlb, 0, -1);
+	tlb_finish_mmu(&tlb);
 
 	/*
 	 * Walk the list again, actually closing and freeing it,
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 03c33c93a582..b0be5a7aa08f 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -290,14 +290,11 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish
- * @start: start of the region that will be removed from the page-table
- * @end: end of the region that will be removed from the page-table
  *
  * Called at the end of the shootdown operation to free up any resources that
  * were required.
  */
-void tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end)
+void tlb_finish_mmu(struct mmu_gather *tlb)
 {
 	/*
 	 * If there are parallel threads are doing PTE changes on same range
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8b84661a6410..c7936196a4ae 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -546,13 +546,13 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 						vma->vm_end);
 			tlb_gather_mmu(&tlb, mm, range.start, range.end);
 			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
-				tlb_finish_mmu(&tlb, range.start, range.end);
+				tlb_finish_mmu(&tlb);
 				ret = false;
 				continue;
 			}
 			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
 			mmu_notifier_invalidate_range_end(&range);
-			tlb_finish_mmu(&tlb, range.start, range.end);
+			tlb_finish_mmu(&tlb);
 		}
 	}
From patchwork Fri Nov 20 14:35:55 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, linux-mm@kvack.org, Minchan Kim, Catalin Marinas,
 Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state
Date: Fri, 20 Nov 2020 14:35:55 +0000
Message-Id: <20201120143557.6715-5-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double
flush"), TLB invalidation is elided in tlb_finish_mmu() if no entries
were batched via the tlb_remove_*() functions. Consequently, the
page-table modifications performed by clear_refs_write() in response to
a write to /proc/<pid>/clear_refs do not perform TLB invalidation.
Although this is fine when simply aging the ptes, in the case of
clearing the "soft-dirty" state we can end up with entries where
pte_write() is false, yet a writable mapping remains in the TLB.

Fix this by calling tlb_remove_tlb_entry() for each entry being
write-protected when clearing soft-dirty.

Signed-off-by: Will Deacon
---
 fs/proc/task_mmu.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index cd03ab9087b0..3308292ee5c5 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1032,11 +1032,12 @@ enum clear_refs_types {
 
 struct clear_refs_private {
 	enum clear_refs_types type;
+	struct mmu_gather *tlb;
 };
 
 #ifdef CONFIG_MEM_SOFT_DIRTY
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *pte)
+		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
 {
 	/*
 	 * The soft-dirty tracker uses #PF-s to catch writes
@@ -1053,6 +1054,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 		ptent = pte_wrprotect(old_pte);
 		ptent = pte_clear_soft_dirty(ptent);
 		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
+		tlb_remove_tlb_entry(tlb, pte, addr);
 	} else if (is_swap_pte(ptent)) {
 		ptent = pte_swp_clear_soft_dirty(ptent);
 		set_pte_at(vma->vm_mm, addr, pte, ptent);
@@ -1060,14 +1062,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 }
 #else
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *pte)
+		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
 {
 }
 #endif
 
 #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
 {
 	pmd_t old, pmd = *pmdp;
 
@@ -1081,6 +1083,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_clear_soft_dirty(pmd);
+		tlb_remove_pmd_tlb_entry(tlb, pmdp, addr);
 
 		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
 	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
@@ -1090,7 +1093,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 }
 #else
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
 {
 }
 #endif
@@ -1107,7 +1110,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty_pmd(vma, addr, pmd);
+			clear_soft_dirty_pmd(vma, addr, pmd, cp->tlb);
 			goto out;
 		}
 
@@ -1133,7 +1136,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		ptent = *pte;
 
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty(vma, addr, pte);
+			clear_soft_dirty(vma, addr, pte, cp->tlb);
 			continue;
 		}
 
@@ -1212,7 +1215,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	if (mm) {
 		struct mmu_notifier_range range;
 		struct clear_refs_private cp = {
-			.type = type,
+			.type = type,
+			.tlb = &tlb,
 		};
 
 		if (type == CLEAR_REFS_MM_HIWATER_RSS) {
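To make the flow concrete, here is a condensed sketch of the pattern the
patch adopts. It is illustrative only: lookup_pte() is a hypothetical
helper standing in for the real page-table walker, and locking and pmd
handling are omitted.

static void clear_soft_dirty_range(struct mm_struct *mm,
				   unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	unsigned long addr;

	tlb_gather_mmu(&tlb, mm, start, end);
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		pte_t *pte = lookup_pte(mm, addr);	/* hypothetical helper */
		pte_t old_pte = *pte;
		pte_t ptent = pte_clear_soft_dirty(pte_wrprotect(old_pte));

		set_pte_at(mm, addr, pte, ptent);
		/* Batch the entry so tlb_finish_mmu() has work to flush */
		tlb_remove_tlb_entry(&tlb, pte, addr);
	}
	tlb_finish_mmu(&tlb);
}

Without the tlb_remove_tlb_entry() call, nothing is batched and, since
commit 0758cd830494, the final flush is skipped entirely.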
From patchwork Fri Nov 20 14:35:56 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, linux-mm@kvack.org, Minchan Kim, Catalin Marinas,
 Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 5/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()
Date: Fri, 20 Nov 2020 14:35:56 +0000
Message-Id: <20201120143557.6715-6-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

Passing the range '0, -1' to tlb_gather_mmu() sets the 'fullmm' flag,
which indicates that the mm_struct being operated on is going away. In
this case, some architectures (such as arm64) can elide TLB invalidation
by ensuring that the TLB tag (ASID) associated with this mm is not
immediately reclaimed. Although this behaviour is documented in
asm-generic/tlb.h, it's subtle and easily missed. Consequently, the
/proc walker for manipulating the young and soft-dirty bits passes this
range regardless.

Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for
the entire mm, and WARN() if tlb_gather_mmu() is called with an 'end'
address greater than TASK_SIZE.

Signed-off-by: Will Deacon
---
 fs/proc/task_mmu.c        |  2 +-
 include/asm-generic/tlb.h |  6 ++++--
 include/linux/mm_types.h  |  1 +
 mm/mmap.c                 |  2 +-
 mm/mmu_gather.c           | 16 ++++++++++++++--
 5 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3308292ee5c5..a76d339b5754 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu(&tlb, mm, 0, -1);
+		tlb_gather_mmu_fullmm(&tlb, mm);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 6661ee1cff47..2c68a545ffa7 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -46,7 +46,9 @@
 *
 * The mmu_gather API consists of:
 *
- *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *  - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu()
+ *
+ *    start and finish a mmu_gather
 *
 *    Finish in particular will issue a (final) TLB invalidate and free
 *    all (remaining) queued pages.
@@ -91,7 +93,7 @@
 *
 *  - mmu_gather::fullmm
 *
- *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    A flag set by tlb_gather_mmu_fullmm() to indicate we're going to free
 *    the entire mm; this allows a number of optimizations.
 *
 *    - We can ignore tlb_{start,end}_vma(); because we don't
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7b90058a62be..42231729affe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,6 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
+extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
 extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 6d94b2ee9c45..4b2809fbbd4a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3216,7 +3216,7 @@ void exit_mmap(struct mm_struct *mm)
 
 	lru_add_drain();
 	flush_cache_mm(mm);
-	tlb_gather_mmu(&tlb, mm, 0, -1);
+	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index b0be5a7aa08f..87b48444e7e5 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -261,8 +261,8 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
 * respectively when @mm is without users and we're going to destroy
 * the full address space (exit/execve).
 */
-void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		unsigned long start, unsigned long end)
+static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+			     unsigned long start, unsigned long end)
 {
 	tlb->mm = mm;
 
@@ -287,6 +287,18 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	inc_tlb_flush_pending(tlb->mm);
 }
 
+void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+		    unsigned long start, unsigned long end)
+{
+	WARN_ON(end > TASK_SIZE);
+	__tlb_gather_mmu(tlb, mm, start, end);
+}
+
+void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
+{
+	__tlb_gather_mmu(tlb, mm, 0, -1);
+}
+
 /**
 * tlb_finish_mmu - finish an mmu_gather structure
 * @tlb: the mmu_gather structure to finish
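Taken together with the next patch, the intended split is as follows;
this is a hedged usage sketch, not kernel documentation:

	/*
	 * Tearing down the entire address space (exit/execve): the mm is
	 * going away, so architectures such as arm64 may retire the ASID
	 * instead of flushing.
	 */
	tlb_gather_mmu_fullmm(&tlb, mm);

	/*
	 * Operating on a live mm: pass a real range so the final flush
	 * cannot be elided; an 'end' above TASK_SIZE now triggers the
	 * new WARN_ON(end > TASK_SIZE).
	 */
	tlb_gather_mmu(&tlb, mm, start, end);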
From patchwork Fri Nov 20 14:35:57 2020
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
 Linus Torvalds, linux-mm@kvack.org, Minchan Kim, Catalin Marinas,
 Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
Date: Fri, 20 Nov 2020 14:35:57 +0000
Message-Id: <20201120143557.6715-7-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>

clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
updating the page-tables for the current mm. However, since the mm is
not being freed, this can result in stale TLB entries on architectures
which elide 'fullmm' invalidation.

Ensure that TLB invalidation is performed after updating soft-dirty
entries via clear_refs_write() by using the non-fullmm mmu_gather API.

Signed-off-by: Will Deacon
---
 fs/proc/task_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index a76d339b5754..316af047f1aa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu_fullmm(&tlb, mm);
+		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))