From patchwork Mon Jul 1 09:54:58 2024
X-Patchwork-Submitter: Steven Price <steven.price@arm.com>
X-Patchwork-Id: 13717719
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev, Ganapatrao Kulkarni
Subject: [PATCH v4 08/15] arm64: mm: Avoid TLBI when marking pages as valid
Date: Mon, 1 Jul 2024 10:54:58 +0100
Message-Id: <20240701095505.165383-9-steven.price@arm.com>
In-Reply-To: <20240701095505.165383-1-steven.price@arm.com>
References: <20240701095505.165383-1-steven.price@arm.com>

When __change_memory_common() is purely setting the valid bit on a PTE
(e.g. via the set_memory_valid() call), there is no need for a TLBI:
either the entry isn't changing (the valid bit was already set), or the
entry was invalid and so cannot have been cached in the TLB.
Signed-off-by: Steven Price <steven.price@arm.com>
---
v4: New patch
---
 arch/arm64/mm/pageattr.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 0e270a1c51e6..547a9e0b46c2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
 				  &data);
 
-	flush_tlb_kernel_range(start, start + size);
+	/*
+	 * If the memory is being made valid without changing any other bits
+	 * then a TLBI isn't required as a non-valid entry cannot be cached in
+	 * the TLB.
+	 */
+	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
+		flush_tlb_kernel_range(start, start + size);
 
 	return ret;
 }
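
For context (not part of the patch): the main caller this optimises is
set_memory_valid() in the same file. A lightly commented sketch, assuming
its current mainline definition, shows how the set_mask/clear_mask test
separates the two cases:

int set_memory_valid(unsigned long addr, int numpages, int enable)
{
	if (enable)
		/*
		 * set_mask == PTE_VALID and clear_mask is empty, so the
		 * new check above skips the TLBI: invalid entries were
		 * never cached, and already-valid entries are unchanged.
		 */
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(PTE_VALID),
					      __pgprot(0));
	else
		/*
		 * clear_mask is non-zero (PTE_VALID being removed), so
		 * the TLBI still runs: the previously valid entry may be
		 * cached in the TLB and must be invalidated.
		 */
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(0),
					      __pgprot(PTE_VALID));
}

With the patch applied, set_memory_valid(addr, n, 1) becomes TLBI-free,
while set_memory_valid(addr, n, 0) keeps its flush.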