
[v2,2/7] KVM: arm64: MTE: Update code comments

Message ID 20250110110023.2963795-3-aneesh.kumar@kernel.org
State New
Series Add support for NoTagAccess memory attribute

Commit Message

Aneesh Kumar K.V Jan. 10, 2025, 11 a.m. UTC
commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
initialisation") updated the locking such the kernel now allows
VM_SHARED mapping with MTE. Update the code comment to reflect this.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 arch/arm64/kvm/mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Comments

Catalin Marinas Jan. 10, 2025, 1:11 p.m. UTC | #1
On Fri, Jan 10, 2025 at 04:30:18PM +0530, Aneesh Kumar K.V (Arm) wrote:
> commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
> initialisation") updated the locking such the kernel now allows
> VM_SHARED mapping with MTE. Update the code comment to reflect this.
> 
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
> ---
>  arch/arm64/kvm/mmu.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9d46ad57e52..eb8220a409e1 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1391,11 +1391,11 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>   * able to see the page's tags and therefore they must be initialised first. If
>   * PG_mte_tagged is set, tags have already been initialised.
>   *
> - * The race in the test/set of the PG_mte_tagged flag is handled by:
> - * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
> - *   racing to santise the same page
> - * - mmap_lock protects between a VM faulting a page in and the VMM performing
> - *   an mprotect() to add VM_MTE
> + * The race in the test/set of the PG_mte_tagged flag is handled by using
> + * PG_mte_lock and PG_mte_tagged together. if PG_mte_lock is found unset, we can
> + * go ahead and clear the page tags. if PG_mte_lock is found set, then the page
> + * tags are already cleared or there is a parallel tag clearing is going on. We
				  ^^^^^^^^
				  remove this (or the other 'is')


> + * wait for the parallel tag clear to finish by waiting on PG_mte_tagged bit.
>   */

I don't think we need to describe the behaviour of set_page_mte_tagged()
and try_page_mte_tagging() here. How the locking works for tagged pages
is hidden in those functions, which carry their own documentation. I
would just remove this whole paragraph and leave the first one stating
that the tags must be initialised if not already done.

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..eb8220a409e1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1391,11 +1391,11 @@  static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * able to see the page's tags and therefore they must be initialised first. If
  * PG_mte_tagged is set, tags have already been initialised.
  *
- * The race in the test/set of the PG_mte_tagged flag is handled by:
- * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
- *   racing to santise the same page
- * - mmap_lock protects between a VM faulting a page in and the VMM performing
- *   an mprotect() to add VM_MTE
+ * The race in the test/set of the PG_mte_tagged flag is handled by using
+ * PG_mte_lock and PG_mte_tagged together. if PG_mte_lock is found unset, we can
+ * go ahead and clear the page tags. if PG_mte_lock is found set, then the page
+ * tags are already cleared or there is a parallel tag clearing is going on. We
+ * wait for the parallel tag clear to finish by waiting on PG_mte_tagged bit.
  */
 static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			      unsigned long size)
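
For reference, the locking Catalin refers to is contained in try_page_mte_tagging() and set_page_mte_tagged() themselves: the former either takes PG_mte_lock or, if it lost the race, waits for PG_mte_tagged to be set by the winner. Below is a rough caller-side sketch modelled on the sanitise_mte_tags() loop this comment documents; it is illustrative only and not part of this series.

/*
 * Illustrative sketch (not from this series): the per-page pattern the
 * rewritten comment describes. try_page_mte_tagging() either wins the
 * PG_mte_lock race (returns true) or waits until the concurrent
 * initialiser sets PG_mte_tagged (returns false), so the caller only
 * clears the tags when it wins.
 */
static void sanitise_mte_tags_sketch(struct kvm *kvm, kvm_pfn_t pfn,
				     unsigned long size)
{
	unsigned long i, nr_pages = size >> PAGE_SHIFT;
	struct page *page = pfn_to_page(pfn);

	if (!kvm_has_mte(kvm))
		return;

	for (i = 0; i < nr_pages; i++, page++) {
		if (try_page_mte_tagging(page)) {
			/* We won the race: initialise (clear) the tags. */
			mte_clear_page_tags(page_address(page));
			set_page_mte_tagged(page);
		}
		/*
		 * Otherwise the tags are already initialised, or another
		 * thread was initialising them and try_page_mte_tagging()
		 * has waited for it to finish.
		 */
	}
}

Because the wait on PG_mte_tagged happens inside try_page_mte_tagging(), callers never have to reason about the two flags directly, which is the argument for trimming the comment down to its first paragraph.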