From patchwork Fri Nov 8 16:20:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868442
Date: Fri, 8 Nov 2024 16:20:38 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
Mime-Version: 1.0
References: <20241108162040.159038-1-tabba@google.com>
X-Mailer: git-send-email 2.47.0.277.g8800431eea-goog
Message-ID: <20241108162040.159038-9-tabba@google.com>
Subject: [RFC PATCH v1 08/10] mm: Use getters and setters to access page pgmap
From: Fuad Tabba <tabba@google.com>
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org,
 jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev,
 simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com,
 willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com,
 ackerleytng@google.com, vannapurve@google.com,
 mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com,
 quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org,
 qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com

The pointer to pgmap in struct page is overlaid with folio owner_ops.
To indicate that a page/folio has owner ops, bit 1 is set. Therefore,
before we can start using owner_ops, we need to ensure that all
accesses to page pgmap sanitize the pointer value. This patch
introduces the accessors, which will be modified in the following
patch to sanitize the pointer values.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  4 +++-
 drivers/pci/p2pdma.c                   |  8 +++++---
 include/linux/memremap.h               |  6 +++---
 include/linux/mm_types.h               | 13 +++++++++++++
 lib/test_hmm.c                         |  2 +-
 mm/hmm.c                               |  2 +-
 mm/memory.c                            |  2 +-
 mm/memremap.c                          | 19 +++++++++++--------
 mm/migrate_device.c                    |  4 ++--
 mm/mm_init.c                           |  2 +-
 10 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 1a072568cef6..d7d9d9476bb0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -88,7 +88,9 @@ struct nouveau_dmem {
 
 static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct nouveau_dmem_chunk, pagemap);
+	struct dev_pagemap *pgmap = page_get_pgmap(page);
+
+	return container_of(pgmap, struct nouveau_dmem_chunk, pagemap);
 }
 
 static struct nouveau_drm *page_to_drm(struct page *page)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 4f47a13cb500..19519bb4ba56 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -193,7 +193,7 @@ static const struct attribute_group p2pmem_group = {
 
 static void p2pdma_page_free(struct page *page)
 {
-	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap);
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_get_pgmap(page));
 	/* safe to dereference while a reference is held to the percpu ref */
 	struct pci_p2pdma *p2pdma =
 		rcu_dereference_protected(pgmap->provider->p2pdma, 1);
@@ -1016,8 +1016,10 @@ enum pci_p2pdma_map_type
 pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
 		       struct scatterlist *sg)
 {
-	if (state->pgmap != sg_page(sg)->pgmap) {
-		state->pgmap = sg_page(sg)->pgmap;
+	struct dev_pagemap *pgmap = page_get_pgmap(sg_page(sg));
+
+	if (state->pgmap != pgmap) {
+		state->pgmap = pgmap;
 		state->map = pci_p2pdma_map_type(state->pgmap, dev);
 		state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
 	}
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 3f7143ade32c..060e27b6aee0 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -161,7 +161,7 @@ static inline bool is_device_private_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_PRIVATE;
 }
 
 static inline bool folio_is_device_private(const struct folio *folio)
@@ -173,13 +173,13 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
 static inline bool is_device_coherent_page(const struct page *page)
 {
 	return is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_COHERENT;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_COHERENT;
 }
 
 static inline bool folio_is_device_coherent(const struct folio *folio)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e06286f44f1..27075ea24e67 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -616,6 +616,19 @@ static inline const struct folio_owner_ops *folio_get_owner_ops(struct folio *fo
 	return owner_ops;
 }
 
+/*
+ * Get the page dev_pagemap pgmap pointer.
+ */
+#define page_get_pgmap(page) ((page)->pgmap)
+
+/*
+ * Set the page dev_pagemap pgmap pointer.
+ */
+static inline void page_set_pgmap(struct page *page, struct dev_pagemap *pgmap)
+{
+	page->pgmap = pgmap;
+}
+
 struct page_frag_cache {
 	void * va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 056f2e411d7b..d3e3843f57dd 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -195,7 +195,7 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)
 
 static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+	return container_of(page_get_pgmap(page), struct dmirror_chunk, pagemap);
 }
 
 static struct dmirror_device *dmirror_page_to_device(struct page *page)
diff --git a/mm/hmm.c b/mm/hmm.c
index 7e0229ae4a5a..b5f5ac218fda 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -248,7 +248,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		 * just report the PFN.
 		 */
 		if (is_device_private_entry(entry) &&
-		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
+		    page_get_pgmap(pfn_swap_entry_to_page(entry))->owner ==
 		    range->dev_private_owner) {
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
diff --git a/mm/memory.c b/mm/memory.c
index 80850cad0e6f..5853fa5767c7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4276,7 +4276,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 */
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
+			ret = page_get_pgmap(vmf->page)->ops->migrate_to_ram(vmf);
 			put_page(vmf->page);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;
diff --git a/mm/memremap.c b/mm/memremap.c
index 40d4547ce514..931bc85da1df 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -458,8 +458,9 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 void free_zone_device_folio(struct folio *folio)
 {
-	if (WARN_ON_ONCE(!folio->page.pgmap->ops ||
-			 !folio->page.pgmap->ops->page_free))
+	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
+
+	if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
 		return;
 
 	mem_cgroup_uncharge(folio);
@@ -486,17 +487,17 @@ void free_zone_device_folio(struct folio *folio)
 	 * to clear folio->mapping.
 	 */
 	folio->mapping = NULL;
-	folio->page.pgmap->ops->page_free(folio_page(folio, 0));
+	pgmap->ops->page_free(folio_page(folio, 0));
 
-	if (folio->page.pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    folio->page.pgmap->type != MEMORY_DEVICE_COHERENT)
+	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    pgmap->type != MEMORY_DEVICE_COHERENT)
 		/*
 		 * Reset the refcount to 1 to prepare for handing out the page
 		 * again.
 		 */
 		folio_set_count(folio, 1);
 	else
-		put_dev_pagemap(folio->page.pgmap);
+		put_dev_pagemap(pgmap);
 }
 
 void zone_device_page_init(struct page *page)
@@ -505,7 +506,7 @@ void zone_device_page_init(struct page *page)
 	 * Drivers shouldn't be allocating pages after calling
 	 * memunmap_pages().
 	 */
-	WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref));
+	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_get_pgmap(page)->ref));
 	set_page_count(page, 1);
 	lock_page(page);
 }
@@ -514,7 +515,9 @@ EXPORT_SYMBOL_GPL(zone_device_page_init);
 #ifdef CONFIG_FS_DAX
 bool __put_devmap_managed_folio_refs(struct folio *folio, int refs)
 {
-	if (folio->page.pgmap->type != MEMORY_DEVICE_FS_DAX)
+	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
+
+	if (pgmap->type != MEMORY_DEVICE_FS_DAX)
 		return false;
 
 	/*
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 9cf26592ac93..368def358d02 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -135,7 +135,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			page = pfn_swap_entry_to_page(entry);
 			if (!(migrate->flags &
 			      MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
-			    page->pgmap->owner != migrate->pgmap_owner)
+			    page_get_pgmap(page)->owner != migrate->pgmap_owner)
 				goto next;
 
 			mpfn = migrate_pfn(page_to_pfn(page)) |
@@ -156,7 +156,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				goto next;
 			else if (page && is_device_coherent_page(page) &&
 			    (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
-			     page->pgmap->owner != migrate->pgmap_owner))
+			     page_get_pgmap(page)->owner != migrate->pgmap_owner))
 				goto next;
 			mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1c205b0a86ed..279cdaebfd2b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -995,7 +995,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * and zone_device_data. It is a bug if a ZONE_DEVICE page is
 	 * ever freed or placed on a driver-private list.
 	 */
-	page->pgmap = pgmap;
+	page_set_pgmap(page, pgmap);
 	page->zone_device_data = NULL;
 
 	/*