From patchwork Fri Oct 21 16:36:22 2022
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13015075
Date: Fri, 21 Oct 2022 16:36:22 +0000
In-Reply-To: <20221021163703.3218176-1-jthoughton@google.com>
References: <20221021163703.3218176-1-jthoughton@google.com>
X-Mailer: git-send-email 2.38.0.135.g90850a2211-goog
Message-ID: <20221021163703.3218176-7-jthoughton@google.com>
Subject: [RFC PATCH v2 06/47] hugetlb: extend vma lock for shared vmas
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
    "Dr. David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
    Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton

This allows us to add more data into the shared structure, which we
will use to store whether or not HGM is enabled for this VMA, as HGM
is only available for shared mappings.

It may be better to include HGM as a VMA flag instead of extending the
VMA lock structure.
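(For illustration only, not part of this patch: the point of wrapping the
lock in hugetlb_shared_vma_data is that later changes can hang additional
per-VMA shared state off the same allocation. The hgm_enabled field and
hugetlb_hgm_enabled() helper below are a hypothetical sketch of that
direction, not code from this series.)

struct hugetlb_shared_vma_data {
	struct hugetlb_vma_lock vma_lock;
	bool hgm_enabled;	/* hypothetical per-VMA HGM flag */
};

/* Hypothetical accessor; only shared (VM_MAYSHARE) vmas carry the data. */
static bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
{
	struct hugetlb_shared_vma_data *data;

	if (!vma || !(vma->vm_flags & VM_MAYSHARE))
		return false;
	data = vma->vm_private_data;
	return data && data->hgm_enabled;
}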
Signed-off-by: James Houghton
---
 include/linux/hugetlb.h |  4 +++
 mm/hugetlb.c            | 65 +++++++++++++++++++++--------------------
 2 files changed, 37 insertions(+), 32 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a899bc76d677..534958499ac4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -121,6 +121,10 @@ struct hugetlb_vma_lock {
 	struct vm_area_struct *vma;
 };
 
+struct hugetlb_shared_vma_data {
+	struct hugetlb_vma_lock vma_lock;
+};
+
 extern struct resv_map *resv_map_alloc(void);
 void resv_map_release(struct kref *ref);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dc82256b89dd..5ae8bc8c928e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -91,8 +91,8 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
-static int hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
+static void hugetlb_vma_data_free(struct vm_area_struct *vma);
+static int hugetlb_vma_data_alloc(struct vm_area_struct *vma);
 static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
@@ -4643,11 +4643,11 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
 		if (vma_lock) {
 			if (vma_lock->vma != vma) {
 				vma->vm_private_data = NULL;
-				hugetlb_vma_lock_alloc(vma);
+				hugetlb_vma_data_alloc(vma);
 			} else
 				pr_warn("HugeTLB: vma_lock already exists in %s.\n",
 					__func__);
 		} else
-			hugetlb_vma_lock_alloc(vma);
+			hugetlb_vma_data_alloc(vma);
 	}
 }
 
@@ -4659,7 +4659,7 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
 	unsigned long reserve, start, end;
 	long gbl_reserve;
 
-	hugetlb_vma_lock_free(vma);
+	hugetlb_vma_data_free(vma);
 
 	resv = vma_resv_map(vma);
 	if (!resv || !is_vma_resv_set(vma, HPAGE_RESV_OWNER))
@@ -6629,7 +6629,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	/*
 	 * vma specific semaphore used for pmd sharing synchronization
 	 */
-	hugetlb_vma_lock_alloc(vma);
+	hugetlb_vma_data_alloc(vma);
 
 	/*
 	 * Only apply hugepage reservation if asked. At fault time, an
@@ -6753,7 +6753,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	hugetlb_cgroup_uncharge_cgroup_rsvd(hstate_index(h),
 					    chg * pages_per_huge_page(h), h_cg);
 out_err:
-	hugetlb_vma_lock_free(vma);
+	hugetlb_vma_data_free(vma);
 	if (!vma || vma->vm_flags & VM_MAYSHARE)
 		/* Only call region_abort if the region_chg succeeded but the
 		 * region_add failed or didn't run.
@@ -6901,55 +6901,55 @@ static bool __vma_shareable_flags_pmd(struct vm_area_struct *vma)
 void hugetlb_vma_lock_read(struct vm_area_struct *vma)
 {
 	if (__vma_shareable_flags_pmd(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
-		down_read(&vma_lock->rw_sema);
+		down_read(&data->vma_lock.rw_sema);
 	}
 }
 
 void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
 {
 	if (__vma_shareable_flags_pmd(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
-		up_read(&vma_lock->rw_sema);
+		up_read(&data->vma_lock.rw_sema);
 	}
 }
 
 void hugetlb_vma_lock_write(struct vm_area_struct *vma)
 {
 	if (__vma_shareable_flags_pmd(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
-		down_write(&vma_lock->rw_sema);
+		down_write(&data->vma_lock.rw_sema);
 	}
 }
 
 void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
 {
 	if (__vma_shareable_flags_pmd(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
-		up_write(&vma_lock->rw_sema);
+		up_write(&data->vma_lock.rw_sema);
 	}
 }
 
 int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
 {
-	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+	struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
 	if (!__vma_shareable_flags_pmd(vma))
 		return 1;
 
-	return down_write_trylock(&vma_lock->rw_sema);
+	return down_write_trylock(&data->vma_lock.rw_sema);
 }
 
 void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
 {
 	if (__vma_shareable_flags_pmd(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
 
-		lockdep_assert_held(&vma_lock->rw_sema);
+		lockdep_assert_held(&data->vma_lock.rw_sema);
 	}
 }
 
@@ -6985,7 +6985,7 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
 	}
 }
 
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
+static void hugetlb_vma_data_free(struct vm_area_struct *vma)
 {
 	/*
 	 * Only present in sharable vmas.
@@ -6994,16 +6994,17 @@ static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
 		return;
 
 	if (vma->vm_private_data) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+		struct hugetlb_shared_vma_data *data = vma->vm_private_data;
+		struct hugetlb_vma_lock *vma_lock = &data->vma_lock;
 
 		down_write(&vma_lock->rw_sema);
 		__hugetlb_vma_unlock_write_put(vma_lock);
 	}
 }
 
-static int hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
+static int hugetlb_vma_data_alloc(struct vm_area_struct *vma)
 {
-	struct hugetlb_vma_lock *vma_lock;
+	struct hugetlb_shared_vma_data *data;
 
 	/* Only establish in (flags) sharable vmas */
 	if (!vma || !(vma->vm_flags & VM_MAYSHARE))
@@ -7013,8 +7014,8 @@ static int hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
 	if (vma->vm_private_data)
 		return 0;
 
-	vma_lock = kmalloc(sizeof(*vma_lock), GFP_KERNEL);
-	if (!vma_lock) {
+	data = kmalloc(sizeof(*data), GFP_KERNEL);
+	if (!data) {
 		/*
 		 * If we can not allocate structure, then vma can not
 		 * participate in pmd sharing. This is only a possible
@@ -7025,14 +7026,14 @@ static int hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
 		 * until the file is removed. Warn in the unlikely case of
 		 * allocation failure.
 		 */
-		pr_warn_once("HugeTLB: unable to allocate vma specific lock\n");
+		pr_warn_once("HugeTLB: unable to allocate vma shared data\n");
 		return -ENOMEM;
 	}
 
-	kref_init(&vma_lock->refs);
-	init_rwsem(&vma_lock->rw_sema);
-	vma_lock->vma = vma;
-	vma->vm_private_data = vma_lock;
+	kref_init(&data->vma_lock.refs);
+	init_rwsem(&data->vma_lock.rw_sema);
+	data->vma_lock.vma = vma;
+	vma->vm_private_data = data;
 
 	return 0;
 }
@@ -7157,11 +7158,11 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
 {
 }
 
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
+static void hugetlb_vma_data_free(struct vm_area_struct *vma)
 {
 }
 
-static int hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
+static int hugetlb_vma_data_alloc(struct vm_area_struct *vma)
 {
 	return 0;
 }