From patchwork Wed Oct 19 09:29:25 2022
From: Albert Huang <huangjie.albert@bytedance.com>
To: mike.kravetz@oracle.com
Cc: "huangjie.albert", Jonathan Corbet, Muchun Song, Andrew Morton,
 "Aneesh Kumar K.V", linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v2] mm: hugetlb: support for shared memory policy
Date: Wed, 19 Oct 2022 17:29:25 +0800
Message-Id: <20221019092928.44146-1-huangjie.albert@bytedance.com>

From: "huangjie.albert"

Implement get/set_policy
for hugetlb_vm_ops to support the shared policy. This ensures that the
mempolicy of all processes sharing a huge page file is consistent.

In some scenarios where huge pages are shared: if we need to limit the
memory usage of a VM to node0, we set qemu's mempolicy to bind to node0.
But if another process (such as virtiofsd) shares memory with the VM and
the page fault is triggered by virtiofsd, the allocated memory may go to
node1, depending on virtiofsd's own mempolicy. Although we can use the
memory preallocation provided by qemu to avoid this issue, that method
significantly increases the creation time of the VM (a few seconds,
depending on memory size).

After set/get_policy are hooked up in hugetlb_vm_ops, both the shared
memory segments created by shmget() with the SHM_HUGETLB flag and
mappings created by mmap(MAP_SHARED|MAP_HUGETLB) also support shared
policy.

v1->v2:
1. hugetlb shares the memory policy only when the vma has the VM_SHARED
   flag.
2. update the documentation.

Signed-off-by: huangjie.albert <huangjie.albert@bytedance.com>
---
 .../admin-guide/mm/numa_memory_policy.rst | 20 +++++++++------
 mm/hugetlb.c                              | 25 +++++++++++++++++++
 2 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index 5a6afecbb0d0..5672a6c2d2ef 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -133,14 +133,18 @@ Shared Policy
     the object share the policy, and all pages allocated for the shared
     object, by any task, will obey the shared policy.
 
-    As of 2.6.22, only shared memory segments, created by shmget() or
-    mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When shared
-    policy support was added to Linux, the associated data structures were
-    added to hugetlbfs shmem segments.  At the time, hugetlbfs did not
-    support allocation at fault time--a.k.a lazy allocation--so hugetlbfs
-    shmem segments were never "hooked up" to the shared policy support.
-    Although hugetlbfs segments now support lazy allocation, their support
-    for shared policy has not been completed.
+    As of 2.6.22, only shared memory segments, created by shmget() without
+    the SHM_HUGETLB flag or mmap(MAP_ANONYMOUS|MAP_SHARED) without the
+    MAP_HUGETLB flag, support shared policy.  When shared policy support
+    was added to Linux, the associated data structures were added to
+    hugetlbfs shmem segments.  At the time, hugetlbfs did not support
+    allocation at fault time--a.k.a lazy allocation--so hugetlbfs shmem
+    segments were never "hooked up" to the shared policy support.
+    Although hugetlbfs segments now support lazy allocation, their support
+    for shared policy has not been completed.
+
+    After set/get_policy were hooked up in hugetlb_vm_ops, shmget() with
+    SHM_HUGETLB and mmap(MAP_SHARED|MAP_HUGETLB) also support shared policy.
 
     As mentioned above in :ref:`VMA policies ` section, allocations of
     page cache pages for regular files mmap()ed

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 87d875e5e0a9..fc7038931832 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4632,6 +4632,27 @@ static vm_fault_t hugetlb_vm_op_fault(struct vm_fault *vmf)
 	return 0;
 }
 
+#ifdef CONFIG_NUMA
+int hugetlb_vm_op_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
+{
+	struct inode *inode = file_inode(vma->vm_file);
+
+	if (!(vma->vm_flags & VM_SHARED))
+		return 0;
+
+	return mpol_set_shared_policy(&HUGETLBFS_I(inode)->policy, vma, mpol);
+}
+
+struct mempolicy *hugetlb_vm_op_get_policy(struct vm_area_struct *vma, unsigned long addr)
+{
+	struct inode *inode = file_inode(vma->vm_file);
+	pgoff_t index;
+
+	index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+	return mpol_shared_policy_lookup(&HUGETLBFS_I(inode)->policy, index);
+}
+#endif
+
 /*
  * When a new function is introduced to vm_operations_struct and added
  * to hugetlb_vm_ops, please consider adding the function to shm_vm_ops.
@@ -4645,6 +4666,10 @@ const struct vm_operations_struct hugetlb_vm_ops = {
 	.close = hugetlb_vm_op_close,
 	.may_split = hugetlb_vm_op_split,
 	.pagesize = hugetlb_vm_op_pagesize,
+#ifdef CONFIG_NUMA
+	.set_policy = hugetlb_vm_op_set_policy,
+	.get_policy = hugetlb_vm_op_get_policy,
+#endif
 };
 
 static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,