From patchwork Fri Apr 14 00:11:50 2023
From: Ackerley Tng
Date: Fri, 14 Apr 2023 00:11:50 +0000
Message-ID: <476aa5a107994d293dcdfc5a620cc52f625768c2.1681430907.git.ackerleytng@google.com>
Subject: [RFC PATCH 1/6] mm: shmem: Refactor out shmem_shared_policy() function
To: kvm@vger.kernel.org, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, qemu-devel@nongnu.org
Cc: aarcange@redhat.com, ak@linux.intel.com, akpm@linux-foundation.org,
    arnd@arndb.de, bfields@fieldses.org, bp@alien8.de,
    chao.p.peng@linux.intel.com, corbet@lwn.net, dave.hansen@intel.com,
    david@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, hpa@zytor.com,
    hughd@google.com, jlayton@kernel.org, jmattson@google.com, joro@8bytes.org,
    jun.nakajima@intel.com, kirill.shutemov@linux.intel.com,
    linmiaohe@huawei.com, luto@kernel.org, mail@maciej.szmigiero.name,
    mhocko@suse.com, michael.roth@amd.com, mingo@redhat.com,
    naoya.horiguchi@nec.com, pbonzini@redhat.com, qperret@google.com,
    rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
    steven.price@arm.com, tabba@google.com, tglx@linutronix.de,
    vannapurve@google.com, vbabka@suse.cz, vkuznets@redhat.com,
    wanpengli@tencent.com, wei.w.wang@intel.com, x86@kernel.org,
    yu.c.zhang@linux.intel.com, muchun.song@linux.dev, feng.tang@intel.com,
    brgerst@gmail.com, rdunlap@infradead.org, masahiroy@kernel.org,
    mailhol.vincent@wanadoo.fr, Ackerley Tng

Refactor out shmem_shared_policy() to allow reading of a file's shared
mempolicy.

Signed-off-by: Ackerley Tng
---
 include/linux/shmem_fs.h |  7 +++++++
 mm/shmem.c               | 10 ++++++----
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d9e57485a686..bc1eeb4b4bd9 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -134,6 +134,13 @@ static inline bool shmem_file(struct file *file)
 	return shmem_mapping(file->f_mapping);
 }
 
+static inline struct shared_policy *shmem_shared_policy(struct file *file)
+{
+	struct inode *inode = file_inode(file);
+
+	return &SHMEM_I(inode)->policy;
+}
+
 /*
  * If fallocate(FALLOC_FL_KEEP_SIZE) has been used, there may be pages
  * beyond i_size's notion of EOF, which fallocate has committed to reserving:
diff --git a/mm/shmem.c b/mm/shmem.c
index b053cd1f12da..4f801f398454 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2248,20 +2248,22 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 }
 
 #ifdef CONFIG_NUMA
+
 static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
 {
-	struct inode *inode = file_inode(vma->vm_file);
-	return mpol_set_shared_policy(&SHMEM_I(inode)->policy, vma, mpol);
+	struct shared_policy *info;
+
+	info = shmem_shared_policy(vma->vm_file);
+	return mpol_set_shared_policy(info, vma, mpol);
 }
 
 static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
					  unsigned long addr)
 {
-	struct inode *inode = file_inode(vma->vm_file);
 	pgoff_t index;
 
 	index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
-	return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index);
+	return mpol_shared_policy_lookup(shmem_shared_policy(vma->vm_file), index);
 }
 #endif
Date: Fri, 14 Apr 2023 00:11:51 +0000
From: Ackerley Tng
Subject: [RFC PATCH 2/6] mm: mempolicy: Refactor out mpol_init_from_nodemask

Refactor out mpol_init_from_nodemask() to simplify logic in do_mbind().

mpol_init_from_nodemask() will be used to perform similar functionality
in do_memfd_restricted_bind() in a later patch.

Signed-off-by: Ackerley Tng
---
 mm/mempolicy.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a256a241fd1d..a2655b626731 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1254,6 +1254,25 @@ static struct page *new_page(struct page *page, unsigned long start)
 }
 #endif
 
+static long mpol_init_from_nodemask(struct mempolicy *mpol, const nodemask_t *nmask,
+				    bool always_unlock)
+{
+	long err;
+	NODEMASK_SCRATCH(scratch);
+
+	if (!scratch)
+		return -ENOMEM;
+
+	/* Cannot take lock before allocating in NODEMASK_SCRATCH */
+	mmap_write_lock(current->mm);
+	err = mpol_set_nodemask(mpol, nmask, scratch);
+	if (always_unlock || err)
+		mmap_write_unlock(current->mm);
+
+	NODEMASK_SCRATCH_FREE(scratch);
+	return err;
+}
+
 static long do_mbind(unsigned long start, unsigned long len,
		     unsigned short mode, unsigned short mode_flags,
		     nodemask_t *nmask, unsigned long flags)
@@ -1306,17 +1325,8 @@ static long do_mbind(unsigned long start, unsigned long len,
 		lru_cache_disable();
 	}
-	{
-		NODEMASK_SCRATCH(scratch);
-		if (scratch) {
-			mmap_write_lock(mm);
-			err = mpol_set_nodemask(new, nmask, scratch);
-			if (err)
-				mmap_write_unlock(mm);
-		} else
-			err = -ENOMEM;
-		NODEMASK_SCRATCH_FREE(scratch);
-	}
+
+	err = mpol_init_from_nodemask(new, nmask, false);
 	if (err)
 		goto mpol_out;
Date: Fri, 14 Apr 2023 00:11:52 +0000
From: Ackerley Tng
Message-ID: <43e1c951125d6700586dbd332c2036db0f2f5f2d.1681430907.git.ackerleytng@google.com>
Subject: [RFC PATCH 3/6] mm: mempolicy: Refactor out __mpol_set_shared_policy()

Refactor out __mpol_set_shared_policy() to remove dependency on struct
vm_area_struct, since only 2 parameters from struct vm_area_struct are
used.

__mpol_set_shared_policy() will be used in a later patch by
restrictedmem_set_shared_policy().

Signed-off-by: Ackerley Tng
---
 include/linux/mempolicy.h |  2 ++
 mm/mempolicy.c            | 29 +++++++++++++++++++----------
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..9a2a2dd95432 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -126,6 +126,8 @@ struct shared_policy {
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
 void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
+int __mpol_set_shared_policy(struct shared_policy *info, struct mempolicy *mpol,
+			     unsigned long pgoff_start, unsigned long npages);
 int mpol_set_shared_policy(struct shared_policy *info,
				struct vm_area_struct *vma,
				struct mempolicy *new);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a2655b626731..f3fa5494e4a8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2817,30 +2817,39 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	}
 }
 
-int mpol_set_shared_policy(struct shared_policy *info,
-			struct vm_area_struct *vma, struct mempolicy *npol)
+int __mpol_set_shared_policy(struct shared_policy *info, struct mempolicy *mpol,
+			     unsigned long pgoff_start, unsigned long npages)
 {
 	int err;
 	struct sp_node *new = NULL;
-	unsigned long sz = vma_pages(vma);
+	unsigned long pgoff_end = pgoff_start + npages;
 
 	pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n",
-		 vma->vm_pgoff,
-		 sz, npol ? npol->mode : -1,
-		 npol ? npol->flags : -1,
-		 npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE);
+		 pgoff_start, npages,
+		 mpol ? mpol->mode : -1,
+		 mpol ? mpol->flags : -1,
+		 mpol ? nodes_addr(mpol->nodes)[0] : NUMA_NO_NODE);
 
-	if (npol) {
-		new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
+	if (mpol) {
+		new = sp_alloc(pgoff_start, pgoff_end, mpol);
 		if (!new)
 			return -ENOMEM;
 	}
-	err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
+
+	err = shared_policy_replace(info, pgoff_start, pgoff_end, new);
+
 	if (err && new)
 		sp_free(new);
+
 	return err;
 }
 
+int mpol_set_shared_policy(struct shared_policy *info,
+			   struct vm_area_struct *vma, struct mempolicy *mpol)
+{
+	return __mpol_set_shared_policy(info, mpol, vma->vm_pgoff, vma_pages(vma));
+}
+
 /* Free a backing policy store on inode delete. */
 void mpol_free_shared_policy(struct shared_policy *p)
 {
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=W+vIzhvgnDTeM5yLC6D6DMUdvSTbR9axXaSgRhroXf8=; b=mW9eVt20/JcXUsgxb+xjqD4VG4js+2q/p5hJJDDZIcnukRm0syQv/TXqiec2heBNck /LJ2OaI6hIksRPbp9XAhh99LqDjoZM2pQCkJ+bmJzc8yZHZYkwMAwFEhAsHbl30CzShC AnjRJgdI7su71kGpp2x6zyUwFmi0ZsMW7HG1IhDvuRpLPR8dBsIbCcLfLX3bb41Nf7ID fPGnb4gcd1xumZwEpzLDR6LXrZNMCwms/UiqwFrs/MG66By8fMMhSBTwwbfU9CItCsXE IVTc6CfCREM1gKWUMxH19y2K58Mf0+JvqHllDRZBSVXeFCA0+XBcsiL9bPxv2h5LcfvI JjvA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681431125; x=1684023125; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=W+vIzhvgnDTeM5yLC6D6DMUdvSTbR9axXaSgRhroXf8=; b=S8VhoQW4Iqh7xQAeZkALUZXs8ANVyjbs1OgoJHMFh3UCI5AfiiVLDrdK+YfYP+Zdcr FSknRK7EosuxyQHWFUIjnemj3FjG4AuA2aqlcL0Jv+3vmtnm/ER8nP1+KO0BFhDr9p+z Qy4xgR0mZcaN4560xcdJHXzQZQLh+F7K/QWApKHJevIStyXEuMfTI7nR/uE59O8OSNc+ PlverhwhR0kCLIXZZ+Yu9gh6B4kHzKfHiT3FF9ocfPZD2ibrxW5EfCA5x3pPsnAvFqO+ t7WdGrZ23sBwdKJuN6ugZFktvibGpbf1ciMvf5LSillFm9ieWcOYh2FswazpRbKlfGrt 2Hqg== X-Gm-Message-State: AAQBX9eRzYNnyRPd/SYdXZd308BsTlhE7hi4E/HbLzERGmyPr6ZLK7CU dpN4f7pGBlSbT+FFu/5+gqVf5YqAHRZuUdCgPg== X-Google-Smtp-Source: AKy350b34CBn9n2raOJ+zF+ClQSsfsH/2Lr8AcQiejmlqBPjPibu2aqY4QIJU2kHR7dXtCx3CNEm4OqoAAxVmXTquQ== X-Received: from ackerleytng-cloudtop.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:1f5f]) (user=ackerleytng job=sendgmr) by 2002:a81:4328:0:b0:545:4133:fc40 with SMTP id q40-20020a814328000000b005454133fc40mr2444452ywa.9.1681431125361; Thu, 13 Apr 2023 17:12:05 -0700 (PDT) Date: Fri, 14 Apr 2023 00:11:53 +0000 In-Reply-To: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.40.0.634.g4ca3ef3211-goog Message-ID: <17bb8e925c08f27c627cd1f2bbb2714daf590c1d.1681430907.git.ackerleytng@google.com> Subject: [RFC PATCH 4/6] mm: mempolicy: Add and expose mpol_create From: Ackerley Tng 
To: kvm@vger.kernel.org, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, qemu-devel@nongnu.org Cc: aarcange@redhat.com, ak@linux.intel.com, akpm@linux-foundation.org, arnd@arndb.de, bfields@fieldses.org, bp@alien8.de, chao.p.peng@linux.intel.com, corbet@lwn.net, dave.hansen@intel.com, david@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, hpa@zytor.com, hughd@google.com, jlayton@kernel.org, jmattson@google.com, joro@8bytes.org, jun.nakajima@intel.com, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, luto@kernel.org, mail@maciej.szmigiero.name, mhocko@suse.com, michael.roth@amd.com, mingo@redhat.com, naoya.horiguchi@nec.com, pbonzini@redhat.com, qperret@google.com, rppt@kernel.org, seanjc@google.com, shuah@kernel.org, steven.price@arm.com, tabba@google.com, tglx@linutronix.de, vannapurve@google.com, vbabka@suse.cz, vkuznets@redhat.com, wanpengli@tencent.com, wei.w.wang@intel.com, x86@kernel.org, yu.c.zhang@linux.intel.com, muchun.song@linux.dev, feng.tang@intel.com, brgerst@gmail.com, rdunlap@infradead.org, masahiroy@kernel.org, mailhol.vincent@wanadoo.fr, Ackerley Tng Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org mpol_create builds a mempolicy based on mode, nmask and maxnode. mpol_create is exposed for use in memfd_restricted_bind() in a later patch. 
Signed-off-by: Ackerley Tng --- include/linux/mempolicy.h | 2 ++ mm/mempolicy.c | 39 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 9a2a2dd95432..15facd9de087 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -125,6 +125,8 @@ struct shared_policy { }; int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst); +struct mempolicy *mpol_create( + unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode) void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol); int __mpol_set_shared_policy(struct shared_policy *info, struct mempolicy *mpol, unsigned long pgoff_start, unsigned long npages); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index f3fa5494e4a8..f4fe241c17ff 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -3181,3 +3181,42 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) p += scnprintf(p, buffer + maxlen - p, ":%*pbl", nodemask_pr_args(&nodes)); } + +/** + * mpol_create - build mempolicy based on mode, nmask and maxnode + * @mode: policy mode, as in MPOL_MODE_FLAGS + * @nmask: node mask from userspace + * @maxnode: number of valid bits in nmask + * + * Will allocate a new struct mempolicy that has to be released with + * mpol_put. Will also take and release the write lock mmap_lock in current->mm. 
+ */ +struct mempolicy *mpol_create( + unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode) +{ + int err; + unsigned short mode_flags; + nodemask_t nodes; + int lmode = mode; + struct mempolicy *mpol; + + err = sanitize_mpol_flags(&lmode, &mode_flags); + if (err) + return ERR_PTR(err); + + err = get_nodes(&nodes, nmask, maxnode); + if (err) + return ERR_PTR(err); + + mpol = mpol_new(mode, mode_flags, &nodes); + if (IS_ERR(mpol)) + return mpol; + + err = mpol_init_from_nodemask(mpol, &nodes, true); + if (err) { + mpol_put(mpol); + return ERR_PTR(err); + } + + return mpol; +} From patchwork Fri Apr 14 00:11:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ackerley Tng X-Patchwork-Id: 13210781 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65FC3C77B78 for ; Fri, 14 Apr 2023 00:12:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230373AbjDNAMM (ORCPT ); Thu, 13 Apr 2023 20:12:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230020AbjDNAMJ (ORCPT ); Thu, 13 Apr 2023 20:12:09 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9291530F3 for ; Thu, 13 Apr 2023 17:12:07 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 188-20020a2504c5000000b00b8f6f5dca5dso244968ybe.7 for ; Thu, 13 Apr 2023 17:12:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1681431127; x=1684023127; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:from:to:cc:subject:date:message-id:reply-to; bh=Xv2Sbflv1PdMqZUyau/vGtw69NpiI05I8fyTNzzRBtE=; b=IeAvbh5LiWqeMu3W4lreAV4pEYXDzFTMIK++YoYtL9HONMbDl2n8JG1cJvXTxQ4W/Z GxRYbHWg5mdzTkvevWjucHzalLOLaRJPEZSOdabNIZoeGxpZJ7sqGOpBbuMaxh+N9UO4 laslticySsTQ5Z1P33WcVcn5XKY3X/szRjrTAuUNdQOCywlR5EjataM835zXeQyYF84o sNh45HAi3xIJdUtIf7JG4FaHzE4kQ9wGOth27DrwXDDPKWE4i8uvOAxFPGUJWILtbFqJ B0wpIq7/vwtgTuDsvfcJaOD5/zbThG99OLcVYbtOP9o7ei2nNW7AfnDuSWHGLmDR7UAC GOjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681431127; x=1684023127; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Xv2Sbflv1PdMqZUyau/vGtw69NpiI05I8fyTNzzRBtE=; b=e1XVuazSLfcDF+OcyVBMnqypxyM7kxLWbiWayNzcDwe2biNBRC/kzaU+VGkOq5pTig aNRr+mWx8E+1242tr2AOPr+bEEcW2rHkafXrfvGmkxBqfF4VV6nTIRBcKrgPWfAJgvFz J0pwmDyDxmB2zFIG6zxsLlEXdrZ0cATM2iqlLNeRYNg+mTf/Lw9p0y1IxZ0Iq4zm+dxb 4qLiWVXylKd61zcsRffhq7R+4NpfWIrxd5zLJgI+hov7p79KRJMmro99Kn7Z4rUhGLD/ YUdfpJJ+HC7iQsFtX+sKzMulVjzuznOdBMJ9Dy5kBwA8PasdLrTPz2DFp1w46/s7+B2z xP2A== X-Gm-Message-State: AAQBX9dquMWQ1OzY9mTZjUkvVPevYd4HKF1uRv1gpTBXvCCTuD4/I0qF uESGEnGN/rrwnBYB6fbr9LnqREk1dAxoHwSphA== X-Google-Smtp-Source: AKy350bz8+zUcgBBK3sJ0KBBaZXIEou5c04bKvPj9N3prJO/LHXQy58hyZlX1YHGEbHTNZX5qvCYitSezXVlLtbvug== X-Received: from ackerleytng-cloudtop.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:1f5f]) (user=ackerleytng job=sendgmr) by 2002:a25:68cc:0:b0:a27:3ecc:ffe7 with SMTP id d195-20020a2568cc000000b00a273eccffe7mr5642963ybc.3.1681431127201; Thu, 13 Apr 2023 17:12:07 -0700 (PDT) Date: Fri, 14 Apr 2023 00:11:54 +0000 In-Reply-To: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.40.0.634.g4ca3ef3211-goog Message-ID: Subject: [RFC PATCH 5/6] mm: restrictedmem: Add memfd_restricted_bind() syscall From: Ackerley Tng To: kvm@vger.kernel.org, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, 
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, qemu-devel@nongnu.org Cc: aarcange@redhat.com, ak@linux.intel.com, akpm@linux-foundation.org, arnd@arndb.de, bfields@fieldses.org, bp@alien8.de, chao.p.peng@linux.intel.com, corbet@lwn.net, dave.hansen@intel.com, david@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, hpa@zytor.com, hughd@google.com, jlayton@kernel.org, jmattson@google.com, joro@8bytes.org, jun.nakajima@intel.com, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, luto@kernel.org, mail@maciej.szmigiero.name, mhocko@suse.com, michael.roth@amd.com, mingo@redhat.com, naoya.horiguchi@nec.com, pbonzini@redhat.com, qperret@google.com, rppt@kernel.org, seanjc@google.com, shuah@kernel.org, steven.price@arm.com, tabba@google.com, tglx@linutronix.de, vannapurve@google.com, vbabka@suse.cz, vkuznets@redhat.com, wanpengli@tencent.com, wei.w.wang@intel.com, x86@kernel.org, yu.c.zhang@linux.intel.com, muchun.song@linux.dev, feng.tang@intel.com, brgerst@gmail.com, rdunlap@infradead.org, masahiroy@kernel.org, mailhol.vincent@wanadoo.fr, Ackerley Tng Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org memfd_restricted_bind() sets the NUMA memory policy, which consists of a policy mode and zero or more nodes, for an offset within a restrictedmem file with file descriptor fd and continuing for len bytes. This is intended to be like mbind() but specially for restrictedmem files, which cannot be mmap()ed into userspace and hence has no memory addresses that can be used with mbind(). Unlike mbind(), memfd_restricted_bind() will override any existing memory policy if a new memory policy is defined for the same ranges. For now, memfd_restricted_bind() does not perform migrations and no flags are supported. This syscall is specialised just for restrictedmem files because this functionality is not required by other files. 
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 arch/x86/entry/syscalls/syscall_32.tbl |  1 +
 arch/x86/entry/syscalls/syscall_64.tbl |  1 +
 include/linux/mempolicy.h              |  2 +-
 include/linux/syscalls.h               |  5 ++
 include/uapi/asm-generic/unistd.h      |  5 +-
 include/uapi/linux/mempolicy.h         |  7 ++-
 kernel/sys_ni.c                        |  1 +
 mm/restrictedmem.c                     | 75 ++++++++++++++++++++++++++
 scripts/checksyscalls.sh               |  1 +
 9 files changed, 95 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index dc70ba90247e..c94e9ce46cc3 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -456,3 +456,4 @@
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
 451	i386	memfd_restricted	sys_memfd_restricted
+452	i386	memfd_restricted_bind	sys_memfd_restricted_bind
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 06516abc8318..6bd86b45d63a 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -373,6 +373,7 @@
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
 451	common	memfd_restricted	sys_memfd_restricted
+452	common	memfd_restricted_bind	sys_memfd_restricted_bind
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 15facd9de087..af62233df0c0 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -126,7 +126,7 @@ struct shared_policy {
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
 struct mempolicy *mpol_create(
-	unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode)
+	unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode);
 void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
 int __mpol_set_shared_policy(struct shared_policy *info, struct mempolicy *mpol,
 			     unsigned long pgoff_start, unsigned long npages);
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 660be0bf89d5..852b202d3837 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1059,6 +1059,11 @@ asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
 					    unsigned long home_node,
 					    unsigned long flags);
 asmlinkage long sys_memfd_restricted(unsigned int flags);
+asmlinkage long sys_memfd_restricted_bind(int fd, struct file_range __user *range,
+					  unsigned long mode,
+					  const unsigned long __user *nmask,
+					  unsigned long maxnode,
+					  unsigned int flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index e2ea7cd964f8..b5a1385bb4a7 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -889,10 +889,13 @@ __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 #ifdef __ARCH_WANT_MEMFD_RESTRICTED
 #define __NR_memfd_restricted 451
 __SYSCALL(__NR_memfd_restricted, sys_memfd_restricted)
+
+#define __NR_memfd_restricted_bind 452
+__SYSCALL(__NR_memfd_restricted_bind, sys_memfd_restricted_bind)
 #endif
 
 #undef __NR_syscalls
-#define __NR_syscalls 452
+#define __NR_syscalls 453
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 046d0ccba4cd..979499abd253 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -6,9 +6,9 @@
 #ifndef _UAPI_LINUX_MEMPOLICY_H
 #define _UAPI_LINUX_MEMPOLICY_H
 
+#include
 #include
-
 /*
  * Both the MPOL_* mempolicy mode and the MPOL_F_* optional mode flags are
  * passed by the user to either set_mempolicy() or mbind() in an 'int' actual.
@@ -72,4 +72,9 @@ enum {
 #define RECLAIM_WRITE (1<<1)	/* Writeout pages during reclaim */
 #define RECLAIM_UNMAP (1<<2)	/* Unmap pages during reclaim */
 
+struct file_range {
+	__kernel_loff_t offset;
+	__kernel_size_t len;
+};
+
 #endif /* _UAPI_LINUX_MEMPOLICY_H */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 7c4a32cbd2e7..db24d3fe6dc5 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -362,6 +362,7 @@ COND_SYSCALL(memfd_secret);
 
 /* memfd_restricted */
 COND_SYSCALL(memfd_restricted);
+COND_SYSCALL(memfd_restricted_bind);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
index 55e99e6c09a1..9c249722c61b 100644
--- a/mm/restrictedmem.c
+++ b/mm/restrictedmem.c
@@ -1,4 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include "linux/sbitmap.h"
 #include
 #include
@@ -359,3 +360,77 @@ int restrictedmem_get_page(struct file *file, pgoff_t offset,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(restrictedmem_get_page);
+
+static int restrictedmem_set_shared_policy(
+	struct file *file, loff_t start, size_t len, struct mempolicy *mpol)
+{
+	struct restrictedmem *rm;
+	unsigned long end;
+
+	if (!PAGE_ALIGNED(start))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+	end = start + len;
+
+	if (end < start)
+		return -EINVAL;
+	if (end == start)
+		return 0;
+
+	rm = file->f_mapping->private_data;
+	return __mpol_set_shared_policy(shmem_shared_policy(rm->memfd), mpol,
+					start >> PAGE_SHIFT, len >> PAGE_SHIFT);
+}
+
+static long do_memfd_restricted_bind(
+	int fd, loff_t offset, size_t len,
+	unsigned long mode, const unsigned long __user *nmask,
+	unsigned long maxnode, unsigned int flags)
+{
+	long ret;
+	struct fd f;
+	struct mempolicy *mpol;
+
+	/* None of the flags are supported */
+	if (flags)
+		return -EINVAL;
+
+	f = fdget_raw(fd);
+	if (!f.file)
+		return -EBADF;
+
+	/* Exit via "out" from here on, so the fd reference is dropped */
+	ret = -EINVAL;
+	if (!file_is_restrictedmem(f.file))
+		goto out;
+
+	mpol = mpol_create(mode, nmask, maxnode);
+	if (IS_ERR(mpol)) {
+		ret = PTR_ERR(mpol);
+		goto out;
+	}
+
+	ret = restrictedmem_set_shared_policy(f.file, offset, len, mpol);
+
+	mpol_put(mpol);
+
+out:
+	fdput(f);
+
+	return ret;
+}
+
+SYSCALL_DEFINE6(memfd_restricted_bind, int, fd, struct file_range __user *, range,
+		unsigned long, mode, const unsigned long __user *, nmask,
+		unsigned long, maxnode, unsigned int, flags)
+{
+	loff_t offset;
+	size_t len;
+
+	if (unlikely(get_user(offset, &range->offset)))
+		return -EFAULT;
+	if (unlikely(get_user(len, &range->len)))
+		return -EFAULT;
+
+	return do_memfd_restricted_bind(fd, offset, len, mode, nmask,
+					maxnode, flags);
+}
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index 3c4d2508226a..e253529cf1ec 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -46,6 +46,7 @@ cat << EOF
 #ifndef __ARCH_WANT_MEMFD_RESTRICTED
 #define __IGNORE_memfd_restricted
+#define __IGNORE_memfd_restricted_bind
 #endif
 
 /* Missing flags argument */

From patchwork Fri Apr 14 00:11:55 2023
X-Patchwork-Submitter: Ackerley Tng
X-Patchwork-Id: 13210782
Date: Fri, 14 Apr 2023 00:11:55 +0000
Message-ID: <7b40fc4afa41e382d72f556399ed5e0808b969b5.1681430907.git.ackerleytng@google.com>
Subject: [RFC PATCH 6/6] selftests: mm: Add selftest for memfd_restricted_bind()
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, qemu-devel@nongnu.org

This selftest uses memfd_restricted_bind() to set the mempolicy for a
restrictedmem file, and then checks that pages were indeed allocated
according to that policy.

Because restrictedmem pages are never mapped into userspace memory, the
usual ways of checking which NUMA node a page was allocated on
(e.g. /proc/pid/numa_maps) cannot be used.
This selftest adds a small kernel module that overloads the ioctl syscall
on /proc/restrictedmem to request a restrictedmem page and return the NUMA
node it was allocated on. The page is freed within the ioctl handler.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 tools/testing/selftests/mm/.gitignore         |   1 +
 tools/testing/selftests/mm/Makefile           |   8 ++
 .../selftests/mm/memfd_restricted_bind.c      | 139 ++++++++++++++++++
 .../mm/restrictedmem_testmod/Makefile         |  21 +++
 .../restrictedmem_testmod.c                   |  89 +++++++++++
 tools/testing/selftests/mm/run_vmtests.sh     |   6 +
 6 files changed, 264 insertions(+)
 create mode 100644 tools/testing/selftests/mm/memfd_restricted_bind.c
 create mode 100644 tools/testing/selftests/mm/restrictedmem_testmod/Makefile
 create mode 100644 tools/testing/selftests/mm/restrictedmem_testmod/restrictedmem_testmod.c

diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index fb6e4233374d..10c5701b9645 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -31,6 +31,7 @@ map_fixed_noreplace
 write_to_hugetlbfs
 hmm-tests
 memfd_restricted
+memfd_restricted_bind
 memfd_secret
 soft-dirty
 split_huge_page_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 5ec338ea1fed..4a6cf922db45 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -46,6 +46,8 @@ TEST_GEN_FILES += map_fixed_noreplace
 TEST_GEN_FILES += map_hugetlb
 TEST_GEN_FILES += map_populate
 TEST_GEN_FILES += memfd_restricted
+TEST_GEN_FILES += memfd_restricted_bind
+TEST_GEN_FILES += restrictedmem_testmod.ko
 TEST_GEN_FILES += memfd_secret
 TEST_GEN_FILES += migration
 TEST_GEN_FILES += mlock-random-test
@@ -171,6 +173,12 @@ $(OUTPUT)/ksm_tests: LDLIBS += -lnuma
 
 $(OUTPUT)/migration: LDLIBS += -lnuma
 
+$(OUTPUT)/memfd_restricted_bind: LDLIBS += -lnuma
+
+$(OUTPUT)/restrictedmem_testmod.ko: $(wildcard restrictedmem_testmod/Makefile restrictedmem_testmod/*.[ch])
+	$(call msg,MOD,,$@)
+	$(Q)$(MAKE) -C restrictedmem_testmod
+	$(Q)cp restrictedmem_testmod/restrictedmem_testmod.ko $@
+
 local_config.mk local_config.h: check_config.sh
 	/bin/sh ./check_config.sh $(CC)
diff --git a/tools/testing/selftests/mm/memfd_restricted_bind.c b/tools/testing/selftests/mm/memfd_restricted_bind.c
new file mode 100644
index 000000000000..64aa44c72d09
--- /dev/null
+++ b/tools/testing/selftests/mm/memfd_restricted_bind.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <errno.h>
+#include <fcntl.h>
+#include <linux/mempolicy.h>
+#include <numa.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+
+#include "../kselftest_harness.h"
+
+int memfd_restricted(int flags, int fd)
+{
+	return syscall(__NR_memfd_restricted, flags, fd);
+}
+
+int memfd_restricted_bind(
+	int fd, loff_t offset, unsigned long len, unsigned long mode,
+	const unsigned long *nmask, unsigned long maxnode, unsigned int flags)
+{
+	struct file_range range = {
+		.offset = offset,
+		.len = len,
+	};
+
+	return syscall(__NR_memfd_restricted_bind, fd, &range, mode, nmask, maxnode, flags);
+}
+
+int memfd_restricted_bind_node(
+	int fd, loff_t offset, unsigned long len,
+	unsigned long mode, int node, unsigned int flags)
+{
+	int ret;
+	struct bitmask *mask = numa_allocate_nodemask();
+
+	numa_bitmask_setbit(mask, node);
+
+	ret = memfd_restricted_bind(fd, offset, len, mode, mask->maskp, mask->size, flags);
+
+	numa_free_nodemask(mask);
+
+	return ret;
+}
+
+/**
+ * Allocates a page in restrictedmem_fd, reads the NUMA node that the page
+ * was allocated on and returns it. Returns a negative value on error.
+ */
+int read_node(int restrictedmem_fd, unsigned long offset)
+{
+	int ret;
+	int fd;
+
+	fd = open("/proc/restrictedmem", O_RDWR);
+	if (fd < 0)
+		return -ENOTSUP;
+
+	ret = ioctl(fd, restrictedmem_fd, offset);
+
+	close(fd);
+
+	return ret;
+}
+
+bool restrictedmem_testmod_loaded(void)
+{
+	struct stat buf;
+
+	return stat("/proc/restrictedmem", &buf) == 0;
+}
+
+FIXTURE(restrictedmem_file)
+{
+	int fd;
+	size_t page_size;
+};
+
+FIXTURE_SETUP(restrictedmem_file)
+{
+	int fd;
+	int ret;
+	struct stat stat;
+
+	fd = memfd_restricted(0, -1);
+	ASSERT_GT(fd, 0);
+
+#define RESTRICTEDMEM_TEST_NPAGES 16
+	ret = ftruncate(fd, getpagesize() * RESTRICTEDMEM_TEST_NPAGES);
+	ASSERT_EQ(ret, 0);
+
+	ret = fstat(fd, &stat);
+	ASSERT_EQ(ret, 0);
+
+	self->fd = fd;
+	self->page_size = stat.st_blksize;
+};
+
+FIXTURE_TEARDOWN(restrictedmem_file)
+{
+	int ret;
+
+	ret = close(self->fd);
+	EXPECT_EQ(ret, 0);
+}
+
+#define ASSERT_REQUIREMENTS()						\
+	do {								\
+		struct bitmask *mask = numa_get_membind();		\
+		ASSERT_GT(numa_num_configured_nodes(), 1);		\
+		ASSERT_TRUE(numa_bitmask_isbitset(mask, 0));		\
+		ASSERT_TRUE(numa_bitmask_isbitset(mask, 1));		\
+		numa_bitmask_free(mask);				\
+		ASSERT_TRUE(restrictedmem_testmod_loaded());		\
+	} while (0)
+
+TEST_F(restrictedmem_file, memfd_restricted_bind_works_as_expected)
+{
+	int ret;
+	int node;
+	int i;
+	int node_bindings[] = { 1, 0, 1, 0, 1, 1, 0, 1 };
+
+	ASSERT_REQUIREMENTS();
+
+	for (i = 0; i < ARRAY_SIZE(node_bindings); i++) {
+		ret = memfd_restricted_bind_node(
+			self->fd, i * self->page_size, self->page_size,
			MPOL_BIND, node_bindings[i], 0);
+		ASSERT_EQ(ret, 0);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(node_bindings); i++) {
+		node = read_node(self->fd, i * self->page_size);
+		ASSERT_EQ(node, node_bindings[i]);
+	}
+}
+
+TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/mm/restrictedmem_testmod/Makefile b/tools/testing/selftests/mm/restrictedmem_testmod/Makefile
new file mode 100644
index 000000000000..11b1d5d15e3c
--- /dev/null
+++ b/tools/testing/selftests/mm/restrictedmem_testmod/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+RESTRICTEDMEM_TESTMOD_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(RESTRICTEDMEM_TESTMOD_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = restrictedmem_testmod.ko
+
+obj-m += restrictedmem_testmod.o
+CFLAGS_restrictedmem_testmod.o = -I$(src)
+
+all:
+	+$(Q)make -C $(KDIR) M=$(RESTRICTEDMEM_TESTMOD_DIR) modules
+
+clean:
+	+$(Q)make -C $(KDIR) M=$(RESTRICTEDMEM_TESTMOD_DIR) clean
diff --git a/tools/testing/selftests/mm/restrictedmem_testmod/restrictedmem_testmod.c b/tools/testing/selftests/mm/restrictedmem_testmod/restrictedmem_testmod.c
new file mode 100644
index 000000000000..d35f55d26408
--- /dev/null
+++ b/tools/testing/selftests/mm/restrictedmem_testmod/restrictedmem_testmod.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "linux/printk.h"
+#include "linux/types.h"
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pagemap.h>
+#include <linux/proc_fs.h>
+#include <linux/restrictedmem.h>
+#include <linux/uaccess.h>
+
+MODULE_DESCRIPTION("A kernel module to support restrictedmem testing");
+MODULE_AUTHOR("ackerleytng@google.com");
+MODULE_LICENSE("GPL");
+
+static void dummy_op(struct restrictedmem_notifier *notifier,
+		     pgoff_t start, pgoff_t end)
+{
+}
+
+static const struct restrictedmem_notifier_ops dummy_notifier_ops = {
+	.invalidate_start = dummy_op,
+	.invalidate_end = dummy_op,
+	.error = dummy_op,
+};
+
+static struct restrictedmem_notifier dummy_notifier = {
+	.ops = &dummy_notifier_ops,
+};
+
+static long restrictedmem_testmod_ioctl(
+	struct file *file, unsigned int cmd, unsigned long offset)
+{
+	long ret;
+	struct fd f;
+	struct page *page;
+	pgoff_t start = offset >> PAGE_SHIFT;
+
+	/* The restrictedmem fd is passed as the ioctl cmd */
+	f = fdget(cmd);
+	if (!f.file)
+		return -EBADF;
+
+	ret = -EINVAL;
+	if (!file_is_restrictedmem(f.file))
+		goto out;
+
+	ret = restrictedmem_bind(f.file, start, start + 1, &dummy_notifier, true);
+	if (ret)
+		goto out;
+
+	ret = restrictedmem_get_page(f.file, (unsigned long)start, &page, NULL);
+	if (ret)
+		goto out_unbind;
+
+	ret = page_to_nid(page);
+
+	folio_put(page_folio(page));
+
+out_unbind:
+	restrictedmem_unbind(f.file, start, start + 1, &dummy_notifier);
+out:
+	fdput(f);
+
+	return ret;
+}
+
+static const struct proc_ops restrictedmem_testmod_ops = {
+	.proc_ioctl = restrictedmem_testmod_ioctl,
+};
+
+static struct proc_dir_entry *restrictedmem_testmod_entry;
+
+static int __init restrictedmem_testmod_init(void)
+{
+	restrictedmem_testmod_entry = proc_create(
+		"restrictedmem", 0660, NULL, &restrictedmem_testmod_ops);
+
+	return restrictedmem_testmod_entry ? 0 : -ENOMEM;
+}
+
+static void __exit restrictedmem_testmod_exit(void)
+{
+	proc_remove(restrictedmem_testmod_entry);
+}
+
+module_init(restrictedmem_testmod_init);
+module_exit(restrictedmem_testmod_exit);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 53de84e3ec2c..bdc853d6afe4 100644
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -40,6 +40,8 @@ separated by spaces:
 	test memadvise(2) MADV_POPULATE_{READ,WRITE} options
 - memfd_restricted
 	test memfd_restricted(2)
+- memfd_restricted_bind
+	test memfd_restricted_bind(2)
 - memfd_secret
 	test memfd_secret(2)
 - process_mrelease
@@ -240,6 +242,10 @@ CATEGORY="madv_populate" run_test ./madv_populate
 
 CATEGORY="memfd_restricted" run_test ./memfd_restricted
 
+test_selected "memfd_restricted_bind" && insmod ./restrictedmem_testmod.ko && \
+	CATEGORY="memfd_restricted_bind" run_test ./memfd_restricted_bind && \
+	rmmod restrictedmem_testmod > /dev/null
+
 CATEGORY="memfd_secret" run_test ./memfd_secret
 
 # KSM MADV_MERGEABLE test with 10 identical pages