From patchwork Tue Mar 8 11:28:31 2022
X-Patchwork-Submitter: Jarkko Sakkinen <jarkko@kernel.org>
X-Patchwork-Id: 12773544
From: Jarkko Sakkinen <jarkko@kernel.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: Dave Hansen, Nathaniel McCallum, Reinette Chatre, Alexander Viro,
    linux-sgx@vger.kernel.org, linux-kernel@vger.kernel.org,
    Andrew Morton, Jarkko Sakkinen
Subject: [PATCH RFC v3 1/3] mm: Add f_op->populate() for populating memory outside of core mm
Date: Tue, 8 Mar 2022 13:28:31 +0200
Message-Id: <20220308112833.262805-2-jarkko@kernel.org>
In-Reply-To: <20220308112833.262805-1-jarkko@kernel.org>
References: <20220308112833.262805-1-jarkko@kernel.org>
SGX memory is managed outside the core mm. It doesn't have a 'struct page'
and get_user_pages() doesn't work on it. Its VMAs are marked with VM_IO.
So, none of the existing methods for avoiding page faults work on SGX
memory.

Add f_op->populate() to overcome this issue:

  int (*populate)(struct file *, unsigned long start, unsigned long end);

Then, in populate_vma_page_range(), allow it to be used in place of
get_user_pages() for memory that falls outside its scope.

Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
---
v5:
* In v4, one diff in __mm_populate() was accidentally left out of the
  staging area; the conditional statement was not meant to be removed.
v4:
* Reimplement based on Dave's suggestion:
  https://lore.kernel.org/linux-sgx/c3083144-bfc1-3260-164c-e59b2d110df8@intel.com/
* Copy the text from the suggestion as part of the commit message (and
  cover letter).
v3:
-       if (!ret && do_populate && file->f_op->populate)
+       if (!ret && do_populate && file->f_op->populate &&
+           !!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
  (reported by Matthew Wilcox)
v2:
-       if (!ret && do_populate)
+       if (!ret && do_populate && file->f_op->populate)
  (reported by Jan Harkes)
---
 include/linux/fs.h |  1 +
 mm/gup.c           | 11 ++++++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index e2d892b201b0..54151af88ee0 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1993,6 +1993,7 @@ struct file_operations {
         long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
         long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
         int (*mmap) (struct file *, struct vm_area_struct *);
+        int (*populate)(struct file *, unsigned long start, unsigned long end);
         unsigned long mmap_supported_flags;
         int (*open) (struct inode *, struct file *);
         int (*flush) (struct file *, fl_owner_t id);
diff --git a/mm/gup.c b/mm/gup.c
index a9d4d724aef7..1f3a1d0b6e0d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1519,8 +1519,11 @@ long populate_vma_page_range(struct vm_area_struct *vma,
          * We made sure addr is within a VMA, so the following will
          * not result in a stack expansion that recurses back here.
          */
-        return __get_user_pages(mm, start, nr_pages, gup_flags,
-                                NULL, NULL, locked);
+        if ((vma->vm_flags & (VM_IO | VM_PFNMAP)) && vma->vm_file->f_op->populate)
+                return vma->vm_file->f_op->populate(vma->vm_file, start, end);
+        else
+                return __get_user_pages(mm, start, nr_pages, gup_flags,
+                                        NULL, NULL, locked);
 }

 /*
@@ -1598,6 +1601,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
         struct vm_area_struct *vma = NULL;
         int locked = 0;
         long ret = 0;
+        bool is_io;

         end = start + len;

@@ -1619,7 +1623,8 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
                  * range with the first VMA. Also, skip undesirable VMA types.
                  */
                 nend = min(end, vma->vm_end);
-                if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+                is_io = !!(vma->vm_flags & (VM_IO | VM_PFNMAP));
+                if (is_io && !(is_io && vma->vm_file->f_op->populate))
                         continue;
                 if (nstart < vma->vm_start)
                         nstart = vma->vm_start;
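
(Illustration only, not part of this series: the sketch below shows roughly
how a driver that maps memory with VM_IO/VM_PFNMAP could wire up the new
hook. Only the populate() prototype comes from this patch; every other name
and detail here is hypothetical.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>

static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
        /*
         * A real driver would install its vm_ops and/or remap its PFNs
         * here. These flags are what make core mm (get_user_pages() and
         * friends) skip the VMA, which is why ->populate() is needed.
         */
        vma->vm_flags |= VM_IO | VM_PFNMAP;
        return 0;
}

static int demo_populate(struct file *file, unsigned long start,
                         unsigned long end)
{
        /*
         * Called from populate_vma_page_range() for MAP_POPULATE/mlock()
         * style requests. A real implementation would walk [start, end)
         * and pre-fault its backing memory; the return value is handed
         * straight back to populate_vma_page_range()'s caller, with a
         * negative errno reporting failure. Nothing to do in this sketch.
         */
        return 0;
}

static const struct file_operations demo_fops = {
        .owner    = THIS_MODULE,
        .mmap     = demo_mmap,
        .populate = demo_populate,      /* new hook added by this patch */
};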