From patchwork Fri Jun 4 20:59:39 2021
X-Patchwork-Submitter: "Kasireddy, Vivek" <vivek.kasireddy@intel.com>
X-Patchwork-Id: 12300791
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org
Cc: Dongwon Kim <dongwon.kim@intel.com>, Vivek Kasireddy <vivek.kasireddy@intel.com>,
 Gerd Hoffmann <kraxel@redhat.com>
Subject: [PATCH] udmabuf: Add support for mapping hugepages (v3)
Date: Fri, 4 Jun 2021 13:59:39 -0700
Message-Id: <20210604205939.376598-1-vivek.kasireddy@intel.com>
In-Reply-To: <20210604055903.g2bp4vuay2555omg@sirius.home.kraxel.org>
References: <20210604055903.g2bp4vuay2555omg@sirius.home.kraxel.org>

If the VMM's (QEMU's) memory backend is backed by memfd + hugepages
(hugetlbfs, not THP), we have to first find the hugepage(s) in which
the Guest allocations are located and then extract the regular 4K-sized
subpages from them.

v2: Ensure that the subpage and hugepage offsets are calculated correctly
when the range of subpage allocations cuts across multiple hugepages.

v3: Instead of repeatedly looking up the hugepage for each subpage, only
do it when the subpage allocation crosses over into a different hugepage.
(suggested by Gerd and DW)
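To make the v2/v3 offset math concrete, here is a small userspace sketch
(illustration only, not part of the patch) that mirrors the patch's
huge_page_shift()/huge_page_mask() arithmetic. The PAGE_SHIFT/HPAGE_*
macros below are local illustrative stand-ins, not the kernel's, and
2 MB hugepages with 4 KB base pages are assumed:

/* Illustration only: split a byte offset into a hugetlbfs-backed memfd
 * into a hugepage index, the 4K subpage index within that hugepage, and
 * the number of subpages per hugepage, i.e. the same math the patch
 * below does with huge_page_shift()/huge_page_mask(). */
#include <stdio.h>

#define PAGE_SHIFT   12                     /* 4 KB base pages */
#define HPAGE_SHIFT  21                     /* 2 MB hugepages */
#define HPAGE_SIZE   (1UL << HPAGE_SHIFT)
#define HPAGE_MASK   (~(HPAGE_SIZE - 1))

int main(void)
{
	/* an offset 5 subpages into the 4th hugepage */
	unsigned long offset = (3UL << HPAGE_SHIFT) + (5UL << PAGE_SHIFT);

	unsigned long pgoff     = offset >> HPAGE_SHIFT;                /* hugepage index: 3 */
	unsigned long subpgoff  = (offset & ~HPAGE_MASK) >> PAGE_SHIFT; /* subpage index: 5 */
	unsigned long maxsubpgs = HPAGE_SIZE >> PAGE_SHIFT;             /* 512 per 2 MB hugepage */

	printf("pgoff=%lu subpgoff=%lu maxsubpgs=%lu\n",
	       pgoff, subpgoff, maxsubpgs);
	return 0;
}

Once subpgoff reaches maxsubpgs, the patch drops its reference to the
current hugepage and advances pgoff to the next one, which is the v3
optimization described above.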
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/dma-buf/udmabuf.c | 51 +++++++++++++++++++++++++++++++++------
 1 file changed, 44 insertions(+), 7 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index db732f71e59a..2e02bbfe30fd 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -11,6 +11,7 @@
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/udmabuf.h>
+#include <linux/hugetlb.h>
 
 static const u32 list_limit = 1024; /* udmabuf_create_list->count limit */
 static const size_t size_limit_mb = 64; /* total dmabuf size, in megabytes */
@@ -163,7 +164,9 @@ static long udmabuf_create(struct miscdevice *device,
 	struct udmabuf *ubuf;
 	struct dma_buf *buf;
 	pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
-	struct page *page;
+	struct page *page, *hpage = NULL;
+	pgoff_t subpgoff, maxsubpgs;
+	struct hstate *hpstate;
 	int seals, ret = -EINVAL;
 	u32 i, flags;
 
@@ -194,7 +197,8 @@ static long udmabuf_create(struct miscdevice *device,
 		memfd = fget(list[i].memfd);
 		if (!memfd)
 			goto err;
-		if (!shmem_mapping(file_inode(memfd)->i_mapping))
+		if (!shmem_mapping(file_inode(memfd)->i_mapping) &&
+		    !is_file_hugepages(memfd))
 			goto err;
 		seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
 		if (seals == -EINVAL)
@@ -205,17 +209,50 @@ static long udmabuf_create(struct miscdevice *device,
 			goto err;
 		pgoff = list[i].offset >> PAGE_SHIFT;
 		pgcnt = list[i].size >> PAGE_SHIFT;
+		if (is_file_hugepages(memfd)) {
+			hpstate = hstate_file(memfd);
+			pgoff = list[i].offset >> huge_page_shift(hpstate);
+			subpgoff = (list[i].offset &
+				    ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+			maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+		}
 		for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-			page = shmem_read_mapping_page(
-				file_inode(memfd)->i_mapping, pgoff + pgidx);
-			if (IS_ERR(page)) {
-				ret = PTR_ERR(page);
-				goto err;
+			if (is_file_hugepages(memfd)) {
+				if (!hpage) {
+					hpage = find_get_page_flags(
+						file_inode(memfd)->i_mapping,
+						pgoff, FGP_ACCESSED);
+					if (IS_ERR(hpage)) {
+						ret = PTR_ERR(hpage);
+						goto err;
+					}
+				}
+				page = hpage + subpgoff;
+				get_page(page);
+				subpgoff++;
+				if (subpgoff == maxsubpgs) {
+					put_page(hpage);
+					hpage = NULL;
+					subpgoff = 0;
+					pgoff++;
+				}
+			} else {
+				page = shmem_read_mapping_page(
+					file_inode(memfd)->i_mapping,
+					pgoff + pgidx);
+				if (IS_ERR(page)) {
+					ret = PTR_ERR(page);
+					goto err;
+				}
 			}
 			ubuf->pages[pgbuf++] = page;
 		}
 		fput(memfd);
 		memfd = NULL;
+		if (hpage) {
+			put_page(hpage);
+			hpage = NULL;
+		}
 	}
 
 	exp_info.ops = &udmabuf_ops;
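
For context, here is a minimal userspace sketch of the path this patch
enables: creating a udmabuf from a hugetlbfs-backed memfd. Illustration
only, not part of the patch; it uses the existing udmabuf uapi from
<linux/udmabuf.h> and assumes hugepages have been reserved beforehand
(e.g. via /proc/sys/vm/nr_hugepages) and that /dev/udmabuf is accessible:

/* Illustration only: hand one 2 MB hugepage of a sealed memfd to the
 * udmabuf driver and get a dma-buf fd back. Error handling trimmed. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/udmabuf.h>

int main(void)
{
	size_t size = 2UL << 20;	/* one 2 MB hugepage */
	struct udmabuf_create create;
	int memfd, devfd, dmabuf;

	memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING | MFD_HUGETLB);
	ftruncate(memfd, size);
	/* udmabuf requires F_SEAL_SHRINK to be set (and no F_SEAL_WRITE) */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	devfd = open("/dev/udmabuf", O_RDWR);
	memset(&create, 0, sizeof(create));
	create.memfd  = memfd;
	create.flags  = UDMABUF_FLAGS_CLOEXEC;
	create.offset = 0;		/* PAGE_SIZE-aligned byte offset */
	create.size   = size;		/* PAGE_SIZE-aligned length */

	dmabuf = ioctl(devfd, UDMABUF_CREATE, &create);
	printf("dmabuf fd: %d\n", dmabuf);
	return dmabuf < 0;
}

With this patch applied, the driver resolves each 4K subpage of the
hugepage via find_get_page_flags() instead of rejecting the non-shmem
mapping outright.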