From patchwork Fri Jun 1 11:41:28 2018
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 10443125
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: andr2000@gmail.com, daniel.vetter@intel.com, dongwon.kim@intel.com,
	Oleksandr Andrushchenko
Subject: [PATCH v2 5/9] xen/gntdev: Allow mappings for DMA buffers
Date: Fri, 1 Jun 2018 14:41:28 +0300
Message-Id: <20180601114132.22596-6-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180601114132.22596-1-andr2000@gmail.com>
References: <20180601114132.22596-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

Allow mappings for DMA backed buffers if the grant table module supports
such: this extends the grant device to map not only buffers made of
balloon pages, but also buffers allocated with dma_alloc_xxx.

Signed-off-by: Oleksandr Andrushchenko
---
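Note for reviewers (not part of the commit message): the DMA-backed path added
below boils down to filling a struct gnttab_dma_alloc_args and calling the
gnttab_dma_{alloc,free}_pages() helpers introduced earlier in this series. The
sketch below shows only that call pattern; example_dma_alloc() and its
parameter list are invented for illustration, while the args fields mirror the
ones used by gntdev_alloc_map() and gntdev_free_map() in this patch.

static int example_dma_alloc(struct device *dma_dev, int count, bool coherent,
			     struct page **pages, xen_pfn_t *frames,
			     void **vaddr, dma_addr_t *dev_bus_addr)
{
	struct gnttab_dma_alloc_args args;
	int ret;

	args.dev = dma_dev;		/* device the DMA memory is allocated for */
	args.coherent = coherent;	/* dma_alloc_coherent() vs dma_alloc_wc() backing */
	args.nr_pages = count;
	args.pages = pages;		/* page array preallocated by the caller */
	args.frames = frames;		/* frame array preallocated by the caller */

	ret = gnttab_dma_alloc_pages(&args);
	if (ret)
		return ret;

	/* On success the helper reports the kernel and bus addresses back. */
	*vaddr = args.vaddr;
	*dev_bus_addr = args.dev_bus_addr;
	return 0;
}
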
 drivers/xen/gntdev.c      | 99 ++++++++++++++++++++++++++++++++++++++-
 include/uapi/xen/gntdev.h | 15 ++++++
 2 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index bd56653b9bbc..9813fc440c70 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -37,6 +37,9 @@
 #include
 #include
 #include
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+#include
+#endif
 #include
 #include
@@ -72,6 +75,11 @@ struct gntdev_priv {
 	struct mutex lock;
 	struct mm_struct *mm;
 	struct mmu_notifier mn;
+
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	/* Device for which DMA memory is allocated. */
+	struct device *dma_dev;
+#endif
 };
 
 struct unmap_notify {
@@ -96,10 +104,27 @@ struct grant_map {
 	struct gnttab_unmap_grant_ref *kunmap_ops;
 	struct page **pages;
 	unsigned long pages_vm_start;
+
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	/*
+	 * If dma_vaddr is not NULL then this mapping is backed by DMA
+	 * capable memory.
+	 */
+
+	struct device *dma_dev;
+	/* Flags used to create this DMA buffer: GNTDEV_DMA_FLAG_XXX. */
+	int dma_flags;
+	void *dma_vaddr;
+	dma_addr_t dma_bus_addr;
+	/* This is required for gnttab_dma_{alloc|free}_pages. */
+	xen_pfn_t *frames;
+#endif
 };
 
 static int unmap_grant_pages(struct grant_map *map, int offset, int pages);
 
+static struct miscdevice gntdev_miscdev;
+
 /* ------------------------------------------------------------------ */
 
 static void gntdev_print_maps(struct gntdev_priv *priv,
@@ -121,8 +146,27 @@ static void gntdev_free_map(struct grant_map *map)
 	if (map == NULL)
 		return;
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	if (map->dma_vaddr) {
+		struct gnttab_dma_alloc_args args;
+
+		args.dev = map->dma_dev;
+		args.coherent = map->dma_flags & GNTDEV_DMA_FLAG_COHERENT;
+		args.nr_pages = map->count;
+		args.pages = map->pages;
+		args.frames = map->frames;
+		args.vaddr = map->dma_vaddr;
+		args.dev_bus_addr = map->dma_bus_addr;
+
+		gnttab_dma_free_pages(&args);
+	} else
+#endif
 	if (map->pages)
 		gnttab_free_pages(map->count, map->pages);
+
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	kfree(map->frames);
+#endif
 	kfree(map->pages);
 	kfree(map->grants);
 	kfree(map->map_ops);
@@ -132,7 +176,8 @@ static void gntdev_free_map(struct grant_map *map)
 	kfree(map);
 }
 
-static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
+static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+					  int dma_flags)
 {
 	struct grant_map *add;
 	int i;
@@ -155,6 +200,37 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 	    NULL == add->pages)
 		goto err;
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	add->dma_flags = dma_flags;
+
+	/*
+	 * Check if this mapping is requested to be backed
+	 * by a DMA buffer.
+	 */
+	if (dma_flags & (GNTDEV_DMA_FLAG_WC | GNTDEV_DMA_FLAG_COHERENT)) {
+		struct gnttab_dma_alloc_args args;
+
+		add->frames = kcalloc(count, sizeof(add->frames[0]),
+				      GFP_KERNEL);
+		if (!add->frames)
+			goto err;
+
+		/* Remember the device, so we can free DMA memory. */
+		add->dma_dev = priv->dma_dev;
+
+		args.dev = priv->dma_dev;
+		args.coherent = dma_flags & GNTDEV_DMA_FLAG_COHERENT;
+		args.nr_pages = count;
+		args.pages = add->pages;
+		args.frames = add->frames;
+
+		if (gnttab_dma_alloc_pages(&args))
+			goto err;
+
+		add->dma_vaddr = args.vaddr;
+		add->dma_bus_addr = args.dev_bus_addr;
+	} else
+#endif
 	if (gnttab_alloc_pages(count, add->pages))
 		goto err;
 
@@ -325,6 +401,14 @@ static int map_grant_pages(struct grant_map *map)
 		map->unmap_ops[i].handle = map->map_ops[i].handle;
 		if (use_ptemod)
 			map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+		else if (map->dma_vaddr) {
+			unsigned long mfn;
+
+			mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
+			map->unmap_ops[i].dev_bus_addr = __pfn_to_phys(mfn);
+		}
+#endif
 	}
 	return err;
 }
@@ -548,6 +632,17 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 	}
 
 	flip->private_data = priv;
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	priv->dma_dev = gntdev_miscdev.this_device;
+
+	/*
+	 * The device is not spawned from a device tree, so arch_setup_dma_ops
+	 * is not called, thus leaving the device with dummy DMA ops.
+	 * Fix this by calling of_dma_configure() with a NULL node to set
+	 * default DMA ops.
+	 */
+	of_dma_configure(priv->dma_dev, NULL);
+#endif
 	pr_debug("priv %p\n", priv);
 
 	return 0;
@@ -589,7 +684,7 @@ static long gntdev_ioctl_map_grant_ref(struct gntdev_priv *priv,
 		return -EINVAL;
 
 	err = -ENOMEM;
-	map = gntdev_alloc_map(priv, op.count);
+	map = gntdev_alloc_map(priv, op.count, 0 /* This is not a dma-buf. */);
 	if (!map)
 		return err;
 
diff --git a/include/uapi/xen/gntdev.h b/include/uapi/xen/gntdev.h
index 6d1163456c03..4b9d498a31d4 100644
--- a/include/uapi/xen/gntdev.h
+++ b/include/uapi/xen/gntdev.h
@@ -200,4 +200,19 @@ struct ioctl_gntdev_grant_copy {
 /* Send an interrupt on the indicated event channel */
 #define UNMAP_NOTIFY_SEND_EVENT 0x2
 
+/*
+ * Flags to be used while requesting memory mapping's backing storage
+ * to be allocated with DMA API.
+ */
+
+/*
+ * The buffer is backed with memory allocated with dma_alloc_wc.
+ */
+#define GNTDEV_DMA_FLAG_WC		(1 << 0)
+
+/*
+ * The buffer is backed with memory allocated with dma_alloc_coherent.
+ */
+#define GNTDEV_DMA_FLAG_COHERENT	(1 << 1)
+
 #endif /* __LINUX_PUBLIC_GNTDEV_H__ */
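
Illustrative note (not part of the patch): GNTDEV_DMA_FLAG_WC and
GNTDEV_DMA_FLAG_COHERENT are only interpreted by gntdev_alloc_map(); the
existing ioctl path keeps passing 0 and therefore stays on balloon pages. A
hypothetical in-kernel caller could request a DMA-backed mapping along these
lines (example_alloc_dma_backed_map() and its wc parameter are invented for
the example):

static struct grant_map *example_alloc_dma_backed_map(struct gntdev_priv *priv,
						       int count, bool wc)
{
	/*
	 * GNTDEV_DMA_FLAG_WC asks for dma_alloc_wc() backed memory,
	 * GNTDEV_DMA_FLAG_COHERENT for dma_alloc_coherent() backed memory;
	 * passing 0 keeps the existing balloon-page allocation.
	 */
	int dma_flags = wc ? GNTDEV_DMA_FLAG_WC : GNTDEV_DMA_FLAG_COHERENT;

	return gntdev_alloc_map(priv, count, dma_flags);
}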