From patchwork Fri May 25 15:33:27 2018
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 10427801
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
    jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: daniel.vetter@intel.com, andr2000@gmail.com, dongwon.kim@intel.com,
    matthew.d.roper@intel.com, Oleksandr Andrushchenko
Subject: [PATCH 4/8] xen/gntdev: Allow mappings for DMA buffers
Date: Fri, 25 May 2018 18:33:27 +0300
Message-Id: <20180525153331.31188-5-andr2000@gmail.com>
In-Reply-To: <20180525153331.31188-1-andr2000@gmail.com>
References: <20180525153331.31188-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

Allow mappings for DMA-backed buffers if the grant table module
supports them: this extends the grant device to map not only buffers
made of balloon pages, but also buffers allocated with dma_alloc_xxx.

Signed-off-by: Oleksandr Andrushchenko
---
 drivers/xen/gntdev.c      | 100 +++++++++++++++++++++++++++++++++++++-
 include/uapi/xen/gntdev.h |  15 ++++++
 2 files changed, 113 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index bd56653b9bbc..640a579f42ea 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -37,6 +37,9 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+#include <linux/of_device.h>
+#endif
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -72,6 +75,11 @@ struct gntdev_priv {
 	struct mutex lock;
 	struct mm_struct *mm;
 	struct mmu_notifier mn;
+
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	/* Device for which DMA memory is allocated. */
+	struct device *dma_dev;
+#endif
 };
 
 struct unmap_notify {
@@ -96,10 +104,28 @@ struct grant_map {
 	struct gnttab_unmap_grant_ref *kunmap_ops;
 	struct page **pages;
 	unsigned long pages_vm_start;
+
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	/*
+	 * If dmabuf_vaddr is not NULL then this mapping is backed by DMA
+	 * capable memory.
+	 */
+
+	/* Device for which DMA memory is allocated. */
+	struct device *dma_dev;
+	/* Flags used to create this DMA buffer: GNTDEV_DMA_FLAG_XXX. */
+	int dma_flags;
+	/* Virtual/CPU address of the DMA buffer. */
+	void *dma_vaddr;
+	/* Bus address of the DMA buffer. */
+	dma_addr_t dma_bus_addr;
+#endif
 };
 
 static int unmap_grant_pages(struct grant_map *map, int offset, int pages);
 
+static struct miscdevice gntdev_miscdev;
+
 /* ------------------------------------------------------------------ */
 
 static void gntdev_print_maps(struct gntdev_priv *priv,
@@ -121,8 +147,26 @@ static void gntdev_free_map(struct grant_map *map)
 	if (map == NULL)
 		return;
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	if (map->dma_vaddr) {
+		struct gnttab_dma_alloc_args args;
+
+		args.dev = map->dma_dev;
+		args.coherent = map->dma_flags & GNTDEV_DMA_FLAG_COHERENT;
+		args.nr_pages = map->count;
+		args.pages = map->pages;
+		args.vaddr = map->dma_vaddr;
+		args.dev_bus_addr = map->dma_bus_addr;
+
+		gnttab_dma_free_pages(&args);
+	} else if (map->pages) {
+		gnttab_free_pages(map->count, map->pages);
+	}
+#else
 	if (map->pages)
 		gnttab_free_pages(map->count, map->pages);
+#endif
+
 	kfree(map->pages);
 	kfree(map->grants);
 	kfree(map->map_ops);
@@ -132,7 +176,8 @@ static void gntdev_free_map(struct grant_map *map)
 	kfree(map);
 }
 
-static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
+static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+					  int dma_flags)
 {
 	struct grant_map *add;
 	int i;
@@ -155,8 +200,37 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 	    NULL == add->pages)
 		goto err;
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	add->dma_flags = dma_flags;
+
+	/*
+	 * Check if this mapping is requested to be backed
+	 * by a DMA buffer.
+	 */
+	if (dma_flags & (GNTDEV_DMA_FLAG_WC | GNTDEV_DMA_FLAG_COHERENT)) {
+		struct gnttab_dma_alloc_args args;
+
+		/* Remember the device, so we can free DMA memory. */
+		add->dma_dev = priv->dma_dev;
+
+		args.dev = priv->dma_dev;
+		args.coherent = dma_flags & GNTDEV_DMA_FLAG_COHERENT;
+		args.nr_pages = count;
+		args.pages = add->pages;
+
+		if (gnttab_dma_alloc_pages(&args))
+			goto err;
+
+		add->dma_vaddr = args.vaddr;
+		add->dma_bus_addr = args.dev_bus_addr;
+	} else {
+		if (gnttab_alloc_pages(count, add->pages))
+			goto err;
+	}
+#else
 	if (gnttab_alloc_pages(count, add->pages))
 		goto err;
+#endif
 
 	for (i = 0; i < count; i++) {
 		add->map_ops[i].handle = -1;
@@ -323,8 +397,19 @@ static int map_grant_pages(struct grant_map *map)
 		}
 
 		map->unmap_ops[i].handle = map->map_ops[i].handle;
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+		if (use_ptemod) {
+			map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+		} else if (map->dma_vaddr) {
+			unsigned long mfn;
+
+			mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
+			map->unmap_ops[i].dev_bus_addr = __pfn_to_phys(mfn);
+		}
+#else
 		if (use_ptemod)
 			map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+#endif
 	}
 	return err;
 }
@@ -548,6 +633,17 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 	}
 
 	flip->private_data = priv;
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+	priv->dma_dev = gntdev_miscdev.this_device;
+
+	/*
+	 * The device is not spawned from a device tree, so arch_setup_dma_ops
+	 * is not called, thus leaving the device with dummy DMA ops.
+	 * Fix this by calling of_dma_configure() with a NULL node to set
+	 * default DMA ops.
+	 */
+	of_dma_configure(priv->dma_dev, NULL);
+#endif
 	pr_debug("priv %p\n", priv);
 
 	return 0;
@@ -589,7 +685,7 @@ static long gntdev_ioctl_map_grant_ref(struct gntdev_priv *priv,
 		return -EINVAL;
 
 	err = -ENOMEM;
-	map = gntdev_alloc_map(priv, op.count);
+	map = gntdev_alloc_map(priv, op.count, 0 /* This is not a dma-buf. */);
 	if (!map)
 		return err;
 
diff --git a/include/uapi/xen/gntdev.h b/include/uapi/xen/gntdev.h
index 6d1163456c03..2d5a4672f07c 100644
--- a/include/uapi/xen/gntdev.h
+++ b/include/uapi/xen/gntdev.h
@@ -200,4 +200,19 @@ struct ioctl_gntdev_grant_copy {
 /* Send an interrupt on the indicated event channel */
 #define UNMAP_NOTIFY_SEND_EVENT 0x2
 
+/*
+ * Flags to be used while requesting memory mapping's backing storage
+ * to be allocated with DMA API.
+ */
+
+/*
+ * The buffer is backed with memory allocated with dma_alloc_wc.
+ */
+#define GNTDEV_DMA_FLAG_WC		(1 << 1)
+
+/*
+ * The buffer is backed with memory allocated with dma_alloc_coherent.
+ */
+#define GNTDEV_DMA_FLAG_COHERENT	(1 << 2)
+
 #endif /* __LINUX_PUBLIC_GNTDEV_H__ */