From patchwork Fri May 25 15:33:26 2018
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 10427819
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: daniel.vetter@intel.com, andr2000@gmail.com, dongwon.kim@intel.com,
	matthew.d.roper@intel.com, Oleksandr Andrushchenko
Subject: [PATCH 3/8] xen/grant-table: Allow allocating buffers suitable for DMA
Date: Fri, 25 May 2018 18:33:26 +0300
Message-Id: <20180525153331.31188-4-andr2000@gmail.com>
In-Reply-To: <20180525153331.31188-1-andr2000@gmail.com>
References: <20180525153331.31188-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

Extend the grant table module API to allow allocating buffers that can
be used for DMA operations and mapping foreign grant references on top
of those. The resulting buffer is similar to the one allocated by the
balloon driver in that a proper memory reservation is made
({increase|decrease}_reservation and VA mappings are updated if needed).

This is useful for sharing foreign buffers with HW drivers which cannot
work with scattered buffers provided by the balloon driver, but require
DMAable memory instead.

Signed-off-by: Oleksandr Andrushchenko
---
 drivers/xen/Kconfig       |  13 ++++
 drivers/xen/grant-table.c | 124 ++++++++++++++++++++++++++++++++++++++
 include/xen/grant_table.h |  25 ++++++++
 3 files changed, 162 insertions(+)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index e5d0c28372ea..3431fe210624 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -161,6 +161,19 @@ config XEN_GRANT_DEV_ALLOC
 	  to other domains.
 	  This can be used to implement frontend drivers
 	  or as part of an inter-domain shared memory channel.
 
+config XEN_GRANT_DMA_ALLOC
+	bool "Allow allocating DMA capable buffers with grant reference module"
+	depends on XEN
+	help
+	  Extends grant table module API to allow allocating DMA capable
+	  buffers and mapping foreign grant references on top of it.
+	  The resulting buffer is similar to one allocated by the balloon
+	  driver in that proper memory reservation is made
+	  ({increase|decrease}_reservation and VA mappings updated if needed).
+	  This is useful for sharing foreign buffers with HW drivers which
+	  cannot work with scattered buffers provided by the balloon driver,
+	  but require DMAable memory instead.
+
 config SWIOTLB_XEN
 	def_bool y
 	select SWIOTLB

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index d7488226e1f2..06fe6e7f639c 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -45,6 +45,9 @@
 #include <linux/workqueue.h>
 #include <linux/ratelimit.h>
 #include <linux/moduleparam.h>
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+#include <linux/dma-mapping.h>
+#endif
 
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
@@ -57,6 +60,7 @@
 #ifdef CONFIG_X86
 #include <asm/xen/cpuid.h>
 #endif
+#include <xen/mem-reservation.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/interface.h>
 
@@ -811,6 +815,82 @@
 }
 EXPORT_SYMBOL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+/**
+ * gnttab_dma_alloc_pages - alloc DMAable pages suitable for grant mapping into
+ * @args: arguments to the function
+ */
+int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
+{
+	unsigned long pfn, start_pfn;
+	xen_pfn_t *frames;
+	size_t size;
+	int i, ret;
+
+	frames = kcalloc(args->nr_pages, sizeof(*frames), GFP_KERNEL);
+	if (!frames)
+		return -ENOMEM;
+
+	size = args->nr_pages << PAGE_SHIFT;
+	if (args->coherent)
+		args->vaddr = dma_alloc_coherent(args->dev, size,
+						 &args->dev_bus_addr,
+						 GFP_KERNEL | __GFP_NOWARN);
+	else
+		args->vaddr = dma_alloc_wc(args->dev, size,
+					   &args->dev_bus_addr,
+					   GFP_KERNEL | __GFP_NOWARN);
+	if (!args->vaddr) {
+		pr_err("Failed to allocate DMA buffer of size %zu\n", size);
+		ret = -ENOMEM;
+		goto fail_free_frames;
+	}
+
+	start_pfn = __phys_to_pfn(args->dev_bus_addr);
+	for (pfn = start_pfn, i = 0; pfn < start_pfn + args->nr_pages;
+			pfn++, i++) {
+		struct page *page = pfn_to_page(pfn);
+
+		args->pages[i] = page;
+		frames[i] = xen_page_to_gfn(page);
+		xenmem_reservation_scrub_page(page);
+	}
+
+	xenmem_reservation_va_mapping_reset(args->nr_pages, args->pages);
+
+	ret = xenmem_reservation_decrease(args->nr_pages, frames);
+	if (ret != args->nr_pages) {
+		pr_err("Failed to decrease reservation for DMA buffer\n");
+		xenmem_reservation_increase(ret, frames);
+		ret = -EFAULT;
+		goto fail_free_dma;
+	}
+
+	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
+	if (ret < 0)
+		goto fail_clear_private;
+
+	kfree(frames);
+	return 0;
+
+fail_clear_private:
+	gnttab_pages_clear_private(args->nr_pages, args->pages);
+fail_free_dma:
+	xenmem_reservation_va_mapping_update(args->nr_pages, args->pages,
+					     frames);
+	if (args->coherent)
+		dma_free_coherent(args->dev, size,
+				  args->vaddr, args->dev_bus_addr);
+	else
+		dma_free_wc(args->dev, size,
+			    args->vaddr, args->dev_bus_addr);
+fail_free_frames:
+	kfree(frames);
+	return ret;
+}
+EXPORT_SYMBOL(gnttab_dma_alloc_pages);
+#endif
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
@@ -838,6 +918,50 @@ void gnttab_free_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL(gnttab_free_pages);
 
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+/**
+ * gnttab_dma_free_pages - free DMAable pages
+ * @args: arguments to the function
+ */
+int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
+{
+	xen_pfn_t *frames;
+	size_t size;
+	int i, ret;
+
+	gnttab_pages_clear_private(args->nr_pages, args->pages);
+
+	frames = kcalloc(args->nr_pages, sizeof(*frames), GFP_KERNEL);
+	if (!frames)
+		return -ENOMEM;
+
+	for (i = 0; i < args->nr_pages; i++)
+		frames[i] = page_to_xen_pfn(args->pages[i]);
+
+	ret = xenmem_reservation_increase(args->nr_pages, frames);
+	if (ret != args->nr_pages) {
+		pr_err("Failed to increase reservation for DMA buffer\n");
+		ret = -EFAULT;
+	} else {
+		ret = 0;
+	}
+
+	xenmem_reservation_va_mapping_update(args->nr_pages, args->pages,
+					     frames);
+
+	size = args->nr_pages << PAGE_SHIFT;
+	if (args->coherent)
+		dma_free_coherent(args->dev, size,
+				  args->vaddr, args->dev_bus_addr);
+	else
+		dma_free_wc(args->dev, size,
+			    args->vaddr, args->dev_bus_addr);
+	kfree(frames);
+	return ret;
+}
+EXPORT_SYMBOL(gnttab_dma_free_pages);
+#endif
+
 /* Handling of paged out grant targets (GNTST_eagain) */
 #define MAX_DELAY 256
 static inline void

diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index de03f2542bb7..982e34242b9c 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,31 @@ void gnttab_free_auto_xlat_frames(void);
 
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+struct gnttab_dma_alloc_args {
+	/* Device for which DMA memory will be/was allocated. */
+	struct device *dev;
+	/*
+	 * If set then DMA buffer is coherent and write-combine otherwise.
+	 */
+	bool coherent;
+	/*
+	 * Number of entries in the @pages array, defines the size
+	 * of the DMA buffer.
+	 */
+	int nr_pages;
+	/* Array of pages @pages filled with pages of the DMA buffer. */
+	struct page **pages;
+	/* Virtual/CPU address of the DMA buffer. */
+	void *vaddr;
+	/* Bus address of the DMA buffer. */
+	dma_addr_t dev_bus_addr;
+};
+
+int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args);
+int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args);
+#endif
+
 int gnttab_pages_set_private(int nr_pages, struct page **pages);
 void gnttab_pages_clear_private(int nr_pages, struct page **pages);