From patchwork Mon May 19 06:51:08 2014
From: Alexandre Courbot
To: Ben Skeggs
Subject: [PATCH 2/2] drm/gk20a/fb: fix compile error with CMA and module
Date: Mon, 19 May 2014 15:51:08 +0900
Message-ID: <1400482268-4971-3-git-send-email-acourbot@nvidia.com>
In-Reply-To: <1400482268-4971-1-git-send-email-acourbot@nvidia.com>
References: <1400482268-4971-1-git-send-email-acourbot@nvidia.com>
Cc: gnurou@gmail.com, nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-tegra@vger.kernel.org
List-Id: Direct Rendering Infrastructure - Development

CMA functions are not available to kernel modules, but the GK20A FB driver
currently (and temporarily) relies on them. This patch replaces the CMA calls
in the problematic case (CMA enabled and Nouveau built as a module) with
dummy stubs that make this particular driver fail gracefully, but at least
do not produce a compile error. This is a temporary fix until a better
memory allocation scheme is devised.

Signed-off-by: Alexandre Courbot
---
 drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c | 25 +++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c b/drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c
index 5904af52e6d6..fa867ce5449e 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c
@@ -39,6 +39,27 @@ struct gk20a_mem {
 	struct list_head head;
 };
 
+/*
+ * CMA is not available to modules. Until we find a better solution, make
+ * memory allocations fail in that case.
+ */
+#if IS_ENABLED(CONFIG_CMA) && IS_MODULE(CONFIG_DRM_NOUVEAU)
+static inline struct page *
+alloc_contiguous_memory(struct device *dev, int count, unsigned int order)
+{
+	dev_err(dev, "cannot use CMA from a module - allocation failed\n");
+	return NULL;
+}
+
+static inline void
+release_contiguous_memory(struct device *dev, struct page *page, int count)
+{
+}
+#else
+#define alloc_contiguous_memory(d, c, o) dma_alloc_from_contiguous(d, c, o)
+#define release_contiguous_memory(d, p, c) dma_release_from_contiguous(d, p, c)
+#endif
+
 static void
 gk20a_ram_put(struct nouveau_fb *pfb, struct nouveau_mem **pmem)
 {
@@ -51,7 +72,7 @@ gk20a_ram_put(struct nouveau_fb *pfb, struct nouveau_mem **pmem)
 		return;
 
 	list_for_each_entry_safe(chunk, n, &mem->head, list) {
-		dma_release_from_contiguous(dev, chunk->pages, chunk->npages);
+		release_contiguous_memory(dev, chunk->pages, chunk->npages);
 		kfree(chunk);
 	}
 
@@ -128,7 +149,7 @@ gk20a_ram_get(struct nouveau_fb *pfb, u64 size, u32 align, u32 ncmin,
 			return -ENOMEM;
 		}
 
-		chunk->pages = dma_alloc_from_contiguous(dev, ncmin, order);
+		chunk->pages = alloc_contiguous_memory(dev, ncmin, order);
 		if (!chunk->pages) {
 			kfree(chunk);
 			gk20a_ram_put(pfb, pmem);
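
For reference, the compile-time stub pattern the patch uses can be shown
outside the kernel. The sketch below is an illustration only: the
`ALLOCATOR_UNAVAILABLE` macro is a hypothetical stand-in for the kernel-side
`IS_ENABLED(CONFIG_CMA) && IS_MODULE(CONFIG_DRM_NOUVEAU)` test, and plain
`malloc`/`free` stand in for the CMA calls:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * ALLOCATOR_UNAVAILABLE stands in for the kernel-side check
 * IS_ENABLED(CONFIG_CMA) && IS_MODULE(CONFIG_DRM_NOUVEAU): when the real
 * allocator cannot be linked, provide stubs that fail at run time instead
 * of breaking the build.
 */
#ifdef ALLOCATOR_UNAVAILABLE
static inline void *alloc_contiguous_memory(size_t size)
{
	fprintf(stderr, "allocator unavailable - allocation failed\n");
	return NULL;
}

static inline void release_contiguous_memory(void *pages)
{
	/* Nothing to do: allocation can never have succeeded. */
	(void)pages;
}
#else
/* Real allocator available: malloc/free stand in for the CMA calls. */
#define alloc_contiguous_memory(size)	malloc(size)
#define release_contiguous_memory(p)	free(p)
#endif
```

Callers written against `alloc_contiguous_memory()` compile unchanged either
way; with `-DALLOCATOR_UNAVAILABLE` they simply see a NULL return, mirroring
how `gk20a_ram_get()` bails out with -ENOMEM when the stub is in effect.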