From patchwork Mon Nov 30 18:32:01 2015
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 7728971
From: Felix Fietkau
To: linux-wireless@vger.kernel.org
Cc: ath10k@lists.infradead.org, kvalo@codeaurora.org
Subject: [PATCH v2] ath10k: do not use coherent memory for allocated device memory chunks
Date: Mon, 30 Nov 2015 19:32:01 +0100
Message-Id: <1448908321-3042-1-git-send-email-nbd@openwrt.org>

Coherent memory is more expensive to allocate (and constrained on some
architectures, where it has to be pre-allocated).
It is also completely unnecessary, since the host has no reason to even
access these allocated memory chunks.

Signed-off-by: Felix Fietkau
---
 drivers/net/wireless/ath/ath10k/wmi.c | 61 ++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 18 deletions(-)

diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
index 9021079..1386dd8 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.c
+++ b/drivers/net/wireless/ath/ath10k/wmi.c
@@ -4300,34 +4300,58 @@ void ath10k_wmi_event_vdev_resume_req(struct ath10k *ar, struct sk_buff *skb)
 	ath10k_dbg(ar, ATH10K_DBG_WMI, "WMI_VDEV_RESUME_REQ_EVENTID\n");
 }
 
-static int ath10k_wmi_alloc_host_mem(struct ath10k *ar, u32 req_id,
-				     u32 num_units, u32 unit_len)
+static int ath10k_wmi_alloc_chunk(struct ath10k *ar, u32 req_id,
+				  u32 num_units, u32 unit_len)
 {
 	dma_addr_t paddr;
-	u32 pool_size;
+	u32 pool_size = 0;
 	int idx = ar->wmi.num_mem_chunks;
+	void *vaddr = NULL;
 
-	pool_size = num_units * round_up(unit_len, 4);
+	if (ar->wmi.num_mem_chunks == ARRAY_SIZE(ar->wmi.mem_chunks))
+		return -ENOMEM;
 
-	if (!pool_size)
-		return -EINVAL;
+	while (!vaddr && num_units) {
+		pool_size = num_units * round_up(unit_len, 4);
+		if (!pool_size)
+			return -EINVAL;
 
-	ar->wmi.mem_chunks[idx].vaddr = dma_alloc_coherent(ar->dev,
-							   pool_size,
-							   &paddr,
-							   GFP_KERNEL);
-	if (!ar->wmi.mem_chunks[idx].vaddr) {
-		ath10k_warn(ar, "failed to allocate memory chunk\n");
-		return -ENOMEM;
+		vaddr = kzalloc(pool_size, GFP_KERNEL | __GFP_NOWARN);
+		if (!vaddr)
+			num_units /= 2;
 	}
 
-	memset(ar->wmi.mem_chunks[idx].vaddr, 0, pool_size);
+	if (!num_units)
+		return -ENOMEM;
+
+	paddr = dma_map_single(ar->dev, vaddr, pool_size, DMA_TO_DEVICE);
+	if (dma_mapping_error(ar->dev, paddr)) {
+		kfree(vaddr);
+		return -ENOMEM;
+	}
 
+	ar->wmi.mem_chunks[idx].vaddr = vaddr;
 	ar->wmi.mem_chunks[idx].paddr = paddr;
 	ar->wmi.mem_chunks[idx].len = pool_size;
 	ar->wmi.mem_chunks[idx].req_id = req_id;
 	ar->wmi.num_mem_chunks++;
 
+	return num_units;
+}
+
+static int ath10k_wmi_alloc_host_mem(struct ath10k *ar, u32 req_id,
+				     u32 num_units, u32 unit_len)
+{
+	int ret;
+
+	while (num_units) {
+		ret = ath10k_wmi_alloc_chunk(ar, req_id, num_units, unit_len);
+		if (ret < 0)
+			return ret;
+
+		num_units -= ret;
+	}
+
 	return 0;
 }
 
@@ -7705,10 +7729,11 @@ void ath10k_wmi_free_host_mem(struct ath10k *ar)
 
 	/* free the host memory chunks requested by firmware */
 	for (i = 0; i < ar->wmi.num_mem_chunks; i++) {
-		dma_free_coherent(ar->dev,
-				  ar->wmi.mem_chunks[i].len,
-				  ar->wmi.mem_chunks[i].vaddr,
-				  ar->wmi.mem_chunks[i].paddr);
+		dma_unmap_single(ar->dev,
+				 ar->wmi.mem_chunks[i].paddr,
+				 ar->wmi.mem_chunks[i].len,
+				 DMA_TO_DEVICE);
+		kfree(ar->wmi.mem_chunks[i].vaddr);
 	}
 
 	ar->wmi.num_mem_chunks = 0;
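
For reference, the streaming-DMA pattern the patch switches to can be
sketched in isolation as below. This is only an illustrative sketch with
made-up names (fw_mem_chunk, fw_mem_chunk_alloc/free), not the ath10k code
itself: plain kzalloc() memory is handed to the device with
dma_map_single() and torn down with dma_unmap_single() + kfree(),
using DMA_TO_DEVICE because the host never needs to read the buffer back.

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Illustrative only -- these names are not part of the ath10k driver. */
struct fw_mem_chunk {
	void *vaddr;		/* host virtual address from kzalloc() */
	dma_addr_t paddr;	/* bus address handed to the firmware */
	size_t len;
};

static int fw_mem_chunk_alloc(struct device *dev, struct fw_mem_chunk *c,
			      size_t len)
{
	/* Ordinary cacheable memory instead of dma_alloc_coherent() */
	c->vaddr = kzalloc(len, GFP_KERNEL);
	if (!c->vaddr)
		return -ENOMEM;

	/* Map for device access; data only flows from host memory to device */
	c->paddr = dma_map_single(dev, c->vaddr, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, c->paddr)) {
		kfree(c->vaddr);
		return -ENOMEM;
	}

	c->len = len;
	return 0;
}

static void fw_mem_chunk_free(struct device *dev, struct fw_mem_chunk *c)
{
	/* Unmap before freeing so the device no longer owns the buffer */
	dma_unmap_single(dev, c->paddr, c->len, DMA_TO_DEVICE);
	kfree(c->vaddr);
}

The patch applies this same idea, with the extra twist that the new
ath10k_wmi_alloc_chunk() halves num_units and retries when a large
kzalloc() fails, so ath10k_wmi_alloc_host_mem() can satisfy the firmware's
request as several smaller chunks.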