From patchwork Fri Jan  8 13:35:49 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 7985851
From: tim.gardner@canonical.com
To: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Tim Gardner, Vinod Koul, Dan Williams, Dave Jiang, Prarit Bhargava,
 Nicholas Mc Guire, Jarkko Nikula
Subject: [PATCH v4.4-rc8 v2] dmaengine: ioatdma: Squelch framesize warnings
Date: Fri, 8 Jan 2016 06:35:49 -0700
Message-Id: <1452260149-29576-1-git-send-email-tim.gardner@canonical.com>
In-Reply-To: <1452190050-11296-1-git-send-email-tim.gardner@canonical.com>
References: <1452190050-11296-1-git-send-email-tim.gardner@canonical.com>
List-ID: dmaengine@vger.kernel.org

From: Tim Gardner

  CC [M]  drivers/dma/ioat/prep.o
drivers/dma/ioat/prep.c: In function 'ioat_prep_pqxor':
drivers/dma/ioat/prep.c:682:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 }
 ^
drivers/dma/ioat/prep.c: In function 'ioat_prep_pqxor_val':
drivers/dma/ioat/prep.c:714:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 }

gcc version 5.3.1 20151219 (Ubuntu 5.3.1-4ubuntu1)

Cc: Vinod Koul
Cc: Dan Williams
Cc: Dave Jiang
Cc: Prarit Bhargava
Cc: Nicholas Mc Guire
Cc: Jarkko Nikula
Signed-off-by: Tim Gardner
---
v2 - use per-CPU static buffers instead of dynamically allocating memory.

 drivers/dma/ioat/prep.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/ioat/prep.c b/drivers/dma/ioat/prep.c
index 6bb4a13..2c0768b 100644
--- a/drivers/dma/ioat/prep.c
+++ b/drivers/dma/ioat/prep.c
@@ -21,6 +21,8 @@
 #include
 #include
 #include
+#include <linux/percpu.h>
+#include <linux/preempt.h>
 #include "../dmaengine.h"
 #include "registers.h"
 #include "hw.h"
@@ -655,13 +657,25 @@ ioat_prep_pq_val(struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src,
 		       flags);
 }
 
+/*
+ * The scf scratch buffer is too large for an automatic variable, and
+ * we don't want to take the performance hit for dynamic allocation.
+ * Therefore, define per CPU buffers and disable preemption while in use.
+ */
+static DEFINE_PER_CPU(unsigned char [MAX_SCF], ioat_scf);
+static inline unsigned char *ioat_assign_scratch_buffer(void)
+{
+	return get_cpu_var(ioat_scf);
+}
+
 struct dma_async_tx_descriptor *
 ioat_prep_pqxor(struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
 		unsigned int src_cnt, size_t len, unsigned long flags)
 {
-	unsigned char scf[MAX_SCF];
+	unsigned char *scf;
 	dma_addr_t pq[2];
 	struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
+	struct dma_async_tx_descriptor *desc;
 
 	if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
 		return NULL;
@@ -669,16 +683,21 @@ ioat_prep_pqxor(struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
 	if (src_cnt > MAX_SCF)
 		return NULL;
 
+	preempt_disable();
+	scf = ioat_assign_scratch_buffer();
+
 	memset(scf, 0, src_cnt);
 	pq[0] = dst;
 	flags |= DMA_PREP_PQ_DISABLE_Q;
 	pq[1] = dst; /* specify valid address for disabled result */
 
-	return src_cnt_flags(src_cnt, flags) > 8 ?
+	desc = src_cnt_flags(src_cnt, flags) > 8 ?
 		__ioat_prep_pq16_lock(chan, NULL, pq, src, src_cnt, scf, len,
 				      flags) :
 		__ioat_prep_pq_lock(chan, NULL, pq, src, src_cnt, scf, len,
 				    flags);
+	preempt_enable();
+	return desc;
 }
 
 struct dma_async_tx_descriptor *
@@ -686,9 +705,10 @@ ioat_prep_pqxor_val(struct dma_chan *chan, dma_addr_t *src,
 		    unsigned int src_cnt, size_t len,
 		    enum sum_check_flags *result, unsigned long flags)
 {
-	unsigned char scf[MAX_SCF];
+	unsigned char *scf;
 	dma_addr_t pq[2];
 	struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
+	struct dma_async_tx_descriptor *desc;
 
 	if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
 		return NULL;
@@ -696,6 +716,9 @@ ioat_prep_pqxor_val(struct dma_chan *chan, dma_addr_t *src,
 	if (src_cnt > MAX_SCF)
 		return NULL;
 
+	preempt_disable();
+	scf = ioat_assign_scratch_buffer();
+
 	/* the cleanup routine only sets bits on validate failure, it
 	 * does not clear bits on validate success... so clear it here
 	 */
@@ -706,11 +729,13 @@ ioat_prep_pqxor_val(struct dma_chan *chan, dma_addr_t *src,
 	flags |= DMA_PREP_PQ_DISABLE_Q;
 	pq[1] = pq[0]; /* specify valid address for disabled result */
 
-	return src_cnt_flags(src_cnt, flags) > 8 ?
+	desc = src_cnt_flags(src_cnt, flags) > 8 ?
 		__ioat_prep_pq16_lock(chan, result, pq, &src[1], src_cnt - 1,
 				      scf, len, flags) :
 		__ioat_prep_pq_lock(chan, result, pq, &src[1], src_cnt - 1,
 				    scf, len, flags);
+	preempt_enable();
+	return desc;
 }
 
 struct dma_async_tx_descriptor *
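The pattern the patch applies, moving a large scratch array off the stack into a static per-CPU buffer that is only touched while the task cannot migrate, can be sketched in plain userspace C. This is a hedged illustration, not kernel code: `_Thread_local` stands in for `DEFINE_PER_CPU`, the function names `sum_with_stack_buf`/`sum_with_static_buf` and the `MAX_SCF` value are made up for the example, and the preemption discipline is described only in comments since userspace has no `preempt_disable()`.

```c
#include <stddef.h>
#include <string.h>

#define MAX_SCF 256 /* illustrative size only, not the kernel's value */

/* Variant 1: a large automatic array. The whole buffer lives in the
 * stack frame, which is exactly what trips -Wframe-larger-than=. */
static unsigned int sum_with_stack_buf(const unsigned char *src, size_t n)
{
	unsigned char scf[MAX_SCF];
	unsigned int sum = 0;

	memset(scf, 0, sizeof(scf));
	memcpy(scf, src, n);
	for (size_t i = 0; i < n; i++)
		sum += scf[i];
	return sum;
}

/* Variant 2: a static per-thread buffer, so the frame holds only a
 * pointer and a few scalars. _Thread_local plays the role of
 * DEFINE_PER_CPU here; in the kernel, preempt_disable()/preempt_enable()
 * brackets the use so the task cannot be moved off the CPU whose
 * buffer it borrowed. */
static _Thread_local unsigned char scf_tls[MAX_SCF];

static unsigned int sum_with_static_buf(const unsigned char *src, size_t n)
{
	unsigned char *scf = scf_tls;
	unsigned int sum = 0;

	memset(scf, 0, MAX_SCF);
	memcpy(scf, src, n);
	for (size_t i = 0; i < n; i++)
		sum += scf[i];
	return sum;
}
```

Both variants compute the same result; only where the scratch buffer lives differs, which is why the patch can swap one for the other without changing either function's return value.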