From patchwork Thu Feb 2 04:47:13 2017
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 9550995
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Anup Patel
To: Vinod Koul, Rob Herring, Mark Rutland, Herbert Xu,
    "David S. Miller", Jassi Brar
Cc: Dan Williams, Ray Jui, Scott Branden, Jon Mason, Rob Rice,
    bcm-kernel-feedback-list@broadcom.com, dmaengine@vger.kernel.org,
    devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-raid@vger.kernel.org, Anup Patel
Subject: [PATCH 3/6] async_tx: Handle DMA devices having support for fewer PQ coefficients
Date: Thu, 2 Feb 2017 10:17:13 +0530
Message-Id: <1486010836-25228-4-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1486010836-25228-1-git-send-email-anup.patel@broadcom.com>
References: <1486010836-25228-1-git-send-email-anup.patel@broadcom.com>
X-Mailing-List: linux-crypto@vger.kernel.org

The DMAENGINE framework assumes that if a DMA device supports PQ
offload, it supports all 256 PQ coefficients. This assumption no
longer holds: the BCM-SBA-RAID offload engine supports PQ offload
with only a limited number of PQ coefficients.

This patch extends the async_tx APIs to handle DMA devices that
support fewer PQ coefficients: when an operation needs more
coefficients than the selected device provides, async_tx now falls
back to the synchronous path.

Signed-off-by: Anup Patel
Reviewed-by: Scott Branden
---
 crypto/async_tx/async_pq.c          |  3 +++
 crypto/async_tx/async_raid6_recov.c | 12 ++++++++++--
 include/linux/dmaengine.h           | 19 +++++++++++++++++++
 include/linux/raid/pq.h             |  3 +++
 4 files changed, 35 insertions(+), 2 deletions(-)
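For driver writers, here is a minimal sketch of how an engine with a
limited coefficient range might use the new API. The foo_* names and
the limit of 30 are hypothetical; dma_set_maxpqcoef() is introduced by
this patch, while dma_cap_set() and dma_async_device_register() are
existing dmaengine API:

static int foo_raid_register(struct foo_raid_dev *fdev)
{
	struct dma_device *dma = &fdev->dma_dev;	/* hypothetical driver state */

	dma_cap_set(DMA_PQ, dma->cap_mask);
	dma->max_pq = 4;

	/*
	 * Advertise that only 30 of the 256 PQ coefficients are
	 * implemented; async_tx will punt to the synchronous path
	 * whenever it needs more than dma_maxpqcoef() reports.
	 */
	dma_set_maxpqcoef(dma, 30);

	return dma_async_device_register(dma);
}
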
Miller" , Jassi Brar Cc: Dan Williams , Ray Jui , Scott Branden , Jon Mason , Rob Rice , bcm-kernel-feedback-list@broadcom.com, dmaengine@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, linux-raid@vger.kernel.org, Anup Patel Subject: [PATCH 3/6] async_tx: Handle DMA devices having support for fewer PQ coefficients Date: Thu, 2 Feb 2017 10:17:13 +0530 Message-Id: <1486010836-25228-4-git-send-email-anup.patel@broadcom.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1486010836-25228-1-git-send-email-anup.patel@broadcom.com> References: <1486010836-25228-1-git-send-email-anup.patel@broadcom.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The DMAENGINE framework assumes that if PQ offload is supported by a DMA device then all 256 PQ coefficients are supported. This assumption does not hold anymore because we now have BCM-SBA-RAID offload engine which supports PQ offload with limited number of PQ coefficients. This patch extends async_tx APIs to handle DMA devices with support for fewer PQ coefficients. Signed-off-by: Anup Patel Reviewed-by: Scott Branden --- crypto/async_tx/async_pq.c | 3 +++ crypto/async_tx/async_raid6_recov.c | 12 ++++++++++-- include/linux/dmaengine.h | 19 +++++++++++++++++++ include/linux/raid/pq.h | 3 +++ 4 files changed, 35 insertions(+), 2 deletions(-) diff --git a/crypto/async_tx/async_pq.c b/crypto/async_tx/async_pq.c index f83de99..16c6526 100644 --- a/crypto/async_tx/async_pq.c +++ b/crypto/async_tx/async_pq.c @@ -187,6 +187,9 @@ async_gen_syndrome(struct page **blocks, unsigned int offset, int disks, BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks))); + if (device && dma_maxpqcoef(device) < src_cnt) + device = NULL; + if (device) unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT); diff --git a/crypto/async_tx/async_raid6_recov.c b/crypto/async_tx/async_raid6_recov.c index 8fab627..2916f95 100644 --- a/crypto/async_tx/async_raid6_recov.c +++ b/crypto/async_tx/async_raid6_recov.c @@ -352,6 +352,7 @@ async_raid6_2data_recov(int disks, size_t bytes, int faila, int failb, { void *scribble = submit->scribble; int non_zero_srcs, i; + struct dma_chan *chan = async_dma_find_channel(DMA_PQ); BUG_ON(faila == failb); if (failb < faila) @@ -359,12 +360,15 @@ async_raid6_2data_recov(int disks, size_t bytes, int faila, int failb, pr_debug("%s: disks: %d len: %zu\n", __func__, disks, bytes); + if (chan && dma_maxpqcoef(chan->device) < RAID6_PQ_MAX_COEF) + chan = NULL; + /* if a dma resource is not available or a scribble buffer is not * available punt to the synchronous path. In the 'dma not * available' case be sure to use the scribble buffer to * preserve the content of 'blocks' as the caller intended. */ - if (!async_dma_find_channel(DMA_PQ) || !scribble) { + if (!chan || !scribble) { void **ptrs = scribble ? scribble : (void **) blocks; async_tx_quiesce(&submit->depend_tx); @@ -432,15 +436,19 @@ async_raid6_datap_recov(int disks, size_t bytes, int faila, void *scribble = submit->scribble; int good_srcs, good, i; struct page *srcs[2]; + struct dma_chan *chan = async_dma_find_channel(DMA_PQ); pr_debug("%s: disks: %d len: %zu\n", __func__, disks, bytes); + if (chan && dma_maxpqcoef(chan->device) < RAID6_PQ_MAX_COEF) + chan = NULL; + /* if a dma resource is not available or a scribble buffer is not * available punt to the synchronous path. 
 	 * available' case be sure to use the scribble buffer to
 	 * preserve the content of 'blocks' as the caller intended.
 	 */
-	if (!async_dma_find_channel(DMA_PQ) || !scribble) {
+	if (!chan || !scribble) {
 		void **ptrs = scribble ? scribble : (void **) blocks;
 
 		async_tx_quiesce(&submit->depend_tx);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index feee6ec..d938a8b 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/raid/pq.h>
 #include
 
 /**
@@ -668,6 +669,7 @@ struct dma_filter {
  * @cap_mask: one or more dma_capability flags
  * @max_xor: maximum number of xor sources, 0 if no capability
  * @max_pq: maximum number of PQ sources and PQ-continue capability
+ * @max_pqcoef: maximum number of PQ coefficients, 0 if all supported
  * @copy_align: alignment shift for memcpy operations
  * @xor_align: alignment shift for xor operations
  * @pq_align: alignment shift for pq operations
@@ -727,11 +729,13 @@ struct dma_device {
 	dma_cap_mask_t cap_mask;
 	unsigned short max_xor;
 	unsigned short max_pq;
+	unsigned short max_pqcoef;
 	enum dmaengine_alignment copy_align;
 	enum dmaengine_alignment xor_align;
 	enum dmaengine_alignment pq_align;
 	enum dmaengine_alignment fill_align;
 	#define DMA_HAS_PQ_CONTINUE (1 << 15)
+	#define DMA_HAS_FEWER_PQ_COEF (1 << 15)
 
 	int dev_id;
 	struct device *dev;
@@ -1122,6 +1126,21 @@ static inline int dma_maxpq(struct dma_device *dma, enum dma_ctrl_flags flags)
 	BUG();
 }
 
+static inline void dma_set_maxpqcoef(struct dma_device *dma,
+				     unsigned short max_pqcoef)
+{
+	if (max_pqcoef < RAID6_PQ_MAX_COEF) {
+		dma->max_pqcoef = max_pqcoef;
+		dma->max_pqcoef |= DMA_HAS_FEWER_PQ_COEF;
+	}
+}
+
+static inline unsigned short dma_maxpqcoef(struct dma_device *dma)
+{
+	return (dma->max_pqcoef & DMA_HAS_FEWER_PQ_COEF) ?
+	       (dma->max_pqcoef & ~DMA_HAS_FEWER_PQ_COEF) : RAID6_PQ_MAX_COEF;
+}
+
 static inline size_t dmaengine_get_icg(bool inc, bool sgl, size_t icg,
 				       size_t dir_icg)
 {
diff --git a/include/linux/raid/pq.h b/include/linux/raid/pq.h
index 30f9453..f3a04bb 100644
--- a/include/linux/raid/pq.h
+++ b/include/linux/raid/pq.h
@@ -15,6 +15,9 @@
 
 #ifdef __KERNEL__
 
+/* Max number of PQ coefficients */
+#define RAID6_PQ_MAX_COEF 256
+
 /* Set to 1 to use kernel-wide empty_zero_page */
 #define RAID6_USE_EMPTY_ZERO_PAGE 0
 #include <linux/blkdev.h>
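
A note on the encoding used by the two new helpers: the reduced limit
is stored in the low bits of max_pqcoef with DMA_HAS_FEWER_PQ_COEF
(bit 15) acting as a "limit present" marker, mirroring how the
existing max_pq field carries DMA_HAS_PQ_CONTINUE in its own top bit.
A driver that never calls dma_set_maxpqcoef() leaves max_pqcoef at 0
with the flag clear, so dma_maxpqcoef() reports the full 256
coefficients. The standalone userspace sketch below replicates the
same bit trick purely for illustration; encode_maxpqcoef() and
decode_maxpqcoef() are hypothetical mirrors of the kernel helpers:

#include <stdio.h>

#define RAID6_PQ_MAX_COEF	256
#define DMA_HAS_FEWER_PQ_COEF	(1 << 15)

/* Mirrors dma_set_maxpqcoef(): store a reduced limit with the flag set. */
static unsigned short encode_maxpqcoef(unsigned short limit)
{
	return limit < RAID6_PQ_MAX_COEF ?
	       (limit | DMA_HAS_FEWER_PQ_COEF) : 0;
}

/* Mirrors dma_maxpqcoef(): flag set means a reduced limit is encoded. */
static unsigned short decode_maxpqcoef(unsigned short raw)
{
	return (raw & DMA_HAS_FEWER_PQ_COEF) ?
	       (raw & ~DMA_HAS_FEWER_PQ_COEF) : RAID6_PQ_MAX_COEF;
}

int main(void)
{
	unsigned short raw = encode_maxpqcoef(30);

	/* Prints "stored 0x801e -> limit 30". */
	printf("stored 0x%04x -> limit %u\n",
	       (unsigned)raw, (unsigned)decode_maxpqcoef(raw));
	/* An engine that never set a limit reports all 256 coefficients. */
	printf("stored 0x%04x -> limit %u\n",
	       0u, (unsigned)decode_maxpqcoef(0));
	return 0;
}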