From patchwork Mon Sep 12 13:03:03 2016
X-Patchwork-Submitter: Sinan Kaya
X-Patchwork-Id: 9326429
From: Sinan Kaya
To: dmaengine@vger.kernel.org, timur@codeaurora.org, devicetree@vger.kernel.org,
	cov@codeaurora.org, vinod.koul@intel.com, jcm@redhat.com
Cc: agross@codeaurora.org, arnd@arndb.de, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Sinan Kaya, Dan Williams,
	linux-kernel@vger.kernel.org
Subject: [PATCH V2 09/10] dmaengine: qcom_hidma: protect common data structures
Date: Mon, 12 Sep 2016 09:03:03 -0400
Message-Id: <1473685384-19913-10-git-send-email-okaya@codeaurora.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473685384-19913-1-git-send-email-okaya@codeaurora.org>
References: <1473685384-19913-1-git-send-email-okaya@codeaurora.org>
X-Mailing-List: dmaengine@vger.kernel.org

When MSI interrupts are supported, the error interrupt and the transfer
interrupts can arrive from multiple processor contexts, and each error
interrupt is a separate MSI interrupt. If the first error interrupt disables
the channel, the remaining error interrupts return gracefully from the
interrupt handler. If an error is observed while servicing completions on the
success path, posting of completions is aborted as soon as the
channel-disabled state is seen; the error interrupt handler takes over from
there and finishes the remaining completions. This prevents success and error
messages from being delivered to the client in mixed order.

Also got rid of the hidma_post_completed function and moved the locking
inside the hidma_ll_int_handler_internal function. Rearranged the assignments
so that variables are updated only while the lock is held.
Signed-off-by: Sinan Kaya
---
 drivers/dma/qcom/hidma_ll.c | 142 ++++++++++++++++++--------------------------
 1 file changed, 58 insertions(+), 84 deletions(-)

diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
index f0630e0..386a64c 100644
--- a/drivers/dma/qcom/hidma_ll.c
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -198,18 +198,50 @@ static void hidma_ll_tre_complete(unsigned long arg)
 	}
 }
 
-static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
-				u8 err_info, u8 err_code)
+/*
+ * Called to handle the interrupt for the channel.
+ * Return a positive number if TRE or EVRE were consumed on this run.
+ * Return a positive number if there are pending TREs or EVREs.
+ * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev, u8 err_info,
+				       u8 err_code)
 {
+	u32 *current_evre;
 	struct hidma_tre *tre;
 	unsigned long flags;
+	u32 evre_write_off;
+	u32 cfg;
+	u32 offset;
+
+	evre_write_off = readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
+	if ((evre_write_off > lldev->evre_ring_size) ||
+	    (evre_write_off % HIDMA_EVRE_SIZE)) {
+		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+		return -EINVAL;
+	}
 
 	spin_lock_irqsave(&lldev->lock, flags);
-	tre = lldev->pending_tre_list[tre_iterator / HIDMA_TRE_SIZE];
+	if (lldev->evre_processed_off == evre_write_off) {
+		spin_unlock_irqrestore(&lldev->lock, flags);
+		return 0;
+	}
+	current_evre = lldev->evre_ring + lldev->evre_processed_off;
+	cfg = current_evre[HIDMA_EVRE_CFG_IDX];
+	if (!err_info) {
+		err_info = cfg >> HIDMA_EVRE_ERRINFO_BIT_POS;
+		err_info &= HIDMA_EVRE_ERRINFO_MASK;
+	}
+	if (!err_code)
+		err_code = (cfg >> HIDMA_EVRE_CODE_BIT_POS) &
+			   HIDMA_EVRE_CODE_MASK;
+
+	offset = lldev->tre_processed_off;
+	tre = lldev->pending_tre_list[offset / HIDMA_TRE_SIZE];
 	if (!tre) {
 		spin_unlock_irqrestore(&lldev->lock, flags);
 		dev_warn(lldev->dev, "tre_index [%d] and tre out of sync\n",
-			 tre_iterator / HIDMA_TRE_SIZE);
+			 lldev->tre_processed_off / HIDMA_TRE_SIZE);
 		return -EINVAL;
 	}
 	lldev->pending_tre_list[tre->tre_index] = NULL;
@@ -223,6 +255,14 @@ static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
 		atomic_set(&lldev->pending_tre_count, 0);
 	}
 
+
+	HIDMA_INCREMENT_ITERATOR(lldev->tre_processed_off, HIDMA_TRE_SIZE,
+				 lldev->tre_ring_size);
+	HIDMA_INCREMENT_ITERATOR(lldev->evre_processed_off, HIDMA_EVRE_SIZE,
+				 lldev->evre_ring_size);
+
+	writel(lldev->evre_processed_off,
+	       lldev->evca + HIDMA_EVCA_DOORBELL_REG);
 	spin_unlock_irqrestore(&lldev->lock, flags);
 
 	tre->err_info = err_info;
@@ -232,86 +272,7 @@ static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
 	kfifo_put(&lldev->handoff_fifo, tre);
 	tasklet_schedule(&lldev->task);
 
-	return 0;
-}
-
-/*
- * Called to handle the interrupt for the channel.
- * Return a positive number if TRE or EVRE were consumed on this run.
- * Return a positive number if there are pending TREs or EVREs.
- * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
- */
-static int hidma_handle_tre_completion(struct hidma_lldev *lldev, u8 err_info,
-				       u8 err_code)
-{
-	u32 evre_ring_size = lldev->evre_ring_size;
-	u32 tre_ring_size = lldev->tre_ring_size;
-	u32 tre_iterator, evre_iterator;
-	u32 num_completed = 0;
-
-	evre_write_off = readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
-	tre_iterator = lldev->tre_processed_off;
-	evre_iterator = lldev->evre_processed_off;
-
-	if ((evre_write_off > evre_ring_size) ||
-	    (evre_write_off % HIDMA_EVRE_SIZE)) {
-		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
-		return 0;
-	}
-
-	/*
-	 * By the time control reaches here the number of EVREs and TREs
-	 * may not match. Only consume the ones that hardware told us.
-	 */
-	while ((evre_iterator != evre_write_off)) {
-		u32 *current_evre = lldev->evre_ring + evre_iterator;
-		u32 cfg;
-
-		cfg = current_evre[HIDMA_EVRE_CFG_IDX];
-		if (!err_info) {
-			err_info = cfg >> HIDMA_EVRE_ERRINFO_BIT_POS;
-			err_info &= HIDMA_EVRE_ERRINFO_MASK;
-		}
-		if (!err_code)
-			err_code = (cfg >> HIDMA_EVRE_CODE_BIT_POS) &
-				   HIDMA_EVRE_CODE_MASK;
-
-		if (hidma_post_completed(lldev, tre_iterator, err_info,
-					 err_code))
-			break;
-
-		HIDMA_INCREMENT_ITERATOR(tre_iterator, HIDMA_TRE_SIZE,
-					 tre_ring_size);
-		HIDMA_INCREMENT_ITERATOR(evre_iterator, HIDMA_EVRE_SIZE,
-					 evre_ring_size);
-
-		/*
-		 * Read the new event descriptor written by the HW.
-		 * As we are processing the delivered events, other events
-		 * get queued to the SW for processing.
-		 */
-		evre_write_off =
-		    readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
-		num_completed++;
-	}
-
-	if (num_completed) {
-		u32 evre_read_off = (lldev->evre_processed_off +
-				     HIDMA_EVRE_SIZE * num_completed);
-		u32 tre_read_off = (lldev->tre_processed_off +
-				    HIDMA_TRE_SIZE * num_completed);
-
-		evre_read_off = evre_read_off % evre_ring_size;
-		tre_read_off = tre_read_off % tre_ring_size;
-
-		writel(evre_read_off, lldev->evca + HIDMA_EVCA_DOORBELL_REG);
-
-		/* record the last processed tre offset */
-		lldev->tre_processed_off = tre_read_off;
-		lldev->evre_processed_off = evre_read_off;
-	}
-
-	return num_completed;
+	return 1;
 }
 
 void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
@@ -399,6 +360,16 @@ static int hidma_ll_reset(struct hidma_lldev *lldev)
  */
 static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
 {
+	if ((lldev->trch_state == HIDMA_CH_DISABLED) ||
+	    (lldev->evch_state == HIDMA_CH_DISABLED)) {
+		dev_err(lldev->dev, "error 0x%x, already disabled...\n",
+			cause);
+
+		/* Clear out pending interrupts */
+		writel(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+		return;
+	}
+
 	if (cause & HIDMA_ERR_INT_MASK) {
 		dev_err(lldev->dev, "error 0x%x, disabling...\n", cause);
 
@@ -430,6 +401,9 @@ static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
 		 */
 		if (hidma_handle_tre_completion(lldev, 0, 0))
 			break;
+		if ((lldev->trch_state == HIDMA_CH_DISABLED) ||
+		    (lldev->evch_state == HIDMA_CH_DISABLED))
+			break;
 	}
 
 	/* We consumed TREs or there are pending TREs or EVREs. */