From patchwork Fri Jul 28 23:31:59 2023
X-Patchwork-Submitter: Dmitry Baryshkov
X-Patchwork-Id: 13332739
From: Dmitry Baryshkov
To: Rob Clark, Sean Paul, Abhinav Kumar, Marijn Suijten
Cc: Stephen Boyd, David Airlie, Daniel Vetter, Bjorn Andersson,
    linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org,
    freedreno@lists.freedesktop.org
Subject: [PATCH v3 5/6] drm/msm/dpu: stop using raw IRQ indices in the kernel output
Date: Sat, 29 Jul 2023 02:31:59 +0300
Message-Id: <20230728233200.151735-6-dmitry.baryshkov@linaro.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230728233200.151735-1-dmitry.baryshkov@linaro.org>
References: <20230728233200.151735-1-dmitry.baryshkov@linaro.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

In preparation for reworking IRQ indices, stop using raw indices in the
kernel output (both printk and debugfs). Instead, use a pair of register
index and bit. This corresponds more closely to the values in the HW
catalog.

Signed-off-by: Dmitry Baryshkov
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c | 49 ++++++++++++-------
 1 file changed, 31 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
index 308b122059cd..6071d3f05b0c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
@@ -199,6 +199,7 @@ static const struct dpu_intr_reg dpu_intr_set_7xxx[] = {
 
 #define DPU_IRQ_REG(irq_idx)	(irq_idx / 32)
 #define DPU_IRQ_MASK(irq_idx)	(BIT(irq_idx % 32))
+#define DPU_IRQ_BIT(irq_idx)	(ffs(DPU_IRQ_MASK(irq_idx)) - 1)
 
 static inline bool dpu_core_irq_is_valid(struct dpu_hw_intr *intr,
 					 int irq_idx)
@@ -221,10 +222,11 @@ static void dpu_core_irq_callback_handler(struct dpu_kms *dpu_kms, int irq_idx)
 {
 	struct dpu_hw_intr_entry *irq_entry = dpu_core_irq_get_entry(dpu_kms->hw_intr, irq_idx);
 
-	VERB("irq_idx=%d\n", irq_idx);
+	VERB("irq=[%d, %d]\n", DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 
 	if (!irq_entry->cb)
-		DRM_ERROR("no registered cb, idx:%d\n", irq_idx);
+		DRM_ERROR("no registered cb, IRQ:[%d, %d]\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 
 	atomic_inc(&irq_entry->count);
 
@@ -306,7 +308,8 @@ static int dpu_hw_intr_enable_irq_locked(struct dpu_hw_intr *intr, int irq_idx)
 		return -EINVAL;
 
 	if (!dpu_core_irq_is_valid(intr, irq_idx)) {
-		pr_err("invalid IRQ index: [%d]\n", irq_idx);
+		pr_err("invalid IRQ: [%d, %d]\n",
+		       DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 		return -EINVAL;
 	}
 
@@ -342,7 +345,8 @@ static int dpu_hw_intr_enable_irq_locked(struct dpu_hw_intr *intr, int irq_idx)
 		intr->cache_irq_mask[reg_idx] = cache_irq_mask;
 	}
 
-	pr_debug("DPU IRQ %d %senabled: MASK:0x%.8lx, CACHE-MASK:0x%.8x\n", irq_idx, dbgstr,
+	pr_debug("DPU IRQ [%d, %d] %senabled: MASK:0x%.8lx, CACHE-MASK:0x%.8x\n",
+		 DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx), dbgstr,
 			DPU_IRQ_MASK(irq_idx), cache_irq_mask);
 
 	return 0;
@@ -359,7 +363,8 @@ static int dpu_hw_intr_disable_irq_locked(struct dpu_hw_intr *intr, int irq_idx)
 		return -EINVAL;
 
 	if (!dpu_core_irq_is_valid(intr, irq_idx)) {
-		pr_err("invalid IRQ index: [%d]\n", irq_idx);
+		pr_err("invalid IRQ: [%d, %d]\n",
+		       DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 		return -EINVAL;
 	}
 
@@ -391,7 +396,8 @@ static int dpu_hw_intr_disable_irq_locked(struct dpu_hw_intr *intr, int irq_idx)
 		intr->cache_irq_mask[reg_idx] = cache_irq_mask;
 	}
 
-	pr_debug("DPU IRQ %d %sdisabled: MASK:0x%.8lx, CACHE-MASK:0x%.8x\n", irq_idx, dbgstr,
+	pr_debug("DPU IRQ [%d, %d] %sdisabled: MASK:0x%.8lx, CACHE-MASK:0x%.8x\n",
+		 DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx), dbgstr,
 			DPU_IRQ_MASK(irq_idx), cache_irq_mask);
 
 	return 0;
@@ -444,7 +450,7 @@ u32 dpu_core_irq_read(struct dpu_kms *dpu_kms, int irq_idx)
 		return 0;
 
 	if (!dpu_core_irq_is_valid(intr, irq_idx)) {
-		pr_err("invalid IRQ index: [%d]\n", irq_idx);
+		pr_err("invalid IRQ: [%d, %d]\n", DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 		return 0;
 	}
 
@@ -520,16 +526,19 @@ int dpu_core_irq_register_callback(struct dpu_kms *dpu_kms, int irq_idx,
 	int ret;
 
 	if (!irq_cb) {
-		DPU_ERROR("invalid ird_idx:%d irq_cb:%ps\n", irq_idx, irq_cb);
+		DPU_ERROR("invalid ird_IRQ:[%d, %d] irq_cb:%ps\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx), irq_cb);
 		return -EINVAL;
 	}
 
 	if (!dpu_core_irq_is_valid(dpu_kms->hw_intr, irq_idx)) {
-		DPU_ERROR("invalid IRQ index: [%d]\n", irq_idx);
+		DPU_ERROR("invalid IRQ: [%d, %d]\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 		return -EINVAL;
 	}
 
-	VERB("[%pS] irq_idx=%d\n", __builtin_return_address(0), irq_idx);
+	VERB("[%pS] irq=[%d, %d]\n", __builtin_return_address(0),
+	     DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 
 	spin_lock_irqsave(&dpu_kms->hw_intr->irq_lock, irq_flags);
 
@@ -548,8 +557,8 @@ int dpu_core_irq_register_callback(struct dpu_kms *dpu_kms, int irq_idx,
 						dpu_kms->hw_intr, irq_idx);
 	if (ret)
-		DPU_ERROR("Fail to enable IRQ for irq_idx:%d\n",
-				irq_idx);
+		DPU_ERROR("Fail to enable IRQ for [%d, %d]\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 	spin_unlock_irqrestore(&dpu_kms->hw_intr->irq_lock, irq_flags);
 
 	trace_dpu_irq_register_success(irq_idx);
 
@@ -564,19 +573,21 @@ int dpu_core_irq_unregister_callback(struct dpu_kms *dpu_kms, int irq_idx)
 	int ret;
 
 	if (!dpu_core_irq_is_valid(dpu_kms->hw_intr, irq_idx)) {
-		DPU_ERROR("invalid IRQ index: [%d]\n", irq_idx);
+		DPU_ERROR("invalid IRQ: [%d, %d]\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 		return -EINVAL;
 	}
 
-	VERB("[%pS] irq_idx=%d\n", __builtin_return_address(0), irq_idx);
+	VERB("[%pS] irq=[%d, %d]\n", __builtin_return_address(0),
+	     DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));
 
 	spin_lock_irqsave(&dpu_kms->hw_intr->irq_lock, irq_flags);
 	trace_dpu_core_irq_unregister_callback(irq_idx);
 
 	ret = dpu_hw_intr_disable_irq_locked(dpu_kms->hw_intr, irq_idx);
 	if (ret)
-		DPU_ERROR("Fail to disable IRQ for irq_idx:%d: %d\n",
-				irq_idx, ret);
+		DPU_ERROR("Fail to disable IRQ for [%d, %d]: %d\n",
+			  DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx), ret);
 
 	irq_entry = dpu_core_irq_get_entry(dpu_kms->hw_intr, irq_idx);
 	irq_entry->cb = NULL;
@@ -606,7 +617,8 @@ static int dpu_debugfs_core_irq_show(struct seq_file *s, void *v)
 		spin_unlock_irqrestore(&dpu_kms->hw_intr->irq_lock, irq_flags);
 
 		if (irq_count || cb)
-			seq_printf(s, "idx:%d irq:%d cb:%ps\n", i, irq_count, cb);
+			seq_printf(s, "IRQ:[%d, %d] count:%d cb:%ps\n",
+				   DPU_IRQ_REG(i), DPU_IRQ_BIT(i), irq_count, cb);
 	}
 
 	return 0;
@@ -652,7 +664,8 @@ void dpu_core_irq_uninstall(struct msm_kms *kms)
 	for (i = 0; i < DPU_NUM_IRQS; i++) {
 		irq_entry = dpu_core_irq_get_entry(dpu_kms->hw_intr, i);
 		if (irq_entry->cb)
-			DPU_ERROR("irq_idx=%d still enabled/registered\n", i);
+			DPU_ERROR("irq=[%d, %d] still enabled/registered\n",
+				  DPU_IRQ_REG(i), DPU_IRQ_BIT(i));
 	}
 
 	dpu_clear_irqs(dpu_kms);
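
For reference, below is a minimal userspace sketch of the index-to-[reg, bit]
mapping that the new log format reports. The DPU_IRQ_* macro bodies are copied
from the first hunk above; BIT(), main(), the sample index, and the use of the
C library ffs() in place of the kernel helper are illustrative assumptions,
not part of the patch.

	#include <stdio.h>
	#include <strings.h>	/* ffs(), standing in for the kernel's ffs() */

	#define BIT(nr)			(1U << (nr))	/* simplified include/linux/bits.h */
	#define DPU_IRQ_REG(irq_idx)	(irq_idx / 32)
	#define DPU_IRQ_MASK(irq_idx)	(BIT(irq_idx % 32))
	#define DPU_IRQ_BIT(irq_idx)	(ffs(DPU_IRQ_MASK(irq_idx)) - 1)

	int main(void)
	{
		/* hypothetical raw index: register 2, bit 5 (2 * 32 + 5 = 69) */
		int irq_idx = 69;

		printf("old: irq_idx=%d\n", irq_idx);	/* old: irq_idx=69 */
		printf("new: irq=[%d, %d]\n",
		       DPU_IRQ_REG(irq_idx), DPU_IRQ_BIT(irq_idx));	/* new: irq=[2, 5] */
		return 0;
	}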