From patchwork Wed Jan 22 20:00:58 2025
X-Patchwork-Submitter: Connor Abbott
X-Patchwork-Id: 13947660
From: Connor Abbott
Date: Wed, 22 Jan 2025 15:00:58 -0500
Subject: [PATCH v3 1/3] iommu/arm-smmu: Fix spurious interrupts with stall-on-fault
Message-Id: <20250122-msm-gpu-fault-fixes-next-v3-1-0afa00158521@gmail.com>
In-Reply-To: <20250122-msm-gpu-fault-fixes-next-v3-0-0afa00158521@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott
On some SMMUv2 implementations, including MMU-500, SMMU_CBn_FSR.SS asserts an interrupt. The only way to clear that bit is to resume the transaction by writing SMMU_CBn_RESUME, but typically resuming the transaction requires complex operations (copying in pages, etc.) that can't be done in IRQ context. drm/msm already has this problem: its fault handler sometimes schedules a job to dump the GPU state and doesn't resume translation until that job completes.

Work around this by disabling context fault interrupts until after the transaction is resumed. Because other context banks can share an IRQ line, we may still get an interrupt intended for another context bank, but in that case only SMMU_CBn_FSR.SS will be asserted and, since interrupts are disabled while a stall is pending, we can safely skip it. This is accomplished by removing the SS bit from ARM_SMMU_CB_FSR_FAULT. SMMU_CBn_FSR.SS won't be asserted unless an external user enabled stall-on-fault, and such users are expected to resume the translation and re-enable interrupts.
Signed-off-by: Connor Abbott
Reviewed-by: Robin Murphy
---
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 15 ++++++++++-
 drivers/iommu/arm/arm-smmu/arm-smmu.c      | 41 +++++++++++++++++++++++++++++-
 drivers/iommu/arm/arm-smmu/arm-smmu.h      |  1 -
 3 files changed, 54 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index 59d02687280e8d37b5e944619fcfe4ebd1bd6926..7d86e9972094eb4d304b24259f4ed9a4820cabc7 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -125,12 +125,25 @@ static void qcom_adreno_smmu_resume_translation(const void *cookie, bool termina
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	u32 reg = 0;
+	u32 reg = 0, sctlr;
+	unsigned long flags;
 
 	if (terminate)
 		reg |= ARM_SMMU_RESUME_TERMINATE;
 
+	spin_lock_irqsave(&smmu_domain->cb_lock, flags);
+
 	arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_RESUME, reg);
+
+	/*
+	 * Re-enable interrupts after they were disabled by
+	 * arm_smmu_context_fault().
+	 */
+	sctlr = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR);
+	sctlr |= ARM_SMMU_SCTLR_CFIE;
+	arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR, sctlr);
+
+	spin_unlock_irqrestore(&smmu_domain->cb_lock, flags);
 }
 
 static void qcom_adreno_smmu_set_prr_bit(const void *cookie, bool set)
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 79afc92e1d8b984dd35c469a3f283ad0c78f3d26..ca1ff59015a63912f0f9c5256452b2b2efa928f1 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -463,13 +463,52 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev)
 	if (!(cfi.fsr & ARM_SMMU_CB_FSR_FAULT))
 		return IRQ_NONE;
 
+	/*
+	 * On some implementations FSR.SS asserts a context fault
+	 * interrupt. We do not want this behavior, because resolving the
+	 * original context fault typically requires operations that cannot be
+	 * performed in IRQ context but leaving the stall unacknowledged will
+	 * immediately lead to another spurious interrupt as FSR.SS is still
+	 * set. Work around this by disabling interrupts for this context bank.
+	 * It's expected that interrupts are re-enabled after resuming the
+	 * translation.
+	 *
+	 * We have to do this before report_iommu_fault() so that we don't
+	 * leave interrupts disabled in case the downstream user decides the
+	 * fault can be resolved inside its fault handler.
+	 *
+	 * There is a possible race if there are multiple context banks sharing
+	 * the same interrupt and both signal an interrupt in between writing
+	 * RESUME and SCTLR. We could disable interrupts here before we
+	 * re-enable them in the resume handler, leaving interrupts enabled.
+	 * Lock the write to serialize it with the resume handler.
+	 */
+	if (cfi.fsr & ARM_SMMU_CB_FSR_SS) {
+		u32 val;
+
+		spin_lock(&smmu_domain->cb_lock);
+		val = arm_smmu_cb_read(smmu, idx, ARM_SMMU_CB_SCTLR);
+		val &= ~ARM_SMMU_SCTLR_CFIE;
+		arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, val);
+		spin_unlock(&smmu_domain->cb_lock);
+	}
+
+	/*
+	 * The SMMUv2 architecture specification says that if stall-on-fault is
+	 * enabled the correct sequence is to write to SMMU_CBn_FSR to clear
+	 * the fault and then write to SMMU_CBn_RESUME. Clear the interrupt
+	 * first before running the user's fault handler to make sure we follow
+	 * this sequence. It should be ok if there is another fault in the
+	 * meantime because we have already read the fault info.
+	 */
+	arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_FSR, cfi.fsr);
+
 	ret = report_iommu_fault(&smmu_domain->domain, NULL, cfi.iova,
 		cfi.fsynr & ARM_SMMU_CB_FSYNR0_WNR ?
 		IOMMU_FAULT_WRITE : IOMMU_FAULT_READ);
 
 	if (ret == -ENOSYS && __ratelimit(&rs))
 		arm_smmu_print_context_fault_info(smmu, idx, &cfi);
 
-	arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_FSR, cfi.fsr);
 
 	return IRQ_HANDLED;
 }
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h
index 2dbf3243b5ad2db01e17fb26c26c838942a491be..789c64ff3eb9944c8af37426e005241a8288da20 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.h
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h
@@ -216,7 +216,6 @@ enum arm_smmu_cbar_type {
 				 ARM_SMMU_CB_FSR_TLBLKF)
 #define ARM_SMMU_CB_FSR_FAULT	(ARM_SMMU_CB_FSR_MULTI |	\
-				 ARM_SMMU_CB_FSR_SS |		\
 				 ARM_SMMU_CB_FSR_UUT |		\
 				 ARM_SMMU_CB_FSR_EF |		\
 				 ARM_SMMU_CB_FSR_PF |		\

From patchwork Wed Jan 22 20:00:59 2025
X-Patchwork-Submitter: Connor Abbott
X-Patchwork-Id: 13947658
From: Connor Abbott
Date: Wed, 22 Jan 2025 15:00:59 -0500
Subject: [PATCH v3 2/3] iommu/arm-smmu-qcom: Make set_stall work when the device is on
Message-Id: <20250122-msm-gpu-fault-fixes-next-v3-2-0afa00158521@gmail.com>
In-Reply-To: <20250122-msm-gpu-fault-fixes-next-v3-0-0afa00158521@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott

Up until now we have only called the set_stall callback during initialization, when the device is off.
But we will soon start calling it to temporarily disable stall-on-fault while the device is on, so handle that case by checking whether the device is on and, if it is, writing SCTLR directly.

Signed-off-by: Connor Abbott
---
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index 7d86e9972094eb4d304b24259f4ed9a4820cabc7..6693d8f8e3ae4e970ca9d7f549321ab4f59e8b32 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -112,12 +112,36 @@ static void qcom_adreno_smmu_set_stall(const void *cookie, bool enabled)
 {
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
-	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu_domain->smmu);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
+	u32 mask = BIT(cfg->cbndx);
+	bool stall_changed = !!(qsmmu->stall_enabled & mask) != enabled;
+	unsigned long flags;
 
 	if (enabled)
-		qsmmu->stall_enabled |= BIT(cfg->cbndx);
+		qsmmu->stall_enabled |= mask;
 	else
-		qsmmu->stall_enabled &= ~BIT(cfg->cbndx);
+		qsmmu->stall_enabled &= ~mask;
+
+	/*
+	 * If the device is on and we changed the setting, update the register.
+	 */
+	if (stall_changed && pm_runtime_get_if_active(smmu->dev) > 0) {
+		spin_lock_irqsave(&smmu_domain->cb_lock, flags);
+
+		u32 reg = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR);
+
+		if (enabled)
+			reg |= ARM_SMMU_SCTLR_CFCFG;
+		else
+			reg &= ~ARM_SMMU_SCTLR_CFCFG;
+
+		arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR, reg);
+
+		spin_unlock_irqrestore(&smmu_domain->cb_lock, flags);
+
+		pm_runtime_put_autosuspend(smmu->dev);
+	}
 }
 
 static void qcom_adreno_smmu_resume_translation(const void *cookie, bool terminate)

From patchwork Wed Jan 22 20:01:00 2025
X-Patchwork-Submitter: Connor Abbott
X-Patchwork-Id: 13947661
From: Connor Abbott
Date: Wed, 22 Jan 2025 15:01:00 -0500
Subject: [PATCH v3 3/3] drm/msm: Temporarily disable stall-on-fault after a page fault
Message-Id: <20250122-msm-gpu-fault-fixes-next-v3-3-0afa00158521@gmail.com>
In-Reply-To: <20250122-msm-gpu-fault-fixes-next-v3-0-0afa00158521@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott
When things go wrong, the GPU is capable of quickly generating millions of faulting translation requests per second. When that happens, in the stall-on-fault model each access will stall until it wins the race to signal the fault and then the RESUME register is written. This slows processing page faults to a crawl as the GPU can generate faults much faster than the CPU can acknowledge them. It also means that all available resources in the SMMU are saturated waiting for the stalled transactions, so that other transactions, such as those generated by the GMU, which shares a context bank with the GPU, cannot proceed. This causes a GMU watchdog timeout, which leads to a failed reset (because GX cannot collapse when there is a transaction pending) and a permanently hung GPU.

On older platforms with qcom,smmu-v2, it seems that when one transaction is stalled, subsequent faulting transactions are terminated, which avoids this problem, but the MMU-500 follows the spec here.

To work around these problems, disable stall-on-fault as soon as we get a page fault until a cooldown period after pagefaults stop. This allows the GMU some guaranteed time to continue working.
We only use stall-on-fault to halt the GPU while we collect a devcoredump, and we always terminate the transaction afterward, so it's fine to miss some subsequent page faults. We also keep it disabled so long as the current devcoredump hasn't been deleted, because in that case we likely won't capture another one if there's a fault.

After this commit HFI messages still occasionally time out, because the crashdump handler doesn't run fast enough to let the GMU resume, but the driver seems to recover from it. This will probably go away after the HFI timeout is increased.

Signed-off-by: Connor Abbott
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   |  2 ++
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++++
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 42 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h | 24 +++++++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c         |  9 +++++++
 drivers/gpu/drm/msm/msm_mmu.h           |  1 +
 6 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 71dca78cd7a5324e9ff5b14f173e2209fa42e196..670141531112c9d29cef8ef1fd51b74759fdd6d2 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -131,6 +131,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_check_and_reenable_stall(adreno_gpu);
+
 	if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) {
 		ring->cur_ctx_seqno = 0;
 		a5xx_submit_in_rb(gpu, submit);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0ae29a7c8a4d3f74236a35cc919f69d5c0a384a0..5a34cd2109a2d74c92841448a61ccb0d4f34e264 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -212,6 +212,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_check_and_reenable_stall(adreno_gpu);
+
 	a6xx_set_pagetable(a6xx_gpu, ring, submit);
 
 	get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0),
@@ -335,6 +337,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_check_and_reenable_stall(adreno_gpu);
+
 	/*
 	 * Toggle concurrent binning for pagetable switch and set the thread to
 	 * BR since only it can execute the pagetable switch packets.
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 1238f326597808eb28b4c6822cbd41a26e555eb9..bac586101dc0494f46b069a8440a45825dfe9b5e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -246,16 +246,53 @@ u64 adreno_private_address_space_size(struct msm_gpu *gpu)
 	return SZ_4G;
 }
 
+void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
+{
+	struct msm_gpu *gpu = &adreno_gpu->base;
+	unsigned long flags;
+
+	/*
+	 * Wait until the cooldown period has passed and we would actually
+	 * collect a crashdump to re-enable stall-on-fault.
+	 */
+	spin_lock_irqsave(&adreno_gpu->fault_stall_lock, flags);
+	if (!adreno_gpu->stall_enabled &&
+	    ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) &&
+	    !READ_ONCE(gpu->crashstate)) {
+		adreno_gpu->stall_enabled = true;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true);
+	}
+	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags);
+}
+
 #define ARM_SMMU_FSR_TF			BIT(1)
 #define ARM_SMMU_FSR_PF			BIT(3)
 #define ARM_SMMU_FSR_EF			BIT(4)
+#define ARM_SMMU_FSR_SS			BIT(30)
 
 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4])
 {
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	const char *type = "UNKNOWN";
-	bool do_devcoredump = info && !READ_ONCE(gpu->crashstate);
+	bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
+			      !READ_ONCE(gpu->crashstate);
+	unsigned long irq_flags;
+
+	/*
+	 * In case there is a subsequent storm of pagefaults, disable
+	 * stall-on-fault for at least half a second.
+	 */
+	spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags);
+	if (adreno_gpu->stall_enabled) {
+		adreno_gpu->stall_enabled = false;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false);
+	}
+	adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
+	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags);
 
 	/*
 	 * If we aren't going to be resuming later from fault_worker, then do
@@ -1143,6 +1180,9 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 			adreno_gpu->info->inactive_period);
 	pm_runtime_use_autosuspend(dev);
 
+	spin_lock_init(&adreno_gpu->fault_stall_lock);
+	adreno_gpu->stall_enabled = true;
+
 	return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base, gpu_name,
 			&adreno_gpu_config);
 }
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index dcf454629ce037b2a8274a6699674ad754ce1f07..a528036b46216bd898f6d48c5fb0555c4c4b053b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -205,6 +205,28 @@ struct adreno_gpu {
 	/* firmware: */
 	const struct firmware *fw[ADRENO_FW_MAX];
 
+	/**
+	 * fault_stall_lock:
+	 *
+	 * Serialize changes to stall-on-fault state.
+	 */
+	spinlock_t fault_stall_lock;
+
+	/**
+	 * stall_reenable_time:
+	 *
+	 * If stall_enabled is false, when to reenable stall-on-fault.
+	 */
+	ktime_t stall_reenable_time;
+
+	/**
+	 * stall_enabled:
+	 *
+	 * Whether stall-on-fault is currently enabled.
+	 */
+	bool stall_enabled;
+
 	struct {
 		/**
 		 * @rgb565_predicator: Unknown, introduced with A650 family,
@@ -629,6 +651,8 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4]);
 
+void adreno_check_and_reenable_stall(struct adreno_gpu *gpu);
+
 int adreno_read_speedbin(struct device *dev, u32 *speedbin);
 
 /*
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2a94e82316f95c5f9dcc37ef0a4664a29e3492b2..8d5380e6dcc217c7c209b51527bf15748b3ada71 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -351,6 +351,14 @@ static void msm_iommu_resume_translation(struct msm_mmu *mmu)
 		adreno_smmu->resume_translation(adreno_smmu->cookie, true);
 }
 
+static void msm_iommu_set_stall(struct msm_mmu *mmu, bool enable)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(mmu->dev);
+
+	if (adreno_smmu->set_stall)
+		adreno_smmu->set_stall(adreno_smmu->cookie, enable);
+}
+
 static void msm_iommu_detach(struct msm_mmu *mmu)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
@@ -399,6 +407,7 @@ static const struct msm_mmu_funcs funcs = {
 	.unmap = msm_iommu_unmap,
 	.destroy = msm_iommu_destroy,
 	.resume_translation = msm_iommu_resume_translation,
+	.set_stall = msm_iommu_set_stall,
 };
 
 struct msm_mmu *msm_iommu_new(struct device *dev, unsigned long quirks)
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 88af4f490881f2a6789ae2d03e1c02d10046331a..2694a356a17904e7572b767b16ed0cee806406cf 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -16,6 +16,7 @@ struct msm_mmu_funcs {
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
+	void (*set_stall)(struct msm_mmu *mmu, bool enable);
 };
 
 enum msm_mmu_type {