From patchwork Wed Apr 13 06:54:57 2022
X-Patchwork-Submitter: Sumit Garg
X-Patchwork-Id: 12811622
From: Sumit Garg
To: linux-arm-kernel@lists.infradead.org, dianders@chromium.org, will@kernel.org, liwei391@huawei.com
Cc: catalin.marinas@arm.com,
    mark.rutland@arm.com, mhiramat@kernel.org, daniel.thompson@linaro.org, jason.wessel@windriver.com, maz@kernel.org, linux-kernel@vger.kernel.org, Sumit Garg
Subject: [PATCH v2 1/2] arm64: entry: Skip single stepping interrupt handlers
Date: Wed, 13 Apr 2022 12:24:57 +0530
Message-Id: <20220413065458.88541-2-sumit.garg@linaro.org>
In-Reply-To: <20220413065458.88541-1-sumit.garg@linaro.org>
References: <20220413065458.88541-1-sumit.garg@linaro.org>

The current implementation allows single stepping into interrupt handlers
for interrupts received while single stepping. But interrupt handlers
aren't something the user expects to debug. Moreover, single stepping
interrupt handlers is risky, as it may sometimes lead to unbalanced
locking when we resume from single-step debug.

Fix the broken single-step implementation by skipping single-step over
interrupt handlers. The approach is: when we receive an interrupt from
EL1, check whether we are single stepping (pstate.SS). If so, save
MDSCR_EL1.SS and clear the register bit if it was set. Then unmask only
D and leave I set. On return from the interrupt, set D and restore
MDSCR_EL1.SS. Along with this, skip the reschedule if we were stepping.
Suggested-by: Will Deacon
Signed-off-by: Sumit Garg
---
 arch/arm64/kernel/entry-common.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 878c65aa7206..dd2d3af615de 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -458,19 +458,35 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	arm64_preempt_schedule_irq();
+	/* Don't reschedule in case we are single stepping */
+	if (!(regs->pstate & DBG_SPSR_SS))
+		arm64_preempt_schedule_irq();
 
 	exit_to_kernel_mode(regs);
 }
+
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
+	unsigned long reg;
+
+	/* Disable single stepping within interrupt handler */
+	if (regs->pstate & DBG_SPSR_SS) {
+		reg = read_sysreg(mdscr_el1);
+		write_sysreg(reg & ~DBG_MDSCR_SS, mdscr_el1);
+	}
+
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		__el1_pnmi(regs, handler);
 	else
 		__el1_irq(regs, handler);
+
+	if (regs->pstate & DBG_SPSR_SS) {
+		write_sysreg(DAIF_PROCCTX_NOIRQ | PSR_D_BIT, daif);
+		write_sysreg(reg, mdscr_el1);
+	}
 }
 
 asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)