From patchwork Thu Feb 2 07:31:47 2023
X-Patchwork-Submitter: Sumit Garg
X-Patchwork-Id: 13125434
From: Sumit Garg
To: will@kernel.org, catalin.marinas@arm.com
Cc: mark.rutland@arm.com, daniel.thompson@linaro.org, dianders@chromium.org,
    liwei391@huawei.com, mhiramat@kernel.org, maz@kernel.org, ardb@kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sumit Garg
Subject: [PATCH v6 1/2] arm64: entry: Skip single stepping into interrupt handlers
Date: Thu, 2 Feb 2023 13:01:47 +0530
Message-Id: <20230202073148.657746-2-sumit.garg@linaro.org>
In-Reply-To: <20230202073148.657746-1-sumit.garg@linaro.org>
References: <20230202073148.657746-1-sumit.garg@linaro.org>

Currently, on systems where the timer interrupt (or any other
fast-at-human-scale periodic interrupt) is active, it is impossible to
single-step any code with interrupts unmasked, because we will always
end up stepping into the timer interrupt instead of stepping the user
code. The common user's expectation while single stepping is that the
system will stop at PC+4, or at PC+I for a branch that gets taken,
relative to the instruction being stepped.

So fix the broken single-step implementation by skipping single stepping
into interrupt handlers. The methodology is: when we receive an
interrupt from EL1, check if we are single stepping (pstate.SS).
If yes, then we save MDSCR_EL1.SS and clear the register bit if it was
set. Then we unmask only D and leave I set. On return from the
interrupt, set D and restore MDSCR_EL1.SS. Along with this, skip the
reschedule if we were stepping.

Suggested-by: Will Deacon
Signed-off-by: Sumit Garg
Tested-by: Douglas Anderson
Acked-by: Daniel Thompson
Tested-by: Daniel Thompson
---
 arch/arm64/kernel/entry-common.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index cce1167199e3..568481f66977 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -231,11 +231,15 @@ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 #define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
 #endif
 
-static void __sched arm64_preempt_schedule_irq(void)
+static void __sched arm64_preempt_schedule_irq(struct pt_regs *regs)
 {
 	if (!need_irq_preemption())
 		return;
 
+	/* Don't reschedule in case we are single stepping */
+	if (regs->pstate & DBG_SPSR_SS)
+		return;
+
 	/*
 	 * Note: thread_info::preempt_count includes both thread_info::count
 	 * and thread_info::need_resched, and is not equivalent to
@@ -471,19 +475,33 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	arm64_preempt_schedule_irq();
+	arm64_preempt_schedule_irq(regs);
 
 	exit_to_kernel_mode(regs);
 }
+
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
+	unsigned long mdscr;
+
+	/* Disable single stepping within interrupt handler */
+	if (regs->pstate & DBG_SPSR_SS) {
+		mdscr = read_sysreg(mdscr_el1);
+		write_sysreg(mdscr & ~DBG_MDSCR_SS, mdscr_el1);
+	}
+
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		__el1_pnmi(regs, handler);
 	else
 		__el1_irq(regs, handler);
+
+	if (regs->pstate & DBG_SPSR_SS) {
+		write_sysreg(DAIF_PROCCTX_NOIRQ | PSR_D_BIT, daif);
+		write_sysreg(mdscr, mdscr_el1);
+	}
 }
 
 asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)