From patchwork Mon Jul 18 07:18:21 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12920917
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Subject: [PATCH 1/5] xen/wait: Drop vestigial remnants of TRAP_regs_partial
Date: Mon, 18 Jul 2022 08:18:21 +0100
Message-ID: <20220718071825.22113-2-andrew.cooper3@citrix.com>
In-Reply-To: <20220718071825.22113-1-andrew.cooper3@citrix.com>
References: <20220718071825.22113-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

The preservation of entry_vector was introduced with ecf9846a6a20 ("x86:
save/restore only partial register state where possible") where
TRAP_regs_partial was introduced, but missed from f9eb74789af7 ("x86/entry:
Remove support for partial cpu_user_regs frames") where TRAP_regs_partial
was removed.

Fixes: f9eb74789af7 ("x86/entry: Remove support for partial cpu_user_regs frames")
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
---
 xen/common/wait.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/xen/common/wait.c b/xen/common/wait.c
index 9276d76464fb..3ebb884fe738 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -124,7 +124,6 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
     struct cpu_info *cpu_info = get_cpu_info();
     struct vcpu *curr = current;
     unsigned long dummy;
-    u32 entry_vector = cpu_info->guest_cpu_user_regs.entry_vector;
 
     ASSERT(wqv->esp == 0);
 
@@ -169,8 +168,6 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
         for ( ; ; )
             do_softirq();
     }
-
-    cpu_info->guest_cpu_user_regs.entry_vector = entry_vector;
 }
 
 static void __finish_wait(struct waitqueue_vcpu *wqv)

From patchwork Mon Jul 18 07:18:22 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12920916
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Subject: [PATCH 2/5] xen/wait: Extend the description of how this logic actually works
Date: Mon, 18 Jul 2022 08:18:22 +0100
Message-ID: <20220718071825.22113-3-andrew.cooper3@citrix.com>
In-Reply-To: <20220718071825.22113-1-andrew.cooper3@citrix.com>
References: <20220718071825.22113-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
---
 xen/common/wait.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/xen/common/wait.c b/xen/common/wait.c
index 3ebb884fe738..4dcfa17a8a3f 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -137,7 +137,19 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
             do_softirq();
     }
-    /* Hand-rolled setjmp(). */
+    /*
+     * Hand-rolled setjmp().
+     *
+     * __prepare_to_wait() is the leaf of a deep calltree. Preserve the GPRs,
+     * bounds check what we want to stash in wqv->stack, copy the active stack
+     * (up to cpu_info) into wqv->stack, then return normally. Our caller
+     * will shortly schedule() and discard the current context.
+     *
+     * The copy out is performed with a rep movsb. When
+     * check_wakeup_from_wait() longjmp()'s back into us, %rsp is pre-adjusted
+     * to be suitable and %rsi/%rdi are swapped, so the rep movsb instead
+     * copies in from wqv->stack over the active stack.
+     */
     asm volatile (
         "push %%rax; push %%rbx; push %%rdx; push %%rbp;"
         "push %%r8; push %%r9; push %%r10; push %%r11;"
@@ -199,9 +211,18 @@ void check_wakeup_from_wait(void)
     }
 
     /*
-     * Hand-rolled longjmp(). Returns to __prepare_to_wait(), and lands on a
-     * `rep movs` instruction. All other GPRs are restored from the stack, so
-     * are available for use here.
+     * Hand-rolled longjmp().
+     *
+     * check_wakeup_from_wait() is always called with a shallow stack,
+     * immediately after the vCPU has been rescheduled.
+     *
+     * Adjust %rsp to be the correct depth for the (deeper) stack we want to
+     * restore, then prepare %rsi, %rdi and %rcx such that when we intercept
+     * the rep movs in __prepare_to_wait(), it copies from wqv->stack over the
+     * active stack.
+     *
+     * All other GPRs are available for use; they're either restored from
+     * wqv->stack or explicitly clobbered.
      */
     asm volatile (
         "mov %1,%%"__OP"sp; jmp .L_wq_resume;"

From patchwork Mon Jul 18 07:18:23 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12920915
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Subject: [PATCH 3/5] xen/wait: Minor asm improvements
Date: Mon, 18 Jul 2022 08:18:23 +0100
Message-ID: <20220718071825.22113-4-andrew.cooper3@citrix.com>
In-Reply-To: <20220718071825.22113-1-andrew.cooper3@citrix.com>
References: <20220718071825.22113-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

There is no point preserving all registers. Instead, preserve an arbitrary
6 registers, and list the rest as clobbered. This does not alter the
register scheduling at all, but does reduce the amount of state needing
saving.

Use a named parameter for page size, instead of needing to parse which one
is parameter 3. Adjust the formatting of the parameters slightly to
simplify the diff of the subsequent patch.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
---
 xen/common/wait.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/xen/common/wait.c b/xen/common/wait.c
index 4dcfa17a8a3f..4bc030d1a09d 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -151,13 +151,12 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
      * copies in from wqv->stack over the active stack.
      */
     asm volatile (
-        "push %%rax; push %%rbx; push %%rdx; push %%rbp;"
-        "push %%r8; push %%r9; push %%r10; push %%r11;"
-        "push %%r12; push %%r13; push %%r14; push %%r15;"
+        "push %%rbx; push %%rbp; push %%r12;"
+        "push %%r13; push %%r14; push %%r15;"
 
         "sub %%esp,%%ecx;"
-        "cmp %3,%%ecx;"
-        "ja .L_skip;"
+        "cmp %[sz], %%ecx;"
+        "ja .L_skip;"          /* Bail if >4k */
         "mov %%rsp,%%rsi;"
 
         /* check_wakeup_from_wait() longjmp()'s to this point.
          */
@@ -165,12 +164,12 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
         "mov %%rsp,%%rsi;"
 
         ".L_skip:"
-        "pop %%r15; pop %%r14; pop %%r13; pop %%r12;"
-        "pop %%r11; pop %%r10; pop %%r9; pop %%r8;"
-        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
+        "pop %%r15; pop %%r14; pop %%r13;"
+        "pop %%r12; pop %%rbp; pop %%rbx;"
         : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
-        : "i" (PAGE_SIZE), "0" (0), "1" (cpu_info), "2" (wqv->stack)
-        : "memory" );
+        : "0" (0), "1" (cpu_info), "2" (wqv->stack),
+          [sz] "i" (PAGE_SIZE)
+        : "memory", "rax", "rdx", "r8", "r9", "r10", "r11" );
 
     if ( unlikely(wqv->esp == 0) )
     {
@@ -224,11 +223,12 @@ void check_wakeup_from_wait(void)
      * All other GPRs are available for use; they're either restored from
      * wqv->stack or explicitly clobbered.
      */
-    asm volatile (
-        "mov %1,%%"__OP"sp; jmp .L_wq_resume;"
-        : : "S" (wqv->stack), "D" (wqv->esp),
-        "c" ((char *)get_cpu_info() - (char *)wqv->esp)
-        : "memory" );
+    asm volatile ( "mov %%rdi, %%rsp;"
+                   "jmp .L_wq_resume;"
+                   :
+                   : "S" (wqv->stack), "D" (wqv->esp),
+                     "c" ((char *)get_cpu_info() - (char *)wqv->esp)
+                   : "memory" );
     unreachable();
 }

From patchwork Mon Jul 18 07:18:24 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12920918
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Subject: [PATCH 4/5] xen/wait: Use relative stack adjustments
Date: Mon, 18 Jul 2022 08:18:24 +0100
Message-ID: <20220718071825.22113-5-andrew.cooper3@citrix.com>
In-Reply-To: <20220718071825.22113-1-andrew.cooper3@citrix.com>
References: <20220718071825.22113-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

The waitqueue's esp field is overloaded. It serves both as an indication
that the waitqueue is in use, and as a direction to check_wakeup_from_wait()
as to where to adjust the stack pointer to, but using an absolute pointer
comes with the cost of requiring the vCPU to wake up on the same pCPU it
went to sleep on.

Instead, have the waitqueue just keep track of how much data is on
wqv->stack. This is no practical change in __prepare_to_wait() (it already
calculated the delta), but splits the result out of the (also overloaded)
%rsi output parameter by using a separate register instead.

check_wakeup_from_wait() has a bit more work to do. It now needs to
calculate the adjustment to %rsp rather than having the new %rsp provided as
a parameter.
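The relative-adjustment idea can be modelled in plain C. In this hedged sketch, `struct saved_ctx`, `ctx_save()` and `ctx_restore()` are illustrative names, not Xen's API: recording only the byte count ("used", analogous to wqv->used) rather than an absolute stack pointer lets the restore side recompute the destination from whichever stack top it currently has, so the saved context is position-independent.

```c
#include <assert.h>
#include <string.h>

/* Illustrative model: save the in-use portion of a stack by recording
 * only how many bytes were used, never an absolute pointer. */
struct saved_ctx {
    unsigned char stack[4096]; /* analogous to wqv->stack */
    unsigned int used;         /* analogous to wqv->used  */
};

static void ctx_save(struct saved_ctx *ctx, const unsigned char *sp,
                     const unsigned char *stack_top)
{
    ctx->used = (unsigned int)(stack_top - sp); /* delta, like sub %esp,%ecx */
    assert(ctx->used <= sizeof(ctx->stack));    /* like the "ja .L_skip" bound */
    memcpy(ctx->stack, sp, ctx->used);
}

static unsigned char *ctx_restore(struct saved_ctx *ctx,
                                  unsigned char *stack_top)
{
    /* Recompute the destination relative to the *current* stack top;
     * this works even if the stack lives at a different address now. */
    unsigned char *sp = stack_top - ctx->used;

    memcpy(sp, ctx->stack, ctx->used);
    return sp;
}
```

Because the restore destination is derived from the live stack top, a context saved against one stack can be replayed onto another, which is exactly the property that lets the final patch drop the wake-on-same-pCPU restriction.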
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
---
 xen/common/wait.c | 44 ++++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

diff --git a/xen/common/wait.c b/xen/common/wait.c
index 4bc030d1a09d..4f1daf650bc4 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -32,8 +32,8 @@ struct waitqueue_vcpu {
      * Xen/x86 does not have per-vcpu hypervisor stacks. So we must save the
      * hypervisor context before sleeping (descheduling), setjmp/longjmp-style.
      */
-    void *esp;
     char *stack;
+    unsigned int used;
 #endif
 };
 
@@ -121,11 +121,11 @@ void wake_up_all(struct waitqueue_head *wq)
 
 static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
 {
-    struct cpu_info *cpu_info = get_cpu_info();
     struct vcpu *curr = current;
     unsigned long dummy;
+    unsigned int used;
 
-    ASSERT(wqv->esp == 0);
+    ASSERT(wqv->used == 0);
 
     /* Save current VCPU affinity; force wakeup on *this* CPU only. */
     if ( vcpu_temporary_affinity(curr, smp_processor_id(), VCPU_AFFINITY_WAIT) )
@@ -154,24 +154,25 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
         "push %%rbx; push %%rbp; push %%r12;"
         "push %%r13; push %%r14; push %%r15;"
 
-        "sub %%esp,%%ecx;"
+        "sub %%esp, %%ecx;"    /* ecx = delta to cpu_info */
         "cmp %[sz], %%ecx;"
         "ja .L_skip;"          /* Bail if >4k */
-        "mov %%rsp,%%rsi;"
+
+        "mov %%ecx, %%eax;"
+        "mov %%rsp, %%rsi;"    /* Copy from the stack, into wqv->stack */
 
         /* check_wakeup_from_wait() longjmp()'s to this point.
          */
         ".L_wq_resume: rep movsb;"
-        "mov %%rsp,%%rsi;"
 
         ".L_skip:"
         "pop %%r15; pop %%r14; pop %%r13;"
         "pop %%r12; pop %%rbp; pop %%rbx;"
-        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
-        : "0" (0), "1" (cpu_info), "2" (wqv->stack),
+        : "=a" (used), "=D" (dummy), "=c" (dummy), "=&S" (dummy)
+        : "a" (0), "D" (wqv->stack), "c" (get_cpu_info()),
          [sz] "i" (PAGE_SIZE)
-        : "memory", "rax", "rdx", "r8", "r9", "r10", "r11" );
+        : "memory", "rdx", "r8", "r9", "r10", "r11" );
 
-    if ( unlikely(wqv->esp == 0) )
+    if ( unlikely(used > PAGE_SIZE) )
     {
         gdprintk(XENLOG_ERR, "Stack too large in %s\n", __func__);
         domain_crash(curr->domain);
@@ -179,11 +180,13 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
         for ( ; ; )
             do_softirq();
     }
+
+    wqv->used = used;
 }
 
 static void __finish_wait(struct waitqueue_vcpu *wqv)
 {
-    wqv->esp = NULL;
+    wqv->used = 0;
     vcpu_temporary_affinity(current, NR_CPUS, VCPU_AFFINITY_WAIT);
 }
 
@@ -191,10 +194,11 @@ void check_wakeup_from_wait(void)
 {
     struct vcpu *curr = current;
     struct waitqueue_vcpu *wqv = curr->waitqueue_vcpu;
+    unsigned long tmp;
 
     ASSERT(list_empty(&wqv->list));
 
-    if ( likely(wqv->esp == NULL) )
+    if ( likely(!wqv->used) )
         return;
 
     /* Check if we are still pinned. */
@@ -220,14 +224,22 @@ void check_wakeup_from_wait(void)
      * the rep movs in __prepare_to_wait(), it copies from wqv->stack over the
      * active stack.
      *
+     * We are also bound by __prepare_to_wait()'s output constraints, so %eax
+     * needs to be wqv->used.
+     *
      * All other GPRs are available for use; they're either restored from
      * wqv->stack or explicitly clobbered.
      */
-    asm volatile ( "mov %%rdi, %%rsp;"
+    asm volatile ( "sub %%esp, %k[var];" /* var = delta to cpu_info */
+                   "neg %k[var];"
+                   "add %%ecx, %k[var];" /* var = -delta + wqv->used */
+
+                   "sub %[var], %%rsp;"  /* Adjust %rsp down to make room */
+                   "mov %%rsp, %%rdi;"   /* Copy from wqv->stack, into the stack */
                    "jmp .L_wq_resume;"
-                   :
-                   : "S" (wqv->stack), "D" (wqv->esp),
-                     "c" ((char *)get_cpu_info() - (char *)wqv->esp)
+                   : "=D" (tmp), [var] "=&r" (tmp)
+                   : "S" (wqv->stack), "c" (wqv->used), "a" (wqv->used),
+                     "[var]" (get_cpu_info())
                    : "memory" );
     unreachable();
 }

From patchwork Mon Jul 18 07:18:25 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12920919
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu,
 Juergen Gross, Dario Faggioli
Subject: [PATCH 5/5] xen/wait: Remove VCPU_AFFINITY_WAIT
Date: Mon, 18 Jul 2022 08:18:25 +0100
Message-ID: <20220718071825.22113-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220718071825.22113-1-andrew.cooper3@citrix.com>
References: <20220718071825.22113-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

With the waitqueue logic updated to not use an absolute stack pointer
reference, the vCPU can safely be resumed anywhere.

Remove VCPU_AFFINITY_WAIT completely, getting rid of two domain crashes,
and a logical corner case where resetting the vcpu with an outstanding
waitqueue would crash the domain.
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Juergen Gross
CC: Dario Faggioli
---
 xen/common/domain.c     |  2 --
 xen/common/sched/core.c |  4 +---
 xen/common/wait.c       | 23 -----------------------
 xen/include/xen/sched.h |  1 -
 4 files changed, 1 insertion(+), 29 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 618410e3b257..323b92102cce 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1428,8 +1428,6 @@ int vcpu_reset(struct vcpu *v)
     v->is_initialised = 0;
     if ( v->affinity_broken & VCPU_AFFINITY_OVERRIDE )
         vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
-    if ( v->affinity_broken & VCPU_AFFINITY_WAIT )
-        vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_WAIT);
     clear_bit(_VPF_blocked, &v->pause_flags);
     clear_bit(_VPF_in_reset, &v->pause_flags);

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f689b55783f7..cff8e59aba7c 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1610,12 +1610,10 @@ void watchdog_domain_destroy(struct domain *d)
 /*
  * Pin a vcpu temporarily to a specific CPU (or restore old pinning state if
  * cpu is NR_CPUS).
- * Temporary pinning can be done due to two reasons, which may be nested:
+ * Temporary pinning can be done for a number of reasons, which may be nested:
  * - VCPU_AFFINITY_OVERRIDE (requested by guest): is allowed to fail in case
  *   of a conflict (e.g. in case cpupool doesn't include requested CPU, or
  *   another conflicting temporary pinning is already in effect.
- * - VCPU_AFFINITY_WAIT (called by wait_event()): only used to pin vcpu to the
- *   CPU it is just running on. Can't fail if used properly.
  */
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
 {

diff --git a/xen/common/wait.c b/xen/common/wait.c
index 4f1daf650bc4..bd6f09662ac0 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -127,16 +127,6 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)

     ASSERT(wqv->used == 0);

-    /* Save current VCPU affinity; force wakeup on *this* CPU only. */
-    if ( vcpu_temporary_affinity(curr, smp_processor_id(), VCPU_AFFINITY_WAIT) )
-    {
-        gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
-        domain_crash(curr->domain);
-
-        for ( ; ; )
-            do_softirq();
-    }
-
     /*
      * Hand-rolled setjmp().
      *
@@ -187,7 +177,6 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
 static void __finish_wait(struct waitqueue_vcpu *wqv)
 {
     wqv->used = 0;
-    vcpu_temporary_affinity(current, NR_CPUS, VCPU_AFFINITY_WAIT);
 }

 void check_wakeup_from_wait(void)
@@ -201,18 +190,6 @@ void check_wakeup_from_wait(void)
     if ( likely(!wqv->used) )
         return;

-    /* Check if we are still pinned. */
-    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
-    {
-        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
-        domain_crash(curr->domain);
-
-        /* Re-initiate scheduler and don't longjmp(). */
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        for ( ; ; )
-            do_softirq();
-    }
-
     /*
      * Hand-rolled longjmp().
      *

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b9515eb497de..ba859a4abed3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -223,7 +223,6 @@ struct vcpu
     /* VCPU need affinity restored */
     uint8_t affinity_broken;
 #define VCPU_AFFINITY_OVERRIDE    0x01
-#define VCPU_AFFINITY_WAIT        0x02

     /* A hypercall has been preempted. */
     bool hcall_preempted;