From patchwork Thu Mar 28 12:06:56 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 10874835
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich,
 Roger Pau Monné
Date: Thu, 28 Mar 2019 13:06:56 +0100
Message-Id: <20190328120658.11083-5-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190328120658.11083-1-jgross@suse.com>
References: <20190328120658.11083-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 4/6] xen: don't free percpu areas during suspend

Instead of freeing percpu areas during suspend and allocating them
again when resuming, keep them. Only free an area in case a cpu didn't
come up again when resuming.

It should be noted that there is a potential change in behaviour, as
the percpu areas are no longer zeroed out during suspend/resume. While
I have checked the called cpu notifier hooks to cope with that, there
might be some well-hidden dependency on the previous behaviour. OTOH a
component not registering itself for cpu down/up and expecting to see
a zeroed percpu variable after suspend/resume is kind of broken
already.
And the opposite case, where a component is not registered to be
called for cpu down/up and is not expecting a percpu variable suddenly
to be zero due to suspend/resume, is much more probable, especially as
the suspend/resume functionality seems not to be tested that often.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/arch/x86/percpu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/percpu.c b/xen/arch/x86/percpu.c
index 8be4ebddf4..5ea14b6ec3 100644
--- a/xen/arch/x86/percpu.c
+++ b/xen/arch/x86/percpu.c
@@ -76,7 +76,8 @@ static int cpu_percpu_callback(
         break;
     case CPU_UP_CANCELED:
     case CPU_DEAD:
-        if ( !park_offline_cpus )
+    case CPU_RESUME_FAILED:
+        if ( !park_offline_cpus && system_state != SYS_STATE_suspend )
             free_percpu_area(cpu);
         break;
     case CPU_REMOVE:
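
For illustration, below is a minimal sketch of a component of the kind
the commit message describes: it keeps per-cpu state and registers a
cpu notifier so it can reset that state when a cpu is brought up
fresh, while leaving the preserved value alone on resume. All "my_*"
names are hypothetical and not part of this patch; register_cpu_notifier(),
DEFINE_PER_CPU(), per_cpu() and the CPU_*/SYS_STATE_* values are
existing Xen interfaces, but the exact check used to detect the resume
path is an assumption, not something this patch mandates.

/*
 * Hypothetical example only -- all "my_*" names are made up for
 * illustration and do not exist in the tree.
 */
#include <xen/cpu.h>
#include <xen/init.h>
#include <xen/kernel.h>
#include <xen/notifier.h>
#include <xen/percpu.h>

static DEFINE_PER_CPU(unsigned long, my_counter);

static int my_cpu_callback(
    struct notifier_block *nfb, unsigned long action, void *hcpu)
{
    unsigned int cpu = (unsigned long)hcpu;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        /*
         * Reset only when the cpu comes up outside the resume path:
         * with this patch the preserved percpu area still holds the
         * pre-suspend value on resume, and we want to keep it.
         */
        if ( system_state != SYS_STATE_resume )
            per_cpu(my_counter, cpu) = 0;
        break;
    }

    return NOTIFY_DONE;
}

static struct notifier_block my_cpu_nfb = {
    .notifier_call = my_cpu_callback
};

static int __init my_percpu_init(void)
{
    register_cpu_notifier(&my_cpu_nfb);
    return 0;
}
__initcall(my_percpu_init);

A component structured like this sees consistent per-cpu state whether
or not a suspend/resume cycle happened in between, which is the
behaviour the commit message argues is the sane default.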