From patchwork Mon Sep 12 23:38:44 2022
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 12974210
Date: Mon, 12 Sep 2022 16:38:44 -0700
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", "Rafael J. Wysocki", Pavel Machek, Andrew Cooper,
    degoede@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Daniel Sneddon,
    antonio.gomez.iglesias@linux.intel.com
Subject: [PATCH 0/3] Check enumeration before MSR save/restore
X-Mailing-List: linux-pm@vger.kernel.org

Hi,

This patchset fixes the "unchecked MSR access error" [1] seen during S3 resume.

Patch 1/3 adds a feature bit for MSR_IA32_TSX_CTRL.
Patch 2/3 adds a feature bit for MSR_AMD64_LS_CFG.
Patch 3/3 checks the corresponding feature bit before adding any speculation
control MSR to the list of MSRs to save/restore (a rough sketch of the
approach is at the end of this mail).

[1] https://lore.kernel.org/lkml/20220906201743.436091-1-hdegoede@redhat.com/

Pawan Gupta (3):
  x86/tsx: Add feature bit for TSX control MSR support
  x86/cpu/amd: Add feature bit for MSR_AMD64_LS_CFG enumeration
  x86/pm: Add enumeration check before spec MSRs save/restore setup

 arch/x86/include/asm/cpufeatures.h |  2 ++
 arch/x86/kernel/cpu/amd.c          |  3 +++
 arch/x86/kernel/cpu/tsx.c          | 30 +++++++++++++++---------------
 arch/x86/power/cpu.c               | 23 ++++++++++++++++-------
 4 files changed, 36 insertions(+), 22 deletions(-)

base-commit: 80e78fcce86de0288793a0ef0f6acf37656ee4cf

Tested-by: Hans de Goede
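
For reference, the idea behind patch 3/3 is roughly as follows. This is only
a sketch, not the patch itself: the new feature-bit names
(X86_FEATURE_MSR_TSX_CTRL, X86_FEATURE_LS_CFG_SSBD) stand in for whatever
patches 1/3 and 2/3 actually define, the MSR/feature pairing shown is not
the complete list, and it assumes the existing msr_build_context() helper in
arch/x86/power/cpu.c:

/*
 * Sketch only: pair each speculation control MSR with the feature bit that
 * enumerates it, and only add the MSR to the save/restore context when the
 * CPU actually reports that feature. Feature-bit names are placeholders.
 */
static const struct msr_enumeration {
	u32 msr_no;
	u32 feature;
} msr_enum[] = {
	{ MSR_IA32_SPEC_CTRL, X86_FEATURE_MSR_SPEC_CTRL },
	{ MSR_IA32_TSX_CTRL,  X86_FEATURE_MSR_TSX_CTRL },	/* new bit, patch 1/3 */
	{ MSR_AMD64_LS_CFG,   X86_FEATURE_LS_CFG_SSBD },	/* new bit, patch 2/3 */
};

static void pm_save_spec_msr(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
		if (boot_cpu_has(msr_enum[i].feature))
			msr_build_context(&msr_enum[i].msr_no, 1);
	}
}

With a check like this, CPUs that do not enumerate a given MSR never issue
the rdmsr/wrmsr for it across suspend/resume, which is what was triggering
the unchecked MSR access warning in [1].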