From patchwork Wed Apr 3 21:21:50 2019
X-Patchwork-Submitter: Fenghua Yu
X-Patchwork-Id: 10884499
X-Patchwork-Delegate: johannes@sipsolutions.net
From: Fenghua Yu
To: "Thomas Gleixner", "Ingo Molnar", "Borislav Petkov", "H Peter Anvin",
    "Dave Hansen", "Paolo Bonzini", "Ashok Raj", "Peter Zijlstra",
    "Kalle Valo", "Xiaoyao Li", "Michael Chan", "Ravi V Shankar"
Cc: "linux-kernel", "x86", linux-wireless@vger.kernel.org,
    netdev@vger.kernel.org, kvm@vger.kernel.org, Fenghua Yu
Subject: [PATCH v6 04/20] x86/split_lock: Align x86_capability to unsigned long to avoid split locked access
Date: Wed, 3 Apr 2019 14:21:50 -0700
Message-Id: <1554326526-172295-5-git-send-email-fenghua.yu@intel.com>
In-Reply-To: <1554326526-172295-1-git-send-email-fenghua.yu@intel.com>
References: <1554326526-172295-1-git-send-email-fenghua.yu@intel.com>

set_cpu_cap() calls locked BTS and clear_cpu_cap() calls locked BTR to
operate on the bitmap defined in x86_capability. Locked BTS/BTR accesses
a single unsigned long location. In 64-bit mode, the location is at:

  base address of x86_capability + (bit offset in x86_capability / 64) * 8

Since the base address of x86_capability may not be aligned to unsigned
long, that single unsigned long location may cross two cache lines, and
accessing the location with locked BTS/BTR instructions will trigger #AC.

To fix the split lock issue, align x86_capability to the size of unsigned
long so that the location is always within one cache line.
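
As an illustration of the location computation above (not part of the
patch), here is a minimal user-space sketch; the base offset and bit
number are made-up values, chosen so that the accessed unsigned long
straddles a cache-line boundary:

#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE_SIZE 64UL

int main(void)
{
	/* Pretend the bitmap starts 60 bytes into a cache line: 4-byte
	 * aligned (enough for __u32) but not 8-byte aligned. */
	uintptr_t base = 4096 + 60;
	unsigned int bit = 5;		/* arbitrary bit number */

	/* The unsigned long that a locked BTS/BTR would operate on. */
	uintptr_t loc = base + (bit / 64) * 8;

	uintptr_t first = loc / CACHE_LINE_SIZE;
	uintptr_t last = (loc + sizeof(unsigned long) - 1) / CACHE_LINE_SIZE;

	printf("access %s a cache-line boundary\n",
	       first == last ? "does not cross" : "crosses");
	return 0;
}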
Changing x86_capability[]'s type to unsigned long would also fix the issue,
because x86_capability[] would then be naturally aligned to unsigned long,
but that needs additional code changes. So choose the simpler solution of
setting the array's alignment to the size of unsigned long.

Signed-off-by: Fenghua Yu
---
 arch/x86/include/asm/processor.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2bb3a648fc12..7c62b9ad6e5a 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -93,7 +93,9 @@ struct cpuinfo_x86 {
 	__u32			extended_cpuid_level;
 	/* Maximum supported CPUID level, -1=no CPUID: */
 	int			cpuid_level;
-	__u32			x86_capability[NCAPINTS + NBUGINTS];
+	/* Aligned to size of unsigned long to avoid split lock in atomic ops */
+	__u32			x86_capability[NCAPINTS + NBUGINTS]
+				__aligned(sizeof(unsigned long));
 	char			x86_vendor_id[16];
 	char			x86_model_id[64];
 	/* in KB - valid for CPUS which support this call: */
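
For reference, a minimal user-space sketch (not part of the patch) of what
the annotation buys; __aligned(x) in the kernel is shorthand for the GCC
aligned attribute used below, and the struct, field names and array size
are hypothetical stand-ins for the relevant part of cpuinfo_x86:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#define N_CAPS 20	/* arbitrary stand-in for NCAPINTS + NBUGINTS */

struct caps_plain {	/* hypothetical layout before the patch */
	int		cpuid_level;
	uint32_t	caps[N_CAPS];
	char		vendor_id[16];
};

struct caps_aligned {	/* hypothetical layout after the patch */
	int		cpuid_level;
	uint32_t	caps[N_CAPS] __attribute__((aligned(sizeof(unsigned long))));
	char		vendor_id[16];
};

int main(void)
{
	/* On a typical LP64 build: 4 without the attribute, 8 with it. */
	printf("caps offset without attribute: %zu\n",
	       offsetof(struct caps_plain, caps));
	printf("caps offset with attribute:    %zu\n",
	       offsetof(struct caps_aligned, caps));
	return 0;
}

With the array forced onto an unsigned long boundary, the word addressed by
the formula in the changelog can no longer straddle a cache line.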