From patchwork Sat Jul 13 01:38:54 2024
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 13732272
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Dave Hansen, Thomas Gleixner, Paolo Bonzini, Borislav Petkov,
    x86@kernel.org, linux-kernel@vger.kernel.org, Sean Christopherson,
    Ingo Molnar,
    "H. Peter Anvin", Maxim Levitsky
Subject: [PATCH 0/2] Fix for a very old KVM bug in the segment cache
Date: Fri, 12 Jul 2024 21:38:54 -0400
Message-Id: <20240713013856.1568501-1-mlevitsk@redhat.com>

Hi,

Recently, while trying to understand why the pmu_counters_test selftest
sometimes fails when run nested, I stumbled upon a very interesting and
old bug:

It turns out that KVM caches guest segment state, but this cache has no
protection against concurrent use. This usually works because the cache is
per-vCPU and should only be accessed by the vCPU thread. There is an
exception, however: if full preemption is enabled in the host kernel, the
vCPU thread can be preempted, for example during vmx_vcpu_reset.

vmx_vcpu_reset resets the segment cache bitmask and then initializes the
segments in the VMCS. If the vCPU is preempted in the middle of this code,
kvm_arch_vcpu_put is called, which reads SS's AR bytes to determine whether
the vCPU is in kernel mode, and thereby caches the old value.

Later, vmx_vcpu_reset sets SS's AR field in the VMCS to the correct value,
but the cache still contains an invalid value, which can then leak, for
example via KVM_GET_SREGS. In particular, the KVM selftests do KVM_GET_SREGS
followed by KVM_SET_SREGS, passing the broken SS AR field as-is, which leads
to a VM entry failure.

This is not a nested-specific issue; I was able to reproduce it on bare
metal, but due to timing it happens much more often nested. The only
requirement is that full preemption is enabled in the kernel that runs the
selftest.

pmu_counters_test reproduces this issue well because it creates lots of
short-lived VMs, but as noted above the issue is not related to the PMU.

To fix this issue, I wrapped the places that write the segment fields with
preempt_disable/preempt_enable. This is not an ideal fix; other options are
possible. Please tell me if you prefer one of these instead:

1. Getting rid of the segment cache. I am not sure how much it helps these
   days - this code is very old.

2. Using a read/write lock - IMHO the cleanest solution, but it might also
   affect performance.

3. Making kvm_arch_vcpu_in_kernel not touch the cache and instead do a
   vmread directly. This is a shorter solution but probably less
   future-proof.

Best regards,
	Maxim Levitsky

Maxim Levitsky (2):
  KVM: nVMX: use vmx_segment_cache_clear
  KVM: VMX: disable preemption when writing guest segment state

 arch/x86/kvm/vmx/nested.c |  7 ++++++-
 arch/x86/kvm/vmx/vmx.c    | 22 ++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.h    |  5 +++++
 3 files changed, 29 insertions(+), 5 deletions(-)
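
For reference, the core of the preempt_disable/preempt_enable approach looks
roughly like the sketch below. The names follow the existing vmx.c code
(vmx_segment_cache_clear, __vmx_set_segment), but the body is a simplified
illustration of the idea, not the literal patch hunks:

static void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
{
	/* Invalidate all cached segment fields. */
	vmx->segment_cache.bitmask = 0;
}

static void __vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
			      int seg)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/*
	 * Clear the cache and write the new segment state as one
	 * non-preemptible block, so that a preemption-triggered
	 * kvm_arch_vcpu_put() cannot read back (and cache) the stale
	 * in-VMCS SS.AR value between the clear and the vmwrites.
	 */
	preempt_disable();
	vmx_segment_cache_clear(vmx);
	/* ... vmcs_write*() calls that update the segment fields ... */
	preempt_enable();
}

The same pattern applies to the other places that clear the cache and then
update segment state, e.g. the nested VMX code touched by patch 1.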