From patchwork Tue Nov  7 14:56:32 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449044
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com
Subject: [PATCH v17 066/116] KVM: TDX: restore user ret MSRs
Date: Tue, 7 Nov 2023 06:56:32 -0800
X-Mailer: git-send-email 2.25.1
X-Mailing-List: kvm@vger.kernel.org

From: Isaku Yamahata

Several user-return MSRs are clobbered on TD exit.  Restore those values
after TD exit and before returning to ring 3.  Because TSX_CTRL requires
special treatment, this patch doesn't address it.
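
The restore leans on KVM's deferred user-return MSR machinery rather than
issuing a WRMSR on every exit.  Below is a minimal, self-contained sketch of
that idea, assuming a simplified per-MSR cache; the structure, helper names,
and the host value are illustrative stand-ins, not the actual
kvm_user_return_update_cache()/kvm_on_user_return() implementation:

/*
 * Sketch only: defer the MSR restore until the thread returns to ring 3.
 */
#include <stdint.h>
#include <stdio.h>

struct uret_msr {
	uint32_t msr;	/* MSR index */
	uint64_t host;	/* value user space (ring 3) expects */
	uint64_t curr;	/* what the hardware register currently holds */
};

/* Stand-in for a WRMSR instruction. */
static void fake_wrmsr(uint32_t msr, uint64_t val)
{
	printf("WRMSR 0x%x <- 0x%llx\n", (unsigned)msr, (unsigned long long)val);
}

/*
 * TD exit resets the MSR to a fixed default.  Instead of writing the MSR
 * immediately, only the cached "curr" value is refreshed, so the restore
 * can be deferred until the thread actually returns to user space.
 */
static void update_cache_on_td_exit(struct uret_msr *m, uint64_t defval)
{
	m->curr = defval;
}

/* Runs once, right before returning to ring 3. */
static void on_user_return(struct uret_msr *m)
{
	if (m->curr != m->host) {
		fake_wrmsr(m->msr, m->host);
		m->curr = m->host;
	}
}

int main(void)
{
	/* MSR_SYSCALL_MASK-like example; 0x20200 is the TD-exit default. */
	struct uret_msr m = { .msr = 0xc0000084, .host = 0x47700, .curr = 0x47700 };

	update_cache_on_td_exit(&m, 0x20200);	/* TD exit clobbered the MSR */
	on_user_return(&m);			/* host value written back lazily */
	return 0;
}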
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/tdx.c | 43 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d5a43803fadb..fbc3a1920f79 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -509,6 +509,28 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	 */
 }
 
+struct tdx_uret_msr {
+	u32 msr;
+	unsigned int slot;
+	u64 defval;
+};
+
+static struct tdx_uret_msr tdx_uret_msrs[] = {
+	{.msr = MSR_SYSCALL_MASK, .defval = 0x20200 },
+	{.msr = MSR_STAR,},
+	{.msr = MSR_LSTAR,},
+	{.msr = MSR_TSC_AUX,},
+};
+
+static void tdx_user_return_update_cache(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++)
+		kvm_user_return_update_cache(tdx_uret_msrs[i].slot,
+					     tdx_uret_msrs[i].defval);
+}
+
 static void tdx_restore_host_xsave_state(struct kvm_vcpu *vcpu)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
@@ -601,6 +623,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	tdx_vcpu_enter_exit(tdx);
 
+	tdx_user_return_update_cache();
 	tdx_restore_host_xsave_state(vcpu);
 
 	tdx->host_state_need_restore = true;
@@ -1815,6 +1838,26 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 		return -EINVAL;
 	}
 
+	for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++) {
+		/*
+		 * Check that the MSRs in tdx_uret_msrs can be saved/restored
+		 * before returning to user space.
+		 *
+		 * this_cpu_ptr(user_return_msrs)->registered isn't checked
+		 * because the registration is done at vcpu runtime by
+		 * kvm_set_user_return_msr().  This runs while setting up the
+		 * CPU feature, before any vcpu has run, so registered is
+		 * still false.
+		 */
+		tdx_uret_msrs[i].slot = kvm_find_user_return_msr(tdx_uret_msrs[i].msr);
+		if (tdx_uret_msrs[i].slot == -1) {
+			/* If any MSR isn't supported, it is a KVM bug */
+			pr_err("MSR %x isn't included by kvm_find_user_return_msr\n",
+			       tdx_uret_msrs[i].msr);
+			return -EIO;
+		}
+	}
+
 	max_pkgs = topology_max_packages();
 	tdx_mng_key_config_lock = kcalloc(max_pkgs, sizeof(*tdx_mng_key_config_lock),
 					  GFP_KERNEL);
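
As a side note on how the hunks above fit together at run time: the slot for
each MSR is resolved once at hardware-setup time, and each TD exit then only
refreshes the cached value for that slot.  The stand-alone sketch below models
that split; find_uret_slot() and update_cache() are hypothetical analogues of
kvm_find_user_return_msr() and kvm_user_return_update_cache(), the MSR indices
are the architectural ones, and 0x20200 is the default taken from the patch:

/*
 * Sketch only: setup-time slot lookup vs. per-exit cache refresh.
 */
#include <stdint.h>
#include <stdio.h>

#define ARRAY_LEN(a)	(int)(sizeof(a) / sizeof((a)[0]))

/* Table owned by the common user-return machinery. */
static const uint32_t uret_msr_list[] = {
	0xc0000081,	/* MSR_STAR */
	0xc0000082,	/* MSR_LSTAR */
	0xc0000084,	/* MSR_SYSCALL_MASK */
	0xc0000103,	/* MSR_TSC_AUX */
};
static uint64_t uret_curr[ARRAY_LEN(uret_msr_list)];	/* cached values */

/* Map an MSR index to its slot; -1 if the MSR isn't in the table. */
static int find_uret_slot(uint32_t msr)
{
	for (int i = 0; i < ARRAY_LEN(uret_msr_list); i++)
		if (uret_msr_list[i] == msr)
			return i;
	return -1;
}

/* Record the value the hardware holds after TD exit for a given slot. */
static void update_cache(int slot, uint64_t defval)
{
	uret_curr[slot] = defval;
}

struct td_uret_msr {
	uint32_t msr;
	int slot;
	uint64_t defval;
};

static struct td_uret_msr msrs[] = {
	{ .msr = 0xc0000084, .defval = 0x20200 },	/* MSR_SYSCALL_MASK */
	{ .msr = 0xc0000081 },				/* MSR_STAR */
	{ .msr = 0xc0000082 },				/* MSR_LSTAR */
	{ .msr = 0xc0000103 },				/* MSR_TSC_AUX */
};

int main(void)
{
	/* Hardware setup: resolve slots once; a miss would be a bug. */
	for (int i = 0; i < ARRAY_LEN(msrs); i++) {
		msrs[i].slot = find_uret_slot(msrs[i].msr);
		if (msrs[i].slot == -1) {
			fprintf(stderr, "MSR %x has no user-return slot\n",
				(unsigned)msrs[i].msr);
			return 1;
		}
	}

	/* TD exit: only the cache is refreshed; WRMSRs are deferred. */
	for (int i = 0; i < ARRAY_LEN(msrs); i++)
		update_cache(msrs[i].slot, msrs[i].defval);

	printf("cached SYSCALL_MASK default: 0x%llx\n",
	       (unsigned long long)uret_curr[msrs[0].slot]);
	return 0;
}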