From patchwork Thu Aug 20 09:13:19 2020
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 11725945
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Jim Mattson, Joerg Roedel, Paolo Bonzini, Borislav Petkov, Thomas Gleixner, "H. Peter Anvin", Vitaly Kuznetsov, Ingo Molnar, Wanpeng Li, Sean Christopherson, linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Subject: [PATCH 0/8] KVM: nSVM: on-demand nested state allocation + nested guest state caching
Date: Thu, 20 Aug 2020 12:13:19 +0300
Message-Id: <20200820091327.197807-1-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Hi!

This patch series implements caching of the whole nested guest vmcb, as opposed to the current code, which caches only its control area. This allows us to avoid a race in which the guest changes the data area while we are verifying it.

In addition, I implemented on-demand allocation of the nested state area to compensate somewhat for the memory usage increase from this caching. This way, at least guests that don't use nesting won't waste memory on nested state.

Patches 1-3 are just refactoring, patches 4-5 implement the on-demand nested state, and patches 6-8 implement the caching of the nested state. Patch 8 is more of an optimization and can be dropped if you prefer.

The series was tested with various nested guests, in one case even with an L3 running, but note that due to an unrelated issue, migration with a nested guest running didn't work for me with or without this series. I am currently investigating this.
Best regards,
	Maxim Levitsky

Maxim Levitsky (8):
  KVM: SVM: rename a variable in the svm_create_vcpu
  KVM: nSVM: rename nested 'vmcb' to vmcb_gpa in few places
  KVM: SVM: refactor msr permission bitmap allocation
  KVM: x86: allow kvm_x86_ops.set_efer to return a value
  KVM: nSVM: implement ondemand allocation of the nested state
  SVM: nSVM: cache whole nested vmcb instead of only its control area
  KVM: nSVM: implement caching of nested vmcb save area
  KVM: nSVM: read only changed fields of the nested guest data area

 arch/x86/include/asm/kvm_host.h |   2 +-
 arch/x86/kvm/svm/nested.c       | 296 +++++++++++++++++++++++---------
 arch/x86/kvm/svm/svm.c          | 129 +++++++-------
 arch/x86/kvm/svm/svm.h          |  32 ++--
 arch/x86/kvm/vmx/vmx.c          |   5 +-
 arch/x86/kvm/x86.c              |   3 +-
 6 files changed, 312 insertions(+), 155 deletions(-)